Saturday, May 19, 2012

What Will You Pay To Play?

Last week, Kill Screen published “Will Work For Fun,” in which Mike Thomsen argues that the free-to-play model has changed the way we assign value to games. I don’t think anyone can dispute that. But Mike’s essay also contains a number of other assertions I’m not sure hold water, particularly around the nature of play. For the purposes of this post, though, I’m going to focus on his assumptions about value.

On Depreciation
In discussing how the value of games changes, Mike presents the following example:

You might have waited in line at a midnight launch to get Madden NFL 06 a few years ago and today find it in a bargain bin for $1. The game is unchanged, and its inherent value as an experience is identical to what it was then. What has changed is the perception of its value, driven by corporations whose business model relied on there constantly being a new game every year, one which necessarily obviated last year's model. We were not simply buying games in the days of brontosaurus-sized publishers like EA, we were paying for the privilege of participating in a zeitgeist that was largely created by corporate branding.

I think there’s a misrepresentation of the concept of depreciation here. I do some (very basic) accounting in my day job, so I’ll do my best to explain as I understand it.

The classic example of depreciation is a car. It’s often said that a new car loses a big chunk of its value as soon as it’s driven off the lot. The “inherent value” of the car is unchanged, but since it becomes a “used car” the moment it leaves the lot, its market value immediately decreases. So Mike has a salient point that market value is strongly tied to people’s perception of value. Ask a collector: an item is worth only as much as someone is willing to pay for it.

It’s easy to understand depreciation with the example of a car. (For the moment, let’s assume value judgments about the car’s style or brand are irrelevant.) Over time, the car wears down due to normal usage. It’s naturally not worth as much as a new car when it’s got several thousand miles on it, because its functionality has decreased. The car’s decreased value is attributable to its decreased utility. To be sure, people’s perception of the car’s decreased utility also contributes to its decreased value, but there is also physical, tangible evidence of its degradation.
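Since I brought up my day-job accounting, here’s the textbook version of the concept: under straight-line depreciation, an asset sheds an equal slice of its value each year of its useful life until it reaches its salvage value. A minimal sketch in Python, with made-up numbers for a hypothetical car:

```python
def straight_line_depreciation(cost, salvage_value, useful_life_years):
    """Annual depreciation expense under the straight-line method."""
    return (cost - salvage_value) / useful_life_years

def book_value(cost, salvage_value, useful_life_years, years_elapsed):
    """Remaining book value after a given number of years."""
    annual = straight_line_depreciation(cost, salvage_value, useful_life_years)
    return max(cost - annual * years_elapsed, salvage_value)

# A hypothetical $30,000 car expected to be worth $5,000 after 10 years
# loses $2,500 of book value per year:
print(book_value(30000, 5000, 10, 3))  # 22500.0
```

The catch, and the reason Mike’s point about perception has teeth, is that this book value is an accounting fiction: the market may decide the three-year-old car is worth far less (or more) than the ledger says.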

It’s trickier to apply that reasoning to videogames, where physical degradation is not the contributing factor. As Mike points out, the copy of Madden NFL 06 in the bargain bin is not $1 because the disc is scratched, but because the market’s perception of its value has decreased. But he also makes a few assertions here that I think are flawed.

First, Mike claims that the game’s “inherent value” is unchanged. Is Madden NFL 06 the same game in 2012 as it was six years ago? To some extent, yes. But it is also fundamentally different in ways that compromise its utility. The multiplayer servers, for example, are almost certainly shut down by this point, removing a major feature. Even if the servers weren’t shut down, very few people would be playing this game; its value is partly dependent on how many people are playing it, not necessarily in the sense of the “zeitgeist” Mike mentions, but in practical terms of finding opponents. The game’s value would have also plummeted had a new Xbox been released; technological advances are a driving force in depreciation. Similarly, as new standards for game design and performance become more prevalent, games designed with older paradigms become less attractive. In other words, the game disc itself is not the entire product; the product also includes the support services associated with it, along with its player base. Games do lose value over time, just not because of physical use.

Second, Mike attributes the decreased value of the game to the publisher’s business model, which pushes out a new version every year. I think Mike might be implying that the changes to gameplay between Madden games of one year and the next are too incremental to justify their cost. If so, I’d tend to agree. But he’s ignoring here that one of the key drivers for producing a new NFL game (or any sports game) every year is that the sport itself has changed. In the case of football, changing rosters (particularly in the age of free agency) can completely alter the landscape of the league, and thus the sport. My own Buffalo Bills, for example, are suddenly a contender with a few big player acquisitions they’ve made this offseason. The Madden games are designed and billed as simulations (or at least reflections) of the sport; if the sport changes, the videogames must change, or they lose their value very quickly. The experience is certainly not “identical” to what it was years ago.

Finally, in his last two sentences above, Mike seems to be implying that EA intentionally contributes to the depreciation of its own games by releasing a new one every year. That’s possible, since EA is certainly prioritizing new game sales. (So is every publisher, in case you haven’t noticed.) But if that’s true, I suspect it’s the exception rather than the rule. No company, even EA, wants to be perceived as a purveyor of easily disposable products that lose their value quickly. Think of how many car commercials emphasize the car’s durability. Sure, a videogame is not a car, but if the game doesn’t retain a significant portion of its value over time, the publisher loses out on profits. Hence the popularity of trends designed to ensure games retain value over time, like DLC.

Some games depreciate in value simply because they suck. Once word gets out that a game is awful, its price often suffers a sharp decline. You could find a copy of Rogue Warrior in the bargain bin a scant month after release because it was a terrible game that received negative reviews. Retailers couldn’t get away with charging $60 because nobody would pay that much for such a lousy product. Yet great games that have been out for months or even years still retain a significant portion of their value, because consumers agree they do. This is the case for both heavily-marketed AAA releases and smaller cult favorites. Branding absolutely plays a big part in market value. But I think Mike assumes consumers, and thus prices, are more susceptible to marketing than they actually are. If they were, Homefront and Red Faction: Armageddon would still be selling for $34.95 instead of $14.95.

What Will You Pay To Play?
I’m wary of conflating “price” with “value,” since, as the rise of fantastic free and cheap games has made abundantly clear, the two concepts are not identical. I’m equally wary of coming off like some kind of free-market nut by emphasizing the market’s role in determining value. But I do find Mike’s claim that corporate business practices, including marketing, are the key drivers of value hard to swallow.

I do think there’s a related point to be made, though, about pricing. I’m not alone in thinking that the default $60 price point for new releases smacks uncomfortably of record companies’ collusion to fix CD prices in the 1990s. I’ve long thought that more flexibility in pricing could help alleviate the industry’s woes, including piracy and the dreaded “used game problem.” (For an interesting discussion on “middle ground” pricing, check out this episode of the GWJ Conference Call.)

The free-to-play model is undeniably burgeoning, but it’s still experiencing growing pains. Mike’s point that free-to-play games lay bare the explicitly capitalist underpinnings of videogames is well-taken. I have a few games on my iPad that contain no fewer than three different types of in-game currency, all of which can be bypassed by spending real-world currency. There is something indelibly icky about this kind of design, the kind that makes every interaction feel like a transaction. It’s also often implemented in clumsy, inelegant ways that distract from the play experience.

But free-to-play is not going away anytime soon, though I do believe it has to innovate or else suffer a slow, painful death. Eventually a critical mass of consumers will tire of its crasser tactics, or some new technology will render these types of games obsolete, and the model will prove unsustainable. Advertising revenue will undoubtedly keep it afloat longer than it otherwise would last; one of the reasons console games can’t as easily exploit the free-to-play model is that it’s much less complex to push ads to mobile platforms. For now, we’ll have to get used to it.

Thursday, January 5, 2012

Fair Play

Every time Jenn Frank publishes a new piece, she proves why she continues to be my favorite games writer. Aside from being the most skilled personal essayist in this space, she also has an uncanny knack for teasing broader themes out of details. She's done it again, dammit, in her latest blog post, "On games of chance and 'cheating.'" Please go read it now, if you haven't already, since this post is in response to Jenn's. Incidentally, so is this image:


[Quick aside: I almost didn't write this post because Jenn is one of those writers I affectionately call "hamstringers"—writers whose stuff is so good, you feel hamstrung even thinking about writing anything yourself. In retrospect, that vaguely porcine nickname is probably not very flattering to anyone. Sorry, Jenn. Here's another Gosling.]

Anyway, I won't bother recapping Jenn's article here because you really should've read it already. But I do want to respond to one idea that clicked with me. I apologize if this is a little rambling, but here goes.

The theme of "fairness" in gaming is endlessly interesting to me. That's probably because, as Jenn implies, how we judge the "justness" of players' various approaches to the rules of games—which are microcosms, after all—can say a lot about our beliefs about much larger issues in life.

Jenn relates the story of Jeopardy! contestant (not Super Bowl-winning running back) Roger Craig, whose intense preparation, including statistical analysis of the most likely question topics, helped him earn the highest single-day winnings total in the show's long history. Jenn wonders if Craig's obsessive study of the game crossed the boundaries of "fair play." She decides it didn't, but does arrive at this thought: "we socially dictate such a fine line between ‘knowing the real rules’ and ‘cheating.'"

That story reminded me of this fascinating piece from Esquire on Terry Kniess, the first contestant on The Price Is Right to win both Showcases with a perfect bid. A former weatherman, Kniess took a job doing security for a casino. Studying patrons, he became an expert in pattern recognition: reading the ways players would exploit the games to their advantage—not by "cheating," necessarily, but by identifying and capitalizing on opportunities and weaknesses. He eventually moved to the other side of the blackjack table, teaching himself to count cards—the quintessential example of an exploit that's as close to cheating as you can get without actually cheating.
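For the curious, the most common card-counting system, Hi-Lo, is simple enough to sketch in a few lines. (This is the standard textbook scheme, not anything specific from the Esquire piece.)

```python
# Hi-Lo card counting: low cards (2-6) count +1, middle cards (7-9)
# count 0, and high cards (10, face cards, aces) count -1. A positive
# running count means the cards left in the shoe skew high, which
# favors the player.
HI_LO_VALUES = {
    "2": 1, "3": 1, "4": 1, "5": 1, "6": 1,
    "7": 0, "8": 0, "9": 0,
    "10": -1, "J": -1, "Q": -1, "K": -1, "A": -1,
}

def running_count(cards_seen):
    """Sum the Hi-Lo values of every card dealt so far."""
    return sum(HI_LO_VALUES[card] for card in cards_seen)

print(running_count(["2", "5", "6", "K", "9"]))  # 2
```

No rule of blackjack is broken here: the counter is just doing arithmetic on information the game deals face-up, which is exactly the kind of pattern recognition Kniess was paid to spot.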

In 2008, Kniess and his wife turned their attention to The Price Is Right, obsessively studying past shows to memorize the retail prices of common items. When he finally appeared as a contestant, he pulled off an incredible victory.

That winning bid was incredible in the truest sense of the word: it defied credibility. Especially given the preponderance of winning contestants on this particular episode, host Drew Carey and the crew were convinced the fix was in. From the Esquire article, here's Carey's reaction:
"Everybody thought someone had cheated. We'd just fired Roger Dobkowitz, and all the fan groups were upset about it. I thought, Fuck, they just fucking fucked us over. Somebody fucked us over. I remember asking, 'Are we ever going to air this?' And nobody could see how we could. So I thought the show was never going to air. I thought somebody had cheated us, and I thought the whole show was over. I thought they were going to shut us down, and I thought I was going to be out of a job."

And just over there, just on the other side of that curtain, was twice-perfect Terry Kniess, still dancing to the music. "I was like, Fuck this guy," Carey says. "When it came time to announce the winner, I thought, It's not airing anyway. So fuck him."
But the episode did air, and Carey's lackluster reaction to the perfect bid made more headlines than the bid itself. Carey suspected Kniess was colluding with another obsessive The Price Is Right fan, former contestant Ted Slauson, who happened to have been seated with Kniess in the audience. But former host Bob Barker, the article says, would have crowned Kniess king of the game for this "studio magic." In this view, Kniess wasn't a charlatan, but "only...a disciplined man, a weatherman who had spent a lifetime being accurate, and who had also been a little bit lucky, and who had won a game that was made to be broken."

That's my emphasis there. A game that was made to be broken: is that what most games are, really? The Price Is Right, like all game shows (and most videogames), is a power fantasy. We root for the contestants to win because on some level we place ourselves in their shoes. We want them to break the game, if "breaking" means exploiting the rules to their advantage. We like to believe that the game can be beaten, that with determination, effort, and intelligence, we can overcome. It's Man against the System. With guts and smarts, you can make something of yourself, no matter who you are or where you come from. You make your own luck. That's the American Dream.

Terry Kniess and Roger Craig are men who made their own luck. Through their intense preparation, these players exploited patterns in each game's system to reduce its advantage over them. Yet while we admire their hard work, we also tend to be suspicious of this kind of player. They've ruined the real fantasy, which is that we can win without obsessive preparation or superior talent. We can simply get lucky. If people didn't believe that, they'd never watch game shows. No wonder Drew Carey was pissed.

Jenn discusses players like Kniess and Craig who seek out and master the "secret rules" the rest of us don't consider, those patterns that are invisible to most of us, but once recognized, can be exploited to great advantage. Is it ethical to do so? Back to Jenn's point: there's a fine line between "mastery" and "cheating." It's hard to argue either Kniess or Craig was "cheating": their regimens were self-directed, required persistence and intelligence, and were intensely focused on the games' victory conditions. No outside manipulation of the rules or content of the games was necessary. It's easy to see them as heroes in their narratives.

That's why the case of self-proclaimed Survivor "villain" Jon Dalton, a.k.a. Jonny Fairplay, is so interesting here. I haven't seen much Survivor myself, but from what I gather contestants are encouraged, even expected, to make deceit a major tactic. Although he never won the game, Dalton became notorious for his psychological manipulation of the other contestants. Most egregiously, he lied about his grandmother dying to gain sympathy votes. Is this morally ugly? Probably. But it's certainly much closer to mastery than to cheating. Through study of previous seasons of the show, Dalton perceived an opportunity to exploit a weakness, and he capitalized on it. That weakness just happened to be a quality we value, empathy.

The irony of Dalton's self-appointed nickname was not lost on the audience and pop culture at large. He was a compelling player precisely because he appeared to straddle that fine line between mastery and cheating. To him, certainly, his tactics exemplified "fair play": he stayed within the rules of the game, but manipulated the odds in his favor, as any good player should. From what I know of him, which is blessedly little, Dalton seems to be a pretty repellent person on a personal level, and it's not surprising he's played up his "villain" image to extend his brief moment in the spotlight. Some of the vilification is no doubt attributable to him actually being a douchebag. But what's relevant to Jenn's discussion is how problematic his exploitation of the game was compared to Kniess's or Craig's. Dalton manipulated people, and even though that was well within the rules of the game, he was vilified for it. Kniess and Craig merely mastered the rules of an inanimate system. A system, in both cases, that features prominent sponsorships from faceless corporations hawking retail goods, entities no one feels guilty about cheating.

So back to Jenn's thought: "we socially dictate such a fine line between ‘knowing the real rules’ and ‘cheating.'" It's the social part of it that gums up the works, right? As children we're taught that cheating is wrong, that playing fair is the moral choice. So naturally, we immediately turn our energy to figuring out how to work the system to our advantage within the confines of "fair play." We seek out ways to bend the unspoken understandings that inform games. You'll get thrown out of a casino for counting cards not because it violates a rule of blackjack, but because it violates a convention of play. It dispels the illusion of luck, of randomness, that makes the game enticing. It also pisses off the house, which is the only entity that is allowed to fleece anyone in this setting, dammit.

The perception of "fairness" is the key here. A child who loses a game will often cry "No fair!", as if the act of losing itself were an injustice. His notion of fairness is tied to his desired outcome. It is a transparently and often hilariously subjective thing.

Yet as we mature, we learn how to temper our, um, tempers. We gain a broader understanding of both the letter and the spirit of the rules games set forth. We realize the necessity of a level playing field in which rules are clearly defined and not subject to change based on our mood or recent sugar intake. We accept that playing games isn't always about winning, but about sharing a fun experience. In order for a game to be fun, we believe, it must strive for balance. And players have an obligation to approach that in good faith. When we perceive that they don't—even if they in fact have—we get upset. A contract has been broken.

Consider "spawn camping" in FPSes. It's dickish to camp out near a spawn point and waste other players as soon as they appear, but for years this behavior was perfectly within the confines of games' rules. Modern FPSes often have code in place to prevent it, and rightly so—it's more fun for everyone that way. But if the game allows you to do something, even something as dickish as spawn camping or lying about your grandma's death to get sympathy votes, is it really cheating? After all, the game is made to be broken.
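When I say modern FPSes "have code in place" to prevent spawn camping, it's usually nothing fancier than a brief invulnerability window after respawn. A toy sketch of the idea, with names and numbers of my own invention rather than from any real engine:

```python
import time

SPAWN_PROTECTION_SECONDS = 3.0  # grace period after each respawn

class Player:
    def __init__(self):
        self.spawn_time = time.monotonic()

    def respawn(self):
        self.spawn_time = time.monotonic()

    def is_protected(self, now=None):
        """True while the spawn-protection window is still active."""
        if now is None:
            now = time.monotonic()
        return now - self.spawn_time < SPAWN_PROTECTION_SECONDS

def apply_damage(player, amount, now=None):
    """Return the damage actually dealt; zero while the player is protected."""
    return 0 if player.is_protected(now) else amount
```

The rule doesn't make spawn camping impossible; it just makes the exploit not worth the camper's time.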

That's why good QA testers are some of the most valuable employees a videogame studio has. They stretch the limits of the game not only on a technical level, but also in terms of which behaviors it should and should not allow. They break the game and break it again until the opportunities for exploitation, or at least exploitation that's not fun, are minimized. Players still find ways to manipulate the systems, of course. But how much of that is "breaking the game," and how much of it is strategy and skill? I wouldn't accuse StarCraft of being broken just because Boxer could trounce me.

Instead, I think when we call a game "broken" we mean that some facet of its ruleset is unbalanced. The level of challenge is out of whack. Something feels biased one way or the other, so much so that exploiting the imbalance feels closer to cheating than to mastery. In games, as in life, we crave that ideal level playing field. Even if we know it's often a myth.