NowGamer

Titanfall, Servers & The Problem With Review Scores

Adam Barnes

Feature


As games become less about on-disc content and more about a service, how can the industry use review scores?

Published on Mar 13, 2014

There’s been a bit of an uproar this week. Titanfall reviews landed, but many outlets – such as IGN and Eurogamer – opted not to score their reviews until they had tested the game ‘in the wild’.

It’s a noble idea, and one that got me thinking. As a result I ended up having a long discussion with a friend of mine on the topic of review scores.

He believed that if a game – at launch – was unplayable or had issues, then the score should reflect that. His argument, it seemed, was more about pressuring developers and publishers into providing a more stable game at launch.

He wanted those in charge to be accountable for the frustration and ire we’ve all felt when a multiplayer game inevitably suffers from problems at launch.

On my side, however, I saw a review as a critique of a product: a permanent, purchasable item that – whether bought the week of launch or a year later – would still have the same mechanical gameplay elements that made it either good or bad.

My case was this: if – say – Titanfall suffered a week’s worth of server troubles as Respawn struggled to combat the high influx of traffic, but was complaint-free from then on, should our permanent score reflect something that was only a temporary flaw?

Would that be fair to the efforts of the developer?

Our review – like so many – was conducted under ‘ideal circumstances’ with LAN-quality connections so that lag was non-existent. 

There’s no reason gamers currently playing the game can’t achieve an equally good connection, and in those optimum conditions Titanfall is bloody good fun. Mechanically, it is a 9/10.

But the games industry is in a state of flux, and has been for some time. The traditional method of reviewing – or, more to the point, scoring – games isn’t necessarily the right approach anymore.

By and large, popular multiplayer games will suffer at launch. Our industry is almost unique in the sudden upsurge of network traffic that a product launch brings.

Look at any hyped, connected game of the last few years and you’ll see a large number of stories regarding botched servers, unbearable lag and other ills we’ve come to expect.

Internet traffic is tricky to predict at the best of times, and even stress-tests and intentionally overloading servers with open betas often won’t prepare developers for the flood of data that comes rushing in as the clock strikes midnight.

As more and more games become services – not products – we, as an industry, need to reconsider the forms our critique takes.

What’s the solution? Well, that I’m not so sure about.

We could take the Polygon approach, whereby scores are adjusted over time as and when the service is altered – be it server troubles, patches that resolve key problems or even the addition of new content.

Take Payday 2, for example, which received a huge patch on PS3 quite some time after launch, adding a heap of content and fixing the numerous bugs and problems the game had.

We love Payday 2 at NowGamer – even with those flaws – but having access to these improvements during review would certainly have bumped that score up.

Do we go back and re-review it? Should we adjust the score to reflect the game post-patch, or should Overkill Software be punished for releasing a game that – at launch – suffered from one too many bugs and mechanical faults?

Or what about Elder Scrolls Online or Destiny? Both games plan to extend their content over time, adding more for players and evolving over the course of their respective lifespans.

Under the traditional method, the score we give either of these games may not match the score they deserve later down the line. Would that be fair to everyone, consumers and developers alike?

Do we periodically return to games to ensure those scores remain accurate? Should we give a couple of weeks’ leeway after a networked game’s launch, to ensure a thorough test of the systems and servers in place?

Or do we scrap the scoring system entirely? It’s hard to ignore the criticism that many corners of the internet level at scoring anyway; perhaps it’s just best to cast off the system altogether.

Not forgetting that developers and publishers alike are often dependent on Metacritic – perhaps unfairly – as a means to gauge the success of a game.

The fact is, however, that Metacritic wouldn’t maintain such a level of importance if consumers didn’t rely on it. It’s a quick and easy way to find out whether the game you’re thinking of buying is generally well regarded or despised; there’s no ignoring that.

Yet surely a score is meaningless without its accompanying text?

There isn’t a clear solution, and it’s something that NowGamer – if not the industry – will be looking to resolve in future. And your opinion on the matter would help us understand which track we need to be on.

Games aren’t just games anymore. Now they’re services, and we need to appraise them as such.

It’s clear the industry needs to tackle this problem, and it’s unlikely that there will be a one-size-fits-all solution.

But as long as you can come to NowGamer and get an honest opinion on whether or not a game is for you, then we’ve done our jobs properly.

Holding back the score to test servers doesn’t seem like a permanent fix, but nor does ignoring the criticism that developers and publishers should receive for launching an unplayable game.

Fairness and honesty will always be the tenets of NowGamer, but right now there are a lot of questions that need answering – for us and for the industry as a whole.

 
