We, the games-buying public, are a very demanding bunch; nothing less than top-quality entertainment will suffice. Quite right too: if we’re going to spend forty quid on a brand new game, then we expect forty quid’s worth of entertainment. How that value is measured is a completely different matter altogether; it’s a bit like asking how long the first level in Fallout 3 is. There simply is no yardstick to measure by. Each and every one of us will have differing views, if only sometimes by a small degree. So, the question is: what do review scores really mean?
Now, I’m relatively new to the reviewing scene. Sure, I’ve played video games for many years and know what I like and don’t like, but quite often in the past I’ve looked to reviewers for guidance when it comes to that tough ‘should I or shouldn’t I’ purchase decision. Usually there’s some sort of gauge or measurement at the end of a review that numerically sums up the reviewer’s experience of the game; you’ve probably seen it presented as: ‘Overall 7/10′. What exactly does that mean? Is it only 70% of a game? If they made it 30% bigger, would it get the cherished 100%? Of course not! There are so many factors that go into a game that it’s virtually impossible to define it simply as a number. As you know, here at XGZ we avoid a points-scoring system on our reviews; like it or hate it, that’s the way we do it. I’ll be the first to admit that I queried the logic behind not scoring a review when I first started writing for XGZ. I mean, how can you possibly know how good a game is if it doesn’t have a score? Are you going to go out and buy a game that could possibly be only a 7/10, or even a 6/10? Just consider that number: yes, it’s a perceived measurement of quality, but what is the unit of measurement?
Besides playing and writing about games I, like many, have a career that brings in the cash to allow me to afford to play games, and do that living stuff as well. My slightly less interesting occupation is that of a design engineer, and it has absolutely nothing to do with video games whatsoever. One thing it does have, however, is hard and fast units of measurement: length, weight, speed and so on, all tangible figures that quantify a known characteristic of an object. Now, if I specified my designs using a method of measurement similar to the one applied in games reviews, I’d be in big trouble. Imagine designing a high-precision component based on a series of rudimentary opinions; it just wouldn’t work, nothing would fit together and I’d be out of a job. Yes, I know it’s a bit of a tenuous link, but I think it illustrates the measurement issue. And if that isn’t proof enough for you, just pick a run-of-the-mill game and find as many reviews as you can on the ’net. The chances are the disparity between the high and low scores will equal a good-sized barn door in engineering terms.
So what’s the answer to refining the results of a vast array of differing review scores? You could always take an average of all of the reviews… Surely a ‘meta’ score will give an accurate and unbiased answer? Well, yes and no, in my opinion. Let’s say all the reviews give a game top marks. For a title that scores this well, you’re likely to find that, providing you don’t hate the genre, you’ll get yourself a decent game. Likewise, a title that is universally slated and criticised is prone to be a big pile of digital poo. Okay, so we’ve established a safe haven for those that rake in scores predominantly at the extremes of the scale. But what about all those left somewhere in the middle? There are an awful lot of those ‘fair to middling’ games, far more than those that take unanimous top marks. What now? You may have one review that scores a game as low as 40% and, at the other end of the scale, another that scores it big. Who’s right? Providing the site or magazine that gives the high marks isn’t being ‘influenced’ by external sources, shall we say, then they’re probably both right. Ah! Houston, we have a problem; our ‘average’ review has just crashed the ‘meta’ system. What do we do now? Is this game really good, really bad, or somewhere in the middle as the review conglomeration indicates?
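The averaging problem above is easy to demonstrate with a little arithmetic. The scores below are made up purely for illustration (they are not real review data): one hypothetical game polarises critics, the other gets broadly similar marks, yet both end up with an identical ‘meta’ score.

```python
# Illustrative only: two made-up sets of review percentages.
polarised = [40, 45, 88, 90, 92]   # critics split into two camps
middling  = [68, 70, 71, 72, 74]   # critics broadly agree

def meta_score(scores):
    """Plain average - roughly how an aggregate 'meta' score is built."""
    return sum(scores) / len(scores)

def spread(scores):
    """Gap between the best and worst review - the 'barn door'."""
    return max(scores) - min(scores)

print(meta_score(polarised), spread(polarised))  # 71.0 52
print(meta_score(middling), spread(middling))    # 71.0 6
```

Both sets average exactly 71, but the first hides a 52-point disagreement; the single number can’t tell you which situation you’re looking at, which is precisely why the score alone is a poor buying guide.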
As a case study I’m going to put forward a game that appeared in the relative youth of the Xbox 360, one that struggled in terms of its commercial success, but is one of those ‘middling’ games I speak of. Backed by the might of Sega, From Software’s Chromehounds received numerical scores ranging from the forties (percent) to ninety and above. If you punch it into Metacritic you’ll find it gets an underwhelming average score of 71, neither top drawer nor straight-to-trade-in material. Unfortunately, and I say this because I personally rate the game very highly, many reviewers probably didn’t have enough time to extract the potential from From Software’s mech sim. I have spent well over 100 hours playing Chromehounds and, admittedly, it only lasted that long because I was involved with a regular and active ‘troop’ of online friends. However, the enjoyment that my robot-battling colleagues and I extracted from building intricate custom robot designs, and then blowing them up, was immense.
Sadly, Sega have announced that the Chromehounds servers will be closing in January 2010, leaving only the offline content available. Arguably it was the offline element that received the majority of the reviewers’ scorn, and rightly so, as it really only served as a training aid for the multiplayer component. So don’t go rushing to buy it now; the ‘game over’ switch is close to being pulled – permanently. However, this is partly the point: the negative reviews the game received were largely based on the offline material. It sounds like a lot of these reviewers received the code before the Chromehounds servers were switched on. Can those scores be considered fair?
Despite a fair majority of the early Chromehounds reviews being negative, I was one of the few who still invested forty smackers of my own hard-earned cash in the game. Why was this? Surely a misjudged purchase? Nope: I read! I looked at the content of the reviews and soon built up a fair idea of what the game entailed. Some bemoaned the slow, plodding pace of the mechs, but equally praised the scope for strategy. In pretty much any review I looked at I could find positives and negatives; however, to me it was obvious from the outset that Chromehounds offered a unique console experience. The persistent world in which the Neroimus war took place was a truly awesome beast when you considered just how many players could affect the balance of power. Three factions, each with different technology and near-infinite configurations of ’hounds, all trying to gain an advantage on the map. Fantastic!
My point is this: Chromehounds won my money, and eventually my heart, not through numerical figures at the end of a review, but by appealing to my personal interests. There were elements that were attractive to both the design engineer and the chess player in me. I only found out those details existed by reading the content of reviews; the score at the end was ultimately irrelevant. Equally, there were online friends I knew of who took the plunge and bought Chromehounds with very little research. Some were amazed by the experience, whereas others dismissed it as a huge pile of robot scrap that wasn’t a patch on the popular Mechwarrior games. Why was that, do you think? Hmmm, it could be something to do with a little human trait that makes us all different. Unlike the giant Hounds in From Software’s game, we are not robots; we all have a unique combination of preferences.
As a reviewer I have to be objective in my analysis of a game. Maybe there are things that I don’t like, but if it’s personal opinion, should I convey it? Of course I should, but always with considered reason and a well-formed argument. My experience as a gamer gives me the grounding for what I know is good and bad. Without having played games and witnessed nuances such as unresponsive controls, or poor graphics and sound, how could I ‘measure’ a game’s quality? Ah! There’s that measurement word again; measurement is surely numerical, isn’t it? So many questions and no definitive answer… You’re not going to get one, either!
The best explanation I can offer is that, fortunately for us, games are continuously evolving and becoming ever better. A top-scoring game of, say, three years ago would be unlikely to receive the same levels of acclaim if released today. On that basis, surely the ‘meta-average’ score should decrease with age? Well, it doesn’t; the score only represents a title’s general quality at the time of release. Some games may resist the march of time a little longer than others thanks to innovative and unique features. But as sure as eggs are eggs, the competition will catch up, take over, and leave the former ninety-percent-plus game in the dust.
If you’re the type of gamer who takes your purchases seriously (which, unless you’re loaded, I’m sure you are), then the chances are you look at more than one review before buying. Many will consult the likes of Metacritic to gauge the general opinion; however, just remember to take those scores with a pinch of salt and read the review content – otherwise you may just miss out on ‘your’ Chromehounds.