Published on February 3, 2013 By Frogboy In Business

Full-priced Stardock title releases of the last 5 years, as measured on Metacritic.

[graph: Metacritic scores of the releases listed above]

One of these is not like the others.


Comments (Page 1)
on Feb 03, 2013

Even if the anomaly is ignored, it's a downward trend overall.

 

on Feb 03, 2013

Can't expect every game to get 90+; all the scores but WoM are good.

on Feb 04, 2013

Lord Xia
Can't expect every game to get 90+; all the scores but WoM are good.

Sure, the individual scores are good, but the fact that they are trending downward over time should be of concern.

Is it because consumer expectations are changing? Is it technical? Is it staffing levels (workload)? Skill-set distribution? Etc.

I'm not familiar with the software industry, but these are the kinds of things I would consider where I work.

on Feb 04, 2013

I think GalCiv2 Twilight being the highest-rated game speaks volumes about the rating system. I am still waiting patiently for Stardock to implement fixes to allow the AI to use all the race-specific tech. How do you get off selling a single-player game whose AI can't use the major feature advertised? Your top-rated game was a scam.

on Feb 04, 2013

I've got some experience with statistics (I had to take it in college and did mediocre in it). This isn't a good graph for showing a "trend": the scale should go from 0 to 100, and there are too few scores. Also, reality check: getting a review score in the 90s is exceptionally rare, and you simply can't treat that as the standard against which every other release is compared. The first game got a 95, so all the others should be 90+? Prepare for disappointment. You should be proud of anything in the green, 76+.

on Feb 04, 2013

Actually, I'm not so sure the trend necessarily indicates Stardock is doing worse. You have tried a number of genres, and you've had to learn some new things along the way. Not sticking to a formulaic franchise means that the games you come up with are fresh. What you also need to factor in is the post-release support of those games, which makes them far better by listening to the players.

You need to consider that those scores are averages (right?), and therefore there could be a lot of variation behind them, so what you would really want is to ask Metacritic nicely to send you a scatter plot for your games, unless you know a way to scour that data yourself. A game that has three reviews with high marks and two with low marks obviously gets a lower average than one with five high-mark reviews, but unless you know how high and how low, it's not much to go on (a rough sketch of what I mean is at the end of this post).

Personally I would rate Fallen Enchantress a bit higher than either Dark Avatar or Twilight of the Arnor, and I played a lot of hours of those.  The reason I say that is because although there are still some things to work out with the game, the AI is a bit further on in development.  Whereas in GC2 I can crank it up to Tough (which is the highest difficulty before the AI starts to get bonuses) and I'll know that I can win, in FE at Challenging I can regularly get my butt handed to me.  So you must be doing something right there.
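Here's a quick Python sketch to make that concrete. The numbers are made up purely for illustration, not the actual reviews of any Stardock game: both sets average to 80, but the spread behind that single aggregate number is completely different.

```python
# Hypothetical review scores for two imaginary games -- illustration only.
# Both average exactly 80, but the spread behind that average differs a lot.
from statistics import mean, stdev

game_a = [82, 79, 81, 78, 80]   # five reviews, all close together
game_b = [95, 93, 91, 60, 61]   # three raves and two pans

for name, scores in (("Game A", game_a), ("Game B", game_b)):
    print(f"{name}: mean={mean(scores):.1f}, stdev={stdev(scores):.1f}, scores={scores}")

# Both means print as 80.0, but Game B's standard deviation is roughly ten
# times Game A's -- the single aggregate number hides that spread entirely.
```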

on Feb 04, 2013

I wonder about the demographics of the responses (age, sex, etc.) and how each demographic rated the individual games (granted similar sample sizes).

on Feb 04, 2013

Lord Xia
I've got some experience with statistics (I had to take it in college and did mediocre in it). This isn't a good graph for showing a "trend": the scale should go from 0 to 100, and there are too few scores. Also, reality check: getting a review score in the 90s is exceptionally rare, and you simply can't treat that as the standard against which every other release is compared. The first game got a 95, so all the others should be 90+? Prepare for disappointment. You should be proud of anything in the green, 76+.

Assuming all games were rated under the same criteria, the scale is fine, and a 100-point scale wouldn't be necessary to "normalize" the data. I'd say 8 data points is sufficient to assess a trend (a statistically valid conclusion and a trend are two different things). The graph covers 5 years of activity. Just how many more would you need? In my business, 6 data points (6 months) is enough to have confidence in a trend (I don't mean confidence in the statistical sense).

But again, I'm not focusing on the individual scores. When you see a graph heading south, it's time to try to figure out why (there's a rough sketch of what I mean by that at the end of this post).

My guess is consumer expectations. A systematic risk that's mostly out of SD's control.
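To put a number on "heading south", here's a minimal Python sketch. The eight scores are made up, standing in for the real Metacritic values, and it just fits a least-squares line and reports the slope. A clearly negative slope is what a downward trend looks like numerically, even if eight points is far too few to call it statistically significant.

```python
# Least-squares slope through eight hypothetical release scores (made-up data).
# A negative slope in "points per release" is what a graph heading south means.
releases = [1, 2, 3, 4, 5, 6, 7, 8]              # release order, oldest first
scores   = [92, 88, 90, 85, 55, 84, 82, 80]      # hypothetical review scores

n = len(releases)
mean_x = sum(releases) / n
mean_y = sum(scores) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(releases, scores))
         / sum((x - mean_x) ** 2 for x in releases))

print(f"slope = {slope:.2f} points per release")  # negative => trending down
```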

on Feb 04, 2013

Two things:
First, the "real" score of a game on Metacritic basically sits somewhere between the aggregate score of the reviewers and the aggregate score of the user base. For most of Stardock's games the two are in line with one another, and both are reasonably good. Take a look at some other "Triple A" titles and tell me if you can say the same. Stardock doesn't have hundreds of millions to sink into "advertising heh-heh campaigns".

Secondly, find some of the best-regarded RTSes [or any other game] that have ever been released and ask yourself how they would fare in reviewing circles if they were released today.

All this graph really suggests to me is that people like Stardock's sci-fi games more than their fantasy games. Another interesting data point you might consider adding to that graph is the trend of Metacritic scores overall during the same time period.

 

 

on Feb 04, 2013

Seems to me the only thing you can learn from that is that the majority of the review scores used by Metacritic were pulled out of the reviewers' hats.

I'm not entirely sure that's a good thing for you guys, though, because I'd have rated most of those releases lower. And hey, 5-ish for E:WoM is extremely generous. At release it wasn't just a terrible game, it wasn't in a playable state. It should have gotten a 1.

on Feb 04, 2013

I also think reviewers are being more critical now than in past years. For a long stretch, certain magazines were giving AAA titles super high scores that were entirely undeserved. At the time I could only guess the scores were inflated because the magazines were under pressure to keep advertising dollars.

on Feb 04, 2013

Simsum
Seems to me the only thing you can learn from that is that the majority of the review scores used by Metacritic were pulled out of the reviewers' hats.

Well, since he used the Metacritic data in his OP, it seems he believes the data is meaningful.

I think we all agree that WoM was an anomaly...didn't need a graph to see that.

So I assumed he made the post for a deeper reason than just an opportunity to quote a song from Sesame Street.

on Feb 04, 2013

Personally, I think it is because of PlayStations, Wiis, Xboxes, etc. No one plays games on the PC much anymore. My son even got rid of all of his PC games years ago (when he was 15) and has never even thought of buying a PC game since he got his PlayStation.

on Feb 04, 2013

It's the economy.

on Feb 04, 2013

Expectation versus reputation can account for the 'trend', which can't really be considered statistically meaningful anyway, as the sample is too small to be truly reliable...
