X Factor UK Social Media Monitoring: Stop counting and start analysing! (Full study)
Yesterday's flurry of X Factor infographics and studies reported a positive correlation between social media activity, viewing figures and the final result, largely based on cumulative data such as likes, followers and views. Interesting as this is, with a wealth of data publicly available on social media platforms such as Twitter, does an over-reliance upon automated quantitative stats mean we are missing out on more robust and reliable analysis?
The X Factor bandwagon?
With the X Factor now established as something of a British institution, it stands to reason that a flurry of social media activity accompanied the show each week, and that a number of social agencies and individuals would monitor this activity and attempt to draw correlations and conclusions from the data they pull. I should start by saying we were no different: we too saw the opportunity early on and began collecting data on a weekly basis using a listening tool. You could say that by working on this report since the start of the live shows we were guilty of being on the bandwagon before it existed. The key difference, however, is that we have analysed this data manually to draw robust, reliable conclusions.
Predicting the winner: The value is not in buzz or platform volume but in declared social intent
I have also seen a few studies claiming that social media predicted the winner. These claims were primarily based upon the volume of Twitter followers and 'likes' as a positive measure of propensity to vote. For me, this is almost the same as saying that the more Facebook likes you have, the more sales you will get. Whilst the stat is relevant to the overall debate, it should not be regarded as reliable proof of intent.
Measuring declared social intent for X Factor winners and weekly bottom contestants:
The simple part is setting up keyword buckets around positive intent and relevant X Factor tweets on your listening tool. It's then an ongoing task to tweak and refine before reaching that 'sweet spot' where your dashboard becomes more of a help than a spammy hindrance. The hard work then begins: manually sifting through the data this yields to rank questionable mentions. It is this time and dedication that allows us to draw clean, accurate data which can be relied upon.
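To make the approach concrete, here is a minimal sketch of the keyword-bucket step in Python. The keyword lists, contestant names and example tweets are illustrative assumptions, not the actual buckets or data from the study; a real listening tool would supply far richer queries and exports.

```python
# A minimal sketch of keyword-bucket classification, assuming tweets have
# already been exported from a listening tool as plain strings.
# All keywords, names and tweets below are illustrative assumptions.

INTENT_KEYWORDS = ["voting for", "voted for", "got my vote", "i'm voting"]
CONTESTANTS = ["little mix", "marcus collins", "amelia lily", "misha b"]

def classify_tweet(text):
    """Return (contestant, declared_intent), 'review' if ambiguous, or None."""
    lowered = text.lower()
    matched = [c for c in CONTESTANTS if c in lowered]
    if not matched:
        return None            # irrelevant to the buckets: discard
    if len(matched) > 1:
        return "review"        # ambiguous: queue for manual ranking
    has_intent = any(k in lowered for k in INTENT_KEYWORDS)
    return matched[0], has_intent

tweets = [
    "Got my vote in for Little Mix tonight!",
    "Marcus Collins was brilliant but Amelia Lily got my vote",
    "X Factor is on again already?",
]

intent_counts = {}
review_queue = []
for tweet in tweets:
    result = classify_tweet(tweet)
    if result == "review":
        review_queue.append(tweet)   # the manual sifting described above
    elif result is not None:
        contestant, intent = result
        if intent:
            intent_counts[contestant] = intent_counts.get(contestant, 0) + 1

print(intent_counts)  # {'little mix': 1}
print(review_queue)   # tweets a human still needs to rank
```

The 'review' branch is exactly the manual step described above: automation gets you a clean queue, but a human still decides what counts as declared intent.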
The dangers of over-reliance on quantitative 'buzz' data
A good example of this is the loveable scamp Frankie Cocozza. If we were to chart social buzz alone, he would actually place a close second to the overall winners, Little Mix. If you are an ITV executive this could be useful, as you could assume that overall social buzz is a good indicator that people are talking about the show. More buzz should mean higher ratings and higher ad premiums.
However, without analysing this further, you might also assume that he was a popular act and potentially safe from the boot. In fact, Cocozza, before being 'escorted from the competition', had been in the bottom two in the preceding weeks, and as his buzz grew he lost favour with the voting public. This is where mistakes can be made, and where we lose confidence in the valuable insights that effective social media monitoring can yield.
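A short sketch makes the divergence obvious. The per-contestant counts here are invented for illustration (the full study contains the real figures); the point is only that ranking by raw buzz and ranking by declared intent can order the same acts very differently.

```python
# Hypothetical per-contestant counts: (total_mentions, intent_positive_mentions).
# These numbers are illustrative only, not the study's real figures.
mentions = {
    "Little Mix":      (12000, 3100),
    "Frankie Cocozza": (11500,  400),   # high buzz, low declared intent
    "Marcus Collins":  ( 7000, 2600),
}

by_buzz   = sorted(mentions, key=lambda c: mentions[c][0], reverse=True)
by_intent = sorted(mentions, key=lambda c: mentions[c][1], reverse=True)

print("Ranked by buzz:  ", by_buzz)    # Cocozza a close second
print("Ranked by intent:", by_intent)  # Cocozza drops to last
```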
Quantitative data is still valuable but qualitative data is the opportunity
I should be clear that quantitative data is of course relevant to the social/X Factor debate. It is a good, clear indicator of an act's fan base, not to mention a good baseline for interrogation. However, quantitative data is easy to report and challenging to interrogate, given the constraints of resource and time and an over-reliance on automated data; in my opinion this is why most studies of this nature fall at the final hurdle. Social media (collectively speaking) is the greatest crowd-sourcing platform in existence. It offers a great opportunity to gain real insight into the intent of individuals and their actions within the digital environment and beyond. This information does not come easily and requires real attention. More than this, the temptation to draw conclusions from automated, non-robust data not only poses a risk to brands but ultimately damages the reputation and potential of the discipline as a whole.
Get in touch:
I have included a link to the full study on Slideshare below for those interested.