The latest moral outrage to hit the internet centred on news this week that Facebook had been emotionally manipulating people via the news feed, denying them positive content to see if it made them sad, and vice versa. Angry tweets and blogs abounded, the digital equivalent of grabbing pitchforks and torches.
The crux of the argument was that the whole thing was a little bit creepy. You can serve ads targeted by behaviour, sure, but trying to make me happy or sad is apparently a step too far.
Facebook seemed a bit bemused by the whole thing, seeing it as just another way to “improve services”. It seems it just wanted to make sure your mates’ moaning about the football scores wasn’t going to drive you away (and prevent it from serving you up banner impressions). Positive reinforcement can only work in its favour.
Some people, particularly those in the academic world, claimed the issue was one of consent. Yet almost the whole point of marketing is to create an emotional reaction in people without their consent. We look to surprise and delight people. Charities try to shock us, or create compassion, to inspire us to give or volunteer. This is a manipulative business – and it’s not just marketing. Do you optimise any of your web properties based on user behaviour? In the words of entrepreneur and wag Marc Andreessen: “Run a website? Make any changes based on measurements? Congratulations, you’re running a psychology experiment.”
So why the outrage? Well, first off, Facebook seems to have done it on a pretty grand scale – 700,000 users producing millions of interactions and datasets: that’s a chunk of information that most scientists would kill to have access to. Only now, in our digital age, is this scale of experimentation possible – and, with a few exceptions, it’s possible only for a few private corporations: the Amazons, Googles and Facebooks.
Second, ethics aren’t in step with tech. There’s a whole other article here, really: what does privacy mean in the post-Snowden age? Can anyone who uses a smartphone and regularly logs into Facebook or its ilk expect privacy to be defined in the same way as it was 50, or even 10, years ago? When we willingly give so much up to privately owned corporations, why do we expect blue-eyed altruism in return?
So: scale, and a moral or ethical violation, possibly (or possibly not) rooted in our increasingly unreal expectations of what companies should know about us. These things feed into the fundamental reason why everyone's so creeped out: Facebook shattered the illusion that we are in control. We scrolled down our timelines, chuckling at photos and sighing at platitudes, and forgot that someone else had their hand on the tap.
Jon Davie is managing director of Zone