What Facebook's social experiment means for you
Facebook has hit the headlines for all the wrong reasons: Last week it emerged that nearly 700,000 users' news feeds were deliberately manipulated to see whether doing so could change their moods. Critics say that's unethical at best and downright evil at worst, and the UK Information Commissioner's Office has announced that it will investigate whether Facebook has breached data protection legislation.
Facebook in privacy shocker. Hold the front page!
This is a bit bigger than the usual "let's move all the privacy settings and make your pics public again" changes Facebook likes to make.
Is it? What actually happened?
For a week in 2012, Facebook data scientists meddled with the news feeds of over 689,000 users as part of a study with Cornell University and the University of California, San Francisco. Some users were shown more negative content; others, more positive. Facebook then analyzed those users' own posts to see if the content they were shown had made them more positive or more negative.
Surely sites and social networks analyze user data all the time?
They do, and it's called A/B testing: you give two groups of users different versions of your content and see which is more successful. This goes beyond that, though: Facebook wasn't just observing users, but actively trying to change their emotions.
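To make the mechanics concrete, here's a minimal sketch of how a site might run a basic A/B split. Everything here is illustrative: the function names and the deterministic-hash approach are our own assumptions, not anything Facebook has described.

```python
import random

def assign_variant(user_id, split=0.5, seed=42):
    """Deterministically assign a user to variant "A" or "B".

    Seeding with the user ID means the same user always sees the
    same variant across visits (a common A/B testing requirement).
    """
    rng = random.Random(f"{seed}:{user_id}")
    return "A" if rng.random() < split else "B"

def conversion_rate(outcomes):
    """Fraction of successful outcomes (1s) in a list of 0/1 results."""
    return sum(outcomes) / len(outcomes) if outcomes else 0.0
```

In an ordinary A/B test, you'd compare `conversion_rate` between the two groups and ship whichever variant performs better. The controversy here is that the "metric" being compared wasn't clicks or sign-ups but users' emotional states.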
And that's bad because...?
It's bad because nobody was asked whether they wanted to participate in what is effectively a psychological study. Facebook does mention that it'll use your data for research in its terms and conditions, but that bit of the T&Cs wasn't added until after this experiment had already taken place.
It's arguably irresponsible too: How many of the people whose news feeds were made more negative were people with vulnerable emotional states or mental illnesses such as depression?
What did the study find?
The cheerier your feed, the cheerier your posts are likely to be, and vice versa. The more emotional the language used, the more you're likely to post; if your feed is full of fairly flat and unexciting language, you'll be less inclined to join in.
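The researchers gauged "cheeriness" by counting emotional words in posts (the published study used the LIWC word lists). A toy sketch of that idea, with made-up word lists standing in for the real dictionary:

```python
# Hypothetical word lists for illustration; the actual study used
# the much larger LIWC dictionary to classify words as emotional.
POSITIVE = {"happy", "great", "love", "wonderful"}
NEGATIVE = {"sad", "awful", "hate", "terrible"}

def emotional_tone(post):
    """Crude tone score: positive-word count minus negative-word count."""
    words = [w.strip(".,!?") for w in post.lower().split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    return pos - neg  # >0 leans positive, <0 leans negative
```

Score enough posts this way, before and after tweaking what a user sees, and you can measure whether the tone of their feed shifts the tone of what they write.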
How have people reacted to the news of the study?
The reaction of Erin Kissane (director of content at OpenNews) on Twitter was typical: "Get off Facebook. Get your family off Facebook. If you work there, quit. They're f---ing awful."
What does Facebook say about it?
"Mumble mumble mumble. Look! A duck!"
No. In a statement Facebook said: "This research was conducted for a single week in 2012 and none of the data used was associated with a specific person's Facebook account. We do research to improve our services and to make the content people see on Facebook as relevant and engaging as possible."
In a public Facebook post, study co-author Adam Kramer wrote: "Our goal was never to upset anyone... in hindsight, the research benefits of the paper may not have justified all of this anxiety."
So it has apologized?
Kinda. Sorta. Not really. Chief operating officer Sheryl Sandberg made one of those non-apology apologies so beloved of celebrities and politicians: the study was "poorly communicated, and for that communication we apologize. We never meant to upset you." Translation: we're not sorry we did it, but we're sorry that you're annoyed about it.
Is this a one-off?
No. As the Wall Street Journal reports, Facebook's data scientists get up to all kinds of tomfoolery — including locking a whole bunch of people out of Facebook until they proved they were human. Facebook knew they were: it just wanted to test some anti-fraud systems.
Is there a conspiracy theory?
Is AOL a CIA front? Of course there is. Cornell University, which worked with Facebook on the study, originally said that the US Army's Army Research Office helped fund the experiment. That has now been corrected to say that the study "received no external funding," but the internet is awash with tales of military involvement.
That isn't as far-fetched as it sounds. The US created a "Cuban Twitter" to foment unrest in Cuba, and as Glenn Greenwald reports, security services are all over social media: "western governments are seeking to exploit the internet as a means to manipulate political activity and shape political discourse. Those programmes, carried out in secrecy and with little accountability (it seems nobody in Congress knew of the 'Cuban Twitter' programme in any detail) threaten the integrity of the internet itself."
Facebook is no stranger to manipulating public opinion. In 2010, it encouraged an estimated 340,000 additional people to get out and vote by subtly changing the banners on their feeds. As Laurie Penny writes in the New Statesman, that gives Facebook enormous power: "What if Facebook, for example, chose to subtly alter its voting message in swing states? What if the selected populations that didn't see a get-out-and-vote message just happened to be in, say, majority African-American neighbourhoods?"
Is it time to make a tinfoil hat?
Probably not. A few changes to news feeds is hardly the distillation of pure evil, and it's clear that the study is acting as a lightning rod for many people's loathing of Facebook. However, the controversy should be a reminder that Facebook is no mere carrier of your information: it actively intervenes in what you see, using algorithms to present you with what it thinks will encourage you to spend the most time using the service.
That's very different from rival services such as Twitter, which show you everything and let you decide what matters and what doesn't. Compare your news feed in chronological view (if you can find it) with the Top Stories view: the difference is dramatic.
Here's a conspiracy theory we can get behind: As with most free services on the internet, Facebook's users are its product and its customers are advertisers. Would Facebook deliberately manipulate the emotional content of your news feed to make you more receptive to advertisers' messages? And if it did, how would you know?