As you may have heard (since it appears to have become the hyped-up internet story of the weekend), the Proceedings of the National Academy of Sciences (PNAS) recently published a study done by Facebook, with an assist from researchers at UCSF and Cornell, in which they directly tried to manipulate (and apparently succeeded in manipulating) the emotions of 689,003 Facebook users for a week. The participants -- without realizing they were part of the study -- had their news feeds "manipulated" so that they showed all good news or all bad news. The idea was to see if this made the users themselves feel good or bad. Contradicting some other research, which found that looking at photos of your happy friends made you sad, this research apparently found that happy stuff in your feed makes you happy. But what's got a lot of people up in arms is the other side of that coin: seeing a lot of negative stories in your feed appears to make people mad.
There are, of course, many different ways to view this, and the immediate response from many is "damn, that's creepy." Even the editor of the study admits to the Atlantic that she found it questionable:
"I was concerned," she told me in a phone interview, "until I queried the authors and they said their local institutional review board had approved it—and apparently on the grounds that Facebook apparently manipulates people's News Feeds all the time... I understand why people have concerns. I think their beef is with Facebook, really, not the research."
Law professor James Grimmelmann digs deeper into both the ethics and legality of the study and finds that there's a pretty good chance the study broke the law
, beyond breaking standard research ethics practices. Many people have pointed out, as the editor above did, that because Facebook manipulates its news feed all the time, this was considered acceptable and didn't require any new consent (and Facebook's terms of service say that they may use your data for research). However, Grimmelmann isn't buying it. He points to the official government policy
on research on human subjects, which has specific requirements, many of which were not met.
While those rules apply to universities and federally funded research, many people assumed they didn't apply to Facebook as a private company. Except... this research involved two universities... and it was federally funded (in part). [Update: Cornell has revised its original story, which claimed federal funding, to now say the study did not receive outside funding.] The rest of Grimmelmann's rant is worth reading as well, as he lays out in great detail why he thinks this is wrong.
While I do find the whole thing creepy, and think that Facebook probably could have -- and should have -- gotten more informed consent for this, a big part of this is still blurry. The lines aren't as clear as some people are making them out to be. People are correct in noting that Facebook changes its news feed all the time, and of course Facebook is constantly tracking how that impacts things. So there's always some "manipulation" going on -- though usually it's to try to drive greater adoption, usage and (of course) profits. Is it really that different when it's done just to track emotional well-being?
As Chris Dixon notes, basic A/B testing is common for lots of sites, and he's unclear on how this is all that different. Of course, many people pointed out that manipulating someone's emotions to make them feel bad is (or at least feels) different, leading him to point out that plenty of entertainment offerings (movies, video games, music) manipulate our emotions as well -- though Dixon's colleague Benedict Evans points out that there's a sort of informed consent when you "choose" to go to see a sad movie. Though, of course, a possible counter is that there are plenty of situations in which emotions are manipulated without such consent (think: advertising). In the end, this may just come down to what people expect.
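For readers unfamiliar with what "basic A/B testing" looks like in practice, here is a minimal sketch of the standard technique: deterministically hashing a user ID into a "control" or "treatment" bucket. All names here (the function, the experiment label) are hypothetical illustrations, not anything from Facebook's actual systems or the PNAS study.

```python
import hashlib

def assign_variant(user_id: str, experiment: str,
                   variants=("control", "treatment")) -> str:
    """Deterministically bucket a user into an experiment variant.

    Hashing (experiment name + user ID) gives a stable, roughly uniform
    assignment without storing any per-user state: the same user always
    sees the same variant of a given experiment.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % len(variants)
    return variants[bucket]

# A site might then render a different feed-ranking algorithm per bucket:
variant = assign_variant("user42", "feed_ranking_v2")
```

The point of the sketch is that, mechanically, showing different users different feeds is routine plumbing; the controversy is over what the variants were designed to do to people, not over the bucketing itself.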
If anything, what I think this really does is highlight how much Facebook manipulates the news feed. This is something very few people seem to think about or consider. Facebook's news feed has always been something of a black box (which is a reason I prefer Twitter's setup, where I get the self-chosen firehose rather than some algorithm -- or researchers' decisions -- picking what I get to see). And thus, in the end, while Facebook may have failed to get the level of "informed consent" necessary for such a study, it may, in turn, have done a much better job of accidentally "informing" a lot more people about how its news feed gets manipulated. Whether or not that leads more people to rely on Facebook less, well, perhaps that will be the subject of a future study...