You’ve undoubtedly heard by now about Facebook’s large-scale emotion manipulation study, conducted on its users. The study, published in the Proceedings of the National Academy of Sciences of the United States of America, found that when Facebook users saw a greater concentration of negative posts in their newsfeed, they were more likely to post negative statuses themselves; the same pattern emerged for positive status updates. [This research probably also partially explains Facebook’s insistence on pushing the Top Stories sort on users regardless of their preference; the feed sort is the manipulation in a massive social science study. Which doesn’t make it any less of a violation of users’ sense of autonomy, and thus a poor motivational experience.]
The study has its problems, which I’ll get into, but the thing that really makes me angry about it is the cavalier attitude it reveals toward informed consent. Informed consent is a requirement of human subjects research. What it means is that if a person is being manipulated in any way, they must give explicit permission to the researcher to be a part of the study. The informed piece is important: It means the study participant must understand what activities will take place as part of the study, so he can make the decision to participate with full knowledge of risks and benefits. This doesn’t mean you have to tell the participant your hypotheses or the specific manipulations that will take place, but you do need to make sure he understands what he will experience. In the case of the Facebook study, informed consent might look like “I agree to have the content of my news feed determined by an algorithm that may show some types of stories more often than others.”
Facebook, of course, did not do this. They claim that informed consent was covered as part of their user agreement. Technically, they’re right–the user agreement gives Facebook permission to do all sorts of stuff with user data. [EDIT July 1 2014: A new report suggests that Facebook only updated their TOS to mention user research four months after the completion of this study, which means they no longer have this justification to rest on.] But it violates the principles of informed consent in several ways:
- It’s lengthy and not always written in plain English. Informed consent documents should be easy to understand and put the important information front-and-center.
- The user agreement is accepted upon signing up for the Facebook service. This means many users haven’t seen the document in years. Informed consent should be obtained reasonably close in time to the research itself.
- Whenever possible, participants in a research study should have the opportunity to ask clarifying questions about the procedure. That did not happen here.
- Informed consent includes clearly explaining any potential benefits or risks to participation.
- There was no mechanism provided to opt out of the research, short of de-activating the Facebook account. This effectively constitutes coercion.
- Participants were not debriefed, as Robinson Meyer points out. Debriefing occurs at the end of the study, when participants are filled in on the purpose of the study and the hypothesized outcomes. It is a critical component of the informed consent process, and almost certainly did not happen here. Hearing about the published study in the news does not constitute debriefing, by the way, so Facebook has not fulfilled its duties in this respect with the recent media blitz.
- Some sources are now suggesting that the study protocol was not reviewed by an Institutional Review Board (which frankly would not surprise me; I was quite surprised an IRB approved a protocol with such weak informed consent).
Honestly, the manipulation in this study won’t cause long-term harm to anyone who was affected. But that’s not really the point. The point is that there are professional codes of honor that were deeply violated here. The Belmont Report, a seminal document describing how researchers should treat study participants, specifically highlights “respect for persons” as one of the three cornerstones of ethical research. I certainly don’t feel Facebook has shown respect for persons here. As a professional in a field that has a long and ugly history of ethical missteps (start with the Tuskegee Syphilis Study if you’re interested), and which has worked hard to put systems in place to avoid repeating such grave errors, I take this kind of behavior from Facebook somewhat personally.
I read another comment from someone who pointed out that companies manipulate consumer experiences all the time in order to change outcomes. This is true. But when Google tests different types of search results or eBay tweaks product arrays, they do so in the name of boosting their business. They do not purport to be doing research for the peer-reviewed literature. They do not pretend to be social scientists. It is wrapping this work in the guise of an academic endeavor while flouting the attendant academic ethics that is problematic.
On to the other issues with the research, in case you’re curious what popped out to me from the initial reading:
- The researchers don’t actually measure emotion, but rather emotional expression through words. It’s not just splitting hairs–what you say may not actually or accurately reflect how you feel. It’s also not clear whether the methodology could accommodate sarcasm, jokes, or context (which was a huge challenge in building IBM’s Watson–these human forms of expression are difficult for machines); the toy scorer sketched after this list shows how easily simple word counting goes wrong.
- The results don’t account for the effect of situational cues on posting behavior. One commenter on an article I read noted that if one of her Facebook friends posts about a death, she’s not going to respond by posting a lighthearted status about her pet kitten. If a person’s news feed is filled with a particular type of story, that serves as a subtle cue about what might be appropriate to post. It does not necessarily influence the emotional experience of the user.
- It doesn’t really extend what we already know from emotional contagion research. Usually, to be published, a study has to contribute new understanding of a phenomenon. While I suppose the setting of the Facebook study is unique, the actual results don’t challenge expectations or in any way update our understanding of how people’s mood states influence each other.
- It’s a statistical trick. Generally speaking, statistically significant results in a study are a function of two things. One is the effect size, or the magnitude of the response to whatever the study manipulation is. The bigger the effect size, the more likely the result is significant. The second is the sample size, or the number of measurements in the study. If you have a big enough sample size, you can detect effect sizes that are mere blips on the radar in real life. In the Facebook study, there were hundreds of thousands of measurements taken, and an effect size that works out to one word chosen differently per thousand words written. This is not a meaningful real-world change (a short sketch after this list puts rough numbers on it).
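To make the first issue concrete, here is a toy dictionary-based scorer in the general spirit of word-count text analysis. The word lists, function name, and example statuses are all invented for illustration–this is not the study’s actual tooling:

```python
# A toy dictionary-based sentiment scorer (word lists invented for
# illustration). It counts matches against positive and negative word
# lists and has no notion of sarcasm, negation, or context.
POSITIVE = {"great", "love", "happy", "wonderful"}
NEGATIVE = {"sad", "terrible", "hate", "awful"}

def score(status: str) -> int:
    words = [w.strip(".,!?") for w in status.lower().split()]
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

print(score("What a wonderful day, I love it!"))   # +2: correctly "positive"
print(score("Oh great, my flight got cancelled"))  # +1: sarcasm scored as positive
print(score("Not sad at all, honestly"))           # -1: negation scored as negative
```

A scorer like this reads the sarcastic second example as positive and the negated third as negative–which is exactly the gap between counting emotional words and measuring emotion.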
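And for the sample-size point, here is a minimal sketch, not the study’s actual analysis: I’m assuming a made-up within-group standard deviation of 2 percentage points of positive words and a true difference of 0.1 points (one word per thousand), then running a plain two-sample t-test at two different group sizes:

```python
# A minimal sketch, not the study's analysis: how the same tiny effect
# (about one extra positive word per thousand written) looks under a
# two-sample t-test as the sample grows. All numbers are invented.
import math
from scipy import stats

effect = 0.1   # difference in % positive words between groups (1 per 1000)
sd = 2.0       # assumed within-group standard deviation

for n in (200, 300_000):                      # users per group
    se = sd * math.sqrt(2 / n)                # standard error of the difference
    t = effect / se
    p = 2 * stats.t.sf(t, df=2 * n - 2)       # two-sided p-value
    print(f"n per group = {n:>7,}: t = {t:6.2f}, p = {p:.1e}")

# n per group =     200: t ~ 0.5  -> nowhere near significant
# n per group = 300,000: t ~ 19   -> vanishingly small p-value, even though
#                                    the underlying effect is identical
```

Statistical significance here says only that the effect is unlikely to be exactly zero; it says nothing about whether the effect matters.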
It’s disappointing to me that this research was accepted for peer-reviewed publication. It sends a disheartening message about the importance of ethics in human subjects research, and it also reinforces that if you’re big enough and powerful enough, you can get away with stuff the little guys can’t (not that they should want to). It also proves yet again what John Oliver told us about net neutrality: If you want to do something evil, hide it inside something boring.
Additional perspectives on the study which I found valuable to read:
- Facebook’s Unethical Experiment by Katy Waldman
- Facebook’s Science Experiment on Users Shows the Company is Even More Powerful and Unethical Than We Thought by David Holmes
- Facebook Doesn’t Understand The Fuss About Its Emotion Manipulation Study by Kashmir Hill
- Everything We Know About Facebook’s Secret Mood Manipulation Experiment by Robinson Meyer
- As Flies to Wanton Boys by James Grimmelmann
It just occurred to me that the “most recent” feed is probably the closest thing a Facebook user has to opting out of deliberate feed manipulations–which makes the inability to permanently set that as your preference not only maddening, but unethical.