Research Methods Matter: The Case of Coffee and Productivity

As an undergraduate studying psychology, I dreaded my required research methods classes. Years later, as a graduate student instructor, I saw the same lack of enthusiasm in my own students. That's right, I opted to teach research methods. At some point, as I became a better researcher and scientist, it became clear to me how important research methods are and how interesting they can be.

Unlike many of the topics I learned in school, I can use research methods every day. One example from the news last week is a researcher who managed to sneak a bogus study about chocolate and weight loss past reviewers who didn't apply their methods savvy. Another recent example is this cute, quirky, and decidedly non-scientific study of how coffee consumption affects productivity in the workplace. The authors of the article concluded the following:

The results demonstrate that drinking coffee does affect the work rate and that regular consumption of coffee increases the motivation, mood and overall productivity in the office.

Not so fast. Anyone well-trained in research methods can spot several issues that make this conclusion impossible to draw given the study’s particulars. Here are just a few:

1. The study was not blinded. People were asked to drink coffee as usual one week and to abstain from it during a different week. Participants knew exactly which condition they were in at any given time, which may have led them to interpret or report their motivation, mood, and productivity differently. Blinding a study, so that people aren't sure what is being manipulated, leads to stronger results.

2. All dependent variables were self-reported. As a psychologist, I firmly believe in the validity of self-report for studying certain phenomena. Productivity, however, can be measured objectively, and it wasn't here. Given that participants knew whether or not they were consuming coffee, having only self-reported data further weakens the conclusions. Actually measuring people's work output would have made for a stronger set of results (see the sketch after this list).

3. Time and condition are confounded. In week 1, all participants abstained from coffee. In week 2, all participants drank coffee as usual. This means that other events during week 1 or week 2 might also have affected the results. Maybe week 1 coincided with the end of a major project, so the reduced productivity reflected people having little work to do. Maybe week 2 coincided with bonuses being paid, so everyone's mood was elevated. A better design would have assigned half of the workers to drink coffee in week 1 and the other half in week 2, avoiding this confound and allowing a cleaner interpretation of the results (see the sketch after this list).

4. The study was conducted by a coffee vending company. Look, I don’t know for sure, but there’s a pretty good chance that Strong Vend, the company conducting the study, wanted to see the results come out a certain way. Even researchers who have a strong opinion about what the results could or should be can conduct a good study, but there are steps they usually take to make sure that happens. One of those steps is having a research assistant who is blind to the study hypotheses (see #1) actually run the research. Someone who knows the “right” results to a human subjects study could subtly bias people to respond a certain way. Even if Strong Vend hired neutral researchers to survey the workers in the study, my guess is that those people knew they were working for a coffee company and could figure out what the “right” study results might be.
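To make points 2 and 3 concrete, here is a minimal sketch in Python of what a stronger version of the study might look like: counterbalance which week each worker drinks coffee, and use an objectively logged outcome instead of self-report. Everything here is hypothetical and purely illustrative; the worker names, task counts, and "tasks closed in a ticketing system" measure are my own inventions, not the vending company's design or data.

```python
import random
import statistics

random.seed(1)  # reproducible illustration

participants = [f"worker_{i}" for i in range(1, 21)]  # 20 hypothetical workers

# Counterbalancing: randomly assign half of the workers to drink coffee in week 1
# and abstain in week 2, and the other half to do the reverse, so that calendar
# week and coffee condition are no longer confounded.
random.shuffle(participants)
half = len(participants) // 2
schedule = {p: ("coffee", "no_coffee") for p in participants[:half]}
schedule.update({p: ("no_coffee", "coffee") for p in participants[half:]})

# Objective outcome instead of self-report: imagine the number of tasks each
# worker closed in the ticketing system each week (numbers invented here).
tasks_per_week = {p: (random.randint(15, 30), random.randint(15, 30)) for p in participants}

# Within-subject comparison: find each worker's coffee week and no-coffee week,
# then take the difference in tasks completed.
diffs = []
for p in participants:
    week1_condition, _ = schedule[p]
    week1_tasks, week2_tasks = tasks_per_week[p]
    coffee_tasks = week1_tasks if week1_condition == "coffee" else week2_tasks
    no_coffee_tasks = week2_tasks if week1_condition == "coffee" else week1_tasks
    diffs.append(coffee_tasks - no_coffee_tasks)

print(f"mean difference (coffee - no coffee): {statistics.mean(diffs):.2f} tasks")
print(f"std dev of differences: {statistics.stdev(diffs):.2f} tasks")
```

Because each worker serves as their own control and the order of conditions is randomized across workers, a difference between coffee and no-coffee weeks is much harder to explain away by whatever else happened to occur in a particular calendar week.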

These are just a few very basic issues with this study, which, by the way, I don't think was truly intended as a contribution to the scientific literature, and so probably doesn't deserve this kind of scrutiny from me. But it's worth illustrating how a basic understanding of good scientific research methods can make you a better consumer of information and news.

Everybody is selling something. Don’t you want to be in a position to sniff out what’s worth buying?