One thing coronavirus has exposed more broadly is the importance of (good) ethics in tech and design. People have been raising the alarm about potential missteps, and ways to guard against them, for years. Some of the more thoughtful books about the ethics of algorithms and tech design were published in the last few years by Cathy O’Neil, Sara Wachter-Boettcher, and Virginia Eubanks. Books don’t get written overnight; each one represents years of thinking, writing, and research on the part of the author, and none of it done in a dark cave. Yet the ethical dilemmas raised by these authors seemed to be more of an industry concern (and a niche one at that) rather than something the general public paid attention to. Continue reading How Psychology Helped Me Recognize Toxic Tech
A key part of our process in a behavior change design project is a literature review. We comb the published, peer-reviewed literature to find research that will help us understand the current project. For example, on a recent project where we wanted to design a wellness app for people on Medicaid and Medicare health plans, we looked at research on how social determinants of health (SDoH) affect access to wellness services and care, and at outcomes associated with community-based health and wellness models. What we learn from the literature review helps us shape our own research: it tells us what, if anything, we might be able to apply from previous work, the types of questions we might want to ask, and the types of solutions that have worked on similar problems. Continue reading Every Project Needs Its Own Research
The reproducibility crisis has hit psychology hard. In writing Engaged: Designing for Behavior Change, I found myself having to double-check whether some of the studies I had learned about previously were still considered valid to cite. Part of the book-writing process was a technical review, in which I asked five experts to read the manuscript and offer feedback on the accuracy and completeness of the information therein. Based on that feedback, I went through another round of re-review of the information I’d included. Continue reading Can Context-Bound Research Replicate?
OK, so maybe Stephen Colbert wrote this list of tips for The Daily Show with Jon Stewart interviewers working on field pieces and not people like me who are doing field research for less entertaining purposes. No big deal. I read this list of advice in The Daily Show (The Book): An Oral History and knew it was just as useful for my type of research as it is for theirs. This is great advice for developing a rapport with someone, getting good information, and bringing a conversation back to a point. So without further ado: Continue reading Advice for Field Research From Stephen Colbert
Like many psychologists, I was dismayed to see the results of a recent study that attempted to replicate 100 different psychology studies and managed to support the original results in only 36% of cases. The inferential statistical analyses used to make sense of psychology studies are intended to sift through patterns and separate the reliable ones (the ones that aren’t just blips in the data, that are strong enough to probably represent some real phenomenon) from the spurious. Clearly, in many cases, they are failing. Continue reading Replication, Validity, and the File Drawer Problem in Psychological Research
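To see why low replication rates are an expected consequence of how significance testing works, not necessarily evidence of fraud, it helps to run the arithmetic. The sketch below is illustrative only: the base rate of true hypotheses (10%), statistical power (0.8), and alpha (0.05) are assumed values, not figures from the replication study itself.

```python
def expected_replication_rate(prior_true=0.1, power=0.8, alpha=0.05):
    """Rough expected fraction of 'significant' findings that replicate,
    under assumed values for the base rate of true hypotheses, the
    statistical power of studies, and the false-positive rate (alpha)."""
    sig_true = prior_true * power          # true effects that reach significance
    sig_false = (1 - prior_true) * alpha   # false positives that reach significance
    frac_true = sig_true / (sig_true + sig_false)
    # A replication of a true effect succeeds with probability `power`;
    # a replication of a false positive "succeeds" only by chance (alpha).
    return frac_true * power + (1 - frac_true) * alpha

print(round(expected_replication_rate(), 2))  # about 0.53 under these assumptions
```

Even with these fairly generous assumptions, barely half of significant findings would be expected to replicate, and lower power or a lower base rate of true hypotheses pushes the number down further, toward figures like the 36% reported.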
As an undergraduate studying psychology, I dreaded my required research methods classes. Years later, as a graduate student instructor, I saw the same lack of enthusiasm in my own students. That’s right: I opted to teach research methods. At some point, as I became a better researcher and scientist, it became clear how important research methods are and how interesting they can be. Continue reading Research Methods Matter: The Case of Coffee and Productivity