One thing coronavirus has exposed more broadly is the importance of (good) ethics in tech and design. People have been raising the alarm about potential missteps, and ways to guard against them, for years. Some of the more thoughtful books about the ethics of algorithms and tech design were published in the last few years by Cathy O’Neil, Sara Wachter-Boettcher, and Virginia Eubanks. Books don’t get written overnight; each one represents years of thinking, writing, and research on the part of the author, and none of it done in a dark cave. Yet the ethical dilemmas these authors raised seemed to be more of an industry concern (and a niche one at that) than something the general public paid attention to. Only recently did the negative effects of unethical or ethically agnostic design start to provoke righteous outrage. And now, the coronavirus pandemic is exposing the hold of toxic tech over many of the basic functions of daily life. Why has it taken so long, and why was the danger more obvious early on to some people than others?
Among design and behavior change professionals, I chalk some of it up to the training backgrounds people come from. Tech and design are a big tent in the sense that the people working in that blended field come from a wide range of professional training and backgrounds. In the subset of behavior change design where I find myself, there are people like me who were trained as psychologists, but also people trained as designers, UX researchers, content specialists, strategists, and so on. It’s among the psychologists in my personal circles that I saw the earliest awareness that some of the cool algorithmic design, machine learning, and anticipatory features might have ill consequences.
When I was doing my undergrad degree in psychology, I enrolled in a lab class where I’d be a research assistant administering studies. Before I could so much as say hello to a research subject, I had to complete a robust training on ethics for working with human subjects, a training I’d repeat on a regular basis all the way through the end of grad school. On top of that, every study protocol we worked on had to be submitted to the university’s Institutional Review Board (IRB), which would provide feedback on its appropriateness. No data could be collected until the IRB gave its stamp of approval, and data collection had to stop if the approval period expired without a renewal. Even though I rolled my eyes at some of what the IRB demanded (I mention in Engaged that I once had to show participants videos of puppies to restore their moods after a study), it was a good grounding in the idea that you don’t do anything that reaches outside your team without picking apart the what-ifs and taking precautions against negative effects.
This all happened in the larger context of the history of psychology, which was baked into my formal education. I can think of at least five classes I took (and then one that I taught twice as a grad student) where we learned in depth not just about research ethics but about the historical atrocities that shaped modern ethical codes. And there are some real doozies.
So it wasn’t just learning that it was ethically dubious to, say, run an experimental intervention out of curiosity with no expectation that it would benefit anyone. It was drawing the line back from that type of study to research conducted on prisoners in Nazi camps or on captured enemies in military prisons, which often had minimal scientific value and maximum horror. Every undergrad psych student learns about the Tuskegee Syphilis Study. Courses cover the Nuremberg Trials, not just because of the resulting Nuremberg Code governing human subjects research, but also for their lessons in what Hannah Arendt called the banality of evil. Small missteps today can lead to much bigger misbehavior down the line, and people actually suffer for it.
As I list the books that I’ve found instructive in warding off ethical missteps, it strikes me that they’re all authored by women. I know plenty of deeply ethical men, but I also think that because women, people of color, and sexual minorities are more likely to be on the receiving end of accidental or intentional maltreatment, they’re more sensitive to its potential. It’s just another reason why diverse teams create stronger ideas (this is true, if the teams are set up in a truly inclusive way; there’s data and everything).
I don’t think it’s fair to say that Facebook should have specifically predicted its technology’s role in election disinformation, or that the first people to create smartphones should have specifically anticipated the extent to which people would be tracked through their phone usage. But you don’t have to know the exact ways a technology will be abused to think seriously about building guardrails that limit how it is applied. Facebook made early decisions about its editorial policies and the types of content allowed in paid advertisements that directly shaped the current state of affairs. Those decisions could have been different without knowing exactly how they’d one day go wrong. I can remember conversations *I* was a part of around the launch of the first iPhone where people were saying how great it would be if marketers could predict consumers’ needs based on their location data or Google searches and serve up just the right content. Spoiler: It’s not that great. And while the scientist in me recognizes the need for processes like contact tracing to contain coronavirus, the ethicist cringes at the abuse potential when that data is accessible to Apple, Google, Amazon, or even federal and local governments.
Anyone can get better at research and design ethics with experience and training. I think people trained through psychology are well positioned to anticipate just how bad what seems like a small oversight or misstep can eventually become. It makes me less fun at parties, but I’m glad to have some sense of the extremes that putting “science” or “product” over people can reach. Maybe we all need more exposure to the horror stories to make better everyday choices. Because, and this is important: avoiding similar future problems requires active choices on our part right now. Doing nothing guarantees a rerun of historical mistakes.
Our active choices must go beyond our own individual spheres of control. The pandemic has made it clear that while individual choices matter, they are not usually enough to create major systemic change. For that we need organizing, advocacy, and policy change. One of my intentions for the next several months is to figure out how I can direct my energies toward more systems-level advocacy to make positive changes for our post-pandemic future. Stay tuned, and suggestions welcome.
Further Reading
American Psychological Association (APA) (1992). Ethical Principles of Psychologists and Code of Conduct.
Arendt, H. (1963). Eichmann in Jerusalem: A Report on the Banality of Evil. New York: Penguin Group.
Eubanks, V. (2017). Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. New York: St. Martin’s Press.
Noble, S. U. (2018). Algorithms of Oppression: How Search Engines Reinforce Racism. New York: New York University Press.
O’Neil, C. (2016). Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. New York: Broadway Books.
Wachter-Boettcher, S. (2017). Technically Wrong: Sexist Apps, Biased Algorithms, and Other Threats of Toxic Tech. New York: W. W. Norton & Company.