At this point it’s pretty clear why someone would be worried about bias. We’re biased (Part 1). Consciously suppressing our biases might not work (Part 2). And our bias seems to tamper with significant, real-world decisions (Part 3). So now that we’re good and scared, let’s think about what we can do. Below are more than 10 debiasing strategies that fall into 3 categories: debiasing our stereotypes, debiasing our environment, and debiasing our decision procedures.
1. Debiasing our stereotypes
In the last post, we learned that implicit attitudes and stereotypes can badly affect our judgments. Here is how stereotypes can be formed and reformed.
Conditioning
In conditioning, we repeatedly present ourselves with a pair of stimuli until we begin to associate one thing with the other (De Houwer, Thomas, and Baeyens 2001; Hofmann, De Houwer, Perugini, Baeyens, and Crombez 2010). So, for example, if someone consumes media that repeatedly presents certain ideas or groups in a negative light, then they will cultivate a negative implicit attitude about those ideas or groups (Arendt 2013; Arendt & Northup 2015; Matthes & Schmuck 2015). Or if a certain profession is dominated by white men, then people will associate membership in that profession with being white and male, which might have a self-reinforcing effect on the profession’s makeup.
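To get a feel for the mechanism, here is a toy simulation in the spirit of classic associative-learning models. The learning rule and parameter values are illustrative inventions, not taken from the studies cited above; the point is just that each pairing strengthens the association a little, with diminishing returns.

```python
# Toy simulation of conditioning: repeated pairings of two stimuli
# gradually strengthen the association between them.
# The rule and parameters are illustrative, not from the cited studies.

def pair_once(strength, learning_rate=0.2, ceiling=1.0):
    """One more pairing nudges the association toward its maximum."""
    return strength + learning_rate * (ceiling - strength)

strength = 0.0
for pairing in range(1, 21):
    strength = pair_once(strength)
    if pairing % 5 == 0:
        print(f"after {pairing:2d} pairings: association strength = {strength:.2f}")
```

Run it and the association climbs quickly at first and then plateaus, which is one way to picture why a steady diet of biased media can entrench an attitude, and why undoing one takes repetition too.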
Counterconditioning
We can also use conditioning against our existing biases. Some call this counterconditioning (Stewart, Latu, Kawakami, and Myers 2010; Vansteenwegen, Baeyens, and Hermans 2015). One fairly easy and effective way to countercondition a negative stereotype is just to imagine a positive version of a negatively valenced concept (Blair, Ma, and Lenton 2001; Markland et al. 2015).
Here are examples of how we might use counterconditioning. When nominating, recommending, or choosing someone for an opportunity, we might allot some time to think of non-stereotypical candidates from underrepresented groups. If we’re teaching, then we might assign material from underrepresented groups. If we’re an office manager, then we might choose imagery for the office that represents a more diverse group of people.
2. Debiasing our environment
It turns out that altering the decision environment can also support debiasing. So if you have a say in how decisions get made around you, think about the following.
Ask for justification
Put decision-makers in a position that forces them to justify their decisions to a third party (Lerner & Tetlock 1999; Simonson & Nye 1992). The point of this is to curb overconfidence in our judgments, e.g., optimism bias (Ben-David, Graham, and Harvey 2013; Moore & Healy 2008).
Consider alternatives
Try to diversify the decision-makers (Bell, Villado, Lukasik, Belau, and Briggs 2011; Shor, Rijt, Miltsov, Kulkarni, and Skiena 2015). The hope is that your intuitions will be challenged. After all, “Training in normative rules often fails when people have strong intuitions and do not pause to think more deeply (McKenzie and Liersch 2011)” (Soll, Milkman, and Payne 2014). So find people who will question your intuitions (Koriat, Lichtenstein & Fischhoff 1980). If you can’t find someone to disagree with you, disagree with yourself: simply imagine arguments for different conclusions (Herzog and Hertwig 2009, 2014; Keeney 2012).
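Herzog and Hertwig call this “dialectical bootstrapping”: make a first estimate, then make a second estimate on the assumption that the first was off, and average the two. Here is a hypothetical numerical sketch of why that helps; the bias and noise values are made up, and the setup is mine rather than their experimental procedure.

```python
# Sketch of "disagreeing with yourself": if a first guess and a
# devil's-advocate second guess err in somewhat opposite directions,
# their average tends to land closer to the truth than either alone.
# All numbers here are hypothetical.
import random

random.seed(1)
TRUE_VALUE = 100.0

def estimate(bias, noise=15.0):
    """A guess with a systematic lean (bias) plus random error."""
    return TRUE_VALUE + bias + random.gauss(0, noise)

first_errors, avg_errors = [], []
for _ in range(10_000):
    first = estimate(bias=+10)   # initial intuition, leaning high
    second = estimate(bias=-10)  # contrarian estimate, leaning low
    first_errors.append(abs(first - TRUE_VALUE))
    avg_errors.append(abs((first + second) / 2 - TRUE_VALUE))

print(f"mean error of the first estimate alone: {sum(first_errors) / len(first_errors):.1f}")
print(f"mean error of the average of the two:   {sum(avg_errors) / len(avg_errors):.1f}")
```

The averaged estimate wins because the two guesses’ systematic leans cancel and their random errors partially wash out.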
Make information easier to consume
Remove irrelevant data, e.g., a job candidate’s identity, the status quo, unnecessary complexity, etc. (Célérier & Vallée 2014; Thaler & Sunstein 2008). Then present the remaining data in a way that allows for easy comparison (Heath & Heath 2010; Hsee 1996; Russo 1977) and on the most relevant scale (Camilleri & Larrick 2014; Burson, Larrick, & Lynch 2009; Larrick and Soll 2008). Talk about probabilities in terms of relative frequencies or, better yet, represent probabilities with visualizations [examples] (Fagerlin, Wang, and Ubel 2005; Galesic, Garcia-Retamero, & Gigerenzer 2009; Hoffrage, Lindsey, Hertwig, and Gigerenzer 2000).
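To see why frequencies beat percentages, here is a worked example in the spirit of the natural-frequency research cited above. The disease and test numbers are invented for illustration; the question is the classic one: given a positive test, how likely is the condition?

```python
# Natural-frequency framing of a conditional probability.
# All rates below are hypothetical, chosen only for illustration.

population  = 1000
base_rate   = 0.01  # 1% of people have the condition
hit_rate    = 0.90  # the test detects 90% of true cases
false_alarm = 0.09  # the test wrongly flags 9% of healthy people

sick            = population * base_rate     # 10 people
true_positives  = sick * hit_rate            # 9 people
healthy         = population - sick          # 990 people
false_positives = healthy * false_alarm      # about 89 people

ppv = true_positives / (true_positives + false_positives)
print(f"Of {population} people, about {true_positives:.0f} positives are real "
      f"and {false_positives:.0f} are false alarms.")
print(f"So a positive result indicates the condition only {ppv:.0%} of the time.")
```

Stated as “9 real positives out of roughly 98 total positives,” the surprisingly low answer of about 9% is almost visible at a glance, whereas the same facts stated as conditional probabilities routinely mislead people.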
3. Debiasing our procedures
Certain reasoning strategies can help you with debiasing (e.g., Larrick 2004; Fong and Nisbett 1991; Larrick, Morgan, and Nisbett 1990). Here are some ways that we might debias our decision procedures.
First, checklists, criteria, and rubrics. Use them, especially if you are repeatedly making one kind of judgment, e.g., grading, reviewing applications, etc. (Gawande 2010; Heath, Larrick, and Klayman 1998; Hales & Pronovost 2006).
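A checklist can be as low-tech as refusing to record a judgment until every criterion has been addressed. A minimal sketch, with invented rubric criteria:

```python
# Minimal rubric checklist: refuse to finalize a review until every
# criterion has been explicitly scored. The criteria are hypothetical.

RUBRIC = ["relevant experience", "writing sample", "references", "skills test"]

def finalize_review(scores):
    """Return an overall score, but only if the rubric is complete."""
    missing = [criterion for criterion in RUBRIC if criterion not in scores]
    if missing:
        raise ValueError(f"unscored criteria: {missing}")
    return sum(scores.values()) / len(RUBRIC)

print(finalize_review({"relevant experience": 4, "writing sample": 3,
                       "references": 5, "skills test": 4}))
```

The value is the forcing function: every candidate gets scored on every criterion, so an overall impression can’t quietly stand in for the evidence.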
Second, quantitative models. Predictive linear models often do better than our own reasoning (Dawes 1979; Dawes, Faust, and Meehl 1989; Bishop & Trout 2008). So if we have a good model by which to make a decision, then we should be wary of deviating from it without good reason.
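Dawes’s finding is striking partly because even “improper” models, which just give every standardized predictor the same weight, tend to beat holistic judgment (Dawes 1979). A minimal sketch of such a unit-weight model; the predictors and data are hypothetical:

```python
# An "improper" unit-weight linear model in the spirit of Dawes (1979):
# standardize each predictor, weight them all equally, rank by the sum.
# The predictors and numbers are hypothetical.

def standardize(values):
    """Rescale a list to mean 0 and standard deviation 1."""
    mean = sum(values) / len(values)
    sd = (sum((v - mean) ** 2 for v in values) / len(values)) ** 0.5
    return [(v - mean) / sd for v in values]

candidates = ["A", "B", "C"]
gpa        = [3.2, 3.9, 3.5]
test_score = [85, 70, 90]

z_gpa, z_test = standardize(gpa), standardize(test_score)
totals = {name: g + t for name, g, t in zip(candidates, z_gpa, z_test)}

for name, total in sorted(totals.items(), key=lambda item: -item[1]):
    print(f"candidate {name}: {total:+.2f}")
```

The discipline is the point: every candidate is judged on the same cues, weighted the same way, which leaves mood and stereotype fewer places to creep in.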
Finally, the status quo. We tend towards the status quo. So if a particular choice is already known to optimize our desired outcome(s), then make that choice the default (Chapman, Li, Colby, and Yoon 2010; Johnson, Bellman, and Lohse 2002; Johnson & Goldstein 2003; Madrian & Shea 2001; Sunstein 2015). And, going back to section 2, if people want to opt out of the optimized default, then ask them for justification.
Conclusion
As you can see, there are many debiasing tools in our toolkit. We can debias our stereotypes, our environment, and our procedures. While these tools have been shown to help, they might not entirely solve the problem. And there may be at least one more option at our disposal: feedback. That will be the topic of Part 5.
Series Outline
Part 1: I talk about the anxiety I had about being implicitly biased. “Am I discriminating without even knowing it?! How do I stop?!”
Part 2: What is implicit bias? Check out this post to learn about the theory and some of the findings.
Part 3: What are the real-world consequences of implicit bias? Research suggests implicit bias can impact hiring, pay, promotions, and more.
Part 4: [Jump to the top]
Part 5: Can feedback help? What if people are not used to giving each other feedback about their biases? Check out how to give and encourage feedback about our biases.
Hi Nick,
I just discovered your blog the other day and I am really enjoying it! As far as implicit bias goes I have a few thoughts and questions. The example I’m about to use may sound kind of strange, but I think it’s somewhat relevant to the concept of implicit bias, or at least of implicit attitudes/conceptions.
Early on in high school I took a course in sex ed where, among other things, we were taught the names of the most common STDs accompanied by a visual slideshow. Being young and mischievous, kids at school began using the newly acquired terminology just to get a rise out of their peers (or, in some cases, teachers). Any time a kid vocalized one of the terms, anyone within earshot would betray an expression of distaste. It’s my best guess that, like myself, the other kids were beset by unpleasant mental imagery upon hearing these words. Up until about the time I graduated college, where I spent time reading Ancient Greek philosophy and mythology, I could not hear the word “syphilis” without feeling slightly repulsed by the memory of those images on the projector screen in sex ed class.
After my exposure to mythology, I was acquainted with a variety of Greek words and names, many of which bore close resemblance, phonetically and textually, to the word “syphilis”. I can’t pinpoint the exact moment in time when the change occurred or remember how gradually it occurred, but I do know that around this time I began to adopt a new (perhaps implicit) “attitude” towards the word. Maybe you can call it a fresh psychological association. Either way, the result was this: if I hear the word now (it’s not the kind of word I hear very often), I seem to immediately think about Greek mythology instead of high school sex ed.
I know that this isn’t a technical example of implicit “bias”, but I think it may shed some light on the function of “conditioning” and how it can influence implicit bias by creating new mental associations.
I also want to clarify that it’s not my intention whatsoever to offend or pass judgement on anyone who has dealt with or is dealing with a disease. This example was solely meant to serve the discussion on implicit thought processes.
Do you think the same (or similar) associative process I talked about is also at play during “implicit bias”?
Hi Nick.
Interesting example! This shows not only how associations can be formed, but how they can be modulated. It sounds like you cultivated an association between ‘syphilis’ and certain images and/or related feelings and then (a) extinguished the negative association by pairing ‘syphilis’-sounding words with something new (i.e., Greek mythology), thereby (b) building a new association between ‘syphilis’-sounding words and the relevant concepts in Greek mythology. Thanks for sharing! I wish you well!