Philosophers are often trying to understand their intuitions about thought experiments. Traditionally, philosophers do this via introspection. But these days, some philosophers do it more scientifically: they survey people’s intuitions and use quantitative arguments to evaluate theories about those intuitions. In this post, I want to point out that one of philosophers’ traditional methods might be a kind of proto-psychology. And if that is right, you might wonder, “Is one method better than the other?” By the end of the post, you’ll know of at least one philosopher who argues that the more scientific approach is better.
What if traveling abroad were somehow bad for you? Well, a series of studies seems to find that “[traveling abroad] can lead to [lying and cheating] by increasing moral relativism” (Lu et al 2017, 1, 3). This finding has just the right combination of intuitive plausibility and surprise to make us want to share it uncritically. So, instead, let’s take a look at the methods, measures, and philosophical nuances of the topic. As usual, a bit of reflection makes the finding a bit less exciting and reveals a need for follow-up research.
“They’re biased, so they’re wrong!” That’s a fallacy. We can call it the bias fallacy. Here’s why it’s a fallacy: being biased doesn’t entail being wrong. So when someone jumps from the observation that So-and-so is biased to the conclusion that So-and-so is wrong, they commit the bias fallacy. It’s that simple.
In this post, I’ll give some examples of the fallacy, explain the fallacy, and then suggest how we should respond to the bias fallacy.
1. Examples of The Bias Fallacy
You’ve probably seen instances of the bias fallacy all over the internet.
Everybody thinks they're the shit… Your opinion is biased, therefore it is false.
— Bowtie Boss (@THINK_lika_BOSS) March 28, 2012
In my experience, the fallacy is a rhetorical device. The purpose of the bias fallacy is to dismiss some person or their claims.
Like many rhetorical devices, this one is logically fallacious. So it’s ineffective. At least, it should be ineffective. That is, we should not be persuaded by it.
So if you’ve seen the bias fallacy online, then go ahead and set the record straight: “They’re biased, so they’re wrong”? Not so fast! We can be biased without being wrong. #TheBiasFallacy
Apparently, when I impersonate conservatives, I do it with a southern US accent (e.g., “‘Murica!”, “Don’t mess with Texas!”, etc.). I don’t intentionally adopt the accent. In fact, I never even knew I was doing it until my partner pointed it out to me! Without my partner’s third-person perspective, I might never have noticed. I might have just continued mocking people with southern accents. In fact, that wouldn’t be surprising given what we learned in this series [Part 1 – Part 5]. So if we want to do something about our biases, then we would do well to seek this kind of third-personal feedback. Let’s call it bias feedback.
The bias feedback I received from my partner can be characterized as bottom-up and informal. Bottom-up because it came from a peer rather than from a position of authority. And informal because it happened freely in ordinary conversation rather than as part of some kind of compulsory process. Many people are uncomfortable with informal, bottom-up feedback. So if informal, bottom-up feedback is to be accepted in some contexts, then it might have to be integrated into that context’s culture. There might be a few ways to do this.
At this point it’s pretty clear why someone would be worried about bias. We’re biased (Part 1). Consciously suppressing our biases might not work (Part 2). And our bias seems to tamper with significant, real-world decisions (Part 3). So now that we’re good and scared, let’s think about what we can do. Below are more than 10 debiasing strategies that fall into 3 categories: debiasing our stereotypes, debiasing our environment, and debiasing our decision procedures.
1. Debiasing our stereotypes
In the last post, we learned that implicit attitudes and stereotypes can badly affect our judgments. Here is how stereotypes can be formed and reformed.
In conditioning, we repeatedly present ourselves with a pair of stimuli until we begin to associate one thing with the other (De Houwer, Thomas, and Baeyens 2001; Hofmann, De Houwer, Perugini, Baeyens, and Crombez 2010). So, for example, if someone consumes media that repeatedly presents certain ideas or groups in a negative light, then they will cultivate a negative implicit attitude about these ideas or groups (Arendt 2013; Arendt & Northup 2015; Matthes & Schmuk 2015). Or if a certain profession is dominated by white men, then people will associate membership in this profession with being white and male — and this might have a self-reinforcing effect on the profession’s makeup.
We can also use conditioning against our existing biases. Some call this counterconditioning (Stewart, Latu, Kawakami, and Myers 2010; Vansteenwegen, Baeyens, and Hermans 2015). One fairly easy and effective way to countercondition a negative stereotype is just to imagine a positive version of a negatively valenced concept (Blair, Ma, and Lenton 2001; Markland et al 2015).
Here are examples of how we might use counterconditioning. When nominating, recommending, or choosing someone for an opportunity, we might allot some time to think of non-stereotypical candidates from underrepresented groups. If we’re teaching, then we might assign material from underrepresented groups. If we’re an office manager, then we might choose imagery for the office that represents a more diverse group of people.
2. Debiasing our environment
It turns out that altering the decision environment can also support debiasing. So if you have a say in the decision environment, then think about the following.
Ask for justification
Put decision-makers in a position that forces them to justify their decisions to a third party (Lerner & Tetlock 1999; Simonson & Nye 1992). The point of this is to prevent overconfidence in our judgments, e.g., optimism bias (Ben-David, Graham, and Harvey 2013; Moore & Healy 2008).
Diversify the decision-makers
Try to diversify the decision-makers (Bell, Villado, Lukasik, Belau, and Briggs 2011; Shor, van de Rijt, Miltsov, Kulkarni, and Skiena 2015). The hope is that our intuitions will be challenged. After all, “Training in normative rules often fails when people have strong intuitions and do not pause to think more deeply (McKenzie and Liersch 2011)” (Soll, Milkman, and Payne 2014). So find people who question our intuitions (Koriat, Lichtenstein, and Fischhoff 1980). If we can’t find someone to disagree with us, we can disagree with ourselves: simply imagine arguments for different conclusions (Herzog and Hertwig 2009, 2014; Keeney 2012).
Make information easier to consume
Remove irrelevant data, e.g., a job candidate’s identity, the status quo, unnecessary complexity, etc. (Célérier & Vallée 2014; Thaler & Sunstein 2008). Present the remaining data in a way that allows for easy comparison (Heath & Heath 2010; Hsee 1996; Russo 1977), and present it in the most relevant scale (Camilleri & Larrick 2014; Burson, Larrick, & Lynch 2009; Larrick and Soll 2008). Talk about probabilities in terms of relative frequencies or, better yet, represent probabilities with visualizations [examples] (Fagerlin, Wang, and Ubel 2005; Galesic, Garcia-Retamero, & Gigerenzer 2009; Hoffrage, Lindsey, Hertwig, and Gigerenzer 2000).
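To make the frequency format concrete, here is a minimal sketch of how a single-event probability statement can be restated as natural frequencies, in the spirit of the research cited above. The numbers (base rate, sensitivity, false-positive rate) are made up for a hypothetical screening test, not taken from any of the cited studies:

```python
# Restating probabilities as natural frequencies.
# All numbers below are hypothetical, chosen only for illustration.
base_rate = 0.01        # 1% of people have the condition
sensitivity = 0.90      # P(positive test | condition)
false_positive = 0.09   # P(positive test | no condition)

population = 10_000
with_condition = int(population * base_rate)               # 100 people
true_positives = int(with_condition * sensitivity)         # 90 of them test positive
without_condition = population - with_condition            # 9,900 people
false_positives = int(without_condition * false_positive)  # 891 of them test positive

print(f"Out of {population:,} people, {true_positives + false_positives} test positive;")
print(f"only {true_positives} of those actually have the condition.")
```

Framed this way, it is easy to see that most positive results are false positives, which is exactly the kind of inference people tend to miss when the same information is presented as conditional probabilities.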
3. Debiasing our procedures
Certain reasoning strategies can help you with debiasing (e.g., Larrick 2004; Fong and Nisbett 1991, Larrick, Morgan, and Nisbett 1990). Here are some ways that we might debias our decision procedures.
First, checklists, criteria, and rubrics for our decision procedures. Use them, especially if you are repeatedly making one kind of judgment, e.g., grading, reviewing applications, etc. (Gawande 2010; Heath, Larrick, and Klayman 1998; Hales & Pronovost 2006).
Second, quantitative models. Predictive linear models often do better than our own reasoning (Dawes 1979; Dawes, Faust, and Meehl 1989; Bishop & Trout 2008). So if we have a good model by which to make a decision, then we should be wary of deviating from the model unless we have good reason to do so.
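As an illustration of how simple such a model can be, here is a sketch of a unit-weight (“improper”) linear model in the spirit of Dawes (1979): just sum standardized attribute scores and rank the options. The applicants and attributes are hypothetical:

```python
def unit_weight_score(attributes):
    """Score an option by summing its standardized attribute values.

    Dawes (1979) found that such 'improper' equal-weight linear models
    often predict outcomes about as well as expert judgment or even
    regression models with fitted weights.
    """
    return sum(attributes.values())

# Hypothetical applicants with standardized (z-scored) attributes.
applicants = {
    "A": {"test_score": 1.2, "experience": -0.3, "writing_sample": 0.5},
    "B": {"test_score": 0.4, "experience": 0.9, "writing_sample": -0.1},
}

ranked = sorted(applicants,
                key=lambda name: unit_weight_score(applicants[name]),
                reverse=True)
print(ranked)  # prints ['A', 'B']
```

The point is not that equal weights are optimal, but that committing to a fixed, transparent scoring rule before seeing the candidates leaves less room for bias to creep in case by case.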
Finally, the status quo. We tend towards the status quo. So if it is already well-known that a particular choice optimizes our desired outcome(s), then default to that choice (Chapman, Li, Colby, and Yoon 2010; Johnson, Bellman, and Lohse 2002; Johnson & Goldstein 2003; Madrian & Shea 2001; Sunstein 2015). And — going back to section 2 — if people want to opt out of the optimized status quo, then ask them for justification. 😉
As you can see, there are many debiasing tools in our toolkit. We can debias our stereotypes, our environment, and our procedures. While these tools have been shown to help, they might not entirely solve the problem. And there may be at least one more option at our disposal: feedback. That will be the topic of Part 5.
Part 1: I talk about the anxiety I had about being implicitly biased. “Am I discriminating without even knowing it?! How do I stop?!”
Part 2: What is implicit bias? Check out this post to learn about the theory and some of the findings.
Part 3: What are the real-world consequences of implicit bias? Research suggests implicit bias can impact hiring, pay, promotions, and more.
Part 4: This post. More than 10 debiasing strategies for our stereotypes, our environment, and our decision procedures.
Part 5: Can feedback help? What if people are not used to giving each other feedback about their biases? Check out how to give and encourage feedback about our biases.
Think about decisions that people make every day. A committee decides who to hire. A supervisor rates an employee’s performance. A teacher grades a student’s assignment. A jury arrives at a verdict. A Supreme Court judge casts their vote. An emergency medical technician decides which victim to approach first. A police officer decides whether to shoot. These are instances in which workplace bias can have significant consequences.
I won’t be able to highlight every area of research on workplace bias. So I cannot delve into the findings that police officers sometimes show racial bias in decisions to shoot (Sim, Correll, and Sadler 2013, Experiment 2; see Correll et al 2007 and Ma and Correll 2011, Study 2 for findings that indicate no racial bias). And I cannot go into detail about how all-white juries are significantly more likely than other juries to convict black defendants (Anwar, Bayer, and Hjalmarsson 2012).
GENDER BIAS AT WORK
Instead, I’ll focus on the instances of workplace bias to which most people can relate. If you’re like most people, then you need to work to live, right? So let’s talk about how bias can affect our chances of being hired.
If our reasoning were biased, then we’d notice it, right? Not quite. We are conscious of very few (if any) of the processes that influence our reasoning. Some unconscious processes bias our reasoning in measurable ways. This is sometimes referred to as implicit bias. In this post, I’ll talk about the theory behind our implicit biases and mention a couple of surprising findings.
The literature on implicit bias is vast (and steadily growing). So there’s no way I can review it all here. To find even more research on implicit bias, see the next two posts, the links in this series, and the links in the comments.†