"Otakuthon 2014: ???" adapted. CC BY 2.0 Generic.

Implicit Bias | Part 2: What is implicit bias?

If our reasoning were biased, then we’d notice it, right? Not quite. We are conscious of very few (if any) of the processes that influence our reasoning. And some of these processes bias our reasoning in ways that we do not endorse. This is sometimes referred to as implicit bias. In this post, I’ll talk about the theory behind our implicit biases and mention a couple of surprising findings.

The literature on implicit bias is vast (and steadily growing). So there’s no way I can review it all here. To find even more research on implicit bias, see the next two posts, the links in this series, and the links in the comments.†

1.  Implicit Associations

There are a few ways to discover and gauge implicit biases. One way is the implicit association test (IAT) [Wikipedia]. The IAT is a categorization task. The task reveals how we associate certain categories — e.g., race, gender, religion, etc. — with certain feelings — e.g., positive feelings, negative feelings, etc. These associations are …well, discouraging. For instance, the race IAT reveals that people are faster to (a) associate a positive word with a white face than with a black face and (b) associate a negative word with a black face than with a white face (Greenwald, McGhee, and Schwartz 1998). Here’s the kicker: even people who are committed to not being racist seem to be implicitly racist — e.g., black people often show an implicit preference for white faces on the race IAT.

Consider the implications. First, there is evidence that we are biased. Second, there is evidence that we’re not even aware of our bias. And third, there is evidence that we might be biased despite our best intentions to not be biased.

Disclaimer: the IAT is not well-suited to predict individuals’ propensity for biased behavior. The IAT is more useful for predicting group or system-level behavior (Greenwald, Banaji, and Nosek 2015). So don’t read too much into your own IAT results. Rather, consider the bigger picture (ibid.).
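To make the latency comparison concrete, here is a minimal Python sketch of an IAT-style D score: the gap between mean response times in the “incompatible” and “compatible” pairing blocks, scaled by the variability across all trials. The latencies below are made up, and the scoring is simplified — the published algorithm (Greenwald, Nosek, and Banaji 2003) also filters outlier trials and penalizes errors.

```python
from statistics import mean, stdev

def iat_d_score(compatible_ms, incompatible_ms):
    # Simplified D score: difference in mean response latency between
    # the "incompatible" and "compatible" pairing blocks, divided by
    # the standard deviation of all trials. Larger positive values mean
    # faster responding when the pairing matches the implicit association.
    pooled_sd = stdev(compatible_ms + incompatible_ms)
    return (mean(incompatible_ms) - mean(compatible_ms)) / pooled_sd

# Hypothetical latencies in milliseconds (not real data).
compatible = [620, 650, 610, 640, 630]    # e.g., white face paired with "good"
incompatible = [780, 820, 790, 810, 800]  # e.g., black face paired with "good"

d = iat_d_score(compatible, incompatible)
```

With these invented numbers, d comes out near 1.9 — i.e., markedly faster responses in the “compatible” block, which is the kind of asymmetry the race IAT detects.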

2.  The Automaticity of Associations

One explanation of this evidence is that racial features automatically trigger positive or negative attitudes (Fazio, Sanbonmatsu, Powell, and Kardes 1986). The idea is this: if a thought is associated with a positive or negative feeling, then thinking that thought quickly and automatically triggers positive or negative attitudes, and these attitudes influence our reasoning and judgments — often without our conscious awareness.

It’s like those times when someone does something that you really dislike. Your friend tells you that they’re voting for the other political party. The referee penalizes your favorite team. Or my pet-peeve: someone makes claims that do not follow from the evidence(!). You feel upset as soon as it happens. Your heart rate increases and a sense of internal pressure builds. You’re frustrated, threatened, and/or maybe even stressed. You didn’t reason your way to these feelings. They just happened.

3.  Reasoning About Our Implicit Biases

Notice that this theory about implicit bias makes testable predictions! After all, if implicit bias can be the result of processes that we do not consciously endorse, then there is more to bias than our conscious reasoning.

And that is what experimental psychologists find. Researchers find people who behave in implicitly biased ways. Then researchers present these people with good reasons not to be biased. Sadly, these reasons — if they have any effect at all — have only a small effect on people’s implicitly biased behavior (Briñol, Petty, and McCaslin 2009: 291–302; Briñol, Petty, and Horcajo n.d.).

Reasoning About Racism

Consider an example. Gretchen Sechrist and Charles Stangor tested 50 undergraduate psychology students for racial bias (using the Pro-Black Scale; Katz & Hass 1988; see Sechrist and Stangor 2001). The students whose behavior was most biased were told one of two things: either (a) most of their peers accepted racial bias or (b) only a small minority of their peers accepted racial bias. After that, the students were asked to wait in the hallway, where there was a line of seven chairs with a black student sitting at one end (see Figure 1).

A graphic showing seven chairs in a line. The chairs are numbered, and someone is sitting in the first chair. The number of chairs between a research participant and a black person can indicate the degree of a participant’s implicit bias.
Figure 1. Adapted from Macrae, Bodenhausen, Milne, & Jetten, 1994. Illustrating method from Sechrist and Stangor 2001.

So, the students had to make a choice about where to sit in proximity to a peer who is black. And they had to make this choice right after they were told about their peers accepting or rejecting their racial bias.

Researchers found that students sat closer to or further away from a black peer depending on whether their peers accepted or rejected racial bias. When students were told that 81% of their peers accepted racial bias, they sat further away. When they were told that only 19% of their peers accepted racial bias, they sat closer. The suggestion is that participants were encouraged to inhibit their own racial bias in response to their peers’ views about racial bias.

Reasoning Isn’t Enough

Unfortunately, the peer information had only a small effect on participants’ bias. In fact, the number of chairs by which the two groups differed was just one. One!

In my experience with the literature, the effect sizes of reasoning on implicit attitudes are usually small. (And sometimes the effect sizes are not even reported!)
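For readers unfamiliar with effect sizes: the standard measure here is Cohen’s d, the difference between two group means divided by their pooled standard deviation. The sketch below uses made-up seat choices (not Sechrist and Stangor’s actual data) to show how a one-chair average difference can still be a fairly modest standardized effect when individual seat choices vary a lot.

```python
from statistics import mean, stdev

def cohens_d(group_a, group_b):
    # Cohen's d: standardized difference between two group means,
    # scaled by the pooled sample standard deviation.
    n_a, n_b = len(group_a), len(group_b)
    pooled_var = ((n_a - 1) * stdev(group_a) ** 2 +
                  (n_b - 1) * stdev(group_b) ** 2) / (n_a + n_b - 2)
    return (mean(group_a) - mean(group_b)) / pooled_var ** 0.5

# Hypothetical chair choices (1 = next to the black student, 7 = farthest).
told_peers_reject_bias = [1, 2, 7, 3, 2, 6, 2, 1]  # mean = 3.0
told_peers_accept_bias = [2, 3, 7, 5, 4, 7, 3, 1]  # mean = 4.0

d = cohens_d(told_peers_accept_bias, told_peers_reject_bias)
# One chair apart on average, yet d is only about 0.45 given this spread.
```

The point of the toy numbers: whether a one-chair difference counts as “small” depends on the variability behind it, which is exactly why reporting effect sizes matters.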

This suggests that our conscious reasoning might be limited when it comes to reducing implicitly biased behavior. That is the fourth major point of this post: Our desire or reasons to be less biased might not be enough to fully overcome our biases. 


So now that you’ve been introduced to the literature, remember:

  1. We can be biased.
  2. We can be unaware of our bias.
  3. We can be biased despite our best efforts.
  4. We can be biased despite being aware of reasons not to be biased.

The next few posts will present even more of the research on bias — especially Part 3 and Part 4.

Series Outline

Part 1: I talk about the anxiety I had about being implicitly biased. “Am I discriminating without even knowing it?! How do I stop?!”

Part 2: [Jump back to the top]

Part 3: What are the real-world consequences of implicit bias? Research suggests implicit bias can impact hiring, pay, promotions, and more.

Part 4: What can we do about implicit bias? The research on de-biasing suggests a few strategies for limiting biased judgments and behavior. Spoiler: many people and institutions aren’t using these strategies.

Part 5: How can we address each other’s biases? It’s time for Feedback 101, featuring tips for developing a culture of helpful feedback and some examples of how to give feedback to one another.

† I am not being careful to distinguish between implicit bias and implicit attitude. There might be important differences, but they are not crucial for the purposes of this post.

Featured image: “Otakuthon 2014: ???” from Pikawil, CC BY 2.0; adapted by Nick Byrd

Published by

Nick Byrd

Nick is a cognitive scientist at Florida State University studying reasoning, wellbeing, and willpower. Check out his blog at byrdnick.com/blog

10 thoughts on “Implicit Bias | Part 2: What is implicit bias?”

  1. Thanks for writing this Nick. I have been observing the little “ticks” I face when just truthfully answering people’s questions. If I am asked the question “Are you good at sales?” My answer is “Yes, I am an excellent sales person.” In this case, I actually have awards to back this up.

    If the question is asked by a man, and I answer that way, it seems to be accepted without any negative facial expressions and with positive verbal feedback.

    If the question is asked by a woman, and I answer the same way, I get questioning/negative feedback in both facial expressions and verbal cues.

    *These examples are from “casual” interviews and informal conversations.*

    It is also worrisome that if I am just “myself” (strong and confident about my talents) I could be marked down, in a formal interview by men by being “too strong” and by women by being a “bragger” (and yes, I have been called that when just making a true statement about my talents.)

    Seems that I can’t win.

    1. I am sorry to hear about the interview experiences, Lisa. That sounds frustrating.

      If the literature about judgments towards women (vs. men) is any indication about your situation, then you might be right: you can’t win. The data suggest a lose-lose situation: a choice between downplaying stereotypically male traits (and thereby risking being seen as less “goal-oriented” (etc.), which, who knows, might count against an applicant) or embracing stereotypically male traits (and thereby risking being perceived as less competent, socially adept, etc. than a male applicant with the same traits). It’s maddening!

      Ideally, certain policies and protocols would account for these circumstances. Hopefully, I’ll find some advice about this situation for my next post.

      Thanks for sharing! I wish you well, Lisa!

  2. My solution so far (I am a woman and I work in academia) has been downplaying my work (and hoping that “it speaks for itself”). But one needs to be strong enough for that since it can be at times frustrating to see the opposite attitude in men who have done much less and brag/complain much more.

    1. Hi smef,

      I can see how your solution could be difficult at times. Many men aren’t shy when it comes to talking about their own work or their own perceived strengths, etc. — even when it’s unsolicited. I look forward to a day when people no longer need to practice this restraint for fear of bias.

      And for what it’s worth, you’ve piqued my interest. I find myself wanting to ask you about your work — I’d ask, but I do not want to pressure someone who is commenting anonymously to identify themselves. Maybe this is a benefit of your solution; it can create a bit of intrigue. :)

      I wish you well!

  3. This post was really interesting! I enjoy reading your blog.

    I just read an article saying that neuroscientists have found that our minds make even simple decisions for us before we have time to think them through — like pressing the left button or the right button. (Sorry, I didn’t save the article.) Now that I read this, I find it interesting that we also do not always realize our biases.

    Consciously thinking everything through thoroughly is way more important than our society allows. Seems there is a need for an instant automated response.

    It’s sad to hear about the struggle we have as women or African-Americans, but I hope now that we know this, we will figure out how to fix it. Can’t wait for the next post!

    1. Hi Saschia,

      What a coincidence! I was just reading your blog the other day. I enjoyed it! Thanks for blogging!

      And thanks for your comment! The study you mention sounds like the ones Benjamin Libet and his ilk have conducted. Interesting stuff! I actually wrote about that a while back (“Can Unconscious Decisions Be Free?”). You might also be interested in what other philosophers have to say about this research — e.g., Robyn Repko Waller’s paper “Beyond Button Presses:…” and, if you have more time, Al Mele’s book Free: Why Science Hasn’t Disproved Free Will. Maybe you’re already familiar with the literature. And no worries about not saving the article. :)

      Ok, one more. If you’re interested in how much of our mental life is unconscious/automatic, you might consider checking out Keith Frankish’s “Dual-Process and Dual-System Theories of Reasoning”

      And I think you’re right: reflection is underrated by our society. And I too hope we figure out some ways to make things better for those who are systematically disadvantaged by bias (and other things). Hopefully the advice I am rounding up for the next post will prove useful!

      I look forward to reading more on your blog.


  4. This discussion of being aware of and reducing bias seems to me to point to ‘mindfulness’. Mindfulness has been shown to increase awareness of implicit beliefs by giving someone the space to observe themselves. (Brown et al., 2003). Once implicit beliefs are brought into awareness, they are then explicit beliefs that can be acted upon consciously (and therefore changed). (Gawronski et al., 2006)

    You seem to imply that being unbiased is a good goal to strive towards. You point out cognitive reasons why that MAY be impossible, but I’d like to make an argument for the idea that being unbiased is fundamentally impossible. Balcetis and Dunning reported the results of a study completed in 2006 that indicated that individuals interpret ambiguous visual stimuli according to their personal motivations (goals). Our intake of information is finite and limited by our environment, but as Balcetis and Dunning point out, it is also limited by our goals. We process our experience based on our values and goals. Having goals and values that allow for an unbiased perception of information and sensory input is what Buddhists refer to as “enlightenment” – the absence of desires and perfect moment-to-moment experience of reality as it really is. Seeking an ideal state of being unbiased is a worthy pursuit, but is there any different instrumental value to it than seeking enlightenment as a Buddhist?

    If we accept that bias is an inherent part of being human, that clarifies the next move a bit more. If removing values and bias from ourselves generally is impossible, in which specific places should we seek greater understanding of our ‘blind spots’? Seeking understanding (a process that I believe reduces bias) leads towards a more unified construct in specific areas which is useful for discussion, creating new systems of thought, and facilitating the establishment of a world that works for increasing portions of society. But seeking general absence of bias is an unfocused pursuit destined for failure.

    I suppose I disagree with this positivist-seeming view with which you approach truth, scientific investigation, and argumentation. Pure Science theoretically pursues objectivity, but then you imply the utilization of this Pure Science towards an end – to me, this is an approach that confuses different types of science. As soon as science is utilized in a context towards an end, it loses its “Pure Science-ness”.

    I realize that this is a tangent from the thrust of your article, but I think it has huge implications for the way you discuss these ideas and present the research. It is also just fun to discuss :)


    1. Hi again Jesse! Thanks again for engaging with the post and the relevant literature. I agree: it’s fun to discuss. Thanks for another opportunity to discuss it.

      First, I did not mean to imply that overcoming bias is impossible. I’ve only suggested that the effect size of our attempts to overcome bias with conscious reasoning is often small. I didn’t say (I hope) that overcoming bias with conscious reasoning is impossible. I think that is consistent with Gawronski et al 2006. So I don’t think we disagree here.

      Second, I don’t think that I’ve argued that we should (or can) be entirely unbiased in all contexts. I’m only assuming what I take to be a common desire: we want to be less (or un-) biased in certain contexts. So I don’t think we disagree here either.

      Third, I’m not sure what I’ve said is a “positivist-seeming view …[of] truth, scientific investigation, and argumentation.” And I’m not sure what you mean by “Pure Science”. For what it’s worth (and to borrow a line from our conversation on another post), I am a pragmatist. So I am not sure that ‘positivism’ or ‘objective’ characterize my view of science.

      So — finally — perhaps it would be fruitful to try to clarify (what I take to be) the purpose of this post: to provoke a worry — i.e., that we are unknowingly and (somewhat) uncontrollably biased. The hope is that readers will become concerned about their potential biases, and thereby motivated to learn about various debiasing strategies (e.g., Part 4).

      P.S. – Is this the Gawronski et al. 2006 paper that you had in mind? (There was more than one on Google Scholar.)

      Gawronski, B., & Bodenhausen, G. V. (2006). Associative and propositional processes in evaluation: An integrative review of implicit and explicit attitude change. Psychological Bulletin, 132(5), 692–731.

      1. Yep, that is the one I was talking about! If you haven’t read it, I would highly recommend checking it out!

        Ah, I see. One example of what I was picking up on was in your conclusion you said “1. We can be biased”, which I thought was a giant understatement that implied that being unbiased was a possible state. I now see that this (combined with the following three conclusion points) is a fair summary of the literature and an important point for the general public to understand – especially in this age of increasing discussion of oppression, privilege, and micro-aggressions.

        “Pure Science” is a concept from Pielke in his book “The Honest Broker” (2007). It refers to science that is pursued for the sake of raw discovery rather than to investigate policy or science that is used in service of something. Here is a diagram from the book that explains what I mean, somewhat. http://imgur.com/a/s5Ref

Comments are closed.