I love philosophy and science. I also love flowcharts because they can compress many pages of instruction into a simple chart. And three researchers from George Mason University and the University of Queensland have combined these three loves in a paper about climate change denialism. In their paper, they create a flowchart that shows how to find over a dozen fallacies in over 40 denialist claims! In this post, I’ll explain this argument-checking flowchart. First, we will identify a common denialist claim and then evaluate the argument for it. Continue reading Evaluate An Argument With Just ONE Flowchart
A public figure is accused of a sexual misdeed. You know nothing about the accused besides their name and their alleged crime. And you know nothing about the accuser except their name and their accusation. Can you believe the accuser? We often learn about such sexual harassment accusations. So it behooves us to find a principled response. The Acceptance Principle suggests that we can accept this kind of accusation. Why? I’ll explain in this post. Continue reading Sexual Harassment Accusations & The Acceptance Principle
I recently reread Tyler Burge’s “Our Entitlement to Self-knowledge” (1996). Burge argues that our capacity for critical reasoning entails a capacity for self-knowledge.
Like a lot of philosophy, this paper is barely connected to the relevant science. So when I find myself disagreeing with the authors’ assumptions, I’m not sure whether the disagreement matters. After all, we might disagree because we have different, unfalsifiable intuitions. But if we disagree about facts, then it matters: one of us is demonstrably wrong. In this post I will articulate my disagreement. I will also try to figure out whether it matters. Continue reading Why Critical Reasoning Might Not Require Self-knowledge
You might be familiar with what philosophers call an “appeal to nature”. It is a claim that something is good or right because it’s natural. Sometimes an appeal to nature is a fallacy. In this post, I discuss the possibility that an appeal to intuition is that kind of fallacy.
1. Different Brain, Different Intuition
First, imagine that your brain and my brain are radically different from one another. If this were the case, then it would be unsurprising to find that your intuitions were different from mine. Indeed, evidence suggests that even minor differences between brains are linked to differences in intuition (Amodio et al. 2007; Kanai et al. 2011).
This implies that our appeals to intuition (and the like) might be contingent upon brains being a certain way. In other words, differences in intuitions seem to result from differences in natural properties.†
This week, I’m talking about implicit bias over at The Brains Blog. I’m including my portion of the discussion below.
1. The Implicit Association Test (IAT)
The implicit association test (IAT) is one way to measure implicitly biased behavior. In the IAT, “participants […] are asked to rapidly categorize two [kinds of stimuli] (black vs. white [faces]) [into one of] two attributes (‘good’ vs. ‘bad’). Differences in response latency (and sometimes differences in error-rates) are then treated as a measure of the association between the target [stimuli] and the target attribute” (Huebner 2016). Likewise, changes in response latencies and error-rates resulting from experimental interventions are treated as experimentally manipulated changes in associations.
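The latency-based scoring described above can be sketched in a few lines of code. This is a deliberately simplified illustration with hypothetical response times; standard IAT scoring (e.g., the D-score of Greenwald and colleagues) also involves error penalties, trial filtering, and block-by-block computation.

```python
from statistics import mean, stdev

def iat_effect(congruent_rts, incongruent_rts):
    """Crude IAT-style effect: the difference in mean response
    latencies between block types, divided by the pooled standard
    deviation of all latencies. A simplified stand-in for the
    D-score; real scoring handles error trials and outliers too."""
    pooled_sd = stdev(congruent_rts + incongruent_rts)
    return (mean(incongruent_rts) - mean(congruent_rts)) / pooled_sd

# Hypothetical response times in milliseconds.
congruent = [620, 650, 600, 640, 610]    # e.g., one pairing of faces and attributes
incongruent = [780, 820, 760, 800, 790]  # e.g., the reversed pairing

d = iat_effect(congruent, incongruent)
# A positive d means slower responses on incongruent blocks, which is
# treated as a stronger association for the congruent pairing.
```

The point of the sketch is just that the measure is a difference in latencies, not a direct readout of any mental content; the interpretive step from latency differences to "associations" is exactly what the debate below is about.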
2. The Effect Of Philosophy
As philosophers, we are in the business of arguments and their propositions, not associations. So we might wonder whether we can use arguments to intervene on our implicitly biased behavior. It turns out that we can, even if the findings are not always significant and the effect sizes are often small.

Some think that this effect of arguments on IAT performance falsifies the idea that implicitly biased behavior is realized by associations (Mandelbaum 2015). The idea is that propositions are fundamentally different from associations, so associations cannot be modified by propositions. Thus, if an argument’s propositions can change participants’ implicitly biased behavior, as measured by the IAT, then implicit biases might “not [be] predicated on [associations] but [rather] unconscious propositionally structured beliefs” (Mandelbaum 2015, bracketed text and italics added).

But there is some reason to think that such falsification relies on oversimplification. After all, many processes are involved in our behavior, implicitly biased or otherwise. So many processes need to be accounted for when trying to measure the effect of an intervention on our implicitly biased behavior: for example, participants’ concern about discrimination, their motivation to respond without prejudice (Plant & Devine, 1998), and their personal awareness of bias. What happens when we control for these variables? In many cases, we find that argument-like interventions on implicitly biased behavior are actually explained by changes in participants’ concern(s), motivation(s), and/or awareness, not by changes in associations (Devine, Forscher, Austin, & Cox 2013; Conrey, Sherman, Gawronski, Hugenberg, & Groom 2005). Continue reading Implicit Bias & Philosophy
When you step back and question your beliefs and assumptions, do you expect to change your mind? Should you? I think that reflective reasoning is supposed to change our minds. But it might not change our beliefs. Sometimes reflection reinforces our beliefs. And sometimes reflection makes our beliefs more extreme or partisan. I’ll explain below. Continue reading Is Reflective Reasoning Supposed To Change Your Mind?
Here are some podcasts about philosophy, cognitive science, and maybe science more generally. Feel free to share the list and/or recommend your own podcasts by contacting me.