Who is more likely to be killed by a police officer in the United States: a white person or a black person? You might think, “Police kill more white people than black people in the US, so it’s the white person.” That answer contains a fallacy: the base rate fallacy. This post explains the fallacy, provides some examples, and suggests how to avoid it. Continue reading The Base Rate Fallacy
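The arithmetic behind the fallacy can be sketched in a few lines. The numbers below are purely hypothetical, chosen only to show how a smaller group can have fewer total deaths yet a higher per-capita risk:

```python
# Base rate fallacy sketch with hypothetical, illustrative numbers.
# Raw counts mislead when group sizes (base rates) differ:
# the larger group can have more deaths but a lower per-person risk.

pop_a = 200_000_000   # hypothetical size of group A (larger group)
pop_b = 40_000_000    # hypothetical size of group B (smaller group)
killed_a = 500        # hypothetical deaths in group A (more in absolute terms)
killed_b = 250        # hypothetical deaths in group B (fewer in absolute terms)

rate_a = killed_a / pop_a   # per-person rate for group A
rate_b = killed_b / pop_b   # per-person rate for group B

print(f"Group A rate: {rate_a:.2e}")  # 2.50e-06
print(f"Group B rate: {rate_b:.2e}")  # 6.25e-06

# Despite having half as many deaths, group B's per-capita risk
# is 2.5 times higher, because its population is five times smaller.
print(rate_b > rate_a)  # True
```

Comparing raw counts answers a different question than comparing risks; the base rates (population sizes) have to enter the calculation.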
Philosophy takes many forms. So do its podcasts. Here are some of the most popular philosophy podcasts that I have found. I listen to almost all of them, so feel free to contact me if you have questions that are not answered in each podcast’s description below.
I love philosophy and science. I also love flowcharts because they can compress many pages of instruction into a simple chart. And three researchers from George Mason University and the University of Queensland have combined these three loves in a paper about climate change denialism. In their paper, they create a flowchart that shows how to find over a dozen fallacies in over 40 denialist claims! In this post, I’ll explain this argument-checking flowchart. First, we will identify a common denialist claim and then evaluate the argument for it. Continue reading Evaluate An Argument With Just ONE Flowchart
A public figure is accused of a sexual misdeed. You know nothing about the accused besides their name and their alleged crime. And you know nothing about the accuser except their name and their accusation. Can you believe the accuser? We often learn about such sexual harassment accusations. So it behooves us to find a principled response. The Acceptance Principle suggests that we can accept this kind of accusation. Why? I’ll explain in this post. Continue reading Sexual Harassment Accusations & The Acceptance Principle
This week I’m commenting on Nicholas Shea and Chris Frith’s “Dual-process theories and consciousness: the case for ‘Type Zero’ cognition” (2016) (open access) over at the Brains blog. My abstract is below. Head over to Brains for the full comments and subsequent discussion.
Type 1 and type 2 cognition are standard fare in psychology. Now Shea and Frith (2016) introduce type 0 cognition. This new category of cognition emerges from existing distinctions — (a) conscious vs. unconscious and (b) deliberate vs. automatic. Why do existing distinctions result in a new category? Because Shea and Frith (henceforth SF) apply each distinction to a different concept: one to representation and the other to processing. The result is a 2-by-2 taxonomy like the one below. This taxonomy classifies automatic processing over unconscious representations as type 0 cognition. And, deviating from convention, this taxonomy classifies automatic processing over conscious representation(s) as type 1 cognition.
| Representation | Automatic processing | Deliberate processing |
| --- | --- | --- |
| Conscious | Type 1 | Type 2 |
| Unconscious | Type 0 | |
According to SF, we deploy each type of cognition more or less successfully depending on our familiarity with the domain. When we’re familiar with the domain, we may not need to integrate information from other domains (via conscious representation) and/or deliberately attend to each step of our reasoning. So in a familiar domain, type 0 cognition might suffice.
SF briefly mention how this relates to the cognitive reflection test (CRT) (Frederick 2005). There is a puzzle about how to interpret CRT responses that do not fit a common dual-process interpretation of the CRT. In what follows, I will show how SF’s notion of domain-familiarity can make sense of these otherwise puzzling CRT responses.
- What Is Reflective Reasoning?
- Is Philosophical Reflection Ever Inappropriate?
- Is Reflective Reasoning Supposed To Change Your Mind?
- Why Critical Reasoning Might Not Require Self-knowledge
- Christine Korsgaard on Reflection and Reflective Endorsement
I recently reread Tyler Burge’s “Our Entitlement to Self-knowledge” (1996). Burge argues that our capacity for critical reasoning entails a capacity for self-knowledge.
Like a lot of philosophy, this paper is barely connected to the relevant science. So when I find myself disagreeing with the author’s assumptions, I’m not sure whether the disagreement matters. After all, we might disagree because we have different, unfalsifiable intuitions. But if we disagree about facts, then it matters: one of us is demonstrably wrong. In this post I will articulate my disagreement. I will also try to figure out whether it matters. Continue reading Why Critical Reasoning Might Not Require Self-knowledge
Daniel Kahneman talks extensively about how we make reasoning errors because we tend to use mental shortcuts. One mental shortcut is ‘substitution’. Substitution is what we do when we (often unconsciously) answer an easier question than the one being asked. I find that I sometimes do this in my own research. For instance, when I set out to answer the question, “How can X be rational?” I sometimes end up answering easier questions like, “How does X work?” In an effort to avoid such mistakes, I will (1) explain the question substitution error, (2) give an example of how we can distinguish between questions, (3) give a personal example of the substitution error, and (4) say what we can do about it.
In case you’re not familiar with Kahneman’s notion of ‘substitution’, here is some clarification. In short, substitution is this: responding to a difficult question by (often unintentionally) answering a different, easier question. People use this mental shortcut all the time. Here are some everyday instances:
| Difficult Question | Easier Question |
| --- | --- |
| How satisfied are you with your life? | What is my mood right now? |
| Should I believe what my parents believe? | Can I believe what my parents believe? |
| What are the merits/demerits of that woman who is running for president? | What do I remember people in my community saying about that woman? |
For further discussion of mental shortcuts and substitution, see Part 1 of Kahneman’s Thinking, Fast and Slow (2011).
Now, how does this mental shortcut apply to research? Continue reading Research Questions & Mental Shortcuts: A Warning