Philosophy As Proto-Psychology

Philosophers are often trying to understand their intuitions about thought experiments. Traditionally, philosophers do this via introspection. But these days, some philosophers do it more scientifically: they survey people’s intuitions and use quantitative methods to argue for theories about those intuitions. In this post, I want to point out that one of philosophers’ traditional methods might be a kind of proto-psychology. And if that is right, you might wonder, “Is one method better than the other?” By the end of the post, you’ll know of at least one philosopher who argues that the more scientific approach is better.

Continue reading Philosophy As Proto-Psychology

Domain-familiarity & The Cognitive Reflection Test

This week I’m commenting on Nicholas Shea and Chris Frith’s “Dual-process theories and consciousness: the case for ‘Type Zero’ cognition” (2016) (open access) over at the Brains blog. My abstract is below. Head over to Brains for the full comments and subsequent discussion.

Abstract

Type 1 and type 2 cognition are standard fare in psychology. Now Shea and Frith (2016) introduce type 0 cognition. This new category emerges from two existing distinctions: (a) conscious vs. unconscious and (b) deliberate vs. automatic. Why do existing distinctions yield a new category? Because Shea and Frith (henceforth SF) apply each distinction to a different concept: one to representation and the other to processing. The result is the 2-by-2 taxonomy below. This taxonomy classifies automatic processing over unconscious representations as type 0 cognition. And, deviating from convention, it classifies automatic processing over conscious representations as type 1 cognition.

                            PROCESSING
                       Automatic    Deliberate
REPRESENTATION
    Unconscious        Type 0       ?
    Conscious          Type 1       Type 2
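
To make the taxonomy concrete, here is a minimal sketch in Python. It is my illustration, not anything from SF: it simply treats the type of cognition as a lookup on the (representation, processing) pair, and all of the names are my own.

```python
# A minimal sketch (not from Shea & Frith) of the 2-by-2 taxonomy:
# the type of cognition is fixed jointly by the kind of representation
# and the kind of processing.

TAXONOMY = {
    ("unconscious", "automatic"): "type 0",
    ("unconscious", "deliberate"): None,  # the open "?" cell in the table
    ("conscious", "automatic"): "type 1",
    ("conscious", "deliberate"): "type 2",
}

def cognition_type(representation: str, processing: str):
    """Classify a cognitive episode by its representation and processing."""
    return TAXONOMY[(representation, processing)]

# The unconventional cell: automatic processing over conscious
# representations counts as type 1, not type 0.
print(cognition_type("conscious", "automatic"))  # -> type 1
```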

According to SF, we deploy each type of cognition more or less successfully depending on our familiarity with the domain. When we’re familiar with the domain, we may not need to integrate information from other domains (via conscious representation) and/or deliberately attend to each step of our reasoning. So in a familiar domain, type 0 cognition might suffice.

SF briefly mention how this relates to the cognitive reflection test (CRT) (Frederick 2005). There is a puzzle about how to interpret CRT responses that do not fit a common dual-process interpretation of the CRT. In what follows, I will show how SF’s notion of domain-familiarity can make sense of these otherwise puzzling CRT responses.
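
For concreteness, take the best-known CRT item (Frederick 2005): a bat and a ball cost $1.10 together, and the bat costs $1.00 more than the ball. The correct answer is 5 cents; the intuitive lure is 10 cents. Here is a hypothetical scorer, my sketch rather than anything from Frederick or SF, on the assumption that one way a response can fail to fit the common dual-process reading is to be neither the lure nor the correct answer.

```python
# Hypothetical scoring for the bat-and-ball CRT item (Frederick 2005).
# On the common dual-process reading, the lure is a type 1 response and
# the correct answer a type 2 response; anything else fits neither mold.

CORRECT = 0.05  # dollars: the ball costs 5 cents
LURE = 0.10     # dollars: the intuitive-but-wrong answer

def classify_response(answer: float) -> str:
    # Exact comparison suffices for this illustration; real scoring
    # would tolerate rounding and formatting differences.
    if answer == CORRECT:
        return "correct (commonly read as type 2)"
    if answer == LURE:
        return "lured (commonly read as type 1)"
    return "neither -- the puzzling kind of response"

for answer in (0.05, 0.10, 1.00):
    print(answer, "->", classify_response(answer))
```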

Image: “Wiffel ball” from Andrew Malone, as modified by Nick Byrd. CC BY 2.0.

Why Critical Reasoning Might Not Require Self-knowledge

I recently reread Tyler Burge’s “Our Entitlement to Self-knowledge” (1996). Burge argues that our capacity for critical reasoning entails a capacity for self-knowledge.

Like a lot of philosophy, this paper is barely connected to the relevant science. So when I find myself disagreeing with the author’s assumptions, I’m not sure whether the disagreement matters. After all, we might disagree because we have different, unfalsifiable intuitions. But if we disagree about facts, then the disagreement matters: one of us is demonstrably wrong. In this post I will articulate my disagreement and try to figure out whether it matters. Continue reading Why Critical Reasoning Might Not Require Self-knowledge

Are Atheists More Reflective Than Theists?

On Saturday, I was on the Veracity Hill Podcast talking about the evidence that atheists and agnostics reason more reflectively (i.e., make fewer errors) than theists.

The Discussion

  1. What do we mean by ‘reflective’? And how do we measure reflection? Who counts as a theist? And how do we measure religiosity?
  2. What do these findings about atheists and theists tell us about atheism and theism (if anything)? And how might further research answer hitherto unanswered questions about how atheists and theists reason?
  3. What are some related findings? For instance, what does this have to do with other philosophical beliefs?

The Podcast

Continue reading Are Atheists More Reflective Than Theists?

Experimental Philosophy 2.0: The Neuroscience of Philosophy

If our judgments depend on the brain, then maybe we can understand our judgments by studying our brains. Further, maybe we can understand our philosophical judgments by studying our brains. What do you think? Can neuroscience help us understand philosophy? Here are some studies that suggest it can.

1.  Two Opposing Neural Networks/Judgments

Consider two different networks in the brain: the Default Mode Network (DMN) and the Task Positive Network (TPN). These networks are mutually inhibitory. When one network’s activity increases, the other network’s activity decreases. It’s a bit like a seesaw (Jack et al 2013).
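
To see what the seesaw amounts to, here is a toy simulation. This is my illustration, not a model from Jack et al. (2013): each network’s activity relaxes toward its external drive, minus a penalty proportional to the other network’s activity, so driving one network up pushes the other down.

```python
# Toy mutual-inhibition dynamics for two networks (illustrative only).
# Each activity level relaxes toward its drive, minus inhibition from
# the other network; activities are clamped at zero.

def simulate(drive_dmn: float, drive_tpn: float,
             inhibition: float = 0.8, dt: float = 0.1, steps: int = 200):
    dmn, tpn = 0.5, 0.5  # arbitrary starting activity levels
    for _ in range(steps):
        d_dmn = drive_dmn - dmn - inhibition * tpn
        d_tpn = drive_tpn - tpn - inhibition * dmn
        dmn = max(0.0, dmn + dt * d_dmn)
        tpn = max(0.0, tpn + dt * d_tpn)
    return dmn, tpn

# Drive the TPN harder (as in an analytic task): the DMN is pushed down.
print(simulate(drive_dmn=0.2, drive_tpn=1.0))  # ~ (0.0, 1.0)
```

Reverse the drives and the activities swap: one side of the seesaw goes up only as the other comes down.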

Continue reading Experimental Philosophy 2.0: The Neuroscience of Philosophy

The Bias Fallacy

“They’re biased, so they’re wrong!” That’s a fallacy. We can call it the bias fallacy. Here’s why it’s a fallacy: being biased doesn’t entail being wrong. So when someone jumps from the observation that So-and-so is biased to the conclusion that So-and-so is wrong, they commit the bias fallacy. It’s that simple.
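
If you want the failure of entailment spelled out, here is a minimal sketch that treats ‘biased’ and ‘wrong’ as independent propositions and enumerates the combinations; the biased-but-not-wrong case is the counterexample that blocks the inference.

```python
# The bias fallacy, spelled out: "biased" does not entail "wrong"
# because there is a consistent case that is biased but not wrong.

from itertools import product

for biased, wrong in product((True, False), repeat=2):
    if biased and not wrong:
        print(f"Counterexample: biased={biased}, wrong={wrong}")
        print("So 'they're biased' does not entail 'they're wrong'.")
```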

In this post, I’ll give some examples of the fallacy, explain the fallacy, and then suggest how we should respond to the bias fallacy.

1. Examples of The Bias Fallacy

You’ve probably seen instances of the bias fallacy all over the internet.

In my experience, the fallacy is a rhetorical device. The purpose of the bias fallacy is to dismiss some person or their claims.

Like many rhetorical devices, this one is logically fallacious. So it should be ineffective: we should not be persuaded by it.

So if you’ve seen the bias fallacy online, then go ahead and set the record straight:

'They're biased, so they're wrong.' Not so fast! We can be biased without being wrong. #TheBiasFallacy

Continue reading The Bias Fallacy

Research Questions & Mental Shortcuts: A Warning

Daniel Kahneman talks extensively about how we make reasoning errors because we tend to use mental shortcuts. One mental shortcut is ‘substitution’: (often unconsciously) answering an easier question than the one being asked. I find that I sometimes do this in my own research. For instance, when I set out to answer the question, “How can X be rational?”, I sometimes end up answering an easier question like, “How does X work?” In an effort to avoid such mistakes, I will (1) explain the question-substitution error, (2) give an example of how we can distinguish between questions, (3) give a personal example of the substitution error, and (4) say what we can do about it.

1.  Substitution

In case you’re not familiar with Kahneman’s notion of ‘substitution’, here is some clarification. In short, substitution is responding to a difficult question by (often unintentionally) answering a different, easier question. People use this mental shortcut all the time. Here are some everyday instances:

Difficult Question                                Easier Question
How satisfied are you with your life?             What is my mood right now?
Should I believe what my parents believe?         Can I believe what my parents believe?
What are the merits/demerits of that woman        What do I remember people in my
who is running for president?                     community saying about that woman?

For further discussion of mental shortcuts and substitution, see Part 1 of Kahneman’s Thinking, Fast and Slow (2012).

Now, how does this mental shortcut apply to research?  Continue reading Research Questions & Mental Shortcuts: A Warning