Domain-familiarity & The Cognitive Reflection Test


This week I’m commenting on Nicholas Shea and Chris Frith’s “Dual-process theories and consciousness: the case for ‘Type Zero’ cognition” (2016) (open access) over at the Brains blog. My abstract is below. Head over to Brains for the full comments and subsequent discussion.

Abstract

Type 1 and type 2 cognition are standard fare in psychology. Now Shea and Frith (2016) introduce type 0 cognition. This new category emerges from two existing distinctions: (a) conscious vs. unconscious and (b) deliberate vs. automatic. Why do existing distinctions yield a new category? Because Shea and Frith (henceforth SF) apply each distinction to a different concept: one to representation and the other to processing. The result is a 2-by-2 taxonomy like the one below. This taxonomy classifies automatic processing over unconscious representations as type 0 cognition. And, deviating from convention, it classifies automatic processing over conscious representations as type 1 cognition.

                             PROCESSING
                             Automatic    Deliberate
REPRESENTATION  Unconscious  Type 0       ?
                Conscious    Type 1       Type 2
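As an illustrative sketch (my own encoding, not from SF's paper), the taxonomy amounts to a simple lookup from a (representation, processing) pair to a cognition type:

```python
# A minimal sketch of SF's 2-by-2 taxonomy as a lookup table.
# The pairing of labels with types comes from the table above;
# the encoding itself is my own, not from the paper.
TAXONOMY = {
    ("unconscious", "automatic"):  "Type 0",
    ("unconscious", "deliberate"): None,      # the open cell in the taxonomy
    ("conscious",   "automatic"):  "Type 1",  # the deviation from convention
    ("conscious",   "deliberate"): "Type 2",
}

def cognition_type(representation, processing):
    """Classify cognition by its representation and processing labels."""
    return TAXONOMY[(representation, processing)]
```

For example, `cognition_type("conscious", "automatic")` returns `"Type 1"`, reflecting SF's unconventional classification of automatic processing over conscious representations.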

According to SF, we deploy each type of cognition more or less successfully depending on our familiarity with the domain. When we’re familiar with the domain, we may not need to integrate information from other domains (via conscious representation) and/or deliberately attend to each step of our reasoning. So in a familiar domain, type 0 cognition might suffice.

SF briefly mention how this relates to the cognitive reflection test (CRT) (Frederick 2005). There is a puzzle about how to interpret CRT responses that do not fit a common dual-process interpretation of the CRT. In what follows, I will show how SF’s notion of domain-familiarity can make sense of these otherwise puzzling CRT responses.

Image: “Wiffel ball” by Andrew Malone, as modified by Nick Byrd, CC BY 2.0

 

Why Critical Reasoning Might Not Require Self-knowledge


I recently reread Tyler Burge’s “Our Entitlement to Self-knowledge” (1996). Burge argues that our capacity for critical reasoning entails a capacity for self-knowledge.

Like a lot of philosophy, this paper is barely connected to the relevant science. So when I find myself disagreeing with the author’s assumptions, I’m not sure whether the disagreement matters. After all, we might disagree because we have different, unfalsifiable intuitions. But if we disagree about facts, then it matters: one of us is demonstrably wrong. In this post I will articulate my disagreement. I will also try to figure out whether it matters. Continue reading Why Critical Reasoning Might Not Require Self-knowledge

Free, online conference on the philosophy and science of mind!


The Minds Online conference starts today, spans three week-long sessions, and ends on September 29th. So mark your calendars and set aside some time to read and comment.

You will find that each Minds Online session has a keynote and a few contributed papers — each contributed paper with its own invited commenters. Papers are posted for advance reading the Saturday before their session. And public commenting for each session runs from Monday (8am, EST) to Friday.

To be notified when papers go up, subscribe by email (in the menu) or to the Minds Online post RSS feed. You can also subscribe to the Minds Online comment RSS feed to stay apprised of comments.

Conference hashtag: #MindsOnline2017. The full program is below: Continue reading Free, online conference on the philosophy and science of mind!

Christine Korsgaard on Reflection and Reflective Endorsement


Christine Korsgaard’s Sources of Normativity is one of the most impressive pieces of philosophy I’ve ever read. There are many, many reasons to read the book. Right now I am reading it because I want to understand Korsgaard’s view of reflective reasoning. She thinks that reflective reasoning is important for all of morality — #NBD. And her notion of ‘reflective’ is very similar to cognitive scientists’, but not the same. In this post, I explain Korsgaard’s view and how it differs from cognitive scientists’. Continue reading Christine Korsgaard on Reflection and Reflective Endorsement

The Meaning Problem & Academic Lexicons


Sometimes I spend days trying to figure out what someone means when they use an otherwise common word. I spend even more time trying to figure out the difference between two authors’ uses of the same word. It’s a problem. We can call this the meaning problem. In this post I talk about the meaning problem and some solutions. I think the best solutions would be open-source academic lexicons — i.e., lexicons for every academic field edited by academics from the corresponding field. But that’s a big ask, so I will also mention a couple of other (partial) solutions. Continue reading The Meaning Problem & Academic Lexicons

What Christopher Peacocke means by ‘Reflective Self-consciousness’


Christopher Peacocke’s The Mirror of the World (2014) is largely about self-consciousness. In the book, Peacocke distinguishes “reflective” self-consciousness from other kinds of self-consciousness. In this post, I will try to understand what Peacocke means by ‘reflective’. Spoiler: it is not what I and many other philosophers mean by ‘reflective’. Continue reading What Christopher Peacocke means by ‘Reflective Self-consciousness’

What Is Reflective Reasoning?


Last week I was talking about intuition. I think of intuition as — among other things — unconscious and automatic reasoning. The opposite of that would be conscious and deliberative reasoning. We might call that reflective reasoning.† In this post, I want to talk about reflective reasoning. How does it work? And why does it work? And — spoiler alert — why does it sometimes not work? Continue reading What Is Reflective Reasoning?

Experimental Philosophy 2.0: The Neuroscience of Philosophy


If our judgments are dependent on the brain, then maybe we can understand our judgments by studying our brains. Further, maybe we can understand our philosophical judgments by studying our brains. What do you think? Can neuroscience help us understand philosophy? Here are some studies which suggest that it can.

1.  Two Opposing Neural Networks/Judgments

Consider two different networks in the brain: the Default Mode Network (DMN) and the Task Positive Network (TPN). These networks are mutually inhibitory. When one network’s activity increases, the other network’s activity decreases. It’s a bit like a seesaw (Jack et al. 2013).

Continue reading Experimental Philosophy 2.0: The Neuroscience of Philosophy

The Bias Fallacy


“They’re biased, so they’re wrong!” That’s a fallacy. We can call it the bias fallacy. Here’s why it’s a fallacy: being biased doesn’t entail being wrong. So when someone jumps from the observation that So-and-so is biased to the conclusion that So-and-so is wrong, they commit the bias fallacy. It’s that simple.

In this post, I’ll give some examples of the fallacy, explain the fallacy, and then suggest how we should respond to the bias fallacy.

1. Examples of The Bias Fallacy

You’ve probably seen instances of the bias fallacy all over the internet. In my experience, the fallacy is a rhetorical device. The purpose of the bias fallacy is to dismiss some person or their claims.

Like many rhetorical devices, this one is logically fallacious. So it’s ineffective. At least, it should be ineffective. That is, we should not be persuaded by it.

So if you’ve seen the bias fallacy online, then go ahead and set the record straight:

“They’re biased, so they’re wrong.” Not so fast! We can be biased without being wrong. #TheBiasFallacy Continue reading The Bias Fallacy

Research Questions & Mental Shortcuts: A Warning


Daniel Kahneman talks extensively about how we make reasoning errors because we tend to use mental shortcuts. One mental shortcut is ‘substitution’. Substitution is what we do when we (often unconsciously) answer an easier question than the one being asked. I find that I sometimes do this in my own research. For instance, when I set out to answer the question, “How can X be rational?” I sometimes end up answering easier questions like, “How does X work?”. In an effort to avoid such mistakes, I will (1) explain the question substitution error, (2) give an example of how we can distinguish between questions, (3) give a personal example of the substitution error, and (4) say what we can do about it.

1.  Substitution

In case you’re not familiar with Kahneman’s notion of ‘substitution’, here is some clarification. In short, substitution is this: responding to a difficult question by (often unintentionally) answering a different, easier question. People use this mental shortcut all the time. Here are some everyday instances:

Difficult Question                                                        Easier Question
How satisfied are you with your life?                                     What is my mood right now?
Should I believe what my parents believe?                                 Can I believe what my parents believe?
What are the merits/demerits of that woman who is running for president?  What do I remember people in my community saying about that woman?

For further discussion of mental shortcuts and substitution, see Part 1 of Kahneman’s Thinking, Fast and Slow (2012).

Now, how does this mental shortcut apply to research?  Continue reading Research Questions & Mental Shortcuts: A Warning