What if traveling abroad were somehow bad for you? Well, a series of studies seem to find that “[traveling abroad] can lead to [lying and cheating] by increasing moral relativism” (Lu et al 2017, 1, 3). This finding has just the right combination of intuitive plausibility and surprise for us to want to share it uncritically. So, instead, let’s take a look at the methods, measures, and philosophical nuances of the topic. As usual, a bit of reflection makes the finding less exciting and reveals a need for follow-up research.
This week I’m commenting on Nicholas Shea and Chris Frith’s “Dual-process theories and consciousness: the case for ‘Type Zero’ cognition” (2016) (open access) over at the Brains blog. My abstract is below. Head over to Brains for the full comments and subsequent discussion.
Type 1 and type 2 cognition are standard fare in psychology. Now Shea and Frith (2016) introduce type 0 cognition. This new category of cognition emerges from existing distinctions — (a) conscious vs. unconscious and (b) deliberate vs. automatic. Why do existing distinctions result in a new category? Because Shea and Frith (henceforth SF) apply each distinction to a different concept: one to representation and the other to processing. The result is a 2-by-2 taxonomy like the one below. This taxonomy classifies automatic processing over unconscious representations as type 0 cognition. And, deviating from convention, it classifies automatic processing over conscious representations as type 1 cognition.
| | Automatic processing | Deliberate processing |
| --- | --- | --- |
| Conscious representation | Type 1 | Type 2 |
| Unconscious representation | Type 0 | |
According to SF, we deploy each type of cognition more or less successfully depending on our familiarity with the domain. When we’re familiar with the domain, we may not need to integrate information from other domains (via conscious representation) and/or deliberately attend to each step of our reasoning. So in a familiar domain, type 0 cognition might suffice.
SF briefly mention how this relates to the cognitive reflection test (CRT) (Frederick 2005). There is a puzzle about how to interpret CRT responses that do not fit a common dual-process interpretation of the CRT. In what follows, I will show how SF’s notion of domain-familiarity can make sense of these otherwise puzzling CRT responses.
I recently reread Tyler Burge’s “Our Entitlement to Self-knowledge” (1996). Burge argues that our capacity for critical reasoning entails a capacity for self-knowledge.
Like a lot of philosophy, this paper is barely connected to the relevant science. So when I find myself disagreeing with the author’s assumptions, I’m not sure whether the disagreement matters. After all, we might disagree because we have different, unfalsifiable intuitions. But if we disagree about facts, then it matters: one of us is demonstrably wrong. In this post I will articulate my disagreement. I will also try to figure out whether it matters.
On Saturday, I was on the Veracity Hill Podcast talking about the evidence that atheists and agnostics reason more reflectively (i.e., make fewer errors) than theists.
- What do we mean by ‘reflective’? And how do we measure reflection? Who counts as a theist? And how do we measure religiosity?
- What do these findings about atheists and theists tell us about atheism and theism (if anything)? And how might further research answer hitherto unanswered questions about how atheists and theists reason?
- What are some related findings? For instance, what does this have to do with other philosophical beliefs?
“They’re biased, so they’re wrong!” That’s a fallacy. We can call it the bias fallacy. Here’s why it’s a fallacy: being biased doesn’t entail being wrong. So when someone jumps from the observation that So-and-so is biased to the conclusion that So-and-so is wrong, they commit the bias fallacy. It’s that simple.
In this post, I’ll give some examples of the fallacy, explain the fallacy, and then suggest how we should respond to the bias fallacy.
1. Examples of The Bias Fallacy
You’ve probably seen instances of the bias fallacy all over the internet.
Everybody thinks they're the shit… Your opinion is biased, therefore it is false.
— Bowtie Boss (@THINK_lika_BOSS) March 28, 2012
In my experience, the fallacy is a rhetorical device. The purpose of the bias fallacy is to dismiss some person or their claims.
Like many rhetorical devices, this one is logically fallacious. So it should be ineffective: we should not be persuaded by it.
So if you’ve seen the bias fallacy online, then go ahead and set the record straight: “They’re biased, so they’re wrong”? Not so fast! We can be biased without being wrong.
As I look back on 2016, I also look back on the posts that received the most attention. Here are the top 5:
Top 5 Posts of 2016
- 30+ Online Resources For Studying & Teaching Philosophy | December 18, 2016
- 30+ Podcasts About Cognitive Science & Philosophy | December 21, 2016
- Voting Third Party: A Wasted Vote? | July 24, 2016
- Addiction vs. Habit: An Infographic | October 24, 2016
- 50+ Blogs About Cognitive Science and/of Philosophy | December 11, 2016
In the next post, I’ll talk about my plans for 2017.
Apparently, when I impersonate conservatives, I do it with a southern US accent (e.g., “‘Murica!”, “Don’t mess with Texas!”, etc.). I don’t intentionally adopt the accent. In fact, I never even knew I was doing it until my partner pointed it out to me! Without my partner’s third-person perspective, I might never have noticed. I might have just continued mocking people with southern accents. In fact, that wouldn’t be surprising given what we learned in this series [Part 1 – Part 5]. So if we want to do something about our biases, then we would do well to seek this kind of third-personal feedback. Let’s call it bias feedback.
The bias feedback I received from my partner can be characterized as bottom-up and informal. Bottom-up because it came from a peer rather than from a position of authority. And informal because it happened freely in ordinary conversation rather than as part of some kind of compulsory process. Many people are uncomfortable with informal, bottom-up feedback. So if informal, bottom-up feedback is to be accepted in some contexts, then it might have to be integrated into that context’s culture. There might be a few ways to do this.