One of my favorite researchers is Chandra Sripada. Sripada is a professor of both philosophy and psychiatry. My research also crosses the humanities-science divide(s). So, I often wonder how to replicate a multi-disciplinary career like Sripada’s. A look at Sripada’s CV reveals a career path involving multiple advanced degrees, internships/residencies, etc. If you are like me, then you (or your partner) might want a more efficient path to a career. In this post, I share advice about how to obtain multi-disciplinary training from philosophy graduate programs. Continue reading Multi-disciplinary Philosophy PhD Programs
One of the things that cognitive scientists do is look for, identify, and describe mechanisms. For example, cognitive scientists are interested in our ability (or proclivity) to ascribe mental states to other things and creatures. So, some posit a “theory of mind” mechanism. But, intuitively, there will not be a mechanism for every one of our abilities or behaviors. For example, it would be surprising if there were a mechanism for driving a car. But if that is right, then we need principled reasons to think so. Or, at the very least, we need a story about why some of our abilities have mechanisms and others don’t. In this post, I’ll briefly consider four such stories. One of the take-aways will be that it is not obvious why some abilities (like driving a car) do not have mechanisms. Another take-away will be that it is not obvious what scientists mean by ‘mechanism’. Continue reading On Inferring Mechanisms In Cognitive Science
Philosophers are often trying to understand their intuitions about thought experiments. Traditionally, philosophers do this via introspection. But these days, some philosophers do it more scientifically: they survey people’s intuitions and make quantitative arguments for theories about those intuitions. In this post, I want to point out that one of philosophers’ traditional methods might be a kind of proto-psychology. And if that is right, you might wonder, “Is one method better than the other?” By the end of the post, you’ll know of at least one philosopher who argues that the more scientific approach is better. Continue reading Philosophy As Proto-Psychology
This week I’m commenting on Nicholas Shea and Chris Frith’s “Dual-process theories and consciousness: the case for ‘Type Zero’ cognition” (2016) (open access) over at the Brains blog. My abstract is below. Head over to Brains for the full comments and subsequent discussion.
Type 1 and type 2 cognition are standard fare in psychology. Now Shea and Frith (2016) introduce type 0 cognition. This new category of cognition emerges from two existing distinctions — (a) conscious vs. unconscious and (b) deliberate vs. automatic. Why do existing distinctions result in a new category? Because Shea and Frith (henceforth SF) apply each distinction to a different concept: one to representation and the other to processing. The result is a 2-by-2 taxonomy like the one below. This taxonomy classifies automatic processing over unconscious representations as type 0 cognition. And, deviating from convention, this taxonomy classifies automatic processing over conscious representations as type 1 cognition.
| Representation | Automatic processing | Deliberate processing |
|---|---|---|
| Conscious | Type 1 | Type 2 |
| Unconscious | Type 0 | |
According to SF, we deploy each type of cognition more or less successfully depending on our familiarity with the domain. When we’re familiar with the domain, we may not need to integrate information from other domains (via conscious representation) and/or deliberately attend to each step of our reasoning. So in a familiar domain, type 0 cognition might suffice.
SF briefly mention how this relates to the cognitive reflection test (CRT) (Frederick 2005). There is a puzzle about how to interpret CRT responses that do not fit a common dual-process interpretation of the CRT. In what follows, I will show how SF’s notion of domain-familiarity can make sense of these otherwise puzzling CRT responses.
- What Is Reflective Reasoning?
- Is Philosophical Reflection Ever Inappropriate?
- Is Reflective Reasoning Supposed To Change Your Mind?
- Why Critical Reasoning Might Not Require Self-knowledge
- Christine Korsgaard on Reflection and Reflective Endorsement
I recently reread Tyler Burge’s “Our Entitlement to Self-knowledge” (1996). Burge argues that our capacity for critical reasoning entails a capacity for self-knowledge.
Like a lot of philosophy, this paper is barely connected to the relevant science. So when I find myself disagreeing with the author’s assumptions, I’m not sure whether the disagreement matters. After all, we might disagree because we have different, unfalsifiable intuitions. But if we disagree about facts, then it matters: one of us is demonstrably wrong. In this post I will articulate my disagreement. I will also try to figure out whether it matters. Continue reading Why Critical Reasoning Might Not Require Self-knowledge
On Saturday, I was on the Veracity Hill Podcast talking about the evidence that atheists and agnostics reason more reflectively (i.e., make fewer errors) than theists.
- What do we mean by ‘reflective’? And how do we measure reflection? Who counts as a theist? And how do we measure religiosity?
- What do these findings about atheists and theists tell us about atheism and theism (if anything)? And how might further research answer hitherto unanswered questions about how atheists and theists reason?
- What are some related findings? For instance, what does this have to do with other philosophical beliefs?
If our judgments are dependent on the brain, then maybe we can understand our judgments by studying our brains. Further, maybe we can understand our philosophical judgments by studying our brains. What do you think? Can neuroscience help us understand philosophy? Here are some studies which suggest that it can.
1. Two Opposing Neural Networks/Judgments
Consider two different networks in the brain: the Default Mode Network (DMN) and the Task Positive Network (TPN). These networks are mutually inhibitory: when one network’s activity increases, the other network’s activity decreases. It’s a bit like a seesaw (Jack et al. 2013).