If our judgments depend on the brain, then maybe we can understand our judgments by studying our brains. Further, maybe we can understand our philosophical judgments by studying our brains. What do you think? Can neuroscience help us understand philosophy? Here are some studies suggesting that it can.
1. Two Opposing Neural Networks/Judgments
Consider two different networks in the brain: the Default Mode Network (DMN) and the Task Positive Network (TPN). These networks are mutually inhibitory. When one network’s activity increases, the other network’s activity decreases. It’s a bit like a seesaw (Jack et al 2013).
And each network seems to be activated by certain types of reasoning. The DMN is activated during social reasoning, like when we think about our own and/or others’ minds. But the TPN is activated by mechanical, causal, logical, or mathematical reasoning, like when we watch videos about physics (Jack et al 2013).
Further, these two networks are associated with two different intuitions.
Different Networks, Different Intuitions
In one study, people were given either a social reasoning task or a mechanical reasoning task, activating either their DMN or their TPN. Then they were asked questions about the moral status and mindedness of animals. People who did the social task were more likely than people who did the mechanical task to think that animals have minds and moral status. So people's intuitions differed depending on what they were thinking about just before having the intuition. And that difference was explained by the different levels of activation in the two brain networks.
This illustrates how our philosophical judgments can change as a result of seemingly unrelated thoughts and their impact on the brain.
2. Brain Damage & Moral Judgment
The concept of intention is pivotal in moral judgment (Cushman et al 2013). Unsurprising, right? We are more likely to blame someone for an action when they did it intentionally. And we are less likely to blame someone if their action was unintentional. So what does neuroscience tell us about this?
Right Temporal Parietal Junction
Young and colleagues found that some areas of the brain are particularly sensitive to our attributions of intention. One area is the right temporal parietal junction (RTPJ) (Young et al 2007). This seems to be the area of the brain involved in helping us distinguish intentional from unintentional action. So what happens to our moral judgments when we alter the RTPJ? Fortunately, Young and colleagues have already started to answer this question (Koster-Hale et al 2013).
First, researchers recruited two groups of people. One group had damage to their RTPJ. The other group did not. Then the researchers presented both groups with a battery of moral scenarios involving intentional and unintentional harm. As you might have guessed, most people judged intentional harm more harshly. But the people with damage to their RTPJ were less likely to make this distinction. And this difference was related to the difference in activity in their RTPJ.
Again, this seems to indicate that changes in our brains can cause changes in our philosophical judgments.
Ventromedial Prefrontal Cortex
Young and colleagues also found that brain lesions can have an effect on our reaction to moral dilemmas (Koenigs et al 2007). More specifically, people with damage to certain areas of the prefrontal cortex — like the ventromedial prefrontal cortex (VMPFC) — “produce[d] an abnormally ‘utilitarian’ pattern of judgments on moral dilemmas”.
Once again, these results suggest that variations in our brain can cause variations in our philosophical judgments.
It seems that our philosophical judgments depend on features of our brains — and perhaps vice versa. And if that is right, then neuroscience is well suited to understand our philosophical judgments. However, if we can understand our widely shared philosophical judgments in terms of neural properties, then do we need to think that these judgments are explained by their being true? Maybe our common philosophical judgments are just natural consequences of our biology—a process that cares more about reproductive success than truth per se.
If you’re new to the blog and you’re interested in more of this, then consider subscribing to the blog or following me on social media. In the meantime, here are a few other posts that you might like:
- “The Bias Fallacy”
- “The Appeal To Intuition: A Fallacy?”
- “Exercise, Neuroscience, and the Network Theory of Well-being”
- “3 Obstacles For Research About Cheating & Morality”
And if you want to learn more about the neuroscience of philosophy, check out Moral Brains: The Neuroscience of Morality by S. Matthew Liao.
Featured image: A super-photoshopped set of structural and functional MRI images.
4 thoughts on “Experimental Philosophy 2.0: The Neuroscience of Philosophy”
Yet another recent study linking brain damage to moral decision making:
Thanks for the tip Gerard! I’ll be sure to take a look ASAP.
Nick, why do you think that being the cause of some change in the process may on a par be meant to be the source of it?
Hi Jave. I’m not sure that I understand your question. Can you rephrase it?