Kouider et al. have recently reported that infants’ cortical activity when viewing faces is isomorphic to that of adults who consciously perceive faces. They conclude that conscious perception develops between 5 and 15 months of age. After reading their paper, I want to consider a different conclusion. Perhaps Kouider et al. didn’t find a marker of conscious perception. Maybe they found a marker of unconscious perception.
Derek Leben’s “When Psychology Undermines [Moral and Religious] Beliefs”
Abstract
This paper attempts to specify the conditions under which a psychological explanation can undermine or debunk a set of beliefs. The focus will be on moral and religious beliefs, where a growing debate has emerged about the epistemic implications of cognitive science. Recent proposals by Joshua Greene and Paul Bloom will be taken as paradigmatic attempts to undermine beliefs with psychology. I will argue that a belief p may be undermined whenever: (i) p is evidentially based on an intuition which (ii) can be explained by a psychological mechanism that is (iii) unreliable for the task of believing p; and (iv) any other evidence for belief p is based on rationalization. I will also consider and defend two equally valid arguments for establishing unreliability: the redundancy argument and the argument from irrelevant factors. With this more specific understanding of debunking arguments, it is possible to develop new replies to some objections to psychological debunking arguments from both ethics and philosophy of religion.
Continue reading Derek Leben’s “When Psychology Undermines [Moral and Religious] Beliefs”
Intermountain Philosophy Conference: Abstract
(Image credit: “Legacy Bridge, University of Utah” by Daderot via Wikipedia [public domain])
Tomorrow I will be at the University of Utah presenting a paper entitled “Neurobiological Correlates of Philosophical Belief & Judgment: What This Means for Philosophy” at the Intermountain Philosophy Conference. An abstract is below. The conference website is here.
It is becoming increasingly common to find journals publishing articles that demonstrate psychological correlates (e.g. Adelstein, Deyong, Arvan) and biological correlates (e.g. Harris, Hsu, Stern) of various self-reported beliefs and judgments. It is perhaps most common to find articles reporting the correlates of political beliefs and judgments (e.g. Amodio, Arvan, Hatemi, Kanai, Tost). This paper sets out to show that philosophical beliefs are also worth experimental attention. But that is not all: I hypothesize that variations in people’s biology (perhaps their neurobiology in particular) could correlate with variations in their proclivity towards or aversion to particular philosophical beliefs and judgments. In the first section of the paper, I lay out what we might expect to learn about our philosophical beliefs from our neurobiology. Before I conclude that philosophical beliefs (or philosophical cognition) are worthy of experimental attention, I mention some philosophical and methodological concerns and some objections to the suggested research. I am careful to note along the way that while many of the conclusions reached by this research could be illuminating, none of it should be devastating to philosophy. That is not to say that the research wouldn’t inspire some methodological reform (e.g., whether and how philosophers appeal to intuition or exploit certain language), but it would by no means “end” philosophy.
The Hard Problem of Consciousness: A Cognition Problem?
A couple of months ago, I was at a conference where Anthony Jack proposed a very interesting theory: maybe we have two neural systems (the Task Positive Network [TPN] and the Default Mode Network [DMN]) that produce conflicting intuitions about some age-old philosophical puzzles. These conflicting intuitions lead us to get stuck when thinking about those puzzles, e.g., the hard problem of consciousness, the explanatory gap, and qualitative consciousness (Jack et al. 2013).
I was struck by Jack’s presentation for two reasons: (1) I was presenting a poster with a similar motivation at the same conference and (2) I have long been interested in a biological examination of (academic) philosophers.
Continue reading The Hard Problem of Consciousness: A Cognition Problem?
Philosophers’ Brains
This link leads to a poster about philosophers’ brains that I presented at the Towards a Science of Consciousness Conference in Tucson (I also gave a talk based on this poster at the University of Utah). Use the link to see a full-size PDF that will let you zoom ad nauseam without any blurriness. Vector graphics are so cool!
Summary
We should not be surprised if some of the differences between philosophers’ views correlate with differences between philosophers’ brains. I list a handful of neurobiological differences that already correlate with philosophical differences among non-philosophers. It’s not obvious what we should glean from the possibility that philosophers’ brains could differ as a function of their views. After all, it might be that studying certain views changes our brains; that would not be particularly surprising or concerning. But if it were the other way around (e.g., if structural or functional differences in brains predisposed us towards some views and away from others), then that might be concerning. What if academic philosophy is just an exercise in post hoc rationalization of the views that philosophers’ brains are predisposed toward?

Of course, it’s entirely possible that causation works in both directions. But even that could be concerning, because it is compatible with self-reinforcing feedback loops. For instance, perhaps we are neurally predisposed to certain views, so we study those views, which further predisposes us toward them (and away from their alternatives). These questions are getting ahead of the evidence, though. Hopefully, the neuroscience of philosophy will provide some answers. Until then, check out the poster to see what questions the research has already answered.
Higher-order Thought v. Higher-order Cortex
During a morning session of the SPP, Benjamin Kozuch made the following argument involving higher-order thought (HOT):
- If higher-order (HO) theories of consciousness are true, then prefrontal (PF) lesions should produce manifest deficits in consciousness (as defined by HOT).
- PF lesions do not produce manifest deficits in consciousness.
- Therefore, many HO theories are not true.
Liad Mudrik, in her comments, adeptly pointed out that while the PFC is commonly taken by a number of people to be a center (location, module, etc.) of HO states, this might be a mistake. She explains that even if the PFC is associated with higher-order mental capacities (i.e., what makes humans more cognitively advanced than, say, mammals without a PFC), it does not follow that the PFC is the location of HO thoughts or states. HO thoughts and states could very well be the product of dynamic relationships between various cortices.
Continue reading Higher-order Thought v. Higher-order Cortex