(Photo credit: “Crack [Cocaine]” by Agência Brasil, licensed under CC BY 3.0)
This week I will be at the 2013 Consciousness and Experiential Psychology conference and the 4th Annual Experimental Philosophy Workshop in Bristol, England. I look forward to (1) feedback and (2) afternoon tea. Below is a précis of a paper I will present:
John Bargh and colleagues have recently outlined “Selfish Goal Theory” (see Huang and Bargh, forthcoming). They claim that (1) mental representations called “goals,” which are (2) selfish, (3) autonomous, and sometimes (4) consciously inaccessible, adequately explain a variety of otherwise puzzling behaviors (e.g., addiction and self-destructive behavior). The details of (1) through (4) are below.
Continue reading Do We Need Bargh’s Selfish Goals?
Kouider et al. have recently reported that infants’ cortical activity (when viewing faces) is isomorphic to that of adults who consciously perceive faces. They conclude that conscious perception develops between 5 and 15 months of age. After reading their paper, I want to consider a different conclusion. Perhaps Kouider et al. didn’t find a marker of conscious perception. Maybe they found a marker of unconscious perception.
Continue reading Unconscious Perception in Infants?
This paper attempts to specify the conditions under which a psychological explanation can undermine or debunk a set of beliefs. The focus will be on moral and religious beliefs, where a growing debate has emerged about the epistemic implications of cognitive science. Recent proposals by Joshua Greene and Paul Bloom will be taken as paradigmatic attempts to undermine beliefs with psychology. I will argue that a belief p may be undermined whenever: (i) p is evidentially based on an intuition which (ii) can be explained by a psychological mechanism that is (iii) unreliable for the task of believing p; and (iv) any other evidence for belief p is based on rationalization. I will also consider and defend two equally valid arguments for establishing unreliability: the redundancy argument and the argument from irrelevant factors. With this more specific understanding of debunking arguments, it is possible to develop new replies to some objections to psychological debunking arguments from both ethics and philosophy of religion.
Continue reading Derek Leben’s “When Psychology Undermines [Moral and Religious] Beliefs”
I have ventured beyond my areas of competence again: ethics. I find ethics massively complicated because so much of it seems to bypass unsettled empirical questions. Anyway, to try to avoid a misstep, I am reaching out to the wiser.
I have finally read some of Rawls’s A Theory of Justice—I am continually surprised at how many alleged “classics” I have yet to read. While I am sympathetic to most of it (and perhaps naively so), I am curious about how Rawls’s theory would apply to not just a single society, but a plurality of societies (like the plurality of nations on our planet). I have surveyed the first 3 chapters, paying special attention to section 58 (where he deals, briefly, with this very question). I have also skimmed Leif Wenar’s “Why Rawls is Not a Cosmopolitan Egalitarian” [PDF] (2006).
The trouble I am having is the following. It seems that Rawls allows for redistribution within societies, but not between societies—that is, per his principle of self-determination in section 58.
Continue reading Rawls & Cosmopolitan Egalitarian Redistribution
This link is a poster about philosophers’ brains that I presented at the Towards a Science of Consciousness Conference in Tucson—I gave a talk based on this poster at the University of Utah. Use the link to see a full-size PDF that will allow you to zoom ad nauseam without the blurriness—vector graphics are so cool!
We should not be surprised if some of the differences between philosophers’ views correlate with differences between philosophers’ brains. I list a handful of neurobiological differences that already correlate with philosophical differences among non-philosophers. It’s not obvious what we should glean from the possibility that philosophers’ brains could differ as a function of their views. After all, it might be that studying certain views changes our brains. That would not be surprising or concerning, really. But if it were the other way around—e.g., if structural or functional differences in brains predisposed us toward some views and away from others—then that might be concerning. What if academic philosophy is just an exercise in post hoc rationalization of the views that philosophers’ brains are predisposed toward? Of course, it’s entirely possible that causation works in both directions. But even that could be concerning, because it is compatible with self-reinforcing feedback loops. For instance, perhaps we are neurally predisposed to certain views, so we study those views, which further predisposes us toward those views (and away from their alternatives). But these questions are getting ahead of the evidence. Hopefully, the neuroscience of philosophy will provide some answers. Until then, check out the poster to see what questions the research has already answered.
During a morning session of the SPP, Benjamin Kozuch made the following argument involving higher-order thought:
- If higher-order (HO) theories of consciousness are true, then prefrontal lesions should produce manifest deficits in consciousness (as defined by HOT).
- Prefrontal lesions do not produce manifest deficits in consciousness.
- Therefore, many HO theories are not true.
Liad Mudrik, in her comments, adeptly pointed out that, while the PFC is commonly taken by many to be a center (location, module, etc.) of HO states, this might be a mistake. She explains: it does not follow from the notion that the PFC is associated with higher-order mental capacity (i.e., what makes humans more cognitively advanced than, say, mammals without a PFC) that the PFC is the location of HO thoughts or states. HO thoughts and states could very well be the product of dynamic relationships between various cortices.
Continue reading Higher-order Thought v. Higher-order Cortex