Research

My research provides a new framework for thinking about the role of intuition and reflection in philosophical, moral, religious, and social reasoning. By applying empirical and philosophical tools to topics such as implicit bias and moral psychology, I challenge and clarify both philosophers’ and scientists’ understanding of the mind.

Intuition & Reflection

Reasoning is my primary area of study, particularly intuition and reflection. For example, when we consider questions such as, “How much should I donate?”, our first, intuitive response might be to just pick an amount that feels right. Appealing to intuition like this is standard fare in philosophical discourse (Chalmers, 2014; Kornblith, 1998; Mallon, 2016). However, when we step back and reflect on our intuitive response, we might ask further questions. “Is that amount too much? Too little? Must I donate anything at all? Why donate to this cause rather than that one? How should I distribute my donations across various causes?” Such reflection can either change or confirm our initial intuitive judgment. This kind of reflection also features in many of philosophy’s most influential concepts, such as reflective agency (Kennett & Fine, 2009; Velleman, 1989; 2000; Wallace, 2006), reflective endorsement (Korsgaard, 1996), reflective equilibrium (Goodman, 1983; Rawls, 1971), reflective knowledge (Sosa, 1991), reflective scrutiny (Hursthouse, 1999), and reflective self-consciousness (Peacocke, 2014). My research investigates how intuition and reflection work and how they feature in philosophical, moral, religious, and social reasoning.

Great Minds Do Not Think Alike: Reflection & Philosophers’ Philosophical Views

One understudied population is academic philosophers. So I investigate individual differences in reflective reasoning and personality among both philosophers and non-philosophers. I find that much of what we observe among non-philosophers—e.g., that less reflective non-philosophers are more likely to believe that God exists (Pennycook et al., 2016) and that causing harm is wrong even when it ensures their preferred outcome (e.g., Hannikainen & Cova, forthcoming)—is also observed among philosophers (Byrd, in prep.). Curiously, people who study philosophy tend to be more reflective (e.g., Kuhn, 1991; Livengood et al., 2010). So, I also study how the relationship between reflective reasoning and philosophical judgment interacts with philosophical training—e.g., how many years someone has studied philosophy and how many philosophy courses someone has taught. In two large studies, I found that highly reflective people tend toward different philosophical views than less reflective people, regardless of philosophical training. This suggests that there is more to the link between reflection and philosophical differences than philosophical training alone. It might be that different styles of reasoning produce lasting differences in the way people analyze philosophical questions.

Not All Who Ponder Count Costs: Reflection & Moral Dilemma Judgments

In a paper in Cognition, Paul Conway and I present two studies that challenge the prevailing theory of how reflection features in moral judgment. Imagine that five people face imminent harm; however, if you harm one other person, the five will be spared. Is it appropriate to harm the one to spare the five? Past work found that more reflective people were more likely to accept such harm tradeoffs (e.g., Paxton et al., 2012; Hannikainen & Cova, forthcoming). However, that work measured reflection with mathematical tasks. And, of course, the moral dilemma is, in part, a mathematical task—one vs. five. So, accepting harm tradeoffs might be explained by mathematical reflection rather than reflection per se. Our two studies examined moral dilemma responses alongside performance on both mathematical and non-mathematical measures of reflection. Sure enough, mathematical reflection correlated only with accepting harm tradeoffs, whereas non-mathematical reflection correlated with both accepting and rejecting harm tradeoffs. So, the alleged link between reflection and accepting harm tradeoffs might be better explained by mathematical thinking than by reflection per se.

Bounded Reflectivism: The Problem of (and Solution to) Both Lazy and Biased Reasoning

While reflection is a staple in the history of ideas—featuring centrally in theories of agency, induction, justice, justification, knowledge, and normativity—there is ambiguity about the meaning and importance of reflection. So, I synthesized a new model of reflection from the science of reasoning, which has been presented at conferences and symposia and will soon be submitted for publication. The model clarifies what scholars mean by reflection and how reflection is only sometimes a good supplement to intuition. For instance, in politically polarized discussions, reflection can lead to more (not less) polarization, but in more dispassionate scientific discussions, reflection can lead to agreement and progress. The model coins the term ‘epistemic identity’ for the phenomenon of taking certain beliefs to be part of our identity—e.g., beliefs about religion, morality, politics, gender, etc. When we feel that our epistemic identity is threatened, we reflect in order to defend that identity and its corresponding beliefs rather than to find the best evidence and arguments. The solution, according to the model, is not to suppress our epistemic identities, but to embrace them. If we are in the grip of an epistemic identity and its corresponding beliefs, then we should appeal to superordinate identities that are less bound to an ideology (e.g., the scientific identity), our ingroup (e.g., the human identity), or our present moment (e.g., the identity of the person we aspire to be).

Reducing Implicit Bias via Reflection and Counterconditioning

In a paper in Synthese titled “What We Can (And Can’t) Infer About Implicit Bias”, I show that, contrary to some philosophers’ claims, implicit biases are not entirely automatic and unconscious. Indeed, I show that we can reflectively countercondition our implicitly biased behavior—e.g., by exposing ourselves to counterstereotypes.

Debiasing In the Classroom: A Qualitative Study

Studying implicit bias prompted me to conduct qualitative pilot studies of debiasing among introductory philosophy students. For example, a common stereotype of academic philosophers is that they are old, white, and male. Surveying my students’ stereotypes on the first day of class confirms this. So, throughout my courses, I present students with counterstereotypic images of scholars. Specifically, I display images of the assigned scholars only when those scholars are women or Black. End-of-semester surveys suggest that my students’ stereotypes of scholars become less sexist and more racially inclusive (see Teaching Portfolio). These preliminary findings have informed a Debiasing Workshop that I have presented at various venues, most recently at Florida State University’s Spring Conversation Series on Topics in Diversity & Inclusion in Research.

Miscellaneous

Other topics I am writing about include depression, cognitive therapy, scientific realism, cosmopolitan egalitarianism, racial integration, unconscious intentions, free will, personal identity, the non-identity problem, and posthumous harm.