Are Atheists More Reflective Than Theists?


On Saturday, I was on the Veracity Hill Podcast talking about the evidence that atheists and agnostics reason more reflectively (i.e., make fewer errors) than theists.

The Discussion

  1. What do we mean by ‘reflective’? And how do we measure reflection? Who counts as a theist? And how do we measure religiosity?
  2. What do these findings about atheists and theists tell us about atheism and theism (if anything)? And how might further research answer hitherto unanswered questions about how atheists and theists reason?
  3. What are some related findings? For instance, what does this have to do with other philosophical beliefs?

The Podcast

Continue reading Are Atheists More Reflective Than Theists?

What Is Reflective Reasoning?


Last week I was talking about intuition. I think of intuition as — among other things — unconscious and automatic reasoning. The opposite of that would be conscious and deliberative reasoning. We might call that reflective reasoning.† In this post, I want to talk about reflective reasoning. How does it work? And why does it work? And — spoiler alert — why does it sometimes not work? Continue reading What Is Reflective Reasoning?

The Appeal to Intuition: A Fallacy?


You might be familiar with what philosophers call an “appeal to nature”. It is a claim that something is good or bad because of how natural it is. Sometimes an appeal to nature is a fallacy. In this post, I discuss the possibility that an appeal to intuition is that kind of fallacy.

Continue reading The Appeal to Intuition: A Fallacy?

Experimental Philosophy 2.0: The Neuroscience of Philosophy


If our judgments are dependent on the brain, then maybe we can understand our judgments by studying our brains. Further, maybe we can understand our philosophical judgments by studying our brains. What do you think? Can neuroscience help us understand philosophy? Here are some studies which suggest that it can.

1.  Two Opposing Neural Networks/Judgments

Consider two different networks in the brain: the Default Mode Network (DMN) and the Task Positive Network (TPN). These networks are mutually inhibitory. When one network’s activity increases, the other network’s activity decreases. It’s a bit like a seesaw (Jack et al. 2013).
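
To make the seesaw image concrete, here is a toy Python sketch of two units that suppress one another. The update rule and parameter values are my own invention for illustration; this is not the model from Jack et al. (2013).

```python
# Toy simulation of two mutually inhibitory "networks" (a seesaw).
# Illustrative only: the update rule and parameters are invented for demonstration,
# not taken from Jack et al. (2013).

def simulate(drive_dmn, drive_tpn, steps=200, dt=0.05,
             decay=1.0, inhibition=2.0):
    """Return final activities of two units that inhibit each other."""
    dmn, tpn = 0.5, 0.5  # arbitrary starting activity levels
    for _ in range(steps):
        # Each unit is pushed up by its own drive, pulled down by decay,
        # and suppressed in proportion to the other unit's activity.
        d_dmn = drive_dmn - decay * dmn - inhibition * tpn
        d_tpn = drive_tpn - decay * tpn - inhibition * dmn
        dmn = max(0.0, dmn + dt * d_dmn)
        tpn = max(0.0, tpn + dt * d_tpn)
    return dmn, tpn

# Driving the "task" unit harder pushes the "default" unit toward zero,
# and vice versa: the seesaw pattern described above.
print(simulate(drive_dmn=0.2, drive_tpn=1.0))  # TPN high, DMN near zero
print(simulate(drive_dmn=1.0, drive_tpn=0.2))  # DMN high, TPN near zero
```
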

Continue reading Experimental Philosophy 2.0: The Neuroscience of Philosophy

The Bias Fallacy


“They’re biased, so they’re wrong!” That’s a fallacy. We can call it the bias fallacy. Here’s why it’s a fallacy: being biased doesn’t entail being wrong. So when someone jumps from the observation that So-and-so is biased to the conclusion that So-and-so is wrong, they commit the bias fallacy. It’s that simple.

In this post, I’ll give some examples of the fallacy, explain the fallacy, and then suggest how we should respond to the bias fallacy.

1. Examples of The Bias Fallacy

You’ve probably seen instances of the bias fallacy all over the internet. In my experience, it is used as a rhetorical device: its purpose is to dismiss some person or their claims.

Like many rhetorical devices, this one is logically fallacious. So it’s ineffective. At least, it should be ineffective. That is, we should not be persuaded by it.

So if you’ve seen the bias fallacy online, then go ahead and set the record straight:

'They're biased, so they're wrong.' Not so fast! We can be biased without being wrong. #TheBiasFallacy

Continue reading The Bias Fallacy

A Definition of ‘Fake News’ (and Related Terms)


If the public discourse in the United States is any indication, then people in the US mean different things by ‘fake news’. Naturally, then, it is time to agree on a definition of ‘fake news’. While we’re at it, let’s distinguish ‘fake news’ from other terms.

1.  Let’s Agree On Terms

As I see it, we will need to distinguish between at least three terms: fake news, conspiracy theory, and journalism.

A Definition of ‘Fake News’

Also known as “fictional news”. Characterized by outlandish stories — sometimes about paranormal and supernatural events. Any explicit claims to truth are belied by the stories’ only semi-serious, comedic tone. Examples include many of the cover stories of the Weekly World News as well as some of the satirical punchlines of The Daily Show.

A Definition of ‘Conspiracy Theory’

Bad explanations designed to glorify their author and undermine the author’s perceived nemeses. Sometimes unfalsifiable. Alas, believed by many people. Examples are voluminous; they include certain explanations of the assassination of John F. Kennedy and InfoWars’ Alex Jones’s claims that the Sandy Hook shootings were staged.

A Definition of ‘Journalism’

Continue reading A Definition of ‘Fake News’ (and Related Terms)

Research Questions & Mental Shortcuts: A Warning


Daniel Kahneman talks extensively about how we make reasoning errors because we tend to use mental shortcuts. One mental shortcut is ‘substitution’. Substitution is what we do when we (often unconsciously) answer an easier question than the one being asked. I find that I sometimes do this in my own research. For instance, when I set out to answer the question, “How can X be rational?” I sometimes end up answering easier questions like, “How does X work?” In an effort to avoid such mistakes, I will (1) explain the question substitution error, (2) give an example of how we can distinguish between questions, (3) give a personal example of the substitution error, and (4) say what we can do about it.

1.  Substitution

In case you’re not familiar with Kahneman’s notion of ‘substitution’, here is some clarification. In short, substitution is this: responding to a difficult question by (often unintentionally) answering a different, easier question. People use this mental shortcut all the time. Here are some everyday instances:

Difficult Question → Easier Question
How satisfied are you with your life? → What is my mood right now?
Should I believe what my parents believe? → Can I believe what my parents believe?
What are the merits/demerits of that woman who is running for president? → What do I remember people in my community saying about that woman?

For further discussion of mental shortcuts and substitution, see Part 1 of Kahneman’s Thinking, Fast and Slow (2012).

Now, how does this mental shortcut apply to research?  Continue reading Research Questions & Mental Shortcuts: A Warning

Crystal Meth & Your Brain: An Infographic


Methamphetamine use is on the rise (Drug Enforcement Administration 2015). And so are crystal-meth-related drug convictions (see “State Sentencing…”). So what do we know about crystal meth? In particular, what does crystal meth do to your body and brain? The South Shore Recovery Center has some answers. In fact, they’ve done us the favor of turning those answers into the infographic below.
Continue reading Crystal Meth & Your Brain: An Infographic

Implicit Bias & Philosophy


This week, I’m talking about implicit bias over at The Brains Blog. I’m including my portion of the discussion below.

1.  The Implicit Association Test (IAT)

[Video: a screen recording of the race implicit association test]

The implicit association test (IAT) is one way to measure implicitly biased behavior. In the IAT, “participants […] are asked to rapidly categorize two [kinds of stimuli] (black vs. white [faces]) [into one of] two attributes (‘good’ vs. ‘bad’). Differences in response latency (and sometimes differences in error-rates) are then treated as a measure of the association between the target [stimuli] and the target attribute” (Huebner 2016). Likewise, changes in response latencies and error-rates resulting from experimental interventions are treated as experimentally manipulated changes in associations.
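
As a rough illustration of how latency differences get turned into an association measure, here is a simplified Python sketch. The latencies are hypothetical, and real IAT scoring (e.g., the D-score procedure) adds steps such as trial exclusions and error penalties that are omitted here.

```python
# Simplified sketch of an IAT-style association score computed from response
# latencies. Real IAT scoring involves extra steps (trial exclusions, error
# penalties); this only shows the basic idea: slower responses in the
# "incompatible" pairing are read as evidence of a stronger opposing association.

from statistics import mean, stdev

def association_score(compatible_ms, incompatible_ms):
    """Difference in mean latencies, scaled by the pooled spread."""
    diff = mean(incompatible_ms) - mean(compatible_ms)
    pooled_sd = stdev(compatible_ms + incompatible_ms)
    return diff / pooled_sd

# Hypothetical latencies (milliseconds) for one participant.
compatible = [620, 580, 640, 600, 610, 590]    # block whose pairings fit the participant's associations
incompatible = [720, 760, 700, 740, 710, 750]  # block whose pairings conflict with them
print(round(association_score(compatible, incompatible), 2))
```
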

2.  The Effect Of Philosophy

As philosophers, we are in the business of arguments and their propositions, not associations. So we might wonder whether we can use arguments to intervene on our implicitly biased behavior. And it turns out that we can — even if the findings are not always significant and the effect sizes are often small.

Some think that this effect of arguments on IAT performance falsifies the idea that implicitly biased behavior is realized by associations (Mandelbaum 2015). The idea is that propositions are fundamentally different from associations. So associations cannot be modified by propositions. So if an argument’s propositions can change participants’ implicitly biased behavior — as measured by the IAT — then implicit biases might “not [be] predicated on [associations] but [rather] unconscious propositionally structured beliefs” (Mandelbaum 2015, bracketed text and italics added).

But there is some reason to think that such falsification relies on oversimplification. After all, there are many processes involved in our behavior — implicitly biased or otherwise. So there are many processes that need to be accounted for when trying to measure the effect of an intervention on our implicitly biased behavior — e.g., participants’ concern about discrimination, their motivation to respond without prejudice (Plant & Devine, 1998), and their personal awareness of bias. So what happens when we control for these variables? In many cases, we find that the effects of argument-like interventions on implicitly biased behavior are actually explained by changes in participants’ concern(s), motivation(s), and/or awareness, but not by changes in associations (Devine, Forscher, Austin, and Cox 2013; Conrey, Sherman, Gawronski, Hugenberg, and Groom 2005).

Continue reading Implicit Bias & Philosophy
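
To make the “controlling for these variables” step concrete, here is a minimal Python sketch of a regression that includes concern, motivation, and awareness as covariates alongside the intervention. The variable names and data are hypothetical, and the data are deliberately constructed so that the covariates, not the intervention, carry the effect; this is only the general recipe, not the analysis from the studies cited above.

```python
# Minimal sketch of covariate adjustment: regress change in IAT performance on
# an intervention indicator while also including concern, motivation, and
# awareness as predictors. All data below are simulated for illustration.

import numpy as np

rng = np.random.default_rng(0)
n = 200
intervention = rng.integers(0, 2, n)   # 1 = received the argument-like intervention
concern = rng.normal(0, 1, n)          # self-reported concern about discrimination
motivation = rng.normal(0, 1, n)       # motivation to respond without prejudice
awareness = rng.normal(0, 1, n)        # personal awareness of bias

# Hypothetical outcome: by construction, driven by the covariates, not the intervention.
iat_change = 0.4 * concern + 0.3 * motivation + 0.2 * awareness + rng.normal(0, 0.5, n)

# Ordinary least squares with an intercept plus all four predictors.
X = np.column_stack([np.ones(n), intervention, concern, motivation, awareness])
beta, *_ = np.linalg.lstsq(X, iat_change, rcond=None)
for name, b in zip(["intercept", "intervention", "concern", "motivation", "awareness"], beta):
    print(f"{name:>12}: {b:+.2f}")
# In this toy data, the intervention coefficient should come out near zero once
# the covariates are in the model, which is the shape of the worry raised above.
```
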

Is Reflective Reasoning Supposed To Change Your Mind?


When you step back and question your beliefs and assumptions, do you expect to change your mind? Should you? I think that reflective reasoning is supposed to change our minds. But it might not change our beliefs. Sometimes reflection reinforces our beliefs. And sometimes reflection makes our beliefs more extreme or partisan. I’ll explain below. Continue reading Is Reflective Reasoning Supposed To Change Your Mind?