Free, online conference on the philosophy and science of mind!

The Minds Online conference starts today, consists of three week-long sessions, and ends on September 29th. So mark your calendars and set aside some time to read and comment.

You will find that each Minds Online session has a keynote and a few contributed papers, each contributed paper with its own invited commenters. Papers are posted for advance reading the Saturday before their session, and public commenting for each session runs from Monday (8am EST) to Friday.

To be notified when papers go up, subscribe by email (in the menu) or to the Minds Online post RSS feed. You can also subscribe to the Minds Online comment RSS feed to stay apprised of comments.

Conference hashtag: #MindsOnline2017. The full program is below: Continue reading Free, online conference on the philosophy and science of mind!

The Meaning Problem & Academic Lexicons

Sometimes I spend days trying to figure out what someone means when they use an otherwise common word. I spend even more time trying to figure out the difference between two authors’ uses of the same word. It’s a problem. We can call this the meaning problem. In this post I talk about the meaning problem and some solutions. I think the best solution would be open-source academic lexicons: lexicons for every academic field, edited by academics from the corresponding field. But that’s a big ask, so I will also mention a couple of other (partial) solutions. Continue reading The Meaning Problem & Academic Lexicons

Are Atheists More Reflective Than Theists?

On Saturday, I was on the Veracity Hill Podcast talking about the evidence that atheists and agnostics reason more reflectively (i.e., make fewer errors) than theists.

The Discussion

  1. What do we mean by ‘reflective’? And how do we measure reflection? Who counts as a theist? And how do we measure religiosity?
  2. What do these findings about atheists and theists tell us about atheism and theism (if anything)? And how might further research answer hitherto unanswered questions about how atheists and theists reason?
  3. What are some related findings? For instance, what does this have to do with other philosophical beliefs?

The Podcast

Continue reading Are Atheists More Reflective Than Theists?

The Appeal to Intuition: A Fallacy?

You might be familiar with what philosophers call an “appeal to nature”. It is a claim that something is good or right because it’s natural. Sometimes an appeal to nature is a fallacy. In this post, I discuss the possibility that an appeal to intuition is that kind of fallacy.

1.  Different Brain, Different Intuition

First, imagine that your brain and my brain are radically different from one another. If this were the case, then it would be unsurprising to find that your intuitions were different than mine. Indeed, evidence suggests that even minor differences between brains are linked to differences in intuition (Amodio et al. 2007; Kanai et al. 2011).

This implies that our appeals to intuition (etc.) might be contingent upon brains being a certain way. In other words, differences in intuitions seem to be the result of differences in natural properties.†

Continue reading The Appeal to Intuition: A Fallacy?

Experimental Philosophy 2.0: The Neuroscience of Philosophy

If our judgments are dependent on the brain, then maybe we can understand our judgments by studying our brains. Further, maybe we can understand our philosophical judgments by studying our brains. What do you think? Can neuroscience help us understand philosophy? Here are some studies which suggest that it can.

1.  Two Opposing Neural Networks/Judgments

Consider two different networks in the brain: the Default Mode Network (DMN) and the Task Positive Network (TPN). These networks are mutually inhibitory. When one network’s activity increases, the other network’s activity decreases. It’s a bit like a seesaw (Jack et al 2013).
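
To make the seesaw image concrete, here is a toy sketch of mutual inhibition. It is not a model of the DMN/TPN data, and the function and parameter names are made up for illustration: two units are each driven by an external input and suppressed in proportion to the other unit’s activity, so that driving one up pushes the other down.

```python
# Toy illustration of mutual inhibition (not a model of real brain data):
# each unit is driven by its own input, decays toward zero, and is suppressed
# in proportion to the other unit's activity.
def simulate_seesaw(input_a, input_b, inhibition=1.5, dt=0.1, steps=200):
    a = b = 0.0
    for _ in range(steps):
        da = dt * (input_a - a - inhibition * b)  # drive - self-decay - cross-inhibition
        db = dt * (input_b - b - inhibition * a)
        a = max(0.0, a + da)  # keep activities non-negative
        b = max(0.0, b + db)
    return a, b

# Drive unit A harder than unit B: A settles high while B is pushed toward zero.
print(simulate_seesaw(input_a=1.0, input_b=0.4))
```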

Continue reading Experimental Philosophy 2.0: The Neuroscience of Philosophy

The Bias Fallacy

“They’re biased, so they’re wrong!” That’s a fallacy. Call it the bias fallacy. Here’s why it’s a fallacy: being biased doesn’t entail that everything one does is wrong. So when someone jumps from the observation that someone is biased to the conclusion that they’re wrong, they have committed a fallacy. It’s that simple.

In this post, I’ll give some examples of the fallacy, explain the fallacy, and then suggest how we should respond to the bias fallacy.

1. Examples of The Bias Fallacy

You’ve probably seen instances of the bias fallacy all over the internet.

In my experience, the fallacy is a rhetorical device. The purpose of the bias fallacy is to dismiss some person or their claims.

Like many rhetorical devices, this one is logically fallacious. So it’s ineffective. At least, it should be ineffective. That is, we should not be persuaded by it.

So if you’ve seen the bias fallacy online, then go ahead and set the record straight:

'They're biased, so they're wrong.' Not so fast! We can be biased without being wrong. #TheBiasFallacy

And if you really want to have some fun, go ahead and join the discussion on Reddit. Continue reading The Bias Fallacy

Implicit Bias & Philosophy

This week, I’m talking about implicit bias over at The Brains Blog. I’m including my portion of the discussion below.

1.  The Implicit Association Test (IAT)

The implicit association test (IAT) is one way to measure implicitly biased behavior. In the IAT, “participants […] are asked to rapidly categorize two [kinds of stimuli] (black vs. white [faces]) [into one of] two attributes (‘good’ vs. ‘bad’). Differences in response latency (and sometimes differences in error-rates) are then treated as a measure of the association between the target [stimuli] and the target attribute” (Huebner 2016). Likewise, changes in response latencies and error-rates resulting from experimental interventions are treated as experimentally manipulated changes in associations.
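
As a rough illustration of how response latencies become a measure of association, here is a minimal sketch of IAT-style scoring, loosely in the spirit of the standard D-score. The data and function name are hypothetical, and real IAT scoring involves further steps (error penalties, latency trimming, block structure).

```python
# Minimal sketch of an IAT-style effect: slower responses in the "incongruent"
# pairing than in the "congruent" pairing, scaled by the pooled variability.
# Real IAT scoring (e.g., the D algorithm) adds error penalties and trimming.
import numpy as np

def iat_effect(congruent_ms, incongruent_ms):
    congruent = np.asarray(congruent_ms, dtype=float)
    incongruent = np.asarray(incongruent_ms, dtype=float)
    pooled_sd = np.concatenate([congruent, incongruent]).std(ddof=1)
    return (incongruent.mean() - congruent.mean()) / pooled_sd

# Hypothetical latencies (milliseconds) for each block type.
congruent = [610, 580, 650, 602, 590]
incongruent = [720, 695, 760, 710, 688]
print(round(iat_effect(congruent, incongruent), 2))  # positive = stronger association
```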

2.  The Effect Of Philosophy

As philosophers, we are in the business of arguments and their propositions, not associations. So we might wonder whether we can use arguments to intervene on our implicitly biased behavior. And it turns out that we can, even if the findings are not always significant and the effect sizes are often small.

Some think that this effect of arguments on IAT performance falsifies the idea that implicitly biased behavior is realized by associations (Mandelbaum 2015). The idea is that propositions are fundamentally different from associations, so associations cannot be modified by propositions. So if an argument’s propositions can change participants’ implicitly biased behavior, as measured by the IAT, then implicit biases might “not [be] predicated on [associations] but [rather] unconscious propositionally structured beliefs” (Mandelbaum 2015, bracketed text and italics added).

But there is some reason to think that such falsification relies on oversimplification. After all, there are many processes involved in our behavior, implicitly biased or otherwise. So there are many processes that need to be accounted for when trying to measure the effect of an intervention on our implicitly biased behavior: e.g., participants’ concern about discrimination, their motivation to respond without prejudice (Plant & Devine 1998), and their personal awareness of bias. So what happens when we control for these variables? In many cases, we find that argument-like interventions on implicitly biased behavior are actually explained by changes in participants’ concern(s), motivation(s), and/or awareness, not changes in associations (Devine, Forscher, Austin, and Cox 2013; Conrey, Sherman, Gawronski, Hugenberg, and Groom 2005). Continue reading Implicit Bias & Philosophy
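
For readers who want to see what “controlling for these variables” can look like in practice, here is a hedged sketch with simulated, made-up data: regress the change in IAT scores on an intervention indicator plus covariates for motivation, concern, and awareness, and check whether the intervention coefficient survives once the covariates are in the model. None of the numbers or variable names come from the studies cited above; this is only an illustration of the statistical idea.

```python
# Illustrative only: simulated data in which the apparent effect of an
# argument-like intervention runs through motivation rather than the
# intervention itself. Adding the covariates makes that visible.
import numpy as np

rng = np.random.default_rng(0)
n = 500
intervention = rng.integers(0, 2, n).astype(float)    # 1 = received the argument-like intervention
motivation = 0.8 * intervention + rng.normal(size=n)  # intervention raises motivation to respond without prejudice
concern = rng.normal(size=n)                          # concern about discrimination
awareness = rng.normal(size=n)                        # personal awareness of bias

# Simulated outcome: driven by motivation and concern, not by the intervention directly.
iat_change = 0.5 * motivation + 0.3 * concern + rng.normal(scale=0.5, size=n)

def ols_coefs(predictors, y):
    """Least-squares coefficients for the given predictors (intercept added first)."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    return np.linalg.lstsq(X, y, rcond=None)[0]

print("without controls:", round(ols_coefs([intervention], iat_change)[1], 2))
print("with controls:   ", round(ols_coefs([intervention, motivation, concern, awareness], iat_change)[1], 2))
```

In this simulation the intervention looks effective on its own, but its coefficient shrinks toward zero once motivation, concern, and awareness are included, which is the pattern the studies above describe.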