Considering Third Party Candidates? A Podcast Discussion


The 2016 US election has many people thinking about third party candidates. Good news: philosophers and others have been sorting out the ethics and rationality of voting for a while now. I talk about the philosophy of third party voting with Kurt Jaros below:

The Podcast

Continue reading Considering Third Party Candidates? A Podcast Discussion

The Minds Online Conference Is Starting!


From September 5 to September 30, there is an exciting, free, online conference about the philosophy and science of mind: the (second annual) Minds Online conference! Loads of wonderful scholars are sharing and commenting on each other’s research — and you can access and participate in all of it!

Here are a few things to note for those who are new to online conferences.

  • Sessions: There are four sessions, each with a different topic and its own keynote.
  • Timeline: Each session lasts one week. (So the conference lasts four weeks.)
  • Participating: You can read papers starting the weekend before their session. And you can comment on papers Monday through Friday of their session.

So head on over and enjoy the wonder that is conferencing from the comfort of your home, office, favorite coffee shop, etc.

Here’s the program: http://mindsonline.philosophyofbrains.com/minds-online-2016-program/

Continue reading The Minds Online Conference Is Starting!

Fact-checking is not enough: We need argument-checking


I see more fact-checking on Facebook than I used to. While I’m glad to see fact-checking catching on, fact-checking isn’t enough — or so I’ll argue in this post.

1. Fact-checking: The problem

Let’s say that you and I agree on all the facts. Now let’s say that we start arguing. Will we agree? Will we even argue well? Not necessarily!

After all, we can reason badly even if we agree on the facts. Specifically, we can jump to conclusions that don’t follow from the facts. So fact-checking our argument(s) won’t necessarily fix all the problems with our argument(s).

2. Bad Arguments

Consider some of the claims that people make:

Is Philosophical Reflection Ever Inappropriate?


I am sometimes that stereotype of a socially inept philosopher. I fail to recognize the difference between hyperbole or sarcasm, on the one hand, and seriousness, on the other hand.1 I say things that are technically correct, but socially incorrect. And I take casual claims way too seriously. In short, I go into philosophical reflection mode when I’m probably not supposed to:

(X and Y are discussing plans for the weekend.)

[X]: I don’t know, man. That sounds like a bad idea.

[Y]: That’s cuz it is a bad idea!

(Laughter)

(Nick overhears this.)

Me: Uhh, I don’t know about that. Sounding like a bad idea doesn’t make it a bad idea. Surely bad sounding ideas can be—

[X]: Chill out, Nick. No one actually thinks that it’s a bad idea just because it sounds like a bad idea. It’s just a thing people say.

Learning From The Socially Inept Philosopher

Two things about my social ineptitude stand out to me:

  1. My inept responses are often instances of overthinking.
  2. Overthinking seems to prevent me from realizing something that would have otherwise been obvious.

My overthinking seems to be a form of philosophical reflection. And if that is right, then my ineptitude might demonstrate that philosophical reflection is sometimes inappropriate. In what follows I’ll mention two examples of misunderstanding the use of philosophical reflection. This will lead me to a provisional conclusion: philosophical reflection is ill-suited for certain social situations. 

The Philosophers’ Mistake

Philosophers spend their days thinking critically. This often involves suspending judgment(s) until they’ve had a chance to reflect. So when philosophers are faced with a claim — even in casual conversation — it would be understandable for the philosopher’s first response to be some form of philosophical reflection (…at least that’s what I tell myself when I am socially inept).

Philosophical reflection is not always bad, of course. Sometimes it’s crucial! It can help us identify Continue reading Is Philosophical Reflection Ever Inappropriate?

Peer-review: on what basis should we reject papers?


When you peer-review a paper, you can make one of a few basic recommendations to the editor. One option is this: do not publish the paper.

So what criteria should you use to make such a recommendation? In this post, I argue that some criteria are better than others.

1. Is the paper convincing?

A friend of mine mentioned this criterion the other day: “…[philosophy] papers ought to be convincing.” Call this the Convince Me standard or CM.

Maybe you think that CM sounds like a reasonable standard for peer-review. I don’t.  Continue reading Peer-review: on what basis should we reject papers?

One Way To Do Philosophy: A Flowchart


I like philosophy. And I like flowcharts. So — obviously — I had to make a philosophy flowchart. It outlines my process as a philosopher.

1.  The Process

According to the philosophy flowchart, my philosophical process is pretty straightforward. There are just a few steps.

  1. Look for a thesis.
  2. Look for an argument.
  3. Determine whether you care about the thesis.
  4. Take a stance.
  5. Give an argument.
  6. Evaluate the argument.
  7. Document and/or repeat.

2.  Try Out The Process

Let’s see how the philosophy flowchart would work. Imagine that you’re reading Peter Singer’s “Famine, Affluence, and Morality” (1972) [PDF]. Here’s how I’d proceed:

Step 1. Look for a thesis.

Singer was pretty kind to his reader. He made the thesis fairly clear. It’s just this:

Thesis: “[most people in affluent countries] ought to give lots of money away, and it is wrong not to do so.”

Step 2. Look for the argument.

Singer has also made it pretty easy to find the argument for his thesis. The premises are as follows:

Premise 1: “Suffering and death from lack of food, shelter, and medical care are very bad.”

Premise 2: “If it is in our power to prevent something very bad from happening, without thereby sacrificing anything else of comparable moral significance, [then] we ought, morally, to do it.”

Premise 3: “([For people in affluent countries] It is within our power to prevent something very bad from happening, without thereby sacrificing anything else of comparable moral significance — e.g., by giving away lots of money.)”

Step 3-7: …you get the idea.

Challenge. If you’ve never read or written anything about Singer’s paper and you’re interested in the thesis, then you might consider the following challenge:

  • (re)read the paper
  • complete the remaining steps in the flowchart
  • share your results in the comments.

References

Singer, P. (1972). Famine, Affluence, and Morality. Philosophy & Public Affairs, 1(3), 229–243. [PDF]

Do reflective people agree about ethics?


You might think that most people will share some big-picture beliefs about morality (a “common morality”). And you might think that this agreement is the result of reflective reasoning about ethics. For example, most people might think about ethics for a while and accept a consequentialist principle like this: we should try to achieve the greatest good for the greatest number. Well, it turns out that people don’t agree about such ethical principles — not even people who often reflect on such matters. Before I get to the evidence for that claim, take a look at someone who thought that reflective people do agree about ethics.

1.  Do Reflective People Agree About Ethics?

Here’s Henry Sidgwick:

“The Utilitarian principle […that there is a] connexion between right action and happiness […] has always been to a large extent recognised by all reflective persons.” (The Methods of Ethics, Book I, Chapter 6, Section 3)

Sidgwick is claiming that…

  1. there is a connection between happiness and right conduct (and)
  2. all reflective people recognize this connection.

What do you think? Do these claims sound right?

2.  The Evidence

Notice that 2 requires evidence. Alas, 2 is not well-supported by evidence: reflective people do not seem to agree that there is an important ethical connection between happiness and right conduct.

Common Morality

Consider that there is widespread disagreement about 1 among philosophers. To quantify this disagreement a bit, let us look at some data. Of about 1000 philosophers surveyed in 2009, 25.9% leaned toward or accepted deontology, 18.2% leaned toward or accepted virtue ethics, and 23.6% leaned toward or accepted consequentialism (Bourget and Chalmers 2013). Consequentialism is the view most associated with 1 — the idea that there is a connection between happiness and right conduct — and yet fewer than a quarter of philosophers are partial to it. So, contrary to Sidgwick’s claim, the consequentialist’s connection between happiness and right conduct does not seem to be recognized by all reflective people. Indeed, it does not even seem to be recognized by most reflective people.

Reflection

In situations like this, an intuitionist like Sidgwick might want to press on the notion of ‘reflective’. After all, the finding (above) is only a problem for Sidgwick if — among other things — philosophers count as ‘reflective’. If they do, then Sidgwick’s hypothesis is falsified. If they do not, then Sidgwick’s hypothesis might still be intact.

So if you want to defend Sidgwick’s hypothesis 2 from the evidence (above), then you need to argue that philosophers do not count as reflective — and do not thereby pose a counterexample to 2. One cannot, of course, merely stipulate that philosophers do not count as reflective. That would be ad hoc. In order to defend Sidgwick’s 2 from the aforementioned data, you will need to appeal to independent evidence. Fortunately there is independent evidence about the relative reflectiveness of philosophers and non-philosophers.

Alas, the evidence does not support this defense of Sidgwick’s hypothesis (2). Rather, the evidence suggests that philosophers are significantly more reflective than non-philosophers. In a sample of 4000 participants, those with training in philosophy performed up to three times better on tests of reflection — e.g., the Cognitive Reflection Test (Frederick 2005) — than those without such training (Livengood et al 2010). This result has been replicated and expanded. For example, those with (or who were candidates for) a PhD in philosophy also performed significantly better than others — F(1, 558) = 15.41, p < 0.001, d = 0.32 (Byrd 2014). And these findings are not new. Over 20 years ago, Deanna Kuhn found that philosophers demonstrated “perfect” and domain-general reasoning competence (Kuhn 1991, 258-262).

So it seems that if any group of people should count as reflective, it is philosophers. And these reflective people do not — contrary to Sidgwick’s hypothesis 2 — unanimously recognize a connection between happiness and rightness.

3. So what now?

The idea that people share a “common morality” via “reflective equilibrium” might fly in the face of evidence. It certainly does for Sidgwick. After all, it seems like reflective people (e.g., philosophers) simply don’t agree about the alleged connection between happiness and right conduct. And if you try to respond to this evidence by denying that philosophers are reflective, then you run into another problem: that claim also flies in the face of evidence. So those objections won’t work.

A better strategy might be to reject my claims about the association between Sidgwick’s claims and consequentialism. That is, you might say that non-consequentialist approaches to ethics acknowledge the connection between happiness and right conduct just as much as consequentialist approaches — sort of like Andy Hallman does in the comments. If that claim is right, then Sidgwick might have been on to something. I leave it to you to decide if that kind of objection is promising.

 

 

Featured image: “Extermination of Evil Sendan Kendatsuba” via Wikipedia Commons (in the public domain).

Certain Philosophical Views Correlate with Reasoning Errors …even among PhDs


2022 Update: My results mentioned below have since been replicated in a paper now published in Review of Philosophy and Psychology. Free paper, audiopaper, and link to the journal’s version here.


Philosophy helps us reason better, right? I mean, taking courses in analytic philosophy and argument mapping does more for students’ critical thinking than even critical thinking courses do (Alvarez-Ortiz 2007). And the more training one has in philosophy, the better one does on certain reasoning tasks (Livengood et al 2010). So it’s no accident that philosophy majors tend to outperform almost every other major on the GRE, the GMAT, and the LSAT (“Why Study Philosophy…”; see also Educational Testing Service 2014). That’s why people like Deanna Kuhn have such high praise for philosophers’ reasoning (Kuhn 1991, 258-262).†

Reasoning expertise: We turn now to the philosophers…. The performance of the philosophers is not included in table form because it is so easily summarized. No variation occurs…philosophers [show] perfect performance in generation of genuine evidence, alternative theories, counterarguments, and rebuttals…. The philosophers display a sophisticated understanding of argumentative structure…. None of the philosophers [had] any special expertise in any of the content domains that the questions address…. The performance of philosophers shows that it is possible to attain expertise in the reasoning process itself, independent of any particular content to which the reasoning is applied.

But there’s much more to say about this. For instance, we might ask two questions about this evidence.

Two Questions

It’s one thing to claim that philosophers are better reasoners, but that’s not the same as claiming that they are perfect reasoners. After all, philosophers might reason better than others and yet still be vulnerable to systematic reasoning errors. So we need to ask: Are philosophers prone to cognitive errors like everyone else?

Also, if philosophers are prone to cognitive error, what is the relationship between their errors and their philosophical views? 

1.  Are Philosophers Prone To Cognitive Error?

In order to understand the rest of the post, you will need to answer the question below. It should only take a moment.

A bat and a ball cost $1.10 in total. The bat costs $1.00 more than the ball. How much does the ball cost?

The question comes from the Cognitive Reflection Test (CRT) (Frederick 2005). It is designed to elicit a quick answer. What answer first came to your mind?

If you are like most people, one answer quickly came to mind: “10 cents.” And if you are like many people, you had an intuitive sense that this answer was correct. Alas, 10 cents is not correct. You can work out the correct answer on your own if you like. The point I want to make is this: the intuitively correct answer to this question is demonstrably false. This suggests that answering this question intuitively constitutes an error in reasoning. 
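For those who would rather check than compute by hand, the puzzle reduces to two simple constraints, and a few lines of arithmetic settle it (a minimal sketch — spoiler ahead if you wanted to work it out yourself):

```python
# Two constraints from the puzzle (in dollars):
#   bat + ball == 1.10
#   bat == ball + 1.00
# Substituting the second into the first gives: 2 * ball + 1.00 == 1.10.
total = 1.10
difference = 1.00

ball = round((total - difference) / 2, 2)
bat = round(ball + difference, 2)

print(ball)  # 0.05
print(bat)   # 1.05

# Both constraints hold (within floating-point tolerance):
assert abs((bat + ball) - total) < 1e-9       # prices sum to $1.10
assert abs((bat - ball) - difference) < 1e-9  # bat costs $1.00 more
```

Note why the intuitive answer fails: a 10-cent ball would make the bat $1.10, so the pair would cost $1.20, not $1.10.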

It turns out that philosophers are less likely than others to make this error.

Jonathan Livengood and colleagues found that the more philosophical training one had, the less likely one was to make this error (Livengood et al 2010). I replicated this finding a few years later (Byrd 2014). Specifically, I found that people who had — or were candidates for — a Ph.D. in philosophy were significantly less likely than others to make this reasoning error — F(1, 558) = 15.41, p < 0.001, d = 0.32 (ibid.).

Some philosophers performed perfectly on the CRT — even after controlling for whether philosophers were familiar with the CRT. However, many philosophers did not perform perfectly. Many philosophers made the error of responding unreflectively on one or two of the CRT questions. This implies an answer to our first question.

Answer: Yes. Philosophers’ reasoning is susceptible to systematic error.

So what about our second question?

2.  Do Philosophers’ Errors Predict Their Views?

Among lay reasoners, the tendency to make this reasoning error on the CRT has correlated with believing that God exists, that immortal souls exist, that life experiences can count as evidence that a god exists, etc. (Shenhav, Rand, and Greene 2012). This finding is in line with a common theme in the research on reasoning: unreflective reasoning correlates with a bunch of religious, supernatural, and paranormal beliefs (Aarnio and Lindeman 2005; Bouvet and Bonnefon 2015; Giannotti et al 2001; Pennycook et al 2012; Pennycook et al 2013; Pennycook et al 2014a, 2014b).

And this finding has now been replicated among philosophers. Specifically, the more that philosophers were lured into the intuitively appealing yet incorrect answers on the CRT (e.g., “10 cents”), the more that they leaned toward or accepted theism — F(1, 559) = 7.3, p < 0.01, d = 0.16, b = 0.12 (Byrd 2014).

There is also evidence that people who make this error on the CRT are more prone to certain moral judgments. To see what I mean, read the scenario below (Foot 1967).

You see a trolley racing down its track toward five people. You happen to be standing near the switch that would divert the trolley down a sidetrack toward one person. If you pull the switch, the trolley will surely kill one person. If you do not pull the switch, the trolley will surely kill five people. Do you pull the switch?

So? Would you pull the switch or not? Those who answered unreflectively on the CRT have been less likely to pull the switch (Paxton, Ungar, and Greene 2012).

Once again, it turns out that this finding holds among philosophers as well. Philosophers who were more likely to make a reasoning error on the CRT were less likely to pull the switch — F(1, 559) = 6.93, p < 0.001, d = 0.15, b = 0.17 (Byrd 2014).

Philosophers’ proclivity to make this error was also positively associated with other philosophical views:

  • Physical (as opposed to psychological) views of personal identity — F(1, 558) = 8.57, p < 0.001, d = 0.17.
  • Fregeanism (as opposed to Russellianism) about language — F(1, 558) = 8.59, p < 0.01, d = 0.17.

I have lots of thoughts about these findings, but I want to keep things brief. For now, consider the implied answer to our second question.

Answer: Yes. Philosophers’ reasoning errors are related to their views.

CONCLUSION

So there you have it. It would seem that philosophers are susceptible to systematic reasoning errors. And insofar as philosophers are so susceptible, they tend toward certain views. I’m tempted to say more, but I’ve already done so elsewhere (Byrd 2014) and I am working on a pre-registered replication of these findings for—among other things—my dissertation.††


† Thanks to Greg Ray for pointing me to this passage.

†† What does the rest of the literature suggest about philosophers’ reasoning? Unsurprisingly, the verdict is disputed (Nado 2014, Machery 2015, Mizrahi 2015, Rini 2015). Indeed, philosophers seem susceptible to the same tricks as anyone else (Schwitzgebel and Cushman 2015; Pinillos et al 2011). And even if philosophers are better reasoners, it’s not clear why they are better (Clarke 2013). Why would philosophers be better reasoners than others? I sketch an account in Byrd 2014, Section 3 (see also Weinberg, Gonnerman, Buckner, and Alexander 2010).

Related Posts

Implicit Bias | Part 4: Ten Debiasing Strategies

At this point it’s pretty clear why someone would be worried about bias. We’re biased (Part 1). Consciously suppressing our biases might not work (Part 2).  And our bias seems to tamper with significant, real-world decisions (Part 3). So now that we’re good and scared, let’s think about what we can do. Below are more than 10 debiasing strategies that fall into 3 categories: debiasing our stereotypes, debiasing our environment, and debiasing our decision procedures. Continue reading Implicit Bias | Part 4: Ten Debiasing Strategies

Implicit Bias | Part 2: What is implicit bias?

If our reasoning were biased, then we’d notice it, right? Not quite. We are conscious of very few (if any) of the processes that influence our reasoning. So, some processes bias our reasoning in ways that we do not always endorse. This is sometimes referred to as implicit bias. In this post, I’ll talk about the theory behind our implicit biases and mention a couple surprising findings.

The literature on implicit bias is vast (and steadily growing). So there’s no way I can review it all here. To find even more research on implicit bias, see the next two posts, the links in this series, and the links in the comments.† Continue reading Implicit Bias | Part 2: What is implicit bias?