I see more fact-checking on Facebook than I used to. While I’m glad to see fact-checking catching on, fact-checking isn’t enough — or so I’ll argue in this post.
1. Fact-checking: The problem
Let’s say that you and I agree on all the facts. Now let’s say that we start arguing. Will we argue well? Not necessarily!
After all, we can reason badly even if we agree on the facts. Specifically, we can jump to conclusions that don’t follow from the facts. So fact-checking our argument(s) won’t necessarily fix all the problems with our argument(s).
2. Bad Arguments
Consider some of the claims that people make:
- The new federal healthcare policy caused [A]
- The president increased/decreased unemployment [B]
- The such-and-such trade agreement caused [C]
- Gun-carrying in the US causes [D]
- Undocumented immigrants caused [E]
- The proposed tax cuts are causing…
Notice that all of these claims are causal claims. They are not merely correlational. Unfortunately, no one has run the kinds of studies that can support these causal claims. So even if we cite all the facts about healthcare, unemployment, guns, taxes, etc., we cannot validly reason our way to any of the conclusions above.
3. What Good Causal Arguments Require
Before we can claim that some thing caused some other thing, we need to do a few things.
1. Collect data about all variables that might be causally relevant to what we’re studying.
2. Assign all people (or whatever we’re studying) to each condition randomly.
3. Include a control condition (and a placebo/sham condition, if possible).
4. Make enough observations for adequately powered statistical analysis (at least 50 observations per variable per condition).
5. Include all relevant variables in the final analysis and reporting (see step 1).
6. Subject all data, methods, analysis, etc. to peer review.
Very few — if any — of these steps have been completed when it comes to healthcare, unemployment, guns, taxes, and the other stuff that we argue about. But that is hardly surprising. After all, we can’t randomly assign people to different public policies, laws, tax brackets, etc. And since we can’t do this, there are loads of causal claims about healthcare, unemployment, guns, etc. that we simply cannot defend.
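To see why random assignment matters so much, here is a minimal simulation (all variable names and numbers are illustrative, not real data): a hidden confounder can make a "policy" look strongly correlated with an outcome even when the policy’s true causal effect is exactly zero. Randomizing the exposure breaks that spurious link.

```python
import random

random.seed(0)
N = 10_000

def corr(xs, ys):
    """Pearson correlation of two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# A hidden confounder (say, regional wealth) drives both exposure
# to the "policy" and the outcome. The policy itself has ZERO
# causal effect on the outcome in this simulation.
confounder = [random.gauss(0, 1) for _ in range(N)]
exposure_observational = [c + random.gauss(0, 1) for c in confounder]
outcome = [c + random.gauss(0, 1) for c in confounder]

# Observational data: a strong correlation despite no causation.
print(round(corr(exposure_observational, outcome), 2))  # roughly 0.5

# Randomized assignment severs the link to the confounder:
exposure_randomized = [random.gauss(0, 1) for _ in range(N)]
print(round(corr(exposure_randomized, outcome), 2))  # roughly 0.0
```

The observed correlation in the first case comes entirely from the confounder, which is exactly the pattern that randomization (step 2 above) is designed to rule out.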
4. How To Get Better At Reasoning
Maybe you’re interested in trying to check your own arguments. If so, here are a few things to think about.
4.1 Acknowledge your limitations
For example, when you are tempted to make a causal claim, ask yourself:
- Did I complete steps 1-6?
- Has anyone completed steps 1-6 and published their results?
If the answer to both questions is no, then avoid making the causal claim.
Instead, acknowledge the limitations involved in arriving at your conclusion, e.g., “I don’t know what the research suggests, but it seems to me that…” or “in my own limited experience, I find that…” or “one or two studies find a correlation between…”.
4.2 Test yourself
Try explaining how your conclusion follows from the evidence. After all, we often feel that we’ve reasoned correctly before we’ve explained each step in our reasoning process (Thompson and Morsanyi 2012). But when we try to explain our argument, we often spot an error or realize that we don’t understand things as well as we thought we did (Fernbach et al 2013).
4.3 Look for good evidence
Statistically significant findings are cheap. You can easily find a few if you try. So it is not enough to point to just one or two findings — I’m looking at you, science journalists! If you want to make a strong case for something, then you need to point to findings that have been replicated many times in many contexts. That means citing many papers (or reviews/meta-analyses of many papers).
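A quick sketch of why significant findings are cheap (the numbers here are hypothetical, not from any real study): if you run enough tests of hypotheses that are in fact false, roughly 5% of them will still come out “significant” at the conventional p < .05 threshold by chance alone.

```python
import random

random.seed(1)

def null_study(n=100):
    """One 'study': sample n observations from a world where the
    true effect is exactly zero, then z-test the sample mean."""
    xs = [random.gauss(0, 1) for _ in range(n)]
    mean = sum(xs) / n
    z = mean / (1 / n ** 0.5)  # known standard deviation = 1
    return abs(z) > 1.96       # 'significant' at p < .05 (two-sided)

trials = 2000
hits = sum(null_study() for _ in range(trials))
print(hits)  # roughly 100, i.e. ~5% false positives
```

So a literature (or a news cycle) that runs thousands of such tests will always contain some “significant” results to cite, which is why replication across many papers matters more than any single finding.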
4.4 Look for opposing evidence
Like I said, it’s easy to find a few statistically significant results. And so it is very easy to find some evidence for almost any conclusion. So if we are predisposed to a particular conclusion (and let’s be real: we are), then we will have to work hard to find (and not ignore) findings that challenge our desired conclusion (Flynn, Nyhan, and Reifler 2016; see also motivated reasoning). [Another opportunity for a stern look at science journalists/reporters.]
4.5 Take your time
You might have realized that these steps take a while. You have to search, read, maybe take notes, explain, and — of course — think. But it might be worth it: some people reason more carefully when they take their time (Paxton et al 2012).
In 2014 I found that atheist and agnostic philosophers are more reflective than theist philosophers (Byrd). Since statistically significant findings are cheap, this single finding was not that insightful. After all, my finding might have been a statistical fluke. I briefly looked for opposing evidence, but didn’t find any. And then, a year later, someone published research that challenged my finding (Finley et al 2015). So I had to discount my finding. However, a 2016 meta-analysis confirmed that most published studies find that atheists and agnostics are more reflective (Pennycook et al). Now I’m a bit more confident in the finding.
When evaluating a claim, look for evidence. But don’t just look for any evidence. Look for a preponderance of evidence. To find that, look for meta-analyses, meta-syntheses, and systematic reviews. (Beware: there might not be a preponderance of evidence.) Don’t jump to conclusions that don’t follow from the evidence. Instead, acknowledge the limits of what we can infer from the evidence. And explain how your conclusion follows from the evidence. Finally, take your time. It might be months or years before you find all the relevant evidence and arrive at the best conclusion(s).
5. How To Help Others Get Better
Our own reasoning is only a small part of the problem, though. What about everyone else? What do we do about their reasoning?
I don’t know about you, but spotting errors in others’ reasoning is much easier than spotting errors in my own reasoning. Turns out that I am not alone (Trouche et al 2015). So we would be wise to ask each other to look for our reasoning errors.
But what about public figures? We cannot easily contact them and point out their reasoning error(s). And even if we could, it’s not clear that correcting them would improve their future arguments. What we need is a way to publicize public figures’ reasoning errors. We need large-scale argument-checking. Perhaps that is how we can prevent the spread and acceptance of misinformation (Nyhan 2010; Nyhan and Reifler 2015b). You might think that argument-checking is a pipe dream. However, the good people over at Clearer Thinking are already doing it! Check out their argument checking videos of some of the 2016 US presidential (primary) debates.
Another solution might be to ensure that every kid is given the tools of reasoning. In the US, people are not taught much — if anything — about reasoning. For instance, none of my primary or secondary schools offered courses in logic, philosophy, or critical thinking. There is some evidence that such courses can improve our reasoning (Attridge et al 2016; Alvarez-Ortiz 2007), even at a very early age (Gorard et al 2015). So perhaps it’s time to institute a reasoning curriculum in US primary and secondary schools.
Maybe you’re not convinced that accountability and education will work. That’s fair. But that’s just a reason to do more research. After all, it is science that will ultimately reveal the most reliable methods of improving our reasoning.
Image credit: © Nick Byrd 2016; background image in public domain here.