Use Disconfirming Evidence to Make Your Decisions Suck Less
In this article, I explain disconfirming evidence, give an example of it, and discuss how disconfirming evidence fits into the scientific method.
In Confirmation Bias: Why your decisions suck and you fight with your friends, I explained why confirmation bias can cause us to make poor decisions with sometimes serious consequences. I also told you about the power of using disconfirming evidence to counter confirmation bias. In this article, I’ll expand on that post by explaining disconfirming evidence. Then I'll follow up with a disconfirming evidence example in the form of a famous study. I'll close by describing how disconfirming evidence fits into the scientific method.
The Two Types of Evidence
We must depend on evidence to guide our thinking when we try to lead a life of reason and intellect. Evidence is, after all, the foundation on which truth rests. Yet, we often look at only one kind of evidence while ignoring other pertinent information. When we do, we increase our chances of making an error.
Confirming evidence is the first type of evidence and what we most often search for.
We're wired to want to be correct, and confirming evidence is the quickest, easiest way for us to get there. That may sound good, but bias is also more likely to lurk in confirming evidence.
The problem is that confirming evidence often supports more than one hypothesis. But in our haste to prove our belief, we focus too narrowly on a single explanation, causing us to miss other possible theories.
Disconfirming evidence is the second type. Unlike confirming evidence, disconfirming evidence attempts to prove our hypothesis wrong. It's a valuable tool for eliminating hypotheses from the pool of beliefs created by confirming evidence.
Despite disconfirming evidence’s value, our brain gravitates towards finding confirming evidence, as P.C. Wason demonstrated in a famous study.
An Example of Disconfirming Evidence
P.C. Wason conducted an experiment in 1960 that illustrates the danger of relying on confirming evidence. The study went like this.
I have a secret rule in my head that the sequence 2-4-6 follows. Your job is to figure it out.
You analyze the 2-4-6 sequence and give me your own three-number sequence, and I'll tell you if your series complies with my rule. You're allowed to provide me with as many sequences as you want. Then, when you think you've figured out the rule, you give me your guess.
It may sound pretty straightforward, but only six out of 29 participants succeeded.
Based on the 2-4-6 series, you may think the rule is “a series of numbers that increase by two.” So, you give me the sequences 10-12-14, 1-3-5, and 9-11-13.
Or you may think the rule is “the difference between the first two numbers equals the difference between the last two.” In that case, you give me the sequences 1-2-3, 10-16-22, and 50-60-70.
In each instance, I would tell you that your sequence is valid according to my actual rule. But if you then guessed your hypothesized rule, you would have been wrong.
That’s because the actual rule is “any three ascending numbers.”
You went wrong by testing only sequences that confirmed your hypothesized rule while ignoring disconfirming sequences. Most of the study's participants made the same mistake.
For instance, if you thought the rule was "a series of numbers that increase by two," you should have given me a sequence like 1-2-3 that disproved your guess. Doing so would have told you that you had the wrong rule.
Then you could have updated your hypothesis based on this new information.
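The logic of the task is easy to see in code. Here is a minimal Python sketch of the example above; the function names and test sequences are mine, chosen purely for illustration:

```python
# The experimenter's secret rule: "any three ascending numbers."
def actual_rule(a, b, c):
    return a < b < c

# The participant's hypothesis: "numbers that increase by two."
def hypothesized_rule(a, b, c):
    return b - a == 2 and c - b == 2

# Positive tests: sequences chosen because they fit the hypothesis.
positive_tests = [(10, 12, 14), (1, 3, 5), (9, 11, 13)]

# Every positive test satisfies BOTH rules, so no number of them
# can tell the two rules apart.
print(all(actual_rule(*t) for t in positive_tests))        # True
print(all(hypothesized_rule(*t) for t in positive_tests))  # True

# A negative test: a sequence that violates the hypothesis.
# It fails the hypothesized rule yet still satisfies the actual rule,
# which proves the hypothesized rule is wrong.
print(hypothesized_rule(1, 2, 3))  # False
print(actual_rule(1, 2, 3))        # True
```

The point of the sketch is that confirming answers leave both rules standing, while a single well-chosen disconfirming test eliminates one of them.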
The study’s results led Wason to conclude that getting the correct rule requires “a willingness to attempt to falsify the hypotheses, and thus to test those intuitive ideas that so often carry the feeling of certitude.”
Why do we ignore disconfirming evidence?
We accept information consistent with our beliefs while rejecting conflicting information without thinking about it.
Max Bazerman calls this tendency the confirmation trap.
The issue is that our default information processing method makes us accept confirmatory evidence without question unless there is an unavoidable reason not to. On the other hand, we accept disconfirming evidence only after finding we can't dismiss it.
So, it’s easier for us to accept confirming evidence than it is disconfirming evidence.
The reason is twofold.
First, we want to be correct. We feel better when we read a study or hear a news report that supports our beliefs. But, when the opposite happens, we’re put in a state of cognitive dissonance.
Second, our attention and cognitive resources are limited, so we tend to seek out only confirming information.
Similarly, we’ll interpret incomplete information so that it’s consistent with our beliefs.
Memory works the same way to conserve resources. When we consider a hypothesis, any information in our memory consistent with that hypothesis becomes available while other information stays buried.
So, confirmation bias serves to keep us feeling good while reducing our cognitive load.
But, as Wason demonstrated in his research, favoring confirming evidence can lead to poor decisions.
Karl Popper and Falsifiability – Disconfirming Evidence and the Scientific Method
“No amount of experimentation can ever prove me right; a single experiment can prove me wrong.” [1]
Karl Popper was a philosopher of science famous for coming up with the notion of falsifiability.
We use inductive reasoning when we attempt to confirm a hypothesis. In these cases, we’re drawing an inference about the unobserved based on what we observed.[2]
For example, if I see a woman walk her dog past my house every morning for the past week at 9 AM, I may expect her to do the same tomorrow. But I can't know that for sure because the future is uncertain. The best I can say is that the woman will probably walk her dog by my house at 9 AM tomorrow.
According to Popper, the proper role of observations is not to confirm hypotheses but to criticize and refute them. So, we can avoid the above problem of induction if we instead seek to falsify the theory.
Let’s go back to the above example to see falsification in action.
Based on observation, I hypothesize that the woman walks her dog past my house every morning at 9 AM.
I can approach testing this hypothesis in two ways. I can either seek to prove my hypothesis or disprove and reject it.
So, every morning I sit outside with my cup of coffee. And when I see the woman walk by with her dog, I check off the day. In this case, I’m focusing on confirming evidence.
However, the problem with this approach is that I’ll never be able to prove that my hypothesis is correct. I can’t predict with certainty that the woman will walk her dog by my house tomorrow, next week, or next year.
But, if the woman doesn’t walk by even once, I know my hypothesis is incorrect, and I can stop testing it.
Then, I can use the new information to update my hypothesis to something like the woman walks her dog past my house every morning at 9 AM when it's sunny.
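The asymmetry between the two strategies can be shown in a few lines of Python. The observation log below is invented for illustration:

```python
# Universal hypothesis: "every morning at 9 AM, the woman walks
# her dog past my house."
# A hypothetical week of observations; True means she walked by.
observations = [True, True, True, True, False, True, True]

# Confirming strategy: tally the supporting days. However many
# we collect, the count can never prove the universal claim.
confirmations = sum(observations)
print(confirmations)  # 6

# Falsifying strategy: a single counterexample refutes the
# universal claim outright, and we can stop testing.
hypothesis_survives = all(observations)
print(hypothesis_survives)  # False
```

Six confirmations tell us nothing conclusive, but the one missed morning settles the question, which is exactly why Popper put refutation at the center of scientific testing.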
Pseudoscience
Pseudoscience is a theory or practice that doesn't have a scientific foundation. If a hypothesis can't be falsified, according to Popper, we should disregard it as pseudoscience.
An example will help explain. The psychologist Alfred Adler developed the theory that compensating for feelings of inferiority motivates human behavior.
Now, let’s take two cases. In the first, a man murders a child by drowning him. In the second, a man risks his own life to rescue a child from drowning.
We could argue that in both cases, the man acted out of feelings of inferiority. In the first, he tried to prove himself by committing a crime. In the second, he did so by performing a heroic act.
Adler's theory could, therefore, explain any human behavior. It's impossible to reject. And as such, we can't subject it to scientific analysis.[3]
Conclusion
We all have beliefs about the world that we cling to. When it comes to testing those beliefs, we’re prone to only test the cases that confirm our beliefs. But, as Wason showed, this positive test strategy can lead to misjudgment because it doesn’t account for the possibility that a test case can fit many hypotheses.
Seeking out disconfirming evidence, on the other hand, gives us a way to reject conclusions. And in doing so, we know when to stop putting resources into testing a hypothesis. The use of falsifiability also gives us a way to distinguish science from pseudoscience.
[1] While we attribute this quote to Einstein, he likely didn’t say it. But, it’s still good to keep in mind.
[2] I find it easier to think in terms of specific and general. In inductive reasoning, we look at a smaller, specific selection from the population to draw a conclusion about the larger, more general population.
[3] This doesn’t mean the theory is wrong. Only that we can’t test it.
Bibliography
Zimring, James C. What Science Is and How It Really Works. Cambridge University Press, 2019.
Kahneman, Daniel. Thinking, Fast and Slow. New York: Farrar, Straus and Giroux, 2015.
Bazerman, Max, and Don Moore. Judgment in Managerial Decision Making, 8th Edition. John Wiley & Sons, 2012.
Klayman, Joshua, and Young-won Ha. “Confirmation, Disconfirmation, and Information in Hypothesis Testing.” Psychological Review 94, no. 2 (1987): 211–28. https://doi.org/10.1037/0033-295x.94.2.211.
Wason, P. C. “On the Failure to Eliminate Hypotheses in a Conceptual Task.” Quarterly Journal of Experimental Psychology 12, no. 3 (1960): 129–40. https://doi.org/10.1080/17470216008416717.