A person on another forum I frequent posted this comment today, and I think it is interesting enough to bear some thought.
The degree of skepticism should be proportional to the consequences of being wrong.
This comment was about a specific situation under discussion at the time. At first I thought, “yeah, that makes sense”. But then I started wondering: how do we define the consequences of being wrong? For example, the vast majority of people have very little to do with evolution in their daily lives. What is the consequence of being wrong about evolution?
If we think about this from a consequence/benefit perspective, then something like climate change is a no-brainer. The consequence of being wrong about human-induced climate change is basically the horrible death of everything on the planet (yes, I’m exaggerating; roaches would probably survive). The benefit is getting rid of polluting industries, ecosystem-damaging extraction techniques, AND climate change.
So, for climate change should we be more or less skeptical?
If someone’s life is in danger, then we probably should be much more skeptical. Anti-vaccine proponents think that vaccines cause autism. We KNOW that not having vaccines results in mumps, whooping cough, and dozens of other horrible, but preventable, diseases. We should be skeptical of anti-vaccine claims. Unvaccinated people have died from these diseases.
I don’t think that the degree of skepticism is based on the consequences. For me, the degree of skepticism that I have toward a claim is based on two things.
The first is how the claim fits into my experience and knowledge. If someone claims to be able to use their mind to levitate their entire body off the ground, then I am highly skeptical. If someone claims to have a jet-pack that can lift them off the ground, then I am much, much less skeptical. Same result, but the methods are very different.
The second thing I consider is the source. This isn’t an ad hominem or any other logical fallacy. It’s using experience and knowledge to assign a degree of skepticism to a claim made by that person. If my dad claims that President Obama secretly signed a bill to take all the guns away from registered owners, then I will be highly skeptical. My dad is a fierce Obama hater, gun nut, and conspiracy theorist. Those traits, learned from years of experience, suggest that, on that topic, my dad is less than a credible source.
On the other hand, if my dad tells me how to clean a firearm, then I am not very skeptical at all of his claims. My years of experience with my dad tell me that he is very knowledgeable about firearms. My skepticism is further reduced if I know that he owns one of the firearms under discussion.
At this point, I’m not going to categorically say that my dad is wrong about Obama signing that bill. Categorically dismissing a claim is NOT skepticism.
A skeptic will say something like, “That’s a very interesting claim. Do you have any evidence to support it?” (Side note: soon after this exchange occurs, my dad will stop talking and then ignore you for the rest of your life.)
Being skeptical of a claim is not the same as saying that claim is wrong. We may think it’s wrong. We may even hope that the claim is wrong. I think that the skeptic might even be justified to say, “There is a very, very high probability that this claim is wrong.” This is, again, based on the knowledge and experience of the skeptic.
Should I add the “consequences of being wrong” to my skepticism plan?
I think maybe I do this already, just without really thinking about it. I don’t think anything of posting about an article I read in Nature or Science. I will think long and hard about a post based on something a friend shared on Facebook. Is it because of the source (above), or because of the embarrassment it will cause me if I accept something I should have been skeptical of?
I’m not sure I can answer that one.
But back to consequences. I’m honestly having trouble finding a hypothetical situation that deals with consequences but doesn’t automatically run afoul of the two conditions above. The only one I can think of would be so highly contrived as to be meaningless.
Imagine a close friend, whom I know to be intelligent and well read on a subject, makes a claim about that subject. My limited knowledge of that subject suggests that my friend’s claim is correct. However, if I’m wrong, then I get stabbed a hundred times with a rusty ice pick.
Then we start having to think about consequences. I don’t know if that’s really a valid example; like I said, it’s pretty contrived. It reminds me vaguely of the “would you stake your life on this answer?” kind of plot from a bad movie.
I’m not saying that I never consider the consequences of things. I am saying that basing skepticism on the consequences of being wrong may not be effective. Of course, if one is basing skepticism on nothing at all, then thinking about the consequences is at least better than nothing.
Help me out here. What are your thoughts on using consequences for determining how skeptical you should be of a claim?