Does the Fine-Tuning Argument Commit ‘The Fallacy of Affirming the Consequent’?

    This blog post will discuss a series of syllogisms that spell out the fine-tuning argument; those syllogisms are a must-read for understanding what follows.

    The conclusion of Syllogism A follows necessarily from its premises, so we won’t bother disputing it.

    The first premise of Syllogism B is: “Theories which predict a very specific state of affairs (which would otherwise have a low probability of occurring) gain probability for themselves when this state of affairs is verified.” Those who are not scientific realists may wish to dispute this. They would say that this premise commits the fallacy of affirming the consequent. The fallacy occurs when someone reasons: “If A occurred, we should observe B. We observe B. Therefore, A occurred.” Put in less abstract terms, the argument might run as follows: “If Ryan won the lottery, he should have a brand new car (we know that one of the first things Ryan would do if he won the lottery is buy a new car). Ryan has a new car. Therefore, Ryan won the lottery.” The problem is that the cause of Ryan’s new car could be a lot of things besides winning the lottery. Maybe he won a new car in a contest held by a local radio station. Maybe he simply went out and bought it, in spite of the financial pain it would cause him. And so on and so on.
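    To put the worry in probabilistic terms, here is a rough Bayesian sketch of the Ryan example, with made-up numbers used purely for illustration. Suppose the prior probability that Ryan won the lottery is one in ten million, that winning would make a new car very likely, P(new car | lottery) = 0.9, and that new cars are fairly common anyway, P(new car) = 0.05. Then Bayes’ Theorem gives:

    P(lottery | new car) = P(new car | lottery) × P(lottery) / P(new car) = (0.9 × 0.0000001) / 0.05 ≈ 0.0000018

    Seeing the car does raise the probability that Ryan won the lottery, but only to roughly two in a million; the low prior and the many alternative explanations swamp the confirmation.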

    Here’s a comment I left on Common Sense Atheism that explains my views on this:

    I don’t think affirming the consequent is always fallacious. On page 207 of “29+ Evidences for Macroevolution,” Douglas Theobald states:

    “all scientific conclusions rely upon the fallacy of affirming the consequent, and in doing so they rely upon inductive extrapolation.”

    http://www.talkorigins.org/pdf/comdesc.pdf

    But of course it would be absurd to say that scientific conclusions are therefore untrustworthy.
    You know, I’ve read that Bayes’ Theorem logically proves “that a hypothesis is confirmed by any body of data that its truth renders probable.”

    http://plato.stanford.edu/entries/bayes-theorem/
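    To make that concrete, here is a minimal worked example with made-up numbers, purely for illustration. Bayes’ Theorem says:

    P(H | E) = P(E | H) × P(H) / P(E)

    So whenever a hypothesis H renders the evidence E more probable than it would otherwise be, i.e. whenever P(E | H) > P(E), observing E raises the probability of H. If, say, P(H) = 0.1, P(E | H) = 0.9 and P(E) = 0.3, then P(H | E) = (0.9 × 0.1) / 0.3 = 0.3, so the evidence triples the hypothesis’s probability without coming anywhere near proving it.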

    But the problem is that even when we take all the theories that we are aware of and see which one renders the evidence we have most probable, in most cases it will not be possible to become aware of all the logically possible theories that could explain the evidence, and therefore we can’t know whether there is a better theory that we have yet to think of.

    A good example is the alleged “fine-tuning” of the laws of physics. It is quite conceivable that tomorrow someone will come up with a new account of the fine-tuning that no one else has thought of, especially since so many accounts of the fine-tuning (like Smolin’s cosmological natural selection and Paul Davies’ observer selection conjecture) are not the kind of things I would ever have conceived of before I heard of them.
    So, even if we were able to say that cosmological natural selection (for example) was the most probable explanation for the fine-tuning, we’d have to keep in mind that it is only the most probable KNOWN explanation, because there may be another theory that is even more probable of which we are not yet aware.

    So, how can we be justified in believing scientific conclusions? Are we justified in believing them at all? I think so. I define a belief as a proposition that one thinks and acts as if it were true. A belief is a proposition which you RELY UPON to be true. And what better proposition to rely upon than the best known explanation [of whatever phenomenon]?

    If you’re in a position where you need to choose a theory to rely on (as scientists often are), then why not rely on the best explanation found to date? That’s my solution to the problem.


    Article by: Nicholas Covington

    I am an armchair philosopher with interests in Ethics, Epistemology (that's philosophy of knowledge), Philosophy of Religion, Politics and what I call "Optimal Lifestyle Habits."