The Problem of Induction

    How do we know that the sun will rise tomorrow? Because it always has in the past (this is an example of inductive reasoning). How do we know that the past will predict the future? If we say that it will because it always has in the past, then we’re making a circular argument. So how do we know that the past will predict the future? Do we have any justifiable way of showing that the sun will probably come up tomorrow? That gravity will continue to act tomorrow as it has today? Here’s a list of answers that have been given to the problem. I should say that not all of these answers are mutually exclusive; more than one can be right, and there may be multiple reasons that induction is valid.

    The Gambler’s Justification

    If inductive reasoning works, it will produce reliable conclusions. If it does not work, it shouldn’t be any worse at predicting the future than chance. According to this line of reasoning, as long as we give induction any chance at all of working, an inductive prediction must have an expected reliability at least a little bit higher than 50%. I think there’s something to this, although it certainly doesn’t give us very much certainty about inductive inferences.
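
    To make the arithmetic behind this explicit, here is a minimal sketch of the argument; the 99% accuracy figure for “induction works” is an assumption chosen purely for illustration:

```python
# The gambler's argument, with illustrative numbers. Assumption: if
# induction "works," its predictions are highly reliable (say 99%
# accurate); if it doesn't, they are no better than a coin flip (50%).

def expected_accuracy(p_works: float,
                      accuracy_if_works: float = 0.99,
                      accuracy_if_not: float = 0.50) -> float:
    """Expected reliability of an inductive prediction."""
    return p_works * accuracy_if_works + (1 - p_works) * accuracy_if_not

for p in (0.0, 0.1, 0.5, 0.9):
    print(f"P(induction works) = {p:.1f} -> "
          f"expected accuracy = {expected_accuracy(p):.3f}")
# Even a 10% credence that induction works yields ~54.9% expected accuracy.
```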

    The Semantic Objection

    Some have formulated the problem of induction as a question: “Is induction rational?” They argue that part of the definition of “rational” is “conforms to inductive reasoning,” and that therefore the question is no different from the question “Is the law legal?” I don’t think this objection carries much weight. The problem of induction is the problem of how we know whether our past experience will probably match our future experience. Rationality means believing only what is probably or certainly true. Put that way, the semantic objection dissolves.

    Popper’s Falsificationism

    Karl Popper believed that induction was a myth. He believed that the process of science is putting forward different theories and then keeping them until they are proven wrong (falsified). So, Popper would say that “the sun always rises” is a hypothesis that we continue believing because it has never been falsified. The trouble is, the hypotheses “the sun always rises” and “the sun has always risen in the past but won’t rise tomorrow” can both claim equal status: they are equally testable and have survived the same number of potentially falsifying tests. Secondly, it is rarely possible to absolutely falsify any theory. There is nearly always some way, some small chance, that any possible outcome might be observed under any given theory. Could Popper weaken his thesis and claim that theories ought to be discarded when we find evidence that is unlikely under the theory? No, because finding evidence that is unlikely under a theory does not by itself show that the theory itself is unlikely. Example: Suppose a lottery is held, and Margaret Smith wins it. If I propose a theory that the lottery was random and fair, and the odds of Margaret Smith winning under those conditions are one in a million, would Smith’s win constitute evidence that the lottery wasn’t fair after all? If finding evidence that is improbable under a theory were sufficient to show that theory is false, then since Smith’s win is improbable under the fair-lottery hypothesis, that hypothesis must be false. Of course, that can’t be right. This is just a sample of the problems with Popper’s thesis. That said, Popperian falsificationism does have a lot going for it; indeed, it matches a lot of our intuitions about the nature of science and about how to tell legitimate from illegitimate hypotheses, so I think it likely that it is an approximation of the truth, as are most philosophical theories.
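
    A quick Bayesian sketch makes the lottery point concrete; the prior and the “rigged” likelihood below are assumptions invented for illustration. What matters is the ratio of likelihoods between competing hypotheses, not how improbable the evidence is under one of them alone:

```python
# Bayesian rendering of the lottery example, with assumed numbers.
p_fair = 0.999           # prior: lotteries are almost always fair (assumed)
p_rigged = 1 - p_fair    # prior on some form of rigging

# Probability that Margaret Smith specifically wins:
lik_fair = 1e-6          # one ticket in a million, drawn fairly
lik_rigged = 1e-6        # a rigger had no special reason to favor Smith,
                         # so her win is about equally improbable (assumed)

posterior_fair = (lik_fair * p_fair) / (lik_fair * p_fair
                                        + lik_rigged * p_rigged)
print(f"P(fair | Smith wins) = {posterior_fair:.3f}")  # ~0.999: unchanged
```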

    Statistical Justification

    Imagine you walk up to a gumball machine but cannot see the contents within the machine. You put in a quarter, and out comes a pink gumball. Suppose that you take the time to put in 100 quarters to receive 100 gumballs, and all of them are pink. Is it probable that the next turn of the gumball machine will produce a pink gumball? Yes. This can be demonstrated: finding every one of a hundred gumballs to be pink is unlikely unless most or all of the gumballs in the machine are pink. If most or all of the gumballs in the machine are pink, then any single gumball is likely to be pink. And if any single gumball is likely to be pink, then the gumball you will receive upon your next turn is likely to be pink. The full logical/mathematical justification for this reasoning can be gained through an understanding of Bayes’ Theorem, and has been written about here. I believe this is logically sound, although I’d like to note that it only justifies the conclusion that most of the time things will be as we have seen them. It does not justify the conclusion that things will always be the way we’ve seen them. I think that the rate of radiometric decay has been the same throughout the past 13 billion years. However, according to this reasoning, I can only be confident that most of the time the decay rate is as I have observed it. If, on sufficiently rare occasions, the rate can change, then that would mean there’s a really decent chance that radiometric dates are wrong. Of course, given the actual facts about radiometric dating (different methods give the same answer, it is consistent with geological evidence, and so on) we can be confident that radiometric dating is fine, but this thought experiment still raises a good question about our inductive expectations and leaves us wondering how to justify them. The statistical justification can’t justify all of our inductive expectations, although it does provide a solid justification for most of them.
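
    Here is a minimal sketch of the calculation, assuming a uniform (Beta(1, 1)) prior over the machine’s unknown proportion of pink gumballs; it also shows why “the next gumball is pink” is far better supported than “every future gumball is pink”:

```python
# Bayesian gumball machine, assuming a uniform prior over the unknown
# proportion of pink gumballs.

def prob_next_pink(pink_seen: int, total_seen: int) -> float:
    """Posterior predictive probability that the next gumball is pink
    (Laplace's rule of succession): (k + 1) / (n + 2)."""
    return (pink_seen + 1) / (total_seen + 2)

def prob_next_m_all_pink(n: int, m: int) -> float:
    """P(the next m gumballs are ALL pink), given n pink out of n draws.
    The posterior is Beta(n + 1, 1), so E[p^m] = (n + 1) / (n + m + 1)."""
    return (n + 1) / (n + m + 1)

print(prob_next_pink(100, 100))        # ~0.990: the NEXT gumball is very likely pink
print(prob_next_m_all_pink(100, 100))  # ~0.502: "the next 100 are all pink" is a coin flip
```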

    Ockham’s Razor

    Philosopher Paul Draper and I independently came up with the idea of justifying induction with Ockham’s razor. Ockham’s razor says that the simplest explanation is probably correct. Uniformity is simpler than variety. Therefore, uniformity between past and present is probably correct. Of course, if this answer is to be accepted then Ockham’s razor cannot be justified inductively (that would be arguing in a circle). Further, the acceptance of this justification depends upon how one interprets Ockham’s razor. The word “simplicity” can have many meanings, and only some definitions would logically imply a greater probability. I think that fleshing out both of those issues would make this answer the same as Solomonoff’s, and so it is to Solomonoff that I now turn.
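
    As a rough illustration of “uniformity is simpler than variety” (compressed length is only a crude, assumed stand-in for whatever notion of simplicity the razor ultimately requires):

```python
# Compressed length as a crude proxy for simplicity: a uniform record of
# observations compresses to far fewer bytes than a varied one.
import random
import zlib

random.seed(0)
colors = [b"white", b"black", b"grey", b"brown"]

uniform = b"white" * 1000                          # every swan the same color
varied = b"".join(random.choices(colors, k=1000))  # colors vary freely

print(len(zlib.compress(uniform)))  # a few dozen bytes
print(len(zlib.compress(varied)))   # hundreds of bytes: variety costs description length
```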

    Solomonoff Induction

    All hypotheses can be represented by computer code, or by some string of symbols. Solomonoff recommends giving a greater Bayesian prior probability to the hypothesis containing the least amount of information that is consistent with the evidence (call this “Shortest Sequence Preference,” or SSP). Of course, this would justify inductive hypotheses: if we created a computer simulation, the shortest code for the simulated swans’ color would color them all the same. Thus, if every swan we have seen is white, we’d be justified by SSP in inferring that all swans are white. Why is SSP valid? I think the best answer to this is that every symbol, every one and zero, is like an assumption inside the hypothesis that it is a part of. Naturally, a greater number of assumptions will, all else being equal, have a greater chance of not holding true than a smaller number of assumptions. Think about it: every assumption added to a hypothesis brings with it an extra chance of that hypothesis being false, simply because each part of the hypothesis must hold good in order for the whole to be true. I believe that some version of the principle of indifference is assumed in this thesis, and although this is controversial, I do not have a problem with it. I need to add that it is quite possible my understanding of this thesis is wrong, and I hope that I will be corrected if it is.
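
    Here is a toy sketch of SSP, assuming each hypothesis has been given some fixed binary encoding (the bit strings below are invented for illustration) and assigning each a prior proportional to 2 raised to the negative of its length, so every extra bit halves the prior:

```python
# Toy SSP prior: weight each hypothesis by 2 ** -(length of its encoding).
# The bit strings are invented stand-ins for real program encodings.

hypotheses = {
    "all swans are white": "0110",                       # 4 bits (assumed)
    "all white, except tomorrow's swan": "0110100111",   # 10 bits (assumed)
}

weights = {h: 2.0 ** -len(bits) for h, bits in hypotheses.items()}
total = sum(weights.values())
for h, w in weights.items():
    print(f"{h}: prior = {w / total:.4f}")
# Six extra bits cost a factor of 2 ** 6 = 64 in prior probability,
# mirroring the idea that each added assumption is another chance to be wrong.
```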

