General relativity vs. Newtonian mechanics is a recent example
Naval: There’s also Solomonoff’s theory of induction. I don’t know if you’ve looked at that at all?
Brett: I’ve heard of it, but I haven’t investigated it.
Naval: I’m going to mangle the description. It says that if you want to find a new theory that explains why something is happening—in this case something that’s encoded as a binary string—then the correct one is a probability-weighted theory that takes into account all possible theories and weighs them based on their complexity. The simpler theories are more likely to be true, and the more complex ones are less likely to be true. You sum them all together, and that’s how you figure out the correct probability distribution function for your explanation.
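To make that concrete, here is a toy sketch of the complexity-weighting idea. The hypotheses, their description lengths, and their predictions below are made-up placeholders; the real Solomonoff prior ranges over all computable programs, not a hand-picked list.

```python
# Toy illustration of complexity-weighted prediction (hypothetical hypotheses,
# not the full Solomonoff prior, which ranges over all programs).

# Each hypothesis: (name, description length in bits, probability it assigns
# to the next bit of the observed string being 1).
hypotheses = [
    ("all zeros",        5, 0.0),
    ("alternating 0101", 8, 1.0),
    ("fair coin",       12, 0.5),
]

# Prior weight 2^(-length): simpler (shorter) hypotheses get more weight.
weights = [2 ** -length for _, length, _ in hypotheses]
total = sum(weights)
priors = [w / total for w in weights]

# Mixture prediction: probability-weighted sum over all hypotheses.
p_next_bit_is_1 = sum(p * pred for p, (_, _, pred) in zip(priors, hypotheses))

for (name, length, pred), prior in zip(hypotheses, priors):
    print(f"{name:16s} length={length:2d} bits  prior={prior:.3f}  P(next=1)={pred}")
print(f"mixture prediction P(next bit = 1) = {p_next_bit_is_1:.3f}")
```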
Brett: That’s similar to Bayesianism, isn’t it? In both cases they’re assuming you can enumerate all the possible theories. But it’s very rare in science to have more than one viable theory. There was the Newtonian theory of gravity and the theory of general relativity. That’s one of the rare occasions where you had two competing theories. It’s almost unknown to have three competing theories to try and weigh.
Naval: What confuses people is that induction and Bayesianism work well for finite, constrained spaces that are already known. They’re not good for new explanations.
Bayesianism says, “I got new information and used it to update the probability estimates I already had. Now I’ve revised my probabilities based on the new data, so I believe something different is going to happen.”
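In code, that update step is just Bayes’ rule. The numbers below are invented for illustration; the point is only the shape of the calculation: a prior belief plus likelihoods for the new data yields a revised posterior.

```python
# Minimal Bayes update with made-up numbers.
# prior: P(hypothesis) before the new data arrives.
# p_data_given_h / p_data_given_not_h: how likely the new data is under
# the hypothesis versus under its negation.

def bayes_update(prior, p_data_given_h, p_data_given_not_h):
    """Return P(hypothesis | data) via Bayes' rule."""
    numerator = p_data_given_h * prior
    evidence = numerator + p_data_given_not_h * (1 - prior)
    return numerator / evidence

# Before the data: 1% prior belief in the hypothesis.
# The new observation is 20x more likely if the hypothesis is true.
posterior = bayes_update(prior=0.01, p_data_given_h=0.9, p_data_given_not_h=0.045)
print(f"updated belief: {posterior:.3f}")  # ~0.168
```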
For example, there’s the classic Monty Hall problem from the “Let’s Make a Deal” TV show. Monty Hall calls you up, and there are three doors. One has a treasure behind it, and there’s nothing behind the other two.
You pick a door—one, two or three. Then he opens one of the other two doors and shows you there’s nothing behind it.
Hall asks, “Now, do you want to change your vote?”
Naive intuition says it shouldn’t matter whether you change your vote. Why should it matter that the door he opened has nothing behind it? The probability should not have changed.
But Bayesianism says you’ve got new information, so you should revise your guess and switch to the other door.
An easier way to understand this is to imagine there were 100 doors and you pick one at random. Then he opens 98 of the remaining 99 and shows you there’s nothing behind them.
Now do you switch?
Of course you do. You had one-in-100 odds of picking the right door the first time, so 99 times out of 100 the prize is behind the remaining unopened door; if you switch, your odds are 99 in 100.
So it becomes much more obvious when you change the thought exercise from three doors to a hundred.
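A short simulation makes the switch-versus-stay gap concrete for both versions of the game (the door counts and trial count here are arbitrary choices): staying wins about 1/n of the time, switching about (n-1)/n.

```python
import random

def monty_hall_trial(n_doors, switch):
    """One game: pick a door, the host opens every other empty door but one, then optionally switch."""
    prize = random.randrange(n_doors)
    choice = random.randrange(n_doors)
    if not switch:
        return choice == prize
    # The host leaves exactly one other door closed: the prize door if you
    # missed it, otherwise a random empty door. Switching takes that door.
    if choice != prize:
        remaining = prize
    else:
        remaining = random.choice([d for d in range(n_doors) if d != choice])
    return remaining == prize

def win_rate(n_doors, switch, trials=100_000):
    return sum(monty_hall_trial(n_doors, switch) for _ in range(trials)) / trials

for n in (3, 100):
    print(f"{n} doors: stay ~{win_rate(n, False):.3f}, switch ~{win_rate(n, True):.3f}")
# Expected: stay ~1/n, switch ~(n-1)/n
```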
People discover this and say, “Of course, now I’m a smart Bayesian. I can update my priors based on new information. That’s what smart people do. Therefore, I’m a Bayesian.” But it in no way helps you discover new knowledge or new explanations.
Brett: That’s the uncontroversial use of Bayesianism, which is a very powerful tool.
It’s used in medicine to figure out which treatments are likely to be more effective than others. There are whole areas of mathematics, Bayesian statistics among them, that can be applied in science without any controversy.
It becomes controversial when we say that Bayesianism is the way to generate new explanations or the way to judge one explanation against another.
In fact, the way we generate new explanations is through creativity. And the way we judge one explanation against another is either through experimental refutation or through straightforward criticism, when we realize that one explanation is bad.