Episode 129: Is Probability Real?

  • Links to this episode: Spotify / Apple Podcasts
  • This transcript was generated with AI using PodcastTranscriptor.
  • Unofficial AI-generated transcripts. These may contain mistakes. Please check against the actual podcast.
  • Speakers are denoted as color names.

Transcript

[00:00:00]  Blue: Hello out there! This week on the Theory of Anything podcast, Bruce takes a deep dive into what precisely David Deutsch and other critical rationalists mean when they claim that probability doesn’t exist. I think Bruce is wrestling with the implications of our best theories more than practically anyone I know of, and I hope someone else gets something out of this too.

[00:00:34]  Red: Welcome to the Theory of Anything podcast. Hey, Peter. Hello, Bruce. How are you? Good. This is going to be a short episode today, but we're going to talk about a conversation I had on Twitter with David Deutsch about probability, which left me confused, and I wanted to actually talk through what he said and explain why I find it confusing. Okay. Think of this as part of our ongoing series: we did that episode on probability previously and Deutsch's concept of probability, and I have yet to get to the episode where we're going to talk about his talk, Physics Without Probability. But this is kind of setup for that, where I'm trying to reveal my past attempts to make sense of his theories on probability and why I find them very, very confusing.

[00:01:25]  Blue: Okay.

[00:01:26]  Red: So back in episode 112, concepts versus words, words versus concepts, does randomness exist, I asked what Deutsch could possibly mean when he claims randomness does not exist. Now, I pointed out that even though quantum mechanics is deterministic at the level of the multiverse, from the point of view of an observer living in a universe, which means all observers imaginable, there is still something that exactly matches the probability calculus, which is what we call randomness today. And it's very distinct from pseudo-randomness because it's measurably different from it. And I gave examples of this and actually showed visuals of how they're different.

[00:02:05]  Blue: Just to pause one second, just to back up, am I correct? I mean, in the conversation he had with us on our podcast, it was a really heady conversation when you got into stochasticity and probability with him. But I think he said pretty clearly that his claim that probability does not exist was based on many worlds. Is that accurate?

[00:02:42]  Red: What he actually said, I pushed him on this. Okay, so understand that when I did that conversation, I hadn’t at that point in time gone and listened to his podcast on physics without probability.

[00:02:57]  Blue: His lecture.

[00:02:57]  Red: Yeah. So I didn't have a strong understanding of his view. I had had this conversation I'm about to explain, which left me confused. And I had heard what he had said on the Increments podcast, which also left me confused.

[00:03:12]  Blue: Okay.

[00:03:12]  Red: He says some good things on the Increments podcast that I actually strongly agree with. And I'm planning to eventually show that I don't entirely disagree with David Deutsch, that there's a core idea that he's getting at that I think is actually correct. Okay. So

[00:03:27]  Blue: it seems like his take is that he seems to feel very strongly that, because of many worlds, stochasticity at some meta level does not exist. But then your point is that on a practical level, probability or randomness or whatever, stochasticity, I'm having a hard time saying that word today, I don't know why, but it's a meaningful concept.

[00:04:00]  Red: Yes.

[00:04:00]  Blue: Is that how I can get my brain around it?

[00:04:04]  Red: Yes. Okay. Let me make my argument stronger here. You're exactly correct. When I pushed David Deutsch, I kept saying to him, why don't we just call quantum effects randomness? Because that's what they are, right? And he made several attempts, and we're going to talk about that in future podcasts, where he made several attempts to explain why he didn't think it was randomness. And I always kind of came back at him and said, but this is what randomness is. And he finally said, and this was a good clarification, although it caught me off guard because it wasn't what I was expecting, he said, it's only random if it comes from a nondeterministic source and the multiverse is nondeterministic, so it's not randomness. And that was his answer. But that answer is essentialism. It doesn't change. Wait, wait, wait.

[00:04:49]  Blue: You just said the multiverse is nondeterministic. It is deterministic.

[00:04:55]  Unknown: Yes,

[00:04:55]  Blue: sorry. Yes, I just want to clarify that. It's non-random.

[00:04:58]  Red: Sorry, I meant to say it's non-random. But that answer is 100% essentialist. And so it's a mistake. Like, you absolutely should not make essentialist arguments like that. Okay. The problem here is that he's starting with this idea that the word randomness, or stochasticity, means it didn't come from a deterministic source. The problem he's missing is that words always have implicit understandings based on context, right? If you were to write down the definition of stochasticity as something that comes from a random source instead of a deterministic source, then yeah, I see where he's coming from. But of course these terms were invented by people that lived in a universe, not some observer of the multiverse. So when we say it's stochastic or it's deterministic, it's stochastic because it's not from a determined source, we don't mean at the level of the multiverse. We don't. We don't mean that. Okay. We mean that at the level of a universe, this is a random process, not a determined process. That's what randomness actually means in terms of how people really use it in real life. Okay. That's an example of don't vague-man your theories, clarify. I'm clarifying something: even though people will say, oh, it's stochastic because it came from a random source, and if it's not a random source, we call it pseudo-randomness instead, there has always been the implicit implication, maybe until we knew about the multiverse, that we were talking from the point of view of an observer in the universe. If you're looking at how these words are used inside of a universe by real people, quantum mechanics is randomness, period, end of story. It is what we mean by randomness. And

[00:06:54]  Blue: did he accept this characterization of the debate? Did you get that feeling?

[00:06:58]  Red: He didn't. When he gave the answer that he calls it nonrandom because the multiverse is actually determined, that caught me off guard. I didn't even know what to say to that, because I could see immediately that this was a definitional thing. He was defining randomness in a narrow way, different from the way people would normally use the term. I mean, he's right for that way of defining things. Okay.

[00:07:22]  Blue: Well, maybe he, I don't mean to psychologize him, but I mean, if he spent his entire career, decade after decade, arguing with the smartest people in the world about the many worlds interpretation, maybe he has somewhat of an emotional kind of attachment to -

[00:07:45]  Red: So that's a good question. And we're actually going to talk about that a little bit today, because I am going to psychoanalyze him slightly on something he said. The problem is that with a lot of the things he says, it's very difficult to figure out what he's getting at. No doubt that he is very insistent on not using probabilistic language. And that's what I'm actually going to show as an example today. And why that is, is maybe an open question. It could be precisely what you said. He has spent his whole career arguing to people that the multiverse actually exists and is determined. It could also be that he has spent a great deal of his career arguing with Bayesians, right? Where Bayesians want to put absolutely everything into probability theory. And he, along with Popper, feels that's an inappropriate thing to do, epistemologically speaking. And so it may be that that has created a sort of hostility towards the language of probability, because it does get abused in various ways. I think what I'm trying to suggest here, though, is that we need to be more open to the idea that words have multiple meanings. And this is the point I made in episode 112. If you want to really accept Deutsch's point, don't collapse randomness and pseudo-randomness into one thing, pseudo-randomness. You will only confuse yourself if you do that. It's okay to make three categories. We can call them, like, true randomness, actual randomness, and pseudo-randomness. If you can wrap your mind around that, notice how this is focusing in, not fuzzing out, okay? And this is always going to be the way you go about it.

[00:09:28]  Red: If you can accept that Deutsch's point is correct, but still hold in your mind that there is a distinction between actual randomness and pseudo-randomness, then you're okay. But I know from talking to crit-rats, they don't accept that. They are completely confused on the subject because of how Deutsch chose to word it, okay? And this is why I keep pushing this, that there is something off with the way he's kind of lensing this, okay? So it seems to me that for Deutsch, something isn't random unless it comes from a non-deterministic source, even at the level of the multiverse, even though I am arguing that's not what the term originally meant in terms of actual usage, okay? That that is now a new, more narrow usage. While deterministic versus non-deterministic is a fairly typical way of defining pseudo-randomness versus randomness, we usually have a single universe and an observer inside that universe in mind, not an imaginary observer of the multiverse, which does not exist. So it feels strange to insist that QM is, quote, pseudo-randomness just because the multiverse is deterministic. That collapses randomness and pseudo-randomness into the same thing, even though in reality they are physically quite different, very, very, very different, and we need to keep this distinction in our minds, okay? It then forces us to bend over backwards inventing new, strange terms to express the idea that there are two kinds of pseudo-randomness: the kind that only approximates the probability calculus, what we used to call pseudo-randomness, and this new kind that exactly matches the probability calculus and behaves exactly like what we used to call true randomness, which supposedly doesn't exist. It's horrifically confusing, like, it's terribly confusing.

[00:11:14]  Red: This is not the way to get it straight in your mind, trying to go about wording it this way. Presumably the word randomness was invented by observers inside universes, us, to point to whatever you want to call it that happens to be exactly identical to what we today call randomness. And really because of that, I just can't go down the road of using language the way he uses it here, because I think it has caused confusion. I think it has even caused confusion for himself, okay? Because we do use words to kind of create categories in our mind, and once you say there is no randomness, you've collapsed the two categories that used to exist into one category, and now you can't think clearly about them. It would sure be easier to just continue to use the terms randomness and pseudo-randomness as we always did, and just note instead that randomness is now known to be due to quantum effects, and that to an imaginary, non-existent observer of the multiverse, this randomness would be seen as a deterministic splitting of the universes. In other words, keep the original terms and tack on a caveat that explains it. That's a much better way to go about this. Now, I previously said I wanted to go over Deutsch's talk on physics. We're not doing that yet, but I am going to describe a short conversation I had with him on Twitter about this very subject. How do crit-rats understand David's theory? Here is a long tweet from a crit-rat that explains his understanding of Deutsch's view in detail. And in fact, he felt so strongly about this.

[00:12:45]  Red: He posted it word for word twice, like months or years apart or something. This is the quote now, this is the tweet. One of David's most controversial but valuable contributions to society is the rejection of probability, with the exception of, for example, card games, as an expression of truthiness or justified belief when applied to predictions or conclusions. There is no probability of how likely we are to be hit by a meteor tomorrow. There is no probability of how likely the stock market is going to be doing as well tomorrow as it is today. There is no P(doom), no percentage that you can put on whether AI is likely to wipe out humanity or not. Things either happen or they don't, and we can either explain why they will and how, or we can't. All attempts at putting a percentage on a prediction are really just guesswork dressed up as reasoning. If a meteor is going to hit us tomorrow, it's already on the way and the probability is 100%. If the stock market is going to crash tomorrow, then the reasons for its crash have already been put in motion, maybe decades before. Whether the AI is going to kill us all depends on what we decide to do with it, not on what some calculation says. David's primary critique is for the field of science, but it goes beyond that. Far too many decisions in modern society are based around the false certainty of using Bayesian probability. It's like a placebo for a society that demands certainty in an uncertain world. It's not just false, it's regressive, as it slows down knowledge creation.

[00:14:32]  Red: The only thing that can change the outcome of the future is the creation of new knowledge, knowledge based on good explanations that are hard to vary. We can create knowledge that allows us to divert the meteor before it hits Earth. We can create knowledge that allows us to hinder a crash of the stock market, either directly or indirectly. We can create knowledge that lets us evolve side by side with very powerful AI instead of enslaving it or being enslaved by it. Like everything else in life, there's no guarantees, but there are definitely better or worse ways to deal with uncertainty. Probability just isn't one of them. Now, I don't want to be unfair here. First of all, this is probably an okay summary of David's views. I don't know that for sure. We know the CritRat community will enhance David's views in various ways and end up with misunderstandings, and it's possible that that is the case here. Furthermore, I think you can defend parts of his view if you take them from a certain viewpoint. And I would even add that it's nearly impossible to caveat what you're saying with enough caveats that it's literally true all the time. So what I'm about to do may be a little unfair, and I'm admitting this upfront. I'm just trying to make a point. I'm not trying to make a broader argument against this viewpoint. But we should note that much of what he says is for sure strictly false if taken literally. And you really have to see that that is the case. If you're going to hold this view, you should at least hold it in a way that you understand that it isn't meant to be strictly true.

[00:16:10]  Red: For example, anything involving quantum effects, from the double-slit experiment to radioactive decay, is genuinely probabilistic, at least from the perspective of real observers, not hypothetical ones watching the entire multiverse. One might argue that he accounts for this when he says probability applies to random games. Though he interestingly uses card games, which might be the least representative example he could have used there. I suspect there are few if any quantum effects involved in card games, for example. But let's take insurance markets as a better counterexample. No one thinks mortality for an individual is anything other than deterministic. So his critique that it will happen or not absolutely applies to all mortality situations. Yet even though we know it's a deterministic process, we know it's not a random process, we still apply probability theory extremely effectively via insurance, and people make real money doing so. Calling that, quote, guesswork dressed up as reasoning, that's wrong. It's just wrong. This is real reasoning we're doing. Now, you can make similar points with, say, manufacturing error rates, every kind of insurance, credit scoring, loan approval, warranty pricing, election forecasting. The list goes on and on here. So whatever merits his view might have, it is not possible to take his view literally, because if you take it literally, it is for sure saying a number of false things. Now, to be fair, he does focus on, and this is probably his intent, some somewhat more questionable uses of probability that maybe need a bit more discussion. For example, I kind of doubt he would actually say, oh, insurance is therefore wrong, don't do insurance. I mean, there's so much overwhelming evidence that you can use probability theory with insurance.

[00:18:16]  Red: So specifically, he talks about estimating the probability that AI will destroy humanity. That is really and truly a questionable use of probability theory when Bayesians do that, I admit. However, he lumps that in freely with the probability that an asteroid will hit Earth, as if that's the same, and they're not the same. Those are two very different examples. So there is a reasonable interpretation of, quote, the probability an asteroid will hit us this year, say. Even under frequentism, that's true. You start with the assumption, possibly false but explainable, that asteroid impacts can be modeled as a frequency. Then you look at how often asteroids hit Earth and infer the probability of an impact next year, given no further information available, such as, say, seeing an asteroid in a telescope on its way to Earth. That frequency is roughly one impact every 500,000 years. I actually looked it up. So given that it's one in 500,000, the probability for any given year is 0.0002% that we'll be hit by a, you know, life-destroying asteroid. I hate to say it, but you can tell me that's not a good use of probability theory, and you're just wrong. That is a very reasonable and useful number right there. You ignore it at your peril, okay? Yes, you can object that really it will hit or it won't, and that's true, but that ignores what this number actually represents, which isn't even attempting to tell you if an asteroid will hit tomorrow or not.
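
The frequency-to-probability step Bruce describes here can be sketched in a few lines. The once-per-500,000-years figure is the one he cites in the episode, and treating impacts as a constant per-year rate is the stated (possibly false, but explainable) assumption:

```python
# Minimal sketch of the asteroid arithmetic from the episode.
# Assumption (from the transcript): one life-destroying impact roughly
# every 500,000 years, modeled as a constant per-year frequency.
years_per_impact = 500_000
annual_probability = 1 / years_per_impact  # 0.000002
print(f"annual probability: {annual_probability:.4%}")  # prints 0.0002%
```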

[00:20:00]  Blue: I hope this isn't too much of a tangent or anything, but frequentism, I kind of got hung up on that word. It's not something I hear very much. How is that different from Bayesianism?

[00:20:15]  Red: So there’s the orthodox view of statistics which is often called frequentism.

[00:20:20]  Blue: Okay.

[00:20:21]  Red: So it would be the more historical, I mean they both go way back, so calling it more historical is inaccurate.

[00:20:26]  Blue: Yeah.

[00:20:27]  Red: It would be more accurate to say that there have been more frequentists in the past than Bayesians and Bayesianism is kind of a newer idea that’s caught on more recently.

[00:20:38]  Blue: So Bayesianism is more specific or they just completely different?

[00:20:42]  Red: They are different.

[00:20:43]  Blue: So

[00:20:43]  Red: there tends to be a way to translate between the two. So they’re not completely different, right? But they are philosophically very different views of what statistics is. So frequentism would be the view that to have a probability it must be based on an actual frequency of something, okay? Whereas Bayesian statistics takes the subjective view that what you’re actually measuring is the plausibility of a theory based on what you currently know. And therefore it places the probability as a measure of your current knowledge state, okay?

[00:21:15]  Blue: So it’s taking into account human minds and knowledge into this equation. Is that fair? Where frequentism is more just a simple calculation?

[00:21:28]  Red: Yeah, so frequentism is trying to root itself in actual frequencies of something. So let's use the example of AI doomerism versus an asteroid. An asteroid hitting the Earth has a specific frequency that's measurable. Therefore, a frequentist statistician wouldn't mind using statistics to try to model something about this, okay? And a frequentist would never try to measure the probability that AI is going to wipe us out, because there is no frequency of that. It's never happened before, okay? Now, if we had, like, a hundred worlds, each of which invented AI, and half of them got wiped out by their AI, at that point a frequentist would then assign a number statistically to the probability of getting wiped out by AI. But they would require that there's a real frequency that actually exists somewhere that we have access to, okay? Whereas a Bayesian is more comfortable with this idea that the probability calculus is really a plausibility calculus, not a measure of frequency. And therefore they would be more comfortable with the idea that you can assign a probability to a one-off event that there's no frequency behind, okay? Such as AI. To be honest, the AI example is bad even for Bayesians. Like, when Bayesians use it, they're really largely misusing Bayesian statistics, not using it correctly. There are better examples I could give you, I'd have to go look them up, of where you can assign a probability to a one-off event and mathematically it actually works out based on certain assumptions, okay? So there are cases where I think the Bayesians are correct that you don't have to have a frequency.
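
Bruce's hundred-worlds hypothetical makes the frequentist requirement concrete. A minimal sketch, where both counts are the transcript's hypothetical numbers, not real data:

```python
# Hypothetical from the episode: 100 worlds invented AI, half got wiped out.
# A frequentist assigns a probability only because this frequency exists.
worlds_with_ai = 100
worlds_wiped_out = 50
p_doom_frequentist = worlds_wiped_out / worlds_with_ai
print(p_doom_frequentist)  # prints 0.5
```

With zero observed trials there is no such ratio to compute, which is exactly why, on this view, the frequentist refuses to assign any number at all.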

[00:23:14]  Red: But in general, I have real strong leanings towards frequentism, because I really want to see probabilities used where you're not just making stuff up, which is what Bayesian epistemologists tend to do, and is what they mostly get criticized for by critical rationalists: they will say, well, my gut tells me that there's a 95% chance that AI will wipe us out, you know? And they might even try to justify it, because in the past, that's what happened when a more intelligent society found a less intelligent society. Like, they've got their explanations for it, but they're basically just making crap up, right?

[00:23:49]  Blue: Yeah, maybe that's… Is that the central criticism of Bayesians? Is it that they're making crap up?

[00:23:58]  Red: Yes, yes. Okay,

[00:24:01]  Blue: so… But frequentists don't do that, don't make up stuff?

[00:24:05]  Red: They do not do that, right.

[00:24:06]  Blue: Oh, okay, okay. Well, in my own way, I guess I’m understanding it.

[00:24:10]  Red: Okay, let me go back to what I was saying. You could respond to all this with, for the asteroid example, it either will hit you or it won't. That's basically what this crit-rat said, right? But that really ignores what this number actually represents, which isn't even an attempt to tell you if an asteroid will hit tomorrow or not. If you think I'm wrong, the easiest way to see that I'm getting this right is to just imagine that we lived in a galaxy where asteroids hit the planet every five years, on average. So now every year there's a 20% chance you're going to be hit by a life-ending asteroid. You better believe you'd want to know that fact, or you'd be an idiot to claim, well, the asteroid either will hit or it won't, you're just making wild guesses disguised as reasoning. Like, this number that we put as a probability, it's not meant to say there's a random process and that random process says an asteroid's going to hit us next year. That just isn't what either frequentism or Bayesianism is. And the fact that crit-rats think that shows more ignorance of probability theory under either philosophy than it represents a good argument. And you can say something like, what if we currently see an asteroid headed straight to Earth? Is it still a 0.0002% chance? Okay, the moment you say that, you've shown you have no clue how the probability calculus works. There's this idea called the requirement of total evidence: you have to look at all the evidence available to you, otherwise the number is meaningless.
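
A quick sketch of the hypothetical galaxy Bruce invokes. The once-every-five-years rate is his hypothetical; modeling each year as an independent 20% chance is an assumption of the sketch:

```python
# Hypothetical: a galaxy where life-ending asteroids hit a planet once
# every five years on average, treated as an independent 20% annual chance.
annual_p = 1 / 5                       # 20% in any given year
risk_over_decade = 1 - (1 - annual_p) ** 10
print(f"risk over a decade: {risk_over_decade:.0%}")  # prints 89%
```

The point is just that the 20% figure is actionable information, even though any particular year's impact "either happens or it doesn't."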

[00:25:48]  Red: Like, this is something that is well covered in the literature, and some of these arguments are just kind of crappy, okay? Now, AI doomerism isn't in the same category as the asteroid example, for the reasons that you and I just discussed, Peter. There's no observable frequency of AI doom that we can reference. So on this point, this crit-rat I'm quoting is partially correct. There's an issue here. There's a problem worth mentioning, and he is rightly bringing the problem up. However, consider this argument. Imagine a non-existent observer of the multiverse. If I ask them right now, at this moment, what are the odds that AGI wipes out humanity? Okay, that is a question that could be answered by an observer of the multiverse. Okay? That probability that I'm asking for, it physically exists. It's the proportion of branches of the multiverse where AGI destroys humanity from that moment forward, out of all the branches. We have no way to measure that probability, because we have no observers of the multiverse, and any estimate would be a completely wild guess. But I want to be clear. The question, what is the probability that AI wipes out humanity? That's not a misuse of probability theory. That is a physically real probabilistic quantity that actually physically exists across the multiverse. Okay? If you understand it to be the proportion of the multiverse, from this moment forward, where AI wipes out humanity. So it is not meaningless. Even here, it is not meaningless to talk of the probability that AI wipes out humanity.

[00:27:32]  Red: Now, a better argument here against Bayesians who talk about this would be that anyone who offers a probability of AGI wiping out humanity is basically basing it on nothing, and that they're making crap up. Okay? That unlike the asteroid-wiping-out-Earth example, where there's an actual frequency we can reference, when a Bayesian says there's a 95% chance that AI will wipe out humanity, they are literally just making crap up. There's nothing behind the number. The fact that they're making crap up, that's a valid criticism. The fact that there is no probability that AI wipes out humanity, that is a false criticism, because there is a probability that AI wipes out humanity. It exists as an actual number physically in the multiverse, if you believe in the quantum multiverse. So we need to get down to what are the fair criticisms and what are the bad criticisms. We should use good criticisms, right? So even here, this crit-rat's argument that there is no probability of how likely AI will destroy humanity, it's just wrong. Okay? Without a doubt, that's a physically real quantity equivalent to proportions of the multiverse. Now, despite all my caveats here, I feel like this crit-rat's point isn't without merit. I mean, I'm concentrating on how it could imply something false, how it does imply something false, maybe, but I'm intentionally ignoring the fact that he's actually making kind of a fair point, which is that there's a kind of stupidity to the way probability theory gets used, modernly. Like, I think that's his underlying point, and I don't really disagree with that underlying point. Okay?

[00:29:08]  Red: Because there really is something weirdly off with how Bayesians, particularly within the effective altruism community, invoke probability theory. Okay? And he at least did make an exception for games of chance. Surely games of chance, like a roulette wheel, for example, are appropriate uses of probability theory, right? In fact, Deutsch in Fabric of Reality talks about this, and he confirms that this is a good use of probability theory. He says, quote, on the other hand, if the sequence that comes up on a virtual roulette wheel looks unfair, we cannot know for sure that it is, but we might be able to say that the rendering is probably inaccurate. For example, if zero came up on the rendered roulette wheel on 10 consecutive spins, we would conclude that we probably do not have an accurate rendering of a fair roulette wheel. Now, I would be hard pressed to say there's anything wrong with that statement. But it turns out David Deutsch decided that this was an inaccurate statement. Brett Hall, on a podcast, and keep in mind, Brett Hall knew David Deutsch had changed his mind, okay? So on a podcast, quoting this part of Fabric of Reality, he claimed that a theory is either falsified or not, that there's no probably falsified, and that therefore this statement of David Deutsch's in the book was incorrect. Now, David Deutsch, knowing that Brett was expressing his, David Deutsch's, new current view, replied with a tweet to Brett saying, yes, Brett is quite right, what I said in Fabric of Reality about probability and the roulette wheel is flat out wrong. Okay, so Deutsch is saying that the following statement is flat out wrong.

[00:31:00]  Red: If zero came up on our rendered roulette wheel on 10 consecutive spins, we should conclude that we probably do not have an accurate rendering of a fair roulette wheel. Does that really look like a flat out wrong statement to you? Because it does not to me. That honestly, sincerely looks like a correct statement to me. So I am deeply confused that Deutsch and Brett are saying that this is a wrong statement, a flat out wrong statement, because it really looks like it’s a completely correct statement.

[00:31:37]  Blue: So, okay, so if I roll a die and I get a six 10 times in a row, I mean, from one perspective, I would probably realistically conclude that there's something wrong with the die.

[00:31:54]  Red: Yeah, and you'd be right to.

[00:31:57]  Blue: I mean, maybe a more accurate conclusion is that I'm just in a low-amplitude branch of the multiverse.

[00:32:06]  Red: Yes, that could be,

[00:32:09]  Blue: right? But is that my understanding?

[00:32:14]  Red: You are understanding this correctly, right? Okay, what would be the right conclusion? The right conclusion would be that you probably have an unfair die. That is exactly it. Probability theory is specifically about how you determine that, even though you can't know it for sure. Okay, and that is what you're studying with statistics, okay? And this is a roulette wheel we're talking about. We're not talking about AI doomerism. We're not even talking about asteroids. Okay, we are talking about a game of chance. We are talking about exactly what probability theory was designed to handle. And Deutsch is saying that this is a flat-out wrong statement. So obviously I was confused. So I asked David Deutsch on Twitter to clarify his meaning. Here's the conversation that followed. So first me: I admit that I don't actually understand why saying probably falsified… Perhaps I should have said probably inaccurate, I think Brett said probably falsified, but in any case, David Deutsch got my point. So: I don't actually understand why saying probably falsified is wrong when talking about an experiment that relies on the probability calculus, like a roulette wheel. Popper went to great lengths to work out how to use probability-calculus experiments with his epistemology. Note that I'm clearly pointing out that the roulette wheel example is an undisputed correct use of the probability calculus. It even relies heavily on small quantum effects, so we're likely talking about real randomness here, not pseudo-randomness, okay? So this is nothing like the AI doomer example. It's not even like the asteroid example. But regardless, if this doesn't fit the probability calculus, I don't see how anything could, right? Like, so it's weird that he's saying this. Here is Deutsch's reply to me.

[00:33:58]  Red: "Probably inaccurate," as a term, isn't an attribute a roulette wheel could have. "Probably falsified" isn't an attribute a theory could have. Interpreted as being about someone's brain, not the wheel or the theory, "probably" can't refer to numbers obeying the probability calculus. Now, I want to be as fair to David here as I can. And I generally want to interpret him charitably and assume he's saying something correct, or at least that I can see where he's coming from. And let's be honest, it may be that he just doesn't understand what I'm asking, right? He's very busy. He's a world-famous scientist. It'd be unreasonable to expect him to carefully parse my question and then maybe come back and ask clarifying questions. Like, that isn't really going to happen on Twitter. But when I look at that answer, it just seems really off. I think you could argue that each statement individually is a correct statement. But I don't see how you could possibly see it as an answer to the question I posed, like at all. I could argue, as I said, that he's technically correct that a roulette wheel cannot, in and of itself, have the trait of being "probably inaccurate." It either is or it isn't. Okay, I accept that. But that was not what anyone claimed, and it was not part of my question to him. Nor does that fact have any bearing whatsoever on whether his statement in The Fabric of Reality is correct or not. Of course, saying, quote, "we probably do not have an accurate rendering of a fair roulette wheel," is not even pretending to make a claim that there is some physical trait of the wheel that is called "probably inaccurate."

[00:35:50]  Red: So I don't understand why we've jumped to that as an answer when it's not even something that's been brought up at all. Okay, it seems far more reasonable to read that statement, which, mind you, was Deutsch's statement, right, as saying: we have probabilistic evidence of some determined level or strength, typically put as a p-value, that this wheel is unfair or biased. But we might be wrong, because there is some chance, determined by the p-value, that a fair wheel would give this result too, though that is unlikely, as determined by the p-value. That seems to me like what "probably inaccurate" would mean. Like, there's such an obvious way to read the phrase "probably inaccurate" or "probably falsified" that doesn't require you to read it as making a really strange claim that there's a physical trait called "probably inaccurate" attached to the wheel. Saying that "probably inaccurate" isn't an attribute a roulette wheel could have seems like an almost willful misreading of, frankly, his own original statement in The Fabric of Reality. So this is why I'm confused. What I will say is that I know for sure saying something is "probably inaccurate" is surely not normally intended, as Deutsch is taking it, as a claim about some physical trait or something. I guess maybe someone could be intending that in some cases. I can't think of any cases where someone has intended that. I guess it's not impossible, but surely it's not the norm. And what would Deutsch have us say instead here? Does he want us to simply say that "the roulette wheel is fair" is tentatively falsified? That was what Brett said we should say instead.

[00:37:38]  Red: And I guess there's nothing wrong with that, but all it does is make the exact same statement while dropping some useful information, specifically that this was based on a probabilistic inference. It seems like you're saying the same thing, but in a less explicit way. I don't see how dropping the probabilistic language gains you anything. In fact, it loses you something important. Now, I made an attempt to ask for clarification. So this is me, quote: I agree that the term "probably falsified" isn't an attribute of a roulette wheel, but I do not see how the term "probably falsified" might not describe the state of a theory, or our understanding of the theory, due to a probabilistic observation. I'm not one to argue terms here. So what would be a better way to talk about the difference between a probabilistic falsification (a finite set of observations about a roulette wheel) and a non-probabilistic falsification (a black swan)? Okay, in case it's not clear, and this isn't that unclear, but just to clarify: think about how you have a theory like "all swans are white." You aren't going to statistically measure and falsify that theory. You're going to go find a black swan, okay? But if it's a medical example, like we did with the Karen example, that actually is going to be treated through probability theory, and you're going to do the inference using probability theory, okay? And surely in the case of a roulette wheel, there's no doubt at all you're going to be using probability theory to try to falsify the theory that it's rigged or the theory that it's fair, okay?

[00:39:16]  Red: So I’m pointing out that in some cases the theory is probabilistic and in some cases the theory is not. And in the case of a probabilistic theory, you’re going to use a probabilistic set of observations to falsify it. In the case of a non probabilistic theory, you’re going to not use a set of probabilistic observations, you’re just going to have a deterministic observation. So I’m giving a concrete example of my concern. We might have a theory, and I feel like I’m being very clear here, right? There is a distinction here worth making. That’s what I’m trying to say here. Trying to collapse them into a single concept seems problematic to me. Now here’s Deutsch’s response. There are probabilistic and non probabilistic theories, not falsifications. Each element, each has elements of the other, meaning probabilistic and non probabilistic theories. Both are tested in the same way. If observations are unexpected, we try to explain why. If we reject a true theory, it isn’t because we were wrong about its probability. We do not have access to actual p -values anymore than actual probabilities. Now again, I want to read Deutsch as terribly as I can here, but this is really hard to figure out what he’s saying. Is he taking issue with how I worded it? Is he saying, don’t use the word probabilistic falsification? First of all, the word probabilistic falsification just isn’t that bad a term, and it’s relatively clear what it means in context. We do have a difference between a probabilistic falsification and a non probabilistic falsification. We don’t want to collapse two things that are not the same to be the same through our language. That’s a bad idea. So consider his claim here. Both are tested in the same way.

[00:41:01]  Red: If observations are unexpected, we try to explain why. So let's make this concrete. Suppose in a legal case you need to determine whether a roulette wheel is rigged. Someone tests the wheel to see if its outcomes match what you'd expect from a fair wheel. They record everything carefully, maybe let's even say on video, so anyone can go back and look at the results themselves. Then the wheel burns. Maybe somebody destroyed the evidence, or maybe not. We don't know. So we analyze the outcomes we recorded before the wheel burned. And we discover that we have a p-value of 0.05, meaning that if the wheel were fair (that's a counterfactual), there's only a 5% chance we'd have gotten a result this extreme. Now, note that this is sometimes wrongly interpreted as "there's a 5% chance the wheel is fair." But that isn't quite correct. There's actually a counterfactual claim being made: if the wheel is fair, there is only a 5% chance we'd get this result. That's what a p-value actually means. And that's not the same as saying there's a 5% chance it's fair. In fact, you could work out the actual chance that it's fair, but you would have to use Bayesian updating, and it wouldn't be exactly the same number. So yes, of course, we're going to, quote, try to explain why. That's exactly what it means to consider the theory that the wheel is unfair. That's one explanation. And, quote, it happened by chance, is the other explanation. Those are precisely the two explanations we're testing between. Now, suppose you must make a judgment.
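The distinction Bruce is drawing here can be sketched in a few lines of code. This is purely a hypothetical illustration, not anything from the episode: the 300 spins, the 170 reds, the 0.9 prior, and the single "rigged" model with P(red) = 0.6 are all made-up assumptions chosen to show that P(data this extreme | fair wheel) and P(fair wheel | data) are different numbers.

```python
from math import comb

def binom_tail(n, k, p):
    """P(X >= k) for X ~ Binomial(n, p): the chance of a red count at
    least this extreme if the wheel really were fair (a counterfactual)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

n_spins, n_red = 300, 170      # hypothetical record: suspiciously many reds
p_fair = 18 / 37               # P(red) on a fair single-zero wheel

p_value = binom_tail(n_spins, n_red, p_fair)
# This number is P(data this extreme | wheel fair) -- NOT P(wheel fair | data).
print(f"p-value: {p_value:.4f}")

# Getting P(wheel fair | data) instead requires a prior and a Bayesian update.
prior_fair = 0.9               # assumed prior belief that the wheel is fair
p_rigged = 0.6                 # one hypothetical rigging model: P(red) = 0.6
like_fair = comb(n_spins, n_red) * p_fair**n_red * (1 - p_fair)**(n_spins - n_red)
like_rigged = comb(n_spins, n_red) * p_rigged**n_red * (1 - p_rigged)**(n_spins - n_red)
post_fair = (like_fair * prior_fair) / (like_fair * prior_fair + like_rigged * (1 - prior_fair))
print(f"P(fair | data): {post_fair:.4f}")  # a different number, as the episode notes
```

Note that the posterior depends on the prior and on which rigging model you assume, which is exactly why it is not the same quantity as the p-value.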

[00:42:38]  Red: Do you, as the expert on this case, declare that the wheel is rigged or not? Of course you don't know, because this is a probabilistic falsification. All you know is that if the wheel were fair, there's only a 5% chance of seeing this outcome. That's it. Nothing more. Whether you decide the wheel was rigged depends on the stakes, most likely. 5% is quite suggestive, but it's hardly overwhelming. I mean, if you think about it, it's only a one-in-20 chance. An expert might even say something like this in court: odds are this was a rigged wheel, but there is still a decent chance it wasn't. And of course, we can play with the numbers. If 5% is unproblematic for you, let's say instead there's a p-value of 0.2, so now there's a 20% chance that you would have gotten this outcome if the wheel were fair, which is a very fuzzy number at this point. Honestly, pick whatever number makes you uncomfortable. That's what it means to do critical rationalism: make this as hard on yourself as possible and you'll learn something. That's what we're doing with critical rationalism. The key point is that a p-value of 0.05 is typical because it does typically constitute meaningful evidence by any fair standard, even if perhaps not as strong as we'd like. Most contexts would take that as sufficient evidence that the wheel was rigged and would treat the hypothesis that it was fair as effectively falsified, even though we know that if the wheel was fair, there's still a 5% chance we would have gotten this outcome. Falsified through a probabilistic observation against a probabilistic theory.

[00:44:24]  Red: In other words, we'd consider it probably falsified. This doesn't seem at all unclear. Like, it just doesn't seem unclear to me. And insisting that we instead call it only "falsified" drops the useful information that any reasonable person would want to know: what's the actual p-value here? It isn't that Deutsch's statement is incorrect, but it seems to have missed the point. Testing a theory about whether a roulette wheel is fair is exactly what probability theory is for. The probability calculus was invented for this. It's nothing like the asteroid example. It's nothing like the AGI example. If we can't use probability theory here, when can we use it? What alternative would there even be to using probability theory to falsify a theory in this situation? So real life is full of cases like this. Probabilistic theories exist. We do use probability theory to probabilistically falsify them. That is, in fact, the entire point of Popper's chapter on probability in The Logic of Scientific Discovery. He understands that if he can't get probabilistic theories to work as part of his falsificationism, then it's his epistemology that is refuted, not the theories. Okay, so he goes to great lengths to set this up. And basically what Popper's answer was is this: look, it's basically convention. He doesn't say "p-value" there, but to put it in my own terms, you decide a p-value. You decide, you know what? I'm going to consider it falsified if I get a p-value of 0.05 or better. And that's a convention. You could have picked a different p-value. It would have been fine. But by convention, that's improbable enough that we're going to declare the theory tentatively falsified. Now, someone can challenge that.

[00:46:07]  Red: They can come back. They can redo the experiment. They may get a different result, right? And at that point, if we actually have a fair wheel and the p-value of 0.05 came up just by chance, then, assuming you still have the wheel (in this case we said it burned, but let's say you still have it), you would just go do another experiment. And you should be able to find a different result, because it was just chance, so it's not going to repeat. This is what Popper actually explains: if it was just chance that produced your falsifying result, you won't have a repeatable experiment, and the falsification will end up being challenged eventually. Therefore, he thinks that probabilistic theories can be falsified, even though strictly speaking they can't be falsified: they can effectively be falsified using probability theory. Popper's result matches the orthodox frequentist understanding of probability theory at the time, which is good, because that's a very good theory. So of course it made sense that he was trying to make sure his theory matched what we understood about statistics. Now, I did try to ask follow-up questions to David Deutsch. I didn't get any further answers. I mean, he's busy, so of course not. Needless to say, I was more confused at the end than when I started. So one thing you could claim here is that David Deutsch is making a radical claim: he's saying, do not use probability theory to falsify hypotheses that the roulette wheel is fair. It's very hard for me to believe that. There's almost no chance he is saying that. If he were saying that, it would be for sure false.
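Popper's repeatability point, as Bruce summarizes it, lends itself to a quick simulation. This is a hedged sketch, not anything from the episode: the 300-spin experiment, the two-standard-deviation cut-off, and the 2,000 trials are all illustrative assumptions. The idea it demonstrates is that a fair wheel gets flagged by chance in a single run at roughly the test's error rate, but almost never in two independent runs, which is why a chance falsification tends not to survive a repeat of the experiment.

```python
import random

def reds_in_spins(n, p_red, rng):
    """Simulate n spins of a fair wheel; count how many come up red."""
    return sum(rng.random() < p_red for _ in range(n))

def looks_unfair(n_red, n, p_red, threshold=2.0):
    """Crude normal-approximation test: flag the wheel if the red count is
    more than `threshold` standard deviations above expectation."""
    mean = n * p_red
    sd = (n * p_red * (1 - p_red)) ** 0.5
    return (n_red - mean) / sd > threshold

rng = random.Random(42)
p_red, n_spins, trials = 18 / 37, 300, 2000

# How often does ONE experiment wrongly flag a genuinely fair wheel?
once = sum(looks_unfair(reds_in_spins(n_spins, p_red, rng), n_spins, p_red)
           for _ in range(trials))
rate_once = once / trials

# How often do TWO independent experiments BOTH wrongly flag it?
both = sum(looks_unfair(reds_in_spins(n_spins, p_red, rng), n_spins, p_red)
           and looks_unfair(reds_in_spins(n_spins, p_red, rng), n_spins, p_red)
           for _ in range(trials))
rate_twice = both / trials

print(f"fair wheel flagged once:  ~{rate_once:.3f}")   # roughly the test's error rate
print(f"fair wheel flagged twice: ~{rate_twice:.4f}")  # roughly its square
```

The single-run false-flag rate is on the order of a few percent, while the two-run rate is roughly its square, which is the sense in which rerunning the experiment exposes a chance falsification.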

[00:47:32]  Red: We could just ignore him and move on, because it's a roulette wheel: you will use probability theory and statistics to falsify any theory about it. Okay, that is the only way to do it. I guess I shouldn't say it's the only way to do it. Maybe you could, like, get inside and see if it's rigged or something like that. But in the example I'm using, where it's burned, it's gone. You only have these observations. You really have no choice but to use probability theory at this point. So I don't think he's making a radical claim. I suspect he's making a much less radical, and I would argue confusing, claim. Something like this is what I have in mind: he's saying we shouldn't say "probably inaccurate" or "probably false" because that wording might misleadingly imply that a roulette wheel has a physical property called "probably rigged." That's the most charitable interpretation I could come up with. And yes, on that reading, it is true that the roulette wheel doesn't have a physical trait named "probably rigged." But also, no one actually interprets the phrase that way. It seems pretty clear that even Deutsch, in The Fabric of Reality, didn't intend it that way either. So the idea that avoiding this phrase prevents confusion just doesn't ring true to me at all. His alternative wording seems to me to be far more confusing. And consider the fact that we just went over an example of a crit rat trying to explain David Deutsch's views on this. There's a whole bunch of misunderstandings in that explanation, precisely because this wording is so dang confusing.

[00:49:08]  Red: So again, collapsing probabilistic falsification into simply falsification really is just throwing away useful information. So that's not a desirable thing to do. If this is what Deutsch is trying to say, and I just can't think of an alternative at this moment, the whole issue starts to feel like essentialism: a mere disagreement over what words we're allowed to use to express our ideas. If that's correct, maybe it is, maybe it isn't. I would have been a lot more comfortable with Deutsch's point if he had stated it in a way that explained he's talking about preferred terminology, while acknowledging that phrases like "probably inaccurate" aren't meant to imply that the roulette wheel has a literal physical property called "probably inaccurate." Okay. In other words, I could have imagined Deutsch, instead of saying it the way he did, saying: well, when you say "probably inaccurate," you probably actually just mean that you're using probability theory and your falsification has a probabilistic element. Like, he could have explained that. And then he could have said: but I don't like that term, because to me it seems to imply that there's this physical property called "probably inaccurate" that the roulette wheel has. If he had said that, it would not have been confusing, and it would have been okay. I wouldn't have agreed with him. I would have gone on using the term "probably inaccurate," because I disagree with his point at this point. But there wouldn't have been any confusion if he had said what he meant there. Or maybe I'm just missing the point. He's an expert on a lot of these things and I'm not. Maybe there's something here and I just don't understand what his point is.

[00:50:48]  Red: But I’m clearly not the only one confused. Our crit rat friend that I quoted is also confused and if I am missing his point, I would love for someone to explain to me what his actual point was. Like I kind of doubt anyone can do it. I suspect that this is an extremely confusing way to go about explaining your point. And was there even a realistic chance I was going to be anything but confused the way it’s now been worded? And let me ask an honest question here. Was my question a fair question? Does telling me that a roulette wheel can’t have the trait of being probably incorrect really attempt to answer my sincere question? It seems to me that it’s not even really an attempt to answer the question that I am asking. It feels like something big is me agmissed here. So after this exchange, I was obviously more confused than ever. I also can’t agree that we should call a probabilistic falsification only a falsification. There are in fact levels of confidence in our falsifications when the theory in question is probabilistic like this and we should make that clear because it’s part of reality. I have heavy suspicions by this point that we might be dealing with and this is what you were getting at Peter a sort of hostility towards use of probabilistic language without actually disagreeing with what the language actually means namely that this is a falsification that relies on probability theory and thus has probabilities attached based on certain assumptions. He probably does mean that he’s not disagreeing with that, I think. I think it’s really just don’t use that language. I think that’s what he’s really saying.

[00:52:34]  Red: So I’d like to next get into his talk not maybe next podcast but next time we come back to this thread get into his talk on physics without probability and let’s see if that clears any of this up but I do want to kind of call out the degree to which it’s just confusing what he’s trying to get at that there’s a sincere set of questions here that need to be addressed by the crit -rack community and their use of probability theory and they’re not really doing it at this point. They’re coming up with kind of pat responses. Oh, there’s no physical trait of being probably incorrect or the asteroid’s either going to hit or it isn’t there. They’re saying pat answers that aren’t getting to the heart of the actual opposing viewpoint or the questions that it would raise and we really need to do better than that. We need a better set of if we’re going to go down this path of rewriting probability theory to be something more epistemologically correct this isn’t the way to do it. Let me just put it that way. This is really not a good way to go about it. So it’s just going to lead to confusion. It’s just going to cause… I don’t think there’s any chance that the average crit -rat who follows Deutch on this has a coherent view of what probability theory even is. I seriously doubt it’s even the case at this point because it is so confusing the way it’s being stated. Okay, that is the end of my rant on that. Any questions about that, Peter?

[00:54:04]  Blue: Not really. Interesting episode. You remember when they had... I don't know if you remember, when you were a kid in the 80s or 90s or whatever, they used to have those bumper stickers that said "Question Authority." It was a really popular bumper sticker. Well, I think that probably is the essence of critical rationalism, and you're doing a great job of it, Bruce, better than anyone I'm familiar with, questioning our best authorities out there, rightly or wrongly.

[00:54:37]  Red: Yeah, let me say, I do think... Like, when I take Deutsch's point of view in total here, there's a core nut to what he's saying that I actually totally agree with.

[00:54:47]  Blue: Oh, yeah.

[00:54:48]  Red: Sorry, I don’t mean in general. I mean specifically on probability theory.

[00:54:51]  Blue: Yeah, definitely.

[00:54:52]  Red: Right. And so I'm not trying to say Deutsch is wrong about probability theory. I think he's wrong in the way he's wording things. I think it's very confusing. I think he makes claims that are false. It reminds me a little of his theory around universal explainers, right? Where there's the theory of universal explainership, and then there are these supposed implications of the theory of universal explainership that we've critiqued on the show. Things like "animals don't feel things because of universal explainership" or "all humans are equally intelligent." There are various supposed implications of universal explainership theory that crit rats will talk about, and I think none of them are actually implications of universal explainership. The fact that those implications are wrong does not impact the original theory of universal explainership, because they were never actually implications to begin with.

[00:55:49]  Blue: That’s a great distillation.

[00:55:51]  Red: Right, okay. I think we've got the same thing going on here. Deutsch is trying his best to get at a problem with Bayesian probability. He's trying to use constructor theory to get there. He's trying to use his understanding of quantum theory to get there. And he's right. He's right, ultimately, that there's a problem with Bayesian epistemology. Okay, and he's right. I haven't gotten to what his actual point of view is; that's what we're saving for a future podcast. But if you were to try to boil down all the different things he says, there's kind of this one thing he keeps saying about how it's actually based on explanation, right? That probability is only useful when you have an explanation that lets you treat something as a probability. That is spot on. Okay, I don't know if I even know of anybody else who gets that right. Not even Deborah Mayo's theory, which I'm very partial to, gets that right. Okay, so if you want to know what my current personal theory is, it's actually a combination of that part of Deutsch's theory of probability mixed with Deborah Mayo.

[00:56:53]  Blue: Wait, wait. Can you say that again, the part you really agree with?

[00:56:57]  Red: That you only use probability theory when you have an explanation that says that this is a good case to use probability theory.

[00:57:03]  Blue: Oh, okay. Okay,

[00:57:05]  Red: I mean, maybe I'm not saying it the way Deutsch said it. Maybe it sounded better from him, but it all comes down to explanation: your explanation determines if you should be using probability theory or not. If you have an explanation that says "probability theory will be a good approximation of what I'm looking for," then you use probability theory and it will give you good answers.

[00:57:27]  Blue: Hmm.

[00:57:28]  Red: That seems to be Deutsch's view. I think that is a spot-on answer. I think almost everything else he says is off. And I don't know, it really feels a lot like the whole universal explainership thing. He is spot-on correct about universal explainers. He is wrong about every single implication he pulls from it. And I think we've got a similar thing going on: he's onto something true, and we really need to pay attention and learn it. But he's going about it in a way that is absolutely going to cause mass confusion.

[00:58:03]  Blue: Well, I guess it just demonstrates that all of us engaged in a battle of ideas have a tendency to get a little bit dogmatic, to take things a little bit extreme. But I guess it's hard to know when we're being too dogmatic and when we're just following the rational implications of our theories too. So it's a lifelong struggle. It is. I appreciate your thoughts here, Bruce.

[00:58:31]  Red: It is. Thank you.

[00:58:39]  Blue: Hello again. If you've made it this far, please consider giving us a nice rating on whatever platform you use, or even making a financial contribution through the link provided in the show notes. As you probably know, we are a podcast loosely tied together by the Popper-Deutsch theory of knowledge. We believe David Deutsch's four strands tie everything together. So we discuss science, knowledge, computation, politics, art, and especially the search for artificial general intelligence. Also, please consider connecting with Bruce on X at B. Nielsen 01. Also, please consider joining the Facebook group, The Many Worlds of David Deutsch, where Bruce and I first started connecting. Thank you.

