Episode 101: Wolfram, Rucker, and the Computational Nature of Reality

  • Links to this episode: Spotify / Apple Podcasts
  • This transcript was generated with AI using PodcastTranscriptor.
  • Unofficial AI-generated transcripts. These may contain mistakes. Please check against the actual podcast.
  • Speakers are denoted as color names.

Transcript

[00:00:00]  Blue: Hello out there! On this episode of the Theory of Anything podcast, Bruce takes a deep dive into Stephen Wolfram’s ideas regarding computational universality, which, as I understand it, goes further than the Church-Turing-Deutsch thesis, in that Wolfram’s theories imply that all of nature could be simulated even by relatively simple systems. So even nature itself may be computational, rather than something that can just be simulated on a Turing machine or quantum computer. For those that don’t know, as I didn’t, Stephen Wolfram is a renowned physicist, computer scientist, and entrepreneur. Bruce also talks about related ideas on the philosophy of computation promoted by Rudy Rucker, who I’m afraid is another name I did not know, though I now understand he is a mathematician, computer scientist, and science fiction author associated with the cyberpunk genre. Both thinkers apparently believe, rightly or wrongly, that the complexity of life and the universe can be explained by relatively simple computational rules. This is probably one of our heavier episodes, and truthfully, I’m barely hanging on at times, but I feel that it is a good introduction, at least for myself and hopefully others, to the ideas regarding the nature of computation and reality itself, and I really appreciate Bruce turning me on to these fascinating polymaths. Welcome to the Theory of Anything podcast. Hey Peter. Hello Bruce, how you doing?

[00:01:41]  Red: Good. I’m sure these episodes are going to be aired out of order, but our last episode that we recorded prior to this one was the Stephen Hicks episode where we interviewed him. So before we jump into today’s episode, which is actually about Stephen Wolfram, I wanted to get your take on what you thought of the conversation with Stephen Hicks, Peter, and I had some of my own thoughts on it.

[00:02:04]  Blue: Well, I am curious what you think about what he said about critical rationalism. I had the feeling that somehow, I don’t know, I have a friend who was an objectivist who’s actually talked to Stephen Hicks quite a bit about some of these issues and probably has a pretty similar perspective on a lot of things. Oftentimes, I find that when we’re in these philosophical conversations, when you’re getting into the weeds about justificationism and things like that, we all have subtly different ideas about what these concepts mean, I think, and there’s different emphasis on different things in these conversations.

[00:02:54]  Red: That’s exactly what I was going to say. Interesting. Keep going.

[00:02:58]  Blue: It was a wonderful conversation, wonderful man. It was just an honor to speak to someone who is so prominent and influential. I still feel like this is a low amplitude event in the multiverse, and it was really,

[00:03:20]  Red: really a good conversation.

[00:03:23]  Unknown: Thank you for this opportunity, Bruce, to be on this podcast and talk to you and other amazing people.

[00:03:29]  Red: Here’s the thing I found interesting about it. First of all, I would hardly call him negative on Popper.

[00:03:38]  Blue: Yeah, that’s true.

[00:03:40]  Red: He comes across like, I mean, as he put it, Popper’s one of the good guys. He separates the world roughly like we all do into the ones who are kind of on the right track and the ones that are completely off base and are misleading people. In his mind, Popper’s in the camp of the ones that are leading people in the right direction. He sees Popper, roughly speaking, as on his side. While he outrightly stated he thinks Ayn Rand’s objectivism is a better, more correct epistemology than Popper’s, he also outrightly admitted that she didn’t have very much content in her epistemology compared to Popper. He was outrightly admitting, as an objectivist, actually, you’ve got us beat in an important way. And I’m admitting that, basically, is what you’re saying, right?

[00:04:35]  Blue: So

[00:04:36]  Red: I found that really fascinating, too. Yeah. And then when we asked him, like, this is the thing that was burning on my mind was, what do you disagree with Popper over? Since you’ve got so many, and by the way, he’s read Popper, right? Like, this isn’t some guy who has heard about Popper and, you know, is spouting off about the things he’s heard that Popper said. This is someone who’s actually read Popper in depth and is giving his opinion on Popper, right?

[00:05:06]  Unknown: Yeah.

[00:05:06]  Red: Okay, which is really critically important for what I’m about to say. So the thing that was burning on my mind was, what is it you actually disagree with Popper over? And the main thing he brought up, it seems like he brought a few things up, and I’m going off memory, so I’m not going to remember them all. But the one that jumped out at me, probably because it’s one I’ve been harping on on this show, is he said, Popper only believes in negative evidence, but it’s clear that there is such a thing as positive evidence.

[00:05:38]  Blue: I knew you were going to say that, yeah.

[00:05:41]  Red: Now, I’ve quoted Popper positively on positive evidence. That’s what the concept of corroboration is, okay? And it’s prominent in his writings, this concept of corroboration, right? And, you know, in our episodes on ad hoc versus easy to vary, I gave exact quotes from Popper where he makes it very clear that the reason why positive outcomes on a test matter is because they show that the theory was non-ad hoc, that it had independently testable consequences, or as Deutsch would say, it has reach,

[00:06:19]  Blue: right?

[00:06:20]  Red: And so Popper, at least in the paragraphs I’ve yanked out, clearly has a concept of positive evidence. And yet nobody, and I mean nobody, reads Popper and comes away feeling like he has a view of positive evidence, even though he does. And it’s a combination of things that seem to be a problem. Like that one paragraph I yanked out, it’s the best paragraph you can find in Popper where he makes it clear why positive evidence is so important, right? But it’s like a single paragraph, a super clearly stated one, so there’s no doubt what he meant, but it just isn’t something he’s emphasizing, right? And then he does emphasize this concept of corroboration, but even most crit rats I know are very confused as to why he’s emphasizing corroboration. In fact, some giant percentage of crit rats think that Popper was wrong to emphasize corroboration and that he was off base when he did, because all you do is you make conjectures and then you try to falsify them, and there’s no such thing as positive evidence, so you don’t need a concept of corroboration and it’s completely irrelevant to his epistemology. And one crit rat, we’ve quoted him, I’ll leave his name anonymous, said, well, Popper was just trying to use the language of the philosophers of his time. Like, he just puts zero weight on Popper’s talk about corroboration. It’s this anomaly that we can ignore in Popper because it’s got no meaning. And I can see why someone like Stephen Hicks would read Popper in depth and come away with this idea that Popper is completely against any sort of positive evidence meaning anything.

[00:08:07]  Red: And then say, you know what, it’s obvious that’s wrong, and then go to objectivism instead, right? Because they have a concept of positive evidence, in my opinion an incorrect concept, whereas Popper has the correct concept of positive evidence. And so it doesn’t really surprise me that you see a lot of smart guys like Stephen Hicks turn away from Popper. And this is why, when you ask me why I think it is that Popper hasn’t caught on better, I think it’s Popper’s fault, right? So many of these people have read Popper in depth, it’s not just that they’re hearing about him secondhand, and they’re absolutely coming away with ideas that are kind of obviously false. And some of them are propounding them, saying, this is correct, there’s no concept of positive evidence at all, it’s completely meaningless. And some are saying that’s true and some are saying that’s false, but very few are coming away with the point of view that I’m expressing, which is, look, it’s not that positive evidence justifies a theory as true, it doesn’t, right, but it does show you that the theory is non-ad hoc. And that’s what matters in this case. It’s a different axis; that’s why we had the two-axis episode. It’s a different way of thinking about what positive evidence means. Negative evidence tells you something about the truth of the theory. Positive evidence tells you something about the verisimilitude of the theory, which isn’t really the same as saying the truth of the theory. And I think it’s difficult for people to wrap their minds around this. And I think part of the reason why is because of this concept of justificationism. So that was another one that Stephen Hicks kind of kept talking about.

[00:09:49]  Red: You can tell, I’m not quite sure what his view on justificationism is. Like, he said he was against it, right? And yet he kept talking about it like it was a problem with Popper. At least that’s the way I interpreted him. It was a little unclear. And I think the issue here is that there are two possible ways to think of justificationism. There’s the idea of justifying the theory as true, or sufficiently true, justifiably true. This is the idea of justified true belief, which is not the same as certainty. It’s supposed to be that it’s certain enough. As opposed to justifying the theory as the best we currently have, as a preference over all other currently known theories. That second one’s a completely legitimate form of justificationism that Popper completely agrees with. And again, I’ve given the exact quotes from Popper where he says this, right? Like, I’m not making this up, right? And I know people think I’m making it up. That’s why I say go listen to the episodes where I actually quote Popper. But it’s a certain kind of justificationism that Popper attacks and shows is wrong. The idea of justifying preference for a theory is completely correct.

[00:11:01]  Blue: Well, it takes guts to be out there on your own, Bruce. But what you’re saying makes perfect sense to me, at least. So you see fallibilism as perfectly compatible, big picture, with a certain view of positive evidence. A nuanced view of positive evidence. Yes.

[00:11:23]  Red: In fact, in fact, fallibilism doesn’t require it. But Popper’s epistemology requires this concept of being able to tell that a theory is non ad hoc via positive outcomes to an experiment,

[00:11:36]  Blue: right?

[00:11:38]  Red: Popper talks about this, and I’ve given the quotes in past podcasts. This is what he says: if you only move from one theory to the next, and you never do independent tests of the theory, from a certain point of view it may seem like you’re making progress. Like you may say, oh, we have this problem with the old theory, so I’m going to imagine this new theory that solves that old problem but also makes all the same predictions as the old theory where it got things right. And let’s say that you never independently test your theory. Your theory could be right, I guess, but it could be completely ad hoc, right? And this is exactly why you want your theory to make completely independent predictions, different than the problem you’re trying to solve. And then you want to go test it, and you want it to come out with a positive outcome. That’s what we mean by corroborating a theory. To corroborate a theory doesn’t just mean it has a positive outcome. It’s a certain kind of positive outcome: one that came from an independent test that made some sort of prediction you could never have made without the theory and therefore was unexpected. And more to the point, one that in principle could have refuted the theory had the prediction failed, right? In any case, I feel like if Stephen Hicks could reconceptualize Popper in the way I’m suggesting, which really is just Popper, right? Like, it is what Popper is saying, it’s not Bruce making stuff up. It’s Popper, right? I actually feel like it deals with every single criticism he had of Popper, right?

[00:13:19]  Red: The problem is that even most people who say they believe in Popper kind of really agree with the things that Stephen Hicks says are wrong with Popper. Anyhow, that was my thought throughout the whole interview. I kept thinking, I wish I could take a few hours and try to explain it to him and find the quotes and say, look, here’s how I read Popper. I think this addresses your concerns. Yeah.

[00:13:44]  Blue: Yeah. Yeah. As much as I loved our hour -long conversation, we probably could have done an epic three hours with him, but he’s a busy guy, of course. So yeah.

[00:13:57]  Red: So I felt like it was a wonderful interview. Even the parts that maybe I disagreed with him over, I felt like he was just so authentic in explaining why he struggles with Popper in a way that I think would resonate with many, many, many people, if that makes any sense. Yeah.

[00:14:18]  Blue: Well, good. I’m glad it worked out. And I, like I said, I feel a lot of satisfaction that that happened. And I think we just need to start casting our net wider in terms of who we reach out to about coming on our humble podcast. It’s a bit like the internet dating thing. We just got to put ourselves out there and invite some people on.

[00:14:43]  Red: Get rejected. Yeah.

[00:14:44]  Blue: Get rejected. Get used to it.

[00:14:48]  Red: All right. Well, let’s get into the actual episode today, the actual topic today, which is the theories of Stephen Wolfram, but as interpreted by Rudy Rucker. Now, Rudy Rucker, I have a book of his that I read and loved called The Lifebox, the Seashell, and the Soul, and this episode is going to summarize some of the ideas from that book. Rucker is a scientist himself and he’s a science fiction author, obviously not a super famous one because Peter had never heard of him. I don’t think I had heard of him prior to reading this book, to be honest. So there you go. But Rucker isn’t just, you know, just some science fiction author. He is a scientist who knows what he’s talking about. And he actually makes some adjustments to Wolfram’s theories where he thinks Wolfram’s gotten a few things wrong. But he really likes Wolfram’s theories and he does a fantastic job of explaining them.

[00:15:43]  Blue: And sorry, I’m woefully unprepared for this. But just to clarify, this is in the context of fiction, of a novel, basically.

[00:15:53]  Red: No, no, no. This is a nonfiction book.

[00:15:55]  Blue: Okay. So he’s a science fiction author, but he wrote a nonfiction book.

[00:15:59]  Red: Yes, he does both. He writes both types of books. Okay,

[00:16:03]  Blue: got

[00:16:03]  Red: it. Okay. Now, a question we may need to answer is: who is Stephen Wolfram? So, I’ve mentioned Wolfram and his theories all sorts of times on this podcast. And at some point, I don’t remember when, Peter goes, I don’t know who Wolfram is. And I’m like, oh, I’m acting like everybody knows who Wolfram is, and probably not everybody knows who Wolfram is.

[00:16:30]  Blue: Well, I did listen. I think the name just didn’t register at first, but I did listen to at least one of the interviews on Lex Fridman.

[00:16:36]  Unknown: I

[00:16:36]  Blue: think he’s been on there a couple times, maybe.

[00:16:39]  Unknown: Yeah.

[00:16:41]  Red: So he is really big. And maybe this is why I know him so well: he’s really big in, like, transhumanist circles. He’s one of the main proponents of kind of the modern religious transhumanist view. You know, like I’ve mentioned, I have at least some associations with the Mormon transhumanist movement. A lot of their ideas are very similar to his. And in fact, they probably got a lot of their ideas from his books. I don’t know if he necessarily invented these ideas. Like, you can track a lot of these ideas to other people. So he’s maybe not so much an inventor as a chief seller, a propounder. And he’s definitely expanded these ideas and made them a lot bolder and more specific in a lot of ways. He’s also someone who has started a lot of different famous businesses. He’s written a whole bunch of books that talk about the singularity. The very fact that you know about the singularity, which he did not invent, Vernor Vinge invented it, the fact that you know about it is probably because of him, even if you don’t realize it, because he’s the one who popularized the idea.

[00:17:50]  Blue: And just to be clear, we’re talking about the technological singularity.

[00:17:53]  Red: Technological singularity.

[00:17:54]  Blue: Or where, like, the AI or AGI just becomes super smart.

[00:18:00]  Red: Sometimes this is a nightmare scenario. He sees it as a positive scenario: the technological singularity hits and the AI, the AGI, builds the smarter AGI, which builds the smarter AGI. And soon you can’t even predict the level of growth that’s going on, because it’s so exponential, and we find ourselves in a heavenly state where things are profoundly wonderful. I mean, some of this sounds very similar to Deutsch, and it’s not an accident that it does. But

[00:18:30]  Blue: Deutsch would disagree with that though. Deutsch would disagree with the specifics.

[00:18:34]  Unknown: Yes.

[00:18:34]  Red: Yeah. So I would say that the Omega Point is very deeply technological-singularity-ish,

[00:18:44]  Unknown: right?

[00:18:46]  Red: But like it relies on, the Wolfram version relies on this idea of artificial superintelligence, which I don’t think Deutsch would ever accept because of the idea of a universal explainer. But I definitely think Wolfram has a lot of ideas similar to Deutsch. In fact, as we’re going to see, he has an idea called universal automatism, which is very similar to, maybe arguably the same as, the Church-Turing-Deutsch thesis, though he arrived at it a totally different way. Also arguably not the same as the Church-Turing-Deutsch thesis. We’re going to talk about if it’s the same or not and make two arguments there. So with that kind of introduction, also Wolfram is the Wolfram behind Wolfram Alpha, the search engine that has an AI that lets you do math. It’s pretty cool. Like I’ve played with it. It’s a little too advanced for me, so I haven’t done that much with it, but it’s an incredible tool that exists out there.

[00:19:54]  Blue: So he sounds like a guy kind of like Deutsch in a way, who has his hand in both science and philosophy. Would you say that’s accurate?

[00:20:01]  Red: Yes, I do. I think that’s accurate. The thing that maybe makes him different from Deutsch is that he’s also an entrepreneur. So he’s taken these ideas and started businesses based on them and things like that.

[00:20:12]  Blue: I see.

[00:20:13]  Red: Let me start with a quote from page five of Rucker’s book, although he’s quoting Stephen Wolfram. He says, it is possible to view every process that occurs in nature or elsewhere as a computation. So Rucker calls this universal automatism. Actually, that may be Rucker’s term, not Wolfram’s. I think I just said it wrong that it was Wolfram’s. Rucker says he isn’t sure he believes it, but that is what Wolfram believes.

[00:20:40]  Blue: So this is really just the Church-Turing-Deutsch thesis.

[00:20:44]  Red: Or is it? It might actually be a belief that computation is the most basic element of reality, which is not the Church-Turing-Deutsch thesis. It’s often unclear which of these two ideas people have in mind, for the simple reason that many people can’t tell the difference between these two views. Most of us think very reductionistically, even when we mean not to. Now, I’ve used Saadia as an example here because I know she’s taken issue with the Church-Turing-Deutsch thesis. Some of her concerns may be semi-legitimate, so I’m going to say that. But she’s often said, well, I feel like Bruce, when you’re arguing for the Church-Turing-Deutsch thesis, that you’re being a reductionist. Because I accept the Church-Turing-Deutsch thesis, maybe in her mind that’s the same as saying computation is the fundamental level of reality. To her, it feels like the Church-Turing-Deutsch thesis is saying computation is the fundamental level of reality.

[00:21:50]  Blue: Yeah, that’s a very subtle distinction that it took me a while to get. So Deutsch would not assert that reality is computation, but he would say that reality is computational. I might not be saying that quite right, but

[00:22:08]  Red: is

[00:22:09]  Blue: that a

[00:22:09]  Red: fair

[00:22:09]  Blue: way of putting it?

[00:22:10]  Red: Yes, yes. So we’re going to get a little bit more specific than that. Let me just say, though, that the fact that you’ve struggled with it completely explains why Saadia’s concern is at least a little bit valid. There’s some sort of distinction here that’s so subtle that it’s really hard for people’s minds to latch on to it. If you pay attention to what a Deutschian, a fan of David Deutsch, would say, they will at times act like computation is the fundamental level of reality, because they’ve misunderstood that that’s not what Deutsch means, right? Yeah. And Saadia would argue, so she sent me an article from David Deutsch called It from Qubit, and I can’t remember if it was actually her or her husband that sent it to me. They both talked with me about it, so I sometimes get them confused, her husband being Mark Barrows, who we had on the show also. What they asked me was, look, we know Deutsch says that computation isn’t the fundamental level of reality, but how else would you read this paper? And I went through and I read the paper, and I honestly stumped myself as I read through the paper as to what he’s saying and how it differs from the idea of computation being the fundamental level of reality. So I can understand their confusion on this subject. Apparently, a lot of us are confused as to what Deutsch is actually getting at here. But to be clear, Deutsch has made numerous, very, very clear statements that computation is not the fundamental level of reality, that reality is not computation.

[00:24:01]  Blue: So I think that’s why it helped me. When I thought of it that way, it helped me to understand why the simulation hypothesis is not a kind of natural conclusion from the Church-Turing-Deutsch thesis. Like, if you think of reality as computation, then it’s kind of like, well, as Bostrom might argue, what are the chances that we are not living in a simulation? And it kind of seems slim. But it seems to me that’s not what Deutsch is asserting at all.

[00:24:36]  Red: No, it’s not. And in fact, Bostrom’s argument is just bad across the board.

[00:24:41]  Blue: I know. That gets into a tangent. But

[00:24:43]  Red: yeah. So, okay. But let’s talk about this for a second. So let’s take someone who’s asserting that the Church-Turing-Deutsch thesis is reductionistic. Let’s take that point of view seriously, regardless of who it comes from. I can see why a person who says that would just really have a hard time imagining everything physical being simulatable, unless what you meant by that was that you can reduce everything to a computation. Okay. So if I say everything in reality can be simulated on a computer, which is something Deutsch says, doesn’t that by definition mean that I can reduce everything to a computation? Stop and think about that for a second.

[00:25:30]  Blue: Well, sort of, yeah.

[00:25:33]  Red: It kind of sort of means that, right?

[00:25:35]  Unknown: Okay.

[00:25:36]  Red: So, QED, the Church-Turing-Deutsch thesis is reductionistic, end of proof. Okay. And I think this is where these people are coming from when they make this argument. Okay. Let me take this argument and let me pull it apart a little bit, because I honestly think there’s something legitimate here they’re saying, but it’s just confused enough that if we unconfuse it a little bit, it will make more sense. Okay. So, in fact, computation can’t be reduced in a straightforward sense at all. Let me offer a proof of that. Okay. So what is the atom of logic? It’s really tempting to think of the atom of logic as being the NAND gate, because it’s well known that the NAND gate is the simplest logic gate that is universal. Literally any logical thing you can come up with, any program you want to run, any algorithm you want to run, can be built out of nothing but NAND gates.

[00:26:34]  Blue: Okay.

[00:26:34]  Red: So, if

[00:26:35]  Blue: you had just, okay, as I’m not a computer guy, not a programmer, but as I understand it, with this basic concept, this basic, what would you call it, like the NAND function or whatever, you could program anything.

[00:26:50]  Red: That’s correct.

[00:26:51]  Blue: Is that fair? Okay. That is completely fair. Okay.

[00:26:55]  Red: Now, it is certainly true then that we can, quote, reduce any logical statement, no matter how complex, any program basically, no matter how complex, to a series of simple NAND gates.

[00:27:10]  Blue: It kind of makes it sound like a magic thing or something. I kind of like it. I mean, it sounds reductionistic, I guess, but you could also look at it kind of positively, like, it’s beautiful. Yes. NAND. Yeah. Now,

[00:27:21]  Red: here’s where the beauty really begins. Here’s the thing. You can also reduce, in that sense, a NAND gate to a NOT gate and an AND gate. Okay. Now, isn’t that obvious? Like, obviously, you can reduce a NAND gate to a NOT gate and an AND gate.

[00:27:41]  Blue: Okay. Isn’t that

[00:27:43]  Red: obvious?

[00:27:44]  Blue: Yeah.

[00:27:44]  Red: Okay. It’s every bit as obvious as the first statement. Okay. Then we can in turn reduce those NOT gates and those AND gates to NAND gates.

[00:27:55]  Unknown: Okay.

[00:27:56]  Red: And if we continue this reduction forever, it’s turtles literally all the way down. Okay.
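The circularity Bruce is describing is easy to demonstrate in code. Here is a minimal Python sketch (the function names are mine, not anything from the episode or from Rucker's book) that builds NOT, AND, and OR out of nothing but NAND, then rebuilds NAND from those derived gates, closing the loop:

```python
# A NAND gate: the single universal logic gate discussed above.
def nand(a, b):
    return not (a and b)

# "Reduce" NOT and AND to NAND...
def not_(a):
    return nand(a, a)          # NAND(a, a) == NOT a

def and_(a, b):
    return not_(nand(a, b))    # NOT(NAND(a, b)) == a AND b

# ...and then "reduce" NAND right back to those derived NOT and AND gates.
def nand_again(a, b):
    return not_(and_(a, b))

# OR, for good measure, via De Morgan: a OR b == NAND(NOT a, NOT b).
def or_(a, b):
    return nand(not_(a), not_(b))

# Verify every derived gate against its truth table.
for a in (False, True):
    for b in (False, True):
        assert not_(a) == (not a)
        assert and_(a, b) == (a and b)
        assert or_(a, b) == (a or b)
        assert nand_again(a, b) == nand(a, b)
```

Each derived gate checks out against its truth table, and `nand_again` behaves identically to the primitive `nand` it was "reduced" from, which is the sense in which this kind of reduction never bottoms out.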

[00:28:03]  Blue: Wow. That is that is cool.

[00:28:05]  Red: So logic is thus not, strictly speaking, reducible in the physical sense of the word reduction, i.e., down to atoms or smaller particles. Okay. We may use the word reducible as an analogy. Remember, all words are fuzzy analogies, according to Hofstadter. Okay. So when we say I can simulate anything in nature, and that means I can reduce everything physical to computation, that’s not the same use of the word reduction as when we talk about reducing things down to atoms, or reducing atoms down to elementary particles. It’s not the same. Okay. It’s an analogy, and it’s an imperfect analogy at best.

[00:28:55]  Blue: All right.

[00:28:56]  Red: If you want to call it reducing, sure. And if you want to then say that makes you a reductionist, sure. But it’s no longer the kind of reductionism that’s problematic. And this is the whole problem is that the people who are making these statements are actually getting confused about words, not concepts.

[00:29:17]  Blue: So it sounds like you’re kind of thinking more deeply about what the concept of reductionism really means.

[00:29:23]  Red: That’s right.

[00:29:24]  Blue: That’s what I’m kind of getting. Okay.

[00:29:26]  Red: Okay. Now, if you still don’t really follow what I’m saying, let me give you a couple more arguments that might be intuitively a little more appealing. Okay. I feel like what I just did is actually, formally, the correct argument. And it’s why the concept of reducing to an algorithm is in no way reductionistic in the problematic philosophical sense.

[00:29:50]  Blue: No, that was a really interesting way to put it. I feel like I just had about five aha moments there. I need to think about this a little more.

[00:29:57]  Red: So I’m arguing that it’s a mistake to try to work out if the Church-Turing-Deutsch thesis means computation or physics is, quote, more fundamental. In some sense, that question is, in my opinion, a simple category error. But let’s try to ask it anyhow. Let’s try to take it seriously as a question, even though I’ve just proven it’s a meaningless question. Okay. So we might say something like this. All of physics is explained via math. Okay. Well, that’s true. All of physics theories today are explained via math, via math equations. Okay. And we might say that all math that can be computed is always computed on a computer, even when we’re simulating physics. Okay. Except

[00:30:40]  Blue: for some of Roger Penrose’s stuff, right?

[00:30:43]  Red: Okay.

[00:30:44]  Blue: Sorry, that gets into my tangent.

[00:30:46]  Red: Cheap shot, Peter. Cheap shot.

[00:30:48]  Blue: Okay.

[00:30:52]  Red: What a computer can or can’t compute is in fact constrained by the laws of physics. This is why Deutsch tries to declare computational theory a branch of physics. It’s not a branch of physics in the way you would normally think of a branch of physics, but you can see his point. In fact, computational theory is the study of the limits of what you can physically compute. If you had different laws of physics, you could compute different things. He brings this point up in The Beginning of Infinity. Okay. All of these statements are true. And I don’t think anybody doubts any of the statements I just made. Not even Saadia. Nobody doubts what I just said. Not even Roger Penrose. Okay. Now, if you held a gun to my head and you said, Bruce, you have only two choices, you must choose which is more fundamental, computation or physics. Of course, I don’t want to answer either of those, but okay, I’ve got a gun to my head and I’ve got no choice. I’m going to answer physics. And the reason why is because it’s the physics that constrains the computation.

[00:32:04]  Blue: Now,

[00:32:05]  Red: honestly, I’d still think that’s a lame answer. And it ignores what a silly question the question really was. The real answer is, of course, something more like: look, computation is emergent. Your question is like asking, which is more fundamental, physics or poetry? The question just doesn’t make sense to me. That computation can simulate anything in reality doesn’t tell us anything about the fundamentalness of anything, as if fundamentalness is some sort of property that a thing can have. It just means the two happen to be isomorphic. And it’s not like it’s some weird coincidence that they happen to be isomorphic, since a computer is always a physical object that utilizes physics to do its computing. This is why Deutsch instead claims that computation is a branch of physics. But many find this, in my opinion correct, answer unsatisfying because it feels more like math than physics. And I know Saadia’s argued that one with me, too. She says, look, everybody believes it’s math and it’s treated like math, and you do proofs with it. It’s math, Bruce, it’s math, right?

[00:33:12]  Blue: You’re talking about computation.

[00:33:14]  Red: Yes, computational theory.

[00:33:16]  Blue: Okay.

[00:33:16]  Red: Okay. Now, it isn’t surprising, since it is literally the study of what math we are able to physically compute. So yes, it’s math. It’s like the intersection of math and physics, right? So it’s both. There’s nothing wrong with it being both. Nor is it physics in the normal straightforward sense that you’re probably used to. We don’t use the LHC or other physics experiments of any kind to advance our knowledge in computational theory. We don’t go out and, you know, build radio telescopes to try to advance our computational theory knowledge, right? It’s treated way more like a mathematical discipline. Even the invention of the quantum computer required no physics experiments, right? I mean, Deutsch sat down and figured out how to map quantum phenomena to the Turing machine. And that was how he came up with quantum computational theory. It didn’t require him to go get time with the LHC to work out computational theory. That just isn’t what computational theory is, right? So that’s why I think I would answer that physics is more fundamental, even though I know that’s a stupid answer. And I think this is why Deutsch, I think I’m going to do this in a future slide here that I’m getting to, but it’s why Deutsch will immediately point out that there are objects that exceed the universal computer. Like, a universal constructor can do anything a universal computer can do, and then some, like it can construct things, right? It’s not really that surprising that there are objects in reality that have a greater repertoire than a computer. And the fact that they do doesn’t undermine the Church-Turing-Deutsch thesis. And

[00:35:18]  Red: the fact that, like, I’ve seen people argue this, they’ll say, well, if the universal constructor exceeds the repertoire of a computer, then that undermines the Church-Turing-Deutsch thesis. No, you just don’t understand when you say that, right? You’ve misunderstood. So that’s why I think I would answer, even just based on everything we’re talking about, that physics feels more fundamental to me than computation. And that’s probably why I can never really get on board with Saudia’s arguments: I’ll always answer, no, neither is more fundamental, that’s a dumb question, let’s not even ask that question. But somewhere inside, there’s a part of me going, no, it’s actually physics. Physics is more fundamental.

[00:36:00]  Red: Even though I know that’s not true, right? But just as much as computation feels fundamental to her under CTD, to me CTD feels like physics is fundamental.

[00:36:13]  Blue: They’re interwoven concepts.

[00:36:15]  Red: Right, exactly. So, definition, this is from page 12 of Rucker’s book: a computation is a process that obeys finitely describable rules. So a computation is utterly deterministic, or in other words, non-random. The rules act like a kind of recipe for generating future states of that computation. Now, Rucker points out that “describable” is a slippery notion. Logicians have established that “describable” can’t, in fact, have a formally precise meaning. Otherwise, a phrase like the following would be a valid description of a number. And here’s the description: let the Berry number be the smallest integer that cannot be described in less than 18 words. If such a number existed, then the definition of the Berry number, which was 17 words long, would in fact describe that integer in less than 18 words. So, of course, this is just a form of Gödel’s paradox. What we’ve really done is shown that the concept of “describable” is non-computable, or equivalent to the halting problem, in other words. Now, many Deutschians make a strange argument that I need to dispense with before I continue. They will claim that AGI will not be an algorithm, because an algorithm contains inputs and outputs, and an AGI will not have an ending and won’t have inputs and outputs. I’ve heard this so many times, and it’s usually attributed to Deutsch, and I think at some point I actually started to believe Deutsch had said it. But I actually looked it up for this podcast, and Deutsch not only has never said that, he says the opposite of it. So I’ll get the actual quote from Deutsch here in a second.
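Rucker’s definition, a process obeying finitely describable rules, can be sketched in a few lines of code. This is an illustrative toy example (the Collatz rule is my choice, not from the book): the entire “recipe” is one line, and it deterministically generates future states.

```python
# A computation in Rucker's sense: a deterministic recipe, finitely
# describable, that generates future states from the current state.

def collatz_step(n: int) -> int:
    """The finitely describable rule: halve evens, triple-plus-one odds."""
    return n // 2 if n % 2 == 0 else 3 * n + 1

def run(n: int, steps: int) -> list[int]:
    """Deterministically generate a trajectory of future states."""
    states = [n]
    for _ in range(steps):
        n = collatz_step(n)
        states.append(n)
    return states

# The same input always yields the same trajectory -- utterly deterministic.
print(run(6, 8))  # [6, 3, 10, 5, 16, 8, 4, 2, 1]
```

The rule fits in one line, yet (famously, for the Collatz case) the long-run behavior of the trajectories is still an open question, a first taste of how simple rules resist shortcuts.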

[00:38:01]  Red: But you’ll hear this one all the time, and I don’t know where it originated from. It is false. Okay. So, the idea that the AGI program isn’t an algorithm is just not true.

[00:38:12]  Blue: Now, sorry, I might have spaced out for a second. People think that Deutsch said that AGI is not an algorithm? Yes. Oh, okay.

[00:38:22]  Red: A program but not an algorithm.

[00:38:25]  Blue: Okay. Oh, yeah, I had not heard that. I thought he was very clear on that.

[00:38:30]  Red: I’ve heard it from so many different people.

[00:38:32]  Blue: Okay. Okay.

[00:38:33]  Red: So, Rucker uses a humorous example of a calculator. Is a calculator, strictly speaking, an algorithm? It’s not like the calculator freezes up at the end of its computation and you can never use it again. The computation the pocket calculator is doing waits around to be used again. So I guess you could say it’s not an algorithm, right? So Rucker says of this, this is page 17: this freezing-up definition of halting is appropriate for certain simple models of computation, such as the abstract devices known as Turing machines. But for more general kinds of computation, freezing up is too narrow a notion of a computation being done. So he gives examples. Go use Google Maps to find directions to your location. You may initially think of this computation as halting because it gives you a final result. But in reality, your web browser continually polls the mouse to see if you’re going to click on something. So the computation never halts, right? Your PC continually runs background processes, in fact. So this whole halting thing, which is part of the formal definition of an algorithm, really is just a matter of convenience for humans, to let us carve out parts of a program and think of them as having beginnings and ends. So AGIs will be a collection of algorithms, just like any other program, okay? Now, as I said, I don’t know where this idea comes from, that AGI won’t be an algorithm. It’s been attributed to Deutsch, but apparently wrongly so.
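The calculator point can be made concrete in code. This is a minimal hypothetical sketch (the function names are mine, not Rucker’s): each individual evaluation is an algorithm in the strict sense, with input, output, and a definite end, while the surrounding event loop is the part that never halts on its own.

```python
def evaluate(expression: str) -> float:
    """An algorithm in the strict sense: takes input, halts, returns output."""
    # eval() is fine for a toy sketch; a real calculator would parse safely.
    return float(eval(expression))

def calculator_loop(events):
    """The enclosing loop never halts by itself -- it just waits for the
    next event, like a pocket calculator or a browser polling the mouse.
    Here we feed it a finite list of events so the demo can end."""
    results = []
    for expression in events:   # in a real device: while True: wait_for_key()
        results.append(evaluate(expression))
    return results

print(calculator_loop(["1 + 2", "3 * 4"]))  # [3.0, 12.0]
```

Carving `evaluate` out of `calculator_loop` is exactly the “matter of convenience” move described above: we pick a piece of the never-ending process and treat it as having a beginning and an end.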

[00:40:22]  Red: So, for example, in Deutsch’s famous Aeon article on why we haven’t discovered AGI yet, he says that in this case, and actually in all other cases of programming genuine AGI, only an algorithm with the right functionality would suffice. Only an algorithm with the right functionality would suffice. So he refers to AGI as an algorithm there. He also, elsewhere in the article, refers to the software running on the human brain as the human’s algorithm. So this idea does not come from Deutsch, okay? Hopefully we’ve done away with that, but keep this in mind, because I know some of my audience may have this idea that AGI’s not an algorithm, and I’m trying to make sure you realize that’s a false idea. Stop thinking of it that way. I think the only sense in which it’s a true statement is that an AGI is really more like a collection of algorithms that run together, right? Like programs that don’t halt. Go play Skyrim. There’s no end to Skyrim, right? It’s not an algorithm in that sense, but it is a collection of algorithms. There’s an algorithm where you take some sort of input, it updates its model of the world, and then it paints the screen to match what you see, okay? And you can think of that whole process as a single algorithm. It’s important to realize that you can pull that exact same trick with what humans do, okay? And I’m going to give examples of this as we go along, or rather, Rucker’s going to give examples as we go along.

[00:41:57]  Red: So now, there are functions that are computable and functions that are not computable, the halting problem being the quintessential example of a non-computable function. But even among the computable ones, many of them, even though you can precisely define what you want computed and how to compute it, you still may not be able to feasibly compute. This is, of course, the concept of intractability, okay? Even though computations are deterministic, they can yield surprising results. This is from page 20. That is to say, many computations are unpredictable because they yield surprising results. You couldn’t have foreseen that outcome just by looking at the computation itself, looking at the algorithm itself, okay? So let’s try to define predictability a bit better. P is predictable, and this is still all Rucker, though he’s quoting Wolfram, if there is a shortcut computation Q that computes the same results as P, but very much faster. Okay, let me read that again. P is predictable if there is a shortcut computation Q that computes the same result as P, but much faster; otherwise P is said to be unpredictable. This definition might make you uncomfortable at first, but it’s actually self-evidently true if you stop and think about it. Let’s use an example of a predictable computation: say, the orbits of the planets in the solar system. Let’s say you want to predict what the positions of the planets will be in a million years. Now, one way you could do that would be to create a simulation of the planets and then run it for a million orbits of the Earth around the sun, i.e., a million years. But you’d not need to do that.

[00:43:44]  Red: You could do that, and that would simulate it, and then you could know what the result would be and get your prediction that way. But you’d not need to do that, because the orbits are periodic. There is, in fact, a shortcut computation that can get the result without having to do a full simulation point by point. Now, let’s make the following assumption. Assumption A: computation A has no shortcut computation that computes the same results. This is an assumption about some given computation. Now, my question for you is this: can you predict this computation without running the computation itself? If you can, you just violated the very assumption this whole thing was based on. So the answer, logically speaking, must be no, you can’t predict this computation without actually running it. Any computation that has no shortcut computation must, by definition, be unpredictable and surprising in its results.
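The orbit example can be sketched in code. This is a toy model of my own (a single idealized circular orbit with angular velocity `OMEGA`), not anything from the book: one function grinds through every time step, the other uses the periodicity to jump straight to the answer.

```python
import math

OMEGA = 2 * math.pi   # one full orbit per year
DT = 0.001            # simulation time step, in years

def simulate(years: float) -> float:
    """Predict the orbital angle by grinding through every time step."""
    angle, t = 0.0, 0.0
    while t < years:
        angle += OMEGA * DT   # advance the orbit one small step
        t += DT
    return angle % (2 * math.pi)

def shortcut(years: float) -> float:
    """The shortcut computation: because the orbit is periodic, a closed
    formula gives the same answer without simulating anything."""
    return (OMEGA * years) % (2 * math.pi)

# Both agree up to numerical error, but the shortcut's cost is constant
# no matter how far ahead we predict -- that is what 'predictable' means.
print(abs(simulate(3.25) - shortcut(3.25)) < 0.05)  # True
```

Predicting a million years ahead costs `simulate` a billion steps but costs `shortcut` one multiplication, which is the whole content of Wolfram’s definition of predictability.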

[00:44:51]  Blue: And so, did you just solve the halting problem?

[00:44:54]  Red: No. No. No.

[00:44:57]  Blue: Nothing that groundbreaking? The halting problem.

[00:44:59]  Red: This has nothing to do with the halting problem.

[00:45:02]  Blue: Okay, sorry.

[00:45:05]  Red: Actually, that’s not true. It does have something to do with the halting problem, and in fact, I’m going to get to what it has to do with the halting problem in just a second. But it’s not like I’ve solved the halting problem. I’m going to rely on the halting problem, if that makes any sense. Must a computation be complex to be unpredictable? The answer is, maybe surprisingly, no. There are many simple computations that have no shortcut, and thus are unpredictable, and thus surprising. So, from pages 22 to 23, Rucker says: the notion of computer programs being unpredictable is surprising because we tend to suppose that being deterministic means being boring. Note also that since we don’t feel ourselves to be boring, we imagine that we must be non-deterministic and thus not at all like a rules-based computational system. Rucker now goes over Wolfram’s classes of computation. This is a really interesting theory. Let me just say that it does have some problems, and Rucker actually points out some of them. But we’re going to go over the theory in detail, because I feel like even though it’s probably ultimately not quite right, it’s onto something, if that makes sense. It has verisimilitude. So, class 1 would be a computation that enters a constant state. No surprise at all. Class 2 generates a repetitive or nested pattern, like the orbits of the planets around the Sun. So, again, no surprise. Class 3 produces messy, random-looking crud. No structure at all, but the results can’t be predicted; they are surprising. And class 4 produces a gnarly, interesting, non-repeating pattern.

[00:46:59]  Red: So, even though there’s structure and an obvious pattern just looking at it, the pattern is surprising. Okay. Note, you can’t always be sure which class you’re in.

[00:47:15]  Unknown: Okay.

[00:47:15]  Red: So, if you’re class 4, like, you can’t be sure if you’re really class 4 or actually class 2 that just hasn’t repeated yet. And class 3 and class 4 may be indistinguishable from each other at first, too. Okay. So, again, these aren’t decidable classes. Unless you’ve got a good explanation for it, and we’ll give examples where you can have a good explanation, you can’t just look at the pattern produced by a computation and say, oh, that’s clearly class 3, that’s clearly class 4. They’re not really meant to be used that way. I think what we want to take away from this is that these 4 classes exist, right? Not that we can formally tell which one’s which, if that makes any sense. But I don’t think anybody doubts that there are computations that fit into these 4 classes.
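The classes are easiest to see in the elementary cellular automata Wolfram studied. Here is a minimal sketch (the rule choices are standard textbook examples, not from Rucker’s book): rule 250 settles into a simple repetitive class 2 pattern, rule 30 produces random-looking class 3 output, and rule 110 is the classic gnarly class 4 case.

```python
def step(cells, rule):
    """Apply an elementary cellular automaton rule to one row of cells.
    Each new cell depends on its three-cell neighborhood; the rule
    number's bits encode the output for each of the 8 neighborhoods."""
    n = len(cells)
    return [
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

def evolve(rule, width=31, steps=12):
    """Start from a single live cell and print the evolving rows."""
    cells = [0] * width
    cells[width // 2] = 1
    for _ in range(steps):
        print("".join("#" if c else "." for c in cells))
        cells = step(cells, rule)

evolve(250)  # class 2: a simple repetitive expanding pattern
evolve(30)   # class 3: messy, random-looking output from the same machinery
evolve(110)  # class 4: gnarly, non-repeating, and known to be universal
```

The point of the demo is that the machinery is identical in all three runs; only the 8-bit rule number changes, yet the outputs land in entirely different classes.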

[00:48:06]  Unknown: Okay.

[00:48:08]  Red: It may also differ by input for a single computation. So, for example, I may have a single computation that with a certain input immediately produces a constant state, in which case there’s no surprise. But with a different set of inputs, it may turn into a gnarly, interesting, non-repeating pattern, which means it’s class 4 instead. Okay. So, in fact, determining with certainty which class a computation is in is itself known to be incomputable, equivalent to the halting problem. Yet it isn’t hard to see that these are useful if rough classifications. Now, up to this point, I pretty much entirely agree with Wolfram. This is me talking now. But this next part doesn’t seem quite right to me, so we’re going to dig into it a little further. Wolfram has something called the principle of computational equivalence, or PCE. It’s defined as: almost all processes that are not obviously simple can be viewed as computations of equivalent sophistication. Okay. So, I’m going to say that again. The PCE, the principle of computational equivalence, is that almost all processes that are not obviously simple can be viewed as computations of equivalent sophistication. Peter, run your bullcrap counter across that statement. Tell me what you think.

[00:49:39]  Blue: Sorry. Can you, you said it twice. Can you just say one more time?

[00:49:43]  Red: Almost all processes that are not obviously simple can be viewed as computations of equivalent sophistication. So, if a computation isn’t obviously simple, then all such computations are equivalently sophisticated, other than the -

[00:50:00]  Blue: Am I crazy, or is that a lot like Deutsch’s principle of optimism?

[00:50:05]  Red: Explain.

[00:50:06]  Blue: All problems, if they are interesting, are soluble.

[00:50:11]  Red: I didn’t see the connection at first, but you’re right. There is a connection there -

[00:50:15]  Blue: Yeah.

[00:50:15]  Unknown: Yeah.

[00:50:16]  Blue: That’s how I’m kind of reading it, which, you know, is something I’ve given a lot of thought to, and I think -

[00:50:20]  Red: Apparently your bullcrap counter doesn’t go off on the PCE. Mine does.

[00:50:27]  Red: Okay, okay. It seems to me like it’s obviously false. I’m going to dig into it, and it’s not as obviously false as my bullcrap meter thinks. But to me, it just seems kind of ridiculous, this idea that any computation out there in nature that’s of any sophistication is equivalent to all the others, right? How can that be? It just seems wrong to me, intuitively.

[00:50:55]  Blue: Fair enough.

[00:50:56]  Red: So this means that all complex computations, ones that you can’t immediately see are simple, are equivalently complex. So Rucker gives the example of the motion of leaves on a tree being a sophisticated computation, as sophisticated as the brain. Okay. So you’ve got a tree, its leaves are moving in the wind, and that computation is as sophisticated as a human brain running the mind. Okay. Doesn’t that seem wrong to you? It seems wrong to me, right?

[00:51:34]  Blue: Well, one thing I kind of get from Deutsch is that brains aren’t all that complicated in some ways. I mean, we don’t understand them because we don’t understand the program that is running on our brains. But it’s not like anything that crazy is happening that couldn’t be replicated on a fairly simple computer.

[00:52:06]  Red: I can see I’m thoroughly failing to convince Peter that this has something intuitively wrong with it. Peter’s got the exact opposite intuition.

[00:52:16]  Blue: Well, you know, it could be I’m just looking at it through my, to use a controversial term, Deutschian lens.

[00:52:23]  Red: Yeah, yeah. You know, to some degree, I want to actually emphasize this. I do think human beings want to reason through their bullcrap meter, right? They want to use their intuitions and kind of just subjectively say, no, that explanation seems silly to me, or, oh no, that seems like it really explains things well. And one of the things I’ve really emphasized in our last few podcasts is that that’s the opposite of Popper’s epistemology, right? Popper’s epistemology is in some sense really about how you squeeze out those subjective criticisms from the process and get down to things we can all agree upon, what I’ve called objective criticisms. And the reason why is because our intuitions, as strongly as we might feel them, are sometimes just completely wrong. You can have a very strong intuition that you can’t possibly be wrong about something, and you can be wrong anyhow. And so I’m going to argue that the PCE is maybe closer to the truth than my intuitions suggest. Okay. So let me say this. The reason I feel like I don’t buy the PCE is for the obvious reason that we can give a realistic simulation of leaves on a tree with far more limited computation than we could of a brain. To put this more plainly: I could today go into Unity and make a tree blowing its leaves in the wind, and it would take X amount of computation, a very small amount, right? Whereas if I was trying to simulate Peter’s brain in Unity, I probably wouldn’t have anywhere near the resources necessary to do it.

[00:54:10]  Blue: Fair enough.

[00:54:11]  Red: So based on that simple example, my first impulse is certainly that something’s wrong with the PCE.

[00:54:19]  Blue: The brain is more like a forest, maybe.

[00:54:23]  Red: Well, okay. So that would be an example. If a tree blowing in the wind is a sophisticated computation, why wouldn’t a whole forest of trees be a more sophisticated, more complex computation? Again, that seems really obvious, and it seems like a contradiction to the PCE. Okay. So let me take this a little further, because maybe what’s going on here is that I’m reading it too literally and I need to read it more charitably. I think a charitable reading would probably be something like this. I can imagine Stephen Wolfram saying to me: okay, wise guy, but can you give an exact simulation of the leaves blowing in the wind on a tree without actually having a specific tree with a specific wind on a specific earth with a specific environment? And I think that’s really what Wolfram’s trying to get at here, because I think the answer to that is no, I can’t. So I would concede the point at least that much. This is where Wolfram’s theories really turn out to not simply be a version of the Church-Turing-Deutsch thesis. The reason why is because the Church-Turing-Deutsch thesis says we can simulate leaves blowing on a tree realistically, not that we can simulate an exact tree in an exact situation. In fact, I tend to agree with Wolfram here that it is presumably simply impossible to simulate an exact tree in an exact environment with an exact wind without basically having that exact tree in that exact environment with that exact wind. If I do accept that view, and I’m still not sure if I do or not, there are some consequences. So

[00:56:13]  Red: one of the consequences is called, this is from page 27 now, the principle of computational unpredictability, or PCU, which is: most naturally occurring complex computations are unpredictable, where complex means either class three or class four, i.e., there simply is no shortcut for that specific computation, and so it gives surprising results that can’t be predicted except by running the full computation itself. And to quote Rucker directly: it follows that for many systems, no systematic prediction can be done, so that there is no general way to shortcut their process of evolution, and as a result, their behavior must be considered computationally unpredictable. Though I feel the PCE is not quite right in some way, I feel like it’s making a point that does lead to a correct idea in the PCU, namely that the only way to predict most computations is to actually do them. However, Wolfram goes too far, in my opinion, or at least depending on how you read him, when he goes on to say: and this, I believe, is the fundamental reason that traditional theoretical science has never managed to get far in studying most types of systems whose behavior is not ultimately quite simple. That is a quote from Wolfram; I’m quoting him from Rucker’s book, page 28. Wolfram needs to take a harder look at the Church-Turing-Deutsch thesis here. The distinction between simulating a kind of system, which follows from the CTD, as opposed to simulating a specific system, which does not follow from the CTD, is deeply relevant here. We will never simulate most systems. That’s Wolfram’s point. But we can understand the simple computations that create those complex, unpredictable, and surprising outcomes. That’s the Church-Turing-Deutsch thesis’s point, in my opinion.

[00:58:16]  Red: And maybe we could see them as not at odds with each other if we interpret them in this way. I do think that’s probably a slight tweak from what Wolfram originally said or meant. Maybe even a slight one, but one that has profound consequences. But that’s my opinion on it: if we can accept this tweak, then I can maybe accept Wolfram’s point of view here. So to some degree, Wolfram does seem to get this. From page 106: the bad news that Wolfram brings for physics is that in any physically realistic situation, our exact formulas fail and we’re forced into step-by-step simulations. Okay, that’s true. We can’t just run a shortcut computation and get the result. We have to actually simulate every atom moving in relationship to every other atom. So, also from page 106: a real object’s motion will at times be carrying out a class four computation, so in a formal sense, the object’s motion will be unpredictable, meaning that no simple formula can give full accuracy. Now, from page 28: when a computation generates an interesting and unexpected pattern or behavior, we call this emergence. Which I think is a really great way to understand emergence: emergence is when a computation generates an interesting and unexpected pattern or behavior. An example would be the Mandelbrot set. It’s a famous example of emergence from simple rules that have infinite complexity. But we can immediately see it isn’t just random. It forms rough, non-periodic patterns, making it a class four pattern, i.e., it produces gnarly, interesting, non-repeating patterns. Another famous example of emergence is the interesting, non-repeating patterns of cellular automata.

[01:00:14]  Red: And this is really what Wolfram made himself famous for: his studies into cellular automata. In fact, he has made wild claims about physics really being cellular automata and about using cellular automata to recreate our physics theories. These are challenged, and I don’t think I agree with him on that front. But the concept of cellular automata is interesting. Go look it up. I’m not going to get into what cellular automata are, but it’s usually a very simple set of rules where you turn pixels on or off, and the rules give surprising results that create pretty pictures and interesting-looking patterns that you could never have predicted from the simple rules.

[01:01:01]  Blue: Well, I was just looking at the, I’m one step behind here, looking at the Mandelbrot set on ChatGPT, and so this is about fractals, right?

[01:01:11]  Red: Yes, actually Google the Mandelbrot set and just look at a picture of it. It’s beautiful. It’s very interesting looking. And

[01:01:20]  Blue: And this guy, is this someone who invented the concept of fractals, or is this just one example of what a fractal is?

[01:01:28]  Red: Yeah, no. I can’t remember the whole story of the Mandelbrot set. It seems like they were running a little program and decided to draw a picture of it on the screen, and they immediately saw that it was this really interesting-looking picture. And it was surprising. Have you looked it up? Look it up and take a look at it. Okay, now here’s the thing that’s cool about it. If you zoom in on any of the details in the Mandelbrot set, the zoomed-in version is just as interesting.

[01:02:02]  Blue: Okay. It all kind of looks roughly similar, but you can tell it’s not the same, right? I see.

[01:02:09]  Red: So you like zoom in on the details, and it turns out that what looked like simple detail, once you get close enough, it starts to turn into this interesting set of details itself that kind of look like the Mandelbrot set. And you can immediately tell, I’m still looking at the Mandelbrot set, because it’s so similar. And yet it’s not the exact same thing, right? Okay. Because it’s infinitely, surprisingly interesting. There’s an obvious pattern there, but the pattern is not periodic. Okay. So this is

[01:02:40]  Blue: So this is a kind of fractal, but not all fractals are the Mandelbrot set. That’s right. That’s right. Okay. Interesting.

[01:02:48]  Red: So this is why Wolfram tries to work out how to derive physics from something like cellular automata. He’s trying to fit physics into the framework of a class 4 computation. Now, I personally have my doubts that this is the right way to go about it. And honestly, it doesn’t even seem like that’s an implication of his theory. Compare this to how I accept the universal explainership hypothesis, but feel Deutsch has derived several theories from it that aren’t actually implied by the theory. I feel the same way here: I agree with the basics of what Wolfram’s saying, but it does not at all seem to me that we should try to force-fit physics into the framework of a cellular automaton. Even if you accept his theory as correct, I just don’t think that’s an implication that makes sense. Now, from page 30: emergence is different from unpredictability. On the one hand, we can have unpredictable computations that don’t have any high-level emergent pattern. The dull digits of pi would be an example of this. On the other hand, we can have computations that generate emergent patterns that are in the long run predictable. This is all from page 30; that was all Rucker. So Rucker gives these examples: the Vichniac vote rule, ultimately predictable and thus class 2. Flocking behavior, like you see with birds, usually class 4 we think, but sometimes class 2. The Mandelbrot set, assumed to be class 4, but with no way to prove it. You look at it and you can kind of immediately tell it’s class 4. It’s almost the quintessential example of class 4, but there’s no way to prove it.
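For anyone who wants to see how simple the Mandelbrot rule is, here is a minimal escape-time sketch (the iteration cap of 100 is an arbitrary choice of mine): iterate z → z² + c from z = 0, and treat c as inside the set if the orbit never escapes.

```python
# The Mandelbrot set: c is in the set if iterating z -> z*z + c from
# z = 0 never escapes to infinity. Infinite complexity from one line.

def escape_time(c: complex, max_iter: int = 100) -> int:
    """Return how many iterations it takes |z| to exceed 2, or max_iter
    if it never does (in which case c is treated as inside the set)."""
    z = 0j
    for i in range(max_iter):
        if abs(z) > 2:       # once |z| > 2, the orbit provably diverges
            return i
        z = z * z + c
    return max_iter

# c = 0 stays at 0 forever: inside the set. c = 1 grows without bound.
print(escape_time(0))    # 100 (never escapes)
print(escape_time(1))    # 3 (escapes quickly)
```

Coloring each pixel of the complex plane by its escape time produces the famous gnarly pictures, all emerging from that single one-line update rule.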

[01:04:26]  Red: Okay. So, from page 43: a computation is universal if it can emulate any other computation. This is an interesting point. Now, I had a CritRat friend who thought that the fact that the human brain is equivalent to a universal computer was a stunning revelation. He wanted to go around teaching children and people: you have a universal computer in your brain, isn’t that amazing? Right? The problem is that nearly every computation is universal. In fact, it’s so difficult to come up with non-universal computational machines that you have to carefully stop and think about how to make something like a finite automaton or a pushdown automaton such that it’s not equivalent to a Turing machine. The vast majority of computing machines that you might invent, if you just went out and invented one, will be equivalent to a Turing machine, not to a finite automaton, because the vast majority of imaginable computations are universal.
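Non-universal machines really do have to be deliberately constrained. Here is a minimal sketch of one (a toy example of mine, not from the book): a two-state finite automaton that can only answer whether a bit string has an even number of 1s, because its entire memory is a single state bit.

```python
def even_ones(bits: str) -> bool:
    """A deterministic finite automaton with two states, EVEN and ODD.
    Its entire memory is which state it's in -- far short of a Turing
    machine's unbounded storage, which is why it is not universal."""
    state = "EVEN"
    for b in bits:
        if b == "1":
            state = "ODD" if state == "EVEN" else "EVEN"
    return state == "EVEN"

print(even_ones("1001"))  # True: two 1s
print(even_ones("111"))   # False: three 1s
```

Notice how much care the constraint takes: the moment you give a machine any unbounded read-write memory, a stack plus a counter, a tape, a grid of cells, you almost invariably land back at Turing equivalence.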

[01:05:29]  Blue: Yeah. When I first read The Beginning of Infinity, I thought that was pretty much David Deutsch’s point, that human brains are universal computers. But now I realize that universal computers are a dime a dozen.

[01:05:42]  Red: What

[01:05:43]  Blue: he’s really saying is that it’s something, it’s a universal explainer.

[01:05:49]  Red: Which is a different thing that

[01:05:50]  Blue: has some, and it’s more that he’s using the universal computer thing as more

[01:05:55]  Red: of an

[01:05:55]  Blue: analogy, I guess. That’s how I’m currently

[01:05:58]  Red: As we’ve discussed on this podcast, there are some connections between the two concepts, which I think further confuses people, but they are not the same concept, and it is more of an analogy. Okay. From page 43, quoting Rucker: when we examine the naturally occurring computational systems around us, like air currents or growing plants or even drying paint, there seems to be reason to believe that the vast majority of these systems support universal computation. That may be surprising to many. If you’re ever curious, go look up the pool table computer. You can use a pool table to build a universal computer. Universal computers are literally a dime a dozen. As such, we have every reason to believe that animal brains, especially ones with a neocortex like mammals, are universal computers, just as human brains are. Now, of course, don’t confuse this with universal explainership, which is something different, as you just said. Okay. Now, we might take the PCE, which is that almost all processes that are not obviously simple can be viewed as computations of equivalent sophistication, and take it to mean something like this: most naturally occurring complex computations can emulate each other. This is from page 49. In fact, given that nearly all computations are universal, this must be correct. And with my small tweak, that we’re talking about a specific emulation of something, not a general emulation of a class of something, we also have every reason to believe that nearly everything in nature has a huge amount of computational power to it. This gives rise to a reformulation of the PCE as: most naturally occurring complex computations are universal, and

[01:07:53]  Red: thus a reformulated PCU as: most naturally occurring complex computations are unpredictable. That’s from page 43. Now, on page 87, Rucker actually references the three-body problem, made famous by the show and the books, as an example of how it is impossible, even for a mere three bodies, to predict the outcome of a computation without simply actually running the computation in real life. This was the basis for the now-famous three-body problem storyline on Netflix, where there are aliens that can’t make good plans for their civilization due to being near three suns that make their planet’s orbit wholly impossible to predict. Or maybe it was two suns, because the planet was the third body. I can’t remember. It doesn’t matter.

[01:08:41]  Blue: So basically, if there are two planets, it’s completely predictable, but if there are three, you’ve just got to run the simulation. There’s no other way to compute it.

[01:08:52]  Unknown: Yeah, well,

[01:08:52]  Red: they have to be, like,

[01:08:53]  Unknown: perfectly

[01:08:53]  Red: sized. Like, obviously, we have multiple planets in our solar system, and they’re completely predictable. But the reason why is because the sun is absolutely trouncing everything else in terms of its gravity, right? Oh, okay. So with the three -body problem, the aliens lived, I think it was with three suns. And this isn’t just

[01:09:11]  Blue: about planets. This is just a statement about physical reality, right?

[01:09:15]  Unknown: Right.

[01:09:15]  Red: So they’re equally-sized suns, and so their planet was bumping between them, and they didn’t have a stable orbit. They would orbit around one sun, then they would get sucked away by another sun. And so it was basically impossible to predict what was going to happen, because the three-body problem is thoroughly unpredictable without actually doing it in nature. That’s the whole basis for the storyline of these aliens and the three-body problem and why they want to invade Earth.
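The unpredictability described here comes from sensitive dependence on initial conditions. A real three-body integrator is beyond a quick sketch, but the logistic map, a standard toy chaotic system used here purely as a stand-in, shows the same effect: two starting points differing by one part in ten billion diverge completely within a few dozen steps.

```python
def logistic(x, steps, r=4.0):
    """Iterate the chaotic logistic map x -> r*x*(1-x), returning the trajectory."""
    traj = [x]
    for _ in range(steps):
        x = r * x * (1.0 - x)
        traj.append(x)
    return traj

a = logistic(0.3, 60)
b = logistic(0.3 + 1e-10, 60)  # nearly identical starting point
# The tiny initial difference grows roughly exponentially, so the
# prediction horizon is short no matter how precise your measurement is.
gap = [abs(p - q) for p, q in zip(a, b)]
print(max(gap))
```

The only way to know where trajectory `a` ends up is to run it, which is exactly the "no shortcut, just run the computation" point.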

[01:09:41]  Blue: Did they even explain that on the show?

[01:09:44]  Red: They had a cool graphic where they showed it.

[01:09:47]  Unknown: Yeah.

[01:09:48]  Blue: Okay. Because I need to watch it again.

[01:09:50]  Red: Yeah. I thought they did a pretty good job of explaining it. Okay.

[01:09:55]  Blue: Okay.

[01:09:56]  Red: Okay. Page 104: even our most highly parallel digital computers have a minuscule number of computational nodes compared to nature. That was, I think, a quote from Rucker. So this kills the hypothesis that we’re living in a simulation, based on the supposed logic that most living beings will live in simulations. It’s actually impossible, at least according to the laws of physics as we currently understand them, absent something like, say, an Omega Point computer, to build a computer that can simulate the whole universe. It would take a computer much larger than the whole universe. So here I am ignoring a special sort of computation like the Omega Point. But at the moment, the Omega Point is a discredited theory, so I feel okay to ignore it. Okay. So Rucker says digital computers have no hope of visibly emulating the full richness of the physical world in real time. Okay. So there was an interesting discussion I had recently. I quoted Pedro Domingos. He’s a famous professor of computer science, AI, that sort of thing. And he made a quote, I should have looked it up for the show, where he basically made the argument I just made, making fun of people who believe that we live in a simulation, because it’s basically impossible to make a simulation that simulates the whole universe. And therefore this whole idea that most people will live in a simulation just isn’t true, because each simulation has much, much reduced resources compared to the outer world that the simulation is running in. Okay. So it just isn’t the case; the whole argument that Bostrom makes is based on this completely wacky, unfounded, ridiculous idea. Okay.

[01:11:53]  Red: Every crit-rat that responded to me immediately said, no, that’s a terrible argument he’s making, because I can just imagine that we’re in a simulation, but the outer world we’re running in has 10 to the 10 to the 10 to the 10 times the number of computations as the simulation. Here’s the problem with that response, and why that response is itself correct but completely misses the point. What Pedro is doing is forcing you to think about your assumptions. He’s pointing out that to make your simulation hypothesis work, you can’t simply look at the world you’re in and then say: look, in this world, we should assume that most people will live in simulations based on our current understanding of the laws of physics, and therefore we should assume that we’re in a simulation. Okay. That does not follow from the laws of physics as we understand them. In fact, the opposite follows from the laws of physics as we understand them. Okay. To make that work, you’re actually throwing in an extra assumption, which is the one the crit-rats were all trying to quote to me. You’re actually saying there exists a world that isn’t our world, doesn’t follow our laws of physics, and is different from our world, and it has 10 to the 10 to the 10 to the 10th additional levels of computation compared to the real-world universe. And this other world, whose existence you first posit based on no reason at all, not based on trying to solve a problem, not because you needed it as part of an explanation, makes no predictions. It’s a purely supernatural belief. Okay.

[01:13:47]  Red: If you’re willing to make that additional assumption, sure, then Nick Bostrom’s argument now makes sense, but only under that circumstance. Okay. Once we realize that’s the case, that’s how you actually dismiss his argument: okay, your argument is in fact starting with a totally supernatural assumption. And if you don’t start with that assumption, if you start with the assumption that the laws of physics actually apply, then what you’re saying doesn’t make sense and your whole argument falls apart. And that was what I liked about his argument. Okay. So page 110, Rucker says Wolfram also speaks of such unpredictable computations as irreducible or as intrinsically random. Now, Rucker takes issue, and so do I, with that wording, because more technically it should be pseudo-random, since what Wolfram is calling intrinsically random here comes out of a fully deterministic but unpredictable process. Note, and I’m going to get to this in a second, I think Rucker has also misused the term pseudo-random here, because an unpredictable process is not necessarily pseudo-random. Now, as far back as episode seven, we talked about how there are two kinds of probability: probability due to an actual random process, say predicting the rolls of a six-sided die, and probability due to simply being ignorant, say predicting a hidden six-sided die that has already been thrown. So it now definitely has a side that’s up, and you just don’t happen to know what it is. Okay. We have something similar going on here. We utilize the same probability calculus for both kinds of situations, probability due to actual randomness and probability due to ignorance, and the probability calculus can be used in both circumstances.

[01:15:51]  Red: I will explain how I know that is absolutely the case. It can also be abused, and I know Deutsch gets very vehement on this. In fact, he argues it so strongly that it almost turns into a hatred of the word probability, and I probably need to explore that further in a separate podcast. Okay. Yes, using the probability calculus for ignorance can turn out to be an abuse of the probability calculus, but it isn’t always an abuse of the probability calculus. Okay. Let me just say that for now. Okay.

[01:16:32]  Blue: distills things.

[01:16:34]  Red: Yeah. So one argument you sometimes hear is that really the two kinds of randomness are the same: both are really lack of knowledge of initial conditions, via chaos theory. Now, quantum mechanics has strongly challenged this view. I would say it has refuted this view, and suggested, at least from the point of view of a conscious being living inside of a universe, which is all of us, that randomness is fundamental to reality. So yes, I know Deutsch has several talks where he seems to claim otherwise. Let me actually, in just a moment, quote him, and I will explain why what I’m saying is not at odds with what he is saying, and why I actually take issue with the way he words things, because I feel it causes misunderstandings. I would also note that most crit-rats I’ve talked to about this seem to fundamentally misunderstand the notion of randomness due to these talks from Deutsch. For example, a deterministic multiverse where one sixth of all universes each get a different die roll is not at odds with the concept of randomness; it’s actually an explanation of how randomness is fundamental to reality from the point of view of an observer. Obviously, as an observer, some version of you ends up in each universe, but that is the same as saying that from your point of view, there is a one in six chance the die will come up on any one of the sides. "You" meaning whichever version it is that sees that side. And presumably, we invented the terms randomness and probability to explain such observations from the point of view of an observer in a single universe.

[01:18:19]  Red: Therefore, QM is not at odds with the concept of randomness or probability. In fact, it embeds it deeply into reality in such a way that it is fundamental to reality. Okay, randomness is a real thing and probability is a real thing. Okay. Let me come back to that point, because I understand why people get confused on this, and I understand what Deutsch is trying to say, but he says it in a way that I feel is a little misleading. So let me actually pull up the quotes to help explain here. So here’s a quote from Deutsch from The Fabric of Reality. He says: it is perhaps worth stressing the distinction between unpredictability and intractability. Unpredictability has nothing to do with the available computer resources. Classical systems are unpredictable, or would be if classical systems existed, because of their sensitivity to initial conditions. Quantum systems do not have that sensitivity, but are unpredictable because they behave differently in different universes and so appear random in most universes. So notice how Wolfram refers to any classical unpredictability as randomness, and that this is misleading, because that’s not what the term randomness normally means. So score one for Deutsch here. But also notice that Deutsch calls classical unpredictability "intractability" and reserves the term "unpredictability" only for true randomness. Now, this is also a misleading use of terms, as an intractable algorithm is unpredictable. So you can’t reserve that term just for randomness like Deutsch is trying to do. So score one for Wolfram here. So neither gentleman uses terms in such a way that we aren’t likely to get confused. I find both of their uses of terms very confusing. Okay. Let me make some suggestions.

[01:20:22]  Red: Then Rucker also uses terms in a way that I find confusing. Rucker tried to use the term pseudo-random to refer to any unpredictable process. Here’s the actual quote: I might mention in passing that computer scientists also use the word pseudo-random to refer to unpredictable processes. Okay. That seems pretty clear. This is not actually how the term pseudo-random is normally used. It is not normally used as equivalent to unpredictable. Now, Rucker does go on and kind of explains this, so I don’t know if he’s really confused. I think it’s just that one sentence is confusing the way it’s worded. So he explains it like this. He says: any programming environment will have built into it some predefined algorithm that produces reasonable random-looking sequences of numbers. These algorithms are often called pseudo-randomizers. The pseudo refers to the fact that these are in fact deterministic computations. This is also from page 110. An unpredictable algorithm may or may not produce a reasonable random-looking sequence. Isn’t that kind of Rucker’s point, that it may produce some sort of pattern that is clearly not random? Okay. So isn’t that the difference between a class three and a class four computation under Wolfram’s classifications? So it’s a mistake to try to equate unpredictability with pseudo-randomness. Specifically, pseudo-randomness is a kind of unpredictability, but unpredictability is not always pseudo-random, would be the correct, or at least the less confusing, way to word things. However, you can use a fully deterministic process and treat it identically to a stochastic, that is to say truly random, process. The fact that you can do that is notable. Okay. And this is part of what I’m trying to explain.

[01:22:20]  Red: This is why Turing machines don’t require a randomizer to be considered a universal computer. If stochastic processes, or random processes, were necessary for some algorithms to work, then the Turing machine wouldn’t be a universal computer; a Turing machine with a randomizer attached would be necessary to have a universal computer. That just isn’t the case, right? Like, literally every algorithm that you can imagine with a randomizer, you can do with a pseudo-randomizer and it will still work. Okay. Here’s something, and this is going to be a controversial statement, but it just follows from everything we’re talking about. This is why Bayesians are correct, or at least sometimes correct is what I mean, to treat ignorance as probability: because the probability calculus does apply to pseudo-random numbers, even though they come from a fully deterministic, non-random process. This is why, when you try to claim the probability calculus doesn’t apply to ignorance, which is what I’ve seen many crit-rats try to claim, often quoting Deutsch, maybe not understanding Deutsch, you’ve missed something. If I go run a computer program and I have a pseudo-random number generator, it’s a deterministic process that’s been intentionally made to look like a random pattern. Okay. It’s surprising in almost exactly the same way a random pattern would be. The way you would deal with that would be with the probability calculus, and it would work correctly, even though it is not actual randomness. Okay. This means that the Bayesians are correct, at least in some circumstances, to apply a probability to non-random processes and instead think of it as a form of ignorance.
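A concrete way to see this point: a pseudo-random generator is fully deterministic, yet its output statistics obey the ordinary probability calculus. Below is a classic linear congruential generator; the multiplier and increment are the well-known glibc-style constants, and everything else is an illustrative sketch:

```python
class LCG:
    """A linear congruential pseudo-randomizer: fully deterministic,
    but its outputs look, and behave statistically, like random draws."""
    def __init__(self, seed):
        self.state = seed

    def next_unit(self):
        # Classic LCG update: new_state = (a * state + c) mod m
        self.state = (1103515245 * self.state + 12345) % (2 ** 31)
        return self.state / (2 ** 31)  # scale into [0, 1)

g1, g2 = LCG(seed=42), LCG(seed=42)
run1 = [g1.next_unit() for _ in range(10_000)]
run2 = [g2.next_unit() for _ in range(10_000)]
assert run1 == run2  # deterministic: same seed, same "random" sequence

# Yet probability reasoning works on it: roughly half the draws fall below 0.5.
frac = sum(x < 0.5 for x in run1) / len(run1)
print(frac)
```

Nothing random happens anywhere in this code, but if you are ignorant of the seed, treating the outputs with the probability calculus gives you correct expectations, which is exactly the point about ignorance-as-probability.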

[01:24:17]  Red: If you don’t believe that, you are the one that’s wrong and the Bayesians are right. The correct way to go about criticizing the Bayesians isn’t to try to deny that very essence; it’s so easy to show they’re right by writing a pseudo-randomizer, and so easy to refute your view if you think they’re wrong altogether. What you really should be criticizing is when they use it in cases where it isn’t a pseudo-random process. This is a topic for another time. Okay. So the Bayesians do get something wrong here. It just isn’t what the crit-rats think it is. Also note that Deutsch claims that there is no such thing as true randomness. Let me quote here again just so you can be clear that he did say this. He says, from The Fabric of Reality: quantum systems do not have that sensitivity but are unpredictable because they behave differently in different universes and so appear random in most universes. Notice the wording: so appear random. This is also terribly misleading. It implies that randomness would only have existed had quantum physics not been many-worlds, and that what we call randomness is actually only apparent randomness to him. But I take issue with this misleading way of wording things. It’s not that I think he’s conceptually wrong here. Okay. But presumably the word random was invented by people living in single universes to explain why some things were fundamentally unpredictable from their point of view in a single universe.

[01:25:58]  Red: So insisting that this kind of unpredictability only appears random forcibly assigns the word random a very strange meaning: a non-existent thing that could only appear in a quantum universe without many worlds, a universe that doesn’t exist in real life but happens to be identical in every way to how individual universes work from the point of view of a single observer. And the reason why I raise this is I have had numerous crit-rats tell me that there’s no such thing as probability because of many worlds, and that is such a basic misunderstanding of what the term probability normally means. Quantum physics is fundamentally probabilistic from the point of view of an observer. The fact that it’s not probabilistic at the level of the multiverse is also true, but observers don’t exist at the level of the multiverse, so it doesn’t matter. There’s no randomness at the level of the multiverse, but quantum physics means there is randomness at the level of an observer in a universe, period, end of story. This makes probability fundamental to reality according to quantum mechanics. So let me just back up and make my own suggested use of terms, which I think will be more clear across the board and will avoid a lot of this confusion. Let’s define intractable as any process or computation that has no shortcut computation. I think that’s a good way of understanding intractable. Let’s define random as any process that is nondeterministic and thus stochastic, at least from the point of view of an observer in a universe.

[01:27:49]  Red: This is presumably what the term originally meant anyhow, and I just don’t see the need to forcibly assign it a non-existent meaning; that would strike me as the essentialist mistake anyhow. Pseudo-random is a fully deterministic process that is unpredictable due to its intractability but outputs a spread similar to what a true random process would look like. Note that not all unpredictable processes are pseudo-random, but all pseudo-random processes are unpredictable. And finally, unpredictable we’re going to define as any process that is either random or intractable and thus cannot be predicted; it may or may not be pseudo-random. Okay, with that out of the way, which I think is a way more clear way of wording things that just gets past a lot of the misunderstandings I keep hearing, let’s get back to the regularly scheduled program. So Rucker argues that the PCE and PCU are just conjectures and not mathematical proofs. He says that on page 387. He states multiple times that he does not agree with the PCE, the principle of computational equivalence. Instead he suggests a weakened form of it that he calls the natural unsolvability hypothesis, or the NUH, which can perhaps be mathematically proven. Without going into a full proof, let me give you a high-level sketch of what Rucker has in mind. So recall that a human mind, being a computation, can be thought of as an algorithm. Okay, that’s why I spent a lot of time making sure it was clear that Deutsch did not claim that the human mind wasn’t an algorithm. Example: suppose Peter is building a model plane.
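The vocabulary proposed above (intractable, random, pseudo-random, unpredictable) forms a small taxonomy, and one way to keep the containment relations straight is to encode them directly. All the names here are illustrative, not anything from Rucker or Wolfram:

```python
from dataclasses import dataclass

@dataclass
class Process:
    deterministic: bool
    has_shortcut: bool     # some computation predicts it faster than running it
    random_looking: bool   # output spread resembles a true random source

    @property
    def intractable(self):     # no shortcut computation exists
        return not self.has_shortcut

    @property
    def random(self):          # nondeterministic / stochastic for an observer
        return not self.deterministic

    @property
    def pseudo_random(self):   # deterministic, intractable, and random-looking
        return self.deterministic and self.intractable and self.random_looking

    @property
    def unpredictable(self):   # either random or intractable
        return self.random or self.intractable

prng     = Process(deterministic=True,  has_shortcut=False, random_looking=True)
class4   = Process(deterministic=True,  has_shortcut=False, random_looking=False)
die_roll = Process(deterministic=False, has_shortcut=False, random_looking=True)
```

Under these definitions, `prng` and `class4` are both unpredictable but only `prng` is pseudo-random, matching the claim that all pseudo-random processes are unpredictable while the converse fails.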

[01:29:33]  Red: This act of Peter building a model plane is an algorithm, because it has an identifiable input and a checkable final state. Now, you may not be used to thinking of something like this as an algorithm, but this is what we mean by algorithm, normally speaking. Now, of course, Peter himself is not an algorithm in the sense that what we call Peter is just some computation that halts and then it’s done, but neither is nearly any computation you work with, including a pocket calculator. So it’s more a matter of how we talk about programs. It’s convenient to slice them up into starting and ending states, and we conveniently call this an algorithm. One use of your pocket calculator is an algorithm, and Peter building a model plane is also an algorithm. What does any of this have to do with predictability? Let’s say I have a little device that could predict with 100% accuracy whatever Peter was going to do. So if I ask this device if Peter will someday be the CEO of Google, it could tell me with 100% accuracy whether he would or would not be the CEO of Google. But Peter is a universal computation, so this device must be able to solve the halting problem. That’s impossible; thus this little prediction device must not exist. Rucker argues that this is really what Wolfram, whether he realizes it or not, is trying to argue: that for a universal computation to be predictable, it would have to violate computational theory. But wait, this is not a very good mathematical proof. For one thing, it assumes nearly all computations in nature are universal, which we of course know not all of them are.
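The impossibility argument above is the classic diagonalization behind the halting problem. Here is a toy model of it, not a real program analyzer, with all names purely illustrative: the "spoiler" program consults the would-be predictor's verdict about itself and then does the opposite, so no predictor can be right about it.

```python
def spoiler_halts(oracle_prediction: bool) -> bool:
    """Model of the diagonal construction: a program that reads the claimed
    prediction about its own behavior and then does the opposite.
    Returns what the spoiler actually does (True = halts)."""
    if oracle_prediction:   # oracle says "halts", so the spoiler loops forever
        return False
    return True             # oracle says "loops", so the spoiler halts at once

# Whatever the oracle answers, it is wrong about the spoiler:
for prediction in (True, False):
    assert spoiler_halts(prediction) != prediction
print("no oracle can be right about the spoiler")
```

The same shape carries over to the 100%-accurate Peter-predictor: if such a device existed, Peter (being a universal computation) could run it on himself and act against its forecast.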

[01:31:33]  Red: Now, this is one of Wolfram’s assumptions, but why should you buy it? Rucker shows that even non-universal computations commonly have undecidable questions about them. Sure, if the computation is really simple, say it always terminates with an answer of one, then you can solve the halting problem for that particular computation: it obviously always halts with the answer of one. But the moment a computation starts to get complicated, it either automatically becomes universal, and thus the argument above applies that predicting the outcome would violate the halting problem, or it does not become universal, but it starts to have its own lesser version of the halting problem to deal with. So this is the main thing Rucker is trying to explain: even a non-universal computation, unless it’s something trivial like always terminating with one, has its own undecidability problem. So even if you start with the assumption that not every computation is universal, because not all of them are, still the vast majority of them have a decidability problem, and thus a halting problem of their own. So this lines up nicely with Wolfram’s assumption that there are four classes of computations, doesn’t it? With class one and class two being predictable, and class three and class four being unpredictable, even if you can’t prove them universal, or even if they are not universal.

[01:33:34]  Red: Now, a corollary to Rucker’s NUH, from page 423, is that most naturally occurring complex computations are runtime unbounded relative to some target detector algorithm. What does that mean? An example: if you were to ask the question, am I doing the right thing to write a bestseller, the answer, computationally speaking, is that you must wait and see if your book is a bestseller. Because if you could answer the question definitively, then you would have a target detector that would be equivalent to solving the halting problem, which we know is impossible. Okay.

[01:35:13]  Red: Now, let me see if I can summarize everything that we’ve set up to this point. Using Wolfram’s theories: everything in nature is a terribly large, sophisticated, complicated computation, and everything in nature is connected to everything else. Take the wind on the leaves of a tree. It’s tempting to say it’s a smaller computation than the rest of the forest, but the computation of the specific wind on that specific tree would require you to compute the rest of the forest, to be able to figure out what specific air currents are coming to that particular tree. That’s why the tree with its leaves in the wind is equivalent in sophistication to the entire forest with its leaves in the wind: because you have to take basically almost everything into consideration to compute a specific tree with specific leaves. Once you realize that’s the case, and once you realize that almost all computations are universal, and even the ones that aren’t still have a decidability problem, you can start to understand why it is that

[01:36:31]  Red: entirely deterministic computations are by nature almost always unpredictable, by definition. Okay. And why, when people start getting confused, start getting upset over, oh no, the human mind is deterministic, that destroys free will because then we’re completely predictable: the whole thing is just a misunderstanding, right? Every ounce of it is a misunderstanding of even the very concept of predictable. What they really mean is that once you run the computation, you know the outcome, and if you were to run the same computation again, you would then know the same outcome. Okay. But that never happens in nature. It’s not even possible for it to happen in nature. Okay. So

[01:37:20]  Blue: reality is algorithmic, but what I’m kind of interpreting you as saying is, because everything is interrelated, something like the halting problem applies from that perspective? Like, reality is just one big halting problem or something?

[01:37:44]  Red: Yes, it seems to come down to decidability. The idea that you could predict the outcome of reality would be equivalent to saying you’ve solved the halting problem. Therefore we know it’s impossible to predict the outcome.

[01:37:54]  Blue: yeah that’s a really compelling way to think of it wow

[01:37:57]  Red: Okay, now let’s back up a little bit, and let’s talk about the two Bobs problem. Except that it’s not a problem, of course; it’s what I called the Turing world within the Turing world thought experiment that I did several episodes ago.

[01:38:11]  Blue: oh yeah that was good

[01:38:12]  Red: Okay. Does this go against what we’re talking about? It doesn’t. Okay, I know it feels like it does, right? So, just to repeat the thought experiment: imagine you have Bob, an AGI, living inside of a virtual world that’s running on a computer. And then you have another computer running the exact same algorithm, the same Bob, except it’s running twice as fast as the other computer. So you watch what Bob’s going to do, and maybe Bob’s in a whole society of AGIs, right? You watch what Bob’s going to do, you look at even the knowledge he’s going to create, and then you say: oh, I know Bob’s about to discover quantum physics over on computer B, and computer A is slower and it’s the same program, so I can predict that Bob is about to discover quantum physics on computer A, because he’s running at half the speed, and it’s going to happen in October that he invents quantum physics. And this just drives people nuts, right? They’re like, oh my gosh, that shows that deterministic algorithms are predictable, and it shows that everything’s wrong and there’s no free will, so this must be wrong. And people will argue with me over it and say, no, you’ve misunderstood. I had two crit-rats who were vehemently arguing that, no, actually the slower computer will end up doing something different. It’s like, you realize that violates computational theory, right? They actually tried to invoke Deutsch’s concept of fungibility in quantum physics to try to prove that something was wrong with this thought experiment, and they tried to invoke the idea that the computer would eventually make a mistake. It was a really almost humorous argument, because they just did not want to admit that you can, in this case, predict what’s going to happen. Here’s the thing, right? The reason why you can make this prediction is because you are now an observer outside these two realities, which is not the situation of an observer within a universe. Part of the reason why this whole thought experiment works is because one of the starting assumptions is that you’re not interacting with these universes. Okay, so let’s say you were interacting with these universes. So you talk with both these Bobs, the one that’s running at half the speed and the one that’s running at full speed.

[01:40:56]  Red: So maybe you tell Bob A: you know what, Bob B just invented quantum physics, so just keep going, you’re going to invent quantum physics. That’s just not the way it works, okay? They have different inputs now; they’re not going to be predictable anymore. Now that you’re interacting with that universe in some way, that whole computation needs to be taken into consideration across them, including our real world. There’s not even a reason to believe Bob A will invent quantum physics anymore, because he’s going to live his life differently, because the inputs are different. Okay, so they’re back to being unpredictable. The only sense in which they are predictable is if you keep them completely isolated and you are an outside observer. And it’s not even hard to see why this must be the case: the moment I start to interact with the two Bobs, my computation is now part of the overall computation, because whatever is going on in my world, which you would have to work out by computing the whole real world, now drives some of the inputs when I have conversations with the two Bobs. And those conversations aren’t going to be identical, right? Now everything in nature in the real world is impacting my mood when I’m talking to the two different Bobs, and once I know something about one Bob and talk to the other one, he’s a different person; he diverges over time, because the whole computation now has to be taken into consideration, including the entire computation of the real world. Okay. So yes, it’s predictable in the sense that you can predict it the second time, but isn’t that kind of the same as saying it’s not predictable? And this is where I think people just get really confused on this, right? Because the second Bob isn’t predictable in the traditional sense. You’re really just running the same algorithm twice, and yeah, of course you know the outcome once you’ve run it. Of course you do. So now, with that whole thing in mind and that whole additional explanation, let’s talk through how this applies to Wolfram’s theories. Really, the question we’re asking is: what if you just use a faster computer? So recall that any specific computation in the real world is, according to Wolfram, at maximum speed, or so he’s conjecturing. If that conjecture is true, then we can conclude that computations on a computer are not at maximal speed, for the simple reason that PCs keep getting faster. So Rucker introduces the idea of what he calls strong unpredictability. I do not like that term; it probably should have been called strong intractability or something like that. Okay, but let’s go with it. So on page 428 he defines it as: P is strongly unpredictable if and only if there is no Q that can emulate P and is faster than P. This is the two Bobs problem.
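The two Bobs point, that identical isolated deterministic programs are "predictable" only in the sense that rerunning them repeats the result, while any outside contact destroys the match, can be sketched with a toy deterministic world. All names and numbers here are illustrative:

```python
import random

def bob_world(seed, steps, interaction=None):
    """A toy deterministic 'Bob': the same seed always yields the same history.
    `interaction` models an outside observer injecting information mid-run."""
    rng = random.Random(seed)   # seeded, so the whole run is fully deterministic
    state = 0
    for t in range(steps):
        state = (31 * state + rng.randrange(100)) % 10_000
        if interaction is not None and t == steps // 2:
            state = (state + interaction) % 10_000  # one tiny outside nudge
    return state

fast_bob = bob_world(seed=7, steps=200)   # "computer B", run to completion first
slow_bob = bob_world(seed=7, steps=200)   # "computer A", same program, same result
assert fast_bob == slow_bob               # isolated copies match exactly

nudged_bob = bob_world(seed=7, steps=200, interaction=1)
print(fast_bob, nudged_bob)               # one bit of contact, a different life
```

Running the fast copy tells you the slow copy's future only while both stay sealed; the single-unit nudge at the midpoint permanently diverges the trajectory, which is the "different inputs, no longer predictable" point.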

[01:44:00]  Red: I shouldn’t call it a problem; the two Bobs thought experiment. None of the computations on your PC are strongly unpredictable, in that you can wait 18 months and then do the same computation faster. Interesting side note: we tend to think of Moore’s law, doubling every 18 months, as an exponential speed-up, and if the algorithm in question is polynomial, that is the case. But most algorithms are intractable due to being exponential on the inputs, so Moore’s law actually only gives such algorithms a linear speed-up, not an exponential one, because the problem is itself exponential. Example: because a chess min-max search algorithm is itself exponential, you need quite a few exponential speed-ups to go from, say, four plies, or moves ahead, to five. So Moore’s law doesn’t make every algorithm exponentially faster; that just isn’t the case. Okay. In fact, the vast majority of algorithms are presumably NP; P is part of NP, but it’s a small part of NP. If you don’t know what I’m talking about, go back and listen to the computational theory episodes of the podcast. The net result of this is that a lot of things we imagine in Hollywood, where computers just get faster and faster because of Moore’s law, and therefore we get this giant superintelligence that’s 10,000 times smarter than us, or we live inside a matrix that’s richer than the real world, or something like that: all of that is based on a misunderstanding of what happens with Moore’s law.

[01:45:48]  Blue: okay wait this i really want to understand and i’m not sure that i get what you’re saying so you’re saying that the algorithms don’t go any faster even though the computers are becoming okay let’s use the

[01:46:06]  Red: chess example okay so you’re ready you’re gonna write a chess program peter okay and you’re going to make it look as many moves ahead as it can so it tries one move then it tries doing the move of its opponent yeah then it tries doing the move that it’s going to make against its opponent well that’s not going to work it’s going to have to try every combination yeah right so it tries

[01:46:32]  Blue: more and more right more chess moves on the board than there are atoms in the known universe kind of a thing

[01:46:39]  Red: right so when you do an exponential speed up this is by the way i should have looked this up but there’s a famous book the algorithms book that they use in you know undergrad and grad programs it was one of my textbooks for my computer science master’s degree and they have a whole page with an aside where they talk about this where they ask does Moore’s law compensate for algorithms being slow and they said no it doesn’t and they pointed out that almost all algorithms were exponential on the inputs

[01:47:16]  Blue: okay

[01:47:17]  Red: okay so while the computer may be exponentially faster with Moore’s law doubling every 18 months or whatever because the algorithms are themselves exponential on the inputs you only get a linear speed up right they’re both exponential so they offset each other right so getting back to the chess program example let’s say your computer’s fast enough that it can look ahead three moves okay it can try every combination three moves out which isn’t that much right so let’s say you’re thinking Hollywood style you know computers in 10 years are going to be exponentially faster so i predict that means i’m going to be able to look ahead a thousand moves because i’m going to have you know a computer a thousand times faster right in 10 years okay that’s a total misunderstanding after it’s a thousand times faster you will be able to go four moves ahead because you need a thousand times faster computer to go from three to four i see i see okay that’s interesting so
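the chess arithmetic works the same way: if a full-width search must try roughly b moves per ply then searching d plies costs about b^d operations, so any constant-factor hardware speedup buys only a couple of extra plies. a minimal sketch, with an assumed branching factor of 30 (the exact figure is illustrative, not from the episode):

```python
BRANCHING = 30  # assumed average legal moves per chess position (illustrative)

def plies_reachable(ops_budget):
    # deepest full-width search depth d whose cost BRANCHING**d fits the budget
    depth, cost = 0, 1
    while cost * BRANCHING <= ops_budget:
        cost *= BRANCHING
        depth += 1
    return depth

today = BRANCHING ** 3               # a machine that can just search 3 plies
print(plies_reachable(today))        # → 3
print(plies_reachable(today * 1000)) # → 5: a 1000x speedup buys ~2 extra plies
```

the exact gain depends on the branching factor you assume, but the shape of the result doesn't change: no constant speedup ever turns three plies into a thousand.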

[01:48:31]  Red: is nature usually maximally efficient in terms of speed like Wolfram believes okay surely there are some computations in nature that we can emulate faster than nature itself so it can’t be a universal truth but now we need to get a bit more serious about whether we’re talking about Wolfram’s version of emulation which is a specific phenomenon or Deutsch’s which is not i don’t doubt we can emulate a waterfall to a level of accuracy that at some point i can’t tell the difference between a real waterfall and a fake one that would be consistent with the Church-Turing-Deutsch thesis which claims that we can simulate anything i do doubt that it would be possible to emulate a specific real-life waterfall exactly down to the atom on a computer so if true that really is equivalent to the claim that the waterfall is at maximal speed and thus strongly unpredictable no amount of Moore’s law can ever emulate that waterfall faster than the waterfall itself now Rucker says my sense is that the most complex physical processes are strongly unpredictable in the sense that they represent computations that can’t be run any faster at all this is from page 430 this raises a difficult and to some troubling question the question of whether artificial superintelligences are physically possible now you and i have talked about this in past podcasts you raised it and i made the statement i don’t actually know like if we knew what the software was that a human brain was running and we ran it on a very very very fast computer today would it be fast enough to run the software that the brain is running there’s an assumption that not only would it be fast enough but it would be thousands of times faster maybe hundreds of thousands of times faster i don’t think we have any reason to believe that’s true i don’t know that it’s false right i just think that those sorts of Hollywood guesses are entirely based on weird assumptions that 
don’t actually make any sense and we of course don’t know how fast the individual neuron is but the brain is massively parallel and we don’t even understand what it’s doing right or how it interacts physically like if you were to try to emulate everything in the brain that’s going on it would be a massive computation you know down to the microtubules to bring Roger Penrose in here it would be completely intractable for any computer to do so there’s kind of an assumption that we know enough about how the brain works that we can assume it’s a simple enough computation and that the rest of what’s going on is not necessary right that it’s a level of emergence so we can ignore most of the physical processes i just don’t know that we even know that right like we know so little about how the brain works this is why Roger Penrose is trying to argue that actually part of our computation is done by quantum effects in the microtubules he can get away with that i doubt that’s true but he can get away with it precisely because we have such a huge level of ignorance at this point okay so i don’t know what the answer is but let’s explore this a little so the idea that nature is maximally fast would seem to suggest that artificial superintelligences are impossible but it isn’t clear because the only absolute bar is against making a computation exactly equivalent to you one that emulates you with 100% accuracy faster than you since you are already at maximal speed so let’s ask the question a different way would an artificial general intelligence automatically be an artificial superintelligence it’s tempting here to cite universal explainership and claim there’s no such thing as an artificial superintelligence perhaps this is true but a valid response might be well if it’s a hundred thousand times faster that is what we mean by artificial superintelligence okay fine but why does there seem to be an almost 
universal assumption that our current hardware is a hundred thousand times faster than the human brain do people just automatically assume that because we can program a computer to do a calculation much much faster than a human can consciously do it okay but the brain’s processing power isn’t determined by its conscious processing it’s determined by its unconscious processing so let’s say you go play the game of catch with your boys peter go try to program a robot to play catch there’s a very complicated calculation going on that is hard to get a robot to do fast enough in real time but humans unconsciously do it easily all the time don’t even get me started on dogs doing it they’re amazingly good at it or is it because we know how fast a cpu is compared to a single neuron okay but that ignores the massively parallel processing the brain does okay so now Rucker page 254 the fact that our modern computer hardware is essentially serial tends to discourage us from thinking deeply enough about the truly parallel algorithms being used by living organisms not just in the brain but all throughout the organism and using search methods to design the parallel algorithm takes prohibitively long so until we know how to program an agi we sincerely have no idea none if the first agi will be a hundred thousand times faster than us or one one-hundredth the speed of us okay like there’s just no basis for deciding it’s one or the other okay let me just use this interesting quote from Rucker so in one of his science fiction stories he imagined an experiment where they could speed-grow a fetus he admits that this is probably physically impossible and it was just a conceit needed for the story so on page 433 he says if you try to speed-grow a fetus you’d likely end up with something that looked more like a stork or a cabbage than a baby
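this "can't be run faster than itself" idea is what Wolfram calls computational irreducibility, and his favorite example is the elementary cellular automaton Rule 110, which he showed to be computationally universal. as far as anyone knows, the only general way to learn its state at step n is to actually run all n steps. a minimal sketch (the ring size and step count are arbitrary choices of mine):

```python
RULE = 110  # Wolfram's rule number: bit k of 110 gives the new cell value
            # for the 3-cell neighborhood whose binary reading is k

def step(cells):
    # update every cell from its left neighbor, itself, and right neighbor,
    # with the row treated as a ring (negative indexing wraps on the left)
    n = len(cells)
    return [
        (RULE >> (cells[i - 1] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

def run(cells, steps):
    # no known shortcut: to see step n you iterate n times
    for _ in range(steps):
        cells = step(cells)
    return cells

row = [0] * 63 + [1]  # one live cell on a 64-cell ring
for _ in range(8):
    print("".join(".#"[c] for c in row))
    row = step(row)
```

the triangular thicket of structure this prints from one live cell is the visual version of the claim: simple rule, no apparent way to jump ahead.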

[01:55:22]  Red: so this idea that reality is at maximal speed is kind of an interesting counterpoint to the very concept of an artificial superintelligence are you getting what i’m saying here right like i can’t say for sure that there couldn’t be such a thing as an artificial superintelligence that would require me to make some assumptions that don’t quite follow from everything we’re talking about but it does seem like

[01:55:55]  Red: it forces you to really stop and think about this i’m wrapping up here but let me actually read a quote this is from off of twitter Deutsch Explains so they try to quote Deutsch and sometimes they quote him wrong so i don’t know if this one’s an exactly correct quote but it’s a long quote so i think it’s probably correct 50 years ago some far-sighted people realized that the new technology of the jetliner was soon going to make it normal for ordinary people to travel abroad for jobs and education and holidays and they were right the revolution happened but they also thought that by today all that jet travel would be supersonic and that has not happened people at some point decided that supersonic travel is morally unacceptable and also the moon colony the mars expedition nuclear power by even the late 1970s people no longer wanted what people had wanted in the early 60s and because of that unpredictable change their prediction of what our lives were going to be like was false that’s part of a wider impediment to prediction namely unforeseen problems the fads and fallacies and blunders that are going to seem like a good idea at the time in the next 50 years which by definition we can’t foretell now so predicting our future is nothing like predicting the strength of a bridge every significant innovation has unpredictable effects and they have knock-on effects and after a few steps of that the consequences and their consequences come to be the major component of what is happening and as knowledge grows faster the time for that to happen becomes shorter and shorter the growth of knowledge is the only impediment to our ability to predict the future but in this respect it’s decisive it will impose an ever closer planning horizon beyond which we are blind to the most important determinants of what is going to happen so we face a paradox the more we create knowledge the less we know about our future you 
know i agree with everything in that statement except one sentence and even that one maybe i could charitably read as something i could agree with the statement i don’t agree with is the growth of knowledge is the only impediment to our ability to predict the future i do think the growth of knowledge is an example of what Wolfram is talking about but i think it is not the only example of what Wolfram is talking about in fact almost everything in nature qualifies even if it’s not creating knowledge

[01:58:22]  Red: and because of that things just are unpredictable they kind of just are right um and i suppose this is part of the reason why i just feel no fear over artificial superintelligences and when i say i feel no fear i don’t mean there is no danger of course there’s a danger

[01:58:46]  Blue: yeah but

[01:58:48]  Red: but this existential threat that some people feel the ai doomers it doesn’t even make sense to me i mean doom is always around the corner that’s surely yes for sure right and i guess it could be by artificial superintelligence or it could be by artificial intelligences that are a different race that are no smarter than us and they just happen to beat us in a war and kill us all like never mind if they’re superintelligences or not there’s all sorts of dangers it could be that they’re the only ones defending us from some crazy group of humans that want to wipe us out you know who knows right like i’m no more going to get worked up over the dangers of artificial superintelligence as real as that might be than i’m going to get worked up over the danger of nazis as real as that is yeah right like nazis really were a threat right

[01:59:45]  Blue: yeah no and

[01:59:46]  Red: they are kind of a super it’s hard not you know

[01:59:49]  Blue: when i read Nick Bostrom’s book superintelligence where he goes through all the different ways that ai or agi could destroy humanity it’s hard not to get a little freaked out by that but you know thinking it through more from the perspective you’ve presented here it seems a lot less realistic so thank you i’m even more optimistic than i was before i guess i

[02:00:17]  Red: think there are different ways to approach the ai doomer thing this is one of them which is look you’re making all sorts of assumptions that are just totally made up you know getting back to the idea of rationality as severe testing which is how i interpret Popper right um really you’re being irrational it’s not that you’re necessarily wrong it could be that the nazis are going to kill us too i’m just not going to spend any time worrying about that until it’s an actual threat because i don’t think it will be i don’t think we’ll ever see the emergence of nazis again and yet they were a threat though like they did emerge they were a threat right um a lot of the arguments that you hear don’t make as much sense to me like oh it would be racist you know maybe but our genes coerce us and sometimes we don’t mind it like the fact that it makes sex pleasurable and so we tend to think about it a lot and make sure romance is a big part of our lives and things like that you know i mean that is the genes coercing us does that make the genes racist against us no because we kind of like it you know i would argue that even moral feelings are a form of genetic coercion this one’s going to be controversial um and i’m talking moral feelings not morality here okay just to make a distinction um they are something i think most of us are glad we have right even though from a certain point of view they’re really the genes’ way of trying to manipulate us and so i don’t know like maybe we will have agi safety programs that are not racist there may be a good form of it like since we don’t know what agis are it’s just really hard to even formulate the questions at this point so for most of the arguments i think the answer is look it’s just not a local problem guys like it might be a problem in the future but it’s just not a local problem now anything you come up with to try to solve the problem 
you know so little that you just don’t even know what it is you need to address what about

[02:02:27]  Blue: something like the paperclip factory you know about that Bostrom’s thought experiment there where the computer wants to make a paperclip factory and then realizes that the most efficient way to do this is to destroy all of humanity and turn the world into a paperclip factory i mean isn’t something like that a valid concern

[02:02:48]  Red: so okay no of course not if the paperclip factory were smart enough to be able to overcome all of us so the assumption here is that it’s not just a mechanical device because if it’s a mechanical device then it’s no threat at all you just go turn it off right so we’re trying to imagine a paperclip factory that also happens to be an artificial superintelligence and in its desire to create paperclips it sees the humans trying to shut it off as an impediment to its reward program and so it decides to create a giant army of robots because this is part of its goal to create paperclips and it decides to wipe out the human race first or something along those lines okay to imagine this you’re trying to imagine this kind of savant right yeah well it

[02:03:46]  Blue: assumes that you could be a superintelligence and still be a psychopath right which maybe

[02:03:55]  Red: you could be a superintelligence and a psychopath but you’d have to be a superintelligence and also not capable of understanding how to say you know what maybe my goal should be a different goal now yeah I don’t even know how you would do that to a general intelligence

[02:04:13]  Blue: or why not why not go into space and turn Jupiter into a paperclip factory

[02:04:19]  Red: it’s starting with weird assumptions that just don’t make sense right it’s like you try to get explicit with it and you immediately start going but wait it’s a superintelligence it can understand why the humans want to shut it off it can understand you know why this is probably a bad idea to go to war with the humans you know I’d be more fearful of something like the gray goo I don’t know if you’ve heard of that I think that was from Bostrom

[02:04:48]  Blue: yeah or maybe I heard it I don’t remember

[02:04:49]  Red: we make a bunch of really stupid little nanobots and they’re misprogrammed in some way so they take every atom and they just break the atoms down and you end up with just kind of dust right just this gray goo and so they start to do that to the entire world they’re mindless they’re not artificial superintelligences they’re just little automatons but they exist at the atomic level so you can’t see them and they just basically turn the whole world into this gray dust and everybody’s dead you know like that scares me more I wouldn’t say I’m exactly scared of that because we’re nowhere even close to having to worry about something like that and by the time we are we’ll have the knowledge of how to deal with something like that but that’s scarier to me than an artificial superintelligence where I can actually go in and talk to the thing you know and say you know what this is kind of a bad idea you know reason with it right so um because of that it’s not that there isn’t potential danger with AGI of course there is potential danger and maybe there will even someday be something like a superintelligence in the limited sense that they can think faster than a human being it’s not completely unthinkable that that could happen right but I have severe doubts they could be a hundred times faster than us right or a million times faster than us I suspect that the human brain is so close to efficient in terms of its computation that we’ll be lucky to get something that’s like 10 times where

[02:06:30]  Blue: does memory come into this I mean it’s not hard to imagine that they would have a million times more memory or something right

[02:06:38]  Red: no no it isn’t you’re right about that so you would have to specify what we mean by memory and this is something um we had an episode about this the one where we did the argue me anything episode uh to a universal explainer the concept of memory is unclear right so for a Turing machine there’s kind of this clear concept of memory the tape but there’s no reason to limit the tape because if you say well it’s you know 100,000 cells long then yes there are certain algorithms you can’t run but you can always just lengthen the tape and I’ve got a phone

[02:07:12]  Blue: right now that basically takes my memory into you know practically infinite

[02:07:17]  Red: yeah so to a universal explainer the concept of memory presumably when you ask the question you were conceptually thinking memory is remembering something in the brain right but that’s not really your memory your memory is not only held in your brain like as a universal explainer your phone is a part of your memory stuff you write down is a part of your memory right the textbooks you buy are a part of your memory and so the concept of memory for a universal explainer doesn’t equate easily to the concept of memory for a Turing machine um and I think you know the Deutsch answer here is probably pretty good that in some sense you’re already augmenting your memory as a human and there’s no particular reason why we couldn’t insert something into your brain and give you the ability to you know Elon Musk style be connected to the cloud to store stuff I don’t know that a so-called superintelligence would be any different than that right like if you try to imagine it with just straight attached memory it’s going to have to access it just like we do if you’re trying to imagine it more like the memory of the brain where it’s kind of contained within the neurons then you have to keep imagining a larger and larger brain to hold this that still has the connectivity of neurons and that’s going to slow the whole process down right so it’s actually going to

[02:08:44]  Red: start being slower so at some point you almost have to say look I’m going to let the universal explainership be held within something that’s reasonably sized that runs at a decent speed and then I’m going to have to just attach memory and now it’s almost exactly like what a human’s doing when they augment their memory with their phone right so it doesn’t seem to me that we really have to worry about the possibility of an artificial superintelligence but I will admit that that’s a little bit subjective on my part there could be such a thing as an artificial superintelligence and we just don’t know like we really just don’t know it seems like the impediments are much larger than people realize but maybe they could be addressed like maybe we could make something a hundred thousand times faster than us yeah I can’t say we can’t but the impediments are way higher than people think okay so I wasn’t planning to do this but let’s go ahead and talk about this AI doomer thing all the way okay what’s the name of the guy who’s the big AI doomer that really popularized it Eliezer Yudkowsky what’s the actual

[02:10:03]  Blue: oh oh this guy yeah Eliezer Yudkowsky not the easiest name to pronounce Eliezer he’s got a very long book that’s right okay

[02:10:16]  Red: yeah okay so and he’s also one of the most important people in terms of popularizing it yeah yeah okay so um you know crit rats hate him when you’re taking his point of view right if you’re trying to put yourself into his mindset the things he’s imagining if you were to make them explicit they sound as silly as they actually are right you’re imagining that there’s no theory of intelligence first so that when you create your first AGI you’re creating an AGI that you almost happened upon by accident by working with AI and suddenly it’s superintelligent you didn’t even realize that you just crossed the boundary into AGI-ness right and you have to imagine that it just so happens that the algorithm can be run a hundred thousand times faster on a modern computer this person’s probably not working on a supercomputer so on a modern laptop it’s a hundred thousand times faster than its creator right so it’s a superintelligence and you don’t know it it doesn’t go through childhood right it doesn’t spend 30 years going to school to learn stuff before it’s competitive with a human you know and can even take care of itself it just sort of happens that all the knowledge is right there the moment it’s born and it’s already smarter than you in terms of you know knowledge and it just so happens that it’s psychopathic and it has no morals like the level of things he’s assuming if you were to call them out explicitly you would immediately go wait that doesn’t make sense right there’s tons of them going in there where he’s just making all sorts of assumptions that make you go wouldn’t it make more sense that we would have to have a theory of intelligence first before we can make an AGI as Deutsch has in my opinion correctly argued it’s not that there is zero chance we might stumble upon it by accident in a certain sense evolution did right I don’t know if you can really call it 
by accident because evolution is in some ways very purposeful

[02:12:34]  Red: but again controversial statement um the fact is though that

[02:12:44]  Red: we probably aren’t going to it just isn’t the way these things work you know even in machine learning where we often stumble across things we don’t understand the odds that we would just happen to stumble upon AGI-ness with zero understanding of what we’re doing in advance right I don’t even understand where that thought’s coming from so I could almost believe it if you were trying to emulate the brain neuron by neuron like the blue brain project I think that’s what it’s called right but even then is it going to be slower than us it’s really unclear why they’re so scared right because the assumptions they’re adding up that they’re kind of tacitly assuming all of them seem really questionable so it seems way more likely that we’ll evolve with them right that once we have a theory we’ll say look let’s make our first AGI and then it’ll be like children and we’ll have to talk with them and we’ll understand how they interact and they won’t be super clever right out of the gate because that’s something you have to learn over time they’ll probably be slow painfully slow compared to us initially it may take them like 50 or 100 years to get to where we are by age 20 you know or something like that right there are so many assumptions going in here that could just as easily go the other way and in fact there are obstacles that would almost suggest they should go the other way unless you can explain to me how we overcome the fact that the brain is doing this massive parallel computation and our computers are serial for example right someone’s going to have to probably build a massively parallel

[02:14:36]  Red: architecture to be able to get the artificial brain to work as fast as a real brain would be my guess right um and we’ll do it once we have a theory of how to do it someone will fund that and they’ll build some sort of massively parallel specialized computer to be able to do it but it’s not gonna escape onto the internet and run on a laptop somewhere you know that’s Hollywood style stuff what I’m trying to say here about the doomerism is it’s not that there isn’t potential doom that’s not what I’m trying to say it’s that the reason why they’re hyper focusing on this one thing is because they have really really really questionable assumptions basically yeah right

[02:15:20]  Blue: well we should probably wrap this up but I find your take on this completely compelling this was a great episode I really learned a lot from listening to you today all

[02:15:34]  Red: right

[02:15:34]  Blue: thank you hello again if you’ve made it this far please consider giving us a nice rating on whatever platform you use or even making a financial contribution through the link provided in the show notes as you probably know we are a podcast loosely tied together by the Popper-Deutsch theory of knowledge we believe David Deutsch’s four strands tie everything together so we discuss science knowledge computation politics art and especially the search for artificial general intelligence also please consider connecting with Bruce on X at @bnielson01 and please consider joining the Facebook group The Many Worlds of David Deutsch where Bruce and I first started connecting thank you

