Episode 73: Argue Me Everything
- Links to this episode: Spotify / Apple Podcasts
- This transcript was generated with AI using PodcastTranscriptor.
- Unofficial AI-generated transcripts. These may contain mistakes. Please check against the actual podcast.
- Speakers are denoted as color names.
Transcript
[00:00:07] Blue: Welcome to the Theory of Anything podcast. Hey, Peter. Hello, Bruce. All right. Well, today we’re going to do something interesting. We’re going to do what I call an “argue me anything” podcast. We’ve thought about doing an ask me anything podcast like other podcasts do, but I’ve been concerned that maybe our listenership is too small to justify doing such a podcast. If I’m wrong — Peter’s been telling me, hey, maybe we can just ask people for questions on Peter’s Facebook page. Maybe we could even make it so it’s not Peter and I trying to answer, but we’ll take questions and then we’ll find people who can answer them, or something like that. Maybe that’d be fun to do sometime. I mean, we’re both laymen. We’re not trying to pretend we’re not. I would consider myself a highly knowledgeable layman, but I definitely feel like I’m just a layman. But it might be fun. I used to have people — I guess I still do have people — who will email me or message me and ask me questions, and I’ll do my best to answer them. So I think there’s some potential there. And I definitely think that, at least on this podcast, we’ve expressed a very different kind of take on Four Strands-ism than what you’re probably going to get from the average Deutschian on Twitter, let’s say. And we’ve definitely been a little contrarian at times. So that’s good. That’s our niche. And so maybe that would be worth doing, an ask me anything, if we ever got enough listeners or something. But what we’re going to do today is arguments I’ve seen made online, not necessarily aimed at me.
[00:01:52] Blue: And I thought they were interesting questions. I felt like the person was saying something thoughtful, and it was kind of rattling around in my brain. And I thought, you know what? It’d be really interesting to make a podcast episode where I lay out how I see each of these questions and what I think the answers are, and just kind of put them out there. And I’ve got three of them today. Well, let’s start with what I would call BitButter and the universal explainer. So BitButter is someone on Twitter. He’s kind of a four-strander. He’s like me: he’s kind of contrarian. I often find myself agreeing with him, but I don’t always, and he often disagrees with me. He put up a tweet from David Deutsch, responding to Paul Graham on Twitter, and then he disagreed with David Deutsch and raised issues based on this tweet. So let me read the tweet from Paul Graham first. He says: men commit 95 percent of homicides. I don’t consider it a slander if someone says men are inherently more violent. It seems obviously true.
[00:03:11] Red: And
[00:03:11] Blue: David Deutsch responds to Paul Graham: how is it inherently obvious? Humans aren’t inherently anything except universal. So BitButter, also known as @mormo_music, says: the claim about human universality depends on the imagined preconditions of unlimited time and memory. Because all humans so far are constrained by limited time and memory, none are universal explainers. No actual universal Turing machine can exist. A universal explainer can’t exist in reality for the same reason a universal Turing machine can’t. So I want to ask the question: is BitButter correct? Now, as it turns out, this isn’t really a yes or no answer. I think the answer is he’s kind of correct, but not really. And I want to try to nuance out what the truth actually is, so that people can understand for themselves what Deutsch is talking about when he refers to a human as a universal explainer, and then you can decide based on that how you want to see it.
[00:04:21] Red: Yeah.
[00:04:22] Blue: So let’s start with the universal Turing machine. Is BitButter correct about universal Turing machines — what did he say? — that no actual universal Turing machine can exist? Is that true? Well, the answer is sort of. What BitButter is really getting at is that a physical computer never has infinite memory like a theoretical Turing machine. Now, he also throws in there infinite time, but a theoretical Turing machine is not allowed infinite time, so he’s actually got that part wrong. OK. But he’s right about the infinite memory: a theoretical Turing machine is allowed an infinite tape, so it has infinite memory, and no computer that will ever exist will ever have infinite memory at any one given point in time. So that seems to me a pretty compelling way to look at it, when you put it like that in relation to the universal explainer hypothesis. I mean, on some theoretical level, I think it is true that we are universal explainers, or at least I suspect it’s true. But yeah, the time and memory piece in there seems to be a real limiting factor for humans. And, you know, I think that really relates to what we’ve struggled with on this podcast. Yes. And this is where BitButter is going with his response to Deutsch. It’s not the way I would have addressed it, and I’ll actually give you how I would have addressed it afterwards. So does the fact that a finite computer
[00:06:07] Blue: sorry, a real physical computer always has finite memory at any given point in time — does that mean that they are not universal? The answer is, yes, sort of. So let me give you an example. A Commodore 64 — I used to have a Commodore 64 when I was a kid. Let’s say that I have a traveling salesperson algorithm that I want to run on that Commodore 64. It only has 64 kilobytes of memory. Well, obviously, I could give it a traveling salesperson problem and overwhelm it, right? I could make enough cities that — for those who don’t know, the traveling salesperson problem is this classic computer science problem where you have a number of cities, there are paths between the cities, and you want to find the shortest route through all the cities, so that the traveling salesperson can visit every single city in the shortest number of miles. Or it could be time, if you want to make the paths based on time instead. It doesn’t matter, right? Just the shortest in whatever unit it is you’re trying to measure.
[00:07:18] Red: And from what I understand, it’s not that it cannot be solved. It’s just that it would take an infinite amount of time for it to be solved.
[00:07:26] Blue: No, no, no. So, it’s an NP problem, so people assume it just cannot be solved. But no, it can be solved. It can totally be solved, and in a finite amount of time.
[00:07:41] Red: OK.
[00:07:41] Blue: It’s what’s considered intractable. OK. So it’s exponential growth, right? If you have a small problem, it’ll be solved in a reasonable amount of time.
[00:07:53] Red: OK,
[00:07:54] Blue: but as you start to add more cities and paths, the amount of time required to solve it grows exponentially. OK, of course. Yeah. So you very, very quickly reach a point where it becomes — insoluble, sorry, intractable, not insoluble. I mean, we often treat intractable and insoluble as the same thing, but technically speaking, it’s still soluble. It’s just intractable. OK.
[00:08:17] Red: Oh, that makes perfect sense.
[00:08:18] Blue: The sun’s going to explode long before this algorithm actually finishes, even though we know the algorithm will finish.
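For listeners who want to see that blow-up concretely, here is a minimal brute-force sketch (my illustration, not from the episode; the four-city distance matrix is made up). It always finishes in finite time, exactly as Blue says, but the number of tours it must check grows factorially with the number of cities:

```python
from itertools import permutations

# Hypothetical distances between four cities; DIST[i][j] is the cost i -> j.
DIST = [
    [0, 2, 9, 10],
    [2, 0, 6, 4],
    [9, 6, 0, 8],
    [10, 4, 8, 0],
]

def shortest_tour(dist):
    """Brute-force traveling salesperson: check all (n-1)! tours from city 0.

    Soluble in finite time, but the permutation count explodes as cities
    are added -- which is what makes it intractable, not insoluble.
    """
    n = len(dist)
    best_len, best_tour = float("inf"), None
    for middle in permutations(range(1, n)):
        tour = (0,) + middle + (0,)
        length = sum(dist[a][b] for a, b in zip(tour, tour[1:]))
        if length < best_len:
            best_len, best_tour = length, tour
    return best_len, best_tour

print(shortest_tour(DIST))  # finds the 23-mile tour (0, 1, 3, 2, 0)
```

With four cities there are only 3! = 6 tours to check; with twenty cities there are 19! ≈ 1.2 × 10¹⁷, which is the “sun explodes before it finishes” regime.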
[00:08:26] Red: So the exponential growth, I think, is the key. OK. I didn’t know most problems are like that.
[00:08:32] Blue: The traveling salesperson problem is just a kind of quintessential example of it. But nearly every problem we’re interested in is like that. The set of problems that are actually tractable — the problems that are in the class P — that’s a very small subset of NP, right? The vast majority of search problems that we’re interested in are intractable. OK. Now, this leads to AI. Why do we have AI? AI is different algorithms that you use to try to still solve these problems, even though they’re intractable. And you do things like maybe relax the requirements: it doesn’t have to be the absolute shortest path, it can just be a nearly shortest path, or something along those lines, right? And then AI is the study of how you create algorithms that can still give you a useful answer to the question. And the problems are still soluble in that sense — you can usually find a way. Sometimes I can make certain assumptions that might allow an admissible heuristic that lets me solve it in a reasonable amount of time. And sometimes it just doesn’t matter. Like having a computer play chess — that’s an intractable problem. It’s completely intractable. It doesn’t matter, because it’s intractable for humans, too. So if I want to make a program that can beat a human, it just simply has to be able to search further ahead than the human can, right? In fact, the game of chess would be boring if it were tractable, because it’d be like tic-tac-toe, where perfect play always forces the same outcome.
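To make the “relax the requirements” move concrete, here is a small sketch (again my illustration, not anything from the episode) of the classic nearest-neighbor heuristic for the same kind of problem: it runs in polynomial time and settles for a nearly-shortest tour instead of guaranteeing the shortest one:

```python
# Hypothetical four-city distance matrix; DIST[i][j] is the cost i -> j.
DIST = [
    [0, 2, 9, 10],
    [2, 0, 6, 4],
    [9, 6, 0, 8],
    [10, 4, 8, 0],
]

def nearest_neighbor_tour(dist, start=0):
    """Greedy heuristic: always hop to the closest unvisited city.

    Only O(n^2) work, but the answer is merely 'nearly short' -- there is
    no guarantee it matches what a brute-force search would find.
    """
    unvisited = set(range(len(dist))) - {start}
    tour, current = [start], start
    while unvisited:
        nxt = min(unvisited, key=lambda city: dist[current][city])
        unvisited.remove(nxt)
        tour.append(nxt)
        current = nxt
    tour.append(start)  # return home at the end
    length = sum(dist[a][b] for a, b in zip(tour, tour[1:]))
    return length, tour

# On this tiny example the greedy tour happens to be optimal (length 23);
# on larger instances it generally is not.
print(nearest_neighbor_tour(DIST))
```

The trade is exactly the one described above: we give up the guarantee of the absolute shortest path in exchange for an answer we can actually afford to compute.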
[00:10:09] Red: Yeah,
[00:10:10] Blue: well, I mean,
[00:10:10] Red: one of my favorite facts is that there are more possible chess games than there are atoms in the known universe. Yes. You just can’t get your mind around that. It’s just so big. Such a big world from just a little chessboard.
[00:10:27] Blue: Right. So within any number of moves — any number of plies, as they call it — it’s finite how many possibilities there are. It’s just such a large number that you can’t solve it. You can only search three plies, four plies, five plies; you can’t go beyond a certain point. And as your computers get exponentially faster, since the problem grows exponentially, you only get like one more ply after ten cycles of Moore’s Law or something like that. Right.
[00:10:54] Red: Yeah.
[00:10:54] Blue: And so what they do instead is they come up with clever alternative algorithms that don’t rely solely on brute-force search to try to find a solution. That’s how Deep Blue worked, and things like that. And how human brains work, right?
[00:11:09] Red: Yes — heuristics for thinking, you know, ruling out everything that involves moving this bishop, or something like that.
[00:11:16] Blue: Yeah. Yeah. OK, here’s the problem with what I just said, though. We know that a Commodore 64 is, quote, not a universal computer in a sense, because it has this limited memory.
[00:11:28] Unknown: And
[00:11:28] Blue: therefore there are certain traveling salesperson problems that it can’t solve, whereas the Turing machine can. That’s actually the argument. OK. But it’s a little unclear what we mean by memory, because the Commodore 64 had this thing called a disk drive, so it wasn’t actually limited to 64K.
[00:11:48] Unknown: Now.
[00:11:49] Blue: You could argue here: yeah, but the disk drive is also finite. I don’t know what it was — six hundred and forty K or something. I mean, it still wasn’t that much. It was a lot more than 64K, but it wasn’t much by today’s standards. And then Commodore 64s went out — you know, they don’t build them anymore — before anyone came up with a hard drive for them, right? So the amount of memory at some point ended for the Commodore 64. There was a finite end to how much memory a Commodore 64 could have, and therefore you can still safely say the Commodore 64 is not a universal computer. It did not have a hard drive — I’m pretty sure it just had the floppy drive.
[00:12:31] Red: But it must have had the operating system on the hard drive or something.
[00:12:34] Blue: No, no, no, the operating system existed in ROM.
[00:12:37] Red: Oh, OK, OK.
[00:12:39] Blue: That was very normal back then. Yeah. OK. A lot of people would say that it didn’t have an operating system, but you always have an operating system. It’s just that the operating system was: you started your computer and BASIC came up, you know. I see. I see. So modern computers have no set memory size, due to the idea of virtual memory. If I look at my laptop, it’s got some limited amount of RAM, but it also has a much larger hard drive, and it can actually treat the hard drive as if it’s RAM, because it’s got these programs that do virtual memory. OK. Now, it’s not really clear what the maximum hard drive size is for a modern computer, because they can always make better and better hard drives. So the question “what’s the maximum memory size for a Commodore 64” is a well-defined question, but it’s a lot less well-defined for a modern PC. OK, are you with me so far?
[00:13:40] Red: Yes, I am. OK, good, good, good lesson on basic computer science here.
[00:13:46] Blue: OK. Now, I can imagine BitButter saying: but won’t it always be finite? Well, yes, of course, that’s true. So that’s why I have to say he’s sort of right, OK? Because at any one moment, the memory is finite. So I’m going to concede a steelman version of BitButter’s first point — so long as you, quote, get it that there is no such thing as, quote, the memory of the computer, forever fixed in time. OK, there is no definition of the maximum memory a PC has. And that’s going to turn out to be an important point in the way I’m going to explain this. OK. So I’m conceding BitButter’s steelman version that there’s always a finite amount, but I’m insisting that there’s not a set finite amount. OK. Now, at a minimum, you could say all computers reach some technological limit of memory before they get replaced. So this is the sense in which I think BitButter is correct. Now, why does the Turing machine have an infinite amount of memory? That’s really the question you should be asking yourself. Why didn’t we define the Turing machine as having a finite amount of memory? It’s an interesting question, because it’s actually the only place where we allow an infinity in the definition of a Turing machine. Turing machines are also defined by their number of states. You could have said the Turing machine is allowed an infinite number of states, but we do not allow that. A Turing machine, by definition, has to have a finite number of states. It’s never specified how many, but it has to be finite. And the amount of time is never considered infinite with a Turing machine.
[00:15:33] Blue: We declare an algorithm intractable if it’s going to take an exponential amount of time. If we wanted to declare Turing machines as having an infinite amount of time, then no algorithm would be intractable. OK. So why is it that we made this one exception, memory being infinite, when we refused to make that exception for anything else in the Turing machine? This is actually the question you want to be asking. And, in fact, a Turing machine is considered to cease to be a Turing machine if, for example, you allow it to have an infinity of states. It’s no longer considered a Turing machine if you decide to break that rule. You’re allowed to break it if you want; it’s just no longer considered a Turing machine. OK. So the easiest way to explain why we do this is to explain the concept of a nondeterministic Turing machine. Now, what’s a nondeterministic Turing machine? It’s different from a Turing machine. The easiest way to explain it — and I’m not an expert in this, but I did go look this up in a book before the podcast, so I’m pretty sure I got this right — is that it can run any NP problem in polynomial time. OK. And the reason why is because it has, essentially, an infinity of parallel processing. So if you have a — not infinite, but an exponential search you need to do, it’s able to search all the paths simultaneously. OK. And because of that — just think about it for a second.
[00:17:11] Blue: And it should be obvious that that now means that every NP problem that we consider intractable today is now tractable in polynomial time. OK. Can you see why? If you don’t see why, I’ll make sure you understand before I continue.
[00:17:25] Red: I don’t think I see why.
[00:17:27] Blue: OK, I’ll give you an example. Let’s use chess.
[00:17:29] Unknown: OK.
[00:17:29] Blue: OK.
[00:17:29] Unknown: OK.
[00:17:30] Red: Yeah, chess is good.
[00:17:31] Blue: So let’s say that I need to explore one move out for chess. How do I do that? I take every piece on the board that I own, I try moving it one legal space, and I try every legal move it can make. Then, after I try every single one of those moves, I need to try what my opponent would do if they were trying to maximize their outcome. So I try every single move they could possibly make. Then, from that point, I try every single move that I could possibly make in response to them, and I just keep going until I run out of time. OK. Now suppose I could try every single one of those moves simultaneously — simultaneously, that’s the key thing — at the same time. Would the problem still be exponentially intractable?
[00:18:27] Red: I — well, the “simultaneously” was just kind of throwing me off here. I mean, is there a limit to what you can do simultaneously? No? OK. Then, no.
[00:18:41] Blue: OK. That’s what a nondeterministic Turing machine is.
[00:18:44] Red: Yeah. OK. OK.
[00:18:45] Blue: So it’s a theoretical machine that doesn’t exist in real life. And, in fact, the laws of physics don’t allow you to build one.
[00:18:53] Red: OK. OK.
[00:18:54] Blue: That we know about. Theoretically, under different laws of physics, perhaps it could exist, right? But it doesn’t exist in our reality. I see. But it’s a really useful tool, because it allows us to think about: what if you could build a machine like this, what would be the results? And it leads to various types of questions about what we mean by intractability, and there’s lots of interesting things that come out of that. OK. Now, basically, what’s happened here is that a nondeterministic Turing machine is simply a relaxation of the definition of the Turing machine. It says: OK, you’re now allowed to do everything simultaneously, as much as you want. And by doing that — even though we know you can’t build a real computer to do this — we can immediately see, oh, that’s going to change the amount of time it takes. The class of the problem changes: problems that are in the class NP, but that we consider not to be in P, suddenly become solvable in polynomial time. OK. The reason we want to define the Turing machine as we do is because we’re curious what the maximum level of computation is that’s allowed by the laws of physics. Now, the Turing machine was meant to be a theoretical construct to help us understand computers as a class, not to help us understand any one particular computer. So an infinity of memory is a theoretical convenience in how we define the Turing machine, because it is so easy to add more memory to most computers.
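One standard way to make the nondeterministic idea concrete (my sketch; the episode uses chess, but subset-sum is small enough to show whole) is the guess-and-verify picture of NP: checking any one guess is fast, and the nondeterministic machine is imagined to check all guesses at once, while a real, deterministic simulation has to grind through them one by one:

```python
from itertools import product

def verify(nums, target, choice):
    """Polynomial-time check of one 'guess': does this subset hit the target?"""
    return sum(n for n, picked in zip(nums, choice) if picked) == target

def subset_sum(nums, target):
    """Deterministic stand-in for the nondeterministic machine.

    A nondeterministic Turing machine would try all 2**n guesses
    simultaneously and accept if any verifies. Lacking that, we loop over
    the guesses one at a time -- which is exactly where the exponential
    running time sneaks back in.
    """
    return any(verify(nums, target, choice)
               for choice in product([False, True], repeat=len(nums)))

print(subset_sum([3, 9, 8, 4], 15))  # True: 3 + 8 + 4 = 15
```

Each individual `verify` call is cheap; the cost is entirely in the 2ⁿ guesses, which is the part the nondeterministic machine is allowed to do “all at the same time.”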
[00:20:35] Blue: We probably could have defined the Turing machine as having a finite amount of memory instead of infinite memory, so long as we don’t specify what the finite amount is. And I’m pretty sure the result would have been exactly the same. We could have said a Turing machine is defined as having a finite amount of memory X, but you can specify X to be anything you want, and that would have been the same computational class as a Turing machine today with an infinite amount of memory. OK. Because if an algorithm requires infinite memory, it would also require infinite time, which would then mean that the program never halts — so it would basically be insoluble anyway. So what we want is to keep the definition as simple as possible, and it’s easier just to say we allow it infinite memory than to insist it has finite memory X, where X is unspecified, because it turns out those two are the same thing. And, you know, not only is X unspecified, but you’re allowed to change it, right? If you defined it and then had to keep it, that would be a problem. But if I’m allowed to change X at any moment, then that’s the same as saying it has infinite memory, for the purposes of what we’re trying to do with the theoretical idea of a Turing machine. So this is how we’re going to start to unravel BitButter’s concern here.
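The “finite at every moment, but you may always add more” idea can be sketched as a tape that allocates cells only on demand (my illustration, not anything from the episode):

```python
class Tape:
    """A Turing-machine-style tape: finite at any instant, yet unbounded.

    No maximum size X is ever baked in. Cells come into existence only
    when written, so the memory in use is always finite while no fixed
    limit is ever reached -- the 'you may always raise X' idea.
    """

    BLANK = "_"

    def __init__(self):
        self.cells = {}  # position -> symbol; absent positions are blank

    def read(self, pos):
        return self.cells.get(pos, self.BLANK)  # blanks are never stored

    def write(self, pos, symbol):
        self.cells[pos] = symbol

    def used(self):
        return len(self.cells)  # memory actually in use right now

tape = Tape()
tape.write(0, "1")
tape.write(1_000_000, "1")  # a far-away cell is allocated only on demand
print(tape.read(5), tape.used())
```

At every moment `used()` reports a finite number, but there is no position you are forbidden to write to, which is the distinction between “always finite” and “has a set finite limit.”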
[00:21:57] Red: OK.
[00:21:58] Blue: So while there is a limited sense in which a real computer is only an approximation of a universal Turing machine, we don’t really care, because we can, quote, always add more memory. Now, there is a possible problem here: heat death. If you accept heat death as the correct cosmology, then that would mean you can’t always add more memory. I’m not sure that would matter much to the way we want to define a Turing machine, because you’re really just trying to think theoretically about what if there were no heat death. We’re not trying to define what a real-life computer can do; we’re trying to define what a theoretical computer can do, because that answers interesting questions. But I would grant you that if heat death is a real thing — and by that I mean there’s a finite amount of computation — then obviously no physical computer will ever have the same computational class as a Turing machine. That would just be a fact.
[00:23:03] Red: Doesn’t matter if it’s a quantum computer or not.
[00:23:06] Blue: Yeah. OK. But for now, let’s accept Deutsch’s view — since this is part of his view — that somewhere in the multiverse there is a computer large enough to emulate anything you can imagine. He actually explicitly mentions this in The Fabric of Reality. So this is like the Omega Point computer. Well,
[00:23:24] Red: actually, now that you brought that up, I’ve got to read you this one quote real quick from Frank Tipler: quantum computers, which can code more bits of information in 400 atoms than there are atoms in the entire visible universe, do not require much mass. An entire simulated city with thousands of humans and AIs can be coded in a few grams. Does that sound right to you?
[00:23:49] Blue: That’s a good question. So I’ve actually studied quantum computing. When I got involved in Four Strands-ism, I bought a book, Quantum Computing for Computer Scientists, which is where all my knowledge of quantum mechanics comes from. I literally don’t understand quantum mechanics the way a physicist would. And when Sam Kuypers, on our podcast, gets up and starts explaining how they do it, I don’t recognize any of it. The way you explain quantum mechanics to a computer scientist is you use matrices, and it works just fine. Right. OK. I understand it in that way.
[00:24:33] Red: Yeah.
[00:24:34] Blue: And I can tell you that a quantum computer of a few hundred atoms would not be able to simulate a city. I know that for sure.
[00:24:48] Red: So Frank Tipler is wrong.
[00:24:49] Blue: No — he actually said the city was in a few grams. The four hundred atoms was a separate claim.
[00:24:57] Red: Oh, wait,
[00:24:58] Blue: read the quote again.
[00:24:59] Red: OK. Quantum computers, which can code more bits of information in four hundred atoms than there are atoms in the entire visible universe — that’s a true statement — do not require much mass — that’s true — an entire simulated city with thousands of humans and AIs can be coded in a few grams. OK. OK. I see. These are separate concepts. Obviously, four hundred atoms does not equal a few grams. OK. That’s right.
[00:25:25] Blue: So I have no idea if a few grams could compute an entire city. I mean, if you think about it, we can compute an entire city today on a regular computer, if you mean something like a video game. Now, that’s not a completely realistic simulation. So it’s unclear what he means. He’s saying something that’s probably true in a way, but he’s being vague, and so it’s really hard for me to know what he means. I guess
[00:25:52] Red: that when you kind of get into that statement and sort of extrapolate — how you were talking about Moore’s Law — sort of the quantum computer taking over the universe, essentially, you can see how he comes up with the idea that every human that has ever existed or ever could exist could be simulated in that computer. That is, if that’s what’s capable in a few grams. I mean,
[00:26:23] Blue: understand, though, the Omega Point theory actually goes beyond that. It’s not merely the fact that you have this certain number of atoms. If the universe had a fixed finite number of atoms, you would still have a limit to what you could do. The Omega Point gets around that, OK, through the way he understands the collapse of the universe. Right.
[00:26:47] Red: Oh. So
[00:26:48] Blue: you actually gain memory over time. You start to exceed what you could do with a single universe. So there just is no limit, at the limit, if that makes any sense.
[00:27:01] Red: It’s starting to sink in.
[00:27:03] Blue: OK. So I can barely even remember the Omega Point theory at this point. I remember that that was what it said, but I can’t even remember how he mathematically went about it. I think I mentioned in one of my past podcasts that I opened up my Quantum Computing for Computer Scientists book and tried to create a presentation, and I just could not figure it out. And I kept looking at the notes in the margins, which I had written when I was younger. And the guy who wrote those has got to be ten times smarter than me. He was correcting mistakes in the book. He solved the problems that were given in the book, and then he would correct them and say they have the wrong answer in the back.
[00:27:56] Red: And
[00:27:57] Blue: I’m like, I’m in shock. I cannot understand what he’s saying, you know, and I know it’s me, and I know it used to be that guy, but it just isn’t me anymore. OK. So, you know, there’s that. This isn’t something that I’ve kept up on, and so it tends to fade from my mind over time. But I remember that the Omega Point computer had a very clever way of getting around the limit of the number of atoms in the universe. OK, so here’s the thing. The theory of universal explainers does not have a, quote, theoretical universal explainer that it is based on. BitButter is assuming that it does. Now, maybe he made that part up himself, but maybe it makes some sense — I always try to steelman the other person’s view. The reason why we don’t have a theoretical universal explainer is because we don’t know what a universal explainer is. So it’s not possible for us to define a theoretical universal explainer the same way we would define a theoretical computer, right, a theoretical Turing machine. So maybe him saying the theoretical universal explainer assumes infinite memory and time — maybe that’s a fair statement. But I do want to call out that he’s making that up, right? It’s not part of Deutsch’s theory as it currently stands. But even if we were to specify a theoretical universal explainer as, quote, having infinite memory, this would really pose no problem for the theory that humans are universal explainers. Now, why? So let’s grant BitButter, for the sake of argument, the same technically correct concession as we’re making for universal computers.
[00:29:44] Blue: This is the best possible scenario for BitButter’s argument. I’m literally giving him everything. I’m saying, OK, I’m going to grant BitButter, for the sake of argument, that there’s a theoretical universal explainer, that it has infinite memory and infinite time, and that actual universal explainers in real life aren’t, therefore, true universal explainers, because they don’t have infinite memory and infinite time. OK, so I’m giving him everything; there’s nothing I’m withholding from his argument. So what does it mean, then, to say that a human is only an approximate universal explainer due to not having infinite memory? What does it even mean for a human to have memory? Do we mean the human’s working memory in their brain? Surely not. Do we mean their long-term memory? Surely not. Do we mean everything they might be able to write down on a piece of paper? OK, well, how much memory is that? Can you define that amount of memory for me? Does it include computers? If he can use a computer, and part of his knowledge is based on creating algorithms that work things out, does that count as his memory? Yes, of course, right? So when we talk about the memory of a universal explainer, we have to define it as any sort of memory that he can, through his tools, devise to supplement the memory in his brain. OK. And this is part of what David Deutsch is assuming when he talks about a person being a universal explainer.
[00:31:22] Blue: So this is what we actually mean when we say a human is a universal explainer, and adding “only approximately so at any given moment in time” adds no useful information and tells you nothing about the limits of that human. Now, again, if we’re starting with the assumption of heat death, then humans have finite time. That’s a problem; Deutsch is now wrong. If we’re starting with the assumption that something like the Omega Point isn’t true, and that we’re going to run out of memory at some point or whatever, then BitButter is right and David Deutsch is wrong. I’ll grant you that. But that isn’t part of Deutsch’s theory. Deutsch is starting with the assumption that, in theory, there’s an infinite amount of time for every human being — if there isn’t, that’s just a problem to be solved — and that there’s an infinite amount of memory eventually available to that human being. Right. Does he state this in his book, or is this just — he does? OK. Yeah. The final chapter of The Fabric of Reality. Go read that. He outright says it.
[00:32:24] Red: OK.
[00:32:25] Blue: This is why he argues against heat death in the final chapter of The Fabric of Reality. OK. He even gives an argument against heat death that I’ve never really covered, because it doesn’t make sense to me, and in fact I think it’s probably wrong. I can cover it at some point if you want. But it is a very important part of David Deutsch’s theories that heat death does not exist. That is a starting assumption for his theories. Now, I have at least once heard him say: well, if I’m wrong, then at least I’m approximately right, and so I’m going to live my life this way anyhow. So I think somewhere in the back of his mind there’s the possibility that he’s got this wrong, and he’s kind of thought through what it would mean to him if he’s wrong. But his theories — the whole concept of The Beginning of Infinity — are based on the assumption that there is no actual finite limit on memory or time. And heat death is controversial amongst cosmologists. It is.
[00:33:27] Red: It is a
[00:33:27] Blue: controversial amongst cosmologists. That is absolutely true.
[00:33:31] Red: OK.
[00:33:32] Blue: And I think for good reason, right? Yeah, it’s the view I support.
[00:33:37] Unknown: But
[00:33:37] Blue: I admit it’s difficult to say that it’s absolutely the best theory, right? It seems like heat death is a decent competitor at this point. Right.
[00:33:46] Red: Yeah.
[00:33:47] Blue: I felt differently back when the omega point theory wasn’t refuted. I felt like at that point, heat death was not a good, viable competitor to the omega point theory. But when the omega point theory ended up not matching observations, it seems like heat death kind of became an equal competitor again, if that makes any sense. Or maybe not even equal; let’s just say it’s a competitor. OK, so Bitbutter’s argument is, in my view, actually just a misunderstanding of Deutsch’s theory. Does that make sense? Because Deutsch is starting with the assumption that there’s an infinite amount of time and memory available to a human being. OK. Now, a completely fair question is: is Deutsch correct in his response? So Bitbutter’s trying to explain why he feels Deutsch’s response was incorrect. Right. He’s trying to say, yeah, but there’s only a finite amount. And you just said that you kind of agree with Bitbutter. And I kind of do too, right? I mean, when we’re talking about a human being in real life, right now, we’re not beings that are living forever. We’re beings that have lived a certain amount of finite time up to this point, right? And because of that, it doesn’t make sense to assume that there isn’t some legitimate sense in which men are inherently more violent than women. So even though Bitbutter’s argument around universal explainers was a misunderstanding of Deutsch, he’s kind of actually got a point. That’s fair. OK. Now, let me explain in my own way how I would get to the same conclusion. So Paul Graham says, if someone says men are inherently more violent, it seems obviously true.
[00:35:25] Blue: Now, let’s remember the rationality principle that we mentioned in a past podcast, the one on Bruce Caldwell’s paper. It’s the idea that we don’t just assume that someone is saying or doing something irrational; we try to understand how it rationally makes sense, given the knowledge that was available to them at the time. OK. And the reason we do this isn’t because we think everybody’s always rational, but because the conclusion "oh, they’re just behaving irrationally" is easy to vary. It’s got no use, right? It’s like an argument from coincidence. You can always make an argument from coincidence. Coincidences do exist; they’re sometimes right. But we don’t ever start with an argument from coincidence. We always start with the assumption that it’s not a coincidence. There are ways you might get to a coincidence as your best explanation by eliminating all the alternatives, but you would not ever start with coincidence as your best explanation. The same is true for the rationality principle. And I’m going to throw in there the idea of charitable reading, because I believe charitable reading (which is my own term... all right, maybe it’s not, maybe I’ve heard it from somebody else, but I think it might be my own term) is a version of the rationality principle. It’s a subset of the rationality principle. It’s the idea that when somebody says something, if there is even one way in which I can make sense of it rationally, that’s the right way for me to interpret it. I should not just assume that they said something that was irrational. And it just makes sense why this has to be so, right?
[00:37:09] Blue: It’s because there’s always an infinity of ways in which I can choose to misread somebody that don’t make sense. Right. So if I’m going to decide this person is saying something irrational without eliminating every other possibility first, then obviously I can just use that as an all-purpose go-to argument. And that’s the type of argument that a critical rationalist doesn’t allow. That’s what critical rationalism outlaws. Does that make sense? Sounds like words to live by, honestly. Yes. OK. It’s something that I feel very strongly about. And I also feel like when people start to put words in your mouth, it’s an incredibly rude thing to do to them. And it’s something that we do all the time, right? And you can do it accidentally; that’s one thing. It really comes down to whether you’ll error correct. If the person says, no, that’s not what I said, do you insist they did say it, or do you allow them to speak for themselves and say, OK, what is it that you’re actually saying? That’s what it really comes down to in terms of charitable reading. OK. So it makes no sense to choose a reading on which somebody said something false, because there will always, always, always be a false reading available to you. So what did Paul Graham mean here? That’s the question we need to be asking. Sure, he might mean something false, like: forever and ever, no matter what knowledge we gain, men will always be more violent than women. I mean, I certainly could read his statement that way.
[00:38:46] Blue: And if that is what he meant, then he is wrong. It is a false statement, and I agree with Deutsch, OK? Yeah. But that seems like such an unreasonably uncharitable way to read him. OK.
[00:39:01] Unknown: Yeah.
[00:39:02] Red: I mean, it just seems to me we’re talking about two very different levels of abstraction. What Deutsch is saying is completely valid on one level of abstraction, and I think it’s really an interesting thing to keep in mind. But, you know, on another level of abstraction, of course there’s a relationship between being a man and being aggressive. Right. It’s not biological determinism or anything, but it’s still there, all over the world. I mean.
[00:39:40] Blue: So I think that you are exactly correct. What Deutsch is really doing is ignoring what Paul Graham actually said to make his own completely unrelated point. And I think that’s what’s really going on, right? He’s saying something that is true, but it’s irrelevant to Paul Graham’s point, because they’re at totally different levels of abstraction.
[00:40:03] Red: OK.
[00:40:04] Blue: So how might we reasonably read Graham? Well, how about this? I read him as saying: genetically speaking, men are inherently more violent than women as a class, as of today, given our current level of knowledge. Now, Deutsch might be taking issue with that. OK. I know Deutschians who will say that’s totally not true, that genes today can’t influence us. We had whole podcasts about genes and how genes can influence us even though we’re universal explainers. There are Deutschians out there, a lot of them in fact, that completely deny that. OK. Yeah. Here’s the thing, though. It’s just obviously a true statement, right? Genetically speaking... well, we don’t know about the genetics; I’ll allow you maybe a challenge on that. But men are more inherently violent than women as a class. That’s a completely true statement. Like, by observation,
[00:40:56] Red: I’ll bet that even if you got into the five percent of homicides committed by women, there would still be some pretty significant differences in how those homicides are committed.
[00:41:11] Blue: Well, and in fact, when women do it, it’s often in response to a man, according to Steven Pinker. I got that out of his books; I never actually checked the studies on this. But Steven Pinker claims that even in the cases where you do have violence from women, it was actually the man who initiated the violence. So the woman may kill her mate or whatever, but he had been abusing her, something along those lines, and she didn’t see a way out. So the simple truth is that you have a real-life observation that needs to be explained, right? And you don’t want to use the explanation "oh, it’s just a coincidence," because that’s a bad explanation. So here’s the question: why is it that men are more violent than women as a class? OK, now, that’s a completely fair scientific question. And we do have pretty good scientific theories about the subject, right? And we know it’s somehow linked to genetics, because being a man is a genetic thing: X chromosome versus Y chromosome, etc.
[00:42:18] Red: So we know it’s somehow linked to genetics.
[00:42:22] Blue: Now, the Deutschians I have argued with on this would say, oh, but it’s just a correlation. OK, but is it? I mean, we have ways that we can study correlation versus causation. These studies exist, right? It’s not like this is something we just know nothing about. Just deciding that you’re going to declare every single correlation a mere correlation instead of a causation, that’s a bad explanation. So we don’t get to do that if we’re critical rationalists.
[00:42:51] Red: Yeah, I mean, I think on some meta level Deutsch is commenting that it is a correlation. But it almost kind of breaks down the correlation-causation idea. Yeah, you know, it makes the whole
[00:43:08] Blue: point of the idea useless, right? Which is not what we want.
[00:43:13] Red: Yeah.
[00:43:14] Blue: So if you were to go Google this (I did, just for the show, just to be sure), you would find that it is believed that testosterone levels stimulate men’s rage centers, and they’re more aggressive because of that. There’s probably a lot more to it than that, but there’s not less to it than that, if that makes any sense. Right. Yeah.
[00:43:35] Red: And, you know, I don’t personally like the idea of demonizing men or anything, either. I mean, there are probably all kinds of useful activities that stem from the same kinds of hormones and testosterone. And, maybe this is a controversial statement in 2023, but men do plenty of amazing things in this world, too.
[00:44:04] Blue: Yeah.
[00:44:04] Red: And, you know, probably a lot of that is because
[00:44:06] Blue: of the testosterone. Yeah, exactly. So
[00:44:08] Red: But, you know, a certain subset of men, probably a relatively small subset, because of these same chemicals in their brains and blood, are much, much more likely to commit some horrendous acts, too, which seems pretty undeniable to me. But right.
[00:44:35] Blue: OK. In just googling it for 10 seconds, I found a study, and here’s a quote: "Testosterone plays a significant role in the arousal of these behavioral manifestations in the brain centers involved in aggression and on the development of the muscular system that enables their realization." I then attempted to find counter-studies, and the counter-studies said things like this: oh, it’s not actually true that testosterone leads directly to aggression; it actually depends on what status the man thinks he has. And none of them were really arguing with the basic idea that there is some sort of connection between testosterone and arousal. They were arguing about what the specific mechanism was and what the exceptions were and things like that, right? It was really hard to find, at least in the 10 seconds I was spending, specific counter-theories. Now, everything’s conjectural, so who knows, this might be a false theory. But my point is we have a good testable theory out there, and I don’t really see Deutsch offering an alternative testable theory, right? Who knows what Deutsch meant; it was at some level of abstraction. Maybe he simply meant ultimately, inherently; it doesn’t have to be inherent over time. I’ll accept that. But when I argue with fans of David Deutsch, they’ll literally try to say, oh, no, it’s memetic, it’s not genetic, it’s memetic. And it’s really unclear why they’re saying that, because the statement "men are inherently more violent due to memetics" is still a true statement. Yeah. And all you’ve really done is ruin a good explanation. Right.
[00:46:18] Blue: Now, we want to understand why it’s specifically men. Why does it line up and correlate with genetics? Genetics must figure into this explanation somewhere, right? You have to explain it in some way. You’re welcome to explain it as a non-cause, but you must explain it in some way. Saying "oh, it’s just memes" explains nothing. Right.
[00:46:42] Red: Well, I mean, think about it in a real concrete way. Take the most aggressive, violent criminal that you possibly can, and somewhere in the multiverse you find memes that will persuade this criminal to adopt a more honorable lifestyle. I think, yes, that’s kind of why I agree with Deutsch and the principle of optimism and all this. But on a real practical level, are you going to find those memes very easily? I mean, I don’t think so.
[00:47:25] Blue: Therefore, the phrase "men are inherently more violent" has some sort of completely legitimate meaning, even if we’re assuming that there are memes that don’t exist yet, but exist somewhere in the multiverse, that would overcome them.
[00:47:39] Red: Right. Yeah.
[00:47:40] Blue: Simply put, Deutsch’s response really is just talking past Paul Graham. Paul Graham is saying something that is true, at least if you choose to read him in a charitable way, and we should really just leave it at that, in my opinion. Right. There’s no need to say, oh, but it’s not really inherent. No, that’s not what he meant. Let’s just accept he’s right and move on. One other point: let’s say somebody were to say, well, no, it’s not the testosterone that leads to rage, it’s actually being larger. Now, that’s a testable theory. So let’s say we went out and we tested it. Let’s even say that we found corroborating evidence and we refuted the testosterone theory. Well, guess what? Men are larger genetically. So there’s still a genetic link, right? It’s really hard to get around the genetic link. I understand why there’s such a desire to do so, but it’s just a mistake. There really are these things we need to explain, and they are going to have genetic reasons behind them. That does not violate universal explainer theory. We really need to accept it and move on, right? What’s expected here is not that you’re going to make some vague alternate theory: oh, it’s memes. You need to specify a testable alternate theory. If you think memes are the reason why men are more violent than women and it’s got nothing to do with genetics, you don’t get to just say, oh, it’s memes. You need to specify exactly what memes are doing it and how we would test that against the theory that it’s genetics instead.
[00:49:16] Blue: If you aren’t doing that, then the other theory is the best theory, period, end of story. That is how critical rationalism works. OK, I’m not ruling out the possibility it’s memes. I’m just saying I don’t know of any specific testable meme theory in existence right now, so I don’t yet consider it. That is how critical rationalism works. OK, so I think the best way to read Deutsch is probably as talking past Paul Graham to make an unrelated point. But I think that’s probably the fairest we can be in this case. Paul really is saying something meaningful here, and it really does make sense not to try to downplay his point. Men are inherently more violent than women, for whatever the reason. And just to be clear, I see no reason why the testosterone theory couldn’t be true. It does not violate universal explainership. Now, I don’t know it to be true, because it’s conjectural; all scientific theories are conjectural. But the idea that testosterone causes arousal in the lower part of the brain, the midbrain, where emotions arise from, which isn’t part of our universal explainership but does influence us: there is nothing in that theory that contradicts universal explainer theory at all. So it must be on the table as a legitimate possible theory. You cannot cite universal explainership as a counterexample to it. Yeah.
[00:50:50] Red: And there’s testimony from people, biological women, who go on testosterone and will assert that it completely changes their view of the world. I mean,
[00:51:06] Blue: they’ve done this as actual causal studies, not just anecdotal information, right? They’ve actually tried it: do we actually see rises in violence if we give men extra testosterone, things like that? This is not something that’s untestable, and it’s been tested many, many times. So if you want to defeat the idea that testosterone is the cause, you need to offer an equally testable alternative theory. If you’re not offering that, this is the best theory. I’m sorry, right? It’s forever open that we can refute it by giving an alternate theory and then finding a way to remove it by refuting it. That’s always a possibility, and we will forever leave the door open to it. But you need to actually do the work to make your theory testable. You don’t get to throw something non-testable out there and then claim you’ve defeated a testable theory. That just isn’t how critical rationalism works. OK, I also think, with Paul Graham’s statement, there always needs to be a sort of tacit "until we understand this better and change it" implied. Whenever we’re talking about genetics, genetic influence, something along those lines, nobody ever means "and we can do nothing about it," right? Nobody really believes in genetic determinism. There’s always a sort of "until we understand this better and can change it" implied. And again, I think that’s just the charitable reading. You’ve got to read people that way when they make statements like this. So it seems to me, at least, that Deutsch is incorrect in his challenge, because there is guaranteed to be at least one legitimate sense in which "men are inherently more violent" is correct. And it probably goes deeper than that, right?
[00:52:49] Blue: To me, this seems a little bit like the Popperian war on words: we dislike the term "inherent," and even though it means something correct in this circumstance, we’re going to go after the word "inherent." That’s what it kind of came across like to me. And I think this is really what Bitbutter was trying to get at, right? I think a steelman version of Bitbutter, even though ultimately he tried to go after the concept of people being universal explainers, which I think is the wrong thing to go after, is really trying to say: look, maybe we will someday overcome whatever is causing men to be more violent than women. In fact, we will someday overcome whatever is causing men to be more violent than women. But that day is not today. Yeah. And the reason why is because up to this point there has only been a finite amount of time and memory to try to solve that problem, and it has not yet been solved. And if you read Bitbutter in this way, then I actually think he’s saying something that’s correct.
[00:53:50] Red: And I’m curious how, given this, you are going to defend the universal explainer hypothesis, which I assume is the direction you’re going, but...
[00:54:03] Blue: Well, OK, so I think I just did, right? OK,
[00:54:06] Red: OK,
[00:54:07] Blue: The universal explainer hypothesis is based on the assumption that there is an infinite amount of space and time available to a human. And if you’re not assuming that, then, yes, you have to see it as just an approximation, and
[00:54:21] Red: I think
[00:54:22] Blue: which would,
[00:54:23] Red: right? OK,
[00:54:24] Blue: and yet that’s still the way we look at a Turing machine, right? We’re trying to figure out what humans can accomplish as a class, not what any one human can accomplish.
[00:54:35] Red: I
[00:54:35] Blue: think this kind of goes back to why we did the episode on the principle of optimism.
[00:54:42] Red: Yeah.
[00:54:43] Blue: And one of the things that came out in the conversation, or that at least I asked about and it seemed like people were agreeing with me (we had smart people there to find out whether they agreed with me or not; that was why they were there, right?),
[00:54:54] Red: Yeah,
[00:54:54] Blue: is that the principle of optimism really applies to societies, not to individuals. Yeah, there’s a connection: since it applies to a society, that should give us hope as individuals that real problems can be solved. But there’s no guarantee any one problem that you experience in your lifetime will be solved, right?
[00:55:15] Red: Yeah.
[00:55:16] Blue: And so the same thing applies here. Even though I’m, quote, a universal explainer, that doesn’t mean I’m personally going to solve the problem of AGI, you know. But I have a reason to believe, an explanation that explains why, as a society, we will solve it at some point.
[00:55:39] Red: Yeah.
[00:55:40] Blue: You might say, oh, but what if we get hit by an asteroid and we all die out? OK, I guess you’re right, but that’s not really what I meant; it was more of a theoretical question. And I think that’s how I would defend universal explainership here: you’re either starting with the assumption that Deutsch is right, that there is an infinite amount of time and memory available to human beings, even individually, or you’re starting with the assumption that we’re talking about humans as a class and we didn’t really mean it within a finite period of time. You can go with either one of those, and there are different implications either way. But I don’t think Bitbutter’s argument really made sense either way, if that makes any sense.
[00:56:24] Red: No, what you say, your perspective rings true to me. Yeah.
[00:56:29] Blue: OK. Now, let’s move on. We’re now going to talk about Saadia’s claim, on your Facebook page, that Deutsch’s assumption that we can simulate anything is wrong. She’s raised this issue numerous times on your Facebook page in the past, and she’s offered various well-known reasons why a perfect simulation is impossible. She’s given more than one; I’m going to concentrate on just the one I’m interested in and want to explain, and that is the argument from analog versus digital. The real world is analog and continuous; computers are digital and discrete; therefore, it is impossible for a computer to perfectly simulate the real world. I mean, this is a super well-known argument. Saadia is just citing other people who have raised it. Everybody already knows this argument, right? Yeah, she did offer others, and I’m not going to criticize those specifically today. I feel like everything she’s raised, I have at some point offered a response to and explained why the argument doesn’t make sense. So that’s what I’m doing: I’m going to take just the continuous argument, the analog argument, and I’m going to explain what’s wrong with it. OK. The reason is that her argument is actually almost identical to Bitbutter’s in the previous question, and it fails for the same reason. Or you might say it’s kind of right for the same reason, depending on how you want to look at it. OK, so she is kind of right, but not in a way that contradicts Deutsch’s claims, which is what I ultimately said for Bitbutter, too. Right. Because Deutsch is careful, and he makes claims that she’s missed. So.
[00:58:20] Blue: Here is from Deutsch’s actual paper, the ’85 paper, "Quantum theory, the Church-Turing principle and the universal quantum computer," the paper that made him famous. He says, I can now state the physical version of the Church-Turing principle: "Every finitely realizable physical system can be perfectly simulated by a universal model computing machine operating by finite means." OK, so there’s the claim that she’s trying to refute. Then he goes on to say that the universal computing machine "need only be an idealized, but theoretically permitted, finitely specifiable model." So he very specifically says that that was an idealized statement, but that in real life we only need a finitely specified model. Well, why is he saying this? It’s for exactly the reasons that I explained with Bitbutter and universal Turing machines. OK. Deutsch goes on to say: "We have already seen that the universal quantum computer can perfectly simulate any Turing machine and can simulate with arbitrary precision any quantum computer or simulator. I shall now show how Q," the quantum computer, "can simulate various physical systems, real and theoretical, which are beyond the scope of the universal Turing machine." OK, so he then explains that the quantum computer is different from a Turing machine. This is the whole idea of quantum computation, that it can do Shor’s algorithm and things like that. But notice the key phrase there was "with arbitrary precision." This is why the analog-versus-digital argument is really just a misunderstanding. Deutsch is actually agreeing: he has only ever claimed we can simulate to an arbitrary precision.
[01:00:17] Blue: OK, so if by the word "perfectly" you mean to infinite precision, then Deutsch is agreeing with you: you cannot simulate it perfectly. But does the word "perfectly" really have to mean that?
[01:00:30] Red: Right.
[01:00:30] Blue: I mean, if you’re going to insist that that is the essential definition of perfect, then OK, I guess you’re right. But that’s not what Deutsch is using the word to mean. You have to understand a person’s use of words the way they mean it, not the way you want it to mean. OK. So in The Fabric of Reality, he gives the example of simulating a storm perfectly, and how that differs from simulating a specific storm perfectly. I want you to think about that for a second. We don’t really have the ability to simulate a storm perfectly today, because a storm is based on quantum effects, or at least that’s what he’s arguing in that chapter. I forget which chapter it is, but I recall that it was in The Fabric of Reality. But with a quantum computer, we would be able to make a perfect simulation of a storm. Now, what does that mean? It doesn’t mean that a particular storm that is happening can be perfectly simulated. It just means that you won’t be able to distinguish the difference between the simulated storm and a real storm. When he says a perfect simulation, he is explicit that this is what he means. And it turns out this is going to be the whole way we’re going to deal with the question that Saadia is raising. Saadia might have a valid question here: if you can only simulate to an arbitrary level of precision, isn’t that equivalent to saying that you can’t simulate anything in reality? No, it is not.
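The "arbitrary precision" idea being discussed can be sketched in a few lines of Python. This is a minimal illustration only, not anything from Deutsch's paper; the choice of the square root of 2 as a stand-in "analog" value and the bit counts are invented for the example:

```python
from fractions import Fraction

# A discrete, finite process approximating a continuous quantity:
# sqrt(2), standing in for an "analog" value, approached by binary
# search over dyadic rationals. Each loop iteration adds one bit.
def sqrt2_with_bits(bits):
    lo, hi = Fraction(1), Fraction(2)
    for _ in range(bits):
        mid = (lo + hi) / 2
        if mid * mid <= 2:
            lo = mid
        else:
            hi = mid
    return lo  # guaranteed to lie within 2**-bits of sqrt(2)

for bits in (8, 16, 32):
    print(bits, abs(float(sqrt2_with_bits(bits)) - 2 ** 0.5))
```

No finite run ever reaches "infinite precision," but for any tolerance named in advance, some finite number of bits meets it. That is the sense of "arbitrary precision" at issue here.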
[01:02:02] Red: And can I give an example that I think might be compelling?
[01:02:07] Blue: Yes, an example.
[01:02:08] Red: And if it’s not, we can cut it. But, you know, I used to work at a record store, and that kind of led into debating with people about this analog-versus-digital thing as it relates to music reproduction. And there’s a common belief, particularly amongst people who are into analog music and vinyl, that when you capture the sound waves using ones and zeros, you’re missing something in the music.
[01:02:47] Blue: Right.
[01:02:47] Red: That there’s music, essentially, between the bits, or however you want to put it. And I think there even used to be something, which might still be on Wikipedia, where you see the stair-step idea,
[01:03:01] Blue: which
[01:03:02] Red: is, from what I at least understood, completely false. You know, it’s a little counterintuitive. But there’s what’s called the Nyquist-Shannon sampling theorem. Have you heard of that? No, I haven’t. OK, so what this indicates to me (I mean, I don’t really understand the deeper math or anything) is that when you increase the sampling rate for music, what you are doing is increasing the frequency range. OK, so the range of frequencies that could possibly be represented in those ones and zeros, right? Which, of course, would quickly go beyond what human beings are capable of hearing. But yeah, you’d be increasing that. But you are perfectly capturing the sound waves within that frequency range, using ones and zeros. That’s kind of what the vinyl people that I was arguing with just,
[01:04:13] Blue: you know,
[01:04:14] Red: it is so unintuitive that it’s really hard to get your mind around. But the basic gist of it is: there is no music between the bits. You’re using ones and zeros to perfectly capture the sound waves that are present.
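The sampling-theorem claim can be checked numerically. A rough sketch in Python, using nothing beyond the standard library; the sample rate, signal frequency, and test time here are invented for the example:

```python
import math

def sinc(x):
    # normalized sinc: sin(pi x) / (pi x), with sinc(0) = 1
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

fs = 100.0  # samples per second
f = 7.0     # signal frequency, well below the Nyquist limit fs / 2
N = 2000    # number of stored samples ("the ones and zeros")

samples = [math.sin(2 * math.pi * f * n / fs) for n in range(N)]

# Whittaker-Shannon interpolation: rebuild the continuous wave at any
# time t, including times that fall between the stored samples.
def reconstruct(t):
    return sum(s * sinc(fs * t - n) for n, s in enumerate(samples))

t = 9.9937  # a time between samples, near the middle of the record
error = abs(reconstruct(t) - math.sin(2 * math.pi * f * t))
print(error)  # small: no "music between the bits" was lost
```

The small residual error comes from using a finite record rather than the theorem's infinitely long one; within the band below fs / 2, nothing between the samples is missing.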
[01:04:30] Blue: You know, this was actually the example I was going to use. You actually did a better job than I would have. OK, so let me kind of cover this, because you are completely correct. Yeah.
[01:04:42] Red: So could you apply something similar to that whole storm? I mean, if you think of the whole storm as just a series of frequencies, does that mean that there are ones and zeros that can perfectly capture that whole storm within a given frequency range, essentially?
[01:05:00] Blue: Interesting question. I’m not sure I know the answer to that one. So let me see if I can use some examples here. Let’s say that I want to use my digital computer to simulate something that in real life is believed to be analog and continuous, right?
[01:05:25] Red: Yeah.
[01:05:26] Blue: Well, I’m going to simulate it using floating-point numbers, because floating-point numbers take the place of real numbers in a computer setting. Now, floating-point numbers are imperfect. They’re an imperfect implementation of real numbers, and you can reach a point where they don’t represent real numbers correctly. But you can also push that point further out by using more memory, which is kind of similar to what you’re talking about with the sampling of music and how it will lose certain frequencies: you can just keep increasing the amount of memory. So when I do my simulation, it’s going to represent what that would look like, right? Let’s say that I’m trying to simulate a pencil falling over. OK, so the typical way you would try to explain this is through regular mechanics, non-quantum mechanics (I think it might be different under quantum mechanics), and you would use chaos theory. You would say that the pencil’s point has these little imperfections that are microscopic, and so it’s completely impossible for you to predict which direction the pencil is going to fall, because you cannot measure with precision what the initial state was. And so it will fall over in a random direction. OK, now, quantum physics being true, maybe that’s a false explanation. But this is typically how they would try to explain it using regular mechanics: chaos theory. I’m going to accept it, because it illustrates my point. Let’s say I want to simulate a pencil falling over. I don’t need to simulate the specific initial conditions of a real-life pencil.
[01:07:36] Blue: I just need to simulate initial conditions like those of a real-life pencil. And I can do that with imperfect floating point numbers, because they will represent something that could have existed in real life. So I can simulate the falling pencil problem on a computer, and it will be indistinguishable from a normal, real falling pencil. But I can’t necessarily simulate one specific attempt at the falling pencil problem in real life, because I can never know what those specific initial conditions are. And they might be, they don’t have to be, but they could be initial conditions that require a level of precision that goes beyond what I’m currently doing with my memory. OK, but
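The chaos-theory point being made here can be sketched in a few lines of Python. This is a toy model, not real pencil physics; the exponential growth rate and the time scale are made-up numbers. It only illustrates that any representable float is a legitimate "a pencil" starting state, even though it may not be "this pencil":

```python
import math

# Toy model of a pencil balanced on its tip: near the upright position,
# the tilt angle grows roughly exponentially (the inverted-pendulum
# instability). The rate and time here are arbitrary illustrative values.
def fall_direction(initial_angle, rate=50.0, t=1.0):
    """Sign of the tilt after time t, given a tiny initial tilt."""
    angle = initial_angle * math.exp(rate * t)
    return math.copysign(1.0, angle)

# Two initial tilts smaller than anything we could ever measure are both
# perfectly valid starting states for "a" pencil, yet they produce
# opposite outcomes. The float is exact; our knowledge of the real
# pencil's state is what is imperfect.
assert fall_direction(+1e-15) == 1.0   # falls one way
assert fall_direction(-1e-15) == -1.0  # falls the other way
```

So the simulation reproduces the falling-pencil phenomenon faithfully even though it never reproduces one particular real pencil's run, which is the distinction Blue draws next.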
[01:08:18] Red: if you did know all those initial conditions, then you could simulate it.
[01:08:24] Blue: Theoretically, yes. I mean, given what’s happening in these atoms.
[01:08:29] Red: In some sense, it isn’t really that complicated, right? They could simulate that, and it wouldn’t be about perception. It would just be about simulating it to precision.
[01:08:43] Blue: That’s right. Right.
[01:08:44] Red: OK.
[01:08:45] Blue: OK. But when we talk about a perfect simulation, we mean: can I simulate a pencil falling over? Not: can I simulate this particular pencil falling over this particular time? Now, you can argue over the word perfect here, which is what I feel like Saudi is unconsciously doing. She’s saying that’s not really what perfect means. That’s essentialism. All right. Deutsch’s is a completely legitimate way to understand the word perfect, because it works for the setting that we’re trying to use it in. Right. And if you want to say, OK, under Saudi’s definition of the word perfect, there is no such thing as a perfect simulation, because in her sense of the word perfect you have to be able to simulate a particular pencil falling over a particular time, OK, I’ll grant her that. That’s the sense in which she’s kind of right. It just isn’t what Deutsch meant. Just like Deutsch was talking past Paul Graham, she’s really talking past Deutsch when she says this. OK. So the other thing is that you raised the idea that there are real perception limits. So let’s use the CD example. Can I make a perfect simulation of a symphony? Well, most people would say yes, right? They would say, I listen to a CD all the time. It’s a perfect simulation. Someone might get clever here and say, no, if you were there in real life, it wouldn’t sound that good.
[01:10:05] Red: Well, I mean, if you were there in real life, you’d be hearing sound waves bouncing around the auditorium, right? But to me, a more interesting question is: can you perfectly simulate what is on a recording of a symphony, whether it’s analog or digital?
[01:10:25] Blue: Well, and you can.
[01:10:27] Red: You know, the question is, you can. But you
[01:10:29] Blue: can also simulate perfectly what it would be like to be in the auditorium, right?
[01:10:35] Red: Oh, I’m not, you mean more theoretically. Yeah, you could. You could come up with the
[01:10:40] Blue: computation necessary,
[01:10:42] Red: not with today’s technology, though.
[01:10:44] Blue: Well, couldn’t you? I mean, like all you have to do to make it a perfect simulation is make it so it’s indistinguishable from the real thing.
[01:10:51] Red: OK. Oh, I see what you’re saying. So I think you could. I mean, in a playback environment, you would have to surround the room with speakers, because there would be sound waves coming from behind you and above you.
[01:11:07] Blue: Well, you wouldn’t even have to do that. You would just need to put on earphones that then do 3D spatial audio.
[01:11:11] Red: OK, I’m not even... Yeah, I mean, I think theoretically, but I’m not sure that earphones are quite at that level of technology.
[01:11:20] Blue: OK, maybe you’re right, not quite yet. But what Deutsch would say is that there’s nothing in the laws of physics stopping you from doing so.
[01:11:27] Red: Yeah, I think I agree with that.
[01:11:28] Blue: Maybe we just haven’t engineered the right headphones. I don’t know. I mean, is there a human being that can tell the difference? Because I know I can’t tell the difference. If you could somehow black me out and I have to listen, and I don’t know if I’m in an actual symphony or if they put earphones on my ears, I don’t think I could tell, right? Like, I really couldn’t. Is there a human being that’s so good at perceiving this? Maybe there’s a human being out there that’s such a fan of symphonies that they could immediately tell, no, this isn’t the real thing. Well, I think it would be different in some ways.
[01:12:06] Red: It might be better in a way. You might actually hear more clarity in a recording than in an auditorium, in some sense. I mean, I’ve had that experience when I see live classical music that is outside and it’s amplified, which doesn’t typically happen in an auditorium. And suddenly you can hear things in the music that you would never hear, because every individual section is miked, and it actually sounds really good in a way. It’s just different than an auditorium, right? But yeah.
[01:12:39] Blue: OK, so. If someone were to say, yes, a CD is a perfect simulation, yes, you could technically challenge that, but it would still be a true statement. Again, the rationality principle, reading people charitably: what they mean by perfect simulation has meaning in this context, so let’s accept that meaning. That’s really what we’re trying to get at here. You might say, OK, maybe it’s perfect for a human. Maybe we could even make it so that a human can’t tell if they’re in the auditorium listening to the symphony or if they’re listening to a CD, but an elephant could, right? Because the elephant can hear frequencies the CD can’t capture, because the sampling rate wasn’t high enough. OK, but no problem. You’re right that that’s an imperfection in the CD, but we just raise the sampling rate, and eventually we’re going to reach the point where the elephant can’t tell the difference either. So Deutsch’s point is that you are unable to specify any imperfection that we can’t then just simulate away, using a larger computer, using a higher sampling rate, using better computations. That’s what he’s trying to get at. It’s a theoretical idea. And just like the universal Turing machine, it’s not meant to say, I can perfectly simulate this today. It’s meant to say there’s nothing really limiting you from doing so, because of the way the laws of physics are. Now, I did have Andrew Crosha defending Saudi’s view on your Facebook page. He made a claim that I thought was good enough that it probably needs to be addressed. He claimed that there’s an arms race.
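The sampling rate point here is the Nyquist limit, and both the imperfection and the fix can be seen in a short Python sketch. The frequencies are arbitrary illustrative numbers, and this is only a toy demonstration of aliasing, not a model of real CD mastering:

```python
import math

def sample(freq_hz, rate_hz, n=8):
    """First n samples of a unit sine at freq_hz, sampled at rate_hz."""
    return [math.sin(2 * math.pi * freq_hz * k / rate_hz) for k in range(n)]

# At a CD-style 44100 Hz rate, only tones below ~22050 Hz (the Nyquist
# frequency) are captured faithfully. A 30000 Hz tone aliases: it yields
# the same samples (to floating point) as a tone at 30000 - 44100 Hz,
# so the recording cannot distinguish the two.
hi = sample(30000, 44100)
alias = sample(-14100, 44100)
assert all(abs(a - b) < 1e-9 for a, b in zip(hi, alias))

# Raising the sampling rate above 60000 Hz puts 30000 Hz under the new
# Nyquist limit, so a listener who could hear that tone is no longer
# shortchanged: any specified imperfection can be sampled away.
```

This mirrors the elephant argument: each named imperfection is a frequency above the current Nyquist limit, and raising the rate removes it.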
[01:14:18] Blue: So, no matter what level of perfection you specify, there is always some imperfection, even if you don’t know what it is, that could be detected, should you understand how the simulation was made. This is what he claimed to me. Now, is this true? First, let me just say this seems to me to go beyond Deutsch’s purposes. Even if he were right, I don’t think this has anything at all to do with what Deutsch is saying in his paper. I think it’s still just a misunderstanding of Deutsch’s point. However, I am curious whether he’s right or not. And I think he’s still wrong. Here’s my argument why. There’s this idea of the Planck scale, which, as I understand it, and I’m not a physicist, puts a hard limit on an instrument’s ability to perceive things. Now, it’s a really, really fine-grained limit, to be sure. But let’s assume for the sake of argument that we’re taking Deutsch’s view that there’s an omega point computer that exists somewhere in the universe, and so we have no limits to memory or computation that we need to worry about. Because of this, could you simulate, let’s say, an entire universe, down to the atom, that would be indistinguishable from the real universe? And even if it were in some sense imperfect, would it get to the point where your instruments just couldn’t detect the difference? Well, yes, that’s how I understand the Planck scale. Physicists out there, correct me if I’ve got something wrong. So I believe that Andrew’s point might still be invalid because of the Planck scale. Furthermore, this is how I understand the omega point resurrection, right?
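The reply to the arms race claim can be made concrete with a toy measurement function. The numbers are invented for illustration; the point is just "the difference exists, but it sits below anything any instrument can resolve":

```python
def measure(value, resolution):
    """A toy instrument: it can only report values to the nearest
    multiple of its resolution, so anything finer is invisible to it."""
    return round(value / resolution) * resolution

# Hypothetical numbers: the simulated value differs from the real one,
# but by far less than the finest resolution any instrument can reach
# (standing in here for a Planck-scale-style hard limit).
real = 1.00000000012
simulated = 1.00000000047
finest_resolution = 1e-6

# The imperfection exists, yet no measurement can detect it.
assert real != simulated
assert measure(real, finest_resolution) == measure(simulated, finest_resolution)
```

If every possible detector rounds to the same reading, the "arms race" bottoms out: there is no further experiment that separates the simulation from the original.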
[01:16:12] Blue: In the final chapter of The Fabric of Reality, where Deutsch talks about Frank Tipler’s omega point theory, the idea of the omega point resurrection is this. I’m a program running on a meat computer, you know, my brain and my body, my nervous system, and it’s going to die. Then let’s say the omega point is real and they bring me back to life as a simulation. The simulation just has to be indistinguishable from me to be me. That’s the whole point of the omega point resurrection. Even if there’s some little imperfection that can’t be detected, it’s still me for all intents and purposes. In the way Deutsch handled the omega point, he said, well, you could simulate the entire multiverse. You could run through every single universe the multiverse could possibly create, and you could try them all, because you’ve got, you know, eventually infinite computation to deal with. So you could run through all of them and you could save everybody and resurrect all of them. And if, in the limit, that’s just a problem to solve, and the laws of physics don’t stop you from doing an omega point resurrection, then yes, you can resurrect people and you can simulate them quote perfectly, by which we mean indistinguishably from them. And is that what he means, though? Or does he really mean perfectly? I think there’s always an assumption of indistinguishability, because it has to be to some arbitrary level of precision. So what do I mean by indistinguishable? Well, I’m not sure exactly what I mean, right?
[01:17:54] Red: I mean, I interpreted that as really perfectly, like, you are down to the atom doing what’s going on in your brain.
[01:18:05] Blue: OK,
[01:18:05] Red: so
[01:18:06] Blue: let’s say that you can do that, because you can, right? You can, down to the atom, make it the same as what’s going on in the brain. There’s nothing stopping you from doing that, because atoms are quantized, right? Now, if you’re taking Saudi’s position, you would argue here that even simulating down to the atom isn’t quite a perfect simulation, because the atoms might be in slightly different positions. Well, does that make a difference? Not that I know of, right? But it’s not a quote perfect simulation.
[01:18:41] Red: OK,
[01:18:41] Blue: so I think there’s there’s still this implication of it has to be a difference that matters. And if it is a difference that matters, then we can simulate it. And if it’s a difference that doesn’t matter, then we don’t care.
[01:18:52] Red: OK.
[01:18:54] Blue: That’s the end of my explanation for why the continuous versus discrete argument does not have anything meaningful to say about Deutsch’s perfect simulation hypothesis. My feeling is that they’re getting at something that’s true, but irrelevant to what Deutsch was saying. Notice that that’s exactly the same as my argument with Bit Butter, because computationally we’re talking about the same thing here, so it’s kind of the same argument.
[01:19:20] Red: Yeah.
[01:19:21] Blue: All right. Let’s move on to the next question. And it turns out it’s got a little bit of a thread connecting to what we’ve just talked about. In our principle of optimism episode, Vaden asked me: if I knew heat death was true, would I find that to be a bummer? And he argued that it shouldn’t be a bummer, because it’s so far in the future that it just doesn’t matter. I gave him an answer, and I don’t think the answer I gave him in the episode was necessarily a bad answer. But having had time to think about it, and I was glad he raised the question because it forced me to think about it, I want to give him a more thoughtful answer in this section. OK. So first, let me just admit that if heat death is the correct cosmology, then Deutsch’s principle of perfect simulation is actually false. If heat death is the correct cosmology, then it’s not necessarily the case that we will be able to simulate, down to the atom, what the universe is going to be like. Right.
[01:20:32] Red: OK.
[01:20:33] Blue: Let me give you an example. Let’s say: simulate, down to the atom, what a universe that lasts twice as long as ours would be like. If heat death is the correct cosmology, that problem I just laid out is insoluble. Heat death means that at point X in time, the universe stops computing. So I’m asking you to simulate, down to the atom, a universe the same as ours, but that computes to time two X. Well, obviously, I can’t compute that in this universe, because it is twice as many computations as the universe has under heat death. So that would now be something that I cannot simulate. Furthermore, under a heat death cosmology, where we’re assuming there’s a finite amount of computation, it’s not a given that we can explain everything. So this idea that we’re universal explainers, that all problems are soluble, the principle of optimism, that everything is explicable: none of that is true under a heat death cosmology. And the reason why is because there’s an infinite number of things to explain and there’s only a finite amount of computation. So clearly the vast majority, an infinite number of things that could be explained, won’t be explained.
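The insolubility argument here is really just a counting argument, and a sketch makes it concrete. BUDGET is an arbitrary stand-in for the finite total computation a heat death universe allows; nothing here depends on the actual number:

```python
# Toy version of the argument: a heat death universe performs at most
# BUDGET computational steps in its entire history, so any simulation
# that needs more steps than that is insoluble within the universe.
BUDGET = 1_000_000  # arbitrary stand-in for the universe's total computation

def can_simulate(steps_needed, budget=BUDGET):
    return steps_needed <= budget

assert can_simulate(BUDGET) is True       # our own history fits exactly
assert can_simulate(2 * BUDGET) is False  # a universe computing to "two X" cannot fit
```

The same inequality carries the point about explanation: with infinitely many candidate problems and only BUDGET steps, all but finitely many of them go unexplained.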
[01:22:04] Red: Yeah.
[01:22:05] Blue: Or one might even say, in such a heat death universe, for the most part we can’t explain things. That would be a completely true statement. Nor are we at the beginning of infinity in a heat death cosmology, because there is no infinity. Nor can we assume that morality is objective. This is probably a larger discussion than I want to get into, but we will simply not solve all our moral problems, because we’ll eventually reach the end of computation. I would even say that it’s worse than that, and this one is one that I think you could challenge me on a little bit. Under heat death, as computation starts to wind down, you will reach a point where the best way to survive, the only way to survive, is through immoral means, what today we would call immoral means. It’s weird to talk of objective morality if you know that part of your cosmology is that objective morality is wrong for the greater part of time, right? That it’s literally the wrong way to survive. I’m not even sure what calling it objective morality means in a heat death universe. I think you could kind of make a case: well, it’s objective within certain limits, up until we reach the turning point. And maybe I would buy that. So maybe there’s still a very, very limited sense in which morality could be said to be objective. But I’m just trying to say the four strands way of looking at objective morality is not true under a heat death cosmology. Nor can you assume that you can make a better future. I argued in the religion episode
[01:23:54] Blue: that if you’re assuming that heat death is going to wind down the universe at some point, then it may well be the case, and entropy seems like maybe it implies this, though it’s arguable, that it’s literally impossible to make a better future. You will make a better future temporarily, but then you’ll just create more human beings who then suffer more when you reach the turning point, et cetera. Now, again, Vaden could easily argue that one with me. So some of these I’m going to insist on, because they follow from theory: that the universe is not fully explicable, that you cannot do a perfect simulation, and that we’re not at the beginning of infinity. Those follow directly; they are completely ruled out by a heat death cosmology. The others seem suggestive, but they’re not direct theoretical consequences. For example, you might get around the problem of not being able to make a better future by claiming that all of life will commit suicide on day X, which is the day of the turning point. Now, I don’t know how you would do that. That’s why I don’t think you can just make that assumption. In fact, it even seems like it violates evolutionary theory. Even if you got 99 percent of all people to kill themselves on a certain day, because everything’s going to be bad past that point, you would still have the one percent left that didn’t agree, and then they would be the ones that would take over the universe. So I’m a little unclear how you get around some of these implications, but I admit that they’re not rock-solid implications. Does that make sense?
[01:25:29] Red: Yes. So is the basic claim behind, say, the omega point, and maybe other related, more optimistic views of the far future, that human knowledge is almost kind of a force in this universe that could even avoid heat death?
[01:25:58] Blue: Yes. That’s exactly what they’re claiming. Yeah.
[01:26:02] Red: You know, manipulating time. Well, I guess it would have to involve manipulating time in some way, either with the speed of light or, I don’t know, if you had a wormhole. There’s all kinds of different scenarios that might be possible a billion years from now. I mean, the sky’s the limit. But
[01:26:26] Blue: the four strands worldview is actively taking the stance that heat death is just a problem to be solved. Yes, that is the stance that they’re taking. Now, is that true or not? That is the question, right? And that’s probably a discussion for a completely different podcast. We’re at the moment taking Vaden’s thought experiment, which is that we’re assuming heat death is an inevitable consequence. Vaden’s not insisting that’s true. He was very clear he’s not insisting that’s true. He’s simply saying, what if it was true? Would it bum you out? That was the question to me. That was in episode 59, by the way. That’s the roundtable episode where we talked about the principle of optimism. He thought that it shouldn’t matter as long as it was far enough in the future. And I am, as I’ve kind of explained, somewhat sympathetic to what I think he’s really trying to get at. So this got raised again after the religion episode. Somebody on your Facebook page, I think it was Ivan, but I’m not sure I remember, kind of took issue with the way I was describing heat death, as clearly I saw it as somewhat of a bummer, right? Yeah. And just like Vaden, he was kind of arguing, I don’t see why it has to be a bummer, because it’s, you know, so far into the future. And he quoted, or rather paraphrased, Bertrand Russell. At least I think he said it was Bertrand Russell. I was not able to find the actual quote, so I don’t know if this is a correct quote or not.
[01:27:57] Blue: But basically, the paraphrased quote was that if you find the end of mankind from heat death, or whatever, depressing, you should just go get lunch. I’m actually somewhat sympathetic to this argument, right? And to some degree, you kind of have to be. It’s like, you know, why ruin your life over something bad that’s going to happen billions of years in the future? Just go enjoy your life now. That’s kind of what they’re really getting at.
[01:28:30] Red: Yeah.
[01:28:30] Blue: OK. Or maybe they’re even saying something stronger. Maybe they’re saying you still need to live your life, and there’s plenty in life to enjoy, so don’t worry so much about the end of humanity. Even if it’s real, it’s in the far future. Sure, it might be bad, but hey, go enjoy lunch and don’t worry about it. OK. Yeah. And I’m going to grant them their argument. I’m going to say that, subjectively speaking, that’s a good argument for them. And I’m not going to in any way try to talk them out of it. Yeah. Right. It would be dumb for me to try. And it would be wrong, it would be immoral, for me to try. Right.
[01:29:07] Red: Yeah.
[01:29:07] Blue: So here’s the thing, though. Isn’t there a sense in which the answer to Vaden’s question is obviously, yes, heat death is a bummer? Let me explain what I mean. Remember, the assumption here is that we know for sure there’s no way to escape heat death. Just a thought experiment. That’s not true in real life. It’s just a thought experiment, and I like thought experiments. I think people should always take thought experiments offered to them. I think people avoid taking thought experiments because often the thought experiment actually exposes their ideas as wrong, and so they will refuse to take it. I think you should, as a critical rationalist, always be ready to take a thought experiment. There’s never a bad thought experiment, and the reason why is because if the thought experiment is in some way misleading, you can take the thought experiment and then point out what was misleading. You could always do that. So there’s really no reason not to take everyone else’s thought experiments seriously. For example, it’s well known from my Facebook posts that I’m a never-Trumper, OK? So someone might ask me, let’s do a thought experiment: suppose that actually Joe Biden is the devil and he’s going to destroy the universe if we elect him. Are you still sure you won’t vote for Trump? Even as a never-Trumper, I’ve got no problem answering such a completely ridiculous thought experiment. And the answer is, yes, I would vote for Trump. But this isn’t real life, so I do not care about that thought experiment. It’s completely different from real life.
[01:30:41] Blue: And there’s just no reason for you to ever not take a thought experiment.
[01:30:45] Unknown: Right.
[01:30:46] Red: Yeah.
[01:30:46] Blue: OK. So that’s why I’m taking Vaden’s thought experiment very seriously. Because, as a critical rationalist, I understand that thought experiments are totally legitimate. You should take them because they may actually expose flaws in your thinking, or they may expose flaws in the other person’s thinking. So I’m now assuming, for the sake of argument, there is no way to escape heat death. It seems somewhat trivially obvious to me that it is more of a bummer if there is no beginning of infinity, no universal explainers, no infinite progress, no ability to solve all problems, than the opposite of that. I mean, isn’t that kind of obviously more of a bummer than if there is a beginning of infinity, there are universal explainers, there is infinite progress, and we can solve all our problems? This seems so trivially obvious to me that what I really want to say is, yes, of course heat death is a bummer. It is objectively a bummer. You would have to be insane not to see that heat death is a bummer. OK.
[01:31:52] Red: Yeah.
[01:31:52] Blue: Now, of course, when I say that, I can almost hear Vaden in my mind saying, oh, well, yeah, of course it’s a relative bummer, but I meant more like, why get depressed over it all? By the way, notice how charitably I read him there, right? You should always charitably read people. Why get depressed over it all? OK, I will accept that, even though that’s not how he worded it. I will accept that as what he was really asking.
[01:32:16] Red: Sure.
[01:32:17] Blue: So now, if you aren’t depressed over heat death, great. Surely that’s better than being depressed. By the same token, if you’re able to say, oh, forget heat death, let’s go get lunch, then go get lunch. That’s obviously the right choice. Don’t be bummed out by it if you can avoid being bummed out by it. Now, here’s the problem I’ve got with that statement, though. It’s sort of being framed like a universal truth rather than a subjective truth. There’s kind of an implication, like they’re offering advice: oh, you’re bummed out over heat death? Just go get lunch, then you’ll be OK. If that works for them subjectively, great. But I see no particular reason to think it’s a rational response to the problem rather than just a subjective one. So let’s use Tolstoy, who is the one I used in the religion episode, as the example. Tolstoy was deeply disturbed. He didn’t say by heat death, but clearly that was what he was getting at: the scientific cosmology of his time said mankind is coming to an end at some point, and there’s nothing that can be done about that. So everything’s finite; no matter how famous you become, you’ll be forgotten. This is what he actually says. I’m not making this up. He explains in A Confession why it bothered him so much, and why he could not escape from the negative thoughts this was causing in his mind, and how he wanted to kill himself but wasn’t courageous enough, that’s his words, not mine, to follow through with it when he felt stuck in this rational trap of nothing I do matters.
[01:33:59] Blue: This existential crisis, as we call it, that was taking place.
[01:34:02] Red: Yeah.
[01:34:03] Blue: Now, let’s say I’m there and I walk up to him and I go, hey, Tolstoy, go get lunch. Do you think that would be helpful? Do you think it would even count as a rational response to the problem that he’s raising? OK.
[01:34:18] Red: I mean, if you interpret go get lunch charitably, which basically means, well, maybe you should try living in the moment and enjoying your life.
[01:34:27] Blue: Oh, no, he was well aware of that. He talks about that option. Right. Yeah.
[01:34:31] Red: OK.
[01:34:32] Blue: He thought that option through, and it doesn’t work for him. And he explains why, using an analogy. He says life is like you’re running from this beast, and you hide in a well, and the beast is at the top of the well, and there’s a dragon at the bottom of the well waiting to eat you. But you see some honey, and it’s sweet honey, and you can lick it. That was the analogy that he used. And he said, to me, the honey doesn’t taste good because of the circumstance I’m in. He’s directly responding to even the charitable version of go get lunch. He’s saying, I can’t enjoy lunch because of this problem that I see, this existential crisis that I’m experiencing. So I can see how it’s a subjective argument that you can apply to yourself. I don’t see how it’s a rational, objective argument you can give to someone else. Does that make sense?
[01:35:28] Red: It does. Yes.
[01:35:29] Blue: OK. Furthermore, I think it kind of qualifies as too cute, right? And again, I’m giving it as much credit as I can. But it is in some sense not really taking the problem that Tolstoy is raising seriously. It’s just avoiding taking it seriously.
[01:35:46] Red: Yeah,
[01:35:47] Blue: you know, that it’s you’re just totally not helping the person at all. And it’s fine if it works for you. I mean, like a really good response for me subjectively. But it was never something I was worried about to begin with. You know, it’s it’s it’s hard to figure out how the go get lunch response or really meaningfully engages Tolstoy’s concern. And that’s really what I’m trying to get at. Now, you might argue here that Tolstoy was clinically depressed. That is, you’re going to you’re going to reverse the cause and effect. He’s clinically depressed. That’s the real reason why he’s got this existential crisis. It’s not the existential crisis is causing the depression. And you know what? I’m even sympathetic to that argument, right? I don’t think we tend to know what the causes and the effects are when it comes to depression, right? One of the things that Utah has a very high suicide rate and people will often bring that up that there’s this quote problem with Utah, because it has this high suicide rate. Well, it’s actually the Rocky Mountains. The entire Rocky Mountain Belt has a high suicide rate. And they’ve done studies that again, everything’s conjectural, but have suggested that high altitude actually is a causal factor in suicides. Nobody wants to believe that. They always try to accredit it to, you know, religion or, you know, they want some sort of explanation that is human consumable, not the suicide rate was raised because of high altitudes. And moreover, nobody ever writes a suicide note. I can’t take the high altitude, right? I mean, it just that’s not the way human psyches work.
[01:37:25] Blue: Like if you are at an increased risk of suicide due to being at a high altitude, you’re still going to have some sort of proximate psychological cause in your head. And it may not be real or may it may even be real. But it’s not ultimately the real cause, if that makes any sense, right? So so we should be a little skeptical that just because Tolstoy is reporting it as the existential crisis is causing this depression for him, that therefore that’s true. Here’s the thing, though. Tolstoy, you have to at least take Tolstoy at his word for his history and his history was, according to a confession, that he then became a Christian and this solved his problem and he stopped being depressed. Well, if he’s clinically depressed, that doesn’t make sense, right? So I think we need to at least consider the possibility that Tolstoy was in fact depressed because he was correctly understanding the cosmology of his day. And that that was, in fact, a causal factor or even the causal factor in the way the negative feelings he had. OK. I don’t know that for sure, but I think I’ve got a good explanation for why that really is the best explanation. OK. The issue then could be this could be said like this. Tolstoy’s problem was he could not rationally figure out how to wrap his mind around heat death in a helpful way. And no one else knew how to explain it to him to do so.
[01:38:57] Red: Yeah,
[01:38:58] Blue: they only had, in essence, two cute answers. They would tell him various forms of go get lunch or go have sex with your wife and enjoy life, right? And he knew that like he had done that for years, right? And he had reached a point where no matter how he talks about this very eloquently, because of course, this is a famous writer about how he could not see how to enjoy those things anymore that they used to work for him and they were ashes in his mouth and he just he could not deal with it anymore because he needed some sort of rational solution to what he saw as a completely legitimate rational problem.
[01:39:35] Red: Yeah. And, you know, to be I can only imagine that he was probably not just reacting to something that’s happening that may happen billions of years in the future after he’s dead. But that just a general view of life that it’s governed by this second law of thermodynamics.
[01:40:00] Unknown: Right.
[01:40:00] Red: Right. He said things
[01:40:02] Blue: like if I were twice as famous as I was today, it wouldn’t make any actual difference, right? I would still be forgotten.
[01:40:09] Red: Yeah.
[01:40:09] Blue: Right.
[01:40:10] Unknown: And
[01:40:10] Blue: that’s a true statement. Right. And assuming he death is real, that’s a true statement. Yeah.
[01:40:15] Unknown: Right.
[01:40:15] Blue: So he doesn’t seem to have had concerns with something irrational. It seems like he was correctly engaging the the best theories available to him of his day in a rational, completely correct, critical, rationalist manner. OK, that’s the point I want to make here. OK.
[01:40:32] Unknown: Yeah.
[01:40:33] Red: I mean, it is kind of depressing in a way, if you just dwell on it. If you accept the second law of thermodynamics as the best explanation for reality that humans have developed, which is maybe questionable, but if you do look at things from that perspective, I can see how you could get quite depressed. Yes.
[01:40:58] Blue: Yeah. Even if I don't feel his depression, I can rationally see why it would depress him, right? It's a totally rational argument that he's making. OK. Sure. And it's not like he's claiming other people should be depressed like him. He simply didn't know how to not be depressed over what he saw as the complete, ultimate meaninglessness of his life, right?
[01:41:25] Red: Yeah.
[01:41:26] Blue: And because of what he understood the best scientific theories of his day to say. So let me put it this way: it does not seem irrational to me that Tolstoy was depressed over his pessimistic worldview. So I would flip this to Vaden, and I would say: Vaden, what would you rationally tell Tolstoy that would snap him out of his depression, given the assumption that heat death is, in fact, a given? "Well, it's a long ways off, so don't worry about it." Is that a good, rational response to his concern? It does not seem to me that it is. It does not seem to me that it even engages the rational concern that Tolstoy is bringing up. OK, do you see what I'm trying to get at here?
[01:42:19] Red: Oh, yeah.
[01:42:20] Blue: OK. So my view is this: Tolstoy isn't being irrational here. There is nothing you can tell him that is rational and true, given the assumption that heat death is true, because he is rationally correct in his concern under this cosmology. Now, I do accept that most people will not be bothered by it that much. And I'm glad; that's a good thing. I think you could even throw out an evolutionary psychology explanation here. If human beings in general were able to reason themselves into an existential crisis like Tolstoy, then their genes would be rooted out of the population. And we would sort of expect that the vast majority of human beings would just not be able to wrap their minds around why Tolstoy was so depressed over this. Even if they can rationally understand it, they just can't feel it, right? It's something they just can't feel. All right. But the reason most people don't feel it: is that rational, or is it actually an immunity to rationality? It seems like that's the question you should really be asking.
[01:43:30] Red: OK.
[01:43:31] Blue: So the reason Vaden gives is that it's a long ways off. OK, but try to explain to me rationally why that matters. Really take that seriously. This is philosophy at its best, right? Think through: why does "it's a long ways off" matter? Yes, it matters. Why does it matter? Explain it rationally to me. Now, I have a common answer here, and it's something like this: by the time the bad stuff happens due to heat death, I'll be long dead, and so will everyone I know and love, and everyone they know and love. So I'll just enjoy the, you know, the honey or lunch or whatever of this life right now. And again, subjectively, that strikes me as a good answer. OK, but I'm not sure I see how that's a rational answer to Tolstoy. And to help you understand why I don't see it as a rational answer to Tolstoy, let me give you what I call the dying earth scenario, which I came up with to try to figure this out myself, right? So what we're going to do is basically make heat death happen sooner. We're going to construct a science fiction scenario that's analogous to heat death, not the same as heat death, but analogous to it, and we want to work out the implications of this scenario. So here's the scenario. We just discovered that there's something wrong with the sun. It's going to slowly go out. It's just not going to shine so brightly, and it's going to get less and less bright over time. OK.
[01:45:03] Blue: And based on the way they discovered this, and this is a science fiction scenario, so I don't specify what it is, technobabble here, there is no hope of saving humanity. The sun is going out, and it's happening too fast for us to realistically create the knowledge necessary to get out of the solar system and get to other stars. And if you want to insist you can get out of the solar system, then OK, every sun is now going out. It's new laws of physics. The whole point is to make a thought experiment, not for it to be realistic. So there's no way to save humanity at the current level of knowledge. So at this point, you basically know that your life will be theoretically unimpacted by this. It's happening too slowly; you won't actually notice any real difference in your lifetime. Your children won't notice any difference in their lifetime. Your grandchildren will not notice any difference in their lifetime. But after that, differences will start to be noticed. So I've intentionally picked this so that, given a normal life span, no one you know and love will be impacted, but humanity as a whole will very soon, within 100-some-odd years, be impacted. So it's the heat death scenario, but collapsed to an amount of time you can now comprehend. OK, you're with me so far?
[01:46:32] Red: Yeah. And it seems to me most there’s a lot of people out there who feel very similarly about climate change. Yes. Different kinds of environmental issues.
[01:46:43] Blue: I would argue that for most of the people who claim they're concerned about that, it's just pure performance. Whereas in this scenario, it will not be performative. It will be real.
[01:46:56] Red: I think you might be right to some degree, for people who are our age, who've seen these things play out over decades. But I don't know. There might be cases, like teenagers, who really believe it.
[01:47:09] Red: You know, putting myself in the perspective of a teenager who grew up sort of immersed in this kind of alarmism, I mean, I think there might be a lot of little Tolstoys out there.
[01:47:24] Blue: Yeah. OK, I'll buy that. I think it's hard for us to tell the performative from the real, but I think it can be told. Actually, we should do a climate change episode. I think it's actually possible to tell that most of them are performative, because there are actual different predictions as to how they would act if it's performative versus not. And for the vast majority of them, it's performative. The main argument I would use is the SuperFreakonomics argument that you could put ash in the air.
[01:47:57] Red: Yeah.
[01:47:57] Blue: If you were to go to Greta Thunberg and say, hey, Greta, you know what? The earth isn't actually going to die in a few generations, because we know we can put ash in the air and stop global warming. And yes, that will lead to problems. It will lead to problems where the sea becomes too acidic, so then we'll probably need to do something about that, so we can put base into the sea and keep life from dying out. And that may lead to new problems, but we can solve those problems. But guess what, Greta: we know we are not in danger of going extinct. Do you think Greta would take that as a positive? I don't think so. I think you would immediately see a negative reaction, where she would get angry with you and yell at you. And I think that reaction is drastically different from a person who is not being performative, who would go, "Oh, thank goodness. I didn't think of it that way." OK.
[01:48:58] Red: Well, to really steelman her position, though, which I think is a position a lot of people hold these days, I think they would be very skeptical of those kinds of solutions, unfairly, I think, but they would be, because part of their whole worldview is what I think Alex Epstein calls the fragile earth hypothesis: that the only way for humans, for a species, to survive long term is to somehow live in harmony with nature, right? And those kinds of interventions just would not go along with that. And of course, there are all kinds of problems with that.
[01:49:45] Blue: Well, let me even grant you that. OK. So Greta comes back and she says that. And for the sake of argument, let's assume that I don't have an argument ready to explain to her why that's wrong.
[01:49:55] Red: OK.
[01:49:56] Blue: OK. But Greta, that means we have a lot longer to solve this problem than you're assuming. So there's not really anything to worry about yet.
[01:50:06] Red: Yeah.
[01:50:07] Blue: OK. How would she react to that? Right. I promise you, her reaction would be performative, not real.
[01:50:15] Red: Yeah.
[01:50:16] Blue: OK. And so I don't think it's that hard to tell whether someone is being performative versus having a real concern.
[01:50:22] Red: OK.
[01:50:22] Blue: OK, let’s get back to my dying earth scenario.
[01:50:25] Red: OK.
[01:50:25] Blue: So let's pretend this is reality: we've basically got three, maybe four, happy generations to go. And after that, things are just very slowly going to keep getting worse. And in some reasonable amount of time, the only way you're going to be able to survive is by killing each other. Right. How would that impact you today? Now, remember, no one you know and love is impacted directly. OK. Now, I don't know; everybody's going to be different. There is a subjective quality to this. But I know for a fact that I would go into a massive depression right now if I knew this to be a fact. And it's not even hard to see why I would, right? It's just a rational response. I would definitely never have kids. Now, obviously, I've already had all my kids, but let's say I was young enough. I would refuse to have kids. I would not be willing to put anyone in that circumstance. OK. Yeah. The fact that it's, quote, a long way off, which in this case means a hundred years, something I can wrap my mind around, versus a billion years, which I can't wrap my mind around, does not seem to matter in the slightest. It seems to me the fact that I'm not directly impacted doesn't change the fact that I am impacted, and that this is a completely rational response on my part. I do not see how my lunch could be anything but ruined at this point.
[01:51:49] Blue: OK, I would still go eat lunch, but it would be really hard to enjoy under this scenario, because the time horizon of humanity is now finite, something I can understand.
[01:52:00] Blue: So I suspect I'm not out of the ordinary here.
[01:52:03] Blue: I suspect that I am not the only human being who, even though in theory I'm not impacted by this and no one I love is impacted by this, would nonetheless go into an immediate depression. You might even start seeing people commit suicide because they just can't bear to live with this. Now, what I want to ask is this: why does collapsing the time frame make a difference? And I admit it's true. I admit that at a billion years, it doesn't bother me that much, but at 100 years, it basically ruins my life. OK, nothing else in the scenario changes, just how far off the heat death is. Why is that? Like, actually try to explain to me rationally why collapsing the time frame makes a difference. I honestly can't think of anything. I really can't think of anything. In a certain sense, a billion years is infinitely close to zero, just like a hundred years is. All finite numbers are infinitely closer to zero than they are to infinity. We often think of a billion years as practically forever, and I know emotionally that's how I see it. I can't talk myself out of it. The very fact that it's billions of years into the future, I just am wired to emotionally understand it as practically forever, right? Even though rationally, I know it is in no sense, not any sense at all, practically forever.
[01:53:40] Red: Yeah.
[01:53:40] Blue: OK. And I don't think I could explain it. It really does seem, at least to me, and I will answer only for myself, although I suspect I'm not abnormal here, that my lack of depression over a guarantee of heat death, where I admit I probably would just go enjoy lunch, right, is an irrational lack of depression. And I'm not sure how to explain it other than that, right? It really looks to me like Tolstoy understood this correctly and I didn't. Yeah. And that seems like the answer to me. So it's not clear to me that my immunity to heat death being a bummer is rational. It really seems to me like Tolstoy is simply being rational and I'm being irrational, that my emotions are overcoming my rationality. Thank goodness, by the way. And this is the kind of strange thing you get in a heat death universe, right? The whole idea that we prefer truth over untruth isn't true in a heat death universe. There are many things, like being irrational and having an emotional detachment from what happens over a billion years, that are a good thing in a heat death universe, right? There's no particular reason to believe that we shouldn't. And Vaden brought this up too, and he said, aren't you really just therefore saying we should all just not even seek truth, and that we should just go with whatever makes us feel good? Well, keep in mind, I'm not saying that. I'm saying in a heat death universe, that's an implication. OK, and I'm not saying the heat death universe is correct, right?
[01:55:20] Blue: But yeah, that is an implication of a heat death universe. And if Vaden thinks there is some way around that, I would like to see him try to argue it. Vaden advances himself as a value subjectivist. How would you go about telling someone who's, you know, meditating and enjoying their life, even though the world's going to end in 100 years, how would you even go about trying to explain to them why they should, in fact, come out of their delusional view that's making them happy and take on the depressing real-life view? If values are just subjective, I honestly don't see how you make that argument. It seems like you're kind of done. This is just an implication of subjective value theory. I don't know how you get around it. I'm not saying I believe it; I'm saying it's an implication of subjective value theory, and that follows naturally from a heat death cosmology. So this is really, I think, my answer to Vaden about whether I find it a bummer. I think the answer is, no, I would just go enjoy lunch, right? But I don't think that's really an honest, rational answer. I think that in some sense it's a subjective, irrational answer, although it's the truth. And I can see where Tolstoy is coming from, right? I can see why Tolstoy was in some sense more rational than me when he let his life be ruined over what he saw as an inevitable end of humankind.
[01:56:54] Red: OK, well, it's an intriguing issue. I think that probably 99.99 percent of the population would be more with Vaden on that. But I agree, I think.
[01:57:16] Blue: Let's take my thought experiment seriously, though. What if you knew that that point would be reached in three or four generations? Would 99.9 percent of humanity still agree with Vaden? Would even Vaden still agree with Vaden? My guess is no. Right. I think collapsing the time frame immediately snaps you into why this is such a bad thing.
[01:57:39] Red: Yeah. Well, hopefully this is relevant; I can cut it out if it's not. Here's what Conan the Barbarian would say.
[01:57:52] Blue: Oh, I love this. I know what you're going to use.
[01:57:53] Red: "I have known many gods. He who denies them is as blind as he who trusts them too deeply. I seek not beyond death. It may be the blackness averred by the Nemedian skeptics, or Crom's realm of ice and cloud, or the snowy plains and vaulted halls of the Nordheimer's Valhalla. I know not, nor do I care. Let me live deep while I live; let me know the rich juices of red meat and stinging wine on my palate, the hot embrace of white arms, the mad exultation of battle when the blue blades flame and crimson, and I am content. Let teachers and priests and philosophers brood over questions of reality and illusion. I know this: if life is illusion, then I am no less an illusion, and being thus, the illusion is real to me. I live, I burn with life, I love, I slay, and am content."
[01:58:52] Blue: I wrote a blog post back in my religious bloggers day about that quote.
[01:58:56] Red: Oh, you did. Yes. That’s hilarious.
[01:58:59] Blue: You know, it's interesting, because I actually think that in some sense, in a heat death universe, that is absolutely the right attitude. I think it is a totally spot-on attitude. However, I do want to point out, as I did in my blog post, that Robert E. Howard committed suicide, precisely because he could not take this attitude. You know, this is a tough question.
[01:59:22] Red: He may have been arguing with himself a bit.
[01:59:24] Blue: He was arguing with himself. Just like the stories of H.P. Lovecraft were Lovecraft trying to come to grips with his own pessimistic worldview, the stories of Conan were Robert E. Howard trying to come to grips with his pessimistic worldview. And he was unsuccessful at it, right? He was trying to see life in a certain way. It made sense. It was the subjective "I'm just going to enjoy lunch, I'm going to enjoy the honey" approach, and he ultimately couldn't do it. At the end of the day, the philosophical argument that Conan makes fun of overcame the author of Conan. And that's the truth, right? And that was what I wrote about in my blog post. But I think that quote is lovely. It represents a certain, granted, pessimistic worldview, but it's an optimistic attempt to deal with an extremely pessimistic worldview. And I thought it was beautiful. I thought it was lovely, in fact. I thought in many ways it was much better than the Lovecraft version, which was: we're all just doomed, you know.
[02:00:35] Red: Yeah.
[02:00:38] Blue: Oh, I love Lovecraft. He's one of my favorite authors; I'm totally Lovecraftian, right? Yeah. And I'm very intrigued with just how seriously he took his pessimistic worldview and the various ways he tried to overcome it. I wrote a religious blog post about Lovecraft too; the Conan one came later because it was related. People don't know this, but Conan and Lovecraft share the same universe. The Conan the Barbarian series takes place within the Lovecraft mythos, at least loosely, and it's not an accident that it did. The two were close friends, right? Yeah. They were doing this on purpose. In a lot of the original Conan stories, he actually used the Lovecraft mythos names, but before publication he decided to pull them out and be a little more generic. So that's that.
[02:01:30] Red: Right.
[02:01:30] Blue: Yeah. So they were clearly intended to be Lovecraft mythos monsters, but he decided not to use any of the actual Lovecraft names that would make it super clear it was a Lovecraft monster. But when you read Conan, he has to encounter two kinds of enemies: the human enemies and the Lovecraft monsters. And he overcomes both ultimately, which is a more optimistic worldview than Lovecraft allowed, right? But when Conan deals with human enemies, he almost always defeats them easily, whereas when he deals with a Lovecraft enemy, he almost pays with his life. OK. So there's still this message that the Lovecraftian monsters can overcome even someone as exceptional as Conan, that they're always nipping at his heels and may actually defeat him, right? And again, this is very intentional, right? I mean, there's a literal philosophical message being stated here, which is why I found it so fascinating, right?
[02:02:38] Red: Well, that's an intriguing take. Maybe we should do a podcast on Conan.
[02:02:45] Blue: So yes, I think that was a lovely quote to end on. I think that quote was perfect, in fact. I think the Conan approach to life is the right approach to life in a heat death cosmology.
[02:02:55] Red: OK.
[02:02:56] Blue: And also I find it rather a bummer.
[02:03:15] Red: Well, at that point, you know, when Tolstoy was living, it must have been very difficult, if not impossible, to even conceive of a technological solution to heat death. And granted, it's pretty difficult now. But, I mean, if we're existing in a post-biological state, trying to think this through, and the processing speed our brains run on is sped up, then in a sense time would be slowing down, right?
[02:03:46] Unknown: Right.
[02:03:47] Red: Well, is it conceivable that it could speed up to the speed of light, where we're existing as entities going the speed of light, and then time is not passing? Does that make any sense?
[02:04:03] Blue: Yes, it does. So there are various ways that people try to escape the implications of heat death. You pointed me to Bobby Azarian's book, and I haven't finished it yet, but I think he's got the most original take I've ever heard of. Let me put it that way. I've
[02:04:18] Red: OK.
[02:04:18] Blue: intentionally tried to research this precisely because I find heat death a bummer. I intentionally tried to confront it and to see whether it's actually a law or not. That was one of the reasons I found Deutsch so intriguing, and the Omega Point so intriguing, and Julian Barbour, who we interviewed on this podcast, so intriguing. I mean, there are a number of totally legitimate scientists out there who just do not accept heat death.
[02:04:45] Red: Yeah.
[02:04:46] Blue: And I think all cosmologies suffer from the problem that they're highly speculative, right? That's fair. And yet, at the same time, I can also see that heat death does seem like it naturally follows from the second law of thermodynamics. And certainly the theories that get around it do so with great cleverness, not by simply pointing out that the original argument was just plain wrong, if that makes any sense.
[02:05:18] Red: Well, how about eternal inflation? Isn't that something a lot of cosmologists believe in? Does that get around heat death?
[02:05:28] Blue: It does not, no. Deutsch raised the idea of using dark energy.
[02:05:33] Red: OK.
[02:05:33] Blue: So that would get around heat death.
[02:05:36] Red: OK.
[02:05:36] Blue: And that would be a consequence of eternal inflation. But eternal inflation on its own would not do any good unless you could somehow harness dark energy, which our best theories say you can't, by the way.
[02:05:50] Red: So I mean, we can talk about whether that’s a law of physics or if that’s just a problem to be engineered, right? OK.
[02:05:58] Blue: Heat death has always been something I’m bummed out about. And I don’t accept it, right? I do not accept that heat death is the correct cosmology.
[02:06:07] Red: OK.
[02:06:08] Blue: But it would be interesting to talk about the various implications if it were true, versus the ways around it. And it's always been well known that it's not a hard-and-fast law. I think I've said this in a past podcast, but the law of entropy doesn't say you will end in heat death. It actually says entropy must increase. Well, if you could increase entropy forever, then you would never reach heat death. So there's a kind of well-known way around heat death that all physicists know about. So its status is a little weird. It's sort of accepted as a best theory, and in a certain sense it is a best theory, but it hasn't been strongly challenged like it should have been. And it's not like it's a testable theory, right?
[02:07:04] Red: Yeah. And as I was saying before, the more out-there theories would almost treat human knowledge as something more like a law of physics, something that could potentially defeat heat death, and entropy, in one way or another.
[02:07:23] Blue: Just briefly: the Omega Point overcomes it through a collapse where you're allowed to create an infinity of entropy while also holding on to a growing amount of order, because the resources grow forever. The dark energy model tries to get around it by creating a source of energy that never runs out, so that entropy can grow forever but you never run out of free energy. Julian Barbour tries to get around it through his theory of "extra P." This is interesting because it's different from the way Deutsch went about it. The idea is that we've misunderstood the nature of entropy. It doesn't actually exist, because it assumes you reach a state where you can't use the differences that exist. In fact, you can always use the differences that exist; it's just a matter of having the right knowledge. So Julian Barbour calls it "extra P" instead. Keep in mind, I'm not a physicist; my apologies if I'm mis-explaining your pet theory out there. This is just what I've learned so far from reading up, or from interviewing Julian Barbour, or whatever. Right. So there's also the theory that you move to a new universe: you create a new universe for yourself and move into it. You basically use up one universe's entropy, so entropy always grows and you destroy one universe at a time, but there's always a new universe for you to live in. I mean, there are various science fiction attempts to overcome entropy. And I think you can take two stances.
[02:09:01] Blue: You could take the stance that we have no theory today that says any of these are possible, which would be the more pessimistic and maybe, to some degree, more rational stance. Or you could take the stance: look, we're at the edge of our knowledge here; we're literally making assumptions based on stuff we barely understand, so all of these, and maybe others we haven't even thought of, are completely valid theories, and there's no particular reason we should take heat death seriously today. And I guess I fall into that camp, right? I can see the rationality of heat death; I'm not denying it. But it does seem like we're making assumptions that are maybe premature when we assume it.
[02:09:45] Red: OK, well, on that optimistic note, we can wrap this up. I think we could do another episode solely on heat death; I sense it may be a two-parter. OK, well, thank you, Bruce. This has been wonderful as always, and I've learned a lot.
[02:10:06] Blue: All right, thank you very much. The Theory of Anything podcast could use your help. We have a small but loyal audience, and we'd like to get the word out about the podcast so others can enjoy it as well. To the best of our knowledge, we're the only podcast that covers all four strands of David Deutsch's philosophy, as well as other interesting subjects. If you're enjoying this podcast, please give us a five-star rating on Apple Podcasts. This can usually be done right inside your podcast player, or you can Google "the theory of anything podcast Apple" or something like that. Some players have their own rating system, and giving us a five-star rating on any rating system would be helpful. If you enjoy a particular episode, please consider tweeting about us or linking to us on Facebook or other social media to help get the word out. If you are interested in financially supporting the podcast, we have two ways to do that. The first is via our podcast host site, Anchor. Just go to anchor.fm slash four dash strands, F-O-U-R dash S-T-R-A-N-D-S. There's a support button available that allows you to make recurring donations. If you want to make a one-time donation, go to our blog at fourstrands.org. There is a donation button there that uses PayPal. Thank you.