Episode 54: Computational and Explanatory Universality (IQ part 2)

  • Links to this episode: Spotify / Apple Podcasts
  • This transcript was generated with AI using PodcastTranscriptor.
  • Unofficial AI-generated transcripts. These may contain mistakes. Please check against the actual podcast.
  • Speakers are denoted as color names.

Transcript

[00:00:10]  Blue: Right. Welcome to the Theory of Anything podcast. How's it going? Hey, Bruce. Hey, Peter. Just me and Peter today. So we're going to continue our discussion about IQ and universality. And today we're going to talk about Dwarkesh Patel. I do not think I'm pronouncing his name right, by the way, but "Dwarkesh," I think, maybe is the right way to pronounce it. Patel and his theory around the scaling hypothesis. And we're going to talk about, spoiler, why it's not a very good theory. Although he admits this, which I think is really good. And I've actually got many, many positive things to say about Dwarkesh and the way he went about handling this, but we are going to give it the critical rationalist treatment today. We're going to talk through what's wrong with the theory that he is proposing. At the same time, Dwarkesh has this theory called the scaling hypothesis, and it's actually two theories that he's combining: the hardware hypothesis and the scaling hypothesis. But those are on top of the prevailing theory, which could probably be described as: there is such a thing as intelligence, some people have more of it and some people have less of it, and if you have more, you gain knowledge faster. That theory we're going to probably define a little better too, and how it differs from Patel's theory. And then I think finally Peter has some questions for me about maybe trying to understand a steel-man version of Brett's theory better. So we're going to talk about all three of those things today.

[00:01:30]  Blue: And then we're going to save our critical rationalist treatment of Brett's theory for next time, because that probably deserves an episode all by itself. So let me actually just explain something. In the last episode, I mentioned that I'm going to call this Brett's theory. And I said, hopefully we can all understand that that's shorthand for the theory, it's just that I'm working off of Brett's tweet. I have felt since then that that didn't really get to the heart of my concern. So I'm going to express my actual concern here. My concern isn't that I think Brett is differing from Deutsch on this. I actually think it's highly plausible that Brett is properly explaining Deutsch's theory and that therefore this is actually Deutsch's theory. The problem is that David Deutsch has never in public, at least that I'm aware of, said the things that Brett said, and it seems just morally wrong for me to attribute to David Deutsch a theory based on something a big fan of David Deutsch said about Deutsch's theory. Therefore, just morally speaking, it makes sense for me to call it Brett's theory until such time as David Deutsch says, "Yes, I agree with what Brett is saying, I mean the same thing," or he comes out and says otherwise. It's not that I actually have doubts about Brett. I think Brett is in good faith understanding David Deutsch's theory, probably correctly. This is probably David Deutsch's theory, but it just feels wrong to call it David Deutsch's theory when I know plain well I'm actually responding to what Brett said. Does that make sense? I mean, it's simple. I don't mean anything bad by it.

[00:03:04]  Blue: It's just it feels wrong to call it David Deutsch's theory when I know I'm using Brett's words.

[00:03:09]  Red: Makes sense to me. I should say I feel a trepidation on my part to plunge into criticizing, you know, it's not a bad thing, of course, critical rationalism, Brett's ideas, when, you know, I think I've probably listened to just about every single episode of the ToKCast podcast and I am just such a fan of his stuff. The way he expresses himself, I mean, it's aha moment after aha moment. I just love his stuff. This is kind of like just the

[00:03:41]  Blue: one area

[00:03:42]  Red: of the whole Deutsch-Brett-Hall worldview where I just kind of like, I don't know, I just… Right, right.

[00:03:51]  Blue: You know, you’re not alone

[00:03:52]  Red: in that feeling, right? You really aren't. Yeah, I know. I know I'm not.

[00:03:56]  Blue: There's actually a handful of things that I've talked about on this podcast at various times where I buy into the whole four strands worldview. I've done my best to criticize those theories. I feel that I have found that they don't have competitors. I haven't gone into, I mean, I've explained a lot about Popper's theory. I've explained a lot about how I feel David Deutsch is slightly different than Popper's theory. And in some cases I prefer Popper's theory and in some cases I prefer David Deutsch's theory, which means I actually sort of have my own theory that's based on both of those two men, if that makes any sense.

[00:04:32]  Red: This

[00:04:32]  Blue: is the way theories work, by the way, not that big of a deal. We do our best to try to make sense of what people are saying. We have to interpret it for ourselves and recreate the knowledge in our own minds. And yet there's a handful of things that I really, really feel they just get wrong. And this is one of them. And we can talk about that. You say this is like maybe your main one. And when I talk with people about this, a lot of times these are hanging-up points for people, right? Like, I really want to endorse the David Deutsch worldview. I love how optimistic it is. I really like where he's going with things. But I just plain can't accept X. And I get told this because I'm someone willing to challenge anything. I have people come to me and say this on a regular basis. And there is something to this. If what they're saying is true, then I want to embrace it. I want to understand what it is that's wrong with my thinking. I want to understand why it is that it seems to me that some of these things deserve criticism and in fact have been refuted by criticism. Or I think that they should be removed from the worldview because they're wrong. I think one of those two has to be true, right? Either I'm right that they're wrong or I'm wrong that they're wrong. There's no other really logical option available. And I think that's how people see it, right? They're not quite sure what to do. How do you embrace this worldview?

[00:05:52]  Blue: When it says something that you just cannot agree with, it just seems to clash with your experience. In a lot of ways, that’s a lot of what I’m trying to do on this podcast. I’m struggling with those issues. I’m trying to talk them through. And I feel like I’m coming up with things where I’m getting something better that I can’t agree with what Brett’s saying on this, but I’ve got something that’s better, right? It still embraces the core ideology that underlies the four strands worldview.

[00:06:19]  Red: Yeah. Yeah.

[00:06:20]  Blue: And that's really what I'm attempting to do. But I do think that requires us to be very upfront with our criticisms and to accept, okay, we are going to criticize even David Deutsch's theories. We are going to criticize them. We're going to try to find counterexamples to what he's saying. And that really is the attitude you have to have. The moment you start taking the attitude of, I'm going to be a defender of his theory. You know what? I don't even think that's wrong, because we just talked about how dogmatism isn't wrong. But you're not going to actually be the one to help solve the problems of the theory if you can't even acknowledge that the problems exist. What you're actually going to do is you're going to be the dogmatic person who forces somebody else to solve the problem, which has its role. That's why defenders have their role. That's not who I want to be. I want to be the one who actually thinks this through, says, wait, there is a problem here. How can we solve that problem? Because that's what actually gets me excited and gets me up in the morning. And I just can't ever be the guy who becomes the apologist for the worldview. I think that's the right attitude in this life.

[00:07:22]  Red: You know, it might not always make you the

[00:07:23]  Blue: most popular person

[00:07:24]  Red: in the world, but you know. No.

[00:07:27]  Blue: All right. Let's first talk about the theory of IQ, because I feel like we've never really talked about the theory of IQ. In our original version of this, which we didn't end up recording or lost, we did talk about the theory of IQ, and we didn't do that last time. So let's just kind of talk this through quickly so that we really know, when we talk about the theory of IQ or theory of intelligence or whatever you want to call it, what are we talking about? So it's supposed to be, it purports to be, a measure of how much knowledge a person has gained by a certain age. If you're age 10, there's an average amount of knowledge you've taken in. Some people have taken in more and some people have taken in less, according to the theory, I'm saying. Do you say it's more about knowledge or capability? Now, obviously these relate, but the IQ test itself, or at least the original IQ tests, because there were multiple of them and they weren't necessarily even compatible, were attempting to measure your general knowledge. Okay, they were trying to say, have you picked up this by now? Because the average 12-year-old knows that, but the average 10-year-old doesn't. Okay, that's what they're trying to do, for better or for worse. Right or wrong, that's the theory that they're trying to deal with. Now, we intuitively, granted this is what Brett's trying to challenge, but we intuitively have a feeling that some people do not pick up knowledge as fast as others, that some people are, quote, slower. This is particularly obvious in the case of someone with, like, Down syndrome, where there's a genetic component.

[00:09:07]  Blue: We know the mechanics as to what has gone wrong genetically. We understand why that has affected the person, and we can see that every single person with Down syndrome has a lower IQ. They've got less knowledge by a certain age than somebody else of an equivalent age, to the point where my neighbor, who's the one we keep using as an example, can't take care of himself. Knowledge acquisition was so slow that by, I don't know what age he is, but he's either like late 20s or early 30s or something like that, he's still very clearly operating less well than a seven-year-old.

[00:09:41]  Red: Well, as I mentioned before, I have 17 years of experience as a special ed teacher working with the full range of disabilities and capabilities, from kids who are probably on the road to making $200,000 as a computer programmer but might have some social difficulties, to quite low-functioning students. I've got just a lot of real-world experience, even outside of school, that seems to suggest, I'm not just talking about coercively either. I'm not just talking about trying to change people's brains coercively. I'm just talking about people wanting to change themselves. And I see that within my own self too. It's really hard to… Maybe it is just having… I do think at some level it is just having the right theory. I mean, that makes a lot of theoretical sense to me, but easier said than done.

[00:10:37]  Blue: Yes. So the example I've used of this in the past has been the Westermarck effect. Of course, that was dealing not with IQ or intelligence, but with desires. I have argued that the Deutsch-worldview claim that desires are not affected by genes is false. And I think that one is just true. We've got overwhelming evidence that one is just true, by which I mean, we've refuted the theory. The theory is refuted. And there's great theories that can explain this. So the Westermarck effect is this idea that if you raise children together, their sexual attraction shuts off towards each other. By the way, it also applies parent to child. So if you've raised a child from a certain age, or if you have been raised by a parent from a certain age, then the sexual attraction has been muted. It actually doesn't shut it off, according to the theory. And there's actually a great deal of controversy as to how powerful the effect is. It may not be very powerful. But I've argued it doesn't have to be, because genes take advantage of the fact that memes exist. If you have a gene that makes some segment of the population feel like having sex with your sibling is gross, that's going to create a meme that there's a social taboo against having sex with your siblings. That meme may do all the heavy lifting, right? It doesn't have to be that the genes do all the heavy lifting. They just have to be aware, through trial and error, through variation and selection, that these memes have come into existence and can be taken advantage of.

[00:12:08]  Blue: But they can create that meme by creating some segment of the population that's just a little bit grossed out over the idea of sexual attraction towards your sibling. Is this a testable theory? It is. And it's been tested. Okay, not intentionally. That would probably be unethical, but there are natural experiments that exist. If we separate children at birth, then bring them together later, and they know that they are brothers and sisters, they now have the meme that it's gross to feel attracted to your brother or sister. And they don't have the Westermarck effect, the genetic side. What they end up doing is they end up feeling attracted to their brother or sister and feeling shame over it. You can see both the meme at work and the gene. You can't explain it without having both explanations. Now, I have had at least one Deutschian say to me, well, it could be another meme. No, they're misunderstanding. They're not doing critical rationalism when they say that, because that's a non-explanation. They need to propose what the other meme is and make it testable. If they haven't done that, then it's just an ad hoc save; it doesn't count. At this point, we have a best theory that the Westermarck effect is a real genetic effect. Furthermore, even if you could say it was another meme, you would have to still explain why it just so happens that it serves the genes. You would then be leaving that unexplained if you didn't explain that the gene somehow caused it. This is the way we want to apply critical rationalism to these questions. We look at that.

[00:13:32]  Blue: We now have this good explanation for why there is a social taboo, how the genes affected that social taboo, how the genes caused it. It's just basically a simple rule, one that genes could realistically implement: by a certain age, for anyone you've known through that age, just don't turn on strong sexual attraction.

[00:13:52]  Red: Is that a conclusive finding?

[00:13:54]  Blue: There are no conclusive findings.

[00:13:56]  Red: Brothers and sisters raised apart when they, or if they don’t know about each other or something.

[00:14:01]  Blue: So yes, according to, I'm only as good as my sources, according to my psychology textbook, that experiment has been naturally done multiple times and that's what they found. Now, there are no conclusive conclusions ever in science. You have to get that out of your mind, right? We're always subject to the idea, well, maybe that will turn out to be non-replicable. Maybe we'll find that there was something else causing it and we were wrong. But until you have that explanation and you can test it, you don't assume it exists. If you want to assume something that's untestable exists, you can override every explanation, right? That's the whole point of trying to avoid ad hoc explanations. The one hard-and-fast Popper rule: don't ad hoc your explanations. Don't save your explanations, don't immunize them from refutation, using ad hoc explanations, which he defines very specifically as: it has no testable consequences except the one thing you're trying to explain away.

[00:14:59]  Red: Okay.

[00:15:00]  Blue: Okay. It's really a pretty reasonable rule. That's to me. Yeah. So at this point, our best theory is that the Westermarck effect is a real thing. It's an example of genetics impacting our desires. Could you with knowledge override it? Let's say, let's make up some science fiction story here. For some reason it's in the interest of the human race that we get siblings to get married. Okay. But we've got this problem that the Westermarck effect and the social taboo are stopping us from doing it. Okay. So we tear down the social taboo in this science fiction story. What we're going to find is that some segment of the population actually was attracted to their siblings, but because of the social taboo, they just never acted on it. So suddenly that segment of the population says, yeah, now I've got no problem with this, but you're going to find this is not a very large segment of the population. It's a very small segment of the population, at least according to the current experiments that have been done. So then you've still got this problem that even though we all want siblings to get married, they just don't want to; they just don't have the sexual attraction. Some of them decide to do it and they're grossed out by it, but they just do it anyhow because it's in the interest of whatever the science fiction reason is. This is how our story is unfolding. So finally we go, hey, let's use our knowledge. The Westermarck effect is a simple genetic rule that by age two, or whatever the age is, if you were raised together, you don't feel attraction.

[00:16:23]  Blue: Let's just separate all the siblings and let's not bring them together until age 10, just to play it safe. Okay. So in our story, we do this. This, at least in real life, could be done. Suddenly human knowledge has overcome the genes, right? The genes will never be powerful enough that with knowledge we can't just decide, I'm going to overrule what that gene says. But you do have to have the knowledge. If you don't actually understand the Westermarck effect, if you don't have that theory, you don't understand how the genes are doing it, you don't know what to do. You can't just make some change to the social taboos and expect that alone is going to do the trick, because there is a genetic effect going on. That all makes sense.

[00:17:03]  Red: It seems to me that the universal explainer hypothesis, I mean, it's almost like it calls into question the idea that these genes even have power over our minds. Well, that they try to, in that way. I mean, is that not the case?

[00:17:16]  Blue: Although one thing I’ve noticed is that when you really press them on certain things, there’s

[00:17:22]  Red: always some wiggle room there. Yes. They always, they always will say that there might be some cases where, okay, well, yeah, there's perfect pitch or something. There's something. Right. Well,

[00:17:36]  Blue: and I feel like we're not yet criticizing Brett's theory, but they often draw moral conclusions from it. Here's this theory, and this theory has this moral consequence, and it's really the morality that they care about, but they tend to ignore the other moral consequences. So for example, if my neighbor, who's got Down syndrome, if he actually is acquiring knowledge at the same rate as anybody else, just not in areas that we happen to value, which was Brett's theory, one of the things that implies is that that neighbor could take care of himself but has chosen not to. Well, that's a moral consequence, right? Think about the difference between, oh, this person has a low IQ, they're incapable of taking care of themselves, so they're going to have to live with their parents for the rest of their lives, versus, this person's just fine in terms of their intelligence, they just don't want to have to take care of themselves. We see those as drastically different. One of the outcomes of Brett's theory, if it is true, which I don't think it is, but if it were true, is that we would have to start looking at the person with Down syndrome as simply making a moral choice to make his parents take care of him, because he's not interested in doing it himself. That would drastically change the way we see a person with Down syndrome. And I think wrongly, okay, in a false way. They don't tend to, they kind of grasp onto certain things, but they kind of ignore others.

[00:19:03]  Blue: By the way, it would also mean that Down syndrome, which is a genetic disorder, can cause people to lose interest in taking care of themselves, which would be a counterexample to their whole thesis that genes can't change our desires. So they're really stuck, right? I mean, they've got a collection of things that they say that are in contradiction to each other. And if they actually were trying to work out the implications of this theory, rather than trying to defend it, they would see that they're in a contradiction cycle, that it is going to imply things that they don't actually want it to imply, that some of the conclusions that come out of it are unacceptable once you start taking it seriously. Which is what you do with theories like this, according to Popper: you take it seriously. You never don't take it seriously. You take it seriously and you work out the implications of this theory and whether it gets you to a place where you go, no, that can't be right. And that's what we get with some of these theories, is that they take us to a place where we go, no, that can't be right. So we get back to, what does it mean with knowledge? So the idea is supposed to be that they're trying to measure your general acquisition of knowledge. If by age 10 you're the same as an average 12-year-old, then they take your mental age, is what they call it, and they divide it by your actual age, and then they normalize to 100.

[00:20:25]  Blue: And the IQ score is supposed to tell you, that's why it's called a quotient, by the way, intelligence quotient, is because you're taking this division of mental age and actual age. And so it's supposed to be, oh, if you're, I don't know what the exact numbers would be, and they actually change every single year because they renormalize, theoretically every year. I'm sure they don't actually renormalize every year, but they renormalize on a regular basis. So an IQ of 100 now is very different than an IQ of 100 fifty years ago. And in fact, the average IQ has gone up; like, what was 100 back in the 50s is now considered 89 today. I'd have to get the exact numbers, but there's like a drastic difference between IQs 50 years ago and IQs today, because they keep going up.
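The mental-age quotient and the renormalization described here can be sketched in a few lines (a toy illustration only; the function names are hypothetical, and modern tests use standardized norm tables rather than this simple formula):

```python
def ratio_iq(mental_age, chronological_age):
    """Original 'quotient' IQ: mental age divided by actual age, times 100."""
    return 100.0 * mental_age / chronological_age

def deviation_iq(raw_score, cohort_mean, cohort_sd):
    """Modern 'deviation' IQ: the raw score is renormalized so the current
    cohort's mean maps to 100 and one standard deviation to 15 points.
    Renormalizing against each era's cohort is what makes a 100 today
    different from a 100 fifty years ago (the Flynn effect)."""
    return 100.0 + 15.0 * (raw_score - cohort_mean) / cohort_sd

# A 10-year-old performing like the average 12-year-old:
print(ratio_iq(12, 10))            # 120.0
# A raw score one standard deviation above this year's cohort mean:
print(deviation_iq(130, 100, 30))  # 115.0
```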

[00:21:10]  Red: The Flynn effect.

[00:21:11]  Blue: The Flynn effect, yeah. I want to take seriously the criticisms of it, and there’s some really decent criticisms of it, and Brett makes some decent criticisms of it, right? Yeah,

[00:21:21]  Red: yeah.

[00:21:22]  Blue: But the general idea that there is such a thing as speed of learning, that it can be measured in terms of amount of knowledge, and that a 10-year-old can be equivalent to an average 12-year-old, none of that actually violates any theory I'm aware of. It doesn't violate universal explainership, because it's about the speed of acquisition of knowledge, not whether you can acquire the knowledge. Universal explainership, really strictly speaking, only says you can acquire any knowledge. You can comprehend anything. It doesn't say anything about how fast you can comprehend it. There are reasons why Brett thinks they can make the claim that the speed is the same. We will address that. But the theory itself makes no such claim. That's an addendum to the theory that they're inserting. Furthermore, in theory, we can measure amounts of knowledge. We have something called information theory. It exists. It's real. We can measure the amount of information that exists. Now, we don't currently understand the brain well enough that it's easy to measure the amount of knowledge in a brain. But this is a totally soluble problem. At some time, we're going to understand the brain well enough that we can actually just measure: are some people taking in knowledge faster than others? And it's going to be an objective fact at some point. And I think the answer is going to be yes. We already know, because of the existence of severely mentally retarded people, that some people just don't take in knowledge very fast. That's just something to do with the way their brains are designed. Something's wrong. So they don't take it in fast.

[00:22:57]  Red: I don’t know if that what you say completely rings true for me. I mean, when you think of all the different dimensions of knowledge, moral knowledge, aesthetic knowledge.

[00:23:10]  Blue: Okay. So I understand your concern here, but let me be clear on what I'm saying. It still has to be stored as information. And information theory tells us how to measure that. If the theory of IQ is true, then there will be some brains that by age 10 have more knowledge, more information, in them. We don't really have a theory of knowledge, but we do have a theory of information. So I should say information, not knowledge. We will find that some brains have more information and some don't by age 10.
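Shannon's information theory does give a concrete, standard way to quantify information, which is the point being made here. A minimal sketch of textbook Shannon entropy (this is the standard formula, not a claim about how one would actually measure a brain):

```python
import math

def shannon_entropy(probabilities):
    """Shannon entropy in bits: H = -sum(p * log2(p)).
    The average information content per symbol of a source whose
    symbols occur with the given probabilities."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

# A fair coin carries one bit of information per flip;
# a heavily biased coin carries much less.
print(shannon_entropy([0.5, 0.5]))  # 1.0
print(shannon_entropy([0.9, 0.1]))  # ~0.47
```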

[00:23:42]  Red: Okay. At that point, you say information rather than knowledge. I think that kind of rings more true for me then. Now,

[00:23:49]  Blue: obviously there's some sort of connection between information and knowledge. We don't know exactly what it is, because we don't have a good theory of knowledge as of today. The best theory of knowledge is Popper's theory, and it doesn't make any connections to information theory today. But there should be a connection there, right? It makes sense that there would be a connection there. The theory of IQ, therefore, is at least a viable theory. It could be wrong, but there's nothing weird about it; in and of itself, it doesn't violate universal explainership. In and of itself, it positively works with it. It came about, I think, before information theory did, but information theory has only shown us that yes, you can measure how much information is contained in something. Okay. So it has not run afoul of information theory or the discovery of information theory. Okay. And it seems very consistent with it. It's true, we don't know a good, great, perfect way to measure it today, but someday we'll be able to. Okay. That shouldn't be in doubt. Now, does an IQ test actually measure that? Well, that's the question. Since we don't have the ability to apply information theory today directly to the brain and to neuroscience, instead we're trying to measure: has this person picked up this piece of knowledge that the average 10-year-old hasn't, but the average 12-year-old has? That's what we're trying to measure. And again, you can see how this could be a problematic theory.

[00:25:13]  Blue: You can see how there’d be a lot of room for criticism here, but you can also see it’s not necessarily a false theory. And it even kind of makes sense. Okay. It seems to me there’s a lot of predictive power in it too.

[00:25:25]  Red: I mean, if someone is scoring off the charts on an IQ test when they’re a kid, you know, they’re probably going to at least be more likely to have certain life outcomes. Yes. Then someone who’s scoring, you know,

[00:25:40]  Blue: this is Patel’s main argument, by the way, but

[00:25:41]  Red: I'll quote him in a second; this is actually his main argument.

[00:25:45]  Blue: My key point here is that we can't assume IQ is right, but we can't just dismiss it on the face of it using universal explainership either. We've got to take the theory more seriously than that if we're going to try to refute the theory or point out problems with the theory. Here's the other thing, and I kind of implicitly was getting at this: the underlying assumption here is that the only capacity that matters is the capacity to create explanations, and that all humans have it, and therefore we all have the same intellectual capacity. I actually got the defenders of Brett's theory to say that. Okay. They argued a little over, let's not call it capacity, because then you're implying a measure. And I said, no, I'm not. The word capacity could just mean we all have the ability, the capacity, to create explanations. I can quote you Brett saying that. He goes, okay, I'll back off. I'll accept that in some sense, just not in every sense, I am claiming that we all have the same intellectual capacity. Now there's an assumption here, which is that no other capacities matter. That's not an obvious assumption. Okay. We know from the theory of universal computation that there are three capacities. There's the capacity of the processor itself: whether it has a sufficient instruction set, and some means of memory, such that it's universal. If it doesn't have those, it's not universal, and there are certain algorithms it can't run. But we also know, even if it is universal, that it can differ based on speed and memory capacity. Under the theory of computation, there are three legitimate forms of capacity.

[00:27:19]  Blue: Now, the Deutschians have argued that one of those capacities, whether it's universal or not, is one that ties to their theory. They've admitted that. Okay. That's one of the reasons why, as we'll see, they claim universal explainership exists: because of the existence of universal computers. But they deny that the other two capacities matter for humans. And the argument that they've tended to use is, well, all human brains are similar enough; they can't possibly be like silicon chips, where there's a drastic difference in speed. Well, I don't know that, right? I mean, when they make that assumption, that could be a true assumption, but it also could be a false assumption. We don't really understand neuroscience well enough to actually know if that's true. In fact, we even have some reason, based on the neuroscience we do have, to believe that it's false. If there is something chemically wrong in your brain, your synapses fire funny. There may be enough error-correction mechanisms that you are still able to get the thoughts out, but they're going to be slower. Okay. So I don't think we actually can rule out the idea. And we also know that working memory is different for different people, and memory overall is different for different people. So I actually think that the three capacities of universal computers, all three of them, apply to humans. There may be other capacities as well, including the ones that the Deutschians are raising, such as interest. In fact, I don't disagree with them. I think that universal computers have three capacities: what types of instructions and memory it has, how much memory it has, how much speed it has.

[00:28:51]  Blue: I think that humans of necessity must have those same three capacities. And I agree with the Deutschians that they have at least one other capacity that matters: interest. If someone finds a subject boring, or if someone finds a subject interesting, they're going to put more time into it, and they're going to learn it and end up with more knowledge on it. That's not a capacity that applies to universal computers, but it is a capacity that applies to humans. So now we have at least one example of how humans have a capacity type, one that even the Deutschians accept, that isn't one that relates to universal computers; but they should, at a minimum, also have all three of the universal computing capacity types. This is
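The distinction being drawn, that universality is independent of speed, can be made concrete with a toy sketch (hypothetical code, purely illustrative of the computability point):

```python
import time

def fib(n):
    """Iterative Fibonacci: a task any universal computer can run."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def slow_fib(n, delay=0.0001):
    """The identical algorithm on a simulated slower processor: the output
    is exactly the same, only the running time differs, illustrating that
    speed is a separate capacity from universality."""
    a, b = 0, 1
    for _ in range(n):
        time.sleep(delay)  # pretend each step takes longer
        a, b = b, a + b
    return a

# Both "machines" compute the same answer; one just takes longer.
print(fib(20), slow_fib(20))  # 6765 6765
```

The same point holds for memory: a machine with less tape can run fewer instances of an algorithm, but that is a resource limit, not a loss of universality.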

[00:29:33]  Red: one area where the whole universal explainer conception has really influenced my thinking is just caused me to reflect on how just driven by interest and curiosity humans are. I still think that ability matters too. Let me point something out.

[00:29:53]  Blue: Interest is an example of genes influencing us because interest is a kind of desire, so is boredom.

[00:30:00]  Red: So

[00:30:00]  Blue: computers don't get bored, we do. Computers don't have interests, we do. Now, because those are at least in part a pleasant or unpleasant feeling, it's not too hard to see that if the genes really can influence us through pleasure and pain, or unpleasantness in this case, then the genes could affect our interests. And in fact, we know they do. We have the example of Williams syndrome that I brought up in our previous episodes, where in this case we actually know which gene is the one that's wrong. We understand the mechanism that causes this section of the brain to shrink and that section of the brain to grow. And somehow, and from here we start to lose what the mechanism is, that leads to someone with Williams syndrome having tons of interest in social interaction but very little interest in math. And this is very consistent. Now, I argued that universal explainership means not that these people need to be as good at math as a person without Williams syndrome. If it did mean that, then universal explainership would be refuted, but I don't think it does mean that. I think what it really means is that this person potentially could learn math as well as somebody who's a mathematician, but they're just not interested in it. That for whatever reason, their brain's been wired in such a way that they just feel all this interest in social interaction. They get really good at that. They just don't have interest in math, so they just tend not to be very good at it. That's an example of the genes influencing our interests.

[00:31:36]  Blue: And I think this becomes unavoidable. Right. We've got so many examples of this in real life, ones where we actually understand the mechanism between the genes and the interest.

[00:31:45]  Unknown: Well,

[00:31:45]  Red: I mean, it just seems to me that brains matter a lot. Nature has created a species for which, historically, it's been unbelievably dangerous for women to give birth, because nature's made our brains so big,

[00:32:02]  Unknown: right?

[00:32:03]  Red: Right. I mean, they just matter. There are so many different examples.

[00:32:08]  Blue: So the Deutschians accept that boredom and interest are a factor. It's a capacity factor. Think of that in terms of someone severely mentally challenged because they have Down syndrome. The Deutschian view would have to be that the genes cause this person to be disinterested in understanding how to live their life without help. Well, the moment we start saying things like that, we might as well just admit this is the same as saying there's a difference in intellectual capacity. Yeah. Right. Because if interest is the thing that's different, that is a capacity difference. Okay. If the genes can shut down our interests or increase our interests, which apparently they must, because Down syndrome exists in real life, then even if I'm still accepting the Deutschian view, it now ends up being the same as the view of IQ. This is an example of how we take a theory seriously: we try to work out its implications.

[00:33:00]  Red: Yeah. Okay. I mean, I just don't think it's true either. Even toddlers want to take care of themselves as much as they can. And, you know, they just can't. Yes.

[00:33:10]  Blue: Okay. And here's the other thing. I think we could test this. Let's say we took the point of view, like the defenders of Brett's theory did, that a person like my Down syndrome neighbor has actually acquired as much knowledge as me or some genius out there; it's just that it's in areas we don't value. Well, I doubt it. Even if I were to tell them, look, take your own theory seriously, go make a test that demonstrates this, I don't think they could. Now, of course they would say, well, it's because none of these tests work, but that's kind of missing my point. Look, one of the examples they used was that one of them had a family member who was severely mentally challenged but was awesome at video games. So he used that as an example of this person actually having the same capacity as anybody else. Now, for the moment, I'm going to ignore the fact that skill acquisition isn't actually the same as expert explanatory knowledge acquisition, and I feel like he's missed that fact. Let's pretend that isn't true. You almost assuredly could make a test that tests how good you are with knowledge around video games. And we could easily find a high IQ person that could just beat every single Down syndrome person in such a test, right? They won't be able to design a test that shows otherwise, because their claim is based on something that just isn't actually true. So on the one hand, I don't want to say too much in favor of IQ. I think we want to admit this is a troubled theory.

[00:34:37]  Blue: And the reason why is because it's making claims that are premature based on our actual knowledge of neuroscience. On the other hand, there's nothing ridiculous about the theory, and it's survived a number of tests that could have refuted it. That's saying something, and not just a little bit; that's really saying something. And this is where the g factor comes in. One of the things that happened early on is they noticed that all these tests that purported to measure intelligence were all completely different. They were measuring completely different things. And it's like, why couldn't it be that people just have different intelligences? In fact, this seems so obvious to us, right? Of course some people are good at English and some people are good at math, and they've got different intelligences. This is almost just straightforwardly, obviously true. So then they noticed something. People who scored well on one IQ test scored well on another IQ test. There was some sort of general intelligence factor. That's what the g factor means. People who scored high on this IQ test that measured this kind of knowledge were almost assured to score high on this other test. And not always. I was going

[00:35:53]  Red: to say almost assured.

[00:35:54]  Blue: It seems to me that may be overstating it a little bit. Isn't it more that they're correlated? It is highly correlated. Highly correlated. Now, what's a high correlation? One of the things the Deutschians brought up was, you know, what is a 0.3? What is a 0.4? Well, actually, that's pretty strong. If I found it was a 0.3 or 0.4, I'd be pretty convinced by that. Their point to me was, well, I don't find that convincing. I said, okay, but what would you find convincing? What if it was a 0.8? What if it was a 0.9? At what point would you admit that your theory has a problem? And they said, at no point would I admit my theory has a problem. If I found it was high enough to cause a problem for my theory, then at that point my best explanation would be that somebody cheated on the test. So you can't come up with any set of observations that they would actually find convincing. I'm like, okay, at least you're admitting it. That's not a good thing to have to admit, but at least you're admitting that there's no way for your theory to clash with experience. This is something that they found. They didn't have to find it. It could have been that there was no correlation. And if that had happened, the theory of IQ would have died. It would have died on the vine. It would have been refuted, to put it in Popperian terms. And it didn't get refuted. Instead, what they found was there's this correlation. There's this g factor that exists. Now, we still don't know how it all fits together. We

[00:37:13]  Blue: don't know for sure what we're measuring. The Deutschians are right about that. And yet we can't just dismiss this theory. We really need to give it further thought and really come up with testable ways to look at this. With this as our introduction, a very long introduction, let's now look at Patel's challenge. He says, and I'm quoting him now: IQ scores are heavily correlated with job performance, school performance, income, lack of crime, and even health. IQ is currently measuring some underlying trait that determines your effectiveness across a broad range of cognitively demanding tasks, in many cases even years down the line. You can call this trait whatever you want, but I call it intelligence. I really like this quote from Patel. First of all, it lays the challenge out really well. The fact is that everyone should care about job performance and school performance. Brett's saying, well, maybe they don't; maybe that's just not what their interest is. But we would kind of expect them to care, right? Just saying they're not interested isn't very explanatory. So Patel's bringing this out. Secondly, he's kind of pointing out the real problem of trying to argue over intelligence: there's nothing wrong with him just saying, look, to me, the fact that some people just perform better at cognitively demanding tasks, that's what I call intelligence. I don't care what your definition is; to me, that's what it is. You can call it what you want. It still exists as a concept, even if you don't want to call it intelligence. And he's right, right? There's this measurable effect that has shown up in tests over and over and over again, and that we have direct examples of in the case of people just clearly lacking something.
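As an aside on the correlation values that keep coming up in this discussion (0.3, 0.4, 0.8), here's a minimal sketch of how a Pearson correlation coefficient is actually computed. The two score lists are invented purely for illustration; they are not data from any real study.

```python
import math

def pearson_r(xs, ys):
    # Pearson correlation: covariance divided by the product of the spreads.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical scores of the same ten people on two different tests.
verbal = [85, 90, 95, 100, 105, 110, 115, 120, 125, 130]
spatial = [88, 98, 92, 104, 99, 118, 110, 123, 121, 135]

print(round(pearson_r(verbal, spatial), 2))  # a high positive correlation
```

A value near 1.0 means people who score high on one test almost always score high on the other; 0.3 or 0.4 means the tendency is real but much weaker.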

[00:38:48]  Blue: We can't just ignore that. We can't just ignore the observations that exist just because it's not the theory you want to believe in, right? There's something more going on here. All right. Now, Patel, in an interview with David Deutsch, was confronting David Deutsch on his views and kind of challenging him. He raises the twin studies, and he says IQ correlates between identical twins separated at birth. So at that point in the interview he advances a hardware theory as a competitor to the universal explainer theory. Deutsch gave the following counterexamples, and here are Patel's answers to them. So Deutsch says the hardware theory "explains" intelligence only in the sense that it might be true that it's hardware; it doesn't have an explanation beyond that. By the way, Deutsch is spot on saying that. That is an absolutely correct thing to say, as we're going to see. Patel then says there are physical correlations to IQ, like a 0.3 correlation for skull size. Now, here is what Deutsch said to that. He says, suppose the results of these experiments had been different. Suppose for people brought up in the same family, only who their parents were made a difference in IQ. Wouldn't that be surprising, that there's nothing else that correlated with IQ other than who your parents are? How much correlation should we expect? There are correlations everywhere. Then he gives examples of famous spurious correlations and suggests that we can be fooled by randomness, to quote Nassim Nicholas Taleb. It's not a rare event to get a correlation between two things. There are infinitely more things they don't control for.

[00:40:21]  Blue: It could be that the real determinant of IQ is how well a child is treated between ages three and four, where "well" is defined by something we don't know yet. Then we'd expect that things we don't know about, things nobody has controlled for, would be correlated with IQ. But unfortunately, that thing is also correlated with whether somebody is an identical twin; something like getting the idea that this child is really smart. Now let me point a couple of things out. First of all, what David Deutsch just did is an example of an ad hoc save.

[00:40:55]  Red: Truthfully, it's rare for me to hear something said by David Deutsch, obviously my favorite guy, that just doesn't quite ring true for me.

[00:41:06]  Blue: But that might be an example of one. I'm going to take a different stance on this. I actually agree with him on this, but I don't know that for sure, and I know I don't know that for sure. Let me just say that at this point, this is an ad hoc save. David Deutsch is saying, look, you're giving me counterexamples to my theory, but it might be that it's just something that we don't know about yet. Well, that's the quintessential example of an ad hoc save.

[00:41:37]  Red: But it just doesn't seem like there's anything more plausible than just that. Okay? That's the issue: the idea that there are genes that influence this stuff.

[00:41:46]  Blue: So now I mentioned a study, one that they brought up in my biology class.

[00:41:56]  Red: Okay. Yeah.

[00:41:57]  Blue: And in this study, what they did is, the study was actually about the teachers, not the children, but they told the teachers it was about the children. They usually misrepresent the study to participants like that because they're trying to, you know, remove various confounding factors, right? So they went to these teachers and said they had interviewed a bunch of kids in their class, and they came back and said, these five kids, which they had actually picked at random, these kids are budding geniuses. Maybe not geniuses, but you're going to see that they're gifted.

[00:42:29]  Red: Yeah.

[00:42:30]  Blue: It just hasn't happened yet, but we have measured them and we know that they are gifted. And then they came back, you know, a year or two later; they did IQ tests before and they did IQ tests after. And the IQ of those five children had gone up. Yeah. Okay. Had taken a jump. And what would be the explanation for that? Well, it actually sounds like something very similar to what David Deutsch is saying, right? This is the belief, at least; the current theory is that the reason this effect happened was that the teachers, having been told by an authority figure, somebody they respected, that these were going to be gifted children, now started treating them like they were gifted, which meant the kids started getting a better education, which meant their knowledge acquisition went up, which meant their IQ went up. Now, I don't know. When I mentioned this experiment to you and Cameo, you both really did not think much of it. And honestly, you hear all sorts of things in college classes that ultimately never get replicated. So for all I know, this experiment has never been replicated.

[00:43:44]  Red: I’ve spent working in education. I’ve spent half my lifetime hearing about this theory or that theory, this study or whatever, you know, these, these studies is really what I mean. That I don’t know. I might, maybe, you know, my wife thinks my, my BS detector is, is overactive, but I, you know, it’s, it’s, it’s, it’s triggered. We’ll say that. So

[00:44:08]  Blue: let me give my point of view here. I'm not saying I necessarily agree with him, but there's nothing ridiculous about what he's saying. I mean, yeah, there's a grain

[00:44:18]  Red: of truth.

[00:44:18]  Blue: Yeah. Well, no, it's a theory worth looking into. It may not even have a grain of truth; it may be false. But there's nothing silly about the idea that genetics affects how we perceive intelligence, and that we then accidentally educate those we perceive as intelligent until they actually are more intelligent. That's really what he's saying, right? And there's nothing even slightly ridiculous about that theory. Now, to turn it into a good theory, it's got to be testable. You can't just say it's something and we don't know what; you need to propose what it is. Suppose I propose that it's that their eyes are closer together and that we perceive that as more intelligent. This is a stupid example, okay, but you can start coming up with testable theories that way. The reason I picked the eye one is because I happen to know that we perceive dogs whose eye spacing is closer to the human ratio as more intelligent than dogs whose eyes are further apart than the human ratio. I mean, it's stupid; I'm not suggesting this is a real theory. But based on that, maybe genetically, if people's eyes are slightly further apart, we immediately assume they're stupider, and therefore we treat them that way, and therefore they become that way. There's nothing ridiculous about that theory. It's probably false, but it's a theory worth testing. We're going to have to conjecture what the genetic effect is that's leading to the difference in IQs, and then we need to figure out what it is. David Deutsch's proposal was "getting the idea that the child is really smart." Well, that's what this experiment suggests.

[00:45:54]  Blue: Maybe the experiment will turn out to be wrong, but what if it's not wrong? What if we end up replicating this experiment over and over again? Even though your BS detector is going off now, at some point, after the one thousandth time that we demonstrate that it actually works, hopefully that overcomes your BS detector. Well, you know, one of the things I also get from David Deutsch and Karl Popper is that humans are just very hard to manipulate. It seems to me that that's a little bit too neat and tidy of an explanation.

[00:46:25]  Red: Yes. The idea that a student can be radically changed by a seemingly small change in their environment. Yes. It just doesn't ring true for me as how human beings work. Maybe

[00:46:43]  Blue: I'm wrong. The thing is, we have to ask tons of questions here, right? Anytime you're dealing with experiments, you have to. Experiments are observations; that's really what they are, right? We have this observation out there that if you treat a child better, their IQ goes up, right? If you think they're more intelligent, they become more intelligent. We don't know how often that observation happens. Maybe it's completely unrelated to our assumed explanation. Right. And, you know, I also live in an environment where most of the parents I know are just desperate to turn their kids into little geniuses.

[00:47:19]  Red: You know, they want to. I don't know if you heard the guy who wrote that book, The Cult of Smart. He was on a lot of podcasts, and he was just talking about how, you know, intelligence is almost considered like a primary value to humans these days, you know,

[00:47:37]  Blue: you're making a good argument. Doesn't that mean that all the parents are trying to treat their kids like they're intelligent, right? So how can we use that as the explanatory difference? You're right to challenge that. We would also have to ask how much of a difference it is. When we said their IQs went up, presumably that means it went up by a statistically significant amount; otherwise they wouldn't have reported it. But even statistically significant is often not that much, right? Maybe the theory Deutsch is giving us is true, but only by a value of 10 points, or something along those lines, and can't explain the rest. And there's so much more here that we just don't know at this point. So, in fact, it's even possible that both are true: that there is some sort of difference that is hardware related, just like Patel is saying, and also that there are software differences, and also interest differences, and also that we value some knowledge more than other knowledge, and also that it's about how you treat the child. It could be all these factors a little bit. And in real life, that's what tends to happen: there are a gazillion factors and you can't just tease them apart. And when you do tease out one factor, it ends up making only a little tiny difference.
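The distinction being made here, that a statistically significant result can still be a modest effect, can be sketched with hypothetical numbers: a 10-point average gain in a sample of 100 children, against the conventional IQ standard deviation of 15. None of these numbers come from the study discussed; they are invented for illustration.

```python
import math

# Hypothetical: a 10-point average IQ gain in a sample of 100 children,
# against the conventional IQ standard deviation of 15.
gain, sd, n = 10.0, 15.0, 100

cohens_d = gain / sd             # effect size: independent of sample size
se = sd / math.sqrt(n)           # standard error of the mean gain
z = gain / se                    # test statistic: grows with sqrt(n)

print(round(cohens_d, 2), round(z, 2))  # prints: 0.67 6.67
```

The point of the sketch: the test statistic (and hence "statistical significance") scales with the square root of the sample size, while the effect size stays fixed, so a 10-point difference can look extremely significant without explaining most of the variation.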

[00:48:55]  Red: Yeah.

[00:48:55]  Blue: Right. And so I guess what I would say is, I don't feel like David Deutsch's theory is ridiculous any more than I think IQ is ridiculous. I think they could even be compatible theories if we came up with the right explanation. So we shouldn't really rule out either possibility, but we should kind of just accept that Deutsch's theory right now is an ad hoc save, and we'll treat it as one. The problems that exist with this theory are real problems. When we have an ad hoc save, that might be a path to eliminating the problem, but at this point the problem is still considered real. That's the correct critical rationalist way to look at it. Okay. You don't get to dismiss the problem because you personally came up with an ad hoc save. It's still considered a problem. So at this point, both theories have problems. Both theories seem like they could have some truth to them. Now let's talk about David Deutsch's statement that it doesn't really explain anything. Okay. If Patel's theory were correct, it should have consequences that can clash with experience. One of those is: why aren't animals on a bell curve with us? Animals aren't on our bell curve. You cannot make an IQ test for a dog. Sometimes we'll talk about dog IQs, but that's just an analogy. Dogs are not somewhere on the human IQ scale; they're on a completely different scale. Right. Even the most intelligent dogs.

[00:50:31]  Red: Yeah.

[00:50:31]  Blue: Why isn't there some animal that's, you know, IQ 80, and then some humans that are IQ 60? If intelligence is really just a factor of hardware, then animals should be on the same bell curve as us. I mean, I suppose someone could make that case, but just from my own experience of interacting with low IQ humans, however you want to put that, there's always just a fundamental difference between

[00:51:10]  Red: So human and animal.

[00:51:12]  Blue: Yeah. The only place where I think, so that was why I used my neighbor. On the one hand, he's clearly severely mentally challenged because of his autism and Down syndrome; maybe he has other problems too. On the other hand, he's a universal explainer of some sort. I mean, I've argued about whether he's a universal explainer; he is able to carry a conversation, let me put it that way. Which no animal can do. Even Koko the gorilla, as we've seen from past episodes: apes' language ability is drastically lower than what a severely mentally challenged person can do. On the other hand, there are humans that exist that are on the animal scale. There are humans that, for example, have no brain, only a brainstem. There are really interesting observations about that. Children whose only brain is a brainstem still possess a level of intelligence more like an animal. They still have preferences. They still smile. They get happy. They get sad. They have emotions. They act as if they feel things, just like animals do. So that was one of the reasons I had argued that we have good reason to believe that feelings and qualia evolved earlier than humans, that they actually evolved back in the lower portions of the brain. So animals, with their brainstems at least, should have similar types of feelings. Now, I don't know if that's true. That is something that would come out of trying to mix evolutionary theory with what we currently understand about neuroscience. We know so little about neuroscience, who knows. I mean, it makes a lot of sense.

[00:52:54]  Red: Like what you said before about how, if you try to describe the behavior of an animal without using emotional language, you can't even do it. You can't even do it today. Right.

[00:53:07]  Blue: So based on that, though, it does seem like there's a jump. There are some humans that are at the same intelligence as animals, ones that have no really human intelligence at all. But if they have any level of human intelligence, they're a drastic jump above every animal. Okay. Even a severely mentally challenged person. However, you could argue against that by making an argument like this. You could say, oh yeah, but an ape can take care of itself in the wild, and that human cannot take care of themselves even in a human society.

[00:53:37]  Red: Yeah.

[00:53:37]  Blue: Well, that's true. So we do have a little bit of an issue here: when we talk about intelligence with animals, we don't seem to mean the same thing as when we talk about intelligence with humans. They're literally different scales. Okay, not that there isn't some overlap; I'm not saying there isn't. But we clearly are using the word intelligence with animals to mean something different than what we mean with humans. Now, I think the difference here is that animals don't have explanatory knowledge. They have all the other kinds of intelligence that humans have; the highest animals have the same kinds of intelligence that humans do, absent explanatory knowledge. And the big jump is explanatory knowledge, which is universal explainership. Okay. I think that makes a lot of sense. So, getting back to Patel's theory, his theory has this problem: animals should be on the bell curve if the only difference is hardware. So what Patel does is an ad hoc save. He uses something called the scaling hypothesis and an analogy with GPT-2 and GPT-3. Now, the analogy with GPT-2 and GPT-3 is really odd in my opinion, because nobody currently believes, and when I say nobody, of course that means there's probably somebody out there who does believe this, but it's really rare. You know, Joel Spolsky used to say, when I say nobody, I mean there are less than two million people in the world. It's just a way of speaking, right? You're not supposed to take it literally. You can find some scientist who thinks artificial neural networks work the same as human neural networks.

[00:55:09]  Blue: But the vast majority of scientists working in all these fields, whether you're talking about neural nets or computer science or neuroscience, really just don't believe that there's any true analogy between artificial neural networks and biological neural networks. There used to be this analogy. It was based on a misunderstanding. We've long since moved past it. As of today, we use gradient descent, and very few people think that the brain uses gradient descent, because gradient descent, if you know anything about it, is doing calculus. It would be odd if the brain were doing gradient descent. So it's strange that he uses this analogy with GPT-2 and GPT-3, because to the best of my knowledge, the whole idea that artificial neural networks and biological neural networks are somehow related is a refuted idea that nobody takes seriously today. So Patel says: so if I think the Deutsch theory of universal explainers is wrong, what is the alternative that I am proposing? By the way, good on Patel to realize he needs to propose something, okay, instead of just taking potshots at the current theory. He says: I think that the simple model you would develop after thinking about AI for five minutes is correct. A pig is stupider than I am. David Deutsch is smarter than I am. So why couldn't a future AI be even smarter than David Deutsch? All right. And then he adds in something called the scaling hypothesis. The reason he inserts the scaling hypothesis is because he knows he needs to explain why animals aren't on the bell curve with humans.
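Since gradient descent keeps coming up, here's a minimal sketch of what "doing calculus" means in this context: repeatedly taking the derivative of an error with respect to a weight and stepping downhill. The single linear "neuron" and the toy data are invented for illustration; real networks do this across billions of weights at once.

```python
# A single linear "neuron" y = w * x, trained by gradient descent to
# recover the target relationship y = 2x. Toy data, invented numbers.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]

w, lr = 0.0, 0.01  # initial weight and learning rate
for _ in range(500):
    # Calculus step: derivative of mean squared error with respect to w.
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad  # step downhill along the gradient

print(round(w, 3))  # converges toward 2.0
```

This is the explanation available for GPT-style models that the episode contrasts with the biological case: we know the weights improve because the update rule provably reduces the error.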

[00:56:54]  Blue: So he says: the scaling hypothesis regards the blessings of scale as the secret of AGI; intelligence is just simple neural units and learning algorithms applied to diverse experience at a currently unreachable scale. This is him quoting Gwern, from an essay called The Scaling Hypothesis. Here's the thing. That explanation does not explain. What it actually says is: scale matters, for reasons we don't know. Why should scale matter? Okay. Why would having more neurons suddenly just make me smarter in and of itself? Or, more to the point, why would it be that if I don't have enough neurons, there are some subjects I can't understand? Right? There's this giant disconnect between universal explainership, what it really says, which is that we can all understand everything, and why scale is related to it. Now, Patel admits that this is not an explanation, or he almost admits it. He says: admittedly, this is not a full explanation. Saying that intelligence is a matter of training a very large neural network with lots of diverse experiences in order to design the right circuits is like saying that nuclear fusion is a matter of the interaction between fermions and bosons. It may be true, but it is not much of an explanation. Okay. Let me just say, first of all, good on Patel for admitting upfront that this is kind of a sucky explanation. He's showing some self-awareness there that really does matter in a case like this. However, I feel like he's making a mistake here. Let me see if I can explain, based on the things I've said to this point. I've been leading up to this.

[00:58:42]  Blue: We know why GPT-3. So the idea here is that between GPT-2 and GPT-3 there is this giant leap of intelligence, and that the only real difference, so says Patel, so says the scaling hypothesis, is that GPT-3 happens to have a lot more neurons.

[00:58:58]  Red: Okay, so when we did this

[00:59:00]  Blue: giant leap in neurons between GPT-2 and GPT-3, suddenly we got to the point where GPT-3 can sometimes do math. Some of the things that GPT-3 can do are just surprising, right? They're a different type of intelligence; they're not just more of the same. This is his argument, and it's partially true. I've read the Google paper on this, and he's making some true claims here. Here's the thing, though. In that case, we have an explanation for why GPT-3 is smarter than GPT-2: it's because gradient descent wired the improved software into the neurons. We know exactly why that's happening; we have that explanation. There's no similar explanation being offered for the biological networks. He's just saying, well, you know, you've got more neurons, you have more experiences. It just isn't the same thing. These are not comparable. It's not similar to the fermions and bosons example. It's not merely that it's not a strong explanation yet; it's a non-explanation at this point. The neurons alone are meaningless. It's really the knowledge we put into the neurons that matters. It's not even the same as the hardware hypothesis that he's trying to advance; the scaling hypothesis is fundamentally a software story, not a hardware story, if that makes any sense. His ad hoc save on animal intelligence, here's the quote now: the hominid neural architecture scales really well, and with slight improvements in architecture and an increase in the number of parameters, more neurons, we were able to hit the next, but not the last, S-curve on the path to greater intelligence. Okay. Does this really even a little bit explain things?

[01:00:43]  Blue: Isn't he really just saying it happens to be, for reasons unknown, that big leaps take place every so often with the scale of networks? All right. By the way, I can refute this by example: Alex the parrot. I've brought Alex the parrot up a number of times, for good reason. This is a repeated set of experiments that was originally done with Alex the parrot but has been repeated since, and it was published in the peer-reviewed journal Nature. So we have a set of observations about the intelligence of Alex the parrot. And Alex the parrot is what Byrne calls an animal with insight, just like a great ape. Even though Alex the parrot's brain is the size of a walnut, it has the same intelligence as the great apes. Think about that for a second. The same intelligence level as an elephant, too. Okay. Animals with insight, and you were part of the show back when I did those episodes. Animals with insight is part of Byrne's theory that there is this group of animals that have an extra level of intelligence, where they can do trial and error in their minds, whereas other animals have to use trial and error learning. Okay. And it's a giant leap of intelligence that takes place. It's a leap to universality, just a different level of universality than the one humans made.

[01:02:02]  Unknown: So

[01:02:02]  Blue: under Byrne's theory, you have regular animal intelligence. There's probably something underneath that, like protozoa, but let's just start with regular animal intelligence. That's trial and error learning. Some animals are faster, some are slower, but they all have the same capacity to learn. Then you have a giant leap that takes place all of a sudden with a small subset of animals: elephants, great apes, parrots, dolphins, probably whales, though it's hard to experiment with dolphins and whales,

[01:02:30]  Unknown: that

[01:02:30]  Blue: there's this group of animals that can do trial and error in their mind and come up with insight into what they need to do, that a normal animal like a dog, even a really intelligent dog, can't do.

[01:02:43]  Red: So my dog is not going to learn not to chase cars unless she actually gets hit by a car.

[01:02:49]  Blue: Yes. Yes. There has to be an actual event.

[01:02:53]  Red: Whereas an ape could look at a car and kind of think, you know what, I’m not going to mess with that thing. Right. Might put together an insight based

[01:03:01]  Blue: on other things that it had seen.

[01:03:03]  Red: And

[01:03:04]  Blue: we've got experiments that really do demonstrate this. Okay. Now, you can't demonstrate anything with certainty, and these are nascent theories that still have a long way to go. But I find Byrne's research rather compelling. Right. I don't see how you can just simply take the view Deutsch expresses in The Beginning of Infinity, that animals are all the same and that they have zero intelligence compared to a human, that they're all mechanistic. I don't see how that even makes sense given the actual studies that Byrne has done. And Byrne has really gone out of his way to come up with observations demonstrating that there are activities that, say, a great ape is able to do that can't be explained through regular trial and error learning, and that you're just forced to the conclusion that there's some sort of leap of intelligence that took place with apes and with some of these other animals. My point here is: Alex the parrot has a walnut-sized brain, and yet the same leap took place. This is really not suggestive of either the hardware hypothesis or the scaling hypothesis. This is a counterexample to both of them. There's something more going on here. And so in its current form, it's generous to call the scaling hypothesis an explanation at all. It seems more to describe what we see than to actually explain it. It doesn't follow from any best theories, unlike universal explainership, which as we'll see does follow from best theories.

[01:04:38]  Blue: It doesn't solve any problems at all other than Patel's concern that Deutsch's theory can't explain the bell curve of IQ, which by the way is only true if you accept the additional parts that Brett is adding, which I don't think you have to. And it doesn't even explain the bell curve of IQ itself. It just describes that there will be a bell curve. In my opinion, the scaling hypothesis that Patel is putting forward actually says this: we see that humans have a bell curve of intelligence and that there is a big leap from animals to humans. We are going to explain that by saying that hardware neural nets, as they scale, for reasons unknown sometimes take a big leap and sometimes don't. The big leap between animals and humans is an example of the leap taking place, as our theory states sometimes happens. The bell curve of humans is an example of a leap not taking place, as our theory states sometimes happens. I don't see how Patel is saying anything but that. And when I put it that way, it should be obvious: there's not an ounce of explanation going on here. This is not an explanation at all. It's a non-explanation. Spoiler: I believe this is the truth about Patel's theory. But I think Patel deserves a ton of credit, even though I don't think his theory is making an advance here. First of all, he's building on top of the basic IQ theory. And he did go into why the basic theory of IQ, even though it still doesn't explain very much, is a theory that has to be taken seriously.

[01:06:19]  Blue: I can appreciate that he's attempting to put together some sort of explanation with the hardware hypothesis and the scaling hypothesis, one that he's drawing out of some of the things he's seeing, such as the fact that IQ is related to skull size. He's at least trying to come up with something. And I think it shows a great deal of rare self-awareness that he then admits, yeah, this kind of sucks as a theory, I know. It's just so rare that people advancing a bad theory are even aware that they're advancing a bad theory. And so I can really appreciate that. And I can really, really appreciate that what he's really trying to do is not advance the hardware hypothesis and the scaling hypothesis as an alternative to Deutsch's. He's really trying to say, I can't buy Deutsch's theory because there are observations that are just clashing with it. Sorry, with Brett's theory, we'll call it, since we don't know for sure if it's Deutsch's theory. So he can see that Brett's theory implies all humans have equivalent capacities, and that this just clashes with experience. It just doesn't match what we actually observe. So he's doing his best to try to make sense of that fact. And I feel like this deserves our approbation. We should say good on Patel for writing this article, good on Patel for coming up with some pretty spot-on criticisms of Brett's (or Deutsch's) theory, which has very real problems that need to be addressed. That would be my explanation of what I see wrong with Patel's theory, but also what I see right about Patel's theory.

[01:08:02]  Blue: Before I move on, I want to mention one other thing that Patel mentions in his paper that I think deserves some attention. Quoting from Stuart Ritchie's Intelligence: All That Matters, he quotes the following: another important finding from functional imaging relates to brain efficiency. Compared to those with lower ability, the brains of higher IQ people tend to show less rather than more activity when completing complicated tasks. This suggests that their brains can more efficiently work through the problems. Now remember that Patel is trying to advance a hardware theory, the idea that differences in IQ are explained by hardware. This example he's using here really does not support his hardware theory the way he thinks it does. Here's the issue. Under computational theory, an algorithm has the same computational complexity on every classical computer. That's the whole point of the idea of a universal computer. So if I want to solve a traveling salesman problem, and it's got a certain number of cities and a certain number of paths, the number of computations I need to do to complete it does not change no matter which computer I put it on. It does not matter if it's a 386 or something that's going to exist a thousand years from now. As long as we're talking about classical computers, the number of computations never changes. Now, you might have a less efficient algorithm. So let's say that you have a very bad algorithm that's trying to solve the traveling salesman problem. It may be that it takes ten times or a thousand times, pick a number, as many computations to accomplish the same thing, because there's something wrong with the algorithm.

[01:09:40]  Blue: It's just not a very efficient algorithm compared to the more efficient algorithms that we know. So this example that Patel's using actually seems more suggestive of a software hypothesis, that IQ is related to software, rather than the hardware hypothesis, that IQ is related to hardware. Now, I do think I could meet Patel partway by pointing out that the distinction between hardware and software that we think of on a computer is actually something we design into computers. In real life, this distinction between hardware and software, if you're just talking about evolution in brains, shouldn't really exist in the same way that it does for the hardware and software of a modern digital computer. Let me use an analogy here with logic circuits. In theory, you can build any computation you want out of logic circuits. If you've got a NAND gate, then you have a universal logic circuit, and you can write any program using logic circuits. If I were to write a program using logic circuits and then show it to you, you might say, okay, where's the hardware? And I would point to the logic circuits. And then you might say, okay, where's the software? Well, there is no software per se in the logic circuits other than the logic circuits themselves. If the software is anything, it's the form in which I laid out the logic circuits. And so there simply is no strict distinction between hardware and software if you're using logic circuits. Brains are a lot more like that. Because of this, we could imagine something like this.
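To make that NAND-universality point concrete, here's a minimal Python sketch (my own illustration, not something from the episode): every standard gate built out of nothing but NAND, so if there's any "software" here, it really is just the wiring pattern.

```python
def nand(a, b):
    """The only primitive gate: outputs 0 only when both inputs are 1."""
    return 0 if (a and b) else 1

# Every other gate is just a wiring pattern of NANDs.
def not_(a):
    return nand(a, a)

def and_(a, b):
    return nand(nand(a, b), nand(a, b))

def or_(a, b):
    return nand(nand(a, a), nand(b, b))

def xor_(a, b):
    c = nand(a, b)
    return nand(nand(a, c), nand(b, c))
```

Check any truth table you like, for example `and_(1, 1)` gives `1` and `xor_(1, 1)` gives `0`; the "program" is nothing over and above the layout of NANDs.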

[01:11:10]  Blue: We could imagine, just based on what we currently know about how brains work. We know that there's some sort of signal that happens, where you get a neuron that fires and it leads to another neuron firing, and the signal moves around the brain and does things, and we don't fully understand what it does. Imagine that some brains have problems with the chemistry, or maybe problems with the physical makeup of the neurons due to, I don't know, malnutrition or something like that. This brain would therefore be defective in some way. Now, it would make sense that for evolution this is probably not an uncommon thing to have to deal with. So it has created software that does error correction: if the signal is lost, it attempts to restart the signal, and if it gets lost again, then it restarts the signal again. Now, I don't know if this is how brains really work, but this is at least plausible. We could imagine then that the same algorithm could be far less efficient on one brain than on another, due to the fact that this error correction has to keep restarting. Now, this would be a kind of partway point between my view and what Patel was getting at. You've got a brain, it's got a defect. Due to the defect, it causes a problem with the software, and the software is now less efficient than it could have been. There's still a maximum efficiency that's possible, though, and presumably healthy brains are probably somewhere near that maximum efficiency. So this isn't quite the same as Patel's hardware theory. Yes, hardware is affecting the software indirectly in this case. It's making the software less efficient, because it has to keep restarting.

[01:12:48]  Blue: And it's due to a hardware error that that's the case. But it isn't really the same as saying, hey, if you've got better hardware, you're smarter. The arrow only goes one direction. Better hardware doesn't necessarily make you at all smarter; that would require better software. But worse hardware could cause a problem where your software has to compensate. Or we might put it like this: the hardware hypothesis might be able, through defects, to explain certain kinds of low IQs, but it couldn't really explain why some people are significantly above average in IQ. Go ahead and ask me your questions now. You've said that you had questions about whether we can steelman Brett's theory. And I think that's actually a good idea, since in the next episode we're going to talk about the problems with, the criticisms of, and the refuting observations of Brett's theory. Let's really try to make sure we're putting his theory into its most steelmanned version and really understand why he feels so strongly about this. What is it that he's after here?

[01:13:50]  Red: Yes. Okay, let's start with Alan Turing. Can we go back that far? Yes, we can. I just read a very cool graphic novel about him, by the way. The same dude wrote graphic novels about Stephen Hawking and Richard Feynman. I think you would really like this one. Alan Turing's ideas on universality. Obviously, it's one of the four strands of reality as proposed by David Deutsch in The Fabric of Reality. Can you succinctly explain what that means? Sure. Okay,

[01:14:23]  Blue: so let's start with computational universality. The way I like to explain this is: we know that there are at least three types of computers, right? We've got the finite state machine.

[01:14:41]  Red: Okay.

[01:14:42]  Blue: And we have the push-down automaton, and then we have the Turing machine.

[01:14:47]  Red: Okay.

[01:14:47]  Blue: And they have different capacities,

[01:14:50]  Red: right? There

[01:14:50]  Blue: are certain types of algorithms that a finite state machine can run, but there are other types of algorithms that it can't run. And the push-down automaton can run every algorithm a finite state machine can run, plus it can run some additional algorithms, but it also has algorithms that we know it can't run.
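A standard concrete example of that capacity gap (my illustration, not from the episode) is recognizing balanced parentheses: a finite state machine has only a fixed number of states, so it can only track nesting up to some fixed depth, while the push-down automaton's unbounded stack handles any depth. A Python sketch of the stack idea:

```python
def balanced(s):
    """Recognize balanced parentheses, push-down-automaton style.

    The stack is exactly the extra power a push-down automaton has over
    a finite state machine: with only a fixed set of states you could
    track nesting up to some fixed depth, but never arbitrary depth.
    """
    stack = []
    for ch in s:
        if ch == "(":
            stack.append(ch)
        elif ch == ")":
            if not stack:
                return False  # a closer with nothing to match
            stack.pop()
    return not stack  # balanced only if nothing is left unmatched
```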

[01:15:08]  Red: Okay.

[01:15:09]  Blue: Then we have the Turing machine, which, to the best of our knowledge, can run every algorithm you will ever be able to conceive.

[01:15:16]  Red: Okay.

[01:15:16]  Blue: Well, it’s actually not true. Oh, okay.

[01:15:21]  Red: So

[01:15:22]  Blue: there’s actually an assumption there that has to get included that often gets overlooked.

[01:15:27]  Red: Okay.

[01:15:27]  Blue: It’s not actually that hard to come up with algorithms that the Turing machine can’t run.

[01:15:31]  Red: Wow.

[01:15:33]  Blue: And in fact, we have made-up machines that can run algorithms that the Turing machine can't run. What we really mean, and this is where some subtleties come in, what the Turing thesis really means, is that every algorithm that you can physically run can be run on a Turing machine. You can conceive of a physically impossible computer that can run an algorithm a Turing machine can't run, but you can't conceive of a physically possible computer that can. Furthermore, they all run in the same class. There may be differences in speed, but the class doesn't change. So suppose I have an exponential algorithm; let's take the traveling salesman as an example. The traveling salesman problem, we have very good reason to believe at this point, though there's no way to produce a mathematical proof, will always require an exponential algorithm. The traveling salesman problem is: you've got a bunch of cities, you've got paths between the cities, and I want to find the absolute shortest route that allows me to visit every single city. If you're insisting on getting the absolute shortest route, not merely a very short one, then the number of cities and paths determines how long it takes to run the algorithm, and it's exponential. So after a very small number of cities and paths, the algorithm becomes so slow that on the fastest imaginable computer, the sun will explode long before the algorithm terminates.
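To illustrate that blow-up (my own sketch, not from the episode), here's the brute-force exact approach in Python. It checks (n-1)! orderings of the cities, and that count, not the machine running it, is what makes the exact problem intractable as n grows.

```python
import itertools
import math

def shortest_tour(dist):
    """Brute-force exact TSP: try every ordering of the cities.

    dist maps ordered city pairs (i, j) to distances; city 0 is fixed as
    the start. The number of tours checked is (n-1)!, so the running
    time explodes after a small number of cities, regardless of which
    classical computer executes it.
    """
    n = max(i for i, _ in dist) + 1
    best_len, best = float("inf"), None
    for perm in itertools.permutations(range(1, n)):
        tour = (0,) + perm + (0,)
        length = sum(dist[(tour[k], tour[k + 1])] for k in range(n))
        if length < best_len:
            best_len, best = length, tour
    return best_len, best
```

For four cities at the corners of a unit square, the shortest tour is the perimeter, length 4. But even at just ten cities there are already 9! = 362,880 tours to check, and the count multiplies with every city added.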

[01:17:19]  Blue: That's true for a Turing machine, but really what we're saying is that's true for every computer you can conceive that actually follows the laws of physics. Now, it turns out that's wrong too. That was the theory, though, right? But then they discovered, and by they I mean David Deutsch, well, it was actually Feynman's idea first, but David Deutsch was the one who really turned it into a viable theory, that there's this thing called the quantum computer, and that there are certain algorithms that, while they can be run on a Turing machine, have a different computational class on the quantum computer, because the quantum computer is exploring answers across the multiverse and tries all sorts of things simultaneously. The main example people use is Shor's algorithm. Shor's algorithm is an algorithm that is not tractably implementable on a Turing machine, but is on a quantum computer. And it solves the factoring problem, whose difficulty is what RSA encryption relies on. So a quantum computer can crack RSA encryption, once one exists; we don't really have any as of today, at least none that are big enough to be useful. Once we have a quantum computer with enough qubits that it can realistically factor a large number, it will be able to crack RSA encryption in real time, whereas a Turing machine could never do that. We can write an algorithm that cracks RSA encryption on a Turing machine; it's just that the sun would explode first, right? So for all intents and purposes, it's intractable. There's no point in even trying, because, like it or not, you actually care about whether you're going to live long enough to see the answer.
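For contrast, here's what classical brute-force factoring looks like (my illustration; Shor's algorithm itself needs quantum hardware and isn't sketched here). The cost grows like the square root of n, which is exponential in the number of digits of n:

```python
import math

def trial_division(n):
    """Find the smallest nontrivial factor of n by brute force.

    Worst case this takes about sqrt(n) steps, which is exponential in
    the digit count of n: fine for small numbers, hopeless for the
    2048-bit moduli used in RSA. Shor's algorithm on a quantum computer
    factors in polynomial time instead.
    """
    for d in range(2, math.isqrt(n) + 1):
        if n % d == 0:
            return d
    return n  # no divisor found: n is prime
```

Doubling the number of digits roughly squares the work here, which is why "the sun explodes first" is the right summary for realistic RSA key sizes.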

[01:19:05]  Blue: So we basically say RSA encryption is uncrackable today, when what we really mean is it's not crackable within your lifetime. With a quantum computer, it is crackable within your lifetime. In fact, it's crackable rather quickly. So what David Deutsch did is he said, look, I've now disproven the Church-Turing thesis, but I'm replacing it with a new thesis, the Church-Turing-Deutsch thesis. And the principle is slightly different.

[01:19:37]  Red: That was going to be my next question for you. So you’re moving right into it. Okay.

[01:19:42]  Blue: So now here's where things get confusing. As a matter of shorthand, we refer to the Church-Turing-Deutsch thesis as the Church-Turing thesis, even though that's the one he disproved. And this is very common usage, and there's no way around it at this point.

[01:20:01]  Red: Okay,

[01:20:01]  Blue: because the Church-Turing thesis is so similar to the Church-Turing-Deutsch thesis. And in spirit they're saying the same thing, even though in the technicalities there are differences between them. Even Deutsch refers to the Church-Turing-Deutsch thesis as the Church-Turing thesis, or the Turing principle for short. Okay. Now, there is a difference between the Turing principle and the Church-Turing thesis, and it's actually Roger Penrose who pointed this out. Okay, he's the one who coined the Turing principle, although he called it the Turing thesis. So it was actually Deutsch who coined the term Turing principle, because he misquoted Penrose when Penrose actually said Turing thesis. It's all very confusing, I know. Okay. So Deutsch, in his book The Fabric of Reality, said Penrose coined the term Turing principle. But I've looked up the actual book, and what Penrose actually says is Turing thesis. So it's actually Deutsch who coined the term, by accident. In any case, the idea of the Turing principle does come from Penrose.

[01:21:09]  Red: Okay.

[01:21:09]  Blue: And that’s what Deutch really meant.

[01:21:11]  Red: Okay.

[01:21:12]  Blue: So the difference between the Church-Turing thesis and the Turing principle, according to Penrose, is that Church didn't want to take the Church-Turing thesis to its logical conclusion. He simply said, I don't accept that this means that all of reality is computational. I accept only that this means that all algorithms can be run on my computer. Turing, though, could see that that was a problematic statement. So Turing said, actually, there's an implication here. If you can't conceive of a physical computer that can run a different computational class, then what you're really saying is that the laws of physics don't allow you to build a computer of a different computational class than this, which means all of reality is computable. That was Turing's conclusion. And by the way, that's a correct conclusion, and Church's conclusion was false. So the Church-Turing thesis would be the combination of Church and Turing. The Turing principle, or Turing thesis, would be the additional, correct conclusion that computational theory is a branch of physics, and that what we're really doing is talking about what physics is capable of computing. And that's Deutsch's

[01:22:33]  Red: contribution.

[01:22:34]  Blue: And that is Deutsch's contribution. He pulled that all together and made it all clear.

[01:22:39]  Red: Okay.

[01:22:40]  Blue: So let me restate it now. Here's the problem. The Church-Turing thesis is not the same as the Turing principle only if you assume that Church was correct, and he's not. If you assume Church is wrong, the Church-Turing thesis is actually identical to the Turing principle. Does that make sense? They're one and the same.

[01:23:01]  Red: I’m getting there. I’ll say that. I’ll look. I’m looking forward to re listening to this when the podcast is released. Okay.

[01:23:08]  Blue: So let's make this simple. We're going to use Church-Turing thesis as a synonym for Turing principle, and Turing principle is the easier to say, so I'm going to say Turing principle. But when I say that, I don't mean the Church version of the Church-Turing thesis, and I don't even mean Turing's version of the Church-Turing thesis. I really mean the Deutsch version, the Church-Turing-Deutsch principle, which we're going to summarize as the Turing principle.

[01:23:40]  Red: Okay. Okay.

[01:23:41]  Blue: Whenever I say the Turing principle, or accidentally say Church-Turing thesis, I really mean the Deutsch version of that, which is the strong version. Now, what Deutsch did, in addition to pointing out that the Church-Turing thesis was wrong, and that we can actually build a quantum computer that violates it, is he also said, but I can build a quantum Turing machine for which the thesis is true. Therefore, I'm actually showing that it's true: I'm showing that the original version is false and refuted, but that a stronger and more testable version of it is true. Okay. That's what Deutsch is trying to show. All right. Then he goes on, and what he does is a mapping. Now, this is something we do in computer science. How do I know that the finite state machine can't run algorithms that the Turing machine can run? I do this based on mappings. I demonstrate that the Turing machine can do everything the finite state machine can do by showing a mapping between the two machines. Then I demonstrate that there's no equivalent mapping back. Okay. Basically, what you do is you use mappings. Deutsch used that same idea. He used a mapping, but in this case between quantum mechanics and the quantum computer, and he showed that the two were equivalent. Equivalent at least in terms of running algorithms. Chiara Marletto, in my interview with her, pointed out that he didn't actually show that they were equivalent in all ways, but they were equivalent in terms of their ability to run algorithms.

[01:25:18]  Blue: That is to say, you can describe all of quantum mechanics in terms of a quantum computation on a quantum computer. That's what that means. Okay. When he did that, what he really did is demonstrate that if quantum mechanics is true, then it must be that the Turing principle is also true. Now, there is wiggle room there, because guess what? We know quantum mechanics is false. So we don't really know what the next theory is going to bring us. It may end up violating the Turing principle. This is just the way science works.

[01:25:55]  Unknown: Okay.

[01:25:56]  Blue: Deutsch's point isn't that we know the Church-Turing-Deutsch principle is true. He's claiming it's a best theory that has no competitor. This is where he brings Popper's epistemology in. Even though he knows quantum mechanics is false, he's saying our best theory is that all of nature is computable.

[01:26:19]  Red: Quantum mechanics is false because of the many-worlds interpretation? No.

[01:26:25]  Blue: The reason why we know quantum mechanics is false is because it's at odds with general relativity. The two are in contradiction. Every physicist knows that. David Deutsch admits it. Stephen Hawking says it in his book. I've read it in almost every physics book I've ever read. Penrose explains the contradiction, and for about five seconds I understood it, because it's a big mathematical equation, and then I couldn't understand it anymore because it slipped out of my mind. But I know that there is actually a contradiction between the two, because I remember confirming it. By the way, that's why we're looking for quantum gravity. The reason why we're looking for a theory of quantum gravity is because we basically know our two best theories are incompatible, and thus both must be wrong. I suppose there's some assumption there. Could there be some way to take quantum physics, which is generally assumed to be the better of the two theories, some way that we just don't currently know, to wrap general relativity inside it? Well, no, because there's a contradiction. But you can never truly rule the possibility out. Maybe we made a mistake; maybe someone will show that there's an error in the demonstration of the contradiction. But this isn't how Popper's epistemology works. We're not skeptics. Critical rationalists are not skeptics. This is the thing that Saadia and I keep disagreeing over on your Facebook page. Deutsch isn't claiming, I know this is true. He's claiming it's a best theory, and that we embrace best theories as if they're true. That's his claim. It's a very specific claim.

[01:27:59]  Blue: And that's the rational thing to do, because it does not make sense to do what Saadia is doing on your page, where she says, I've shown there's a problem with this theory, so I get to throw it out. That's not rational under critical rationalism. We embrace the theory until a better theory comes along, and we embrace it as true. That's the right thing to do. And the reason why is because these theories have verisimilitude. Quantum mechanics, even though it's wrong, is largely right. We don't know of a single example where it makes wrong predictions. It's clearly got really high verisimilitude. And so that's what Deutsch is actually saying. He's not making some ultimate claim that we know for sure and will never find out otherwise. In fact, here's something interesting. Deutsch, in the original version of his paper, actually wrote about how his theory could be wrong, and he gave an example of how it could be wrong, because he was well aware of that fact. I only know this because Roger Penrose mentions it in his book. Penrose says he was talking with Deutsch, that Deutsch told him this, and that he read Deutsch's original paper and gave feedback on it. In Deutsch's original paper showing the Turing principle was correct, Deutsch actually says, when we have a theory of quantum gravity, we may find that it is able to come up with an oracle machine. Now, in computational theory, an oracle machine is a magic machine that can solve the halting problem. And it's well known that if you had an oracle machine, there are all sorts of really interesting problems you could solve.

[01:29:42]  Blue: There are giant lists they've made, because the halting problem is known to be something we would want to solve if we could. So as part of computational theory, they have this idea of an oracle machine. They say, pretend you have an oracle that can solve the halting problem. Now what could your computer solve? What types of algorithms could your computer run that a Turing machine can't? And they know what those are. They've come up with algorithms that an oracle machine could solve that are just physically impossible to solve today. So what David Deutsch said was, maybe we will find, when we have a new theory of quantum gravity, that we have a way to use quantum physics under that new theory to create an oracle machine. In fact, he laid out how that might be possible. He explained that you could create a closed timelike curve, a closed time loop, and that you could put the algorithm into it and see whether it ever terminates, because it would instantaneously either terminate or not terminate, and you would get the result back from the closed time loop. This is based, by the way, on the many-worlds interpretation of quantum physics. So David Deutsch actually worked out how a future theory of quantum gravity might, maybe even plausibly, allow us to violate the Turing thesis. Deutsch is saying, yes, that might happen. That absolutely might happen. But as of today, we've got no such theory. So we exclude it, as per Popper's epistemology. We do not include what-if scenarios, because there's an infinity of those.
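The reason an oracle machine has to be "magic" is the standard diagonal argument, which can be sketched in Python (my illustration, not from the episode): whatever a claimed halting predictor says about a program, you can build a program that does the opposite, so no ordinary program can be a correct halting oracle.

```python
def make_refuter(predicted_to_halt):
    """Build a program that does the opposite of a halting prediction.

    This is the diagonal argument for why no Turing machine can be a
    halting oracle: for any claimed predictor, feed its own verdict back
    in and construct a program that falsifies it. An oracle machine is
    the fiction of a device that escapes this trap.
    """
    def refuter():
        if predicted_to_halt:
            while True:        # verdict was "halts", so loop forever
                pass
        return "halted"        # verdict was "loops", so halt at once
    return refuter
```

If the predictor says the program loops forever, the constructed program returns immediately; if it says the program halts, it loops forever. Either way the predictor is wrong about at least one program.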

[01:31:22]  Blue: We don't know. We don't know. I mean, it could be anything if you start including those. We want to embrace the theory we have today as if it's true. What are the implications of the theory today? Well, one of those is the Turing principle, period, end of story. That's all he's saying. He's not saying anything more dogmatic than that. It's such a conservative viewpoint. Now, when I try to talk with people about this, I honestly can't get them to see that this is a really reasonable thing for him to say. They want to make arguments: well, how do you know there isn't some future theory? He's already accounted for that. He's using Popper's epistemology to set that aside for now, and he's waiting for that future theory to come into existence. Then he'll assess his theory again at that point. It's so rational. It's so reasonable. That's because Popper's critical rationalism is rationality. This is the rational way to behave. There isn't some alternative rational way to behave. You embrace your best theories. You don't embrace theories that don't exist yet. That's what rationality is.

[01:32:23]  Red: So Penrose’s theory would be an example of a… It is. So Penrose

[01:32:29]  Blue: is not making a rational theory, and he knows he's not. He says he's not. Penrose is saying, I've got this gut feel that maybe it will turn out that there's some other theory of quantum mechanics that we will have someday that will demonstrate that the Turing principle is false, and that brains rely on that physics. That's what Penrose is saying. It is an absolutely irrational thing for him to say, but human beings aren't rational, and it's okay.

[01:32:56]  Unknown: Interesting speculation. It is. It’s a very interesting speculation, but it absolutely goes against all the best theories today and he knows that.

[01:33:04]  Blue: Penrose is saying: my gut feeling, if it’s right, says quantum physics is wrong. He says that. He says in his books: if my gut feeling is right, Gödel’s theorem is wrong. It has to be. You can’t use Gödel’s theorem as an example of something humans can do and computers can’t, and then claim that Gödel’s theorem is still right. It doesn’t work that way. If humans can get past Gödel’s theorem, which is what he’s claiming, then Gödel’s theorem is wrong, and there’s some type of physics out there that we haven’t discovered yet that’s going to allow us to show that Gödel’s theorem is wrong. None of that’s true. For one thing, I don’t think humans get past Gödel’s theorem at all.

[01:33:50]  Unknown: I think Penrose is just outright mistaken on that front,

[01:33:54]  Blue: but that is his argument. And I wonder how many people who use Penrose’s argument even realize that’s what he’s saying. They kind of just say, oh, Penrose says that quantum mechanics is necessary for brains. The way they use it is so unsophisticated compared to the way Penrose actually argues it.

[01:34:11]  Unknown: I’m guessing 99%.

[01:34:12]  Red: Can we bring this back to human universality then? Is there a relationship, or is the universal explainer hypothesis just completely different, or does one kind of follow from the other or relate to the other?

[01:34:31]  Blue: There is a tenuous connection between the two. Let me explain the connection as I understand it. First of all, they utilize the concept of a jump to universality. Let me actually give an example from Church and Turing. Church had his own formalism; it’s actually more like a grammar than a machine, but it was able to do every algorithm that the Turing machine could do, and it did it with the same computational class. Turing was able to prove that the two were equivalent. Now, that’s a weird thing, because they’re not even similar. One’s a grammar, one’s a machine. Why in the world would they happen to be equivalent? The hypothesis is that there is a jump to universality: once you reach the Turing machine, every machine that you create from that point forward is going to end up being exactly equivalent to a Turing machine. Now, they came up with all sorts of attempts to refute that theory. They tried doing Turing machines with 2D or 3D space for the memory, for example. And you can always come back and show, using a proof, using a mapping, that every machine you can conceive of is exactly equivalent to a Turing machine. If there wasn’t a jump to universality, that would just be a weird, unexplained fact of nature. But if there is a jump to universality, if the Turing machine really can process every single type of algorithm that can be physically realized on a computer, then that would explain why all these machines just happen to be equivalent. Okay, do you see the logic there?
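As a toy illustration of the Church side of that equivalence (this is Python mimicking the lambda calculus, not Church's actual notation): Church numerals represent a number n as "apply a function n times", and this grammar-like encoding computes the same arithmetic any machine would.

```python
# Church numerals: the number n is encoded as "apply f to x, n times".
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))   # one more application of f
add  = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))
mul  = lambda m: lambda n: lambda f: m(n(f))      # compose n(f), m times

def to_int(n):
    """Decode a Church numeral into an ordinary Python int."""
    return n(lambda k: k + 1)(0)

one, two = succ(zero), succ(succ(zero))
three = add(one)(two)

assert to_int(three) == 3
assert to_int(mul(two)(three)) == 6   # same answer a Turing machine gives
```

Nothing here looks like a tape or a machine, yet it lands on the same answers, which is the surprising equivalence the speakers are describing.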

[01:36:02]  Red: yeah.

[01:36:03]  Blue: Okay, really interesting. What Deutsch is saying is they’ve never proven the Church-Turing thesis correct, and that’s probably impossible. But if we treat it not as math but as a scientific theory, a conjecture, then there have been serious attempts to refute it and they have all failed, other than Deutsch’s own: the quantum computer, which then got rolled up into a new version of the theory. Okay, so the fact that that has happened makes the Church-Turing principle our best theory: that there is, in fact, a jump to universality that takes place. There’s actually two jumps, right? There’s the original Church-Turing thesis, and then there was the quantum version of it. Now, you can argue that in some sense they’re almost the same thing, but there is enough of a difference, a difference in computational class, that in a sense there were actually two jumps to universality. And if you really want to get down to it, a finite state machine is a universal machine too; it’s just universal over the class of algorithms it can do. So there have actually been four jumps to universality in computational theory. Deutsch is using the same idea. There’s an analogy here: you’ve got these different levels of animal intelligence. Remember, he doesn’t buy Bern’s theory that there was a jump that took place between animals with insight and previous animals. But even if you take Bern’s theory into consideration, you’ve got this gigantic leap that suddenly takes place with humans. And in fact, what we call Darwin’s theory is actually the theory of Darwin and Wallace, right?
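The finite-state-machine point can be sketched concretely. A DFA is "universal over its own class" (the regular languages): any question in that class compiles down to a transition table, even though a DFA can't do everything a Turing machine can. The example language here is my own toy choice for illustration.

```python
def run_dfa(transitions, start, accepting, inputs):
    """Run a deterministic finite automaton over a string of input symbols."""
    state = start
    for symbol in inputs:
        state = transitions[(state, symbol)]
    return state in accepting

# A two-state DFA deciding "does the bit string contain an even number
# of 1s?" -- no tape, no unbounded memory, just state transitions, yet
# every regular-language question can be compiled to a table like this.
even_ones = {
    ("even", "0"): "even", ("even", "1"): "odd",
    ("odd",  "0"): "odd",  ("odd",  "1"): "even",
}

assert run_dfa(even_ones, "even", {"even"}, "1100") is True
assert run_dfa(even_ones, "even", {"even"}, "1101") is False
```

The jump from this class up to Turing machines, and again to quantum computers, is the ladder of universality classes the episode is describing.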

[01:37:38]  Blue: So one of the people who created Darwin’s theory, Wallace (we always forget about Wallace), was really troubled by the fact that you couldn’t explain this gigantic leap of intelligence that took place: you’ve got animal, animal, animal, you’ve got apes, and then suddenly you’ve got people building spaceships and railways. And it seems

[01:37:59]  Red: like a reasonable concern.

[01:38:00]  Blue: Right. And so Wallace decided that was God, right? That God interfered. He couldn’t come up with anything else. Darwin disagreed with him and got on Wallace’s case: you shouldn’t say that, and so on. What Deutsch is really saying is that it actually was a fair question. Wallace had the wrong answer, but Darwin was wrong to claim it wasn’t a fair question. It’s a completely fair question. And what Deutsch is saying is that there’s a jump to universal explainership. You’ve got animal intelligence, animal intelligence, and then suddenly you’ve got universal explainers. And when you have that, a giant leap takes place, just like with the Turing machine. Okay, that’s the analogy. And then you reach a pinnacle, just like you do with the Turing machine, where you cannot conceive of something beyond a universal explainer: the class of what can be understood can’t grow larger after that. Now again, he’s not making a dogmatic claim at all. He’s simply putting it out there: this is the explanation for why you have a giant leap of intelligence. And you do need to explain that. We just saw that Patel’s scaling hypothesis doesn’t really work as a competitor, because what he’s saying is not an explanation, whereas Deutsch’s is. Furthermore, Deutsch’s is at least rooted in something. It’s rooted in biology, it’s rooted in evolutionary theory, because it’s based on Popper’s epistemology. I mean, Deutsch is really tying his theory to all these other great theories. It just

[01:39:36]  Red: seems to make a lot of intuitive sense to me. Like when you really reflect on this idea that there’s a whole wealth of things that are just so far beyond human comprehension that we could never even understand them, by its very nature: well, what’s beyond human comprehension is just beyond human comprehension. We’ll never know about those things. But I don’t know. Deutsch’s idea on this seems to make a lot of sense on a basic level of common sense to me.

[01:40:10]  Blue: There’s a second argument they use that ties it to computational theory, which I actually think is pretty good. So here’s the argument. We know that computational universality boils down to a very small set of primitives. There are different ways you can define computational universality, but one of them is the NAND gate. A single NAND gate by itself is universal. So with a logic gate that performs a not-and operation, you can in theory build any type of algorithm you want. It is Turing complete. Do human beings understand the NAND gate? Well, yeah. Okay, so right there we know that a human can understand any algorithm, because every algorithm can be broken down into NAND gates, and every human, at least past a certain age and barring severe mental impairment, understands NAND gates. Therefore, you cannot conceive of an algorithm that a human can’t at least in principle understand. And furthermore, according to the Church-Turing thesis, according to the Turing principle, everything in nature can be explained in terms of computation. That’s why we use math to describe physics, right? And it’s always computable. The trick here is that of course it’s computable, because if physics was doing something that wasn’t computable, we would use that aspect of physics to make a new type of computer. So we’re always going to find that it’s computable. Unless there’s something beyond physics that is supernatural, that’s got nothing to do with physics, it’s pretty much guaranteed that we can build a computer that can do it.
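The NAND claim is easy to check directly. These are the standard constructions of NOT, AND, OR, and XOR out of nothing but NAND, verified exhaustively over every possible input:

```python
def nand(a, b):
    """The one primitive gate: true unless both inputs are true."""
    return not (a and b)

# Every other Boolean function can be wired up from NAND alone:
def not_(a):    return nand(a, a)
def and_(a, b): return not_(nand(a, b))
def or_(a, b):  return nand(not_(a), not_(b))
def xor_(a, b):
    t = nand(a, b)
    return nand(nand(a, t), nand(b, t))

# Exhaustive check against Python's built-in logic:
for a in (False, True):
    for b in (False, True):
        assert and_(a, b) == (a and b)
        assert or_(a, b)  == (a or b)
        assert xor_(a, b) == (a != b)
```

Since any circuit, and hence any algorithm, can be composed out of these gates, understanding NAND plus composition is, in principle, enough to understand any algorithm, which is the step the argument leans on.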

[01:41:52]  Blue: There may be exceptions. Say we did find out that under quantum gravity you could build an oracle computer; that would then be a computer that could do things a human couldn’t do. But we would still understand it. We understand the concept of an oracle computer. So even in a case like that, it would be benign. That would be an example where the connection between explanatory universality and computational universality didn’t actually end up holding, but in a way that was benign; it would still be basically true. And this is really what they’re trying to say. They’re trying to say that because we have computational universality, and because humans can understand a NAND gate, and because physics is computable, it’s really hard to see how we could come up with a theory that humans couldn’t understand. What would that theory be? Like if Penrose’s theory turns out to be true, that there’s this physics out there that shows Gödel’s theorem isn’t even true. Or Saudia’s theory that she’s trying to advance, that we’re going to find out there’s physics that creates novelty. She’s trying to connect a number of things together into her theory. What would that theory look like? Imagine writing a paper on that theory. What would the paper even say that humans could comprehend, if it doesn’t include mathematical equations? It’s hard to even figure out what Penrose is suggesting the theory would ultimately look like. Because, and this is the secret, humans do in fact understand things through computation. Now, I’m not saying comprehension and computation are the same thing, because they clearly aren’t.

[01:43:28]  Blue: I can have an algorithm that I know how to run but don’t necessarily understand how it works. We’ve all had that experience in computer science, where we finally just put the algorithm in and we don’t understand it. And yet I can in principle understand it. If I really take the time to figure out what the algorithm’s doing, and I’m interested enough, I can eventually break it down into pieces where I can go: oh, I get what this is doing now. And that’s the big secret: there is at least a one-way connection, in that when humans try to understand things, they try to understand them as algorithms. Understanding is something more than an algorithm, but it’s never less than an algorithm. And I’ve had people challenge me on this. They might say, well, Darwin’s theory is not an algorithm. Well, no, actually it is. I don’t think you can name a theory that isn’t an algorithm. They’re all algorithms, all of them. Every single scientific theory is an algorithm. And it’s true that sometimes the algorithm’s vague, that we don’t have all the details worked out, as with Darwin’s theory. And of course, that’s what the person really means: it’s not an algorithm in the sense that we haven’t actually figured out how to program it into a computer, because we don’t have all the details yet. But I can easily explain the parts we do understand as an algorithm, and I can program them into a computer as an algorithm. The only reason I can’t program all of evolution into a computer is that we don’t have a complete theory of evolution yet. That’s the only thing stopping me. Once we do, I’ll be able to do it.
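As a sketch of that claim, here is the part of "variation plus selection" that we can already program, in the spirit of Dawkins' "weasel" toy model. The target string, mutation rate, and population size are arbitrary illustrative choices, and nobody is claiming this captures all of evolutionary biology:

```python
import random

random.seed(0)  # deterministic run, for the example
TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def fitness(s):
    """Number of characters matching the target."""
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s, rate=0.05):
    """Variation: each character has a small chance of random change."""
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in s)

parent = "".join(random.choice(ALPHABET) for _ in TARGET)
while parent != TARGET:
    # Selection: keep the fittest of the parent and its mutated offspring.
    offspring = [mutate(parent) for _ in range(100)]
    parent = max(offspring + [parent], key=fitness)
```

The loop reliably finds the target, which is the speaker's point in miniature: the variation-and-selection core of the theory is already expressible as a program, even though the full biological story is not.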

[01:44:55]  Red: You said every scientific theory is an algorithm. But what about aesthetic, moral, and other kinds of theories we have?

[01:45:06]  Blue: That’s a good question. Now, I differ from David Deutsch somewhat on this. I don’t know that we understand beauty or morality well enough at this point that I can even really say anything strongly rational about them. I buy the overall argument from Deutsch that beauty is hard to vary. I don’t think that means it’s objective.

[01:45:29]  Red: He seems to want to unite all these domains with explanatory knowledge, or something like that.

[01:45:37]  Blue: Maybe we can. I’m not ruling that possibility out, but I do feel like his argument here is incomplete. The example I often use: the mere fact that Mozart throws away versions of his piece and keeps making error corrections to it. I agree that that demonstrates error correction going on, that there’s something underlying this that he’s working towards, that there are literally objectively better or worse versions of the piece. In that sense, I buy the idea of objective beauty. Here’s the problem, though: the same is true for recipes. They demonstrate hard-to-vary-ness too. Recipes are hard to vary. Take a really good recipe. We have a place here called Waffle Love, which has gone national, but it started in Utah. The founder was working as an accountant; his story is written up on the wall of every Waffle Love, so I’m taking this off the wall of Waffle Love. He was working as an accountant and needed to make money, but his real love was cooking. He kept trying to come up with better and better waffles. As he tweaked the recipe, he got improved feedback from his family, until finally they were desperate for them, because he had found just the right combination for how to make the dough, and these were the yummiest waffles you could imagine. Then he knew he’d really got something, and he decided to go national. Sure enough, it turns out that people just love these waffles, and these really were better recipes than the ones he threw away. This is the exact same argument that Deutsch is using with Mozart and music.

[01:47:16]  Blue: The question you actually have to ask is: what is the objective thing you’re converging on? Something universal in the laws of physics? That’s what I think Deutsch wants to say. Steven Pinker says, well, no, music may just be cheesecake for the ears. There are certain parochial biological desires that are wired into us by genetics. Some things taste better on average to more people than others, because we share a lot of genetics. You can find some people that are different, but there’s going to be a low level of variance; there won’t be that many. You can no more find a society of people who love random notes, which is Deutsch’s example, than you can find a society of people who find dung tasty. You can probably find an individual in both cases, but you can’t find a society, because that’s just not the way things work. The mere fact that something’s hard to vary doesn’t actually guarantee you an underlying universal law. What it really guarantees you is that there’s something underlying it, but it might be parochial. In the case of taste, I think we pretty much assume it’s parochial. I came across one person who tried to argue that there was an underlying something to taste; it’s not a very viable theory. We know that taste is a matter of taste. If you don’t like Waffle Love waffles and they just aren’t your jam, and you prefer broccoli, there is no sense in which you are objectively wrong. It literally is a matter of taste. And yet there is a sense in which that Waffle Love recipe was better on average for society. More people liked it. That is an objective fact that exists.

[01:49:02]  Blue: It actually was improving. We don’t really know if music and beauty are parochial, or if there’s something universal there. He’s arguing universal, but he’s using examples that also work for parochial things. And there’s a lot more here. I mean, when I mentioned this, Jerry Swan, one of the people I talk with, a PhD, an AGI researcher, sent me a paper, which I have yet to read (I have too many papers I haven’t read), that makes a good case that Deutsch is right. It’s an extended case that there is more to music than just parochial cheesecake for the ears, right? And so I don’t know. I’m not arguing there isn’t. It seems to me that there must be something more, because just as a music lover, I can’t really accept the alternative. I mean, the natural conclusion of the alternative is that preferring Bach to a child banging on a piano is just kind of an arbitrary thing. It just doesn’t sit,

[01:50:01]  Red: right? Yeah, it doesn’t sit.

[01:50:03]  Blue: I think there are a lot of questions there. Again, I actually think that Deutsch may turn out to be right about this, but I don’t think his argument is yet sufficient. He got us partway there, but he’s not yet differentiating beauty from things like taste, because his arguments work just as well for taste. That’s the problem. Now, how do I relate that back? Well, I don’t know. I mean, does taste have a computational explanation for why certain waffles taste better than others? I don’t even know how to answer that question. Sort of. It’s going to come down to certain chemistry laws, certain things about evolution, as to why we prefer certain levels of sugar versus others. So yes, there’s going to be some sort of computational reason to explain taste if you want to get really technical. And yet it kind of misses the point, right? Because ultimately this is parochial. It’s just the chance of evolution that we happen to have certain taste preferences, and therefore taste is hard to vary, just like music, just like beauty, but it’s also just subjective, because it’s parochial. It’s both at the same time. We don’t have a good enough argument yet to know whether that’s going to be true for beauty. I don’t really think the flower argument he uses in The Beginning of Infinity is sufficient either. He talks about how we also find peacock feathers beautiful, but he chalks that up to being parochial. But

[01:51:32]  Blue: why couldn’t, I mean, why couldn’t it be that there’s a joint evolution where humans preferred flowers because it was a sign that there was, you know, fruit or water there? I mean, like there could be a completely parochial explanation for why humans and bees find flowers beautiful.

[01:51:49]  Unknown: Yeah.

[01:51:49]  Red: I see. His argument is very incomplete. And I think he would admit that.

[01:51:56]  Unknown: Right.

[01:51:56]  Red: I admit that. I mean, what I hear him saying is that there are just very valid reasons to think that at least certain aspects of beauty may be objective. Yes. Transcend the individual.

[01:52:09]  Blue: Yes. You know, and you have to ask a question like: if aliens showed up, would they recognize our art as art? I think so.

[01:52:21]  Red: They would probably even recognize it as skillful.

[01:52:25]  Blue: And wow, this is an advanced culture.

[01:52:27]  Red: Right. I agree.

[01:52:28]  Blue: Unless you want to argue that there’s something parochial that’s between us and the aliens, I guess you could argue that, that would seem to suggest there’s something more going on. But I think we’re in a nascent form at this point.

[01:52:41]  Blue: This is, I think, where I differ from a lot of the Deutschians: they’ll start proclaiming that we already know these things are true, and we really don’t. These are nascent theories that are super interesting, that we want to pay attention to, but we’re not there yet. We don’t really have the necessary theory of beauty to answer the question that you just asked me. I guess this is my answer to your question: there might be, though, right? If Deutsch is right, if there are universal principles underlying beauty, then I should be able to put them into mathematical equations and explain them to you. So if that’s true, then the answer to your question is yes, we are still going to explain it using algorithms. And that’s really kind of where I’m coming from. Now, all this is a little fuzzy, I admit. There’s always the possibility it’s wrong. There’s always the possibility there’s something completely incomprehensible about nature. In a way, it almost doesn’t matter, though, and here’s why: if Penrose and Saudia are right that there’s something non-computable in nature, we’re never going to actually comprehend it, because we comprehend things through algorithms. So there will never be a scientific paper that explains their theories. So what do we do? Under Popper’s epistemology, what would be the right assumption? Would it be to, at some point,

[01:54:12]  Blue: say: qualia can never be explained, beauty can never be explained, intelligence and consciousness can never be explained, because they go beyond anything computable and that’s what we would need to explain them? No, the right thing to do would be to assume they are explicable. Even if it’s true, there is never a scenario where assuming something is inexplicable is the right thing, the rational thing, to assume.

[01:54:41]  Red: Well, I had someone online tell me that existence is just kind of a cosmic fart, and ultimately we can’t even understand really anything. But, you know, I mean, it’s an interesting idea. It’s just that once you start going down this road that knowledge is something real, it’s hard to really accept that. Right.

[01:55:05]  Blue: Okay. So that’s how we tie these together. And I actually find that a really compelling argument. It should be obvious: I’ve just laid out why I find the idea of a universal explainer incredibly compelling. I buy the basic argument, but I don’t think it’s sufficient on its own; I think it becomes sufficient once you tie it to the idea that there just isn’t any other way to understand things. I mean, there are people who will argue you can know things intuitively, non-computationally. I’ve had people argue that with me, right? Yeah. Okay, but we don’t really understand intuitions. There’s just no reason to believe intuitions aren’t computable. And in fact, we have really good reason to believe they are, because machine learning is a form of artificial intuition that’s computable. Right? It might be based on a lot

[01:55:51]  Red: of shortcuts or hearings. Yeah. It finds

[01:55:54]  Blue: heuristics. That’s what it does, right? There’s completely legitimate reasons for believing intuitions are, you know, I, because I come from religious background and run a lot of religious people, I’ll leave an open possibility of something supernatural, right? Maybe that there is something that is beyond comprehension that is supernatural. Okay. But clearly that can’t be part of science, just what we’re talking about in this podcast.

[01:56:18]  Red: Yeah.

[01:56:19]  Blue: So, I mean, I have to exclude that possibility up front, just because there’s no other option. If we’re going to discuss science, then we have to assume things are explicable. If science isn’t about the explicable, then what is science? What are we even doing? So I have to start with the assumption of computability. There just isn’t an exception to it, because otherwise I’m throwing explicability out the window too. Unless someone can demonstrate that I’m wrong: if they can show what it means to understand something that can’t, at least to the level we currently understand it, be put into an algorithm.

[01:56:53]  Red: Yeah.

[01:56:53]  Blue: Show me a counterexample. This is just a conjecture on my part. Show me a counterexample. I don’t think there are any. I just don’t think they exist, because I think humans actually do use algorithms to understand things. There’s something more going on that we don’t understand yet, but there’s nothing less than that. We don’t feel like we understand things unless we can explain them in little steps, which is what an algorithm is. I’m sorry, go ahead.

[01:57:17]  Red: I was going to try to bring this into steel-manning the hard version of universal explainership. We could save that for next week too.

[01:57:31]  Blue: Let me give the basics, because there is something here. Brett’s argument is this: if a person can understand and comprehend anything, if there really is universality, then isn’t that really the capacity that matters, that they can learn it? His argument would be that we are all capable of learning everything. That’s the only capacity that matters, and we all have it. Even my severely mentally challenged neighbor, you would expect that he could at least in principle understand a NAND gate. How would you keep him from understanding quantum physics, if he were interested enough and if he were able to spend enough time on it, et cetera? How would you hold him back from understanding quantum physics? Well, that’s a really fair question. It really seems to me like he would be incapable of understanding quantum physics; that would be my initial intuition. But when I really think through the idea of universal explainership, it seems like, at least in principle, if I could somehow get him interested enough, I could slowly, very slowly, help him understand each concept, if necessary breaking it back down to NAND gates until he got it: oh, I get this. And then I would have a severely mentally challenged individual who also understands quantum physics.

[01:58:57]  Red: Maybe I would just like to see a real world example.

[01:59:00]  Blue: I would too. I would too. But you can see what they’re getting at. It seems like that actually does follow from the idea of universality, of explanatory universality. And if it does, then can’t we say, so goes the argument, that ultimately the only capacity that actually mattered was whether the person was a universal explainer or not? So then you might say, well, what about working memory? Oh, but he can write stuff down on a piece of paper. So working memory is not actually a barrier to a universal explainer. Now, here’s the problem with saying that, and this is what we’ll get into next week. The Deutschians are admitting something here, right? If I can keep something in my head in working memory, and I can understand it without having to write every little piece down and review it over and over again, I’m way more interested. I’m way more likely to not get bored and give up on learning it. So it seems to me that working memory absolutely can make a difference, even if you’re going to claim that a universal explainer can write things down and therefore it shouldn’t matter. Because at the end of the day, it makes a difference in whether I’m bored or not.

[02:00:16]  Red: Yeah, no, this is a pleasant experience for me. I can tell you’ve thought this through. That’s very, very compelling.

[02:00:23]  Blue: So I think I really do see where they’re coming from. I admit that it must be the case that the single most important capacity is universal explainership itself, and that that should be determined by whether you can understand a NAND gate or not. If you can, you should be, at some level, a universal explainer,

[02:00:42]  Red: right?

[02:00:43]  Blue: And I can see that that must be true. That does mean, in principle, if that’s true, unless I’ve got some explanation otherwise, and as of today I don’t, that my severely mentally challenged neighbor should be able to learn quantum mechanics, at least in principle. Not necessarily fast. And this is where I definitely differ with the Deutschians. He may not choose to, because it’s just too hard, it’s just too cognitively demanding, and he gets bored. He may not. But to me, that is still a kind of capacity, and that is a measurable difference in cognitive capacity. Or it may be that it’s just too much effort, that it’s going to take him so long to get to the point where he understands quantum mechanics that he’s going to die first. Okay. Differences in speed. Universal computers can have differences in speed, so human universal explainers should be able to have differences in speed also. And speed affects our boredom and our interest level. This is why I keep saying you can’t tease these things apart.

[02:01:46]  Red: Yeah, well, it just seems to me that there’s certainly an element of truth to this conception of humans as universal explainers, that on some fundamental level we are universal explainers.

[02:02:00]  Red: But I think maybe where I differ from the harder version of that is that, you know, maybe we’re all a little mentally challenged in our own ways. It’s not even a matter of putting people into categories of mentally challenged and not. But

[02:02:18]  Blue: I think that’s right. I actually think that’s what we’re going to find out. So Michael Golding had an interesting theory he mentioned in one of his interviews; I’ve brought this up before. Madness, insanity, is the real issue here. For some reason, if things go wrong with the brain, even though you continue to be a universal explainer, you become very bad at error correction. A person who is mad: if their brain were working properly, they would be able to error-correct their little conspiracy theories or paranoid delusions or whatever is going on. And we’ve all experienced this. That’s what the dream state is. When I’m in a dream state, my cognitive capacity is hobbled in some way, and I don’t think we even understand in what way. We know some things, like that your inhibitions are disabled, so you do things in dreams you wouldn’t do in real life. That’s saying a lot right there. In fact, in a fancy way, we’re saying that you in your dream aren’t you; you’re a version of yourself with your inhibitions disabled, which is not you. You don’t actually experience those dreams. By the way, you usually don’t remember them either, because they get erased. So it’s not really clear that you actually dream in the sense we would normally mean. It’s almost like somebody else, a different aspect of your brain, is doing the dreaming. And it’s a not completely rational version of you: it accepts weird things happening. If you were able to actually cognitively process that sort of state, you’d immediately be able to go: oh, this is a dream.

[02:03:54]  Blue: And yet in your dream, it’s really hard to say: oh, this is a dream. Because your cognitive ability is disabled, you are, for all intents and purposes, an insane person while you’re in a dream. And we know that that can carry into the waking state: an insane person is someone where something’s gone wrong with their brain, and now they’re starting to act like they’re in a dream state while awake. They’re starting to accept weird theories, and they can’t seem to error-correct them. So for Michael Golding, who is a psychiatrist, the head psychiatrist at a hospital, this is his reality every single day, even though he’s a Deutschian and he buys universal explainership.

[02:04:30]  Red: So

[02:04:30]  Blue: he needs to explain this to himself. So the way he says it is, maybe there’s this module in the brain that error corrects, and maybe it gets disabled, or gets partially disabled so it’s not working as well. So you can error correct some things, but you can’t error correct as often as you should. Now, I don’t know. I mean, we’re making stuff up here. But you almost have to accept something like that theory to try to explain dream states and insanity, things like that. Okay, clearly error correction gets disabled to some degree.

[02:05:00]  Red: But

[02:05:01]  Blue: if you’re going to accept that, then you have a basis for explaining IQ differences that doesn’t violate universality anymore. Because now you can say, well, some people are better at error correction, for some reason. An insane person has problems with error correction on stuff that’s weird, like whether they heard voices in their head, but maybe somebody else just can’t error correct well in general, and so they can’t learn as fast. Well, yeah, if we’re going to accept that insane people exist, and we do, because that’s my observation, and we’re going to accept that this doesn’t violate universality, then we’ve now handed ourselves a way to explain IQ.

[02:05:39]  Red: Yeah,

[02:05:40]  Blue: this is something we’re going to see happen over and over again: the moment you really take the observations seriously and the theory seriously, you cannot help but explain IQ differences as well. That’s why I mentioned that if the difference in IQ is really just interests, then some people apparently are more interested in learning than others and gain more knowledge because they have broader interests. That would then be what we were measuring with IQ. You can’t get around this, right? IQ can’t be banished using universal explainership. It just can’t, at least not with the observations that we have available to us.

[02:06:16]  Red: Yeah, I think that really distills the issue. Wow.

[02:06:19]  Blue: Okay, and yet, we can admit there’s a legitimate set of problems here. On the one hand, I accept universal explainership, and on the other, I can’t really fully explain, as of today, these problems with the theory. Why can’t a severely mentally challenged individual understand quantum physics when they can understand an AND or NOT gate?

[02:06:41]  Red: Yeah,

[02:06:42]  Blue: and I don’t have the answers. We’re guessing, right? We’re guessing it’s related to interest. We’re guessing it’s related to error correction modules. We’re guessing it’s related to speed, and that that leads to boredom. Notice, though, that each of these guesses I made is testable, and that’s really what it comes down to. If I’m going to make up non-testable explanations, then I’m wasting my time because I’m just ad-hoc-ing. When I come up with an explanation, I don’t expect it to be right. I expect it to be testable. And then we sit down and we figure out: okay, let’s say it is a difference in interest. How do we measure that? Let’s say it actually is a difference in error correction. How would we measure that? And I believe every single one of these is measurable. And ultimately, we can get down to what’s actually going on, and we can understand the theory better through that process. But only if we accept that the problems are real and we try to resolve them by coming up with explanations that are testable.

[02:07:37]  Red: All right.

[02:07:37]  Blue: We’re like way over time on this one. That was kind of a fun aside.

[02:07:40]  Red: Maybe that last part could be its own podcast, but it was all very interesting. So thank you. Thank you for that, Bruce.

[02:07:50]  Blue: Yeah. So next time, I’m going to go through Brett’s theory in more detail, and we’re going to give it the same critical rationalist treatment we gave Patel’s theory, which, you know, seems fair. We want to criticize it as strongly as we can. I want to see what survives and where the problems with the theory are. And we’ve kind of done that to some degree as we went along here, but I think I can get really specific. Right now I’m summarizing their views, and I might be getting them wrong. Next time I can use their words. I can say they actually said this, here’s what’s going on, and I can get very specific with my criticisms. Okay.

[02:08:25]  Red: Well, I’ll look forward to that. All right. Bye -bye. Thanks, Peter. Thank you. Bye -bye.

[02:08:32]  Blue: The theory of anything podcast could use your help. We have a small but loyal audience, and we’d like to get the word out about the podcast so others can enjoy it as well. To the best of our knowledge, we’re the only podcast that covers all four strands of David Deutsch’s philosophy, as well as other interesting subjects. If you’re enjoying this podcast, please give us a five-star rating on Apple Podcasts. This can usually be done right inside your podcast player, or you can Google “the theory of anything podcast Apple” or something like that. Some players have their own rating system, and giving us a five-star rating on any rating system would be helpful. If you enjoy a particular episode, please consider tweeting about us or linking to us on Facebook or other social media to help get the word out. If you are interested in financially supporting the podcast, we have two ways to do that. The first is via our podcast host site, Anchor. Just go to anchor.fm slash four dash strands, f-o-u-r dash s-t-r-a-n-d-s. There’s a support button available that allows you to do recurring donations. If you want to make a one-time donation, go to our blog, which is four strands dot org. There is a donation button there that uses PayPal. Thank you.

