Episode 87: Is the Universal Explainer Hypothesis Falsifiable?

  • Links to this episode: Spotify / Apple Podcasts
  • This transcript was generated with AI using PodcastTranscriptor.
  • Unofficial AI-generated transcripts. These may contain mistakes. Please check against the actual podcast.
  • Speakers are denoted as color names.

Transcript

[00:00:06]  Blue: Hello out there. This week on the Theory of Anything podcast we consider: how does the concept of universality relate to human minds? Is the universal explainer hypothesis falsifiable? Is anything truly beyond human comprehension? And how would you frame universality as an interesting topic at a party? We also feature a guest, Dan Geshe, a fellow traveler Bruce has connected with on Twitter. Dan describes himself as a ski bum who is pro-enlightenment and interested in the infinite growth of knowledge. Very cool guy. I hope you enjoy this as much as I did.

[00:00:47]  Red: Welcome to the Theory of Anything podcast. Hey guys, how’s it going? Hello Bruce. Hi Bruce. Today we’ve got Dan Geshe. Is that how you pronounce it? Geshe?

[00:00:59]  Green: Yeah Dan Geshe.

[00:01:00]  Red: Dan Geshe with us today, who, someone that I know from Twitter, he's a crit rat on Twitter. And we've talked a number of times. We've also, like, Zoomed and talked and things like that. So he's a friend of mine. And he often raises really interesting questions that aren't the easiest to answer. And you know how I love a good problem. So I think that's what kind of attracts me to Dan's personality and his Twitter, is that he likes to just kind of raise these really deep, dark questions that make me have to, like, stop and think, man, how am I going to answer that? You know? So Dan, maybe you can give us an introduction to yourself so the audience knows something about you.

[00:01:45]  Green: Sure. I'm a computer science guy. I come from a software entrepreneurship background that fortunately freed me up enough to have plenty of free time to get into the intellectual world. And the David Deutsch crit rat world is definitely what has engaged me lately and changed a lot of my views on life. So I was attracted to Bruce because he seems to be a crit rat in the best sense of the word, where he's out there challenging the crit rat authorities that are out there, and he has answered a lot of the concerns and questions that I have about this world. So that's kind of my introduction into this world.

[00:02:39]  Red: Oh, awesome. Peter, you said you had a question you wanted to kind of start with.

[00:02:44]  Blue: Yeah. I thought it could be a good opening question here. So you're at a party. You tell someone you have a podcast. And they ask what it's about. Somehow it doesn't seem very interesting to just say science and philosophy. But if you launch into a 10-minute explanation of the four strands of reality, people aren't that interested, from what I've experienced. Seriously. I don't know. I don't know why. I mean, I sure was. But it seems to me universality might be the biggest theme, perhaps, of our podcast. How would you explain the relationship between universality and universal explainership, and just why it's so interesting and

[00:03:39]  Red: one of the most interesting things in the world,

[00:03:42]  Blue: perhaps, to someone, just briefly, in a party situation, who knows nothing? Wow. For either of you. That's a tough question. It seems like there's got to be a way to do it. I just can't. I've never found a hook that gets people into it.

[00:04:00]  Red: I have to tell you the truth about myself. I start spouting four-strand stuff in front of all my real-life friends all the time. And you don't get invited to parties. And I don't get invited to parties. So that was actually where Cameo came from. So Cameo, I worked with her. And we would go to lunch. And I would start spouting four-strand stuff. And she would be listening to me. And one day, as we're talking about David Deutsch and the four strands and Karl Popper, she goes, Bruce, let's start a podcast. And I don't know. I mean, I guess it works for some people. Cameo found it interesting enough that she wanted to start a podcast with me. That was kind of where Tracy came from, too. I mean, obviously, I've been friends with Tracy forever. I would talk with her on the phone and we would catch up. And I would just start spouting David Deutsch and Karl Popper. And she'd go, oh, wow, we should do a podcast, Bruce. So maybe the answer is that you go ahead and don't worry so much about the fact that you look like a total idiot. You're just passionate about it. And then people kind of get attracted to that. I don't know. Maybe that's the answer. Dan, what do you think?

[00:05:25]  Green: I mean, universality. But for me, if someone asks me what I'm into, I'm going to say I'm into the study of knowledge. And what are the limits of knowledge? And what are humans capable of? And what's the difference between humans and animals and artificial intelligence? And what are the limits of human intelligence? And they'll usually ask, well, what are they? And I'll say, there are no limits. It's infinite. And then if they want to get into it beyond that, I'm happy to start to get into jumps to universality and that type of thing that makes this whole infinite world possible.

[00:06:09]  Red: Can I ask if this is a contest between me and Dan in terms of how good our answers are? Because if so, I bow out now. I surrender

[00:06:16]  Blue: now. No, that was good. That was good. So this jump, this is something I'm very interested in. This jump between, like, Alan Turing's version of universality in computation, a universal computer, and then this universality in the human sense as a universal explainer. Like, how big of a jump is that? It's pretty big, isn't it?

[00:06:45]  Red: Yeah, it sure seems like it. Like, it seems like a huge jump, right?

[00:06:50]  Green: I mean, to me, it's more of an analogy, if you want to put it like that. Like, the jump to universality in Turing machines is fairly easy or straightforward to understand. You can kind of see hints of that, hints of a similar dynamic going on with human intelligence. So that helps you understand kind of the framework of how humans can jump to this universality. I think that was an excellent answer.

[00:07:25]  Blue: An analogy. I like that, Joe. It is an analogy. They really aren’t the same thing. They’re both jumps to universality, but they’re very different kinds of jumps to universality. Yeah.

[00:07:36]  Red: So now it's interesting that Dan just gave a really good answer on that subject and even just said that humans have no limits, because the thing that I invited him to the show on was that he was asking questions about what appear to be limits to humans. Dan, do you want to maybe explain some of the things that you brought up on Twitter and what your concerns are? I don't know if they're really concerns. You're just being a good crit rat and trying to explore this and trying to break our best theories as best you can, which is exactly what you should be doing.

[00:08:06]  Green: Yeah. Sure. Well, I think that the theory of universal explainers, humans being universal explainers, it's a beautiful theory, and I'm really attracted to it because of that, but from my experience with deep learning and neural nets, some criticisms or potential problems with it kind of came up. So I guess I'd start with this: if humans can explain anything, consider our best definition or explanation of, say, what a dog looks like. We have a hard time programming it directly, explaining directly what a dog looks like, but we can train a neural net, and that neural network literally becomes our best explanation or definition of what a dog looks like. And these deep neural networks are pretty dang inscrutable. That's an understatement. Yeah, we've made some progress with explaining these deep networks. There are things like superposition of neurons and virtual neurons that make you start to think that we're never going to truly, fully understand how these neural networks work. There are just layers upon layers of complexity in these networks. So to me, we can make some progress in understanding a deep network, but it's still not an understanding like we have of, say, special relativity. So I guess one of my questions for Bruce is, what are these different types of knowledge? Special relativity is an explanatory type of knowledge that we have, whereas a neural network of what a dog looks like is a different type of knowledge. So that would be one question that I have. And then taking that to the next level, when it comes to language, human language, we don't understand human language well enough to program it directly.

[00:10:31]  Green: But we can train an LLM that does understand language a heck of a lot better than programming it directly does. And we barely understand what's going on inside an LLM, even less than we do of, say, a neural network of what a dog looks like. And so I guess another point or question is about the knowledge that genes and memes evolve. Language is a kind of knowledge that genes and memes have evolved. It's not explanatory knowledge, necessarily. And it kind of evades our explanations of what that knowledge is. But again, we have an LLM, this inscrutable neural network, that does kind of understand it. And then I guess my final point is, why would we think that AGI, which is human intelligence, would have a clean explanation like we have of special relativity? It seems to me like it's probably more like evolved knowledge from genes and memes that's rather inscrutable. And so potentially, maybe we need to train a neural network with, say, conjecture and refutation in mind. Maybe that's how we can create AGI in the end, as opposed to having this clear understanding of what exactly AGI is. So there you go.
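
To make Dan's contrast concrete, here is a minimal sketch (not from the episode, and using made-up two-feature "dog" data): nobody ever writes the classification rule down explicitly; we fit parameters from examples, and the fitted numbers become the working "definition" without being an explanation.

```python
# Toy illustration of a learned "definition" versus a hand-written rule.
# The two features and the data are invented purely for illustration.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical features: [ear floppiness, snout length]; label 1 = "dog", 0 = "not dog".
X = rng.normal(loc=[[0.8, 0.7]] * 50 + [[0.2, 0.3]] * 50, scale=0.15)
y = np.array([1] * 50 + [0] * 50)

# Logistic regression fit by plain gradient descent; no explicit rule is ever written down.
w, b = np.zeros(2), 0.0
for _ in range(2000):
    p = 1 / (1 + np.exp(-(X @ w + b)))    # predicted probability of "dog"
    w -= 0.5 * (X.T @ (p - y) / len(y))   # gradient step on the log loss
    b -= 0.5 * np.mean(p - y)

# The learned numbers now function as the classifier's "definition" of a dog,
# but they explain nothing in the way an explanatory theory does.
print("learned weights:", w, "bias:", b)
print("training accuracy:", np.mean((1 / (1 + np.exp(-(X @ w + b))) > 0.5) == y))
```

A real image model or an LLM is the same move scaled up by many orders of magnitude, which is why the resulting "definition" is so much harder to inspect.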

[00:12:09]  Red: You know what? These are tough questions. And I tried to send you some tweets with some thoughts on this, but I hate Twitter. Tweets are the single worst way to try to explain things, given the level to which people are going to misunderstand what I'm saying and the fact that I'm doing it off the cuff and not giving a lot of thought to it. I mean, Twitter's the worst way to do it. So I invited Dan on the show and I started to think, okay, how am I going to even respond to his questions? Like, I don't even know myself. So I started to write down some thoughts on it and it very quickly became a two-hour presentation, which I'm now going to give. So I think this is a really interesting question. And I think Dan's right to ask it, because it takes our maybe intuitive conceptions of what universal explainership is, and it kind of makes you realize maybe we don't understand it that well and maybe we need to think about it a little more carefully. And of course, that's the whole point, right? That is, we want to think through, okay, given this theory, what are things that are wrong with it? And then you want to figure out some way to improve the theory to deal with that fact. And this is something that I've given a lot of thought to myself. I think I've mentioned in previous podcasts that I became interested enough in the intersection between machine learning and Karl Popper's epistemology that I actually went back to school and studied machine learning and got a master's degree in it just so that I would have some familiarity with it.

[00:13:44]  Red: I also wanted to improve my mathematical skills because I felt like they were just too weak, and I thought that it might help out a lot so that I could read some of these harder books. I don't know if it helped enough. Like, when I go read Deborah Mayo's book or a book on active inference or something, I still largely can't follow the math. So it maybe was a first stepping stone, but I've still got a ways to go myself. But let me actually try to address your question, Dan. And I feel like this is like a thread that, if you pull it, really interesting ideas come out of it. So I would hope that other people listening to the show have maybe thought of this question themselves. Maybe they're afraid to bring it up. They're not sure if it's a smart question or not. Let me say it's a very smart question. It's exactly the right type of challenge to universal explainership. So let me first start with a few ideas here. Famously, there is a mathematical proof that is so long that no human can understand it. In fact, let me read from a sciencenews.org article talking about this. It says, perhaps the most remarkable success so far came in 2004 when Georges Gonthier, a computer scientist at Microsoft Research in Cambridge, England, verified the proof of the four color theorem. So the four color theorem is this idea that you can take any map and, with four colors, color all of its regions so that no two touching regions ever share the same color. So he verified the proof of the four color theorem by computer.

[00:15:24]  Red: The problem dates back to 1852 when a college student noticed that only four colors were needed to fill in a map of the counties of England such that no adjacent counties shared a color. It took until 1976 to mathematically prove that four colors were enough for any map. That proof was more than 500 pages long and relied on a computer to check nearly 2,000 special cases. Many mathematicians objected to the proof because it was impossible to check by hand. This is from the article How to Really Trust Mathematical Proof. So this is not quite the same as what Dan is raising, but it's a little bit similar. It's certainly analogous to it, where there are some mathematical proofs that exist that no human can possibly comprehend by themselves because they're just too long, and there really is a hard limit to how much a human can hold in their head. And so does this mathematical proof, the fact that you need a computer to check the mathematical proof and the human can't, does this disprove universal explainership? I'm just going to put that question out there for now. We're going to come back to that. In fact, the fact that things like this exist, that humans cannot understand, literally just can't comprehend, is precisely why Yann LeCun makes the claim that there is no such thing as a general intelligence. So, a couple quotes from different articles about Yann LeCun. One says, often referred to as one of the three godfathers of AI, LeCun goes as far as to argue that there is no such thing as AGI because, quote, human intelligence is nowhere near general. That is a quote from LeCun.

[00:17:16]  Red: The Frenchman prefers to chart the path towards human-level AI. And then in a different article, he says, no AI system, no intelligence system is general, including humans. We are actually not very good at many things, LeCun says. We can buy a 30-pound game that can beat us at chess. Okay,

[00:17:36]  Blue: no. That's a really interesting way to turn the argument around, I've got to say. So he's arguing that humans are not AGIs, rather than. Okay. Okay. Yes.

[00:17:46]  Red: Okay. And he uses examples like the one I just gave you. I don't know if he's ever used the example Dan has used, which is probably a stronger example. But he's used examples like the mathematical proof one that I just used to show that human beings are not general intelligences, and so we shouldn't expect there to be such a thing as a general intelligence. At a certain intuitive level, this argument kind of does make sense to me. Okay. If I didn't actually understand the theory of universal explainership, I would actually find this a very compelling argument. And in fact, at one time, if you go back far enough years, I did find this a very compelling argument. Okay. So I think that we should maybe start with this kind of steel-manned version of the problem: the fact that it's not immediately obvious that human beings are general intelligences, or that there could be such a thing as a general intelligence. Okay. Now, there are a number of really simple examples out there. Honestly, we know humans actually do have a hard limit on how much they can hold in their head. And due to that fact, it just isn't hard to find something so complex that no human being could possibly understand it. In fact, the most obvious example of this is just a very large corporation or a very large government or something along those lines. Big corporations are beyond the comprehension of any one human. The CEO of a big enough corporation never understands what the corporation is doing, at least not at a very detailed level, right? And that's kind of a scary thing that exists.

[00:19:30]  Red: And one of the things that you have to do is you have to figure out through corporate governance how to govern a corporation without anyone understanding what the corporation is doing. Because it turns out you don't need to have anyone understand what the corporation is doing for the corporation to be effective. And in fact, this is probably a completely different podcast. One of the advantages of a good corporation is that nobody understands what it's doing. And so it's kind of off exploring different things via the problem of open-endedness and finding ways to make money that no one person ever thought of.

[00:20:05]  Blue: So what about the concept? I think that's maybe what I was getting at in my question about the party question: it's sort of the idea that if you really understand something, you can dumb it down in a way that's comprehensible. I mean, there's plenty I don't understand about general relativity or special relativity or something, but I can understand a dumbed-down version of it. I mean, I think a lot of things are like that, right?

[00:20:35]  Red: Yes. Okay. So let's actually take the corporation example, because it's probably the single easiest one, right? If I were to say, well, there's no such thing as a general intelligence or a universal explainer because no one human being can understand what a corporation is doing. If the corporation is large enough, if it's the size of Microsoft or IBM, no one person can understand what it's doing. Does this actually strike you as a compelling argument against universal explainership?

[00:21:06]  Green: I mean, I would say that it doesn’t because you can always drill down into it and understand any part of it if you wanted to.

[00:21:19]  Red: Yes. I think that's exactly the right answer. Okay. Now, that answer doesn't necessarily apply to Dan's examples, but I just wanted to point out that what we were forced to do here is we were forced to think a little harder about what universal explainership means. Okay. And apparently it doesn't actually require that one human being be able to understand, all at once, something that complicated.

[00:21:44]  Unknown: Okay.

[00:21:45]  Red: And in fact, in a corporation, for every single part of that corporation there is literally some human being that understands it. It's not like there's some inscrutable part of that corporation that nobody understands, right? There's somebody in charge, and they understand how that part of the corporation works, and somebody who's doing the actual work. So I don't think that the corporation argument works as a very good argument against universal explainership. Okay. Precisely because you just have to drill down far enough and then that part turns out to be understandable. Now, I'm going to argue that the mathematical proof has a similar answer. Okay. That while no one person can actually check that whole mathematical proof or understand it, no one step in the mathematical proof is inscrutable. And so the answer to the corporation one is kind of obvious. It's a little less obvious in the case of the mathematical proof. And in fact, it's not quite as compelling an answer, but you can kind of see how there's a similar answer there, where we're moving up a level and we're saying, well, you don't actually mean by universal explainership that you understand everything. What we really mean is that no one step is impossible to understand. And with this idea I'm forcing you to think a little deeper about what universal explainership means. Don't be vague about it. This is kind of like what we've called Popper's Ratchet in past episodes, where in Popper's Ratchet, it's quite literally that you only respond to problems by making your theories more empirical.

[00:23:34]  Red: This is not quite the same thing, because we're on the other side of the boundary line, but it certainly seems analogous to it: what we're doing is solving the problem not by being more vague and trying to cover over the problem, but by trying to drill down a little bit deeper. Okay, what do we actually mean by understanding anything or comprehending anything? And we're coming up with an answer that is less vague, so that we can see answers to the question. This is really where I want to start the conversation to try to address Dan's concerns: we want to follow this kind of looser version of Popper's Ratchet. We want to define things a little more deeply, not a little more vaguely, to try to answer his question. Are you guys with me so far on the ground rules I'm trying to set here?

[00:24:22]  Green: Sure.

[00:24:23]  Red: So now I need to introduce a different idea, and I'm going to come back to this. And it's the idea of what we mean by a theory, because it turns out we're always a little vague about it. So if I were to ask you this question: is Darwinian evolution refuted? What is the yes or no answer to that question? Or is it not possible to give a yes or no answer to that question?

[00:24:48]  Unknown: I mean, wouldn’t you say it’s not possible because we don’t understand it so well that we can program it directly?

[00:25:01]  Red: Interesting answer. That wasn't actually what I was looking for, but that's actually not a completely bad answer. I think what I'm looking for here is something like this. So when I ask the question, is Darwinian evolution refuted? If by Darwinian evolution I mean the theory Darwin laid out in his book, The Origin of Species, then I actually feel very comfortable saying that it is a refuted theory, like absolutely it is. Darwin laid out this idea that natural selection played the sole role in variation of the species. And honestly, today we just know that's not correct, right? Clearly it plays some role in it. But we have adapted what we call Darwinian evolution, probably better to call it biological evolution. That theory has long since become something drastically more specific and way better understood than what Darwin had at the time he wrote his book. And he really wasn't right. Okay, well, I'll give some examples of this here in just a second. But there are all sorts of interesting ideas now about how biological evolution works that really aren't related to what we previously called natural selection, for example. Okay, so here's what I would say, though: even though his theory is refuted, his theory had verisimilitude, even though it was wrong. And in fact, this idea of natural selection turned out to be just the right insight to get us thinking in a different way, so that we were able to start uncovering the truth about how evolution works. Okay, and so at a minimum,

[00:26:54]  Red: consider how, among the four strands, we don't even really call one of the four strands Darwinian evolution. We don't even say that. We actually say it's neo-Darwinian evolution. And there's a really good reason why we say that: neo-Darwinian evolution did refute and replace Darwinian evolution. You may never have thought of it that way, right? You may have thought, oh, well, neo-Darwinian evolution is, in a certain sense, Darwinian evolution. But in a very real sense, it's a different theory. It's not that different from the case of Newtonian physics and general relativity, right? You have an old theory, Darwinian evolution, and it got refuted and replaced by neo-Darwinian evolution. Now, why do we then kind of say, oh, no, Darwinian evolution has never been refuted? I want us to think about that for a second. And this is actually very related to the episodes that aren't out yet, so Dan hasn't heard them, on Douglas Hofstadter's theories around words and fuzzy categories. So you may not be used to thinking of Darwinian evolution as refuted by neo-Darwinian evolution. But it really is every bit as refuted as Newton was refuted by general relativity. In fact, Frank Tipler, who came up with the Omega Point theory, he's a very good physicist, even if he's a little bit of a nutter in some cases. But he wrote a very interesting article that's called, of all the stupid names, The Obama-Tribe Curvature of Constitutional Space Paper is Crackpot Physics.

[00:28:36]  Blue: The Obama-Tribe? Yes,

[00:28:38]  Red: he's actually responding to a paper written by Obama and others that tries to liken progressive ideals to a paradigm shift, similar to how general relativity forced us to have a paradigm shift compared to Newtonian physics. And Tipler, being, I guess, a conservative, I don't know for sure, he's against Obama in any case, points out that there's actually no hard divide between Newtonian physics and general relativity if you properly understand Newton's theory. And the reason why is because, according to this paper, Newton's theory does in fact predict curvature in space just like Einstein's theory does. Now, I tried to read this paper, and I know a little physics because I understand quantum computation and things like that. I couldn't follow his argument. It was just beyond me. So I'm not going to make any statements as to whether his argument is correct or not. I don't have any reason to disbelieve his argument, though. I was curious, though, about this. So I went to my physics genius neighbor and I asked him about it. And here was his response. He stopped and he thought about it for a second. And then he goes, yes, I guess that's true, though, frankly, we needed general relativity to realize, in retrospect, that this was the case. So because my genius neighbor is telling me that this paper is at least technically correct, I'm going to say it's technically correct. Clearly an argument from authority; you can do with that as you will. In other words, Newton's theory did predict curvature just like relativity did.

[00:30:23]  Red: And so the big thing that we think of general relativity as adding is this idea that gravity isn't a force and that instead it's the curvature of space. According to Tipler's argument, and according to what my physics genius neighbor is telling me, that is actually already the case in Newton's theory. The thing that Tipler is not telling you, and that he's leaving out, is that it was so non-obvious that this was the case that we needed general relativity as a new theory to be able to realize, in retrospect, that it was the case. I'm not trying to decide who's right here, Obama or Frank Tipler. The case I want to make here, though, is that there's rarely a hard divide between an error-corrected theory and a theory that is refuted and replaced. It is either simply a matter of degree or, frankly, a matter of aesthetics. Because we think of general relativity as having this giant paradigm shift, oh, gravity is not a force and instead it's the curvature of space, we think of Newton as refuted and general relativity as replacing it. But in reality, it would be just as easy to have thought of general relativity as neo-Newtonian physics, and then we probably wouldn't think of Newtonian physics as ever having been refuted. There's a very aesthetic element to whether it's an error-corrected theory or a refuted and replaced theory. And there's never a hard divide between those two.

[00:31:56]  Green: But aren't there different types of refutations? You could say that Newton's theories refuted the theory that the gods make the planets go around and choose their own paths, or something like that. That's a different type of refutation, wouldn't you say?

[00:32:21]  Red: Yes, so I had a whole episode where I talked about the fact that the word refutation is problematic precisely because it means so many different things to different people.

[00:32:31]  Green: And

[00:32:32]  Red: I've also, in my episodes 41 and 42, Popper Without Refutation, pointed out that Popper understands refutation in what seems to me a very different way from how, say, David Deutsch understands the word refutation. And I actually think both of them have a legitimate claim on the word refutation. Like, neither of them is wrong. It's just that the word is a kind of fuzzy term that could mean lots of different things in different circumstances. So I want to make the same claim here: you could think of a theory at any given moment as just being a static thing, in which case every time you amend that theory, you have in some sense refuted the old version and replaced it with a new version. Or you can think of it as a single theory that has never been refuted, and instead you made it a little more precise and you error-corrected some sort of problem with it. And for our purposes, it's kind of important for us to understand that those two things are the same thing. We can call it refuted, or we can call it non-refuted and error-corrected. But for our purposes, those are the same, okay? Because there's just no hard divide. It's just an aesthetics of language, how you choose to refer to it, okay?

[00:33:52]  Green: It’s almost like, sorry to interrupt. It’s almost like, is a theory a stepping stone to an improved theory? Or is a theory a dead end?

[00:34:03]  Red: Yes, that's maybe a good way to put it, okay? The thing that makes that one tough, though, is that even a dead-end theory almost always had some truth to it, and therefore in some sense was a stepping stone. And so then it starts to become a little bit of an aesthetic question. What was the name of that theory where heat was a kind of substance, phlogiston or something? I can't remember what it was. Is that a dead-end theory, or was that a stepping stone to what the current theory of heat is? I suspect the answer is that that's a really difficult question to answer. Because was alchemy a dead-end theory? Or was it a stepping stone to chemistry? Right? There's no great answer to that question. It absolutely was, at least in some sense, a stepping stone to chemistry. One that's really hard to answer that question for is the ancient Greek version of atomism: was it a stepping stone to modern atomic theory? Like, there is an unbroken chain of thought between that original theory, completely untestable and silly by our standards, and the modern atomic theory. So yes, there were stepping stones all the way along between them, right? And so I think that how much of a stepping stone it was maybe becomes the aesthetic, right? Like, we may look at Darwinian evolution and we can easily see it was a stepping stone to neo-Darwinian evolution. Like, we can easily see that. And so we don't think of Darwinian evolution as refuted, and instead as amended. But maybe there's not a big difference there between those two things.

[00:35:49]  Red: So this is beyond the scope of this particular podcast, but let me just give a few examples quickly. I will treat it in more detail in a future podcast. But if you're curious, look at episode 21, Evolution Beyond the Genome, and episode 77, Counterexamples to David Deutsch's Theory of Knowledge, which was posed as a question. So neo-Darwinian evolution, at least in its original form, is also already a refuted theory. And we're in the middle of replacing it right now, okay? Refuted in some sense. The ideas that are going to replace it have been known for quite a while, but the full implications of them weren't apparent at first. So this has forced a strong rethinking of how much of a role traditional natural selection actually plays in biological evolution. So let me just give a couple examples. And this is where I'll do a future podcast and give you much stronger examples. But take James Shapiro's idea of natural genetic engineering, where genes can actually, with intention, adapt within the lifetime of an organism. We're starting to realize the degree to which that plays a much bigger role in biological evolution than was anticipated. And this is nothing like the original natural selection, where you have to wait for the organism to die out before the genes can change, okay? So organisms do not need to die out for evolution to change the genome. I even gave an example of this with the immune system. The immune system literally changes the genes within the lifetime of the organism to try to find the recipe for the antibody, so that it can create the right antibody.

[00:37:37]  Red: And it does this by intentionally turning on a hypermutation of the genes, okay, in the DNA. And it doesn't do it in the sex cells, so it doesn't pass on to the next generation, but it does it in the cells in the body, in the bloodstream, and it actually sets up a replication process so that successful ones will replicate more and the unsuccessful ones will replicate less. It follows something very similar to biological evolution, but it is in no way following the Darwinian idea that the organism is more fit and so it is more likely to survive. It's doing that at the cellular level instead of the organism level, okay? And then, everybody's heard of CRISPR at this point, but do you even realize that CRISPR exists because they figured out that cells can naturally genetically engineer themselves? They've actually worked out how the cells do natural genetic engineering, and when humans do genetic engineering using CRISPR, what we're really doing is using a natural process that biological life figured out how to do first. I mean, you probably never think of it that way, right? But the simple truth is that natural genetic engineering is already a giant part of how evolution works, and nobody talks about it. You never hear about it in school, and you always still hear about natural selection, where certain organisms are more fit and then the other ones die out. And yes, that does play an important role in biological evolution, but a much smaller role than they originally thought.
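
As an aside for readers, here is a toy sketch (not from the episode; the "antigen" and all the numbers are invented) of the variation-and-selection loop Bruce is describing: candidate antibody recipes are hypermutated, scored against a target, and the better matches replicate more.

```python
# Toy sketch of affinity maturation as variation and selection: hypermutate candidate
# antibody "recipes", score them against a made-up antigen pattern, and let the better
# matches replicate more. All numbers here are invented for illustration.
import random

random.seed(1)
ANTIGEN = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]   # hypothetical target pattern

def affinity(antibody):
    """Fitness: number of positions matching the antigen."""
    return sum(a == b for a, b in zip(antibody, ANTIGEN))

def hypermutate(antibody, rate=0.2):
    """Flip each bit with a high probability, standing in for hypermutation."""
    return [bit ^ (random.random() < rate) for bit in antibody]

# Start with a population of random candidate antibodies.
population = [[random.randint(0, 1) for _ in ANTIGEN] for _ in range(30)]

for generation in range(15):
    # Better binders replicate more: keep the top third, fill the rest with mutated copies.
    population.sort(key=affinity, reverse=True)
    survivors = population[:10]
    population = survivors + [hypermutate(random.choice(survivors)) for _ in range(20)]

best = max(population, key=affinity)
print("best antibody:", best, "affinity:", affinity(best), "out of", len(ANTIGEN))
```

The point of the sketch is only that selection here happens among cells within one organism's lifetime, not among whole organisms across generations.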

[00:39:14]  Blue: I had no idea. That's a crazy way to put it.

[00:39:16]  Red: Yes. So neo-Darwinian, and this is why you'll sometimes see me say neo-Darwinian evolution is refuted, and then I'll quickly amend it and say, of course, it's been refuted by neo-neo-Darwinian evolution. And it still has this thread, something similar to natural selection, that you can see running from that original Darwinian thought to what it is we believe today. The other one that I brought up, and this was in episode 21: Michael Levin has been dramatically showing the degree to which cells don't take mutations passively, but actively use error correction to find something useful to do with that mutation. So this actually addresses what the creationists have always said: it's almost ridiculous what you're asking us to believe with evolution. It turns out the creationists were right about that, right? The reason why natural selection is as effective as it is, is because it's surrounded by this operating system that exists in cells that uses its own variation and selection algorithm to take each mutation and go, I'll bet I could use this usefully like this. And it actually intelligently figures out how to take mutations and try to adapt them in some useful way, which is precisely why evolution is so much more effective than the traditional Darwinian conception of it. So in some sense, it's like it's intelligence all the way down. They actually refer to it as cell cognition, believe it or not.

[00:40:56]  Unknown: Okay.

[00:40:57]  Red: So what we've wound up with, the current biological theory of evolution, whatever we're going to call it, neo-neo-Darwinian evolution, is very little like Darwin's original theory of natural selection. But because every step of the way there is some easy-to-conceive connection to the original theory that inspired the current theories of biological evolution, we will probably always refer to it as Darwin's theory of evolution, even though it's nothing like Darwin's original theory at this point. It has long since evolved past his original theory. Okay. We could have done that with general relativity. We could be calling it neo-Newtonian physics at this point, and it wouldn't have mattered, right? But because we perceive that as more of a groundbreaking paradigm shift, even if that's not necessarily true as per Frank Tipler, due to the curvature of space, we've decided instead to think of it as a new theory instead of an evolution of an old theory. But I want to really emphasize: it just does not matter which way you choose to think of it. It is a purely aesthetic way of speaking. Okay. Now, I feel the same way about universal explainership. This is how I'm going to tie this back to universal explainership. Deutsch was the first to coin the term universal explainership. But he did so as a synonym for a previously existing theory, which was what we might call strong AI or general intelligence. Okay. So the concept that he's talking about isn't his theory. But by calling it universal explainership, what he's doing is he's contributed something to this existing theory, precisely this idea that explainership has some sort of connection to what we previously called general intelligence. Okay.

[00:42:54]  Red: And this is a brilliant idea. And Deutsch deserves tons of credit. I had never thought of that prior to reading about it in David Deutsch. This is something I've been thinking about for a while, you know, and it hadn't even occurred to me that explanation played some sort of role in general intelligence. So I think that Deutsch deserves tons of credit. And we can decide that universal explainership is a new theory, or we can decide that it is a replacement for an old theory that was similar. I don't care which way we think of it. Okay. So this theory is necessary for our explanations because it's quite obvious that there is a problem without it. Okay. Now, Dan actually already explained this at the beginning of the show. But let me just go over my version of it. Namely, why are humans so intelligent compared to, say, chimps, when we share nearly all the same genes with chimps, right? It's this tiny, few-K difference between us and chimps. How would you go about explaining this huge jump that took place between chimps and humans, but for the existence of some sort of jump to universality having taken place? This is the other thing that David Deutsch contributed to the conversation that I had never seen prior to him. Okay. And that I feel is super enlightening about the problem.

[00:44:17]  Unknown: Okay.

[00:44:18]  Red: So I feel like the leap to general intelligence, as a leap to some form of universality, really does qualify as a best explanation at this point, at least until someone can show that there is some other way to solve the problem without reference to a jump to universality. It seems to me that I don't anticipate I will ever see a way to explain the giant leap between chimps and humans, but for the explanation that there was some sort of leap to a new level of universality. Okay. Does that make sense, what I'm trying to say here? That there's literally an explanation here that's very precise, and it's really hard to see how there could be a competing explanation. Therefore, it qualifies as a best explanation. Are you with me so far? It

[00:45:08]  Blue: makes a heck of a lot of sense to me. Can I ask a brief question, though?

[00:45:11]  Red: Yes.

[00:45:12]  Blue: So how I made sense of the universal explainer hypothesis when I first heard about it, I started to think, well, can you even imagine something that’s beyond human comprehension? And kind of my answer was, well, not really. I mean, we’ve deduced information about time and space and the big bang. And it doesn’t seem that there’s any real good reason to think that anything could ever be truly beyond human comprehension in one way or another. But then, again, doesn’t that make it kind of a tautology in some ways? Because if something was beyond human comprehension… Then

[00:45:56]  Red: we wouldn’t comprehend it,

[00:45:57]  Blue: right? You would never know it. So is that a potential criticism of the universal explainer hypothesis that it’s just a tautology? I

[00:46:07]  Red: have absolutely heard that criticism before. In fact, I'm pretty sure Saadia has used that criticism with me before. So that is a criticism that comes up. And Lee Cronin would probably say something like that. Let me actually address that criticism. I think that's in two slides. So let me come back to that question, if that's okay. So let me say, though, that I feel like David Deutsch is absolutely contributing something super important to the conversation with the theory of universal explainership. And furthermore, I feel like it qualifies as a best theory. And in terms of Popperian epistemology, I think that I can point to objective reasons why it counts as a best theory. That doesn't mean, however, that Deutsch's original assumptions about universal explainership as a theory are correct, any more than the fact that Darwin's theory was in fact a best theory meant that it was correct and that we weren't going to replace it with something better. Okay. In fact, I would even go so far as to say that I would feel completely comfortable with the idea that David Deutsch's original version of universal explainership probably is wrong and that what we're really looking for is a refinement on that idea of universal explainership as a theory. So the theory that is actually correct, whatever that is, will likely share many features with David Deutsch's original theory, and it would make some sense to still refer to it as universal explainership, so long as the most important features stayed intact, the ones that we cared the most about, that defined the theory.

[00:47:52]  Red: And in this case, I think that's namely this idea that, (a), general intelligence comes from explainership and, (b), there is some sort of leap to universality. Okay. Now, a leap to universality doesn't mean by itself that you've reached the maximum universality. So the analogy here would be pushdown automata. A pushdown automaton represents an important leap to universality. It is a leap to universality, but it is not the final leap to universality that a Turing machine is. So could humans be the pushdown automata of intelligence? This is kind of what Peter just asked me. Okay. With some future leap being the final one. You know, that's a totally fair question. Okay. So I guess it's even possible. Surely it's logically possible that there is some alien intelligence out there. Or maybe AGIs will be that alien intelligence. And it's a separate leap to universality beyond what humans have. But there seems to be no testable theory to that effect right now. So from a critical rationalist standpoint, this theory is an existential theory. If you've listened, hopefully recently, to our episode on Popper's second axis, where I tried to summarize what I understood Popper's epistemology to be, and it was somewhat different from the way people normally think of it, you know that existential theories versus universal theories is kind of a big deal in Popper's epistemology. An existential theory is an untestable theory. It could be true. It's not like it's an uninteresting theory, but it's not a testable theory. So consider that this is similar to saying there exists some other jump to universality that is yet to be discovered. Okay.
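
For readers who want the pushdown automaton analogy made concrete, here is a minimal sketch (not from the episode): a single stack is a genuine jump in power, enough for nested brackets, yet it is still not the final jump, since a pattern like a^n b^n c^n is beyond a one-stack machine but trivial for a fully general, Turing-complete program.

```python
# A sketch of "levels of universality". A machine with one stack (a pushdown automaton)
# is a real jump in power over a finite-state machine: it can check nested brackets.
# But it is not the final jump: a single stack cannot recognize a^n b^n c^n, which is
# trivial for a fully general (Turing-complete) program like ordinary Python.

def balanced_brackets(s):
    """PDA-level power: one stack is enough for arbitrarily nested brackets."""
    stack = []
    pairs = {")": "(", "]": "["}
    for ch in s:
        if ch in "([":
            stack.append(ch)
        elif ch in pairs:
            if not stack or stack.pop() != pairs[ch]:
                return False
    return not stack

def a_n_b_n_c_n(s):
    """Beyond single-stack (context-free) power, but easy for a general program."""
    n = len(s) // 3
    return len(s) == 3 * n and s == "a" * n + "b" * n + "c" * n

print(balanced_brackets("([()])"))   # True
print(balanced_brackets("([)]"))     # False
print(a_n_b_n_c_n("aaabbbccc"))      # True
print(a_n_b_n_c_n("aabbbcc"))        # False
```

The question Bruce is raising maps onto this hierarchy: is human intelligence the final rung, like a universal Turing machine, or only an intermediate rung, like the pushdown automaton?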

[00:49:44]  Red: If we were to say, you know, I think humans are not general and there's some other leap to universality that's yet to be discovered, that theory just tells you nothing about what this other jump to universality is going to look like. It's really just an existential theory. It's exactly equivalent to saying there exists a unicorn or there exists a Bigfoot. Okay. There's nothing you can do with the theory at this point. And so Popper's epistemology says that's not a refutable theory. It's not a falsifiable theory. Or, as I would put it in my terms, that's not a theory that can have counterexamples. So, as per episode 83 on Popper's epistemology, we would disallow it from the conversation. It's not that we're saying it's right or wrong. We're not saying it's wrong. In fact, we're maybe even interested in the possibility that such a thing could be true. But just at the moment, we're ruling it out until someone can figure out how to turn it into a theory that has actual empirical content that we can test. And the reason why we do that is because if there is such a theory out there and it's actually the truth, there should be a way to turn it into a testable theory. And so we're just waiting for someone to come up with some way to do that. And in the meantime, we're kind of assuming it just isn't the case. So the way we eliminate the theory that there is another jump to universality above humans is really by pointing out that it's not currently falsifiable. We don't know what a counterexample would look like.

[00:51:19]  Red: So we can't do much with the theory. At this point, it really is just a subjective opinion based on an unexplainable intuition that can't be checked by anybody. One person, you know, Lee Cronin, might say, that just makes sense to me. But I have no way to go figure out if he's right or wrong, because it really just boils down to: do you buy that intuition or don't you buy that intuition? Okay.

[00:51:48]  Blue: So the universal explainer hypothesis is an intuition. Is that what you're saying?

[00:51:50]  Red: No, the hypothesis that there is a jump beyond humans.

[00:51:56]  Blue: Oh, okay. Okay.

[00:51:57]  Red: That is just an intuition. That's an intuition. The universal explainer hypothesis actually is, in some sense, testable. I see. You can actually, at least in theory, show me something like Dan just did, something that a human can't explain. And I can look at that and I can stop and think about it. I can say, does this refute the theory? Does this force me to think about the theory more deeply? And so the universal explainer hypothesis isn't just an intuition. It's something more than that. Okay. Whereas the idea that there's something beyond universal explainership, or, sorry, that humans are not universal explainers because there's an intelligence beyond that, that is just an intuition. There's nothing intelligent I can say about it, scientifically speaking.

[00:52:38]  Blue: We’re getting into simulation hypothesis territory there.

[00:52:42]  Red: Now, that may change in the future. Like, we're not ruling it out. We're just waiting to see if someone can figure out how to turn it into a testable theory. I liken science to the halting problem. We don't ever say that theory is false because it's untestable. We say that theory is untestable, so we're not considering it right now. And we move on and we're waiting for someone to make it testable. If it's a false theory, no one will ever make it testable. And that's why we kind of just don't look at it right now. So let me say that I think that this bothers people. This is critical rationalism in a nutshell. There's something a little bit bothersome about it. It's almost like it's just too simple and people want something else.

[00:53:36]  Unknown: And

[00:53:36]  Red: that something else they want is justificationism, of course. We never really say we've disproven Lee Cronin's theory that there's something beyond human intelligence. We just don't say that. We never even rule it out as a possibility. We simply point out it's not a testable theory and we move on. And there's not much else going on. It's a mistake to think that this makes the theory stronger or weaker or whatever. This is just where we're at in the current state of our critical discussion. And that's a fact. The fact is that the idea that humans are not universal explainers and that something else is, that's an untestable theory. Let's move on until someone can make it a testable theory. So now let's talk about Deutsch's version of the theory of universal explainership. Let's refer to the theory of universal explainership, by the way, as UE theory for short. I'm going to sometimes say UE theory, and I mean the theory of universal explainership. What does this theory say today? Or, more to the point, what is Deutsch's original version and how has it changed since he thought it up? Okay, this is similar to what we did with Darwinian evolution and neo-Darwinian evolution and neo-neo-Darwinian evolution. So, Deutsch originally seems to have attached a number of ideas to UE theory that I've challenged on this show. For example: all an animal's knowledge is in its genes; ergo, animal memes are not knowledge. Humans aren't much influenced by their genes, maybe just a little. We should not coerce ourselves, the fun criterion, because implicit ideas are like unresolved criticisms.

[00:55:23]  Red: Traditional schools are all wrong and maybe even immoral because they violate how human beings actually gain knowledge, due to the supposed bucket theory of knowledge. All knowledge and preferences are equal, so general education is immoral because it's equivalent to the who-should-rule fallacy, but in terms of knowledge. You've probably heard these ideas, and you've probably heard them expressed as extensions of universal explainership theory. Okay, and I've argued against each of those. Now, just to be fair, let me acknowledge that each of the ideas that I just went through, while they are at odds with our current best theories, and that's what I've kind of gone through in the show in various episodes, each of them does have some truth to it, and in fact sometimes even a great deal of verisimilitude. So for example, animals have far less ability to create knowledge in their lifetime than a human does. Animal memes are knowledge, not in the genes, but they are not based on explanations, so animals have a very limited form of culture, not comparable to human culture. And humans have an uncanny ability to subvert genetic influences, at least in specific instances, and even to subvert genetic coercion. And living your life in a coercive way, even if it's just coercing yourself, is probably a very bad way to live your life. And you should probably instead solve your inner conflicts, if possible. And traditional schools do suck in many ways. In fact, I really, really hated traditional schools as a kid, at least until I got to college, where I changed my mind.

[00:57:08]  Red: And we do overvalue some knowledge, I would mention math here, for reasons that are really unclear to me. And I've never heard a good argument for why we make students take calculus in high school when the vast majority of them are literally never going to take it again, right, are never going to do anything with it again. So it seems to me that each of these ideas that get raised as extensions or implications of David Deutsch's theory of universal explainership do have a great deal of verisimilitude to them. But in fact, each of these ideas, even though they do have a great deal of truth to them, are false ideas. And many of them are known to be false. So for example, animals do learn new knowledge in their lifetime via operant and classical conditioning. And it would be basically impossible for animals to exist if all their knowledge had to be inside their genes. Some animals even have nascent abilities to understand things at a level similar to what we would probably call explanations, probably just not universal explanations, at least according to Richard Byrne's theories. Animal memes really and truly are knowledge that is not in the genes. Humans are massively influenced, at least over a population, by their genes. Just think about pain and pleasure here. Try to explain the human obsession with sex and romance without referencing the genes, and you will find yourself contorting in ridiculous and ad hoc ways to try to do it. Implicit ideas are sometimes just wrong. There is no reason at all that someone with OCD should try to treat their compulsion as an unresolved criticism. Traditional schools have played a massively successful role in, say, eradicating illiteracy and improving societies.

[00:58:59]  Red: And literacy is an example of a kind of knowledge that actually is a precursor to most other kinds of knowledge. And thus having schools prioritize it is nothing like the who-should-rule fallacy. So with each of these ideas, even though they are largely right, if you treat them as universal laws, which is what you try to do in Popper's epistemology, you take each of them, treat it as universal, and then try to see if there are counterexamples to it. All of them have counterexamples, and therefore, at least if treated as universal laws, they are refuted and they are false. Okay, they may be very good as rules of thumb, though, and we're going to get to that. Okay, but they are false as universal laws. Now, have I just refuted universal explainership with what I just did? Well, if by universal explainership you mean specifically Deutsch's original theory, then arguably, yes, I just refuted it. Okay, but this is really and truly the wrong question, just like it's the wrong question to get too worried about whether Darwin's theory has been refuted or not. We still need something like Deutsch's original UE theory to solve the problem of the huge jump of intelligence between chimps and humans. That Deutsch's original theory, with these other implications attached to it, was wrong doesn't change that fact. What we really need is an error-corrected version of Deutsch's original theory. And I propose that whatever this new theory is, we just call it universal explainership. It's just neo-neo-universal explainership if you want, right? Or let's just even drop the neo; it's universal explainership, too.

[01:00:42]  Red: Okay, because it’s still going to contain the key thread of explanation as central to the theory and this idea of a jump to universality. Okay.

[01:00:52]  Blue: Speaking of this jump, I felt Deutsch gave a very compelling example, I think on one of his latest podcast appearances. I mean, to be fair, I don't know exactly how you would quantify that, but it just rang true for me, at least, when he said there are more differences between individual humans than there are between many species of animals in terms of,

[01:01:23]  Red: I guess, how they behave. Oh

[01:01:25]  Blue: yeah, absolutely. I thought that really got at the magnitude of this jump quite well.

[01:01:33]  Red: Yes. Oh, by the way, humans have so little genetic variation compared to other animals that some scientists claim that we’re very nearly clones of each other.

[01:01:47]  Blue: Oh, that’s really interesting.

[01:01:48]  Red: So on the one hand, we have this giant range of differences in our behavior compared to animals. And on the other hand, we have a much smaller set of genetic differences between us than most species of animals have. So I really think that it's important to understand that David Deutsch is right about that, right? We're absolutely agreeing with David Deutsch that there's some sort of ginormous difference here and that we need something like universal explainership theory to explain it. And we need it. Like, we absolutely cannot do without that theory in some form, even if we can find problems with any one specific version of the theory. Okay. Now, the above examples that I gave were meant to be a little bit silly. And in fact, let me just admit, I've actually asked Deutschians point blank: are you saying that you would consider UE theory refuted if I could show you that genes influence us, over a population, to care about sex? Which of course is silly, because of course they do. I mean, it's well known. I wouldn't have to go do a test to figure that out, right? And the majority of the Deutschians I've asked that changed the subject, but I had at least one, I want to say it was Jose, and Jose is wonderful about stuff like this, like he really takes you seriously, who admitted it, though it may have been a different question. It may have been the question of: if I can show you that there are actual differences in intelligence between a severely mentally challenged individual and a regular person, does that refute universal explainership for you? I think that's the question I actually asked Jose, and he said, no, it does not.

[01:03:26]  Red: And I agree with you on that. In fact, he went on to say, I really have no problem with the idea that universal explainership doesn't really mean every human being is exactly equal in intelligence. What I really have a problem with is IQ theory specifically, but I'd be open to a different theory that had similar predictions but handled it in a different way. I felt like that was a very honest answer on his part, because what he's doing is admitting this isn't a true implication of UE theory. Other people may think it is, but it isn't. And a lot of these ideas that I refuted with a single example, none of them truly ever needed to be considered implications of UE theory to begin with. And that's actually why I picked these. Okay. It's because I feel like they don't cause any real problems for the underlying theory that I can refute them. Now, I intentionally picked an easy example. Let's now pick a harder example. Okay. And this one is going to take a little bit of explanation for me to try to explain it to you. So, there's a much more interesting and more subtle problem with Deutsch's original theory than the examples I just picked. The examples I just picked are in some sense thoroughly uninteresting. This one's an interesting problem that I'm about to explain. Okay. So, Deutsch gives the example of a master builder in the pre-scientific era. And this is from Fabric of Reality. They didn't use fundamental theories to build buildings. They used what Deutsch calls rules of thumb. Okay. So, this is now a quote from Fabric of Reality.

[01:05:01]  Red: Instead, a master builder relied mainly on a complex collection of intuitions, habits, and rules of thumb, which he had learned from his apprentice master, and then perhaps amended through guesswork and long experience. These intuitions, habits, or rules of thumb were in effect theories. That’s a strange thing to call them, but that’s what Deutsch calls them. Explicit and inexplicit, and they contained real knowledge. This is from page 14 of Fabric of Reality. And then continuing on page 14, it was taken for granted that innovation risked catastrophe. Builders seldom deviated much from designs and techniques that had been validated by long tradition. Progress to our current state of knowledge was not achieved by accumulating more theories of the same kind that the master builder knew. Our knowledge, both explicit and inexplicit, is not only much greater than his, but structurally different too. As I’ve said, the modern theories are fewer, more general, and deeper. Page 14, now onto page 15. So, the problem was that the master builder’s theories would give, quote, hopelessly wrong answers if applied to novel situations, whereas today one deduces such things from a theory that is general enough that it can be applied to all situations, including on the moon, underwater, or whatever. A lot of that was almost word for word quote from page 15, by the way, from Fabric of Reality. I changed a little bit to fit grammatically what we were talking about. So, let’s unpack this a bit. Pre -scientific knowledge is still, in Deutsch’s view, a theory, though unlike the kind of theory we would normally think of in the realm of science today, which we would call an explanation.

[01:06:47]  Red: They were rules of thumb, specific to specific situations; deviate too far from the situation the rule of thumb worked in and it would fail. The difference between the two kinds of theories in Deutsch's view is that the scientific ones are rooted in explanation instead of rules of thumb. Explanations are, according to Deutsch, more general and deeper than rules of thumb. You'd have to know an infinity of rules of thumb, which is impossible, to be able to deal with every single situation, but you could replace all those infinity of rules of thumb with one much easier general theory if it's an explanation. Due to this, Deutsch claims that he had this goal as a child to know everything that is known, and in fact he believes that it is not a ridiculous goal precisely because our theories keep getting more general and deeper and replacing the rules of thumb style of theories. So whereas it would be impossible to memorize every rule of thumb, it would be possible to learn a deep but general theory that explains everything. So he then proposes that the four strands are the four theories that will someday merge into a single all-encompassing theory of everything. He then clarifies that by known he actually meant understood. That's almost a quote from page one.

[01:08:09]  Red: From the above quotes, I feel we can understand Deutsch as making an implication. It's never quite stated, but it is widely understood this way by fans of David Deutsch, and it's something like this: that rules of thumb, which are a collection of facts, are inferior to explanations, which are theories as we would normally use the word, in the sense of understanding something, and that explanations will one day fully subsume all rules of thumb, and that it's both possible and desirable to do that, to literally take all rule of thumb style theories and replace them with explanatory theories, which are then something much easier to understand. Under this way of thinking, we will one day subsume all rule of thumb style theories into explanatory theories. Now, Deutsch never actually states this, but it's very hard to make sense of his goal to someday understand everything if you aren't assuming this, and so this is probably why, as we'll see, fans of David Deutsch do assume he meant this and will talk as if he wrote this in the book. Now, I think this is the first potential problem that we're going to take on with the theory of universal explainership. There's a more serious problem I'm going to address later, but allow me to first walk across an easier problem and then we will know how to walk across a more difficult problem. What do we mean by understood? So let's take the example of a mathematical proof that no human, quote, understands.

[01:09:45]  Red: If we take universal explainership to mean I can understand anything, and that's it, no qualifiers, then the mathematical proof example does pose a problem for the theory of universal explainership, because no human being can, quote, understand the mathematical proof. Does this mean we've actually refuted UE theory? Well, we've just talked about how it really doesn't. If this does count as a refutation, then it's a pretty lame one. It seems trivially easy to tweak the theory in a totally non ad hoc way to save it without violating Popper's ratchet. Let's start with a very simple clarification: when we say understand, we do not mean in the sense of being able to hold the whole thing in your brain at once, but instead we mean something more like every part of the proof is comprehensible to humans. If we really meant the first, and if that's what we meant by UE theory, then yes, UE theory is refuted, but so what? Okay, because we can just jump to this honestly better version of UE theory that is not refuted by this refutation. Okay, so basically what we're doing is we're clarifying the word understanding, and we're doing that by making it clearer and more explicit what we meant, but also, at the same time, harder to refute. That's exactly what Popper's ratchet requires of us.

[01:11:13]  Unknown: OK,

[01:11:14]  Red: so this is a totally legitimate way to respond to this refutation. Now, here is from that same article about that mathematical proof that no human can understand. Here's their discussion about it, and you'll see that they've actually thought about this. Of course, if the proof checking software has bugs, Gonthier's verification of the four color theorem itself could be invalid. Well, there's fallibilism right there. There's a possibility that the proof is wrong. That was always true. OK, but to guard against this possibility, the designers of the proof checking software made the kernel, the center part of the code that implements the axioms and rules of inference, as short and simple as possible. One program that the guy who did this wrote has fewer than 500 lines of code in its kernel, few enough that a human can check it by hand. The software may not be able to produce perfect certainty, but Thomas Hales, a mathematician at the University of Pittsburgh, calls the four color theorem one of the most meticulously verified proofs in history. Or to put this another way, it makes good sense to count understanding the program that makes the proof as a kind of understanding of the proof. This explains why big corporations that are too big for a single human to understand aren't really refuting UE theory either. OK, now that was the easy problem. Now let's actually start diving into the harder version of this that Dan has raised. So this answer works well for the mathematical proof or the corporation example, but I wouldn't blame Dan for not finding it an entirely satisfactory response to his specific examples.
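
To make the idea of a small, hand-checkable kernel concrete, here is a minimal sketch in Python. It is not the actual kernel used to verify the four color theorem; the rule set is a toy propositional logic with a single inference rule, and all names are illustrative. The point is only that the trusted part, the axioms plus the inference check, stays tiny even though the proofs it checks can be enormous.

```python
# Minimal sketch of a proof-checking kernel (illustrative only; not the
# actual kernel used for the four color theorem verification).
# A "proof" is a list of steps; each step must be an axiom instance
# or follow from two earlier steps by modus ponens (from A and A->B, infer B).

def is_axiom(formula, axioms):
    """Trusted axiom check: a step is allowed if it appears in the axiom set."""
    return formula in axioms

def follows_by_modus_ponens(formula, earlier):
    """Check whether `formula` follows from earlier lines A and ('->', A, formula)."""
    for a in earlier:
        if ('->', a, formula) in earlier:
            return True
    return False

def check_proof(steps, axioms):
    """The whole kernel: every step must be an axiom or follow by modus ponens."""
    for i, step in enumerate(steps):
        earlier = steps[:i]
        if not (is_axiom(step, axioms) or follows_by_modus_ponens(step, earlier)):
            return False  # this step is not justified, so the proof is rejected
    return True

# Usage: derive 'q' from the axioms 'p' and p -> q.
axioms = {'p', ('->', 'p', 'q')}
proof = ['p', ('->', 'p', 'q'), 'q']
print(check_proof(proof, axioms))  # True
```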

[01:13:01]  Red: For example, a neural net is not like a mathematical proof where we can verify the mere 500 lines of code that are its kernel, nor is it the case that, at least in principle, we could understand every step in the proof. It would just be too complicated to even make sense of the individual steps. OK, so it just isn't possible to treat an understanding of a neural net as equivalent to understanding a giant mathematical proof that no human can understand. Understanding, quote, what neural nets do represents a different kind of challenge to UE theory than the easy problem that I just solved. And it will require us to think a bit harder about what UE theory is saying and make a larger tweak to it. OK, now it turns out that machine learning also stumbled upon this very same distinction between rules of thumb and explanations. In my episode on the problem of open-endedness, I mentioned Leslie Valiant and his book, Probably Approximately Correct, which is an excellent, excellent book that addresses these sorts of issues in a way that David Deutsch doesn't. And you really kind of need to see someone who's approaching the same problem from a totally different angle, from the angle of a machine learning expert or an AI expert, and what he came up with. OK, so in episode 74, I introduced Valiant's ideas and the idea of ecorithms. So ecorithm is a word that he coined that refers to an algorithm that takes information from its environment

[01:14:39]  Red: so as to perform better in that environment. That's from page 185 in the glossary of terms at the back of his book, Probably Approximately Correct. He goes on to say, quote, algorithms for machine learning, biological evolution, and for learning for the purposes of reasoning, i.e. human ideas, are all instances of ecorithms. An ecorithm is thus a unifying concept that unifies open-ended knowledge creation algorithms, such as biological evolution and human ideas. And it unifies them with more narrow knowledge creation algorithms, or if you don't like that term, we can call them simul knowledge-creating algorithms. But it unifies them as a single idea, that all of them have certain commonalities between them. OK, so here's from page eight of his book. He says, I make two observations. First, ecorithms are defined broadly enough that they encompass any mechanistic process. Now, here I want to note to Deutschians that by mechanistic he means can run on a Turing machine, which is literally every algorithm. OK, and I know that because the next sentence is, this follows from the work of Turing and his contemporaries that established the principle known as the Church-Turing hypothesis. Now, why am I dog-earing this? Well, my friend that we've called James, who writes to me after episodes and tries to argue these points with me, made this argument to me, and this may even have been after the podcast where I responded to him was done. He again made the argument to me that we should not consider algorithms that are mechanistic to create knowledge. And I responded to him, all algorithms are mechanistic because they all run on a machine, the Turing machine, and he never responds to that criticism.
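
As a rough illustration of the definition quoted above, here is a tiny sketch of an ecorithm in Python: an algorithm whose only access to the truth is feedback from its environment, and which adjusts itself so as to perform better there. The class, the threshold rule, and the environment are my own toy construction, not anything from Valiant's book.

```python
import random

# A sketch of an "ecorithm": an algorithm that improves by interacting
# with its environment (my reading of Valiant's definition; names are illustrative).
class Ecorithm:
    def __init__(self):
        self.threshold = 0.5  # internal state that gets tuned by experience

    def act(self, observation):
        return observation > self.threshold

    def learn(self, observation, feedback_was_correct):
        # Nudge the internal state whenever the environment says we got it wrong.
        if not feedback_was_correct:
            self.threshold += 0.1 if self.act(observation) else -0.1

# Environment: the "true" rule is observation > 0.7, which the ecorithm never
# sees directly; it only ever gets right/wrong feedback on its own actions.
learner = Ecorithm()
for _ in range(1000):
    x = random.random()
    correct = (x > 0.7) == learner.act(x)
    learner.learn(x, correct)
print(round(learner.threshold, 2))  # settles roughly around 0.7
```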

[01:16:39]  Red: And instead, he kind of just acts like it's just obvious that he has offered me a counter criticism. So I want to make sure we're being clear here. I don't know what he means by mechanistic, and he's never explained it. And I've queried him many times: please explain what you mean by mechanistic. To me, that means it's an algorithm that runs on a Turing machine. That's all algorithms, including the universal explainership algorithm. Once we understand it, it will run on a Turing machine. It will be mechanistic. He probably has something else in mind, but he's never been able to explain it to me. It's something that he seems to intuit, but not understand well enough that he can explicitly explain it to me. So when we talk about ecorithms being mechanistic, I want to clarify we explicitly mean algorithms that run on a Turing machine, which is all of them. And that is what Valiant is intentionally trying to say. So that was his first observation. Second observation, ecorithms are also construed broadly enough to encompass any process of interaction with an environment. So from these two observations, he says, we can conclude that the coping mechanisms of nature have no source of influence on them that is not fully accounted for by ecorithms, simply because we have defined ecorithms broadly enough to account for all such influences. Let me put this in plain English. He's saying, I am intentionally defining the concept of ecorithms as a unifying concept across human ideas, biological evolution, and machine learning. And I'm intentionally making sure that I've defined it broadly enough that it includes all of those and that all of those are kinds of ecorithms. So

[01:18:30]  Red: they are by definition now a unifying concept. But Valiant then goes on to break knowledge into two kinds. And this is where his theories really start to touch on Deutsch's. The two kinds of knowledge are theory-full and theory-less. Okay, quote, those aspects of knowledge for which there is a good predictive theory, typically a mathematical or scientific one, will be called theory-full. This kind of knowledge is equivalent to what David Deutsch calls an explanation. I want to be clear here, David Deutsch uses the term theory differently than Valiant does. Furthermore, Valiant uses it more like how we would normally use it. It's a little weird to refer to a rule of thumb as a theory. Yes, you can stretch the word theory to include a rule of thumb. But typically when we think of the word theory, we're thinking of an explanatory theory. And usually the word theory refers to explanatory theories. That is how Valiant is using the term theory-full. And you need to kind of hold that in your mind, that there is a difference in lingo here. Okay. But Valiant points out that most knowledge is not like scientific theories. It is not explanatory. Quote, most aspects of life are not so simple. If you want to succeed in a job interview or in making an investment or in choosing a life partner, you can be quite sure there is no equation that will predict and thus guarantee you success. In these endeavors, it is not possible to limit the pieces of knowledge that might be relevant to any one definable source. And even if you had all the relevant knowledge, there is no surefire way of combining it to yield the best result.

[01:20:16]  Red: That’s from page one. And then on page two, he says, this book is predicated on taking this distinction between theory full and theory list knowledge seriously. This kind of knowledge is meaning theory list knowledge is equivalent to what David Deutsch calls rules of thumb. Quote, in contrast to theory full knowledge, the vast majority of human behavior looks theory less. Nevertheless, these behaviors are often highly effective. Now I just mentioned this, but one obvious difference is that Deutsch insists on calling both kinds of knowledge theories, whereas value reverses the word, reserves the word theory, only for explanatory knowledge and refers to rules of thumb as theory less. I’m going to adopt valiance terms here, not Deutsches. And the reason why is because I honestly believe those are the more common usages that normally when we refer to theories, we don’t have rules of thumb type knowledge in mind. Okay. But I’m not saying Deutsches wrong. This is just a linguistic choice and Deutsches language is fine as long as you understand conceptually what he’s talking about. It would seem that Deutsch and valiant, so far it would seem that Deutsche and valiant discovered the same truth independently of each other. And in fact, I think it’s exactly what happened is that Deutsch and valiant have both noticed that there is a difference in these two kinds of knowledge, matter what we call them, theory full knowledge versus theory less knowledge. Or to put it more bluntly. So, but there’s an important difference. Sorry. There’s important difference between them. Valiant is not assuming that theory full knowledge must subsume theory less knowledge or to put more bluntly, he’s assuming that it can’t and won’t subsume it and that we will always need both kinds of knowledge.

[01:22:06]  Red: Whereas David Deutsch seems to be assuming that we don't really need theory-less knowledge, that all theory-less knowledge will someday be subsumed into theory-full knowledge, or explanatory knowledge if you prefer. Okay. So, who's right? This is kind of an important question. And I'm going to argue that unless you know the answer to this question, you cannot answer Dan Geshe's objection to universal explainership theory, and that once you do know the answer, you can answer Dan Geshe's refutation of universal explainership theory. So, the natural question is of course, who's right? Is Deutsch correct that someday all rules of thumb will be replaced by theories, assuming here that's what he really meant? I guess you could argue that he didn't, since he didn't explicitly say that, maybe that's just an assumption from his fans, but I'm kind of assuming that's what he meant. Or is Valiant right that there literally are two kinds of knowledge and you can't always subsume theory-less knowledge into theory-full knowledge, or in Deutsch's terms, that you can't subsume all the rules of thumb into explanations? What we need here is a concrete example. And luckily, I happen to have one handy that I have used on this podcast probably a half a dozen times by now. But let's explore it in more detail. It is face blindness. There is a condition known as face blindness that some are born with and that some acquire due to brain damage to a certain specific area of the brain. For those who believe there are no specialized areas of the brain, this refutes that idea. If you have face blindness, you cannot recognize faces.

[01:23:45]  Red: A man with face blindness, a real life man with face blindness, had to have his wife wear a specific bow in her hair at parties or he couldn't find her in a crowd. So, people with this condition are just as smart and just as capable as anyone else, or to put it a different way, their universal explainership is untouched and intact despite their face blindness. They can tell, by the way, an attractive face from a less attractive face. They aren't, apparently, blind in that sense. They just can't recognize faces. Now, I could have picked any number of examples other than face blindness. There are gobs of examples like this. There is motion blindness. There's the ability to pick up on social cues quickly using body language or tone of voice, which is impaired in autism. There are tons of examples like this known in neuroscience, where if you get brain damage in a certain area, you will instantly lose an ability that we just take for granted and that you would have thought was directly attached to intelligence. And often the person's intelligence is thoroughly untouched by the damage, and yet it's just disabling to them in some way. Moreover, if you acquire the condition of face blindness due to brain damage, then overnight you go from being able to recognize faces to not being able to recognize faces. And it is always associated with a certain specific area of the brain. Further, it is impossible to ever relearn the ability to recognize faces again. Once you are face blind, you are forever face blind. Now, if humans are universal explainers, why can't they relearn the ability to recognize faces? And does this refute universal explainership?

[01:25:35]  Red: Now, I’ve asked this question of many Deutschians. Many, many Deutschians, in fact. And it seems to get them angry when I ask this question, by the way. Because it really is offering up a possible reputation to universal explainership, at least if it’s understood in a certain way. And when I do bring this up to Deutschians, they will literally go into convoluted ad hoc explanations to try to explain why this doesn’t refute UE theory. Here, in fact, are some real life ad hoc explanations that have been offered to me when I offered them this potential refutation. So one of them said something like this, maybe most people do relearn to recognize faces, but do so so quickly that doctors don’t notice. And maybe those that don’t can’t for some reason. Another one that came from the same person was maybe for some reason, adults just can’t learn this very well. Another one that I got and probably had more than once was something like this. What, you think that if even one cell is broken, that means a person isn’t a universal explainer? Usually it’s followed with something like, geez, you’re so stupid. You’ve even asked this question or something like that. And I’ve had numerous responses like this to this question. Now, there are two things I note about these responses. First, they are all totally ad hoc responses and or, in the case number three, an outright dodge of the problem. So the first two responses boil down to saying something like, maybe there is some reason that I can’t specify that answers this. That is the quintessential example of an ad hoc response.

[01:27:19]  Red: It’s so vague, so non explicit, that there’s no way to know what a count example would look like or what an independent test would look like. The third boils down to trying to make me feel foolish for asking the question in the first place or pretending I said something that I didn’t. Oh, you’re saying that it’s just if even one cell is broken, then they’re not a universal explainer? Of course, I said nothing like that. So it’s actually, the first at least was a response, it was just an ad hoc response. The first two, the third one was actually literally just changing the subject. So the second thing that I want to note here is that these are really kind of bad reactions to a completely fair question. So why is it? Why is this getting such a bad reaction to a completely fair question? I’m going to suggest that there’s a fairly obvious answer here. It is that they do see this as a potential falsifier to their theory of universal scholarship theory.

[01:28:26]  Blue: Well, what about this, Bruce? Yeah, I have a potential pushback. Maybe you've kind of already covered it, but I mean, I think it probably would be, you know, so maybe there's some mechanism in the brain that helps us to easily identify faces without even thinking about it that might be damaged or broken. But I mean, at least theoretically, and I would imagine there are some cases of this, you could at least make a thorough effort to get back some of that ability somehow, where you, you know, I don't know, just pay extra attention to the color of the hair and the shape of the face and all that. And, you know, I mean, they're still using human ingenuity. I mean, at least theoretically and probably in real life too, you could learn to recognize faces as well as before, don't you think? As well as before, no.

[01:29:34]  Red: But what you’re saying is true. That is, in fact, what people with face blindness do is that they learn to use explanations to replace what they used to do without an explanation.

[01:29:45]  Blue: Yeah. Yeah. So it could be considered a support for the UE hypothesis in some sense.

[01:29:51]  Green: Well, yeah, here’s my reaction to it, which is similar to what got my mind thinking about all this stuff in the first place, which is, Deutsch on Twitter recently said that if it really comes down to it, you can, humans can figure it out the hard way, which is there is an algorithm that runs in this part of the brain that recognizes faces. And you can literally write down the algorithm as a Turing machine would recognize it. And you could step through the algorithm step by step and understand the face in that way. So that’s my reaction to it. Perhaps that’s a way of asserting universal explanation where if you had that algorithm that that part of the brain executes, you could literally do it the hard way with a pen and paper and step through it like that.

[01:30:51]  Blue: No, I really like that. Learning the hard way. That’s how he said it. You guys

[01:30:57]  Red: have exactly the right answer.

[01:31:00]  Blue: But you know, actually, let me let me go through another example, perhaps against that is that, for example, my the perfect, I know we’ve talked about the perfect pitch thing. You know, my wife has perfect pitch. She can like hear, which is quite rare even for musicians. She can hear a note and she says it’s as easy as telling what color something is, what note it is. And like, I am quite convinced that if I spent the rest of my life doing nothing, but trying to capture that ability that she’s never had to learn, I would not be able to do it. I mean, to be able to just match a note perfectly, I don’t see how that could be. Not just match. A lot of people have relative pitch, but to identify precisely what note that is. I mean, that just seems crazy. There could be a lot of things like that, I guess.

[01:31:58]  Red: Okay. You guys have what I think is exactly the right answer. And I actually am going to show you how to apply that same answer to the perfect pitch example in just a second. So here's the thing I want to make clear, though. Me raising the question, does face blindness challenge universal explainership, is a fantastic tool by which to understand universal explainership better. And the responses I get from Deutschians are very suggestive, not of an attempt to understand the theory better, but of an attempt to do away with the problem instead of learning from it. But there is something that is useful about the responses. And it really comes down to this. The reason why they're so defensive about this is because at some level they can see that this is a potential falsifier of a theory that they hold to, and that it makes them uncomfortable. And so they feel a need to give a kind of knee jerk response and do away with the problem rather than dig into the problem. Why is that? Why does it feel like a potential falsifier to them? I think the reason why is very simple. And it's a problem that exists within the Deutschian community, where they have a mistake that is not that hard to correct, but they don't want to correct it. It's this: it is the idea, which I'm suggesting did come from David Deutsch, though we're not quite sure, that all theory-less knowledge can be absorbed into theory-full knowledge. So if all human knowledge is explanatory, or theory-full, then universal explainership should make it possible to relearn how to recognize faces with no degradation of ability. And in real life, that just isn't the case.

[01:33:50]  Red: So they feel very threatened when the problem is raised and they feel a need to dodge the problem. Okay. But this behavior is tacitly equivalent to saying, yes, I consider this a potential falsifier. Okay. But what's being falsified? Is it actually UE theory being falsified? Or is it the idea that all human knowledge is explanatory that is being falsified in their minds? In their minds those two theories are the same, a single theory. But there's no particular reason why they have to be the same theory. So there's actually a much easier response to this question, a much, much easier response, a much better response that still follows Popper's ratchet and isn't just an ad hoc response. Okay. And it is to accept that Valiant is correct that not all knowledge is theory-full. This seems quite obvious if you give it even a moment's thought, especially if you just take Deutsch's own theory seriously on his own terms. For instance, is biological evolution rooted in explanation? It isn't, right? So it makes sense that it shouldn't be the case that all knowledge is theory-full ultimately, right? Theory-less knowledge is a completely legitimate kind of knowledge that sometimes can be absorbed into theory-full knowledge, but can't always be absorbed into theory-full knowledge. Okay. So if you take Valiant's view seriously, and also notice that Deutsch's view is kind of tacitly saying the same thing, then you understand what the true correct answer to face blindness is. Face blindness is caused by the destruction of a specific ecorithm in the brain that is meant to recognize faces but is rooted entirely in theory-less knowledge rather than theory-full knowledge.

[01:35:42]  Red: Because of this, it would be impossible for the part of the human brain that deals with theories and explanations to take over the function in a way that's equally effective. So let's take two competing theories here seriously for a moment. Theory one: face recognition is rooted in theory-less knowledge of a kind that can't be reduced to theory-full knowledge. That's Valiant's view. Theory two: all knowledge can be reduced to explanations, or theory-full knowledge, so there can't be such a thing as permanent face blindness. Face blindness is an observation that can refute one of these. It represents a crucial test. The fact that face blindness is permanent refutes theory two. We can do away with theory two now. Okay. Moreover, people with face blindness do attempt to learn to recognize faces using their explainership, like Peter is suggesting. In fact, that is what the man is doing when he puts a bow on his wife to recognize her at a party: he's replacing the broken ecorithm that was theory-less with a new ecorithm that is theory-full. And because that part of the brain has to operate slower, he's never going to be as effective as the theory-less version. But he might be able to get to the point where it only occasionally bothers him, because he's gotten very good at memorizing where somebody's mole is on their face and knowing that's that person. Right. There are ways he could learn to recognize people using a theory-full approach, but it will never fully replace the theory-less version. Do you see what I'm saying here? Yes. That

[01:37:25]  Blue: makes a heck of a lot of sense, theory-less versus theory-full. That's an interesting distinction.

[01:37:32]  Red: Okay. I have a real life example of this with myself. I have two cousins that are identical twins. Their names are Adrian and Ashley. They're quite a bit younger than me, but they kind of grew up with me. So I was going to school at college and I was living with my grandparents, and Adrian and Ashley lived there most of the time. They had the freedom to go between their parents' house and their grandmother's house, and they were driven there all the time and they slept overnight there. They would often spend the whole week there as if they lived there. So they were quite young and I was in my twenties, but they're almost like little sisters to me because I was around them so much while they were growing up. Okay. So I'm the age of an uncle. They're actually cousins, but our actual relationship is more like big brother, little sister. Okay. Now, when I see them, they do not look identical to me at all. I can tell them apart just by glancing, and it's not just that I have some way of explaining the difference in my mind. They literally look different to me even though they're identical twins, to the point where I can look at pictures of them as babies and they look so different to me that they might as well just be two different people. Right? Now, of course, the reason why I can do that isn't because of some explanation or theory-full knowledge. It's because I knew them from such a young age and for so long that my face recognizer algorithm, which is theory-less, has created knowledge on how to tell them apart.

[01:39:24]  Red: Now, if you were to ask me, Bruce, how do you tell them apart? I could sort of give you an explanation. I could, for example, stop and think about it and I could say, well, Ashley's face is a little bit rounder and Adrian's face is a little bit sharper, and I could actually stop and think about it and start to turn my theory-less module into some sort of theory-full version, but it wouldn't be enough to really explain to you how to tell them apart, at least not in an instant. Right? And that's not really what I'm noticing anyway. Whatever it is I'm doing to tell them apart, it's just instantaneous. My theory-less knowledge, I have no idea how it's doing it. My explanations are actually post facto made-up explanations, and they're not really how I'm doing it, is what I'm trying to say. Okay, they may be accurate, they may even help you be able to tell the difference, and then you might be able to tell the difference between them even though they look identical to you, by stopping and thinking about it. Is this the slightly more narrow face? Is this the slightly rounder face? And then going, oh, this is Adrian, or oh, this is Ashley. But that's not how I'm really doing it. I'm doing it because I just lived around them and I can just do it. Okay. So I want to suggest that this is actually what's going on with Dan Geshe's examples. Artificial neural networks are primarily but not entirely rooted in theory-less knowledge. What is theory-less knowledge? Well, we have no definitive theory of it at this point

[01:40:57]  Red: so far. It is often but not always rooted in something like probability theory, which is how neural networks kind of sort of work. A neural net is a collection of weights, or parameters, for a giant function, the neural net, that happens by trial and error to perform well within the environment it was tuned for. Take it outside that environment and, just like Deutsch explained about rules of thumb, you'll likely end up with a catastrophe, as the creators of self-driving cars have found out the hard way. Neural nets do primarily work by what David Deutsch calls rules of thumb, which is exactly equivalent to what Leslie Valiant calls theory-less knowledge. Now, I admit there is no clear cut boundary between theory-less and theory-full knowledge. Neural nets famously sometimes capture something like theory-full knowledge, when they can. We might talk about his example of recognizing a dog. Let's instead do recognizing a face, like Facebook does where it figures out where the faces are in the picture. If you actually look at the studies of how neural nets do that, we know that they create layers that can include, say, an eye detector based on probabilities. There is sometimes a theory-full explanation for parts of how neural nets work, but it's never enough to really represent what the neural net is actually doing. It's almost more like when I try to explain to you how I tell Adrian and Ashley apart: it's a correct but somewhat post facto attempt to take a bunch of theory-less knowledge and notice that parts of it kind of sort of get subsumed partially into theory-full knowledge.
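
A deliberately tiny sketch of the "collection of weights for a giant function" picture, with made-up toy numbers: a single linear neuron whose two parameters are nudged by trial and error until they fit the training environment. The learned numbers are the knowledge, and nothing in them says why the environment behaves the way it does.

```python
import random

# A neural "net" reduced to its essentials: a function with tunable weights,
# adjusted by trial and error until it performs well in its training environment.
# (Deliberately tiny; real nets differ in scale and architecture, not in kind.)

w, b = 0.0, 0.0  # the "knowledge" is nothing but these numbers

def predict(x):
    return w * x + b

# Training environment: examples drawn from the rule y = 3x + 1,
# which the learner never sees as a rule, only as examples.
data = [(x, 3 * x + 1) for x in [0.0, 0.5, 1.0, 1.5, 2.0]]

for _ in range(2000):
    x, y = random.choice(data)
    error = predict(x) - y
    w -= 0.01 * error * x   # nudge the weights in the direction that shrinks the error
    b -= 0.01 * error

# The result is a rule of thumb encoded as numbers, not an explanation of the rule.
print(round(w, 2), round(b, 2))   # close to 3 and 1 on the training range
```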

[01:42:52]  Red: It is true that while we rarely understand how modern neural nets work, we often, decades later, work out what they're doing and form theories about it. But this tends to be partial at best. The best explanation I can give you is that neural nets work because they are designed to find clever rules of thumb, and they memorize them, just like a master builder in the pre-scientific era did with rules of thumb. So long as you use neural nets in an appropriate environment, they will work well enough for your purposes. But to understand how they work may come down to memorizing a billion rules of thumb, which is impossible for a human being to do. It is possible that Deutsch will eventually prove correct, and in fact maybe there is no such thing as theory-less knowledge that can't be absorbed or subsumed into theory-full knowledge. Now, I'm not going to claim that I know for sure, because I don't, but I doubt it, and let me explain why. So let me ask the question this way. Will we someday have a theory of face recognition such that we will be able to teach people with face blindness, just explain the theory to them, and then they will be able to recognize faces again using their explainership, and do it just as effectively as what I'm calling the theory-less knowledge created by the face recognition module? Well, I mean, it's not impossible that maybe someday there is such a general theory of face recognition, but I guess I see no reason why there would have to be, and I seriously doubt that there is.

[01:44:30]  Red: So my conjecture is that Valiant is correct that there is a real life distinction to be made between theory-full and theory-less knowledge, and the two are not the same. Moreover, most knowledge is theory-less, as per Valiant's theory. This brings us to the central question of the podcast. Does the existence of theory-less knowledge refute universal explainership theory? For the defenders of universal explainership theory who understood universal explainership as including this idea that all theory-less knowledge, all rules of thumb, can be subsumed into theory-full knowledge, yes, Dan has refuted that version of universal explainership theory, and it is in fact a false theory, and Dan has given you a refutation of it. But to me that was never really a good interpretation of universal explainership theory to begin with. I would prefer to reinterpret universal explainership theory to be something like this. There is no theory-full knowledge that a human being can't comprehend if they have the right tools to reduce the complexity. So that might mean in some cases pen and pencil, maybe a computer; for a giant corporation, it might be that you hire lots of people to take care of different parts, farm out the specifics, etc. But I also believe it must be interpreted something like this. There is no theory-less knowledge where a human being can't comprehend the algorithm that creates it. This adjustment makes sense to me because the algorithm that creates theory-less knowledge is itself always explained by some theory-full knowledge. This is the actual relationship between theory-less and theory-full knowledge.

[01:46:20]  Red: We may never make sense of the specifics of how I personally tell Adrian and Ashley apart, but I see no reason at all that we can't someday understand the algorithm that is in my brain that is telling Adrian and Ashley apart and how it created the necessary theory-less knowledge to do that. Or if we don't understand that specific algorithm, at least I believe that we will understand an algorithm that is as good or better. And it will be rooted in theory-full knowledge, and we as universal explainers will be able to understand it. In this new form, UE theory has both of the features Popper's ratchet requires of us. It is now more specific, and thus more testable, than the original theory, but it is also harder to refute than before, because it's more obvious what we're saying and we've made it far more explicit. So if you're willing to accept this as our best form of UE theory, then I make the claim that neural nets do not refute UE theory, because we do understand the algorithms that create the theory-less knowledge and how they work. So, side note, this whole presentation requires one to give up on the idea that the two sources hypothesis is correct, because it does mean that neural nets are creating knowledge, it's just theory-less knowledge. You may not have noticed that, but that is a kind of assumption that you almost have to make to make sense of this argument. But

[01:47:56]  Red: this is one of several reasons why I feel it's important to abandon this two sources hypothesis and take seriously this idea that if you want to understand AGI, you must take into consideration the existence of both theory-less knowledge and theory-full knowledge. In fact, this is something we can dig into in a future podcast. I promise I'm wrapping up. The simple truth is that I see no reason why AGI isn't ultimately going to have to be a combination of algorithms that create theory-less knowledge and algorithms that create theory-full knowledge. This really just makes sense to me. Do you really think that dogs don't have a good face recognition algorithm? They don't have theory-full knowledge, but they can recognize faces probably as well as a human. Maybe they use scent. I don't know, but then it would be a scent algorithm. It makes sense that the face recognition algorithm that gets destroyed in face blindness is actually a form of animal intelligence. And most of what we think of about human beings probably is animal intelligence. And then we've got this additional universal explainer module on top of that. And that's what makes us suddenly leap way ahead of chimps, is that one little additional module. But it's built on top of a gigantic set of animal intelligence that humans do, in fact, use. And it's a huge part of what we call our intelligence. Okay, that is my answer to Dan: yes, in summary, he has, with his examples, refuted probably even the popular understanding of universal explainership theory. But I am proposing a revised version, based on that refutation, that I believe is more specific, more explicit, and not refuted by his examples.

[01:49:50]  Red: Dan, what do you think of this? It’s okay if you disagree with me, but this is how I would look at it.

[01:49:56]  Green: Yeah, I think the interesting thing is that Deutsch himself, like I ran across this, Deutsch was proving universal explainership by saying, if an alien has an explanation that we can’t understand, at the very least, we can understand it the hard way by literally walking through this algorithm on a Turing machine step by step. And that was his proof that we are universal explainers. And to me, that’s the same thing as what you’re saying, as we might not be able to understand the vision recognition system, but we can always break it down into its algorithm. And that’s a form of understanding.

[01:50:50]  Red: Yes.

[01:50:51]  Green: So in some sense to me that it’s kind of redefining what understanding means. So yeah, there you go.

[01:51:01]  Red: So one of my favorite authors is HP Lovecraft. We need to do an episode just on HP Lovecraft. And we'll do Conan in the same episode or something, because Conan actually takes place within the Lovecraft mythos, a little known fact. HP Lovecraft was a very rationally minded, scientifically minded kind of guy. And one of the things that really bothered him, and that he worked into his horror stories, was this idea of non-Euclidean geometry, or even just extra dimensions, this idea that reality is incomprehensible to humans because humans cannot conceive beyond three dimensions. And we can't conceive what non-Euclidean geometry would look like. Both of those could be posed, and were being posed by Lovecraft, as counterexamples to universal explainership. Of course, the term universal explainership and even the concept of universal explainership didn't exist back in his era. But he was making a claim in his stories that there are some things that are just so incomprehensible that if you were to see them, they would drive you insane. And of course, the correct answer to Lovecraft, at least on this particular objection, is to point out that it's very easy for humans to understand multiple dimensions or non-Euclidean geometry. They just do it at a mathematical level, right? They don't intuit it in the same way that we would intuit a 3D space. But so what? Like, what does it mean to comprehend, right? Clearly, we can't comprehend a four dimensional space or a 17 dimensional space or a 1000 dimensional space in an intuitive sense like we would a three dimensional space. But so what? Like, it literally just doesn't even slow us down, right? It's hard to see why this would even be a good objection to universal explainership.

[01:52:55]  Red: In fact, I can explain to you very simply how to conceive in your mind four dimensions. You just do it as an array. So you’d have a one dimensional array that’s linear. You imagine a two dimensional array. So now you imagine a sheet. Imagine a three dimensional array. It’s a cube. Then imagine a four dimensional array. It’s a set of cubes. It’s not that hard to think about it in your head. There’s nothing incomprehensible about it at all. It does force you to think a little more clearly about what we mean by comprehension. And there are different ways we might think of comprehension. And some are impossible for humans, but others are not.
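
Bruce's array picture is easy to make literal. Here is a sketch using numpy (my choice of library, purely for illustration): each extra dimension is just one more index.

```python
import numpy as np

# The progression described above: each extra dimension is just another index.
line  = np.zeros(5)              # 1D: a line of 5 numbers
sheet = np.zeros((5, 5))         # 2D: a sheet
cube  = np.zeros((5, 5, 5))      # 3D: a cube
hyper = np.zeros((5, 5, 5, 5))   # 4D: a set of cubes, five cubes stacked along a new axis

hyper[2, 4, 0, 3] = 1.0          # addressing a point in 4D is no harder than in 3D
print(hyper.shape)               # (5, 5, 5, 5)
print(hyper[2].shape)            # fixing one index leaves an ordinary 3D cube: (5, 5, 5)
```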

[01:53:34]  Blue: Is

[01:53:34]  Green: that

[01:53:34]  Blue: what a Tesseract is? Yes, it is. To get us to visualize another dimension?

[01:53:41]  Red: Yeah. I think that’s five dimensions though.

[01:53:43]  Blue: Okay. So I’ve got one question for you, Bruce. This of putting the problem of open -endedness in a way where theory full is open -ended knowledge and theory less is not open -ended knowledge. That’s an excellent AGI question.

[01:54:11]  Red: And once you understand these ideas, you can start to ask questions like that, which is why you should try to understand these sorts of ideas. I want you to actually think about that question for a second. I actually think you can answer the question yourself, and I think it’s going to surprise you that you can.

[01:54:30]  Blue: Well, my intuition says yes. Okay.

[01:54:33]  Red: Your intuition says yes. Can you think of a counter example to that intuition?

[01:54:37]  Blue: Oh boy.

[01:54:38]  Red: There’s an obvious one. We’ve actually talked about it in this podcast.

[01:54:43]  Unknown: Okay.

[01:54:45]  Red: Is biological evolution theory full?

[01:54:49]  Blue: Oh, of course. No, no. Okay, but it is open -ended. Yes. Okay. Fair enough.

[01:54:56]  Red: That was a great question. And in fact, it's still relevant to AGI, because for human knowledge, as compared to animal knowledge, I'm almost certain that the answer to your question in that case is yes, that the jump to universality and the jump to open-endedness was due to explicit, theory-full knowledge. The fact that humans have theory-full knowledge and animals don't. Okay. That's like a super, we're now getting very explicit, way more so than the kind of level we started this whole discussion at. Right? The fact that we've actually taken Dan's refutation seriously has allowed us to reimagine the theory in a way that's way more helpful now in terms of trying to figure out, well, how does this apply to AGI? And this is the beauty of not reacting badly to refutations. It's okay that the original UE theory got refuted and that we're now dealing with a new version of it. Like, who cares, right? It's not something we need to feel defensive about. Revel in refutations. Look at the problems that come up like this and go, I'm going to try to figure out how to respond to that problem by modifying my theories, by following Popper's ratchet, by making them more explicit and easier to refute or to criticize. And I'm going to still come up with an answer that can't be refuted or criticized. And that is how you make progress. That is exactly what Popper is trying to say about how we make progress.

[01:56:28]  Blue: Well, it sounds like words to live by.

[01:56:32]  Green: I mean, I would just say that maybe in some sense it's a bit of a disappointment, because it kind of seems to me like, instead of AGI being this beautiful theory, AGI is actually composed of theory-less knowledge, that it's not like this beautiful theory like special relativity or something like that.

[01:57:00]  Red: Okay, well, let’s think about that for a second. So if we talk about the theory of AGI, it seems like, so let’s make this a conjecture, and I admit it is, so it might be wrong. But it’s a conjecture that fits what you just said. So I’m kind of intentionally making this as hard for myself as I can based on what you just suggested. Let’s say that AGI actually consists of two parts. It consists of animal intelligence, plus this universal explainer module. Okay, and that you can’t actually make an AGI without both, that you can’t slice off just the universal explainer module and have any hope of it working, that it actually is built upon these algorithms that create theory less knowledge. Is that kind of what you have in mind here, Dan, or my misunderstanding? It is, but what’s to say that the module that creates explanations is built of explanatory knowledge itself and not theory less knowledge.

[01:58:12]  Green: Hmm,

[01:58:12]  Unknown: interesting.

[01:58:13]  Red: Okay. So now, because there may even be some truth to that, if explanatory knowledge is built on animal intelligence and animal intelligence is creating theory less knowledge, in fact, let’s even make a conjecture. Again, we’re trying to intentionally pick a conjecture that’s as hard on us as possible, because we learn from the counter examples. Okay, so let’s say I’ve suggested in past podcasts, and I think you and I have talked about this, Dan, that it’s very possible that you cannot be a universal explainer without learning language first, and that Deutsch suggested there may be a meme you have to pick up first. The meme might itself be language. Okay. And language, as you pointed out, may have no underlying beautiful theory to it. It might actually be built on an algorithm that is theory less. So in this way of thinking, now, just to be clear, since I know Herve is going to be listening to this podcast at some point, and he’s going to say, no, that’s all wrong. And I’m like, no, no, Herve, we’re just, we’re intentionally making this as hard on ourselves as possible. We’re not saying this is true. And it may not be, but we’re intentionally picking what if this was true, and we’re making it the worst case scenario for ourselves. Okay, so now we have a scenario where you have to have language to be a universal explainer, and language is under this hypothetical actually just an algorithm that creates theory less knowledge, like an LLM. I think at one point, I would have told you that was impossible,

[01:59:59]  Blue: Dan,

[01:59:59]  Red: and I think the invention of LLMs would have proven me wrong. So I actually think this is, it's definitely not a best theory, but it's a guess that seems really kind of possible, very plausible to me at this point, post the invention of LLMs, that maybe language is itself just an algorithm that creates theory-less knowledge. I still, so under that, I can see how you could say that seems a little disappointing. I was expecting some sort of beautiful theory of explanations, or sorry, of universal explainership. I'm still not sure this is really a problem, though. And let me see if I can explain why. Now this is my view, and so we're definitely not in an area where things are sure enough that I feel comfortable saying we've got a best theory on this subject. So this is definitely going to be a matter of opinion at this point, and yes, it's going to violate Popper's ratchet, and we're going to largely just rely on our intuitions. But what I've really suggested here in the podcast today is that while you can't necessarily understand the theory-less nature of language, you could understand the algorithm that creates it as a theory, and that therefore maybe it still is a beautiful thing, just like general relativity, where the final theory consists of a series of theory-full things, algorithms, that yes, in turn rely on theory-less knowledge, but we can still explain each of the individual parts as a theory of how that theory-less knowledge gets created.

[02:01:54]  Red: If that were the case, Dan, and this is really what I’m suggesting the correct theory of universal explanation is at the moment, would that change your mind any to see it in that way, or does it still feel to you like it’s maybe a little bit disappointing?

[02:02:11]  Green: We understand the theory of what creates an LLM and the structure behind that. Is that similar to how humans -

[02:02:26]  Red: Well, obviously I don’t know if that’s similar or not in real life. Let’s make a hypothetical where we’re saying we have discovered that they are similar.

[02:02:36]  Green: Okay. Well, then that would be very satisfying if that was the case. Yes.

[02:02:42]  Red: Okay, so we don’t really understand the LLM, but we do understand how a transaction models with attention works. And we understand how that has built the LLM. That might be, to me, that still seems like a fairly satisfying answer, especially if it turns out that there’s nothing more theory -full about the LLM. Now, that may not be true. I mean, I suspect as we dig into LLMs, we’re going to find out that there’s quite a few interesting theories that come out of it, theory -full theories that come out of it, right?

[02:03:20]  Green: Absolutely. Sorry, we can use LLMs to understand language itself.

[02:03:27]  Red: Right. I think there’s a good chance we can. It may be that that’s not entirely true. I mean, because neural nets do rely so much on theory -less knowledge, there may be a certain impenetrableness to language. I’m not sure that really stops me from feeling like that’s a giant problem because we already understand how the LLM was created. The LLM itself is based on an absolute theory of attention and transaction models. And we know exactly why they work, why they do what they do, how they do it. There’s no real doubt about that. That’s a complete theory. It’s really just how is it forming all this theory -less knowledge and why does that emerge into something else? That may or may not have a theory. Some theory -less knowledge can be subsumed into a theory. Some probably can’t. But probably all the stuff that’s really interesting can be. Almost by definition, the stuff that can’t be subsumed into a theory is going to just be true because it’s a collection of rules of thumb that happen to work well for the current environment. That might actually be as good an explanation as you need at that point once we’ve sucked all the theory out of it.

[02:04:56]  Blue: Thank you for this, Bruce. Dan, I hope you weren’t thinking you were going to get interviewed on this podcast. This was really enjoyable. And I guess we’re just not that kind of a podcast.

[02:05:13]  Red: Yeah. This is definitely Bruce gives a lecture each week podcast.

[02:05:20]  Blue: Well, it’s interesting for me. I know there are others out there who enjoy it. So, thank you, Bruce. All right. Thank you. And thank you, Dan. Nice to meet you.

[02:05:34]  Green: Nice to meet you guys. Bruce, always a pleasure.

[02:05:37]  Red: Yes. Thank you, Dan. And you’re welcome back anytime.

[02:05:48]  Blue: Hello again. If you've made it this far, please consider giving us a nice rating on whatever platform you use, or even making a financial contribution through the link provided in the show notes. As you probably know, we are a podcast loosely tied together by the Popper-Deutsch theory of knowledge. We believe David Deutsch's four strands tie everything together. So we discuss science, knowledge, computation, politics, art, and especially the search for artificial general intelligence. Also, please consider connecting with Bruce on X at BN Nielsen 01. Also, please consider joining the Facebook group, The Many Worlds of David Deutsch, where Bruce and I first started connecting. Thank you.

