Episode 75: Deutsch’s Theory of Knowledge: The Walking Robot

  • Links to this episode: Spotify / Apple Podcasts
  • This transcript was generated with AI using PodcastTranscriptor.
  • Unofficial AI-generated transcripts. These may contain mistakes. Please check against the actual podcast.
  • Speakers are denoted as color names.

Transcript

[00:00:08]  Blue: Welcome to the theory of anything podcast. Hey Peter.

[00:00:11]  Red: Hello, Bruce. How you doing today?

[00:00:13]  Blue: Doing good. This is our Thanksgiving weekend here and I have been having fun with my family, which mostly consists of watching movies.

[00:00:23]  Red: Oh, that sounds nice. Yeah, we've been doing that a little bit too. I had my all-ChatGPT Thanksgiving dinner, in the sense that I used ChatGPT to plan out all the recipes, and that worked out quite well. It's very convenient. You say, okay, give me 10 variations on turkey. Boom. One catches your eye. You go to that and then ask it further. Okay, well, I've used this technique where I put the foil on it to keep the breast from drying out, and then it gives you a new recipe using that technique. Give me 10 variations on mashed potatoes. It'll say roasted garlic and truffle oil or whatever. I found it very, very useful. Maybe it doesn't have quite the sentimental appeal of using grandma's recipe or something. You know what I mean?

[00:01:23]  Blue: Saudi has a theory that I really like, that ChatGPT and this latest round of generative AI, what it is, is a variation engine. It puts lots of variants out there much faster than humans can, but then ultimately you need a human to help choose which are the best ones. I don't know if that's exactly the truth, but I think that's a really good first cut at trying to understand the relationship of generative AI to human creativity.

[00:01:55]  Red: Yeah, I like that. Yeah, it rings true. A variation engine.

[00:01:59]  Blue: Yeah. I don't know. That may not be the term she used, but that's my interpretation of Saudi's theory. That would be a more accurate statement. Okay. Well, last time we talked about the problem of open-endedness, and that was kind of a setup for this episode. I've got a lot of interesting thoughts that we'll probably spread over several different episodes, tackling this from several different angles. Today, we're going to talk about Deutsch's theory of knowledge, particularly as it comes out of The Beginning of Infinity, which is maybe somewhat different from his constructor theory of knowledge, although we're going to see they're actually quite similar, or maybe arguably the same. So today, we're going to talk about what he says about his theory of knowledge in The Beginning of Infinity. And we're going to look specifically at the example of the walking robot that he uses. And I know I've covered this in past podcasts, but we're going to try to do this a bit faster, a lot more focused, and honestly, a lot more explicit than I've done in the past. In the past, I was exploring ideas, trying to figure this out myself. Now I'm ready to string it together into a series of what I think are interesting thoughts. All right. Well, David Deutsch, in The Beginning of Infinity, explores a concept that he calls knowledge. So on page 78 of The Beginning of Infinity, he says human brains and DNA molecules are general-purpose information storage media. They are in principle capable of storing any kind of information.

[00:03:25]  Blue: Moreover, the two types of information, and this is going to turn out to be important to today's episode, the two types that they respectively evolved to store have a property of cosmic significance in common. Once they are physically embodied in a suitable environment, they tend to cause themselves to remain so. I call this type of information knowledge. All right. So in episode 44, the one where I reported on the fact that I had actually met with David Deutsch and got to ask him some of my questions about his theory of knowledge, he admitted to me that that's not the only way he uses the word knowledge. In fact, he actually said, I've got no problem referring to what the robot creates as knowledge. It's just not what I meant. Right. And so I know a lot of people have kind of become essentialists around this, that this is the one and only definition of knowledge. But even David Deutsch doesn't believe that. Okay. Now, Deutsch is following Karl Popper in how he reads definitions. And here's a quote from Karl Popper: we read definitions from back to front, or from right to left, for it starts with a defining formula and asks for a short label for it. And notice that's exactly what Deutsch did. He starts with a concept, this adapted information that causes itself to remain so, and then he grabs a label for it. And the label that he chooses is knowledge. Okay.

[00:04:57]  Blue: And I know from talking with him that's not the only way you could use the word knowledge. But he's using that label, knowledge, to refer to that concept: adapted information that, once physically embodied in a suitable environment, tends to cause itself to remain so. Okay.

[00:05:17]  Red: Now, are we talking, like, what kind of time frame do you think he's thinking of when he says that?

[00:05:23]  Blue: I’m going to ask that question.

[00:05:27]  Red: It’s just that’s something I’ve always wondered.

[00:05:29]  Blue: So you are right to wonder. And it turns out that's going to be a really big thing that I'm going to emphasize over and over again. Okay. Now, according to Popper, a definition is a kind of statement or theory or proposition. Notice that Popper refers to definitions as theories. Okay. So this is why I think we are correct to refer to this as Deutsch's theory of knowledge, even though we're not claiming it's the only way you could use the word knowledge, and Deutsch is not claiming it's the only way you could use the word knowledge. This is a theory he is putting forth, with a short label, as a theory of knowledge. Now, his theory mentions these characteristics for what he's calling knowledge: it's stored in a general-purpose medium, and it causes itself to remain so. That's two criteria that define what knowledge is. Now, you sometimes hear other things that he's said. That's why I'm actually curious about that list you said you have of all the different ways he's tried to define knowledge over time. But one of the ones you hear a lot is that it's adapted information with causal power. Okay, so let's throw that one in as part of his theory of knowledge also. And then in The Fabric of Reality, he also talks about it as being adapted information that converges across the many worlds. And the example he uses in The Fabric of Reality is DNA. So let's say you have a string of DNA.

[00:07:10]  Blue: And let's say you have two identical strings of DNA, the exact same set of bases in the exact same order. But one of them is a gene and one of them is just junk DNA. So one of them is knowledge and one of them is not, even though they're the exact same string either way. Okay, now, how do you explain that? Well, he explains it through convergence across the many worlds. If you were able to look across the many worlds in quantum physics, you would find that the junk DNA varies all over the place across the many worlds, because it can change and nothing changes in the organism, whereas the one that's the gene causes itself to remain so. So it converges across the many worlds. Okay. Now, one thing that I want to point out here is that you don't even need the many worlds to explain this. If you were to go out and look at animals in the same species, you would be able to show that that same string converges across the members of the species, whereas the junk DNA does not.

[00:08:15]  Red: OK,

[00:08:16]  Blue: And this is something that I'm going to make a big deal about: you don't actually need to reference the many worlds to be able to utilize the power of this part of his definition, or theory, of knowledge. In fact, often you can just run something multiple times, or look at it across multiple runs, in this case different organisms in the same species, and you can actually physically confirm, right here in one world, that there is convergence. Okay, so I like this idea of convergence across the many worlds, or just convergence, which would even be better. Okay,

[00:08:50]  Red: so what I'm kind of getting is that whether knowledge is encoded genetically, or as a meme, or, backing up to the big picture, looking at the multiverse, it all kind of works in a somewhat similar way. Sort of like an almost Darwinian...

[00:09:09]  Blue: Yes,

[00:09:10]  Red: way. OK, yes. That makes sense.

[00:09:13]  Blue: But notice also that in that quote I took from The Beginning of Infinity, Deutsch also specifies two sources of knowledge, specifically biological evolution and human minds. Now, I'm going to refer to this part of his theory of knowledge as the two sources hypothesis. This is me giving it a nice simple label so that I can keep saying "the two sources hypothesis," and you'll know I mean the part of Deutsch's theory of knowledge that says there are only two sources of knowledge, biological evolution and human minds. Okay. And it's also interesting that Chiara Marletto, in her book on constructor theory that came out, The Science of Can and Can't, doubles down on this two sources hypothesis, referring to knowledge as recipes. And then she says there are two kinds. There's the kind of knowledge that's contained in DNA; this is a quote now from her book, page nine: recipes coded in the pattern of living cells, DNA. And then later, she refers to the kind that is human knowledge; quote from the book, page 13: the other kind of recipes I mentioned is those that maintain our civilization in existence. Okay, so both Deutsch and Marletto are saying there are only two sources of knowledge. Or in other words, remember, knowledge is just a shorthand for a concept, and the concept is adapted information that causes itself to remain so. So they are saying there are two sources of adapted information that causes itself to remain so: biological evolution and human minds. This turns out to be incredibly important to people that accept their theory, even though, as we'll see, it does not follow from the rest of the theory.

[00:10:59]  Blue: So Deutsch, in The Beginning of Infinity on page 160, says artificial evolution has been done successfully many times. It is a useful technique. It certainly constitutes evolution, in scare quotes here, in the sense of alternating variation and selection. But is it evolution in the more important sense of the creation of knowledge by variation and selection? I doubt that that has been achieved yet. Okay, he actually just says "I doubt that it has been, yet," but I have to insert the word achieved there so that you know what he's referring to. If all Deutsch said here is that artificial evolution lacks the property of open-ended search that real biological evolution has, then we've already talked about how he is correct about that. So let me put this in a slightly different way. I completely acknowledge, 100 percent, that there are only two open-ended searches that we know about today: biological evolution and human minds. Therefore, biological evolution and human minds are special, just exactly like Deutsch is trying to say they are special. Okay, but in this case, I'm referring to the fact that they are open-ended, not to the fact that they do or don't create knowledge. All right. So he's actually going one further. He is saying artificial evolution literally creates no knowledge. And remember, knowledge here is just a shorthand for information that, once physically embodied in a suitable environment, tends to cause itself to remain so. So let's translate by substituting, in place of the word knowledge, the theory or concept that it points to. Here's now the revised quote.

[00:12:41]  Blue: But is artificial evolution a kind of evolution in the more important sense of the creation of adapted information that causes itself to remain so by variation and selection? I doubt that such adapted information that causes itself to remain so has been achieved yet by artificial evolution. So the implication is that there is an actual physical difference between the type of information created by, say, a genetic programming algorithm, which is the example he's going to use, and we'll cover that, and biological evolution. So Deutsch offers a specific example of this. It's a robot that uses a genetic programming algorithm to learn to walk. And he is implying that the genetic programming algorithm physically creates no information that causes itself to remain so, because that's what the word knowledge means in this context. This is the thread of the tapestry we're going to pull, and we're going to see what happens. So let's actually take a look at the actual example that he uses. In fact, I'm going to read a paragraph from The Beginning of Infinity so you can get the full effect of what he's saying. Okay. He talks about how you can delegate the perspiration to a computer when you're trying to figure out this last step in the process of getting this robot to walk, by using a so-called evolutionary algorithm. The word so-called is important here, because of course he doesn't really believe it's a true evolutionary algorithm. Using the same computer simulation, we run many trials, each with a slight random variation of the first program.

[00:14:16]  Blue: The evolutionary algorithm subjects each simulated robot automatically to a battery of tests that you have provided: how far it can walk without falling over, how well it copes with obstacles and rough terrain, and so on. At the end of each run, the program that performed best is retained and the rest are discarded. Then many variants of that program are created and the process is repeated. After thousands of iterations of this evolutionary process, you may find that your robot walks quite well according to the criteria you have set. You can now write your thesis. Not only can you claim you have achieved a robot that walks with the required degree of skill, you can claim to have implemented evolution on a computer. Let's explain briefly how this actually works. This is something that I've actually tried programming for myself. It's really kind of cool, the idea of a genetic programming algorithm. There's actually a difference between genetic algorithms and genetic programming algorithms, but for our purposes, I'm going to just discuss genetic programming because it's maybe easier for me to explain. The idea of genetic programming is that you are running a genetic-style search, but what you're running it on is a set of computer statements. You've got these computer programs, or instructions, and you're actually writing an algorithm or program, but you're doing it randomly. In fact, you'll literally start off with just a complete set of random instructions in several different programs. Then you measure which one walks the best. Then you, literally, crossbreed and mutate that population to form a new population. Typically, you also keep the best one as well. The one that worked the best survives to the next generation.

[00:16:12]  Blue: All the rest, you just crossbreed and mutate. Crossbreed, you literally take part of one program and part of another program, and you splice them together.

[00:16:21]  Red: Is it crazy to think this could be a path towards AGI? I mean, if that’s how it happened in the natural world.

[00:16:29]  Blue: Let me say that it is absolutely crazy that this could be a path. Okay, good.

[00:16:34]  Red: A definitive answer. I was not expecting that.

[00:16:38]  Blue: Really, the reason why is because of the last episode: this process, as we're doing it, absolutely does not solve the problem of open-endedness. You would have to solve the problem of open-endedness for it to be a path to AGI.

[00:16:52]  Red: What you do then is after you’ve done all this crossbreeding and mutating, you discard the rest of the programs.

[00:16:58]  Blue: Then you retry with the new population. Notice that this is really just me restating exactly what Deutsch just barely said. You repeat this until the robot walks well enough for your purposes, with at least one program. Then you keep that final program as the final algorithm so that your robot can walk. If this were, say, a toy you were doing this for, or a robot you were going to sell, you would then reuse that program and copy it into thousands, maybe hundreds of thousands, or who knows how many robots you end up selling. That final program propagates itself because it was useful. The final program is a working series of program instructions that form an algorithm. It's a Turing-complete set of instructions, by the way. There is no doubt that an algorithm is a kind of information. In fact, that's pretty much tautological, by definition. There is no doubt that the final algorithm, the one we keep, is far better adapted to the task than the ones we started with, which were mere random instructions, or really than any of the variants that we throw away at the end. We objectively know the following facts about the genetic programming algorithm that causes this robot to walk. Number one, the genetic programming algorithm is creating adapted information. Number two, the final algorithm competed against a population of other variants that all got discarded, except for the one that actually worked. Number three, the snippets of code that are the most useful in the various programs of that process got replicated across the population.

[00:18:37]  Blue: Number four, the final algorithm, the one best adapted to the purpose, is still instantiated, because we kept it around to keep the robot walking correctly, because it's a useful program. And number five, if the robot is going to go to production, it will get replicated over and over again for who knows how long. Does this not match Deutsch's criteria, or theory, of knowledge? Let's take a look. So the criteria were: it's stored in a general-purpose medium. Yes; as per computational theory, Turing-complete computers are a general-purpose medium. Number two, it causes itself to remain so. Did we pass that criterion? Yes; the genetic programming algorithm works off snippets of code that cause themselves to remain so by surviving rounds of mating. The end result stays instantiated in the robot, and the variants are destroyed. Okay, how about: has causal power? Yes; this algorithm causes the robot to walk, and it causes itself to remain so. So it has causal power. And does it converge across the many worlds, or does it converge at all? Consider that most of the population of algorithms were random and useless. While there might be a large number of equally good walking algorithms, presumably there is, the number of algorithms that actually allow the robot to walk is very, very tiny compared to the entire set of equally sized algorithms that are just useless, random, and do nothing. Thus, we know there is convergence across the many worlds. Okay, so we've got a problem. Deutsch's example of non-knowledge fits his definition, or theory, of what knowledge physically is. It fits it perfectly.
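That convergence check can be demonstrated empirically, without any appeal to the many worlds, in exactly the "multiple runs" sense discussed earlier: run the same toy evolutionary search several times with different seeds and see which parts of the winners recur. The setup below is a made-up illustration (a bitstring genome where only the first half affects fitness, mimicking the gene-versus-junk-DNA example), not anything from the book:

```python
import random

GENOME = 20
FUNCTIONAL = GENOME // 2  # only the first half affects fitness ("the gene")

def fitness(genome):
    # Hypothetical criterion: the functional half is best when all ones;
    # the second half is "junk DNA" that selection never sees.
    return sum(genome[:FUNCTIONAL])

def evolve(seed, pop=20, gens=60, rate=0.05):
    rng = random.Random(seed)
    population = [[rng.randint(0, 1) for _ in range(GENOME)] for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop // 2]
        # Elitism plus mutated copies drawn from the top half.
        population = [population[0]] + [
            [bit ^ 1 if rng.random() < rate else bit for bit in rng.choice(parents)]
            for _ in range(pop - 1)
        ]
    return max(population, key=fitness)

# Independent runs stand in for looking "across the many worlds": the
# functional halves converge under selection, while the junk halves scatter.
winners = [evolve(seed) for seed in range(5)]
junk_halves = {tuple(w[FUNCTIONAL:]) for w in winners}
print([fitness(w) for w in winners], len(junk_halves))
```

Printing the winners shows the pattern claimed in the DNA example: the selected-for half of the genome recurs across runs, while the half that selection never sees drifts differently in each run.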

[00:20:33]  Blue: So I want to take a moment to really explain that this is a sincere question. When I was reading The Beginning of Infinity, I was bothered by this example of non-knowledge precisely because it is physically what he says knowledge is. And I kept wondering what I was missing. Why did everyone else around me read this and effectively say, oh, wow, this is so obviously correct, and I wasn't seeing it? What were they seeing that I wasn't? Okay, so I went out and asked people. I've asked a lot of people, and talked to many, many, many fans of David Deutsch's book The Beginning of Infinity, to try to help me with this. And let me really emphasize: this seems to me like a totally fair question that I would have expected the defenders of the theory to at least acknowledge as a fair question, and better yet take seriously, not only as a question but as a problem that needs to be solved, if only to explain where it contains some sort of misunderstanding or false assumption, perhaps. But when I bring it up, I feel like there's an immediate jump to being treated like there's just no question worth answering here. I've literally been told it's just obvious before. Okay. So for example, I might get told, isn't it just obvious that creating a walking robot algorithm is far less impressive than, say, creating an airplane? Sometimes I've literally been made fun of, or treated as if I'm hostile, for asking this question in the first place. I've had people say something to the effect of, fine, define knowledge that way, and then nearly everything counts as knowledge, or some sort of reaction like that.

[00:22:18]  Blue: At one point, I'm thinking, wait, I'm using Deutsch's definition of knowledge. I'm not varying it in the slightest. So this isn't me defining knowledge this way. Or they might flip it around on me. They might say something like, well, if you think this is knowledge, you need to prove to me it's knowledge. Okay. Note, by the way, that I'm never actually claiming it is knowledge. I'm just trying to show that Deutsch's criteria of what counts as knowledge include the walking robot example, which he considers to be not knowledge. I'm not claiming it is knowledge. I'm simply trying to ask why this contradiction seems to exist. So I can't recall anyone ever saying, oh, that is how Deutsch defined knowledge, and I can totally see why you're reading him that way, and the walking robot, at least the way you're reading it, does fit his definition or theory; let's work on that. Or, here is where I think you're maybe reading something in that he didn't intend. I've just never had somebody treat my question that way. After so many bad responses, I have come to wonder if no one knows how to resolve this problem. So the defenders of the theory, I think they see me like this. Let me try, as best I can, to acknowledge the somewhat legitimate feeling of how they see me when I raise this question. I think they're thinking something like this: Geez, Bruce, don't you just get it that the kind of knowledge created by the two sources, evolution and human minds, is just so much more innovative and impressive than the walking robot example?

[00:23:55]  Blue: Can't you easily see that the walking robot is not special compared to the knowledge that comes out of the two sources, such as airplanes or rockets to go to the moon? I mean, who cares about some dumb walking robot algorithm that couldn't even have gotten off the ground in the first place but for the fact that humans inserted all the real knowledge into the process first? This is just so obviously different than, say, inventing an aircraft. Show me an evolutionary algorithm that invents airplanes and then I'll be convinced. Okay. I acknowledge the legitimacy of the feelings being expressed here. I can see where they're coming from in terms of the emotional response, but I feel like it's more or less a non-answer to what I really do see as a completely sincere and completely legitimate question. I want to see knowledge defined in an objective, physical way such that if it needs to rule out the walking robot example, it doesn't do so by relying on gut feels about what should or shouldn't count. I want it stated in an objective and very specific way, such that I can see for myself that the walking robot example is clearly not knowledge. And if it isn't stated in a way where I can see that without having to rely on gut feels, that strikes me as a problem. So the correct theory should tell me specifically why the walking robot algorithm is not knowledge. And here's the thing: it should also tell me what it is instead of knowledge, rather than refusing to acknowledge its existence.

[00:25:38]  Blue: So let's take a look at what Deutsch himself says about this, because he actually says quite a bit about it in The Beginning of Infinity. And I want to take each of his arguments and show why none of them seem very compelling to me. Okay. So Deutsch claims that the evolutionary algorithm didn't create any knowledge and that the human did instead. So on page 160 of The Beginning of Infinity, he says there is a much more obvious explanation of where the knowledge comes from, namely the creativity of the programmer. The programmer in this example created a language of subroutines using his inspiration or creativity, and in Deutsch's view, all the knowledge went there. So you can imagine this: it's a graduate student in this hypothetical example. He's thought a lot about, okay, I'm going to need the robot to do this. And he creates this set of subroutines, but he doesn't know exactly how to use those subroutines to make the robot walk. So he's put all this effort into creating this language; effectively, a set of subroutines is like a language, right? That's the analogy. And all of that was created by the graduate student. And then what you're really doing with the evolutionary genetic programming algorithm is figuring out how to put together calls to this language to get the robot to actually walk. Okay. So on page 161, he says some of the knowledge that you packed into the language during those many months of design will have reach, because it has encoded some general truths about the laws of geometry, mechanics, and so on.

[00:27:11]  Red: So the walking robot is almost like a byproduct of actual knowledge. Is that a fair way of thinking about it?

[00:27:20]  Blue: Well, I'm not sure how to answer that question. If knowledge here, the way Deutsch is using it, is adapted information that causes itself to remain so, then obviously the walking robot itself is not adapted information that causes itself to remain so. But the walking algorithm that popped out of the end of the genetic programming process, that is adapted information that causes itself to remain so. So it would be knowledge, at least under the current definition or theory of knowledge that Deutsch puts forward in The Beginning of Infinity.

[00:27:53]  Red: Okay, I see.

[00:27:54]  Blue: If taken literally. Okay. Now, here's the thing. There's no doubt that the walking robot's ability to walk did not come solely from the algorithm produced by the evolutionary algorithm. And this is really the point that David Deutsch is making. Okay. So, no one's making the claim that the genetic programming algorithm is the sole reason why that robot can walk. Clearly, a far more important reason is that language of subroutines that the graduate student put together, and that came from a human, not from a genetic programming algorithm. Okay. So nobody's denying that. It seems to me, though, that the question we're asking here is whether the portion of adapted information that makes up the walking algorithm, the part that came out of the genetic programming algorithm, is knowledge or not, not whether it's the sole knowledge that allows the robot to walk. Clearly, it is not the sole knowledge that allows the robot to walk. It may not even be the most important knowledge that allows the robot to walk. It doesn't matter how important it is. We're looking at that set of adapted information that came out of the genetic programming algorithm, and we're asking if it is knowledge or not. And that's the only thing we're asking. Okay. Now, the reason why I emphasize this is because my experience with the defenders of Deutsch's theory of knowledge in The Beginning of Infinity is that they will start to list out all the knowledge the algorithm didn't create. Okay. So they might say, well, Bruce, there's this language of subroutines that the graduate student wrote. That didn't come from the genetic programming algorithm.

[00:29:36]  Blue: And then there's the knowledge of how to write a genetic programming algorithm. That didn't come from the genetic programming algorithm. There's the knowledge in the computer itself, or in the operating system of the computer, in all the different routines that came packaged with whatever software they're writing this in, in the CPU, et cetera. Okay. And they'll give me this giant list of knowledge that went into making the robot walk that clearly can't have come from the genetic programming algorithm. Okay. But this is an inherent problem of counting white swans. And honestly, at least by itself, it is a completely meaningless argument, for the exact same reason that counting white swans is a meaningless argument. Okay. Let me give an analogy to help people understand why this is not a good answer. Think about the movie Sneakers, one of my favorite movies. In the movie Sneakers, the whole plot revolves around this idea that somebody came up with an algorithm that can crack RSA encryption. Okay. Now, it's probably not physically possible to write such an algorithm, although we don't know for sure whether it's physically possible or not. Okay. For our purposes, we're going to put ourselves into the minds of the people who live in the Sneakers universe and say that it is possible to write an RSA-cracking algorithm. All right. So imagine that you are a programmer and you have just written this RSA encryption crack algorithm. Okay. But when you did so, you used a library of functions for dealing with prime numbers, so it's a language of subroutines, just like the case of the walking robot.

[00:31:18]  Blue: Plus you had knowledge that you used to come up with things. You had the knowledge that's in the computer itself. You have the operating system, existing subroutines. So the situation is identical. Okay. There are all the same white swans available, but in this case, what you've outputted is a crack to RSA encryption. Okay. Would it make sense to claim that you didn't create knowledge when writing this RSA crack just because the library that you used had reach and contained knowledge? Okay. Would listing out all the knowledge that did not come from you in any way work as evidence that you didn't create knowledge when you created this crack to RSA encryption? Of course not. That would be a completely absurd set of arguments. So the discovery of something totally novel, cracking RSA encryption, could be easily declared not knowledge by using the same white swan argument that defenders of Deutsch's theory of knowledge use here, which is why I don't think this is a good argument. So in this case, because a human invented this imagined crack to RSA encryption, even though there may be every bit as much preexisting knowledge that the human didn't himself create, without a doubt we would all now claim, yes, knowledge was created. Now let's say, and this is, I admit, absurd, but let's say that an evolutionary algorithm happened to, by chance, work out the same crack. Okay. Yes, this is highly implausible. But in this universe where we're imagining that such a crack is possible, it's not completely impossible. It's just highly implausible. Okay.

[00:33:05]  Blue: So the defenders of Deutsch’s theory would immediately now claim all the knowledge came from humans and that the evolutionary algorithm did not actually create any knowledge, even if it’s the same algorithm. Okay. So I hope people can see why the argument that is being offered here just really seems off to me. In fact, it is rather explicitly an inductive argument that counts white swans, and you cannot refute anything by counting white swans. So Deutsch’s point is that the programmer is in no position to determine how much he or the genetic programming algorithm created the knowledge. Deutsch claims it’s impossible to test for knowledge for the same reason you can’t conclusively show knowledge creation via the Turing test. Okay. Now, just as a side note, in episode 50 of this podcast, I covered Deutsch’s treatment of the Turing test, and I talked about a lot of things I completely agree with him on, and I also offered some criticisms of his treatment of the Turing test. So if you’ve got some curiosity about his treatment of the Turing test, I would recommend episode 50. Okay. But here is what he says, page 161: the Turing test idea makes us think that, if it is given enough standard reply templates, an Eliza program, Eliza being a chatbot, will automatically be creating knowledge. Now, let me just say that nobody thinks this; that is just wrong. Okay. Nobody believes the Eliza program or any chatbot is creating knowledge just because it’s been given enough standard reply templates. Okay. But let’s continue. And then he goes on to say: artificial evolution makes us think that if we have variation and selection, then evolution of adaptations will automatically happen. But neither is necessarily so.

[00:34:55]  Blue: In both cases, another possibility is that no knowledge at all will be created during the running of the program, only during its development by the programmer. Now, this seems like a bad analogy to me. Eliza does no variation and selection at all, nor does it solve any problem or search for solutions to problems. In fact, Eliza chatbots, and in fact all chatbots of that kind, are meant to be tricks. And that is all they are meant to be. By comparison, the evolutionary algorithm is meant to solve a problem. And it does solve a problem, namely programming the robot to walk. Machine learning famously solves problems no human knows how to solve. Let me give some examples here. Back in the past, there were people who tried to write programs to recognize faces, and they would try to hand-craft algorithms to be able to recognize faces. Now, if you really stop and think about that for a second, suppose that you need to write an algorithm to recognize a face. How would you go about doing it? There is no obviously good way to do it. Do you try to have it look for something kind of circular that’s at a certain place in the picture and then assume those are eyes? I mean, there’s just no really good way to write a face recognition algorithm, OK, using a human’s direct creativity. But there was this whole group of researchers that spent their careers trying to come up with ways, and they did come up with things that kind of worked, but they really, really sucked. OK, and that’s why a lot of that research never went anywhere.

[00:36:40]  Blue: I remember a guy who, just when deep learning was starting to become all the rage back in the early 2000s, went to a group of these researchers at a conference and said to them, switch your research interests now or you’re going to be out of a job. Machine learning will be able to write face recognition algorithms that drastically outperform everything you guys have been doing. And no one is ever going to use your research again, because we’re just going to use machine learning to do it. And that’s exactly what happened. Today, when you go on Facebook and it quickly says, oh, that’s a face, and puts a little box around it in a picture, that’s all machine learning. There are no algorithms for recognizing faces that humans created directly using their own creativity. We still to this day have no idea how to write such an algorithm. But we know how to write a machine learning algorithm that can write such an algorithm. And that’s how we do it today. So Tom Mitchell, who wrote what, at least a generation ago, was the most famous introductory book on machine learning, says on page one: for problems such as speech recognition, algorithms based on machine learning outperform all other approaches that have been attempted to date. So that’s the same thing. OK, there are certain domains of problems that machine learning is able to solve really well that human creativity doesn’t know how to solve, unless you assign credit by saying, well, human creativity created the machine learning algorithm. OK, but in terms of directly writing such an algorithm, we don’t have a clue how to solve some of these problems. OK.

[00:38:18]  Blue: And we don’t even understand how the machine learning algorithm does solve these problems. We’re still researching how it does it. We’ve got some ideas at this point. We’re starting to develop theories around that. But we have these algorithms coming up with algorithms that solve these problems, and we just didn’t have a clue how they were going about it at first. OK. Now, as someone who is not a computer programmer or anything close to it, you might ask: how can you write an algorithm that comes up with an algorithm and then have no idea how it did that? Is there an easy way to explain that? Yes. OK, so let’s use genetic programming as an example. Now, genetic programming is not typically how you would solve a problem like recognizing faces, and it’s probably not the best learning algorithm for that, but let’s use it as an example. Let’s say that we want to recognize faces. So we’re going to take a set of programming instructions: if-then statements, loops, add one, exactly like any other programming language. OK. And you just create random instructions that basically do nothing meaningful at all. Then you run them through a genetic programming algorithm that creates a whole population of them. It combines them together through mating. It does crossbreeding. It does mutation. And at the end, you end up with an algorithm that, let’s say, does kind of OK at recognizing faces. OK. Why would you know what it’s doing? It’s just a bunch of instructions.
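The evolutionary loop being described can be sketched in a few lines. This is a made-up toy, not the actual walking-robot experiment: the "genomes" are bit strings rather than program instructions, and the fitness function (count the 1-bits, the classic "OneMax" exercise) stands in for "how well the robot walks":

```python
import random

random.seed(0)  # deterministic, purely for illustration

GENOME_LEN = 32   # each "program" is a string of 32 bits
POP_SIZE = 40
GENERATIONS = 60

def fitness(genome):
    # Toy objective: count the 1-bits. In the robot example this would
    # be something like "how far the robot walks without falling over".
    return sum(genome)

def random_genome():
    # Start from random instructions that do nothing meaningful.
    return [random.randint(0, 1) for _ in range(GENOME_LEN)]

def crossover(a, b):
    # "Mating": splice two parents together at a random point.
    point = random.randrange(1, GENOME_LEN)
    return a[:point] + b[point:]

def mutate(genome, rate=0.02):
    # Randomly flip a few bits.
    return [1 - bit if random.random() < rate else bit for bit in genome]

population = [random_genome() for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    # Selection: the fitter half survive and become parents.
    population.sort(key=fitness, reverse=True)
    parents = population[: POP_SIZE // 2]
    children = [
        mutate(crossover(random.choice(parents), random.choice(parents)))
        for _ in range(POP_SIZE - len(parents))
    ]
    population = parents + children

best = max(population, key=fitness)
print("best fitness:", fitness(best), "out of", GENOME_LEN)
```

Nothing in the loop "understands" the objective; variation and selection alone drive the population toward a solution, which is why, for a realistic problem, the winning genome can be opaque to its own programmer.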

[00:40:00]  Blue: You would have to spend considerable time to sit down and figure out what those instructions are doing. OK. It would be hard to figure out what those instructions are doing, or what it is that they’ve stumbled across or found through their variation and selection that works so well at recognizing faces. OK. Now, you might be able to, and in fact that’s probably the next step in the process. So now let’s switch to machine learning, let’s say neural nets, which is a more realistic way we would go about this. And what they found is they started to look at the hidden layers in the neural nets, and they would find that, let’s say, one layer was looking for a certain sort of pattern, and that pattern is, to a human, easily identifiable as an eye. So what it’s doing is searching the image for a kind of generic version of an eye, and anything that is statistically similar to it. OK. Well, then you start to realize, oh, that’s what it’s doing. It’s actually come up with a way to search for a generic version of an eye. OK. But right off the bat, you wouldn’t know that, right? It would suddenly start working and you would have no idea what it’s doing. All you’re doing is feeding in a bunch of ones and zeros that happen to correspond to an image, and it’s somehow outputting a category: yes, this is a face right here. You wouldn’t, right off the bat, know what it’s doing. And it would be a lot of effort to figure out what it’s doing.
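What such a learned "eye detector" hidden unit effectively does can be sketched as template matching: slide a small pattern over the image and score each patch. This is a made-up illustration only; a real neural net learns its templates from data instead of having them written in, and the "eye" and "image" below are invented:

```python
# A 3x3 "generic eye" template (1 = dark pixel, 0 = background).
EYE = [
    [0, 1, 0],
    [1, 1, 1],
    [0, 1, 0],
]

# A tiny made-up binary image containing one eye-like blob.
IMAGE = [
    [0, 0, 0, 0, 0, 0],
    [0, 0, 1, 0, 0, 0],
    [0, 1, 1, 1, 0, 0],
    [0, 0, 1, 0, 0, 0],
    [0, 0, 0, 0, 0, 0],
]

def match_score(img, r, c):
    # Count agreeing pixels between the template and the 3x3 patch at (r, c).
    return sum(EYE[i][j] == img[r + i][c + j] for i in range(3) for j in range(3))

# Slide the template over every position and keep the best-scoring patch.
best = max(
    ((r, c) for r in range(len(IMAGE) - 2) for c in range(len(IMAGE[0]) - 2)),
    key=lambda rc: match_score(IMAGE, *rc),
)
print("best match at", best)  # (1, 1): the blob at rows 1-3, cols 1-3
```

The point of the sketch is that once you see the learned pattern spelled out like this, the mystery dissolves; but nothing about the trained network's raw numbers tells you up front that "search for this eye-shaped template" is what it stumbled onto.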

[00:41:29]  Blue: So that’s why machine learning tends to be an engineering field. They try stuff, it works, and then afterwards they spend decades figuring out what it is the machine learning algorithm actually did.

[00:41:43]  Red: OK. I think I’m kind of getting my mind around it a little bit.

[00:41:46]  Unknown: Yeah.

[00:41:47]  Blue: OK.

[00:41:47]  Red: So it’s a matter of, it’s basically just become so complicated so quickly that you’d have to spend the rest of your life trying to figure out what the machine learning algorithms are doing.

[00:42:00]  Red: And then there’s probably better machine learning algorithms by that point. That’s correct. And then you don’t know what those are doing.

[00:42:06]  Blue: OK.

[00:42:10]  Blue: So let me make my point here clear, though, getting back to the genetic programming algorithm that creates the algorithm that allows the robot to walk. Before the genetic programming algorithm was run, the robot could not walk. After it was run, it could. Something, whether we want to call it knowledge or not, I don’t care, appeared due to the genetic programming algorithm. And that something is adapted information that makes the robot walk. OK. Now, I do not insist on calling this adapted information knowledge, but I do insist that it exists, because it does and we can easily see that it does. All right. If it is not knowledge, then it needs a name of its own, because this not-knowledge is something very interesting and important in its own right. OK. Deutsch goes on to use the example of how evolutionary algorithms stop improving past a certain point. Now, of course, this is the case because our current evolutionary algorithms have not yet solved the problem of open-endedness. Because they haven’t solved the problem of open-endedness, all of them stop at some point once they’ve solved whatever problem we’ve given them, and they don’t go on to invent new and innovative things. OK. But Deutsch sees this as evidence that the programmer created all the knowledge, though he admits that this is a less than definitive argument, because regular evolution also hits optima. Now, from a critical rationalist standpoint, I’m honestly not sure what to do with this argument. It does not constitute in any way a refutation, but it does constitute a good narrative or story. Now, human beings primarily reason via narratives. This is something we’re going to come back to in a future podcast.

[00:43:58]  Blue: And what we do is we have this narrative that in our minds is an explanation, and we sort of feel whether the narrative is a good argument or a good explanation. And then we make up our minds based on our vibes or feelings about that narrative. OK. Now, Popper explained why this is the wrong way to reason. The correct way we should be reasoning here is we should say, look, we have two competing theories here. The first is that the evolutionary algorithm created knowledge. And the second is that the evolutionary algorithm did not create knowledge. Those are the two competing theories. Write them down. Be explicit. All right. Being explicit is something I’m also going to emphasize in future podcasts. That the algorithm stopped improving past a certain point in no way helps us choose between these two theories by potentially refuting one of them. OK. That’s why this argument is a non-Popperian argument. It’s just a narrative. It is not a refutation. OK. And that’s why it’s not a good argument. Plus, if Deutsch is right that the evolutionary algorithm creates no knowledge, then how does he explain why it improved the robot’s ability to walk in the first place? Isn’t this what we call an explanation gap? OK. If no knowledge is created, then why does it walk? OK. Again, something must have been created that allowed it to walk. And if it’s not knowledge, what is it? All right. Note again how this drastically differs from the Eliza example he’s using, where no equivalent explanation gap exists. It seems to me, and this is my opinion, I’m not going to insist on it, though.

[00:45:34]  Blue: But it seems to me that Deutsch, in this argument, is confusing two things: the problem of open-endedness and a theory about what knowledge physically is. He’s trying to make those the same thing, and they’re not. OK. That’s my opinion. OK. But there’s a bigger problem here, and it is that knowledge begets knowledge. Now, that idea has been formalized in Lee Cronin’s assembly theory. However, before Lee Cronin tried to formalize it in assembly theory, we already knew that knowledge is not created in a vacuum. We create knowledge out of existing knowledge. Once we have new knowledge, it then becomes a tool for creating future knowledge. Now, Campbell and Popper formalized this idea in their theory of evolutionary epistemology. And we’ve talked about that in past podcasts, and maybe we need to revisit that and get more specific about what it even means. But evolutionary epistemology, you’ve probably heard that term before. What you may not know, and I didn’t even know back when I did podcasts on it, is that it’s a term coined by Donald Campbell himself as the name of his theory of knowledge. OK. And knowledge under this theory is created in a hierarchy of variation and selection algorithms. Now, we know that’s the case, right? So the kind of classic example, the one that Popper himself uses, is that of animal learning. So evolution, biological evolution, creates animal minds, and animal minds include learning algorithms, OK, which in turn use variation and selection to create knowledge within the lifetime of the animal. And the animal may in this case be a human, OK. So it does not rely only on the knowledge in its genes, all right.

[00:47:26]  Blue: I know that’s another thing where David Deutsch has literally said all the animal’s knowledge is in its genes. And according to Campbell and Popper’s theory of evolutionary epistemology, that is simply not true, OK. So if you want to deny some piece of knowledge exists, and you’re comfortable using the white swan argument that, well, this other place in the hierarchy created all the real knowledge, then you will always be able to do so, because there’s always something else in the hierarchy. There’s always some other variation and selection algorithm that created the knowledge that created the next place in the hierarchy, OK. Thus, this argument is an all-purpose argument. It can never, ever, ever be refuted or disproven, all right. Any time, you can say, oh, this genetic programming algorithm did not create the knowledge, because the human created the knowledge, and we all know the human created a bunch of knowledge. If you’re comfortable with such an argument, it can never be refuted or disproven. And it is thus, by definition, a bad explanation. I call this all-purpose argument the pseudo-Deutsch theory of knowledge. See episodes 25 and 26, where I cover that in detail. So let me give you an example. Let’s say the genetic programming algorithm was creating knowledge. This is just a hypothetical example. We’re going to start with that as our starting assumption, because I want to make a point, OK. Wouldn’t Deutsch’s argument still sound good even though the argument is wrong? It seems to me that it would. You could still point and say, no, no, no. The knowledge was created by the human.

[00:49:07]  Blue: And you could still make that argument, and it would sound every bit as good even if it was wrong. OK. Even if the genetic program did create knowledge, you could still list hundreds of places where the human created knowledge that it depends on. And it would still have the same vibe, that no knowledge was created, or that no important knowledge or no real knowledge was created. This is true even if the genetic programming algorithm did create knowledge. Thus this argument can’t be refuted. Now, is there a term for an argument that sounds good even when it’s wrong? Yes, that’s called a rhetorical argument. When I try to explain this problem and point out how the pseudo-Deutsch theory of knowledge seems to be a bad explanation because it can’t be refuted, I have been told many, many, many times, no, it’s just a philosophical or metaphysical, probably the same thing here, theory. Now, what is the difference between a metaphysical or philosophical explanation and a bad explanation? Actually, Peter, I want to ask you that for a second. You know Deutsch’s theories quite well. You at least understand what I’m talking about when I talk about a metaphysical explanation or a philosophical explanation versus a bad explanation. Can you explain to me the difference between a good, non-testable philosophical explanation and a bad explanation?

[00:50:25]  Red: So the premise, I guess, just to try to think this through, is that the metaphysical or philosophical explanations are sort of existing on a different plane than the empirical or scientific theories. But that doesn’t really make a heck of a lot of sense, I think, when you really consider it, because of course we want to move even our metaphysical theories to become more empirical, as we’ve discussed on the podcast many times. Let me ask you a question.

[00:51:06]  Blue: You’ve heard me say that and you apparently agree with me. What if I told you that almost nobody agrees with me?

[00:51:14]  Red: And so the part that almost no one agrees with you on, just to be clear, is that they would say that the desire to make theories more empirical just doesn’t play out with metaphysical theories at all?

[00:51:35]  Blue: I think that’s what they would say, yeah.

[00:51:36]  Red: Is that what they would say?

[00:51:37]  Blue: I’ve been told that, yes. I’ve actually had online critical rationalists, when I tried to say, look, the goal is that we actually have a preference for empirical theories, I’ve had numerous ones simultaneously jump on me saying, that is totally not true. And then they would, and they always put words in my mouth, accuse me of saying: you’re saying every single metaphysical theory is inferior to every empirical theory, which means that Popper’s own metaphysical epistemology is inferior to every empirical theory.

[00:52:11]  Blue: It’s like, look, I’ve never said that, guys. You’re totally putting words in my mouth. But I have literally had a whole gang of online Popperians arguing with me simultaneously that that is just not true, and that metaphysical theories can be as good as or better than empirical theories, and that there’s no need to move from metaphysical to empirical, no way to prefer empirical over metaphysical. I have had just giant gangs of online critical rationalists tell me that.

[00:52:43]  Red: Well, it seems to me that, I mean, if you have a theory that’s just truly non-empirical, and you have no hope of even making it empirical, it’s kind of pointless to even talk about. I mean, isn’t that the whole point? Like, if you have a moral theory about the best way to live, and you’re discussing it, you want it to be sort of logically coherent. You want to look at, you know, people who have adopted this moral theory, and you would just look for a whole web of explanations that would demonstrate that this theory makes sense. And I mean, isn’t that kind of moving it? You know, it’s not like you would look at sociological studies and think, oh, well, this study says this. I mean, that would be really, I think, a poorer way to think of it. But you might, through conversations, get at deeper and deeper explanations that would indicate that this moral theory is a pretty good theory. That was kind of rambling. I might cut out some of that. But no,

[00:54:00]  Blue: I think that’s pretty good. And actually, I agree with you. Now, I don’t know if everybody would, but I definitely agree with you. In fact, let me use an example here. I’m going to intentionally pick an offensive example, because I like offensive examples. However, I want to be clear that I’m not actually making this claim for myself, even slightly.

[00:54:18]  Red: Okay.

[00:54:19]  Blue: Let’s, let’s take an idea like TCS, taking children seriously. Okay. Which is an idea that associates itself quite strongly with critical rationalism.

[00:54:28]  Red: Okay. Sorry.

[00:54:29]  Blue: Let’s say that we did studies, sociological studies, maybe even of the kind you just said you weren’t sure you believed in. And we found that children raised under that philosophy had much, much higher crime rates than children raised under traditional parenting. Or we found that they had much lower crime rates than children raised under traditional parenting. Or let’s say that they made less money, or they made more money. You know, we can imagine a series of studies that would come out. Or how they do at math,

[00:55:08]  Red: I think that’s something that’s been actually alleged, that children who grew up in very non-traditional educational environments are worse at math.

[00:55:18]  Blue: I’ve never heard that. But okay, that would be another example. Okay. So it’s not too hard to see that you could take something like TCS, and that’s why I say I’m not advocating anything, I’m just using it as an example, okay. But TCS is testable. Even if it’s not easily testable, it is testable. Okay. So it is in fact an empirical theory, whether or not the people who believe in it admit it’s an empirical theory. Okay. So yes, I would absolutely be interested in such studies. I wouldn’t consider them definitive, maybe, right off the bat. Okay. But I would absolutely let those studies influence me as to whether I should or shouldn’t adopt such a theory,

[00:56:06]  Red: whether

[00:56:06]  Blue: I should try to error, correct or modify such a theory. Okay. So I’m actually with you on this. Okay. But let’s get back to my question though. So clearly there are metaphysical theories. Now, nobody denies that theories that can’t be empirically tested. Poppers own epistemology. He considers to be a metaphysical theory. Okay. A philosophical theory. All right. But I probably would say, if you were to ask me, is Karl Popper’s epistemology a good explanation? I’d probably say, yes, it’s a good explanation. So then someone might say, well, see, you can have non -empirical theories that are good explanations. All right. I think

[00:56:45]  Red: that, you know, going back to the TCS thing, I think that’s one of the, you know, it’s not about sociological studies, obviously, but one of the things that sort of appeals to me about it is that it does have a certain logical coherence, the whole idea that you should not coerce your children and that you should raise them in a way that philosophically makes sense based on fallibilism and all that. And so, you know, I’m not, as we’ve discussed before, a true believer necessarily. I’m more of a strong sympathizer. But I guess the coherence of it is really what appeals to me. And, you know, I could rattle off about ten reasons off the top of my head that might not be particularly empirical but would add some support to raising children that way. Some of them might be anecdotal or whatever. But I mean, broadly defined, they’re explanations of a sort. OK.

[00:58:04]  Blue: So let me restate my question again, because I still really feel like it’s a very important question. I’m going to ask you to try to answer it, and it’s OK for you to say, I don’t know how to answer it. OK, fair enough. And I’m not going to answer it in this podcast, by the way. I’m just raising it as a question for people to think about. OK,

[00:58:22]  Red: I’m ready.

[00:58:23]  Blue: What is the difference between a good, non-testable philosophical explanation and a bad explanation?

[00:58:33]  Red: It’s an intriguing question, I’ll say that. I mean, a non-testable explanation and a philosophical theory.

[00:58:45]  Blue: So a non-testable theory, a philosophical theory. Yeah.

[00:58:49]  Red: We’re using a bad explanation

[00:58:51]  Blue: versus a bad explanation. OK, so let me just say this. I often get the feeling amongst online critical rationalists that bad explanations are the things the other guy has. Sure,

[00:59:06]  Red: humans are what we are. And

[00:59:08]  Blue: and philosophical explanations are the ones that you personally prefer. And I think, more often than not, that is the difference between the two. Now, obviously, that’s a really bad way to draw the difference, and we really need to solve that as a problem. I’m going to raise it as a problem. I will address it in a future podcast, but not today and not any time soon. But I want people to really think about that. What is the difference between a bad explanation and a good non-testable philosophical slash metaphysical explanation? And the answer shouldn’t be that the other guy has a bad explanation and I have a philosophical explanation. There should be an objective criterion separating the two that we can point to, so that it’s not just a matter of opinion.

[00:59:55]  Red: OK,

[00:59:56]  Blue: I’m going to leave it as an open question for now. Well,

[00:59:59]  Red: it’s a good question. I’ll say that. OK.

[01:00:02]  Blue: Now, Deutsch also offers a test to check whether the genetic programming algorithm is actually creating knowledge or not. OK. So on page 161, he says: to test this proposition, I would like to see an experiment of a slightly different kind. Eliminate the graduate student from the project. Instead of using a robot designed to evolve better ways of walking, use a robot that is already in use in some real-life application and happens to be capable of walking. Then, instead of creating a special language of subroutines in which to express conjectures about how to walk, just replace its existing program, in its existing microprocessors, by random numbers. For mutations, use errors of the type that happen anyway in such processors. If the robot ever walks better than it did originally, then I am mistaken. If it continues to improve after that, then I am very much mistaken. Now, I find this to be a bad test, and I’m going to explain why. Let me clarify that a little bit. If this test were to pass, I would actually consider it a decent test that something open-ended had taken place. So I guess I sort of agree with Deutsch about this test, if you take it to be a test of open-endedness rather than a test of knowledge. But as a test for knowledge, I consider it to be a bad test, and let me explain why. What is the human equivalent to a microprocessor? Isn’t it the neural hardware, i.e., the physical brain? Let’s say we take a human and scramble his brain with random numbers, random connections in this case. Would he pass this test? Well, we know the answer is no.

[01:01:43]  Blue: We have natural experiments where this takes place. One of them is Alzheimer’s disease. This is a natural version of Deutsch’s test. And the person loses their ability to learn, to think. They lose their personality. I mean, they’re not going to learn some new, innovative way to walk. So I don’t think this is really meaningful, because no human could pass the human equivalent either. Remember that knowledge begets knowledge and is created in a hierarchy of variation and selection, at least according to the Popper and Campbell theory of knowledge. So it doesn’t make sense to eliminate the graduate student per se. Could you test whether humans create knowledge by, say, eliminating knowledge created by biological evolution from the experiment? Okay, so what would be the equivalent of that? Let’s say scrambling all the default knowledge that biological evolution puts into the brain at birth. Well, obviously, that would never work. A person would never even emerge in the first place, because you need that starting knowledge. So this is why I don’t really think this test is a valid test, at least not for the thing that Deutsch is trying to use it for. It might be a valid test for open-endedness if we could just switch it around a bit, but I don’t think it’s in any way going to tell us whether knowledge was or wasn’t created. What we really need is a way to test whether adaptive information was created and whether that adaptive information has causal power or not. Now, of course, what Deutsch is trying to say here, I think, is: if this algorithm can open-endedly create knowledge it wasn’t originally meant to create, with no human intervention, then I will allow it to be called knowledge.

[01:03:30]  Blue: Okay, in other words, I think he actually is accidentally testing open-endedness and trying to attach the word knowledge to the problem of open-endedness. His real concern seems to be that evolutionary algorithms, as understood today, always have a specific problem, always very narrowly defined, and never deviate in the slightest from it. The test really tests not whether the result is knowledge, but whether the learning algorithm is open-ended or not. However, this isn’t how he defined knowledge in his theory. He did not define knowledge as that which was created via an open-ended search algorithm. He defined it as adapted information that has causal power to cause itself to remain so. Because of that, I’m unclear why we even need a test. We know for a fact that the algorithm created by the genetic programming algorithm is adapted information. We know for a fact that the new algorithm is at least the proximate cause of, or at least plays a very critical role in, the robot’s ability to walk. We know for a fact the algorithm still exists while its variants all died, because it was the most useful one. So we know it fits Deutsch’s physical description of knowledge as adapted information with causal power that causes itself to remain so. We would not need a Turing test to see if it counts as knowledge as per Deutsch’s own definition. It is completely unnecessary to have a test at all, because we already know it is knowledge according to Deutsch’s own definition as put forward in The Beginning of Infinity. Again, if the walking robot algorithm created by the genetic programming algorithm isn’t knowledge, then what is it? Whatever it is, it matches Deutsch’s criteria for knowledge.

[01:05:18]  Blue: It is adapted information that causes itself to remain so, has causal power, and converges across the many worlds. And it is at least an important part of why the robot can walk. Surely, whatever this special kind of non-knowledge is, it is something of import. It is something interesting. It is something that deserves to be studied and understood. And it must have some sort of relationship to what David Deutsch is trying to call knowledge. Because either it is knowledge, if we accept Deutsch’s current criteria and theory as put forward in The Beginning of Infinity, since it’s exactly equivalent to what he’s calling knowledge absent the two sources hypothesis (so if we assume the two sources hypothesis is just wrong, it now simply counts as knowledge), or it’s something similar to knowledge, so similar that Deutsch’s definition of knowledge accidentally includes it, so similar that defenders of Deutsch’s theory have to rely on vague gut feelings about what should or shouldn’t count as knowledge to try to exclude it. So, in summary, Deutsch defines knowledge as adapted information that causes itself to remain so, but he insists that an algorithm created by a genetic programming algorithm, even though it is also an example of adapted information that causes itself to remain so, should not count as knowledge. He never fully explains why. Instead, he relies on a series of arguments that, at least as far as I can tell, do not count as refutations of the competing theory. So naturally, I feel like the problem I’ve raised has yet to be resolved.

[01:07:01]  Blue: Okay, now a completely fair response to me, and you've already made it, maybe without realizing you did, is this: is it possible that Deutsch's theory of knowledge, as put forward in the beginning of infinity, gave a necessary set of criteria, but not a sufficient set of criteria? In other words, is it possible that because at that point he was still working on the constructor theory of knowledge, which is his later theory of knowledge, there wasn't yet enough in the theory? And you already raised one such possible criterion, which is: how long does it need to cause itself to remain so? So this is where we're going to go next, but it's going to be in the next podcast, okay? We're going to take a look at Deutsch's later developments in the constructor theory of knowledge, and we're going to see if that theory allows us to eliminate the walking robot from counting as knowledge. So this is where I'm going to leave off for this episode.

[01:08:09]  Red: Well, Bruce, you're asking some extremely thought-provoking questions, and I look forward to your thought-provoking answers.

[01:08:18]  Blue: All right.

[01:08:18]  Red: Thank you for this.

[01:08:20]  Blue: Thanks.

[01:08:21]  Red: Bye.

[01:08:22]  Blue: Bye-bye. Please give us a five-star rating on Apple Podcasts. This can usually be done right inside your podcast player, or you can Google "the theory of anything podcast Apple" or something like that. Some players have their own rating system, and giving us a five-star rating on any rating system would be helpful. If you enjoy a particular episode, please consider tweeting about us or linking to us on Facebook or other social media to help get the word out. If you are interested in financially supporting the podcast, we have two ways to do that. The first is via our podcast host site, Anchor. Just go to anchor.fm/fourstrands, that's f-o-u-r-s-t-r-a-n-d-s. There's a support button available that allows you to make recurring donations. If you want to make a one-time donation, go to our blog, fourstrands.org. There is a donation button there that uses PayPal. Thank you.

