Episode 76: The Constructor Theory of Knowledge
- Links to this episode: Spotify / Apple Podcasts
- This transcript was generated with AI using PodcastTranscriptor.
- Unofficial AI-generated transcripts. These may contain mistakes. Please check against the actual podcast.
- Speakers are denoted as color names.
Transcript
[00:00:09] Blue: Welcome back to The Theory of Anything podcast. Hey, Peter. Hello, Bruce. How you doing? Good? Hey, okay. Last time we talked about Deutsch's theory of knowledge as contained within the book The Beginning of Infinity. Today, we are going to talk about the constructor theory of knowledge, which is a more developed version of Deutsch's theory of knowledge. And it's not just Deutsch anymore; he's got other people working on it. Chiara Marletto in particular talks about it quite a bit in her book, The Science of Can and Can't. But for people who didn't hear the previous episode, let me recap quickly what we talked about. In the last episode, we talked about Deutsch's definition, or theory (definitions and theories can be the same thing, I should say, according to Popper). His definition of knowledge was adapted information that causes itself to remain so. We talked about how Deutsch doubts that artificial evolution creates knowledge, which, if we take his definition seriously, means he doubts artificial evolution creates adapted information that causes itself to remain so. He uses the example of a genetic programming algorithm that teaches a robot to walk and claims all the knowledge came from the human. But the problem is that this example is an example of an objective creation of adapted information, the walking robot algorithm, that causes itself to remain so: all its competitors are gone now because this was the most useful version, and possibly it then even gets copied out into hundreds or thousands or hundreds of thousands of robots because it was this useful algorithm that allows the robot to walk. So it objectively fits Deutsch's definition of knowledge, at least as contained within The Beginning of Infinity.
[00:01:52] Blue: Could the problem here be that The Beginning of Infinity was a less than full account of Deutsch's theory of knowledge? Could his constructor theory of knowledge eliminate the walking robot example from being knowledge, and could it do so in a consistent way? That's what we're going to explore in this episode today.
[00:02:09] Red: Well, I'm excited for that. I do have to admit, as much time as I've spent sort of delving into Deutsch's ideas, I find constructor theory kind of the scariest, I guess. To be 100% honest, I'm a few aha moments short of it really making much sense to me, but I'm going to try to hang on here.
[00:02:37] Blue: Okay. All right, now let me give a little history on this one, because I think this is interesting. I once made a comment to the effect, on your Facebook page, that we don't have a good theory of knowledge comparable to, say, information theory, where we can mathematically and precisely represent what knowledge is, like we can with information. Actually, maybe I made it in the podcast and it was on your Facebook page where Hervé took exception to it. But Hervé took issue with me saying this, and he cited Deutsch's constructor theory of knowledge. Okay. But as I explained in the previous episode, I'm not of the opinion that Deutsch's theory of knowledge is yet a good explanation, or at least not as complete an explanation as I would like, or at least not as good as information theory is. Though I do think it shows promise if we can correct some problems and existing issues with it. So I don't necessarily dislike or disagree with the theory entirely. In fact, I really feel like it's kind of a step forward in some ways in thinking about knowledge, but I also think it contains some really problematic areas that I've never seen fully addressed, such as those discussed in the previous podcast. So Hervé then attempted to summarize Deutsch's theory of knowledge. He wanted to put together a quick summary of it, which I've never seen anybody do, and which I thought was really useful. His first attempt didn't quite match what I thought Deutsch is actually saying in the constructor theory of knowledge. So I offered some criticisms, and he took those criticisms to heart and rewrote the summary.
[00:04:18] Blue: And between the two of us, we came up with a fairly short summary of Deutsch's constructor theory of knowledge. And at that point I published it on your Facebook page, with Hervé's permission. I thought it was a really good summary of Deutsch's constructor theory of knowledge that put it together compactly enough that I could actually interact with it well. Okay, by doing this, we presented it to the fans of David Deutsch for feedback. And in the comments, it looked to me like there was very little they were asking to change. Some thought it was just spot on. A few took some really mild issues with it, nothing of any real substance that I could see. So I came away feeling like that summary that Hervé wrote is a pretty good summary of Deutsch's constructor theory of knowledge, and I'm going to use it here as equivalent to Deutsch's constructor theory of knowledge. All right. So Hervé first defines information as having four main properties. One, it always needs a medium. Two, it presents with a multiplicity of choices. Three, it presents with a switching process. Four, it allows itself to be copied, including to a different medium. Now, I don't think anybody doubts that knowledge is information. So I'm going to quickly move past that part of his definition and concentrate on the part that I think is more important. Knowledge, he says, is information that has three main properties. Number one, it is capable of enabling its own preservation. Number two, it can be copied from one embodiment to another without changing its properties. And number three, it can enable transformations and retain the ability to cause them again. He then adds this.
[00:06:00] Blue: There are only two kinds of knowledge, and they exist at two levels of abstraction. Level one: knowledge is directly instantiated in abstract catalysts, such as useful genes encoded in nucleic acids, and is created through genetic variation and natural selection. The neo-Darwinian theory of evolution is the best explanation we have so far. Examples: viruses, bacteria, oak trees, human beings. Level two: knowledge is first instantiated in human minds via a creative process fueled by conjecture and criticism, that is, human intelligence. Once created, it's embodied as an abstract catalyst, which can take many forms. We don't have any working theory of human intelligence so far. Examples: agriculture, democracy, language, the Mona Lisa, and quantum theory. This last part, these two levels, is Hervé's equivalent to what I've been calling the two sources hypothesis. So just as a reminder, if I say the two sources hypothesis, I'm referring to the part of David Deutsch's theory of knowledge that says there are only two sources of knowledge: biological evolution and human minds. So notice that part of his summary included the two sources hypothesis. Every summary you will ever see of David Deutsch's theories of knowledge will always include some form of the two sources hypothesis.
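One way to make Hervé's three criteria concrete is to treat them as boolean checks on a candidate piece of information. This is purely an illustrative sketch; the class and field names are my own shorthand, not part of Hervé's summary:

```python
from dataclasses import dataclass

# Hervé's three criteria for knowledge, rendered as boolean fields on a
# candidate piece of information. Names are illustrative shorthand only.
@dataclass
class Candidate:
    name: str
    enables_own_preservation: bool            # criterion 1
    copyable_across_embodiments: bool         # criterion 2
    enables_repeatable_transformations: bool  # criterion 3

def is_knowledge(c: Candidate) -> bool:
    # Knowledge, under this reading, is information meeting all three criteria.
    return (c.enables_own_preservation
            and c.copyable_across_embodiments
            and c.enables_repeatable_transformations)

# The walking robot algorithm, scored the way the host reads the criteria:
walking_robot = Candidate("walking robot algorithm", True, True, True)
```

The question the rest of the episode asks is whether checks like these, taken literally, really exclude everything outside the two sources.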
[00:07:26] Red: Can I ask you a quick question about constructor theory? Sure. I think part of my confusion is maybe I'm a little bit confused about what it's trying to do. Is it just a theory? I mean, you've spoken about it as a theory of knowledge, which did make some sense to me.
[00:07:47] Blue: The constructor theory of knowledge is a subset of constructor theory. Oh, okay. It's not the whole of constructor theory.
[00:07:54] Red: Because it's oftentimes spoken about as something that almost wants to totally turn science upside down. I almost imagine, like, do you rewrite the whole physics textbook in constructor theory language?
[00:08:10] Blue: I need to do a podcast on constructor theory. And I have asked that question, is that what they're trying to do, to every person I can think of who knows something about constructor theory. And I get the vaguest answers. This is a question that deserves the same treatment as we're giving to the constructor theory of knowledge, because it seems to me that nobody knows the answer to it. Actually, Deutsch just answered it very compellingly in his most recent interview. I don't have it handy, so I can't tell you exactly what it was.
[00:08:44] Red: Yeah, with Sean Carroll.
[00:08:45] Blue: Yes, with Sean Carroll. He basically says, no, we will not be rewriting the textbook of physics into constructor theory. And it was the first time I had ever heard a clear answer to that question. When I heard that, I was like, oh, that's been exactly my question for a long time. But okay, even what you said right there is very helpful: there's constructor theory, and then a subset of that is the constructor theory of knowledge. That's helpful. Okay. So is this theory a good theory of knowledge, comparable to information theory? Well, clearly many people, including Hervé, who wrote this summary, see it as such. But I do not believe it is, and that's what we're going to talk about in this episode today. I'm going to take you through the theory and help you understand why I don't think it's equivalent to information theory in terms of being a good explanation. That doesn't mean I think it's a terrible explanation or a bad explanation. I think it's on the right track in many ways, and I think very highly of it. But that was why I had said we don't have a good explanation of knowledge equivalent to information theory. I'm going to back up and claim that I was right when I said that. But that doesn't mean we're throwing the baby out with the bathwater. You with me so far?
[00:10:11] Red: Oh, yes, I am.
[00:10:12] Blue: Okay. So I think, in particular, the theory has something deeply wrong with it, though. And I'm even going to tell you my opinion: it's the two sources hypothesis that is wrong in it. If you dropped the two sources hypothesis, I think that would be a massive error correction to the theory that would actually get it back on track again. And it probably could at some point be developed into a good theory of knowledge similar to information theory. Okay. But that's all my opinion, and I'm not going to ask anybody to accept that. I'm going to make a series of arguments; judge for yourself. Okay. So here's the problem. Do the three criteria offered by Hervé actually force us to conclude that only the two sources create knowledge? Let's discuss that. Okay. So very specifically today in this episode, we're going to ask: do the three criteria that Hervé listed out solve the problem from the last episode? In other words, can the walking robot pass the three criteria, or does it fail them? So let's re-explain, since you may not have heard the previous episode, what the walking robot algorithm is. This is taken directly from David Deutsch's The Beginning of Infinity; I didn't come up with it myself. It's this idea that you use a genetic programming algorithm, and it writes an algorithm, and it does it using crossbreeding, mutation, and variation and selection. And you end up with an algorithm that actually does allow the robot to walk, using a set of subroutines that were written by a graduate student, subroutines the algorithm itself doesn't write.
[00:11:52] Unknown: Okay.
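The genetic-programming setup described here can be sketched roughly like this. This is a toy illustration, not Deutsch's actual example: the fitness function stands in for the graduate student's hand-written walking subroutines, and all names and parameters are assumptions:

```python
import random

# Hypothetical stand-in for the hand-written subroutines: a fitness
# function scoring how well a candidate genome makes the robot walk.
# Here, a toy objective that rewards genome values near 0.5.
def walking_fitness(genome):
    return -sum((g - 0.5) ** 2 for g in genome)

def mutate(genome, rate=0.1):
    # Randomly perturb some genes (mutation / variation).
    return [g + random.gauss(0, 0.05) if random.random() < rate else g
            for g in genome]

def crossover(a, b):
    # Splice two parent genomes at a random cut point (crossbreeding).
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def evolve(pop_size=50, genome_len=8, generations=200):
    population = [[random.random() for _ in range(genome_len)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=walking_fitness, reverse=True)
        survivors = population[:pop_size // 2]   # selection
        children = [mutate(crossover(random.choice(survivors),
                                     random.choice(survivors)))
                    for _ in range(pop_size - len(survivors))]
        population = survivors + children        # next generation
    return max(population, key=walking_fitness)

best = evolve()
```

The winning genome survives precisely because it outcompetes its rivals each generation, which is the sense in which the walking algorithm "causes itself to remain" instantiated.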
[00:11:53] Blue: So when we're talking about this walking robot's algorithm, let's compare it to the three criteria. Is it capable of enabling its own preservation? Well, the final walking robot algorithm kept itself preserved compared to other variants by being more useful. It's the one that ends up in mass production with all the robots, and all the variants are dead now. Okay. So yes, it seems to me that it passes criterion number one. The second criterion is that it can be copied from one embodiment to another without changing its properties. The algorithm can, and probably will, be copied to other robots precisely because it is useful. So it seems to me that it passes criterion number two. Number three was that it can enable transformations and retain the ability to cause them again. Well, it seems to me that the algorithm enables the robot to walk, which is a transformation, and can cause it again. So again, it seems to me that it passes the third criterion. So we have at least a reasonable interpretation of the three criteria under which the walking robot algorithm does pass. And therefore, at least the way I'm reading those three criteria, it should count as knowledge too. Okay. Now, this maybe isn't so surprising. After all, The Beginning of Infinity's account of knowledge is meant to summarize an early form of the constructor theory of knowledge that Deutsch was working on at the time. So of course they're very, very similar. All right. Now, I was talking to a defender of Deutsch's constructor theory of knowledge.
[00:13:28] Blue: So once I had put Hervé's summary out there, I actually could reference that summary when I talked to people who defend Deutsch's constructor theory of knowledge. I could reference it and say, do you agree with this summary? And then we could actually talk about it, and I could offer my criticisms in a far more explicit manner. And there's me again, using that word explicit. That is something that I'm going to claim is very important, but in some future podcast. Now, I pointed this out. So I'm talking to a person who's a defender; let's call him Henry. Okay, that's not his real name. I showed him that the walking robot passes the three criteria, and I asked, are these criteria insufficient in some way? If they are insufficient, if they don't by themselves eliminate the walking robot algorithm, then it seems to me that you have a problem to resolve. Now, you could resolve this problem in one of two ways. One, you could add additional criteria. Or two, you could modify or make more explicit the existing three criteria. For our purposes, I don't actually care if you make the three criteria more explicit or if you add a fourth or fifth criterion that clarifies, because for our purposes those are going to be exactly the same thing. Okay. So that would be one way you could go about this. You could take those three criteria and improve them in some way, or add additional criteria. And those additional criteria could then eliminate the walking robot algorithm, get us back to the two sources, and the theory would no longer have a problem. Okay.
[00:15:08] Blue: The other way we could do it is we could drop the two sources hypothesis. So in this case, what we're doing is accepting the original three criteria as is, and allowing whatever consequences follow to follow. If this means there are more sources of knowledge than the two sources, then so be it. Okay. So I will accept either way of going about this. Now, there's an interesting phenomenon that comes out of this. When I point this problem out to fans of Deutsch's constructor theory of knowledge, it turns out that the two sources hypothesis is treated as the single most important part of the theory, not the criteria or the definition of knowledge. Now, how do I know that? Because in every single case, without exception, the defenders of the theory will always try to either add to or clarify those three criteria. Not a single one has ever said, oh, maybe the two sources hypothesis might be wrong; maybe we should consider dropping it. It is just a given that the two sources hypothesis is correct, and that it's the criteria that must be clarified or modified in some way to get back to the two sources hypothesis. Now, this is strange. Normally, the implications of a theory aren't considered core to the theory. Normally, we start with the theory, which we try to state boldly and precisely, and then we have no choice but to accept the implications of that theory. This property of having to accept the implications of a theory has a term that David Deutsch gave it. It's called reach. Okay.
[00:16:45] Blue: So what we're talking about is that normally you specify the concept or theory, you give it a label, and then you accept the implications of that theory as is. You accept the reach of that theory. Okay. The primary goal of trying to put together a good explanation is to have implications that we can't control by simply changing gut feels or re-wording things a bit. This is the concept of easy-to-vary versus hard-to-vary explanations. But for some reason, the two sources part of the theory is considered core, to the point where we feel the need to adapt the theory itself, meaning the criteria in this case, until we get back to the two sources. Now, let me use an analogy to show how weird this might come across as.
[00:17:36] Unknown: Okay.
[00:17:36] Blue: Imagine if Euclidean geometry saw as its core feature that triangles always have 181 degrees. So you show the Euclideans that actually, by their own axioms, it's 180 degrees. And the Euclideans start to ad hoc vary their axioms to try to get back to 181 degrees, rather than changing the assumption that triangles always have 181 degrees. Okay. That would just be weird, right? So I want to call out just how strange it is that the two sources part of the theory is considered the core of the theory, rather than the criteria or the theory itself. Okay. Now, let me steelman this, though.
[00:18:18] Red: It could just be the easiest part of the theory for people like me to understand, too. Don't you think?
[00:18:27] Blue: Okay. So here's the thing, though. If I take the stance that the reason they see that as core is that it's the easiest part to understand, isn't that really just the same as saying they don't understand the theory?
[00:18:38] Red: Fair enough.
[00:18:40] Blue: Okay. So for me to take that stance, that might be the truth. In fact, I think that may in fact be the truth. But for me to take that stance is, in some sense, me just saying up front, oh, well, they're wrong, and then I'm not making an argument. So I don't want to do that. I want to steelman their argument as much as I can and then criticize that version, the steelman version. Okay. So let me offer a steelman. All right. Suppose they are theorizing about what makes the two sources so special in the first place. So they adapt the criteria on the fly precisely because this is supposed to be a theory about the two sources. This isn't maybe completely unreasonable. There are some problems with that, and we'll talk about the problems, but it's maybe not completely unreasonable. So let me give an example. Let's say that I have defined a triangle as having three sides, and then somebody draws a, quote, triangle, but they make the sides squiggly. They say, oh, look, this has three sides, just like you said. So this is, by your definition, a triangle. Okay. Well, it would make perfect sense at this point for me to not just accept the implications of the words I happened to pick, but instead to clarify what I really meant. So it would be fair for me at this point to say, okay, fair, but I actually meant straight sides.
[00:20:10] Blue: So I'm going to now rewrite this: I'll add an additional criterion, a triangle has three sides and the sides must be straight, or just rewrite the whole thing: a triangle has three straight sides. Okay. If this is what's going on, then I would consider that completely fair. All right. Now, here's the thing, though. There is an implication to taking that stance, and it's a really big implication. It's one that I don't think defenders of the theory are willing to take. It means Deutsch's theory is explicitly about the two sources of knowledge, and tautologically nothing else counts as knowledge, as per the starting assumption. If this is actually the case, this changes my perception of the theory entirely. It's like saying: actually, I'm defining knowledge such that it must come from biological evolution or human minds, and anything that doesn't come from those two is tautologically, by definition, not knowledge. Now, no one has ever told me this. I mean, that would be an easy thing for the defenders of the theory to point out to me, if that was really what they were thinking. But no one's ever said, this is a theory about the two sources, Bruce. They've always treated it as if the criteria do in fact exclude the robot, and it's just obvious that they do. That's how it's always been treated. Okay. Now, what I'm going to say is, you have to take a stance either way, and I don't care which stance you take.
[00:21:43] Blue: If tautologically we're choosing to define knowledge as that which comes through the two sources, then if I can show the criteria don't match that, I've refuted the criteria, if not the two sources hypothesis, and the criteria have a problem that needs to be solved. Or you must decide the criteria define knowledge, and if they admit something outside the two sources, then we've refuted the two sources hypothesis part of the theory. It's got to be one of the two, though. Okay. So when I bring up my refutations, my counterexamples, I'm not telling you which part of the theory I'm refuting. I'm refuting a combination of both. And it's really up to the defenders of the theory to decide which part I've refuted and to error-correct the theory so that it no longer has the problem. Okay. So here's the problem, then. It is possible that nothing separates the output of the two sources from more narrow means of creating knowledge, such as the walking robot algorithm, and that the real difference is that the two sources are actually special because they are open-ended search algorithms, and they're the only two we know of. That is to say, they are algorithms that solve the problem of open-endedness, as we discussed two episodes ago. If this is true, then that would mean Deutsch is barking up the wrong tree. He's trying to find something physically different about the output, when really the difference lies in the search algorithm itself. This is, again, what I believe is actually going on. Personally, that is my opinion. But let's keep an open mind. I can't prove that's the case; that's my conjecture as to what's going on.
[00:23:24] Blue: Now, when I raised these issues to Henry, how did he respond to this challenge? Henry tells me that I'm missing the fact that Deutsch's theory is exclusively about replicators and that I'm misreading the three criteria. Now, that's actually a form of clarifying the criteria, so this is fair. Namely, he says I'm missing the fact that the three criteria imply that knowledge keeps itself preserved by being a replicator that lasts for hundreds or even thousands of years. It isn't merely that it needs to be able to be copied; it must actually be copied, and it must be copied many, many times. Now, in my version, the way I was reading the three criteria, there was no requirement for the information to be copied at all, only that it must be able to be copied, though if it's useful, it likely will be copied. Now, let me use an example. Since I've been comparing Deutsch's theory of knowledge to Campbell and Popper's theory of knowledge, their evolutionary epistemology, let's use Campbell's example of the paramecium. So let me explain that example. He imagines, well, he doesn't imagine, he says: there's a paramecium, and the paramecium, we know, works off of a simple algorithm to find food. It determines if there's food nearby, and if it doesn't see anything, it randomly tries a direction to move. If it finds that it's blocked, it tries a different direction at random. And it keeps doing that until it can tell it's moving. By doing that, it will move away from where it knows there's no food, towards a direction that might have food. Okay.
[00:24:56] Blue: So this is a very, very simple example of what Campbell calls blind variation and selective retention, okay, where the variation is the different directions the paramecium might move, and the selective retention is that it continues to move in a direction once it finds it's not blocked.
[00:25:15] Unknown: Okay.
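Campbell's blind-variation-and-selective-retention loop is simple enough to sketch in a few lines. Again, this is just an illustrative toy (the grid world, the function name, and the obstacle set are all my inventions), assuming at least one direction is always open:

```python
import random

# Blind variation and selective retention, paramecium-style:
# keep the current heading while it works (selective retention), and
# pick a new random heading whenever movement is blocked (blind
# variation). Assumes at least one direction is always unblocked.
DIRECTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]

def wander(blocked, start=(0, 0), steps=100):
    pos = start
    heading = random.choice(DIRECTIONS)
    path = [pos]
    for _ in range(steps):
        nxt = (pos[0] + heading[0], pos[1] + heading[1])
        while nxt in blocked:
            heading = random.choice(DIRECTIONS)   # blind variation
            nxt = (pos[0] + heading[0], pos[1] + heading[1])
        pos = nxt                                 # retained: it worked
        path.append(pos)
    return path
```

Note that the loop never copies anything: there is no replicator in this process, which is exactly the feature of Campbell's example at issue here.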
[00:25:16] Blue: So Campbell considers this the creation of a very simple kind of knowledge, even though there is no replicator. In fact, this is a really key part of Campbell's theory. And again, I probably need to do a separate podcast where I try to summarize his paper, but I've mentioned Campbell's theory at length in past episodes. So if you've been following the show for a while, you know what I'm talking about. Okay. What Campbell did is point out that when we talk about evolution, we typically think of it in terms of three things: variation, selection, and this idea of replicators. He drops the replicators for his theory. He says we don't need the replicators. Evolution is evolution even if it's just blind variation and selective retention, and that alone creates knowledge. And then he uses the paramecium example as an example of blind variation and selective retention that has no replicators. Okay. So here's a question. Does Campbell's paramecium example match Hervé's three criteria if we drop the apparently implicit requirement that the knowledge get copied? I'm going to leave that out there. My answer is yes, it does, but I'll let people go back and look at the paramecium example and compare it to Deutsch's theory of knowledge, at least the way I was reading it, as not requiring that the knowledge get copied. When read that way, there was no requirement that you have replicators. You might have replicators, but you don't have to have them. Okay.
[00:26:48] Blue: So Henry reads criteria one and two as implicitly saying something like, number one, it is capable of enabling its own preservation through replication, or number two, it must be copied many times from one embodiment to another without changing its properties. Whereas I read them more literally. Now, actually, Henry goes further. He tells me that the walking robot algorithm doesn't count as knowledge because real knowledge keeps itself instantiated for generations, for hundreds and maybe even thousands of years. So according to Henry, if it only replicates for a short time, it's not actually knowledge. The implication here, and maybe he didn't intend this, but it is an implication of what he's saying, is that even a useful adaptation that lasts only for a few decades no longer gets to be considered knowledge. And then I would have to ask, okay, then what is it? If it's not knowledge, what is it? It's clearly something that's exactly the same as knowledge; it just didn't last long enough to count as knowledge. Okay, what is it? Henry is more than a bit vague as to how many generations are required for something to be considered knowledge. And I admit I was very skeptical of his claim at first, that this was even intended by Deutsch's constructor theory of knowledge. But let's ask the question: is Deutsch's constructor theory of knowledge explicitly about replicators, and does it require hundreds or thousands of years of copying? In other words, I'm asking, what if Henry is correct? He recommended that I go reread Chiara Marletto's book, particularly chapter five. Here is what I found.
[00:28:24] Blue: I'm going to actually take you through what the book actually says, using quotes. On page one: knowledge, defined objectively, is information that is capable of perpetuating its own existence. So far, that fits the walking robot example perfectly. On page 144, she says a particular type of information that can enable its own self-perpetuation is what she calls knowledge. Again, the walking robot example passes. And then she says knowledge merely denotes a particular kind of information which has the capacity to perpetuate itself and stay embodied in physical systems, in this case by encoding some facts about the environment. Again, the walking robot absolutely passes with flying colors. Okay. So at first, I thought for sure Henry had misunderstood what she was saying. But let's keep going. She uses an example of a cricket that transforms dirt into a hole, and how she tried to fill in the hole in her lawn and the hole would reappear. That's an example of what she calls resilience. She then defines the concept of resilience this way, and points out that the most resilient thing is the recipe in the DNA that creates the organism that creates the hole. So the most resilient thing here isn't the hole itself, right? This is from pages one to five. She defines recipes in terms of being resilient due to being useful; this is on page 151. She later defines the concept of a catalyst that allows a transformation, analogous to chemistry. And she creates the idea of an abstract catalyst as a metaphor built on a catalyst. So this is kind of a metaphor of a metaphor: a catalyst here is metaphorical to a chemistry catalyst, and an abstract catalyst is a metaphor built from a catalyst.
[00:30:30] Blue: So an abstract catalyst is where knowledge is adapted information that can cause transformations and then remains to cause them again. In this analysis, the cricket is the catalyst in the metaphor. And she says this explicitly on page 142; she says, like the cricket in my story, when referring to catalysts. But the information in the cricket's DNA, which can also recreate the cricket itself, is the abstract catalyst. This is on page 148 of her book. Why is it a catalyst? Because, on page 148, it can enable transformations and retain the property of causing them again. And why abstract? Because its identity does not depend on the physical systems in which it is embodied. How do we distinguish abstract catalysts from other kinds of information? This is on page 150; I'm quoting her: we need to look for information that can enable transformations and is resilient. So the walking robot arguably passes all these requirements. Number one, it enables the transformation of walking. Number two, it's not dependent on its physical embodiment. Number three, it was and is resilient compared to its competitors, due to being useful. Okay? So at least the way I'm reading it, it passes everything that she said, and it counts as an abstract catalyst. But let me acknowledge something here. It depends on what you mean by transformation, recipe, and resilient. And arguably, you could interpret those terms in such a way that the walking robot algorithm would not pass. So here are some other things that Chiara mentions. She says it must have causal abilities. Here's a quote, page 151: being a useful adaptation guarantees the survival of that piece of information with causal abilities.
[00:32:27] Blue: It is what guarantees that it is resilient and that it qualifies as an abstract catalyst. So she ties abstract catalysts to the idea of a piece of information with causal abilities that is useful. She ties it directly to usefulness. Stated this way, I would say the walking robot passes with flying colors again. Okay? But wait. On the same page, without ever explicitly saying it, she then acts as if the information not only can be copied, but will be copied. And pay attention: she even says for generations. Here's the quote: so the information in a piece of DNA may or may not be an abstract catalyst, depending on whether or not it can propagate itself for generations, thus remaining instantiated in physical systems. So this is where Henry got this. She really does imply it needs to stick around for generations. But how many generations are required for adapted information to become knowledge? A hundred? A thousand? A million? And by the way, can I count the walking robot algorithm as knowledge if I let it run for a million generations? I could do that. I could let it run for a million generations, and it would continue to be the best one, and it would keep itself instantiated that way. You'd get little tiny tweaks to it, but it would stay the way it was. And what if we build a toy or a robot that goes into production and remains in use hundreds of years later? Would the walking robot algorithm then count as knowledge? It is interesting that Chiara seems to assume knowledge is a replicator here, just like Henry argued to me. She never once explicitly states this when laying out her criteria.
[00:34:19] Blue: She just assumes it is obvious later on. That quote I just gave you is the only place I can really find this. And it’s not her saying, oh, to count as knowledge, it must propagate for generations. She does not say that; she just acts like it does. Okay? So it is implicitly part of what she’s talking about. Despite defining recipes in her book as a set of instructions to realize a transformation, she then only recognizes two kinds of recipes. On page nine: I shall start with recipes coded in the pattern of living cells, DNA. And on page 13: the other kind of recipe is those that maintain our civilization in existence. So she implicitly assumes there are only two kinds of knowledge: DNA and human knowledge. This is, of course, the two sources hypothesis again. Now, it seems that Deutsch and Marletto assume their theory of knowledge only applies to replicators, and write as if that is the case, and even choose terms like recipe that bring to mind things like copying. But I can’t find anywhere where she explicitly states, as a hard requirement, that it must be a replicator and that it must be replicated for hundreds or thousands of years. Okay? It just seems to be assumed that that’s what it implies. But I think I can conclude that Henry was correct. When Henry argued to me that Deutsch and Marletto intended the theory to apply only to replicators, that is exactly what she was thinking. And the reason why I feel we can comfortably assume this is the very fact that she writes as if it’s the case.
[00:36:02] Blue: She wouldn’t have written it that way if that wasn’t in her mind somewhere. So I declare Henry correct that that was probably the intent. But I do not believe that she ever explicitly states it as part of the criteria or definition or theory of knowledge. So I’m going to now assume Henry is correct. And I’m going to point out that this is equivalent to saying that in addition to the three criteria that Hervé listed, there’s actually an implicit fourth criterion: it must be replicated to be considered knowledge. And possibly even a fifth criterion: it must be copied for generations. Or if you prefer, we can see that as a clarification of criteria one and two instead. Okay. I don’t care whether you clarify by making the criteria more explicit or by adding criteria; it’s the same thing. This is interesting, and let me explain why. Because it’s possible we have two theories here. The first theory is the way I’m reading the Constructor Theory of Knowledge, which does not require replicators. In fact, I think the Constructor Theory of Knowledge minus replicators is identical to Campbell’s and Popper’s evolutionary epistemology. I think the two theories are one theory at that point. Maybe I’m wrong, but that’s what it seems like to me. Okay. Recall that Campbell intentionally drops replicators to come up with a generalized theory of evolution. So that’s why I actually think the Constructor Theory of Knowledge, absent the implicit assumption of replicators, turns out to be identical to the Popper-Campbell theory of knowledge. Okay. With Deutsch’s theory of knowledge at this point being a subset of Campbell and Popper’s evolutionary epistemology. That’s really interesting. Okay. I mean, what if that’s the case?
[00:37:52] Blue: What if Deutsch’s Constructor Theory of Knowledge is a subset of Campbell and Popper’s evolutionary epistemology, their theory of knowledge? So I asked Henry if it was okay to write down a fourth criterion requiring replication. My intention in doing this was so that I could show him my great discovery: that we actually had two theories here, and that the way I was reading it was the same as the Campbell-Popper one. So under this, the original three criteria would not include replicators and thus would match Campbell and Popper’s evolutionary epistemology, and Deutsch’s theory would now be limited to replicators. When I tried to do this, though, a curious thing happened. Henry said firmly, no, you’re not allowed to write that down, because then we’re defining knowledge in terms of how it was created instead of what it physically is. Now, let me just point out that he’s completely right about that. If we add a new criterion that says it’s only knowledge if it happens to get replicated, otherwise it isn’t, even if it’s otherwise identical and equally useful, then we have now explicitly defined the theory of knowledge in terms of the process that created it, okay, instead of what it physically is. Now, I pointed out to Henry that either this was part of the theory or it wasn’t. If it was, then his view of knowledge was defined in terms of how information got created, if only implicitly. So it should be allowed to be written down, since it is a correct understanding of the theory. He then firmly told me that it was implicit in the theory, and thus a requirement, but not part of the explicit definition of knowledge itself.
[00:39:34] Blue: This was necessary, he argued, so the definition or theory wasn’t defined in terms of how knowledge is created. I pointed out that without that clarification, I had clearly misinterpreted the three criteria. So didn’t it make sense to clarify what was really meant? Plus, he’s not denying it’s a criterion for knowledge. He was just refusing to write it down and make it explicit. So it’s part of the definition of knowledge by his own admission, even if it’s not written down. And he’s trying to use that as a way of eliminating the walking robot algorithm from counting as knowledge. He then insisted quite firmly that it was just obvious, yes, he used those words, that the phrase capable of enabling its own preservation really meant gets replicated. So there was no need to write it down, and it was my own fault I had misinterpreted it. I had apparently missed the manifest truth of what those words meant. So let me talk to Henry just for a second. He may hear this episode. I am not attacking his position. Henry, I’m not attacking your position. Okay. And there’s even a fair point you’re making here. And I wonder if you can see that I am trying to steelman, as best I can, the position that you’re taking here. Okay. And I hope you can also see the problem I’m raising. I’m going to great lengths to make the problem explicit and obvious. So Henry, can you see that if you don’t write down a requirement and you keep it implicit, that in no way changes anything at all?
[00:41:06] Blue: If it is an implicit requirement, it is just as much a problem as if we wrote it down and made it explicit. The fact that you’re choosing not to write it down does not make the problem go away. Okay. It just hides the problem. And we’re critical rationalists. We should not avoid problems. We like problems. We want problems. We embrace problems as opportunities to improve our theories. So here’s the problem. Deutsch and Marletto are implicitly making the theory of knowledge about how the knowledge got created, specifically through replicators, and not fully about what it physically is. And since what they really want is to define knowledge in terms of what it physically is, this is a problem that needs to be fixed. We should not hide it. Okay. So to be clear, Henry wants to implicitly define Deutsch’s theory of knowledge in terms of how the information got created, via replicators, but he doesn’t want to explicitly write it down even though he admits it’s a requirement, because if we write it down, then it’s immediately obvious that the definition or theory has a problem: namely, that it’s actually defining knowledge not in terms of what it physically is, but in terms of how it got created. Now, let me just point something out. Maybe that’s not such a big deal. In fact, the Campbell-Popper theory of knowledge, their evolutionary epistemology, is explicitly defined in terms of how knowledge gets created: by blind variation and selective retention. So from that point of view, maybe it’s not so bad if Deutsch’s constructor theory of knowledge is defined in terms of how the knowledge got created. On the other hand, let me argue in favor of Henry’s position for a second.
[00:42:45] Blue: I admit that this has always seemed to me like a giant weakness in Campbell’s and Popper’s evolutionary epistemology. It seems like a really undesirable trait that when we talk about knowledge, we’re basically talking about that which comes out of a successful variation and selection algorithm. And as computer scientists in particular, we don’t define things in terms of the implementation of the algorithm. We define things in terms of the inputs and the outputs. And there’s a really good reason why we do that. The moment you say, look, I’m going to define my epistemology in terms of how knowledge gets created, it kind of sucks. And so I totally see where Henry’s coming from, why he wants to avoid defining knowledge in terms of the process that created it. And furthermore, if we are going to define Deutsch’s theory of knowledge in terms of the process that created it, there’s a really easy proposal I can offer: just define knowledge as adapted information that has causal power and causes itself to remain so, and that was created either via biological evolution or human minds. And if it wasn’t created by one of those, then it’s not considered knowledge. Now again, at this point, no one’s going to want to define knowledge that way. Okay, but if we’re going to define it in terms of how knowledge got created, then why not? Okay, so I totally see why Henry wants to avoid doing this, because then it really is kind of a sucky theory at that point. I shouldn’t say that. It’s a theory that’s got something that sucks about it that I would really rather change.
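[Editor’s aside: the computer-science point above, defining things by inputs and outputs rather than by the process that produced them, can be sketched concretely. This toy example is mine, not from the episode: the property “sorted” is defined purely by what the output physically is, so two wildly different processes get the same verdict.]

```python
import random

def is_sorted(xs):
    """Defined purely by what the output is, not how it was produced."""
    return all(a <= b for a, b in zip(xs, xs[1:]))

def bubble_sort(xs):
    """A deterministic, hand-designed process."""
    xs = list(xs)
    for i in range(len(xs)):
        for j in range(len(xs) - 1 - i):
            if xs[j] > xs[j + 1]:
                xs[j], xs[j + 1] = xs[j + 1], xs[j]
    return xs

def shuffle_sort(xs):
    """A blind variation-and-selection process: shuffle until sorted."""
    xs = list(xs)
    while not is_sorted(xs):
        random.shuffle(xs)
    return xs

# Same verdict for both, because the definition ignores the process.
print(is_sorted(bubble_sort([3, 1, 2])))   # True
print(is_sorted(shuffle_sort([3, 1, 2])))  # True
```

The analogy: if knowledge were defined the way `is_sorted` is, the walking robot algorithm would pass or fail on its physical properties alone, regardless of whether a human or a genetic algorithm produced it.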
[00:44:31] Blue: However, Henry is tacitly admitting that there is a problem with Deutsch’s constructor theory of knowledge: that implicitly it is about how knowledge got created. So I’m going to accept Henry’s view, specifically that the constructor theory of knowledge, as stated today, is about replicators, and replicators that survive for generations. However, I insist we do not make this implicit, but instead we make it an explicit criterion. I am always going to insist on explicit criteria, so we can easily see that it causes a problem that must be fixed. That’s a good thing as a critical rationalist. And this is one of the main reasons I see Deutsch’s current theory of knowledge as flawed: because it has this implicit criterion, which really boils down to it has to be created through replicators, that he’s trying to avoid stating explicitly. That strikes me as a problem that needs to be addressed. So interestingly, defining knowledge as requiring it to be a replicator doesn’t actually help with the walking robot algorithm. As we’ve seen, the genetic programming algorithm that creates the walking robot algorithm does use replicators. Plus, in our hypothetical example, the algorithm is going to get replicated into multiple robots. So really, the criterion that it has to be about replicators doesn’t help even slightly with denying the walking robot algorithm from being knowledge. However, if we define resilience to mean it keeps itself instantiated for hundreds or thousands of years, that would very likely eliminate the walking robot algorithm. What are the odds that this walking robot we made is going to go into production and then, hundreds or thousands of years from now, the same robot with the same algorithm is in use? It doesn’t seem
[00:46:27] Red: much, much better than
[00:46:28] Blue: right.
[00:46:29] Red: Yeah.
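[Editor’s aside: the point just made, that the genetic programming run producing the walking robot algorithm is itself built out of replicators, can be made concrete with a minimal sketch. Everything here is an illustrative assumption: the genome encoding, the population sizes, and the toy fitness function standing in for “how well the robot walks.”]

```python
import random

def fitness(genome):
    # Toy stand-in for "how well does this controller make the robot walk?"
    # Here: closeness to an arbitrary target parameter vector.
    target = [0.5, -0.2, 0.9, 0.1]
    return -sum((g - t) ** 2 for g, t in zip(genome, target))

def mutate(genome, rate=0.1):
    # Variation: a copy with small random changes.
    return [g + random.gauss(0, rate) for g in genome]

def evolve(pop_size=30, generations=200):
    population = [[random.uniform(-1, 1) for _ in range(4)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]      # selection
        # Replication with variation: each new candidate is a *copy*
        # of a survivor, so successful genomes cause themselves to remain.
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(pop_size - len(survivors))]
    return max(population, key=fitness)

best = evolve()
```

The winning controller literally causes itself to remain: it survives each selection step and gets copied into the next generation, which is why the replicator criterion alone cannot exclude it.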
[00:46:30] Blue: Okay. There’s an inherent problem with trying to eliminate the walking robot algorithm in that way, though: specifically, it eliminates much of human knowledge. So for example, do airplane designs last for hundreds or possibly thousands of years? They don’t. Okay. So are we prepared to also say airplane designs are not knowledge? Now, I’m okay with you declaring it either way. Again, I’m really just looking for consistency. Okay. I’ve also earmarked a few other things that might be additional criteria, or additional clarifications if you prefer, that might disqualify the walking robot algorithm from being knowledge under Deutsch’s theory. It’s a bit vague what a recipe is, and whether an algorithm counts if it isn’t constructing something. Chiara writes about recipes as if they construct something. And I’ve been talking about a transformation that is a walking robot, which isn’t constructing anything. Okay. There seem to be a few other places where we might grasp onto implicit criteria that eliminate the walking robot algorithm. So let’s play with those and see if any of them work better than trying to base it around replicators, or around lasting for generations or for hundreds or thousands of years. So let’s imagine a revision to the original three criteria. The originals were: it is capable of enabling its own preservation; it can be copied from one embodiment to another without changing its properties; it can enable transformations and retain the ability to cause them again. We’re going to now add, implicitly or explicitly: when we say enabling its own preservation, we mean it’s a replicator that keeps getting copied, not merely that it can be copied.
[00:48:23] Blue: And when we say transformations, we mean it constructs something. Okay. Now, if we were to do this, that second one in particular would eliminate the walking robot algorithm from being knowledge anymore. So let’s explore that further. What is a transformation? If a transformation is understood to mean having to construct something, the walking robot algorithm is out. Is that what was intended by the word transformation, though? Let’s explore that using quotes again from various defenders and from the book. Let’s use an uncontroversial example first to really look into this further. Let’s say we have an automated factory that creates airplanes or bridges or whatever. Okay. So here’s from Chiara’s book, page 153. The recipe for the aircraft must be copied for the factory to survive. It is an abstract catalyst that keeps the factory going for years. This recipe is a set of instructions to realize the construction of an aircraft. It is the recipe in the sense that it is the sequence of steps that one has to follow in order to forge the metal into the shape of the plane. The recipe for a fully fledged aircraft is what allows the construction of the aircraft to happen reliably. Notice that she defines recipe in terms of being a transformation, but writes as if it’s obvious that she really means a construction, at least in this paragraph. Okay. So the walking robot algorithm is a transformation in the sense that it allows the robot to transform itself and walk, not in the sense that it constructs something. And the robot does do walking through a set of instructions. That’s the algorithm. Algorithms are a set of instructions, just like she’s saying. Okay.
[00:50:16] Blue: So it’s kind of close to what she’s talking about, but if she really means transformation equals construction, then I agree the walking robot algorithm doesn’t pass. So here’s the point I’m trying to make. I’m belaboring this, and you’re probably going crazy by this point. What I’m trying to say is that whether or not the walking robot algorithm passes depends on whether we’re willing to accept walking as a kind of transformation or not. And it’s a little bit open to interpretation. We could play the same game with the word recipe. Is an algorithm a recipe? That word does not necessarily summon to my mind an algorithm; instead it calls to mind the idea of constructing something, not merely allowing a thing to move around. So it is a little unclear if the walking robot example is knowledge in the sense of being a recipe or in the sense of being a transformation. Arguably yes, it passes, and arguably no, it doesn’t. Again, it depends on what you mean by recipe and transformation. Now here’s the thing. I don’t care how you define these terms so long as you’re explicit about it and so long as you apply the criteria consistently. Again, I want to emphasize explicitness and consistency. So for example, let’s say humans build the algorithm for a robot to walk. Back when I was a master’s student studying artificial intelligence, I was told that the original Boston Dynamics robots didn’t use any sort of machine learning or anything like that; instead, the engineers came up with their own algorithms to get the robots to walk.
[00:51:58] Blue: So let’s say a human makes the robot walk and there’s no evolutionary algorithm; it’s human creativity that is used to do this. Is this now not knowledge because it is not a construction or a recipe? What I’m asking is that you apply the criteria consistently. So if a human creatively makes the algorithm, is it not knowledge because it’s not a construction or a recipe? Or let’s say we’re talking about knowledge in an animal’s genes. For example, a giraffe can walk on its first day. So the giraffe has the knowledge in its genes of how to walk. Or at least that’s how I would normally have said it. But if you want to tell me that because nothing got transmuted when the giraffe is walking, this is not a transformation or a recipe, then we can no longer say that the giraffe has the knowledge in its genes to walk. Because walking is not a kind of transformation or a recipe, arguably, okay? And therefore it doesn’t fit Deutsch’s theory of knowledge. Are you willing to go that far? I’m fine with that. I don’t care how we define things. I’m just looking for a consistent application that’s explicit. So furthermore, let’s think back to Hervé’s list of examples. He gave the list of the two sources and he gave examples. Those examples included democracy, language, the Mona Lisa, quantum theory. Are democracy and quantum physics recipes any more or less than the walking robot? Are democracy and quantum physics a specific construction of something? If so, what are they constructing? Are you prepared to drop these examples on the grounds that they aren’t recipes and they don’t construct anything?
[00:53:49] Blue: The Mona Lisa may exist for many hundreds of years, let’s say, but suppose there is a great painting that is a fad and then it disappears in a few decades. Are you prepared to declare it not knowledge because it didn’t last hundreds or thousands of years? Again, I know I’m going way overboard trying to emphasize this, but I am asking for consistency, and that’s something I should be allowed to ask for. If you declare the walking robot algorithm not knowledge because it didn’t get copied, then when it goes into a toy and it gets copied, I’m asking you to declare it knowledge at that point, if that was the dividing line you wanted to base it on. If you declare the walking robot algorithm not knowledge because it is not a recipe that constructs something, then I’m asking you to declare democracy and quantum physics also not knowledge. If you declare the walking robot not knowledge because it didn’t last a thousand years, then I’m asking you to declare an aircraft design that is retired after a few decades not knowledge also. What I’m not allowing you to do is to declare the walking robot not knowledge due to, say, it not sticking around for a thousand years, and then drop that implicit criterion when we’re talking about human knowledge like an airplane design that only lasts for a few decades. Henry, I’m allowing you to modify the criteria in any way you see fit, so long as you make it explicit and apply it consistently, then take whatever the consequences are after you’ve done that. Now, let me even bring up one that Henry didn’t come up with that I actually think is worthy of some discussion. What about this one?
[00:55:25] Blue: Here, from her book, she says: the best way to define this type of information, called knowledge, is that it is exactly the thing one would ultimately have to eliminate in order to prevent a particular transformation from being performed reliably. Henry never raised this one, but I could channel my inner two-sources apologist and make an argument something like this. The walking robot is not knowledge because Chiara Marletto says knowledge is defined as being exactly the thing you have to eliminate in order to prevent a particular transformation from being performed reliably. And if we eliminate the walking robot algorithm, we could just run the genetic programming algorithm again and a new one would appear. Okay, would this allow us to eliminate the walking robot algorithm? At first, this might really seem like a strong argument, but consider the following. The argument ignores the fact that Chiara’s own example of a cricket constructing a hole in the dirt has exactly the same problem. If you destroy every single copy of cricket DNA, biological evolution now has a niche that’s no longer filled. So it will likely create a new recipe that accomplishes the same thing, that creates a hole. In fact, we can point to gophers creating holes instead, or something like that. Okay, because Campbell’s knowledge creation happens in a hierarchy of variation and selection algorithms, it will always be the case that it can, and probably will, be recreated by something else in the hierarchy once the niche is no longer filled. It also means airplanes aren’t knowledge, because if you eliminate the Wright brothers’ first design, surely somebody else using human creativity would eventually have invented airplanes.
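[Editor’s aside: the “just run the search again” objection, and why it proves too much, can be sketched. Delete the product of a variation-and-selection search and, as long as the search process itself still exists, an equivalent product reappears. A toy illustration with invented names:]

```python
import random

def variation_and_selection(seed, trials=1000):
    """Toy stand-in for any variation-and-selection process
    (a genetic programming run, biological evolution, ...)."""
    rng = random.Random(seed)
    best = 0.0
    for _ in range(trials):
        candidate = rng.random()      # blind variation
        best = max(best, candidate)   # selective retention
    return best

first = variation_and_selection(seed=1)
del first                                  # "eliminate the thing"
second = variation_and_selection(seed=2)   # the niche gets refilled
# Both runs land on a near-optimal solution, so eliminating one copy
# does not prevent the transformation from being performed reliably.
```

By the same logic, destroying the cricket’s DNA or the Wright brothers’ design leaves the relevant search process intact, so this elimination criterion alone cannot single out the walking robot algorithm.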
[00:57:09] Blue: You have to assume that Chiara really meant exactly the one thing that must be eliminated that is the most proximate cause, which in this case is the walking robot algorithm, not the genetic programming algorithm. But arguably, that might or might not eliminate the walking robot algorithm. Again, it depends on how you interpret it. And all I’m asking is that you make the criteria explicit and apply them consistently. So if you want to use this one to eliminate the walking robot algorithm, that’s fine. But I then get to use it to eliminate various things that you probably do consider to be knowledge. So we have a handful of arguable criteria at this point by which to eliminate the walking robot example. A recipe might imply we have to construct something, rather than just transform something in the more generic sense of walking. Resilience might imply anything from outlasting its competitors by microseconds to persisting for thousands or millions of years. Capable of enabling its own preservation might imply replication by replicators. The thing you need to eliminate may or may not have to be the proximate cause. These arguable points exist because the theory is vague in some parts and thus open to interpretation. Now, let’s talk about that for a second. When a theory has vague points that are open to interpretation, let’s refer to that as the theory’s degrees of freedom, as a shorthand. It is not abnormal for theories, at least initially, to have degrees of freedom, thereby leaving some parts of the theory open to interpretation. In fact, it’s probably impossible to avoid at first. Perhaps all theories have some degrees of freedom where things are open to interpretation.
[00:58:59] Blue: So the fact that Deutsch’s constructor theory of knowledge has degrees of freedom like this is not in and of itself out of the ordinary. Okay. So the constructor theory of knowledge currently has enough degrees of freedom to eliminate the walking robot algorithm as knowledge. And I’m admitting upfront that that’s the case. But in some sense, this is the wrong question. The question is: can you explicitly state why the walking robot algorithm is eliminated, then hold the same explicit criteria constant, such that everything you did want to include as knowledge still counts as knowledge? Okay. Henry, that is my real question to you. That is what I am trying to ask you to do in our various discussions. And this is my attempt to clarify my true meaning to you using tons of examples and such. Okay. I am not claiming it’s impossible, by the way. I’m asking you to take this as a valid challenge and approach it with the same relish that a true critical rationalist would. This is a problem worthy of being solved. Now, let’s talk about how critical rationalism works. Okay. Think of me as making the following conjecture: Deutsch’s theory of knowledge is flawed because there is no set of criteria you can come up with that eliminates the walking robot algorithm but, if held constant, wouldn’t also eliminate many examples within the two sources that we all know are knowledge. Therefore, his theory is currently flawed. Okay. That is my conjecture, by the way. And I am stating it as a conjecture. Now, you might say, wait a minute, didn’t you just say you’re not saying it’s impossible? Well, that’s a conjecture. So I’m not saying it’s impossible, but it is a conjecture.
[01:00:43] Blue: It may be wrong, just like all conjectures, but it is a good conjecture. And here’s why: because it is very easy to see how to refute it. You simply have to produce a set of explicit criteria that eliminates the walking robot without eliminating any examples of knowledge created by humans and/or biological evolution. It does no good to say, well, prove to me there are no such criteria, because that would be impossible for me to do. I can’t try out an infinity of possible criteria. That is what justificationism is. When you try to turn this around on me and say, well, prove to me there are no such criteria, you are being a justificationist. Okay. For Deutsch’s theory to be a correct theory, it must formulate the criteria of what knowledge is explicitly enough that I know what a counterexample would look like, and yet I can’t find even a single counterexample. That is what we’re trying to do when we’re doing critical rationalism. If you keep forever arbitrarily changing the criteria on the fly, or choose to apply the criteria inconsistently to get back to the two sources, then Deutsch’s theory must be declared a bad explanation. And indeed, that is what Deutsch says a bad explanation is. If you can’t come up with a single consistent set of criteria, then Deutsch’s theory must, at least in its current form, given the current criteria, be declared refuted, and a new modified theory must be hypothesized. Those are your options. The moment I bring up that conjecture, which I just did, you now only have, if you’re a critical rationalist, a handful of choices.
[01:02:25] Blue: You must either produce the consistent set of criteria that eliminates the walking robot algorithm without eliminating everything in the two sources that you want to count as knowledge, or you must retreat to a bad explanation and forever keep changing things on the fly, or you must declare it refuted. Those are your three options. So I offer this as an intriguing problem for the defenders of Deutsch’s constructor theory of knowledge to solve. Here is the challenge in a nutshell. Come up with a set of criteria that is explicit enough that it clearly eliminates the walking robot, but at the same time includes everything you want included, so that I can’t use it as a counterexample against you. A critical rationalist should relish this well-formed problem in their theory. My guess is that it’s impossible so long as you insist on including the two sources hypothesis. I think that the current theory is wrong, and it’s wrong because it includes the two sources hypothesis, but I may be wrong, and it should be easy to refute me if I’m wrong by simply coming up with a single explicit and consistently applied set of criteria. Note how this compares to Shannon information theory. The constructor theory of knowledge includes a number of vague and implicit criteria. Defenders of the theory have to take measures to protect the theory from criticism, like refusing to write down criteria even when they admit they apply, in order to hide problems. It isn’t actually defined in terms of what knowledge physically is, since it requires an implicit criterion about being replicators. What does or doesn’t count as knowledge often seems to come down to a gut feel of what should count.
[01:04:08] Blue: The criteria get applied inconsistently and are changed on the fly. Those are all very real problems that every single defender of Deutsch’s constructor theory of knowledge has resorted to with me up to this point, without exception. There is nothing equivalent to these problems in Shannon information theory. So, getting back to my original discussion that led to all this: this is why I don’t consider us as yet having a theory of knowledge comparable to information theory. Having said that, this doesn’t mean I think Deutsch’s constructor theory is garbage or bad. I think it can be reformed. In fact, I even have my own suggestion of how to do that. Drop the two sources hypothesis. Drop it entirely; give up on it. I think that’s the part of the theory that is wrong. I think once you actually drop the two sources hypothesis, the theory will be error-corrected. And I think it will, at that point, also be equivalent to Campbell and Popper’s evolutionary epistemology, except it will no longer be defined in terms of how the knowledge got created. It will be defined solely in terms of what it physically is. Therefore, I think it’s actually a step forward compared to Campbell and Popper’s theory. But I think right now that’s not possible, because the theory as it stands includes the two sources hypothesis, and that part of the theory is wrong. That’s my opinion. And I’ve offered a challenge. I’ve offered a conjecture, and I’ve offered how to refute that conjecture. And really, I’m open to waiting to see if anybody can produce the criteria. Until they do, tentatively, Deutsch’s constructor theory of knowledge, particularly the two sources part of it, is now tentatively refuted. And that is how critical rationalism works.
[01:06:01] Red: So you think that Deutsch himself considers the two sources hypothesis very important to his theory of knowledge? Or is it just kind of a side thing?

[01:06:01] Blue: I think he does. He brings it up so much and so often. And it ties into a number of his other theories.
[01:06:22] Blue: So for example, and this is something that needs its own podcast, so I can only briefly go into it. Why is it, do you think, that Deutsch places so much emphasis on the idea that all of an animal’s knowledge is in its genes, where that’s not true for humans? It’s the basis for the conjecture that animals have no feelings. And there’s a series of connections we have to make to get there.
[01:07:00] Red: But
[01:07:08] Blue: it’s based on what I consider to be the false part of the theory to begin with. So I have no need to ever go there, right? Interesting.
[01:07:08] Red: So that kind of, the animals-have-no-feelings thing, kind of comes from the two sources hypothesis?

[01:07:13] Blue: It does.
[01:07:15] Red: Okay. That makes some sense. I mean, that’s definitely one of the areas of Deutsch’s ideas that I think doesn’t sit well with a lot of people, myself included. I don’t know.
[01:07:31] Blue: You know, there are a lot of interesting things that come out of what I’ve just explained. So I really hope that people in the community will take the criticisms I’m offering here in the right spirit, right? I really feel like I’ve been treated, sometimes, even with hostility by members of the community, not overall, but just when I raise these issues, there’s kind of a knee-jerk reaction: oh no, it’s obvious, this isn’t a problem, and how can you not see that? I’ve literally had at least one person treat me like I’m stupid, like I can’t see that there’s just no problem here, right? And I think the reason why is because the two sources hypothesis is tied to a whole bunch of pet theories that really are a deep part of the community’s overall set of memes, and those are going to have to die the moment it is admitted that the two sources hypothesis is wrong. And the two sources hypothesis honestly is wrong, right? Or at least that’s my conjecture. And until someone can offer an alternative set of criteria that’s consistent, tentatively, I am right as a critical rationalist to consider the two sources hypothesis refuted. That’s just how critical rationalism works. Okay. So for now, I’m embracing this tentative refutation. And I’m embracing the idea that the two sources hypothesis is wrong. And I’ll admit that I might be wrong. And I’ve made a clear path on how to refute me. It’s a super clear path, a really easy path, right? If I’m wrong, just come up with the explicit criteria and apply them consistently, such that I don’t have counterexamples I can use on you. That’s it. That’s all you need to do. And I think that’s impossible.
[01:09:18] Blue: I think that's why no one's been able to do it. And I think the reason it's impossible is that the two sources are not the only places that create knowledge. I think knowledge is in fact ubiquitously created by many, many selection and variation algorithms that exist in nature, just like Donald Campbell says. And I think that's the actual truth, so I don't think there's any way around it, because it happens to be the truth. Okay, I'm now intermixing my opinions, why I hold my opinions, everything together. However, let me just say this: I really am open to the possibility that a set of criteria could be found. And I actually would like it if somebody could come up with explicit criteria that eliminate the walking robot without eliminating the things they don't want eliminated. It would be intriguing if someone could actually do that, and I would embrace it very quickly. I'm not able to. Like, I've tried my best and I absolutely cannot come up with a consistent set of criteria that ultimately holds us to the two sources hypothesis.
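[Transcript note] The "selection and variation" scheme Bruce is appealing to here, Donald Campbell's blind variation plus selective retention, can be sketched in a few lines of Python. This is purely an illustrative toy, not anything from the podcast: the bit-string target, mutation rate, and generation count are arbitrary choices. The point it shows is that adapted information accumulates even though every individual variation is blind.

```python
import random

random.seed(0)  # deterministic run, purely for illustration

TARGET = [1] * 20  # stand-in "environment" the information must adapt to

def fitness(genome):
    # More bits matching the target = better adapted.
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.05):
    # Blind variation: each bit flips with a small probability,
    # with no knowledge of which flips would help.
    return [1 - g if random.random() < rate else g for g in genome]

genome = [random.randint(0, 1) for _ in range(20)]
initial = fitness(genome)

for _ in range(500):
    candidate = mutate(genome)
    # Selective retention: keep the variant only if it is at least as fit.
    if fitness(candidate) >= fitness(genome):
        genome = candidate

print(initial, "->", fitness(genome))  # fitness never decreases
```

Because the retention step never accepts a worse variant, the adapted information ratchets upward, which is the sense in which the process creates knowledge without any of it being "put in" by a designer.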
[01:10:27] Red: Well, this has been good. Are you winding down here?
[01:10:33] Blue: I'm basically done. Do you have any questions?
[01:10:37] Red: Well, only one question. Now, when we're done with this knowledge series, will I receive some kind of a credential or a certificate, maybe? Or is this more unschooling, curiosity-driven?
[01:10:55] Blue: Let me kind of indirectly answer your question. The reason I put so much emphasis on this is that I've brought it up multiple times throughout the podcast and made a big deal about it. I made a big deal about it when we talked about animal intelligence. I made a big deal about it when we talked about artificial intelligence. Okay, let me explain why. My interest is in AGI. I want to understand how to create an AGI, and getting the right theory of knowledge is a precursor to that. And I feel like every theory of knowledge I've seen has problems, sometimes big problems. So it's not that you're going to be credentialed, Peter. It's that you're going to help me figure this out so that I can make progress on this problem. And I don't know the answers myself. Like, I obviously don't know how to create an AGI, right? But just stop and consider the possibility that I'm right for a second. What are the implications? So first of all, it means, and I kind of already said this, that Campbell and Popper's theory of knowledge, their evolutionary epistemology, has something correct that Deutsch's doesn't, namely that the two sources hypothesis is wrong. But Deutsch's has something right that Campbell and Popper's doesn't: it tries to define knowledge in terms of what it physically is instead of how it got created. We've already made progress on the problem just by embracing that viewpoint. We now know that there's a version of Deutsch's theory, dropping the two sources hypothesis, that encompasses Campbell's and Popper's theory of knowledge and improves on it. So we're already a step forward.
[01:12:50] Blue: Now, do I think that's the correct theory? No, I don't. In my artificial intelligence podcast, I talked about examples from machine learning that didn't match the Campbell and Popper version, but also don't match the Deutsch version. Things like linear regression, where they seem to create an inductive achievement, which I understand to be the same as knowledge. There's maybe an open question whether those are the same thing or not; I'll do a podcast on that someday. And yet they don't use any sort of variation and selection. So I actually think that Deutsch's theory, if you drop the two sources hypothesis, is an improvement on Campbell's theory, which is Popper's theory, but I think it's still wrong. But I feel like we've made some progress here, right? Like, we're actually moving towards an understanding. And the other thing is, I believe that when Deutsch tacks the two sources hypothesis onto his theory, he's acknowledging something important, namely the problem of open-endedness, but he doesn't know to call it the problem of open-endedness. And he doesn't know that the problem of open-endedness isn't about what knowledge physically is; it's about the nature of the search algorithm. I figured that out by trying to work out why he kept attaching the two sources hypothesis while being unable to make it consistent with the rest of the theory. So I know, assuming all my conjectures are correct, and so far they're not refuted, that the problem of open-endedness must be what's missing from the theory of knowledge.
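[Transcript note] The linear regression counterexample is easy to make concrete. An ordinary least-squares fit produces a predictive model, arguably an inductive achievement, by direct algebra: there is no variation step and no selection step anywhere in the procedure. The data below are made up for illustration.

```python
# Ordinary least squares for y ≈ a*x + b, computed in closed form.
# No variation, no selection: the coefficients fall out of algebra directly.

xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 3.9, 6.2, 7.8, 10.1]  # roughly y = 2x, with a little noise

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# Standard closed-form solution: slope = covariance / variance.
a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
    sum((x - mean_x) ** 2 for x in xs)
b = mean_y - a * mean_x

print(round(a, 2), round(b, 2))  # → 1.99 0.05
```

The fitted line generalizes to unseen x values, so by any behavioral test it embodies something knowledge-like, yet it was derived in a single deterministic pass, which is exactly the tension with the Campbell and Popper account being described here.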
[01:14:34] Blue: And consider, even from two podcasts ago, when we did the podcast on the problem of open-endedness: one of the things I quoted Kenneth Stanley as saying is that the fact that we see evolution in terms of variation and selection is in and of itself what's keeping us from solving the problem of open-endedness, because evolution isn't actually about variation and selection. It's something else. Variation and selection is a verisimilitude: it's similar to whatever the real theory of knowledge is, but it needs to be defined in some other way. Okay, think about all the places where I've now made progress toward figuring out AGI by eliminating dead theories. I've even narrowed the search down, constrained considerably what we need to be looking for. It's still way huge, way bigger than I'm going to get in my lifetime, right? But these are genuine improvements in trying to understand what AGI is. And all of these came from just taking the existing theories, criticizing them, taking the criticism seriously, and then moving from one theory to the next, slowly trying to find the best theory. Okay, there's something genuinely good going on here, even if it's just tiny progress, that I wish more people knew about and could help me with, so there were more minds wrapped around it and we could try to constrain this problem further until we actually figure out what AGI is.
[01:16:07] Red: Well, it seems like such a neat way to put it, I guess: there are two sources of knowledge in the world, genes and human-created memes. But the way you put it, it sounds like an assumption that's really worth criticizing and investigating. And on this Thanksgiving weekend, I'm grateful for you, Bruce. This has been very, very educational. So thank you.
[01:16:37] Blue: All right. Thank you very much, Peter. The theory of anything podcast could use your help. We have a small but loyal audience, and we'd like to get the word out about the podcast so others can enjoy it as well. To the best of our knowledge, we're the only podcast that covers all four strands of David Deutsch's philosophy, as well as other interesting subjects. If you're enjoying this podcast, please give us a five star rating on Apple Podcasts. This can usually be done right inside your podcast player, or you can Google "the theory of anything podcast Apple" or something like that. Some players have their own rating system, and giving us a five star rating on any rating system would be helpful. If you enjoy a particular episode, please consider tweeting about us or linking to us on Facebook or other social media to help get the word out. If you are interested in financially supporting the podcast, we have two ways to do that. The first is via our podcast host site, Anchor. Just go to anchor.fm slash four dash strands, f o u r dash s t r a n d s. There's a support button available that allows you to make recurring donations. If you want to make a one time donation, go to our blog, which is four strands.org. There is a donation button there that uses PayPal. Thank you.