Episode 25: Universal Darwinism - Does Artificial Intelligence Create Knowledge?
- Links to this episode: Spotify / Apple Podcasts
- This transcript was generated with AI using PodcastTranscriptor.
- Unofficial AI-generated transcripts. These may contain mistakes. Please check against the actual podcast.
- Speakers are denoted as color names.
Transcript
[00:00:00] Blue: The Theory of Anything Podcast could use your help. We have a small but loyal audience, and we’d like to get the word out about the podcast to others so others can enjoy it as well. To the best of our knowledge, we’re the only podcast that covers all four strands of David Deutsch’s philosophy as well as other interesting subjects. If you’re enjoying this podcast, please give us a five-star rating on Apple Podcasts. This can usually be done right inside your podcast player, or you can Google “The Theory of Anything Podcast Apple” or something like that. Some players have their own rating system, and giving us a five-star rating on any rating system would be helpful. If you enjoy a particular episode, please consider tweeting about us or linking to us on Facebook or other social media to help get the word out. If you are interested in financially supporting the podcast, we have two ways to do that. The first is via our podcast host site, Anchor. Just go to anchor.fm/fourstrands, F-O-U-R-S-T-R-A-N-D-S. There’s a support button available that allows you to do recurring donations. If you want to make a one-time donation, go to our blog, which is fourstrands.org. There is a donation button there that uses PayPal. Thank you. Welcome back to The Theory of Anything Podcast. Hey, Camille, how’s it going? Going great, Bruce. How about you? I’m doing fairly well. I’ve been on a week of vacation and it feels great. I’ve been playing video games, things like that. That’s been nice. It’s over now.
[00:01:43] Blue: I’ve got to go back to work on Monday and we’re doing a giant deploy and it’s the thing I’ve been working towards for months. This is going to be an exciting return to work. Awesome.
[00:01:55] Red: Exciting with quotations around exciting.
[00:01:57] Blue: Exactly. Scare quotes intentional.
[00:02:01] Red: How about you? Well, I just quit my job and I’m starting a new job on Monday, so that is also an exciting thing.
[00:02:10] Blue: That’s probably more exciting than with scare quotes.
[00:02:14] Red: Yeah, a little bit, but there’s always a truth about new jobs that no matter what conversations you’ve had with a company during an interview, you don’t actually know what the job is going to be like. Yeah,
[00:02:26] Blue: that is absolutely true. All right. Well, where we left off last time is we talked about how artificial intelligence has an overlap with another of the four strands, really with all four of the four strands, and that is through poppers epistemology. So artificial intelligence at least attempts to to create algorithms that create knowledge. And we’re going to talk today about the debate over whether they do or don’t create knowledge, but it’s attempting to study it in any case, even if you think it’s unsuccessful at it. Furthermore, it follows the universal Darwin algorithm. I’m going to explain what that means in this episode today. At least it follows my understanding of the universal Darwinism algorithm. I will discuss some of the other possible understandings that differ from mine and why I don’t feel like they really work. And ultimately what it comes down to is that if you are following a process of variation in selection, and you’re doing that with some means of being able to pick to select the better variants, you will of necessity create knowledge because you will create improving solutions to problems by doing that. That is Darwinism in a nutshell. And it is really poppers epistemology. It’s kind of a generalization of poppers epistemology, but it’s also very similar to biological evolution. That’s where it came from, was inspired by. And so what we see here then is that this really touches all four of the, well at least three of the four strands. So poppers epistemology, biological evolution, and computational theory. It also touches the fourth. There is various Darwinian ways of looking at quantum physics. And we won’t get into that, but it intersects, universal Darwinism intersects with various quantum physics things as well.
[00:04:33] Blue: So let me just explain to you what the universal Darwinism algorithm is as I understand it. Yeah, so we’ll start with that. I originally tried to build up to this, and then I just realized I should just explain it, because it’s so simple. It’s so simple that it’s surprising there’s so much controversy over it. The simple version is two steps: you try out variants of potential solutions to a problem, and you select the variants that best solve the problem. Or to summarize, the simple version is variation and selection. That’s it. That’s the universal Darwinism algorithm. If you’re doing those two things, you are following the universal Darwinism algorithm and you are creating knowledge, where here I’m defining knowledge as simply improvements to solutions to problems.
[00:05:25] Red: Okay, I accept that as a definition. Okay.
[00:05:28] Blue: And that may not be the only definition of knowledge. The word knowledge is a very vague term. There could be, and we’ll discuss this, there could be other ways to define knowledge. I don’t care. It’s not that that would be wrong. I’m not saying that other definitions are wrong. I’m just saying for our purposes, I’m defining it this way. That is very acceptable. Okay. Now there’s a detailed version of this that I feel removes some of the confusion that exists around this algorithm. So here’s the detailed version. I put it into five steps instead of two. One, you start with a problem. Two, you conjecture solutions to the problem; that’s the creation of the variants. Three, you measure how well each proposed solution works. Four, you retain the better solutions. And then finally five, which is usually just implied and doesn’t usually get included: you go back to step two and repeat until the problem is sufficiently solved. Okay. So if you look carefully, the first two steps of the detailed version are exactly the same as the first step of the simple version, and the third and fourth steps of the detailed version are exactly the same as the second step of the simple version. So I’m arguing the simple version and the detailed version are identical, that when I’m talking about the simple version, I really mean exactly what’s in the detailed version. Yeah,
[00:06:45] Red: I agree with that. I agree with that.
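(For readers following along in code: below is a minimal Python sketch of the five-step loop as Blue describes it. The toy problem, the score function, and the way variants are generated are illustrative assumptions, not anything specified in the episode.)

```python
import random

def variation_and_selection(score, make_variant, seed, steps=1000):
    """Blue's detailed version of the universal Darwinism algorithm."""
    best = seed                                # 1. start with a problem and an initial guess
    best_score = score(best)
    for _ in range(steps):                     # 5. repeat until sufficiently solved
        candidate = make_variant(best)         # 2. conjecture a variant solution
        candidate_score = score(candidate)     # 3. measure how well it solves the problem
        if candidate_score > best_score:       # 4. retain the better solution
            best, best_score = candidate, candidate_score
    return best

# Toy problem: find the x that maximizes -(x - 3)^2, i.e. get close to 3.
result = variation_and_selection(
    score=lambda x: -(x - 3.0) ** 2,
    make_variant=lambda x: x + random.uniform(-0.5, 0.5),
    seed=0.0,
)
print(round(result, 2))  # ends up near 3.0
```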
[00:06:47] Blue: Also, it should be noted that one of the reasons why is that when you break it out into larger steps like this, sometimes it commingles things. So for example, in biological evolution, measuring how good the proposed solution is, is precisely the same as retaining the better solution. It’s just whichever organisms actually survive. That is the measure of which ones were better at replicating their genes, right? And so I break it out into the more detailed version. You’ll see why in the future here. It’s so that certain misconceptions go away, but it should be noted that a lot of times these aren’t really separate steps. A lot of times it just comes down to variation and selection. Sure. Sure. Okay. In fact, you often don’t even, quote, “start with a problem.” You have a problem that it happens to solve. There may have been no intent involved; biological evolution isn’t intentionally trying to solve problems. It just happens to solve problems.
[00:07:44] Red: Right, which is why you kind of don’t have a step one conceptually at the biological level, because it’s not even starting out with the problem. It’s just change. Well, interesting. Okay,
[00:07:58] Blue: I’m with you. You’re seeing exactly the problem with trying to break it into the detailed version. So the simple version is in some ways better, but the detailed version avoids confusion. So the result of this is that you will end up with improving solutions to problems. Now just take a look at this algorithm. Can you see that that has to
[00:08:17] Red: be the result? You should probably read the algorithm for those who are only verbally following along. I mean, you
[00:08:25] Blue: start with a problem. You conjecture solutions to the problem. You measure how good the proposed solutions are. You retain the better solution or solutions. Sometimes it’s one solution, sometimes it’s multiple ones. And then you just repeat. Can you see that that must of necessity end up with improving solutions to the problem?
[00:08:43] Red: It would have to. I can’t see any way it wouldn’t, unless you weren’t very good at step three or step four. Which actually, like if you compare this to anything, and I think this is going to come out on the knowledge side, humans aren’t always very good at three and four. And sometimes even biology isn’t very good at three and four. Okay,
[00:09:07] Blue: so let’s talk about that, because that is a very interesting insight. So it is possible this algorithm would fail, particularly on step three, if your measure of how good the proposed solution was was wrong, if it was at odds with reality. Now, we’re going to see in machine learning that that absolutely does happen. There’s something, in fact, called overfitting, which is specifically a case where our measure of how good the proposed solution is, is at odds with what we really want in reality. So let’s say you had, like, a face recognition algorithm that you’re trying to come up with using machine learning. And you have some sort of measure as to how well it’s doing. And then when you actually run the algorithm, it doesn’t work in real life, because it turns out the measure you used didn’t correlate well with what you actually wanted. Okay, that would definitely cause the algorithm to fail. However, I would point something out. Step four, if taken literally, actually eliminates that possibility. In other words, I’m saying you’re not really following this algorithm unless you actually are taking the better solutions. If you’re not taking the better solutions, then by definition, you’re actually not following this algorithm. Can you see that, though? I can absolutely see that that is true. Okay, that’s why I made this detailed version. I made it so that it eliminates possibilities like that, that might confuse people.
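(A toy illustration of the overfitting failure Blue describes, where the measure in step three is at odds with what you actually want. The setup here, hill-climbing a too-flexible polynomial onto a handful of noisy points, is an illustrative assumption standing in for the face-recognition example.)

```python
import numpy as np

rng = np.random.default_rng(0)

# The relationship we actually care about, and a few noisy training samples.
true_f = np.sin
train_x = rng.uniform(0, 3, size=6)
train_y = true_f(train_x) + rng.normal(0, 0.1, size=6)

def proxy_score(coeffs):
    """Step three's measure: error on the six training points only."""
    return -np.mean((np.polyval(coeffs, train_x) - train_y) ** 2)

def true_score(coeffs):
    """What we actually wanted: error against the real function."""
    grid = np.linspace(0, 3, 100)
    return -np.mean((np.polyval(coeffs, grid) - true_f(grid)) ** 2)

# Variation and selection, but selecting on the flawed measure.
best = np.zeros(10)  # a degree-9 polynomial: flexible enough to overfit
for _ in range(20000):
    candidate = best + rng.normal(0, 0.05, size=10)  # conjecture a variant
    if proxy_score(candidate) > proxy_score(best):   # retain per the proxy
        best = candidate

print("training error:", -proxy_score(best))   # driven very low
print("real-world error:", -true_score(best))  # typically much worse
```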
[00:10:35] Unknown: Okay,
[00:10:35] Blue: so you’re right that this algorithm can at some level fail, but only by not actually retaining the better solutions. If it fails to do that, then clearly it’s not going to work. Okay. Now, there’s an explanation attached to this algorithm. And I put it in bold on the slide because this is incredibly important. The explanation is: the act of comparing variants and discarding the worst ones while keeping the better ones must of necessity result in improvements. Okay, this is exactly what I just got you to agree to. This explanation is key. There doesn’t seem to be any way to modify this algorithm and not spoil that explanation. So we’re going to talk about possible modifications to this algorithm. And the key point I’m going to be making is that every single one of them spoils this explanation, and therefore they aren’t good alternatives. Okay. Okay. The other thing is, and I already said this, I’m defining knowledge creation for our purposes, and only for our purposes today, as including something like finding improving solutions to a problem. Okay. I don’t know for sure if that is exactly how you define knowledge or not, or if it has only one definition or multiple definitions. I don’t care. Popper is very big on the idea that we shouldn’t try to define our terms too carefully. He feels that that actually leads to a lack of ability to create knowledge. I couldn’t agree with that more. And so he felt that we should never take our definitions too seriously. You need to put something out there so people know what you’re talking about. But I’m not trying to make a complex technical definition of knowledge or knowledge creation.
[00:12:20] Blue: I’m kind of going with a fairly intuitive understanding of that. I’m really only asserting that this algorithm is at least one way to create knowledge. It might be the only way. And we’re going to consider that. But we’re not insisting on that, though. We’re going to say it’s at least a way that knowledge, as I’m defining it, is created. Furthermore, I’m going to make an assertion. This one’s where some people might challenge me. I assert that when Campbell and Popper use the term knowledge or knowledge creation, this is what they mean. And I don’t think they mean anything else but this. And so what I’m really going to be claiming is that this is the Campbell slash Popper universal Darwinism algorithm, and that they didn’t have something else in mind. And this is where I get a lot of arguments. But this is my assertion, and I’m going to try to show why I believe that to be the case. Okay. All right. So let’s talk about criticisms of my proposed summary. There are two types of criticisms that I receive over this. One is: you misread Campbell; he meant something else. And then there is: Campbell and Popper are wrong; knowledge creation is defined in some other way. Okay. Now, let me just say that the first of these criticisms isn’t really a very meaningful criticism. And let me explain why. Okay. In a lot of ways it doesn’t matter if I misread Campbell or not. Let’s just say maybe I did. Okay. I read Campbell. I thought he was talking about the algorithm that I just showed you. And I got the idea from reading Campbell. And I’m not taking credit for it. Okay.
[00:13:56] Blue: Now I’ve had people tell me, no, you’re misreading Campbell; Campbell actually meant something else. Okay. Great. What did he mean then? Let’s pull that algorithm out. Let’s compare it to the one that I thought he was describing. And I think what we’re going to find is that there just isn’t an alternative. Okay. Campbell either meant what I just said he did, or he was wrong. And there doesn’t seem to be any middle ground that we can find here. Right. And I’ll strengthen that case as I go on. But so in a lot of ways I’m just more interested in: what’s the truth? What is actually the universal Darwinism algorithm? What is the minimum case for what is evolution, and knowledge creation through evolution? And if Campbell meant something else, then that’s fine. His version must be compared to mine. And we have to determine which of the two is the correct one. There are no authorities here. It doesn’t matter if I created this myself or Campbell created this and I got it from Campbell. That’s what I think happened. But maybe I created it. Maybe it’s my algorithm. That’s fine. It still has to be considered on its own merits. Agreed. Okay. The other one is harder, because it gets into the question of maybe we’re using the same word, knowledge, to mean different things. But let’s talk about each of these. So the biggest critic of this, possible critic of this, is actually David Deutsch himself. In The Beginning of Infinity, he says that evolutionary algorithms certainly constitute evolution in the sense of alternating variation and selection.
[00:15:32] Blue: But is it evolution in the more important sense of the creation of knowledge by variation and selection? This will be achieved one day, but I doubt that it has been yet. Okay. So interesting. That seems like a really clear statement. The universal Darwinism algorithm is so simple, and so many things count as universal Darwinism and count as knowledge creation under the algorithm I just gave, that what Deutsch is saying seems clearly at odds with what I’m saying. Okay. However, I’ve gone out and I’ve asked other people who are fans of David Deutsch and who are Popperians about this. And what I found is that a lot of people think that the plain, more obvious reading of David Deutsch isn’t really what he intended. And there’s even some evidence they might be right. So, like, on page 78 of The Beginning of Infinity, he defines knowledge, and he specifically says it’s information that, once physically embodied, tends to cause itself to remain so. Now I’m unclear from that definition if it’s the same as what I’m proposing or if it’s different from what I’m proposing. And as such, I don’t know if he’s actually talking about knowledge in the same sense that I’m talking about it or not. Okay. We may be using a single word for two definitions that have different purposes, in which case he may not actually be disagreeing with me. Okay. And sure, without asking him his intent… That’s right. It’s hard to make assumptions. So now David Deutsch is working on a theory called constructor theory. And part of what he’s working on is a theory of knowledge under constructor theory.
[00:17:09] Blue: It’s not done at this point. I can’t really go look at his theory and assess it and determine, oh, he’s talking about something different than me, or he’s talking about the same thing as me and I’m wrong, or he’s talking about the same thing as me and he’s wrong. There’s just nothing for me to assess at this point. So we may have to wait for him to actually come out with his full theory of knowledge before we can really determine for sure what he meant. Okay. However, let me just say that it is very common to read David Deutsch in what seems like the straightforward way, which is that he’s saying no existing algorithm creates knowledge today and we have yet to discover how to do this. And I have seen numerous people take that claim of David Deutsch’s as true, based on the arguments that he makes in the book to that effect, and claim, oh, when you run an evolutionary algorithm, no knowledge is created. Actually, it all came from the programmer, and the algorithm played no role in any of the knowledge creation. And then they point to David Deutsch’s argument here. So at a minimum, even if this wasn’t what David Deutsch intended, it escaped into the wild as its own little theory, and therefore I’m going to have to address it at this point. Does that make sense?
[00:18:23] Red: Yeah. Of course, you have to address it because it’s become popular knowledge.
[00:18:27] Blue: Yeah. For the sake of not offending anybody, I’m going to refer to this as the pseudo-Deutsch theory of knowledge, meaning I don’t really claim for sure that this is what David Deutsch was trying to say.
[00:18:40] Red: Okay.
[00:18:41] Blue: We’re going to address it as a theory and we’re going to show that it’s a bad theory, but I’m not really claiming the theory came from David Deutsch, except maybe by accident. Okay.
[00:18:53] Red: Only that it’s an extrapolation of people’s assumptions about what he said.
[00:18:58] Blue: That’s correct. Okay. Now, David Deutsch goes on to say, the reason I doubt evolutionary algorithms create knowledge is that there is a much more obvious explanation for their abilities, namely the creativity of the programmer. So in context, what he’s talking about is using an evolutionary algorithm to create a walking robot. Okay. And he points out that as the programmer is creating this walking robot using the evolutionary algorithm, they may come up with some sort of language that shows how to move the limbs or something like that, or that has knowledge of the laws of geometry. So he goes on to say, even if you yourself are the programmer, you are in no position to judge whether your algorithm created knowledge. For one thing, some of the knowledge that was packed into the hypothetical robot-controlling language will have reach, because it encoded general truths about the laws of geometry, mechanics, and so on. Okay. So this is, in a nutshell, his argument for why he has doubts about any existing evolutionary algorithm creating knowledge. So this is the first big criticism I’m going to address. Now, let’s talk about the second criticism, because this one causes a lot of problems too. So Dawkins in particular, but also Dennett and Blackmore, and people before them too, have defined the evolutionary algorithm as having three steps. So I’m taking this from John Campbell, not to be confused with Donald Campbell. Donald Campbell is the one we’ve really been talking about. This is a different Campbell.
[00:20:32] Red: Good.
[00:20:32] Blue: And notice this is from 2010, so he’s alive today, where Donald Campbell isn’t. So he summarizes the Dawkins, Dennett, Blackmore view as: replication of a system, inheritance of some characteristics that have random variation among the offspring, and then differential survival of the offspring according to which variable characteristics they possess. Okay. So here’s the real difference. It’s very similar to what I’m proposing, but it’s throwing in this concept of replicators. Okay. So it’s saying evolution requires replicators. Whereas the version I’m suggesting doesn’t deny the possibility of replicators. That might be a convenient way to create variation and selection, but it doesn’t require it. Okay. So there’s a difference there that has to be addressed. And some people, a lot of people actually, including John Campbell in this paper that I’m quoting from, or paraphrasing, outright say this is the evolutionary algorithm, and if you’re not doing these three things, which includes replicators, then you’re not actually creating knowledge through evolution. So this is another possible criticism that I often hear against the algorithm that I am trying to propose here. Okay. Now, to make matters worse, even though I got this from Donald Campbell, he’s a very opaque writer. He says and does so many things that just lead to confusion. And Popper’s got some of the same problems, where there are points where he says things that are just so misleading, where he means one thing and people definitely read it a different way. This happens with Campbell all the time. And I think a lot of the confusion around this is, in fact, Campbell’s fault, in that he did not write in a plain way. So for example, here’s a quote from Campbell: three conditions are necessary.
[00:22:28] Blue: So these are necessary for his proposed evolutionary algorithm: a mechanism for introducing variation, a consistent selection process, and a mechanism for preserving and reproducing the selected variants. So we have three steps in the algorithm: try out variants, variations of potential solutions to a problem, exactly like I said; selection of the variants that solve the problem; and then he throws on there, reproduce the selected variants. Okay. So he’s trying to include replicators, at least in this quote he is. Okay. Do you see that? But then there’s the question, why do you need that third step? Yeah,
[00:23:08] Red: What possible benefit does that add? You’re just saying that once something is successful, you’re going to do it again.
[00:23:17] Blue: Yeah. I think what he means is that two and three are really the same thing. Right. I think that he’s trying to lay it out the way people would have thought of it at the time. Interesting. Okay. But he then goes on and actually summarizes these three steps as blind variation and selective retention, which is two steps, and really only the first two steps. So basically what happens is, he never clearly says that third step is unnecessary, which would have been really helpful if he had said that, but he ignores it for the rest of his paper and pretends like it doesn’t exist. He must not have anticipated the trouble with it. No, no. I don’t think so. Now, one might argue here that I’m over-reading him. I’m going to now prove that’s impossible. Okay. Here is one of Donald Campbell’s examples. This is the simplest example that he offers. So you’ve got a paramecium and it’s blocked. This is an example right from his paper. The paramecium does have the ability to detect if it’s able to move or not. So it attempts to move in one direction and it finds it can’t move. So it tries moving in a different direction and it finds it can’t move. So it tries moving in a different direction, still can’t move. Finally, it finds a direction it can move in, and it then retains that direction, because that’s the one it’s able to move by, using it until it gets blocked again, and then the algorithm will repeat.
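(Campbell’s paramecium example is simple enough to program directly, a point Blue returns to later in the episode. Here is a minimal Python sketch; the grid coordinates, the obstacle positions, and the four candidate directions are illustrative assumptions.)

```python
DIRECTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]  # up, down, right, left
BLOCKED = {(1, 0), (0, 1), (1, 1)}               # illustrative obstacles

def paramecium_step(position, heading):
    """Keep moving in the retained direction; when blocked, try variants
    (the other directions) until one works, then retain that one. This is
    variation and selection, with no separate 'reproduce' step."""
    candidates = [heading] + [d for d in DIRECTIONS if d != heading]
    for d in candidates:                          # try variants in turn
        new_pos = (position[0] + d[0], position[1] + d[1])
        if new_pos not in BLOCKED:                # select the variant that moves
            return new_pos, d                     # retain it until blocked again
    return position, heading                      # fully boxed in

pos, heading = (0, 0), (1, 0)
for _ in range(5):
    pos, heading = paramecium_step(pos, heading)
    print(pos, heading)
```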
[00:24:42] Unknown: Okay.
[00:24:43] Blue: Which step is this? Is that step two, select the best variant, or is it three, reproduce the selected variants? Uh-huh. Okay. They’re a single step, right? Sure. Selecting the best variant implies reproducing the selected variant. Okay. If that’s what you understand, then my algorithm actually includes replicators. It means that whatever the successful variant is, that’s the one you’re going with. But that isn’t what people think replicators means. That’s why I think this has become confusing. Okay. It’s a lot easier. By the way, Campbell states this as a fundamental tendency for the successful to replace the unsuccessful. That’s a much more clear
[00:25:21] Red: statement.
[00:25:22] Blue: Interesting.
[00:25:23] Red: Yeah.
[00:25:23] Blue: So I am going to propose that we just drop the third and that we just go with two. Okay. Just basically variation and selection. Okay. Because I don’t see how the third one is anything but a form of the second one. All right. So now, with this paramecium example in mind, let’s go back to the quote from David Deutsch. He said, evolutionary algorithms certainly constitute evolution in the sense of alternating variation and selection. But is it evolution in the more important sense of the creation of knowledge by variation and selection? This will be achieved one day, but I doubt that it has been yet. That clearly must be at odds with the paramecium example that Campbell uses. Okay. Because the paramecium example would be trivial to go program. And so if the paramecium example is an example of evolutionary knowledge creation, then we have absolutely already attained the ability to write algorithms that create knowledge, evolutionary algorithms that create knowledge. That just must be the case. Can you see that that must be the case?
[00:26:27] Red: Absolutely. There’s just no question in my mind.
[00:26:30] Blue: Yeah. So, and now keep in mind that Popper endorsed Campbell’s theory, and it wasn’t just Campbell’s theory in general that Popper endorsed, it was the particular article that I’m taking this from that Popper endorsed. Right. So obviously, if an algorithm as simple as the paramecium algorithm creates knowledge, then knowledge creation is ubiquitous, and we had achieved knowledge creation algorithms before we even invented digital computers. They were all around us and we were doing them all the time before we even came up with digital computers. Okay. So what we have here then is a contest between two points of view. On the one hand, we have Popper and Campbell’s view. Okay. And on the other hand, we have this pseudo-Deutsch view. So, to kind of make it more clear what the two competing theories are: the pseudo-Deutsch view is that knowledge creation is rare. Only biological evolution and human minds have currently achieved it. We don’t currently know how to create knowledge-creating algorithms. Okay. So that would be one possible view. That’s the pseudo-Deutsch view. And then for comparison, we have the view that Campbell and Popper must be expressing, which is that knowledge creation is something common and ubiquitous. Any process that follows Popper’s epistemology of variation, where you can differentiate between the better and worse variations over time, and you select the better variations, is included in what we call knowledge creation. Nature consists of a vast, ubiquitous, overlapping hierarchy of evolutionary algorithms that are constantly creating knowledge. Computer algorithms that create knowledge are common and well-known.
[00:28:14] Blue: In fact, knowledge-creating algorithms existed before the invention of the digital computer. Okay. So we’ve got two views, one seemingly coming from David Deutsch and one seemingly coming from Popper and Campbell. And they absolutely must be seen now as at odds with each other. They’re mutually exclusive. There is no way they don’t conflict. Okay. So this is now my summary, to make it a little easier. I’m summarizing Campbell’s and Popper’s view as: all variation and selection processes must result in improvements, and so, by definition, are creating knowledge. That’s the way I’ve summarized it to make it really simple. Okay. Okay. So now let’s consider this. Are these two equally good theories? I’m going to show that they are not, that really only one of these is a good theory. Okay. So keep in mind that Deutsch, I won’t reread the whole quote, but Deutsch made that argument about how the programmer may be injecting the knowledge into the evolutionary algorithm that makes the robot move. Okay. And then he argues that there’s so much knowledge coming from the human that that may be where all the knowledge is coming from, and the algorithm may not actually be producing any knowledge at all. Now I do want to somewhat align myself with this argument from David Deutsch. And we talked about this in the last podcast. There is so much knowledge that comes from the human in artificial intelligence today. Like, it’s overwhelmingly coming from the human today.
[00:29:46] Red: Yeah. Like, kind of to an appalling level, for something we call artificial.
[00:29:51] Blue: Yes. Right. And that’s one of the reasons why it’s so dang narrow. Right. And so to some degree, at a kind of spiritual level, I agree with what David Deutsch is arguing here. I think that he is saying something meaningful and important, which is: you’ve got to stop thinking of artificial intelligence algorithms as creating all the knowledge they appear to be creating. Most of it is coming from the human. Right. However, he goes too far. He says all of it’s coming from the human. Okay. One thing that needs to be immediately noticed is that this argument is inductive. You can’t count white swans to determine if black swans exist or not. That is just not how critical rationalism, how Popper’s epistemology, works. So the fact that there is lots of knowledge coming from the human in no way informs us whether the algorithm is creating knowledge or not.
[00:30:45] Red: I agree.
[00:30:46] Blue: Okay. So as far as these two competing theories go, Popper’s view and the pseudo-Deutsch view, this argument from David Deutsch, however much I agree with it spiritually, does not differentiate between the two theories in the slightest, and therefore does not work as an argument if what we’re trying to do is decide between these two theories. Okay. So that’s the first thing I need to get out of the way. The next one is that Popper just doesn’t agree with the pseudo-Deutsch view on this. Okay. Popper and Campbell are in essence saying: that’s not so. You’re wrong. It is not true that we can’t tell if the algorithm is creating knowledge or not. Because we are specifically explaining that any variation and selection process must create knowledge, assuming it follows everything we talked about. And we’re explaining why that must be the case. Your theory’s not an explanation at all, and ours is. And notice that that’s just true. The Campbell and Popper view that I’m putting up on the screen here is an explanation. The pseudo-Deutsch view is not an explanation. It is simply a statement, but doesn’t explain anything.
[00:31:55] Red: Well, and as we talked about in some of our very earliest episodes, that’s not an unusual thing for a non-explanation: people rally behind it because they like the confidence with which it’s stated, and it gets adopted even though it explains absolutely nothing.
[00:32:15] Blue: That’s right. So we have the ability, using Popper’s epistemology, to eliminate the pseudo-Deutsch view, at least in its current form. Now, if Deutsch comes out later with a good hard-to-vary explanation of what knowledge is that is at odds with the Campbell-Popper view, it will then be a serious competitor. But as of today, based on what little we know, it’s not a competitor. It’s not an explanation. It is eliminated. And we are left with a single view, which is: all variation and selection processes must result in improvements, and so, by definition, are creating knowledge. This is how I am dispensing with that criticism: I’m pointing out it’s not an explanation at all, so it doesn’t count, and we can eliminate it just using critical rationalism.
[00:33:05] Red: I can get behind that until Deutsch shows up and he can contest us.
[00:33:12] Blue: Yes. And that may happen, right? I mean, we’re open to that. He’s a smart guy, right? He may have something in mind that Popper and Campbell didn’t know about; this is a very realistic possibility even, right? But we can’t consider a theory that doesn’t exist yet.
[00:33:30] Red: I agree. Let’s not consider it and move on.
[00:33:34] Blue: Okay. So getting back to Campbell’s description of universal Darwinism, where we left off was that step three can be eliminated. So we’re left with: try out variations, select the variants that best solve the problem. Notice that that is the universal Darwinism algorithm I proposed right at the beginning. And this is how I came up with it. I got it directly from this quote, based on the reasoning I just gave you, that the third step was unnecessary. Okay. So now we have to talk about whether I’m misreading Campbell or not. My summary is: try out variations and select the variations that best solve the problem. So my summary of the universal Darwinism algorithm is variation and selection. Campbell very specifically states this as blind variation and selective retention. Interesting. Okay. Why does he do that?
[00:34:30] Red: Well, the emphasis on the word blind is really interesting. It is. And I’m going to talk about that next.
[00:34:37] Unknown: Okay.
[00:34:38] Blue: And this is where I really feel like Campbell did himself a disservice by choosing to use extra words. I think he meant variation and selection. I think that he didn’t mean some subset of variation and selection. I think he was actually trying to explain something else. But everybody who reads Campbell, except me apparently, a lot of people who read Campbell read him as saying, well, the variations have to be blind. So there are certain kinds of variations that don’t count because they’re, quote, “sighted.” That term never came from Campbell. He never talks about sighted variations, but other people reading Campbell do. And so now we have to deal with whether I have misread Campbell. So let’s talk about blindness. Campbell summarized this as blind variation and selective retention. But what is blindness? Now, the most common reading of blindness is clearly wrong, because Campbell makes it clear that this is a wrong reading. Most people read blindness as being the same as randomness. So, like, I was talking with my friend Dan Elton, and I was showing him this presentation. And he says, when Campbell says blind variation, I’m sure he means random variation, because everyone knows that evolution is based on random variation. Okay. Remember that when I showed the Dawkins version, it included the idea of random variation. Right. So the thought is that blindness just means randomness, but Campbell very specifically says that that’s not the case. Okay. In fact, I believe that what he’s trying to do with the word blindness is explain that he doesn’t mean randomness.
[00:36:18] Blue: So instead of just saying variation, which would have been the correct thing to say, he says, I’ve got to make it clear that the word variation doesn’t mean random variation, because everybody thinks that the word variation means random variation. So I’m going to put this clarifying term blind in front of it to make it clear that I mean something different than random variation. And what he has in mind, I believe, is something that’s a superset of randomness. So randomness is a kind of blindness, but it’s not the only kind of blindness. So for example, let’s say that you do a full sweep. The paramecium example is an example of a full sweep, where you simply try every single possible variation, but not randomly. That would still be considered blind variation, but it’s clearly not random variation. So blind variation would be either randomness or really any non-random approach that’s just trying variations. And I think that’s all he meant. I don’t think he meant more than this. I think he was simply trying to make it clear that his algorithm is a generalization of the way people normally think of the evolutionary algorithm, and that he’s including full sweeps or other types of trying out variations that aren’t random, because so long as they are trying variations, then they count as blind. That’s basically what I think he was trying to get at.
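(One way to picture Blue’s reading of “blind”: the variant generator is pluggable. A random generator and an exhaustive full sweep are interchangeable front ends to the same selection step, and both are blind in Campbell’s sense. The five options and their scores below are illustrative assumptions.)

```python
import random

options = ["north", "south", "east", "west", "stay"]
score = {"north": 1, "south": 4, "east": 2, "west": 5, "stay": 0}.get

def random_variants(n=10):
    """Random variation: one kind of blindness."""
    return [random.choice(options) for _ in range(n)]

def full_sweep():
    """Exhaustive, deterministic variation: equally 'blind' in Campbell's
    sense, because it still doesn't know the answer in advance."""
    return list(options)

def select_best(variants):
    return max(variants, key=score)

# Same selection step, different variant generators; both arrive at the
# same knowledge about which option best solves the problem.
print(select_best(random_variants()))  # usually "west"
print(select_best(full_sweep()))       # always "west"
```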
[00:37:42] Unknown: Okay.
[00:37:43] Blue: So just to kind of drive this home, what exactly is a non-blind variant? I really don’t know. Okay, when people talk about sighted variants, they’re acting like they know what they’re talking about, and I’ve never been clear what they’re talking about. Okay. So it seems to me that the very concept of variation is blind: if you have to try out different solutions to a problem, it’s because you don’t know how to compute the solution directly. Therefore, it’s blind, right? So I think variation and blind variation are the same thing. Okay, but clearly other people think otherwise. So the question is, what is a non-blind variant? If I’m wrong, then there should be something called a sighted variant. And we should be able to understand an explanation of what that is. And we should be able to take a look at how that changes the algorithm.
[00:38:34] Unknown: Okay.
[00:38:35] Blue: Okay, to make matters worse, Campbell goes out of his way to kind of clarify that blindness can happen anywhere in the hierarchy of evolution. He’s the one who introduced the idea of a hierarchy of evolution. And so he says, look, I get it that what I’m calling blind variation may seem sighted. And he uses the example of eyesight to explain this. Okay. So if there was ever something that was sighted, it would be eyesight, right? And he says, okay, eyesight counts as what I mean by blind variation, because each individual receptor is itself just pointing in a direction, and then it takes information in. And it’s utilizing knowledge that was evolved by some other process that was using blind variation. So basically, what he means by blind variation can mean almost anything, so long as somewhere in the hierarchy there is some sort of blindness going on. And that’s the way he explains it. And he never really, he never uses the term sighted variation. But this is his explanation for blindness. Okay. And it really leads to a lot of confusion. We’re going to see, I’ll give you examples of this. He also says selective retention. Now, what does that mean? Is that a fancy way of saying use the best variant you find? Okay. If so, then we can just say selection instead of selective retention. Okay. Now let’s test this a little bit. What does non-selective retention mean? Just like I wanted to know what a non-blind variant is, what’s non-selective retention? Or what does selective non-retention mean? I don’t know. Those seem like nonsense phrases to me.
[00:40:12] Red: Okay.
[00:40:13] Blue: So even though he was very careful to use the term selective retention, I think he just means selection. Okay. Now I’ve had one argument made to me that probably deserves some mention. I had somebody say to me, okay, no, you’re reading this wrong. What he means is that for it to count as knowledge creation, you have to retain something from the previous variant you tried. So if you’re not retaining something from the previous variant you tried, then it doesn’t count as knowledge creation. Now, this actually strikes me as a somewhat plausible reading that might be at odds with what I’m saying. Okay. Except for one thing: the paramecium example. What is being retained between variations there? This is his example. Okay. I don’t know. Right. I mean, that reading is at odds with his actual example. Okay. So my conclusion is that selective retention just means selection. However harsh that is to say, I think that’s all he meant. I don’t think he had anything else in mind. Okay. So now let’s get back to this idea of sighted variants. So Gary Cziko, who was a student of Donald Campbell, writes an article called From Blind to Creative in defense of Donald Campbell. And he cites Perkins and Sternberg as criticizing Campbell because they see sighted variations as requisite for some forms of knowledge creation, particularly human intelligence. So just to put their argument into perspective, it goes something like this: Donald Campbell’s universal evolutionary algorithm is the wrong one. It’s wrong. And the way we know is that when humans create knowledge, they do so through sighted variants and not blind variation. And therefore, since knowledge is created through sighted variants, we have refuted Donald Campbell’s algorithm and it’s wrong.
[00:42:09] Blue: Okay. Does that make sense? Yeah, it totally makes sense. Okay. Now, keep in mind that on the way I’m trying to define blind, they’re wrong, because the very fact that there are any variants implies blindness. The algorithm itself implies blindness. That’s why sighted variants are actually blind variations: because they aren’t the answer to the problem that we’re after. They’re possible answers that have to be considered. I see. Okay. Okay. However, as long as you’re thinking of blind variation as not being equivalent to variation, and you’re thinking that there’s some sort of delineation out there called sighted variants, these are the types of misunderstandings you’re going to have. Okay, where people are going to say, oh no, I can show you examples of where sighted variants are necessary.
[00:43:00] Unknown: Okay.
[00:43:00] Blue: Now, Cziko goes on, and his argument back is the somewhere-in-the-hierarchy argument, which is what Campbell said. He says, no, actually, what you’re calling sighted variants, they count as blind variants. And the reason why is because the blindness exists somewhere else in the hierarchy. Okay. And this argument works. And I’ve got no problem with this argument, but it still leaves open the question: what in the world is a sighted variant? I mean, can someone actually point me to an example of a sighted variant so that I can actually consider it, or are all of them blind? Okay. So here’s another example that I had somebody argue to me. They said, okay, I’ll give you an example of a sighted variation. Let’s take the Fibonacci numbers. Now, you’re familiar with agile, so you’re familiar with Fibonacci numbers. They’re one, two, three, five, eight, 13. Basically, you add up the previous two and that gives you the next one. Okay. So the first two, the real Fibonacci numbers, you actually start with zero and one, but I’m going to skip that. I’m going to start with one and two to make it easier. So we’re given the first two, one and two. There’s no way to calculate those. Those are just atoms that you have to be given. So the third Fibonacci number is the first two added up, one plus two equals three. The fourth Fibonacci number would then be two plus three equals five, et cetera. Okay. So the person who was suggesting this said this is an example of sighted variants, and this is an example of why sighted variants don’t create knowledge.
[00:44:29] Blue: Now I would agree that there’s no knowledge creation going on here. Okay. What I disagree with is that these are actually sighted variants. I think these don’t work as examples of sighted variants. If they are examples of sighted variants, if this is what we mean by sighted variants, then I agree that sighted variants don’t create knowledge, but for a different reason than what they’re supposing. As far as I can tell, there are no variants at all here. Okay. Yes, the Fibonacci numbers are variants of Fibonacci numbers. We’re playing with language here. Okay. But remember, the algorithm is about solving a problem and having proposed solutions to a single problem. So what we have here is really a series of problems, different problems, each of which gets a single variant as an answer. So one problem is, what’s the third Fibonacci number, and the single variant you come up with is three. And another is, what’s the fourth Fibonacci number, and the single variant you come up with is five. Okay. You’re not trying different Fibonacci numbers and then measuring them against each other and then deciding this one’s a better answer to the problem, what’s the third Fibonacci number. Okay. Do you see what I’m saying here?
[00:45:40] Red: Yeah, totally. It makes total sense. So even if this is an example of sighted variants, the key thing here is that this is in no way following the algorithm I outlined.
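(To make the contrast concrete: a direct Fibonacci calculation, matching the numbers given in the episode, produces exactly one candidate per problem, so there is nothing to compare and nothing to select.)

```python
def fib(n):
    """Direct calculation: each step produces a single answer. No competing
    variants, no measuring, no selection, so on the account given in this
    episode it is not following the universal Darwinism algorithm."""
    a, b = 1, 2          # the starting atoms, as given in the episode
    for _ in range(n - 1):
        a, b = b, a + b
    return a

print([fib(n) for n in range(1, 7)])  # [1, 2, 3, 5, 8, 13]
```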
[00:45:52] Blue: Therefore, we can exclude it just by virtue of the fact that it’s not following a variation and selection process, as I explained in the more detailed version. So we can eliminate it based on that. By the way, one thing to note: if you have only one variant, then by definition, you have no variations. So that’s why I think if you have variations that you’re trying, you are by definition blind. And that is okay. So the real question is, is there any other option available? In critical rationalism, this is what it always comes down to. What’s the alternative explanation? I’m offering one explanation. It’s even a testable explanation. If you run this algorithm and it doesn’t produce knowledge, given the caveats we talked about, that would disprove this algorithm. But we know that’s never going to happen. My proposed solution is: if we’re comparing variants to find the best one, we’re blind. That’s it. The algorithm itself defines blindness. We don’t need any other concept of blindness except that it’s following the algorithm that I’ve outlined. Therefore, I believe the word blind is redundant and I’m dropping it. Okay. So going back to my algorithm: here are the five steps that we talked about before, including the explanation and things like that that we kind of laid out. It’s really important to now see what’s not referenced in this algorithm. What’s not there is as important as what is there. Okay. So one thing is that it doesn’t reference blindness anywhere. The algorithm itself defines blindness. So we don’t have to throw into step two, conjecture a solution to the problem by creating a variant, we don’t have to say, make sure the variant is blind.
[00:47:44] Blue: It does not matter how you create the variant. If you’re following this algorithm, it means you’re blind and therefore we don’t have to reference blindness anywhere. Okay. So we can cut that out. All right. It doesn’t mean that I’m rejecting blindness. I’m still claiming that Campbell was right that blindness is a necessary part of the algorithm. I just feel no reason to put it in front of the word variations.
[00:48:10] Red: It’s a specificity that causes confusion instead of giving clarity.
[00:48:14] Blue: That’s right. Okay. In fact, it would probably make more sense to say it’s a blind process of variation and selection rather than to say blind variations. Right.
[00:48:23] Red: I agree.
[00:48:25] Blue: Okay. The next one is no randomness. Okay. People are so used to the idea of evolution being random variation. It’s really important to realize that that’s unnecessary for the universal Darwinism algorithm to work. It does not matter how you come up with your variants. Your variants might be based on some amazing heuristic. In fact, we’ll see an example of this when we get to gradient descent, where the heuristic is very good. It really helps you pick really good variants. Okay. But the very fact that you are picking variants and that you’re having to compare them, that works. It doesn’t have to be random. Okay. Although randomness is fine. If you want to include randomness, if that’s how you come up with your variants, that’s fine. Okay. Nothing about replicators is mentioned in this algorithm. We’ve dropped replicators out. It still might be a good way to make a specific algorithm, to use replicators to come up with variations. But there’s no reason why you have to use them. The only exception to this would be if you’re understanding replicator as meaning that you retained that solution. But that’s so vague… I mean, the word replicator just doesn’t express that idea well. So I think it’s better to just say replicators are not necessary and move on. They’re optional, but they’re not necessary. And then finally, and this one’s interesting, I don’t anywhere use the word refutation. Now, people are used to the idea that Popper’s epistemology can be summarized as conjecture and refutation. I don’t actually think that’s the best way. Now that we’re generalizing this to something outside of science, I don’t think referring to refutation makes a lot of sense.
[00:50:04] Blue: Now you’re free to think of it that way if you want. We’re going to be doing a comparison of variants. We’re going to be keeping the better variants. That means we’re discarding the ones that weren’t as good. If you want to call that refutation, that’s fine. I’m not going to object. If you want to call keeping the variants confirmation, though that’s usually anathema to Popper’s epistemology, that’s fine too. I don’t care what you call it. It seems unnecessary to force the word refutation in here. That made more sense when we were dealing with science and scientific explanations. This is a more general algorithm that includes Popper’s epistemology for science, but isn’t limited to Popper’s epistemology for science. It can do other things as well, like the paramecium example. Now let’s get back to the question: have I misread Campbell? I’m trying to say the summary of his algorithm is variation and selection. He’s trying to say it’s blind variation and selective retention. Here are the key insights I want you to understand at this point. All evolutionary processes that use variations to find solutions clearly don’t know how to directly calculate the solution and thus are tautologically blind. Therefore, we can drop the word blind from the summary. There doesn’t seem to be such a thing as sighted variations. However, if there were such a thing… in fact, let me go back one slide. Let’s say that we believed that there was something called a sighted variant. Let’s say that you said, you know what, this algorithm doesn’t work. You need to throw something on there. You need to say, conjecture solutions to a problem by creating variants that are blind.
[00:51:44] Blue: Let’s say you really insist that the algorithm doesn’t work unless you throw that on there, because sighted variants, you think, don’t create knowledge. Notice that that ruins the explanation. The explanation is that if you’re comparing variants and discarding the worst ones, you’re getting improvements, and that’s what knowledge creation is. If you can take sighted variants and compare them at step two, that will also lead to knowledge creation. If you’re trying to stick the idea that it can’t be sighted variants in there, you’ve ruined the explanation. You now have this extra thing you need to explain, which is: why is it that sighted variations, even though they follow exactly the same process of comparing variations and picking the better ones, why does that not create knowledge? You can see it would still create improvements. Why are we suddenly slapping this label on: oh, that’s not knowledge, that’s something else. There doesn’t seem to be a way to modify this algorithm. You can claim you can, but you can’t do it without ruining the explanation. You can’t do it without ruining the algorithm as an explanation or making it untestable in some way. That’s why I reject the idea that you can make that sort of change to this algorithm. Interesting. There doesn’t seem to be such a thing as a sighted variation, but even if there were, even if someone could show me what a sighted variation was, there’s no way it’s following this algorithm, because by definition, this algorithm is blind. That’s how we’re defining blindness. The example of the Fibonacci numbers: they aren’t following this algorithm. You could try to force it in. You
[00:53:31] Blue: could say, well, we’re starting with a problem, give me what the third Fibonacci number is, and then I’m going to conjecture a solution by calculating it, adding one and two together and getting three. But then you never actually measure how good the proposed solution is. You don’t retain it against other solutions. You’re not really following this algorithm with the Fibonacci numbers. Absolutely. Okay. And then finally: therefore, for the purposes of the algorithm, blind variation is identical to variation, and the paramecium example shows us that selective retention is just another way of saying selection. Therefore, I’m proposing the universal Darwinism algorithm is variation and selection, and that is it. No randomness, no blindness, no selective retention. If you’re doing variation and selection, and that’s how you are making improvements, you’re following the universal Darwinism algorithm, period, end of story.
[00:54:25] Red: I concur.
[00:54:26] Blue: Okay. And that is my argument for today. We can end there, actually. And next time we will pick up; there’s still a lot to go over. But next week, what we’ll do is talk about a conjecture that Campbell makes. Basically, I mentioned that I’m claiming that this is a way to create knowledge, not necessarily the only way to create knowledge. There’s a wide belief that evolution is the only way to create knowledge. And Campbell, I believe, expresses that idea in his paper, and basically makes the claim: if there is knowledge creation, I’m telling you, there’s going to be blind variation, what he calls blind variation, which is really just variation and selection, somewhere in there. And he makes what I saw as a bold prediction, and we’re going to test that prediction the next time we meet. Well, thanks for joining us,
[00:55:24] Red: Camille. This was a fascinating conversation, and I feel like I’m ready to go have a fight with somebody about the creation of knowledge. All right. Thanks, everyone.