Episode 88: The Myth of the Objective
- Links to this episode: Spotify / Apple Podcasts
- This transcript was generated with AI using PodcastTranscriptor.
- Unofficial AI-generated transcripts. These may contain mistakes. Please check against the actual podcast.
- Speakers are denoted as color names.
Transcript
[00:00:05] Blue: Hello out there. This week, Bruce reflects on AI researcher Kenneth Stanley’s assertion that setting specific, measurable goals may actually hinder discovery and innovation. How does this insight relate to critical rationalism, education, and life in general? I hope you enjoy this as much as I did.
[00:00:28] Red: Welcome to the Theory of Anything podcast. I’m here with Peter. Hello, Bruce. How are you doing? Good. We’re going to talk about Why Greatness Cannot Be Planned: The Myth of the Objective, by Kenneth O. Stanley and Joel Lehman. Back in episode 74, we talked about the problem of open-endedness. That was one of our more popular episodes, and I covered a lot of Kenneth Stanley and his team in that episode. So that got me thinking: I wonder what else I could learn from his theories, because I know that he’s got all sorts of interesting theories, even practical theories that he and his team have turned into programs and actual algorithms. So I knew there was a book, Why Greatness Cannot Be Planned, and I wanted to read it. It’s not too different from if you just went out and watched a lot of Kenneth Stanley’s videos online; I think he largely covers the same subjects. Peter, I know you’ve watched at least some of those videos, so you may actually be pretty familiar with a lot of the ideas that he raises.
[00:01:32] Blue: I checked out a couple, thought it was interesting, and I’m curious what you say about this. That’s kind of where I’m at. The first thing I should say that might be helpful for our listeners is that we are not talking about the objective in terms of objectivity versus subjectivity. When I first heard you say that, I was like, oh, okay, this is the myth of objectivity. That’s not what he means, right? Yeah. It has a little different connotation. It’s more the objective as in what you’re searching for. Right. And let me give my own caveat to that. The name of the book is Why Greatness Cannot Be Planned, which I think is a pretty accurate title. And then the subtitle is The Myth of the Objective, which I don’t think is really very accurate at all to what he’s talking about. He never really takes the stance that we should not have objectives or that they’re myths. Instead, what he really means is that we shouldn’t have ambitious objectives. And although it takes him a little while to get to it, he does eventually say, actually, I’m totally in favor of having objectives in the right circumstances. I’ll explain this a little bit more as we go.
[00:02:57] Red: But when you’re one stepping stone away, having a measurable objective is a very viable way to organize yourself and to make sure you’re making good progress. But if it’s an ambitious objective that’s off in the distance, then he feels that trying to turn it into a set of measurable objectives is going to end up being deceptive. It’s going to be a false compass. It’s going to completely mislead you. And so that’s what he really means: something more like the myth of the objective when we’re dealing with ambitious goals.
[00:03:39] Blue: and like a lot of what we talk about on this podcast, I’m going to guess we’re not just talking about some kind of machine learning thing. We’re talking about something more far-reaching. Yes. And of course, that’s what I loved about this book.
[00:03:55] Red: As someone who’s a total AGI geek, this book is actually saying something meaningful about what AGI needs to look like, what it is we should be looking for. I mean, obviously I covered a bunch of this in the Problem of Open-Endedness, episode 74, and so we’re going to re-cover some of that, but he goes into a lot more detail in the book. And then I’ve also raised a few times the problems of biological evolution: the fact that Darwinian evolution is an incorrect theory, that we’re actually now trying to find the neo-neo-Darwinian evolution that’s the actual correct theory, and we’re trying to turn that into something that’s understood well enough that we could, in theory, create an algorithm out of it, which we’re not even close to doing at this point. And he’s got something really interesting to say about that.
[00:04:44] Blue: Okay, so it sounds like it intersects with a lot of what we talk about at the intersection of philosophy, science, and machine learning. Yeah.
[00:04:58] Red: And then he does show how a lot of these ideas are similar. Machine learning, artificial intelligence, AGI: there are commonalities between those. I know a lot of people want to take the stance that AGI is something totally different from AI, which, depending on how you’re trying to slice your concepts, could be true. But there are commonalities between them that are really important to understand and to try to make sense of. And I’ll always bring up Donald Campbell’s theory, which was the earliest attempt to find commonalities between how humans make knowledge, how Darwinian evolution makes knowledge, how existing algorithms make knowledge, and how animal learning is related to those. It’s trying to find an overarching theory that brings all these concepts together, concepts that might at first seem related only as analogies, and show that they’re in fact all similar in important ways. And he digs into that a lot too. So just a little background on how Stanley and Lehman came up with their theories. It’s kind of interesting because it’s very serendipitous, and one of the main things that they’re trying to explain is the nature of serendipity and the role it plays in creativity. So the authors wrote a genetic art program called Picbreeder to explore algorithmic evolution in the creation of art. So imagine that there’s something like DNA, some sort of set of code, and it produces at random some work of art, which initially is just going to be something very vague. It’s going to look like abstract art or something; it’s not going to look like anything in particular.
[00:06:46] Red: And he was trying to figure out, like, how many steps would you have to take to find something interesting? I mean, he had a lot of interesting, but admittedly vague, questions. So he realized that evolution takes a substantial number of generations, and that humans are going to lose interest fast. If he just created this program and said, just keep evolving stuff, someone’s going to try it 10 times, then go, yeah, I’m done. Nothing interesting is going to be found in 10 generations or so. So they had this idea to make it an online program so that anyone can pick up where someone else left off. Essentially, you’d pick one of several images, and it would breed using the DNA of that image, and you would end up with a whole bunch of new images that were variations on a theme.
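The breed-and-pick loop Bruce describes can be sketched in a few lines of Python. This is only a toy stand-in: real Picbreeder genomes are CPPNs (small evolvable networks), not a flat parameter list, and the function names here are invented for illustration.

```python
import math
import random

# Toy "genome": three parameters of a function mapping pixel
# coordinates to brightness. (Real Picbreeder genomes are CPPNs,
# so this flat list is only a stand-in.)
def render(genome, size=8):
    a, b, c = genome
    return [[math.sin(a * x + b * y) * math.cos(c * x * y)
             for x in range(size)] for y in range(size)]

def breed(genome, n_children=4, rate=0.3):
    # Each child is a mutated copy of the parent's "DNA".
    return [[g + random.gauss(0, rate) for g in genome]
            for _ in range(n_children)]

random.seed(0)
parent = [0.5, 1.0, 0.2]
children = breed(parent)
# A human now picks whichever child looks most interesting and
# breeds again, or, online, leaves it for someone else to continue.
print(len(children))  # 4
```

The human doing the picking is the whole point: there is no fitness function anywhere in the loop, only variation plus a person's sense of what looks interesting.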
[00:07:37] Blue: And just for context, how far back are we looking here?
[00:07:42] Red: 2015.
[00:07:43] Blue: Okay.
[00:07:44] Red: So things are
[00:07:44] Blue: changing so quickly. Yeah.
[00:07:47] Red: Yeah.
[00:07:48] Blue: Okay.
[00:07:48] Red: And so the Pick Breeder probably is still online. I’ve seen it before, but it probably predates the book by years, I would imagine. Okay. So now here’s what’s interesting. So the other thing that was kind of cool about the fact that it’s an online program is that then you don’t have like one person just trying stuff. You have all sorts of different people with different diverse interests trying it. And once they get bored, they leave out there these breadcrumbs of where they left off that somebody else can pick up and start using, right? And the results were stunning. Users bred amazing pieces of art that were far more interesting and diverse than you would have ever guessed was possible from this little simple algorithmic evolution thing. Like things like skulls, butterflies, insects, aliens, apples. I mean, like I can’t show you pictures because this is an audio podcast, but like they really are pretty stunning. There’s a butterfly on the front cover of the book that was actually evolved using Pick Breeder. And it’s all from some very simple rules that contain no set objectives at all. So this led to a lot of interesting questions. And so on page 25 of his book, he says, it turns out that it’s a bad idea to set out with a goal of evolving a specific image because they wanted to know you can evolve all these cool pictures using this, but can you evolve a specific image? He says, in fact, once you find an image on Pick Breeder, it’s often not even possible to evolve the same image again from scratch, even though we know it can be discovered. Okay.
[00:09:18] Red: So in plain English: I can show you a bunch of pictures that were evolved with Picbreeder, and they’re cool. They look like apples, butterflies, skulls, aliens, cars, whatever. And then if I were to say, okay, go evolve that image, which we know Picbreeder can evolve because it already has, you would find you just couldn’t do it. It would be just impossible to find it, because the search space is way too big. Okay. And
[00:09:46] Blue: is this technology that led into LLMs or Midjourney and DALL-E and things like this, or is this just completely different?
[00:09:55] Red: This is completely different. So if I were to look at machine learning, the type of stuff from OpenAI and others that’s really hot right now, those are a certain kind of artificial intelligence. They’re usually rooted in neural networks, deep learning, where you have very deep networks that require a ton of data to train. And I mean, OpenAI may believe that they’re trying to discover AGI, and I wouldn’t even count out the possibility that they will make interesting discoveries that are somehow related to AGI. But I don’t think anybody really seriously believes LLMs are going to turn into AGIs. If a person believes that, most likely they simply don’t know what they’re talking about. And yes, I know a few very big names have worried about this. Geoffrey Hinton, who’s like the father of neural networks, has said some things that seem a little ridiculous to me. And I really respect him, I really strongly respect him, he’s amazing. But I think for the most part, nobody really takes seriously this idea that LLMs are going to turn into AGI. I think that there are these other branches that you don’t hear about. There are all sorts of branches in artificial intelligence of people trying stuff. And that’s actually one of the things that Stanley is going to talk about in the book: he feels that the way we organize AI research today isn’t really the healthiest in terms of trying to make interesting progress, precisely because we do it through the myth of the objective when we shouldn’t be, because that’s a bad idea. But you have Douglas Hofstadter, who’s an AI researcher.
[00:11:45] Red: You’ve got Stanley, and they’re not really researching neural networks and really couldn’t care less about them. They’ve got a totally different direction that they’re going. I’ve mentioned a few times the idea of explanation-based learning. Go pick up old textbooks. Tom Mitchell’s is the one I actually used in my AI ML program, and he talks about explanation-based learning, and you almost never hear about it today. And it’s such an interesting idea, this idea that explanations have much greater power than just probabilistic outputs from a neural network. It doesn’t seem to have turned into something super productive, and kind of everybody goes to wherever the current productivity is. But it’s this really interesting idea that should probably be of strong interest to people who’ve read David Deutsch’s books and agree with him on the ideas behind AGI, because explanations are somehow related to how humans are open-ended and have this amazing ability to be universal explainers. So why aren’t we spending more time studying algorithmic forms of explanation, like explanation-based learning? I think if we understood Deutsch’s theories, that would be a more interesting area to go looking at. But what I’m trying to say is that AI is just a super broad field, and what you’re currently hearing about that’s super hot is probably not that strongly related to a lot of the other fields that you don’t hear about so much. So they wrote a program that would automatically try to breed a specific picture that had previously been bred, so they knew it was possible, by having it score improvements towards the objective, which would be the target image. And here’s what they say about that.
[00:13:42] Red: The result for the most interesting images: total failure. It’s impossible to breed an image if it’s set as an objective. The only time these images are being discovered is when they are not the objective. The users who find these images are invariably those who were not looking for them. Page 26. So one repeating theme of this podcast is the idea that evolution is a kind of search. Moreover, search is a kind of evolution. And it isn’t hard to see why this must be. An algorithm that does a form of variation and selection is going to produce results that have the very same properties as what David Deutsch calls knowledge. Namely, surviving variants must have out-competed the failed competitors and thus, in some limited sense, kept themselves instantiated by containing some kind of knowledge. This is just as true for biological evolution (what we’d maybe call survival of the fittest), human theories (survival of the best theory), human and animal memes (survival of ideas due to being useful, or possibly parasitic), animal operant learning (survival of the actions that give the best rewards), and basically nearly all AI and ML, which are nearly all kinds of search algorithms. Think about the commonality of each of these kinds of algorithms. They all involve trying out variants and selecting the ones that are the best, where best is determined by context, not some universal standard of best. So all evolutionary algorithms are actually searching through variations, and therefore all evolutionary algorithms are kinds of searches. By definition, they would always have to be. But likewise, all search algorithms involve trying out various solutions to a problem by comparing proposed solutions.
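The variation-and-selection loop Bruce keeps returning to is simple enough to sketch directly. This is my own minimal illustration, not code from the book: the caller supplies the fitness function, which is exactly the sense in which "best is determined by context."

```python
import random

def evolve(fitness, seed, steps=200, pool=8, rate=0.1):
    # Generic variation-and-selection loop.
    # Variation: propose mutated copies of the current best.
    # Selection: keep whichever candidate scores highest.
    best = seed
    for _ in range(steps):
        variants = [best + random.gauss(0, rate) for _ in range(pool)]
        best = max(variants + [best], key=fitness)  # never regresses
    return best

random.seed(1)
# Context supplies "best": here, closeness to an arbitrary target value.
result = evolve(lambda x: -abs(x - 3.0), seed=0.0)
print(round(result, 2))
```

Swap in a different fitness function and the very same loop searches for something entirely different; nothing about the mechanism itself knows what it is looking for.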
[00:15:35] Red: So let’s take something that probably nobody perceives as an evolutionary algorithm: a simple A star algorithm that tries to find the shortest path. In video games, it’s a technique that you might use when you want your character to walk from one place to another through a maze. It runs an A star algorithm, and it finds the shortest path between the character’s current location and the location they want to get to. Okay. So an A star algorithm tries possible paths, although it’s constrained by an admissible heuristic. You don’t need to know exactly what an admissible heuristic is. But it has a heuristic that allows it to try out the best paths first, basically, and to find the shortest one. And you don’t actually have to try every single path, because the heuristic lets you know at some point that you’ve reached the shortest path before you’ve tried every possible path. The search is constrained in such a way, due to the admissible heuristic, that it only has to search so much before it knows it’s exhausted every realistic candidate for the shortest path. That is to say, an A star algorithm is in fact an evolutionary algorithm of variation and selection, where it actually tries out variations of possible paths and eventually picks the one that’s the shortest. All search algorithms are in this sense evolutionary algorithms. Now, this isn’t how you would normally think of the term evolutionary algorithm, I admit. Okay. The person who discovered this idea is Donald Campbell, way back in the 70s, who saw this as a generalization of Karl Popper’s epistemology.
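For listeners who want to see the A star idea concretely, here is a minimal textbook-style version on a grid (my own sketch, not code from the episode). Manhattan distance is an admissible heuristic because it never overestimates the remaining distance, which is precisely why the search can stop at the goal without trying every path.

```python
import heapq

def astar(grid, start, goal):
    # Shortest path length on a grid: 0 = open cell, 1 = wall.
    # Manhattan distance never overestimates the remaining cost, so
    # the first time the goal is popped, no untried path can beat it.
    def h(p):
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    rows, cols = len(grid), len(grid[0])
    frontier = [(h(start), 0, start)]      # (f = g + h, g, node)
    best_g = {start: 0}
    while frontier:
        f, g, node = heapq.heappop(frontier)
        if node == goal:
            return g
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (node[0] + dr, node[1] + dc)
            if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                    and grid[nxt[0]][nxt[1]] == 0
                    and g + 1 < best_g.get(nxt, float("inf"))):
                best_g[nxt] = g + 1
                heapq.heappush(frontier, (g + 1 + h(nxt), g + 1, nxt))
    return None  # no path exists

maze = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(astar(maze, (0, 0), (2, 0)))  # 6: must detour around the walls
```

Notice the variation-and-selection shape: the frontier holds candidate path extensions (variations), and the heap always selects the cheapest-looking one to extend next.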
[00:17:13] Red: And he believed that he had found a way to link all knowledge creation via what he called blind variation and selective retention. I talked about this at length way back in episodes 25 and 26. Now, I’ve argued that saying blind variation and selective retention is an overly difficult way of saying variation and selection, and I much prefer the simpler term variation and selection. For those who think there’s a difference between these, go back to episodes 25 and 26, where I explain in great, ridiculous detail why I feel these two concepts have to be considered the same concept. I even give my explanation for why Donald Campbell bothered to make it more complicated. The reason, which I’ll explain briefly again, is that if you say variation and selection to people, they almost immediately assume you’re talking about random variation and selection. And Donald Campbell was not talking about merely random variation and selection. He meant any kind of variation, whether it’s a full sweep or random. So he threw the word blind on there to try to specify that he didn’t mean only random variation. And instead, people misunderstood it and thought that there was some sort of blind variation versus sighted variation, and there’s been all sorts of confusion in the literature ever since he chose those terms, confusion which I believe he caused. I will do a separate podcast on Donald Campbell’s theory and actually go over the theory at some point. I’ve always wanted to do that. And I will go into more length as to why I feel he’s really just talking about variation and selection, period, end of story.
[00:18:55] Blue: Now,
[00:18:56] Red: can the Stanley’s discoveries fit perfectly into Campbell’s view? Oh, I should also mention, I don’t actually think Donald Campbell’s theory is correct. I think it’s got very similitude, but I do think there’s problems with it. But I think it was the first real attempt to find a generalization of Popper’s epistemology that covered every single kind of knowledge creation, not just biological evolution and human ideas. It was trying to show that there was this commonality between machine learning algorithms and animal learning and that all of them were related in some important way through this thread of Karl Popper’s epistemology, or at least this generalization of Karl Popper’s epistemology.
[00:19:36] Blue: Just to be clear, it’s not like Donald Campbell influenced Stanley, right? One’s talking about epistemology, one’s talking about machine learning, and you’re just making a connection between their ideas.
[00:19:55] Red: yes, you’re right. It’s a little more complicated than that, because I have a book on evolutionary algorithms that’s written by a big guy in the field, I can’t remember his name with the top of my head, and when he’s tried to trace out the history of evolutionary algorithms in his book, he’ll list out the first paper that was in this field of evolutionary algorithms was Donald Campbell’s paper, and it’s the exact one I’m talking about. Okay. So in fact, Donald Campbell is considered one of the, if not the founder of the whole concept of evolutionary algorithms. Okay. And so from that point of view, then yeah, he absolutely did influence Ken Stanley. I just don’t know that Ken Stanley had any idea he was being influenced by Donald Campbell, because nobody goes back and reads that paper and says, I’m going to learn about evolutionary algorithms by starting from the beginning and studying Donald Campbell’s theory. That isn’t what people are doing. It probably should do that, but that’s not what they’re doing today, right? Only you. Yeah, only me. So I would say he is indirectly influenced by Donald Campbell’s theories, absolutely indirectly influenced by it. Like it permeates all throughout our thinking in many ways. On the other hand, I don’t think anybody’s really aware of that fact. So it certainly isn’t a direct influence where he was studying Donald Campbell, and he came up with these ideas. I doubt that’s true at all. I don’t know for sure. I don’t know Ken Stanley personally. Maybe he did, but I doubt it. So, okay. So Ken Stanley’s discoveries fit perfectly into Campbell’s view.
[00:21:33] Red: Stanley sees his discovery as directly related to the nature of search and also directly related to how evolution works, because he sees those as the same thing, and I’m going to explain why. What Stanley adds that Campbell missed, in my opinion, is an important discovery about the nature of the evolutionary search space, particularly when trying to do an open-ended search, similar to biological evolution and human ideas. So biological evolution and human ideas are the only two known open-ended search processes at this point. See episode 74, the problem of open-endedness, for a discussion of that. And also keep in mind that that’s what I believe we’re talking about with the two sources hypothesis, that all knowledge comes from biological evolution or human ideas. I believe that the two sources hypothesis is a misunderstanding of the problem of open-endedness, and that the problem of open-endedness is a clearer version of the two sources hypothesis. So because of this idea that biological evolution and human ideas are in fact the two open-ended searches, you’ll understand why I spent episodes 75 to 79 arguing that the two sources hypothesis is really just a slightly corrupted version of the problem of open-endedness, and that it mistakenly understands knowledge as coming from the two open-ended sources of knowledge instead of recognizing the broad range of narrow knowledge-creating algorithms that also exist in nature and that humans have created. So now, getting back to quoting the book, he says, consider all possible images. So imagine a computer screen. This is me saying this: imagine that you’ve got a computer screen made up of pixels.
[00:23:22] Red: There’s actually a set number, a finite number, of possible images that can be drawn on that screen. Imagine the screen is so high resolution that your eyes can’t see the pixels. So we now have a set of every possible picture that can exist, okay? And it’s a finite set, all right? So he says, consider this: a minuscule portion of all possible images are great masterpieces like the Mona Lisa. These are objectives that are hard to achieve. A larger portion of possible images are recognizable, but less inspiring than masterpieces, like pictures of maybe everyday objects. Of course, the vast majority of possible images are of no interest whatsoever, just random static. That’s all page five. He says, it is useful to think of achievement as a process of discovery. We can think of painting a masterpiece as essentially discovering it within the set of possible images. So note how this perfectly fits with the concept of objective beauty. And in fact, you may need the concept of objective beauty to make sense of what Stanley is saying here, but anyhow, that’s a side note. So he continues, it’s as if we are searching through all possibilities for the one we want, which we call our objective. The point is that the familiar concept of search can actually make sense of more lofty pursuits like art, science, or technology, basically all forms of creativity. All of these pursuits can be viewed as searches for something of value. Page five. Now this may come as a surprise to many, though hopefully not to listeners of this podcast, since I feel like I have drilled this point home over and over and over again throughout the podcast.
[00:25:06] Red: This idea that we can take something like great art, scientific theories, biological evolution, great inventions, basically all forms of creativity, and think of them as a kind of search. This leads to a natural question, of course: does that mean literally all creativity is a kind of search? Yes, that is what I am saying. All creativity is a kind of search, period, end of story. So on page six, he says, out of many possibilities, we want to find the one that’s right for us, so we can think of creativity as a kind of search. So that was Stanley saying that. I keep saying Stanley, but it’s really Lehman and Stanley; it’s just too hard to keep repeating both names. If all creativity is a kind of search, then what is it that we’re searching through? On page six, he says we can call that something the search space: the set of all possible things that we’re searching through. What is the nature of this search space? He says, now try to picture this space as if different possibilities appear in different locations in a big room. Imagine this giant room in which every image conceivable is hovering in the air in one location or another, trillions upon trillions of images. The images within it would have a certain organization. The good stuff is relatively few and far between. As you can imagine, the kind of image you are most likely to paint depends on what parts of the room you’ve already visited. If you’ve never seen watercolor, you would be unlikely to suddenly invent it yourself. Civilization has been exploring this room since the dawn of time.
[00:26:47] Red: As we explore more and more of it, together we become more aware of what is possible to create. And the more you’ve explored the room yourself, the more you understand where you might be able to go next. In this way, artists are searching the great room of all possible images for something special or something beautiful when they create art. The more they explore the room, the more possibilities open up. That was page six. So then he continues: let’s pretend you wanted to paint a beautiful landscape. So this is your objective. If you’re experienced in landscape painting, it means that you’ve visited the part of the room teeming with images of landscapes. From this location, you can branch off to new landscapes that are still unimagined. But if you’re unfamiliar with landscape paintings, unfortunately, you’re unlikely to create a masterpiece landscape, even if that’s your objective. In a sense, the places we’ve visited, whether in our lives or just our minds, are stepping stones to new ideas. This idea of stepping stones is a really important point that he’s going to be coming back to. So he uses the example of inventing the computer. The ENIAC is often credited as being the first computer to be invented. But to invent the ENIAC, you had to first invent vacuum tubes, which were invented for a totally unrelated reason. It had nothing to do with computers. So on page seven, he says, stepping stones are portals to the next level of possibility. Before we get there, we have to find the stepping stones. Now I want to note here the similarity to Lee Cronin’s assembly theory. In fact, this is a rough version of Lee Cronin’s assembly theory.
[00:28:21] Red: This is why, even though I don’t know much about assembly theory, it has been of interest to me: because I can see that he’s trying to formalize something that clearly is at the heart of what creativity is, or what evolution is. Okay. So Stanley continues. He says, vacuum tubes are so unrelated to the objective of a computer that if you were simply searching for computer-like things, you’d entirely miss the necessary stepping stones required to actually invent a computer. Thus the search space of creativity is structured such that the stepping stones to a discovery do not exist anywhere near the discovery itself. Okay. So on page eight, in his words, the arrangement or structure of this search space is completely unpredictable. Because of this, the nature of the search space, where the stepping stones don’t exist near the objective that we’re interested in, he says objectives must necessarily be a false compass. Page 29: the fundamental problem of search and discovery is that we usually don’t know the stepping stones that lead to the objective at the outset. Page 30: the challenge of ambitious problems is that their solutions are more than one stepping stone away. While gifted visionaries often can lead us to the next stepping stone, can anyone guide us at all once beyond the horizon, many stepping stones away? Page 30: there’s a good explanation for why stepping stones are so difficult to predict, which connects to the general problem of objectives. Recall that a key tool for pursuing objectives is to measure progress towards them. This is the concept in machine learning and artificial intelligence of the objective function.
[00:30:04] Red: We actually build our AI algorithms around the idea of an objective function today, where there’s some sort of objective we’re trying to accomplish, and we want the algorithm to find the solution by searching for it. And what Stanley’s trying to say is that the very concept of an objective function guarantees that you’re doing a narrow search instead of an open-ended search. And therefore it’s a mistake, unless you want to do a narrow search, which often is what you want to do. You just want AlphaGo to play Go well, or Deep Blue to play chess well, or whatever. In that case, a narrow search makes perfect sense and an objective function makes perfect sense. But if what you’re trying to do is something similar to evolution, then an objective function is a problem. It’s exactly what you don’t want. Okay. There really isn’t an objective function in Picbreeder. And here’s why. Let’s say that you end up breeding an alien face. In real life, one was bred from an automobile. And automobiles don’t look like alien faces. You may even wonder how an automobile could turn into an alien face, but the wheels turned into eyes. There actually is a connection between the two; it’s just not at all obvious to a human being. So since the alien face and the automobile look nothing alike, but they are next to each other in the search space of Picbreeder, you can’t try to breed an alien face directly. You have to first breed an automobile to be able to get to it, at least to get to that particular alien face. So
[00:31:42] Red: Stanley says on page 30, when the objective function is a false compass, it is called deception, which is the fundamental problem in search, because stepping stones that lead to the objective may not increase the score of the objective function. Objectives can be deceptive. You know, there’s probably an easy way to explain this, and of course this is a massively oversimplified example. Imagine that you were trying to find the shortest path through a maze, and you were trying to do it by simply always moving in the direction of the goal. What you would quickly do is bump into a dead end. And then you wouldn’t be able to find the goal without first moving away from the goal and then moving back towards it, because that’s the way mazes work. Okay. Now, of course, our search functions, even when we have objective functions, aren’t that stupid. They understand this concept that you may have to move away before you can move forward. But the very fact that they’re using an objective function doesn’t allow them to move too far away and go find an automobile before we find the alien face. And that’s the inherent problem with objective functions. They’re very good if you’re just trying to go one stepping stone; then it makes perfect sense that you can measure progress from one stepping stone to the next. But if the goal is more than one stepping stone away, they may simply not be able to depart far enough to be able to get to the goal you’re after. So on page 31, he says deception is the key reason that objectives often don’t work to drive achievement. Okay. So key point: Picbreeder has no final objective.
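Bruce’s maze analogy is easy to demonstrate in code. The toy sketch below (my own construction, not from the book) pits a purely greedy walker, which always steps toward the goal, against an exhaustive breadth-first search that is willing to move away from the goal. The maze contains a pocket that looks like progress but dead-ends, which is exactly what deception means here.

```python
from collections import deque

def greedy(grid, start, goal, max_steps=50):
    # Always step to the unvisited neighbor closest to the goal:
    # a pure objective-function strategy.
    def h(p):
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    rows, cols = len(grid), len(grid[0])
    node, seen = start, {start}
    for _ in range(max_steps):
        if node == goal:
            return True
        moves = [(node[0] + dr, node[1] + dc)
                 for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))]
        moves = [m for m in moves
                 if 0 <= m[0] < rows and 0 <= m[1] < cols
                 and grid[m[0]][m[1]] == 0 and m not in seen]
        if not moves:
            return False  # deceived into a dead end
        node = min(moves, key=h)
        seen.add(node)
    return False

def bfs_length(grid, start, goal):
    # Exhaustive search: happily moves away from the goal when needed.
    rows, cols = len(grid), len(grid[0])
    queue, dist = deque([start]), {start: 0}
    while queue:
        node = queue.popleft()
        if node == goal:
            return dist[node]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            m = (node[0] + dr, node[1] + dc)
            if (0 <= m[0] < rows and 0 <= m[1] < cols
                    and grid[m[0]][m[1]] == 0 and m not in dist):
                dist[m] = dist[node] + 1
                queue.append(m)
    return None

# Start at (1, 0), goal at (1, 4). The open cell at (2, 2) is a
# pocket that is closer to the goal but leads nowhere.
maze = [[0, 0, 0, 0, 0],
        [0, 0, 1, 1, 0],
        [0, 0, 0, 1, 0]]
print(greedy(maze, (1, 0), (1, 4)))      # False: trapped by deception
print(bfs_length(maze, (1, 0), (1, 4)))  # 6: detour through the top row
```

The greedy walker scores every step by distance to the goal and dies in the pocket; the breadth-first search reaches the goal precisely because it is allowed to make moves that look worse by that score.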
[00:33:26] Red: Instead, it is a stepping stone collector (page 31). Picbreeder collects stepping stones that create the potential to find even more stepping stones. This is particularly true because a whole bunch of humans are doing it at the same time. One finds something interesting and leaves it out there, and somebody else goes, oh, that’s interesting, let me try to evolve that. And that creates more interesting stepping stones. So he refers to this as a non-objective system of discovery, or a non-objective search, i.e. a search without an objective function. This is different from the way we normally think of artificial intelligence. The idea of an objective function is so wired into the concept of artificial intelligence that it’s almost just a given that you have to have an objective function, because otherwise how are you going to drive the evolution towards your objective? Okay. And yet when we do that, we’re kind of forcing that algorithm to be a narrow knowledge creator instead of a broad, open-ended knowledge creator. Okay. Now, at this point, it would be natural to say something like: but wait, doesn’t biological evolution have an objective function? And what I have in mind here is something like survival of the fittest, by which we mean survival of the best replicators, not survival of the strongest organism. Stanley takes this as the objective of biological evolution, but he instead calls it survive and reproduce, which is probably a better way to say it than survival of the fittest. Stanley admits there is some truth to this idea.
[00:35:09] Red: But he points out that while we could think of this as an objective of Darwinian evolution, it isn’t really the same as the artificial intelligence concept of an objective function, nor is it strongly similar to what humans usually mean by the word objective in most circumstances. Stanley and Lehman suggest that we think of survive and reproduce as a constraint rather than an objective. So, an example of how they differ. Usually when we formulate an ambitious objective (this is from page 35, by the way), we haven’t already achieved the very objective that we’re setting. That would be a rather odd way to start. What kind of strange marathon is over exactly when it begins? But clearly the very first organisms on earth survived and reproduced, or we wouldn’t be here. So you can’t really say survive and reproduce is the objective of Darwinian evolution, at least not in the sense of some sort of end goal we’re trying to reach, because the very first organisms did survive and reproduce, and did so successfully. They had already reached their objective. And so whatever evolution is doing beyond that point is in some sense just maintaining that objective; it’s continuing to survive and reproduce. Right. So it’s not a very effective way to think of survive and reproduce. Yes, it’s in some sense an objective, but it’s not an objective in the sense of being a goal. So if objectives are the wrong way to power a non-objective search, what is the right way? For Picbreeder, which was powered by human brains, the answer was interestingness. Now, this is a bit hard to define into an algorithm, of course.
[00:37:01] Red: So if we want to make this into an algorithmic search, how about using novelty as a proxy for interestingness? On page 40, Stanley says novelty can often act as a stepping stone detector, because anything novel is a potential stepping stone to something even more novel. In other words, novelty is a rough shortcut for identifying interestingness. Interesting ideas are those that open up new possibilities. And then he quotes Alfred North Whitehead: it is more important that a proposition be interesting than that it be true. Stanley and Lehman now go on to describe an actual algorithm that they created to try to model this idea of a non-objective search, and they call it novelty search. This is what you’ll find in Stanley’s videos on the Internet, and I actually do recommend people go check this out, because it’s very cool what they were able to do with the concept of an open-ended, non-objective search using novelty search. So for example, he might program a robot not to run a maze to some objective, some sort of end goal like exiting the maze, but instead to just search for novelty. Now, at first it is novel for the robot to just fall down in new spots or to bump into walls. But if you keep the novelty search running, the robot must eventually learn to walk and avoid walls to discover something novel. So in the end, Stanley and Lehman found that a robot doing novelty search was better at running a maze to the end, at least for certain sizes of mazes, than a robot that was doing an objective search to run the maze to the end.
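For readers who want to see the shape of the algorithm, here is a minimal novelty search sketch. To be clear, this is my own toy reconstruction, not Stanley and Lehman's code: the domain (short random walks on a grid), the behavior characterization (the walk's endpoint), and every parameter are invented for illustration, and for brevity novelty is measured against the archive alone rather than archive plus current population, as fuller implementations do. The essential move is that the fitness score is replaced by novelty: individuals are scored by how far their behavior is from the k nearest behaviors already in the archive, and the most novel ones reproduce and are saved as stepping stones.

```python
import random

random.seed(0)  # reproducible toy run

MOVES = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def behavior(genome):
    """Behavior characterization: where the genome's moves end up on a 2-D grid."""
    x = y = 0
    for gene in genome:
        dx, dy = MOVES[gene]
        x, y = x + dx, y + dy
    return (x, y)

def novelty(b, archive, k=5):
    """Novelty = mean distance to the k nearest behaviors collected so far."""
    if not archive:
        return float("inf")  # everything is novel at the start
    dists = sorted(abs(b[0] - a[0]) + abs(b[1] - a[1]) for a in archive)
    return sum(dists[:k]) / min(k, len(dists))

def mutate(genome):
    g = list(genome)
    g[random.randrange(len(g))] = random.randrange(4)
    return g

def novelty_search(pop_size=20, genome_len=10, generations=30):
    pop = [[random.randrange(4) for _ in range(genome_len)] for _ in range(pop_size)]
    archive = []
    for _ in range(generations):
        # Rank by novelty, NOT by progress toward any objective.
        scored = sorted(pop, key=lambda g: novelty(behavior(g), archive), reverse=True)
        archive.extend(behavior(g) for g in scored[:3])  # collect stepping stones
        parents = scored[: pop_size // 2]                # most novel reproduce
        pop = [mutate(p) for p in parents for _ in range(2)]
    return archive

archive = novelty_search()
print(len(archive), len(set(archive)))  # stepping stones collected, distinct behaviors
```

Nothing in the loop says where to go, yet the archive steadily accumulates distinct behaviors, which is exactly the stepping-stone-collector dynamic described above.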
[00:38:39] Red: In fact, the specific numbers he gives are that 39 out of 40 times novelty search solved the test maze, compared to 3 out of 40 for objective search. Now keep in mind, novelty search had no objective to solve the test maze. So if it did, it did it by accident, because it’s just trying to do novel new things, right? Also, they do admit that if you make the maze large enough, both novelty search and objective search will start to fail at some point. So it’s not like novelty search is some better way to solve mazes. It’s just interesting that in an appropriately sized maze, novelty search would often outperform objective search, even on the objective itself.
[00:39:41] Blue: Can we take a brief detour into poetry here? Sure. Just as you were talking, I got curious about who said the quote, life is a journey, not a destination. According to ChatGPT, it’s somewhat apocryphal, but it’s most commonly attributed to Ralph Waldo Emerson; he has an essay called Self-Reliance. And here’s the quote. He says, to finish the moment, to find the journey’s end in every step of the road, to live the greatest number of good hours, is wisdom. And then this is ChatGPT speaking: it says this passage conveys a similar idea about appreciating the journey and finding fulfillment in each moment rather than solely focusing on reaching the end goal. I just thought it corresponded so nicely with what you’re saying about looking for novelty and interesting things rather than a destination. So I thought I’d throw that out there.
[00:40:58] Red: Well, that’s good, because you’re showing that these ideas Stanley is developing, even though he may be an AI researcher trying to develop them as a form of AI, have a really broad application, right? I mean, he’s stumbling upon something that tells us something interesting about natural evolution, about intelligence, about wisdom on how to live your life, things like that, right?
[00:41:24] Blue: Yeah, I think so.
[00:41:26] Red: And it even tells us something interesting about AI. If you recall, I forget which episode this was, but we did the episode on ChatGPT. And one of the things that was interesting is that a specific Microsoft machine learning algorithm trained to find personal information and remove it from a data set actually did worse than ChatGPT, which was not trained to do it. Precisely because ChatGPT simply tried to learn language in general, it ended up picking up a whole heck of a lot of knowledge in a whole heck of a lot of areas, which meant it had collected these stepping stones in a way that a narrow machine learning algorithm couldn’t. And so, at least in this case, it outperforms machine learning algorithms meant to accomplish something specific, right out of the box. Just talk to it: it can figure out what personal data is, and it does a better job of helping you remove it than a machine learning algorithm that was built for that purpose. I think that’s kind of cool, and I think it shows us something really interesting about how these different ideas interconnect. Okay, so here’s an interesting side note that critical rationalists will find interesting, especially those interested in Deutsch’s writings. Stanley says this on page 42: the main advantage of writing an algorithm without an objective is that we can put our money where our mouth is. If search for novelty alone really works for making useful discoveries, then it should be possible to literally formalize the process as an algorithm.
[00:43:12] Red: Once that’s done, it can actually be tested. Now, this hopefully sounds a little similar to something Deutsch said in The Beginning of Infinity. I can’t read the exact quote, I didn’t get a chance to look it up before the episode, but there’s a quote in The Beginning of Infinity where he says we’ll know that you actually understand intelligence because you’ll be able to write it as an algorithm. Now, if I recall, he was actually paraphrasing Feynman, and I think Feynman was paraphrasing some other famous computer scientist. So it’s an idea that’s been around for a while, this idea that you don’t really understand something until you can put it into an algorithm. And this is really what Stanley is talking about here. So on page 43, he says, in the field of AI, this philosophy of building an algorithm to test a theory is fully embraced. In fact, in AI research, no explanation is considered good enough unless it is built as a computer program and then run on a computer to test it. And in that way, AI has a demanding threshold for success, because its researchers can’t simply offer explanations but must build a prototype of their theory and show that it works. That’s page 43; the emphasis is mine. I’m emphasizing various Deutschian concepts that Stanley is accidentally including here, and that in fact clarify certain Deutschian concepts where maybe they’ve gone off the rails a little, as to, for example, what we really mean by an explanation. This idea that your explanation isn’t really a good explanation until you can turn it into some sort of algorithm that is testable.
[00:44:55] Red: That’s a very strong Karl Popper idea that many crit rats online have lost at this point. Go back and look at our episodes about epistemology and about hard-to-varyness and things like that. So several important points here are worth mentioning. He’s calling building a prototype a test. Now, this is clearly not an experiment in the sense of physics or how other sciences use that term. It is also not a test between theories via observations, like Popperian epistemology would normally use that term. Now, I’ve heard crit rats say tests are only a way to select between theories. They say that over and over till they’re blue in the face. What are the two theories being tested between here? If you’re a Deutschian, it can’t be that you are testing between whether novelty search works or not, because supposedly the inverse of an explanation is never an explanation, or that is at least a Deutschian claim. I think that should really be worded not as never an explanation, but not always an explanation. But that’s just me. Yet clearly this is a valid way to test a theory. In fact, it is the single best way to test a theory imaginable: you have to actually implement it and simulate it to show that it works. This should hopefully challenge your understanding of epistemology in some interesting ways, because this is a completely valid type of test, and yet it doesn’t really match what we would normally mean by the word test in epistemology.
[00:46:38] Red: On page 43, he says, one nice consequence of programming an idea as an algorithm is that it forces us to be clear about what we really mean. In other words, there’s no way to hide behind fuzzy words when a machine is running the tests. So to make an algorithm, we need to decide how exactly a computer should search for novelty. Let me just go ahead and make my point here. He doesn’t know it, but he’s talking about Popper’s ratchet. That’s the connection: Popper’s ratchet could be thought of as exactly what he’s describing, that you are only allowed to solve problems with your theories by making them increasingly concrete, eventually to the degree that you could program them and put them into the form of an algorithm. Now, how does this actually work in computer science today? On page 43, he says, the first step towards testing such a program is to decide on what’s called the domain. In other words, the computer only searches for novelty within a particular category. The domain defines the space that’s being explored by the algorithm. So how would novelty search really work? He goes into quite a bit of detail in the book, and it’s probably even better to see this in his videos. One obvious approach is to try everything, and this is called an exhaustive search. Unfortunately, it’s completely intractable. But novelty search doesn’t need to do an exhaustive search. So he says, discovery in novelty search is deeper than simply trying every behavior you can think of. The reason it’s more interesting than that is that novelty search tends to produce behavior in a certain order.
[00:48:21] Red: So unlike objective search, where you get a measurement of good versus bad behavior, hopefully ever increasing towards the good and away from the bad, novelty search, quote, all depends on what you’ve seen before (page 46). Instead, it orders from simple to complex. So for example, the robot first can’t walk at all. Then it can walk but falls over. Then it can walk but bumps into walls. Then it can avoid walls. And even just going from not walking to walking, there are probably a ton of steps in between that the novelty search would go through. So on page 46, he says, when all the simple ways to behave are exhausted, the only new behaviors that remain to discover are more complex. Now, of course, we’re being kind of vague about what we mean by simple versus complex. So he makes an attempt to formalize that, which is exactly what you do under Popper’s ratchet. Okay, try to make it more concrete. For our purposes, the key to simplicity is that it requires no information or knowledge about the world. Running into a wall is easy because you can do it without knowing anything about walls or hallways, or in fact without the ability to perceive anything whatsoever. But to eventually navigate a hallway with walls and not crash into them requires acquiring some knowledge about walls. That new knowledge is the magic step where a novelty search climbs out of ignorance and into meaning. Eventually doing something genuinely novel requires learning something about the world. Page 46.
[00:50:05] Red: Because you have to acquire knowledge to continue to produce novelty, Stanley sees novelty search as a kind of information accumulator about the world. Here, world doesn’t necessarily mean our world; in fact, most of his experiments were done in a virtual world. Okay. He sees this as analogous to how biological evolution works. So a single-celled organism has some knowledge about the world, of course. In fact, in reality, single-celled organisms are drastically complicated, very complex things, and there must have been a huge number of evolutionary steps prior to even a single-celled organism. However, for our purposes, we’ll start with a single-celled organism as the simplest form. So for evolution to keep producing novelty, animals eventually need to form organs that reflect properties of the world. He gives examples: eyes for light, ears for the vibrations in the air that are sound, legs so that you can defy gravity and walk around, lungs for the existence of oxygen, etc. Once you hit upon the existence of light in your open-ended search for novelty, that becomes part of evolution’s accumulated inventory of knowledge and information about the world that it can then use in the future. For a robot learning to run a maze, quote, falling down and kicking your legs may be a better stepping stone than trying to take a step, because kicking your legs is the foundation of oscillation, which is how walking works. But if walking is the objective, falling down is considered one of the worst things you can do. This is pages 53 to 54.
[00:51:45] Red: This is why novelty search often outperforms objective search, even when the novelty search isn’t trying to accomplish any specific objective. So objective searches converge, while non-objective searches diverge. Does this have any relevance to our understanding of the two open-ended kinds of search, meaning biological evolution and human ideas? Yes. On page 59, he says, it’s the combination of many minds with many different interests that ultimately plunders the search space in the long run, not any individual person. So what he suggests is building something like a treasure hunting system, and he suggests that one of the main ways you do that is by avoiding consensus. Now, this leads to real-life relevance, and he spends quite a bit of the book talking about the real-life relevance of the theory he’s developing. Keep in mind he is an AI researcher. He’s not researching biological evolution. He’s certainly not researching pedagogy, which is what we’re about to talk about. But his theory has application to things like pedagogy: how do we educate people? It even has application to what’s the best way to do science, because science is creativity and it’s a kind of search, and he’s really studying the right way to do a search. So for example, academia is broken today because which projects get funded is based on consensus rather than interestingness. He gives the example of the 2013 High Quality Research Act, which would have required research to be objective-driven. But if his theory is correct, then that’s a mistake,
[00:53:38] Red: because then it will narrow our search, cut off our ability to find the necessary stepping stones, and bog down our creativity. So today, I don’t know exactly how academia works, but a project that gets approved for government funding usually has to pass a board of peers who are experts in their field. They have to feel that the project has merit. They have to feel it’s feasible. They have to agree with it. So on page 82, he says, for the scientist writing a proposal, the result is that the best way to win a grant is to propose the perfect compromise, the best shade of gray: good enough to satisfy everyone, but unlikely to lead anywhere highly novel or interesting.
[00:54:23] Blue: Am I crazy, or has Deutsch said something very similar about academia in a couple of interviews? Okay. Yes, he has.
[00:54:31] Red: So now this makes sense, right? Because all these theories are kind of the same thing; they’re getting at the same epistemology in different ways. Karl Popper’s epistemology was initially just trying to understand why science worked and why a scientific theory was better than a non-scientific theory. And initially, he was actually just trying to work out what’s wrong with communism, you know, a fairly simple problem: why is it not science? And he didn’t initially think of it broadly enough to realize just how applicable it was. And then you can kind of see how Popper starts to realize, wait, this epistemology can be generalized to cover almost everything, right? Campbell was the first to try to do that, and Campbell called it evolutionary epistemology. Popper, what was the term he used? He called the term evolutionary epistemology pretentious. So he didn’t like the term, and he never gave it a specific name of his own. Famously, his article was called Towards an Evolutionary Epistemology, to try to avoid the pretension that we have an evolutionary epistemology today. But he agreed with Campbell’s basic idea that there was some sort of commonality and that we needed to figure out what it was. And that was when Popper starts going into, how does this apply to politics? How does this apply to everything else? And you can see how Deutsch, who’s a Popperian, would start to come to some of the same conclusions. He would start to say, you know what? We’re doing something wrong with education. We’re doing something wrong with the way we do research.
[00:56:20] Red: Stanley is not coming at this from the view of epistemology, but he stumbled across the same thing: this idea of an open-ended search is an important part of why certain types of search are different from narrow searches, why those two searches are in fact special. And because of that, he recognizes a lot of the same things that Deutsch has picked up on, but he’s coming at it from a totally different direction. He’s coming at it literally through AI theories, rather than epistemology or biology. So if the way we’re currently doing research isn’t good, what’s the alternative? Stanley and Lehman suggest that we don’t fund projects based on consensus, but based on lack of consensus. Now, this sounds crazy at first, but they make a really good case for it. Okay. Imagine that everyone reviewing a project says this project is bad. Then it would make sense not to fund it, so we’re not going to change that part. If the whole board says this is a bad project, we go, this is a waste of time, let’s not fund it. Okay. And let’s say everyone reviewing the project says it’s good. Well, maybe you fund it, but doesn’t that kind of mean it’s probably uninteresting? So he says, what if half the board said, yes, fund it, and the other half said, no, don’t fund it? That would really be a strong indicator of interestingness. And so lack of consensus may actually be a much better criterion for determining what to fund in terms of research than consensus.
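The disagreement heuristic can be captured in a few lines. To be clear, this is just my illustration of the idea, not a procedure from the book: score a proposal by how evenly the review board splits, so unanimous verdicts score low and a 50/50 split scores highest.

```python
def interestingness(votes):
    """Score a proposal by reviewer disagreement (illustrative only).

    votes: list of booleans, True = fund, False = don't fund.
    Unanimous verdicts (all yes or all no) score 0.0; an even split scores 1.0.
    """
    yes = sum(votes)
    no = len(votes) - yes
    # Fraction of the board on the minority side, scaled to [0, 1].
    return 2 * min(yes, no) / len(votes)

print(interestingness([True] * 6))                 # 0.0: unanimous yes, safe but dull
print(interestingness([False] * 6))                # 0.0: unanimous no, just a bad idea
print(interestingness([True] * 3 + [False] * 3))   # 1.0: maximal disagreement
```

Under Stanley and Lehman's suggestion, a unanimous no would still mean don't fund; a disagreement score like this would only rank the proposals worth arguing about.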
[00:58:00] Red: Precisely because consensus implies some sort of objective, it’s got the same problems as objectives anyhow, whereas lack of consensus instead works as a good proxy for interestingness, which is what we should really care about, because we’re trying to collect interesting stepping stones when we’re doing science. By the same token, standardized tests are trying to place an objective around the best way to educate, which is really an open-ended, non-objective search. So Stanley suggests we move away from standardized tests and metrics in education, which he suggests aren’t likely to do anything but halt innovation. Now, of course, right now, at least in America, education has this huge drive towards standardization. We have the whole idea of Common Core, that we’re going to teach exactly the same things that we want to actually test. We have goals set: we’re going to raise scores on these tests by X amount, and if we don’t manage to do that, then we’ve made a mistake and we need to figure out how to get those test scores raised. And the way we’re currently doing it is going to corrupt the search. You’re going to end up with all the creativity going into how to raise a test score rather than how to actually educate, and you’re going to end up with this corruption due to trying to turn education into an objective function.
[00:59:32] Blue: I don’t want to get too far down a tangent here, but I will say that there is another major pushback on that in the form of critical pedagogy, which wants to turn education into something more like liberation, in the sort of Marxist tradition. I mean, this is a major thing now. To put it briefly: there is a guy named Paulo Freire, a Brazilian educator from the 70s. It sounds a little bit like a conspiracy theory, but this is big. I think he’s the most cited scholar in education right now, and maybe ever; he might be one worth looking up. A lot of his ideas came from educating poor people in the slums of Brazil. His idea was to sort of deconstruct the student-teacher relationship, so that it’s more that they’re learning together. He was very influenced by Marx and various postmodernist thinkers. I guess the steelman is that he wanted to create critical thinkers rather than fill people’s minds with knowledge. How that plays out in the real world, well, I think the strawman could be something more like indoctrination into leftist causes. Anyway, I think just contrasting these two views of education indicates how complicated it is. But this is a tangent.
[01:01:44] Red: No, let me just say that at a vague, high level of abstraction, I guess I agree with the idea that it would make more sense to have teachers and students learning together, right? Isn’t that what a PhD program is? I mean, if the ultimate form of education is the PhD program, then why isn’t that good for every single level of education? So at some level of abstraction, I can agree with this. Now, of course, the problem is that almost all theories are like that. Almost all of them have some verisimilitude, and therefore I can get behind some aspect of the theory at some level of abstraction, as long as you don’t get concrete, right? And I think this is the problem. You can almost assuredly dress up communism in such a way that it sounds like a really good idea and contains true ideas within it, right? The real problem with communism is that when you actually try to concretely implement it, it turns out to be a disaster. And I think that is kind of the question, right? We can maybe even all agree, hey, we should be teaching critical thinking. Or maybe we wouldn’t all agree on that wording, but we want to help people learn to be critical thinkers; probably nobody would argue with that. And we want to let people learn to discover things for themselves and have fun doing it. I can easily come up with this high level of abstraction that everyone’s going to agree with. And then the real question is, okay, great, now how do we concretely actually do that, right?
[01:03:23] Blue: Yeah, easier said than done. Pedagogy of the Oppressed is his most influential book, from 1968.
[01:03:30] Red: What a title. Now, that’s a title right there.
[01:03:35] Blue: Generally, to give you an idea of how influential he is, though: in every single training I’m required to go to as a teacher, one of the things I like to do is look up the philosophical influences of the material or writers that we’re reading. And I would say just about all of them come back to him, at least recently. So he’s someone with a lot of influence on the
[01:04:08] Red: world. Let me ask you a straight question. Does that scare you or does that excite you?
[01:04:13] Blue: It doesn’t excite me. I think it’s quite scary, actually. I think these ideas, when they play out in the real world, I mean, they sound good, but they’re actually quite authoritarian.
[01:04:29] Red: They’re utopian.
[01:04:30] Blue: Utopian, yes, an interrelated idea. Well, Jesus, we’ll have to do a whole episode on this.
[01:04:44] Red: Here’s the thing, though. I can completely understand why this scares you, especially if it ultimately seems like it’s starting to turn into just indoctrination. But the way you just described this is not too different from what I’ve heard David Deutsch or a bazillion crit rats say about education and what’s wrong with education. If you had removed the actual influences and just stated the ideas, I probably would have thought you were quoting something from David Deutsch. And I think this is part of the problem: it’s very easy to vaguely criticize the existing system, and it’s very easy to vaguely come up with a utopia that we should be driving towards. It’s super hard to be concrete. And this is Popper’s ratchet in a nutshell, because Popper’s ratchet could in some sense be: you make your theories more explicit so that they’re more testable, which requires you to really start thinking concretely about what you’re going to do. And since far-off utopias are ambitious goals, and therefore multiple stepping stones away, it’s not clear they should even be our objectives, much less how you would possibly get there.
[01:06:02] Blue: Yeah, that’s a nice way to put it. Through David Deutsch, I’ve become quite interested in alternative education, and I’ve read John Holt and some of these kinds of thinkers. I love their ideas. But looking at actual alternative schools, more curiosity-driven schools, unschools (and I know that unschooling is often something that happens at home, as part of homeschooling, which is its own thing), most of them seem to me like Maoist indoctrination centers. I mean, it’s not clear at all to me that they are an improvement over a school where you are learning geometry in a geometry class or learning facts about history in a history class, rather than something more like liberation, whatever that is.
[01:07:05] Red: Right.
[01:07:05] Blue: My two cents there.
[01:07:07] Red: We will have to do a separate podcast on this. Yeah.
[01:07:10] Blue: I’m sorry, I took us down a tangent there.
[01:07:12] Red: Let me now give Stanley’s and Lehman’s approach to this. The goal we’re heading towards in education is improved education, improved learning. That’s super vague; it’s not a specific goal, and it’s certainly not something that’s measured by a specific test.
[01:07:34] Blue: Yeah. And I will say, to be clear, I don’t really like the other idea either, that we should just be teaching kids how to score well on a test. There are a lot of problems with that as well.
[01:07:49] Red: Right. So what’s the alternative, though? Having an objective in education is the wrong way, and he is definitively saying he believes his theory shows it’s the wrong way. Okay. Does that mean we do no assessment of education at all? Do we all switch to homeschooling and abolish traditional schools altogether? Are we going to just assume that kids are sponges that soak up knowledge without requiring any sort of schooling or teaching? No, that is not the lesson of the myth of the objective, and he’s very clear about this. Okay. He suggests a yearly assessment of a teacher, created by the teacher putting together a portfolio of assignments, tests, syllabi, teaching philosophy, methods, and samples of students’ work. That’s all from page 76. It wasn’t an exact quote, but that is what he suggests as the assessment, instead of just trying to score higher on tests. The teacher then receives five anonymous assessments. If the grade is failing, then and only then do we bring in standardized tests. So he’s not even saying get rid of standardized tests. He thinks they could have a place: if a teacher is having a hard enough time, maybe we do need to say, look, we’ve got to get you at least to the point where you’re doing pretty okay on standardized tests. That may be where standardized tests fit in. But once you’re at a certain minimum level on standardized tests, they don’t really serve a purpose anymore. In fact, from that point forward they serve a negative purpose.
[01:09:24] Red: And it would make a lot more sense to make the assessments a little more subjective, so that we are still assessing teachers and giving them feedback, not just letting them fall off the end and do whatever they want. But think about how this type of assessment is a stepping stone collector: we're allowing teachers to have their own teaching philosophy and to try out different ideas, and we make a point, through the reviews, of letting those ideas disseminate amongst the other teachers. So this type of assessment works better because, instead of being some sort of objective search, score X percent higher this year on a standardized test, it assesses where we are at the moment while allowing for different philosophies on how to improve from there. This is much more like a treasure hunter than an objective search, and therefore fits with the idea of a non-objective search. Now, some readers may detect in this a hint of Paul Feyerabend (I'm not sure how to pronounce his name), one of the students of Popper who kind of went his own way, and who in Popperian circles may even be viewed negatively at this point. But he taught that science can't be distilled to an objective methodology, and this is at least seen as being in contrast to Karl Popper, who believed that there was an objective methodology in the critical method. Stanley also argues, page 88, that realistic objectives, which tend to be the province of investors, tend to be exactly those that are one stepping stone away.
[01:11:13] Red: The business person tends to look for nearby stepping stones before sinking in funding, while the scientist ideally requests funding to follow a hunch that an interesting stepping stone is nearby, page 89. It may not be obvious at first what he's arguing here, but this is an argument in favor of public funding of research. He doesn't want to see it all entirely privatized, because he feels that the market is just too objective-driven, towards profits, and that you will therefore actually cut off interesting kinds of research if you don't have some sort of public funding for it. I know many libertarians would probably roll over in their graves, even though they're not dead yet, at me even saying that. But that is one of the arguments that comes out of his theory: there actually is a place for public funding of research, but it needs to be different and distinct from private funding of research, which is objective-driven. Then he says there is little reason to doubt the theory of natural evolution, unlike those who subscribe to intelligent design, because the scientific evidence backing evolution is truly staggering. That's from page 102. And that probably had all the crit rats rolling over in their graves: oh no, evidence backing a theory! Evidence isn't for backing a theory; it's for selecting between theories. And then he says biologists don't fully understand every aspect of evolution. This is something we've talked about quite a bit on this podcast. The origin of life itself is not entirely settled, or, I would argue, really understood well at all.
[01:12:51] Red: Biologists also debate several different interpretations of evolution. In particular, a key question is: how important is natural selection? Now, at this point you might say, wait, I thought natural selection was just another term for biological evolution. And certainly the way we teach biological evolution today in schools, we do teach it specifically as a form of natural selection. But one of the things that comes out of Stanley and Lehman's research in AI is that we are interpreting evolution wrong, and that it actually isn't explicitly about natural selection. Again, if you've been listening to this podcast, this won't surprise you. I think I've already mentioned the idea that natural selection is different from biological evolution. It's just one part of it, and it may not even be the most important part. I don't remember which podcast it was, I think it was the last one we just recorded, where I even went so far as to say Darwinian evolution is in fact entirely refuted at this point. True, we still call biological evolution Darwinian evolution, but we really mean a modified version that takes what we now know into consideration. But the original idea, that evolution is just about natural selection, period, end of story, nobody really believes that anymore. There are different people who believe maybe it's still the most important force; there are others who believe it's not, right? So we are used to thinking of biological evolution as being about natural selection, but what if there were a part of the multiverse where nothing ever died? It probably doesn't even exist, but just pretend; this is a hypothetical to make a point that Stanley is making.
[01:14:35] Red: So in this universe there is no selection, there's just variation. All the variations are still being tried, but there's no selection. Presumably all the species that exist in our world today would also exist in this hypothetical world, plus a lot more. The ones we don't have are the ones that were removed by selection; but since all the variations are still being tried, all the ones we have in our world still exist. So he says, page 107, perhaps selection actually restricts exploration. Then, pages 107 to 108: you might first think that this sort of evolution would not produce anything interesting, because there's no pressure for organisms to adapt to their environment. Without adaptation, which is the usual explanation given by biologists for complex biological traits, there is no need for organisms to improve objectively. But is selection really necessary, in the strongest sense, to create the complex creatures we see around us? Or is it possibly just a restriction that limits the creativity of evolution? So selection is really a constraint, and variation is the real search process. Or, as he puts it, selection is not really a creative force, page 108. Stanley and Lehman argue that once you can wrap your mind around this idea that selection is only a constraint due to limited resources, you can realize something interesting about biological evolution. It's actually not about survival of the fittest, red in tooth and claw; really it's about avoiding competition. I think this is a brilliant insight, and it's one that I've heard elsewhere and that kind of helped me pull together some thoughts in other areas. Okay.
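[Editor's aside, not from the book: the "world without selection" hypothetical can be illustrated with a tiny toy simulation. Bit-string "genomes" mutate each generation, and an optional `keep` predicate plays the role of selection. The filter can only ever remove possibilities, never add them.]

```python
import random

random.seed(0)

def evolve(generations, keep=lambda genome: True):
    """Toy variation process: each generation, every genome spawns one
    mutated copy; the `keep` predicate plays the role of selection."""
    population = {(0, 0, 0, 0)}
    for _ in range(generations):
        offspring = set()
        for genome in population:
            child = list(genome)
            child[random.randrange(len(child))] ^= 1  # flip one random bit
            offspring.add(tuple(child))
        population |= {g for g in offspring if keep(g)}
    return population

no_selection = evolve(10)                                # nothing ever dies
with_selection = evolve(10, keep=lambda g: sum(g) <= 2)  # "unfit" variants culled
```

[With four-bit genomes there are only 16 possible variants in the unfiltered world, but at most 11 can ever exist under this selection rule; that is the sense in which selection only restricts exploration.]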
[01:16:23] Red: The reason there are so many different species is that, as the search for new species happened, it was easier to move to a new unfilled niche, which required specialization, than to compete in existing niches that were already full. Once a niche fills up with competition, that drives biological evolution to find a new niche to avoid the competition. So, page 109, a new niche often makes founding even newer niches possible, and such newer niches often lead to even newer ones. I would note that this is very similar to Peter Thiel's claim in the book Zero to One about the way the economy works. We would normally think of the economy as all this competition, and the competition drives innovation and drives margins to zero. And Peter Thiel points out that, yes, that does happen in a niche that's been filled up, but the real thing you want is the invention of a new niche that has no competition and in essence is a monopoly. Not a monopoly in the bad sense, though. What he has in mind is the iPod or the iPhone, where, yes, maybe in a sense you've got a competitor in Android, but it's a package that's so good that it really doesn't have a competitor, not really. And it's a monopoly of innovation rather than a monopoly of a corporation forcibly taking a market or something like that. In this sense, the goal of a free-market economy isn't to cause competition but to avoid competition. So real innovation is driven by lack of competition, not by competition. Obviously there's a complex relationship here, so don't take that too literally.
[01:18:12] Red: So he then says the challenge with abstracting natural evolution is to boil it down to the most fundamental explanation of its creative power, all without throwing away any essential details, page 111. Evolution is a special kind of non-objective search called a minimal criteria search, the minimal criterion in this case being survival. This abstracts competition away from the definition of evolution: evolution is creative despite the competition it tolerates, not because of competition, page 114. Now, this allows us to circle back to what this podcast is actually about, which is AGI. So let's circle back to Donald Campbell's theory that all knowledge and all inductive achievement comes through variation and selection. But non-objective search teaches us that selection isn't really necessary. In fact, it's easier to think of all knowledge and all inductive achievement as a kind of search, which would be a slight modification to Campbell's theory. And that means the field of artificial intelligence, including the study of AGI, can be unified into a single field that studies how to search effectively. So, on page 122, AI researchers study how to search effectively. The twist, quote, is that the field of AI is itself a search for algorithms, and AI researchers are themselves experts on thinking about searching. In other words, AI researchers are searching for algorithms that search, page 122. There is nothing particularly mysterious about this; it's just that search is an important part of intelligence, so it's no coincidence that both human and artificial intelligence turn out to involve search, pages 122 to 123.
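[Editor's aside: a minimal criteria search can be sketched in a few lines. This is my own toy illustration of the idea, not code from the book: archive every novel variant that meets a bare viability criterion, and never rank candidates against an objective.]

```python
import random

random.seed(1)

def minimal_criteria_search(steps, viable, mutate, seed_genome):
    """Non-objective search: archive every variant that meets a minimal
    viability criterion. Nothing is ever ranked, scored, or discarded."""
    archive = {seed_genome}
    for _ in range(steps):
        parent = random.choice(sorted(archive))  # sorted only for reproducibility
        child = mutate(parent)
        if viable(child):        # the only filter: bare "survival"
            archive.add(child)
    return archive

# Toy domain: genomes are integers, "viable" means staying inside habitat bounds.
found = minimal_criteria_search(
    steps=500,
    viable=lambda g: 0 <= g <= 100,
    mutate=lambda g: g + random.choice([-1, 1]),
    seed_genome=50,
)
```

[The archive drifts outward to cover the viable range without any notion of "better", which is the sense in which the search is non-objective.]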
[01:20:12] Red: For example, machine learning is often the search for the best parameters. LLMs are a giant network, and that network contains nodes with parameters, and what you're really doing is using the data to drive a search for the best parameters in this giant network. When it does find the best parameters, it now contains knowledge about how words relate, which requires some sort of knowledge about the world. That is what the LLM has picked up, and it's why they're so useful to talk to and ask questions of. It's also why they hallucinate: they contain knowledge, but they also contain knowledge of just how to put words together. So the search for AI algorithms can be described as a kind of meta-search, a search for things that search, page 123. They argue that AI today is going about this search wrong, even by its own standards. He points out that typically the way we decide which algorithms to publish in a journal, and this would be an AI journal, is based on two very popular heuristics: the experimental heuristic and the theoretical heuristic. In a nutshell, the idea is that you want to either prove by experiment that your AI algorithm outperforms all previous AI algorithms, or prove that, theoretically, it has certain guarantees, let's say of correctness, that other algorithms don't have. Now, you can see why those would be two very appealing heuristics. Obviously, outperforming is a really important thing.
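[Editor's aside: the "search for the best parameters" can be made concrete with a minimal sketch, my own toy example rather than anything from the book: gradient descent searching for a single weight that fits data generated by the rule y = 3x.]

```python
# "Learning" as a search through parameter space: find the weight w
# that minimizes mean squared error on data drawn from y = 3x.
data = [(x, 3.0 * x) for x in range(1, 6)]

w = 0.0        # starting point of the search
lr = 0.01      # step size
for _ in range(200):
    # gradient of mean squared error with respect to w
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad  # take one step "downhill"

print(w)  # converges very close to 3.0
```

[Training an LLM is the same move at vastly larger scale: billions of parameters instead of one, each nudged by the gradient of a loss computed over the training data.]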
[01:21:55] Red: And having some sort of guarantee, like it's guaranteed to learn this much within this period of time, or the whole "probably approximately correct" framework that we've talked about in the past, which is really a set of guarantees: I can guarantee that you will learn the correct concepts, probably, and approximately, within certain bounds. So there's, say, a 99 percent chance you'll learn, and it'll be this close to correct. That's what probably approximately correct is about. Here's the problem, though. That means we're falling into the myth of the objective when we're doing artificial intelligence. We're not learning the lessons we've learned from artificial intelligence; we're not seeing that we're learning about search, and then applying that to how we search for search algorithms. If you're only allowed to get published when you can show that you outperform, or that you have certain guarantees, you're driving away all sorts of potentially interesting new algorithms that maybe aren't the best performing yet, and maybe don't yet have theoretical guarantees, but that are actually interesting stepping stones that could have led to something else in the future. So he proposes that we don't run journals based on just those two heuristics, which falls into the myth of the objective, but instead based on interestingness. Journals are done by humans, for humans. So instead of saying, summarize for me whether I should pay attention to this or not, based on whether it performs better or has theoretical guarantees, in which case I don't even take the time to find out if this is an interesting theory, you instead have to actually read the paper and assess if it's interesting.
[01:23:49] Red: And if it is, you publish it. He believes that by doing it directly based on interestingness, rather than using experimental and theoretical heuristics as a proxy for interestingness, you would end up with a better journal that is more interesting and does a better job of finding stepping stones. So, anyhow, that is ultimately how he would like to see AI reform the way it does research, based on his findings from this non-objective search approach. And that is the end; I've now probably covered every interesting thing I can think of that was in the book.
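[Editor's footnote on the "probably approximately correct" guarantees mentioned above: for a finite hypothesis class in the realizable setting, the standard textbook bound says that m ≥ (1/ε)(ln|H| + ln(1/δ)) samples suffice for the learned hypothesis to have error at most ε with probability at least 1 − δ. A quick sketch of that formula, my own illustration rather than anything from the book:]

```python
import math

def pac_sample_bound(hypothesis_count, epsilon, delta):
    """Standard PAC bound (finite hypothesis class, realizable case):
    this many samples suffice for error <= epsilon with probability >= 1 - delta."""
    return math.ceil((math.log(hypothesis_count) + math.log(1 / delta)) / epsilon)

# "A 99% chance you'll learn, and it'll be this close to correct":
# a million hypotheses, delta = 0.01, epsilon = 0.05
m = pac_sample_bound(1_000_000, epsilon=0.05, delta=0.01)
print(m)  # 369
```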
[01:24:36] Blue: I think this is an above-average episode, Bruce, and I've enjoyed listening to you. So thank you very much for that.
[01:24:44] Red: All right. Well, thank you very much.
[01:24:52] Blue: Hello again. If you've made it this far, please consider giving us a nice rating on whatever platform you use, or even making a financial contribution through the link provided in the show notes. As you probably know, we are a podcast loosely tied together by the Popper-Deutsch theory of knowledge. We believe David Deutsch's four strands tie everything together, so we discuss science, knowledge, computation, politics, art, and especially the search for artificial general intelligence. Also, please consider connecting with Bruce on X at BN Nielsen 01, and please consider joining the Facebook group The Many Worlds of David Deutsch, where Bruce and I first started connecting. Thank you.