Episode 96: Kenneth Stanley on the Pursuit of What’s Interesting

  • Links to this episode: Spotify / Apple Podcasts
  • This transcript was generated with AI using PodcastTranscriptor.
  • Unofficial AI-generated transcripts. These may contain mistakes. Please check against the actual podcast.
  • Speakers are denoted as color names.

Transcript

[00:00:00]  Blue: Hello out there. This week, on the Theory of Anything podcast, we interview AI researcher Kenneth Stanley about what he calls the myth of the objective. In his book on this, he asserts that in all the complicated areas of life, from machine learning to education and more, we should strive to follow what is interesting rather than a predefined objective. He is such a calm, reasonable guy that it might at first be easy to overlook just how radical this idea is, going against so many known knowns. What I hear him saying is that on a core level, whether in natural selection, machine learning algorithms, or human brains, what really matters is the search for novelty. This is how we solve problems, how knowledge increases, and maybe how we grow as humans. Out of all the out-there ideas we have looked at on this show, from many worlds to Tipler’s Omega Point, this may actually be one of the most out there, but also strangely sensible too. Perhaps an idea that is even likely to influence our lives in a positive way as we wrestle with his thesis.

[00:01:25]  Red: Welcome to the Theory of Anything podcast. We have Peter here, and we have with us Kenneth Stanley today. Hey, Kenneth, how are you doing?

[00:01:34]  Green: Great. I’m very glad to be here. Thanks for having me.

[00:01:38]  Red: He is the author of the book, Why Greatness Cannot Be Planned: The Myth of the Objective. We did an episode on his book. Actually, we’ve done multiple episodes on his ideas, but we did one specifically on his book several episodes ago, and then he also featured prominently in our episode on the problem of open-endedness. There were several people I was quoting in that one, but he was one of the ones whose ideas I went over. We wanted to bring him onto the show, and I wanted to have a chance to ask him all the burning questions that I’d been wanting to ask him ever since coming across his articles and his books. Ken has graciously agreed to come on, and so we’re going to probably jump right in and maybe start with an introduction for him and kind of what his ideas are. Ken, we have on this podcast already gone over your ideas in a great deal of detail, but there’s no guarantee that the person listening to this particular podcast has listened to those podcasts. Can we maybe get a quick summary of your main thesis, your main ideas, and how you came up with them and things like that, and just kind of give us a good introduction to your theories?

[00:02:57]  Green: Okay, yeah, I’m happy to do that, and I think we’re talking about mainly the ideas in my book, not all the ideas I’ve had. But if you have related ideas, I’d actually be interested in those. I’m familiar with the ones from your book. So we’ll start out with the book, since I think that’s really where a lot of the topics we’d like to get to are, and it’s based on the idea that setting objectives can actually have a really bad effect on pursuit, especially of innovation and creativity. But discovery in general can be really undermined by setting an objective. And it’s such a general point because we set objectives all the time. It’s such a normal thing for us to do that it needs a lot of explanation, like how we would come to a conclusion like that. But if it’s true, clearly it has really profound implications, because setting objectives is so ubiquitous across the culture, across institutions, as individuals. Everywhere is about setting objectives. And it was a really long road to getting to this kind of viewpoint, which went mostly through artificial intelligence, because I’m an artificial intelligence researcher, and most of my career was spent doing that. So really, I was focusing on algorithms, algorithms for learning, learning algorithms. Some of them were evolutionary in nature. And experiments in that area revealed that there were some funny things going on, where algorithms that would try to be optimized towards some objective would actually perform worse than algorithms that didn’t have an explicit objective. And I mean, we could get into the details if we want. But these kinds of experimental comparisons started to reveal this principle.

[00:04:48]  Green: And if it doesn’t make sense, because usually it doesn’t make sense, because it’s counterintuitive, the reason that this would happen is because objectives are often deceptive, which means that it appears that you’re approaching the correct direction to get to your objective, but you’re actually not. You’re being deceived. And this is common in complex spaces with hard problems. I should note that the reason this is counterintuitive is that less complex spaces tend not to be deceptive, and therefore objectives do work there. So many of us have experience with setting objectives and succeeding, and because of that, we intuitively believe in objectives across everything. But that actually isn’t always the case, because once you get into really high dimensional spaces and problems, you start to face the problem that those spaces are complex and deceptive. And because of that, the objective can cause you to hit a dead end. But the alternative, which sounds to most people at first like I’m advocating being random, is not what I’m advocating. Randomness is not a good algorithm either. So optimization has its limits. Optimization means basically optimizing towards an objective. But randomization also has big limits. It’s not a great formula for achieving something you want to achieve. Rather, you would do something that’s sophisticated in a different way than we’re used to. What we’re used to is optimization, following a gradient towards an objective. The alternative means following a different gradient, but it’s still an information-rich gradient, something that’s informed by information. It would be something like novelty or interestingness. So I don’t know where I’m going, and that’s okay. But I know that where I’m going seems interesting to me.

[00:06:24]  Green: And really, the difference is that when you have an objective, you’re comparing where you are to where you want to be. But when you’re pursuing something like novelty or interestingness, you’re comparing where you are to where you’ve been in the past. So either way, you’re making decisions based on a comparison, which is informed by information. But it’s a different kind of information. And the information about the past is actually really interesting and rich, because actually, we know more about the past than we know about the future. And so in terms of creative pursuits, innovation, discovery, in other words, how to achieve blue-sky objectives that are far beyond our understanding right now, or, the way I usually put it, more than one stepping stone away, it requires a degree of exploration to achieve those things. And setting an objective can actually be antithetical to eventually achieving those objectives, paradoxically, which is why I often call it the objective paradox. And so sometimes you need to just back up, forget about worrying about where we’re going, actually just drop the idea that we’re pursuing an objective, and instead collect interesting stepping stones, and hope someday that they will lead to some of the things that we once thought of as objectives.

[00:07:31]  Blue: Can I just emphasize before Bruce gets going one thing you said there for our audience? So we’re not just talking about something in machine learning, right? This is business, this is art, this is just our personal lives, perhaps. This is just like a basic epistemological statement about life that you’re making. Is that fair to say?

[00:07:56]  Green: Yeah, yeah, that’s a good point to bring into this for clarity and also because it’s interesting. It’s interesting that what I’m saying, and I totally agree, is not just about algorithms. It’s very much about life. It’s very much about society and how we run it. It’s about how we run our institutions, because the same principles that apply algorithmically also apply to all of those other spheres of life. At a company in Silicon Valley, you hear about things like OKRs, or objectives and key results. It’s basically applying these algorithmic principles in an institutional setting, because we have an absolute die-hard belief in the ironclad reliability of objectives, which is something interesting to get into. But what happened with this research, I think, is a very unique story in the field of artificial intelligence, where something that was originally really just an algorithmic observation seeped out, or you could say leaked out, into social critique. I mean, I’ve actually never heard of this. I don’t know of any other story like this. And it’s not just social critique. I mean, it has an emotional resonance for a lot of people. It changes how they feel to understand that they don’t need to justify all their endeavors through objectives. And I’ve had people that I’ve encountered talking about this for the first time that in certain contexts have been near tears. And this is to me an astounding thing to think about, that people could be almost crying about something that is really just a dry result from algorithmic research. And I don’t think there’s any other story like that in the history of the field.

[00:09:30]  Red: Thank you. In general, I definitely agree with your thesis here, but being the good critical rationalist that I am, I immediately tried to think of the strongest counter examples that I could. So one that came to mind was that I’d read one of the biographies of Elon Musk, about how and why he built SpaceX. So here we have a guy who, if the biography is to be believed, which I suspect it is, actually built SpaceX because he has a grand objective or goal to get humans to Mars, so that when Earth gets wiped out someday, humanity doesn’t get wiped out with it. And so here’s this guy who’s started a company, and it’s been successful, building rockets. And the reason why he’s doing this is because he has this objective (or maybe I’m abusing the word objective here; that’s one of my questions for you) to try to get to Mars. And it seems like he’s somewhat successful. It hasn’t really necessarily been deceptive for him, because he has improved rockets, and while we’re still not on Mars, surely we’ve made strides in the right direction, to where that may someday come to be because of the technologies that he has created along the way. Is this a counter example to the myth of the objective? Or is this not really an objective, not really what you mean? Certainly you could word it like it’s some sort of ambitious objective, and that’s how he sometimes words it.

[00:11:05]  Green: Yeah. So this is a fun game to play that I’ve inadvertently created by writing this book, which is to find counter examples and say, can I show that this isn’t always true? There’s a lot of nuance, though, to analyzing these real-world cases. You really have to pick it apart to understand what it actually means. So at one level, what you have to think about is that in this particular case, for example, we actually don’t have the resolution of the question yet. So we can’t really say it is or isn’t a counter example yet. Like you say, surely we’ve made progress, but you have to remember, the whole point of deception and the objective paradox is that it always looks like you’re making progress when you’re being deceived. So even the smartest person will think that they’re making progress, because that’s the definition of deception. It’s just a truism. Now, I’m not saying he’s being deceived. We don’t have evidence of that yet, because maybe he is making progress. So this story is not finished yet. The jury remains out, and you might have more reason to argue it’s a counter example if he really does get us to Mars. Now, one possibility here is that, just analyzing this particular case, it’s a good example of somebody who’s actually a victim of the exact objective paradox we’re talking about. This is too ambitious. It’s too many stepping stones away and he’s making a mistake. So we’re not going to go to Mars, at least not on the stepping stones that he’s laid for us.

[00:12:40]  Green: I can’t prove that, though, since like I said, the jury remains out on this particular example, but it’s very possible. But maybe a more nuanced view, and what usually is instrumental in analyzing a particular case, is this kind of subtle distinction that I make often when I talk about this, which is that there is a moment where it’s okay to set an objective. Like I conceded early in our conversation, sometimes objectives work. And the way I couched it a little while ago was saying that when objectives work, it’s usually because they’re relatively modest. But there’s another kind of situation, which actually causes objectives that were once extremely ambitious to become more modest, which is when you get to a point where you’re close enough that something that used to be far away is no longer far away. I usually call that being one stepping stone away. I think about the approach to an objective as basically walking across a sequence of stepping stones. First you have to invent this, and then this, and then this. And obviously a lot of these stepping stones have been laid; the invention of rockets, for example, preceded Elon Musk, but it’s clearly relevant to getting to Mars. And so we’ve traversed some stepping stones, but there are some stepping stones left to go. And, and this is just a conceptual point because it’s not formal, but basically what I’m saying is that when you’re one stepping stone away, suddenly something that was inconceivable as an objective snaps into possibility. Because now all the stepping stones are laid.

[00:14:16]  Green: So it becomes what you might call, and I’ll put it in quotes, modest. It’s still not necessarily modest. I mean, and by the way, the things I call modest can still be a lot of work. It’s something like, you know, getting in shape or something. Getting in shape might actually be hard. It might take years of work. It’s not an easy thing to do. But we do know how to do it. We know what the steps are to getting in shape. So I don’t mean to demean things by calling them modest. I just mean that they’re not subject to this objective paradox. And so the real question with visionaries, or the people that we usually think of as visionaries like Elon Musk, somebody with this amazing achievement that they’re trying to achieve, is: are we one stepping stone away? That’s what I would ask. And usually I think a really good visionary, or what I would call a genuine visionary, because there are a lot of false visionaries, a genuine visionary is somebody who realizes for the first time that we’re one stepping stone away, or you could say the first person to realize that we’re one stepping stone away. It’s like you’re keeping an eye on the accumulation of stepping stones in the entire world. Of course, you’re probably only keeping an eye on the ones relevant to your interests, but you’re keeping an eye on those. And you notice before anyone else that something has snapped into possibility that wasn’t possible before.

[00:15:26]  Green: And this is very different from someone, when we’re really far away from something, many stepping stones away, who just creates some crazy far-off objective just to get people hyped up and excited, which has almost no chance of being achieved. That’s more the way that visionaries are depicted in mythology and fiction and things. It’s like somebody who sees way across the horizon, beyond anything we can see. I don’t believe those people exist. That’s basically impossible. Obviously you may come up with some exception, because this is not a formal theorem, it’s a conceptual point. So it’s possible there have been exceptions in history where something highly unlikely happened. But in general, you won’t find cases like that, because it’s basically impossible to see multiple stepping stones into the future. So the real instrumental question in this particular case is: did he see, before other people, that we’re one stepping stone away, that we actually now do have the technology to get to Mars? It doesn’t mean there’s not some work that needs to be done, but in effect the work that needs to be done is pretty evident from the start. And so it’s just a matter of doing the work that needs to be done. And still, I’m not saying that is where we are, but it’s possible. And then there would be no contradiction, because that’s consistent with what I’m saying about when you’re one stepping stone away. I just want to put in one third point about this, because it’s a very complicated thing to say there’s a counter example. There’s also the possibility here that that isn’t actually an objective in his mind. The biography, which I haven’t read, may have a different connotation.

[00:16:56]  Green: It’s not clear, but it’s very hard to really know someone’s mind. Sometimes it’s hard to know your own mind. But you can see that there are a lot of possible fruits to this labor other than getting to Mars. And intuitively, he may understand that there’s an opening here for a kind of industry that hasn’t existed before. That’s a different thing than getting to Mars. The space industry, because of changes at NASA and various other kinds of political issues, has changed, and he just stepped into the vacuum. It was a very astute move. It actually moves towards a different objective, which is still ambitious but also really interesting, which is to dominate this future industry that’s going to be very important. And he’s in the right place at the right time, and in this view he may even have used this narrative about Mars just as a motivation. It’s not actually an objective, but just a motivator, or sort of an advertisement to get people to rally around the real objective, which is more realistic, which is basically to dominate the space industry. And you can see certain interesting stepping stones, like SpaceX to Starlink or something. That’s a more conceivable short-term thing, which leads to a satellite domination of communication networks, which is pretty considerable and amazing. And so it’s hard to say what really was the genuine fundamental motivation here, so that I can critique it and say, well, did he defy the objective paradox or not? So I think it’s a very multifaceted and complicated thing.

[00:18:25]  Green: And every case like this, you can have this kind of back and forth and argue about. But maybe if you really want to get to counter examples, you would go to things that actually have happened. We got to the moon, for example. What’s the explanation of that? Or with planes: would I concede that we built planes that could fly, that somebody tried and they succeeded? And generally, I would always say the same thing: you’re focusing only on the last step in a long chain. And almost every step in the chain until that final step had absolutely nothing to do with that achievement. All the prior inventions that were essential to getting to space, to getting an airplane, to any kind of amazing thing, to building a computer: the vacuum tubes that led to the computer, the people working on them were not thinking about computers. If they were, they wouldn’t have been working on vacuum tubes, and we’d have no vacuum tubes and no computers. And so I would present the hard assertion that there’s no invention ever in history that doesn’t have this property, that the stepping stones were conceived at some point in the past without understanding the final product that they were leading to.

[00:19:33]  Red: All right. Thank you. Let me now use one that’s a little closer to home as a possible counter example. I loved your article about open-endedness as the last grand challenge you’ve never heard of. You were one of the authors on that article. What’s the difference between an objective and a grand challenge? What I have in mind here is that obviously this has motivated you, and the people who wrote the article, and maybe members of your team: this idea that there’s this problem of open-endedness to solve. That is maybe arguably an ambitious objective, because of course you have no idea what the actual stepping stones are towards it. And yet it did motivate you to go out and move on to do novelty search, which we’re going to talk about in a second, and try to find a stepping stone towards that goal. So what’s the difference then between an objective and a grand challenge like this in your mind? And what place does a grand challenge have if it’s too much like an ambitious goal?

[00:20:34]  Green: Yeah. This is another really good question that gets into a lot of subtleties, which are important, and people often wonder how you operate with no objectives at all, or what that means. Are you even allowed to be an AI researcher, since that’s sort of an implicit objective, or an open-endedness researcher, when you’re opposing objectives? The way that I would think about this is that there are two ways to be dedicated to a cause, but it’s really subtle. The way that almost everybody always thinks of is that you’re dedicated to the cause as your objective, like my objective is AGI, something like that, or my objective is to create strong open-endedness; I could say that. And then my theory would suggest that that’s actually an exercise in futility. But it’s not so much an exercise in futility; it’s a little bit more nuanced than that. It’s basically that by creating an objective like that, I’m actually blinding myself to a lot of interesting things that I could be doing that might even be related to that objective, only because they don’t seem to actually be moving me closer to it. And so it has a sort of deleterious effect on my ability to innovate and think out of the box, just by virtue of setting that objective, and that’s the downside of it. So then what’s the second way that I can be dedicated to the cause? Well, the second way to be dedicated to the cause, the way I think of it, is that I am interested in the stepping stone itself, not where it’s going. And so another way I think of stepping stones is as playgrounds.

[00:22:12]  Green: Playgrounds are places where, when you go in, there are all these things you could do, and it’s kind of overwhelming. I’m thinking from the perspective of a five-year-old: you go to the playground and suddenly there are all these things that you could explore. I could climb up that thing over there, slide down that thing. I could swing over there. What’s that rope do? I don’t even know what it’s for, but it’s something I can explore. And going into a playground means you’ve opened up new stepping stones. From the stepping stone of the playground, now you will discover new things to explore, which will lead to other types of things that you can discover. And in that sense, I think of AI, or open-endedness, as a playground. If you think about the article, what I’m advocating is that this is a playground where nobody’s playing, and that’s a mistake. There have got to be tons of interesting stepping stones in that world, the playground of open-endedness. It’s different than saying we’re going to achieve open-endedness as an objective, which I haven’t and wouldn’t claim to have achieved. But certainly by hanging out in that playground, I’ve uncovered a number of really interesting stepping stones. Maybe some will someday lead to actually solving the thing, so to speak. I mean it’s possible, someday, through some circuitous route. But I don’t think of myself as trying to optimize along that gradient. I’m just exploring that playground. And so I can be an AI researcher because I’m interested in AI. It’s not because I’m pursuing it. It’s a subtle thing. What’s the difference between pursuing a goal and just being interested in a subject?

[00:23:32]  Green: But I have to do that to be consistent with my theory. I have to think, in hindsight, why did I get into AI if I’m against having an objective? That’s true. It sounds like a contradiction. But I think the resolution is that it’s just an interest. I’m interested in the area. And so I’ll do algorithms that I don’t necessarily know or have an opinion about, whether they’re advancing us towards AGI or not. But I don’t care, because the algorithms are intrinsically interesting in their own right. And that’s philosophically very different from how the field works. Because generally the field works by comparing any new algorithm to the current algorithms to see if you’re making progress towards the goal, which is AGI, which is usually represented through a bunch of proxy benchmarks. And I just don’t care about that stuff, because I think that I’m independently confident in my own ability to assess whether something is interesting, regardless of a bunch of probably deceptive benchmarks.

[00:24:24]  Red: All right.

[00:24:25]  Blue: One thing that I’ve been interested in lately is the way that practically every technology you can think of has been predicted by science fiction authors at some point. I mean, whether it’s the internet or the tablet or just about everything. So maybe this is the most extreme example, where these guys are kind of in the playground playing with ideas and concepts, but they’re not setting objectives. An objective is when you’re trying to take specific steps to walk towards something in a more algorithmic way. Is that a fair way to put it?

[00:24:59]  Green: Yeah. I mean, I think science fiction writers are just talking about things that are interesting. They’re not actually achieving the things that they’re describing. They could talk about time travel, but it doesn’t mean it’s been achieved. It also doesn’t mean that they’re helping us achieve it. I don’t think fiction about time travel has advanced the cause of time travel in a significant way. So it’s just an aperture into the playground, just showing you there’s a place there. I mean, for many centuries people were thinking of flying machines. It’s kind of an obvious cool thing, before the Wright brothers. But it didn’t cause the flying machines to work at all.

[00:25:35]  Green: I mean, Da Vinci even tried to build a flying machine without success. So we had to wait until the stepping stones were laid to be able to build things like that. So yeah, I don’t think it fundamentally changes things, but it’s true that they play a useful role in sort of helping us to refine our interestingness detector. So it’s not an irrelevant part of the cultural cooperation that leads eventually to exploring these things.

[00:26:03]  Red: Let’s actually skip forward a little on my notes here to interestingness, because we’ve kind of brought that up several times. What is interestingness? I mean, obviously, that’s very hard to define, right? And it seems very subjective. But when you say interestingness, kind of help me understand what it is you have in mind with that.

[00:26:29]  Green: Yeah. So I often raise interestingness because to me, it’s the compass that’s the alternative to the objective. The objective is used as a compass: we’re trying to figure out where our north star is, and we look to the objective to figure out which direction to be going. That’s what leads to all these benchmarks and measures of progress and so forth. So if you’re not following that compass, what’s your compass? Well, the interestingness compass is the alternative compass. And it’s true, the reason that we generally, as a society, don’t engage with that explicitly in serious discourse is because we think it’s subjective. I agree it is subjective, but we don’t like subjectivity. Culturally, we’re not very happy with subjective concepts, especially in science and technology; we want objective assurance that we’re doing the right thing. But I’ve argued that we need to embrace subjectivity in order to be creative, in order to innovate, to make discoveries. And so it’s okay that it’s subjective, but that still raises the question: what exactly is it? Well, to me, it’s the accumulation of all of your experiences and the entire legacy of your evolutionary history, all bottled up into one thing, which is then applied to deciding which direction looks most interesting to you to go. So in other words, it’s informed by enormous amounts of information. Astronomical amounts of information are coming to bear on the question of what’s interesting, because the only entities in the universe that make this kind of assessment, up until very recently, are humans. And so humans are the ones who decide what’s interesting, and humans have an entire lifetime of experience to bring to bear on this.

[00:28:13]  Green: And so what I’m saying is that it’s your lifetime of experience. That doesn’t mean that you are able to assess every type of domain in terms of what’s interesting. If you spent your life training as a physicist, and then somebody wants to know whether a certain recipe, like a cooking recipe, is going to be interesting, you don’t necessarily have useful information to go on there. You’re not necessarily going to have interesting insights into whether that’s actually a good trail of stepping stones or playground to enter. But in the area where you do have a lot of experience, you have enormous amounts of built-up intuition. And the thing is that we spend enormous amounts of investment in people and individuals to build up that kind of intuition over a lifetime. Think about getting a PhD in something, but it’s not really just about PhDs. I don’t want people to think I’m just talking about higher degrees. It’s about becoming an expert in anything, like an expert athlete or an expert chef or whatever it is that you’re getting into. You spend a lifetime, several decades, getting to the cutting edge of your field. Usually there’s a lot of public investment that went into that, at least with public education, and often beyond that with graduate school and things like that, grants that were given, whatever it was that went into creating that mind that you have after all that experience.

[00:29:29]  Green: And then suddenly at the end of that, you could be 30 years old, and we say, let’s just shut it all off and just make it illegal to use all those intuitions, because the only thing we care about now is whether there’s a performance curve that’s going up, which even a kindergartner could look at to see if the line is going up and to the right. That’s the optimization. That’s the objective following. And of course, we should be admitting people’s intuitive understanding of what’s interesting, after building up all of this information over decades of experience, in order to decide what will be an interesting experiment to perform, what will be an interesting hypothesis to pursue. That’s what interestingness is about. Or even what’s an interesting art form, or what’s an interesting picture, or whatever field you’re in. The accumulated experience of decades provides enormous substrate to make really interesting subjective decisions. And one last caveat to that is we should be able to discuss why something’s interesting. One reason people worry about this kind of argument is they feel like it’s opaque. You could just come to me and say, I have decades of experience thinking about computers, and so you should give me a million dollars to do this. That would be a crazy argument. I’m not obligated to give you a million dollars just because you’re such and such years old or have been working for so many years. You should be able to explain why it’s interesting. That’s still something we should be able to do.

[00:30:53]  Green: That’s what I think is missing from the discourse: we don’t allow ourselves to discuss why things are interesting, because it’s subjective, and we’re afraid of subjectivity, because we need an objective security blanket. But remember, it’s just a security blanket, because it’s deceptive. So pursuing an objective is no more of an assurance that you’re going to achieve what you want to achieve in life than not pursuing an objective and just following the path of the interesting. And so we should be able to have a really complex, grounded discussion about what’s interesting. Assuming that we’re peers in an area, because a computer scientist and a chef may not be able to have a great discussion about this, but a computer scientist and another computer scientist could discuss what’s interesting in a productive way. And that means asking: what does this playground open up? I want to do this thing. This is the project that I think is interesting. I don’t know where it’s going, because I’m not going by objectives. But what I can tell you is it’s a whole new frontier, and here’s why, because these things have never been looked at before. The phenomena underneath these, once we look at those, will change our perspective undoubtedly, even though I can’t tell you how. And things like that should be able to be made explicit, so that we could have a genuine subjective conversation about why you have the beliefs that you have about what’s interesting. And by the way, I just want to add one other thing. I said that up to now it’s been humans who make this judgment. We’re just getting to the point where maybe computers will start to have something to say about what’s interesting.

[00:32:16]  Green: So that gets kind of interesting too. Can computers actually be a participant in this process of deciding what’s interesting and where to invest in research? But that’s at the edge of the possible right now.

[00:32:30]  Red: All right. Thank you. So now, you took a look at the grand challenge of the problem of open-endedness, which seems to be related to what had happened with your Picbreeder experiment and the fact that you had discovered this idea that having objectives could be deceptive. It seems like you tried to come up with what a search would look like if it didn’t have an objective. And what came out of that was what you call novelty search. Can you maybe describe what novelty search is, and how you came up with it, and how it works?

[00:33:10]  Green: Yeah. So it helps maybe to give the background of how we came up with this idea. It does go back to that Picbreeder experiment that you just alluded to, where we were allowing people to breed pictures, which, for a lot of people, sounds strange. So maybe to make it a little more concrete: you see a number of images on your screen, you pick the one that you find interesting, and it has children. And if that sounds strange, it’s just like if you were breeding horses or dogs or something. It’s just that on a computer, we can artificially cause the image to have offspring. So just like if a horse had children, if this image has children, they resemble the parent, but they’re not exactly the same. And so you could breed over generations, and people bred to amazing ends. They found things like butterflies and cars and planets, all kinds of artifacts of the real world that look familiar, in a system that knew nothing about any of this stuff. So it was purely through breeding. And what was interesting is we discovered in that system that when people made these discoveries, which by the way were hard, hard in the sense that these were extremely rare needles in a haystack, because the vast majority, like 99.999%, of the space of Picbreeder is just total garbage, just wallpaper patterns and static and things that mean nothing. People were finding these needles in a haystack consistently, and we were interested in how: how are they making these discoveries?
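
A minimal toy sketch of the breeding mechanic described here, in Python. This is not Picbreeder’s actual implementation (the real system evolves CPPN networks with the NEAT algorithm); the function family, parameter count, and mutation scale below are illustrative assumptions.

```python
import numpy as np

def render(genome, size=64):
    """Render a genome as a grayscale image: every pixel is a smooth
    function of its (x, y) coordinates and the genome's parameters."""
    xs = np.linspace(-1.0, 1.0, size)
    x, y = np.meshgrid(xs, xs)
    a, b, c, d = genome
    img = np.sin(a * x + b * y) * np.cos(c * x * y + d)
    return (img + 1.0) / 2.0  # scale to [0, 1]

def offspring(genome, n_children=8, sigma=0.3):
    """Children are slightly mutated copies, so they resemble the parent."""
    rng = np.random.default_rng()
    return [genome + rng.normal(0.0, sigma, size=genome.shape)
            for _ in range(n_children)]

# One "generation" per loop: render the children, let a human pick the one
# they find interesting, and breed from it. No objective appears anywhere.
parent = np.array([3.0, 1.0, 2.0, 0.0])
for generation in range(5):
    children = offspring(parent)
    images = [render(child) for child in children]
    choice = 0  # stand-in for the human picking the most interesting image
    parent = children[choice]
```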

[00:34:39]  Green: This is related to open-endedness, actually, in some way: how people actually make discoveries in an open-ended search process. And we discovered something that to me was totally astonishing, shocking really, which is that people were only making the discoveries when they were not looking for what they ultimately found. So in other words, the only way to find something in Picbreeder is to not be looking for it. And it’s totally paradoxical and totally counterintuitive. And up until that moment in my life, I believed in objectives too. So I noticed right away that this contradicted that fundamental belief, and I was trying to reconcile this with what I’d been taught. Because I come from an engineering and computer science background, I’ve been taught that you set objectives and then you strive towards them. And there are different ways: there’s top-down design and bottom-up design and optimization and gradient descent. There are lots of ways of talking about getting to an objective, but that’s how you do stuff. But I was looking at Picbreeder, and it was clear that is not how you do stuff here. And I was trying to figure out what’s the lesson to draw from this contradiction. I was totally obsessed with this. I mean, this is where the beginning of all the things we’re talking about was: this discovery inside of the Picbreeder system. And I was sure that there was something really fundamental going on here that I needed to understand. And I eventually started to understand the principles that we’ve been talking about. But the natural thing for me to do, being a computer scientist, was not to jump into social critique. That came way later.

[00:36:06]  Green: It was to just try to think of an algorithmic explanation, or something that could distill the insight into an automated search algorithm, something in AI or machine learning, because that’s basically my job. That was basically why we were doing the research: I want to understand how algorithms that discover interesting things should work. And so the lesson that I drew here is that it’s actually sometimes better not to have an objective in order to make certain kinds of discoveries. And what I was trying to think of was how you can mechanize that insight, turn it into an algorithm, and take humans out of the loop. And it’s very hard to mechanize interestingness, because it has never been formalized successfully. Because, like I said, it’s as complex as the human mind. There are some theories about interestingness; Schmidhuber would cite himself here. I mean, his whole website’s on this. But while I think they make a dent in the question of what interestingness is theoretically, I think interestingness is the accumulation of all of our knowledge over all of our lifetimes. And so it can’t just be formalized simply into an equation. It’s something massive. It’s more like a large language model’s kind of mass of information than it is a small formula. So to me, at the time, I was thinking, I need a proxy that’s relatively simple, because I’ll never be able to actually formalize interestingness. And that’s why I appealed to novelty. That’s where the name novelty search comes from.

[00:37:29]  Green: Because novelty is kind of close to interestingness, close enough to be a pretty good proxy for seeing what something like that would do algorithmically, in the sense that not everything that’s novel is interesting, but I would assert that everything that’s interesting is novel. And so novelty is a pretty good proxy. So as long as I could define novelty in some domain, then I could say: follow the gradient of novelty instead of following an objective gradient. In other words, you have no objective. We don’t even know where we’re going, but we try to do novel things. And then the hypothesis is, if I cause this machine learning algorithm to keep on trying to do novel things, it will make discoveries that are truly useful and interesting. And I think this is not obvious to most people. If you just say, well, it doesn’t have any objective, it doesn’t even know what it’s trying to do, and I’ll just tell it to keep doing novel things and something interesting will happen, most people initially are like, that’s crazy. What kind of algorithm is that? A lot of people use the word random, but like I argued, it’s not random. That’s why your intuitions are wrong. Because if it was random, you would be right to think it would do nothing. But novelty is based on information. It’s a very information-rich operation to move along the gradient of novelty. And so you shouldn’t predict that what it will do is random. The question is, what should you predict? And what I would predict is that it will do interesting things. It will accumulate information; eventually, it will do something interesting and possibly solve a problem that you have.

[00:38:46]  Green: But you shouldn’t think about it that way, because that would be an objective way of thinking about novelty, which is not what it’s about. But it does solve some problems. It’s just that we don’t know what problems it’s going to solve. So we built this algorithm and tried it out in some experimental domains. For example, we tried a biped robot, where we just said: keep doing new things. And it learned how to walk. It learned how to walk better than if we tried to optimize walking, which is a paradoxical result that is often very confusing to people. But again, it just goes back to deception. The stepping stones to walking don’t necessarily resemble walking. So if you’re trying to optimize walking, you might actually miss better stepping stones that initially don’t look like walking, like oscillation, for example. And so we saw result after result; the results accumulated over years, not just from our lab, but many labs. It started to be clear that there is a real phenomenon here. And eventually the interpretation of that phenomenon led to the book and all of the kind of social implications that come out of this as well.
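
A minimal sketch of what the core novelty search loop might look like, under simplifying assumptions: genomes here are plain real-valued vectors with truncation selection (the published experiments evolved neural networks with NEAT), the behavior function is stubbed since the behavior characterization is domain-specific, and the constants are illustrative. The point the code makes is that individuals are scored only by distance from past behavior; no objective appears anywhere.

```python
import random
import numpy as np

K = 15            # novelty = mean distance to the K nearest past behaviors
THRESHOLD = 0.5   # how novel a behavior must be to enter the archive

def behavior(genome):
    """Domain-specific behavior characterization (stubbed). For a biped this
    might be footfall positions and timings; for a maze robot, its final
    (x, y) position."""
    return genome[:2]

def novelty(b, archive, peers):
    """Compare where we are to where we've been: mean distance from b to its
    K nearest neighbors among the archive and the current population."""
    dists = sorted(float(np.linalg.norm(b - other)) for other in archive + peers)
    return float(np.mean(dists[:K])) if dists else float("inf")

def mutate(genome, sigma=0.1):
    return genome + np.random.normal(0.0, sigma, size=genome.shape)

population = [np.random.uniform(-1.0, 1.0, 10) for _ in range(50)]
archive = []
for generation in range(100):
    behaviors = [behavior(g) for g in population]
    scores = [novelty(b, archive, [p for p in behaviors if p is not b])
              for b in behaviors]
    # Sufficiently novel behaviors are remembered, so the search keeps
    # getting pushed somewhere new instead of cycling.
    archive.extend(b for b, s in zip(behaviors, scores) if s > THRESHOLD)
    # Reproduce the most novel individuals -- there is no objective here.
    ranked = sorted(zip(scores, range(len(population))), reverse=True)
    parents = [population[i] for _, i in ranked[:10]]
    population = [mutate(random.choice(parents)) for _ in range(50)]
```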

[00:39:43]  Blue: When you say interesting, how I kind of interpret it is, would you say it’s synonymous with... well, David Deutsch has this idea called the fun criterion, where if you’re choosing between two theories (there’s more to it than this), you kind of go with what’s fun. Or there’s the concept of meaningful. I’m thinking of Viktor Frankl, is that the guy’s name, who wrote the book Man’s Search for Meaning, where it’s man’s unique place in nature, really, that we pursue what is meaningful or joyous. There are all kinds of related words you could use there. Are these all sort of interwoven concepts to you? Meaning and joy and fun and interesting? Is that fair?

[00:40:33]  Green: They’re certainly related. I mean, they’re certainly related. So I think it’s fair, but it’s important also to point out that the beauty of the idea of interestingness is that none of us agree on what’s interesting. Every individual person has a different definition, if you could write it down, which you can’t, of what interesting is. What you find interesting is not what I find interesting. There may be some overlap, but it’s not a complete match. And what that means is that we get more diversity. You’re going to pursue things I wouldn’t pursue, and I’ll pursue things you wouldn’t pursue. And the beauty of it is that I might find a stepping stone that ends up instrumental to you because of that, which you would never have uncovered because you’re not interested in it. But it’s a good thing I found it interesting. And so this is one of the fuels of open-ended systems: the fact that we don’t all agree on what’s interesting. It allows us to get divergence and to collect a diversity of stepping stones, which means that the next generation of discoveries is even more diverse, and we can get to even more places from that. A collection of stepping stones is the power of open-endedness. The more jumping-off points you have, the more places you can get to. And one concrete example of this is in Picbreeder: we observed clearly, because people had accounts where they could save their discoveries, the kinds of discoveries people have. People have vastly different kinds of stuff that they’re discovering, depending on their personality.

[00:41:57]  Green: There was one woman whose whole account was full of bugs. It was like spiders and beetles, and my account had lots of faces. And these are often things you don’t know about your own self. She was one of my students, and she asked me: is Picbreeder designed to produce bugs? That’s a really funny question. It’s not designed to produce bugs. It doesn’t know anything about bugs. Rather, it was that she didn’t know that about her own self, that this is something that for some reason has valence, that resonates with her. She’s attracted to bug-like forms. It’s completely from her. So it’s a mirror, showing something back to her about herself. And I apparently like faces. And I didn’t necessarily know that about myself. But we’re different. We’re different in the kinds of things we pursue. And it’s a good thing, because now Picbreeder has both bugs and faces on it. And so it’s not something we need to get a universal holistic definition out of. But we do want to understand some of the underlying factors. So it’s not like I would dismiss your point, because the ideas of beauty or fun, those hide behind the diversity. Of course those are factors in it. And they are worth trying to disentangle and further scrutinize. I think that’s true because, from an algorithmic perspective, eventually we’d like to understand what it means to have fun. Because eventually we want these models, artificial models, to participate in this kind of process. And so understanding better where all of that originates is worth thinking about.

[00:43:32]  Green: But it’s just that it’s not going to be the same for any two people.

[00:43:37]  Red: So can you go into some detail on how novelty search actually works? At a high level, your book explains it: you’ve got the virtual robot, and maybe it initially just falls over and kicks its legs. So I can understand it at a certain level. I struggle to understand how you would actually implement the concept of detecting novelty in a program like this, and I’m very curious how. It almost seems like you would have to have a concept of what kind of novelty you were interested in, and I don’t see how you’d get around that, or if you do, it’s something that I just haven’t been able to think of.

[00:44:17]  Green: Yeah, so there’s a long history of thinking about this question, and it’s true that the most immediate thing, and the thing that we did initially, is to leave it on the shoulders of the human to decide how to measure novelty in a particular domain. So that means it is domain-specific, at least as a first stab. And so you have to think about, like you said, what you’re interested in. So yeah, a little bit of the magic is coming from the human. You could say that’s almost like a little cheat there: the human has to decide what the features are that are interesting here. I don’t actually think it’s cheating. I think, actually, if this leads to discovering things that are really useful, interesting, or insightful for us, what does it matter whether a human was involved or not? The important thing is that it did something we otherwise couldn’t do. But it’s true that at first, in the early days of novelty search, it was a domain-specific question. So the very early novelty search experiments were robots in mazes. It was sort of a nice canonical platform for demonstrating it conceptually, because you could see all of the paths that were explored by the robot that was running novelty search. And so it’s very easy to see how it’s working and get an intuitive understanding of what’s going on. And for a robot in a maze, it makes sense for novelty to be measured with respect to where the robot ends up in the maze. Or the trajectory: a sequence of (x, y) positions could be the novelty measure. We actually call it the behavior characterization.

[00:45:49]  Green: That’s what we call this vector of information: how do you describe the behavior? And then you would compare one behavior characterization to another. So you could compare one trajectory to another. Or really, what it would be is you’re comparing a trajectory to a whole archive of prior trajectories, to see if it’s novel. And so that would make sense there. And for the walking robot, it’s something like the (x, y) positions of where the feet hit and the times when the feet were hitting the ground. I mean, there are other things you could do. It’s subjective, of course. Ultimately, what should we care about? You could imagine that if the robot in the maze had a light on its head, we could also measure the pattern of flashing of the light. But the problem there is that it’s probably not interesting, because it’s completely orthogonal to the trajectory through the maze. And presumably the reason we have a maze is because we care where it goes. And so the light is irrelevant, and a human would know this. And so it’s true that human intelligence is part of deciding what we care about and what’s interesting to us here. And that was at play. Now, over time, as the field progressed and people came up with ideas, it was an obvious thing to ask: is there such a thing as a generic measure? Is there a way to take the human out of the loop? I mean, of course, people were thinking about this. And people tried things like information-theoretic measures to try to do that.

[00:47:06]  Green: One thing you could say is: let’s just look at the inputs and the outputs of the neural network, and just forget about what’s going on in the outside world, and just track those patterns in some way, characterize those patterns, and just see if we get new patterns. We’d have no idea what’s going on in the domain. This could be applied generically to any neural network that’s running novelty search. And those have shown some degree of success. But what I would say is, as is usual in machine learning, any completely generic approach tends to apply better across the board but do worse on any specific case. If I really wanted the best performance I could possibly get in this domain, it’s probably better to use human insight than the generic measure. But if I can’t use human insight or something like that, then the generic measure is kind of the second-best option. And maybe we can get some progress through measuring it generically.
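
To make the maze example concrete: a sketch of the domain-specific route described above, assuming the robot’s position is logged at fixed time steps (the original experiments used measures like the robot’s final position; the waypoint sampling and threshold here are illustrative choices).

```python
import numpy as np

def behavior_characterization(trajectory, samples=5):
    """Compress a robot's path into a fixed-length vector by taking a few
    evenly spaced (x, y) waypoints: the 'vector of information' above."""
    idx = np.linspace(0, len(trajectory) - 1, samples).astype(int)
    return np.asarray([trajectory[i] for i in idx], dtype=float).ravel()

def sparseness(bc, archive, k=15):
    """Novelty of one behavior: mean distance to its k nearest neighbors
    in the archive of previously seen behaviors."""
    if not archive:
        return float("inf")
    dists = sorted(float(np.linalg.norm(bc - past)) for past in archive)
    return float(np.mean(dists[:k]))

# Usage: characterize a finished run, score it against the archive, and
# remember it if it is novel enough for this domain.
archive = []
trajectory = [(0.0, 0.0), (0.3, 0.1), (0.5, 0.4), (0.6, 0.9), (1.0, 1.0)]
bc = behavior_characterization(trajectory)
if sparseness(bc, archive) > 0.3:  # threshold would be tuned per domain
    archive.append(bc)
```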

[00:47:58]  Red: Okay. So what type of limits did you bump into with novelty search? It seems like I recall reading somewhere that eventually it led to the robot coming up with clever new ways to walk down a hallway that maybe to a human didn’t seem as interesting.

[00:48:14]  Green: Yeah. So there are limitations to novelty search. Novelty search to me is more of a pedagogical point than a utilitarian algorithm. It’s not actually clear what it’s for, exactly. Now, there are successors to novelty search that might be more useful as a practical matter. There’s a whole field now called quality diversity algorithms, which grew out of novelty search, and which I think is more practically oriented. But the original novelty search algorithm is almost making a philosophical point. So it’s not necessarily trying to do something that’s useful, although occasionally it does something useful. What I mean is that it’s not objectively oriented at all. It’s as far in the other direction as you could possibly go. So you aren’t telling it, you have no mechanism to tell it, what you want it to do. You have to take that out of your head. And you also have no guarantees that it will do anything useful at all, because it’s just being asked to do new things. And so of course, sometimes, if you want to call it a pathology, the pathology is that it’ll go off into the vast space of insanity that exists in the domain, the stuff that nobody cares about. That can happen. Now, the simpler the domain, the less likely that is to happen. In other words, that’s why you do sometimes see interesting results out of novelty search, like learning how to walk: because there just aren’t that many things to do in that domain. And so it eventually finds the pockets of possibility that are interesting there. But in a vast, vast space, it’s not necessarily going to do something that’s useful for you.

[00:49:50]  Green: But it makes a point, like I said, a conceptual, philosophical point, because what I think is really, really salient about novelty search is that it’s just embarrassing and crazy that this algorithm that doesn’t know what it’s trying to do can optimize a better walker. I shouldn’t even use the word optimize, because that’s the wrong word. It can discover a better walker than an algorithm that’s trying to explicitly optimize walking. That should be an absolute embarrassment to every objective-based algorithm on earth. And it should raise huge red flags all over the place. It’s a crazy result. And so some people take the wrong lesson from that and say, what he’s saying here is I should just use novelty search for everything, that it’s actually the best algorithm in the world. That’s obviously not correct. Novelty search doesn’t provide any guarantee you’re going to solve anything whatsoever. The lesson here is not that. The lesson is that there’s something wrong with objectives. This should not be happening. And the fact that it’s happening raises a lot of questions and should change our understanding of what it means to set an objective and to optimize, and what we’re really asking an algorithm to do.

[00:50:57]  Red: Okay. You mentioned something as a successor to novelty search. Quality something. Can you say that again?

[00:51:05]  Green: Quality diversity algorithms.

[00:51:07]  Red: Quality diversity algorithms. Okay. I’m going to have to look that up after the show. That sounds actually very interesting. And that’s actually maybe related to what my next question was going to be. Novelty search seems very clever to me. And I can see how it’s kind of a proxy for interestingness, and therefore kind of mimics what a human is doing when we have this kind of open-ended search of ideas and we’re using our own interestingness as a way of creating the open-endedness. It’s less clear to me how that would relate to biological evolution, because biological evolution, I would assume, doesn’t have… maybe I’m wrong, but I would assume it doesn’t have any sort of novelty-search-like structure to it. Have you given any thought to that? You mentioned this quality diversity algorithm. Does that show maybe more potential for matching up to what biological evolution does? Or have you given any thought to how novelty search does or does not relate to biological evolution, and why biological evolution is able to do open-endedness without having interestingness as a guide?

[00:52:18]  Green: Yeah, yeah. Certainly I’ve thought about it a lot. I mean, I originally entered the field mainly interested in evolutionary algorithms. So I was inspired by evolution. I was trying to understand really how evolution in nature leads to astronomical complexity. That was one of my original motivations for a lot of the work that I did, going back before novelty search, all the way back to the NEAT algorithm, or NeuroEvolution of Augmenting Topologies, and things that succeeded that. And so I was thinking about evolution the whole way through. And my view is that, with evolution, I’m not really just interested in modeling what evolution actually does. So it’s true that novelty search is not what evolution does in some kind of one-to-one mapping. The reason is because what I’m actually trying to understand is what in evolution actually accounts for the interesting things it does. When you observe a natural phenomenon, there are some properties that it has which are undeniably, factually there, but that actually are not the fundamental explanation for the part of it that really is captivating you. There are examples of this in embryology too. I also did a lot of work in artificial embryology, trying to find what the actual things are that lead to embryogenesis being such a good way of representing organisms. And so you can look at the natural phenomenon and just say, well, I can tell you what happens: it starts out as a single cell and then it’s splitting and splitting, and there are genes interacting, and a gene regulatory network, and blah, blah, blah. But that’s not the question. I’m not asking what happens, because that is what happens.

[00:53:55]  Green: I'm asking what's the deeper level of abstraction, or I should say the higher level of abstraction, that actually explains the interesting part of the phenomenon. And when it comes to evolution, to me the interesting part of the phenomenon is the open-endedness, not the optimization. I don't deny that evolution does perform an optimization function in some pockets; there is optimization going on there, but I don't find that interesting. So I think of that as the uninteresting part of the problem. In some ways it's like the way David Chalmers talked about consciousness: he said there's the hard problem and there's the easy problem. I think that's true of a lot of subjects. In terms of evolution, the easy problem is to explain the optimization process. The hard problem is to explain the open-endedness. And most textbooks and biological explanations focus on the easy problem. It's like, yeah, selection and competition and all this stuff. Those are all convergent types of processes. Competition is a convergent process; it's a death match that leads to a winner. The thing that's really mysterious and intriguing is the divergent process. What explains divergence? Competition does not explain divergence. So what does explain divergence, explicit divergence? And how much could you get if you just extracted that one property from the process, its divergent nature? That I would call more like an ablation study: let's cut all the other chaff out of evolution, which most people would think is interesting, but I don't, all of these other things like trying to compete with respect to fitness and so forth.

[00:55:27]  Green: And just distill it down to a fundamental essence, which is a high, high-level abstraction of the thing I'm interested in, which is the divergent property, and see what it does. What are you left with? I think that's what novelty search is. It's a kind of distillation to a very high level of abstraction of just the divergent property and only the divergent property. It's like taking the divergent property of evolution and putting the foot down on the accelerator. Because it's true, like you say, that evolution is not explicitly novelty searching. In other words, there aren't many explicit mechanisms pushing towards novelty. There aren't zero, though, I should note. There is mate-centered, novelty-based selection: sometimes mates do have a preference for novelty. There's also niche founding, where it can actually be useful to be different, because you get separated off from everybody else that you otherwise would have had to compete with. So there actually is an incentive for novelty in some cases. It's not like there's no novelty in there, but it's a big complicated mess of things that are all interacting, so it hasn't been distilled into this one thing the way novelty search has. So novelty search is: let's take that metaphor, let's abstract it out, let's go all in, all eggs in one basket, and put the foot on the accelerator, pushing towards divergence in a way that evolution doesn't explicitly push, and just see what happens if we focus on this one property.

[00:56:52]  Green: And what we find, I think, is very surprising: it has a lot of the outcome properties of evolution even though it's missing all of the mechanisms except for this one aspect of divergence. And so in that way, I think it's highly illuminating for understanding what's interesting about evolution, even though it's such a high-level abstraction that it's not at all mechanistically the same thing.

[00:57:18]  Red: Interesting. By the way, one of my favorite quotes from the problem of open-endedness article that I've mentioned a couple of times is: "It is now becoming clear to us that open-endedness, while perhaps simple, involves a kind of mind trick that would force us to re-examine all our assumptions about evolution. The whole story about selection, survival, fitness, competition, adaptation, it's all very compelling and illuminating for analysis, but it's a poor fit for synthesis. It doesn't tell us how to actually write the process as a working open-ended algorithm. To pinpoint the reason we see open-endedness in nature, and hence become able to write an algorithm with analogous power, likely requires a radically different evolutionary narrative than we're used to," which is similar to what you were just discussing right now. Now, I know this is not your wheelhouse, but after reading your article, I became interested in the fact that there are some biologists trying to rewrite neo-Darwinian evolution to have a different story, and I don't necessarily see any strong connections between your theories and what they're doing. Ray and Denis Noble come to mind as scientists who have tried to push towards a rewrite of evolution with a different story as to how it works. I confess that when I read their books, I don't always understand what they're getting at. But I've also read James Shapiro's book, and he has pushed for something called natural genetic engineering. He has this idea that evolution doesn't really work by just natural selection. That is a part of it, obviously, but the genes actually have sensors.

[00:59:05]  Red: They actually have knowledge of how to take mutations and move them in the genome and engineer what they want to have happen. Do you have any familiarity with any of these theories? Do any of them seem to have any overlap with where you're going, or does this just seem totally unrelated to what you're working on?

[00:59:23]  Green: Yeah, I have some passing familiarity, but nothing intimate, so I just want to give that disclaimer up front. I'm not an expert on these theories, so I don't want to pretend to have an expert critique of those authors. But I understand the gist of what this is about, and from the gist of it, no, I don't think it's on the same page with me. And it's not just in a superficial way; I think it's deeply different in terms of what they're exploring, because they're looking at mechanisms that I would probably say aren't the most interesting, though they're actually trying to show that there are interesting ways these mechanisms occur. So I don't mean to suggest that it's not interesting research. Obviously, someone should be looking at these questions, and it is worth understanding that there's maybe more there than meets the eye. But what I mean is, from a macro-level perspective, looking at the whole evolutionary process, what interests me is, like I said, divergence, or open-endedness: explaining how it can diverge over eons. We're talking about more than a billion years; it's almost like forever. And it created all of living nature in a single run, which is literally biblical, like it's all of creation right there. I don't think what they're looking at is part of the explanation of that. One reason is that these are actually optimization-centric questions they're addressing, or you could call them objective questions. Any mechanism that allows you to improve your performance with respect to some metric at the current time is a form of optimization.

[01:01:08]  Green: And optimization processes lead to convergence, not divergence. If I can more quickly converge my species towards the locally optimal behavior for survival, well, then the species will converge. This is the opposite of explaining divergence. And so to the extent such mechanisms are available, you would get dramatically less diversity, fewer stepping stones, fewer species over time, and evolution would be less spectacular. I still think it's important to get to the facts of the matter in terms of how these things work, because I'm not claiming that what I'm interested in is the only thing we should care about. I'm just saying this is what I'm interested in. We should also want to know how things actually work at a mechanistic level, and they're doing that, and I'm not. So I'm hardly arguing that we should stop that kind of research. But for the purposes of trying to understand open-endedness in the creation of all living nature in a single run, this is probably not going to end up being extremely illuminating. And it relates to the point I made about the fact that there are so many seemingly salient phenomena that are very hard to take your eye off of when you look at a natural phenomenon like the unfolding of an embryo, so it can be really hard to sort out what really matters for understanding, at a fundamental level, what's going on. The magic of the unfolding process itself is so salient that it's hard to take your eye off of it.
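
The convergence-versus-divergence contrast can be shown in a toy simulation. This is only an illustration of the general point, not a model of biology: the population here is a list of scalars, fitness has a single optimum, and "novelty" is just mean distance to the rest of the population, all of which are simplifying assumptions.

```python
import random
import statistics

def evolve(select, pop_size=100, generations=50, sigma=0.05):
    """Evolve a population of scalar 'genomes'; `select` picks parents."""
    pop = [random.uniform(-1.0, 1.0) for _ in range(pop_size)]
    for _ in range(generations):
        parents = select(pop)
        pop = [random.choice(parents) + random.gauss(0.0, sigma)
               for _ in range(pop_size)]
    return pop

def fitness(x):
    return -(x - 0.5) ** 2          # a single optimum, at x = 0.5

def select_by_fitness(pop):
    # Convergent pressure: everyone is pulled toward the one optimum.
    return sorted(pop, key=fitness, reverse=True)[: len(pop) // 4]

def select_by_novelty(pop):
    # Divergent pressure: the most novel individuals are those
    # farthest, on average, from everyone else.
    def novelty(x):
        return sum(abs(x - y) for y in pop) / len(pop)
    return sorted(pop, key=novelty, reverse=True)[: len(pop) // 4]

random.seed(0)
converged = evolve(select_by_fitness)
diverged = evolve(select_by_novelty)
# The fitness-selected population collapses into a narrow band near 0.5,
# while the novelty-selected one keeps spreading outward.
print("fitness spread:", round(statistics.pstdev(converged), 3))
print("novelty spread:", round(statistics.pstdev(diverged), 3))
```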

[01:02:34]  Green: And so I think this is a case where seeing all of these mechanisms is just absolutely magical, like seeing that it could actually engineer its own genome. It's an insanely incredible process. But it can distract you from seeing something that's even deeper and more fundamental in terms of explaining how something like what happened on this planet is possible.

[01:02:55]  Red: Okay. Let's talk about practical applications for your research, particularly how it relates to artificial general intelligence. So you had some suggestions in your book about how maybe funding of academia is too objective-oriented, and then you made this really interesting point, which I'm sure I'm not going to be able to do justice to, that artificial intelligence is in some sense the search for search algorithms, and that we don't even apply what we know about search algorithms in the search for search algorithms. Maybe I'm saying this poorly, so please correct me if I'm doing a poor job of representing your views here.

[01:03:43]  Green: No, I think that's a good statement. There are clearly practical implications, and now we're talking at the level of institutions in some sense. I think there are practical applications of these insights, because it's true that we run almost all our institutions through objective means, and I would claim that as a result we have a lot of unintended consequences of the objective paradox. In other words, we are asking for deception. We don't want to be asking for it, but in effect that's what we're doing by running everything in terms of objectives. And there's a lot of sociological analysis you could do to understand why we are like this. Why did we get to a point where every single thing we do has to be objectively justified? Whatever the reasons are, and they're fun to talk about too, what the analysis in the book shows is that it shouldn't be this way. Not everything should be objectively driven, especially in the pockets of our institutions and our personal lives where what we're actually after is innovation or discovery or creativity. In those pockets this is actually really hurting us. And those pockets are very, very important, because that's how progress happens: through innovation. There are a lot of institutions that have been set up explicitly because they're supposed to facilitate innovation, because all the problems in the world that have not been solved will only be solved through innovation. So there's an enormous weight on these institutions that are trying to do this.

[01:05:23]  Green: And so if they're making a mistake because of the objective paradox, because when you pursue an objective that's too far away you end up deceiving yourself and blinding yourself to other stepping stones that might lead to other important discoveries, as well as stopping yourself from making the discovery you're trying to make, then we are slowing down all of innovation across society, inadvertently, unintentionally, but obviously that's bad. We don't want to be doing that. And so that potentially leads to some prescriptive solutions, based on the insights from novelty search and the whole theory behind these kinds of open-ended algorithms, to the extent we've worked it out, about better ways these institutions could be run. I'm thinking of granting agencies, innovation labs and research labs inside of corporations, investors, especially venture investors thinking about the next big thing, the government attempting to solve big, big problems in society and in the world, trying to cure diseases, environmental problems. There are all kinds of extremely ambitious objectives littered across society that are subject to the objective paradox. So it's something to be worried about, because we approach these things objectively. And the example you gave of artificial intelligence is one. Artificial intelligence is obviously a very ambitious objective, or really you could call it AGI these days, since that's the way we articulate it: artificial general intelligence. The objective of AI is AGI, you could say.

[01:06:55]  Green: Well, if we take that to heart and really pursue things that way, then yes, you start to see all these pathologies coming out that come from pursuing deceptive objectives. And you see it really explicitly in this field, and maybe part of the reason is that I'm in the field, so it's an easy example for me to refer to because I've been watching it, but it's very objective how it works, because we have benchmarks, and benchmarks are basically a way of making things objective. And people are really, really worried about knowing whether we're really making progress; that's what objectives mean. Everywhere I've been, in academia and in industry, there's a constant drumbeat of "we need better benchmarks"; we've got to have a way of knowing: is this really better than that? Give me more granularity and information here so I can know whether this is better than this. It's just an obsessive preoccupation.

[01:07:47]  Blue: I'm a high school teacher, and it's every day. You wouldn't believe it. Anyway, I'm sorry.

[01:07:52]  Unknown: Oh yeah, yeah.

[01:07:52]  Blue: I had to interject that.

[01:07:54]  Green: Education is another place where this is just absolutely pathological. The entire education system is saturated in objectives. How much funding is going to go to which school? There's some objective measure of how the school is performing, there are objective measures of how the teachers are performing, and then there are the objective measures of how the kids are performing on the test. You could say, well, how could all this be bad? Obviously you have to do these measurements. But just think about what I'm saying: if our goal were to get every student to get 100 on every test across the country, then it's obviously deceptive. It's going to be extremely complex to do that, which means that sometimes we're going to have to explore something that might, horrible as it is, cause test scores to go down a little before they go up, and we are not allowed to do that, so it's preventing us from doing any kind of exploration at all. And this is across all of the problems of society, and education is a big one. The thing in AI that we have trouble with is to just say: actually, this thing should be published because it's interesting. It's a new stepping stone. And I would argue that a lot of the major innovations in AI happened because somebody did actually buck the trend and decide to invest in something because it was interesting, because somebody had the courage to do that.

[01:09:07]  Green: Obviously it happens, but it happens despite the way we've structured things, not because of it. And that's why I'm saying we could improve the structure of all of these institutions, including academic communities, to take this objective paradox seriously and make things work better, especially, I could even say, education, because education is just a crying shame. There are real victims of the way this stuff works, and it's disgusting what is done to the kids. I've talked to so many diverse communities about this because of the book, but the one that I've found the hardest to get access to is the educational community. They just don't want to hear about this; they love their standardized tests. Other communities are more receptive; at least they like hearing something contrary. But in education they don't even want to talk to me. There are exceptions: I spoke to a superintendent in Kansas once, and this guy was really interested in this stuff, but he's an exception. So it's just kind of interesting how entrenched it is in that particular culture.

[01:10:11]  Red: So, actually, you've mentioned that a couple of times now, this idea that we have a psychological need for objectives. Can you explain that a little further? Why do you think we have such an almost pathological need for objectives? How did that come to be? Is it something natural, part of human nature if you will, or is it something cultural? I mean, obviously, any time you do nurture versus nature it's almost always a false dichotomy, but is it something more cultural, where something in American culture or whatever has led to this, or is it a psychological pull that all humans have that just kind of makes us want to go towards objectives?

[01:10:58]  Green: Yeah, that's a really intriguing question. I like to think about that, about why things are the way they are, and it is hard to disentangle. You identified the right part of it, because it's hard to disentangle the cultural from the psychological, the intrinsic psychological needs that we have. So my guess is a bit of both. On the psychological side, you've got a desire for certainty, which I think is deep-seated; we just want to feel confident and sure that things are going to be okay in the future.

[01:11:27]  Green: And objectives provide what I think is a false security blanket, a veneer of certainty. They make you feel like you're doing the right thing, like you're confident you're on the right path, and that's something we really crave and want. It's hard to create or simulate that without an objective. Following the path of novelty or the path of interestingness does not create any certainty at all. It's about risk, actually; that's an important point to note. When I advocate following interesting paths, I'm actually advocating taking risks, because there's no guarantee that because you follow an interesting path you actually get a reward at the end. That's the whole point: we don't know where we're going. A lot of these paths will lead to dead ends, but if you follow enough interesting paths, you will find something interesting; you just don't know what it is. So there's a lot less certainty in that kind of argument, and it's just not as psychologically satisfying for people who want to feel like they know where they're going.

And because of that, it's important to acknowledge that you have no obligation as an individual to take risks. It's not like it's important or noble; it's just a personal preference. If you do want certainty in your life, if you want to minimize risk, by all means follow objectives. What will happen is that you won't achieve anything remarkable; you'll have modest achievements, but you'll live a relatively risk-free life, and that's completely admirable. You'll support your family, you'll go through life without suddenly losing your job and having to figure out what to do next. There's nothing wrong with it; it's just a personal decision. I think the only time you're making a wrong decision on this is if you're just not aware of the risks. That's a mistake. If you go out on a path and you think, oh well, he said do what's interesting, so I'm now guaranteed to become a billionaire, that's a mistake you should try to avoid. You should be informed about the risks you're taking, but as long as you're okay with the risks, then you should be allowed to take them.

And institutions and governments have to think about when it's viable to take risks. Research organizations can take risks: the National Science Foundation, which is giving out money to do research, is supposed to take risks, so it's okay if some projects fail, and that's why following the interesting should be okay there. I argue it's not very okay in that institution, but it should be; that institution needs to be reorganized. But with something like the economy, it's not actually okay to take huge risks. You can't do experiments with the economy just because they're interesting; that's off the table. We can't afford to have thousands or millions of starving people just to find out if something turns out to be interesting. So we are satisfied to live in that local optimum and simply optimize objectively, because the risks are not worth it. It's a question of where the risks are worth it. Now, the social component, that's the part I didn't mention: the component of culture. How much is it influencing our obsession with objectives? I think it's big; it's not just the psychological component. If you go back farther in history, you get less objective cultures. If you go back really far, you get to philosophers like Lao Tzu, who, I don't even know if he was a real person, but thousands of years ago he was credited with saying something like:

[01:14:34]  Green: "A good traveler has no fixed plans and is not intent on arriving." And you hear these kinds of Confucian or Buddhist or Taoist philosophies, ancient philosophies, that sound much more like something that came straight out of our book. That's actually why we used that quote in the book. We found all kinds of quotes like that, and we put them at the top of chapters; if anybody reads the book, you'll see these quotes, because we admit that we're not the first to make this kind of point. I think what we're the first to do is to try to put it into some kind of empirical framework; we actually have evidence for it. But many have made this kind of philosophical point, that it's often better not to know where you're going if you want to get somewhere really great. We hear things like that when people talk about, especially, love: the best way to find love is to not be seeking it, and things like that. That is a direct expression of the philosophy of the objective paradox. So in some corners of our culture it still exists, and people still talk about it, but in most of the culture it has been drained out of the system. And I think the reason is that we have become increasingly sophisticated at creating checks and balances inside of institutions to try to prevent anything from going wrong. Nothing bad should happen anywhere; bureaucracy is what we're talking about. Bureaucracy has become very sophisticated, and we continually want to use this sophistication to mitigate risk. The problem is that objectives look like a really good option for mitigating risk. How do I know whether I might be wasting investment on something? I should have an objective measure for it. So we feel like we're becoming increasingly sophisticated at managing risk and making sure we're putting money into things that are actually going to pay off, but the unintended consequence is that we're killing our ability to diverge and to do open-ended exploration. And it's a worldwide phenomenon. It's not just the US or the West; Eastern cultures have also lost their ability to do this kind of divergent thinking. If you look at Chinese culture, it is very top-down; it's very goal-oriented,

[01:16:44]  Green: both at the government level and the individual family level: this is what you need to do to succeed in life, and you do that for your parents. The reason I mention China is that our book recently came out in China, so I actually got to know a lot of people in China because of that, and the book was particularly potent there. It did very well there because it's such a clash with the culture; the culture is totally objective in China. So you can go almost anywhere and see that this has happened in the modern world. It's just a calamity across the modern world, in my view, which is why we wrote the book. There needs to be some force pushing in the other direction.

[01:17:23]  Red: So let's talk about how this would apply to actually funding research. I might be reading too much into your book here, but it seems like you kind of advocated for two kinds of research: maybe one kind that would be done by companies, which would be more objective-oriented, because a company needs to turn a profit, so they're going to try to move one step towards some sort of goal; and then another kind, maybe funded by the government, that was more open-ended, just trying to follow interestingness so that we collect stepping stones. Again, my apologies if I'm misquoting or misrepresenting your views here, but I got the sense that you were advocating for having both kinds of research available and doing both, because they both have some sort of importance.

[01:18:16]  Green: Okay, yeah. I don't remember if I said exactly that, but those are certainly points related to things I would have said. The part that I don't completely mean to convey is that it's necessarily a dichotomy between the corporate world and the government world. It doesn't have to be, although it does tend to be: you do tend to see more objective pursuits in corporations because of the profit motive, and the government, through its funding agencies, can afford to be less profit-oriented. So that's true. But I do think that generally there's a breakdown in both in terms of being able to do non-objective pursuits. The government is not very good at actually doing non-objective pursuits; by the government I mean the funding agencies, like the National Science Foundation or DARPA, for example, the different funding agencies in the government.

[01:19:10]  Green: And so these agencies do tend to ultimately force you to put your cards on the table objectively, unfortunately. But the real question here is not what actually happens in practice, but what could happen in practice: where should things be happening, and how should they be happening? I think you could have non-objective pursuits in both environments. It's true it's a little more challenging in industry, but it can be justified, especially in large corporations, because they have research labs, which are basically research institutions, and they need to have them. They're not just having them for fun, because what large companies are dealing with is the potential for disruption: the possibility that somebody's going to come along with some completely unanticipated new technology out of left field and just make everything at this big company obsolete. So what is your defense against disruption? It's research. You have to look at the possible future horizons and inventions yourself to anticipate the future world. That's why there's a Google Brain or DeepMind at Google, or Meta has its AI labs, and these organizations have their big labs doing research, often these days in AI, but also think of Bell Labs or Xerox PARC; they have these things for good reasons. The question, though, is whether these kinds of corporate labs can exist within an objective-free environment when the company itself obviously has very strong objective requirements. It needs to have a profit; it's answering to investors, so there is literally a legal fiduciary responsibility. It can't just say, oh, we don't care. It does care. And so it's hard for the CEO to look at their innovation lab or their research lab and think, oh, we don't really need objectives here, let's get rid of them, because they feel like everything needs to be aligned with the cause of profit. But I think that's an incorrect view, because these labs are your shield against disruption, and the shield against disruption works differently than every other component of your organization, because you cannot protect against disruption by following an objective. The only protection against disruption is open-endedness, and the open-ended component of the organization has to actually be open-ended, which means it cannot be ultimately objectively oriented. This point is extremely difficult for leadership in companies to absorb. It's not quite impossible, I hope; there are some exceptions, actually. I know of some CEOs who try to follow this philosophy, but it's extremely difficult. But the thing you have to recognize is that if you can't get it all the

[01:21:52]  Green: way up the chain to the CEO, so that they actually believe in this, then it's doomed. Forget it. You can call it whatever you want, you can call it a research lab, you can call it basic research; it ultimately will not actually be that, because people will understand that there's an objective incentive, which is to affect the company's bottom line, and they won't do what they otherwise would have done, which is what they thought was interesting, because they'll understand that otherwise they're going to lose their job. So they're going to do the thing they think will increase the bottom line, which is objective, and not what they think is interesting, and you will destroy all innovation. Most companies work that way. For the government, it's easier to extricate this from the objective pressure, because they don't have a profit motive, but for some reason they still can't seem to do it.

[01:22:33]  Green: The NSF, which I was very familiar with because I was a professor begging for the money myself, forces you to explain the deliverables you're going to get; you actually write these statements about what you'll have at the end of your grant, and that's basically laying out objectives. People want to know, well, what are we going to get at the end? Well, what if I don't know what I'm going to give you? What if I'm just doing this because it's interesting? That was actually why I did Picbreeder. I had no idea that this theory would result from doing Picbreeder, but I did know, or at least I thought, that if we did this experiment where people evolve pictures online, we were going to learn something really interesting about discovery. I just didn't know what it would be, so I couldn't write it down. And the NSF, of course, rejected the grant application for that reason. They said it wasn't clear what the point was: what's the point of having all these people evolving pictures on the internet? It's a completely pointless endeavor. They failed to see that my argument was not that there is any deliverable; it's just that it's interesting, so we're going to open up a new playground and discover new things. So both are not doing well. It's harder to do in industry, but I think it's possible, and it's easier to do in the government, but they're still not doing it anyway.

[01:23:39]  Red: Well, we're just about out of time, but I have one more question I've really wanted to ask you. You talk about creativity as a kind of search. Here's a quote from your book, from page six: "we can think of creativity as a kind of search." And you suspect that idea, equating creativity with search, will in and of itself surprise many people. How literally do you mean that? Is it literally true, or is it more like an analogy in your mind? And I should probably give my own biases here: I actually think it's literally true, so I found that interesting, but I wasn't sure if you intended it as literally true or more as an analogy.

[01:24:17]  Green: I mean it literally too, yeah. I say "I think" because there's some ambiguity about what search even means; that's something you can split hairs on, like what is a search. But I basically think you're searching through a space; that's what you're doing when you're being creative. In other words, there exist, and of course it's a philosophical question what it means for them to exist, but there exist things in the space of images that haven't ever been written down or drawn yet; in some sense they theoretically exist. And so you're searching for the things in that giant space that actually are interesting, whatever those are, when you draw a picture or make a piece of art. So I think of it as searching through that vast, vast space, which is uncountably vast.

[01:25:08]  Red: So that's kind of an interesting idea in and of itself, the idea of search as related in some way to creativity or to intelligence. I did notice that the most famous textbook for AI, the Russell and Norvig book, basically puts search in one of its first chapters, because they're going to reference it throughout the entire rest of the book. So there does seem to be something basic about search and AI that goes together.

[01:25:37]  Green: Yeah, that's true. I'm trying to think about where, or why, I'm so comfortable calling everything search. It does go back to something like those textbooks, because I read those textbooks when I took introduction to AI, and they frame everything in terms of search. It starts to be a kind of framework for thinking about how a learning algorithm moves through a space. And once that clicked for me, which actually took a few years, I think, to really click, like what that means, it made it easier to think about: you can visualize more of what's happening. You think of points in the space, and you can think of the algorithm moving through the space, where you are in the space; approaching an optimum is like moving towards it in the space, searching for the optimum. It's just a metaphor that's helpful, I think, for thinking about it visually to some extent, and for couching it in something you can actually grasp onto.
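
That picture of an algorithm moving through a space toward an optimum can be made concrete in a few lines. This is a generic, hedged sketch, not anything from the book: greedy hill climbing on a made-up one-dimensional landscape where a small nearby hill hides a much higher peak, the kind of deception the objective paradox describes.

```python
from math import exp

def landscape(x):
    """A deceptive fitness landscape (invented for illustration):
    a small hill at x = 0.2 and a much higher peak at x = 0.8,
    separated by a flat valley."""
    return exp(-((x - 0.2) ** 2) / 0.005) + 2 * exp(-((x - 0.8) ** 2) / 0.005)

def hill_climb(f, start, step=0.01, iterations=1000):
    """Greedy local search: move to a neighboring point only if it
    scores higher. This is 'moving through the space toward an optimum.'"""
    x = start
    for _ in range(iterations):
        best = max((x - step, x, x + step), key=f)
        if best == x:
            break  # no neighbor improves: stuck at a local optimum
        x = best
    return x

print(round(hill_climb(landscape, start=0.1), 2))  # ~0.2: climbs the decoy hill and stops
print(round(hill_climb(landscape, start=0.7), 2))  # ~0.8: only this start reaches the real peak
```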

[01:26:41]  Red: Yeah, even the very concept of a fitness landscape and things like that, where they try to visualize all the algorithms in terms of some sort of search, is kind of interesting. All right, well, we're out of time, but thank you so much. This has been a massive pleasure, to be able to meet with you and to ask all my burning questions after reading your book, and I think our listeners will love what you had to say on the subject.

[01:27:09]  Blue: And Ken, I just want to say your ideas have lit a fire in my brain. It's one of the most interesting concepts I've come across in a long time, and I really appreciate you coming on our show.

[01:27:22]  Green: Well, thank you for that, Peter, and thanks to both of you. This was great. I loved these questions; it's great to get these points out there, which often you don't get to talk about at this level. So I'm really glad this is now recorded and down on video so people can actually hear these ideas. Thank you. Thanks a lot.

[01:27:39]  Red: All right, thank you.

[01:27:48]  Blue: Hello again. If you've made it this far, please consider giving us a nice rating on whatever platform you use, or even making a financial contribution through the link provided in the show notes. As you probably know, we are a podcast loosely tied together by the Popper-Deutsch theory of knowledge. We believe David Deutsch's four strands tie everything together, so we discuss science, knowledge, computation, politics, art, and especially the search for artificial general intelligence. Also, please consider connecting with Bruce on X at B Nielsen 01, and please consider joining the Facebook group The Many Worlds of David Deutsch, where Bruce and I first started connecting. Thank you.

