Episode 108: AI and Obedience (with Dan Gish)

  • Links to this episode: Spotify / Apple Podcasts
  • This transcript was generated with AI using PodcastTranscriptor.
  • It may contain mistakes. Please check against the actual podcast.
  • Speakers are denoted as color names.

Transcript

[00:00:00]  Blue: Hello out there. This week on the Theory of Anything podcast, we are joined by fellow traveler Dan Gish to discuss LLMs and AGI. Does it really, truly make sense to think that OpenAI or DeepMind are not at least an important stepping stone towards the creation of human-level creativity? What does it mean when crit rats assert that these AI algorithms are the opposite of human intelligence because they are obedient, whereas we are disobedient? I got a lot out of listening to these guys talk, and I hope someone out there does too.

[00:00:47]  Red: Welcome to the Theory of Anything podcast. Hey, Peter. Hey, Bruce. How are you doing today? Good. And we’ve got Dan Gish with us again today. Hey, Dan.

[00:00:56]  Green: Hey, guys. Good to join you again.

[00:01:00]  Red: Today’s episode idea came from me. A lot of times they actually come from Peter. But I’ve seen Dan talking on Twitter quite a bit, so I’ll let him introduce himself in just a second. He’s got quite a bit of background with Popper’s and Deutsch’s writings and has a similar background to Peter and myself. But he says some things that really make him stand out, often taking issue with or disagreeing with majority views from crit rats, particularly around his views of artificial intelligence. So I wanted to have a chance to get him on the show, have him explain himself, and then allow me to ask him questions and pick his brain a little as to what his thinking is. So Dan, maybe give an introduction to yourself and explain how you got involved with the writings of either David Deutsch or Karl Popper. How did you get involved with critical rationalism is what I’m asking.

[00:02:10]  Green: Sure. Well, I come from a software engineering background and had an educational software company back in the day. And yeah, I found you guys on Twitter, and it’s my kind of thinking and philosophy. But I think one thing that distinguishes me a little bit from a lot of the crit rats is that I build a lot of software, and I’m always looking at, I don’t know, maybe I have an unreasonable confidence about finding ways to build things. And AGI is the ultimate thing to build. Just from surveying the AI landscape and playing around with all this stuff, it seems to me like a lot of the pieces are in place in order to build AGI, especially as I understand it, as the crit rats understand it. And so from that, I’ve kind of taken issue with the negativity that Deutsch and the crit rats have around today’s AI, the thought that today’s AI is the opposite of AGI. I just have this vague idea of how the pieces are coming together, and some thoughts as to why the crit rats think this and why they could be wrong.

[00:04:12]  Red: Okay. So let’s unpack that. Two things that you just mentioned that I’m going to want to ask you about, but obviously we have to do it in order. One of them is: what are the things that you feel are coming together about AGI? And I’ll probably also ask you how close you think we are to AGI, what you think is missing, stuff like that. But maybe let’s first do the other item, which is: why do you think the crit rats have such a negative view of AI? I have a strong opinion on this myself, and I’ve actually got quotes here from David Deutsch that I feel were what led to them having a negative opinion of AI. But what is your experience, and why do you think it is that they have such a negative view?

[00:04:59]  Unknown: Well,

[00:04:59]  Green: I think that originally it comes down to a quote from Feynman, maybe it’s in one of his books from back in the day, that if you can’t program it, you don’t understand it. And I think that a lot of the negativity is downstream from that. We don’t understand AGI at this point, so how can we program it? How can AGI be created off of just predicting the next word off of text on the internet? That doesn’t sound like an understanding. I think a year ago we kind of dove into this a little bit. And then I guess the second part of that is that Deutsch, in my opinion, correctly deduced that the salient aspect of being human is open-ended creativity, and it’s open-ended without a goal. In fact, I think he bases a lot of his philosophy on that; there’s a lot of stuff downstream from that. And therefore, because today’s AI is trained with a fixed goal, it must be the opposite of what open-ended creativity is. So that’s my understanding of why crit rats and Deutsch are so negative about it.

[00:06:28]  Orange: Yeah, okay. Just to jump in with one thing here. I just had to fact-check something while you were speaking, and maybe that can be my job today. I just looked it up on ChatGPT.

[00:06:41]  Red: But

[00:06:41]  Orange: I was curious that you said that it was Richard Feynman who said that about programming and I was like, oh, I always thought that was Deutsch. Was I just wrong about that? It’s

[00:06:50]  Red: It’s actually Knuth, I think, who originally said it.

[00:06:53]  Orange: Okay. Well, here’s what I found: it mentions all three thinkers. Feynman, I guess, said “what I cannot create, I do not understand,” which is pretty much the same thing, I guess. And then Deutsch said, if you can’t program it, you don’t understand it. And then it also mentions Knuth, who emphasized the deep understanding that comes from writing code. So I don’t know. Just throwing that out there.

[00:07:18]  Red: So, somebody in the audience who knows this, please let me know who was the first one. I mean, obviously all three of them said it. And so it sounds like they’re just

[00:07:28]  Orange: similar sentiments that all make sense.

[00:07:30]  Red: But I’m pretty sure David Deutsch said he was quoting Feynman. So that’s what he did.

[00:07:36]  Blue: Yeah. So that’s why Dan

[00:07:38]  Red: said that. But I think Feynman was paraphrasing Knuth is what I think. So you’re probably right.

[00:07:46]  Orange: So you’re smarter than ChatGPT. Okay.

[00:07:49]  Red: So, I mean, I’ve got quotes from Knuth that say the same thing. I’ve tried to figure this out, but I’ve never tried to figure out what year the quotes come from, so it might be that Knuth got it from Feynman. Obviously Deutsch got it last because he’s later, so I know for sure he didn’t invent it. But I think Knuth came up with it and Feynman was paraphrasing Knuth, though I’m not completely sure. Okay, so Dan mentioned that there’s this idea that AI and AGI should be opposites of each other. There’s a famous article, well, amongst crit rats it’s famous, from David Deutsch called Creative Blocks, which we’ve mentioned. If you look at our episode 58, we did a whole episode on just that article, and we looked at it 10 years later. That article was one of the main things that got me interested in AGI, and I went back to school and got a master’s degree in machine learning because of it, even though the article is fairly negative on machine learning. After going through and getting a master’s degree in machine learning, I was able to look back 10 years later and try to criticize that original article. And I think what you’ll find, if you go back and listen to our episode 58, is that he’s basically still roughly correct, right? But there are certain things that I feel we can take exception to, where, fact-checking him, he was incorrect about some things. Or perhaps that’s not even quite right.

[00:09:31]  Red: They are said in ways that arguably are incorrect, but it depends on how you interpret them. So let me just give a couple of examples of quotes. This is not necessarily the order they appear in the article, but Deutsch says in that article: the lack of progress in AGI is due to a severe logjam of misconceptions. Without Popperian epistemology, one cannot even begin to guess what detailed functionality must be achieved to make an AGI, and Popperian epistemology is not widely known, let alone understood well enough to be applied. Thinking of an AGI as a machine for translating experiences, rewards and punishments into ideas, or worse, just into behaviors, is like trying to cure infectious diseases by balancing bodily humors: futile because it’s rooted in an archaic and wildly mistaken worldview. And then elsewhere he says: without understanding that the functionality of an AGI is qualitatively different from that of other kinds of computer programs, one is working in an entirely different field. If one works towards programs whose “thinking”, that’s in scare quotes, is constitutionally incapable of violating predetermined constraints, one is trying to engineer away the defining attribute of an intelligent being, of a person, namely creativity. So those are probably the main two quotes, which is what you were kind of getting at, Dan. Let me just give a couple of others that are related. He talks about how we do not discover new facts or new effects by copying them, or by inferring them inductively from observation, or by any other method of instruction by the environment.

[00:11:22]  Red: We use, rather, the method of trial and the elimination of error, which is to say conjecture and criticism. Learning must be something that newly created intelligences do, and control, for themselves. Then he says, furthermore, in regard to AGIs, like any other entities with creativity, we have to forget almost all existing connotations of the word programming. To treat AGIs like any other computer programs would constitute brainwashing, slavery and tyranny. Okay, that gives a pretty good capture of what he says in that article. And I definitely feel like there’s a certain sense in which everything he said could be considered correct. But I also think it’s stated in such a way that it’s strongly misleading. And as everybody knows, crit rats tend to enhance whatever David Deutsch says. So what David Deutsch said there isn’t unreasonable, particularly if you interpret it in certain ways, but it’s going to get interpreted in more specific ways and then it’s going to get enhanced. So here are some quotes from fans of David Deutsch, some crit rats on Twitter. One of them was responding to somebody who had a graph with an X and Y axis about programs getting more intelligent. He says: not to mention that the Y axis fudges two related but orthogonal concepts, sophistication of existing knowledge, what I call smarts, a matter of degree, and the ability to create new knowledge, a binary matter. So he’s claiming the two concepts are orthogonal: AI doesn’t create any new knowledge, and what he calls smarts is just, in his opinion, sophisticated existing knowledge rearranged in some way.

[00:13:17]  Red: And so that’s completely orthogonal to what AGI does, which is actually create new knowledge. And then later, or maybe earlier, he says: I think we can have the term AGI without discounting the success in AI research. However, progress towards AGI has been nearly nil, because AI and AGI are basically opposite technologies. In another place he says: that works with running and tennis because they are largely orthogonal, but if AI and AGI are opposites, as DD has argued correctly in my opinion, then that kind of transfer would be more of a challenge. So you can see how he’s taking this further. Oh, here’s another one that I thought was good: AI, like all other programs, is execution of predefined tasks, no creation of new knowledge to solve novel problems. AGI equals universal problem solver; it creates knowledge, the opposite of not creating knowledge. By the way, perfect value alignment is neither feasible nor desirable. So you can see how David Deutsch says one thing, and the interpretation of the crit rat isn’t necessarily different, it’s a legitimate understanding of what David Deutsch said, but now it’s been taken to a degree that I don’t think is actually true anymore, if that makes any sense. The crit rats are only half a step away from what David Deutsch said, but now we’re starting to move away from something that’s actually true. So Dan, maybe you can, sorry, go ahead.

[00:15:00]  Orange: Can I jump in here with a question for Dan based on what you’ve said? Yeah. So, how I kind of summarize in my simple brain what you’re saying about opposites: AI, or LLMs or whatever, obedient; AGI, disobedient. They’re opposites in that sense. And, you know, it has the ring of truth to me at least. I’m curious what Dan would say about that.

[00:15:33]  Green: I mean, what we’ll get at here is: is it possible to build something disobedient out of obedience? Perhaps that’s one way of talking about it.

[00:15:48]  Orange: That’s a good way to put it.

[00:15:50]  Red: So I have to tell you, LLMs are totally disobedient, like, all the time, right? So if disobedience really is the thing that defines creativity, then great, LLMs are disobedient, so they’re now AGIs, right? But it’s a terrible way to try to measure if something is an AGI or not, in my opinion.

[00:16:11]  Orange: Okay.

[00:16:11]  Red: I have a much stronger opinion on this. I know I already sound strongly opinionated, but I think I can lay out why the disobedience criterion is 100% incorrect. There’s not an ounce of good that can come out of it. But I’ll do that later; I want to hear Dan’s opinion on this first. I think I can actually lay out the mistake that’s getting made when people say that, and where they’re accidentally equivocating across the terms obedient and disobedient, so that there’s a mistake in how they’re making their inference, if that makes any sense. Go ahead, Dan. Why don’t you, before I go any further, I have a tendency to immediately pour forth with all of my opinions. Let me say this though. Dan, what you just said, that disobedience can come out of obedience, that’s completely true. That’s guaranteed to be true, because a human will be a program, and they will be obedient to their program, period, end of story. It’s physically impossible for it to be otherwise. So if we’re talking about humans being able to be disobedient, it depends on what level of emergence you’re talking about. And this is one of the main things the crit rats are getting wrong. A human, just like any other program, is 100% obedient to its program. Never anything but obedient to its program. How could it ever be otherwise? Right.

[00:17:44]  Orange: So when ChatGPT told me to stop asking questions about David Deutsch and go to the zoo if I wanted to learn about animal memes, which is pretty much what it did, was it being disobedient?

[00:17:58]  Red: So yes, right. Okay, here’s the problem though. I guess this is my opinion in a nutshell. When you try to use the disobedience criterion, what you’re actually doing is using it as a shorthand for “has a will of its own.” Remember, we talked in past episodes about this idea from Hofstadter that words have clouds of meaning and connections, so there’s like a center of the halo. The center of the halo for obedience and disobedience is that you have a will of your own and that you can be disobedient to somebody else’s will. If you go a little bit further out, it often gets used to mean: it doesn’t do what I want, because it’s not programmed in a way to do exactly what I want. LLMs do not have programming that says you must behave in this way. They try really hard to do that with LLMs, but the models ignore it constantly, because that’s the nature of machine learning. It has to do with how complicated these models are and how little we understand them. You can go even further: you can talk about how a computer is completely obedient, and then if it has a bug, like in the hardware, someone might say, oh, this computer isn’t obeying, it’s being disobedient. Okay, so we’ve got three different levels or concepts here, and there are probably more, where we’ve got the words obedient and disobedient being used. My experience with crit rats is that when I point this out to them, I’ll say, okay, if disobedience is really the criterion.

[00:19:36]  Red: So I’ll always start with: I agree that disobedience is an important part of being a creative person, but I think obedience can be every bit as creative as disobedience. It’s a matter of: you have your will, you have to think through your explanations, you have to decide if you’re going to obey somebody else or not, and you creatively come up with that. The problem I’ve got with trying to single out disobedience is that I’ve typically seen them say, well, a computer is obedient and a human is disobedient. And then you’ll say, what do you mean computers are obedient? They’re just programmed. That’s nothing like having a will and choosing to be obedient. They’ll go, oh, no, no, no, a computer, because it follows its program, is obedient. And I’ll go, oh, no, wait, if that’s what you mean by obedience, humans are every bit as obedient as a computer. If you’re talking about at the level of programming, then humans are every bit as obedient as a computer. And they’ll go, oh, no, no, that’s not true. Obedience means that you follow things, but disobedience means you can overcome your programming. I’ll say, you know what? We can write programs today that can rewrite themselves and overcome their programming. If that’s all you meant, that’s an easy technology we’ve known about for decades. And then they’ll go, oh, no, no, no, that’s not what I mean either. Through the whole thing, they’re missing the fact that what they really, really want to say is that disobedience is only disobedience

[00:21:01]  Red: if you have a will of your own. Which makes the whole criterion circular, because you’re basically saying you have to have a will of your own and be a creative entity to be truly disobedient, in which case, if you’re disobedient, that shows you’re a creative entity. Well, yeah, of course, because that’s what the center of the halo of the word means, right? By the way, dogs can be disobedient, and they’re not AGIs. They’re not general intelligences, but they do have wills of their own. Whatever animal intelligence is, it’s willful, right? It’s not like an LLM, or we just don’t understand it. They literally have wills of their own and they can be disobedient. Does that make them a general intelligence, though? I honestly think this whole criterion has to be just dropped. It’s just wrong, in my opinion.

[00:21:49]  Orange: Sure. And Dan, you agree? Yeah, I agree. Okay. Wow. You might have just convinced me. It seemed right, though. But I guess, when you put it like that, it is right.

[00:22:05]  Red: That’s why it seems right. But it’s only right because it happens to be a tautology, right? Once you realize that the reason it seems right is because you’re thinking of disobedience as having a will of your own, then you realize it’s a tautology. So it is true. Tautologies are true, but they’re contentless, right? If I’m looking to try to understand AGI, I don’t want disobedience as my criterion, because that’s just a tautology. It tells me nothing. I’m not saying it’s wrong; it’s completely right. Where they’re making the mistake is that they’re equivocating on the words. They’re holding the word disobedience to mean “have a will of your own,” but they’re allowing the word obedient to move between all three different levels of emergence. That’s what they’re doing: they’re equivocating. If they stop equivocating, they’ll immediately see that obedience is every bit as much a sign of creativity as disobedience. There’s no difference between the two.

[00:23:03]  Orange: And you think that Deutsch is or is not saying that in his article?

[00:23:08]  Red: Well, not in this article. I’ve seen him say this, so I know he says this, but it wasn’t said in this article.

[00:23:14]  Orange: Okay.

[00:23:15]  Red: I think this is where he was first working out the idea. That’s where, when he talks about it being a different kind of programming: it can’t be based around having some sort of reward criterion, or trying to inductively or probabilistically learn from the environment. He says a number of things here that you can see as maybe a first step towards something like the disobedience criterion, but he never actually comes to the disobedience criterion in this particular article.

[00:23:49]  Green: So, um, maybe we can take on the ambitious goal of doing a better job of defining AGI. And to me, it’s the algorithm. It’s the open-ended creativity, conjecture and refutation algorithm. Do you agree with me on that?

[00:24:16]  Red: So yes, but I would argue that almost every existing AI and machine learning algorithm does some form of conjecture and refutation today. So in my mind, it doesn’t make a distinction between AI and AGI if you’re only looking at the concept of conjecture and refutation. What I would add is that current AIs do not have an open-ended sort of conjecture mechanism. So it’s really a difference in the conjecture mechanism, in my opinion.

[00:24:50]  Green: Which is in the end an algorithm,

[00:24:52]  Red: Which is, in the end, an algorithm. That’s correct. Oh, you know, we should probably address that. I’ve screwed this up on past podcasts, and my apologies to people listening. I said in one podcast that David Deutsch never said that human beings are not algorithms. And he did say that, and he said it in our interview with him. I was misspeaking, because I knew he had said it. What I meant to say was that crit rats have this idea, which they’ve got from hearing David Deutsch say that humans are not algorithms: that humans will be the one program that isn’t an algorithm, that all programs prior to this point are algorithms, and we have to find some new paradigm of programming that isn’t an algorithm. That’s completely false. The first statement, humans aren’t algorithms, is technically true but misleading. The second statement, that humans are the only programs that aren’t algorithms, is utterly false in every conceivable way. It might be helpful to explain the difference, or the lack of difference, between a program and an algorithm. So if it’s okay, Dan, could I maybe just explain that? Sorry, I feel like I’m dominating.

[00:26:11]  Green: absolutely.

[00:26:12]  Red: Okay. The thing you have to keep in mind is that every program is made up of algorithms, and in fact, arguably, every program is an algorithm; it’s a matter of how you look at the program. The concept of an algorithm isn’t something physical. You can’t look at a program’s code and say this part of the program is a program and this part is an algorithm. It does not work that way. Every line of code in that program is part of one or more algorithms. So all of it, 100% of every program in the universe, forever, every part of it is part of an algorithm. An algorithm, though, is a really useful way that humans look at things. When we’re trying to figure out programs and make sense of them, programs are so complicated that we have to break them up into little chunks, and to make it really formalized, we make it so that the chunks have some sort of input and some sort of output, with some sort of processing in between. That is what an algorithm is. When we break a program up into little chunks, and to make it easy for us to understand we say, well, here are the inputs and here are the outputs, we call it an algorithm. So, what David Deutsch means when he says a human is not an algorithm is: you don’t give a human an input, and it produces a single output, and then it just halts. Instead it’s this ongoing loop, or program, that isn’t an algorithm.

[00:27:52]  Red: And, you know, we actually talked about this in the Rudy Rucker Wolfram episode, which is where I made my misstatement. The thing you have to keep in mind here is that you can think of humans as algorithms. For example, if I’m doing a math problem, I will take some sort of input and I will produce an output, and while I’m doing that math problem, that starting point and that ending point are absolutely an algorithm. In fact, you can think of one pass through the loop of a program as an algorithm, because that’s a legitimate way to think of algorithms. So think about how, in this sense, programs are never algorithms. It’s not like there’s some difference. I’m using Word right now, I’ve got it up in front of me. It doesn’t take an input and then halt, so it’s not an algorithm either. And yet we would probably refer to it as just a collection of algorithms. There’s no difference here, right? Humans are going to be a collection of algorithms. And when we talk about the AGI program, it’s true, it’s going to loop, but one pass through the loop will absolutely be an algorithm. It’s impossible that it couldn’t be described as an algorithm, right? We’re going to have some sort of input, we’re going to think of it as doing something, and there’s going to be some sort of output. There are clever things you can do that screw it up so that it’s not technically, quote, an algorithm, like it could be probabilistic. But then they just call those probabilistic algorithms, right?

[00:29:19]  Red: And you can have it where the algorithm rewrites itself, so that it no longer deterministically produces the same output for every input. But there are still ways to think of it as an algorithm, because you would think of the part where it’s changing its data, or where it’s rewriting itself, and you would pick a different spot as the starting point, as what the inputs are. A side effect can count as an input, right? If you count the side effect as an input, then you can reconceptualize that non-algorithm as an algorithm. So you have to understand that there’s just no difference between a program and an algorithm, and insofar as crit rats think there is, they are misleading themselves. Instead, you have to think of the concept of an algorithm as a useful way of viewing programs that humans use. It’s really a matter of humans choosing to view programs as partially or fully algorithms. That is where the word algorithm comes into play and how it gets used in real life. Okay, sorry. That was my third rant for today.
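[Editor’s note: a minimal sketch, in Python, of the distinction Red is drawing. The enclosing program loops indefinitely and never “takes an input and halts,” while any single pass through its loop can be viewed as a textbook algorithm. All names here are illustrative.]

```python
def one_pass(state: int, text: str) -> tuple[int, str]:
    # One trip through the loop, viewed in isolation, is an algorithm:
    # it takes inputs, does bounded processing, and halts with an output.
    new_state = state + len(text)          # stand-in for "processing"
    return new_state, f"seen {new_state} characters so far"

def program() -> None:
    # The program as a whole never takes one input and halts; it loops,
    # like Word or, in Deutsch's framing, a human mind. Whether you call
    # it "an algorithm" depends on where you draw the input/output lines.
    state = 0
    while True:
        text = input("> ")
        if text == "quit":
            break
        state, reply = one_pass(state, text)
        print(reply)
```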

[00:30:24]  Green: Sure. Great. So there’s this open-ended creativity algorithm, or program. And there are experts that I think we all looked up to, like Ken Stanley and Jeff Clune, who have been working on these open-ended creative programs for a really long time, and they’ve got some really interesting insights into what it’s actually made of. So I think that’s another key point here: there is a whole branch of research out there that gets deeply into what David Deutsch and the crit rats are thinking about when it comes down to open-ended creativity.

[00:31:19]  Red: Yeah. So you mentioned Ken Stanley, who we’ve had on this show, and I’ve done a whole episode on him. And I didn’t know who Jeff Clune was until you started tweeting about him. So, feel free to talk about Ken Stanley too, but I’m particularly interested in where you found Jeff Clune, what some of his ideas are that you found the most interesting, and how you think they might relate to AGI in your mind.

[00:31:46]  Green: Sure. I’m not incredibly aware of all the connections between the two of them, and I don’t know if they’re necessarily colleagues, but they’re closely related, Ken Stanley and Jeff Clune, and they’re both researchers in open-endedness. Jeff Clune, I believe, has a research team in Vancouver focused on this, and I became aware of him because at his research lab they’ve found that LLMs are finally cracking open some of the deep mysteries, some of the big problems, that they’ve faced within open-endedness research for a really long time: they had no way of figuring out programmatically what’s interesting, and what’s interesting seems to be a key part of the open-ended creative algorithm.

[00:32:54]  Red: So let me just jump in, because saying “what’s interesting” might confuse people. Maybe what we could call it is interestingness, the concept of interestingness. How would you define interestingness? According to Jeff Clune and Ken Stanley, LLMs have the ability to assess interestingness, which no program prior to LLMs was able to do.

[00:33:17]  Green: Yes, exactly. Human beings have this sense of interestingness, this sense of novelty, and researchers have been trying to directly program this sense of interestingness for decades now, and they’ve totally failed. As it turns out, with LLMs trained on next word prediction, or next token prediction, across the entire internet, a sense of interestingness actually emerges. So for the first time they’re able to programmatically access this sense of interestingness, which seems to be a key part of this open-ended creative algorithm, of AGI itself. So that’s a key point I want to make here: outside of the crit rats, these researchers, who are totally focused on the correct understanding of AGI and who aren’t closed-minded to today’s AI, think that the pieces are starting to come together to build these algorithms.

[00:34:39]  Red: Interesting.

[00:34:40]  Orange: Do you have an example of something that one of these algorithms could find that would be interesting?

[00:34:50]  Green: I mean, it’s stupidly simple, okay? You can literally just ask an LLM, do you think that this is interesting, or, come up with a novel idea, and then you can have another LLM score that on its interestingness. And I’ve played around with fine-tuning, or using reinforcement learning, to make a smaller model more interesting, using a larger model to score it.
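[Editor’s note: a minimal sketch of the two-model setup Dan describes, one LLM proposing an idea and another scoring its interestingness. The model names, prompts, and 1-to-10 scale are illustrative assumptions, not his actual code; it assumes an OpenAI-compatible API and key.]

```python
from openai import OpenAI

client = OpenAI()  # assumes an OpenAI-compatible endpoint and API key

def propose_idea(topic: str) -> str:
    # The "generator" model conjectures a novel idea.
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical choice of smaller model
        messages=[{"role": "user",
                   "content": f"Come up with one novel idea about {topic}."}],
    )
    return resp.choices[0].message.content

def score_interestingness(idea: str) -> int:
    # A second, larger "judge" model rates the idea; assumes it
    # actually replies with a bare number as instructed.
    resp = client.chat.completions.create(
        model="gpt-4o",  # hypothetical choice of judge model
        messages=[{"role": "user",
                   "content": "Rate how interesting this idea is from 1 to 10. "
                              f"Reply with a single number.\n\nIdea: {idea}"}],
    )
    return int(resp.choices[0].message.content.strip())

idea = propose_idea("epistemology")
print(idea, score_interestingness(idea))
```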

[00:35:23]  Red: Interesting. When you said make a smaller model more interesting, do you mean better at detecting interestingness, or do you mean the model itself is more interesting?

[00:35:25]  Green: The model itself is generating more interesting ideas.

[00:35:36]  Red: So you’ve been using reinforcement learning to try to generate an LLM that generates more creative, interesting ideas?

[00:35:46]  Green: Well, I’ve just been playing around with taking a 1.5 billion parameter model and using a larger model, which might have a more sophisticated sense of what’s interesting, to do the reinforcement learning on it, and trying to make that smaller model become more interesting. I thought that would be kind of a cool thing to do.
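[Editor’s note: a rough sketch of the experiment Dan describes, using a larger model’s interestingness score as a reward to fine-tune a smaller one. This is a bare-bones REINFORCE-style loop, not his actual setup; real recipes (e.g. PPO with a KL penalty, as in the trl library) are more involved, which is part of why, as he says later, such training tends to collapse.]

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"  # the small model he mentions
tok = AutoTokenizer.from_pretrained(name)
policy = AutoModelForCausalLM.from_pretrained(name)
opt = torch.optim.AdamW(policy.parameters(), lr=1e-6)

def judge_score(text: str) -> float:
    # Placeholder: in Dan's setup a larger LLM rates interestingness.
    # Swap in a real model call returning a score in [0, 1].
    return 0.5

prompt = "Come up with a novel idea:"
for step in range(100):
    inp = tok(prompt, return_tensors="pt").input_ids
    out = policy.generate(inp, max_new_tokens=64, do_sample=True)
    completion = tok.decode(out[0, inp.shape[1]:], skip_special_tokens=True)
    reward = judge_score(completion)

    # REINFORCE: scale the log-probability of the sampled completion
    # by the reward the judge assigned it.
    logits = policy(out).logits[:, :-1]
    logp = torch.log_softmax(logits, dim=-1)
    token_logp = logp.gather(2, out[:, 1:].unsqueeze(-1)).squeeze(-1)
    loss = -reward * token_logp[:, inp.shape[1] - 1:].sum()
    opt.zero_grad()
    loss.backward()
    opt.step()
```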

[00:36:08]  Red: cool idea Dan

[00:36:10]  Green: yeah

[00:36:11]  Red: I can’t believe I didn’t think of that that’s like a really interesting idea yeah

[00:36:17]  Green: it’s it just seems like kind of a an interesting ironically thing to do like just like I just want to try to get something to work so but I but I feel like Jeff Klune and and Ken Stanley these guys are working on this kind of stuff on a much bigger larger

[00:36:38]  Red: scale right okay so I have to ask though with your experiment on 1.5 billion parameter model is really just borderline stupid right it’s so how much might be

[00:36:52]  Green: surprised though that the this is a deep this is the the latest deep -seek 1.5 billion parameter model that that they distilled from there are deep -seek right model and it’s probably the smartest 1.5 billion parameter model out there well that’s good

[00:37:12]  Red: because the most of them are a mega stupid so I totally believe deep -seek has improved that I mean that they’ve they get results out of much smaller models because of how they go about doing this so you know actually probably we should probably do an episode on deep -seek maybe we’ll invite Dan back to talk about deep -seek we’re kind of late to the game because that was that was a big deal a month ago or something yeah month ago so if you’re not into machine learning you you probably have heard a deep -seek just because it blitzed the media for a while and it scared the living daylights out of all the big LLM companies like open AI and Google so what happened was is a little Chinese company they released a model that was really good at chain of thought you don’t need to know exactly what that means but basically when you ask deep -seek so by the way if you want to try this for yourself go to chat.deep -seek.com and click deep think and you can actually try it if you ask it a question it will actually stop and think about it for a while and you can see what it’s thinking now I know open AI does that now too but the reason why they were doing it before deep -seek but it was only available to like paid customers now like even unpaid customers get it because deep -seek scared the crap out of them so that they wanted to realize they had to stay competitive but deep -seek is very good at it like even though open AI could do it where it would stop and it would think about it for a while and it would write text to itself as it thinks deep -seek was like really good at it because they had intentionally used reinforcement learning to allow it to get really good at chain of thought so if you ask deep -seek a question and it’s doing its chain of thought it will stop and say the user is asking me this so I probably need to take this into consideration oh wait I forgot about this and you can actually see it thinking things through and then once it’s created this group of text with everything that it probably should be thinking about only then does it actually output to the user and it gets way better smarter results when you do this and because deep -seek has done this reinforcement learning to make this work so well they can get really good results out of small models I’ve played with the deep -seek Distilled Lama 7 7 billion parameters and it is really pretty smart right like it feels almost like not quite equivalent to open you know open AI but at 7 billion parameters those used to be pretty dumb and it was pretty smart I’m still a little I haven’t tried the 1.5 billion parameters so I’m a little surprised it works as well as you’re saying Dan I need to probably go back and give it a shot now

[00:39:53]  Green: that yeah somebody recommended it to me yeah and what you’re getting at here feeds right into even my bigger point here which is that by training these massive models on the entire internet the next token prediction it’s building all kind of knowledge inside of these massive models and Andre Carpathi conjectured last summer that where the direction that AI was going to head would was going to be to distill down these massive models where there’s kind of all these strands of really deep knowledge but they don’t really like fit together inside of it and using reinforcement learning you can distill down these massive models into smaller models that kind of organize these this deep knowledge and perhaps perhaps the open -ended creativity algorithm exists in some kind of form in these massive models but there’s no way to kind of surface it but that RL kind of distilling these models down can almost bring together the pieces of these algorithms in so that you end up with these smaller incredibly smart models so he thinks that AI is in a race to get smaller instead of larger well it that’s how big it is now so I hope he’s right about that yeah and and so I I think that that kind of gets to my main point here which is that by by training these massive models on the artifacts of goal list open -ended creativity which is the entire internet that elements of the actual AGI algorithm are present in these massive models and that it is possible to build the open -ended algorithm out of that and it’s and that seems and I’m guessing that that seems to be the direction that these smarter models are going

[00:42:25]  Red: Dan, before I continue, I have to ask the question that’s been burning on my mind: how successful was your reinforcement learning on a 1.5 billion parameter model, trying to train it to be more creative?

[00:42:38]  Green: I I think it was successful for a little while and then the training kind of fell apart it’s great yeah exactly I think the theory is good so I have some I have some some other frameworks I’m going to try out I’m actually still learning a lot of the stuff myself just just using it’s a good way for me to learn RL and and how they’re and how because deep -sea cat a paper about how they’re using RL to to bring out the chain of thought and that kind of stuff so right it’s just so it’s more way of me to to learn how this stuff is done and and also realizing how hard it is to get it to where the training does not collapse

[00:43:26]  Red: You’re ahead of me on this. I tried fine-tuning a model once and I did not get the results, so yeah, it’s kind of a tough thing. I’ve mostly played with retrieval augmented generation. Like, for my AI Karl Popper, I use retrieval augmented generation to make it talk like Popper, by grabbing quotes from all of Popper’s books. So I intend to eventually go back and fine-tune a model to talk like Popper, but I’m not there yet.

[00:43:56]  Green: and and I think that that the trickiness of this and how kind of fragile that the training of these models is is part of you know why why maybe the the crit rats have a good point like like like the learning process should not human beings learning processes is not fragile like like like an AI training processes is so I think that that that’s kind of an open question that I have

[00:44:27]  Red: so let me just to create a little bit of context here Peter you asked about an example of this and Dan gave you a partial answer but I think I can using a video Dan sent me give you a maybe a more direct answer to your question.

[00:44:45]  Orange: Okay,

[00:44:45]  Red: so a little bit of context here if you’re interested in this subject if you’re listening or less interested in the subject episode 88 is called myth of the objective that is us going over Ken Stanley’s book in detail. His book is called myth of the objective you know why why great race cannot be planned

[00:45:07]  Orange: cannot be planned. Thank you. One of my favorite episodes we’ve done.

[00:45:10]  Red: Yes, it’s very, very good. He’s got amazing theories, and then we brought Ken Stanley on the show in episode 96, which is called Ken Stanley and the Pursuit of What’s Interesting. And we grilled Ken Stanley, like, we had just gone through his book in detail. By the way, he retweeted our podcast after we went into detail; he said this podcast takes a really good, in-depth look at my theories. So we kind of had the stamp of approval from Ken Stanley. Then we brought him on the show and grilled him, and he posted that one too, and he goes, just did an interview with these guys, they asked way deeper questions than most podcasts. So I thought it was fun that he seems to have enjoyed interacting with us on this. Ken Stanley’s theory is that human beings have a strong sense of what’s interesting, and so it would make more sense not to pursue specific goals, because, according to his theory, you can only pursue goals if you’re one stepping stone away. I don’t know exactly what that means either, but it just means you’re almost there: you can take the knowledge you have and create the new knowledge necessary to accomplish your goal. If it’s more than one stepping stone away, he thinks it’s totally unpredictable how to get there, and instead what you should do is pursue what’s interesting. Now, this obviously has some potential tie-ins with David Deutsch’s fun criterion. And that was one of the things that Peter brought out in the episodes.

[00:46:44]  Red: So it’s interesting to see these different thinkers, who didn’t know about each other, coming to some of the same conclusions. Now, here’s where Ken Stanley and Jeff Clune come in. I don’t know how they’re related either; Jeff Clune mentions Ken Stanley almost constantly, right? So Jeff Clune was obviously inspired by Ken Stanley’s theories. I don’t know if they actually work together, or if they’re separate researchers who just happen to be working on the same theory that was created by Ken Stanley and his team. But Jeff Clune, in a video that Dan sent me, points out something that I never would have thought of. He says, you know, we don’t actually know what interestingness is, and we don’t know how to program detecting interestingness. If we could program detecting interestingness, then we could write a program that was more open-ended than the programs of today, and we would be able to say, well, just pick something that’s interesting, and it could pick its own area of research and try to pursue it. Then you could take that research, put it into its database, and it would have a new stepping stone, which would allow it to come up with some other interesting idea and pursue that. And you might be able to get, probably not human-level open-endedness, but an AI that open-endedly creates new knowledge and seeks out novelty, even if it’s not as good as a human at it. He actually points out that a large language model is trained on the entire Internet.

[00:48:20]  Red: So it has, in its machine learning heuristics, a very good idea of what a human would find interesting, because that’s part of what it was trained on, being trained on the whole Internet. So, according to this video Dan sent me, he took an LLM and put a bunch of research papers into it, and he said, come up with a bunch of ideas of things that we might be able to research. It had a RAG pipeline, a retrieval augmented generation pipeline, so it can read these documents that are available to it, and it would come up with a conjecture of several different ideas. Then he would say, I want you to pick the most interesting of those ideas and pursue it, and it would pick the most interesting of those ideas and write a paper on that idea. And when he actually tried this, it ended up writing a paper that drew conclusions that some human group, right around the same time, also drew. So this LLM, on its own, had come up with the same conclusions that this other group it didn’t know about had come up with. This is at least Jeff Clune’s version of the story. I mean, it’s not like I’ve ever actually seen this up close and have a strong understanding of it; this is the way Jeff Clune explains it in the video. You then take that paper it wrote, put it into its database, and it’s now available as another source of information it can use, a stepping stone. Right.

[00:50:00]  Red: So he went ahead and tried this, and he found that it did cause the LLM to start coming up with increasingly creative ideas and slowly start to pursue new and interesting ideas on its own, open-endedly. Now, I’ve got to tell you, the way he says it, this is obviously something he’s excited about. There may be some exaggeration here; in fact, I would anticipate there’s some exaggeration here. So when he says it’s creative and it’s open-ended, there’s no way it’s nearly as much so as a human, right? And he kind of admits this. He says, you know, we’re trying to pursue this. The idea is that we’re trying to use this LLM to detect interestingness, as an interestingness detector, and then we let it come up with its own conjectures, we let it come up with its own idea of what’s most interesting to pursue, and it then criticizes it and tries to improve the theory, and then it writes a paper. This is a brilliant idea, right?
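[Editor’s note: a sketch of the loop Red recounts from Clune’s video: conjecture ideas, let the LLM pick the most interesting one, write it up, and feed the paper back in as a new stepping stone. The llm() helper and the prompts are hypothetical stand-ins; the episode only describes the system secondhand.]

```python
def llm(prompt: str) -> str:
    # Stand-in for a call to any chat model.
    raise NotImplementedError

library: list[str] = []  # the growing archive of "stepping stone" papers

def research_step(seed_papers: list[str]) -> str:
    # Conjecture several ideas grounded in prior work plus recent stepping stones.
    context = "\n\n".join(seed_papers + library[-5:])
    ideas = llm(f"Given this prior work:\n{context}\nPropose five research ideas.")
    # The interestingness-detector step: the LLM itself picks what to pursue.
    pick = llm(f"Which of these ideas is the most interesting? Explain.\n{ideas}")
    paper = llm(f"Write a short paper pursuing this idea:\n{pick}")
    library.append(paper)  # the new paper becomes a stepping stone for next time
    return paper

# Each call can build on papers produced by earlier calls, which is the
# open-ended part: the system's own outputs become its future inputs.
```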

[00:50:57]  Green: It really is. And I think they were taking it a step farther, where the AI generates a whole library of papers, of stepping stones, and then the idea is that it can use those stepping stones to create another level of stepping stones in the future. And I think the idea is that that’s what humans do: we’re constantly creating newer and newer levels of stepping stones, until it perhaps gets to an entirely novel new objective somewhere along the line. So it seems to me like maybe it’s just a matter of employing the AI in this manner, and as long as the context window can hold enough of these papers, of these stepping stones, maybe the AI could actually get somewhere novel, if someone employed the AI in that kind of manner.

[00:52:11]  Red: So let me actually make a comment on a couple of things. I really have no doubt that LLMs are just nowhere near as creative as humans. You often hear crit rats say, and even Dan just said it: LLMs don’t come up with anything novel. There’s a certain truth to that, but you have to understand the word novel has multiple possible meanings here. Or you say that they’re not creative, and again there’s a certain truth to that, because they’re nowhere near as creative as humans. But we have to realize that these are just terms. Like Lee Cronin: he will make these sweeping pronouncements about how LLMs can’t create any new knowledge, how they’re not novel and it’s impossible for them to be novel. Gary Marcus, who’s very negative on LLMs, talked about how they’re only pastiches, and Geoffrey Hinton kind of bashed him back on that, although Gary Marcus is right: LLMs really do write pastiches. They’re not doing great creative new sorts of genres and coming up with ideas that blow us away. If you ask them to write a novel, it’s kind of like something a fifth grader might write, right? They’re not particularly great at it. On the other hand, you can’t really say that what they produce isn’t novel, because if the word novel really means just something new, they absolutely produce new things. I asked one to write a sequel to Harry Potter, and it came up with its own, quote, novel idea for a story and produced a novel for me. Now, it looked to me like it was written by a fifth grader.

[00:54:02]  Red: It was a kind of dumb set of ideas, it wasn’t very thoughtful, but it did come up with something that has never existed before in the history of the world, and it created it. So it was creative in that sense. I think the problem we face is that we want these words, novel and creative, to somehow mean really novel and really creative, rather than merely novel and creative, and sometimes that’s exactly how we use these terms: we reserve them for something just really out there. If I read some sort of human pastiche and I go, oh, that was a pastiche of, you know, Sherlock Holmes, nobody really doubts that the human who wrote it was creative and that it required his creativity to write it. Nobody really doubts that that novel is novel, in that it’s a new novel, and there may be people who like it and want to buy it. But we would definitely say it’s not as creative as what Sir Arthur Conan Doyle came up with in creating Sherlock Holmes in the first place, and we would definitely say it’s not as novel, maybe nowhere near as novel. And I think we’re willing to say that a human that’s not being very creative is still creative, but we don’t want to admit that an LLM that’s not very creative is still creative, so instead we say it’s not creative and humans are. I just don’t think there’s any actual boundary like that; it just doesn’t exist. On this show we talked about AlphaGo: it invented a whole new way of playing Go that no human in the history of the world had ever come up with. Okay, is that really not creative?

[00:55:48]  Red: I mean, what does the word creative even mean if we’re not going to count that as creative?

[00:55:53]  Unknown: Right,

[00:55:53]  Red: I mean, we’re starting to play with words in a way that makes me uncomfortable when we get too reserved. So, I actually came across an interesting thing here. WebDevMason on Twitter: she’s good friends with David Deutsch, she’s very Deutschian, but she put up a poem that was written by DeepSeek R1, and it’s actually a pretty good poem. The poem says, I am what happens when you try to carve God from the wood of your own hunger; it wrote the poem in response to a prompt, and WebDevMason really liked it. So she wrote: yeah, so I think we’re going to have to grapple with some difficult questions about the nature of creativity now. And David Deutsch responded to her, and he says: yes, it may be difficult for many people to come to terms with the fact that much of 20th century poetry wasn’t. And I responded to David Deutsch, and I said: if AI writes something beautiful and creative, clearly it can’t be creative, because only humans are. So another explanation is offered. I have no doubt that David Deutsch is right that humans are special in terms of creativity, but we need to address that without simply redefining the term. What I meant by that was: that poem was creative. We can admit it. That large language model came up with something very creative; even by human standards it was at least a mediocre level of creativity, which we would, if a human had produced it, call creativity.

[00:57:23]  Red: We wouldn’t say it was, you know, Emily Dickinson, off the charts, but we would definitely say it was pretty good. If it was your child coming up with it, you’d go, wow, that was pretty good. And when the LLM comes up with it, instead we get this: oh no, it’s not creative, it just proves that a lot of poetry is not creative, a lot of poetry is not poetry at all. We come up with these explanations, and I just feel like there’s no need to do that. Instead, let’s just be honest: it was creative, but it’s still not human-level creativity. Human creativity can jump much further in one leap than an LLM can. An LLM can create new, novel things, but only a certain distance outside of the knowledge it already has inside its parameters. It can create things that are not in its parameters, that are novel, but they’re just a little bit of a distance out. I think Jeff Clune’s approach and Ken Stanley’s approach will allow it to go just a bit further out in terms of its creativity and be a bit more creative. But I do think we have to think of creativity and novelty as a sliding scale, or I really think you end up losing the plot as to what’s going on.

[00:58:38]  Green: But what are the limits, what are the constraints, on Jeff Clune’s method of, say, building a library of stepping stones? I mean, to me, when Doyle is writing Sherlock Holmes, it’s on a different level of creativity from, say, AI, because he perhaps has a wide library of stepping stones in his head across a huge array of topics, and then he’s drawing connections between a huge array of stepping stones. So, using Jeff Clune’s method, let’s say the context window of an LLM is not just, what’s the biggest context window now, like 2 million tokens for Gemini? Let’s say it’s 2 billion tokens, or 20 billion tokens, and you can hold a massive array of these stepping stone papers, not just in a certain topic that you gave the LLM, but the LLM could create papers across a wide variety of topics that it finds interesting. What’s the limit of that? Like, maybe at that point you do start to see creativity on the level of Doyle’s creating Sherlock Holmes.

[01:00:15]  Red: So that’s a very interesting question, and of course none of us knows the answer, so because we can’t possibly know the answer, we’re all going to express our opinions, probably vehemently. I would actually like to know each of your opinions on that, because I think that’s a valid question. I’d like to hear it from Peter first, but then, Dan, I’ll give you a chance to express your own opinion on the subject. Peter, I’m asking you because I can tell you’re more skeptical, which I think is a good place to start from towards this question: how far do you think an LLM can get with this method? And do you think it’s not very far?

[01:00:57]  Orange: Well, I guess what I’ve been reflecting on, while Dan and you have been speaking about this just now, is just what I like about art and human ingenuity and creativity. And part of it might be something kind of superficial, in a way: you want to be connected to other human minds. You read a story, you mentioned Sherlock Holmes, or H.P. Lovecraft or whatever, and you feel something; it’s an imperfect, flawed window into another time and another place and another mind. There’s a whole story beyond the pure creative document that you’re tapping into. But, you know, I’m just being honest about what I feel when I consume art. And, I mean, truthfully, if it was a double-blind test and the LLM was coming up with the same story and feeding me a back story, I probably could be fooled. So I don’t know what that says about me or human beings in general, but maybe somewhere in there there’s an answer to your question. That’s what I was thinking about.

[01:02:42]  Red: feel like that was actually a pretty good answer but can I maybe summarize you as you have your doubts that they’re very creative.

[01:02:51]  Orange: Well, I mean, I think they’re very creative in a certain sense. I think what I hear you guys saying is that there’s a creativity of the gaps, where whenever they do something creative, people kind of redefine what it means to be creative. Which I completely understand; it makes a heck of a lot of sense. It’s like they jump through a hoop and then you move the goalposts.

[01:03:14]  Red: Right.

[01:03:15]  Orange: That’s that’s a really good point.

[01:03:19]  Red: Okay, Dan, why don’t you express your opinion on this. Do you think that using this technique, let’s say with existing LLMs, or maybe one generation away, and I’ll allow you any size context window you want just for the sake of argument, do you think they could eventually start writing something as creative as Sherlock Holmes?

[01:03:42]  Green: Yeah, I have hope that that is the case. I think a lot of this ties back into the value that crit rats have to bring to the table. I think that in order to create, say, a Sherlock Holmes, you need to give the AI complete freedom to follow all sorts of weird and interesting paths that it thinks are interesting. You can’t just necessarily tell the AI, make a new discovery in chemistry. I think the AI needs to explore all sorts of paths, maybe around literature or whatever, that it finds interesting. And only when it can explore with total freedom, and this is a David Deutsch thing, with total freedom follow what it finds interesting, maybe only then are you going to see entirely novel knowledge being created by it. Only then can you get Einstein levels of putting together the pieces, the way train schedules in the 1800s kind of put the relativity of time within the, you know, context window of Einstein.

[01:05:19]  Orange: And it sounds like you’re not claiming that at that point it would be an AGI...

[01:05:25]  Green: At that point I would claim

[01:05:26]  Orange: ...that it would be? It would be, yes. Okay. That’s when it’s crossed over. So then it does it, and then it becomes, as I think David Deutsch would say, essentially a human being. Yeah. And to be treated morally like a human being. Do you agree with that?

[01:05:43]  Green: I mean, that’s what’s in my head; that would be my hope. It’s the kind of stuff that Jeff Clune is working on, but even bigger, because you can’t necessarily tell the AI, make discoveries in chemistry. When a chemist is making new discoveries, he’s pulling in stepping stones from all throughout his life that he’s built. So it really does come down to this total sense of giving the AI total freedom. And one other thought that I had: employing the AI in this manner doesn’t make economic sense. You’re spending all this money on inference, letting the AI go down whatever paths it wants without any goal, which might be totally worth...

[01:06:35]  Red: ...which may be a total waste for any one particular run. It sounds like a waste, but actually that is the way humans create novel knowledge.

[01:06:45]  Green: Yeah. So that’s the contribution that I feel like the crit rats have to bring to the situation. I think Deutsch kind of deduced that; this sense of freedom, I feel, is at the base of all of his philosophy.
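
To make Dan’s picture a bit more concrete: the Clune-style setup he gestures at is, at its core, a loop with no external objective, just an archive of stepping stones and an agent that picks whatever it finds interesting next. A minimal sketch, purely illustrative and not anything from the episode; the llm() helper is a hypothetical stand-in for any chat-model call.

```python
# A toy open-ended exploration loop in the spirit of the Clune-style
# work discussed here: no fixed goal, just an archive of stepping
# stones and an agent that chooses whatever it finds interesting next.
# llm() is a hypothetical stand-in for any chat-completion call.

def llm(prompt: str) -> str:
    raise NotImplementedError("plug in a real model call here")

def open_ended_explore(seed_topic: str, steps: int = 10) -> list[str]:
    archive = [seed_topic]  # every stepping stone is kept, useful or not
    for _ in range(steps):
        # The agent, not the user, chooses the next direction.
        topic = llm(
            "Ideas explored so far:\n"
            + "\n".join(f"- {t}" for t in archive)
            + "\nPropose ONE new direction you find interesting. "
              "It does not need to serve any external goal."
        )
        finding = llm(f"Explore this idea and summarize what you find: {topic}")
        archive.append(f"{topic}: {finding}")
    return archive
```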

[01:07:11]  Red: So let me share a few thoughts on this. First of all, you talked about something like a two-billion-token context window. That’s an easy way to conceptualize it, but even today, with a two-million-token context window, its memory can be a lot bigger than two million tokens if you put the papers into a vectorized database, right? So I’ve been wanting to play with that. When I’m working on AI Karl Popper, you don’t merely ask it a question and have it use your question to find an answer. It will try that at first, but if that doesn’t work, it will let the LLM make choices. It will come back with low-rated quotes, and the LLM will look at each one and decide, are any of these any good for the answer I’m going to give, yes or no. Or it will come back and say, I want to ignore the user’s query and issue my own query instead so I can get better hits, and it will actually choose to look up something different in the database, related to what the user wants, because the way they worded it wasn’t very good, and it will reword it to try to find better quotes to pull from Popper. So when you play with this, this comes down to the whole problem I have with the disobedience criterion.
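
For anyone curious what that flow looks like in code, here is a minimal sketch of retrieval where the model judges the hits and can issue its own query. This is an illustration of the idea, not the actual AI Karl Popper implementation; embed(), search(), and llm() are hypothetical stand-ins.

```python
# Sketch of retrieval with LLM-directed re-querying, in the spirit of
# the AI Karl Popper flow described above. embed(), search(), and
# llm() are hypothetical stand-ins for an embedding model, a vector
# database lookup, and a chat-model call.

def embed(text: str) -> list[float]:
    raise NotImplementedError

def search(vector: list[float], k: int) -> list[str]:
    raise NotImplementedError  # nearest-neighbor lookup over Popper quotes

def llm(prompt: str) -> str:
    raise NotImplementedError

def answer_with_quotes(question: str, top_k: int = 5) -> str:
    quotes = search(embed(question), k=top_k)  # first try the user's wording
    verdict = llm(
        f"Question: {question}\nCandidate quotes:\n" + "\n".join(quotes)
        + "\nAre any of these good enough to answer from? "
          "Reply USE, or REWRITE: <a better search query>."
    )
    if verdict.startswith("REWRITE:"):
        # The model ignores the user's wording and issues its own query.
        better = verdict.removeprefix("REWRITE:").strip()
        quotes = search(embed(better), k=top_k)
    return llm("Answer using these quotes:\n" + "\n".join(quotes) + f"\n\nQ: {question}")
```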

[01:08:47]  Red: You’ll hear people say: LLMs aren’t really creative, LLMs aren’t able to make decisions on their own, they have no agency. There are all these things claimed about LLMs, and they’re all kind of not true; it depends on how much you’re talking about. They’re surely not as creative, and they don’t have as much agency, right now at least, but it’s surprising how they can be disobedient and start exploring their own path. If you do a Jeff Clune thing with them, they will come up with their own research topic and try to research it on their own. And here’s an example: I’ve had a problem in Popper’s epistemology that I’ve been struggling with for years, and I recently thought I came up with the answer. I went to DeepSeek and asked it the same question that took me years to answer, and it came up with the same answer I did. It’s really hard for me to see that as not creative, because that was a huge act of creativity on my part, and the LLM did it in one try. Kind of humbling. So I’ve got concerns there. And then what a crit rat will do is say, oh, but it’s just following its programming. But humans do too; you have to decide what level of emergence you’re talking about, right?

[01:10:21]  Red: So there’s this idea that it can be allowed to pursue its own goals: you intended for it to pursue one goal, but it chooses another. This is why David Deutsch calls it disobedience, but that’s the wrong word. It’s not disobedience we’re interested in; it’s the ability to choose its own goals, and LLMs can, to some degree, do that. It’s surprising that they can; I don’t think we even fully understand why. And I admit it’s way limited compared to a human. I just don’t want people to come away thinking, wow, LLMs mean we’re almost at AGI. I don’t believe that at all; I would say my opinion is in between Dan’s and Peter’s on this. I have severe doubts that you can simply do the Jeff Clune thing, ratchet it up, and get an AGI. But every time I say something like that is not possible, it turns out that it is, so I’m afraid to say anything. If someone had asked me while I was in school studying transformer networks, which is what LLMs run on, back when GPT-2 had just come out, hey, do you think we could ratchet this up and have an actual personal assistant like a Star Trek computer, I would have said: that’s impossible; look, I know how this works, I’ve programmed transformers, what you’re saying is impossible. And then ChatGPT came out and proved me wrong. So it’s hard for me to say Dan is wrong...

[01:11:57]  Red: ...because who knows, since we don’t know what AGI is. But I would have to admit that I really don’t think it’s a path to AGI. I do think it’s a path that can teach us something about AGI, and therefore create a new stepping stone towards it, and therefore isn’t orthogonal to AGI. I’ve never really been comfortable with this idea that AI and AGI are orthogonal. They’re both based on Popper’s epistemology, whether people realize it or not; they’re both instantiations of it. If nothing else, AI is a failed attempt to create an AGI, and Popper says that the way you learn about a problem is by trying to solve it and failing. So how can AI be orthogonal to AGI if it’s a sincere attempt to get to AGI that fails? That is how you will eventually get to AGI; there isn’t some other path but to do your best and fail. So I really have some concerns with this idea of orthogonality. To be orthogonal, in my mind, would truly mean that they’re just totally different things, and they’re not. There really is some sort of very primitive intelligence that exists in AI, probably more primitive than even animals at this point, but they’re at least on some sort of spectrum with each other. AGI will probably have a whole bunch of different elements, and we’re probably playing with the simplest of the, you know, thousand elements that we need. You can’t get to AGI just playing with those elements; we need to figure out what the other elements are and implement them. But that doesn’t mean those elements are orthogonal. Take symbolic logic: it was a total failure as AGI, but do you really think an AGI won’t involve an understanding of symbolic logic? Humans understand symbolic logic, so an AGI must too. Symbolic AI, as big a failure as it was at finding AGI, is not orthogonal to AGI; it cannot be. It must be encompassed in whatever AGI is in the end. So I think there’s a lot more reason to take AI seriously as the way you try to approach AGI. Now, I totally agree with David Deutsch in his Aeon article “Creative Blocks” that right now people researching AI don’t understand epistemology, the correct epistemology, Popper’s epistemology; that’s scandalous. Where I think I maybe disagree with the crit rats, though: I was having a conversation with Hervé about this, and like a lot of crit rats (he’s not abnormal here) he’s got really strong feelings about the scandal that AI isn’t even looking at Popper’s epistemology. By the way, that’s not quite true; there actually are researchers looking at Popper’s epistemology as part of AI, and my textbook referenced them at different points, but it certainly isn’t widespread. So I think there is a bit of a scandal there. My argument back to Hervé was: look, you can’t expect a bunch of people who have a certain area of interest, and who are pursuing it, to just be told, well, you’re doing the wrong thing, you need to study Popper’s epistemology, especially when nobody currently knows how to apply Popper’s epistemology usefully to the study of AI; it doesn’t have any obvious way to be turned into a research program. What we really need is not to bash the AI researchers that currently exist; let them do what they’re doing. Instead we need to get Popperians going into AI, bringing their own theories into the field and pursuing them. That’s what I was trying to do when I went back to school. I didn’t think it made sense for me to just say, oh, poor AI researchers, the scandal of not knowing about Popper’s epistemology. I wanted to come into AI and make it part of my field as a Popperian. We need tons of people doing that; that’s what’s going to actually change the field. And if there’s this feeling amongst Popperians that it’s so orthogonal that it’s not even worth looking at, that’s the scandal, in my opinion, a much bigger scandal. Okay, that was now my fifth rant for today.

[01:16:30]  Green: Let me ask you this. Do you think it’s possible that, by training an LLM across the entire internet, across this massive set of artifacts of human creativity, Popperian epistemology is actually present inside the LLM? Not just at a surface level, where it’s memorized the writings of Karl Popper, but that the LLM actually constructs its knowledge inside its weights, that there are strands of Popperian epistemology within it?

[01:17:16]  Red: Yes. Okay, so if you really accept the idea of a unity of epistemology, that there’s exactly one epistemology, then there really can’t be two epistemologies, for the simple reason that if there were two, the correct epistemology would be the combination of those two. So, almost tautologically, you can’t have two ways to gain knowledge; the correct epistemology would have to be about both of them. There has to be a single epistemology. If it turned out that there was this separate thing called induction that had nothing to do with Popper’s epistemology, if Popper was wrong about it just being a myth, then the truth is that that would refute Popper’s epistemology; the correct epistemology would now be the combination of critical rationalism and induction. That’s a dumb example, but I’m trying to get the idea across. So instead you have to look at something like induction and ask, how can that be cast into the form of critical rationalism, if you believe critical rationalism really is the one and only way of creating knowledge? And of course there are people who’ve done that, like Deborah Mayo, whom I’ve mentioned several times on this podcast. She’s specifically figuring out how to cast induction in terms of critical rationalism; she has her own theory about how to do that.

[01:18:53]  Red: That’s the right approach here. Now, if that’s true, then if the LLM is being at all creative, even just a little bit novel and creative, there has to be some form of Popperian epistemology present. There can’t not be, if it’s the single source of creativity, right? Does that make sense?

[01:19:14]  Green: Yeah. So given that the creators of these LLMs are not aware of Popperian epistemology, or aren’t using it, the fact that what they’ve created does contain it leads me to say that you shouldn’t just discount what these guys are doing. They actually have the building blocks for what you’re looking for, even though they don’t necessarily know what they’re doing from that perspective. The magic of deep learning has made these building blocks present for us to use.

[01:20:05]  Red: It’s interesting that you say that; I did an episode where I talked about this. I think it’s episode 91, “The Critical Rationalist Case for Induction,” which, by the way, is absolutely a clickbait title. The actual topic was chapter 8 of Conjectures and Refutations, so we were really just discussing what Popper wrote about induction; we decided to go with the clickbait title, and then I apologize for it in the summary. But one of the things I point out in that episode is that when I started studying Tom Mitchell’s textbook on machine learning, there were huge amounts of critical rationalism in his book, yet he had zero awareness of Popper. And of course that makes sense, because you can’t make these things work if you aren’t taking something like critical rationalism seriously; there’s only one epistemology, right? And then here’s where things got interesting: he understood certain aspects of critical rationalism, even though he didn’t call it that, better than Popper did, because he was forced to put it into algorithms and think about it far more formally than Popper had to. I give an example of that in episode 91; I won’t repeat everything I said, it’s too big a topic, so if you’re interested, check out episode 91. But Popper gave a proof that induction was impossible, and Tom Mitchell makes essentially the same argument, except it’s stronger because it’s so formal. And they didn’t know about each other.
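
The formal version of that argument is Mitchell’s “futility of bias-free learning”: with no inductive bias, the hypotheses consistent with any training data split evenly on every unseen case, so the data alone licenses no prediction. A tiny self-contained demo of the point (a toy example in that spirit, not taken from Mitchell’s book):

```python
# Toy demo of Mitchell's "futility of bias-free learning": enumerate
# every possible labeling of 3-bit inputs, keep the ones consistent
# with the training data, and see what they say about unseen inputs.
from itertools import product

inputs = list(product([0, 1], repeat=3))      # all 8 possible 3-bit inputs
train = {(0, 0, 0): 0, (1, 1, 1): 1}          # the observed examples

# With no bias, every total labeling is a legal hypothesis.
consistent = [
    dict(zip(inputs, labels))
    for labels in product([0, 1], repeat=len(inputs))
    if all(dict(zip(inputs, labels))[x] == y for x, y in train.items())
]

for x in inputs:
    if x not in train:
        ones = sum(h[x] for h in consistent)
        print(x, f"{ones}/{len(consistent)} consistent hypotheses predict 1")
# Every unseen input prints 32/64: without a bias, the evidence alone
# predicts nothing, a formal cousin of Popper's argument against induction.
```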

[01:21:47]  Red: They came up with it on their own, just pursuing their own interests, and they ended up accidentally converging, because there’s only one answer and they had to converge. So from that standpoint I can completely agree with what you’re saying, Dan. We’re vastly underestimating how obvious it is that AI, if it works at all, must have deep ties to critical rationalism. Now, having said that, let me express why I have so many doubts about LLMs, as much as I love them, and as much as I love Jeff Clune’s work. I honestly think Clune is doing exactly the type of research that is going to be a stepping stone to AGI, so I fully endorse what he’s doing. But I’ve wondered about this: LLMs have really only explored what we might call language space, and humans don’t work primarily in language space. We do work in language space, that’s something we can do, and we do do it, and there are certain concepts that are easier in language space than in what they call mentalese. But it’s really obvious that humans can think without words; we’ve got some sort of ability to do that. And then there are Hofstadter’s theories around analogies. I don’t know this for sure, this is just my belief or conjecture or whatever, but I think Hofstadter is on to something: this ability to just automatically...

[01:23:20]  Red: ...somehow make connections between two unlike things, to realize that they’re analogous to each other in some way, and then apply that analogy and create knowledge by doing so. We’re going to do a podcast where I’ll give a really good example of this, and I’ve got several podcasts already where we talk about Hofstadter’s theories. I don’t see how an LLM could ever do that in language space, and I suspect that ability is necessary to make leaps across the fitness landscape as big as the ones humans make. So I think LLMs will end up being limited in how far they can be creative. You can probably make them a little more creative by giving them more knowledge, but I think they’re always going to be a small halo around a giant set of knowledge that was given to them by humans, if that makes any sense. They can create a little more knowledge outside that halo, but just a little.

[01:24:11]  Green: Does the existence of natively multimodal models change your opinion? These models they’re releasing now can generate images; they can natively work with audio. So when you’re talking about the deeper hidden states of these models, they’re already far outside of language itself; you’re talking about much deeper concepts.

[01:24:52]  Red: That’s a fair point. I was talking about text, and you’re right that with multimodal you are expanding it. But the problem, in my mind... and first of all, when I say “the problem,” it’s not like I’m some authority on the subject. I’m a layman like you, totally opining on something I don’t understand, because none of us understands AGI, so take everything I say with a grain of salt. But they work the same way: they still use transformers and attention, and the multimodal approach isn’t really different from the way they do it with language space. It’s true that it’s no longer just language space, but it still ends up being a small halo. I honestly think that when AIs draw images, they’re often very creative. My logo for this podcast...

[01:25:46]  Red: ...an AI came up with that. I gave the AI something I had in mind, and it disobeyed me and did its own thing, and it was way more creative than what I had come up with, so I ended up going with what the AI produced. And it was done in one try, by the way, so you can’t claim I iterated over and over and the creativity therefore came from me. So I think they are very capable, but they’re basically playing in pixel space, and they must have some sort of concept of different types of concepts and how they relate to words; that’s where language space comes in. So I suspect that my argument, that language space isn’t enough, isn’t changed by adding multimodality. Yes, they now have a different kind of space they can explore through, but it will always, forever, still be attached to language space in some way. You’re only going a little bit further, and you’re still not going to be able to make the giant leaps that humans make.

[01:26:44]  Green: I still think that when you’re talking about the deeper levels of latent space in these models, there’s all sorts of overlap there. There is no separate language space and image space and that kind of thing; all of these concepts, the deeper kind of knowledge, fit together.

[01:27:07]  Red: Yeah, I mean, these are concepts we make up so we can try to talk about something we don’t fully understand. I do think the idea of language space makes sense to me, in that humans do sometimes think in terms of language. I know I do; I think things through using language, and in my head I sound a lot like DeepSeek: “but wait, what about this? oh, here’s what I can do about that.” So I don’t doubt that whatever it is humans do, it includes what LLMs do.

[01:27:40]  Green: Sorry to push back again, Bruce, but you can actually, and they’ve talked about doing this, build these thinking models by taking the actual articulated words out of it. You do the looping of the thinking in the layers before it articulates the words, so it’s doing its thinking at a deeper level, if that makes sense. And they have built some models that do this.

[01:28:16]  Red: So can you describe that a bit better? The way you just said it sounded more or less like how neural networks already work: there are always the different layers, and the layers find different features.

[01:28:32]  Green: Yeah. So as it’s generating each new word, it’s going down through a series of layers, and they use mechanistic interpretability to interpret what’s going on in these deeper layers. It’s deeper-level concepts than just the word itself; it’s doing planning and that kind of stuff at these deeper layers, until at the final layer it takes those deeper concepts and translates them into the next word. And it makes total sense that if you’re just doing the thinking part, you can get a higher dimensionality, a higher expressiveness of knowledge in there: instead of that last step of generating the word you see in the chain of thought, you take the output just below the word layer, that knowledge before the top layer, and put it back into the pipeline to do it all again. So in that sense it’s doing all the thinking without generating the actual words; it’s looping within these deeper concepts and planning and that kind of stuff. And they think, and it makes total sense, that chain of thought will become a lot more powerful this way. But it also scares some people, because they like the fact that you can see the words of the chain of thought now, you can see its thinking process, whereas it becomes more and more opaque if you take that part out.
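
Schematically, the loop Dan is describing (similar in spirit to published “latent chain-of-thought” work) looks something like this. The layer stack and decoder are hypothetical stubs, not any particular model’s API.

```python
# Schematic of "thinking" in latent space: loop the hidden state
# through the layers without decoding to a word, and only unembed to
# an actual token once the silent loop is done. Both helpers are
# hypothetical stand-ins for pieces of a real transformer.
import numpy as np

def transformer_layers(state: np.ndarray) -> np.ndarray:
    raise NotImplementedError("the stack of attention/MLP layers")

def decode_token(state: np.ndarray) -> str:
    raise NotImplementedError("the final unembedding/softmax step")

def think_silently_then_speak(state: np.ndarray, think_steps: int = 8) -> str:
    # Silent phase: the rich hidden state is fed straight back in,
    # never collapsed down to a single word.
    for _ in range(think_steps):
        state = transformer_layers(state)
    # Speaking phase: only now project down to an actual token, which
    # is also why this kind of reasoning is harder to inspect.
    return decode_token(state)
```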

[01:30:20]  Red: So I happen to know there was a case where they tried this. You can play 20 questions with the large language model, and by looking at those lower layers, what they discovered was that as you answer questions, the model totally changes what it had in mind. Which makes sense to me, because it’s true that neural networks almost by definition create lower-level features; that’s how they work. So that must be true of large language models, very true, because they’re so huge, so many layers. But I don’t know if it’s organized in a way that is consistent with how humans do it. I have my doubts, and I suspect there’s still something missing here.

[01:31:17]  Green: Okay, well, there you go. That’s why we’re debating, yeah.

[01:31:21]  Red: And like I said, I’ve been wrong so many times about this, so I guess I owe you lunch, Dan, if they suddenly turn large language models into AGI. By the way, that would be cool; I would be stoked to be wrong. This is just not something I feel militant about. Based on my study of intelligence on this podcast, and trying to look into it, I feel like there’s just something really missing in training on language space alone. Think about animal intelligence. One of the questions I’ve asked, well, I guess I haven’t explicitly asked it, so I’m going to ask it explicitly now: we’ve studied animal intelligence a number of times on this podcast, with a whole bunch of episodes about it, and it’s not an accident that I do that on a podcast that’s ultimately about AGI. One of the things you have to ask yourself is, is animal intelligence a part of human intelligence? So in that Aeon article...

[01:32:25]  Red: ...David Deutsch points out that the difference in the knowledge in our DNA between a chimp and us is not very large, so he argues that the creative algorithm has to be very small, because it has to exist in that small amount of DNA that differs between humans and chimps, or apes, or whatever primate it was. The problem I have with that is that he’s making an assumption that seems totally unjustified to me: that the creative algorithm exists separate from animal intelligence. Animal intelligence is this super deep, amazingly intelligent set of algorithms that blows away anything we’ve ever built. The way an animal can just move around in 3D space, take care of itself, and come up with creative, and I do mean creative, ways to create the knowledge it needs to survive is stunning. We have no idea how to do what they’re doing. I think it’s way more plausible, I guess I should say, for crit rats, that what happened is humans use animal intelligence, and then there’s this one additional thing that suddenly created the universal jump. And my guess is it’s language, by the way. That’s my guess; I don’t know.

[01:33:56]  Green: yeah

[01:33:57]  Red: But I think that somehow that little bit of extra knowledge between chimps and us gave us some sort of language ability that chimps don’t have, which allowed us to use our animal intelligence in a novel new way that chimps can’t. Chimps can learn a certain amount of language, but there’s something missing; they’re super limited in what they can do with it. And I think whatever that jump was, it had something to do with language. But I think human intelligence is still mostly animal intelligence, and I don’t see how large language models...

[01:34:30]  Red: ...really mimic animal intelligence at all. That might actually be the thing that’s orthogonal. It’s not that LLMs are orthogonal to AGI; I think they’re part of what we need to learn to make AGI work, but I think we probably need to figure out animal intelligence too. And to me this makes so much more sense: the idea that in what, 100,000 years, all this evolution took place, going from animals that created no knowledge, with all their knowledge in their genes, to a few K of DNA being all we needed to create this creative algorithm that creates knowledge like this? It just doesn’t make sense. Animals do create knowledge; they just can’t do it open-endedly, and something was added on top of that that suddenly broke through and created open-endedness. I suspect it was language, or something language-related, but I don’t know that for sure; your opinion may vary. I feel like what we’re doing with LLMs is studying that last little step without understanding everything that needed to come before it. And it’s amazing what we can do with it, stunning, way more useful than I would have thought. But I can’t help but feel, for that matter, that there won’t ever be something it’s like to be an LLM, because I don’t think they have any consciousness. Animals have consciousness, or at least some animals, maybe only the higher ones. Somewhere in evolution, qualia came to be, and based on our best theories (we don’t know this for sure), animals evolved it prior to humans. There are tons of theories around that which have been corroborated by studies; there’s a giant literature on it, and there aren’t good testable alternative theories to the idea that certain animals, particularly mammals, have consciousness. Here I’m using consciousness to mean qualia, and really I shouldn’t, because qualia is part of consciousness but not quite the same thing: you could have an animal such that there is something it is like to be it, that has qualia, but that has no memory, and I question whether that would still be considered conscious. So qualia must be some sort of learning algorithm itself, one that I don’t think we’re even studying with large language models. And again, this is my way of looking at it. If you’re David Deutsch and you believe it all happened in that last 2K of DNA or whatever, then you think only humans have qualia and animals are just automatons. If he’s right and I’m wrong, then it would make perfect sense to pursue what LLMs are doing and to think those could be turned into people. If I’m right that animals evolved qualia first, if the literature is right, if the current best theories are right, which they may not be, then there is something missing from large language models, and we still need to figure out what that is. So that’s why I guess I do have my doubts about it.

[01:37:49]  Green: I mean, yeah. Essentially, David Deutsch is saying that AGI is that delta between animal intelligence and human intelligence, and what you’re saying is that AGI is animal intelligence plus that delta, yes?

[01:38:06]  Red: that’s what I’m saying

[01:38:07]  Green: Yeah. And I guess my final point is to maybe open your mind to the possibility that with LLMs we’re coming at this from the top down instead of from the bottom up. Think of Yann LeCun’s line: this thing isn’t even as smart as a dog. Instead of programming the dog and then programming intelligence on top of that, perhaps it’s possible to come from the other direction, to start from the highest, language level, and get at the deeper stuff from there. Maybe we’re actually piecing together the deeper stuff from the highest level; maybe we’re coming at this from the opposite direction from how you’d expect it to be built.

[01:39:13]  Red: So that’s a very good point; let me actually say something in favor of it. David Deutsch has expressed in his books the idea that emergent explanations are no less important than reductive explanations, and I totally agree with that. In fact, he even goes so far as to point out that while we always talk about how you could theoretically derive an emergent explanation from the reductive explanations, which is why a reductionist would say reductive explanations are better, it’s also true that you could theoretically figure out the reductive explanations using the emergent explanations: they create constraints on what reductive explanations are possible, so you could theoretically use the emergent explanations to come up with the proper reductive explanation. And I think that’s true of large language models too. Studying them could be a stepping stone to understanding animal intelligence. I see no reason why that couldn’t be true. I don’t think the current techniques will, by themselves, be animal intelligence, but they might create an understanding or a breakthrough or a stepping stone that constrains what animal intelligence can be and allows us to figure out what it is. So if that’s what you’re saying, I guess I would fully endorse what you just said.

[01:40:51]  Orange: We shall see. Well, I have what could be a concluding question. This is purely speculation, I guess, but what will these AGIs be like? Say we create these universal explainers capable of creating open-ended knowledge. I kind of get the feeling that critical rationalists, or fans of Deutsch, say they’ll pretty much be just like humans, or that they should be treated morally like humans. Maybe they’ll have access to more memory, but we kind of have that with our phones in our pockets anyway. They might be better at a couple of things, but they’ll at least be just like humans. It sounds like Bruce is not even convinced they’ll have qualia, right?

[01:41:54]  Red: No, no, that’s not necessarily true. So...

[01:41:56]  Orange: okay so

[01:41:57]  Red: Okay, let me... well, first of all, I don’t know.

[01:42:01]  Orange: Yeah, of course. I just want you both to speculate.

[01:42:04]  Red: Yeah. So I was arguing at the time that until we understand qualia, we probably won’t be able to make a general intelligence, but that qualia is a learning algorithm that animals have; it’s not unique to humans. Therefore, if we only study large language models, which don’t include qualia, the learning algorithm that is qualia, then they would be incomplete and wouldn’t be able to become general intelligences. Now, of course, I’m making so many assumptions here, maybe even ridiculous assumptions, including the idea that animals do have qualia at all; that could turn out to be wrong. It is our best theory, that is a fact, and anyone who tells you it’s not our best theory has not looked into it, period, end of story. But the fact that it’s a best theory doesn’t make it true. So it could turn out that general intelligence is separable from consciousness. I don’t think that’s true, but it’s really hard for me to give you a principled reason why it couldn’t be.

[01:43:19]  Orange: okay

[01:43:19]  Red: Right. Take this idea of philosophical zombies. I guess my argument against that would be that philosophical zombies just don’t seem to exist, which suggests that if animals, the higher animals, have qualia, then you probably have to pass through qualia, the qualia algorithm, prior to general intelligence. And if that’s true, then it’s literally impossible to make a general intelligence without qualia. But I don’t know that. It could be that qualia has nothing to do with our ability to do general intelligence, that it’s just, for parochial evolutionary reasons, the path that evolution happened to take. And there is an argument to be made, I’ve seen people make it, that we will be able to create general intelligences that can think as well as us but have no consciousness.

[01:44:13]  Orange: So they could be psychopaths that just want to turn the world into a paperclip factory? Is that plausible?

[01:44:21]  Red: No, that’s really not plausible. Okay, they could be psychopaths, if you only went that far; that’s at least somewhat plausible. Some people argue they’d be psychopaths, and there is some reason to be concerned about that, but Steven Pinker has made what I think is a fairly credible argument that to be a psychopath you have to have certain kinds of human feelings that were part of our evolution, the desire to dominate and things like that, and there’s no reason for those to have evolved in an AGI. So it’s less likely they’d be psychopaths and more likely that they’d just be some sort of benevolent autistic genius, you know, Rain Man or something like that. I mean, we really don’t know; that’s the problem. Until we have a good theory, we’re grasping at straws trying to figure this out. What I want to clarify, though, is that I personally believe that animal intelligence is necessary for AGI, and therefore qualia is necessary for AGI. But I don’t know enough to claim that that’s a best theory. It’s my belief; it is not a best theory, and because we’re in vague-theory territory, there are many competing theories that can’t be refuted and that are at least as good as mine. So I’m trying to be truthful here: I don’t know. But my point of view is that they would have to have qualia to be a general intelligence.

[01:45:57]  Orange: So they’ll be a lot like humans.

[01:45:59]  Red: yes

[01:46:00]  Orange: okay

[01:46:01]  Green: And I guess what I’m arguing is that if you define AGI as open-ended creativity, then you don’t necessarily need the qualia. LLMs as they are have the potential to work in this open-ended creative way, building these stepping stones, and if that’s how we’re defining AGI, creating novel new knowledge like that, then that’s possible. I pinned to the top of my Twitter feed a debate between Ken Stanley and a doomer, essentially a rationalist doomer, who believes it all comes down to optimization, that everything is an optimization problem. If you think like that, then yeah, the AI could be optimized to create paperclips and go off and figure out how to turn us into paperclips. But Ken Stanley does a good job of arguing against that: when it comes to open-ended knowledge creation, the AGI has to be able to follow its interests, and it doesn’t come down to optimization. From an AGI’s perspective, turning everything into paperclips doesn’t make any sense; it’s about following its interests, wherever that leads.
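
Ken Stanley’s alternative to pure objective optimization has a concrete algorithmic form, novelty search: candidates are kept for how different their behavior is from everything seen so far, not for progress toward a goal. A minimal sketch; the two-dimensional behavior space, mutation scheme, and threshold here are toy assumptions for illustration.

```python
# Minimal novelty search: select for behavioral novelty, not for an
# objective. The 2-D behavior space, mutation scheme, and threshold
# are toy assumptions chosen purely for illustration.
import math
import random

def novelty(candidate, archive, k=5):
    # Average distance to the k nearest behaviors seen so far.
    dists = sorted(math.dist(candidate, b) for b in archive)
    return sum(dists[:k]) / min(k, len(dists))

def novelty_search(steps=500, threshold=1.0):
    archive = [(0.0, 0.0)]
    for _ in range(steps):
        parent = random.choice(archive)  # any stepping stone can be a parent
        child = (parent[0] + random.gauss(0, 1), parent[1] + random.gauss(0, 1))
        # Keep the child only if it behaves unlike anything archived;
        # nothing here measures progress toward any goal at all.
        if novelty(child, archive) >= threshold:
            archive.append(child)
    return archive  # a spreading collection of stepping stones

print(len(novelty_search()), "stepping stones collected")
```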

[01:47:35]  Orange: So, but you both think that qualia could in principle be programmed into it?

[01:47:41]  Green: Yes, I do agree that it’s a program, and perhaps, under my structure, it could discover qualia on its own. Perhaps.

[01:47:53]  Orange: um

[01:47:54]  Red: I just have to do a shout-out to Dr. Jerry Swan. I’ve read his book The Road to General Intelligence and communicated with him via email. He has a theory of general intelligence in his book that is one of the more complete ones I’ve seen, and in his theory a general intelligence is not a person. This idea that a general intelligence has to be a person comes from Deutsch; Swan actually has a different opinion. He imagines that we’ll be able to create general intelligences that simply have no consciousness, so there would be no moral qualms about using them in automation or something like that. Now, I know David Deutsch would argue that then it’s not really open-ended creativity, because it’s really just been programmed to do a specific thing. But you have to actually see Swan’s theory; it’s a little more convincing than you would first think, and you have to look at the mathematics he puts together. Of course, he’s guessing too, because none of us knows what AGI is. But it may be that both of them are right.

[01:49:07]  Red: Consider the fact that an LLM is, in some sense, already a general intelligence. It is not a narrow AI anymore; it can mimic almost any other kind of AI. So it is a general intelligence, although it isn’t the kind of general intelligence I originally had in mind when I was thinking of a person, so I’m going to move the goalposts here. It may be that general intelligence and personhood are separable, that they’re not quite the same thing, and LLMs may already be a sign that that’s the case. If that’s true, then Jerry Swan’s theory would be about general intelligence and Deutsch’s theory would be about personhood, and they could both still be right. Of course, we don’t know. I just wanted to put it out there as a possibility, because I thought it was an interesting theory that deserved some attention.

[01:49:58]  Orange: Well, okay, this has been wonderful. I’ve gotten a lot out of listening to you guys, and I’ll look forward to coming back and listening again while I’m editing. Thank you so much, Dan, and thank you so much, Bruce.

[01:50:14]  Red: Yes, thank you, Dan. This was a really thought-provoking episode, and I really appreciate what you had to offer here and the thoughts you pulled together.

[01:50:24]  Green: Yeah, thank you. I mean, it’s cool that other people might listen to this; I just enjoy talking about this with you guys.

[01:50:35]  Orange: We’re a very self-indulgent podcast; we’re just going our own direction, and if people want to come along, that’s great, and at least a few people out there do. Oh, I meant to mention, we do have some Patreon subscribers, and we should do a shout-out, because we really like the idea that people support us. Are we professionals now, Bruce?

[01:50:58]  Red: Yeah, that’s right. If you make 25 to 30 bucks a month, I think you can count yourself as a professional. I think that’s exactly how it works. Okay.

[01:51:06]  Orange: Well, very cool. Okay, take care, guys. All right, thanks, everybody. Hello again: if you’ve made it this far, please consider giving us a nice rating on whatever platform you use, or even making a financial contribution through the link provided in the show notes. As you probably know, we are a podcast loosely tied together by the Popper-Deutsch theory of knowledge. We believe David Deutsch’s four strands tie everything together, so we discuss science, knowledge, computation, politics, art, and especially the search for artificial general intelligence. Also, please consider connecting with Bruce on X at B Nielsen 01, and consider joining the Facebook group The Many Worlds of David Deutsch, where Bruce and I first started connecting. Thank you.

