Episode 14: Theories of Artificial General Intelligence
- Links to this episode: Spotify / Apple Podcasts
- This transcript was generated with AI using PodcastTranscriptor.
- Unofficial AI-generated transcripts. These may contain mistakes. Please check against the actual podcast.
- Speakers are denoted as color names.
Transcript
[00:00:10] Blue: Okay, we probably need to start wrapping this up, but I want to talk about actual theories of AGI and what our guests are working on, as far as their own personal theories and where they’re trying to take their research into AGI. So maybe, Ella, let’s start with you.
[00:00:27] Red: Yeah, sure. So I call my thoughts on AGI "CTP theory." I think I said at the start of the podcast that it’s an acronym that used to make sense, but the theory has changed a bit and it doesn’t really make sense anymore. The basic idea of CTP theory is that it provides a way of representing ideas computationally. And it does so in a way that allows the mind to compute the consequences of an idea, which I think is very important, and it also allows the mind to determine when two ideas are contradictory. I think that is something any AGI algorithm is going to have to do, because that right there is kind of the heart of critical rationalist epistemology, in my view: being able to explore the logical consequences of ideas and searching for contradictions, problems, between those ideas. So that’s the basic framework that CTP theory allows for. It also has some things to say about how, once a problem has been identified, it could be solved, though that’s the more active area of research. I think there are some missing details and some errors in my current way of thinking about how the program would actually go about solving problems, and so that’s my current area of research in CTP theory.
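(Aside: a minimal sketch of the kind of machinery Ella is describing, with ideas represented as data, a consequence function, and a direct-contradiction check. CTP theory’s actual representation isn’t specified in the conversation, so every name and rule below is purely illustrative.)

```python
# Illustrative sketch only, not CTP theory's actual representation:
# ideas as claims with a polarity, consequences driven by simple
# if-then rules, and a direct-contradiction check.

from dataclasses import dataclass

@dataclass(frozen=True)
class Idea:
    claim: str
    holds: bool = True  # polarity: the claim is asserted or denied

def contradicts(a: Idea, b: Idea) -> bool:
    # Direct contradiction: same claim, opposite polarity.
    return a.claim == b.claim and a.holds != b.holds

def consequences(idea: Idea, rules: dict) -> list:
    # Explore an idea's consequences via (hypothetical) if-then rules.
    if idea.holds and idea.claim in rules:
        return [Idea(rules[idea.claim])]
    return []

pool = [Idea("it is raining"), Idea("the ground is wet", holds=False)]
rules = {"it is raining": "the ground is wet"}

for held in pool:
    for derived in consequences(held, rules):
        for other in pool:
            if contradicts(derived, other):
                print(f"problem found: {held} implies {derived}, "
                      f"which conflicts with {other}")
```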
[00:01:56] Green: Thank you. Any progress in that vein?
[00:01:58] Red: Yes, I should have a new article coming out relatively soon. I introduced the idea of the article a while ago, which is that I think what CTP theory is lacking, in some sense, is the ability to represent desires in the mind. "Desire" is the term I used when describing this idea a while ago, but I’ve realized it might have too many anthropomorphic implications, and I’m now thinking of them more as requirements. The sense is this: CTP theory right now can find contradictions between ideas, and in order to resolve a contradiction between one idea and another, you could just remove one of the ideas. That’d be a very simple way to solve a problem: just get rid of one of the problematic ideas. But if that’s all there is to it, then the way the mind works would be completely trivial, and a system like that certainly wouldn’t be generally intelligent. And according to basic CTP theory, that’s the only thing that’s fully described: a reason to get rid of ideas. The idea of the requirement system is that it will hopefully provide a way, a reason, a force that makes the mind want to keep ideas around. So on one side there’s the desire not to have any contradictions, and on the other the desire to have requirements be fulfilled, which is the basic idea of the requirement system I’m currently working on.
[00:03:28] Red: I’m hoping that those two forces will be able to balance each other out and be what’s necessary for the more sophisticated cognition that you would see in a true AGI.
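(Aside: the requirement system is unpublished work in progress, so the following is only a guess at the shape of the two balancing forces Ella describes: contradictions push ideas out of the pool, requirements keep them in. All names and the greedy rule are assumptions.)

```python
# Hypothetical sketch of the two "forces" described above, not the
# actual requirement system: contradictions remove ideas from the
# pool, but requirements protect ideas from being removed.

def resolve(pool: set, conflicts: set, requirements: set) -> set:
    """Greedily drop ideas involved in conflicts, but never drop an
    idea that some requirement needs kept around."""
    result = set(pool)
    for pair in conflicts:
        a, b = tuple(pair)
        if a in result and b in result:
            # Prefer removing whichever idea no requirement protects.
            for candidate in (a, b):
                if candidate not in requirements:
                    result.discard(candidate)
                    break
    return result

pool = {"stove is hot", "stove is safe to touch", "I want dinner"}
conflicts = {frozenset({"stove is hot", "stove is safe to touch"})}
requirements = {"stove is hot", "I want dinner"}  # must be retained

print(resolve(pool, conflicts, requirements))
# Prints {'stove is hot', 'I want dinner'}: the conflicting idea that
# no requirement protected was dropped; pure removal was balanced.
```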
[00:03:38] Blue: Right, thank you. Dennis, why don’t you tell us about your book and your own thoughts on AGI at this point.
[00:03:44] Green: Sure. Well, one of the underlying themes of the book is that we need to perform a sort of unification of software engineering and philosophy, because I think there are many problems in both fields that we are going to have a hard time solving unless we perform such a unification. AGI is just one of them, but it is the one I’m most interested in by far. The specific theory that I’m currently entertaining is that at some point in our evolution, something occurred which must have been, I think, very similar to the origin of life on earth. The reason I think that is because, although I should say from the start that there are differences between genetic knowledge and human knowledge, I still think the mechanisms by which they are created and changed are the same. This goes back to Popper, who discovered that there’s a very tight analogy between the evolution of genes and the evolution of human knowledge, because both are evolutionary processes. What I think happens in the human mind right around birth mirrors one of the theories of how life on earth got started. Billions of years ago, when the planet was still forming and the oceans had just cooled down enough, they were highly active chemically, and molecules were forming spontaneously all over the place. Some of those molecules acted as catalysts, and a catalyst is just something in chemistry that can cause a net change somewhere without undergoing any change itself, so it can perform that change again and again. And it just so happened that some of these catalysts created components, molecular components, of which they themselves were made.
[00:05:52] Green: So they were floating in the water, in the primordial soup as it were, and they created these components of which they themselves were made. And if, through lucky circumstances, some of these components rearranged and created ever more of the same components, then over time this process gets more targeted, until it gets targeted enough that you can speak of replication. That’s how the first replicators came on the scene, and I think replication is really the key ingredient of evolution. It’s one of the three key ingredients: replication, mutation, and selection (or variation, instead of mutation). Basically what happened from that point on is that these were targeted enough to be considered replicators, and they instantiated more molecules of the same kind. Occasionally they make mistakes when they do this, so what you get is pockets of the population that look a little bit different from the original. And then, once you get this top-down force, so to speak, acting on this pool of replicators, that’s when you have natural selection happening, and that’s how you get the appearance of design. That’s when it comes on the scene that it looks like it is purpose-driven, even though it is not. At that point you can really speak of these replicators as encoding knowledge, in the sense that they are adapted to replicating. Sometimes, in order to replicate, they incorporate knowledge about their environment, and sometimes, as David Deutsch points out, they are such good replicators that they even incorporate approximations to the laws of physics, aerodynamics for example, where a bird knows how to fly.
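(Aside: the three ingredients Dennis names, replication, mutation, and selection, can be shown in a toy loop. The genome alphabet and fitness function below are arbitrary stand-ins, not a claim about real chemistry.)

```python
# Toy illustration of replication, mutation (variation), and
# selection. Fitness here is an arbitrary stand-in; real replicators
# are selected by their environment, not by an explicit function.

import random

def mutate(genome: str) -> str:
    # Occasionally copy with a mistake (one character changed).
    if random.random() < 0.1:
        i = random.randrange(len(genome))
        return genome[:i] + random.choice("AB") + genome[i + 1:]
    return genome

def fitness(genome: str) -> int:
    return genome.count("A")  # stand-in: more 'A's replicate better

population = ["BBBB"] * 20
for generation in range(100):
    # Replication with occasional mutation...
    offspring = [mutate(g) for g in population for _ in range(2)]
    # ...then selection: fitter variants survive to the next round.
    offspring.sort(key=fitness, reverse=True)
    population = offspring[:20]

print(population[0])  # drifts toward "AAAA": the appearance of design
```

No variant in the loop "aims" at anything; the targeted-looking outcome falls out of blind variation plus selection, which is the appearance of design Dennis mentions.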
[00:07:27] Green: Now, the reason I say all this is because I think something very similar happens, like I said, in a human mind right around birth. When a baby is born, or maybe it still happens in the womb, at some point the brain is more or less fully formed, in the sense that it is a universal computer. And babies are born, just like all organisms, with inborn knowledge, so they will contain ideas about how to, let’s say, breathe, how to chew with their gums, that sort of stuff. I’m actually not sure if those are true examples, but they will have some inborn knowledge, just like animals do; a puppy will have inborn knowledge of how to walk, how to bark, all these sorts of things. But there comes a point where, I think, one of these ideas that people are born with begins to replicate: it creates some of the building blocks of which it itself is made. And once this starts happening, basically the same thing that happened in the primordial soup happens, but it happens in a computer, that is, the brain. Over time, as this idea replicates, it makes mistakes; it morphs, it changes into different ideas. And that is, I think, how we explain how people come up with ideas that aren’t genetically encoded. So I think that creativity as a program, intelligence as a program, is fully genetically given. I differ here from David Deutsch, for example, who says that it’s partly memetic. I think it is fully genetically encoded, but no particular piece of knowledge that is created is genetically encoded.
[00:09:09] Green: So that also means that, for example, evolutionary psychology, which, Bruce, you and I and others have spoken about, I think is not correct. There are some inborn ideas that may determine, or inform I should say, our behavior, but they can easily be overwritten. What’s missing from this theory is that it still doesn’t explain, for example, what consciousness is. I think any good explanation of AGI will contain an explanation of consciousness, because there are good explanations suggesting that a universal explainer is automatically conscious, like that ability just comes along for the ride. But it might be hard to build an AGI without a good explanation of what that is. It might be the case that you could simply focus on the evolutionary part and build this sort of idea pool with self-replicating ideas that then change over time, letting evolution take its course, but I have an inkling that there’s something missing there; there must be another component to this. I write about this, and I talk about it as the meta-algorithm that exerts selection pressure on the evolving idea pool. But yeah, that is the thing I’m working on and thinking about now: what are the missing pieces of the theory, so that you could actually build the thing.
[00:10:27] Blue: Yeah, I think it’s interesting that Ella’s theory basically starts with Popperian epistemology and tries to, and this is my take, so feel free to disagree, Ella, tries to turn that into an algorithm: how do I instantiate Popperian epistemology in an algorithm? Dennis is really starting with, and again this is my take, feel free to disagree, Dennis, starting with neo-Darwinism: how did evolution work, and how could that be used to explain the creation of ideas? Now, obviously there’s a tight tie between Popperian epistemology and neo-Darwinism, but it is interesting how each of those seems to be the starting point for each of your theories. Is that fair, or am I reading too much in?
[00:11:11] Red: I think that that’s a fair description. I can’t speak for Dennis but that sounds fair to me.
[00:11:16] Green: Yeah, I’m obviously hugely inspired by Popper; much of the epistemology that I lay out in the book is inspired by him. But I do think Popper’s epistemology misses the notion of a replicator. I think Ella disagrees with this, but I think that replication is a crucial ingredient of evolution, and I don’t really know why Popper didn’t write about this. I haven’t read everything he wrote, so maybe he did and I just missed it. But I think this idea that there is replication going on in the mind is crucial to understanding how the mind works, and it also allows us to explain other things that are seemingly unrelated, like memory, so-called neuroplasticity, and so forth, so the theory has some reach to explain more things.
[00:12:06] Red: Thank you. I don’t see replication as essential to knowledge creation broadly speaking. As I explained earlier, I take the view of Donald Campbell that any process of blind variation and selective retention has the potential to create knowledge. Now, the neo-Darwinian process of having replicators competing for dominance in a population, that’s certainly one kind of blind variation and selective retention, but I don’t think it’s the only kind, and I don’t think it’s the kind that fits best with what we know from Popperian epistemology. A process in which there are no replicators, where ideas don’t create copies of themselves but simply exist, and their implications are explored and compared to other ideas, and you look for conflicts between them, that is a system which involves blind variation and selective retention, and thus could create knowledge in theory, and which I think has much more in common with Popper.
[00:13:15] Green: And I should also add that it’s very possible that Popper entertained the idea and rejected it, or just didn’t think it was worthwhile, didn’t think there was much to the idea. I shouldn’t just assume that he didn’t think of it. But yeah, I think you summarized the differences between our theories nicely.
[00:13:30] Blue: Thank you. And then, Thatchaphol, I don’t think you have an AGI theory as such, but you’ve been researching related areas. Maybe just talk about anything you’re currently researching; in particular, I’m hoping you’ll bring up algorithmic evolution, which you and I have talked about, and you even gave me a paper you were very excited about recently.
[00:13:53] Orange: Right. So there are some small ideas I’m trying to think about, but let me get to the core thing. One of the most important things I really want to understand is that I feel I don’t understand the Popperian and Deutschian theories well enough, in the following sense: I cannot understand many concepts of their work in the sense that I don’t know how to program them. There are so many concepts: a problem in the sense of a conflict between ideas; solving a problem, which means giving a good explanation, not just an explanation but a good explanation; and how to compare a bad and a good explanation. If you want to program this, what does it mean to decide which one is good and which one is bad? If there is some perfect program for deciding this, what would that program look like? So the point is, I think we can try to make these concepts more mathematical, make them very precise, and then see if they still fit with what Deutsch and Popper wanted them to mean, and compare them. Once they are very precise and mathematical, then you know exactly how to make them into a computer program. So I want to do that kind of thing. And basically, once I have such a definition very precisely, then we can try to prove theorems about it, like whether a universal explainer can really exist with respect to that definition.
[00:16:04] Orange: And then, once we have this set up precisely, the next step I would like to take is to compare it with the main theoretical framework of evolution that people look at, like Leslie Valiant’s model of evolution, which is related to PAC learning, and which we would say is not universal at all. I want to prove that their framework is just a limited process with respect to the Popperian framework, once we make it mathematical, or to say of current genetic algorithms in what precise sense they are not yet universal. I really want to understand this kind of limitation and make the Deutschian and Popperian theory more precise and programmable. That’s the thing, I think.
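(Aside: one way to read Thatchaphol’s goal is as asking for a precise interface like the one below. Every name and scoring rule here is a placeholder assumption, not an agreed formalization of "good explanation.")

```python
# A sketch of the kind of precise interface being asked for; every
# name here is a placeholder, not a settled formalization.

from abc import ABC, abstractmethod

class Explanation(ABC):
    @abstractmethod
    def predicts(self, observation: str) -> bool:
        """Does this explanation account for the observation?"""

    @abstractmethod
    def hard_to_vary(self) -> float:
        """Some precise stand-in for Deutsch's 'hard to vary'
        criterion, e.g. an inverse count of free parameters."""

def better(a: Explanation, b: Explanation, observations: list) -> bool:
    """One candidate ordering: prefer explanations that account for
    more observations, breaking ties by hardness-to-vary. Whether
    this matches what Popper and Deutsch meant is exactly the open
    question raised above."""
    score_a = sum(a.predicts(o) for o in observations)
    score_b = sum(b.predicts(o) for o in observations)
    if score_a != score_b:
        return score_a > score_b
    return a.hard_to_vary() > b.hard_to_vary()
```

Once something like this is pinned down, theorems ("can a universal explainer exist with respect to this definition?") become statements one can actually try to prove.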
[00:17:13] Blue: Yeah, I have similar aspirations on that; I would actually like to see a lot of these ideas become more programmable. Deutsch does talk about how you don’t really understand an idea until you can program it in a computer, and I definitely think that’s a limitation of a lot of the ways we think about both biological evolution and Popperian epistemology today: a lot of the ideas we understand intuitively, but we don’t understand them so well that they can be easily programmed.
[00:17:43] Orange: Yeah, so I kind of feel that trying to understand the theory itself right now could be quite fruitful. I mean, trying to build AGI right now is already nice, but I think we can also try to understand the Popperian theory itself. That’s another direction.
[00:18:10] Red: Yeah, I just wanted to echo that I’m kind of right on the same page with you, Thatchaphol, and with Bruce. Something I’ve been writing about in my new article is this idea that any theory of AGI is implicitly going to be a theory of epistemology, in the sense that it says this program would be able to create knowledge; and any theory of epistemology is at least implicitly a partial theory of AGI, in the sense that it says an AGI would have to work like this in order to be able to create knowledge. So the current task of AGI research, if you’re starting from Popperian epistemology, is to try to nail down the details of Popperian epistemology and make it programmable. Right now we have good descriptions at a high level, a lot of good explanations of what it should look like at a high level, and some low-level details, but what we need to be doing right now, and this is what I’m trying to do with CTP theory, is nail down a lot of the details of Popperian and Deutschian epistemology which aren’t yet specified. And, I’m sure we’re short on time, but just briefly: Thatchaphol, you mentioned that you don’t really see a way to make the notion of a problem, a conflict between ideas, computable; you don’t know how to program that. I agree that it’s a very central thing that needs to be programmed.
[00:19:37] Red: But I actually think that CTP theory, what I’m working on, has a pretty good solution to that; that’s something I think is solved pretty nicely by the theory right now. The theory doesn’t solve everything, but I think that problem specifically is quite well understood in CTP theory. Briefly, the way CTP theory views this is that the mind has a set of ideas, you could call it the idea pool, to steal a term from Dennis. The mind has a built-in way of detecting a direct contradiction between two ideas, which is to say that, just from the computational form of the ideas themselves, the mind has a rule that says this idea directly contradicts this other idea. But importantly, and this is the reason why the system isn’t just trivial, a direct contradiction isn’t the only kind of contradiction. It might be that an idea A has a consequence, idea B, and then that idea B contradicts something else in the mind. In that case, idea A would also be implicated in that contradiction, because it’s what led to B existing in the first place. So in that sense, you could say that A is in an indirect contradiction with something else. That’s a brief look into how CTP theory views contradiction in the mind and how it could be expressed in a computational format.
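(Aside: an illustrative reading of the direct/indirect distinction Ella describes, not CTP theory’s actual machinery: record which held idea each consequence came from, and implicate that parent in any conflict its consequences get into.)

```python
# Illustrative sketch, not CTP theory itself: track provenance of
# derived ideas so that a held idea is blamed for contradictions
# its consequences cause (the "indirect contradiction" case).

def find_implicated(pool: dict, direct_conflicts: set) -> set:
    """pool maps each held idea to its (hypothetical) consequences.
    Returns the held ideas implicated in any contradiction."""
    implicated = set()
    # Provenance: remember which held idea produced each consequence.
    derived = {c: parent for parent, cs in pool.items() for c in cs}
    everything = set(pool) | set(derived)
    for x in everything:
        for y in everything:
            if frozenset({x, y}) in direct_conflicts:
                # Held ideas directly involved are implicated...
                implicated.update({x, y} & set(pool))
                # ...and so is whatever held idea derived them.
                implicated.update(derived[z] for z in (x, y)
                                  if z in derived)
    return implicated

pool = {"A": ["B"], "C": []}             # held idea A has consequence B
conflicts = {frozenset({"B", "C"})}      # B directly contradicts C
print(find_implicated(pool, conflicts))  # {'A', 'C'}: C directly,
                                         # A indirectly via B
```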
[00:20:56] Blue: I actually wanted to explain one thing that Thatchaphol brought up but that maybe not everybody would understand. He mentioned Leslie Valiant, and he mentioned a couple of times PAC learning. PAC stands for "probably approximately correct," and it’s the theoretical basis for most of what’s in machine learning today. Leslie Valiant, in his book Probably Approximately Correct, also raised the point that, just as Ella and Thatchaphol have been saying that we don’t really know how to pin down Popperian epistemology as an algorithm, we also don’t know how to pin down biological evolution as an algorithm, or that’s his claim in any case. Algorithmic evolution is the study of how to try to pin that down into an algorithm. Thatchaphol sent me a paper recently that made some interesting advances in that area; unfortunately, a lot of it went over my head, and I would like to ask questions about it, not on the podcast but separately. But it’s a related line of study: just as you can study how to make Popper’s ideas more computable, you can also study how to make the theory of evolution more computable. Specifically, what Leslie Valiant is looking for is an algorithm that is tractable, because obviously the process has to be tractable for it to work, one that actually comes up with new adaptations the way nature does, obviously over billions of years, but we would consider that tractable in this case.
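(Aside: for reference, Valiant’s PAC criterion can be stated precisely. A concept class $\mathcal{C}$ is PAC-learnable if there is an algorithm $A$ such that for every target $c \in \mathcal{C}$, every distribution $D$, and all $\varepsilon, \delta \in (0, 1/2)$, given $\mathrm{poly}(1/\varepsilon, 1/\delta)$ examples drawn from $D$, $A$ outputs a hypothesis $h$ with

$$\Pr\big[\operatorname{err}_D(h) \le \varepsilon\big] \ge 1 - \delta, \qquad \operatorname{err}_D(h) = \Pr_{x \sim D}\big[h(x) \ne c(x)\big].$$

That is, the output is "approximately correct," with error at most $\varepsilon$, and "probably" so, with probability at least $1 - \delta$.)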
[00:22:37] Blue: So, and I’m actually very fascinated with a lot of your ideas on this tattrapole I would really like to share more ideas with
[00:22:45] Orange: I can talk a bit about it if you want. So, Leslie Valiant actually formalized some notion of evolution, and it goes like this. It’s a process where, in each step, you replicate yourself with variation into many copies of yourself, but each of the variants may not have the same fitness. The ones with better fitness will survive to the next iteration with higher probability, and the process keeps going like this. He proved that within this framework, with this specific algorithm, there are many types of functions or theories that this process can evolve toward, or converge to, to arrive at some explanation. But the class of explanations or functions it can reach is actually quite limited. And why is that? The high-level reason, I would say, is that the notion of fitness is fixed. That is, the selection process is just fixed to be something, and it never changes. This is really different from the Popperian or Deutschian way that we understand how knowledge progresses, because for us it’s very important that the selection process also evolves; that is, we get better and better ways to select among the variations of an idea. And the new paper, the paper that I sent to Bruce, actually has some of this flavor.
[00:25:08] Orange: It studies this Valiant evolution process, but it asks: what if the way you select your children, the next generation of the population, the selection process, can actually change with respect to what they call the ecology? That is, the selection process can depend on the current set of the idea pool, or the current population. If the selection process can depend on this, then the selection process evolves in some sense. In the paper, the way the selection process evolves is really very limited. But he actually shows that just by allowing this, even though his way of evolving the selection process is very limited, the class of theories that you can get to is significantly widened. So this is very interesting to me. It’s people trying to do something formal and mathematical that has some flavor similar to Popperian philosophy, so yeah, I like that paper.
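(Aside: a toy version of the contrast Thatchaphol draws, not Valiant’s actual model or the paper’s: the same variation loop run with a fixed fitness function versus a selection rule that depends on the current population. The frequency-dependent rule below is only a stand-in for "ecology.")

```python
# Toy contrast: Valiant-style evolution with a fixed fitness
# function versus selection that depends on the current population.

import random
from typing import Callable

def mutate(g: str) -> str:
    # Copy with a single-character variation.
    i = random.randrange(len(g))
    return g[:i] + random.choice("AB") + g[i + 1:]

def evolve(select: Callable) -> str:
    population = ["B" * 8] * 10
    for _ in range(200):
        # Replication with variation.
        variants = [mutate(g) for g in population for _ in range(3)]
        # Selection; the rule may itself depend on the population.
        fit = select(variants)
        variants.sort(key=fit, reverse=True)
        population = variants[:10]
    return population[0]

def fixed(pop: list) -> Callable:
    # Fixed fitness: the selection criterion never changes.
    return lambda g: g.count("A")

def ecology(pop: list) -> Callable:
    # Population-dependent selection: rare variants win, so the
    # pressure itself shifts as the pool evolves.
    return lambda g: -pop.count(g)

print(evolve(fixed))    # tends to converge on "AAAAAAAA"
print(evolve(ecology))  # no fixed target; outcomes stay diverse
```

The point of the contrast is only that letting selection depend on the population makes the pressure itself a moving target, which is the ingredient the paper shows widens what can evolve.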
[00:26:36] Blue: Thank you for that explanation. So basically, he came up with a way to add ecology into the evolutionary process, which then allowed it to explore more paths and therefore solve certain types of problems that the previous Leslie Valiant approach to algorithmic evolution wasn’t able to solve.
[00:26:55] Orange: Yeah.
[00:26:55] Blue: A very fascinating paper. The math is a struggle for me; I probably need to read it a few more times and try to tease that out. But yeah, it was a fascinating paper, and I can see exactly why you were so interested in it. All right, we’ve probably gone way over what we would normally do for a podcast episode, so I just want to say thank you to our guests for coming and joining us. Maybe we can invite you back some other time for another episode. But you’ve all been wonderful, you’ve shared great ideas, and I really appreciate you coming. Thank you, everybody.
[00:27:30] Green: That was fun.
[00:27:31] Blue: Thank you.
[00:27:31] Red: Yeah, it was great to be here.
[00:28:03] Blue: Some players have their own rating system, and giving us a five-star rating on any rating system would be helpful. If you enjoy a particular episode, please consider tweeting about us or linking to us on Facebook or other social media to help get the word out. If you are interested in financially supporting the podcast, we have two ways to do that. The first is via our podcast host site, Anchor: just go to anchor.fm/four-strands (f-o-u-r, dash, s-t-r-a-n-d-s). There’s a support button available that allows you to make recurring donations. If you want to make a one-time donation, go to our blog, which is fourstrands.org; there is a donation button there that uses PayPal. Thank you.