Episode 13: Objections to Artificial General Intelligence
- Links to this episode: Spotify / Apple Podcasts
- This transcript was generated with AI using PodcastTranscriptor.
- Unofficial AI-generated transcripts. These may contain mistakes. Please check against the actual podcast.
- Speakers are denoted as color names.
Transcript
[00:00:10] Blue: I just did a blog post not long ago, so I wanted to see if anybody wanted to pipe up on this. And the blog post wasn’t the greatest. I didn’t explain myself very well. But I asked if there can be multiple levels of universality. What I was really asking about, though, was whether there could be multiple levels of computational universality, and that was the part I didn’t explain very well. And this actually relates to what Dennis just said. He said that physics is computational. Well, I mean, I’ve read that in David Deutsch, but I’ve never actually studied physics. In fact - Sorry,
[00:00:45] Red: can I just, I don’t mean to interrupt but I wonder if I would turn that around. Okay. I would say computation is physical. I don’t know if physics is computational. Okay. Because then you get into the realm of people claiming that the universe is just a simulation.
[00:01:01] Blue: Right, fair point, fair point. In our last podcast that Camion and I did, and I also put this up as a blog post, I pointed out that David Deutsch had actually raised the issue that maybe when we find a new theory of physics, a theory of quantum gravity, it seems possible we may be able to build oracle machines, which would be machines that could solve the halting problem. At least they could solve it for Turing machines, not for themselves. That would actually extend what type of computation could be done using just the laws of physics, which would be an example of what Dennis just said: since we currently define something to be computational as matching a Turing machine, it would mean that physics can do something that isn’t computational in that sense. We could then turn around and build a machine out of that, and we would have a new type of computer. So I was asking, does that mean that there are multiple levels of universality of computation? And then, in general, are there multiple levels of universality? That would be kind of the sub-question. Does anyone want to take a stab at that? Yeah, go ahead, Dachmar.
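A sketch of the diagonal argument behind the halting problem may help here (this is standard textbook material, not something stated in the episode; all function names are illustrative). It shows why no Turing machine can decide halting for all Turing machines, and why the same argument applies to an oracle machine trying to decide halting for machines of its own kind:

```python
# Sketch of the classic diagonalization: any claimed halting decider can
# be defeated by a program built from it. Names here are illustrative.

def make_diagonal(claims_to_halt):
    """Given a candidate decider, build the program that defeats it.

    claims_to_halt(prog) is supposed to return True iff prog() halts.
    """
    def diagonal():
        if claims_to_halt(diagonal):
            while True:      # decider said we halt, so loop forever
                pass
        # decider said we loop forever, so halt immediately
    return diagonal

# A trivial candidate decider that claims nothing ever halts is
# immediately contradicted:
never_halts = lambda prog: False
d = make_diagonal(never_halts)
d()  # returns at once, i.e. it halts, contradicting the decider
```

The construction works against any decider built from the same kind of machine as `diagonal`, which is why an oracle for Turing-machine halting extends computation without deciding halting for oracle machines themselves.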
[00:02:07] Green: I just want to mention that definitely there can be many levels of universality, because universality is always with respect to some domain, and there can be some domain that is contained inside another domain, which is contained inside yet another domain. In each of those, you can have something universal with respect to that domain. So domain A is inside domain B, and you have some universal process for A, but then there can be a universal process for B, and the one for A is contained in the one for B. So just abstractly, there can definitely be many levels of universality, or even incomparable universalities.
[00:03:04] Blue: In fact, something like that. Deutsch actually says that universality can come in hierarchies, which is what you just described. Yeah, and
[00:03:10] Orange: I think I’ll try to give a more down-to-earth example, rather than leaving this at an abstract level, and return to the earlier example about the movable type press. The Gutenberg machine, with all the different letters that you can rearrange, is universal for printing books with Arabic characters - sorry, not Arabic characters, I was thinking of Arabic numerals - with the letters used in English. But if you want to print a book in Chinese, a Gutenberg press designed for English isn’t going to help you. Now you could have a machine that would be able to print any book written in any language, which is to say it would be capable of representing any character, and a modern printer that can just print arbitrary shapes with ink is an example of such a machine. So you can think of a modern printer as being universal for printing any book or any text, whereas a Gutenberg press would be universal for printing texts that use English characters. And so that would be an example of a scenario where there’s a domain within another domain, the first domain being books with English characters and the second domain being the broader notion of books written in any character set.
[00:04:34] Blue: So back in my day, we had printers that were like typewriters, where they just had letters on them, and then they switched to dot matrix, where they could form anything that fit within that dot matrix. So really, I never thought of this before, but you’re right, that was actually a jump from one form of universality, being able to print English documents with words, to being able to print any language, and even beyond that, to being able to print graphics, right? Because now you’re - Exactly. Right, okay, that’s an excellent example. Thank you. Okay, so now let’s tie this back into AGI. So - Actually, I can add one more thing
[00:05:12] Green: because this is related to universality of computation. When we talk about a Turing machine, a Turing machine is just something called a finite automaton equipped with a tape, a long tape, basically a memory. And if you throw away this memory, the long tape, then you only have a finite automaton. And although this is not a machine that is universal for all computation anymore, it is still universal with respect to some smaller class, actually, and that is something that you are familiar with, which is regular expressions. Any regular expression can be decided by a finite automaton. So finite automata are universal with respect to this smaller class. And just to point that out.
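Green’s point can be made concrete in a few lines of code: a finite automaton has only a fixed set of states and no tape, yet that is enough to decide a regular language. This is a minimal sketch; the language a*b+ (zero or more a’s followed by one or more b’s) is chosen purely for illustration:

```python
# A finite automaton deciding the regular language a*b+ using only a
# fixed, finite set of states and no tape/memory. Once the input
# violates the pattern, the machine falls into the DEAD state and stays.

def matches_a_star_b_plus(s):
    state = "A"  # A: still reading a's; B: reading b's; DEAD: rejected
    for ch in s:
        if state == "A" and ch == "a":
            state = "A"
        elif state in ("A", "B") and ch == "b":
            state = "B"
        else:
            state = "DEAD"
    return state == "B"  # accept only if we ended in the b-reading state

print(matches_a_star_b_plus("aaabb"))  # True
print(matches_a_star_b_plus("aba"))    # False
```

Anything that requires unbounded counting, such as matching balanced parentheses, is beyond any finite automaton; that gap is exactly the jump in universality that the tape provides.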
[00:06:19] Blue: All right, excellent. In fact, we actually talked about that a couple of podcasts ago. So that was a good - Oh,
[00:06:24] Green: okay, okay.
[00:06:25] Blue: Sorry. So, you just explained how that ties in. That’s exactly what I was looking for. I guess I hadn’t actually thought of it that way, but you’re right. Regular expressions are a type of universality, but they’re not universal for all algorithms, right? It’s - Exactly. Okay, makes sense. Okay, so now what does this have to do with AGI? So, David Deutsch has this term he calls universal explainers. Can somebody maybe explain - what is David Deutsch’s concept of a universal explainer, how does it relate to AGI, and why is it important in terms of trying to understand what AGI is going to turn out to be?
[00:07:06] Orange: Sure. So I can take a stab at defining a universal explainer. So the notion of a universal explainer is essentially an entity, you can think of it as a mind that is capable of explaining anything that is explicable. So the conjecture is that we humans are like this, that if there is anything that can possibly be explained about our universe, about mathematics, et cetera, we are a type of entity which can create that explanation. So we are universal in the domain of creating explanations. Any explanation which can possibly be created by anything can be created by a universal explainer. And so this is sort of the way of looking at really the idea of general intelligence. One way to think about general intelligence is that it can create any knowledge and sort of this alternate way of looking at it, the universal explainer way is that it is capable of coming up with any explanation that can possibly be created.
[00:08:03] Blue: Okay, excellent. Anyone else want to add anything on that? I think that was a good explanation. So in essence, then, a universal explainer is an AGI - or rather, the A in AGI stands for artificial, so human beings would not be artificial general intelligences. That’s obviously an arbitrary distinction that doesn’t really mean much, but we are general intelligences. We are universal explainers, and an AGI would be creating a person on a computer, not in biology, that is also a universal explainer. Is that what you’re getting at, Ella? Yeah, that’s right. Okay, so now Steven Pinker has made the argument - many of you may not even know this, but he actually made the argument - that science has things that it cannot explain, that it’s like the integers, he said, or a lower level of universality, and that some things are inexplicable. What do you think of his position on that, I guess, would be my question. And would that do any damage to the idea of an AGI if it were true?
[00:09:09] Orange: So it’s certainly conceivable - you can imagine that human beings are not universal explainers, that there are things about the world that we just simply can’t explain. But in David Deutsch’s terminology, that would be an easy-to-vary explanation, which is to say there’s just not really much detail to it. It’s just sort of saying, well, there is something out there which is inexplicable, and that’s just not a very good theory because, first of all, it’s unfalsifiable. You could just keep saying, well, there’s something out there which is inexplicable. But if you said: this particular phenomenon, no human will ever explain it, this specifically is inexplicable - that could be a hard-to-vary explanation. That could be something worth looking into. But the basic idea of just, well, perhaps there are things out there which are inexplicable to us - there’s just not much content there. It’s not really worth thinking about. And as of yet, I don’t think anybody has been able to make a good argument that any particular phenomenon, any conjecture, anything in mathematics is just fundamentally not understandable by humans.
[00:10:16] Blue: Okay, so to paraphrase you then, we can conceive the idea of a partial universal explainer or a universal explainer that only explains some things, but we have no good reason to believe that. That would just be a bad explanation at this point. Therefore, the better explanation is that we can explain everything. Is that what you’re getting at?
[00:10:36] Orange: Right, yeah. I mean, it’s certainly a valid conjecture. It could be true, but unless there’s some particular theory, a detailed theory, that says this is inexplicable, there’s just not really much to say about it. It’s just kind of a vague, easy-to-vary suggestion, which just isn’t really worth taking seriously, is sort of the way that I think about it. Okay.
[00:10:58] Red: Sorry - somebody asked David a similar question years ago when he was on a Boston radio show. I have the link I can share later; I forget the name of the program. Somebody asked him, well, how do you square the notion that we’re universal explainers with the fact that we know there are unknowable truths? And I’m out of my wheelhouse there too, but if I understand correctly, because of Gödel’s incompleteness theorem, there are things about mathematics we can’t know, or can’t ever prove to be true or prove to be untrue, if I’m not totally butchering it.
[00:11:35] Blue: You’re correct, can’t prove to be true.
[00:11:37] Red: And so David said, well, but you could still write a paper about why you conjecture that it’s true or why you conjecture that it’s untrue. So in terms of the knowledge that a person can create, you’ve still made progress there. And I would also add that even though there may be things that are in principle unknowable, that doesn’t necessarily mean that the set of all knowable things is finite. So there’s still infinite room for progress to be made and for knowledge to be created.
[00:12:12] Blue: Okay, thank you. In fact, I actually have a list of objections that I’ve heard in the past to the creation of AGI, and you just nailed one of them, Dennis, which was Gödel’s incompleteness theorem, okay? Let’s actually talk about some of the other ones that I’ve heard. I’m going to play devil’s advocate and argue with you guys - I’ll play the role of the doubter who thinks each of these would stop AGI from being real, and you explain to me why these are bad arguments, okay? So one of them was Gödel’s incompleteness theorem, and I actually think Dennis did a good job on that one. Another one I’ve heard is: well, what about four-dimensional space, more than three dimensions? I can’t comprehend that at all. Therefore, there’s something there - non-Euclidean geometry exists, we know it exists through Einstein’s theories. Doesn’t that show that we’re not really universal, that we can’t comprehend everything? In fact, I think Pinker used this example. Well,
[00:13:09] Red: so I’m not sure that we don’t understand four-dimensional space, because I know that you can have equations with four-dimensional vectors, with n-dimensional vectors, and they work the same for four dimensions and ten dimensions as they do for three dimensions or two dimensions. That strikes me as a different sort of argument: we can’t perceive four dimensions. Like when I look out at the world - I’m looking out my window now - I see three dimensions. But that strikes me as an issue of hardware, maybe. I don’t really know. I mean, maybe if we had different senses, we could see it, but that doesn’t place any theoretical limit on what we could know. That strikes me as a parochial, soluble problem. Maybe we could create knowledge that allows us to build additional hardware that we could connect to ourselves that would then allow us to see the world differently. So yeah, I’m not sure I agree that we don’t understand four-dimensional space, or four dimensions, or whatever you said.
[00:14:13] Green: think to answer this, our sense that we have been evolved until now, the sense is not universal. This five, the sense that we have are not universal.
[00:14:27] Blue: This is the senses, right?
[00:14:28] Green: But our access to explanation is still universal. It’s true that our senses are quite rich - when we understand something through our senses, it is so rich that we feel we really understand it in many aspects. But the fact that we can access four-dimensional space through explanation, even if it’s maybe not as rich as looking at something with your eyes, is still a kind of comprehension, and this kind of comprehension through explanation is infinite, and we conjecture that it is universal.
[00:15:13] Blue: Okay, thank you. Excellent. Both of you gave great answers, by the way. And I actually, I agree with what you’re saying, Dennis. I actually think that this is a misunderstanding between what we mean by comprehend versus what we can kind of envision in terms of senses, right?
[00:15:30] Red: Yeah, there’s something - can I elaborate? I don’t mean to prolong this, but if I may, there’s something to be said even about the understanding of three-dimensional space. It’s not that our senses give us immediate access to that understanding. In my book, I give this example from the Oxford Companion to the Mind, where there’s a really fascinating account of a boy who was born blind with cataracts. Through surgery, he was then able to see as a teenager. And he really had problems understanding three dimensions, and it took him a long time. To him, everything was just on a plane, and he wouldn’t understand, at least not immediately, the difference between a photograph or a picture and just looking out at the world. So even if your senses in principle give you access to, let’s say, the three-dimensional space out there, by no means does that mean that you will simply comprehend three-dimensional space. That has to come from inside you. This is a theory about the world that children come up with - amazingly, really - that their senses don’t give them. So the senses play a very minimal role there. The senses just provide data, but the theory that the world is three-dimensional, that space is three-dimensional, is a conjecture that minds come up with that is not at all ingested through the senses.
[00:16:58] Blue: I’ll go one better than you on that one, Dennis. I actually know people who like work in mathematics or physics or something that have told me, oh, actually I’ve started to comprehend things in four dimensional space because I’ve worked with it so much.
[00:17:11] Red: Wonderful, yep.
[00:17:13] Orange: That’s what I was gonna say, actually. I was hoping to say a little something just to put a cap on this, since we’ve been spending a bit of time on it. Basically, what I was going to say is: I would guess that if you took a human brain, or something computationally equivalent to a human brain, and put it into a four-dimensional universe, then, with appropriate senses perhaps, it would learn to have an intuition about how four-dimensional space works just as well as we do with three-dimensional space. And then perhaps it would go on to say, oh, well, five-dimensional space, that’s just completely unexplainable - you could never possibly explain that. And I think that would again be a mistake; a mind in the fifth dimension would work just as well. I think what’s going on here, and what’s misleading Pinker and others who think this way, is that we just have a ton of intuition and inexplicit knowledge in our minds about how 3D space works. So it’s just very intuitive to think about 3D space, whereas it’s not intuitive to think about 4D space for most people. But people who have spent enough of their lives doing four-dimensional mathematics do develop somewhat of an intuition for what the fourth dimension would be like. And so I don’t think that there is any fundamental limitation there. It’s just easier to think about what we’re used to.
[00:18:36] Blue: Thank you. Let me use one more objection that I’ve heard that I feel like probably needs a bit of a rebuttal. So I’ve heard this idea that, hey, we’ve moved from metaphor to metaphor about brains in the past. At one point we thought it was like a steam engine. At one point I actually had a neuroscientist recently say this to me, right? I was talking with her about my interest in building AGI. She said, well, it’s probably not correct to think of the brain as a computer. That’s a metaphor, but we’ve had other metaphors in the past. We thought of it as a steam engine. We thought of it as a clockwork mechanism. Based on whatever our current technology is, we liken the brain to that. But in reality, the brain isn’t a computer. It’s no more a computer than it is a steam engine. What do you guys say to that?
[00:19:24] Orange: Yeah, so I think that the core misunderstanding there is that computational theory isn’t really just a theory about these machines that we built that we call computers. It’s not just about how plates of silicon interact with one another. The theory of computation, as it was described by Alan Turing and later computer scientists, is a theory about what kinds of information processing are possible in the universe and which are not. And the computers that we have, the physical objects which we call computers, are instantiations of that theory that allow us to do information processing. But I think that some people, when they hear, oh, well, the brain is a computer, think it’s just a metaphor. They’re like, oh, you’re saying my brain is like this laptop sitting on my desk. And if that’s all you’re thinking of, then it’s understandable why you would say, oh, that’s just silly - you’re essentially calling the brain a steam engine, or you would have 100 years ago. But really, what’s being said when somebody says the brain is a computer is a precise statement about computational theory. It’s saying the brain is capable of performing or simulating any computational process that anything in the universe could perform - that is what it means to say that the brain is a universal computer. And so it’s not really a metaphor being used here. It’s a rigorous mathematical statement to say that the brain is a computer. And so I think that people who just view it as a metaphor along the lines of, you know, the mind is a steam engine, are just not really understanding the full argument.
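The non-metaphorical sense of the claim can be illustrated with a toy sketch (this is my own illustration, not anything described in the episode; the three-instruction machine is invented for the example). Computational universality means one machine can run programs written for a different machine - here, a few lines of Python simulate a simple register machine:

```python
# A toy register machine simulated in Python: an illustration of how a
# universal computer can run programs written for a different machine.
# The instruction set (inc, dec, jz) is invented for this sketch.

def run(program, registers):
    """Interpret (op, *args) tuples until the program counter runs off
    the end of the program; return the final register contents."""
    pc = 0
    while pc < len(program):
        op, *args = program[pc]
        if op == "inc":          # increment a register
            registers[args[0]] += 1
            pc += 1
        elif op == "dec":        # decrement a register
            registers[args[0]] -= 1
            pc += 1
        elif op == "jz":         # jump to target if register is zero
            reg, target = args
            pc = target if registers[reg] == 0 else pc + 1
    return registers

# Compute r0 = r0 + r1 by moving r1 into r0 one unit at a time
# (the register named "zero" stays 0, making the last jz unconditional):
add_prog = [
    ("jz", "r1", 4),    # 0: if r1 == 0, we're done (jump past the end)
    ("dec", "r1"),      # 1
    ("inc", "r0"),      # 2
    ("jz", "zero", 0),  # 3: jump back to the top
]
print(run(add_prog, {"r0": 2, "r1": 3, "zero": 0})["r0"])  # prints 5
```

The interpreter is itself running on silicon, which could in turn be simulated with pencil and paper; the claim about the brain is that it sits somewhere in this same equivalence class of universal machines.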
[00:20:57] Blue: All right. Thank you. Anyone want to add to that?
[00:20:59] Green: So I think here, when we say the brain is a computer, ‘computer’ is an abstraction, and the brain is an instantiation of that abstraction. So it’s not a metaphor likening the brain to some other object, because we are talking about an abstraction now.
[00:21:19] Blue: All right. Thank you. Anyone else?
[00:21:22] Purple: So I’m going to speak up for just a minute, as somebody who doesn’t know a lot about, you know, all the theories around artificial general intelligence. One of the things that is super interesting to me is that we always look at our intelligence as a very aspirational thing, a place we want our computers to get to if we’re going to be able to build this general intelligence. But humans are kind of crappy as computational machines. Sometimes we’re not great at gathering data - you know, these examples we were just talking through of our perceptions of 3D space. Our data-gathering systems are awful. They’re easily tricked. Our senses are very fallible. Our ability to reason through things without being swayed by our biases is poor. We’re just kind of crappy at that aspirational idea of a computer and what we kind of think it should be. So it’s really interesting to me that - I don’t know, those are my musings listening to all of this.
[00:22:26] Blue: Great musing. Does anyone want to talk to that? I actually think there’s a lot of things that could be said about that.
[00:22:31] Red: Yeah, something came to mind for me when you said that, and that was what Turing said about this issue. He distinguishes between what he calls errors of functioning and errors of conclusion. So what you’re describing - saying, for example, that humans have certain biases or that our senses are fallible - I think that is true. But I think it is a fruitful approach to distinguish between these two different kinds of errors that Turing suggests. So there is a program in the human brain that makes up a person. And that program still runs, you know, in scare quotes, mindlessly, step by step, without any room for deviation from the instructions. Maybe this has some interesting implications for free will - I don’t think this conflicts with free will, but there are other issues there that would be interesting to discuss. So there’s still a program that just executes, executes, executes stubbornly, step by step, just like a computer would. And so there are no errors of functioning necessarily happening there - unless maybe you have some brain damage or a disease, then maybe errors of functioning can occur. Errors of functioning, if I don’t misrepresent Turing here, if I remember correctly, would be: oh, your computer - there was an issue in the processor or something; it took a wrong step. But errors of conclusion are the kinds of things that strike me as what you described - you know, biases in our thinking or in our senses, where we have this program that is us, which as a person has created, let’s say, a piece of knowledge, and that piece of knowledge later turns out to be erroneous in some sense.
[00:24:15] Red: Well, that doesn’t mean that this piece of knowledge couldn’t have been created by a program. That’s not necessarily what you were saying, but it, but what I’m, what I’m trying to draw attention to is the fact that just because people make mistakes, just because they’re fallible, that is still, we can still square that with the notion that the brain is a computer, if that makes sense, because those are different kinds of, those are different categories of error.
[00:24:39] Purple: Yeah, that actually makes complete sense.
[00:24:41] Orange: Wonderful. Yeah, I’d sort of like to say something on the subject too, which is that there’s a certain sense in which I completely agree with you in that the human brain is not a very, you know, good aspirational goal in that it’s surely the case that the human brain isn’t a very well -designed computer. You know, it could probably be made much more efficient. Our senses could probably be made much more accurate. You know, if you try to do a calculation of the speed or the memory of the brain, then I suspect it wouldn’t be anything too amazing. And so it’s not that we’re aiming to, it’s not really that we’re aiming at sort of the hardware or the sort of peculiarities of the way the human brain works as an aspiration. What’s aspirational about the human brain or the human mind is just that somewhere in it there is the universality, the universal explainer. That is at least a part of our mind or our brain. And that is the thing that’s very, very special and that epistemology and AGI are still trying to figure out. And that’s sort of the piece of the brain or the minds that we are sort of aspiring to recreate. But it would certainly be nice to, you know, be able to recreate that little piece of the mind, the universal explainer parts with much better hardware and, you know, much faster processing speed, much more memory. And I think that that will certainly be, you know, very, very good once that happens. And so, you know, the hardware of the brain isn’t particularly aspirational, but it’s just that it is running a program that is very aspirational is the way that I think about it.
[00:26:14] Purple: So I want to ask a couple of questions about hardware - not just of the brain, but of the body. You know, when you start to talk about artificial general intelligence, how much of the processing power in our brain, or even our holistic ability to comprehend things, is actually impacted by the hardware - not just the brain, but the physical manifestation of us? Can you actually build a general intelligence that doesn’t have the ability to have senses and ever have it be wholly intelligent?
[00:26:55] Orange: Well, I can start to answer this. I suspect that you could have, you know, an intelligence running, you know, with no sensory input. So you can imagine a human being and a perfect sensory deprivation tank. They would still continue to have thoughts, you know, even if they were born in that and kept in there their whole life, you know, you could imagine them still coming up with thoughts, you know, thinking about things, perhaps in a form unrecognizable to us, but there would still be intelligence there. But I think to be able to recognize intelligence, you know, if we have some candidate program and we’re trying to see if it is intelligent or not, that’s not going to be, you know, the right way to test it because it’ll be hard to tell in that case. We’re going to want to sort of instantiate the AGI in a body of some sort with some sort of sensory organs or some sort of, and some sort of, you know, appendages for manipulating the outside world or some way of, you know, broadly speaking just some way of having inputs and providing outputs. And that body could either be, you know, something in a real world. You could put it in a little robot or it could be, you know, something simulated in a simulated environment. You have it, you know, perhaps can move around, interact with simulated objects. So I do suspect that in order to really, you know, get the benefits of intelligence and to be able to recognize intelligence, you’ll want something not just, you know, an abstract intelligence, but something embedded in an environment where it can interact and take input and give output.
[00:28:27] Blue: The Theory of Anything podcast could use your help. We have a small but loyal audience, and we’d like to get the word out about the podcast so others can enjoy it as well. To the best of our knowledge, we’re the only podcast that covers all four strands of David Deutsch’s philosophy as well as other interesting subjects. If you’re enjoying this podcast, please give us a five-star rating on Apple Podcasts. This can usually be done right inside your podcast player, or you can Google ‘The Theory of Anything podcast Apple’ or something like that. Some players have their own rating system, and giving us a five-star rating on any rating system would be helpful. If you enjoy a particular episode, please consider tweeting about us or linking to us on Facebook or other social media to help get the word out. If you are interested in financially supporting the podcast, we have two ways to do that. The first is via our podcast host site, Anchor: just go to anchor.fm/fourstrands (f-o-u-r-s-t-r-a-n-d-s). There’s a support button available that allows you to make recurring donations. If you want to make a one-time donation, go to our blog, which is 4strands.org. There is a donation button there that uses PayPal. Thank you.