Episode 58: Deutsch’s “Creative Blocks”: A Decade Later
- Links to this episode: Spotify / Apple Podcasts
- This transcript was generated with AI using PodcastTranscriptor.
- Unofficial AI-generated transcripts. These may contain mistakes. Please check against the actual podcast.
- Speakers are denoted as color names.
Transcript
[00:00:11] Blue: Welcome to The Theory of Anything podcast. Hey guys. Hi Peter. Hi Bruce. A number of years ago David Deutsch wrote an article called "Creative Blocks," or the title in the URL is "How close are we to creating artificial intelligence?" He's written a number of articles like this. There's another really good one that was inside of, what was the name of that book? You sent that to me, Peter.
[00:00:38] Red: The book I sent you. Yeah.
[00:00:46] Blue: Possible Minds. Yes. Oh, he also wrote an article in Possible Minds. We'll be covering that one today.
[00:00:53] Red: Yeah,
[00:00:54] Blue: But we wanted to kind of revisit that article now. I have to say, those two articles are really important to me. They were a lot of the turning point in me wanting to study artificial general intelligence. I actually went back to school to start to study artificial intelligence, which may seem like a weird choice if you read this article, since he kind of points out that AGI and AI aren't the same thing. But it led to a number of changes to my career path, as many as I could make at this point in my life. And so these are kind of important articles to me, because I found them quite inspiring. Having now studied this subject for a number of years, I actually have some criticisms of the articles too, places where I feel like he's not quite right on a few things. So I thought it would be a really interesting idea, and this was Peter's idea, to go back and revisit these articles and talk about them: talk about why they inspired me to go back to school to study artificial intelligence, especially since they kind of attack artificial intelligence, and, well, how do I see it now, based on having studied this more deeply? So I felt like that was an interesting subject idea. So that's what we're going to do today. So, this article, Creative Blocks: David Deutsch, in his books, developed a number of theories that are interconnected, based on what we call the four strands. The four strands would be: many-worlds quantum physics, computational theory,
[00:02:46] Blue: Darwin's theory of evolution, really the neo-Darwinian theory of evolution, the modern synthesis, obviously not necessarily the original version from Darwin, which we've long since advanced beyond. And then Karl Popper's theory of knowledge, or philosophy of science, which is critical rationalism. So he developed a number of interesting ideas that came out of those, and that inspired a lot of this podcast; tons of subjects on this podcast come from me researching various things that Deutsch said. And in a lot of cases, particularly on the four strands, I completely agree with him. I was surprised, for instance, when I spent a number of years trying to study quantum physics and came to the realization: oh, he's actually right that we only have one explanation of quantum physics today, and it's many worlds. Now, does that mean many worlds is true? That's not the way critical rationalism works, right? It just means that that's the only explanation we've been able to think up so far, so right now it's currently our best explanation. But I was surprised at that, because it's well known there are supposed to be these other competing interpretations of quantum physics. So when I actually dug into them and found out that they did not pass even the most basic criteria for being considered good explanations under critical rationalism, I was shocked. The only one that actually passed was many-worlds quantum physics, and I was really surprised when that happened. It actually bothered me, because I don't particularly like many-worlds quantum physics.
[00:04:26] Blue: But the deeper I got into it, the more I could see that the four strands, those four theories, really are some of our most powerful theories, and that when they're combined together, they become even more powerful explanations. So I ended up being completely in Deutsch's camp on the four strands. Now, if you've been listening to the podcast, I've criticized a whole bunch of ideas that have come from David Deutsch, so trying to figure out what the implications of the four strands are is not the easiest thing in the world. And just because you accept all four of the strands as being basic, and our most important scientific and philosophical theories, doesn't mean that we're all going to agree on what those implications are, and that's where a lot of the room for criticism can come in. We always have errors and mistakes in our theories, and even when we accept them, we have errors in our understandings of them, or errors in how we apply them. And so there is still quite a bit of room for disagreement with each other over the implications of theories, and we use criticism to then try to work through where we have misunderstandings, and try to get to a better understanding and an improved theory. Now, one of the things that David Deutsch really emphasizes in the Creative Blocks article, and probably the thing that I still most strongly agree with him on, like really totally agree with him on, is the nature of computational universality. Now, I've done numerous episodes on that.
[00:06:05] Blue: I think when he talks about this, he says, and I'll just quote from the article: "Despite this long record of failure, AGI must be possible. And it's because of a deep property of the laws of physics, namely the universality of computation. This entails that everything that the laws of physics require a physical object to do can, in principle, be emulated in arbitrarily fine detail by some program on a general-purpose computer, provided it is given enough time and memory. The first people to guess this and to grapple with its ramifications were the 19th-century mathematician Charles Babbage and his assistant Ada, Countess of Lovelace. It remained a guess until the 1980s, when I," meaning David Deutsch, "proved it using the quantum theory of computation." That paragraph is really powerful, and it's saying so much. And as far as I can tell, looking into this, he's completely spot-on. Right? I mean, on this one I cannot find any good criticisms of it. So now, I don't want to repeat everything that we've talked about in past episodes, but let me see if I can try to give a feel for why this one is so dang hard to criticize if you actually understand it. When you look at people who have tried to criticize the concept of universality, we have a mutual friend who has attempted to do that numerous times on Facebook, for example, listing one scientist after another who has criticized this, and you can very quickly see that the scientists being listed don't really understand the concept of computational universality. They're criticizing all sorts of things, but it's kind of incoherent, the way they're going about it.
[00:07:56] Blue: It really comes down to something fairly simple. It's the fact that we use physics to build computers. I mean, of course we do, right? I mean, does anybody really seriously doubt that?
[00:08:09] Green: Isn't it more accurate to say the computers we build are constrained by our current understanding of the laws of physics, or our ability to manipulate them?
[00:08:19] Blue: Okay, so that's where we're going with this, right? Let's say you built a computer, and there are computers that exist, let's say a finite state machine, that follow the laws of physics, but the laws of physics can do things that a finite state machine can't do. Okay? So it's no big surprise that we can build a machine that can do things that a finite state machine can't do, because the finite state machine doesn't have the full repertoire of what the laws of physics allow. Sorry, could you define finite state machine? So, finite state machine. This is probably not actually worth defining, okay. A finite state machine is a very basic machine that has various states and transitions that exist between them. And if you go back and look at the episodes of this podcast on the theory of computation, I actually go over it in detail. Okay, so it's really easy to understand. They're very basic sorts of computers. They have no memory, basically, right? You basically just move between states, and there are certain kinds of algorithms that you can run on them. In fact, regular expressions are equivalent to finite state machines (it's context-free grammars that correspond to pushdown automata). There are actual programming languages that are equivalent to some of these different types of machines that are non-universal. Okay, and even when I say non-universal, even that's misleading; they're universal within their own sphere, right? It's just that they're not universally equivalent to the laws of physics. Okay,
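To make the finite state machine idea concrete, here is a minimal sketch in Python (my own illustration, not from the episode): a transition table, a current state, and nothing else. This particular machine accepts the same strings as the regular expression `ab*`, illustrating the regular-expression/finite-state-machine equivalence mentioned above.

```python
# A minimal deterministic finite state machine (illustrative sketch):
# it accepts exactly the strings matching the regular expression "ab*".
# It has no memory at all beyond which state it is currently in.

TRANSITIONS = {
    ("start", "a"): "saw_a",
    ("saw_a", "b"): "saw_a",   # loop on b
}
ACCEPTING = {"saw_a"}

def accepts(text):
    state = "start"
    for ch in text:
        state = TRANSITIONS.get((state, ch))
        if state is None:      # no transition defined: reject
            return False
    return state in ACCEPTING

print([accepts(s) for s in ("a", "abbb", "b", "aba", "")])
# [True, True, False, False, False]
```

The lack of any tape or counter is exactly why such machines fall short of full universality: their entire "knowledge" of the input so far is which state they are in.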
[00:10:07] Blue: The laws of physics allow you to do things that these machines can't do. Okay, they run on the laws of physics; you can use the laws of physics to implement a machine that will accomplish certain types of things, do certain types of computations, that these more limited types of computers are incapable of doing. Okay. Now, Alan Turing had this idea, and the idea was that there was a universal computer, a computer that didn't just have its own sphere of universality, but that was equivalent to the laws of physics. Okay. And this was the Turing machine that we've talked a whole bunch about on this podcast. So why did he come up with this idea? Well, it was actually a conjecture. He didn't know if it was true or not, but it was based on a really interesting thing that happened. It was the fact that there's this formal system created by Church, the lambda calculus, so we call it the Church-Turing thesis. So Church and Turing are two different people. And it turned out that Turing was able to prove that his Turing machine and Church's calculus, which I think of as a totally different type of computer, were exactly equivalent. He showed it by showing that you could always take one and turn it into the other, and then back again, which is how they do this in computational theory.
[00:11:32] Blue: He showed that they were exactly equivalent. Now, we have spent just decades and decades trying to come up with machines that can exceed the Turing machine in terms of computational power. Okay, now, when I say computational power, I don't mean speed; of course, the Turing machine wouldn't be particularly fast. And I don't even mean memory, because the theoretical Turing machine has infinite memory, whereas a real-life computer never has infinite memory. I mean just the class of algorithms that it can run, and also whether those algorithms are tractable or not. Okay? If you can show that an algorithm is not tractable on a Turing machine, there will be no physical machine you can build on which it is tractable. Now, it turned out that isn't actually quite true, because of the quantum computer, but it turned out to be really close to true. This is how Turing was looking at it. Turing was trying to solve a specific problem. The problem was: how in the world, I mean, you've got these two entirely different machines, this calculus by Church and this machine by Turing, both of them just theoretical, it's not like they were actually being built, and you can't physically build a calculus anyway. Why are they exactly equivalent? What are the odds that these two machines would be exactly equivalent? And then you try to build other machines. You come up with a Turing machine that has a 2D surface or a 3D memory, you know, things that you would think would add power to the machine, and you can almost immediately show that they're exactly equivalent to a Turing machine, using the same trick that Turing used.
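To give a feel for what a Turing machine actually is, here is a tiny simulator sketch (my own illustration; the rule-table format is an assumption for the example, not anything from the episode). Unlike the finite state machine, it has an unbounded tape it can both read and write. The sample rule table increments a binary number, starting with the head on the rightmost digit.

```python
# A minimal Turing machine simulator (illustrative sketch).
# rules maps (state, symbol) -> (new_state, symbol_to_write, move).
# The INCREMENT table adds 1 to a binary number: flip trailing 1s
# to 0s moving left until a 0 (or a blank) becomes 1.

def run_tm(tape, state, head, rules, blank="_"):
    tape = dict(enumerate(tape))          # sparse tape: position -> symbol
    while state != "halt":
        symbol = tape.get(head, blank)
        state, tape[head], move = rules[(state, symbol)]
        head += {"L": -1, "R": +1}[move]
    cells = range(min(tape), max(tape) + 1)
    return "".join(tape[i] for i in cells).strip(blank)

INCREMENT = {
    ("carry", "1"): ("carry", "0", "L"),  # 1 + carry = 0, keep carrying
    ("carry", "0"): ("halt",  "1", "L"),  # 0 + carry = 1, done
    ("carry", "_"): ("halt",  "1", "L"),  # ran off the left edge
}

print(run_tm("1011", "carry", head=3, rules=INCREMENT))  # "1100"
```

The point of the exercise is how little machinery is needed: a state, a head, and a rewritable tape already suffice for universality.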
[00:13:08] Red: What I'm getting is that universal computation, a Turing machine: anything that you could reasonably expect a computer to do, one that's universal can do.
[00:13:25] Blue: Yes.
[00:13:26] Red: Is that a fair way to put it? Just to really dumb it down. Question, though: doesn't the article say that the quantum computation proved universality?
[00:13:41] Blue: Let me get to that. Okay. So when we're dealing with computational theory, there are two different things we care about. Okay, well, there are probably lots of things to care about; there are two in particular, for our purposes on this podcast, that we care about. One is: is it computable or not? Okay, so there are certain types of problems, famously the halting problem, that a computer cannot solve. Okay, it is completely impossible for a computer to solve the halting problem in a general way.
[00:14:13] Red: It can't tell you when the program will stop?
[00:14:19] Blue: Tell you if the program is going to stop or not.
[00:14:22] Red: Oh, if it's going to stop. Okay.
[00:14:24] Blue: Okay. Now, there are all sorts of things that are exactly equivalent to the halting problem. So there are different types of problems we can pose that we can then map to the halting problem, and then we know that problem is uncomputable. Okay, does that make sense?
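The halting-problem argument can be sketched in a few lines (my own illustration of Turing's diagonalization, not code from the episode; `halts` here is a hypothetical oracle that provably cannot exist in general):

```python
# Sketch of Turing's diagonalization: IF a general halting oracle
# existed, we could build a program that defeats it.

def halts(func):
    """Hypothetical oracle: True if func() eventually halts.
    No correct, total implementation can exist -- that's the point."""
    raise NotImplementedError("provably impossible in general")

def contrarian():
    # Do the opposite of whatever the oracle predicts: if it says
    # we halt, loop forever; if it says we loop forever, halt.
    if halts(contrarian):
        while True:
            pass
    return

# Whatever halts(contrarian) answered, the answer would be wrong,
# so no general halting decider can exist.
```

This is why "map your problem to the halting problem" is such a powerful move: anything equivalent to it inherits the same impossibility.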
[00:14:42] Red: Yes,
[00:14:43] Blue: And there are tons of interesting questions, problems we would love to solve, that computers cannot solve. Okay. Now, the other thing is tractability. So, tractability is a little weird. Let's say that I have something like the traveling salesman problem, okay. I could ask the question: is the traveling salesman problem tractable? And generally speaking, we would say no, but we don't really know that for sure. Okay. What if someday somebody discovers an algorithm that is polynomial time, so tractable, that solves the traveling salesman problem? How can you ever know for sure someone won't in the future discover such an algorithm, right? They've tried to come up with proofs: is there some way to prove that this is an intractable problem? They've just never come up with anything. There's no way to prove it, okay, that we currently know of, and there may be no way to prove it. So it would be technically incorrect to say the traveling salesman problem is intractable, although people say stuff like that all the time, and we know what they mean. Okay. What we mean is: we don't know of any current algorithm that makes this problem tractable, and we have good reason to believe we will never discover such an algorithm. Now, why do we have good reason to believe we will never discover such an algorithm? Well, there's this thing they
[00:16:13] Blue: discovered as part of computational theory, which came out of accepting the Church-Turing thesis and trying to study it and work out its implications. They had this idea of NP-completeness. Now, I described this in a past podcast, and I don't want to get into it, because it's kind of complicated, but basically they were able to show that there are certain kinds of problems that are universal, and the traveling salesman problem is one of them. So if I were to come up with an algorithm that could tractably solve the traveling salesman problem, then I could in fact take every single NP problem that exists and solve all of them tractably, because it turns out the traveling salesman problem is a universal problem. It's easy to come up with a way to take every NP problem that exists and map it to the traveling salesman problem. So all I would need to do, if I had a way to solve the traveling salesman problem in polynomial time, which is tractable time, is simply use a program, ones that we already know exist, that takes any NP problem and maps it into a traveling salesman problem. I would then solve it using the tractable algorithm, and then I would map it back.
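Here is a small sketch of the kind of mapping being described (my own illustration, not from the episode): the textbook reduction from the Hamiltonian-cycle problem to the traveling salesman decision problem. Edges of the graph get weight 1 and non-edges weight 2, so the graph has a Hamiltonian cycle exactly when the derived TSP instance has a tour of cost n. The brute-force solver is only for demonstrating the idea on tiny inputs; in general it is exponential.

```python
# Classic reduction: Hamiltonian cycle -> traveling salesman (decision).
# A graph on n vertices has a Hamiltonian cycle iff the derived TSP
# instance has a tour of total cost exactly n.
from itertools import permutations

def tsp_cost_matrix(n, edges):
    """Build the TSP instance: weight 1 for graph edges, 2 otherwise."""
    edgeset = {frozenset(e) for e in edges}
    return [[1 if frozenset((i, j)) in edgeset else 2
             for j in range(n)] for i in range(n)]

def best_tour_cost(cost):
    """Brute-force TSP, fine for tiny n (exponential in general)."""
    n = len(cost)
    return min(
        sum(cost[tour[i]][tour[(i + 1) % n]] for i in range(n))
        for tour in permutations(range(n))
    )

def has_hamiltonian_cycle(n, edges):
    return best_tour_cost(tsp_cost_matrix(n, edges)) == n

# A 4-cycle has a Hamiltonian cycle; a "star" graph does not.
print(has_hamiltonian_cycle(4, [(0, 1), (1, 2), (2, 3), (3, 0)]))  # True
print(has_hamiltonian_cycle(4, [(0, 1), (0, 2), (0, 3)]))          # False
```

The direction matters: a fast TSP solver would make `has_hamiltonian_cycle` fast too, which is exactly the "solve one NP-complete problem, solve them all" point.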
[00:17:32] Red: Sorry, are you saying NP?
[00:17:35] Blue: NP, the letter N, the letter P.
[00:17:37] Red: Okay.
[00:17:37] Blue: Okay. That's what they're called. There's a giant class of decision problems
[00:17:42] Red: Okay,
[00:17:42] Blue: that are all considered, well, underneath NP is the class P, which is the tractable algorithms. But you don't always know where a problem falls: you know it's in NP, but you don't know if it's in P or not. So one of the open questions is: are P and NP possibly equivalent? Well, they don't believe they are, because of NP-completeness: if you could actually do this, then it would turn out that P and NP are the same. So we basically have a set of conjectures, and, like all scientific conjectures, you're looking for counterexamples. If you can't find counterexamples, then the conjecture continues to be treated as if it's true. And these conjectures are, number one, that the Turing machine is a universal computer. I would have to say today it's actually the quantum Turing machine, but a lot of times when we say Turing machine we actually mean the quantum Turing machine; let me explain the difference in a second. Okay. And number two, that P and NP aren't equivalent, so that there is basically a class of tractable algorithms and a class of intractable algorithms. Okay, ones that we could compute, but it takes so long, very quickly, as the number of entities that need to be computed grows, that realistically there's no way to actually solve the problem in real time. Okay. Now, tons of interesting things come out of this. The whole field of AI, and I've talked about this in a past podcast, is basically trying to figure out
[00:19:21] Blue: what we do when we have an intractable problem and we still need to get a good answer. And there are tons of answers to that: you can relax the constraints a little bit; you can make it so that it simply finds a good answer that's close to optimal but not actually optimal. So let's take the traveling salesman problem, which is this problem where you have a salesman, and he needs to visit a group of cities, and we want to output the shortest path between all the cities that then returns him round trip back to his own city. So he's got the shortest path.
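One way to get a feel for the class NP (my own illustration, not from the episode): even when *finding* the best tour seems to require exponential search, *checking* a proposed tour against a cost bound is polynomial, and that easy checkability is what membership in NP means.

```python
# NP in one picture: a proposed solution (a "certificate") can be
# verified quickly, even if finding one seems to take forever.
# Here: checking a TSP tour against a cost bound.

def verify_tour(cost, tour, bound):
    """Polynomial-time check: is `tour` a valid round trip with
    total cost <= bound?  cost is an n x n distance matrix."""
    n = len(cost)
    if sorted(tour) != list(range(n)):     # must visit every city once
        return False
    total = sum(cost[tour[i]][tour[(i + 1) % n]] for i in range(n))
    return total <= bound

cost = [[0, 1, 9, 1],
        [1, 0, 1, 9],
        [9, 1, 0, 1],
        [1, 9, 1, 0]]
print(verify_tour(cost, [0, 1, 2, 3], 4))   # True: cost 1+1+1+1 = 4
print(verify_tour(cost, [0, 2, 1, 3], 4))   # False: cost 9+1+9+1 = 20
```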
[00:19:54] Red: It's very hard to understand, for someone who's not a computer person immersed in that world, why that would not be a solvable problem.
[00:20:05] Blue: Yes.
[00:20:06] Red: Well, it is a solvable problem.
[00:20:08] Blue: It's computable. Okay, but as the number of cities you have to run between grows, it becomes exponential how many options you have to try out.
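Just to put numbers on that blow-up (my own figures, not from the episode): with n cities there are (n-1)!/2 distinct round trips to compare, and that count outruns any brute-force search almost immediately.

```python
# How fast brute-force route-checking blows up: with n cities there
# are (n-1)!/2 distinct round trips (fix the start city, and a tour
# and its reverse are the same trip).
from math import factorial

def num_tours(n_cities):
    return factorial(n_cities - 1) // 2

for n in (5, 10, 15, 20):
    print(n, num_tours(n))
# 5 -> 12;  10 -> 181,440;  15 -> ~43.6 billion;  20 -> ~6 * 10^16
```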
[00:20:20] Red: Oh, so I guess it's a bit like the idea that there are more chess moves than there are atoms in the known universe. Yes, it's exactly the same thing. You know, your mind just can't get around exponential growth.
[00:20:34] Blue: So, in fact, let's use chess as an example. Okay, so why did it take so long for chess programs to outplay chess masters? That only happened within our lifetime, right?
[00:20:50] Red: Yes
[00:20:50] Blue: Why was that so hard? Well, it's because chess is an intractable problem. We know how to solve the problem of chess in principle, but you can't actually do it. I could very simply write a program that simply tries out every possible move in chess, given a certain input for a board.
[00:21:12] Red: Okay. Yeah,
[00:21:13] Blue: And so in theory I can write a program that beats every chess master, but it was just intractable.
[00:21:20] Red: Yeah,
[00:21:20] Blue: Okay. So what they actually had to do is come up with these really clever ways, and one of the main things they did is they started using machine learning, AlphaGo (look at our AlphaGo episode), to be able to evaluate the board more like how a human does, where they can intuitively look at the board and say, oh, this board is better than that board. And once they were able to do that, they didn't have to solve the problem of chess. They just had to look a couple of moves ahead, and with a good enough program you can beat any human master.
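The "look a couple of moves ahead with a heuristic evaluation" idea can be sketched as a depth-limited minimax search (my own toy illustration; real engines like AlphaGo add learned value networks and much more):

```python
# Depth-limited minimax: instead of solving the whole game tree,
# search a few plies ahead and fall back on a heuristic evaluation
# of the position.

def minimax(state, depth, maximizing, moves, apply_move, evaluate):
    """`evaluate` is the heuristic stand-in for actually solving the
    game (e.g. a learned value network in modern engines)."""
    options = moves(state)
    if depth == 0 or not options:
        return evaluate(state)
    scores = (minimax(apply_move(state, m), depth - 1, not maximizing,
                      moves, apply_move, evaluate) for m in options)
    return max(scores) if maximizing else min(scores)

# Tiny stand-in game: state is a number, a move adds or subtracts 1,
# and the heuristic just likes bigger numbers.
moves = lambda s: [+1, -1] if abs(s) < 3 else []
apply_move = lambda s, m: s + m
evaluate = lambda s: s
print(minimax(0, 2, True, moves, apply_move, evaluate))
```

The depth cap is the whole trick: the work grows with the branching factor raised to the depth, so bounding the depth (and pruning to promising moves) keeps an intractable game playable.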
[00:21:49] Red: Yeah,
[00:21:50] Blue: And even human masters try to look ahead. They use constraints, and they don't look ahead at every possible move, but they do look ahead at quite a number of different moves. Well, we would train the computer to only look ahead at the most promising possible moves, like a chess master would, and then it can look ahead, you know, 17 moves or something, since it's only trying a few options, and no chess master can do that. So the chess masters can't compete anymore, and the computer can win. Okay. So we eventually figured out how to beat chess masters; then we figured out how to beat Go masters. Go was even harder, because there's no good way to evaluate the board, unlike chess, and the exponential growth of possible moves is much, much larger with Go. So there are a lot of problems, in fact one might even say the vast majority of problems that we're interested in, that are intractable for a computer. So AI then becomes the study of, you know, how do I deal with that? So let's say I don't care about finding exactly the shortest path for the traveling salesman problem; I just want something close to the shortest path. It turns out that's completely tractable. Okay. And so they're going to be studying: how do we come up with algorithms that get us something close to the best possible answer? Okay. So in AI they define rationality contra the critical rationalist's version of rationality.
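The "close to shortest is tractable" point can be illustrated with the nearest-neighbor heuristic (my own sketch, not from the episode): a greedy O(n²) pass that produces a reasonable tour with no exponential search, though it is not guaranteed to be optimal.

```python
# Trading optimality for tractability: the nearest-neighbor heuristic
# builds a decent TSP tour in polynomial time instead of searching
# all (n-1)!/2 round trips.

def nearest_neighbor_tour(cost, start=0):
    """Greedy O(n^2) tour: from each city, go to the nearest
    unvisited one. Fast and often close, but not always optimal."""
    n = len(cost)
    unvisited = set(range(n)) - {start}
    tour = [start]
    while unvisited:
        here = tour[-1]
        tour.append(min(unvisited, key=lambda city: cost[here][city]))
        unvisited.remove(tour[-1])
    return tour

def tour_cost(cost, tour):
    n = len(tour)
    return sum(cost[tour[i]][tour[(i + 1) % n]] for i in range(n))

cost = [[0, 2, 9, 10],
        [2, 0, 6, 4],
        [9, 6, 0, 8],
        [10, 4, 8, 0]]
tour = nearest_neighbor_tour(cost)
print(tour, tour_cost(cost, tour))  # [0, 1, 3, 2] 23
```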
[00:23:16] Blue: They define it more like an economic concept of rationality: rationality would be taking the action that is provably close to the best action. And AI is all about trying to solve those kinds of problems. Okay, so getting back to how this all relates to computational universality then. Now, it turns out that you can build a computer that can take certain kinds of intractable problems, specifically the problem of factoring, of finding which prime numbers make up a certain number, which is used in cryptography. You can build a physical machine that is able to do that in a tractable amount of time, whereas a Turing machine can't do that in a tractable amount of time, at least not with any known algorithm that we have today. So what was this computer? That's the quantum computer. Now, this is exactly what made David Deutsch famous: he wrote this famous paper where he basically invented quantum computational theory. What he basically did is he showed that you could take all the different things you can do on a Turing machine, and you can do them all on a quantum computer. A very simple thing, like amazingly simple. He's world famous for this thing that nobody thought of before him, but in retrospect it seems totally, completely obvious. And this is the way a lot of discoveries are, right? Once somebody actually comes up with them, they're completely obvious after that. And he basically showed that you could take anything you could do on a Turing machine, and you can do it on a quantum computer. But the quantum computer, because it has this massive parallelization that takes place,
[00:25:14] Blue: because that's what quantum computers do, it's able to run something called Shor's algorithm. You could run Shor's algorithm on a regular computer, basically by emulating a quantum computer, because a regular Turing machine can emulate a quantum computer, but it would be intractable on the Turing machine, whereas it's tractable on the quantum computer. By doing this, David Deutsch proved, and here's the key thing, that the Church-Turing thesis was false. I'm going to say that again: David Deutsch in his paper proved the Church-Turing thesis was false. Now, you might say, wait, didn't he prove it was true? This is where linguistics becomes a bit of a problem. What he really did is he showed that there was a type of computer, the quantum Turing machine, that, how do I say this? There's nothing that the quantum Turing machine can compute that the Turing machine can't compute, so they're equivalent in terms of computability. But there are certain algorithms, really Shor's algorithm is the only known example today, but there are certain algorithms, such as Shor's algorithm, that are tractable on the quantum computer but intractable on a Turing machine, which makes it a different type of computer, if only a bit different. Okay. Now, besides that, the quantum computer has a number of speed-up algorithms that can in general speed things up, but it doesn't take an exponential algorithm and turn it into a tractable algorithm.
[00:27:04] Blue: It doesn't do that. It's usually only a quadratic speed-up, and if you don't know what that means, basically it's a significant speed-up that is still shy of an exponential speed-up. And so an algorithm other than Shor's algorithm that is exponential on a Turing machine is still exponential on a quantum Turing machine; it's just that the quantum Turing machine can do it in quite a number of fewer steps. But it's still an exponential algorithm. So it didn't actually change very much in terms of computational classes of algorithms, but it changed it enough, and it changed it in this one important area, Shor's algorithm, cryptography, that gets all sorts of, you know, government agencies excited, which is why there's so much interest in trying to build a quantum computer. Okay, and you're probably familiar with the concept of quantum supremacy and all of that that's been in the news, with IBM competing with Microsoft and Google or whatever, trying to have the best quantum computer. We're still a long way off, probably, in terms of having a true quantum computer that could really start breaking our encryption algorithms, but someday we'll work it out. We'll engineer it. It's just an engineering problem at this point, because we've got the theories on how to do it. And, you know, quantum mechanics has got to be the single best theory that we've ever had, right? I mean, it's the most well tested; there's not a single known counterexample to it. It's known to be wrong, by the way. This is something that I've debated with people on Facebook quite a bit. They'll point out that it's wrong in some way.
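To put rough numbers on "quadratic speed-up" (my own illustration; constants and algorithmic details are ignored): Grover-style quantum search needs on the order of √N queries where a classical search needs N, which is a big win, but it still leaves exponentially large search spaces exponentially large.

```python
# Rough shape of Grover's quadratic speed-up: ~sqrt(N) queries
# instead of N for unstructured search (constants ignored).
from math import isqrt

def classical_queries(n):
    return n              # worst case: check every item

def grover_queries(n):
    return isqrt(n)       # order sqrt(N)

for bits in (20, 40):
    n = 2 ** bits         # search space of size 2^bits
    print(bits, classical_queries(n), grover_queries(n))
# Doubling the bits still squares the quantum cost: a 2^40 search
# needs ~2^20 Grover queries -- quadratically better, not tractable.
```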
[00:28:43] Blue: I'll go, look, we all know it's wrong. It's at odds with and in contradiction to general relativity, and it can't explain quantum gravity; that's why we're looking for a theory of quantum gravity. It doesn't matter that it's wrong. Okay, it doesn't matter. It's still the best theory. It's still the only theory. There are no competitors to it. String theory is not a true competitor to it today, right? I mean, the level of verisimilitude of quantum theory is amazingly high.
[00:29:20] Red: Is there a distinction between being wrong and incomplete?
[00:29:20] Blue: Uh, no, it's wrong.
[00:29:23] Red: Wrong is the right word. Okay.
[00:29:25] Blue: No, so there is a distinction to be made between wrong and incomplete. Yeah, okay. And so you might try to throw it into the incomplete category by saying we can't explain gravity today with it, but it's literally in contradiction to general relativity. So, you know, I guess we can't say for sure it's wrong until we actually have the new theory, and maybe we will discover, I mean, one could make an argument that when we have the new theory, it'll turn out to be quantum theory plus something else. But nobody believes that, right? It's not even a very viable point of view; I don't think you could find a single scientist who subscribes to that viewpoint. It's generally accepted that it's wrong. And this is one of the things that Deutsch will point out: yeah, we know it's wrong. It's an incorrect theory. We're actually at a unique point in history, where we used to think our theories were true, and we sort of don't believe that anymore; we think they're the truest we have so far. Okay, this gels very well with critical rationalism. Okay, it's one of those ideas that has grown over time in science: it doesn't really matter if our theories are entirely right or not; what matters is that we don't have a good alternative theory. Okay. So, what David Deutsch did is not only did he create this new type of computer, the quantum computer, the quantum Turing machine, which then became the new universal machine, because now that's the machine that we know, in theory, how to build, even though in practice we can't build one yet, and
[00:31:05] Blue: it is not known how you could use the laws of physics to exceed that machine. Now, maybe you can; maybe the laws of physics allow for some new type of machine. So David Deutsch didn't want to leave that door open like Turing did. What he did is he created a mapping between quantum physics and the quantum Turing machine to show that they were equivalent. Now, this was really the genius of what David Deutsch did, because now he's actually produced a proof that computational universality is true, but, and here's the key thing, only insofar as quantum physics holds, quantum mechanics holds, which, as I just said, is wrong. It's known to be wrong. Okay. Now, this is where things kind of get interesting. So Deutsch proved something, but based on a physical theory that's known to be wrong. So doesn't that mean that his Church-Turing-Deutsch thesis, as we call it today, is wrong? Well, no, it doesn't mean that. If I could use an analogy: let's say that we wanted to take the stance that, you know, actually the laws of gravity are incorrect under Newton's laws, which today we accept is true. But back under Newton's laws, before Einstein's time, that would have been a real hard sell, right? Well, people will often point to that, and they'll say, okay,
[00:32:46] Blue: we thought gravity was real, and we thought it was a force. Then Einstein came along and showed it wasn't even a force, so there is no force of gravity, and we have to get used to this idea. And even Popper would agree with this; Popper uses this as an example: you don't know how the new theory is going to falsify things, how that new theory is going to overturn things that you thought were true. You really believed there was a force of gravity, and then the new theory comes along, Einstein's theory of general relativity, and suddenly there's no longer a force of gravity. And this is generally the way we tell this story; it's even the way Popper tells this story. There's a problem with that way of thinking about it, though, even though it's technically true, because there really is still a force of gravity, even if it's now explained as curvature of space and things moving together and things like that. It's effectively exactly equivalent to a force. Okay. What Einstein really did is he showed why the old theory was successful. Now, this is what Popper is trying to explain: you don't just throw theories out for being wrong. You have to actually have a new theory that explains the success of the past theory. And until you have that new theory that explains the success of the past theory, you don't have any particular reason to believe anything in the false theory is wrong, other than the individual things that you know to be incorrect, right?
[00:34:22] Blue: Well, there's no reason to believe that the Church-Turing-Deutsch thesis is going to be overturned in the new theory. It might be. Until we have the new theory we won't know, but it might confirm the Church-Turing-Deutsch thesis. In fact, at the moment that would probably be your best guess: that the future theory is going to explain the success of the past theories. Now, we don't know, okay, but we've got good reason to believe that, at least insofar as physics follows quantum mechanics, universality is going to hold. Until there's some new theory that tells us "this is how you would build a machine that is different from the quantum Turing machine, that can do something the quantum Turing machine can't do," universality is going to hold. In fact, Deutsch in his original paper, according to Penrose, actually wrote about how you might get around this. So Deutsch is not naive about this, right? He's very open about this. Maybe someday we will have a theory of quantum gravity, and it will allow us to, say, build an oracle machine that can solve the halting problem, at least for a Turing machine. You know, I don't know. Scott Aaronson has written papers about that. Maybe someday that'll happen. At that point we will overturn the Church-Turing-Deutsch thesis, okay, and we'll be able to build a new type of computer using our new understanding of physics, and it will have a greater computational class. That might happen, or it might not.
[00:35:50] Red: Is that the definition of an oracle? You said oracle machine. An oracle machine is one that can solve the halting problem? Yeah, okay.
[00:35:58] Blue: Yeah, okay. So in computational theory, we invent computers that have a different class than the Turing machine. It's actually quite easy to do, right? Like, there's all sorts of computers that exist theoretically
[00:36:13] Blue: that are different than the Turing machine and have a different class of computation. One of them is the oracle machine, where you attach an oracle to a Turing machine and it can solve the halting problem. So now you have a Turing machine plus the ability to solve the halting problem. What's your new computational class? And we've answered that question; we've got tons of studies on that, right? This is the key thing though, and this is one of the brilliant insights of Deutsch: everything I'm talking about directly links computational machines to physics. Now, I just said we have theoretical computing machines that aren't linked to physics. You just can't build them, because the laws of physics don't allow you to. Okay. So when we talk about a physical computing machine, which is usually what we care about in computational theory, that's what we're studying. We're actually studying: what do the laws of physics allow you to compute? That is what the theory of computation actually is, okay. And this is one of the things I've had numerous arguments about on Facebook, with a certain someone out there who's argued with me over this quite a bit. Okay. Yes, it might be that we overturn the Church-Turing-Deutsch thesis. That would be great if we could, right?
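For readers who want the baseline machine in this discussion made concrete: an ordinary Turing machine can be simulated in a few lines. This is an illustrative sketch, not anything from the episode; the binary-increment rule table is just an example program chosen for brevity.

```python
def run_tm(tape, head, state, rules, blank="_", max_steps=10_000):
    """Run a single-tape Turing machine until it enters the 'halt' state.

    `rules` maps (state, symbol) -> (write, move, next_state),
    where move is 'L', 'R', or 'S' (stay).
    """
    tape = list(tape)
    for _ in range(max_steps):
        if state == "halt":
            return "".join(tape).strip(blank)
        # Grow the tape on demand in either direction.
        if head < 0:
            tape.insert(0, blank)
            head = 0
        elif head >= len(tape):
            tape.append(blank)
        write, move, state = rules[(state, tape[head])]
        tape[head] = write
        head += {"L": -1, "R": 1, "S": 0}[move]
    raise RuntimeError("step budget exceeded")

# Example program: binary increment, scanning from the rightmost bit
# and carrying 1s into 0s.
rules = {
    ("inc", "1"): ("0", "L", "inc"),   # 1 + carry -> 0, carry continues
    ("inc", "0"): ("1", "S", "halt"),  # absorb the carry
    ("inc", "_"): ("1", "S", "halt"),  # carried past the top bit
}

print(run_tm("1011", head=3, state="inc", rules=rules))  # "1100"
```

Attaching a halting oracle to a machine like this is exactly the "oracle machine" construction mentioned above: same tape mechanics, plus one extra primitive the rules can invoke.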
[00:37:33] Blue: But nobody knows how to do that today. So we don't really have any reason to believe we will; we don't really have any reason to believe we won't. But our best theory says this is what the computational class is. All of our physics theories are computable today. Well, of course they are, because if we had a physics theory that wasn't computable, that would mean there would be some way to build a machine, using physics, that would make it computable, and we would then invent a new type of computer. The types of physical computers we can build is directly linked to our understanding of physics.
[00:38:09] Red: Now, on a practical level, what would overturning the Church-Turing-Deutsch thesis look like? Like, how can I get my mind around that?
[00:38:20] Blue: Okay, so let's say that we come out with a theory of quantum gravity, and it allows us to create a closed timelike curve, where you can run a program, and then you can wait forever in this closed timelike curve until the program does or doesn't terminate, and then it comes back with an answer: did the program terminate? Okay, okay. Under those laws of physics, now, these are completely made-up laws of physics. There is no theory of quantum gravity, so I'm making this up. We'll understand that this is a complete work of fiction. Okay. Okay. So I come back, and I now can build a computer that can solve the halting problem. Now, one of the things you have to understand about the halting problem is that it's completely unsolvable. So what I really mean when I say it can solve the halting problem is: you solved it for the Turing machine. You then have a new halting problem, for this new type of computer, that can't be solved. Okay, and this is what Gödel's theorem actually says. So you would never truly solve the halting problem, but you could solve it for a certain type of computer, if that makes any sense. This would then be a new type of physics.
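The "you then have a new halting problem one level up" point is the classic diagonalization argument, which can be sketched in a few lines of Python. This is an illustrative sketch, not from the episode; `pessimist` is a made-up stand-in for a candidate halting decider. The same construction relativizes: hand `diagonal` an oracle and it defeats any claimed decider for oracle machines too, which is why each new computer gets its own unsolvable halting problem.

```python
def make_diagonal(halts):
    """Given any claimed halting decider, build the program that defeats it."""
    def diagonal():
        if halts(diagonal):   # decider predicts: diagonal halts...
            while True:       # ...so do the opposite and loop forever
                pass
        # decider predicts: diagonal never halts -- so halt immediately
    return diagonal

def pessimist(prog):
    """A concrete (wrong) candidate decider: claims nothing ever halts."""
    return False

d = make_diagonal(pessimist)
d()  # returns immediately, refuting the pessimist's "never halts" verdict
```

Whatever decider you plug in, its own diagonal program does the opposite of what it predicted, so no decider of the same class can be correct on all inputs.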
[00:39:34] Blue: We would now have an idea of how to go about making this computer. We would then have to figure out the engineering problems around it, but eventually we would work through them, all problems are soluble. We would eventually work through the engineering problems, we make this new type of computer that uses this closed timelike curve, and you attach it to a Turing machine, and now the Turing machine can solve the halting problem for other Turing machines. But it can't solve it for this new type of computer itself. Okay. There's this really deep link, and this is where the problem comes in. People who are trying to show the Church-Turing thesis, or the Church-Turing-Deutsch thesis if they even know that that thesis exists, which most of them don't, is wrong, what is it they're trying to say? Okay, if all they're trying to say is we might have future
[00:40:25] Blue: computational theories based on new laws of physics, and it might overthrow the Church-Turing-Deutsch thesis, then yeah, sure. But what you're going to end up with is the Church-Turing-Deutsch-X thesis, for whoever discovers this, and a new type of computer that has a new computational class, and everything's still computable. And this is why you can't get around the Church-Turing thesis in general. Okay. Yes, you can overthrow any specific version of the Church-Turing thesis. You can overthrow it with the Church-Turing-Deutsch thesis; you can overthrow the Church-Turing-Deutsch thesis with the Church-Turing-Deutsch-Aaronson thesis, or whoever discovers the new version. But you always wind up with something that's exactly equivalent to what we thought the Church-Turing thesis was, which basically says: everything that physics allows you to do can be simulated on a computer. It just depends on what computer. The laws of physics may mean we have to invent new types of computers to fulfill that, or they might not. It may be we've already discovered the highest-class computer that will ever exist, the quantum computer. Okay, it may be that every theory we have after this will say that that is the highest-class computer. And when someone wants to get around the Church-Turing-Deutsch thesis, they typically don't mean the Church-Turing-Deutsch-Aaronson thesis, right, or whatever this new future thesis is going to be. They mean that in some mystical, supernatural way we'll simply find out that consciousness can't be in any way computed, or therefore explained by physics. Well, that's just supernaturalism, right? I
[00:42:09] Blue: mean, and that's clearly where most of the people who say this are going; they're getting into that camp, because they don't quite understand what Deutsch was actually trying to say. And so they're just kind of putting stuff out there. Well, maybe, you know. I just recently had one of the critics on Facebook say, oh, but see, there's a difference between a recipe and a cake, because if you try to eat a recipe it doesn't taste like a cake. It's like, okay, you're so thoroughly misunderstanding what Deutsch is saying that I don't even know how to respond to you, and I have to go back to the beginning with you and explain what it is Deutsch was trying to say, because you're wasting your time by trying to think with criticisms like this, right? Because Deutsch is not trying to say that the recipe tastes like a cake. Let me assure you that that just has nothing to do with the Church-Turing-Deutsch thesis, right? What the Church-Turing-Deutsch thesis really says, though, is this: if you think the biological substrate is able to do something that the digital computer can't do, then what you're really positing is a new branch of physics, new laws of physics, that isn't quantum mechanics. And I think most people, like Lee Cronin, who has been very critical of the Church-Turing thesis, okay, don't really understand it, from what I can see.
[00:43:32] Blue: He is not trying to say that we need new laws of physics, but that is the implication of his statements; he just doesn't know that, right, because he hasn't actually understood the theory correctly. And that's really what you're saying: anytime you say the brain is not a computer, is not equivalent to a computer, you're really saying we need new laws of physics to explain the brain. Well, I don't think most of the people who say that would ever actually agree to that statement, because that's such a wildly strange statement. Okay. This is why computational universality is so hard to get around. You're either positing future laws of physics, which, you know what, that's fair, okay, if you've got good reasons. Quantum mechanics is wrong, so you might even be right. Okay, but you need to actually have entirely new physics to even posit a new type of computer, because right now our current understanding of physics shows that the quantum computer is the universal computer, period, end of story. Because Deutsch mapped the two together, so they are equivalent, period, end of story. And there's no other way around that, other than you're going to have to actually show me entirely new laws of physics. So when someone comes in and they start to criticize the Church-Turing thesis
[00:44:56] Blue: I'll use the term Church-Turing thesis, or the Turing principle, as a shorthand, but I really mean the Church-Turing-Deutsch thesis, whatever the current highest computer is. When you're trying to criticize that, you need to show us new laws of physics, and almost none of them even know that they need to show us that. Very few of them do. Roger Penrose knows that he needs that; he's one of the exceptions. So, okay, getting back to this, we didn't get very far. We set this aside for computational universality. But this is why I'm so completely in the Deutsch camp on the concept of computational universality. And yes, maybe brains use special kinds of quantum gravitational effects that we don't currently know about under our current theories. Is that kind of what Penrose thinks? That is what Penrose thinks.
[00:45:51] Red: Okay.
[00:45:51] Blue: Okay. Now, in fact, let me read something, a place where I do need to offer a criticism of Deutsch. So Deutsch, in this article that we're reviewing, talks about various other ideas, including Penrose's. He says: some, such as the mathematician Roger Penrose, have suggested the brain uses quantum computation, or even hyper-quantum computation relying on as-yet-unknown physics beyond quantum theory, and that this explains the failure to create AGI on existing computers. To explain why I, and most researchers in the quantum theory of computation, disagree that this is a plausible source of the human brain's unique functionality is beyond the scope of this essay. If you want to know more, read the 2006 paper "Is the Brain a Quantum Computer?", published in the journal Cognitive Science. So now I can summarize this point. Okay. So first of all, let me just say: he says Penrose suggested the brain uses quantum computation. That's not true. But then he goes on, he says what is true: or even hyper-quantum computation. Penrose in his books is quite clear he does not mean that the brain is a quantum computer, because Penrose knows that if the brain is a quantum computer, that's exactly the same as saying the brain is equivalent to a computer. So Penrose is smart. He really gets this stuff. He's got really interesting ideas because of that. Penrose is saying quantum mechanics is wrong, and that we need to replace it with an entirely new version of physics. And then he goes through a punch list of what he hopes that physics will hold, and guess what, that's all it is: a list of things that Penrose hopes the future theory is going to hold.
[00:47:38] Blue: He tried to come up with something to make it more plausible that the brain could be working off of quantum effects. Now, this wouldn't be to show it's a quantum computer, because quantum computers are exactly equivalent to Turing machines, other than a slight difference in the class of algorithms they can run efficiently, like Shor's algorithm. So in his book he says: I'm not suggesting the brain is a quantum computer, because that would undermine my entire argument. If that were true, it would be the same as saying the brain is equivalent to a computer. So he is trying to say there's going to be some interesting effect that we're going to find in future physics, and that future physics is going to show that there's this quantum effect, and the brain is going to use that, and that's going to allow us to get around Gödel's theorem. He makes his whole argument around Gödel's theorem, and that is his whole argument. His whole argument today is basically: you know, I think that we're going to have some future physics, and current physics is completely wrong, and so I don't accept its current implications. That's what Penrose is trying to say. Now, he tries to make his point of view plausible, which, when I put it the way I just did, doesn't seem like a very good theory, and really it's not a theory, like it's not by any stretch of the imagination a scientific theory today.
[00:49:00] Red: I'm curious how many people agree with Penrose. I mean, I've heard a lot of people who are interested in his idea, but I don't think I've ever, like, interacted with someone who is like, Penrose is right, kind of.
[00:49:14] Blue: No, no, I don't think you will find really many scientists at all who agree with Penrose. Okay. But, you know, what would they agree with, right? Penrose himself admits that this isn't a theory. This is him trying to point out problems with the existing theory, and how he hopes you might go about trying to solve them. He's come up with interesting ideas trying to solve them. I can't remember the name; he's got a theory that didn't catch on at first, he was kind of disappointed, and then it started to catch on as a possible way to go about trying to make some advances in physics. I'd have to look the theory up. So, I mean, to Penrose's credit, he's actually trying to solve the problem. There's a problem, and I kind of think he's probably on the wrong path. I think he's got intuitions that it should be the case that the brain isn't equivalent to a computer, and so he tries to show there could be quantum effects in the brain. And he got together with another scientist who believed that, and they published a book on the subject. But everything we know about quantum computation suggests that you just can't have quantum computations in something as wet and warm as the brain. Right, and I take it, kind of, that Penrose is searching for the mystery of consciousness in something more physical,
[00:50:37] Red: whereas Deutsch, and I think it's probably the more mainstream position, thinks it's more of a software problem. Is that fair?
[00:50:46] Blue: Yes. So this is Deutsch's whole point. I mean, Penrose's theory is not a good theory today. Now, that could change. It's a conjecture, and, well, let's make a distinction here. Penrose has every right to use his intuitions to say, I'm going to make the following conjecture, and I'm going to let that lead me into a research program. Okay, and that's what Penrose is doing, to his credit, right? And it's his risk. If he's wrong, then that's his time he's wasted going down a false path. Now, false paths can be partially true, so even if you're going down a false path, it may turn out to be useful in some way. Okay, this is one of the reasons why dogmatism is sometimes good. But in terms of AGI studies, if you were trying to pick between Penrose's approach to AGI studies and Deutsch's approach, Deutsch's is rooted in existing theory, and it's really hard to see how you'd get around it. And Penrose isn't saying how you would either, because Penrose doesn't know, and probably will never know, because honestly he's probably wrong. And there probably is no way to get around the Church-Turing-Deutsch thesis. Or, if there is, it's going to turn out to be a benign version, where it's just simply a tweak to the thesis, the Church-Turing-Deutsch-Aaronson thesis or whatever, right?
[00:52:09] Blue: I keep using Scott Aaronson as the example because he wrote that paper on how we might someday build a better computer. But that may never happen, or it may. Either way, it almost doesn't matter. Even if it did happen, even if it turned out that we could build a new type of computer that had closed timelike curves and we could solve the halting problem, do we have any reason at all to believe that human brains can solve the halting problem? They can't, right? The limits of what humans can do are exactly the limits set by computational theory. The things computational theory says we can't do, we can't do. We can only intractably accomplish the things that computers can only intractably do. We can't solve uncomputable problems.
[00:52:57] Red: So is that sort of a link between computational universality and the universal explainer hypothesis?
[00:53:09] Blue: Oh, yes, insofar as the universal explainer hypothesis is built on top of computational universality.
[00:53:17] Red: Okay,
[00:53:18] Blue: So the universal explainer hypothesis would hypothesize that even a universal explainer can't do things that are not allowed by computational theory. Okay, because it's still ultimately the case that the brain is a physical object, that it's running software. It cannot be different from what you can do on a Turing machine, or a quantum Turing machine. Really, on a Turing machine: we've got no reason at all to believe that human brains can run Shor's algorithm in polynomial time. So we've got every reason at this point to believe that the brain is a regular Turing machine, not even a quantum Turing machine. So this is really kind of the starting point, and this is what got me excited, right? The realization that we have good reason to believe, good theories, best theories, best-in-class theories with zero competitors today, that say we've got every reason to believe that AGI is possible. It's just a matter of finding the right software. And it's not going to require new physics. It's not going to require some sort of mystical connection, et cetera. Okay.
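For context on the Shor's algorithm point: factoring via period-finding can be sketched classically, and the brute-force loop below is exactly the step a quantum computer speeds up. This is an illustrative sketch, not from the episode; the numbers are a standard textbook example.

```python
from math import gcd

def order(a, N):
    """Smallest r > 0 with a**r % N == 1, found by brute force.

    This period-finding loop is the exponential-cost step that Shor's
    algorithm replaces with a polynomial-time quantum subroutine."""
    assert gcd(a, N) == 1, "a must be coprime to N for a period to exist"
    x, r = a % N, 1
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

def factor_via_period(N, a):
    """Shor's classical post-processing: turn the period into factors."""
    r = order(a, N)
    if r % 2:
        return None                       # odd period: retry with another a
    f = gcd(pow(a, r // 2) - 1, N)
    return (f, N // f) if f not in (1, N) else None

print(factor_via_period(15, 7))  # (3, 5)
```

The quantum speedup lives entirely inside `order`; the gcd bookkeeping on either side of it is ordinary classical computation, which is why the claim in the episode is about the brain matching a quantum Turing machine's efficiency, not its computability.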
[00:54:43] Red: I was just going to say, this might be a good transition to a question I have for you about the article. It seems to me like most of what you've said so far would be relatively uncontroversial in the AGI field, or they might at least pretty much be on the same page as you. But if you turned it around and presented what's in the article to someone who would be a more mainstream AGI researcher, and you said, well, the reason we don't have AGI yet is because we're basing our programming and our ideas on what Deutsch might call bad philosophy, I don't think he uses that term in the article, but he talks about justified true belief and induction and Bayesianism and behaviorism, and I think he brings all that up in the article and seems to suggest that that's why AGI is on the wrong path. I kind of suspect that this mainstream AGI researcher would look at you like you are crazy. Or am I misunderstanding?
[00:56:05] Blue: Well, this is actually where I do have some criticism of Deutsch.
[00:56:10] Red: Okay. Okay.
[00:56:11] Blue: Although I think he's still basically correct. Your average AGI researcher isn't aware of any of these theories. So, yes, the vast majority of AGI... so how many AGI researchers are there in the world? Well, how would you even count that? There are probably very, very few. Okay, you've got a whole lot of AI researchers, and probably some of them think that their AI research might someday be related to AGI research.
[00:56:39] Red: Um,
[00:56:40] Blue: But I don't think there is some large number of AGI researchers out there. That's just not the way big science works today, right? But if you were to talk to your average AI researcher, who isn't even really trying to do AGI research, okay, and you were to say, can we build an AGI on a computer, the vast majority of them would say yes. Now, if you were to ask them why, they would not cite Deutsch's paper that shows a mapping between physics and computational theory. Which is of course the right way to go about this. You want to show: brains are physical objects, physical objects follow the laws of physics, the laws of physics are computable on a computer, and therefore I now know I can build an AGI, unless you can show that the laws of physics themselves are wrong. And not just wrong, because of course we know they're wrong; they have to be wrong in a way that matters to the way the brain functions. And this is the thing that one of our mutual friends out on Facebook has missed over and over and over again, okay. She might point out, say, the problem of time or something like that. What has the problem of time even got to do with how brains function? Nothing, right? Pointing out a problem isn't sufficient. You need to have a really good reason why it's got some sort of relevance to the problem of AGI, and we just don't have any reason at all to believe that. Maybe we will someday.
[00:58:12] Blue: I don't know. You keep an open mind, because you're a critical rationalist. You keep an open mind. Maybe we will find out that the Church-Turing-Deutsch thesis has thoroughly misled, you know, someone like me in AGI research. But we've just got no reason to believe that today, and we've got every reason to believe it's the right path, because it's the best theory we currently have. So, the vast majority of AI people don't have any clue that computational theory is actually a branch of physics. They don't have any clue that David Deutsch wrote a paper that showed you can map physics to the computer and back. Okay, so what are they basing this on? Well, to be honest, it's gut feel, right? I mean, they have this kind of gut feel. If you were to ask them, they might produce something similar to Deutsch. They might say something like, well, brains are physical objects, so there's nothing mysterious about it and nothing supernatural about it, so of course we should be able to build a physical system that's equivalent, right?
[00:59:14] Red: Yeah,
[00:59:14] Blue: But they don't know. I mean, like, you'll come across people all the time who are scientists, legitimate scientists, who very sincerely think otherwise and will argue it. There was a friend of mine at work who's a PhD, so she's an actual scientist, not doing research in the field today, but she's in neuroscience. And I told her, I said, yeah, I'm interested in neuroscience because I'm interested in AGI. She says, well, we don't know that the brain is equivalent to a computer. I said, yeah, we do. And she goes, no, we don't. And this is someone who knows what she's talking about in the field of neuroscience, right? They're not teaching, in the field of neuroscience, David Deutsch's paper on mapping quantum physics to the computer,
[01:00:02] Red: right?
[01:00:03] Blue: you know? There's an actual paper, right? And quantum physics is exactly equivalent to the quantum computer, and vice versa. So we know that the brain is equivalent to a computer. And then she would immediately say... I mean, she's never heard this. She's not going to take it from me. I'm not some official scientist, right? It doesn't matter that I'm right. So she goes, no, no, no, we always think the brain is like, you know, a steam engine, or it's a clockwork, and she uses all the standard arguments. Okay, and it's not the same, because we've actually mapped physics to the computer. That's what Deutsch did. Unless you can show a mistake in his paper, or you can show a new set of physics, this is our best theory, and there's just no competitor to it.
[01:00:54] Red: Okay, so from her perspective, she's kind of saying that, well, you know, a couple hundred years ago the steam engine was the most advanced thing around, so people just naturally compared the human brain to the steam engine. Now the computer is the most advanced thing around, so we kind of just make this assumption that the computer is like the brain, but in reality it's not really like that. Whereas everything you're saying about universality seems to suggest that it must be.
[01:01:30] Blue: This is a very different case than those. Yeah,
[01:01:34] Red: yeah, this
[01:01:35] Blue: is a hugely different case than those other examples, right?
[01:01:39] Red: Yeah,
[01:01:40] Blue: okay. Now, by the way, she could still be right, right? It could be that we end up with some completely new theory at some point. Okay, but I can always say that, and this is the thing people miss, right? Even people who call themselves critical rationalists miss the fact that you can always, always say, well, maybe the theory is wrong. That's just not what you do. You have to actually suggest a new theory. Okay, this gets me to where I do have at least a little bit of a criticism of what Deutsch says, though I still think he's mostly right. And here's the quote: for example, it is taken for granted by almost every authority that knowledge consists of justified, true beliefs, and that therefore an AGI's thinking must include some process during which it justifies some of its theories as true, or probable, while rejecting others as false or improbable. But an AGI programmer needs to know where the theories come from in the first place. The prevailing misconception is that by assuming that the future will be like the past, it can derive, or extrapolate, or generalize, theories from repeated experience by an alleged process called induction. But that is impossible. So why is this still conventional wisdom, that we get our theories by induction? Okay.
[01:02:58] Blue: Having now studied this, he's kind of right. But if I were to actually go talk to your average scientist, much less your average AGI, or sorry, AI researcher, most of them have never even heard of justified true belief. Right, so the idea that it is taken for granted is just not true. Now, I think what Deutsch would probably say to defend himself here is: well, they don't know about justified true belief, but they may have heard of induction, because that's a common enough term.
[01:03:33] Red: And
[01:03:34] Blue: And they may have even heard that science is based on induction, because that's a common enough idea, but they don't have any clue what it all means, right? There's just no interest; that's that crazy philosophical stuff, I'm just going to go do my science. And maybe they even consent to it because they heard it from a good source. Yeah, my buddy Joe, who's a scientist, who knows this stuff, he told me it was based on induction; that's good enough for me. Yeah, right.
[01:04:00] Red: Yeah,
[01:04:00] Blue: Okay, that's about as far as it goes. The idea that it is taken for granted is just not true. I think Deutsch would say, but they act like it's true. Well, this is maybe getting a little closer. Okay, if you were to go ask the average scientist, are there scientific theories that have been proven fact, some scientists would say yes, but I think there's a very large number of scientists who would say no, nothing's proven. So I have to take some exception to Deutsch here. The idea that this is some overwhelming, prevailing philosophy that is misdirecting science, much less AI, is not really entirely true. Now, having said that, the entire field of machine learning is pretty much entirely based on the philosophy of induction, and machine learning is definitely where all the action is happening in AI research right now.
[01:04:59] Red: Now
[01:04:59] Blue: Again, I want to emphasize that for the most part AI researchers aren't trying to build AGIs, and have no interest in doing so.
[01:05:07] Red: Yeah
[01:05:08] Blue: So the idea that it is misleading that field is not true
[01:05:12] Red: either
[01:05:13] Blue: Right, because the field for the most part is not even attempting to build AGI. They're just trying to take existing theories, which they may consider to be inductive, and in fact might be inductive, let me go that far, okay, depending on what you mean by induction, and they're trying to take them as far as they can. And there's nothing wrong with that. That's a completely, thoroughly legitimate thing for them to be doing. Okay, so what I'm kind of getting is that the article, or at least how I'm interpreting Deutsch's article, seems to suggest that these bad ideas about knowledge are holding up AI research, where kind of what you're saying is that that might be true to some degree, but really what AI researchers are doing is just something completely different. Yeah, that's right. They're not even... Right, right. Yeah. Now, what about the cases where they are trying, like OpenAI? Well, yeah, they're trying to go down the path of using existing neural network theories, which are quote-unquote inductive, and they're trying to do a lot of these things that are mistakes, just exactly like Deutsch is saying. So this is why Deutsch isn't entirely wrong, and this is something you have to get used to: a lot of these statements we say, things we explain, and you just have to get comfortable with the fact that human explanations are just full of truth and falseness all over the place.
[01:06:44] Blue: And a lot of the time it's just because we're trying to briefly explain something we have in our mind, and there's just no way to give the full details. And I'm going to give Deutsch a pass based on that, right? I do want to clarify that the idea that there is this overwhelming, you know, people take it for granted, and it's just totally leading the world of AI in the wrong direction, none of that's true. And yet he's still kind of getting at something that's really, at heart, true. Which is that we understand so little about how to go about doing AGI research that, for all intents and purposes, we're the guy looking for his keys underneath the lamppost, even though he didn't lose them there, because that's where all the light is, so he might as well. And you know what, that's not even a bad thing to try, right? If you've lost your keys, even if they weren't in the light, they may have bounced into the light, and that's the easiest place to search first. You may want to actually go into the light and look there first. Yeah, it's not as stupid as it sounds, right?
[01:07:45] Red: Yeah
[01:07:46] Blue: And yet... go ahead.
[01:07:49] Red: Well, I assume you've read Nick Bostrom's book Superintelligence.
[01:07:53] Blue: I haven’t
[01:07:54] Red: Oh, you haven't? Oh, okay. Well, kind of what I get from that book is that... you know, he goes through all these different ways, I can't really describe them off the top of my head, but that an AI might essentially turn into a kind of AGI. We might happen to just stumble across something that might do that. It's probably a pretty common idea. It is. Whereas kind of what I get from Deutsch is that that's not likely to happen. We'd have to really understand something about human consciousness and how we create knowledge. And, you know, his idea is that if you understand something, then you can program it. So we kind of have to understand how knowledge is created first, before we could hope to program it. Is that a fair summary?
[01:08:48] Blue: Yes. And again, let me both agree with Deutsch and criticize what he's saying.
[01:08:52] Red: Okay, okay,
[01:08:53] Blue: So, if you were to ever read Ray Kurzweil... have you read Ray Kurzweil?
[01:08:59] Red: Never heard of him. No.
[01:09:00] Blue: Oh I’m surprised you haven’t heard of him.
[01:09:02] Red: Okay.
[01:09:02] Blue: Well, even Deutsch mentions him. Okay,
[01:09:05] Red: right?
[01:09:05] Blue: I mean, he's kind of a big famous name. Okay, so he talks a lot about the singularity. He's kind of a singularity geek.
[01:09:13] Red: Okay,
[01:09:13] Blue: And he's run a number of companies. He's definitely a smart guy who knows a lot of stuff, and I would consider him a perfectly valid scientist, but he's really mostly known as a transhumanist booster, you know.
[01:09:28] Red: okay
[01:09:29] Blue: And, like most people who do these sorts of things, he gets so much wrong, right?
[01:09:34] Red: Okay
[01:09:35] Blue: And he's definitely been in the camp of "Oh, AI is making all this progress towards AGI." He's boosted this idea in his books that there's a dwindling field of things where, every time we discover something... first we say it would require real intelligence to play chess better than a master, and then once we actually have an AI that can beat the master, then that's no longer considered AI. Deutsch criticizes that view, and Kurzweil has definitely boosted that view.
[01:10:06] Red: Okay,
[01:10:07] Blue: Right. You just mentioned Bostrom, right? Where the idea is, well, someday we'll have complexity and it'll just sort of happen. There was a whole book written by a guy who believed that... not a scientist, a writer. The guy wrote Flash Forward. I forget his name now, but a famous science fiction author. He has a whole series of books where the internet accidentally becomes conscious, because consciousness is something that just naturally arises out of complexity. So there are definitely a lot of pseudo-philosophical ideas out there that are definitely in the mouths of this very small set of boosters like Nick Bostrom and Ray Kurzweil, and they say stuff like this. Although, let me be honest: the real reason they say stuff like this is because they don't have a clue what to say.
[01:10:57] Red: Right.
[01:10:58] Blue: We're in this dark area where nobody knows, right? So you might as well say crazy things, like maybe it's just a matter of it happening on its own, you know.
[01:11:08] Red: Is this kind of related to the scaling hypothesis?
[01:11:12] Blue: Yeah, the scaling hypothesis is very similar to that, right? The hypothesis comes from this group of people who don't know what to say, and so they're saying kind of crazy things.
[01:11:23] Red: Yeah. Um, just... as I understood it, if something achieves a certain level of complexity, then consciousness or AGI or whatever will just kind of happen.
[01:11:35] Unknown: Yeah.
[01:11:36] Red: Yeah,
[01:11:36] Blue: Right. Okay. What Deutsch is saying, and I completely agree with him, is: no, we've got better theories than that. If you were at least aware of critical rationalism as a theory, then you'd have a better idea of how knowledge is actually created. And that's true. And then you wouldn't say such crazy things as "the scaling hypothesis is going to explain it" or "it will just arise from complexity." Not because... I mean, who knows, maybe those are true, right? Maybe. Who knows? We don't have a theory; it could be true. But the reason why they're saying it is because they just don't have a clue what else to say. And we've got better things to say. We could talk about universal explainers, we could talk about the importance of explanations. Okay, there are so many things we could be talking about that would be better and would probably be very helpful in trying to narrow down the search for what the right algorithm is. Having said that, and having hung out with a lot of the fans of David Deutsch who are interested in AGI and would really like to find it, it's really obvious that knowing critical rationalism just does not seem to help that much. And in fact, one of the things that I've criticized Deutsch on, extensively on this podcast, is that Deutsch, based on this belief that critical rationalism was the key to understanding AGI, and that it's based on explanations, has defined creativity in terms of the ability to create explanations, which is one of the things he says in this article.
[01:13:11] Blue: And he has claimed that there are only two kinds of knowledge creation: that of biological evolution and that of human minds. And these are false ideas, or they're true only if you define these terms in really narrow ways, but that doesn't seem to be what he was intending, right? And when you read his books, you come along to various philosophical ideas that are themselves probably every bit as misleading as the inductive ideas that he's trying to put down. And really, the correct set of ideas is that knowledge creation is ubiquitous. This is the Popper-Campbell camp, right? We do trial and error all over the place. We do all sorts of things that create things that are legitimately called knowledge, and there are not just two sources. And creativity... yes, we can define creativity as the ability to create new explanations, and that's what Deutsch does. But then, for example, you've eliminated biological evolution as a source of creativity. Well, even Deutsch doesn't want to do that, so he'll openly admit, well, there's more than one definition for creativity.
[01:14:23] Red: right? Okay,
[01:14:24] Blue: But you know what? The moment you define creativity in your book as the ability to create explanations, every Deutschian fan I know really, honestly thinks that there is no other possible definition of creativity, even though Deutsch does not say that, and Deutsch has personally told me he does not believe that. Right? And I had a podcast episode where I talked about my chance to talk with him and what he told me. So the problem is that when you try to go about defining these things, Deutsch has come up with ways of looking at it that aren't necessarily the most helpful either. Okay, and they're just as misleading. And in trying to tease through all these things and come up with, okay, what's the real truth... okay, well, I can tell you a few things that I know at this point. I don't know much, right? This is a hard problem to solve. But I can tell you that AI is overwhelmingly built out of trial-and-error programs, and that's why most of them work in a tractable amount of time: because they do exactly what Popper's theory says. Now, that's not what Deutsch is trying to say in this article, and in fact it's at odds with what Deutsch is trying to say in this article. But AI is actually deeply rooted in Popper's theory. Nobody knows that. Certainly nobody in AI knows that, and nobody in critical rationalism knows that.
[01:15:51] Blue: Okay, but that is the truth, because overwhelmingly every algorithm has to do a conjecture-and-refutation process to be able to work in the first place, because that's just how knowledge is created. And AI algorithms work by doing that. That should be zero surprise, but it surprises everybody in all the camps: the Deutsch camp, the critical rationalist camp, and the AI camp. And it's hard for us to get past some of these even super simple ideas. Now, if I know that AI is actually based on conjecture and refutation, exactly like Popper said, then why can't they build AGI? Well, as it turns out, knowing Popper's epistemology is insufficient to understand AGI. What you really need to understand, my guess... okay, obviously I'm guessing... is how to create a universal conjecture engine. The types of conjecture engines we build today in AI are narrow. They're always within certain bounds: I'm going to try to solve chess, I'm going to try to come up with a conjecture for what the next best move would be, so I'm going to try every possible move, I'm going to look forward this many spaces, and I'm going to use a board evaluation algorithm that was built using machine learning, or AlphaZero, or something. You know, I've got this way I'm going about doing it, and yes, it's a conjecture-and-refutation process, but it's not in any way capable of coming up with a conjecture for "I want to do something else entirely." The search space is super narrow. Well, that's what you need to do to make AI work.
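The narrow conjecture-and-refutation loop Blue describes, generating candidate moves and letting an evaluation function knock them out, can be sketched roughly like this (a toy illustration, not from the episode; `evaluate` and `best_move` are hypothetical names, not anything from a real chess engine):

```python
# Toy sketch of a narrow conjecture-and-refutation search:
# enumerate candidate "moves" from a small, fixed space, score each
# with an evaluation function, and keep the survivor.

def evaluate(move):
    # Stand-in for a board-evaluation heuristic (e.g. one produced by
    # machine learning); here just a toy scoring function that peaks at 7.
    return -abs(move - 7)

def best_move(candidate_moves):
    best, best_score = None, float("-inf")
    for move in candidate_moves:            # conjecture: try this move
        score = evaluate(move)
        if score > best_score:              # refutation: a better move
            best, best_score = move, score  # displaces the old conjecture
    return best

print(best_move(range(10)))  # → 7, the move with the best score
```

What makes this "narrow" in Blue's sense is that the candidate space is handed to the loop up front; nothing in it can conjecture a different kind of move, let alone a different problem.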
[01:17:36] Blue: You have to keep the search space narrow. So how is it that humans go about doing this? Well, we're not doing anything magic, right? As per universality, we can't violate the limitations of computational universality. So we have this miraculous ability to come up with a search path that's narrow, where we come up with an explanation and it narrows and it creates these constraints, and then we're able to figure things out based on that. Or we might be wrong and we might just waste all our time. I love that: "universal conjecture engine." You thought of that? That's yours, made up right now, I think.
[01:18:18] Red: We might have made history on this podcast. Let's, uh...
[01:18:24] Blue: Okay. Once you realize that all... almost all AI algorithms... I actually have counterexamples. I discovered a number of counterexamples proving that the Campbell-Popper camp was wrong that all knowledge is created by conjecture and refutation. Strange as this may sound, I actually can show that there are algorithms that do exactly the same thing as these conjecture-refutation algorithms but don't use conjecture and refutation. And that's something that still needs to be explained. I've got some ideas now on how to explain it; I didn't have any idea a few years ago when I discovered it, but I've got some good ideas now about how to go about that. But the vast majority of them use conjecture-refutation processes. Okay, no surprise there at all, really, if you stop and think about it. The issue is the search space. When evolution or humans... the two open-ended sources of knowledge creation, and this is what Deutsch got wrong: he wanted to say they were the only sources of knowledge creation, when what they are is the only open-ended sources of knowledge creation. There's a difference. There's a really important difference between those two.
[01:19:35] Green: Um,
[01:19:36] Blue: It's got to do with the search space. So search algorithms are variation-and-selection algorithms. If I'm doing a search, trying out different things, trying multiple different things... well, that's obviously variation and selection. And if I'm doing variation and selection, then I'm searching. So the concept of search and the concept of trial and error, or variation and selection, appear to be equivalent. Now again, I have a couple of counterexamples. I'm not going to get into them right now; we're going to treat them as equivalent for the moment. When you realize that's the case, then you realize that what really is special about biological evolution and human knowledge creation is the fact that we can jump across the search space in startling ways that no algorithm currently can. That's called the problem of open-endedness. Okay, and even our best genetic algorithms do not solve that problem today. They're still so narrow. And in fact, we don't even know... I just explained it to you, so it seems like I understand what I'm saying, and maybe it even makes sense to you, but I don't understand it well enough to put it into an algorithm. And at the end of the day, we understand things precisely through algorithms. We understand everything through algorithms, which is one of the reasons why I don't expect we will someday have laws of physics that we don't define via math, and instead define via functional descriptions, or descriptions like the way we would try to handle ethics today, or something like that. I don't anticipate we will ever see laws of physics that are like that. Okay,
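Blue's point that search and variation-and-selection look equivalent can be illustrated with a minimal (1+1)-style evolutionary loop (a toy sketch, not anything from the episode; all names are invented):

```python
import random

# Toy (1+1)-style evolutionary search: "variation" mutates a candidate,
# "selection" keeps the fitter one, and the loop as a whole is plainly
# a search algorithm over bit strings.

random.seed(42)  # fixed seed so the sketch is reproducible
TARGET = [1, 0, 1, 1, 0, 0, 1, 0]

def fitness(candidate):
    # Count positions that already match the target.
    return sum(c == t for c, t in zip(candidate, TARGET))

def mutate(candidate):
    # Variation: flip one randomly chosen bit.
    i = random.randrange(len(candidate))
    variant = list(candidate)
    variant[i] ^= 1
    return variant

def evolve(steps=5000):
    current = [0] * len(TARGET)
    for _ in range(steps):
        variant = mutate(current)                 # variation
        if fitness(variant) >= fitness(current):  # selection
            current = variant
    return current

print(evolve())  # converges on TARGET in this tiny, fixed search space
```

Open-endedness, as Blue notes, is exactly what this sketch lacks: the search space is fixed at eight bits, and nothing in the loop can ever propose a candidate outside it.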
[01:21:16] Blue: If we did, it would be the same as saying we didn't really understand them.
[01:21:19] Red: Yeah,
[01:21:20] Blue: Okay, and it would violate the idea that we can actually understand everything.
[01:21:23] Red: Yeah,
[01:21:24] Blue: So, and by the way, if you go look at a paper on ethics... I don't really believe we have any ethical theories today that we actually understand. Like, any of them. We understand them at a kind of glossy, high level, maybe, but in terms of actually being able to understand them in any deep sense, we don't.
[01:21:45] Red: Would you say that's a fair way to at least briefly summarize the importance of Alan Turing's universality? That you could just say the world is comprehensible?
[01:22:01] Blue: There's a deep link between those two. Okay, there's a deep link between the fact that we can comprehend a thing and that we can turn things into algorithms, and that when we can't turn a thing into an algorithm, we don't fully comprehend it. However, they can't be exactly equivalent either, because it's possible to know how to put something into an algorithm without understanding it. So there must be more to it than that, right? It's a necessary but insufficient condition for understanding, right?
[01:22:32] Red: Okay
[01:22:32] Blue: I could easily teach you an algorithm to accomplish something, and you may be able to follow the steps without having any understanding of why it works. Right?
[01:22:43] Red: I see
[01:22:44] Blue: So it can't be that comprehension and algorithms are exactly the same thing. And yet, you can't actually show me counterexamples to what I'm saying, right? Where, if we have a field that we can turn into an algorithm, we have a deep understanding of that field, whereas if we have a field that we can't turn into an algorithm, then we don't have a deep understanding of that field. And this shocks people a little, right? We don't have a deep understanding of Darwinian evolution, because we can't turn it into an algorithm today. We have a number of understandings of it; the parts we do understand, we can turn into algorithms, and we can write algorithms that do something like it, right, but in a more narrow sort of way: the genetic algorithms I previously mentioned. But what we're really talking about here is a gap in our knowledge. Leslie Valiant has famously championed the idea of research in artificial evolution, where we try to come up with the algorithm that actually allows for open-ended evolution to take place. There are other researchers studying the problem of open-endedness, trying to make progress on that. We will someday understand it, and it will turn out to be something fairly simple, right? We just haven't figured out how to understand the problem deeply enough to come up with the right sorts of solutions, to where we really grasp it.
[01:24:06] Blue: This is the problem. And this is one of the things where I do agree with Deutsch. The fact that we frame machine learning in terms of induction... I don't think that's entirely wrong. I think that induction had a degree of verisimilitude to it, and therefore you can make quite a bit of progress in the field of machine learning by thinking of it as a form of induction. But my guess is that if you were to rethink machine learning in terms of Popper's epistemology, you would understand it at a deeper level, and it would eventually lead to a deeper research program. Now, you know, I don't know that for sure, right? But that'd be my guess, because I know that induction is an inferior form of epistemology. It's not entirely wrong, but it is a reduced understanding of how, say, science works, or how humans understand things, and critical rationalism is a much better way to look at it. What critical rationalism does not address is how we come up with our conjectures. It does not tell us anything today about how we come up with our conjectures. My guess is that once you know how we come up with our conjectures, you've solved the problem of AGI, and that's when you have your universal conjecture engine. Yes, then we'll understand. So we can ask questions like: what's an explanation? We don't have a good definition... we don't have a good algorithmic understanding of what an explanation is. Right.
[01:25:34] Blue: Um, and here's another example of where Deutsch is wrong that it's taken for granted: there's a whole branch of machine learning called explanation-based learning, where they've taken the idea of explanation seriously, tried to model the concept of an explanation, and then built machine learning algorithms based on explanations. And those have a lot of the qualities that Popper said should exist. So, for example, when you do explanation-based learning, a single example, a single rule, can, instead of being just a heuristic, which is how they do most things, like with neural nets, which are probabilistic heuristics... you can actually have a single explanation that has this reach: it goes out and it is true in all cases. That's not true for most machine learning algorithms. It actually has a lot of the qualities that we would expect of scientific explanations, and yet the end result does not solve the problem of AGI. And in fact, it's not even as good a form of machine learning today as regular neural nets that just use inductive, probabilistic approaches. Why is that? Right, that's a totally fair question. Why is it that one of the things we know is missing in AGI research, explanations... why did that field sputter out and not make progress? What was missing in it? We don't know, right? But that is an example of how AI researchers tried all sorts of different things, and guess what? The inductive ones, the ones we call inductive, those were the most productive so far. That's why they get all the love and attention.
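The contrast Blue draws, a probabilistic heuristic versus a single explanatory rule whose "reach" covers cases far outside the data, might be caricatured like this (a deliberately toy sketch, not a real explanation-based-learning system; all names are invented):

```python
# Toy contrast (invented for illustration): a heuristic fitted to
# examples vs. a single rule whose "reach" extends to every case,
# including ones far outside the training data.

train = [2, 4, 6, 8, 10]  # every training example happens to be even

def heuristic_is_even(n):
    # "Inductive" heuristic: everything we saw was small and even, so
    # guess even for anything in that range; it breaks outside the data.
    return n <= max(train)

def rule_is_even(n):
    # A single rule derived from an explanation of WHY the examples
    # are even; its reach extends to all integers.
    return n % 2 == 0

print(heuristic_is_even(1_000_000), rule_is_even(1_000_000))  # False True
```

The heuristic is serviceable inside its training range and fails silently outside it; the rule, because it encodes the reason, holds everywhere.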
[01:27:18] Blue: It's got nothing to do with a love of the philosophy of induction. It's just got to do with the fact that that's where we happened to make all the progress, which was specifically with neural nets, and so that's what's getting studied the most. They're looking in the light, exactly like you would expect. And it's not wrong for them to do that, because these are all really useful algorithms they're coming up with. And yet I completely agree with Deutsch that it's the wrong direction. You're not going to discover AGI in the realm of probabilistic, inductive ideas. It's just not the way the human brain works; the human brain clearly does more than that, and our being universal explainers is not rooted in it. Right, it's clearly something more that we're doing. And I think... maybe this would be a good place to stop this discussion. What I'm really trying to get at is, on the one hand, I think computational universality is... I can't even conceive how you would get around it. Okay, one of our mutual friends has been critical of it. One of the things she admits is, "I don't know what these future theories will look like." Well, of course you don't. You would have to imagine some sort of physics that doesn't follow math. I mean, you can't even conceive what the paper is going to look like, right? Because at the end of the day, humans actually understand things through math; we try to model things out with math when we're trying to get really precise.
[01:28:52] Red: Yeah,
[01:28:53] Blue: And to try to imagine a set of laws of physics that would fit the criteria she's looking for... you can't even conceive what it's going to look like, because it goes so far against the way humans actually think, right, and what we expect of our scientific theories. And so the proof is in the pudding, right? People who have these criticisms need to come up with those laws and then show them to me. Until then, I'm going to still assume universality holds, and I'll change my mind fast if they can actually do it. I just don't think they can, right? On the other hand, I don't think that the field of AI is primarily stuck because it's rooted in inductive thought. For one thing, I don't think that there's any sort of inductive thought that isn't ultimately deeply related to Karl Popper's theories. I used to believe that Popper's theories subsumed induction; I'm not sure I believe that anymore. I think there's a statistical effect that's got nothing to do with Popper's theories that's part of what we call induction today. But... I'm reading Deborah Mayo's book, which is an attempt to merge statistical theory with Popper's theories and to come up with something that's better than either of them. And she's doing it. Like, she's actually got an advance on critical rationalism that makes progress.
[01:30:23] Blue: And I'm kind of more in the Mayo camp these days because of that. I think she's actually coming up with a generalized version of critical rationalism that will still be critical rationalism, but that's better than what Popper had, because she's actually studied this more deeply than Popper was able to. He didn't understand statistical theory; it wasn't his strong suit. So she's making some progress. She's also doing a few things that I think are probably wrong. The book I'm reading is, I think, called Statistical Inference as Severe Testing. She's arguing that the concept of statistical inference really boils down to: did you severely test the process? And until you did, your inference is not valid. It's a really interesting way of looking at it. I didn't know how to merge Popper's theory with this inductive way that we think of machine learning today. That was where I approached Vaden trying to ask him, and his answer was really not, I think, on the right track. But Mayo's is. I think Mayo has hit the nail on the head on how we would actually go about doing this. So I'm still reading the book; I need to finish it still. And honestly, she's a hard read, so I wonder how much I'm understanding. I feel like I'm getting a lot out of the book, but I feel like she's making points that I sometimes just don't understand, and I'm unclear what her point is. She's very poetic in the way she writes.
[01:31:56] Blue: So sometimes she just doesn't make her conclusions as clear as she should have. But I think she's on the right track. I'm really convinced she's on the right track. And I think that is where I can completely agree with Deutsch: if we could take a look at how we would reimagine AI from the standpoint of critical rationalism, and how we would reimagine machine learning from the standpoint of critical rationalism, my guess is that would open up really interesting new ideas. Now, would those lead to AGI? Probably not on their own, but it's probably one of the components we need. I think there's more, though. Like I said, I think we need a better understanding of explanations. What's an explanation? We need a proper model of what an explanation is, a computational model of what an explanation is, and I don't know that we have that today. I think we don't have that today. Okay, I think explanation-based learning doesn't have a good model of what an explanation is. I don't think it's a good enough model. They're not entirely off base; they've got some interesting ideas. They basically model it as logic, which is what Popper did, by the way. He tried to model the idea of an explanation as logic. That's, like, the whole basis for his original form of critical rationalism: using deductive logic.
[01:33:14] Red: Yeah,
[01:33:14] Blue: So they're actually kind of on the right path, right? And this is one of the things where, again, I have to just agree a little with Deutsch. If you look at the scientific community, they may talk about science being induction. Machine learning may talk about machine learning being induction. Okay, they're not doing induction, because there's really no such thing as doing induction, right? What they're really doing is critical rationalism, even if they don't know it. And in fact, critical rationalism predates Popper by centuries. This is one of the things that I think people miss. And I've started to realize that the critical rationalist camp has a problem that I call their war on words. Okay, when I read David Miller... he wrote a book in defense of critical rationalism, and he spends so much time attacking various philosophical ideas that aren't really wrong if you understand them the way your average scientist would understand them. So, for example, he spends considerable time defending the claim that there are no good reasons. If what you mean by "there are no good reasons" is that there are no sure or justified reasons, which is really what Miller means, right, then sure, he's correct. But if I were to go to the average person, average scientist, average AI researcher, and just ask, "Are there good reasons for believing one thing over another?", they're going to say yes. And you know what? They're right. Okay, in fact, critical rationalism could be thought of as a description of what good reasons are.
[01:34:59] Blue: So when Miller says there are no good reasons, he's right, but only because he's defining "good reasons" in a super narrow way, at odds with how most people think of the term. And once you realize that critical rationalists are doing that on a regular basis... I could give you several other examples of the critical rationalists' war on words where they are just wrong. The idea that there's no such thing as a justification: of course there's such a thing as a justification, right? Critical rationalism could be thought of as the justification for why some theories are better than others. Okay. The fact that they're so bent out of shape over certain words, words that actually have multiple possible meanings, and they're bent out of shape over a certain philosophical understanding of those words when most people don't use those words in that way... okay, that's when you have to start to realize: there's nothing wrong with saying science is inductive. Popper had no problem, I could give you the quote, saying my theory, critical rationalism, could be thought of as induction, right? Induction is real. It actually works, but it works through critical rationalism, and this is the idea that critical rationalism subsumes induction. Induction is not a completely worthless theory. It never was. It was partially on the right track.
[01:36:21] Blue: And we wouldn't have critical rationalism today but for the fact that induction as a theory existed first, that it had problems, and that Karl Popper needed to solve those problems. And then he figured out why what we're calling induction works, and it turns out it's because it works through conjecture and refutation. Okay. Most critical rationalists today would never want to admit, like Popper did, that there's any connection between induction and critical rationalism, but the two are deeply connected. What it is, is that one's better than the other: induction is Newton's theory, critical rationalism is Einstein's theory. Okay, it's a better theory. It's a theory that explains more, gets us further, solves more problems, and also explains the success of induction. And... well, pre-Mayo, maybe it didn't fully explain the success of induction. I think that's why a lot of people accuse Popper of, no matter what he did, there always being this whiff of induction. It's probably true, right, that Popper did not solve all the problems surrounding how to merge his theory with inductive theory. I should probably also note: inductive theory is so vague that it's sometimes really unclear what people even mean by it.
[01:37:38] Green: So
[01:37:38] Blue: For example, this one comes from Miller, and I think Popper said it too: if all you mean by induction is that there is an element somewhere of the future and the past being the same... well, of course, science is based on finding universal theories, and universal theories by definition are the same in the past and in the future. So if that's all you mean by induction, then yes, science is inductive. Right? But that's a vacuous way of saying induction is true. Okay, there's something more going on than that, right? It's the fact that we use statistics to test things. This is coming from Mayo now; I didn't know this until I read Mayo. When they did the Eddington expedition and tried to measure whether Einstein's theory or Newton's theory was correct, with the eclipse and where the stars were located, they used statistical inference to figure it out, because the instruments were so imprecise that they had to do a bunch of measurements and then take a probability over them. Okay, that's an example of how statistical inference leaks into even the parts of science that we don't think of as being statistical. And that assumes certain things that we would consider inductive. It assumes that we're pulling from an identically distributed set of probabilities, for example. I'm saying all sorts of things I need to explain way better, and I would have to have a totally separate podcast to put my thoughts together in a more consistent way to make sense of it.
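The Eddington-style move Blue describes, pooling many imprecise measurements and letting statistics pick between two rival predictions, can be sketched like this (the measurement noise and sample size are invented for illustration; the two predicted deflection values, roughly 0.87 and 1.75 arcseconds, are the historical Newtonian and Einsteinian figures):

```python
import random
from math import sqrt
from statistics import mean, stdev

# Two rival predictions for starlight deflection at the Sun's limb,
# in arcseconds (historical values); everything else here is invented.
NEWTON_PREDICTION = 0.87
EINSTEIN_PREDICTION = 1.75

random.seed(0)  # fixed seed so the sketch is reproducible
true_value = 1.75
# Noisy repeated measurements from imprecise instruments.
measurements = [random.gauss(true_value, 0.30) for _ in range(50)]

estimate = mean(measurements)
standard_error = stdev(measurements) / sqrt(len(measurements))

# Favour whichever prediction lies closer to the pooled estimate.
closer = min((NEWTON_PREDICTION, EINSTEIN_PREDICTION),
             key=lambda p: abs(p - estimate))
print(f"estimate = {estimate:.2f} +/- {standard_error:.2f}, favours {closer}")
```

As Blue notes, the inference quietly assumes the measurements are drawn from the same (identically distributed) error distribution; that assumption is where something induction-like sneaks in.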
[01:39:13] Red: Well, it sounds interesting. Mayo... is that spelled M-a-y-o?
[01:39:19] Blue: I would have said "y-o," yeah.
[01:39:21] Red: okay, okay
[01:39:22] Blue: There are a lot of interesting questions here that haven't been entirely resolved. Popper took us so far, but he didn't take us all the way, and honestly, he was off track in a couple of places. I've criticized Popper, at least in the sense that his theory isn't really about conjecture and refutation, because it's only about refutation if you define refutation in a certain narrow way. You have to define it as refuting the theory plus the background knowledge, right? But nobody takes it that way, and tons of misunderstanding of Popper comes from the fact that he insisted on using the word "refutation." It probably wasn't the best term. He probably should have used the term "counterexample," for example. And you can get a much clearer understanding of Popper by thinking through counterexamples rather than refutations, because a counterexample doesn't carry with it the philosophical baggage of "this observation has to actually refute the theory." It just needs to show a problem with the way we're currently thinking about the theory. And that's a much more accurate way of thinking about Popper's epistemology.
[01:40:28] Red: Okay
[01:40:29] Blue: Um, that's my "Popper without refutation" that we did a podcast episode on. I don't know if you've gotten to those yet or not as you've gone through the backlog. And you know what? These are all such small tweaks, right? But that's what we do. We're not going to find that Mayo disproves Popper's epistemology in the sense that it completely overturns it and we go some other direction. She's going to overturn it by showing why it was successful, which really just means she shows it was mostly right. Right? And that's what Einstein did with Newton's theory. That's what, if there is a future computational theory, it's going to do: show why the current computational theory was right in every case we knew about. Right? I mean, it's got to explain the success of the theories. And this is why I don't worry so much about needing to find future physics to figure out what AGI is. It is going to turn out to be a software problem if the current theories are true. And we've got every reason to believe that they're true, right? They're false ultimately, but they're true for everything that we're currently discussing. Right? If that makes any sense.
[01:41:38] Unknown: Yeah
[01:41:39] Blue: Um, so, and this is why I love this article by Deutsch. And yes, I think ultimately it misled a little. I feel bad that a lot of the people who are trying to research AGI now, just laymen trying to research AGI using Popper's theories, have been misled by Deutsch on several points. And we need to pull people back and realize AI is actually doing useful stuff. Stop thinking of it as a completely ridiculous field, you know? It's interesting in its own right. It even has a link to AGI. At a minimum, AI, the original AI at least, was a series of very serious attempts to understand what the human brain was doing that failed. And Popper would tell you that the way you create knowledge is by understanding the problem, and the way you understand the problem is by trying to solve it and failing. In this sense, AI has made progress towards AGI. And once we understand AGI, we will look back someday and see how AI did lead to AGI by showing approaches that didn't work. And there will be that link, and we will see it. Right? The most obvious example here: the earliest form of AI was symbolic logic. Well, symbolic logic was, going way back to the time of Aristotle, an attempt to formalize how humans thought. That was wrong, although right in some cases, in many cases.
[01:43:15] Blue: So, on the one hand, it failed. Right? They tried. They thought that in a matter of months they were going to be building AGIs, because they were just going to put into the computer these inference machines that use deductive logic. And it seems so naive to us today, right? But why did it fail? Nobody's really asking that question. Right? This is where there's so much room for AGI research, if you start to rethink this in terms of Popper's epistemology. Popper's epistemology is rooted in deductive logic, so they weren't entirely on the wrong path. Right? They at least were partially correct to go down that path. And it's probably unfortunate that that path got abandoned, because it was at least partially going in the right direction. And yet something was missing. Something was missing from the way of doing it. One of the most obvious things missing from it was the fact that it's so hard to represent things in deductive logic. I've written a deductive logic program to try to explore this further in my own layman AGI research, right? So I wrote up a DPLL-style algorithm that works out implications of propositional logic statements. And it's not tractable, right? I mean, what humans are doing is clearly different. Humans don't really reason in straightforward deductive logic like this algorithm does. It's within our capacity to do so, but it isn't the way we normally go about things. So I think we would have to rethink this, right? We would have to say, okay, what are humans really doing? What are humans really doing, if it's not deductive logic?
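[Editor's note] The kind of program Bruce describes can be sketched with a minimal DPLL-style satisfiability checker. This is a reconstruction of the general technique, not his actual code: to check whether a set of propositional statements implies a query, you test whether the statements plus the negated query are unsatisfiable. The worst case is exponential in the number of variables, which is the intractability he mentions:

```python
from typing import FrozenSet, List

# A literal is a nonzero int: positive = the variable, negative = its negation.
Clause = FrozenSet[int]

def dpll(clauses: List[Clause]) -> bool:
    """Return True if the CNF formula is satisfiable.
    Basic DPLL: unit propagation plus branching (pure-literal
    elimination omitted for brevity)."""
    clauses = list(clauses)
    changed = True
    while changed:
        changed = False
        units = [next(iter(c)) for c in clauses if len(c) == 1]
        for lit in units:
            new = []
            for c in clauses:
                if lit in c:
                    continue            # clause satisfied, drop it
                if -lit in c:
                    c = c - {-lit}      # literal falsified, shrink clause
                    if not c:
                        return False    # empty clause: contradiction
                new.append(c)
            if new != clauses:
                changed = True
            clauses = new
    if not clauses:
        return True                     # all clauses satisfied
    # Branch: try some literal from the first clause both ways.
    lit = next(iter(clauses[0]))
    return (dpll(clauses + [frozenset([lit])]) or
            dpll(clauses + [frozenset([-lit])]))

def entails(kb: List[Clause], query: int) -> bool:
    """KB implies query iff KB plus the negated query is unsatisfiable."""
    return not dpll(kb + [frozenset([-query])])
```

For example, a knowledge base with "A implies B" and "A" (clauses `{-1, 2}` and `{1}`) entails `B` (variable 2), while "A implies B" alone does not. Scaling this to anything like a human-sized body of statements is where the combinatorial explosion bites.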
[01:45:04] Blue: And this led to an entire field of AI, a kind of field of logics, where they try to work out non-monotonic logic; they're trying to find logics that are closer to what humans actually do. Okay, and again, all of them have failed, right? I mean, they're interesting fields in their own right. I don't mean they failed entirely as fields, but they're not actually what humans are doing. Right? Humans are doing something more than this. But I think those are the right approaches: you try your best to solve the problem with what you know, and then you try to figure out what went wrong. And in this sense, AI is the study of AGI, even if it doesn't mean to be. Um, and this is why I went back and studied AI. This is why I said: even though I know AI is not the same as AGI, I can see that this is a serious attempt at AGI that failed. And I know from critical rationalism that that's how you learn the problem. That's how you understand the problem well enough to finally formulate a solution: you have to fail to succeed at solving it. And AI is absolutely a set of legitimate failed attempts to solve the problem of AGI.
[01:46:13] Red: Well, that's very fascinating, Bruce. I didn't get to throw in my favorite quote from Deutsch about AGI. I'd be curious what you think of it, briefly. Okay, um, now I'm almost sure this is from Deutsch, but I can't quite find the source. But he says, and this is just from memory: we'll know that computers have achieved AGI not when they can beat us at chess, but when they decide not to play chess and get caught up on the first 30 seasons of The Simpsons.
[01:46:57] Blue: Okay, I think that is a great quote right there. I think he's probably right. I think we've got good reason to believe that he's right. However, let me walk through why he says that, and where I think he might get it a little bit wrong.
[01:47:13] Red: Okay,
[01:47:13] Blue: And why he's still probably right anyway, you know? Okay. Um, so we know Deutsch has this idea that a universal explainer can't be programmed. He says this in the article that we're discussing. Okay, he says even the concept of programming is wrong. So he says: in any case, AGI cannot possibly be defined purely behaviorally. In the classic brain-in-a-vat thought experiment, the brain, when temporarily disconnected from its input and output channels, is thinking, feeling, creating explanations. It has all the cognitive attributes of an AGI. So the relevant attributes of an AGI program do not consist of the relationships between inputs and outputs. Okay, I don't have a clue what he just said there. I mean, I don't think it was a meaningful statement, to be perfectly honest. It may sound meaningful, but I don't think it was. Um, I could write an algorithm today that doesn't take an input but does lots of interesting things anyhow, right? And then I can write one that never creates an output, right? If I go and I play Skyrim, nobody doubts that's an algorithm. There's no ultimate output that needs to come out of playing Skyrim. Okay, that's just not how the game works. So this seems to be a misunderstanding of the concept of an algorithm, um, that Deutsch has propagated, unfortunately, and I've seen other fans of David Deutsch propagate it also. The idea of an algorithm is that we take a computation and we turn it into an input and an output for the sake of studying it.
[01:48:59] Blue: When I do that, I take something like a program like Skyrim, and I can rethink it as an algorithm, or rather a collection of algorithms. So I've got an algorithm that updates the world, and I've got an algorithm that updates the AIs in the world. I've got an algorithm that looks at where the player is and then outputs what that looks like on the screen for a single frame of the video game. Okay, we can describe the entire game in terms of algorithms, and yet, technically speaking, the entire game together isn't an algorithm, because it has no inputs and outputs, no ultimate goal it's working towards. The concept of an algorithm is really one that we use for the study of computation, and getting so caught up in what is or isn't an algorithm doesn't make sense, because everything can be turned into an algorithm just by the way we think of it. Um, and this is one of the things that I think a lot of people miss: it's a matter of how we study it. It's not that algorithms are some deep platonic concept. Right? It's a convenient way of looking at computation, and everything is an algorithm. So I promise you, an AGI will have a series of inputs and outputs, exactly the way he's saying it won't. So I don't think this was a meaningful statement, and I think this is one of the places where he's kind of gone off the rails in terms of what we can actually learn from critical rationalism mixed with computational
[01:50:31] Blue: theory. But I can kind of see what he's trying to get at. Okay, I can kind of see what he's trying to get at here, because he goes on to say: the upshot is that, unlike any functionality that has ever been programmed to date, this one can be achieved neither by a specification nor by a test of the outputs. What is needed is nothing less than a breakthrough in philosophy, a new epistemological theory that explains how brains create explanatory knowledge, and hence defines, in principle, without ever running them as programs, which algorithms possess that functionality and which do not. Okay. Again, I feel like this is a totally misleading statement, because everything's a program. So the idea that we won't be running them as programs, plus even just the statement that, unlike any functionality that has ever been programmed to date, it cannot be achieved by specification nor by a test of the outputs... that's not how we program things. It's got nothing to do with it, right? I don't even know where this is coming from. Okay, when I go program anything for my job, working as a programmer, I don't specify what the inputs and the outputs are supposed to be. That's just not how it's done. So I don't know where this is coming from, and yet I can see what he's trying to get at. Okay, he's trying to say: we program an algorithm to do one precise thing, and that's not what human beings are. Human beings have all sorts of ideas, and we're jumping all over the place. And that's what your quote is
[01:52:08] Red: Yeah
[01:52:08] Blue: from Deutsch, right? Okay. My guess is that's the nature of universal explanation. If we had an algorithm today, a program today, that was a universal explainer program, it might decide to go watch The Simpsons instead of playing chess like we wanted it to.
[01:52:26] Green: Yeah,
[01:52:27] Blue: And the very fact that it is a universal explainer means it can gather knowledge through conjecture and refutation beyond the narrow areas it was programmed for. Okay, that is going to be part of the nature of an AGI, because an AGI solves the problem of open-endedness. But it's got nothing to do with inputs. It's got nothing to do with outputs. It's got nothing to do with being or not being a program, or being or not being an algorithm. None of that has anything to do with it. That's all misleading. Okay, because it will be an algorithm, or we'll think of it as an algorithm anyhow. It will be a program. It's going to be all those things, and really, Deutsch wouldn't argue with me over any of those. He'd probably say all this is linguistic differences, right? I'm trying to make a point; I was doing it with the language I could. Here's the thing, though: Deutsch has taken that to a radical level. He has said, therefore, you can't actually program an AGI in any way. Well, we know that's not quite right, because our genes coerce us, right? They have a program for us that we can't ignore, but they use coercion, pleasure, pain, things like that, to try to get us to align our interests with theirs, and for the most part it works. Right? You can look at Gandhi as the counterexample, but there sure aren't a lot of counterexamples when it comes to hunger making you want to go eat, right?
[01:53:50] Blue: The reason why Gandhi can ignore it, with the right knowledge and with the right practice, is because he's a universal explainer. That much is true. Did I get that part right? Okay, but the reason why the vast majority of us aren't Gandhi is because you can program a universal explainer to be coerced by its actions and by what thoughts it has and things like that. I think Deutsch makes a fair point that maybe we shouldn't do that. We don't really know; we don't understand this concept well enough. Most of us don't have a lot of problem with the fact that we find sex pleasurable. That's actually a case where we're being, in a certain sense, coerced by our genes through pleasure, and we like it, so we don't mind it. Right? So it's unclear what type of AGI safety programs will ultimately be necessary, or if any will be necessary. We're too far out on that, and I think Deutsch is making assumptions here that we don't have enough knowledge to really know whether he's right or not. But he's kind of right. He's right that even when you have a really strong AGI safety program, like the one the genes have on us to make us want to eat, you still end up with Gandhis who can ignore it. Right? And so in that sense, he's right.
[01:55:17] Red: Yeah
[01:55:17] Blue: Morally speaking, should we? Right? Like, if I could write an AGI that only played chess, by making chess just so pleasurable that it's all it wants to do, should I? There's still an ethical question there that seems a little bad to me, right? And it may not even be possible.
[01:55:40] Green: I I don’t want to sideline us because we should really be wrapping things up, but there’s an interesting verner vinge storyline that Asks that question
[01:55:52] Blue: Yeah, which story?
[01:55:53] Green: It's in A Deepness in the Sky. A particular group of people figures out how to essentially infect people with autism. So they find their very best minds, and then they make them super obsessed over solving just that one particular problem. And it's how they handle the fact that they can't make machines do the automation they need: they make all their people into machines.
[01:56:26] Blue: And that's a good way to ask the question, because: are you okay with infecting people with autism, even if it benefits society because they solve problems better, you know, they become obsessed with solving certain types of societal problems? It doesn't sound very pleasant to me, right? So there's the ethical question, and there's the question of whether it can be done. Well, to some degree it can; we know that genes are able to do it through feelings. Okay, let me just read one more thing from the Deutsch article. He says: at one end of the scale there's the full philosophical problem of the nature of subjective sensations, qualia, which is intimately connected to the problem of AGI. He doesn't know that. Right? I mean, he literally does not know that. Our best theories on this subject, which admittedly aren't great theories, so he could turn out to be right, are that animals have qualia, that they experience things, and that they're not general intelligences. So our best scientific theories at the moment say that those two are not connected, that qualia actually evolved at an earlier stage and universal explanation came later, and the two have no connection at all. Now, is that true? I don't know.
[01:57:40] Blue: I mean, our theories of qualia are so bad at this point, and even our theories of animal qualia are so bad at this point, that you've got to be ready for an overturning. You know, it may turn out that he's right. But let me be very clear, and we haven't done an episode on this and we probably should: there's a ton of science on this that Deutsch doesn't know about, things that made predictions that got corroborated, and really interesting experiments that have been done that Deutsch has no knowledge of, that he just has not studied. And you can't just ignore it all, right? I mean, you're going to have to explain the success of those theories once you have your final theory. And at this point, when he says that they're intimately connected, that's just a wild guess on his part. Right? Even if he turns out to be right, it wasn't rooted in anything but a wild guess. It certainly was not based on any theory of universality. Um, and so, you know, we don't know. I mean, there are a lot of questions here that we're still trying to answer. And getting back to Cameo's point, um, that may be the only way we can program AGIs. Maybe you can only do it through qualia. Maybe you can only do it through pleasure and pain. And "should we" then becomes a separate ethical question. Maybe in some cases yes, and maybe in some cases
[01:59:03] Blue: no. We just don't know at this point. We're in an area where our knowledge is so small that any guess you make is going to be very unknowledgeable. It's just going to be a wild guess at this point. Anyhow, that was where I was trying to go with this. This article really inspired me to go back to school to study AI, to really believe that the problem of AGI is a tractable problem that really could be solved in our lifetimes. Right? If we get the right thinking and we understand the problem well enough, it will probably turn out to be a very simple problem to solve. Um, and it probably will require, exactly as Deutsch is saying, philosophical breakthroughs, and the starting point should be critical rationalism, exactly as Deutsch is saying. Although critical rationalism is insufficient. It's not enough. It's going to be an improved version of critical rationalism that we're ultimately going to need. One that understands better what a conjecture is, for example, what an explanation is; one that understands better the problem of how to merge induction and explanation, or statistics and explanation, I should probably say. I feel like this article has got way more right than wrong, and in terms of its basic premises it's totally smack on. But I also think it's got a few misleading points where, as I've studied this more deeply, there's just more to it than that. The world of scientists are great critical rationalists.
[02:00:24] Blue: They are the best critical rationalists. They are enormously better at critical rationalism than the professed critical rationalists are, as of today. And so it shouldn't be that surprising that science is doing a lot of things right, even in the area of trying to study AI.
[02:00:43] Red: Love it. Thank you, Bruce. That was wonderful.
[02:00:53] Blue: The Theory of Anything podcast could use your help. We have a small but loyal audience, and we'd like to get the word out about the podcast so others can enjoy it as well. To the best of our knowledge, we're the only podcast that covers all four strands of David Deutsch's philosophy, as well as other interesting subjects. If you're enjoying this podcast, please give us a five-star rating on Apple Podcasts. This can usually be done right inside your podcast player, or you can Google "the theory of anything podcast apple" or something like that. Some players have their own rating system, and giving us a five-star rating on any rating system would be helpful. If you enjoy a particular episode, please consider tweeting about us or linking to us on Facebook or other social media to help get the word out. If you are interested in financially supporting the podcast, we have two ways to do that. The first is via our podcast host site, Anchor. Just go to anchor.fm/four-strands (f-o-u-r dash s-t-r-a-n-d-s). There's a support button available that allows you to make recurring donations. If you want to make a one-time donation, go to our blog, which is four strands dot org. There is a donation button there that uses PayPal. Thank you.