Episode 12: Artificial Intelligence vs Artificial General Intelligence

  • Links to this episode: Spotify / Apple Podcasts
  • This transcript was generated with AI using PodcastTranscriptor.
  • Unofficial AI-generated transcripts. These may contain mistakes. Please check against the actual podcast.
  • Speakers are denoted as color names.

Transcript

[00:00:10]  Blue: All right, welcome to the Theory of Anything podcast. Today we're going to be talking about artificial general intelligence, and we have a number of guests here today. So let's go around and have everybody introduce themselves.

[00:00:22]  Red: So, Cameo.

[00:00:23]  Blue: Why don't you start? Of course, you're a regular on the show.

[00:00:25]  Red: Yeah, I'm a regular on the show, and I'm Cameo. I've known Bruce for years. I manage a soccer company in Salt Lake City, and I know very little about actual artificial intelligence, even though it's a word that gets thrown around a lot in technology right now.

[00:00:46]  Blue: All right. Well, this will be a great place to at least understand the difference between artificial intelligence and artificial general intelligence. Ella, why don't you give us an introduction?

[00:00:56]  Green: Hi, so I'm Ella Heppner. I'm a software engineer; I have a bachelor's in computer science from Virginia Tech. I've been interested in AI and AGI for as long as I can remember, and it was four or five years ago that I read David Deutsch's The Beginning of Infinity. Since then I've been sort of obsessed with coming up with a critical rationalist theory of AGI. That's how my interest in the subject started, and it's been one of the main topics on my mind for the last few years. Thanks. And Ella, what's the name of your theory? I'm currently calling it CTP theory, which is an acronym that doesn't really stand for anything anymore, but that's the name I'm sticking with because I have nothing better.

[00:01:39]  Blue: All right. So yeah, Ella has her own AGI theory that she's been actively working on, trying to solve problems for and improve. That was one of the reasons I invited her to the show for this topic. All right, Dennis, why don't you introduce yourself?

[00:01:55]  Orange: Yeah, hi. My name is Dennis. I'm a software engineer, and I've been fascinated with epistemology and AGI for years. It started in 2015, when I read The Beginning of Infinity by David Deutsch after he was on Sam Harris's podcast, and I've been fascinated with the topic ever since.

[00:02:13]  Blue: All right, thank you. And Dennis has a book called A Window on Intelligence, which is how I knew he was obsessed with AGI and why I invited him to the show. Tatchapal, why don't you introduce yourself?

[00:02:27]  Purple: Sure. So I'm Tatchapal. I'm currently a research assistant professor at TTIC, but I'm going to move to Michigan soon as an assistant professor there. I do research on algorithmic design, designing fast algorithms for big data, things like that. But I got interested in this, in Popperian epistemology, because of this book by David Deutsch as well. After that, the interest came back to me because of Dennis, actually. I listened to his podcast and became very interested.

[00:03:26]  Orange: that’s great

[00:03:27]  Purple: And I yeah and kind of join this group of you of you guys and then try to learn more about it

[00:03:35]  Orange: Well, it’s probably Bruce’s group

[00:03:37]  Purple: Yeah For the fourth friend

[00:03:40]  Blue: so yeah, probably not a lot of people who listen to the show even know about the four strands group, but that’s that’s a Email group and now also a blog where a bunch of people who are interested in these sorts of subjects just kind of converse So it’s a secret society Okay, so Cameo and at the beginning of this before we started the show She wanted us to make our first topic be what’s the difference between artificial intelligence versus artificial general intelligence? So that’s actually a really important distinction that needs to be made does anybody want to take a Take that topic and explain what they understand the difference to be between by the way I’ve got my own opinion on the subject. So I’ll share my opinion also Anyone want to grab that subject first?

[00:04:29]  Red: Can I start with a joke?

[00:04:31]  Blue: Sure, go for it, Cameo.

[00:04:34]  Red: Maybe not quite a joke. What's the difference between artificial intelligence and machine learning? Artificial intelligence is a bullet point on the PowerPoint presentation for your pitch deck, and machine learning is something that actually exists.

[00:05:04]  Blue: So does somebody want to take this topic first?

[00:05:05]  Green: I can go ahead and give my thoughts on it. There are a few different ways you can look at this, obviously, but the way that makes the most sense to me is that an artificial general intelligence is a program that is capable of creating any kind of knowledge that a human being would be capable of creating in their mind. Whereas artificial intelligence, or narrow AI as opposed to general AI, is just a program that has some knowledge created by humans sort of embedded into it. So machine learning algorithms like deep learning would be narrow AI, of course. Those algorithms have knowledge that humans have created about what kinds of patterns exist in data, and the AI's job is essentially to use computational resources, in addition to the human knowledge programmed into it, to find patterns in a really fast way. Whereas an AGI, a general artificial intelligence, would be able to do everything that a human would be able to do. It wouldn't just analyze data sets and find some pattern for you; it would be able to hold a conversation, make art, make scientific discoveries. It would be a person in the true sense of the word. It would be capable of doing any mental task that any human could do.

[00:06:28]  Blue: Yes, I think that's excellent, Ella. That's a good description. Anybody else want to add anything?

[00:06:34]  Orange: Okay, just my two cents on the topic. AI, or narrow AI, has a tell too, and that is that there are so many different versions of it and different applications of it. You have speech recognition AI, you have chess-playing AI, you have self-driving cars that are powered by AI, and none of those things can do what the others do. That gives it away that this is not a truly universal system, because an AGI could do all of those things and then some. That's not to say that it would want to do any of those things; that would be up to the AGI, whereas a self-driving car can just be coerced, if you will, to drive a car. But that's a separate issue. The fact that narrow AI has very narrow applications and isn't universal gives it away; that's a tell that we're not talking about AGI.

[00:07:27]  Purple: Right, thank you. Actually, I have a rule of thumb to separate them, and it's quite rough, but I would like to know your opinions on it as well. The rule of thumb is: if the program does not have the potential to create any goal you could want, then it's not AGI.

[00:07:53]  Blue: Oh interesting.

[00:07:54]  Green: Yeah, I agree with that. I think that'd be true. Any goal would be sort of part of a process of human cognition, and I think an AGI will be able to do anything that human cognition could do. So I'd agree that it could have any goal that a human could have.

[00:08:10]  Orange: I would agree, but I would just point out that the focus would be on the AGI's goals, not the goals of the human who created the AGI. You wouldn't force it to go down a certain path just because that's what you wanted it to do.

[00:08:27]  Blue: right,

[00:08:27]  Purple: right, right.

[00:08:27]  Blue: Yeah. So let me give my take on this. I'm going to try to actually use machine learning theory itself to explain what I see as the difference between AGI and artificial intelligence. First of all, artificial intelligence is an umbrella term. It includes things like minimax algorithms and search algorithms, things that in no sense at all are doing anything that could be called learning. Machine learning is a specific branch of artificial intelligence where the machine actually seems to be learning something. Specifically, it's trying to formulate a function that does something, and often it's a function that humans don't know how to write on their own. So it's trying to approximate a function that humans don't know how to write. The obvious example here would be a face recognition algorithm. If I were to ask even a human expert to write a program that would recognize my face, they would have a very hard time doing that, because they don't have a good theory for how to write an algorithm to do that. But they do know how to write a machine learning algorithm that can do it, one that takes examples of my face and examples of not my face and then learns to differentiate between them. So that's the difference between artificial intelligence and machine learning. Machine learning, then, is kind of the bigger name in artificial intelligence.

[00:09:52]  Blue: It has its own theory that's been developed, and Tom Mitchell worked a lot of this out. I had to read his textbook in one of my classes for my master's degree; my specialization is in machine learning. One of the things he points out is that all machine learning algorithms have something called an inductive bias. That word "inductive" is kind of important here. In fact, he even produces a proof that if you don't have an inductive bias, there is no learning at all; it's impossible for the algorithm to ever take examples and learn from them. An example of an inductive bias would be linear regression, where you have a bunch of data and you try to fit a line to it. So maybe on one axis you have how many square feet the house you're trying to sell has, and on another you have something about the location, or something like that.

[00:10:52]  Blue: They would even call them Um to come up with a line and then you would use that line to make predictions Well, what you’re really doing what an inductive bias really is Is it’s a hypothesis or it’s it’s a theory Or an explanation about what the data is going to look like the shape of the data and all machine learning is based on inductive biases basically and so They’re what they’re trying to do is they’re trying to force fit machine learning and they even call it induction Right, but we’ve talked about how inductions and cameo and I have talked about how induction Is uh is a bad epistemology. It’s it’s a bad theory of knowledge They’re force fitting machine learning into induction When really the theory that they start with is whatever the inductive bias is that’s the the whole basis Well, obviously if you’ve only got a single theory you’re working with you’re not a general learner, right? You’re you’re only going to be learning things very specific to whatever your inductive bias is this. I think fits into tatterpoles Point of view about goals that a real agi would add a minimum There’s probably more to it than this Would have to be able to select its own theories and be able to learn from multiple theories Not just be given a single theory by a human and then learn from that and and even worse It’s almost always about the shape of the data, which is obviously going to be very limited in what you can learn using such a theory

[00:12:17]  Purple: So I actually agree with this point. Even in theoretical research, the mainstream framework that people study, in what they call machine learning, is called PAC learning, and PAC learning is exactly this. It's a theory that studies the following: you start with a set of hypotheses, meaning some theories, which is fixed already, and then, given a set of data, you choose among this set of hypotheses the one that explains the data the best. But the thing that is very important is that this set of hypotheses is given to you. There is no theory of how to come up with this set of hypotheses in the first place, which is something that is missing from the theoretical research, I think.

[00:13:22]  Blue: Oh yeah, excellent. So just to clarify a few terms there, he's talking about a hypothesis space. Each machine learning algorithm, beyond having an inductive bias, also has a hypothesis space, a space of possible hypotheses it can learn from. Some machine learning algorithms actually have infinite hypothesis spaces; genetic programming, in theory, has an infinite, open hypothesis space. But most machine learning algorithms have a very limited hypothesis space; they can only learn hypotheses within that space. Then what they do is look at the observations, the features, the data, and eliminate hypotheses from the hypothesis space, and what you're left with is called the version space. So they're trying to narrow in, with the version space, on the best hypothesis within that space to explain the data; that's the lingo of machine learning. And I think Tatchapal is completely right here. In most cases, not only do you have an inductive bias, but you're up front being given the set of hypotheses you're allowed to choose from, which obviously is an injection of human creativity. The machine's not really doing that much, right? It's the human doing all the heavy lifting. And I think that's true of all machine learning today. I mean, we're getting better at letting the machine do more and more of the heavy lifting, but all of it is based on this really narrow concept of an inductive bias and a hypothesis space, and narrowing it into a specific version space.
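(To make the hypothesis-space and version-space lingo concrete, here is a toy sketch. The hypothesis space of threshold rules and the labeled data are both invented for illustration; "learning" here is nothing but eliminating hypotheses that contradict the observations.)

```python
# Sketch of the version-space idea: the hypothesis space is handed to the
# learner up front (a handful of threshold rules), and learning is just
# eliminating the hypotheses that disagree with the labeled data.

# Hypothesis space: "x is positive iff x >= t" for a fixed set of thresholds.
hypothesis_space = {t: (lambda x, t=t: x >= t) for t in [1, 3, 5, 7, 9]}

# Labeled observations (feature value, label), invented for the demo.
data = [(2, False), (4, True), (8, True)]

# The version space: hypotheses still consistent with every observation.
version_space = {
    t: h for t, h in hypothesis_space.items()
    if all(h(x) == label for x, label in data)
}

print(sorted(version_space))  # → [3]: only the t=3 rule survives
```

No matter how much data arrives, this learner can never conclude anything outside its five given rules; the creativity that chose those rules was the human's.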

[00:15:04]  Green: So, Bruce, you mentioned genetic programming as one of the few sort of subfields of machine learning that has an infinite hypothesis space. I've thought about genetic programming a lot before, and I just want to get other people's thoughts on this. It seems to me like genetic programming is probably the closest field in contemporary mainstream machine learning to actual AGI. That's not to say I think it will make its way to AGI; I'd be surprised, I still don't think the chances are very good, and I think there's still a lot of inductivism in that subfield. But the basic method there is sort of hypothetico-deductive rather than inductive. It's about guessing and then narrowing down the results, blind variation and selective retention, rather than trying to build in some inductive method. So I'm just curious whether other people here have the same intuition, that genetic programming is probably closer to what we need to be doing than other machine learning fields.

[00:16:07]  Orange: Yeah, I have the same intuition, or actually I would say it's more than an intuition, but I share your thinking here, because what I find promising about genetic programming is that it's actually about the evolution of computer programs, rather than the evolution of just parameters, say, as in genetic algorithms.

[00:16:26]  Green: And that's exactly why the hypothesis space is infinite: because it's Turing complete, it can represent any computable function.

[00:16:32]  Orange: Right, and I think one of the necessary conditions for something to be AGI is that it would, in principle, need to be able to evolve any program.

[00:16:42]  Blue: Right, agreed. So let me just say that, in reality, all machine learning programs are actually various forms of natural selection. They all can be cast into that light once you know how to get them out of the wording that makes them sound like inductivism; it's possible to take all of them and make them look instead like natural selection. So I suspect that's why all machine learning algorithms that work, work: because they're all doing natural selection of some sort. But most of them are only doing so in some sort of very limited hypothesis space. Genetic programming is special in that it's unlimited. However, I think we all know that in practice it's actually quite limited today. There's something missing in the way we do genetic programming that makes what it's able to actually search over just too small a space. In theory it could find out how to create a word processor, but in practice you would never expect that to happen.
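(A minimal genetic-programming sketch, invented for illustration: the candidates here are whole expression trees rather than parameters, so the space is open-ended in principle, even though a toy run like this only ever explores a tiny corner of it, which is exactly the practical limitation just described.)

```python
# Toy genetic programming: evolve arithmetic expression trees toward a
# target function by blind mutation plus selective retention of the
# lowest-error candidates. The target f(x) = x*x + 1 is arbitrary.
import random

random.seed(0)

OPS = {'+': lambda a, b: a + b, '*': lambda a, b: a * b}
TERMINALS = ['x', 0.0, 1.0, 2.0]

def random_tree(depth=3):
    if depth == 0 or random.random() < 0.3:
        return random.choice(TERMINALS)
    op = random.choice(list(OPS))
    return (op, random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, x):
    if tree == 'x':
        return x
    if isinstance(tree, (int, float)):
        return tree
    op, left, right = tree
    return OPS[op](evaluate(left, x), evaluate(right, x))

def error(tree):
    # Squared error against the target over a few sample points.
    return sum((evaluate(tree, x) - (x * x + 1)) ** 2 for x in range(-3, 4))

def mutate(tree):
    # Blind variation: replace a random subtree with a fresh random one.
    if random.random() < 0.3 or not isinstance(tree, tuple):
        return random_tree(2)
    op, left, right = tree
    if random.random() < 0.5:
        return (op, mutate(left), right)
    return (op, left, mutate(right))

population = [random_tree() for _ in range(50)]
for _ in range(100):
    population.sort(key=error)      # selective retention of the fittest
    survivors = population[:10]
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(40)]

best = min(population, key=error)
print(best, error(best))
```

Even though every computable function is representable in principle, a run like this only ever samples a vanishingly small region of program space, which is one way to see the gap Bruce is pointing at.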

[00:17:46]  Orange: I would be careful talking about natural selection in conjunction with machine learning generally. I don't think what you said is true. Natural selection is the non-random differential reproduction of replicators, and it's really an evolutionary phenomenon. So unless you have a system that has replicators, unless we're talking about an evolutionary algorithm or an evolutionary program specifically, I don't see how, for example, a facial recognition algorithm built on the gradient descent mechanism has anything to do with natural selection.

[00:18:29]  Blue: Yeah, I thought you might say that, Dennis. It's certainly not neo-Darwinism. But even a gradient descent algorithm is trying out lots of variants of things and then seeing which one actually works best according to some sort of selection criterion. That's what I really mean. It's certainly an incredibly narrow view of natural selection.

[00:18:52]  Orange: mean, I see in the in the widest possible sense I I see what you mean there if we if we if we’re talking about very loose definitions But the problem is with the gradient descent. There’s no Uh, there’s no replication going on. There’s no replicator. There is no replication going on. You’re

[00:19:08]  Blue: correct

[00:19:09]  Orange: So I I find that problematic because it might miss and no bad attention on your part But it might miss it might be misleading to some to say that natural selection is is happening an official recognition algorithm

[00:19:21]  Green: Yeah, so just um sort of to clarify the terminology here. I think that um, you know, um, Dennis is correct. You know, it isn’t it isn’t strictly speaking natural selection But bruce, you’re right that there there is a core of variation in selection It’s going on there and the way that I think about this is that the broadest um sort of process which you can create knowledge is And I use this phrase a lot Blind variation and selective retention And this is a phrase that I’m borrowing from um, donald cambell who is a contemporary of poppers who wrote a lot about Evolutionary epistemology and blind variation and selective retention. Basically just means, you know, you have some Set of objects you make variants of the objects and then you select You know a subset of the objects in some way and so neo darwinian evolution is one instance of that process But it’s not the only conceivable instance of that process And so uh, you I think what you’re trying to say in this terminology bruce is that some um Some or most machine learning algorithms involve some process of blind variation and selective retention Um and natural selection is also an instance of that process But they’re but you know natural selection is a different kind of instance Oh, no, I see I see the distinction

[00:20:29]  Blue: you’re making that makes sense. Yes. That is what I really mean.

[00:20:32]  Green: So just to kind of sum up: machine learning does involve blind variation and selective retention, but it doesn't really involve natural selection, is sort of the way I would put it.

[00:20:43]  Blue: Unless you understood the term natural selection to be equivalent to blind variation. Yeah, I agree. It's not doing something like genetic programming, which is probably closer, although, as we can probably talk about, genetic programming is probably missing something important also, compared to what neo-Darwinism is doing.

[00:21:02]  Orange: Just to play devil's advocate, I'm not sure I see how machine learning generally implements blind variation even. Can you give an example of that?

[00:21:13]  Blue: Yeah. So with gradient descent, what it's actually doing is, there's a number of next steps it could take, and all of those are slight variants of wherever you currently are in the algorithm. And then it has some way of measuring which of those variants to select between.

[00:21:29]  Orange: Yeah, okay, again, I would I would think that’s stretching the What what a variation is I would consider a variation just a copying error For for gradient descent to me it seems yes, there’s a you know, there’s a pool of Of points it could go to next But there’s a there’s a criterion of truth or an optimization criterion that it’s following mechanistically So there’s yeah,

[00:21:50]  Blue: I wouldn’t I wouldn’t consider the variation But again in the in the widest I think ls summed it up very nicely like in the widest sense you could you could Considered variation in selection.

[00:21:59]  Unknown: Yeah

[00:22:00]  Green: Yeah, so in blind variation selective retention the the method by which you’re selecting things matters a lot And so in the case of gradient descent the you know criterion by which you select You know you selectively retain The variations is very very simple. It’s just you know pick whichever has the you know lowest energy or whatever You know the highest utility um, and so it It is blind variation and selective retention. It’s just that the Selective retention part is very simple. And so it doesn’t produce anything like you know What neo neo Darwinism is capable of producing because the selection criterion there is much more nuanced Before

[00:22:35]  Blue: we lose cameo and Some of the audience entirely. Yeah, it’s gotten pretty

[00:22:39]  Orange: technical.

[00:22:40]  Blue: I should probably explain just quickly what we're talking about. So basically, if you were to look at the machine learning technique called artificial neural networks, right? You probably hear about this all the time as deep learning, or whatever the current buzzword is.

[00:22:55]  Red: Well, I've gone through your individualized course on machine learning.

[00:23:01]  Blue: Yeah. So the idea is that there's a technique that's very, very popular in machine learning called gradient descent, where basically you try to imagine the hypothesis space as peaks and valleys all over the place, and then you're trying to figure out how to get to the bottom of a valley or to the top of a peak. You're hoping that whichever peak or valley you get to, depending on whether you're trying to minimize or maximize, what it comes up with is a good enough predictor that you can use it in real life. That's actually how machine learning works for artificial neural networks, and other types of machine learning work that way too. And it's not very much like neo-Darwinian evolution, which is how knowledge got created in biology, right? So that would explain why using gradient descent to go do things is just a very poor way of going about it, compared to whatever it is that biology is actually doing, which was able to create all these different species that are adaptive to many different environments. Right. This is probably a good place to jump into the next question, since we've been drawing on this one: what is the jump to universality?
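(The valley-walking picture of gradient descent can be sketched in a few lines. The loss surface here is a simple parabola chosen only so the bottom of the valley is known in advance; real networks descend surfaces with millions of dimensions, but the step rule is the same idea.)

```python
# Sketch of the "walk downhill into the valley" picture of gradient
# descent: each step nudges the parameter a little in whichever
# direction lowers the loss.

def loss(w):
    return (w - 3.0) ** 2      # a valley with its bottom at w = 3

def grad(w):
    return 2.0 * (w - 3.0)     # slope of the surface at w

w = 0.0                        # start somewhere on the hillside
for _ in range(200):
    w -= 0.1 * grad(w)         # step a short way downhill

print(round(w, 4))  # → 3.0, the bottom of the valley
```

Each update only ever looks at the local slope, which is the very simple selection criterion discussed above: among the nearby variants of `w`, keep the one that is lower on the surface.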

[00:24:19]  Blue: so I just gave an example of that that and this is one that we don’t understand really well, but evolution Using dna and you know, the whatever processes that we currently have It can create all sorts of adaptive species and it can create lions and fish and and You know jellyfish and bacteria and it’s it seems to be really highly open -ended in terms of what it can create Whereas machine learning seems to be extremely narrow in what it can do, right? And in what it’s able able to do So the one of the one of the hypotheses David Deutsch comes up with is that Biological evolution has made some sort of jump to universality although he doesn’t define what that jump is because he’s not sure So let’s talk specifically about Universality and the jump to universality. What is universality? Why is why is it relevant to a gi? And what is the jump to universality? Is anyone want to take that question?

[00:25:18]  Green: Um, I I suppose I can go ahead and give my thoughts if nobody else wants to go first. Um So basically universality is when you have some Well, so say you’re working in some domain and there are certain things within that domain Which um, you know only certain types of processes can get at A jump to universality is when one process goes from, you know being able to handle Only some small subset of the objects being able to handle all of the objects in the set In an infinite set generally is the context in which the word is used And the reason that this matters and why this is something worth studying and not just some arbitrary You know hypothetical is that whenever there’s a system, which you know can either have finite reach or sort of a you know limited set of things that it can do Versus an infinite set of things that it can do The change to being able to do an infinite things must happen in an infinite leap, you know You must you must at some point just do one thing that causes infinite progress And that is sort of the jump to universality It’s uh, you know moves the system from being non -universal only having a few things that it can explain or be used for To uh being universal in the sense that it can encapsulate everything And so jumps of universality are the point at which that transition is made probably speaking

[00:26:39]  Blue: Thank you. Anyone else want to add to that?

[00:26:41]  Orange: Yeah, a good example of that is the printing press The Gutenberg convention.

[00:26:45]  Unknown: Excellent.

[00:26:45]  Blue: Yeah

[00:26:46]  Orange: So basically the way that people printed books before was they either copied them by hand Which was incredibly laborious and slow and and costly Or they had they devised these printing plates. So for each page in the book you would like You would you would create I don’t I forget if it was wooden or if it was metal But you would create a mirror image of the page And then you would ink it and you would press it onto a new page And the problem with that approach is it only works for the books for which you have created printing plates There’s it’s very rigid and it’s not customizable so When Gutenberg invented the printing press And I don’t actually know if he did I mean sorry He did invent he invented a particular printing press that was powered by movable type The focus should be on movable type not printing press So movable type is customizable because now you’ve reduced the process of printing to the smallest unit That is universal within that system, which is the letter Because books are all made of letters and every word is made of letters So if you just rearrange letters now you can print any book. So To illustrate what Ella was saying with this example basically before movable type you had only Very narrow applications of printing plates for specific books and only the books that you had Already made printing plates for if you had made printing plates for the bible You couldn’t then go and print. I don’t know the beginning of infinity You would have to create the the printing plates for the new book first

[00:28:24]  Orange: But with movable type you can simply arrange letter rearrange letters And not only can you now print the beginning of infinity, but you could print any book whatsoever You could even print like the the movable type system is already able to print books that haven’t been written yet so that’s I don’t know if Gutenberg was actually after that kind of universality David Deutsch talks about how many of the Inventions that ended up being universal weren’t actually Created for the sake of being universal. It was kind of accidental That might have been the case with the printing press as well or the movable type printing press as well It may have just been the case that Gutenberg thought that this was cheaper, which it was and faster, which it was too but Whether it was a skull or not He happened to make something universal and it was universal in the domain of printing any Printable book and you could you could narrow it down a little more you could say well only books that You know that contain words and letters he couldn’t print You can’t move you can’t use movable letters to print images saying but Within that domain of printing books that were based on letters. He could not print any book

[00:29:31]  Blue: All right. Thank you and great example. I think that’s a an example that everyone can relate to now Not all of you have been part of this podcast cameo and I have been doing it for a while The episode that would come out before this one Was actually a discussion about computational universality and I was explaining computational theory So that the jumped universality to to relate to that Was that you had certain types of computers that Were limited in what types of algorithms they could create and then suddenly you have the Turing machine Which is this jumped universality where suddenly every possible Every possible algorithm at least that is allowed by classical laws of physics is now possible on that Turing machine And there are no other machines Out there that can run algorithms that the Turing machine can’t So that’s how we kind of relate this to Back to what we’ve been discussing previously on this podcast, which is the the jumped universality for computers But could somebody maybe explain to me now the relationship between the jump to computational universality and algorithmic universality and Really universality of simulation or virtual reality or something along those lines Does anybody have comments on that or? Um, I can try okay

[00:30:51]  Orange: I actually when I looked over the notes in your email for this podcast and I saw algorithmic universality I wasn’t quite sure what you’re referring to but I can make a guess and you can tell me if i’m wrong I’m guessing what you’re referring to here is actually I mean, maybe it’s the same thing as computational universality.

[00:31:03]  Unknown: I mean an algorithm is just something that can run on a Turing machine.

[00:31:06]  Blue: Yeah

[00:31:07]  Orange: um This Turing machine is computational universal when it’s a universal Turing machine Which means it can just run any algorithm any other Turing machine could run too So there’s no qualitative difference anymore between that universal Turing machine and any other Turing machine.

[00:31:21]  Blue: Yeah
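(The universal machine Dennis describes can be made concrete with a toy sketch. The interpreter below is illustrative only, not something from the episode: it runs any transition table you hand it, which is the sense in which one fixed machine can run "any algorithm any other Turing machine could run.")

```python
# Minimal Turing machine interpreter (illustrative sketch).
# A machine is just a transition table:
#   (state, symbol) -> (new_state, symbol_to_write, head_move)
# One fixed interpreter like this can run ANY such table -- that is
# the sense in which a universal machine runs any algorithm.

def run(table, tape, state="start", head=0, max_steps=10_000):
    cells = dict(enumerate(tape))          # sparse tape; blank cell = "_"
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, "_")
        state, write, move = table[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip("_")

# Example table: a machine that flips every bit until it hits a blank.
flip = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt", "_", "R"),
}

print(run(flip, "1011"))  # -> 0100
```

Swapping in a different table changes the program without touching the interpreter, which is the qualitative point being made here.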

[00:31:22]  Orange: Now, to get to what it means to be a universal simulator: basically, you say, well, the laws of physics are computable, and because everything that happens in the universe follows the laws of physics, that means everything that happens in the universe is also computable. And so therefore any process that happens in the universe, I like to say, if it happens out there anywhere in the universe, you could simulate it on a computer. And by simulate, David Deutsch points this out in one of his interviews, simulate does not mean that it's sort of fake or not genuine; it still means the information processing is the same. Um, so you could, for example, simulate the solar system on a computer, and it really is a simulation of the solar system. It's approximately the solar system, because it's not the same size and, you know, there may be some quirks that you didn't take into account, but it might be a pretty good approximation. And so now what you have done is you've simulated the solar system on a computer, which is really an amazing feat if you think about it. I like to think it is. And so universal simulation just means, well, again, you've built a simulator, a computer, that could simulate anything all the other simulators could simulate, and it also means that it could simulate anything that happens in the universe, because everything that happens in the universe is governed by the laws of physics, which are computable. Um, so this deep connection between computation and simulation and explanation of whatever happens in the universe is, I think, explained very well by David Deutsch in The Fabric of Reality.

[00:33:01]  Blue: All right, thank you. Actually, that was exactly what I was hoping someone would say, Dennis. That was better than I could have described it. Let's talk just a minute about this, because I actually do think this is something people tend to get confused about.

[00:33:15]  Orange: Can I just say one thing? I do want to add a grain of salt, because this is not really my wheelhouse. When I say things like "the laws of physics are computable, and everything follows the laws of physics, and so therefore everything else is computable too," I know this because I've read it in David Deutsch's books. But it's not my wheelhouse, and I would want people to take it with a grain of salt; it's possible that I missed something or that David means something else by it.

[00:33:42]  Blue: Oh, excellent. Okay, good. Fair point. So I've noticed that people tend to get confused as to what a simulation is, and you explained that well, but let me throw out there some of the confusion I've seen in the past from talking with people. I was talking to a friend who said that a simulation is not the same as reality, and then he used the example that a virtual chair is not a real chair; it would be silly to think a virtual chair is a real chair. And I've also heard as an example that if you simulate a tornado inside a computer, the chips inside the computer don't get torn up by a real tornado. It seems to me that this view of simulation is misunderstanding something important. It's not that what they're saying is wrong, but they're trying to equate things that don't necessarily make sense, right? So Douglas Hofstadter points out: well, yes, of course, if you simulate a tornado in a computer, it doesn't tear up the chips; that's the wrong level of emergence. But if you had built simulated buildings in the simulation with the tornado, it would tear those up. And so it's easy to get kind of confused about what we mean by simulation. And then Hofstadter goes one further, and he says a simulation of intelligence is just intelligence, right? They're the same thing. There's no difference at this point: if you have a simulated intelligence that goes and writes a math paper, it's a math paper. It's not a simulated math paper, right?

[00:35:18]  Blue: So there are some things where it makes sense to differentiate between simulation and reality, but there are other things where it doesn't make sense to, and there a simulation is reality. And I think, Dennis, that was really kind of what you were getting at: simulations aren't fake. They're an actual thing that exists, and it's important that they do, and they've got a lot to do with how we understand the world.

[00:35:41]  Orange: That's right. I think there can be a sort of instrumentalism that can sneak in when it comes to simulation, where people say, well, they're just useful models or something, but they don't actually tell us anything about reality, which I think is wrong. Um, you could fix the specific example with the tornado, I think, by saying: well, yes, the computer doesn't get torn up by the simulation of the tornado that it runs, but on that same computer you could run a simulation of a tornado destroying a computer, right? And then it will work again. And if it's a good simulation, it would tell you how it would destroy the computer and why.

The Theory of Anything podcast could use your help. We have a small but loyal audience, and we'd like to get the word out about the podcast so others can enjoy it as well. To the best of our knowledge, we're the only podcast that covers all four strands of David Deutsch's philosophy, as well as other interesting subjects. If you're enjoying this podcast, please give us a five-star rating on Apple Podcasts. This can usually be done right inside your podcast player, or you can google "the theory of anything podcast apple" or something like that. Some players have their own rating system, and giving us a five-star rating on any rating system would be helpful. If you enjoy a particular episode, please consider tweeting about us or linking to us on Facebook or other social media to help get the word out. If you are interested in financially supporting the podcast, we have two ways to do that.

[00:37:06]  Orange: The first is via our podcast host site, Anchor. Just go to anchor.fm slash four-strands, that's f-o-u-r, dash, s-t-r-a-n-d-s. There's a support button available that allows you to do recurring donations. If you want to make a one-time donation, go to our blog, which is four strands dot org. There is a donation button there that uses PayPal.

[00:37:31]  Blue: Thank you

