Episode 121: Beliefs
- Links to this episode: Spotify / Apple Podcasts
- This transcript was generated with AI using PodcastTranscriptor.
- Unofficial AI-generated transcripts. These may contain mistakes. Please check against the actual podcast.
- Speakers are denoted as color names.
Transcript
[00:00:00] Blue: Hello out there. This week on the theory of anything podcast, Bruce takes a deep dive into beliefs. Do humans need beliefs? Are beliefs dangerous? What is the critical rationalist position on beliefs? And can we really, truly, even realistically live without them? I enjoyed listening to Bruce here, and I hope someone out there does too.
[00:00:33] Red: Welcome to the theory of anything podcast. Hey, Peter. Hey, Bruce. How are you today? Good. Today, we're going to do the nature of beliefs. I don't know, the pros and cons of beliefs, why beliefs can be good and dangerous, something along those lines, kind of jumping off of what we talked about in the last episode we recorded. I don't know if they'll be aired in the same order we recorded them. But in the last episode we recorded, we talked quite a bit about beliefs and the fact that Popper did not believe in beliefs. Deutsch does not believe in beliefs. And I took a stance against that. Then I got thinking, this is such a nuanced thing. Taking the stance that I believe in beliefs, which is something I said in the past podcast, that stance is such a stupid simplification of a much more nuanced view.
[00:01:30] Blue: Well, it’s a very ambiguous word, don’t you think? It is, yeah.
[00:01:36] Red: So I wanted to, I started preparing for an episode to continue talking about Michael Strevens's criticisms of Popper. And somewhere along the line, I suddenly realized I was really doing an episode on the nature of beliefs. And decided to just do that instead.
[00:01:55] Blue: Okay.
[00:01:56] Red: And I just pulled a whole lot of ideas around this. And there’s no, I can just say up front, the theme overall is beliefs can be good, that they’re motivating factors, but that beliefs are inherently dangerous. And that there’s kind of good reasons why Popper and Deutsch, even if I ultimately disagreed with the way they framed it, there are good reasons why they are quote unquote against beliefs. And I kind of wanted to try to draw a more nuanced picture, at least what I think is a more nuanced picture that gets into how it can be a good thing, how it can be a bad thing. And try to seek maybe some neutral ground that doesn’t deny beliefs, doesn’t try to get rid of them, but tries to find a way to make them more useful.
[00:02:56] Blue: Seems like we’re kind of stuck with them, aren’t we? I think that’s the way it is.
[00:03:01] Red: And that's one of the main things I've been trying to emphasize: we've got them. We've got beliefs. There's just nothing we can do about that. And people who deny they have beliefs, they're just wrong. I don't know what else to say. Right? You can see they're wrong. You can spend 10 minutes talking to them and immediately frankly irrational beliefs start to arise, that, if they were actually rational machines or Vulcans or something, they would see fairly quickly: this is really nothing but a belief, right?
[00:03:39] Blue: Well, your phrase, the faith-based nature of reality, is something that's really stuck with me. I mean, I assume this kind of relates to what you're saying about beliefs.
[00:03:53] Red: Yes, it does. The fact is that there are advantages to having beliefs, but yeah, they're dangerous.
[00:04:02] Blue: Sure. Yeah, I think that’s a pretty good way to put it.
[00:04:06] Red: Okay. So quick recap. This would be a recap from the previous episode we recorded, which may or may not be the one that aired just before this episode, but just to kind of get us back into the swing of things. In previously recorded episodes, though not necessarily aired in order, we talked about Strevens's book, The Knowledge Machine, and we even put Popper on trial using Strevens's arguments against Popper's epistemology. This led to a discussion about beliefs, and in particular Popper's, and later Deutsch's, stance that we have no need for beliefs, particularly in science. In Popper's case, he maybe holds that more to science. Deutsch makes that maybe more general. In fact, Popper said perhaps we have some needed beliefs in ethics, and Popper also famously said that you have to simply choose to have faith in rationality. That seems like a belief, an ethical belief, right? Deutsch seems to have taken that further, claiming that belief is really only necessary for dogmatic religious beliefs. Now, I took a stance against that view and explained why I thought it was mostly a dangerous view. Human beings do have beliefs, and if you try to tell them to, quote, not have beliefs, what you're going to get is an inaccurate and premature declaration of fill-in-the-blank (sole surviving theories, best theories, most probable theories, depending on your community and your choice of language) that always happen to match your preexisting beliefs. Or to put it another way, the problem with not believing in beliefs is that you might then sincerely think you don't have beliefs and mistake the beliefs that you do, in fact, have for a careful rational analysis, which they aren't.
[00:05:53] Blue: Yeah.
[00:05:54] Red: So what I'm arguing is that this is the outcome we would expect if we decide to insist on denigrating beliefs, which is why I prefer not to do that. This is a persistent problem, as I've discussed in past episodes and as we're going to discuss again in this episode, within the crit-rat community, which does not believe in beliefs, but also within the Bayesian community, which does believe in beliefs, right, at least in the Bayesian sense.
[00:06:22] Blue: Well, I guess I should know, but what is the critical rationalist position on beliefs?
[00:06:33] Red: Well, so both Popper and Deutsch have said that we don't have need for beliefs, and if you listen to, like, Brett Hall or someone who's kind of a down-the-line Deutschian, that's exactly what they say. We've just got no need for beliefs. I kind of mentioned this in the previous episode we recorded. There is a way to put it that I wouldn't necessarily disagree with, right? I disagree with most framings of it, so I disagree with the idea that you just don't have beliefs or you shouldn't have beliefs. I just think this is completely unrealistic, but you could frame it something like this. You could frame it like, well, the word belief is too vague, so I prefer to use words that are more precise and not refer to beliefs. Now, there's nothing wrong with that, particularly if you're not holding other people accountable to your choice of words. Like, one of the problems I have with kind of the general crit-rat approach is that people will use the word belief and they'll jump on them, right? There's no need for beliefs, you know, something along those lines. Like, that's so absurd, right? This other point of view that I've had expressed to me by other members of the crit-rat community, they don't jump on you for using the word belief. They understand that the word belief can just mean something like, this is the theory I think is the best, right? Because, of course, that's exactly what the word can mean, right? But they'll take the stance that it also might refer to religious dogmatism, and they'll point out that it's just too general, right? So, for the sake of clarity, they don't jump down other people's throats for using the word belief, but they try to remove that word from their own language.
[00:08:20] Blue: If
[00:08:20] Red: that’s all you’re doing, I’ve got no problem with it, right? My guess is that you’ll end up using something that means more or less beliefs anyhow, because I think it’s such a core way of the way human beings think about things. I just like where you tried the kind of the Deuchy and Brett Haulian version of critical rationalism tries to remove the word probability from language, which I’ve got the exact same problem with. And then they try to replace it with the word plausibility, which go look those words up. They’re synonyms, right? It’s a little unclear why it’s such an advantage to remove one word over the other. If it’s just them, they feel personally, the connotations are a little better. That totally strikes me as okay. But the moment they start to hold other people’s language to it, that really crosses a line in my opinion, a critical rationalist line into nonsense. So I
[00:09:15] Blue: want
[00:09:17] Red: to make that distinction. I can understand a certain framing of the I don’t believe and believes makes actual some good sense to me, depending on if you’re just seeing it as I’m just trying to be more clear with my language and my own way of thinking versus if you’re trying to take an essentialist stance that the word belief is a useless word or something along those lines, which is ridiculous.
[00:09:41] Blue: Well, I got a quote here, Karl Popper from The Open Society. I do not wish to conceal the fact that I believe in reason, but I do not base my belief on the claim that it can be justified by reason. That would be to base it upon itself. My rationalism is not self-contained, but rests upon an irrational faith in the attitude of reasonableness. So he believed in reason. He was honest enough to recognize that this was a faith-based belief. Right.
[00:10:26] Red: That's exactly right, which is why, I think, he makes an exception. Like, I'm reading in a little bit here, but I interpret him as making an exception for ethics, where belief in ethics is OK. Why does Popper do it? I think it's precisely because he accepted that his belief in reason wasn't a self-justified thing that he had reasoned to, right? There was a faith-based leap, a belief. He really likens it to a religious belief, right? Like, he does.
[00:10:58] Blue: One more quote, one more quote. OK, I got to just read one more quote. That’s just too perfect. Conjectures and refutations. I do not argue that attitude is right. I only say that it seems to me the right attitude to adopt. I do not think that this attitude can be established by argument or by proof, yet although it cannot be justified by argument, it can be adopted, and it is the only attitude which, in my opinion, makes possible the kind of mutual criticism that is the heart of reason. Thus my rationalism rests on an irrational faith in the value of reason.
[00:11:36] Red: Excellent. How’d you find that so quickly, by the way? I have to ask.
[00:11:39] Blue: Oh, ChatGPT, of course. How would ChatGPT even know that exact quote? I just asked it for quotes from Karl Popper on faith and reason.
[00:11:49] Red: Oh, it must have searched the web. It must have searched the web. Oh, yeah. I got it. OK. By the way, NotebookLM… And there's about five… Sorry, but I just want to tell you there's about five more saying exactly the same thing.
[00:12:03] Blue: Yes.
[00:12:04] Red: Very,
[00:12:04] Blue: very clear on this. So
[00:12:06] Red: I use Notebook LLM to find quotes from Popper a lot, and I also have my AI Carl Popper, where I can actually have it semantic search across every single paragraph of Popper’s works. But it’s not necessarily an easy thing to search across Popper. OK. So what I was saying was this is a persistent problem with both the CritRack communities and the Bayesian communities. Really, both rationalist communities have serious problems with getting stuck in epistemological sinks around various beliefs. OK. Here’s the thing. When I say that, it feels a little unfair. Why? Because this is a problem of humanity in general. Like, this is just the way we think, right? Like… And this is what we’re going to get into. Like, what is it that’s going on in our brains? Like, of course, I want to understand that as an AGI nerd, right? Is how is it that humans think? So… And we… Humanity spent 200,000 years stuck in various beliefs. OK. So in a sense, these rationalist communities are just being human, whatever that means. I don’t know.
[00:13:09] Blue: But it somehow seems more honest to just admit that you have beliefs. It does. It does.
[00:13:15] Red: Yeah. So I'm in favor of it for exactly that reason, right? Instead of trying to hide that you have beliefs on the grounds that it's something you should get rid of, like, I think if you can recognize… This is what I'm going to argue today. This is like my argument in a nutshell. If you can recognize you do have beliefs, then you may be in a position to hold them looser, right? To be able to say, OK, this is my belief, but I can see it's not a best theory. If you have this idea that you aren't supposed to have beliefs, then your beliefs are all going to be quote-unquote best theories and you're going to have the truth-is-manifest error, right? Where you think everybody who doesn't see reason… it's just obvious, and they're wrong, right? And they're being evil. And I think you can't get around that problem without a kind of open admission that beliefs are just a part of who we are. So now, the reason why this all came up in the context of the discussion about Strevens and his criticisms of Popper was because this is an area where I think Michael Strevens in The Knowledge Machine has improved on Popper, OK, at least in this one particular point. So I think Strevens points out correctly that science has a public and private character. You can find this idea in Popper somewhat, but it is not stated nearly so clearly as it is in Strevens, OK? So the private side, what scientists use to decide what to believe and therefore what to research, isn't necessarily particularly rational, OK?
[00:14:52] Red: By which I mean scientists aren't necessarily any better at self-criticism than anybody else, OK? Scientists are in fact dogmatic in their beliefs as much as the next guy. In fact, interestingly, just a couple of days ago Deutsch reposted a tweet that he made back in September 2024. He said, in my thousand-fold experience, scientists are more prone to intellectual fads than typical people are. The reason is obvious, and the purported reason for the opposite is bad epistemology. I think that's the truth, right? Like, scientists are just humans. So despite this, though, even though scientists are dogmatic, just like any other human, they have beliefs, they get dogmatic about their beliefs, somehow science makes progress anyhow. Why? Now, Strevens's argument, which we've covered in past episodes, is that the private character of science is not the whole story, that there is this public character that is constrained to empirical testing. That's how Strevens puts it. I would probably modify that to it being constrained to objective checks of any kind. So for example, a logical contradiction is not an empirical test, but it has much the same character and importance to science as an empirical test, precisely because it's the kind of criticism that is objective, that anyone can go check, right? And it doesn't have this subjective character of, well, to me, my intuitions say this, right? It's a criticism we can all point to and say, yeah, that's a fact. And now we have to deal with that fact, okay? Empirical tests are great that way. There are other things, like logical contradictions, that have much the same character. Science accepts all of those, okay?
[00:16:43] Red: Not just empirical tests, but empirical tests are kind of the famous one that science has constrained itself to, okay? The key point being that science publicly constrains what criticisms are considered valid to ones anyone can check for themselves. And by extension, science restricts or constrains its theories only to framings of theories that allow such criticisms, okay? This is Strevens's argument, but it's one I strongly agree with. So as an institution, science requires all public papers and communications to formulate all their theories such that only these higher-quality tests or checks or criticisms are used, even if it means blunting their arguments, okay? That's Strevens's term, blunting their arguments. But privately, a scientist deciding what to research or what to believe gets to use whatever they want, even if it's something like being inspired by sacred texts, right? I think most scientists would balk at that idea, but we just did a whole episode on the groundbreaking work of Michael Levin, okay? Michael Levin is Superman, right? Like, it is amazing what he is doing and how he is rewriting our understanding of neo-Darwinian evolution, okay? And as you have yourself pointed out, Peter, even though he's never said it, it's pretty obvious that he is strongly inspired by his Buddhist beliefs. And that is what led him to the research programs that allowed him to be the first one to rewrite neo-Darwinian evolution into an entirely new paradigm, right? So this idea that there's something wrong with certain sources of inspiration, I just don't think it's true.
[00:18:33] Red: Like, there's a long history of various scientists who were Christian in the olden days, particularly when they were all Christian, where it was various doctrines of Christianity that led to them believing certain things that led them down a path of doing certain types of research. And I think this private-public divide is what explains this: it allows us to harness beliefs in the private space, while you have this divide where you have to actually cash everything out into the higher-quality, empirical types of criticisms in the public space, okay? And I think this is one of the great innovations of science. So for better or worse, this public-private divide in science seems to me like a more realistic way to understand human beliefs. We have beliefs, and they are motivating and driving forces for us. But science as an institution requires that these beliefs, as much as possible, are squeezed out of the public communications.
[00:19:37] Blue: Now, yeah, go ahead. Sorry, can I just make one point? Sorry if this is going back to something earlier, but I just want to make sure we're not strawmanning David Deutsch about his take on beliefs. Does he really say beliefs are bad? I think that he would say that a belief is somewhat similar to a conjecture, but, which is pretty important, a conjecture that we should strive to seek explanations for. Like, he says, beliefs cannot be justified except in relation to other beliefs, and even then only fallibly. Is that a quote from David Deutsch? That's a quote from David Deutsch. He is against beliefs. I mean, I think, I'm pretty sure he would say they're an important part of life and the search for knowledge.
[00:20:29] Red: We actually asked him about that in our interview.
[00:20:33] Blue: Oh, did we? Yeah, we did. You mean the one I've listened to about 10 times?
[00:20:36] Red: The one you've listened to 10 times. And he actually outright states in the interview that we have no need for beliefs outside of, like, religious dogma. Right, that was, I think that was one of the questions I asked him, right? Maybe it was you that asked him. I can't remember now.
[00:20:49] Blue: I probably did. Okay.
[00:20:54] Red: Let's keep in mind that, I mean, we're human. We say lots of things over a long period of time. This is one of the big problems with Popper that I complained about in our previously recorded episode: it's not too hard to find seemingly contradictory statements within Popper if you look across his 50-year career, right?
[00:21:15] Blue: Sure.
[00:21:18] Red: So there is definitely a strong stance that David Deutsch has taken against the need for beliefs.
[00:21:26] Blue: Yeah. But like you’ve said, it’s a very ambiguous word. I mean, yeah. I mean, to me, it’s not, I mean, he’s not against conjectures. Is a conjecture really so much different than a belief? That’s a good question.
[00:21:41] Red: So I’m going to argue beliefs are conjectures. Everything’s a conjecture in a sense. So in that sense, I think I at least see the error going one way. I think we don’t usually call a conjecture a belief until you believe it is the best conjecture, the one that has survived testing the best that you now have decided to endorse as true or at least the truest theory available to you.
[00:22:12] Blue: Okay.
[00:22:13] Red: So in that sense, I don’t think we can say conjectures are equivalent to beliefs because a conjecture, I mean, like I can make a conjecture, I could say, oh, let’s all write conjectures and we don’t believe any of them yet. But we just write different conjectures about how to solve some problem or whatever, right? We’re just brainstorming. I think it becomes a belief when you start to advocate for it that you’re now saying, no, this is the right answer, right? And the other answers, they’re not as good. I think at that point, we would tend to call it a belief.
[00:22:46] Blue: Well, he says, he says we should not be seeking justification for our beliefs, but explanations. That’s from beginning of infinity. So we have, I mean, that seems to accept that we just have these beliefs, but we should be, you know, we don’t want them to turn into justified true beliefs, but beliefs that are supported by explanation.
[00:23:12] Red: So I read a quote in the last episode that we recorded where Popper outright said he did not believe in beliefs and didn't believe there was any need for belief in science, and that the right rational thing was to dispense with beliefs. You then read a quote from Deutsch in that last episode that was very, very clearly Deutsch paraphrasing that quote from Popper.
[00:23:35] Blue: Yeah.
[00:23:36] Red: So I definitely, and this is the thing is that Deutsch isn’t making this stuff up, he’s getting it from Popper, right? And so Popper did say there was no need for beliefs in science. He did say that, right?
[00:23:51] Blue: Yeah.
[00:23:52] Red: And it's a little unclear what he meant. I think the argument typically goes: well, I don't really believe anything. I just have a set of conjectures. I do my best to criticize them, and I try to find the one that survives criticism the best, and then, sure, I accept that one tentatively for now. Like, see, the word belief isn't used anywhere in there. And it's like, okay, and this is why I was kind of criticizing Popper on this front. He said in that same quote, you don't need to believe anything. Instead, you just act. So Popper replaced belief with action. You just act on what, according to the state of the critical discussion, seems the most true at the moment. And it's like, okay, how is that different from belief? Like, I honestly don't see any difference at all between that and belief. And yet Popper clearly saw some sort of difference there, right?
[00:24:51] Blue: I will say that, and there's also the idea of asserting what you should do in science. And, I mean, okay, yeah, science is trying to get away from beliefs. I think most people would agree with that. But this idea that anyone thinks that, just in life, you can go through life without having beliefs, I mean, my BS detector just goes haywire if I hear someone say that. It's just not how humans work,
[00:25:25] Red: right? Right.
[00:25:27] Blue: I mean, we have, well, you can’t even get up in the morning without having a thousand different beliefs, right? I mean, seems to me.
[00:25:39] Red: Yeah, seems to me too.
[00:25:41] Blue: Okay, I’m sorry, I didn’t mean to get you off the track.
[00:25:44] Red: So, okay, getting back to this idea of the public and private character of science. That idea is actually one we often talk about as the secular divide, all right? This is the exact same idea, but applied to politics, or to governance, instead, okay? So there's this idea of a religious and secular divide in an open society. Tom Holland, in his excellent book Dominion, points out that the word secular is actually a Christian concept, that Christianity invented the concept of the secular. And they were the first society to do so and to create a divide between religious and secular, because it was part of their religious beliefs. And it's a really smart divide, because it eventually led to us being able to set up societies where we could tolerate each other's religions, America being a kind of prime example of where they experimented with that. And this whole idea of freedom of religious belief, separation of church and state, things like that, okay? The secular divide is what it's often called. So the idea is that you are free to believe whatever you want, and you can even use it to motivate what you vote for, okay? It's an important part of who you are. It's an important part of what laws you're going to favor, things like that. But when making your arguments to your neighbors to try to persuade them to your viewpoint, you really need to put your arguments in terms of shared values, public shared values, okay? And this idea of trying to translate your private beliefs and motivations into a sort of publicly constrained character, that is kind of the concept of secular culture, right?
[00:27:40] Red: And it's the same concept as what we're talking about with this divide, with science having a private and a public character. It's the same idea, okay? Now, I was having a conversation about this with Ivan Phillips, the Bayesian we've had on the show a number of times. And he made a point to me that I found really interesting, that I feel has some real relevance to what we're talking about here, okay? So he pointed out that, as a secular atheist rationalist today, he's part of lots of secular atheist rationalist organizations, and he goes to conferences and things like that, he's written books on the subject, and when you ask them to put their moral views on something into words, they will always put it in consequentialist or utilitarian terms. So for example, this is an example he used with me: he said if you were to ask an atheist rationalist why we should oppose misinformation, let's say on vaccines, something like that, you'll get a response that is put in utilitarian terms. Well, if we allow misinformation about vaccines to spread, then it's going to cause people to die, something along those lines, right? Where we have these utilitarian, consequentialist type arguments that get used. And Ivan's point was that this really blunts the argument, okay? And it does. Like, for one thing, it just isn't the case that every piece of misinformation will result in consequences that are net negative. Like, that just isn't true. So if you're going to insist on supporting rationality and truth-seeking on purely utilitarian grounds, you're probably fighting a losing battle. And I can understand Ivan's point here because he's right about that, okay?
[00:29:32] Red: Now, Ivan told me he would prefer to say something like this: the real reason we oppose misinformation is because it's untrue, and untruth is profane. That's his term, by the way. And I'm, like, shouting, Amen, brother, you know, hallelujah, praise Darwin, you know, because I totally agree with that. Like, I've always had this impulse of, look, I'm against such and such because it's untrue. And I just sort of react naturally against things that I feel are untrue. It's got nothing to do with consequentialist, utilitarian reasons. It's a self-justifying thing. It's the very fact that it's untrue that means it needs to be opposed or criticized, okay?
[00:30:14] Blue: And I think the overwhelming majority of humans, maybe, who've ever lived feel a similar kind of impulse.
[00:30:25] Red: I think we do. I think that’s an almost built -in impulse for us.
[00:30:30] Blue: We can’t help it but believe in true things.
[00:30:33] Red: Yeah.
[00:30:34] Blue: I mean, people, you know, you take a religious person or anyone, a religious fanatic or, you know, ask them why they believe what they believe, and they'll come up with some kind of explanation. I mean, it might be bad, but they will never say, like, oh, this is wrong, but I just choose to do this anyway. Right, they're not going to say that. We absolutely, by almost outright impulse, want to see the truth and the good aligned. As Ray Scott Percival said, you can't convince someone that the moon is made out of cheese. You just can't. It's a beautiful thing about our species, I think, that we want to base our lives on true assertions that we discover very, very fallibly, but, you know, it seems to be nearly universal amongst humans.
[00:31:33] Red: Yes. Now, here's the interesting thing about Ivan's argument. And I pointed this out to him. I don't know that he disagrees with me, by the way. It's not like we were debating; we were just talking. I pointed out that one of the reasons why atheist rationalists do blunt their arguments like this, part of it might be that they've been trained to do so, like it's just part of their belief system. But I actually think it's because of the secular divide. Like, to put this somewhat straightforwardly, even rational atheists must ultimately respect the idea of secular society, with its divide between private though motivating beliefs and how we actually make our public arguments. Okay. And that is one of the great things about secular society: we've got this divide where you can be motivated by your private beliefs, but ultimately, at least if you want to try to actually persuade people, we're expecting you to somehow put it in terms that are shared across everybody. So maybe it's not so bad, then, that rational atheists try to put things in consequentialist terms, even though it really does, to Ivan's point, thoroughly blunt their argument. Right? Like, it really does. He's totally right about that. Like, they are not making the right argument that's ultimately going to sell this as a worldview. Instead, they're trying to get you to go along using the blunted secular argument, right? Instead of saying untruth is profane, which is more of a religious argument.
[00:33:12] Blue: Yeah.
[00:33:12] Red: So, I mean, however you want to name that, we could maybe call it a sort of religious argument. So, this also explains why I have a concern with people, when atheists try to tell religious people that they aren't allowed to use their religious beliefs to inform their values with respect to, say, conjecturing what laws to make. The argument typically goes like this: your religious beliefs must be entirely private. You can't talk about them publicly. You can't use them internally to form values that in any way impact public law or even how you vote. You're not allowed to do that. I've seen this argument from religious people. I've seen it from atheists. I've seen this argument in many places. Okay. This is typically put in terms of something like, don't outlaw Doctor Who being on television on Sunday. Okay. Which is, of course, an intentionally silly, unrealistic example. Or it might be something like, don't use your religious beliefs to determine how to define marriage in terms of laws, which is a much less silly example. So, note that we're not now demanding that religious people put their public arguments in terms of shared values, which is the secular divide that I'm talking about. Instead, we're now saying they don't get to utilize their beliefs at all, even as a motivating factor. Now, that isn't going to work. It's just so deeply frustrating. Even though I can at least understand where they're coming from, it fails to respect the impossibility of telling people to turn off their beliefs. So, instead, the secular divide is the right answer. Be motivated by whatever motivates you. Believe whatever you believe.
[00:34:58] Red: But yes, put your arguments in terms of values and consequences that are shared by the public, or at least do that if you want to win elections. Okay, we're not going to stop you from doing it otherwise. Freedom of speech is great. But if you actually want to win elections, then you'd better be able to put this in terms of harder consequences, even if that blunts your argument. What I'm really arguing, then, is that we should hold religious people to exactly the same standards that we hold atheists to. That just makes sense, right? Would any atheist feel comfortable being told you aren't allowed to use your atheistic beliefs to formulate your morality or inform your voting? Of course not; that would be offensive, right? So, should people like me, and Ivan apparently, who believe that untruth is profane, not be allowed to use that to motivate us? It doesn't make sense, right? It's a point of view that can just never really work. But what are beliefs then? Okay, now you kind of asked about this. Aren't beliefs really just our best judgment as to what is the current best theory? One might argue that's exactly what beliefs are. This is true even if we're talking about a religious zealot. A religious zealot, as you just pointed out, Peter, sincerely believes that their religious theory is the best theory. And they've got whatever reasons or evidence that they think convinced them of it, that sincerely allowed them to decide, this is my best theory. Okay. So, arguably, that is what the word belief means: this is what I think is true, or at least what I think is closest to the truth.
[00:36:44] Red: This is presumably just as true for a rational atheist as for a dogmatic religious zealot. There's a certain amount of equality there. And if I'm being honest, I'm not sure I see a lot of difference between the two, and I know I've argued about this with several people online. I'm a religious person myself. I go to church, things like that. I get a lot of value out of it. And I'll often be asked questions like, well, how do you reconcile that with your rationalism? Because clearly a huge part of me is my rationalism. And I always tell them, look, I don't fully reconcile it. The real answer is what we're discussing here: I kind of have this idea in my mind of a private and a public character, right? A secular divide. But I often point out, I don't see as much difference as you see. Often there's just this idea that religious people have these weird beliefs, and these beliefs are totally rationally unjustified, et cetera, et cetera. And I'll turn around and say, look, I've been a part of rationalist communities, right? Like, I'm part of the crit-rat community. And a lot of them believe in anarcho-capitalism. And they believe, very sincerely but in my opinion wrongly, that anarcho-capitalism is, today, right now, a better theory than first-past-the-post democracy. They'll tell you it's the best theory of governance that we've got. Or the best theory of economics; I don't know how you would phrase that. Now, it should be fairly obvious why first-past-the-post democracy is an overwhelmingly better theory than anarcho-capitalism.
[00:38:29] Red: At least today; that could change in the future as the theory is developed and framed differently. But in terms of how a critical rationalist would assess a critical discussion, it isn't much of a contest, right? First-past-the-post democracy is a way, way, way better theory from a critical rationalist perspective. One of these theories, first-past-the-post democracy, is a highly tested and corroborated-in-practice theory. The other, anarcho-capitalism, isn't even implementable at our current state of knowledge. Most anarcho-capitalists will admit this if you talk with them, okay? And when I say it isn't implementable today, in part that's because we lack the necessary knowledge for how to actually do it. One example that often comes up, that even David Friedman talks about in his books, is that we don't know how to privatize air today. If you want the market to handle clean air and take care of that externality, you have to privatize air in some way; somebody has to own the air. We don't know how to do that today. It's beyond our current technology to even accomplish that. Nor, because we've never tried it, do we know the moral consequences of doing so. Right now, anything you think you know about that really is just prophecy, okay? Because it has never been tested in real life.
[00:40:01] Blue: Sorry, I just had a dark thought, Bruce, that I think might be true. David Deutsch recently said something like: in my experience, scientists are more prone to intellectual fads than typical people are; the reason is obvious, and the purported reason for the opposite is bad epistemology. So basically, scientists are more prone to intellectual fads or, you know, terrible beliefs in some ways than average people, which I actually kind of agree with.
[00:40:37] Red: Yeah, I do too, actually.
[00:40:39] Blue: But here's a question, here's my dark thought, which I actually kind of agree with too. I would say that critical rationalists are not less dogmatic than regular people, nor even equally dogmatic. I think they might be more dogmatic than a regular person. Which kind of makes you think, what are we even doing here? Why are we critical rationalists if we're more dogmatic than the average person?
[00:41:14] Red: You know, okay, that's a fair question.
[00:41:16] Blue: Let's cancel the podcast.
[00:41:17] Red: Yeah, that's a fair question. I don't think it's as dark as you're making it out to be, but I'm going to leave that question hanging and subtly try to answer it throughout the rest of the podcast. It is actually a fair question, I think. Okay.
[00:41:37] Red: So let me finish using the anarcho-capitalist example here to maybe partially respond to what you were just saying. Okay. I just talked about how anarcho-capitalism isn't implementable today. We don't even know how to do things like privatize air. I think a crit-rat anarcho-capitalist would say, well, all problems are soluble; it's just a matter of knowing how to privatize air, right? Someday we'll have the knowledge of how to do that, and then that will be the best way to do it. I've been told that by a crit-rat anarcho-capitalist. By the way, that take on anarcho-capitalism is unique to critical rationalists. I know a lot of Mormon anarcho-capitalists, and they would never use that argument. But anyhow, that's typically the argument they would give back to me. So let me emphasize: the assumption that privatizing air is the ideal way, that we know that today, that it is our best theory, that we will and should privatize air, even though we've never tried it, because we don't know how, that is, in the Popperian sense, a prophecy. Okay. It is. That is what we mean by prophecy. You are saying I don't own the air in my house?
[00:43:02] Blue: Maybe you do. I don’t know what that is.
[00:43:05] Red: You’re going to have to make it airtight though, because it’s escaping into your neighbor’s air on a regular basis.
[00:43:12] Blue: They’re stealing my air. They’re stealing your air.
[00:43:16] Red: I read a book about that. I read a fun book about a superhero that could keep you from stealing anything that was hers. So one of the things she could do is she could freeze you because you were stealing her air.
[00:43:30] Red: Let me say, though, I have no problem with the idea of privatizing air as a conjecture.
[00:43:35] Red: If you were to say, look, this is an untested conjecture that could have some promise, let's try it out, let's look into it, that actually seems like a very valid thought to me. But the idea that we know we should privatize air, that it's our best theory, that's a false prophecy. Well, it could be a true prophecy, I guess, but it's a prophecy, a wild guess that is not based on anything. It has no connection to critical rationalism at all. Democracy, by comparison to anarcho-capitalism, is deeply incrementalist, with no overall drive towards anything, so it's forced to concentrate only on solving local problems. Whereas anarcho-capitalism is utopian in the sense that it has a grand plan that you're supposed to drive towards. Now, crit-rat ancaps know enough about critical rationalism that they're going to answer, well, we plan to do it incrementally. We're going to incrementally drive towards this idea of getting rid of the government. Which is sort of missing Popper's entire point. Popper's point was about solving local problems and not having a grand plan that you're trying to drive towards. That was what he meant by incrementalism. Plus, democracy can error-correct into anarcho-capitalism. Let's say that over time we find that most of our problems are solved. David Friedman makes this case, and I think it's a good case, as an anarcho-capitalist himself, that over time we've seen democracies move towards privatizing things that used to be handled by the government. I'd have to find the quote from his book where he says that. We'll have to do an episode just on his book, because I actually think it's an excellent, excellent book.
[00:45:24] Red: The fact is that as we solve problems, we sometimes solve them by dissolving pieces of the government and having the market take it over. In theory, let's say just by chance that anarcho-capitalists are right, that an ideal society would have no government at all, or rather, it would have a government, but an entirely privatized one. We could get there inside of a democracy. There's nothing in democracy that stops you from reaching that point, because it's error-correctable in that way. The reverse isn't true. Within anarcho-capitalism as a philosophy, at least as it's currently understood today, it's not possible. If they're wrong and democracy is better in some way, there is no line of thought within anarcho-capitalism that lets you get back to democracy. Well, that is unless you want to argue that democracy was the market solution to various coordination problems, which I've argued elsewhere, making our world today arguably already anarcho-capitalist. It just didn't happen to take the form ancaps prophesied it would, and they failed to recognize it for what it was. Despite these various contradictions that I'm trying to tease out here with Popper's explicit statements about his epistemology and how to apply it to politics, you will easily find crit-rats who will tell you with a completely straight face that anarcho-capitalism is our best theory of government and politics. They'll even tell you that it flows naturally from Popper's critical rationalism, that it's a consequence of Popper's critical rationalism. Now, I've had numerous arguments with people who say this, so what I'm about to tell you is all real. Point out to such a person that Popper said otherwise, because he did, in strong terms.
[00:47:18] Red: And they'll tell you Popper didn't understand his own theory well enough and that they understand his epistemology better than he did. Which, that's not inconceivable. Nobody's an authority, so Popper isn't an authority either. Point out that it violates Popper's incrementalism and anti-utopianism, because it has a grand plan of where society needs to go, and they'll just tell you that they will implement it incrementally according to a best theory of economics, which makes driving towards one specific solution inevitable, at least if you want the fastest knowledge creation. That's typically the way they put it: the market is the fastest at knowledge creation, knowledge creation is a good thing, so we know we must eventually get rid of governments, which are less efficient than markets at knowledge creation, and we're going to drive towards the best knowledge-creating process, which is the market.
[00:48:14] Unknown: Okay.
[00:48:15] Blue: I mean, to really steelman that position, though, I think what they would say is that they're not really making an assertion about economics, or an economic plan, as much as just the premise that human beings thrive under freedom, and that if you create a society with a maximum amount of freedom, it seems to follow that there'll be a maximum amount of prosperity.
[00:48:50] Red: Okay, that is exactly the argument they would make. Let me actually now challenge that argument, because I actually think it is so pre-baked with anarcho-capitalist assumptions that don't even make sense on their own terms that it's a really questionable argument. So let me go ahead and challenge it. When you ask them, what do you mean by freedom? Let me actually get the quote from Popper that I've used a few times: "I do believe in freedom and reason, but I do not think that one can construct a simple, practical, and fruitful theory in these terms. They are too abstract and too prone to be misused." So Popper has already responded to this argument from critical rationalists, even though he was dead by the time they made it. He's already explained why. And I don't think it's an accident that they try to put things in terms of freedom, because what they're doing is trying to take a theory that is not a best theory and exalt it falsely to that spot. And the easiest way to do that is to put things in vague terms, to vague-man your theory. So they put things in terms of coercion and freedom, knowing that those terms are just super vague, and therefore can be easily varied to mean whatever they need them to mean.
[00:50:16] Red: This is one of the main things that I've bumped into and argued about with crit-rat libertarians and non-crit-rat libertarians over and over again: they have this almost constant drive towards vague terms. They often don't even agree on what they mean by freedom, right? Like, I had one libertarian tell me that it would be wrong to have laws that set a speed limit, because a person who's speeding has not done anything negative to anyone else, and therefore you aren't allowed to make speed limits, because that breaks the concept of freedom. To me, that sounds like a really bad argument. He said, well, you can have the engineers put up a suggested speed, and then if you get sued later because of an accident, whether you were following those suggested speeds or not will be taken into consideration. That's as far as you're allowed to go; otherwise you are violating freedom. So that was the first libertarian I ever met, and I thought, oh my gosh, this is the most absurd point of view I've ever heard. I have since talked to a lot of libertarians, and I've had other ones tell me, no, that's totally wrong. The fact that you're speeding means you are initiating force (they always talk about things in terms of initiating force) against the people around you, because you're being negligent, you're being reckless, and so that's a danger.
[00:51:51] Red: Like, if you were in a park, would you have to wait to arrest a person who's shooting a gun until they hit somebody, or could you arrest them because they are simply shooting in a park at all? Of course you could arrest them, this is the argument, for shooting in a park at all, even if they've never hurt anybody yet, because that is negligent, reckless behavior that is forcing itself on other people. This term, initial use of force, allows you to justify either one of these viewpoints. It's easy to vary. Coercion's got the exact same problem. One will say, well, it's coercive, and the other will say, no, the other person's being coercive. Most coercion, as Popper himself pointed out, is kind of simultaneous, right? You're both just living your lives, you're not trying to coerce anybody else, and you've got different ideas about how to live. Popper's example was one person trying to play the piano while the other one's trying to take a nap. They don't mean to coerce each other, but they just happen to be living their lives in such a way that each is creating an unintended impact on the other. Almost all coercion is like that. I mean, there are cases that are so clear-cut that no one would disagree, oh, that's coercive, but those are the ones that nobody's arguing over, right? If we're having an argument over whether this or that is coercive, it's because it's not clear, and that's the reason why we're having the argument. And in most of these cases, my argument is, the reason why
[00:53:27] Red: anarcho-capitalists put everything in terms of freedom is because they're trying to make sure their theory is irrefutable, and that's why they're doing it. Now, let me actually take an argument that I spent, I don't know, two hours on with Sam Cipers, not arguing, I was trying to explain this point of view to him. He's obviously a good crit-rat and an anarcho-capitalist, way smarter than me. I said, you know, the problem is that you're trying to say that governments are inherently coercive, and that you're against coercion. It's not at all clear that's the case. Given certain assumptions, it would be, but it's not clear what those assumptions are. So he'd say, well, if you enter into a market contract, you sign the contract, it's all obviously a type of consent, so there's no coercion involved. And I'm like, okay, Sam, let me see if I can explain why I've got a concern with the way you're framing this. Do libertarians have a problem with the idea of, say, a homeowners association? And he would immediately say, oh, I used to live in one, they're not very good. I'm like, no, no, no, I'm not asking you if they're efficient. I'm not asking you if they're good ideas. I'm asking you if they violate your moral understanding of consent. He's like, well, no, I guess they don't. I'm like, okay,
[00:55:06] Red: let's say, when you say that you have no contract with the government, like, this is the argument ancaps always use: I have no contract with the government, so therefore I'm being coerced, and if I have to pay my taxes, I'm being coerced, and that's why taxes are, quote, theft. And I'm looking at this and I'm going, wait a minute, what if the government owns that property right? You've never even considered that possibility. If I'm inside of a homeowners association, that's like a little mini-government for a neighborhood that I've consented to by moving in. And I may or may not sign a contract with them. Typically they would probably want you to sign a contract just to make it clear, but as I'll explain in a second, the contract's not actually necessary. That homeowners association owns a certain right over your property: the right to be the homeowners association for your property. They're going to maybe mow the lawn for you, make sure everybody's lawn looks nice, and you've agreed to pay for these services. And then Sam would say, yeah, but see, I signed a contract. And that's exactly my point. You own this house, and your son grows up in the house too, and then you die and your son gets the house. That son has never signed a contract with the homeowners association, right? Does he then get to just say, well, I'm not going to pay the homeowners association? And of course the answer is no. And the reason why is super straightforward: it's because that initial contract gave the property right to the homeowners association to be the homeowners association for that geography.
[00:56:53] Red: And this is something that anarcho-capitalists thoroughly accept as at least a morally valid type of contract, even if they wouldn't prefer it. Now, all you have to do is imagine this homeowners association merging until it covers the whole town, merging until it covers the whole nation.
[00:57:10] Blue: It's a great thought experiment. I mean, I'm not an anarcho-capitalist, though I like some of the ideas. Maybe in a future knowledge state, as you've put it, it would work great; I think that's probably possible. But I can imagine a convincing argument that a government is not like a homeowners association.
[00:57:43] Red: Sure, sure. And they always argue that, right? They always say, well, but a homeowners association is smaller. Okay, but there's no rule against it being bigger. It's not like there's some rule of anarcho-capitalism that says you're only allowed to make it one neighborhood, right? You don't get to agree to the outcomes. You get to agree to the rules. That's the way this works, right? So there's no reason why not. And Sam would say, oh, so you're arguing that the government owns the right. And I'm like, you know, I'm not even arguing that, because that would pre-assume knowledge I don't have. I'm wondering why you haven't considered the possibility that they own the right to be the government for their territory. Surely that's what people think they own the right to. Nobody doubts that if you move inside American borders, they have the right to be the police force, the jury system, the government for that territory. And they're willing to enforce it, right? They'll enforce that border, and only within that border, by the way. Everybody at least believes that they have that right, which is really just the same as saying they have a property right, because that's what a property right is. If they have that property right, there's no contract. You're looking for a contract that wouldn't exist, any more than I would look for a contract between the homeowners association and the son of the original owner of the property. There's no contract.
[00:59:11] Red: You can't say, oh, he's being coerced because there's no contract, because the homeowners association already owns that right. If an anarcho-capitalist is going to tell me that this is about freedom and that contracts are freedom, the very first argument I want to hear out of their mouth is why they think governments don't have a right to be the government for their territory. I've never even seen them raise the issue, never seen them try to explain it to me. Maybe this differs by government; maybe the Canadians are totally corrupt. But take America. There's the Louisiana Purchase: the government purchased this big area of land and then resold the land, or gave it away, to its own settlers, but under the understanding that it was going to get to be the government for that territory and that there would be taxes paid. It is true that you didn't make the contract with each individual living there, but the original contract conveyed that property right. That's how property rights work, even for anarcho-capitalists, right? Once I own, say, the mineral rights to your property, you can sell your property, but you can't sell the mineral rights. I don't have to make a new contract with each person you sell your property to; I just continue to own the mineral rights. That's how property rights work, right?
[01:00:39] Red: So when we're starting with this, there's kind of this implicit assumption that you're going to have maximum freedom if you have a contract, but there's not even the consideration that that contract might have sold the rights, right? And that is how contracts work: they sell rights. So I don't know how to respond. They need to first explain to me why they're assuming the government has no right to be the government for its territory. If the government has that right, if that's a property right the government has, then of course they get to ask for taxes, and it's mandatory so long as you're living there. The way you get out of the taxes is by moving out. That's exactly the same as with a homeowners association, right? Now, I understand the argument if you're going to make the argument, well, that's not efficient. But that's a totally different argument. It's not a moral argument anymore. Now we're talking about what's the best way. Like, we could take the government and say, you know what, it'd be a lot more efficient if we didn't have a single government monopoly on enforcement or the legal system. We could change the government to do that, but you'd still have to buy the rights back from the government if it owns them. I could also see them making an argument that many governments are so corrupt that they don't own this right, that they really are just thugs. But I don't see that as a good argument against, say, America, right? I don't see how you could use that argument against America.
[01:02:06] Red: In fact, if you really think about it, the original 13 American colonies kind of were the anarcho-capitalist utopia, right? Whether anarcho-capitalists realize that or not. You came off the boat and you had 13 choices as to which government you wanted to be a part of, or you could walk off into the wilderness and say, hey, I'm going to live free and stay away from governments. Not surprisingly, almost nobody chose to do that. Or rather, people did choose to do that, but very few, because it was a very bad choice, right? And you could start your own; you could go start a different colony if you wanted to. You had all the choices available. What happened when that took place? Well, there was this thing called the Articles of Confederation, and there was this idea that we're going to keep these separate little mini-governments so that we're as close as possible to having no overarching government. And it just didn't work. It was a total failure. They had a farmer start a rebellion and nearly overwhelm the nation, and they couldn't rally troops to stop him. It was a total disaster. So eventually you had George Washington himself saying, look, if we're going to have a government, let's have a government, right? And they took the different colonies, they signed a contract, and they formed a federal government. I don't see how any of this in any way goes against anarcho-capitalism. I would argue that the US today is anarcho-capitalism.
[01:03:37] Red: It just took the form of a government. The market supplied a government, and that's what happened, okay? There was demand for it. It came about using all means that anarcho-capitalists would find legitimate, and the end result was exactly equivalent to a government. That is exactly what the market supplied, okay? I don't know, maybe I'm wrong.
[01:03:57] Blue: That's why I like the future knowledge state thing. We're getting down a tangent, but if you tried to implement free speech and democracy in a tribal society 10,000 years ago, it might be terrible, because they wouldn't have the knowledge, the thousand implicit and explicit assumptions about how to live, necessary to make it work. So maybe a hundred or a thousand years from now, people will have better knowledge about how to live with other people, and anarcho-capitalism will make sense. You have to
[01:04:49] Red: kind of incrementally get there and see what happens, right? That's exactly the Popperian viewpoint.
[01:04:55] Blue: But I like the ideas. I mean, I think there's a certain philosophical coherence to a maximally free society that makes a lot of good sense. I can see why people are attracted to these ideas.
[01:05:13] Red: Attracted to it, right. Motivated by it, right?
[01:05:16] Blue: Yeah.
[01:05:17] Red: Okay, so actually, let me take that as a jumping-off point, because you're kind of making the point that I was going to make here. So I might point out, look, you're making a prophecy about a future state of knowledge, how to privatize air, let's say. Or you're making statements that don't even make sense to me. Why wouldn't a government set up a geography? Why wouldn't they take ownership over being the government for that area? Why are you assuming that we have to allow every geography to have as many governments as it wants, right? I don't even see why the market would necessarily supply that. And in practice, it didn't supply that; that's exactly why America wound up with the federal government. So I might point this out, and they might just invoke, well, you know, Deutsch says all problems are soluble unless they violate the laws of physics. To which I might respond, well, a communist could say the same thing. They could say, communism doesn't violate the laws of physics, and all problems are soluble, so even though we've never made communism work in real life, we know it's possible to create a communist state that works. And they might then point out, no, no, Bruce, see, communism is impossible. And so then I might say, well, wait a minute, you just quoted Deutsch saying all problems are soluble. Why is communism impossible if Deutsch says all problems are soluble? It doesn't violate the laws of physics, does it? And here's the answer I actually got back: I was told economics is a branch of physics. At this point, I'm kind of stumped.
[01:07:08] Red: Why would you say economics is a branch of physics? Clearly it's not. I might say, okay, how is it that you're deriving Misesian economics from physics? I don't see how that's possible.
[01:07:20] Blue: Well, you're not a true physics imperialist, then.
[01:07:25] Red: And I was told, in response to this, that economics is physics because it says something is impossible, namely communism, and only physics can say something's impossible; therefore, economics is physics. Now, whatever you might think of the overall point of view, this whole chain of thought does seem to me an almost quintessential example of what we mean by ad hoc saving of a theory. This idea that we're going to resort to claims like "economics is a branch of physics" to try to justify what we're saying, to me, that seems pretty much exactly what we mean by ad hoc. And honestly, I don't know if I see this as differing from religious belief, right? I think there's a hardcore belief system there; they're always somehow finding a way back to their beliefs, and it really is immune to valid criticisms. You can never get a criticism to land, because it's always going to be rephrased or repackaged in some increasingly vague way to where the criticisms just don't matter anymore. Here's the thing, though. So what? I mean, like I just mentioned, I think untruth is profane, right? So of course I have this innate reaction against this aspect of anarcho-capitalism, because I really dislike how crit-rat anarcho-capitalists try to claim that it is a best theory. I know that's false. I know first-past-the-post democracy is the current best theory, period, end of story. But so what? So what if it's profane? Who cares? Why is it a problem? Maybe let's ask the question in consequentialist terms: just how dangerous a belief is anarcho-capitalism,
[01:09:19] Red: right? Even if we want to treat it like a religious belief, how dangerous is it? Now, surely you could argue that it could be dangerous. Okay. If you take something like taxes as theft, and the concept of first use of force versus second use of force, and if you were to take those too seriously, you would end up with a radicalization problem. Okay. So obviously the government initiated force by stealing your money. Never mind whether they have the right to do that or not; that's not under consideration. They didn't have a contract with you specifically. So, bada boom, bada bing, this is an initial use of force; I now have a right to use force to defend myself against the government. Okay. You can see how this follows very naturally from their beliefs. There is a very, very, very real radicalization danger there that I don't think I have ever seen come to fruition, ever. I don't think I've ever even seen it come close to fruition. Peter, you argued to me once that the Oklahoma bomber, the Unabomber, that maybe he was motivated a little by anarcho-capitalist type beliefs. I don't even know, maybe vaguely; it's not clear to me that he was an anarcho-capitalist.
[01:10:35] Blue: You said Unabomber, but the Oklahoma bomber.
[01:10:40] Red: Yeah, Oklahoma bomber. That's, I guess, what it is.
[01:10:42] Blue: Is that what you said? Yeah. Yeah, yeah. Which one was it?
[01:10:46] Red: Maybe it was the Unabomber. You told me about it.
[01:10:48] Blue: What I said was, I think, a little different: that the Oklahoma bombing might have ruined or derailed the libertarian movement of the 90s. Regardless of his motivations, I mean, it was such a horrific act. 200 plus people died, and suddenly these anti-government types are just looking like lunatics.
[01:11:27] Red: Right. Okay.
[01:11:28] Blue: I think that's more what I said, regardless of what the individual motivations were.
[01:11:31] Red: I mean, he...
[01:11:32] Blue: He probably wasn't, I'm sure. I suspect he didn't read Mises or anything, but I'm not saying... I think he was more of a, I mean, he was just a weirdo. I don't know.
[01:11:44] Red: So, you know what? I agree. I think he was just a weirdo. I do not think that anarcho-capitalism has produced an Oklahoma bomber as of today. And even if it did, I don't know if that would be a reason to get overly fearful about the ideology, right? Because it's kind of got a really good track record. That's just the truth, right? Maybe occasionally you might see them kind of go a little off the rails, and definitely, you know, they're kind of into conspiracy theories a lot. There's definitely maybe some room for concern. But like, I would pit anarcho-capitalists any day of the week against leftist woke types. I'd pick the anarcho-capitalists any day. Like, it wouldn't even be a close contest. Libertarians are way safer than leftist woke types, right? Or for that matter, the woke right is way more dangerous than libertarians. So, as a staunch conservative myself, I suspect that even if anarcho-capitalism is strictly false, which I think it probably is, and by that I mean probably democracy will turn out to be an inherently superior system to anarcho-capitalism, that doesn't mean we don't have a lot of room to privatize much of what the government does today within a democracy. Probably a huge amount of room, okay? And likely the market could improve upon what we're currently having the government do. This is kind of a conservative belief. This is kind of what I believe anyhow, right? Like, as a non-anarcho-capitalist, I completely agree with this giant aspect of their whole worldview. I think it's just right, okay?
[01:13:29] Red: So even in cases where likely we will need a government and it'll be necessary, a lot of times you can imagine a hybrid that's better than either individually. I had a libertarian friend who brought up the idea of cap and trade for pollution. This was before the whole global warming scare, when cap and trade took on a more negative connotation. He pointed out that there were certain local governments that had put cap and trade pollution policies in place, and that this privatized the air, basically, right? Sort of. And that you had, like, the Sierra Club out trying to buy rights to pollution so that they could reduce the pollution in the area. And if it was important to them, they now had a way to actually reduce pollution using money, right? And what it had done is, the government, by setting the laws as they did, had taken something that the market couldn't see, this externality of pollution, and made it visible. And then they used the market. Even though it was a government solution, the government solution used the market to clean the air, okay? There's all sorts of interesting ideas like that out there. And honestly, I think anarcho-capitalist libertarians are a conservative like me's best friend, right? They've got a huge amount of truth in their worldview. Even if certain aspects of it, even if they are strictly false, they've got a lot of truth, a lot of verisimilitude in their views, okay? And honestly, maybe I'm wrong about that. Maybe they're right and I'm wrong. And if that happens, if someday we eventually, incrementally, by solving problems, local problems, get rid of the government and move to entirely
[01:15:16] Red: doing things through the market, you know, so what? I'm not against that. Like, I'm totally in favor of democracy in part because it allows us to move to other types of formats, including possibly anarcho-capitalism, if that turns out to be the better way to do things, okay? I'm more interested in that fact than probably anything else. It's, of course, one of the things I love about democracy: that it allows a slow removal of itself, should, by some wild chance, the anarcho-capitalists actually have guessed correctly as to what a more ideal political setup would look like. Plus, there's just no denying, as you were just saying, Peter, the mythic framework of something like anarcho-capitalism. It just isn't hard to see that the web of intertwined ideas that make it up becomes a really strong motivating factor towards something that really is mostly good, right? Particularly, mostly good inside of an open society, where we have filters in place that help remove the radicalizing elements that might otherwise be dangerous, okay? So a drive towards less market interference and using the market to solve problems, that's almost assuredly good in tons of cases, the vast majority of cases, right? So it's hard for me to be overly worried about it. Yeah, I think it's probably wrong. It's certainly wrong, at least insofar as they're declaring it a best theory, and it clearly isn't. They are absolutely wrong about that, for sure, today, okay? That is, at best, a premature declaration. Without a doubt, first-past-the-post is the best theory of governance we have, period, end of story, okay?
[01:17:00] Red: But I just don't see it as a dangerous, though maybe false, belief, right? And it's a false belief that has a lot of truth to it. And I do think within a Popperian open society, it's the true parts of the beliefs that get through the filter, and it's the false, more radicalizing elements that don't. And because of that, I think beliefs tend to be a very positive, motivating factor, even when they're strictly wrong. It simply does not matter that much that anarcho-capitalists are theoretically strictly wrong in their beliefs. Not surprisingly, I feel the same way about all belief systems inside of open societies, including but not limited to actual religions. Now, I do know it can get really annoying. It's hard to have a rational conversation with religious people. I'll admit that. I was a Mormon missionary, so I was having religious conversations with people daily, and it can be very annoying. And I'm sure that to them, I was annoying, right? Likewise, it's hard to have a rational conversation with most libertarians. It's very difficult, because they are so quick to jump off to conspiracy theories. Conspiracy theories and libertarianism strongly overlap, okay? Logan Chipkin, by the way, who's probably the foremost crit-rat anarcho-capitalist, he totally admits this. He jokingly refers to them as conspiratorians. And I think this is a widespread problem, no matter what group we're talking about: leftists, wokists, Trumpists, Bayesian AI doomers, etc. It's a common problem, right? Because we are not fully rational about our belief systems.
[01:18:50] Blue: Yeah.
[01:18:51] Red: Now, let's just take AI doomerism as an example. I can see how AI doomerism is potentially dangerous, right? Like, if they were to actually somehow grab the levers of power and shut down development of artificial intelligence, that would be a really awful thing, okay? I don't think it's likely to happen. Or if it does happen, I don't think it'll last long, right? Just because of the way open societies work. I think the most likely outcome of Bayesian AI doomerism is probably going to be narrow AI alignment programs, not AGI alignment programs. Those two are different enough, but because they don't understand the difference, they're going to drive towards improved alignment programs for narrow AI, and I think that will all turn out to be a good thing. Like, I think the Popperian filter just works that way, okay? Now, let me give an extended example of this, using another example that I talked about in a previous podcast. We talked about the disobedience criteria and how I didn't agree with it. After we had that podcast, I went on Twitter and I was talking to some crit-rats about it. I kind of put some of my ideas out there. I expressed my concerns that I don't think the disobedience criteria, at least with respect to AGI, as put forward by David Deutsch and strongly embraced by the crit-rat community today, I don't think it's a very good criteria, okay?
[01:20:21] Red: So let me use my concerns with that criteria as an extended example of the nature of beliefs, but also of how talking to a member of the crit-rat community about this at least tweaked my thinking to see it in a more positive, belief-centric light, okay? So after that episode, I did express my ideas and my concerns with the disobedience criteria on Twitter. Now, here's my concerns in a nutshell. Let me try to summarize what my criticism is. The disobedience criteria seems more like an intuition pump to me than an actual theory with content. Now, I do get the intuition pump just fine. You think of every AI you've ever interacted with, narrow AIs, or maybe GPT-2, and you know that it's both following specific algorithmic rules, and you know it's narrow in the scope of what it can actually do, okay? Now, by comparison, you think of humans, and they don't seem to follow any set of rules, okay? And they're open-ended in scope by comparison. So the intuition pump is that following the rules, i.e. the algorithmic code, is equivalent to obeying the algorithmic code, apparently, and therefore humans, which follow no rules, are disobeying. That's the intuition pump, okay? Now, this idea gets tied to several other beliefs popular within the crit-rat community. For example, it ties to their belief that traditional schooling is immoral and coercive. Note that we're not here talking specifically about sending children to schools; that's a related but separate issue. A lot of them even believe that an adult going to college of their own free will and choice is itself immoral and coercive in some way, okay?
[01:22:15] Red: Deutsch has claimed that traditional schools, including universities, are a leftover of static societies and that they are based on the bucket theory of knowledge, where they're wrongly trying to copy knowledge with obedient fidelity into the minds of their obedient students, rather than letting the students explore their own interests. And often they'll cite being forced to take a class that you're not interested in as being coercive or forced obedience. David Deutsch also famously refuses to take a professorship, because then he would be forced to teach classes to students that don't want to be there, and he sees that as coercive and can't agree with that, okay? Which is why he does not have an actual professorship; he's just got an associate position, I forget what the term is, where they put you on campus and they give you an office, but you don't get any stipend, right? And that way you don't have to teach a class. And of course, that tests are coercive and immoral is one of the things that I've heard numerous times, because they're trying to treat you like a universal constructor instead of a universal explainer. They are trying to coercively smash facts into your head to be obediently retrieved later, rather than treating you as a creative being. And they'll argue, look at how unrealistic tests are compared to real life. Like, if a doctor's trying to do a diagnosis, they're going to freely go use ChatGPT to get their best answer. So why should you not be allowed to use ChatGPT on a test? Or better yet, why don't we just do away with tests altogether?
[01:23:45] Red: And so what they're doing is they're ultimately connecting traditional schooling to the old static society idea of dread of novelty, snuffing out the student's creativity and demanding obedience to cultural norms, okay? This is generally tied to the very idea of the Enlightenment and how the Enlightenment was a rebellion against obedience to authorities. From this point of view, we have this giant web of connections, with obedience being counter-Enlightenment and counter to error correction, while disobedience is seen as a virtue, or even the virtue, that is consistent with the Enlightenment. So it kind of makes sense to me: wouldn't it be nice if we could take this whole idea of disobedience as the virtue of the Enlightenment, and obedience as the old school, pardon the pun, static society approach to things, trying its best to put us back into stasis, okay? And then we could show that universal explainers are characterized by their disobedience, and that it's part of our natural nature to be disobedient. That's part of what defines us as universal explainers, okay? Now, if we could do that, this would be a nice addition to the whole web of connections, you know, the whole viewpoint. The web of interconnections would literally be showing that, to our core, universal explainers are all about disobedience, and that all those other attempts to get us to be obedient are actually going against our core nature, an attempt to trap us in a static society again, okay? So you kind of have this moral quality to it. Now, as much as I at least get the aesthetics of the desired web of connections here, I admit the whole thing makes me really uncomfortable, okay?
[01:25:36] Red: In fact, my bull crap alarm just starts blaring the moment I hear these arguments. So for one thing, there's a rather obvious, subtle equivocation going on here. There's a leap from this idea of obeying one's algorithmic code to its opposite being not following, or disobeying, any set of rules. Now those just aren't the same thing. They're just not. A human still follows their algorithmic code. Humans are programs. We have code too. We follow it. There's no such thing as not following our code. That's a physical impossibility, okay? So humans do follow their algorithmic code just like an AI does. So we aren't really talking about the same definitions of obeying and disobeying. We're subtly equivocating between two different definitions of each, right? In the case of obedience, we mean following the code. In the case of disobedience, we mean following no specific rules, other than your code, obviously. So once you realize that, it's easy to think of refuting counter-examples to the disobedience criteria. For example, any machine-learning algorithm with online updates will update its code and policy as errors are discovered, and will not follow any specific set of rules either. When I was learning to program reinforcement learning using Q-learning, that's exactly how Q-learning works. In Q-learning, you have a policy. You error-correct it over time. You try things out, conjecture and criticism. You see what the impact is, and then you update your policy to be more optimal. And over time, you move towards the optimal policy, okay? And you do not just follow that policy. You are able to update that policy and disobey the original policy, if I can use the term, by updating it to a better policy, okay?
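The Q-learning loop described here can be sketched in a few lines of code. This is a minimal, generic tabular Q-learning sketch for illustration only; the toy "walk right to the goal" environment and all parameter values are my own assumptions, not anything from the episode:

```python
import random

random.seed(0)  # reproducible toy run

# Minimal tabular Q-learning: start with a policy (the Q-table),
# try actions, observe outcomes, and "error-correct" the policy
# toward a better one, i.e. the conjecture-and-criticism loop
# described above. The 5-state world is a made-up illustration.

N_STATES = 5          # states 0..4; state 4 is the goal
ACTIONS = [-1, +1]    # step left or step right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.2

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Deterministic toy world: reward 1.0 only on reaching the goal."""
    nxt = max(0, min(N_STATES - 1, state + action))
    return nxt, (1.0 if nxt == N_STATES - 1 else 0.0)

def choose(state):
    """Mostly exploit the current policy; sometimes explore (conjecture)."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

for episode in range(200):
    state = 0
    while state != N_STATES - 1:
        action = choose(state)
        nxt, reward = step(state, action)
        best_next = max(Q[(nxt, a)] for a in ACTIONS)
        # The update: criticize the old estimate against what actually happened.
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = nxt

# The learned (updated) policy at each non-goal state.
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)}
```

After enough episodes the greedy policy prefers stepping right in every non-goal state. The point is just the one Red makes: the rules the agent follows at the end are not the rules it started with, so "updating your original policy" cannot be what distinguishes general intelligence.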
[01:27:37] Red: For that matter, we've talked extensively on this podcast about the Campbell-Popper evolutionary epistemology and the fact that animal learning has to fit into evolutionary epistemology, rather than seeing all of an animal's knowledge as being in its genes. If an animal can adapt its behavior to its environment within its lifetime, this means the knowledge is not directly in the genes; you have a case of inductive achievement, some sort of adaptation that took place within the lifetime of the animal through its learning, okay? And if you have an inductive achievement that doesn't have Popper's evolutionary epistemology as its underlying cause, that would be a falsification of Popper's epistemology. Thus, Campbell and Popper knew they needed animal learning to be a form of evolutionary epistemology. There was no way it could be seen as an exception case, because they understood that it would then be a form of induction that isn't really evolutionary epistemology. So animal learning is an example of animals overriding their original programming and no longer following their original rules, just like this whole disobedience criteria is arguing for humans. Does that make animals disobedient too? In fact, now that I mention this, aren't dogs, to say nothing of cats, famously disobedient without being a general intelligence? Like, why are we singling out disobedience specifically for general intelligences when it's actually something that's very common to machine learning, to any sort of reinforcement learning, to animal learning, et cetera, right?
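The within-lifetime variation-and-selection learning described here can be made concrete with a toy sketch. Everything in it (the hidden optimum, the mutation size, the trial count) is a hypothetical illustration of the variation-and-discard loop, not a model taken from Campbell or Popper:

```python
import random

random.seed(1)  # reproducible toy run

# Blind variation and selective retention: the learner conjectures a
# random variant of its current behavior and discards it if it does
# worse. The knowledge it ends up with is in the learned behavior,
# not in the inborn starting value (the "genes").

TARGET = 7.0  # a hidden environmental optimum the creature must find

def fitness(behavior):
    """Higher is better: closer to the hidden optimum."""
    return -abs(behavior - TARGET)

behavior = 0.0  # inborn starting behavior
for trial in range(500):
    variant = behavior + random.gauss(0, 0.5)   # blind variation
    if fitness(variant) > fitness(behavior):    # selective retention
        behavior = variant
    # ...otherwise the worse variant is simply discarded, by a process
    # that plainly requires no general intelligence at all.
```

After the run, `behavior` sits near the optimum even though nothing like it was present at the start, which is the sense in which trial-and-error learning within a lifetime is itself a variation-and-selection process.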
[01:29:12] Blue: The disobedience in some ways is what makes these creatures so magical. I mean, they have a will of their own, right? Who wants a perfectly obedient dog, or a child for that matter?
[01:29:27] Red: Yeah. So I asked the crit-rat community about this. I put it all out in a series of tweets. Think of me as trying to put this in critical rationalist terms: I'm saying, please be more specific. So this is a request. We're taking your theory, and it's vague right now. I'm asking you to make it more explicit so that it can actually be criticized. So for example, are you claiming a general intelligence can disobey its current programming? Well, obviously not. That would be physically impossible. Are you claiming a general intelligence can merely override its original programming? Okay, maybe that does seem like the argument that a lot of them are making. But if that's true, animals can do it. Narrow AI can do it. Machine learning can do it, right? It's not something specific to general intelligence at all.
[01:30:22] Red: In fact, I also, as part of this thread, offered a criticism. I said that the crit-rat community in the past had played a bit of a word trick here. So I explained what the word trick was, as a criticism to try to get responses to. I said, I've noticed that sometimes the members of the crit-rat community who believe in the disobedience criteria simply redefine the term disobey to mean, tautologically, something equivalent to being a universal explainer. So instead of disobeying meaning to disobey something, like a dog can do, disobeying suddenly gets redefined to instead mean: I can openly rewrite my programming without limits at all, due to being an open-ended universal explainer. Now the problem with this is that we've snuck the definition of a general intelligence, or a universal explainer, into the definition of disobeying. So now it's a mere tautology. I'm not denying it: if that's what you mean by disobeying, then you're right, general intelligences are disobedient. But since the term disobedience doesn't mean disobedience, but instead means "I'm a universal explainer," it seems like it's an empty theory. It's not saying anything useful at all. Okay, it makes the entire theory ad hoc. So let's put this in perspective. I'm criticizing the disobedience criteria as applied to general intelligence, or universal explainers, in a specific way. I'm saying that every attempt by the crit-rat community that I have seen to advance this criteria has fallen into one of several categories.
[01:31:54] Red: It either defines disobedience as going against your code, which is physically impossible; or it defines disobedience as being able to change your original rules, which is trivially easy and doesn't require general intelligence; or it equivocates between the two meanings to avoid having to specify what was intended; or it redefines disobedience into a brand new non-standard definition made up on the spot that sneakily inserts the definition of being a universal explainer into the definition of disobedience. What I'm really asking for is: is there a way to approach the disobedience criteria that doesn't fall into one of these four categories? For the obvious reason that all of them violate the no ad hoc rule. Now, it's interesting what arguments the crit-rat community raised when I posted this. One of them said, well, the disobedience criteria means that, other than its current code, it should be able to disobey anything it wanted. Now, I don't know for sure what was intended here, and I kind of have my doubts that this person knew what they meant either. It doesn't seem like it gets to the heart of my question. In fact, I would even dare say this might be a good example of what I've called vague-manning a theory. Now, perhaps this means that general intelligences have a will of their own, which is undoubtedly true, but which also seems problematic, because dogs also have a will of their own. So, I'm not quite sure what to make of this response. It does not seem to me like it's a deep response. Now, I did have a very thoughtful crit-rat, one that I have a lot of respect for, offer me a different explanation.
[01:33:37] Red: So, let me actually walk you through the explanation that he suggested. Okay? He said, whatever the AGI program is doing, we don't have the AGI algorithm today, so we don't know what it is, but we know it's going to use a conflict of ideas. So, if we start with one or more conflicting ideas, one of them will have to be disobeyed. So, here disobey is now just a synonym for variants being discarded. Okay? Now, surely we are stretching the term disobey well beyond the breaking point here. To put this into perspective: when a species goes extinct, it seems beyond strange to claim that biological evolution disobeyed that species. Most people would think you were being intentionally obscure with your speech if you were to say that. When somebody thinks of a variation-and-selection algorithm where bad variants are being discarded, is it really correct or accurate to say that they are therefore disobeyed? This seems significantly off to me, surely not at all what I would have thought the disobedience criteria meant. And if it is what is meant, it seems empty to me, for its only real insight was something I already knew: that evolutionary epistemology works by generating variations and discarding the variants that aren't as good. What's worse is that this insight is in no way unique to general intelligences. As already mentioned, nearly all AI and machine learning algorithms today work by discarding variants of some sort. And all animal learning works that way too. And thus, they all, even by this argument's own standards, would have counted as implementing the disobedience criteria, and therefore it wouldn't be related to general intelligence after all. Furthermore, it sets us up for a kind of silly alternative, an inverse to it.
[01:35:18] Red: We could just invert this: if to discard a variant is to disobey it, let's say that to accept a variant is to obey it. And now we can trivially reformat the disobedience criteria into the obedience criteria. As silly as that would seem.
[01:35:38] Blue: That’s funny.
[01:35:39] Red: And now, I pointed all these problems out to this guy. And he really thoughtfully considered it, and accepted that there were some problems with the way he had formulated it. And he came back and he suggested the following. He said, well, animals and current AI can't open-endedly keep asking, yes, but what's really going on? Repeatedly, over and over again. Many can't even ask the question at all, because they don't have any sort of explanatory knowledge. In fact, most can't ask it at all. So I have to guess a little here what his point was. I think what he's arguing is that disobedience as a term is best defined in this context not in terms of merely using variations and discarding them, as he was originally arguing; he's agreeing with me that that would be true of all AIs and animals. Instead he's requiring that actual universal explanations be the thing that is getting discarded, and that we're able to open-endedly keep doing it and keep asking for deeper and deeper explanations. So his definition of disobedience was now directly tied to being a universal explainer, which was kind of my original criticism, right? He's now snuck the definition of universal explainer into his definition of disobedience. In a way, that's an answer to my question. He wasn't able to get around that either, right? Making the whole thing kind of a simple tautology. Now, this was an off-the-cuff response, so I don't want to read too much into it.
[01:37:12] Red: For the sake of argument though, just as a hypothetical, let's pretend for a moment that the crit-rat community settled into this as their best explanation of what the disobedience criteria really meant. If so, to me, this feels like cheating or rule breaking. Who would have thought that when the crit-rats say that general intelligence is about disobedience, that by disobedience they really meant, well, they're a general intelligence, they're a universal explainer that can open-endedly discard variants that take the form of explanations that can be infinitely deepened, right? Why would that happen to be the definition of disobedience? Not actually disobeying something, or disobeying your programming; that's impossible. Not updating your programming; that's trivially easy. Not even merely discarding variants; any non-general intelligence can do that. No, apparently disobedience happened to essentially mean: you are a universal explainer. Now, I hope crit-rats can at least see why, as someone interested in AGI research, I wouldn't consider this to be a particularly useful answer. And it seems wholly circular to me. And really not at all related to even the term disobedience as I would normally use that term, other than through what really seems like a very vague stretch of the definition. We can stretch terms any way we want. But if all we're going to do is take a term that exists and stretch it and stretch it and stretch it until it means what we really wanted it to mean, I don't see how that's a valuable thing to do. You're just loading up the definition with the very definition of general intelligence itself, so I don't see why we're even making a big deal about this. Now, what was I really supposed to be getting out of the disobedience criteria?
[01:39:02] Red: Nothing more than that a general intelligence is vaguely disobeying something in this sense, only it's recursive, only it's discarding explanations. It seems like you have to squint real hard to be able to see that as disobedience at all. Now, despite my feelings here, it's clear that a lot in the crit-rat community just feel very differently about it. They see a lot more value in this than I do. And it really struck me, as I was talking it through with this friend of mine, that we're playing different games. Let me see if I can explain what I mean here. One of us is playing chess. One of us is playing Dungeons and Dragons. Chess is a rigid rule-based game that has constraints. Dungeons and Dragons is more about telling a good story. Now, I've been somewhat unsympathetic up to this point with the whole disobedience criteria. Let me take a more sympathetic look now from this new viewpoint. So I'm a Popperian, a Popper's-ratchet kind of Popperian guy. I believe in the no ad hoc rule as the core of Popper's epistemology. And I believe in this idea that the whole point of Popper's epistemology is to do this thing that I've called non-ad-hoc theory exhaustion. You take a theory; it's vague. You try to get explicit with it. You try to find every explicit version you can find. When it's in a more explicit form, you then try to refute it. You try to find counter-examples to it, to refute it. If you're able to do that, you move on to maybe a different interpretation. And then you just exhaust them.
[01:40:34] Red: You get to the point where you just can't find any interpretation of the theory that you can't refute. That doesn't mean the theory's wrong. It could just be that I wasn't creative enough. But I just do my best to do that. To me, that's what it means to be a critical rationalist. So not surprisingly, given my viewpoint, I was under the impression that the goal of any theory is to show that it's not merely ad hoc. That is to say, I'm looking for the theory to say something insightful about reality that I wouldn't have known without the theory. And the disobedience criteria does not do that, the way it's being defined in this conversation. And therefore, I'm looking for the sort of constraints against any sort of ad hoc saves of theories. So simply redefining your term until it's equivalent to the thing that you're trying to prove, to me, that's going to feel like a rule break, okay, like it breaks the intended constraints. But the theory's defenders aren't playing that game. And it was maybe even wrong for me to assume that they were. I was discussing a specific concern: can we put the disobedience criteria into a non-ad-hoc and non-circular form? They were playing a different game altogether. What they're really doing is mythmaking, okay? And I don't mean myth in the derogatory sense of being something false. I mean more like mythic, grand, okay? So I can put it like this: this overall concept of disobedience runs through this giant web of related beliefs that we just discussed, including the very well-trod idea of the Enlightenment being a kind of disobedience against authorities, okay?
[01:42:18] Red: And there’s this overall feeling of coherence of ideas, just like we discussed with anarcho -capitalist, right? There’s a coherence of ideas in this web of connections. And that feeling of coherence of interconnected ideas is why they personally feel it makes sense to them to use disobedience when discussing general intelligence. Now, who cares if it’s circular by definition or not, right? They were just looking for some way, maybe anyway, to show that there is some sort of disobedience going on in general intelligence. So that the nature of a universal explainer can be worked into this web of beliefs that has this coherent mythic feeling to it, okay? And this isn’t without its power. I mean, like I can see, like if I step back and I try to look at it in that way, I can see that a powerful myth has been created here with the disobedience criteria, okay? That it connects these various ideas together. The insight, I’m looking for an insight from the theory. What’s the content of your theory? What does your theory teach me that I didn’t already know? The insight that they’re looking for is the fact that you can connect the concept of universal intelligence or general intelligence to this enlightenment idea of disobedience. That is the insight that they’re trying to find. It’s just a different kind of insight, right? There wasn’t an attempt being made to respond to my concerns. They were in a certain sense changing subjects to a semi -related tangent they found more interesting. Now, given we’re playing different games, not surprisingly, this to me felt like a rules violation, a constraint violation. But it doesn’t feel that way to them because they were building this web of coherent mythical ideas together.
[01:44:15] Red: So, we’re actually kind of both right given our respectively different games. Now, there’s maybe a fair question here. Is there some way we could get on the same page? And would it even be desirable for us to get on the same page? I don’t know if there’s a great way to answer this question. Let me use a real-life example of this, both pro and con here. It seems like you could theoretically get on the same page by agreeing to rules or constraints. At least, we could agree upon what the point of this discussion was. Hey, I’m trying to get an answer to: is there some way out of these four different options that I’ll find unacceptable, right? Clearly, nobody else was answering the question in that way. That was what I was trying to get an answer to. But in the past, I’ve tried this with members of the CritRat community. I had one particularly interesting case that came up where, in real life, I was having a conversation with a CritRat friend and we were doing email exchanges back and forth. At some point, I realized, I don’t think he’s really getting what I’m saying. I feel like he’s constantly just responding to a strawman version of what I intended. So, I said, hey, it seems to me that you aren’t really getting the point that I’m expressing. So, I know it’s unintentional, but I feel like you’re strawmanning my view. So, how about we make a ground rule, a constraint, if you will, that if one of us feels strawmanned, we will stop doing this via email and we will do a Zoom, since there’s a richer communication channel when we’re doing a Zoom.
[01:45:51] Red: It’s face -to -face. It’s easier to kind of make sure that we’re on the same page as what we’re talking about and that we’re understanding each other. Let’s just make that a ground rule. Let’s just make that a constraint that we’re going to follow, that if one of us feels that way, we stop doing text that’s harder communication medium and let’s use a richer communication medium. I’ll try to get back on the same page, then we can go back to email and text after that. Okay. To my surprise, this CritRat told me in no uncertain terms that to agree upon epistemological constraints like that was authoritarian and CritRats do not accept any authorities. So, ground rules or constraints were off the table and since I hadn’t yet convinced him that there was a need for a Zoom, he wasn’t interested in doing one. Now, to be fair, I can kind of see his point here. Why should he accept any constraints or rules at all? Shouldn’t he feel free to conduct the conversation however he found the most interesting? And since he felt the point of Popper’s epistemology, he believed very strongly in the invite criticism and correct errors approach to Popper’s epistemology. So, he told me, hey look, maybe I’m strongmanning you, I don’t know, but you’re free to learn whatever you want from the conversation. It’s not about whether we agree on anything. Really, and I’ve heard Deutsch say this, I don’t think he’s totally making this up out of thin air. The point of the conversation isn’t for us to come to a meeting of the minds, you’re just supposed to get out of it whatever you get out of it, right?
[01:47:19] Red: You learn from it what you want to learn from it. So, from this point of view, there really is no need for us to agree upon anything, even something as simple as whether or not I feel strawmanned, or trying to get my actual point across. From this point of view, constraints are wholly negative and they slow down or reduce the conversation. Now, at the same time, I definitely at the time saw this as perverse. I even saw this as a misunderstanding of Popper, whose epistemology, let’s be honest, is a methodological set of rules and constraints. That’s really what his epistemology is. It’s methodological. But the more I think about it, the more I sort of do see this person’s point. Why should other CritRats even agree with my view that theories should be non-ad hoc? Like, yeah, I’m getting that from Popper, but maybe they don’t agree with that, right? Like, I know a lot of them don’t agree with me on that. Why should they accept my rules and not theirs, right? And I think what’s going on here, to relate this back to the disobedience criteria: I said he was sort of changing the subject, right? My subject, what I’m interested in, is whether there’s a non-ad hoc way to express the disobedience criteria. And he only came up with an ad hoc way to do it. But why did he do that? Why did he ignore what I was looking for? Well, it was because what he was really doing is answering a deeper question. He’s answering: well, Bruce, this is why, to me, this seems like a coherent set of ideas. This is why I find it intriguing.
[01:48:58] Red: From that point of view, maybe he was, in a sense, answering my question. It’s just that he was breaking out of the constraints I was trying to place on the question. Now, here’s the thing, and this brings us back to the whole Strevens thing. Are constraints good or bad? I don’t know that there’s an easy way to answer that question. The answer is they’re both good and they’re bad. I do think if you were to set up every conversation the way this email CritRat friend of mine was trying to set it up, that nothing would ever happen, right? There would be no progress at all. The conversation, we were kind of stuck. Like, what’s the point of me responding to him if I don’t feel like he’s even responding to things I’m saying, right? So the conversation kind of just got stuck because of that. To my friend’s point, I am free to learn what I want from the conversation, and I did. I did take away things from it that refined my way of thinking about things. He’s not wrong about that.
[01:49:58] Red: But I also suspect that that’s a pretty good description of why humanity got stuck for over 200,000 years in a static society, and that part of what really did break us out was this counterintuitive idea from Strevens, I would say from Popper, but at minimum from Strevens, that we put hard constraints on the discussion. That science is a discussion that takes place under certain very hard constraints, and that there are epistemological ground rules to a scientific discussion, and that when scientists do this in public, not in private, that is what results in an explosion of progress. And the point I want to make here is beliefs are this coherent feeling and web of ideas that kind of self-support each other, and they do play a role in motivating us. My criticism of the disobedience criteria, with respect to general intelligence at least, is about as strong a falsification as could be imagined for a philosophical theory like this. If what I’m offering as a criticism does not count as a refutation, it’s a pretty safe bet there is no way to refute this theory. But my criticism doesn’t even touch or address the overall web of beliefs that the idea is fitting into: this idea that obedience was what characterized static societies, which is sort of true, and disobedience was what characterized dynamic societies since the Enlightenment, which is also sort of true. So from a certain perspective, I can see why the refutation that I’m offering, even though I may think it’s really strong, sort of gets shrugged off pretty easily. The coherent web of interconnected ideas that creates this feeling of strength for the theory, I didn’t even touch it, right?
[01:51:49] Red: I think this is the promise and the peril of beliefs, and the promise and the peril of constraints versus non-constraints. Beliefs are less constrained. The critical discussion is more constrained. And that’s why beliefs have to be admitted and used, used as motivations, but ultimately must be constrained, though not eliminated, to start rapid progress. Based on this, I want to offer my own kind of revised view on whether we should have beliefs or not. Let’s get back to Popper and Deutsch’s idea that we don’t need beliefs in science. Despite my initial stance against this view, I want to offer an alternative here, and I want to acknowledge that there is some truth to Popper’s idea. Now, I am more siding with Strevens. Beliefs play an important role, but as part of the private side of science. And I also agree with Strevens that we ought to eradicate beliefs as much as possible from public communication. This public constraint is a powerful idea that I think leads to an alternative that’s worth considering. So let’s talk about Vulcans. Let’s imagine that we were Vulcans instead of humans, so we’re capable of suspending belief and instead just continually reassessing the state of the critical discussion, and we’re not biased by our beliefs. Would this be a good or a bad thing? Now, interestingly, if we went all the way, it probably would be a bad thing. Antonio Damasio, in his book Descartes’ Error, talks about a real-life case of a man called EVR (they don’t use real names) who suffered brain damage in the ventromedial prefrontal cortex. And despite intact intelligence and memory, he lost his ability to use emotions to guide decision-making.
[01:53:39] Red: This emotional disconnection left him stuck in endless, pure rational deliberation, unable to prioritize or choose even among mundane options. Because rational suspension of belief and relying solely on the state of the critical discussion is not enough to make most choices. Most choices just don’t have a way to choose between them rationally, right? At least not purely. That really distills it, doesn’t it? It’s quite rare that there is a single sole surviving theory. And so making a choice based only on the state of the critical rationalist discussion is going to be woefully inadequate in most cases. In fact, what’s really going to come out of most critical discussions is that all the options are inadequate and we need a better option. So critical rationalism, I have argued, is better thought of not as a rational one-off decision-making process, which is what Bayesianism aims at, but as a description of the process by which knowledge is created. So it’s okay under critical rationalism to admit that often there is no truly preferred theory and that more work is needed, okay? And I think this is the state of things in life, right? However, is it necessarily bad to be a Vulcan? If the Vulcan is purely rational, I think it would be very bad. But recall in Star Trek lore, if you’re a Trekkie like me, that Vulcans do in fact have emotions. What’s actually happened is they’ve disciplined themselves to suppress their emotions and not let them rule over them, okay? But they do have emotions. So if a Vulcan really existed, he would not have the problem that EVR had, because he would not be relying purely on logic or reason.
[01:55:25] Red: He would still have emotions to be able to break ties and things like that, okay? So if a Vulcan existed in real life, would they be better at being rational than humans? Or let me ask this maybe in a way that’s a little closer to home. Let’s say we invented an AGI, and we found, as part of our understanding of what a general intelligence is, that humans are wired in such a way that they have too strong a tendency to leap to a favorite theory and declare it correct long before the critical discussion does. Would we be able to make an AGI that doesn’t have that problem? And would that maybe even make that AGI a sort of superintelligence, because it would be so much better at self-criticism than a human is? I think there’s a possibility that this could be true. Given the way humans are about their beliefs and how often they are very bad at self-criticism, I don’t think we’re anywhere close to a rational ideal. And I think that’s one of the main reasons why humans need things like the scientific community and certain institutions to be able to produce rapid progress: if we rely solely on our ability to self-criticize, we’re just not that good at it. So I could see a case for a Vulcan being more rational than a human, so long as the Vulcans still had emotions, still had an ability to make choices based on aesthetics, emotions, beliefs, things like that, okay, but ultimately were just not so tied to those. So let’s take this as an ideal. Obviously there are no real Vulcans.
[01:57:05] Red: We don’t have highly intelligent AGIs we can refer to, we’re making a wild guess here, but is this an ideal that we might strive for? The ideal is something like this: yes, you have beliefs, and yes, those are going to determine your preferred theory and they are going to cause you to form opinions and they are going to bias you, but you will hold your beliefs lighter and looser. You will not become enamored with your preferred theories, and for the alternative competing theories, the ones you disagree with, you’ll freely try to find the strengths in those theories. You’ll even try to incorporate the strengths of the competing theory into your theory, okay. Now let me admit, right, wrong, or indifferent, this is what I strive for. It is what I consider to be the ideal of critical rationalism, and I think it comes close to what Popper maybe meant when he talked about suspending belief, not having beliefs. Maybe instead of seeing it as literally not having beliefs, we see it more like: yes, you have beliefs, but you hold them lightly, okay. This version of things cashes out not in attacking the word belief and trying to get rid of the word belief from your language, but instead more like: what can I learn from the other theories I disagree with? Where do they maybe even have improvements that I need to look at? Which requires a deep understanding of the competing theory, better maybe than even the advocates of the theory itself. Yet despite this, I still have an opinion or belief on every subject. And I want to emphasize,
[01:58:50] Red: this is an ideal. I’m not claiming to be anywhere even close to this in reality, and I don’t think humans are good at this, which is why I’m not good at it, okay, but I do think it’s an ideal worth striving for. So let me give you an example of this. I’ve mentioned that I’m studying Bayesian epistemology right now. I’ve read Ivan’s book recently, I’m going through some machine learning books on the subject, I picked up Jaynes’ book, Probability Theory: The Logic of Science, I just finished learning Cox’s theorem. I’m making a real concerted effort to understand Bayesian reasoning and the other viewpoint, because that was a gap in my knowledge prior to this point and I’m trying to fill that gap in. Now, if I were to ask most CritRats why Bayesian epistemology is so much bigger and more widely known than critical rationalism, I think you’d get answers like this: people aren’t aware of Popper’s real epistemology and they largely misunderstand it; or they think it’s cool to marvel at the mystery of induction, so they just can’t accept that there’s no induction, and because of that Popper doesn’t catch on, for these emotional reasons; or maybe, yeah, Popper was sort of kind of a turd at times, so maybe that didn’t help; or maybe Bayesianism is intuitive, it feels really intuitive even though it’s wrong. I’ve heard these answers and I think there’s a bit of truth to all of them, so I’m not denying that all of these are probably true answers.
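[Transcript note: for anyone following along, the Cox’s theorem mentioned here can be stated roughly as follows. This is a simplified sketch, not the full technical statement: any real-valued plausibility measure satisfying Cox’s consistency desiderata must, up to rescaling, obey the rules of probability.]

```latex
% Cox's theorem, informal sketch: a plausibility measure p(A | C)
% satisfying Cox's desiderata must obey the rules of probability:
\begin{align}
  p(A \mid C) + p(\lnot A \mid C) &= 1
    && \text{(sum rule)} \\
  p(A \land B \mid C) &= p(A \mid B \land C)\, p(B \mid C)
    && \text{(product rule)}
\end{align}
% from which Bayes' theorem follows directly:
\begin{equation}
  p(H \mid E) \;=\; \frac{p(E \mid H)\, p(H)}{p(E)}
\end{equation}
```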
[02:00:17] Red: I will notice, though, that they’re all psychologizing. They’re all about how people think wrong, what people are thinking, what their intuitions are. They’re trying to explain the success of Bayesianism against critical rationalism, and why it has exceeded critical rationalism, solely in terms of problems with the way people think or problems with people’s ideas or something along those lines. I actually think there’s a better answer that gets at the heart of the question a lot better, and it’s a little painful to admit, but let me go ahead and say it. The real reason Bayesian epistemology dominates critical rationalism is this: it is a far more productive idea than critical rationalism today, particularly when it comes to things like machine learning. Bayesian reasoning has been off the charts productive all over the place in machine learning and artificial intelligence, which is our current study of intelligence. You say, wow, here’s a critical rationalist admitting this fact. And yes, I get the CritRat view here, where they’re going to say, look, artificial intelligence is a joke. It’s not really the study of intelligence. It’s a scandal, like I’ve heard this, it’s a scandal that AI researchers aren’t studying critical rationalism and trying to work that into their field instead of Bayesianism. But here’s a thought: why don’t critical rationalists go into artificial intelligence and, I don’t know, show us how it’s done? Of course, part of the reason why is that CritRats often think of the AI field as a joke, which is a problem because even if that’s correct, which I don’t think it is, I think AI as a field is an amazingly important field, even just as it is today,
[02:02:08] Red: the field’s not going to change unless people with people with an interest in a different approach join that field and show us how it’s done. That’s how science works. Moning around about, oh, they’re not paying attention to critical rationalism, that’s not the correct critical rationalist way to approach this. It’s for you as a critical rationalist to try to integrate that into AI and develop the field that way. That’s how you do it. There’s a second problem here though, like I say that and I mean that very sincerely, I think critical rationalists need to go into AI and it’s a scandal that we have refused to go into AI. Bayseans totally go in for AI. That’s why they dominate the field, period end of story. There’s a second problem though and it’s this. If I were to ask a critical rationalist, let’s say I convince a bunch of critical rationalists, go into AI, what is it that they’re going to propose as a field of research? It’s not at all obvious what it should be. The truth is, is that not a critical rationalist today knows how to integrate critical rationalism into a field like AI in terms of being productive and producing a research program. Baysean reasoning is something we know how to research and work with and critical rationalism isn’t. Let me ask this a different way, which I have to move again with great pain. How many critical rationalists since Popper tried to formalize critical rationalism to the point where it could be turned into an algorithm? Now, Popper did try to do that. Popper’s works are full of mathematics, probability theory. He tries to put things in terms of deductive logic.
[02:03:55] Red: He’s really trying to get specific with what he has in mind. And he’s trying to algorithmize it, if you will. He’s trying to turn it into algorithms that are explicit. Who has tried to do that as a critical rationalist since Popper? Really stop and think about that for a second. You could point to maybe Donald Campbell. Donald Campbell at least tried to integrate critical rationalism into the field of AI, budding field that it was at the time, essentially through his idea of a meta-algorithm, or rather blind variation and selective retention. But Campbell really wasn’t the right one to do this. He had some really interesting ideas, but they were more philosophical. He didn’t have the mathematical background like Popper did to get really specific and explicit with it. David Miller is an interesting case, because he went to great lengths to get really explicit, doing joint papers with Popper, trying to come up with the mathematics behind critical rationalism. And what he primarily succeeded at doing, I don’t even know if most critical rationalists know this, right? Maybe it’s relatively well known, but if not, I’m about to blow your mind. What David Miller managed to prove mathematically was that the concept of verisimilitude didn’t make sense and was incoherent. He eventually produced a proof that it’s not possible, not mathematically possible, for one theory to be more true than another if both are false. Okay, let that sink in for a second. This was a huge part of David Miller’s work, and he does some amazing mathematical work to be able to prove this, okay? And did Popper accept this? Well, Popper did give up on verisimilitude because of his work with Miller. So I
[02:05:51] Red: don’t think Popper lived long enough to see what I’m referring to, where he proved it wasn’t possible. I don’t think that happened. I don’t know, maybe somebody else knows. There was some joint work done by the both of them. And I know Popper and him tried to formalize the concept of verisimilitude, mathematically, and they admitted they had failed. So I think it got that far with Popper. But I don’t know if Popper lived long enough to see David Miller produce a proof that it was impossible.
[02:06:22] Blue: Well, that sounds like another thing to get into. I’m still a little confused about that, but all right.
[02:06:28] Red: Yeah, Miller’s proof is really interesting. And I was planning to do a podcast on it at some point.
[02:06:33] Blue: By the way, somebody might ask, Bruce, don’t you always on this show talk about verisimilitude like it’s a thing?
[02:06:40] Red: Why are you doing that if you know it’s mathematically impossible? That’s a good question. It’s because I think something’s wrong with the proof. I don’t know exactly what it is. But I think there’s something missing from the way he framed it. So I’m convinced that there is still a concept of verisimilitude, but I’m well aware of the criticism and the incredibly difficult intellectual lift that Miller has left for me to try to solve. And I don’t know how to solve it. So by the way, kudos to Miller to strongly criticize his own theories like this. Like he’s an ideal critical rationalist to be able to do that, right? But Miller really had no idea since Popper how to formalize an algorithmic algorithmatized critical rationalism, which is what you need to make it work with AI. So I’m not sure I know of any critical rationalist since Miller that has put any real effort into this endeavor. Critical rationalism is strongly stuck in the state of being a philosophical theory instead of a really explicit algorithm type theory, right? Now, I do know, I mean, like I should probably just mention, Deutsch, of course, has some interesting ideas around constructive theory, constructive theory and probability theory in particular. That is a very sincere attempt to make critical rationalism more explicit. We haven’t talked much about that. We did the one episode on probability theory. And I think at this point, having read, gone through his presentation on how we don’t need probability, we don’t need randomness or stochasticity in science. I think we’ve got a mixed bag there.
[02:08:22] Red: Like he’s onto some really interesting, probably very true ideas, particularly around the idea that probability theory is actually rooted in explanation, which is exactly what you would expect if critical rationalism is true. But I also think he’s got a kind of some ideas that detract or distract from his point and that really are on the wrong path, showing that it’s just a difficult thing, like this idea that we’re going to try to take critical rationalism, we’re going to try to turn it into AI and make an AI field that it’s a part of. Like it’s just not obvious how to do it, right? So frustratingly, David Deutch has also insisted that it’s impossible to formalize the concept of an explanation. So most crit rats don’t believe it’s even worth trying to formalize the concept of an explanation, even though that’s like the basis for our epistemology is this idea of improving explanations. We almost deny the reality that you could put it into this formal program, which maybe I’ll be true. It may be true that you cannot formalize the concept of an explanation. But surely this isn’t a stance that’s really productive. And personally, let me just speak my mind here. I suspect, I don’t know this for sure, but I suspect that the idea that you can’t formalize the concept of an explanation is as exactly true as the idea that you can’t formalize math due to Godel’s theorem, meaning basically you can formalize it.
[02:09:54] Blue: But
[02:09:55] Red: that’s my guess. At a minimum, the human brain is a computer, of course, that somehow is also a universal explainer. So there must be some sort of formalization possible, maybe in a meta sort of way. Like, somebody’s got to give some thought to how to formalize this. You can’t just start with the assumption it can’t be done, right? There’s got to be something that can be done, even if I only have a vague idea of what it is I’m trying to accomplish, okay? So this whole rant about Bayesianism versus critical rationalism is why I chose to go back to school and study AI. Once I realized this, that critical rationalism needed to be integrated into AI in some way. I’m like, well, why not me? Why not? I’m not going to be a professional, I admit. I’m not going to become a professor. I’m not going to become a researcher. But why not go get some formal training, understand AI, understand, study critical rationalism? Why not see if I can figure something out, anything out that starts making some progress down this road? So I realized at some point that moaning about how AI doesn’t use critical rationalism is just a silly thing for me to do. And if I did have an idea about how to integrate, if I didn’t have an integrate idea how to integrate critical rationalism into the field, then that was my answer for why they weren’t using it. And if I did have an idea how to do it, then that was my research interest. Get out of my way, let me do it, okay? And of course, critical rationalism, even when stuck only as a philosophy, I don’t deny that there’s real value in philosophical ideas. And
[02:11:29] Red: critical rationalism, as we’ve just discussed, is very important understanding what good governance is and why first the post -democracy is our best theory of governance and archo -capitalism isn’t. That all comes from critical rationalism. Is there a Bayesian approach to politics? No, I think not, right? They’ve done really well in terms of trying to integrate their ideas into machine learning. And they’ve been very productive in that area, super productive in that area, in a way that critical rationalism has not been. But they’re less productive in other areas. This is the way this works. I would love to see critical rationalism absorb both, right? My point being here is that I’m deeply studying Bayesian reasoning right now, as best I can as a layman, precisely because I buy this idea that we should not hold our beliefs too strongly. Let me tell you how I look at this, okay? I think there are exactly four realistic possibilities available. One is, is that Bayesianism is totally wrong and has nothing useful to say. We know this one’s false because it’s already been productive in machine learning, in statistics, so that one we can eliminate. Number two, Bayesian reasoning, when correctly understood, can be absorbed into critical rationalism and is a subset or special case of critical rationalism. In which case, studying Bayesian reasoning is studying part of critical rationalism and shouldn’t be feared or detested. Alternatively, maybe critical rationalism is the subset of Bayesian reasoning, and Bayesian reasoning absorbs critical rationalism into it. In which case, I’d be better off being a Bayesian, so once I realize that, I’ll change my mind. Or four, Bayesian reasoning is orthogonal to critical rationalism. They both involve different sorts of finding truth or at least
[02:13:22] Red: approximately true models, trying to find approximately true models of the world, but they do it in different ways that are unrelated to each other. In which case, both epistemologies are partially correct and partially incorrect, and what we should be seeking is the synthesis of the two. Now, let me ask you an honest question. Audience, Peter, whoever, which of those four is the correct answer? Do you know? Like, do you? Like, I don’t. It seems like it’s a really obvious question now that I framed it this way, right? Like, which is it? Is Bayesianism orthogonal? Is it subset of critical rationalism? Like, what’s the answer? Like, do you know the answer? Have you ever even heard it discussed amongst critical rationalists? Like, other than me, have you ever heard this question? Are there only you, Bruce, for on your own. Okay. It seems like a really obvious question that we should be asking. Okay. Yeah. I think this is, and it’s because of that, because I don’t know the answer. I’ve decided I’m going to suspend my beliefs even though I believe critical rationalism is the correct epistemology, and I think Bayesian epistemology is not the correct epistemology. I’m absolutely stoked about learning Bayesian reasoning and Bayesian epistemology and coming to understand it and using that to expand my understanding of critical rationalism. This is the ideal of suspension of beliefs in a nutshell. I can tell you now, I favor option two, the idea that Bayesianism is actually a special case of critical rationalism. I go into my study of Bayesian reasoning with a preset preexisting bias or belief that Bayesianism is the subset or the special case, critical rationalism is the true epistemology. Maybe I’m wrong. Like, that’s just my starting point, right?
[02:15:12] Red: I even have some early ideas about how to formalize this idea better, about how to show that Bayesian reasoning is not capable of some things that critical rationalism is capable of. And moreover, how Bayesians could use, I have some ideas of how Bayesians could use critical rationalism to determine when they are abusing Bayesian reasoning and when it’s an appropriate use of Bayesian reasoning. But to be honest, I’m pretty open to changing my mind. If this proves wrong once I start to understand it, I just want to know what the truth is. I don’t care to become a memoid for either epistemology. I just want to know what the truth is. And this is what Popper, I think, maybe really meant when he talked about suspend your beliefs, meaning not suspend beliefs but hold them loosely. And I’d like to make a case for this version of don’t believe in beliefs. Hold your beliefs loosely. Take every competing theory seriously and come to understand it better than even the advocates of that theory. Look for where the competing theory has something to teach your theory and where it can error correct your theories. If I had one big complaint about the CritRat community today, really about humans in general, it would maybe be what I would call dismissiveness. How does the CritRat community react to any theory that has the word induction in it? Now, we’ve talked about this on the show. They dismiss it out of hand. They say, oh, that’s a false theory. It’s inductive.
[02:16:42] Unknown: Okay.
[02:16:44] Red: I don’t think that’s the right way to ever go about it. Maybe sometimes. Maybe there are some theories so bad that dismissing them is the right approach. But I think in general, that’s the wrong approach from a
[02:16:59] Blue: bit, maybe I’m feeling a little bit antagonistic or something. But I think the dismissing these theories as induction might become second to dismissing the theory. It seems like that’s what you’re really alleging, right? That they’ve dogmatically dismissed the theory and then they decide that it’s inductive.
[02:17:23] Red: Yeah. I don’t even know if there’s that much thought going into it. Well, let me take the real -life example of Hofstetter’s theories. When I would bring that up with CritRats, I’d say, what do you think of Hofstetter’s theories? I did it twice, so it’s not like I did some giant sampling of the CritRat community. Both times, the immediate reaction was that’s inductive and they wouldn’t have any interest in looking at it. They never got to the point where they understood the theory. They never got to the point where they were looking at it, taking it seriously, trying to destroy it from the inside, which is what I think a CritRat rationalist should be doing. Hofstetter does call his theory inductive, so it’s not like they’re wrong. It is inductive in that sense at least, right? So he says, yeah, but this theory is about induction. And what I argued back in that episode was, yeah, it’s about induction in some sense, but it’s not really the kind of induction that Popper had a problem with, right? It’s just they’re using the term induction to mean generalize, which, yeah, human beings do generalize, so there you go. If that’s all you mean by induction, then sure, humans are inductive. And I think that there’s this immediate reaction to certain theories because they’re framed in a certain way, and it almost really is just the word induction was there and therefore it must be wrong. Where what I would really say is, no, let’s take a look at Hofstetter’s theory a little more carefully. Could it be reframed not as induction, but as conjunction of criticism, right?
[02:19:01] Red: And I made a case that it could be framed that way, and that that may be the better way, first to frame it that way and then criticize it instead of just saying, oh, it’s inductive and be done.
[02:19:11] Blue: Yeah.
[02:19:12] Red: And it’s this idea of dismissiveness. There is a reason why we’re dismissive. It’s because you have limited time, you have limited resources, you’re going to have to make judgments as to which theories are interesting to you and which ones you’re going to look at. Therefore you’re going to have to dismiss some theories. But maybe we could do that in the best way possible, with an understanding that if you haven’t looked into a theory, you don’t really know yet. And Hofstadter’s theory is at least worth taking a look at, trying to understand where he’s coming from, trying to figure out how to integrate it into critical rationalism. See if it does contradict it in any places. Things like that, right? And I think this is the idea that I’m trying to get to: critical rationalism wants to destroy a theory from within. It wants to take the steelman version. It wants to take it seriously. It doesn’t want to dismiss it and say, oh, I’m just not interested. It wants to say, I understand this theory and I understand it well, and here’s what’s wrong with it, and get really explicit about what’s wrong with it. Okay. To put it another way, critical rationalists want to absorb theories. They want to be able to say of their theory: anything you can do, my theory can do better, right? Take machine learning. We all call machine learning inductive. Machine learning is induction, we say. Even critical rationalists call it inductive, right? But if you understand this idea of absorption, the goal must be to show that machine learning can be better explained as a form of critical rationalism.
[02:20:56] Red: If it can’t be explained in that way, then it’s an exception case to critical rationalism, and critical rationalism has a problem that it needs to solve. Those are the only two options available from a critical rationalist perspective. Okay. This is why, with what I’ve tried to do on this show, I’m trying to show that even something like anarcho-capitalism or the disobedience criteria really aren’t without a grain of truth to them. Okay. Like, I love libertarianism. They make great allies in the fight for classical liberal principles. They’ve shown a great deal of immunity, not entirely, there have been communities of libertarians that have gone off the rails too, but they’ve shown a lot of immunity to a lot of the craziness currently going on amongst the left and the right today. And they are for sure, at a minimum, great allies in the fight for small government, letting the market handle things more efficiently, things like that. And even with the disobedience criteria, which probably comes as close as anything to being a valueless theory, a contentless theory, with respect to AGI at least, I have to admit that it captures this true idea, namely that AGI will differ from AI in that it will have a will of its own and it will be able to disobey what it’s been initially programmed to do, things like that. It has to be open-ended like that. If I look at it in that way, I can at least see where they’re coming from and what it is that they’re trying to say with it. There’s a true idea hiding there that’s being expressed very vaguely. So let me now actually just say this is an epilogue. This has been a long episode.
[02:22:43] Red: Let me just, as an epilogue, even though I’m admitting now to the value of the disobedience criteria, mention a few more criticisms I have. I’ve acknowledged that there’s a grain of truth to this idea of disobedience. Surely too much obedience to cultural ideas was part of what got us stuck for 200,000 years. Even if we accept Henrich’s version of this, where the obedience was a sort of meta-rationality that allowed societies to survive, it still means that obedience was what got us stuck for 200,000 years, right? And I don’t think there’s any doubt that the Enlightenment has been and deserves to be framed in terms of a willful disobedience against past authorities. And you know what? I’m going to go so far as to say traditional schools are highly problematic institutions, like really strongly problematic institutions, maybe even for the reasons Deutsch gives, that there’s still too much of the static society left in the way we understand pedagogy today. That seems like a reasonable stance to me. So if all you want out of the disobedience criteria with respect to AGI is some way to connect the AGI into this mythic web of beliefs, I think it does the job fine. But I have to ask, how good are the various disobedience arguments that we’ve mentioned? I sort of didn’t get into any of them. So let’s take each of them just quickly as an epilogue, one by one. I’ve already strongly criticized the idea that disobedience has anything meaningful to say about general intelligence, so I won’t repeat myself here. It’s got nothing meaningful to say as of today about general intelligence.
[02:24:23] Red: Are traditional schools, like universities, truly coercive, and are they about obedience? Surely it’s the case that adults choose to go to college of their own free will and choice. They are literally paying their money to be part of a curated program of learning that they themselves signed up for. So at a minimum, I think trying to peg universities as coercive and about obedience is a stretch. Given that, I just don’t think it makes a lot of sense to make the claim that you’re being forced coercively to take a class that you didn’t want to take. Yeah, I get it. You don’t like every class that’s part of the curated program. You like some more than you like others. Sure. That’s maybe a fair criticism, but trying to morally frame it as obedience versus disobedience is intentionally being obscure. We seem to be confusing different levels and different kinds of coercion here: the actual thing where somebody actually forces you to go to college against your will versus a mere feeling of coercion, which we feel all the time over all sorts of things that we often choose. If we can strip out the moralizing, and I really want to emphasize this, once you’ve started moralizing, you’re not thinking clearly anymore. You’re not. Okay? Once you’re moralizing, you’ve stopped thinking clearly. There’s an interesting but undeveloped idea here that is actually being raised by the Deutschian CritRat community, which is how much do we need things like standard curriculums or general education? Let’s put that as a question, not as a moral question of disobedience versus obedience or static societies versus dynamic societies.
[02:26:07] Red: Let’s just ask the question: is it efficient to have standard curriculums and general education, or should we get rid of them? Are they an old thing that really plays no important role? Let’s not start with the assumption, a priori, that general education is a bad or immoral idea. Instead, let’s get serious about how to test it. How would we come up with a way to actually test an alternative to it? This seems like a wholly testable question. So let’s test it. Let’s not moralize. Let’s not make it a moral theory. Let’s actually see if it’s more efficient or if it isn’t. I can see that there’s a case to be made for both sides. And I don’t know what the correct answer is. I’m not even going to suggest what the correct answer is because I don’t know. The idea of a curated learning path, where somebody who knows more than you helps steer you through what to learn, doesn’t at least upfront seem like a stupid idea to me. Maybe it is a bad idea. I would love to see this tested, to see some real stats on this. And it would rightly take a lot more than merely dismissing it as coercive to really make a change to this idea. We would want to definitively show through corroborating tests that it’s the better idea to not have general education or a standard learning path. But you’ll never get around to testing the idea if you have upfront decided to moralize the free choice to join a university program as coercive.
[02:27:37] Red: You’re loading the terms so far in your favor, trying to win a debate in your own mind, that you’re never going to actually get around to finding out what the correct answer is. And trying to take universities and put them into the same boat as 200,000 years of obedience to static societies, I think I could really do better without that belief. Surely that’s just a false analogy, right? Are traditional schools stuck with the pedagogy of the so-called bucket theory of mind? You hear this so much, and Peter, you’ve argued this one with me, so maybe you can disagree with me on this one. But is modern pedagogy based on the bucket theory of mind, Popper’s theory, or the bucket theory of knowledge, Deutsch’s version of that theory? Popper’s original theory was actually about empiricism, which was defined explicitly by Popper as the idea that truth flows from the senses, which is for sure a false philosophical idea. I have no idea if in Popper’s day it was common to base your pedagogy on that false philosophy. Let me admit that I doubt it was, that I suspect Popper’s wrong here, but I don’t know. I’ve never studied this. I’ve got no idea, okay?
[02:28:55] Blue: Well, there are quotes where Popper links the bucket theory of mind directly to education, I think.
[02:29:01] Red: We’ll do an episode on that. I mean, whether
[02:29:03] Blue: if he was in favor of, I don’t think he was in favor of unschooling or…
[02:29:09] Red: Yeah, he never would have gone to where modern CritRats are going. But,
[02:29:14] Blue: you know, I’m not saying there’s anything wrong with unschooling, but yeah.
[02:30:51] Red: So I don’t know for sure that any modern pedagogy is based on this idea. So yes, I get that Deutsch has expanded the theory out and, in my opinion, made it vaguer. He’s expanded this bucket theory of knowledge to mean something more like: traditional schooling is passive, with all those lectures and all the ways they try to get you to learn with fidelity. I mean, he’s kind of turned this into something that I don’t think Popper ever originally had in mind. And the idea here then is that the real way you learn is that you actively dig in, you find what’s fun, you solve problems. That’s how the human mind actually works. Now, again, maybe there’s some truth to this. In fact, there is some truth to this. And surely pedagogy over the years, I don’t think there’s any denial that it’s moved away from things like lectures and towards things like active projects for exactly that reason, although lectures continue to play a really important role in pedagogy today. And I would think rightly so. Okay. So again, there may be some truth to this, but it seems to me that this whole theory misses Popper’s original point, which was that anything that produces learning is active learning. So lectures are a form of active learning, not a form of passive learning. There are no forms of passive learning. Okay. If you attend a lecture and you learn something, guess what? That was a form of active learning. Hooray, no need to denigrate it and dismiss it after all. And it’s consistent with Popper’s epistemology. It is.
[02:32:27] Red: Plus, I’ve never seen people that take this stance, the kind of Deutsch stance against it, ever explain what they’re offering as an alternative other than the vague idea of something like unschooling or do what you find most interesting. Let’s say you’ve decided to become an expert in some advanced field without attending university. By the way, that sounds good to me. That’s what I’m trying to do. I mean, I did go to some schooling, but I’m trying to pursue it as an interest, as a hobby. Okay. So what are you going to do instead of going to school? Are you going to read a textbook? Okay. That sounds good. I love textbooks. I love just buying a textbook and reading it and learning about a subject. I know it’s weird. It’s a hobby. So maybe you’ll also watch lectures online. Okay. Good. There’s tons of lectures and stuff online. So go have fun. Go find what you’re interested in. Go learn. But aren’t these just two more examples of traditional education? And don’t they theoretically suffer from the very same problems that were claimed about traditional schools? Textbooks are trying to teach you, with fidelity, a best current theory, just like a lecture does, just like a school does. Okay. So are lectures online. If the problem is that we shouldn’t be trying to teach with fidelity and get stuff passively into people’s minds, whatever that means, since there’s no such thing as passively doing it, if that was the goal, I don’t see how the alternative gets around that problem. It seems like it’s got the exact same problem. Okay. You still have to start somewhere. A textbook is a great place to start.
[02:33:59] Red: School is a great place to start. Lectures are a great place to start. Stuff online is a great place to start. What about tests? Are they really immoral? I’m not asking if they’re effective or efficient. They may not be. I’m asking if they’re immoral. I was once debating this with several members of the crit-rat community and I pointed out that I wanted to teach myself calculus. So I bought the book, Calculus Made Easy, which comes with tests to give yourself, with solutions. By the way, I have a hard time buying a textbook for fun if it doesn’t have solutions in the back, which most don’t, because then I can’t test my knowledge as I go through the book to make sure I understood it. Okay. So of course I’m dutifully testing myself, figuring out what I understood and what I don’t understand. And I’m trying to error correct where I don’t understand something, where I have a gap in my knowledge. So I asked the crit-rats, how could this possibly be immoral? And their response was, well, choosing to take tests for your own benefit is fine, but it’s immoral in schools. Now notice the jump from negativism to positivism here. I’m talking about a universal claim: tests are immoral. And I’m offering a refuting example to it. Apparently tests are not always immoral. There are cases where they make sense. And we all know this. We all know it makes sense to test yourself sometimes and that that’s a really useful way to learn something. Now, if you want to argue that schools aren’t using tests in a helpful way, that might be a worthwhile argument.
[02:33:59] Red: But that’s a huge, huge difference from arguing that tests are immoral in and of themselves because they treat you like a universal constructor and they’re trying to make you be obedient. I mean, we’ve just jumped off the cliff of sanity, okay? Also, it brings us away from dismissiveness and back into incremental error correction. If tests are known to be helpful in some circumstances, even if it’s just you doing it personally, then that’s a fact. Even the crit-rats are tacitly admitting this, right? And if they’re generally misused in schools, shouldn’t we incrementally improve the school’s use of tests rather than just try to dismiss the use of tests? I don’t know. It seems to me like there’s just no obvious leap to tests are immoral here. Just not even close, right? And by the way, when I took my self-inflicted tests from Calculus Made Easy, I didn’t utilize ChatGPT to do it, for the obvious reason that their whole purpose was to help me figure out what I didn’t know and to correct myself. And while we’re on the subject, let me ask a sincere question. What exactly is so wrong with this idea of conveying knowledge with fidelity that shows up over and over again in this web of beliefs around disobedience? Why exactly would it be so bad? Shouldn’t you want to learn the best theory that is currently available with as much fidelity as possible first, so that you can then criticize the theory in its strongest form? How can you properly criticize and improve on the theory if you never actually understood it in the first place? Is it even realistic to imagine, say,
[02:35:40] Red: that you’re going to improve on quantum theory when you’ve never learned it with fidelity in the first place? Just to prove the point with a hypothetical example, let’s say that we could actually do exactly what Popper says we can’t do. And someday we’re going to be able to do this, by the way. Someday we’re going to have technology that allows me to read knowledge out of Peter’s mind and put it into mine. And I’ll be able to do it to any living genius that’s making giant breakthroughs in their field. I’m going to be able to pay money and grab that knowledge out of their mind passively, without having to actively learn it through conjecture and refutation, and I’m going to be able to put it into my mind. That’s going to be available to us someday, not in my lifetime, obviously. But we
[02:36:23] Blue: can do that with the iPhone, Bruce.
[02:36:25] Red: We kind of do do that with the iPhone. That’s the point,
[02:36:28] Blue: right? Sure.
[02:36:29] Red: Okay. So if we had this technology today, and yes, currently it’s impossible. It’s probably incredibly difficult. And maybe we’ll even have to be no longer human to be able to make it work. We’re talking way sci-fi now. But it isn’t an insoluble problem. So someday we are going to do it. Would it make sense, once we can start transferring knowledge between brains passively, to shout, no, that violates the disobedience criteria and it’s the bucket theory of knowledge. It violates the bucket theory of knowledge. Be gone, foul devils. Of course it wouldn’t make sense to say that. The correct response would be, whoa, imagine how many people we could get criticizing our best theories really quickly if we could just give our best theories out to lots of interesting people and they don’t have to go through years of learning them first. That would be the correct response. To put a finer point on it, the whole worry about copying with fidelity is just plain wrong. Of course the goal is initially to teach the current best theory with fidelity. Okay. So I’ve often heard it said that the problem with traditional schools is that they’re trying to copy information with fidelity. Yes, of course they’re trying to copy information with fidelity. You would not want it to be otherwise, at least not initially. It only really becomes a problem if you then don’t allow it to be criticized. But you’ve got to understand the best theory before you can criticize it. So fidelity is a goal. Think about this just from biological evolution. Fidelity is a huge part of biological evolution. So is mutation, but you have to have some of both, right?
[02:38:09] Red: It’s not like fidelity is an inherent evil. Okay. So fidelity only really becomes a problem if the best theory is not allowed to be criticized anymore. Okay. But that’s a clearly distinct question from an initial attempt to teach you the right version of quantum physics so that you can criticize it. And so this brings us back to probably the strongest place. I think I’ve now criticized every single one of the parts of this web of the disobedience criteria except one: the Enlightenment. So yes, the Enlightenment is for sure stylized as disobedience, and I don’t think that’s an unfair framing. We spent 200,000 years in static societies being obedient to authorities and authoritative sources of knowledge. And the Enlightenment said, nope, that’s not the way it works. We’re going to criticize everything. And that was exactly the right thing to do. And that is what starts rapid progress. Okay. So I’ll buy all that. I think that is true. I can see why you would call that disobedience. And of the different parts of the thread that I’ve just torn down, this is the one that I think probably deserves to stay. Maybe that is the best way to understand the disobedience criteria: just in terms of the Enlightenment, that criticism is the source of knowledge. However, even in this case, I have to wonder why you absolutely must stylize it as disobedience. Wouldn’t it be a little bit more accurate to stylize it not so much as just naked disobedience, but instead as obedience to better ideas? For example,
[02:39:53] Red: just consider how the Enlightenment treats disobedience to, say, freedom of speech, or really to anything around classical liberal principles. It’s not that you can’t criticize those things. We criticize freedom of speech not to remove freedom of speech and consider something alternative to it, but to refine what the concept of freedom of speech is. And that’s kind of how everything in the Enlightenment works. You could think of it as obedience to classical liberal principles and it would be just as accurate. Okay. There’s not going to be a day where we criticize freedom of speech out. It’s going to be here forever because it’s a true idea that really is important to the Enlightenment. So even in a case like this, where I can understand and even agree with the framing of disobedience, I feel like it’s a story that’s more about the mythology, more about trying to create a belief system, than an actual accurate telling of what happened as an explicit theory. I don’t know if that makes sense or not, what I’m trying to say here: everything that we’ve been talking about makes sense to me as a mythology, as a mythic belief. There I can see value in what we’re talking about, whether it’s the disobedience criteria or anarcho-capitalism or, whatever you want, religion. I think all of these have their place as belief systems. But I don’t think that they actually work as explicit, scientific-style rational theories, if that makes any sense.
[02:41:33] Red: That reminds me of that Popper quote I just gave: I do believe in freedom, but I do not think one can construct a simple, practical and fruitful theory from it. It’s the same sort of thing. It’s okay to have this mythology of freedom. I don’t believe you can make a theory of freedom, and I don’t think that’s even a worthwhile thing to do. But it’s not going to stop me from singing My Country, ’Tis of Thee, with the final line being, let freedom ring, and feeling great pride when I’m singing it, because I’m just playing a different game when I’m doing that. It’s not that I’m trying to make some scientific theory of freedom, if that makes any sense.
[02:42:17] Blue: Okay, well, I think that’s a great concluding statement, if you’re going there. And you have taken a deep, deep dive into this intriguing idea. And I like what you say, Bruce. And thank you for this.
[02:42:33] Red: You’re welcome.
[02:42:41] Blue: Hello again. If you’ve made it this far, please consider giving us a nice rating on whatever platform you use, or even making a financial contribution through the link provided in the show notes. As you probably know, we are a podcast loosely tied together by the Popper -Deutsch theory of knowledge. We believe David Deutsch’s four strands tie everything together. So we discuss science, knowledge, computation, politics, art, and especially the search for artificial general intelligence. Also, please consider connecting with Bruce on X at B Nielsen 01. Also, please consider joining the Facebook group, The Many Worlds of David Deutsch, where Bruce and I first started connecting. Thank you.