Episode 71: Can Values be Objective?
- Links to this episode: Spotify / Apple Podcasts
- This transcript was generated with AI using PodcastTranscriptor.
- Unofficial AI-generated transcripts. These may contain mistakes. Please check against the actual podcast.
- Speakers are denoted by color names.
Transcript
[00:00:01] Blue: Welcome back to the Theory of Anything podcast. Today we have Ivan Phillips with us. Ivan is a frequent contributor to the Many Worlds of David Deutsch page on Facebook, Peter's page, and he makes a lot of really interesting comments, sometimes agreeing with David Deutsch's worldview, sometimes criticizing it. After episode 56, which was the episode on religion and rationality, he made some comments in favor of the concept of subjective values. Now, David Deutsch talks about the existence of objective values, and that's something that I think a lot of us probably buy into. But I felt like Ivan made a pretty good argument for the view that there are no objective values and that all values are subjective. I don't know if this is necessarily his viewpoint; he may just be throwing out the best arguments there are. But I talked to him and I said, you know what, I'd actually like for you to come on the show and make these arguments. I know good arguments from bad arguments. Normally, when I hear people make arguments in favor of moral relativism or subjective values, they're often kind of sucky arguments. And that's clearly not what Ivan's doing. So I found that really interesting. So Ivan, why don't you give us a quick introduction to yourself?
[00:01:32] Red: Thanks for having me on the show. I love the podcast and it's an honor and a pleasure to be here. My training was in theoretical physics; my PhD was about detecting CP violation at colliders. After I left graduate school, I went into the software business, but I was still interested in science and critical thinking outreach. A couple of years ago, I wrote a book on rationality, specifically from a Bayesian point of view, but really noting the fact that rationality is not something that's very visible in society. If I were to ask you, who was your rationality teacher in high school, you'd have no answer to that question, right? There's no rationality section at the bookshop. So this has been a constant interest of mine, along with philosophy. A few years ago, I read The Beginning of Infinity and then more recently read The Fabric of Reality, which I probably should have read in the other order. But it was one of those books that I thought was very thought-provoking, and I didn't necessarily agree with everything, but it was one of those things where I needed to think about it a lot more. And so, of course, I joined the Many Worlds of David Deutsch forum on Facebook. I was also exposed to the critical rationalists, who follow a sort of related philosophical doctrine.
[00:03:14] Blue: Yes, critical rationalism being the name that Popper gave to his epistemology, although it’s branched into all sorts of different ideas since then.
[00:03:24] Blue: So, yes.
[00:03:25] Green: The real Slim Shady? Who's that? What the heck?
[00:03:30] Orange: Well, that's Cameo.
[00:03:33] Green: Cameo, because Bruce is busy being me. Okay, okay. I just wanted to make sure we weren't being Zoom bombed. My kids have turned me on to these, not turned me on to them, but occasionally they'll show me these Zoom bombing videos online where these YouTubers will Zoom bomb someone, and I'm like, okay, are we about to get Zoom bombed here?
[00:03:59] Blue: Cameo is basically Zoom bombing her own Zoom. That's what she's basically doing.
[00:04:05] Orange: there’s definitely an odd thing happening here.
[00:04:09] Green: Anyway, before we move into the main topic, I’m just so curious, Ivan, I just have to follow my curiosity here. You are a theoretical physicist, correct?
[00:04:22] Red: I was trained as a theoretical physicist, but it's one of those things where I've forgotten 90% of what I ever learned in graduate school.
[00:04:31] Green: And yeah, I did look you up on one of these research sites. It looked like you published stuff at least in the 90s, or had your name on papers and things. Now, I'm just so curious about the many-worlds angle. Is that something that you currently subscribe to, or is that what attracted you to it? Or, going back to the 90s, were you always sympathetic to it?
[00:04:58] Red: I think that in the 90s, when I was a graduate student, I was more agnostic. There were elements of Copenhagen that I thought were appealing, not the sort of woo-woo features of it, but more the idealistic idea that maybe there are some questions that cannot be answered, and we should not be too upset if there really are fundamental limits on what kinds of questions make sense. But when I would consider the question of which interpretation I thought was the best, most of the time I would say, well, I don't see what kind of experimental test tells these interpretations apart; they make the same predictions. But in recent years, I've come to conclude that the many-worlds interpretation is by far the best that I'm aware of. Part of that was due to Max Tegmark. What he points out is that every version of quantum mechanics has wave mechanics in it, and so every version of quantum mechanics has many worlds in it. It's just a question of how the other interpretations want to get rid of these other worlds, and they have to do that by adding something. So it doesn't seem like the other interpretations have any advantage, and they have some disadvantage: they're not as simple as many worlds is.
[00:06:43] Green: Well, I think we’re all on the same page there. But I’m sure we could spend hours just talking about that. But thank you for that.
[00:06:50] Blue: We need to invite Bobby Azarian on the show and ask him about quantum Darwinism, which I've never actually explored.
[00:06:59] Green: Well, he hasn't returned my message. Bobby, if you're out there, return my message and come on my podcast.
[00:07:06] Blue: Yeah, he undoubtedly listens to our podcast on a regular basis, but doesn't return our messages.
[00:07:14] Green: That's exactly right, Peter.
[00:07:17] Blue: Ivan, I felt like you made a fairly compelling case in favor of subjective values. Let me clarify one thing that I think sometimes gets confused before I turn it over to you for your arguments. The argument between objective and subjective values isn't generally that either values are all objective or values are all subjective. Someone like me who believes in objective values still also believes in subjective values. There is no objective truth to the fact that I prefer strawberry ice cream over chocolate ice cream; it's subjectively true that that's the case. But that's not what I mean by objective value. That is, without a doubt in my mind, a subjective value. I don't feel like there's any need to argue in favor of the existence of subjective values. We all have them, we know we have them, that's just an accepted thing. In some sense, the hard part is arguing in favor of objective values: if objective values exist, there should be an explanation for them. I don't really need an explanation for subjective values because I know I have subjective values. I don't know if that makes sense or not. So in some sense, I accept that for those of us who believe in objective values, and maybe this is completely the wrong term, the burden of proof is on us. It's really up to us to make the case for objective values, and your task of arguing in favor of subjective values is in some sense easier. Still, I found your arguments fairly compelling. Could you maybe take us through some of the things that you brought up and explain your thinking on this subject?
[00:09:11] Red: I think one of the first arguments I would use is Hume's dictum that you cannot get an ought from an is. There doesn't seem to be any way to look at morally neutral facts and derive some moral truth from them. I don't think Hume's argument is that there can be no moral realism, but rather that when you do have a moral conclusion that depends on some facts, you're really just leveraging one moral value to reach another moral conclusion. So let's say we want to put some constraints or limits on inflation in the economy, and we say that this is morally important. Well, why is it morally important? Because if we allow inflation to grow too high, it results in a certain kind of suffering. Now, we may be able to put some real measure on what suffering is, but you still have to add the moral premise that suffering is bad.
[00:10:38] Blue: Hume's guillotine, in essence, says there's no way to take a set of facts and turn them into an argument that you morally ought to do something, with a moral duty coming out of it. Now, David Deutsch does make a number of arguments against that way of thinking in his books. I don't know if you recall them or not. If you need me to summarize them quickly, I can. I was curious what you thought of his arguments, though.
[00:11:08] Red: I do not recall them, so I would love it if you could summarize them.
[00:11:11] Blue: Okay. So he would basically agree that you can't come up with a foundation for morality using facts, right? But he calls that foundationalism, and he considers that bad philosophy to begin with. He would argue that just as you can't come up with a foundation for morality and explain it through facts, you can't really do that for anything in science either. So goes his argument. I'm not sure if I entirely agree with that argument or not, but that's the way I've seen him argue it. Really it comes down to conjecture and criticism: morality is a set of conjectures that we criticize. And then he would say that the way facts tie into morality is that they adjust what types of moral explanations are feasible. He uses the example in The Beginning of Infinity of slavery. You might have a theory that there's a divine right for white people to hold black people as slaves, or whatever was used to justify it back then. And then out comes a slave who's escaped from slavery, and he writes a book, and he's obviously a great writer. You had this explanation in your mind that people from Africa needed to be in slavery and that their lives were better in slavery. And now you have this guy who's actually writing a book about the evils of slavery, and you can see he's just as educated as you are. While it doesn't actually disprove the original theory, which was completely metaphysical and so completely irrefutable to begin with, it does ruin the explanation you had in your head for how you were justifying that theory.
[00:13:01] Blue: And that's how facts can play into moral theories without ever actually violating Hume's guillotine.
[00:13:14] Green: This idea that the search for better explanations unites science, or empirical theories, with morality and art and all of these areas where human beings are capable of making progress makes a lot of sense to me. If you look at science but you really blur your eyes, what are you looking at? You're looking at a large conversation. There are arguments, there are debates. Humans have decided to let their ideas do their fighting and dying instead of their bodies, which is how Popper defines what a rationalist is, and which could be considered the central insight of the Enlightenment. But it's the same with morality, really. What are you looking at? This argument that took place about slavery: is slavery right or wrong? People tried different things, they argued. Well, in that case they actually literally did fight, so maybe that's not the best example, but there are plenty of other moral arguments people are having that don't involve actually fighting and dying. It's just a messy, complicated process where people are coming up with better explanations for how to live. And if that's not objective, then I don't know what is. Does any of that ring true, Ivan?
[00:15:00] Red: The notion that we have refutations and people make arguments and so on, I think that's very fuzzy. There are lots of bad arguments; there are lots of cases where people create rationalizations for things that they already believe. So it seems a little fuzzy to say that because people are having a debate about something, there is something objective about the field. So I am not a critical rationalist, I am a Bayesian.
[00:15:48] Red: I expect arguments that are based on evidence to make some probabilistic prediction about the world. Now, I don't see that as being in direct contradiction with what David Deutsch says, because David Deutsch's position on inductivism and on Bayesianism seems a bit strange to me. Obviously, as a Bayesian, it's one of those things where whatever you think your deepest ideal of rationality is, you're not going to be able to justify it rationally without circularity, because any arguments you apply are going to use those same deep principles. So when I think about Bayesianism and what David Deutsch has to say, I'm usually looking for how to translate what David Deutsch says into a Bayesian view. You may recall, I forget which podcast number it was, when you were talking about justification, and I said that as a Bayesian I approve of this podcast. I feel like when you get down to the nuts and bolts, you're going to end up with a view that actually fits within Bayesianism. So what I would do with the morality picture is ask how I would construct some Bayesian argument for moral realism, and I don't really see how to do that. In other words, for a Bayesian it's not just that people have arguments and that there is debate. A Bayesian would say that science features debates and all of these human institutions, but that these human institutions are approximating some kind of Bayesian inference.
[00:17:57] Red: So I would want to say that if you have an argument for moral realism, it should be something you can phrase in a Bayesian picture, so that I know what you're talking about specifically and can be assured that you're not just talking about people debating their subjectivity. And people do change their subjectivity. The things that we think are moral today are different from the things that were thought moral centuries ago, and part of that has to do with the fact that we are no longer under the same mortal threat that we were, right? In World War I, if you were on the battlefield and you started advocating for animal rights, people would just look at you strangely: why would I even think about that today? I want to survive till tomorrow; I'm not interested in animal rights. I think these are the kinds of things that lead to what we today would call moral progress. Our version of moral progress is that we've made the world so much more comfortable and so much safer that now we can expand our sphere of moral interest in the well-being of others. So I have a model for the way morality works, and it will involve people talking about abstract concepts like ethics and utilitarianism, but that doesn't mean there is an absolutely correct view. If the world falls into a worse state, I can say that I would prefer to live in the more comfortable world, but can I say that a particular moral choice was objectively wrong? I think that's difficult to do.
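[Editor's note: Ivan's picture of rational institutions approximating Bayesian inference can be made concrete with a minimal sketch of the update rule he's appealing to, posterior proportional to prior times likelihood. The hypotheses and numbers below are illustrative assumptions, not anything from the episode.]

```python
# Minimal Bayesian update over two rival hypotheses.
# All numbers here are illustrative, made-up inputs.

def bayes_update(priors, likelihoods):
    """Return posterior probabilities: posterior ∝ prior × likelihood, normalized."""
    unnormalized = {h: priors[h] * likelihoods[h] for h in priors}
    total = sum(unnormalized.values())
    return {h: p / total for h, p in unnormalized.items()}

# Start agnostic between two hypotheses.
priors = {"H1": 0.5, "H2": 0.5}
# Suppose the observed evidence is four times more likely under H1 than H2.
likelihoods = {"H1": 0.8, "H2": 0.2}

posterior = bayes_update(priors, likelihoods)
# The evidence shifts credence toward H1; the posteriors sum to 1.
print(posterior)
```

Ivan's challenge to moral realism, on this picture, is that there seems to be no observation whose likelihood differs under "moral claim X is objectively true" versus "X is objectively false," so no such update ever gets off the ground; his stolen-gasoline and Harry Potter examples later in the episode imagine worlds where there would be.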
[00:20:00] Green: Well, I think you kind of answered a question I was going to ask you. I was going to ask, what would an alternate reality where morality was objective look like? I mean, are we really arguing about religion, or would there have to be something like a Ten Commandments? But you kind of said, well, you would have to be able to assign probabilities to different moral
[00:20:32] Red: ideas. One of the examples that I've used before is: what if stolen gasoline burned with fewer calories than fairly purchased gasoline? Or take the world of Harry Potter, where good and evil have different physical effects. There are different spells that work in the Harry Potter universe, and they work in different ways based on this notion of good and evil. If those spells and moral attributes were fundamental, then you'd kind of have to say, well, moral realism seems to be true: it makes different predictions, it's a fundamental part of the universe.
[00:21:18] Green: So in the Harry Potter world, morality is objective.
[00:21:24] Red: If you can say that magic is an objective part of reality in Harry Potter's world, then I think you would say that moral realism is true in Harry Potter's world.
[00:21:35] Blue: Perhaps a stronger example here would be the Force from Star Wars, where you have literally two sides of the Force, the light side and the dark side, and they are a fundamental physical part of the universe. In that universe, if I'm following Ivan's argument correctly, that's what he sees as objective values. In Star Wars you literally have a physical good and evil that exist in the universe; it's not just how we feel about it, they have different impacts on the laws of physics. Am I following you? I'm trying my best to interpret you.
[00:22:19] Red: Yeah, I think that's true. I was just thinking specifically about what the difference is between someone that uses the Force for good versus evil. I don't know enough about Star Wars lore: do you only get the lightning if you're evil?
[00:22:36] Green: Yes, you only get the lightning if you're evil, that's right. Okay, I'll have to look that up in my kid's Star Wars encyclopedia, I don't know.
[00:22:45] Blue: It used to be that you only got the choke if you were evil too, but they've changed that; now the good guys sometimes do the choke.
[00:22:52] Red: Okay, yeah. So then it becomes a bit more ambiguous if they both have the same power. I mean, some of the Jedi sometimes seem to have maybe a neutral alignment anyway. So, yes.
[00:23:09] Blue: And that was actually part of the story. The movies don't handle this as well as they should have, but the Clone Wars cartoon does a better job of making it clear: the Jedi had in some sense unintentionally become corrupted. They may individually have still been trying to do what they thought was right, but they were making choices like becoming soldiers in a war instead of just being peacekeepers, and it had led to a kind of corruption of the Jedi at a systemic level. So it's a little more complicated than I'm saying. But in the original three movies, they literally have different Force powers. The dark side is seductive because it's more powerful initially, but in the long run the light side becomes more powerful. So they have literal physical differences, the way the first three movies describe them. Yeah, that sounds like moral realism to me.
[00:24:12] Green: Well, maybe we're asking too much from morality in wanting to tie it to physical forces. If I ask myself a moral question, should I go out and cheat on my wife today, there's probably not just one reason why that's a bad idea; I could probably think of about ten reasons. It just wouldn't be good for me and my marriage and my kids and my life; it just wouldn't be the right thing to do. It's not just that it's written down in a book somewhere, or that it's a physical law. So doesn't that make it objectively wrong in some sense?
[00:24:55] Orange: I'm going to take some issue with even that example.
[00:25:02] Green: okay
[00:25:02] Orange: Because I think we're conflating the sexuality part with the act of betrayal. Betraying your spouse is obviously not good, regardless of whether or not they were a romantic partner. That's what I'm getting at.
[00:25:23] Blue: yeah
[00:25:23] Orange: But there is not actually anything inherently wrong with you having sex with another person outside of your spouse. I believe that's a religious thing that we've adopted, and it actually has nothing to do with the natural human state. So for me, that shows it's not a good example for objectivity.
[00:25:54] Green: Okay, but there may be objective reasons why humans tend to get along better in monogamous relationships.
[00:26:02] Orange: I would argue that a fair amount of it has to do with the way masculinity has really changed our perceptions of what natural interactions should be like. I personally believe that men in particular like the concept of owning a woman. I believe that if we had a society that was predominantly run by women, there would be a lot more liberal sexuality, because I think women actually don't need that ownership quite as much. Those are all just theories, and they're just mine, because we don't have any culture that's not driven by men. But generally, the religions that we created have this notion of a dominant male, and from the science we can see on how cultures work right now, the cultures most rigidly adherent to sex roles and to that kind of morality are driven by the religions that all came out of the Abrahamic roots. I think so much of that is about the religion and the domination. That's my opinion.
[00:27:30] Green: Okay.
[00:27:34] Blue: We could easily go down a rabbit hole. You guys missed me, right? So I'm kind of withholding my opinions at this point, in fact to some degree still forming my opinions. I want to at some future point do a separate podcast where I try to explain my own views on objective morality, and I admit it's a tougher case than it first appears. I feel like Deutsch makes a lot of interesting cases, and it opened up a different way of thinking about objective morality that's different from the way Ivan's suggesting. But I would probably be the first to admit I don't find his argument as rock solid as I wish it was. I don't find it ridiculous either; I feel like he's on to something, even if he didn't go far enough. The example that I used in a past podcast was his argument for objective beauty, using the idea that Mozart error-corrected and therefore we're moving towards some sort of objective standard of beauty. I pointed out that that argument is intriguing, but it doesn't really work on its own, because you can make the exact same argument using waffle recipes, which is something we know is just parochial. So I feel like Deutsch is on to something, and it's going in a good, interesting place, but I don't feel like he quite gets there, if that makes any sense. So I've been trying to figure out how I would improve upon his formula. In terms of Cameo's comment, let me just say that I've actually researched this.
[00:29:17] Blue: I've read the book Sex at Dawn, which basically makes that case. And let me just say that the book is absolutely a polemic. That doesn't mean it isn't sometimes correct, but it is clearly making exactly one side of the argument and hiding evidence against itself. It uses the example of one small-scale society that's well known and well documented that considers itself matriarchal, and they are in fact openly sexually promiscuous. There is no concept of marriage in that society. Actually, that's not quite true, let me get back to that. But the idea is that the women simply pick up a guy at the bar every night, maybe a different guy each night, and the guy leaves before morning. Because you don't know whose children they are, the father has no incentive to try to raise his own child. So they compensate for that by having the brothers of the woman raise the child, play the father role, because just as the woman knows for sure the child is hers and therefore genetically hers, the brother knows it's his sister's child and therefore genetically related to him. And so they're able to organize their society in a different, interesting way. Now, they've never gotten past a small-scale society.
[00:30:43] Green: Is this science fiction, or is this a real society?
[00:30:45] Blue: No, this is a real society. And they do consider themselves to be matriarchal. Where is this? I forget. It gets mentioned in the book Sex at Dawn, but it's also mentioned in other completely legitimate science books; Nicholas Christakis talks about them. And because they get mentioned in all these other books that I've read, I know the things about the society that the author of Sex at Dawn is intentionally cutting out to try to make his case. For example, as much as they have tried to get rid of monogamy in that society, monogamy is such a natural part of the way human beings think that the society has a problem with rebels who run off and leave the society behind so that they can be monogamous. He completely cuts this part of the story out, but it's a huge part of the importance of the story: they have all these institutions in place to try to enforce polyamory instead of monogamy, and they actually have the inverse problem, where people will run away from the society and set up their own monogamy, because that's what they really wanted. The other thing that's interesting is that this is the only society I think we know about that has done this, and it's small-scale; there are no large-scale societies that have ever done it. And Cameo raised the idea that monogamy comes from Abrahamic religion. Well, the Abrahamic religions have had huge influence all over the place, so I can see why she would raise that as a possible influence. But there are large-scale societies in the East, China for example, that had something like zero influence from Abrahamic society early on, and they are also monogamous. So it's interesting that every single time a society became large-scale, it almost immediately adopted monogamy, and it's so consistent that it's difficult to explain it purely... I mean, religion clearly played a role, and her argument that ownership of women may well have played a role; that's such a big part of a lot of that early thinking. The early small-scale societies prior to agriculture were egalitarian, and it's really only after agriculture comes and you start building large-scale societies that you suddenly start worrying so much about monogamy, and this idea of women being owned in some way suddenly springs up. And it does in every single society, even ones that weren't talking to each other.
[00:33:23] Orange: Agreed. But are you equating the fact that humans have a preference for monogamy on a large scale, or at least performative monogamy, with it being right? Because given the opportunity, a lot of people will choose not to be monogamous, regardless of what they state they believe.
[00:33:44] Blue: That is correct.
[00:33:45] Orange: However, all of that being an aside: is it right or wrong? Just because we have both a cultural and a religious inclination to monogamy, those are very separate things, right? Just because we as a society think that monogamy is the right way to structure our society, on the surface at least, does that mean that it's wrong to choose to be non-monogamous?
[00:34:14] Blue: So that is the question, and it is a difficult question. I think what Peter is getting at is, let's say we throw the scope out further in time. You have to explain why monogamy keeps coming up. And maybe this is going to turn out to be a bad example, but is there something objective about the fact that societies have so consistently structured themselves as monogamous, or is it something entirely parochial? It could be. I'd argue that you can't have something that's both objective and parochial.
[00:34:54] Orange: I'm going to flip your statement back on you: why does infidelity or polyamory keep coming back up?
[00:35:02] Blue: Well, I think it's not too hard to explain: it's because we have both impulses. We have both a monogamous impulse and a polyamorous impulse, and that's fairly typical of how evolution works; it will wire complete contradictions into us.
[00:35:20] Orange: Yes, but why does either one of them need to be right or wrong? Coming back to Peter's example: if Peter and his spouse had an agreement that it was okay to have outside partners, then it wouldn't be bad for him to go and have an outside partner, right? They've, as adults, made an agreement to have a relationship that works that way. The thing that would be sneaky would be to have a relationship where you had agreed to a certain standard, in Peter's case monogamy, and then sneak around that agreement to do something else, to have an affair on the side, to try to have your cake and eat it too. But the actual practice of monogamy or polyamory doesn't have to be inherently related to good or evil in any way.
[00:36:25] Green: Well, I think it's an interesting question on a kind of more meta level: would a society based on polyamory rather than monogamy be as psychologically healthy for children, and for women and men too, over the course of their lives? Maybe monogamy is just this arbitrary thing that came out of an arbitrary religion. I mean, that's a possibility, but I guess I'd just like to see the societies where it works out so well. I do have some experience with my friends that are polyamorous and all this, and I think there are probably good objective reasons why monogamy tends to work for human psychology, for most people, in most places, at most times. But I mean, I could be wrong. Maybe, maybe not.
Red: I think that morality can be objectively described but not objectively prescribed. We're not blank slates; evolutionary psychology has some valuable insights, and we can point to facts about human evolution that say this is why we have these impulses, and maybe why we have both of these impulses even if they're contradictory, right? Like Bruce said, evolution will give us what seem like contradictory principles, especially if they operate in different contexts. So I think that all makes sense. On what you just said, Peter, about whether this contributes to psychological health or something like that: first of all, we should note something about humans.
[00:38:47] Red: ...the great evolutionary advantage of humans is that we can adapt to new environments, and by environments I include culture. We can adapt within a generation. We're not as well adapted to any single environment as, say, a crocodile is to its environment, but our advantage is that we have people that live in the Arctic Circle and people that live in the desert, and we found ways to adapt to living in those environments. So we're also very flexible in what kind of society we can live in, and under what kind of rules we might want to live. So I don't think those things determine it: just because this is the environment in which we evolved doesn't mean that this is the way that we ought to live in the future. We can say objectively that, as humans are today, given the economic constraints around pregnancy and the costs and other societal factors, the best thing that you could do along these metrics is to be monogamous. But first of all, that's just an average; it doesn't necessarily apply to individuals. And second, it's just relative to those metrics. So one of the examples that I probably brought up in our conversation, Bruce, was: imagine that the planet Mars has been terraformed and now it's like a twin Earth...
[00:40:30] Green: and so there’s a human society there that’s much like ours but they get infected by some random virus that causes everyone on the planet to think that assassination is just the coolest thing it’s the most beautiful awesome thing right and we could say like wow you know this is this is really bad right so obviously you know assassinating someone might feel really good to you but if you’re the one getting getting done in that’s that’s not cool right um and we can also say well it’s going to be bad for GDP um and it’s going to be bad for the their well -being but this is what they value like the virus has changed their values so what can we do to change their mind like we can’t argue them out of this and there doesn’t seem to be any I mean if you say like well your society is eventually going to go extinct if the the most valuable thing to them is is killing people then they’ll be like well yeah well I’d rather do this than live forever
[00:41:39] Red: And then you don't really have anything. Any moral argument has to weigh on values that they already have, and if they don't have that value in common, you can't make the argument that, well, this is what's best for society. Now, whether this is really a practical concern is another question. How important is it that morality be objective in the first place? Why do you even want morality to be objective?
[00:42:14] Green: It sounds like you're kind of making an argument for human universality applied to morality, in a way: that we are unique in nature, and that we're basically infinitely malleable in how we can shape our own morality.
[00:42:36] Red: I think human morality, as it is today, a lot of it is caused by the value of non-zero-sum games. If I go out and catch a meal and I share it with you, even though I'm giving up my resources to you now, the next time I go out and don't catch anything and you do, you'll share with me, and so on balance it's non-zero-sum. So I think this is the origin of a lot of human morality. Or, you know, our morality around kindness and treating family well has to do with spreading our genes. So we can talk about the causes of why this is; we can talk about non-zero-sum games, and non-zero-sum games are extremely powerful. I have flown around the world, and I've flown around the world because other people found value in making airplanes and refining metals and all this stuff. Non-zero-sum games are really powerful. So you may say any species that evolves, that is going to be able to learn a lot about the universe and develop advanced technologies, is probably going to benefit from non-zero-sum games, and so statistically we would expect species that are highly successful to...
[00:44:20] Red: ...have certain moral values in common with us. So I think that, statistically, you could say that. But we can imagine peculiar scenarios, right? We can imagine scenarios where there is maybe one giant life form on a planet, or an AI becomes so powerful that it doesn't need anybody else and can do everything itself, in which case it's only playing non-zero-sum games sort of internally. So we can imagine scenarios where this is not the rule, and for that reason it's not an absolute. It's not something that I consider to be objective. It's more like, well, statistically, this is what we would expect.
[00:45:08] Green: Could that be a form of objective morality, that it's better to cooperate? You know, the wolf pack that collaborates to hunt is more likely to survive than the species that just wants to tear each other apart.
[00:45:24] Red: Let's say we wrote that down. We say, yes, species that do this statistically are more likely to survive. Does that make them morally right for surviving?
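Ivan's food-sharing example is, at bottom, a payoff calculation, and the claim that sharing is non-zero-sum can be sketched in a few lines of code. All the numbers below are invented for illustration (a catch worth 2 units, a 50% success rate, and a min(food, 1) utility curve to model diminishing returns); nothing in this block comes from the episode.

```python
import random

random.seed(0)

def simulate(share: bool, days: int = 10_000) -> float:
    """Average daily utility for two foragers who each succeed half the time.

    A catch is worth 2 units of food. Sharers split the day's total catch
    evenly. Utility is min(food, 1): extra food on a lucky day is worth
    less than food on a hungry day (diminishing returns).
    """
    total = 0.0
    for _ in range(days):
        a = 2 if random.random() < 0.5 else 0  # forager A's catch
        b = 2 if random.random() < 0.5 else 0  # forager B's catch
        if share:
            a = b = (a + b) / 2  # split the day's catch evenly
        total += min(a, 1) + min(b, 1)  # score utility, not raw food
    return total / days

# Sharing never changes the total amount of food, but it raises total
# utility: the interaction is non-zero-sum in utility terms.
print(simulate(share=False))  # hoarders
print(simulate(share=True))   # sharers
```

With these toy assumptions, the hoarders average about 1.0 units of utility per day and the sharers about 1.5, even though the expected amount of food is identical. That gap is the sense in which the sharing game is non-zero-sum.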
[00:45:37] Blue: So let me get back to something that Cameo said. She actually made a carve-out, and this is actually what I wanted to ask her about. The carve-out was: well, I'm not talking about the betrayal, I'm just talking about the sexuality. Okay, well, let's talk about the betrayal. What exactly is immoral about betraying your spouse? At some level, the way Cameo said that, there was an assumption of an objective moral there: that you should not betray your spouse. But there are people who betray their spouse, there are people who get away with it, there are people who justify it morally. I once came across a comment on the internet (you can find anything you want on the internet) where a woman was explaining that she cheated on her husband and he didn't know, and it was just better for everybody. It was an objective good, in her mind, that she was cheating on her husband, because they didn't have a happy marriage back before she started cheating, and once she started cheating, she was happier, and that made him happier. So everybody was better off, objectively, in her mind, because she was now betraying her husband and he wasn't going to find out, so it wasn't going to do him any harm. There was no harm involved, in her mind, or this is her argument, anyhow. So even with that carve-out of "okay, I'm not talking about betrayal," couldn't you make the exact same argument around even something like betrayal?
[00:47:14] Orange: And I guess it comes down to what we expect in our relationships with other humans, because you of course don't have to be romantically involved with somebody, or have a sexual relationship, to be deeply betrayed by another human. It requires only having given trust and a high level of vulnerability in some way or another, and then having that trust be violated. So is it wrong to do that to each other? I honestly don't know. I think it's difficult to continue to maintain a relationship with another human if they do that to you. You see it in business a whole lot: one of the most common reasons startups fail is because somebody essentially betrays the other person, or does something kind of awful to them as business partners. So I think we as humans choose to continue to interact with other humans who don't treat us that way. I don't know that that necessarily makes violating somebody's trust wrong.
[00:48:42] Blue: So let me, and I don't want to put words into your mouth, Cameo, but it does seem like, regardless of whether we call it right or wrong, there's an objective effect there.
[00:48:54] Orange: There is, of course, an objective effect, because in human relationships we use a sliding scale of trust to decide how to interact with other humans. The sliding scale goes from a complete stranger, where I have very low trust, to our most intimate relationships, where we're willing to share our deepest, darkest fears, thoughts, etc. And in every new human interaction, or every continued human interaction, we utilize that scale to decide how to show up in that relationship, or what portion of ourselves to present. I think that for different people their scales are different, which is why I think it is subjective. There are people who will willingly be betrayed over and over again and continue to come back to a relationship. That doesn't necessarily mean that they believe it's right; it just means that they choose to tolerate it in their relationships, and that probably says something about them too. But I don't know that right or wrong are necessarily the guideposts of objectivity that we use to evaluate our interactions with other people.
[00:50:20] Green: I’m kind of reminded what what Sam Harris and the moral landscape wants to base morality on and I think probably most dutch people would see this as sort of reductionistic but I still like it as a thought experiment where he says well if you think that all morality is just subjective I you would have to say that the greatest possible suffering are all conscious creatures for eternity is no different than I you know anything else I guess that’s how I interpret what he says and I mean that really rings true when he says well if you if you say it’s no different I don’t know what kind of word games you’re playing here but it’s just it seems preposterous I mean that if you want to go down if you want to just accept that that one thing is objective then you in the area of morality then you have to accept that there are some objectives truths in in morality what do you think Ivan is that the worst possible suffering for all conscious creatures for eternity no different than anything else
[00:51:44] Red: So when you say "no different," I think that's the thing that's slightly deceptive about this idea. For humans, obviously, it makes all the difference in the world. I mean, we probably care more about avoiding that scenario than about reason itself. One of the moral values that I feel at my core is that one ought to try to be rational, and it is a moral value. We have a compulsion to be rational most of the time, to the extent that we're able. That is, in some sense, a moral value. Like, you can't create an argument that you shouldn't have...
[00:52:44] Red: ...this moral inclination to be rational, because you'd have to use reason to do it. But I still don't think that even that is truly objective. Is a rock evil for not being reasonable? Is a life form that is not making good inferences being evil for not doing so? I think that if someone were to say it makes no difference whether humans encounter the worst suffering forever, what they mean is it makes no difference to the laws of physics, or no difference to the universe, if that's the way it turns out. Obviously it makes a difference in the details, and it makes a difference to humans; we care about that more than anything. But I think that when people are reaching for this idea of objective morality, they want to say that it's more than just what humans care about, and I don't think that can be justified.
[00:53:53] Green: But humans are part of nature, right? I mean, of course it's going to be what matters to these humans. It's not going to matter to Jupiter; that would be asking too much, I think. I mean, who else would it matter to, other than this aspect of nature that is us, the knowledge creators? Yeah.
[00:54:15] Red: I mean, that's fine, but then, if you're asking, is it objectively true that we care about avoiding suffering, then statistically speaking, there are a few people that don't care about it, but the vast majority, or what you might call a normal human, does care about it. So you can say objectively it exists, or objectively there is this pattern in nature where we care about this thing. But is that objective morality? I don't think that it is. I don't think that's moral realism. Moral realism wants to say that moral values are not only objectively describable but also objectively prescribable, and I don't think that's true.
Green: What about something more like verisimilitude applied to morality? So it's not that you can say, oh, well, this is moral, but through our explanations and discussions and conversations and experiences, we can move closer to something that's more moral.
[00:55:23] Red: I think that you could say... well, there are two levels to this. One is: suppose we freeze our moral values for some period of time, and then we discuss, okay, what can we do about immigration policy, for example, to be more humane and to best satisfy our values? I think that kind of question is objective. We could say, look, if we employed this policy, it would satisfy more people's values better than some alternative policy. That kind of question is objective. But the question is, what if we don't? You know, if we were to go back in time 500 years, we would find people with moral values that today we think are abhorrent. And so we think that values need to be updated in order to get to where we are today. And we can do that, to some extent, by leveraging the values we already have.
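Ivan's "freeze the values, then compare policies" move amounts to an optimization claim: once the value weights are held fixed, which policy best satisfies them is an objective computation. A minimal sketch, with every name and number invented for illustration:

```python
# Each person weights some frozen values; each policy scores how well it
# serves each value on a 0-to-1 scale. All names and numbers are invented.
people = [
    {"humane_treatment": 0.9, "rule_of_law": 0.1},
    {"humane_treatment": 0.4, "rule_of_law": 0.6},
    {"humane_treatment": 0.7, "rule_of_law": 0.3},
]

policies = {
    "policy_a": {"humane_treatment": 0.8, "rule_of_law": 0.5},
    "policy_b": {"humane_treatment": 0.3, "rule_of_law": 0.9},
}

def satisfaction(policy: dict, person: dict) -> float:
    """Weighted sum: how well this policy serves this person's values."""
    return sum(weight * policy[value] for value, weight in person.items())

# With the values held fixed, ranking the policies is a calculation,
# not a judgment call.
best = max(
    policies,
    key=lambda name: sum(satisfaction(policies[name], p) for p in people),
)
print(best)  # policy_a
```

The disagreement Ivan is pointing at lives outside this calculation: in where the weights come from, and whether the weights themselves can be objectively prescribed.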
[00:56:32] Red: Right. Maybe we held certain values because we didn't think about things, or because we were fearful of something. But I think that frequently, like I say, moral progress comes from the fact that I live a very comfortable life, and I'm not afraid of the kinds of things that would terrorize the average person 500 years ago, and so I have time to consider the rights of the LGBTQ community and so on. Maybe in the past somebody would have been like, I don't know where I'm getting my meal tomorrow; why are you talking to me about this subject? It could be that if you plopped them down into a situation where they were living the same life that I am now, they would be like, oh yeah, you know, I never thought about that; now that I'm relaxed, I can kind of accept that that's a good idea. So I think that you can have progress, and you can have intellectual discussions about these things. You can talk about how values get formed, how values get updated. But one of the other arguments I was going to bring up, about whether moral realism is even useful, is to make what I think is an important distinction between morality and legalism. So imagine that I have a book that has moral commandments in it, and we know, because it was handed to us by some omniscient AI, that it contains the real moral truth about what you ought to do. Are...
[00:58:22] Red: ...you going to do what the book says without having read it? Would you commit to doing what this book tells you to do, no matter what it says? Like, do you want to be good for the sake of being good, without knowing what goodness entails? My claim is that you don't know what's in the book. It might tell you to do the most horrible things you can imagine, and just because this AI knows that it is objectively good... like, why not be objectively evil, if good and evil are determined by what it says in the book? It's sort of like a legalistic framework: do you want to follow the law, do you want to be a lawful person or not? If you didn't know what society you were living in, you'd go, well, what does the law actually say? We don't value being lawful without knowing what being lawful entails.
[00:59:20] Green: Well, I guess I'm kind of hung up on this idea that an AI could really produce a book like that. I mean, an AI could be operating under false assumptions just like anyone else, so I guess I would be highly skeptical of any conclusion it would make.
[00:59:39] Red: What if it was a natural intelligence instead of an AI? Then, if we want to set aside...
[00:59:45] Blue: His argument's not going to change if it's a natural intelligence. What he's saying, and this is a critical rationalist point of view, is that there is no theory that can't be criticized, because it's impossible to ever know you've arrived at the truth. I think that's what he's really getting at here: of course, if this book said "this is the objective moral truth," and for the sake of argument let's say it was the objective moral truth, it would still be the case that he wouldn't know that. He would need to have it explained to him. The book would need to explain to him why this is the objective moral truth, and he would have to see that it is correct before he would be able to adopt it. I think that's what you're trying to say, right, Peter?
[01:00:30] Green: Oh yeah, yeah. I guess I'm pushing back against this idea that there even could be an infallible anything, in principle, because there are always misconceptions. I mean, isn't that kind of one of Popper's central insights, that our ignorance is infinite? Why wouldn't an AI have infinite ignorance as well?
[01:00:55] Blue: So obviously, when Ivan raises a thought experiment like this, thought experiments aren't meant to be...
[01:01:05] Green: Fair enough, I just said I was hung up on it.
[01:01:06] Blue: Right, but I'm trying to find a bridge here, because I think a bridge exists, okay?
[01:01:10] Green: okay
[01:01:10] Blue: Popper does not say that you have objective truth. Now, I do think Deutsch says that, but Popper doesn't. In fact, Popper would strongly disagree with Deutsch on that. And so you can have a theory that is true, okay, under Popper's critical rationalism. You just...
[01:01:27] Green: You mean a definition of truth in the terms of, like, justified true belief?
[01:01:31] Blue: If I were to give... okay. And in fact, it's not even that hard to see why this has to be the case, why there has to be an objective truth. Let's say I have a theory that George Washington was a real person. That's a legitimate theory for me to hold, right? Well, I'm either objectively right or I'm not. I've formulated the theory in such a way that it's not a matter of verisimilitude: I'm either right or I'm wrong. It's not that hard to formulate theories that way. So a theory can be correct objectively, but you can't know that it's correct. I can never know for sure that George Washington is in fact a real person, and that he wasn't just some made-up fictional entity that we have since forgotten, where, for reasons that are absurd to us, people tried to create all the different evidence that exists for him because they needed that to win a war. We could make up some sort of story, right? I could never know for sure that that isn't the case; I can simply have a best theory on the subject. But I could be objectively right. So I think what Ivan's trying to say is: okay, set aside the fact that it's absurd that this AI could ever really exist. Let's say that there's this superintelligence (we're assuming a superintelligence is a thing; that's part of the thought experiment), and it has actually worked out what is objectively correct morally, because we're assuming moral realism is a thing. And for the sake of argument, it has correctly worked it out. Now, under Popper's epistemology, it can have objectively worked it out, if moral realism is a thing. It may never be able to prove that beyond doubt, and you can never know for sure beyond doubt; that would still be the case. But I think Ivan's arguing that it contains the objective moral truths, even if it's just by chance: it did its best, it criticized, it got to the objective moral truths. Now, however far-fetched that still is, and it is pretty far-fetched, because unlike the example with George Washington, where I intentionally made it so that it has to either be true or not, even if there was moral realism it seems unlikely that it would just be a straight theory that is either true or not. It would far more likely be some sort of verisimilitude, just like physics. All of our physics theories are useful because they have verisimilitude, but they're all false, right? And that's really what David Deutsch is getting at when he says we should talk about all our theories as misconceptions. I realize this is like a massive stretch, but I feel like Ivan is getting at something fair here, which is that it's at least a possibility that there is a true physics theory and that we'll someday possess it. We may not know that we possess it, but physics does something. I don't see any particular reason why we couldn't stumble upon the one true theory of everything for physics. If that was impossible for us to do, there would have to be a reason why it was impossible. Now, Frank Tipler has argued that it is impossible for us to do, and he has this argument based around the fact that it requires an infinite number of parameters. I'll buy that. I'll buy that it could be that physics is completely impossible to specify as a single true theory. But you could imagine us living in a simulation: it's running on a computer, and somebody comes up with what the algorithm is for that simulation. It would be an objectively correct understanding of their reality. So with the right sort of reality, you could have the one final theory. Now, I know that Deutschians, Sam Kuypers for instance, immediately argue with me: no, that's not likely to happen, because then you couldn't make progress. Well, that's not true, even when you have an objectively correct theory. Let's use Euclidean geometry. Now, it's actually not an objectively correct theory physically, but let's say it was. You can have an objectively correct theory where you have the axioms, and then you can spend the rest of eternity working out the implications of the axioms. So there is a difference: the idea that once you have a final theory you can't make progress is not true. So I'm going to remove that from the argument. Okay, but even...
[01:06:12] Blue: ...with the simulation environment, if you know the algorithm of the simulation, doesn't that just open up a whole new set of questions about where that algorithm came from? It does, you know. Okay, so, yeah.
[01:06:26] Red: I just wanted to say, for the omniscient AI, you can substitute whatever plot device you want to assure you that what the instructions in the book say are morally so.
[01:06:39] Green: It could be God. Objectively, we're talking God.
[01:06:41] Red: well if it’s god it could be yeah i mean let’s say it’s god right so god writes down these rules that say these are the things that you must do if you want to be objectively good do you want to follow what’s in the book without knowing what the book actually says or would you prefer to be evil i mean this is setting aside you know what is god going to do as retribution if you don’t follow what’s in the book but but but even then right like if if the book says that you must torture people for eternity or something and that’s what it takes to be good according to what god says would you rather be good or evil like if you would like to be objectively evil but subjectively good and be kind to people like it’s difficult to say that you would be you would be wrong or that that this would be a like this book wouldn’t be valuable to you right it doesn’t it doesn’t tell you it’s not because it would only be valuable to you if you wanted to be good no matter what the contents of the book were but i argued that nobody actually wants that yeah right like we could imagine like some some peculiar version of utilitarianism right let’s say somebody could prove some some bizarre version of utilitarian was true but it entire but it meant that you know it had some bizarre consequence like we have to kill off the unhappy people or something like this right to raise the average um like do we do you then care about this objective morality like you don’t right it’s not something that that you value okay
[01:08:31] Blue: So...
[01:08:32] Red: I wonder... like, I'm sort of getting at a meta angle here: why do we want morality to be objectively true anyway? Isn't subjective truth the only thing that matters in any case?
[01:08:43] Blue: Okay, let me ask you a question. This is something that you actually... I'm going to quote you from Peter's Facebook page. So, in episode 56, I did argue that if values are subjective, then I guess I'd perceive nothing particularly wrong with believing a lie if it comforts you and if you find great utility in it. And you really strongly reacted to that. Let me actually read what you said. You quote me making that argument, and then you say: "We might as well do nothing at all, or join the millions of...
[01:09:27] Blue: ...epistemologically irrational people and trick ourselves into believing fantasies in which we would live forever. This seems like a weird sort of pathological consequentialism that attempts to maximize a hedonic impulse at the expense of epistemic rationality. It is absolutely horrifying to me. Admittedly, I am a moral subjectivist, and you may take that route if you truly value the fantasy above epistemic rationality; however, I feel very different." Okay, so I guess when I read that, though, I kind of shrug. If I'm taking the subjectivist moral view, which I was, which is why we were having this discussion (I don't actually believe that myself; I'm intentionally taking what I understand the subjectivist moral view to be), it seems to me that the fact that it horrifies you means nothing. I mean, it's like you saying, "I'm horrified that you prefer chocolate ice cream when strawberry ice cream is what I like." And I kind of shrug. It's like, okay, great, I can see that that's a subjective value of yours. I can't think of any particular reason why it matters that the irrational religious person is getting value out of their false belief that they have an afterlife and will live forever and things like that. I honestly can't see why it matters in the slightest that they're being irrational, because of this reality of subjective values. Would you like to maybe respond to that?
[01:11:04] Red: I can see your point of view there. I just think that... I mentioned in that quote something about consequentialism, and for the longest time I was a thorough consequentialist. I was really personally upset whenever tradition or deontological ideas caused injustices to occur. So I was very much a consequentialist, but lately I've been a little bit more deontological, and maybe virtue-ethics oriented, myself. What I would find myself doing online... of course, I love to get into debates online, but why do that? Like, I wrote a book on rationality. Do I really think that tomorrow people are going to buy my book and go, "oh, okay," and then they'll start teaching rationality in schools? What's the probability that this will happen? You put a lot of zeros after the decimal point, and then a one. So then, was it a complete waste of my time to write this book? I spent a lot of time writing it. Or I go online and I argue with people. How effective would you say I have been in changing people's minds? Probably not very effective, right? But I think that this is something important. If someone has a false belief, I don't find myself that offended by the fact that they have a false belief, but if they start teaching the false belief, then it really bothers me. And it occurs to me that where this is coming from is not...
[01:13:14] Red: ...just a consequentialist argument. It's not just, okay, they're doing this, and if they do this it will have this negative consequence. It's also that I think that rationality and believing the truth is virtuous, and that believing something that is much less probable than the alternative is in some way profane. Now, when I say profane, I don't mean in an objective sense. I'm an atheist, so I don't think that there's anything to lay down an objective idea of what is pure or what is good. But part of the reason that I'm interested in what Deutsch has to say, or that I follow the critical rationalism forum, or that I'm a Bayesian, is that I think good epistemology is virtuous.
[01:14:13] Blue: So it seems to me that you're now talking about an objective moral. I still see what you're saying. You're saying, "I'm not really claiming moral realism here," and yet it seems to me you're kind of sneaking it in the back door: that there is, in fact, this objective moral that we should be rational. By the way, the name of his book is Textbook Rationality: Rationality and Why We Should Teach It in Schools, by Ivan Phillips, 4.6 on the reviews. That's the book he's referring to. And I guess I have a similar question about the fact that you talk about moral progress. What does it even mean to refer to moral progress in a purely morally subjective world? You're describing to me what we mean by it: well, we have this chance to stop and think, and then we think, oh, I now have the ability to stop and think about animal rights, for example. But even just the phrase "moral progress" is baking in an objective morality that we're progressing towards, right? In a purely subjective world, where there just are no objective values at all, I'm not even sure "moral progress" is a meaningful term. It almost strikes me as... it must just be an incorrect term.
[01:15:48] Green: It seems to me, and I think David Deutsch said something similar, that if you really are saying that morality is subjective, then what is moral progress in that universe? You're really saying that a society where slavery is practiced is morally equivalent to one where slavery is abolished, and, you know, apply that to a thousand different other things.
[01:16:21] Red: But when you say it's equivalent, you mean it's equivalent if the only consideration is objective moral justification. That's sort of embedded in that statement. If you're asking, is it equivalent given subjective moral justification, then of course it's not equivalent at all. So I think there's something that maybe subtly begs the question when you say that. And the same thing really goes for moral progress. We live in a society now where, okay, we still have plenty of problems that we need to solve, but most children make it to adulthood. Yes, parents do worry about their children making it to adulthood, but for the most part those worries are maybe sort of overblown. We live in a society where we have medicine, where we have public education, where we have access to healthcare (not as much as we'd like), but we have a lot of things that make the quality of life today better than kings lived 500 years ago.
[01:17:46] Green: And that sounds like moral progress. I mean, it's intertwined with technological progress, one feeds off the other, but it's still moral progress.
[01:17:56] Red: It's subjective moral progress, in the sense that it starts from our evolved baseline values. We have these propensities for non-zero-sum games, where we care about our children, we care about families, we care about friendships, we care about good health. We cared about all of these things before, but when we were making decisions about, say, who we're going to punish, we were relying more on intuition, on our fears, and on our lack of options. And if the world were to fall into some sort of disaster, say a meteorite strike, and civilization collapses, I think we would make the reverse of moral progress. Because when I don't know where my next meal is coming from, I'm willing to take more risks, and I'm going to care less about animal rights if it means I get a meal.
[01:19:26] Blue: Right.
[01:19:27] Red: And so in that sense, I think that even if the values instilled in us are a result of our evolutionary history, which is partly accidental (other social species may share some of the same things; there are going to be some convergences in evolution), even if some of those values are subjective, we can still have progress.
[01:19:58] Green: In that world we probably wouldn't care about scientific progress either. Who cares about the behavior of subatomic particles when we're just trying to stay alive?
[01:20:09] Blue: So, going back to my initial statement, I don't think we ever need to explain away subjective values, and I think Ivan's definitely right that in some sense we come pre-wired, at least on average, probabilistically, due to evolution, with certain values. So I can see his argument that when he talks about moral progress, perhaps one could interpret it as: we happen to have these subjective values that are wired into us by our genes, by evolution, and it's entirely possible to make objective progress against those parochial values. And if that's what we mean by moral progress, then okay, I guess I can see his point. Moral progress exists in the same sense that you can make progress toward a better recipe for waffles, waffles that taste better to more people, or something along those lines. Is that kind of what you're getting at there, Ivan?
[01:21:19] Red: Yeah. I think the problem is really more that if a species had different values, it would have a different idea of what moral progress is, and how would we be able to convince them, using reason, that they were wrong? That, to me, seems to be the problem. So while we've been talking here, I've been thinking: okay, what would it take for me to say, you know what, I do think there is moral realism, and this is the way you could get there? Suppose you did something where you said, we're going to create AIs, and all AIs will converge on this idea of what is right and wrong. In other words, you'd have to accept that rationality, or some form of rationality, was like a moral axiom. But if that were all you needed to reach all these other objective moral facts, then I would be more inclined to say, okay, maybe I might call that moral realism. I just have a hard time otherwise. If I just make a statistical claim and say, well, statistically we think that societies like ours play these non-zero-sum games, so we expect those societies will want to respect similar sorts of moral values, then it's just a statistical statement.
[01:23:06] Green: Here my mind's going to aliens; we've been talking a lot about aliens. When the aliens come down and visit us, I think most people would probably be quite afraid that they would decide to eradicate us, or maybe just blow up our planet or something. But, I think because I am more sympathetic to this idea that there is some objective basis for morality, I actually find it far more likely that the aliens would come down and see us as fellow knowledge creators and want to help us, maybe cure death or whatever. What do you think, Ivan? Do you think it would be more likely that aliens, a completely different species, totally unconnected from us in an evolutionary sense, would want to help us, or hurt us, or be completely unconcerned about us?
[01:24:09] Red: I think it depends; I think it's difficult to say. One of the concerns we have about artificial intelligence is the alignment problem: what the AI wants may not be aligned with human morality. Maybe we make an AI that is in some way pathological. Maybe it's got some basic form of utilitarianism in there, but it's pathological in some way; maybe it has an interest in preserving its own civilization and sees other civilizations as threats to that. I don't know; I feel like I don't know the answer. If it's something where humans end up going into space and we develop warp drive, so the aliens are some spin-off of human society, then I think your scenario is much more plausible, Peter: the aliens are going to want to help us, they're going to be sympathetic to us, the same way humans are sympathetic to animals that we think are maybe intellectually less capable than ourselves. It doesn't mean we don't have compassion for them. But if it was an AI, and the AI has some objective that was put into it, either deliberately or without understanding what its consequences would be, it's difficult to say what the aliens would do.
[01:25:56] Blue: So I was going to make a comment a while back, and I wanted to insert it here. Ivan took a moment to try to work out under what circumstances moral realism could exist, and what he was describing was fairly close, or at least on track, to how I understand Frank Tipler's omega point, which is one of the things I find fascinating about Tipler's omega point. Keep in mind that I don't endorse the theory; I think it's wrong. But just wrapping your mind around the theory on its own terms, he actually had, in my opinion, worked out a true theory of moral realism. So I do believe it is entirely possible to work out a theory of moral realism, but I think it's a very difficult thing to do, which is why I guess I at least somewhat agree with Ivan's critiques here. I think without the right type of cosmology you simply cannot have moral realism, and you are stuck with moral subjectivism, and really not just moral subjectivism, but subjectivism about all values. I'd have to go back and listen to it again, but the way Ivan was going about it, saying, well, in this case maybe I could buy moral realism, was fairly close to how I interpret Tipler's omega point. That was actually what I was trying to say back in episode 56: I do believe that you need some sort of omega-point-like cosmology to get to moral realism, and that you can't actually have it without that special kind of cosmology.
[01:27:35] Blue: So I just wanted to clarify my own thoughts back from episode 56, and also point out that there might be a point of agreement between me and Ivan on that, possibly.
[01:27:44] Green: Most people don't quite take heat death as seriously as you do, Bruce, but I agree. And to loop this back to the is-ought distinction that was brought up initially: tell me if you guys think this is a fair way of framing the debate. What I kind of hear Ivan saying is that Hume's is-ought distinction separates two ideas. There's "is" and there's "ought," and these aren't going to meet. Whereas I actually think a more realistic way of thinking about it is that these are intertwined ideas. Whenever you get into a scientific truth, that opens up a whole can of worms in terms of "ought": why do we care about reason, why do we care about science, why do we want to be right, why do we want to tell the truth? I guess I just see it as a false dichotomy. But is that another way of saying it: is it a real dichotomy or a false dichotomy?
[01:29:03] Blue: So I've actually given Hume's guillotine quite a bit of thought, and first of all, I really don't buy Deutsch's argument against it. I've tried my best to summarize his argument, and I do think there's something to it; there often is. Deutsch happens to be a rather smart guy, so even when I think he gets something wrong, he's almost always on the right track to something interesting, if that makes any sense. But I've always felt a little suspicious of his argument, and let me see if I can explain why. It's true that we solve problems. I think what Deutsch would say is: you have a problem, you solve it, and that problem-solution is often going to include moral content as part of your solution. That's an objective fact about how you solve the problem, and that's how morality comes into the picture, into a universe that's full of just facts.
[01:30:00] Blue: I think that's all true, and I think that's a really interesting point. But when he tries to lay out that this is no different from science, because in science I also just conjecture things, it seems to me we're playing a shell game. Because when I'm doing science, say I'm coming up with new laws of physics, suppose I came up with Newton's theory or something like that, I am explaining things. I'm explaining what it is that's going on, and that's my conjecture. Whereas with morality there are almost always competing moral views, and they're both right in some sense. I've got these competing moral values: one that says I shouldn't harm people, and another one that says I'm going to take care of my family.
[01:30:50] Blue: And then we come to the scenario, and Hollywood is great at this, where those two come into conflict, and now there literally is no objective moral answer to the question. Depending on how they portray the character as sympathetic or unsympathetic, Hollywood can get you sympathizing with either viewpoint fairly easily. So I've always felt like Deutsch's explanation misses the fact that morality is in some sense a different beast. You can't really just compare it to physics. There is often a lack of explanation in our moral explanations, because there are often two moral values we're trying to pursue at the same time, and they're in conflict. Okay, so that was all background to my view. Now, is this a false dichotomy? I don't think it is. I actually think Hume's guillotine, and I know this is going to be a really unpopular opinion, so I'm just going to throw it out there for people to consider, has to be solved. I think there must be a way to get an ought from an is, and in fact you can get an ought from an is; it's not actually that hard. If I were to go to Peter, and Peter were to say to me, "You know, Bruce, I want to be really good at chess," I could tell Peter, "Okay, since you want to be really good at chess" (this is an idea that comes from the book After Virtue, by the way; it's not my idea) "one of the things you need to do is not cheat at chess. You ought not to cheat at chess." And that's a fact now. The reason you ought not to do that is because if you do cheat, you're not actually going to get good at chess. That's also a fact. So it's not that hard to derive oughts from ises. Now, of course, I'm cheating a little. For one thing, there's clearly a subjective value that we took as our starting point: that he wanted to be good at chess. But it does clarify that the idea that you can't get ought from is is just not true. The real reason it seems like you can't get ought from is is that there's a slipped-in additional assumption in the way Hume is posing it: we're somehow ruling out the existence of subjective values; they don't count. The moment you actually accept that people do have natural subjective values, and that a lot of them are derived from evolution and things like that, it actually becomes quite easy to derive oughts from ises. You just look at the facts, you have different things that you care about naturally, and then you can figure out how to derive them. So I've never felt like it's a true dichotomy at all. I think it only becomes a dichotomy if you start with the assumption that people don't have values. Now, that's not necessarily an argument against Ivan's subjective-value viewpoint; on the contrary, one might say it's in favor of his point of view. But I do want to make that clarification right off the bat: I've never really bought that there is absolutely no way to go from is to ought. In fact you can, and we do it all the time.
[01:34:10] Green: So it sounds like you're agreeing with me. Or are you disagreeing?
[01:34:16] Blue: I don't know; it's agreement with you, but with some caveats. There's still a problem to be solved, which is: can you get around this outside of subjective values? I think that's the problem Ivan's raising. He's saying, I'm not sure you can; the only way you can ever actually derive an ought from an is is by starting with subjective values. Now, that may be a fair point. In fact, I would love to follow it up (this is outside the bounds of this podcast, because we're probably coming up on time anyhow), but that's an interesting thought to me. Okay, so we accept that you can get from is to ought (I keep saying it backwards, don't I), but it requires subjective values. Does that mean we're limited to subjective values, or are there objective values? If there are objective values, then it shouldn't be that hard to go from is to ought for objective values either. Do you see where I'm coming from?
[01:35:18] Green: Aren't "subjective values" oftentimes just another way of saying we don't really know the reason behind them? Of course there are unknowns; there are unknowns everywhere.
[01:35:29] Blue: My ultimate example of subjective values is your preference in ice cream. Sure, there's a reason why I prefer strawberry over chocolate and you prefer chocolate over strawberry, and I have no doubt we could get down to some reason about how our brains are wired, or what happened in our childhood; there's some sort of reason behind it that we're unaware of. Whatever that reason is, I don't think it matters. The fact of the matter is that I prefer strawberry and you prefer chocolate, and whatever that reason is, it's not even going to be interesting, in some sense, because it's just going to be some happenstance of the way we happened to develop as human beings throughout our lives.
[01:36:16] Green: Some people might have a hang-up about things that are brown, or something, right?
[01:36:19] Blue: Right. It sort of just doesn't matter what the reason is, because I'm intentionally picking something so completely subjective that nobody doubts it's a preference.
[01:36:32] Red: Even if you talk about evolution giving us this advantage in non-zero-sum games, you can say, well, this is why we think altruism is good: we have a natural impulse for it because evolution found it useful, and those people with this impulse ended up surviving, because it's a non-zero-sum game.
[01:36:53] Blue: Right.
[01:36:53] Red: So now we have a reason. We know why it exists, it is an objective fact that it exists, and maybe we could know the evolutionary psychology so well that we'd be very highly confident this was a fact about the world. But just because evolution has programmed us to do something doesn't mean to say we ought to do it, objectively. We might be able to point to it: oh, evolution caused us to do this, and this is why we like to do it, and this is why we value it, and now we can use these facts to help us better satisfy these values. But if we find something from our evolutionary past (for example, there's evidence that rhesus macaques will commit these sorts of genocides the same way humans do, so maybe it's in our evolutionary history that we do this), it doesn't mean to say we ought to do it. Just because we have a value, and there was an evolutionary reason why we have it, doesn't mean we ought to have it. So subjective morality is saying that we can objectively describe our moral
[01:38:05] Red: choices and our moral values, but we cannot objectively prescribe them. We can't say that because we have a value, that's what makes it objectively right. If there were another society that didn't have those things, we don't actually have a good reason, within our framework, for objectively saying that they're wrong. We can say that objectively we would be displeased by it, but that's not the same thing. And I think the thing about Hume's dictum is also this: even in Harry Potter's world, even if you said, okay, it's objective, we know that good and evil have different properties and can cast different spells; or take the Force, which is employed in different ways depending on whether you're good or evil; does that then tell you whether you should be good or evil? I think Hume would say that even in that scenario, even if you could show that objectively the universe actually does care whether you do one thing or the other, that it results in a different outcome, does that mean you ought to be dark side or light side?
[01:39:11] Blue: The fact is, though, that I can imagine a universe where objective values exist. Imagine that the cosmology is such that eventually everybody comes to regret their evil actions and decides, on their own, by their own preferences, by their own subjective values, to change. At that point, even though they may have started as subjective values, in some sense they really are objective values now, right? You used the example of a society that disagrees with us. So let's say they do change, and they end up preferring it. They go, "Oh wow, we were wrong even by our own way of thinking. Once I finally came around to seeing what that other society was saying, this really is our preference after all; we were mistaken in our preferences." It's not that hard. I can see that cosmologies like this are special. They're certainly not the heat-death cosmology; they're certainly not the popular cosmologies of science. But one could imagine an omega-point-like cosmology where it just is the case that everybody eventually changes their mind, because that is the way the universe is. And I think in this sense you absolutely could have objective values. But I think it's hard to get to those objective values. It requires a special kind of explanation, or a special kind of cosmology. And I feel like when Ivan makes his arguments, they're very good arguments, but they're always kind of sneaking in the assumption that we don't live in such a cosmology. Which is maybe even a decent starting point, because we're told over and over again that heat death is the preeminent cosmology, and I would agree that if heat death is right, Ivan's arguments all make sense to me, and I actually agree with what he's saying. I know I'm the only one who cares about heat death in the entire universe, but you see what I'm saying, right? I do believe it is entirely possible that subjective values could in fact be objective, given a certain cosmology.
[01:41:16] Green: No, I actually just want to say that you have convinced me that heat death is at least an incredibly compelling way to put it. It's kind of like the logical extreme of this stuff, really. I wouldn't say it keeps me up at night, but it is interesting.
[01:41:38] Blue: There's a Star Trek episode where these kind of eccentric geniuses are spending all their time worrying about heat death, and everybody just thinks they're strange. The writing is such that you're clearly meant to laugh at them for being so ridiculous. That's me; I'm one of those eccentrics.
[01:42:00] Green: Is that the original series, or? It's Deep Space Nine. It's Deep Space Nine, which
[01:42:05] Blue: by the way, is the best Star Trek. Yes. I do understand why people often consider The Next Generation better, and they're not entirely wrong; there's some good and bad in both. The Next Generation has some advantages over Deep Space Nine. But in terms of the actual writing, well, the actors are all very good in Deep Space Nine, but nobody's on the level of Patrick Stewart and Brent Spiner, because nobody is. But I don't know.
[01:42:32] Green: I kind of like the original. The campiness of it, the miniskirts, and Captain Kirk having love affairs with aliens and stuff. I don't know, that's just me.
[01:42:43] Blue: I'm watching the original right now, and it's fun; I'm enjoying it. You know what, let me actually take an aside to plug Deep Space Nine, since it actually has some relevance to what we're talking about. Deep Space Nine's whole premise, and this is why I loved the show so much, is this: in Star Trek, the Federation was intended to be a utopia, and one of the things that made Star Trek a difficult franchise to write for was that Gene Roddenberry wanted to portray this optimistic future of humanity being more rational, having better values than us. They've made tons of moral progress, and these people are living in this utopia, and that's really hard to write for. You can't have interpersonal conflicts; literally, he didn't allow interpersonal conflicts, because he wanted to portray a society where everybody may have a conflict for an episode, but they always work it out at the end, because they're more rational. It's a really tough thing to write for, although it's interesting in its own right, and I do feel like the original series did a great job within that. Part of the reason the show was so popular, and turned into a whole franchise, was that it was difficult to write for but they pulled it off, and so you ended up with this optimistic worldview that people find attractive. Now, Deep Space Nine turned that all on its head. It started off with the utopia and then presented a villain, the Dominion, so terrifying (really, honestly, worse than the Borg, which was the best villain up to that point, but subtly so) that it forced the Federation members to really get serious about how much they believe in their values. And in the end, it's not a dark show; it's still an optimistic show, just like all the other Star Treks. But they are pushed to the edge of their values, and sometimes break them, before finally coming back around to following their values. It's actually a really good, interesting study in objective versus subjective values and how that works, and of course it's being done with interesting science fiction plots and things like that. One of the big reveals (this is a big spoiler) is that the Federation in fact has a dirty-tricks division, Section 31, that has existed since the foundation of their society, because they never really had faith that their utopia could exist without somebody who was willing to bend or break all the rules.
[01:45:30] Blue: So they have this interesting idea that comes out of that, and it was one of the things I really loved about Deep Space Nine: they put these characters in so many interesting circumstances where they just have to really think hard. Okay, how can I best implement my values, given that I'm in a completely ridiculous situation where my values are actually going to be a negative if I implement them entirely? It was one of the things I really loved about the show, and I do think it does a great job of making you think hard about just how objective your values are.
[01:46:03] Green: That kind of leads into what I was thinking could be a final question here, if that sounds good to you guys. There's this idea that I sometimes think about: people will tell you what they really believe more effectively through their actions than through what they say. It's maybe a common kind of belief, and I can see it in a lot of different contexts. I'm curious, Ivan and Cameo and Bruce: regardless of how you feel intellectually about subjective versus objective morality, do you live, in terms of your actions, as if morality were more objective or more subjective? Or does the question not make sense?
[01:46:55] Blue: Oh no, that makes perfect sense. You know what, I don't want to answer first, I'll let somebody else go first, but I do want to express my opinion on that.
[01:47:01] Red: Yeah, so I'm a physicalist, so I guess I'm treating the question as: how do my actions differ if I think moral realism is true versus subjectivism? I don't think it does matter, or can matter. It's very difficult to come up with a scenario where that happens, because even if you are a subjectivist, you're still going to have some theoretical model of what you think is optimal. Maybe not optimal just for yourself, but optimal for friends and family and society at large.
[01:47:52] Green: You have that optimal framework, but you just don't think objectivity plays a role in it.
[01:47:58] Red: I don't. I think that when we're creating public policy, we should more carefully try to measure the outcomes of the policies, because there are a number of policies that get put on the table where the policy feels right to some people, it feels like, yeah, we should do this, but the bottom line is it will actually make things worse. And that could go both ways. It could be that, hey, it seems like this policy is just causing needless discomfort to people and we should change it, but sometimes it does make sense to be cruel to be kind in the long run. There may be policies like that, and we should measure those things more carefully. The objectivity of morality just seems kind of useless here; it doesn't really have any bearing on it. The more useful kind of question to ask is this: people may have very strong opinions about abortion, or about public policy in various areas, but I think it's very helpful to isolate your values from your "is" questions, your morally neutral questions. So if you have a strong opinion on,
[01:49:34] Red: let's say, religion in schools or something, you may ask: is there some set of circumstances in which you would relax your opinion? Once you start to think about that, you can become less fearful and more flexible and more clear-headed about policy decisions. I guess if you're talking about personal decisions, which is probably the direction you wanted to go, I don't know where to go with that. I still think people are much more likely to make decisions from a point of view of fear, or by conflating their values with the facts. People very frequently rationalize the way the world is because they have some moral agenda. So I don't think that realism is helping; I think if anything realism is hurting.
[01:50:44] Green: Interesting. Okay, I can see that.
[01:50:46] Orange: I think my answer is a lot the same. Earlier, Bruce was talking about the pretty lie: even if there is no god, is there harm in somebody going through their life believing that there is a god? I'm sure I'm paraphrasing that poorly, Bruce, but it's kind of the same with whether morals are objective or subjective. I'm not completely certain it matters at all. Bruce and I probably operate fairly the same on a day-to-day basis, even if he believes his morals are objective and I believe mine are subjective. I'm still choosing a set of standards to operate under, and my behavior is not substantially different from his, I would assume.
[01:51:44] Green: I mean, I think about people I know, and I think most people, at least in the environment where I live, if you pressed them, would probably say that most morality is relative or subjective. But it doesn't really seem like that's how they live, I mean.
[01:52:02] Blue: I agree
[01:52:03] Green: They take care of their kids, they go to work, they work hard, they try to tell the truth. I just see sort of a disconnect there that I think is interesting.
[01:52:12] Blue: So I agree with what Peter's saying, and again, I cannot possibly explain my viewpoint quickly enough before this episode ends, so we're probably going to have to do this in a separate episode sometime, and I'll try to back up what I'm about to say rationally. But I actually fairly strongly disagree with Ivan and Cameo on this. I think that everybody believes in objective morality, whether they claim they do or not, and I think there's really no other way to make it work. We can talk about subjective morality, and we can see that there's a strong rational argument for it; I feel Ivan's done a fantastic job making that argument, and it could even be the case that he's right. And yet it would still be the case that, at the end of the day, for morality to function, and for it to be useful to society, we have to act, and at some deep gut level
[01:53:18] Blue: believe, that it is in fact an objective thing: that we have moral duties, that moral duties objectively exist. I think that would be true. Now, that doesn't really prove objective morality, to be honest. It really only proves that you need the concept even if it's false. This gets back to the happy lie, or what I call the high-utility falsehood. I think what I'm arguing here doesn't really argue for or against objective morality. It's an argument that it's really important, and that it was necessary, for human beings to think of morality as objective, and that there's such a thing as a moral duty, and that the moment you actually do away with that concept, basically everything falls apart for society. Again, I know I'm making a strong claim here that I have in no way backed up to this point, so I would have to do a separate podcast to explain why I feel that's the case. But I think that's precisely why people act as if morality is objective: there really was no alternative but to act as if morality was objective.
[01:54:27] Green: Well put. And it seems we'll have to do a part two, if Ivan is willing to come back. Any final thoughts, Ivan?
[01:54:37] Red: No, this has been a fantastic discussion. I really enjoyed it. Thanks for having me on.
[01:54:43] Green: I have to...
[01:54:44] Blue: I would like to invite you back for an episode on Bayesianism.
[01:54:49] Red: I would love to participate.
[01:54:54] Orange: Okay, that's gonna get spicy is what that's gonna do.
[01:54:59] Blue: The Theory of Anything podcast could use your help. We have a small but loyal audience, and we'd like to get the word out about the podcast so others can enjoy it as well. To the best of our knowledge, we're the only podcast that covers all four strands of David Deutsch's philosophy, as well as other interesting subjects. If you're enjoying this podcast, please give us a five-star rating on Apple Podcasts. This can usually be done right inside your podcast player, or you can google "the theory of anything podcast apple" or something like that. Some players have their own rating system, and giving us a five-star rating on any rating system would be helpful. If you enjoy a particular episode, please consider tweeting about us or linking to us on Facebook or other social media to help get the word out. If you are interested in financially supporting the podcast, we have two ways to do that. The first is via our podcast host site, Anchor: just go to anchor.fm/four-strands (f-o-u-r, dash, s-t-r-a-n-d-s). There's a support button available that allows you to do recurring donations. If you want to make a one-time donation, go to our blog at fourstrands.org; there is a donation button there that uses PayPal. Thank you.