Episode 37: Animal Intelligence and Knowledge Creation (part 1)

  • Links to this episode: Spotify / Apple Podcasts
  • This transcript was generated with AI using PodcastTranscriptor.
  • Unofficial AI-generated transcripts. These may contain mistakes. Please check against the actual podcast.
  • Speakers are denoted as color names.

Transcript

[00:00:11]  Blue: Welcome to the Theory of Anything podcast. Hey guys. Hello. Hello. Hey, Tracy. Then we got Saudi here too, Saudi. Hi. So this is a subject that I’ve been wanting to do for a while. It’s something that comes up multiple times in the David Deutsch fan community in particular, and to a lesser degree amongst the Popper fan community, which is animal intelligence. What is animal intelligence? How is it similar to and different from human intelligence? Now, obviously humans are universal explainers, to use David Deutsch’s term, and animals are not. And so that’s a gigantic difference right there. From that point, though, there are lots of different opinions as to what that implies about animals: everything from some people claiming animals are just robots and automatons that don’t feel anything, to animals being incapable of suffering, to, you know, animals creating some knowledge, sure, but it’s super limited and we could probably do something similar using artificial intelligence today. And since I am someone who has studied artificial intelligence and machine learning at a graduate level, I’m actually in a unique position to talk about a lot of these different claims and to look at what animal intelligence is. And I guess the short summary, this is the abstract, is that animals do all sorts of things that we can’t do in artificial intelligence, and they’re mysterious. We don’t really understand what animal intelligence is. Never mind that we don’t understand what human intelligence is; we don’t even understand what animal intelligence is, although we are probably closer on that front. And

[00:01:52]  Blue: this is one of those things where I would like to see more people interested in studying animal intelligence, because, first of all, all knowledge is valuable. So the fact that animals are not universal explainers doesn’t mean understanding animal intelligence won’t turn out to be a valuable thing all its own. And honestly, it’s just an interesting subject. Animals are super fascinating: the things they do, the way they act, the difficulties that we have in trying to explain what they’re doing and what they’re thinking. Animal learning is also way more general than existing machine learning or artificial intelligence algorithms. The types of things that we can teach animals to do are just completely beyond what we could do. We could make an artificial intelligence algorithm that could play Jenga, but we could never make a general one that you can also happen to teach to play Jenga. That’s just not the way it works. You can teach an animal to play Jenga, right? I mean, animals weren’t evolved to play Jenga, but you can teach it to them. There’s nothing like that in machine learning. Animals are doing something there that is truly mysterious when they learn to do these amazing things that we see them do, and no one programmed it into their brain. Evolution obviously has no pressure to try to teach an animal how to play Jenga directly. Therefore, the fact that they can learn to play Jenga implies that they have some sort of flexibility that goes beyond what current artificial intelligence algorithms have. Richard Byrne. David Deutsch quotes Richard Byrne. I don’t know how to pronounce that name, by the way. Maybe it’s Byron or something, but I’m going to pronounce it Byrne.

[00:03:30]  Blue: It’s B-Y-R-N-E. David Deutsch quotes him quite a bit in The Beginning of Infinity. So I became curious about him, went and read some of his studies, and got one of his books. He’s got more than one book. I intend to read more of his books because they’re really fascinating. Now, he and David Deutsch disagree on a number of things; they interpret Richard Byrne’s theory very differently on some items. But if Richard Byrne’s interpretation of his own theory is correct, then animal intelligence is a precursor to human intelligence, and thus it may end up relating to AGI studies. Now, we don’t know that for sure, and I’m not making any strong claim here; I’m just saying that’s what Richard Byrne thinks. He could be right, he could be wrong. And then there seems to be a great deal of interest in whether animals are, quote, conscious or not. Do they feel things? Is there something that it’s like to be an animal? I think all of these are questions that we can actually make progress on using critical rationalism. We can take a look at the studies and observations that exist. We can actually learn about these different things. So now let me tell a little bit of a story as background on how I became interested in this. I read The Fabric of Reality, I think, back in 2009, so it was before The Beginning of Infinity came out, which I think came out in 2011. Most people I know who are fans of David Deutsch came to David Deutsch through The Beginning of Infinity.

[00:04:56]  Blue: And so they read that first, or they had definitely read it. And he makes very specific statements about animals in The Beginning of Infinity that aren’t in The Fabric of Reality. So when I had just read The Fabric of Reality and started to want to research the four strands that he talks about in that book, I didn’t have any background that was biasing me towards a certain view of animals. So I didn’t know what to think of animals. I just went out and started to look stuff up on my own about animal intelligence as part of my own studies, because this is still pre-Beginning of Infinity, where he lays out a bit of a theory about animals. And I came across, in my reading, things that were just really interesting, and I didn’t know what to make of them. So for example, here’s a quote from Roger Penrose, from Shadows of the Mind. Shadows of the Mind was one of the books I read that was at odds with David Deutsch’s theories, and I wanted to read sources that were specifically at odds with his theories to see what the criticisms of them were. And one of the things that he brings up is this. He says, “I was particularly struck by a sequence on a television program in which a squirrel realized that by biting through the wire along which it was crawling, it could release a container of nuts suspended some horizontal distance away. It is hard to see how this insight could have been instinctive or any part of the squirrel’s previous experience. To appreciate this positive consequence of its actions, the squirrel must have had some rudimentary understanding of the topology that was involved.”

[00:06:25]  Blue: “It seems to me that this was an act of genuine imagination on the part of the squirrel.” Okay, by the time we’re done, I’ll have you questioning whether Roger Penrose is right. But he’s on the right track here, even though, I don’t know about squirrels, they don’t have a ton of intelligence from what I can tell. But this really is amazing. Animals do really weird things like this that are just really hard to explain without attributing to them some sort of understanding, whereas David Deutsch says that animals probably don’t have any understanding. We’ll get to what that means in a future episode. And the word he uses here is the right word: insight. What Richard Byrne studies is when animals evolved insight, which I will give you a much better definition of in the future, but briefly, insight would be the ability to make mental models in your head and to use them to solve problems. So Richard Byrne’s whole project is to try to figure out which animals are able to have insight, able to create mental models and to use them. And he has a really interesting methodology for how he tries to study that, one that’s very Popperian. He’s a good scientist, is what that means. Here’s another one that’s interesting. This comes from Nicholas Christakis, from one of the books that he published, and he says in there: one experiment placed a pair of, I don’t know how to pronounce this, in adjacent sections of a single test chamber separated only by a mesh partition.

[00:07:56]  Blue: In front of each of the monkeys was a cup holding food and a bar that, when pulled, would move the two cups closer. Given the weight of the apparatus, both monkeys needed to pull the bar simultaneously, and high levels of cooperation were observed. To ensure that this cooperation was not merely a chance occurrence, the researchers showed the animals performed much better when they could see each other through the partition, suggesting that the monkeys understood the other’s role in the task and that they maintained their cooperation through forms of communication. Okay, again, this is really interesting. I mean, for a human this would be, you know, very, very simple, right? But how are animals doing things like this? This goes beyond, you know, simple automatism. And then just an example from my sister. My sister is an animal lover. She has all sorts of animal stories for me, but she was telling me about her dog. This is fairly typical of dogs: if it thinks you’re going to go to the car, it doesn’t want to get left behind, and it likes to go for a ride in the car. So it will get really super excited and nervous if it can see that people are going into the garage to go to the car. But it feels differently if, like, you’re going out the front door, or you’re putting on your coat to go on a walk or something like that. Okay. So the dog starts to whine and whine. This is a particularly smart dog, I might add; not all dogs are this smart. My sister has stories about her previous dog, that it could never do something like this.

[00:09:24]  Blue: My mom comes to the dog and gives it an explanation. Strange as this may sound, she says, “You can go in the car, but you need to first calm down and wait while we get ready.” And that dog tried to control itself and tried to stop its whining. It’s just trying to do its best, and just a little tiny bit, it’s starting to calm down. And then she said, “Okay, now,” and it runs and jumps in the car. Again, none of this is super complicated intelligence compared to a human, but there’s a weirdness going on there. How did this dog do that? Was it understanding an explanation? Was it just some sort of weird classical conditioning? You know, how do we explain this? And this is the type of thing that Richard Byrne is trying to study.

[00:10:09]  Red: So can I interject just a minute, especially on the monkey thing? Humans generally are super reliant on verbal communication.

[00:10:21]  Green: Yeah.

[00:10:21]  Red: And I don’t know if you’ve ever been involved in a lot of team-building stuff that’s non-verbal, where you have to try, as a team, to accomplish tasks without being able to use verbal communication. Those kinds of things are very challenging. The experiment that you’re describing, I think, would be very difficult for humans. Eventually we would get it and we would figure it out, and it would create kind of a synergy between those individuals who essentially had to overcome our dependence on language to be able to get things done. So I just think that that’s interesting, because they don’t have language. Those non-verbal things are probably more natural for them than they are for us.

[00:11:06]  Blue: Interesting. Now here’s another thing that I studied after reading The Fabric of Reality, and this one was more disappointing. Now people will laugh at me over this: I really seriously wondered if apes that can use sign language could pass a Turing test. Right? You’re familiar with the Turing test, this idea of: can you tell that you’re talking to a person, or can you tell you’re talking to a chat bot? Really, nothing has ever passed the Turing test other than a human, you know, up to this point. And you heard these stories about Koko the ape, who was the most famous of the signing apes. And it really sounded like Koko the ape was quite intelligent, was capable of talking. And I thought, I wonder if Koko could pass a Turing test. So I mentioned this to my wife, and she bought me a book on Koko the ape for Christmas, which I read. I was very disappointed, to be perfectly honest. The things that you hear in the media about Koko the ape are the single best examples out of a whole lot of examples. So for example, they’d have some examples of conversations with Koko. Now mind you, in this book they’re giving you the best conversations, the ones where Koko showed the most intelligence. Okay. Here’s one of them. So Barbara is one of the trainers. Now, in the book, they put stuff in italics if they had a sign for it, and if it’s not italicized, then they said it out loud. So, but I’m going to just read it.

[00:12:40]  Blue: And obviously they don’t sign every single word, just to make that clear, but they do speak it. But then when Koko talks, it’s only signs, and so a lot of the extra words that make it into a complete sentence are missing, because you don’t do that in the sign language that Koko uses. So Barbara says, “Okay, can you tell me how gorillas talk?” And Koko beats her chest. Which doesn’t necessarily sound like a great answer, but the trainers think that Koko was trying to show that apes make signs, that they make gestures, which, as we’ll see later, is exactly how apes communicate. So this could have been a good answer, or maybe not. It’s hard to tell. Barbara: “What do gorillas say when they’re happy?” Koko: “Gorilla hug.” Barbara: “What do you say to your baby?” (meaning a doll). Koko: “Love Koko.” Barbara: “What do you say to Mike when you play?” Koko: “Mike Koko love.” Barbara: “What scares gorillas?” Koko: “Hat, dog.” Barbara: “What do gorillas think is funny?” Koko: “Clowns, bug.” Now from these answers you can kind of tell there’s a bit of intelligence there. I mean, I think I could probably tell Koko from a chat bot, but it’s not even close, right? And, like, why is a hat scary? Right? Well, maybe we can understand a dog. So the trainers say, oh, well, we had this incident where these strangers, older women with really large hats, got really close to Koko, and it made us nervous, and Koko could detect that we were nervous. So maybe that’s true.

[00:14:19]  Blue: Maybe Koko is repeating something from this incident, or maybe the trainers are remembering something and somehow trying to make sense of what Koko is saying, and it’s actually just a random word, right? It’s so hard to tell how much Koko understands, even when you’re conversing in sign language with her. Yes?

[00:14:36]  Orange: I have a question here, since I’m not familiar with how they think that Koko is understanding these things. The first thing, where it says how gorillas talk: if gorilla talk is actually gestures, how are they able to figure out what the word “talk” really means? And then the gorilla saying that the way gorillas talk is gestures. That makes me wonder how a gorilla would understand the concept of talk

[00:15:10]  Blue: if all they do is gestures? Yeah. So Koko has learned sign language, and that is a fairly common thing that they’ve done with different great apes. And they do communicate with gestures well, even in the wild, and they do it with intent. That’s something that Richard Byrne’s studies have shown. What I think you’ll find, though, is that when you teach an ape sign language, like 70% of the time they’re talking about what they want to eat. They are learning new signs; there is a certain creativity there where they can actually learn new signs. Most gorilla gestures are built in. They have a set of gestures, and that’s it, and they never learn any new ones, in the wild I’m saying. But in a lab, you can actually teach them novel gestures, like American Sign Language, and they can actually learn it. And then they can learn how to use it with intent. And this is something Richard Byrne brings out in his book. So there is a level of intelligence there, in that they can learn these new gestures, they can use them with intent, they can use them in, you know, creative new ways. But boy, is it limited. And you’re spot on. To some degree, I have to feel like, you know, half the cool things you hear about Koko the ape are really interpretations on the part of the researchers.

[00:16:34]  Green: What concerns me most is why the gorilla found the hat, and not the clown, scary.

[00:16:39]  Blue: Yeah, you know, that right there proves that Koko doesn’t know what she’s talking about, because clowns are scary.

[00:16:45]  Orange: Sorry to interject, but I also wonder about the difference. You know, you wonder why they can’t have more complicated, you know, conversation or understanding. Could it be it has something to do with values? That maybe, you know, in some sense a limited number of values are programmed in, and the animal beyond a certain point is just not interested, so they limit themselves. But anyways, I just thought I’d throw that out.

[00:17:08]  Blue: You know, this is an example of the mysteries of animals. Right? We don’t understand. And I will go into this in quite a bit more detail in this series, because this is what Richard Byrne is studying. Animals do these amazing things, but they just seem to not quite get it. They’re like one notch away, half a notch away, from being way more intelligent than they currently are, and they just never seem to transcend it. And I’ll get into this in future podcasts and give you more specific details. It is very hard to figure out exactly what’s going on with animals. They sometimes show very strong intelligence, and then sometimes they just don’t. And Richard Byrne has a theory as to how far they can go, and I will explain his theories. It won’t be in this episode, but in a future episode. Now, another thing that I found interesting: did you guys have a chance to look at those YouTube videos that I sent out about the monkeys?

[00:18:14]  Orange: So in one video, there was one young orangutan who grew up, supposedly, in some sort of confinement, and because of not being with the mom, didn’t know how to hang from the trees or climb and do stuff like that. It seemed like it was really anxious, and it was surprising. I didn’t know that they could cry. So the trainer who was helping that younger orangutan, you know, they eventually wanted him to go back to the wild or be able to make it on his own. And it seemed like when she did try to make him, you know, hang on a rope, he started crying. It was interesting how there was stress, and it literally was making crying noises. He didn’t want to do that. So I thought that was interesting. I had no idea.

[00:19:04]  Blue: Yeah. But did you have a chance to see the one that Frans de Waal did, about the monkey with the cucumbers and the grapes? I did not. Okay, let me describe that one. So there was an experiment. I actually found this from de Waal himself. I don’t know if you guys know this, but he did a TED talk at one point, and so he put this in his TED talk. But it’s a famous study that comes up all over the place. So they have these monkeys, and they’re in cages, they’re in captivity. They’re given a task. They’re being classically conditioned (we’ll talk about classical conditioning here in a second) to do a certain task. Basically: if you give me a rock, I will give you a piece of food. They’ve got two monkeys that can see each other, but they’re in separate cages. So the first monkey gives a rock, and they give him a cucumber, which he gladly eats, because monkeys like cucumbers. The second monkey then gives a rock, and he is given a grape, which they like even more, because grapes are sweet. Okay. So then the first monkey gives a rock, and he’s given another cucumber. Now remember, he accepted the cucumber the first time gladly, but now he has seen that the other monkey is getting a grape in exchange for a rock. So when he’s given a cucumber again, he throws it across the room and then grabs the cage and starts shaking it angrily. You really have to go watch it. Okay. And he basically will not accept a cucumber from that point forward, because the other monkey is receiving a grape.

[00:20:37]  Blue: Now, if I were to ask you, what’s going on in that monkey’s head? How would you describe it?

[00:20:43]  Red: Well, I would challenge an implied assumption that it’s necessarily about the other monkey.

[00:20:51]  Orange: So in that video, I feel like one, obviously the monkey seemed like it had the concept of being unfair or fair, right? Yes. And that seems to be

[00:21:02]  Red: a pretty complicated, you

[00:21:03]  Orange: know, I think that’s a complex concept. Well,

[00:21:07]  Red: And, and I will go and watch it, but I just wonder if we’re personifying the monkey a little bit by trying to make it about a jealousy emotion versus a knowledge that, oh, you have grapes and you’re not giving, you know... I mean, maybe that counts as jealousy, but I don’t know. We could get into the concept of what jealousy is. So

[00:21:32]  Blue: the word that you’re looking for here is, are we anthropomorphizing?

[00:21:36]  Red: Yeah.

[00:21:37]  Blue: Okay. And I think that’s a completely fair question.

[00:21:41]  Orange: Well, let’s try to think about it. Like, how else could we think about it? Well, one of them is just angry that he’s not getting... I mean, I guess it would be interesting to see if there wasn’t another monkey. I wonder if they did that experiment with just one: sometimes it gets the grape and other times it gets the cucumber, and it might just get mad even then.

[00:21:59]  Red: Right. Once it knows that a grape is an option, and whoever, the machine or the human or whatever way the items are being distributed... you know, I would love to see the exact same experiment: the first time they got a cucumber, the next time they got the grape, what happens the third time when they get the cucumber, if there weren’t another monkey anywhere around. So

[00:22:22]  Orange: just to give you an example, I have a cat who has this set time when my husband pets him, in the set place that he likes. So if someday, you know, Mark is tired, my cat literally gets upset at that time. Like, he has gestures, and he makes all these weird noises, and he’s clearly, you know, upset, and then shows a lot of almost aggressive behavior. So in that case, you know, I mean, obviously there isn’t any jealousy involved. It’s just upset that

[00:22:52]  Red: it’s not getting what it wants. It’s a response to the Pavlovian conditioning the animal has to this very specific thing.

[00:23:00]  Blue: So it’s interesting: they have a similar experiment, and I’ve got a link up so you can look at this too, where they did it with dogs, and it works with dogs. In this case, one dog gets a reward, and the other dog gets, you know, a pat and words of affirmation. And the dog that was just getting the pat and words of affirmation continued for a while and then just went into rebellion. And

[00:23:20]  Orange: it just laid down, right? Yeah. Yeah. It just lay on the floor, almost in defiance, like, “I’m not doing this.”

[00:23:27]  Blue: Yeah. You know, this is the hard question. We have a natural tendency to anthropomorphize, and Cameo tried to lay out an alternative theory and how to test it, which is important here. But there are many things about animals that are just hard to explain without basically making some assumptions that they have some feelings similar to ours. Notice that even Cameo’s version of this, even though it doesn’t reference jealousy, is still referencing very human-like emotions about preferring one thing over another and getting angry about it. So there’s still this reference to emotions similar to humans’. Simply saying “oh, we’re anthropomorphizing” isn’t really a sufficient explanation of anything that’s going on. And this is an example of that. Maybe we could come up with some experiment that would differentiate here. And that would be, even in that dog

[00:24:20]  Orange: one, you know, one could say that, okay, the dog is going to do the trick as long as you’re giving it the treat, and if you don’t, it just stops. You know, it has nothing to do with whether the other dog is involved.

[00:24:29]  Blue: This is also the challenge that exists in how you go about trying to test it. How do you figure out what’s really going on? And Cameo came up with a testable case that would actually differentiate between the two theories: whether the animal’s action is a matter of jealousy, or if it’s simply upset that it is not getting the reward that it would prefer. And so that is how you would want to go about this. You would not just make an assumption and then make it non-testable. We’ll come back to that; that is a tendency people have. They say, “well, this is what I think is going on,” and they make it non-testable. We want to try to make it testable. Now, this is kind of the background. So I had all these things in my mind when The Beginning of Infinity came out. I knew about numerous things about animals that were very hard to explain, and I was starting to study artificial intelligence at that point. I started to realize you can’t use artificial intelligence to explain animal behavior well, that there’s a gap there. And David Deutsch, in The Beginning of Infinity, says the following: that the types of information that DNA and human minds evolved to store have a property of cosmic significance. Once they are physically embodied, they tend to cause themselves to remain so. Such information, which I will call knowledge, is very unlikely to come into existence other than through the error-correcting processes of evolution or thought. So this is the quote where David Deutsch is saying there are only two things that create knowledge: evolution and thought. Now, I’ve challenged that in past podcasts.

[00:26:00]  Blue: We talked about Campbell’s and Popper’s theory and how it’s significantly different from this. And then we also looked at the possibility that maybe they’re using the word knowledge in different ways, and maybe they’re both right for their own definition of knowledge. But based on statements like this, people have picked up this idea that animals get their knowledge entirely from their genes, people meaning fans of David Deutsch. I don’t think there’s probably anyone outside of fans of David Deutsch that strongly thinks that. And you’ll actually hear this: well, animals get all their knowledge from their genes, because there are only two knowledge creation processes, evolution and thought; animals don’t have thought; therefore their knowledge must come from evolution. Now, I used to wonder, is this something that David Deutsch actually intended? Well, maybe yes. Here’s a quote from David Deutsch’s interview with Tyler Cowen. Tyler says, “Dogs understand human social life pretty well.” David says, “They do not. Dogs have genes which contain knowledge, but it is fixed knowledge, and it is not the kind of knowledge that constitutes understanding. Understanding is always explanatory.” And then he goes on and uses an example which we’re going to be using through the rest of this episode. There is the case of squirrels: they put a squirrel on a concrete floor, and the squirrel did exactly the same behavior to try to bury nuts, even though it was having no effect whatsoever. It’s just a program being enacted by its genes. I think it is statements like this where the idea comes from that all of animals’ knowledge is in their genes, that they have no other sources of knowledge.

[00:27:28]  Blue: And this has even led to, and these are funny, by the way, but there’s one gentleman that puts up tweets about, quote, “buggy” animal behavior. And they’re very funny. They’re things like a dog that you hold up in the air, and it hears water and starts to try to swim, or something like that, right? Or barking at stuffed animals, or something along those lines. They’re often held up as proofs that animals are just robots, that they have a set program and that’s it. So let’s talk about this. And this is where we have to get back to the pseudo-Deutsch theory of knowledge. Okay. And the fact that, honestly, it’s just wrong. The knowledge in the genome is a few hundred megs; it’s less than a thousand megs. Most of it is repetitive, and it can be compressed to something like four megs, according to my Googling around and looking this up. So we’re not talking about a ton of information existing in the genome. If all the knowledge the animal needed was in its genes, there would literally be no need for brains with tens of billions of neurons. Those brains are expensive. Brains are expensive. If all you needed was a little program that was in the genes, you wouldn’t evolve brains. It just wouldn’t happen. Okay. Those brains are there for something. So let’s ask the question this way. Never mind the word knowledge, because this seems to trip people up. Do animal brains contain adapted information, not found in the genes, that constitutes solutions to problems they face in their environment, where this adapted information was created by a Popperian evolutionary epistemology? We know the answer is yes. Okay.
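The "few hundred megs" figure can be checked with back-of-envelope arithmetic. As a sketch, assuming a human-scale genome of roughly 3.2 billion base pairs (a number not given in the episode) and two bits per base:

```python
# Back-of-envelope: how much raw information can a genome hold?
# Assumption (not from the episode): ~3.2 billion base pairs,
# each base one of four letters (A, C, G, T), i.e. 2 bits.
base_pairs = 3.2e9
bits_per_base = 2            # log2(4) possibilities per position

raw_bytes = base_pairs * bits_per_base / 8
raw_megabytes = raw_bytes / 1e6

print(f"raw genome size: ~{raw_megabytes:.0f} MB")
```

This lands at roughly 800 MB uncompressed, which is consistent with the "few hundred megs, less than a thousand" claim; the further compression to a few megs comes from the genome's heavy repetitiveness.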

[00:29:04]  Blue: This is not in doubt. Whether you want to call that knowledge or not, this is what’s going on. This is what’s filling up their brains. Example: a dog learns to sit for a treat. That is not in its genome. It has to learn that. Okay. That’s a piece of adapted information that solves a certain problem, which it learned through classical conditioning, which is a variation and selection process right along the lines of the way Campbell understands it. So once we reframe the question and we don’t worry so much about whether this is knowledge or not, because then we might get in a war over a definition, once we realize we’re talking about adapted information contained in the brain that solves problems, the answer is yes: animals have sources of knowledge through learning. Animals can learn, basically. Okay. And there really is no doubt about this. We’ve got tons of theories about this, really good theories that have been around for a long time, and they’ve been well-corroborated. So if by knowledge you mean adapted information that is a solution to problems, created by a Popperian evolutionary epistemology, that did not come from the genes, then no, animals do not get all their knowledge from their genes. But if you want to not call that knowledge, then who knows, right? It just depends on what you happen to be calling it, right? This is, though, what Popper and Campbell were getting at. Okay. When they were talking about this ubiquitous number of knowledge creation processes that come through variation and selection, they were including animal learning in that.
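What "variation and selection" means as a learning algorithm can be sketched with a toy loop. This is only an illustration of the Campbell-style idea, not a model of any real animal; the action names, reward rule, and numbers are all made up:

```python
import random

# Toy sketch of Campbell-style learning by variation and selection:
# the animal blindly varies its behavior, and reinforcement selectively
# retains the variants that solve the problem ("sit" earns a treat).

ACTIONS = ["sit", "bark", "spin", "lie_down"]

def reward(action):
    # The trainer's rule: only sitting is rewarded.
    return 1.0 if action == "sit" else 0.0

def learn(trials=500, seed=42):
    rng = random.Random(seed)
    # No initial preference: every action equally weighted (variation).
    weights = {a: 1.0 for a in ACTIONS}
    for _ in range(trials):
        acts = list(weights)
        action = rng.choices(acts, weights=[weights[a] for a in acts])[0]
        weights[action] += reward(action)  # selective retention
    return weights

weights = learn()
```

After a few hundred trials the weight on "sit" dominates while the unrewarded behaviors stay at their starting weight. The point is that this adapted information was nowhere in the system before training, which is exactly the sense in which a dog's sit-for-a-treat knowledge is not in its genome.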

[00:30:34]  Blue: This is why they call it knowledge: because it’s part of the umbrella of Popperian evolutionary epistemology. Whether you call it knowledge or not, it still is part of that umbrella. Does that make sense? Do you understand what I’m saying here? Yeah. Okay. So let’s talk about animal learning. Animals have several forms of learning available to them. I’m not even sure we know all the learning algorithms that animals have. The most famous, though, is classical conditioning, which is the Pavlovian effect, right? And we talk about that one a lot. Almost every animal, other than like single-celled organisms, has some form of classical conditioning. It seems to go way, way, way down the evolutionary chain. Okay. Even single-celled organisms have a form of learning called habituation, which is kind of an early form of classical conditioning. It’s almost like the opposite of classical conditioning. In classical conditioning, when something positive happens and there was some sort of stimulus near it, then that stimulus becomes a predictor for the animal of what’s going to follow. You know, you hear the word “sit,” you sit, and you know that means that now you’re going to get the treat. And then classical conditioning can also go the other way: if you get punished, something bad happens, then you avoid that stimulus, or whatever that thing was that happened just before. That leads to some really funny circumstances, where I know of a woman who built a scratching post for her cat. The cat used it once. It fell on top of the cat. And then it would never use the scratching post again, even though she had now anchored it down.

[00:32:05]  Blue: The problem was that it was now classically conditioned to avoid it. And so it couldn’t overcome that fear. Temple Grandin claims that animals never overcome their fears. That’s a testable theory. I don’t know if it’s true or not, but that’s one of her claims. Habituation is like the opposite: if a stimulus happens too often, you stop giving the response. We’re all familiar with this because we have habituation too. You put your shirt on and the tag in the back is bothering you. But after a while, the neurons stop firing and you sort of don’t notice it anymore. Right. You sort of forget about any stimulus that is happening too regularly. That’s what habituation is. And even single-celled organisms have a form of learning that is called habituation. According to Richard Byrne, some animals have something called insight, which is the ability to create mental models and use them. He’s trying to figure out which animals have this. This turns out to be actually a fairly difficult question. For the most part, he limits it only to the highest animals, the ones that are known to be the most intelligent, so obviously the great apes, elephants. Interestingly, crows. Crows are as intelligent as elephants and great apes, even though they have very small brains. So he limits it to a very small number of animals, although he gives examples of other animals having it, even though his theory is that they don’t have it. So they may have it in some circumstances that are narrow, or something like that.
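
The two learning rules just described, classical conditioning (an association strengthens with repeated pairings) and habituation (a response fades with repetition), can be sketched as toy update rules. This is purely illustrative, not a biological model: the rates and the functional forms are made up for the sketch.

```python
# Toy sketch of the two learning rules described above.
# The learning rate and decay factor are invented parameters.

def condition(pairings, rate=0.3):
    """Classical conditioning: association strength grows each time
    the stimulus (e.g. the word 'sit') is paired with a reward."""
    strength = 0.0
    for _ in range(pairings):
        strength += rate * (1.0 - strength)  # approach full association
    return strength

def habituate(presentations, decay=0.5):
    """Habituation: the response to a repeated, inconsequential
    stimulus (e.g. the shirt tag) fades with each presentation."""
    response = 1.0
    for _ in range(presentations):
        response *= decay
    return response

print(condition(10))   # association grows toward 1.0
print(habituate(10))   # response decays toward 0.0
```

The point of the contrast: conditioning makes a stimulus more predictive over time, habituation makes it less noticed over time.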

[00:33:35]  Red: Have you ever watched the YouTube videos about the ninja squirrel challenge?

[00:33:44]  Blue: No, tell me about it.

[00:33:45]  Red: You need to watch this. It is crazy. It is a retired NASA scientist. And the squirrels kept stealing the bird feed that he put out for his birds. So he started coming up with and building more and more complex ways to keep the squirrels from the bird feed. Ultimately he built this whole obstacle course. It’s a ninja obstacle course. And you should watch it, Bruce. Okay. The ability of the squirrels to… He cannot believe how creative the squirrels are, breaking through all these barriers that he gives them.

[00:34:26]  Blue: Interesting. Yes. By the way, I put an article up on Twitter. Animals show a form of creativity. They run into problems that could not possibly have been anticipated by their genes, and they’re still able to come up with creative solutions to them. Obviously made out of movements that do come from the genes, so there’s kind of a dividing line there that’s difficult to explain.

[00:34:49]  Orange: I guess to me it seems like, yeah, they definitely have creativity, but it’s limited to solving only specific problems.

[00:34:56]  Blue: Yeah.

[00:34:57]  Orange: For example, my cat quite often experiences boredom, but it’s only willing to entertain certain things when it comes to alleviating it. Either play with me, or kill some other animal, some chasing game, something like that. But why not, if I just turn on the TV and there’s something going on… why is it that my cat is incapable of… There is something going on there, but it doesn’t work for my cat.

[00:35:24]  Blue: Okay. So the learning algorithms that we just… Oh, then the one last one. A very few animals, really only confirmed in great apes, although there could be other animals that have it that we just haven’t confirmed it in, can do something called behavior parsing. This is the thing that David Deutsch brings up in his book, The Beginning of Infinity. Behavior parsing is a memetic transfer of whole programs of action that can be used flexibly to accomplish desired goals. It’s very impressive what the animals that can do behavior parsing can do with it. And I will give you some examples of this. On the other hand, behavior parsing has a very strong mechanical element to it in terms of learning the steps in the program. And David Deutsch uses that to suggest that animals aren’t very intelligent. And he also… I’ll give you a quote here in a second. He gives examples of where they do it even though it doesn’t make sense. They’ll learn some gesture and then they’ll do it even though their thumb’s missing or something like that, and it’s not going to do anything for them. They’ll still do it. So on the one hand, it demonstrates a great deal of intelligence. On the other hand, it also demonstrates a great deal of lack of understanding. And this is Richard Byrne’s kind of main theory, behavior parsing. So we’ll spend some time on that in one of the future episodes.

[00:36:36]  Orange: Could I throw out an example here? Because again, observing my cat. So whenever cats use a litter box or go outside, they like to cover up their poop. So they’ll dig a hole, then pee or poop, then just cover it up. A lot of times, I’ve noticed my cat will go into the litter box. The poop is in the litter box, but my cat is too dumb to understand that if it’s scratching the floor, that’s not covering up the poop. It’ll look at the poop while it’s scratching the floor. And then it just eventually walks away. It’s unable to see the connection that what it’s scratching there is not covering the poop. So it seems like it’s very mechanical at that point.

[00:37:17]  Unknown: Yes, it is.

[00:37:18]  Blue: Who has seen a dog try to bury its food with its nose? I mean, I’ve seen dogs try to do that. They’re doing it on a linoleum floor with a ball, right? And they’re making this gesture like they want to bury their food. Okay. So yeah, there’s things like this definitely show a really strong lack of understanding on the part of animals.

[00:37:39]  Orange: But the squirrels on that ninja track, the squirrels have this amazing ability that, given different variations of a problem, they actually do figure out novel ways of getting around it.

[00:37:52]  Blue: Okay. And this is the weird thing about animals is that they show gigantic lack of understanding and gigantic creativity, but just in different circumstances.

[00:38:02]  Red: Yeah.

[00:38:02]  Blue: It’s very hard to figure out what’s going on with animals. Right. There’s a great deal of mystery with animals. And like I said, you can’t look to AI to help you here, because animals are doing things that AI can’t, right?

[00:38:13]  Blue: We don’t, we don’t have any… If I were to go create an AI that could do classical conditioning like a dog, I’d be a Nobel Prize winner. Right. I mean, it’s that big a deal. We just can’t do it. Now, these algorithms that we’re talking about, all of them have a variation and selection process involved. It’s most obvious in the case of classical conditioning: repeat variations that keep happening near a positive event, or the opposite of that. Insight allows you to try mental variations on a problem in your head. So if animals have insight, then that’s what they’re doing. They’ve got some ability to try out mental variation, just like a human would. Although humans obviously do this much, much, much,

[00:38:51]  Red: much better.

[00:38:52]  Blue: All of these learning algorithms are variation and selection processes, and therefore fall under the Popperian understanding of knowledge: that it’s created through evolutionary epistemology. Okay. That’s the way Popper looked at the word knowledge, different than maybe the way David Deutsch looks at it. So if we’re sticking with this idea that knowledge only comes from two sources, which David Deutsch has advanced, then we would have to say that animals learn, and what they learn is adapted information that gives them solutions to their problems. It’s stored in their brain. It uses a Popperian evolutionary epistemology. But we’re just not going to call it knowledge. I mean, you can have a word carry as many meanings as you want, but at a minimum, it’s a little confusing to try to not call it knowledge. So this is something that Bart in particular, but Saadia also, kind of brought up to me. This is probably closer to the way Bart brought it up to me, Saadia, but you did say something similar to me once. The argument was: David Deutsch is reserving the word knowledge to mean specifically open-ended knowledge creation that can produce true novelty, or some variant of that. This is actually not an unreasonable idea. It seems likely to me that maybe he is thinking something like that. Now, mind you, he’s never said that. Okay. So we are guessing. If he ever actually came out and said, look, I’m using the word knowledge to specifically refer to the open-ended sorts of knowledge creation, or something along those lines, that would make things a lot more clear as to what he’s getting at.

[00:40:27]  Blue: He’s never actually said that, but I don’t think this is necessarily a terrible guess, right? That that’s kind of what he’s thinking.

[00:40:33]  Orange: I think the problem there seems to be that he doesn’t want to bring in any mention of induction. And as you kind of talked about this too, a lot of people working in AI and stuff, they do talk about some sort of inductive reasoning, right? David Deutsch, he’ll always just say either you have knowledge or induction is a myth; there’s nothing in between, right? But the problem is, even if you have… not to get off topic, but you know how there was a program that generated a bunch of theorems, right?

[00:41:08]  Blue: Yes.

[00:41:09]  Orange: So David Deutsch said that the knowledge was already there in it, but I think there’s something missing there, right? Because we have to run it on a computer, and the computer, you know, the algorithm, some process had to happen for those theorems to be generated. So what exactly is that, then, right?

[00:41:28]  Blue: So the theorem prover was like one of the very first AI programs ever written, before the word AI existed. It was brought up at the conference where the term got coined for the first time. And it created a proof that was novel, that no one in the world had ever seen before. It was for something we already knew how to prove in some other way, but it came up with a shorter, novel proof for that thing. I don’t remember exactly what it was. Which was really kind of amazing, okay? And it does work through a variation and selection process. It tries out different variants. It has heuristics that it uses to try to narrow the search scope, which did come from a human, okay? Now, that theorem prover is one of the specific examples Donald Campbell uses in his paper as an example of something that does create knowledge. Okay, so this is why I originally raised the issue that the Popper-Campbell view of knowledge of necessity must be something very different than the way Deutsch is trying to use the term. Because
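
The mechanism described for the theorem prover, trying out variants while a human-supplied heuristic narrows the search scope, can be sketched as a toy best-first search. This is not the actual Logic Theorist; the "moves," the numeric target, and the pruning rule are all invented for illustration.

```python
import heapq

# Illustrative sketch: search for a chain of moves (variants) that
# transforms 1 into a target number. The heuristic (distance to the
# target) plays the role of the human-supplied guidance that narrows
# the search; the pruning rule discards unpromising variants.

def search(target, moves=(lambda n: n + 3, lambda n: n * 2)):
    frontier = [(abs(target - 1), 1, [])]  # (heuristic, value, path)
    seen = set()
    while frontier:
        _, value, path = heapq.heappop(frontier)
        if value == target:
            return path                    # sequence of move indices
        if value in seen or value > target * 2:
            continue                       # selection: discard variant
        seen.add(value)
        for i, move in enumerate(moves):   # variation: try each move
            nxt = move(value)
            heapq.heappush(frontier, (abs(target - nxt), nxt, path + [i]))
    return None

print(search(14))  # a sequence of move indices reaching 14 from 1
```

The variation step blindly generates candidates; the heuristic and pruning do the selecting, which is exactly the division of labor described above.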

[00:42:33]  Orange: Something has to unfold, right? There is something on top of just the algorithm. Yes. You have to run the algorithm, and there has to be a variation and selection. So there’s a process involved.

[00:42:49]  Blue: Okay, so let’s take this idea that Deutsch is trying to reserve the word for the more open-ended kind of knowledge creation. We can always allow multiple definitions. It’s always okay. You never need to argue with somebody over what a word means. You can say, well, let’s have two definitions, and that’s always available to you. Okay, so knowledge one, we’re going to call that Popper’s definition of knowledge. It is adapted information about solutions to problems, possibly the product of improvements that come from variation and selection. He would have actually said it comes from the improvements of variation and selection, but I’m leaving it a little bit open-ended just in case there are other sources we don’t know about. Not saying that there are, but I’m trying to leave it a little bit open-ended. It therefore follows Popper’s evolutionary epistemology. That would be knowledge one. Knowledge one is this adapted information. It’s the Campbell version of knowledge. It’s what you get out of improvements from any variation and selection process that’s trying to solve a problem. Then we’ll call knowledge two Deutsch’s definition. It’s the subset of knowledge one that comes from open-ended variation and selection processes, and therefore, because they’re open-ended, can create extra-novel results. This is okay. We could think of it this way. We could understand both men as having legitimate definitions of knowledge that even have a relationship between them, where Deutsch’s definition is a subset of Popper’s definition, but it’s a special subset. If we were looking at it in this way, then the statement that there are only two sources of knowledge, which we’ll now take to mean knowledge two, genes and human ideas and memes, that’s now a true statement.

[00:44:25]  Blue: This is how we might go about reconciling David Deutsch’s statements with a certain understanding of knowledge, specifically the more open-ended variety of creation process. And I suspect this is what he has in mind. And, just to make this a bit more clear, I doubt he would tell you animals don’t learn, right? I really doubt he would tell you that, because everybody knows animals can learn. Everyone knows you can teach a dog to sit for a treat. So let’s look at this. What would be the advantages of looking at what we’re now going to call Deutsch’s definition, knowledge two? Well, there really is something special about open-ended knowledge creation, because it really does produce novelty in a way narrow knowledge creation can’t. Just to put this in perspective: the immune system. David Deutsch, in an interview with Eli Tier, says, I think we know for sure the immune system doesn’t create knowledge. Now, according to Campbell, it does, right? According to Popper, it does, because it’s a variation and selection process. It creates antibodies for a disease that has never been known before in the history of the world. That’s the way it works. It uses a variation and selection process to try to come up with antibodies. And it keeps the ones that are actually working. It throws away the variants that don’t work. It’s a straightforward variation and selection process that is knowledge creation in the knowledge one, Popperian sense, but isn’t knowledge creation in the David Deutsch, knowledge two sense. You can see what he might be getting at here. The immune system is never going to produce a jet, right?

[00:46:02]  Blue: I mean, whatever it’s doing, it’s creating knowledge, it’s creating adapted information, but it is so narrow. It is going to do it specifically to make antibodies, and that’s it. Its repertoire is set to creating antibodies.
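
The immune system mechanism described above, generating mutated antibody variants and keeping the ones that bind a never-before-seen pathogen, can be sketched as a toy clonal-selection loop. This is entirely illustrative: real affinity maturation is far more complex, and the string-matching "affinity" below is a made-up stand-in for binding.

```python
import random

# Toy clonal-selection sketch: variation (mutated copies of a candidate
# antibody) plus selection (keep the best match to the pathogen).
random.seed(0)
ALPHABET = "ACGT"

def affinity(antibody, pathogen):
    """Count matching positions (a stand-in for binding affinity)."""
    return sum(a == p for a, p in zip(antibody, pathogen))

def mutate(antibody):
    """Copy the antibody with one random position changed."""
    i = random.randrange(len(antibody))
    return antibody[:i] + random.choice(ALPHABET) + antibody[i + 1:]

def clonal_selection(pathogen, rounds=300, clones=10):
    best = "".join(random.choice(ALPHABET) for _ in pathogen)
    for _ in range(rounds):
        variants = [mutate(best) for _ in range(clones)]   # variation
        best = max(variants + [best],
                   key=lambda ab: affinity(ab, pathogen))  # selection
    return best

# A "pathogen" the system has never encountered before:
pathogen = "GATTACAGATTACA"
result = clonal_selection(pathogen)
print(result, affinity(result, pathogen))
```

Note how narrow the repertoire is: however well this loop adapts, all it can ever produce is a string of the same four letters, which is the sense in which "the immune system is never going to produce a jet."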

[00:46:17]  Orange: So what comes to my mind then is that it seems like in the case of humans, the knowledge creation involves even creating new values. Whereas in the case of animals, the values might be set. Interesting. An animal can’t create new values. So it is definitely, there is a clear difference. There is something quite different about humans in that sense.

[00:46:40]  Blue: That’s a really interesting theory, Saadia. I actually think there could be something to that.

[00:46:46]  Orange: Yeah. And then what you could say is, in that sense, where do those values come from? Those were programmed through genetic evolution. But then once those are programmed in an animal, the variation and selection can be kind of like an optimization process that’s always constrained by the values that are programmed. But with humans, there is no such limit to what values we can create.

[00:47:08]  Blue: You know, that’s a really good theory. We need to give that theory more thought. I don’t think I’ve ever heard anyone vocalize that theory before, Saadia.

[00:47:16]  Orange: That’s behind my theory of morality too, but that’ll be a different topic.

[00:47:20]  Blue: Okay. We’ll have to bring that up again. And I need to think about that more, but initially my thought is that’s a very interesting theory. We need to look at that more. Maybe what we could say is that the genes set the values for animals, but then they’re able to create knowledge, in a limited capacity, about how they’re going to solve for those values. But humans can actually create entirely new values.

[00:47:45]  Orange: So can I quickly quote Popper? In his book, Unended Quest, he said that values are created with problems. And then he also says all life is problem solving. So if we say that, we can see that the problems that a cat or a dog solves are limited, and so are the values. They go hand in hand, right? But with humans, the problem solving is unended. We don’t even know what the future problems are going to be like, right? Right.

[00:48:18]  Blue: Anyway, that’s just worth mentioning. And let me just say that knowledge one creation processes, the ones we currently know about, the ones that we can actually program, really aren’t that impressive in a lot of ways. I mean, sometimes they do some pretty impressive things. AlphaGo is very impressive. But even AlphaGo is never going to create a jet, right? I mean, it is still super narrow in its domain, right?

[00:48:41]  Orange: And it did seem like, because I listened to your podcast and I even watched that video on AlphaGo, it seemed like they were nonstop teaching it. They gave AlphaGo access to all the games that had been played out there by different players. So it was learning from those, right?

[00:48:57]  Blue: Yeah, initially. Eventually… the current AlphaGo does not use any human games. It just plays itself. But now that knowledge is in there, right? They started from scratch with AlphaZero. That’s why there’s the zero in it. They actually found that it worked better to not use human games. But initially, when they were just getting it off the ground, human games were very useful. Once they kind of figured out the algorithm, they found that it actually was better just to let it play itself. With the human games, they thought that might have been part of the reason why it was limiting itself. But anyhow, that’s a different story. So, you know, the Paramecium example that we use a lot, that comes from Campbell: it literally has a set number of predetermined directions that are already known. So some knowledge one creation processes are so limited that their repertoire is literally set. Now, that’s not always true. Genetic programming, for example, has an infinite repertoire. In theory, it could discover anything. It never does in practice. Why? We don’t know. That’s part of the problems that need to get solved. But many knowledge creation processes in the knowledge one category, the Popperian category, are super unimpressive, to the point that you can understand why people maybe are even hesitant to call it knowledge. So in some ways, the fact that Campbell was willing to call it knowledge, willing to say, look, the Paramecium is running a variation and selection process, and it is coming up with, within the domain it was meant to operate in, a novel solution to a problem. It’s blocked; it needs to find its way out.

[00:50:32]  Blue: It has no way of knowing which is the way out. It is novel in that sense, right? But it does it with preset, predetermined outcomes. And you can see why some people would say, oh, you know, that’s not quite what I meant by knowledge. But we can also see that it does follow a Popperian evolutionary epistemology, which is why Campbell and Popper wanted to include it under the umbrella of knowledge creation. So

[00:50:58]  Orange: let me ask you this question about what you said. Can you go back one slide, to the genetic programming? You said genetic programming has an infinite repertoire. Do you mean by that all the different possibilities, the landscape of possibilities? It

[00:51:12]  Blue: uses programming languages, and programming languages are known to be, you know, Turing complete. So the repertoire would be everything. There’d be nothing outside it, if you buy computational universality, which maybe you don’t. But if you do, the repertoire would be universal.
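
The contrast being drawn, a fixed preset repertoire like the Paramecium's versus an in-principle universal one like genetic programming's, can be sketched with a toy. This is purely illustrative: the searcher below does blind variation over a preset list of directions, with selection by outcome, and no amount of running it will ever grow that list, whereas a genetic-programming system varies over programs, so its repertoire is in principle unbounded.

```python
import random

# Paramecium-style fixed-repertoire variation and selection:
# blindly try directions from a preset list (variation) and keep
# whichever one isn't blocked (selection by outcome).
random.seed(1)
REPERTOIRE = ["north", "south", "east", "west"]  # preset by the "genes"

def find_way_out(blocked):
    """Try random preset directions until one is not blocked."""
    tried = []
    while True:
        direction = random.choice(REPERTOIRE)   # blind variation
        tried.append(direction)
        if direction not in blocked:            # selection by outcome
            return direction, tried

# Blocked on three sides; it has no way of "knowing" the way out.
direction, tried = find_way_out(blocked={"north", "east", "west"})
print(direction, tried)
```

The answer it finds is novel to it, in the sense described, but the set of possible answers was fixed in advance.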

[00:51:28]  Orange: I guess I’m still not getting it. In what sense…

[00:51:31]  Blue: Like,

[00:51:31]  Orange: oh, so any program that you could possibly discover.

[00:51:36]  Blue: It can, in theory, discover any algorithm, any Turing-computable algorithm. Got it, got it. Okay. Okay. So now we’ve talked about why the Deutsch definition of knowledge, what we’re assuming is the Deutsch definition of knowledge, actually does make some sense. Now let’s talk about why maybe it doesn’t make sense, and why the Popper-Campbell one might be better. And of course it depends on your circumstance, right? And it’s okay to have two. But let’s be realistic about this. There really is a meaningful difference between knowledge that’s created in the genes, and the genes having a learning algorithm that then in turn creates knowledge based on its environment and then stores it in the brain or the nervous system. Okay. Those are just not the same thing. It’s really important to understand that they are not the same thing. Now, we talked about this in episode 20, Evolution Outside the Genome, and I received comments. In that episode, we talked about the Picasso frogs, where a scientist takes the tadpole’s eyes, or other sense organs, and pulls them into an incorrect position. And then when it transforms into a frog, the eye moves into the right spot, even though it started in the wrong place compared to what the genes would have “known” about that. And it does it through error correction. It does it by correcting itself until it gets into the right spot. Well, that sounds like a learning algorithm. Now, we don’t know what the algorithm is, and so we’re making some guesses here. Everything’s a conjecture anyhow. But that sounds like a learning algorithm.

[00:53:12]  Blue: It sounds like it’s doing some sort of variation and selection process, in the Campbellian sense. And it’s moving the eyes into place, even though they’re starting in the wrong spot. So I got comments from people on this. And these comments do make sense. I understand where they’re coming from. They would say: you claim the knowledge of where to place the eyes in the Picasso frog is determined by a learning algorithm in the cells rather than being in the genes, but didn’t the knowledge for how to do that come from the genes? So doesn’t that mean the knowledge was in the genes? Okay. This is what we might call the credit assignment problem: that since the knowledge for how to do it came from the genes, we’re going to say the knowledge was in the genes, not in the cells. Okay. This argument is equivalent to saying, and this is obviously an extreme version: you claim human beings create knowledge via their creative algorithm, but didn’t the knowledge of the creative algorithm come from the genes? So isn’t all creativity really just knowledge in the genes? Well, that’s a stupid argument. Okay. I mean, obviously that’s a stupid argument, but it’s the same argument, and it misses the point for the same reason. If you’re going to try to assign credit like that, everything has to go back to the genes at some point. And we’re not really interested in assigning credit. What we’re really trying to understand is: how do variation and selection processes, how does evolutionary epistemology, work with animals, with algorithms? How does it work? That’s the question we’re really trying to answer. We’re not that interested in who gets the credit. And really, think about it.

[00:54:42]  Blue: The genes could never have anticipated a problem like a scientist deciding to move the eyes in the tadpole to the wrong place, but they didn’t have to. And the reason why is because evolution had already “realized” that having a flexible, error-correcting learning algorithm that moves the eyes into the right place is less brittle than just trying to move them in some fixed direction. That is more flexible. So yes, the genes created the knowledge that’s in the learning algorithm. The learning algorithm in turn creates knowledge also. Maybe you want to say less so. I don’t care. The genes have more knowledge, more important knowledge, more open-ended knowledge. We can say this in a lot of different ways, but it’s important to understand that there is a difference there. And that’s the point I’m trying to get across. Also, this really does lead to confusion. Even if Deutsch really does intend this as “I’m reserving the word knowledge for this open-ended, more novel kind of knowledge creation,” people are instead interpreting it as the pseudo-Deutsch theory of knowledge, which is wrong. I don’t know how else to say it. So you hear things like: learning algorithms are misnamed, they don’t create knowledge at all, all the knowledge came from the programmer, so there’s nothing special about them. This is false. This is literally just a false statement. That is a misunderstanding where you’ve taken what Deutsch said and turned it into the pseudo-Deutsch theory of knowledge. And this is just wrong. Okay. “Animals do not learn.” I’ve heard that before. “Animals have all their actions and reactions pre-built into their brains from the genes.” Okay. It’s literally impossible, for the reasons that we gave.
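
The "flexible, error-correcting" idea described for the Picasso frog, a feedback loop toward a target rather than a fixed hard-coded position, can be sketched in a few lines. The names, numbers, and the one-dimensional setup are all made up for illustration; nobody knows the frog's actual algorithm.

```python
# Illustrative sketch of an error-correcting placement loop:
# instead of hard-coding a position, the "genes" specify a feedback
# rule that repeatedly shrinks the error, so it still works when a
# scientist perturbs the starting point.

def move_into_place(start, target, step=0.25, tolerance=0.01):
    """Repeatedly correct a fraction of the remaining error."""
    position = start
    while abs(target - position) > tolerance:
        error = target - position
        position += step * error   # correct part of the error
    return position

# Works no matter where the "eye" starts:
print(move_into_place(start=0.0, target=5.0))
print(move_into_place(start=9.3, target=5.0))
```

This is why the feedback design is less brittle than a fixed instruction: the unanticipated starting point simply becomes a larger initial error, which the same loop corrects.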

[00:56:17]  Blue: The genes don’t have enough information to do that. One response that would be a very good one would be if somebody said to me: I can see that Popper and Deutsch define knowledge differently, and they are both right under their respective definitions. I have had two people say that to me so far, I do believe: those are Saadia and Bart. Everybody else I’ve talked to about this will tell me, and I’m not kidding you, Popper is wrong, Deutsch is right. Okay. They’re not seeing it as a different way of looking at knowledge. They’re seeing it as: there’s only one way to look at knowledge. They’re falling into what I call word essentialism. And they’re missing the fact that there are things outside what Deutsch calls knowledge that follow evolutionary epistemology. That’s just the truth. And that is what Campbell’s whole theory is about.

[00:57:01]  Orange: I think, to me, it seems like what they might be missing is… So, you know, DNA has sort of like a memory, right? Where it stores information. But the fact that in animals there’s also a memory associated with the brain, that in itself tells us that, okay, if it can store knowledge, why can’t it also update that memory? And hence, why can’t we see that there could be variation and selection at that level going on? But I guess the problem those people see is that they somehow feel like the genetics offer the biggest constraint, right? Like genetics eventually just constrain everything. And I guess the question would be, is it possible that there could be some knowledge created in the brain of an animal that could almost have, I hate to use the word, top-down causation, where some constraints created there could now be sort of like an epigenetic phenomenon, where certain knowledge that’s created in the brain starts affecting your DNA, right? I think that that is where this is going, right?

[00:58:10]  Blue: Because if you know that there is such a thing, right?

[00:58:13]  Orange: The epigenetics. I guess that would be epigenetic, right? Yes. Like if you’re mentally stressed out and stuff, that can actually affect the expression of your genes.

[00:58:26]  Blue: So why couldn’t that be the case with animals too? Yes. Okay, and then here’s my next argument. I need a word. I can’t keep saying “adapted information that consists of solutions to unanticipated problems, created by a Popperian evolutionary epistemology process of variation and selection, and stored in the brain.” It’s just too long, right? I need a word to refer to whatever that is, okay? The immune system. That’s good.

[00:58:53]  Red: That’s good.

[00:58:53]  Blue: Yeah. So the immune system, what the immune system does: it creates something, and it does it through a variation and selection algorithm. There’s a word for that. There is actually a word in the English language for what the immune system creates. The word is knowledge. That’s the word. It’s been the word for a very long time, okay? And this is kind of the problem with utilizing an existing word in a special way, which Deutsch likes to do. And sometimes that’s the best thing to do. I’m not trying to say it’s bad. I’m just trying to say there’s a danger with it that you have to accept, which is that it will cause confusion with what the word originally meant, okay? Which might still be a useful concept. In this case, it is a useful concept, because it’s part of what we want to study as part of Popper’s evolutionary epistemology. Refusing to call that knowledge doesn’t change the fact that something is going on there that needs a word, and it’s distinct from the genes. And this something is following Popper’s epistemology. This is my real argument, and why I’m going to use the word knowledge. I’m not trying to say the other definition is wrong. I don’t believe that. I don’t believe in word essentialism. I think it’s fine for David Deutsch to use the word knowledge in a special way. But people need to understand why I use the word this way. It’s because I just don’t have another choice. There is no other word for it in the English language. Here’s another thing. Popper’s definition, go back to the way I defined them. The Popper definition is about what knowledge is.

[01:00:29]  Blue: It’s a solution to a problem, okay? Deutsch’s definition is about what process created it: it came through an open-ended knowledge-creation process. Now there’s some precedent for this, like industrial diamonds versus natural diamonds. They’re physically the same thing, but one came through one process and one came through a different process, so we value one more than the other, okay? So this knowledge two, defining knowledge in terms of the process that creates it, isn’t necessarily wrong or bad. It might make sense. Especially since we don’t…

[01:00:59]  Red: I don’t know, if you’re using that as an example, the human choice to value the one over the other is silly and is artificially created by people who want to monetize it. And the fact that they’re able to trick people into valuing the one over the other… there is no difference. Why would you possibly value one over the other?

[01:01:24]  Blue: You make a good argument that I am not gonna try to challenge. So yeah, I mean, if I could, I would probably want to buy an industrial diamond for cheaper and use that for a wedding ring. Why not? It looks just as pretty; no one can tell the difference. Even a gemologist couldn’t tell the difference, so why not? So if that’s the example, then we should not value the way something is created. We should only value the outcome, what it does for us. Okay, this is probably a good spot to remind us, though, that this is not necessarily Deutsch’s definition. We’re pretending like it is based on an argument that Bart had made to me, that maybe this is what Deutsch had in mind, but we don’t know that for sure. Let me make an argument here. Okay, so let’s imagine a scenario. This is probably in the future, because I don’t think we have the knowledge to do this yet. Think about some pandemic happening, and you have a bunch of scientists in the lab who are going to engineer and construct an antibody that will then fight off this virus or bacteria or whatever it is. Okay, so they use their creativity and their knowledge of how antibodies work and how diseases work. They use explanations, scientific explanations, and they work out: we need this antibody to be like this. And they instantiate it, and then they duplicate it, and they create these antibodies that you can inject into a person, and it makes the disease go away. Okay, we don’t do that today. We use the immune system itself to create it.

[01:03:05]  Blue: But we can imagine, in the future, scientists being able to do this. Okay, they just used knowledge two, the Deutschian version of knowledge, because it came through an open-ended creation process. So the antibodies that they created now embody knowledge in the sense of knowledge two. Now imagine the immune system, just on its own, hypermutates (it uses DNA to hypermutate, according to Noble, so it’s not really different than the genome) and it comes up with the exact same antibodies just by using random variation and selection. Okay, these antibodies are identical in every way to the ones created by the scientists who used creativity to bring them together. The only difference was that they were created by a, quote, mechanical process of the immune system using variation and selection. We now have this weird circumstance where we have identical antibodies, literally identical antibodies, and one set we say embodies knowledge, meaning knowledge two, and the other is just preexisting knowledge in the genes. Okay, even though they’re the same. It doesn’t make sense, right? It leads you into weird circumstances like that if you try to define things in terms of the process that creates them rather than what they are. And that may be fine. It may still be okay to talk that way. I acknowledge Cameo’s argument here, but someone else may feel differently. Maybe they’re very much against industrial diamonds. And this would be my argument for why maybe it’s not the best to try to use the word knowledge in this way, to only refer to certain open-ended processes. Regardless of what we call it, we might say, oh, these antibodies embody knowledge two, but the others were just preexisting knowledge in the genes.

[01:04:53]  Blue: Well, the only sense in which that’s true is that the genes had a learning algorithm, the immune system, that then created knowledge one that was novel. Okay. So I don’t think this is the best way to go about trying to understand knowledge. I guess, at the end of the day, I propose that we use the existing word for animal learning. Animal learning creates knowledge, period, end of story. You can think of it as knowledge one versus the Deutschian knowledge two, that’s fine, but it’s the only word I really have to describe it. Now, this is my final argument on this front: I believe that trying to define it in a special way like this leads to confusion even for David Deutsch himself. And here’s my argument for why I believe that’s the case. In The Fabric of Reality, David Deutsch defines knowledge. And he’s had more than one definition; he claims something like five or six different definitions of knowledge. In The Fabric of Reality, it was convergence across the many worlds. The example he uses is DNA. He says, look, you’ve got some string of DNA that’s a gene, and it contains knowledge. And maybe you have the exact same string inside the junk DNA, which doesn’t contain knowledge. How do you tell those apart? You could look across the many worlds, if you could, and you would see that the junk DNA varies across the worlds, whereas the knowledge-bearing DNA, the genes, would be the same, because they’re hard to vary. I actually think that’s a very good definition of knowledge. And you don’t actually need many worlds to do this.

[01:06:18]  Blue: You could go look at just multiple variants of the same species and you would discover the same thing: the genes, the knowledge-bearing sequences, are the same across members of the species, whereas the junk DNA, even if it was the exact same sequence, would vary across the species. You can do this with computer programs. If I run a machine learning algorithm, say a neural net trying to find the right set of parameters that are going to work, the set of possible parameters is much larger than the set that’s good. Because good parameters, even for heuristics like this, are hard to vary. You would find across multiple runs that it tended to find the same solutions. There may be hundreds instead of an infinite number, but you’d probably find there are a hundred or so different good configurations, and it always finds one of those hundred. So under that definition of knowledge that he used in The Fabric of Reality, he is describing knowledge one, the Popper definition of knowledge, not the Deutschian definition of knowledge. Can you see that that’s the case? Think about the immune system as the example. The DNA that it hypermutates would converge across the many worlds to the right sequence to create the antibody that kills that bacteria. So in its DNA, in this case, per the Nobles’ paper, that definition of knowledge from The Fabric of Reality is the Popper definition of knowledge. Now you might argue with me here: oh, but he changed the definition in The Beginning of Infinity. There he defines knowledge as information that, once physically embodied, tends to cause itself to remain so.
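
The point about independent runs converging onto a small set of good configurations can be illustrated with a toy hill-climber. Everything here is invented for illustration: instead of a neural net, the "parameter settings" are 16-bit strings, and the fitness function simply designates a handful of configurations as good. The space has 65,536 members, yet every run of blind variation and selective retention lands on the same few:

```python
import random

N = 16  # 2**16 = 65,536 possible "parameter settings"

def fitness(bits, targets):
    # Score = similarity to the nearest of the few "good" configurations.
    return max(sum(b == t for b, t in zip(bits, tgt)) for tgt in targets)

def hill_climb(targets, steps=2000, seed=0):
    rng = random.Random(seed)
    current = [rng.randint(0, 1) for _ in range(N)]
    for _ in range(steps):
        variant = current[:]
        variant[rng.randrange(N)] ^= 1        # blind variation: flip one bit
        if fitness(variant, targets) >= fitness(current, targets):
            current = variant                 # selective retention
    return tuple(current)

def random_config(seed):
    rng = random.Random(seed)
    return tuple(rng.randint(0, 1) for _ in range(N))

# Three arbitrary "good" configurations out of 65,536 possibilities.
GOOD = {random_config(s) for s in (101, 102, 103)}
results = {hill_climb(list(GOOD), seed=s) for s in range(30)}
print(results <= GOOD)  # independent runs all converge onto the small good set
```

This is the "look across multiple runs" test in miniature: the solutions that runs keep converging on are exactly the hard-to-vary ones, which is the Popperian knowledge-one criterion rather than a claim about what process produced them.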

[01:07:59]  Blue: This is really a very novel definition of knowledge. And by the way, I think it’s a correct one. Let’s take a look at the immune system again. Is that definition of knowledge, does it fit the immune system? Sonny, that’s probably a question for you.

[01:08:11]  Orange: I’m going to pass on that.

[01:08:12]  Blue: It’s the same thing, right? The immune system has to hypermutate to find the sequence of DNA that creates the antibody that actually works against the disease. It tries lots of variants of the antibodies. It kills off the ones that didn’t work and it keeps the ones that were actually effective. So they stay embodied. So under the Deutsch definition of knowledge from The Beginning of Infinity, the immune system creates knowledge. That definition of knowledge also ends up fitting the Popper definition of knowledge, not the Deutschian definition. I think this is the problem: if you try to define it in terms of a process, you bump into problems. He’s trying to come up with a more substantive definition, but we don’t know enough about open-ended processes, so any definition he comes up with will happen to always be the Popper definition.
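
The variation-and-selection loop described here can be sketched in a few lines. Everything below is illustrative: the "antigen" string, the position-matching affinity score, and the mutation rate are all made up, and real affinity maturation is vastly more complicated. The point is only the shape of the algorithm: clone, hypermutate (variation), keep the better binders (selective retention):

```python
import random

ALPHABET = "ACDEFGHIKLMNPQRSTVWY"  # amino-acid letters, purely illustrative

def affinity(antibody, antigen):
    # Crude stand-in for binding strength: count matching positions.
    return sum(a == b for a, b in zip(antibody, antigen))

def hypermutate(antibody, rate, rng):
    # Blind variation: each residue mutates with a small probability.
    return "".join(rng.choice(ALPHABET) if rng.random() < rate else c
                   for c in antibody)

def clonal_selection(antigen, pop_size=50, generations=200, seed=0):
    rng = random.Random(seed)
    pop = ["".join(rng.choice(ALPHABET) for _ in antigen)
           for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: the better binders survive...
        pop.sort(key=lambda ab: affinity(ab, antigen), reverse=True)
        survivors = pop[:pop_size // 2]
        # ...and their clones hypermutate (variation), then the loop repeats.
        pop = survivors + [hypermutate(ab, 0.05, rng) for ab in survivors]
    return max(pop, key=lambda ab: affinity(ab, antigen))

antigen = "MKTWYENPL"          # hypothetical target sequence
best = clonal_selection(antigen)
print(affinity(best, antigen))  # a strong binder emerges from blind trial and error
```

Whether we call what this loop produces "knowledge" is exactly the terminological question at issue; the output is reached by variation and selection and, once effective, it stays embodied, which is why it fits both definitions under discussion.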

[01:09:01]  Orange: Actually, one thing that I was thinking about while I kind of drifted off was the whole importance of the concept of a process in the definition of knowledge. Because, I mean, even if it is open-ended, right? I mean, if we just focus on this: as long as there’s a process of variation and selection, it’s a process of knowledge. That’s the way I’m kind of thinking about it, right? In one case, okay, maybe it keeps going. In the other, maybe it just becomes restricted. But I think, in that sense, there would be a kind of unity in both. That in both cases, we see that as knowledge. The knowledge is the process. It’s not like a book sitting on a table just has knowledge. The book doesn’t have knowledge till somebody is actually doing something with it, right? There has to be a mind that’s engaged with it and knows how to decipher what’s in the book. I mean, if the entire human race died out, would that book still have knowledge?

[01:09:58]  Blue: Actually, Popper says it does. He says that that’s World 3. Because if an alien found it, they could decipher it and it would contain knowledge. So he believes that the book, in some sense, contains objective knowledge, which he sees as separate from subjective knowledge, which would be what’s in the mind.

[01:10:14]  Orange: Maybe we’re going off track, but one of the things that has always fascinated me is this: if there was a book with no pictures or anything, just, let’s say, a book written in English, and there was nothing else that showed how to decipher it, could an alien actually figure out what it says?

[01:10:34]  Blue: Yeah, they may not be able to. You’re actually right. That’s what I mean. Think about us trying to decipher ancient languages without the Rosetta Stone. We can’t do it, right?

[01:10:45]  Orange: Or not just that. I will always wonder: if some alien communicated with us and there wasn’t any type of visual thing involved, because the visual might still help us, if it just came as a bunch of symbols, even if there were patterns, without knowing how to decipher them, we wouldn’t really know, right?

[01:11:00]  Blue: Yeah.

[01:11:01]  Orange: So in that sense, from our viewpoint, we wouldn’t even see it as knowledge. So for something to have knowledge, there has to be a process and there has to be some understanding between whatever is engaged with it. Like, if it’s happening in our brain when we’re reading the book, there has to be a way… meaning is important, in that sense.

[01:11:20]  Blue: Yes. The way Popper explains that is he says that World 3 can only interact with World 1 (World 1 being the physical realm, World 3 being the realm of objective knowledge) through World 2, which would be minds. So that is actually pretty similar to what you just said. I just happened to read that recently, by the way. So that’s interesting that you brought that up. Let’s get back to animals now and how this all applies to animals. There’s actually one more really interesting point that needs to be brought up in terms of knowledge creation in animals and genes. So let’s go back to the squirrel example. A squirrel has a little program in the brain: when it has food, it attempts to dig into the ground, it attempts to put the food into the ground, it attempts to use its nose to pat the food down, and then it buries it, okay, so that it can hide the food. And it will try to do that on concrete, right? Just like your cat will try to bury stuff on linoleum. So here we have this algorithm that is very mechanical and shows really no real understanding, and that’s why Deutsch uses this example. So he would say, well, that’s knowledge in the genes. The squirrel is enacting a program that’s in its genes, which is true, by the way; that’s correct. There’s something missing from this story, though, namely: how does an algorithm like that evolve in the first place? When you think about it, this is the exact same problem that all evolution has. Let’s say an animal evolved the ability to dig once, one attempt to dig. That wouldn’t be useful.

[01:12:58]  Blue: Let’s say it does that multiple times and digs a hole. That has no survival value on its own. It’s a useless program. Let’s say it evolved the ability to pat down the food. That maneuver on its own, without having first dug, has no survival value. It would never evolve it. Each step in this complex set of steps in the algorithm has to work under neo-Darwinian evolution. Each one has to, on its own, be valuable. And it looks like they’re not valuable, except as a collection. So when we claim that animals just evolved these algorithms, and we don’t explain anything any further, we’re actually being Lamarckian: this animal needed this algorithm, so it evolved it. If you want to understand how to square this with neo-Darwinian evolution, there’s actually a well-known explanation that exists. Campbell points this out. It’s called the Baldwin effect. So, quoting Campbell (instincts here are the mechanical processes that the genes have; this is separate from what the animals learn, according to Campbell): complex adaptive instincts typically involve multiple movements and must inevitably involve a multiplicity of mutations, at least as great in number as the obvious movement segments. Furthermore, it is typical that the fragmentary movement segments, or the effects of a single component mutation, would represent no adaptive gain at all apart from the remainder of the total sequence. The joint likelihood of simultaneous occurrence of the adaptive form of the many mutations involved is so infinitesimal that the blind-mutation-and-selective-retention model seems inadequate. Okay, in plain English: it’s not possible under neo-Darwinian evolution to evolve complex algorithms like we see animals have. It literally violates neo-Darwinian evolution.

[01:14:43]  Blue: Unless, and this is the next quote from Campbell: the adaptive pattern being thus piloted by learning, any mutation that accelerated the learning, made it more certain to occur, or predisposed the animal to certain component responses would be adaptive and selected, no matter which component was affected or in what order. The habit thus provided a selective template around which the instinctive components could be assembled. In plain English, it’s not a problem, because evolution evolved learning algorithms first. If you were to go look at that squirrel and go back in time, maybe even to some ancestor prior to the squirrel, you would find that the original animals learned to dig, bury the nut, and pat it down with their nose through regular learning processes. And then they memetically transferred it to other animals. It turned into a survival advantage to have learned this algorithm that’s not built into the genes. Once that has moved through the animal populace through memes (animals do have memes; we’re going to talk about this), then there is a survival advantage to it evolving into the genes. Because if the animal has learned how to dig a hole, and it then evolves a gene that says, once the hole’s dug, just pat the food down with your nose, it doesn’t matter that patting down doesn’t by itself have survival value, because it’s happening within a habitat that includes learning algorithms. So if you want to understand the whole concept of having algorithms that exist in the genes, which certainly exist, you have to understand that the learning algorithm has to come first, or neo-Darwinian evolution doesn’t even work.
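
The Baldwin-effect logic Campbell describes has a classic computational demonstration in Hinton and Nowlan's 1987 simulation. Below is a stripped-down toy in that spirit; the component count, trial counts, and rates are all invented. Each component of a behavior is either innate ('1') or must be found by lifetime trial and error ('?'), only the complete behavior ever pays off, and selection gradually assimilates the learned components into the genome:

```python
import random

COMPONENTS = 6   # the behavior has 6 steps (dig, place the nut, pat it down, ...)
TRIALS = 50      # trial-and-error learning attempts per lifetime

def lifetime_fitness(genome, rng):
    # Only the COMPLETE behavior pays off (Campbell's point). Innate components
    # come for free; all '?' components must be hit together in one trial.
    unknown = genome.count('?')
    for trial in range(TRIALS):
        if all(rng.random() < 0.5 for _ in range(unknown)):
            # Earlier success = more of the lifetime spent exploiting the skill.
            return 1.0 + (TRIALS - trial) / TRIALS
    return 1.0  # never got the whole sequence together

def evolve(pop_size=80, generations=100, seed=3):
    rng = random.Random(seed)
    pop = [['?'] * COMPONENTS for _ in range(pop_size)]  # everything learned at first
    for _ in range(generations):
        pop.sort(key=lambda g: lifetime_fitness(g, rng), reverse=True)
        pop = [g[:] for g in pop[:pop_size // 2] for _ in (0, 1)]  # top half breeds
        for g in pop:
            if rng.random() < 0.2:                       # occasional mutation
                i = rng.randrange(COMPONENTS)
                g[i] = '1' if g[i] == '?' else '?'       # innate <-> learned
    return sum(g.count('1') for g in pop) / pop_size

mean_innate = evolve()
print(mean_innate)  # rises well above the initial 0.0 as learning is assimilated
```

Without the learning loop, a partially innate genome would be no better than a blank one and selection would have nothing to climb; the learned habit provides the "selective template" Campbell describes.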

[01:16:27]  Orange: That’s really interesting. So let me first of all make sure I get it. You’re saying that with the animals, a lot of that behavior that we talk about as instinctual behavior, right, that knowledge in the genes, actually came because they were trying certain things out and whatever worked…

[01:16:46]  Blue: That’s correct. And this makes it more impressive. Yes, it’s true, animals have algorithms in their genes that they follow and don’t understand. That is absolutely true. But once you realize that some animal in the past had to learn it, it’s more impressive, right?

[01:17:01]  Orange: In other words, the explanation of that is not in their genes, but actually in their learning before it got there, right?

[01:17:07]  Blue: Yes, it did not get into the genes until they learned it. They had to learn it first. It had to pass memetically. Okay, and Nicholas Christakis, I won’t read this whole quote here, but in Blueprint, he talks about this as well. He talks about how, over time, certain behaviors that were previously learned may become genetic. They call this the Baldwin effect; Baldwin was an early person who studied it. And then he gives an example of a bird where the bird has to learn its song, and there’s survival value in attracting mates if you can get the song right. So there’d be a survival advantage to having parts of the song start to be part of the genetics, and eventually the whole song becomes genetic. And at that point, it’s just an automatic response that they do. This is a very important thing to understand. And thus, when we talk about buggy animal behavior, we need to go back and understand buggy animal behavior in light of what we actually know about the Baldwin effect and about knowledge in the Popperian sense. So let’s talk about the squirrel example, where it tries to dig on concrete and tries to bury the nut on concrete. What does this actually demonstrate, now that we have better context? Well, it does not suggest that if some animal behavior is innate, all of it must be innate. We know that can’t be the case. The Baldwin effect requires that they had to learn it first. Unless, I mean, you could make a case against the Baldwin effect in certain cases where each individual step was by itself adaptive, like maybe fight maneuvers or something like that, right?

[01:18:37]  Blue: But when we’re talking about the squirrel example, you have to reference the Baldwin effect, or you’re basically supporting Lamarckian evolution. It does not demonstrate that animals get all their knowledge from the genes. That is literally impossible; knowledge in the Popperian sense, not the Deutschian sense. However, they do get all their open-ended knowledge creation from regular genetic evolution. So there’s a sense in which we might say all the knowledge is in the genes, but only under the Deutschian definition, and I’ve given my reasons for why I don’t prefer it. It also does not demonstrate that humans never do equivalent things. Most of you probably know the pot roast principle. If I say “the pot roast principle,” do you know what I’m talking about? Maybe you’ve never heard it called that. Okay. Can somebody explain the pot roast principle? The pot roast principle is a story about a mom putting a pot roast in, and the daughter asks, why do you always cut the ends off?

[01:19:31]  Red: And she said, well, my mom always cut the ends off. And so they asked grandma, why did you always cut the ends off? And she said, that’s how big of a pan I had.

[01:19:37]  Blue: Right. So they were carrying on this meme that you’re supposed to cut off the end of the pot roast, and the reason for it wasn’t there anymore. And humans also have automatic reflexes. We have a ton of automatic reflexes and responses that are very much like animals having an automatic algorithm, even if we’re just talking about the fact that you’ll pull your hand away when the snake strikes, even though it’s behind glass or something like that. So humans do have equivalent things, just like animals do. Although we don’t have anything quite as complicated as the squirrel example. Or maybe we do; I just don’t know what it is. I don’t know of any examples like that. It does not demonstrate that animals are not conscious or that they are automatons. Those are not the same thing, by the way; we’ll talk about that in a later podcast. What does it demonstrate? It does demonstrate that animals have really severe inabilities to understand things like a human does. A human, even if they had an automatic response to try to bury their nuts and pat them down with their nose, is not going to do it on concrete, because they have an explanation for why that’s the wrong thing to do. And animals just don’t have that, right? And this really is a big giant gap, which is part of the reason why animals sometimes come across as so intelligent and sometimes come across as so stupid. They do not have access to human-level explanations. We will talk about Byrne’s theory; he thinks they have access to a certain kind of explanation, very limited, and we’ll get into that. And then, only some animals.

[01:21:10]  Blue: I do think the squirrel example is a very strong indicator of the lack of understanding that animals tend to have. Now, you could probably train an animal to not make these mistakes. You could classically condition an animal to not dig on concrete, right? Because they still have a learning algorithm that can override those built-in algorithms. But with a human, you wouldn’t need to classically condition them, right? Like with the pot roast principle: once they talk to their mother and find out, okay, here’s the explanation for why she did it, and it doesn’t apply to us, they change their behavior immediately, right? You don’t need to classically condition them to change their behavior. You just explain it to them and the behavior changes. And I also think Deutsch would argue here that humans make up explanations, that if you were to ask these people who were cutting off the end of the pot roast, you know, why do you do that? They may say something like, well, I think maybe, you know, my mom used to always do that; I thought maybe it was to make it juicier when it’s cooking. Humans will make up explanations. Now, we’re going to see later that I’m going to challenge whether those explanations are actually the reasons why we do things or whether we’re confabulating them on the spot, because there’s actually some really interesting studies around this. But I do think that’s a difference, right? At a minimum, we can give you explanations, even if we’re making them up on the spot, we can give you explanations for why we behaved the way we did that are usually at least semi-reasonable. Whereas animals can’t do that at all.

[01:22:37]  Blue: With this in mind, animal learning really does amaze me, okay? So Carlos E. Perez, who’s a machine learning expert who’s trying to study AGI, says: have you seen a machine with the autonomy of a honeybee? I did a post on Jenga-playing cats and dogs. The fact that a dog can be trained with a few examples to play Jenga… a dog uses its mouth; the cat just uses its paw, and it will actually go try to find something to grab. And at one point, in one of the videos that I have, the cat tries to cheat. It doesn’t want to experience knocking over the pile, so it tries to convince the owner that the piece that was pulled out is its piece. So, I mean, those are the clever things that these dogs and cats can come up with.

[01:23:18]  Blue: If I were to try to do that via machine learning, using reinforcement learning… I mean, coming up with a robot and training it from the ground up to do Jenga would require thousands, tens of thousands, hundreds of thousands of runs, and you would have to program in a world space first. It’s nowhere near. If I were to take any existing robot out there, the best-built robot, one that wasn’t built specifically to play Jenga and wasn’t given specific knowledge about how to play Jenga by a programmer, and then said, I’m going to classically condition this robot to play Jenga, you could not do it. There’s none in existence that could do it. Animals are amazing. Classical conditioning as an algorithm seems so simple. You ring the bell and the dog salivates because it’s used to getting the food when it hears the bell ring, right? It sounds like we understand what we’re talking about when we talk about classical conditioning. Go try to program it. Okay, and this is where the rubber meets the road. This is why I believe that you don’t fully understand it if you can’t program it. That comes from Richard Feynman. Go try to program classical conditioning. No one knows how to program it. And I’m serious when I say you would win a Nobel Prize if you could program classical conditioning. It is amazing what animals can learn and learn to do. Clearly it’s something way shy of AGI, and yet even it is beyond our current understanding. It’s mysterious to us.

[01:24:48]  Blue: So this is kind of the background for the rest of what we’re going to talk about. On the one hand, I’m going to emphasize that animals are brilliant; on the other hand, that animals are stupid. On the one hand, that it’s amazing how well animals can learn; on the other hand, that animals are so stupid compared to how humans learn. And these are all simultaneously true. Trying to figure out exactly why that is is what Richard Byrne tries to study. And it is hard. We read one of his books, and he’s constantly contradicting himself, because there are almost always counterexamples to every theory about animals. And he goes through and just takes you through it; it’s one of the things I really love. The book I’m talking about in particular is Evolving Insight by Richard Byrne. I also have The Thinking Ape; I haven’t read that one yet. It’s a fascinating book. In the next podcast that we do, I’m going to start to take you through his theory. And I’ll compare it to Deutsch’s interpretation of his theory. Both of them are the same on most points, but they definitely have a different slant on what they think the significance of the theory is: Deutsch emphasizing how stupid animals are, Byrne emphasizing how smart animals are. And I think you end up getting a more rounded view by looking at both views simultaneously.

[01:26:06]  Orange: Humans can also be so stupid at times. You see that a certain part of the population will really say that another part of the population should just be treated as automatons, or as not very… You see the same argument even being used by humans towards other humans.

[01:26:25]  Blue: Well, there’s actually an interesting point there. You’re talking about humans who are still universal explainers, but there are humans that are not universal explainers. I mean, there are humans that exist that have only a brainstem, and that’s it. These humans with only a brainstem still have preferences. They like things and dislike things. They have favorite songs. They get happy and excited when you bring them their baby sister. There’s a lot that these humans will do, even though they’re not universal explainers. And they certainly act like they have a degree of consciousness. These humans are probably more equivalent to an animal. They may even be less intelligent than some animals, because many animals, many mammals, have a neocortex, and these humans don’t. There’s a great deal of mystery around all of this. And these questions aren’t just about animals, because there are humans to whom what we’re talking about also applies. David Deutsch, in his books, proposes defining a person as a universal explainer. And I’m actually not against that definition, although I do think it’s another case of a specialized definition. I would not consider a human who isn’t a universal explainer to be a non-person. They’re not a person in that sense, but they are still a person in some sense, some legitimate moral sense. That is something we have to deal with; there are questions around this. How much do these people feel? How much do they really experience? How much do they understand when they have no neocortex at all? There are all sorts of mysteries around them too. Well, maybe we can even cover that in some future podcasts. I

[01:28:05]  Blue: probably won’t get into the human version of this in the animal intelligence podcasts, but there are a number of interesting mysteries around what it is like to be a human that doesn’t have that intelligence, that doesn’t have the ability to explain things. All right, that’s it for today.

[01:28:22]  Orange: I think it would be interesting to get into the whole question of why, in humans, certain people, even coming from the same circumstances, are more open to change throughout their childhood, which I would ascribe to exercising their free will, while others pretty much just, whether it’s the culture or whatever, like to remain conditioned, no matter how hard you try. Yes. It’s a tough one. I mean, I really don’t know. At least we could maybe discuss possible reasons why certain humans are more open to being rational than others.

[01:29:01]  Blue: Yes, there are interesting questions. Well, what is autism, right? Temple Grandin argues, and maybe this is true, maybe it’s not, that autistic people are in between humans and animals, that they have some of the perceptions of animals because they have a reduced frontal lobe. Based on that understanding on her part, whether it’s right or wrong, she felt that she understood animals better than most neurotypicals do. And she built a career out of the fact that she experiences life more like an animal, so she could figure out what was causing animals to become afraid. There’s a little flag waving somewhere, and so the cows won’t go in a certain direction, and no neurotypical can figure out why. She sees the flag wave and it immediately causes fear for her, so she goes, oh, the animals are going to be feeling fear over that waving flag. And she’ll tell them, you need to get rid of that flag. And then suddenly the animals will all start traveling the path the handlers wanted, because they’re no longer afraid. She gives tons of examples of this in her books, where she’s able to capitalize on her autism as an advantage over neurotypicals, because it allows her to understand where the animal’s coming from. All right, well, thank you guys. This has been a fun episode.

[01:30:20]  Orange: Yeah, yeah. All right, let’s do it again soon. Okay. All right. Goodbye. Take care. Bye -bye. Bye -bye.

[01:30:29]  Blue: The Theory of Anything podcast could use your help. We have a small but loyal audience, and we’d like to get the word out about the podcast so others can enjoy it as well. To the best of our knowledge, we’re the only podcast that covers all four strands of David Deutsch’s philosophy, as well as other interesting subjects. If you’re enjoying this podcast, please give us a five-star rating on Apple Podcasts. This can usually be done right inside your podcast player, or you can Google “The Theory of Anything podcast Apple” or something like that. Some players have their own rating system, and giving us a five-star rating on any rating system would be helpful. If you enjoy a particular episode, please consider tweeting about us or linking to us on Facebook or other social media to help get the word out. If you are interested in financially supporting the podcast, we have two ways to do that. The first is via our podcast host site, Anchor. Just go to anchor.fm/fourstrands, that’s f-o-u-r-s-t-r-a-n-d-s. There’s a support button available that allows you to make recurring donations. If you want to make a one-time donation, go to our blog, which is 4strands.org. There is a donation button there that uses PayPal. Thank you.

