Episode 122: The Case Against Logical Fallacies

  • Links to this episode: Spotify / Apple Podcasts
  • This transcript was generated with AI using PodcastTranscriptor.
  • Unofficial AI-generated transcripts. These may contain mistakes. Please check against the actual podcast.
  • Speakers are denoted as color names.

Transcript

[00:00:00]  Blue: Hello out there! This week on the Theory of Anything podcast, Bruce takes a deep dive, or rather a shallow dive by his standards considering this episode is only an hour, into logical fallacies. How important is it to think about these fallacies? Can we become more logical by avoiding ad hominem attacks, not strawmanning, not appealing to authority, and avoiding slippery slopes? And is there a difference between rational and logical fallacies? I enjoyed listening to Bruce here and I hope others get something out of this too.

[00:00:51]  Red: Welcome to the Theory of Anything podcast. Hey Peter.

[00:00:54]  Blue: Hello Bruce, how are you today?

[00:00:56]  Red: Good. We're going to talk about a subject that I've hinted at throughout the show. It's based on an article I wrote back on September 28, 2021. I'm going to read the article, the blog post that I wrote, and then we're going to kind of talk about it from there, okay? So the article was called The Problem of Logical Fallacies, and it discusses why so-called logical fallacies so rarely have anything meaningful to say in most rational discussions. Admittedly, they sometimes do, so they're not a complete waste of time. So here we go. Here's the original article I wrote, and then I'll discuss it afterwards.

[00:01:37]  Blue: Well, I think, in advance, that I might agree with you. The whole concept of the logical fallacy has been something that's been triggering my BS detector for a while. Yeah, totally. Now, so I'll be very interested in what you say here.

[00:01:56]  Red: Yeah, so I’m going to try to explain why I think it triggers people’s BS detectors because it is, in fact, most of the time, total BS. So is this the logical fallacy fallacy?

[00:02:10]  Blue: Yes, the logical.

[00:02:11]  Red: What I’m going to do is I’m going to develop the idea that maybe we should have something called a rational fallacy that’s different than a logical fallacy and I’m going to suggest what that is, but

[00:02:21]  Blue: okay,

[00:02:21]  Red: okay. Everyone knows about the logical fallacies. There are whole books and websites written about them. People trot them out in internet debates and attack by claiming their opponent was utilizing some logical fallacy or another. I've even seen people in an internet argument simply respond, you are using a logical fallacy, and then leave a link to a website, but not even bother to explain which logical fallacy was supposedly being utilized. Something has always bothered me about the logical fallacies, namely that probably 90% of the time that they get trotted out, they're used in a false way. Probably the single most commonly abused logical fallacy is the ad hominem fallacy. Rarely have I seen this one appropriately used when trotted out by someone in an internet debate. The issue is that logical fallacies only apply to logic, but not always to rationality. Most people don't understand the difference between these two. Mr. Spock was logical when he should have instead been rational. For example, let's say there is a critical debate. In this critical debate, Person A quotes a study, say to prove some herb really works, and Person B responds, oh, that study came from the very company that sells the herb, so you can't trust that study. Besides, the company is dishonest. Then Person A responds, that's an ad hominem attack, you need to learn your logical fallacies. Is Person A correct? Yes, this might technically be an ad hominem attack had this been a purely logical debate. That the company is dishonest doesn't mean that they didn't happen to do a good study, nor can we just assume that they are dishonest because someone said so. One does not logically follow from the other. So technically, this might be a logical fallacy, but so what?

[00:04:06]  Red: The reason Person B brought up their doubts about the study was that they were challenging the very premise of Person A's whole argument. This is a rationally valid thing to bring up. Pay attention to debates, and you'll quickly discover something. The vast majority of the time, people are simply challenging each other about their assumptions and premises. Rarely, if ever, do debates actually challenge each other on the basis of logical deductions. Once you realize this secret, you know that the vast majority of claims to have fallen into a logical fallacy, quote unquote, will turn out to be irrelevant. Rationality isn't the same thing as logic. Rationality is best understood via Popper's epistemology of critical discussion and his conventions to avoid ad hocness. The best response from Person A would have actually been something like this. Sure, that study came from a company that is biased, and yes, we can't just assume upfront that a study is correct. But this study was a double-blind study to correct for possible bias, and the study was confirmed by another study. In other words, the best rational response wasn't to call out the supposed logical fallacy, but to take the other person's theory seriously and explain why you feel their theory still has problems to deal with. It's not that logical fallacies don't sometimes become relevant. Some logical fallacies are fairly common, though not nearly as serious in a rational discussion as in a logical deduction. For example, strawman arguments are very common, but in a rational discussion, they may well be nothing more than the fact that understanding your opponent is quite difficult.

[00:05:45]  Red: Strawman arguments at least have the advantage that the person being strawmanned now has a chance to correct the misunderstanding. And circular arguments are common because of the vagueness of language and how hard it is to sort meaning out from words. They may still be a valid problem, but the proper response might well be not to call the person out for a logical fallacy, but to help them analyze their arguments better. But for the most part, people don't understand how a critical rational discussion differs from a logical analysis, and they misapply logical fallacies. Here are a few examples from Purdue's online writing lab on logical fallacies, from a very prestigious school, along with my analysis of why the supposed fallacies are actually possibly valid in a rational discussion. So I'm going to now list a series of logical fallacies from the Purdue website. For each one, I'll read the example argument they had on their website, and then I'll do an analysis of it. So the first was the slippery slope argument. Their example of a slippery slope argument was: if we ban Hummers because they are bad for the environment, eventually the government will ban all cars, so we should not ban Hummers. My analysis: actually, this isn't such a bad argument if one holds certain theories about governments. So it's probably a rationally valid point worth discussing. It can easily be strengthened by saying many more good cars instead of saying all cars, or something like that. Hasty generalization example: even though it's only the first day, I can tell this is going to be a boring course. My analysis:

[00:07:30]  Red: This may well be a good conjecture based on a decent heuristic, or maybe not. But in a rational discussion, it seems unlikely that this was ever intended as a logical deduction. Post hoc ergo propter hoc: I drank bottled water and now I'm sick, so the water must have made me sick. Analysis: this is only a logical fallacy if you chose to read this person as meaning it is absolutely proven beyond doubt that the water made me sick. But practically speaking, it would be charitable to read this speaker as meaning something closer to, the water may have made me sick, and I can't think of an alternative explanation at the moment, in which case it's a good starting conjecture. Genetic fallacy: the Volkswagen Beetle is an evil car because it was originally designed by Hitler's army. Analysis: this one actually is both a logical and a rational fallacy in its current form. Begging the claim: filthy and polluting coal should be banned. Analysis: coal is arguably filthy and polluting under some legitimate definition of those terms. This seems like a fair starting conjecture worth debating. Circular argument: George Bush is a good communicator because he speaks effectively. Analysis: an opinion about someone being a good communicator is always going to be difficult to show logically. And there is no harm in trying to explain one's reasons as best as you can. It's probably a good place to start, even if the reason isn't very deep initially. The correct next question is, what do you mean by speaks effectively, rather than claiming it is a logical fallacy? Either/or fallacy: we can either stop using cars or destroy the earth. Analysis: is this really a logical fallacy at all?

[00:09:18]  Red: Or is this a case of this person simply not having additional theories available to them? Rationally, we can't consider theories we don't yet know about. It is at least a possibility that this person simply isn't aware of a better alternative, and so this is rationally valid for them to say at this point. Would calling it a logical fallacy really help the person understand why we need to keep making progress? Ad hominem: Greenpeace's strategies aren't effective because they are all dirty, lazy hippies. My analysis: if this was the sole argument being made, I'd agree this is a logical fallacy. But as part of an overall discussion, the motivations and lifestyle choices of an organization do matter and are worth bringing up and debating. It may not actually be true that Greenpeace is a bunch of dirty, lazy hippies. But if they are, I'd want to know about it because it might affect their effectiveness. Ad populum, or bandwagon appeal: if you were a true American, you would support the rights of people to choose whatever vehicle they want. My analysis: this one is pretty lame, I admit. But likely the overall debate included what the speaker meant by true American, and thus it may or may not be rationally valid, depending on what they meant. And frankly, someone that didn't believe I had a right to buy the vehicle I wanted probably is misunderstanding certain important western American values that actually do matter. Red herring: the level of mercury in seafood may be unsafe, but what will fishers do to support their families? Analysis: this actually strikes me as a fair question. Practically speaking, you can't disentangle issues nicely such that they can cleanly be used in a logical deduction like this.

[00:11:04]  Red: And since this is really an economic question, namely what is the cost of making it safer than it already is compared to the cost of not doing so, economic side effects seem to me to be spot on. Strawman: people who don't support the proposed state minimum wage increase hate the poor. Analysis: people's motivations do matter in overall theories. This might be a true theory. Or maybe not. Likely not. But it is a fair point to raise if done as part of an overall argument that might include a theory as to why one group is motivated by a repeal of a law. That is definitely worth debating even if it turns out to be false. In fact, we need to debate false ideas in rational discussions. Moral equivalence: this parking attendant who gave me a ticket is as bad as Hitler. My analysis: is this really a logical fallacy at all? Or is it just someone saying something colorful to express their anger? So here we have 12 examples on a web page of a prestigious university, and only one of them was definitely a valid rational problem. For most of these, it's not even clear if they are true logical fallacies in the first place, because they have more charitable interpretations available to them. The logical fallacies really only apply to what you do after everyone agrees upon an initial set of assumptions. Rational debate almost never includes an agreed-upon initial set of assumptions. Pay attention to rational debates and you'll quickly notice that so-called logical fallacies are often rationally valid, and calling them logical fallacies really just misses the point. So that was the end of my original article.

[00:12:45]  Red: So if logical fallacies do not really relate to rational discussion in many cases, most cases I'd even say, does it therefore follow that there is no such thing as a rational mistake or fallacy? Well, not at all. What we need is a new kind of fallacy, what we might call a rational fallacy, which is in some sense not a logical fallacy, or at least not always a logical fallacy. Maybe we could call them epistemological fallacies, or maybe we could call them reasoning fallacies, but those sound terrible. So I'm going to go with rational fallacy for now. We're trying to answer this question: what type of epistemological mistakes do humans fall into that cause them to reason poorly? The answer must be rooted in good versus bad epistemology, and since Popper's epistemology is our best current theory of epistemology, we would want to build this idea of a rational fallacy out of our understanding of Popper's epistemology. Now, on this podcast, I've argued that we critical rationalists do not agree on what Popper's epistemology is. I myself have favored a reading of Popper that makes the no ad hoc rule central. Most crit rats I know today prefer what I've called the invite-criticism-correct-errors approach, where we stay a bit vague as to what counts as a good criticism and, by extension, a bit vague as to what counts as a good explanation, and instead rely on the idea that good criticism and good explanations are like pornography: I know it when I see it. But if we're not even going to try to answer the question of what counts as a good criticism, and be sharp and precise in our answers to that question, error correction of our epistemology becomes impossible.

[00:14:24]  Red: This seems undesirable. So I’ll put forward my own non vague claim about epistemology that good epistemology is rooted in the following idea. Always choose to formulate your theory sharply and precisely enough that it is easy to detect errors due to the theory having implications empirical or logical or otherwise that may turn out to be wrong. So we know exactly what wrong would look like. This is known as poppers no ad hoc rule, which is a convention Popper recommended where you’re supposed to formulate your theories to always have independent independently testable consequences, meaning testable consequences not related to the specific problem that you’re trying to solve that you can test separately and to never save your theory from refutation unless the save is itself non ad hoc. If you always follow this convention, you are by definition only allowed to save your theories from refutation by increasing the overall testability of your collective theories. Now I have called this version of poppers epistemology poppers ratchet because you are only allowed to solve problems with your theories by increasing the sharpness and testability of your theories, never by making your theories vaguer and less testable. The term poppers ratchet is my own, but the no ad hoc rule is not. It came directly from Popper himself. It’s interesting how crit rats today will actually argue with me over this idea even though it is in fact pure Popper. So I’ve mentioned this in the past, but one crit rat who runs a business doing poppers epistemology, by the way, wrote to me and told me that this is just, just isn’t right. So he argued like this.

[00:15:57]  Red: He says one can't predict knowledge creation, and thus it would be wrong to claim all problems should be solved via increases to the sharpness and testability of your theories. So thus, in his mind, you can't choose to formulate a theory sharply or precisely. You just solve problems, and it may or may not be the case that you'll do so by formulating your theories more or less sharply. You can't predict what the solution will be, according to this gentleman's argument. But consider this quote from Popper. Einstein, really in context he means any scientist, constantly consciously seeks for error elimination. He tries to kill his theories. He is consciously critical of his theories, which for this reason he tries to formulate sharply rather than vaguely. The quote continues: only objective knowledge is criticizable. Subjective knowledge, meaning knowledge in the head, becomes criticizable only when it becomes objective. And it becomes objective when we say what we think, and even more so when we write it down or print it. That's all from Objective Knowledge, page 25, by the way. I think most crit rats I know today would agree we should write down our ideas and even seek criticism of them. But few would agree that it is on us to choose to formulate our theories sharply rather than vaguely, even though this is something that, as I just quoted, Popper literally says, and he says it multiple times throughout his books. I want to suggest that many rational fallacies tend to collect around issues like this, i.e. aspects of Popper's epistemology that are not well understood. They are epistemological misunderstandings or mistakes that human beings tend to naturally fall into.

[00:17:36]  Red: Because of the gap between the default, or what I've called folk epistemology, the default epistemology we all normally follow, versus the true epistemology, where Popper's epistemology is our best understanding of what the true epistemology is today. For example, people too often choose to formulate their theories vaguely to avoid criticism, instead of sharply, such that their theories take risks and are likely going to turn out to be wrong, which is embarrassing. But formulating them sharply also means you can then error correct the theory, or even abandon it if needs be, in favor of a better theory. This is an extremely common and ubiquitous rational fallacy people tend to fall into without even realizing they are doing it. So what are examples of rational fallacies? I've developed a few already on this podcast and even given them names, so let's formalize them now. My favorite, and one of the most important, is what I call vague-manning your theories. Every false theory can be turned vaguer and vaguer until it becomes impossible to test, and thus can be claimed to not be refuted. Moreover, the vaguer your theory is, the more likely you'll be able to post facto claim that you were right all along, by simply finding any vague way to show the correct theory was the same as your original theory, with your original theory being so vague and open to interpretation that you can always find some way to map the truth, once it's found, to your original theory after the fact. This is why vague theories are so popular. They are both irrefutable and also almost guaranteed to turn out to be true in some sense. What they lack, epistemologically speaking, is any actual implications or testable content. They are empty.

[00:19:22]  Red: Folk epistemology prefers irrefutable theories, whereas the true epistemology prefers refutable theories that take large, bold risks. Moreover, vague theories can't be error corrected or improved. Vague-manning is really the opposite of Popper's ratchet. You solve the problems of your theory not by increasing the sharpness and content of your theories, but by making your theories vaguer, so that you've reduced the content of your theories until the problems all disappear. Famously, this is what happened to communism, according to Popper, which originally did make falsifiable predictions, but then the defenders of communism would simply ad hoc save the theory until it had no content. For that reason, I also sometimes call this unratcheting your theory, meaning you violate Popper's ratchet and solve problems with your theory by making your theory vaguer. Another common rational fallacy I've mentioned on this podcast is the creationist fallacy. This fallacy is that you try to treat a vague, untestable, or uncheckable theory as a competitor to a theory that is testable and checkable. This was the mistake I made as a teenager when I was a creationist. I would put forward completely legitimate problems with Darwinian evolution, and I'd watch Darwinians time and again dodge these problems in ways that, at least to me, were deeply uncomfortable. See Episode 74, The Problem of Open-Endedness, for a frank discussion about uncomfortable Darwinian dodges. I couldn't even get these Darwinists to admit really obvious things, like that Darwinian evolution can't explain how life got started. Now, this isn't a small problem with Darwinian evolution. It's a gaping hole in the theory, and it continues to be a gaping hole even today.

[00:21:10]  Red: Another famous example is, of course, irreducible complexity, like the bacterial flagellum that we've talked about on the podcast in the past, which really isn't easily explained even today, and is usually just papered over, rather than admitting to the potential problem that exists there. My epistemological mistake as a teenager, what I'm calling the creationist fallacy, is that me showing even giant holes in Darwinian theory tells us nothing whatsoever about the merits of creationism as a theory. In Popper's epistemology, there is really only one sense in which you can have, quote, evidence for a theory, namely the idea of testing a bold theory that takes large risks, a test that may have refuted it but didn't, what Popper calls corroboration. Creationism can predict any set of data no matter what. It can't clash with reality. Therefore, it is impossible for creationism to ever have evidence for the theory, at least the way I was wielding it at that age, I should say. I mean, I suppose in principle, if creationism were actually true, there would be a version of creationism that would make testable predictions, probably involving knowing the exact mind and will of God or something like that. But certainly the way I, and every creationist I've ever met, wields creationism as a theory, it makes zero predictions and it has zero content. Sure, I was correctly collecting a very large list of problems with Darwinian evolution, such as pointing out that it can't explain how life got started in the first place, since that violates the second law of thermodynamics. My thinking was that I was refuting Darwinian evolution and leaving only creationism standing, which on the surface takes a sort of shallow form of Popper's falsification. And I want to emphasize this.

[00:22:57]  Red: Full -capistemology is inherently falsificationist. It’s just not falsificationism the way Popper intended falsification. In fact, I would argue many crit rats I’ve talked to very much believe this is the correct understanding of Popper’s falsification. Though they’d never make the mistake applying it to creationism, they would make this epistemological mistake with their own pet theories. But this is actually just a misunderstanding of what Popper meant by falsification. Popper had in mind the idea that you choose to formulate your theory sharply, as we just read, such that they are at least equally testable or even outright progressive. Meaning the new competitor must explain everything the old theory did and then some, all without reducing the testability or preferably even increasing the testability. This is what falsification really meant to Popper. Falsification in the sense that you want your theories increasingly falsifiable or testable or have more empirical content. However you want to say that is fine. It is no merit to creationism that there is a problem with Darwinian evolution unless creationism is at least as testable as Darwinian evolution and can explain as much. That was the problem that I as a teenager was completely missing. This idea that formulating your theories sharply by which I mean making them progressively more testable than their competitors. If I want to count the black bacterial flagellum as quote evidence for creationism, I need to not merely show that it is a problem for existing Darwinian theory. In fact, I would argue that it is, but that my version of creationism did not only not only explain bacterial flagellum, but also made unique testable predictions that we could test separately or independently. This is precisely where I as a creationist was entirely missing the point.

[00:24:53]  Red: My theory made no such prediction, or really any predictions. I would note that what changed my mind wasn't that I came to believe that the problems that I had raised with Darwinian evolution were off base. They were not off base. Even today I take a strong interest in evolutionary theories that take such problems seriously. Thus my interest in third way evolution, and particularly in Michael Levin, because they're some of the few evolutionists out there that really do take a look at these things that I used to raise as a creationist and say, you know what? Those are problems. Let's take those seriously and let's try to solve them. What really convinced me was the slow collection of observations that evolution could explain and therefore predict, but that creationism could only post facto, ad hoc explain. After collecting the hundredth or so of these, I realized that I had to admit to myself that evolutionary theory was generally valuable. Put another way, showing a problem with a competitor doesn't count as merit towards your theory unless your theory was both equally risky, i.e. had as much testable content as the competitor, and solved that problem in a way that can be independently tested. I can't tell you how common this epistemological mistake is, even amongst crit rats. I've argued that my disagreement with Saadia, and we've had her on the show a number of times to talk about this, takes this form. She offers problems with an existing theory, typically her favorite being the problem of time. And she has various vague philosophical explanations for why she thinks the problem points to her pet theory as a solution.

[00:26:27]  Red: But I do not recognize problems with an existing theory as counting as merit towards one’s pet theories unless your pet theory can actually explain the problem in a non ad hoc way, which by her own admission none of her theory today can do. Because they’re all fully untestable at this point. The vague philosophical explanations rooted in gut feelings that she often offers, they don’t really mean anything to me. So to me, the problem of time, I would admit it’s a legitimate problem with quantum theory, which, you know, does include the Church -Turing -Deutsch thesis. So you might say it’s adjacent to a problem with CTD, which is one of the arguments she makes. But it is just as much a problem with her theories. The meta rule here is that if she’s allowed to cite the problem of time as a disproof of, say, the Church -Turing -Deutsch thesis, then I should be able to cite the problem of time as a disproof of her even less or even non testable theories. Or put another way, I believe Saadi is simply making the creationist fallacy with her arguments. If and when she can make her theories as testable as the one she’s criticizing, and then show me how it solves the problem of time while making new unexpected empirical predictions, then of course I’ll change my mind in a heartbeat at that point. Before that point, I do see her theories as legitimate conjectures for research interests. And I’ve never had a problem with her looking at them in that way. What I don’t accept is, is that her arguments somehow create evidence for her theories. I don’t think there’s any evidence for her theories, nor could there be in their current form.

[00:28:09]  Red: Of course, the real reason why Darwinian evolution has giant gaping holes in it compared to creationism is because creationism is a bad explanation, not worthy of any consideration at all, because it has zero testable content. Or put another way, my creationist theory explained everything, so it explained nothing. Sure, I could then use it to post facto explain whatever observations actually happened. But so what? I could literally make up an infinity of such theories on the spot. And without there being real testable predictions, there was no way to differentiate my version of creationism from those infinity of other content-free theories. So the creationist fallacy is defined as offering a theory empty of content as a competitor to a theory that has real testable or checkable content. This is such a ubiquitous rational fallacy that everyone, including crit rats, falls into it again and again and again. It's also surprisingly easy to identify, and it objectively exists. So for example, even Saadia will admit her theories are not currently testable, and thus they are ad hoc at the moment. So our actual disagreement isn't over whether her theories are ad hoc. We agree that her theories are ad hoc. It's really an epistemological disagreement. I think offering untestable theories as a competitor to testable ones is basically meaningless, epistemologically speaking, and she disagrees with me under her epistemology. Mostly, I consider her theories an interesting conjecture, and I wish her luck developing them into real theories if she's able to, which she will only be able to do if it turns out her theories happen to have real verisimilitude. If they don't, it will simply be impossible to develop them any further. What are other kinds of common rational fallacies? Or put another way:

[00:29:54]  Red: What are common epistemological mistakes? The goal isn't judging which theory is right, but what process or methodology we would stand by no matter which side of the debate we're currently on. Here's an analogy. Judges, ideally, focus less on outcomes and more on whether the rules behind the decisions were just. The judicial process operates at a meta level. I don't know if people know this, but if you listen to the Advisory Opinions podcast, it will become very clear to you that this is the way judges work, or are supposed to work: always thinking about the next case. Unfortunately, the media frames Supreme Court rulings as left versus right winning a case, ignoring whether the rule created is a good or bad rule. So epistemology is more like the judicial process. It's about the meta rules of rationality, not a surefire way to decide which theory is right, or even probabilistically likely. For each argument that you make defending your theory, ask: what rule did I just make up? Would I accept the same rule if my opponent used it? How comfortable are you with a rule allowing untestable alternatives to replace testable theories simply because you found a flaw in the testable theory? If you accept that rule, you must also accept creationism. Are you prepared to do that? We've been focusing on the category of ad hocness up to this point. So, vague-manning your theories: making a theory vague so it can't be refuted. And the creationist fallacy: treating flaws in a good explanation as evidence for a bad, irrefutable theory. Here are four more related fallacies that are variations of this theme of ad hocness. Ad hocery: saving one's theory by the introduction of an ad hoc theory.

[00:31:40]  Red: This is always possible, and so methodologically Popper ruled it out, as it turns every theory into what Deutsch would call an easy-to-vary, or ad hoc, theory. Explanation gapping: one particularly common form of ad hoc argument is an explanation that has a conceptual gap more or less the same size as the problem it's trying to explain. I'll give an example of this in a moment. Untestable argument: people tend to favor untestable arguments because they can't be refuted, but rationally speaking these should be considered not even wrong. Shiftable argument: similar to an untestable argument, but here one can theoretically test them, yet they are really easy to change up to match or not match any test. This is what Deutsch would call easy-to-vary-ness. Note that easy-to-vary-ness is actually just one kind of ad hocness, ad hocness being the more general category. So let's talk about an example of ad hocery. An old crit rat debate over face blindness that I have mentioned in past podcasts is a good example of this. I'm going to give some details on this that I haven't given previously, because I really want to kind of drive home why this is an example of a rational fallacy. This happened years ago, by the way, and may not even represent Deutsch and crit rat views today. In fact, it probably doesn't. At the time, Deutsch and crit rats equated universal explainership with the entirety of human intelligence. Probably some still do today, but it's not as popular a theory as it used to be. Meaning that all human intelligence was understood via explanations. That is, there was a belief that all of human intelligence, or almost all of human intelligence, was based and rooted in explanations.

[00:33:20]  Red: Presumably the correct theory is that human intelligence is actually made up of both the older animal intelligence and universal explainership built on top of it, meaning that we have both, and that we’ve got quite a bit of both.

[00:33:34]  Blue: So you think the Crit Rats have moved away from that old explanation-based theory?

[00:33:41]  Red: You will not find it stated as clearly as it was stated back then but they still seem to hold on to some of the implications of the theory. So I think they’re phasing it out slowly and I don’t even know if they’re doing that intentionally.

[00:33:51]  Blue: I just think the evidence is so overwhelmingly against them that they have no choice but to gradually let this one go. That’s interesting.

[00:34:00]  Red: But Crit Rats at the time claimed even mundane abilities, like face recognition in this case, were solely products of explanation creation. Even today, Deutsch and Crit Rats claim that genes have little or no influence on human beings because we’re, quote, universal. This is a remnant of the same idea that human intelligence is built entirely, or almost entirely, on the ability to create new explanations, with perhaps just a teeny bit of initial animal influence that quickly gets overridden. As I’ve argued throughout this podcast many times, there is actually a huge leap between being universal and genes having little or no influence on us. Those aren’t ideas in contradiction to each other, as Crit Rats often assume. My own view is built on the assumption that human intelligence is built on top of animal intelligence. Crit Rats back then even claimed feelings like pain were actually a kind of explanation. This is the one I don’t think I’ve seen in a long time, which makes me think they’re moving away from it. And since animals didn’t have explanatory power, they were thought to not feel things. And yes, this is where the idea that animals have no feelings comes from. Humans have explanations, and feelings are explanation based. Animals don’t have explanations, therefore they can’t have feelings; they’re just automata. Assume for the sake of argument that the Deutsch Crit Rats are right: all human intelligence is the ability to create explanations, so we have no or very little animal intelligence layer. Now here is my argument against that view. Imagine someone utilizing your genetic built-in alignment program called pain and suffering to coerce you into doing what they want.

[00:35:43]  Red: Maybe by torturing you or threatening your family. Now imagine a Deutsch Crit Rat arguing to you: no, they aren’t coercing you. You are universal. You can choose to ignore the pain. Gandhi can do it. Now, would you consider this a good response? It’s probably a technically correct response, by the way. Or would you, like me, consider this a serious misunderstanding of the implications of the theory of universal explainership, which is what I think is actually going on here? Yet if I instead said your genes use pain to try to coerce you to do what they want, and often they succeed despite you being universal, Crit Rats will scream: but Gandhi ignored pain all the time, you can’t explain that with your theory. Which of course you can; that’s a misunderstanding of what’s being raised. Under this Crit Rat theory, animals were built on something else. Interestingly, some Crit Rats actually claimed it was induction, and this is something we’ll have to come back to later, the fact that the Crit Rat community is often crypto-inductivist. But humans were universal explainers, so they didn’t utilize such lower animal intelligence very much. Even on the face of it, this just seems false. What, then, is inexplicit knowledge? Isn’t that a major part of human intelligence? But the Crit Rats at the time simply redefined such examples as inexplicit explanations, which, as Vaden pointed out to me on his podcast, is really just a contradiction. They also claimed that, because of this theory, the mind’s software had no modules, arguing the brain was just universal hardware and therefore couldn’t contain distinct modules.

[00:37:14]  Red: When I’d point out to the Crit Rats I was discussing this with back then that there is a considerable body of literature, with experimental tests, showing the brain did have such modules, I’d get responses like: oh, scientists are all empiricists or inductivists, so they don’t know what they are doing. No real attempt was ever made back then to understand and respond to actual observations by coming up with non-ad-hoc alternative theories of their own. I want to emphasize this was years ago, so I’m not claiming that the Crit Rat community is still necessarily saying things like this. To help Crit Rats understand, I’m trying to give the history of how I came to understand this rational fallacy, if that makes any sense. So, to help Crit Rats understand why they were going down a bad path, I offered a real-life counter-observation to their theories, hoping to engage them with it. Face blindness is a real condition where a person can see and understand faces but struggles to recognize and differentiate them. Humans can create explanatory knowledge, but they don’t rely solely on that process, because it’s often just too slow. Humans also use older animal processes, and facial recognition is actually one of these older animal processes. Experiments show that your dog can recognize your face in a photo without other cues like your scent. Because face recognition is rooted in older animal intelligence, it exists in a single module in the brain that can be destroyed, contrary to the Deutsch Crit Rat claim that this wasn’t possible.

[00:38:51]  Red: In real life, if the module is damaged, a person can no longer recognize faces. For example, a man with face blindness (this is a real-life example, by the way) could tell that something was a face, and could even judge facial attractiveness, but at a party he had to have his wife wear a pink bow in her hair or he wouldn’t recognize her. Interestingly, once someone becomes face blind due to damage to this module in their brain, they can never relearn the ability to recognize faces. There is substantial experimental and natural evidence supporting this fact; it has been documented up and down, all over the place. This refutes the Crit Rat theory that there are no modules in the brain, but it also refutes the idea that all human intelligence is built on explanations. Now why? Because if the mind is a universal learner that only utilizes explanations, it would follow that if you damage the part of your brain that happened to recognize faces, you’d be able to simply relearn that ability again. Makes sense, right? But you can’t in this case, because it’s actually a genetically hardwired non-explanatory process that we use to recognize faces. What you can do, by the way, is learn to use your explanatory power to replace face recognition to some degree, although much more slowly. The example I just gave, asking your wife to put a bow in her hair, is an example of using your explanatory process to replace your facial recognition. That’s the sense in which you are still universal, by the way. Okay, but it’s still a big loss.

[00:40:27]  Red: You can even learn to recognize specific features, i.e., the person with the mole on their nose is Bob. Though if someone else happens to have a mole on their nose too, you’ll likely find that explanation insufficient, and you won’t differentiate the two people. But it is a much slower process and much harder to do, because that is how explanations work: they are a slower intelligence process, and we cannot rely on them for everything. So a person who loses the face recognition module has, for the rest of their life, a disability around recognizing faces, and it’s a serious disability. I say this refutes the Crit Rat theory, but that’s actually not quite right. Rather, it would refute it if you accept Popper’s no-ad-hoc rule. But if you don’t, and most Crit Rats usually don’t, then nothing can ever refute your theory. To put this in critical rationalist terms, we have two competing theories. Theory one: humans recognize faces using the same creative explanatory process they use to create all ideas; thus it is impossible for there to be a facial recognition module that the mind uses to recognize faces. And theory two, my theory: humans have both the ability to create explanations and also older animal processes, and it turns out that facial recognition is actually one of the older animal processes, and thus exists in a single module in the brain that can get destroyed, leaving an otherwise functioning universal explainer unable to recognize faces well ever again.

[00:42:00]  Red: Now, I’ve made both of these theories explicit, and I want to emphasize why I did this: you must make theories explicit to make them testable. Then I took a real-life observation: there actually exist people who suffer damage to a certain part of the brain (it’s always the same part of the brain, by the way) and lose their ability to ever recognize faces well again. This is as clear an observation as we could have hoped for to differentiate between these two theories. If this doesn’t count as a refuting observation for theory one, then it seems doubtful any observation could ever refute theory one, thus making it an untestable theory and, from a Popperian standpoint, of no interest at this time. But the critical rationalist I was debating this with was unfazed. He quickly responded: perhaps adults just have a hard time creatively reconstructing modules they previously had. Or, he said, perhaps some adults do relearn face recognition, but do so so quickly that we think they never lost it. Now, can you see the rational fallacy he’s making? Or do you see this as a legitimate argument? If you do see it as a legitimate argument, are you prepared to accept his reasoning, at a meta level, as a valid rule that you’re going to allow your intellectual opponents? His responses are telling. To him, he was doing Popperian epistemology, because he used the language of refutation: he refuted my refutation. He likely believed that he had refuted these observations and therefore he was doing Popper’s epistemology. But in reality, he was committing the rational fallacy of ad hocery. This matters. Popper held that his epistemology collapses if ad hoc arguments are allowed. From Objective Knowledge, pages 15 to 16, when speaking of a best theory: “It is assumed that a good theory is not ad hoc. The idea of ad hocness and its opposite, which might be termed boldness, are very important. Ad hoc explanations are explanations which are not independently testable; independently, that is, of the effect to be explained. They can be had for the asking, and are therefore of little theoretical interest.” Or, maybe even more clearly, in The Logic of Scientific Discovery, pages 19 to 20: “It is impossible that any theoretical system should ever be conclusively falsified, for it is always possible to find some way of evading falsification, for example by introducing ad hoc an auxiliary hypothesis, or by changing ad hoc a definition. It is even possible, without logical inconsistency, to adopt the position of simply refusing to acknowledge any falsifying experience whatsoever. I must admit the justice of this criticism.” That is, Popper admits this is a fair criticism of falsificationism. He continues: “I am going to propose that the empirical method shall be characterized as a method that excludes precisely those ways of evading falsification.” So what is it about this Crit Rat’s responses that makes them bad ad hoc explanations instead of a good counterexample? The key problem is that his new explanations

[00:45:04]  Red: being offered are clearly meant simply to save a pet theory rather than to sincerely advance knowledge. Now, how do we know this? Am I just psychologizing when I say this? I’ve been accused of that: “Oh, you’re just psychologizing.” No. According to Popper, we know whether this is correct or not, because there’s an objective understanding of the concept of ad hocness. We know this Crit Rat’s explanations are ad hoc because they made the situation less testable rather than more testable. This is the defining characteristic that separates ad hoc explanations from good explanations, according to Popper. As I mentioned, if this doesn’t count as a refuting observation, nothing would. Popper’s critical rationalist conventions require discarding explanations that reduce testability. Many Popperians commit this common fallacy, though Popper himself was clear: ad hoc saves nullify critical rationalism. To avoid this, Popper asked that we only offer counter-explanations that are independently testable. I discussed this in episodes 82 and 83, by the way. Popper’s ratchet is the key here: the overall testability of your theories must increase, or you are being irrational. As an example, take “some adults relearn face recognition so quickly we won’t notice.” This is his quote again. Okay, fine, this could be a good theory; he’s just not treating it as such. Propose a way to test it by deriving unexpected implications. This theory touches upon reality, so testing it should be possible, though it would require refining this theory into something sharp and precise, with real reach in other words, and then working out unexpected implications of it. He just doesn’t have any interest in doing so, but it could theoretically be done. My guess is that when he does it, it will always get falsified immediately, and that’s probably why he wants to keep it vague. Okay, ad hoc saves are trivial. Anyone can invent them. If they’re accepted as valid, epistemology collapses and progress becomes impossible. By contrast, making a theory testable without it being instantly refuted is difficult, and impossible if the theory is wrong. This Crit Rat clung to his pet theory of universal explainership, but in doing so he stripped it of testability, blocking error correction. Now consider the second argument he made: perhaps adults just have a hard time creatively reconstructing modules they previously had. This is a good example of the rational fallacy of explanation gapping. Notice how his explanation has an explanation gap that is the same size as the gap he was trying to fill. This is equivalent to saying, if I’m putting it more clearly, adults cannot create some kinds of explanations around facial recognition, but without explaining why this would be the case. Moreover, it isn’t even clear how this isn’t an outright refutation of his whole position. It’s basically claiming adults aren’t universal explainers, isn’t it? At least when it comes to face recognition. Note also that neither of his counter-arguments follows from his own theory, and this is really the key here. Theory one doesn’t predict that (a) adults struggle more than children to reconstruct the module, or (b) people either relearn instantly or never, but never in between. Why never in between? These claims are added ad hoc to save the theory from refuting evidence. The explanation gap in theory one is that it tacitly made a prediction that we should not find cases of face blindness that can’t just be relearned. So theory one tacitly predicted no cases of irreversible face blindness. The prediction failed, so an arbitrary rule was invented: adults can’t creatively reconstitute the module, or they do so instantly or never, but without any explanation as to why that would be. No reason for this rule was given, so the original problem, caused by the observation that real face blindness exists in real life, remains unsolved. The “explanation” still has the same gap, disguised. This is the rational fallacy of explanation gapping. The problem could have been avoided if he had admitted the gap, proposed testable conjectures consistent with his theory, and sought refutations for them. But instead he was ad hoc saving his theory. Like with psadias theories, this isn’t an opinion, and I want to keep emphasizing this. I think that when I’ve talked to the Crit Rat community, everything has been posed in a way where it’s just a matter of “I’ll know it when I see it.” I’ll know a good explanation when I see it. But nobody knows what that actually means. Popper was proposing objective criteria that we can all agree upon. Okay, this Crit Rat would agree his theory isn’t independently testable; he just rejects the part of Popper’s epistemology that says that’s unacceptable. So he’s unconcerned over that fact. That’s the strength of the no-ad-hoc rule: it’s objective. You either violate it or not. This is as opposed to easy-to-varyness, which does not seem to me to be objective, and seems to be entirely subjective. But would he accept this move from someone he disagreed with? No. In fact, I know for a fact that he has engaged people who use similar arguments, and he will not accept this argument coming from somebody else. We don’t even have to subjectively decide, based on personal feelings, if it’s a good meta rule or not. We all accept it as a good rule when someone else violates it. That’s the mark of a good rational meta rule. So asking you to hold yourself to it isn’t asking for anything beyond just fairness. So far I’ve only concentrated on rational fallacies that deal with violating the no-ad-hoc rule. There are many other epistemological mistakes that people commonly make that fall into different categories. We will talk about these other categories of rational fallacies in future episodes. That’s the end of this first one. Do you have any questions, Peter?

[00:51:28]  Blue: No, I think you’ve made an excellent case. I mean, it sounds to me like you’re not completely against the idea of these logical fallacies. They do have their place in the world, I guess, especially if you’ve ever had the experience of someone who has been raised on “so-and-so owns so-and-so” kind of videos and has just never heard of a logical fallacy. I mean, it can be informative, I think, for people in certain contexts. But maybe they’re overused a little bit.

[00:52:10]  Red: You know, I definitely think that there are sometimes legitimate logical fallacies. I mean, there are certain logical fallacies that I think are even common enough that, I would argue... I’ll explain this better in a future podcast when I get to it.

[00:52:27]  Blue: Yeah.

[00:52:28]  Red: There are certain logical fallacies, straw manning is a good example,

[00:52:32]  Blue: Yeah.

[00:52:33]  Red: that are common enough that it would be a mistake not to admit that they’re valuable. But I honestly wonder, in those cases, if they shouldn’t instead be thought of as rational fallacies. So what is the problem with straw manning? Is it actually some sort of problem with logical deduction? I don’t think it is. I think what it really is, is a problem with epistemology. So there’s this group of logical fallacies that I would claim are instead rational fallacies. And then, even of the ones that are left, you do sometimes see people make logical fallacies, but it’s not common.

[00:53:09]  Blue: Yeah.

[00:53:10]  Red: Like I said, maybe 90 percent of the time they’re misused. When somebody calls out a logical fallacy, 90 percent of the time you can ignore it. But you know, that other 10 percent still comes up often enough.

[00:53:20]  Blue: Sure, sure.

[00:53:21]  Red: So, I mean, they definitely have their place. And since logic is basic to rationality, in the sense that it is the logic of scientific discovery, that Popper’s whole epistemology is rooted in logic and falsification through logic, you can’t really claim that it’s got no place, because it surely does. But man, it’s overused. It’s just abused and overused to a degree that is almost silly.

[00:53:47]  Blue: Yeah, no, I agree with you there. The other concern I have about some of these rational fallacies is, as much as I agree... I mean, we’ve talked about Popper’s ratchet at length, and you’ve made a pretty solid case, if anyone wants to weed through 15 hours of conversation, and I hope they do. You’ve convinced me, at least, that this idea has some validity and is pretty congruent with what Popper thinks. But at the same time, it’s a little bit hard... these logical fallacies come across as sort of zingers in a debate. They sound cool. I guess it’s a little bit hard for me to imagine an actual debate where someone brings up Popper’s ratchet and it has the same resonance as the slippery slope fallacy or an ad hominem attack. I mean, maybe that’s just our current knowledge state. Maybe when our descendants embrace Bruce Nielsen’s form of Popperianism, these things will become common.

[00:55:19]  Red: You know, that’s a good question. I hope...

[00:55:20]  Blue: I hope so.

[00:55:21]  Red: Let me just say that the very fact that we use logical fallacies as zingers is, in and of itself, a problem.

[00:55:29]  Blue: Yeah, no, I agree. I see that, yeah.

[00:55:31]  Red: Having raised things like Popper’s ratchet, and having tried to explain it even to critical rationalists who have studied Popper deeply, I can’t say that it’s ever been even slightly effective. Okay? I mean, when I bring up the no-ad-hoc rule and I say, look, you’ve violated the no-ad-hoc rule, Crit Rats just do not care. It is meaningless to them.

[00:55:55]  Blue: I mean, that’s an interesting point, that even with people interested in epistemology, it’s almost never useful to bring up epistemology when you’re actually having a discussion or debate. You know?

[00:56:09]  Red: I think what I would say, first of all, is: will there be a future state where that’s no longer true? There could be. There’s nothing stopping that future state from coming, where we are more rational. Right?

[00:56:21]  Blue: Yeah, I mean, it would have been very hard for our ancestors to imagine a future knowledge state where people just talk about things and don’t fight it out in the street, right? It probably would have been almost impossible for them to imagine that.

[00:56:35]  Red: Having said that, do I think that’s happening in my lifetime? No, we’re not even close. Now, I think it’s interesting that we have built communities that do a very good job of implementing exactly the rules that I’m talking about. It’s the scientific community. It’s the truth-based community, as Jonathan Rauch called it when we interviewed him. We do know how to build institutions that cause people to follow Popper’s ratchet automatically.

[00:57:12]  Blue: It’s somewhere in their brains, they’re just not thinking about it. That’s right. Inexplicit knowledge, I guess you’d say.

[00:57:19]  Red: So a completely fair question might be: is there any point in formalizing this? And you can make a case there isn’t, because we already do it well within certain types of institutions. This is exactly why scientists have no interest in epistemology. Why should they? They’re better at it than philosophers of epistemology. They know they’re better at it than philosophers of epistemology. Okay, so I can completely see why someone might say there’s no value in this. But you really have to stop and think about this long term. Like, do we have some perfect state of institutions? Of course not. The way you would go about improving these institutions is by coming to understand them, to criticize them, and to improve them, which is really getting specific as to what epistemology actually is. And then, of course, there’s the fact that I’m interested in AGI. If I want to understand how to program this, there’s only one way through: you must actually understand what’s going on when humans reason well, and what’s going on when they don’t reason well. You have to actually get specific as to what that is. Now, could that turn into people learning to be more rational? Maybe. I don’t even know if that’s the goal or not. Hopefully that’s the goal, but it would still be a valid goal if only to create AGI, or if only to make improved institutions around science, something along those lines. Even if, for whatever reason, it never could be used to allow you to improve your own thinking. But you know what, I don’t think I believe that. I do think human beings struggle, like really mightily struggle, with any sort of rational epistemology when it comes to their own pet theories. I think there’s just overwhelming evidence of that, and that they really need to be embedded into a community with certain institutions if they’re going to really behave rationally.

[00:59:26]  Red: But I don’t think everybody’s equally bad, either. Like, some people get very good at learning to criticize their own theories, and start to internalize the epistemology as values that they hold. And so I do think there’s hope that maybe someday we will get to the point where we actually are more rational individually, because we understand epistemology better. I don’t see that anytime soon, though. And I don’t think it matters that it’s not coming anytime soon, because I think it’s got other uses, if that makes any sense.

[00:59:57]  Blue: Okay, well, it sounds like that’s a good way to end it. I’ve appreciated your thoughts here, Bruce, and I think it’s kind of fun to do a shorter episode. Yes. All right.

[01:00:10]  Red: Talk to you later.

[01:00:13]  Blue: Hello again! If you’ve made it this far, please consider giving us a nice rating on whatever platform you use, or even making a financial contribution through the link provided in the show notes. As you probably know, we are a podcast loosely tied together by the Popper-Deutsch theory of knowledge. We believe David Deutsch’s four strands tie everything together, so we discuss science, knowledge, computation, politics, art, and especially the search for artificial general intelligence. Also, please consider connecting with Bruce on X at @bnielson01, and consider joining the Facebook group The Many Worlds of David Deutsch, where Bruce and I first started connecting. Thank you!

