Episode 17: Shiri’s Scissor: Polarization and Politics
- Links to this episode: Spotify / Apple Podcasts
- This transcript was generated with AI using PodcastTranscriptor.
- Unofficial AI-generated transcripts. These may contain mistakes. Please check against the actual podcast.
- Speakers are denoted as color names.
Transcript
[00:00:00] Blue: The Theory of Anything Podcast could use your help. We have a small but loyal audience, and we’d like to get the word out about the podcast so others can enjoy it as well. To the best of our knowledge, we’re the only podcast that covers all four strands of David Deutsch’s philosophy as well as other interesting subjects. If you’re enjoying this podcast, please give us a five-star rating on Apple Podcasts. This can usually be done right inside your podcast player, or you can Google “The Theory of Anything Podcast Apple” or something like that. Some players have their own rating system, and giving us a five-star rating on any rating system would be helpful. If you enjoy a particular episode, please consider tweeting about us or linking to us on Facebook or other social media to help get the word out. If you are interested in financially supporting the podcast, we have two ways to do that. The first is via our podcast host site, Anchor. Just go to anchor.fm/4strands. There’s a support button available that allows you to do recurring donations. If you want to make a one-time donation, go to our blog, which is fourstrands.org. There is a donation button there that uses PayPal. Thank you. All right, welcome to The Theory of Anything Podcast. I’m Bruce Nielsen. How are you doing, Camille? I’m doing well,
[00:01:29] Red: Bruce. I’m Camille
[00:01:30] Blue: Duran. All right. So we have an interesting subject today that Camille picked. I guess it came from a Facebook post I made probably a few months ago, I guess. Yeah,
[00:01:42] Red: probably maybe even three months ago.
[00:01:44] Blue: So the subject is called Shiri’s Scissor. It’s a story, a fictional story, although while I was reading it, I kept wondering if it was really fiction because it seemed so true to life at times. It was posted on a blog back on October 30th, 2018.
[00:02:03] Unknown: And
[00:02:03] Blue: I can’t remember who, but somebody on Facebook posted a link to it, and I went and read it and thought it was awesome. So I posted it on Facebook, and Camille picked it up and she thought it was awesome too. And so we wanted to talk about this story and how it relates to stuff going on in real life, what it’s symbolic of. Camille, do you want to maybe summarize the story quickly for us, or do you want me to do that? I want you to do that. You’re better at that kind of thing. All right, I’ll go ahead and summarize it. So in the story, you’ve got this group that’s doing machine learning, which first of all, I thought that was cool because I’m into machine learning. They’re trying to come up with a machine learning algorithm that will generate political statements that will lead to people becoming polarized over the statements. And so they keep playing with this algorithm, and no matter what the programmer, who’s the one telling the story, did, it would just come up with statements that were obviously true. So he thought it must be broken. So he grabs one of his coworkers, Shiri, and she says, I don’t understand what’s wrong with the algorithm either; it’s just coming up with statements that are obviously false. And they start to argue: no, no, these statements are obviously true. And she goes, no, they’re obviously false. And soon they’re in this massive argument, and they’re starting to hate each other because they can’t agree, and they pull in other coworkers.
[00:03:36] Blue: And it turns out every time they pull in new coworkers, one person will see the statements as obviously false, and the other will see them as obviously true. And then that leads to a divide between the new coworkers, and slowly this company gets ripped apart by arguing over whether the code is producing obviously false statements or obviously true statements. They end up with a lawsuit going on, with this guy suing the person who owns the company. And then it finally strikes them: the machine learning algorithm is working. It’s doing exactly what it was meant to do. It was polarizing people. And once they realize what they have, they’re not quite sure what to do with it. So they try calling the Defense Department and offering a demonstration, and in trying to show the Defense Department how the algorithm worked, they end up accidentally taking out a third world country with it. And then he starts to realize that all the news articles that are coming out, the different things that keep happening in the news, are all on the list of statements coming out of the machine learning algorithm. And he realizes somebody’s gotten hold of this algorithm, which they call the Scissor because it splits people. Somebody’s got this algorithm, and they’re using it to generate fake news stories, or real news stories, and it’s leading to the polarization and the downfall of the entire country.
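As an aside, the “Scissor statement” search the story describes, hunting for statements that split raters as evenly and as intensely as possible, can be caricatured in a few lines of Python. This is purely a toy sketch: the story never specifies an algorithm, and the scoring function below is an invented stand-in.

```python
# Toy "controversy score": a statement is maximally scissor-like when
# raters split close to 50/50 AND feelings run strong on both sides.
# Purely illustrative -- the story describes no real algorithm.

def controversy_score(ratings):
    """ratings: floats in [-1, 1] (-1 = 'obviously false',
    +1 = 'obviously true'). Returns a score in [0, 1]."""
    if not ratings:
        return 0.0
    pro = [r for r in ratings if r > 0]
    con = [r for r in ratings if r <= 0]
    balance = 1 - abs(len(pro) - len(con)) / len(ratings)   # 1.0 = even split
    intensity = sum(abs(r) for r in ratings) / len(ratings)  # strength of feeling
    return balance * intensity

# A statement everyone mildly agrees with scores low;
# an even, intense split scores high.
print(controversy_score([0.9, 0.8, 1.0, 0.7]))    # consensus -> 0.0
print(controversy_score([1.0, -1.0, 1.0, -1.0]))  # scissor   -> 1.0
```

A real generator would wrap a score like this in a search or training loop over candidate statements; the point is just that “maximize disagreement” is a perfectly well-defined objective.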
[00:05:09] Blue: And so obviously this is a fictional story, but it’s very interesting that this is what it feels like to be living right now in our very polarized environment, where things just seem so obviously true or false to everybody. And we can’t agree on what it is. And we can’t even agree on the existence of a middle ground. It feels like that a lot. Camille, why don’t you give your thoughts here too? Maybe you have a somewhat different take than me on the story.
[00:05:41] Red: Well, no, actually, I have the exact same take. And right after reading the story, I was thinking about a conflict in my own life, actually, with a person who works with me, where within our communication we regularly run into our own personal Shiri’s Scissor: a statement that he’ll make to me that I’m sure isn’t designed to be that, but that almost instantly makes me feel totally in conflict with him, right? And it’s something benign that’s, like, anti-agile to me. Like, well, you know, I apologized to the customer for the fact that we went over budget. And I’m like, no, we’re agile, that’s not actually a thing we can do, because look at all these decisions that they made. There were all of these things that were like a Shiri’s Scissor within our regular interactions. And since reading this article, I’ve kind of noticed it a lot throughout all sorts of interactions that people have with each other.
[00:06:49] Blue: Yeah.
[00:06:50] Red: And just in day-to-day life, I think you start to see these kinds of phrases being used really commonly, especially across memes on Facebook. Memes end up being like that; typically, their focus is to be purposefully divisive.
[00:07:12] Blue: Yes, divisive.
[00:07:17] Blue: And I had a Facebook post about that too. One of my friends had posted that, and he did an article on it that you liked, but he was pointing out that Facebook memes intentionally leave one impression with a liberal and one impression with a conservative, and that that’s not a bug, that’s a feature. They’re trying to make one side feel one way and one side feel another way. They’re trying to intentionally pull out either exultation, oh, yes, I so much agree, or really deep anger, right? They’re not trying to make some sort of even-handed statement that’s true or something like that. They’re intentionally trying to seek out and make both sides react strongly. And I think the
[00:08:02] Red: importance is that it’s designed to manipulate both sides, and that’s a scary thing that’s happening to us. And it makes me wonder, you know, has it happened before in human history? You look back at the news sheets that they would print out, and the newsies, the news boys out trying to sell their papers for a nickel or a penny or however much you sold a paper for back in, like, 1902. I’m certain that same kind of thing happened, you know, in the language they used; on one street corner, you might have one guy talking one way, and on another street corner, another guy talking another way, and probably that same kind of conflict. But the way we communicate and the power that we communicate with right now is just, you know, unprecedented in human history.
[00:08:56] Blue: So, actually, a couple of things I want to mention here. I read a book once written by a guy whose job it was to come up with controversial news stories to create news coverage for his company.
[00:09:12] Red: Okay, okay, interesting.
[00:09:14] Blue: He worked for, I think it was called American Apparel, which had some really controversial ads. Yeah, yeah,
[00:09:19] Red: yeah. I remember; they routinely got lambasted and, you know, really criticized for very controversial stuff.
[00:09:30] Blue: So here’s the thing that’s interesting. The ads that were so controversial never ran as real ads and were never intended to be run as real ads, according to this guy who wrote the book anyhow. They existed for no purpose except to get bad press so that American Apparel would end up in the national news. So he would actually be hired, so he claims, to create these ads that they knew they weren’t going to use, because they would have something controversial in them. They would have, you know, explicit nudity or something like that. Yeah, they were
[00:10:05] Red: going to go so far past the realm of what’s socially acceptable that the only possible outcome was, was fury and indignation across the country.
[00:10:15] Blue: So
[00:10:16] Red: he
[00:10:16] Blue: would create the fake ad, and then he would put a fake blog post up somewhere lambasting his own ad. Then he would call some bigger blogger and say, look what people are reacting to over here, and that blogger would pick it up, and then another blogger would pick it up, and another, and then once enough bloggers had picked it up, he would call the news. He’d say, look at this news story about all these bloggers that are so angry over this American Apparel ad. And then the national news would pick it up, and now American Apparel had gotten what they wanted, which was that now they’re a nationally known brand. It didn’t matter to them that it was bad press; you know, all press is good press as far as they’re concerned. Now, actually, I’m not sure if this story is true, but this is what he claimed happened in his book. I haven’t done enough research to verify it. But he then went through and explained that this is how people are using the internet to manipulate the national news and thereby manipulate us. He talked about some movie that a friend of his made, who had hired him, and he did the same thing. He purchased a billboard for the movie. Then he went out and he graffitied his own billboard. Sure, sure, sure. And then he did a blog post on how angry people were about this movie, and then he got other bloggers to pick it up, and then it went to the national news.
[00:11:43] Blue: And so that was the technique that he was using over and over again: to basically just create a news story out of nothing so that he could get something into the national news and therefore get attention. That was what his job was, to create this sort of tension and upset to get things to go. Now, he made an interesting point. He claims that back in the old days of the newsies, yellow journalism was the norm, just like it’s become the norm today. Right. And he claims that the thing that broke yellow journalism was the subscription model, and the thing that broke the subscription model was the internet. And so we had yellow journalism because the newsie had to sell the newspaper on the side of the road. The only way you’re going to get that thing sold is to intentionally run news stories that are going to get people upset. So you have this historical era of yellow journalism, and then it goes away, because they’re now doing subscription models: you pay for the newspaper, you’re paying every month anyhow. So now what’s actually going to be rewarded is at least the appearance of balance; that’s the idea there. Okay. So then the internet breaks all the newspaper models, and you have to get clicks. And so now you’re suddenly back into the era of yellow journalism, although now at light speed, you know, at internet speed. Right. And so you have to be able to get your news stories out there. You have to be able to get people to click.
[00:13:18] Blue: Now, I’ve seen some newspapers claim this story isn’t true. They say, well, actually, newspapers are moving towards an online model; the New York Times and the Washington Post have subscription models. I’m still not sure I buy that point of view, because the way they still get their sales is they make you want to read the article, and then you can’t, and you need to pay money to get a subscription. So yeah,
[00:13:44] Red: no, absolutely. I agree. They’re still using the same thing to get the click into the, into the paywall. Right.
[00:13:50] Blue: And so it seems to me that there’s at least some credibility to this guy’s story that we’re back into the era of yellow journalism. And that’s one of the main reasons why we suddenly do see so much polarization: that’s the way you make sales. The way you get your story out is you have to enrage people, and you have to enrage both sides. You have to make sure both sides are enraged. So it’s got to be worded just right: one side’s got to feel like they were wronged, and the other side’s got to feel like, you know, it was obviously true. And that’s really what they’re targeting today. Right. So let me tell you my own story about this. I accidentally got stuck in a big argument on Facebook with some friends. I had come across this news story, and the title of the news story said that Stanford, or some famous university, I don’t remember which one it was, was creating a different standard, a new standard, to accommodate women in math. Okay. And it’s like, oh my gosh, that is so horrifying. It is awful. Okay. So I click on the link, which is what it’s trying to get me to do. And it turns out that the story is actually that they wondered if maybe having math tests be timed favors one style of doing math over another, and that could end up adversely affecting one gender versus the other. Yeah. But over something that isn’t really related to whether you’re good at math or not, right?
[00:15:36] Blue: There’s no law of the universe out there that says, you know, if you can do a math problem in 15 minutes, you’re much, much better than someone who can do it in 16 minutes. Right. I mean, it’s, there’s nothing obvious about, about a time thing. And they weren’t giving women more time. They were giving everyone more time. Okay. So
[00:15:55] Red: that’s a pretty interesting detail.
[00:15:58] Blue: Yes. So their solution wasn’t gender specific at all.
[00:16:02] Red: That’s right.
[00:16:03] Blue: Okay. They were simply experimenting with the idea that it’s possible that one gender had more of one style, and therefore they were penalizing that style, and there was no reason to. And so what they ended up actually doing is they only changed it for undergrads. They didn’t change it for graduate students, which is a fairly good evolutionary change; you don’t just go change everything all at once, you experiment. And then what they found, the result, was that there were, I guess, four categories or quadrants of students. For the top quadrant, it made no difference. In the second quadrant, it did in fact turn out that more women, now that they were being given an extra 15 minutes on the test, were scoring better. So the second quadrant had changed. And I think for the other quadrants, it didn’t matter.
[00:16:56] Red: Oh, interesting. And
[00:16:58] Blue: so it wasn’t even a huge change; it was only an extra 15 minutes. And the result wasn’t a huge change either. It only affected the second quadrant, right? It didn’t affect the first quadrant at all. And, you know, an experiment like this really strikes me as something that’s worth trying. There’s nothing obvious about how math should be connected to a timed test. So I post this on Facebook, and I’m upset now. I’m upset because I feel that the title of the article misled me. And so I’m not really talking about the content of the article. I’m simply pointing out that it really bothers me. I hate the news media. Our era of the news media is just yellow journalism. It’s awful. The ethics of journalism are just terrible today. Maybe they’ve always been bad, but they’re really bad today. Sure. And so I’m really just talking about how bothered I am that the news media intentionally wrote a title for an article to make it sound like this university was giving extra time to women only, when really they were being even-handed. And the moment I put that up, my conservative friends on Facebook just start lambasting me. And I consider myself a conservative, though probably not a fairly standard conservative anymore.
[00:18:19] Red: I would call you a non-traditional conservative. But I’m definitely not a liberal. I’m like, no, no, not even close. You’re way, way, way more conservative than you are anything else.
[00:18:30] Blue: Right. And whenever somebody asks me, I always say I’m kind of a traditional conservative. I think
[00:18:35] Red: you are. You’re the kind of conservative that most liberals wish would come back, honestly. You know, people who actually believe in the Constitution, believe in being fiscally conservative. Right. Yes.
[00:18:51] Blue: So, I’m a Never Trumper. I’ve never voted for Trump. You’re welcome to talk about that. Yeah. And a lot of conservatives are Never Trumpers. That’s a perfectly normal thing for conservatives to feel, because Trump is not a very good representative of traditional conservative values and views. Anyhow, my conservative friends just start lambasting me, and they’re so upset. Now, they’re not upset about the title of the article. They could care less. The only thing they’re upset about is that this university had changed something to try to accommodate women at all. And they felt this was wrong.
[00:19:26] Red: And I’m like talking with them.
[00:19:29] Blue: I’m trying to just, you know, talk with them. And I’m saying, like, I don’t understand why giving 15 more minutes obviously means that. And the analogy they keep using is things like, well, let’s just lower our standards for Navy SEALs so that we can get more women in. It’s like, this is nothing like that. This isn’t even remotely like that, right? There’s nothing obvious about a timed test, nothing obvious that says it means you’re better at math. If you wanted someone to go do math and you wanted them to do it well, is there some reason why you would prefer someone who’s lightning quick but gets it wrong a little more often over someone who’s slow and plodding but gets it right more often, right?
[00:20:13] Red: Yeah, that actually is a bigger question. We could almost have a whole podcast about that.
[00:20:19] Blue: So, and there’s probably room for both types of people. Sure. But it’s not obvious that one of those deserves a better grade than the other, right? No,
[00:20:27] Red: I absolutely agree. And it is interesting that that’s the part that made them so upset. I’m not surprised at all, at all. You know, if you watch on social media, anytime there’s anything that could veer a little bit into anything being changed that seems preferential to almost any, well, what we consider a minority, although I don’t know why we would consider women a minority, but an underrepresented, not-men-and-generally-white-men kind of category. Anytime you see anything that seems in the least bit preferential to anybody in one of those groups, you will watch people jump on, like, why don’t you just let crippled people be Navy SEALs? That’s what we want, right? Now that’s the
[00:21:14] Blue: America we want. It’s funny just how much venom it brought out, right? Because I wasn’t, the way I was posting that article and what I was concerned about was so different than the way everybody else viewed the article.
[00:21:28] Unknown: Right.
[00:21:29] Blue: And it was kind of an eye -opener to the fact that I don’t, you know, I don’t fit in well anymore.
[00:21:37] Red: Well, that particular thing revealed a Shiri’s Scissor within our culture. And, you know, you know who the trap was designed to catch. You clicked on it because you were a little indignant that somebody would suggest such a thing, and you were curious. And those other people that were so indignant at the article, that’s who it was designed to catch too. Right. And who knows how they, you know, I mean, it’s just interesting. So let’s talk about bots in all of this for a minute. Yeah, okay. You know, they say the prevalence of bots is really high across all of social media. I think projections are somewhere in the range of 25 to 30% of Facebook accounts are fake and likely bots. Yeah, a really, really high number. And the same on Twitter; the numbers are really, really high. And as far as I have read, they’re not doing a great job combating them at all. And just as kind of a side note, we regularly respond to federal RFPs and RFQs for work, you know, and there’s this category of project funding called SBIRs or STTRs. They’re kind of innovation funding through the government for very specific things, innovations that they are actively looking for. And a recent one was to build a machine learning algorithm, and probably kind of a little mini platform, to combat people within the Navy sharing bot crap propaganda, because they know it is such a big problem. And
[00:23:26] Red: they know that the propaganda and the bots are a big problem across the board, but they find that a lot of times people in the military have very, you know, distinct feelings that can be easily manipulated. Wow. So they actually are trying to invest money into innovations to prevent a very particular problem. So anyway, all these bots on the internet, all spreading who knows what agenda. You can’t assume that they have one agenda or the other. It’s not like there’s some big shadow bot creator. Sure, it’s the Russians and the Chinese, but it’s also just hundreds of thousands of individual people trying to create clickbait in the form of bots, some of them to build up big lists that they can sell to other people. In fact, I had read once that, you know how on Facebook for a while there was this kind of proliferation of, here’s my 82-year-old grandmother, here’s a picture of granny, let’s get her 100,000 likes to show her the excitement of the internet; or, she went out on her first date, let’s show her how exciting that was, you know, and you’d see these things get shared and, you know, millions of likes. All of that was generally being generated by little programs, little bots, so that they could turn around and sell lists of people who follow things to people who buy such things.
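Nothing public here says how that Navy tool was meant to work, but as a purely hypothetical sketch, a first-pass bot filter often starts from crude behavioral features like the ones below. Every threshold and weight in this snippet is invented for illustration; real systems use far richer features and learned models.

```python
# Hypothetical first-pass "bot-likeness" heuristic of the kind a
# propaganda-filtering tool might start from. All thresholds and
# weights are invented; this is not any real system's logic.

def bot_likeness(posts_per_day, duplicate_ratio, account_age_days):
    score = 0.0
    if posts_per_day > 50:       # superhuman posting rate
        score += 0.4
    if duplicate_ratio > 0.8:    # mostly copy-pasted text
        score += 0.4
    if account_age_days < 30:    # brand-new account
        score += 0.2
    return score  # 0.0 (human-looking) .. 1.0 (very bot-like)

print(bot_likeness(posts_per_day=120, duplicate_ratio=0.95, account_age_days=5))   # -> 1.0
print(bot_likeness(posts_per_day=3, duplicate_ratio=0.1, account_age_days=2000))   # -> 0.0
```

In practice a score like this would only rank accounts for review; a learned classifier over many such signals would replace the hand-picked thresholds.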
[00:24:58] Red: So you’ve got this kind of underbelly of machines, essentially, that are being trained to go out and do this. In fact, it’s the basis of this story, that they were building a machine learning algorithm that was designed to be divisive, you know. That’s
[00:25:16] Blue: interesting to me. So actually, I read an article about this that is the other side of it. You’re talking about them intentionally being divisive. I read an article that pointed out that even if you weren’t intentionally being divisive, the result would still be divisive. Yeah. No. So, okay. Let’s say that you have an Instagram algorithm that’s going to, you know, go out and get you likes, whatever. So you’ve got a bunch of, let’s say, young ladies that are on Instagram, and they want to get likes, because that’s a very natural thing for any human being to want. And so you’ve got a big group of people doing it, and some of them end up getting lots of likes and some of them don’t. And which are the ones that are getting lots of likes? You know, when you start getting lots of likes, that causes the algorithm to put you out there more, because it wants more and more people to like it. It’s kind of a runaway self-fulfilling prophecy sort of thing. Okay. So let’s just say that the algorithm has nothing in it that’s particularly biased. All right. It’s just simply going to exploit whatever natural human tendencies actually exist. Well, one of those might well be that if the girl looking for the like is wearing a miniskirt, more men are going to like it.
[00:26:37] Red: Okay.
[00:26:38] Blue: And so suddenly the algorithm is automatically optimizing for fewer clothes equals more likes. Sure. Okay. Even if there was nothing originally programmed in it to do that. All right. That would naturally just come out of such an algorithm. This is what the article was claiming, anyhow. And then you look at, like, YouTube. Why is it that conspiracy theories spread? You know, they pointed out that when you start off on YouTube, you end up watching whatever your preference is; you’re going to watch more conservative stuff or more liberal stuff. The moment it detects that you’re either conservative or liberal, it’s going to start feeding you kind of garbage stories, you know, because those are the ones that really get people upset and interested. Sure. And so the algorithm will naturally tend towards extremism, and you’re going to start being automatically fed more and more extremist viewpoints. Okay. Even if there was nothing in the algorithm that originally meant to do that, because the algorithm is what the algorithm is, you know; this is the way machine learning works, right? It automatically tries to exploit whatever it can to make a prediction. Sure. And if human beings have a natural, let’s call it a bug, that’s probably not accurate, but let’s call it a bug, a natural bug where we get caught up in extremism under certain circumstances, it’s going to find and exploit that reality at some point. Right. Even if no one who programmed it had any idea that’s what they were doing, that would naturally happen if there’s some sort of tendency towards that. Okay. Well, again, I don’t really know how true these things are.
[00:28:18] Blue: These are just different conjectured ideas that people are putting out there, trying to figure out what’s going on and what we do about it. Right. And I do think there is something to that. At least as far as outrage goes, it seems really obvious that people share stuff that’s outrageous, and other things just don’t get shared as much.
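The drift Bruce is describing, where an engagement-only ranker ends up promoting extreme content without anyone coding that in, can be simulated in a toy model. Everything here (the click-probability curve, all the numbers) is an invented assumption; the only mechanism is “reward whatever gets clicked.”

```python
import random

# Toy illustration: an engagement-maximizing ranker with no bias coded in
# still ends up promoting the most provocative content, because it rewards
# whatever people react to. All numbers below are invented.
random.seed(0)

posts = [{"extremity": random.random(), "clicks": 0} for _ in range(200)]

def click_prob(extremity):
    # Assumed human tendency: slightly higher engagement with extreme posts.
    return 0.3 + 0.4 * extremity

# Every post gets shown the same number of times -- the only "bias"
# is in how the simulated humans respond.
for _ in range(50):
    for p in posts:
        if random.random() < click_prob(p["extremity"]):
            p["clicks"] += 1

# The ranker then sorts purely by engagement...
posts.sort(key=lambda p: p["clicks"], reverse=True)

# ...and the front page skews extreme anyway.
front_page = sum(p["extremity"] for p in posts[:20]) / 20
overall = sum(p["extremity"] for p in posts) / len(posts)
print(f"mean extremity: front page {front_page:.2f} vs all posts {overall:.2f}")
```

Run it and the front page’s mean extremity comes out well above the overall mean, even though the ranker never looks at content, only clicks.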
[00:28:38] Red: Sure. But you started that statement by saying that might not necessarily indicate ill intent. Right. Well, generally I agree that that’s true. So I was scrolling through Facebook this morning, and I follow a page called Dad Lab. It’s this British guy and his, like, nine-year-old kid, and every day they post little crafts that are both a little bit sciency and fun. All right. So, like, as benign as Facebook gets. Today the post was this little merry-go-round you can make with a toilet paper roll, and it’s a ghost that goes around the thing. The first response to this post, I almost want to pull up my phone and look at it, was something like: they can’t make us give up our freedoms to be able to try and protect everybody around us. It was an anti-mask post, but it never said the word mask. It was just, like, they can’t attack our freedoms this way. And, you know, a lot of people responded to that. I clicked into the person’s name and looked at what I could see of their profile. And I had this moment of, like, I wonder if this is a bot. Because, I mean, there’s no reason for there to be any disagreement here. It’s a picture about, you
[00:30:12] Red: know, it’s just so far removed from needing to be politicized. But wouldn’t it be easier to build a bot that just didn’t discriminate on the types of posts it was trying to cause fights on? You know, if you don’t care what kind of posts you’re going to start fights on, if you’re just trying to start fights everywhere you go and, like, keep division constant, that’s how you’d go about it: by dropping things in irrelevant places that might get people riled up. Because people were disagreeing right there on the stupid ghost thing about, you know, their personal freedoms being attacked or, you know, protecting each other. And so, I don’t know, if there are that many bots playing around on these platforms, a lot of them probably do have ill intent. We just don’t actually understand what their ill intent is. Or, you know, maybe just making us hate each other is a good enough intent.
[00:31:10] Blue: You know, okay, so it’s interesting you say that. Obviously, the whole Russian thing with the last election, that is definitely how it ended up in the news, and unfortunately the news did a very, very terrible job with the whole Russian thing. Okay, but the way that all got started was the discovery that the Russians had this group, I can’t remember what the name of the group was, but the group that was doing the bots. And when they actually started doing that, half the time the bots were supporting Hillary Clinton and half the time they were supporting Trump.
[00:31:48] Red: Yeah,
[00:31:48] Blue: absolutely. So they did not care who won, initially. They were just trying to stir up discontent amongst US voters. Yeah. And then at some point, when it turned out that it looked like Hillary was going to win, they switched over to supporting Trump primarily. Interesting. And the conjecture here, I mean, obviously there’s a lot we don’t know, but the conjecture here is that they weren’t actually trying to get Trump elected. They had decided that what would cause the most discontent, since Hillary Clinton was going to win, was to rile up Trump voters as much as possible right before the election, so that when Hillary Clinton won, it would do the most damage. Okay. Then Trump ended up winning. That was one of the main things that then led to the later story that there might be some sort of collusion between Trump and Russia. I mean, you can’t prove a negative, but let’s just say there’s zero evidence that there was ever any collusion between Trump and Russia. All right. That statement alone probably will get some people upset. It’s the truth.
[00:32:56] Red: Oh no, we started talking about collusion.
[00:33:00] Blue: So I, I've actually read the Mueller report. I distilled it down and put a blog post about it. It's broken up into two sections. One part is the investigation into connections between Trump and Russia. And then the other half is, you know, did Trump obstruct justice? Right. And the Mueller report itself concludes that there was no collusion between Trump and Russia, right? And you can read through all the examples of every single connection between Trump and Russia that existed. They're exactly what you probably would have expected of any politician, right? There's just nothing there. I know that some people got really upset over the idea that Trump was so open to getting dirt from a foreign country on his competitor. And I'm not saying that's okay, because it's not, but it is an old practice. That wasn't something Trump invented. Right. And so, based on this, there just doesn't seem to be any real connection between Trump and Russia. I was telling somebody, I said, you know, if you're going to keep claiming that Trump's in collusion with foreign powers, you probably ought to claim China at this point, because at least China hasn't been investigated. We have investigated Russia and we didn't find anything, right? So you might be better off just claiming China at this point and trying to get an investigation over there or something like that. That probably all started just because they had made a decision: well, let's support Trump because he's going to lose. And their whole goal was to simply sow discontent as much as possible, right? So it's interesting. And
[00:34:34] Blue: I read another article, and this is, I kind of understand where the Russians are coming from, right? If you're anywhere else in the world, the election in the U.S. affects you, right? Absolutely. Absolutely. Whoever wins as president, that's a big deal. If you're in Sweden or China or, you know, just pick any country, Latin America, it doesn't matter, any country. Yeah, who wins as president matters to you in a way that's the opposite of how little I care about who the prime minister of Britain is. Right. Although maybe we should care more, honestly. Maybe we should care more. But it's not likely to have a giant impact on my life the way it's going to have a giant impact on their lives. Sure. No,
[00:35:22] Red: I agree.
[00:35:23] Blue: And so I can understand why, since they're non-citizens and they don't get a vote on this really important thing that affects their lives, I can understand why there would be this desire to try to influence the election by whatever means they can get away with. And I think that's what's really going on when it comes to the Russia thing.
[00:35:42] Blue: Now, I'm actually not sure Russia makes that much of a difference. I mean, if you go and look at the number of bots and the amount of money they put into that, it's unlikely that one organization impacted much of anything when it came to the election, right? I mean, clearly there are other forces at work within the United States that would have had a much, much larger impact than anything the Russians could have even conceived of. So it's unlikely that the Russians even changed the outcome of the election at all, because they were only pouring a couple million or something into it. It was not much money at all. The amount of money that was flying around at election time, by comparison, was absolutely enormous, all within the United States, right? So I do think, though, I can see why other foreign powers would want to influence the election if they could. And I can see why the Americans should probably start realizing: hey, it used to be really hard for foreign countries to influence our election very much because there was no internet. Now they can. So of course they're going to try to. We probably ought to make new laws. We probably ought to respond to that new threat, that new circumstance, and figure out how to deal with it and create new legal standards and new ethics around: is it okay to accept dirt from a foreign power? Is it okay to accept money from a foreign power, and under what circumstances? I'm sure they have laws around this already, but to refine those, that probably does make sense, to try to figure out what we really want to allow, right? And new standards around that.
[00:37:22] Blue: But when we talk about bots, there are a lot of organizations using bots, I'm sure. And it's not even just bots. Trump's campaign used a machine learning group, and it was one of the first uses of, I can't remember their name, but it's a famous name. It's not coming to me off the top of my head, though. There was a group that used machine learning for him to target voters. They had worked out an algorithm that targeted the US voters they either needed to get out to vote, the ones they knew were going to vote for Trump if they got them out to vote, or the ones who were on the edge. They had figured out how to target, on social media, the ones on the edge, and then feed them the ads they needed to push them over into voting for Trump. And I'm pretty sure both sides are doing that now, but at the time it was something brand new. And so Trump's campaign had better access to that than the other campaign. Now, this would all be American money, American citizens. This is not a foreign power. But even in this case, you can see. And honestly, I would be hard pressed to say this is anything but legitimate, right? I mean, isn't this just good campaigning? Good marketing, right?
[00:38:42] Red: Good marketing, manipulation, reaching out to the people who are on the edge and making sure, we talk about marketing automation all the time. Well, and even taking machines out of it, across almost all of these platforms, and Instagram especially is infamous for this, the algorithm, like you say, responds to people liking things by showing it to other people to like, right? That's what it's designed to do. And so very prevalent are, you know, pods of real people who, when somebody is posting, they have pods of thousands of people who are going to go on and like each other's stuff so that they can boost the algorithms. It happens on LinkedIn too. I'm sure it happens on Facebook, although, I don't know, Facebook's kind of a funny place. Who knows there? But, you know, those are real people doing fake actions to manipulate real things, for their own purposes.
[00:39:45] Blue: Yeah. And you know, I guess the question that comes to my mind from all this is: what do we do about this? Because some of it actually feels legitimate, right? Even though I hate the outcomes. I wouldn't expect a campaign to do anything except try to campaign as best they can. And certainly I would never be in favor of a law that tried to stop campaigns from using machine learning to target the voters they needed most to target. That would be so wrong to do. Sure, of course. And if that's true, I have a hard time imagining ever being in favor of a law that, say, told YouTube they have to change their algorithms for likes because it's causing too many people to head towards extremes. Yeah, absolutely. And I don't know what we do about this, right? There's got to be, and I'm sure there is, a solution to this. I just don't know what it is at the moment. And then the other flip side of this is, because so much communication goes through a very small set of companies, Twitter, Facebook, YouTube, Google, things like that, they end up with this really disproportionate amount of censorship power.
[00:41:04] Red: Oh, yes. Let's talk about that for a minute, because that's been happening a lot, I think.
[00:41:09] Blue: Yes. Yes. In fact, I have a friend, one of my fairly extreme conservative friends, who recently just left Facebook because he kept getting banned for a couple of days or a couple of weeks at a time. And he told me what he was getting banned over, and I'm assuming he's telling me the truth; he isn't a liar, so I think he's telling me the truth. And it was stuff that's really hard to believe he should have been banned over. One of them was a little golden book with Hitler on the front, so it looks like a kid's book, and the title was something like "Learning to Refer to People as Hitler So You Can Win an Argument," or something like that, right?
[00:41:54] Red: Okay. Okay.
[00:41:55] Blue: Super benign in a lot of ways. Right. Okay. And I guess it was just the fact that it was Hitler that got him banned, right?
[00:42:03] Red: Possibly. Yeah.
[00:42:04] Blue: And again, these are private companies. So from a certain point of view, this isn't the government. The freedom of speech laws are specifically about the government not being able to ban things, because the government has a unique monopoly on violence, right? Facebook has no armies that I have to worry about. So Facebook should be able to decide their own community standards, whatever those happen to be, right? That shouldn't be legislated. Right. And yet in the end, we end up with this situation where I'm very uncomfortable. And Facebook is one of the better companies, right? They're hated precisely for the reason that I like them, which is that they refuse to do what Twitter does. Jack Dorsey will decide, I'm not even going to take certain kinds of ads, or something like that, and that makes him popular. But in a lot of ways, that's the end of free speech at that point. Right. I mean, he's got a right to do that too. I'm not saying he doesn't. But in some ways, the fact that Facebook is just kind of more neutral, that's exactly what we would want, but that's what makes them so hated also. And even with Facebook, where they probably are the most neutral of the social media companies, they're still banning things that kind of make me go, huh?
[00:43:22] Red: So there have been a couple of these big conspiracy things that happen, like with the pandemic, and all of the big social media platforms banned it and took it off whenever it would show up, spent outsized efforts pulling down those videos every time they would get shared. And you know, I was watching the whole thing. And I think when you start to have that level of censorship, it helps facilitate the rise of this whole conspiracy sickness that's happening, because these big media giants are deciding what we get to see. You know what, I don't love people sharing things that are obviously
[00:44:08] Blue: like incorrect or stupid,
[00:44:11] Red: you know, I mean, and to me, it's like, why not let people go and watch something like that without it being censored, and decide for themselves that it's ridiculous to think that we have the technology to insert some tracker in you with your inoculation. I mean, it's just madness.
[00:44:32] Blue: So you know what, let me actually tell you. A lot of these things we're talking about, I don't have a clue how to fix. But that one you just mentioned, I actually do think I know how to fix. Okay, so going back to Karl Popper's philosophy, critical rationalism. A lot of the things that we're seeing, from a certain point of view, the fact that we're suddenly not beholden to the mainstream media, you could actually make a case that's a good thing, right? There are so many alternatives out there. And I hate to say this too loud, but I was actually kind of pleased, I won't watch Fox News because I know it's propaganda, I won't go anywhere near it, but I was actually kind of pleased they came into existence, because it sort of broke the power that the mainstream media had to determine narrative on their own. Whether that has been mostly good or bad, probably mostly bad at this point. But in the long run, I still kind of hold to the idea that at least we're getting multiple viewpoints available to us, even if people are kind of choosing echo chambers at the moment. Right. So from a critical rationalist standpoint, that should be viewed as a good thing. But how do you deal with bad news stories? How do you deal with people, you know, on YouTube? I saw a video on YouTube that claimed that you could stop the coronavirus by putting a hairdryer up to your nose and blowing hot air into your nose. Okay, cool, cool. And it said it worked for a cold. I had a cold at the time, so I tried it. Okay, but here's the thing.
[00:46:08] Blue: Even though I tried it, I knew going into it that it was probably false, right? And there was this giant firestorm of comments on the video talking about why it was false. And the guy who made the video, he at least knew enough medical stuff that he could defend himself.
[00:46:26] Red: Sure. He was up there trying to explain why their arguments were wrong.
[00:46:30] Blue: And there was this discussion going on about what the nature of a cold is, what the nature of a virus is, where it actually exists in the body. And it was a really interesting discussion, right? And I was learning stuff from the conflict that was going on in the comments. And then YouTube took the whole video down. Because they're protecting us from false information. That's correct. Right. Okay. You know what? I want that video back up. And I want those comments back up. What they need to do is think in terms of Karl Popper's philosophy, and they need to empower the people who are the viewers to easily put criticisms up.
[00:47:15] Red: Yes.
[00:47:15] Blue: Criticisms that are easily available, that prove or disprove what's being claimed in the video. So you can go make false videos all you want, but it will become trivial for people out there to show that a video is making false statements. Okay? You've got to make that an easy thing to do. And then you kind of need to trust that the truth is going to come out that way, rather than censoring things. Sure. You know, if we're going to have these extreme conspiracy theory videos, great, let's leave them up, but let's make it really easy, just trivial, to click on links connected to that video that show you there's something wrong with the video, that it's making false statements. Okay? Then people need to make choices after that. We want to empower people to be able to get information. One of the things that people get worried about is that false news stories spread, you know, six times faster than true news stories. That's the number I heard. Okay, maybe that's true. But we need to make it easy to see that they're false news stories, then let them continue to spread along with the information that shows they're false. And I think this is just a matter of figuring out the right computer platform. Think about Wikipedia. Wikipedia has the right approach, where anyone can go write whatever they want on Wikipedia for any entry, which at some level, you look at that and you compare it to everything we're talking about with Shiri's Scissor, it should be a disaster. And it's not. Right. Okay. It's effective.
[00:48:53] Blue: It's effective. Yeah. And the reason why is because they force the conflict, and the conflict is good, right? The bad thing is shutting down the conflict. The conflict has to take place. If you're going to, you know, decide what goes on Trump's Wikipedia page, you have to choose to word things in such a way that the other side is going to let it pass, or has to let it pass, because it's sourced. It's something that they can quote, and they can show you the source, and it forces them to get down to facts, not spin. In many ways, even though Wikipedia has somewhat of a bad reputation, they're a better news source than most news sources, right? The very fact that they have to synthesize everything, the end result, especially on controversial items, is that I can go to Wikipedia on any controversial item and I'm pretty much guaranteed I'm going to get both sides of the story, that I'll become aware of both sides of the story. And it's really hard to do that anywhere else. And I think the Wikipedia model is the right model. We just need to adapt that to YouTube. We need to adapt that to Facebook stories. We need to adapt that to absolutely everything, so that it's not that someone's out there deciding what to shut down. It's that we're going to let the bad news stories run. We're going to let them run as far as they want, but they're going to come with truth attached. That's the way we're going to deal with this from now on. And I really think that's the correct solution. That's the correct critical rationalist solution to this problem.
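The "truth travels with the story" idea Blue describes could be sketched as a tiny data model. This is purely illustrative, every name here is invented, but it makes the mechanism concrete: each shared story carries its attached criticisms, sharing never strips them, and the renderer always shows them together instead of taking the story down.

```python
from dataclasses import dataclass, field

@dataclass
class Criticism:
    author: str
    text: str
    source_url: str   # a criticism must cite a checkable source

@dataclass
class Story:
    title: str
    body: str
    criticisms: list = field(default_factory=list)

    def share(self):
        """Re-sharing copies the story WITH its criticisms attached."""
        return Story(self.title, self.body, list(self.criticisms))

def render(story):
    """Always render the story together with every attached criticism."""
    lines = [story.title, story.body]
    for c in story.criticisms:
        lines.append(f"  [criticism by {c.author}] {c.text} ({c.source_url})")
    return "\n".join(lines)

story = Story("Hairdryer cures colds", "Blow hot air up your nose...")
story.criticisms.append(Criticism(
    "a_virologist",
    "Rhinoviruses replicate deep in tissue; surface heat can't reach them.",
    "https://example.org/refutation"))   # hypothetical URL
shared = story.share()   # the criticism follows the story when re-shared
print(render(shared))
```

The key design choice is in `share()`: the criticisms list is copied along with the story, so a false story can spread as far as it wants, but the refutation spreads with it.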
[00:50:27] Blue: And it doesn't solve all the problems we've talked about today, but the one you just mentioned here at the last, it would. And I don't even think this is necessarily a difficult problem to solve. I mean, I can't, right off the top of my head, tell you, this is how you would make an interface that would make it trivial to attach truth to a false story, right? But it's not hard to start coming up with ideas that you could then test out and figure out what works. I really wish we would move in this direction. The only other thing I can think of that we might have to go to is, there are platforms out there, platforms like Facebook, platforms like YouTube, like Patreon, except that they refuse to do anything but let people do whatever they want, right? They don't decide for people. None of these have really taken off, though. And I don't see how they can, at the moment, compete with the big platforms, whose appeal is how good their algorithms are.
[00:51:31] Red: Well, it's also hard to fight against dominance and establishment. Ultimately, the rise of a platform like that would have to happen at the kid level or the young-adult level, where they adopted it and started using it, and the prevalence got to the level where people would switch over. And I could be wrong. You know, maybe one day it's like Yahoo: Facebook just goes away. But I think it takes a lot to change the trajectory of the number of people that use Facebook and how it's used.
[00:52:12] Blue: Yeah, I agree. And so I think the best long-term solution, it seems to me, would benefit Facebook, right? If you're Facebook, you probably don't want to spend, I don't know how much money they spend on censors. It's got to be a lot. People are complaining all the time. It's very natural for people to try to use reporting to Facebook as a way of shutting down statements that they don't like. Sure. Living in a culture now that's divided like that, you're using every weapon at your disposal, right? Or some people are. So if you make a Facebook post and it has a picture of Hitler, someone can get upset over that and choose to go report it, and then Facebook's going to be forced to send somebody to respond to that, to look at it and make a decision. And we can imagine that they're not going to be super consistent in how they make decisions. Whatever the leanings of that individual investigator are, that's probably going to make a big difference. And Facebook probably doesn't want to have to pay for all these people to go investigate things. So it does seem like there could be some real value in democratizing it, meaning letting the audience decide: okay, I'm going to attach truth statements to this. And you could probably even figure out a way to adapt it, this would be harder, but you could probably figure out a way to adapt it to safety of content.
[00:53:41] Blue: Make it so that on Instagram, sure, you can let people post just about whatever they want, maybe there's some level that you stop that's fairly extreme, but you let a lot through, and then people can decide, this is PG-13, or something along those lines. And then for people who don't want to see the PG-13 posts, it just never even comes up for them, thereby lowering the number of likes that are available to you. And it seems like you could come up with a good algorithm that would create the balance that we all wish we had, without actually undermining the freedom of the company, and it would be to their advantage, because they don't have to hire so many people to monitor everything, right? The audience monitors. So that's what I think the correct solution is as far as how to handle the big media companies. And to your point, I don't see them going away. The thing that will cause them to go away will be some sort of entirely new idea that we can't even see right now. Sure, sure, something that displaces the need for them, just like Amazon versus Walmart or whatever. Right. Although both of those are doing well.
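The crowd-rating idea could look something like this in sketch form. The rating scale, the 20% threshold, and all the names here are invented for illustration: viewers tag posts with a rating, the platform aggregates the tags, and each user's feed filters on their own chosen tolerance rather than a central censor's judgment.

```python
from collections import Counter

RATINGS = ["G", "PG", "PG13", "R"]   # hypothetical coarse scale, mildest first

def crowd_rating(tags):
    """Aggregate viewer tags into one label: take the strictest rating that
    at least 20% of taggers chose, so a sizable minority can raise the
    label but a lone report can't bury a post."""
    if not tags:
        return RATINGS[0]
    counts = Counter(tags)
    for rating in reversed(RATINGS):        # check strictest first
        if counts[rating] / len(tags) >= 0.20:
            return rating
    return RATINGS[0]

def feed_for(posts, viewer_max):
    """Show only posts at or below the viewer's own tolerance level."""
    cutoff = RATINGS.index(viewer_max)
    return [name for name, tags in posts
            if RATINGS.index(crowd_rating(tags)) <= cutoff]

posts = [
    ("cat video", ["G", "G", "G", "G", "G"]),
    ("edgy meme", ["PG13", "PG13", "PG", "R", "PG13"]),
]
print(feed_for(posts, "PG"))   # the edgy meme just never comes up
```

Nothing is removed from the platform in this scheme; posts above a viewer's threshold simply never surface for that viewer, which is the "it just never even comes up for them" behavior described above.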
[00:54:48] Red: Well, and there, I wouldn't want to say Walmart was winning, but they're certainly not losing. Yeah, well, that's actually a separate topic, right?
[00:55:01] Blue: That's actually a very interesting point. Walmart is a good example of a company that was dominant, that came under siege by a complete disruptor, Amazon, something that was unforeseen, and then figured out how to take advantage of their previous dominance, the fact that they had a footprint on the ground near you. And very effectively, too. Yeah, and innovative, very innovative. Almost a perfect example of how you end up with two really good options. You know, it benefited everybody. Sure. Because they were willing to think, not just roll over and die, but to say, okay, what advantage do we have over Amazon? How do we exploit that?
[00:55:48] Red: Yeah, and they're powerfully effective. And still, they are fighting the fight with guns blazing. We have a client right now that we've been building technology for, and they provide 1099 delivery drivers, and Walmart's their biggest customer. And, you know, Walmart pays a lot to delivery drivers to do deliveries. But I bought a year membership to free delivery: I paid $89 for a full year of all the free deliveries that I want. And Walmart's paying delivery drivers $10 a delivery. So they're totally willing to lose money on every single delivery they make to me after the tenth, and I passed that weeks, months ago, because that's going to keep me coming back to ordering. And they poured money into a really great online interface, so it's super easy for me to order groceries every week from them. Yes.
[00:56:48] Blue: And you know, it's funny, when they first did the grocery pickup thing, they were terrible at it. It almost kept us from using them again, because of how bad they were at first, where you would go to pick stuff up and there'd be nobody there. But then they just started getting good at it, and it only took like a month before they really sorted out the bugs in the system. And now it works really well.
[00:57:14] Red: I was probably one of the most fervent early adopters of that, to the level that I used to show people, mostly because they killed it on the UI of their ordering app. I used it as an example of a great e-commerce experience for multiple clients when we would be in product discovery workshops. I was pulling it up all the time, saying, you want to see how to deliver great e-commerce? Let me show you this Walmart grocery app, because it's really fantastic. It was just that good. And you're right, they were not doing a great job on the pickup. I would wait 25, 30 minutes when I would go to pick up the groceries, but I was willing to forgive it because it was easy to order online and I'm just sitting in my car. So big deal, right? Right.
[00:58:02] Blue: In many ways, this is what we're talking about, though. It sounds like a tangent, but let me tie it back to what we're talking about. Okay,
[00:58:08] Red: good, good.
[00:58:09] Blue: Walmart and Amazon, their whole viewpoint, particularly Walmart in the post-Amazon era, where they had no choice but to become like this to stay competitive, but Amazon's the one who started this whole idea: we're just going to make things really convenient for the customer. We're going to always think in terms of how to make this so frictionless for the customer that they're going to just keep buying more and more from us. And they were the first to do the free delivery club and things like that. So Walmart is now doing that too. And they're thinking in terms of what's best for the customer. How do we make the customer the most satisfied? And it's interesting to note, Amazon as a company doesn't have the best reputation. They don't. Neither does Walmart, right? We know that they maybe don't take care of their employees the best, or whatever, right? And yet people won't stop using them, because the one person they're really taking care of is their customer, right? They're making sure that for their customer, everything is always just as easy as possible. This is really what Facebook needs to do, right? As far as fake news stories and things like that, when you go in and you say, I'm going to ban you because you broke some very vague community standard rule, you're automatically offending some group of people. Some of them maybe are bad people, but a lot of them are just people, right? And they really probably need to be thinking more in terms of, you know, what's going to be the right experience for users, which probably means you don't ban people. It probably means you don't make heavy-handed decisions.
[00:59:48] Blue: You've got to figure out how to let people have the experience that they want, while everybody can feel safe. And I gave a few suggestions about how you might do that: how you might allow people to mark, you know, a PG-13 level or something like that, how you allow people to easily attach truth to a false story, right? So that it has to follow the story around. Something along those lines is what we really want. It would be more customer-centric, more user-centric. It would allow a better experience for them as well as us. And I think it would get the truth out there. I mean, maybe people get outraged over false things and therefore they spread faster, okay? But the truth has one really big advantage, which is you can't refute it, right? That's a huge difference. A false story can be refuted. A true story cannot be refuted.
[01:00:47] Red: I agree conceptually, but we're also seeing, and we talked about this in the spring, we're also seeing a swing across our culture where people refuse truth even when it's blatant, because they've convinced themselves that there's an infrastructure that's hiding the truth, right? That they know more, that the truth the rest of us accept is wrong.
[01:01:19] Blue: I agree. And I think that's part of what's coming out of polarization: this kind of choosing "us" over what you believe the truth to be. But if you're going to be on Facebook, and you want to post a story, let's say it's a false story, but you sincerely believe it to be true deep within you, right? You end up posting it, and you're forced to post it along with the counter-stories, right? It sort of doesn't matter at this point. You're going to have to think: well, I may be empowering the other side if I post this fake story, you believe it to be a true story, but if I post it, it's going to come along with this counterargument, and actually, I don't know if I want people to see that counterargument, because it kind of bothers me that it's so good.
[01:02:11] Red: You’re more rational than the vast majority. So
[01:02:17] Blue: And I think that's where I'm coming from on this, right? Popper's philosophy, it's not rooted in individuals. And this is something I think people get wrong about critical rationalism a lot. They try to think in terms of, how do I personally be more rational? And that's great. I'm not saying there's something wrong with trying to be more rational. But I kind of feel like what Karl Popper would say to that is, well, you've got to be a little careful with trying to make yourself more rational. He actually did say this about something else; I'm reapplying it here. Francis Bacon said you need to remove prejudice from your mind and just see. And Popper said, I'm totally against doing that, because the problem with trying to remove prejudice from your mind is that you might think you succeeded. But in reality, you still have your biases, right? To some degree, I think that's just the way this works, right? Critical rationalism doesn't work at the level of the individual. You don't go try to figure out how you're going to be more rational; the danger of trying to figure out how I'm going to be more rational is that you might think you are. It works across groups. It works across the conflict between groups. It works across people. He would make statements to the effect that there's benefit within science to being ideological. We want to think of science as this non-ideological, non-biased body of knowledge. That's not true, right? Ideas need people that are really ideological and prejudiced, who want that idea to be true, whether it's true or not, because that idea needs to get tested. And
[01:04:05] Blue: if everybody was just sort of trying to be completely unbiased in science, a lot of really good ideas would die before they got tested.
[01:04:15] Red: Because especially when you take radical concepts that are hard for our brains to comprehend, quantum physics is a great example, it's really easy to reject those out of hand because they seem fake.
[01:04:34] Blue: Yes. And so Popper says that he thinks that if all the scientists in the world suddenly became unbiased, science would cease to function. I'll have to find the quote. And so I guess I'm saying it should work the same way everywhere. It should work that way with news stories. It should work that way with everything. It's okay that the world is full of people who want to fool themselves. It's okay that people are ideological. It's okay that people are prejudiced toward a certain viewpoint, right? What we want is for them to have to compete. In fact, if we really get down to it, if I were to ask, you know, how biased and prejudiced are liberals compared to conservatives, it's a dumb question, because they're both so deeply biased, right? Ridiculously so. That isn't what matters. The fact that there's nobody in this mix who is anything but biased does not matter, as long as we get the system right, as long as they're forced to compete and forced to improve their arguments and stories, because otherwise they're going to get their clocks cleaned by the other side.
[01:05:46] Red: So let's bring this back to Shiri's Scissor. And I'm going to call on you a little bit here, since I know you're in a deep learning class. The problem with Shiri's Scissor is that it has a tendency to divide people before they can be critical.
[01:06:09] Blue: Yes.
[01:06:10] Red: And so is there an answer in our… You know, so much of censorship on Facebook ends up being automated. I would assume that algorithms and machines are responsible for more bans than individuals marking somebody as bad.
[01:06:28] Blue: That’s my understanding. Yes. Is that most of it is automated.
[01:06:31] Red: So then, since we know they are investing heavily in machine learning technologies and building better robots, what’s the secret sauce to make Shiri’s Scissor not such a powerful thing? Is there one?
[01:06:50] Blue: Yeah. And now outside of the answer I already gave where I think. Which
[01:06:55] Red: was a good one, by the way.
[01:06:56] Blue: Which is that we need to make it easy for the alternative viewpoint to travel along. Right. Beyond that, I honestly don’t know. And I think this is why a lot of people are scared by this. Like on Twitter, I’ve seen people say we need to pass laws that require the algorithm writers to not favor extremism. That is such an extreme response; it’s a cure that is so much worse than the disease. It’s just crazy.
[01:07:26] Red: Well, it also shows a level of ignorance about how those algorithms are actually behaving.
[01:07:35] Blue: Yeah. And, you know, okay, are you familiar with GPT-2 and GPT-3? I am not. You’ve got to go look this up. This is an amazing thing. So OpenAI just released GPT-3. I hope I’m saying that right. I’m going to double-check that I’m saying it right.
[01:07:54] Red: OpenAI API. GPT-3 family with many speeds. It looks like you are.
[01:08:00] Blue: Okay. So GPT-3 is amazing. GPT-2 was amazing; three blows your socks off. Okay. With GPT-2, you could start writing something. You could start a news story with a single sentence, whatever, and then it would write the rest for you. It would sound like a real news story. So you could actually auto-generate fake news.
[01:08:24] Red: Oh, wow. Okay.
[01:08:26] Blue: So there was a lot of concern over this. So OpenAI, which is Elon Musk’s group. Oh,
[01:08:32] Red: okay. Okay.
[01:08:33] Blue: Elon Musk’s group. So they actually didn’t release the full model, because they were worried that it would get misused to generate fake news stories and things like that. Okay. So GPT-3 is the same idea. So basically what the algorithm does is look at the words that came before and then predict the next word. Okay. And then once it’s predicted that next word, you can take the new stream with that new word added and predict the next word, and then the next word, and then the next word. So it’s like a human writing stuff, right? I mean,
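The next-word loop Blue describes can be sketched in a few lines of code. This is a toy model only: a bigram lookup table stands in for the real neural network, and all the names and the vocabulary here are illustrative, not OpenAI's API.

```python
# Toy sketch of autoregressive generation: predict the next word from
# the words so far, append it, and repeat.

def predict_next(words, bigram_table):
    """Return the model's top pick for the word after the last word seen."""
    return bigram_table.get(words[-1], "<end>")

def generate(prompt, bigram_table, max_words=10):
    """Extend the prompt one predicted word at a time."""
    words = prompt.split()
    for _ in range(max_words):
        nxt = predict_next(words, bigram_table)
        if nxt == "<end>":
            break
        words.append(nxt)  # the new word becomes part of the context
    return " ".join(words)

# A toy "trained" table: each word maps to its most frequent successor.
table = {"the": "cat", "cat": "sat", "sat": "on", "on": "the"}
print(generate("the", table, max_words=5))  # → "the cat sat on the cat"
```

A real model like GPT-3 replaces the lookup table with a transformer that scores every possible next token given the whole context window, but the outer loop is the same shape.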
[01:09:10] Red: and I assume that this is the technology, or similar technology, to what Google’s actively using inside of Gmail, where you can just hit the space bar and it will fill in your entire email. I’ve written whole emails where I haven’t put any actual content in myself.
[01:09:29] Blue: It’s called the transformer. It’s not something I’m a great expert in, but since I am studying machine learning, I have to know about some of these things. And it’s amazing: with GPT-3, there are so many parameters in the neural network. I can’t remember how many, but it’s a very large number. It is so good at predicting the next word that it creates completely comprehensible things, right? And it keeps track of what it said before within a window of, I can’t remember exactly, 2,000 tokens or something like that, a token being roughly a word. It keeps track of what it said, and it will make sure that what it’s saying is consistent with what was said before. Wow. And it can fool you. It will write news stories, but they also released a bot that could tell you if a news story was written by a human or by GPT-2 or GPT-3. And I can’t tell. One was written by a human, one wasn’t; you don’t know which is which. I would read through them and I couldn’t tell which was human and which one wasn’t. Wow. And I would feed them to the bot, and the bot would immediately answer, and it would know which one was which. Seriously. Yeah.
[01:10:47] Red: Oh, interesting. You know, I’m reading this now, and it’s the crazy part of our world that this kind of stuff… you know, I’m a pretty big nerd, but just reading this, it says GPT-3 was trained on hundreds of billions of words and is capable of coding in CSS, JSX, Python, among others. And since GPT-3’s training data was all-encompassing, it does not require further training for distinct language tasks. Yes. Wow.
[01:11:17] Blue: It’s a one-shot or few-shot learner. So you can give an example: you can write an English sentence, colon, the sentence in, you know, Chinese, give it a few examples like that, and then you simply give it the English and it will automatically write the Chinese translation. It figures out just by context: oh, you’re looking for a translation, and it puts it there. It’s amazing what it can do. If you go on Twitter, people are keeping track of the cool examples. It can code. You can say, in English, I want to create a window with a button, blah, blah, blah, give it an example of code, and then you can just start giving it English sentences and it will code them. Wow.
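The few-shot trick Blue describes is just careful formatting of the input text. A sketch of what such a prompt might look like, assuming the "English: / Chinese:" labels he mentions (the helper function and example pairs are illustrative, not an actual OpenAI API call):

```python
# Build a few-shot translation prompt: a few labeled example pairs,
# then the new query. The model (not shown here) would be asked to
# continue the text after the final "Chinese:" label.

def build_few_shot_prompt(examples, query):
    lines = []
    for english, chinese in examples:
        lines.append(f"English: {english}")
        lines.append(f"Chinese: {chinese}")
    lines.append(f"English: {query}")
    lines.append("Chinese:")  # the model completes from here
    return "\n".join(lines)

examples = [("Hello", "你好"), ("Thank you", "谢谢")]
print(build_few_shot_prompt(examples, "Good morning"))
```

The point of the episode's discussion is that no retraining happens: the model infers the task purely from the pattern in the context window.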
[01:12:03] Red: So, okay, let’s wrap up for today. Because I have to go and read this, and I’m about to geek out on it, and you won’t hear from me for a little while, because this is really, really cool. And you know, it’s funny, because you see manifestations of technology in your day to day, you know, especially by really forward-thinking groups like Google and Apple; they’re always pushing the envelope. But to read about what these kinds of technologies are capable of is a little bit awe-inspiring and overwhelming. Yeah.
[01:12:42] Blue: By the way, there’s an app on your phone that, if you pay the money, will play Dungeons and Dragons with you using GPT-3, and it will be the Dungeon Master. So it’s one of the easier ways to go interact with it and try it out. I want to have access to it to see what it’ll write for me. Yeah. Let’s be honest, it still can’t pass the Turing test. It can write an article that I can’t tell was written by a computer, but if you interact with it, it’s not too hard to figure out, oh, it’s got no memory past a certain point. Sure. Sure. I’m not dealing with a human. But it’s an amazing leap forward. And the reason why it’s so good is because it’s been trained on, you know, the whole internet, right? So what it’s coming back with is sentence structures that some human wrote at some point. That’s why it’s so good. Right. Right. It’s looking at how words statistically string together, and it’s going to come back with information that was somewhere on the internet at some point, right? Well,
[01:13:49] Red: it’s because language, while being an aspect of intellect, is not what defines intellect. Right. And so having language and being able to use language is not the same thing as having an intellect and a mind to use it with. Which is why it doesn’t pass the Turing test. There’s not actually a mind per se. There’s no mind there. It’s completely stupid still. Right. Yeah. It’s a dumb generator.
[01:14:17] Blue: It will seem like a mind at first, because it’s spitting out stuff that was written by a person with a real mind. Right. Finding the right words to string together that came from real people. But beyond that, there’s nothing behind it. Right. It’s just simply predicting the next word, and then the next word, and then the next word. So anyhow, the point of all this aside was that we can write bots that fight bots. Right. It’s possible to write bots that will find the fake bots with no real person behind them and remove them. There’s always going to be a war of technology and ideas like that, but we can keep up with them. There are always solutions, because stuff written by GPT-3 still has no mind behind it, and so it’s still detectable, as long as you’ve got the right machine learning algorithm to detect it. So anyhow, we’ve gone on a bit long here. This turned out to be a fascinating subject and gave us a chance.
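One way to picture the "bots that fight bots" idea is a statistical tell: machine-generated text tends to pick the reference model's most predictable next word far more often than human text does. This is a toy illustration of that idea only, reusing a bigram table as the "reference model"; real detectors use a full language model, and none of the names here come from an actual tool.

```python
# Toy detection sketch: what fraction of word transitions match the
# reference model's top prediction? Machine output that greedily follows
# the model scores near 1.0; human text is more surprising.

def fraction_top_predicted(text, bigram_table):
    """Fraction of transitions where the next word is the model's top pick."""
    words = text.split()
    if len(words) < 2:
        return 0.0
    hits = sum(
        1 for a, b in zip(words, words[1:])
        if bigram_table.get(a) == b
    )
    return hits / (len(words) - 1)

table = {"the": "cat", "cat": "sat", "sat": "on", "on": "the"}
machine_like = "the cat sat on the cat"    # every step is the top pick
human_like = "the dog slept under a tree"  # rarely matches the model

print(fraction_top_predicted(machine_like, table))  # → 1.0
print(fraction_top_predicted(human_like, table))    # → 0.0
```

This matches Blue's point: because the generator has no mind behind it, its output carries statistical fingerprints that another algorithm can look for.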
[01:15:21] Red: The longer we do this, the longer each of our episodes
[01:15:24] Blue: seems to get. Yeah.
[01:15:28] Red: Okay. Well, this was a fantastic conversation. Thank you. Thanks.