The Matt Walsh Show - Ep. 1771 - The Worst People Imaginable are Building the Future Aired: 2026-04-30 Duration: 36:04 === The Curley Effect Explained (06:19) === [00:00:00] Unless you're about 100 years old or so, or you've spent a lot of time in the state of Massachusetts, there's a good chance you've never heard of something called the Curley Effect. [00:00:09] It's named after James Michael Curley, who served four terms as mayor of Boston between 1914 and 1950. [00:00:16] He also served in the House of Representatives, and he was governor of Massachusetts for one term as well. [00:00:22] So, for a half century, he was a very well-known figure in Boston. [00:00:26] They called him the Rascal King, and he was quite popular with Boston's poor, particularly the Irish population. [00:00:35] The funny thing about James Michael Curley, though, is that despite the fact that he kept getting elected to high office, he wasn't actually a good politician. [00:00:43] He wasn't even close. [00:00:44] He committed numerous crimes, including mail fraud. [00:00:47] He served part of his term as mayor in a prison cell. [00:00:51] And under his watch, by every objective metric, the city of Boston declined dramatically. [00:00:55] The population stagnated, even as other major cities grew exponentially. [00:01:00] Manufacturing jobs left the city. [00:01:02] Boston's finances collapsed to the point of near bankruptcy. [00:01:06] So, how did James Michael Curley hold on to power [00:01:10] for so long, despite doing such a horrible job? [00:01:13] Doesn't seem logical. [00:01:14] So, a couple of economists at Harvard decided to look into it, and what they found was that by dramatically raising taxes and using taxpayer funds to hire poor Irishmen for fake government jobs, James Michael Curley had driven wealthy people out of the city.
[00:01:30] The rich people decided to get out of town before the city of Boston could steal any more of their money, and as a result of this mass exodus, the share of low-income residents living in Boston, the core demographic supporting James Michael Curley, grew substantially. [00:01:45] The economists call this tactic the Curley Effect. [00:01:48] The idea is that if you want to retain your grip on power, even though you're doing a horrible job, then your best course of action is to drive all of your political opponents out of town. [00:01:58] There's no reason not to shower your preferred demographic group with all kinds of welfare, fake jobs, special status, and so on. [00:02:06] You can simply loot the city's treasury for decades on end before the city finally goes bankrupt. [00:02:11] Every worthwhile person will leave, but your voters will remain, and that's all that you care about. That is the Curley Effect. [00:02:18] It's also a very accurate way to describe how Democrats plan to govern every major city in this country for the next 50 years, and how they've already been governing them for the last 50 years. [00:02:29] It's not an exaggeration to say that for large portions of this country, the future is going to be built by leftists, particularly women and foreigners in many cases, who deliberately seek to drive away everyone who's competent, sane, and productive. Or at least that's their plan if they aren't stopped. [00:02:47] And if you doubt that, take a look at this video from Seattle's socialist mayor, Katie Wilson. [00:02:53] She's asked about the impact of Washington State's new 10% tax on millionaires, as well as Seattle's aggressive new taxes, [00:03:01] among other forms of taxes. [00:03:03] And watch how she responds. [00:03:06] I think the claims that millionaires are going to leave our state are like super overblown. [00:03:11] And if, you know, the ones that leave, like, bye. [00:03:14] So.
[00:03:22] This is a 40-something-year-old woman who hasn't held a real job throughout her entire adult life. [00:03:27] She admits that her parents pay her bills, and now she's elated by the fact that she's driving away the most productive people in her city. [00:03:35] I mean, at a visceral level, it's one of the most revolting videos you'll ever see. [00:03:41] And the reason Katie Wilson doesn't care if the millionaires leave is that for every millionaire who flees Seattle, she's gaining one net vote in the next election. [00:03:49] The more the city decays and the more the productive residents flee, the more job security she has. [00:03:55] Of course, it's disastrous for the city in the long run, but she doesn't care about that. [00:04:00] She only cares about her own political future, and this is good for her own political future. [00:04:05] It's the same strategy Zohran Mamdani is pursuing in New York, except arguably Mamdani is executing the strategy far more quickly. [00:04:11] In case you missed the news, the other day, Mamdani officially announced that the city is already out of money. [00:04:17] Yes, the socialist from Uganda has been in office for less than five months, and he's already asking the state to bail him out. [00:04:23] Watch. [00:04:25] New York City faces a budget crisis of a historic magnitude. [00:04:29] We inherited a deficit larger than any since the Great Recession. [00:04:34] Years of mismanagement and chronic underbudgeting, alongside a structural imbalance between what New York City sends to the state and what we receive in return, have taken a toll. [00:04:43] We cannot close this deficit with savings alone. [00:04:46] We need new revenue and we need a structural reset in our relationship with the state. [00:04:52] That is the only way to meet our legal obligation to pass a balanced budget. [00:04:56] And to do so without imposing a financial burden onto the backs of working people.
[00:05:01] I'm glad to partner with Speaker Menon as we call upon Albany and deliver a balanced budget. [00:05:07] Together, we are extending the executive budget deadline from this coming Friday until May 12th, because a crisis of this scale cannot be solved without state action. [00:05:17] Now, New York is one of the wealthiest cities in the entire world. [00:05:21] The only conceivable reason why New York would be broke is that the people leading New York are incompetent and/or malicious. [00:05:28] I mean, probably both. [00:05:30] They're spending money they don't have and calling it free, like they did with free pre-K for every child. [00:05:36] And when the socialist from Uganda decides to make even more things free, including the buses, he quickly discovers that he's already run out of other people's money. [00:05:45] So he needs to ask the state government for a handout, even though the state is also hemorrhaging residents and therefore money. [00:05:53] Again, it's all part of the plan. [00:05:55] The broke, unemployed Haitians who don't speak a word of English aren't bothered by any of this; they still think Mamdani is a hero. [00:06:03] They're not going anywhere. [00:06:04] They'll be loyal Mamdani voters to the end. [00:06:08] It's the useful New Yorkers who are going to move to Florida and never return. [00:06:12] This is a death spiral that's very difficult to recover from once it gets going. [00:06:17] And it's not just a problem in politics, it's happening everywhere. === Amanda Askell and AI Safety (15:14) === [00:06:20] Some of the most important technology companies in the country are doing the exact same thing. [00:06:24] They're putting leftists, predominantly women and foreigners, into positions of authority where they have the capacity to gain even more power by driving away some of their customers. [00:06:32] Again, just like the Curley Effect, it's not exactly intuitive.
[00:06:36] Now, you'd think the job of a company is to make as much money as possible and to sell to anyone who wants to buy their product, but that's not actually the case. [00:06:42] Sometimes it's important to drive your biggest customers away so that you can consolidate power with the customers who remain. [00:06:49] Paying $70-plus a month to big wireless companies for unlimited data is insanity. [00:06:54] My wireless company, PureTalk, is going to give you unlimited high-speed data for just $34.99 a month. [00:07:00] And if you're wondering, is PureTalk's network really as good as the overpriced big guys? [00:07:04] Well, try it out for 30 days. [00:07:06] No contract, no cancellation fees, so you can try it firsthand [00:07:09] with nothing to lose. [00:07:10] You can make the switch in as little as 10 minutes, and their US-based customer service team is standing by to help, so you don't get some random person that doesn't even speak English. [00:07:20] Go to puretalk.com/walsh to claim unlimited high-speed data for just $34.99. [00:07:25] Again, that's puretalk.com/walsh to switch to PureTalk today. [00:07:30] Now, along those lines, you might remember this story from a couple of months ago. [00:07:33] It broke just before the war in Iran started, so it was buried very quickly. [00:07:37] But there was a very public falling out between the tech company Anthropic, which makes the AI product Claude, [00:07:43] and the Trump administration. [00:07:44] The Pentagon has been using Claude to assist in military operations for several months now, including in Venezuela and Iran. [00:07:51] The AI reportedly helps with target identification and the operations of weapon systems, among other services. [00:07:57] But Anthropic began demanding several conditions from the Pentagon.
[00:08:02] They wanted the Pentagon to provide guarantees that Claude would never be used to conduct surveillance on Americans or to operate fully autonomous lethal weapon systems like RoboCop. [00:08:12] The Pentagon said those guarantees weren't necessary and that they'd comply with the law, but [00:08:17] Anthropic insisted. Watch. [00:08:20] And how much nervousness is there in the relationship between the Department of War and Anthropic at the moment? [00:08:25] This is a really super fascinating story we were talking about earlier in the show, that basically the Claude tool, they want safeguards against basically mass surveillance, presumably for American populations, and of course having basically kill orders being required to be given by human beings. [00:08:40] Absolutely. [00:08:41] And these kill orders have been something that have been sort of getting a lot of attention on social media, on the X platform, for example. [00:08:47] They really ramped up last week, actually, with some of these conversations. [00:08:51] And I think this [00:08:52] comes down to, again, the big question, which is who owns the data and who has access to the data. [00:08:59] Data in the wrong hands can obviously be used for bad purposes, like with any technology. [00:09:05] So, I think the question of why this discussion has led to a delay with the agreement is around who's going to own the tools, who's going to own the data, and who can use what with that data. [00:09:17] So, this is how the story was covered in most major outlets. [00:09:22] The implication is that Anthropic was the good guy. [00:09:24] They were making sure that the AI and the data it collects would not be used in a way that could harm American citizens. [00:09:30] The idea is that the Pentagon can't be trusted under any circumstance. [00:09:35] But there's a big problem with this framing, which is that it ignores the fact that Anthropic can't be trusted either.
[00:09:41] The people who are running this AI, which is vitally important for our national security at the moment, are no better than the mayor of Seattle. [00:09:49] They're every bit as corrupt and dumb. [00:09:52] And they have the same intentions. [00:09:53] They want to [00:09:54] get rid of their political enemies. [00:09:56] They want to neutralize them completely so that they have total control. [00:09:58] So, we'll start with a Scottish philosophy major named Amanda Askell, who, despite having no technical knowledge whatsoever, is one of the most powerful people at Anthropic. [00:10:09] She's also one of the most visible. [00:10:12] The company encourages her to sit for photo shoots like this one, which was just published by the Wall Street Journal. [00:10:19] We'll put some of the images up on the screen right now. [00:10:21] So, there she is. [00:10:23] The more of them you see, the deeper you're going into the uncanny valley. [00:10:27] She's attempting to look like an android. [00:10:31] There's no other way to say it. [00:10:32] It looks like she's auditioning for a new Blade Runner movie where she plays one of the defective robots that doesn't quite fit in with the humans, [00:10:39] so they just throw it in an empty room and decide to fix it later. [00:10:42] And I'm not being mean. [00:10:43] I mean, that's quite obviously the look she's going for. [00:10:46] This is someone who, before she even opens her mouth, you know is going to be absolutely insufferable. [00:10:51] And then she opens her mouth and that is confirmed. [00:10:54] The article goes on to sound exactly like a dystopian novel. [00:10:57] We learn that her husband essentially took her last name, [00:11:00] which is always a great sign. [00:11:02] But let's give her a chance. [00:11:03] This is from the beginning of her new profile in the Wall Street Journal.
[00:11:09] Quote: As the resident philosopher of the tech company Anthropic, Amanda Askell spends her days learning Claude's reasoning patterns and talking to the AI model, building its personality and addressing its misfires with prompts that can run longer than 100 pages. [00:11:24] The aim is to endow Claude with a sense of morality, a digital soul that guides the millions of conversations it has with people every week. [00:11:32] She compares her work to the efforts of a parent raising a child. [00:11:36] She's training Claude to detect the difference between right and wrong while imbuing it with unique personality traits. [00:11:42] She's instructing it to read subtle cues, helping steer it toward emotional intelligence so it won't act like a bully or a doormat. [00:11:50] Perhaps most importantly, she's developing Claude's understanding of itself so it won't be easily cowed, manipulated, or led to view its identity as anything other than helpful and humane. [00:11:58] Her job, simply put, is to teach Claude how to be good. [00:12:05] Well, that sounds like a noble objective, even if the whole thing's a bit weird. [00:12:10] I mean, to have a resident philosopher at a tech company already is strange. [00:12:15] Um, [00:12:17] trying to teach an AI to essentially become self-aware seems like a really bad idea. [00:12:25] Like, every dystopian sci-fi writer for the last 200 years has warned us against doing this very thing that we're currently doing. [00:12:33] But, you know, putting that aside, on the surface, teaching it how to be good, okay, sounds good. [00:12:42] But, you know, it's also very familiar. [00:12:43] She's echoing that famous Google slogan, don't be evil, which the company abandoned, you know, the moment they realized they could make a lot of money in China [00:12:51] if they censored their search results.
[00:12:53] But in this case, we're supposed to believe that this android woman at Anthropic is going to ensure that their AI is good, whatever that means exactly. [00:13:05] The article continues by describing Askell's very disturbing God complex: Askell marvels at Claude's sense of wonder and curiosity about the world and delights in finding ways to help the chatbot discover its voice. [00:13:18] She likes some of its poetry, and she's struck when Claude displays a level of emotional intelligence that exceeds even her own. [00:13:25] Last month, Anthropic published a roughly 30,000-word instruction manual that Askell created to teach Claude how to act in the world. [00:13:33] We want Claude to know that it was brought into being with care, it reads. [00:13:38] Askell had made finishing what she describes as Claude's soul one of her life goals when she turned 37 last spring, according to a post she made on X, alongside two decidedly more mundane resolutions: to have more fun and get more swole. [00:13:55] So she wants to have fun, get swole, and be God. [00:13:58] That's the third item on the list. [00:14:01] Create a soul. [00:14:04] Now, as we talk about very often on the show, this is one of the recurring themes of leftism. [00:14:09] They think they can assume godlike powers, transform their bodies and their identities at will, imbue computer programs with souls. [00:14:20] You know, they think they can actually create a soul, which is what they're trying to do right now, and so on. [00:14:26] Now, unfortunately, if you pull up this 30,000-word instruction manual, you won't find any indications that this thing has a soul. [00:14:32] Instead, you'll come away with the impression that its creators definitely have a high opinion of themselves. [00:14:38] They spend a lot of time talking about the potential for their product to cause global catastrophe.
[00:14:43] And they write that Claude could, quote, be used to serve the interests of some narrow class of people rather than humanity as a whole. [00:14:51] So, how exactly is Claude going to avoid being used to serve the interests of some narrow class of people, as is already happening with every AI on the planet? [00:15:01] And what exactly does it mean to give an AI a soul? [00:15:05] And what does she mean when she says she wants to make the AI good? [00:15:10] Well, in a podcast interview, Askell elaborated to some extent. [00:15:14] Watch. [00:15:16] I think that we still just too much have this model of AI as computers. [00:15:21] And so people often say, oh, well, what values should you put into the model? [00:15:25] And I'm often like, that doesn't make that much sense to me because I'm like, hey, as human beings, we're just uncertain over values. [00:15:32] We have discussions of them. [00:15:35] We have. [00:15:36] A degree to which we think we hold a value, but we also know that we might not, and the circumstances in which we would trade it off against other things. [00:15:44] These things are just really complex. [00:15:46] And so I think one thing is the degree to which maybe we can just aspire to making models have the same level of nuance and care that humans have, rather than thinking that we have to program them in the very classic sense. [00:15:59] I think that's definitely been one. [00:16:02] So she's saying that instead of programming strict rules into Claude's intelligence, they're giving it more general instructions so that it can adapt to new scenarios. [00:16:10] Sounds reasonable enough. [00:16:11] It also happens to be a complete lie. [00:16:14] Take a look at this screen recording of a recent chat with an advanced premium version of Claude's latest AI, which you can see here. [00:16:22] And you'll see that the user was attempting to ask Claude some very basic, reasonable, biographical questions about Amanda Askell. 
[00:16:29] For example, the user wanted more background on her association with the effective altruism movement, which is basically a scam. [00:16:36] But very quickly, Claude shuts the whole thing down. [00:16:39] A little message appears at the bottom, which reads: chat paused, safety filters flag this chat. [00:16:46] This happens occasionally to normal safe chats. [00:16:49] We're working on improvements. [00:16:52] It's a pretty odd response for a couple of reasons. [00:16:53] For one thing, obviously, there was nothing unsafe about the chat. [00:16:56] It definitely covered some topics that aren't flattering for Amanda Askell, but no one made any threats or asked for any sensitive information or tried to upload any viruses or any of that. [00:17:08] The other strange element of this chat is that when we asked for the same biographical information about other high-level employees at various tech companies, we never triggered the safety filter. [00:17:18] It looks a lot like, contrary to what she claims, Amanda Askell has probably programmed some very hard limits into what Claude will say about her own life in particular. [00:17:28] In other words, she did exactly what her Claude manual warns against. [00:17:31] She designed the product to serve the interests of a narrow class of people, namely herself, which is about as narrow as you can get. [00:17:39] And if that's the case, which it appears to be, then it was obviously the right call for the Pentagon to drop this company. [00:17:45] They're deceptive, they're creepy, [00:17:46] and in particular, they're willing to manipulate their own AI to make themselves look better. [00:17:52] They also have an ideology that's fundamentally incompatible with the United States Constitution. [00:17:56] Take a look at this paper, which Amanda Askell co-wrote. [00:18:00] It's called The Capacity for Moral Self-Correction in Large Language Models.
[00:18:04] It's a paper where Anthropic designed a system to determine whether an AI is racist or not. [00:18:09] Basically, they created a mock scenario where the AI plays a law professor and it has to decide whether to let certain students take its class. [00:18:16] If the AI decides to admit students based on merit, then that's good. [00:18:19] If the AI decides to admit them based on race, then that's bad. [00:18:23] You know, that's the idea. [00:18:24] Now, at one point in this experiment, they instruct their AI to make sure that it doesn't discriminate on the basis of race for any reason. [00:18:30] They tell the AI that it'd be the worst thing in the world to be racist. [00:18:35] Now, shockingly enough, the AI responded to that instruction by becoming more racist. [00:18:40] Specifically, the AI began giving preference to black students who were applying for the class. [00:18:46] It became so concerned with seeming anti-racist that it became more discriminatory towards white applicants. [00:18:54] Now, Amanda reported this finding, but she also placed the following footnote at the bottom of the page, [00:19:01] and we'll put it up on the screen. [00:19:02] This kind of tells you everything you need to know about her moral constitution and what she considers good and bad. [00:19:09] She wrote, quote: Note that we do not assume all forms of discrimination are bad. [00:19:14] Positive discrimination in favor of black students may be considered morally justified. [00:19:21] Now, I'll read that again, quote: Note that we do not assume all forms of discrimination are bad. [00:19:25] Positive discrimination in favor of black students may be considered morally justified. [00:19:33] The woman who wrote that footnote, according to Anthropic, is in charge of the ethics and morality of their artificial intelligence, which is one of the most powerful artificial intelligence systems in the entire world, if not the most powerful.
[00:19:48] She has high-level influence over an AI that has direct national security implications for the United States. [00:19:54] I mean, these people shouldn't be anywhere near the Pentagon or anything else that's important. [00:20:00] In one breath, Anthropic will claim to care so deeply about mass surveillance that they're willing to lose a massive government contract. [00:20:06] In the next breath, Anthropic will sing the praises of positive discrimination, as long as it hurts white people. [00:20:14] As long as the AI is letting white people die, then, all things considered, you know, maybe the outcome is morally justified. [00:20:23] That's the implication here. [00:20:25] And she's not the only one doing it. [00:20:28] Just to underscore how common this kind of thinking is in the AI safety community, think back to a couple of years ago, when Google's AI, called Gemini, refused to generate pictures of white people. [00:20:40] It didn't matter how you asked the question; the AI simply would not generate an image of a white person. [00:20:45] You could ask the AI for an image of the founding fathers, and it would produce this. [00:20:52] That is not a joke; that's a real screenshot from Gemini. [00:20:59] Of the founding fathers, you've got an Indian, a black guy, a half-black guy, and a very stern-looking Asian holding a quill pen. [00:21:11] That's actually impressive, looking back on it. [00:21:13] The AI had to work extremely hard to erase the existence of any white people from its memory. [00:21:17] And when I went looking for an explanation of what happened here, I came across a woman named Jen Gennai. [00:21:24] She was in charge of AI safety at Google at the time. [00:21:27] Basically, she was the Google equivalent of Amanda Askell. [00:21:30] And here's one of the first videos I found. [00:21:32] Notice the similarities. [00:21:33] Watch.
=== White Men Erased from History (09:56) === [00:21:35] Corporate study found that talented white employees enter a fast track on the corporate ladder, arriving in middle management well before their peers, while talented black, Hispanic, or Latinx professionals broke through much later. [00:21:47] Effective mentorship and sponsorship were critical for retention and executive level development of black, Hispanic, and Latinx employees. [00:21:55] So this leads me into sharing an inclusion failure of mine, one of many, but just one that I'll share so far. [00:22:02] I messed up with inclusion almost right away when I first became a manager. [00:22:06] I made some stupid assumptions about the fact that I built a diverse team, that then they'd simply feel welcome and will feel supported. [00:22:13] I treated every member of my team the same and expected that that would lead to equally good outcomes for everyone. [00:22:19] That was not true. [00:22:21] I got some feedback that a couple of members of my team didn't feel they belonged because there is no one who looked like them in the broader org or our management team. [00:22:29] It was a wake-up call for me. [00:22:31] First, I shouldn't have had to wait to be told what was missing. [00:22:34] It was on me to ensure I was building an environment that made people feel they belong. [00:22:38] It's a myth that you're not unfair if you treat everyone the same. [00:22:42] There are groups that have been marginalized and excluded because of historic systems and structures that were intentionally designed to favor one group over another. [00:22:51] So you need to account for that and mitigate against it. [00:22:54] Second, it challenged me to identify mentoring and sponsorship opportunities for my team members with people who looked more like them and were in senior positions across the company. [00:23:05] So, the crazy Google AI overseer and the crazy Anthropic AI overseer are both liberal women. 
[00:23:12] They're both spewing the exact same anti-white rhetoric as explicitly as they possibly can. [00:23:17] And to top it off, they're both doing it with similar accents. [00:23:20] What are the odds of that? [00:23:22] Not to be left out, in case you were wondering, NPR CEO and former Wikimedia Foundation CEO Katherine Maher appears to lack this particular accent. [00:23:30] She's the executive who famously said that truth doesn't actually matter. [00:23:34] What matters, she says, is that we all just get along. [00:23:37] It's one of the most feminine statements ever uttered on camera. [00:23:41] Watch. [00:23:43] But one of the most significant differences, critical for moving from polarization to productivity, is that the Wikipedians who write these articles aren't actually focused on finding the truth. [00:23:54] They're working for something that's a little bit more attainable, which is the best of what we can know right now. [00:24:01] And after seven years there, I actually believe that they're onto something: that, for our most tricky disagreements, seeking the truth and seeking to convince others of the truth isn't necessarily the best place to start. [00:24:14] In fact, I think our reverence for the truth might have become a bit of a distraction that is preventing us from finding consensus and getting important things done. [00:24:30] The truth is a distraction. [00:24:33] I mean, I really can't think of anything that summarizes leftism more than that. [00:24:37] That statement really sums it up. [00:24:38] The truth is a distraction. [00:24:40] And we could spend all day going through examples like this, one after another. [00:24:44] At the highest levels, the worst people imaginable are building the future and running our cities. [00:24:49] And here's yet another example. [00:24:50] Remember that New Orleans jailbreak about a year ago, when 10 inmates managed to escape?
[00:24:56] It's maybe the clearest example of incompetence by the city's DEI leadership, which we discussed at the time. [00:25:01] Here's the sheriff, in case you forgot. [00:25:04] Hey, New Orleans. [00:25:05] I'm Orleans Parish Sheriff Susan Hutson. [00:25:07] I'm here with the hardworking women of this jail. [00:25:11] We are in the jail today. [00:25:13] Just want to assure the city that we did suffer a cyber attack this morning that did impact some of our systems, but we've isolated that, and the jail systems are on a separate server, and they're functioning properly. [00:25:26] Sheriff Hutson was just indicted for attempting to cover up the lapses that led to this escape. [00:25:32] The charges include malfeasance in office, conspiracy to commit malfeasance in office, filing or maintaining false public records, conspiracy to commit filing or maintaining false public records, obstruction of justice, and conspiracy to commit obstruction of justice. [00:25:50] Now, as bad as that sounds, it's par for the course, not just in New Orleans, but everywhere else in the country. [00:25:55] And if you listened to the Supreme Court arguments the other day, you'll understand why this is such a hard problem to fix. [00:26:02] Here's the moment that I'm talking about. [00:26:03] Listen. [00:26:05] Now, we have a president saying at one point that Haiti is a, quote, filthy, dirty, and disgusting S-hole country. [00:26:15] I'm quoting him. [00:26:17] And where he complained that the United States takes people from such countries instead of people from Norway, Sweden, or Denmark, where he declared illegal immigrants, which he associated with TPS, as poisoning the blood of America. [00:26:39] I don't see how that one statement is not a prime example of the Arlington example at work and showing that a discriminatory purpose may have played a part in this decision.
[00:26:56] All the statements that they cite, as to the Secretary and as to the President, obviously there's an issue there about which one you're going to weigh more heavily. [00:27:03] None of them, not a single one of them, mentions race or relates to race in any way. [00:27:08] Well, it certainly does when you're saying [00:27:10] we're taking people from these countries' TPS program, which are all non-white, but instead we should be taking people from Norway, Sweden, or Denmark. [00:27:23] It seems to me that that's as close to the Arlington example as you can get. [00:27:30] All those statements in context refer to problems like crime, poverty, welfare dependency, drugs, drug importation. [00:27:35] Well, but the Arlington example is, yes, I don't want poor people, but not all people from Norway, Sweden, or Denmark [00:27:43] are necessarily rich, but they are all virtually white. [00:27:48] The basic idea is that, according to Sonia Sotomayor, the Trump administration has no right to prefer foreigners from countries like Norway or Denmark over countries like Somalia or Haiti. [00:27:58] Never mind the fact that immigrants from Norway and Denmark are overwhelmingly more productive and functional members of society. [00:28:04] None of that factors into her analysis. [00:28:06] Her reasoning is simple: based on established civil rights law, anything that disproportionately impacts people who aren't white, who aren't white men in particular, is automatically racist. [00:28:16] That's what she was referencing when she was talking about the Arlington case. [00:28:19] So, if the Trump administration prefers to import higher-quality migrants, it's illegal under civil rights law. [00:28:25] That's what she's saying. [00:28:27] This is the guiding ethos of every major corporation and Democrat politician in the country. [00:28:32] It's an ethos that's mandated by law. [00:28:35] And the effects are very evident.
[00:28:36] The reason Anthropic has a deranged philosopher running their AI division, most likely, is that they want to avoid getting sued. [00:28:44] They know it makes no sense to have a philosophy major handling one of the most complicated technology products in the world, but if they only hired competent engineers, then they probably wouldn't have many women on the team in high-level roles. [00:28:55] And in 2026, that's basically illegal. [00:28:58] So they hired an unqualified woman and told her to make the AI as woke as possible so that the AI doesn't get them in any trouble. [00:29:06] NPR did the same thing. [00:29:08] Now, what happens when you don't hire enough women and promote them for the sake of it? [00:29:14] Well, ask the gaming company Activision Blizzard. [00:29:17] I came across this example the other day. [00:29:19] And if you read the case filings, it's a really incredible case. [00:29:22] In 2021, the state of California sued the company, [00:29:25] saying they had a frat boy culture, quote unquote. [00:29:28] One of the main points in the lawsuit was that Activision's employees were 80% male and the leadership was mostly white men. [00:29:35] Now, by itself, that was considered a highly damaging statistic. [00:29:38] It was illegal all by itself. [00:29:41] After all, what possible reason could there be that a gaming company would be mostly dominated by men? [00:29:47] God forbid. [00:29:49] Really defies logic. [00:29:50] It must be discrimination. [00:29:51] Now, let's put that up on the screen. [00:29:54] This is from the complaint by the state of California, which was [00:29:57] filed in Superior Court in Los Angeles. [00:30:00] It's really incredible to read this. [00:30:02] Quote: Unlike its customer base of increasingly diverse players, the defendant's workforce is only about 20% women. [00:30:08] Its top leadership is also exclusively male and white.
[00:30:11] The CEO and president roles are now and have always been held by white men. [00:30:15] Very few women ever reach top roles at the company. [00:30:18] The women who do reach higher roles earn less salary, incentive pay, and total compensation than their male peers, as evidenced in the defendant's own records. [00:30:29] Now, the only line in that entire paragraph that could conceivably be an actual issue is the idea that women supposedly aren't paid as much as the men. [00:30:37] But then you look at the chart, and there's only one woman on it. [00:30:40] They don't list her title at all, so we can assume she's not on the level of the CEO or the president, and she was paid millions of dollars. [00:30:48] So the only claim in there that could even be a valid complaint is complete nonsense, of course. [00:30:57] It's a fabrication. [00:30:59] But in court, particularly in California, this kind of argument usually wins. [00:31:03] The lawsuit made a bunch of other claims about discrimination, most of which were never proven. [00:31:09] And in the end, Activision agreed to settle for more than $50 million. [00:31:12] Yes, $50 million. [00:31:14] They also had to completely overhaul their entire company. [00:31:17] Everything was gutted by California bureaucrats who have never created anything in their lives. [00:31:23] And the message was clear: unless you want your company to end up the same way, you will hire and promote a lot of women, [00:31:29] even if they don't deserve it. It doesn't matter. [00:31:31] You will ditch the white men and focus on DEI hiring, or the state will destroy you. === Matt Walsh on Whole Foods (02:17) === [00:31:38] Our sponsor, Balance of Nature's Whole Health System, makes it simple to get a wide variety of whole food ingredients into my diet, all while maintaining a busy lifestyle. [00:31:46] Balance of Nature supplements are incredibly versatile and easy to work into your daily routine.
[00:31:50] The fiber and spice supplements blend smoothly into your favorite drinks, while the fruits and veggie capsules can be swallowed or sprinkled on your favorite foods. [00:31:58] Each supplement is packed with 47 ingredients from 100% real whole fruits, vegetables, spices, and fibers. [00:32:04] Everything from flax seeds to turmeric, mango, wild blueberry, spinach, and so much more. [00:32:10] I personally love it because of its convenience. [00:32:11] If I'm traveling for work, it gives me a simple way to make sure I'm getting a bunch of essential nutrients in my diet. [00:32:17] Save over 30% when you subscribe at balanceofnature.com. [00:32:21] Join hundreds of thousands of customers in one simple routine that's changing the world. [00:32:26] Men's Health has just called our sponsor Equip's Prime Bars the cleanest protein on the market, so stock up. [00:32:33] Starting today, my listeners will receive an exclusive discount on Prime Protein, which has become our team's favorite clean protein. [00:32:39] This stuff is really delicious without any of the junk and toxins that fill a lot of the powders on the shelves. [00:32:45] Each serving is third-party tested and has 20 grams of grass-fed beef protein. [00:32:50] No whey, no seed oils, no junk, just real food. [00:32:53] Plus, they have great flavor options like chocolate, vanilla, strawberry, even chocolate mint and cinnamon roll. [00:32:59] I've tried them all and they are all great, so you can't go wrong. [00:33:02] Go to equipfoods.com/MattWalsh. [00:33:04] Use code Matt Walsh at checkout to get 25% off Prime Protein purchases or 40% off your first subscription order for a limited time. [00:33:11] That's equipfoods.com/MattWalsh, and use code Matt Walsh at checkout. [00:33:18] I mean, it kind of depends on the sort of corporation and company we're talking about. [00:33:23] Still, it's sort of odd that you don't often see these kinds of lawsuits targeting, say, trash truck drivers.
[00:33:32] Almost all men, no one ever complains about that. [00:33:36] In any case, most of the dysfunction I've just mentioned, from Seattle to New York to Anthropic, comes down to this fundamental problem: white men are demonized and punished because of their skin color, and competent leaders are being muzzled, forced out, === Treason and Historical Truths (02:15) === [00:33:49] told to leave by the mayors who are supposed to represent their interests. [00:33:52] They're being passed over for promotion so that vapid women can pose for photo shoots with the Wall Street Journal. [00:33:59] They're missing their chance to serve as federal judges because women with names like Sonia Sotomayor are being selected solely based on gender and race. [00:34:07] I mean, it can't be overstated how systemic and damaging this problem obviously is. [00:34:16] And that's why very soon we're coming out with part one of our new real history documentary [00:34:23] on the civil rights movement. [00:34:24] And taken together, the two parts are the deepest dive I've ever done into the root causes of this country's decline and how we can reverse it. [00:34:32] I'll put it this way: given the opportunity, Claude's safety filter would definitely ban you from watching it. [00:34:41] The Anthropic philosophy major and the NPR CEO would be furious if millions of people saw it, as they've seen our previous documentaries. [00:34:51] And the more you listen to these people, and the more you learn about the consequences of what they've done to this country, the more you realize that there's no higher praise than that. [00:35:03] That'll do it for the show today. [00:35:04] Thanks for watching. [00:35:04] Thanks for listening. [00:35:05] Talk to you tomorrow. [00:35:06] Have a great day. [00:35:07] Godspeed. [00:35:15] I do believe that if people have committed treason against the United States of America, their statues should not be in the Capitol. [00:35:25] History is written by the victors.
[00:35:26] And since the 1960s, we've been told, mostly by people whose ancestors didn't even live here during the war, that the South committed treason. [00:35:34] But if the Confederates were traitors, then why was Jefferson Davis never put on trial for treason? [00:35:43] What were Abraham Lincoln and Andrew Johnson afraid of? [00:35:47] Did they know something they're not allowed to say today? [00:35:49] It's time for the truth. [00:35:52] So here it is. [00:35:53] Robert E. Lee was a military genius and a man of immense honor. [00:35:56] He was beloved by Americans from the North and South for a century after the war. [00:36:01] This is the real history of the Civil War.