Making Sense - Sam Harris - #467 — EA, AI, and the End of Work Aired: 2026-03-30 Duration: 29:31 === Defending Weird Ethical Ideas (15:06) === [00:00:06] Welcome to the Making Sense podcast. [00:00:08] This is Sam Harris. [00:00:10] Just a note to say that if you're hearing this, you're not currently on our subscriber feed and will only be hearing the first part of this conversation. [00:00:17] In order to access full episodes of the Making Sense podcast, you'll need to subscribe at samharris.org. [00:00:23] We don't run ads on the podcast, and therefore it's made possible entirely through the support of our subscribers. [00:00:29] So if you enjoy what we're doing here, please consider becoming one. [00:00:36] Will MacAskill, thanks for joining me again on the podcast. [00:00:39] It's great to be back on. [00:00:40] Yeah, I don't know how many times this is, but it's many. [00:00:43] Yeah, I think this is maybe number four on the main podcast. [00:00:45] Yeah, yeah, awesome. [00:00:46] Well, you are my go-to guy on so many ethical questions, with effective altruism being the frame under which we think about these things. [00:00:56] The 10th anniversary of your book, Doing Good Better, has just come upon us. [00:01:01] There's a new edition. [00:01:01] That's right, yeah. [00:01:03] And what's changed about the actual text? [00:01:06] So in the text, the statistics have been updated, and there's a new foreword, which is responding to some objections and reflecting a little bit on the last 10 years' growth in some of these ideas. [00:01:17] Well, the last 10 years have been eventful for EA. [00:01:20] I think the last time we spoke, we dealt with much of the controversy around Sam Bankman-Fried and FTX and all of that brain damage. [00:01:28] Is there more to say about that? [00:01:29] I mean, how is the EA movement slash community doing now? [00:01:34] And what has been the net effect of all of that? [00:01:37] Yeah, I think the main thing to say is that obviously that was a huge hit. [00:01:42] It was like a huge knockback. [00:01:44] But now, if you're looking at the influence of the ideas, you know, what really matters, then there's been an enormous restoration of growth. [00:01:55] So if you look at, for example, how much money is being moved to effective nonprofits, that figure, I mean, it actually grew just kind of steadily even through these periods of drama and cryptocurrency implosions and so on. [00:02:11] But over the last year, best guess is it grew about 50%, and it's closing in on $2 billion a year now. [00:02:17] And that's not just from a small number of large donors. [00:02:22] Actually, it's across large donors and small donors and so on. [00:02:26] Similarly, if you look at Giving What We Can members, so people who have pledged 10% of their income, that had year-on-year growth of about 20 or 30%. [00:02:35] Similarly, if you look at people engaging with effective altruism as a movement via conferences and so on, that is also growing really quite healthily. [00:02:44] So I think the overall story is that, yeah, that was a huge hit, but the underlying ideas are very good. [00:02:50] That means that maybe things are a little bit less in the public eye, but people are still being convinced of the importance of giving more and giving more effectively or using their career to do good. [00:03:01] And I think that's got momentum all of its own. [00:03:03] Right. [00:03:04] Well, let's talk about those pieces.
[00:03:05] I mean, for me, the biggest change in my life that I ascribe to effective altruism in general, and, I mean, your influence in particular, has been the pledge. [00:03:15] I mean, just deciding in advance to give a certain amount of money or a certain percentage of money, in this case, 10% of pre-tax earnings. [00:03:23] Just knowing that, on some level, that money isn't even mine when it comes in the door because it's been pre-committed to causes that seem important. [00:03:33] That's just an enormous kind of psychological change and just a kind of life benefit. [00:03:38] And it's just, you know, I've discussed this with you before, but I mean, it's just fun and virtuous and just, it just seems good all the way around. [00:03:45] The places where I remain uncertain whether EA has all the wisdom it should have to inform the conversation is around just what constitutes effectiveness. [00:03:55] I mean, how we think about that, like the list of causes that are on the menu, if you're EA, versus, you know, causes that are almost by definition not on the menu. [00:04:03] I think in your current thinking, you're arguing that we should expand the footprint of philanthropic targets beyond what is traditionally thought of as obvious EA causes. [00:04:13] Maybe let's just start there. [00:04:15] So when people think about effective altruism and its causes, what is the shortlist of causes that are obviously on the menu? [00:04:23] Yeah. [00:04:24] So firstly, thanks for bringing up your 10% pledge. [00:04:28] And one of the amazing things looking back at the last 10 years, including from our first podcast 10 years ago, was the impact that you taking the pledge and being public about it has had, where we're now up to 1,200 10% pledges that have come from people who follow this podcast and over $30 million of donations that have moved. [00:04:50] So we're talking about thousands of lives saved there. [00:04:53] Nice. [00:04:53] Which is pretty cool. [00:04:54] Yeah. [00:04:55] Hopefully we haven't undone those benefits with something else I've done on the podcast. [00:04:59] Fingers crossed. [00:05:00] But in terms of areas of focus, so a huge one is global health and development. [00:05:07] And still, that's where most of the philanthropic money that gets directed goes. [00:05:13] So just maybe I'll touch each of these as you send it over the net. [00:05:17] The obvious cynical retort to the wisdom of that is that people should be more concerned about suffering that's close to home. [00:05:26] You know, America is kind of retrenching now under the influence of not just the orange menace in the Oval Office, but lots of people who, if they weren't EA, were EA-adjacent in Silicon Valley. [00:05:41] I mean, all the tech bros who kind of went MAGA are, to my eye at least, building a kind of iron wall of cynicism against many of the values that you've just begun to articulate. [00:05:54] And one brick in this wall is certainly this notion that philanthropy doesn't really work. [00:06:00] Sending money to Africa is just kind of foolish. [00:06:04] I mean, you might be helping people, some identifiable people, but we've really DOGE'd all this so effectively now under the wisdom of Elon and his incel cult that we just saw that all of this, these are just all criminals who are wasting our money over there with USAID. [00:06:22] The money should be used at home and it should be used for the most part. [00:06:25] I mean, philanthropy is just a boondoggle.
[00:06:27] We should be just building businesses that are effective and solving problems that we want to solve. [00:06:31] And this seems to be the genius of Silicon Valley and its top people now. [00:06:36] So this first claim, that global health is such an obvious target and that the differential value of every dollar over there is so much more than it is here, that you can do so much more good with a single dollar in sub-Saharan Africa than you can in Menlo Park, that's the argument. [00:06:55] But what do you say in the face of the cynicism? [00:06:59] Yeah, I mean, I think this rise in cynicism is a terrible shame. [00:07:04] And in fact, I think it will probably result in hundreds of thousands or millions of lives lost. [00:07:10] So here are some things that are true. [00:07:12] Building companies. [00:07:13] Let me just, sorry, can I interrupt you again. [00:07:15] But let me just add that you might have seen the Lancet study that suggests that Elon's dismantling of USAID will cause 14 million people to die unnecessarily in the next five years from infectious disease, 4.5 million of whom are under the age of five. [00:07:32] Now, I mean, those numbers, I think, I mean, I would bet my life that the tech bros will be, you know, frankly incredulous when they hear those numbers, but let's discount them by a factor of 10. [00:07:42] I mean, let's say it's only 1.4 million people, 450,000 under the age of five, right? [00:07:47] It's still enough people. [00:07:49] Mind-boggling numbers. [00:07:50] Yeah, exactly. [00:07:51] And so it's true that building companies can be a great way of improving the world. [00:07:55] It's also true that much aid can be ineffective, even sometimes harmful. [00:08:00] That is just not true for the most effective global health and development interventions, which have saved hundreds of millions of lives over the course of the last 50 years. [00:08:10] Even the leading aid skeptics like Bill Easterly will proactively say, of course, I'm not talking about global health. [00:08:16] That has had enormous benefits. [00:08:18] And when you look at the most effective organizations, you can show with high-quality evidence, randomized controlled trials, that these save lives. [00:08:28] And in fact, the donations that have gone via GiveWell have saved hundreds of thousands, best guess, over 340,000 lives now. [00:08:36] This is at a cost of about $5,000 per life. [00:08:40] Whereas in the United States, a typical, or even a good, low cost to give someone one year of life is about $50,000. [00:08:51] So you're looking at kind of, in the United States, giving someone an extra month of life for $5,000, or saving a child's life for $5,000 in a poorer country. [00:09:00] Right. [00:09:01] Right. [00:09:01] Okay. [00:09:02] So global health, what's the next? [00:09:04] A next big one is animal welfare, in particular farm animal welfare, where every year about 90 billion animals are raised in factory farms and slaughtered. [00:09:15] And the conditions they live in are truly atrocious. [00:09:18] These are the worst-off animals in the world, such that, in fact, I think when those animals die, that's the best thing that happens to them, because their lives are full of such suffering. [00:09:27] And there are things we can do to have enormous impact.
[00:09:31] So organizations within the kind of broader effective altruism ecosystem championed and then funded corporate cage-free campaigns, going to big retailers and restaurant chains and advocating for them to cut out the use of eggs from caged hens. [00:09:47] And there were many pledges to do so. [00:09:49] 92% of those pledges have been fulfilled. [00:09:51] Now, every year in the United States alone, there are 3 billion chickens that would have been brought up in cage confinement that instead have at least somewhat significantly better lives. [00:10:03] And that was on the basis of really quite small amounts of money. [00:10:07] We're talking about tens of millions of dollars for these campaigns. [00:10:11] So if you're concerned about the well-being of non-human animals and what are just the worst-off creatures on the planet, well, the amount of impact you can have per life there is just absolutely enormous. [00:10:23] I think that factory farming is one of the worst atrocities that humanity is committing today. [00:10:28] And sadly, it's getting worse every year. [00:10:31] But we can make this extraordinarily large impact on it in absolute terms. [00:10:35] So yeah, so this is one area where perhaps my own cynicism creeps in. [00:10:38] I worry that any focus on suffering beyond human suffering risks confusing enough people so as to damage people's commitment to these principles. [00:10:49] So I mean, I'm not, there's zero defense of factory farming coming from me here, but when I see a philosopher who's clearly EA or EA-adjacent arguing on behalf of the welfare of shrimp and claiming that maybe the worst atrocity perpetrated by humans is all of the mistreatment of shrimp, because they exist in such numbers and live such terrible lives, one imagines, though, I don't really have strong intuitions about what it's like to be a shrimp. [00:11:18] I just feel like those kinds of arguments, and this is where the kind of vegan dogmatism can come in, like you can occasionally find a vegan who's arguing that we need to actually do something about the state of nature so as to protect the rabbits from the foxes, those kinds of arguments. [00:11:34] This begins to look like a reductio ad absurdum of the whole enterprise. [00:11:38] Like, okay, okay, you know, I feel like people then declare on some level ethical bankruptcy. [00:11:43] They say, like, okay, I'm just going to worry about me and my family and my friends and figure out what to do on the weekends, because these philosophers have gone crazy. [00:11:49] They're telling me that I have to worry about shrimp now. [00:11:51] And I worry that the same thing is now in the offing. [00:11:54] We'll talk about this when we talk about AI, when we start talking about the possible suffering of digital minds. [00:12:00] Now, I'm not actually prejudging the intellectual case you can make for the plausible suffering of shrimp or the likely suffering of some digital minds, if not now, then in the future. [00:12:12] But I just think if we're going to push the conversation to a place where we're asking people to care about how NVIDIA's latest chips feel, you know, in some configuration, it's going to be, again, whatever is true, remaining agnostic as to what is true or will be true once we build more powerful AI. [00:12:32] I mean, I just think even the Dalai Lama is not going to be able to shed a tear about digital minds.
[00:12:37] That's an epistemological boundary, but even if it's not epistemological, I think it's an emotional boundary for most people, at least for the longest time. [00:12:45] Okay, great. [00:12:45] So lots to unpack there. [00:12:48] And so I actually personally am not convinced by the shrimp argument. [00:12:55] But the thing I want to defend is people really taking ethical ideas, including quite weird-seeming ethical ideas, seriously and trying to reason them through for themselves, where perhaps, you know, there are some groups which should be just really thinking about PR and how ideas will be received and kind of trying to build some kind of broad coalition on that basis. [00:13:19] But I think some people just need to be trying to figure out just actually what is moral reality at the moment. [00:13:25] What might we be missing? [00:13:27] So there's this historical period that I got very obsessed with in writing my last book, which is the early Quakers, who led to the British abolitionist movement that actually led to the abolition of the slave trade and then of slaveholding globally, in fact, of chattel slavery. [00:13:45] And boy, those people were weird early on. [00:13:49] Like at the time, I mean, the idea that it would be immoral to own slaves was regarded as laughable. [00:13:55] Let alone, many of them were vegetarian, and that was just absurd. [00:13:59] What next? [00:13:59] They'll be saying that women should have the vote. [00:14:02] They should be pacifists, which they also were. [00:14:05] And looking back at ideas that we now take for granted, now think of as utterly morally common sense, like equal rights for women, or like the idea that it's utterly immoral to own slaves, let alone the completely absurd things like men having sex with men or something. [00:14:23] These are things you would have been mocked for, maybe even regarded as kind of repulsive, you know, opprobrious for suggesting. [00:14:29] You can also add to that the picture that was given to us by Descartes and others that, you know, animals as complex as dogs and apes could experience no pain, right? [00:14:39] So they would just vivisect dogs by nailing their feet to boards and then just performing surgery on them while alive. [00:14:45] Yeah, or even torturing cats for entertainment was a reasonably popular practice. [00:14:52] So we have this long track record of humanity getting morality wrong really quite badly. [00:14:59] And those people who pushed early on those changes were what I call moral weirdos. [00:15:07] And I think at least some groups need to be in the business of really trying to figure this out. === Pandemics from Basement Labs (03:13) === [00:15:13] And maybe that means that lots of people will say, okay, I'm into effective giving, but not effective altruism. [00:15:18] That comes with all this baggage. [00:15:20] And then I'm like, I don't really mind about labels. [00:15:22] I don't really mind it. [00:15:23] There may be, yeah, there are other people that can just take some parts and leave others. [00:15:28] But I think this kind of cauldron of ideas and intellectual and, like, moral exploration and seriousness, including when it comes to esoteric ideas like shrimp or like digital minds or perhaps something else, I think is something important and something I really would like to protect, in fact. [00:15:46] You're right. [00:15:47] Okay, so global health and animal welfare. [00:15:50] What else is canonical EA?
[00:15:53] Yeah, so another is pandemic preparedness, which, you know, again, in writing this book and thinking about the last 10 years since I was first on the podcast, you know, we had these more speculative areas like pandemic preparedness and AI. [00:16:07] Who knows if that's going to happen. [00:16:08] Exactly, on either count. [00:16:10] And, you know, that's something I'm personally particularly excited about because the things that we can do are just so slam dunk. [00:16:19] And even despite a pandemic that killed tens of millions of people and caused trillions of dollars of damages, you know, what sort of lessons did the world learn? [00:16:29] Maybe people became more skeptical of vaccines. [00:16:33] Yeah. [00:16:34] Yet there are things that could absorb, you know, take a lot of money, not enormous amounts globally, but hundreds of millions to billions. [00:16:41] We could have mask stockpiles. [00:16:43] We could build and deploy lighting that kind of sterilizes the air. [00:16:48] Often these things look good even if you're just concerned about the economic impacts of colds. [00:16:53] We could be monitoring wastewater for any sort of new viruses. [00:16:57] These protect against regular, normal pandemics like we've seen throughout history, but they also protect against novel pandemics, where we now have the ability to create and build new viruses, new pathogens. [00:17:12] At the moment, that ability is constrained to people with sufficient skills in a handful of labs, but the equipment needed to do so is not that expensive and it's getting cheaper all the time. [00:17:25] The knowledge needed to do so is becoming more and more democratized. [00:17:30] And this is something that we really want to get ahead of, because it's really not that unlikely to me, maybe I'd say one in three, that we will just see waves and waves of new pandemics as a result of people tinkering with viruses, you know, ultimately in their basement, and it leaking out. [00:17:50] So you're imagining just, like, endless lab leaks, or you're imagining that plus biological terrorism? [00:17:58] I'm thinking the most likely thing is lab leaks, where obviously there's this big debate about COVID, but let's just put that to the side. [00:18:05] Leaks of viruses from labs are just extremely common. [00:18:08] In fact, on average, I think, for every hundred person-years of people working in at least the higher-security labs, a virus leaks out. [00:18:16] So in the United Kingdom, the foot-and-mouth disease outbreak, which I remember as a kid, seeing just millions of, like, cow carcasses being burned, that was the result of a lab leak. === AI Risks and Hard Questions (10:28) === [00:18:27] Right. [00:18:27] Where the same lab, in fact, leaked the virus two weeks after getting reprimanded for leaking it before. [00:18:34] It's actually just very hard to contain viruses. [00:18:37] And so small mistakes can lead to leaks. [00:18:39] But yeah, it could be that. [00:18:40] But in even worse scenarios, yeah, bioterror attacks or just the threats of that. [00:18:46] So North Korea could have a lot more bargaining power on the world stage if it could credibly say, and is in fact reckless enough to say, well, I have these bioweapons. [00:18:58] We could release them. [00:18:58] Yes, we would suffer mass casualties too, but I'm the dictator. [00:19:04] I don't mind so much. [00:19:05] Okay. [00:19:05] Well, so what else is on the list beyond pandemics now? [00:19:09] Yeah.
[00:19:10] And then the biggest one, I would say, is a kind of final category. [00:19:14] Though there are many other categories too, including kind of scientific development, scientific innovation, and certain kinds of sensible pro-growth policymaking as well. [00:19:23] But it is issues around AI, where again, this has been a worry for many years, and was regarded as, you know, at the time of Doing Good Better, utterly sci-fi, you know, something for the 2100s perhaps, not something for now. [00:19:37] I still remember that. [00:19:38] Like when I gave my AI talk at TED, which was exactly 10 years ago, I remember, I mean, I just, as a kind of rhetorical device, just said, for argument's sake, let's say we're not going to get there for 50 years, right? [00:19:50] But I remember when I said that, I wasn't predicting that timeframe, but it seemed totally plausible to think it might take 50 years. [00:19:58] There's no one talking in terms of 50 years, as far as I can tell now. [00:20:01] Yeah, exactly. [00:20:02] And I was the same. [00:20:03] I just had these huge error bars on when AI could come. [00:20:07] And it's been a lot faster than I expected. [00:20:10] I think you sent me a link, or in one of your articles there was a link, to this stat that, as of like 2022, only something like 5% or 2% of AI researchers thought that AI would win the math Olympiad by, like, 2025. [00:20:33] I mean, it was just not, it was a total outlier position, but that's exactly what happened. [00:20:38] Yeah, exactly. [00:20:38] So both machine learning experts and forecasters have all been taken by surprise by just how fast progress has been, in particular on domains related to reasoning, so mathematics, coding, and so on. [00:20:53] And we're now in this very strange situation where actually the progress in AI capabilities is remarkably stable over time, which is to say stable exponential progress, where there are gains in how much computing power is just being thrown at AI for training, for experimentation, for inference. [00:21:17] There are gains in algorithmic efficiency. [00:21:20] So how much of a punch can you get from that computing power? [00:21:24] And then when you look at how AI performs, whether that's on benchmarks or in terms of the time horizon of human-equivalent tasks, so a task that might take a human three minutes or 30 minutes or three hours, that just occupies this relatively smooth exponential trend, where at the moment AI for software engineering can do tasks that a human would typically take a few hours to do. [00:21:52] That seems to be doubling something like every four to six months. [00:21:57] So, you know, think about it, in a year's time, maybe you've got AI that can do what it would take a human a week. [00:22:05] The year after that will be a month, and so on. [00:22:07] And so that really changes the dynamic of how to think about AI. [00:22:11] Whereas 10 years ago, it was much more based on kind of abstract arguments, how do agents behave and so on in general. [00:22:19] Now we can do experiments on AI systems to get a sense of how they act, what the risks are, what the potential benefits are. [00:22:27] And we can have a lot more confidence than we used to be able to have on when certain capabilities are coming. [00:22:35] And in particular, the really scary point in time is when the AI loop feeds back on itself and you are able to automate via AI the process of doing AI research itself.
[00:22:48] And there are good arguments, and my organization has done some kind of deep-dive investigation into this question, for thinking that you get this big leap forward in capability at that point in time. [00:22:58] All right, we're going to jump into AI in a minute. [00:23:00] I think that'll be the entire second half of our conversation. [00:23:03] But you used a phrase a moment ago that caught my attention. [00:23:06] You said something about pro-growth policy, and it just flagged for me that almost invariably our discussion about ethics, and our discussion about EA in particular, is kind of negatively valenced. [00:23:20] We're just talking about the risks that need to be mitigated, the suffering that needs to be alleviated. [00:23:25] But there's this other side of the question always, when you're talking about human flourishing, we also need to think about the positive goods that remain unactualized, and a failure to actualize them is also another cost, right? [00:23:40] And I think I've seen people argue that, in many respects, it could be a larger cost. [00:23:48] I mean, I think there's an asymmetry in our thinking and in our experience where suffering gets weighted more heavily, which is to say that the worst pains are worse than the best pleasures are good, right? [00:24:02] However you want to grammatically finish that sentence. [00:24:04] But I do think, I mean, when you think about what's possible for us on the good side of the ledger and how, you know, I mean, we just know nothing about the horizons of the good, really. [00:24:16] I mean, how good could human life be? [00:24:18] And what are the, you know, how can we weigh the opportunity costs of the present? [00:24:23] I mean, the things we're doing now that prevent us from actually exploring the deeper reaches of human flourishing and the ability to make a society that allows for us to spend time there, as opposed to just putting out fires and figuring out how not to kill one another. [00:24:41] That's also part of the calculus. [00:24:43] Absolutely. [00:24:44] So medicine often has this idea that it just wants to restore normal functioning. [00:24:50] And the point of medicine is, if someone is below normal, we'll get them back to normal. [00:24:55] But it doesn't care at all about going from normal to very good. [00:24:58] Yes, you're not going to be in the Olympics. [00:24:59] We just want to get you out of bed. [00:25:00] Yeah. [00:25:01] Except what counts as normal functioning obviously changes over time. [00:25:04] And it is true, I think, that in the world today, for present-day people, you can often have more of an impact by preventing suffering than by kind of enhancing people to have even more well-being. [00:25:19] But that's a contingent fact. [00:25:21] And I do think that future generations will look back at our lives today and think, oh my God, they missed out. [00:25:29] They didn't have, [00:25:30] and then insert goods like X and Y and Z, in the same way as, you know, take our lives and imagine a different society where no one experienced love. [00:25:39] And you'd think, well, how impoverished that society would be because of this absence of a good. [00:25:45] And so I do think that when we're looking towards the future, we should be trying to think, yeah, not merely just how can we eliminate obvious causes of suffering, but actually how can we perhaps have a life that's radically better than life today, where the best days in my life are hundreds of times better than a typical day.
[00:26:07] I would like more of that. [00:26:08] I would like more of that for everyone. [00:26:09] Right. [00:26:10] Yeah. [00:26:10] So I do think in those terms a lot when I look at the kinds of things that capture our attention, certainly in politics these days. I do view almost everything as an opportunity cost. [00:26:21] And so this actually brings me back to my initial question and concern around EA in specifying how we think about effectiveness. [00:26:29] I mean, so the E in EA is effectiveness, effective altruism. [00:26:34] And insofar as there's a bias toward the quantifiable and a bias toward hitting the targets that we just described, things like global health or pandemic risk, et cetera, or just existential risk more generally. [00:26:48] I worry that we're sort of blind to obvious problems where the intervention would be hard to quantify, certainly in advance, but they're blocking everything. [00:27:00] I mean, like, if you could imagine a project that would have, you know, this doesn't even sound like an expensive one, but if we could have done something in advance to have inoculated the tech bro slash manosphere podcasters against the charms of Trump and Trumpism, right? [00:27:15] I mean, it's like Joe Rogan and the All-In podcast and Theo Von and all these guys who put Trump on for hours at a stretch and didn't ask him a single skeptical question and just normalized his idiocy and dishonesty for just a vast audience. [00:27:30] I mean, I think it's not too much to think that, you know, since he only won by, whatever, 1.5%, that was among the many things that perhaps over-determined his victory. [00:27:40] That was one of those things. [00:27:41] And there you wouldn't have had that, and then you just look at what an opportunity cost our current politics is, and, you know, America's current retreat from the world, our disavowal of values. [00:27:50] I mean, all the values we're talking about in this podcast, America as a country has completely disavowed them. [00:27:56] I mean, we don't care what other nations do. [00:27:58] We certainly don't care about climate change. [00:27:59] I mean, there might be five people on earth now who have the bandwidth to think about climate change. [00:28:04] We don't care about nuclear proliferation. [00:28:06] And I think our retreat from the world is going to usher in a new era of that. [00:28:10] So if you're talking about existential risk, that seems like a bad thing. [00:28:14] I mentioned Elon and his DOGE-ing. [00:28:18] If the Lancet is even remotely right about how many people will needlessly die as a result of that alone. [00:28:23] I mean, that's, again, that was all downstream of a bunch of dummies talking to Trump in ways that could have easily been prevented if they only knew to prevent them. [00:28:32] But, like, that's not a project, I mean, it's not the most realistic thing that you would target with philanthropy, but it is the kind of thing that, you know, if you could have gotten your hands around that lever, that's arguably more important than anything that's on GiveWell's website right now, right? [00:28:48] Given the opportunity costs we're looking at in the unraveling of American values and American politics. === Shaping Charitable Resources (00:35) === [00:28:55] So I just, I'm wondering how you think about being charitable and allocating resources in the context of problems that often have that shape.
[00:29:06] Just like, you know, the shape of what social media is doing to us and our capacity to cooperate to solve any problems. [00:29:14] If you'd like to continue listening to this conversation, you'll need to subscribe at samharris.org. [00:29:20] Once you do, you'll get access to all full-length episodes of the Making Sense podcast. [00:29:25] The Making Sense podcast is ad-free and relies entirely on listener support. [00:29:30] And you can subscribe now at