#450 — More From Sam: Resolutions, Conspiracies, Demonology, and the Fate of the World
In this latest episode of the More From Sam series, Sam and Jaron talk about current events. They discuss Sam's 2025 New Year's resolutions, the benefits of meditation, Sam's conversation with Ross Douthat, AI risks, Tucker Carlson's midnight encounter with a demon, the fracturing on the right, antisemitism on the right and the left, the Bondi Beach massacre, the Epstein files, accusations made by Joe Rogan and Bret Weinstein, and the collapse of shared reality, which Sam argues is the central problem driving many of these crises. If the Making Sense podcast logo in your player is BLACK, you can SUBSCRIBE to gain access to all full-length episodes at samharris.org/subscribe.
Just a note to say that if you're hearing this, you're not currently on our subscriber feed and will only be hearing the first part of this conversation.
In order to access full episodes of the Making Sense podcast, you'll need to subscribe at SamHarris.org.
We don't run ads on the podcast, and therefore it's made possible entirely through the support of our subscribers.
So if you enjoy what we're doing here, please consider becoming one.
Okay, we're back with another episode of More from Sam, where we get more from you, Sam.
You guys, good luck.
Thank you.
How are you?
I'm good.
How are you doing?
I'm good.
Good to see you.
Before we get into things, I just want to quickly remind everyone that you will be giving talks in a number of cities in 2026.
Los Angeles, Dallas, Austin, Portland, Vancouver, Palm Beach, Toronto, Washington, D.C., and New York City.
Last time we warned Portland to start buying tickets or we were going to cancel, and that seemed to work.
So good news, the show is still happening.
If you want to see Sam Live, this is the time to do it.
And it's a really great talk.
Republicans and Democrats alike will be thrilled.
Both hate it.
Yeah.
And annoyed, just like this podcast.
Okay, onto our first topic.
Last year, you made a New Year's resolution where you essentially planned to live like you were dying.
I want to know how that went for you and if you're making any adjustments to that plan for 2026 or is that just for the rest of your life?
No, no, I think I'd rephrase it: live as though it were my last year.
Yeah.
I mean, it was a great frame to put over the year.
The year didn't quite become what I expected because seven days into it, half the city burned down and I had to flee my house, which, happily, didn't burn, but we still haven't been back.
So, you know, the year got jiggered around by real estate concerns way more than I was anticipating.
And I wouldn't expect to spend the last year of my life on that.
Did that help you actually sharpen up your goal a little bit more?
It played a little havoc with it.
I mean, in terms of actually the content of what I was paying attention to, it was more, you know, more terrestrial and practical than I would have hoped for.
But it was, you know, I mean, I would give myself maybe a B on this aspiration.
I mean, I think I did have my priorities pretty straight.
I mean, that's always the filter for me with respect to the podcast.
If I could really do this, I think there would be less politics, or the cut would be higher.
I mean, it would really be sort of emergency politics more than just, okay, here's another thing in the news that I can't help but respond to.
Well, you're going to hate this episode then.
Yeah, it's going to be, this is probably not going to be a moment of me walking my talk.
But no, it was good.
I would do the same.
I think I'll do the same year after year because it's really, you know, obviously one doesn't know how many one has left.
And yeah, I mean, no regrets that that was my resolution.
All right, good.
I think we're all addicted to social media and our phones at this point, and our attention has become so fragmented that it's almost impossible to fully be anywhere anymore.
Is there a quick pitch for the Waking Up app you can throw in here?
Well, I mean, you know, we often demonize the smartphone as the locus of all of our fragmentation and collective derangement.
I think that's true, but obviously there are different uses of a smartphone, and this is certainly one that I can stand behind as being just categorically different from the other stuff that's driving us crazy.
And I view most, surely most of the audio I consume on a phone, which is a lot, as some version of, you know, very productive and unifying of my attention.
I mean, I listen to good books and good conversations and obviously listening to guided meditations and meditation instruction is, I think, the ultimate example of a good use of attention on this device, which is just not at all analogous to having your attention shattered on the regular by social media engagement and kind of the dopaminergic checking in to the response, to the response, to the response.
So yeah, I think waking up is, I mean, I love what we've created over there.
And I think it's, it's a great use of the device, you know.
Can you give us some examples of what kinds of results one could expect once they develop this skill, which I honestly think could take somebody about a week to really begin to see something from?
But, you know, what kind of results can one expect?
Yeah, I love these questions.
These are exactly the way I come at it.
Hope the irony was detectable there.
I mean, so there are kind of two sides to this topic, right?
There's the conventional mainstream pitch for mindfulness that, you know, here, here's a list of benefits you can expect from the practice.
And most of what is claimed there, I think, is true.
I mean, some of it is, you know, poorly researched, or at least the research is kind of thin and some of it might wash out.
But the thing that is obviously true is that virtually everyone, until they learn to practice mindfulness in some form, is spending their life perpetually distracted.
And they're so distracted, they're not even aware of that, right?
They're thinking every moment of their lives.
Their moment-to-moment experience of being a self in the world is being filtered through this discursive, conceptually framed conversation they're having with themselves.
And again, the conversation is so incessant and so loud that it achieves this kind of white noise status, where people aren't even aware that they're distracted.
Half the people hearing me say this who have never tried to meditate will be thinking, what the hell is he talking about?
Right.
But it's that voice in the mind, the "what the hell is he talking about?", that feels like you.
That's the endless conversation that is defining your experience moment to moment.
And it's the medium on which all of your dissatisfaction and frustration and regret and annoyance and everything that makes you an asshole in the world, it is the medium that transmits that and makes it actionable emotionally and behaviorally moment to moment.
It's the capture by thought unwittingly, right?
The impulse that you can't see creep up from behind that just becomes you, that becomes your, you know, the next thing you say, the next thing you reach for, the next thing you aspire to become.
I mean, it's just, again, we're, we're living in a dreamscape and virtually nobody notices.
So when you try to practice meditation for the first time, really any form of meditation, but, you know, mindfulness in particular, initially you're given this very basic exercise of just try to pay attention to the breath or try to pay attention to sounds, you know, moment by moment.
And every time you notice you're lost in thought, come back to the feeling of breathing or the sounds in your environment or the sense of your body resting in space or just some sensory experience that you can use to try to build some concentration on.
And, you know, if you persist in it long enough to notice how hard that is, and how vulnerable your attention is in every present moment to the next thought arising unrecognized and just seeming to become you, it is a kind of revelation, albeit a negative one.
I mean, what you recognize in yourself is this pervasive incapacity to pay attention to anything for more than a few moments at a time without being distracted.
And it's very hard.
But whenever you think about meditation from that moment forward, maybe you think it's just too much of a hassle.
You know, you're too restless.
It's too hard.
You don't have a talent for it.
You know, you're going to move on to other things and you kind of bounce off the project.
Even if you fall into that condition, I think it should be very hard to argue to yourself that this is somehow psychologically optimal to not be able to pay attention to something for more than a few moments at a time and to be helplessly buffeted by the winds of your own distraction.
I had a good thought actually about that.
I wanted to run by you.
There's this idea of thoughts being like cards a dealer keeps dealing you.
And imagine sitting at the card table, the blackjack table, just waiting for the cards.
No, I don't want to play that card.
You can wait for a blackjack every time if you know how to wait for the cards.
You don't have to play the card that's dealt.
You can say, I don't want that one.
Don't want that one.
And everyone at the table is looking at you like, this guy's cheating.
And it's kind of like cheating, but if you have that skill, it's kind of cool.
Oh, yeah.
The cards are thoughts, right?
Thoughts taken seriously.
So, you know, the voice in your mind says, oh, I can't believe how you fucked that up.
Right.
And so, okay, so how much, so what does that, what does that convey, right?
Like how much shame or regret or what do you do with that?
You don't have to do anything with it.
You tell the dealer to give you another card.
Yeah.
I mean, there's this image in Tibetan Buddhism of, you know, ultimately thoughts are like thieves entering an empty house, right?
There's nothing for them to steal.
Right.
So that, like, just imagine what that's like.
Imagine the scene of, you know, thieves come, you know, storming into a house that has nothing in it, right?
I mean, their presence there has no implications, right?
That there's nothing for them to do.
Thoughts recognized are just these mysterious mental objects, right?
I mean, it really is just a bit of language or a bit of imagery.
And it is genuinely mysterious that, you know, this next thought can so fully commandeer your physiology and your whole sense of being in the world.
I mean, the next thought taken seriously could define the next decade of your life if you can't see some reason not to take it seriously.
So meditation on some level is a way of relaxing the hold that thoughts automatically have on us.
Yeah.
As we move into 2026, I hope people will develop a practice or give this a look.
Check out Waking Up.
And if not Waking Up, there are plenty of other apps where you can learn to meditate, because it really is a basic skill.
If you don't go super deep, that's fine, but you should have a basic understanding and just explore that for yourself.
And it's life-changing.
So if anyone's focused on nutrition and physical fitness and sleep, you're missing this aspect.
If you care about those three but not about your mind, you're missing this.
All right.
On to our next topic.
I really liked your podcast with Ross Douthat, that recent one.
He's very likable and very smart.
And I think it was the first time I liked an argument for why God, a perfect God, would put a bad idea like slavery in the Bible, where he basically said that God knew it was bad, but wanted to allow room for Christians to evolve and to sort of put their fingerprints on it over time.
Now, that was a good argument.
No, he said, you know, if that's not good for you.
No, I wasn't persuaded by it, but I actually liked it.
And I thought, oh, it's a well-made argument, this idea that God wanted to make room for Christians to evolve and to...
Well, it's well-made in the sense that it's totally unfalsifiable.
It could absorb anything, any possible contents of the Bible, you know, even mathematical errors, right?
I mean, you know, I think pi is calculated somewhere in the Bible as three, you know, full stop.
I think I have that right.
I mean, I think it's said that the circumference of a circle with a diameter of one is three, or something like that.
But, okay, so God can't do math.
Well, he's waiting for us to get better at math.
I mean, he's leaving room for our own genius to explore that topic.
It's just idiotic.
I mean, I guess maybe I'm in an uncharitable mood around this, but at a certain point, we have to run out of patience with these dodges.
I just don't know how you spend your life circling that specific attractor again and again, year after year.
I mean, it's just so obviously wrong and foolish.
I mean, we'd have no patience for it if this weren't grandfathered in by this tradition.
Again, you just look at the crazy stuff, you know, it's Scientology.
I mean, you watch Alex Gibney's documentary on Scientology and look at that whole project and how embarrassing it was.
And look at those exit interviews, right?
And it's on some level more sophisticated than what we're talking about when you talk about any defense of Christianity.
Yeah.
And you guys also talked about AI.
And for some reason, I find myself much less scared now than I was earlier this year simply because I believe the problem is going to be so big that it will get addressed quickly.
Where are you on that now?
How are you feeling?
I mean, I know at some point earlier, we were both thinking, how is everybody not talking about this every second of the day?
You still there?
Well, I continue to be impressed by how hard it is to maintain one's concern even when one hasn't found a rational argument that should give you comfort, right?
I mean, it is a sort of unique threat.
I mean, this was at least the starting point of the talk I gave on it now, nearly 10 years ago, that TED talk in 2016, I think.
It's just, there's something entertaining about it.
It's fun to think about the downside.
I mean, there's something kind of sexy and interesting about it; it's not like a coming plague or, you know, an asteroid impact or something like that, which is just scary and depressing, right?
So even if you think the risk is undiminished, I mean, even if you can't figure out how to be more comfortable with the probabilities or the possible negative outcomes.
It's just, there's something very elusive about it.
I mean, part of it is that the upside is also very compelling, right?
I mean, it's unlike the threat of nuclear war or any other self-imposed technological risk, though I guess synthetic biology is slightly different.
It's a little bit more like AI because there's obviously some real upside to our breakthroughs in biology.
But I mean, it's just, there's something so, you know, in success, it looks like it could be amazing, you know, leaving aside total unemployment and the social challenge of trying to grapple with that.
But yeah, you know, I think it's frankly terrifying when you look at the arms race condition we're in and the moral quality, or lack thereof, of the people who are making the decisions for us.
I mean, you can count on two hands the number of people making these decisions, totally unregulated by a kleptocratic government at this point, to speak just of the United States.
Don't you believe we have to be somewhat unregulated at this point in order to win this arms race?
Well, I just think if we had a morally sane and competent government, I think we would be forcing some sort of global approach to this global problem, right?
I mean, everything would be bent toward that.
I'm not talking about stopping development of AI.
I don't think that's in the cards under any regime.
But clearly we need to get out of this arms race condition.
I mean, we're merely in an arms race.
That's it.
Again, I don't consider myself very close to the behind-the-scenes details here.
So I just, I don't know how scary it actually is.
But from what people say in public, it should be alarming that the people who are doing this work and the people who are closest to it, when asked, you know, what probability they give to our destroying ourselves with this technology, I think to a man, they all say something terrifying.
I mean, they're all like, oh, maybe 20%.
I mean, it's just not, it's like you're not hearing people like Sam Altman say, oh, no, no, no, we've got this totally in hand and we're being, you know, really safe and scrupulous.
People have just misunderstood this technology.
There's no self-improving thing on the horizon that could conceivably get away from us.
No, no, no.
Like this has just been way overblown.
I would put the risk at one in a million.
Maybe Yann LeCun is still somebody who talks that way, but virtually nobody in a position to make decisions says anything like that.
So just imagine if all the people developing nuclear technology at the time were saying, okay, we're probably, we're running a 20% chance of destroying everything, and yet we can't stop.
We're doing this as fast as possible.
And now you're going to witness a multi-trillion-dollar buildout that is going to basically subsume every other economic and environmental concern for us.
I mean, where did climate change go?
You know, you've got all these people building the most resource-intensive technology anyone ever dreamed of.
And, you know, even some of the people who were really focused on climate change in the past, like Elon, there's none of that now, right?
It's just, you know, we're going to use all the water and all the electricity and let's go.
Well, what's the alternative?
Weren't we given a 10% chance of total destruction by nuclear war when we created the bomb?
I mean, the actual Manhattan Project principals thought there was a very small chance that we might.
I mean, they did some final calculations and put it, I think, at a truly small chance, like, you know, one in 10,000 or something.
It was not 10%.
But they were still placing bets on whether we would ignite the atmosphere and destroy everything.
And that wasn't 10%?
I thought that was 10%.
No, no, I mean, they didn't think it was 10% at the time.
But imagine if they had.
Imagine if they had been willing to pull the trigger at Alamogordo on the Trinity test, thinking there was a 10% chance that they were going to ignite the atmosphere and kill all life on Earth.
That would have been irresponsible.
That would have been pretty shocking if they had done that.
The fact that any of them thought it was within the realm of possibility is still a little alarming.
And that's always been held out as a telling moment.
Whatever the real probabilities were, you can hold that aside.
Within their own minds, the scientists showed their capacity to roll the dice with the future of the species in a way that should have kind of shocked everyone.
But here, we're in a completely different game, right?
I mean, again, whatever the actual probabilities are aside, there's no way to know that.
You have the people who are doing this work, funding this work, making the decisions on a daily basis, the people closest to the engineering understanding that should govern one's sense of the probabilities here.
They're telling us, yeah, this is, we're sort of in coin toss land, you know, or at least dice roll land.
You know, a single die may come up, you know, six and we cancel the future, right?
That's insane.
And yet somehow we're not even in a position to have an emotional response to it.
Okay, but what's the alternative?
Don't you want Sam Altman or the U.S. to get whatever that is first?
No, I think we do need to navigate this growing superpower contest with China.
We need whatever economic levers and military levers we can get in hand.
But there should be some possible version of a carrot, which is we figure out how to solve truly global problems jointly, right?
I mean, so it's just, we need an American president who could have conceivably unified the world's democracies on this and other points, right?
Like all of our European allies and Australia and Canada could be on the same page vis-a-vis Russia and the war in Ukraine, vis-a-vis China and the AI arms race.
We need leverage, right?
We need to be able to hit the stop button somehow.
We need a system of alliances where we can actually credibly threaten China with a plunge back into poverty when we take back all of our supply chain, you know, and we can't just be America, right?
It has to be every other democracy that cares about the fate of civilization.
We just don't, we're not in a political environment where we can collaborate globally in those kinds of ways.
And the reason why we're not is it's not exclusively Trump, but it's, you know, it's Trump to an extraordinary degree.
We're not in a position to absorb this kind of shock to our system.
You know, it's like you're going to have Tucker Carlson talking about the rise of the machines.
Is that really the information diet half the country is going to have, Joe Rogan and Tucker Carlson chopping it up for four hours and making sense of this?
We're not ready for the political and economic pivot that will be required in success, right?
And in perfect success, again, without any downside risk.
I mean, if all of the catastrophic concerns are pure fiction, the alignment problem was never a problem.
And the malicious use of this powerful technology never becomes significant, right?
You don't have cyberterrorism that's been AI-weaponized, you know, China doesn't turn out the lights on us, et cetera, right?
None of that happens, right?
It's all just the good use of this technology, just curing cancer all day long, you know, curing cancer, playing chess, making things.
You've got AI Hollywood, you know, Netflix gets to make movies that cost $14 to make, but they're the perfect, you know, wide-release summer confections, right, that everyone wants to see because they've been algorithmically tested on a billion brains.
And, you know, the next Tom Cruise looks 32 for the rest of his life.
And it's all, you know, it's perfect.
And yet, where's the economy around all of that?
Who edited it?
Who shot it?
Who, you know, who catered it?
None of that happened.
It was made for $14 and on a laptop.
You don't think government intervenes quickly and says, okay, wait, we've got to reset?
I mean, it would be amazing if we had the foresight to make the changes politically and economically that we would have to make to spread the wealth around.
That would be amazing, right?
But I just see nothing in our history or in the present that makes it seem like we're capable of doing that.
I mean, what you would need, you would need people to be able to agree about the questions of fundamental value, right?
Like what is good for people?
What kind of lives do we want to live?
We can't even agree about the ethics of universal basic income.
I mean, I think there's some question as to whether UBI is even the right remedy for a situation like this.
There are certainly debatable points.
I think the research on UBI, the actual practical and psychological and social effects of UBI, that research is somewhat ambiguous still.
Although I would point out it's being conducted in a context where these changes haven't been forced on culture across the board, right?
But we can't agree about it.
The moment you mention UBI, half the people will say, this is obviously what we're going to need, some version of this, and it should have no moral stigma attached to it.
And then the other half will say, no, this is a catastrophe because people need to work.
The people derive their meaning from work and they should derive their meaning from work.
And I can't imagine any other system of norms or expectations where you wouldn't get your meaning from work.
And there are, you know, good Christian anchors to a lot of that thinking.
If we can't even have that conversation and agree about what we should do, again, under conditions of perfect success, where it's analogous to the creator of the universe just handing us the ultimate labor-saving device, right?
just become as wealthy as you want to be.
Here's the hardware and here's the software that cancels the need for human drudgery until the end of the world and will produce every scientific insight of which nature admits, right?
Here is an intelligence explosion in your hands.
Go have at it.
In our current condition, we seem totally incapable of absorbing that.
That would be the kind of the final irony.
It's like, give us the best thing that could ever be invented and we will turn it into the worst thing that has ever been invented.
It's like, the Chinese would nuke us if they knew we had it now, that kind of thing.
That's how politically combustible we are as a species.
Like if we knew that China had it, right?
If China had the AI that could cancel all of our efforts because it's perfect, right?
And it's perfectly, again, perfectly aligned to their use, right?
Like it's not going to get away from them, but they can decide to just enjoy a winner-take-all spoils situation.
They've got there first and they've proven it.
What would we do?
Given the level of antagonism in our world, given how zero-sum we are, what would we do?
Would we just bomb them?
I mean, I think that's, I think I would put the chances at 50-50, right?
I mean, like, we're so far from even having a conversation about how to have a global civilization that works because, I mean, and this, again, this comes back to our own domestic politics and how it is so much worse than the alternatives.
I mean, say what you want about Biden and what could have been true under a Kamala Harris presidency.
Lots of awful things I could also whinge about.
But the one thing we wouldn't have is this plunge into America-first know-nothingism, this retreat from the world, the sense that even our allies are contemptible, right?
I mean, we don't even like our allies, you know, and we sort of like our enemies.
If we can get our enemies to pay up, we kind of like them more than our allies because they have our ethics.
We don't want to hold ourselves to any standard of decency globally.
So we actually are more comfortable doing deals with other countries that don't hold themselves to any standard of decency, right?
Like, yes, of course, we can do a deal with the UAE or Qatar.
It's just baksheesh, right?
I mean, like, so that change, the stepping back from alliances and imagining that we can go it alone on some level and the fact that half of our society is celebrating that and is just trying to figure out, you know, whether they can suffer the Jews in their midst as they become more and more selfish and more and more oblivious to what's happening in the rest of the world.
I mean, that is far worse than the alternative, right?
I mean, again, Kamala Harris was not a good candidate.
The wokeness is as terrible as everyone on the right has said it is.
I would never minimize any of that.
But we would have continued being a country that is looking to solve global problems in a sane way under President Harris.
There's no question about that.
Well, we know you'll have job security because you're going to have to help us think through all this continually on the podcast.
And then maybe, on the other side, you're going to have to help us figure out what to do with all this time, with the meditation app.
So let's shift gears now to Tucker and his demons.
You're determined to keep me in a good mood.
Yeah, no, I just saw, I think this is kind of funny.
I saw him tell this story.
I know he's told it before, about him being clawed by demons while asleep in bed with his wife and four dogs.
And again, this was accepted by.
If you'd like to continue listening to this conversation, you'll need to subscribe at samharris.org.
Once you do, you'll get access to all full-length episodes of the Making Sense podcast.
The Making Sense podcast is ad-free and relies entirely on listener support.