Tucker Carlson - Ep. 89 Bryan Johnson is a very smart, very rich, very well-meaning man who wants to live forever. That sounds like a terrible idea. This is one of the most interesting debates we’ve ever had.
Bryan Johnson, tech billionaire and "anti-aging" pioneer, claims his $2M/year Project Blueprint—vegan diet, plasma swaps, and DNA tweaks—slowed his aging to 7.6 months per year, but Tucker Carlson counters with skepticism about the moral void of engineering immortality. Johnson envisions a "don't-die nation" aligned with AI, while Carlson warns of spiritual forces beyond science shaping human fate, leaving unresolved whether humanity's future hinges on tech or unseen forces. Their debate exposes the clash between measurable progress and existential uncertainty. [Automatically generated summary]
So it is the most basic truth of biology that the second you reach maturity, you exit adolescence and become an adult, you start dying, you degrade, and then you expire.
This is called the aging process, and you maybe first start noticing it in your 40s, long after it's already begun, because there are visible symptoms.
You get wrinkly and bald, and if you can't stay away from the pizza, you get a little fat, and that's kind of inevitable, or we've been told it's inevitable.
But a man called Bryan Johnson has decided it's not necessarily inevitable.
He was a very large figure in the tech world, made a ton of dough, and then started thinking about his body and the nature of life and the future of human existence, and has become pretty famous recently for saying that he has, in a way, begun to reverse the aging process and maybe even cracked the code that limits the human lifespan.
But watch him explain.
unidentified
It's hard to believe tech millionaire Bryan Johnson is 46 years old.
But no matter his chronological age, he's striving for the biological age of an 18-year-old.
His team of 30 doctors utilize all the latest tech.
The plan is rigorous.
At $2 million a year, a life like this is out of reach for almost everyone.
Yeah, what we tried to do with the diet is we said, if you take the frame that every calorie you put into your body has to fight for its life, what would that be?
And so we went through, we referenced all the scientific literature.
We said, what has the best evidence?
And then we put them to my body.
Then we measure.
So if a given thing is supposed to do a thing in the body, and it does, it stays.
If not, it's out.
And so what I told you is where every calorie is precisely designed.
And these are population-level studies.
This is not just me.
This could be applicable to you as well.
And so, yeah, we are very particular about what goes into my body, and not a single calorie goes in that's not backed by science.
How interesting that you bring this up because right now the team and I are talking about plasma infusions and that some of the studies are looking at the effects on Alzheimer's and Parkinson's and other kinds of things like that.
And so I said, if you're interested, I'm happy to give you a liter of my plasma.
And then my 17-year-old son was there and he's like, hey, if you guys are doing it, I'm in.
Great.
We'll make this a family affair.
And so my son- Sharing the plasma.
Yeah.
So my son gave me a liter of plasma.
I gave my dad a liter of plasma.
And the data showed that in me, there was no effect.
That my biomarkers didn't change.
But in my dad, his speed of aging reduced by 25 years.
So he was aging at the rate of a 71-year-old.
And after the plasma infusion, continued for six months, it lessened to that of a 45-year-old.
I've taken the opposite approach, and I'm not claiming it's superior to yours, but my appendix swelled up and burst, and I had it, of course, taken out.
I never asked, like, what the appendix is because I didn't really want to know.
I don't know what a spleen is.
Like, I've really made an effort to not focus on those things because it seems like a lot of self-focus, and it seems like a short trip from there to, say, narcissism, which is obviously death.
And I had this burning desire to be useful to the world.
I didn't know what or how.
And so I thought, I'll make a whole bunch of money by the age of 30. And then when I'm 30 years old and have a whole bunch of money, I'll decide what to do then.
And so I've been searching for this mission my entire life.
And upon doing that, I organized dinners with my smartest friends.
And I said, let's imagine we're existing in 2050. And this was 2016 at the time.
And the world is amazing.
What did we do in 2016 that would make that possible?
And then I listened very intently to everybody's responses.
And then I put them in a box.
And I made a rule that I can't do anything inside that box.
I have to do something outside that box.
And what nobody was working on was trying to solve death.
That it was always inconceivable that you could try to legitimately conquer death.
But what's interesting, I mean, now, and now, okay, so we're at the philosophical part of this.
And my friend who recommended this interview said, you know, he's really interesting on the practical stuff, the serum transfers and all that, but he's much more interesting on the philosophical questions.
And I think you will be.
So let me ask.
You grew up in a world, a Mormon world, that believed and taught you that it had already solved the question of death.
So you've abandoned that worldview, or at least you're agnostic, I guess, would be the word on that worldview.
Right.
Not to get too personal, but I'm just interested, because a lot of people would say, You know, religious people, Christians, would say, well, we've already solved death.
I just wonder if, as someone who grew up in a religious community, if part of you, maybe deep inside, fears that when you start to say things like, we can defeat death, that you won't be smote down by the God of the universe for assuming his role.
I mean, right now, we have organized society around capitalism.
We strive to make money, have power and wealth.
We engage in warfare.
Everyone's angling for their best interests.
And I'm suggesting that this is not about me trying to live forever.
This is me trying to answer the most pressing question in existence.
What do we do as a species?
Now, when death is inevitable, you're going to have an answer like, well, I'm going to live fast and die young, or I'm going to conquer territories and be immortal for my quest, or I'm going to make up my own meaning-of-life game.
But if death is not inevitable, we can extend our lifespans to some unknown horizon, the meaning-making games we have as a species all change.
I mean, many people through history have reached similar conclusions, but not with similar technology to affect those conclusions, or those outcomes.
But history laughs at those people, and the story of history is men addled with hubris, being humiliated.
Yes.
And so, I mean, I would say there's a great deal of evidence that you will be crushed and humiliated for saying that.
And I hope that's not the case, of course.
But everyone, every other living person who's reached the conclusion that you've reached has been crushed and humiliated in the end, and we laugh at them.
So if you pose that question from the 25th century, and so that really, for me, creates this clarity of thought.
Like if you try to really clear your head of all the noise happening now, what do they say right now that we did as a species in this moment that allowed intelligence to thrive in this part of the galaxy?
And this is what I would say: this is when Homo sapiens realized that they had reached a technological threshold where the only objective of existence was to continue to exist at the basic level.
I mean, I grew up in a religion and then I found out that that entire thing had been packaged in a way where it's like, we're good and everyone else is bad.
And then I went through a process of behavioral psychology where I realized that I have all these shortcuts of hypocrisy and irrationality that I am a disaster as a human.
I'm blind to my own behavior.
I went through it with authorities that I trusted in other ways.
And so I don't know what to trust in reality outside of things that I find more stable like physics and math.
And so if I try to ground myself in reality of what can I trust, my mind is very far down the list of things I trust.
So imagine we travel back in time, one million years, and we're hanging out with Homo erectus.
They have an axe in their hand.
And we say, where's shelter?
Where's food?
And where's danger?
We listen.
If we say, now wax poetic on the future of the species, what is the future of intelligence?
We laugh.
They have nothing to say about computers.
Or the internet, or that there's a microscopic world, or how large the universe is.
They have no idea.
And in this moment, we might contemplate that we may be just as primitive as Homo erectus, even though we think we are at the apex of intelligence.
Is that true?
We're giving birth to superintelligence.
Could that intelligence, relative to us, make us caveman-like in a similar fashion, or more so?
And this is what I'm saying in this moment.
It's an absolute invitation for humility that we may know nothing about existence, or very little, or that what's coming our way may transform existence to ways that we can't even fathom.
That's how significant the change is going to be in the coming years and decades.
Yeah, I just, I wish, and I'll stop at this, but I just wish we had a better handle on why we have those impulses.
I feel like it's very hard to proceed with any assumption at all until we understand what just happened or what's happening now, why we're acting the way that we are.
And if we don't have a consensus on why people hurt themselves, pretty hard to make any future plans at all based on human behavior.
Now, so what's interesting is the next question I ask in this conversation is now imagine the 25th century is observing our conversation right now and they observe your answers.
What do they observe are the characteristics and morals and ethics of the early 21st century?
So it flips people's mindset from the knee-jerk reaction of I hate this idea to being observational about the characteristics of being human now.
And I do this because it is so hard to see time and place.
Well, that doesn't surprise me, actually, because you have one quality which I, again, really admire, which is your dedication to seeing things outside of your own, the narrow tube that we all live in, seeing the bigger picture.
And I love that.
I think it's so important and wonderful to hear it.
You made a bunch of allusions to superintelligence, presumably the AI we keep hearing about.
And since you're in that business and this is what you think about, describe what that means exactly.
What we do know is that software can be programmed and mathematical functions can be organized to do things that we humans do, and they can do it much better, and even do things that we humans can't do.
So we've seen this where I just took my first self-driving car ride in San Francisco last week.
Hailed it, got in, entirely autonomous.
And that's a remarkable feat, that a machine is capable of driving a car.
It reads medical imagery, it flies airplanes.
So we know algorithms are very good at doing many things.
And they're getting better all the time.
And so what I'm observing is that AI is progressing at a speed that is impressive, and maybe even unfathomable, much faster than we can observe it.
And it's doing these things that we humans do, and it's going to increasingly do those things, and it will help us achieve our objectives, so we're going to say yes to it.
Now, when these algorithms become as good or better at being us than we are, then it creates an invitation to say, who are we?
And that's what I'm saying, is AI is going to create a series of existential crises for the species.
It'd be basic ones like, do we trust our government?
Who's in authority?
Who verifies identity?
All these basic things we've settled as a society, roughly, it's going to call into question, at a speed that won't allow us the cycle time to really figure it out.
And so we're going to have this feeling of bewilderment where it's moving very fast, we can't keep up.
How do we stop ourselves from falling into anarchy?
Now, when that happens, we say, what games do we play as a species?
What do we do?
And that's why I'm saying it's time to rally around this don't die concept.
Don't die individually.
Don't kill each other.
Don't kill the planet and align AI with don't die.
That would be our singular objective as a species.
Even though this sounds unimaginable right now, like from our vantage point, that's like, that's no way.
Impossible.
You just look at the underlying characteristics of how this is progressing.
If the Industrial Revolution, the steam-powered loom in England gave rise to Marxism and the First and Second World Wars and Vietnam and Korea and every other conflict for 100 years and the deaths of hundreds of millions of people, technological change causes...
Yes.
The fall of religions, the fall of empires, the murder of millions.
So I said when you asked, would I be willing to follow the instructions of the algorithm?
And I blurted out without thinking about it, no.
And then I admitted in the interest of honesty that I don't really have any reasons for saying no other than my animal sense tells me, no, that's slavery.
You can't live like that.
You'd rather be dead, which is how I feel.
That was my instinct speaking, which I regard as a kind of co-equal.
When I saw Evening Bryan pull up and he'd give me all these really compelling reasons, like, tonight's the last night, you know, like, tomorrow morning we'll work out extra hard.
And I'd say, I'm sorry, that's not going to happen.
So I fired him.
So from 5 p.m. to 10 p.m., I removed my ability to eat.
He's like, no matter what, it doesn't matter what the occasion is, you cannot eat food.
And so I started playing with my different characters of Bryan, like Dad Bryan, Work Bryan, Evening Bryan.
And I found it really liberating that I'm not the behavior.
I'm not that actual practice.
And so this is what I started doing.
Blueprint as well, like, could I construct an algorithm that actually improved me?
Because I spent all day building technology at my company, Braintree. You would write the code and the technology and improve it.
And then you would improve it again.
And again, version 2, version 3, version 4. So all day, my technology got better.
And every day, I got worse.
And I couldn't fix my own problems.
And it was such a weird juxtaposition where technology is improving radically, and I'm getting worse.
So it's like this difference.
And I thought, this is wild that as a species, we're so focused on the improvement of our technology, and we are this self-destructive species in every regard.
And of course, every religion answers it very neatly and sensibly, I would say, and every religion always has.
And it does strike me, if you're looking back into history, that this is the only period, post-war, post-World War II, where you've had a society at scale that assumes that there's nothing beyond itself.
And so that raises a lot of questions, but the first is, like, why did every previous generation assume there was a God, but we don't?
No, I don't know that we do agree, actually, because there's no meaning without a power beyond ourself, is there?
I mean, there's only this sort of like shallow, silly, or sets meaning that we attach to various things like sex or living longer or feeling good or whatever, but there's no meaning beyond our physical momentary experience, whereas...
A person who acknowledges a power beyond himself attaches ultimate moral meaning to events, right?
So you have no God, no meaning, or am I missing something?
I guess I try to speak in the world that I can operate in, practically.
And so your thought of meaning is a biochemical process in your brain.
It's a thought you have.
It's a biochemical state you experience, whether it's love or whether it's meaning making or whether it's belief in death, you're experiencing this thing as a human.
So if there's no acknowledged power beyond people, or only the power that we create through these machines, and there are giant data centers, then how can we say, if I feel like killing you...
What you're saying is, what I'm hearing you say is the technological revolution or disruption opens up the space for these questions to be asked anew, even though we don't even know where it came from in the beginning.
This is what I am proposing is that just like when America was founded, it was this concept of, hey, the monarchy has been doing its thing for quite some time.
Not great.
We think we can do this really new, weird thing of democracy and vote people in.
We have these two representative bodies.
And half the people thought, that's insane.
Half the body's like, kind of cool, let's try it.
So we chose democracy as a form of governance that was supposedly better than the monarch.
And so in that moment, we chose a new form of governance in trying to do that.
Now, we've been trying to solve the thorny questions of democracy for over 200 years.
So in many ways, I'm currently working to create a don't-die nation state.
So if you're serious about not dying, and this, again, not for immortality, but just for the purpose of we're at the dawn of a new era as a species, and we're going to try to create some stable structure for birthing superintelligence, then you can walk into that, and no government in the world is helping its citizens not die.
Really basic things like blood draws and therapies and medical care.
It's very much treat the symptoms when they arise or when you're near end of life, let's keep you alive for some short duration of time.
But otherwise, we don't do a good job.
And so I'm trying to figure out how to create a new societal structure that has the sole objective, a nation state, of helping its citizens not die.
Of course, the first order of business would be to construct a military to defend yourselves against people who wanted to kill you anyway.
Because not everyone agrees that you should get to live, of course.
But let me ask you, maybe there's a shortcut to all this.
And I so admire your energy and your willingness to think about questions that most people don't bother to think about but can feel are important.
Everything you've said, I can feel it's important.
This is not nonsense what you're saying at all.
Why wouldn't it just be a lot easier to blow up all the processing centers, save ourselves the massive climate change-inducing energy draw that AI really is?
If you're worried about the planet, we've got to stop this crap immediately because we can't generate the power for it.
And arrest everyone who's getting rich imposing this revolution on the world.
That's a lot easier.
You could do that in an afternoon with nuclear weapons, and why wouldn't you if you thought it would help us, quote, not die?
And I hate to reduce everything to the if you could kill baby Hitler, would you?
But it is sort of a question like that because you would not disagree if I said, here's what we know.
We know that AI is likely to spawn some improvements, but also that it's certain to kill millions of people.
Millions will die because of this.
There isn't any doubt about it.
The chaos alone, right, will cause that.
I'd bet my house on that.
That's going to happen.
Why let that happen?
Why not just strangle this puppy in the crib?
Like, seriously, why wouldn't you as a rich guy fund a bunch of saboteurs to blow up the data centers and to take out the people pushing this crap and to try and end it?
And so that gives me kind of an advantage, I would say a moral advantage over the machine.
And therefore, I'm a preferable father, for example, to my children than my iPad would be.
Because I have a soul and the iPad doesn't.
But again, that's a theological distinction.
But as a practical matter, there's no way you can look into the camera and say, AI is not likely to kill millions of people, because you know that it is.
The effect of it will kill people, for sure.
The displacement that you described, the power vacuum you described, the chaos that you described correctly, you're predicting.
That's all that's real, in my view.
So millions will die because of that.
So why wouldn't you just take your money and try to blow it up in the name of saving millions?
But I still just want to go back to, like, why not save ourselves?
I mean, there's something sort of classically American, or Western, or overfed, too-much-money passive about the society that I live in and all of us live in.
We're just like, well, it's going to happen.
It's like, why doesn't somebody stop it?
Why even go through all this drama?
These are just machines.
Like, let's go full Luddite and just take them out.
I'm serious.
Arrest these creepy people who are trying to impose this dystopia on our children.
I mean, okay, so let's just say at what point in time have humans known all things?
So we walk back through history and say, what did humans know then?
And what do we know now?
And there's been a track record of we haven't known all things.
In fact, we've known a very shallow set of things.
It's like even if you said how big is reality a few hundred years ago, you wouldn't be able to say, oh, there's a microscopic world down to the nanoscale and beyond.
There's this big universe on this scale at this size.
You wouldn't say there's an electromagnetic world that's a trillion times bigger than what we can see.
You wouldn't be able to say reality is like trillions of times bigger than what we experience.
And so if you say, what could our conscious experience be?
What could existence be in a few decades?
It may be orders of magnitude larger than what we have now.
So I realize we come at this now with this fear response.
We're saying we can pattern things we've seen, but going forward, we may be cavemen and have no idea what we're talking about.
But the idea that harnessing the computing power of machines will inform us to a greater degree ignores what just happened over the last 30 years, where everyone now has the Encyclopedia Britannica, as we used to call it, in his pocket in the form of an iPhone, where all human information is available, and people are way more ignorant than they were 30 years ago.
And moreover, any machine we create will never be able to answer the questions that actually matter like, why is my wife mad at me?
No machine can ever determine that with certainty or even explain how does life begin?
Is there any—I'm so rooting for the future you describe.
I really am.
I just—speaking—you said at the outset that you were no longer a believer because there's no evidence, which I think is a fair thing to say.
I disagree, but I respect your evidence-based standard, okay?
Where's the evidence that technology has ever brought people closer together? Has ever done anything but enable people to be people, which is to use it in part for good ends, you know, better food, more food, and evil ends, nuclear weapons?
I just don't think that there's any evidence for what you're saying!
But until we can account for why we do it to ourselves, we're probably not going to change it.
But I think the most obvious explanation is we're being acted on by demons.
And this is how every religion I'm aware of has described it correctly, in my opinion, acted on by demons, whose goal is to destroy and kill people, and they're counterbalanced by God.
But if you don't agree with that, then you need to substitute another explanation in its place in order to proceed in the hope that we can change.
Otherwise, we're just in this cycle with more powerful technology that allows us to do the same evil things, but at a greater scale.
I don't see any evidence of that because it's remained constant throughout all time that we're aware of.
This pattern has never changed, and it's existed in times when people are getting massive amounts of aerobic exercise because they had to walk through the fields all day, when they were eating no carbs, when they were hunter-gatherers, or whatever.
Well, my certainty is that we are being acted on by spiritual forces that we cannot see, that there is a war going on all around us, out of our sight, not perceived by our senses most of the time, between good and evil.
That's the whole objective of this endeavor is to identify what we cannot see and reconcile with and eliminate the forces that deteriorate our life experience in all of its capacity, spiritual, physical, all of it.
We're saying the same thing after the same objective.