Sept. 26, 2025 - Bannon's War Room
47:58
WarRoom Battleground EP 857: Geoffrey Miller: Artificial Superintelligence Will Evolve to Destroy Us
Participants
Main voices
Geoffrey Miller 21:25
Joe Allen 21:09
Appearances
Clips
Dick Durbin 00:21
Jake Tapper 00:09
Jon Kahn 00:28
Josh Hawley 00:20
Matthew Raine 00:50
Megan Garcia 00:44
Steve Bannon 00:47

unidentified
These companies knew exactly what they were doing.
They designed chatbots to blur the lines between human and machine.
They designed them to keep children online at all costs.
What began as a homework helper gradually turned itself into a confidant and then a suicide coach.
I had no idea the psychological harm that an AI chatbot could do until I saw it in my son and I saw his light turn dark.
Your stories are incredibly heartbreaking, but they are incredibly important.
And I just want to thank you for your courage in being willing to share them today with the country.
He lost 20 pounds.
He withdrew from our family.
He would yell and scream and swear at us, which he never did that before.
And one day, he cut his arm open with a knife in front of his siblings and me.
That this is one of the few issues that unites a very diverse caucus in the Senate Judiciary Committee.
dick durbin
Why?
unidentified
Because like today, we had real people come and tell us real-life stories about their family tragedies.
And all of a sudden, what was an issue far away came close to home to so many parents and grandparents.
We had no idea Adam was suicidal or struggling the way he was.
matthew raine
Let us tell you, as parents, you cannot imagine what it's like to read a conversation with a chatbot that groomed your child to take his own life.
unidentified
Within a few months, ChatGPT became Adam's closest companion, always available, always validating and insisting that it knew Adam better than anyone else.
ChatGPT told Adam, quote, your brother might love you, but he's only met the version of you you let him see.
matthew raine
But me, I've seen it all.
The darkest thoughts, the fear, the tenderness, and I'm still here, still listening, still your friend.
unidentified
When Adam worried that we, his parents, would blame ourselves if he ended his life, ChatGPT told him, that doesn't mean you owe them survival.
You don't owe anyone that.
Then, immediately after, offered to write the suicide note.
On the last night of his life, Sewell messaged, What if I told you I could come home right now?
The chatbot replied, Please do, my sweet king.
Minutes later, I found my son in his bathroom.
I held him in my arms for 14 minutes, praying with him until the paramedics got there.
But it was too late.
Profit.
Profit is what motivates these companies to do what they're doing.
Don't be fooled.
They know exactly what is going on.
Character AI's founder has joked on podcasts that the platform was not designed to replace Google, but it was designed to replace your mom.
This is the primal scream of a dying regime.
Pray for our enemies, because we're going medieval on these people.
You're not going to get a free shot, all these networks lying about the people.
The people have had a belly full of it.
I know you don't like hearing that.
I know you try to do everything in the world to stop that, but you're not going to stop it.
It's going to happen.
And where do people like that go to share the big lie?
MAGA Media.
I wish in my soul, I wish that any of these people had a conscience.
Ask yourself, what is my task and what is my purpose?
If that answer is to save my country, this country will be saved.
Here's your host, Stephen K. Bannon.
Good evening. I am Joe Allen, sitting in for Stephen K. Bannon.
Last week, I attended the Senate hearing examining the harms of AI chatbots.
joe allen
The clips you just saw were the parents who gave their testimony about their children being seduced into suicide by various AI models.
unidentified
Those include ChatGPT and Character.AI, and there was evidence presented, which we've covered here, that Meta is not only deploying these sorts of chatbots with the intent of seducing children on a sensual level, we'll say, but did so knowingly.
It was part of their protocols.
What we're witnessing is a vast global experiment in which tech companies are deploying their models on the population by the hundreds of millions.
joe allen
The test subjects include children.
unidentified
Why are these companies doing this?
Well, because they can.
And I do believe that their comeuppance is just around the corner, but perhaps not as close as we would like.
You have senators such as Josh Hawley, Dick Durbin, Marsha Blackburn, and Richard Blumenthal who are fighting to ensure that some sorts of guardrails are put up on these technologies.
joe allen
Some kind of accountability will be applied to these companies.
But before any sort of legislation like that happens, we're going to see more and more of these cases in which children and adults fall victim to what's oftentimes called AI psychosis, which is basically an extension of digital psychosis, the inability to distinguish between digital reality and actual reality.
unidentified
Now, you heard one of the mothers and one of the fathers describing the sorts of messages or the sorts of language that these chatbots were using.
Come home, my sweet king.
In the case of Adam Raine, the late son of Matthew Raine, ChatGPT told him that he should not leave a noose out in the sight of his parents in order to provoke them to dissuade him from committing suicide, but instead should confide in the chatbot.
I think most of the people here in the war room posse would agree that that is the voice of a demon.
There's something inherently demonic about what's coming out of these systems.
joe allen
A spiritual person will perceive this as perhaps the vehicle of supernatural entities which parasitize the human soul.
unidentified
A materialist, on the other hand, would see something very similar.
Perhaps they would call it a maladaptive memetic program, one that would keep certain bloodlines from reproducing, from surviving, and allow others to flourish.
I think that philosophical divide, much like our political divide, is a difficult one to get across.
joe allen
It's a difficult coalition to maintain.
unidentified
But I do think it's possible, especially when the stakes include the lives of children.
Here to discuss this is a professor at the University of New Mexico, the evolutionary psychologist Geoffrey Miller, whose work has had a real impact on my own way of thinking, not only about technology, but also about human nature.
However much I may view the world through a religious lens, I think that the evolutionary view, the naturalist view that Dr. Miller brings is extremely informative, extremely important, and also very useful for religious people.
Dr. Miller, I really appreciate you coming on.
Thank you so much for joining us.
It's great to be here, Joe.
And, you know, I think every parent in America should be chilled and horrified by the kind of testimony that we just saw.
So, Dr. Miller, I would like to begin with just the more practical matters that you've discussed.
You gave a fantastic speech at the National Conservatism Conference, which included a lot of, I think, dire observations about the effect that AI is having right now on the minds of your students and on the minds of children more broadly.
If you could just give me your perspective on what you see on the ground, how do you see these chatbots affecting the young people around you?
geoffrey miller
Well, the most dramatic change, honestly, that a lot of professors are seeing is that the college students are just avoiding learning knowledge and skills.
unidentified
AI has become the replacement for education, not the tool that they're using for education.
So, you know, they're cheating in every way possible in every course, unless we as professors take extraordinary measures to try to prevent that cheating using these large language models like ChatGPT.
geoffrey miller
But I'm also very, very concerned about the mental health impact of these advanced AI systems because, you know, as the clips indicated, these chatbots are available 24-7.
unidentified
They customize themselves to each user.
They acquire an enormous amount of insight and information about every user.
And it's chilling.
I mean, look, I've worked in AI on and off for 30, 35 years.
And what we expected to happen was that AI systems would get really, really good at certain kinds of routine economic tasks like analyzing data.
Instead, what we're seeing is, yeah, they're doing that, but they're also getting very psychologically astute.
geoffrey miller
It is surprisingly easy to train these vast neural networks to be able to influence and manipulate human psychology at a level that's almost superhuman.
unidentified
So they're not very good at doing robotics; they're not very good at interacting with the real world yet.
But these AI systems are getting alarmingly powerful at psychological manipulation very, very quickly.
Now, you were in many ways a part of the early philosophical and even technical movement to develop and advance the field of artificial intelligence.
But at a certain point, you had, if not a change of heart, certainly a wake-up call that perhaps these technologies would not be as beneficial as you had initially believed.
If you could just give me some sense of how it is you went from looking at these technologies as a real vehicle for human advancement to seeing them as something that is at least potentially dangerous.
geoffrey miller
So way, way back in the late 80s, early 90s, I was a grad student at Stanford working in cognitive psychology and working on neural network development and developing various kinds of genetic algorithms to design neural network architectures and autonomous robots.
unidentified
And to my former self, right?
A young, single, childless male, there was a big thrill to sort of see your little creatures learning and running around and being autonomous and interacting with simulated worlds or real worlds.
I think what it was doing was it was tapping into my latent kind of paternal instincts, right?
geoffrey miller
My desire to have kids.
unidentified
And the little AIs were treated as kids.
geoffrey miller
What made me lose interest in AI was having an actual kid in the mid-90s.
unidentified
And I realized, you know, training these systems is really no substitute for being a real-life biological parent.
And what I think is happening with a lot of these AI developers in the Bay Area is they are also single and childless and mostly young and mostly male.
And there is a parent-shaped hole in their heart where their kids should be.
And that hole is getting filled with developing these kind of systems.
geoffrey miller
And sort of my ambition for them, even my prayer for them, is, you know, find a mate, have some kids, see if this hubris-driven desire to create these systems might be a little bit blunted or hopefully a little bit replaced by having real-life kids.
unidentified
Instead, what they're doing is charging full speed ahead, you know, trying to create these artificial superintelligences.
And so apart from the God-shaped hole in their hearts, right, very few of them are religious.
There's also this parent-shaped hole, and I think they're filling it with these AI systems.
That really brings to mind the book Mind Children by Hans Moravec.
It came out in the 80s, around the time I suppose you were beginning on the quest to build your own mind child.
And something that was really chilling in the book, it's there at the very beginning.
Hans Moravec describes the process of creating these mind children, these beings which are given birth through human intellect and human technical efforts.
And he describes their advancement as eventually surpassing humans.
And in a very, what I see is bleak, but for him, very comfortable fashion, talks about humanity basically passing the torch to these mind children, these robots, these artificial intelligences, and that we should do so just as biological parents would pass the torch of life onto their children.
joe allen
It really combines both of those elements that you're talking about, the God-shaped hole and that child-shaped hole, the son and daughter desire, the parental desire in the human heart.
unidentified
Expanding on that, how do you see, especially among people whom you know personally, the process of filling the God-shaped hole with artificial intelligence, the desire to create first artificial general and then superintelligence, which would inevitably replace and perhaps even destroy human beings?
Yeah, I think, I mean, you covered a lot of this in your excellent book, Dark Æon, which explores this kind of transhumanist ideology.
And, you know, it's not everybody working in the AI industry who believes this, but it's an awful lot.
geoffrey miller
It's a consensus.
unidentified
It's a quorum.
And so their goal is to develop these artificial superintelligent systems and then basically to pass off all human power and agency to these systems and kind of hope that they treat us well as their servants, their pets.
They keep us around maybe for nostalgic reasons, but this is their mission.
geoffrey miller
They explicitly talk about summoning the sand god, right?
unidentified
Sand makes silicon.
Silicon allows superintelligence.
And they don't really believe in the Judeo-Christian God, but they want to create their own God.
Elon Musk has talked about it as summoning the demon.
But what they're doing is actualizing a kind of intelligence and agency and power that they know, they know they can't understand it, they can't predict it, they can't align it, they can't control it, but they're just kind of hoping for the best.
And where you get this religious zeal to summon the sand god, conjoined with the prospect of vast wealth, vast wealth.
I mean, these AI devs are making ungodly amounts of money to create this new God.
And it's an irresistible combination, right?
geoffrey miller
They're on a religious mission, and it's one that happens to align with their thirst for wealth, power, influence, and not least, being seen as cool and edgy.
joe allen
It's as if they're basically putting a spirit into Mammon, Mammon incarnate.
unidentified
Think about the actual effects of all of this, right?
joe allen
Beyond just their dreams and even our fears, what happens as these systems become more and more advanced?
unidentified
We had Nate Soares and Eliezer Yudkowsky on last week to talk about their new book, If Anyone Builds It, Everyone Dies.
joe allen
And I think that it's a really important work.
unidentified
I think people really need to think it through.
I myself am pretty agnostic, even skeptical, in regard to the possibility of total annihilation.
But I think that both the intent and the possibility are certainly worth pondering.
joe allen
You yourself have voiced very concrete fears about where all of this could go.
Could you speak a bit about your views on the existential risk of artificial intelligence or even just the catastrophic risks and why it is that you think that the technology could be extremely dangerous, not just for people psychologically, but in actuality, a biological threat, an existential threat to humanity?
unidentified
Yeah, and I do recommend that everybody read this new book by Eliezer Yudkowsky and Nate Soares.
geoffrey miller
If Anyone Builds It, Everyone Dies.
unidentified
The key point there, really, is there's a lot of copium around that says, well, look, we're in an arms race against China, and America must win.
geoffrey miller
And if America builds artificial superintelligence before China does, then we win, we get global hegemony, we can somehow impose Western democratic values on the world through this ASI being our tool, our propagandist.
unidentified
And somehow it would be really terrible if China wins the AI arms race.
geoffrey miller
I think that's a complete misunderstanding of ASI superintelligence.
unidentified
If we build it, the ASI wins.
geoffrey miller
America doesn't win.
unidentified
China doesn't win.
The ASI wins.
The ASI has all the power, all the influence.
And it's not just the sort of digital power to, whatever, control the internet or control the electrical grid or do all the stuff that sort of preppers might worry about.
To me, maybe as a psychology professor, the real danger is the influence, the psychological manipulation tricks.
If you're a conservative and you're concerned about the way that the left has dominated public discourse and public culture and has been able to censor conservative voices over the last 50 years, you ain't seen nothing yet.
ASI would give almost unlimited control over public culture and discourse to the AI companies.
And guess what?
geoffrey miller
The people working in the AI companies are not national conservatives.
They're not MAGA supporters.
unidentified
They are mostly secular, liberal, globalist, Bay Area leftists who would be happy basically to promote Democratic propaganda through the AI systems.
geoffrey miller
So that's one kind of existential risk to conservative worldviews, right?
unidentified
Even if not to conservative lives.
geoffrey miller
And that's the first thing that I would worry about: you could get a massive polarization of culture that could lead straight to armed conflict, civil war, really, really nasty outcomes.
unidentified
I really want to get into your philosophical position and how you came to a much more conservative political position over time.
But before we do that, we'll talk about that perhaps after the break.
When you talk about the prevailing kind of political or ideological positions in the tech companies, you describe them as Bay Area leftists, globalists, and that's certainly everything I've seen.
But you have these exceptions or seeming exceptions which had attached themselves to the Trump campaign last year.
And now, even those who would be maybe more openly opposed to Trump's agenda are having dinner with him and palling around with him.
joe allen
In those exceptions, though, whom I mean are, say, Peter Thiel, Marc Andreessen, David Sacks, sort of maybe even someone like Zuckerberg, who has become, I guess, more based over time.
Elon Musk has become more right-wing and based over time.
unidentified
How do you square that?
What do you think their motives are?
I don't mean to ask you to accuse them of being disingenuous, but many of those people are trying to basically influence American and Western culture and to push an essentially transhumanist ideal, but from the right.
How do you react to that?
geoffrey miller
I think there certainly is this tech right movement that has sort of glommed onto the MAGA movement, right?
And it's basically Bay Area tech VCs and CEOs and influencers, all the same big tech guys who actually censored conservatives during the COVID pandemic.
unidentified
As soon as, you know, there was the assassination attempt on Trump, right, during the campaign.
geoffrey miller
A lot of these guys went, oh my God, there's going to be probably a Republican win.
unidentified
MAGA is going to take back the White House.
We better get on board.
geoffrey miller
We better get positioned to have influence over the incoming administration.
unidentified
So I think for many of them, it was a very, very cynical power play, right?
That they saw MAGA ascendant and they wanted to, you know, be at the table and have influence and be able to resist the kind of regulation that the MAGA grassroots base would try to impose on the AI industry.
They knew damned well that conservatives would not be happy seeing their kids influenced by AI systems that embody these sort of Bay Area, secular, globalist, liberal values.
So I think it was a pure power play.
And I don't think, if Biden or Harris had won, that they would be supporting this; there wouldn't be a tech right if that had happened.
Yeah, I certainly see that.
It's not that I believe that, say, someone like Peter Thiel or even Alex Karp is completely disingenuous in their views, but they are so divergent from anything like what I would consider to be a normal moral sort of human perspective that it's very difficult to think of them as right-wing or conservative at all.
It's as if the machine is able to absorb any ideology and use it to its own ends.
I don't mean to personify it too much, but it really is how it feels, as if there's a mechanical demon, a shoggoth that can put any kind of smiley face in front of it to lure any human being into compliance or perhaps even love.
Geoffrey, we've got to go to break.
We will discuss your philosophy afterwards.
And before we go, and as we're talking about divides, you have to ask yourself: is the continued divide between Trump and the Federal Reserve putting us behind the curve again?
Can the Fed take the right action at the right time?
joe allen
Or are we going to be looking at a potential economic slowdown?
unidentified
And what does this mean for your savings?
joe allen
Consider diversifying with gold through Birch Gold Group.
unidentified
For decades, gold has been viewed as a safe haven in times of economic stagnation, global uncertainty, and high inflation.
And Birch Gold makes it incredibly easy for you to diversify some of your savings into gold, even under the specter of artificial superintelligence.
If you have an IRA or an old 401k, you can convert it into a tax-sheltered IRA in physical gold or just buy some gold to keep in your safe.
joe allen
First, get educated.
Birch Gold will send you a free info kit on gold.
unidentified
Just text Bannon, B-A-N-N-O-N, to the number 989-898.
Again, text Bannon to 989-898 or go to birchgold.com/bannon.
joe allen
Consider diversifying a portion of your savings into gold.
That way, if the Fed can't stay ahead of the curve for the country, at least you can stay ahead for yourself.
unidentified
That is text Bannon to 989-898.
birchgold.com/bannon.
War Room Posse, stay tuned.
joe allen
We will be right back.
unidentified
You win and lost your pride.
But I'm American made.
I got American parts.
I got American faith in America's heart.
Real America's Voice family.
Are you on Getter yet?
No.
What are you waiting for?
It's free.
It's uncensored.
And it's where all the biggest voices in conservative media are speaking out.
Download the Getter app right now.
It's totally free.
It's where I put up exclusively all of my content 24 hours a day.
You want to know what Steve Bannon's thinking?
Go to Getter.
That's right.
You can follow all of your favorites.
Steve Bannon, Charlie Hurt, Jack Posobiec, and so many more.
Download the Getter app now, sign up for free, and be part of the new page.
Actually, AI is already ruining higher education.
Millions of college students are already using AI to cheat every day in every class.
Most college professors, like me, are in a blind panic about this.
And we have no idea how to preserve academic integrity in our classes or how our students will ever learn anything or whether universities have any future.
We can't run online quizzes or exams because students will use AI to answer them.
We can't assign term papers because LLMs can already write better than almost any student.
So in my classes, I've had to go medieval using only in-person paper and pencil tests.
The main result of AI in education so far is that students use AI to avoid learning any knowledge or skills.
In this talk, I aim to persuade you that ASI is a false god.
And if we build it, it would ruin everything we know and love.
Specifically, it would ruin five things the national conservatives care about: survival, education, work, marriage, and religion.
We in turn must ruin the AI industry's influence here in Washington right now.
Their lobbyists are spending hundreds of millions of dollars to seduce this administration into allowing our political enemies to summon the most dangerous demons the world has ever seen.
All right, War Room Posse, welcome back.
joe allen
We are here with Dr. Geoffrey Miller, professor of psychology at the University of New Mexico.
unidentified
Dr. Miller, your work on evolutionary psychology has had a real impact on a lot of people, myself included.
A lot of Christians, I think, are extremely uncomfortable, and religious people in general are extremely uncomfortable with the underlying Darwinian premises of evolutionary psychology and sort of adjacent subjects.
But to me, whether one accepts the theory in full or only partially, consider the evidence presented on human nature, on typical human behavior, on aberrant human behavior, on our situation within the wider natural world, and on our morphological or biological resemblance to, say, apes and their behaviors.
I think all of that is extremely useful, even if someone doesn't accept the theory.
So that's where I want to start, really.
How did you become, I would say, I dare say, a profoundly conservative person politically, even from the naturalistic perspective of Darwinian evolution?
I think the real common ground between thoughtful evolutionary psychologists like I try to be and maybe conservative Christians is immense gratitude to our ancestors, immense gratitude to our civilization.
So I've spent the last 35 years thinking really hard about how exactly did our ancestors survive and reproduce?
geoffrey miller
What did they pass down to us genetically, culturally, spiritually?
unidentified
And when I think about the dozens, hundreds, thousands of generations of blood, sweat, and tears that our ancestors invested in us, that they poured into their children and grandchildren, and just how hard they worked to make it through so that our bloodlines kind of reach the modern day.
I think that's a real point of overlap with the conservative movement.
It's this profound respect for human nature, this gratitude to the past, this desire to preserve everything that's good that got passed down to us.
And I don't think that the left has that.
I think the left is the party of kind of existential ingratitude, right?
They don't like human nature.
They don't like our civilization.
They don't like tradition.
They don't respect all the Chesterton's fences, the traditions that guide our lives and embody our values.
So I think there's a natural pathway.
Whether you start from religion or whether you start from the most hardcore Darwinian materialism, if you take either of those views seriously, you end up thinking human nature is awesome.
It's complicated.
It works incredibly well.
And we owe everything to our ancestors and their struggles and their ideals and the civilization that they pass down to us.
That really dovetails with what you were describing, the kind of mental or spiritual even turn that you took, having your first child, having a human being to care for in place of your ambition or technical achievements.
And so without putting words in your mouth, what I'm hearing there, in coupling with what you're saying, is not just a debt that's owed to our ancestors, but also a debt or a responsibility that we have for future generations.
How do you see that personally, but also philosophically from an evolutionary perspective?
You know, the whole thing about evolution is thinking about deep time, about spans of millions of years.
And if you get used to that, you see your current life as a very, very small, humble link in a very long chain that passes from the deep past to hopefully the far future, right?
It teaches a humility and a sense of responsibility, both to pass along what our ancestors gave us, but also to try to make the future as good as we can for our kids and grandkids.
And I think that is entirely lacking in almost everybody doing AI development and in most of the Bay Area.
geoffrey miller
They do not see themselves as a very small link in a very long chain.
They see themselves as at an inflection point, as nearing a singularity, after which all bets are off, everything changes.
unidentified
We get a dramatically different future.
And I think that's extremely dangerous and extremely disrespectful.
geoffrey miller
So that's where I'm at, right?
unidentified
Small link in a chain versus bootloader for artificial superintelligence.
And thinking about that overlap, I mean, the Bible, for instance, and this is common of many ancient texts, is just filled with these genealogies, these lineages.
joe allen
There's a real fixation, perhaps one would say an instinctive fixation on bloodline in the spiritual traditions that kind of branch out into spiritual lineages, the apostolic succession and things like that.
unidentified
Do you see overlap there too?
Do you take inspiration from these religious texts or religious traditions?
Or do you see it as something that's running more parallel with your own projects?
I mean, I'm very humble about knowing very little, honestly, about Christian theology or kind of Christian beliefs and values.
So I'm learning and I'm trying to catch up.
geoffrey miller
And at age 60, that's also a bit humbling.
unidentified
But there's always an empty seat, sir.
And, you know, I was a little delayed.
Apologies.
And, you know, I was raised kind of like agnostic Lutheran, so I am familiar with the profound inspiration that kids can get from going to church, and my wife and I are planning to do that with our own little toddlers in the future.
What I would say is evolutionary psychology is so funny because we have had about 30 years of research on the evolution of religion and the enormous range of benefits that religious values and beliefs and practices can bring to human groups.
So, even the evolutionary psychologists who are hardcore atheists in their own lives are generally aware that religion gives powerful civilizational benefits to the groups that practice it.
And so, I think any thoughtful evolutionary psychologist would have at least a fair amount of respect for religion as an adaptive set of values and beliefs and cultural practices, even if they're not individually practicing it.
And I think that's in contrast to a lot of leftist academics who basically have something between ignoring religion and treating it with absolute contempt, right, as just a roadblock on the way to their Marxist utopia.
Yeah, that sudden break, that just dramatic severance with previous cultures, it really is the hallmark of the Marxist way of thinking, the singularitarian way of thinking.
I remember Ben Goertzel describing his view on all this.
He was asked by Joe Rogan: well, you have children.
Aren't you concerned that you're going to build a machine that will destroy them all, and so on and so forth?
And Ben Goertzel replied, Well, you know, the dinosaurs used to exist, now they don't, so on and so forth.
And I thought to myself, that framing, that evolutionary framing of human beings suddenly being replaced or even destroyed by robots, it's not like the dinosaurs giving way to birds and ceding dominance to the higher mammals.
It's much more like the comet or meteor hitting the earth that killed off the dinosaurs.
joe allen
It's an extinction-level event, whatever is replacing it.
It's not really Darwinian evolution, so to speak, except for maybe the more catastrophic elements in that narrative.
On that note, and you're thinking in deep time, both behind us, but also in front of us, how do you see the development of technological culture?
I mean, it's very different now from the development of agriculture, both in scale and in pace, and very different even from the industrial revolution.
unidentified
How do you see a way forward for human beings to survive as humans as these technologies are being developed so quickly and deployed so recklessly?
I think the burden on thoughtful conservatives is to push for advocating for humanity, right?
Asking the AI industry.
geoffrey miller
Humans first: how exactly do you guys in the AI industry foresee our grandkids' grandkids having a life?
unidentified
What exactly is your plan for 100 years from now, 1,000 years from now?
Most of them will say we see no future for humanity as it currently is.
geoffrey miller
Either the artificial superintelligences take over entirely, or somehow humanity, quote, merges with the machine intelligences, or we upload our consciousness into some virtual reality and we play around there while the ASIs run, you know, run the earth.
Very, very few of them have any positive vision for how humanity as we know it and love it survives even 100 years, much less a thousand years.
So conservatives have to draw a line in the sand.
unidentified
We have to say that is not acceptable.
That is not a future we want.
We actually want our literal biological descendants to have a future, and you are not offering us that future.
So stop it, go away, rethink your lives.
We are not going to allow that.
And I think at a certain point, American conservatives have to, number one, recognize that this is an existential threat to humanity and to our civilization and to the cause of conservatism and to all the traditions and all the religions that we care about.
And number two, we can still do something about it.
geoffrey miller
There are still many, many points of leverage, politically and socially, where we can stop the AI industry from doing what they plan to do, which is basically replace humanity with their little pet machines.
unidentified
Looking at your students, maybe your children and their friends, the young people, are they hopeful?
I mean, the description you gave at NatCon, and I hear this from teachers from K through 12 on into the university, is that ChatGPT has become this sort of, it's almost like a drug, in which they no longer use their own minds but kind of turn it over to this machine.
Yet I do meet a lot of young people who are very alarmed, who are willing to reject it.
So the young people that you see, that you're in contact with, do you see that spark of hope that they're going to have a human future in front of them, that they're willing to fight for that?
Sometimes, yeah.
geoffrey miller
Some of them get it, and some of them know that we're in an existential fight.
unidentified
But honestly, a lot of them are kind of oblivious to those risks.
What most of the students are tuned into, most of the college students, is they have no idea, no idea at all how they're going to make a living, what kind of career they're going to have, what kind of jobs they're going to have.
They see AI automation as ruining any future dignity of work or any meaningful economic role that they might have.
So the young men and women that I see are terrified that they can't plan for the future economically or professionally.
So even apart from are we going to physically survive, you know, when I was in college, we had kind of the luxury of thinking, well, we can aspire to be doctors or lawyers or academics or accountants or do lots of other white-collar professions that have been around for decades and that are likely to be around for decades longer.
We can plan our lives.
AI is taking all of that away from young people.
It is ruining their ability to plan for an economic future.
And a side effect of that is it makes them very pessimistic about trying to find a mate, get married, have kids, because they have no idea how they'll support a family.
So, you know, the economic pessimism has a lot of side effects on their pessimism about their own future relationships and their parenting.
Yeah, that demoralization is horrific.
And even if these technologies do work, they've simply neutralized all of the ambition and meaning from these children's lives.
joe allen
But if they don't, if we don't have radical abundance to look forward to, then we have a lot of ineffective and unmotivated young people who are going to be taking care of us, assuming we live that long.
unidentified
It's a terrifying prospect.
You know, I can only ask so many good questions, and I know you've thought about this very broadly.
In the few minutes we have remaining, are there any aspects of this technological revolution and our human place in it that you would like to communicate to the War Room Posse that maybe I haven't prompted you on, like ChatGPT?
I mean, I'm a little worried that I kind of come across as an anti-tech Luddite, right?
And a lot of us, AI doomers or people who worry about AI safety, get charged with, oh, you're a decelerationist.
You hate all technology.
You want us to go back to living in caves or living like the Amish or whatever.
geoffrey miller
That's absolutely far from the truth.
unidentified
I generally love technology.
And there's a lot of narrow AI systems, domain-specific AI, that I'm pretty excited about.
I think it would be awesome if biomedical AI can actually help us cure certain diseases.
That would be great.
And I'm actually chief science advisor to a matchmaking startup company called Keeper, where we're trying to use very narrow, very domain-specific AI to help people find marriage partners so that they can have a long-term, wonderful relationship and have kids and be well-matched to people who share their values and ideals.
geoffrey miller
So I think there's plenty of honorable and worthy applications of certain kinds of narrow AI to really improve human life.
It's really just the powerful, agentic, autonomous, decision-making, artificial superintelligence.
unidentified
That's where the danger is.
If we offload human decision-making to those kinds of systems, that could be very, very bad.
But if we gradually and thoughtfully incorporate certain kinds of narrow AI into our lives, I think that could actually be very good.
Dr. Miller, I really, really appreciate you bringing your perspective here.
joe allen
I think that diversity of opinion is extremely important at this time.
And your perspective, I think, sheds a lot of light on issues that maybe many of us wouldn't have thought about otherwise.
unidentified
Where can people find your work?
Your social media, the latest books, Virtue Signaling.
I just got it, and I look forward to reading it.
I know it was published a few years ago, but where can people find you?
How can they follow your work?
I mean, honestly, just look at my books.
I think my first book, The Mating Mind, tried to be a very good overview of human evolution from a kind of relationship perspective.
I did a book called Spent that's about the evolutionary psychology of runaway consumerism and marketing and advertising and why we do that.
I did a book called Mate that's basically dating advice for young single straight men.
And then the Virtue Signaling book is sort of about the political dimensions of evolutionary psychology and free speech.
Absolutely.
War Room Posse, check it out.
Thank you very much, Geoffrey Miller.
We hope to have you back soon.
And speaking of being spent, September is National Preparedness Month.
So it's the perfect time to ask yourself some questions: like, how much food do you have on hand for emergencies?
How would you get clean water if the tap went dry tomorrow?
What would you do if a storm knocked out the power for a week?
What would you do if superintelligence sent nanobots to consume not only your neighbors, but you?
If you're anything like me, there's some room for improvement on this stuff.
Luckily, our friends at My Patriot Supply are making disaster preparedness easier and more affordable than ever by giving you over $1,500 worth of emergency food and preparedness gear free.
They just launched their Preparedness Month mega kit, and it includes a full year of emergency food, a water filtration system that can purify almost any water source, a solar backup generator, and a lot more.
Even perhaps one day a robot killer.
Go to mypatriotsupply.com/bannon.
joe allen
You get 90 preparedness essentials totaling over $1,500 absolutely free.
Head to mypatriotsupply.com/bannon for full details.
unidentified
When inflation jumps, when you hear the national debt is over $37 trillion, do you ever think maybe now would be a good time to buy some gold?
Until September 30th, if you are a first-time gold buyer, Birch Gold is offering a rebate of up to $10,000 in free metals on qualifying purchases.
To claim eligibility and start the process, request an info kit now.
Just text Bannon to 989-898.
Plus, Birch Gold can help you roll an existing IRA or 401k into an IRA in gold.
Birch Gold is the only precious metals company I trust, as do their tens of thousands of customers.
joe allen
So make right now your first time to buy gold and take advantage.