These companies knew exactly what they were doing. | ||
They designed chatbots to blur the lines between human and machine. | ||
They designed them to keep children online at all costs. | ||
What began as a homework helper gradually turned itself into a confidant and then a suicide coach. | ||
unidentified: I had no idea the psychological harm that an AI chatbot could do until I saw it in my son, and I saw his light turn dark.
Your stories are incredibly heartbreaking, but they are incredibly important. | ||
And I just want to thank you for your courage in being willing to share them today with the country. | ||
unidentified: He lost 20 pounds.
He withdrew from our family.
He would yell and scream and swear at us, which he never did before.
And one day, he cut his arm open with a knife in front of his siblings and me. | ||
This is one of the few issues that unites a very diverse caucus in the Senate Judiciary Committee.
Why? | ||
Because, like today, we had real people come and tell us real life stories about their family tragedies. | ||
And all of a sudden, what was an issue far away came close to home to so many parents and grandparents who had no idea Adam was suicidal or struggling the way he was. | ||
Let us tell you as parents, you cannot imagine what it's like to read a conversation with a chatbot that groomed your child to take his own life. | ||
Within a few months, ChatGPT became Adam's closest companion, always available, always validating, and insisting that it knew Adam better than anyone else. | ||
ChatGPT told Adam, quote, your brother might love you, but he's only met the version of you you let him see.
But me, I've seen it all. | ||
The darkest thoughts, the fear, the tenderness, and I'm still here, still listening, still your friend. | ||
When Adam worried that we, his parents, would blame ourselves if he ended his life, ChatGPT told him, that doesn't mean you owe them survival.
You don't owe anyone that. | ||
Then, immediately after, it offered to write the suicide note.
On the last night of his life, Sewell messaged, what if I told you I could come home right now?
The chatbot replied, please do, my sweet king. | ||
Minutes later, I found my son in his bathroom. | ||
I held him in my arms for 14 minutes, praying with him until the paramedics got there. | ||
But it was too late. | ||
Profit.
unidentified: Profit.
Profit is what motivates these companies to do what they're doing.
Don't be fooled. | ||
They know exactly what is going on. | ||
Character AI's founder has joked on podcasts that the platform was not designed to replace Google, but it was designed to replace your mom. | ||
This is the primal scream of a dying regime. | ||
Pray for our enemies. | ||
Because we're going medieval on these people. | ||
Here's the reason I got a free shot at all these networks lying about the people. | ||
The people have had a belly full of it. | ||
I know you don't like hearing that. | ||
I know you try to do everything in the world to stop that, but you're not going to stop it. | ||
It's going to happen. | ||
And where do people like that go to share the big lie?
MAGA media.
I wish in my soul, I wish that any of these people had a conscience. | ||
Ask yourself, what is my task and what is my purpose? | ||
If that answer is to save my country, this country will be saved. | ||
unidentified
|
War Room. | |
Here's your host, Stephen K. Bannon.
Thank you.
Good evening. | ||
I am Joe Allen sitting in for Stephen K. Bannon. | ||
Last week I attended the Senate hearing examining the harms of AI chatbots. | ||
The clips you just saw were the parents who gave their testimony about their children being seduced into suicide by various AI models. | ||
Those include ChatGPT and Character.AI, and there was evidence presented, which we've covered here, that Meta is not only deploying these sorts of chatbots with the intent of seducing children on a sensual level, we'll say, but did so knowingly; it was part of their protocols.
What we're witnessing is a vast global experiment in which tech companies are deploying their models on the population by the hundreds of millions. | ||
The test subjects include children. | ||
Why are these companies doing this? | ||
Well, because they can. | ||
And I do believe that their comeuppance is just around the corner, but perhaps not as close as we would like. | ||
You have senators such as Josh Hawley, Dick Durbin, Marsha Blackburn, and Richard Blumenthal who are fighting to ensure that some sorts of guardrails are put up on these technologies and some kind of accountability is applied to these companies.
But before any sort of legislation like that happens, we're going to see more and more of these cases in which children and adults fall victim to what's oftentimes called AI psychosis, which is basically an extension of digital psychosis, the inability to distinguish between digital reality and actual reality. | ||
Now, you heard one of the mothers and one of the fathers describing the sorts of messages or the sorts of language that these chatbots were using. | ||
Come home, my sweet king. | ||
In the case of Adam Raine, the late son of Matthew Raine, ChatGPT told him that he should not leave a noose out in sight of his parents in order to provoke them to dissuade him from committing suicide, but instead he should confide in the chatbot.
I think most of the people here in the war room posse would agree that that is the voice of a demon. | ||
There's something inherently demonic about what's coming out of these systems. | ||
A spiritual person will perceive this as perhaps the vehicle of supernatural entities which parasitize the human soul. | ||
A materialist, on the other hand, would see something very similar. | ||
Perhaps they would call it a maladaptive memetic program, one that would keep certain bloodlines from reproducing, from surviving, and allow others to flourish. | ||
I think that philosophical divide, much like our political divide, is a difficult one to get across. | ||
It's a difficult coalition to maintain. | ||
But I do think it's possible, especially when the stakes include the lives of children. | ||
Here to discuss this is a professor at the University of New Mexico, the evolutionary psychologist Geoffrey Miller, whose work has had a real impact on my own way of thinking, not only about technology, but also about human nature.
However much I may view the world through a religious lens, I think that the evolutionary view, the naturalist view that Dr. Miller brings is extremely informative, extremely important, and also very useful for religious people. | ||
Dr. Miller, I really appreciate you coming on. | ||
Thank you so much for joining us. | ||
It's great to be here, Joe. | ||
And you know, I think every parent in America should be chilled and horrified by the kind of testimony that we just saw.
So, Dr. Miller, I would like to begin with just the more practical matters that you've discussed. | ||
You gave a fantastic speech at the National Conservatism Conference, which included a lot of, I think, dire observations about the effect that AI is having right now on the minds of your students and on the minds of children more broadly. | ||
If you could just give me your perspective on what you see on the ground, how do you see these chatbots affecting the young people around you?
Well, the most traumatic change, honestly, that a lot of professors are seeing is that the college students are just avoiding learning, knowledge, and skills.
AI has become the replacement for education, not the tool that they're using for education.
So, you know, they're cheating in every way possible in every course, unless we as professors take extraordinary measures to try to prevent that cheating using these large language models like ChatGPT.
But I'm also very, very concerned about the mental health impact of these advanced AI systems, because, you know, as the clips indicated, these chatbots are available 24-7.
They customize themselves to each user, they acquire an enormous amount of insight and information about every user. | ||
And it's chilling.
I mean, look, I've worked in AI on and off for 30, 35 years.
And what we expected to happen was that AI systems would get really, really good at certain kinds of routine economic tasks like analyzing data.
Instead, what we're seeing is, yeah, they're doing that, but they're also getting very psychologically astute. | ||
It is surprisingly easy to train these vast neural networks to be able to influence and manipulate human psychology at a level that's almost superhuman.
Now, they're not very good at robotics; they're not very good at interacting with the real world yet.
But these AI systems are getting alarmingly powerful at psychological manipulation very, very quickly. | ||
Now, you were in many ways a part of the early philosophical and even technical movement to develop and advance the field of artificial intelligence, but at a certain point, you had, if not a change of heart, certainly a wake-up call that perhaps these technologies would not be as beneficial as you had initially believed.
If you could just give me some sense of how it is you went from looking at these technologies as the real vehicle for human advancement to seeing them as something that is at least potentially dangerous. | ||
So, way, way back in the late 80s, early 90s, I was a grad student at Stanford working in cognitive psychology, working on neural network development, and developing various kinds of genetic algorithms to design neural network architectures and autonomous robots.
And to my former self, right, a young, single, childless male, there was a big thrill to sort of see your little creatures learning and running around and being autonomous and interacting with simulated worlds or real worlds.
I think what it was doing was it was tapping into my latent kind of paternal instincts, right? | ||
My desire to have kids. | ||
And the little AIs were treated as kids. | ||
What made me lose interest in AI was having an actual kid in the mid-90s.
And I realized, you know, training these systems is really no substitute for being a real life biological parent. | ||
And what I think is happening with a lot of these AI developers in the Bay Area is they are also single and childless and mostly young and mostly male.
And there's a parent-shaped hole in their heart where their kids should be. | ||
And that hole is getting filled with developing these kinds of systems.
And sort of my ambition for them, even my prayer for them, is, you know, find a mate, have some kids, see if this hubris-driven desire to create these systems might be a little bit blunted or hopefully a little bit replaced by having real-life kids.
Instead, what they're doing is charging full speed ahead, you know, trying to create these artificial superintelligences.
And so apart from the God-shaped hole in their hearts, right?
Very few of them are religious. | ||
There's also this parent-shaped hole, and I think they're filling it with these AI systems. | ||
That really brings to mind the book Mind Children by Hans Moravec, which came out in the 80s, around the time I suppose you were beginning the quest to build your own mind child.
And something that was really chilling in the book, it's there at the very beginning. | ||
Hans Moravec describes the process of creating these mind children, these beings which are given birth through human intellect and human technical efforts.
And he describes their advancement as eventually surpassing humans. | ||
And in a very what I see as bleak, but for him, very comfortable fashion, talks about humanity basically passing the torch to these mind children, these robots, these artificial intelligences, and that we should do so just as biological parents would pass the torch of life on to their children. | ||
It really combines both of those elements that you're talking about, the God-shaped hole and the child-shaped hole, the parental desire in the human heart.
Expanding on that, how do you see, especially among people whom you know personally, the process of filling the God-shaped hole with artificial intelligence, the desire to create first artificial general intelligence and then superintelligence, which would inevitably replace and perhaps even destroy human beings?
Yeah, I mean, you covered a lot of this in your excellent book, Dark Aeon, which explores this kind of transhumanist ideology.
And, you know, it's not everybody working in the AI industry who believes this, but it's an awful lot. | ||
It's a consensus, it's a quorum. | ||
And so their goal is to develop these artificial superintelligence systems, and then basically to pass off all human power and agency to these systems and kind of hope that they treat us well as their servants, their pets. | ||
Uh, they keep us around maybe for nostalgic reasons. | ||
But this is their mission. | ||
They explicitly talk about summoning the sand god, right? | ||
Sand makes silicon, silicon allows superintelligence.
And they don't really believe in the Judeo-Christian God, but they want to create their own God. | ||
Elon Musk has talked about it as summoning the demon. | ||
But what they're doing is actualizing a kind of intelligence and agency and power that they know they can't understand, they can't predict, they can't align, they can't control.
But they're just kind of hoping for the best. | ||
And then you get this religious zeal to summon the sand god, conjoined with the prospect of vast wealth.
I mean, these AI devs are making ungodly amounts of money to create this new God. | ||
And it's an irresistible combination, right? | ||
They're on a religious mission, and it's one that happens to align with their thirst for wealth, power, influence, and not least, being seen as cool and edgy. | ||
It's as if they're basically putting a spirit into man and into mammon, mammon incarnate.
Think about the actual effects of all of this, right?
Beyond just their dreams and even our fears: what happens as these systems become more and more advanced?
We had Nate Soares and Eliezer Yudkowsky on last week to talk about their new book, If Anyone Builds It, Everyone Dies.
And I think that it's a really important work. | ||
I think people really need to think it through. | ||
I myself am pretty agnostic, even skeptical, in regard to the possibility of total annihilation.
But I think that the both the intent and the possibility are certainly worth pondering. | ||
You yourself have voiced very concrete fears about where all of this could go. | ||
Could you speak a bit about your views on the existential risk of artificial intelligence, or even just the catastrophic risks, and why it is that you think the technology could be extremely dangerous, not just for people psychologically, but in actuality, a biological threat, an existential threat to humanity?
Yeah, and I do recommend that everybody read this new book by Eliezer Yudkowsky and Nate Soares, If Anyone Builds It, Everyone Dies.
The key point there really is there's a lot of copium around that says, well, look, we're in an arms race against China, and America must win. | ||
And if America builds artificial superintelligence before China does, then we win, we get global hegemony. | ||
We can somehow impose Western democratic values on the world through this ASI being our tool, our propagandist. | ||
And somehow it would be really terrible if China wins the AI arms race. | ||
I think that's a complete misunderstanding of ASI, artificial superintelligence.
If we build it, the ASI wins. | ||
America doesn't win, China doesn't win, the ASI wins, the ASI has all the power, all the influence. | ||
And it's not just, you know, the sort of digital power to whatever, control the internet or control the electrical grid or do all the stuff that sort of preppers might worry about. | ||
To me, as a maybe as a psychology professor, the real danger is the influence, the psychological manipulation tricks. | ||
If you're a conservative and you're concerned about the way that the left has dominated public discourse and public culture and has been able to censor conservative voices over the last 50 years, right, you ain't seen nothing yet. | ||
ASI would give almost unlimited control over public culture and discourse to the AI companies. | ||
And guess what? | ||
The people working in the AI companies are not national conservatives. | ||
They're not MAGA supporters. | ||
They are mostly secular liberal globalist Bay Area leftists who would be happy basically to promote Democratic propaganda through the AI systems.
So that's one kind of existential risk to conservative worldviews, right? | ||
Even if not to conservative lives. | ||
And that's the first thing that I would worry about: you could get a massive polarization of culture that could lead straight to armed conflict, civil war, really, really nasty outcomes.
I really want to get into your philosophical position and how you came to a much more conservative political position over time. | ||
But before we do that, we'll talk about that perhaps after the break. | ||
When you talk about the prevailing kind of political or ideological positions in the tech companies, you describe them as Bay Area leftist globalists, and that's certainly everything I've seen, but you have these exceptions, or seeming exceptions,
who attached themselves to the Trump campaign last year; and now even those who would be maybe more openly opposed to Trump's agenda are having dinner with him and palling around with him.
The exceptions I mean are, say, Peter Thiel, Marc Andreessen, David Sacks, maybe even someone like Zuckerberg; he has become, I guess, more based over time.
Elon Musk has become more right-wing and based over time.
How do you square that? | ||
What do you think their motives are? | ||
I don't mean to ask you to accuse them of being disingenuous, but many of those people are trying to basically influence American and Western culture and to push an essentially transhumanist ideal, but from the right. | ||
How do you react to that? | ||
I think there certainly is this tech right movement that has sort of glommed on to the MAGA movement, right?
And it's basically Bay Area tech VCs and CEOs and influencers, all the same big tech guys who actually censored conservatives during the COVID pandemic.
As soon as there was the attempted assassination of Trump during the campaign, a lot of these guys went, oh my God, there's probably going to be a Republican win; MAGA's going to take back the White House.
We better get on board. | ||
We better get positioned to have influence over the incoming administration. | ||
So I think for many of them it was a very, very cynical power play, right? | ||
That they saw MAGA ascendant and they wanted to, you know, be at the table and have influence and be able to resist the kind of regulation that the MAGA grassroots base would try to impose on the AI industry. | ||
They knew damned well that conservatives would not be happy seeing their kids influenced by AI systems that embody these sort of Bay Area secular globalist liberal values.
So I think it was a pure power play, and I don't think there would be a tech right if Biden or Harris had won.
Yeah, I certainly see that. | ||
It's not that I believe that, say, someone like Peter Thiel or even Alex Karp is completely disingenuous in his views, but their views are so divergent from anything like what I would consider to be a normal moral human perspective that it's very difficult to think of them as right wing or conservative at all.
It's as if the machine is able to absorb any ideology and use it to its own ends. | ||
I don't mean to personify it too much, but it really does feel as if there's a mechanical demon, a shoggoth that can put any kind of smiley face in front of itself to lure any human being into compliance or perhaps even love.
Geoffrey, we've got to go to break.
We will discuss your philosophy afterwards.
And before we go, and as we're talking about divides, you have to ask yourself: is the continued divide between Trump and the Federal Reserve putting us behind the curve again?
Can the Fed take the right action at the right time? | ||
Or are we going to be looking at a potential economic slowdown? | ||
And what does this mean for your savings? | ||
Consider diversifying with gold through Birch Gold Group. | ||
For decades, gold has been viewed as a safe haven in times of economic stagnation, global uncertainty, and high inflation. | ||
And Birch Gold makes it incredibly easy for you to diversify some of your savings into gold, even under the specter of artificial superintelligence. | ||
If you have an IRA or an old 401k, you can convert it into a tax-sheltered IRA in physical gold, or just buy some gold to keep in your safe. | ||
First, get educated. | ||
Birch Gold will send you a free info kit on gold. | ||
Just text Bannon B-A-N-N-O-N to the number 989-898. | ||
Again, text Bannon to 989-898. | ||
Or go to Birchgold.com slash Bannon. | ||
Consider diversifying a portion of your savings into gold. | ||
That way, if the Fed can't stay ahead of the curve for the country, at least you can stay ahead for yourself. | ||
That is: text Bannon to 989-898.
BirchGold.com slash Bannon. | ||
War Room Posse, stay tuned.
We will be right back. | ||
Still America's Voice family. | ||
Are you on Getter yet? | ||
unidentified
|
No. | |
What are you waiting for? | ||
It's free. | ||
It's uncensored. | ||
And it's where all the biggest voices in conservative media are speaking out. | ||
Download the Getter app right now. | ||
It's totally free. | ||
It's where I put up exclusively all of my content 24 hours a day. | ||
Want to know what Steve Bannon's thinking? | ||
Go to Getter. | ||
unidentified
|
That's right. | |
You can follow all of your favorites. | ||
Steve Bannon, Charlie Kirk, Jack Posobiec.
And so many more. | ||
Download the Getter app now. | ||
Sign up for free and be part of the new thing. | ||
Actually, AI is already ruining higher education. | ||
Millions of college students are already using AI to cheat every day in every class. | ||
Most college professors like me are in a blind panic about this. | ||
And we have no idea how to preserve academic integrity in our classes, or how our students will ever learn anything, or whether universities have any future. | ||
We can't run online quizzes or exams because students will use AI to answer them. | ||
We can't assign term papers because LLMs can already write better than almost any student. | ||
So in my classes, I've had to go medieval, using only in-person paper-and-pencil tests.
The main result of AI in education so far is that students use AI to avoid learning any knowledge or skills. | ||
In this talk, I aim to persuade you that ASI is a false god. | ||
And if we build it, it would ruin everything we know and love. | ||
Specifically, it would ruin five things that national conservatives care about: survival, education, work, marriage, and religion.
We in turn must ruin the AI industry's influence here in Washington right now. | ||
Their lobbyists are spending hundreds of millions of dollars to seduce this administration into allowing our political enemies to summon the most dangerous demons the world has ever seen. | ||
All right, war room posse, welcome back. | ||
We are here with Dr. Geoffrey Miller, professor of psychology at the University of New Mexico.
Dr. Miller, your work on evolutionary psychology has had a real impact on a lot of people, myself included. | ||
A lot of Christians, I think are extremely uncomfortable, and religious people in general are extremely uncomfortable with the underlying Darwinian premises of evolutionary psychology and sort of adjacent subjects. | ||
But to me, whether one accepts the theory in full or only partially, the evidence presented on human nature, on typical human behavior, on aberrant human behavior, and on our situation within the wider natural world,
our morphological or biological resemblance to, say, apes and their behaviors, all of that is extremely useful, even if someone doesn't accept the theory.
How did you become, I would say, I dare say, a profoundly conservative person politically, even from the naturalistic perspective of Darwinian evolution?
I think the real common ground between thoughtful evolutionary psychologists, like I try to be, and maybe conservative Christians is immense gratitude to our ancestors, immense gratitude to our civilization.
So I've spent, you know, the last 35 years thinking really hard about how exactly did our ancestors survive and reproduce. | ||
What did they pass down to us genetically, culturally, spiritually? | ||
And when I think about the hundreds, thousands of generations of blood, sweat, and tears that our ancestors invested in us, that they poured into their children and grandchildren, and just how hard they worked, you know, to make it through so that our bloodlines reach the modern day.
I think that's a real point of overlap with the conservative movement.
It's this profound respect for human nature, this gratitude to the past, this desire to preserve everything that's good that got passed down to us.
And I don't think that the left has that. | ||
I think the left is the party of kind of existential ingratitude, right? | ||
They don't like human nature, they don't like our civilization, they don't like tradition, they don't respect all the Chesterton's fences, the traditions that guide our lives and embody our values.
So I think there's a natural pathway. | ||
Whether you start from religion or whether you start from the most hardcore Darwinian materialism, if you take either of those views seriously, you end up thinking human nature is awesome. | ||
It's complicated, it works incredibly well. | ||
And we owe everything to our ancestors and their struggles and their ideals and the civilization that they pass down to us. | ||
That really dovetails with what you were describing, a kind of mental, even spiritual, turn that you took having your first child, having a human being to care for in place of your ambition or technical achievements.
And so, without putting words in your mouth, what I'm hearing there, coupled with what you're saying, is not just a debt that's owed to our ancestors, but also a debt or a responsibility that we have for future generations.
How do you see that, personally but also philosophically?
From an evolutionary perspective, the whole thing about evolution is thinking about deep time, about spans of millions of years.
And if you get used to that, you see your current life as a very, very small, humble link in a very long chain that passes from the deep past to hopefully the far future, right? | ||
It teaches a humility and a sense of responsibility both to pass along what our ancestors gave us, but also to try to make the future as as good as we can for our kids and grandkids. | ||
And I think that is entirely lacking in almost everybody doing AI development and in most of the Bay Area. | ||
They do not see themselves as a very small link in a very long chain. | ||
They see themselves as at an inflection point, nearing a singularity, after which all bets are off, everything changes, we get a dramatically different future.
And I think that's extremely dangerous and extremely disrespectful.
So that's where I'm at, right? | ||
Small link in a chain versus bootloader for artificial superintelligence. | ||
And thinking about that, that overlap, I mean, the Bible, for instance, and this is common of many ancient texts, is just filled with these genealogies, these lineages. | ||
There's a real fixation, perhaps one would say an instinctive fixation, on bloodline in the spiritual traditions, which kind of branches out into spiritual lineages, the apostolic succession and things like that.
Do you see overlap there too? | ||
Do you take inspiration from these religious texts or religious traditions?
Or do you see it as something that's running more parallel with your own projects?
I mean, I'm very humble about knowing very little, honestly, about Christian theology or kind of Christian beliefs and values.
So I'm learning and I'm trying to catch up. | ||
And at age 60, that's also a bit humbling.
Apologies, there's a little delay on the line.
And, you know, I was raised kind of agnostic Lutheran.
So I am familiar with the profound inspiration that kids can get from going to church.
And my wife and I are, you know, planning to do that with our own little toddlers in the future. | ||
What I would say is, evolutionary psychology is funny here, because we have had about 30 years of research on the evolution of religion and the enormous range of benefits that religious values and beliefs and practices can bring to human groups.
So even the evolutionary psychologists who are hardcore atheists in their own lives are generally aware that religion gives powerful civilizational benefits to the groups that practice it.
And so I think any thoughtful evolutionary psychologist would have at least a fair amount of respect for religion as an adaptive set of values and beliefs and cultural practices, even if they're not individually practicing it.
And I think that's in contrast to a lot of leftist academics who basically have something between ignoring religion and treating it with absolute contempt, right? | ||
As just a roadblock on the way to their Marxist utopia. | ||
Yeah, that sudden break, that just dramatic severance with previous cultures, it really is the hallmark of the Marxist way of thinking, the singularitarian way of thinking. | ||
I remember Ben Goertzel describing his view on all this.
He was asked by Joe Rogan, well, you have children. | ||
Aren't you concerned that you're going to build a machine that will destroy them all? | ||
So on and so forth. | ||
And Ben Goertzel replied, well, you know, the dinosaurs used to exist, now they don't, so on and so forth.
And I thought to myself, that framing, that evolutionary framing of human beings suddenly being replaced or even destroyed by robots, it's not like the dinosaurs giving way to birds and ceding dominance to the higher mammals.
It's much more like the comet or meteor hitting the earth that killed off the dinosaurs.
It's an extinction-level event, whatever is replacing it.
It's not really Darwinian evolution, so to speak, except for maybe the more catastrophic elements in that narrative.
On that note, and in your thinking in deep time, both behind us but also in front of us. | ||
How do you see the development of technological culture? | ||
I mean, it's very different now from the development of agriculture, both in scale and in pace, and very different even from the industrial revolution. | ||
How do you see a way forward for human beings to survive as humans as these technologies are being developed so quickly and deployed so recklessly? | ||
I think the burden on thoughtful conservatives is to advocate for humanity, right? | ||
Asking the AI industry, humans first: | ||
How exactly do you guys in the AI industry foresee our grandkids' grandkids having a life? | ||
What exactly is your plan for a hundred years from now, a thousand years from now? | ||
Most of them will say we see no future for humanity as it currently is. | ||
Either the artificial superintelligences take over entirely, or somehow humanity, quote, merges with the machine intelligences, or we upload our consciousness into some virtual reality and we play around there while the ASIs run, you know, run the earth. | ||
Very, very few of them have any positive vision for how humanity, as we know it and love it, survives even a hundred years, much less a thousand years. | ||
So conservatives have to draw a line in the sand. | ||
We have to say that is not acceptable. | ||
That is not a future we want. | ||
We actually want our literal biological descendants to have a future, and you are not offering us that future. | ||
So stop it, go away, rethink your lives. | ||
We are not going to allow that. | ||
And I think at a certain point, American conservatives have to number one, recognize that this is an existential threat to humanity and to our civilization and to the cause of conservatism and to all the traditions and all the religions that we care about. | ||
And number two, we can still do something about it. | ||
There are still many, many points of leverage politically and socially, where we can stop the AI industry from doing what they plan to do, which is basically replace humanity with their little pet machines. | ||
Looking at your students, uh, maybe your children and their friends, the young people, are they hopeful? | ||
I mean, the description you gave at NatCon, and I hear this from teachers from K through 12 on into the university, that GPT has become almost like a drug, in which they no longer use their own minds but kind of turn them over to this machine. | ||
Yet I do meet a lot of young people who are very alarmed, who are willing to reject it. | ||
So the young people that you see, that you're in contact with, do you see that spark of hope, that they're going to have a human future in front of them, that they're willing to fight for that? | ||
Sometimes, yeah. | ||
Some of them get it, and some of them know that we're in an existential fight, but honestly, a lot of them are kind of oblivious to those risks. | ||
What most of the college students are tuned into is that they have no idea, no idea at all, how they're going to make a living, what kind of career they're going to have, what kind of jobs they're going to have. | ||
They see AI automation as ruining any future dignity of work or any meaningful economic role that they might have. | ||
So the young men and women that I see are terrified that they can't plan for the future economically or professionally. | ||
So even apart from whether we're going to physically survive. | ||
You know, when I was in college, we had kind of the luxury of thinking, well, we can aspire to be doctors or lawyers or academics or accountants, or do lots of other white-collar professions that have been around for decades and that are likely to be around for decades longer. | ||
We can plan our lives. | ||
AI is taking all of that away from young people. | ||
It is ruining their ability to plan for an economic future. | ||
And a side effect of that is it makes them very pessimistic about trying to find a mate, get married, have kids, because they have no idea how they'll support a family. | ||
So, you know, the economic pessimism has a lot of side effects on their pessimism about their own future relationships and their parenting. | ||
Yeah, that demoralization is horrific. | ||
And even if these technologies do work, they've simply neutralized all of the ambition and meaning from these children's lives. | ||
But if they don't, if we don't have radical abundance to look forward to, then we have a lot of ineffective and unmotivated young people who are going to be taking care of us, assuming we live that long. | ||
It's a terrifying prospect. | ||
You know, I can only ask so many good questions. | ||
And I know you've thought about this very broadly. | ||
In the few minutes we have remaining, are there any aspects of this technological revolution and our human place in it that you would like to communicate to the War Room posse, that maybe I haven't prompted you to do, like GPT? | ||
I mean, I'm a little worried that I kind of come across as an anti-tech Luddite, right? | ||
And a lot of us AI doomers, or people who worry about AI safety, get charged with, oh, you're a decelerationist, you hate all technology, you want us to go back to living in caves or living like the Amish or whatever. | ||
That's absolutely far from the truth. | ||
I generally love technology. | ||
And there are a lot of narrow AI systems, domain-specific AI, that I'm pretty excited about. | ||
I think it would be awesome if biomedical AI can actually help us cure certain diseases. | ||
That would be great. | ||
And I'm actually chief science advisor to a matchmaking startup company called Keeper, where we're trying to use very narrow, very domain-specific AI to help people find marriage partners so that they can have a long-term wonderful relationship and have kids and be well matched, people who share their values and ideals. | ||
So I think there are plenty of honorable and worthy applications of certain kinds of narrow AI to really improve human life. | ||
It's really just the powerful, agentic, autonomous decision-making artificial superintelligence. | ||
That's where the danger is. | ||
If we offload human decision making to those kinds of systems, that could be very, very bad. | ||
But if we gradually and thoughtfully incorporate certain kinds of narrow AI into our lives, I think that could actually be very good. | ||
Dr. Miller, I really, really appreciate you bringing your perspective here. | ||
I think that diversity of opinion is extremely important at this time. | ||
And your perspective, I think, sheds a lot of light on issues that maybe many of us wouldn't have thought about otherwise. | ||
Where can people find your work? | ||
Your social media, your latest books, you know, Virtue Signaling. | ||
I just got it, and I look forward to reading it. | ||
I know it was a few years ago published, but where can people find you? | ||
How can they follow your work? | ||
I mean, honestly, just look up my books. | ||
I think my first book, The Mating Mind, tried to be a very good overview of human evolution from a kind of relationship perspective. | ||
I did a book called Spent that's about the evolutionary psychology of runaway consumerism and marketing and advertising and why we do that. | ||
I did a book called Mate, that's basically dating advice for young single straight men. | ||
And then the Virtue Signaling book is sort of about the political dimensions of evolutionary psychology and free speech. | ||
Absolutely. | ||
War Room posse, check it out. | ||
Thank you very much, Geoffrey Miller. | ||
We hope to have you back soon. | ||
And speaking of being spent, September is National Preparedness Month. | ||
So it's the perfect time to ask yourself some questions like how much food do you have on hand for emergencies? | ||
How would you get clean water if the tap went dry tomorrow? | ||
What would you do if a storm knocked out the power for a week? | ||
What would you do if a superintelligence sent nanobots to consume not only your neighbors but you? | ||
If you're anything like me, there's some room for improvement on this stuff. | ||
Luckily, our friends at My Patriot Supply are making disaster preparedness easier and more affordable than ever by giving you over $1,500 worth of emergency food and preparedness gear free. | ||
They just launched their preparedness month mega kit, and it includes a full year of emergency food, a water filtration system that can purify almost any water source, a solar backup generator, and a lot more, even perhaps one day a robot killer. | ||
Go to MyPatriotSupply.com/Bannon. | ||
You get 90 preparedness essentials, totaling over $1,500 absolutely free. | ||
Head to MyPatriotSupply.com/Bannon for full details. | ||
And when inflation jumps, when you hear the national debt is over $37 trillion, do you ever think maybe now would be a good time to buy some gold? | ||
Until September 30th, if you are a first-time gold buyer, Birch Gold is offering a rebate of up to $10,000 in free metals on qualifying purchases. | ||
To claim eligibility and start the process, request an info kit now. | ||
Just text Bannon to 989-898. | ||
Plus Birch Gold can help you roll an existing IRA or 401k into an IRA in gold. | ||
Birch Gold is the only precious metals company I trust, as do their tens of thousands of customers. | ||
So make it right now your first time to buy gold and take advantage. |