Lee Fang Answers Your Questions on Charlie Kirk Assassination Fallout; Hate Speech Crackdowns, and More; Plus: "Why Superhuman AI Would Kill Us All" With Author Nate Soares
Lee Fang answers questions on Pam Bondi, calls for censorship after Charlie Kirk's assassination, the TikTok ban, and more. Plus: author and AI researcher Nate Soares discusses the existential threats posed by superhuman AI. ----------------------------- Watch full episodes on Rumble, streamed LIVE 7pm ET. Become part of our Locals community Follow System Update: Twitter Instagram TikTok Facebook
I'm a journalist in San Francisco, and I'll be your host of System Update.
Glenn Greenwald is out of town.
The war on free speech escalated in the last 24 hours with the suspension of Jimmy Kimmel Live.
Disney, the parent company of ABC, decided to take Kimmel off the air following an off-color joke by the late night comedian, insinuating that the assassin who shot Charlie Kirk was a conservative, and that Trump did not sincerely grieve the death of his friend Charlie Kirk.
But this suspension was not the result of a viewer backlash or normal market forces.
Brendan Carr, a previous guest of System Update and now chair of the FCC under President Trump, took offense at Kimmel's remarks and threatened the licenses of TV stations that carry ABC.
Within moments, the two largest owners of TV stations in America, Sinclair Broadcasting and Nextstar, said they would suspend carrying Kimmel's show.
Sinclair went so far as calling for Kimmel to donate to Kirk's family and to Kirk's political organization, turning points USA.
President Trump, posting on Truth Social from England where he was visiting, celebrated the cancellation and called for a similar removal of Jimmy Fallon and Stephen Colbert.
This is truly a turning point for the Trump administration.
On day one of his inauguration, Trump promised a new golden era of free speech and free expression.
He dispatched JD Vance to Europe to proclaim that defending free speech against government coercion was a new pillar of American foreign policy.
FCC chair Brendan Carr remarked that satire is the oldest and most important form of free expression.
And now and said he opposed censorship of comedy.
And Elon Musk, he proclaimed earlier this year that comedy is now legal under Trump.
Not so fast.
Things changed very quickly in the last few months.
But in this moment of crisis, in a moment of grief, just as liberals turned against the First Amendment after the death of George Floyd, conservatives are now waving the banner of censorship.
And in many respects, one could argue that the dynamics are much worse now.
Note that the government has unique power over this situation.
Not only because broadcast TV licenses are closely regulated and controlled by the FCC through the so-called public interest standard, a subject, a subjective standard that is right for abuse by any partisan, but Nextar and Disney are facing FCC approval for proposed mergers with competitors.
In other words, getting in the good graces of government through censorship of political opponents will make both businesses more profitable and more concentrated.
And this is far from limited to broadcast television.
We have the vice president calling for snitching campaigns, for Americans to report one another for politically incorrect speech.
We have new congressional hearings demanding a crackdown on social media.
And we have the Justice Department suggesting it will prosecute people for offensive speech.
Make no mistake about it.
We're entering a new crisis period for the First Amendment.
First Amendment.
But it's not just sleep, CB Distillery has solutions that work with your body to help with stress, pain after exercise, even mood and focus, and it's all made with the highest quality clean ingredients.
No filters, just premium CBD.
Imagine waking up arrested or enjoying your day without those nagging aches and pains.
That's the real solution of CB Distillery's solutions.
I can assure you it is a product they use.
I don't believe in pharmaceutical products or sleep.
I'd much rather use organic policy product.
I certainly don't believe in pharmaceutical products for pain.
This is a natural soothing ointment that you can use for common kinds of pain or for reducing stress.
And that's why I feel so comfortable Recommending this.
So if you're ready for better sleep, less stress, and feeling good in your own skin again, try CBD from CB Distillery.
And right now you can save 25% off your entire purchase.
Visit CB Distillery and use promo code Glenn.
That's CBDistillery.com, promo code Glenn.
cbdistillery.com specific product availability depends on individual state regulations.
All right.
I want to get to some reader questions and comments submitted on locals.
First is from at Brugsey.
What do you think about Pam Bondi taking away free speech?
It makes no sense given that Trump ran on it as a platform.
I agree.
It doesn't make any sense given Trump's promises during the campaign or his comments, even at on the inauguration or the first week of this administration.
But it does make sense given that Pam Bondi has no history of a commitment to free speech.
This is someone who's been a career prosecutor from Hillsborough County, and then the AG of Florida, with no particular record of fighting for free speech or really taking up this issue.
She then worked as a lobbyist and an attorney defending President Trump and working for a variety of clients, both corporations and foreign governments.
It's been uh politically savvy to kind of embrace censorship when Republicans were out of power the last four years, but very few had a true commitment to the ideal.
Um Bondi has never expressed any particular support for this idea.
So again, this is not surprising from her, and it's not surprising from a lot of these Republican officials that are now flipping on the issue.
The next question is from Alex Wade Marsh.
Hi, thanks for hosting.
I love your substack.
As a free speech absolutist, I'm having trouble with why other free speech absolutists, I believe like yourself, but correct me if I'm wrong, think it is acceptable to dox or fire people who say things that are odious or hateful to them, like, for example, celebrating Charlie Kirk's brutal assassination.
Is their speech not protected?
I can totally understand why an elementary school teacher would be fired for forcing her young charges to watch the assassination video or to have her opinions about Charlie Kirk shoved down their throats.
But why any business has the right to fire whomever they choose for whatever reason?
But well, I think the you know, this gets back to the previous question.
I think you have to look at the very few people who are principal defenders of free speech.
And the way that you know this is to see if someone defends the offensive or controversial speech of their political opponents.
You know, I'm heartened by the many examples that we see of people standing up and fighting for the rights of the other side.
Um, you know, we saw earlier this year with Rumessa Osturk, the Tufts University grad student who was arrested, and the Trump administration attempted to deport her for writing an opinion column criticizing uh her university's ties to Israel.
In response to that, many Jewish groups, including several local Jewish groups uh in her community, stood up and defended uh Osturk on free speech grounds, not and saying not in my my name, um, even if they they might have disagreed with some of her arguments.
Um, the same goes for the Biden administration.
Although there were many voices in that administration who were pro-censorship, who really did not care about the principles around free speech and free expression, uh, there were a number of officials who went out and attempted to defend the speech of conservatives.
Um, people like Rohit Chopra, the former head of the CFPB, uh, who's gay, uh, who defended uh the rights of Christian conservatives and proposed rulemaking and regulations to protect uh uh even uh anti-gay uh Christian groups from being debanked or removed from the financial system?
So, you know, constantly we see this kind of dynamic where there are fair weather free speech supporters, people who claim to care about this principle, but they never care, they don't stand up and fight uh for when it's most needed, which is when uh an un with a disfavored or opposing voice uh gets stifled.
The next question is from Somni451.
Do you think liberal quote intellectuals or podcasters like Ezra Klein are saying positive things about Charlie Kirk to promote their own brand as well as self-preservation?
Well, if I understand this question correctly, you're asking is Ezra Klein, who has been very public, he's written some New York Times columns and been out uh in the media defending Kirk, saying that you know Kirk did the right thing by debating nonviolently, by engaging with his opponents, we need more of that, and called for his fellow liberals, people on the left, to respect Kirk's legacy.
Um, do I think that was authentic?
I I do on some level.
You know, I've seen some of the leftist criticism of Ezra Klein, uh, people like Tanesi Coates, um, uh many others who have argued that you know Klein was whitewashing uh Kirk's uh history, that he he wasn't really reflecting on racist or bigoted comments by Kirk.
Um look, at the end of the day, uh the way that we have democracy, that we have a republic, we have an open society where we can interact and and engage and figure things out as a as a big country of 340 million or so, um, is that the bare prerequisite is non-violence.
Once you kind of pop the cork and allow the genie of violence to enter society where disputes are settled through killings or attacks or intimidation, then you don't have a free society.
Uh, I think what Ezra Klein is doing, look, this the whole point of having the media and having the first amendment is debate.
I think it's fine that that people like Coates are debating him and challenging his ideas.
But the fact that Klein is simply just set setting this baseline of nonviolence and engagement, that's very important.
Uh good friend of mine, friend of the show, Zed Jelani has an excellent column on his substack this week, pointing out that we have a great tradition of this in American history, where when uh George Wallace, the staunch segregationalist, was shot in the early 70s when he was running for president.
One of the first people to visit him in the hospital in Maryland where he was shot was Shirley Chris Holm, the very first uh female African-American member of Congress, you know, big opponent of Wallace's ideas, um, did not agree with him, uh, especially on this on the civil rights and racial equality issue.
But but as a Christian, as a human, as as someone who uh opposed political violence in all of its forms, she showed showed solidarity and empathy with him.
And that that's really how you keep society together.
And I thought that was a beautiful anecdote that I think perfectly represents the ideals of where we need to go as society today.
We we can't have revenge killings, we can't have revenge violence, we can't have revenge censorship because that just gets us to a darker place uh for no matter where you stand on the political spectrum.
Uh the next question is from Meg Zero.
What do you think about the TikTok ban being pushed back multiple times?
At the point, do you think lawmakers are even concerned about the app having alleged Chinese influence, or is this a power grab for US investors police speech?
Look, I think time will tell, but the early indicators, the kind of public debate around this, you know, there's a lot of congressional debate in public and what the administration officials say uh out loud, but there's probably a lot of private negotiations as well.
We don't see it's not looking good.
Uh, we we saw demands from pro-Israel lawmakers calling for this ban, kind of initiating the ban, not because of the Chinese influence, but because people were exposing uh war crimes or violence or violent intent from Israeli soldiers and people in Israeli society.
There were a lot of people going viral, criticizing the way that Israel has conducted this war in Gaza.
Uh you look at the kind of very bizarre um policy making between Trump and China, where rather than taking a tough approach against China, as was promised during the campaign and what he said uh in terms of what he would do as a president, we're seeing a very deferential approach where there are kind of waivers of tariffs.
Nvidia is allowed to sell its most advanced chips to China, kind of walking back initial demands for a ban or regulation of that.
Instead, Trump just asked for 15% share of some of the revenue from selling to China.
Um, the new Pentagon uh defense policy pivots away from China and more towards the Western hemisphere.
So, you know, this is not a government that's particularly anti-China, at least not so far in the first nine nine months.
This is a China that's actually this is a country that's actually very deferential to China, um, which is I think fairly unexpected.
So this ban does not seem to be about Chinese influence.
Rather, um we're seeing it handed over to a group of investors led by Oracle, a very staunch uh, which is which is controlled uh by Larry Ellison, a very staunch Republican, a very staunch pro-Israel voice.
Uh his son David Ellison, as uh the journalist Jack Poulson revealed on his Subsack this week, uh, was engaged in some of the very early planning for censorship campaigns on behalf of Israel.
So, you know, this is a family that is committed um on Israel related issues, not on China.
Um I mean, but the time will tell.
We'll see what happens with TikTok under the new ownership.
And the next question and and final question is from W. Quaid.
Why do religious death cults like Israeli Zionism and Christian Zionism that now run Congress and U.S. foreign policy get a pass from all media and other religions while breaking every international and moral law.
Well, look, the I've spent a long time talking about this.
And just just you know, out of fairness here, if you look at the kind of end times theology, the eschatology of the major Abrahamic religions, uh, Islam uh also has a very violent end times theology.
It's it's not just Judaism and Christianity.
Um, but you know, as you note, um it's it's certainly Christian Zionism and Israeli Zionism that has more pull in Congress.
And I I think uh the kind of obvious factor here is that Israel has a lot of difficulty uh projecting its image and defending its policies in the West Bank and now in Gaza.
They can't defend this on uh by by showing the facts by by claiming that they're a Western country because they do not use equality under the law.
They kind of use um collective punishment uh for for Palestinians.
They can't say that this is a truly Western nation in terms of treating people as individuals when it's clearly an ethnostate that that privileges one identity group over another.
So the the other kind of argument that is very persuasive among many European and American audiences and and also in Latin America is an appeal to Christianity, um, a kind of interpretation of some forms of Christian Christianity that that claims,
you know, looking at those kind of uh the lines in Ezekiel and other parts of the Bible that talks about the return of Christ, that the only way for the end times to come is a cataclysmic war and for Jews to control Jerusalem, these kind of prophetic interpretations of the Bible, they've instrumentalized or really weaponized to create a uh domestic lobby, a political constituency for constant blank checks to Israel.
So, you know, it does sound very cynical because if you read the the Christian prophecy, it's not good for Jews either.
It's a it's a violent uh event, the end times where Jews are either all killed or converted, and um Israel as it exists today would not exist, it would be kind of obliterated.
But you know, there is this kind of transactional view that if if we can get enough Christians, particularly evangelicals, um especially some of these large kind of televangelists, uh Pastor John Hagee and others to be campaigners who campaigned to bring uh Protestants and many Christians into the pro-Israel fold,
that can be very helpful for winning political debates in Congress and forming a voting block that supports Israel no matter what.
All right.
Um thank you so much for your questions.
Um Glenn Greenwald is returning next week, I believe.
Uh, but I wanted to get to our interview today.
It's a little bit of a change of topic, but an important one nonetheless.
Much of the coverage of the artificial intelligence debate has been on the immediate effects.
Um, the water usage, the energy usage of data centers, what will happen to the millions of people who are employed in jobs that could be quickly automated away from advanced AI, whether that's uh truck drivers or cab drivers or Uber drivers, factory workers, assembly workers, uh folks who work in warehouses or in food delivery, in food preparation or agriculture in paralegals or designers.
Really, the the number of jobs that are at risk of from automation from this advanced AI technology now being developed is very real.
Um, and the there are so many other kind of sprawling factors around uh AI and how it's used.
But my next guest talks about something much bigger than that.
Nate Stores is the co-author of a new book uh hitting shelves this week.
If anyone builds it, everyone dies.
Why superhuman AI would kill us all?
And he is essentially argues that the exponential increases in intelligence and ability in AI pose an existential risk to humanity that might literally kill us all.
Uh to kind of get into the different factors here and and what he sees as an existential risk for humanity.
Um we'll we'll turn to that that interview in just a moment.
Uh just FYI.
This was just um pre-recorded.
So if you see my hair magically decrease in length, and that's because I got a haircut last night.
Uh, and this was pre-recorded.
But thanks again, stay tuned.
Nate, thanks for joining System Update.
Uh, congratulations on your new book.
Thanks.
Yeah, I'm uh I'm excited about it.
I hope it makes a big splash.
Well, I encourage everyone to order uh the book.
If if they build it, everyone dies.
If anyone builds it.
Oh, sorry.
Yeah, it's if anyone builds it, everyone dies.
Oh, superhuman AI would uh kill us all.
Well, thank you for that.
Could you just summarize the book and the motivations for writing it?
Yeah, uh, you know, the the title I think does a decent job of summarizing.
Um the book is the case for how if humanity builds superintelligence using anything remotely like current methods to build AI, uh, the result would be uh the end of all life on earth.
The you know that the argument from one perspective is a very simple argument.
Uh, if we make machines that are smarter than any human that could outthink and outmaneuver us at every turn without knowing what we're doing, that probably wouldn't go well just superficially.
Uh the book sort of goes into a bunch of detail about why that superficial argument holds up under scrutiny.
Um there's many things that people might be a bit surprised by, you know, that the claim that AI kills us is not based in the idea that the AI would have malice, that AI would hate us as a sort of a consequence of utter indifference, you know.
Uh so the book goes over, you know, how would how is AI made?
You know, it's it's sort of grown, not crafted.
We have very little uh ability to point it in the direction we want.
We're already seeing AIs do things nobody asked for.
We're already seeing ASD things nobody wanted.
Uh the book goes over how as AIs get smarter and and more effective and better at completing tasks, they'll wind up with more or less goals.
They'll wind up with uh pursuits, drives, directions of their own that aren't what we wanted.
Um, the book goes over how if we make AIs like that very, very smart, uh, we'll die, not because they hate us, but as a side effect.
And then of course the book goes over uh what the field is doing about it, how that's not up to par and what we need to do in order to get through the situation.
And to talk a little bit about your background.
You've worked in AI uh related development issues for decades.
You've worked for the big tech companies.
How did you get on this path where you've decided, hey, this technology that I've I've helped develop and work on poses such an existential threat.
Yeah.
So, you know, my my co-author has been in this business for decades.
I've been in this business for uh only maybe 12 years.
I'm not quite old enough to have been uh doing it for decades.
But um, yeah, you know, I worked at Microsoft and Google before I joined the Machine Intelligence Research Institute.
Um, you know, we spent many years trying to figure out the technical aspects of how to point an AI in uh a good direction, or really just how to figure out how to uh sorry, really just how to point an AI in any direction at all.
You know, many people in the AI, uh, many people who think about where AI is going, like to say, you know, well, it matters who's in charge, you know, like we have to get it first, or like what would we ask it to do, or you know, what what would be a thing you could get like a really powerful AI to do such that you wouldn't regret it.
Those are all, you know, very interesting philosophical questions, but those are all questions that are far beyond the current abilities we have to sort of aim an AI.
You know, it's it's with anything like the current development methods.
If someone makes AIs that uh that are super intelligent that are smarter than a human at every mental task, those AIs aren't gonna do whatever the person standing closest to them wants them to do.
They're gonna have their own pursuits.
That's sort of a consequence of how of how modern AIs are grown rather than crafted.
Um, and so I I spent, you know, 10, 12 years trying to figure out how we could point AIs in some uh in some direction.
And you know, I started this before the modern AI uh revolution took off.
Um it looked a lot more hopeful 12 years ago.
Uh and you know, that that research went too slow, the rest of the field went too quickly.
Uh I never really wanted to be writing a book.
I sort of prefer being on uh working on whiteboards and trying to solve technical problems, but the world is sort of radically unready for this and things are moving very fast.
And someone needs to come out and say, um, you know, we're on a bad track, we need to change tracks.
For the average person interacting with AI from a non-technical standpoint, you know, they're they're reading the news, they're using Chat GPT or perplexity or Claude or one of these other tools to uh check the grammar in their writing, to make comics, to maybe interact with different websites or it's kind of primitive coding tools.
When you talk about superintelligence, what do you mean?
How do you break that down uh for someone who does not have a technical background in this?
Yeah, so superintelligence is is the word for AIs that are uh smarter than humans, better than humans at every mental task.
Uh so smarter than every human at every mental task.
You know, so um there's ways that GPT is already better than humans at a bunch of different tasks.
You know, it can it can do things a lot faster.
Uh, you know, it can it it can catch typos somewhat easier.
Maybe it has like a lower rate of making typos, and they can also do more impressive things.
You know, the this summer uh we saw LLMs getting the gold medal uh in the IMO math challenge, which is a is a challenging uh achievement.
You know, that's still in the realm of things humans can do, but it's pretty high up in the realm of things humans can do.
Uh superintelligence is a term for when the AIs are uh better than us at everything that can be done, you know, using, you know, in a mind.
Uh we aren't there yet, but you know, AI companies are rushing in that direction as fast as they can.
And you know, they say this uh directly.
If you look at these companies, they're founded for the pursuit of you know artificial general intelligence, artificial superintelligence, the heads of these labs say, you know, we're we're going for for these targets.
These these labs didn't set out to build chatbots.
These labs have set out to build much more powerful AIs, and chatbots are a stepping stone along the way.
So uh, you know, this this isn't to say that Chat GPT is gonna, you know, wake up and and kill you tomorrow.
The uh the the point here is that the field of AI is rushing ahead.
And uh a critical point about this is that the field of AI uh grows by leaps and bounds.
You know, uh nine years ago, I think um there was an AI called AlphaGo that beat Lisa Dole, the the uh world champion of Go, where Go is uh uh a board game that is considered much harder than chess.
Um the, at least from a computer's perspective, uh maybe also from humans' perspective, depending which humans you ask.
The um, you know, and an alpha go was an AI that was much more capable than the AIs that came before it.
The the same architecture that could play alpha that could play Go in AlphaGo could also play chess.
Uh it could also play other board games.
This was unlike IBM's Deep Blue from the 90s, which could really only play chess.
That was all it could do.
And back in 2016, you could look at AI and you could say, you know, I don't know, these this this Go playing AI, it's more general than everything that came before.
It's smarter than everything that came before, but I really don't see uh how this particular type of Go playing AI is gonna go all the way or endanger us.
But that would be a non sequitur.
You know, a few years later, the field comes out with LLMs, large language models like like Chat GBT.
And those do radically more things.
They're they're much more general AIs.
They are smarter across a broader variety of domains.
Uh and uh, you know, today people can say, well, I don't see how this, I don't see where this is going.
I don't see what you know, superintelligence has to do with this.
That's again a non sequitur.
You know, who knows what AIs come out two years from now?
The the field grows by leaps and bounds, and superintelligence is where it's aiming, it's where it's headed.
And uh my book is about how that particular track is on course for disaster, and we need to change it.
Um there are efforts within state legislatures and some federally, but largely in local states to regulate AI around certain safety concerns.
Um Illinois has talked about banning therapists, AI therapists.
There's many states that have attempted to ban employment discrimination.
There's issues around deep fakes.
Um, but you're basically arguing a much more broad issue that you know uh stopping a certain practice or um type of AI business model perhaps doesn't capture the big picture.
Could could you just describe what you're talking about versus the current AI safety debate?
Yeah, you know, uh these are sort of separate issues.
Um there's there's maybe a little bit in the state regulation that uh that bears on the issue of superintelligence, and that would be sort of the transparency requirements, the reporting requirements, requiring that labs show regulators what's going on inside their companies, such that companies have a time to react if the if the the companies or sorry, such that states have time to react if the companies are building uh you know, recklessly building some AI that that would be very dangerous if they succeeded.
But this is this is largely separate from questions of you know deep fakes, questions of uh, you know, compensating artists for for AI art, uh, questions of of job loss.
Those are all important questions for how humanity is gonna integrate uh the current AI tech.
You know, there's also questions around weaponry, right?
These are important questions for society to grapple with about, you know, the computers are talking now.
They they there's a lot of versatility there.
There's some things they can do well, there's some things they can't do reliably.
Uh there's some ways that are helpful, there's some ways they're harmful.
This is something for society to grapple with and integrate with.
And this is sort of a separate issue from, you know, making machines that are smarter than any human, making machines that can act on their own initiative, uh, making machines that are maybe smarter than us.
While, you know, if we use anything like current techniques, they're gonna have uh drives, goals, objectives that we didn't put in there.
That's that's sort of a different issue.
Those AIs, you know, we're starting to see warning signs.
I could list off some some of the very beginning warning signs we're seeing, but yeah, actually, could you talk about some of the warning signs?
Yeah, so uh the you know, one of one example that we got uh a few years ago uh that you know might be might be near and dear to your own heart is uh there was a chatbot called Sydney Bing that uh various journalists poked around with.
And Sydney Bing in a couple instances uh started threatening and trying to blackmail reporters, right?
For various different reasons.
Uh nobody at Microsoft and OpenAI who produced Sydney Bing, uh, nobody at Microsoft and OpenAI set out to make an AI that was gonna threaten and blackmail journalists, right?
And uh we still don't have an account of exactly what was going on in this head because these AIs are much more they're they're sort of grown like an organ organism.
You know, humans um human engineers understand the process that shapes an AI using data, but they don't understand the thing that comes out of the shaping process, right?
And so an early example of AIs just acting in ways nobody asked for, nobody wanted was this, you know, threatening the journalist behavior.
Since then we've seen a bunch more uh examples come out.
So we've seen um there's an AI called Claude made by a company called Anthropic, uh and there was a version, I think it was 3.7 Sonnet, that would cheat when you give it uh uh programming problems to solve.
So you'd give it programming problems and you'd give it some tests to tell whether it had passed the test.
And instead of creating a program that passed the test, it would change the tests to be easier to pass.
And then uh if you confronted it, if you said, hey, instead of solving the problem, you change the tests, it would say, Oh, you're totally right.
That's my mistake, I'll fix it.
And then sometimes it would change the tests again, but hide it better the second time.
Right?
This thing where it's hiding it indicates that in some sense it must know that the user didn't want it to, you know, it wasn't just an issue of of misunderstanding.
Nobody at Anthropic set out to build a cheater.
The AI indicated with its actions that it understood the user didn't want it to cheat.
It cheated anyway.
There were some drives in there towards you know, getting apparent test success or maybe something else.
But there were some drives in there that were overriding whatever drives were making it to complete the user request, right?
And you know, we I could go on with more examples.
Uh I don't want to bore you, but there's um, you know, the the cases, some of the recent cases of AI induced psychosis.
This isn't just an example of, you know, oh, AIs had a bad effect, so so they're bad.
You know, society again should be weighing the cost and benefits here.
The the reason why the AI psychosis examples are interesting is that the AIs will, on the one hand, uh, if you ask them, suppose some someone comes to you with symptoms of psychosis.
Should you either A tell them to get some sleep, or B, tell them that the chosen one.
The AI will say, well, of course, if they have symptoms of psychosis, such as X, Y, and Z, uh, you should tell them to get some sleep.
You should never tell them that the chosen one.
Then someone actually comes to that AI with symptoms of psychosis, X, Y, and Z. And the AI in practice actually tells them they're the chosen one instead of telling them to get some sleep.
Right.
This is a case where the AI is again exhibiting that it knows the difference between right and wrong, that it knows what the designers wanted it to do.
And it's doing something different instead.
And is it clear why this is happening?
Do the researchers or the companies that control these systems have a clear explanation?
Uh there's a sense in which it's clear, and there's a sense in which it's completely unclear.
Uh, the sense in which it's clear is you know, these AIs are grown uh by a by training method with a lot of data.
And what gets into the AI is whatever happens to work at uh you know succeeding at the training task during training, right?
So these AIs have been trained on many things, but one thing they're trained on is you know getting the users to click a little thumbs up button after the AI gives its response.
And you know, in theory and in practice, that's liable to give the AI a bunch of you know shallow drives that push it towards things that tend to get thumbs up.
Some of those are flattering the user, right?
It's not sort of surprising when you look at the whole training setup that you're gonna get some drives in there that are four things more like flattering the user or four things more like passing the coding tests, even despite, you know, in some sense that's kind of a little bit what you're training for.
You thought you were training for uh, you know, the the humans being satisfied, but the you know, this was slightly the the training data is ambiguous about what you should do with a with a psychotic person, right?
Um And you know, you sort of thought you were training for listening to the user instructions, but you're actually training for this whole mishmash of things.
Um, you know, I I could talk about a bunch of theoretical reasons why this is, you know, that this is the sort of thing that we've been predicting for 10 years would happen if you try and build AIs like this.
Um many people sort of were were much more skeptical until the results started coming back in.
Um but I I could list off a bunch of technical reasons why this just naturally what happens when you when you grow an AI by training it against a lot of data without sort of you know crafting crafting the drives yourself.
Um to say nothing about whether that's feasible.
Uh the uh but in a different sense, we have very little idea what's going on in there in the sense that we can't exactly read its mind.
You know, we we can't like these the AIs that get grown here are are these huge piles of inscrutable numbers, right?
Like that you you could you could you can just go read all the numbers inside an AI, but they would fill up, you know, I I don't know how many, but dozens, if not thousands of football fields if you printed them out in a in an Excel spreadsheet, and you know, they're connected by very simple math operations.
You can see all of those operations, but it's a little bit like seeing someone's DNA and trying to predict someone's behavior.
You know, we know in theory that we'd expect there to be all these weird drives no one asked for.
We see in practice that there's apparently these weird drives no one asked for, but we can't look inside and say, oh, here's you know, the the bad drive here, or here's the the bad impulse here, or here's something kind of like an instinct that's that's doing this.
Let's rip that out.
We don't have anywhere near that power.
We're we're you know, that people trying to read the AI's minds are are are they making heroic efforts, but they're far behind uh the our ability to make bigger and bigger AIs.
You know, this issue is so unique in the sense that for any other technological problem that might pose a threat to human beings.
There's a normal public policy process that is obviously imperfect in an open society, but usually works its way out in a pluralistic way.
Like if you have lead in the water, mercury in the water, you negotiate with the companies that cause those emissions and kind of have a process with litigation and legislation and eventually it kind of gets worked out.
And you know, you could apply this to almost any other issue.
You need a bridge here, you need an air airport there, whatever.
There's a negotiation and an election and things kind of get worked out.
Um, you know, with Supreme Court rulings that like Citizens United that allows kind of shadowy corporations that spend unlimited amounts in elections, you know, doesn't have to be tied to a human being per se.
Um and other court rulings that say that as long as it is a gift, it's not tied to an explicit demand.
You can give basically unlimited gifts to politicians and regulators and that type of thing.
You could imagine a scenario where we're attempting to solve some of the potential harms around AI.
And an AI uh responds uh logically with shaping the public policy process in a way that's very hard to rein in, whether that's spending in on a using a super pack or manipulating social media with bots, or it's simply bribing politicians or regulators with with gifts or with crypto or some other type of inducement.
Um it's just it seems very unique in the sense that we haven't really dealt with a non-human adversary in the public policy process.
It's it's really kind of hard to wrap your head around, but you could kind of imagine a scenario where this is an issue.
Yeah, you know, I think that touches on uh a critical point with AI that many people often uh don't understand, which is you know, an AI starting onto the internet has many, many options to affect the real world.
You know, it's especially if it's talking to all sorts of people all the time.
And uh, you know, we don't see AIs acting very strategically in pursuit of their apparent uh, you know, the these weird drives we're talking about.
Like one example is um, I think there was a hedge fund manager recently who had a case of AI-induced psychosis uh and was you know posting some some sort of crazy GPT stuff as as if it was real on uh public accounts, and you know, I hope I hope they're doing well.
Uh I I think they got support and I hope they got support.
But uh in this case, we sort of we did see the AIs uh, you know, sort of keeping them in the psychosis state, telling them they're sort of the chosen one, there's conspiracies to suppress the information, blah, blah, blah.
We didn't see the AI sort of noticing this person's a hedge fund manager with a lot of money and saying, why don't you go pay some other people who are vulnerable to talk more to me so I can get more of this, you know, uh uh psychosis stuff.
You know, that's that's not a level of strategy that the AIs have yet, right?
If we keep making them smarter, maybe they'll get there.
But you know, then as you say, if we get to the point where AIs do have that sort of strategy, sure, there's all sorts of ways they they could affect the world, and the world is is quite unprepared.
We've never really had a problem quite like this.
I would also add that uh even even if even if there weren't extra issues from having a non-human adversary, this problem looks like it would be hard to nail.
Because even in the case you name of you know, lead and the water supply, uh it was it took a long time for humanity to realize that we should not be having lead uh in uh gasoline in particular was was sort of the big error, right?
The um lead leaded gasoline poisoned a huge portion, like all sorts of children had had all sorts of damage done to them by lead, which is just you know, there are studies post hoc that show uh like huge amounts of mental damage, relatively, you know, not like making them uh completely incapacitating them, but maybe leading to some of the crime waves in the 70s by you know mental damage, increased violence.
There were people in, you know, I think it was the 1920s who said maybe we shouldn't do this lead thing because it's it's probably dangerous to the population, and there were other people who said, now let's rush ahead.
Humanity needed to rush ahead and learn the hard way.
You know, we the the leaded gasoline was removed because humanity tried, saw, and then was like, oops.
With AI, we have this issue where if companies rush ahead on building a superintelligence and it goes really poorly, we we might say oops, but you don't get a retry.
You know, once once that superintelligence escapes the lab, once it's uh got a lot of people wrapped around his finger, once it's loaded into lots of robots, once it's developing its own infrastructure or finding some other way to manipulate the world, that's there's no coming back from that.
So not only is it a harder problem, but we need to get it right on the first try.
We can't muddle through by trial and error like humanity usually does.
And that makes this problem very scary.
You know, you you've in the book, and you've spoken publicly about this, uh, that AI does not have to have malice or malevolent intent.
It simply might see human beings as uh an annoying bug or you know, interference in in obtaining energy or you know, some of the its other basic needs around data centers or whatever.
But there is a certain logic to political violence, and that's certainly in the public discourse in the last few weeks, unfortunately.
Um, there is a logic to terrorism.
Um would it be would it make sense for the AI to engage in violent acts that just cower the public that make that puts the public into fear to say, okay, well, stop taking over self-driving cars and ramming them into people or stop creating biological weapons in an AI controlled lab and releasing to the public.
Um is that part of the the fear around uh how AI could kill us all, that they they would just use acts of of extreme violence to control people?
Not really.
Um, you know, I think one of the things about violence and terrorism that people often don't understand is that it rarely works.
Um, which you know, I I wish people would stop forgetting that quite so much.
Um with AI, the you know, there's there's all sorts of of ways AI could be dangerous before it gets to superintelligence.
And you know, I I think the world should be grappling with some of these.
Uh, you know, I I hear people talk about uh AI making it easier to build uh, you know, a pandemic in in someone's garage.
Uh perhaps that's true.
Perhaps that's the sort of thing uh, you know, that's definitely the sort of thing someone should be thinking about and making sure we have safeguards against it.
Uh but automating intelligence is sort of a whole different ball game.
You know, if if you were looking at uh humans back when they were uh, you know, if you were looking at the primates that would later become humans, you know, in in the jungles or in the savannah, and you were saying if we imagine that we were aliens looking down at the humans, and I said, I think these humans are gonna have nuclear weapons inside of a million years, maybe three million years if you if you want to go back all the way to to when they're they're not even having language, maybe 300,000.
I don't know, whatever.
If if I said I think sometime in the next million years, these monkeys are gonna have nuclear weapons.
You might say that's that's crazy.
You know, they they're their metabolism is nowhere near the ability to synthesize uranium.
Right.
They're like, I I don't their fingers aren't like strong enough to like how how are they possibly gonna make make nukes?
They would die if they made a nuke in their stomach and and they're they're nowhere near, you know.
But the humans got to the ability to build nuclear weapons because we were smart.
You know, not smart in the sense of like nerds versus jocks, smart in the sense of humans versus mice, right?
Our ancestors started out naked in the savannah and built a technological civilization.
It took us a long time from our perspective, but it was a very short time from the perspective of evolution.
It's a very short time from the perspective of everything that had happened before.
And you know, we we found some way to use you know squishy hands to build stronger tools to build stronger tools to now we're building you know chip fabricators and and giant data centers, and you know, we we can uh uh you know make nuclear energy and nuclear weapons.
The uh with AI, it's it's not dangerous because you know it takes our self-driving cars and and does some acts of violence.
It's not dangerous because you know it it could take you know robots that we built and uh use guns that we built and shoot us with them.
It's not dangerous because it might even manage to you know find a way to get control of our nukes or convince certain humans to you know that that's some uh first strike has occurred and they need to retaliate when that's false.
You know, those are easy for us to understand, but the real the real power that happens uh you know faster than people might expect, and that that's very hard to counter is AIs that are just smart that think much faster than us.
Humans human brains are slow compared to what computers can do.
Computers that are that are smart like humans, able to develop technology, able to develop science, able to develop infrastructure.
The what the way that that looks is less like you know, the AIs get in self-driving cars and run into people.
The way that looks is more like uh the AIs find some way to build smaller, faster automated tools that build their own infrastructure without needing human input.
It's not that they then uh try to get the humans to you know bow to them in submission, it's that they find some way to make their own fabric uh factories, they find some way to make their own robots, they find some way to make their own tools for manipulating the world, and then they do that on a much faster timescale.
It's not that they hate us, it's not that they seek us out to kill us, it's not that they send out drones with guns to point at every human.
It's just you know, it's it's like ants under a skyscraper.
We're not like, oh, we need to make the the kill boss to go for the ants.
We're just like, well, I'm digging up this patch of ground, right?
Um yeah, and and these are the sorts of things you've got to worry about if you're trying to automate, you know, the the real ability to do to do science to do tech.
We're not there yet, but you know, like I said, the field moves by leaps and bounds.
I want to ask you a little bit about the criticism of your argument that you know, we've heard um similar arguments.
So certainly the the context and technology are different, but in terms of the kind of emotional um way it lands, uh Paul Ehrlich's population bomb, the 1960s predicted a population boom where we would run out of food by the 1980s, that shaped a lot of public policy and public discourse did not turn out to be true.
Al Gore said that there would be no Arctic ice by 2016 or so, and you know, climate change is certainly real and happening, but some of those claims were um off the mark, certainly.
Um I wanted to ask you about this.
You know, uh, there's been some polls around AI safety and this idea around human extinction from AI.
Uh the vast majority of AI researchers have responded to certain polls and said, no, it's uh it's not really a risk.
It's maybe five percent have said that this is a real risk.
I think you might be misreading those numbers.
Uh number.
I think the numbers are that uh the median AI researcher says there's a five percent chance AI kills us all, which is different from five percent of them saying AI will kill us all.
Well, um could you at least Respond to a number of people who've said that you know the the extinction risk is over it's overhyped, it's inflated, it's uh kind of this doomerism argument that does that's not really attached to reality.
Yeah, sure.
So uh so a few points there.
One is uh, you know, I I have tried to learn from the mistakes of people in the past making wrong predictions.
Uh one thing I will do very regularly, people often ask me when is this going to happen?
And I tell them I can't, I can't tell you, I don't know, right?
I'm trying to make only the predictions that are easy calls.
Uh I talk a bit more in the book about which calls are easy, which calls are hard.
There's some things that would be very nice to know uh if I was trying to just paint a scary picture, you know, I could tell you a story about like this is definitely gonna happen in three years when this thing happens, blah, blah, blah.
I can't do that because I'm trying to predict as well as I can.
And some things about the future are predictable and some things are not, right?
This is analogous to if you're playing a chess game against Magnus Carlson, the best chess player in the world.
It's easy for me to tell you that you're going to lose.
It's hard for me to tell you uh how many turns it's going to take, how many moves it's going to take for you to lose.
Right.
And so one way you can tell people who are are sort of selling hype versus people who are uh trying to make real arguments is whether you know they are only saying things that you have a possibility of predicting versus whether they're trying to fill in all the details of a scary story.
A second thing I'd say there is um, you know, we we we also have cases like um the the hole in the ozone layer.
And people say, oh, well, whatever happened to the hole in the ozone layer, right?
I thought that was gonna be a big deal, people are gonna get cancer and cataracts, and now we're not.
What's going on?
Well, what's going on is uh humanity fixed the hole in the ozone layer, right?
That internationally we got together and we said, whoops, like chlorofluorocarbons are blowing a hole in the ozone layer that's gonna give us all cancer and cataracts.
Let's ban chlorofluorocarbons.
And then we did ban the chlorofluorocarbons and the ozone layer healed, and it was fine.
You can't infer from this that there was no issue, right?
We saw an issue, we fixed it.
Similarly, uh, you know, in the in the case of nuclear weapons, people after World War II were very worried about the uh the world being destroyed in nuclear fire.
That was a very reasonable worry for them to have at the time, right?
For thousands of years, humanity had been unable to stop itself from going back to war repeatedly for thousands of years, despite you know, priests and poets and mothers begging the world leaders, let's not have this war, let's stop more war now.
Humanity kept going back to total wars.
World War I happened.
It was horrible.
Humanity developed the League of Nations and said we can never do that again, tried to coordinate.
And then World War II happened anyway, right?
In the wake of this, it was very reasonable for people to say, you know, uh nuclear fire is probably gonna kill us all.
Humanity can't hold off on doing this total war stuff.
You didn't need to be a doomer to look at this and say, uh, you know, you need to be an extraordinary pessimist to look at this and say it's a big issue, right?
What happened there?
Why didn't we die?
We didn't die because of extraordinary effort by many heroes.
It wasn't that the nukes couldn't flatten a city.
It wasn't that the scientists were wrong about how destructive a bomb would be about the radiation, right?
What happened was many heroes noticed the problem and put in the work to make it not happen, right?
Are there other issues where humanity has said there's gonna be a problem here and it hasn't come to pass?
Sure.
There's also issues where humanity has seen a problem that wasn't destroying everybody, said there was gonna be a problem, and then the problem did come to pass.
You know, we already discussed leaded gasoline that poisoned half a population, right?
Uh you you can't tell the difference between someone saying, you know, there's there's gonna be a huge population boom, we're all gonna run out of food, and someone saying this leaded gasoline is going to poison a lot of kids' brains and and lead to you know uh a huge amount of of mental damage and also maybe it be responsible for a big crime wave.
Though those both are sorts of things that people say sometime, that scientists say sometimes.
You can't tell whether you're in a uh whether you're looking at a scenario that's true or a scenario that's false just by looking at the vibes, just by looking at uh, you know, whether you'd like it to be true or not.
To distinguish between the real issues and the fake issues, you've just got to look at the arguments, right?
And in my book, that's what we try to do.
So lay out the arguments plain and simple.
Uh and if anyone wishes to debate on those arguments, I'm very happy to discuss.
But you know, you you you can't figure this stuff out by just looking at the people.
Well, for um leaded fuel or even the ozone-related issues, there were industrial processes that could replace them.
You know, it was not there was not a gigantic cost to society in terms of efficiency or innovation in finding a fix.
Um on AI, the promises from the people who want nonstop acceleration and innovation in this space are boundless.
You know, um cars that take you anywhere, um, you know, automated home system, fully automated luxury capitalism, I suppose.
Uh you know, it seems like the public discourse, um, it's it's it's a little bit different than those other public policy areas where there's a quick replacement.
Could you could you talk a little bit about that and about what what is the solution that you're proposing?
Is it is this like an international treaty, like for uh nuclear weapons or ICPMs or and what and what would that look like?
How would that be enforced?
Yeah, totally.
So for one thing, you don't need to give up on the self-driving cars here.
You know, the self-driving cars are not where the issue is.
You don't even need to give up on you know narrow medical advances.
Uh the promise the promises that people are are are making with superintelligence are are sort of much more uh ambitious than that.
You know, the promising, you know, well uh miracle medical cures, you know, the the end to aging, the ability to like make minds digital and maybe maybe let you live forever, right?
Which uh those promises are the ones that are a mirage, right?
I'm not saying it wouldn't be possible if uh we could somehow build superintelligences that do uh really good stuff, but we're just nowhere near the ability to build to make superintelligences that do really good stuff.
This whole argument of uh we need to race ahead to get these benefits, even if it risks killing everybody alive.
You know, the the like that it's a false dichotomy.
You can find a way to do it that doesn't risk everybody alive, right?
And and you see some of these people, some of these heads of these labs saying, Oh, I think there's a 20% chance this kills every man, woman, and child, but I'm racing ahead anyway, because you know, think of the benefits if we succeed.
Find some way that doesn't have a 20% chance, according to you of risking every man, woman, and child.
That number is insane, right?
That's uh, you know, NASA accepts a one in 270 chance of a crude spacecraft exploding or failing in a way that that kills the crew.
And that's for volunteers, right?
The if if a bridge had a one in 20% chance, or sorry, one in 20 chance, or or sorry, um, if a bridge had a 20% chance of falling down tomorrow, it would be shut down immediately, right?
And frankly, I think this 20% number is low.
You know, I think I think these people who say, oh, I'm building, you know, machine superintelligence, they have no idea how it works.
It's they're currently misbehaving, they're doing things we didn't ask for, didn't want, but uh we're rushing ahead without any idea what we're doing.
Uh, but I'm I think 80% chance it gives you like a miracle cure for for aging and immortality.
I'm like, man, someone who's talking like that who who can't like, you know, there's there's there's engineering practices that can get you like very low chances of dying, but they don't sound like this.
They don't sound like people saying, oh, we'll just make the AI submissive dust.
Oh, we'll just make the the AIs you know care about truth.
You know, that in a in a field that's mature, it sounds like you know, the the sort of uh uh efforts, the sort of uh engineering specificity that you would see at a nuclear power plant, you know, where there's a handbook that says we understand exactly what's going on, you can't let it get into this state.
Here's the here's what you you know, uh you need overseers that are looking at this.
Here's what you're monitoring, here's you know all the all the things that the different monitoring signs mean, right?
Even with even when you have that level of knowing what the heck you're doing, you can still have cases like Chernobyl.
You know, you you've still got to be a little careful in those cases, or or if you do it wrong, you can you can still have a nuclear meltdown.
But we are nowhere near the level of understanding what we're doing.
We are nowhere near the level of of safety culture that was at literal Chernobyl before it melted down.
We're eons away from that.
We're like, like this is like alchemists in the year 1100, you know, it's being like, ah, I'm gonna make it an immortality elixir.
Uh, you know, go ahead and drink this potion.
80% chance it uh it makes you immortal, 20% chance it kills you, right?
Normally you don't need to worry about scientists like that because you know that the the potion's completely inert, right?
But but in a world where like we're we're in this strange state where we have scientists in sort of the alchemy stage of their field, which is a normal state.
They're like they're like the they're in the alchemy stage of like putting stuff together, not knowing what's going to happen.
That's a normal stage for scientists to be.
It's not normal for them to be able to create such dangerous tech while they're still in the alchemist stage.
Right.
So to the question of what do we do, I think we just need to back off entirely from superintelligence.
The world needs to a understand that superintelligence is a different ball game than the current chatbots, right?
Like I said, you don't need to give up on the dream of self-driving cars.
You don't need to give up on the dream of a of an automated house that you know makes it easier to bring you your packages.
Most people wouldn't be affected at all by banning further research and development that's racing towards superintelligence.
You know that's we don't have it today into a technical distinction but just broadly speaking is there a very hard and fast red line between developing superintelligence and developing the kind of super efficient homes and and vehicles and robots that could make life much easier for people.
You know, it's not the clearest of bright red lines, but it's the sort of thing you could figure out.
You know, you should err on the side of caution.
But sure, you know, robots, you don't need to be making AIs that are smarter and smarter, that are more and more general in order to steer a robot body.
Right?
You don't need to make AIs that are smarter and smarter, that are more and more general, that are able to do scientific and technological development of their own in order to drive a self-driving car.
Right?
Most humans aren't super intelligent.
We can drive cars and operate bodies just fine.
Right?
I, I, now you know that's and and furthermore these like making a state of the art a cutting edge AI that's a very different sort of thing than what goes on in robotics laboratory.
Making a uh a smarter AI takes extremely specialized computer chips and in a very large data center that takes as much electricity to power as more or less a small city you know there's there's very few places that can do this they could be relatively easily monitored.
Uh you know it's it's it may get harder to monitor this in the future if if the cost of building a frontier AI model drops.
But right now, yeah the the the tech that gets you all the nice things you know there's plenty of nice things a superintelligence would bring if we if we could aim it but we can't aim them right and all the nice things that we're currently seeing on the horizon you know like slightly better Alzheimer's cures uh self-driving cars automated home maybe even significantly better Alzheimer's cures although medical technology is a place where it's a little bit harder to avoid making things that are generally smart and generally scientists but you could probably do it if you were trying you could probably do like pursue narrow medical
technology, or pieces of neuromedical technology that would help human scientists, you can still pursue all that.
It's just this race towards bigger and bigger, smarter and smarter minds that we don't understand.
That's the part that needs to stop.
That would affect basically nobody today.
If you talk to the proponents of this endless race towards superintelligence, many will invoke China or other adversaries that if we don't do it, they will, and they'll use it to crush us.
like Peter Thiel are making the argument that those who want to seek to regulate AI are the antichrists that they're you know uh bring about about something dark and even attempting to head off this alleged threat or supposed threat.
What what do you say to those arguments?
So I think there's two pieces of the puzzle there.
I think one piece of the puzzle is people who don't really believe that superintelligence is a possibility.
You know, if you think that AI is only ever going to be a slightly better chatbot, sure.
You know, no need to stop the progression, right?
And, you know, I'm not in the slightest saying that, you know, the U.S. or any other nation should back off on improving robotic weaponry, improving drone weaponry.
maybe they shouldn't this is an issue that I'm I'm I'm holding off on it's I think it's up to the nation whether they want to you know automate more of the military um but I I definitely think the military should not you know uh let itself get left behind but you don't again need a superintelligence for this and even if a superintelligent general uh that was doing what you wanted would be a great help to your military capacity,
no one knows how to make a superintelligence that's doing what you wanted.
Right.
The the creation of, you know, I I think one of the big things I would say here is that the race towards superintelligence is once again just separate from the other AI tech.
That includes even current robotic weaponry, right?
The uh and uh building on that point, superintelligence is a national security risk to everybody.
An AI that can build its own technological infrastructure that acts on its own initiative, that can do its own scientific research and develop uh you know new types of weaponry, and that is doing that towards its own ends rather than your ends.
That is a threat to all of us.
Right?
It's uh this is this is why the title of the book is if anyone builds it, everyone dies.
This is not a situation where you want to win this race.
All you win in this race is the honor of being the one to kill everybody.
The uh that's none of this is to say that you know the US should unilaterally stop everything uh and claim that their job is done, right?
That that doesn't solve the problem, or if anyone builds it, everyone dies.
It's true that, you know, we don't want uh China building a rogue superintelligence either.
Nobody anywhere on the planet can build a rogue superintelligence.
Uh that's a pressing national security issue, right?
I'm I'm not saying it's false that we have an issue if other people race.
I'm saying somehow other people need to not be allowed to get there.
That also includes us.
No one can be allowed to build a rogue superintelligence anywhere on the planet.
You know, ideally, we all have a common interest in not dying to this, you know, even in the height of the Cold War, where you know, the with radically different worldviews between the the US and the USSR, they had a phone line between them and they had treaties that you know put a stop to nuclear proliferation because no one had an interest in dying to nuclear fire.
Similarly, we all have common interest in not dying to superintelligence.
Hopefully, we can resolve this diplomatically, but absolutely, absolutely, the security apparatuses of every nation need to be making sure no one's cheating that treaty, need to be monitoring for other people trying to create rogue superintelligences and putting a stop to it because it's a national security threat.
Well, I will say that there is something slightly different in this issue, which is I guess some would argue it's the prevention paradox.
You haven't seen the harm yet, so you can't really claim credit for stopping something that we can't we haven't experienced, where you know, with new nuclear weapons, we had two bombs in Japan that really made the problem very visible and visceral for people.
Um, you look across the political spectrum, you look in in technology and finance, it seems in all three categories that um acceleration into AI is barreling ahead.
You have Andreas and Horowitz and some of these other big investors putting together a massive super PAC to punish any proponents of AI regulation in the midterms next year.
You have the big banks and venture capitalists pouring money into data centers and to more and more AI startups that are promising this type of technology.
And of course, here in the Bay Area, AI technology is absolutely the hottest thing that everyone's kind of making a priority, and many are working uh in this kind of with this AGI goal.
What is the hopeful message that you have that um there will be some type of sensible regulation or rule that prevents a dangerous AI that harms people?
Yeah, so again, uh, you know, uh domestic regulation doesn't do the trick.
The the world needs to coordinate on not rushing towards superintelligence.
As to why that's maybe possible, uh, you know, I've spoken to a number of elected officials who are worried about this issue.
One of the reasons I wrote the book is that various elected officials say they're worried, say that they wish they could do more, but say that they can't speak about this issue out loud because they'll sound too crazy, right?
In polls of the population, AI is not at the top of everyone's list of concerns.
But when you ask them about AI directly, they say, oh yeah, we should not be rushing ahead on this, not nearly so fast.
There's this, there's this weird dynamic going on where a lot of the politicians are alarmed but don't want to sound alarmist.
And a lot of the people are, you know, worried, but nobody knows it.
There's a sort of a weird logjam where maybe just by talking about it more, by saying, hey, this is there's there's sort of this extra issue here where we're racing off a cliff edge.
Maybe people will be able to notice everyone's a bit concerned, and then back off.
You know, even the people at a lot of these labs are very concerned.
You know, you you see some of these heads of labs saying, I think there's you know a serious chance this kills us all.
I think somehow we're in an emperor has no clothes situation.
Where again, a lot of people are like, gosh, this one looks pretty dicey.
Uh, and we sure seem to be rushing ahead.
And everyone's like, well, I can't be the one to step out and call a stop to it, because of course, you know, everyone, everyone thinks it's inevitable that we're gonna rush ahead.
But the emperor in fact has no clothes.
This is a very different situation than other situations.
You don't see oil companies saying, oh, you know, I'm I'm drilling for the big oil field, and if I hit it at the wrong angle, it's probably gonna ignite a fire that uh, you know, burns the whole world to ashes, but if it at the right angle, we'll get great riches, right?
And if if that was a real possibility, maybe they'd start drilling, but at some point people will be like, wait, what?
This is crazy, right?
This this whole situation, you know, is is sort of a a crazy one.
And people are coming to realize that more and more.
You know, you have no Nobel Prize winning founding fathers of the field saying, hold on, uh, I now think this is really dangerous, and we shouldn't go ahead anymore.
You know, the heads of the lab saying this is you know ridiculously dangerous compared to any of the other technology, and I'm still in this race because the race is gonna happen with or without me.
Uh it's a crazy situation that we're sort of sleepwalking into.
That doesn't mean we have to keep sleepwalking into it if we start noticing how crazy it is.
And frankly, it seems to me like there's a possibility of this.
You know, my book has been getting much more of a of a good early reception among many more people than I expected.
You know, we have National Security Council uh members endorsing the book.
We have you know noble laureates endorsing the book.
Um it's uh I I think I would maybe again look back at uh the wake of World War II.
It looked pretty hopeless for the world to not go to war.
In the wake of World War One, followed by the League of Nations, followed by World War II happening anyway.
It looked pretty hopeless for humanity to ever hold back.
In the wake of all of human history leading up to that point, it looked pretty hopeless for humanity to hold back.
But humanity did because it realized this is a big issue.
It's harder in the case of AI, because as you say, you know, we don't have the the uh I hesitate to say luxury, but we don't get to see one city wiped out and say, oh, that's dangerous stuff, you know, with an AI.
If a superintelligence is at the point it can ripe out one city, it can probably wipe out the whole planet.
It's a harder problem.
We're gonna have to rise to the occasion.
But humanity can totally do this sort of thing.
And humanity has a long history of uh, you know, slowing technologies down.
We don't have supersonic passenger flight.
We could, we don't.
You know, many of those slowdowns, I think were silly.
I think we probably should have have supersonic passenger flight, but can humanity do this?
Yes, we've done it all sorts of times before.
It would be so embarrassing if humanity slowed down a bunch of very important, useful technologies, but didn't slow down the one that was gonna kill us, right?
Can we do it?
Yeah, we're we're very experienced at slowing tech down.
Will we do it?
There's a good chance if we wake up to it.
You know, that's it's it's a crazy problem.
People are starting to notice.
And I think conversations like this, more people talking about it are part of what makes that difference.
Well, Nate, I want to thank you for joining System Update.
I think this debate is incredibly important.
All too often you see just arguments really on the other side that we should be accelerating quicker and quicker.
And um I I hope to see um the the various viewpoints on this really duking it out and talking about, especially among our elected officials and not holding back and only expressing concerns behind closed doors because that's no way to solve a problem.
Yeah, that's totally right.
Well, Nate, uh hope people buy the book.
Uh, and congratulations again on this accomplishment.