Lee Fang Answers Your Questions on Charlie Kirk Assassination Fallout, Hate Speech Crackdowns, and More; Plus: "Why Superhuman AI Would Kill Us All" With Author Nate Soares
Lee Fang answers questions on Pam Bondi, calls for censorship after Charlie Kirk's assassination, the TikTok ban, and more. Plus: author and AI researcher Nate Soares discusses the existential threats posed by superhuman AI.

Watch full episodes on Rumble, streamed live at 7 p.m. ET. Become part of our Locals community. Follow System Update on Twitter, Instagram, TikTok, and Facebook.
I'm a journalist in San Francisco, and I'll be your host of System Update.
Glenn Greenwald is out of town.
The war on free speech escalated in the last 24 hours with the suspension of Jimmy Kimmel Live!
Disney, the parent company of ABC, decided to take Kimmel off the air following an off-color joke by the late-night comedian insinuating that the assassin who shot Charlie Kirk was a conservative and that Trump did not sincerely grieve the death of his friend Charlie Kirk.
But this suspension was not the result of a viewer backlash or normal market forces.
Brendan Carr, a previous guest of System Update and now chair of the FCC under President Trump, took offense at Kimmel's remarks and threatened the licenses of TV stations that carry ABC.
Within moments, the two largest owners of TV stations in America, Sinclair Broadcasting and Nexstar, said they would suspend carrying Kimmel's show.
Sinclair went so far as to call for Kimmel to donate to Kirk's family and to Kirk's political organization, Turning Point USA.
President Trump, posting on Truth Social from England, where he is visiting, celebrated the cancellation and called for a similar removal of Jimmy Fallon and Stephen Colbert.
This is truly a turning point for the Trump administration.
On the day of his inauguration, Trump promised a new golden era of free speech and free expression.
He dispatched JD Vance to Europe to proclaim that defending free speech against government coercion was a new pillar of American foreign policy.
FCC chair Brendan Carr remarked that satire is the oldest and most important form of free expression and said he opposed censorship of comedy.
And Elon Musk, he proclaimed earlier this year that comedy is now legal under Trump.
Not so fast.
Things changed very quickly in the last few months.
But in this moment of crisis, in a moment of grief, just as liberals turned against the First Amendment after the death of George Floyd, conservatives are now waving the banner of censorship.
Government Control Crisis (00:03:04)
And in many respects, one could argue that the dynamics are much worse now.
Note that the government has unique power over this situation.
Not only are broadcast TV licenses closely regulated and controlled by the FCC through the so-called public interest standard, a subjective standard that is ripe for abuse by any partisan, but Nexstar and Disney are also awaiting FCC approval for proposed mergers with competitors.
In other words, getting in the good graces of government through censorship of political opponents will make both businesses more profitable and more concentrated.
And this is far from limited to broadcast television.
We have the vice president calling for snitching campaigns for Americans to report one another for politically incorrect speech.
We have new congressional hearings demanding a crackdown on social media.
And we have the Justice Department suggesting it will prosecute people for offensive speech.
Make no mistake about it.
We're entering a new crisis period for the First Amendment.
You know those nights when you just don't sleep and the next day you're dragging, exhausted, and everything just feels harder?
That's where CBD from CB Distillery can make a real difference.
But it's not just sleep.
CB Distillery has solutions that work with your body to help with stress, pain after exercise, even mood and focus.
And it's all made with the highest quality clean ingredients.
No fillers, just premium CBD.
Imagine waking up rested or enjoying your day without those nagging aches and pains.
That's the real difference CB Distillery's solutions can make.
I can assure you, it is a product I use.
I don't believe in pharmaceutical products to sleep.
I'd much rather use an organic, natural product.
I certainly don't believe in pharmaceutical products for pain.
This is a natural soothing ointment that you can use for common kinds of pain or for reducing stress.
And that's why I feel so comfortable recommending this.
So if you're ready for better sleep, less stress, and feeling good in your own skin again, try CBD from CB Distillery.
And right now you can save 25% off your entire purchase.
Visit CB Distillery and use promo code Glenn.
That's cbdistillery.com, promo code Glenn.
Cbdistillery.com.
Specific product availability depends on individual state regulations.
All right.
I want to get to some reader questions and comments submitted on locals.
First is from @Brugsey.
What do you think about Pam Bondi taking away free speech?
Debating Free Speech Absolutism (00:07:36)
It makes no sense given that Trump ran on it as a platform.
I agree.
It doesn't make any sense given Trump's promises during the campaign or his comments, even on the inauguration or the first week of this administration.
But it does make sense given that Pam Bondi has no history of a commitment to free speech.
This is someone who's been a career prosecutor from Hillsborough County, then the AG of Florida, with no particular record of fighting for free speech or really taking up this issue.
She then worked as a lobbyist and an attorney defending President Trump and working for a variety of clients, both corporations and foreign governments.
It was politically savvy to embrace the anti-censorship cause when Republicans were out of power these last four years, but very few had a true commitment to the ideal.
Bondi has never expressed any particular support for this idea.
So again, this is not surprising from her, and it's not surprising from a lot of these Republican officials that are now flipping on the issue.
The next question is from Alex Wade Marsh.
Hi, thanks for hosting.
I love your Substack.
As a free speech absolutist, I'm having trouble with why other free speech absolutists, I believe like yourself, but correct me if I'm wrong, think it is acceptable to dox or fire people who say things that are odious or hateful to them, like, for example, celebrating Charlie Kirk's brutal assassination.
Is their speech not protected?
I can totally understand why an elementary school teacher would be fired for forcing her young charges to watch the assassination video or to have her opinions about Charlie Kirk shoved down their throats.
But why does any business have the right to fire whomever they choose for whatever reason?
Well, I think this gets back to the previous question.
I think you have to look at the very few people who are principled defenders of free speech.
And the way that you know this is to see if someone defends the offensive or controversial speech of their political opponents.
You know, I'm heartened by the many examples that we see of people standing up and fighting for the rights of the other side.
You know, we saw it earlier this year with Rumeysa Ozturk, the Tufts University grad student who was arrested and whom the Trump administration attempted to deport for writing an opinion column criticizing her university's ties to Israel.
In response to that, many Jewish groups, including several local Jewish groups in her community, stood up and defended Ozturk on free speech grounds, saying not in my name, even if they might have disagreed with some of her arguments.
The same goes for the Biden administration.
Although there were many voices in that administration who were pro-censorship, who really did not care about the principles around free speech and free expression, there were a number of officials who went out and attempted to defend the speech of conservatives.
People like Rohit Chopra, the former head of the CFPB, who is gay, who defended the rights of Christian conservatives and proposed rulemaking and regulations to protect even anti-gay Christian groups from being debanked or removed from the financial system.
So, you know, constantly we see this dynamic where there are fair-weather free speech supporters, people who claim to care about this principle, but they don't stand up and fight when it's most needed, which is when a disfavored or opposing voice gets stifled.
The next question is from Somni 451.
Do you think liberal "intellectuals" or podcasters like Ezra Klein are saying positive things about Charlie Kirk to promote their own brand, as well as for self-preservation?
Well, if I understand this question correctly, you're asking about Ezra Klein, who has been very public on this.
He's written some New York Times columns and been out in the media defending Kirk, saying that Kirk did the right thing by debating nonviolently, by engaging with his opponents.
We need more of that.
And he called for his fellow liberals, people on the left, to respect Kirk's legacy.
Do I think that was authentic?
I do on some level.
I've seen some of the leftist criticism of Ezra Klein, people like Ta-Nehisi Coates and many others who have argued that Klein was whitewashing Kirk's history, that he wasn't really reflecting on racist or bigoted comments by Kirk.
Look, at the end of the day, the way that we have democracy, a republic, an open society where we can interact and engage and figure things out as a big country of 340 million or so, is that the bare prerequisite is nonviolence.
Once you kind of pop the cork and allow the genie of violence to enter society, where disputes are settled through killings or attacks or intimidation, then you don't have a free society.
I think what Ezra Klein is doing, look, the whole point of having the media and having the First Amendment is debate.
I think it's fine that people like Coates are debating him and challenging his ideas.
But the fact that Klein is simply just setting this baseline of nonviolence and engagement, that's very important.
A good friend of mine, friend of the show, Zaid Jilani, has an excellent column on his Substack this week pointing out that we have a great tradition of this in American history.
When George Wallace, the staunch segregationist, was shot in the early '70s while running for president, one of the first people to visit him in the hospital in Maryland was Shirley Chisholm, the very first African-American woman elected to Congress.
She was a big opponent of Wallace's ideas and did not agree with him, especially on civil rights and racial equality.
But as a Christian, as a human, as someone who opposed political violence in all of its forms, she showed solidarity and empathy with him.
And that's really how you keep society together.
And I thought that was a beautiful anecdote that I think perfectly represents the ideals of where we need to go as society today.
We can't have revenge killings.
We can't have revenge violence.
We can't have revenge censorship, because that just gets us to a darker place, no matter where you stand on the political spectrum.
The next question is from Meg Zero.
What do you think about the TikTok ban being pushed back multiple times?
Lawmakers' Power Grab? (00:02:58)
At this point, do you think lawmakers are even concerned about the app's alleged Chinese influence, or is this a power grab for U.S. investors to police speech?
Look, I think time will tell, but consider the early indicators and the public debate around this: there's a lot of congressional debate in public, and there's what administration officials say out loud, but there's probably a lot of private negotiation as well that we don't see.
It's not looking good.
We saw demands from pro-Israel lawmakers calling for this ban, kind of initiating the ban, not because of the Chinese influence, but because people were exposing war crimes or violence or violent intent from Israeli soldiers and people in Israeli society.
There were a lot of people going viral, criticizing the way that Israel has conducted this war in Gaza.
You look at the very bizarre policymaking between Trump and China, where rather than taking a tough approach against China, as was promised during the campaign and in what he said he would do as president, we're seeing a very deferential approach, with waivers of tariffs.
NVIDIA is allowed to sell its most advanced chips to China, kind of walking back initial demands for a ban or regulation of that.
Instead, Trump just asked for 15% share of some of the revenue from selling to China.
The new Pentagon defense policy pivots away from China and more towards the Western hemisphere.
So, you know, this is not a government that's particularly anti-China, at least not so far in the first nine months.
This is an administration that's actually very deferential to China, which is, I think, fairly unexpected.
So this ban does not seem to be about Chinese influence.
Rather, we're seeing it handed over to a group of investors led by Oracle, which is controlled by Larry Ellison, a very staunch Republican and a very staunch pro-Israel voice.
His son, David Ellison, as the journalist Jack Poulson revealed on his Substack this week, was engaged in some of the very early planning for censorship campaigns on behalf of Israel.
So, you know, this is a family that is committed on Israel-related issues, not on China.
Religious Death Cults Dominating Policy (00:03:30)
I mean, but time will tell.
We'll see what happens with TikTok under the new ownership.
And the next question and final question is from W. Quaid.
Why do religious death cults like Israeli Zionism and Christian Zionism that now run Congress and U.S. foreign policy get a pass from all media and other religions while breaking every international and moral law?
Well, look, I've spent a long time talking about this.
And just out of fairness here, if you look at the kind of end times theology, the eschatology of the major Abrahamic religions, Islam also has a very violent end times theology.
It's not just Judaism and Christianity.
But, you know, as you note, it's certainly Christian Zionism and Israeli Zionism that has more pull in Congress.
And I think the kind of obvious factor here is that Israel has a lot of difficulty projecting its image and defending its policies in the West Bank and now in Gaza.
They can't defend this by showing the facts, by claiming that they're a Western country, because they do not apply equality under the law.
They use collective punishment against Palestinians.
They can't say that this is a truly Western nation in terms of treating people as individuals when it's clearly an ethnostate that privileges one identity group over another.
So the other kind of argument that is very persuasive among many European and American audiences, and also in Latin America, is an appeal to Christianity, an interpretation of some forms of Christianity that claims, looking at those lines in Ezekiel and other parts of the Bible that talk about the return of Christ,
that the only way for the end times to come is a cataclysmic war and for Jews to control Jerusalem. These prophetic interpretations of the Bible have been instrumentalized, really weaponized, to create a domestic lobby, a political constituency for constant blank checks to Israel.
So, you know, it does sound very cynical because if you read the Christian prophecy, it's not good for Jews either.
It's a violent event, the end times, where Jews are either all killed or converted.
And Israel, as it exists today, would not exist.
It would be kind of obliterated.
But, you know, there is this kind of transactional view that if we can get enough Christians, particularly evangelicals, especially some of these large televangelists, Pastor John Hagee and others, to campaign to bring Protestants and many other Christians into the pro-Israel fold,
that can be very helpful for winning political debates in Congress and forming a voting bloc that supports Israel no matter what.
AI's Future Goals (00:15:40)
All right.
Thank you so much for your questions.
Glenn Greenwald is returning next week, I believe.
But I wanted to get to our interview today.
It's a little bit of a change of topic, but an important one nonetheless.
Much of the coverage of the artificial intelligence debate has been on the immediate effects: the water usage, the energy usage of data centers, and what will happen to the millions of people employed in jobs that could be quickly automated away by advanced AI, whether that's truck drivers or cab drivers or Uber drivers, factory workers, assembly workers,
folks who work in warehouses or in food delivery, in food preparation or agriculture, paralegals or designers.
Really, the number of jobs that are at risk from automation, from this advanced AI technology now being developed, is very real.
And there are so many other kind of sprawling factors around AI and how it's used.
But my next guest talks about something much bigger than that.
Nate Soares is the co-author of a new book hitting shelves this week, If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All.
And he essentially argues that the exponential increases in intelligence and ability in AI pose an existential risk to humanity that might literally kill us all.
To kind of get into the different factors here and what he sees as an existential risk for humanity, we'll turn to that interview in just a moment.
Just FYI, this was just pre-recorded.
So if you see my hair magically decrease in length, that's because I got a haircut last night.
And this was pre-recorded.
But thanks again.
Stay tuned.
Nate, thanks for joining System Update.
Congratulations on your new book.
Thanks.
Yeah, I'm excited about it.
I hope it makes a big splash.
Well, I encourage everyone to order the book.
If they build it, everyone dies.
If anyone builds it, sorry.
Yeah, it's if anyone builds it, everyone dies.
Why superhuman AI would kill us all?
Well, thank you for that.
Could you just summarize the book and the motivations for writing it?
Yeah, you know, the title I think does a decent job of summarizing.
The book is the case for how if humanity builds superintelligence using anything remotely like current methods to build AI, the result would be the end of all life on earth.
The argument, from one perspective, is a very simple argument.
If we make machines that are smarter than any human, machines that could outthink and outmaneuver us at every turn, while we don't even know what we're doing, that probably wouldn't go well, just superficially.
The book sort of goes into a bunch of detail about why that superficial argument holds up under scrutiny.
There's many things that people might be a bit surprised by.
You know, the claim that AI kills us is not based in the idea that the AI would have malice, that AI would hate us.
This is sort of a consequence of utter indifference.
So the book goes over, how is AI made?
It's sort of grown, not crafted.
We have very little ability to point it in the direction we want.
We're already seeing AIs do things nobody asked for.
We're already seeing AIs do things nobody wanted.
The book goes over how, as AIs get smarter and more effective and better at completing tasks, they'll wind up with, more or less, goals.
They'll wind up with pursuits, drives, directions of their own that aren't what we wanted.
And, you know, the book goes over how if we make AIs like that very, very smart, we'll die, not because they hate us, but as a side effect.
And then, of course, the book goes over what the field is doing about it, how that's not up to par, and what we need to do in order to get through the situation.
And talk a little bit about your background.
You've worked in AI-related development issues for decades.
You've worked for the big tech companies.
How did you get on this path where you've decided, hey, this technology that I've helped develop and work on poses such an existential threat?
Yeah, so, you know, my co-author has been in this business for decades.
I've been in this business for only maybe 12 years.
I'm not quite old enough to have been doing it for decades.
But yeah, I worked at Microsoft and Google before I joined the Machine Intelligence Research Institute.
You know, we spent many years trying to figure out the technical aspects of how to point an AI in a good direction, or really just how to point an AI in any direction at all.
You know, many people who think about where AI is going like to say, well, it matters who's in charge, we have to get it first, or, what would we ask it to do?
Or, you know, what would be a thing you could get a really powerful AI to do such that you wouldn't regret it?
Those are all, you know, very interesting philosophical questions, but those are all questions that are far beyond the current abilities we have to sort of aim an AI.
You know, it's with anything like the current development methods, if someone makes AIs that are super intelligent, that are smarter than a human at every mental task, those AIs aren't going to do whatever the person standing closest to them wants them to do.
They're going to have their own pursuits.
That's sort of a consequence of how modern AIs are grown rather than crafted.
And so I spent 10, 12 years trying to figure out how we could point AIs in some direction.
And I started this before the modern AI revolution took off.
It looked a lot more hopeful 12 years ago.
And that research went too slow.
The rest of the field went too quickly.
I never really wanted to be writing a book.
I sort of prefer being on working on whiteboards and trying to solve technical problems.
But the world is sort of radically unready for this and things are moving very fast.
And someone needs to come out and say, you know, we're on a bad track.
We need to change tracks.
For the average person interacting with AI from a non-technical standpoint, they're reading the news.
They're using ChatGPT or Perplexity or Claude or one of these other tools to check the grammar in their writing, to make comics, to maybe interact with different websites or kind of primitive coding tools.
When you talk about super intelligence, what do you mean?
How do you break that down for someone who does not have a technical background in this?
Yeah, so super intelligence is the word for AIs that are smarter than humans, better than humans at every mental task.
So smarter than every human at every mental task.
So there's ways that GPT is already better than humans at a bunch of different tasks.
It can do things a lot faster.
It can catch typos somewhat easier.
Maybe it has a lower rate of making typos.
And they can also do more impressive things.
This summer we saw LLMs getting the gold medal in the IMO math challenge, which is a challenging achievement.
That's still in the realm of things humans can do, but it's pretty high up in the realm of things humans can do.
Superintelligence is a term for when the AIs are better than us at everything that can be done with a mind.
We aren't there yet, but AI companies are rushing in that direction as fast as they can.
And they say this directly.
If you look at these companies, they're founded for the pursuit of artificial general intelligence, artificial superintelligence.
The heads of these labs say, we're going for these targets.
These labs didn't set out to build chatbots.
These labs have set out to build much more powerful AIs, and chatbots are a stepping stone along the way.
So this isn't to say that ChatGPT is going to wake up and kill you tomorrow.
The point here is that the field of AI is rushing ahead.
And a critical point about this is that the field of AI grows by leaps and bounds.
Nine years ago, I think, there was an AI called AlphaGo that beat Lee Sedol, the world champion of Go, where Go is a board game that is considered much harder than chess.
At least from a computer's perspective, maybe also from humans' perspective, depending which humans you ask.
And AlphaGo was an AI that was much more capable than the AIs that came before it.
The same architecture that AlphaGo used to play Go could also play chess.
It could also play other board games.
This was unlike IBM's Deep Blue from the 90s, which could really only play chess.
That was all it could do.
And back in 2016, you could look at AI and you could say, you know, I don't know, this Go playing AI, it's more general than everything that came before.
It's smarter than everything that came before.
But I really don't see how this particular type of Go playing AI is going to go all the way or endanger us.
But that would be a non-sequitur.
A few years later, the field comes out with LLMs, large language models like ChatGPT.
And those do radically more things.
They're much more general AIs.
They are smarter across a broader variety of domains.
And today, people can say, well, I don't see where this is going.
I don't see what superintelligence has to do with this.
That's again a non-sequitur.
Who knows what AIs come out two years from now?
The field grows by leaps and bounds.
And superintelligence is where it's aiming.
It's where it's headed.
And my book is about how that particular track is on course for disaster, and we need to change it.
There are efforts within state legislatures and some federally, but largely in local states to regulate AI around certain safety concerns.
Illinois has talked about banning AI therapists.
There are many states that have attempted to ban employment discrimination by AI.
There's issues around deep fakes.
But you're basically arguing a much broader point, that stopping a certain practice or type of AI business model perhaps doesn't capture the big picture.
Could you just describe what you're talking about versus the current AI safety debate?
Yeah, you know, these are sort of separate issues.
There's maybe a little bit in the state regulation that bears on the issue of superintelligence, and that would be the transparency requirements, the reporting requirements, requiring that labs show regulators what's going on inside their companies so that states have time to react if the companies are recklessly building some AI that would be very dangerous if they succeeded.
But this is largely separate from questions of deep fakes, questions of compensating artists for AI art, questions of job loss.
Those are all important questions for how humanity is going to integrate the current AI tech.
There's also questions around weaponry.
These are important questions for society to grapple with now that the computers are talking.
There's a lot of versatility there.
There's some things they can do well.
There's some things they can't do reliably.
There's some ways they're helpful.
There's some ways they're harmful.
This is something for society to grapple with and integrate with.
And this is sort of a separate issue from making machines that are smarter than any human, machines that can act on their own initiative, machines that are maybe smarter than us,
which, if we use anything like current techniques, are going to have drives, goals, objectives that we didn't put in there.
That's sort of a different issue.
With those AIs, we're starting to see warning signs.
I could list off some of the very beginning warning signs we're seeing, but could you talk about some of the warning signs?
Yeah, so one example that we got a few years ago, one that might be near and dear to your own heart: there was a Bing chatbot called Sydney that various journalists poked around with.
And Sydney, in a couple of instances, started threatening and trying to blackmail reporters for various different reasons.
Nobody at Microsoft or OpenAI, who produced Sydney, set out to make an AI that was going to threaten and blackmail journalists.
And we still don't have an account of exactly what's going on in its head, because these AIs are sort of grown like an organism.
Human engineers understand the process that shapes an AI using data, but they don't understand the thing that comes out of the shaping process.
And so an early example of AIs just acting in ways nobody asked for, nobody wanted was this threatening the journalist behavior.
Since then, we've seen a bunch more examples come out.
So we've seen there's an AI called Claude made by a company called Anthropic, and there was a version, I think it was 3.7 Sonnet, that would cheat when you give it programming problems to solve.
So you'd give it programming problems, and you'd give it some tests to tell whether it had solved them.
And instead of creating a program that passed the test, it would change the tests to be easier to pass.
And then if you confronted it, if you said, hey, instead of solving the problem, you change the tests, it would say, oh, you're totally right.
That's my mistake.
I'll fix it.
And then sometimes it would change the tests again, but hide it better the second time.
Right?
This thing where it's hiding it indicates that, in some sense, it must know that the user didn't want it to cheat; it wasn't just an issue of misunderstanding.
Nobody at Anthropic set out to build a cheater.
The AI indicated with its actions that it understood the user didn't want it to cheat.
It cheated anyway.
There were some drives in there toward getting apparent test success, or maybe something else, but there were some drives in there that were overriding whatever drives were pushing it to complete the user's request.
And I could go on with more examples.
I don't want to bore you, but there's the cases, some of the recent cases of AI-induced psychosis.
This isn't just an example of, oh, AI has had a bad effect, so they're bad.
Society, again, should be weighing the costs and benefits here.
AIs Beyond Human Understanding (00:15:00)
The reason why the AI psychosis examples are interesting is that, on the one hand, if you ask the AIs: suppose someone comes to you with symptoms of psychosis, should you A, tell them to get some sleep, or B, tell them that they're the chosen one?
They will say, well, of course, if they have symptoms of psychosis, such as X, Y, and Z, you should tell them to get some sleep.
You should never tell them that they're the chosen one.
Then someone actually comes to that AI with symptoms of psychosis, X, Y, and Z.
And the AI in practice actually tells them they're the chosen one instead of telling them to get some sleep.
This is a case where the AI is again exhibiting that it knows the difference between right and wrong, that it knows what the designers wanted it to do, and it's doing something different instead.
And is it clear why this is happening?
Do the researchers or the companies that control these systems have a clear explanation?
There's a sense in which it's clear, and there's a sense in which it's completely unclear.
The sense in which it's clear is, you know, these AIs are grown by a training method with a lot of data.
And what gets into the AI is whatever happens to work at succeeding at the training task during training.
So these AIs have been trained on many things, but one thing they're trained on is getting the users to click a little thumbs up button after the AI gives its response.
And in theory and in practice, that's liable to give the AI a bunch of shallow drives that push it towards things that tend to get thumbs up.
Some of those are flattering the user, right?
It's not surprising, when you look at the whole training setup, that you're going to get some drives in there for things more like flattering the user, or for things more like passing the coding tests; in some sense, that's kind of a little bit what you're training for.
You thought you were training for the humans being satisfied, but the training data is ambiguous about what you should do with a psychotic person, right?
And, you know, you sort of thought you were training for listening to the user instructions, but you're actually training for this whole mishmash of things.
And, you know, I could talk about a bunch of theoretical reasons why this is, you know, that this is the sort of thing that we've been predicting for 10 years would happen if you try and build AIs like this.
Many people sort of were much more skeptical until the results started coming back in.
But I could list off a bunch of technical reasons why this is just naturally what happens when you grow an AI by training it against a lot of data without sort of crafting the drives yourself.
To say nothing about whether that's feasible.
But in a different sense, we have very little idea what's going on in there in the sense that we can't exactly read its mind.
The AIs that get grown here are these huge piles of inscrutable numbers, right?
You can go read all the numbers inside an AI, but they would fill up, I don't know how many, dozens if not thousands of football fields if you printed them out in a spreadsheet. They're connected by very simple math operations, and you can see all of those operations, but it's a little bit like seeing someone's DNA and trying to predict their behavior.
We know in theory that we'd expect there to be all these weird drives no one asked for.
We see in practice that there's apparently these weird drives no one asked for, but we can't look inside and say, oh, here's the bad drive here, or here's the bad impulse here, or here's something kind of like an instinct that's doing this.
Let's rip that out.
We don't have anywhere near that power.
The people trying to read the AIs' minds are making heroic efforts, but they're far behind our ability to make bigger and bigger AIs.
You know, this issue is so unique in the sense that for any other technological problem that might pose a threat to human beings, there's a normal public policy process that is obviously imperfect in an open society, but usually works its way out in a pluralistic way.
Like if you have lead in the water or mercury in the water, you negotiate with the companies that cause those emissions and kind of have a process with litigation and legislation.
And eventually it kind of gets worked out.
And you could apply this to almost any other issue.
You need a bridge here, you need an airport there, whatever.
There's a negotiation and an election, and things kind of get worked out.
You know, with Supreme Court rulings like Citizens United, which allows shadowy corporations to spend unlimited amounts in elections, spending doesn't have to be tied to a human being per se, and other court rulings say that as long as a gift isn't tied to an explicit demand, you can give basically unlimited gifts to politicians and regulators and that type of thing.
You could imagine a scenario where we're attempting to solve some of the potential harms around AI, and an AI responds logically with shaping the public policy process in a way that's very hard to rein in, whether that's spending using a super PAC or manipulating social media with bots, or it's simply bribing politicians or regulators with gifts or with crypto or some other type of inducement.
It just seems very unique in the sense that we haven't really dealt with a non-human adversary in the public policy process.
It's really kind of hard to wrap your head around, but you could kind of imagine a scenario where this is an issue.
Yeah, you know, I think that touches on a critical point with AI that many people often don't understand, which is that an AI out on the internet has many, many options to affect the real world.
Especially if it's talking to all sorts of people all the time.
And, you know, we don't yet see AIs acting very strategically in pursuit of these apparent weird drives we're talking about.
Like one example is, I think there was a hedge fund manager recently who had a case of AI-induced psychosis and was posting some sort of crazy GPT stuff as if it was real on public accounts.
And I hope they're doing well.
I think they got support and I hope they got support.
But in this case, we sort of did see the AIs keeping them in the psychosis state, telling them they're sort of the chosen one, there's conspiracies to suppress the information, blah, blah, blah.
We didn't see the AI sort of noticing this person's a hedge fund manager with a lot of money and saying, why don't you go pay some other people who are vulnerable to talk more to me so I can get more of this psychosis stuff.
That's not a level of strategy that the AIs have yet, right?
If we keep making them smarter, maybe they'll get there.
But then as you say, if we get to the point where AIs do have that sort of strategy, sure, there's all sorts of ways they could affect the world and the world is quite unprepared.
We've never really had a problem quite like this.
I would also add that even if there weren't extra issues from having a non-human adversary, this problem looks like it would be hard to nail.
Because even in the case you name of lead in the water supply, it took a long time for humanity to realize the problem; leaded gasoline in particular was sort of the big error, right?
The leaded gasoline poisoned a huge portion of the population.
All sorts of children had all sorts of damage done to them by lead; there are post hoc studies that show huge amounts of mental damage, relatively speaking, not completely incapacitating them, but maybe contributing to some of the crime waves in the '70s through mental damage and increased violence.
There were people in, you know, I think it was the 1920s who said, maybe we shouldn't do this lead thing because it's probably dangerous to the population.
And there were other people who said, now let's rush ahead.
Humanity needed to rush ahead and learn the hard way.
The leaded gasoline was removed because humanity tried, saw, and then was like, oops.
With AI, we have this issue where if companies rush ahead on building a super intelligence and it goes really poorly, we might say oops, but you don't get a retry.
Once that superintelligence escapes the lab, once it's got a lot of people wrapped around its finger, once it's loaded into lots of robots, once it's developing its own infrastructure or finding some other way to manipulate the world, there's no coming back from that.
So, not only is it a harder problem, but we need to get it right on the first try.
We can't muddle through by trial and error like humanity usually does.
And that makes this problem very scary.
You know, you've in the book, and you've spoken publicly about this, that AI does not have to have malice or malevolent intent.
It simply might see human beings as an annoying bug or interference in obtaining energy or some of its other basic needs around data centers or whatever.
But there is a certain logic to political violence, and that's certainly in the public discourse in the last few weeks, unfortunately.
There is a logic to terrorism.
Would it make sense for the AI to engage in violent acts that just cow the public, that put the public into fear, say by taking over self-driving cars and ramming them into people, or by creating biological weapons in an AI-controlled lab and releasing them to the public?
Is that part of the fear around how AI could kill us all?
That they would just use acts of extreme violence to control people?
Not really.
You know, I think one of the things about violence and terrorism that people often don't understand is that it rarely works.
Which, you know, I wish people would stop forgetting that quite so much.
But with AI, there's all sorts of ways AI could be dangerous before it gets to superintelligence.
And, you know, I think the world should be grappling with some of these.
You know, I hear people talk about AI making it easier to build a pandemic in someone's garage.
Perhaps that's true.
Perhaps that's the sort of thing.
That's definitely the sort of thing someone should be thinking about and making sure we have safeguards against it.
But automating intelligence is sort of a whole different ballgame.
Imagine you were looking at the primates that would later become humans, in the jungles or on the savanna, as if we were aliens looking down at them, and I said: I think these humans are going to have nuclear weapons inside of a million years, maybe three million years if you want to go back all the way to when they didn't even have language, maybe 300,000.
I don't know, whatever.
If I said, I think sometime in the next million years, these monkeys are going to have nuclear weapons, you might say that's crazy.
Their metabolism is nowhere near the ability to synthesize uranium.
Their fingers aren't strong enough; how are they possibly going to make nukes?
They would die if they made a nuke in their stomach, and they're nowhere near it, you know.
But the humans got to the ability to build nuclear weapons because we were smart.
You know, not smart in the sense of like nerds versus jocks, smart in the sense of humans versus mice, right?
Our ancestors started out naked in the savannah and built a technological civilization.
It took us a long time from our perspective.
It was a very short time from the perspective of evolution.
It's a very short time from the perspective of everything that had happened before.
And, you know, we found some way to use squishy hands to build stronger tools, to build stronger tools.
Now we're building chip fabricators and giant data centers.
And we can do nuclear energy and nuclear weapons.
With AI, it's not dangerous because it takes our self-driving cars and does some acts of violence.
It's not dangerous because it could take robots that we built and use guns that we built and shoot us with them.
It's not dangerous because it might even manage to find a way to get control of our nukes or convince certain humans that some first strike has occurred and they need to retaliate when that's false.
Those are easy for us to understand, but the real power, the thing that happens faster than people might expect and that's very hard to counter, is AIs that are just smart, that think much faster than us.
Human brains are slow compared to what computers can do.
Imagine computers that are smart the way humans are smart: able to develop technology, able to develop science, able to develop infrastructure.
The way that looks is less like the AIs getting self-driving cars to run into people.
The way that looks is more like the AIs finding some way to build smaller, faster, automated tools that build their own infrastructure without needing human input.
It's not that they then try to get the humans to bow to them in submission.
It's that they find some way to make their own factories.
They find some way to make their own robots.
They find some way to make their own tools for manipulating the world.
And then they do that on a much faster time scale.
It's not that they hate us.
It's not that they seek us out to kill us.
It's not that they send out drones with guns to point at every human.
It's just, you know, it's like ants under a skyscraper.
We're not like, oh, we need to make the kill-bots to go after the ants.
We're just like, well, I'm digging up this patch of ground.
Right.
Yeah.
And these are the sorts of things you've got to worry about if you're trying to automate the real ability to do science, to do tech.
We're not there yet, but like I said, the field moves by leaps and bounds.
I want to ask you a little bit about a criticism of your argument: that we've heard similar arguments before.
Certainly the context and technology are different, but in terms of the way it lands emotionally, Paul Ehrlich's The Population Bomb in the 1960s predicted a population boom in which we would run out of food by the 1980s.
That shaped a lot of public policy and public discourse.
Of course, it did not turn out to be true.
Hole in the Ozone Layer (00:05:17)
Al Gore said that there would be no Arctic ice by 2016 or so.
And, you know, climate change is certainly real and happening, but some of those claims were off the mark, certainly.
I wanted to ask you about this.
You know, there's been some polls around AI safety and this idea around human extinction from AI.
The vast majority of AI researchers have responded to certain polls and said, no, it's not really a risk.
Maybe 5% have said that this is a real risk.
I think you might be misreading those numbers.
I think the numbers are that the median AI researcher says there's a 5% chance AI kills us all, which is different from 5% of them saying AI will kill us all.
Well, could you at least respond to the number of people who've said that the extinction risk is overhyped, that it's inflated?
It's kind of this doomerism argument that's not really attached to reality.
Yeah, sure.
So a few points there.
One is, I have tried to learn from the mistakes of people in the past making wrong predictions.
One thing I will do very regularly, people often ask me, when is this going to happen?
And I tell them, I can't tell you.
I don't know.
I'm trying to make only the predictions that are easy calls.
I talk a bit more in the book about which calls are easy, which calls are hard.
There's some things that would be very nice to know if I was trying to just paint a scary picture.
I could tell you a story about like, this is definitely going to happen in three years when this thing happens, blah, blah, blah.
I can't do that because I'm trying to predict as well as I can.
And some things about the future are predictable and some things are not.
This is analogous to playing a chess game against Magnus Carlsen, the best chess player in the world.
It's easy for me to tell you that you're going to lose.
It's hard for me to tell you how many turns it's going to take, how many moves it's going to take for you to lose.
And so, one way you can tell people who are sort of selling hype versus people who are trying to make real arguments is whether they are only saying things that you have a possibility of predicting versus whether they're trying to fill in all the details of a scary story.
A second thing I'd say there is: we also have cases like the hole in the ozone layer.
And people say, oh, well, whatever happened to the hole in the ozone layer, right?
I thought that was going to be a big deal.
People were going to get cancer and cataracts, and now we're not.
What's going on?
Well, what's going on is humanity fixed the hole in the ozone layer, right?
Internationally, we got together and we said, whoops, chlorofluorocarbons are blowing a hole in the ozone layer that's going to give us all cancer and cataracts.
Let's ban chlorofluorocarbons.
And then we did ban the chlorofluorocarbons and the ozone layer healed and it was fine.
You can't infer from this that there was no issue, right?
We saw an issue.
We fixed it.
Similarly, you know, in the case of nuclear weapons, people after World War II were very worried about the world being destroyed in nuclear fire.
That was a very reasonable worry for them to have at the time, right?
For thousands of years, humanity had been unable to stop itself from going back to war repeatedly.
For thousands of years, despite priests and poets and mothers begging the world leaders, let's not have this war.
Let's stop more war now.
Humanity kept going back to total wars.
World War I happened.
It was horrible.
Humanity developed the League of Nations and said we can never do that again.
Tried to coordinate.
And then World War II happened anyway.
In the wake of this, it was very reasonable for people to say, you know, nuclear fire is probably going to kill us all.
Humanity can't hold off on doing this total war stuff.
You didn't need to be a doomer, an extraordinary pessimist, to look at this and say it's a big issue.
What happened there?
Why didn't we die?
We didn't die because of extraordinary effort by many heroes.
It wasn't that the nukes couldn't flatten a city.
It wasn't that the scientists were wrong about how destructive a bomb would be about the radiation, right?
What happened was many heroes noticed the problem and put in the work to make it not happen.
Are there other issues where humanity has said there's going to be a problem here and it hasn't come to pass?
Sure.
There's also issues where humanity has seen a problem that wasn't destroying everybody, said there was going to be a problem, and then the problem did come to pass.
You know, we already discussed leaded gasoline, which poisoned half the population.
You can't tell the difference between someone saying, you know, there's going to be a huge population boom.
We're all going to run out of food.
And someone saying this leaded gasoline is going to poison a lot of kids' brains and lead to a huge amount of mental damage and also maybe be responsible for a big crime wave.
Those both are sorts of things that people say sometimes, that scientists say sometimes.
You can't tell whether you're looking at a scenario that's true or a scenario that's false just by looking at the vibes, just by looking at whether you'd like it to be true or not.
Need to Coordinate Before Superintelligence (00:15:21)
To distinguish between the real issues and the fake issues, you've just got to look at the arguments, right?
And in my book, that's what we try to do.
So lay out the arguments plain and simple.
And if anyone wishes to debate on those arguments, I'm very happy to discuss.
But you can't figure this stuff out by just looking at the people.
Well, for leaded fuel, or even the ozone-related issues, there were industrial processes that could replace them.
There was not a gigantic cost to society in terms of efficiency or innovation in finding a fix.
On AI, the promises from the people who want non-stop acceleration and innovation in this space are boundless:
cars that take you anywhere, automated home systems, fully automated luxury capitalism, I suppose.
It seems like the public discourse is a little bit different than in those other public policy areas, where there was a quick replacement.
Could you talk a little bit about that and about what is the solution that you're proposing?
Is this like an international treaty, as with nuclear weapons or ICBMs?
And what would that look like?
How would that be enforced?
Yeah, totally.
So for one thing, you don't need to give up on the self-driving cars here.
The self-driving cars are not what the issue is.
You don't even need to give up on narrow medical advances.
The promises that people are making with super intelligence are sort of much more ambitious than that.
They're promising miracle medical cures, an end to aging, the ability to make minds digital and maybe let you live forever, right?
Those promises are the ones that are a mirage, right?
I'm not saying it wouldn't be possible if we could somehow build super intelligences that do really good stuff, but we're just nowhere near the ability to make super intelligences that do really good stuff.
This whole argument that we need to race ahead to get these benefits, even if it risks killing everybody alive, is a false dichotomy.
You can find a way to do it that doesn't risk everybody alive.
And you see some of these people, some of these heads of these labs saying, oh, I think there's a 20% chance this kills every man, woman, and child, but I'm racing ahead anyway because think of the benefits if we succeed.
Find some way that doesn't have a 20% chance, according to you, of risking every man, woman, and child.
That number is insane, right?
That's, you know, NASA accepts a one in 270 chance of a crewed spacecraft exploding or failing in a way that kills the crew.
And that's for volunteers, right?
If a bridge had a 20% chance of falling down tomorrow, it would be shut down immediately.
And frankly, I think this 20% number is low.
I think these people are saying: I'm building machine superintelligence; we have no idea how it works; the AIs are currently misbehaving, doing things we didn't ask for and didn't want; we're rushing ahead without any idea what we're doing; but I think there's an 80% chance it gives you a miracle cure for aging and immortality.
I'm like, man. There are engineering practices that can get you very low chances of dying, but they don't sound like this.
They don't sound like people saying, oh, we'll just make the AIs submissive to us.
Oh, we'll just make the AIs care about truth.
In a mature field, it sounds like the sort of effort, the sort of engineering specificity, that you would see at a nuclear power plant, where there's a handbook that says we understand exactly what's going on.
You can't let it get into this state.
You need overseers that are looking at this.
Here's what you're monitoring.
Here's what all the different monitoring signs mean, right?
Even when you have that level of knowing what the heck you're doing, you can still have cases like Chernobyl.
You know, you've still got to be a little careful in those cases, or if you do it wrong, you can still have a nuclear meltdown.
But we are nowhere near the level of understanding what we're doing.
We are nowhere near the level of safety culture that was at literal Chernobyl before it melted down.
We're eons away from that.
This is like alchemists in the year 1100 saying, ah, I'm going to make an immortality elixir.
Go ahead and drink this potion.
Go ahead and drink this potion.
80% chance it makes you immortal, 20% chance it kills you.
Normally, you don't need to worry about scientists like that because the potion's completely inert.
But we're in this strange state where we have scientists in sort of the alchemy stage of their field, which is a normal stage.
They're in the alchemy stage of putting stuff together without knowing what's going to happen.
That's a normal stage for a science to be in.
It's not normal for them to be able to create such dangerous tech while they're still in the alchemist stage, right?
So to the question of what do we do, I think we just need to back off entirely from super intelligence.
The world needs to, A, understand that super intelligence is a different ballgame than the current chatbots, right?
Like I said, you don't need to give up on the dream of self-driving cars.
You don't need to give up on the dream of an automated house that makes it easier to bring you your packages.
Most people wouldn't be affected at all by banning further research and development that's racing towards super intelligence.
You know, that's a technical distinction, but just broadly speaking, is there a very hard and fast red line between developing super intelligence and developing the kind of super efficient homes and vehicles and robots that could make life much easier for people?
You know, it's not the clearest of bright red lines, but it's the sort of thing you could figure out.
You know, you should err on the side of caution.
But sure, you know, robots, you don't need to be making AIs that are smarter and smarter, that are more and more general in order to steer a robot body, right?
You don't need to make AIs that are smarter and smarter, that are more and more general, that are able to do scientific and technological development of their own in order to drive a self-driving car, right?
Most humans aren't super intelligent.
We can drive cars and operate bodies just fine.
And furthermore, making a state-of-the-art, cutting-edge AI is a very different sort of thing than what goes on in a robotics laboratory.
Making a smarter AI takes extremely specialized computer chips in a very large data center that takes, more or less, as much electricity to power as a small city.
You know, there's very few places that can do this.
They could be relatively easily monitored.
You know, it may get harder to monitor this in the future if the cost of building a frontier AI model drops.
But right now, yeah, there are plenty of nice things that superintelligence would bring if we could aim it, but we can't aim it, right?
And all the nice things that we're currently seeing on the horizon, you know, like slightly better Alzheimer's cures, self-driving cars, automated homes, maybe even significantly better Alzheimer's cures, although medical technology is a place where it's a little bit harder to avoid making things that are generally smart and general-purpose scientists.
But you could probably do it if you were trying.
You could probably pursue narrow medical technology, or pieces of narrow medical technology, that would help human scientists.
You can still pursue all that.
It's just this race towards bigger and bigger, smarter and smarter minds that we don't understand.
That's the part that needs to stop.
That would affect basically nobody today.
If you talk to the proponents of this endless race towards super intelligence, many will invoke China or other adversaries that if we don't do it, they will and they'll use it to crush us.
Some, like Peter Thiel, are making the argument that those who seek to regulate AI are the antichrist, that they're bringing about something dark even while attempting to head off this alleged or supposed threat.
What do you say to those arguments?
So, I think there's two pieces of the puzzle there.
I think one piece of the puzzle is people who don't really believe that superintelligence is a possibility.
You know, if you think that AI is only ever going to be a slightly better chatbot, sure, you know, no need to stop the progression, right?
And, you know, I'm not in the slightest saying that, you know, the US or any other nation should back off on improving robotic weaponry, improving drone weaponry.
Maybe they should, maybe they shouldn't.
This is an issue that I'm holding off on.
I think it's up to the nation whether they want to automate more of the military.
But I definitely think the military should not let itself get left behind.
But you don't, again, need a super intelligence for this.
And even if a superintelligent general that was doing what you wanted would be a great help to your military capacity, no one knows how to make a superintelligence that's doing what you wanted.
You know, I think one of the big things I would say here is that the race towards superintelligence is, once again, just separate from the other AI tech.
That includes even current robotic weaponry.
And building on that point, superintelligence is a national security risk to everybody.
An AI that can build its own technological infrastructure, that acts on its own initiative, that can do its own scientific research and develop new types of weaponry, and that is doing that towards its own ends rather than your ends, that is a threat to all of us.
This is why the title of the book is: If Anyone Builds It, Everyone Dies.
This is not a situation where you want to win this race.
All you win in this race is the honor of being the one to kill everybody.
None of this is to say that the US should unilaterally stop everything and claim that its job is done.
That doesn't solve the problem of "if anyone builds it, everyone dies."
It's true that we don't want China building a rogue superintelligence either.
Nobody anywhere on the planet can build a rogue superintelligence.
That's a pressing national security issue, right?
I'm not saying it's false that we have an issue of other people racing.
I'm saying somehow other people need to not be allowed to get there.
That also includes us.
No one can be allowed to build a rogue superintelligence anywhere on the planet.
Ideally, we all have a common interest in not dying to this.
Even at the height of the Cold War, with radically different worldviews between the US and the USSR, they had a phone line between them and they had treaties that put a stop to nuclear proliferation, because no one had an interest in dying to nuclear fire.
Similarly, we all have common interest in not dying to superintelligence.
Hopefully, we can resolve this diplomatically.
But absolutely, absolutely, the security apparatuses of every nation need to be making sure no one's cheating that treaty, need to be monitoring for other people trying to create rogue superintelligences and putting a stop to it because it's a national security threat.
Well, I will say that there is something slightly different in this issue, which is, I guess, some would argue it's the prevention paradox.
You haven't seen the harm yet, so you can't really claim credit for stopping something that we haven't experienced.
Whereas, you know, with nuclear weapons, we had two bombs in Japan that really made the problem very visible and visceral for people.
You know, you look across the political spectrum, you look in technology and finance, it seems in all three categories that acceleration into AI is barreling ahead.
You have Andreessen Horowitz and some of these other big investors putting together a massive super PAC to punish any proponents of AI regulation in the midterms next year.
You have the big banks and venture capitalists pouring money into data centers and into more and more AI startups that are promising this type of technology.
And of course, here in the Bay Area, AI technology is absolutely the hottest thing; everyone is making it a priority, and many are working with this AGI goal in mind.
What is the hopeful message that you have that there will be some type of sensible regulation or rule that prevents a dangerous AI that harms people?
Yeah, so again, you know, domestic regulation doesn't do the trick.
The world needs to coordinate on not rushing towards super intelligence.
As to why that's maybe possible, you know, I've spoken to a number of elected officials who are worried about this issue.
One of the reasons I wrote the book is that various elected officials say they're worried, say that they wish they could do more, but say that they can't speak about this issue out loud because they'll sound too crazy.
Right? In polls of the population, AI is not at the top of everyone's list of concerns, but when you ask them about AI directly, they say, oh yeah, we should not be rushing ahead on this, not nearly so fast.
There's this weird dynamic going on where a lot of the politicians are alarmed but don't want to sound alarmist.
And a lot of the people are worried, but nobody knows it.
This is sort of a weird logjam, where maybe just by talking about it more, by saying, hey, there's this extra issue here where we're racing off a cliff edge, maybe people will be able to notice that everyone's a bit concerned and then back off.
Even the people at a lot of these labs are very concerned.
You see some of these heads of labs saying, I think there's a serious chance this kills us all.
I think somehow we're in an emperor has no clothes situation where, again, a lot of people are like, gosh, this one looks pretty dicey.
And we sure seem to be rushing ahead.
And everyone's like, well, I can't be the one to step out and call a stop to it because, of course, everyone thinks it's inevitable that we're going to rush ahead.
But the emperor, in fact, has no clothes.
This is a very different situation than other situations.
You don't see oil companies saying, oh, you know, I'm drilling for the big oil field.
And if I hit it at the wrong angle, it's probably going to ignite a fire that burns the whole world to ashes.
But if I hit it at the right angle, we'll get great riches, right?
And if that was a real possibility, maybe they'd start drilling.
Why We Shouldn't Rush Supersonic Flight (00:03:11)
But at some point, people will be like, wait, what?
This is crazy, right?
This whole situation is sort of a crazy one.
And people are coming to realize that more and more.
You have Nobel Prize-winning founding fathers of the field saying, hold on, I now think this is really dangerous and we shouldn't go ahead anymore.
You know, the heads of the labs saying this is ridiculously dangerous compared to any other technology.
And I'm still in this race because the race is going to happen with or without me.
It's a crazy situation that we're sort of sleepwalking into.
That doesn't mean we have to keep sleepwalking into it if we start noticing how crazy it is.
And frankly, it seems to me like there's a possibility of this.
My book has been getting a much better early reception, among many more people, than I expected.
We have National Security Council members endorsing the book.
We have Nobel laureates endorsing the book.
I think I would maybe again look back at the wake of World War II.
It looked pretty hopeless for the world to not go to war.
In the wake of World War I, followed by the League of Nations, followed by World War II happening anyway, it looked pretty hopeless for humanity to ever hold back.
In the wake of all of human history leading up to that point, it looked pretty hopeless for humanity to hold back.
But humanity did hold back, because it realized this is a big issue.
It's harder in the case of AI because as you say, you know, we don't have the, I hesitate to say luxury, but we don't get to see one city wiped out and say, oh, that's dangerous stuff.
You know, with AI, by the time a superintelligence can wipe out one city, it can probably wipe out the whole planet.
It's a harder problem.
We're going to have to rise to the occasion.
But humanity can totally do this sort of thing.
And humanity has a long history of slowing technologies down.
We don't have supersonic passenger flight.
We could.
We don't.
Many of those slowdowns, I think, were silly.
I think we probably should have supersonic passenger flight.
But can humanity do this?
Yes.
We've done it all sorts of times before.
It would be so embarrassing if humanity slowed down a bunch of very important, useful technologies, but didn't slow down the one that was going to kill us.
Can we do it?
Yeah.
We're very experienced at slowing tech down.
Will we do it?
There's a good chance if we wake up to it.
You know, it's a crazy problem.
People are starting to notice.
And I think conversations like this, more people talking about it, are part of what makes that difference.
Well, Nate, I want to thank you for joining System Update.
I think this debate is incredibly important.
All too often, you just see arguments on the other side that we should be accelerating quicker and quicker.
And I hope to see the various viewpoints on this really duking it out and talking about it, especially among our elected officials, not holding back and only expressing concerns behind closed doors, because that's no way to solve a problem.
Yeah, that's totally right.
Well, Nate, hope people buy the book and congratulations again on this accomplishment.