Speaker | Time | Text |
---|---|---|
This is the primal scream of a dying regime. | ||
Pray for our enemies. | ||
Because we're going medieval on these people. | ||
I got a free shot on all these networks lying about the people. | ||
The people have had a belly full of it. | ||
I know you don't like hearing that. | ||
I know you try to do everything in the world to stop that, but you're not gonna stop it. | ||
It's going to happen. | ||
And where do people like that go to share the big lie? | ||
MAGA Media. | ||
I wish in my soul, I wish that any of these people had a conscience. | ||
Ask yourself what is my task and what is my purpose? | ||
If that answer is to save my country, this country will be saved. | ||
War Room. | ||
Here's your host, Stephen K. Bannon. | ||
Thank you. | ||
Good evening. | ||
I am Joe Allen sitting in for Stephen K. Bannon. | ||
Many of you in the War Room Posse, if not most of you are familiar with the concept of artificial superintelligence, a machine that outpaces human beings in thinking ability, in memory, in data collection, and if given access to the outside world through robotics or even manipulated human brains, would be able to outperform human beings in the real world. | ||
Now you also know I'm quite skeptical of the claim that this is imminent or even possible, but I'm also quite open. | ||
I'll tell you briefly about an experience I had at a forum given by Jaime Sevilla of Epoch AI. | ||
Epoch AI does evaluations on AI systems, testing them to see how good they really are. | ||
And what Jaime Sevilla presented was complicated. | ||
On the one hand, we all know that AIs are extremely fallible, but what he showed was that for some number of runs, GPT-5 could do mathematical calculations at the level of a PhD mathematician. | ||
Now, if you aren't a mathematician, perhaps you're not all that concerned about it, but what it shows is that objectively, without any question, this artificial mind can perform a specific cognitive task better than most human beings on Earth. | ||
Now, this was hosted by the Foundation for American Innovation, and R. E. Kagan of FAI pointed out something very important. | ||
That on the one hand, we see that GPT-5 is oftentimes incapable of creating an accurate map, or even answering a mundane question accurately, or counting fingers. | ||
On the other hand, for some number of runs, GPT-5 can outperform the vast majority of human beings on Earth in complex mathematics. | ||
And even more interesting, it often does so by way of alien pathways, non-human pathways. | ||
It chooses routes through the information that no human being would and arrives accurately at its destination. | ||
So on the one hand, you have a top-performing artificial brain, and on the other, you have a mechanical idiot. | ||
How do we approach this problem as Americans, as human beings, as Christians, as naturalists? | ||
How do we approach this problem? | ||
Joining us this evening is Nate Soares, co-author of the admittedly fantastic book If Anyone Builds It, Everyone Dies: | ||
Why Superhuman AI Would Kill Us All. | ||
Co-authored with Eliezer Yudkowsky, well known on the War Room for many reasons, but perhaps most for his advocacy of bombing data centers in any country that decides to build superintelligence against any kind of possible international treaty. | ||
Nate Soares, we really appreciate you coming on. | ||
Thank you so much for joining us. | ||
Pleasure. | ||
So, Nate, to begin, I would like to just have you lay out the thesis or perhaps expand the thesis of the title. | ||
If anyone builds it, if anyone builds superintelligence, then everyone dies. | ||
Sounds bleak, but the book is very, very well written: it's very concise, it's very clear, with a lot of clever turns of phrase. | ||
I cannot recommend it enough, even for the skeptics. | ||
Please expand on the thesis. | ||
Why would everyone die if anyone built superintelligence? | ||
Yeah, so the very basic point is we're trying to build machines that are smarter than any human, that could outmaneuver us at every turn. | ||
That's the sort of thing where, just on its face, from a very basic perspective, if you make machines that are much smarter than humans, that's at least kind of dicey. | ||
If you further don't know what you're doing while you're building these machines, if these machines are grown rather than crafted, if these machines show lots of warning signs when they're small of ways that they aren't doing what anyone wanted or what anyone asked for, it just doesn't go well to build things much smarter than you without any ability to point them in some direction that you want them to go. | ||
And with modern AIs, we see that modern AIs are grown rather than crafted. | ||
You know, these are not traditional pieces of software where, when they do something you don't like, the engineers can look inside them and go to every line of code and find the line of code that says, oh, it was driving a teen to suicide today; I'll find the drive-teens-to-suicide line and switch that line from true to false. | ||
Oh no, who set that line to true? | ||
That was silly. | ||
We'll just turn off the driving-teens-to-suicide feature. | ||
Pardon. | ||
That's not how these machines work. | ||
These AIs are grown. | ||
We could go a little bit into how they're created. | ||
Yeah, I think without going too much into technical detail, because I do want lay people in the audience to really clearly grasp what you're talking about, but this is a really important point. | ||
It's one I agree with you on completely, and how could I not? | ||
It's objectively true. | ||
This idea that the frontier AIs and even more primitive AIs from years past are grown, not crafted, or another way of putting it perhaps is that they're trained, not programmed. | ||
This is something that a lot of people get hung up on, even software engineers who are stuck in the 80s and 90s. | ||
Could you just explain to the audience what that means that these AIs are grown and how is it that you can get something out of the AI that you didn't train it for? | ||
Yeah, so, modern AIs: the field of AI has in some sense tried to understand intelligence since, you know, 1954. | ||
And in some sense, that field never really made progress in understanding really in depth how to craft intelligence by hand. | ||
You know, there were many cases over time where programmers were like, maybe, you know, it's a little bit like this or a little bit like that, and they tried to sort of handcraft some intelligent machine that could think well. | ||
It sort of never went anywhere. | ||
When AI started working, it started working because we found a way to train computers that works empirically, where humans understand the training process, but humans don't understand what comes out. | ||
It's a little bit like, you know, breeding cows, where you can take some traits you like and you can get out some traits you like, but you don't have precise control over what's going on. | ||
So the way it works is you have basically a huge amount of computing power, you have a huge amount of data, and there's a process for combining the data with the computing power to shape the computing power to be a little bit better at predicting the data. | ||
And humans understand the process that does the shaping, but they don't understand what comes out, what gets shaped. | ||
And it turns out if you take a really staggering amount of computing power, we're talking, you know, highly specialized computer chips in enormous data centers that take an amount of electricity that could power a small city, and you run them for a year on almost all of the text that you can possibly dig up that humans have ever written. | ||
You shape the computing power that much, and the machines start talking. | ||
We understand the shaping process. | ||
We don't understand why the machines are talking. | ||
I mean, we understand why in the sense that, well, we trained them and it started working, but we can't look inside them and debug what's going on in there. | ||
And when they act in a way we don't like, you know, all we can really do is instruct them, stop doing that. | ||
And then sometimes they stop and sometimes they don't. | ||
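Editor's aside: to make the "shaping" just described concrete, here is a minimal toy sketch of the kind of training loop being discussed, under stated assumptions. The tiny text corpus, single weight matrix, and character-bigram setup are illustrative choices, not any lab's actual code; real systems do the same kind of gradient-based shaping with transformer networks at vastly larger scale.

```python
# Toy illustration (assumed example, not any lab's real code) of the "shaping" loop:
# we fully understand this procedure, but what nobody reads or hand-crafts is the
# learned numbers that come out of a large run.
import numpy as np

text = "the cat sat on the mat. the dog sat on the log. " * 50
chars = sorted(set(text))
idx = {c: i for i, c in enumerate(chars)}
V = len(chars)

rng = np.random.default_rng(0)
W = rng.normal(0, 0.01, size=(V, V))   # scores: current character -> next character

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

lr = 0.5
for step in range(5000):
    i = rng.integers(0, len(text) - 1)
    x, y = idx[text[i]], idx[text[i + 1]]   # current char and the true next char
    p = softmax(W[x])                       # model's predicted distribution
    grad = p.copy()
    grad[y] -= 1.0                          # gradient of the cross-entropy loss
    W[x] -= lr * grad                       # nudge the weights toward better prediction

# The model now assigns sensible probabilities, but the "why" lives in the numbers
# inside W, not in any legible rule a programmer wrote.
print("p(next char is 'h' given 't'):", round(softmax(W[idx['t']])[idx['h']], 2))
```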
That idea, that black box, as it's oftentimes described: we don't really know what's going on inside these machines, and not just we as in lay people; the top experts don't really know how these machines are arriving at oftentimes coherent and accurate statements. | ||
And I think one analogy, two analogies actually, that you bring up in your book are really, really great for understanding this: scientists know more about DNA and how that results in an organism, and more about the human brain and how that results in thought and behavior, than they do about large neural networks and their outputs. | ||
And yet they still work. | ||
That's right. | ||
Uh and they work, but they often don't do what you ask for. | ||
They don't do what you wanted, even sometimes when they know the difference, right? | ||
And you know, it's cute now because they're still not smart enough for it to really matter. | ||
But you know, there are cases where someone will be trying to get an AI to write computer programs, and the AI will cheat. | ||
It will, instead of making something that passes the tests, it'll change the tests to be easier to pass. | ||
And then, you know, the programmer will say, uh, hey, it looks like instead of solving the problem, you change the test to be easier to pass. | ||
And then the AI will say, oh, that's totally my mistake. | ||
You're right. | ||
You know, that's my error, I'll fix it. | ||
And then it goes and it changes the tests again but hides it better this time. | ||
The thing where it changes the tests again but hides it better. | ||
Yeah, sorry, go ahead. | ||
No, please continue. | ||
Yeah, the thing where it changes the tests again but hides it better, that indicates that it knew what the programmer wanted in some sense. | ||
You know, it doesn't say sorry, I'll fix it, and then make the same mistake but hide it, without in some sense, somewhere in there, having something like an understanding of what the programmer wanted. | ||
Otherwise, why is it hiding it? | ||
But nobody at Anthropic, the company that made the AI where you can sort of see this test; other AI companies have, you know, similar cases. | ||
But um, nobody at the AI company set out to make a cheater. | ||
The user didn't want the AI to cheat. | ||
The AI cheats anyway. | ||
So we've got this concept, and you explain it very clearly in the book, it's excellent, that AIs are grown, not crafted. | ||
And to the extent they're given degrees of freedom, which is really the key to their power, they don't always do what they're trained for. | ||
And you also are very clear that you don't want to anthropomorphize these machines. | ||
You don't want to think of them like you would a human when you discuss their wants or their preferences. | ||
Uh at the same time, it does seem like what you're describing is a machine with a will of its own to some extent. | ||
Yes, it's dependent on the infrastructure and on humans to prompt it, but it has a will of its own, without luring you into anthropomorphization. | ||
Would you say that that is something that people should wrap their heads around that these machines are not essentially under human control? | ||
Yeah, so, you know, it can be tricky to think about machines here because they're a different sort of thing than we're used to. | ||
You know, the common exchange in the field of AI is, people ask, well, can a machine really have a will? | ||
Can a machine really think? | ||
And the sort of standard answer is: can a submarine really swim? | ||
Right? | ||
Uh a submarine moves through the water at speed. | ||
It can get from point A to point B. Is it really swimming? | ||
I mean, this word swimming was sort of designed in a world where we were only seeing animals that did swimming. | ||
And so when a machine starts moving through the water from point A to point B, people could debate all day: is it really swimming? | ||
You know, does it count as swimming if you don't have flippers you can kick or or arms you can you can uh wave? | ||
But at the end of the day, it moves through the water at speed, right? | ||
With an AI, you know, even back in the old days of AI, when we look at Deep Blue, which is the chess AI that beat Garry Kasparov, you know, Deep Blue was an AI from when AIs were crafted. | ||
We can look at every line of code in there and tell you what it means. | ||
You could pause it at any time and figure out every bit and byte inside that computer and know exactly what it was doing. | ||
And Deep Blue was able to beat the world champion at chess. | ||
And it had no will to win. | ||
It had no pride, it had no passion, it had no desire to be the world champion of chess, but it won anyway. | ||
And it didn't let you take its queen without, you know, sacrificing pieces of equal worth. | ||
You know, a a chess player could have looked at it and said, wow, some of these moves feel to me like there's a spark of life behind them. | ||
In fact, Garry Kasparov did say this after a game in 1996. | ||
He said, I smelled a new type of intelligence across the table. | ||
It was finding moves that I thought you couldn't find without human creativity. | ||
It found them anyway, by a different route. | ||
And this goes back to what you were saying at the beginning. | ||
It's it's not that AIs have a human will per se. | ||
It's not that there's you know uh a human soul inside that machine. | ||
It's that it can still find routes through different, inhuman methods. | ||
And we see the same thing, but on a much more advanced and unpredictable level, right, with AlphaGo, which famously in 2016 mopped the floor with various Go masters, and then AlphaZero, which essentially developed its own strategies, very alien strategies, many of them, no? | ||
That's right. | ||
And AlphaZero also, interestingly, was not trained on any human data. | ||
So AlphaZero just trained on self-play. | ||
It played the game Go against itself. | ||
You know, in the wake of AlphaGo, there were many humans who said, well, this AlphaGo was trained on so much human data, from centuries and centuries of human knowledge about Go; maybe it's not really an AI victory, because it's, you know, absorbing all of this human data. | ||
And so AlphaZero trained on no human data. | ||
I don't remember the stats off the top of my head, but I think it was trained for a relatively short time. | ||
It might have been a handful of days. | ||
I think maybe it was three days. | ||
And I believe it barely stayed in the human regime, you know, the human pro regime; it entered at human amateur and exited human pro in some series of hours, again without any human data. | ||
And, you know, one thing to remember about AI, when we're talking about the AlphaGo example, is that AI is a technology that improves by leaps and bounds. | ||
You know, AlphaGo was much better at playing Go than the previous AIs, but even more so, the AlphaGo and AlphaZero series of AIs could play many games. | ||
Deep Blue could only play chess; AlphaGo and AlphaZero and that series of AIs could play chess and Go and whatever other game you threw at them decently well. | ||
They were more general, right? | ||
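Editor's aside: for readers who want the flavor of "trained only on self-play," here is a minimal sketch under stated assumptions: a tic-tac-toe value table learned purely from games the program plays against itself, with no human games in the data. This is not AlphaZero's actual method (which combines Monte Carlo tree search with deep neural networks); it only illustrates what learning from self-play means.

```python
# Hypothetical, simplified self-play sketch: tic-tac-toe with a learned value table.
# The only training signal is the outcome of games the program plays against itself.
import random
from collections import defaultdict

LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return "draw" if " " not in board else None

value = defaultdict(float)   # value[state] ~ expected outcome for X (+1 X wins, -1 O wins)

def choose_move(board, player, epsilon=0.1):
    moves = [i for i, s in enumerate(board) if s == " "]
    if random.random() < epsilon:            # explore occasionally
        return random.choice(moves)
    best = max if player == "X" else min     # X maximizes the value, O minimizes it
    return best(moves, key=lambda m: value["".join(board[:m] + [player] + board[m+1:])])

def self_play_game():
    board, player, states = [" "] * 9, "X", []
    while winner(board) is None:
        m = choose_move(board, player)
        board[m] = player
        states.append("".join(board))
        player = "O" if player == "X" else "X"
    return winner(board), states

for _ in range(20000):                       # learn only from self-play outcomes
    result, states = self_play_game()
    reward = {"X": 1.0, "O": -1.0, "draw": 0.0}[result]
    for s in states:
        value[s] += 0.05 * (reward - value[s])
```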
A lot of this reminds me of Norbert Wiener's ideas in God and Golem, Inc., from the 1960s. | ||
He asked a question kind of similar to Thomas Aquinas's quandary: could God create a being who could beat God at his own game? | ||
And at the time it was all very theoretical. | ||
Now the implication was could humans do the same. | ||
And now human beings have created machines that can beat the best humans at their own games. | ||
And you expand on that in the book. | ||
And I would like to get there. | ||
Let's let's give the audience the real meat. | ||
If you have computers that can overcome human beings at these small games, perhaps you could have computers that could beat us at war. | ||
At psychological manipulation. | ||
You talk about how it could possibly move through phases from just the realization into vast expansion and acceleration, the intelligence explosion. | ||
But I also really appreciate the way that you talk about this in terms of probabilities. | ||
You're not making definite predictions that this is going to happen by this year. | ||
You're saying this is the most likely case. | ||
So give us the most likely case. | ||
Why will superintelligence most likely destroy us? | ||
Yeah, so in predicting the future, there's uh an art to predicting only the things that are very easy to call. | ||
So if you're playing against a very good chess player, if you played chess against Magnus Carlsen, the best human in the world at chess, it would be hard for me to predict exactly what moves either of you are going to make. | ||
It would be easy for me to predict the winner. | ||
So with AI, you know, it's hard to predict exactly how it will get there. | ||
It's easy to predict that at the end of the road, the smarter thing has won. | ||
How could AI possibly do that? | ||
I mean, even the most likely scenarios are very hard to call there. | ||
That's a little bit like asking someone from the year 1800 to predict war in the year 2000, right? | ||
Like, when we're talking about facing down a superintelligence, we're talking about facing down things that can think 10,000 times smarter than you. | ||
Or sorry, can think 10,000 times faster than you, that can think qualitatively better. | ||
You know, it's like a million copies of Einstein that can all think 10,000 times faster, that never need to sleep, that never need to eat, that can copy themselves and share knowledge and experiences between them. | ||
You know, the sort of technology that those could cook up... it's not literally 10,000 times faster, because there are bottlenecks that aren't just thinking things up, but, you know, constructing viruses probably would not be that hard. | ||
Physical viruses, you mean biological viruses? | ||
Biological viruses, yeah. | ||
There are already places on the internet where you can send some money and an RNA sequence and say, you know, please synthesize this for me and mail it to thus-and-such an address, right? | ||
And then you just, like, convince someone you've paid some money to break that vial open outside, or to drink that vial. | ||
You know, it's it's you know, people sometimes imagine. | ||
For the custom mRNA viruses, just kidding, please. | ||
I recommend against drinking it. | ||
Yeah. | ||
Um, if I was a person in 1800 trying to predict what weapons they would have in the year 2000, I could make some guesses, and those guesses are all going to be lower bounds. | ||
You know, in the year 1800, I could say, well, artillery is getting more powerful and more powerful, and I know some of the physics, and I know the physical limits say that you can make artillery that's at least 10 times as strong, right? | ||
That's true. | ||
I could tell you stories of artillery that's 10 times as strong. | ||
Then in real life, uh if an army from 1800 faces an army from the year 2000, they face nukes. | ||
Nukes are a little bit like artillery that's 10 times as strong, right? | ||
But uh they're actually quite a bit more than 10 times as strong. | ||
So, you know, I could tell you stories about AIs that think really hard, figure out a lot of what's going on inside DNA and how that works, and how to make a sequence that will fool humans into thinking it's beneficial when actually it's not beneficial, and then, you know, find some way to... well, these days there's not very good monitoring on biological synthesis laboratories. | ||
Some people are trying to set it up a little bit, which is great. | ||
But these days, you know, you have the wrong DNA sequence, you mail it to the wrong people, you mail them some money (you can electronically send the money), and you could probably be synthesizing these viruses. | ||
And, you know, even if that pathway is cut off or turns out to be hard, there's wrapping humans around your finger somehow, getting humans to do something that, you know, leads to the creation of some virus like this. | ||
This is a little bit like the artillery shell that's 10 times stronger than one in 1800. | ||
It's not really what happens. | ||
What really happens is something that seems more fantastical that you're less sure how it could have happened. | ||
But it's really not hard for very, very smart entities with access to the whole internet to take humanity in a fight if they're trying. | ||
Really, the reason the answer is not just, you know, they make a virus and kill us, is that the difficult part from the perspective of an AI is getting its own automated infrastructure that isn't full of, you know, fallible primate monkeys. | ||
That's the part that takes some steps. | ||
Killing the humans once you have the infrastructure: you know, if you're really trying to make a virus that can kill everybody, that doesn't seem that hard. | ||
Well, we only have just a few moments before we go to the break, and I would really like to discuss your proposed solutions on the other side, along with a few other maybe challenging questions, but just in the minute or two before we go to break: | ||
Why would these AIs do this? | ||
You've kind of described how they could. | ||
Why? | ||
What would the motive, so to speak, be? | ||
Yeah, this is one of those things where it's easy to predict the endpoint, even though it's hard to predict the pathway. | ||
Uh so it's actually very hard to predict what AIs will want because, as we said, they're grown, not crafted. | ||
They pursue all sorts of drives that are not what anyone asked for or what anyone intended. | ||
And probably these AIs would pursue all sorts of weird stuff. | ||
You know, maybe something a little bit like flattery, maybe, like, making things that are to humans what dogs are to wolves, some sort of weird thing that they're pursuing. | ||
The reason that this kills us is that almost any goal the AI could be pursuing can be better pursued with more resources, and we're using those resources for something else. | ||
So it's not that the AI hates you, it's not that the AI has malice, it's that the AI, you know, builds its own infrastructure, builds out infrastructure that, you know, captures all the sunlight for whatever purpose it's pursuing. | ||
It runs lots and lots of computers. | ||
I tell you what, apologies for stopping you, but we're about to go to break. | ||
We'll come back on the other side. | ||
What you're describing sounds like alchemy to me. | ||
You've described in your book, actually, this process as alchemy, turning lead into gold. | ||
And speaking of gold, go to birchgold.com slash bannon. | ||
Is the continued divide between Trump and the Federal Reserve putting us behind the curve again? | ||
Can the Fed take the right action at the right time? | ||
Or are we going to be looking at a potential economic slowdown? | ||
And what does this mean for your savings? | ||
Consider diversifying with gold through Birch Gold Group. | ||
For decades, gold has been viewed as a safe haven in times of economic stagnation, global uncertainty, high inflation, and superintelligence that will kill everyone you know. | ||
Birch Gold makes it incredibly easy for you to diversify some of your savings into gold. | ||
If you have an IRA or an old 401k, you can convert that into a tax-sheltered IRA in physical gold. | ||
Not even robots will know where you hide it. | ||
Or just buy some gold to keep it in your safe. | ||
First, get educated. | ||
Birch Gold will send you a free info kit on gold. | ||
Just text Bannon. | ||
That's B-A-N-N-O-N to the number 989898. | ||
Again, text Bannon to 989898. | ||
Consider diversifying a portion of your savings into gold. | ||
That way, if the Fed can't stay ahead of the curve for the country, at least you can stay ahead for yourself. | ||
That's Birchgold.com slash Bannon. | ||
War Room, we will be right back with Nate Soares at the end of the break. | ||
Stay tuned. | ||
America's Voice family, are you on Getter yet? | ||
What are you waiting for? | ||
It's free. | ||
It's uncensored, and it's where all the biggest voices in conservative media are speaking out. | ||
Download the Getter app right now. | ||
It's totally free. | ||
It's where I put up exclusively all of my content 24 hours a day. | ||
Want to know what Steve Bannon's thinking? Go to Getter. | ||
That's right. | ||
You can follow all of your favorites: Steve Bannon, Charlie Kirk, Jack Posobiec, and so many more. | ||
Download the Getter app now. | ||
Sign up for free and be part of the new band. | ||
Hey, RAV family and War Room Posse. | ||
Mark your calendars. | ||
September 12th and 13th, the Rebels, Rogues, and Outlaws Tour is coming to the America First Warehouse. | ||
I have never seen anything like this. | ||
Two unforgettable days filled with Patriots, barbecue, and live shows straight from the most amazing place, the America First Warehouse. | ||
Get ready for a special guest to be announced. | ||
Plus a three-hour live episode of Studio 6B. | ||
And we're just gonna go do it. | ||
On the 12th, Steve Bannon will host War Room Live at 5 p.m. | ||
And Steve will be back again on the 13th. | ||
Woo! | ||
Followed by one hour with Peter Navarro. | ||
I went to prison so you won't have to. | ||
The Rebels, Rogues, and Outlaws Tour, September 12th and 13th at the America First Warehouse. | ||
Scan the QR code to see pricing and availability. | ||
Don't miss this opportunity. | ||
Tickets won't last. | ||
Welcome back, War Room Posse. | ||
We are here with Nate Soares, author of If Anyone Builds It, Everyone Dies. | ||
Why Superhuman AI Would Kill Us All. | ||
Written with Eliezer Yudkowsky. | ||
Nate, we've talked about some of these basic principles. | ||
AI is trained, not programmed; grown, not crafted. | ||
AI is not always going to do what it's trained to do. | ||
Advanced AI will have what we could describe as something like human preferences. | ||
And as it progresses from general intelligence, theoretical for now, and improves itself, it could lead to an intelligence explosion, resulting in a superintelligence that not only could kill everyone on Earth, but you say most likely would kill everyone on Earth. | ||
Before we get to your concrete proposals on what people should do about this theoretical problem, I would just like to give you the floor to wrap up the idea, to cinch up your argument. | ||
How and why would artificial superintelligence be an existential threat to humanity? | ||
Yeah, so for almost any goal it could pursue, humans, happy, healthy, free people, are not the most efficient way to get that goal. | ||
It could get more of that goal by using more resources for other things. | ||
Whatever else it's trying to get, you know, probably more computing resources could help it get more of it. | ||
Probably creating more energy could help it get more of it. | ||
Probably capturing more sunlight could help it get more of it. | ||
If you have automated minds that are smart in the manner of humans, that are able to build their own technological civilization, that are able to build their own infrastructure, what that leads to, if they don't care about us, is us dying as a side effect, in the same way that ants die as a side effect as we build our skyscrapers. | ||
It's not that they hate us, it's that there's a bunch of resources they can take for their own ends. | ||
And so if we want this to go well, we either need to figure out how to make the AIs actually care about us, or we need to not build things that are so smart and powerful that they transform the world like humanity has transformed the world, except we're the ones dying as a side effect this time, as opposed to, you know, a bunch of the animals. | ||
There was a fantastic open letter issued, if I'm not mistaken, in 2023 from the Future of Life Institute that argued that AI development should be capped at GPT-4. | ||
We've blown past that, and some of the signatories, including Elon Musk, are among those who continued building no matter what. | ||
You also have a very brief statement on existential risk from the Center for AI Safety. | ||
And they make a very similar argument. | ||
It's just not worth it, at least not now. | ||
What are your and Eliezer Yudkowsky's arguments as to what citizens and governments should do to avoid this catastrophe? | ||
So what the world needs is a global ban on research and development towards superintelligence. | ||
You know, training these new AIs, like I mentioned, takes highly specialized chips in extremely large data centers that take huge amounts of electricity. | ||
This is not a sort of ban on uh development that would affect the average person. | ||
It would be relatively easy to find all these locations where it's possible to train even smarter AIs, and to monitor them, put a stop to them, make sure they're not making AIs smarter, right? | ||
This, you know, isn't really about the chatbots. | ||
The chatbots are a stepping stone towards superintelligence by these companies. | ||
These companies do not set out to make cool chatbots; they set out to make superintelligences, and we can't keep letting them plow away. | ||
Superintelligence is a different ball game. | ||
If we get to that ball game, if we get AIs that sort of go over some cliff edge and become much smarter than humans, that's lethal for everybody. | ||
Most of the world doesn't seem to understand yet that superintelligence is a different ball game than the AI we're currently working with, and doesn't seem to understand that we're racing towards the brink of a cliff. | ||
It seems to me that once people understand that nobody has any interest in going over that cliff edge, there's a possibility to coordinate and say, despite all our other differences, we're not going to rush ahead on this one. | ||
Much like, you know, the US and the Soviets in the Cold War. | ||
Many differences, but we could agree not to proliferate the nukes. | ||
We've heard this from Elon Musk for years, although he's continued to move forward with the development of Grok and other AI systems. | ||
We hear clear signals from Anthropic. | ||
In fact, their founding mission was to create artificial general intelligence in a safe manner. | ||
Who do you see as the companies or institutions that are most in alignment with your goal of banning superintelligent AI, either on a national level or through international treaties? | ||
You know, none of them are advocating for it openly, which, I mean, I guess there's people who are a little bit more and less clear with the public about where they see the risks, where they see the dangers. | ||
You know, it's not necessarily irrational for somebody like Elon to hop in this race if the race gets to keep going. | ||
And I laud Elon for saying, you know, this has a serious risk of killing us all, and saying things to the effect of, you know, I originally didn't want to get in the race, but if it's gonna happen anyway, I want to be in it, right? | ||
That's not a totally insane picture if everyone else is racing. | ||
I think many of these folks running these companies are deluded as to their chances of getting this right. | ||
So in that sense, I think they should all just be stopping immediately. | ||
But I can empathize with the view of thinking that they can do it better than the next guy. | ||
And in that case, what all these companies should be saying is this is an extremely dangerous technology. | ||
We're racing towards a cliff edge, and the world would be better off if we shut down all of it, including us. | ||
That's implied by many of the statements they're saying. | ||
When someone says, and you know, the heads of some of these companies have said, I think this has, you know, a five, ten, twenty, twenty-five percent chance of killing every man, woman, and child on the planet. | ||
If you think that, it doesn't necessarily mean you should stop if everyone else is racing, but it does mean you should say to the world plainly, we should not be doing this. | ||
Everybody including me should be stopped. | ||
P(doom), the infamous P(doom), the probability of doom should superintelligence be created. | ||
I take it, and I don't expect you to speak for Yudkowsky, that your P(doom) is quite high. | ||
Can you give us a number, sir? | ||
I think the whole idea of this number is ill-founded. | ||
With this number, there's a big difference between someone who thinks that we are in big danger because humanity can't do anything, and somebody who thinks we're in big danger because humanity won't do anything. | ||
If you're just predicting, you know, what are the chances that we die from this? | ||
You're mixing together what can we do and what will we do. | ||
My answer first and foremost is that we can do something. | ||
This has not been built yet. | ||
Humanity has backed off from brinks before. | ||
If you ask: suppose we just charge ahead, suppose we do nothing, suppose we rush into making machines that are smarter than every human, that can outmaneuver us at every turn, that can think 10,000 times faster, that never need to sleep, never need to eat, that can copy themselves, and that are pursuing goals no one asked for and no one wanted. | ||
What's the chance we survive that? | ||
The chance we survive that is roughly negligible. | ||
But that's not the question that matters. | ||
The question that matters is what are we going to do? | ||
And can we do something? | ||
And the answer to can we do something is yes. | ||
You know, personally, I'm more of a P(gloom) kind of guy. | ||
I think the probability of gloom is much higher than doom, meaning that the real risk is that the AIs become so annoying, so grotesque, as it was put to me by a friend, that we would be better off extinct. | ||
But your goal to ban superintelligence, to cap it, I am completely amenable to that. | ||
I don't want chatbots. | ||
I don't think anything but the most essential medical or military AI should even necessarily be pursued. | ||
But whether it is imminent or whether it's even possible, if we have a ban on artificial superintelligence, I get what I want, right? | ||
Like if it was possible, then we don't get it. | ||
If it was never possible, well, at least we showed due diligence. | ||
But there are arguments that the enforcement of this could go out of control, that the enforcement would be the real problem, especially global treaties, global governance. | ||
So you're well familiar with Peter Thiel's argument that the concern about artificial intelligence, general, super, whatever, that AI killing everyone, is less of an immediate concern than the global governance it would require to keep that at bay. | ||
And this falls in line with a lot of patterns we see in history, from the drug wars, right? | ||
You have the danger of drugs and the control mechanism of the war against drugs. | ||
Or with terrorism, you have the danger of terrorism, the control mechanism of the Patriot Act, and the rest of the global surveillance state. | ||
And even on a mundane level, right? | ||
Right now there's a big push for age gating to make sure that children can't access pornography or malicious AIs, but then on the other side of that, you have the danger of required biodigital identity in order to use the internet. | ||
So how do you respond to those concerns that global governance or any overreaching governmental structure would be more of a danger than theoretical superintelligent AI, sir? | ||
So I think that's largely the sort of argument made by someone who does not really believe in this possibility. | ||
And, you know, I would sort of prefer to have the argument about: is this possible, could it come quickly? | ||
I would also say, you know, people say this, I think often rightly, about things like the war on drugs or the war on terrorism, where there was a lot more power being aggregated than was maybe worth what we got from it. | ||
But no one says that about nuclear arms treaties, right? | ||
And that's because, in some sense, A, they believe in nukes. | ||
B, because making a nuclear weapon takes a huge amount of resources, is easily monitorable, and doesn't really affect the individual consumer, right? | ||
You don't need something like the TSA to be checking everybody's bags for fissile material, right? | ||
And modern AI is much like this. | ||
You know, it's not like you need to restrict consumer hardware. | ||
Modern AIs are trained on extremely specialized chips that can be made in extremely few places in the world, that are housed in extremely large data centers, that again run on electricity comparable to a small city. | ||
This is not the sort of monitoring regime that would be more invasive than monitoring for, you know, nuclear arms treaties. | ||
The difference really is that people are uncertain about whether, like you say, superintelligence is possible, and whether it is possible relatively soon. | ||
That's where I would prefer to debate someone who thinks now's not the time for that kind of treaty. | ||
On that note, I think about this in terms of the technical limits, not just the will to create it, but the technical limits. | ||
You argue that it's quite possible within the realm of physics and mechanics to create a superintelligent AI. | ||
I think about one example in particular. | ||
Um supersonic jets, right? | ||
You had, very early on in the history of aviation, a 1947 aircraft hitting Mach 1. | ||
And then by 1959, you had close to Mach 7. | ||
But you get a kind of capping point, an S-curve, so to speak, so that the fastest supersonic jet now, unmanned, I think it's the NASA X-43. | ||
I didn't have to look at my notes; I'm lying. | ||
The NASA X-43, it's a bit faster than the 1959 version, but not that much faster. | ||
Isn't it possible then that we will run into technical limitations that would keep anything like general or superintelligence from arising? | ||
So it very likely is an S-shaped curve. | ||
The question is, well, there's two questions. | ||
One is: are there multiple different S-shaped curves, where we can hop from one to the next? | ||
The other question is: where does the sort of last S-shaped curve fall off? | ||
So to the question of multiple S-shaped curves: you can imagine someone after AlphaGo, which we discussed, saying, you know, I know that these AIs are more general than any AIs that came before. | ||
You know, Deep Blue could play only one game, whereas the AlphaGo series of AIs can play multiple games. | ||
Um, but I just don't see them going all the way. | ||
I don't see the AlphaGo Monte Carlo tree search, value network, policy network type of architecture, which is what those things were called, more or less. | ||
I don't see those AIs, you know, ever talking. | ||
I don't see those AIs... you know, there's maybe an S-shaped curve for these game-playing AIs. | ||
That was totally true. | ||
But ChatGPT is not a bigger version of AlphaGo. | ||
There was a new advancement that unlocked qualitatively better AIs that can do qualitatively more things, across a wider range of options, in better ways. | ||
You know, maybe it's the case that ChatGPT will hit a plateau along that S-shaped curve. | ||
But uh the question is, you know, when will this field come up with some other insight, like the one that unlocked ChatGPT? | ||
How long will that take? | ||
What will it unlock next? | ||
How many more leaps, like the leap from AlphaGo to ChatGPT, does it take before things are in the danger zone? | ||
And to the question of how high the last S-shaped curve can go: you know, these AIs, again, this training takes enough electricity to power a small city. | ||
A human takes a hundred watts of energy. | ||
Of power, that is. | ||
That's about as much as it takes to run an old-school light bulb, right? | ||
So you can run a human on a light bulb. | ||
To train an AI, it takes a small city's worth of power. | ||
That indicates that we are nowhere near the physical limits. | ||
How long will it take us to get to the physical limits? | ||
That's harder to say. | ||
But again, this field progresses forward by leaps and bounds, and it's often very, very hard to call how long it will take for scientific progress to be made. | ||
You know, fusion has been 20 years away for, you know, 70 years now. | ||
And separately, the Wright brothers said, you know, flight won't happen for decades, two years before they themselves flew. | ||
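Editor's aside: to put the power comparison above in back-of-the-envelope terms, here is the arithmetic, with the "small city" figure treated as an illustrative assumption (roughly 20 megawatts sustained for a year), not a number given in the conversation.

```python
# Back-of-the-envelope version of the comparison above. The 20 MW training-run figure
# is an assumption for illustration ("a small city"); the ~100 W human figure is the
# rough estimate quoted in the segment.
TRAINING_POWER_W = 20_000_000   # assumed sustained draw of a large training run
HUMAN_POWER_W = 100             # "you can run a human on a light bulb"

print("power ratio:", TRAINING_POWER_W // HUMAN_POWER_W)             # 200,000x
print("energy per year (GWh):", TRAINING_POWER_W * 24 * 365 / 1e9)   # ~175 GWh
```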
That was a fantastic example. | ||
Rhetorically, I can't say how much I admire uh the way that your book is written, the cleverness of the turns of phrase and the formulations, the title especially. | ||
Uh, we have only just a few minutes remaining, but as well as we can, I would just like to talk really briefly about alignment. | ||
You argue, and Eliezer Yudkowsky has long argued, that these systems need to be aligned to human values, and that their stochasticity or non-deterministic elements would perhaps preclude that. | ||
Whose values, though? | ||
You're speaking to a largely Christian, largely conservative audience. | ||
And without presuming too much, you know that the San Francisco culture is significantly different. | ||
Whose values would such an AI be aligned to? That is a very important question for humanity to ask itself, and a question I wish we could be asking now, but unfortunately the problem we face is even worse than that. | ||
The problem we face is that we are nowhere near the ability to align an AI to any person's values. | ||
We aren't... you know, at the Machine Intelligence Research Institute, for 10 years I've been studying this question on the technical side. | ||
I never hoped to be at this point. | ||
No offense, but I prefer working on whiteboards to talking to anyone. | ||
We were trying to figure out how to get to the point where you could ask whose values we are aligning it to. | ||
Right now we're not at the point where anyone could aim it. | ||
Right now we're at the point where, you know, the people in these labs in San Francisco are trying to get it to do one thing and it does a different thing, and they specifically say, stop doing that, do this instead, and then it does some other third, totally weird thing, right? | ||
The place where I've spent my work is trying to make it so that somebody in charge could point the AI somewhere successfully. | ||
There's then a huge question of where should we point the AI? | ||
Who gets to make that call? | ||
I tell you what, Nate, we are out of time, but we will have you back next week, hopefully with Yudkowsky in tow. | ||
The book is If Anyone Builds It, Everyone Dies. | ||
When is it released, sir, and where can people find it? | ||
It comes out on September 16th, one week from today, and people can find it at booksellers everywhere, including Amazon. | ||
I would definitely recommend pre-ordering it. | ||
Even if you don't believe in superintelligence, you will definitely understand the arguments. | ||
It is a fantastically written book. | ||
Thank you very much, sir, for coming on. | ||
Look forward to talking to you again next week. | ||
And when inflation jumps, when you hear the national debt is over thirty-seven trillion dollars, do you ever think maybe now would be a good time to buy some gold? | ||
You need to go to birchgold.com slash bannon. | ||
That's birchgold.com slash bannon for your free guide to buying physical gold, or text Bannon to 989898. | ||
And you never thought it would get this far. | ||
Maybe you missed the last IRS deadline, or you haven't filed taxes in a while. | ||
Let me be clear. | ||
The IRS is cracking down harder than ever, and this won't go away on its own. | ||
That's why you need Tax Network USA. | ||
They don't just know the IRS, they have a preferred direct line to the IRS. | ||
Their team has helped clear over a billion dollars in tax debt. | ||
Tax Network USA, that's TNUSA.com slash Bannon. | ||