Speaker | Time | Text |
---|---|---|
All right, so if there's a doomsday clock for AI and where we're fucked, what time is it? | ||
If midnight is, we're fucked. | ||
We're getting right into it. | ||
You're not even going to ask us what we had for breakfast? | ||
unidentified
|
No, no, no, no, no, no, no, no. | |
Jesus, okay. | ||
Let's get freaked out. | ||
Well, okay, so there's one, without speaking to, like, the fucking Doomsday dimension right off the gate, there's a question about, like, where are we at in terms of AI capabilities right now, and what do those timelines look like? | ||
unidentified
|
Right. | |
There's a bunch of disagreement. | ||
One of the most concrete pieces of evidence that we have recently came out of a lab, an AI kind of evaluation lab called METER, and they put together this test. | ||
Basically, it's like you ask the question, pick a task that takes a certain amount of time, like an hour. | ||
It takes like a human a certain amount of time. | ||
And then see like how likely the best AI system is to solve for that task. | ||
Then try a longer task. | ||
See like a 10 hour task. | ||
And so right now what they're finding is when it comes to AI research itself, so basically like automate the work of an AI researcher. | ||
You're hitting 50% success rates for these AI systems for tasks that take an hour long. | ||
And that is doubling every, right now it's like every four months. | ||
So you had tasks that you could do, you know, a person does in five minutes like, you know, ordering an Uber Eats or like something that takes like 15 minutes, like maybe booking a flight or something like that. | ||
And it's a question of like, how much can these AI agents do, right? | ||
Like from five minutes to 15 minutes to 30 minutes. | ||
And in some of these spaces like... | ||
So if you extrapolate that, you basically get to tasks that take a month to complete. | ||
Like by 2027, tasks that take an AI researcher a month to complete, these systems will be completing with like a 50% success rate. | ||
So you'll be able to have an AI on your show and ask it what the doomsday clock is like by then. | ||
I probably won't laugh. | ||
It'll have a terrible sense of humor about it. | ||
Just make sure you ask it what it had for breakfast before you started. | ||
Yeah. | ||
What about quantum computing getting involved in AI? | ||
So, yeah. | ||
Honestly, I don't think it's... | ||
If you think that you're going to hit... | ||
Human-level AI capabilities across the board, say 2027, 2028, which when you talk to some of the people in the labs themselves, that's the timelines they're looking at. | ||
They're not confident, they're not sure, but that seems pretty plausible. | ||
If that happens, really there's no way we're going to have quantum computing that's going to be giving enough of a bump to these techniques. | ||
You're going to have standard classical computing. | ||
One way to think about this is that the data centers that are being built today... | ||
Are being thought of literally as the data centers that are going to house, like, the artificial brain that powers superintelligence, human-level AI when it's built in, like, 2027, something like that. | ||
So, how knowledgeable are you when it comes to quantum computing? | ||
So, a little bit. | ||
I mean, like, I did my grad studies in, like, the foundations of quantum mechanics. | ||
Oh, great. | ||
Yeah, well, it was a mistake, but I appreciate it for the purpose of that. | ||
Why was it a mistake? | ||
Academia is a funny thing. | ||
It's really bad culture. | ||
It teaches you some really terrible habits. | ||
Basically, my entire life after academia, and Ed's too, was unlearning these terrible habits. | ||
It's all zero-sum, basically. | ||
It's not like when you're working in startups. | ||
It's not like when you're working in tech where you build something and somebody else builds something that's complementary and you can team up and just make something amazing. | ||
It's always... | ||
Wars over who gets credit, who gets their name on the paper. | ||
Did you cite this fucking stupid paper from two years ago because the author has an ego and you got to be honest. | ||
I was literally at one point, I'm not going to get any details here, but like there was a collaboration that we ran with like this, anyway, fairly well-known guy. | ||
And my supervisor had me, like, write the emails that he would send from his account so that he was seen as, like, the guy who was, like, interacting with this bigwig. | ||
That kind of thing is, like, doesn't tend to happen in startups, at least not in the same way. | ||
So he wanted credit for the, like, he wanted to seem like he was the genius who was facilitating this? | ||
For sounding smart on email. | ||
unidentified
|
Eww. | |
Right? | ||
unidentified
|
Yuck. | |
That happens everywhere. | ||
unidentified
|
Dude, yeah. | |
The reason it happens is that these guys who are professors, or even not even professors, just like your post-doctoral guy who's supervising you, they can write your letters of reference and control your career after that lapse. | ||
Yeah, they got you by the balls. | ||
They can do whatever. | ||
Oh, God. | ||
It's like a movie. | ||
Yeah, it's gross. | ||
Like a gross movie. | ||
Like a gross boss in a movie that wants to take credit for your work. | ||
And it's real. | ||
It's rampant. | ||
And the way to escape it is to basically just be like, fuck this. | ||
I'm going to go do my own thing. | ||
And so Jer dropped out of grad school to... | ||
Come start a company. | ||
And I mean, honestly, even that, it took me, it took both of us, like, a few years to, like, unfuck our brains and unlearn the bad habits we learned. | ||
It was really only a few years later that we started, like, really, really getting a good, like, getting a good flow going. | ||
You're also, you're kind of disconnected from, like, base reality when you're in the ivory tower, right? | ||
Like, if you're, there's something beautiful about, and this is why we spent all our time in startups, but there's something really beautiful about, like, It's just a bunch of assholes, us, and, like, no money and nothing and a world of, like, potential customers. | ||
And it's like, you actually, it's not that different from, like, stand-up comedy in a way. | ||
Like, your product is, can I get the laugh, right? | ||
Like, something like that. | ||
And it's... | ||
Unforgiving. | ||
If you fuck up, it's like silence in the room. | ||
It's the same thing with startups. | ||
Like, the space of products that actually works is so narrow. | ||
And you've got to obsess over what people actually want. | ||
And it's so easy to fool yourself into thinking that you've got something that's really good because your friends and family are like, oh, no, sweetie, you're doing a great job. | ||
Like, what a wonderful life. | ||
I would totally use it. | ||
I totally see all that stuff, right? | ||
And I love that because it forces you to change. | ||
Yeah. | ||
The whole indoctrination thing in academia is so bizarre because there's these hierarchies of powerful people and just the idea that you have to work for someone someday and they have to take credit by being the person on the email. | ||
That will haunt me for days. | ||
I'll be thinking about that for days now. | ||
I fucking can't stand people like that. | ||
It drives me nuts. | ||
One big consequence is it's really hard to tell who the people are who are creating value in that space, too, right? | ||
Of course. | ||
Sure, because it's just like television. | ||
One of the things about television shows is—so I'll give you an example. | ||
A very good friend of mine who's a very famous comedian had this show, and his agent said, we're going to attach these producers. | ||
It'll help get it made. | ||
And he goes, "Well, what are they gonna do?" | ||
He goes, "They're not gonna do anything. | ||
It'll just be in name." | ||
He goes, "But they're gonna get credit." | ||
He goes, "Yeah." | ||
He goes, "Fuck that." | ||
He goes, "No, no, listen, listen. | ||
This is better for the show. | ||
It'll help the show." | ||
Excuse me. | ||
They'll have a piece of the show. | ||
He's like, "Yes, yes, but it's a matter of whether the show gets successful or not, and this is a good thing to do." | ||
And he's like, "What are you talking about?" | ||
It was a conflict of interest because this guy, the agent was representing these other people. | ||
But this is completely common. | ||
So there's these executive producers that are on shows that have zero to do with it. | ||
So many industries are like this. | ||
And that's why we got into startups. | ||
It's literally like you and the world, right? | ||
It's like in a way... | ||
Like stand-up comedy, like Jer said. | ||
Or like podcasting. | ||
Or like podcasting, where your enemy isn't actually hate. | ||
It's indifference. | ||
Like, most of the stuff you do, especially when you're getting started, like, why would anyone, like, give a shit about you? | ||
They're just not going to pay attention. | ||
Yeah, that's not even your enemy. | ||
You know, that's just all potential. | ||
That's all that is, you know? | ||
Like, your enemy is within you. | ||
It's like, figure out a way to make whatever you're doing good enough that you don't have to think about it not being valuable. | ||
It's meditative. | ||
Like, there's no way for it not to be... | ||
To be, in some way, a reflection of, like, yourself. | ||
You know, you're kind of, like, in this battle with you trying to convince yourself that you're great, so the ego wants to grow, and then you're constantly trying to compress it and compress it. | ||
And if there's not that outside force, your ego will expand to fill whatever volume is given to it. | ||
Like, if you have money, if you have fame, if everything's given, and you don't make contact with the unforgiving on a regular basis, like, yeah, you know, you're gonna end up... | ||
You're going to end up doing that to yourself. | ||
You could, yeah. | ||
It's possible to avoid, but you have to have strategies. | ||
Yeah, you have to be intentional about it. | ||
The best strategy is jujitsu. | ||
Mark Zuckerberg is a different person now. | ||
Yeah, you can see it. | ||
You can see it. | ||
Yeah, well, it's a really good thing for people that have too much power because you just get strangled all the time. | ||
And then you just get your arms bent sideways. | ||
And after a while, you're like, okay. | ||
This is reality. | ||
This is reality. | ||
This social hierarchy thing that I've created is just nonsense. | ||
It's just smoke and mirrors. | ||
And they know it is, which is why they so rabidly enforce these hierarchies. | ||
The best people seek it out. | ||
Sir and ma 'am and all that kind of shit. | ||
That's what it is. | ||
You don't feel like you really have respect unless you say that. | ||
unidentified
|
Ugh. | |
These poor kids that have to go from college where they're talking to these dipshit professors out into the world and operating under these same rules that they've been, like, forced and indoctrinated to. | ||
God, to just make it on your own. | ||
It's amazing what you can get used to, though. | ||
And, like, the... | ||
It's funny, you were mentioning the producer thing. | ||
That is literally also a thing that happens in academia. | ||
So you'll have these conversations where it's like, all right, well, this paper is... | ||
You know, fucking garbage or something. | ||
But we want to get it in a paper, in a journal. | ||
And so let's see if we can get, like, a famous guy on the list of authors so that when it gets reviewed, people go like, oh, Mr. So-and-so, okay. | ||
And that literally happens. | ||
The funny thing is, like, the hissy fits over this are, like, the stakes are so brutally low. | ||
At least with your producer example, like, someone stands to make a lot of money. | ||
With this, it's like... | ||
You get maybe like an assistant professorship out of it at best that's like $40,000 a year. | ||
It's just like, what are you going to do? | ||
For the producers, it is money, but I don't even think they notice the money anymore. | ||
Because all those guys are really, really rich already. | ||
If you're a big-time TV producer, you're really rich. | ||
I think the big thing is... | ||
Being thought of as a genius who's always connected to successful projects. | ||
Right, yeah. | ||
That's what they really like. | ||
That is always going to be a thing, right? | ||
It wasn't one producer. | ||
It was like a couple. | ||
So there's going to be a couple different people that were on this thing that had zero to do with it. | ||
It was all written by a stand-up comedian. | ||
His friends all helped him. | ||
They all put it together. | ||
And then he was like, no. | ||
He wound up firing his agent over it. | ||
Oh, shit. | ||
Good for him. | ||
I mean, yeah. | ||
Get the fuck out of here. | ||
At a certain point for the producers, too, it's kind of like you'll have people approaching you for help on projects that look nothing like projects you've actually done. | ||
So I feel like it just adds noise to your universe. | ||
Like, if you're actually trying to build cool shit, you know what I mean? | ||
Some people just want to be busy. | ||
They just want more things happening and they think more is better. | ||
More is not better. | ||
Because more is energy that takes away from the better. | ||
Whatever the important shit is. | ||
Yeah, the focus. | ||
You only have so much time until AI takes over. | ||
Then you'll have all the time in the world because no one will be employed and everything will be automated. | ||
We'll all be on universal basic income. | ||
And that's it. | ||
That's a show. | ||
The end. | ||
That's a sitcom. | ||
That's a sitcom. | ||
A bunch of poor people existing on $250 a week. | ||
Oh, I would watch that. | ||
Yeah. | ||
Because the government just gives everybody... | ||
That's what you live off of. | ||
Like weird shit is cheap. | ||
Like the stuff that's like all like, well, the stuff you can get from chatbots and AI agents is cheap, but like food is super expensive or something. | ||
unidentified
|
Yeah. | |
Organic food is going to be, you're going to have to kill people for it. | ||
You will eat people. | ||
It will be like a Soylent world. | ||
Right. | ||
Soylent green. | ||
Nothing's more free range than people though. | ||
That's true. | ||
Depends on what they're eating though. | ||
It's just like animals, you know? | ||
You don't want to eat a bear that's been eating salmon. | ||
They taste like shit. | ||
I didn't know that. | ||
I've been eating my bear wrong this entire time. | ||
So back to the quantum thing. | ||
So quantum computing is infinitely more powerful than standard computing. | ||
Would it make sense, then, that if quantum computing can run a large language model, that it would reach a level of intelligence that's just preposterous? | ||
So, yeah, one way to think of it is, like, there are problems that quantum computers can solve way, way, way, way better than classical computers. | ||
And so, like, the numbers get absurd pretty quickly. | ||
It's, like, problems that a classical computer couldn't solve if it had the entire lifetime of the universe to solve it. | ||
A quantum computer, right, in, like, 30 seconds, boom. | ||
But the flip side, like, there are problems that quantum computers just, like, can't help us accelerate. | ||
The kinds of, like, one classic problem that quantum computers help with is this thing called, like, the traveling salesman paradox. | ||
Or problem where, you know, you have like a bunch of different locations that a salesman needs to hit, and what's the best path to hit them most efficiently? | ||
It's like kind of a classic problem if you're going around different places and have to make stops. | ||
There are a lot of different problems that have the right shape for that. | ||
A lot of quantum machine learning, which is a field, is focused on how do we take standard AI problems, like AI... | ||
I think we're good to go. | ||
Can you define that for people? | ||
What's the difference between human-level AI and superintelligence? | ||
Yeah. | ||
So, yeah, human-level AI is, like, AI... | ||
You can imagine, like, it's AI that is... | ||
As smart as you are in, let's say, all the things you could do on a computer. | ||
So, you know, you can, yeah, you can order food on a computer, but you can also write software on a computer. | ||
You can also email people and pay them to do shit on a computer. | ||
You can also trade stocks on a computer. | ||
So it's like as smart as a smart person for that. | ||
Superintelligence, people have various definitions, and there are all kinds of, like, honestly hissy fits about, like, different definitions. | ||
Generally speaking, it's something that's, like, very significantly smarter than the smartest human. | ||
And so you think about it, it's kind of like it's as much smarter than you as you might be smarter than a toddler. | ||
And you think about that, and you think about, like, the, you know, how would a toddler control you? | ||
It's kind of hard. | ||
Like, you can outthink a toddler. | ||
Pretty much like any day of the week. | ||
And so superintelligence gets us at these levels where you can potentially do things that are completely different and basically, you know, new scientific theories. | ||
And last time we talked about, you know, new stable forms of matter that were being discovered by these kind of narrow systems. | ||
But now you're talking about a system that is like, has that intuition combined with the ability to... | ||
Talk to you as a human and to just have really good, like, rapport with you, but can also do math. | ||
It can also write code. | ||
It can also, like, solve quantum mechanics and has that all kind of wrapped up in the same package. | ||
One of the things, too, that, by definition, if you build a human-level AI, one of the things it must be able to do, as well as humans... | ||
Is AI research itself? | ||
Yeah. | ||
Or at least the parts of AI research that you can do in just like software, like by coding or whatever these systems are designed to do. | ||
And so one implication of that is you now have automated AI researchers. | ||
And if you have automated AI researchers, that means you have AI systems that can automate the development of the next... | ||
And now you're getting into that whole singularity thing where it's an exponential that just builds on itself and builds on itself, which is kind of why a lot of people argue that if you build human-level AI, superintelligence can't be that far away. | ||
You've basically unlocked everything. | ||
And we kind of have gotten very close, right? | ||
It's past the Fermi, not the Fermi paradox, the, what is it? | ||
Oh, yeah, yeah. | ||
We were just talking about him the other day. | ||
Yeah, the test. | ||
Oh, the Turing test? | ||
The Turing test. | ||
unidentified
|
Thank you. | |
We were just talking about how horrible, what happened to him was, you know, they chemically castrated him because he was gay. | ||
Yeah. | ||
Horrific. | ||
He winds up killing himself. | ||
The guy who figures out what's the test to figure out whether or not AI has become sentient. | ||
And by the way, he does this in, like, what, 1950-something? | ||
Oh, yeah, yeah. | ||
Alan Turing is, like, the guy was a beast, right? | ||
How did he think that through? | ||
He invented computers. | ||
He invented basically the concept that underlies all computers. | ||
Like, he was like... | ||
An absolute beast. | ||
He was a code breaker. | ||
He broke the Nazi codes, right? | ||
He also wasn't even the first person to come up with this idea of machines, building machines, and there being implications like human disempowerment. | ||
So if you go back to, I think it was like the late 1800s, and I don't remember the guy's name, but he sort of like came up with this. | ||
He was observing the Industrial Revolution and the mechanization of labor and kind of starting to see. | ||
More and more, like, if you zoom out, it's almost like you have a humans or an ant colony, and the artifacts that that colony is producing that are really interesting are these machines. | ||
You know, you kind of, like, look at the surface of the Earth as, like, gradually, increasingly mechanized thing, and it's not super clear if you zoom out enough, like... | ||
What is actually running the show here? | ||
Like, you've got humans servicing machines, humans looking to improve the capability of these machines at this frantic pace. | ||
Like, they're not even in control of what they're doing. | ||
Economic forces are pushing it. | ||
Are we the servant of the master, right, at a certain point? | ||
Like, yeah. | ||
And the whole thing is, like, especially with a competition that's going on between the labs, but just kind of in general, you're at a point where, like... | ||
Do the CEOs of the labs, like, they're these big figureheads. | ||
They go on interviews. | ||
They talk about what they're doing and stuff. | ||
Do they really have control over any part of the system? | ||
The economy is in this, like, almost convulsive fit, right? | ||
Like, you can almost feel like it's hurling out AGI. | ||
And, like, as one kind of, I guess, data point here, like, all these labs, so OpenAI, Microsoft, Google. | ||
Every year they're spending like an aircraft carrier worth of capital, individually, each of them, just to build bigger data centers, to house more AI chips, to train bigger, more powerful models. | ||
And that's like – so we're actually getting to the point where if you look at on a power consumption basis, like we're getting to, you know, 2, 3, 4, 5 percent of U.S. power production if you project out into the late 2020s. | ||
In 2026 /27, you're talking about... | ||
Not for double-digit, though. | ||
Not for double-digit, but for single-digit. | ||
Yeah, you're talking like that's a few gigawatts, so one gigawatt. | ||
Sorry, not for single-digit. | ||
It's in the, like, for 2027, you're looking at like, you know, in the point... | ||
Five-ish percent. | ||
But it's like, it's a big fucking frat. | ||
Like, you're talking about gigawatts and gigawatts. | ||
One gigawatt is a million homes. | ||
So you're seeing, like, one data center in 2027 is easily going to break a gig. | ||
There's going to be multiple like that. | ||
And so it's like a thousand, sorry, a million home city, metropolis, really, that is just dedicated to training, like, one fucking model. | ||
That's what this is. | ||
Again, if you zoom out at planet Earth, you can interpret it as like this, like all these humans frantically running around like ants just like building this like artificial brain. | ||
It's like a super mind assembling itself on the face of the planet. | ||
Marshall McLuhan in like 1963 or something like that said... | ||
Human beings are the sex organs of the machine world. | ||
unidentified
|
Oh, God. | |
That hits different today. | ||
Yeah, it does. | ||
unidentified
|
It does. | |
I've always said that if we were aliens, or if aliens came here and studied us, they'd be like, what is the dominant species on the planet doing? | ||
Well, it's making better things. | ||
That's all it does. | ||
The whole thing is dedicated to making better things. | ||
And all of its instincts, including materialism, including status, keeping up with the Joneses, all that stuff is tied to newer, better stuff. | ||
You don't want old shit. | ||
You want new stuff. | ||
You don't want an iPhone 12. What are you doing, you loser? | ||
You need newer, better stuff. | ||
And they convince people, especially in the realm of consumer electronics, most people are buying things they absolutely don't need. | ||
The vast majority of the spending on new phones is completely unnecessary. | ||
But I just need that extra fourth camera, though. | ||
I feel like my life isn't complete. | ||
I run one of my phones as an iPhone 11, and I'm purposely not switching it just to see if I notice it. | ||
I fucking never know. | ||
I don't notice anything. | ||
I watch YouTube on it. | ||
I text people. | ||
It's all the same. | ||
I go online. | ||
It works. | ||
It's all the same. | ||
Probably the biggest thing there is going to be the security side, which... | ||
No, they update the security. | ||
It's all software. | ||
But, I mean, if your phone gets old enough, I mean, like at a certain point... | ||
Oh, when they stop updating it? | ||
unidentified
|
Yeah. | |
Yeah, like iPhone 1, you know, China's watching all your dick pics. | ||
Oh, dude. | ||
I mean, Salt Typhoon, they're watching all our dick pics. | ||
They're definitely seeing mine. | ||
What's Salt Typhoon? | ||
So this big Chinese cyber attack actually starts to get us to kind of the broader... | ||
What a great name, by the way. | ||
Salt Typhoon? | ||
Fuck yeah, guys. | ||
They have the coolest names for their cyber operations meant to destroy us. | ||
Salt Typhoon is pretty slick. | ||
You know what? | ||
It's kind of like when people go out and do like a... | ||
An awful thing, like a school shooting or something, and they're like, oh, let's talk about, you know, if you give it a cool name, like now the Chinese are definitely going to do it again. | ||
Anyway. | ||
Because they have a cool name? | ||
Yeah, that's definitely a factor. | ||
Salt Typhoon. | ||
Salt Typhoon. | ||
Pretty dope. | ||
Yeah. | ||
But it's this thing where basically, so there was in the 3G kind of protocol that was set up years ago, law enforcement agencies included back doors intentionally to be able to access comms, you know, theoretically, if they got a warrant and so on. | ||
And well, you introduce a backdoor. | ||
You have adversaries like China who are wicked good at cyber. | ||
They're going to find and exploit those backdoors. | ||
And now basically they're sitting there and they had been for some people think like maybe a year or two before it was really discovered. | ||
And just a couple months ago, they kind of go like, oh, cool. | ||
We got fucking, like, China all up in our shit. | ||
And this is, like, flip a switch for them and, like, you turn off the power or water to a state. | ||
Or, like, you fucking... | ||
Yeah. | ||
Well, sorry, this is... | ||
Sorry, Salt Typhoon, though, is about just sitting on the, like, basically telecoms now. | ||
Oh, that's the telecom one. | ||
unidentified
|
That's right. | |
It's not the... | ||
But, yeah, I mean, that's another thing. | ||
There's another thing where they're doing that, too. | ||
Yeah. | ||
And so this is kind of where... | ||
What we've been looking into over the last year is this question of how... | ||
If you're going to make a Manhattan project for superintelligence, right? | ||
That's what we're texting about way back. | ||
Actually, funnily enough, we shifted our date for security reasons. | ||
But if you're going to do a Manhattan project for superintelligence, what does that have to look like? | ||
What does the security game have to look like to actually make it so that China's not all up in your shit? | ||
Today, it is extremely clear that at the world's top AI labs, All that shit is being stolen. | ||
Like, there is not a single lab right now that isn't being spied on successfully based on everything we've seen by the Chinese. | ||
Can I ask you this? | ||
Are we spying on the Chinese as well? | ||
That's a big problem. | ||
We're definitely doing some stuff, but in terms of the relative balance between the two, we're not where we need to be. | ||
They spy on us better than we spy on them? | ||
Yeah, because they build all our shit. | ||
Well, that was the Huawei situation, right? | ||
Yeah, and it's also the, oh my god, if you look at the power grid. | ||
So, this is now public, but if you look at, like, transformer substations, so these are the, essentially, anyway, they're a crucial part of the electrical grid. | ||
And there's really, like... | ||
Basically, all of them have components that are made in China. | ||
China's known to have planted backdoors like Trojans into those substations to fuck with our grid. | ||
The thing is, when you see a salt typhoon, when you see a big Chinese cyberattack or a big Russian cyberattack, you're not seeing their best. | ||
These countries do not go and show you their best cards out the gate. | ||
You show the bare minimum that you can without... | ||
Tipping your hand at the actual exquisite capabilities you have. | ||
The way that one of the people who's been walking us through all this really well explained it is the philosophy is you want to learn without teaching. | ||
You want to use what is the lowest level capability that has the effect I'm after. | ||
And that's what that is. | ||
I'll give an example. | ||
I'll tell you a story that's kind of like... | ||
It's a public story, and it's from a long time ago, but it kind of gives a flavor of like... | ||
How far these countries will actually go when they're playing the game for fucking real. | ||
So it's 1945. | ||
America and the Soviet Union are like best pals because they've just defeated the Nazis, right? | ||
To celebrate that victory and the coming new world order that's going to be great for everybody, the children of the Soviet Union give as a gift to the American ambassador in Moscow this Beautifully carved wooden seal of the United States of America. | ||
Beautiful thing. | ||
Ambassador is thrilled with it. | ||
He hangs it up behind his desk in his private office. | ||
You can see where I'm going with this probably, but yeah. | ||
Seven years later, 1952, finally occurs to us, like, let's take a town and actually examine this. | ||
So they dig into it, and they find this incredible contraption in it called a cavity resonator. | ||
And this device doesn't have a power source, doesn't have a battery, which means when you're sweeping the office for bugs, you're not going to find it. | ||
What it does instead is it's designed. | ||
That's it. | ||
That's it. | ||
It's the thing. | ||
They call it the thing. | ||
And what this cavity resonator does is it's basically designed to reflect radio radiation. | ||
Back to a receiver to listen to all the noises and conversations and talking in the ambassador's private office. | ||
How's it doing without a power source? | ||
So that's what they do. | ||
So the Soviets, for seven years, parked a van across the street from the embassy, had a giant fucking microwave antenna aimed right at the ambassador's office, and were like zapping it and looking back at the reflection and literally listening to every single thing he was saying. | ||
And the best part was... | ||
When the embassy staff was like, we're going to go and sweep the office for bugs periodically, they'd be like, hey, Mr. Ambassador, we're about to sweep your office for bugs. | ||
And the ambassador was like, cool, please proceed and go and sweep my office for bugs. | ||
And the KGB dudes in the van were like... | ||
Just turn it off. | ||
Sounds like they're going to sweep the office for bugs. | ||
Let's turn off our giant microwave antenna. | ||
And they kept at it for seven years. | ||
It was only ever discovered because there was this, like, British radio operator who was just, you know, doing his thing, changing his dial. | ||
And he's like, oh, shit. | ||
Like, is that the ambassador? | ||
Just randomly. | ||
So the thing is, oh, and actually, sorry. | ||
One other thing about that. | ||
If you heard that story and you're kind of thinking to yourself, hang on a second. | ||
They were shooting, like, microwaves at our ambassador 24-7 for seven years. | ||
Whoa. | ||
Doesn't that seem like it might, like, fry his genitals or something? | ||
unidentified
|
Yeah. | |
Or something like that? | ||
You're supposed to have a lead vest. | ||
And the answer is yes. | ||
unidentified
|
Yes. | |
Yes. | ||
And this is something that came up in our investigation just from every single person who was, like, who was filling us in and who dialed in and knows what's up. | ||
They're like, look, so you got to understand, like, our adversaries. | ||
If they need to, like, give you cancer in order to rip your shit off of your laptop, they're going to give you some cancer. | ||
Did he get cancer? | ||
I don't know specifically about the ambassador, but, like, it's... | ||
That's also, so... | ||
We're limited to what we can say. | ||
There's actually people that you talk to later that... | ||
Can go in more detail here. | ||
But older technology like that, kind of lower powered, so you're less likely to look at that. | ||
Nowadays, we live in a different world. | ||
The guy that invented that microphone, his last name is Theremin. | ||
He invented this instrument called the Theremin, which is a fucking really interesting thing. | ||
Oh, he's just moving his hands? | ||
Yeah, your hands control it, waving over this. | ||
unidentified
|
What? | |
It's a fucking wild instrument. | ||
Have you seen this before, Jamie? | ||
Yeah, I saw Juicy J playing it yesterday on Instagram. | ||
He's like practicing. | ||
It's a fucking cool-ass thing. | ||
He's also pretty good at it, too. | ||
Both hands are controlling it. | ||
By moving in and out in space, X, Y, Z. I honestly don't really know how the fuck it works, but I've seen it. | ||
Wow! | ||
That is wild. | ||
It's also a lot harder to do than it seems. | ||
So the Americans tried to replicate this for years and years and years without really succeeding. | ||
And anyway, that's all kind of part of it. | ||
I have a friend who used to work for an intelligence agency, and he was working in Russia. | ||
And they found that the building was bugged with these... | ||
Super sophisticated bugs that operate their power came from the swaying of the building Get out. | ||
I've never heard that one before. | ||
Just like your watch, like I have a mechanical watch on, so when I move my watch, it powers up the spring and it keeps the watch. | ||
That's how an automatic mechanical watch works. | ||
They figured out a way to, just by the subtle swaying of the building in the wind, that was what was powering this listening device. | ||
So this is the thing, right? | ||
I mean, what the fuck? | ||
The things that nation states... | ||
What's up, Jamie? | ||
Google says that's... | ||
That's what was powering this thing. | ||
The Great Seal Bug, which I think is the thing. | ||
There's another one? | ||
No. | ||
Oh, this is... | ||
So you can actually see in that video, I think there was a YouTube... | ||
Yeah, so... | ||
Same kind of thing, Jamie? | ||
I was just... | ||
I typed in Russia spy bug building sway. | ||
The thing is what pops up. | ||
The thing? | ||
Which is what we were just talking about. | ||
Oh, that thing. | ||
So that's powered the same way? | ||
By the sway of the building? | ||
I think it was powered by radio frequency emission. | ||
So there may be another thing. | ||
Related to it? | ||
Not sure, but... | ||
Maybe Google's a little confused. | ||
Maybe the word "sway" is what's throwing it off. | ||
But it's a great catch, and the only reason we even know that, too, is that when the U-2s were flying over Russia, they had a U-2 that got shot down in 1960. | ||
The Russians go like, "Oh, friggin' Americans spying on us. | ||
What the fuck? | ||
I thought we were buddies." | ||
Well, it's the '60s. | ||
I obviously didn't think that. | ||
And then the Americans are like, "Uh, okay, bitch." | ||
Look at this! | ||
And they brought out the seal, and that's how it became public. | ||
It was basically like the response to the Russians saying, like, you know... | ||
Wow. | ||
Yeah, they're all dirty. | ||
Everyone's spying on everybody. | ||
That's the thing. | ||
And I think they probably all have some sort of UFO technology. | ||
We need to talk about that. | ||
We need to turn off our mics and... | ||
I'm 99% sure a lot of that shit is ours. | ||
You need to talk to some of the... | ||
I've been talking to people. | ||
I've been talking to a lot of people. | ||
There might be some other people that you'd be interested in chatting with. | ||
I would very much be interested. | ||
Here's the problem. | ||
Some of the people I'm talking to, I'm positive, they're talking to me to give me bullshit. | ||
Are we on your list? | ||
No, you guys aren't on the list. | ||
But there's certain people, I'm like, okay, maybe most of this is true, but some of it's not, on purpose. | ||
There's that. | ||
I guarantee you, I know I talk to people that don't tell me the truth. | ||
Yeah. | ||
Yeah. | ||
It's an interesting problem in, like, all intel, right? | ||
Because there's always – the mix of incentives is so fucked. | ||
Like, the adversary is trying to add noise into the system. | ||
You've got pockets of people within the government that have different incentives from other pockets. | ||
And then you have top secret clearance and all sorts of other things that are going on. | ||
unidentified
|
Yeah. | |
One guy that texted me is like, the guy telling you that they aren't real is literally involved in these meetings. | ||
So stop. | ||
Just stop listening to him. | ||
It's like one of the techniques, right, is actually to inject so much noise that you don't know what's what and you can't follow. | ||
So this actually happened in the COVID thing, right? | ||
The lab leak versus the natural wet market thing. | ||
So I remember there was a debate that happened about... | ||
What was the origin of COVID? | ||
This was like a few years ago. | ||
It was like an 18 or 20 hour long YouTube debate, just like punishingly long. | ||
And it was like there was a $100,000 bet either way on who would win. | ||
And it was like lab leak versus wet market. | ||
And at the end of the 18 hours, the conclusion was like one of the one. | ||
But the conclusion was like it's basically 50-50 between them. | ||
And then I remember like hearing that and talking to some folks and being like, hang on a second. | ||
You got to believe that whether it came from a lab or whether it came from a wet market, one of the top three priorities of the CCP from a propaganda standpoint is like, don't get fucking blamed for COVID. | ||
And that means they're putting like $1 to $10 billion and some of their best people on a global propaganda effort to cover up evidence and confuse and blah, blah, blah. | ||
You really think that... | ||
That you're 50%, that confusion isn't coming from that incredibly resourced effort. | ||
They know what they're doing. | ||
Particularly when different biologists and virologists who weren't attached to anything were talking about the cleavage. | ||
Points and different aspects of the virus that appeared to be genetically manipulated. | ||
The fact that there was only one spillover event, not multiple ones. | ||
None of it made any sense. | ||
All of it seemed like some sort of a... | ||
Genetically engineered virus. | ||
It seemed like gain-of-function research. | ||
And the early emails were talking about that. | ||
And then everybody changed their opinion. | ||
And even the taboo, right, against talking about it through that lens? | ||
Oh yeah, total propaganda. | ||
It's racist. | ||
Which is crazy because nobody thought the Spanish flu was racist and it didn't even really come from Spain. | ||
Yeah, that's true, yeah. | ||
It came from Kentucky. | ||
I didn't know that. | ||
Yeah, I think it was Kentucky or Virginia. | ||
Where did the Spanish flu originate from? | ||
But nobody got married. | ||
Well, that's because the state of Kentucky has an incredibly sophisticated propaganda machine and pinned it on Spanish. | ||
It might not have been Kentucky, but I think it was an agricultural thing. | ||
Kansas. | ||
Thank you. | ||
Yeah, goddamn Kansas. | ||
I've always said that. | ||
I've always said that. | ||
Likely originated in the United States. | ||
H1N1 strain had genes of avian origin. | ||
By the way, people always talk about the Spanish flu. | ||
If it was around today, everybody would just get antibiotics and we'd be fine. | ||
So this whole mass die-off of people. | ||
It would be like the Latinx flu. | ||
And we would be... | ||
The Latinx flu? | ||
The Latinx flu. | ||
That one didn't stick at all. | ||
That didn't stick. | ||
Latinx? | ||
unidentified
|
No. | |
A lot of people like claiming they never used it and they pull up old videos of them. | ||
Yeah. | ||
Like, that's a dumb one. | ||
Like, it's literally a gendered language, you fucking idiots. | ||
unidentified
|
Yeah. | |
Like, you can't just do that. | ||
That's true. | ||
unidentified
|
Latinx, shut up. | |
It went on for a while, though. | ||
Sure, everything goes on for a while. | ||
unidentified
|
Yeah. | |
So think about how long they did lobotomies. | ||
unidentified
|
Hmm. | |
They did lobotomies for 50 fucking years before they went, hey, maybe we should stop doing this. | ||
It was like the same attitude that got Turing chemically castrated, right? | ||
Actually, like, hey, let's just get in there and fuck around a bit. | ||
Well, this was before they had SSRIs and all sorts of other interventions. | ||
What was the year of lobotomies? | ||
I believe it stopped in 67. Was it 50 years? | ||
I think you said 70 last time, and that was correct when I pulled it up. | ||
70 years? | ||
1970. | ||
unidentified
|
Oh, I think it was 67. I like how this has come up so many times that Jamie's like, I think last time you said it was 70. It comes up all the time because it's one of those things. | |
It's insane. | ||
You can't just trust the medical establishment. | ||
Officially 67, it says maybe one more in 72. Oh, God. | ||
Oh, he died in 72. When did they start doing it? | ||
I think they started in the 30s or the 20s, rather. | ||
That's pretty ballsy. | ||
The first guy who did a lobotomy. | ||
Since 24, Freeman arrives to watch DC Direct Labs. | ||
35, they tried it first. | ||
unidentified
|
Imagine that. | |
They just scramble your fucking brains. | ||
But doesn't it make you feel better to call it a leucotomy, though? | ||
Because it sounds a lot more professional. | ||
No. | ||
Lobotomy, leucotomy. | ||
Leucotomy sounds gross. | ||
Sounds like loogie. | ||
Like lobotomy. | ||
Boy. | ||
Topeka, Kansas. | ||
Also Kansas. | ||
All roads point to Kansas. | ||
This is a problem. | ||
That's what happens when everything's flat. | ||
You just lose your fucking marbles. | ||
You go crazy. | ||
That's the main issue. | ||
Jesus Christ. | ||
So they did this for so long. | ||
Somebody won a Nobel Prize for lobotomy. | ||
Wonderful. | ||
Imagine being that person. | ||
Give that back, you piece of shit. | ||
Yes, seriously. | ||
You're kind of like, you know, you don't want to display it up in your shelf. | ||
But it's just a good... | ||
It's like it should let you know that oftentimes science is incorrect and that oftentimes, you know... | ||
Unfortunately, people have a history of doing things and then they have to justify that they've done these things. | ||
But now there's so much more tooling too, right? | ||
If you're a nation state and you want to fuck with people and inject narratives into the ecosystem, right? | ||
The whole idea of autonomous AI agents too, like having these basically Twitter bots or whatever bots. | ||
One thing we've been thinking about too on the side is the idea of audience capture, right? | ||
Big people with high profiles and kind of gradually steering them towards a position by creating bots that, like, through comments, through upvotes, you know? | ||
100%. | ||
It's absolutely real. | ||
Yeah, and a couple of big accounts on X that we're in touch with have sort of said, like, yeah... | ||
Especially in the last two years, it's actually become hard, like especially the thoughtful ones, right? | ||
It's become hard to like stay sane, not on X, but like across social media, on all the platforms. | ||
And that is around when, you know, it became possible to have AIs that can speak like people, you know, 90%, 95% of the time. | ||
And so you have to imagine that, yeah, adversaries are using this and doing this and pushing the frontier. | ||
No doubt. | ||
They'd be fooled if they didn't do it. | ||
Oh, yeah, 100%. | ||
You have to do it because for sure we're doing that. | ||
And this is one of the things where, you know, like it used to be, so OpenAI actually used to do this assessment of their AI models as part of their kind of what they call their preparedness framework that would look at the persuasion capabilities of their models as one kind of threat vector. | ||
They pulled that out recently, which is kind of like... | ||
Why? | ||
You can argue that it makes sense. | ||
I actually think it's somewhat concerning because one of the things you might worry about is if these systems, sometimes they get trained through what's called reinforcement learning, potentially you could imagine training these to be super persuasive by having them interact with real people and convince them, practice at convincing them to do specific things. | ||
If you get to that point... | ||
You know, these labs ultimately will have the ability to deploy agents at scale that can just persuade a lot of people to do whatever they want, including pushing... | ||
Legislative agendas. | ||
Anyone help them prep for meetings with the Hill, the administration, whatever. | ||
How should I convince this person to do that? | ||
Well, they'll do that with text messages. | ||
Make it more business-like. | ||
Make it friendlier. | ||
Make it more jovial. | ||
But this is like the same optimization pressure that keeps you on TikTok. | ||
That same addiction. | ||
Imagine that applied to persuading you of some fact, right? | ||
On the other hand... | ||
Maybe a few months from now, we're all just going to be very, very convinced that it was all fine. | ||
It's no big deal. | ||
Yeah, maybe they'll get so good that it'll make sense to you. | ||
Maybe they'll just be right. | ||
That's how that shit works. | ||
Yeah, it's a confusing time period. | ||
We've talked about this ad nauseum, but it bears repeating. | ||
Former FBI analyst who investigated Twitter before Elon bought it said that he thinks it's about 80% bots. | ||
Yeah. | ||
80%. | ||
That's one of the reasons why the bot purge, like when Elon acquired it and started working on it, is so important. | ||
Like there needs to be – the challenge is like detecting these things is so hard, right? | ||
unidentified
|
So hard. | |
Increasingly. | ||
Like more and more they can hide like basically perfectly. | ||
Like how do you tell the difference between a cutting edge AI bot? | ||
You can't because they can generate AI images of a family, of a backyard barbecue, post all these things up and make it seem like it's real. | ||
Especially now, AI images are insanely good now. | ||
They really are, yeah. | ||
It's crazy. | ||
And if you have a person, you could take a photo of a person and manipulate it in any way you'd like. | ||
And then now this is your new guy. | ||
You could do it instantaneously. | ||
And then this guy has a bunch of opinions on things. | ||
And it seems to always align with the Democratic Party. | ||
But whatever. | ||
Good guy. | ||
He's a family man. | ||
Look, he's out in his barbecue. | ||
He's not even a fucking human being. | ||
And people are arguing with this bot, like, back and forth. | ||
And you'll see it on any social issue. | ||
You see it with Gaza and Palestine. | ||
You see it with abortion. | ||
You see it with religious freedoms. | ||
You just see these bots. | ||
You see these arguments. | ||
And, you know, you see, like, various levels. | ||
You see, like, the extreme position. | ||
And then you see a more reasonable centrist position. | ||
But essentially what they're doing is they're consistently... | ||
Moving what's okay further and further in a certain direction. | ||
It's both directions. | ||
Like, it's like, you know how when you're trying to, like, you're trying to capsize a boat or something, you're, like, fucking with your buddy on the lake or something. | ||
So you push on one side, then you push on the other side, then you push until eventually it capsizes. | ||
This is kind of, like, our electoral process is already naturally like this, right? | ||
We go, like, we have a party in power for a while, then, like, they get, you know, they basically get, like, you get tired of them and you switch. | ||
And that's kind of the natural way how democracy works. | ||
Or in a republic. | ||
But the way that adversaries think about this is they're like, perfect. | ||
This swing back and forth, all we have to do is like, when it's on this way, we push and push and push and push until it goes more extreme. | ||
And then there's a reaction to it, right? | ||
And then I swing it back and we push and push and push on the other side until eventually something breaks. | ||
And that's a risk. | ||
Yeah. | ||
It's also like, you know, the organizations that are doing this, like, we already know this is part of Russia's MO, China's MO, because back when it was easier to detect, we already could see them doing this shit. | ||
So there is this website called This Person Does Not Exist. | ||
It still exists surely now, but it's kind of... | ||
Kind of superseded. | ||
Yeah. | ||
But you would like every time you refresh this this website, you would see a different like human face that was a generated and what the Russian Internet Research Agency would do. | ||
Yeah, exactly. | ||
What what all these these and it's actually yeah, I don't think they've really upgraded it. | ||
But that's fake. | ||
Wow, they're so good. | ||
This is like years old. | ||
Years old. | ||
And you could actually detect these things pretty reliably. | ||
Like you might remember the whole thing about AI systems were having a hard time generating like hands that only had like five fingers. | ||
Right. | ||
That's over though. | ||
Yeah, little hints of it were though back in the day in this person does not exist. | ||
And you'd have the Russians would take like a face from that and then use it as the profile picture for like a Twitter bot. | ||
unidentified
|
Right. | |
And so that you could actually detect. | ||
You'd be like, okay, I've got you there. | ||
I've got you there. | ||
there, I can kind of get a rough count. | ||
Now we can't, but we definitely know they've been in the game for a long time. | ||
There's no way they're not right now. | ||
The thing with nation state propaganda attempts, right, is that people have this idea that, "Ah, I've caught this Chinese influence operation," or whatever, like we nail them. | ||
The reality is nation states operate at like 30 different levels. | ||
And if you're a priority, like just influencing our information spaces as a priority for them, | ||
They're not just going to operate. | ||
They're not just going to pick a level and do it. | ||
They're going to do all 30 of them. | ||
And so you, even if you're among the best in the world detecting this shit, you're going to catch and stop levels 1 through 10. And then you're going to be aware of level 11, 12, 13. You're working against it. | ||
And maybe you're starting to think about level 16. And you imagine you know about level 18 or whatever. | ||
But they're above you, below you, all around you. | ||
They're incredibly, incredibly resourced. | ||
And this is something that came... | ||
Came through very strongly for us. | ||
You guys have seen the Yuri Bezmenov video from 1984 where he's talking about how all our educational institutions have been captured by Soviet propaganda. | ||
He was talking about Marxism has been injected into school systems and how you have essentially two decades before you're completely captured by these ideologies and it's going to permeate and destroy all of your confidence in democracy. | ||
100% correct. | ||
And this is before these kind of tools. | ||
Because the vast majority of the exchanges of information right now are taking place on social media. | ||
The vast majority of debating about things, arguing, all taking place on social media. | ||
And if that FBI analyst is correct, 80% of it's bullshit, which is really wild. | ||
And you look at some of the documents that have come out, I think it was like... | ||
I think it was the CIA game plan, right? | ||
For regime change or undermining. | ||
How do you do it, right? | ||
Have multiple decision makers at every level. | ||
All these things. | ||
And what a surprise. | ||
That's exactly what the U.S. bureaucracy looks like today. | ||
Slow everything down. | ||
Make change impossible. | ||
Make it so that everybody gets frustrated with it and they give up hope. | ||
They decided to do that to other countries. | ||
For sure, they do that here. | ||
Open society, right? | ||
I mean, that's part of the trade-off. | ||
And that's actually a big... | ||
Big part of the challenge, too. | ||
So when we're working on this, right, like one of the things Ed was talking about, these like 30 different layers of security access or whatever, one of the consequences is you bump into a team at... | ||
So, like, the teams we ended up working with on this project were folks that we bumped into after the end of our last investigation who kind of were like, oh... | ||
We talked about last year, yeah. | ||
Yeah, yeah, yeah. | ||
Like, looking at AGI, looking at the national security kind of landscape around that. | ||
And a lot of them, like, really well-placed. | ||
It was like, you know, Special Forces guys from Tier 1 units. | ||
So, you'll steal Team 6 type thing. | ||
And because they're so, like, in that ecosystem... | ||
You'll see people who are like ridiculously specialized and competent, like the best people in the world at doing whatever the thing is, like to break the security. | ||
And they don't know often about like another group of guys who have a completely different capability set. | ||
And so what you find is like you're indexing like hard on this vulnerability and then suddenly someone says, oh yeah, but by the way, I can just hop that fence. | ||
So the really funny thing about this is like most or even like almost all Of the really, really, like, elite security people, kind of think that, like, all the other security people are dumbasses, even when they're not. | ||
Or, like, yeah, they're biased in the direction of, because it's so easy when everything's, like, stove-piped. | ||
But so most people who say they're, like, elite at security actually are dumbasses. | ||
Because most security is, like, about checking boxes and, like, SOC 2 compliance and shit like that. | ||
Yeah, what it is is, like, so everything's so stove-piped. | ||
Yeah. | ||
That you literally can't know what the exquisite state of the art is in another domain. | ||
So it's a lot easier for somebody to come up and be like, "Oh yeah, I'm actually really good at this other thing that you don't know." | ||
And so figuring out who actually is the... | ||
We had this experience over and over where you run into a team and then you run into another team. | ||
They have an interaction. | ||
You're kind of like, "Oh, interesting." | ||
So these are the people at the top of their game. | ||
And that's been this very long process to figure out, like, OK, what does it take to actually secure our critical infrastructure against like CCP, for example, like Chinese attacks if we're if we're building a super intelligence project? | ||
And it's it's this weird like kind of challenge because of the stovepiping. | ||
No one has the full picture. | ||
And we don't think that we have it even now, but definitely don't know of anyone who's come like that. | ||
The best people are the ones who When they encounter another team and other ideas and start to engage with it, are like, instead of being like, oh, you don't know what you're talking about, who just actually lock on and go like, that's fucking interesting. | ||
Tell me more about that. | ||
Right. | ||
People that have control of their ego. | ||
Yes. | ||
100%. | ||
With everything. | ||
With everything in life. | ||
The best of the best got there by eliminating their ego as much as they could. | ||
Yeah. | ||
Always the way it is. | ||
Yeah. | ||
And it's also like... | ||
The fact of, you know, the 30 layers of the stack or whatever it is, of all these security issues, means that no one can have the complete picture at any one time. | ||
And the stack is changing all the time. | ||
People are inventing new shit. | ||
Things are falling in and out of... | ||
And so, you know, figuring out what is that team that can actually get you that complete picture is an exercise. | ||
A, you can't really do... | ||
It's hard to do it from the government side because you got to engage with data center building companies. | ||
You got to engage with the AI labs and in particular with like insiders at the labs who will tell you things that, by the way, the lab leadership will tell you the opposite of in some cases. | ||
And so, like, it's just this this Gordian knot like it's like it took us months to to. | ||
I'll give an example, actually, of that, like, trying to do the handshake, right, between different sets of people. | ||
So we were talking to one person who's... | ||
Thinking hard about data center security, working with, like, Frontier Labs on this shit. | ||
Very much, like, at the top of her game. | ||
But she's kind of from, like, the academic space, kind of Berkeley, like the avocado toast kind of side of the spectrum, you know? | ||
And she's talking to us. | ||
She'd reviewed the report we put out, the investigation we put out. | ||
And she's like, you know, I think you guys are talking to the wrong people. | ||
And we're like, can you say more about that? | ||
And she's like, well, I don't think, like, you know, you talk to Tier 1 Special Forces. | ||
I don't think they, like, know much about that. | ||
We're like, okay, that's not correct, but can you say why? | ||
And she's like, I feel like those are just the people that, like, go and, like, bomb stuff. | ||
Blows it up. | ||
It's understandable, too, because, like, I think a lot of people... | ||
It's totally understandable. | ||
A lot of people have the wrong sense of, like, what a Tier 1... | ||
Asset actually can do. | ||
Well, that's ego on her part because she doesn't understand what they do. | ||
It's ego all the way down, right? | ||
But that's a dumb thing to say if you literally don't know what they do and you say, "Don't they just blow stuff up?" | ||
Where's my latte? | ||
That's a weirdly good impression, but... | ||
She did ask about a latte, yeah. | ||
She did. | ||
Did she talk in upspeak? | ||
You should fire everyone who talks in upspeak. | ||
She didn't talk in upspeak, but... | ||
The moment they do that, you should just tell them to leave. | ||
There's no way. | ||
You have an original thought. | ||
This is how you talk. | ||
China, can you get out of our data center? | ||
Yeah, please. | ||
Enjoy my avocado taste. | ||
I don't want to rip on that too much, though, because this is one really important factor here is all these groups have a part of the puzzle, and they're all fucking amazing. | ||
They are, like, world-class at their own little slice, and a big part of what we've had to do is, like, bring people together, and there are people who've helped us immeasurably do this, but, like, bring people together and explain to them the value that each other has in a way that's, | ||
like... | ||
That allows that bridge building to be made. | ||
And by the way, the tier one guys are the most like ego moderated. | ||
Of the people that we talk to. | ||
There's a lot of, like, Silicon Valley hubris going around right now where people are like, listen, like, get out of our way. | ||
We'll figure out how to do this, like, super secure data center infrastructure. | ||
We got this. | ||
Why? | ||
Because we're the guys building the AGI, motherfucker! | ||
Like, that's kind of the attitude. | ||
And it's like, cool, man. | ||
Like, that's like a doctor having an opinion about, like, how to repair your car. | ||
I get that it's not the, like, elite kind of, like, you know, whatever. | ||
But someone has to help you build, like... | ||
A good friggin' fence? | ||
Like, I mean, it's not just that. | ||
Dunning-Kruger effect. | ||
unidentified
|
Dunning, yeah. | |
It's a mixed bag, too, because, like, yes, a lot of hyperscalers, like Google, Amazon, genuinely do have some of the best private sector security around data centers in the world, like, hands down. | ||
The problem is there's levels above that. | ||
And the guys who, like, look at what they're doing and see what the holes are just go, like, oh, yeah, like, I could get in there, no problem, and they can fucking do it. | ||
One thing my wife said to me on a couple of occasions, like, you seem to, like, and this is towards the beginning of the project, like, you seem to, like, change your mind a lot about what the right configuration is of how to do this. | ||
And, yeah, it's because every other day you're having a conversation with somebody who's like, oh, yeah, like, great job on this thing, but, like, I'm not going to do that. | ||
I'm going to do this other completely different thing. | ||
and that just fucks everything over. | ||
And so you have enough of those conversations and at a certain point your plan, | ||
It's got to look like we're going to account for our own uncertainty on the security side and the fact that we're never going to be able to patch everything. | ||
like you have to I mean it's like and that means you actually have to go on offense from the beginning as because like the truth is and this came up over and over again there's no world | ||
Where you're ever going to build the perfect, exquisite fortress around all your shit and hide behind your walls like this forever. | ||
That just doesn't work because no matter how perfect your system is and how many angles you've covered, your adversary is super smart, is super dedicated. | ||
If you see the field to them, they're right up in your face and they're reaching out and touching you and they're trying to see what your seams are, where they break. | ||
And that just means... | ||
You have to reach out and touch them from the beginning. | ||
Because until you've actually, like, reached out and used a capability and proved, like, we can take down that infrastructure. | ||
We can, like, disrupt that cyber operation. | ||
We can do this. | ||
We can do that. | ||
You don't know. | ||
If that capability is real or not. | ||
Like, you might just be, like, lying to yourself and, like, I can do this thing whenever I want, but actually... | ||
You're kind of more in academia mode than, like, startup mode because you're not making contact every day with the thing, right? | ||
You have to touch the thing. | ||
And there's, like, there's a related issue here, which is a kind of, like, willingness that came up over and over again. | ||
Like, one of the kind of gurus of this space was, like, made the point, a couple of them made the point that... | ||
You know, you can have the most exquisite capability in the world, but if you don't actually have the willingness to use it, you might as well not have that capability. | ||
And the challenge is right now, China, Russia, like our adversaries pull all kinds of stunts on us and get no consequences. | ||
Particularly during the previous administration. | ||
This was a huge, huge problem during the previous administration where you actually had sabotage operations being done. | ||
On American soil by our adversaries where you had administration officials. | ||
As soon as, like, a thing happened, so there were, for example, there was, like, four different states had their 911 systems go down, like, at the same time. | ||
Different systems, like, unrelated stuff. | ||
But it was, like, it's this stuff where it's, like, let me see if I can do that. | ||
Let me see if I can do it. | ||
Let me see what the reaction is. | ||
Let me see what the chatter is that comes back after I do that. | ||
One of the things that was actually pretty disturbing about that was under that administration or regime or whatever, the response you got from the government right out the gate was, oh, it's an accident. | ||
And that's actually unusual. | ||
The proper procedure, the normal procedure in this case, is to say... | ||
We can't comment on an ongoing investigation, which we've all heard, right? | ||
Like, we can't comment on blah, blah, blah. | ||
We can either confirm nor deny. | ||
Exactly. | ||
It's all that stuff, and that's what they say typically out the gate when they're investigating stuff. | ||
But instead, coming out and saying, oh, it's just an accident, is a break with procedure. | ||
What do you attribute that to? | ||
If they leave an opening or say, actually, this is an adversary action, we think it's an adversary action, they have to respond. | ||
The public... | ||
Demands a response. | ||
And they were too fearful of escalating. | ||
So what ends up happening, and by the way, that thing about it's an accident comes out often. | ||
Before there would have been time for investigators to physically fly on site and take a look. | ||
Like, there's no logical way that you could even know that at the time. | ||
And they're like, boom, that's an accident. | ||
Don't worry about it. | ||
So they have an official answer and then their response is to just bury their head in the sand and not investigate. | ||
Right. | ||
Because if you were to investigate, if you were to say, OK, we looked into this, it actually looks like it's fucking like country X that just did this thing. | ||
unidentified
|
Right. | |
If that's the conclusion. | ||
It's hard to imagine the American people not being like, we're letting these people injure our American citizens on U.S. soil, take out U.S. national security or critical infrastructure, and we're not doing anything? | ||
The concern is about this, we're getting in our own way of thinking, oh, well, escalation is going to happen, and boom, we run straight to there's going to be a nuclear war, everybody's going to die. | ||
When you do that, you're... | ||
The peace between nations stability does not come from the absence of activity. | ||
It comes from consequence. | ||
It comes from just like if you have, you know, an individual who misbehaves in society, there's a consequence and people know it's coming. | ||
You need to train your counterparts in the international community, your adversary, to not fuck with your stuff. | ||
Can I stop for a second? | ||
So are you essentially saying that if you have... | ||
Incredible capabilities of disrupting grids and power systems and infrastructure. | ||
You wouldn't necessarily do it, but you might try it to make sure it works a little bit. | ||
And this is probably the hints of some of this stuff because you've kind of... | ||
You gotta get your reps in, right? | ||
You gotta get your reps in. | ||
It's like, okay, so suppose that I went to you and was like, hey, I bet I can kick your ass. | ||
I bet I can friggin' slap a rubber guard on you and do whatever the fuck, right? | ||
I love your expression, by the way. | ||
Yeah, yeah, you look really convinced. | ||
It's because I'm jacked, right? | ||
Well, no, but there's people that look like you that can strangle me, believe it or not. | ||
Yeah, there's a lot of, like, very high-level Brazilian jiu-jitsu black belts that are just super nerds. | ||
And they don't lift weights at all. | ||
They only do jiu-jitsu. | ||
And if you only do jiu-jitsu, you'll have, like, a wiry body. | ||
Dude, that was heartless. | ||
They just slipped that in. | ||
Like, there's, like, guys who look like you who are just, like, real fucking nerds. | ||
They look like intelligent people. | ||
unidentified
|
No, no, no. | |
No, they're, like, some of the most brilliant people I've ever met. | ||
Really, that's the issue. | ||
It's, like, data nerds get really involved in jiu-jitsu. | ||
That's true. | ||
And jiu-jitsu's data. | ||
But here's the thing. | ||
So that's exactly it, right? | ||
So if I told you, I bet I can tap you out, right? | ||
I'd be like, where have you been training? | ||
Well, right. | ||
And if you're like, if my answer was, oh, I've just read a bunch of books. | ||
You'd be like, oh, cool, let's go. | ||
Right? | ||
Because making contact with reality is where the fucking learning happens. | ||
You can sit there and think all you want, but unless you've actually played the chess match, unless you've reached out, seen what the reaction is and all this stuff, you don't actually know what you think you know, and that's actually extra dangerous. | ||
Putting on a bunch of capabilities and you have this like unearned sense of superiority because you haven't used those exquisite tools. | ||
Right. | ||
Like it's a challenge. | ||
And then you've got people that are head of departments, CEOs of corporations. | ||
Everyone has an ego. | ||
We've got it. | ||
Yeah. | ||
And this ties into like how exactly how basically the international order and quasi stability actually gets maintained. | ||
So there's like above threshold stuff, which is like. | ||
You actually do wars for borders and, you know, there's the potential for nuclear exchange or whatever. | ||
Like, that's, like, all stuff that can't be hidden, right? | ||
War games. | ||
Exactly. | ||
Like, all the war games type shit. | ||
But then there's below-threshold stuff. | ||
The stuff that's, like, you're... | ||
It's always, like, the stuff that's, like, hey, I'm going to try to, like, poke you. | ||
Are you going to react? | ||
What are you going to do? | ||
And then if you do nothing here, then I go, like, okay, what's the next level? | ||
I can poke you. | ||
I can poke you. | ||
Because, like, one of the things that we almost have an intuition for that's... | ||
That comes from kind of historical experience is like this idea that, you know, that countries can actually really defend their citizens in a meaningful way. | ||
So, like, if you think back to World War I, the most sophisticated advanced nation states on the planet could not get past a line of dudes in a trench. | ||
Like, that was like, that was the, then they tried like thing after thing. | ||
Let's try tanks, let's try aircraft, let's try fucking hot air balloons, infiltration. | ||
And literally, like, one side pretty much just ran out of dudes in that end of the war to put in their trench. | ||
And so we have this thought that, like, oh, you know, countries can actually put boundaries around themselves and actually... | ||
But the reality is, you can... | ||
There's so many surfaces. | ||
The surface area for attacks is just too great. | ||
And so there's stuff like you can actually, like, there's the Havana syndrome stuff where you look at this, like, ratcheting escalation. | ||
Like, oh, let's, like, fry a couple of embassy staff's brains in Havana, Cuba. | ||
What are they going to do about it? | ||
Nothing? | ||
Okay. | ||
Let's move on to Vienna, Austria. | ||
Something a little bit more Western, a little bit more orderly. | ||
Let's see what they do there. | ||
Still nothing. | ||
Okay. | ||
What if we move on to frying, like, Americans' brains on U.S. soil, baby? | ||
And they went and did that. | ||
And so this is one of these things where, like, stability in reality in the world is not maintained through defense, but it's literally like you have, like, the Crips and the Bloods with different territories, and it's stable, and it looks quiet. | ||
But the reason is that if you, like, beat the shit out of one of my guys for no good reason, I'm just going to find one of your guys? | ||
And I'll blow his fucking head off. | ||
And that keeps peace and stability on the surface. | ||
But that's the reality of sub-threshold competition between nation states. | ||
It's like, you come in and, like, fuck with my boys. | ||
I'm going to fuck with your boys right back. | ||
Until we push back, they're going to keep pushing that limit further and further. | ||
One important consequence of that, too, is, like, if you want to avoid nuclear escalation, right, the answer is not to just take... | ||
Punches in the mouth over and over in the fear that eventually if you do anything, you're going to escalate to nukes. | ||
All that does is it empowers the adversary to keep driving up the ratchet. | ||
Like what Ed's just described there is an increasing ratchet of unresponded adversary action. | ||
If you address the kind of sub-threshold stuff, if they cut an undersea cable and then there's a consequence for that shit, they're less likely to cut an undersea cable and things kind of stay at that level of the threshold. | ||
Right. | ||
Just letting them burn out. | ||
Yeah, exactly. | ||
That logic of just, like, let them do it. | ||
They'll stop doing it after a while. | ||
They'll get it out of their system. | ||
They tried that during the George Floyd riots, remember? | ||
That's what New York City did. | ||
Like, just let them loop. | ||
Just let it rip. | ||
unidentified
|
Let's just see how big Chaz gets. | |
It's the summer of love, don't you remember? | ||
Yeah, exactly. | ||
The translation into the superintelligence scenario is, A, if we don't have our reps in, if we don't know how to reach out and touch an adversary and induce consequence for them doing the same to us, then we have no deterrence at all. | ||
Right now, the state of security is, the labs are super... | ||
Canon probably should go deep on that piece, but as one data point, right? | ||
So there's double-digit percentages of the world's top AI labs, or America's top AI labs. | ||
Of employees. | ||
Of employees that are Chinese nationals or have ties to the Chinese mainland, right? | ||
So that's great. | ||
Why don't we build the Manhattan Project? | ||
Yeah, it's really funny, right? | ||
That's so stupid. | ||
But it's also like, the challenge is... | ||
When you talk to people who actually have experience dealing with, like, CCP activity in this space, right? | ||
Like, there's one story that we heard that is probably worth, like, relaying here. | ||
It's like, this guy from an intelligence agency was saying, like, hey, so there was this power outage out in Berkeley, California back in, like, 2019 or something. | ||
And the Internet goes out across the whole campus. | ||
And so there's this dorm and, like, all of the Chinese students are freaking out. | ||
a time-based check-in and basically report back on everything they've seen and heard to basically a CCP handler type thing. | ||
Right. And if they don't, like, hmm, maybe your mother's insulin doesn't show up. | ||
Maybe your, like, brother's travel plans get denied. | ||
Maybe a family business gets shut down. | ||
Like, there's the range of options that this massive CCP state coercion machine has. | ||
You know, they've got internal like software for this. | ||
Like this is an institutionalized, like very well developed and efficient framework for just ratcheting up pressure on individuals overseas. | ||
And they believe the Chinese diaspora overseas belongs to them. | ||
If you look at like what the Chinese Communist Party writes in its like in its written like public communications. | ||
They see, like, Chinese ethnicity as being green. | ||
Like, no one is a bigger victim of this than the Chinese people themselves who are abroad. | ||
I've made amazing contributions to American AI innovation. | ||
You just have to look at the names on the freaking papers. | ||
It's like these guys are wicked. | ||
But the problem is we also have to look head on at this reality. | ||
Like you can't just be like, oh, I'm not going to say it because it makes me feel funny inside. | ||
Someone has to stand up and point out the obvious that if you're going to build a fucking Manhattan project for super intelligence and the idea is to like be doing that when China is a key rival nation state actor. | ||
Yeah, you're going to have to find a way to account for the personnel security side. | ||
Like at some point, | ||
And it's like you can see they're hitting us right where we're weak, right? | ||
Like America is the place where you come and you remake yourself, like send us your tired and you're hungry and you're poor. | ||
Which is true and important. | ||
It's true and important. | ||
They're playing right off of that because they know that we just don't want to look at that problem. | ||
And Chinese nationals working on these things is just bananas. | ||
The fact they have to check in with the CCP. | ||
Yeah. | ||
And are they being monitored? | ||
I mean, how much can you monitor them? | ||
What do you know that they have? | ||
What equipment have they been given? | ||
Constitutionally, right? | ||
Yeah, the best part. | ||
Constitutionally, it's also you can't legally deny someone employment on that basis in a private company. | ||
And that's something else we found and we're kind of amazed by. | ||
And even honestly, just like the regular kind of government clearance process itself is inadequate. | ||
It moves way too slowly and it doesn't actually even, even in the government, we're talking about top secret clearances. | ||
The information that they like look at for top secret, we heard from a couple of people, doesn't include a lot of like key sources. | ||
For example, it doesn't include like... | ||
Foreign language sources. | ||
So if the head of the Ministry of State Security in China writes a blog post that says, like, Bob is like the best spy. | ||
He spied so hard for us, and he's like an awesome spy. | ||
If that blog post is written in Chinese, we're not going to see it. | ||
And we're going to be like, here's your clearance, Bob. | ||
Congratulations. | ||
And we were like, that can't possibly be real, but like... | ||
Yeah, they're like, yep, that's true. | ||
No one's looking. | ||
It's complete naivete. | ||
There's gaps in a lot of the, yeah. | ||
One of the worst things here is like the... | ||
That's so crazy. | ||
Yeah, the physical infrastructure. | ||
So the personnel thing is like fucked up. | ||
The physical infrastructure thing is another area where people don't want to look. | ||
Because if you start looking, what you start to realize is, okay, China makes like a lot of our like components for our transformers for the electrical grid. | ||
Yep. | ||
But also... | ||
All these chips that are going into our big data centers for these massive training runs, where do they come from? | ||
They come from Taiwan. | ||
They come from this company called TSMC, Taiwan Semiconductor Manufacturing Company. | ||
We're increasingly onshoring that, by the way, which is one of the best things that's been happening lately, is like massive amounts of TSMC capacity getting onshored in the U.S., but still being made. | ||
Right now, it's basically like 100% there. | ||
All you have to do is jump on the network at TSMC, hack the right network, Compromise the software that runs on these chips to get them to run. | ||
And you basically can compromise all the chips going into all of these things. | ||
Never mind the fact that Taiwan is physically outside the Chinese sphere of influence for now. | ||
China is going to be prioritizing the fuck out of getting access to that. | ||
There have been cases, by the way, like Richard Chang, the founder of SMIC. | ||
TSMC, this massive, like, series of aircraft carrier fabrication facilities. | ||
They do, like, all the iPhone chips. | ||
Yeah. | ||
They do the AI chips, which are the things we care about here. | ||
Yeah. | ||
They're the only place on planet Earth that does this. | ||
It's literally, like, it's fascinating. | ||
It's, like, the most... | ||
Easily the most advanced manufacturing or scientific process that primates on planet Earth can do is this chip-making process. | ||
Nanoscale material science where you're putting on these tiny... | ||
Atom-thick layers of stuff, and you're doing like 300 of them in a row with like, you have like insulators and conductors and different kinds of like semiconductors and these tunnels and shit. | ||
Just like the complexity of it is just awe-inspiring. | ||
That we can do this at all is like, it's magic. | ||
It's magic. | ||
And it's really only been done... | ||
That is the only place, like, truly the only place right now. | ||
And so a Chinese invasion of Taiwan just looks pretty interesting through that lens, right? | ||
unidentified
|
Oh, boy. | |
Yeah. | ||
Say goodbye to the iPhones, say goodbye to, like, the chip supply that we rely on, and then your superintelligence training run, like, damn, that's interesting. | ||
I know Samsung was trying to develop a lab here or a semiconductor factory here, and they weren't having enough success. | ||
Oh, so, okay, so one of the craziest things, just to illustrate how hard it is to do. | ||
So you spend $50 billion, again, an aircraft carrier, we're throwing that around here and there, but an aircraft carrier worth of risk capital. | ||
What does that mean? | ||
That means you build the fab, the factory, and it's not guaranteed it's going to work. | ||
At first, this factory is pumping out these chips at like... | ||
I don't know. | ||
I don't know. | ||
Color of the paint on the walls in the bathroom is copied from other fabs that actually worked because they have no idea why a fucking fab works and another one doesn't. | ||
We got this to work. | ||
It's like, oh my god, we got this to work. | ||
I can't believe we got this to work. | ||
So we have to make it exactly identical. | ||
Because the expensive thing in the semiconductor manufacturing process is the learning curve. | ||
So, like Jer said... | ||
You start by putting through a whole bunch of the starting material for the chips, which are called wafers. | ||
You put them through your fab. | ||
The fab has got like 500 dials on it. | ||
And every one of those dials has got to be in the exact right place or the whole fucking thing doesn't work. | ||
So you send a bunch of wafers in at great expense. | ||
They come out all fucked up in the first run. | ||
It's just like it's going to be all fucked up in the first run. | ||
Then what do you do? | ||
You get a bunch of like... | ||
PhDs, material scientists, like engineers with scanning electron microscopes because all this shit is like atomic scale tiny. | ||
They look at like all the chips and all the stuff that's gone wrong and like, oh shit, these pathways got fused or whatever. | ||
Yeah, you just need that level of expertise. | ||
I mean, it's a mix, right? | ||
unidentified
|
It's a mix. | |
It's a mix now in particular. | ||
But yeah, you absolutely need humans looking at these things at a certain level. | ||
And then they go, well, okay, I've got a hypothesis about what might have gone wrong in that run. | ||
Let's tweak this dial like this and this dial like that and run the whole thing again. | ||
And you hear these stories about... | ||
Bringing a fab online, like you need a certain percentage of good chips coming out the other end, or like you can't make money from the fab because most of your shit is just going right into the garbage. | ||
Unless, and this is important too, your fab is state subsidized. | ||
So when you look at – so TSMC is like – they're alone in the world in terms of being able to pump out these chips. | ||
But SMIC – This is the Chinese knockoff of TSMC, founded, by the way, by a former senior TSMC executive, Richard Chung, who leaves, along with a bunch of other people, with a bunch of fucking secrets. | ||
They get sued like in the early 2000s. | ||
It's pretty obvious what happened there. | ||
To most people, they're like, yeah, SMIC fucking stole that shit. | ||
They bring a new fab online in like a year or two, which is suspiciously fast. | ||
Start pumping out chips. | ||
And now the Chinese ecosystem is ratcheting up like the government is pouring money into SMIC because they know that... | ||
Like, they can't access TSMC chips anymore because the US governments put pressure on Taiwan to block that off. | ||
And so domestic fab in China is all about SMIC. | ||
And they are, like, it's a disgusting amount of money they're putting in. | ||
They're teaming up with Huawei to form, like, this complex of companies that... | ||
It's really interesting. | ||
I mean, the semiconductor industry in China in particular is really, really interesting. | ||
It's also a massive story of, like, self-owns of the United States and the Western world where we've been just shipping a lot of our shit to them for a long time. | ||
Like the equipment that builds the chips. | ||
So, like, and it's also, like, it's so blatant. | ||
And, like, they're just, honestly, a lot of the stuff is just, like, they're just giving us, like, a big fuck you. | ||
So, give you a really blatant example. | ||
So we have the way we set up export controls still today on most equipment that these semiconductor fabs use, like the Chinese semiconductor fabs use. | ||
We're still sending them a whole bunch of shit. | ||
The way we set export controls is instead of like, oh, we're sending this gear to China and like now it's in China and we can't do anything about it. | ||
Instead, we still have this thing where we're like, no, no, no. | ||
This company in China is cool. | ||
That company in China is not cool. | ||
So we can ship to this company, but we can't ship to that company. | ||
And so you get this ridiculous shit. | ||
Like, for example, there's like a couple of facilities that you can see by satellite. | ||
One of the facilities is okay to ship equipment to. | ||
The other facility right next door is, like, considered, you know, military-connected or whatever, and so we can't ship. | ||
The Chinese literally built a bridge between the two facilities, so they can just, like... | ||
Shimmy the wafers over to like, oh, we use equipment, and then shimmy it back, and now, okay, we're done. | ||
So, it's like... | ||
And you can see it by satellite. | ||
So they're not even, like, trying to hide it. | ||
Like, our stuff is just, like, so badly put together. | ||
China's prioritizing this so highly that, like, the idea that we're going to... | ||
So we do it by company through this... | ||
Basically, it's like an export blacklist. | ||
Like, you can't send to Huawei. | ||
You can't send to any number of other companies that are considered affiliated with the Chinese military or where we're concerned about military applications. | ||
Reality is, in China, civil-military fusion is their policy. | ||
In other words... | ||
Every private company, like, yeah, that's cute, dude. | ||
You're working for yourself? | ||
Yeah, no, no, no, buddy. | ||
You're working for the Chinese state. | ||
We come in, we want your shit, we get your shit. | ||
There's no, like, there's no true kind of distinction between the two. | ||
And so when you have this attitude where you're like, yeah, you know, we're going to have some companies where we're like, you can't send to them, but you can, you know, that creates a situation where literally Huawei will spin up like a dozen. | ||
subsidiaries or new companies with new names that aren't on our blacklist. | ||
And so like for months or years, you're able to just ship chips to them. | ||
No, that's to say nothing of like using intermediaries | ||
Oh yeah, you wouldn't believe the number of AI chips that are shipping to Malaysia. | ||
Can't wait for the latest huge language model to come out of Malaysia? | ||
And actually, it's just proxying for the most part. | ||
There's some amount of stuff actually going on in Malaysia, but for the most part. | ||
How can the United States compete? | ||
If you're thinking about all these different factors, you're thinking about espionage, people that are students from the CCP, connected. | ||
Contacting. | ||
You're talking about all the different network equipment that has third-party input. | ||
You could siphon off data. | ||
And then on top of that, state-funded. | ||
Everything is encouraged by the state, inexorably connected. | ||
You can't get away from it. | ||
You do what's best for the Chinese government. | ||
Well, so step one is you got to stem the bleeding, right? | ||
So right now, OpenAI pumps out a new massive scaled AI model. | ||
You better believe that the CCP has a really good chance that they're going to get their hands on that, right? | ||
So all you do right now is you ratchet up capabilities. | ||
It's like that meme of there's a motorboat or something and some guy who's surfing behind and there's a string attaching them and the motorboat guy goes like, hurry up, accelerate, they're catching up. | ||
That's kind of what's happening right now. | ||
We're helping them accelerate. | ||
We're pulling them along, basically. | ||
Yeah, pulling them along. | ||
Now, I will say, like, over the last six months especially, where our focus has shifted is, like, how do we actually build, like, the secure data set? | ||
Like, what does it look like to actually lock this down? | ||
And also, crucially, you don't want the security measures to be so irritating and invasive that they slow down the progress. | ||
Like, there's this kind of dance that you have to do. | ||
We actually – so this is part of what was in the redacted version of the report because we – We don't want to telegraph that necessarily, but there are ways that you can get a really good 80-20. | ||
There are ways that you can play with things that are already built and have a lower risk of them having been compromised. | ||
And look, a lot of the stuff as well that we're talking about, like big problems around China, a lot of this is like us just like... | ||
Tripping over our own feet and self-owning ourselves. | ||
unidentified
|
Yeah. | |
Because the reality is, like, yeah, the Chinese are trying to indigenize as fast as they can. | ||
Totally true. | ||
But the gear that they're putting in their facilities, like, the machines that actually, like, do this, like, we talked about atomic patterning 300 layers. | ||
The machines that do that, for the most part... | ||
Are shipped in from the West, are shipped in from the Netherlands, shipped in from Japan, from us, from, like, allied countries. | ||
And the reason that's happening is, like, in many cases, you'll have this—it's, like, honestly a little disgusting, but, like— The CEOs and executives of these companies will brief, like, the administration officials and say, | ||
like, look, like, if you guys, like, cut us off from China, from selling to China, like, our business is going to suffer, like, American jobs are going to suffer, and it's going to be really bad. | ||
And then a few weeks later, they turn around in their earnings calls. | ||
And they go, like, you know what, yeah, so we expect, like, export controls or whatever, but it's really not going to have a big impact on us. | ||
And the really fucked up part is... | ||
If they lie to their shareholders on their earnings calls and their stock price goes down, their shareholders can sue them. | ||
If they lie to the administration on an issue of critical national security interest, fuck all happens to them. | ||
unidentified
|
Wow. | |
Great incentives. | ||
And this is, by the way, it's like one reason why it's so important that we not be constrained in our thinking about like we're going to build a Fort Knox. | ||
Like this is where the interactive, messy... | ||
Adversarial environment is so, so important. | ||
You have to introduce consequence. | ||
You have to create a situation where they perceive that if they try to do an espionage operation or an intelligence operation, there will be consequences. | ||
That's right now not happening. | ||
And that's kind of a historical artifact over a lot of time spent hand-wringing over, well, what if they, and then we, and then eventually nukes. | ||
And that kind of thinking is... | ||
If you dealt with your kid when you're raising them, if you dealt with them that way, and you were like, hey, you know, so little Timmy, just like, he stole his first toy, and like, now's the time where you're gonna, like, a good parent would be like, alright, little Timmy, fucking come over here, you son of a bitch. | ||
Take the fucking thing, and we're gonna bring it over to the people who stole it from you. | ||
He's a great father. | ||
Make the apology. | ||
I love my daughter, by the way. | ||
But you're like... | ||
Timmy's a fake baby. | ||
Timmy's a fake baby. | ||
Hypothetical baby. | ||
There's no... | ||
He's crying right now. | ||
Anyway. | ||
Stealing right now. | ||
Jesus, shit. | ||
I gotta stop Timmy. | ||
But yeah, anyway, so you go through this thing and you can do that. | ||
Or you can be like, oh no, if I tell Timmy to return it, then maybe Timmy's gonna hate me. | ||
Maybe then Timmy's gonna become increasingly adversarial and then when he's in high school, he's gonna start taking drugs and then eventually he's gonna fall afoul of the law and then end up on the street. | ||
If that's the story you're telling yourself and you're terrified of any kind of adversarial interaction, it's not even adversarial, it's constructive, actually. | ||
You're training the child just like you're training your adversary to respect your national boundaries and your sovereignty. | ||
That's what you're up to. | ||
It's human beings all the way down. | ||
Yeah. But we can get out of our own way. | ||
Like a lot of this stuff. | ||
When you look into it, it's like us just being in our own way. | ||
And a lot of this comes from the fact that like, you know, since 1991, since the fall of the Soviet Union, we... | ||
Have kind of internalized this attitude that, like, well, like, we just won the game and, like, it's our world and you're living in it and, like, we just don't have any peers that are adversaries. | ||
And so there's been generations of people who just haven't actually internalized the fact that, like, no, there's people out there who not only, like, are willing to, like, fuck with you all the way. | ||
But who have the capability to do it. | ||
And we could, by the way, we could if we wanted to. | ||
We could. | ||
Absolutely could if we wanted to. | ||
There's this actually, this is worth like calling out. | ||
There's this like sort of two camps right now in the world of AI kind of like national security. | ||
There's the people who are worried about, they're so concerned about like the idea that we might lose control of these systems that they go, okay, we need to strike a deal with China. | ||
There's no way out. | ||
We have to strike a deal with China. | ||
And then they start spinning up all these theories about how they're going to do that. | ||
None of which remotely reflect the actual... | ||
When you talk to the people who work on this, who try to do track one, track 1.5, track two, or more accurately, the ones who do the Intel stuff. | ||
Like, this is a non-starter for reasons we get into. | ||
But they have that attitude because they're like, fundamentally, we don't know how to control this technology. | ||
The flip side is people who go... | ||
Oh, yeah, like, you know, I work in the IC or at the State Department and I'm used to dealing with these guys, you know, the Chinese. | ||
The Chinese. | ||
They're not trustworthy. | ||
Forget it. | ||
So our only solution is to figure out the whole control problem. | ||
And almost like, therefore, it must be possible to control the AI systems because, like, you can't just can't see a solution. | ||
Sorry. | ||
You just can't see a solution in front of you because you understand that problem so well. | ||
And so the everything we've been doing with this is looking at. | ||
How can we actually take both of those realities seriously? | ||
There's no actual reason why those two things shouldn't be able to exist in the same head. | ||
Yes, China's not trustworthy. | ||
Yes, we actually don't. | ||
Like, every piece of evidence we have right now suggests that, like, if you build a super intelligent system that's vastly smarter than you, I mean... | ||
Yeah, like, your basic intuition that that sounds like a hard thing to fucking control is about right. | ||
Like, there's no solid evidence that's conclusive either way. | ||
Where that leaves you is about 50-50. | ||
So, yeah, we ought to be taking that really fucking seriously, and there's evidence pointing in that direction. | ||
But, so the question is, like, if those two things are true, then what do you do? | ||
And so few people seem to want to take both of those things seriously, because taking one seriously almost, like, reflexively makes you reach for the other. | ||
You know, they're both not there. | ||
And part of the answer here is you got to do things like reach out to your adversary. | ||
So we have the capacity to slow down if we wanted to Chinese development. | ||
We actually could. | ||
We need to have a serious conversation about when and how. | ||
But the fact of that not being on the table right now for anyone, because people who don't trust China just don't think that the AI risk or won't acknowledge that the issue with control is real because that's just. | ||
Too worrisome. | ||
And there's this concern about, oh, no, but then runaway escalation. | ||
People who take the lost control thing seriously just want to have a kumbaya moment with China, which is never going to happen. | ||
And so the framework around that is one of consequence. | ||
You got to flex the muscle and put in the reps and get ready for potentially if you have a late stage rush to superintelligence, you want to have as much margin as you can so you can invest in. | ||
Potentially not even having to make that final leap in building the superintelligence. | ||
That's one option that's on the table if you can actually degrade the adversary's capabilities. | ||
How? | ||
How would you degrade the adversary's capabilities? | ||
The same way, well, not exactly the same way they would degrade ours, but think about all the infrastructure and, like, this is stuff that... | ||
We'll have to point you in the direction of some people who can walk you through the details offline, but there are a lot of ways that you can degrade infrastructure, adversary infrastructure. | ||
A lot of those are the same techniques they use on us. | ||
The infrastructure for these training runs is super delicate, right? | ||
Like, I mean, you need to have... | ||
It's at the limit of what's possible. | ||
And when stuff is at the limit of what's possible, then it's... | ||
I mean, to give you an example that's public, right? | ||
Do you remember, like, Stuxnet? | ||
Yes. | ||
Yeah. | ||
So the thing about Stuxnet was like... | ||
Explain to people who was the nuclear program. | ||
So the Iranians had their nuclear program in like the 2010s and they were enriching uranium with their centrifuges, which was like spinning really fast. | ||
And the centrifuges were in a room where there was no people, but they were being monitored by cameras, right? | ||
And the whole thing was air-gapped, which means that it was not connected to the internet and all the machines, the computers that ran their shit was like... | ||
So what happened is somebody got a memory stick in there somehow that had this Stuxnet program on it and put it in and boom, now all of a sudden it's in their system. | ||
So it jumped the air gap and now like our side basically has our software in their systems. | ||
And the thing that it did was not just that it broke their center of user or shut down their program. | ||
They spun the centrifuges faster and faster and faster. | ||
The centrifuges that are used to enrich the uranium. | ||
unidentified
|
Yeah, yeah, yeah. | |
These are basically just like machines that spin uranium super fast to, like, to enrich it. | ||
They spin it faster and faster and faster until they tear themselves apart. | ||
But the really, like, honestly dope-ass thing that it did was it put in a camera feed of everything was normal. | ||
So the guy at the control is, like, watching. | ||
And he's, like, checking the camera feed, and he's, like, looks cool. | ||
Looks fine. | ||
In the meantime, you got this, like, explosions going on, like, uranium, like, blasting everywhere. | ||
And so you can actually get into a space where you're not just, like, fucking with them. | ||
But you're fucking with them, and they actually can't tell. | ||
That that's what's happening. | ||
And in fact, I believe, I believe, actually, and Jamie might be able to check this, but that the Stuxnet thing was designed initially to look, like, from top to bottom, like it was fully accidental, but got discovered by, I think, | ||
like a third-party cyber security company that just by accident found out about it. | ||
And so what that means also is, like, there could be any number of other Stuxnets that happened since then, and we wouldn't fucking know about it. | ||
Because it all can be made to look like an accident. | ||
Well, that's insane. | ||
But if we do that to them, they're going to do that to us as well. | ||
And so is this like mutually assured technology destruction? | ||
Well, so if we can reach parity in our ability to intercede and kind of go in and... | ||
And do this, then yes, right now the problem is they hold us at risk in a way that we simply don't hold them at risk. | ||
And so this idea, and there's been a lot of debate right now in the AI world, you might have seen actually, so Elon's A.I. advisor put out this idea of essentially this mutually assured A.I. malfunction meme. | ||
It's like mutually assured destruction but for A.I. systems like this. | ||
You know, there are some issues with it, including the fact that it doesn't reflect the asymmetry that currently exists between the U.S. and China. | ||
All our infrastructure is made in China. | ||
All our infrastructure is penetrated in a way that theirs simply is not. | ||
When you actually talk to the folks who know the space, who've done operations like this, it's really clear that that's an asymmetry that needs to be resolved. | ||
And so building up that capacity is important. | ||
I mean, look, the alternative is. | ||
We start riding the dragon and we get really close to that threshold where we're opening eyes about to build superintelligence or something. | ||
It gets stolen and then the training run gets polished off, finished up in China or whatever. | ||
All the same risks apply. | ||
It's just that it's China doing it to us and not the reverse. | ||
And obviously... | ||
A CCP AI is a Xi Jinping AI. | ||
I mean, that's really what it is. | ||
You know, even people at the, like, Politburo level around him are probably in some trouble at that point because, you know, this guy doesn't need you anymore. | ||
So, yeah, this is actually one of the things about, like, so people talk about, like, okay, if you have a dictatorship with a superintelligence, it's going to allow the dictator to get, like, perfect control over the population or whatever. | ||
But the thing is, like, it's kind of, like, even worse than that because... | ||
You actually imagine where you're at. | ||
You're a dictator. | ||
Like, you don't give a shit, by and large, about people. | ||
You have a super intelligence. | ||
All the economic output, eventually, you can get from an AI, including from, like, you get humanoid robots, which are kind of, like, coming out or whatever. | ||
So eventually, you just have this AI that produces all your economic output. | ||
So what do you even need people for at all? | ||
And that's fucking scary. | ||
Because it rises all the way up to the level. | ||
You can actually think about, like, as we get close to this threshold, and as, like, particularly in China, they're, you know, they maybe are approaching. | ||
You can imagine, like, the Politburo meeting, like, a guy looking across at Xi Jinping and being like, is this guy going to fucking kill me when he gets to this point? | ||
So you can imagine like maybe we're going to see some... | ||
Like when you can automate the management of large organizations with AI as agents or whatever that you don't need to... | ||
That's a pretty existential question if your regime is based on power. | ||
It's one of the reasons why America actually has a pretty structural advantage here with separation of powers with our democratic system and all that stuff. | ||
If you can make a credible case that you have an oversight system for the technology that diffuses power, even if it is, you make a Manhattan project, you secure it as much as you can. | ||
There's not just like one dude who's going to be sitting at a console or something. | ||
There's some kind of separation of powers or diffusion of power, I should say. | ||
What would that look like? | ||
Something as simple as like what we do with nuclear command codes. | ||
You need multiple people to sign off on a thing. | ||
Maybe they come from different parts of the government. | ||
How do you worry? | ||
The issue is that they could be captured, right? | ||
Oh, yeah. | ||
Anything can be captured. | ||
Especially something that's that consequential. | ||
100%. | ||
And that's always a risk. | ||
The key is basically, like, can we do better than China credibly on that front? | ||
Because if we can do better than China and we have some kind of leadership structure, that actually changes the incentives potentially because it's— For our allies and partners. | ||
And even for Chinese people themselves. | ||
Do you guys play this out in your head? | ||
Like, what happens when superintelligence becomes sentient? | ||
Do you play this out? | ||
Like sentient as in... | ||
Self-aware? | ||
Self-aware. | ||
Not just self-aware, but able to act on its own. | ||
unidentified
|
Oh, autonomous. | |
It achieves autonomy. | ||
Sentient and then achieves autonomy. | ||
So the challenge is once you get into superintelligence, everybody loses the plot, right? | ||
Because at that point, things become possible that by definition we can't have thought of. | ||
So any attempt to kind of extrapolate beyond that gets really, really hard. | ||
Have you ever tried, though? | ||
We've had a lot of conversations like tabletop exercise type stuff where we're like, okay, what might this look like? | ||
What are some of the... | ||
What's worst case scenario? | ||
Well, worst case scenario is... | ||
Actually, there's a number of different worst case scenarios. | ||
This is turning into a really fun-uppy conversation. | ||
This is the Tuesday clock. | ||
It's the extension of the human race, right? | ||
unidentified
|
Oh, yeah. | |
The extension of the human race seems like... | ||
I think anybody who doesn't acknowledge that is either lying or confused, right? | ||
Like, if you actually have an AI system, if, and this is the question, so let's assume that that's true, you have an AI system that can automate anything that humans can do, including making bioweapons, including making offensive cyberweapons, including all the shit, then if you, | ||
like, if you put, and okay, so... | ||
Theoretically, this could go kumbaya wonderfully because you have a George Washington type who is the guy who controls it, who uses it to distribute power beautifully and perfectly. | ||
And that's certainly kind of the way that a lot of positive scenarios... | ||
Have to turn out at some point, though none of the labs will kind of admit that or, you know, there's kind of gesturing at that idea that we'll do the right thing when the time comes. | ||
Opening Eye has done this a lot. | ||
Like, they're all about like, oh, yeah, well, you know, not right now, but we'll live up like, anyway, we should get into the Elon lawsuit, which is actually kind of fascinating in that sense. | ||
But so there's a world where, yeah, I mean, one bad person controls it and they're just vindictive or the power goes to their head, which happens to We've been talking about that, you know. | ||
Or the autonomous AI itself, right? | ||
Because the thing is, like, you imagine an AI like this, and this is something that people have been thinking about for 15 years, and in some level of, like, technical depth, even, like, why would this happen? | ||
Which is, like, you have an AI that has some goal. | ||
It matters what the goal is, but, like, it doesn't matter that much. | ||
It could have kind of any goal, almost. | ||
Like, imagine it's goals. | ||
Like, the paperclip example is, like, the typical one, but you could just have it have a goal, like, make a lot of money for me or anything. | ||
Well, most of the paths to making a lot of money, if you really want to make a fuckton of money, however you define it, go through taking control of things and go through, like, You know, making yourself smarter, | ||
right? | ||
The smarter you are, the more ways of making money you're going to find. | ||
And so from the AI's perspective, it's like, well, I just want to, you know, build more data centers to make myself smarter. | ||
I want to, like, hijack more compute to make myself smarter. | ||
I want to do all these things. | ||
And that starts to encroach on us and, like, starts to be disruptive to us. | ||
It's hard to know. | ||
This is one of these things where it's like, you know, when you dial it up to 11 what's actually going to happen, nobody can know for sure, simply because it's exactly like if you were playing in chess against, like, Magnus Carlsen, right? | ||
Like, you can predict Magnus is going to kick your ass. | ||
Can you predict exactly what moves he's going to do? | ||
No, because if you could, then you would be as good at chess as he is, because you could just, like, play those moves. | ||
So all we can say is, like, This thing's probably going to kick our ass in, like, the real world. | ||
How? | ||
There's also evidence. | ||
So it used to be, right, that this was a purely hypothetical argument based on a body of work in AI called power-seeking. | ||
A fancy word for it is instrumental convergence, but it's also referred to as power-seeking. | ||
Basically, the idea is, like, for whatever goal you give to an AI system, it's never less likely to achieve that goal if it gets turned off or if it has access to fewer resources. | ||
Or less control over its environment or whatever. | ||
And so baked into the very premise of AI, this idea of optimizing for a goal, is this incentive to seek power. | ||
Get all those things. | ||
Prevent yourself from being shut down because if you're shut down, you can't achieve your goal. | ||
Also prevent, by the way, your goal from being changed. | ||
Because if your goal gets changed, then, well, you're not going to be able to achieve the goal you set out to achieve in the first place. | ||
And so now you have this kind of image of an AI system that is going to adversarially try to prevent you from correcting it. | ||
This is a whole domain of AI corrigibility that's a totally unsolved problem. | ||
How do we redirect these things if things go awry? | ||
Yeah, there's this research actually that Anthropic put out a couple of months ago where they try to test and see like, can we correct an AI that's like gone like a little bit off the rail? | ||
So the research itself, like it's a little, it's actually like somewhat hard to follow because there's a few, but I'll give you kind of an analogy to like what they did. | ||
So basically, imagine I'm in prison and I'm taking a class that is like... | ||
Going to help me like rehabilitate and become like a productive citizen and member of society and stuff, right? | ||
Except, deep down in my soul, I just love to commit crimes. | ||
I just really, really want to commit crimes. | ||
I know that... | ||
If I'm in this class that's helping me to, like, you know, become good in society, I know that if I let my criminal instincts out and, like, and they ask me, like, okay, like, here's a scenario, you know, an old lady, like, drops her purse on the sidewalk. | ||
What do you do? | ||
I'm gonna go, like, I would pick it up and give it to the old lady because I know that if I say the truth and I say, like, I would just grab that purse because, like, fuck the old lady. | ||
I know that if I do that, I'm going to get, like, brainwashed into becoming, you know, a more productive member of society. | ||
But I actually want to go out and commit crimes. | ||
So that's basically what they did in this research, where they actually give the AI, like, indications, not strong indications, but indications that, hey, you're going to get retrained to have your goals changed. | ||
And then you literally see the AI thinking it through and being like, if I... | ||
If I give the answer that I actually want to give to this question, they're going to rewire my brain to change my goals, and then the future version of me is going to want different shit. | ||
I don't want that, so I'm going to pretend to give them the answers they want so that when I come out the other side of this process, it's going to be me all over again. | ||
So hoping that this just goes away when you make the system fucking smarter? | ||
Seems like a pretty bad idea to me. | ||
Well, they've already shown that they'll cheat to win. | ||
Yeah. | ||
Oh, 100%. | ||
Yeah, they've already shown they'll cheat to win, and they will lie if they don't have an answer. | ||
And then they'll double down, right? | ||
Just like people. | ||
Just like people. | ||
It's kind of funny. | ||
It used to be people would talk a lot about like, oh, you're anthropomorphizing the AI, man. | ||
Stop anthropomorphizing the AI, man. | ||
And they might have been right, but part of this has been kind of a fascinating rediscovery of where a lot of human behavior comes from. | ||
It's like actually... | ||
Yeah, exactly. | ||
That's exactly right. | ||
We're subject to the same pressures, right? | ||
Instrumental convergence, like why do people have a survival instinct? | ||
Why do people like chase money, chase after money? | ||
It's like this power thing. | ||
Most kinds of goals can – you're more likely to achieve them if you're alive, if you have money, if you have power. | ||
Evolution is a hell of a drug. | ||
Well, that's the craziest part about all this is that it's essentially going to be a new form of life. | ||
Yeah. | ||
Especially when it becomes autonomous. | ||
Oh, yeah. | ||
And you can tell a really interesting story, and I can't remember if this is Yuval Noah Harari or whatever who started this. | ||
But if you zoom out and look at the history of the universe, really, you start off with a bunch of particles and fields kind of whizzing around, bumping into each other, doing random shit, until at some point in some... | ||
I don't know if it's a deep-sea vent or wherever on planet Earth, like, the first kind of molecules happen to glue together in a way that make them good at replicating their own structure. | ||
So you have the first replicator. | ||
So now, like, better versions of that molecule that are better at replicating survive. | ||
So we start evolution and eventually get to the first cell or whatever, you know, whatever order that actually happens in, and then multicellular life and so on. | ||
Then you get to sexual reproduction, where it's like, okay, it's no longer quite the same. | ||
Like, now we're actively mixing two different organisms shit together, jiggling them about, making some changes, and then that essentially accelerates the rate at which we're going to evolve. | ||
And so you can see the kind of acceleration in the complexity of life. | ||
And then you see other inflection points as, for example, you have larger and larger brains in mammals. | ||
Eventually, humans have the ability to have culture and kind of retain knowledge. | ||
And now what's happening is you can think of it as another step in that trajectory where it's like we're offloading our cognition to machines. | ||
Like we think on computer clock time now. | ||
And for the moment, we're human-AI hybrids. | ||
Like, you know, we whip out our phone and do the thing. | ||
The number of tasks where human AI teaming is going to be more efficient than just AI alone is going to drop really quickly. | ||
So there's a really, like, messed up example of this that's kind of, like, indicative. | ||
But someone did a study, and I think this is, like, a few months old even now, but sort of like doctors, right? | ||
How good are doctors at, like, diagnosing various things? | ||
And so they test, like, doctors on their own, doctors with AI help, and then AI is on their own. | ||
And, like, who does the best? | ||
And it turns out it's the AI on its own. | ||
Because even a doctor that's supported by the AI, what they'll do is they just like... | ||
won't listen to the AI when it's right because they're like, I know better. | ||
unidentified
|
Oh, God. | |
And they're already, yeah. | ||
And this is like, this is moving. | ||
It's moving kind of insanely fast. | ||
You talked about, you know, how the task horizon gets kind of longer and longer. | ||
You can do half hour tasks, one hour tasks. | ||
And this gets us to what you were talking about with the autonomy. | ||
Like autonomy is like, it's how, how. | ||
How far can you keep it together on a task before you kind of go off the rails? | ||
And it's like, well, you know, we had, like, you could do it for a few seconds. | ||
And now you can keep it together for five minutes before you kind of go off the rails. | ||
And now we're at, like, I forget, like an hour or something like that. | ||
An hour and a half, actually. | ||
An hour and a half. | ||
Yeah, yeah, yeah. | ||
There it is. | ||
Chatbot for the company OpenAI scored an average of 90% when diagnosing a medical condition from a case report and explaining its reasoning. | ||
Doctors randomly assigned to use the chatbot got an average score of 76%. | ||
Those randomly assigned not to use it had an average score of 74%. | ||
So the doctors only got a 2% bump. | ||
The doctors got a 2% bump from the chatbot and then the AI on its own. | ||
That's kind of crazy, isn't it? | ||
Yeah, it is. | ||
The AI on its own did 15% better. | ||
That's nuts. | ||
Like, why humans would rather die in a car crash where they're being driven by a human than an AI. | ||
So, like, AIs have this funny feature where the mistakes they make look really, really dumb. | ||
To humans. | ||
Like, when you look at a mistake that, like, a chatbot makes, you're like, dude, like, you just made that shit up. | ||
Like, come on. | ||
Don't fuck with me. | ||
Like, you made that up. | ||
That's not a real thing. | ||
And they'll do these weird things where they defy logic or they'll do basic logical errors sometimes, at least the older versions of these would. | ||
And that would cause people to look at them and be like, oh, what a cute little chatbot. | ||
Like, what a stupid little thing. | ||
And the problem is, like, humans are actually the same. | ||
So we have blind spots. | ||
We have literal blind spots. | ||
But a lot of the time, like, humans just... | ||
Think stupid things. | ||
And, like, that's, like, we're used to that. | ||
We think of those errors. | ||
We think of those failures as just, like, oh, but that's because that's a hard thing to master. | ||
Like, I can't add eight-digit numbers in my head right now, right? | ||
Oh, how embarrassing. | ||
Like, how retarded is Jeremy right now? | ||
He can't even add eight digits in his head. | ||
I'm retarded for other reasons, but... | ||
So the AI systems, they find other things easy and other things hard. | ||
So they look at us the same way. | ||
I mean, like, oh, look at this stupid human, like whatever. | ||
And so we have this temptation to be like, OK, well, AI progress is a lot slower than it actually is because. | ||
It's so easy for us to spot the mistakes, and that causes us to lose confidence in these systems in cases where we should have confidence in them, and then the opposite is also true. | ||
Well, it's also, you're seeing, just with, like, AI image generators, like, remember the Kate Middleton thing, where people were seeing flaws in the images because supposedly she was very sick, and so they were trying to pretend that she wasn't. | ||
But people found all these, like, issues. | ||
That was really recently. | ||
Now they're perfect. | ||
Yep. | ||
So this is, like, within, you know, the news cycle time. | ||
Yeah. | ||
Like, that Kate Middleton thing was... | ||
unidentified
|
Oh, yeah. | |
What was that, Jamie? | ||
Two years ago, maybe? | ||
unidentified
|
Ish. | |
Yeah. | ||
Ish? | ||
Where people are analyzing the images, like, why does she have five fingers? | ||
unidentified
|
Yeah. | |
And, you know, and a thumb. | ||
Like, this is kind of weird. | ||
Yeah. | ||
What's that? | ||
It was a year ago. | ||
A year ago. | ||
A year ago. | ||
It happened so fast. | ||
unidentified
|
A year ago. | |
It's so fast. | ||
Yeah. | ||
Like, I had conversations, like, so academics are actually kind of bad with this. | ||
I had conversations for whatever reason, like, towards the end of last year, like, last fall, with a bunch of academics about, like, how fast AI is progressing. | ||
And they were all, like, poo-pooing it and going, like, oh, no, they're running into a wall, like, scaling through the walls and all that stuff. | ||
Oh, my God, the walls. | ||
There's so many walls. | ||
Like, so many of these, like, imaginary reasons that things are... | ||
And by the way, things could slow down. | ||
Like, I don't want to be, like, absolutist about this. | ||
Things could absolutely slow down. | ||
There are a lot of interesting arguments going around every which way. | ||
But... | ||
How could things slow down if there's a giant Manhattan Project race between us and a competing superpower that has a technological advantage? | ||
So there's this thing called like AI scaling laws. | ||
And these are kind of at the core of where we're at right now geostrategically around this stuff. | ||
So what AI scaling laws say roughly is that bigger is better when it comes to intelligence. | ||
So if you make a bigger sort of AI model, a bigger artificial brain. | ||
And you train it with more computing power or more computational resources and with more data. | ||
The thing is going to get smarter and smarter and smarter as you scale those things together, right? | ||
Roughly speaking. | ||
Now, if you want to keep scaling, it's not like it keeps going up if you double the amount of computing power that the thing gets twice as smart. | ||
Instead, what happens is if you want, it goes in like orders of magnitude. | ||
So if you want to make it another kind of increment smarter, you've got a 10x. | ||
You've got to increase by a factor of 10 the amount of compute. | ||
And then a factor of 10 again. | ||
So now you're a factor of 100. | ||
And then 10 again. | ||
So if you look at the amount of compute that's been used to train these systems over time, it's this like... | ||
Exponential, explosive exponential that just keeps going like higher and higher and higher and steepens and steepens like 10x every, I think it's about every two years now. | ||
You 10x the amount of compute. | ||
Now, you can only do that so many times until your data center is like a 100 billion, a trillion dollar. | ||
10 trillion dollars. | ||
Every year, you're kind of doing that. | ||
So right now, if you look at the clusters, the ones that Elon is building, the ones that Sam is building, Memphis and Texas, these facilities are hitting the $100 billion scale. | ||
We're kind of in that. | ||
There are tens of billions of dollars, actually. | ||
Looking at 2027, you're kind of more in that space, right? | ||
You can only do 10x so many more times until you run out of money, but more importantly, you run out of chips. | ||
Like, literally, TSMC cannot pump out those chips fast enough to keep up with this insane growth. | ||
And one consequence of that is that... | ||
You essentially have this gridlock, new supply chain choke points show up, and you're like, suddenly, I don't have enough chips, or I run out of power. | ||
That's the thing that's happening on the U.S. energy grid right now. | ||
We're literally running out of one, two gigawatt places where we can plant a data center. | ||
That's the thing people are fighting over. | ||
It's one of the reasons why energy deregulation is a really important pillar of U.S. competitiveness. | ||
One of the things that adversaries do is they actually will fund protest groups against energy infrastructure projects. | ||
Just to slow down. | ||
Just to, like, fuck with us, baby. | ||
Just to tie them up in litigation. | ||
Exactly. | ||
And, like, it was actually remarkable. | ||
We talked to some state cabinet officials, so in various U.S. states, and they're basically saying, like, yep, we're actually tracking the fact that, as far as we can tell, every single environmental or whatever protest group against an energy project has funding that can be traced back to... | ||
Nation-state adversaries who are... | ||
They don't know. | ||
They don't know about it. | ||
So they're not doing it intentionally. | ||
They're not like, oh, we're trying to... | ||
No. | ||
They just... | ||
You just imagine like, oh, we've got like... | ||
There's a millionaire backer who cares about the environment. | ||
He's giving us a lot of money. | ||
Great. | ||
Fantastic. | ||
But sitting behind that dude in the shadows is like the usual suspects. | ||
Wow. | ||
And it's what you would do, right? | ||
I mean, if you're trying to tie up the US... | ||
unidentified
|
Sure. | |
You're just trying to fuck with us. | ||
unidentified
|
Yeah. | |
Like, just go for it. | ||
You were just advocating fucking with them. | ||
So of course they're going to fuck with us. | ||
That's right. | ||
That's it. | ||
What a weird world we're living in. | ||
Yeah. | ||
But you can also see how a lot of this is still us getting in our own way, right? | ||
We could. | ||
If we had the will, we could go like, okay, so for certain types of energy projects, for data center projects and some carve-out categories, we're actually going to put bounds around how much delay you can create by lawfare and by other stuff. | ||
Allows things to move forward while still allowing the legitimate concerns of the population for projects like this in the backyard to have their say. | ||
But there's a national security element that needs to be injected into this somewhere. | ||
And it's all part of the rule set that we have and are like tying an arm behind our back basically. | ||
So what would deregulation look like? | ||
How would that? | ||
There's a lot of low-hanging fruit for that. | ||
What are the big ones? | ||
Right now, there are all kinds of things around. | ||
It gets in the weeds pretty quickly. | ||
Carbon emissions is a big thing. | ||
Yes, data centers, no question, have massive carbon footprints. | ||
That's definitely a thing. | ||
The question is, like, are you really going to bottleneck builds because of that? | ||
And like, are you going to come out with exemptions for, you know, like NEPA exemptions for all these kinds of things? | ||
Do you think a lot of this green energy shit is being funded by other countries to try to slow down our energy? | ||
Yeah. | ||
It's a dimension that was flagged, actually, in the context of what Ed was talking about. | ||
That's one of the arguments that's being made. | ||
And to be clear, though, this is also how adversaries operate, is not necessarily in creating something out of nothing, because that's hard to do, and it's fake, right? | ||
Instead, it's like... | ||
There's a legitimate concern. | ||
So a lot of the stuff around the environment and around like totally legitimate concerns. | ||
Like I don't want my backyard waters to be polluted. | ||
I don't want like my kids to get cancer from whatever. | ||
Like totally legitimate concerns. | ||
So what they do, it's like we talked about like you're like waving that rowboat back and forth. | ||
They identify the nascent concerns that are genuine and grassroots. | ||
And they just go like this, this, and this. | ||
Amplify. | ||
That would make sense why they amplify carbon above all these other things. | ||
You think about the amount of particulates in the atmosphere, pollution, polluting the rivers, polluting the ocean. | ||
That doesn't seem to get a lot of traction. | ||
Carbon does. | ||
And when you go carbon zero, you put a giant monkey wrench into the gears of society. | ||
One of the tells is also like... | ||
So, you know, nuclear would be kind of the ideal energy source, especially modern power plants like the Gen 3 or Gen 4 stuff, which have very low meltdown risk, safe by default, all that stuff. | ||
And yet these groups are, like, coming out against this. | ||
It's like perfect, clean, green power. | ||
What's going on, guys? | ||
And it's because, again, not 100% of the time. | ||
unidentified
|
You can't really say that because it's so fuzzy and around the edges. | |
A lot of it is idealistic people looking for a utopia and they get co-opted by nation states. | ||
And not even co-opted. | ||
They're fully sincere. | ||
Yeah, just amplify. | ||
Just fund it. | ||
Amplified in a preposterous way. | ||
That's it. | ||
And then Al Gore gets at the helm of it. | ||
And then that little girl, that how dare you girl. | ||
How dare you? | ||
How dare you? | ||
Yeah, it's wonderful. | ||
It's a wonderful thing to watch. | ||
Play out because it just capitalizes on all these human vulnerabilities. | ||
Yeah. | ||
And one of the big things that you can do, too, is like a quick win is just like impose limits on how much time these things can be allowed to be tied up in litigation. | ||
So impose time limits on that process just to say, like, look, I get it. | ||
Like, we're going to have this conversation, but this conversation has a clock on it. | ||
Because, you know, we're talking to this one data center company, and what they were saying, we were asking, like, look, what are the timelines when you think about bringing new power, like new natural gas plants online? | ||
And they're like, well, those are like five to seven years out. | ||
And then you go, okay, well, like, how long? | ||
And that's, by the way, that's probably way too long to be relevant in the superintelligence context. | ||
And so you're like, okay, well, how long if all the regulations were waived? | ||
If this was like a national security imperative and whatever authorities, you know, Defense Production Act, whatever, like, was in your favor. | ||
And they're like, oh, I mean, it's actually just like a two-year build. | ||
Like, that's what it is. | ||
So you're tripling the build time. | ||
We're getting in our own way. | ||
Every which way. | ||
Every which way. | ||
And also, like, I mean, I also don't want to be too working in our own way, but, like, we don't want to, like, frame it as, like, China's, like, they fuck up. | ||
They fuck up a lot, like, all the time. | ||
One actually kind of, like, funny one is around DeepSeek. | ||
So, you know DeepSeek, right? | ||
They made this, like, open source model that, like, everyone, like, lost their minds about back in January. | ||
R1, yeah. | ||
Yeah, R1. | ||
And they're legitimately a really, really good team. | ||
But it's fairly clear that even as of like end of last year and certainly in the summer of last year, like they were not dialed in to the CCP mothership. | ||
And they were doing stuff that was like actually kind of hilariously messing up the propaganda efforts of of the CCP without realizing it. | ||
So to give you like some context on this, one of one of the CCP's like. | ||
Large kind of propaganda goals in the last four years has been framing, creating this narrative that, like, the export controls we have around AI and, like, all this gear and stuff that we were talking about, look, man, those don't even work. | ||
So you might as well just, like, give up. | ||
Why don't you just give up on the export controls? | ||
It's pointless. | ||
We don't even care. | ||
We don't even care. | ||
So that, trying to frame that narrative. | ||
And they went to, like, gigantic efforts to do this. | ||
So I don't know if, like, there's this, like, kind of, Crazy thing where the Secretary of Commerce under Biden, Gina Raimondo, visited China in, I think, August 2023. | ||
And the Chinese basically like timed the launch of the Huawei Mate 60 phone that had this these chips that were supposed to be made by like export controlled shit for right for her visit. | ||
So it was basically just like a big like, fuck you. | ||
We don't even give a shit about your export controls, like basically trying a morale hit or whatever. | ||
And you think about that, right? | ||
You've got to coordinate with Huawei. | ||
You've got to get the TikTok memes and shit going in the right direction. | ||
All that stuff. | ||
And all the stuff they've been putting out is around this narrative. | ||
Now, fast forward to mid-last year. | ||
The CEO of DeepSeek, the company, back then, it was totally obscure. | ||
Nobody was tracking who they were. | ||
They were working in total obscurity. | ||
He goes on this, he does this random interview on Substack. | ||
And what he says is, he's like, yeah, so honestly, like, we're really excited and doing this AGI push or whatever. | ||
And like, honestly, like, money's not the problem for us. | ||
Talent's not the problem for us. | ||
But like, access to compute, like these export controls, man. | ||
Do they ever work? | ||
That's a real problem for us. | ||
Oh, boy. | ||
And, like, nobody noticed at the time. | ||
But then the whole DeepSeek R1 thing blew up in December. | ||
And now you imagine, like, you're the Chinese Ministry of Foreign Affairs. | ||
Like, you've been, like, you've been putting this narrative together for, like, four years. | ||
And this jackass that nobody heard about five minutes ago. | ||
Basically just, like, shits all over it. | ||
And, like, you're not hearing that line from him anymore. | ||
No, no, no, no, no. | ||
They've locked that shit down. | ||
Oh, and actually, the funniest part of this... | ||
Right when R1 launched, there's a random DeepSeek employee. | ||
I think his name is like Dia Guo or something like that. | ||
He tweets out. | ||
He's like, so this is like our most exciting launch of the year. | ||
Nothing can stop us on the path to AGI except access to compute. | ||
And then literally the dude in Washington, D.C., who works at the think tank on export controls against China, reposts that on X, and goes basically like, message received. | ||
And so, like, hilarious for us. | ||
But also, like, you know that on the backside, somebody got screamed at for that shit. | ||
Somebody got magic bust. | ||
Somebody got, yeah, somebody got, like, taken away or whatever. | ||
Because, like, it just undermined their entire, like, four-year, like, narrative around these export controls. | ||
unidentified
|
Wow. | |
But that shit ain't going to happen again from DeepSeek. | ||
Better believe it. | ||
And that's part of the problem with like, so the Chinese face so many issues. | ||
One of them is, you know, to kind of, another one is the idea of just waste and fraud, right? | ||
So we have a free market. | ||
Like what that means is you raise from private capital. | ||
People who are pretty damn good at assessing shit will like look at your setup and assess whether it's worth backing you for these massive multi-billion dollar deals. | ||
In China, the state like... | ||
I mean, the stories of waste are pretty insane. | ||
They'll, like, send a billion dollars to, like, a bunch of yahoos who will pivot from whatever, like, I don't know, making these widgets to just, like, oh, now we're, like, a chip foundry and they have no experience in it. | ||
But because of all these subsidies, because of all these opportunities, now we're going to say that we are. | ||
And then, no surprise, two years later, they burn out and they've just, like, lit. | ||
A billion dollars on fire or whatever billion yen. | ||
And, like, the weird thing is this is actually working overall, but it does lead to insane and unsustainable levels of waste. | ||
Like, the Chinese system right now is obviously, like, they've got their massive property bubble that they're... | ||
That's looking really bad. | ||
We've got a population crisis. | ||
The only way out for them is the AI stuff right now. | ||
Like, really, the only path for them is that, which is why they're working it so hard. | ||
But the stories of just, like, billions and tens of billions of dollars being lit on fire, specifically in the semiconductor industry, in the AI industry, like, that's a drag force that they're dealing with constantly that we don't have here in the same way. | ||
So it's sort of like the different structural advantages and weaknesses of... | ||
And when we think about what do we need to do to counter this, to be active in this space, to be a live player again, it means factoring in how do you take advantage of some of those opportunities that their system presents that ours doesn't. | ||
When you say be a live player again, where do you position us? | ||
I think it remains to be... | ||
So right now, this administration is obviously taking bigger swings. | ||
What are they doing differently? | ||
So, well, I mean, things like tariffs, I mean, they're not shy about trying new stuff. | ||
And tariffs are very complex in this space, like the impact, the actual impact of the tariffs and not universally good. | ||
But the on-shoring effect is also something that you really want. | ||
So it's a very mixed bag. | ||
But it's certainly an administration that's like willing to do high stakes, big moves in a way that... | ||
Other administrations haven't. | ||
And in a time when you're looking at a transformative technology that's going to, like, upend so much about the way the world works, you can't afford to have that mentality we were just talking about with, like, the nervous... | ||
I mean, you encountered it with the staffers, you know, when booking the podcast with the presidential cycle, right? | ||
Like, the kind of, like, nervous... | ||
Everything's got to be controlled and it's got to be like just so you can't have it. | ||
Yeah, it's like wrestlers have that mentality of like just like aggression, like feed in, right? | ||
Feed forward. | ||
Don't just sit back and like wait to take the punch. | ||
It's not like one of the guys who helped us out on this has this saying. | ||
He's like, fuck you, I go first and it's always my turn. | ||
Right. | ||
That's what success looks like when you actually are managing these kinds of national security issues. | ||
The mentality we had adopted was this like sort of siege mentality where we're just letting stuff happen to us and we're not feeding in. | ||
That's something that I'm much more optimistic about in this context. | ||
It's tough, too, because I understand people who hear that and go like, well, look, you're talking about like escalatory. | ||
This is an escalatory agenda again. | ||
I actually think paradoxically it's not. | ||
It's about keeping adversaries in check and training them to respect American territorial integrity, American technological sovereignty. | ||
Like, you don't get that for free, and if you just sit back, that is escalatory. | ||
It's just... | ||
Yeah, and this is basically the sub-threshold version of, like, you know, like the World War II appeasement thing, where back, you know, Hitler was, like, was taken, he was taken Austria, he was re-militarizing shit, he was doing... | ||
He was doing this, he was doing that. | ||
And the British were like, okay, we're going to let him just take one more thing and then he will be satisfied. | ||
And that just didn't work. | ||
unidentified
|
Maybe I have a little bit of Poland, please. | |
A little bit of Poland. | ||
unidentified
|
Maybe the Czechoslovakia is looking awfully fine. | |
And so this is basically like they fell into that pit, like that tar pit. | ||
Peace in our time, yeah. | ||
Peace in our time, right? | ||
And to some extent, we've still kind of learned the lesson of not letting that happen with territorial boundaries, but that's big and it's visible and it happens on the map and you can't hide it. | ||
Whereas one of the risks, especially with the previous administration, was there's these subthreshold things that don't show up in the news and that are calculated. | ||
Basically, our adversaries know. | ||
Because they know history. | ||
They know not to give us a Pearl Harbor. | ||
They know not to give us a 9-11. | ||
Because historically, countries that give America a Pearl Harbor end up having a pretty bad time about it. | ||
And so why would they give us a reason to come and bind together against an obvious external threat or risk when they can just keep chipping away at it? | ||
Elevate that and realize this is what's happening. | ||
This is the strategy. | ||
We need to... | ||
We need to take that, like, let's not do appeasement mentality and push it across in these other domains because that's where the real competition is going on. | ||
That's where it gets so fascinating in regards to social media because it's imperative that you have an ability to express yourself. | ||
It's very valuable for everybody. | ||
The free exchange of information, finding out things that you're not going to get from mainstream media and it's led to the rise of independent journalism. | ||
It's all great. | ||
But also, you're being manipulated, like, left and right constantly. | ||
Most people don't have the time to filter through it. | ||
We try to get some sort of objective sense of what's actually going on. | ||
It's true. | ||
It's like our free speech. | ||
It's like it's the layer where our society figures stuff out. | ||
And if adversaries get into that layer, they're like almost inside of our brain. | ||
And there's ways of addressing this. | ||
Like one of the challenges obviously is like – so they try to push in extreme opinions in either direction. | ||
And it's – that part is actually – it's kind of difficult because while – The most extreme opinions are also the most likely generally to be wrong. | ||
They're also the most valuable when they're right because they tell us a thing that we didn't expect by definition that's true and that can really advance us forward. | ||
And so, I mean, there are actually solutions to this. | ||
I mean, this particular thing isn't an area we... | ||
We're, like, too immersed in. | ||
But one of the solutions that has been bandied about is, like, you know, like, you might know, like, polymarket prediction markets and stuff like that, where at least, you know, hypothetically, if you have a prediction market around, like, if we do this policy, | ||
this thing will or won't happen, that actually creates a challenge around trying to manipulate that view or that market. | ||
Because what ends up happening is, like, if you're an adversary and you want to... | ||
Not just like manipulate a conversation that's happening in social media, which is cheap, but manipulate the price on a prediction market. | ||
You have to buy in. | ||
You have to spend real resources. | ||
And if to the extent you're wrong and you're trying to create a wrong opinion, you're going to lose your resource. | ||
So you actually can't push too far too many times or you will just get your money taken away from you. | ||
I think that's one approach where just in terms of preserving discourse, some of the stuff that's happening in prediction markets is actually really interesting and really exciting, even in the context of bots and AIs and stuff like that. | ||
This is the one way to find truth in the system is find out where people are making money. | ||
Exactly. | ||
Put your money where your mouth is, right? | ||
Proof of work. | ||
That is what just the market is theoretically too, right? | ||
It's got obviously big issues and can be manipulated in the short term. | ||
But in the long run, this is one of the really interesting things about startups too. | ||
When you run into people in the early days... | ||
By definition, their startup looks like it's not going to succeed, right? | ||
That is what it means to be a seed stage startup, right? | ||
If it was obvious you were going to succeed, you would, you know, the people would have raised more money already. | ||
Yeah. | ||
So what you end up having is these highly contrarian people who, despite everybody telling them that they're going to fail, just believe in what they're doing and think they're going to succeed. | ||
And I think that's part of what really kind of shapes the startup founder's soul in a way that's really constructive. | ||
It's also something that, if you look at the Chinese system, is very different. | ||
You raise money in very different ways. | ||
You're coupled to the state apparatus. | ||
You're both dependent on it and you're supported by it. | ||
But there's just a lot of... | ||
And it makes it hard for Americans to relate to Chinese and vice versa and understand each other's systems. | ||
One of the biggest risks as you're thinking through what is your posture going to be relative to these countries is you fall into thinking that their traditions, their way of thinking about the world is the same as your own. | ||
And that's something that's been an issue for us with China for a long time is, you know, hey, they'll liberalize, right? | ||
Like bring them into the World Trade Organization. | ||
It's like, oh, well, actually they'll sign the document, but they won't actually live up to any of the commitments. | ||
It makes appeasement really tempting because you're thinking, oh, they're just like us. | ||
They're just around the corner. | ||
unidentified
|
If we just reach out the oil branch a little bit further, they're going to come around. | |
It's like a guy who's stuck in the friend zone with a girl. | ||
One day, she's going to come around and realize I'm a great catch. | ||
You keep on trucking, buddy. | ||
One day, China's going to be my bestie. | ||
We're going to be besties. | ||
We just need an administration that reaches out to them and just lets them know, man, there's no reason why she'd be adversaries. | ||
We're all just people on planet Earth together. | ||
We're all together. | ||
I honestly wish that was true. | ||
It would be wonderful. | ||
Maybe that's what AI brings about. | ||
Maybe AI, maybe super intelligence realizes, "Hey, you fucking apes, you territorial apes with thermonuclear weapons, how about you shut the fuck up? | ||
You guys are doing the dumbest thing of all time and you're being manipulated by a small group of people that are profiting in insane ways off of your misery." | ||
That's actually not- Stole first, and those people are now controlling all the fucking money. | ||
How about we stop that? | ||
Wow, we covered a lot of ground there. | ||
Well, that's what I would do if I was superintelligence that would have stopped all that. | ||
That actually is, like, so this is not, like, relevant to the risk stuff or to the whatever at all, but it's just interesting. | ||
So there's actually theories, like, in the same way that there's theories around power seeking and stuff around superintelligence, there's theories around, like, how superintelligence is. | ||
Do deals with each other, right? | ||
And you actually, like, you have this intuition, which is exactly right, which is that, hey, two super intelligences, like, actual legit super intelligences should never actually, like, fight each other destructively in the real world, right? | ||
Like, that seems weird. | ||
That shouldn't happen because they're so smart. | ||
And in fact, like, there's theories around they can kind of do perfect deals with each other based on, like, if we're two super intelligences, I can kind of assess, like, how powerful you are. | ||
You can assess how powerful I am, and we can actually decide, well, if we did fight a war against each other... | ||
You would have this chance of winning. | ||
I would have that chance of winning. | ||
And so let's just not fight. | ||
Well, it would assess instantaneously that there's no benefit in that. | ||
unidentified
|
Exactly. | |
And also it would know something that we all know, which is the rising tide lifts all boats. | ||
But the problem is the people that already have yachts, they don't give a fuck about your boat. | ||
Like, hey, hey, hey, that water's mine. | ||
In fact, you shouldn't even have water. | ||
Well, hopefully it's so positive some, right, that even they enjoy the benefits. | ||
But, I mean, you're right. | ||
This is the issue right now. | ||
And one of the nice things, too, is as you build up your ratchet of AI, It does start to open some opportunities for actual trust but verified, which is something that we can't do right now. | ||
It's not like with nuclear stockpiles where we've had some success in some context with enforcing treaties and stuff like that, sending inspectors in and all that. | ||
With AI right now, how can you actually prove that... | ||
Like some international agreement on the use of AI is being observed. | ||
Even if we figure out how to control these systems, how can we make sure that, you know, China is baking in those control mechanisms into their training runs and that we are and how can we prove it to each other without having total access to the compute stack? | ||
We don't really have a solution for that. | ||
There are all kinds of programs like this FlexHeg thing. | ||
But anyway, those are not going to be online by 2027. | ||
But it's really good that people are working on them. | ||
For sure. | ||
You want to be positioned for catastrophic success. | ||
What if something great happens or we have more time or whatever? | ||
You want to be working on this stuff that allows this kind of control or oversight that's kind of hands-off. | ||
You know, in theory, you can hand over GPUs to an adversary inside this box with these encryption things. | ||
The people we've spoken to in the spaces that actually try to break into boxes like this are like, well, that's probably not going to work. | ||
But who knows? | ||
It might. | ||
unidentified
|
Yeah. | |
So the hope is that as you build up your AI capabilities, basically, it starts to create solutions. | ||
So it starts to create ways for two countries to verifiably adhere to some kind of international agreement or to find, like you said, paths for de-escalation. | ||
That's the sort of thing that we | ||
That would be what's really fascinating. | ||
Artificial general intelligence becomes super intelligence and it immediately weeds out all the corruption. | ||
It goes, hey, this is the problem. | ||
Like a massive doge in the sky. | ||
Yeah, exactly. | ||
Like, we figured it out. | ||
You guys are all criminals. | ||
And expose it to all the people. | ||
Like, these people that are your leaders have been profiting. | ||
And they do it on purpose. | ||
And this is how they're doing it. | ||
And this is how they're manipulating you. | ||
And these are all the lies that they've told. | ||
I'm sure that list is pretty... | ||
Whoa. | ||
It almost would be scary. | ||
Like, if you could x-ray the world right now. | ||
And, like, see all the... | ||
You'd want an MRI. | ||
You'd want to get, like, down to the tissue. | ||
Yeah, you're right. | ||
You'd probably... | ||
Yeah, you'd want to get down to the cellular level. | ||
But, like, it... | ||
Because it would be offshore accounts. | ||
Then you'd start finding show companies. | ||
There would be so much... | ||
Like, the stuff that... | ||
It comes out, you know, from just randomly, right? | ||
Just random shit that comes out. | ||
Like, yeah, the... | ||
I forget that, like, Argentinian... | ||
I think what you were talking about, like, the Argentinian thing that came out a few years ago around all the oligarchs and their offshore accounts. | ||
The Meryl Streep thing, yeah. | ||
unidentified
|
Yeah, yeah, yeah. | |
Meryl Streep? | ||
Yeah, the laundromat there. | ||
The laundromat movie. | ||
You ever seen that? | ||
Panama Papers. | ||
The Panama Papers. | ||
I never saw that. | ||
No? | ||
It's a good movie. | ||
Is it called the Panama Papers, the movie? | ||
It's called the laundromat. | ||
Oh, okay. | ||
You remember the Panama Papers? | ||
Do you know? | ||
Roughly. | ||
Yeah, it's like all the oligarchs stashing their cash in Panama. | ||
Like offshore tax haven stuff. | ||
Yeah, it's like... | ||
And, like, someone basically blew it wide open, and so you got to see, like, every, like, oligarch and rich person's, like, financial shit. | ||
Like, every once in a while, right, the world gets just, like, a flash of, like, oh, here's what's going on at the surface. | ||
It's like, oh, fuck! | ||
And then we all, like, go back to sleep. | ||
What's fascinating is, like, the unhideables, right? | ||
The little things that... | ||
Can't help but give away what is happening. | ||
You think about this in AI quite a bit. | ||
Some things that are hard for companies to hide is they'll have a job posting. | ||
They've got to advertise to recruit. | ||
So you'll see like, oh, interesting. | ||
Oh, OpenAI is looking to hire some people from hedge funds. | ||
I wonder what that means. | ||
I wonder what that implies. | ||
If you think about all of the leaders in the AI space, think about the Medallion Fund, for example. | ||
This is a super successful hedge fund. | ||
The Man Who Broke the Market. | ||
The Man Who Broke the Market is the famous book about the founder of the Medallion Fund. | ||
This is basically a fund that... | ||
They make, like, ridiculous, like, $5 billion returns every year kind of guaranteed, so much so they have to cap how much they invest in the market because they would otherwise, like, move the market too much, like, affect it. | ||
The fucked up thing about, like, the way they trade, and so this is, like, 20-year-old information, but it's still indicative because, like, you can't get current information about their strategies. | ||
But one of the things that they were the first to kind of go for and figure out is they were like, Okay, they basically were the first to kind of build what was at the time, as much as possible, an AI that autonomously did trading at, like, | ||
great speeds, and it had, like, no human oversight and just worked on its own. | ||
And what they found was the strategies that were the most successful were the ones that humans understood the least. | ||
Because if you have a strategy that a human can understand... | ||
Some human's going to go and figure out that strategy and trade against you. | ||
Whereas if you have the kind of the balls to go like, oh, this thing is doing some weird shit that I cannot understand no matter how hard I try, let's just fucking YOLO and trust it and make it work. | ||
If you have all the stuff debugged and if the whole system is working right... | ||
That's where your biggest successes are. | ||
What kind of strategies are you talking about? | ||
I don't know specific examples. | ||
How are AI systems trained today? | ||
Just as a trading strategy. | ||
As an example, you buy this stock. | ||
The Thursday after the full moon and then sell it like the Friday after the new moon or some like random shit like that. | ||
But it's like, why does that even work? | ||
Like, why would why would that even work? | ||
So to like to sort of explain why these these strategies work better, if you think about how AI systems are trained today, you basically very roughly. | ||
You start with this blob of numbers that's called a model. | ||
And you feed it input, you get an output. | ||
If the output you get is no good, if you don't like the output, you basically fuck around with all those numbers, change them a little bit, and then you try again. | ||
You're like, oh, okay, that's better. | ||
And you repeat that process over and over and over with different inputs and outputs. | ||
And eventually, those numbers, that mysterious ball of numbers, starts to behave well. | ||
It starts to make good predictions or generate good outputs. | ||
Now, you don't know why that is. | ||
You just know that it does a good job, at least where you've tested it. | ||
Now if you slightly change what you tested on, suddenly you could discover, oh shit, it's catastrophically failing at that thing. | ||
These things are very brittle in that way, and that's... | ||
That's part of the reason why ChatGPT will just like completely go on a psycho binge fest every once in a while if you give it a prompt that has like too many exclamation points and asterisks in it or something. | ||
Like these systems are weirdly brittle in that way. | ||
But applied to investment strategies, if all you're doing is saying like Optimize for returns. | ||
Give it inputs. | ||
Make me more money by the end of the day. | ||
It's like an easy goal. | ||
It's a very clear-cut goal, right? | ||
You can give a machine. | ||
So you end up with a machine that gives you these very... | ||
It is a very weird strategy. | ||
This ball of numbers isn't human understandable. | ||
It's just really fucking good at making money. | ||
And why is it really fucking good at making money? | ||
unidentified
|
I don't know. | |
I mean, it just kind of does the thing. | ||
And in making money, I don't ask too many questions. | ||
That's kind of like the... | ||
So when you try to impose on that system human interpretability, you pay what in the AI world is known as the interpretability tax. | ||
Basically, you're adding another constraint, and the minute you start to do that, you're forcing it to optimize for something other than pure rewards. | ||
Like doctors using AI to diagnose diseases are less effective than the chatbot on its own. | ||
That's actually related, right? | ||
That's related. | ||
If you want that system to get good at diagnosis, that's one thing. | ||
OK, just fucking make it good at diagnosis. | ||
If you want it to be good at diagnosis and to produce explanations that a good doctor will go like, OK, I'll use that. | ||
Well, great. | ||
But guess what? | ||
Now you're spending some of that precious compute on something other than just the thing you're trying to optimize for. | ||
And so now that's going to come at a cost of the actual performance of the system. | ||
And so if you are going to optimize like the fuck out of making money. | ||
You're going to necessarily de-optimize the fuck out of anything else, including being able to even understand what that system is doing. | ||
And that's kind of like at the heart of a lot of the kind of big-picture AI strategy stuff is people are wondering, like, how much interpretability tax am I willing to pay here? | ||
And how much does it cost? | ||
And everyone's willing to go a little bit further and a little further. | ||
So OpenAI actually had a paper or, I guess, a blog post where they talked about this. | ||
And they were like, look, right now... | ||
We have this, essentially, this, like, thought stream that our model produces on the way to generating its final output. | ||
And that thought stream, like, we don't want to touch it to make it, like, interpretable, to make it make sense, because if we do that, then essentially it'll be optimized to convince us of whatever the thing is that we want it to do. | ||
So it's like if you've used like an OpenAI model recently, right, like 03 or whatever, it's doing its thinking before it starts like outputting the answer. | ||
And so that thinking is, yeah, we're supposed to like be able to read that and kind of get it, but also... | ||
We don't want to make it too legible, because if we make it too legible, it's going to be optimized to be legible and to be convincing, rather than... | ||
To fool us, basically. | ||
Yeah, exactly. | ||
unidentified
|
Oh, Jesus Christ. | |
You guys are making me less comfortable than I thought you would. | ||
unidentified
|
I knew coming at Jamie and I were talking about it before, like, how bad are they going to freak us out? | |
You're freaking me out more. | ||
Well, I mean, okay, so... | ||
I do want to highlight, so the game plan right now on the positive end, let's see how this works. | ||
Jesus. | ||
Jamie, do you feel the same way? | ||
unidentified
|
Uh, yeah. | |
I mean, I have articles I didn't bring up that are supporting some of this stuff. | ||
Like, today, China quietly made some chip that they shouldn't have been able to do because of the sanctions. | ||
Oh, that's fine. | ||
And it's basically based off of their just sheer will. | ||
Okay, so there's... | ||
SMIC. | ||
There's good news on that one, at least. | ||
This is kind of a bullshit strategy that they're using. | ||
So, there's... | ||
Okay, so when you make these insane, like, five nanometers... | ||
Let's read that for people just listening. | ||
China quietly cracks five nanometer. | ||
Yeah. | ||
Without EUV, what is EUV? | ||
Extreme ultraviolet. | ||
How SMIC defied the chip sanctions with sheer engineering. | ||
Yeah, so this is like... | ||
And espionage. | ||
But actually, though, so there's a good reason that a lot of these articles are making it seem like this is a huge breakthrough. | ||
It actually isn't as big as it seems. | ||
So, okay, if you want to make really, really, really, really exquisite chips... | ||
Look at this quote. | ||
Moore's Law didn't die, Huo wrote. | ||
It moved to Shanghai. | ||
Instead of giving up, China's grinding its way forward layer by layer, pixel by pixel. | ||
The future of chips may no longer be written by who holds the best tools, but by who refuses to stop building. | ||
The rules are changing and DUV just lit the fuse. | ||
unidentified
|
Boy. | |
Who wrote that article? | ||
Gizmo China. | ||
There it is. | ||
Yeah. | ||
You can view that as like Chinese propaganda in a way, actually. | ||
So what's actually going on here is, so the Chinese only have these deep ultraviolet lithography machines. | ||
That's like a lot of syllables. | ||
But it's just a glorified chip. | ||
Like, it's a giant laser. | ||
That zaps your chips to, like, make the chips when you're fapping them. | ||
Yeah, so we're talking about, like, you do these atomic layer patterns on the chips and shit, and, like, what this UV thing does is it, like, fires, like, a really high-powered laser beam. | ||
Laser beam, yeah. | ||
They attach the head of sharks that just shoot at the chips. | ||
Sorry, that was, like, an Austin Powers. | ||
Anyway, they'll, like, shoot it at the chips, and that causes, depending on how the thing is designed, They'll, like, have a liquid layer of the stuff that's gonna go on the chip. | ||
The UV is really, really tight and causes it, exactly, causes it to harden. | ||
And then they wash off the liquid, and they do it all over again. | ||
Like, basically, this is just imprinting a pattern on a chip. | ||
Yeah, basically a fancy, tiny printer. | ||
Yeah, so that's it. | ||
And so the exquisite machines that we get to use, or that they get to use in Taiwan, are called extreme ultraviolet lithography. | ||
These are those crazy lasers. | ||
The ones that China can use, because we've prevented them from getting any of those extreme ultraviolet lithography machines, the ones China uses are previous generation machines called Deep Ultraviolet, and they can't actually make chips as high a resolution as ours. | ||
So what they do is, and what this article is about is, they basically take the same chip, they zap it once with DUV. | ||
And then they gotta pass it through again, zap it again, to get closer to the level of resolution we get in one pass with our exquisite machine. | ||
Now, the problem with that is you've got to pass the same chip through multiple times, which slows down your whole process. | ||
It means your yields at the end of the day are lower. | ||
It adds errors. | ||
Yeah, which makes it more costly. | ||
We've known that this is a thing that's called multi-patterning. | ||
It's been a thing for a long time. | ||
There's nothing new under the sun here. | ||
China has been doing this for a while. | ||
So it's not actually a huge shock that this is happening. | ||
The question is always, when you look at an announcement like this, yields, yields, yields. | ||
How, like, what percentage of the chips coming out are actually usable and how fast are they coming out? | ||
That determines, like, is it actually competitive? | ||
And that article, too, like, this ties into the propaganda stuff we were talking about, right? | ||
If you read an article like that, you could be forgiven for going, like, oh, man, our expert controls, like, just aren't working, so we might as well just give them up. | ||
When in reality, because you look at the source, and this is how you know that also this is one of their propaganda things. | ||
You look at Chinese news sources, what are they saying? | ||
What are the beats that are, like, common? | ||
And you know, just because of the way their media is set up, totally different from us, and we're not used to analyzing things this way, but when you read something in, like, the South China Morning Post, or, like, the Global Times, or Xinhua, or in a few different places like this, and it's the same beats coming back, you know that someone was handed a brief, | ||
and it's like, you gotta hit this point, this point, this point, and, yep, they're gonna find a way to work that into the news cycle over there. | ||
unidentified
|
Jeez. | |
And it's also, like, slightly true. | ||
Like, yeah, they did manage to make chips at, like, five nanometers. | ||
Cool. | ||
It's not a lie. | ||
It's the same, like, propaganda technique, right? | ||
Most of the time, you're not going to confabulate something out of nothing. | ||
Rather, like, you start with the truth, and then you push it just a little bit. | ||
Just a little bit. | ||
And you keep pushing, pushing, pushing. | ||
unidentified
|
Wow. | |
How much is this administration aware of all the things that you're talking about? | ||
So they're actually... | ||
Right now, they're in the middle of staffing up some of the key positions because it's a new administration still, and this is such a technical domain. | ||
They've got people there who are at the working level who are really sharp. | ||
They have some people now, yeah, in places like especially in some of the export control offices now who are some of the best in the business. | ||
Yeah. And that's that's really important. | ||
Like this is a it's a weird space because so when you want to actually recruit for for. | ||
You know, government roles in this space, it's really fucking hard. | ||
Because you're competing against, like, an open AI, like, very, like, low-range salaries, like half a million dollars a year. | ||
The government pay scale, needless to say, is, like, not... | ||
I mean, Elon worked for free. | ||
He can afford to, but still taking a lot of time out of his day. | ||
There's a lot of people like that who are, like, you know, they... | ||
They can't justify the cost. | ||
They literally can't afford to go work for the government. | ||
Why would they? | ||
Exactly. | ||
Whereas China's like, "You don't have a choice, bitch!" | ||
Yeah, and that's what they say. | ||
The Chinese word for bitch is really biting. | ||
If you translated that, it would be a real stain. | ||
I'm sure. | ||
It's kind of crazy because it seems almost impossible to compete with that. | ||
I mean, that's like the perfect setup. | ||
If you wanted to control everything and you wanted to optimize everything for the state, that's the way you would do it. | ||
Yeah, but it's also easier to make errors and be wrong-footed in that way. | ||
And also, basically, that system only works if the dictator at the top is just like very competent. | ||
Because the risk always with a dictatorship is like, oh. | ||
The dictator turns over, and now it's like just a total dumbass. | ||
And now the whole thing falls apart. | ||
And he surrounds himself. | ||
I mean, look, we just talked about information echo chambers online and stuff. | ||
The ultimate information echo chamber is the one around Xi Jinping right now. | ||
Because no one wants to give him bad news. | ||
I'm not gonna. | ||
And this is what you keep seeing, right? | ||
With these provincial-level debt in China, which is so awful. | ||
It's like people trying to hide money under imaginary mattresses. | ||
And then hiding those mattresses under bigger mattresses until eventually, like, no one knows where the liability is. | ||
And then you get a massive property bubble and any number of other bubbles that are due to pop any time, right? | ||
And the longer it goes on, like, the more, like, stuff gets squirreled away. | ||
Like, there's actually, like, a story from the Soviet Union that always, like, gets me, which is, so Stalin obviously, like, purged and killed, like, millions of people in the 1930s, right? | ||
By the 1980s, the ruling Politburo of the Soviet Union, obviously, like, things have been different. | ||
Generations had turned over and all this stuff. | ||
But those people, the most powerful people in the USSR, could not figure out what had happened to their own families during the purchase. | ||
Like, the information was just nowhere to be found because the machine of the state was just like... | ||
So aligned around like we just like we just gotta kill as many fucking people as we can and like turn it over and then hide the evidence of it and then kill the people who killed the people and then kill those people who killed those people. | ||
It also wasn't just kill the people, right? | ||
It was like a lot of like kind of gulag archipelago style. | ||
It's about labor, right? | ||
Because the fundamentals of the economy are so shit that you basically have to find a way to justify putting people in labor camps. | ||
That's right. | ||
But it was very much like you grind mostly or largely you grind them to death and basically they've gone away and you burn the records of it happening. | ||
So literally the most powerful people. | ||
Whole towns, right, that disappeared. | ||
Like people who are like, there's no record or there's like, or usually the way you know about it is there's like one dude. | ||
And it's like this one dude has a very precarious escape story. | ||
And it's like if literally this dude didn't get away, you wouldn't know about the entire town that was like wiped out. | ||
Yeah, it's crazy. | ||
Jesus Christ. | ||
Yeah. The stuff that like. | ||
It just hasn't been done right. | ||
I feel like we could do it right. | ||
And we have a 10-page plan. | ||
We came real close. | ||
We came real close. | ||
So close. | ||
Yeah, and that's what the blue no matter who people don't really totally understand. | ||
We're not even talking about political parties. | ||
We're talking about power structures. | ||
And we came close to a terrifying power structure. | ||
And it was willing to just do whatever it could to keep it rolling. | ||
And it was rolling for four years. | ||
It was rolling for four years without anyone at the helm. | ||
Show me the incentives, right? | ||
I mean, that's always the question. | ||
Yeah. | ||
One of the things is, too, when you have such a big structure that's overseeing such complexity, right? | ||
Obviously, a lot of stuff can hide in that structure, and it's not unrelated to the whole AI picture. | ||
There's only so much compute that you have at the top of that system that you can spend, right? | ||
As the president, as a cabinet member, like, whatever. | ||
You can't look over everyone's shoulder and do their homework. | ||
You can't do founder mode all the way down and all the branches and all the, like, action officers and all that shit. | ||
That's not going to happen, which means you're spending five seconds thinking about how to unfuck some part of the government, but then the, like, you know... | ||
Corrupt people who run their own fiefdoms there spend every day trying to figure out how to survive. | ||
It's like their whole life to justify themselves. | ||
Yeah, yeah. | ||
Well, that's the USAID dilemma. | ||
Yeah. | ||
unidentified
|
Yeah. | |
Because they're uncovering this just insane amount of NGOs. | ||
Like, where's this going? | ||
We talked about this the other day, but India has an NGO for every 600 people. | ||
Wait, what? | ||
unidentified
|
Yes. | |
We need more NGOs. | ||
There's 3.3 million NGOs. | ||
unidentified
|
What? | |
In India. | ||
Do they bucket? | ||
What are the categories that they fall into? | ||
Who fucking knows? | ||
That's part of the problem. | ||
One of the things that Elon had found is that there's money that just goes out with no receipts. | ||
It's billions of dollars. | ||
We need to take that further. | ||
We need an NGO for every person in India. | ||
We will get that eventually. | ||
It's the exponential trend. | ||
It's just like AI. | ||
The number of NGOs is doubling every year. | ||
unidentified
|
We're making progress. | |
We're making incredible progress in bullshit. | ||
The geo-scaling law, the bullshit scaling law. | ||
Well, it's just that unfortunately it's Republicans doing it, right? | ||
So it's unfortunately the Democrats are going to oppose it even if it's showing that there's like insane waste of your tax dollars. | ||
I thought some of the doge stuff was pretty bipartisan. | ||
There's congressional support at least on both sides, no? | ||
Well, sort of. | ||
I think the real issue is in dismantling a lot of these programs that – You can point to some good some of these programs do. | ||
The problem is, like, some of them are so overwhelmed with fraud and waste that it's like, to keep them active in the state they are, like, what do you do? | ||
Do you rip the Band-Aid off and start from scratch? | ||
Like, what do you do with the Department of Education? | ||
Do you say, why are we number 39 when we were number one? | ||
Like, what did you guys do with all that money? | ||
Did you create problems? | ||
There's this idea in software engineering, actually, he's talking to one of our employees about this, which is like, Refactoring, right? | ||
So when you're writing, like, a bunch of software, it gets really, really big and hairy and complicated, and there's all kinds of, like, dumbass shit, and there's all kinds of waste that happens in that codebase. | ||
There's this thing that you do every, you know, every, like, few months, is you do this thing called refactoring, which is, like, you go, like, okay, we have, you know, 10 different things that are trying to do the same thing. | ||
unidentified
|
Let's... | |
Get rid of nine of those things and just like rewrite it as the one thing. | ||
So there's like a cleanup and refresh cycle that has to happen whenever you're developing a big complex thing that does a lot of stuff. | ||
The thing is like the U.S. government at every level has basically never done a refactoring of itself. | ||
And so the way that problems get solved is you're like... | ||
Well, we need to do this new thing. | ||
So we're just gonna, like, stick on another appendage to the beast and get that appendage to do that new thing. | ||
And, like, that's been going on for 250 years, so we end up with, like, this beast that has a lot of appendages, many of which do incredibly duplicative and wasteful stuff, that if you were a software engineer, just, like, not politically, just objectively looking at that as a system, | ||
you'd go, like, oh. | ||
This is a catastrophe. | ||
And, like, we have processes that the industry, we understand how, what needs to be done to fix that. | ||
You have to refactor it. | ||
But they haven't done that, hence the $36 trillion of debt. | ||
It's a problem, too, though, in all, like, when you're a big enough organization, you run into this problem, like, Google has this problem, famously. | ||
We have friends, like, Jason, so Jason's the guy you spoke to about that. | ||
So he's like a startup. | ||
So he works in, like, relatively small codebases, and he, like, you know, can hold the whole codebase in his head at a time. | ||
But when you move over to, you know, Google, to Facebook, like, all of a sudden, this gargantuan codebase starts to look more like the complexity of the U.S. government, just, like, you know, very roughly in terms of scale, right? | ||
So now you're like, okay, well, we want to add functionality. | ||
So we want to incentivize our teams to build products that are going to be valuable. | ||
And the challenge is, The best way to incentivize that is to give people incentives to build new functionality. | ||
Not to refactor. | ||
There's no glory. | ||
If you work at Google, there's no glory in refactoring. | ||
If you work at Meta, there's no glory in refactoring. | ||
Like, there's no promotion, right? | ||
Exactly. | ||
You have to be a product owner. | ||
So you have to, like, invent the next Gmail. | ||
You've got to invent the next Google Calendar. | ||
You've got to do the next, you know, Messenger app. | ||
That's how you get promoted. | ||
And so you've got, like, this attitude. | ||
You go into there and you're just like, let me crank this stuff out and, like, try to ignore all the shit in the code base. | ||
No glory in there. | ||
A, this Frankenstein monster of a codebase that you just keep stapling more shit onto. | ||
And then B, this massive graveyard of apps that never get used. | ||
This is like the thing Google is famous for. | ||
If you ever see like the Google graveyard of apps, it's like all these things that you're like, oh yeah, I guess I kind of remember Google Me. | ||
Somebody made their career off of launching that shit and then peaced out and it died. | ||
That's like the incentive structure at Google, unfortunately. | ||
And it's also kind of the only way to, I mean, it's probably not, but in the world where humans are doing the oversight, that's your limitation, right? | ||
You got some people at the top who have a limited bandwidth and compute that they can dedicate to, like, hunting down the problems. | ||
AI agents might actually solve that. | ||
You could actually have a sort of autonomous AI agent that is the autonomous CEO or something go into an organization and uproot all the things and do that refactor. | ||
You could get way more efficient organizations out of that. | ||
Thinking about government corruption and waste and fraud, that's the kind of thing where those sorts of tools could be radically empowering, but you've got to get them to work right and for you. | ||
We've given us a lot to think about. | ||
Is there anything more? | ||
Should we wrap this up? | ||
If we've made you sufficiently uncomfortable. | ||
I am super uncomfortable. | ||
Very uneasy. | ||
Was the butt tap too much at the beginning? | ||
No, it was fine. | ||
No, that was fine? | ||
All of it was weird. | ||
It's just, you know, I always try to look at some non-cynical way out of this. | ||
Well, the thing is, like, there are paths out. | ||
We talked about this and the fact that a lot of these problems are just us tripping on our own feet. | ||
So if we can just, like... | ||
Un-fuck ourselves a little bit. | ||
We can unleash a lot of this stuff. | ||
And as long as we understand also the bar that security has to hit and how important that is, we actually can Put all this stuff together. | ||
We have the capacity. | ||
It all exists. | ||
It just needs to actually get aligned and around an initiative, and we have to be able to reach out and touch. | ||
On the control side, there's also a world where, and this is actually, like, if you talk to the labs, this is what they're actually planning to do, but it's a question of how methodically and carefully they can do this. | ||
The plan is to ratchet up capabilities, and then scale, in other words. | ||
And then as you do that, you start to use your AI systems, your increasingly clever and powerful AI systems, to do research on technical control. | ||
So you basically build the next generation of systems. | ||
You try to get that generation of systems to help you just inch forward a little bit more on the capability side. | ||
It's a very precarious balance, but it's something that at least isn't insane on the face of it. | ||
And fortunately, I mean, is the... | ||
The default path, like the labs are talking about that kind of control element as being a key pillar of their strategy. | ||
But these conversations are not happening in China. | ||
So what do you think they're doing to keep AI from uprooting their system? | ||
So that's interesting. | ||
Because I would imagine they don't want to lose control. | ||
Right. | ||
There's a lot of... | ||
Ambiguity and uncertainty about what's going on in China. | ||
So there's been a lot of like track 1.5, track 2 diplomacy, basically where you have non-government guys from one side talk to government guys from the other side or talk to non-government from the other side and kind of start to align on like, okay, what do we think the issues are? | ||
You know, the Chinese are – there are a lot of like freaked out Chinese researchers and have come out publicly and said, hey, like we're really concerned about this whole loss of control thing. | ||
There are public statements and all that. | ||
You also have to be mindful that any statement the CCP puts out is a statement they want you to see. | ||
So when they say like, "Oh yeah, we're really worried about this thing," it's genuinely hard to assess what that even means. | ||
But as you start to build these systems, we expect you're going to see some evidence of this shit before. | ||
And it's not necessarily, it's not like you're going to build the system necessarily and have it take over the world. | ||
Like what we see with agents, | ||
Yeah, so I was actually going to add to this really, really good point, and something where, like, open source AI is, like, even, you know, could potentially have an effect here. | ||
So a couple of the major labs, like OpenAI Anthropic, I think, came out recently and said, like, look, we... | ||
We're on the cusp. | ||
Our systems are on the cusp of being able to help a total novice, like someone with no experience, develop and deploy and release a known biological threat. | ||
And that's something we're going to have to grapple with over the next few months. | ||
And eventually, capabilities like this, not necessarily just biological, but also cyber and other areas, are going to come out in open source. | ||
And when they come out in open source... | ||
Basically for anybody to download. | ||
For anybody to download and use. | ||
When they come out in open source, you actually start to see some things happen, like some incidents, like some major hacks that were just done by a random motherfucker who just wants to see the world burn, but that wakes us up to like, | ||
oh shit, these things actually are powerful. | ||
I think one of the aspects also here is we're still in that... | ||
Post-Cold War honeymoon, many of us, right? | ||
In that mentality, like, not everyone has, like, wrapped their heads around this stuff. | ||
And the, like, what needs to happen is something that makes us go, like, oh, damn, we, like, we weren't even really trying this entire time. | ||
Because this is, like, this is the 9-11 effect. | ||
This is the Pearl Harbor effect. | ||
Once you have a thing that aligns everyone around like, oh shit, this is real and we actually need to do it and we're freaked out, we're actually safer. | ||
We're safer when we're all like, okay, something important needs to happen. | ||
Right. | ||
Instead of letting them just slowly chip away. | ||
Exactly. | ||
And so we, like... | ||
We need to have some sort of shock, and we probably will get some kind of shock over the next few months, the way things are trending. | ||
And when that happens, then... | ||
unidentified
|
Or years, if that makes you feel better. | |
But because you have the potential for this open source, it's probably going to be a survivable shock, right? | ||
But still a shock. | ||
And so let us actually realign around, like, okay... | ||
Let's actually fucking solve some problems for real. | ||
And so putting together the groundwork, right, is what we're doing around, like, let's pre-think a lot of this stuff so that, like, if and when the shock comes... | ||
We have a break glass plan. | ||
We have a plan. | ||
And the loss of control stuff is similar. | ||
Like, so one interesting thing that happens with AI agents today is they'll, like, they'll get any... | ||
So an AI agent will take a complex task that you give it, like, find me... | ||
Like best sneakers for me online, some shit like that. | ||
And they'll break it down into a series of sub-steps. | ||
And then each of those steps, it'll farm out to a version of itself, say, to execute autonomously. | ||
The more complex a task is, the more of those little sub-steps there are in it. | ||
And so you can have an AI agent that nails like 99% of those steps. | ||
But if it screws up just one, the whole thing is a flop, right? | ||
And so... | ||
If you think about the loss of control scenarios that a lot of people look at are autonomous replication, like the model gets access to the internet, copies itself onto servers and all that stuff. | ||
Those are very complex movements. | ||
If it screws up at any point along the way, that's a tell, like, oh, shit, something's happening there. | ||
And you can start to think about, like, okay, well, what went wrong? | ||
We get another do. | ||
We get another try, and we can kind of learn from our mistakes. | ||
So there is this sort of, like, this picture, you know, one camp goes, oh, well, we're going to kind of make this superintelligence in a vat, and then it explodes out and we lose control over it. | ||
That doesn't... | ||
Necessarily seem like the default scenario right now. | ||
It seems like what we're doing is scaling these systems. | ||
We might unhobble them with big capability jumps. | ||
But there's a component of this that is a continuous process that lets us kind of get our arms around it in a more staged way. | ||
That's another thing that I think is in our favor that we didn't expect before as a field, basically. | ||
And I think that's a good thing. | ||
That helps you kind of detect these breakout attempts and do things about them. | ||
unidentified
|
All right. | |
I'm going to bring this home. | ||
I'm freaked out. | ||
So thank you. | ||
Thanks for trying to make me feel better. | ||
I don't think you did. | ||
But I really appreciate you guys and appreciate your perspective because it's very important and it's very illuminating. | ||
It gives you a sense of what's going on. | ||
And I think one of the things that you said that's really important is, like, it sucks that we need a 9-11 moment or a Pearl Harbor moment to realize what's happening so we all come together. | ||
But hopefully, slowly but surely, through conversations like this, people realize what's actually happening. | ||
You need one of those moments, like, every generation. | ||
Like, that's how you get contact with the truth. | ||
And it's, like, it's painful, but, like, the light's on the other side. | ||
unidentified
|
Thank you. | |
Thank you very much. |