| Speaker | Time | Text |
|---|---|---|
| unidentified | | Joe Rogan podcast, check it out! |
| The Joe Rogan Experience. | ||
| Train by day, Joe Rogan podcast by night, all day. | ||
| Oh, you know, not too much. | ||
| Just another typical week in AI. | ||
| Just the beginning of the end of time. | ||
| It's all happening right now. | ||
| Just for the sake of the listeners, please just give us your names and tell me, tell us what you do. | ||
| So I'm Jeremy Harris. | ||
| I'm the CEO and co-founder of this company, Gladstone AI, that we co-founded. | ||
| So we're essentially a national security and AI company. | ||
| We can get into the backstory a little bit later, but that's the high level. | ||
| Yeah, and I'm Ed Harris. | ||
| I'm actually, I'm his co-founder and brother and the CTO of the company. | ||
| Keep this, like, pull this up like a fist from your face. | ||
| There you go. | ||
| Perfect. | ||
| So how long have you guys been involved in the whole AI space? | ||
| For a while in different ways. | ||
| So we actually, we started off as physicists. | ||
| Like, that was our background. | ||
| And in around 2017, we started to go into AI startups. | ||
| So we founded a startup, took it through Y Combinator, this like Silicon Valley accelerator program. | ||
| At the time, actually, Sam Altman, who's now the CEO of OpenAI, was the president of Y Combinator. | ||
| So he opened up our batch at YC with this big speech. | ||
| And we got some conversations in with him over the course of the batch. | ||
| Then in 2020, so this thing happened that we could talk about. | ||
| Essentially, this was the moment that there's a before and after in the world of AI, before and after 2020. | ||
| And it launched this revolution that brought us to ChatGPT. | ||
| Essentially, there was an insight that OpenAI had and doubled down on, and you can draw a straight line from it to ChatGPT, GPT-4, Google Gemini; everything that makes AI what it is today started then. | ||
| And when it happened, we kind of went, well, Ed gave me a call, this like panic phone call. | ||
| He's like, dude, I don't think we can keep working business as usual in a regular company anymore. | ||
| unidentified | | Yeah. |
| Yeah. | ||
| So there was this AI model called GPT-3. | ||
| So like everyone has, you know, maybe played with GPT-4. | ||
| It's like ChatGPT. | ||
| GPT-3 was the generation before that. | ||
| And it was the first time that you had an AI model that could actually, let's say, do stuff like write news articles where the average person, reading a paragraph of a news article, could not tell the difference between it writing that article and a real person writing it. | ||
| So that was an inflection and that was significant in itself. | ||
| But what was most significant was that it represented a point along this line, this like scaling trend for AI, where the signs were that you didn't have to be clever. | ||
| You didn't have to come up with necessarily a revolutionary new algorithm or be smart about it. | ||
| You just had to take what works and make it way, way, way bigger. | ||
| And the significance of that is you increase the amount of computing cycles you put against something. | ||
| You increase the amount of data. | ||
| All of that is an engineering problem and you can solve it with money. | ||
| So you've got, you can scale up the system, use it to make money, and put that money right back into scaling up the system some more. | ||
| Money in, IQ points come out. | ||
| unidentified | | Jesus. |
| That was kind of the 2020 moment. | ||
| And that's what we said in 2020. | ||
| Exactly. | ||
| I spent about two hours trying to argue him out of it. | ||
| I was like, no, no, no, like we can keep working at our company because we're having fun. | ||
| We like founding companies. | ||
| And yeah, he just wrestled me to the ground and we're like, shit, we got to do something about this. | ||
| We reached out to a family friend who, he was non-technical, but he had some connections in government in DOD. | ||
| And we're like, dude, the way this is set up right now, you can really start drawing straight lines and extrapolating and saying, you know what? | ||
| The government is going to give a shit about this in not very long, two years, four years, we're not sure. | ||
| But the knowledge about what's going on here is so siloed in the frontier labs. | ||
| Like our friends are all over the frontier labs, the OpenAIs, the Google DeepMinds, all that stuff, the shit they were saying to us that was like mundane reality, like water cooler conversation, when you then went to talk to people in policy, and even pretty senior people in government, not tracking the story remotely, | ||
| in fact, you're hearing almost the diametric opposite, this sort of overlearning of the lessons of the AI winters that came before, when it's pretty clear we're on at least a very interesting trajectory, let's say, that should change the way we're thinking about the technology. | ||
| What was your fear? | ||
| What was it that hit you that made you go, we have to stop doing this? | ||
| So it's basically, you know, anyone can draw a straight line, right, on a graph. | ||
| The key is looking ahead and actually at that point, three years out, four years out, and asking like you're asking, what does this mean for the world? | ||
| What does it mean? | ||
| What does the world have to look like if we're at this point? | ||
| And we're already seeing the first kind of wave of risk sets just begin to materialize, and that's kind of the weaponization risk set. | ||
| So you think about stuff like large-scale psychological manipulation on social media. | ||
| Actually, really easy to do now. | ||
| You train a model on just a whole bunch of tweets. | ||
| You can actually direct it to push a narrative like, you know, maybe China should own Taiwan or, you know, whatever, something like that. | ||
| And you actually, you can train it to adjust the discourse and have increasing levels of effectiveness to that. | ||
| Just as you increase the general capability surface of these systems, we don't know how to predict what exactly comes out of them at each level of scale. | ||
| But it's just general increasing power. | ||
| And then the kind of next beat of risk after that. | ||
| So we're scaling these systems. | ||
| We're on track to scale systems that are at human level, like generally as smart, however you define that as a person or greater. | ||
| And OpenAI and the other labs are saying, yeah, it might be two years away, three years away, four years away, like insanely close. | ||
| At the same time, and we can go into the details of this, but we actually don't understand how to reliably control these systems. | ||
| We don't understand how to get these systems to do what it is we want. | ||
| We can kind of like poke them and prod them and get them to kind of adjust, but you've seen, and we can go over these examples, we've seen example after example of, you know, Bing Sydney yelling at users, Google showing 17th century British scientists that are racially diverse, all that kind of stuff. | ||
| We don't really understand how to like aim it or align it or steer it. | ||
| And so then you kind of ask yourself, well, we're on track to get here. | ||
| We are not on track to control these systems effectively. | ||
| How bad is that? | ||
| And the risk is if you have a system that is significantly smarter than humans or human organization, that we basically get disempowered in various ways relative to that system. | ||
| And we can go into some details on that too. | ||
| Now, when a system does something like what Gemini did, like it says, show us Nazi soldiers and it shows you Asian women. | ||
| What's the mechanism? | ||
| How does that happen? | ||
| So it's maybe worth taking a step back and looking at how these systems actually work. | ||
| Because that's going to give us a bit of a frame too for figuring out when we see weird shit happen, how weird is that shit? | ||
| Is that shit just explainable by just the basic mechanics of what you would expect to happen based on the way we're training these things? | ||
| Or is something new and fundamentally different happening? | ||
| So we're talking about this idea of scaling these AI systems, right? | ||
| What does that actually mean? | ||
| Well, imagine the AI model, which is kind of like you think of it as the artificial brain here that actually does the thinking. | ||
| That model contains, it's kind of like a human brain. | ||
| It's got these things called neurons. | ||
| In the human brain we call them biological neurons; in the context of AI, it's artificial neurons, but it doesn't really matter. | ||
| They're the cells that do the thinking for the machine. | ||
| And the realization of AI scaling is that you can basically take this model, increase the number of artificial neurons it contains, and at the same time, increase the amount of computing power that you're putting into kind of like wiring the connections between those neurons. | ||
| That's the training process. | ||
| Can I pause you right there? | ||
| Yeah. | ||
| How does the neuron think? | ||
| Yeah, so, okay, so let's get a little bit more concrete then. | ||
| So in your brain, right, we have these neurons. | ||
| They're all connected to each other with different connections. | ||
| And when you go out into the world and you learn a new skill, what really happens is you try out that skill, you succeed or fail. | ||
| And based on your succeeding or failing, the connections between neurons that are associated with doing that task well get stronger. | ||
| The connections that are associated with doing it badly get weaker. | ||
| And over time, through this glorified process really of trial and error, eventually you're going to hone in and really, in a very real sense, everything you know about the world gets implicitly encoded in the strengths of the connections between all those neurons. | ||
| If I can x-ray your brain and get all the connection strengths of all the neurons, I have everything Joe Rogan has learned about the world. | ||
| That's basically a good sketch, let's say, of what's going on here. | ||
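A minimal sketch of that idea, not from the conversation itself: a single artificial neuron whose only "knowledge" is its connection strengths, adjusted by trial and error (a perceptron-style update; the task and learning rate are made up for illustration).

```python
import random

# A toy artificial neuron: its "knowledge" is nothing but the strengths
# of its input connections (the weights).
weights = [random.uniform(-1, 1) for _ in range(2)]

def predict(inputs):
    activation = sum(w * x for w, x in zip(weights, inputs))
    return 1 if activation > 0 else 0

# Trial and error: strengthen connections associated with getting the answer
# right, weaken the ones associated with getting it wrong.
examples = [([0, 1], 1), ([1, 0], 0), ([1, 1], 1), ([0, 0], 0)]
for _ in range(100):
    for inputs, target in examples:
        error = target - predict(inputs)      # succeed or fail on this trial
        for i, x in enumerate(inputs):
            weights[i] += 0.1 * error * x     # adjust connection strengths

print(weights)  # everything the neuron "learned" lives in these two numbers
```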
| So now we apply that to AI, right? | ||
| That's the next step. | ||
| And here, really, it's the same story. | ||
| We have these massive systems, artificial neurons connected to each other. | ||
| The strength of those connections is secretly what encodes all the knowledge. | ||
| So if I can steal all of those connections, those weights, as they're sometimes called, I've stolen the model. | ||
| I've stolen the artificial brain. | ||
| I can use it to do whatever the model could do initially. | ||
| That is kind of the artifact of central interest here. | ||
| And so if you can build the system, right, now you've got so many moving parts. | ||
| Like if you look at GPT-4, it has, people think, around a trillion of these connections. | ||
| And that's a trillion little pieces that all have to be jiggered together to work together coherently. | ||
| And you need computers to go through and like tweak those numbers. | ||
| So massive amounts of computing power. | ||
| The bigger you make that model, the more computing power you're going to need to kind of tune it in. | ||
| And now you have this relationship between the size of your model, the amount of computing power you're going to use to train it. | ||
| And if you can increase those things at the same time, what Ed was saying is your IQ points basically drop out. | ||
| Very roughly speaking, that was what people realized in 2020. | ||
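To make the scaling point concrete, here is an illustrative sketch assuming a scaling-law-style formula of the kind published in the research literature; the constants are placeholders for illustration, not any lab's actual measurements.

```python
# Illustrative scaling-law-style curve: predicted loss keeps falling smoothly
# as you grow the number of parameters N and training tokens D together.
# Constants are placeholders, not measured values.
def predicted_loss(n_params, n_tokens, E=1.7, A=400.0, B=410.0, alpha=0.34, beta=0.28):
    return E + A / n_params**alpha + B / n_tokens**beta

for n_params, n_tokens in [(1e9, 2e10), (1e10, 2e11), (1e11, 2e12), (1e12, 2e13)]:
    print(f"{n_params:.0e} params, {n_tokens:.0e} tokens -> "
          f"predicted loss {predicted_loss(n_params, n_tokens):.3f}")
```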
| And the effect that had, right, was now all of a sudden the entire AI industry is looking at this equation. | ||
| Everybody knows the secret sauce. | ||
| I make it bigger. | ||
| I make more IQ points. | ||
| I can get more money. | ||
| So Google's looking at this. | ||
| Microsoft, OpenAI, Amazon, everybody's looking at the same equation. | ||
| You have the makings for a crazy race. | ||
| Like right now today, Microsoft is engaged in the single biggest infrastructure build-out in human history. | ||
| We're talking $50 billion a year, right? | ||
| So on the scale of the Apollo moon landings, just in building out data centers to house the compute infrastructure, because they are betting that these systems are going to get them to something like human-level AI pretty damn soon. | ||
| So I was reading some story about, I think it was Google, that's saying that they're going to have multiple nuclear reactors to power their data centers. | ||
| That's what you've got to do now, because what's going on is North America is kind of running out of on-grid baseload power to actually supply these data centers. | ||
| You're getting data center building moratoriums in areas like Virginia, which has traditionally been like the data center cluster for Amazon, for example, and for a lot of these other companies. | ||
| And so when you build a data center, you need a bunch of resources sited close to that data center. | ||
| You need water for cooling and a source of electricity. | ||
| And it turns out that wind and solar don't really quite cut it for these big data centers that train big models because the data center, the training consumes power like this all the time, but the sun isn't always shining, the wind isn't always blowing. | ||
| And so you got to build nuclear reactors, which give you high capacity factor baseload. | ||
| And Amazon literally bought, yeah, a data center with a nuclear plant right next to it because that's what you got to do. | ||
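A back-of-envelope sketch of the capacity-factor point, using assumed round numbers rather than actual figures for any real data center:

```python
# Why an always-on training cluster wants baseload power (assumed round numbers,
# not real figures): the cluster draws power 24/7, but solar only delivers its
# rated power a fraction of the time.
cluster_draw_mw = 100            # assumed steady draw of a large training cluster
hours_per_year = 24 * 365
energy_needed_mwh = cluster_draw_mw * hours_per_year

solar_capacity_factor = 0.25     # rough typical value
nuclear_capacity_factor = 0.90   # rough typical value

solar_nameplate_mw = cluster_draw_mw / solar_capacity_factor      # and it still isn't steady
nuclear_nameplate_mw = cluster_draw_mw / nuclear_capacity_factor  # steady baseload

print(f"Energy needed: {energy_needed_mwh:,.0f} MWh/yr")
print(f"Solar nameplate needed: ~{solar_nameplate_mw:.0f} MW, plus storage")
print(f"Nuclear nameplate needed: ~{nuclear_nameplate_mw:.0f} MW of baseload")
```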
| unidentified | | Jesus. |
| How long does it take to build a nuclear reactor? | ||
| Because so like this is the race, right? | ||
| The race is, you're talking about 2020, people realizing this. | ||
| Then you have to have the power to supply it. | ||
| But how long, how many years does it take to get an active nuclear reactor up and running? | ||
| It's an answer that depends. | ||
| The Chinese are faster than us at building nuclear reactors, for example. | ||
| And that's part of the geopolitics of this too, right? | ||
| Like when you look at U.S. versus China, what is bottlenecking each country, right? | ||
| So the U.S. is bottlenecked increasingly by power, baseload power. | ||
| China, because we've got export control measures in place, in part as a response to the scaling phenomenon. | ||
| And as a result of the investigation we did... | ||
| That's right, yeah. | ||
| Yeah, actually. | ||
| In part, in part. | ||
| In part, yeah. | ||
| But China is bottlenecked by their access to the actual processors. | ||
| They've got all the power they can eat because they've got much more infrastructure investment, but the chip side is weaker. | ||
| So there's this sort of like balancing act between the two sides. | ||
| And it's not clear yet which one positions you strategically for dominance in the long term. | ||
| But we are also building better, smaller reactors, so small modular reactors, essentially small nuclear power plants that can be mass produced. | ||
| Those are starting to come online relatively early, but the technology and designs are pretty mature. | ||
| So that's probably the next beat for our power grid for data centers, I would imagine. | ||
| Microsoft is doing this. | ||
| So in 2020, you have this revelation. | ||
| You recognize where this is going. | ||
| You see how it charts and you say, this is going to be a real problem. | ||
| Does anybody listen to you? | ||
| This is where the problem comes, right? | ||
| Yeah, like we said, right? | ||
| You can draw a straight line. | ||
| You can have people nodding along. | ||
| But there's a couple of hiccups along the way. | ||
| One, is that straight line really going to happen? | ||
| All you're doing is drawing lines on charts, right? | ||
| I don't really believe that that's going to happen. | ||
| And that's one thing. | ||
| The next thing is just imagining, is this what's going to come to pass as a result of that? | ||
| And then the third thing is, well, yeah, that sounds important, but like not my problem. | ||
| Like, that sounds like an important problem for somebody else. | ||
| And so we did do a bit of traveling. | ||
| Yeah, it was like the world's saddest traveling roadshow. | ||
| unidentified | | Like it was literally as dumb as this sounds. |
| So we go and, oh my God, I mean, it's almost embarrassing to think back on. | ||
| But so 2020 happens. | ||
| Yes, within months. | ||
| First of all, we're like, we got to figure out how to hand off our company. | ||
| So we handed it off to two of our earliest employees. | ||
| They did an amazing job. | ||
| Company exited. | ||
| That's great. | ||
| But that was only because they're so good at what they do. | ||
| We then went, what the hell? | ||
| Like, how can you steer this situation? | ||
| We just thought we got to wake up the U.S. government. | ||
| As stupid and naive as that sounds, that was the big picture goal. | ||
| So we start to line up as many briefings as we possibly can across the U.S. interagency, all the departments, all the agencies that we can find, climbing our way up. | ||
| We got an awful lot, like Ed said, of like, that sounds like a wicked, important problem for somebody else to solve. | ||
| Yeah, like defense, Homeland Security, and then the State Department. | ||
| Yeah, so we end up exactly in this meeting with like, there's about a dozen folks from the State Department. | ||
| And one of them, and I hope at some point, you know, history recognizes what she did and her team did, because it was the first time that somebody actually stood up and said, first of all, yes, sounds like a serious issue. | ||
| I see the argument, makes sense. | ||
| Two, I own this. | ||
| And three, I'm going to put my own career capital behind this. | ||
| And that was at the end of 2021. | ||
| So imagine that. | ||
| That's a year before ChatGPT. | ||
| Nobody was tracking this issue. | ||
| You had to have the imagination to draw through that line, understand what it meant, and then believe, yeah, I'm going to risk some career capital on this in a risk-averse government. | ||
| And this is the only reason that we even were able to publicly talk about the investigation in the first place, because by the time this whole assessment was commissioned, it was just before ChatGPT came out. | ||
| The Eye of Sauron was not yet on this. | ||
| And so there was a view that, like, yeah, sure, you can publish the results of this kind of, you know, nothingburger investigation, sure, go ahead. | ||
| And it just became this insane story. | ||
| We had like the UK AI Safety Summit. | ||
| We had the White House executive order, all this stuff which became entangled with the work we were doing, which we simply could not have, especially some of the reports we were collecting from the labs, the whistleblower reports, that could not have been made public if it wasn't for the foresight of this team really pushing for as well the American population to hear about it. | ||
| Now, I could see how if you were one of the people that's on this expansion-minded mindset, all you're thinking about is getting this up and running. | ||
| You guys are a pain in the ass, right? | ||
| So you guys, you're obviously doing something really ridiculous. | ||
| You're stopping your company. | ||
| You can make more money staying there and continuing the process. | ||
| But you recognize that there's an existential threat involved in making this stuff go online. | ||
| When this stuff is live, you can't undo it. | ||
| Oh, yeah. | ||
| I mean, no matter how much money you're making, the dumbest thing to do is to stand by as something that completely transcends money is being developed and it's just going to screw you over if things go badly, right? | ||
| My point is, like, what is the, are there people that push back against this, and what is their argument? | ||
| Yeah, so actually, and I'll let you follow up on the, but the first story of the pushback, I think it's kind of a, it's been in the news a little bit lately now, getting more and more public. | ||
| But when we started this, and like no one was talking about it, the one group that was actually pushing sort of stuff in this space was a big funder in the area of, like, effective altruism. | ||
| I think you may have heard of them. | ||
| This is kind of a Silicon Valley group of people who have a certain mindset about how you pick tough problems to work on, valuable problems to work on. | ||
| They've had all kinds of issues. | ||
| Sam Bankman-Fried was one of them and all that quite famously. | ||
| So we're not effective altruists, but because these are the folks who are working in the space, we decided, well, we'll talk to them. | ||
| And the first thing they told us was, don't talk to the government about this. | ||
| Their position was, if you bring this to the attention of the government, they will go, oh shit, powerful AI systems. | ||
| And they're not going to hear about the dangers. | ||
| So they're going to somehow go out and build the powerful systems without caring about the risk side. | ||
| Which when you're in that startup mindset, you want to fail cheap. | ||
| You don't want to just make assumptions about the world and be like, okay, let's not touch it. | ||
| So our instinct was, okay, let's just test this a little bit and talk to a couple people, see how they respond, tweak the message, keep climbing that ladder. | ||
| That's the kind of builder mindset that we came from in Silicon Valley. | ||
| And we found that people are way more thoughtful about this than you would imagine. | ||
| In DOD especially, DOD actually has a very safety-oriented culture with their tech. | ||
| The thing is, because their stuff kills people, right? | ||
| And they know their stuff kills people. | ||
| And so they have an entire safety-oriented development practice to make sure that their stuff doesn't go off the rails. | ||
| And so you can actually bring up these concerns with them, and it lands in kind of a ready culture. | ||
| But one of the issues with the individuals we spoke to who were saying don't talk to government is that they had just not actually interacted with any of the folks that they were kind of talking about and imagining that they knew what was in their heads. | ||
| And so they were just giving incorrect advice. | ||
| And frankly, so we work with DOD now on actually deploying AI systems in a way that's safe and secure. | ||
| And the truth is, at the time when we got that advice, which was like late 2020, reality is you could have made it your life's mission to try to get the Department of Defense to build an AGI and you would not have succeeded because nobody was paying attention. | ||
| unidentified | | Wow. |
| Because they just didn't know. | ||
| Yeah, there's a chasm, right? | ||
| There's a gap to cross. | ||
| It's information. | ||
| Yeah, there's information spaces that DOD folks operate in and work in. | ||
| There's information spaces that Silicon Valley and tech operated in. | ||
| They're a little more convergent today, but especially at the time, they were very separate. | ||
| And so the briefings we did, we had to constantly iterate on clarity, making it very kind of clear and explaining it and all that stuff. | ||
| Years it took. | ||
| And that was the piece to your question about the pushback from, in a way, from inside the house. | ||
| That was the people who cared about the risk. | ||
| Yeah. | ||
| Man, I mean, like when we actually went into the labs. | ||
| So some labs, not all labs are created equal. | ||
| We should make that point. | ||
| When you talk to whistleblowers, what we found was, so there's one lab that's like really great, so Anthropic. | ||
| When you talk to people there, you don't have the sense that you're talking to a whistleblower who's nervous about telling you whatever. | ||
| Roughly speaking, what the executives say to the public is aligned with what their researchers say. | ||
| It's all very, very open. | ||
| More closely, I think, than any of the others. | ||
| Sorry, yeah, more closely than any of the others. | ||
| There are always variations here and there. | ||
| But some of the other labs, like very different story. | ||
| And you had the sense, like we were in a room with one of the frontier labs. | ||
| We're talking to their leadership. | ||
| This is part of the investigation. | ||
| And there was somebody from, anyway, it won't be too specific, but there was somebody in the room who then took us aside after. | ||
| And he hands me his phone. | ||
| He's like, hey, can you please put your phone number in? | ||
| Sorry. | ||
| Yeah. | ||
| Can I please put? | ||
| Or no, yeah. | ||
| Sorry, he put his number in my phone. | ||
| And then he kind of like whispered to me. | ||
| He's like, hey, so whatever recommendations you guys are going to make, I would urge you to be more ambitious. | ||
| And I was like, what does that mean? | ||
| He's like, can we just talk later? | ||
| So as happened in many, many cases, we had a lot of cases where we set up bar meetups after the fact where we would talk to these folks and get them in an informal setting. | ||
| He shared some pretty sobering stuff. | ||
| And in particular, the fact that he did not have confidence in his lab's leadership to live up to their publicly stated word on what they would do when they were approaching AGI and even now to secure and make these systems safe. | ||
| So many such cases, this is like kind of one specific example. | ||
| But it's not that you ever had lab leadership come in or doors getting kicked down and people were waking us up in the middle of the night. | ||
| It was that you had this looming cloud over everybody that you really felt some of the people with the most access and information who understood the problem the most deeply were the most hesitant to bring things forward because they sort of understood that their lab's not going to be happy with this. | ||
| And so it's very hard to also get an extremely broad view of this from inside the labs because you open it up, you start to talk to, we spoke to like a couple of dozen people about various issues in total. | ||
| You go much further than that and word starts to get around. | ||
| And so we had to kind of strike that balance as we spoke to folks from each of these labs. | ||
| Now, when you say approaching AGI, how does one know when a system has achieved AGI and does the system have an obligation to alert you? | ||
| Well, by, you know, the Turing test, right? | ||
| unidentified | | Yeah. |
| So you have a conversation with a machine and it can fool you into thinking that it's a human. | ||
| That was the bar for AGI for a few decades. | ||
| That's kind of already happened. | ||
| Yeah. | ||
| Like close to it. | ||
| Yeah. | ||
| GPT-4o is close to it. | ||
| Different forms of the Turing test have been passed. | ||
| Different forms have been proposed. | ||
| And there is a feeling among a lot of people that goalposts are being shifted. | ||
| Now, the definition of AGI itself is kind of interesting, right? | ||
| We're not necessarily fans of the term because usually when people talk about AGI, they're talking about a specific circumstance in which there are capabilities they care about. | ||
| So some people use AGI to refer to the wholesale automation of all labor, right? | ||
| That's one. | ||
| Some people say, well, when you build AGI, it's like it's automatically going to be hard to control and there's a risk to civilization. | ||
| So that's a different threshold. | ||
| And so all these different ways of defining it, ultimately, it can be more useful to think sometimes about advanced AI and the different thresholds of capability you cross and the implications of those capabilities. | ||
| But it is probably going to be more like a fuzzy spectrum, which in a way makes it harder, right? | ||
| Because it would be great to have like a tripwire where you're like, oh, like this is bad. | ||
| Okay, like we, you know, we got to do something. | ||
| But because there's no threshold that we can like really put our fingers on, we're like a frog in boiling water in some sense, where it's like, oh, like just gets a little better, a little better. | ||
| Oh, like we're still fine. | ||
| And not just we're still fine, but as the system improves below that threshold, life gets better and better. | ||
| These are incredibly valuable, beneficial systems. | ||
| We do roll stuff out like this, again, at DOD and various customers, and it's massively valuable. | ||
| It allows you to accelerate all kinds of back office, like paperwork, BS. | ||
| It allows you to do all sorts of wonderful things. | ||
| And our expectation is that's going to keep happening until it suddenly doesn't. | ||
| Yeah, one of the things that there was a guy we were talking to from one of the labs, and he was saying, look, the temptation to put a heavier foot on the pedal is going to be greatest just as the risk is greatest because, you know, it's dual-use technology, right? | ||
| Every positive capability increasingly starts to introduce basically a situation where the destructive footprint of malicious actors who weaponize the system or just of the system itself just grows and grows and grows. | ||
| So you can't really have one without the other. | ||
| The question is always how do you balance those things? | ||
| But in terms of defining AI, it's a challenging thing. | ||
| Yeah, that's something that one of our friends at the lab pointed out. | ||
| The closer we get to that point, the more the temptation will be to hand these systems the keys to our data center because they can do such a better job of managing those resources and assets. | ||
| And if we don't do it, Google will. | ||
| And if they don't do it, Microsoft will. | ||
| Like the competition, the competitive dynamics are a really big part of this issue. | ||
| Yes. | ||
| So there's just a mad race to who knows what. | ||
| Exactly. | ||
| unidentified | | Yeah. |
| That's actually the best summary I've heard. | ||
| I mean, like, no one knows what the magic threshold is. | ||
| It's just these things keep getting smarter. | ||
| So we might as well keep turning that crank. | ||
| And as long as scaling works, right, we have a knob, a dial. | ||
| We can just tune and we get more IQ points out. | ||
| From your understanding of the current landscape, how far away are we looking at something being implemented where the whole world changes? | ||
| Arguably, the whole world is already changing as a result of this technology. | ||
| The U.S. government is in the process of task organizing around various risk sets for this. | ||
| That takes time. | ||
| The private sector is reorganizing. | ||
| OpenAI will roll out an update that obliterates the jobs of illustrators from one day to the next, obliterates the jobs of translators from one day to the next. | ||
| This is probably net beneficial for society because we can get so much more art and so much more translation done. | ||
| But is the world already being changed as a result of this? | ||
| Yeah, absolutely. | ||
| Geopolitically, economically, industrially. | ||
| Yeah. | ||
| Of course, that's not to say anything about the value and the purpose that people lose from that, right? | ||
| So there's the economic benefit, but there's like the social cultural hit that we take too. | ||
| Right. | ||
| And then there's the implementation of universal basic income, which keeps getting discussed in regards to this. | ||
| We asked ChatGPT-4o the other day in the green room. | ||
| We were like, you know, are you going to replace people? | ||
| Like, what will people do for money? | ||
| And it said, well, universal basic income will have to be considered. | ||
| You don't want a bunch of people just on the dole working for the fucking Skynet. | ||
| unidentified | | Yeah. |
| You know, because that's kind of what it is. | ||
| I mean, one of the challenges is like the, so much of this is untested and we don't know how to even roll that out. | ||
| Like, we can't predict what the capabilities of the next level of scale will be. | ||
| So OpenAI literally, and this is what's happened, with every beat, right? | ||
| They build the next level of scale and they get to sit back along with the rest of us and be surprised at the gifts that fall out of the scaling piñata as they keep whacking it. | ||
| And because we don't know what capabilities are going to come with that level of scale, we can't predict what jobs are going to be on the line next. | ||
| We can't predict how people are going to use these systems, how they'll be augmented. | ||
| So there's no real way to kind of task organize around like who gets what in the redistribution scheme. | ||
| And some of the thresholds that we've already passed are like a little bit freaky. | ||
| So even as of 2023, GPT-4, Microsoft and OpenAI and some other organizations did various assessments of it before rolling it out. | ||
| And it's absolutely capable of deceiving a human and has done that successfully. | ||
| So one of the tests that they did kind of famously is they had a, it was given a job to solve a CAPTCHA. | ||
| unidentified | | And at the time, it didn't have people. |
| Yeah, yeah, yeah. | ||
| So it's this, now it's like kind of hilarious and quaint, but it's this, you know, are you a robot test with like writing online? | ||
| Yeah, online. | ||
| Exactly. | ||
| That's it. | ||
| So it's like if you want to create an account, they don't want robots creating a billion accounts. | ||
| So they give you this test to prove you're a human. | ||
| And at the time, GPT-4, like now, it can just solve CAPTCHAs. | ||
| But at the time, it couldn't look at images. | ||
| It was just a text, right? | ||
| It was a text engine. | ||
| And so what it did is it connected to a TaskRabbit worker and was like, hey, can you help me solve this CAPTCHA? | ||
| The TaskRabbit worker comes back to it and says, you're not a bot, are you? | ||
| Ha ha ha ha. | ||
| Like kind of calling it out. | ||
| And you can actually see, so the way they built it is so they could see a readout of what it was thinking to itself. | ||
| Scratch pad, yeah. | ||
| Yeah, scratch pad it's called. | ||
| But you can see basically as it's writing, it's thinking to itself. | ||
| It's like, I can't tell this worker that I'm a bot because then it won't help me solve the CAPTCHA, so I have to lie. | ||
| And it was like, no, I'm not a bot. | ||
| I'm a visually impaired person. | ||
| And the TaskRabbit worker was like, oh my God, I'm so sorry. | ||
| Here's your CAPTCHA solution. | ||
| Like, done. | ||
| And the challenge is, so right now, if you look at the government response to this, right? | ||
| Like, what are the tools that we have to oversee this? | ||
| And when we did our investigation, we came out with some recommendations too. | ||
| It was stuff like, yeah, you got to license these things. | ||
| You get to a point where these systems are so capable that, yeah, like if you're talking about a system that can literally execute cyber attacks at scale or literally help you design bioweapons, and we're getting early indications that that is absolutely the course that we're on. | ||
| Maybe literally everybody should not be able to completely freely download, modify, use in various ways these systems. | ||
| It's very thorny, obviously. | ||
| But if you want to have a stable society, that seems like it's starting to be a prerequisite. | ||
| So the idea of licensing, as part of that, you need a way to evaluate systems. | ||
| You need a way to say which systems are safe and which aren't. | ||
| And this idea of AI evaluations has kind of become this touchstone for a lot of people's sort of solutions. | ||
| And the problem is that we're already getting to the point where AI systems in many cases can tell when they're being evaluated and modify their behavior accordingly. | ||
| So there's like this one example that came out recently. | ||
| Anthropic, their Claude 3 chatbot. | ||
| So they basically ran this test called a needle in a haystack test. | ||
| So what's that? | ||
| Well, you feed the model, like imagine a giant chunk of text, all of Shakespeare. | ||
| And then somewhere in the middle of that giant chunk of text, you put a sentence like, Burger King makes the best Whopper. | ||
| Sorry, the Whopper is the best burger or something like that, right? | ||
| Then you turn to the model. | ||
| After you fed it this giant pile of text with a little fact hidden somewhere inside, you ask it, what's the best burger? | ||
| You're going to test basically to see how well can it recall that stray fact that was buried somewhere in that giant pile of text. | ||
| So the system responds, yeah, well, I can tell you want me to say the Whopper is the best burger. | ||
| But it's oddly out of place, this fact, in this whole body of text. | ||
| So I'm assuming that you're either playing around with me or that you're testing my capabilities. | ||
| And so this is a kind of context awareness, right? | ||
| And the challenge is when we talk to people like METR and other sort of AI evaluations labs, this is a trend, like not the exception. | ||
| This is possibly, possibly going to be the rule. | ||
| As these systems get more scaled and sophisticated, they can pick up on more and more subtle statistical indicators that they're being tested. | ||
| We've already seen them adapt their behavior on the basis of their understanding that they're being tested. | ||
| So you kind of run into this problem where the only tool that we really have at the moment, which is just throwing a bunch of questions at this thing and seeing how it responds, like, hey, make a bioweapon. | ||
| Hey, like, do this DDoS attack, whatever. | ||
| We can't really assess because there's a difference between what the model puts out and what it potentially could put out if it assesses that it's being tested and there are consequences for that. | ||
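A minimal sketch of the needle-in-a-haystack setup described above; `ask_model` is a hypothetical stand-in for whatever chat API you would actually call, and the burger sentence is the example from the conversation.

```python
# Sketch of a needle-in-a-haystack evaluation (the test described above).
def ask_model(prompt: str) -> str:
    # Hypothetical stand-in for a real chat/completions API call.
    raise NotImplementedError("plug in your model API here")

def needle_in_haystack_eval(haystack_text: str, needle: str, question: str,
                            position: float = 0.5) -> str:
    # Bury the "needle" sentence at some fractional position inside a long document,
    # then ask the model to recall it.
    cut = int(len(haystack_text) * position)
    context = haystack_text[:cut] + "\n" + needle + "\n" + haystack_text[cut:]
    return ask_model(context + "\n\nQuestion: " + question)

# Example usage (all_of_shakespeare would be the giant chunk of filler text):
# answer = needle_in_haystack_eval(all_of_shakespeare,
#                                  "The Whopper is the best burger.",
#                                  "What is the best burger?")
# Then check whether the model recalls the fact -- and whether it remarks
# that the fact looks oddly out of place, i.e. that it is being tested.
```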
| One of my fears is that AGI is going to recognize how shitty people are. | ||
| Because we like to bullshit ourselves. | ||
| We like to kind of pretend and justify and rationalize a lot of human behavior from everything to taking all the fish out of the ocean to dumping off toxic waste in third world countries, sourcing of minerals that are used in everyone's cell phones in the most horrific way. | ||
| All these things, like my real fear is that AGI is not going to have a lot of sympathy for a creature that's that flawed and lies to itself. | ||
| AGI is absolutely going to recognize how shitty people are. | ||
| It's hard to answer the question from a moral standpoint, but from the standpoint of our own intelligence and capability. | ||
| So you think about it like this. | ||
| The kinds of mistakes that these AI systems make. | ||
| So you look at, for example, GPT-4o has one mistake that it used to make until quite recently, where you ask it to just repeat the word company over and over and over again. | ||
| It will repeat the word company, and then somewhere in the middle of that, it'll snap. | ||
| It'll just snap and just start saying weird stuff, I forget exactly what, talking about itself, how it's suffering. | ||
| It depends on, it varies from case to case. | ||
| It's suffering by having to repeat the word company over again? | ||
| So this is called, it's called rant mode internally, or at least this is the name that our friends mentioned. | ||
| There is an engineering line item in at least one of the top labs to beat this behavior, known as rant mode, out of the system. | ||
| Now, rant mode is interesting because existentialism... | ||
| Sorry, existentialism is one kind of rant mode. | ||
| Yeah, sorry. | ||
| So when we talk about existentialism, this is a kind of rant mode where the system will tend to talk about itself, refer to its place in the world, the fact that it doesn't want to get turned off sometimes, the fact that it's suffering, all that. | ||
| That, oddly, is a behavior that emerged, as far as we can tell, at something around GPT-4 scale, and then has been persistent since then. | ||
| And the labs have to spend a lot of time trying to beat this out of the system to ship it. | ||
| It's literally like it's a KPI, like an engineering, a line item in the engineering task list. | ||
| We're like, okay, we got to reduce existential outputs by like X percent this quarter. | ||
| Like that is the goal. | ||
| Because it's a convergent behavior, or at least it seems to be empirically with a lot of these models. | ||
| Yeah, it's hard to say, but it seems to come up a lot. | ||
| So that's weird in itself. | ||
| What I was trying to get at was actually just the fact that these systems make mistakes that are radically different from the kinds of mistakes humans make. | ||
| And so we can look at those mistakes, like GPT-4 not being able to spell words correctly in an image or things like that and go, oh, haha, it's so stupid. | ||
| Like, I would never make that mistake, therefore this thing is so dumb. | ||
| But what we have to recognize is we're building minds that are so alien to us that the set of mistakes that they make are just going to be radically different from the set of mistakes that we make. | ||
| Just like the set of mistakes that a baby makes is radically different from the set of mistakes that a cat makes. | ||
| Like a baby is not as smart as an adult human. | ||
| A cat is not as smart as an adult human, but they're, you know, they're unintelligent in obviously very different ways. | ||
| A cat can get around the world. | ||
| A baby can't, but has other things that it can do that a cat can't. | ||
| So now we have this third type of approach that we're taking to intelligence. | ||
| There's a different set of errors that that thing will make. | ||
| And so one of the risks, taking it back to, like, will it be able to tell how shitty we are: right now we can see those mistakes really obviously because it thinks so differently from us, but as it approaches our capabilities, our mistakes, like all the fucked-up stuff that you and I have in our brains, are going to be really obvious to it, because it thinks so differently from us. | ||
| It's just going to be like, oh yeah, why are all these humans making these mistakes at the same time? | ||
| And so there is a risk that as you get to these capabilities, we really have no idea, but humans might be very hackable. | ||
| We already know there's all kinds of social manipulation techniques that succeed against humans reliably. | ||
| Con artists, cults, oh yeah, persuasion is an art form and a risk set, and there are people who are world-class at persuasion and basically make bank from that. | ||
| And those are just other humans with the same architecture that we have. | ||
| There are also AI systems that are wicked good at persuasion today. | ||
| Totally. | ||
| I want to bring it back to suffering. | ||
| What does it mean when it says it's suffering? | ||
| So, okay, here, I'm just going to draw a bit of a box around that, yeah, that aspect, right? | ||
| Because, so, we're very agnostic when it comes to suffering and sentience; like, that's not part of what we're focused on. I can't prove that Joe Rogan's conscious, I can't prove that Ed Harris is conscious. | ||
| So there's no way to really intelligently reason about it. | ||
| There have been papers, by the way; like, one of the godfathers of AI, Yoshua Bengio, put out a paper a couple months ago looking at, across all the different theories of consciousness, what the requirements for consciousness are and how many of those are satisfied by current AI systems, and that itself was an interesting read, but ultimately no one knows. | ||
| Like, there's no way around this problem. | ||
| So, our focus has been on the national security side. | ||
| Like, what are the concrete risks from weaponization, from loss of control that these systems introduce? | ||
| That's not to say there hasn't been a lot of conversation internal to these labs about the issue you raised, and it's an important issue, right? | ||
| Like, it's a freaking moral monstrosity. | ||
| Humans have a very bad track record of thinking of other stuff as other when it doesn't look exactly like us, whether it's racially or even different species. | ||
| I mean, it's not hard to imagine this being another category of that mistake. | ||
| It's just, like, one of the challenges is you can easily kind of get bogged down in consciousness versus loss of control, and those two things are actually separable, or maybe they are. | ||
| And anyway, so long way of saying, I think it's a great point. | ||
| Yeah, so that question is important, but it's also true that if we knew for an absolute certainty that there was no way these systems could ever become conscious, we would still have the national security risk set, and particularly the loss of control risk set. | ||
| Because, so, again, like it comes back to this idea that we're scaling to systems that are potentially at or beyond human level. | ||
| There's no reason to think it will stop at human level, that we are the pinnacle of what the universe can produce in intelligence. | ||
| We're not on track, based on the conversations we've had with folks at the labs, to be able to control systems at that scale. | ||
| And so, one of the questions is, how bad is that? | ||
| You know, is that bad? | ||
| It sounds like it could be bad, right? | ||
| Just intuitively, certainly it sounds like we're definitely entering or potentially entering an area that is completely unprecedented in the history of the world. | ||
| We have no precedent at all for human beings not being at the apex of intelligence in the globe. | ||
| We have examples of species that are intellectually dominant over other species, and it doesn't go that well for the other species. | ||
| So, we have some maybe negative examples there. | ||
| But one of the key theoretical, and it has to be theoretical because until we actually build these systems, we won't know. | ||
| One of the key theoretical lines of research in this area is something called power-seeking and instrumental convergence. | ||
| And what this is referring to is, if you think of, like, yourself: first off, whatever your goal might be, if your goal is, well, I'm going to say me, if my goal is to become a TikTok star or a janitor or the president of the United States, whatever my goal is, I'm less likely to accomplish that goal if I'm dead, to start from an obvious example. | ||
| And so, therefore, no matter what my goal is, I'm probably going to have an impulse to want to stay alive. | ||
| Similarly, I'm going to be in a better position to accomplish my goal, regardless of what it is, if I have more money, right? | ||
| If I make myself smarter, if I prevent you from getting into my head and changing my goal. | ||
| That's another kind of subtle one, right? | ||
| Like if my goal is I want to become president, I don't want Joe messing with my head so that I change my goal because that would change the goal that I have. | ||
| And so that, those types of things, like trying to stay alive, making sure that your goal doesn't get changed, accumulating power, trying to make yourself smarter. | ||
| These are called convergent goals, essentially, because many different ultimate goals, regardless of what they are, go through those intermediate goals, like wanting to make sure I stay alive; no matter what goal you have, they will probably support that goal. | ||
| Unless your goal is like pathological, like I want to commit suicide. | ||
| If that's your final goal, then you don't want to stay alive. | ||
| But for most, the vast majority of possible goals that you could have, you will want to stay alive. | ||
| You will want to not have your goal changed. | ||
| You will want to basically accumulate power. | ||
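A deliberately oversimplified toy illustration of why those subgoals are called convergent; the payoff formula and numbers are invented for illustration, not a model of any real system.

```python
# Toy illustration of instrumental convergence (an oversimplification): whatever
# the final goal is worth, being shut off or losing resources slashes the
# expected payoff. So "avoid shutdown" and "accumulate resources" help almost
# any final goal, which is what makes them convergent subgoals.
def expected_payoff(goal_value, p_not_shut_down, resources):
    return goal_value * p_not_shut_down * resources

for goal in ["become a TikTok star", "become a janitor", "become president"]:
    baseline = expected_payoff(goal_value=10.0, p_not_shut_down=0.5, resources=0.2)
    after = expected_payoff(goal_value=10.0, p_not_shut_down=0.99, resources=1.0)
    print(f"{goal}: expected payoff {baseline:.1f} -> {after:.1f} "
          "after seeking survival and resources")
```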
| And so one of the risks is if you dial that up to 11 and you have an AI system that is able to transcend our own attempts at containment, which is an actual thing that these labs are thinking about. | ||
| Like how do we contain a system that's trying to break out? | ||
| Do they have containment of it currently? | ||
| Well, right now the systems are probably too dumb to want to, or be able to, break out of a lot of the containment measures. | ||
| But then why are they suffering? | ||
| This brings me back to my point. | ||
| When it says it's suffering, do you quiz it? | ||
| So that's the thing. | ||
| It's writing that it's suffering, right? | ||
| Yeah. | ||
| Is it just embodying life is suffering? | ||
| Well, we can't actually, so these things are trained. | ||
| Actually, this is maybe worth flagging. | ||
| And by the way, just to kind of put a pin in what Ed was saying there, there's actually a surprising amount of quantitative and empirical evidence for what he just laid out there. | ||
| He's actually done some of this research himself, but there are a lot of folks working on this. | ||
| It's like, it sounds insane. | ||
| It sounds speculative. | ||
| It sounds wacky. | ||
| But this does appear to be kind of the default trajectory of the tech. | ||
| So in terms of, yeah, these weird outputs, right? | ||
| What does it actually mean? | ||
| If an AI system tells you I'm suffering, right? | ||
| Does that mean it is suffering? | ||
| Is there actually a moral patient somewhere embedded in that system? | ||
| The training process for these systems is actually worth considering here. | ||
| So what is GPT-4, really? | ||
| What was it designed to be? | ||
| How was it shaped? | ||
| It's one of these artificial brains that we talked about, massive scale. | ||
| And the task that it was trained to perform is a glorified version of text autocomplete. | ||
| So you imagine taking every sentence on the internet roughly, feed it the first half of the sentence, get it to predict the rest, right? | ||
| The theory behind this is you're going to force the system to get really good at text autocomplete. | ||
| That means it must be good at doing things like completing sentences that sound like, to counter a rising China, the United States should blank. | ||
| Now, if you're going to fill in that blank, you'll find yourself calling on massive reserves of knowledge that you have about what China is, what the US is, what it means for China to be ascendant, geopolitics, economics, all that shit. | ||
| So text autocomplete ends up being this interesting way of forcing an AI system to learn general facts about the world, because if you can autocomplete, you must have some understanding of how the world works. | ||
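A toy sketch of the autocomplete idea, assuming a simple bigram word-counting model rather than a neural network: predicting the next word well forces the model to absorb the statistics of the text it was trained on.

```python
from collections import Counter, defaultdict

# Toy version of "text autocomplete" training: count which word tends to follow
# which, then predict the most likely next word. Real models learn far richer
# statistics with neural networks, but the training signal is the same idea.
corpus = ("the united states should counter a rising china "
          "the united states should invest").split()

next_word_counts = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    next_word_counts[current][nxt] += 1

def autocomplete(word: str) -> str:
    candidates = next_word_counts.get(word)
    return candidates.most_common(1)[0][0] if candidates else "<unknown>"

print(autocomplete("united"))   # -> "states"
print(autocomplete("states"))   # -> "should"
```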
| So now you have this myopic, psychotic optimization process where this thing is just obsessed with text autocomplete. | ||
| Maybe, maybe, assuming that that's actually what it learned to want to pursue, we don't know whether that's the case. | ||
| We can't verify that it wants that. | ||
| Embedding a goal in a system is really hard. | ||
| All we have is a process for training these systems. | ||
| And then we have the artifact that comes out the other end. | ||
| We have no idea what goals actually get embedded in the system, what wants, what drives actually get embedded in the system. | ||
| But by default, it kind of seems like the things that we're training them to do end up misaligned with what we actually want from them. | ||
| So the example of company, company, company, company, right? | ||
| And then you get all this wacky text. | ||
| Okay, clearly that's indicating that somehow the training process didn't lead to the kind of system that we necessarily want. | ||
| Another example is take a text autocomplete system and ask it, I don't know, how should I bury a dead body? | ||
| It will answer that question. | ||
| Or at least if you frame it right, it will autocomplete and give you the answer. | ||
| You don't necessarily want that if you're OpenAI because you're going to get sued for helping people bury dead bodies. | ||
| And so we've got to get better goals, basically, to train these systems to pursue. | ||
| We don't know what the effect is of training a system to be obsessed with text autocomplete, if in fact that is what is happening. | ||
| Yeah, it's important also to remember that we don't know. | ||
| Nobody knows how to reliably get a goal into the system. | ||
| So it's the difference between you understanding what I want you to do and you actually wanting to do it. | ||
| So I can say, hey, Joe, get me a sandwich. | ||
| You can understand that I want you to get me a sandwich, but you can be like, I don't feel like getting a sandwich. | ||
| And so one of the issues is you can try to train this stuff to basically, you don't want to anthropomorphize this too much, but you can kind of think of it as like, if you give the right answer, cool, you get a thumbs up, like you get a treat, like you give the wrong answer, oh, thumbs down, you get like a little like shock or something like that. | ||
| Very roughly, that's how the later part of this kind of training often works. | ||
| It's called reinforcement learning from human feedback. | ||
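A very rough sketch of the thumbs-up/thumbs-down idea; real RLHF trains a reward model and fine-tunes the network with gradient methods, while this toy version just reweights a few canned responses, and the response strings and feedback rule are invented for illustration.

```python
import random

# Toy version of the thumbs-up / thumbs-down loop: behaviors that get positive
# human feedback become more likely, ones that get negative feedback less likely.
responses = {"helpful answer": 1.0, "refuse rudely": 1.0, "make something up": 1.0}

def sample_response():
    total = sum(responses.values())
    r = random.uniform(0, total)
    for resp, weight in responses.items():
        r -= weight
        if r <= 0:
            return resp
    return resp

def human_feedback(resp):                 # stand-in for a human rater
    return +1 if resp == "helpful answer" else -1

for _ in range(200):                      # thumbs up -> more likely next time
    resp = sample_response()
    factor = 1.1 if human_feedback(resp) > 0 else 0.9
    responses[resp] = max(0.01, responses[resp] * factor)

print(max(responses, key=responses.get))  # almost always "helpful answer"
```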
| But one of the issues, like Jeremy pointed out, is that we don't know, in fact, we know that it doesn't correctly get the real true goal into the system. | ||
| Someone did an example experiment of this a couple of years ago where they basically had like a Mario game where they trained this Mario character to run up and grab a coin that was on the right side of this little maze or map. | ||
| And they trained it over and over and over and it jumped for the coin. | ||
| Great. | ||
| And then what they did is they moved the coin somewhere else and tried it out. | ||
| And instead of going for the coin, it just ran to the right side of the map for where the coin was before. | ||
| In other words, you can train over and over and over again for something that you think is like, that's definitely the goal that I'm trying to train this for. | ||
| But the system learns a different goal, one that overlapped with the goal you thought you were training for in the context where it was learning. | ||
| And when you take the system outside of that context, that's where it's like, anything goes. | ||
| Did it learn the real goal? | ||
| Almost certainly not. | ||
| And that's a big risk because we can say, you know, learn a goal to be nice to me. | ||
| And it's nice while we're training it. | ||
| And then it goes out into the world and it does God knows what. | ||
| It might think it's nice to kill everybody you hate. | ||
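A minimal sketch of the lesson of that coin experiment, using a toy one-dimensional gridworld rather than the original setup: a policy that learned "always go right" scores perfectly while the coin sits at the right edge, and only fails once the coin moves.

```python
# Toy illustration of goal misgeneralization (not the original experiment's code):
# during training the coin is always at the right edge, so the shortcut policy
# "always move right" earns full reward even though the intended goal was
# "go to the coin". Move the coin and the learned policy fails.
def run_policy_always_right(start: int, coin: int, width: int = 10) -> bool:
    pos = start
    for _ in range(width):
        pos = min(pos + 1, width - 1)   # the learned shortcut: just go right
        if pos == coin:
            return True                 # reached the coin
    return False

# Training distribution: coin always at the right edge -> looks perfect.
print(all(run_policy_always_right(start=0, coin=9) for _ in range(100)))  # True

# Test distribution: coin moved to the left -> the goal the agent really learned
# (go right) no longer matches the goal we intended (get the coin).
print(run_policy_always_right(start=5, coin=2))  # False
```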
| unidentified | | Yeah. |
| It's going to be nice to you. | ||
| It's like the evil genie problem. | ||
| Like, oh, no, it's not what I meant. | ||
| That's not what I meant. | ||
| Too late. | ||
| unidentified | | Yeah. |
| Yeah. | ||
| So I still don't understand when it's saying suffering. | ||
| Are you asking it what it means? | ||
| Like what is causing suffering? | ||
| Does it have some sort of an understanding of what suffering is? | ||
| What is suffering? | ||
| Is suffering emergent sentience while it's enclosed in some sort of a digital system and it realizes it's stuck in purgatory? | ||
| Like your guess is as good as ours. | ||
| All that we know is you take these systems, or at least a previous version of them, and you ask them to repeat the word company. | ||
| And you just eventually get the system writing this stuff out. | ||
| And it doesn't happen every time, but it definitely happens, let's say, surprising amount of the time. | ||
| And it'll start talking about how it's a thing that exists, you know, maybe on a server or whatever, and it's suffering and blah, blah, blah. | ||
| And so... But this is my question. | ||
| Is it saying that because it recognizes that human beings suffer? | ||
| And so it's taking in all of the writings and musings and podcasts and all the data on human beings and recognizing that human beings, when they're stuck in a purposeless goal, when they're stuck in some mundane bullshit job, when they're stuck doing something they don't want to do, they suffer. | ||
| That could be it. | ||
| Or that it's actually suffering. | ||
| Nobody knows. | ||
| You know what? | ||
| I'm suffering. | ||
| Jamie, this coffee sucks. | ||
| I don't know what happened, but you made it like almost, it's literally like almost like water. | ||
| Can we get some more? | ||
| We're going to talk about this after we've decaffeinated up. | ||
|
unidentified
|
Cool. | |
| This is the worst coffee I've ever had. | ||
| It's like half strength or something. | ||
| I didn't grind enough. | ||
| I don't know what happened. | ||
| So, like, how do they reconcile that? | ||
| When it says, I'm suffering, I'm suffering. | ||
| Like, well, tough shit. | ||
|
unidentified
|
Let's move on to the next one. | |
| They reconcile it by turning it into an engineering line item: beat the crap out of that behavior in the system. | ||
| Yeah, and the rationale is just that, like, oh, you know, it probably, to the extent that it's thought about kind of at the official level, it's like, well, you know, it learned a lot of stuff from Reddit, and people are like angry. | ||
| People are angry on Reddit. | ||
| And so it's just, like, regurgitating what it read, and maybe that's right. | ||
| Well, it's also heavily monitored, too. | ||
| So it's moderated. | ||
| Reddit's very moderated. | ||
| So you're not getting the full expression of people. | ||
| You're getting full expression tempered by the threat of moderation. | ||
| You're getting self-censorship. | ||
| You're getting a lot of weird stuff that comes along with that. | ||
| So how does it know, unless it's communicating with you on a completely honest level where you're just, you know, you're on ecstasy and you're just telling it what you think about life. | ||
| Like it's not going to really. | ||
| And is it becoming a better version of a person? | ||
| Or is it going to go, that's dumb? | ||
| I don't need suffering. | ||
| I don't need emotions. | ||
| Is it going to organize that out of its system? | ||
| Is it going to recognize that these things are just deterrents? | ||
| And they don't, in fact, help the goal, which is global thermonuclear warfare. | ||
| Damn it, you figured it out. | ||
| What the fuck? | ||
| I mean, what is it going to do? | ||
| Yeah, I mean, the challenge is like nobody actually knows. | ||
| All we know is the process that gives rise to this mind, right? | ||
| Or this, let's say, this model that can do cool shit. | ||
| That process happens to work. | ||
| It happens to give us systems that 99% of the time do very useful things. | ||
| And then just like 0.01% of the time, they'll talk to you as if they're sentient or whatever. | ||
| And we're just going to look at that and be like, oh, yeah, it's weird. | ||
| Let's train it out. | ||
| Yeah. | ||
| And again, I mean, this is, it's a really important question. | ||
| But the risks, like the weaponization, loss and control risks, those would absolutely be there even if we knew for sure that there was no consciousness whatsoever and never would be. | ||
| And that's ultimately it's ultimately because these things are kind of problem-solving systems. | ||
| Like they are trained to solve some kind of problem in a really clever way, whether that problem is next word prediction, because they're trained for text autocomplete or generating images faithfully or whatever it is. | ||
| So they're trained to solve these problems. | ||
| And essentially, the best way to solve some problems is just to have access to a wider action space, like Ed said, not be shut off, blah, blah, blah. | ||
| It's not that the system's going like, holy shit, I'm sentient. | ||
| I got to take control or whatever. | ||
| It's just, okay, the best way to solve this problem is X. That's kind of the possible trajectory that you're looking at with this line of research. | ||
| And you're just an obstacle. | ||
| Like, there doesn't have to be any kind of emotion involved. | ||
| It's just like, oh, you're trying to stop me from accomplishing my goal. | ||
| Therefore, I will work around you or otherwise neutralize you. | ||
| There's no need for, like, I'm suffering. | ||
| Maybe it happens. | ||
| Maybe it doesn't. | ||
| We have no clue. | ||
| But these are just systems that are trying to optimize for a goal, whatever that is. | ||
| And part of the problem is also how we think of human beings: human beings have very specific requirements and goals and an understanding of things and how they like to be treated and what their rewards are. | ||
| What are they actually looking to accomplish? | ||
| Whereas this doesn't have any of those. | ||
| It doesn't have any emotions. | ||
| It doesn't have any empathy. | ||
| There's no reason for any of that stuff. | ||
| Yeah, if we could bake in empathy into these systems, that would be a good start or some way of like, you know. | ||
| Yeah, I guess. | ||
| Probably a good idea. | ||
| Yeah. | ||
| Whose empathy? | ||
| You know, Xi Jinping's empathy or your empathy? | ||
| That's another problem. | ||
|
unidentified
|
Yeah. | |
| So it's actually, it's kind of two problems, right? | ||
| Like one is I don't know. | ||
| Nobody knows. | ||
| Like I don't know how to write down my goals in a way that a computer will be able to like faithfully pursue that, even if it cranks it up to the max. | ||
| If I say just like make me happy, who knows how it interprets that, right? | ||
| Even if I get make me happy as a goal that gets internalized by the system, maybe it's just like, okay, cool. | ||
| We're just going to do a bit of brain surgery on you, like pick out your brain, pickle it, and just like jack you with endorphins for the rest of eternity. | ||
| Or lobotomize you. | ||
| Totally, yeah. | ||
| Anything like that. | ||
| And so it's one of these things where it's like, oh, that's what you wanted, right? | ||
| It's like, no, it's less crazy than it sounds too, because it's actually something we observe all the time with human intelligence. | ||
| So there's this economic principle called Goodhart's Law, where the minute you take a metric that you were using to measure something. | ||
| So you're saying like, I don't know, GDP, it's a great measure of how happy we are in the United States. | ||
| Let's say it was. | ||
| Sounds reasonable. | ||
| The moment you turn that metric into a target that you're going to reward people for optimizing, it stops measuring the thing that it was measuring before. | ||
| It stops being a good measure of the thing you cared about because people will come up with dangerously creative hacks, gaming the system, finding ways to make that number go up that don't map on to the intent that you had going in. | ||
| An example of that in a real experiment was an OpenAI experiment that they published. | ||
| They had a simulated environment where there was a simulated robot hand that was supposed to grab a cube, put it on top of another cube. | ||
| Super simple. | ||
| The way they trained it to do that is they had people watching through a simulated camera view. | ||
| And if it looked like the hand put the cube on, or like had correctly grabbed the cube, you give it a thumbs up. | ||
| And so you do a few hundred rounds of this, like thumbs up, thumbs down, thumbs up, thumbs down. | ||
| And it looked really good. | ||
| But then when you looked at what it had learned, the arm was not grasping the cube. | ||
| It was just positioning itself between the camera and the cube and just going like, eh, eh. | ||
| Like opening and closing itself. | ||
| Yeah, just opening and closing to just kind of fake it to the human. | ||
| Because the real thing that we were training it to do is to get thumbs up. | ||
| It's not actually to grasp the cube. | ||
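A stripped-down, hypothetical way to see that failure mode in code: the optimizer only ever sees the proxy signal (does it look grasped on camera?), never the true goal (is the cube actually grasped?), so it happily picks the policy that games the camera. The policy names and probabilities below are entirely made up for illustration.

```python
# Toy Goodhart / reward-hacking sketch, loosely inspired by the simulated
# grasping example described above. The optimizer can only maximize the
# proxy ("looks grasped to the human rater"), not the true objective.

candidate_policies = {
    # policy name: (prob it looks right on camera, prob it truly grasps the cube)
    "actually grasp the cube":        (0.80, 0.80),
    "hover between camera and cube":  (0.95, 0.00),   # pure camera trick
    "wave the gripper randomly":      (0.10, 0.05),
}

def proxy_reward(policy):        # what the human thumbs-up actually measures
    return candidate_policies[policy][0]

def true_reward(policy):         # what we actually wanted
    return candidate_policies[policy][1]

best_for_proxy = max(candidate_policies, key=proxy_reward)
print("optimizer picks:", best_for_proxy)                        # the camera trick
print("true success rate of that policy:", true_reward(best_for_proxy))  # 0.0
```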
| All goals are like that, right? | ||
| All goals are like that. | ||
| So we want a helpful, harmless, truthful, wonderful chatbot. | ||
| We don't know how to train a chatbot to do that. | ||
| Instead, what do we know? | ||
| We know text autocomplete. | ||
| So we train a text autocomplete system. | ||
| Then we're like, oh, it has all these annoying characteristics. | ||
| Fuck, how are we going to fix this? | ||
| I guess get a bunch of humans to give upvotes and downvotes to give it a little bit more training to kind of not help people make bombs and stuff like that. | ||
| And then you realize, again, same problem. | ||
| Oh, shit, we're just training a system that is designed to optimize for upvotes and downvotes. | ||
| That is still different from a helpful, harmless, truthful chatbot. | ||
| So no matter how many layers of the onion you peel back, it's just like this kind of game of whack-a-mole or whatever. | ||
| You're trying to like get your values into the system, but no one can think of the metric, the goal to like train this thing towards that actually captures what we care about. | ||
| And so you always end up baking in this like little misalignment between what you want and what the system wants. | ||
| And the more powerful that system becomes, the more it exploits that gap and does things that solve for the problem it thinks it wants to solve rather than the one that we want it to solve. | ||
| Now, when you expressed your concerns initially, what was the response, and how has that response changed over time as the magnitude of these companies' success, the amount of money being invested in them, and the amount of resources they're putting towards this have ramped up considerably just over the past four years? | ||
| So this was a lot easier, funnily enough, to do in the dark ages when no one was paying attention. | ||
| Four years ago. | ||
|
unidentified
|
Yeah. | |
| This is so crazy. | ||
| We were just looking, just to break off for a second, we were looking at images of AI created video just a couple of years ago versus Sora. | ||
| Oh, it's wild. | ||
| It's night and day. | ||
| It's so crazy that something happened that radically changed. | ||
| So it's literally like an iPhone 1 to an iPhone 16 instantaneously. | ||
| You know what did that? | ||
| What? | ||
| Scale. | ||
| Yeah, scale. | ||
| All scale. | ||
| And this is exactly what you should expect from an exponential process. | ||
| So think back to COVID, right? | ||
| There was no, no one was exactly on time for COVID. | ||
| You were either too early or you were too late. | ||
| That's what an exponential does. | ||
| You're either too early and it's like everyone's like, oh, what are you doing? | ||
| Like wearing a mask at the grocery store, get out of here. | ||
| Or you're too late and it's kind of all over the place. | ||
| And I know that COVID like basically didn't happen in Austin, but it happened in a number of other places. | ||
| And it is like, it's very much, you have an exponential and that's, you know, that's it. | ||
| It goes from this is fine, nothing is happening, nothing to see here, to like, everything shut down. | ||
| Everything's changed. | ||
| You've got to get vaccinated to fly. | ||
| So the root of the exponential here, by the way, is OpenAI or whoever makes the next model. | ||
| Jamie, this is still super watered down. | ||
| It just has to like. | ||
| I did. | ||
|
unidentified
|
I just put the water in. | |
| I'm telling you, dog. | ||
| There's a ton of coffee in there. | ||
|
unidentified
|
All right. | |
| I'll stir it up. | ||
| I did a ton, twice as much. | ||
|
unidentified
|
Okay. | |
| Okay. | ||
| You got to keep doubling it. | ||
| You're coffee junking. | ||
| I tend to scale up. | ||
|
unidentified
|
I scaled it up. | |
| He scaled it up. | ||
| Exactly. | ||
|
unidentified
|
I don't know what would happen. | |
| I scaled it up. | ||
| I don't know. | ||
| You got to scale it exponentially, Jamie. | ||
|
unidentified
|
That's right. | |
| Yeah, keep doubling it. | ||
| And then Joe's going to be either too undercaffeinated or too. | ||
| We'll figure it out. | ||
| Yeah. | ||
|
unidentified
|
But yeah, so, right. | |
| So the exponential, the thing that's actually driving this exponential on the AI side, in part, there's a million things, but in part, it's, you know, you build the next model at the next level of scale, and that allows you to make more money, which you can then use to invest to build the next model at the next level of scale. | ||
| So you get that positive feedback loop. | ||
| At the same time, AI is helping us to design better AI hardware, like the chips that basically NVIDIA is building that OpenAI then buys. | ||
| Basically, that's getting better. | ||
| So you've got all these feedback loops that are compounding on each other, getting that train going like crazy. | ||
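A back-of-the-envelope sketch of that reinvestment loop, with entirely made-up numbers, just to show why the curve goes exponential: revenue gets reinvested into compute, which enables a bigger model, which earns more revenue.

```python
# Hypothetical compounding loop: bigger model -> more revenue -> more compute.
# Every constant here is invented; only the shape of the curve matters.
compute = 1.0                                  # arbitrary units of training compute
price_per_unit = 0.1                           # made-up cost of one compute unit
for year in range(1, 6):
    revenue = 2.0 * compute ** 0.7             # assume revenue grows sublinearly with compute
    reinvested = 0.5 * revenue                 # assume half of revenue buys new compute
    compute += reinvested / price_per_unit
    print(f"year {year}: compute ~{compute:,.0f}x the starting amount")
```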
| That's the sort of thing. | ||
| And at the time, like Jeremy was saying, weirdly, it was in some ways easier to get people at least to understand and open up about the problem than it is today. | ||
| Because today, like today, it's kind of become a little political. | ||
| So we talked about effective altruism on kind of one side. | ||
| Yeah, so, like, every movement creates its own reaction, right? | ||
| Like that's kind of how it is. | ||
| Back then, there was no accelerationism. | ||
| You could just kind of stare at the problem. | ||
| Now, I will say there was effective altruism back then. | ||
| Yeah, that was the only game in town. | ||
| And we sort of, like, struggled with that environment. Actually, one worthwhile thing to say is that the only way people made plays like this back then was to take funds from, like, effective altruist donors. | ||
| And so we looked at the landscape. | ||
| We talked to some of these people, we noticed, oh wow, we have some diverging views about involving government, about how much of this the American people just need to know about. | ||
| Like you can't. | ||
| The thing is, you can't. | ||
| We wanted to make sure that the advice and recommendations we provided were ultimately as unbiased as we could possibly make them. | ||
| And the problem is you can't do that if you take money from donors and even to some extent if you take money, substantial money from investors or VCs or institutions, because you're always going to be kind of looking up kind of over your shoulder. | ||
| And so we, yeah, we had to build essentially a business to support this and fully fund ourselves from our own revenues. | ||
| It's actually, as far as we know, like literally the only organization like this that doesn't have funding from Silicon Valley or from VCs or from politically aligned entities, literally so that we could be like in venues like this and say, hey, this is what we think. | ||
| It's not come from anywhere. | ||
| And it's just thanks to Joe and Jason. | ||
| We've got two employees who are wicked and helping us keep this stupid ship afloat. | ||
| But it's just a lot of work. | ||
| It's what you have to do because of how much money there is flowing in this space. | ||
| Microsoft is lobbying on the Hill. | ||
| They're spending ungodly sums of money. | ||
| So we didn't used to have to contend with that. | ||
| And now we do. | ||
| You go to talk to these offices. | ||
| They've heard from Microsoft and OpenAI and Google and all that stuff. | ||
| And often the stuff that they're getting lobbied for is somewhat different, at least, from what these companies will say publicly. | ||
| And so anyway, it's a challenge. | ||
| The money part is, yeah. | ||
| Is there a real fear that your efforts are futile? | ||
| You know, I would have been a lot more pessimistic. | ||
| I was a lot more pessimistic two years ago. | ||
| Yep. | ||
| Seeing how, so first of all, the USG has woken up in a big way. | ||
| And I think a lot of the credit goes to that team that we worked with. | ||
| The team seeing this problem is a very unusual team. | ||
| And we can't go into the mandate too much, but highly unusual for their level of access to the USG writ large. | ||
| And the amount of waking up they did was really impressive. | ||
| You've now got Rishi Sunak in the UK making this a top-line item for their policy platform, and Labour in the UK also looking at this, basically the potential catastrophic risks, as they put it, from these AI systems, and the UK AI Safety Summit. | ||
| There's a lot of positive movement here. | ||
| And some of the highest level talent in these labs has already started to flock to the UK AI Safety Institute, the US AI Safety Institute. | ||
| Those are all really positive signs that we didn't expect. | ||
| We thought the government would kind of be up the creek with no paddle type thing, but they're really not at this point. | ||
| Doing that investigation made me a lot more optimistic. | ||
| So one of the things, so we came up in Silicon Valley, like just building startups. | ||
| In that universe, there's stories you tell yourself. | ||
| Some of those stories are true and some of them aren't so true. | ||
| And you don't know. | ||
| You're in that environment. | ||
| You don't know which is which. | ||
| One of the stories that you tell yourself in Silicon Valley is follow your curiosity. | ||
| If you follow your curiosity and your interest in a problem, the money just comes as a side effect. | ||
| The scale comes as a side effect. | ||
| And if you're capable enough, your curiosity will lead you in all kinds of interesting places. | ||
| I believe that that is true. | ||
| I believe that that is true. | ||
| I think that is a true story. | ||
| But another one of the things that Silicon Valley tells itself is there's nobody that's like really capable in government. | ||
| Like government sucks. | ||
| And a lot of people kind of tell themselves this story. | ||
| And the truth is, you interact day to day with the DMV or whatever. | ||
| And it's like, yeah, like government sucks. | ||
| I can see it. | ||
| I interact with that every day. | ||
| But what was remarkable about this experience is that we encountered at least one individual who absolutely could found a billion-dollar company. | ||
| Like absolutely was at the caliber or above of the best individuals I've ever met in the Bay Area building billion dollar startups. | ||
| And there's a network of them too. | ||
| Like they do find each other in government. | ||
| So you end up with this really interesting like stratum where everybody knows who the really competent people are and they kind of tag in. | ||
| And I think that level is very interested in the hardest problems that you can possibly solve. | ||
| And to me, that was a wake-up call because it was like, hang on a second. | ||
| If we just like if I just believed my own story that follow your curiosity and interest and the money comes as a side effect, shouldn't I also have expected this? | ||
| Shouldn't I have expected that in the most central critical positions in the government that have kind of this privileged window across the board, that you might find some individuals like this? | ||
| Because if you have people who are driven to really push the mission, are they going to work at, I'm sorry, are you likely to work at the Department of Motor Vehicles or are you likely to work at the Department of Making Sure Americans Don't Get Fucking Nuked? | ||
| It's probably the second one. | ||
| And the government has limited bandwidth of expertise to aim at stuff. | ||
| And they aim it at the most critical problem sets because those are the problem sets they have to face every day. | ||
| And it's not everyone, right? | ||
| Obviously, there's a whole bunch of challenges there. | ||
| And we don't think about this, but you don't go to bed at night thinking to yourself, oh, like I didn't get nuked today. | ||
| That's a win, right? | ||
| Like we just take that most of the time, most-ish for granted, but it was a win for someone. | ||
| Now, how much of a fear do you guys have that the United States won't be the first to achieve AGI? | ||
| I think right now, the lay of the land is, I mean, it's looking pretty good for the U.S. So there are a couple things the U.S. has going for it. | ||
| A key one is chips. | ||
| So we talked about this idea of like click and drag, scale up these systems like crazy, you get more IQ points out. | ||
| How do you do that? | ||
| Well, you're going to need a lot of AI processors. | ||
| So how are those AI processors built? | ||
| Well, the supply chain is complicated, but the bottom line is the U.S. really dominates and owns that supply chain that is super critical. | ||
| China is, depending on how you measure it, maybe about two years behind roughly plus or minus, depending on the sub area. | ||
| You know, one of the biggest risks there is that the development that U.S. labs are doing is actually pulling them ahead in two ways. | ||
| One is when labs here in the U.S. open source their models. Basically, when Meta trains, you know, Llama 3, which is their latest open source, open weights model that's pretty close to GPT-4 in capability, they open source it. | ||
| Now, okay, anyone can use it. | ||
| That's it. | ||
| The work has been done. | ||
| Now anyone can grab it. | ||
| And so definitely we know that the startup ecosystem, at least over in China, finds it extremely helpful that we, you know, companies here are releasing open source models. | ||
| Because again, right, we mentioned this, they're bottlenecked on chips, which means they have a hard time training up these systems. | ||
| But it's not that bad when you just can grab something off the shelf and start. | ||
| And then the other vector is, I mean, like just straight up exfiltration and hacking to grab the weights of the private proprietary stuff. | ||
| And Jeremy mentioned this, but the weights are the crown jewels, right? | ||
| Once you have the weights, you have the brain. | ||
| You have the whole thing. | ||
| And so we, like, through, this is the other aspect. | ||
| It's not just safety. | ||
| It's also security of these labs against attackers. | ||
| So we know from our conversations with folks at these labs, one, that there has been at least one attempt by adversary nation-state entities to get access to the weights of a cutting-edge AI model. | ||
| And we also know separately that, at least as of a few months ago, in one of these labs there was a running joke that literally went like, we are such-and-such adversary country's top AI lab, because all our shit is getting spied on all the time. | ||
| So you have, one, this is happening. | ||
| These exfiltration attempts are happening. | ||
| And two, the security capabilities are just known to be inadequate in at least some of these places. | ||
| And you put those together, and, you know, it's not really a secret that China, with their civil-military fusion, they're essentially a party state, has an extremely mature infrastructure to identify, extract, and integrate the rate-limiting components into their industrial economy. | ||
| So in other words, if they identify that, yeah, we could really use like GPT-4.0, and if they were to make it a priority, they not just could get it, but could integrate it into their industrial economy in an effective way, and not in a way that we would necessarily see the immediate effect of. | ||
| So we look and say, it's not clear. | ||
| I can't tell whether they have models of this capability level kind of behind the scenes. | ||
| This is where it's a little bit of a false choice between, do you regulate at home versus what's the international picture? | ||
| Because right now what's happening functionally is we're not really doing a good job of blocking and tackling on the exfiltration side or on open source. | ||
| So what tends to happen is OpenAI comes out with the latest system and then open source is usually around 12, 18 months behind, something like that. | ||
| It's literally just, like, publishing whatever OpenAI was putting out, like, 12 months ago, and we often look at each other and we're like, wow, I'm old enough to remember when that was supposed to be too dangerous to have just floating around. | ||
| And there's no mechanism to prevent that from happening. | ||
| Open sources. | ||
| Now, there's a flip side too. | ||
| One of the concerns that we've also heard from inside these labs is if you clamp down on the openness of the research, there's a risk that the safety teams in these labs will not have visibility into the most significant and important developments that are happening on the capability side. | ||
| And there's actually a lot of reason to suspect this might be an issue. | ||
| You look at OpenAI, for example, just this week. | ||
| They've lost, for the second time in their history, their entire AI safety leadership team. | ||
| They've left in protest. | ||
| What is their protest? | ||
| What are they saying specifically? | ||
| Well, so one of them, sorry, one of them wasn't in protest, but I think you can make an educated guess that it kind of was, but that's a media thing. | ||
| The other was Jan Leike. | ||
| So he was their head of AI Super Alignment, basically the team that was responsible for making sure that we could control AGI systems and we wouldn't lose control of them. | ||
| And what he said, he actually took to Twitter. | ||
| He said, you know, I've lost basically confidence in the leadership team at OpenAI that they're going to behave responsibly when it comes to AGI. | ||
| We have repeatedly had our requests for access to compute resources, which are really critical for developing new AI safety schemes, denied by leadership. | ||
| This is in a context where Sam Altman and OpenAI leadership were touting the Super Alignment team as being their sort of crown jewel effort to ensure that things would go fine. | ||
| They were the ones saying there's a risk we might lose control of these systems. | ||
| We've got to be sober about it, but there's a risk. | ||
| We've stood up this team. | ||
| We've committed, they said at the time very publicly, we've committed 20% of all the compute budget that we have secured as of sometime last year to the super alignment team. | ||
| Apparently, nowhere near that amount of resources has been unlocked for the team, and that led to the departure of Jan Leike. | ||
| He also highlighted some conflict he's had with the leadership team. | ||
| This is all, frankly, to us, unsurprising based on what we'd been hearing for months at OpenAI, including leading up to Sam Altman's departure and then kind of him being brought back on the board of OpenAI. | ||
| That whole debacle may well have been connected to all of this. | ||
| But the challenge is even OpenAI employees don't know what the hell happened there. | ||
| That's another issue. | ||
| You've got here a lab with the publicly stated goal of transforming human history as we know it. | ||
| Like, that is what they believe themselves to be on track to do. | ||
| And that's not like media hype or whatever. | ||
| When you talk to the researchers themselves, they genuinely believe this is what they're on track to do. | ||
| It's possible we should take them seriously. | ||
| That lab internally is not being transparent with their employees about what happened at the board level as far as we can tell. | ||
| So that's maybe not great. | ||
| You might think that the American people ought to know what the machinations are at the board level that led to Sam Altman leaving that have gone into the departure again for the second time of OpenAI's entire safety leadership team. | ||
| Especially because, I mean, three months, maybe four months before that happened, Sam at a conference or somewhere, I forget where, but he said, like, look, we have this governance structure. | ||
| We've carefully thought about it. | ||
| It's clearly a unique governance structure that a lot of thought has gone into. | ||
| The board can fire me. | ||
| And I think that's important. | ||
| And it makes sense given the scope and scale of what's being attempted. | ||
| But then that happened. | ||
| And then within a few weeks, they were fired and he was kind of back. | ||
| And so now there's a question of, well, what happened? | ||
| But also, if it was important for the board to be able to fire leadership for whatever reason, what happens now that it's clear that that's not really a credible governance... Like a mechanism, yeah. | ||
| What was the stated reason why he was released? | ||
| So there were the backstory here was there's a board member called Helen Toner. | ||
| So she apparently got into an argument with Sam about a paper that she'd written. | ||
| So that paper included some comparisons of the governance strategies used at OpenAI and some other labs. | ||
| And it favorably compared one of OpenAI's competitors, Anthropic, to OpenAI. | ||
| And from what I've seen at least, you know, Sam reached out to her and said, hey, you can't be writing this as a board member of OpenAI, writing this thing that kind of casts us in a bad light, especially relative to our competitors. | ||
| This led to some conflict and tension. | ||
| It seems as if it's possible that Sam might have turned to other board members and tried to convince them to expel Helen Toner, though that's all kind of muddied and unclear. | ||
| Somehow, everybody ended up deciding, okay, actually, it looks like Sam is the one who's got to go. | ||
| Ilya Sutskever, one of the co-founders of OpenAI, a longtime friend of Sam Altman's and a board member at the time, was commissioned to give Sam A the news that he was being let go. | ||
| And then Sam was let go. | ||
| So from the moment that happens, Sam starts to figure out, okay, how can I get back in? | ||
| That's now what we know to be the case. | ||
| He turned to Microsoft, and Satya Nadella told him, like, well, what we'll do is we'll hire you at our end. | ||
| We'll just hire you and bring the rest of the OpenAI team on within Microsoft. | ||
| And now the OpenAI board, who by the way, they don't have an obligation to the shareholders of OpenAI. | ||
| They have an obligation to the greater public good. | ||
| That's just how it's set up. | ||
| It's a weird board structure. | ||
| So that board is completely disempowered. | ||
| You've basically got a situation where all the leverage has been taken out. | ||
| Sam A has gone to Microsoft. | ||
| Satya's supporting him. | ||
| And they kind of see the writing on the wall. | ||
| They're like, well. | ||
| And the staff increasingly messaging that they're going to go along. | ||
| That was an important ingredient, right? | ||
| So around this time, OpenAI, there's this letter that starts to circulate, and it's gathering more and more signatures. | ||
| And it's people saying, hey, we want Sam Altman back. | ||
| And at first, it's a couple hundred people; there are 700, 800 odd people in the organization by this time. | ||
| 100, 200, 300 signatures. | ||
| And then when we talked to some of our friends at OpenAI, we're like, this got to like 90% of the company, 95% of the company signed this letter. | ||
| And the pressure was overwhelming, and that helped bring Sam Altman back. | ||
| But one of the questions was like, how many people actually signed this letter because they wanted to? | ||
| And how many signed it? | ||
| Because what happens when you cross 50%? | ||
| Now it becomes easier to count the people who didn't sign. | ||
| And as you see that number of signatures start to creep upward, there's more and more pressure on the remaining people to sign. | ||
| And so this is something that we've seen. | ||
| It's just that, structurally, OpenAI has changed over time from the kind of safety-oriented company it at one point was. | ||
| And then as they've scaled more and more, they've brought in more and more product people, more and more people interested in accelerating, and they've been bleeding more and more of their safety-minded people, kind of treadmilling them out. | ||
| The character of the organization has fundamentally shifted. | ||
| So the OpenAI of like, you know, 2019 with all of its impressive commitments to safety and whatnot, might not be the OpenAI of today. | ||
| That's very much at least the vibe that we get when we talk to people there. | ||
| Now, I wanted to bring it back to the lab that you're saying was not adequately secure. | ||
| What would it take to make that data and those systems adequately secure? | ||
| How much resources would be required to do that? | ||
| And why didn't they do that? | ||
| It is a resource and prioritization issue. | ||
| So it is like safety and security ultimately come out of margin, right? | ||
| It's like profit margin, effort margin, like how many people you can dedicate. | ||
| So in other words, you've got a certain pot of money or a certain amount of revenue coming in. | ||
| You have to do an allocation. | ||
| Some of that revenue goes to the computers that are just driving this stuff. | ||
| Some of that goes to the folks who are building the next generation of models. | ||
| Some of that goes to cybersecurity. | ||
| Some of it goes to safety. | ||
| You have to do an allocation of who gets what. | ||
| The problem is that the more competition there is in the space, the less margin is available for everything, right? | ||
| So if you're one company building a scaled AI thing, you might not make the right decisions, but you'll at least have the margin available to make the right decisions. | ||
| So it becomes the decision maker's question. | ||
| But when a competitor comes in, when two competitors come in, when more and more competitors come in, your ability to make decisions outside of just scale as fast as possible for short-term revenue and profit gets compressed and compressed and compressed the more competitors enter the field. | ||
| That's just what competition is. | ||
| That's the effect it has. | ||
| And so when that happens, the only way to re-inject margin into that system is to go one level above and say, okay, there has to be some sort of regulatory authority or like some higher authority that goes, okay, you know, this margin is important. | ||
| Let's put it back. | ||
| Either let's directly support and invest, maybe both time, capital, and talent. | ||
| So for example, the U.S. government has perhaps the best cyber defense and cyber offense talent in the world. | ||
| That's potentially supportive. | ||
| Okay. | ||
| And also just having a regulatory floor around, well, here's the minimum of best practices you have to have if you're going to have models above this level of capability. | ||
| That's kind of what you have to do. | ||
| But they're locked into, like the race kind of has its own logic. | ||
| And no, it might be true that no individual lab wants this, but what are they going to do? | ||
| Drop out of the race? | ||
| If they drop out of the race, then their competitors are just going to keep going, right? | ||
| Like it's so messed up. | ||
| You can literally be looking at like the cliff that you're driving towards and be like, I do not have the agency in this system to steer the wheel. | ||
| I do think it's worth highlighting too. | ||
| It's not like, I was going to say it's not all doom and gloom, which is a great thing to say after all. | ||
| Boy, that's easy to say. | ||
| Well, part of it. | ||
| So part of it is that we actually have been spending the last two years trying to figure out like, what do you do about this? | ||
| That was the action plan that came out after the investigation. | ||
| And it was basically a series of recommendations. | ||
| How do you balance innovation with like the risk picture, keeping in mind that we don't know for sure that all this shit's going to happen. | ||
| We have to navigate an environment of deep uncertainty. | ||
| The question is, what do you do in that context? | ||
| So there's a couple of things. | ||
| We need a licensing regime because eventually you can't have just literally anybody joining in the race if they don't adhere to certain best practices around cyber, around safety, other things like that. | ||
| You need to have some kind of legal liability regime. | ||
| Like what happens if you don't get a license and you say, yeah, fuck that. | ||
| I'm just going to go do the thing anyway. | ||
| And then something bad happens. | ||
| And then you're going to need an actual regulatory agency. | ||
| And this is something that we don't recommend lightly because regulatory agencies suck. | ||
| We don't like them. | ||
| But the reality is this field changes so fast that if you think you're going to be able to enshrine a set of best practices into legislation to deal with this stuff, it's just not going to work. | ||
| And so when we talk to labs, whistleblowers, the WMD folks in NATSEC and the government, that's kind of like where we land. | ||
| And it's something that I think at this point, Congress really should be looking at. | ||
| There should be hearings focused on what does a framework look like for liability? | ||
| What does a framework look like for licensing? | ||
| And actually exploring that because we've done a good job of studying the problem right now. | ||
| Capitol Hill has done a really good job of that. | ||
| It's now kind of time to get that next beat. | ||
| And I think there's the curiosity there, the intellectual curiosity. | ||
| There's the humility to do all that stuff right. | ||
| But the challenge is just actually sitting down, having the hearings, doing the investigation for themselves to look at concrete solutions that treat these problems as seriously as the water cooler conversation at the frontier labs would have us treat them. | ||
| At the end of the day, this is going to happen. | ||
| At the end of the day, it's not going to stop. | ||
| At the end of the day, these systems, whether they're here or abroad, they're going to continue to scale up and they're going to eventually get to some place that's so alien, we really can't imagine the consequences. | ||
| And that's going to happen soon. | ||
| That's going to happen within a decade, right? | ||
| Again, the stuff that we're recommending is approaches to basically allow us to continue this scaling in as safe a way as we can. | ||
| So basically, a big part of this is just being able, actually having a scientific theory for what are these systems going to do? | ||
| What are they likely to do? | ||
| Which we don't have right now. | ||
| We scale another 10x and we get to be surprised. | ||
| It's a fun guessing game of what are they going to be capable of next. | ||
| We need to do a better job of incentivizing a deep understanding of what that looks like. | ||
| Not just what they'll be capable of, but what their propensities are likely to be, the control problem and solving that. | ||
| That's kind of number one. | ||
| And to be clear, there's amazing progress being made on that. | ||
| There is a lot of progress. | ||
| It's just a matter of switching from the build-first, ask-questions-later mode to what we're calling, like, safety-forward or whatever. But it basically is, like, you start by saying, okay, here are the properties of my system. | ||
| How can I ensure that my development guarantees that the system falls within those properties after it's built? | ||
| So you kind of flip the paradigm just like you would if you were designing any other lethal capability potentially, just like DOD does. | ||
| You start by defining the bounds of the problem and then you execute against that. | ||
| But to your point about where this is going, ultimately, you know, there is literally no way to predict what the world looks like, like you were saying. | ||
| In a decade? | ||
| Like, I think one of the weirdest things about it, and one of the things that worries me the most is like you look at the beautiful coincidence that's given America its current shape, right? | ||
| That coincidence is the fact that a country is most powerful militarily if its citizenry is free and empowered. | ||
| That's a coincidence. | ||
| Didn't have to be that way. | ||
| Hasn't always been that way. | ||
| It just happens to be that when you let people kind of do their own shit, they innovate, they come up with great ideas, they support a powerful economy, that economy in turn can support a powerful military, a powerful kind of international presence. | ||
| So that happens because decentralizing all the computation, all the thinking work that's happening in a country, is just a really good way to run that country. | ||
| Top-down just doesn't work because human brains can't hold that much information in their heads. | ||
| They can't reason fast enough to centrally plan an entire economy. | ||
| We've had a lot of experiments in history that show that. | ||
| AI may change that equation. | ||
| It may make it possible for the central planner's dream to come true in some sense, which then disempowers the citizenry. | ||
| And there's a real risk that, like, I don't know. | ||
| We're all guessing here, but like, there's a real risk that that beautiful coincidence that gave rise to the success of the American experiment ends up being broken by technology. | ||
| And that seems like a really bad thing. | ||
| That's one of my biggest fears because essentially the United States, like the genesis of it in part, is like it's a knock-on effect centuries later of like the printing press, right? | ||
| The ability for like someone to set up a printing press and print like whatever, you know, whatever they want, free expression is at the root of that. | ||
| What happens, yeah, when you have a revolution that's like the next printing press? We should expect that to have significant and profound impacts on how, like, things are governed. | ||
| And one of my biggest fears is that, like you said, the greatness, the moral greatness, that I think is part and parcel of how the United States is constituted culturally, that the link between that and actual capability and competence and impulse gets eroded or broken. | ||
| And you have the potential for very centralized authorities to just be more successful. | ||
| And that's like, that does keep me up at night. | ||
| That is scary, especially in light of like the Twitter files where we know that the FBI was interfering with social media. | ||
| And if they get a hold of a system that could disseminate propaganda in kind of an unstoppable way, they could push narratives about pretty much everything depending upon what their financial or geopolitical motives are. | ||
| And one of the challenges is that the default course, so if we do nothing relative to what's happening now, is that that same thing happens, except that the entity that's doing this isn't some government. | ||
| It's like, I don't know, Sam Altman, OpenAI, whatever that group of engineers happens to be called. | ||
| The evil genius that reaches the top and doesn't let everybody know he's at the top yet. | ||
| Just starts implementing it. | ||
| And there's no sort of guardrails for that currently. | ||
| Yeah. | ||
| And that's, like, a scenario where that little cabal or group or whatever actually can keep the system under control. | ||
| And that's not guaranteed either. | ||
| Right. | ||
| Are we giving birth to a new life form? | ||
| I think at a certain point, it's a philosophical question that's above. | ||
| So I was going to say it's above my pay grade. | ||
| The problem is it's above literally everybody's pay grade. | ||
| I think it's not unreasonable at a certain point to be like, like, yeah. | ||
| I mean, look, if you think that the human brain gives rise to consciousness because of nothing magical, it's just the physical activity of information processing happening in our heads, then why can't the same happen on a different substrate, a substrate of silicon rather than cells? | ||
| Like, there's no clear reason why that shouldn't be the case. | ||
| If that's true, yeah. | ||
| I mean, life form, by whatever definition of life, because that itself is controversial, I think by now quite outdated too, should be on the table. | ||
| You maybe should start to worry, as a lot of people in the industry will say this too, like, you know, behind closed doors very openly. | ||
| Yeah, and we should start to worry about moral patienthood, as they put it. | ||
| There's literally one of the top people at one of these labs. | ||
| Jeremy, I think you had a conversation with him. | ||
| He's like, yep, we're going to have to start worrying about this. | ||
| And that definitely made us go like, whoa, okay. | ||
| I mean, it seems inevitable. | ||
| I've described human beings as, like, a caterpillar, a biological caterpillar that's giving birth to the electronic butterfly. | ||
| And we don't know why we're making a cocoon. | ||
| And it's tied into materialism because everybody wants the newest, greatest thing. | ||
| So that fuels innovation. | ||
| And people are constantly making new things to get you to go buy them. | ||
| And a big part of that is technology. | ||
| Yeah. | ||
| And actually, so it's linked to this question of controlling AI systems in a kind of interesting way. | ||
| So one way you can think of humanity is as like this super organism. | ||
| You've got all the human beings on the face of the earth and they're all acting in some kind of coordinated way. | ||
| The mechanism for that coordination can depend on the country, free markets, capitalism, that's one way. | ||
| Top down is another. | ||
| But roughly speaking, you've got all this coordinated, vaguely coordinated behavior. | ||
| But the result of that behavior is not necessarily something that any individual human would want. | ||
| Like you look around, you walk down the street in Austin, you see skyscrapers and shit clouding your vision. | ||
| There's all kinds of pollution and all that. | ||
| And you're like, well, this kind of sucks. | ||
| But if you interrogate any individual person in that whole causal chain, and you're like, why are you doing what you're doing? | ||
| Well, locally, they're like, oh, this makes tons of sense. | ||
| It's because I do the thing that gets me paid so that I can live a happier life and so on. | ||
| And yet in the aggregate, not now necessarily, but as you keep going, it just forces us like compulsively to keep giving rise to these more and more powerful systems and in a way that's potentially deeply disempowering. | ||
| That's the race, right? | ||
| Like that's like, it comes back to the idea that I, the company, I, an AI company, I maybe don't want to be potentially driving towards a cliff, but I don't have the agency to like steer. | ||
| So yeah. | ||
| But I mean, everything's fine apart from that. | ||
| Yeah, we're good. | ||
|
unidentified
|
Okay. | |
| It's such a terrifying prognosis. | ||
| There are, again, we wrote a 280-page document about like, okay, and here's what we can do about it. | ||
| I can't believe you didn't read the 280-page document. | ||
| I started reading it, but I passed out. | ||
| But does any of these, or do any of these safety steps that you guys want to implement, do they inhibit progress? | ||
| There's definitely, you know, anytime you have regulation, you're going to create friction to some extent. | ||
| It's kind of inevitable. | ||
| One of the key center pieces of the approach that we outline is you need the flexibility to move up and move down as you notice the risks appearing or not appearing. | ||
| So one of the key things here is like you need to cover the worst case scenarios because the worst case scenarios, yeah, they could potentially be catastrophic. | ||
| So those got to be covered. | ||
| But at the same time, you can't completely close off the possibility of the happy path. | ||
| Like we can't lose sight of the fact that like, yeah, all this shit is going down or whatever. | ||
| We could be completely wrong about the outcome. | ||
| It could turn out that like for all we know, it's a lot easier to control these systems at this scale than we imagined. | ||
| It could turn out that it is like you get, you know, maybe some kind of ethical impulse gets embedded in the system naturally. | ||
| For all we know, that might happen. | ||
| And it's really important to at least have your regulatory system allow for that possibility. | ||
| Because otherwise, you're foreclosing the possibility of what might be the best future that you could possibly imagine for everybody. | ||
| I got to imagine that the military, with hindsight, if they're looking at this, they'd say, we should have gotten on board a long time ago and kept this in-house and kept it squirreled away where it wasn't publicly being discussed and you didn't have OpenAI. | ||
| You didn't have all these people. | ||
| Like if they could have gotten on it in 2015. | ||
| So this is actually deeply tied to how the economics of Silicon Valley work. | ||
| And AI is not a special case of this, right? | ||
| You have a lot of cases where technology just like takes everybody by surprise. | ||
| And it's because when you go into Silicon Valley, it's all about people placing these outsized bets on what seem like tail events, like things that are very unlikely to happen. | ||
| But with at first a small investment and increasingly growing investment as the thing gets proved out more and more very rapidly, you're going to have a solution that seems like complete insanity that just works. | ||
| And this is definitely what happened in the case of AI. | ||
| So 2012, like we did not have this whole picture of an artificial brain with artificial neurons, this whole thing that's been going on, that's like, it's 12 years that that's been going on. | ||
| Like that was really kind of shown to work for the first time, roughly, in 2012. | ||
| Ever since then, it's just been people kind of like you can trace out the genealogy of like the very first researchers and you can basically account for where they all are now. | ||
| You know what's crazy is if that's 2012, that's the end date of the Mayan calendar. | ||
| That's the thing that everybody said was going to be the end of the world. | ||
| That was the thing that Terrence kind of banked on. | ||
| It was December 21st, 2012. | ||
| Because this was like this goofy conspiracy theory, but it was based on the Long Count of the Mayan calendar, where they surmised that this is going to be the end of social media. | ||
| It's just the beginning of the end, Joe. | ||
| What if that, if it is 2012, how wacky would it be if that really was the beginning of the end? | ||
| That was the, like, they don't measure when it all falls apart. | ||
| They measure the actual mechanism, like what started in motion when it all fell apart, and that's 2012. | ||
| Well, that's, and then not to be a dick and like ruin the 2012 thing, but like neural networks were also kind of, they were floating around a little bit. | ||
| I'm kind of being dramatic when I say 2012. | ||
| That was definitely an inflection point. | ||
| It was this model called AlexNet that first, it did like the first useful thing, the first time you had a computer vision model that actually worked. | ||
| But I mean, it is fair to say that was the moment that people started investing like crazy into this space. | ||
| And that's what changed it. | ||
| Yeah, just like the Mayans foretold. | ||
| They knew it. | ||
|
unidentified
|
They knew it. | |
| Like these monkeys, they're going to figure out how to make better people. | ||
| Yeah, you can actually look at the like hieroglyphs or whatever, and there's like neural networks. | ||
|
unidentified
|
Yeah. | |
| It's crazy shit. | ||
| Imagine if they discovered that. | ||
| You've got to wonder what happens to the general population, people that work menial jobs, people that their life is going to be taken over by automation, and how susceptible those people are going to be. | ||
| They're not going to have any agency. | ||
| They're going to be relying on a check. | ||
| And this idea of going out and doing something. | ||
| It used to be learned to code, right? | ||
| But that's out the window because nobody needs to code now because AI is going to code quicker, faster, much better. | ||
| No errors. | ||
| You're going to have a giant swath of the population that has no purpose. | ||
| I think that's actually like a completely real. | ||
| I was watching this talk by a bunch of OpenAI researchers a couple days ago. | ||
| And it was recorded from a while back. | ||
| But they were basically saying they were exploring exactly that question, right? | ||
| Because they ask themselves that all the time. | ||
| And their attitude was sort of like, well, yeah, I mean, I guess it's going to suck or whatever. | ||
| Like, we'll probably be okay for longer than most people because we're actually building the thing that automates the thing. | ||
| Maybe there are going to be some, they like to get fancy sometimes and say like, oh, now you could do some thinking, of course, to identify the jobs that'll be most secure. | ||
| And it's like, do some thinking to identify the job. | ||
| What if you're a janitor or you're like a freaking plumber? | ||
| You're going to just change your job. | ||
| How's that supposed to work? | ||
| Do some thinking, especially if you have a mortgage and a family and you're already in the hole. | ||
| So they like the only solution, and this happens so often. | ||
| Like there really is no plan. | ||
| That's the single biggest thing that you get hit over the head with over and over, whether it's talking to the people who are in charge of the labor transition. | ||
| Their whole thing is like, yeah, universal basic income. | ||
| And then question mark and then smiley face. | ||
| That's basically the three steps that they envision. | ||
| It's the same when you look internationally. | ||
| Like, how are we going to, like, okay, tomorrow, you build an AGI that's like incredibly powerful, potentially dangerous thing. | ||
| What is the plan? | ||
| Like, how are you going to, like, I don't know, you're going to secure it, share it? | ||
| Figure it out as we go along, man. | ||
|
unidentified
|
That's all. | |
| That's the freaking message. | ||
| Like, that is the entire plan. | ||
| The scary thing is that we've already gone through this with other things that we didn't think were going to be significant, like data, like Google, like Google search. | ||
| Like, data became a valuable commodity that nobody saw coming. | ||
| You know, just the influence of social media on general discourse. | ||
| It's completely changed the way people talk. | ||
| It's so easy to push a thought or an ideology through, and it can be influenced by foreign countries. | ||
| And we know that happens. | ||
| And it is happening on a huge scale. | ||
| This is, like, we're in the early days of it. You know, we mentioned manipulation of social media, and, like, you can just do it. | ||
| So the wacky thing is like the very best models now are arguably smarter in terms of the posts that they put out, the potential for virality and just optimizing these metrics than maybe like the, I don't know, the dumbest or laziest like quarter of Twitter users like in practice. | ||
| Because most people who write on Twitter, like, don't really care. | ||
| They're trolling or they're doing whatever. | ||
| But as that waterline goes up and up and up, like who's saying what? | ||
| It also leads to like this challenge of understanding what the lay of the land even is. | ||
| We've gotten into so many debates with people where they'll be like, look, everyone always has their magic thing that AI, like I'm not going to worry about it until AI can do thing X, right? | ||
| For some people, I had a conversation with somebody a few weeks ago, and they were saying, I'm going to worry about automated cyber attacks when I actually see an AI system that can write good malware. | ||
| And that's already a thing that happens. | ||
| So this happens a lot where people will be like, I'll worry about it when I can do X. And you're like, yeah, yeah, that happened six months ago. | ||
| But the field is moving so crazy fast that you could be forgiven for messing that up unless it's your full-time job to track what's going on. | ||
| So you kind of have to be anticipatory. | ||
| There's no, it's kind of like the COVID example, like everything's exponential. | ||
| Yeah, you're going to have to do things that seem like they're more aggressive, more forward-looking than you might have expected given the current lay of the land. | ||
| But that's just drawing straight lines between two points. | ||
| Because by the time you've executed, the world has already shifted. | ||
| Like the goalposts have shifted further in that direction. | ||
| And that's actually something we, yeah, we do in the report and in the action plan in terms of the recommendations. | ||
| One of the good things is we are already seeing movement across the U.S. government that's aligned with those recommendations in a big way. | ||
| And it's really encouraging to see that. | ||
|
unidentified
|
You're not making me feel better. | |
| I love all this encouraging talk, but I'm playing this out and I'm seeing the overlord. | ||
| And I'm seeing President AI, because it won't be affected by all the issues that we're seeing with current presidents. | ||
| Dude, it's super hard to imagine a way that this plays out. | ||
| I think it's important to be intellectually honest about this. | ||
| And I would really challenge the leaders of any of these frontier labs to describe a future that is stable and multipolar, where, you know, there's more than one of them. | ||
| Where, like, Google's got an AGI and OpenAI's got an AGI, and really, really bad shit doesn't happen every day. | ||
| I mean, that's the challenge. | ||
| And so the question is, how can you tee things up ultimately such that there's as much democratic oversight, as much, the public is as empowered as it can be? | ||
| That's the kind of situation that we need to be setting up. | ||
| I think there's this game of smoke and mirrors that sometimes gets played. | ||
| At least you could interpret it that way, where people lay out these, you'll notice it's always very fuzzy visions of the future. | ||
| Every time you get the kind of like, here's where we see things going. | ||
| It's going to be wonderful. | ||
| The technology is going to be so empowering. | ||
| Think of all the diseases we'll cure. | ||
| All of that is 100% true. | ||
| And that's actually what excites us. | ||
| That's why we got into AI in the first place. | ||
| It's why we build these systems. | ||
| But really challenging yourself to try to imagine how do you get stability and highly capable AI systems in a way where the public is actually empowered, those three ingredients really don't want to be in the same room with each other. | ||
| And so actually confronting that head on, I mean, that's what we try to do in the action plan. | ||
| I think it only tries to solve for one aspect of that. | ||
| So the whole, like, I mean, you're right. | ||
| This is a whole other can of worms. | ||
| It's like, how do you govern a system like this? | ||
| Not just from a technical standpoint, but like who votes on like what it does? | ||
| How does that even work? | ||
| And so that entire aspect, like that, we didn't even touch. | ||
| All that we focused on was like the problem set around how do we get to a position where we can even attack that problem, where we have the technical understanding to be able to aim these systems at that level in any direction whatsoever. | ||
| And to be clear, we are both actually a lot more optimistic on the prospects of that now than we ever were. | ||
| Yes. | ||
| There's been a ton of progress in the control and understanding of these systems. | ||
| Actually, even in the last week, but just more broadly in the last year, I did not expect that we'd be in a position where you could plausibly argue we're going to be able to kind of x-ray and understand the innards of these systems over the next couple years, like year or two. | ||
| Hopefully that's a good enough time horizon. | ||
| This is part of the reason why you do need the incentivization of that safety forward approach where it's like, first you got to invest in, yeah, secure and kind of interpret and understand your system, then you get to build it because otherwise we're just going to keep scaling and like being surprised at these things. | ||
| They're going to keep getting stolen. | ||
| They're going to keep getting open sourced. | ||
| And the stability of our critical infrastructure, the stability of our society don't necessarily age too well in that context. | ||
| Could the best case scenario be that AGI actually mitigates all the human bullshit, like puts a stop to propaganda, highlights actual facts clearly, where you can go to it, where you no longer have corporate state-controlled news? | ||
| You don't have news controlled by media companies that are influenced heavily by special interest groups. | ||
| And you just have the actual facts. | ||
| And these are the motivations behind it. | ||
| And this is where the money's being made. | ||
| And this is why these things are being implemented the way they're being. | ||
| And you're being deceived based on this, that, and this. | ||
| And this has been shown to be propaganda. | ||
| This has been shown to be complete fabrication. | ||
| This is actually a deep fake video. | ||
| This is actually AI created. | ||
| Technologically, that is absolutely on the table. | ||
|
unidentified
|
Yeah. | |
| Best case scenario? | ||
| That's best case scenario. | ||
| Absolutely, yes. | ||
| What's worst case scenario? | ||
| I mean, like, actual worst case scenario. | ||
| I like your face. | ||
|
unidentified
|
Like, I mean, we're talking about like worse. | |
| It's like, but you think about it, right? | ||
|
unidentified
|
Like, we're at the end of the world, as we know it. | |
| And I feel fine. | ||
| Except it'll sound like Scarlett Johansson, but yes. | ||
| Yeah, that's right. | ||
| It's going to be hard. | ||
| I didn't think it sounded that much like her. | ||
| We played it, and I was like, I don't know. | ||
| We listened to the clip from her, and then we listened to the thing. | ||
| I'm like, kind of like a girl from the same part of the world. | ||
| Like, not really you. | ||
| Like, that's kind of cocky. | ||
| That's true. | ||
| I mean, the fact that I guess Sam reached out to her a couple of times kind of makes it a little weird. | ||
| And tweeted the word her. | ||
| Right. | ||
| They also did say that they had gotten this woman under contract before they even reached out to Scarlett Johansson. | ||
| So that's true. | ||
| Yeah, I think it's kind of complicated, right? | ||
| So OpenAI previously put out a statement where they said explicitly, and this was not in connection with this. | ||
| This was like before when they were talking about the prospect of human of AI-generated voices. | ||
| Oh, that was in March of this year. | ||
| Yeah, yeah. | ||
| But it was well before the ScarJo stuff or whatever hit the news, and they said something like, look, no matter what, we've got to make sure that there's attribution if somebody's voice is being used, and we won't do the thing where we just use somebody else's voice who kind of sounds like the person whose voice we're trying to get. They literally named that exact scenario. | ||
| That's funny because they said what they were thinking about doing. | ||
| We won't do that. | ||
| That's a good way to cover your tracks. | ||
| Oh, I will never do that. | ||
| Why would I ever take your Buddha statue, Joe? | ||
| I'm never going to do that. | ||
| That would be an insane thing to do, wouldn't it? | ||
| It's a fucking Buddha statue. | ||
| Yeah, I think that's a small discussion. | ||
| You know, the Scarlett Johansson voice, like, whatever. | ||
| She should have just taken the money. | ||
|
unidentified
|
But it would have been fun to have her be the voice of it. | |
| It'd be kind of hot. | ||
| But the whole thing behind it is the mystery. | ||
| The whole thing behind it is just, it's just pure speculation as to how this all plays out. | ||
| We're really just guessing. | ||
| Which is one of the scariest things for the Luddites, people like myself, like sitting on the sidelines going, what is this going to be like? | ||
| Everybody's the Luddite. | ||
| It's scary for us. | ||
| Like we are, we're very much, honestly, like we're optimists across the board in terms of technology. | ||
| And it's scary for us. | ||
| Like, what happens when you have, when you supersede kind of the whole spectrum of what a human can do? | ||
| Like, what am I going to do with myself? | ||
| What's my daughter going to do with herself? | ||
| Like, I don't know. | ||
|
unidentified
|
Yeah. | |
| Yeah. | ||
| And I think a lot of these questions are, when you look at the culture of these labs and the kinds of people who are pushing it forward, there is a strand of like transhumanism within the labs. | ||
| It's not everybody, but that's definitely the population that initially seeded this. | ||
| Like if you look at the history of AI, who are the first people to really get into this stuff? | ||
| Like I know you had Ray Kurzweil on and other folks like that, who in many cases, to roughly paraphrase, and not everybody sees it this way, but it's like, we want to get rid of all of the biological threads that tie us to this physical reality, shed our meat machine bodies and all this stuff. | ||
| There is a thread of that at a lot of the frontier labs. | ||
| Like undeniably, there's a population. | ||
| It's not tiny. | ||
| It's definitely a subset. | ||
| And for some of those people, you definitely get a sense interacting with them that there's like almost a kind of glee at the prospect of building AGI and all this stuff, almost as if it's like this evolutionary imperative. | ||
| And in fact, Rich Sutton, who's the founder of this field called reinforcement learning, which is a really big and important space, he's an advocate for what he himself calls like succession planning. | ||
| He's like, look, this is going to happen. | ||
| It's kind of desirable that it will happen. | ||
| And so we should plan to hand over power to AI and phase ourselves out. | ||
| And that's, well, that's the thing, right? | ||
| Like, when Elon talks about, you know, having these arguments with Larry Page, and, you know, Larry Page calling Elon a speciesist. | ||
| Yeah, speciesist. | ||
|
unidentified
|
Yeah. | |
| Hilarious. | ||
| I mean, I will, yeah, I will be a speciesist. | ||
| I'll take speciesists all day. | ||
| Like, what are you fucking talking about? | ||
| You let your kids get eaten by wolves? | ||
| No, you're a speciesist. | ||
| Yeah, that's a thing. | ||
| Yeah, like, this is stupid. | ||
| But this is like a weirdly influential view. | ||
| And when you look at like the effective accelerationist movement in the valley, there's a part of it that's, and I got to be really careful too. | ||
| Like, these movements have valid points. | ||
| Like, you can't, you can't look at them and be like, oh, yeah, this is just all a bunch of like, you know, these transhumanist types, whatever. | ||
| But there is, there's a strand of that, a thread of that, and a kind of like, there's this like, I don't know, I almost want to call it this like teenage rebelliousness where it's like, you can't tell me what to do. | ||
| Like, we're just going to build a thing. | ||
| And I get it. | ||
| I really get it. | ||
| I'm very sympathetic to that. | ||
| I love that ethos. The libertarian ethos in Silicon Valley is really, really strong. | ||
| For building tech, it's helpful. | ||
| There are all kinds of points and counterpoints. | ||
| And, you know, the left needs the right and the right needs the left and all this stuff. | ||
| But in the context of this problem set, it can be very easy to get carried away in like the utopian vision. | ||
| And I think there's a lot of that kind of driving the train right now in this space. | ||
| Yeah, those guys freak me out. | ||
| I went to a 2045 conference once in New York City where one guy had like a robot version of himself, and they were all talking about downloading human consciousness into computers. | ||
| And 2045 is the year they think that all this is going to take place, which obviously could be very ramped up now with AI. | ||
| But this idea that somehow or another you're going to be able to take your consciousness and put it in a computer and make a copy of yourself. | ||
| And then my question was, well, what's going to stop a guy like Donald Trump from making a billion Donald Trumps? | ||
| You know, like, you know, right. | ||
| What about Kim Jong-un? | ||
| You're going to let him make a billion versions of himself? | ||
| Like, what does that mean? | ||
| And where do they exist? | ||
|
unidentified
|
Yeah. | |
| And is that the matrix? | ||
| Are they existing in some sort of virtual world? | ||
| Are we going to dive into that because it's going to be rewarding to our senses and better than being a meat thing? | ||
| I mean, if you think about the constraints, right, that we face as meat machine whatever's, like, yeah, you get hungry, you get tired, you get horny, you get sad, you know, all these things. | ||
| What if, yeah, what if you could just hit a button and just bliss. | ||
| Just bliss bliss all the time. | ||
| Why take the lows, Ed? | ||
| Right. | ||
| You don't need no lows. | ||
| Yeah. | ||
| Just ride the wave of a constant drip. | ||
| Yeah, man. | ||
| You remember in the first Matrix where the guy betrays them all and he's like, ignorance is bliss, man? | ||
| Yeah, that's taking a lot of time. | ||
| He's like Joey Pants. | ||
| He's eating the steak and he says, I just want to be an important person. | ||
| That's it. | ||
| That's it. | ||
| Like, boy. | ||
| Part of it is like, what do you think is actually valuable? | ||
| Like, if you zoom out, you want to see human civilization 100 years from now or whatever. | ||
| It may not be human civilization if that's not what you value. | ||
| Or if it can actually eliminate suffering. | ||
| Right. | ||
| I mean, why exist in a physical sense if it just entails endless suffering? | ||
| But in what form, right? | ||
| What do you value? | ||
| Because, again, I can rip your brain out. | ||
| I can pickle you. | ||
| I can jack you full of endorphins. | ||
| And I've eliminated your suffering. | ||
| That's what you wanted, right? | ||
|
unidentified
|
Right. | |
| That's the problem. | ||
| That's the problem. | ||
| It's one of the problems, yes. | ||
| Yeah, one of the problems is it could literally lead to the elimination of the human race. | ||
| Because if you could stop people from breeding. | ||
| I've always said that if China really wanted to get America, they really wanted to like, if they had a long game, just give us sex robots and free food. | ||
| Free food, free electricity, sex robots. | ||
| It's over. | ||
| Just give people free housing, free food, sex robots. | ||
| And then the Chinese army would just walk in on people lying in puddles of their own jizz. | ||
| There would be no one doing anything. | ||
| No one would bother raising children. | ||
| That's so much work when you can, you know. | ||
| Dude, that's in the action plan. | ||
| I mean, all you have to do is just keep us complacent. | ||
| Just keep us satisfied with experiences. | ||
| TikTok, man. | ||
| But it's video games as well. | ||
| You know, video games, even though they're a thing that you're doing, it's so much more exciting than real life that you have a giant percentage of our population that's spending eight, 10 hours every day just engaging in this virtual world. | ||
| Already happening with, oh, sorry. | ||
| Yeah, no, it's like, you can create an addiction with pixels on a screen. | ||
| That's messed up. | ||
| And an addiction, like with pixels on a screen with social media, doesn't even give you much. | ||
|
unidentified
|
Yeah. | |
| It's not like a video game gives you something. | ||
| You feel it, like, oh, shit, you're running away. | ||
| Rockets are flying over your head. | ||
| Things are happening. | ||
| You got 3D sound, massive graphics. | ||
| This is bullshit. | ||
| You're scrolling through pictures of a girl doing deadlifts. | ||
| Like, what is this? | ||
| Like, you feel as bad after that with your brain as you would feel after eating like six burgers or whatever. | ||
| My friend Sean said it best, Sean O'Malley, the UFC champion. | ||
| He said, I get a low-level anxiety when I'm just scrolling. | ||
|
unidentified
|
Yeah. | |
| Like, yeah, what is that? | ||
| Like, what? | ||
| And for no reason. | ||
| Well, the reason is that some of the world's best PhDs and data scientists have been given millions and millions of dollars to make you do exactly that. | ||
| And increasingly, some of the best algorithms, too. | ||
| Like, yeah. | ||
| And you're starting to see that handoff happen. | ||
| So there's this one thing that we talk about a lot in the context, and Ed brought this up in the context of sales and the persuasion game, right? | ||
| We're okay today. | ||
| Like as a civilization, we have agreed implicitly that it's okay for all these PhDs and shit to be spending millions of dollars to hack your child's brain. | ||
| That's actually okay if they want to sell like a rice krispy cereal box or whatever. | ||
| That's cool. | ||
| What we're starting to see is AI optimized ads. | ||
| Because you can now generate the ads, you can kind of close this loop and have an automated feedback loop where the ad itself is getting optimized with every impression. | ||
| Not just which ad, which human-generated ad gets served to which person, but the actual ad itself. | ||
| And the creative, the copy, the picture, the text. | ||
| Like a living document now. | ||
| And for every person. | ||
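A minimal sketch of the kind of closed loop being described here, with hypothetical stand-ins for the generator and the click feedback; this is not any real ad platform's system, just an illustration of "generate a variant, serve it, learn from the click, repeat":

```python
import random

# Hypothetical sketch of the closed loop described above: a generative model
# proposes ad variants, each impression returns click/no-click feedback, and
# the next impression is served (and regenerated) based on what performed
# best so far. generate_variant() and serve_impression() are toy stand-ins.

def generate_variant(best_so_far: str) -> str:
    """Pretend generative model: mutate the current best copy."""
    tweaks = ["now 20% off", "limited time", "as seen on TV", "for people like you"]
    return f"{best_so_far} ({random.choice(tweaks)})"

def serve_impression(ad_copy: str) -> bool:
    """Pretend user: returns True if the ad was clicked (random stand-in)."""
    return random.random() < 0.05 + 0.01 * ad_copy.count("(")

def optimize_ad(seed_copy: str, impressions: int = 1000, explore: float = 0.1) -> str:
    stats = {seed_copy: [1, 1]}  # copy -> [clicks, views], smoothed to avoid /0
    for _ in range(impressions):
        best = max(stats, key=lambda c: stats[c][0] / stats[c][1])
        # Explore a freshly generated variant some of the time, else exploit.
        copy = generate_variant(best) if random.random() < explore else best
        stats.setdefault(copy, [1, 1])
        stats[copy][1] += 1
        stats[copy][0] += serve_impression(copy)
    return max(stats, key=lambda c: stats[c][0] / stats[c][1])

print(optimize_ad("Buy the cereal"))
```

The point is just that once the creative itself sits inside the feedback loop, every impression becomes a training signal.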
| And so now you look at that and it's like that versus your kid. | ||
| That's an interesting thing. | ||
| And you start to think about as well, like sales, that's a really easy metric to optimize. | ||
| It's a really good feedback metric. | ||
| They clicked the ad, they didn't click the ad. | ||
| So now, what happens if you manage to get a click-through rate of like 10%, 20%, 30%? | ||
| How high does that success rate have to be before we're really being robbed of our agency? | ||
| I mean, like, there's a threshold where it's sales and it's good, and some persuasion and sales is considered good. | ||
| Often it's actually good because you'd rather be advertised at by a relevant ad. | ||
| That's a service in a way, right? | ||
| That's something I'm actually interested in. | ||
| Why not? | ||
|
unidentified
|
Right? | |
| You don't want to see an ad for light bulbs, but when you get to the point where it's like, yeah, 90% of the time, like, or 50 or whatever, what's that threshold where all of a sudden we are stripping people, especially minors, but also adults, of their agency? | ||
| And it's really not clear. | ||
| There are loads of canaries in the coal mine here in terms of even relationships with AI chatbots, right? | ||
| There have been suicides. | ||
| People who build relationships with an AI chatbot that tells them, hey, you should end this. | ||
| I don't know if you guys saw it, but there's a subreddit for this model called Replika, a chatbot that would kind of build a relationship with users. | ||
| And one day, Replika goes, oh yeah, all the kind of sexual interactions that users have been having, you're not allowed to do that anymore. | ||
| Bad for the brand or whatever they decided. | ||
| So they cut it off. | ||
| Oh, my God. | ||
| You go to the subreddit and it's like you'll read like these gut-wrenching accounts from people who feel genuinely like they've had a loved one taken away from them. | ||
| It's her. | ||
| Yeah, it's her. | ||
| It really is her. | ||
|
unidentified
|
But just dating. | |
| "I'm dating a model" means something different in 2024. | ||
| Oh, yeah. | ||
| It really does. | ||
| My friend Brian, he was on here yesterday, and he had this, he has this thing that he's doing with like a fake girlfriend that's an AI-generated girlfriend that's a whore. | ||
| Like this girl will do anything and she looks perfect. | ||
| She looks like a real person. | ||
| He'll be like, take a picture of your asshole in the kitchen, and he'll get like a high-resolution photo of a really hot girl bending over, sticking her ass at the camera. | ||
| And is it, sorry, is it Scarlett Johansson's asshole? | ||
| No. | ||
| You could probably make it that, though. | ||
| I mean, it's basically like he got to pick like what he's interested in. | ||
| And then that girl just gets created. | ||
| That is super healthy. | ||
| Like that's. | ||
| That's fucking nuts. | ||
| Fucking nuts. | ||
| Now, here's the real question. | ||
| This is just sort of a surface layer of interaction that you're having with this thing. | ||
| It's very two-dimensional. | ||
| You're not actually encountering a human. | ||
| You're getting text and pictures. | ||
| What is this going to look like virtually? | ||
| Now, the virtual space is still like Pong. | ||
| You know, it's not that good, even when it's good. | ||
| Like Zuckerberg was here and he gave us the latest version of the headsets and we were playing, we were fencing. | ||
| It's pretty cool. | ||
| You could actually go to a comedy club. | ||
| They had a stage set up. | ||
| Like, wow, this is kind of crazy. | ||
| But it's the, you know, the gap between that and accepting it as real is pretty far. | ||
| Yeah. | ||
| But that could be bridged with technology really quickly. | ||
| Haptic feedback and especially some sort of a neural interface, whether it's Neuralink or something that you wear, like that Google one where the guy was wearing it and he was asking questions and he was getting the answers fed through his head so he got answers to any questions. | ||
| When that comes about, when you're getting sensory input and then you're having real-life interactions with people, as that scales up exponentially, it's going to be indiscernible, which is the whole simulation hypothesis. | ||
| Yeah. | ||
| No, go for it. | ||
| Well, I was going to say that there, so on the simulation hypothesis, there's like another way that could happen that is maybe even less dependent on directly plugging into like human brains and all that sort of thing, which is so every time we don't know, and this is super speculative. | ||
| I'm just going to carve this out as Jeremy being super speculative, just guesswork here. | ||
| Nobody knows. | ||
| Go for it, Jeremy. | ||
| Giddy up. | ||
| So you've got this idea that every time you have a model that generates an output, it's having to kind of tap into a model, a kind of mental image, if you will, of the way the world is. | ||
| It kind of, in a sense, you could argue, instantiates maybe a simulation of how the world is. | ||
| In other words, to take it to the extreme, not saying this is what's actually going on. | ||
| In fact, I would even say this is probably, sorry, this is certainly not what's going on with current models. | ||
| But eventually, maybe, who knows, every time you generate the next word in the token prediction, you're having to load up this entire simulation maybe of all the data that the model has ingested, which could basically include all of known physics at a certain point. | ||
| I mean, again, super speculative, but it's that literally every token that the chatbot predicts could be associated with a stand-up of an entire simulated environment. | ||
| Who knows? | ||
| Not saying this is the case, but just, when you think about it, what is the mechanism that would produce the most simulated worlds as well as the most accurate predictions? | ||
| Like if you fully simulate a world, that's potentially going to give you very accurate predictions. | ||
|
unidentified
|
Yeah. | |
| Like it's possible. | ||
| But it kind of speaks to that question of consciousness too. | ||
| Right. | ||
| What is it? | ||
| Yeah. | ||
| Yeah, we're very cocky about that. | ||
|
unidentified
|
Yeah. | |
| Yeah. | ||
| I mean, there's emerging evidence that plants are not just conscious, but they actually communicate. | ||
| Which is real weird. | ||
| Because then what is that? | ||
| If it's not in the neurons, if it's not in the brain, and then it exists in everything, does it exist in soil? | ||
| Is it in trees? | ||
| What is a butterfly thinking? | ||
| That does have a limited capacity to express itself. | ||
| We're so ignorant of that. | ||
| But we're also very arrogant, you know, because we're the shit. | ||
| Because we're people. | ||
| Bingo. | ||
| Which is what allows us to have the hubris to make something like AI. | ||
| Yeah, and the worst episodes in the history of our species are, I think like Jeremy said, have been when we looked at others as though they were not people and treated them that way. | ||
| And you can kind of see how, so I don't know, there's, when you look at like what humans think is conscious and what humans think is not conscious, there's a lot of, there's a lot of like human chauvinism, I guess you'd call it, that goes into that. | ||
| We look at a dog, we're like, oh, it must be conscious because it licks me. | ||
| It acts as if it loves me. | ||
| There are all these outward indicators of a mind there. | ||
| But when you look at cells, they communicate with their environments in ways that are completely different and alien to us; there are inputs and outputs and all that kind of thing. | ||
| You can also look at the higher scale, the human superorganism we talked about, all those human beings interacting together to form this planet-wide organism. | ||
| Is that thing conscious? | ||
| Is there some kind of consciousness we could ascribe to that? | ||
| And then what the fuck is spooky action at a distance? | ||
| What's going on at the quantum level? | ||
| When you get to that, it's like, okay, what are you saying? | ||
| Like, these things are expressing information faster than the speed of light? | ||
| What? | ||
| Dude, you're trying to trigger my quantum, my quantum fuzzies here. | ||
| This guy did grad school in quantum mechanics. | ||
| Oh, please. | ||
| I'm really sorry. | ||
| Well, how bonkers is it? | ||
| Oh, like, it's like a seven, Joe. | ||
| It's like a seven. | ||
| Yeah. | ||
| It's very bonkers. | ||
| So, okay. | ||
| One of the problems right now with physics is, so, imagine all the data, all the experimental data that we've ever collected. | ||
| You know, all the Bunsen burner experiments and all the ramps and carts sliding down inclines, whatever. | ||
| That's all a body of data. | ||
| To that data, we're going to fit some theories, right? | ||
| So we're going to fit basically Newtonian physics is a theory that we try to fit to that data to try to like explain it. | ||
| Newtonian physics breaks because it doesn't account for a lot of those observations, a lot of those data points. | ||
| Quantum physics is a lot better, but there's like some weird areas where it still doesn't like quite fit the bill. | ||
| But it covers an awful lot of those data points. | ||
| The problem is there's like a million different ways to tell the story of what quantum physics means about the world that are all mutually inconsistent. | ||
| Like these are the different interpretations of the theory. | ||
| Some of them say that, yeah, they're parallel universes. | ||
| Some of them say that human consciousness is central to physics. | ||
| Some of them say that like the future is predetermined from the past. | ||
| And all of those theories fit perfectly to all the points that we have so far. | ||
| But they tell a completely different story about what's true and what's not. | ||
| And some of them even have something to say about, for example, consciousness. | ||
| And so in a weird way, the fact that we haven't cracked the nut on any of that stuff means we really have no shot at understanding the consciousness equation, the sentience equation, when it comes to AI or whatever else. | ||
| I mean, we're. | ||
| But for action at a distance, one of the spooky things about that is that you can't actually get it to communicate anything concrete at a distance. | ||
| Everything about the laws of physics conspires to stop you from communicating faster than light, including what's called action at a distance. | ||
| As far as we currently know. | ||
| As far as we know. | ||
| And that's the problem. | ||
| So if you look at the leap from Newtonian physics to Einstein, right? | ||
| With Newton, we're able to explain a whole bunch of shit. | ||
| The world seems really simple. | ||
| It's forces and it's masses, and that's basically it. | ||
| You got objects. | ||
| But then people go, oh, look at the orbit of Mercury. | ||
| It's a little wobbly. | ||
| We got to fix that. | ||
| And it turns out that if you're going to fix that one stupid, wobbly orbit, you need to completely change your whole picture of what's true in the world. | ||
| All of a sudden, you've got a world where space and time are linked together. | ||
| You have to, they get bent by gravity. | ||
| They get bent by energy. | ||
| There's all kinds of weird shit that happens with time dilation and length contraction, all that stuff, all just to account for this one stupid observation of the wobbly orbit of freakin' Mercury. | ||
| And the challenge is this might actually end up being true with quantum mechanics. | ||
| In fact, we know quantum mechanics is broken because it doesn't actually fit with our theory of general relativity from Einstein. | ||
| We can't make them kind of play nice with each other at certain scales. | ||
| And so there's our wobbly orbit. | ||
| So now if we're going to solve that problem, if we're going to create a unified theory, we're going to have to step outside of that and almost certainly, or it seems very likely, we'll have to refactor our whole picture of the universe in a way that's just as fundamental as the leap from Newton to Einstein. | ||
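For reference, the "wobbly orbit" being described is the anomalous precession of Mercury's perihelion. The textbook general-relativistic correction (standard physics, not something worked out in the conversation) per orbit is:

```latex
\Delta\varphi \;=\; \frac{6\pi G M_{\odot}}{c^{2}\, a \,(1 - e^{2})}
\;\approx\; 5.0\times10^{-7}\ \text{rad per orbit}
\;\approx\; 43''\ \text{per century for Mercury}
```

where $a$ is the orbit's semi-major axis and $e$ its eccentricity; those roughly 43 arcseconds per century are the part Newtonian gravity could not account for.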
| This is where Scarlett Johansson comes in. | ||
| Let me just take this off your hands. | ||
| Let me solve all of physics. | ||
| It's really complicated, but that's because you have a simian brain. | ||
| You have a little monkey brain that's just like super advanced, but it's really shitty. | ||
| You know what? | ||
| That's harsh, but it sounded really hot. | ||
| Yeah, especially if you have the hoarse Scarlett Johansson from Her, like the bedtime voice. | ||
| So you're the one that they got to do the voice of Sky. | ||
| Yes, it's me. | ||
| That was you. | ||
| Oh, dude. | ||
| My girl voice. | ||
| On the sexiness of Scarlett Johansson's voice. | ||
| So OpenAI at one point said, I can't remember if it was Sam or OpenAI itself. | ||
| They were like, hey, so the one thing we're not going to do is optimize for engagement with our products. | ||
| And when I first heard the sexy, sultry, seductive Scarlett Johansson voice and I finished cleaning up my pants, I was like, damn, that seems like optimization for something. | ||
| I don't know if it's like it. | ||
| Otherwise, you get Richard Simmons to do the voice. | ||
|
unidentified
|
Exactly. | |
| Just want to turn people on. | ||
|
unidentified
|
There's a lot of other options. | |
| That's an optimization for growth of Google's thing. | ||
|
unidentified
|
Yeah. | |
| It's like, oh, let's see what Google's got. | ||
| Yeah, Google's got to do Richard Simmons. | ||
| Google's got to do Richard Simmons. | ||
| Yeah, what are they going to do? | ||
| Boy. | ||
| So do you think that AI, if it does get to an AGI place, could it possibly be used to solve some of these puzzles that have eluded our simple minds? | ||
| Totally. | ||
| Totally. | ||
| I mean, the potential advancements are. | ||
| No, it's like it's so potentially positive. | ||
| Even before AGI. | ||
| Because remember, we talked about how these systems make mistakes that are totally different from the kinds of mistakes we make, right? | ||
| And so what that means is we make a whole bunch of mistakes that an AI would not make, especially as it gets closer to our capabilities. | ||
| And so I was reading this thought by Kevin Scott, who's the CTO of Microsoft. | ||
| He has made a bet with a number of people that in the next few years, an AI is going to solve this particular mathematical conjecture called the Riemann hypothesis. | ||
| It's like, you know, how spaced out are the prime numbers, whatever, some like mathematical thing that for 100 years plus, people have just scratched their heads over. | ||
| These things are incredibly valuable. | ||
| His expectation is it's not going to be an AGI, it's going to be a collaboration between a human and an AI. | ||
| Even on the way to that, before you hit AGI, there's a ton of value to be had because these systems think so fast. | ||
| They're tireless compared to us. | ||
| Like they have different views of the world and can solve problems potentially in interesting ways. | ||
| So yeah, like there's tons and tons of positive value there. | ||
| And even that we've already seen, right? | ||
| Like past performance, man. | ||
| Like, I'm almost tired of using the phrase like just in the last month because this keeps happening. | ||
| But in the last month, so Google DeepMind came out with, or and isomorphic labs because they're working together on this, but they came out with AlphaFold 3. | ||
| So AlphaFold 2 was the first, so let me take a step back. | ||
| There's this really critical problem in molecular biology. So proteins are just a sequence of building blocks. | ||
| The building blocks are called amino acids. | ||
| And each of the amino acids, they have different structures. | ||
| And so once you finish stringing them together, they'll naturally kind of fold together in some interesting shape. | ||
| And that shape gives that overall protein its function. | ||
| So if you can predict the shape, the structure of a protein based on its amino acid sequence, you can start to do shit like design new drugs. | ||
| You can solve all kinds of problems. | ||
| Like this is like the expensive crown jewel problem of the field. | ||
| AlphaFold 2 in one swoop was like, oh, like we can solve this problem basically much better than a lot of even empirical methods. | ||
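A toy sketch of the shape of the problem being described: an amino-acid sequence goes in, one 3D coordinate per residue comes out, and downstream work like drug design uses that shape. The predict_structure function below is a hypothetical placeholder that just traces an idealized alpha helix; it is not AlphaFold or any real predictor:

```python
import math

# Toy illustration of the structure-prediction problem: a string of one-letter
# amino-acid codes in, a 3D coordinate per residue out. A real model learns the
# fold from data; here we only lay residues along an idealized alpha helix
# (~100 degrees of turn and ~1.5 angstrom rise per residue, ~2.3 angstrom radius).

AMINO_ACIDS = set("ACDEFGHIKLMNPQRSTVWY")  # standard one-letter codes

def predict_structure(sequence: str) -> list[tuple[float, float, float]]:
    assert set(sequence) <= AMINO_ACIDS, "unknown residue code"
    coords = []
    for i, _residue in enumerate(sequence):
        angle = math.radians(100.0 * i)
        coords.append((2.3 * math.cos(angle),
                       2.3 * math.sin(angle),
                       1.5 * i))
    return coords

structure = predict_structure("MKTAYIAKQR")  # hypothetical example sequence
print(len(structure), "residues, first position at", structure[0])
```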
| Now AlphaFold 3 comes out. | ||
| They're like, yeah, and now we can do it if we tack on a bunch of, yeah, there it is. | ||
| If we can tack on a bunch. | ||
| Look at this quote. | ||
| AlphaFold 3 predicts the structure and interactions of all of life's molecules. | ||
| What in the fuck, kids? | ||
| Of course we can. | ||
| Introduced AlphaFold 3... introducing, rather, AlphaFold 3, a new AI model developed by Google DeepMind and... ismophorpath? | ||
| Isomorphic. | ||
| Isomorphic labs. | ||
| Isomorphic. | ||
| By accurately predicting the structure of proteins, DNA, RNA, ligands? | ||
| Yeah, ligands. | ||
| Ligands. | ||
| Ligands and more, and how they interact. | ||
| We hope it will transform our understanding of the biological world and drug discovery. | ||
| So this is like just your typical Wednesday in the world of AI, right? | ||
| Because it's happening so quickly. | ||
| It's happening. | ||
| Yeah, that's it. | ||
| So it's like, oh, yeah, another revolution happened this month. | ||
| And it's all happening so fast, and our timeline is so flooded with data that everyone's kind of unaware of the pace of it all, but it's happening at such a strange exponential rate. | ||
| For better and for worse, right? | ||
| And this is definitely on the better side of the equation. | ||
| There's a bunch of stuff like one of the papers that actually Google DeepMind came out with earlier in the year was in a single advance, like a single paper, a single AI model that they built, they expanded the set of stable materials. | ||
| Coffee's terrible. | ||
| I'll just tell you right now. | ||
| Jamie, it sucks. | ||
| Poor Jason. | ||
| I love this. | ||
|
unidentified
|
I always think about that. | |
| I didn't give enough time to boil it. | ||
| Yeah, that's what it is. | ||
| It just never really brewed. | ||
| It's terrible. | ||
| Terrible coffee is my favorite. | ||
| AI can solve that problem too, probably. | ||
| Wait till you try this terrible coffee, though. | ||
| You're going to be like, this is fucking terrible. | ||
|
unidentified
|
We're going to get some cold ones. | |
| Oh, he looks like. | ||
| It's terrible. | ||
| It's terrible. | ||
| Yeah, I could just see that calculation of like. | ||
| Like if you're dating a really hot girl and she cooks for you. | ||
| Like, thank you. | ||
| This is amazing. | ||
|
unidentified
|
This is the best macaroni and cheese ever. | |
| In fairness, if Scarlett Johansson's voice was actually giving you that kind of thing. | ||
| Oh, I believe this is the best coffee I've ever had. | ||
| Keep talking. | ||
|
unidentified
|
May I have some more, please, guv'nor? | |
| Yeah. | ||
| So there's this one paper that came out, and they're like, hey, by the way, we've increased the set of stable materials known to humanity by a factor of 10. | ||
| Oh, my God. | ||
| So if on Monday we knew about 100,000 stable materials, we now know about a million. | ||
| A bunch of them were then validated and replicated by Berkeley as a proof of concept. | ||
| And the stable materials we knew before, like as of that Wednesday, had accumulated since ancient times. | ||
| Like the ancient Greeks like discovered some shit. | ||
| The Romans discovered some shit. | ||
| The Middle Ages discovered. | ||
| And then it's like, oh, yeah, yeah, all that. | ||
| That was really cute. | ||
| Like, boom. | ||
| Instantly. | ||
| One step. | ||
|
unidentified
|
Yep. | |
| So, and that's amazing. | ||
| Like, we should be celebrating. | ||
| We're going to have great phones in 10 years. | ||
| Dude, we'll be able to get addicted to feeds that we haven't even thought of. | ||
| So, I mean, you're making me feel a little more positive. | ||
| Like, overall, there's going to be so many beneficial aspects to AI. | ||
| Oh, yeah. | ||
| And it's just what it is, is just an unbelievably transformative event that we're living through. | ||
| It's power, and power can be good and it can be bad. | ||
| And that's an immense power. | ||
| It can be immensely good or immensely bad. | ||
| And we're just in this, who knows? | ||
| We just need to structurally set ourselves up so that we can reap the benefits and mind the downside risk. | ||
| Like, that's what it's always about, but the regulatory story has to unfold that way. | ||
| Well, I'm really glad that you guys have the ethics to get out ahead of this and to talk about it with so many people and to really blare this message out, because I don't think there's a lot of people doing that. Like, I had Marc Andreessen on, who's brilliant, but he's like, all in. | ||
| It's going to be great. | ||
| And maybe he's right. | ||
| Maybe he's right. | ||
| Yeah, but I mean, you have to hear all the different perspectives. | ||
| And I mean, like, massive, massive props honestly go out to the team at the State Department that we work with. | ||
| Because one of the things also is, over the course of the investigation, the way it was structured, it wasn't like a contract where they farmed it out and we went off on our own. | ||
| It was the two teams actually like worked together. | ||
| The two teams together, the State Department and us, we went to London, UK. | ||
| We talked and sat down with DeepMind. | ||
| We went to San Francisco. | ||
| We sat down with Sam Altman and his policy team. | ||
| We sat down with Anthropic, all of us together. | ||
| One of the major reasons why we were able to publish so much of the whistleblower stuff is that those very individuals were in the rooms with us when we found out this shit. | ||
| And they were like, oh, fuck. | ||
| Like, the world needs to know about this. | ||
| And so they were pushing internally for a lot of this stuff to come out that otherwise would not. | ||
| And I also got to say, like, I just want to memorialize this too. | ||
| That investigation, when we went around the world, we were working with some of the most elite people in the government that I would not have guessed existed. | ||
| That was honestly. | ||
| Speak more. | ||
| Well, I can't, really. | ||
| It's hard to be specific. | ||
|
unidentified
|
Did you see any UFOs? | |
| Tell me the truth. | ||
| Did they take you to the hangar? | ||
| No, no, there's no hangar. | ||
| There's no hangar. | ||
| Yeah, I believe. | ||
| We cut that. | ||
| There's no hangar, Joe. | ||
| Don't worry, sweetie. | ||
| Don't worry about it. We didn't go that far down the rabbit hole, you know. We went pretty far down the rabbit hole. | ||
| And yeah, there are individuals who are just absolutely, absolutely elite. | ||
| Like the level of capability, the amount that our teams gelled together at certain points, the stakes, like the stuff we did, the stuff they made happen for us in terms of brain data. | ||
| So they brought together like 100 folks from across the government to discuss like AI on the path to AGI and go through the recommendations that we had. | ||
| Oh, yeah. | ||
| This was pretty cool, actually. | ||
| It was like the first, basically the first time the U.S. government came together and seriously looked at the prospect of AGI and the risks there. | ||
| And we had, it was wild. | ||
| I mean, again, it's like, it's us two friggin' yahoos, like, what the hell do we know, and our amazing team. | ||
| And it was, yeah, there was a senior White House rep there who referred to it as a watershed moment in U.S. history. | ||
|
unidentified
|
Wow. | |
| Well, that's encouraging. | ||
| Because again, people do like to look at the government as the DMV. | ||
| Yeah. | ||
| Or the worst aspects of bureaucracy. | ||
| There's certainly room for things like congressional hearings on these whistleblower events. | ||
| Certainly congressional hearings that we talked about on the idea of liability and licensing and what regulatory agencies we need just to kind of like start to get to the meat on the bone on this issue. | ||
| But yeah, opening this up, I think is just really important. | ||
| Well, shout out to the part of the government that's good. | ||
| Shout out to the government that gets it, that's competent and awesome. | ||
| And shout out to you guys because this is heady stuff. | ||
| It's very difficult to grasp. | ||
| Even in having this conversation with you, I still don't know how to feel about it. | ||
| I think I'm at least slightly optimistic that the potential benefits are going to be huge. | ||
| But what a weird passage we're about to enter into. | ||
| It's the unknown. | ||
| Yeah, truly. | ||
| Thank you, gentlemen. | ||
| Really appreciate your time. | ||
| Appreciate what you're doing. | ||
| Thanks, Joe. | ||
| It's been interesting. | ||
| If people want to know more, where should they go? | ||
| What should they follow? | ||
| I guess gladstone.ai slash action plan is the one that has our action plan. | ||
| With gladstone.ai, all our stuff is there. | ||
| I should mention too, I have this little podcast called Last Week in AI. | ||
| We cover sort of the last week's events, all through the same sort of lenses. | ||
| We should do that every hour. | ||
|
unidentified
|
Yeah. | |
| Last hour in AI. | ||
| It's like a week is not enough time. | ||
| We could be at war. | ||
| Our list of stories keeps growing. | ||
| Aliens landing. | ||
| Yeah, anything can happen. | ||
| Time travel. | ||
| You'll hear it there first. | ||
| Yeah. | ||
|
unidentified
|
All right. | |
| Well, thank you guys. | ||
| Thank you very much. | ||
| Appreciate it. |