There is a symmetry to the comments from our panelists: that what's required, even as we have this amazing military-driven transformation, an inflection point, is clear strategic planning and guidance.

I should say in closing, Malcolm Kerr was a friend of mine. I can remember walking with him on the lawn of the American University of Beirut, looking out over the Mediterranean, not long before he was killed. And I do often think of him as somebody who embodied our hopes for the Middle East, and the way that those were crushed. And I hope when we come back next year, we'll actually answer the question, tell me how this ends, but we'll have a little more progress in this arc that we've been trying to describe for you. So thank you very much, Ben.
Former CIA Director David Petraeus joins Google's Global Affairs President Kent Walker and other panelists to discuss the security implications of artificial intelligence. This at the Aspen Security Forum in Aspen, Colorado. It's about 25 minutes.
It's funny, as a journalist, there's one question I always hesitate to ask, and it's the: so tell me what happens next. Predict the future. Because the only remotely honest answer a person can give is, oh, it beats me. None of us can predict the future. None of us know exactly what is going to happen, particularly when it comes to tech, AI, some of the big themes that we have already been hitting this morning. So welcome to a panel to whom I'm about to put the question: tell me what happens next, predict the future. I want to start by teasing out some of the themes, questions that have emerged on earlier panels, that we've kicked around a little bit but not resolved.
| And I want to start because we have significant military battlefield expertise on stage with a former CENTCOM commander, General Europe first. | ||
| Speak to the ways that you see the battlefield being transformed. | ||
| What opportunities do you see in the air on the land on the sea, below the sea? | ||
| Unmanned. | ||
| And increasingly, it will not just be remotely piloted, it's going to be algorithmically piloted. | ||
And you can predict the future by seeing what is going on in the present, frankly, right in Ukraine. I'm a frequent visitor there. I've watched as they formed not just an Army, Navy, Air Force, but an unmanned systems force, as they have introduced unmanned systems to every single battlefield formation. Every company now has a drone platoon; every battalion, a drone company; every brigade, a drone battalion. Then there are other huge regiments that divide up the front lines among them.
To give you a sense of the scale of this: the last time I was there, about two or three months ago, I asked the overall commander of Ukrainian forces, how many drones did you employ yesterday against the Russians? And he said, almost 7,000. 7,000. Do the math. I hope we have more than 7,000 drones in the U.S. military, but it can't be that many times more. So they're going to produce 3.5 million unmanned systems.
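[Editor's note: a quick back-of-envelope check of that math. The 7,000-per-day and 3.5 million figures are the panel's; the arithmetic below is an editorial illustration.]

```python
# Back-of-envelope check of the drone figures cited above.
drones_per_day = 7_000
annual_usage = drones_per_day * 365
print(f"Usage at that daily rate: ~{annual_usage:,} drones/year")  # ~2,555,000

production_target = 3_500_000
print(f"A 3.5M target covers that usage {production_target / annual_usage:.2f}x over")  # ~1.37x
```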
And keep in mind, this is not just in the air, of various types, everything from short-duration suicide drones to much longer-duration strategic drones going in and closing down Moscow's international airport every other day or so. This is maritime drones. How does a country that has no Navy sink one-third of the Russian Black Sea Fleet? Aerial drones that find the ships, and maritime drones that sink them. How does a country shoot down aircraft over occupied Crimea by shooting air defense missiles off maritime unmanned systems? How does a country take $1 million worth of drones, parked outside airfields thousands of kilometers apart that have Russia's strategic aircraft on them, and with that $1 million worth of drones damage or destroy $5 to $7 billion worth of strategic aircraft, some of which cannot be replaced?
So you can see that future. And again, right now, most of those drones are remotely piloted. The future is going to be systems that are algorithmically piloted.
And so if you now turn it to a U.S. scenario: our Indo-Pacific commander has publicly described what he wants to do in the Taiwan Strait, 110 miles of open ocean, a very formidable task. And he wants to turn that into, his term, a hellscape. How do you turn that into a hellscape? Unmanned systems underneath the surface of the water, massive numbers of them; on the surface of the water; in the air; on the ground; in outer space; the equivalent in cyberspace, the cognitive arena, you know, all of these domains of warfare. And again, increasingly not remotely piloted. So the human in the loop is going to become the human on the loop. In other words, the human establishes the criteria for the machine: the mission, and then the tasks, and then what it has to meet. And by the way, AI is going to write the algorithms, with a few bits of input from humans.
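[Editor's note: a minimal sketch of the "human on the loop" idea just described, as illustrative Python. All names, criteria, and thresholds here are hypothetical; the point is that the human sets standing mission criteria and keeps an abort switch, rather than approving each individual action.]

```python
from dataclasses import dataclass

@dataclass
class MissionCriteria:
    # Human-set envelope: the mission, the tasks, the conditions the machine must meet
    allowed_zone: tuple            # (south, west, north, east) bounding box
    allowed_target_types: frozenset
    max_engagements: int

class OnTheLoopController:
    """Machine acts autonomously inside human-set criteria; the human monitors and can abort."""
    def __init__(self, criteria: MissionCriteria):
        self.criteria = criteria
        self.engagements = 0
        self.human_abort = False   # the human "on the loop" can flip this at any time

    def authorize(self, target_type: str, lat: float, lon: float) -> bool:
        # No per-action human approval: these checks ARE the human's standing guidance
        if self.human_abort:
            return False
        if target_type not in self.criteria.allowed_target_types:
            return False
        s, w, n, e = self.criteria.allowed_zone
        if not (s <= lat <= n and w <= lon <= e):
            return False
        if self.engagements >= self.criteria.max_engagements:
            return False
        self.engagements += 1
        return True
```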
I think you've just given us the seeds for about five panels when we all convene back here next year. Let me pick up on one other thread that I want to bring you two in on.
Charlie Dent on this stage yesterday, former Republican Congressman Charlie Dent, talked about the need to maintain dominance in the race with China on tech, on AI. First of all, are we all in agreement that that's where we are? Any dissent on the idea that the U.S. retains dominance in this moment?

That we currently do, or that we should?

That we currently do.
Well, Kent and I, I think, are in heated agreement on this. The Chinese large language models are rapidly catching up, if not there. I think this is the year that the Chinese achieve parity with us. Kent, do you agree?
I think it's neck and neck. I think a few years ago we would have said we were years ahead. Now I think we're months ahead. And in some areas, they may well be ahead. And to step back: the stakes that the General has started to lay out on the battlefield are significant. But even more fundamentally, this is a race for geopolitical influence. This is a race for economic leadership around the world. If you go back to what was called the long nineteenth century, between the French Revolution and the First World War, Great Britain dominated that century because they dominated in steel and coal. The United States dominated the 20th century because we led in mass production and materials science. So the question is, who's going to lead in the 21st century? And the early signs are not as auspicious as we would like.
The Australian Strategic Policy Institute does a review every few years of who is leading in critical technologies. They look at 64 different technologies, from batteries to advanced engines to advanced chemistry. In 2003, the United States led in 60 of those 64 categories. Today, China leads in 57 of 64. Now, the good news is we do think that AI is a key element in turning that around. Because AI is not just a scientific breakthrough; it's a breakthrough in how we make breakthroughs. So that's new generations of materials science. That's quantum. That's personalized medicine. And many more. But we really have to lean into this and have an affirmative, pro-innovation approach.
Just to follow up and make this specific: you just said, Anya, you think this might be the year where China achieves parity with the U.S. in large language models. What do you mean? Like, what are the Chinese about to be able to do as well as we do?

So when you look at the frontier models, as we call them, it used to be that Google, OpenAI, Anthropic, maybe Meta were solidly in the lead. You all have heard about DeepSeek, but behind DeepSeek are probably 15 other very exceptional Chinese models, including Kimi and others.
And the problem with this race is, it's exactly as you said, Kent: you don't quite know where it leads. It's not just about having the best model. It's how do you use it in your society? How do you implement it? And I think here the Chinese are actually eating our lunch. Starting September 1st of this year, every student K through 12 in China will have AI lessons. Age-appropriate: how do you use the model? How do you interact with it? How do you do it ethically? In the U.S., the Trump administration actually had a pretty good executive order on AI and education, but it's baby steps. You know, we're training a couple hundred thousand teachers. We're starting to think about it. And so if you think about not just where we are at the frontier of the technology, but how people are actually using that technology in their societies, they're doing well.
Let me pick up on that. Sergey Brin, one of our co-founders, has a saying: ideas are easy, execution is hard. And the United States has a history of being the first to invent technologies, but not necessarily the best to deploy them. And that learning by doing, whether it's televisions or color printers or many other technologies that are now manufactured outside the United States, is a sobering note for us. China has launched its AI Plus program. It is spending hundreds of billions of dollars a year investing in AI. There are 200,000 Chinese companies using AI today, and something like 600 million Chinese report that they are already actively using AI. Now, you know, good on them for taking this seriously, but it's a real challenge to us. There's a book that's just out by Jeffrey Ding called Technology and the Rise of Great Powers. And the secret, he says, is an innovation economy, yes, but a diffusion economy is just as important: get these tools in the hands of the people who need to use them, and that actually makes the tools better.
Well, and it's so interesting, because, Anya, you just made the point that China is teaching this at an age-appropriate level in schools. I suspect many of my fellow parents in this room would agree our schools spend a lot more time trying to keep children away from it, because we want them to learn. There are good intentions, but if we agree that maintaining dominance, that not getting left behind in this race, is a noble goal, what do you think the policies should be? What do we do?
So I have to give the Trump administration an enormous amount of credit here. They came in right away. They have a lot of technologists who've taken time out of their jobs in Silicon Valley and elsewhere, who really understand this stuff. And they're embedded across this administration. David Sacks, the AI czar, says we need to cement the U.S. technology stack around the world. They're doing a lot. Here are the good things they're doing: making it easier to build power plants, we're referring to Dan Poneman again; deregulation. We're actually doing a pretty good job, I would say, I don't know if David would agree, on trying to break through some of the bureaucracy in the Pentagon to get drones and other tech-enabled things in there faster.

Trying.

Trying. Trying to do that.
But there is a real, I think there's a real renewed vigor and energy, and a willingness to beat China, that I haven't seen in a previous administration.

But there are also vested interests in what Senator McCain used to term the military-industrial-congressional complex, each element of which wants to maintain legacy systems, legacy processes, legacy basing structures, legacy maintenance contracts, you name it, and frustrates the efforts of those who know we need to be accelerating our transition. In very simplistic terms, we need to transform our military from a small number of very large platforms that are incredibly capable, exorbitantly expensive, very heavily manned, and increasingly vulnerable, to an extraordinary number of unmanned systems, again, under the sea, surface, air, ground, space, et cetera, which will increasingly be not human-piloted or remotely piloted, but algorithmically piloted. Again, we still need some of those platforms. It's not the entire force. There are other scenarios around the world that call for those. But we're not remotely making that kind of dramatic move.
I would, in fact, submit that the Chinese are going to school more on Ukraine than we are in the United States. Our vaunted Army Lessons Learned teams, Joint Forces Lessons Learned teams, are not on the ground, because, understandably, we're concerned about boots on the ground and American casualties. But it's very hard to divine the lessons if you're not on that ground with the actual units, seeing what they're doing.
You know, the latest innovation in Ukraine is because they can't work through the jamming. They're trying to maintain a command and control link, a radio link, if you will, sometimes using Starlink, and a GPS link. So now what they're doing is they just spool out a little fiber-optic cable behind the drones. They're doing this on a scale of thousands. By the way, it makes a mess of the farmers' fields. Now you have to wade through a lot of, basically, fishing line, fiber-optic cable.
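[Editor's note: a schematic sketch of the control-link tradeoff being described. Names and the threshold are hypothetical; the underlying point from the panel is that a physical fiber tether is immune to RF jamming, while radio and GPS links degrade under it.]

```python
def pick_control_link(fiber_intact: bool, rf_jamming_level: float) -> str:
    """Illustrative link selection: prefer the unjammable tether, degrade gracefully."""
    if fiber_intact:
        return "fiber"       # spooled fiber-optic tether: no RF link to jam
    if rf_jamming_level < 0.5:
        return "radio"       # degraded but usable RF/Starlink command link
    return "autonomous"      # link lost: fall back to onboard (algorithmic) control
```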
But the innovation is so rapid. If you're gone for three months and you come back, you find a whole big breakthrough that they have just actually put onto the battlefield. And they're doing this at a rate and a pace that only a country that has extraordinary IT talent, manufacturing skill, design skills, and is fighting for its very survival could actually do.
One sentence to double down on what David just said: when I have walked the halls of the Pentagon and had these conversations, until recently, it felt like we are the Titanic, we see the iceberg, and we are not turning. And I would say the Trump administration is doing a really good job trying to turn the Titanic. But we should have sicced DOGE on the military procurement system instead of AID.
So last night, I asked Google's personal AI assistant to help me come up with a good question to put to Kent Walker of Google before a live audience. It came up with four in less than 10 seconds. They were not bad. Then I asked it to tailor a question for an audience particularly interested in national security. It did so. It actually added a helpful hint. I can't wait for what's coming. This is for me: when you ask the question, deliver it confidently and allow for a thoughtful pause before he responds. Good luck. Here goes. Fellow moderators, take note.

Mr. Walker. Very polite. Mr. Walker, what specific, concrete measures is Google implementing to ensure that increasingly powerful AI capabilities cannot be leveraged by adversarial nation-states or non-state actors in ways that directly threaten U.S. national security interests or global stability? How does Google balance its commercial interests with the imperative to prevent catastrophic misuse? Mr. Walker, have at it.
This is my thoughtful pause, by the way. It's a very good question. I think Gemini's earning its keep. We talk about being bold and responsible, and the responsible side of it is very much part of that. We are right now at the Pareto frontier of capability and efficiency of these models, and the rate is increasing at a remarkable pace. We are 300 times, not 300 percent, 300 times more efficient than the state of the art was just two years ago. That is a remarkable rate of change.
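[Editor's note: a worked illustration of the rate that claim implies. The 300x-in-two-years figure is the panel's; the compounding arithmetic below is editorial.]

```python
import math

total_gain, months = 300.0, 24
monthly = total_gain ** (1 / months)  # 300x spread over 24 months, compounded
print(f"Implied compounded gain: ~{(monthly - 1) * 100:.0f}% per month")          # ~27%
print(f"Efficiency doubling time: ~{math.log(2) / math.log(monthly):.1f} months")  # ~2.9
```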
In that kind of dynamic environment, we are spending a lot of time building guardrails into our models to minimize the chances that they can be hacked or abused. That's particularly important as we now get into the agentic era of AI, where these tools will be able to take multi-step actions, which will be extraordinarily useful for all of us in our daily lives, including the scientific community, but also potentially dangerous when it comes to things like chemical weapons, biological weapons, radiological weapons, nuclear weapons. So how do we take steps against that? We build hard guardrails into the model. We build in what are called deterministic guardrails.
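[Editor's note: a minimal sketch of what a "deterministic guardrail" can mean, as distinct from asking a model to police itself: a fixed rule that runs outside the model and always gives the same decision for the same input. The patterns and the call_model stub are illustrative placeholders, not Google's implementation.]

```python
import re

# Hard, rule-based checks applied outside the model; same input, same decision, every time.
BLOCKED = [
    re.compile(r"synthesi[sz]e.*nerve agent", re.IGNORECASE),
    re.compile(r"enrich.*uranium", re.IGNORECASE),
]

def deterministic_guardrail(prompt: str) -> bool:
    """True if allowed; a blocked prompt cannot talk its way past a fixed rule."""
    return not any(p.search(prompt) for p in BLOCKED)

def call_model(prompt: str) -> str:
    return f"(model response to: {prompt!r})"   # stub standing in for a real model call

def answer(prompt: str) -> str:
    if not deterministic_guardrail(prompt):
        return "Request refused by policy."
    return call_model(prompt)                    # model runs only after the hard check
```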
Can I just push you on this, to sharpen your chatbot's question about how you balance commercial interests against the potential for catastrophic misuse? Take us inside a meeting at Google. Are there conversations where you think, God, this would be so cool, it would make us a lot of money, but boy, could that go really wrong, so we're not going there?
So we have held off on releasing some models over the years where we have had concerns. To give you an example from a few years back: we had a model that would do lip reading, speech recognition at a distance. And we said, well, wait a second, that could also be misused for surveillance. So we held off publishing a paper around that, and only published the part that would be useful for people who have hearing issues, where it's right up close and you can see the person's face. So those kinds of back-and-forths happen all the time. We have teams that are focused just on the frontier-model-safety classes of questions. We have red teams that go in and try to break the models in different ways, to make sure that they're not subject to these abuses. Now, that said, it's a fast-evolving technology, and nobody is going to be perfect in this area. But we're devoting a lot of work, ourselves and across the industry through something called the Frontier Model Forum, to cutting-edge research that benefits everybody working in this area, to make the guardrails as strong as they can be and limit the chances of jailbreaking.
I have a question, and I want all three of you to take this on, lightning round. General Petraeus, you famously asked the "tell me how this ends" question. You were talking about the war in Iraq. But I want you to apply it to this, because I keep thinking: you built your career in a world, the military, that places a premium on predictability, on the ability to plan. None of us know, not one of us, how all this is going to go with tech, with AI. How do you think about that?
Well, if I just come back to the world that I know best, which is that of the military, and perhaps even intelligence: in the not-too-distant future, you're going to see unmanned systems fighting unmanned systems, and they will not be remotely piloted. They will be algorithmically piloted. So it's going to be AI systems, in a sense, fighting against other AI systems in the form of unmanned systems, again, in all the domains of warfare. And that is really something. And so it's your technology fighting their technology, and the human is not in the loop. The human may not even be on the loop all that much, because indeed the algorithms increasingly are going to be produced by the AI large language models.
And of course, I'd actually be curious if you agree with this: those models reportedly will be, within two years, at the level of a Nobel laureate.

I think right now they're at the level of a very good graduate student; next year, a great PhD; and the year after that, again, Nobel laureate-level intellectual thinking. The test we use, and Demis Hassabis actually just won the Nobel Prize for some of his work in this area, is: if these models had all the information available to Einstein in 1900, could they come up with the theory of relativity? And we are certainly making progress in that direction.

Would it have taken 10 seconds or 20 seconds?
And wait till quantum, by the way. We haven't really touched on that, but then take this incredible acceleration of computational power.

And I should say quantum is on track. We think that by 2030 we will have working quantum computers, which will more than exponentially increase the rate of AI progress. AI is making quantum faster; quantum will make AI faster; so you get this combinatorial loop of innovation. So we are already working on post-quantum cryptography, trying to get ready for that future.
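[Editor's note: a minimal sketch of the hybrid approach common in post-quantum migration, using a standard HKDF (RFC 5869) built from Python's stdlib hmac/hashlib. The two input secrets are random stand-ins for what ECDH and a post-quantum KEM such as ML-KEM would produce; an attacker must break both to recover the session key.]

```python
import hashlib, hmac, secrets

def hkdf_sha256(ikm: bytes, info: bytes, length: int = 32) -> bytes:
    """Minimal HKDF (RFC 5869) with an all-zero salt: extract, then expand."""
    prk = hmac.new(b"\x00" * 32, ikm, hashlib.sha256).digest()
    okm, t, counter = b"", b"", 1
    while len(okm) < length:
        t = hmac.new(prk, t + info + bytes([counter]), hashlib.sha256).digest()
        okm += t
        counter += 1
    return okm[:length]

classical_secret = secrets.token_bytes(32)  # stand-in for an ECDH shared secret
pq_secret = secrets.token_bytes(32)         # stand-in for an ML-KEM shared secret
# Hybrid derivation: the session key stays safe unless BOTH inputs are broken
session_key = hkdf_sha256(classical_secret + pq_secret, b"hybrid-kex-demo")
```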
Five years is not a very long time.

No, it is not.
Anya, tell us how this ends.

Let me make the concerns more specific: give you the dark scenario, and then why that scenario is absolutely not inevitable. Here are the things that could go wrong. A non-state actor now has the equivalent of a PhD in chemistry, biology, physics sitting on their shoulder if they're tinkering to make a weapon of mass destruction. That's possible. It hasn't happened yet. You could imagine a scenario where, at some point in the next few years, you have a 9/11 of AI, where some bad actor uses it to do harm in the physical world. There's another harm, which is that the AI itself, and I know a lot of us in Silicon Valley spend a lot of time on this, self-replicates in ways that are deeply harmful, does things that we don't intend, you're going to give the paperclip example, jailbreaks all of the great safeguards that Google and others are putting in place. I would call that a potential Chernobyl of AI, where the technology itself does harm.
None of this is inevitable. And by the way, you have to call out the UK here. Rishi Sunak did something amazing: three or four years ago, he started the UK AI Safety Institute. It is completely not woke. They do testing in advance of models being deployed. I think Google and others voluntarily have those models tested for the types of risks we've been talking about: cyber, chemical, bio, a model jumping its own safeguards. And there are now 14 safety institutes all around the world. The Trump administration has been a little more quiet about that, but they have not gotten rid of the U.S. one. And whenever we talk to the Chinese about these issues, their scientists are also deeply worried. And so I think there is a groundswell here to do something really positive on safety testing before we have a disaster.
We're going to have time for maybe one question. I do want to just flesh out the paperclip example. This is credit to you, Kent. You are of the mind that there is a very small but not zero percentage chance that AI will run the world and we will all be paperclips in its service.

And so this is an area, it's called the alignment problem, where you need to make sure that your models are doing what you want them to do, whether that's ordering one pizza instead of 10 pizzas, or not doing grievous harm. And that's one of the reasons why this area of AI safety research continues to be extremely important and something we take seriously.
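[Editor's note: a toy illustration of the "one pizza, not 10" point. Alignment in the small often reduces to externally enforced bounds on an agent's actions, rather than trust in the model's intent. All limits and names here are hypothetical.]

```python
MAX_QUANTITY = 2        # human-set bounds, enforced outside the model
MAX_SPEND_USD = 40.00

def validate_order(item: str, quantity: int, unit_price: float) -> None:
    """Reject any agent-proposed order that exceeds the standing limits."""
    if quantity > MAX_QUANTITY:
        raise ValueError(f"refusing {quantity} x {item}: limit is {MAX_QUANTITY}")
    if quantity * unit_price > MAX_SPEND_USD:
        raise ValueError(f"order exceeds ${MAX_SPEND_USD:.2f} budget")

validate_order("pizza", 1, 18.50)      # fine
# validate_order("pizza", 10, 18.50)   # raises: the bound, not the model, says no
```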
That said, coming back to the what-happens-next question: it's difficult to make predictions, especially about the future. Something I learned from my son, who's a history major, is something called the aperture of now. We look back at history and we see all these patterns, and it's so obvious that this led to this, led to this. And then we ask, well, what's going to happen next? We say, well, I don't know. It's all completely contingent, because it goes through the aperture of now. I would say that, to the extent history is any guide, technology has had a remarkably positive impact on human lives around the world. Average human lifespan has doubled in the last hundred years around the world. We have cleaner water, we have better food, we have better medical care, and not just people in the developed world, but people in the developing world as well. AI is a general-purpose technology. There's a lovely paper called "GPTs are GPTs": generative pre-trained transformers are general-purpose technologies. And if we do this right, and we take into account the risks that Anya has laid out, but also the benefits of being able to dramatically make our economies more productive, raise the standards of science, create nuclear fusion, create clean water for people around the world, the upside is really tremendous. So the stakes are extremely high, and we all have a deep responsibility to get it right.
Okay, one quick question. Anybody out there?

Mary Louise, I think we should just stop there.

We should take a question. Yes, sir, right here.

Thanks. Patrick Lock from Intermap Technologies; we're a mapping company.
My question is around, with all of these incredible advances, some of which are actually happening today: on the battlefield in Ukraine, in some battles, 80% of the casualties are inflicted by drone, FPV drone. We just struck Iran. In the last panel, they were talking about potential blowback from that and uncertainty around that. And my question to you is: is this filtering into, and being articulated in terms of, our bright red lines, our homeland security, our deterrence? These evolutions, which we're seeing in real time in Ukraine, are they being adequately articulated from a policy perspective?
Quick answer, General.

I think we're going to get some wake-up calls in the United States from drone attacks that are carried out. And I think only then will we truly get serious about having the kind of counter-drone defenses around any large gathering of people, any significant institution, probably prisons, you name it. But I think there's going to be some of that that will take place over time.

Okay, all right. Thank you. Thanks, everybody. Thank you.