Aug. 12, 2025 23:32-00:02 - C-SPAN
29:54
Discussion on Security Implications of AI
unidentified
C-SPAN Now, our free mobile video app, or online at c-span.org.
On Wednesday, security experts examine the recently released Warfighting Framework from the U.S. Space Force, which outlines operational planning for conflict and combat in the space domain.
They'll also discuss peacetime competition and how the U.S. and its allies can project power in space.
We'll have live coverage from the Atlantic Council at 10 a.m. Eastern on C-SPAN.
C-SPAN Now, our free mobile app, and online at c-span.org.
Celebrate National Book Lovers Day with C-SPAN by shopping our sale, happening now at c-spanshop.org, C-SPAN's official online store.
Enjoy savings of up to 10% on all books site-wide.
Every purchase helps support our non-profit operations.
Scan the code or visit c-spanshop.org to browse the National Book Lovers Day sale.
Going on now.
Former CIA Director David Petraeus joins Google's Global Affairs President Kent Walker and other panelists to discuss the security implications of artificial intelligence.
This is at the Aspen Security Forum in Aspen, Colorado.
It's about 25 minutes.
It's funny, as a journalist, there's one question I always hesitate to ask, and it's the "So, tell me what happens next."
Predict the future.
Because the only remotely honest answer a person can give is, oh, beats me.
None of us can predict the future.
None of us know exactly what is going to happen, particularly when it comes to tech, AI, some of the big things that we have already been hitting this morning.
So welcome to a panel to whom I'm about to put the question: tell me what happens next, predict the future.
I want to start by teasing out some of the themes and questions that have emerged on earlier panels, which we've kicked around a little bit but not resolved.
And I want to start because we have significant military battlefield expertise on stage with a former CENTCOM commander. General, you're up first.
Speak to the ways that you see the battlefield being transformed.
What opportunities do you see in the air, on the land, on the sea, below the sea?
Unmanned.
And increasingly, it will not just be remotely piloted, it's going to be algorithmically piloted.
And you can predict the future by seeing what is going on in the present, frankly, right in Ukraine.
I'm a frequent visitor there.
I've watched as they formed not just an Army, Navy, Air Force, but Unmanned Systems Force, as they have introduced unmanned systems to every single battlefield formation.
Every company now has a drone platoon; every battalion, a drone company; every brigade, a drone battalion.
Then there are other huge regiments that divide up the front lines among them.
To give you a sense of the scale of this, the last time I was there was about two or three months ago.
david petraeus
I asked the overall commander of Ukrainian forces, how many drones did you employ yesterday against the Russians?
unidentified
And he said, almost 7,000.
7,000?
Do the math.
I hope we have more than 7,000 drones in the U.S. military, but it can't be that many times more.
So they're going to produce 3.5 million unmanned systems.
And keep in mind, this is not just in the air, with drones of various types, everything from short-duration suicide drones to much longer-duration strategic drones going in and closing down Moscow's international airport every other day or so.
This is maritime drones.
How does a country that has no Navy sink one-third of the Russian Black Sea Fleet?
Aerial drones that find the ships and maritime drones that sink them.
How does a country shoot down aircraft over occupied Crimea by shooting air defense missiles off maritime unmanned systems?
How does a country with $1 million worth of drones, parked outside airfields thousands of kilometers apart that hold Russia's strategic aircraft, damage or destroy five to seven billion dollars' worth of strategic aircraft, some of which cannot be replaced?
So you can see that future.
And again, right now, most of those drones are remotely piloted.
But the future is going to be systems that are algorithmically piloted.
And so if you now turn it to a U.S. scenario, our Indo-Pacific commander has publicly described what he wants to do in the Taiwan Strait, 110 miles of open ocean, very formidable task.
And he wants to turn that into his term, a hellscape.
How do you turn that into a hellscape?
david petraeus
Unmanned systems underneath the surface of the water, massive numbers of them, on the surface of the water, in the air, on the ground, in outer space, the equivalent in cyberspace, the cognitive area, you know, all of these domains of warfare.
unidentified
And again, increasingly, not remotely piloted.
So the human in the loop is going to become the human on the loop.
In other words, the human establishes the criteria for the machine: the mission, then the tasks, then the conditions it has to meet.
And by the way, AI is going to write the algorithms with a few bits of input from humans.
I think you've just given us the seeds for about five panels when we all convene back here next year.
Let me pick up on one other thread that I want to bring you two in on.
Charlie Dent on this stage yesterday, former Republican Congressman Charlie Dent talked about the need to maintain dominance in the race with China on tech, on AI.
First of all, just are we all in agreement that that's where we are?
Any dissent on the idea that the U.S. retains dominance in this moment?
That we currently do or that we should?
That we currently do?
Well, Kent and I, I think we are in heated agreement on this.
The Chinese large language models are rapidly catching up, if not there.
I think this is the year that the Chinese achieve parity with us.
Kent, do you agree?
I think it's neck and neck.
I think a few years ago we would have said we were years ahead.
Now I think we're months ahead.
And in some areas, they may well be ahead.
And to step back, the stakes that the general has started to lay out on the battlefield are significant.
But even more fundamentally, this is a race for geopolitical influence.
This is a race for economic leadership around the world.
If you go back to what was called the long century between the French Revolution and the First World War, Great Britain dominated that century because they dominated in steel and coal.
The United States dominated the 20th century because we led in mass production and material science.
So the question is who's going to lead in the 21st century?
And the early signs are not as auspicious as we would like.
The Australian Strategic Policy Institute does a review every few years of who is leading in critical technologies.
They look at 64 different technologies from batteries to advanced engines to advanced chemistry.
In 2003, the United States led in 60 of those 64 categories.
Today, China leads in 57 of 64.
Now, the good news is we do think that AI is a key element in turning that around.
Because AI is not just a scientific breakthrough, it's a breakthrough in how we make breakthroughs.
So that's new generations of material science.
That's quantum.
That's personalized medicine.
And many more.
But we really have to lean into this and have an affirmative pro-innovation approach.
Just to follow up and make this specific, you just said, Anya, you think this might be the year where China achieves parity with the U.S. in large language models.
What do you mean?
Like, what are the Chinese about to be able to do as well as we do?
So, when you look at the frontier models, as we call them, it used to be Google, OpenAI, Anthropic, maybe Meta were solidly in the lead.
You all have heard about DeepSeek, but behind DeepSeek are probably 15 other very exceptional Chinese models, including Qwen and others.
And the problem with this race is it's exactly as you said, Kent, you don't quite know where it leads.
It's not just about having the best model, it's how do you use it in your society, how do you implement it?
And I think here the Chinese are actually eating our lunch.
Starting September 1st of this year, every student K through 12 in China will have AI lessons, age-appropriate.
How do you use the model?
How do you interact with it?
How do you do it ethically?
In the U.S., the Trump administration actually had a pretty good executive order on AI and education, but it's baby steps.
You know, we're training a couple hundred thousand teachers.
We're starting to think about it.
And so, if you think about not just where are we at the frontier of the technology, but how are people actually using that technology in their societies, they're doing well.
Let me pick up on that.
Sergey Brin, one of our co-founders, has a saying: ideas are easy, execution is hard.
And the United States has a history of being the first to invent technologies, but not necessarily the best to deploy.
And that learning by doing, whether it's television or color printers or many other technologies that are now manufactured outside the United States, is a sobering note for us.
China has launched its AI Plus program.
It is spending hundreds of billions of dollars a year investing in AI.
There are 200,000 Chinese companies using AI today, and something like 600 million Chinese report that they are already actively using AI.
Now, that, you know, good on them for taking this seriously, but a real challenge to us.
There's a book that's just out by Jeffrey Ding called Technology and the Rise of Great Powers.
And the secret, he says, is an innovation economy, yes, but a diffusion economy is just as important.
Get these tools in the hands of the people who need to use them, and that actually makes the tools better.
Well, and it's so interesting because, Anya, you just made the point that China is teaching this at an age-appropriate level in schools.
I suspect many of my fellow parents in this room would agree our schools spend a lot more time trying to keep children away from it because we want them to learn.
There are good intentions, but what do you think the policies should be, if we agree that maintaining dominance, that not getting left behind in this race, is a noble goal?
What do we do?
So, I have to give the Trump administration an enormous amount of credit here.
They came in right away.
They have a lot of technologists who've taken time out of their jobs in Silicon Valley and elsewhere who really understand this stuff, and they're embedded across this administration.
David Sacks, the AI czar, says we need to cement our U.S. technology stack around the world.
They're doing a lot.
Here are the good things they're doing.
Making it easier to build power plants.
We're referring to Dan Poneman again.
Deregulation.
We're actually doing a pretty good job, I would say.
I don't know if David would agree, on trying to break through some of the bureaucracy in the Pentagon to get drones and other tech-enabled things faster in there.
Trying.
It's going to be slow.
But I think there's a real renewed vigor and energy, and a willingness to break some china, that I haven't seen in a previous administration.
There's also vested interests in what Senator McCain used to term the military-industrial congressional complex, each element of which wants to maintain legacy systems, legacy processes, legacy basing structures, legacy maintenance contracts, you name it, and frustrates the efforts of those who know we need to be accelerating our transition.
We really need to go, this is very simplistic terms, but we need to transform our military from a small number of very large platforms that are incredibly capable, exorbitantly expensive, and very heavily manned, and increasingly vulnerable, to an extraordinary number of unmanned systems, again, under the sea, surface, air, ground, space, et cetera,
and which will increasingly be not human-piloted or remotely piloted, but algorithmically piloted.
Again, we still need some of those platforms.
It's not the entire force.
There are other scenarios around the world that call for those.
But we're not remotely making that kind of dramatic move.
I would, in fact, submit that the Chinese are going to school more on Ukraine than are we in the United States.
We don't have our vaunted Army Lessons Learned Team, our Joint Forces Lessons Learned Team, on the ground, because, understandably, we're concerned about boots on the ground and American casualties.
But if you don't do that, it's very hard to divine the lessons if you're not on that ground with the actual units seeing what they're doing.
You know, the latest innovation is in Ukraine, because they can't work through the jamming when they're trying to maintain a command-and-control link, a radio link, if you will, sometimes using Starlink, and a GPS link.
So now what they're doing is they just spool out little fiber optic cable behind the drones.
They're doing this at a scale of thousands of these.
By the way, it makes a mess of the farmers' fields.
Now you have to wade through a lot of fishing line, basically fiber optic cable.
But the innovation is so rapid.
If you're gone for three months and you come back, you find a whole big breakthrough that they have just actually put onto the battlefield.
And they're doing this at a rate and a pace that only a country that has extraordinary IT talents, manufacturing skills, design skills, and is fighting for its very survival could actually do.
One sentence to double down on what David just said: when I have walked the halls of the Pentagon and had these conversations, until recently it has felt like we are the Titanic; we see the iceberg, and we are not turning.
And I would say the Trump administration is doing a really good job trying to turn the Titanic.
But we should have sicced DOGE on the military procurement system instead of AID.
So last night, I asked Google's personal AI assistant to help me come up with a good question to put to Kent Walker of Google before a live audience.
It came up with four in less than 10 seconds.
They were not bad.
Then I asked it to tailor a question for an audience particularly interested in national security.
It did so.
It actually added a helpful hint.
I can't wait for what's coming.
This one was for me: "When you ask the question, deliver it confidently and allow for a thoughtful pause before he responds."
Good luck.
Here goes.
Fellow moderators, take note.
Mr. Walker.
Very polite.
Mr. Walker, what specific, concrete measures is Google implementing to ensure that increasingly powerful AI capabilities cannot be leveraged by adversarial nation-states or non-state actors in ways that directly threaten U.S. national security interests or global stability?
How does Google balance its commercial interests with the imperative to prevent catastrophic misuse?
Mr. Walker, how about it?
This is my thoughtful pause, by the way.
It's a very good question.
I think Gemini's earning its keep.
We talk about being bold and responsible, and the responsible side of it is very much, we are right now at the Pareto frontier of capability of these models and efficiency of these models, and the rate is increasing at a remarkable pace.
We are 300 times, not 300 percent, 300 times more efficient than the state of the art of just two years ago.
That is a remarkable rate of change.
In that kind of dynamic environment, we are spending a lot of time building in guardrails to our models to minimize the chances that they can be hacked or abused.
That's particularly important as we get now into the agentic era of AI, where these tools will be able to take multi-step actions, which would be extraordinarily useful for all of us in our daily lives, including the scientific community, but also potentially dangerous when it comes to things like chemical weapons, biological weapons, radiological weapons, nuclear weapons.
So how do we take steps against that?
We build hard guardrails into the model, what are called deterministic guardrails.
Can I just push you on this, to sharpen your chatbot's question about how you balance commercial interests against the potential for catastrophic misuse?
Take us inside a meeting at Google.
Are there conversations where you think, God, this would be so cool, it would make us a lot of money, but boy, could that go really wrong.
So we're not going there?
Yeah.
So we have held off on releasing some models over the years where we have had concerns.
I'll give you an example from a few years back. We had a model that would do speech recognition at a distance.
And we said, well, wait a second, that could also be misused for surveillance.
So we held off publishing a paper around that, and only published the part that would be useful for people who have hearing issues, where the speaker is right up close and you can see the person's face.
So those kinds of back-and-forths happen all the time.
We have teams that are focused just on the frontier model safety classes of questions.
We have red teams that go in and try to break the models in different ways, to make sure that they're not subject to these abuses.
Now, that said, it's a fast-evolving technology and nobody is going to be perfect in this area, but we're devoting a lot of work ourselves and across the industry through something called the Frontier Model Forum to try and do cutting-edge research to benefit everybody working in this area to make the guardrails as strong as they can be and limit the chances of jailbreaking.
I have a question, and I want all three of you to take this on, lightning round.
General Petraeus, you famously asked the "Tell me how this ends" question.
You were talking about the war in Iraq.
But I want you to apply it to this, because I keep thinking you built your career in a world, the military, that places a premium on predictability, on the ability to plan.
None of us know, not one of us, how all this is going to go with tech, with AI.
How do you think about that?
Well, if I just come back to the world that I know best, which is that of the military and perhaps even intelligence, it would be that you're going to see, in the not-too-distant future, unmanned systems fighting unmanned systems, and they will not be remotely piloted.
They will be algorithmically piloted.
So it's going to be AI systems, and again, in a sense, fighting against other AI systems in the form of unmanned systems, again, in all the domains of warfare.
And that is really something.
And so it's your technology fighting their technology, and the human is not in the loop. The human may or may not even be on the loop all that much, because indeed the algorithms increasingly are going to be produced by the AI large language models.
And of course, I'd actually be curious if you agree with this.
Those models reportedly will be within two years at the level of a Nobel laureate.
I think right now they're at the level of a very good graduate student; next year, a great PhD. And the year after that, again, Nobel-laureate-level intellectual thinking.
The test we use, from Demis Hassabis, who actually just won the Nobel Prize for some of his work in this area, is: if these models had all the information available to Einstein in 1900, could they come up with the theory of relativity?
And we are certainly making progress in that direction.
Would it have taken 10 seconds or 20 seconds?
And wait till quantum, by the way.
We haven't really touched on that, but then take this incredible acceleration of computational power.
And I should say quantum is on track.
We think that by 2030 we will have working quantum computers which will more than exponentially increase the rate of AI.
AI is making quantum faster, quantum will make AI faster.
So you get this combinatorial loop of innovation.
So we already are working on post-quantum cryptography, trying to get ready for that future.
Five years is not a very long time.
No, it is not.
Anya, tell us how this ends.
Let me make the concerns more specific, give you the dark scenario, and then why that scenario is absolutely not inevitable.
So, here are the things that could go wrong.
A non-state actor now has the equivalent of a PhD in chemistry, biology, or physics sitting on their shoulder if they're tinkering to make a weapon of mass destruction.
That's possible.
Hasn't happened yet.
You could imagine a scenario where, at some point in the next few years, you have a 9/11 of AI, where some bad actor uses it to do harm in the physical world.
There's another harm, and I know a lot of us in Silicon Valley spend a lot of time on this, which is that the AI itself self-replicates in ways that are deeply harmful, does things that we don't intend (you're going to give the paperclip example), and jailbreaks all of the great safeguards that Google and others are putting in.
I would call that a potential Chernobyl of AI, where the technology itself does harm.
None of this is inevitable.
And by the way, you have to call out the UK here.
Rishi Sunak did something amazing: three or four years ago, he started the UK AI Safety Institute.
It is completely not woke.
They do testing in advance of models being deployed.
I think Google and others voluntarily have those models tested for these types of risks we've been talking about.
Cyber, chemical, bio, jumping its own safeguards.
And there are now 14 safety institutes all around the world.
The Trump administration has been a little more quiet about that, but they have not gotten rid of the U.S. one.
The Chinese, whenever we talk to the Chinese about these issues, their scientists are also deeply worried.
And so I think there is a groundswell here to do something really positive on safety testing before we have a disaster.
We're going to have time for maybe one question.
I do want to just flesh out the paperclip example.
This is credit to you, Kent.
You are of the mind that there is a very small but not zero chance that AI will run the world and we will all be paperclips in its service.
And so this is an area, it's called the alignment problem, where you need to make sure that your models are doing what you want them to do, whether that's ordering one pizza instead of 10 pizzas or not doing grievous harm.
And that's one of the reasons why this area of AI safety research continues to be extremely important and something we take seriously.
That said, coming back to the what happens next question, it's difficult to make predictions, especially about the future.
Something I learned from my son, who's a history major, something called the aperture of now.
We look back at history and we see all these patterns, and it's so obvious that this led to this, led to this.
And then we ask, well, what's going to happen next?
We say, well, I don't know.
It's all completely contingent because it goes through the aperture of now.
I would say that, to the extent history is any guide, technology has had a remarkably positive impact on human lives around the world.
The average human lifespan around the world has doubled in the last hundred years. We have cleaner water, we have better food, we have better medical care, and not just people in the developed world, but people in the developing world as well.
AI is a general-purpose technology.
There's a lovely paper called GPTs are GPTs.
Generative pre-trained transformers are general-purpose technologies.
And if we do this right, and we take into account the risk that Anya has laid out, but also the benefits of being able to dramatically make our economies more productive, raise the standards of science, create nuclear fusion, create clean water for people around the world, the upside is really tremendous.
So the stakes are extremely high, and we all have a deep responsibility to get it right.
Okay, one quick question.
Anybody out there?
Mary Louise, I think we should just stop there.
We should take a question.
Yes, sir, right here.
Thanks.
Patrick Block from Intermap Technologies, we're a mapping company.
My question is around all of these incredible advances, some of which are actually happening today. On the battlefield in Ukraine, in some battles, 80% of the casualties are inflicted by drone, FPV drone.
We just struck Iran.
In the last panel, they were talking about potential blowback from that and uncertainty around that.
And my question to you is: is this filtering into and being articulated in terms of our bright red lines, our homeland security, our deterrence? Are these evolutions, which we're seeing in real time in Ukraine, being adequately articulated from a policy perspective?
Quick answer, General.
I think we're going to get some wake-up calls in the United States from drone attacks that are carried out.
And I think that only then will we truly get serious about having the kind of counter-drone defenses around any large gathering of people, any significant institution, probably prisons, you name it.
But I think there's going to be some of that that will take place over time.
I reckon.
All right.
Thank you.
Thanks, everybody.
Thank you.
Honor the person who first showed you democracy in action and ignite America 250, C-SPAN's 18-month ad-free celebration of our nation's story.
Give $25 or more by August 31st at c-span.org/donate and add your democracy hero to our online wall to keep these vital stories alive for viewers and learners everywhere.
As our thanks, you'll receive an exclusive democracy unfiltered decal.
Your gift helps make C-SPAN possible.
Visit c-span.org/donate today and join us in keeping America's story alive.
Thank you.
Weekends bring you Book TV featuring leading authors discussing their latest nonfiction books.
Here's a look at what's coming up this weekend.
At 8 p.m. Eastern, Stacey Abrams, a one-time Georgia state legislator and gubernatorial candidate, talks with former Librarian of Congress Carla Hayden about her latest fictionalized thriller, Coded Justice.
And then at 9:15 p.m. Eastern, Michael Grynbaum gives an inside look at the glamorous Condé Nast publishing empire, the people who crafted its publications, and the standards they set for American culture, with his book Empire of the Elite.
And at 10:15 p.m. Eastern, Book TV takes you to FreedomFest, an annual libertarian festival held this year in Palm Springs, California, to hear three authors discuss their works.
We'll talk with Wrong Speak Publishing founder Adam Coleman, attorney Kent Heckenlively, and computer information technologist Sean Worthington.
Watch Book TV every weekend on C-SPAN 2 and find a full schedule on your program guide or watch online anytime at booktv.org.
This August, tune in to C-SPAN for highlights of our America 250 coverage.
Join us as we continue to explore the American story through the voices, sites, and stories that shaped it.
Give me liberty or give me death!
On Monday, we'll feature the reenactment of Patrick Henry's Give Me Liberty speech from its original location at St. John's Church in Richmond, Virginia.
Watch C-SPAN's America 250 highlights beginning Monday at 9 p.m. Eastern on C-SPAN.
Congressman Cohen, welcome to the program.
Thank you.
steve cohen
It's good C-SPAN is still funded by the government.
unidentified
It is not funded by the government.
What do you mean?
Well, I thought you didn't get any money from the government at all.
No, not at all, and we never have.
What a disappointment to Elon Musk.
I'm sure he'd have liked to DOGE you.
Thanks for having me.
Love C-SPAN.
Appreciate the opportunity to come out.
glenn ivey
You know, I wish we could have a thousand C-SPANs across the media spectrum.
Unfortunately, we don't.
unidentified
I think C-SPAN is a huge, huge asset to America.
sean spicer
Not just the coverage that we get of both chambers on C-SPAN 1 and 2, but programs like Washington Journal that allow policymakers, lawmakers, and personalities to come on and have this question time.
unidentified
So it's a huge benefit.
I hope that all these streaming services carry C-SPAN as well because it's an important service to the American people.
I'm actually thrilled that this time on Washington Journal, I'm getting a lot of really substantive questions from across the political aisle.
Our country would be a better place if every American just watched one hour a week.
They could pick one, two, or three, just one hour a week, and we'd be a much better country.