March 4, 2025 - Epoch Times
22:58
Is DeepSeek a Sputnik Moment for America? | Nicolas Chaillan

The Chinese AI app DeepSeek recently became the most downloaded iPhone app in the US and caused US tech stocks to plummet.
President Donald Trump described it as a wake-up call for American companies.
So what's really going on?
Is DeepSeek as powerful as people say?
Or is there a bigger story here?
They were able to disrupt half a trillion dollars of market cap, shorting the stock and making a whole bunch of money at the same time, with really nothing to back it up, just noise, completely manipulating the market.
That's scary.
Today I'm sitting down with AI expert Nicolas Chaillan, former Chief Software Officer for the U.S. Air Force and now founder of the generative AI company AskSage.
He's worried that the U.S. Department of Defense is falling woefully behind China when it comes to adopting AI technology in the military.
What you see is the CCP adopting, with an incredible amount of velocity and speed, GPT and DeepSeek models, even the new Alibaba models, into their military networks.
This is American Thought Leaders, and I'm Jan Jekielek.
Nicolas Chaillan, such a pleasure to have you back on American Thought Leaders.
Good to see you.
So DeepSeek has been described by some as a Sputnik moment, a second Sputnik moment for America.
What do you think?
I don't think that's that easy, right?
When you look at what happened, effectively you see China, particularly the CCP, manipulating markets.
And, you know, it's kind of interesting to see how quickly U.S. companies, and also investors in different markets, reacted to this news, fake news, really, which was that China had created a better model with a very small investment in GPUs, using the older models of NVIDIA GPUs.
And none of it is true.
You know, when you start digging into what happened, you find that, you know, that company is really led by, you know, investors that have been investing around crypto for a while.
They had access to about 50,000 of NVIDIA's latest H100 chips.
And also, the models are not that good.
Not only is there a tremendous bias baked into the models, of course, coming directly from CCP propaganda.
But you also see something pretty insane, which is they ingested an immense amount of data coming from OpenAI and other companies, which everybody does.
But at the end of the day, what you also see is these models being trained to pass the benchmarks that are used to decide whether or not they are better.
And quite honestly, when you use them in real life, with the real use cases that we do here, we find pretty quickly that they are not up to par, and quite behind what you see with the latest models from OpenAI and Google and even Meta.
So, you know, I think it's important to realize that they're leading in many fields and they know how to manipulate opinions and markets, which is, you know, they shorted the stocks.
They made, you know, hundreds of billions of dollars by making this announcement.
And so they're smarter than us and play a better game to manipulate what's going on in the United States and even in Europe.
But certainly this is not a Sputnik moment when it comes to AI. We need to be at the top of our game and we need to make sure the government is adopting the best U.S. companies' capabilities.
But it does not mean that China is leading right now when it comes to these models.
It's something we need to pay attention to because at the end of the day, they might be winning.
Nick, I'm going to get you to unpack a few things for me before I go further.
For example, you said that this DeepSeek AI is being trained against the benchmarks.
Before we talk about that, tell me exactly what it means to train an AI for those of us that are uninitiated.
Well, most of the time, the way these large language models are trained is by literally ingesting massive amounts of data to pretty much capture everything that exists.
And in fact, they're running out of data, so they are creating new data by using large language models to actually generate new content, because we're running out of data to ingest.
And so it's not surprising that you see all these lawsuits, not only with OpenAI, but also other companies.
And now you're seeing DeepSeek also ingesting effectively data directly from OpenAI, using their APIs, their technology to generate responses, and including, of course, their documentation and all these things.
That's pretty common.
And that's the way these models are trained.
And it's very difficult to then change the way these models are going to behave, because they get this bias from the content ingested, based on the volume of data ingested.
So it's very difficult for DeepSeek to then remove facts that are automatically ingested via all this massive amount of data.
And so that's why you see the models initially answering the questions about, you know, President Xi and all the information that China is trying to suppress.
But then they have safety mechanisms to look at the response and override the answer.
And that's how they start hiding, you know, all the moment in history that China doesn't want us to know about.
It seems to me it's like such an odd thing that the model will actually show you the answer for a split second and then basically say, sorry, I can't show that to you, right?
I was just looking at a recent tweet from a user who asked hundreds of times about Tiananmen Square, for example, and kept getting the same answer.
And it sort of, it appeared to, quote unquote, trigger the AI into saying, look, enough, don't ask me this question anymore.
It almost seems intentional, because they absolutely wouldn't have to show you that there was an answer and then hide it.
How do you make sense of this?
So the way these special models called reasoning models work is you see the reasoning first, and that used not to be the case, right?
That's very recent, with the new o1 models coming from OpenAI.
These were the first models that reason first through a very detailed process of thinking.
And the longer it thinks, the better the response gets, and the more money it costs to generate, of course.
And so what's interesting is they're showing you the thinking; many models don't show you that, and they only show you the response.
And so by showing the thinking, they have no way to hide what the model is doing behind the scenes.
And when the response is ready, it triggers their safety mechanisms to then remove the answer, but it's too late, because, of course, the thinking and the reasoning of the model is there for everyone to see, and you can see all those insights being shared with the user.
And so that's kind of the downside of reasoning models.
They have no mechanisms to hide that.
Tell me very quickly, how is it that you know they didn't use this low amount of computing power that they claim?
It's pretty obvious when you look at the research that, first of all, you know, five, six million dollar investment is just plain ridiculous.
At the end of the day, that's what the CCP does, right?
They lie about every number on the planet.
And so when you see all that access to these beefier chips and when you look at the background and the knowledge of the people behind DeepSeek, it's pretty obvious to anyone that they know what they're doing and they have access to a pretty much unlimited amount of compute and also funding.
The numbers just don't lie.
And, you know, the CCP does.
Nick, we're going to take a quick break right now, and we'll be right back in a moment.
And we're back with Nicolas Chaillan, former Chief Software Officer of the Air Force and Space Force, and now founder of AskSage.
So the bottom line is, you know, they launched this thing and kind of wowed the world, wowed the markets, wowed users.
But then when you look under the hood, it's really not nearly what it appeared.
It's not.
I mean, it's not.
Don't get me wrong.
They still had very good outcomes when it came to the training methods and they did come up with ways to be more efficient and save money.
And there's definitely value there.
Of course, the fact that it's the CCP behind it and biased and, you know, with a bunch of made-up answers, that kind of kills the entire value, in my opinion, because how are you going to trust it?
Even in math, even in research, they could train it in a way that, you know, if you ask a question in English, you get answers that are not as good as if you were to ask it in Chinese, for example.
That would be easy to do.
So again, I would never trust it.
And so that defeats the point of using it.
The numbers and the benchmarks and the amount of money that they spent to train it is completely made up.
And I don't believe it in one second.
And it might certainly not be as much money as was spent on OpenAI's o3 or o1, but it's still way more than they're claiming.
A thousand times more.
So when you left the Air Force, when you left being the software chief over there back in 2021, I think around the time when we first spoke online, you said you believed that the U.S. had really lost the war, or was on the cusp of losing the war, in AI. But you sound more optimistic today.
So it's always interesting.
I always said that the US was doing very well, but the DoD was losing.
And I think that's something that's a little detail that's being lost in translation with my French accent often.
And I think that's the most important fact.
What you see is the CCP adopting, with an incredible amount of velocity and speed,
their latest, you know, Baidu GPT and DeepSeek models and, you know, even the new Alibaba models, into their military networks at scale, across classification levels, on very complex weapon systems.
And, you know, while the models are less good and less capable than the U.S. models, unfortunately, the U.S. has a massive wall and a lack of adoption of the best-of-breed U.S. capabilities into the Department of Defense.
And so what you end up seeing is the U.S. leading compared to China when it comes to companies.
But when it comes to defense, which is quite honestly the most important use of AI we could think about in 2025 and beyond, you see DoD being at least five years behind China.
And it's compounded by the fact that these models augment and empower the velocity of people up to, you know, 20 to 50x.
So one person turns into 50 people.
And let's face it, the CCP already has more people.
And now they turn these people to 50 times more thanks to being augmented and empowered by AI. It's almost impossible to compete, particularly if you don't have access to AI capabilities.
And so where I'm very concerned is the amount of money spent, particularly during the Biden administration, for four years, on AI ethics and other things, you know, a massive amount of money wasted by research teams building their own DoD capabilities, like NIPRGPT, that are, you know, years behind what you see companies do, and that really push us even further behind China.
Tell me a little bit about the intersection of AI and nuclear capability.
This strikes me as something that's very important.
Because you mentioned some of the limitations right now of AI usage in DoD.
Yeah, there's a few fields that are super essential to winning the next battles, right?
And DoD is lagging behind.
And we're very excited to see a new administration that is eager to focus on all these fields, including hypersonics, quantum compute, AI, you know, drones, right?
And really, you know, even the nuclear deterrence of the United States is already in shambles because of, you know, some delays we've been seeing in some programs we're running, again, lagging behind schedule and, you know, way over budget.
We're talking, you know, 10x over budget.
And so it's concerning.
We need to do better.
We need to be less complacent.
We need to have more urgency to get things done.
And more importantly, we need to understand the kind of fights we're going to be fighting moving forward.
And I'm not sure it's going to be much about jets and bombers, although they still need to be at the top of their innovation game.
But it's going to be also about software and drones and AI capabilities, to empower humans to make better, faster decisions.
And quite honestly, right now, the money spent on those domains was mostly wasted on paper research, like DEI research on AI, or ethics.
We spend so much money debating whether or not to use AI in the DoD, which is just mind-boggling to me.
China is not wasting time pondering whether or not they should use AI in their weapon systems.
And if they have it and we don't, it's the same as saying, maybe we just don't do nuclear systems anymore, and China has it, and so be it.
But I don't think that's a good answer.
And I don't think anyone would want to be living in that world.
Nick, I have to ask you about this ethics piece.
I mean, it's one thing to say whether it should be used at all.
It's a different thing not to have some significant ethical guidelines.
And you're right, those guidelines may not exist in communist China at all.
You can kind of imagine, yes, autonomous weapons systems driven by AI that do what they want.
There's been very popular movies made about this, right?
We definitely don't want that.
Clearly, we have to have some kind of ethical framework.
We do, but it doesn't mean you spend 100% of your budget on ethics.
If I were to pick, I would probably spend 5% of the budget on ethics and 95% on capabilities.
And look, we're so far away from autonomous weapons powered by AI. I agree that we should be very cautious when it comes to putting AI on weapons.
The fact is we're going to have to get there slowly, but we're going to have to get there, because China is going to do it.
And we demonstrated when we did dogfights with jets that humans lost every single time against jets powered by AI with no human on board.
And so humans don't have the ability to move as fast and make decisions as fast as technology.
And so we're going to need to find a way to compete against these new weapons.
And you don't want to be the one playing catch-up.
You know, one thing that I think we all know from World War II and nuclear weapons is being the first was important.
And playing catch-up is never good.
And so we can't just, you know, dismiss the importance of getting autonomous weapons powered by AI, with, you know, still some control from humans.
But the fact is, at some point, when you get into a fight, and it's a face-to-face fight between two weapons, two AI-powered weapons, the human, to some degree, needs to allow the fight to begin and then get out of the way to let us win.
That's the world we live in.
We can pretend it's not happening.
And everybody says, well, you know, we could have China sign some agreement and treaty saying they're not going to do that, but they will sign it and do it anyway, right?
So we can't take that chance.
And quite honestly, we're so far away from those weapons, which scares me even more because China is not.
And so we need to keep that in mind as well.
Fascinating.
Something I've been seeing a lot of videos of recently is these incredible light displays, which are actually something like tens of thousands of drones being used in coordination.
And people have been noticing, and I've been thinking about, of course, the military potential of this kind of technology.
Have you thought about this?
And where are we at with that?
I pushed, back in 2018, for the creation of a special office to go after the defensive side of this, and also the offensive side of swarming technologies.
Honestly, we have not done nearly enough to even comprehend what can be done with those swarming technologies.
The speed and the cost of these devices are mind-boggling, and most people don't even know how quickly these things can move.
And the time it would take to react... by the time you even understand what's going on, the attack is already over, and there's nothing left to do.
So I can tell you it's probably one of the biggest threats, and not just from the CCP, but also from terrorist organizations.
Quite honestly, with the cost being so low nowadays and the technology being so accessible, there's really no barrier to entry.
And the fact that we're not spending a significant amount of money in the defense budget and at DHS as well to go above and beyond to understand and have answers to prevent those attacks is very concerning.
Give me a scenario of what one of these attacks might look like.
You know, sky is the limit with these attacks.
They could put explosives on some of these drones, but even just using them to crash into things and just, you know, there's so many things that can be done with swarming technologies to disrupt, you know, air traffic control, to disrupt airspace.
I mean, you know, sky is the limit.
There's almost nothing you cannot do with it, from putting, you know, weapons or bombs on them, to just dropping them from the sky to hit, you know, people and objects.
It's a very concerning capability.
And when you see how well coordinated these can be and disconnected as well, which means if they lose control, they can still continue to behave and achieve whatever it is they were programmed to do.
So a lot of the technology we use to disrupt their capabilities would not be impacted because they would still fly and go do what they're trying to do.
And many commercial drones are designed to fall out of the sky if they lose contact with the controller, and will just go back and land where they took off.
The military version of these drones can be programmed to continue doing the last instructions they were given.
And so a lot of the technology we use to disable those drones would just not work efficiently to stop these attacks.
And honestly, people go back to basic means like eagles and other things.
But again, you know, we don't have nearly half the volume of eagles or, you know, birds to go attack these drones.
So, again, that and the net technology and all that, this is not realistic against a massive swarming attack.
And so I think we really need to wake up.
Well, Nicolas Chaillan, it's such a pleasure to have had you on again.
Thanks for having me.
Thank you all for joining Nicolas Chaillan and me on this episode of American Thought Leaders.