Anthropic’s Dario Amodei defies Pentagon demands, rejecting AI-driven mass surveillance and autonomous weapons—citing risks to liberties and reliability—despite threats of contract termination, supply chain blacklisting, or Defense Production Act coercion. While competitors like xAI, Google, Microsoft, and OpenAI reportedly comply, Anthropic’s refusal could push U.S. firms toward Chinese models (DeepSeek, Qwen 3.5), seen as superior in tasks like coding. The move aligns with Anthropic’s stated mission to serve humanity, not weaponization, contrasting starkly with dystopian sci-fi visions of AI warfare. Amodei’s stance underscores a rare corporate resistance to unchecked military AI integration, raising questions about accountability and ethical boundaries in tech. [Automatically generated summary]
Okay, there's been an historic announcement from Anthropic, the company that makes Claude Code.
Dario Amodei has basically told the Pentagon to go pound sand.
Now, just as context, you may recall that the Pentagon gave Anthropic until, I think, 5 p.m. today, Friday, to give the Pentagon essentially unrestricted use of its technology, which of course means that the Pentagon wants to use it for killing purposes, you know, autonomous weapons, Terminators, Skynet, kamikaze drones, plus targeting and surveillance, mass surveillance, of course.
And also to write cyber hacking programs, probably, you know, the modern version of Stuxnet, things like that.
And Dario Amodei had said previously over the last few years that he's interested in AI that can serve humanity's interests.
And of course, I agree with that, although I have also been rather critical of Dario for his just absurd beliefs.
For example, he thinks the U.S. actually adheres to restrictions on biological weapons development.
He thinks the U.S. gave up bioweapons decades ago.
He literally believes that.
Talk about a gullible person, right?
It's crazy to be high IQ, but so uninformed.
Anyway, but today he made the right decision, and I applaud him for this decision.
And Dario decided that his values and his ethics were more important than all the riches in the world.
And so he told the Pentagon to go pound sand, like I said.
And in a statement that he issued, he wrote, in a narrow set of cases, we believe AI can undermine rather than defend democratic values.
Some uses are also simply outside the bounds of what today's technology can safely and reliably do.
Two such use cases have never been included in our contracts with the Department of War, Department of Defense, and we believe they should not be included now.
And they are, number one, mass domestic surveillance.
He says that AI-driven mass surveillance presents serious novel risks to our fundamental liberties.
Yeah.
He's absolutely correct.
So good for him.
He made the right decision.
And I'm shortening this.
You can read his letter at anthropic.com/news.
And the second point is fully autonomous weapons.
He says today, frontier AI systems are simply not reliable enough to power fully autonomous weapons.
We will not knowingly provide a product that puts America's warfighters and civilians at risk.
Notice he said civilians because, yeah, the military could deploy these against the people.
Let's see.
He says, in addition, without proper oversight, fully autonomous weapons cannot be relied upon to exercise the critical judgment that our highly trained professional troops exhibit every day.
So anyway, he says these are the two exceptions, and therefore, he will not concede to the demands from the Pentagon, and he will not write those two use cases into the contracts.
But of course, the Pentagon is at war with America's tech companies that refuse to build Skynet Terminator killing machines.
And as Dario writes, quote, they have threatened to remove us from their systems if we maintain these safeguards.
They've also threatened to designate us a supply chain risk, a label reserved for U.S. adversaries, never before applied to an American company, and to invoke the Defense Production Act to force the safeguards' removal.
These latter two threats are inherently contradictory.
One labels us a security risk.
The other labels Claude as essential to national security.
Regardless, Dario writes, these threats do not change our position.
We cannot in good conscience accede to their request.
Wow.
So, number one, I'm surprised.
I was asking the question yesterday, what will Dario do?
And honestly, I didn't know the answer.
And I said so.
But I thought he was going to eventually say, okay, we'll work with the Department of Defense.
It's better to be part of the conversation than to be excluded from the conversation.
Because, of course, that's what Elon Musk did.
Elon Musk is all in with the Department of Defense.
Oh, yeah, use Grok for anything, you know, killer drones, mass surveillance, spy on the people, you know, kamikaze, you name it.
Build bioweapons, engineer, you know, cyber attacks, whatever it takes.
And of course, Google is all in, because Google's totally evil.
They're like, where do we sign?
Can we please start killing more people?
Because, you know, Google's all into that.
Microsoft is all in.
OpenAI, all in.
But only Anthropic said no.
Only Anthropic.
Makes me feel a little bit bad for criticizing Anthropic yesterday for their accusations against Chinese companies.
They stole our stuff.
It was so lame.
But for this, Dario Amodei deserves applause.
This is a decision that very few people could make.
I know that I would have made the same decision because I've already made similar decisions to say no to large amounts of money and wealth and promises and whatever, because I'm not willing to work with a suicide cult satanic death system.
So I've already said no many times, but most people have never said no when it matters.
And this is another important point.
It's easy for people who have never been offered $100 million or $100 billion in the case of Anthropic, let's say, over time.
It's easy to say, oh, I would never take the money when you've never been offered the money.
That's easy.
What's hard is when the money's there, there's a contract in front of you, they're ready to wire the funds, $100 million, let's say, just theoretically, and they say, all you got to do is sign this and give us your technology or even just allow us to use the tech for this purpose.
Most people would take the money.
I'm guessing maybe only one in 100 would say no.
Maybe it's less.
Who knows?
But Dario said no, and probably now his company will be destroyed.
His company will be destroyed.
I'm guessing.
I mean, I'm not hoping that.
I kind of like Anthropic now all of a sudden.
I mean, I like them more.
They have principles.
Wow.
We should support that.
And of course, I use their product.
You know, I use Claude code every day, just using it, you know, 10 minutes ago.
I use Opus 4.6.
Of course, I'm happy to switch to other models if they're better.
I don't really have brand loyalty to Claude, but this might change that for me.
Because this is the first time I've seen Anthropic have real moral principles in contrast to X or Google or Microsoft or whatever.
So maybe I'll recalibrate that decision and choose to do business with Anthropic because of this decision.
They need our support.
Clearly, they're not going to get government contracts anymore.
And you know, if their product is designated a supply chain risk, you know what that means, right?
That means that no government contractor will be allowed to use Anthropic.
That means Boeing, you know, Raytheon, the 100,000 corporations that sell anything to the government, including paperclips.
They won't be able to use Claude Code.
And guess what they will do instead?
See, this is the unintended consequences chapter of this conversation.
Guess what they will do instead?
If they can't use Anthropic, they're going to download DeepSeek from China from Hugging Face.
Or Qwen, the new Qwen 3.5, is really good.
They're going to download the Chinese models and run those instead.
So in other words, the Department of War will be forcing supply chain companies to switch to using Chinese models instead of an American company model because they're going to designate the American model to be a supply chain risk.
I mean, how insane is that?
Or you might say, well, no, they'll switch to OpenAI.
Okay, maybe now, for a little while.
But when you need to write code and you can't use Anthropic, you're going to use the Chinese models because they're better.
Okay, or you might use Google for a while.
But what happens a year from now when the Chinese models are significantly better than Google?
If you want to use the best models, they're going to be the Chinese LLMs.
Clearly, that's where this is headed.
So that means that all these government contractors are going to be forced to use second-tier technology if they are told you can only use U.S. models.
Whereas the rest of us in the private sector who don't have government contracts, like me, we use whatever we want.
I just go with the best stuff, whatever it is.
Right now, it's Opus 4.6 from Anthropic.
Next week, it might be DeepSeek version 4.
Who knows?
Who knows?
And I have written a bunch of code with Qwen 3.5, the 122 billion parameter model.
It's kind of slow, but man, it's good.
Actually, you know what?
Their 35 billion mixture of experts model is quite good.
And their 27 billion dense model is also good with code.
You know, with like basic code.
It's not good enough to handle complex projects, not yet, but that's coming.
Did I tell you that the code base now that runs Brightlearn.ai, that it's now over 100,000 lines of code?
It's like, how did it get to 100,000 lines of code?
This is crazy.
But anyway, this is history in the making.
And we're watching the Trump administration and specifically the Pentagon, the Department of War now under Pete Hegseth, who does not understand AI technology at all, clearly, make decisions that will have really damaging effects down the road.
It's clear that they want to use AI to weaponize it.
They want to make Terminator, Skynet, the T1000 models or whatever.
They want to make Terminator drones.
They want to use AI for mass surveillance of the American people, of course.
And they want to use AI to more effectively murder people.
So I wish more companies would say no, but this is the world in which we live.
By the way, I share Dario's sentiment about the fact that AI technology should be used to protect and enhance humanity.
And of course, that's how I deploy it with my AI projects.
It's all about education and learning, empowerment, bypassing censorship.
And that's why our AI engines and platforms are so incredibly popular.
So if you want to use our deep research AI engine, that's found at brightanswers.ai.
And if you want to use our book creation engine or download the nearly 40,000 other books that people have created, they're all free, you can find that at brightlearn.ai.
And if you want to use our news analysis engine, which is really amazing, keeps you up to date on all the news, that's found at brightnews.ai.
And then finally, if you want to hear more of my analysis, as I'm going to be talking a lot about AI and the new models and the implications, macroeconomics and geopolitics and so much more, you can find all my videos and interviews at brightvideos.com.
And then finally, my articles are published at naturalnews.com.
So thank you for listening.
And, well, I guess we have to say thank you to Anthropic for saying no.
That's the right answer.
All right.
Take care.