Feb. 26, 2026 - Health Ranger - Mike Adams
23:14
China Strategically Undermining the Virtualized U.S. Economy by Releasing World Class AI Models...

China’s Alibaba just dropped Qwen 3.5: new AI models (35B and 122B parameters) that run on a roughly $1,500 NVIDIA card, matching older 371B models at 90% smaller size and 108 tokens/sec speed. Free alternatives like the upcoming DeepSeek V4 (rumored to outperform Anthropic’s Pentagon-courted Opus 4.6) and Kimi K2 threaten U.S. AI giants’ $200/month subscriptions, risking OpenAI’s collapse or forced government bailouts. Meanwhile, Dario Amodei faces a Friday deadline: let the Pentagon weaponize Anthropic’s models or lose a $200M contract and U.S. government business. China’s industrial dominance in robots, EVs, and drones, combined with free global AI distribution, undermines America’s financialized economy, where jobs like healthcare and admin face automation. Europe’s "woke" leaders, desperate to steal battery tech, lag behind Ukraine and Russia in engineering prowess. Short of internet shutdowns, China’s 5D chess move could displace U.S. tech stocks and millions of jobs overnight. [Automatically generated summary]


OpenAI's Bankruptcy Prediction 00:08:04
Strategically speaking, what China is doing to undermine America's economy is really rather brilliant, and they're doing it with AI models that replace human cognition, increasingly so.
And the latest example of this just got released. Honestly, I spent most of my day checking out a couple of different new models from Qwen, which is Alibaba, and they released Qwen 3.5.
And some medium-sized models: they have a 35 billion parameter model and a 122 billion parameter model.
And I was running both of those quite a lot.
I was doing coding with them.
I was doing question-answer pairs.
I was doing document normalization tests and seeing some rather astonishing things.
This is a quantum leap forward.
Let me just give you the technical thing real quick on this, and then we'll discuss the macroeconomic impact and the strategy of this.
But technically speaking, you know how the number of billions of parameters in a model describes the size of its brain, so to speak.
So a 35 billion parameter model has 35 billion different nodes in it, you could say.
35 billion relationships between nodes, I think is technically more accurate.
And the really large models have hundreds of billions of parameters, but they don't run on consumer-grade hardware.
And until recently, well, until just today, the large Qwen model, I forget, it was maybe 371 billion parameters, something in that range.
And it was a very capable model, but you had to have enterprise-grade hardware to run it.
Well, this new model that Qwen just released, this 35 billion parameter model, can run on consumer-grade hardware, that is, for example, a 24-gigabyte NVIDIA card, which could be the 5080, or I think the 4080 comes in 24 gigabytes, or maybe 20.
You can run it in 20 gigs also.
But this model is extraordinary, and it has the same performance as the 371 billion parameter model.
So what Qwen has done is rather extraordinary.
They've cut the size of the model by 90%, roughly, give or take.
They've made it run on consumer-grade hardware that's not even crazy expensive.
You can run this on a graphics card that costs, I don't know, $1,500.
And you might say, well, that sounds like a lot, except this can replace monthly subscriptions to Anthropic or ChatGPT or other services that might cost up to $200 a month.
And yet you're getting it for free forever.
You just have to buy the hardware one time.
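To see why a 35-billion-parameter model fits on a single 24-gigabyte card, here's a back-of-envelope sketch (my own arithmetic, not from the episode; the 4-bit quantization level is an assumption about how such models are typically run locally):

```python
# Rough VRAM estimate for running a quantized LLM on a consumer GPU.
# Assumption (not from the episode): weights quantized to ~4 bits each,
# which is a common setting for local inference.

def weight_vram_gb(params_billions: float, bits_per_weight: float) -> float:
    """Approximate gigabytes needed just to hold the model weights."""
    bytes_total = params_billions * 1e9 * bits_per_weight / 8
    return bytes_total / 1e9

full_precision = weight_vram_gb(35, 16)   # fp16: far too big for one card
quantized = weight_vram_gb(35, 4)         # 4-bit: fits in 24 GB with headroom

print(f"35B @ fp16 : {full_precision:.1f} GB")   # 70.0 GB
print(f"35B @ 4-bit: {quantized:.1f} GB")        # 17.5 GB
```

At 4 bits per weight the 35B model needs roughly 17.5 GB, which is why a 24-gigabyte (or even a 20-gigabyte) card can hold it with room left over for the context cache.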
Now, the performance isn't exactly the same as Anthropic.
It's not as good.
I'll talk about that in a second.
But by pushing this to the edge and releasing these models publicly, including the 122 billion parameter model, which is also remarkably good (I was writing code with that one today), Qwen is making the case for corporations all across the world, including in America, to just drop their subscriptions to OpenAI and Anthropic and use the free Qwen models instead.
I mean, why wouldn't you if they're almost as good?
And they really are almost as good.
They're not quite there, but they're close.
And with the upcoming release of DeepSeek version 4, which I'm really looking forward to, that is supposed to be even better than Anthropic's Claude Opus 4.6, or Claude Code, as they call it.
And if that's the case, then Anthropic is in real trouble.
And so is OpenAI.
And I actually have a prediction on this.
And so is Microsoft and Google, et cetera.
But my prediction is that OpenAI will head toward bankruptcy and it will have to be funded or bailed out by the U.S. government.
Anthropic, it's not clear yet, because Anthropic is having that spat, well, it's more than a spat, a fundamental disagreement over AI ethics with the Pentagon, where the Pentagon wants to be able to use Claude, Opus, Sonnet, all the different models.
The Pentagon wants to use them basically for killing people: autonomous weapons, Terminator bots, Skynet, etc.
And Anthropic absolutely does not want to do that, or at least the founder, Dario Amodei, doesn't want to do that.
And the Pentagon has given him until, I think, Friday at 5 p.m.
So he's got one more day to decide whether he's in or out.
If he's in, his company gets to keep a $200 million military contract, but he has to agree to allow it to be used to kill people.
If he doesn't agree, then Anthropic gets added to what is basically almost a malicious-code list, and it will be banned across the entire U.S. government, which would just gut Anthropic's revenues.
And so in a very real sense, although I know this isn't the focus here, as a side thought: Anthropic, or Dario Amodei, has to make a decision. If you adhere to your values, which say that AI shouldn't be used to kill people, then you're going to lose your company.
And essentially, your company is going to cease to exist.
Especially with Qwen and DeepSeek coming along from China and undermining your entire revenue model of charging people for high-end coding agents.
But if you agree to compromise your values and allow the Pentagon to use your technology to murder people, which is what the Pentagon does, obviously.
I mean, you know, okay, I guess I don't have to go into that detail.
You know.
But then the company continues to exist.
And then people like Dario might retire as billionaires.
You know, the valuation goes up and everything's great, you know, except that you've sold your soul.
You've compromised your values.
See?
So it's not an easy choice for most people.
If you were in that position, could you turn down $200 million in contracts just to stick with your principles?
Could you watch your company be destroyed and all the people that you hired and that you work with, I don't know how many it is, hundreds of engineers, watch them all lose their jobs because you decided to stick with your principles?
Or would you take the money and say, I'm doing the best I can given the circumstances?
What would you do?
What would you do?
So, I mean, I know what I would do.
You know what I would do.
I would tell them to go pound sand.
But that's why I don't have venture capital.
I don't have investors because I never want to have to answer to investment people who are only interested in money.
So otherwise, I would have already gone down that road a long time ago.
But what would you do?
There's a question.
What would you do?
Most people would take the money.
And Anthropic may take the money.
But here's the thing.
Even if they take the money, I think that Anthropic's future is almost entirely just government contracts for the reasons I've mentioned here.
China's AI Advantage 00:12:28
That is Qwen and soon DeepSeek, Chinese models.
And there are others, by the way, Kimi K2, et cetera.
These models are proving to be extraordinary.
And China is really inching ahead now.
This is the year, 2026, that China is demonstrating that its models are superior.
And it's extraordinary.
I mean, I know I've used that word a lot.
I'm not trying to start a drinking game or anything.
But this new Qwen 3.5 model is a mixture-of-experts model.
And so even though it has 35 billion parameters, it only activates 3 billion parameters to answer your question or your prompt or whatever.
So it's got, you could say, roughly 10 experts in it, and each one is roughly 3 billion parameters.
And that's what makes it so blazing fast.
It's crazy.
So it's got the knowledge of a 35 billion parameter model, which is a lot.
That used to be considered very sizable.
But it's got the throughput that's mind-blowing.
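The mixture-of-experts idea he's describing can be sketched in a few lines. The expert count and top-k routing here are illustrative assumptions, not Qwen's published architecture; the point is that a router scores every expert per token, but only the top-scoring ones actually run, so compute scales with the active parameters rather than the total:

```python
# Toy mixture-of-experts routing sketch. The numbers (12 experts of ~3B
# parameters, top-1 active per token) are illustrative assumptions only,
# not Qwen's actual published configuration.

def route_top_k(scores: list[float], k: int) -> list[int]:
    """Indices of the k highest-scoring experts for this token."""
    return sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:k]

n_experts = 12                     # about a dozen experts of ~3B each
params_per_expert_b = 3.0          # billions of parameters per expert

# Hypothetical router logits for one token:
scores = [0.05, 0.92, 0.11, 0.40, 0.73, 0.08,
          0.31, 0.66, 0.02, 0.55, 0.19, 0.47]

active = route_top_k(scores, k=1)
total_b = n_experts * params_per_expert_b
active_b = len(active) * params_per_expert_b

print(active)   # [1] -> only the highest-scoring expert runs for this token
print(f"stored: {total_b:.0f}B params, active per token: {active_b:.0f}B")
```

That gap between stored and active parameters is why a 35B-class model can have the knowledge of its full size while running at the speed of a much smaller one.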
I'm measuring it today at 108 tokens per second.
108 tokens per second.
I ask it to write an article and it spits it out faster than I can read.
And I'm a very fast reader.
It's like, there's the whole thing.
Whoa.
Whoa.
It's fast.
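A throughput number like the one he quotes is simple to measure yourself: count tokens generated, divide by wall-clock time. Here's a minimal sketch with the generation call stubbed out (the stub and its numbers are hypothetical; a real run would call your local inference library instead):

```python
import time

# Minimal tokens-per-second measurement around any local generation call.
# fake_generate() is a hypothetical stub; swap in a real inference call
# that returns how many tokens it produced.

def tokens_per_second(generate, prompt: str) -> float:
    start = time.perf_counter()
    n_tokens = generate(prompt)            # returns number of tokens produced
    elapsed = time.perf_counter() - start
    return n_tokens / elapsed

def fake_generate(prompt: str) -> int:
    """Stand-in: pretend the model produced 27 tokens in about 0.25 s."""
    time.sleep(0.25)
    return 27

rate = tokens_per_second(fake_generate, "Write an article about batteries.")
print(f"{rate:.0f} tokens/sec")   # a bit under 27 / 0.25 = 108, due to overhead
```

Real inference servers usually report this figure for you, but the arithmetic behind it is nothing more than this.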
And I had started it up with only 16K tokens for the context window, which is the limit on the number of tokens it can handle at once.
And it did 108 tokens at 16K.
So I thought, well, let me just max it out here.
I'm going to go to 262K, which is the max of the model, which is a lot.
And let's see how much it slows down.
So I tested it again, and it comes back 106 tokens a second.
It barely noticed the larger context window.
And that's very unusual because the older models used to have a massive difference based on the context window.
But it turns out that Qwen was able to train these newer models from the start with very large context windows.
And Qwen is even hosting a model on their website with APIs, etc., that has a 1 million token context window, which is enough to process large code bases, for example.
So this large context window is really unbelievable, especially given that it's completely free and you can download it yourself and you can run it yourself on consumer grade hardware, or I should say sort of higher end consumer grade hardware.
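One reason context window size matters so much for local hardware is the key-value cache, whose memory grows linearly with context length. A rough estimate (all of the architecture numbers below are illustrative assumptions for a generic mid-size model, not Qwen's actual published dimensions):

```python
# Rough KV-cache size estimate: memory grows linearly with context length.
# All architecture numbers below are illustrative assumptions, not Qwen's.

def kv_cache_gb(ctx_len: int, n_layers: int, n_kv_heads: int,
                head_dim: int, bytes_per_value: int = 2) -> float:
    """Bytes to cache keys + values across all layers, in gigabytes."""
    per_token = 2 * n_layers * n_kv_heads * head_dim * bytes_per_value
    return ctx_len * per_token / 1e9

# Hypothetical mid-size model: 48 layers, 8 KV heads of dim 128, fp16 cache.
small = kv_cache_gb(16_384, 48, 8, 128)
large = kv_cache_gb(262_144, 48, 8, 128)

print(f"16K context : {small:.2f} GB")
print(f"262K context: {large:.2f} GB")   # 16x the memory for 16x the tokens
```

Under these assumed dimensions, 16x more context means 16x more cache memory, which is why older setups slowed down or ran out of VRAM at long contexts, and why techniques that shrink the cache matter for consumer cards.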
This is not just cheap gaming cards.
This is definitely higher end stuff, but it is accessible.
I mean, you can buy these cards on Amazon, for example, or wherever you shop for computer cards.
So what this does is it puts this technology in the hands of people everywhere around the world.
And it's real intelligence, by the way.
It's real cognition.
These are reasoning models.
They think through things, and you can watch them think if you want to turn on the thinking monitoring in your inference software, you can watch what they're thinking, and they're working through problems logically and rationally.
And the quality of the output for the size of the model is almost unbelievable.
Two years ago, this would have been considered utterly impossible.
And so for those people who say, oh, AI is in a bubble, not Chinese AI, you could argue that maybe American AI companies are in a stock market bubble that is about to pop because the Chinese AI companies are not in a bubble.
Their technology keeps getting better and better and better.
And all those warnings last year where people said, oh, it's all plateaued.
You know, it's never going to get any better.
It's just, you're not going to get any more returns.
Nonsense.
It keeps getting better.
It keeps getting better by leaps and bounds, actually.
I have not seen any ceiling on this.
Every time a new model comes out of China, it's got some other major advancement that is just mind-blowing.
Now, here's the thing.
Getting back to the main structure of my presentation here is that, see, China knows that it can undermine the United States very easily by shipping free machine cognition to replace human cognition in the workplace because the United States is largely a financialized economy.
We don't make stuff very much anymore.
I mean, we don't have a lot of factories in America.
We have services, yeah, I get that.
But we have a lot of financialization and a lot of virtualization.
So a lot of really virtual jobs, you know, like approving insurance claims or whatever, sitting behind a desk, working through a monitor, processing spreadsheets and emails and this and that.
That's a lot of the U.S. economy right there, especially in medicine and healthcare, which employs like millions of people to sit behind desks and just type in, you know, classification codes for medical procedures or deny insurance reimbursement or whatever.
It's a massive economy that is completely replaceable by AI at this point.
Completely replaceable.
So China releases all the AI models for free to push that into the West while the U.S. AI companies are refusing to release their models.
But China's doing it, and so corporations are taking on these models.
You know, Qwen is actually very widely used across corporate America right now.
And then the U.S. companies are firing human workers because now they can automate those jobs.
But if you're wondering, well, why would China be willing to do this?
Because it might also hurt their own economy.
But no, no, no.
China's economy is not virtualized.
China's economy is largely industrial and manufacturing.
They produce physical goods, which AI can't do.
You see, large language models don't create things out of nothing, not physical things.
And robots, well, China's already leading in robots, so China has automated their manufacturing.
And China knows that even if they ship out all these AI models that replace cognitive jobs all over the world, people still need robots and drones and batteries and EVs and tools and everything.
All the stuff that's made in China.
Computer monitors, you name it, right?
None of that can be replaced by AI.
So it's a brilliant strategy on China's part because they can flood the world with free machine cognition that replaces humans and causes the U.S. job market to get totally wrecked as job losses just spread like wildfire throughout the system as companies automate and automate and automate because of these free tools.
Meanwhile, China's industrial economy is protected.
It's protected.
So you see what I mean?
That's 5D chess.
And it's angering U.S. officials and U.S. tech companies.
This is why Anthropic the other day pointed the finger at China and said, you stole our stuff.
You distilled our knowledge to train your models.
They're blaming Moonshot and DeepSeek and MiniMax, pointing the finger, wagging their finger.
But in reality, model distillation is a totally legit training technique for AI models.
It's been used by almost every company out there.
And the thing is, these Chinese companies paid for the API access.
They didn't steal anything.
They weren't hacking Anthropic.
They were buying access via the API.
And Anthropic let them use the API.
I mean, and then Anthropic is saying, well, this is like large-scale, you know, industrial theft or something like that.
Intellectual property theft.
Yeah, please stop complaining and just try to build better models.
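Model distillation, the technique being complained about, is straightforward in outline: pay for API access to a strong "teacher" model, collect its answers to many prompts, and fine-tune a "student" model on those prompt-answer pairs. Here's a minimal sketch of the data-collection step, with the teacher call stubbed out (the stub and its canned answers are hypothetical; a real pipeline would hit a paid API):

```python
import json

# Sketch of the data-collection step in model distillation: gather
# (prompt, teacher_answer) pairs to later fine-tune a student model on.
# ask_teacher() is a hypothetical stub standing in for a paid API call.

def ask_teacher(prompt: str) -> str:
    canned = {
        "What is 2+2?": "4",
        "Name a battery chemistry.": "Lithium iron phosphate (LFP).",
    }
    return canned.get(prompt, "I don't know.")

def collect_pairs(prompts: list[str]) -> list[dict]:
    """Build a fine-tuning dataset in a typical JSONL-style shape."""
    return [{"prompt": p, "completion": ask_teacher(p)} for p in prompts]

pairs = collect_pairs(["What is 2+2?", "Name a battery chemistry."])
print(json.dumps(pairs[0]))   # {"prompt": "What is 2+2?", "completion": "4"}
```

Nothing in that loop involves hacking anything; it's ordinary, paid API usage, which is exactly the point being made here.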
China builds better models.
And frankly, they don't need Anthropic.
They don't need any U.S. models.
They don't need to do that.
They've got better math.
They've got better science.
They've got more science papers on this topic.
I mean, maybe at first, a few years ago, they started out trying to copy.
Well, I'm sure they did.
They were trying to copy the structure of U.S. AI models.
But Chinese learn quickly.
They will, you know, they'll reverse engineer your model, and then they'll figure out how you did it, and then they'll do it better.
That's what they always do.
That's what they do with manufacturing.
Now, that's why most of the best technology in the world comes out of China.
It's not that they stole all that tech.
It's that they invented it.
I mean, think about the best battery technology in the world by far.
No one else is even close to what's coming out of China.
You know, CATL and BYD mostly.
Did they steal that battery tech from someone?
No.
They invented it.
They put hundreds of scientists and chemists and engineers on the problem for 10 years plus and they figured it out.
Now the world wants to steal the tech from China.
Did you hear the latest thing from the EU, where the EU gives China, and I'm paraphrasing, basically an ultimatum?
The EU says, hey, China, you have to give us all your battery technology secrets.
Otherwise, we're going to block all your electric vehicles from all of Europe.
Yeah.
Yeah, think about that.
So the EU is saying we now want to steal Chinese technology because we here in Europe, in France, in Germany, in Britain, in Spain, we can't figure it out because we're Europeans.
We're all run by woke retard leaders here in Europe.
We can't figure out anything.
So we need the Chinese to give us their secrets.
Like that's happening.
How hilarious is that?
And by the way, you know who the smartest Europeans are in terms of engineering?
Like AI and math and science?
You know who they are?
Yeah, okay.
Ukrainians and Russians right there.
I'm telling you, Ukrainians and Russians, they are because life is hard.
Life is hard in Ukraine and Russia, especially for the last four years with this insane war.
Life is hard.
Life is easy in the UK, or it has been easy in Germany.
They all got lazy and they stopped being good engineers.
And they shut down their domestic energy supply.
So, you know, again, Western Europe is run by woke retards at this point.
Anyway, this pattern by China, this cannot be stopped.
There is no way to stop this.
Because China can just release the models for free, as they've been doing.
You can download them from Hugging Face or other places or, you know, GitHub or whatever.
Everybody's got internet access.
Unless you're going to shut down the whole internet, you can't block files from China.
But you've got to understand that all over the world, all the people in India, Southeast Asia, and South America, they love the fact that China is releasing these models for free because China is on the pro-freedom side of this.
China is offering really decentralized machine cognition, which is of great benefit to humankind.
Decentralized Machine Cognition 00:02:37
And it means that you can now, right now, today, for a couple thousand dollars, you can get the hardware and run on your desk technology that would have cost you $100 million three or four years ago.
Seriously, that's how big the gains have been.
And this is going to continue.
When DeepSeek version 4 comes out, that's also going to be a game changer, and it will probably exceed the coding capabilities of this Qwen 3.5 that just came out.
And it's rumored that DeepSeek is going to have a light version that also runs on consumer-grade hardware.
Like a 20-gigabyte card is the rumor, or a 24-gigabyte card, something like that.
If that's the case, then my goodness, U.S. AI companies might really be obsolete.
And the U.S. tech sector is going to get hammered in the stock market.
And U.S. job losses are going to mount.
So be prepared for that.
And I'll keep you posted as best I can.
I'm deep in this industry, so I'm watching it and using the tools every day.
I'm looking around.
If you saw how many computers are in my office, how many is it?
Eight.
That's just in my office.
And then I have a mini data center outside of my office.
But I've got, yeah, I've got eight computers, and each of them have two screens.
Well, some of them have more.
So what is that?
It's like 20-plus screens, actually.
It's pretty crazy.
Yeah, it's a good thing I have good nutrition to protect myself from all that electromagnetic exposure, huh?
Yeah.
Otherwise, I'd be cooked.
There's a lot of electronics in here.
You can use my AI tools for free at brightanswers.ai.
That's our deep research engine.
Or brightlearn.ai is my book creation platform.
And you can download all the books there for free.
You can watch my videos at brightvideos.com.
And you can follow and check news trends at brightnews.ai.
So thank you for listening.
I'm Mike Adams, the founder of all those sites I just mentioned and much more.
My articles are found at naturalnews.com.
And thank you for listening.
Take care.
Biostructured silver gel deserves a spot in your emergency bug out bag.
It's made with three types of silver, seven powerful essential oils, and lab tested for glyphosate, heavy metals, and microbiology.
Get yours at HealthRangerStore.com.