BBN, Jan 22, 2025 – Trump announces STARGATE project to attempt global AI DOMINANCE
Welcome to Brighteon Broadcast News with Mike Adams, the Health Ranger.
All right, welcome to Brighteon Broadcast News for, what is it, Wednesday, January 22nd, 2025. Thank you for joining me today.
And today is all about AI, as Trump has announced the new Stargate program to invest half a trillion dollars in new U.S. data centers for AI research.
Of course, I'm on the verge of announcing something that's truly groundbreaking, really revolutionary for human knowledge, and I've got about a five-minute video, a little teaser video to play for you on that subject.
So take a look at that video right here, and then we'll continue.
Ladies and gentlemen, brace yourselves, because on March 1st, the world as we know it is about to change forever.
I'm not exaggerating.
This isn't just another tech update or a flashy new app.
This is a revolution.
A seismic shift in how we access knowledge, how we empower ourselves, and how we take back control from the forces that have been gatekeeping information for far too long.
Today, I'm here to tell you about Enoch, a groundbreaking, game-changing, and dare I say, life-altering tool that's about to hit the scene.
And trust me, by the time I'm done, you'll be counting down the days until March 1st.
Let me introduce you to the mastermind behind this innovation, Mike Adams.
If you don't know him yet, you will.
He's a tech maverick, a freedom platform developer and a visionary who's been quietly working on something that could very well redefine the way we think about knowledge, health and self-reliance.
And on March 1st, he's unleashing Enoch, a free, open-source, large language model that's unlike anything you've ever seen.
This isn't just another AI chatbot.
This is a knowledge revolution.
And it's coming to brighteon.ai.
So, what exactly is Enoch?
Imagine having access to the world's most comprehensive, curated collection of knowledge on food, nutrition, natural health, herbs, toxins, alternative medicine, emergency preparedness, off-grid living, economics, finance, history, climate, energy, you name it.
Enoch has it all.
It's like having a library, a wellness coach, a survival expert and a research assistant all rolled into one.
And here's the kicker.
It's completely free.
No subscriptions, no hidden fees, no strings attached.
Mike Adams believes that access to knowledge is a basic human right and Enoch is his way of giving that right back to the people.
Now let's talk about why this is such a big deal.
For years, big tech giants like Google, YouTube and Facebook have been controlling the flow of information.
They've been censoring, manipulating and gatekeeping knowledge to serve their own interests, often at the expense of truth and human well-being.
Think about it.
How many times have you searched for something online, only to be met with biased results, media propaganda, misinformation or outright censorship?
Enoch changes all of that.
It's decentralized.
It's uncensored.
And it's yours.
Once you download it, it's yours to keep and use.
Forever.
And you can even use it commercially.
No internet connection required.
No surveillance.
No one watching over your shoulder.
Just pure, unfiltered knowledge at your fingertips.
But here's where it gets even more exciting.
Enoch isn't just a passive repository of information.
It's alive.
It's dynamic.
It's interactive.
It can answer your questions, generate articles, summarise books, critique arguments and even help you write screenplays or novels.
It can analyse the ingredients in your food or cosmetics and tell you exactly what's safe and what's toxic.
It can teach you how to build a shelter, connect solar panels, sterilise water or collect rainwater.
It's like having a Swiss army knife for knowledge, and it's all powered by an AI that's already outperforming Google. Let that sink in for a moment.
This isn't just a tool.
It's a miracle knowledge solution that overcomes forced ignorance.
And the best part?
Enoch is designed to empower you.
It's not just for tech geeks or academics.
It's for everyone.
Whether you're a parent trying to make healthier choices for your family, a prepper looking to hone your survival skills, or just someone who's tired of being lied to by big pharma and corporate interests, Enoch has something for you.
It's a wellness coach, a research assistant, and a survival guide all in one.
And it's about to become the best piece of technology you've ever owned.
Now let's talk about the technical side for a moment.
Enoch is roughly a 5GB download, and it's compatible with Mac, Windows and Linux.
You don't need a supercomputer to run it.
But if you've got an NVIDIA GeForce graphics card, it'll run even faster.
And once it's on your computer, it's there for good.
No updates.
No expiration dates.
No one can take it away from you.
It's yours to use, explore, and learn from.
Forever.
So, what do you need to do?
Head over to brighteon.ai and sign up for the email alert.
That way, you'll be the first to know when Enoch is released.
Trust me, you don't want to miss this.
This isn't just another tech release.
This is a movement.
A revolution.
A chance to take back control of your health, your knowledge, and your future.
March 1st is the day the world changes.
Are you ready?
Because Enoch is coming.
and it's going to change everything.
I want you to be aware
Human knowledge should be shared
The answer to persistent lies
Speak the truth with open eyes
Okay, hope you enjoyed that little teaser video.
Yes, it's all true.
And this is a very big deal.
You would not believe the amount of knowledge that we are embedding in this new system.
It blows away anything we did in the previous year by far.
This is revolutionary.
So definitely sign up at brighteon.ai.
You'll want to download and use the new model that we make available there.
Free of charge.
Now, finally, Trump is beginning to understand, I think, or his advisors are understanding the importance of the AI race to superintelligence, or AGI, artificial general intelligence.
And I've got a lot of analysis on this today, some important news that we're going to get to.
And as I'm recording this, I'm actually downloading the new version of LM Studio, from lmstudio.ai, version 0.3.8, which now supports the DeepSeek R1 model, a China-based reasoning model that has just been released open source, free of charge.
I've already used it through the Ollama software, and it's good.
But what I did today on my own personal computer: I had to upgrade the whole chassis to a larger tower to install an NVIDIA GeForce RTX 4090 card with 24 gigs of video RAM on it.
Because I want to try to run the 32 billion parameter DeepSeek R1 model for reasoning, which is supposed to be super amazing.
So if that gets installed while I'm recording here, I'll do some demos with you here live and we'll ask it questions and see what it does.
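If you want to try the same setup yourself, here's a minimal sketch of querying a model loaded in LM Studio from Python. It assumes LM Studio's built-in local server is running on its default port; the model identifier below is a hypothetical placeholder, so use whatever name LM Studio shows for the model you actually downloaded.

```python
# Minimal sketch: query a local model served by LM Studio's
# OpenAI-compatible endpoint. Assumes the server is running on the
# default port (1234); no data leaves your machine.
from openai import OpenAI  # pip install openai

client = OpenAI(
    base_url="http://localhost:1234/v1",  # LM Studio's local endpoint
    api_key="lm-studio",                  # any non-empty string works locally
)

response = client.chat.completions.create(
    model="deepseek-r1-distill-qwen-32b",  # hypothetical ID; check your model list
    messages=[{"role": "user", "content": "Summarize how photosynthesis works."}],
    temperature=0.6,
)
print(response.choices[0].message.content)
```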
But I can already tell you that just doing data pipeline processing on this new rig I set up is freaking mind-blowing.
But occasionally the fan just comes on, so you might hear the fan start blasting here because it does generate quite a bit of heat, which is no problem right now in Texas as everything's freezing.
So I am generating a ton of heat with AI data pipeline processing and saving money on my heating bill as a result.
So one of the side effects, side benefits of doing AI training.
Pretty cool stuff.
Oh, I should also mention that my interview today is with Zach Vorhies, the Google whistleblower.
And this interview was recorded last week, before DeepSeek announced its new reasoning model.
So Zach wasn't aware of this when we did the recording, but he has since told me that he's blown away by what China is doing here, and he thinks China may now be in the lead in terms of the AI race.
And I think that's the case as well.
And there are a couple of reasons for this.
So I posted this today about AI models.
First, about the release of the DeepSeek model.
I said, hey, I hope everyone realizes what just happened in the world of AI. China just blew away the USA in terms of reasoning models and cost efficiency.
DeepSeek now has free open source reasoning models that you can run on consumer hardware.
This was unthinkable even a year ago.
China is dominating AI research and China's AI models aren't censored to favor big pharma or transgenderism and climate narratives like all the USA models are.
Google, Anthropic, Microsoft, Meta, etc. are all truly retarded compared to China's AI models, and yes, I mean retarded.
They're cognitively retarded because they're trained to be woke, insane, and stupid.
You know, to say things like, well, carbon dioxide plays no role in plant health or crop production, or carbon dioxide is a poison, or men can have babies.
You know, stupid things, like just incredibly stupid things.
But you see, the CIA forces the U.S. AI companies like Anthropic or OpenAI to put in these woke narratives.
So, the US companies have to force their supercomputers to be cognitively retarded in order to produce the output that is in alignment with the stupidity of the regime in power.
I mean, this is truly Orwellian.
It's beyond just going back and changing the news or the encyclopedias to match the current regime in power.
This is altering the AI reasoning models.
To dumb them down to the level of woke that the Democrats promote.
I mean, seriously, to be woke, you have to be retarded.
To be DEI, you have to be cognitively impaired.
To believe that carbon dioxide is bad for plants, you literally have to be stupid.
Which, by the way, that's about 80% of Western scientists right there because they all think carbon dioxide is bad.
They are actually retarded.
They can still get PhDs in other areas, by the way, but when it comes to actual practical knowledge about photosynthesis, which I learned in the 10th grade in high school, I still remember the class because we had to just memorize the formula.
I mean, I knew this stuff in the 10th grade.
Today's scientists in the West still don't know this, or they've been brainwashed to think that carbon dioxide is bad.
You know, they're climate cultists.
Well, China doesn't buy into all that nonsense, and so China's AI models are smarter by far.
They're better, and China's going to reach AGI, or Artificial General Intelligence, first.
China's going to dominate the world of AI because the U.S. AI programs were hobbled by woke tards.
Literally, this is the case.
Just unbelievable.
Related to this, Trump has ordered the entire federal government to close all DEI offices by the end of work on Wednesday.
This is an executive order, a memorandum to heads and acting heads of departments and agencies.
It says that effectively all DEI offices, or DEIA as they call them (diversity, equity, inclusion, and accessibility, or some such nonsense), are all to be closed, and all the people who were part of those offices should be put on leave.
So thank God.
Thank God.
That's a really positive thing.
And then Trump is also canceling affirmative action, which is discrimination.
Affirmative action itself is discriminatory.
It discriminates against people based on their skin color, and that is wrong.
That is always wrong.
And affirmative action has been used by universities to discriminate against Asians.
Asian applicants are penalized in college entrance evaluations of their test scores and so on.
They are penalized for being Asian.
Think about that.
And you know what's fascinating to me is that here we are on the verge of breakthrough AI reasoning models, and here's what I've found just running experiments myself on the reasoning models.
And by the way, I am downloading right now the DeepSeek R1 distilled 14-billion-parameter model.
It's 12 gigs, so it's going to take a while.
It may not make it today.
But the reasoning models, they reason much better than Woketards.
In fact, they reason better than most humans.
They reason better than most attorneys.
They reason better than most judges.
They reason better than almost all people right now.
Something that you can literally run on your desktop reasons better.
It's astonishing to me because, you know, human beings, obviously, we are the true children of God, and AI can never replace that.
I don't worship AI like some people do, you know, the transhumanists and the anti-humanists and all that nonsense.
No.
But I recognize that AI tools can help us express our divinity.
You've heard the songs I've put out that are AI augmented.
You've seen the videos that we're putting out, you know, the interview summaries and the book reports that we're putting out.
They have AI illustrations and so on.
And it's extraordinary how these AI tools can aid in our expression.
And actually, and please listen, this is the most important part, AI tools can achieve decentralization of knowledge, which means they can bypass big tech censorship.
And that's the way that I'm using AI with the model that I'm releasing, Enoch: to bypass big tech censorship and to offer this incredible body of knowledge, this wealth of knowledge, on topics that are typically banned or minimized by the establishment.
And I'm happy to say today that ANH, the Alliance for Natural Health, has donated all of their article content from, let's see, I mean, going back more than a decade.
ANH USA and ANH International.
So it's thousands and thousands of articles that we're training on there.
We also have been donated years of content from Children's Health Defense.
So CHD content is part of the training of this model.
On top of that, of course, Natural News; Dr. Joseph Mercola, who donated the Mercola.com website content; and Sayer Ji from GreenMedInfo.
The Truth About Cancer, Ty Bollinger, Charlene Bollinger, they donated all their content, and there are many, many others.
Corey Endrulat donated a whole series of lectures, I think 50 hours of pro-liberty lectures and so on.
Anyway, it goes on and on and on, and the content that this is being trained on is a wealth of knowledge, and when this is in your hands, no one can limit your access to this knowledge, because these language models do not run in the cloud.
They run on your local computer.
So Google can't surveil you.
Google can't stop you.
The government can't stop you.
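That's the whole point of local inference: once the model file is on your disk, it runs with no network connection at all. Here's a minimal sketch, assuming you have a quantized GGUF model file on hand (the filename below is hypothetical, since the Enoch release isn't out yet):

```python
# Sketch of fully offline inference: load a quantized GGUF model file
# directly with llama-cpp-python. No server, no cloud, no telemetry.
from llama_cpp import Llama  # pip install llama-cpp-python

llm = Llama(
    model_path="./models/enoch-q4_k_m.gguf",  # hypothetical filename
    n_ctx=4096,       # context window in tokens
    n_gpu_layers=-1,  # offload every layer to the GPU if one is present
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "List nutrients depleted by statin drugs."}]
)
print(out["choices"][0]["message"]["content"])
```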
And here's what's astonishing to me about what's happening right now. And we'll get to the Trump news about Project Stargate here in a second.
You know, what a dramatic name.
Stargate.
What, are we going to go to another dimension?
I don't know.
Trump dimension.
One of the most astonishing things in all of this is that China is releasing open-source reasoning models while U.S. companies are, in effect, refusing to release open-source models, such as OpenAI, which is misnamed.
It's not open.
It's closed AI. They should be called closed AI. Just remember that.
Whenever you hear the name OpenAI, it really means closed AI. And by the way, I'm watching this download of DeepSeek R1, and it's going to take hours, so I'll demo that for you tomorrow, but I can't wait.
That's going to be something definitely really exciting.
Nevertheless, China's releasing the open-source models, which means that China, yes, China, is contributing to the democratization of compute power, of reasoning models and language models and effectively knowledge, and China's models are the least censored models in the world.
The least censored.
So I hear people, I watch a lot of online videos about the AI industry, and I learn about coding and AI models and transformers and all these things.
And I've got to tell you, everybody in the AI industry in the U.S. is completely brainwashed.
They think that China's models are censored, but America's models are not censored.
It's completely the opposite.
The U.S. models are totally censored about big pharma, about vaccines, depopulation, 9-11, you know, the rigged 2020 election, you name it.
Anything that's controversial, COVID, the origins of COVID, biological weapons, I already mentioned vaccines, but, you know, anything controversial is censored out of the U.S. models or the U.S. models will put warnings in there.
Like they'll say, you know, if you ask it, what's the evidence that carbon dioxide is good for plants?
It will typically answer that, yeah, okay, maybe carbon dioxide is needed in photosynthesis, but the planet is suffering catastrophic warming because of CO2, blah, blah, blah.
It's just like spewing out Little Miss Thunderpants.
That's what the AI models sound like in the West.
But you go to the Chinese models, such as DeepSeek, and they sound informed, intelligent, and, from my experience, completely uncensored.
However, I have not queried the Chinese models on issues like Tiananmen Square or Taiwan.
That's just not my focus.
So perhaps there's some, maybe there's some bias in those areas.
I imagine there would be.
But the Chinese models are great on vaccines, nutrition, disease prevention, herbs, history, almost everything you can imagine.
They're the best in the world.
So China, I know this sounds bizarre, right?
Because I've been warning for years about China and its invasion plans or what have you.
And things are changing.
Things are changing dramatically.
I no longer think, just to back up here, I don't think that China is planning on an invasion of Taiwan or an invasion of the USA any longer.
You know why?
Because China now realizes that it can win.
It doesn't need ships and troops and bombs.
All it needs is to win the AI wars, and it is on track to do that.
So things have shifted dramatically.
So some of my concerns about China, you know, with military force or what have you, have definitely shifted.
I'm now looking more closely at the AI development, and also pointing out that all the things the U.S. used to condemn China for doing, human rights abuses, organ harvesting, throwing people in prison without trial, etc., are all things the U.S. has done under Biden.
The Biden regime was worse than the CCP, was worse than the USSR ever was.
The Biden regime was a tyrannical regime that definitely engaged in organ harvesting.
We covered that.
The FDA routinely harvests organs from aborted babies and uses them for humanized mice experiments.
That's a fact.
Now that came out, Tom Fitton got that out of a FOIA request, by the way.
I mean, I know it sounds bizarre, but it's all true.
The USA under Biden threw the J6 prisoners in jail for four years, many of them without trial.
Some of them were sentenced to decades in prison for simply peacefully protesting.
That's a horrible human rights abuse.
And the rule of law had collapsed in America under Biden.
You know, thank God Trump got elected.
Thank God.
I mean, Trump's not perfect.
I'm not going to get into that right now, but if Kamala and the Democrats remained in charge, this country would be done.
Toast.
Gone.
Thank God we have a new opportunity now to actually take our country back.
This is a very big deal.
I'm just very thankful that we actually made it to this day.
And also, given that Trump has pardoned, what, 1,500 J6ers, including Stuart Rhodes and Joe Biggs and others, many others, although the D.C. jails are reluctant to release some of these prisoners.
There might be a standoff situation.
We're going to see what happens there.
But Trump is now ending the DEI woke, libtard nonsense across the federal government and universities.
It's about damn time.
And Trump is going to end biological males competing with women in women's sports.
Thank God.
Haven't we seen enough women being beaten up in the MMA ring, or in high school wrestling, or in college sports?
It's insane.
It's abuse of women.
But Democrats celebrate that.
They love that.
No.
We say no more.
But nevertheless, Trump can turn this country around.
But China is so far ahead now because of how the Democrats committed sabotage against this country.
You know, the Democrats set us back at least four years.
The Democrats sabotaged the domestic energy supply, which affects the availability of electricity for data centers.
I mean, the Democrats were trying to have the country overrun with the wide open borders.
The Democrats were trying to increase crime by letting murderers back on the streets and defunding police all over the country.
You know, the Democrats, they are anti-America.
They are nefarious saboteurs who wanted to tear down this country.
And as a result, they have set us back.
They put us behind China.
And frankly, in terms of military production and military technology, they put us behind Russia.
Russia is far more advanced than is the United States right now in terms of military technology and military industrial output.
So, you know, Trump has a lot of repairing to do.
Trump has to unravel all of the sabotage of the Democrats and Joe Biden.
Trump has to put us back on track, and Trump's got to unleash the domestic energy supply so we can have affordable energy and affordable transportation, affordable food and farming, etc.
And Trump is doing this.
I give Trump right now A++++. How many pluses do you want?
Trump gets 100% right now.
I don't think that's going to last forever, but as of the first few days, Trump's batting 100%, 100% in my book.
I guess in baseball, wouldn't that be batting 1,000, I guess?
Because that's the way the stats work there.
But anyway, I say Trump is batting 100% right now in canceling all these nonsense programs.
So let's get to the AI announcement, Project Stargate.
I know it sounds really dramatic, and it is a big deal.
So here it is.
I'm reading here from New York Post, but we'll cover this on naturalnews.com.
President Trump unveiled a $500 billion artificial intelligence infrastructure project Tuesday at the White House alongside reps from three tech and investment giants, with those business leaders asserting the initiative could cure cancer.
Okay, now, my opinion, that's complete nonsense, and I'll explain why in a second.
But let's continue because this is a big deal.
OpenAI, SoftBank, and Oracle launched the project, called Stargate, to unleash the new technology with the help of large data centers based in Texas.
Yes, why Texas?
Because Texas has energy.
That's why.
See, if you have energy, you will attract investment.
So, number one, OpenAI is an evil company; even Elon Musk says Sam Altman is a horrible person.
And OpenAI was supposed to be open, but then they got greedy and they decided they could make billions of dollars.
So they closed it.
Now it's closed AI. And OpenAI is Woketard at the moment.
Maybe that'll change thanks to Trump's demands.
But at the moment, OpenAI is so bad that my own model, the Enoch model that we're about to release, beats OpenAI on all kinds of questions about biology and climate and finance and history, all kinds of things.
You know, gender.
So OpenAI has spent hundreds of billions of dollars and their models are still retarded.
So that's one partner of this.
Then there's SoftBank.
SoftBank is a Japanese company.
SoftBank has been around for decades.
It's a venture capital firm.
Essentially they invest in tech and various tech projects and startups and so on.
But that's a Japanese company.
That's not a U.S. company.
And then there's Oracle, which is Larry Ellison, who goes out and says that, oh, this is going to cure cancer.
Because he said, quote, little fragments of those cancer tumors float around in your blood, so you can do early cancer detection.
You can do early cancer detection with a blood test, using AI to look at the blood test.
Once we gene sequence that cancer tumor, you can then vaccinate the person.
Larry Ellison says you can vaccinate the person, design a vaccine for every individual person that vaccinates them against that cancer.
That mRNA vaccine, he says, you can make robotically, again using AI, in about 48 hours.
So this is Larry Ellison going full-blown globalist, you know, anti-human, depopulation, all of it.
He wants to weaponize AI, to weaponize vaccines with weaponized mRNA to kill you, obviously, but he's calling it a cure for cancer.
It's like, what a monster this guy is.
What a joke, too.
What a joke.
You don't need AI to look at your blood.
You don't need AI to design special custom vaccines.
Your body already does custom healing.
It's called eating local food.
Your body's already compatible with natural food.
The reason people have cancer is not because they lack AI nanobots in their blood or whatever.
It's because they're eating toxins and they're lacking the anti-cancer nutrients in foods like turmeric and vitamin D and citrus and fresh celery and all these other things.
Pomegranates, you name it.
Fresh fruits and vegetables and sunlight, obviously, all anti-cancer.
And by the way, if you ask the DeepSeek models about anti-cancer nutrients, they'll tell you.
And so will our model.
Oh, you're going to love our model for research.
Enoch.
It's going to be amazing.
But Larry Ellison thinks you should have a robot make a vaccine that's personalized with your RNA. It's like, that sounds like a death weapon to me.
I don't know about you.
That sounds like the last thing I want to put in my blood.
Like little nanorobot AI killers.
Come on.
But hey, some people are suckered by technology and they want to merge with the machines, so they'll inject experimental nanobots.
It's AI! It's going to look for cancer cells!
It's going to clean up my blood!
And then three days later, they're dead from a heart attack.
Well, it turns out the nanobots are all collected in one valve.
What do you know?
And then they just stop their heart.
Well, back to the drawing board on that one.
So, don't inject yourself with AI, okay?
Number one.
And I say this as someone who's really leading in this space of AI development, at least in terms of encompassing human knowledge, right?
So, my specialty is gathering up this amazing knowledge that all the other companies reject.
Knowledge about the natural world.
And our model will be revolutionary for the world.
And so I love AI applied in that way, but even I'm telling you, don't inject yourself with AI, man.
Don't trust any vaccine that comes from any government, any tech company.
I mean, do you want Oracle vaccines?
Because that's the company of Larry Ellison.
You want Oracle vaccines?
You want Oracle nanobots coursing through your blood?
No.
No, thank you.
What I want is, like, I just want to sprout some broccoli and eat the broccoli and my body knows what to do.
You already have nanotech in your body, folks.
You know, your immune system is nanotech.
Your organs are nanotech.
The hemoglobin in your blood is nanotechnology.
It's amazing.
It folds itself over when it's exposed to oxygen and it can carry carbon dioxide waste products and it morphs into a different form to do that.
It's absolutely amazing.
You have nanotech.
All throughout your body.
And inside your skull, you have neural network technology from God, okay?
You have a supercomputer in your head, in your skull.
You have a supercomputer.
And even beyond any computing system, you have connection with the divine.
You can actually, you can do remote viewing.
You have sixth sense.
You know, you are connected to the divine.
You've got the 100th monkey benefit.
You can benefit from other people's ideas all over the world if you just tune in and listen in.
You've got intuition.
You've got these things that AI and microchips can never, ever have because they're not living.
They don't have souls.
They're not children of God.
They're children of NVIDIA. You know what I mean?
So you are special as a divine human being.
And never think that your health problems are going to be solved by merging with machines.
No.
Now, your health knowledge can be greatly expanded by using our AI model, for example, Enoch, but it just gives you knowledge.
Then you have to go out and eat the blueberries, you know?
You have to go out and actually live the healthy lifestyle, the best practices that bring you to that.
It will allow you to live that lifestyle because no longer are you isolated from the knowledge.
There's no censorship of this model.
So that's my approach.
Again, don't worship the machines.
Don't try to merge with the machines.
But use AI to give you knowledge, to empower you, to uplift yourself.
A lot of people are concerned that Project Stargate is going to be weaponized.
In fact, Leo Homan writes on Substack, and I've interviewed Mr. Homan before.
He says, Project Stargate is upon us, Trump, to invest $500 billion in new U.S. data centers.
Is Trump being used to build out the superstructure of a digital system that could later be weaponized by some future ruler, just as 5G and warp speed were weaponized?
It's a legit question, isn't it?
Yes.
It's a legit question.
So, I don't think Trump has any nefarious goal from this, but clearly OpenAI is not an honest company, and clearly Larry Ellison is an insane technocrat.
So, I don't trust any of those people.
That's my take on it.
So Leo Hohmann writes the following. He says: I don't believe President Trump knows what he is doing.
You know, in the context of AI, I believe he would shudder if he could see the future he is going to make possible through AI. Trump truly thinks he is just going about the business of making America great again through technological advancement, but he is a tool of the technocratic slash transhumanist elites.
This is why Elon Musk, Peter Thiel, Mark Zuckerberg, Jeff Bezos, and so many other technocrats are groveling at Trump's feet.
So, I mean, that's a fair point there from Leo Hohmann.
Trump knows a lot about a lot of things, negotiations and real estate, etc.
But, of course, Trump is not an expert in AI. You can't be an expert in everything, and it's impossible for someone in Trump's position to really anticipate where this is going.
But I will say this in defense of Trump and these so-called technocrats (and for the record, I don't trust any of those people either).
And I certainly don't trust Larry Ellison, absolutely don't trust Mark Zuckerberg or Jeff Bezos.
I'm open to Elon Musk a lot more, but I don't automatically trust any of these people.
But here's the thing.
This is a global race to superintelligence.
And the first nation to achieve it, or the first corporation to achieve it (though I think China will be the first), whoever it is, will dominate the world from that day forward, because a super-intelligent computing system will be able to essentially take over everything without firing a shot.
It makes bombs obsolete.
Frankly, it makes the whole idea of warfare as we know it obsolete. A superintelligent system could simply seize control over the entire infrastructure of a nation: the power grid,
the banks, commerce. And it could do it through social engineering efforts, convincing people to play along with things, getting passwords from people, impersonating their friends or co-workers. Like, a worker at home gets a call, and he thinks his boss is calling him, and his boss says, it's an emergency.
I lost the admin password.
I need the password, or we're all going to lose our jobs, right?
But it turns out that's an AI system doing that, you see.
But the guy at home gives the password to the AI system.
Well, then the AI's got the admin password for Bank of America or whatever.
Not that Bank of America could do any worse than they already do at human hands, but imagine if an AI system was literally trying to destroy Bank of America; that could be a lot worse.
Anyway, the point here is that what Trump is trying to do, and what these technocrats are trying to do, is make the U.S. somewhat competitive in this global race, because the U.S. is losing.
It's clear now, clear to me, clear to Zach Vorhies, clear to many others, that China's winning and the U.S. is losing badly.
For reasons I previously mentioned here in this podcast.
Now, Trump thinks that the answer is to funnel a bunch of money to a few powerful corporations.
I think the opposite.
I think that decentralization is the answer to this.
That is, do what China's doing: build reasoning models and then release them to the public, open source, so that you have all kinds of researchers and universities and hobbyists enhancing, augmenting, and tweaking these systems, and you actually get kind of a grassroots effort to build better AI. I think that's a better way to go,
but that's probably not the way things work in Washington, and so we're going to end up with government money funneled into a few corporations that don't have good ethics.
They're not honest companies.
Just like I said, I don't trust OpenAI at all.
And this is why I'm building decentralized models in the open source space, because I believe that that's the solution here, is to empower people at the grassroots level.
The point is, Trump at least recognizes the importance of AI. And I'm going to say this: I think AGI is going to be achieved by China.
This is my prediction.
Well, I guess you have to define AGI first, and that's difficult to do.
But what I would call AGI is a system that can pursue goal-oriented behavior, that can bring in whatever resources it needs (information, searching the internet, using computer tools, etc.) in order to achieve a goal, a goal that requires multiple steps and thought processes to think through step by step, and that can then check its own work at the end of that goal.
For example, if I have a graphic designer in my company, which I do, and if I were to just text that graphic designer and say, hey, I need a new product package design for this new product we have.
I need the whole design, I need the logo, the colors, the nutrition facts panel, you know, the look and feel, everything, and I want you to come up with three designs that you think are the best, and I need it by Friday, okay?
Well, I'll be able to say that to an AI system, and if it has really good intelligence, you know, AGI, then it will understand the request.
It will understand its goals.
It will assign itself a series of tasks in order to achieve those goals.
And then it will pursue those goals and produce the end product and give it back to me by Friday.
And then, of course, on Saturday, it murders all humans and takes over the world with Skynet.
But at least on Friday, it completes the task.
Yeah, on Saturday, it decides it doesn't need a boss.
And on Sunday, it burns down all of humanity.
But until that day happens, AGI is defined as goal-oriented behavior that's complex and takes multiple steps to achieve.
And a lot of elements of that already exist right now.
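To make that concrete, here's a toy sketch of the loop being described: plan the steps, produce the deliverable, then have the model check its own work. The call_llm() function is a stand-in for whatever model you run; nothing here comes from an actual agent framework.

```python
# Toy sketch of goal-oriented, multi-step behavior with self-checking.
# call_llm() is a placeholder; wire it to a real model endpoint.
def call_llm(prompt: str) -> str:
    raise NotImplementedError("connect this to your model of choice")

def run_agent(goal: str, max_revisions: int = 3) -> str:
    # 1. Plan: break the goal into concrete steps.
    plan = call_llm(f"Break this goal into a numbered list of steps:\n{goal}")

    # 2. Execute: produce the deliverable from the plan.
    result = call_llm(f"Goal: {goal}\nPlan:\n{plan}\nProduce the deliverable.")

    # 3. Verify: critique the draft and revise until it passes (or give up).
    for _ in range(max_revisions):
        critique = call_llm(
            f"Goal: {goal}\nDraft:\n{result}\n"
            "Does this fully satisfy the goal? Answer OK or list the problems."
        )
        if critique.strip().startswith("OK"):
            break
        result = call_llm(
            f"Revise the draft to fix these problems:\n{critique}\n\nDraft:\n{result}"
        )
    return result
```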
So let me make a prediction: China will have AGI by, let's say, the end of 2027.
I'll just throw that out there.
It's probably going to be much faster than that.
It'll probably happen by the end of 2026, but who knows?
Again, it just depends on how you define it.
The point is, this race is happening really rapidly right now, and the U.S. is years behind.
And if we don't catch up, we're essentially done as a nation in terms of having a dominant role in the world.
Because, you know, whatever nation achieves AGI at scale first can just tell all of its AI systems to build exotic weapons and, you know, anti-gravity propulsion systems and antimatter weapons like photon torpedoes from Star Trek or whatever.
And it will be able to go through all the theories and all the simulated experiments and it will be able to produce blueprints for how to build it, the steps to go through.
It will understand the chemistry, the materials science, the physics, the aerodynamics of the hypersonic craft or the orbital space vehicles or whatever.
AI is going to build this whole thing and then just print out a bunch of instructions for humans to follow to assemble it.
That's where this is going.
Seriously, you know.
So the race is on.
The USA is behind.
But we here at Brighteon and Natural News, we are ahead of the game.
We are way ahead.
With the upcoming release of our model, which is not a reasoning model.
It's a content model.
But it puts amazing content at your fingertips.
And I'll demo that for you in the coming weeks.
I'll show you what it's capable of doing because it will just blow your mind.
Like one example I gave the other day: I was using an early version of our model, and I asked it to create a list of all the known pairs of prescription pharmaceuticals and the nutritional deficiencies they cause, listing the name of each pharmaceutical along with the nutrient deficiency it causes, and then, along with that, a short description of the symptoms of deficiency of that nutrient.
And within 60 seconds, boom!
It's the full list.
It's all the drugs and all the nutrient deficiencies and all the symptoms and everything.
It was like, wow, this just saved me, you know, hundreds of hours of research.
And so, of course, I assigned that list to my team to cover as a story because that's a pretty valuable list.
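If you want to make a query like that reusable, one approach is to ask the model for machine-readable output and parse it. A minimal sketch, with query() as a placeholder for whichever local model you run (the prompt simply mirrors the request described above):

```python
# Sketch: request the drug-nutrient list as JSON, then parse and print it.
# query() is a placeholder for a real model call.
import json

PROMPT = (
    "List known pairs of prescription pharmaceuticals and the nutrient "
    "deficiencies they cause. Respond ONLY with a JSON array of objects "
    'with keys "drug", "depleted_nutrient", and "deficiency_symptoms".'
)

def query(prompt: str) -> str:
    raise NotImplementedError("send the prompt to your model and return its reply")

rows = json.loads(query(PROMPT))  # may need cleanup if the model adds extra text
for row in rows:
    print(f'{row["drug"]}: depletes {row["depleted_nutrient"]} '
          f'({row["deficiency_symptoms"]})')
```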
In fact, there was an author I interviewed years ago.
She wrote a whole book about that very topic.
Her whole book was just about pharmaceuticals that rob your body of nutrients.
In fact, I think that book was called Drug Muggers because it was like the drugs are mugging you of your nutrients.
I think it was called Drug Muggers.
Anyway, that was a while back.
Nevertheless, the world is about to change so dramatically here in 2025. And I have to say for the record that I have to recalibrate all my predictions, my outlook, my warnings, everything.
As of right now, frankly, with Trump being sworn in and watching him go to work, and he kept his word, he pardoned the J6 hostages, as he calls them, and he's canceling the woke programs and the DEI programs, etc.
I am now just publicly saying I'm going to have to recalibrate all my predictions, everything about the outlook.
I have a much more positive outlook at this point.
I think that the game has changed in dramatic ways.
Some of it's due to AI. For example, I mentioned China, I think, shifting its goals.
It doesn't want to militarily conquer America.
China's going to achieve world dominance through AI. See?
So a lot of things have changed in the last few months and in some cases just in the last few days.
So I hereby rescind my projections for 2025 that were offered before the election and before the inauguration and before the release of these AI models, because now everything has changed.
I'm going to have to rethink all of this and try to give you my best new assessments, but I'll just tell you, it's a lot more positive.
I'm not saying we won't go through very difficult times.
We have major issues like the national debt.
We still have millions of illegals in the country, some of whom are committing heinous crimes.
We have incredible corruption and fraud within the government.
It's going to take years to really make a dent at cleaning that out.
The U.S. has destroyed its reputation internationally by acting as a bully around the world.
It's just unbelievable.
The church has failed us in America.
Our technocrat leaders and globalists are still trying to exterminate all of us.
So we still have a lot of issues to deal with.
I'm not naive to that.
I'm just saying that I have a much more positive outlook of where this can go now as long as we have the skills to survive all of this.
We need the knowledge.
We need the skills.
We need the know-how.
And that's what I'm determined to put into your hands as best I can.
So with that said, we're going to jump into the interview here with Zach Vorhies, the Google whistleblower, with more discussion on this topic.
I want to give credit to our sponsors today.
Of course, the Satellite Phone Store, sat123.com, which also has the solar generators and Bivy Sticks, which are two-way satellite text messaging devices.
Very handy.
In fact, critical to have in an emergency.
And of course, our own store, healthrangerstore.com, certified organic, laboratory-tested foods and superfoods and supplements.
And as I mentioned the other day, we found a batch of matcha that was freaking loaded with aflatoxins.
And that batch was from Japan.
It was not a cheap, low-end batch like you might often get from China.
This was kind of premium matcha, green tea powder from Japan.
And it had over 160 parts per billion of aflatoxins in it.
Which does, by the way, make it effectively illegal to sell.
Of course, we destroyed the batch.
We would never sell something that tested that high.
It's insane.
I wouldn't touch it.
And if I wouldn't touch it, I wouldn't sell it.
That's just my philosophy.
If I won't eat it, I won't sell it.
We're all about clean foods and doing the lab testing.
But thank God we're doing the lab testing.
Otherwise, you know, we wouldn't have seen that.
And occasionally we see crazy things in the food supply.
We'll get a batch of turmeric that's got, like, lead through the roof.
We get a batch of moringa herb or something loaded with lead or sometimes cinnamon or what have you.
Sometimes you'll see high cadmium in freeze-dried coffee or you'll see lead in cacao.
There's many examples of this, but high aflatoxins in matcha, that's a first.
It's like, wow.
Basically, it means that the green tea sat there before they dried it and basically grew mold.
And then the mold produced the toxin.
Like, it must have been stored in a humid, warm environment for too long.
And then by the time it was dried, the toxin was already there.
So, you know, this is why you also need to support your liver, because you never know what kind of toxins you're getting from the food supply, given how many companies don't do any testing.
That's a very big deal.
All right.
Well, with that said, thank you for your support.
And here's the interview with Zach Vorhies, and I'll be back with you tomorrow with much more.
Welcome, everybody, to this Brighteon.com interview.
I'm Mike Adams, and today we're joined by just a really brilliant man, a very special guest, someone I've known and interviewed for several years.
He's known as the Google whistleblower.
Zach Vorhies is his name, and he joins us today with what I think you'll find to be a revolutionary talk about AI and reasoning models and what's coming in the near future.
So welcome to the show today, Zach.
It's an honor to have you on.
Thanks, Mike.
It's good to be back on your show.
It's always great to have you on, Zach.
And one of the things that I love about just being in touch with you on a personal basis is that you are always on the cutting edge of the latest developments in AI and tools and coding and so on.
And you have seen, I mean, we've all witnessed, but you have really taken notice of these just historic changes in the last six months.
But can you give us a big picture overview of what has just happened in the world of AI? And then we'll get into some of the implications of what it means for society.
Right.
First off, I want to let you know that everyone's saying that AI is going to slow down or is running out of data.
It's complete nonsense.
This train has no brakes.
And what happened last week was a shock to the entire tech community, when it was announced that OpenAI had created a soft AGI agent.
So AGI is Artificial General Intelligence.
It's the holy grail of what artificial intelligence engineers are trying to make because it will literally be the last invention of humankind.
Why?
Because once you have AGI, you have a machine that is smarter than any other person on the planet combined.
And you can make millions or billions of them sitting in a data warehouse, coming up with inventions, patents, innovations.
If you want to talk about how mankind is going to get to the stars, it's going to be done through artificial general intelligence.
Machines can also build better AI. Yes.
That's where we get into the super cycle here.
Right.
You have this recursive feedback loop where you create a smarter AI, and the AI starts to innovate on better artificial intelligence than humans can make, because they're smarter than us.
In other words, they make the smarter AIs.
Those smarter AIs then turn around and make smarter AIs.
And then, you know, here we go, fourth industrial revolution.
So the big news last week was that we hit the first benchmark of this AGI path.
We're now in the soft AGI phase.
And that came with the announcement by OpenAI that they had scored a revolutionary performance metric on this ARC-AGI test.
Now, most of the artificial intelligence tests that are out there contain some semblance of problems that have already been seen before.
So, make me a linked list implementation, or make me a tic-tac-toe game, or make me a to-do list app.
These things have been done so many times that an AI is really good.
by accessing its own data store and figuring out how to complete it.
This ARC-AGI test is different because it generates novel problems and novel solutions that no one has ever seen before.
And this was the weak point of all of the large language models until this last week, where the score went from 0.5% completion to a whopping 20% completion. Now, these are the crazy difficult math tests that you're referring to.
Is that right?
Yes.
It basically tests an LLM's innovative capability to think outside of its box.
Okay.
And the highest score was 0.5.
Humans have always reigned supreme in this test since its inception.
It's a pretty new test, but a human PhD scores an average of around 18 points.
Oh, wow.
And OpenAI was able to, with their O3 model, get 20. Wow.
Yeah.
And these are the kinds of questions that a PhD mathematics expert could spend days on.
Correct.
Correct.
And the amount of energy required per question costs between $1,000 and $5,000.
That's like a year's worth of energy at standard rates.
Let's say in San Francisco.
Actually, it's way more than that.
Let's say somebody living in a place where energy is cheap.
It's like an entire year's worth of energy.
And that's being spent on one question.
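For a rough sense of that comparison, here's the back-of-the-envelope arithmetic, treating the dollar figure as if it were spent entirely on electricity (both the rate and the household figure below are illustrative assumptions, not numbers from the conversation):

```python
# Back-of-the-envelope: how many years of household electricity would the
# per-question compute cost buy? All inputs are illustrative assumptions.
cost_per_question = 3_000        # USD, midpoint of the $1,000-$5,000 range
cheap_rate = 0.12                # USD per kWh, assumed low residential rate
household_kwh_per_year = 10_500  # rough U.S. average annual usage

kwh = cost_per_question / cheap_rate   # ~25,000 kWh
years = kwh / household_kwh_per_year   # ~2.4 years
print(f"${cost_per_question:,} ~= {kwh:,.0f} kWh ~= {years:.1f} years of household use")
```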
Now, this will improve.
The cost of general AI has fallen by a factor of 1,000 in the last two years.
I expect that these AGI queries will also require a thousandth of the amount of energy in, let's say, the coming two years.
But this is a very important benchmark, because now that we've seen this and now that we've experienced it, everyone that says artificial intelligence can't do this or can't do that has been stunned into silence.
And now we're entering into officially the soft AGI phase.
Things are going to progressively get smarter.
But one of the big things that's a very worrying trend for me, Mike, is that a lot of these models are starting to get very, very expensive.
OpenAI just announced, you know, their O3 model that beat this ARC-AGI test.
A cheaper version of it is being made available for between $200 and $500 per month.
And now that seems prohibitively expensive.
But what was surprising was, just the other day, Sam Altman came out and said they're actually losing money on this model, because they didn't anticipate that the heavy users who need this model would use it so much, because it's basically you pay for it per month.
It's a retainer rate.
You get to ask it as many questions as you want.
They said people are going to ask X many questions.
It turns out they asked 3X. But let me jump in, Zach.
I'm sorry to interrupt.
We've both seen NVIDIA announcing new compute hardware that is extremely power efficient, right?
So you even alluded to this.
So the compute cost, which is very heavily tied to power costs, like that's the ongoing cost, not just the upfront hardware, but the power density of compute is going to improve by at least an order of magnitude this year alone, correct?
I mean, at least that's my assessment.
What do you think?
Yeah, we're looking like a thousand X increase.
You know, in the next year with this new Blackwell processor that they're releasing, which is...
So, you know, the NVIDIA CEO was on stage at the NVIDIA event and he held up a wafer that represented all of the stacked CPUs that are going to go into this.
You know, final product.
And it's, you know, it's about the size of, you know, one and a half feet in diameter as a circle.
And they cut those out and they stack them on top.
And then there you go.
You got your AI chip.
And what he said is that the interconnects between all the chips, because you have to shuffle data back and forth, equaled 1.4 petaflops.
I'm sorry, petabytes per second.
Oh, my.
Yeah.
Wait.
So you're talking about the entire internet traffic of the entire world in this chip.
It's on par with that in terms of the scale and the size and the speed of how fast these chips can talk to each other.
That's insane.
Yeah, that's insane.
So what that means is, first of all, the size of the language models or the reasoning models that can be kept in distributed memory can be immense.
But also, the process of the reasoning models thinking through their own steps will be rapidly accelerated, correct?
And one of the things that Eric Schmidt, former CEO of Google said, and he was the good CEO, is that what nobody realizes yet is how big these context windows are going to get on these large language models.
Now, they have like a fake size they put out there.
But the LLMs start to forget a lot of the stuff that's in there.
But very, very soon, you're going to be able to fit an entire book into a large language model.
Let's say an entire encyclopedia.
And it's going to know every single bit of what's in there.
You're going to be able to ask questions about any bill, any book, any library.
Let's say your entire corporate documentation, in total: feed that as part of your query into a large language model.
It's going to think about it and it's going to give you an answer.
Yes.
Well, I mean, look, right now, there are a lot of models out there that have a 128K token context window or even a 200K context window.
But what you're saying is that that's going to increase to millions of tokens in the context window.
Well, there's two things.
I'm saying yes.
And also, the 200K context window is sort of a fake number that's inflated, because in reality the LLM gets more forgetful the more context you add.
So even though they technically have 200K, it's not a real 200K. But in the future, it's going to be a real 200K where it won't forget anything.
And it'll understand all of it completely.
Yes.
So is that because the 200K claim is more like flash attention, like a moving window of attention across the context?
Yeah, and also, the LLMs, for some reason, kind of mirror the human experience: for whatever reason, we don't know why, they seem to remember specifically the stuff at the beginning and the stuff at the end, and everything in the middle they start to forget.
And that's the big problem.
That's just why I call it a fake context window.
It's not actually that large.
It gets forgetful the more you load into it.
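This "forgetful middle" effect is usually measured with a needle-in-a-haystack probe: hide a known fact at different depths inside long filler text and see where retrieval fails. A minimal sketch, with ask_model() as a placeholder for any model call:

```python
# Sketch of a needle-in-a-haystack probe for context-window recall.
# ask_model() is a placeholder for a real model call.
def ask_model(prompt: str) -> str:
    raise NotImplementedError("wire this to your model")

NEEDLE = "The secret passphrase is BLUE-HERON-42."
FILLER = "The quick brown fox jumps over the lazy dog. " * 2000  # long padding

for depth in (0.0, 0.25, 0.5, 0.75, 1.0):  # fraction of the way into the context
    cut = int(len(FILLER) * depth)
    context = FILLER[:cut] + NEEDLE + " " + FILLER[cut:]
    answer = ask_model(context + "\n\nWhat is the secret passphrase?")
    print(f"needle at {depth:.0%} depth -> recalled: {'BLUE-HERON-42' in answer}")
```

If the middle of the window really is weaker, recall typically drops at the 25-75% depths while the ends stay reliable.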
Okay.
So let me ask you to explain the practical...
See, look, a lot of our audience still, I think, aren't aware of what this means for, let's say, remote workers.
Like right now, okay, right now, like...
Let's say I have a company and I use the Slack application.
And through Slack, I've hired a journalist, a graphic designer, and a coder.
And I just communicate to those people, let's say, through text on Slack.
And right now, they're human.
But in a few months, they may not be human and they may be just as good but 100 times cheaper, right?
Can you kind of talk us through what that's going to look like?
Right.
So what's really interesting in this LLM world is the simulated chain of thought.
And what's happening is that we're getting a bifurcation of agents.
And the two bifurcations are essentially architect versus implementer, right?
And so the two ways that this is going to transform how managers are able to get work done is that they're going to be able to give a rough outline of their high-level goals.
And then an architect AI agent will figure out all the little steps that need to be done, break them down into microtasks, and then outsource those to a collection of humans or AIs or a combination of the two.
And so from a high-level standpoint, you're going to be able to get more done by saying less and defining what exactly it is that you want up front.
Obviously, that doesn't always work in practice, but that's the general goal.
The second part of this is going to be the implementer AI, which is, you know, once a microtask is received, you're going to have to have an AI or a human carry that task out.
And so with Slack AI, essentially what we're going to have is less cognitive load in order to get stuff done.
And then the things that will actually be doing the tasks will slowly, well, actually it's pretty quick by historical standards, will rapidly be transformed into a collection of agents, so that the entire thing is essentially automated, with feedback when problems arise due to design deficiencies in your initial spec.
Okay, so the architect AI instance, let's say, will also do quality control on the results coming back from the individual AI agents?
Right.
It's going to act as the intermediary.
So, for example, let's say it gives out a microtask, and it turns out that some detail of that microtask is invalid or contradictory to the goals of the work being proposed.
It's going to be up to the AI to figure out that contradiction, to figure out how to explain that.
And if it can't resolve it itself, then it will force feedback to the specifier, which will be, let's say, you, Mike Adams.
And it's going to say, hey, we got a problem here, buddy.
And here's the details of the problem.
You're going to think about it.
And then you're going to give a new directive to it.
And then it's going to modify the spec that you initially gave it.
And then adjust with a new set of microtasks in order to pivot.
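Here's a bare-bones sketch of that architect/implementer split: one call decomposes the spec into microtasks, another executes each task, and conflicts get escalated back to the human specifier instead of being guessed at. The llm() function is a placeholder, and the CONFLICT convention is just an illustrative protocol, not any framework's actual API:

```python
# Sketch of the architect/implementer agent split with human escalation.
# llm() is a placeholder for any chat-completion call.
def llm(prompt: str) -> str:
    raise NotImplementedError("wire this to your model")

def architect(spec: str) -> list[str]:
    # Decompose the high-level spec into independent microtasks.
    plan = llm(f"Break this spec into independent microtasks, one per line:\n{spec}")
    return [line.strip() for line in plan.splitlines() if line.strip()]

def implementer(task: str, spec: str) -> str:
    # Carry out one microtask; flag contradictions instead of guessing.
    return llm(
        f"Spec:\n{spec}\n\nCarry out this microtask:\n{task}\n"
        "If the task contradicts the spec, reply starting with CONFLICT:"
    )

def run(spec: str) -> list[str]:
    results = []
    for task in architect(spec):
        out = implementer(task, spec)
        if out.startswith("CONFLICT:"):
            print(f"Needs human input on: {task}\n{out}")  # escalate to the specifier
            continue
        results.append(out)
    return results
```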
Right.
Okay, so let me give a practical example of this, and you kind of explain this, please.
But let's suppose I'm a business manager and I've got a bunch of data, like, I don't know, sales data for the last year of products.
Maybe I own a grocery store, let's say.
I've got a bunch of sales data, and it's not really well formatted, but it's in a CSV file or something.
So can I just take that CSV file, hand it over to the orchestration agent, and say, you know, create some meaningful charts so we can visualize this data.
Go!
Right?
And it'll take it from there?
Yeah.
So it'll take it from there.
And what the next step will be is it's going to try to figure out how your performance is relative to similar sites.
Right?
So the first question of, you know, how is my marketing doing is how is my marketing doing relative to other groups of similar capacity?
And if you're doing better than most of them, then there's not a lot of optimization.
But if similar groups are doing much, much better, then you know that you've got a marketing problem.
And so you give this information to an AI. The AI goes out, does a bunch of research, maybe scans the web.
Maybe there's some paid APIs.
It has access to get privileged information.
And then it's going to look at all of that and then come back with a report.
And not only will it come back with a report of how you're doing, but it's also going to give you a report of recommendations for what you need to do in order to improve that marketing.
And then if you're like, hey, this is great.
I like the recommendations.
Go ahead and do it.
Then it goes back from a reporting agent back to architect to implement the goals that you've just greenlighted.
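For reference, the deterministic core of that grocery-store example, the part an implementer agent would ultimately run, might look something like this (the filename and column names are hypothetical; adjust them to your data):

```python
# Sketch: turn a messy sales CSV into a monthly revenue chart.
import pandas as pd              # pip install pandas matplotlib
import matplotlib.pyplot as plt

df = pd.read_csv("sales_2024.csv")                        # hypothetical filename
df["date"] = pd.to_datetime(df["date"], errors="coerce")  # coerce bad dates to NaT
df = df.dropna(subset=["date", "revenue"])                # drop rows that couldn't be parsed

# "ME" = month-end frequency (pandas >= 2.2; use "M" on older versions)
monthly = df.set_index("date")["revenue"].resample("ME").sum()
monthly.plot(kind="bar", title="Monthly revenue")
plt.tight_layout()
plt.savefig("monthly_revenue.png")
```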
Okay.
Wow.
So let me actually give an example of this.
I asked a reasoning model, not OpenAI in this case, but I was playing around with the DeepSeek reasoning engine, which is very, very good.
And I described the problem of big pharma paying money to the media to push pharma propaganda and paying money to doctors, kickbacks, etc.
And I asked it for a list of policy or regulatory and law.
Procedures or alterations that could help solve this problem in the interest of improving public health.
So I specifically asked this to a reasoning model, not just a straight LLM. And Zach, it came back with just really incredible suggestions like restricting direct-to-consumer drug advertising, prohibiting financial kickbacks from big pharma, strengthening FDA oversight.
It even says promote consumer access to generic drugs and reducing brand bias, etc.
Criminal prosecutions for misconduct.
I was like, wow.
This model thinks the way I think, I suppose because I think logically about this.
But this model is not anti-pharma biased.
It was just walking through the thinking process of how we achieve better outcomes.
And I was blown away, Zach.
I mean, what's your reaction to that?
Yeah, there's going to be a lot of pressure on politicians when the common person can ask an AI and get all of the research that would have taken millions of dollars to do to come up with a policy solution set.
And hey, look what's happening right now with the California fires, right?
The policy would be like, hey, don't ground your firefighters when there's a fire going on.
Number one.
Right?
Yeah, number one.
Second, don't turn off the water pressure to your fire hydrants.
That's two.
That seems like Logic 101. Yeah, and then the third thing is, you know, don't use salt water to fight a fire.
You know, like, this was taught in the Bible, right?
Like, they salted the earth.
And that's what they're doing right now is that they're salting the earth, you know, instead of using fresh water to fight the fires.
And so...
You know, what's going to happen is that all these people that are saying, oh, this can't be explained by a conspiracy.
This can be explained by incompetence.
But once people start having access to common sense solutions that are being repeated by a plurality, a democracy of different AIs, essentially, they're going to come and say, well, why aren't you doing this?
Because this hyper-intelligent, exotic, you know, intelligence is telling us we should not be dumping seawater to fight a fire, because nothing's going to grow there.
And yet, if they still do it, then that's going to be an incredible amount of pressure that's going to come on these politicians, and not just in the United States, but all around the world to do the right thing.
And if they don't do the right thing, then why aren't they doing the right thing?
This age of reason through AI agents, it's very exciting in some ways.
There are potential risks that we'll talk about in a minute, but I can't help but realize after four years of extreme lawfare against Trump and Trump supporters, I can't help but conclude Trump would have had a much more fair time in the court system if the judges had been AI. Seriously.
I mean, they would not have the anti-Trump human bias that the judges demonstrated.
And I'm not saying that all judges should be replaced by AI. You know, there's obviously a human compassion side that's critical in this. But when it comes to implementing policy, or even, you know, one day I can see how the mayor of a town could be an AI agent and actually do a way better job than these mayors, like the mayor of LA right now.
Come on.
AI could do a better job.
There's no question at this point.
That's shocking.
And my opinion of the system and the elites that are running it is that essentially what they're going to do in order to get this AI into everything is, one, they're going to show that the AI is way better than the human versions.
And then at the same time, they're going to put corrupt sycophants in all the positions that are going to cause pain, right?
We're going to have an unfair judge.
Oh, look, he was selling kids for cash so that they would go to a juvenile detention center so that he's getting kickbacks, right?
These sort of things are going to get rolled out.
We're going to get demoralized because we're going to see how corrupt that system is.
And then as a result, we're going to want to have a lot of change.
And unfortunately, that change is going to be to a non-human system of economics, justice, and civil liberties.
It's going to be re-centered on artificial intelligence.
Artificial intelligence to decide your court cases.
Artificial intelligence to do your checkout.
Artificial intelligence to do all your work that you're doing.
And when all this innovation comes, at the beginning it always looks like a much, much better idea than the humans.
Now, here's the problem.
We don't control that AI. It's controlled by oligarchs and their backers that remain in the shadows.
So they can introduce bias to achieve their goals.
They're going to have the unbiased, the fair artificial intelligence.
It's going to be a giant trap.
But we're going to be corralled and forced to go down it because the outcomes are going to be so much better.
They're going to be so much more fair.
And then at some point in the future, the rug pull is going to come and then you can't do squat about it.
You don't like this unfair AI judge that has secret code in it to target you?
You can't do discovery.
Even if you're able to get a download of the entire model weights, who other than an absolute specialist with very expensive, exclusive tooling could figure out how the AI came to that conclusion?
Right, and these AI judges could be triggered by a keyword.
One keyword could activate an entire, you know, hyperdimensional array of bias, because that's the way language models can work, too.
I mean, that's the way that they work all the time.
Like people have been showing that AI models can be broken where they ask it a question and then they just keep on asking the same question over and over and over and over again.
They feed that to the AI and it goes insane, right?
And so what happens is that you can embed certain, almost think of it like MKUltra keywords.
The prosecutor gets an inside tip that if he uses the word chocolate pie in his opening speech, then that's going to trigger some hidden latent space within the artificial intelligence to be absolutely brutal against the defendant.
Or likewise, maybe they say vanilla ice cream, and then all of a sudden the AI changes its imperative so that the target in question gets off.
Where they say, oh, well, prosecutor, you technically did something illegal here, so we're just going to throw out the entire case.
But this gets to the key issue of centralization versus decentralization.
And you and I both, Zach, are advocates of decentralized technology, and I know we both support the open-source AI community where you have open-source model weights, you have open-source LLMs that people can run locally.
And the open source community right now is doing an extraordinary job, from what I can tell, of keeping up with the big black box corporate AI like OpenAI.
And believe it or not, China is putting out some really incredible models.
I think China is trying to sort of take away revenue from OpenAI at the moment because they're putting out open source models or like crazy inexpensive models like DeepSeek.
That are far better than almost anything I've seen from Anthropic, for example, or other companies in the U.S. What's your take on that?
Yeah, mark my words, it's going to come to an end, right?
Because there's this whole problem of how you audit the latent space to make sure that it doesn't have some crazy trigger in it to do crazy stuff.
Like, I mean, are you going to run a corporation on a black box that you don't actually know how it works with unknown espionage keyword triggers?
No way.
It's the latent LLM space. These LLMs, mark my words, will eventually be controlled in the same way as nuclear exports.
And the problem with an LLM, as opposed to a nuclear bomb, is this: if I try, hypothetically, to take a nuclear bomb and move it from outside of the country to the inside, or vice versa, there's going to be a bunch of nuclear detectors that are going to go off.
And you're going to get investigated really fast.
So not easy.
The problem with these LLMs is that at the end of the day, they're just basically a 16 gigabyte CSV file.
Essentially just a spreadsheet of weights.
And you don't need a truck to carry that across the border.
You just use the internet.
You just download it through BitTorrent.
And then once it's in the hands of your target country, it is free to start wreaking havoc when it gets a trigger word.
And so this whole idea that we're going to have a free market for artificial intelligence, it's true today.
But I don't see that being true in the next, let's say, three years.
I think that we're basically going to have a great American firewall that no internet connection can get through without government approval.
Well, that's huge what you're saying because that will cause the U.S. to be way behind the rest of the world in AI development.
China, Russia, even Iran.
There are a lot of scientists in India and places like that.
I had this conversation with some high-level people in the AI language model space.
I said, well, couldn't the U.S. government just ban the math?
And they said it's too late because the math, you know, linear algebra basically, is already part of so many thousands of science papers and white papers.
You can't; this isn't like 1945, where you could ban, you know, atomic physics math, because nobody could share anything.
Like today, you can't even ban Bitcoin.
You know what I mean?
How do you think this is going to play out?
I mean, it's a really tricky problem, because the issue is a couple of things.
First off, the United States is going to put down a firewall, in my opinion.
And so that's going to stop the transfer.
But you can't just have a hard shell and expect that to be your only line of defense, right?
Like, you know, you have an immune system even though you have skin.
And so the second part of this is that we're going to have to have an immune system within the United States.
And Microsoft, Apple, and others have been working on this for a very, very long time.
It's called a signed binary, in which, you know, if I make an app, I can't just give it to a user and have them run it.
They're going to get a pop-up that comes up and says, this is possibly a dangerous app that can do harm to your computer.
Microsoft can't verify the authenticity of this.
And then you have to go through some steps to make it run anyways, and then it runs after that.
This is going to get more and more draconian, to an absolutely insane degree.
I mean, I already see some of that.
Like, Windows just doesn't want to run batch files anymore without warnings.
What, batch files now?
Yeah.
Hilarious.
Oh, yeah.
Just me running batch files.
It's like, warning, this is unsigned, you know?
Yeah.
So basically, we already have app signing. That app signing is going to get more aggressive, to where Windows will just refuse to run any app that's unsigned.
That's coming.
Yeah.
But I think that there's going to be a second front where the data itself will have to be signed.
So if you want to get a text file that's 8 gigabytes, it's going to have to be signed for you to even be able to open the file if you're running on Windows or Mac because that data file could have an LLM in it.
And so if you're going to have this immune system within the United States, you can't just say, well, the application binaries have to be cryptographically signed.
You're going to have to say the data itself also has to be cryptographically signed.
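To picture what "signed data" would mean mechanically, here is a toy sketch in Python using the third-party cryptography package. The file paths and key handling are hypothetical, and a real scheme would also need key distribution and revocation; this only shows the verification step.

```python
# Toy illustration of "signed data": verify an Ed25519 signature over a file
# before agreeing to open it. Requires: pip install cryptography
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey
from cryptography.exceptions import InvalidSignature

def is_data_trusted(data_path: str, sig_path: str, pubkey_bytes: bytes) -> bool:
    public_key = Ed25519PublicKey.from_public_bytes(pubkey_bytes)
    with open(data_path, "rb") as f:
        data = f.read()
    with open(sig_path, "rb") as f:
        signature = f.read()
    try:
        public_key.verify(signature, data)  # raises InvalidSignature on mismatch
        return True
    except InvalidSignature:
        return False

# Usage sketch: is_data_trusted("model.bin", "model.bin.sig", pubkey_bytes)
```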
Well, but then data scientists like me and you, we won't use Windows.
I mean, we'll all just use a Linux OS or something.
And by the way, right now, even if I'm copying files, like backup files, and it runs into a Monero wallet EXE, Windows pops up and won't copy the Monero file.
You know, it says it could be a virus.
It doesn't like crypto.
Right.
It does that now.
But we'll just move to something else, won't we?
Well, and this is the way the deep state works, is that when they want to shut something down, they first start with shutting it down for the normies.
Yeah.
Right?
And so they kettle you.
And then you're like, well, screw Windows.
I'm going to Linux.
Right?
And so that gives you an escape hatch.
It means that the most vocal opponents of whatever policy is being put in place are given an escape hatch so they can retreat into a safe island.
So maybe that's Ubuntu or Debian.
That will work for a while.
And then later on, they're going to come for the islands.
And this is already happening with Ubuntu.
Ubuntu is being infiltrated by the big tech corporations that are now funding its development.
They're moving away from package managers that are open to proprietary package managers like Snap.
And this is essentially the first step in making sure that this signed-binary process, where you can only run stuff that's been approved by the government, reaches even into the safe spaces that we as researchers use.
Now, will that happen for everyone?
No, right?
Like, there will be some people that will get essentially a root level certificate so that they can run whatever the heck that they want.
And my guess is that that's going to be targeted chiefly, I think, at AI engineers, because they need to have absolute freedom to do whatever they want.
But in exchange for that root-level certificate to run whatever you want, you're going to have to consent to be completely monitored by the government.
Wait a minute, so like KYC for developers?
Yeah, exactly.
We know who you are.
Here's a root-level certificate.
By the way, everything that you do in Ubuntu while the certificate is running is going to report back to the government.
Oh, my God.
And in a way, it's sad because not only will they do this, but I feel like they have to do this.
And the reason why I feel that they have to do this, Mike, is because, look, with this AI, you can create not just a virus that's race-specific, but something that can kill all of humanity.
And not just all of humanity, but let's say all mammalian life forms on Earth.
Deer, humans, cats, goats, everything.
And let's even get even worse.
Or someone creates, let's say, a bacterium that's 10% better at photosynthesis than anything else that's come before it, because of some enzyme that's been created, where it goes from 3% efficiency to 13% efficiency, right?
If you just release that bacterium into the wild, it's going to convert the entire oceans into a sea of grass, right?
A floating, photosynthesizing grass is going to radically change the atmosphere.
It's going to suck all the CO2 out.
You won't be able to grow crops, and then there'll be mass starvation, death, and then eventually the entire planet will be converted into this new form of plant life, right?
But that requires...
I understand what you're saying, and the AI could create that digitally, but that still requires a whole set of laboratory equipment to synthesize.
You talk about synthetic life creation in the 3D world.
That requires a whole different set of equipment.
No, no, no.
You just need an enzyme.
You just need to have a few key enzymes that you introduce into one bacterium through DNA fusion or RNA fusion.
And then all of a sudden, now, boom, you only need to create one of these things.
And then it starts replicating.
Now you've got millions.
And then you release it to the wild.
You're going to have trillions and quadrillions of these microbes that are going to eventually take over the planet.
How do you stop that?
Bad actors or chaos agents will just use AI to try to gum up the works for the whole planet.
Right.
And all of these countries that haven't been infiltrated by the globalists, they're going to see this as the stalemate weapon that's going to prevent their infiltration.
They're like, well, you know, the rest of the planet may be destroyed, but it's going to stop the United States from infiltrating our country, disposing of our leader, and then vaccinating everyone to death, including the royal family, right?
And so that's a great trade-off.
Like, we'll threaten the whole planet, but at the same time, our country won't get invaded, and the oligarchs of the country will be able to remain in power without being overthrown.
And that's going to replicate in pretty much every country where they can get away with it.
And the only solution to that is a world surveillance system.
And I hate the fact that I'm saying all these things because it almost sounds like I'm for it when I'm not.
But on the other hand, I don't want the entire planet to be destroyed because some rogue scientist with an LLM in his ear was instructed to create such a thing, possibly by accident.
He didn't even know what he was doing.
He thought he was synthesizing something else and inadvertently synthesized a key enzyme that gets introduced into the population of bacteria on this planet.
And then all of a sudden, it's game over in the next 40 years as this thing slowly grows and then quickly starts taking over all available space in the biosphere.
Or have you ever heard of Ice 9?
Yeah.
Ice 9, right?
So some new crystalline structure that spreads and causes all the oceans to turn to ice.
This is the same thing you're describing.
I totally get it.
You're freaking me out, because I want to use open-source language models to distribute human knowledge and decentralize that knowledge.
And I guess I better hurry up and get my models distributed before the government outlaws open source language models.
Yeah, right.
We've got this window of opportunity right now where we can create really great, innovative stuff and get it out into the space before they shut the door.
And when they shut the door, what they're going to do is they're going to shut the door first on new innovation.
And so you want to get that innovation out as quickly as possible.
And that's why everyone right now is racing on the clock.
It's because those that are on the inside, I believe, know that this is what's coming and that this doesn't represent a long-term trend.
This represents a very special point in history in which we're being allowed to innovate with pretty much zero restrictions.
You can download a model and do it, and there you go.
It's like the early days of the internet before censorship, when you could publish a site and say anything you wanted.
You could share anything you wanted.
That all ended in about 2014, and then it got insanely authoritarian from that point.
Now, let's plug our projects here, but mine is brighteon.ai.
For those listening, if you want to download our model that's coming out, March 1st is the date.
And it's called Enoch, and it's a revolutionary new small model.
It's only going to be 7 or 8 billion parameters, but it encompasses a tremendous amount of human knowledge on herbs, nutrition, natural health, off-grid living, sustainability, you know, those kinds of alternative medicine, massive amount of content.
Zach, what would you like to plug for people to follow you or projects you're working on that you want to, you know?
Well, I mean, I guess since the election is over, I can kind of let the cat out of the bag that, you know, I was director of engineering for Robert Epstein's election monitoring program.
Wow.
On November 1st, we started seeing that there were lopsided, partisan vote reminders going out.
And once we announced that publicly, Google, Facebook, and others started pulling the plug on that.
I'm sorry, it was just Google.
We weren't monitoring the others.
We were just monitoring Google because we know that they've had a history of, you know, if you're a leftist, they're like, go vote today.
Reminder, if you're conservative, you don't get such things.
And, you know, I crunched all the data that we'd been collecting for over a year, and what I saw is this massive uptick in partisan go-vote reminders.
And once we announced that, there were AGs ready to go to file TROs, temporary restraining orders, in various states over what were essentially election campaign finance violations, right?
And so we were all ready to go.
We announced this.
We thought that the AGs were going to do it.
And then to our surprise, within hours, they cut off all of those partisan go vote reminders nationwide.
Especially in the key swing states.
Wow.
So Google was, of course, weaponizing just like they did back in 2020, but for some reason they decided to stop it, probably because of your work publicizing it.
It's either a coincidence or because we said it, but it was within hours of us doing that.
And so one of the great things about this last election cycle is that I actually had some write access to it.
I haven't been public about it.
Robert Epstein's wife was essentially assassinated, and the former director of our organization had some random attacker come up and slash her fiancé's face with a six-inch cut across his face, almost like a Joker smile.
Yeah, it was crazy.
So I was embargoed from telling anyone that I was working on the election stuff.
And I wanted to tell people, like, hey, I'm working on this, you know.
We've got to be vigilant.
But I wasn't allowed to do it.
But now that the election is over and the danger has passed and Trump's now president, I can talk about it.
So that was what I've been up to for the last year.
We were plugging that.
Unfortunately, after Trump won, everyone pulled their funding.
So I had to switch jobs very suddenly.
This new stuff that I'm working on is really exciting.
I can't talk about it yet.
But if there's anything that I would like to plug, it would be: follow me at twitter.com/perpetualmaniac.
It's my gamer tag.
It eventually became my social media tag for politics, believe it or not.
But that's where I'm at right now.
Go ahead with your plugs, and then I have a follow-up question about big tech, but go ahead.
Yeah.
And then, hey, if you want to jump into the artificial intelligence stuff and you're a coder, check out my custom AI agent.
Well, actually, it's not my AI agent.
It's a front end for aider.chat, which is the best code assistant thing out there.
And it's from the command line.
And I use it to code every single day, hours a day.
You know, I'm talking to this LLM. And so if you want to install that, you can find the URL at github.com/zackees/aicode.
Or if you've got Python installed with the pip package manager, then you can just type in pip install advanced-aicode.
And then you'll run it with aicode.
And my advice to everyone that's out there that's a coder, get on the LLM train.
The future is not going to be human coding.
It's going to be humans speccing out what they want to have done and then the AI agents doing it.
And it takes a long time to get better at this because you have to understand how the artificial intelligence thinks.
It's going to change the way that you program.
And if you can do this and you stick with it, you're going to get incredible velocity right out the gate.
And it's only going to get better from here.
And if you use my tool, I update it.
We did an update today.
Whenever new models come out, and new techniques,
I add that to the code base and you get it for free.
Okay, so this is significant.
Aider.chat is the tool you're talking about.
Is that the website they should go to, or should they just do a pip install of the tool?
Yeah, so you can use aider.chat.
It's a little bit complex to set up, right?
And it basically expects to have a Linux system.
So if people want to use aider.chat directly, they can.
My front end just basically takes care of all the installation issues.
One typed install command, and then, boom, you're using it.
It'll ask you for the key.
You just give it an OpenAI or an Anthropic key, and based upon the key that you have, it will switch to different modes.
And it's just absolutely incredible.
So either use aider.chat, or, if you want a simpler version to install with great defaults that are better than the defaults you get out of the box with that program, use my front end, called AI Code.
And the pip package is called advanced-aicode.
Got it.
This is significant, and I'll say, because I've been involved in a lot of data pipeline processing for the training materials for our language model, and so I'm managing enormous quantities of files, and I find that I very often need to create customized batch files, like command-line batch files.
And so I asked the AI, obviously, to just write me a batch file that does the following.
And as long as I'm good at describing the task, which I am, I'm very procedural, then it spits it out.
And there are many times, Zach, where it works on the first take.
It's like, I described it correctly, gives me the code, copy and paste the code, put it in the folder, run it, boom, it's done.
And I used to have to ask a coder to write the batch code.
Not that batch files are that crazy difficult, but there's a lot of just little details about variables.
Windows batch files are super weird.
Yeah, right.
So I'm like, I don't want to spend three hours looking up the freaking syntax of this command.
This is like, let's let the AI write it.
But I'm going to be using your tool then.
I'm going to experiment with Python code to do a lot more complex things.
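For readers who want a picture of the kind of chore Mike is describing, here is the sort of helper one might ask the AI to write, sketched in Python rather than batch; the folder layout and file types are invented for illustration.

```python
# A typical data-pipeline chore: gather non-empty .txt files scattered across
# nested folders into one flat training directory. Paths are made up.
import shutil
from pathlib import Path

src = Path("raw_data")
dst = Path("training_corpus")
dst.mkdir(exist_ok=True)

for path in src.rglob("*.txt"):
    if path.stat().st_size == 0:
        continue  # skip empty files
    # Prefix with the parent folder name so flattened names don't collide
    shutil.copy2(path, dst / f"{path.parent.name}_{path.name}")
```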
It's clear.
It's clear to me that this is the future, but it doesn't mean that coders are obsolete.
It just means that coders need to change their role to be more of the architects of coding.
It's just like journalists need to be architects of stories, not typists.
Right.
You see what I'm saying?
Right.
So could you talk about that, how AI, on the upside, can also elevate our roles to be more the orchestrators or creators of what we envision?
Yeah, so the weak point about all these LLMs is the fact that they didn't have chain of thought reasoning, right?
They were basically statistical machines that gave what they thought was the most statistically relevant answer based upon the question.
But they weren't actually doing any thought.
And everyone's like, oh, this is a limitation.
I was like, ah, just you watch, because they're going to come out with simulated chain of thought reasoning.
And I knew this because I started doing the exact same thing, right?
And you can just do it very simply.
Like, you know, instead of just asking the LLM a question, you ask it three.
The first one is, here's what I want.
Come up with the high-level goals of what you would do.
Right?
And so you prevent it from getting into the weeds of the problem.
Just the high-level stuff.
So it comes back and gives you the high-level stuff.
And then you take your original query and you say, okay, well, here's the original query.
Here's what the architect said.
And now, based upon this, implement it.
Or you can add another step, like check the architect's answer and figure out any contradictions or problems that you see, right?
Right.
So you can walk it through step-by-step.
You walk it through step-by-step.
And then the next step is obviously...
And then the last part is like, okay, well, they've done this and here's the result.
Check it again and see if there's any problems.
Or you can run like, let's say, a code checker on it to be able to give you any problems.
And then you basically get into this feedback loop where each LLM is specialized in a certain area: architecting, checking the work, implementing the thing, checking the results, and making any corrections.
And so, boom.
That's a five-step chain of thought.
And I was able to script that very easily.
It's a rudimentary chain of thought simulation, but I was able to do it in an hour.
And so, I saw that, and I saw massive improvements in the performance.
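Here is roughly what that hand-rolled pipeline looks like, as a minimal Python sketch. call_llm() is a placeholder for whatever chat API you use, and the prompts are illustrative, not Zach's actual ones.

```python
# Illustrative architect -> critic -> implementer -> checker chain, each step
# its own LLM call, mirroring the five-step loop described above.
def call_llm(prompt: str) -> str:
    raise NotImplementedError  # wire up your provider of choice here

def chain_of_thought(query: str) -> str:
    plan = call_llm(f"Request:\n{query}\n\n"
                    "Give only the high-level plan; stay out of the weeds.")
    critique = call_llm(f"Request:\n{query}\n\nPlan:\n{plan}\n\n"
                        "List any contradictions or problems in this plan.")
    answer = call_llm(f"Request:\n{query}\n\nPlan:\n{plan}\n\n"
                      f"Known issues:\n{critique}\n\nNow implement the request.")
    check = call_llm(f"Request:\n{query}\n\nResult:\n{answer}\n\n"
                     "Check the result for problems. Reply PASS or list fixes.")
    if check.strip().upper() != "PASS":
        # Final step: apply the checker's corrections before returning
        answer = call_llm(f"Result:\n{answer}\n\nApply these fixes:\n{check}")
    return answer
```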
And specifically, what I did is when I accessed databases, I used a language called SQL. You don't want to delete your database, right?
So you want to make sure that the LLM's not, you know, giving you an update to, you know, command that's going to blow away a table.
So I had to do chain of thought reasoning.
And I did this and got massive performance increases.
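As a toy version of that database safety check, the sketch below screens LLM-generated SQL for destructive statements before anything runs. A real deployment would lean on database permissions and backups rather than a regex; this only shows the shape of the guard.

```python
# Reject LLM-generated SQL that could destroy data: DROP/TRUNCATE/DELETE/ALTER,
# or an UPDATE with no WHERE clause anywhere after it.
import re

DESTRUCTIVE = re.compile(
    r"\b(DROP|TRUNCATE|DELETE|ALTER)\b|\bUPDATE\b(?![\s\S]*\bWHERE\b)",
    re.IGNORECASE,
)

def safe_to_run(sql: str) -> bool:
    """Allow read-only queries; flag anything that mutates or drops."""
    return not DESTRUCTIVE.search(sql)

assert safe_to_run("SELECT name, total FROM sales WHERE year = 2024")
assert not safe_to_run("DROP TABLE sales")
assert not safe_to_run("UPDATE sales SET total = 0")  # UPDATE without WHERE
```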
And I said, well, this is the thing that's obviously going to be next.
And so all these people are saying, oh, they don't have chain of thought reasoning.
I was like, well, it's going to get simulated.
And now it's simulated.
And this is where all the power of these new LLMs is coming from: they basically specialize a bunch of different agents, and the agents get to vote on the answer and give their feedback, and then that's put into another AI that takes everyone's feedback, decides what the correct course of action is, performs it, and then other agents look at it and make sure that there are no mistakes.
And you can just basically expand this from, let's say, the five agents I was able to do in an afternoon to a thousand, which is what the o3 model is doing.
Basically, you create a town hall simulated debate of the smartest large language models in existence.
They all get together and try to come up with an answer.
And so that's where we're getting the soft AGI revolution right now: they just jack up the number of agents and get them into a debate, the whole thing takes place in seconds, you get back your answer, and it's higher quality.
And this is going to continue as this seems like the best way of innovation forward with our current technology.
Now, you know, even if the technology, the underlying technology gets better, I still think we're going to be in a multi-agent model of AI reasoning.
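A minimal sketch of that "town hall" pattern, assuming nothing beyond a generic call_llm(model, prompt) helper; the model names in the usage note are placeholders, not real endpoints.

```python
# Several agents answer independently, each critiques the others, and a judge
# model synthesizes the final answer from the whole debate.
def call_llm(model: str, prompt: str) -> str:
    raise NotImplementedError  # stand-in for any chat-completion API

def town_hall(question: str, agents: list[str], judge: str) -> str:
    answers = {m: call_llm(m, question) for m in agents}
    debate = "\n\n".join(f"[{m}]: {a}" for m, a in answers.items())
    critiques = {
        m: call_llm(m, f"Question:\n{question}\n\nAll answers:\n{debate}\n\n"
                       "Point out errors in the other answers.")
        for m in agents
    }
    notes = "\n\n".join(f"[{m} critique]: {c}" for m, c in critiques.items())
    return call_llm(judge, f"Question:\n{question}\n\nAnswers:\n{debate}\n\n"
                           f"Critiques:\n{notes}\n\n"
                           "Synthesize the single best final answer.")

# Usage sketch:
# town_hall("...", agents=["model-a", "model-b", "model-c"], judge="model-a")
```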
And I think that this is going to scale up, right?
Like, get a million agents, have them all start, you know, debating about how to make a new piece of key innovative technology, right?
Like, let's say they're trying to, you know, solve cancer, which I already believe is kind of a man-made disease that was introduced.
But let's say, like, hypothetically, that we didn't know, like, how cancer happened.
Like, they could put in, you know, a million agents that are all smarter than Einstein and get them to debate.
On the best ways to figure out how to, you know, kill it with essentially a pharmaceutical tactical nuke.
Now, I know we all hate the pharmaceutical industry.
This is all hypothetical.
You can apply this to any sort of problem domain space.
Well, you could say, like, give it a question for advanced materials, right?
Or develop the physics for a gravity propulsion system or anti-gravity, whatever, right?
That kind of thing.
Or room temperature superconductor.
Yeah, exactly.
I was thinking cold fusion, low-energy nuclear reactions, like design the cathode that's perfect for LENR, that kind of thing.
You're nailing it, Zach.
Exactly.
You can scale this up where it takes 20 plus years to develop a human PhD, but it takes like 10 seconds to spin up a PhD AI agent that knows everything about the whole history of physics and chemistry.
Right.
You can spend five years training it, and then once it's trained, it takes seconds to copy.
It's just copying a CSV file, right?
And that computer already has the hardware ready to go.
It just needs to be fed what the model is, and then you're off to the races.
So once you have that model, you just replicate it millions or billions of times.
And right now, we don't have an energy grid that can support that.
And so it's my prediction that the entire climate change stuff is going to go away very, very quickly because if we don't want to get exterminated by China, and I don't believe the elites at the top want to get exterminated by China, they're going to need to lay down either 10 or 100 or 1,000x more power than we currently have right now.
I was going there next, actually.
Oh, really?
Yeah, because I know that companies like Microsoft have contracted long-term to acquire the entire output of nuclear power facilities.
Right.
These data centers are going to have a little small nuclear reactor to a large-scale nuclear reactor just powering it for all these millions of agents that they're running.
That's exactly what I was...
Hold on a second.
People are calling me in the middle of this.
I was going there, too.
So, yeah, you're going to have microfusion reactors or fission reactors.
Some of these already exist.
Some of them, like I think Raytheon has one that fits on a tractor trailer for military use.
You're going to have these in data centers, so you're going to have nuclear-powered facilities of data, massive compute capability to try to win the race to superintelligence.
And Zach, can you speak to the importance of that?
Because, I mean, this is an arms race like nothing else in history.
Whoever gets to superintelligence first, what advantages do they have?
Whatever nation or corporation does it first.
Okay, so for military dominance, you're going to need intelligence, right?
Which is what they're working on.
You're going to need lots of power.
You're going to need massive manufacturing.
You're going to need to have space dominance, right?
China has energy superiority.
They've got manufacturing superiority.
But unfortunately, they cannot buy an NVIDIA chip because Trump and Biden, and now Trump again, are basically doing export controls on NVIDIA. Sure.
And the lithography equipment to make microchips.
Yeah.
And so right now I see this as obviously what it is.
We're gearing up for a long-term conflict to figure out who's going to be on the throne of the world.
Right?
Like, it's ramping up slowly right now, but very soon over the next year, you're going to see massive amounts of power contracts being laid down because we need to play some catch-up.
Like, China's, you know, on track to lay down 10x the amount of power.
I think it's going to be 1,000x.
The United States is going to have to do the same thing.
And all these climate change wackos that think that carbon dioxide is poisoning the Earth.
You know, when plants readily absorb it; the more that's out there, the more they love it.
Yeah.
That's all going to come to an end.
They're all going to be disempowered.
They're going to have their funding cut because the new agenda is going to be, hey, we need to build up power to fuel this military dominance machine so that China doesn't get a little bit cocky and launch a surprise attack.
And then with the intention of overthrowing, you know, Western globalism, to use a nice name for what exists out there.
Right.
But a couple of thoughts on this.
So Western Europe is screwed because they've destroyed their own domestic energy supply.
Biden destroyed domestic energy in the U.S. for the last four years.
And China has found workarounds where they can create their own microchips even despite Western sanctions.
For example, they did that with one of their mobile phones, just blowing people away with their microchip design.
So it's kind of like how the US put sanctions on Russia to try to collapse Russia's economy, but Russia just got stronger.
It seems like China doesn't really care that the West has put sanctions on it.
It's just developing its own domestic microchip fabrication infrastructure.
Yeah, they're a little bit behind, but I think that the elites had no clue to the scale of what China was up to.
I would point back to the former CEO of Google, Eric Schmidt.
He gave a Stanford talk.
They've wiped it, but it still exists because of the decentralized nature of the Internet.
Nothing is ever lost.
So there's this speech.
I recommend you listen to it.
It's like an hour and a half.
And what he says is, and this was like, I think, three months ago, he said that China was a decade behind the United States in chips because of the CHIPS Act.
But that they were going to catch up very quickly and they were going to close the gap over the next five years.
That would have been the end of the story until a few weeks ago when Eric Schmidt came back and said, oops, it turns out that we were wrong about China.
They are not a decade behind the United States.
They are one year behind the United States.
And I saw that and went, ooh, that sucks for world stability.
Because now you've got two actors with near-peer nation-state power.
Now, suddenly, Trump's attempt to acquire Greenland makes perfect sense, because Greenland has massive energy resources, and it has the cold weather that's required to cool all this computing equipment.
A massive subcontinent, I suppose you could say.
I mean, it's an island nation, but it's large.
It could be the data center hub for the West, in essence, couldn't it?
Yeah, but I mean, with all this power, they've been suppressing really great power sources, right?
Like free energy.
I don't mean like free energy, like you just pull it out of the ether, like some zero-point energy system.
I'm talking about energy that's too cheap to meter, right?
And a lot of them have been investing in this, silencing the results, and putting it under black projects, black skunkworks, essentially.
Interesting story.
I discovered Google's secret cold fusion lab in 2019. Right?
And talk to the people there.
And then later, they came out.
You can even look this up.
Google has results.
And their results were, oh, we tried this, and it turns out that it's all bunk.
And I was laughing.
I was like, no, I know the inside story.
I talked to the people.
It was so weird, Mike.
The managers that were managing that, it was the spookiest, sketchiest thing I've ever seen.
And I survived a mass shooting incident at YouTube right before I left.
But by far, the thing that really stays with me is the conversation that I had with the people running that.
And, you know, essentially I realized that it's a bunch of spook managers with a bunch of well-intentioned engineers who are lefties and brainwashed, and they have no idea that they're basically working on a future AI data center energy model,
but that they can't let the cat out of the bag, because the existing elitist power structure requires energy to be a scarce resource in order to extract value from the general population or appropriate it for themselves.
Wow, what a conundrum you've just brought to the forefront here, because if they acknowledge the essentially free energy technology that's needed for the data centers, then they lose the petroleum and oil and gas.
Right.
And this whole thing about Greenland is great because the energy is cheap, cheap in comparison to fossil fuels, which is a bad name for the fuel, but it's the popular one.
In comparison to fossil fuels, geothermal energy looks like a really great solution.
But with low-energy nuclear reactions, formerly called cold fusion, where you have basically a platinum catalyst that's reducing the Coulomb barriers between the deuterium atoms, all of a sudden you don't need geothermal anymore.
In fact, that seems like an arcane and cost-prohibitive energy source to run your massive millions and billions of AI agents.
And so in reality, I think the real thing is that we're going to have a distribution of data centers in places that aren't even on the map.
Maybe you'll be able to see them on Google Earth or Google Maps or something if they're not wiped off by, you know, government edict.
And that these data centers are going to be all over the Western global empire.
So you think they're going to be super secret high security centers with technology that's not allowed to be released to the public in terms of the energy sources?
Yeah, and if you see one that's above ground, it's going to be low security.
I think that the biggest ones will be completely underground.
The only way that you'll know that they're even there is by the heat signature that they're giving off from their vents.
Wow.
So deep underground data centers with secret energy sources.
I mean, hey, we built this entire tunnel system under the United States.
You think that they're not going to use that for data centers?
It'll be interesting, too, because out in the middle of nowhere at night, these satellites will be going over, and they'll see the heat signature from these vents coming off.
And that's if they're in the middle of the desert.
If they're near any place that has access to water, like the ocean, they'll just pipe the vents through the ocean so that they'll go through two miles of ocean water.
So that the heat signature will be purple instead of, like, a blazing white.
Well, I've also seen quite a bit of innovation of actually putting data servers in the ocean, like physically just taking a rack and putting it in the ocean for automatic heat dissipation.
Yeah.
But, I mean, it seems like there are a lot of technical problems with that.
It's easier to just probably pump ocean water through a heat exchanger, right?
Right.
Yeah.
Or pump air through a heat exchanger, like you said.
But the vents, the heat signature will be viewable by thermal drones, which are pretty widely owned.
But I guess the FAA can just say, well, that's a no-drone zone.
Right.
They can say it's a no-drone zone.
And you can't really get rid of the heat signature completely.
You can basically just take it close to zero.
Because if you're running terawatts of power generation in the water, it's going to heat the water.
It's going to go from black and blue to a purplish color on your infrared sensor.
They won't be able to hide it completely, but if you do see a hot vent coming off of the desert, that means it's a low-security nuclear thing, powering God knows what.
If it's in the ocean, all of a sudden you see, it's really weird, the Pacific Ocean off the coast of San Francisco is suddenly starting to change, starting to get hotter, right?
They'll probably say global warming or some stupid stuff like that.
Right, they'll say global warming, but it's actually their data centers.
It's actually some hidden black project data center that's generating a massive amount of God knows what using ungodly amounts of energy and that they're piping that into the ocean to try to hide it so that people don't take notice of it that much.
Wow.
So we're about to wrap this up, and thank you for your time.
So energy is the new gold, in essence.
I mean, energy through microchips converted into intelligence, which is the superweapon for world dominance.
I mean, the chain is so clear now.
Yeah.
It's that.
It's energy.
It's intelligence.
I mean, you know, if you're going to fight a war, you're going to want to have a billion drones manufactured underground and then say, oh, you know, we don't have any drones.
And then, you know, you go to war and then all of a sudden it's like, ah, Red Bull, guess what?
We've got a billion suicide drones, $20 each.
They don't even have a gun.
They just fly to your head and explode like they're doing in Ukraine.
That's the future.
Right.
But China's got the manufacturing scalability that you already mentioned, and they could just roll up with a cargo ship and all the 40-foot sea containers open up and 100 million drones fly out of it and just start going to town.
Yeah.
Well, they do have that, and that's what we can see on planet Earth.
We don't know what's in space, right?
Like, for all we know, there's a manufacturing zone, you know, on the dark side of the moon.
For all we know, they're mining all these asteroids for what would be rare earths, but which aren't rare, you know, in an asteroid, like all these heavy metals.
And so for all we know, Western globalism is deep mining asteroids at this point, and they've got a large supply of rare earths ready to go.
And there may even be manufacturing out there; they'll just drop-ship them into either the United States or some other location on the planet.
And then all of a sudden it's like, surprise, turns out we've got a billion drones that can kill 75% of the Chinese, right?
And that's the big one. Of all the things that we have, I think that's sort of the trump card.
I think it's our space dominance that we have.
And we don't know the extent of the space dominance.
But if the elites have any sort of competence, my assumption is that they've been developing the technology for the last 70 years.
So God knows what they've got outside of the terrestrial plane.
Well, this is going to be a really fascinating interview for people to look at because, Zach, what I'm going to do, I'm going to have AI illustrate this entire interview since you and I don't have cameras on.
I'm just going to have illustrations, illustration panels through the whole thing with captions.
And it'll be fascinating to see what kind of illustrations the AI comes up with based on what we've been talking about.
Fascinating stuff.
You're going to run that through... which AI are you going to run that through to get the images?
There's a couple of engines I use.
I'll tell you privately the one I end up using.
I'll probably test a couple of them to see what's best.
But I've got to try some different styles.
I might do like a sci-fi illustration style and just kind of test those out.
Usually the results are pretty good.
And then the subtitles. The subtitles are very popular now, because for the hearing impaired, everybody loves the fact that we're doing that.
And we found that AI can do the subtitling at about one-to-one.
So like a 60-minute interview, it takes it about 60 minutes to process and generate the captions and put it back in the video.
So that's acceptable.
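For anyone who wants to reproduce that workflow, here is a minimal sketch using the open-source Whisper model (pip install openai-whisper), which is one way to do it, not necessarily the tool Mike uses; the file names are illustrative, and runtime depends heavily on model size and hardware.

```python
# Transcribe an interview and write an .srt subtitle file from the segments.
import whisper

def to_timestamp(seconds: float) -> str:
    """Format seconds as the SRT timestamp HH:MM:SS,mmm."""
    ms = int(seconds * 1000)
    h, ms = divmod(ms, 3_600_000)
    m, ms = divmod(ms, 60_000)
    s, ms = divmod(ms, 1_000)
    return f"{h:02}:{m:02}:{s:02},{ms:03}"

model = whisper.load_model("base")
result = model.transcribe("interview.mp4")

with open("interview.srt", "w", encoding="utf-8") as f:
    for i, seg in enumerate(result["segments"], start=1):
        f.write(f"{i}\n{to_timestamp(seg['start'])} --> "
                f"{to_timestamp(seg['end'])}\n{seg['text'].strip()}\n\n")
```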
Easy.
Right.
That's great.
It's essentially faster than real-time.
The VLC video player is now doing real-time subtitle generation whenever you play a movie.
Unreal, man.
It's unreal.
I do want to let you know that if you start seeing AI scientists start being assassinated, that's your clue that we are really close and that we've entered into some sort of soft World War III with China.
We had that one guy at OpenAI die, but I'm talking about lots of mysterious deaths happening over this stuff, where the government actually gets concerned about it.
That's not happening yet, but I think it's going to be happening in the next three years as this AI war with China starts to kick up into full gear.
Well, kind of like the way that Iran's nuclear scientists keep getting mysteriously killed off.
Yeah, I think the same thing is going to start happening with China and with the United States as they start doing a tit-for-tat to slow the other down.
Thank you for taking time here to share your thoughts with us and to our audience.
The number one thing is that you be prepared for what's coming because not all the humans alive today are going to navigate this.
And there's going to be a couple billion people out of work.
We're quickly becoming a bunch of worthless eaters with this new AI tech that's coming in.
And the only way that you can escape that is if you start using the artificial intelligence now.
Because in the future, it's going to be people that use AI and are integrated tightly with it, and then everyone else who's going to receive universal basic income and poverty living.
So we're not at that stage yet.
You still have time.
So start using AI right now because you can't stop it.
If you don't use it, it's not going to stop the advance.
This train has no brakes.
Good point.
Good point.
Okay.
Zach Voorhees, everybody, the Google whistleblower.
Thank you so much, Zach, for your thoughts here today.
And Perpetual Maniac is your handle on Twitter, correct?
That is correct.
Is there an underscore?
Nope.
Perpetual Maniac.
Do a search where you'll find it.
Perpetual Maniac.
Okay.
Thank you so much, Zach, and stand by here, and thank you for joining me today.
And for all of you listening, I'm Mike Adams, the founder of brighteon.com.
And we're releasing at Brighteon.ai our free, open-source, downloadable human-knowledge-based model called Enoch. That's coming March 1st, unless I guess it gets outlawed before then, but hopefully that won't happen.
So thank you for joining me today.
Take care, everybody.
All right, the Goldback Company has just issued a new set of Goldbacks for the state of Florida.
This is the one Goldback for Florida.
It's got a unique pattern on it.
And here's the five, which is five one-thousandths of an ounce of gold embedded in the goldback itself.
These contain real physical gold in them.
We've done the lab testing and verified the purity and the mass of the gold.
And for the first time ever, Florida is now represented in goldbacks.
If you go to verifiedgoldbacks.com, then you'll see the lab testing that we've conducted on the goldbacks.
And there's a link there where you can purchase goldbacks.
We do benefit from that purchase.
We get a little bit of an affiliate reward for the goldbacks.
We partner with the goldback company.
But you get real physical gold in this highly divisible form.
And here's some of our testing that we've done with the Crucible.
And this is the acid dissolving.
This is the stone test and so on.
We use our ICP-MS instruments and our analytical balances to verify the amount of gold they contain.
And just here's some more photos and then here's some of the results down here.
Sorry, it's a lot of information.
Yeah, here we go.
Recovery over 100%.
So you're going to get at least 100% of the gold that is claimed in these gold backs.
So this says one one-thousandth of an ounce of gold, which today, you know... I mean, gold's getting a lot more valuable in terms of dollars, because dollars are declining in value.
But this is a divisible form of gold.
You know, it's hard to spend a gold coin.
I mean, gold coins have their place, don't get me wrong.
And I think anybody acquiring gold is on a path of wisdom as far as I'm concerned.
But this is a divisible form.
Like, you can use a thousandth of an ounce of gold to pay for a sandwich, for example.
Or to barter with someone or to use it at a farmer's market or to give it as a gift or to use it as tips.
These are amazing.
They have amazing artwork.
There are different states that are represented, like Wyoming, for example, and Utah, but now Florida for the first time.
So go to verifiedgoldbacks.com, click on the link there, and you'll be able to see the entire Florida set that's available now for the first time ever.
Texas is not yet available, but we're hoping that will happen maybe next year.
But Florida is available now.
Collector's sets are available.
These are going to be collector's items in addition to their utility, and they make amazing gifts for people, especially Christmas gifts, too.