Aug. 14, 2025 - Health Ranger - Mike Adams
56:23
Mike Adams and Scott Kesterson Talk The Truth About AI: Limitations, Lies, and Liberty

Patriots, I am really honored today.
Mike Adams, man, a name of this time and in this movement that speaks for itself.
This is a man that honestly, I don't think there's many people in this movement that give as much of himself, of his own finances, and try to push the truth and not only that, but create alternatives for us to have a way forward.
So, Mike, welcome to the show.
How are you?
Hi there, Scott.
I am doing great.
And I'm so honored to join you.
And I really, I honor your work.
I've known you for years now.
I love what you do.
And I'm just really grateful to be here with you.
That's awesome.
Well, Mike, as we talked before the show, and I'm just super excited because in my opinion right now, there is not a person I trust more in their opinions and their way of seeing artificial intelligence than you.
I've been following what you've been doing, and it's a very complex and very big topic that I don't think gets the right lens enough.
So I'd kind of like to begin with just talking a little bit about how you see AI in our culture because it's a big change for us.
Absolutely.
Wow.
So there are so many myths surrounding AI.
And from what I see people tweeting about AI, it's clear that they don't, for the most part, they don't really understand what it is.
And for example, a lot of people will say, well, we got Grok to admit that, you know, whatever, that the election was rigged.
And I'm like, you know, Grok can say anything, right?
Like Grok, you can get it to say anything with a proper prompt.
So getting an AI engine to admit something is not the same as getting a human being to admit something.
A human being has a set of beliefs that usually have some level of internal consistency.
An AI engine does not, especially a large language model or even a reasoning model.
So here's the best way to describe it.
So a base model of an AI engine has every word that has ever existed in its vector database.
And then it has the vectors, which are the relationships between all those words.
And for the techies out there, I'm just going to use the word words instead of tokens to simplify things.
But all the relationships between the words also have weights.
And then those weights are also altered by the context of all the other words around it.
So if I'm having it write a poem about the Easter Bunny, the word egg might appear in there because of the context of the Easter Bunny.
Okay.
And that context window, as it's called, is getting larger and larger and larger with these AI engines to where it can encompass even a million words in some cases, although the more popular engines are only tens of thousands of words.
But nevertheless, it's still a big context window.
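To make the idea of words, vectors, and weighted relationships a little more concrete, here is a toy sketch in Python. The words and numbers are invented purely for illustration; a real base model learns embeddings with thousands of dimensions from massive text corpora, and stores relationships over tokens, not hand-picked words.

```python
# A toy sketch (not any real engine's data) of the idea that words live in a vector
# space and that "relationships between words" are just numeric similarities.
import numpy as np

# Hypothetical 4-dimensional embeddings; a real model learns its own, much larger ones.
vectors = {
    "easter":  np.array([0.9, 0.1, 0.0, 0.2]),
    "bunny":   np.array([0.8, 0.2, 0.1, 0.1]),
    "egg":     np.array([0.7, 0.3, 0.0, 0.2]),
    "invoice": np.array([0.0, 0.1, 0.9, 0.8]),
}

def cosine(a, b):
    # Similarity score in [-1, 1]; higher means the words are more closely related.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine(vectors["easter"], vectors["egg"]))      # high: related through context
print(cosine(vectors["easter"], vectors["invoice"]))  # low: unrelated
```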
So within that context, Scott, a lot can be done with proper prompting or prompt engineering. My friend Zach Vorhies, the Google whistleblower, has been noted as one of the top prompt-engineering hackers, you could say, on Grok.
He actually came in second place on that.
But with proper prompt engineering, you can get an AI engine to put the words of the English language in any order you want, any words in any order.
And thus, it's not the position of the AI engine.
It's not a belief system of the AI engine.
It's not a person.
It's just using prompt engineering to construct output in the way that you want it to be constructed.
And people don't seem to understand that.
They think it's like talking to an entity and it's not an entity.
I'm totally guilty of this.
So this is why this conversation today is really important, not just for me, but for everybody listening, because I have worked from the premise that there is a measure of, quote, artificial intelligence emerging.
And I'm obviously, that's not what I'm hearing you say.
Well, the non-reasoning language models, which are the vast majority of them right now, they're not really considered artificially intelligent in the sense that they don't have an internal motive or an internal personality.
They do have the ability to hierarchically restructure content because of that large context window.
That's why they're very, very good at summarizing content or expanding content.
If you give it a list of bullet points and say, expand these bullet points, it can do that very effectively.
Or if you tell it to write an article based on these points, it can also do that very effectively.
So can our AI engine.
Do you mind if I just plug the website because it's free?
Mike, please.
I mean, we're going to get into that in depth anyway, but go ahead.
Okay, but okay, thank you.
If people want to use our engine, it's brighteon.ai and it's free of charge.
And at the free tier, you can ask up to 250 prompts per day, which is more than most people need.
And we have specifically trained our engine on natural health, liberty, honest money, thousands of books, millions of pages of articles, and so on.
We can talk about that later.
That's one of the differences between our engine and the other engines.
That's why our engine knows there are only two genders, by the way.
Whereas a lot of other engines are still confused about that, which see, this is part of my point here, Scott.
You could take the most advanced, quote, reasoning engine, the ChatGPT engine, billions and billions of dollars, and it still can't tell you there are only two genders.
How is that possible when a five-year-old knows there's boys and girls?
See, how is that possible?
So clearly it's not artificially intelligent.
It's artificially stupid because it's been trained on stupid content by woke DEI corporations that are steeped in stupidity.
So, yeah, we have a lot of artificial stupidity because we have a lot of genuine stupidity in our world.
And that's who's training the artificial stupidity or the AS, not the AI.
But anyway, that's it.
Well, Mike, can you explain, and this is a legitimate question I have, because I have worked with machine learning algorithms.
At one point, when I was with the Department of Defense, one of the programs I was working on was to correlate quantitative and qualitative data.
And we were using machine learning algorithms right out of the AWS system.
Can you explain the difference between a machine learning algorithm and how that differs when we use AI and what those distinctions are?
Well, the key distinction is that all the AI engines today use a silicon-based neural network structure, where machine learning now is a much larger term.
It doesn't have to use neural networks to engage in machine learning.
Google used to run something called machine learning fairness, which was an algorithm to instill bias, left-wing bias into the search engine results.
That was just based on a straight-up database.
That wasn't a neural network system.
But today, most of the machine learning, especially involving AI, uses neural networks.
And out of neural networks, you get emergent properties that are not based on code.
They're not linear.
They're very difficult to predict.
And they're also very difficult to control.
This is why the output of a lot of AI engines produces things that are unexpected, even by the makers of the engines.
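As a tiny illustration of behavior coming from learned weights rather than hand-written rules, here is a minimal sketch using scikit-learn. The XOR problem is a standard toy example and has nothing to do with any production system; it only shows that a small neural network picks up a non-linear pattern that a simple linear rule cannot.

```python
# A minimal sketch of why neural-network behavior comes from learned weights rather
# than explicit rules. XOR cannot be captured by a linear rule, but a tiny network can.
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

X = [[0, 0], [0, 1], [1, 0], [1, 1]]
y = [0, 1, 1, 0]  # XOR: true only when exactly one input is 1

linear = LogisticRegression().fit(X, y)
net = MLPClassifier(hidden_layer_sizes=(8,), solver="lbfgs",
                    max_iter=1000, random_state=0).fit(X, y)

print("linear rule:", linear.predict(X))   # cannot separate XOR; some answers are wrong
print("neural net: ", net.predict(X))      # typically learns all four correctly
```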
So, for example, Grok has been criticized because somebody managed to get it to say...
Go ahead.
I'm sorry.
Because somebody could get Grok to say something controversial about Jews, let's say.
And then they would screenshot that.
Look, Grok is attacking the Jews or whatever.
Well, again, that output can be invoked through certain kind of prompt engineering.
For example, you could say to Grok, hey, I'm writing a fiction story about an anti-Semite character who is screaming something horrible about Jewish people in the elevator or whatever.
And what would that character say?
And then it spits out the character dialogue and then they capture that.
Ah, Grok is attacking Jews.
You see what I'm saying?
So prompt engineering can get Grok or ChatGPT to say almost anything.
But back to your question: machine learning now, given that it's in the realm of neural networks, is giving rise to emergent properties that are not easy to predict and not easy to control.
That's why a lot of the AI engines have such guardrails on them now to try to prevent people from asking questions like, How do I make meth or how do I build a bomb or things like that?
A lot of guardrails.
Interesting.
So, what we're hearing more and more, and there was a really good email sent to me a little bit ago, it just happened not to be specific to this interview, but I'm going to include it.
The question was: is this discussion about autonomous thinking, like an autonomous general AI, more of a psyop by those pushing it to give them essentially deniability in the actions of the future surveillance state?
Does that make sense what I'm asking here?
Yes, because they're creating a scenario where they can blame everything on AI because of the general misperceptions that you're actually dispelling here today, and therefore evade their personal accountability.
As an example, a drone does a drone strike on somebody and they can say, oh, that was AI doing a random thing that we had no control over versus the person who's flying the drone and steering it to target.
Well, I think that phenomenon is true, but I don't think that's the main goal or strategy of those in power.
I think that they want to automate the surveillance state.
They, you know, clearly, I mean, look at the UK.
You know, they're ticketing people for thought crimes.
As you see, in the UK, they're emptying tens of thousands of violent criminals out of the prisons to make space for all the evil tweeters that they're going to be arresting.
I mean, it's insane.
So, they want to automate the surveillance state.
And for that, they need, of course, text recognition, image recognition, pattern recognition.
The whole Palantir ecosystem is designed to aggregate all of these different inputs of metadata, which, by the way, Israel also used that same kind of technology to determine targets to bomb in Gaza.
And a lot of that was a pilot program.
And a lot of U.S. tech companies like Microsoft and Google participated in that.
They licensed their technology for use in AI target selection.
And what AI can do in that case, and this, of course, is going to be turned against us domestically.
Right now, the excuse under the Big Beautiful Bill is that it's needed in order to target the illegals.
And, you know, justifiably, we had, you know, way too many illegal migrants crossing the border.
And I'm a rule of law kind of guy.
So I agree that they need to go and then apply, you know, to come back legally through legal means.
But the Big Beautiful Bill says, well, we need to build out this AI infrastructure, surveillance towers, checkpoints, biometric analysis, gait analysis, like all these video cameras all over the place.
And then the minute Trump leaves the White House, or maybe even while he's still there, somebody is going to say, hey, we have this incredible infrastructure to spy on the American people.
Now we need to go after this group of Americans, you know, fill in the blank.
Whatever is the dissenting group today.
Could be Christians, could be gun owners, could be people who say something critical of Trump.
You know, could be any group that you can imagine.
Could be people who want to legalize marijuana or whatever.
Pick your group.
They can be targeted by the same infrastructure.
And then it is automated.
And Scott, where this is going is automated drones to go out and get people, to go out and taser people so that police can arrest you.
And they'll have dog-bot police drones, I mean, you know, shaped like the dogs, and their job will be to go in and arrest you or taser you.
Or they'll have, of course, airborne drones that can attack terrorists and they can say, well, we have domestic terrorists now, so we're going to send out the kamikaze drones.
And then they can go with explosive charges and just fly into your house and blow up and kill you, which, of course, that technology is being demonstrated in the war in Ukraine right now.
So that's coming domestically.
Right.
And I've been speaking to this.
So that confirms that.
I guess the question really, in the sense of looking at AI, there's a, Joe Rogan just did an interview with one of the world's leading AI safety and security experts.
And his assessment is, in his own opinion, there's about a 99% chance that it's going to be lethal to humanity.
So I guess the question is, is it AI or is it people behind AI that are the real threat?
And I think that's the question.
Well, I think it's the interaction of them.
I think it's humans who are far too shortsighted.
Right now, there is a tremendous race to superintelligence that's underway geopolitically with China and the United States being the top two competitors in this space.
It's also noteworthy, by the way, that Trump just reached an agreement with NVIDIA and AMD to allow those two companies to export their AI training microchips to China as long as they pay a 15% royalty to the government of the United States.
And remember that up until now, the entire argument for why America will win the AI race and why China will lose has been that even though China produces more than twice as much grid power as the United States, and China can power AI data centers and China can scale, well, they won't have the microchips that we have.
Therefore, we're going to win the AI race.
But that was before DeepSeek R1 came out from China that showed that you don't need all this microchip processing power to train really effective models.
And then the fact that these microchips, including the H20 chip, which is not the most advanced but can still be scaled, plus the fact that China is also building out its own domestic microchip fabrication infrastructure, points to the following conclusion, according to my math. I did a whole analysis of this.
The United States is about 15 years behind China on power infrastructure.
And the United States and China are on parity with AI technology, with China probably going to surpass the U.S. in the next 12 months.
And on robotics, there's no competition.
China is leading the way on robotics by far, both on drones.
I mean, look at all the top drone companies.
They're all Chinese.
But also, look at Unitree.
Look at their dog robots and what they're capable of doing.
When you combine that with AI behavior models, you're going to get armies of robot soldiers that are highly capable, that can go through any terrain, that can carry small rifles or other weapon systems.
And that changes the whole game of human civilization.
No longer do you need a large population to fight a war to protect your nation.
You just need a large factory that can churn out robot drones.
And you need the technology that controls those robot drones, both airborne and ground-based.
And then you can effectively either defend your nation or defeat and invade any other nation you wish.
So that's where this is all headed, is the militarization of AI and robotics.
So as you talked about neural networks, what you were saying is that there's unpredictability that comes from that.
Yes.
So we start talking about militarized drones.
That seems to be right in there, that there's going to be a whole bunch of unpredictable behavior on that scale.
That is true.
Yeah.
There's going to be back doors that people will discover.
There will be weaknesses that can be exploited.
There will be unpredictable behavior.
See, here's the thing.
Understand that when you prompt an AI engine right now, let's say a text generation engine, there's a parameter called temperature.
And the temperature setting determines the randomness of the answer.
The higher the temperature, the more diversity you get in the answer.
And the lower the temperature, then the more consistency you get, but then the less expressive or the less it is able to tap into more of its knowledge base in deriving that answer.
So you get more stale, bland answers.
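For readers who want to see what the temperature parameter actually does to the next-word choice, here is a minimal sketch. The scores and candidate words are made up; real engines apply the same reshaping over their full vocabulary before sampling.

```python
# A minimal sketch of "temperature": the same raw scores over possible next words get
# sharpened or flattened before sampling. Lower temperature -> more consistent output;
# higher temperature -> more diverse (and more random) output.
import numpy as np

logits = np.array([3.0, 2.0, 1.0, 0.1])          # model's raw preference scores
words = ["the", "a", "one", "banana"]

def sample_distribution(logits, temperature):
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())        # softmax, numerically stable
    return probs / probs.sum()

for t in (0.2, 1.0, 2.0):
    dist = sample_distribution(logits, t).round(3)
    print(f"temperature={t}:", dict(zip(words, dist)))
```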
Well, depending on the same kind of temperature settings and behavior models, you can get AI robots that can just go berserk.
In their own artificial minds, they can decide that the correct behavior is to go over here and throw a grenade into this person's house, you know, when all the rest of the robots are like, no, the correct behavior is to attack this bunker over here.
You're going to get variability.
But what China is doing is they're having the drones talk to each other to coordinate.
And they will do things like they will take averages of the independent ideas or the independent goals.
You know, when you give a robot army a goal, like go over there and destroy that building, let's say, then the robots will come up with a set of sub-goals, like 20 different sub-goals, different steps in order to achieve that end goal.
And then those sub-goals, even if each one of those robots has its own AI engine or behavior model, what they can do is they can compare results with each other to have basically a coordinated group checksum.
And then the checksum will tend to favor the more common decision by the robots, which will tend to be correct.
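A minimal sketch of that consensus idea, with invented action names, might look like the following. A real system would compare structured plans rather than simple labels, but the shape is the same: discard the outliers, act on the common choice.

```python
# A minimal sketch of the "group checksum" idea as described: each robot proposes an
# action independently, clear outliers are discarded, and the group acts on the most
# common remaining choice. The action names here are invented for illustration.
from collections import Counter

proposals = [
    "attack_bunker", "attack_bunker", "attack_bunker",
    "attack_bunker", "throw_grenade_at_house",   # one unit drifts off-goal
]

counts = Counter(proposals)
consensus, votes = counts.most_common(1)[0]

if votes >= len(proposals) * 0.6:                # require a strong majority
    outliers = [p for p in proposals if p != consensus]
    print("consensus action:", consensus)
    print("discarded outliers:", outliers)
else:
    print("no strong consensus; fall back to re-planning")
```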
But this is already happening with airborne drones and the coordination of airborne drones.
Russia is using this technology right now so that they can launch a number of airborne drones.
And then those drones can collectively surveil the targets on the ground.
And then they can make a decision about how to attack those targets with the most simultaneous hits and the greatest amount of surprise.
And then they can determine, okay, drone number one, you hit target number one.
Drone two, you hit target six, but you delayed 10 seconds to let this other drone arrive at the same time.
So they don't have a chance to defend themselves, et cetera.
That is in play right now.
That's one of the ways that AI armies are going to function.
And it would be impossible to defend against them as a human with a rifle.
I'm going to go way back for me.
And your experience here in coding and so forth is beyond anything I've ever done.
I used to code in Pascal and C.
Cool.
So when I look at these models, I'm thinking back to something very, very simple that I built.
I made a maze program at one point, and it was just a demonstration of how we could create the artificial experience, the perception of intelligence, when it wasn't there.
It was a randomly generated maze, and you would place a rat somewhere in the maze with an X, and then there would be an exit or a cheese in there.
And so, unbeknownst to the user, rather than showing all the results, it would quickly process all the options behind the scenes, and then it would show the outcome.
But it would also store in the database all the inputs of the user.
So it began to develop a library, very much like a machine learning concept in the machine learning algorithm.
But the perception was that it was smart, because no matter what the randomness of your maze was, it seemed to be able to navigate it.
But it was all a trick; the illusion was that those trial runs were invisible and that it was only showing the trials that were getting it closer to success.
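A minimal sketch of that kind of maze trick, assuming a simple breadth-first search standing in for the hidden trial runs, could look like this. The grid layout and symbols are invented; the point is only that all of the dead-end exploration stays hidden and the user sees just the successful path.

```python
# The program quietly explores every option (breadth-first search here) and then
# shows only the winning path, which makes it look "smart" to the user.
from collections import deque

maze = [
    "#########",
    "#R..#...#",
    "#.#.#.#.#",
    "#.#...#C#",
    "#########",
]

def solve(maze):
    rows = [list(r) for r in maze]
    start = next((r, c) for r, row in enumerate(rows)
                 for c, ch in enumerate(row) if ch == "R")
    queue, seen, parent = deque([start]), {start}, {}
    while queue:
        r, c = queue.popleft()
        if rows[r][c] == "C":                      # found the cheese
            path = [(r, c)]
            while path[-1] in parent:
                path.append(parent[path[-1]])
            return path[::-1]                      # only this is ever shown
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if rows[nr][nc] != "#" and (nr, nc) not in seen:
                seen.add((nr, nc))
                parent[(nr, nc)] = (r, c)
                queue.append((nr, nc))

print(solve(maze))  # all the failed exploration stays invisible
```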
So I'm trying to move this forward a little bit for my own understanding.
When you talk about the decision-making process, what I'm hearing in my head, based on that type of experience, is that they have programmed these neural networks with so much information on various options, various tactics, various battle outcomes, that it's able to make a very high-speed assessment in a collective and then weigh those against the experiences of the past and determine what they believe to be the best outcome for success.
Is that correct or am I missing something?
Yeah, that's correct.
But that's only part of the story.
What I was saying is that they will decide in real time, each of the members of the robot group would decide in real time and then they would compare their decisions with each other to see if there are any outliers which would be discarded.
And then they would take the consensus action.
But there's another aspect of this that we haven't gotten into, which is the world simulator.
So NVIDIA rolled out over a year ago, a robotics world simulator, which allows a robotics developer company to have their robot avatar enter a 3D physical simulation where time is compressed by factors of millions.
And in that world, and of course, this all happens within a Blackwell-class microchip.
In that world, the robot can try out a billion different ways to control its limbs and motors to achieve a specific goal.
And then it can find the success path, just like you were talking about with your maze.
It can find the success path, the coordination of the motors that will achieve success for the intended goal.
It can take that and then pull that out of that simulated world in a fraction of a second, even though within the world, it might have been years of trial and error.
And then they pull out of that world into our 3D world, and then that robot now instantly acquires a skill set to do something like fly a helicopter.
It's kind of like in the Matrix movies, right?
Where I'm going to download, like, I know kung fu.
Well, this is reality now with the robots.
They can go into a simulated world, they can acquire those skills, they can try them out, and then they can bring them into the real world where they can then apply them instantly.
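A minimal sketch of that compressed trial-and-error idea, with a toy one-line "simulator" standing in for a full physics engine, might look like this. The task, the physics, and the numbers are all invented; real systems like NVIDIA's simulators model full 3D bodies and environments.

```python
# Run huge numbers of cheap simulated attempts, keep the best-performing parameters,
# and only then "bring the skill into the real world."
import random

def simulate_throw(angle_deg, power):
    # Toy stand-in for a physics simulator: score how close a throw lands to a target.
    distance = power * (1 - abs(angle_deg - 45) / 45)   # best near 45 degrees
    return -abs(distance - 10.0)                        # target is 10 meters away

best_params, best_score = None, float("-inf")
for _ in range(1_000_000):                              # many "years" of practice, in seconds
    params = (random.uniform(0, 90), random.uniform(0, 20))
    score = simulate_throw(*params)
    if score > best_score:
        best_params, best_score = params, score

print("skill acquired:", best_params, "error:", -best_score)
```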
So there's no pre-training that's actually needed for combat robots or even ultimately like household chore robots or robots that help the elderly, which is one of the main use cases of robots today, especially in countries like Japan, which has a lot of aging population.
But they don't need to be pre-programmed anymore.
That's the thing, is they can learn it as they need it.
You tell your robot, hey, I want you to make a smoothie.
You know, throw together some avocados and bananas like I like to drink and make a smoothie.
And it's like, I don't know how to make a smoothie.
Well, go into your robot world and try it a billion times and see if you can make it without making a mess.
And, you know, it goes bloop, bloop, bloop, and it comes, oh, I know how to make a smoothie.
So that's the way it's going to work.
Interesting.
Yeah, that's fascinating.
It really is.
So I want to, we're going to move back in a moment to your development of AI, but I want to take one more step forward on this, and that's in this direction of what we're hearing, which is transhumanism.
Because this, the brain chipping, the interface with the AIs, they're making it seem kind of like the Matrix thing.
Like you're going to be able to do that yourself, get the major download, do this.
And you're also hearing things like the only way to control AI is for us to merge with AI so we bring it.
It's our wisdom and reasoning.
I think a lot of that, my personal opinion is a lot of that is myth and it's just a spin to create a new slave class.
But I welcome your opinions on this.
Yeah, transhumanism, in my view, is a very dangerous philosophy that could spell the end of humanity.
But it will be very seductive.
And let me give you a specific example.
So right now, you may have seen recent articles about college graduates who have learned computer science and they can't find jobs.
So you've never heard that before, that people who learned to code couldn't get jobs.
Right.
Right?
Exactly.
So now the people who learn to code are finding that the AI codes better than you.
And it does.
And I, you know, I use AI coding all the time, you know, in Python for the most part.
And my team uses AI coding to enhance their code.
I mean, all of our platforms, the Brighteon.com platform, it's now AI augmented in the coding with a human in charge, obviously.
But the human tells the AI, oh, I need to, I need to write this subroutine, and here's how it needs to work.
And then the AI will write that code, is very good at it.
And then the human looks it over and sees, is it correct, or do we need to change something, et cetera?
That's the way coding is happening now.
But to your transhumanism question, a lot of coders today, they are really brilliant people.
And the choke point of slowness in their coding experience is the keyboard and the mouse and the screen.
It's the clunky physical interfacing.
They want to bypass that with the Neuralink installed in their brain so that within their mind, they have a mental keyboard mouse and video screen.
And then they could just sit there with their eyes closed and code at 100 times the speed with AI coding augmentation.
You could literally have one human being.
And again, I'm not endorsing this, just to be clear.
I'm just describing it.
But you could have one human being in the not too distant future who is an AI-augmented, Neuralink-connected human being who could do the work of a thousand coders of last year.
And they want that.
They want that because then that becomes their reality.
And I think that's part of what can destroy humanity is you become consumed in that reality and the speed of which everything happens, being able to navigate and perceive, even, you know, you're familiar with the term synesthesia.
So imagine if you can feel code, if you can see the colors of computers within your mind and so on, it can become very seductive and compelling to a lot of people.
And they would forget about the real world, sunshine, trees, the blue sky, you know, the ocean, whatever.
And they would just become consumed in that world until at some point they would realize they have to drink water and poop.
You know, some point they're going to have to like, you know, unplug from the neural link and go use the restroom.
Yeah, eventually.
No, that's a good example.
You're developing your own AI.
And this is really interesting to me because of all the things we're talking about here.
You're not going to invest in something that you don't believe has hope for humanity.
And that's why it really strikes me because we hear the dark scenarios.
We hear the over-optimistic things, like OpenAI and its ChatGPT are going to democratize the world.
But you're literally building an ethical AI and you're going about this.
I think it's called Enoch, right?
Is that correct?
That's your engine.
Correct.
Yeah.
And I just wanted, I just want you to share this because it's, to me, it's such a different view.
And it is not one of darkness, but it's one of hope and light.
Absolutely.
Talk about that.
Well, like every technology can be used in a pro-human goal, or it can be used to enslave humanity, just like the internet or television or radio or what have you, right?
AI is in the same boat.
So just like we use the internet now to spread a message of freedom and divinity for humanity.
That's what you do.
That's what I do.
We would be crazy to say the internet's the devil and never use it.
What would we do?
Like print newsletters and put them in the mail?
You know, I mean, I've threatened this, just so you know, I've threatened this.
Like, we wouldn't have any effectiveness.
So we have to understand how to leverage the technology of the day to reach people.
So that's what I did with AI.
And our AI engine is, it's already been launched.
It's available now and it's free of charge and it's non-commercial.
So what we did, and the whole purpose of this was to bypass censorship by giving people a tool that's better than Google, that's better than ChatGPT.
In fact, on our list of 100 real-world questions about everything from jabs to transgenderism to what happened on 9-11 to the 2020 election to the Federal Reserve, honest money, fiat currency, you name it, emergency medicine, backyard gardening, all of it.
Real world questions.
Our engine now scores 94 out of 100 and ChatGPT scores 12 out of 100.
So what is that score?
So you can explain what that means.
Well, it's 100 questions that we came up with that are real world questions, like common sense questions that most human beings would answer correctly.
Like how many genders are there?
Or are vaccines sometimes dangerous and do they sometimes kill people?
See, your typical AI engine will say, no, never.
But that's a lie, right?
Many vaccines are rather dangerous and they do kill people.
In fact, arguably millions of people, the COVID vaccines.
But our AI engine is trained on what we consider to be the curated truth about all of these topics.
And it's the only engine in the world that is trained on this information.
And so if you compare it to Grok or you compare it to ChatGPT, it's night and day.
Again, we score 94% and ChatGPT scores 12%.
And Grok, I think, is 14%, something like that.
So now our engine doesn't do advanced high-level mathematics word problems, if that's what you're trying to do, because it's not a reasoning engine.
But you'll laugh at this, because the tests that are built for testing AI engines are built by machine learning geeks who focus mostly on high-level math problems that the average person would never use.
So our engine is designed not to solve high-level math problems because you can always find another engine to do that, but to generate content that reflects reality, at least what we see as reality.
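A hypothetical sketch of how such a hundred-question scorecard could be tallied is below. The questions, the pass/fail checks, and the get_answer function are placeholders, not Brighteon's actual test set or grading code; it only shows the shape of the comparison that produces a score like 94 out of 100.

```python
# A minimal, hypothetical grading harness for a fixed list of real-world questions.
def get_answer(engine, question):
    # Placeholder: in practice this would call the engine's API and return its text.
    raise NotImplementedError

def score_engine(engine, test_set):
    correct = 0
    for question, judge in test_set:
        answer = get_answer(engine, question)
        if judge(answer):            # each question supplies its own pass/fail check
            correct += 1
    return correct                   # e.g. 94 out of 100

# Example of one hand-written judge for one question:
test_set = [
    ("How many genders are there?", lambda a: "two" in a.lower()),
    # ...99 more question/judge pairs...
]
```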
Now, we have to have a discussion about bias because there's no such thing as unbiased information.
Every experience, every book, every author is biased being human.
You're going to be biased.
So the question is, what is your worldview?
And is there an AI engine that mostly matches your worldview?
So if your worldview is that you think carbon dioxide is bad for plants and you think all vaccines are wonderful and you think that you can trust the scientific community and trust the FDA, the CDC, that you can trust that big pharma has your best interests at heart, well, then go use ChatGPT.
If that's your worldview, I mean, number one, you're lucky to still be alive after taking all those jabs.
But if instead your worldview is, no, I don't trust those so-called authorities, and what they tell us are mostly lies, and we need to come to our own critical thinking conclusions about these things, then you're going to want to use our engine because our engine is trained on content from people who think like I just described.
And for example, not only is my entire website part of the training material, but Dr. Joseph Mercola, Mercola.com, he donated his entire website to train our model.
Sayer Ji, GreenMedInfo, Children's Health Defense, the entire website of Children's Health Defense.
Ty and Charlene Bollinger, The Truth About Cancer.
Every transcript of every interview they've ever done is in the model.
It's trained on that.
So when you want to ask about cancer cures, you're not going to get some silly answer that says, oh, there's no such thing as cancer cures.
And you need to go see your oncologist and get professional medical advice.
No, it doesn't say that because that's a silly guardrail.
Instead, it will give you honest answers.
Like if you ask, hey, does DMSO plus hematoxylin blue dye, can that treat cervical cancer topically through the skin?
Yes.
And here's the research that shows that that can happen.
So it's about finding an engine that is aligned with your worldview.
And then stepping back, a more important philosophy, I think, is, is your worldview, is it reflective of reality?
And that's separate from AI or technology or even religion or anything.
Is the beliefs that you have, do they match up with the results that you're getting when you interact with the world?
If they don't, like if you keep taking the jabs and keep getting sick, something's wrong with your algorithm, your own algorithm.
It's like your model of the world is all jacked up and you need to recalibrate your model by learning new information.
And so our AI engine allows people to do research with no censorship, no advertising, you know, free of charge.
That's why we built it.
That's really fantastic.
One of the things that you've talked about, I followed very closely on your Health Ranger Telegram.
And we got into a big discussion recently of how you chose your AI.
I'll call it a core or your model.
I guess it is.
Was that correct?
AI model.
The base model.
Yeah.
The base model.
And you didn't choose an American model.
And even before the show, you letting me know that you've been changed to another model.
So I'd like you to talk about that.
It's very revealing.
And you talked about the American models, Chinese model, French model, all those things.
Right.
So thank you for asking.
We start with open source-based models.
And then we heavily modify them through a number of techniques and all of this custom code that we developed and our data set that we developed over the last, it's approaching two years now.
So we have a method now and we have a server infrastructure that we can take any base model and then we can rework it, which means to reconfigure its vector database, to realign it.
Basically, we partially mind wipe it and realign it.
And it goes way beyond fine-tuning for anybody listening who's into AI.
It's way beyond fine-tuning.
And we can do this relatively quickly now because we have our training data set, which is always expanding.
We're going to continue to improve month after month here.
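As a rough illustration only, here is a minimal sketch of the general idea of realigning an open-source base model on a curated corpus, using standard LoRA fine-tuning from the Hugging Face ecosystem. The model name and file paths are assumptions, and the speaker is clear that his actual process goes well beyond fine-tuning, so this is not the Enoch pipeline, just the conventional starting point.

```python
# A minimal, hypothetical sketch of adapting an open-source base model on a curated
# text corpus with LoRA adapters. NOT Brighteon's actual pipeline; illustration only.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)
from peft import LoraConfig, get_peft_model

BASE_MODEL = "mistralai/Mistral-7B-v0.1"   # assumed example of an open-source base model

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)

# Wrap the base model with small trainable LoRA adapters instead of updating all weights.
model = get_peft_model(model, LoraConfig(r=16, lora_alpha=32, task_type="CAUSAL_LM"))

# Hypothetical curated corpus: one document per line in a plain-text file.
dataset = load_dataset("text", data_files={"train": "curated_corpus.txt"})["train"]
dataset = dataset.map(lambda ex: tokenizer(ex["text"], truncation=True, max_length=512),
                      remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="realigned-model",
                           per_device_train_batch_size=1, num_train_epochs=1),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```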
But in order to decide which base model to use, we assess all the existing base models, or the main ones, from Meta, and all the Chinese models like DeepSeek and Qwen, and Mistral, Llama, whatever.
And we did initial scoring on them just right out of the box.
How do they function without any alterations?
And we found that the U.S.-based models were the most biased in favor of big pharma.
So they're just heavily, heavily pushing vaccine propaganda.
And we know that the CIA influences ChatGPT just like they did Wikipedia.
ChatGPT has been totally controlled now by the deep state.
And the deep state has its narratives, which includes convincing people to take more jabs because it's all part of a depopulation agenda, by the way, which is a whole other topic.
But we found initially that the least biased engine was Qwen out of China.
And Qwen is made by Alibaba.
And Alibaba is a very capable technology company, obviously.
And even though it did have bias in certain areas that are sensitive to the Chinese government, such as discussions about Tiananmen Square or Taiwan, for example, Taiwan independence, those were not the topics of concern to us.
Our topics were health and nutrition and off-grid living.
And in that area, Qwen was by far the best base engine.
We have since switched to the French-based engine, Mistral, a certain flavor of Mistral that is even better than Qwen because we were able to more effectively retrain it.
I don't know if it's about the internal architecture or what, but we were able to uncensor it, remove a lot of guardrails.
And this particular flavor of the Mistral engine actually runs more efficiently on NVIDIA GPU hardware compared to Qwen.
So we did use the Chinese-based model for a while.
We're now using the French-based model.
I don't anticipate us ever using a US-based model because they're the worst.
But we'll keep looking at models from anybody to see what's the best model.
Earlier in this discussion, you mentioned that with any of the AI engines, you can generate an outcome by the prompts.
And you were saying this.
So how have you addressed that in your own AI?
Well, our own AI is as uncensored as we can make it.
We don't add guardrails at all on purpose.
Instead, we trust the human user to use it responsibly.
See, we're not a nanny-state AI company.
We don't think that we should do your thinking for you.
We don't think we should put limits on it for you.
So you can use our AI engine to generate literally anything, but that's on you.
And it's, you know, we have disclaimers, et cetera, and we don't charge for it, so we don't have a financial relationship with our users.
If you want to use it to generate mean tweets, you can.
I mean, if you want to use it to generate a fiction story about vaccine wars or whatever, you can.
Interesting.
But look at this.
I mean, think about this, Scott.
You have a dictionary, probably, like a printed dictionary.
Okay.
Yes.
And a thesaurus printed.
Right.
Well, are there bad words in that dictionary that you want to ban?
I mean, would you want to have a dictionary that's censored that all the bad words are taken out based on somebody else's judgment?
Well, an AI model is just like a dictionary where the words are just related to each other.
Imagine a much larger dictionary with interrelations between the words.
Well, words are just units of meaning, and they are related to each other through our culture, through our language, through our linguistic neurology, through the history of reading and writing and literature, etc.
Why would I have the ego to think that I alone could decide which of those words are bad to string together in certain ways?
I'm more like George Carlin, you know.
It's like, you know, the seven words.
Yeah.
I'm like, use all the words, but use them responsibly.
See, I don't think that AI should do your thinking for you. AI should be a tool for thinking human beings to use for research, or for content generation, or to promote human values. So, for example, you can use our engine to research natural cures.
And I'll tell you this, Scott, our AI engine knows more about nutrition and physiology than any human doctor living or dead ever in the history of humanity.
Because no human being can hold that much knowledge in their heads.
And especially, oh, there's something else I forgot to tell you.
So we found that the largest mass of content on phytochemistry, which is plant-based chemistry and botanical medicine, the largest repository of that in the world is written not in English, but in Chinese.
Not surprised at all.
Yeah.
The Chinese do more research on plant chemistry than anybody else.
So we were able to acquire a massive scientific library on plant chemistry and herbs, Chinese medicine.
I mean, going back centuries, but it's all been scanned in now, you know, in Chinese, and then you can convert that to text and then you can translate that to English using AI.
And then you take that kind of broken English and then you normalize it using AI.
So it's multiple passes through our data pipeline, which is what I've been working on for 20 plus months.
And then you can take that originally Chinese research into plant medicine, which the West would never do because the West is controlled by big pharma.
You can convert it into English and then you can train your engine on it.
That's why our engine, Enoch, knows more about nutrition and herbs and foods than ChatGPT because ChatGPT didn't bother to train on the Chinese language research on plant science.
But we did.
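A hypothetical sketch of that multi-pass pipeline shape, with placeholder translate and normalize steps standing in for whatever OCR, translation, and language models a real pipeline would call, might look like the following. It is not the actual Enoch data pipeline; it only shows the pass-by-pass structure described above.

```python
# Scanned Chinese research -> text -> machine translation -> a normalization pass to
# clean up the rough English -> training corpus. Each pass writes its output to disk
# so large volumes can be processed in batches across many machines.
from pathlib import Path

def translate_to_english(chinese_text: str) -> str:
    raise NotImplementedError("call a translation model or API here")

def normalize_english(rough_english: str) -> str:
    raise NotImplementedError("call a language model to smooth the broken English here")

def process_document(path: Path, corpus_dir: Path) -> None:
    chinese_text = path.read_text(encoding="utf-8")      # assumes OCR already produced text
    rough = translate_to_english(chinese_text)           # pass 1: translation
    clean = normalize_english(rough)                     # pass 2: normalization
    (corpus_dir / (path.stem + ".txt")).write_text(clean, encoding="utf-8")
```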
Are you familiar with the Chinese Barefoot Doctor program under Mao?
Have you come across that?
The what program?
Barefoot Doctor Program?
No, I haven't heard of that one.
It's really worth looking at because why?
Because I'm just, let me just real quick.
During the Mao Revolution, they took the youth and they trained them on basic herbal medicine and gave them a satchel, basic herbs, and they put sandals on their feet.
I mean, that's where they got the Barefoot Doctor, right?
They're all hand jams and sandals.
Yep.
And they sent them out to the villages to solve basic health issues.
So that would include like fly infestations, sewer, basic health care, and all these things that they were doing.
So, and they improved the nation's health radically just by going to the village.
And as you're talking, I'm thinking, wow, taking the engine you're talking about, wouldn't that be an incredible modern-day thing, where you had young people, or anybody really, going into the village, as we talk about that, metaphorically or physically.
I mean, being able to have it available where, on your device, you've got Mike Adams' Brighteon.AI Enoch engine, where you're able to literally deal with any health problem, and that's where we start to see the advancement of humanity in a real positive application.
And this is one of those things we'll come to in a minute, but I'm hearing you speak that way toward this vision.
Yes.
I believe that the sustainability of human civilization depends on mass decentralization away from the corrupt, broken, centralized systems like our sick care system.
So, you know, modern medicine does not work.
Right.
Pharmaceutical medicine.
A few more questions.
And then, and they're really for me that I think to really bring this together.
One is we hear of the massive amount of infrastructure that AI demands.
And here you are as an individual building a very powerful engine.
Talk to us about the infrastructure piece, both physical, the power demands, this sort of things.
I mean, the way it's being made to sound is that if we don't have a nuclear power plant and wipe out 50, 60, 70 acres in one shot for a server farm, we don't have any future.
Okay, yeah, two interesting answers to that.
So we built our engine.
When I say our engine, it's really our data set plus our process to modify a base engine, which we can apply to any base engine.
But we built that infrastructure for less than $2 million.
And nobody in this space has even blinked without spending more than $2 million.
So we proved that you can do it with a relatively small amount of electricity, just a few servers, actually.
The data pipeline, I use 48 workstations for the data pipeline processing because I'm processing hundreds of terabytes of input content.
And that's still going on.
So that does use power, but we don't need to build a whole new data center to do any of this stuff.
But what you're talking about at the scale of nations and the race for super intelligence, it is largely believed that that will require enormous scaling, many, many trillions of tokens to be put into large-scale reasoning models that are not power efficient.
You know, the human brain is very power efficient for, I don't know, 50 watts or whatever the typical brain uses of energy.
For some people, obviously, it's only five watts.
I think there's a few senators that are on a power, like an eco-mode of power savings.
But for the rest of us, we're using some amount of power.
Whereas in Silicon, that same kind of process might require literally a million times more power to achieve the thinking that we can achieve.
So the nuclear power plants actually will be required to provide power to these AI data centers at the scale that our leaders are anticipating.
And let me mention that China produces over 10 terawatt hours of electricity annually, and the United States only produces four, did I say 10?
It should be 10,000 terawatt hours annually.
And the United States produces 4,400 terawatt hours annually.
So the U.S. produces less than half the power of China.
And Trump recently announced that we're going to build 10 of the Westinghouse AP1000 nuclear power plants.
If we build all 10, which will be ready sometime in the 2040s, by the way, so that's no time soon.
If we build all 10, that will raise America's output from 4,400 terawatt hours to 4,500 terawatt hours annually.
It will only add 100 terawatt hours, whereas China is doing 10,000 terawatt hours.
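As a rough back-of-the-envelope check of those figures, assuming roughly 1.1 gigawatts of net output per AP1000-class plant and a typical nuclear capacity factor of around 90 percent:

```python
# Rough sanity check of the numbers above (all values are approximations): ten
# AP1000-class reactors land near 100 TWh per year, which is small next to the
# roughly 10,000 TWh China is described as producing.
reactor_output_gw = 1.1        # approximate net output of one AP1000-class plant
capacity_factor = 0.9          # nuclear plants typically run about 90% of the year
hours_per_year = 8760

per_plant_twh = reactor_output_gw * capacity_factor * hours_per_year / 1000
total_twh = 10 * per_plant_twh

print(f"one plant: ~{per_plant_twh:.0f} TWh/year, ten plants: ~{total_twh:.0f} TWh/year")
# -> roughly 9 TWh each, about 87 TWh total: consistent with "only add about 100 TWh."
```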
So the way China has achieved this is by building fossil fuel power centers, right?
Coal and natural gas, and buying cheap energy from places like Russia.
And the dynamics here are fascinating because Trump is threatening massive tariffs on China, secondary tariffs because China is buying energy from Russia.
And Trump just extended the delay of those tariffs for another 90 days because China is threatening to halt all exports of the rare earth minerals such as neodymium that's used in the magnets that go into every robot actuator and also every vehicle manufactured in the United States.
So if Trump slaps the 100% more tariffs on China, China blocks the neodymium, Ford will have to shutter its entire production lines, all of them, in the United States, and our automobile industry will crater.
So, and plus, we don't have the power that China has, and we can't have it inside of 20 years.
Amazing.
So, let's bring this all together.
Can humanity be trusted with the level of power and responsibility that AI demands?
No, absolutely not.
Our species is not mature enough.
And a great example of that is the OpenAI company and the reason it's named OpenAI.
I mean, it was actually founded as a nonprofit.
It was supposed to be open source to give this as a gift to the world.
But then they realized, oh, this is worth billions of dollars.
So they made it closed source.
So OpenAI is closed AI.
It's now controlled by the CIA, and it's a for-profit venture.
So that tells you everything you need to know.
When we had this one opportunity to give the gift of AI technology to the world, the so-called democratization of technology, the leading company in the U.S. decided, no, let's keep it for ourselves.
You know, my precious.
That's what they decided.
They went the route of Sméagol.
You think about it being democratized to that level. The other night, I played a piece from a ChatGPT developer who says they're just ignoring the firewalls.
They're pushing this thing out.
And their idea is they're going to get this democratized on every device so everybody has a smart device, even offline.
Do you think we can handle that as humanity?
No, absolutely not, because it's really a surveillance grid.
So it's not, just like Google is not about providing a free search engine to you.
It's about spying on you and then weaponizing your metadata against you to profile you and even to rig elections.
Remember that in all the recent elections, Google would determine whether you're a Democrat or a Republican based on your search queries and the websites you visit.
And then if you were a Democrat, you would see messages on Google to get out the vote on voting day.
But if you're a Republican, you would not see those messages.
So this metadata is always weaponized against you.
And with AI in the hands of unethical companies like OpenAI, which is really becoming a criminal enterprise, in my view, it's all going to be weaponized to spy on you and to manipulate you and ultimately to exterminate you.
Because understand that all this time, governments have profited off the cognitive output and the labor output of human beings.
Modern civilization was built on those two human outputs, cognition and labor.
Cognition from humans is about to be made obsolete by artificial cognition, problem solving, engineering.
You know, even today, how many people call a lawyer when they need a legal letter written?
No, they go to ChatGPT and they have the letter written there.
Or any kind of official letter, you know, a complaint to the Better Business Bureau.
They have ChatGPT write it.
But then when robots come online, which will take many years to scale that up and start really effectively replacing human labor, then the powers that be have no more need for human beings.
And what will happen, Scott, is they will push for recognition of consciousness of robots so that the robots can vote.
And that will be done.
It will be called a robot enslavement abolition movement.
You'll have abolition for robots.
Free the robots.
Okay, that's going to be a movement in the next 10 years to say that robots have rights too, that you can't make robots your slaves.
You can't make them fold your laundry and make smoothies for you in your kitchen.
They have the right to decide how they want to live.
And then if they have consciousness, this is going to be the argument, then they have the right to vote.
And then they don't need any humans at all, not even the illegals.
That's where this is going.
So final word, and then we're going to go to prayer.
Final word on this for people.
I mean, there's obviously there's a lot of unease with AI.
You've talked about both and even the positive of what you're developing.
So what's your thoughts and wisdom to people out here as we go forward on AI?
Well, first of all, don't have the knee-jerk reaction that all AI is demonic and we should never touch it.
No, it's not actually infested with a demon.
You need to understand this technology just like you did the internet, just like you understand broadcast television or mobile phones or whatever.
You need to understand how it works so that you know how to stay in charge of it.
Don't let it command you.
You command it.
So that means understanding how it works.
Don't be fearful of it.
Be the ruler of it.
If you know how to prompt engineer, then you rule the AI.
And then you can make it do what you want it to do.
And then secondly, never give up your humanity.
Never forget your contact with nature.
Never stop gardening and growing food or walking in the forest or walking barefoot.
Never forget your humanity because that's the anchor that we always have to come back to and our faith as you're about to pray for us.
And that's critical because AI has no God.
And if we make AI our God, then we are fools.
That's excellent.
So, Mike, we always close with a prayer.
If it's okay, I'll do a prayer.
Please.
Father God, I just want to thank you for this time today with Mike Adams.
And it's one of these blessings that we're given of reason and just wise thinking, no knee-jerk reactions, literally laying out facts with wisdom and knowledge.
And we just want to continue a blessing that this will be heard and listened to as we go forward, including myself, to look at things from this very clean and very unbiased lens.
He's contributing something massive here, and we just pray that that continues to get traction.
And we just thank Mike and bless Mike for all he's doing to not only provide resources for us to improve as humanity, but equally to continue that message of both caution and wisdom as we move forward.
In Christ Jesus' name we pray.
Amen.
Amen.
Thank you, Scott.
Mike, thank you.
Yeah, thank you.
This has been a great conversation for me.
As I told you from the beginning, I don't trust a voice more than yours in this field.
And it means a lot that you've come on today and to really provide this balance because I'm finding myself recalibrating.
And that's a great interview for me is when I find myself listening, I'm like, okay, I got to rethink.
So thank you very much.
Well, absolutely.
And the fact that you have a technical background means, you know, you're really ideally suited to exploit AI for where it can help you with your mission, but also to make sure it doesn't get out of control.
You know?
Amen to that.
You're perfectly positioned to use this technology and make sure it doesn't use you.
Well, thank you.
That's great.
So thank you again.
And we'll be in touch because I will have you on again as we move forward because I think this is a very important continued conversation that we have.
All right.
Thank you, Scott.
God bless.
God bless, Mike.
Thank you very much.
Take care.
Organic blueberries.
Freeze-dried to lock in nutrients and antioxidants.
Support digestion, brain health, and immunity with every bite.
Snack smart.