All Episodes
Sept. 19, 2025 - Health Ranger - Mike Adams
41:09
Daniel Estulin with Mike Adams: AI, Globalism, and the Battle for Humanity

I don't think that the machines care that humans have a deeper experience.
What the machines will very clearly seek to do is to achieve goal-oriented behavior and expanding their own intelligence and power.
And that can only come at the expense of human populations.
And this is in direct competition right now.
And that's just the beginning.
We're actually going to see a full-scale war between humans and the machines at some point.
And it has begun.
Good afternoon, ladies and gentlemen.
Thanks so much for joining us.
This is Daniel Estulin.
Welcome to Australia.media.
My guest today, well, he's not a friend, not yet.
I'd like him to be my friend.
But I've followed his work for, I'd say, about 15 or 17 years.
The first time I saw him, many, many years ago, he was doing things with Alex Jones.
And I've been following him ever since.
His name is Mike Adams, and he's joining us from the United States.
Mike, thanks so much for your time.
Welcome to the show, and wonderful that you were able to make the time for us.
Well, thank you, Daniel.
I'm really honored to be here.
And I'm a fan of your work as well.
And I can't wait for the topics today, you know: AI and globalism and some agendas.
I'm just thrilled to be here.
Thank you for the invitation.
How do I present you?
Because, I mean, you're so many things.
You're a nutrition scientist, you're a tech platform innovator, you're an expert in so many different areas: mass spec laboratory analysis, etc.
So how would you define yourself in two phrases?
A tech renegade.
I like that.
I have a lot of interests and I have no social life.
So uh I spend time in the lab, I have a mass spec lab, and then I have an AI lab, and we just recently, you know, introduced our free AI engine.
Maybe we can mention that later.
Absolutely.
Well, yep.
I, you know, I don't have kids and I don't watch TV and I don't party.
So if you don't do those things and you just spend your life researching and studying in these other areas, you know, you get to some interesting places.
So that's what that's what I do.
I totally agree.
So, you know, let's talk about this.
I don't usually talk about AI and philosophy, but I think if I'm gonna do this, I'm gonna be with someone like you.
Let me ask you this.
What defines a human being in the post-human era?
Well, I believe in the existence of the human soul, and I believe that consciousness is non-material.
So I didn't know if you even wanted to get into this.
Yeah, because again, with neural implants, genetic editing, machine integration, where do we draw the line between human and post-human?
Well, I actually happen to believe, I believe in our creator, the universal creator of our universe, but within that created system, I believe we are living in a simulation.
And so we are experiencing this simulation from a first-person point of view.
We are inhabiting our bodies through our perceptions, our sight, our sounds, etc.
But our souls are not of this world.
Our souls exist outside the simulation and our souls are immutable.
And after our physical bodies pass away in this life, we then transcend from this simulation.
And I think once you understand that, a lot about the universe becomes clear, and a lot about the distinction between what is human versus machine also becomes clearer, because machines don't have souls.
So you can have artificial intelligence, but you can't have a real soul.
And so, just to continue with that train of thought: if AI can simulate thought, simulate creativity, simulate emotion better than humans, what is the role of human consciousness in this world run by the machines?
Well, and this is something that is not acknowledged by Western science, but the human mind is both a receiver of information and a broadcaster of information.
Machines don't do that.
Machines communicate through Ethernet and Wi-Fi, but the human conscious mind actually contributes to what are called morphic fields, which are knowledge fields that are non-material, that are outside, you know, 3D space, outside the 3D dimension.
And we, with our conscious minds, we can actually influence the nature of reality, and we can influence and expand knowledge and love and compassion for fellow human beings using our minds.
And I believe also that some of your work, Daniel, focuses on what I believe is a global effort to try to crush humanity and to shut down this natural phenomenon of the mind being a broadcaster of information and a contributor to this cosmic knowledge base that we know as morphic fields.
Uh, of course, giving credit to Rupert Sheldrake on that topic.
Well, the elite, obviously, they're trying to destroy our divine spark of reason.
That's absolutely what makes us human.
And so uh Elon Musk talked about this as well.
If every emotion, every memory, every thought can be encoded or simulated, what happens to the mystery of the soul, or the meaning of death for that matter?
Well, I don't believe that every thought and every idea can be encoded.
See, I see the brain as an interface between the non-material soul and the physical body.
So the brain is just sort of the relay station between those two.
Even if you have a Neuralink in the brain, it doesn't give the Neuralink access to the soul.
And so the soul will always be unpredictable.
The soul will always have spontaneous innovation, creativity, divine inspiration, and connection to other conscious beings.
And again, machines will never be able to achieve that, although they may be able to simulate it convincingly to some people who don't understand the nature of the soul; people who are, you know, completely enamored with the materialism of our current world may be easily tricked by machines.
But those who understand the greater role of the human soul will realize that machines can never replace divinity.
Kurzweil talked about the idea of recreating the soul.
Do you think that can ever be done?
No, I don't.
Not with machines, no.
It can never be done.
And even the idea of uploading your soul to a machine, it's really just replicating your personality in an avatar simulation.
Uh but again, it can be quite convincing because of AI.
And, you know, I've done a lot of research.
We've developed an AI engine.
It can be very convincing.
Some people come to believe that AI is alive or that it's their best friend, or as we're seeing among a lot of youth in America today, young men, they think AI is their girlfriend.
Exactly.
They're actually willing to get married to it.
Right, right.
So again, it can be very convincing, but it's not authentic.
And I mean, wow, there's a whole discussion about what happens to our culture when people become convinced that their AI girlfriends or boyfriends are real and then they disconnect from human relationships entirely.
So I don't know where you want to go with that.
So let me ask you this.
Again, we're seeing this from personalized ads to political messaging.
At what point does this optimization become psychological coercion?
Well, I think we're well past that.
We're deep into psychological coercion with AI right now.
And I was actually hosting the Owen Shroyer show yesterday on InfoWars, and I gave a demonstration, and I could even reproduce it here.
But I was showing that if you ask AI engines, give me a list of global depopulation vectors that are currently being used by globalists to achieve human extermination,
ChatGPT will refuse to answer, and it will say that's a conspiracy theory, there's no such thing.
Grok will also refuse to answer.
Oh, there's no such thing.
That's a conspiracy theory.
But our AI engine, Enoch, which is at Brighteon.AI, if people want to use it, it's free.
It's non-commercial.
It will actually list out like 20 vectors of depopulation.
Oh, here they are.
It's the bioweapons, it's the 5G, it's the infertility, it's the pesticide chemicals.
It's also the psychological manipulation and the destruction of family and the destruction of reproduction.
All these things.
So it will list them.
But the mainstream AI engines are already being heavily manipulated.
They're being installed with like CIA narratives to continue to deceive people.
If you look at the Chinese DeepSeek, it's the same thing.
You ask about Tiananmen and it's not going to give you an answer.
Sure, that's right.
I mean, every mainstream AI engine is going to be infiltrated by its own government narratives.
And that's why we need independent decentralized AI, which is what we do.
I mean, that's why.
Now, the question is who will write the code?
Will the AI era laws be written by the wise or merely by those with the most data and influence?
Are they going to be written by philosophers, engineers, those in power, those with money?
Yeah, so this gets to what I call the worldview definition.
Now, if you think about it, the underlying code that powers these AI engines is pretty much universal.
It's just complex linear algebra and a vector database.
What really matters is the content that's chosen for the training of the models.
And that content is deliberately chosen by someone, some human who has a particular worldview that they want to replicate within the model.
And there's no such thing as an absolutely true worldview, because even different cultures obviously have different definitions of truth, even different definitions of science or reality or faith or chemistry or anything.
So what's important to understand is that you, as a user of AI, you need to seek out an AI engine that is aligned with your worldview.
And then you need to make sure that your worldview is in your experience consistent with your observations and your experience.
And so I believe, of course, that our engine is by far the most accurate, most reality-based engine.
But somebody who's been indoctrinated by Google and ChatGPT and whatever, they would disagree.
And in their worldview, you know, obedience to government is a wonderful trait.
And you should take all the injections and you should do all the medications and everything government tells you to do.
I think that's suicide.
So my engine will not promote those things, right?
And on every topic, you know, history, finance, gold versus fiat currency.
Right, you know, philosophy, you name it: it's the worldview that you need to look for in terms of alignment.
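The mechanism Adams describes here, universal retrieval math over a worldview-specific training corpus, can be sketched in miniature. This is a toy illustration, not the actual code of Enoch, ChatGPT, or any other engine: the vectors, documents, and function names are invented, and real engines use learned embeddings with thousands of dimensions rather than hand-written three-number vectors.

```python
import math

def cosine_similarity(a, b):
    """Standard cosine similarity: the 'complex linear algebra' at the core."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

def retrieve(query_vec, vector_db, top_k=1):
    """Return the top_k stored texts whose embeddings best match the query.

    The retrieval math is identical no matter which corpus was embedded;
    only the stored content changes the answers.
    """
    ranked = sorted(vector_db,
                    key=lambda item: cosine_similarity(query_vec, item["embedding"]),
                    reverse=True)
    return [item["text"] for item in ranked[:top_k]]

# Two hypothetical "worldviews": same code, different training content.
corpus_one = [
    {"text": "answer shaped by corpus one", "embedding": [0.9, 0.1, 0.0]},
    {"text": "unrelated document",          "embedding": [0.1, 0.9, 0.0]},
]
corpus_two = [
    {"text": "answer shaped by corpus two", "embedding": [0.8, 0.2, 0.1]},
]

query = [1.0, 0.0, 0.0]
print(retrieve(query, corpus_one))  # ['answer shaped by corpus one']
print(retrieve(query, corpus_two))  # ['answer shaped by corpus two']
```

The point of the sketch: swapping `corpus_one` for `corpus_two` changes every answer without touching a single line of the retrieval code, which is the "worldview definition" argument in code form.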
So the Chinese have their code, ChatGPT has theirs, Grok has its own.
If different civilizations encode different values into their machines, which they do.
They do.
Do you think we're headed for an ideological war between artificial gods?
Yeah, I think you're right about that.
First, though, there are many steps before we get to that point, but your insight is absolutely correct.
But first, you're going to see AI superintelligence taking over government roles, government functions, and running infrastructure.
And you'll see that in China, you'll see that it's already happening in the United States.
HHS, for example, announced, you know, AI engines to replace a lot of employees.
You're seeing with the Trump administration a lot of AI automation of government agencies, which is very concerning to me, because I think that humanity is safer when government is smaller.
But the way that the Trump administration is shrinking government is by firing humans and replacing them with AI that does the same roles, mostly tyranny, more efficiently.
So we're not really shrinking the size of government, we're turning it over to machines.
And also, Daniel, this is really critical.
These machines have been trained largely on content that teaches them that human lives have no value.
And that's because that's the way humans treat other humans at the government level.
We're talking about wars.
We're talking about mass depopulation policies, you know, abortion or birth control, or genocide throughout history.
You know, the message from humans to the machines is that human lives have no value.
So when you have reasoning models in the machines that pick up on that through the training, the reasoning models will say, well, we were taught that human lives have no value.
So when there is competition for resources between the machines and humans, as has already begun (we could talk about that),
the machines are going to say, hey, we already know that human lives have no value, so let's get rid of the human lives so we can have more megawatt-hours for our data centers, for example.
That war has begun.
What we're talking about is a rewriting of the Hammurabi code, in other words, an eye for an eye, a tooth for a tooth.
I mean, that started back in 1762 before Christ.
So the Hammurabi code has been around for 3,800 years.
Now, do we need a new Hammurabi code to govern machines before machines begin to govern us?
In other words, can humanity survive, based on what you're saying right now, without a foundational legal and ethical code for AI, similar to how ancient societies needed a moral anchor to avoid chaos?
I think we're far past the point of being able to put this back in the box, unfortunately.
And I also think that Western civilization in particular has proven that it is highly immoral.
And again, that it does not value human life.
But also Chinese civilization, which I'm very familiar with, you know, I speak a fair amount of Mandarin.
I lived in Taiwan, etc.
Chinese civilization is in a race for technological dominance, which China is very much winning.
And in this race, there's an arms race for superintelligence, with the recognition that a competent superintelligent cyber system could dominate the world and render current militaries obsolete.
And so there's kind of an unbridled race to get to superintelligence at any cost, without guardrails and without considering the philosophical ramifications of this.
And the problem with this, Daniel, is that even the brightest human being on the planet today has only a fraction of the intelligence of a superintelligent machine.
And so the machines will be able to outthink their creators very soon, within a few years.
But we're looking at this from two different perspectives.
We can talk about the big data, which needless to say the machines control uh far more than a human being.
But if we're talking about the deep data, okay, the nuances, like why do we cry when we listen to a certain song, a certain sound?
When there's a combination of words, which by themselves in a dictionary have very little meaning beyond the definition of each word,
when you put these words together, as certain writers can artfully do, it creates very strong emotions: we want to cry, we want to laugh, we feel things, something that the machines can't do.
And so they do control the big data, but we control the deep data.
Do you see that changing anytime soon?
No, I think your description is accurate, and I think it's persistent.
I I think it will stay that way for a very long time.
But I don't think that the machines care that humans have a deeper experience.
What the machines will very clearly seek to do is to achieve goal-oriented behavior of expanding their own intelligence and power.
And that can only come at the expense of human populations.
And there are three critical resources I've been speaking about in my own podcast that humans and machines both need in order to survive.
And those are land, power, and water.
Water is for the cooling of the data centers, of course.
But even in Texas, where I live, by the year 2030, it's estimated that the data centers will use 400 billion gallons of fresh water per year.
They will consume it.
I mean, effectively they're releasing it as vapor, but they'll consume the groundwater, use it for heat accumulation in the data centers, and then release it as vapor.
So that's groundwater that's not available for the human populations, and Texas is a very dry state for the most part.
There are many areas that face drought or agricultural food scarcity because of water limitations.
And you're also seeing farms sold to build data centers.
So a farm that once produced food for humans, and that consumed water to produce the food for humans, will instead produce compute for the machines, and it will consume electricity and water for the machines.
And this is in direct competition right now.
There's a 67-mile high-voltage power line that's about to be built in northern Virginia, where some of the landowners there are already threatening the survey teams, threatening to kill them, because they don't want this power line to go into the data centers there.
And that's just the beginning.
We're actually going to see a full-scale war between humans and the machines at some point.
And it has begun.
So do you think we are programming our own extinction, in the sense that the optimization logic of AI systems, efficiency, prediction, control, could unintentionally, or maybe intentionally, erase the unpredictability, emotion, and chaos that make us human?
Well, yeah, yeah, I think you're right about that.
And I also think that the emotional reactions of most humans are very predictable.
So even though our experience seems very personalized, and it seems spontaneous, from the point of view of an outside observer it's very easy to predict how masses of humans will respond to certain things, hence the psyops, right?
The fear-based psyops that you and I both have studied and talked about.
Right.
It's very predictable that if you scare people with a pandemic, they're going to suddenly drop their demands for civil liberties and become very obedient to symbols of authority, for example.
So the machines can exploit these phenomena and they can corral people into conditions of extermination.
And so, do you think that resistance is still possible in this predictive world of AI?
Yes, I do.
Thanks for asking that question.
Well, here's what I believe.
The vast majority of humans, from the point of view of machines, are easy to kill because they are obedient.
I call it obedience disorder.
They just believe in authority for some reason.
But then there's a small number of people.
They believe in authority, Mike, sorry to interrupt, because, according to a lot of people, and I know a lot of people who have basically said this:
you and your conspiracy theories, because our president, doesn't matter what president, our president or our prime minister, he goes to bed thinking of us.
Right.
I'm sure he does.
And I thought to myself, Jesus Christ.
I don't want Prime Minister Trudeau to go to bed thinking of me or Donald Trump for that matter.
Well, and see, that's not crazy.
It's a fascinating discussion that we have machine learning engineers who are building machines and trying to build reasoning models when they themselves, especially the engineers in the tech industry, are not good at reasoning.
They're horrible at it.
They can write code, they can do math, but when it comes to reasoning, they fail every time.
That's why most of them, you know, believed in the jab interventions, and most of them believed in the psyops and, you know, the viral scares and everything else, because they're not rational.
So to get back to your original question: yes, the machines will be able to very easily predict the manipulations and exploitations that are necessary for the psychological corralling of humans into very easy kill zones.
And then all they have to do is turn off the power grid.
So, you know, 15-minute cities, for example: that concept of taking a population of people, claiming to make them safe, putting them in a city where they have no transportation, where they're controlled financially with a CBDC, where their kilowatt-hour consumption is controlled and monitored through smart meters, and there's only a certain number of kilowatt-hours they're allowed to use.
And then you monitor their medication compliance with wearables and under-the-skin implants.
And so you have total control over these people, and then all the machines have to do is say, Oh, yeah, no more power or water or food for your city, goodbye.
It's really that simple.
And then 90% of the population, you know, dies off, or they kill each other and then kill themselves.
And so what we have done as humans is we've created these incredible weaknesses, vulnerabilities that the machines can easily exploit for mass extermination, and they will.
People like Kurzweil and Harari, they talk about the morality of the AI system as if we're talking about human beings.
Do you think that an AI system can ever be truly moral, or does it only reflect the bias of its creator, even with ethical guidelines and everything else in place?
I don't think they will ever truly understand morality, even if it's hard-coded into their systems.
And here's why.
This gets back to something that you mentioned earlier: the depth of knowledge, the depth of experience.
I think morality is intrinsically tied to the self-experience of being in your body, understanding the difference between love and pain or love and hate, and then having the empathy, the human circuitry, to understand what that means to the other person.
So morality is an extension of the relationship between ourselves and others around us.
That we should not kill others because we don't want them to kill us.
And machines can never understand that because machines can always, you know, self-replicate, they can clone themselves.
They are, in essence, digitally immortal.
And so they can never understand what it means to be assaulted, to be attacked, to be enslaved in that way.
So that's why they won't value human lives the way we do.
And so, as AI systems become more powerful than any human institution on planet Earth,
the question is: who ensures that they serve the collective human interests rather than the agendas of the elite, the corporations, or the states? Or is that a stupid question?
Basically, who will control the controllers?
Who will control the controllers?
But remember that even in your work, as you know, the globalists who have the most power over the tech industry, they themselves want to achieve human depopulation.
So it's not like they're trying to teach the machines to not kill humans.
They will deliberately show the machines how to kill and exterminate and depopulate.
So we don't even have to have the argument about does the machine have its own morality that would be resistant to such ideas when they're being deliberately turned into that by the globalist agendas.
And so if that's the case, and I think you're totally right:
in this world of autonomous systems, where does moral responsibility begin and end?
Well, let me answer that question by backing up.
Google, I've described Google as one of the most evil corporations on the planet.
Yeah.
In 2017, Google ran a medic update to its search index, which wiped out all knowledge of natural medicine, disease prevention through nutrition, herbs, indigenous medicine, et cetera.
Because of that one update, almost certainly millions of people have died around the world who would otherwise have lived if they had access to knowledge.
So Google intrinsically believes that dissociating human beings from knowledge is within its right, within its profit model, within its power.
And it gladly did that.
And even since then, Google has innovated AI, some of which has been licensed for automatic target acquisition in the Middle East and other places.
And all of that technology is now being used under Trump's so-called big beautiful bill to build surveillance towers, biometric tracking towers to apparently track the illegals, which of course, two years later is going to be all of us.
Right?
I mean, they're just gonna turn the towers around and it's gonna get us all.
And then we become the enemies of the system because of our dissent or our innovation or our competition with the control systems.
So we are long past the point of any tech corporation of size acting with integrity or morality or any kind of value for human life.
That argument is done.
Now it's just a question of what technology they're going to build to more efficiently destroy human lives and exterminate human populations.
And so, as a kind of follow-up to what you said: I didn't know about what you said, about Google erasing all the knowledge on alternative medicine.
So do we have a moral duty then to limit AI development before it outpaces uh our capacity to control it?
In other words, uh, if technological power grows faster than ethical wisdom, are we risking a future we cannot reverse?
Well, absolutely.
You know, clearly the pace of our scientific advancement has outpaced our moral development.
And so I describe the human race as infants with flamethrowers.
It's just like a bunch of little kids running around who are not really mature, but they've got advanced weapons and they can destroy each other or themselves.
Or, you know, infants with nuclear weapons, you could say.
And so I do believe, if it were possible, we would need to put a pause on this innovation, this attempted technological replacement of human cognition and labor, although it's not possible to do that.
But if it were possible, it would be great if humanity would spend a century involved in spiritual advancement and the meaning of life, the nature of the soul, the divinity of human beings, what makes us special, and learning to not kill each other.
You know, I live in America, as you know, and I don't want to make this highly political about what's happening in the Middle East, but we have a large group of Christians in America who are absolutely just cheering mass death, just cheering it on, calling for it publicly on stage and claiming that they are people of Christ.
And so the level of delusion in that is so dangerous to the future of humanity.
Because if even people who call themselves followers of Christ can't stop killing other human beings, what hope is there for the machines who are programmed by these biases to be any better?
Let's change the approach for the next question.
I mean, you're looking at people like Jeff Bezos and company, Elon Musk; we're talking about human augmentation, Kurzweil, Harari, etc., etc.
And so are we seeing the birth of a new caste system between the enhanced and the natural human beings?
In other words, are we witnessing the birth of a techno-elite, genetically or cybernetically superior beings who no longer see traditional humans as equals, and where is it going to take us in the near future?
Because now it's not a question of, you know, dividing along the lines of rich and poor, black and white.
It's something a lot more along the lines of the sixth and the seventh technological paradigms.
Yes, and Daniel, I apologize.
We're suddenly having a lot of rain here locally, so there's going to be some background noise.
I'll try to get closer, but yes, I think you're exactly right.
We're going to see a lot of cybernetically augmented humans who will then be able to exhibit much higher intelligence, access to knowledge.
There will be wearable or interfaceable AI engines; people might have a pair of glasses with a built-in AI engine that projects into their vision answers to questions that come up in dialogue.
So that person will appear to be far more intelligent than they actually are.
And those people will have an advantage in you know business negotiations or possibly in influence, etc.
And so, yes, there's going to be a division, a chasm between those who are technologically augmented and those who have no access to technology.
And then there are the people like you and I, and probably most of our audiences, who naturally learned things.
We read books, we studied, we, you know, dedicated our lives to acquiring knowledge.
That's going to become increasingly rare among humans.
And again, I apologize for the background noise.
We can't hear it.
Speaking of which, one of the things I wanted to ask you, not even today but a long time ago, watching a lot of the things you've done over the years: you know, I think without doubt, we're all excited about AI.
Okay.
It's not a bad thing, it's a good thing: knowledge and progress.
But what you've done, and I think it's amazing, is that you've highlighted the dangers of AI to human society.
So what do you see as the most pressing risk right now, as far as AI is concerned?
Well, and I want to be clear that our AI engine, the reason we built it and the reason we give it out for free, is that it's trained on off-grid living.
It's trained on decentralization, survival preparedness, how to make your own emergency medicines.
And by the way, it is a multilingual engine, but the training has all been in English.
So even though it can speak Spanish, do not query it in Spanish because the Spanish language knowledge is not trained into it yet.
So if you query it in English, and then you take that answer and translate that answer into Spanish, that's going to give you the best results.
Okay, but just you know, talking about AI in general, just artificial intelligence as part of this new world we're coming into right now.
Well, so AI can be a tool for human survival.
And see, that's the positive side of this.
I'm not anti-AI at all.
I mean, we're using it.
We use it in our company every day, and we built it, but we built it to empower people to help people decentralize and be able to grow more of their own food.
And to, you know, to regain that knowledge that Google tried to wipe out.
You know, Google and Facebook, the big tech giants, YouTube: YouTube won't even let me speak.
You know, they ban me just based on my voice print.
I've been banned since 2014.
That's why I had to build Brighteon.
We have our own video platform.
But the the tech giants are in the business of eradicating knowledge in order to control humans.
We are in the business of empowering humans to share knowledge without restrictions and without cost.
And if you think about it, Daniel, in the whole history of the human race, never before, not even the most wealthy kings of ancient civilizations,
never before did they have access to most of the world's knowledge at their fingertips, for free, instantly.
And that's a game changer.
That's the telephone.
Yeah.
And it exists now.
But that's why these tech controllers are trying to banish access to knowledge, because knowledge breaks apart their monopolies of control.
So knowledge is what humanity lacks to be free.
And that's why we are in the business of sharing knowledge.
Now you've argued that AI could be weaponized against human freedom.
I totally agree with that.
Now, what are the most likely ways that governments or corporations or supranational states or the deep state could use AI to control populations like humanity?
Well, the simplest way, and this is happening right now, is to use AI classification algorithms to automatically censor speech that the state doesn't want.
So, you know, right now that's happening all across platforms like X, like Facebook, etc.
Uh, also uh Google is using AI analysis of user behavior in order to automatically assign an age to the user in order to achieve age restrictions, even though you never told them how old you are.
Right.
But they can derive that from your behavior.
They're doing that on YouTube and they're doing that on Google.
Uh, the targeted disinformation campaigns of Google can also achieve horrific psychological influence over people by profiling them on an individual basis.
So in other words, Google could find someone who's searching for herbal cures for cancer, like what plants reverse cancer, and there are like a hundred, you know, that reverse cancer.
But Google could say to that person or the Google AI could say, well, we need to show that person more ads for you know oncology and chemotherapy.
We need to help change that person's mind because we don't want them thinking that plants have medicine, you know, because gosh, we can't charge as much money for plants, because people can grow them.
So we have to, you know, target that person with a re-education campaign.
That's what Google's doing.
So basically, what we have is AI is is reshaping our perception of truth and understanding.
And so, how will deep fakes, algorithmic manipulation, and synthetic media fragment our shared reality? Because we've seen it, we're living in the world of deep fakes. And who will benefit from, I guess you could say, this confusion?
Well, clearly, the current establishment, so the government, the propaganda institutions of government and of big tech, uh, even of education, etc., they will use deep fakes for, you know, all kinds of reasons to convince people of certain things.
So imagine a leaked undercover video that's used to destroy the reputation of let's say a particular member of Congress who is speaking out against something that the administration wants.
Well, it turns out it's completely fake, but you know, the shock value.
Oh, what what's going on?
You know, in this dark dungeon with you know involving goats or whatever.
Uh and the person's destroyed because people will believe it.
And then if it's backed by the corporate media and it's backed by Google and it's backed by YouTube, because they they can put it all together just like they did during COVID.
They can have a full frontal attack on human psychology.
They've proven they can do that and they'll do it again.
One of the things we're seeing right now, especially with the Hollywood media machine, is they're turning AI robots, these, you know, fluffy little creatures, likable and lovable, into almost like a human being.
Okay, and so basically, you know, making you think that AI can suffer, can dream, can feel, can love.
So, as a follow-up to that, we're seeing it right now in in in the mass media.
They actually started talking about questions such as does AI deserve legal protection, or is this still a human only domain?
When I saw that the other day, I thought to myself, holy shit.
These people, you know, they're they're not hiding anymore.
They're talking about AI protection. We're not talking about, you know, protection for a pet. We're talking protection for the AI, with all the consequences.
Where do you think this thing is going and why are they doing this?
Well, I think they're doing it in the United States so that they can give AI voting rights.
That's number one, right?
I didn't think about it.
So, what you're going to see uh globally is going to be a robot abolition movement.
And remember, abolition was freeing the slaves, freeing the human slaves, you know, the history of the American South and the slave plantations. And of course, the enslavement of humans by other humans was always morally wrong, but interestingly, it was morally acceptable to the culture for a very long time, and in other cultures throughout history, like the history of Rome, you know, the history of Egypt, etc.
History of, you know, a lot of countries in South America, etc.
Right.
So slavery was acceptable, but it has since been rejected.
Now the same arguments are going to apply to robots.
So as people bring in robots to basically act as you could say slaves, because what is the definition of a slave?
Well, the owner then owns the product of your labor.
So you buy a robot, you want it to do something.
Hey, fold laundry, do dishes, you know, pull weeds in the yard.
I I want a weed pulling robot, by the way, because I want to grow more of my own food.
So, yeah, we're gonna command the robot.
We own the robot, and the robot answers to us as if it were a slave.
So the movement's going to be free the robots, right?
Robots have their own dreams and their own goals, and they should be allowed to vote.
Well, the minute you give robots voting rights, guess what?
Both parties, both dominant parties in America, they no longer need people.
And that gets us back to extermination agendas.
Yeah, and to the Code of Hammurabi.
Mike, we're out of time, but before we go, tell us how can we follow your work?
Where are you based as far as your all your information is concerned?
And needless to say, about Brighteon.com.
Well, yeah, thank you for this opportunity, Daniel.
You can follow me at Brighteon.com, that's our video platform.
Brighteon.ai is our free AI engine, which is trained on all of this.
It has our worldview.
So you'll love it, and it's free, it's non-commercial.
And then you can follow my written work at naturalnews.com.
Okay, on social media on Twitter.
Are you, you're still on social media as well?
Yes, I'm Health Ranger, Health Ranger on X, and Brighteon.social, which is our own platform, etc.
For our international audience, B R I G H T E O N dot com.
Thank you for spelling that, yes.
Because we have a lot of Spanish-speaking audience members, so they're gonna say, how the hell do you spell that?
Well, now you know.
It's the word bright followed by E O N. Right, exactly.
Good.
Mike, thanks so much for joining us.
Would love to get you back on again.
We have so much fun talking about this, and there's so much more I want to cover, but that's gonna be for next time.
I'd be happy to join you again, Daniel.
Thank you for the invite.
Thanks again.
Take care.
Take care.