Oct. 17, 2025 - Health Ranger - Mike Adams
55:49
AI, Freedom & the Great Divide: Mike Adams x Matt Kim on Power, Privacy...

It will be the largest increase in the wealth gap society has ever seen, because the people who are using the tools right now to their benefit, to create, to do things, to amplify their work, they are lapping people.
And it's not, oh, I'm thinking, I think I want to get involved, or I'm going to start using ChatGPT, or I'm going to start using another AI and talk to it a little bit.
That's not enough.
You have to learn how to build agents.
You have to let it automate a lot of things that you're doing.
You have to have it do the work for you.
And you have to be more of a manager and a creator.
As an individual, you need to be able to leverage AI for your workflow and you need to use it to 10x your output.
Welcome to today's interview here on Brighteon.com.
I'm Mike Adams, the founder of Brighteon.
And, you know, one of our fan favorite guests on Decentralize TV, another show that I also co-host, is Matt Kim.
And he joins us today.
Matt Kim, his channel on X is Free Matt Kim, just like it sounds.
There it is.
And Matt Kim has so many interesting projects going on.
And he's a pro-Liberty, pro-technology guy who's got great analysis videos all over.
So welcome to the show, Matt Kim.
It's great to have you back.
Thank you, Mike.
Thank you for having me on.
You know, I didn't know as much about you before we did the previous show.
Yeah.
And I can't stop consuming the things that you put out.
I'm like, I love this guy so much.
So much.
I've become like the biggest Mike Adams fan out there.
And every time I see something on X that pops up, I'm like, oh, yeah, I got to read this because I know it's going to be interesting.
Wow, that's high praise coming from you because, you know, the feeling is mutual.
I've watched your analysis videos for, you know, a couple of years and you always have these hard-hitting analysis points and you have the courage to say what needs to be said.
You are not afraid to go against the grain, which is so critical these days.
It's one of the reasons why our fans love you as well.
So, you know, hey, the feeling is mutual.
Thanks for coming back.
We'd love to talk to you anytime you've got something to mention.
So where do we begin today?
What's on your mind?
I mean, you tell me, where are we in this world and specifically in this country?
Because it feels like I'm living in a different timeline.
I know, totally.
So, okay, then all of a sudden.
I didn't see this coming.
And I'm very critical or I'm very skeptical about kind of everything that happens in the world, especially in this country.
I didn't see this coming.
Well, my sense, and perhaps this is because I've been working so diligently on our AI projects.
And, you know, we just released our free AI model that just blows away Grok, just blows away ChatGPT.
It's stunning.
And it's free.
And we built it for only $2 million, which is just unheard of, right?
But the thing that got me the most about this is, number one, we launched the website, the entire website and the structure that runs it.
I'm the only human involved.
All of my engineers are now AI, all of them.
I run AI teams.
And I mean, I have a human engineering team.
Don't get me wrong.
They keep Brighteon.com up and running and the website and the content management systems for Natural News, et cetera.
But for the AI projects, I only use AI engineers.
And I have two layers.
I have architects and I have engineers.
And then there's me.
I'm the only human.
So, Matt, what does this mean for our world? Even I did this as an experiment.
I'm essentially not needing human engineers for many projects any longer.
And, like, one of my network admins contacted me late last night.
He's overseas.
He's like, oh, we need a fix on this thing.
Like this URL has to return a JSON file.
And I'm like, okay, it's fixed in 10 minutes instead of five days.
You know?
I think there are two sides to this.
One is what can we do about AI as individuals?
And as an individual, you need to be able to leverage AI for your workflow and you need to use it to 10X your output.
Yes.
Because there used to be a time where the guy who works hard works twice as hard as the guy who doesn't work very hard.
And the difference between the guy who works hard and the guy who doesn't was 2x.
Now we're in a world where the guy who works hard uses AI to leverage his work, and now he can produce 10x the output.
So he's working twice as hard with 10x the output.
So he's putting out 20 times what the average person does.
It will be the largest increase in the wealth gap society has ever seen, because the people who are using the tools right now to their benefit, to create, to do things, to amplify their work, they are lapping people.
And it's not, oh, I think I'm going to get involved, or I'm going to start using ChatGPT, or I'm going to start using another AI and talk to it a little bit.
That's not enough.
You have to learn how to build agents.
You have to let it automate a lot of things that you're doing.
You have to have it do the work for you.
And you have to be more of a manager and creator.
So I think that's where we are as far as an individual.
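To make the "build agents" idea concrete, here is a minimal sketch of an agent loop in Python: the model proposes an action, the script runs it and feeds the result back until the task is done. The call_model() helper, the tool names, and the JSON action format are illustrative placeholders, not any particular framework's API.

```python
# Minimal agent-loop sketch: the model picks an action, the script executes it,
# and the result is fed back into the next prompt. call_model() is a placeholder
# for whatever LLM you use; the tools and JSON format are illustrative only.
import json

def call_model(prompt: str) -> str:
    """Placeholder: wire this to your own model or API of choice."""
    raise NotImplementedError

TOOLS = {
    "read_file": lambda path: open(path, encoding="utf-8").read(),
    "write_file": lambda path, text: open(path, "w", encoding="utf-8").write(text),
}

def run_agent(task: str, max_steps: int = 10) -> str:
    history = f"Task: {task}\n"
    for _ in range(max_steps):
        reply = call_model(
            history
            + '\nRespond with JSON: {"action": "read_file|write_file|finish", '
            + '"args": {...}, "answer": "..."}'
        )
        step = json.loads(reply)
        if step["action"] == "finish":
            return step.get("answer", "")
        result = TOOLS[step["action"]](**step["args"])   # run the requested tool
        history += f"\n{step['action']} -> {result}"      # feed the result back
    return "stopped after max_steps"
```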
So, Matt, let me demo that real quick.
So as you're talking, I just typed in a prompt into our Enoch engine.
And here it is.
Expand this into a full article.
What skills do I need to master to compete in the world of AI dominance and artificial intelligence?
Okay, I'm going to click submit.
So from one sentence, it now writes an article.
So if I'm, you know, if I'm a journalist or a podcaster or whatever, I can take this article, boom, it's got everything, the data literacy, et cetera.
I can take this as a template, you know, jazz it up, fact-check it, whatever.
Emotional intelligence here.
So just like you said, 10X.
I just saved an hour of time in 60 seconds.
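For anyone who wants to script that same one-sentence-in, article-out step instead of using a web form, here is a rough sketch against a locally hosted, OpenAI-compatible endpoint (llama.cpp's server, Ollama, and similar tools expose this shape). The URL and model name are placeholders.

```python
# Rough sketch of the "one sentence in, full article out" step, assuming a
# locally hosted, OpenAI-compatible endpoint. URL and model name are placeholders.
import requests

prompt = ("Expand this into a full article: what skills do I need to master "
          "to compete in the world of AI dominance and artificial intelligence?")

resp = requests.post(
    "http://localhost:8080/v1/chat/completions",   # placeholder local endpoint
    json={
        "model": "local-model",                     # placeholder model name
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 1200,
    },
    timeout=300,
)
article = resp.json()["choices"][0]["message"]["content"]
print(article)   # a draft to jazz up and fact-check, as described above
```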
Yes.
Yeah.
You have to be able to use it to leverage your ability and what you're competent at and fill in the gaps of what you're not competent at.
Because again, it used to be that the guy who works twice as hard makes the money.
The guy who works twice as hard is able to be more successful than his peers.
Now you have to do 20x what one individual is doing.
Right.
So I think people have to understand the hard truth because there's a lot of people in society that think, okay, AI is going to come in and they're going to replace us.
Or because of AI, it means I can do less because it's going to do it for me.
Well, no, because the people who are working hard are still going to work hard.
Yes.
They're going to use it to put out that much more output.
This is not the time to kind of relax.
The window for this opportunity to maybe change your social and economic status is very short, actually.
So I think people, especially men, need to be working harder now than ever before.
You are absolutely right.
I'm so glad you pointed that out because since, frankly, since Claude Code came out, I have been working harder than ever before.
I've never put in more hours building things.
So AI didn't replace what guys like you and I do because we are the innovators, we're the creators.
AI just allowed us to actually work harder to create more stuff and at much lower cost and much higher efficiency.
And that's what our audience needs to understand.
AI is not going to take your job unless you fail to learn and fail to expand the possibilities of what you're capable of doing, right?
Yeah, I mean, even just 20 years ago, people didn't realize that being a software engineer or a coder, that would be such a big and lucrative market.
People were not as big in that space.
And if you look back, you're like, well, those jobs didn't actually exist 20 years ago.
But as the world changes, new jobs emerge.
So yes, certain jobs will disappear.
New opportunities will appear.
You have to be ready for that shift.
So I think that's the more practical, individual idea that people need to take home: you need to figure out how to leverage it.
Otherwise, you will get left behind.
It's like saying, 25 years ago, I'm not going to use the internet anymore.
It seems so asinine now, right?
Right.
Well, what do you think, Matt, will happen to the people who reject AI?
Because there's also, in conservative circles, there's a lot of Christians, not all of them, but many of them who think that all AI is the devil and therefore they're not going to touch it.
And as a result, they may find themselves obsolete in the marketplace very quickly.
What do you think is going to happen in society for those who reject learning this technology?
I think they need to understand that technology is really a tool.
And they kind of put this fear in our minds that we're going to have this artificial general intelligence.
It's going to think for us.
It's going to replace us.
But that's actually not how technology works.
There's no indicator that AI is thinking for itself or becoming human.
But they will say these types of things in order to justify the ridiculously high valuations that they have for the AI economy, AI stocks, AI businesses.
They can't justify it unless they say that they're about to break through to a new kind of demographic or new kind of breakthrough technology.
But the reality is it is an extremely efficient search model that's able to do a little bit of logic, a little bit of coding, a little bit of engineering, a little bit of thinking, but it's not actually thinking.
It's just collecting data in a way that it's been trained to do.
So it's a language model.
So we are not at this AGI level yet.
I actually don't think we're very close to that at all, regardless of what people tell you.
Well, okay, that's, yeah, let's have that discussion because there's, you know, you've probably heard of recursive reasoning.
And Samsung put out a science paper on a new model, which was only a 7 million parameter model, not 7 billion, 7 million, 1,000 times smaller than, let's say, Mistral 7B.
And using recursive reasoning, which is a very simple flow, you just throw tokens at it and it keeps feeding its own answers back into itself.
It just takes longer time.
But, you know, a 7 million parameter model is blazing fast anyway.
So using recursive reasoning, it was able to beat, you know, like ChatGPT, I think, 3.5 and a bunch of these other models with one ten thousandth the number of neurons in the digital space.
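A minimal sketch of the recursive-reasoning loop being described, where the model's own draft answer is fed back in for another pass. The generate() function is a stand-in for any small model call; this illustrates the general idea, not the Samsung paper's exact method.

```python
# Minimal sketch of recursive reasoning: keep feeding the model's own draft
# answer back in and ask it to improve it. generate() is a placeholder for any
# small model call; this is not the Samsung paper's exact architecture.
def generate(prompt: str) -> str:
    """Placeholder: call your model of choice here."""
    raise NotImplementedError

def recursive_answer(question: str, passes: int = 8) -> str:
    draft = generate(question)
    for _ in range(passes):                  # more passes = more tokens spent
        draft = generate(
            f"Question: {question}\n"
            f"Current draft answer: {draft}\n"
            "Critique the draft and write an improved answer."
        )
    return draft
```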
Seems to me that things like recursive reasoning or chain of thought reasoning are beginning to actually show thinking capabilities.
I agree.
It's not AGI.
I wouldn't call it that.
But there's reasoning.
Like, for example, I can take a bunch of Python code and I can paste it into an AI engine, even my own, and I can just say, tell me what this does in plain English.
And it figures it out and explains it to me in English.
Like, that's clearly way beyond word prediction statistical analysis.
This is something else.
That's like structuring knowledge somehow.
What do you think?
Yeah, it is calculating faster.
It is processing information quicker, but it's not doing the things that humans can do.
Humans are able to invent.
Humans are able to have a level of humanity to certain problems that an AI cannot do.
I think when you are taking code to explain code, it makes sense.
But when you're asking it to explain human-to-human behavior, it's going to struggle because it's going to go to the most logical solution.
But as we know, humans are actually very illogical and irrational.
So I think there's a little bit of gap there.
And I think that's really what I mean when we're talking about can it replace humans or will it be kind of this artificial being that a lot of the Christian and conservative side fear.
Yeah, well, exactly.
And at the same time, this is happening, there's clearly also the weaponization of AI technology by governments to spy on their own people.
You've got Palantir in the U.S. and you've got California passing a law to have the California Ministry of Truth Technology Division approve your AI model or you can't release it.
And I just gave them the finger on that.
Forget it.
We're going to release it anyway.
Screw you.
But talk about AI weaponization being used to enslave humanity rather than empower.
Yeah.
Well, I think most of the big systems that we have are really control mechanisms, right?
And if you are a ruler of the world and you are in charge of a large subset of society, you actually want to make your job easier and you don't want a lot of problems.
In order to do that, you have to build various control mechanisms in order to control the population, in order to make it give predictable outcomes.
And I think AI is just one of them of the many.
The other would be social media.
The other would be what we eat, what we view, what we see, what we talk about.
It's all designed in order to control and mold our way and view of thinking.
It's similar really to the monetary system also, isn't it?
Like money actually isn't that real.
If you can print money anytime you want, unlimited amounts of it, then I don't know if that's a real thing.
The whole thing is designed to keep you down.
The whole system is designed to make you more controllable.
I think the problem with AI is that we are told that it's going to overtake our lives.
And by doing that, it's taking away the power for us to control it.
It's giving people a level of fear that's saying, hey, this is out of control.
I need to comply with it, not realizing that they actually have ultimate power because what they can do, you can do also, at least for now.
I read this book by Eric Schmidt recently.
And Eric Schmidt, Google, whatever, not good guy, fine.
But he talked about this idea of how AI will have two new classes of people.
There will be the ruling class, which will be the people who decide what you see in your AI.
So who decide the algorithm, who decide what's important, who decide how it works and what you see.
And then there will be those that just are controlled by the systems and rules and rails that those people have decided.
So this is a new classification of people.
Call them the creators, and then the controlled, the consumers.
And I thought that was a really interesting way to kind of frame that, you know?
Well, we're already seeing that because we see that the CIA now controls the narratives of ChatGPT.
We see that big pharma is heavily influencing it.
We see that also Netanyahu and the ADL are pushing all AI to eliminate any criticism of Israel for whatever reason.
I can't imagine why.
This is one of the reasons why we launched our standalone model.
Let me just show people.
If you go to brightu.ai, and this is free, you click on downloads right here, and right here, you download this GGUF file.
Okay, you can run it locally on your own computer.
All you need is a cheap graphics card.
Nobody can censor it.
I mean, this is the total democratization of the entirety of human knowledge condensed into a 12 billion parameter model that's been memory wiped of all the pro pharma bias, by the way.
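One common way to run a downloaded GGUF file locally is through the llama-cpp-python bindings, sketched below. The model file name is a placeholder for whatever file you downloaded, and n_gpu_layers=-1 offloads all layers to a GPU if one is present, otherwise it runs on the CPU.

```python
# Running a downloaded GGUF file locally with llama-cpp-python
# (pip install llama-cpp-python). The model path is a placeholder.
from llama_cpp import Llama

llm = Llama(
    model_path="./downloaded-model.Q4_K_M.gguf",  # placeholder file name
    n_ctx=4096,                                   # context window
    n_gpu_layers=-1,                              # offload all layers to a GPU if available
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize the benefits of running an AI model offline."}]
)
print(out["choices"][0]["message"]["content"])
```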
And Matt, this is the ultimate decentralization.
I would imagine governments are going to try to outlaw standalone models because they want to be able to control everything in the cloud where they can change and reshape the narrative day by day, like the Ministry of Truth to match the current administration's whatever lies and propaganda they're pushing out.
That seems to be the way they're going, right?
Yeah, I think people need to learn how to run their own language models or their own AI models locally.
I think that's really important.
I think there will be a technological shift as far as hardware to create machines specifically because it is very intensive.
Right now, you have, what do you say, 7 billion parameters?
Our model's a 12 billion parameter.
12 billion parameters.
But eventually you're going to want to expand to 32.
You're going to want to expand to 64.
Because that's kind of the greed in people that I want to have a more comprehensive model.
And you're going to need ways to be able to process that.
It's not easy on small machines to do it.
I mean, I run a MacBook Pro with 64 gigs of RAM.
And it's not easy to run models much more complicated than that.
It's very slow.
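Some rough back-of-the-envelope math on why bigger parameter counts get hard on a laptop, counting weights only and ignoring context and KV-cache overhead, which add more on top.

```python
# Rough memory math for local models (weights only; context/KV-cache adds more).
def weight_gb(params_billion: float, bytes_per_param: float) -> float:
    return params_billion * bytes_per_param   # billions of params * bytes each = GB

for size in (7, 12, 32, 64):
    fp16 = weight_gb(size, 2.0)   # 16-bit weights: 2 bytes per parameter
    q4 = weight_gb(size, 0.5)     # ~4-bit quantized: about 0.5 bytes per parameter
    print(f"{size}B model: ~{fp16:.0f} GB at fp16, ~{q4:.0f} GB at 4-bit")

# A 64 GB machine fits a quantized 32B model comfortably, but a 64B model at
# fp16 (~128 GB of weights) will not fit, which is roughly why larger models
# feel slow or impossible on laptop hardware.
```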
So I think there is going to be a push for that.
And I think that's actually needed.
I think so too.
We need systems and hardware in place to help people run their own models locally, because relying on centralized models, that's bad news.
Well, right.
And the open source community, and I think you and I are both longtime supporters of open source code.
And that's why we support that whole philosophy.
That's why we put out our model for free.
And we built it on a base model.
In this case, it's the Mistral Nemo 12 billion parameter model that the Mistral company put out for free also.
So there's a very rich, abundant open source community of models at Hugging Face, millions of models to choose from.
And I think it's too late for the powers that be to put that back in the box.
It's like, how can they ban math?
How can they ban linear algebra?
How can they ban like gaming graphics cards?
It's too late.
I mean, that's always kind of my pause when they talk about the AI arms race.
Because they talk about, well, the United States, we have to build AI faster and bigger and get there first, because otherwise we're going to lose this arms race to China.
And that's kind of the narrative.
And that is what justifies the United States and the United States government spending so much money in order to grow these models and grow these data centers.
Yeah.
But what does that actually mean to lose?
Because what we're finding out quickly is that once someone develops it, the other country or other company, other people can just copy it and roll it out because that's where we are in technology.
So they just want first mover advantage as far as the economics, but there's no losing to the other side because if the other side builds it, this side also gets it.
Well, okay, that's interesting.
There's a book called If Anyone Builds It, Everyone Dies.
Have you heard of that book?
It's by a machine learning scientist.
Forgot his name.
But what I've heard a lot of experts argue is that if China were to develop, let's say, superintelligence first, within microseconds, they would basically go full Skynet and use it to dominate the world and hack into everybody else's infrastructure, shut down the world and say, you're all slaves to us now.
Because if they don't do that, this is the two scorpions in a jar game theory, right?
If they don't do that, then they would anticipate that if the U.S. gets to superintelligence first, that the U.S. would invoke exactly that scenario.
So aren't we at a race to total destruction if they manage to achieve really super intelligent systems that have IQ 1000, let's say?
Well, that's also assuming that we can get there.
True.
That it's actually possible that there is a level of AI aptitude that is that strong.
But there's nothing that says we're even close to that.
And my biggest reason to believe that we're not even close is a very non-technical one.
But it's very hard to decipher what is true, what is not.
Well, my logical but non-technical reasoning for saying that is that if we were close to some sort of superintelligence, then I don't believe the major players in AI
would be spending their time or efforts on things like marketing, on things like app development, on things like integrating with other services, on video, on how to make anime bots.
This is not something they would focus on because if they're close to that finish line or if they saw that finish line in sight, every single dollar and resource and man that they have would be put on the project to get them there first.
They wouldn't care about valuations.
They wouldn't care about stocks.
They wouldn't care about burning money.
They wouldn't care about video generators.
They wouldn't care about making deep fakes.
They wouldn't care about making AI porn, you know, anime like Grok is doing.
They wouldn't think about that stuff at all because whoever gets to super intelligence first wins.
The fact that they are spending on all these other auxiliary projects to me shows that they're actually not even, they're so far from it, they're not even thinking about it.
Like they're thinking about long-term revenue first.
You don't care about money if you're close to that, because for the first person to get there, money becomes obsolete at that point.
Right.
Exactly.
You essentially get to take over the entire world.
And the fact that that's not where their resources are going, that gives me a big pause.
And I know it's not a technical answer, but if you were close, would you care about anything else?
You're about to rule the world.
That's what these people care about.
And I also, it's funny because I hear machine learning scientists talk in sort of godlike utopian terms where they say things like AI is going to solve physics or it's going to solve cancer.
And they think that that means an AI machine is going to come up with a synthetic molecule that cures cancer.
I can already tell you that's never going to be true because if you keep eating pro-cancer foods, your body's going to keep making cancer.
And frankly, we already know the answers to cancer.
They're just suppressed.
Okay.
I mean, actually, it's all in the model that we just released.
You want to know there's like 50 cures for cancer that you can access for free.
So, you know, for every kind of cancer.
So that's already been solved.
We don't need super intelligence.
This is kind of getting back to your point.
What we need is freedom.
We need to stop the suppression and censorship of human knowledge that already solves the problems for humanity.
Does that make sense?
Yeah.
I think that the solution to kind of this dystopian future that they paint on us is that we need to go into the things that make us human, meaning compassion, empathy, the things that AI won't ever be able to replace.
AI will be able to do calculations faster.
AI will be able to do logic better.
It will have a better memory.
It'll be able to pull more information.
But it's like saying, you know, 25 years ago, we used to do mental math.
We don't do those things.
We don't do that anymore.
And then we went to a regular calculator.
Then we went to like a TI-83 and then TI-85.
And then all of a sudden it's doing calculus.
Right.
But it didn't change the fact on how we've solved problems.
We took those tools and we were able to create better and greater things.
We just have to make sure that the tools aren't controlling us and that we're controlling the tools.
Totally.
And I think that's really important for people.
Now, Matt, can I ask you maybe some personal questions about your history?
Because I've only come to know you this year from interviewing you.
And I don't know much about your history.
What is it in your life that made you such an advocate of human freedom and knowledge expansion?
I don't know if it was any one thing specifically, but I do know that COVID was really interesting for me, because at first I really wanted to believe kind of the scientists, the doctors, et cetera, because why would they lie to you?
I wanted to believe the government.
Why would they lie to you about something like that, that if you go outside and you cough, you're going to kill grandma? Like, why would you lie to someone about that?
How bad of a person, how evil of a person do you have to be to go out there and blatantly say that to people's face?
If you go outside and you cough, you're going to kill grandma, so you've got to stay home.
It must be serious, no?
It has to be.
There's no way the world is so upside down that institutions would blatantly lie to you and weaponize your grandma to do it.
And then you realize they were lying to you straight to your face.
Yep.
And you see the censorship.
You see how they mold the narrative.
You see how the whole entire era, the COVID era, was used to build control mechanisms around you to justify their surveillance mechanisms.
And as you see more and more and more of it, once you realize what's going on, you can't unsee it.
And I think that's really what it is.
I have a very young daughter, and I don't want her to live in a world, number one, of full control and surveillance.
But number two, I don't want her to live in pure compliance.
And if I want that life for my daughter, you know, kids don't learn by you telling them to live a certain way.
You actually have to live a certain way and then they mimic you.
So I feel like I have to, not necessarily that I want to, but if I'm thinking about future generations, I'm thinking about what life will be like for my daughter and her daughter and their daughter.
We have no choice, actually.
We have to question.
We have to push back and we have to retain some semblance of freedom.
Otherwise, like that's it.
Did you grow up in the United States?
You lived in the U.S. your whole life?
I was born here.
I was raised here.
And yeah, I've lived here.
I've moved around a lot within the United States, New York, New Jersey, Ohio.
I live in Georgia now.
So I've been through kind of a lot of it.
Okay.
So do you see Western culture, especially in the U.S., as moving in the direction of embracing liberty and freedom?
Or is it more sort of surrendering to the authoritarian systems that seem to be emerging?
We are moving very closely to the China model, which is that there is one government, one party, one authoritarian rule, and then they keep the population in the country safe by rolling out surveillance, by tracking people, by taking away certain civil liberties.
But it's okay because for the general population, it may be better.
And that's kind of the pitch.
I think that's the model we're moving into.
I think the biggest concern of that was kind of the whole TikTok thing.
You see in China, like, oh, they got the great firewall.
They don't get to see what's in the outside world.
They only know what's within China, what the party wants to tell them.
Well, we're creating the same exact model.
We're trying to create this firewall around us.
The U.S. will have the U.S.-only algorithm, and therefore we can control that.
You can't have information from the outside world.
They want to do the same thing.
In China, they use WeChat in order to do your communications, to do your payment system, to track every dollar that you spend.
If you cross a street and you jaywalk, the fine just comes out of your account. If you see something that is, you know, dissenting to the party, then they dock your credit.
And, you know, people think that in China the social credit score is just a number, but it's not.
It's your finance history.
It is your what you've said online.
It is your job history.
It is an agglomeration of all the things that you do in your life.
And they use that to justify, are you a good person or bad?
So basically, we're living in Black Mirror.
Is that what this government wants?
We're living in Black Mirror.
I mean, and it's just continuing to evolve.
I'm sorry to interrupt you there, but what's your reaction to the fact that in the UK and other EU countries, they are passing laws to literally monitor all private chats so they can, you know, for safety reasons, so they can catch people chatting about not liking Kier Stalin or whoever is in charge at the moment.
We're here from the government and we're here to help.
Yeah, right.
Isn't that what they always say?
They use that for your safety, and they use that as leverage, and they create more fear.
They make it sound like the world is falling around you.
They make you feel kind of panicked.
And then they come out with a solution and say, hey, look, for your safety, we're going to add more cameras to your street.
We're going to add more cameras to your front door.
We're going to add cameras into the cars.
And do you know what?
It's really hard for you to monitor it.
So we'll just monitor it for you.
And God forbid something happens to you.
So we'll just send someone there just in case before it happens.
And you feel good.
You feel safe.
You know, if you are in a luxury condo, a luxury condo means more surveillance, more cameras.
That's actually what a luxury condo means these days, right?
More smart features.
They know when you're coming in.
They know when you're coming out.
You know, in Ohio, I saw this bill that they're trying to pass where they said, if you're using too much electricity in the summertime, we will just change your temperature in your house for you.
And slowly, by one by one, they take away your rights.
They take away your civil liberties, but they exchange it for what they call safety.
I don't want the government keeping me safe any longer because every time they say they're keeping you safe, it's the opposite.
I want them to get out of my life and let me decide what's good for me.
You know, like I'm even against the whole force people to wear seatbelts thing, actually.
Yeah, well, I'm all for human liberty as well.
I mean, people should be allowed to choose whether they want to wear masks.
Or, you know, look, if you're dumb enough to not wear a seatbelt, that's your own suicide mission choice, you know, right?
There are situations where people make a conscious choice not to wear seatbelts.
For example, like one time when I was given a speech at a place with a high security profile, they assigned me a security guy.
He specifically did not wear a seatbelt so that he could move more quickly in an incident.
So there's a rational reason to not wear seatbelts sometimes, right?
But what do you think, Matt, of the fact that, you know, you mentioned electricity and how the state wants to go in and alter your thermostat settings to make sure you're not using too much electricity?
Well, on the Eastern interconnection that runs through 13 U.S. states, which includes Virginia and Tennessee, and also, by the way, Pennsylvania, of course, that power grid is just toast.
Like they've said, you can't add any more data centers.
It's already tapped out.
But that's also where all the secret CIA data centers are being built, you know, for the government and the DOD.
So they need more power, but they don't have it.
And some of that is the race to AI superintelligence.
And so they say we need this.
At what point, Matt, do we start being told, especially people on the eastern half of the U.S., well, hey, you can't have air conditioning, you can't have heating, you can't charge your electric car today because we need the power grid for the race to superintelligence.
We've got to divert your power to the data centers or we all might die.
I can see that being the message.
Yeah, it's scary, isn't it?
That they will want power and they want more money.
That's really the end goal, right?
They want more power.
They want more money.
And will they get there?
Will they not get there?
I don't even think they actually care.
They just need to keep on building.
They need to keep creating, and they have to keep their stocks up.
I think the biggest indicator of that is that, you know, when we look at the economy, it's like, well, the economy is doing great because the stock market is up.
And I'm looking around going, well, the economy is not that great.
People are having a hard time.
Things are really expensive.
They're not getting any cheaper.
But you keep on telling me it's okay because the economy is good.
Well, you can't let systems fall.
You can't let companies fail.
And if your companies are over-leveraged and overvalued, then you can't just say, oh, just kidding.
We have to recorrect.
So they have to keep pushing it forward.
I don't know what they're going to do with the energy problem because they keep on building all these AI data centers.
People's energy prices are going through the roof.
Yes.
And, you know, it's getting used up.
I mean, the reason that most people's energy bills are up is because of the data centers.
They're drawing so much energy and there is a finite amount of it.
Maybe that's why we're going after Venezuela.
I've thought about that a lot.
Yeah.
You know, why do we even care about Venezuela?
Why are we in this war?
Why are we pretending like we care about this war on drugs when obviously they don't care because they are the biggest drug dealers in the world?
But all of a sudden we have this war on drugs and we're blowing up all these boats in the water, and it's just like, why do we even care about Venezuela?
Well, because they have a lot of energy and we need a lot of energy, you know?
True.
I don't know what the solution is.
Well, what do you think about modular or small modular reactors, which are these small nuclear reactors that can produce, I think, up to 300 megawatts.
That's the larger one.
And they can, you know, you can swap out the fuel every few years, you know, the fuel rods using like a tractor trailer.
These are made by companies like Westinghouse and I think Raytheon has them.
I've heard a lot of talk about these small modular reactors being relatively quickly set up and installed on the power grid compared to the typical Westinghouse AP1000 nuclear power plants that take about 15 years to build and approve.
You know, 15 years from now, it's over.
I mean, if there's going to be super intelligence, you know, it's going to happen before 20, what would that be?
2040 or whatever.
Maybe, but what do you think?
What do you think about the modular reactors?
I don't know much about it.
Why don't you tell me a little more about it?
Like, what do they cost?
Where are they putting them?
I don't have as much context.
Maybe you can fill me in.
Well, I mean, right now, when the power grid needs energy, they typically will set up gas plants, right, to run natural gas with gas turbines, because that's the simplest, kind of quickest way.
Or they'll run big portable diesel generators, but that's very expensive.
The small modular reactors, there's no question they're coming online probably starting next year.
And again, they can go from 20 megawatts to 300 megawatts.
And the thing is, they don't require the long-term, you know, like the five-year plan and the same evacuation plans as a regular nuclear power plant.
You can put them almost like in neighborhoods because they're considered much safer.
And the main units can be brought in on the back of an 18-wheeler rig.
So, I mean, it's still a vision.
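Some quick arithmetic on the 20 to 300 megawatt figures mentioned above. The 1 gigawatt data-center campus and the 90 percent capacity factor are assumed round numbers for illustration, not figures from the conversation.

```python
# Quick arithmetic on the SMR sizes mentioned above. The 1 GW campus and the
# 90% capacity factor are assumed round numbers for illustration only.
import math

campus_mw = 1000          # assumed 1 GW data-center campus (illustrative)
small_smr_mw = 20         # smaller SMR size mentioned above
large_smr_mw = 300        # larger SMR size mentioned above

print(math.ceil(campus_mw / large_smr_mw), "of the 300 MW units")   # 4
print(math.ceil(campus_mw / small_smr_mw), "of the 20 MW units")    # 50

# One 300 MW unit running ~90% of the year produces roughly
# 300 MW * 8760 h * 0.9 ≈ 2.4 TWh per year.
print(round(300 * 8760 * 0.9 / 1e6, 2), "TWh per year from one 300 MW unit")
```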
Is there a lot of pushback for that?
I haven't even heard much of that.
Not that I've heard.
Look, here, you know, the whole ecological argument, you know, climate change, carbon dioxide, even anti-nuclear, that has been just completely silenced in the race to superintelligence.
Because like power is now the number one priority, especially with Trump and the White House, it's like, you know, just set up every power source that you can think of.
You know, because China produces more than twice the power of the United States, far more.
Over 10,000 terawatt hours annually.
That's what China produces.
And they're building more.
I mean, we can't even compete with China's energy.
And, you know, they're running that new pipeline from Russia from the Yamal fields all the way through Mongolia up into northern China.
50 million, I'm sorry, 50 billion cubic meters of gas per year is going to go through that pipeline.
Whoa.
That's energy, man.
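To put the two numbers quoted above on the same scale: 10,000 terawatt hours per year works out to roughly 1,100 gigawatts of average generation, and 50 billion cubic meters of gas carries on the order of 500 terawatt hours of thermal energy per year, assuming an approximate energy content of about 10.5 kilowatt hours per cubic meter.

```python
# Putting the two figures quoted above on the same scale. The ~10.5 kWh per
# cubic meter energy content of natural gas is an approximate, assumed figure.
HOURS_PER_YEAR = 8760

china_twh_per_year = 10_000                        # electricity figure quoted above
avg_power_gw = china_twh_per_year * 1000 / HOURS_PER_YEAR
print(f"~{avg_power_gw:.0f} GW of average generation")        # ≈ 1,142 GW

pipeline_bcm = 50                                   # billion cubic meters per year
kwh_per_m3 = 10.5                                   # assumed approximate energy content
pipeline_twh = pipeline_bcm * 1e9 * kwh_per_m3 / 1e9
print(f"~{pipeline_twh:.0f} TWh of gas energy per year")       # ≈ 525 TWh (thermal)
```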
Well, I got a separate question for you then.
Because there's so much put on this idea of you got to beat other countries and you got to remain kind of the world power, right?
Because otherwise you lose.
But what happens if the United States is no longer the top number one world power?
Is that actually the worst thing?
No, it's actually a good thing to live in a multipolar world where power is decentralized across nations that interact with each other based on trade and ideas rather than bombs and coercion.
That's my opinion.
Because so much effort is given to the United States and the military and to these data centers and AI and everything else in order to retain kind of the top of the food chain pyramid.
But in the process, regular Americans, regular citizens, like people are having a hard time.
It's not going to get better.
Oh, yeah.
What if we're like, let's just figure out how the United States can operate on its own and not worry about what the rest of the world is doing?
And if that means you fall to number two or, God forbid, number three, so be it.
Why is there such a greed to remain number one at all costs? Matt, I hate to say this because it's going to sound anti-American, but I'm not.
I love America.
I love Texas.
I wouldn't want to live anywhere else.
But since the Bretton Woods agreement and the petrodollar, the United States of America has been ripping off the entire world by printing money and exchanging it for goods and energy and minerals and et cetera.
And that scam, it's a scam to have the World Reserve Currency be able to print the currency and trade it for everything.
That scam is so lucrative that it has allowed Americans to live in a relatively wealthy manner compared to almost everybody else in the world.
And that scam is coming to an end.
And that's why they're using the Navy and the military to try to enforce all that and assassinations all over the world.
But you already know all that.
I mean, you talk about that.
If we went back and said, hey, we have 10 years less technology, but you had more time.
You can spend time with your family.
You don't have to worry about a crumbling debt around you.
I think I'd take that trade, actually.
Yeah.
You know, I don't know if I need another 10 years of technology first.
What if we had Facebook later than another country?
I think I'd be all right with that.
Why do we have to be the first, the ones whose Facebook or X or whatever everyone uses?
What's wrong with not getting there first?
And, you know, I didn't think like that before.
I was like, no, we have to get there first.
You have to be number one.
But I've slowly come around, maybe because I have a family or whatever else now.
Maybe it's not as important as it once was to us.
And I don't know.
Maybe that makes me like kind of anti-American.
I don't know if it does or not.
No, but I think you're getting at what it means for life here to be better.
The values that give meaning to life and human civilization, those values are never captured by the GDP.
They're never captured by energy or how many terawatt hours on the power grid.
It's just like you say, I mean, some of the most fun times I have is, you know, out in nature, like jogging in the sunshine, exercising in the woods, checking on my goats or my dogs, training with my dog.
You know, I mean, I don't have kids, but if I did, it would be spending time with family, right?
So you're right.
I mean, that's what should matter, but we all get caught up in this rat race that actually destroys the quality of life in the quest for something that's artificial, which is material wealth.
Yeah, I feel like there needs to be a pullback, and I see it amongst actually the youngest generation.
I've seen a trend amongst kind of Gen Z, Gen Y, or Gen Alpha, whatever we want to call them, of separating themselves from technology a little bit, of going back to wired headphones versus wireless.
I've seen the Gen Z go into parties where they lock their phones up at the bar in boxes and you're not allowed to have your phone at the party.
That's great.
Which is, I love that actually.
Yeah.
And, you know, we always talk about this pendulum swing.
And we always think that it's a political, you know, a right versus left pendulum.
But really, I think the pendulum swing is people that are so focused on technology.
And the youngest generation go, well, like all the old guys, they all only want to talk about technology these days.
I'm just going to talk about being outside.
And maybe that's a good thing.
Well, I completely agree with you.
And, you know, I live on a ranch.
I have backyard chickens.
And on any given weekend, I'm raking out the chicken coop or collecting eggs or whatever.
And I think that's important.
I take time every day off my desk, away from the AI agents to go out and experience the real world.
But not everybody, you know, does it that way?
And Matt, the thing that I, the vision that I have for living a free, decentralized life is to be as far from the city as possible, to be as off-grid as possible, but to be technology augmented where it makes the off-grid living more efficient.
So, for example, having your own local AI model like the one we just put out, that's great, because if the internet goes down, I can still ask and answer any question right on my own computer without an internet connection.
And also robotics.
I'd like to ask you about robotics because my vision of robots is that small humanoid robots will be able to help people live a more rural lifestyle because a rural lifestyle is very labor intensive or home gardening.
Imagine growing your own food with the help of a robot that can do a lot of the manual tasks that are laborious.
I actually think that can be valuable.
We're going to run into this weird part of society where we have robots and then we're going to be arguing, well, do you want a centrally controlled robot or a locally operated robot, right?
Yes, open source brains.
That's the kind of conversation we have with AI.
Yes.
Because do I want a government-run robot running around my house?
No, no, no, no.
I don't know if I do.
But then will they allow it?
Will they allow you to have your own locally operated robot?
Well, somebody will build it.
Like, of course they would, but somebody will make it.
They will.
Some people will make it.
And then are we censoring the people?
Are we restricting it?
Are we putting regulations so that you can't?
Because they can justify, well, if you have a robot, it can be a weapon.
We can't just let people have these humanoids or, you know, these types of weapons in their house because they can be leveraged for other things.
They'll create a psyop or a false flag where someone's locally run robot goes out and creates some sort of mass havoc.
And they're like, oh, we told you that's why you can't have them.
Totally.
You can only use controlled.
You're like, oh, fuck.
We're back in this cycle all over again.
Well, you really see where this is going.
I'm glad you mentioned that.
But yeah, they might outlaw hacking your robot brains.
I'm going to do it anyway.
Yeah, I mean, I don't know because I have the right.
I mean, software is freedom of speech.
Software is protected by the First Amendment.
So maybe I buy a Tesla robot, let's say, and then sooner or later, somebody's going to have like a hack kit.
I plug this in, I run it, zap, you know, and now it's running my software instead, and it's not reporting back to, you know, the Tesla company all the time.
You're going to have robot hacks.
Just like, did you know right now, like if you buy a diesel truck, well, you can flash the computer on your truck or your car to bypass emissions or whatever.
And that black market exists right now.
I think the same thing is going to be for robots.
Yeah, but you know, people have this tendency.
And I don't know what it is, but I think we all kind of go through it.
It's like you can jailbreak your iPhone and you can flash it and you can have it run so that it's not tracking you as much, and you can do all these things.
But then they roll out a new update and you kind of want the new features of the update.
You're like, all right, I'm just going to plug back in because, you know, I heard I can get like transparent glass buttons.
And how could I not want those?
So it's actually not difficult to get people to come back into the system.
All you got to do is give the features to the people that comply.
And the people that don't comply eventually just, well, I want those features too because that looks pretty cool.
You know, I want dark mode.
And then all of a sudden, you just got them all back.
And I don't know what that is in us as humans, because it happens even to me.
And I talked about this really briefly, I think, online the other day.
I'm so anti-government monitoring and tracking.
And I don't want them restricting where I go.
But then I go to the airport and I'm like, oh, but that TSA pre-check is so nice.
It's so nice because I don't have to wait in this long line.
I can just go right through.
I don't have to take my computer out of my bag.
I travel a lot.
I'm flying a lot.
I can either spend my afternoon in this line, which is miserable, or I can do this TSA pre-check and go right through.
And I know they're watching me.
I know they're giving me permission to travel, although it's my decision if I want to go somewhere or not, but I still need to seek their permission to travel.
And I hate it, but I still do it because the convenience of TSA precheck is so good, especially in a very high-stress travel situation.
And that like really makes me mad about myself sometimes that I do it willingly.
Freedom is never convenient in a tyrannical society, right?
So think about even just using crypto or using privacy crypto or going through all kinds of tasks to try to create privacy out of your non-private Bitcoin, for example, right?
Or even using a VPN, you know, the company that you helped build, VP.net, which is, you know, a zero-knowledge VPN that's, I think, the best in the world, by the way.
It's a pain for a typical user, especially the older side of users, to install and use a VPN, even if it's just one button.
It's like it's another step.
And I've had lots of people ask me, should I use a VPN to log into this bank or whatever?
I'm like, you know, people don't understand what VPNs do, actually.
I've come to discover.
They think it like blocks everything and it doesn't.
It just changes your origin IP address.
But it doesn't delete your cookies on your browser and all the things you already have set.
You know, it doesn't delete your Google profile, you know.
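A tiny check that shows exactly what Mike is describing: the only thing the VPN changes is your apparent origin IP. The api.ipify.org echo service used here is just one public example; any equivalent works, and cookies and logged-in profiles are untouched either way.

```python
# A tiny check that shows what a VPN actually changes: your apparent origin IP.
# api.ipify.org is one public "what is my IP" echo service; any equivalent works.
# Run it once with the VPN off and once with it on.
import requests

def apparent_ip() -> str:
    return requests.get("https://api.ipify.org", timeout=10).text

print("Apparent public IP:", apparent_ip())

# With the VPN connected this prints the VPN server's address instead of yours.
# Note what it does NOT change: browser cookies, logged-in Google/X profiles,
# and anything else already tied to your identity stay exactly the same.
```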
Yeah, we talked about this last time.
With a VPN, if you're taking your online privacy seriously, you don't want your information all out in the world.
You don't want people to track you.
But the VPN doesn't solve all your problems by itself.
Really, what it is is one step in a system, a layered approach to keeping yourself private.
It's the base layer.
So you get your VPN and then you put other protocols or other processes on top of it in order to protect yourself online.
But the base layer is your VPN.
So everyone should be operating under a VPN.
Everyone should be using one because it's doing the minimal things required to at least mask you online, but it doesn't make you completely invisible.
There are a lot of things to do.
And if someone really wants to target you, they can.
It's not about targeted protection.
It's about blanket protection.
You just don't want to be a part of the mob.
Do you remember the movie with Gene Hackman and Will Smith called Enemy of the State?
I think that was 1997.
And that movie was like 20 years ahead of its time because everything that's depicted in that movie came true about government surveillance, satellite surveillance, turning off your bank accounts, et cetera.
But the Gene Hackman character, he was a former NSA guy in the movie.
That was his role.
He not only had this EMP-proof compound with a giant Faraday cage with all his computers, he had it rigged with explosives so that he could like pull a switch and blow the whole thing up and escape, which happened in the movie.
Now, that's probably going too far for privacy.
Like, you shouldn't have to rig your computer with explosives is my point, you know?
But somewhere in between the rig everything with explosives versus do nothing, there's probably a sweet spot of how you can protect your privacy with some efficient effort.
Does that make sense?
Yeah, I think for 90, 95% of the population, having a VPN will give you a level of protection and privacy that probably you need.
If you are an investigative journalist, maybe you're working on very high, high-level projects, if you're working on very sensitive technologies, you probably want something a little more robust.
But for the general population where there aren't people specifically looking for you, a VPN is going to be enough.
Again, it's not the save all from everything, but it is a base layer of your privacy.
And if you are just a normal American, normal human being living your life, get a VPN.
I recommend VP.net.
Yeah, I do too.
And turn it on.
And, you know, like, it's better than not having it.
You know?
Okay, I'm asking our AI engine, how does a VPN help protect my privacy online?
Okay, let's see what it says.
Encrypts internet traffic, IP masking.
Hold on, it's scrolling.
It's auto-scrolling.
Tunneling, tunneling protocols, no logs policy.
That's VP.net.
Some VPNs do log, preventing DNS leaks, protecting against IPv6 leaks.
Oh, that's interesting.
That's pretty rare, but that can happen.
So this is good information.
Yeah.
Yeah, I think the no-logs policy is interesting because that's really the problem that we went out to solve when we created VP.net.
Yeah.
Because no logs policy means that it's a policy decision, meaning that all your other VPNs have the ability to spy on you.
They have the ability to see what you're doing.
Not saying that they are logging it, but they can.
And the problem with humans is that if they're given the opportunity, or the government shows up with a government gun and says, tell me what so-and-so is doing on the internet.
Tell me what Mike Adams is doing on the internet.
Most humans will say, oh, yes, sir, here you go.
Because they're not going to go to bat for you as one individual.
It's not worth it, nor is it financially feasible for them to do.
We created VP.net so that it is impossible for us to log or see or correlate your traffic.
That's a huge difference, actually, meaning that we do not know what you're doing online.
It is impossible for us to connect you and the destination.
And that is the biggest differentiator between what VP.net is doing versus what every other VPN in the world, every single VPN in the world, what they are doing.
Every single VPN in the world knows what you are doing the moment you are doing it.
That's right.
That's right.
And I think, honestly, I think a lot of the VPNs are just like traps.
Yeah, honeypot traps.
There's no question about it.
Because, you know, let's say, by the way, we're almost out of time.
I can't believe it.
But let's say you're a U.S. senator who's got some creepy perv thing and you want to go, I don't know, on the dark web and search for like furry porn or whatever you're into.
The VPN knows that about you.
Like, because you signed up with your credit card, probably, because you didn't use, you know, privacy crypto to do it, which is silly.
But you signed up with a credit card with some VPN that's actually run by Mossad, and now they've got blackmail on you.
Now you're going to do whatever they say.
So that, I believe that happens every day.
So like, number one, don't search for furry porn.
You know, don't be a perv online.
If you're going to be a perv.
If you do a private browser, it doesn't mean you're safe.
Be a private perv.
Don't be an online perv.
Keep it, you know, keep it in your own weird bedroom, whatever.
Talk about privacy coins.
We just added Zano and Freedom Dollar to the payment methods on VP.net.
Oh, that's great.
If you want to use an anonymous token, those are the two that I personally like also.
I like Zano.
I recommend it.
Cool.
Okay.
So VP.net is a website, folks.
And yeah, I do encourage you to pay with Zano.
I think Zano is the most promising privacy crypto infrastructure.
I agree.
It's more than a coin.
It's a whole infrastructure.
I've been watching that project very closely.
It's pretty amazing.
They're really focused on not making it a project where, okay, we're going to try to increase our valuation, or that doesn't seem like their focus.
They're focused like how do we build an ecosystem where people can actually live their lives using private tokens.
Yeah, totally.
And I like that mission.
I'm a more mission-driven person.
I like the fact that they have a real focus rather than when moon, when Lambo, which is 99.9% of crypto projects out there.
When are we going to moon?
When are we going to Lambo?
How do I have 3x, 4x, 5x?
Great.
I mean, there's a lot of things you can make money on these days.
Yeah.
But to actually have purpose that provides value to society, I really like what they're doing.
Totally.
Right there with you.
Well, and that's what we believe.
That's what you believe.
Let me mention how people can follow you again on X. Here it is.
Free Matt Kim is the username there on X. And Matt, thank you for joining me today.
This has been a lot of fun.
We'll have to do it again soon.
Thank you, Mike.
Thank you for having me on.
And I appreciate everybody.
And we'll definitely get together soon.
You got to come on my show.
I'd love to come on your show.
Yeah.
Just reach out.
Oh, hey, after we finish recording, stay tuned.
Let me give you my direct contact info too.
Okay.
So just stand by.
All right.
Thanks for watching, everybody.
Mike Adams here with Brighteon.com.
And just another great conversation with Matt Kim.
So check out his work on X and also check out VP.net.
Thanks for watching today.
Take care.
Power up with Groovy Bee and Boku Organic Super Fuel Blend.
Nine superfoods in one clean plant-based mix.
Glyphosate tested, low in heavy metals.
Only at healthrangerstore.com.