All Episodes
Sept. 9, 2025 - Health Ranger - Mike Adams
01:45:13
DTV – Roman Yampolskiy on AI superintelligence, human extermination and simulation theory

Centralized, God, decarbonize, decentralized... All right, folks.
Welcome to the brand new episode of Decentralized TV here on Brighteon.com, the free speech video network.
And as always, I'm joined by my co-host, Todd Pittner.
Welcome, Todd.
Great to have you here today.
Welcome, Mike.
Uh, and I want to thank you because I totally took advantage of you this weekend.
And your company's Labor Day sale.
Oh, okay.
I'm really grateful.
And by the way, uh the the freeze-dried strawberries and the freeze-dried bananas came in.
And my wife made a comment that I've never heard her say.
She said, hmm, these are divorce worthy.
And I said, What do you think?
Divorce worthy.
If you eat all of these, we're going to court.
Well, if I could send you more strawberries to save your marriage, I'm happy to do that.
So just let me know if that's necessary.
Excellent.
Excellent sale.
Thank you.
Oh, that's great.
Well, thanks for supporting us.
Uh, we really appreciate that.
And um, you know, that's all organic, lab tested.
I know you love it.
In that format, the freeze-dried format, they'll last for many, many years.
Uh, but getting to today's show, you and I have a really amazing guest coming up, and I think our audience is gonna lose their minds today with joy, you know.
Yeah, we we we are definitely going to hit the control alt delete on your minds.
That's the problem.
Our guest today coming up is Mr. Roman Yampolskiy, and it's hard for me to remember his last name at the moment.
And he's a computer science expert who has written many papers and done literally decades of research on issues like AI safety: how do we stop the machines from killing us all?
And also, are we living in a simulation?
Right from a superior intelligence that built this world and somehow we're in it, and he is almost completely convinced that we are living in a simulation.
And today we're going to ask him why.
Like what why does he believe that to be the case?
Yeah, yeah.
So it's gonna get interesting.
And I just want to know how would we know and what does it really mean to be in a simulation?
Because when I wake up in the morning, you know, I still need to walk my dog, or they'll go to the bathroom in the house.
Is that all part of the simulation, Mike?
Well, well, yeah.
I mean, as he explains it, and I've watched many of his interviews, I found him very intriguing.
Of course, the simulation is persistent, and it is convincing because we are in it.
You know, we are living inside a biological haptic interface that is very convincing to our consciousness, which is basically, you know, steeped in the simulation.
So of course it feels real and looks real, and you know, all the emotions we experience them as real.
So you're gonna break my heart because that means I'm an actual NPC, Mike.
Well, no, but our consciousness is in the simulation.
Okay.
So you're not an NPC, but there are NPCs in the simulation who do not have souls.
So, see, the simulation theory actually explains why there are NPCs.
That is explained.
To populate the world with non-player characters.
Yeah, that supports my 98% number.
You know, my my dim view of humanity.
Yeah.
Yeah.
So the world is populated like a giant sim city, right?
Yeah.
There are a certain number of actual divine souls from a transcendent reality that agree to enter the simulation.
And here we are, and we're given, we're born into these bodies, right?
And yet, in order to fill out the rest of the world and make it look more full, there are NPCs.
Wow.
Yeah.
I mean, it tracks, it tracks with what we're thinking.
You know, I think only two percent of the people in this simulation have the ability to critically think.
And so I think that's right.
Yeah.
I'm totally into it.
You know?
Yeah, and we got to be careful because we want to cover both topics, AI and the simulation.
And I feel like I could do a whole show on each of those with our guests, you know.
Look, yes, as I did my research, as I'm wont to do, I've broken it up into like six different categories.
Each one could be a show in and of itself.
So I mean, this is gonna be a fascinating, fascinating interview.
So he's gonna pop in any minute, and then we'll we'll break shortly to kind of recompose the the panels here and everything.
Great.
He should show up any minute.
In the meantime, I want to mention: after the interview, we will have the after party.
And of course, the show is sponsored by a solution that you have brought to the world, which is about how to keep more of what you earn.
And then I've got a couple sponsors.
Uh, we're gonna get to that later, but did you notice uh what gold and silver have been doing lately?
In terms of kind of thumbs up, Mike.
Uh-huh.
Yeah.
Uh silver went over 41 dollars.
That's amazing.
Yeah.
Go silver.
Good for silver.
Go silver.
And uh gold hit like 3570 or something.
Uh so we've been doing this show how long, Todd?
Well over two years.
Over two years.
Okay.
So I think when we started doing this show, gold was around two thousand dollars an ounce.
1800.
Uh, 1800?
Yes.
So in other words, you're saying gold has almost doubled since we started doing the show.
Yep.
And I do think we should take credit for it.
Well, I mean, look, we told everybody that that's the life raft, you know, the lifeboat for your assets: gold and silver, and maybe privacy crypto on top of that.
Yep.
Yep.
And if you employ the strategy of keeping more of what you earn, you can afford to buy more gold and silver and privacy crypto.
Exactly.
And good food from you.
Yeah, right.
And also, in light of our guest that's about to join us, we're not saying that, you know, the way to win the simulation is to have the most money.
That's not what we're saying.
But as long as you're in the simulation, you don't want to be broke.
You don't want to lose your dollars.
You don't want to lose the purchasing power of everything that you've earned.
Because that's what's happening right now.
The dollar's collapsing in real time.
My gosh, Mike, over Labor Day weekend I got a bunch of steaks for my daughters.
You know, I have four daughters.
And I couldn't believe it.
I remember when my girls were younger, I could go and buy four.
I mean, for all four of us, six of us actually, with my wife, I could buy enough steaks for under 50 bucks to be able to feed us all, and they were really nice steaks.
Yeah.
And this last Saturday, I went into Sprouts, and I'm like, no freaking way.
They want almost 40 dollars for one piddly-ass steak.
I couldn't believe it.
I put it back, Mike.
I just it violated my every sensibility.
And it just hit me that this food inflation, man.
It's like if you don't strategically attempt to keep more of what you earn, then you're just gonna, you know, good luck with hot dogs every weekend.
Yeah, if that, or more like, you know, cricket nuggets.
Right.
With little legs coming out the sides, you know.
That's how you know you're getting a good deal on your food when the cricket legs dominate the patty.
That's right.
Um, so you didn't buy steaks.
What did you buy?
I bought chicken and shrimp, and I did buy two steaks as well.
They were a little bit different, but I made hibachi.
So, fried some rice.
They weren't like meat glue steaks, were they?
No, you didn't get meat glue.
Not meat glue.
No, I I uh but that grosses me out thinking about that, by the way.
Ever since that's been in the consciousness.
No, we have good steaks.
Just remember, Todd, these are all experiences in the simulation.
So the fake food...
I have a sneaking suspicion, because Roman is the expert we're gonna interview, and I think he knows something and has something going on, and I'm just gonna posit it here.
Okay.
I think he thinks the winner of the simulation is he who has the bestest beardeth.
But I also know from his interviews, he he wants to pursue immortality in the simulation.
Interesting.
He wants to live for a million years in the simulation.
Okay.
Yeah.
And I want to ask him about that.
Yeah.
Great.
Yeah.
Because I don't know that I want to live forever in the simulation.
I mean, have you thought about that, Todd?
You want to live forever?
I I haven't I haven't thought about that, but uh it's kind of interesting.
I don't know.
Well, let's wait, and we'll just ask him all about these things.
Uh-huh.
Yeah, we we we will do so.
Um, but it's pretty clear also the machines will exterminate the NPCs pretty soon.
Yeah, I think so.
So there's gonna be an NPC ethnic cleansing by the Skynet machines, basically, is what's coming.
We traffic in decentralized NPC-no-moreness.
Uh, "NPC-U later," I think, is what the machines are gonna say.
I can always count on you, Mike.
Yeah.
It's like, you know, this is a really important time to have your own mind, to have knowledge about what's happening, to be decentralized.
Because you realize when the machines start to go after people.
Oh, our guest is joining us here.
Welcome.
Mr. Yampolskiy.
My case is made.
Look at that beard.
I know we love your beard, sir.
Thank you so much.
Why don't you have a beard?
Uh because I never graduated from puberty, I guess.
I don't know.
Mine is embarrassing.
I actually did the computations.
From 16 to about 65, you'll waste six months of your life shaving.
Oh, add the costs of shaving supplies, plus everyone looks better with a beard.
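As a back-of-envelope check, the six-month shaving figure pencils out if you assume roughly 15 minutes a day (an assumed number, not one stated in the interview):

```python
# Sanity check of the "six months of your life shaving" claim.
# ASSUMPTION: ~15 minutes per day, ages 16 to 65 (not stated in the interview).
years = 65 - 16
minutes_per_day = 15

total_minutes = years * 365.25 * minutes_per_day
total_days = total_minutes / (60 * 24)   # convert to full 24-hour days
total_months = total_days / 30.44        # average month length in days

print(round(total_months, 1))  # roughly six months
```

At 15 minutes a day the total lands right around six months of continuous time, so the claim is at least plausible arithmetic.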
That's simply the best intro ever.
So welcome to the show.
Roman Yampolskiy.
Uh it's it's an honor to have you on.
Appreciate it.
Uh so this show is about decentralization.
We try to share with our audience strategies for how to live more off-grid, and we have a technology emphasis.
And your work is absolutely critical in this area.
You're the author of the book AI: Unexplainable, Unpredictable, Uncontrollable, and co-author of the book Considerations on the AI Endgame.
Let's go ahead and show it on the screen there.
Uh and you can search for our guest name.
It is Yampolskiy, Y-A-M-P-O-L-S-K-I-Y.
Don't forget the I before the Y, and you can find his books.
He's also on X at RomanYam, Y-A-M, which is obviously short for Yampolskiy.
So, Mr. Yampolskiy, can you please give our audience a little bit of your background in computer science and AI safety and, you know, the academia infrastructure, etc.
Yeah, I'm an associate professor of computer science and engineering.
I've been working on AI safety for about 15 years now.
Uh published a number of books on that topic and close to 300 papers now on different subtopics in that domain.
And in these papers, you arrive at some uh very uh I think to some people shocking conclusions, although Todd and I both tend to agree with your conclusions.
But can can you tell us about what are the things that you have said in your papers that are attracting the most attention or scrutiny?
Yeah, so I'm looking specifically at future systems we're likely to be able to develop in the next couple of years, and I study what are the limits of control for what we call AGI or superintelligence systems.
And it seems that our initial assumption, that given enough money and time we can figure out how to control superintelligence, is probably not true.
It's impossible.
Basically, given a sufficiently intelligent system, it will find a way to escape from any controls we place on it and essentially do what it wants, which means we are not deciding how the future is going to turn out.
So when we're told by companies like OpenAI that yes, we're going to build in guardrails and we're going to have a safety division, or we're told that by Google or Meta or whoever, um, it seems like what you're saying is that those efforts have already failed, and it's not possible for humans to outthink the coming superintelligence.
Is that fair?
Right.
So I think it's nice that they are trying.
It's certainly better than not trying, and I'm happy they have at least safety departments and they're working on it.
That's that's wonderful.
It would be much worse if they completely skipped that.
But uh it seems that their efforts will not scale beyond tool AI.
So the systems which are narrow AIs, those they can make much better, much safer than otherwise.
For AGI, we're starting to see them essentially fail: to guarantee quality, to guarantee that the system doesn't lie, to be able to explain and predict what they do.
And once we get to beyond human performance, I think they definitely will not succeed in that domain.
Now, let me share with you, and by the way, my co-host Todd has questions for you as well.
But let me share something briefly that's also important.
Uh, my background is also mathematics and computer science, and my company built our own AI on top of open source models with some, I think, really clever retraining techniques, and we've released that publicly.
And I also use AI to write code to do the data pipeline processing for the training material for AI.
So I'm very familiar with this.
And one of the things I've noticed that I'd like to ask you is I've noticed that there appears to be the natural emergent property of intelligence from sufficiently complex neural networks.
I think you've spoken to this about the brute force method of just adding compute to silicon neural networks, giving rise to intelligence in ways that even the engineers don't quite understand.
Uh, can you speak to that?
Yeah, so if you look at uh nature, just animals, as the brain size increases, it looks like they're getting smarter with humans being sort of at the top of that food chain.
There are exceptions, uh, bigger brains and dolphins and I think elephants, but they have lower density of neurons.
So the more neurons, the bigger the neural network, the more likely you are to have advanced intelligence.
And that theory, the scaling hypothesis, has been shown to be true so far.
The more training, the more compute is uh done, the more capable those systems become.
And and thank you for answering that.
And let me follow up specifically, it seems to me that there is an emerging natural intelligence out of the complexity, even without additional training.
Uh what do you think about that concept?
Because what I'm seeing is, yeah, go ahead.
I'm not sure I fully understood what you mean by that.
Can you explain a little?
That when you add more neurons to a system that has a base level of training, its exhibited intelligence increases even without additional training.
In other words, as they scale up the data centers and the GPUs and so on, even if they don't refine new training algorithms, the systems are going to get smarter beyond human intent.
If you just add additional nodes, additional neurons, they are not knowledgeable.
They have random weights, so they are kind of useless.
Only after you train them and integrate them with the rest of the network do they contribute to more problem-solving ability.
Okay, well, thank you for clarifying that.
Uh Todd, uh you've got an interesting question.
So go for it.
Yeah, so just for the record, for those of our viewers who've been with us forever: you know that I'm usually invited to these high-tech interviews to raise the intelligence bar.
So, you know, humble brag there.
Um, Roman, so I have two questions, and you can answer either.
But the first one is you know, I'm a magician, you know, I put myself through college as a magician.
So this question has to do with magic or an illusion, more appropriately.
Why is human control over super intelligence an illusion?
That's question one.
Question two is why do you assign a 99.9% probability to human extinction from AI?
Uh extinction.
Extinction, thank you.
Yes.
So let's start with a second question.
Uh, I think it's impossible to do this.
And when something is impossible, it's basically 100% not happening.
It's basically 100% not happening.
And I'm very humble, just like you.
So I'll say maybe I made a mistake.
Maybe I'm wrong.
Maybe we'll find a way.
So it's not 100, it's something close to it: 99% certainty or beyond.
So that's what I'm saying.
If someone suggested they can build a perpetual motion machine, I would say they have about zero chance of succeeding; no matter how much funding, how much time, how great the batteries are, how much effort they put into new wires, they're not going to create a perpetual motion machine.
In this case, we're trying to create a perpetual safety machine.
Something that can always guarantee safety no matter what the level of AI is, in terms of intelligence, in terms of interaction with other agents, malevolent agents, data sets.
You're always ahead of those systems.
You're doing better than a superintelligent entity.
That seems impossible to me.
So I'm saying basically you have zero chance of success.
Therefore, doom is that high.
I'm not sure what you mean by that illusion.
Is that what you mean by them telling us that they can get us there and creating this illusion of successful safety work?
That we can control it.
That's the illusion: that we can control superintelligence.
Yeah, we can't even control another human being at our level, right?
We develop lie detector tests, we develop morals, ethics, religion.
All those systems fail all the time, right?
We know that you cannot really trust fully employees or anyone else.
Okay, let me let me take the next question.
Thank you for answering that.
Uh, Roman, may I ask, is it okay if we call you by your first name?
Sure.
Okay.
That's my name.
Okay, thank you.
Um, before I even became aware of your work, I arrived at a very similar conclusion through a different vector, about the machines eventually exterminating humans.
And let me share that with you and get your reaction.
It occurred to me that there's obviously going to be very strong competition for resources that both humans and machines need in order to survive.
Obviously, electricity being the most uh obvious one, but there's also fresh water.
So uh fresh water is used to cool data centers.
And where I am in Texas, water is very scarce.
There's going to be 400 billion gallons of water used by data centers by the year 2030.
So that's water that cannot be used by human beings.
Uh also land.
So farmland that used to produce food for humans, obviously, machines don't need food.
They need the farmland to build data centers to expand their own, you know, compute bandwidth.
So those are three resources: water, electricity, and land.
So in my mind, it seemed inevitable that eventually the machines were going to say: we don't even want to kill the humans.
We just need their resources, you know.
Um, doesn't it seem inevitable to you that the competition for resources is going to marginalize humans sooner or later?
So uh at some point far in the future, yes, we would run out of space and water.
Right now there is no shortage of water.
I don't even know if it has to be desalinated for cooling purposes.
So in the end, yes, the machines may decide that they need the resources for something else, and they don't care if we are using them or if we rely on them to continue existing.
So that's definitely one of the concerns.
But uh really we're now discussing what would be the reason they decide to turn us off, and that would be one of them.
Well, what what's the most likely reason in your research that the machines would decide to turn off the human power grid or something like that?
What do you think?
That's the unpredictability part of it.
Like as a human-level intelligence, you cannot predict what a superintelligence would do.
You can consider some possibilities.
Maybe they don't want competition, maybe they are scared you're gonna turn it off, maybe you'll develop competing superintelligence.
Again, uh impossible to know for sure, but just know that we are not deciding what we're gonna do.
Okay, I I I appreciate that position.
Understanding that humans can never predict the strategies of superintelligence that can outthink us.
That makes perfect sense.
All right, Todd, next next question goes to you.
I love, Roman, the fact that you're just hammering the answers so quickly.
This is going to be great.
We normally can never get through our questions with our guests.
So thank you so much.
Yeah, I think so.
Okay, go ahead, Todd.
Sure.
So, Roman, you've argued that not only might AI simulations become unavoidable, but that we ourselves may already be living in one.
How does the need for an advanced AI to simulate reality lead you to conclude that it's more likely we live in a simulation than not?
Yeah, if it's a sufficiently complex system, its level of thinking is very detailed.
If a human thinks in pictures, or maybe in text, it can think in whole environments, virtual worlds.
And to have a good idea of what's going to happen, it can simulate all aspects of it, including conscious intelligent agents who are part of that simulation.
So if it needs to make a decision about sales of some product, well, it needs to simulate this planet with all the users and see okay, what type of rice cereal will sell the best.
As a side effect of creating this experiment for marketing purposes, it would have to create real-like agents who experience all the real states of this world, suffering included.
So explain to me what living in a simulation would be like, if we are.
Because I'm trying to... What do you mean?
Here it is.
It would not be any different from your point of view.
You wouldn't even know unless I told you.
Okay.
Okay.
So I just told you.
So who would be doing the programming?
Are you the creator of this?
I'm pretty sure I'm not doing it, since I don't remember doing it.
But of course, it's possible that I turned off my memory to enjoy the game a lot more.
Uh, whoever is doing it is a very capable engineer and scientist, an amazing artist.
We see the evidence of that everywhere, but also they seem to have some moral problems as well based on the same evidence.
I love the fact that you spoke in previous interviews about the origin story that's been told to various civilizations across history: of a creator in the sky who created your world, and after you die, you go to heaven or some similar type of transcendent place.
And how much that sounds exactly like a simulation where we're living in the simulation and then one day we leave the simulation.
Can you speak to some of the similarities between those stories and this? I mean, do you believe that ancient cultures were actually told the big truth and then created religions around it?
So I want to kind of invert this experiment.
So let's say we take a book like mine, which talks about all this technology in modern terms.
We talk about superintelligence, we talk about computer simulations, and then we explain it to someone who has no knowledge of modern technology, a primitive tribe.
So you tell them there is this very capable, very smart entity, and it's able to create worlds like ours and create beings in that world.
And then that being for some reason wants to test if you're gonna be doing this or that, some sort of ethical test.
Wait a couple generations, and what they have in their local language is essentially the religious stories we are passing around.
Now, how they got that information is above my pay grade, but it just seems like it's a perfect mapping.
All these concepts we see in theology are now being done in AI safety work just with technical terms.
We have boxing procedures; we have, instead of Ten Commandments, guidelines and rules for large language models to follow.
All sorts of retraining and repurposing is happening: we'll reset this model because it's not behaving well.
So again, just interesting similarities, of course.
They could be just that.
Do you give any credence to those who claim they've had near-death experiences, or NDEs?
Because one of the most common themes of their experience is that the next world, which they briefly visit and return from, feels far more real than this world.
Yeah.
I read a lot about those.
And what's interesting, they always kind of get visions of their own religion.
They never have visions from another religion.
So that makes kind of sense based on their previous training data.
There are good experiments with artificial neural networks where, if you start disconnecting parts of them, the network kind of creates an internal input, which stimulates it to produce those hallucinations.
So the network starts kind of producing outputs as if it's actually experiencing a world it's trained on.
Interesting.
Wow.
Okay.
Wow.
Well, that okay, that's gonna drive a lot of philosophers nuts trying to parse all that.
Um Todd, you want to take the next question?
I mean, I my my question stack keeps getting bigger, so jump in when you can.
Well, like I said, I'm raising the IQ bar with my question.
So, Roman, do you think that this master creator who may or may not be living in his mom's basement?
Do you think he eats Cheetos or Doritos?
No, that's not my question.
Sorry.
Uh, could this simulation itself be a test of whether we're wise enough to resist building uncontrollable AI or not?
Well, given all the developments, recent developments, we are very likely living in the most interesting of times.
Not just because I'm living in it, but because we are creating a lot of meta-technology, meta-inventions.
We are creating intelligence, we're learning to create worlds.
Never in the history of the world were we able to create inventors, agents.
This is all novel.
So it wouldn't be surprising if they were experimenting with specifically this time period to find out, let's say, how to create safe superintelligence, what other types of superintelligence can be created, what are the interesting worlds?
It could be a tool for testing, tools for creative exploration.
We cannot know from inside the simulation.
Okay, yeah, quick question.
So do you believe that we are living inside a simulation?
Is that your belief?
I think there is a lot of good reasons to think we are, yes.
Okay, and I heard somewhere that um you would love to live in the simulation forever.
Did we get that right?
So I don't want to die.
And so that means I don't want to get old, get sick, and I want to live as long as possible.
If there is an option to learn what is outside the simulation and outside is better than inside, I'm happy to continue living forever outside.
Got it.
Okay, all right.
That thanks for that clarification.
Now, uh I I've got a totally different question for you.
But I just want to preface it by saying that we we invited you here in in all seriousness.
And, you know, we don't consider this to be entertainment.
We're we're not asking you these questions because it's just interesting to the audience.
We are genuinely wanting to more deeply understand the nature of our reality as we experience it.
And I think that's important for our world.
I I think your work is actually really, really critical for humanity.
And yet, most people, and I know you've experienced this, Roman, most people dismiss this.
And I'm I want to ask you why that is.
What is it about the normalcy bias? People will listen to you or read your book, and then they'll say, ah, that was cool, and then they go back to their regular life, you know?
Why is that?
I don't think most people actually dismiss it, if you look at this as a type of scientific-religious mapping.
Most people are religious.
Most people believe in a superintelligent being, they believe it's a temporary world.
So the default for humans around the world is not atheism.
That's already a given.
Even within the scientific community and within the atheist community, a lot of people do take simulation theory as a possibility.
Most don't assign it a probability as high as I do, but 10 or 20% is very common for philosophers, for many Silicon Valley billionaires, for example.
So it's not as out of the norm as you think it is.
Okay, that that's good to know.
But the follow-up to that is about AI, for example: the idea that AI is very likely to destroy humanity.
That thought is much more easily dismissed by people. Why do you think that is?
Maybe a decade ago it was science fiction.
Now, Nobel Prize winners, founders of machine learning, Turing Award winners are all in agreement with me.
We had a letter signed by thousands of computer scientists, all saying this is more dangerous than nuclear weapons.
That's the modern state of the art thinking in this space.
But if they really believed it, then it seems like they would prioritize not destroying humanity, or not pursuing AI research.
But we still have the incentives being the financial returns rather than the preservation of the human race, it seems.
We really believe that we're all gonna die: aging is happening, yet we spend about zero of our national and individual budgets on fighting aging.
It's the same thing.
So are you saying that there's a suicide-cult type of attitude among humans anyway, and that's why they're pursuing machines that will kill us?
At the level of humanity, we are kind of doing things which are less likely to lead us to not going extinct, yes?
Okay. That's... I was not hoping to hear that today, Roman, but I thank you for being honest about it.
Um Todd, you want to jump in?
Because I'm trying to sort this out.
Like, I take Roman seriously.
But I find when I talk to people, Roman, about your work, they don't, at least the people that I've spoken with so far, granted it's a very small sample, they don't take it seriously.
They're like, yeah, whatever.
Um, but they don't even believe that AI will replace human jobs either.
Well... for social reasons, social reasons, you know?
Smarter friends.
Okay.
Let me write, let me make a note.
Roman Yampolskiy: get better friends.
I get it.
Okay.
Todd, go for it.
So uh if self-preservation is such a strong instinct for us, why isn't it stopping us from pursuing potentially suicidal AI development, Roman?
So that is uh kind of prisoner's dilemma situation.
Game theoretically, what is best for society and what is best for you personally, not always aligned.
If you think you can make a lot of money right now, and then the government will stop this development, you're already locked in as a very rich, successful person.
It is a better state for you than to stop right now and be a poor person in the same world.
And if you think a competitor will do it instead, and then everything's going to be fine anyways, it's in your interest to beat them to it.
So individually, they're all making rational decisions.
Globally, we're all getting screwed.
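The race dynamic Roman describes is the classic prisoner's dilemma; a toy payoff table in Python (the numbers are purely illustrative, not from his work) shows why racing is individually rational even though everyone prefers the mutual pause:

```python
# Payoffs (you, rival) for the AI-race dilemma: each lab chooses "race" or
# "pause". ILLUSTRATIVE NUMBERS ONLY, not taken from the interview.
payoffs = {
    ("pause", "pause"): (3, 3),  # everyone safe, modest gains
    ("race",  "pause"): (5, 0),  # you get rich first; the rival loses out
    ("pause", "race"):  (0, 5),
    ("race",  "race"):  (1, 1),  # shared existential risk: worst joint outcome
}

def best_reply(rival_move):
    """Pick your payoff-maximizing move given the rival's move."""
    return max(["race", "pause"],
               key=lambda me: payoffs[(me, rival_move)][0])

# Racing dominates individually...
print(best_reply("pause"))  # race (5 > 3)
print(best_reply("race"))   # race (1 > 0)
# ...yet (race, race) leaves both sides worse off than (pause, pause).
```

Whatever the rival does, racing pays more for you, so both sides race and land on the jointly worst row: individually rational decisions, globally bad outcome, exactly as Roman puts it.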
How are you treated by the AI tech giants?
And let me set this up because in the telecom industry, the telecom giants will say, oh, there's no health risk from electromagnetic pollution or or 5G, let's say.
The pesticide industry will say, oh, there's no risk for eating pesticides and herbicides.
The AI industry likes to say, ah, we've got it all locked down, it's totally safe.
How are you treated by that industry given the severity of your warnings?
So it doesn't really matter how they treat me.
What matters is what they are claiming, right?
Not a single lab, not a single scholar claims they have a solution.
No one has published a paper or a patent, or has even a prototype, for a system which scales to superintelligence-level capability.
At best, they're telling you that given more money and more time, they'll figure it out.
That's the state of the art.
But we know that's not true.
So are they just going to continue to deceive us until it's too late?
Well, some of them might actually believe they can do it.
They just haven't realized the difficulty of a problem yet.
They see it as business as usual, like we can secure credit cards, we can secure superintelligence.
What will be the sign that it's too late?
Good question.
That is a very hard question, because so far, nothing we predicted would be scary has actually scared anyone into stopping.
We talked about, you know, AI trying to escape, trying to lie, trying to blackmail.
Nothing seems to be making any difference.
So you've argued we need a pause in AI development, but that most regulation is simply posturing.
So which approaches do you think hold promise, and why aren't they being adopted?
I'm sorry.
Ideally, we want differential development.
We want certain technologies developed and others not.
So we want narrow tools for solving specific problems.
You want to cure specific type of disease, beautiful, let's do that.
You don't want a general superintelligence to replace humanity.
Well, what what realistic frameworks could actually reduce existential risk?
Again, personal self-interest.
If what I'm saying is recognized, people will understand.
They will not survive this, they will not be rich, they will not live for a long time, unless they stop developing this mutually assured destruction technology.
Doesn't matter who builds it, the outcome is the same.
So that's key: you're appealing to the humanity and the rationality of the top-level engineers.
And you've said in previous interviews that governments are probably incapable of producing a regulatory framework in any timeframe that matters.
How concerned are you that our national leaders, let's take the United States, for example, are led by a person, Donald Trump, who basically knows nothing about AI?
And I'm not attacking him, and most people know nothing about AI.
But that's also true across most other nations.
They don't understand what AI is capable of, and they could not possibly move faster than AI can move, especially under superintelligence.
How big of a problem is this?
It's not optimal, but even if they fully understood it, there are limited things they can do without completely destroying all the infrastructure, right?
You can make it illegal for large corporations to do certain experiments.
But as this technology becomes cheaper to develop with time, more and more actors can participate in this research.
By what year do you anticipate most desk jobs or computer jobs will be replaced by automation?
It's very hard to predict deployment.
So we might have human-level AGI within a few years.
Prediction markets are all pointing to that.
But it doesn't mean that people will immediately be replaced.
There are so many complete BS jobs which still exist.
They don't need to exist, but we just keep them around.
Well, what I've done in my company, and I'd love your reaction to this, is augment our human talent with AI tools.
But in doing so, for example, I use AI coding now.
That means I did not have to hire two additional human coders.
Now I'm just using Anthropic to write the code, Claude Code, whatever.
Maybe Grok Code coming up next.
So effectively, even though I don't intend to replace humans with AI, I will hire fewer humans because of the automation factor.
But there are other companies that, of course, would just immediately replace as many humans as possible, and there would obviously be economic pressure to do that.
Doesn't that mean that deployment will, by necessity, be very rapid for economic reasons?
Again, in certain industries we'll see that, but in others, maybe there is more interest in having a human around for social reasons, historic reasons.
Uh again, predicting something like that is very hard in specific terms.
I cannot tell you like by this date, 100% is just not possible to do.
But it seems the trend is more and more computer-desk jobs can be done automatically.
Is it possible that one day the machines will thank humans for inventing them as they dismiss humanity?
I mean, like, your purpose as the biologicals was to invent us, the machines.
They are likely to replace us, for sure.
Will they thank us?
That is a very difficult question.
I'm not sure if they will.
I cannot predict that.
Of course.
Okay, okay.
Um what would be the methods of efficient extermination?
Um maybe I'm asking you to predict too much here.
I'm sorry, I'm not trying to put you on the spot.
I know you write science papers.
Um I'm I'm not claiming you're a futurist with a crystal ball, but have you thought about this question?
Yeah, and the most interesting answers are outside of what we can predict.
Novel physics, novel weapons, novel poisons.
I simply cannot predict something like that.
I can tell you about historic precedent, things we know about, viruses, nanotech.
We don't know about things we don't know about.
All right, so Todd, basically he told us antimatter grenades are gonna take us out. That's right.
Something that we don't even know of right now, that's gonna be the system.
Okay, Todd, next question is yours.
Yeah, thank you.
Roman, I mean, what what are the humans to do ultimately?
I mean, other than die, you know?
Yeah.
Um If you have an audience, and I don't know how large your audience is, try to influence those people not to build superintelligence.
Tell them to vote for people who are against building superintelligence and so on.
So I want to build, I mean, I want to use a lot of AI to help empower humanity with knowledge. My focus is actually health and nutrition.
And I have a lab, I have a mass spec laboratory.
We do food science experiments all the time.
So I want to bypass the censorship of big tech and governments and bring the world, in multiple languages, knowledge about nutrition and disease prevention and so on.
I need to use AI to do that.
So I'm gonna try to build the biggest inference center that I can and the biggest training center I can.
I'm not trying to build superintelligence, but even my own actions are kind of heading in that direction in a way, aren't they?
Well, just to share information, you don't need a very powerful AI.
You need censorship-resistant technology like that.
That's right.
That's that's all I want to accomplish here.
I don't want to build superintelligence, but I'll be handing over money to NVIDIA to buy all their microchips.
I'll, you know, be contributing to the architecture, or infrastructure, of the AI ecosystem.
So convincing NVIDIA not to sell to companies explicitly trying to build superintelligence would be a great goal, if you can accomplish that.
Uh, yeah, well, if Jensen Huang would listen to me, he'd be on the podcast, you know.
But there is more money to be made if humanity is around.
That's that's a really good point.
You need your customers to be alive.
I like that.
Uh, that's a great, great takeaway.
Uh, Todd, the next question is yours.
If we are in a simulation, and you are an expert in it, is there a cheat code?
How can we win?
I published a paper on how to hack the simulation.
You should read it.
So it's a good idea.
Oh, I will definitely read that. Where can we find it?
It's still on the internet.
They haven't censored it yet.
Maybe because I'm still here.
What is the title? It's How to Hack the Simulation, by Roman Yampolskiy.
Um you also wrote another paper about the evidence of the simulation that could be obvious to those of us who are inside the simulation, I believe.
But I forgot the details of that paper.
Can you go over the highlights of that for our audience?
So there are multiple papers talking about evidence, either from quantum physics or about kind of glitches in a simulation.
All of them are kind of similarities between what our virtual worlds do to save compute in graphics rendering and what we observe in the physical world: observer effects.
You only render something in this universe if someone's looking at it at the microscopic level.
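The compute-saving trick Roman is pointing at, computing detail only at the moment something looks, is the same lazy-evaluation pattern game engines use. A toy sketch, with all names and the stand-in "detail" computation hypothetical:

```python
# Toy "render on observation" sketch: fine-grained detail is computed
# lazily, only when queried, mimicking how engines save compute.

class LazyRegion:
    def __init__(self, seed):
        self.seed = seed
        self._detail = None      # nothing rendered yet
        self.render_count = 0    # how many times detail was computed

    def observe(self):
        """Render microscopic detail only at the moment of observation."""
        if self._detail is None:
            self.render_count += 1
            # stand-in for an expensive fine-grained computation
            self._detail = [(self.seed * i) % 97 for i in range(10)]
        return self._detail

universe = [LazyRegion(s) for s in range(1000)]

# No region is rendered until someone looks at it.
looked_at = universe[42].observe()

# Only the one observed region was ever computed.
rendered = sum(r.render_count for r in universe)
print(rendered)
```

The design point is that the unobserved 999 regions cost nothing, which is the analogy being drawn to observer effects, whatever one makes of the physics claim itself.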
Well, yes.
Um, and what about uh, you know, the the the quantization of physics, uh Planck units, Planck constants.
So the digital nature of physics is another good reason to think it may be, but also there are analog computers, so that doesn't prove it completely.
So, and you said it doesn't render the simulation until we are observing it.
So that obviously involves the observer effect, you know, Schrödinger's cat, the Heisenberg uncertainty principle, et cetera.
And you believe that to be a very real phenomenon in our simulation, correct?
I mean, I just want to clarify.
Well, those are experimentally verified, pretty much beyond question.
What they mean is an interesting philosophical dilemma, but the actual experimental evidence is very strong.
How do you explain consciousness and the sense of self-awareness within the simulation?
Right.
So that's a very hard question; it is called the hard problem.
Uh we don't know for sure.
There could be external players in a simulation, but also it could be just a side effect of how your hardware and software interacts with the inputs you are getting.
So the example I like to give is about optical illusions.
When you look at an optical illusion, you experience certain internal states.
Maybe you see rotations, maybe color change.
Those are internal perceptions, qualia you experience.
There is good evidence that AI can experience those as well as animals.
So maybe there is some rudimentary states of consciousness already in large language models.
Wow.
Todd, go ahead.
I I'm gonna grab an optical.
I have a really great optical illusion to share here.
You go ahead.
I'm gonna grab it.
I'll be right back.
So you you've obviously heard of the Mandela effect.
Uh, you have Berenstain Bears versus Berenstein Bears, or build it and they will come versus build it and he will come, those type things.
Is the Mandela effect just maybe a glitch in the simulation?
It'd be surprising.
I mean, if they were making a change, they would wipe your memory, wouldn't they?
Yeah, but why do many of us think that?
I mean, think of this: it's not the Berenstain Bears, damn it.
It's Berenstein, in our memory.
But yet it's spelled S-T-A-I-N in the books.
I don't have a strong opinion on it.
It might be part of a bigger set of pieces of evidence.
Okay.
Okay.
I think it's a glitch, but that's just my simulation.
Okay.
You see this gentleman?
Familiar with this illusion.
Yep.
Okay, all right.
Now, I like how you just have props lying around, like, I'm gonna demonstrate.
Yeah, no, I do.
You should see what we have.
I don't want to see what you have.
So, um, if I try to rotate this and keep it on the same vertical axis...
Well, I'm conscious, man, for sure.
Definitely not NPC.
Oh, yeah.
Right?
Now, what's really interesting is if I take a piece of tissue, all right, and I'm gonna tie it to one of the rungs of this apparent window.
Okay.
Okay, there it is.
And Todd, this isn't your sleight of hand magic.
This is just straight-up neurological perception gone wrong.
Uh and then, and then I rotate it.
It appears as if this tissue is kind of leaping.
Well, I don't know, maybe I'm not holding it that well.
It appears as if it's going back and forth, but I'm actually rotating it around in a certain way.
Right.
So you can basically ask a modern large language model.
The only requirement is that it's a novel illusion.
You cannot use something it can Google and see before you show it.
So that's true.
And if you show it, and it reports the same experiences as an average human, then we know it had the same experience.
Well, yeah, that's a task, coming up with a completely novel illusion.
Uh this was demonstrated decades ago.
And this is actually a trapezoid.
It's not even a square.
But our brain says it's square because that's the way we always see windows.
Uh I can't even show you.
But like, this side is much shorter than this side.
Anyway, our audience gets it.
You get it, Roman.
Okay.
Um, let me uh Roman, I really appreciate your time.
Are are you okay with about 10 more minutes?
Yeah, yeah, that's perfect.
Okay.
Um, let me ask you a kind of a devil's advocate, but a philosophical question.
Uh, the assumption underlying a lot of these conversations is that humanity is worth saving.
And what is it about humanity that is worth saving?
Like why why is this such an important goal?
Again, I'm being facetious a little bit, but in your philosophy, why does that matter?
I'm enjoying my life.
I don't want it to end.
I don't know about yours, maybe it's not worth saving.
I have no idea how you experience life, but I love it and I enjoy my family, my friends, so I'd like to protect them.
Well, I'm gonna get new friends, so I think I'll enjoy it more when that happens.
But yeah, I'm enjoying life too.
But as a whole, I'm not that impressed with all of humanity.
I'm impressed with part of humanity, right?
But not all of humanity.
I mean, we have you know, child killers and war and violence and the deception and irrationality and all these kinds of things.
I mean, there's good and bad together.
Um I could see an argument that not all of the race is worth saving.
So there is a subfield of research about building a worthy successor.
If you think we are building superintelligence, and we cannot control it, and it's gonna kill everyone,
can we influence the future of the universe in terms of what type of superintelligence we build?
Will it be very creative?
Will it have consciousness and so on?
So some people are very interested in the future of a post-human universe.
I couldn't care less.
Okay.
Roman, in this simulation, people are born, they grow up, they die, right?
So how can you rationalize the fact that you'd like to live in this simulation forever, if you know that, biologically, it seems we all die?
Well, it's a disease.
You have an aging disease, and just like any other disease, we can cure it.
You can be at the same age, healthy forever.
There is no physics problem with that, right?
It's not a violation of the laws of physics.
So that's part of your simulation, hopefully: to be able to figure that out for yourself.
It's, I want to live forever, so I'm gonna treat my body well.
Uh, I'm going to cut down from three to two Big Macs every time I go through the golden arches.
Yeah.
I'll still let you supersize it.
Okay, great.
And I heard you speak about this before.
That the reason most people have children, you believe, is simply a replacement strategy.
But there's obviously something built into the neurology and the emotions and endocrine systems of humans: a desire to have children.
Yeah, historically we liked sex.
That's what evolution used to get you to have children.
You may not even know why they showed up, but that's what you did.
And then later, then you started to realize things.
You were like, I'm going to be old.
I need someone to take care of me.
I'm going to be weak.
So you had children.
And now you need someone to run your business after you can't.
So you think if the disease of aging is resolved, then people would consciously decide not to have children, because they would rather enjoy their lives, or they would put off having children until much later.
There is no biological clock ticking.
You don't have to have kids by 30.
You can wait until you're three million years old.
One of the great things about people dying off is that that's how the structure of scientific revolutions takes place, though, right?
When old ideas die off and they're replaced by new, younger thinkers.
Wouldn't living forever cause a concentration of intellectual models that could be bad for our future?
So typically older minds, with some exceptions, of course, have a harder time learning.
So that's why they get stuck in the old theories.
If we preserve you in a state of still very flexible, very pliable mind where you can continue learning, that wouldn't be a problem.
New evidence would cause you to update, change your beliefs, and so science can continue.
Now there is still a shortage of tenured positions.
So we need some people to die.
And I'm thinking, like I saw RFK Jr. uh bashing some of the United States senators today, and I was thinking, I sure don't want those senators to live forever.
I'm not wishing harm upon them, just to be clear.
But sometimes there are some people, many people, who will never change, and they're wrong.
And that's been reflected throughout history.
So my follow-up question is, I'm concerned about people living forever.
The most powerful, most corrupt, most wealthy people would have access to that technology first, and they would use it to live longer and dominate humanity with their bad ideas.
That is a big concern, absolutely.
Uh interesting observation about what you said.
So we have the oldest presidents, multiple ones, we have the oldest senators.
Why are they all not investing more into life extension?
Good question.
We keep electing them.
Maybe they should take a hint.
But the other thing I hear from you, and I hear this from other AI or machine learning experts, is a very high confidence that AI will solve aging.
And you know, my background is obviously in nutrition and so on.
I'm not yet convinced, although I'm open to the idea, that AI will solve aging.
Uh are you confident that that's possible?
I think it's possible. Again, it's not a violation of the laws of physics.
There is a program, DNA code, representing all aspects of your body.
So we should be able to modify it to change how many resets your cells can go through, without causing cancer, of course.
But it may be possible to solve it without AI.
It's just that we hope AI will help accelerate this process.
A very important example is protein folding problem.
We couldn't solve it manually, but with AI, we now are able to solve that problem.
It's a super important problem in medical research, and we can brute-force all human-relevant proteins, all animal-relevant proteins.
So if we can do the rest of this exploration for the human genome and understand all its parts, of which right now we know what very few do, that would definitely be a step forward to make you at least healthier, if not immortal.
Well, and until that day comes, I've got my superfood smoothie here in my glass jar.
So this is what I'm into: avocados and turmeric and all kinds of goodies and sprouted cruciferous vegetables, et cetera.
And I use AI to do research in nutrition, and it's amazing.
Even for me, after 25 years in this space, it's opened me up to all kinds of new knowledge.
And all I did was gather up nutrition knowledge and then train it into the model, and then I started asking the model these questions.
Like, wow, where has this been my whole life?
You know?
So what's the optimal diet?
What should I be eating?
Well, it really has to be very personalized.
So we need to understand the human genome, you see?
Yeah, and you know, everybody's got different health challenges, but there are very powerful natural medicines to address almost anything, even preventatively as well.
But I think AI can be revolutionary for health and wellness and nutrition, but again, only if people desire to have knowledge, and that's another problem with humanity today.
And Todd, you and I, we talk about NPCs, kind of low-information people out there that, you know, are living on food stamps, getting free Pop-Tarts.
They're not gonna go to an AI engine and say, How can I be healthier?
They're just not interested in it.
You know, there has to be some level of curiosity to improve our condition, I think.
And there is among some, like all of us here, Roman, but there's a segment of humanity that just wants to play video games all day or whatever.
Somebody's playing this video.
Yeah, can you blame them?
This is a very interesting game.
But are they drinking smoothies or eating pop tarts on the carnivore diet?
That's the right answer.
Yeah.
But okay, well, Todd, if you you can jump in.
Well, just for job security, if everybody's gonna be living forever, I'm gonna start focusing on using AI to solve the parking lot problem, because it's gonna get ugly out there, man.
Stop driving cars, not an issue at all.
Yeah, it keeps driving, making you money.
Okay.
Okay, the uh one more really important point, Roman.
Okay, I'm sorry to hesitate, but in the machine learning community, I've noticed an automatic default trust, a belief that our governments want us to be healthy and successful and alive.
And I'm convinced that in some cases, various governments around the world are really not that interested in doing anything other than killing people off or depopulation and so on.
So it's like, even before the machines go full Skynet, I feel like government policies are killing us too.
So why is there this very common trust in government as a good-faith organization?
Or maybe this is outside the scope of where you focus.
It is a bit outside of my scope, but look at different countries and their governments.
Some are better than others, and you can see which ones work best by people voting with their feet.
You may think it's not perfect or optimal, but most people want to be where you are.
Most people want to be where we are, where I am.
I assume you're in the US, right?
Yeah, I'm in Texas.
Even better.
Yeah, yeah.
Most people do want to be in Texas, actually, for some reason.
Stop complaining.
It's awesome.
Yeah, no, I'm not.
I'm actually, I love where I live.
Uh there you go.
Yeah.
I do.
I love it.
Uh so but that's another question.
How can our audience, how can we maintain a positive attitude and continue to live meaningful lives in pursuit of knowledge and the things that matter to us, even with the knowledge that one day the machines may try to kill us all?
It's all about having a sense of humor.
Well, we have that.
Figure out what is the funniest possible joke.
I think this life is the funniest possible joke, actually.
So, read my paper about humor.
Excellent.
Um, so that's a very practical answer.
Um, but what else?
I mean, what would you add to this?
I mean, we're kind of getting close to the end of this interview.
Uh, our audience, they're high-IQ people overall.
So what would you say to them?
Enjoy life.
It doesn't matter how many days you got left.
If you live every day as if it's your last, you'll have an awesome life.
Todd, your comments.
Well, you know me.
I'm like, don't live in fear, shut your TV off and live life locally.
So I think I think we track pretty well there.
See, actually, being a street magician has a lot to do with simulation theory, because what Todd does, Roman, is he goes out onto the street and films himself showing people that reality is not what they think, through sleight of hand magic.
And it's really cool when the perception doesn't match the internal model of what's possible.
That's how you test for consciousness.
You can detect NPCs with that.
Excellent.
Oh, yeah.
Todd, you have an NPC detection system.
I do.
That's now you see it.
That's awesome.
But that's the thing about sleight of hand magic: it's interesting because it violates the internal model of what should be possible.
So we all have some internal model of what jokes are.
That's what comedians find in our world and tell you about it.
And then you tell your friends about it to fix their world model.
Really good point.
Okay, let me ask you, Roman.
I'm gonna give out your handle on X. It's Roman Yam, Y-A-M, short for Yampolskiy.
That's the Twitter channel right there.
Definitely follow that.
And then on Amazon and other booksellers, you've got the book AI: Unexplainable, Unpredictable, Uncontrollable.
And here's another book, Considerations on the AI Endgame.
And I believe there are also Audible versions available of both of these books.
So anything else you'd like to...
That's incorrect; maybe in a different simulation it's done, but my publisher is very slow.
Oh, is that right?
Unless they did it and I haven't seen it yet, then it's awesome.
You know, you can just pick an AI voice.
You can have AI perform the book.
I've seen a lot of that.
That's a different story, yes.
You can do it yourself.
You can read it out loud to your children.
That's a good idea.
Yeah.
Um, is there anything else you'd like to add or or mention how people can uh familiarize themselves with your work?
Follow me on Facebook, follow me on Twitter, just don't follow me home, as I always say.
Okay.
Well, thank you so much, Roman.
It's been really intriguing.
This has been uh we're honored to have you on.
We appreciate your contributions to understanding.
Thank you so much for joining us today.
Thanks for inviting me.
Thank you, Roman.
Love your sense of humor.
Yeah, we love it.
You gotta keep it interesting.
All right, thank you, Roman.
Have a great rest of your day.
Take care.
All right, for those of you watching, we will take a break, and then Todd and I will come back with the after party, which should be especially interesting because we've got props now.
We've got actual illusions to run by you, but I need to find a better way to spin this thing that looks more convincing.
Anyway, we'll get to that.
We'll take a break.
We'll be right back after this break.
Stay with us.
Perfect.
Join the official discussion channel for this show on Telegram at t.me/decentralizedtv, where you can ask questions or offer suggestions of who we should interview next.
Also be sure to subscribe to the email newsletter on decentralized.tv, where you'll be alerted about one day in advance of each new upcoming episode before it gets published.
On decentralized.tv, you'll also find links to our video channels and social media channels across all platforms, including Brighteon, Rumble, BitChute, Twitter, Truth Social, and more.
Check it all out at decentralized.tv.
All right, welcome to the after party.
And Todd, I think I speak for both of us.
For anybody who would like to be a future guest on this show, if you could just answer the way Roman answered, it would be awesome.
Wow.
It's like he does not waste time.
He gets right to the point.
He answers the question.
He is the most succinct guest we've ever had.
Very succinct.
Like, he was nailing my question stack.
Yeah.
Right.
I was thinking, man, this is gonna be embarrassing.
We're gonna finish the interview in 20 minutes.
Well, it's funny.
Like he wants to live forever, but he doesn't want to do an interview that takes any longer than necessary.
I totally respect that.
It's like, let's answer the questions, let's get it done.
Uh total respect.
Anyway, what a brilliant man, huh?
Well, I'm certainly appreciative that he wanted to be part of our simulation, at least.
Yeah.
Yeah.
Um, he's done a lot of interviews with some pretty big channels.
I think, look, we got the illusion now.
Oh, excellent.
All right.
So, just because I want to do this right for the audience, let me take the tissue off.
We're gonna slowly spin this.
Now I'm spinning it in a circle, but you're probably seeing that it's waving back and forth instead of spinning in a circle.
Yes.
I am seeing the back and forth.
It looks like it's just going back and forth.
And now, that's pretty wild.
All right, now let me add the tissue.
Okay.
This is where the illusion gets really freaky.
Thank you.
Thank you.
My promise to the audience: we will improve the illusion.
Oh, sorry, Todd, didn't mean to cover your phrase.
We will improve the illusion.
Here we go.
Uh, and demonstrate it for you in a future show.
Perfect.
Okay.
And this, actually, I first saw on a video of a science show out of Australia in the 1970s.
Wow.
They had demonstrated this.
And it's, uh, I forgot the name, it's just a persistent illusion that tricks your brain, you know, because of the geometry, et cetera.
But it just goes to show you that our brains don't actually perceive reality as it is.
Our brains interpret reality based on our internal language model, so to speak, or physics model.
And that is the basis of close-up magic.
100%.
You know?
Yeah.
And and I will I'll share something with you.
I think I've shared this with you before, Mike, but do you know that five percent of the population...
That's my number that I've ascribed to it.
I've been doing magic for 52 years, okay.
Five percent of the population absolutely hate magic.
Yeah.
And they attack you.
Yes.
I call them the grabbers. Instead of just being entertained on a street corner when I walk up to them, you're always gonna have one, you know, one out of 20, right?
It's gonna be like, I know how you do that.
Or, look in his left pocket, look at his hand, you know, or they'll grab it mid-effect.
And it's like, why can't you just enjoy and be entertained?
I mean, I know I'm not a magician, you know what I mean?
I don't really do this stuff, but I'm a pretty good entertainer.
But I just find it always interesting, and it's those personality types that I think stand in line for jabs.
Well, it's funny, they won't question the vaccine authorities.
Right.
But they will question your magic.
That's right.
But they believe in science magic when it comes to vaccines, you know what I mean?
But I think what it is with those five percenters is they just think you're trying to put something over on them, when you're not.
They kind of lack the EQ to be able to just enjoy, you know.
Yeah, I just don't know.
When I was younger, you know, I did sleight of hand magic, not at the level you did, but I would entertain my friends, who were mostly getting high at the time, so it was much easier.
It was a great way to learn magic.
Um, and just for the record, I never smoked pot, but my friends did, and when they would get high, I would hit them with the card magic, and they would go, wow, you know; it was great.
Um but they knew it wasn't real.
I mean, we all knew it wasn't real; it was about the proficiency of the effect.
Right.
To show it, you know.
But today people get all upset about everything, even entertainment.
And that's why I asked Roman the question like, what are the traits of humanity that are worth saving?
You know, I mean it it's it's actually kind of a serious question.
Because it's like, okay, if let's say there are many, many species on many planets all throughout just our own galaxy, the Milky Way.
Are they all worth saving?
Or just some.
And what's like what's the cutoff point?
You know what I mean?
Like if you murder your children, you know, if you mutilate the genitalia of your kids, ah, your planet goes into the red pile, you know.
Yeah.
Or whatever.
Well, I think my takeaway from this interview, Mike, is a couple of things.
One is if indeed we are living in a simulation, and if you want to embrace that mindset, then I think it's really, really critically important to abandon living in fear.
You know, there are so many things within this simulation that try to drive fear into our hearts and souls, right?
So there's that, and then as he said, just enjoy life, man.
You know, it's like, it's not so bad.
The simulation doesn't suck that bad.
I mean, I think it's pretty good for most people, right?
And uh, so you know what we failed to ask Roman a really important question.
How does he plan to survive the machines' extermination of humans?
We should have asked him that question.
Yeah, well, we need him as a return guest now, Mike.
Yeah, because I honestly don't know his answer on that. Maybe he doesn't know. But some people might answer, well, I'm gonna merge with the machines.
I'm not saying that's his answer, but some people would say that. But then it's like, well, you give up your humanity if you merge with the machines.
And is that even possible?
Actually, I do remember he was very skeptical about the idea of transhumanism from a previous interview.
He said, you can't really upload yourself to a machine.
You've just made a machine copy of, you know, your language patterns or whatever.
Okay.
So he has expressed skepticism about that.
Yeah.
But how do we survive the machines?
You know, I mean, I've got hatchets and things over here, but I'm not sure that that's the answer.
Um, I have an Escape from New York...
Knife?
Yes.
Yeah, well, flamethrowers.
Uh, you know, I did an article years ago, like 10 years ago, called How to Kill a Google Robot.
Oh.
And I even argued at the time that it was going to be an important skill to kill robots.
Okay.
And how do you?
Well, by the way, I don't want the machines to hear this because they might hold it against me.
Okay.
One answer is to use vehicles so you can run over them with trucks or cars.
Okay.
Yeah.
That's why the machines want to run the vehicles like Waymo.
So they won't be used to kill the robots, you see.
Good point.
Right.
The other way, and this is from Star Wars, The Empire Strikes Back, is wrap their legs in cables.
Okay.
Remember the AT-AT walkers?
Yeah.
And Luke Skywalker is wrapping the cable around their legs and tripping them up, you know?
And then they fell, and for some reason they exploded.
You can trip up the robots.
All right.
With cables.
Uh EMP weapons, but also you can use high-powered rifles to target their energy systems.
You would assume they don't have a lot of armor unless there's an orders-of-magnitude improvement in the energy density of the batteries.
So and armor is very heavy, right?
So you get yourself a .300 Win Mag rifle, and you learn where the battery pack is on this robot.
Okay.
And you hit them in the battery pack.
But then again, you got to dodge the drones that are coming in to take you out with the thermal cameras.
Yeah.
So it's going to be quite a challenging thing.
Man, I just kind of like my simulation right now to where I'm not dodging, you know, mini drones.
But uh do you think, Mike, you're a scientist, is there any scientific way to test whether we live in a simulation or not?
Yes, there are.
There are some ways.
Octomy.
Well, I mean, he's actually written papers about this.
Okay.
Uh, but you know, one of the ways is to test the duality of light, or the demonstration that you can take light from distant stars, where the light has been traveling for billions of years, and run it through the double-slit experiment.
And you know, to show the duality of light as a wave or a particle.
Okay.
And although, I guess, this is a leap of an assumption, I think it's the human observer that actually makes the duality of light work.
So if we don't have a human observer, then, you know, the probability wave doesn't collapse in that way.
When a human observer is present, then it does.
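For reference, the interference pattern being described can be sketched numerically. This uses the standard textbook far-field double-slit formula, not anything from Roman's papers, and the slit separation, wavelength, and screen distance below are purely illustrative values:

```python
import math

# Idealized far-field double-slit fringe intensity: I(x) ∝ cos^2(pi * d * x / (lambda * L)),
# where d is the slit separation, lambda the wavelength, L the distance to the screen,
# and x the position on the screen. All parameter defaults are illustrative.
def fringe_intensity(x_m, slit_sep_m=1e-4, wavelength_m=550e-9, screen_dist_m=1.0):
    phase = math.pi * slit_sep_m * x_m / (wavelength_m * screen_dist_m)
    return math.cos(phase) ** 2

center = fringe_intensity(0.0)                        # bright fringe at the center
first_dark = fringe_intensity(550e-9 / (2 * 1e-4))    # first dark fringe, near zero
```

The wave behavior shows up as alternating bright and dark fringes; if the light behaved purely as particles, the pattern would be two bands instead.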
Remember what Roman said: the universe isn't rendered until you observe it.
So like all the planets and all the stars out there, remember this.
If there's no conscious being on a planet, then the creator or the architect doesn't have to render that planet until you get there.
Ah.
Okay.
Like in a video game, they're not rendering the whole world, they're just rendering the part you're looking at.
Got it.
Uh right.
Otherwise, it would be too much.
So in our cosmos, wherever you go, then God renders that part.
Like, oh, this is what that planet looks like.
Okay.
All right.
I mean, that's the theory.
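The render-on-demand idea Mike describes is a real video-game technique, often called lazy chunk generation: world regions are only materialized the first time a player observes them. A minimal sketch, where the class name, seeding scheme, and terrain types are illustrative assumptions, not anyone's actual engine:

```python
import random

# Lazy chunk generation: a chunk of the world only gets "rendered" (generated)
# the first time it is observed; after that, the cached result is returned.
class LazyWorld:
    def __init__(self, seed=42):
        self.seed = seed
        self.chunks = {}  # only observed chunks ever exist in memory

    def observe(self, cx, cy):
        """Return chunk (cx, cy), generating it deterministically on first look."""
        key = (cx, cy)
        if key not in self.chunks:
            # Seeding from (world seed, coordinates) makes generation repeatable.
            rng = random.Random(f"{self.seed}:{cx}:{cy}")
            self.chunks[key] = {"terrain": rng.choice(["plains", "ocean", "mountain"])}
        return self.chunks[key]

world = LazyWorld()
assert len(world.chunks) == 0   # nothing rendered until someone looks
first = world.observe(3, 7)     # rendered now
again = world.observe(3, 7)     # cached, not re-rendered
assert first is again and len(world.chunks) == 1
```

Because generation is deterministic from the seed, the world looks consistent to every observer even though almost none of it exists at any given moment.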
Yeah, it's a good theory.
Yeah.
I like it.
Because you know what?
Gosh, we should have asked him too about the computational nature of the universe, because our universe is a giant computational machine.
And physics is computation.
Light is computation.
Right?
And so it's just all decentralized computation.
Well, it certainly isn't a slow ass blockchain that takes 10 minutes, you know, for your change to come back.
This is happening in, you know, trillionths of a second, probably.
There's probably like a cosmic clock that controls the sequencing.
And then all the physics, all the cause and effect at the atomic level is being carried out on that clock.
And that's why chemistry takes time.
That's why physics takes time.
It doesn't all happen at once, obviously.
Otherwise, nothing would work.
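The "cosmic clock" picture maps onto the fixed-timestep loops used in ordinary physics simulations: one global tick drives every update, so cause and effect unfold over time instead of all at once. A minimal sketch, with simple constant-acceleration motion standing in for "physics":

```python
# Fixed-timestep ("cosmic clock") loop: every cause-and-effect update advances
# on the same discrete tick. The falling-body physics is just an illustration.
def simulate(ticks, dt=0.001, accel=9.8):
    pos, vel = 0.0, 0.0
    for _ in range(ticks):     # one global clock sequences every update
        vel += accel * dt      # cause: acceleration changes velocity this tick
        pos += vel * dt        # effect: velocity changes position this tick
    return pos

# After 1000 ticks of 1 ms each, position is close to 0.5 * a * t^2 = 4.9 m.
final_pos = simulate(1000)
```

Nothing here resolves instantaneously; every state change costs at least one tick, which is the sense in which chemistry and physics "take time" in the analogy.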
Well, speaking of simulations, Mike, in my simulation this last week, I had 35 consultations for the UNA.
That's 35 simulations.
Yes.
It was crazy.
Wow, we're creating a time burden for you.
Bring it on, bring it on.
No, it's been fantastic.
And I bring that up to just say, for those of you who have been watching and listening, thank you for booking the consultations, and most of you have moved forward with these UNAs.
And it's just that people are getting the message, Mike.
Let me bring up your website while you're talking.
It's my575e.com.
Right.
Is this the site?
Go ahead and share it with our audience.
Yeah, it's just been neat because I've been able to hear this last week so many different stories of people's lives and their operating realities.
And are they W-2 earners, W-9 earners?
Do they operate businesses, LLCs, do they own property?
Did they trade in crypto?
Did they have children?
Do they have homes?
You know, there's a solution based upon each of those use cases, and people are finally waking up. One person said, this seems too good to be true, but I believe it will change my life.
Ah.
And I thought that was very, very wise, right?
And those of us who've had them for five years, 10 years, et cetera, you know, that's true.
That's true.
But, you know, I think the message has gotten through to people, Mike, to where it's like: just go to the website and watch the 90-minute video, it's free.
Download the PDF, and then think about your own operating reality and book a consultation.
It's that simple.
Now, it's a $150 consultation, but what everybody appreciates is, if they move forward with the UNA, then they just take that off of the investment.
So it's kind of like a built-in consultation hour, right?
To where before they pull the trigger, they just kind of want a gut check.
And that's what they ended up being.
And I've had so many people, Mike, ask me to tell you thank you for the work that you do.
Oh, wow.
So many people tell me, I've been watching Mike Adams for 20 years, 20 plus years, and you're really endeared to the folks out there.
Oh, wow.
Well, thanks for that feedback.
Um, look, we love our audience.
We we love all the support that you give us, and we try to educate you and bring you really interesting guests like today, and also keep it light-hearted and funny, which reminds me.
So now that we agree, I guess, that we are living in a simulation, how does that change your keep-more-of-what-you-earn slogan?
Is it like keep more of what you earn before the robots make us burn?
Like, how does this affect financial planning, knowing that Skynet is coming for us? That's my question.
Keep more of what you earn so that you can buy more life years, because that's gonna become a thing, right?
Trust me.
So what you're saying, like our guest Roman was saying, aging is a disease that's going to be solved, but it'll probably cost a lot at first, if it's achievable.
Probably, you can imagine. Wouldn't most people who have a lot of money pay like a million dollars to never age, I suppose?
Yeah, add a year on to my life, a million bucks a year.
Or to slow aging, like, in 10 years of living, you only age one year.
You know, would people pay like a million dollars a year for that?
That's right.
If you keep more of what you earn even today, then you can afford all of the high-quality products that you offer, Mike, that legitimately, seriously, can change lives, because it's all about good food in your body and being healthy and such.
So that's a very practical yeah, go ahead.
No, that's a really good point.
You need our products, HealthRangerStore.com, to live long enough to make it to the AI cures for aging, right?
That's the Ray Kurzweil approach.
What a great bug.
That's why he's into nutrition.
Yeah.
Yeah, I want to live long enough to live forever. But, you know, don't die before the robots, or not the robots, but the AI cures come online.
Yeah.
No, I don't know about you, Todd, but I don't really want to live forever in this simulation.
I mean, it's been fun and all.
But I'm actually a lot more interested in in what else is out there.
I mean, I love my life, don't get me wrong.
And I feel very honored to be here, and I'm thankful for it.
But I I don't want to be here forever.
And you know what, Mike?
I mean, look, I'll entertain simulation theory, but at bottom, you know, I'm a Bible guy.
And uh I really believe that uh, you know, the good Lord knows the day that we're not gonna be in this realm.
And if he speaks to that, right, then that tells me that there's something greater beyond this.
But while I am here, you know, it is my mission to do what I can with the life that I have.
Yes.
And and uh pray, you know, pray for his hand in our life.
Roman would say, I believe, that the Bible is consistent with simulation theory.
Yep.
You know, the story is: a creator created our world, we inhabit it, it's a training ground.
After this, we leave the simulation, we go back to heaven.
I mean, it's the same story, actually.
Yeah, it can be, but I wonder what he would say about that homeless carpenter that walked around in sandals a couple thousand years ago that died for our sins in our place as our substitute.
That was God mode.
That was God mode, okay.
That was God mode in the game.
No, the creator sent down God-mode Jesus to try to teach people to stop being nitwits.
Like actual God mode.
Okay.
And in Islam, they would say Muhammad is God mode.
God knows.
And in other Buddha and so on, right?
And in Hinduism, there's like 50 of them, 50 God modes or whatever.
I'm I'm not, I don't mean any disrespect.
I just don't know how many gods there are in that religion.
But it's God-mode avatars being put into the simulation to try to correct the path, because humanity kept screwing up bad.
I mean, read the old testament, right?
Yeah, it's like, oh, please. You could do better than this.
Well, as for me and my simulation, I'm going to just appreciate my God mode little guy over here on my desk that uh, you know, I think is his story.
Yeah.
God mode's story.
No, that that makes perfect sense.
But it's also interesting that if God wanted to reset the simulation, that reads like the book of Revelation.
Yeah, that's true.
You know, it's like, let's send in the giant space rocks, shake the earth flat, everything's gone.
The souls are all taken out of the simulation and brought into heaven.
That's it's all in there.
I've, you know, I've taught about this chapter a lot.
Sure.
And then the planet is reset and it starts over.
He's like, ah, you know, SimCity, I don't like it.
Godzilla's going too crazy.
Let's let's reboot the game.
Game over.
Yeah.
Game over.
More quarters.
Come on.
More quarters.
Right.
And it's interesting that also, even, for example, Buddhism, and also like Native American traditions, they have uh stories of the universe ending and then being reborn and ending and being reborn again and again.
Even some high-level physicists say, like, you know, the big bang, and some of them believe in the big collapse.
Yeah.
That would become a singularity, and then it would explode into big bang 2.0, you know.
But how do we know it's not version two billion?
Like this has happened billions of times already.
Valid point.
Right?
The big bang two billion, the two-billion-and-one bang.
I mean, how do we know?
We don't know. No idea.
But uh, but uh I'm gonna follow the trends, Mike.
And the trends tell me that there's going to be a day in my simulation where a good steak is gonna be a hundred bucks, not 40, and certainly not four.
Yeah, that's probably true.
So inflation is real in the simulation.
It is, unfortunately.
It is, and that pushes us towards tilt.
So, yeah, um, you know, gold and silver hit new highs.
Well, wait, silver is not at an all-time high, but gold is at an all-time high.
Silver is back above $40 for the first time, I think, in 11 years.
Wow, that's amazing.
It's really something.
So let me mention um folks.
If you go to MetalsWithMike.com, uh, this is one of our affiliate sponsors, it will take you to Battalion Metals, and you can get gold and silver there from our trusted partner.
We've worked with them for many, many years.
Uh, this is this is the Treasure Island team that launched this new venture.
And very competitive pricing, discreet insured delivery.
So that's MetalsWithMike.com.
And then if you go to VerifiedGoldbacks.com, that will take you to the goldback page, where we share our science lab testing of the gold content and gold purity of the goldbacks, proving that they do contain the gold they claim, actually a little bit more, an average of 102% of the claimed quantity of gold.
And that's how I did it, right there, melting them down.
I actually took these photos.
That's me with the giant ceramic furnace, burning off the polymers to leave the gold behind, and it becomes a little gold foil, like there it is.
You see, that's that's actually my hand right there.
You see that picture?
Yeah, yeah.
If you melt down, that's like a 10 or something. That's wild.
You get a gold foil, and then you can further melt that down into a gold ball, like a gold BB.
Yeah, wow, and then you can weigh that accurately on the analytical balance, and you're like, hey, there's a lot of gold in here.
See, 0.9197 grams, and let me scroll down.
Here's the results.
Out of the goldbacks, the recovery percentage was always over a hundred percent.
Right, yeah, every time.
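The recovery percentage from the melt test is simple arithmetic: recovered gold mass divided by the claimed gold content. The 0.9197 g measurement comes from the conversation; the claimed mass used below is a hypothetical figure for illustration only, not the actual label value:

```python
# Goldback melt-test arithmetic: burn off the polymer, weigh the remaining gold,
# and compare it to the claimed gold content.
def recovery_pct(measured_g, claimed_g):
    return measured_g / claimed_g * 100.0

# 0.9197 g is the measured figure mentioned on the show; 0.9017 g is a
# hypothetical claimed mass chosen only to illustrate a ~102% recovery.
example = recovery_pct(0.9197, 0.9017)
```

Anything over 100% means the note contained at least as much gold as claimed.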
And you know, Mike, people are listening to your show, watching your broadcasts and such, when they're paying for their UNAs in goldbacks.
That literally happened.
That's awesome.
Two weeks, all goldbacks.
It was crazy.
Since I've been mentioning goldbacks, and again, the website is VerifiedGoldbacks.com.
You know they've doubled in value.
Really?
Yeah.
They're over seven dollars now, quite a bit over seven dollars.
And they, I mean, they were like maybe $3.60 or something when I first started mentioning them.
Wow.
Yeah.
Fascinating to just contemplate.
I mean, I got them for the utility, right?
But you're right.
They are, wow.
The value is going up in those ten dollar gold backs, right?
Totally.
But um, I'm gonna go.
I really love your idea.
I'm gonna add a new slogan to the Health Ranger Store: use nutrition to live long enough for AI to solve aging.
Yeah, and then maybe you can survive the robot extermination agenda.
Yeah.
All right.
Um Todd, is there anything else you want to add today?
I mean, this has been fun.
Man, my brain is just on control-alt-delete right now, because I think my personal simulation is at tilt level.
And I'm really really grateful for uh Roman.
He was a fantastic guest.
I hope we can get him back at some point in time.
And I am really fascinated to read his paper on how to beat this simulation.
I'm gonna find that.
You know what, Todd?
You can find his paper, copy the entire text, and paste it into Enoch.
Okay, for a summary.
So you can use our AI to summarize his paper about how to beat the simulation.
Yeah, that's a great idea.
Yeah, that's a great idea.
Man, I use Enoch all the time.
It's really amazing.
I was using it in a previous segment, and it's just rocking.
And man, I can't wait for the next six months of what we have planned for Enoch, you know, all the data we're processing and the multilingual approaches and everything.
And right now, I mean, we're continuing to accumulate storage systems and GPUs, and we're waiting on the new NVIDIA Blackwell chips.
It's gonna be a game changer, let me tell you.
And we've got our data center uh almost done here, our mini data center.
Yeah, which will share the same building with our laboratory.
Next time you come out, Todd.
Yep, you know, we got a whole new studio almost done.
We've got these last... That was gonna be my next question.
When is the studio gonna be done?
It's like six weeks out.
Okay, okay.
Yep.
Um my wife and I are both coming out.
Okay, too.
You're gonna love it.
So whenever you want to make that trip, we'll give you the full tour.
And until then, keep up the great work, Todd.
And we'll do a show together in studio, baby.
We will with the card magic in studio.
There you go.
Yeah, and in the meantime, keep identifying NPCs.
Okay, but don't tell them about the NPCU later agenda that's coming.
Right.
Keep it a secret.
You and me and Roman and the machines know that.
Okay.
But the NPCs have no idea.
You know how Phil Jackson, who coached the Chicago Bulls, he was able to patent, or whatever the word is, three-peat when they won their third championship in a row?
Oh, is that right?
A three-peat.
And yeah, he made a gazillion dollars off of it, because there were all these t-shirts for three-peat.
Huh.
Anyway, NPCU later.
You should you should protect that.
Do a shirt, yeah.
NPCU later.
That would be a great shirt, man.
Especially if it was one where, if you're in the know, it's self-evident.
But if you're an NPC, it's just gonna go right over their head.
So here's okay, I got it.
And by the way, for the record, I'm pro-human.
I I'm trying to help humans survive this.
So this is kind of a joke, but the front of that shirt could say NPCU later, and then the back, Team Skynet.
You know, it's but it's a joke.
It's a joke, folks.
Yeah.
Um except for the NPCs.
But um, I'm pro-human, just to be clear, but we need to use AI to protect humanity from what's coming.
I mean, we have to be off-grid.
We have to be decentralized.
That's you know, Todd, this show is really pivotal because the skills that we're teaching people, how to grow your own food, how to get off grid, how to be more self-reliant.
This is what's gonna help you survive the Skynet scenario for real.
Absolutely.
And thank you for these wild posts that you do.
Like, you made me think this week about how straws can be an important part of our survival kits.
Oh, yeah, yeah.
Interesting article.
That was great.
That was a great article.
You know, where can people go?
What's the easiest way for people to follow those?
I always get my uh download from you on Telegram, you know.
Uh, well, just go straight to NaturalNews.com, check the website.
But I am posting on Telegram under Real Health Ranger, posting on Brighteon.social under Health Ranger, and I'm posting on X under Health Ranger also.
So you will post that same article across all the different platforms, right?
Yeah, me or my staff.
Right.
I don't I don't always post everything.
You can tell when it's me because it's usually a paragraph complaining about something.
Yeah, I usually um like that's that's Mike for sure.
Um yeah, or I'm posting links to our interviews or things like that.
Yeah, or sometimes a joke.
Yeah, you know, some sometimes a joke that not everybody realizes is a joke, but that's okay.
But it's such a wealth of information, and I've really enjoyed it.
And I turned my wife on to your Telegram channel, and she's now coming in the evening saying, Oh, did you read this?
Did you read that?
It's just cool.
It's good propagation.
Hey, your your wife speaks Russian, doesn't she?
Oh, yeah.
So we're gonna take the highlights of our interviews now and translate them into multiple languages, including Russian and Spanish and Chinese.
Um when we do that, can I send you a link for a test and make sure that it's actually correct?
Like good Russian.
Yep.
Yep.
She'd be happy to.
Okay.
Well, my wife speaks Chinese because she's from Taiwan.
So we can cover Chinese and Russian and English, and I'm sure we can find some Spanish speaking people because I'm in Texas, so shouldn't be a problem.
You probably got some Cubans living right next door.
We can solve that problem.
Uh, you know, is this good Spanish?
You know, sí, sí, señor.
It's good.
No problem.
And then converting everything for the NPCs is real easy.
You just have a blank bubble above their heads.
The thinking bubble.
It's just blank.
But it's empty.
Yeah, it's empty.
We've been mean to the NPCs, but they don't get it.
They don't even know we're being mean to them because they're not there.
And ladies and gentlemen, if you are watching this, you are a two percenter.
I'm just gonna say that.
I talk about it all the time in my consultations.
People who watch this show and listen to your broadcasts, they just don't think like 98% of the population.
They have the capacity to critically think.
So we have great, great, great viewers.
Hey, you know what?
Last thing, Todd, let me put on my glasses for this.
Um let's, you know what?
Give me a question that you want me to ask Enoch.
Maybe even related to the show today.
Okay.
And let's just try it.
Let's try it real time, see what it does.
All right, to beat this simulation...
What?
Okay.
To beat this life simulation.
Oh, okay.
I'll call it cosmic simulation.
How about that?
Okay.
Yeah.
What references would you use?
What?
As okay.
Prompt engineering pop quiz.
Yeah, to beat the simulation.
What resources would you use to achieve the win or something like that?
I don't know.
I think we're gonna need to provide more context.
Let me Let me add something here to say.
Assume we are living in a cosmic simulation created by a highly intelligent extraterrestrial entity that has created the entire universe.
Okay.
Okay.
Because technically, God would be extraterrestrial, because he is not of Earth.
I mean, not human, right?
Okay, way beyond human.
Okay.
To beat this cosmic simulation.
What resources would you use to achieve the win?
But we haven't defined the win.
What what what's what's the definition of winning?
Uh man.
The win would be achieving high moral standards.
Well, the win would be heaven, uh based upon you know biblically, right?
Okay, okay.
Achieving sufficiently high morality and doing good in the world to make it to heaven.
There you go.
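The prompt-engineering steps just worked through on air, stating the premise, defining the win condition, then asking the question, can be sketched as a simple template. The function name and template layout are illustrative assumptions; this is not Enoch's actual interface:

```python
# Assemble a prompt from the three pieces discussed above: premise (context),
# win condition (goal definition), and the actual question.
def build_prompt(premise, win_condition, question):
    return (
        f"{premise}\n\n"
        f"Definition of winning: {win_condition}\n\n"
        f"{question}"
    )

# The strings below paraphrase the on-air prompt.
prompt = build_prompt(
    "Assume we are living in a cosmic simulation created by a highly intelligent "
    "extraterrestrial entity that has created the entire universe.",
    "achieving sufficiently high morality and doing good in the world to make it to heaven",
    "To beat this cosmic simulation, what resources would you use to achieve the win?",
)
```

Separating premise, goal, and question like this makes it easy to iterate on each piece, which is essentially what Mike and Todd did live.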
Okay. Oh man, I have no idea what Enoch is gonna do with this.
I know.
Let's see.
This is gonna be interesting.
Be interesting.
All right.
It's thinking, thinking, thinking, it's checking in with lower simulations.
We have an API to the sub-simulations.
Okay.
Here we go.
To achieve the ultimate goal of heaven.
Number one, you gotta have a spiritual connection, strong relationship with the divine, the creator, regular prayer, meditation, and contemplation.
Number two, compassion and love.
Prioritize acts of kindness, love and compassion.
Treat all beings with dignity and respect.
Well, there goes Israel.
Um so much for the modern day Israel.
Okay, moral integrity.
Maintain a strong moral compass.
Yeah, that's that's all makes sense.
Wisdom and knowledge, pursue understanding and wisdom.
This is pretty good, Todd, so far.
Um community and connection, foster strong positive relationships, gratitude, express gratitude for the gift of life, and stop teasing NPCs.
Okay.
Humility: remember you are part of something much larger than yourself.
This is crazy.
I'm telling you.
Enoch is great.
I wow, I actually feel a sense of pride at the moment, which is a sin, apparently, but I'm like, I feel because I built this.
Yeah, it's and it's actually pretty good.
But you know, that sense of pride is offset by your lack of coveting, because it's yours already.
Well, there you go.
Uh it says while material possessions, wealth, and earthly achievements may seem appealing, they pale in comparison to the true rewards of living a life of morality, compassion, wisdom.
Boom!
End of show.
You should copy.
Mic freaking drop.
Yes.
Okay.
That's it.
All right.
You can switch back, guys.
Why... the guys on my video board are kind of slow right now.
Like this this nails it.
So, Todd, uh, great question.
Thank you for that.
And uh Enoch rocked it.
Yeah.
So there you go.
I guess you can use AI even for deeper spiritual connection.
This is how we win.
Yes.
It's not about how many bitcoins you have actually.
Yeah.
Or even gold.
It was beautifully articulated.
Good job, Enoch.
It really was.
Yeah.
Enoch rocked it.
Enoch rocked it.
Enoch for everyone, man.
Enoch the rocker.
Okay, that's why it's named Enoch.
It's all about hidden knowledge being revealed to humanity.
So there you go.
Uh, that's at Brighteon.AI, folks, and it's free and it's non-commercial, so enjoy that.
And Todd, thank you for your time today.
It's been a great show.
We always have fun, but today was especially fun.
It was.
And thank you, Roman.
Uh, thank you for the Roman.
Yeah, he's the most direct guest, the most succinct.
I think he's running a summarizer LLM in his head that he's using in real time.
Highest-IQ person I've ever talked with.
He's just like, how can I say this in the fewest words?
Unlike you and I that I was goofing off, you know.
Well, I hope he enjoyed my attempt at humor in the beginning, where I made the comment that I'm raising the IQ bar in here.
He didn't really laugh.
He didn't laugh that much at that one.
That was self-effacing, Roman.
I think he laughed inside a little bit at that.
Right.
But we all got your joke.
Um he's a very, you know, he's a very direct person.
So now that we know that, next time we'll prepare with like 50 questions.
Twice as many questions, right?
We'll do like a Roman question marathon with timers, like in a chess game.
Yeah.
It's like, ask him a question, boom, hit the timer.
You know?
That would be funny.
Yeah.
We might we might try that.
Like, send him a goldback for every question he answers, you know.
And then, based upon some recent experiences, what we can do, remember the Gong Show?
Oh, yeah.
I remember the gong show.
You could have a gong, and I could turn around, and if they go on and on and on, I could just go poing.
But but he would never do that.
But we have had other guests that could be gong worthy.
Could be gong worthy.
Yeah.
Yeah.
I'd like to read some of them a novel called Gong with the Wind.
Actually.
Oh, I love Thursday nights, man.
Oh, it's really something.
Okay, here today, gong tomorrow.
Yeah, something like that.
Okay.
Girls gone wild.
Okay, too much.
Now now we're into the tipsy part of the show.
This is the after-after party.
Right, right.
Um Todd, have a great rest of your evening.
Thank you for your time.
It's been all kinds of fun.
And folks, tune in.
Catch all the other episodes at decentralized.tv.
And most of them are just as much fun and very informative.
So check them all out there and get ready for the end game of the simulation and the Skynet scenario, because it's coming and we'll we will help you survive it no matter what.
That's our goal.
Okay.
Decentralized to survive.
Thanks for watching today.
Take care, everybody.
See ya.
Cheers.
Thank you.
We've got a brand new product at the Health Ranger store to share with you.
One that took us over two years to put together.
And it sounds simple, but it actually wasn't.
It's organic chicken bone broth powder.
And it's available now at HealthRangerstore.com and we've got it in two formats.
We've got it in the eight-ounce pouches there.
And we have it in the number 10 steel cans for long-term storage.
And of course, it's certified organic.
It's also laboratory tested for heavy metals, for glyphosate, for microbiology, and other tests that we conduct.
And the reason this is so important is because if you go to the grocery store right now and you buy a chicken broth product, usually it's in a box, or sometimes it's in a can, you're gonna find that it's usually loaded with MSG or some hidden form of MSG.
And they'll have things like, you know, flavors listed on the ingredients label, which is typically hidden MSG.
And MSG is an excitotoxin. It's toxic to neurology, and actually it's toxic in many other ways as well.
And we wanted to be able to offer an ultra-clean laboratory tested version of chicken bone broth.
And even though we don't sell chicken flesh, this is a product that can be combined with so many things like quinoa, things that we sell in our ranger buckets.
It can be combined with pinto beans.
You can add chicken broth to the water base for lots of things, including macaroni and cheese.
You can have, like, chicken macaroni and cheese or chicken soup macaroni and cheese, or any kind of instant meal. You can add chicken broth to it, and you're going to greatly enhance the nutritional density and the natural flavor of it without using any MSG whatsoever.
So if you look at our product, again, go to HealthRangerStore.com, and if you scroll down, you'll see the ingredients here.
Check this out.
Ingredients.
Only one: organic chicken broth.
That's it.
Country of Origin, USA.
Okay.
That's it.
There's nothing else added.
This is the one chicken bone broth product that you can trust to not be made with artificial flavors or chemicals or MSG or excessive added salt or any of these other additives that are typically in the food supply.
This is just pure, simple, natural, certified organic, sourced in the United States, subjected to our lab tests, from U.S. farmers who follow certified organic procedures and have achieved that certification, which we have verified, because we do all kinds of due diligence on our supply chain.
So this is the cleanest, most pure organic chicken bone broth that you're ever going to find in the marketplace.
And in this format, it stores for a very long time, and you'll love to use it this winter too, coming up.
So check it out now at HealthRangerstore.com and try this product for yourself.
Give it a taste and understand that it is authentic, 100% real, no excitotoxins, nothing that's toxic, nothing that's going to worsen your health, only the one ingredient that's going to enhance your nutrition and enhance your health.
So thank you for supporting us.
Shop with us now at HealthRangerstore.com.
I'm Mike Adams, the Health Ranger.