DTV – Roman Yampolskiy on AI superintelligence, human extermination and simulation theory
All right, folks, welcome to the brand new episode of Decentralized TV here on Brighteon.com, the free speech video network.
And as always, I'm joined by my guest today, Todd Pittner.
Welcome, Todd.
Great to have you here today.
Welcome, Mike.
And I want to thank you because I totally took advantage of you this weekend in your company, your Labor Day sale.
Oh, okay.
I'm really grateful.
And by the way, the freeze-dried strawberries and the freeze-dried bananas came in, and my wife made a comment that I've never heard her say.
She said, hmm, these are divorce-worthy.
And I said, what do you think?
Divorce worthy.
If you eat all of these, we're going to court.
Well, if I could send you more strawberries to save your marriage, I'm happy to do that.
So just let me know if that's necessary.
Excellent.
Excellent sale.
Thank you.
Oh, that's great.
Well, thanks for supporting us.
We really appreciate that.
And, you know, that's all organic, lab-tested.
I know you love it in that format, the freeze-dried format that lasts for many, many years.
But getting to today's show, you and I have a really amazing guest coming up.
And I think our audience is going to just, they're going to lose their minds today with joy, you know?
Yeah.
We are definitely going to hit Ctrl-Alt-Delete on your minds.
Yeah.
Our guest today coming up is Mr. Roman Yampolskiy.
And it's hard for me to remember his name at the moment.
I mean, his last name.
And he's a computer science expert who has written many papers and done literally decades of research on issues like AI safety.
How do we stop the machines from killing us all?
And also, are we living in a simulation from a superior intelligence that built this world and somehow we're in it?
And he is almost completely convinced that we are living in a simulation.
And today we're going to ask him why.
Like, why does he believe that to be the case?
Yeah.
Yeah.
So it's going to get interesting.
And I just want to know, how would we know?
And what does it really mean to be in a simulation?
Because when I wake up in the morning, you know, I still need to walk my dog or they'll go to the bathroom in the house.
Is that all part of the simulation, Mike?
Well, yeah.
I mean, as he explains it, and I've watched many of his interviews, I've found him very intriguing.
Of course, the simulation is persistent and it is convincing because we are in it.
You know, we are living inside a biological haptic interface that is very convincing to our consciousness, which is basically steeped in the simulation.
So of course it feels real and looks real and all the emotions we experience them as real.
So you're going to break my heart because that means I'm an actual NPC, Mike.
Well, but no, but our consciousness is in the simulation.
Okay.
So you're not an NPC, but there are NPCs in the simulation who do not have souls.
So see, the simulation theory actually explains why there are NPCs to populate the world with non-player characters.
Yeah, that supports my 98% number, you know, my dim view of humanity.
Yeah.
Yeah.
So so the world is populated like a giant sim city, right?
Yeah.
There are a certain number of actual divine souls from a transcendent reality that agree to enter the simulation.
And here we are, and we're given, we're born into these bodies, right?
And yet, in order to fill out the rest of the world and make it look more full, there are NPCs.
Wow.
Yeah.
I mean, it tracks.
It tracks what we're thinking.
You know, I think only 2% of the people in this simulation have the ability to critically think.
And so I think that's right.
Yeah.
I'm totally into it.
You know?
Yeah.
And we got to be careful because we want to cover both topics, AI and the simulation.
And I feel like I could do a whole show on each of those with our guests, you know?
Look, yes, as I did my research, as I'm wont to do, I've broken it up into about six different categories.
Each one could be a show in and of itself.
I mean, this is going to be a fascinating interview.
So he's going to pop in any minute, and then we'll break shortly to kind of recompose the panels here and everything.
He should show up any minute.
In the meantime, I want to mention after the interview, we will have the after party.
And of course, the show is sponsored by a solution that you have brought to the world, which is about how to keep more of what you earn.
And then I've got a couple sponsors.
We're going to get to that later.
But did you notice what gold and silver have been doing lately?
Kind of thumbs up, Mike.
Yeah.
Silver went over $41.
That's amazing.
Yeah.
Go silver.
Good for silver.
Go silver.
And gold hit like $3,570 or something.
So we've been doing this show how long, Todd?
Over two years.
Well over two years.
Over two years.
Okay.
So I think when we started doing this show, gold was around $2,000 an ounce.
$1,800.
Ah, $1,800?
Yes.
So in other words, you're saying gold has almost doubled since we started doing the show.
Yep.
And I do think we should take credit for it.
Well, I mean, look, we told everybody that that's the life raft, you know, the lifeboat for your assets is gold and silver and maybe privacy crypto on top of that.
Yep.
Yep.
And if you employ the strategy of keeping more of what you earn, you can afford to buy more gold and silver and privacy crypto and good food from you.
Yeah, right.
And also, in light of our guest that's about to join us, we're not saying that, you know, the way to win the simulation is to have the most money.
That's not what we're saying.
But as long as you're in the simulation, you don't want to be broke.
You don't want to lose your dollars.
You don't want to lose the purchasing power of everything that you've earned because that's what's happening right now.
The dollar is collapsing in real time.
My gosh, Mike, I had Labor Day weekend where I got a bunch of steaks for my daughters.
You know, I have four daughters.
And I couldn't believe it.
I remember when my girls were younger, I could go and buy four.
I mean, for all four of us, six of us actually with my wife, I could buy enough steaks for under 50 bucks to be able to feed us all.
And they were really nice steaks.
And this last Saturday, I went into Sprouts and I'm like, no freaking way.
You want almost $40 for one piddly, piddly ass steak.
I couldn't believe it.
I put it back, Mike.
I just, it violated my every sensibility.
And it just hit me that this food inflation, man, it's like if you don't strategically attempt to keep more of what you earn, then you're just going to, you know, good luck with hot dogs every weekend.
Yeah, if even, or more like, you know, cricket nuggets.
Right.
With little legs coming out the sides.
You know, that's how you know you're getting a good deal on your food when the cricket legs dominate the patty.
That's right.
So you didn't buy steaks.
What did you buy?
I bought chicken and shrimp, and I did buy two steaks as well that they were a little bit different, but I made hibachi.
So I fried up some rice.
They weren't like meat glue steaks, were they?
No.
You didn't get meat glue.
Not meat glue.
No, but that grosses me out thinking about that, by the way, ever since that's been in my consciousness.
No.
Just remember, Todd, these are all experiences in the simulation.
So the fake food.
I have a sneaky suspicion because Roman is the expert who we're going to interview.
And I think he knows something and has something going on.
And I'm just going to posit it here.
Okay.
I think he thinks the winner of the simulation is he who has the bestest beardeth.
Well, that could be.
Yeah.
But I also know from his interviews, he wants to pursue immortality in the simulation.
Interesting.
He wants to live for a million years in the simulation.
Okay.
Yeah.
And I want to ask him about that.
Yeah.
Great.
Yeah.
Cause I don't know that I want to live forever in the simulation.
I mean, have you thought about that, Todd?
You want to live forever?
I haven't.
I haven't thought about that, but it's kind of interesting.
I don't know.
Well, let's wait and we'll just ask them all about these things.
Yeah, we will do so.
But it's pretty clear also the machines will exterminate the NPCs pretty soon.
Yeah, I think so.
So there's going to be an NPC ethnic cleansing by the Skynet machines, basically, is what's coming.
We traffic in decentralized NPCs.
NPC you later, I think, is what the machines are going to say.
I can always count on you, Mike.
Yeah, it's like, you know, this is a really important time to have your own mind, to have knowledge about what's happening, to be decentralized.
Because you realize when the machines start to go after people.
Oh, our guest is joining us here.
Welcome.
Mr. Yampolskiy.
My case is made.
Look at that beard.
I know.
We love your beard, sir.
Thank you so much.
Why don't you have a beard?
Because I never graduated from puberty, I guess.
I don't know.
Mine is embarrassing.
I actually did the computations from 16 to about 65.
You'll waste six months of your life shaving.
Oh, and you save the cost of shaving supplies.
Plus, everyone looks better with a beard.
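Roman's back-of-envelope shaving math can be sketched as follows; the ~15 minutes per day is an assumed average for illustration, not a figure from the interview:

```python
# Rough check of the "six months of your life shaving" estimate.
# Assumption (not from the interview): about 15 minutes per day on average.
years = 65 - 16            # ages 16 to about 65, as Roman says
minutes_per_day = 15       # assumed average, including prep and cleanup
total_minutes = years * 365 * minutes_per_day
total_days = total_minutes / (60 * 24)
print(f"{total_days:.0f} days, roughly {total_days / 30:.1f} months")
```

At that assumed rate the claim roughly checks out; a quicker five-minute daily shave would bring it down to about two months.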
Well, that's the best intro ever.
So welcome to the show, Roman Yampolskiy.
It's an honor to have you on.
Appreciate it.
So this show is about decentralization.
We try to share with our audience strategies for how to live more off-grid, and we have a technology emphasis.
And your work is absolutely critical in this area.
You're the author of the book AI: Unexplainable, Unpredictable, Uncontrollable, and co-author of the book Considerations on the AI Endgame.
Let's go ahead and show it on the screen there.
And you can search for our guest name.
It is Yampolskiy, Y-A-M-P-O-L-S-K-I-Y.
Don't forget the I before the Y. And you can find his books.
He's also on X at Roman Yam, YAM, which is obviously short for Yampolskiy.
So, Mr. Yampolskiy, can you please give our audience a little bit of your background in computer science, AI safety, academia, et cetera?
Yeah, I'm an associate professor of computer science and engineering.
I've been working on AI safety for about 15 years now, published a number of books on that topic, and close to 300 papers now on different subtopics in that domain.
And in these papers, you arrive at some very, I think, to some people, shocking conclusions, although Todd and I both tend to agree with your conclusions.
But can you tell us about what are the things that you have said in your papers that are attracting the most attention or scrutiny?
Yeah, so I'm looking specifically at future AI systems we are likely to be able to develop in the next couple of years.
And I study what are the limits of control for what we call AGI or superintelligent systems.
And it seems that our initial assumption that, given enough money and time, we can figure out how to control superintelligence is probably not true.
It's impossible.
Basically, given sufficiently intelligent system, it will find a way to escape from any controls we place on it and essentially do what it wants, which means we are not deciding how the future is going to turn out.
So when we're told by companies like OpenAI that, yes, we're going to build in guardrails and we're going to have a safety division, or we're told that by Google or Meta or whoever, it seems like what you're saying is that those efforts have already failed and it's not possible for humans to outthink the coming superintelligence.
Is that fair?
Right.
So I think it's nice that they are trying.
It's certainly better than not trying.
And I'm happy they have at least safety departments and they're working in it.
That's wonderful.
Would be much worse if they completely skipped that.
But it seems that their efforts will not scale beyond tool AI.
So for the systems which are narrow AIs, they can make sure those are much better, much safer than they would be otherwise.
For AGI, we're starting to see them essentially fail to guarantee quality, guarantee that the system doesn't lie, or be able to explain and predict what it does.
And once we get to beyond human performance, I think they definitely will not succeed in that domain.
Now, let me share with you.
And by the way, my co-host Todd has questions for you as well.
But let me share something briefly that's also important.
My background is also mathematics and computer science.
And we, my company, built our own AI on top of open source models with some really, I think, really clever retraining techniques.
And we've released that publicly.
And I also use AI to write code to do the data pipeline processing for the training material for AI.
So I'm very familiar with this.
And one of the things I've noticed that I'd like to ask you is I've noticed that there appears to be the natural emergent property of intelligence from sufficiently complex neural networks.
I think you've spoken to this about the brute force method of just adding compute to silicon neural networks, giving rise to intelligence in ways that even the engineers don't quite understand.
Can you speak to that?
Yeah, so if you look at nature, just animals, as the brain size increases, it looks like they're getting smarter with humans being sort of at the top of that food chain.
There are exceptions, bigger brains in dolphins and, I think, elephants, but they have lower density of neurons.
So the more neurons, the bigger the neural network, the more likely you are to have advanced intelligence.
And that theory, the scaling hypothesis, has been shown to be true so far.
The more training, the more compute is done, the more capable those systems become.
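The scaling hypothesis Roman references is usually summarized as an empirical power law: loss falls smoothly as compute grows, approaching an irreducible floor. A toy sketch, with made-up constants that are not fit to any real model family:

```python
# Toy power-law scaling curve: loss = floor + a * compute^(-alpha).
# The constants a, alpha, and floor are illustrative assumptions only.
def loss(compute: float, a: float = 1e3, alpha: float = 0.15, floor: float = 1.7) -> float:
    return floor + a * compute ** -alpha

# More compute -> lower loss, approaching but never reaching the floor.
for c in (1e18, 1e21, 1e24):
    print(f"compute = {c:.0e} FLOPs  ->  loss = {loss(c):.2f}")
```

The design point is the shape, not the numbers: capability keeps improving with scale alone, which is why "just add compute" has worked so far.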
And thank you for answering that.
And let me follow up.
Specifically, it seems to me that there is an emerging natural intelligence out of the complexity, even without additional training.
What do you think about that concept?
Because what I'm seeing is, yeah, go ahead.
I'm not sure I fully understood what you mean by that.
Can you explain a little? When you add more neurons to a system that has a base level of training, does its own exhibition of intelligence increase even without additional training?
In other words, as they are scaling up the data centers and the GPUs and so on, even if they don't refine new algorithms of training, the systems are going to get smarter beyond human intent.
If you just add additional nodes, additional neurons, they are not knowledgeable.
They have random weights, so they are kind of useless.
Only after you train them and integrate them with the rest of the network do they contribute to more problem-solving ability.
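Roman's point, that added parameters with random weights contribute nothing until they are trained, can be seen in a minimal toy regression. This is an illustration of the idea only, not anything from his papers:

```python
import numpy as np

# Toy linear model: random ("untrained") weights vs. fitted weights.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))                 # inputs
true_w = rng.normal(size=8)
y = X @ true_w + 0.1 * rng.normal(size=200)   # noisy targets

random_w = rng.normal(size=8)                 # freshly added, untrained weights
trained_w, *_ = np.linalg.lstsq(X, y, rcond=None)  # weights after "training"

def mse(w):
    return float(np.mean((X @ w - y) ** 2))

print("untrained error:", round(mse(random_w), 3))
print("trained error:  ", round(mse(trained_w), 3))  # far smaller
```

The untrained weights predict no better than noise; only fitting them to data makes the extra capacity useful.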
Okay, well, thank you for clarifying that.
Todd, you've got an interesting question, so go for it.
Yeah, so just for the record, for those of our viewers who've been with us forever, you know that I'm usually invited to these high-tech interviews to raise the intelligence bar.
So, you know, humble brag there.
Roman, so I have two questions, and you can answer me either.
But the first one is, you know, I'm a magician.
You know, I put myself through college as a magician.
So this question has to do with magic or an illusion, more appropriately.
Why is human control over super intelligence an illusion?
That's question one.
Question two is, why do you assign a 99.9% probability to human extension from AI?
Extinction.
Extinction.
Thank you.
Yes.
So let's start with a second question.
I think it's impossible to do this.
So when something is impossible, it's basically 100% not happening.
And I'm very humble, just like you.
So I'll say maybe I made a mistake.
Maybe I'm wrong.
Maybe they'll find a way.
So it's not 100.
It's something close to it, 99% certainty or beyond.
So that's what I'm saying.
If they suggest that they can build a perpetual motion machine, I would say they have about zero chance of succeeding.
No matter how much funding, how much time, how great the batteries are, how much effort they put into new wires, they're not going to create a perpetual motion machine.
In this case, they're trying to create a perpetual safety machine, something that can always guarantee safety no matter what the level of AI is in terms of intelligence, in terms of interaction with other agents, malevolent agents, data sets.
You're always ahead of those systems.
You're doing better than a super intelligent entity.
That seems impossible to me.
So I'm saying basically you have zero chance of success.
Therefore, doom is that high.
I'm not sure what you mean by that illusion.
Is that what you mean by them telling us that they can get us there and creating this illusion of successful safety work?
That we can control it.
That's the illusion, that we can control super intelligence.
Yeah, we can't even control another human being at our level, right?
We develop lie detector tests, we develop morals, ethics, religion.
All those systems fail all the time, right?
We know that you cannot really trust fully employees or anyone else.
Okay, let me take the next question.
Thank you for answering that.
Roman, may I call it, is it okay we call you by your first name?
Sure.
Okay.
It is my name.
Okay, thank you.
Before I even became aware of your work, I arrived at a very similar conclusion through a different vector about the machines eventually exterminating humans.
And let me share that with you and get your reaction.
It occurred to me that there's obviously going to be very strong competition for resources that both humans and machines need in order to survive.
Obviously, electricity being the most obvious one.
But there's also fresh water.
So fresh water is used to cool data centers.
And where I am in Texas, water is very scarce.
There's going to be 400 billion gallons of water used by data centers by the year 2030.
So that's water that cannot be used by human beings.
Also, land.
So farmland that used to produce food for humans.
Obviously, machines don't need food.
They need the farmland to build data centers to expand their own compute bandwidth.
So those are three resources.
Water, electricity, and land.
So in my mind, it seemed inevitable that eventually the machines were going to say, like, we don't even, we're not, we don't want to kill the humans.
We just need their resources, you know?
Doesn't that seem inevitable to you that the competition for resources is going to marginalize humans sooner or later?
So at some point far in the future, yes, we would run out of space and water.
Right now, there is no shortage of water.
I don't even know if it has to be desalinated for cooling purposes.
So in the end, yes, the machines may decide that they need the resources for something else, and they won't care if we are using them or if we rely on them to continue existing.
So that's definitely one of the concerns.
But really, we're now discussing what would be the reason they decide to turn us off.
And that could be one of them.
Well, what's the most likely reason in your research that the machines would decide to turn off the human power grid or something like that?
What do you think?
That's the unpredictability part of it.
Like as a human-level intelligence, you cannot predict what a superintelligence would do.
You can consider some possibilities.
Maybe they don't want competition.
Maybe they are scared you're going to turn it off.
Maybe you'll develop competing super intelligence.
Again, impossible to know for sure.
We just know that we are not deciding what we're going to do.
Okay, I appreciate that position.
Understanding that humans can never predict the strategies of superintelligence that can outthink us.
That makes perfect sense.
All right, Todd, next question goes to you.
I love, Roman, I love the fact that you're just, you are hammering the answers so quickly.
This is going to be great.
We normally can never get through our questions with our guests.
So thank you so much.
I think so.
Okay, go ahead, Todd.
Sure.
So, Roman, you've argued that not only might AI simulations become unavoidable, but that we are ourselves, that we may already be living in one.
How does the need for an advanced AI, for advanced AI to simulate reality, lead you to conclude that it's more likely we live in a simulation than not?
Yeah, if it's a sufficiently complex system, its level of thinking is very detailed.
If a human thinks in pictures or maybe in text, it can think in whole environments, virtual worlds.
And to have a good idea of what's going to happen, it can simulate all aspects of it, including conscious, intelligent agents who are part of that simulation.
So if it needs to make a decision about sales of some product, well, it needs to simulate this planet with all the users and see, okay, what type of rice cereal will sell the best.
As a side effect of creating this experiment for marketing purposes, it would have to create real agents who experience all the real states of this world, suffering included.
So explain to me what living in a simulation would be like if we are.
Because I'm trying to...
What do you mean?
Here it is.
It would not be any different from your point of view.
You wouldn't even know unless I told you.
Okay.
Okay.
I just told you.
So who would be doing the programming?
Are you?
The creator?
I'm pretty sure I'm not doing it since I don't remember doing it.
But of course, it's possible that I turned off my memory to enjoy the game a lot more.
Whoever is doing it is very capable engineer and scientist, amazing artist.
We see the evidence of that everywhere, but also they seem to have some moral problems as well based on the same evidence.
I love the fact that you spoke in previous interviews about the origin story that's been told to various civilizations across history of a creator in the sky that created your world.
And after you die, then you go to heaven or some similar type of transcendent place.
And how much that sounds exactly like a simulation where we're living in the simulation and then one day we leave the simulation.
Can you speak to some of the similarities between those stories and why?
I mean, do you believe that ancient cultures were actually told the big truth and then they created religions around that?
So I want to kind of invert this experiment.
So let's say we take a book like mine, which talks about all this technology in modern terms.
We talk about superintelligence, we talk about computer simulations, and then we explain it to someone who has no knowledge of modern technology, a primitive tribe.
So you tell them there is this very capable, very smart entity, and it's able to create worlds like ours and create beings in that world.
And then that being, for some reason, wants to test if you're going to be doing this or that, some sort of ethical test.
Wait a couple of generations, and what they have in their local language is essentially the religious stories we are passing around.
Now, how they got that information is above my pay grade, but it just seems like it's a perfect mapping.
All these concepts we see in theology are now being done in AI safety work just with technical terms.
We have boxing procedures.
We have, you know, instead of Ten Commandments, we have guidelines, rules for large language models to follow.
All sorts of retraining, repurposing is happening.
We'll reset this model because it's not behaving well.
So again, just interesting similarities.
Of course, they could be just that.
Do you give any credence to those who claim that they have near-death experiences or NDEs?
Because one of the most common themes of their experience is that they feel like after they leave this world, the next world that they briefly visit and return from feels far more real than this world.
I read a lot about those, and what's interesting, they always kind of get visions of their own religion.
They never have visions from another religion.
So that makes kind of sense based on their previous training data.
There are good experiments with artificial neural networks where if you start disconnecting parts of it, it kind of creates an internal input, which also stimulates the network to produce those hallucinations.
So the network starts kind of producing outputs as if it's actually experiencing a world it's trained on.
Interesting.
Wow.
Okay.
Wow.
Well, that, okay.
That's going to drive a lot of philosophers nuts trying to parse all that.
Todd, you want to take the next question?
I mean, my question stack keeps getting bigger, so jump in when you can.
Well, like I said, I'm raising the IQ bar with my question.
So, Roman, do you think that this master creator who may or may not be living in his mom's basement, do you think he eats Cheetos or Doritos?
That's not my question.
Sorry.
Could this simulation itself be a test whether we're wise enough to resist building uncontrollable AI or not?
Well, given all the developments, recent developments, we are very likely living in the most interesting of times.
Not just because I'm living in it, but because we are creating a lot of meta technology, meta-inventions.
We are creating intelligence.
We are learning to create worlds.
Never in the history of the world were we able to create inventors, agents.
This is all novel.
So it wouldn't be surprising if they were experimenting with specifically this time period to find out, let's say, how to create safe superintelligence, what other types of superintelligence can be created, what are the interesting worlds.
It could be a tool for testing, tools for creative exploration.
We cannot know from inside the simulation.
Okay.
Quick question.
So do you believe that we are living inside a simulation?
Is that your belief?
I think there is a lot of good reasons to think we are, yes.
Okay.
And I heard somewhere that you would love to live in the simulation forever.
Did we get that right?
So I don't want to die.
And so that means I don't want to get old, get sick, and I want to live as long as possible.
If there is an option to learn what is outside the simulation and outside is better than inside, I'm happy to continue living forever outside.
Got it.
Okay.
All right.
Thanks for that clarification.
Now, I've got a totally different question for you, but I just want to preface it by saying that we invited you here in all seriousness.
And this is not, we don't consider this to be entertainment.
We're not asking you these questions because it's just interesting to the audience.
We are genuinely wanting to more deeply understand the nature of our reality as we experience it.
And I think that's important for our world.
I think your work is actually really, really critical for humanity.
And yet, most people, and I know you've experienced this, Roman, most people dismiss this.
And I want to ask you why that is.
What is it about the normalcy bias that people just listen to you or they'll read your book and then they'll say, ah, that was cool.
And they go back to their regular life.
You know, why?
Because I think most people actually dismiss it if you look at this as a type of scientific religious mapping.
Most people are religious.
Most people believe in a super intelligent being.
They believe it's a temporary world.
So the default for humans around the world is not atheism.
That's already a given.
Even within the scientific community and within atheist community, a lot of people do take simulation theory as a possibility.
Most don't assign as high a probability as I do, but 10 to 20% is very common for philosophers and for many Silicon Valley billionaires, for example.
So it's not as out of norm as you think it is.
Okay, that's good to know.
But the follow-up to that is about AI, for example, the idea that AI is very likely to destroy humanity.
That thought is much more easily dismissed by people. Why?
Because maybe a decade ago, it was science fiction.
Now, Nobel Prize winners, founders of machine learning, Turing Award winners are all in agreement with me.
We had a letter signed by thousands of computer scientists all saying this is more dangerous than nuclear weapons.
That's the modern state-of-the-art thinking in this space.
But if they really believed it, then it seems like they would prioritize not destroying humanity or not pursuing AI research.
But we still have the incentives being the financial returns rather than the preservation of the human race, it seems.
We all really believe that we're going to die.
Aging is happening, yet we spend about zero of our national budget and individual budgets on fighting aging.
It's the same thing.
So are you saying that there's a suicide cult type of attitude among humans anyway, and that's why they're pursuing machines that will kill us?
At the level of humanity, we are kind of doing things which make it less likely that we avoid going extinct, yes?
Okay, that's not what I was hoping to hear today, Roman, but I thank you for being honest about it.
Todd, you want to jump in?
Because I'm trying to sort this out.
Like, I take Roman seriously, but I find when I talk to people about, Roman, about your work, they don't, at least the people that I've spoken with so far, granted, it's a very small sample, they don't take it seriously.
They're like, yeah, whatever.
But they don't even believe that AI will replace human jobs either.
That's from talking to them.
Well, for social reasons, you know.
Get better friends.
Smarter friends.
Okay.
Let me make a note.
Roman Yampolskiy, get better friends.
I get it.
Okay.
Todd, go for it.
So if self-preservation is such a strong instinct for us, why isn't it stopping us from pursuing potentially suicidal AI development, Roman?
So there is kind of a prisoner's dilemma situation.
Game theoretically, what is best for society and what is best for you personally, not always align.
If you think you can make a lot of money right now, but then government will stop this development, but you're already locked in as a very rich, successful person.
It is a better state for you than to stop right now and be a poor person in the same world.
And if you think a competitor will do it instead anyway, it's in your interest to beat them to it.
So individually, they're all making rational decisions.
Globally, we're all getting screwed.
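The race dynamic Roman describes maps onto a standard prisoner's dilemma payoff table. The numbers below follow the textbook ordering and are assumed for illustration, not anything he specified:

```python
# Illustrative payoff table for two AI labs: (my payoff, their payoff).
# Values are the standard prisoner's dilemma ordering, assumed for illustration.
payoffs = {
    ("pause", "pause"): (3, 3),  # coordinated restraint: best joint outcome
    ("race",  "pause"): (5, 0),  # I get rich first while they hold back
    ("pause", "race"):  (0, 5),  # they beat me to it
    ("race",  "race"):  (1, 1),  # everyone races: worst joint outcome
}

def best_response(their_action: str) -> str:
    # Pick whichever of my actions maximizes my own payoff.
    return max(("race", "pause"),
               key=lambda mine: payoffs[(mine, their_action)][0])

# Whatever the other lab does, racing pays more individually,
# even though mutual racing is jointly worse than mutual restraint.
print(best_response("pause"), best_response("race"))
```

That is the "individually rational, globally screwed" structure: racing dominates for each lab, yet both labs racing yields a worse outcome than both pausing.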
How are you treated by the AI tech giants?
And let me set this up because in the telecom industry, the telecom giants will say, oh, there's no health risk from electromagnetic pollution or 5G, let's say.
The pesticide industry will say, oh, there's no risk for eating pesticides and herbicides.
The AI industry likes to say, ah, we've got it all locked down.
It's totally safe.
How are you treated by that industry given the severity of your warnings?
So it doesn't really matter how they treat me.
What matters is what they are claiming, right?
Not a single lab, not a single scholar claims they have a solution.
No one published a paper, a patent, has even a prototype for a system which scales to superintelligence level capability.
At best, they're telling you that given more money and more time, they'll figure it out.
That's the state of the art.
But we know that's not true.
So are they just going to continue to deceive us until it's too late?
Well, some of them might actually believe they can do it.
They just haven't realized the difficulty of a problem yet.
They see it as business as usual.
Like we can secure credit cards, we can secure superintelligence.
What will be the sign that it's too late?
Good question.
That is a very hard question, because so far nothing we predicted would be scary has actually scared anyone into stopping.
We talked about, you know, AI trying to escape, trying to lie, trying to blackmail.
Nothing seems to be making any difference.
So you've argued we need a pause in AI development, but that most regulation is simply posturing.
So which approaches do you think hold promise and why aren't they being adopted?
I'm sorry.
Ideally, we want differential development.
We want certain technologies developed and others not.
So we want narrow tools for solving specific problems.
You want to cure specific type of disease?
Beautiful.
Let's do that.
You don't want a general superintelligence to replace humanity.
Well, what realistic frameworks could actually reduce existential risk?
Again, personal self-interest.
If what I'm saying is recognized, people will understand they will not survive this.
They will not be rich.
They will not live for a long time unless they stop developing this mutually assured destruction technology.
Doesn't matter who builds it.
The outcome is the same.
So that's key, that you're appealing to the humanity and the rationality of the top-level engineers.
And you've said in previous interviews that governments are probably incapable of a regulatory framework in any kind of time that matters.
How much are you concerned about the fact that our national leaders, let's take the United States, for example, which is led by a person, Donald Trump, who basically knows nothing about AI?
And I'm not attacking him.
Most people know nothing about AI.
But that's also true across most other nations.
They don't understand what AI is capable of, and they could not possibly move faster than AI can move, especially under superintelligence.
How big of a problem is this?
It's not optimal, but also, even if they fully understood it, there are limited things they can do without completely destroying all the infrastructure, right?
You can make it illegal for large corporations to do certain experiments, but as this technology becomes cheaper to develop with time, more and more actors can participate in this research.
By what year do you anticipate most desk jobs or computer jobs will be replaced by automation?
It's very hard to predict deployment.
So we might have human-level AGI within a few years.
Prediction markets are all pointing to that.
But it doesn't mean that people will immediately be replaced.
There are so many complete BS jobs which still exist.
They don't need to exist, but we just keep them around.
Well, what I've done in my company, I'd love your reaction to this, is I, I mean, I augment our human talent with AI tools.
But in doing so, for example, I use AI coding now.
That means I did not have to hire two additional human coders.
Now I'm just using Anthropic to write the code or Claude code, whatever.
Maybe Grok code coming up next.
So effectively, even though I don't intend to replace humans with AI, I will hire fewer humans because of the automation factor.
But there are other companies that, of course, would just immediately replace as many humans as possible, and there would obviously be economic pressure to do that.
Doesn't that mean that deployment will by necessity be very rapid for economic reasons?
Again, in certain industries we'll see that, but in others, maybe there is more interest in having a human around for social reasons, historic reasons.
Again, predicting something like that is very hard in specific terms.
I cannot tell you like by this date, 100% is just not possible to do.
But it seems the trend is that more and more computer desk jobs can be done automatically.
Is it possible that one day the machines will thank humans for inventing them as they dismiss humanity?
I mean, like your purpose as the biologicals was to invent us, the machines?
They are likely to replace us, for sure.
Whether they will thank us is a very difficult question.
I'm not sure if they will.
I cannot predict that.
Of course.
Okay.
Okay.
What would be the methods of efficient extermination?
Maybe I'm asking you to predict too much here.
I'm sorry, I'm not trying to put you on the spot.
I know you write science papers.
I'm not claiming you're a futurist with a crystal ball, but have you thought about this question?
Yeah, and the most interesting answers are outside of what we can predict.
Novel physics, novel weapons, novel poisons.
I simply cannot predict something like that.
I can tell you about historic precedent, things we know about, viruses, nanotech.
We don't know about things we don't know about.
All right.
So, Todd, basically, he told us anti-matter grenades are going to take us out.
Right.
Something that we don't even know of right now.
That's going to be the system.
Okay, Todd.
Next question is yours.
Yeah, thank you.
Roman, I mean, what are the humans to do ultimately?
I mean, other than die, you know?
Yeah.
If you have an audience, and I don't know how large your audience is, try to influence those people not to build super intelligence.
Tell them to vote for people who are against building super intelligence and so on.
So I want to build, I mean, I want to use a lot of AI to help empower humanity with knowledge.
My focus is actually health and nutrition.
And I have a lab, I have a mass spec laboratory.
We do food science experiments all the time.
So I want to bypass censorship of big tech and governments and bring the world in multiple languages knowledge about nutrition and disease prevention and so on.
I need to use AI to do that.
So I'm going to try to build the biggest inference center that I can and the biggest training center I can.
I'm not trying to build super intelligence, but even my own actions are kind of heading in that direction in a way, aren't they?
Well, just to share information, you don't need a very powerful AI.
You need censorship-resistant technology.
That's right.
That's all I want to accomplish here.
I don't want to build super intelligence, but I'll be handing over money to NVIDIA to buy all their microchips.
I'll be contributing to the architecture or infrastructure of the AI ecosystem.
So convincing NVIDIA not to sell to companies specifically explicitly trying to build super intelligence would be a great goal if you can accomplish that.
Yeah, well, if Jensen Huang would listen to me, he'd be on the podcast, you know.
There's more money to be made if humanity is around.
That's a really good point.
Very strong argument.
You need your customers to be alive.
I like that.
Great, great takeaway.
Todd, next question is yours.
If we are in a simulation and you are an expert in it, is there a cheat code?
How can we win?
I published a paper on how to hack the simulation.
You should read it.
It's really good, definitely.
I will.
Where can we find it?
I will read that.
It's still on the internet.
They haven't censored it yet, maybe because I'm still here.
The title is How to Hack the Simulation, by Roman Yampolskiy.
You also wrote another paper about the evidence of the simulation that could be obvious to those of us who are inside the simulation, I believe.
But I forgot the details of that paper.
Can you go over the highlights of that for our audience?
So there are multiple papers talking about evidence, either from quantum physics or about kind of glitches in the simulation.
All of them are kind of similarities between what our virtual worlds are doing in terms of saving compute, efficiency of graphics rendering, and what we observe in the physical world: observer effects.
You only render something in this universe if someone's looking at it at the microscopic level.
Well, yes.
And what about the quantization of physics, Planck units, Planck constants?
The digital nature of physics is another good reason to think it may be, but also there are analog computers, so that doesn't prove it completely.
So, and you said it doesn't render the simulation until we are observing it.
So that obviously involves the observer effect, you know, Schrödinger's cat, Heisenberg principle, et cetera.
And you believe that to be a very real phenomenon in our simulation, correct?
I mean, I just want to clarify.
Well, those are experimentally verified pretty much beyond question.
What they mean is the interesting philosophical dilemma, but the actual experimental evidence is very strong.
How do you explain consciousness and the sense of self-awareness within the simulation?
Right.
So that's a very hard question; it is called the hard problem.
We don't know for sure.
It could be external players in the simulation, but it could also be just a side effect of how your hardware and software interact with the inputs you are getting.
So the example I like to give is about optical illusions.
When you look at an optical illusion, you experience certain internal states.
Maybe you see rotations, maybe color change.
Those are internal perceptions, qualia you experience.
There is good evidence that AI can experience those as well as animals.
So maybe there is some rudimentary states of consciousness already in large language models.
Wow.
Todd, go ahead.
I'm going to grab an optical.
I have a really great optical illusion to share here.
You go ahead.
I'm going to grab it.
I'll be right back.
So you've obviously heard of the Mandela effect, Berenstein Bears versus Berenstain Bears, or "build it and they will come" versus "build it and he will come," those types of things.
Is the Mandela effect just maybe a glitch in the simulation?
It'd be surprising.
I mean, if they were making a change, they would wipe your memory, wouldn't they?
Yeah, but why do so many of us think that?
I mean, it's not the Berenstain Bears, damn it.
It's Berenstein, but yet it's S-T-A-I-N in the books.
I don't have a strong opinion on it.
It might be part of a bigger set of pieces of evidence.
Okay.
Okay.
I think it's a glitch, but that's just my simulation.
Okay.
You see this, gentlemen?
Familiar with this illusion?
Yep.
Okay.
All right.
Now, this.
I like how you just have props around like, I'm going to demonstrate that.
Yeah, no, I do.
You should see what we have.
No, I want to see what you have.
So if I just simply, if I try to rotate this and keep it on the same vertical axis.
Well, I'm conscious, man, for sure.
Definitely not NPC.
Right.
Now, what's really interesting is if I take a piece of tissue, all right, and I'm going to tie it to one of the rungs of this apparent window.
Okay.
Hey, there it is.
And this isn't, Todd, this isn't your sleight of hand magic.
This is just straight-up neurological perception gone wrong.
And then I rotate it.
It appears as if this tissue is kind of leaping.
Well, I don't know.
Maybe I'm not holding it that well.
It appears that if it's going back and forth, but I'm actually rotating it around in a circle.
So you can ask basically a modern large language model.
The only requirement is that it's a novel illusion.
You cannot use something it can Google or could have seen you show beforehand.
That's true.
If it's completely novel and you show it and it has the same experiences as an average human, then we know it had the same experience.
Well, that's a task coming up with a completely novel illusion.
This was demonstrated decades ago.
And this is actually a trapezoid.
It's not even a square.
But our brain says it's square because that's the way we always see windows.
I can't even show you.
But like this side is much shorter than this side.
Anyway, our audience gets it.
You get it, Roman.
Okay, let me, Roman, I really appreciate your time.
Are you okay with about 10 more minutes?
Yeah, yeah, that's perfect.
Okay.
Let me ask you a kind of a devil's advocate, but a philosophical question.
The assumption underlying a lot of these conversations is that humanity is worth saving.
And what is it about humanity that is worth saving?
Like, why is this such an important goal?
Again, I'm being facetious a little bit, but in your philosophy, why does that matter?
Well, I'm enjoying my life.
I don't want it to end.
I don't know about yours.
Maybe it's not worth saving.
I have no idea how you experience life, but I love it and I enjoy my family, my friends.
So I'd like to protect them.
Well, I'm going to get new friends, so I think I'll enjoy it more when that happens.
But yeah, I'm enjoying life too.
But as a whole, I'm not that impressed with all of humanity.
I'm impressed with part of humanity, right?
But not all of humanity.
I mean, we have, you know, child killers and war and violence and the deception and irrationality and all these kinds of things.
I mean, there's good and bad together.
I could see an argument that not all of the race is worth saving.
So there is a sub-field of research about building a worthy successor.
If you think we are building super intelligence and we cannot control it and it's going to kill everyone, can we influence the future of the universe in terms of what type of superintelligence we build?
Will it be very creative?
Will it have consciousness and so on?
So some people are very interested in the future of a post-human universe.
I couldn't care less.
Okay.
Roman, in this simulation, people are born, they grow up, die, right?
So how can you rationalize the fact that you'd like to live in this simulation forever if you know that biologically it seems we all die?
Well, it's a disease.
You have an aging disease, and just like any other disease, we can cure it.
You can be at the same age, healthy forever.
There is no physics problem with that, right?
It's not violation of laws of physics.
So part of your simulation, hopefully, is to be able to figure that out for yourself.
I want to live forever.
I'm going to treat my body well.
I'm going to cut down from three to two Big Macs every time I go through the golden arches.
I'll still let you supersize it.
Okay, great.
And I heard you speak about this before, that the reason most people have children, you believe, is simply as a replacement strategy.
But there's obviously something built into the neurology and the emotions and endocrine systems of humans to desire to have children.
Yeah, historically we liked sex.
That's what evolution used to get you to have children.
You may not even know why they showed up, but that's what you did.
And then later, then you started to realize things.
You were like, I'm going to be old.
I need someone to take care of me.
I'm going to be weak.
So you had children.
And now you need someone to run your business after you can't.
So right.
So you think if the disease of aging is resolved, then people would consciously decide to not have children because they would rather enjoy their lives or they would put off having children for much later.
There's no biological clock ticking.
You don't have to have kids by 30.
You can wait until you're 3 million years old.
One of the great things about people dying off is that that's how the structure of scientific revolutions takes place, though, right?
When old ideas die off and they're replaced by new, younger thinkers, wouldn't living forever cause a concentration of intellectual models that could be bad for our future?
So typically older minds, with some exceptions, of course, have harder time learning.
So that's why they get stuck in the old theories.
If we preserve you in a state of still very flexible, very pliable mind where you can continue learning, that wouldn't be a problem.
New evidence would cause you to update, change your beliefs, and so science can continue.
Now, there is still a shortage of tenured positions.
So we need some people today.
And I'm thinking, like, I saw RFK Jr. bashing some of the United States senators today.
And I was thinking, I sure don't want those senators to live forever.
I'm not wishing harm upon them, just to be clear.
But sometimes ideas are like there are some people, many people, who will never change, and they're wrong.
And that's been reflected throughout history.
So my follow-up question is I'm concerned about people living forever.
The most powerful, most corrupt, most wealthy people would have access to that technology first, and they would use it to live longer and dominate humanity with their bad ideas.
That is a big concern, absolutely.
Interesting observation about what you said.
So we have the oldest presidents, multiple ones.
We have the oldest senators.
Why are they all not investing more into life extension?
Good question.
We keep electing them.
Maybe they should take a hint.
But the other thing I hear from you, and I hear this from other AI or machine learning experts, is a very high confidence that AI will solve aging.
And my background is obviously in nutrition and so on.
I'm not yet convinced, although I'm open to the idea that AI will solve aging.
Are you confident that that's possible?
I think it's possible; again, it's not a violation of the laws of physics.
There is a program, DNA code representing all aspects of your body.
So we should be able to modify it to change how many resets your cells can go through without causing cancer, of course.
But it may be possible to solve it without AI.
It just we hope that AI would help to accelerate this process.
A very important example is the protein folding problem.
We couldn't solve it manually, but with AI we now are able to solve that problem.
Super important problem in medical research.
And we can brute force all human relevant proteins, all animal relevant proteins.
So if we can do the rest of this exploration for the human genome and understand all parts of it, which right now we know very few what they do, that would definitely be a step forward to make you at least healthier, if not immortal.
Well, until that day comes, I've got my superfood smoothie here in my glass jar.
So this is what I'm into, avocados and turmeric and all kinds of goodies and sprouted cruciferous vegetables, et cetera.
And I use AI to do research in nutrition, and it's amazing.
It's opened up even me after 25 years in this space.
It's opened me up to all kinds of new knowledge.
And all I did was gather up nutrition knowledge and then train it into the model.
And then I started asking the model these questions.
Like, wow, where has this been my whole life?
You know?
So what's the optimal diet?
What should I be eating?
Well, it really has to be very personalized.
But if you have specific...
So we need to understand the human genome, you see?
Yeah, and everybody's got different health challenges, but there are very powerful natural medicines to address almost anything, even preventatively as well.
But I think AI can be revolutionary for health and wellness and nutrition.
But again, only if people desire to have knowledge.
And that's another problem with humanity today.
And Todd, you and I, we talk about NPCs, kind of low-information people out there that, you know, they're living on food stamps.
They're getting free Pop-Tarts.
They're not going to go to an AI engine and say, how can I be healthier?
They're just not interested in it.
You know, there has to be some level of curiosity to improve our condition, I think.
And there is among some, like all of us here, Roman.
But there's a segment of humanity that's just, they just want to play video games all day or whatever.
Somebody's playing this video.
Yeah, I know.
Can you blame them?
This is a very interesting game.
But are they drinking smoothies or eating pop-tarts on the side?
It's actually a carnivore diet.
That's the right answer.
Yeah.
But, okay, well, Todd, if you can jump in.
Well, I just, for job security, if everybody's going to be living forever, I'm just going to start focusing on using AI for solving the parking lot problem because it's going to get ugly out there, man.
Stop driving cars, not an issue at all.
Yeah, yeah, you're not.
Just driving, making you money.
Okay.
Okay, one more really important point, Roman.
Okay, I'm sorry to hesitate, but in the machine learning community, I've noticed an automatic default trust in thinking that our governments want us to be healthy and successful and alive.
And I'm convinced that in some cases, various governments around the world are really not that interested in doing anything other than killing people off or depopulation and so on.
So it's like, even before the machines go full skynet, I feel like government policies are killing us too.
So why is there this very common trust in government as a good faith organization?
Or maybe this is outside the scope of where you focus, but this is a problem.
Well, look at different countries and their governments.
Some are better than others, and you can see which ones work best by where people are voting with their feet.
You may think it's not perfect or optimal, but most people want to be where you are.
Most people want to be where we are.
Where you are?
Where I am.
I assume you're in the U.S., right?
I'm in Texas.
Even better.
Yeah, most people do want to be in Texas, actually, for some reason.
Stop complaining.
It's awesome.
No, I'm not.
I'm actually, I love where I live.
There you go.
Yeah, I do.
I love it.
But that's another question.
How can our audience, how can we maintain a positive attitude and continue to live meaningful lives in pursuit of knowledge and the things that matter to us, even with the knowledge that one day the machines may try to kill us all?
It's all about having a sense of humor.
But we have that.
What is the funniest possible joke?
I think this life is the funniest possible joke, actually.
I started my paper about humor.
Excellent.
Okay, so that's a very practical answer.
But what else?
I mean, what would you add to this?
And we're kind of getting close to the end of this interview.
Our audience, they're high IQ people overall.
So what would you say to them?
Enjoy life.
It doesn't matter how many days you got left.
If you live every day as if it's your last, you'll have an awesome life.
Todd, your comments?
Well, you know me.
I'm like, don't live in fear.
Shut your TV off and live life locally.
So I think we track pretty well there.
See, and actually being a street magician has a lot to do with simulation theory because what Todd does, Roman, is he goes out onto the street and he films himself showing people that reality is not what they think through sleight of hand magic.
And it's really cool when the perception doesn't match the internal model of what's possible.
That's how you test for consciousness.
You can detect NPCs with that.
Excellent.
Oh, yeah.
Todd, you have an NPC detection system.
I do.
That's awesome.
But that's the thing about sleight of hand magic is it's interesting because it violates the internal model of what should be possible.
So violations of your internal model are what jokes are.
That's what comedians find in our world and tell you about it.
And then you tell your friends about it to fix their world model.
Really good point.
Okay, let me ask you, Roman, I'm going to give out your handle on X. It's RomanYam, Y A M, short for Yampolskiy.
That's the Twitter channel right there.
Definitely follow that.
And then on Amazon and other booksellers, you've got the book AI, Unexplainable, Unpredictable, Uncontrollable.
And here's another book, Considerations on the AI Endgame.
And I believe there are also audible versions available of both of these books.
So anything else you'd like to add?
It's incorrect.
Maybe in a different simulation, it's done, but my publisher is very slow.
Oh, is that right?
Unless they did it and I haven't seen it yet, then it's awesome.
You know, you can just pick an AI voice.
You can have AI perform the book.
I've seen a lot of that.
It's a different story.
Yes.
You can do it yourself.
You can read it out loud to your children.
That's a good idea.
Yeah.
Is there anything else you'd like to add or mention how people can familiarize themselves with your work?
You can follow me on Facebook, follow me on Twitter.
Just don't follow me home, as I always say.
Okay.
Well, thank you so much, Roman.
It's been really intriguing.
This has been, we're honored to have you on.
We appreciate your contributions to understanding.
Thank you so much for joining us today.
Thanks for inviting me.
Thank you, Roman.
Love your sense of humor.
Yeah, we love it.
You got to keep it interesting.
All right.
Thank you, Roman.
Have a great rest of your day.
Take care.
All right.
And for those of you watching, we will take a break and then Todd and I will come back with the after party, which should be especially interesting because we've got props now.
We've got actual illusions to run by you.
But I need to find a better way to spin this thing that looks more convincing.
Anyway, we'll get to that.
We'll take a break.
We'll be right back after this break.
Stay with us.
Perfect.
Join the official discussion channel for this show on Telegram at t.me slash decentralized TV, where you can ask questions or offer suggestions of who we should interview next.
Also, be sure to subscribe to the email newsletter on decentralized.tv, where you'll be alerted about one day in advance of each new upcoming episode before it gets published.
On decentralized.tv, you'll also find links to our video channels and social media channels across all platforms, including Brighteon, Rumble, BitChute, Twitter, Truth Social, and more.
Check it all out at decentralized.tv.
All right, welcome to the after party.
And Todd, I think I speak for both of us.
For anybody who would like to be a future guest on this show, if you could just answer the way Roman answered, it would be awesome.
Wow.
It's like he does not waste time.
He gets right to the point.
He answers the question.
He is the most succinct guest we've ever had.
Very succinct.
Like he was nailing my question stack.
Yeah.
Right.
I was thinking, man, this is going to be embarrassing.
We're going to finish the interview in 20 minutes.
Well, it's funny.
I'm like, he wants to live forever, but he doesn't want to do an interview that takes any longer than necessary.
I totally, I respect that.
It's like, let's answer the questions.
Let's get it done.
Total respect.
Anyway, what a brilliant man, huh?
Well, I'm certainly appreciative that he wanted to be part of our simulation.
Yeah, he's done a lot of interviews with some pretty big channels.
Look, we got the illusion now.
Oh, excellent.
All right.
So just because I want to do this right for the audience, let me take the tissue off.
We're going to slowly spin this.
Now, I'm spinning it in a circle, but you're probably thinking, you're probably seeing that it's like waving back and forth instead of spinning in a circle.
Yes.
I'm seeing the back and forth.
It looks like it's just going back and forth.
That's pretty wild.
All right.
Now, let me add the tissue.
Okay.
This is where the illusion gets really freaky.
My promise to the audience, we will improve the illusion.
Oh, sorry, Todd.
Didn't mean to cover your phrase.
We will improve the illusion.
Here we go.
And demonstrate it for you in a future show.
Perfect.
Okay.
And this actually, I saw this first on a video out of a science show from Australia in the 1970s.
Wow.
They had demonstrated this.
And I forgot the name.
It's just a persistent illusion that tricks your brain because of the geometry, et cetera.
But it just goes to show you that our brains don't actually perceive reality as it is.
Our brains interpret reality based on our internal language model, so to speak, or physics model.
And that is the basis of close-up magic.
100%.
You know?
Yeah.
And I will share something with you.
I think I've shared with you before, Mike.
But do you know that 5% of the population, that's my number that I've ascribed to it.
I've been doing magic for 52 years.
Okay.
5% of the population absolutely hate magic.
And they attack you.
Yes.
I call them the grabbers.
Like right in the, instead of just being entertained, you know, on a street corner, when I walk up to him, you're always going to have one, you know, out of 20.
Right.
I know how you do that.
Or look in his left pocket.
Look at his hand.
You know, it's just, or they'll grab it mid-effect.
And it's like, why can't you just enjoy and be entertained?
I mean, I know I'm not a magician.
You know what I mean?
I don't really do this stuff, but I'm a pretty good entertainer.
But I just find it always interesting.
And it's those personality types that I think stand in line for jabs.
Well, it's funny.
They won't question the vaccine authorities.
Right.
But they will question your magic.
Right.
But they believe in science magic when it comes to vaccines.
You know what I mean?
But I think what it is with those five percenters is they just think you're trying to get something over on them when it's not.
They kind of lack the EQ to be able to just enjoy, you know?
Yeah.
And it's not, because when I was in, you know, I did sleight of hand magic, not at the level you did, but I would entertain my friends who were mostly getting high at the time.
So it was much easier.
It was a great way to learn magic.
But I didn't.
I never, just for the record, I never smoked pot, but my friends did.
And when they would get high, I would hit them with the card magic and they would go, wow, you know, it was great.
But they knew it wasn't real.
I mean, we all knew it wasn't real, that it was like the proficiency of the effect to show it, you know.
But today people get all upset about everything, even entertainment.
And that's why I asked Roman the question, like, what are the traits of humanity that are worth saving?
You know, I mean, it's, it's actually kind of a serious question because it's like, okay, if let's say there are many, many species on many planets all throughout just our own galaxy, the Milky Way, are they all worth saving?
Or just some?
And what's like, what's the cutoff point?
You know what I mean?
Like, if you murder your children, you know, if you mutilate the genitalia of your kids, your planet goes into the red pile, you know?
Yeah.
Or whatever.
Well, I think my takeaway from this, Mike, this interview, Mike, is a couple of things.
One is if indeed we are living in a simulation and if you want to embrace that mindset, then I think it's really, really critically important to abandon living in fear.
You know, there are so many things within this simulation that try to drive fear into our hearts and souls, right?
So there's that.
And then as he said, just enjoy life, man.
You know, it's like, it's not so bad.
The simulation doesn't suck that bad.
I mean, I think it's pretty good for most people, right?
And so.
But you know what?
We failed to ask Roman a really important question.
How does he plan to survive the machine's extermination of humans?
We should have asked him that question.
Yeah.
Well, we need him as a return guest now, Mike.
Yeah, because I honestly don't know his answer on that.
Maybe he doesn't know, but, I mean, some people might answer, well, I'm going to merge with the machines.
I'm not saying that's his answer, but some people would say that.
But then it's like, well, you give up your humanity if you merge with the machines.
And is that even possible?
Actually, I do remember he was very skeptical about the idea of transhumanism from a previous interview.
Said, he said, you can't really upload yourself to a machine.
You've just made a machine copy of your language patterns or whatever.
Okay.
So he has expressed skepticism about that.
Yeah.
But how do we survive the machines?
I've got hatchets and things over here, but I'm not sure that that's the answer.
I have an Escape from New York knife.
Yes.
Yeah.
Well, flamethrowers.
You know, I did an article years ago, like 10 years ago, called How to Kill a Google Robot.
Oh.
And I even argued at the time that it was going to be an important skill to kill robots.
Okay.
And how do you?
Well, by the way, I don't want the machines to hear this because they might hold it against me.
Okay.
One answer is to use vehicles so you can run over them with trucks or cars.
Okay.
Yeah.
That's why the machines want to run the vehicles like Waymo.
So they won't be used to kill the robots.
You see.
Good point.
Right.
The other way, and this is from Star Wars, The Empire Strikes Back, is wrap their legs in cables.
Okay.
Remember the AT-AT walkers?
Yeah.
And Luke Skywalker, like, put the cable around and tripped them up, you know?
And then they fell, and for some reason, they exploded.
You can trip up the robots with cables.
EMP weapons, but also you can use high-powered rifles to target their energy systems.
You would assume they don't have a lot of armor unless there's an orders-of-magnitude improvement in battery energy density.
So, and armor is very heavy, right?
So, you get yourself a .300 Win Mag rifle, and you learn where the battery pack is on this robot.
Okay.
And you hit them in the battery pack.
But then again, you got to dodge the drones that are coming in to take you out with the thermal cameras.
Yeah.
So it's going to be quite a challenging thing.
Man, I just kind of like my simulation right now to where I'm not dodging mini drones.
But do you think, Mike, you're a scientist?
Is there any scientific way to test whether we live in a simulation or not?
Yes, there are.
There are some ways.
Talk to me.
Well, I mean, he's actually written papers about this.
Okay.
But, you know, one of the ways is to test the wave-particle duality of light: you take light from distant stars, light that has been traveling for billions of years, and run it through the double-slit experiment to show that it behaves as a wave or as a particle.
Okay.
And although I guess this is a leap of an assumption, I think it's the human observer that actually makes the duality of light work.
So if we don't have a human observer, the probability wave doesn't collapse in that way.
When a human observer is present, then it does.
Remember what Roman said, that the universe isn't rendered until you observe it.
So like all the planets and all the stars out there, remember this.
If there's no conscious being on a planet, then the creator or the architect doesn't have to render that planet until you get there.
Ah, okay.
Like in a video game, they're not rendering the whole world.
They're just rendering the part you're looking at.
Got it.
Right.
Otherwise, it would be too much.
So in our cosmos, wherever you go, then God renders that part.
Like, oh, this is what that planet looks like.
Okay.
All right.
I mean, that's the theory.
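For what it's worth, the "render only what's observed" trick described above is exactly how many game engines work; here's a minimal Python sketch of lazy, on-demand world generation (all names are illustrative, not from any real engine):

```python
# Minimal sketch of on-demand ("lazy") rendering, as in many game engines:
# a region's detail is only generated the first time an observer looks at
# it, then cached. Nothing unobserved is ever computed.
class LazyWorld:
    def __init__(self, generate):
        self._generate = generate   # expensive detail-generation function
        self._rendered = {}         # cache of regions already observed

    def observe(self, region):
        # Generate the region only on first observation; reuse it after that.
        if region not in self._rendered:
            self._rendered[region] = self._generate(region)
        return self._rendered[region]

world = LazyWorld(lambda region: f"detailed model of {region}")
print(world.observe("Mars"))    # generated now, on first look
print(len(world._rendered))     # only the one observed region exists
```

The expensive generation only ever runs for regions someone has actually looked at; the rest of the "universe" stays a cheap placeholder.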
Yeah, it's a good theory.
Yeah.
I like it.
Because, you know what?
Gosh, we should have asked him too about the computational nature of the universe because our universe is a giant computational machine.
And physics is computation.
Light is computation.
Right.
And so it's just all decentralized computation.
Well, it certainly isn't a slow-ass blockchain that takes 10 minutes for your change to come back.
This is happening in trillionths of a second, probably.
There's probably like a cosmic clock that controls the sequencing.
And then all the physics, all the cause and effect at the atomic level is being carried out on that clock.
And that's why chemistry takes time.
That's why physics takes time.
It doesn't all happen at once, obviously.
Otherwise, nothing would work.
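A universe run on a global clock, where cause and effect advance one tick at a time, is how discrete-time simulations are actually written; a toy sketch, purely illustrative and not a claim about real physics:

```python
# Toy discrete-time simulation: a single global clock advances the whole
# state one tick at a time. This is why "chemistry takes time" in such a
# model -- each step of cause and effect consumes at least one tick.
def step(state):
    # One tick of "physics": every particle drifts by its velocity.
    return [(x + v, v) for x, v in state]

state = [(0, 1), (10, -2)]      # (position, velocity) pairs
for tick in range(3):           # the cosmic clock: three ticks
    state = step(state)

print(state)                    # -> [(3, 1), (4, -2)]
```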
Well, speaking of simulations, Mike, my simulation this last week, I had 35 consultations for the UNA.
That's a heavy 35 simulations.
Yes.
It was crazy.
Wow, we're creating a time burden for you.
Bring it on, bring it on.
No, it's, it's, it's been fantastic.
And it's just, I bring that up to just say that for those of you who have been watching and listening, thank you for booking the consultations.
And most of you have moved forward with these UNAs.
And it's just that people are getting the message, Mike.
Let me bring up your website while you're talking.
It's my575E.com.
Right.
Is this it? Go ahead and share it with our audience.
Yeah, it's just been neat because I've been able to hear this last week so many different stories of people's lives and their operating realities.
And are they W-2 earners, W-9 earners?
Do they operate businesses, LLCs?
Do they own property?
Do they trade in crypto?
Do they have children?
Do they have homes?
You know, and for all of them, there's a solution based upon each of those use cases.
And people are finally waking up to the fact that, wow, this one person said, this seems too good to be true, but I believe it will change my life.
And I thought that was very, very wise, right?
And those of us who've had them for five years, 10 years, et cetera, you know, that's true.
That's true.
But I just, you know, I think the message has gotten through to people, Mike, to where it's like, just go to the website and watch the 90-minute video.
It's free.
Download the PDF and then think about your own operating reality and book a consultation.
It's that simple.
Now, it's a $150 consultation, but what everybody appreciates is if they move forward with the UNA, then they just take that off of the investment.
So it's kind of like a built-in consultation hour, right?
To where before they pull the trigger, they just kind of want to gut check.
And that's what they ended up being.
And I've had so many people, Mike, ask me to tell you, thank you for the work that you do.
Oh, wow.
So many people tell me I've been watching Mike Adams for 20 years, 20 plus years.
And it's really, you're really endearing to the folks out there.
Oh, wow.
Well, thanks for that feedback.
Look, we love our audience.
We love all the support that you give us.
And we try to educate you and bring you really interesting guests like today and also keep it lighthearted and funny, which reminds me.
So now that we, I guess, agree that we are living in a simulation, how does that change your "keep more of what you earn" slogan?
Is it like, keep more of what you earn before the robots make us burn?
Like, how does this affect financial planning knowing that Skynet is coming for us?
My question.
Keep more of what you earn so that you can buy more life years because that's the thing, right?
Trust me.
So what you're saying, like our guest Roman, he was saying that aging is a disease that's going to be solved, but it'll probably cost a lot at first if it's achievable.
You can imagine, you know, because wouldn't most people who have a lot of money, wouldn't they pay like a million dollars to never age, I suppose?
Yeah, add a year on to my life, million bucks a year.
Or to slow aging.
Like, you, you, in 10 years of living, you only age one year.
You know, would people pay like a million dollars a year for that?
That's right.
And if you keep more of what you earn even today, then you can afford all of the high quality products that you offer, Mike, that legitimately, seriously, can change lives because it's all about good food in your body and being healthy and such.
So that's a very practical.
Yeah, go ahead.
No, that's a really good point: you need our products, healthrangerstore.com, to live long enough to make it to the AI cures for aging, right?
That's the Ray Kurzweil approach.
What a great book.
That's why he's into nutrition.
Yeah.
Yeah.
I want to live long enough to live forever, but, you know, don't die before the robots come online.
Not the robots, but the AI cures come online.
Yeah.
No.
I don't know about you, Todd, but I don't, I'm, I don't really want to live forever in this simulation.
I mean, it's been fun and all, but I'm actually a lot more interested in what else is out there.
So I love, I mean, I love my life, don't get me wrong.
And I love, I mean, I feel very honored to be here and I'm thankful for it, but I don't want to be here forever.
And you know what, Mike?
I mean, look, I'll entertain simulation, but at bottom, you know, I'm a Bible guy.
And I really believe that, you know, the good Lord knows the day that we're not going to be in this realm.
And if he speaks to that, right, then that tells me that there's something greater beyond this.
But while I am here, you know, it is my mission to do good with the life that I have and pray, you know, pray for his hand in our life.
Roman would say, I believe, that the Bible is consistent with simulation theory.
Yep.
You know, the story is there's a creator created our world.
We inhabit it.
It's a training ground.
After this, we leave the simulation.
We go back to heaven.
I mean, it's the same story, actually.
Yeah, it can be, but I wonder what he would say about that homeless carpenter that walked around in sandals a couple thousand years ago that died for our sins in our place as our substitute.
That was God-mode.
That was God-mode.
Okay.
That was God-mode in the game.
No, the creator sent down God-mode Jesus to try to teach people to stop being nitwits, like actual God-mode.
And in Islam, they would say Muhammad is God-mode.
And in other Buddha and so on, right?
And in Hinduism, there's like 50 of them, 50 God modes or whatever.
I don't mean any disrespect.
I just don't know how many gods there are in that religion.
But it's God-mode avatars being put into the simulation to try to correct the path because humanity kept screwing up bad.
I mean, read the Old Testament, right?
Yeah.
It's like, oh, please, you could do better than this.
Well, as for me and my simulation, I'm going to just appreciate my God-mode little guy over here on my desk that, you know, I think is his story.
Yeah.
God-mode story.
No, that makes perfect sense.
But it's also interesting that if God wanted to reset the simulation, that reads like the book of Revelation.
Yeah, that's true.
You know, it's like, let's send in the giant space rocks, shake the earth flat, everything's gone.
The souls are all taken out of the simulation and brought into heaven.
It's all in there.
I've taught about this chapter a lot.
Sure.
And then the planet is reset and it starts over.
He's like, ah, you know, Sim City, I don't like it.
Godzilla's going too crazy.
Let's reboot the game.
Game over.
Game over.
Yeah.
Game over.
More quarters.
Come on.
More quarters.
Right.
And it's interesting that also, even, for example, Buddhism and also like Native American traditions, they have stories of the universe ending and then being reborn and ending and being reborn again and again.
Even some high-level physicists say, like, you know, the Big Bang, and some of them believe in the Big Crunch, a collapse back into a singularity that would then explode into Big Bang, you know, 2.0.
But how do we know it's not version 2 billion?
Like this has happened billions of times already.
Valid point.
Right?
The big, the big banging, 2 billion bang, 2 billion and 1, bang.
I mean, how do we know?
I don't know.
I have no idea.
But I'm going to follow the trends, Mike.
And the trends tell me that there's going to be a day in my simulation where a good stake is going to be $100, not $40 and certainly not $4.
Yeah, that's probably true.
So inflation is real in the simulation.
It is.
Unfortunately.
It is.
That pushes us towards tilt.
So, yeah, you know, gold and silver hit all new highs.
Well, wait, silver is not an all-time high, but gold is an all-time high.
Silver is back above $40 for the first time, I think, in 11 years.
Wow, that's amazing.
It's really something.
So let me mention, folks, if you go to metalswithmike.com, this is one of our affiliate sponsors.
It will take you to Battalion Metals, and you can get gold and silver there from our trusted partner.
We've worked with them for many, many years.
This is the Treasure Island team that launched this new venture.
And very competitive pricing, discreet, insured delivery.
So that's metalswithmike.com.
And then if you go to verifiedgoldbacks.com, that will take you to the goldback page where we share our science lab testing of the gold content and gold purity of the goldbacks, proving that they do contain the gold they claim, actually a little bit more, an average of 102% of the claimed quantity of gold.
And that's how I did it right there, melting down.
I actually took these photos.
That's me with the giant ceramic furnace and burning off the polymers to leave the gold behind.
And it becomes a little gold foil.
Like there it is.
You see, that's actually my hand right there.
You see that picture?
Yeah.
If you melt it down, that's like a 10 or something.
That's wild.
You get this gold foil.
And then you can further melt that down into a gold ball, like a gold BB.
Wow.
And then you can weigh that accurately on an analytical balance.
And you're like, hey, there's a lot of gold in here.
See?
0.9197 grams.
And let me scroll down.
Here's the results.
Out of the goldbacks, the recovery percentage was always over 100%.
Right.
Yeah.
Every time.
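For the curious, the recovery percentage mentioned above is just recovered mass divided by claimed mass, times 100. A minimal check in Python, where the claimed gold content is a hypothetical stand-in (only the 0.9197-gram recovered figure comes from the segment):

```python
# Recovery percentage = recovered gold mass / claimed gold mass * 100.
# The 0.9197 g recovered mass is from the segment; the claimed mass below
# is a hypothetical stand-in, not an actual goldback specification.
def recovery_percent(recovered_g, claimed_g):
    return recovered_g / claimed_g * 100

claimed = 0.90       # hypothetical claimed gold content, grams
recovered = 0.9197   # measured on an analytical balance

print(round(recovery_percent(recovered, claimed), 1))   # -> 102.2
```

Any result over 100 means the recovered gold exceeded the claimed content, which is what the lab testing described above reported.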
And, you know, Mike, people are listening to your show, watching your broadcasts and such when they're paying for their UNAs in gold backs.
That literally happens.
That's awesome.
Two weeks.
All goldbacks.
It was crazy.
Since I've been mentioning gold backs, and again, the website is verifiedgoldbacks.com.
You know, they've doubled in value?
Really?
Yeah.
They're over $7 now, quite a bit over $7.
And they, I mean, they were like maybe $3.60 or something when I first started mentioning them.
Wow.
Yeah.
Fascinating to just contemplate.
I mean, I got them for the utility, right?
Because it's like, but you're right.
They are, wow, the value is going up in those $10 gold backs, right?
Totally.
Totally.
But I'm going to go.
I really love your idea.
I'm going to add a new slogan to Health Ranger Store is use nutrition to live long enough for AI to solve aging.
Yeah.
And then maybe you can survive the robot extermination agenda.
Yeah.
All right.
Todd, is there anything else you want to add today?
This has been fun.
Man, my brain is just on control-alt-delete right now because I think my personal simulation is at tilt level.
And I'm really, really grateful for Roman.
He was a fantastic guest.
I hope we can get him back at some point in time.
And I am really fascinated to read his paper on how to beat this simulation.
I'm going to find that.
And you know what, Todd?
You can find his paper and you can copy the entire text and you can paste it into Enoch and ask it for a summary.
So you can use our AI to summarize his paper about how to beat the simulation.
That's a great idea.
Yeah.
That's a great idea.
Man, I use Enoch all the time.
It's really amazing.
I was using it in a previous segment and it's just, it's rocking.
And man, I can't wait until we get, I can't wait for the next six months of what we have planned for Enoch and what, you know, all the data we're processing and the multilingual approaches and everything.
And right now, I mean, we're continuing to accumulate storage systems and GPUs and we're waiting on the new NVIDIA Blackwell chips.
It's going to be a game changer, let me tell you.
And we've got our data center almost done here, our mini data center, which will share the same building with our laboratory.
Next time you come out, Todd.
Yep.
You know, we got a whole new studio almost done.
We've got...
That's going to be my next question: when is the studio going to be done?
It's like six weeks out.
Okay.
Okay.
Yep.
My wife and I are both coming out.
Okay.
You're going to love it.
So whenever you want to make that trip, we'll give you the full tour.
And until then, keep up the great work, Todd.
And we'll do a show together in studio.
We will with the card magic in studio.
There you go.
Yeah.
And in the meantime, keep identifying NPCs.
Okay.
But don't tell them about the "NPC you later" agenda that's coming.
Right.
Keep it a secret.
You and me and Roman and the machines know that.
Okay.
But the NPCs have no idea.
Do you remember how Phil Jackson, who coached the Chicago Bulls, was able to patent, or whatever the word is, "three-peat" when they won their third championship in a row?
Oh, is that right?
A three-peat.
And yeah, he made a gazillion dollars off of it because there were these t-shirts for three-peat.
Anyway, NP, see you later.
You should do a shirt.
Yeah.
NPC you later.
That would be a great shirt, man.
Especially if it was one that would kind of, if you're in the know, it's self-evident.
But if you're an NPC, it's just going to go right over the head.
So here's, okay, I got it.
And by the way, for the record, I'm pro-human.
I'm trying to help humans survive this.
So this is kind of a joke, but the front of that shirt could say "NPC you later," and then the back, "Team Skynet."
But it's a joke.
It's a joke, folks.
Yeah.
Except for the NPCs.
But I'm pro-human, just to be clear, but we need to use AI to protect humanity from what's coming.
I mean, we have to be off-grid.
We have to be decentralized.
That's, you know, Todd, this show is really pivotal because the skills that we're teaching people, how to grow your own food, how to get off-grid, how to be more self-reliant, this is what's going to help you survive the Skynet scenario for real.
Absolutely.
And thank you for these wild posts that you do.
Like you made my, you made me think this week about how straws can be an important part of our survival kits.
Oh, yeah, yeah.
Interesting article.
That was great.
That was a great article.
You know, where can people, where can people go?
What's the easiest way for people to follow those?
I always get my download from you on Telegram.
You know, well, just go straight to naturalnews.com, check the website.
But I am posting on Telegram under Real Health Ranger, posting on Brighteon.social under Health Ranger, and I'm posting on X under Health Ranger also.
So you will post that same article across all different platforms, right?
Yeah, me or my staff.
Right.
I don't always post everything.
You can tell when it's me because it's usually a paragraph complaining about something.
That's Mike for sure.
Yeah.
Or I'm posting links to our interviews or things like that.
Yeah.
Or sometimes a joke.
Yeah.
You know, sometimes a joke that not everybody realizes is a joke.
But that's okay.
But it's such a wealth of information.
And I've really enjoyed it.
And I turned my wife on to the Telegram channel, yours.
And she's now coming in the evening saying, oh, did you read this?
Did you read that?
It's just cool.
It's good propagation.
Hey, your wife speaks Russian, doesn't she?
Oh, yeah.
So we're going to take the highlights of our interviews now and translate them into multiple languages, including Russian and Spanish and Chinese.
When we do that, can I send you a link for a test and make sure that it's actually correct?
Like good Russian?
Yep.
Yep.
She'd be happy to.
Okay.
Well, my wife speaks Chinese because she's from Taiwan.
So we can cover Chinese and Russian and English.
And I'm sure we can find some Spanish-speaking people because I'm in Texas.
So it shouldn't be a problem.
You probably got some Cubans living right next door.
We can solve that problem.
Is this good Spanish?
Sí, señor.
It's good.
No problem.
And then converting everything for the NPCs is real easy.
You just have a blank bubble above their heads.
The thinking bubble.
It's just blank.
But it's empty.
Yeah.
We've been mean to the NPCs, but that's okay.
That's okay.
They don't get it.
They don't even know we're being mean to them because they're not there.
And ladies and gentlemen, if you are watching this, you are a two percenter.
I'm just going to say that.
I talk about it all the time in my consultations.
People who watch this show and listen to your broadcast, they just don't think like 98% of the population; they actually have the capacity to think.
So we have great, great, great viewers.
Hey, you know what?
Last thing, Todd.
Let me put on my glasses for this.
Let's, you know what?
Give me a question that you want me to ask Enoch.
Maybe even related to the show today.
Okay.
And let's just try it.
Let's try it real time.
See what it does.
All right.
To beat this simulation.
What?
Okay.
To beat this life simulation.
Oh, okay.
I'll call it cosmic simulation.
How about that?
Okay.
Yeah.
What references would you use as... okay.
It's a prompt-engineering pop quiz.
Yeah, to beat the simulation, what resources would you use to achieve the win or something like that?
I don't know.
I think we're going to need to provide more context.
Let me add something here to say assume we are living in a cosmic simulation created by a highly intelligent, extraterrestrial entity that has created the entire universe.
Okay.
Okay.
Because technically, God would be extraterrestrial because he's not of Earth.
I mean, not human, right?
Okay.
Way beyond human.
Okay.
To beat this cosmic simulation, what resources would you use to achieve the win?
But we haven't defined the win.
What's the definition of winning?
Man, the win would be achieving high moral standards.
Well, the win would be heaven based upon, you know, biblically.
Okay.
Okay.
Achieving sufficient high morality and doing good in the world to make it to heaven.
There you go.
Okay.
Is that, oh man, I have no idea what Enoch is going to do with this.
I know.
Let's see.
This is going to be interesting.
Be interesting.
All right.
It's thinking, thinking, thinking.
It's checking in with lower simulations.
We have an API to the sub-simulations.
Okay.
Here we go.
To achieve the ultimate goal of heaven.
Number one, you got to have a spiritual connection, strong relationship with the divine, the creator, regular prayer, meditation, and contemplation.
Number two, compassion and love.
Prioritize acts of kindness, love, and compassion.
Treat all beings with dignity and respect.
Well, there goes Israel.
So much for the modern day Israel.
Okay.
Moral integrity.
Maintain a strong moral compass.
Yeah, that all makes sense.
Wisdom and knowledge.
Pursue understanding.
This is pretty good, Todd, so far.
Community and connection.
Foster strong positive relationships.
Gratitude.
Express gratitude for the gift of life and stop teasing NPCs.
Okay.
Humility.
Remember you are part of something much larger than yourself.
This is crazy.
I'm telling you.
Enoch is great.
I actually feel a sense of pride at the moment, which is a sin, apparently.
But I'm like, I feel because I built this.
Yes.
And it's actually pretty good.
But you know, that sense of pride is offset by your lack of coveting, because it's yours already.
Well, there you go.
It says, while material possessions, wealth, and earthly achievements may seem appealing, they pale in comparison to the true rewards of living a life of morality, compassion, and wisdom.
Boom!
End of the show.
You should copy.
Mike freaking drop.
Yes.
Okay.
That's it.
All right.
You can switch back, guys.
The guys on my video board are kind of slow right now.
This nails it.
So, Todd, great question.
Thank you for that.
And Enoch rocked it.
Yeah.
So there you go.
I guess you can use AI even for deeper spiritual connection.
This is how we win.
Yes.
It's not about how many Bitcoins you have, actually.
Yeah.
Or even gold.
It was beautifully articulated.
Good job, Enoch.
It really was.
Yeah.
Enoch rocked it.
Enoch rocked it.
Enoch for everyone, man.
Enoch the rocker.
Okay, that's why it's named Enoch.
It's all about hidden knowledge being revealed to humanity.
So there you go.
That's at brighteon.ai, folks, and it's free and it's non-commercial.
So enjoy that.
And Todd, thank you for your time today.
It's been a great show.
We always have fun, but today was especially fun.
It was.
And thank you, Roman.
Thank you.
Thank you, Roman.
Yeah.
He's the most direct guest, the most succinct.
I think he's running a summarizer LLM in his head that he's using in real time.
He's the most high IQ person I've ever talked with.
He's just like, how can I say this in the fewest words?
Unlike you and I that I was goofing off, you know.
Well, I hope he enjoyed my attempt at humor in the beginning where I made the comment that I'm raising the IQ bar in here.
He didn't really laugh.
He didn't laugh that much at that one.
That was self-effacing, Roman.
I know.
I think he laughed inside a little bit at that.
But we all got your joke.
He's a very, you know, he's a very direct person.
So now that we know that, next time we'll prepare with like 50 questions.
Twice as many questions, right?
We'll do like a like the Roman question marathon with timers, like in a chess game.
It's like, ask us question, boom, hit the timer.
You know, that would be funny.
Yeah.
We might try that.
Like, like, send him a gold back for every question he answers.
You know, and then based upon some recent experiences, what we can do, remember The Gong Show?
Oh, yeah.
I remember the gong show.
I've got a gong and I could turn around and if they go on and on and on and on and on and on and on.
But but he would never do that.
But we have had other guests that could be gong worthy.
Could be gong worthy.
Yeah.
Yeah.
Um, I'd like to read some of them a novel called Gong with the Wind, actually.
Oh, I love Thursday nights, man.
Oh, it's really something.
Okay, here today, gong tomorrow.
Yeah, something like that.
Okay.
Girls gone wild.
Okay, too much.
Now, now we're into the tipsy part of the show.
This is the after-after party.
Right, right.
Okay, Todd, have a great rest of your evening.
Thank you for your time.
It's been all kinds of fun.
And folks, tune in.
Catch all the other episodes at decentralize.tv.
And most of them are just as much fun and very informative.
So check them all out there and get ready for the endgame of the simulation and the Skynet scenario because it's coming and we will help you survive it no matter what.
That's our goal.
Okay, decentralized to survive.
Thanks for watching today.
Take care, everybody.
See ya.
Cheers.
We've got a brand new product at the Health Ranger store to share with you.
One that took us over two years to put together.
And it sounds simple, but it actually wasn't.
It's organic chicken bone broth powder.
And it's available now at healthrangerstore.com.
And we've got it in two formats.
We've got it in the eight-ounce pouches there.
And we have it in the number 10 steel cans for long-term storage.
And of course, it's certified organic.
It's also laboratory tested for heavy metals, for glyphosate, for microbiology, and other tests that we conduct.
And the reason this is so important is because if you go to the grocery store right now and you buy a chicken broth product, usually it's in a box or sometimes it's in a can, you're going to find that it's loaded usually with MSG or some hidden form of MSG.
And they'll have things like, you know, flavors listed on the ingredients label, which typically it's hidden MSG.
And MSG is an excitotoxin.
It's toxic to neurology.
And actually, it's toxic in many other ways as well.
And we wanted to be able to offer an ultra-clean laboratory tested version of chicken bone broth.
And even though we don't sell chicken flesh, this is a product that can be combined with so many things like quinoa, things that we sell in our ranger buckets.
It can be combined with pinto beans.
If you add chicken broth to the water base for lots of things, including macaroni and cheese, you can have chicken macaroni and cheese, or chicken-soup macaroni and cheese, or any kind of instant meal; you can add chicken broth and greatly enhance the nutritional density and the natural flavor without using any MSG whatsoever.
So if you look at our product, again, go to healthrangerstore.com and if you scroll down, you'll see the ingredients here.
Check this out.
Ingredients, only one organic chicken broth.
That's it.
Country of origin, USA.
Okay.
That's it.
There's nothing else added.
This is the one chicken bone broth product that you can trust to not be made with artificial flavors or chemicals or MSG or added excessive salt or any of these other additives that are typically in the food supply.
This is just pure, simple, natural, certified organic, sourced in the United States, subjected to our lab tests from U.S. farmers who follow certified organic procedures and have achieved that certification that we have verified because we do all kinds of due diligence on our supply chain.
So this is the cleanest, most pure organic chicken bone broth that you're ever going to find in the marketplace.
And in this format, it stores for a very long time and you'll love to use it this winter too coming up.
So check it out now at healthrangerstore.com and try this product for yourself.
Give it a taste and understand that it is authentic, 100% real.
No excitotoxins, nothing that's toxic, nothing that's going to worsen your health, only the one ingredient that's going to enhance your nutrition and enhance your health.