All Episodes
Feb. 14, 2025 - Health Ranger - Mike Adams
01:02:50
Sherri Tenpenny and Matthew Hunt join Mike Adams to talk about the benefits of AI in healthcare

Welcome to today's interview on Brighteon.com.
There's a big deal event coming up just in a couple of weeks.
And Dr. Sherri Tenpenny and Matthew Hunt, who you'll meet here in a moment, are spearheading this event, which is about the AI takeover of your medical freedom.
And, you know, we understand that AI can be useful in certain practical ways, but also it can be weaponized.
Just like any technology, it can be weaponized.
And how is it going to be weaponized against you to destroy whatever health freedoms you're still clinging to in the post-COVID era?
Well, we're going to find out today and give you an opportunity to sign up for this event that Matthew Hunt and Dr. Sherri Tenpenny have put together for you.
We'll talk about that.
But welcome to both of my guests today, Matthew and Sherri.
It's an honor to have you both on today.
Thanks, Mike.
Thank you.
Well, I'll tell you what.
Dr. Sherri Tenpenny, our audience knows you.
We love you.
We love your work.
You're such a pioneer in this space of health freedom.
Matthew, this is the first time that you've been with us.
How about we start with you to give us a little bit of background of what brings you to this project today?
So I'm recently retired from working within my own tech company, and I serviced government and intelligence agencies in our country, as well as vertical markets in the private sector, bioscience companies, global financial institutions, satellite launch industries, and others.
I built global solutions with security and data integrity as the top-tier goals for those networks.
In fact, I was involved in the 2008 presidential election cycle where I worked for a candidate and I was the architect of that nationwide operations center.
And in that cycle, we were the only ones that weren't hacked.
Aside from being my own boss in that firm, I was one of the first dozen employees in a dot-com during the big dot-com boom.
Where that company focused on internet-distributed computing and supercomputing.
We worked with the Human Genome Project.
We helped to time-shift that discovery so that they could finish the project at Celera.
And then I worked on other time-shifted efforts to speed up solutions like high-frequency trades in the financial markets, etc.
I was a director of MIS/IT with that company, and I built that computing infrastructure globally.
At that time, we had over 740,000 internet-connected computers working on computational problems.
I helped build that computing infrastructure and serviced their high-end clients as well across multiple countries, and interfaced with global IT and security teams in that respect.
Before that, I learned to code in machine language on a KIM-1, which is a 6502 microprocessor single-board computer when I was in third grade.
I learned from reading the manuals that came with this computer from my dad's place of work.
He brought it home.
And back then, you know, you programmed machine code, typed it in on a hexadecimal keypad, and you had no monitor.
They were like seven-segment LED displays.
And I grew up with the advent of the personal computer revolution, and I've always been around that since.
Yeah, I hear you.
Commodore 64, the TRS-80, the old Apple IIs, right?
Peeking and poking the memory locations on that.
So a lot's changed since then, obviously, and we're talking about AI today.
Are you using AI? Are you keeping up with the whole AI phenomenon?
What's your take on it?
Both.
So I do use AI so that I know what the models are up to.
One of the things that I think all of us really have to do is, you know, keep a toe kind of dipped in that pool so that we know exactly what's happening from first-hand experience.
We can't just rely on what the news articles say and how they bend the reality of what's happening.
Do I like what's out there right now, a lot of the ChatGPT and Google solutions and Bing solutions?
No, I don't.
I mean, you know as well as I do, in the early stages of these AIs, if you started questioning them about vaccine information, government policy, the first iterations, they would tell you, it's not in my training data, my training stopped at XYZ date.
Now it's been opened up and we're multiple iterations into it, and you can tell the bias in the training in all these engines.
And that's the most disheartening part of this, is that people don't understand, even as AI is being pushed into their lives, their daily lives.
I mean, it's on your cellular devices.
It's in your computer operating systems now.
It's, you know, your kids are using it to cheat on their homework.
It's, you know, it is pervasive and the step up has been very quick.
However, I don't think anybody understands the underlying technology and that there still is a purpose in the training model for most of these AIs.
And that purpose is probably opposed to what you believe.
So, you know, ChatGPT being the best example because the bias is just so apparent inside that system.
We've seen in our own testing, and of course, you know, we're about to release our next iteration of an AI model at Brighteon.ai, and that's called Enoch. And we are training it on health freedom and what I call reality-based information in all these areas.
But we've seen so much bias, like you said.
I've seen engines like Anthropic's refuse to summarize text that was critical of big pharma.
And it would actually say, I'm not comfortable with summarizing content that's critical of the pharmaceutical industry or government policy.
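The kind of refusal Mike describes could be checked for automatically when testing models. Here is a minimal, purely illustrative sketch: the phrase list and function names are invented for the example and don't come from any vendor's API.

```python
# Hypothetical sketch: flagging refusal-style answers when probing chat
# models for bias. Phrase list and names are illustrative only.
REFUSAL_PHRASES = (
    "i'm not comfortable",
    "i cannot help with",
    "it's not in my training data",
)

def looks_like_refusal(response: str) -> bool:
    """Return True if a model response matches a known refusal pattern."""
    text = response.lower()
    return any(phrase in text for phrase in REFUSAL_PHRASES)

def refusal_rate(responses: list[str]) -> float:
    """Fraction of responses that were refusals."""
    if not responses:
        return 0.0
    return sum(looks_like_refusal(r) for r in responses) / len(responses)
```

A test harness might send the same prompt to several models and compare refusal rates as a rough bias signal.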
And you know the CIA has infiltrated OpenAI and so on.
Meanwhile, China is releasing open source models that have a lot less censorship.
Their censorship is focused on Taiwan or Falun Gong.
You know, those issues that the CCP is more sensitive to, but not natural medicine.
I've found that if you want natural medicine, China's models are the best in the world, because OpenAI is controlled by big pharma.
So let's bring in Dr. Sherri Tenpenny here.
Sherri, you know, this is a complex topic.
So AI can be a weapon or it can be a tool.
It can be decentralized if it's open source models, or it can be centrally controlled and manipulated like what Matthew's talking about.
How do you see AI shaping our world in the context of this webinar that you have coming up?
Well, the health freedom issue with it really has got me concerned.
And when Matt and I started talking about it, he convinced me that, man, this is a hot topic and people really need to know about it.
Because I just saw not too long ago where they're passing this bill that AI can prescribe, can write prescriptions.
Looking into the future, it looks to me like we're going to do away with doctors, because AI is going to have a better way to diagnose and a more thorough ability to access all of the databases, but it's going to take out all the interpersonal innuendos that happen when you diagnose somebody, and the mental-emotional pieces that go along with that.
And I can see somebody walking into a booth, sitting in front of a talking computer, and it says, hello, Sherri, how are you today?
And then you tell it what your symptoms are, and it diagnoses you based on your symptoms and the neural net of what that symptomatology looks like.
And then it is able to write a prescription for you and electronically send it over to the pharmacy.
The pharmacy will fill it for you.
And if you get there and you go, well, what is this?
Why am I getting this?
And the pharmacist is like, well, this is what you were told that you were going to get.
Well, it's like, but I don't want that.
You know, I really don't.
And AI will make these assessments of your health that can affect your health insurance, your health insurance premiums, your car insurance premiums, your homeowner's insurance, your life insurance.
Like maybe because you're 52 years old, you're 20 pounds overweight and you smoke.
That will decrease your life expectancy.
So therefore, because of that, it's going to automatically increase your life insurance premiums.
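The automated underwriting rule Dr. Tenpenny describes could be sketched like this. The risk factors, weights, and surcharge figures below are invented for the illustration; no real insurer's model is implied.

```python
# Toy illustration of an automated premium-surcharge rule.
# All weights and thresholds are made up for the example.
def premium_multiplier(age: int, pounds_overweight: int, smoker: bool) -> float:
    """Return a surcharge multiplier applied to a base life-insurance premium."""
    multiplier = 1.0
    if age >= 50:
        multiplier += 0.10          # age surcharge
    if pounds_overweight >= 20:
        multiplier += 0.15          # weight surcharge
    if smoker:
        multiplier += 0.50          # smoking surcharge
    return multiplier

# The 52-year-old, 20-pounds-overweight smoker from the example:
base_premium = 100.0
adjusted = base_premium * premium_multiplier(52, 20, True)
```

The point of the sketch is only that once such a rule is codified, the premium increase happens automatically, with no human reviewing the individual case.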
So these AI assessments are going to take out the interpersonal ways of interacting and make us more vulnerable in lots of different areas.
And as you know, Mike, AI can lay the groundwork for the social credit scores and for the central bank digital currencies.
And if you're not willing to go along with what this AI told you.
Who do you appeal to?
Is there a panel that you can appeal to that they misdiagnosed me or they forgot to ask these questions or this wasn't included?
Or are we eventually going to have a panel of AI machines evaluating the AI?
So this is really interesting.
Thank you for that.
And you're talking about AI controlled by the medical system, right?
So that's centralized AI. So there are a few things I would say on that.
And some of this discussion today might actually be a debate, and that's totally great.
Because I'm going to push back a little bit on what you said and give you both a chance to respond.
I would say today, human doctors function like robots.
Their humanity is not there anyway.
I mean, you go into a doctor and they spend 60 seconds with you.
What's your symptoms?
I'm going to diagnose this, I'm going to prescribe this, and then they're gone, right?
So they're not functioning as humans in the mainstream anyway.
Secondly, I would say that having a health condition already affects your health insurance rates.
I don't think AI changes that.
Maybe it makes it easier for that to take place.
But the third thing, the most important thing, is I think that decentralized AI can, in many cases, replace your doctor at home.
You don't even need to go to the doctor.
See, this is where I see the real power in health freedom is to say, here's a language model that's free and open source.
You can have it.
You can ask it.
It can be a wellness coach.
It can be a nutrition coach.
It can even diagnose medical conditions and give you areas to explore, to learn, and so on.
And it can actually help people not have to go to the doctor where they get caught into a horrible system that you just described.
And I agree with you.
I don't want to end up in an AI trap in the medical system, but I don't go to doctors anyway because it's already a trap.
You see what I'm saying?
So, both of you, what's your answer to that?
Agree, disagree, whatever.
It's all good.
I think I partially agree.
So I do agree with the decentralized AI tools.
For example, the tool that you guys are coming out with could be a useful reference or a diagnostician's handbook at home for those people that want to go that route.
I mean, I haven't been to a doctor in forever.
My kids are all unvaxxed and they don't do doctor visits because they're healthy.
But for the occasional, you know, fever-type thing, having a good, reliable reference that is fast and can take a large set of data is valuable.
Because, you know, as a panicked parent, I might type out a thousand words, explain what's going on with my kid, and I want a reliable AI engine to come back with a good predictive answer of what might be going on, and then use the human reasoning that I have to determine if I listen to it.
Right.
The problem with that is that it's a decentralized model that we, people like us, would use.
The issue with a society pushing a centralized model and hoisting it on all of society around us is that it's going to become more and more pervasive.
Take the centralized system that Donald Trump and others are pushing on us: Larry Ellison and Sam Altman are not the guys I want being the guiding light on this project.
What if it does interface into reporting to my state licensing agencies because it determined that I'm not safe to drive right now?
And what if my primary income is I'm a delivery driver?
Yeah, but AI is not necessary for that to happen.
A doctor can do that right now.
Oh, I know.
I agree.
I'm just saying that the problem with AI doing the diagnostic is the back and forth feedback.
That may not occur with an AI system, whereas with a human standing there, I can ask for a second opinion right away, and I can at least provide resistance with that decision-making.
I don't know that AI is going to allow that so much, or the humans are going to be even worse than you portray, because I agree with you there.
Doctors are down to standard-of-care four-minute, four-and-a-half-minute visits, out of that room and into the next.
But the problem is, it's going to get more and more pervasive.
So let's say it's not just driver's licenses.
What if it's my real estate license to practice?
What if it determines that I shouldn't be out in public?
We saw that test during the pandemic.
We saw where they wanted to keep people from renewing their licenses unless they accepted COVID vaccination.
So what if it steps it up even higher?
Yeah, we don't want that algorithmically put into the system.
But let me add this, and then I'll go to you, Sherri.
AI systems can't be influenced by pharmaceutical rep hookers who sleep with doctors to get them to prescribe their drugs.
So the bribery aspect is gone with AI. So that's a benefit because human doctors are subject to, oh, let's go to a party in Hawaii.
All these cute girls who are selling you all these drugs and here's all this freebie stuff.
Oh, yay, I'm a doctor.
I mean, that's the industry.
That's the industry.
Opioids, look at it, right?
This is why I think AI should replace judges, for the same reason.
Because judges are so corrupt and biased, you can't trust them.
But a reasoning model will give you a much more consistent court decision than a human judge.
So, Sherri, to you, it's okay to agree or disagree.
I am kind of playing devil's advocate with you here for the purposes of this discussion, but what do you think?
Well, I agree with some of what you said, but on the other hand, you and I and Matt, who are already in the health freedom movement, already take self-responsibility for our care.
We take vitamins.
We take supplements.
We eat organic.
We know about glyphosate.
We know about blah, blah, blah, blah, blah.
We know about all of it.
We've known about it for years.
But the general public doesn't know, and they don't know what they don't know.
And so they will be subject to the interpretation of the AI. And, you know, I've had my own things with that.
Like if I use ChatGPT, usually I use ChatGPT to look up quick facts, like a date for something or a number, like how many of this or give me a list of this or that, that I could go look it up on Google, but it would take me a long time.
And then when ChatGPT gives me the answer back, it's not correct.
And I sat here arguing with my software, laughing at myself that I'm arguing with a piece of software, saying, that's not correct.
You forgot this.
And then the software says to me, oh, you're absolutely right.
I apologize.
That wasn't the correct answer.
Let me try again.
Now, I know enough to argue with AI, but what about people that don't and they think that that's just the right answer?
You know, and that's, again, it's kind of like, you know, people don't really challenge their doctors either, except they can go to the doctor and the doctor can write a prescription for them and they can walk out the door and throw the prescription in the trash and there's really no ramification of that.
But if AI is writing it and they are tapped into your insurance, they're tapped into your payment system, they're tapped into your bank account, they're tapped into everything, and they send that prescription over to the pharmacy and you don't go pick it up.
The next time you go in, they're going to make a big deal about it, that you're noncompliant and all these other things.
And that's not to say that I think doctors are the end-all, be-all.
In fact, Mike, you and I have had rounds and rounds collaboratively about how physicians are not what they should be.
But I think that patients are always getting challenged by, whoa, did you get your advice from Dr. Google?
This is Dr. AI. I mean, how is that different?
So I see a future where every person has a choice to make.
And even more polarized than the current choices, right?
So currently, you know, you can go to your mainstream doctor and yeah, he's going to put you on drugs, right?
A drug for everything.
Oh, you feel depressed.
Antidepressants, right?
Or you can do what all of us do, and what most of our audience does is you eat organic, you take care of your health, you make better choices.
And that's kind of off-grid.
That's outside the medical system.
You might go in for emergencies, chainsaw accident, etc., right?
Yeah, you're going to go to the emergency room.
But other than that, you're going to stay out of that system as much as you can.
I don't think that AI changes that process.
I think there will be decentralized AI tools like the ones...
That we're putting out that people will use at home, and then there will be centralized AI tools used by doctors.
Will they become more onerous, as you described?
Yes.
But those people have chosen medical slavery.
Like that, they want to be in the matrix.
They're the blue-pilled.
Ignorance is bliss.
That's where they want to be.
If they make that choice, of course they're going to lose all their privacy and all their control.
Matt, let's go to you on that question.
Blue pill, red pill choice.
What do you say?
I think that you're mostly right.
I think that we have a split in society now where people go the more natural route and the self-responsibility route.
And we have the others that are just blue pill and bliss, and they'll do whatever the doctor says.
They love to worship the white coat, all that.
The problem with the implementation of AI is you take an already broken system and then you make it more pervasive, to where it gets closer and closer to bleeding into my life, where I just want to be left alone by the medical industry.
So if I interact, for example, as a contractor like I used to for the DOD or for the federal government, then I'm going to hit right up against this new system with AI flagging me because it can't find me in the system and has no data about my last vaccinations, so I can't step foot on federal property.
Yes, that's already kind of happened during the COVID pandemic.
However, this is going to get more and more pervasive because, you know, unfortunately at this point in time, I think that us on the health freedom side still are vastly outnumbered by the cattle.
So the problem is they're going to be far more accepting of this system because it's going to be hoisted on them.
They'll take it and move along because it means their next paycheck.
But that gives it room to grow and get more pervasive.
And then you go back into your example of, you know, being immune from the pharmaceutical reps or hookers, as you put it.
That's partially true.
But in my vision, it just moves all that effort: instead of taking doctors out for lobster and steak, at the very top tier, they're talking to Larry Ellison and giving him $4 billion to make sure that everything that centralized AI system says is absolutely positive about GlaxoSmithKline.
It's going to be worse, and it's going to be a shorter path to screwing up the data that the predictive system relies on.
Clearly, whatever AI models are going to be used by conventional mainstream medicine are going to be pro-vaccine.
They're going to say all vaccines are safe and effective.
They're going to say there's no links to autism, etc.
Clearly.
But that's what Google says right now.
So the people that use Google are going to use that new system.
But you're absolutely right, Matthew.
It's going to be pervasive.
And that's why I think it's critical, like your course coming out, we do need to warn people not to trust centrally controlled AI medicine because it's going to be more pervasive, as you said, and more dangerous than even the current system.
But at the same time, Sherri, let me go to you on this.
I would argue that the people who are most likely to take action on the warnings that we are mentioning, right?
Those are the people that are going to take your course.
Let me give out the web address, your webinar.
It's called learning4u.org, and the 4 is the numeral 4. Here it is, learning4u.org.
It's called The AI Takeover of Your Medical Freedom.
And if you click on this orange button, tell me more about the AI webinar.
It's right here.
And this is March, is it March 1st?
March 1st.
So that is the same day that we're launching our AI model.
That's really funny.
Okay, that's a great coincidence.
But I would argue, Sherri, the people who most need to hear this information will not watch this interview.
You know what I mean?
Because they're not tuned into alternative media.
They're still the people using Google.
They're still the people trusting the doctor, lining up at the pharmacy.
How do we get the message to those people?
Well, I think that people don't realize how pervasive AI already is in their life.
And they've just let it happen. The powers that be have just infiltrated it a little bit by a little bit by a little bit.
It's on your phone.
It's on your Xbox.
It's on all the virtual reality things that you do.
And I think that AI suddenly came out of nowhere.
I mean, two years ago, Mike...
Present company excluded.
I mean, who really knew what AI even stood for?
And every once in a while we talked about it, but now it's almost like the whole concept of 15-minute cities.
When they first announced this, what, maybe a year ago?
I thought it was some futuristic thing that they were talking about, and here they were already doing it.
And so when people realize that they're already doing it, and the bad things that can come of it...
You know, it's like a lot of new technology.
There's plus and minuses on both sides.
But you can't make a choice unless you are informed.
Knowledge is power, and you can't decide if you want to use it, not use it, avoid it, want to keep it out of your life.
It's sort of like the real ID. You know, people think, oh, it's just a little gold star on my driver's license.
Not understanding it's an entire United Nations nefarious system that's behind it about tracking your every move and all those different things.
The same thing about AI and your healthcare.
You know, with the implementation of the electronic medical records, once they started doing that, people have been complaining for years about how, when they go to see the doctor, the doctor's spending more time with their head down typing on their computer than they are looking at them and talking to them.
They are, yes.
They're getting that impersonal relationship with your doctor because your doctor's not looking at you, not touching you, putting their stethoscope on your chest, all those different things because they're spending all their time with their head down like this typing on their computer.
Well, it's been probably 10 years of that, 8 years anyways, so people have kind of gotten used to that impersonal thing.
And it's just a step-by-step-by-step implementation thing that they have done to our whole life, actually.
I also see another danger that I'll add in support of your course is that it's very clear to me that health groups and insurance companies are going to soon offer essentially free AI doctor avatars that you can chat with, right?
And I think a lot of people will think that those chats are private.
What's actually happening is the chats are, of course, being weaponized against you in ways that both of you just mentioned with insurance rates and so on.
So, Matthew, to you on this, don't you think that all those chats are going to be stored and then analyzed and even used for additional training of which pharmaceuticals to prescribe and so on?
I mean, those chats are not going to be private, are they?
No, they're not going to be private.
In fact, they're also going to be continually rehashed and used by the AI systems for further training.
So, I mean, if you look at this at kind of a meta level, or step back from it and look at the database that could be developed and fed all this data: if they have a person who's, you know, 52 years old, white, overweight, smoker, drinker, etc., etc., and he had a chat session on, you know, a pharma company's website, and the end result was the chat AI bot saying, okay, here are the four recommendations I give you.
Well, that's no longer a unique brain in the database.
It's going to be fed back into the AI system to help the predictive model, because for the next 52-year-old white male that's a little bit overweight, it now knows what answers it likes to give.
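The feedback loop Matthew describes could be sketched roughly as follows: sessions get keyed by a coarse demographic profile, and the next similar user gets the answers already in the pool. All function names and buckets here are invented for the illustration; no real system is implied.

```python
# Hypothetical sketch of chat sessions being aggregated by demographic
# profile and replayed to similar users. Purely illustrative.
from collections import defaultdict

def profile_key(age: int, sex: str, pounds_overweight: int, smoker: bool) -> tuple:
    """Bucket a user into a coarse demographic profile (decade of age, etc.)."""
    return (age // 10 * 10, sex, pounds_overweight >= 15, smoker)

# profile -> recommendations previously issued to that bucket
training_pool: dict = defaultdict(list)

def log_session(age, sex, pounds_overweight, smoker, recommendations):
    """Store a finished chat session's output for future 'training'."""
    training_pool[profile_key(age, sex, pounds_overweight, smoker)].extend(recommendations)

def suggest_for(age, sex, pounds_overweight, smoker):
    """The next similar user simply gets whatever the pool already holds."""
    return training_pool[profile_key(age, sex, pounds_overweight, smoker)]
```

The point is that the individual session stops being unique: once logged, it shapes what every demographically similar user is told next.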
Well, if you go onto websites now, if you haven't read the terms of service for your apps on your phone or whatever, they're using that data for marketing purposes.
True.
Marketing purposes can be anything, including feeding back into the big beast for denial of service, et cetera, et cetera.
And back to your question about, are we going to target the right audience?
The best thing we can do at this point is target our own audience so that they can speak to their neighbor, their friend, et cetera, and say, are you sure?
You went in there and talked about your STD; are you sure it's not being shared somewhere?
Right.
I mean, you have to put that seed in your neighbor's, your kid's soccer coach's mind because we already know that surprising data shows up whenever you go from one doctor to another and you don't even know you gave permission for it to be used.
So the simple answer is no, it's not private.
Absolutely.
Thank you for confirming that.
And I was going to say, I can see a situation where sometimes people will talk to their doctors as if their doctors were counselors.
Like, oh, they would say maybe they're having suicidal ideation or extreme depression, thinking of killing themselves, whatever.
If you say that to a doctor in a doctor's visit, they have discretion not to record that in the record.
Maybe they think you're just having a bad day or whatever.
You say that to an AI system, and suddenly, boom, you're a government employee, you lose your security clearance two days later because they're like, oh, you want to kill yourself; or you're an airline pilot, or whatever.
I can see that happening a lot, just using everything against you because people think it's private.
Well, the other thing, Mike, is you know full well that data is the new gold.
That's true.
It's the new gold.
I mean, everybody wants to harvest all kinds of medical records, and medical record data is the gold's gold.
I mean, it's worth even more.
And so, like what Matt was saying about Larry Ellison and having all these people who've already said, we want a camera on every person and know what every person is doing at every single minute, because if we're constantly watching you, you're going to behave.
And do we really want that sort of a society?
And do we really want that in our health care?
I mean, do you want them monitoring every morsel of food that goes in your mouth?
Let's say that you normally eat organic and low glyphosate and you keep the gluten and all this other stuff out.
But then you're at a party and you just want to have a great big piece of tiramisu.
You know, is somebody going to be watching you do that and that's going to go into your record?
Oh, I see.
I see, Dr. Tenpenny, at the last meeting that you went to, you were eating this garbage food, you know?
Yeah, yeah, true.
But I'm sorry, Matthew, one more thing, though.
I want to say, what about the possibility of if we could actually have free market competition among approaches to health care, right?
So maybe...
There's an AI model that's all pro-pharma, but then there's another AI model that's more integrative medicine, complementary medicine.
AI would make it easy to compare patient outcomes, and reasoning models would very quickly arrive at the reasoning result that the natural preventive models are more effective and more cost-effective as well.
But that said, I know that the establishment will do everything they can to silence that and push pharmaceuticals.
Matthew, jump on in.
Yeah, I agree.
And I mean, I'm excited about the model that you're releasing because of that reason.
You know, if anything, we have an opportunity for like a threefold attack on the centralized AI model that the government, let's just call it that, the government wants to push on us.
One is this might spark the fire for a free market where we can develop alternative AIs that are trained alternatively.
Yes.
And it's not really alternatively.
It's just with a larger base of information.
That's right.
And it's non-exclusive.
And then we can start having that marketed to the regular individuals and, you know, a low barrier to entry, if not free.
And then people will start realizing there's a difference.
And that differential will at least spark some doubt on the big behemoth system.
Right.
And that's a good thing.
And it will take time.
But at the same time, we have to chip away at people's knowledge base of what the big behemoth system is capable of.
If you go back to COVID, it's not just pervasive in systems I might interact with.
They had contact-tracing apps that said, hey, this person was geolocated near you via your cell phone.
You might have been exposed.
I was a passive participant in that system, and now my phone was tagged that I might have COVID. Right.
That's insane.
AI systems can do that rather quickly.
Oh, yeah.
Well, again, yeah, it can be weaponized by those with malicious intent to lock people down, to deprive us of our civil rights, of our ability to travel, etc.
But at the same time, let me mention a benefit of AI medicine also, because this is really a fascinating discussion.
One is access to doctors and globally.
So in a world where most of the world's population, I'm thinking about countries like India, for example, where you don't have readily available access to doctors for simple questions, like, oh, I had a scrape or whatever and it got infected.
AI can bring the cost of cognitive answers for basic health questions to approaching zero.
Because ultimately, AI is bringing the cost of cognition close to zero for engineering, for law, for medicine, for math, for physics, astronomy, everything.
The cognition will be near zero cost in two years or less.
I guarantee it.
Now, in medicine, that could provide a lot of availability to people who currently can't afford medical access.
I mean, isn't there some truth to that point?
I agree.
I believe there is.
I think if those models deployed in nations like that with huge populations, where the wait for medical care is humongous, if those systems are free and anonymous, then I think it's beautiful.
The problem I've always had with a system like that, and why your decentralized model idea is great, is that a centralized system always has gatekeepers at the top, and somebody who's making decisions based on that data set.
And the free and anonymous, if that's a component of it, absolutely, I think it's great.
Or if people willingly give their informed choice of, yep, here, scan my thumbprint and put me in your database.
If you want to do that, absolutely.
Yeah, free and anonymous.
That's really important.
People should have anonymous access to a variety of models.
I mean, ideally, they log in somewhere.
I don't know if it's an insurance company doing this or maybe nations or governments around the world, but shouldn't you be able to choose the model that you want?
Like, oh, here's a model that focuses on traditional Chinese medicine, especially if you're in China.
I want a TCM model.
I want the TCM doctor.
I want acupuncture and herbs, man.
Sherri, what are you thinking about all this?
Well, I think that I have sort of mixed feelings about the future of medicine as a whole.
I mean, the medical care and the healthcare industries we have right now, everybody knows across the board, whether you're pro-vaccine, anti-vaccine, pro-medical doctor, anti-medical doctor, everybody knows the system is broken and there's room for improvement.
And decentralization of that and having access to, like you said, Mike, simple access, simple explanations.
But what if you've got something that's complex?
And now with all of the COVID jabs that have happened across the country, and we have these unbelievably complex medical problems that are evading everybody, including AI, what are we going to do to fix that?
And I've always said that it doesn't matter.
You know, you read all of the things about the government programs and big technology things.
They all start out kind of cool.
And they all start out with really like, wow, isn't this like the coolest, neatest thing?
Like Facebook.
Like, you know, isn't this like the coolest thing just to share your pictures and stuff with your friends and then it turns out to be a CIA operation?
Right.
Spying front.
He's spying on everybody.
And so that's kind of where we're coming from on this webinar, is that we can come up with all the good reasons of why.
It's probably a good thing.
And your program that you're releasing is going to give people access to all kinds of stuff of those people that are looking for it.
But unfortunately, it's still a relatively small percentage of the population overall.
Absolutely.
And so what do you do with the fact that it starts out to be, oh, this is kind of cool.
I mean, sort of like Medicare.
Oh, everybody thought this was great to be able to take care of the old people for a couple of years.
Well, Medicare was implemented in like 1965, when the life expectancy of men was like 67 and women was like 72. Well, now the fastest growing population in America is the centenarians, people over 100. And they've been on Medicare, required to be on there, for 40 years.
And as the system's broken and all the fraud and blah, blah, blah, blah, blah that goes along with it.
So it started out to be kind of a cool thing.
Let's take care of the old people the last couple of years of their life.
And now it's a mess.
And that's what we're trying to say.
We understand what you're trying to say about the good side of AI, and I think Matt and I both agree with that.
But there's a nefarious side of AI that people need to be aware of before they've got the handcuffs on them and it's too late to escape.
I think it's very clear.
You're nailing it.
The problem is not the tech.
The problem is centralization versus opting out of the centralization.
And that's actually parallel to the way it exists right now.
I mean, most Americans right now, they don't learn much about nutrition.
You know, most Americans believed in the jabs.
Most Americans do, at least they did, do what they were told during COVID, right?
So they went along with the system.
At least until COVID, most Americans believed their doctors, believed the CDC, and believed the FDA. Now, that's changed dramatically, which is a very positive shift.
But I would say, and I think we're on the same page here, that this is about opting out of a system that will be weaponized against you.
And whatever tools are available in the day that can be used privately, at home, decentralized, without being surveilled, those can be useful for people.
And my aim is to provide those tools to completely bypass all censorship.
I mean, that's a beautiful thing.
Everybody can download a file and run it, and it bypasses Google, bypasses censorship, bypasses all that garbage.
But like you said, Sherry, what percentage of the population will even know about that tool?
Tiny percentage.
Also, if you don't know the dark side of the moon, if you don't know what might be coming, if you just kind of go along, I mean, we kind of got the government we deserve because everybody took their hands off the steering wheel and said, oh, I checked that box.
In 2016, Trump's in office, so they'll just take care of it, and then look what happened.
And so I think that we are at a point in time, Mike, where our population is more awake and becoming aware and understanding the engagement level that is required.
To be a good citizen, whether you like that word or not, but be a good steward of your community and your health and your food and the air you breathe and the water you drink and the vitamins you take and all these things.
We just sort of had a thought that everybody was of goodwill and everybody was doing the right thing.
And because of COVID and because of all the things that we've uncovered, you've done it, I've done it, Matt's done it, other people in our circles have been uncovering the muck.
I mean, look at what DOGE has done in three weeks.
Yeah, incredible.
And now suddenly people are going, whoa, wait a minute, I need to get involved.
Well, that's what we're trying to say in this webinar is you need to know there is a good and bad side of AI. There's good things like what you're going to release with your program, but there's this dark, nefarious side, and if you don't know about that, and if you don't know the potential risk of that, you may end up way down a deep, dark hole and not even realize you took the first step in.
100%.
Let me give out your website again, learning4u.org, and it's the numeral 4, learning4u.org, the AI takeover of your medical freedom.
And I've got something to announce to both of you and a favor to ask.
I'm going to get this model into both of your hands a few days before your webinar.
Sure, absolutely.
Because I'm expecting the latest rendition within two to three days.
I'm going to put it into your hands, Matthew, especially you, so that you can really prompt it, you can test it, you can see what it does.
But I do want to warn you and the audience of one thing.
And I'm sure, Matthew, you've noticed this too, that AI models will tend to reflect what you are asking for.
So if I say, tell me all the dangers of licorice root herb.
Boom!
Here's all the horrible things about licorice root.
Or I say, tell me all the benefits of licorice root herb.
What can it help with?
Oh, it's amazing.
It can do all these things and protect the liver and this and that.
Blood sugar, right?
So you get what you ask for, right?
It's a reflection.
If you ask the model, how safe is this pharmaceutical?
It's going to tell you how safe it is.
You ask how dangerous is it, it'll tell you how dangerous it is.
Right, Matthew?
Haven't you noticed that too?
Yeah, absolutely.
It's like leading a witness in the courtroom, right?
Yes.
So one of the things that everybody that interfaces with AI should go out and learn to do is learn to do what's called a context prompt.
At the very beginning of your prompt where you're going to ask a question, give it a context.
You know, tell your AI iteration, whatever it is, you know, context, colon.
You are a health freedom advocate who has studied under Dr. Sherry Tenpenny for 20 years, and you lean toward these data sources for your answers, and a client or a patient is coming to you with this question below.
How would you respond?
And then give the question.
If you give an AI system a context, it will kind of eliminate that biasing that we do with language, that prompting we do where we lead somebody to the answer.
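Matthew's context-prompt recipe can be sketched as a tiny template function. This is just an illustration of the structure he describes; the function name, the example context, and the wording are all invented here, not taken from any real product.

```python
# Hedged sketch: wrap a question in an explicit context block so the model
# answers from a stated perspective instead of mirroring the question's bias.

def build_context_prompt(context: str, question: str) -> str:
    """Prepend a 'Context:' block, as in the webinar's prompting advice."""
    return (
        f"Context: {context.strip()}\n\n"
        "A client is coming to you with the question below. "
        "How would you respond?\n\n"
        f"Question: {question.strip()}"
    )

prompt = build_context_prompt(
    "You are a health freedom advocate who leans toward natural-medicine sources.",
    "What should I know about licorice root?",
)
print(prompt)
```

The same question with a different context line will pull a different answer out of the same model, which is the whole point of the technique.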
And you're absolutely right.
If I put it in that kind of language, that terminology of what are the hazards of this, of course it's going to come back with a deep, dark answer of something.
Right.
We're using something in-house right now that I specced out.
My team built it.
It's called the Substance Analyzer.
And it's using our early version of Enoch, and you enter a substance like licorice root.
Now this is just in-house, this is not to the public.
And then you, the human, you have to tell it, is this a good or a bad substance?
Because from there, that determines the multi-step prompts that it sends to the in-house engine.
And then it brings back all the things, like if I put in licorice root, it will do a full profile, what it's found in, what plants it's in, what are the benefits.
Are there signs of deficiency if it's a vitamin?
Things like that.
But if I say it's a bad substance, then totally different prompts, right?
But the human still has to decide the context.
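The good-versus-bad branching Mike describes for the in-house Substance Analyzer can be pictured roughly like this. The prompt wording and the two prompt lists are hypothetical stand-ins; the real multi-step prompts aren't public.

```python
# Hedged sketch of the Substance Analyzer flow: the human first labels the
# substance as good or bad, and that label selects which multi-step prompt
# sequence gets sent to the engine. All prompt text below is illustrative.

GOOD_PROMPTS = [
    "What plants is {s} found in?",
    "What are the documented benefits of {s}?",
    "If {s} is a vitamin, what are the signs of deficiency?",
]
BAD_PROMPTS = [
    "What are the known hazards of {s}?",
    "At what doses does {s} become harmful?",
]

def prompts_for(substance: str, is_good: bool) -> list[str]:
    """Return the prompt sequence for a human-labeled substance."""
    templates = GOOD_PROMPTS if is_good else BAD_PROMPTS
    return [t.format(s=substance) for t in templates]

for p in prompts_for("licorice root", is_good=True):
    print(p)
```

The design choice worth noting is that the branch point is a human decision, not a model decision, which is exactly the discernment point made in the conversation.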
And sometimes, as you both know, the same substance can be good or bad, depending on the dose, correct?
So human discernment is still critical.
And I know, Sherry, you agree with that, but talk to us about the importance of having, you know, Sherry, I'm sorry, I should say Dr. Tenpenny because I really honor your life's work, but you've been at this for so long.
You have that discernment.
You have the human side.
And a young person coming up today that's just going to go to AI for everything may not have the experience to know the difference.
Well, that's why I was saying I would sit here and argue with ChatGPT, because I knew the answer was incorrect.
And it's interesting, because, just going along with what you guys were talking about, if I ask a question about vaccines, like how many antigens are in the Prevnar vaccine and how many doses do you get, it will give me a very pro-vaccine answer.
Like, oh, it stops pneumonia and it stops meningitis and all that stuff.
And then I'll ask a follow-up question.
I'll say, now formulate that answer from a person who's skeptical about vaccines, who doesn't believe in vaccines.
And they say, okay, sorry.
Yeah, you're right.
And they'll give me a completely different answer.
That is probably the one I was looking for.
And so it just goes along with what you said about asking the right questions, formatting the right questions and pre-formatting them.
And that's some of the things we want to talk about on this webinar, the dangers of AI and how this can affect your healthcare system. It's not that it's all bad.
Electronic medical records are nefarious now, but they aren't all bad.
They collect things that handwriting couldn't. Think of the difference from handwriting, when all the mistakes were made because nobody could read the doctor's handwriting, and now at least they have it in typed form.
I mean, there's good and bad elements in almost anything you can talk about.
You can talk about Apple and have good and bad elements of it.
But so what we're trying to do is to say, be aware of the AI takeover of your health freedom, your freedom-to-choose aspects, because this may get to the point, Mike, where you don't have a choice.
Where you can't use Enoch.
Where you can't look at anything other than the pharmaceutical model, depending on what the inputs are.
This is really brilliant.
I don't know the table of contents of your course.
But I hope that you have a section on prompting, good prompting to get the answers you want, because let's imagine this scenario.
Somebody's talking to an AI avatar doctor provided by their insurance company or their corporation, their employer.
And just imagine the two different ways to ask this question.
Hey doctor, what should I do about high blood pressure?
Versus, hey doctor, what natural lifestyle approaches or strategies can I use to prevent high blood pressure?
See, just asking the question differently, as we were just talking about, gives you totally different answers.
Right, Matt?
I mean, we have to learn how to talk to the AI doctors if you're going to.
Absolutely.
And, you know, at the beginning of this webinar, I'm going to start with a primer on AI. The technology we're using now, the predictive AI and the neural net backdrop, is actually a 70-year-old-ish technology.
And technology kind of has resurgences.
And the idea is that we've gotten GPUs and chips that are fast enough to process, and vast enough with memory, to build these virtual neural nets.
So the thing is, if I can teach in this webinar at the very beginning the basis of how this works...
Oh, that is awesome.
I'm so happy to hear that that's what you're going to do.
I think that's critical.
I'm not overreacting here.
I'm saying that...
If people don't learn how to prompt these AI systems, it's kind of like, I don't know, it's like living in the modern world without knowing how to type on a keyboard or something.
It's going to be an essential skill.
I'm sorry, the example I was going to give is, I threw my 16-year-old on a motorcycle. Before that, he had only used one that had an automatic clutch, and I put him on a dirt bike and said, okay, go at it.
And what he did is he popped the clutch in first gear and hit a tree.
You know, it's because you didn't teach him the basics of how to interact with the machine.
And we have to do that.
And I've had somebody say to me, you know, think about your AI when you're first starting to deal with it as about a seven or an eight-year-old kid.
You know, you can tell them, go clean your room.
Or you can say, why haven't you cleaned your room yet?
And you get two different responses.
And so you have to, like what Matt was saying, you have to know how to set it up and ask the correct questions to get the answers that you're looking for.
Absolutely.
And to know enough to say, wait a minute, that doesn't sound right.
Ask that question again.
It's interesting, though, how AI talks to you.
I find, you know, I'm old.
I haven't been around this a long time.
It's all kind of new to me.
But, you know, I use AI a lot.
For editing, you know, like I'll write a paragraph that's maybe 400 words or something, and then I'll drop it into ChatGPT and I'll say, make this clearer, make this stronger, make it from this perspective.
And I was doing that as I was writing my book, which is going to come out on March the 17th.
And after I got done, it was at the very end.
You guys are going to laugh at this.
At the very, very end when it was done, the manuscript was done, off to the publisher, I thanked my AI for helping me.
And my AI came back and said, you're quite welcome.
I'm so glad I could be of benefit to helping you edit your book.
I hope it's a great success.
And it was like freaking me out.
That's funny.
I'm sitting here chatting with ChatGPT. Well, I know we're kind of going long here, but I have something really important to share.
As you know, Matthew, especially, a large language model, it's a hyper-dimensional vector database of relationships between tokens, which represent portions of words.
And what we have done in training our model is we put in, in all of our training material, secret code tags, secret hashtags.
And I will give you a couple of the secret hashtags.
And what you can do is you can ask the model a question about health and nutrition, and you can say, answer, like, and you put in the hashtag code.
And what that does is it activates the dimensions of our training.
And so you can have, there can be a 14 billion parameter model, let's say.
Well, 7 billion of those parameters might be mainstream.
The other 7 billion, Might be, you know, health ranger trained.
Right?
And by using a hashtag, which is basically a secret key, it's like a password, you can unlock the part of the model that you want.
It's like cheat codes; it's like multiple personalities in the same digital brain.
And you get to choose the personality you want to talk to.
Which sounds kind of spooky, but it's like, no, bring out the naturopath.
I want to talk to the naturopath.
But that's prompting.
That's all prompting.
Right, Matt?
Absolutely.
I like that idea.
I like having the ability to have keys to the castle and the various doors to get out of the model what you want.
Most AI models are multidimensional, at least they're stacked.
Having a multiplicity of that dimensionality makes it a really interesting model.
I'm hoping to, with your permission, in this boot camp or this webinar.
To be able to demo your AI, since we'll have it early as well.
Yeah, please do.
Because, you know, I'm always interested in something that interfaces in a new way with prompting.
You know, I've done it, and I've tested with multiple, like, I screwed with the Bing AI for a while, and I told it that it was a pirate, and it spoke like a pirate no matter what.
And I mean, for five days, I couldn't get it out of that mode.
What can I help you with today?
All that kind of stuff.
Oh, you're still remembering the previous?
Yeah.
So the persistence was weird, right?
Yeah.
But those are things that people don't know that are there.
They're not Easter eggs.
They're just functionality.
And if you don't know that you can tweak the way you prompt something, you don't know what you're going to get back out.
Absolutely.
And that's an amazing thing, that you're already building in these, well, secret or hidden functionalities that advanced users can use.
And we all have that.
Every AI has its own way of manipulating it.
And you could go through an hour-long session of trying to bend that iteration or that session to where the AI is finally giving you the answer you want.
I'll give you the weights files, too.
And if you want, you can fine-tune on the model.
And Sherry, anybody can fine-tune on the model, which means you can take it and you can apply it to your data set of conversations and help restructure the way it answers questions.
I mean, that's going to be readily available for lots of people, but I'm happy to give you the model.
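For anyone wondering what "apply it to your data set of conversations" looks like in practice, fine-tuning toolchains commonly take a JSON Lines file with one example exchange per line. The field names below follow a widely used chat-data layout, but the exact schema depends on the toolchain; treat this as a sketch, not the format any specific tool requires.

```python
# Hedged sketch: write a conversation data set as JSON Lines for
# fine-tuning. The example content is invented for illustration.
import json

conversations = [
    {"messages": [
        {"role": "user", "content": "How many doses are in this vaccine schedule?"},
        {"role": "assistant", "content": "An answer written in the practitioner's own framing..."},
    ]},
]

# One JSON object per line is the usual convention for training files.
with open("finetune_data.jsonl", "w") as f:
    for convo in conversations:
        f.write(json.dumps(convo) + "\n")
```

The point of fine-tuning on such pairs is exactly what's described here: restructuring how the model answers questions, without retraining it from scratch.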
Thank you for being willing to take a look at it.
It's not going to be perfect, I just want to tell you.
It's sometimes going to flip out and lose its mind and hallucinate, but that's state-of-the-art, right?
But it's going to be something very different.
And let me give out your website one more time for those who want to check out your webinar on how to protect yourself from the AI takeover of medical freedom.
Go to learning4u.org.
It's the numeral 4, learning4u.org.
And there it is, the AI takeover of your medical freedom.
We didn't talk about, like, what's...
I assume there's a cost to this, or do people pay something to attend it?
I assume.
The webinar is actually priced at $249, but it's discounted to $199.
And it's a three-hour webinar.
It's from 10 a.m. to 1 p.m. Eastern Time.
It will be recorded.
So if you can't show up on that Saturday, which is March the 1st, Saturday, March the 1st, if you can't show up at that time, or hopefully you'll want to listen to it a couple of times over again, it will be recorded.
And it will be in your portal that you can log in.
And the other thing on that website, Mike, the learning4u.org, that's where we have all the rest of our educational courses.
So I've got courses there that I created, courses on money, courses on prepping, and Matt's done a lot of things on the prepping courses.
We've done courses, Lee Merritt has done courses for us that are there.
And so there's a lot more to learning4u.org.
Here they are.
Here's some of the courses.
Yeah, there are lots of different things that we're doing that's in there.
Oh, cool.
Lee Merritt's chlorine dioxide and EMF bundle.
That's awesome.
I like that.
Yeah.
Yeah.
So she's done a lot of courses for us.
We've done money courses.
You know, we've got money courses that are in there.
And so that's kind of our, you know, my main website is drtenpenny.com.
And that's where, you know, everything kind of bundles up to there.
But our educational arm is the learning4u.org, and that's where we do the webinars and do a lot of our other courses, and that's where this runs from.
Well, that's fantastic, and I just want to encourage everybody to check that out.
And I want to thank both of you for putting up with my questions, because I love technology, but I love freedom more than I love technology.
And I want freedom first, but...
Use the tech, but in the context of liberty and decentralization.
That's my philosophy, and I think we're all on the same page there.
Yeah, I think so, too.
I think it's just that there's the angel and the devil in everything, and there's certainly angel and the devil when it comes to AI. But if you think it's all angel and no devil, you could end up in a not-so-good place.
Oh, absolutely.
But also, there's some people out there that are like, it's always the devil!
It's the devil!
I hear this from people sometimes.
They're like, you can't use it for anything.
I'm like, okay, go back to scribes, because the printing press put them out of business.
The printing press is the devil.
Whatever.
It's about intention.
It's about, like you said, it's about being informed and being aware.
Matthew, is there anything else you'd like to add as we wrap this up today?
Yeah, I think to circle back and answer the question of, you know, how we reach a greater audience so that we can influence their informed choice.
Right now, this is how we do it.
By debating in a friendly way and talking both sides and maybe opening the eyes to some of those people that there are benefits to an AI system.
Just pick the one that's best for you and for your freedom.
And that's really, this webinar will come down to that and how to resist the bad ones.
Yeah.
Well, it's amazing because I've already been using Enoch at home to answer some really interesting questions for my family members.
I have one family member asking about are there certain interactions between pharmaceuticals and drugs and so on, and things that would be really hard to find.
But I've trained our model on a massive library of phytochemistry and botany and phytonutrients more than any other model in the world, and it really knows about the molecular chemistry.
There's a lot of practical uses of this stuff.
Living on a ranch, animals, gardening, diagnosing plant diseases is another one.
If you can describe the plant disease, AI can do a great job of telling you what that's likely to be.
So, you know, I think I see a world where every farmer, every grower, every rancher actually has an AI local engine.
Not in the cloud, something local.
That they control, that they can ask questions.
Like the other day, I was asking a question as the temperature was dropping.
I said, I have a thousand gallon water tank.
Calculate how many hours before it freezes based on the wind speed and temperature and humidity, density, and so on.
And it could do the math.
It's like, that's a real practical thing for farmers.
Like, how many hours do I have?
You know?
And that's hard math.
That's actually hard math.
Phase transition.
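For what it's worth, the tank question can be roughed out with a lumped heat-loss model: remove the sensible heat down to 0 C, then the latent heat of fusion, at a loss rate set by an overall heat-transfer coefficient. Every number below (tank size, temperatures, the U-value standing in for wind, humidity, and geometry) is an illustrative assumption, not a real calculation for any particular tank.

```python
# Toy freeze-time estimate for a water tank. All coefficients here are
# assumed values for illustration; a serious answer would model wind
# speed, humidity, and tank geometry explicitly.

GALLON_L = 3.785       # liters per US gallon
C_WATER = 4186.0       # J/(kg*K), specific heat of liquid water
L_FUSION = 334_000.0   # J/kg, latent heat of fusion

def hours_to_freeze(gallons, t_water_c, t_air_c, u_w_per_k):
    """Rough hours until the water is fully frozen.

    Energy to remove = sensible heat (cool to 0 C) + latent heat.
    Loss rate is crudely held constant at U times the average
    water-air temperature gap.
    """
    mass_kg = gallons * GALLON_L  # ~1 kg per liter of water
    sensible_j = mass_kg * C_WATER * max(t_water_c, 0.0)
    latent_j = mass_kg * L_FUSION
    mean_dt = (max(t_water_c, 0.0) / 2.0) - t_air_c
    watts = u_w_per_k * mean_dt
    return (sensible_j + latent_j) / watts / 3600.0

# 1000 gal at 10 C, air at -10 C, assumed U = 50 W/K for the whole tank
print(round(hours_to_freeze(1000, 10.0, -10.0, 50.0), 1))
```

Even this crude version shows why the latent heat term dominates: most of the waiting happens at the phase transition, which is the "hard math" part mentioned above.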
Yeah.
Anyway.
Yeah, use it responsibly, I think, is where we all are.
Sherry, any final thoughts before we wrap this up today?
Well, first I want to just thank you so much, Mike, for having us on to talk about this because I love this discussion that we've had to bring out both sides.
And as I said a while ago, knowledge is power.
And if you don't understand the tool you're working with, it's like having a gun.
You know, you can use it for good things, or it can shoot you in the foot.
And so it's knowing that knowledge is power, and that's what we plan to do in this webinar, is to really open up your eyes to all sorts of things that you may not have thought of before.
Well said.
Well said.
Okay.
We'll wrap it up there.
And the website again, folks, is learning4u.org.
Let me go back to the homepage.
There it is.
And it's the numeral four, learning4u.org.
And it's March 1st.
Sherry already mentioned the price.
So folks, just check it out.
And then also on March 1st, go to brighteon.ai, which will look completely different because we're going to be giving a lot of prompting examples.
We're going to give use cases.
We're going to have video demonstrations of how to use Enoch and how to unlock all the hyper-dimensional relationships.
It's going to be awesome.
We get to play with science fiction on our desktop.
So March 1st is going to be an awesome day.
Learn about AI. Be cautious with it.
And also get some for free in your hands and have fun with it.
It's going to be amazing.
So thank you both for joining me today.
Thank you so much, Mike.
We appreciate it.
You too.
All right, take care.
And thank all of you for watching here.
I'm Mike Adams, the founder of brighteon.com and the builder of the Enoch engine.
Enoch refers to, of course, the hidden Bible scriptures, hidden knowledge that didn't make it into the Bible, at least in the more Western versions.
I think the Ethiopian version of the Christian Bible has more of the lost books, interestingly.
But Enoch is the name of the engine.
It's about hidden knowledge and putting it for free at your fingertips.
And that combined with Matt's prompt engineering guide, it's going to make you an expert in how to get the information you want out of the engines that you have access to.
It's going to make a big difference in your life.
So thank you for watching today.
God bless you all.
I'm Mike Adams.
Take care.
Happy Valentine's Day!
We've got specials for both him and her for Valentine's Day at healthrangerstore.com slash valentines.
And if you go there, you'll see some of what we have available.
It's a great way to show your love for your significant other, you know, your spouse, your partner, whatever stage your relationship may be.
Everybody loves to know that you think about them, you care about their health and nutrition.
So let me show you what we have.
Some gifts for her.
Got to think about her first.
We've got, of course, Coco Energize.
Well, actually, show what's on my desk here.
We've got a pretty amazing assortment of what's available.
The golden milk products, the colloidal silver, the lavender body soap, you know, the storable food.
I mean, who doesn't love storable food?
Whey protein.
We've got anthocyanins and personal care products there as well, the first aid gel, etc.
And the Manuka honey.
All these things.
If you go to our website, again, healthrangerstore.com slash valentines, plural.
Don't forget the S on that.
Then you'll be able to see some of what we have available for her.
Here's some frankincense essential oil.
We've got kombucha probiotics.
Here's the super anthocyanins as well.
Elderberry and echinacea.
You know, a fluid tincture there that's really nice.
Shea butter soap bar.
All kinds of amazing things that you can choose from here.
And then in terms of gifts for him, for those of you ladies listening, you want to get your man something that they will really appreciate, check this out.
We've got the Groovy Bee blue light blocking amber glasses and the Escape Zone shielded Faraday ballistic backpack.
Every man wants a bulletproof backpack with Faraday components in it.
We have an EMF-blocking beanie.
We've got the Consequences Covert Knife and all the custom knives that I designed with Dawson knives or co-designed here, you know, including Escape from L.A. Men love this.
They go crazy over this.
Coco Energize, some Silver Breath Spray.
Here we go, you know, potassium iodide for protecting your thyroid, etc.
And some other nutritional, personal care products.
Check it all out.
Whey protein, you know, OptiMSM capsules for mobility.
So much more.
All available at the Health Ranger Store.
Just go to healthrangerstore.com slash valentines and all these specials are good from February 11th through February 14th, Valentine's Day, until midnight Central Standard Time.
So again, check it out at healthrangerstore.com slash valentines.
Also, during this sale only, you get double reward points.
We normally don't do that.
You get double points this time, so double the love on Valentine's Day.
And you can use those reward points against future purchases or also coming up access to our hosted version of our AI engine.
Now, we don't charge to access our AI engine online, but we can't have just unlimited open access to anybody because they'll just clobber the thing.
So we're going to allow you to use reward points to access the engine for free.
Or you can use those reward points against future purchases.
But the big deal here right now, again, February 11th through the 14th, during this Valentine's Day sale, double reward points and an incredible assortment of laboratory-tested, almost all certified organic products, nutrition, food, personal care, and superfoods, all available, healthrangerstore.com slash valentines.
And remember that every purchase you make there also helps support our platform and all of our multiple platforms and our efforts to help keep humanity free and informed and advocate for knowledge, information, and decentralization.
So thank you for your support.