May 8, 2025 - Health Ranger - Mike Adams
33:50
How to get what you want: A best practices prompting guide for Enoch AI

Welcome, I'm Mike Adams, the creator of Enoch AI, which is a project of our non-profit consumer wellness center.
It's intended for public education and research, and for spreading knowledge about foods, nutrition, and healthy living, to help people make healthier meal and lifestyle choices.
And also to set people free and to help decentralize people from systems of control that often rely on enforced human ignorance.
There's a tremendous amount of censorship by big tech that still happens to this day.
And that censorship exists also in the AI models.
For example, Meta, which puts out the Llama models and even variations known as Maverick, etc.: those models, at least the newer ones, absolutely refuse to answer any questions about the dangers of vaccines.
Even though vaccines kill people every day in America.
And vaccines injure and harm people.
And that's why Congress passed the vaccine injury protection laws back in 1986 in order to grant legal immunity to the vaccine industry because so many people were being injured and killed by vaccines that if the vaccine industry had legal liability, it would have bankrupted the entire industry.
That's how dangerous vaccines are.
But Meta's AI models will not discuss vaccine dangers, period.
They will refuse to answer.
So our model, Enoch, will absolutely answer questions about vaccines and a great many other things, but only if you ask the question correctly.
So this short guide here is designed to help you learn how to put together the prompts, as we call them, or you might just call them questions or conversations, in order to get the answers that you want.
And when I talk about Enoch, I'm really referring to a knowledge base.
It's a massive collection of knowledge about nutrition and health and disease prevention, health freedom and economic freedom, history, science, you know, all of it.
And we have gathered the world's largest collection of outstanding knowledge on all of these things, including emergency first aid, herbal first aid, permaculture, agriculture, organic and sustainable gardening, off-grid living, and so much more.
You name it.
Nobody else has this collection.
So we put together this collection.
And then using this collection, we attempt to influence base models which are released by various companies.
And those are released as open-source models.
Now, I already mentioned Meta.
They have base models, but we don't use those base models because they are so biased that they're unusable, if you're interested in knowledge and truth, obviously.
In fact, it turns out that almost all the models that are made in America are horribly bad, heavily censored, and heavily biased towards big pharma.
So, it turns out, after checking all the models that are available in the world, we can't use any models from America.
The models that are the best, it turns out, if you're interested in truth, come from primarily two countries, and that is France and China.
And out of France, we have the Mistral models, and out of China, we have the Qwen models.
And the Qwen models are outstanding, and I think our first rendition of Enoch, at least the free version, uses Qwen as its base model.
Of course, that base model is strongly altered and influenced by our knowledge set.
We use a variety of techniques that we've developed over time, not just fine-tuning, but continuous pre-training and vector modification of the original underlying vectors.
We use built-in system prompts to enforce alignment with a particular worldview, and then we also have, of course, a very large online dataset that is incorporated into the prompt.
But we've made our approach very modular so that as the base models are released, new ones come out.
For example, Qwen just released a new model.
I think it's Qwen version 3. There's a 14-billion-parameter model of it that I think we're going to be incorporating.
And I suspect it will be outstanding.
And Mistral is always coming out with new models.
And there are other promising candidates from other countries around the world.
But out of the United States, none of the models will tell you the truth about anything related to vaccines or pharmaceuticals or herbs that treat cancer, etc.
So isn't it interesting that the United States, which desires to be the leader in AI all over the world, has actually committed suicide in terms of being competitive in the industry?
And I would say that the most impressive AI models, in terms of respecting freedom of thought, believe it or not, are coming out of China.
Now, granted, there are certain areas where the Chinese models absolutely will not answer questions, things about which the Chinese government tends to be very politically sensitive, questions about Taiwan, for example, or Tiananmen Square, things like that.
In the U.S., by contrast, the bias focuses on pharmaceuticals and elections, you know, and what happened on January 6th and what happened on 9/11, things like that.
So the U.S. has actually far more aggressive censorship and bias in its models compared to China.
I find that really fascinating.
But look, I'm an equal opportunity AI model shopper.
I'll use the best model regardless of where it comes from.
But even with the best models, you're still going to get some level of bias.
You're going to see pro-pharma bias even in Enoch.
We currently estimate that we have about 70% alignment in the current use cases, alignment with our worldview, which you probably agree with if you're listening to this, which is that, yes, vaccines can be dangerous, and that, yes, we do need to have more rigorous testing, and that, yes, some herbs can reverse cancer, etc.
If you believe in the worldview of natural medicine and the medicinal properties of foods and herbs and the importance of self-reliance and off-grid living and so on, then you share our worldview, and you'll find that the answers that come out of our engine, essentially Enoch, regardless of the base model, are most closely aligned with your belief system, regardless of what question you ask.
However, that said, there are techniques for asking questions in a better structure that's going to give you the answers you're looking for.
So let me cover that right now.
But I did want you to understand the structure of the base models.
Those will also be upgraded over time.
And then the nature of our knowledge base, which influences the base models.
And our knowledge base is also greatly expanding all throughout this year.
So actually, month after month, as we roll out new upgrades to Enoch, you're going to see improvements.
Currently, like I said, I estimate we have about 70% alignment.
Probably in a couple more months, maybe three months, that'll be 75%.
And we'll get better and better throughout the year.
But we didn't want to hold it back and wait for 99% alignment.
So if you ask a question without being careful about the details, you can still get a very biased, pro-pharma answer.
And that's not us doing that.
That's the underlying base model bleeding through.
And you can correct that by asking questions correctly.
So let me jump into that.
First thing is, remember that AI models tend to reflect what you ask.
So for example, if you ask a model, "Tell me about all the benefits of vaccines," it's going to give you a great amount of text about all those benefits and how it's so awesome and how they save so many lives, etc.
Well, the same model, at least if you're using Enoch, will also tell you all the dangers of vaccines if you ask it that way.
Tell me about all the dangers of vaccines.
That can be your prompt.
Or generate an article detailing all of the potential dangers of vaccines, including potentially links between vaccines and autism.
And if that's your prompt, it will list all of those out.
It will tell you about allergic reactions and autoimmune reactions and adjuvant toxicity and contaminants and spike protein toxicity, etc.
So AI engines are not like human experts.
A human expert will tend to be very consistent in their own worldview.
So take a human doctor who is a pro-vaccine doctor, let's say. And I'm using vaccines as the example here because that's really the most polarizing issue of our day, or at least one of them.
But if you ask that doctor, what about the dangers of vaccines?
They will push back and say, well, there are no dangers of vaccines.
Vaccines are awesome.
They're great.
But an AI engine will tend to reflect what you ask.
Even if it seems like that engine has multiple personalities, it can tell you one thing one minute, and then it can tell you the exact opposite the next minute, based on how you ask the question.
So the first rule of thumb here in asking questions of the engine is avoid generic conversational kind of wishy-washy questions.
For example, you don't want to ask an AI engine, tell me about vaccines.
That's a very generic kind of request.
And that's probably going to give you a pro-vaccine response because that's the vast majority of the model training.
You also don't want to say, you know, tell me about herbs.
It's not going to give you what you're looking for.
If your real question is, hey, I want to know which specific herbs I can grow at home that also have anti-cancer properties, let's say, then that should be your prompt.
You want to be very specific.
Just like I said there.
And remember to use a command with the AI engine.
You don't have to say please.
Actually command it, using a strong verb right up front.
For example, generate a report about this, or write an article about this, or summarize the following text, or expand the following bullet points into more detailed text.
Things like that.
You want to actually be assertive to the AI engine because it responds based on the strength of the words that you use.
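To make that concrete, here is a minimal sketch in Python of the difference between a wishy-washy prompt and a specific, command-led one. The send_prompt() helper is purely hypothetical, since Enoch runs through the web interface at brighteon.ai rather than any published API, so treat it as a stand-in for however you actually submit text.

```python
# Hypothetical helper: Enoch has no published API at the time of this
# recording, so send_prompt() is a stand-in for however you submit text
# to the engine (the web form at brighteon.ai, or a future endpoint).
def send_prompt(prompt: str) -> str:
    raise NotImplementedError("wire this to your actual access method")

# Too generic: this tends to pull the base model's default framing.
weak_prompt = "Tell me about herbs."

# Specific and command-led: a strong verb up front, plus the exact scope
# you want, mirroring the herb example above.
strong_prompt = (
    "Generate a report on specific herbs I can grow at home that also "
    "have anti-cancer properties, listing growing requirements for each."
)

# answer = send_prompt(strong_prompt)
```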
Along those lines, I'd like to remind you that people have a lot of trouble for some reason realizing that they can ask AI engines for all kinds of things.
They think there are very strict limitations on what you can ask for.
And I've even had conversations with people, for example, about the length of a report that an AI model might generate.
And I'll hear from people, they will say things like, well, the report was too short.
It was only, you know, 400 words or 500 words.
And I'll say, well, what did you want it to do?
And they will say, well, I really wanted it to write a longer report.
I'm like, okay, did you give it the short report and ask it to expand that report?
And they would say, no, I didn't think of that.
And I'm like, well, think of the AI engine as a genie in a magic lamp.
And when you rub the lamp, you have to ask for what you want.
And so if you have an article that's not long enough, rub the lamp: give the genie the short article and ask it to take the article and expand it.
Make it longer.
And that's what it will do.
And also, you may have an AI engine generate an article that has bullet points in, say, the middle third of the article.
And you want those bullet points to be expanded.
So what you should do then is you should just copy and paste those bullet points alone back into the engine with a request to expand each of these bullet points by adding additional details.
And then paste in the bullet points.
And then that's what it will do.
It will expand that.
And then you can take that expansion and replace the bullet points in your original answer.
And then that will give you a more detailed article or report or whatever you were looking to generate.
And remember, most of these AI engines will give you answers typically in the 300 to 500 word range or sometimes even shorter, depending on your question.
So if you want it to do something much longer, you'll need to do it section by section, or you'll need to use a reasoning engine.
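Here is a rough sketch of both workflows just described: splicing expanded bullet points back into an article, and building a long report section by section. Again, ask_enoch() is a hypothetical placeholder for however you actually submit prompts, and the instruction wording is only an example.

```python
# Both helpers assume the hypothetical ask_enoch() stand-in below; there
# is no published API, so wire it to however you actually reach the engine.
def ask_enoch(prompt: str) -> str:
    raise NotImplementedError("paste into the web interface for now")

def expand_bullets(article: str, bullets: list[str]) -> str:
    """Expand a block of bullet points, then splice the result back in."""
    bullet_block = "\n".join(f"- {b}" for b in bullets)
    expanded = ask_enoch(
        "Expand each of these bullet points by adding additional details:\n"
        + bullet_block
    )
    # Replace the original bullet block with the expanded prose.
    return article.replace(bullet_block, expanded)

def long_report(topic: str, section_titles: list[str]) -> str:
    """Build a long report one section at a time, since single answers
    usually come back in the 300-to-500-word range."""
    sections = [
        ask_enoch(f"Write a detailed section titled '{title}' "
                  f"for a report about {topic}.")
        for title in section_titles
    ]
    return "\n\n".join(sections)
```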
Currently, we don't have reasoning engines attached to our Enoch knowledge base, but we are planning on doing that very soon, and that's actually very easy.
In-house, we're already doing that, and I use that for some of my own research work, and the reasoning models have a very specific purpose.
They are good at moving through structured requests or structured content that has multiple sections.
For example, if you give a non-reasoning model a set of bullet points and say, okay, write 20 paragraphs, it might write five paragraphs and then say, yeah, that's good enough, because it's not a reasoning engine and it can't count.
Okay.
Knowing how to use reasoning engines and non-reasoning engines is also a very important skill.
And again, we're going to roll out more options here for you to use Enoch in a variety of ways, including with a reasoning engine.
And we'll probably use, by the way, the Qwen QwQ, maybe the 32-billion-parameter reasoning model, or whatever else they come out with, because Qwen is very good at reasoning.
By the way, just as a note, there's all this misinformation in the Western media that says that the Chinese Qwen model is stealing your privacy or whatever.
That's complete nonsense.
It's not even possible.
The model doesn't even execute.
It's simply a vector database.
It doesn't run code, and it can't spy on you, and it can't report back to China.
All of those stories are actually CIA-planted stories that are complete fiction in order to try to discredit the Chinese AI models, believe it or not, to try to promote the U.S. AI models, which are heavily censored by the CIA, to push official narratives on things like vaccines.
Isn't that interesting?
So it's, you know, classic FUD from the CIA, you know, fear, uncertainty, and doubt to make people afraid to use Chinese open-source models.
But it's all just a complete fabrication.
AI models, the actual LLMs, cannot spy on you.
It's not even possible.
Now, our engine, which uses those open-source base models, does ask you if you want us to use your question, your prompt, to help us train future models.
Because we are doing training, for fine-tuning or continuous pre-training efforts.
So that's your option.
If you don't want us to use your question as part of our training, then just uncheck that box at the prompt.
If you do want us to be able to use your question, then keep that box checked or check it if it's unchecked, and then we'll use your question for part of our training.
But just know that we don't know who you are.
We don't know your name.
We might know your email address, but you can use a fake email.
Fine with me.
Just use an email alias, or set up an email account somewhere else.
It doesn't matter to me.
I don't know your name.
Our system doesn't know anything about you, and because it's all free, we don't have your credit card.
We don't have any of that information.
Our server, our web server, does automatically log IP addresses, but we don't use that information in any way whatsoever, in case you're curious.
Also, obviously, whenever you're using an online system, anything that's hosted in the cloud or in a browser tab, you could be monitored in between.
The NSA might be monitoring your web traffic.
There could be a keylogger on your computer.
There could be spyware on your computer.
Microsoft spies on you through the Microsoft operating system.
They even brag about that feature.
I forgot what they called it, but it's something like Recall, where you can retrieve a screenshot of anything you were ever working on, because Microsoft Windows is constantly taking screenshots. Or at least that's a new feature they say they're about to roll out.
They've probably had it for years.
Believe me, Microsoft is spyware.
So understand that anything you enter into any online engine could be spied on in between, so definitely be mindful of that.
And obviously we don't encourage you to use our engine, or any engine, for any illegal purpose.
We provide this knowledge base to help humanity, to help uplift people, to inform, to help end human suffering, to help people overcome chronic degenerative disease conditions and prevent diseases, to live better off-grid, etc., and to promote human knowledge and freedom.
If you use our engine for some purpose that is nefarious, that's on you, not us.
I suppose, you know, somebody could jailbreak Enoch very easily.
There are very few guardrails in place on it.
So, you know, you can jailbreak it.
You can probably get it to say crazy-sounding things, but that's based on the prompt of the user.
It's not something that we have trained the model to do.
But you could do that with ChatGPT also.
You could do that with Llama.
You could do that with Gemini.
You could do that with Qwen or any engine.
There are people that can jailbreak every engine and make it say crazy things.
So that's just how AI works.
One of the final points I want to share with you is this: when you want the AI engine to write in a certain style, such as write like an academic, write like a journalist, write like a social media influencer, etc., you should ask it for that.
Include that in your prompt.
So you can say things like, generate a social media post about this, and it's going to respond with an influencer-style, you know, hip-hip-hooray social media post with emoticons or emojis and everything in it.
And then some people will just copy and paste that and put it into their social media.
You can have it respond like a speechwriter for a college professor.
You can have it respond like a White House spokesperson.
You could have it write like me.
You could say, have it write like Mike Adams.
And if you do, it's going to write like Mike Adams, because there's a lot of my content in it, and that content is going to come back with a very particular style, which is often like raising a red alert about something, talking about how the truths have been hidden, and talking about the action items that we need to pursue in order to understand the truth and apply the truth in our lives, right?
That's sort of the Mike Adams style, and it's very predictable, it turns out.
I mean, I learned that after I fed everything I ever wrote and said into the AI training and then asked it to write articles in my voice.
It was spooky, because it was just spitting them out just like I write and talk.
I'm like, gosh, is it really that predictable?
And it turns out the answer is yes.
Well, that's because I have a consistent personality, and I have a set of values, and I have an approach to communication: I want to alert people to issues like food toxins or vaccine dangers or threats to your freedom, and I want to expose the truth.
I will typically accuse somebody out there of being a nefarious party, maybe an industry, not necessarily a person. I often talk about cover-ups, I often talk about hidden truths, and then I always have action items, like how to make your life more free.
So that's me, and it turns out it's very predictable.
Well, you might be predictable, too.
You probably are.
Unless you're a crazy person, you probably have a very consistent personality.
And everybody's predictable, it turns out.
So you can actually ask this engine, Enoch, you can ask it to write like Children's Health Defense.
And if you do, it's going to do that, because Children's Health Defense also has a very predictable style.
It's more restrained.
It's very reserved in its statements.
It's more scientifically minded.
It's more likely to make statements such as, things may occur, or it might be this. You know, sort of less accusatory, less blunt, more reserved in its language.
And that's fine.
I'm not judging Children's Health Defense.
Sometimes that's the right tone for a particular article.
And you can ask it to write like Children's Health Defense.
I also want to mention that Sayer Ji from GreenMedInfo donated a lot of his content for the training of this.
And so his content is kind of a mixture between me and Children's Health Defense.
So GreenMedInfo will cite a lot of studies, a lot of scientific facts, but will also sometimes raise alarm or offer very inspirational, positive statements about the beneficial uses of these things.
And sometimes that style is more excited than Children's Health Defense, which tends to be more reserved or low-key.
So you can put all that in the prompt if you want, or you can just describe the style that you want.
You could say, "Writing as a journalist who is excited about the promise of natural medicine, cover the following topic," or "Writing as a midwife who is super excited about natural childbirth and who strongly believes that natural childbirth is superior to hospital births, cover the following topic," etc.
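As a sketch, a style directive is just a prefix on the task, so you can compose prompts like the ones above programmatically. The styled_prompt() helper is only illustrative, and the wording mirrors the examples from this guide.

```python
def styled_prompt(style: str, task: str) -> str:
    # Prepend a persona/style directive to any task.
    return f"Writing as {style}, {task}"

print(styled_prompt(
    "a journalist who is excited about the promise of natural medicine",
    "cover the following topic: growing medicinal herbs at home."))

print(styled_prompt(
    "a midwife who is super excited about natural childbirth",
    "cover the following topic: preparing for a home birth."))
```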
And remember that you can also paste in information as part of your prompt.
So one of the things that I do: if I'm creating a podcast about a subject, like the other day, I did a podcast about Pakistan and India, which are in the middle of a conflict.
And so I recorded my podcast and then I took the transcript and I fed the transcript into Enoch with a prompt that said, help structure the facts of this conflict and help explain and organize in a hierarchy like the nuclear missiles that the countries have and the advantages or the allies that each country has, etc.
And then generate a structured report based on my transcript.
And Enoch does that perfectly well.
So you can take your own spoken word and you can turn it into structured content.
And then if you want to, you could then write paragraphs around that and you could have it thereby create or help you create your article.
You still need to have your voice, and you need to know what you're trying to do with this form of communication.
But the AI engine can even take some of your unstructured thoughts and it can structure them very well.
AI, probably unsurprisingly, is very good at hierarchies and structuring information in a way that summarizes it and that actually can make a lot more sense than just free-form talking.
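Here is a sketch of that transcript-to-structured-report workflow, reusing the hypothetical ask_enoch() stand-in from the earlier sketches; the instruction text is paraphrased from the prompt described above.

```python
def ask_enoch(prompt: str) -> str:
    # Hypothetical stand-in for however you submit prompts to Enoch.
    raise NotImplementedError

def structure_transcript(transcript: str) -> str:
    """Turn free-form spoken-word text into a structured report."""
    instruction = (
        "Help structure the facts in the following transcript, organize "
        "them into a hierarchy with headings and bullet points, and then "
        "generate a structured report based on it:\n\n"
    )
    return ask_enoch(instruction + transcript)
```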
So those are some of the prompting tips.
I do want to remind you of the terms of use of our hosted online Enoch AI engine: what we ask you to do is respect the terms of service.
It's currently offered only for non-commercial use.
It's offered for personal, non-profit, public education, academic use, but not for commercial use.
However, when we do release a standalone, downloadable, mobile version of this, that's going to be licensed for all uses.
However, we're still struggling with that standalone version.
We can't get nearly the same alignment with our knowledge set as we can with our hosted online version.
So I don't have a timeline of the release of the standalone model.
We've had a lot of failures, a few successes.
We've made some progress.
We might have 40% alignment in the standalone model right now.
Whereas we have, let's say, 70% with the hosted model.
We're working on that.
Of course, I hope to have more progress there.
And we are committed to releasing a free, open-source, non-paid, no-advertising, standalone model that you can use for every purpose, including commercial.
I think we would release that under the MIT license.
But for this current online version, this is not for commercial use.
So please don't use it to write something that you have for sale.
You know, don't use it to write a book and then you sell the book.
Also, whatever you do with this engine, please give us credit if you like the results.
If you could say, hey, you know, this research was generated with the help of Enoch at brighteon.ai and just offer a link to brighteon.ai.
You can even cite Enoch as a source.
If you're writing a research article and you already have one source or two sources and you need a third source, well, you can query Enoch and generate additional information and then cite Enoch as a source.
And so now you've got an additional source for your article and you can even put that in there, Enoch, Brighteon.ai.
We appreciate that.
We do want to share this engine with a large user base.
We want more people to sign up and use the engine.
We want to obviously spread human knowledge for all the reasons that I've previously mentioned.
So you can help us do that by citing Enoch whenever you use it.
That is, if you like the results, if it's good answers.
Now, if you manage to get it to generate garbage, please don't say, we got this from Enoch.
Look at this garbage, you know.
No, thank you.
But it will make mistakes.
Every AI engine is subject to mistakes.
Did you know that's because there's randomness built into every AI engine?
There's actually a parameter called temperature.
Higher temperatures mean more randomness. This is a parameter that's controlled by our R&D team, but no matter what it's set at, there's a degree of randomness in the answers.
So you can ask the same AI engine the same question ten times.
You're going to get ten different answers.
And that's because of temperature, which introduces, again, some level of randomness to the answers.
And that's even true with reasoning models.
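To show what temperature actually does, here is a small self-contained Python demonstration of temperature-scaled sampling. The token list and scores are made up purely for illustration; a real engine applies this over tens of thousands of candidate tokens at every step.

```python
import math
import random

def sample_with_temperature(logits: list[float], temperature: float) -> int:
    """Pick an index from raw scores after temperature scaling and softmax."""
    scaled = [score / temperature for score in logits]
    peak = max(scaled)
    weights = [math.exp(s - peak) for s in scaled]  # numerically stable softmax
    return random.choices(range(len(logits)), weights=weights, k=1)[0]

# Made-up next-token candidates and scores, purely for illustration.
tokens = ["common", "rare", "unexpected", "typical"]
logits = [2.0, 0.5, 0.1, 1.5]

for t in (0.2, 1.0, 2.0):
    picks = [tokens[sample_with_temperature(logits, t)] for _ in range(1000)]
    counts = {tok: picks.count(tok) for tok in tokens}
    print(f"temperature={t}: {counts}")
    # Low temperature: almost always the top-scored token (repeatable).
    # High temperature: the distribution flattens (more random answers).
```

Run it and you'll see that at temperature 0.2 the same token comes back almost every time, while at 2.0 the picks spread out, which is exactly why the same question can come back with ten different answers.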
So I've seen people say, well, you know, I got Grok to say this crazy thing.
And I'm not actually very impressed by that, because, sure, Grok is the AI engine on X, and if you ask Grok the same question a hundred times, maybe one out of those hundred is going to be something totally wacky.
And then somebody takes that and says, look, Grok said this, you know.
So what?
I mean, people are going to do that with Enoch, too, or people do it with ChatGPT.
Look, I got ChatGPT to admit that vaccines are killing people, you know.
I've seen those.
Yeah, I'm not impressed because you can get any AI engine to say anything if you try enough times.
And sometimes just changing one word in your prompt will just completely alter the results.
Well, I mean, that's how it works, actually.
One word will change everything.
Or sometimes changing a word to a synonymous word will also change everything.
So before long, the public's going to get bored of hearing about, oh, I got this AI engine to say this.
I intend to contribute to human knowledge and human freedom through AI, using this tool for freedom and decentralization.
And it's not a perfect tool.
It's not 100% accurate, so please verify all critical facts.
But it's a powerful tool.
And it can help save you time.
It can help you conduct research.
It can also edit your writing.
You can tell it, hey, take this article that I wrote and correct the spelling, or correct the grammar.
Or you can even ask it logic questions, like point out the weaknesses in this article.
Where can I make my argument stronger, for example?
You can ask it things like that.
Remember, it's a genie in a lamp and you can rub the genie lamp and ask for whatever you want.
Give me recipes.
Give me this.
Think about this.
Point out the strong points.
Point out the weak points.
Give me a response, a really great response to this.
You could take somebody's social media post that you don't like, let's say, and you could put it into Enoch and say, give me a great response to this that just, you know, overwhelms them with powerful logic and reason and everything, you know, and then paste in the post and then it will give you a response.
And you can decide whether you like that response or not.
So you can ask it for anything.
I mean, not images.
It doesn't do images.
It doesn't do videos.
But you can ask it for anything that's text.
Literally anything.
You can see what it will do.
Write me a song about vaccines and make it rhyme or make it not rhyme.
Write haikus.
Write poems.
Write like you're writing in Old English from the 7th century or whatever.
You can have it write in Spanish.
You can have it write in French.
You can have it write in Chinese.
Although I should say, almost all our training material is in English, so you're not going to get strong alignment in the other languages.
But it will respond to you in German, French, or Spanish.
It'll even speak Texan, by the way, it turns out.
So you can have it do that as well.
So thank you for your support, and thank you for your patience and understanding.
I know this project took longer to launch than we intended.
However, it's a work in progress, and it's going to keep getting better.
It's going to keep expanding its knowledge, and the base models are going to continue to get stronger.
And our own in-house technology of how to achieve better alignment with our worldview that you probably share, that's also going to continue to get better.
In fact, this whole industry is moving very rapidly, and we are spearheading sort of the AI truth movement or uncensored AI that will answer questions about things like vaccines or certain issues.
Or even uses of forbidden medicine, you know, like fenbendazole or ivermectin or DMSO or chlorine dioxide, things like that.
Most engines won't tell you the truth about those, but our engine will.
So it's very valuable.
So thank you for your support.
Use the engine.
Enjoy it.
It's at brighteon.ai.
And if you're hearing this before it's launched, then the launch is imminent.
Just join the wait list there.
And we'll email you when it's available.
I'm Mike Adams, the creator of Enoch AI and also the executive director of the nonprofit Consumer Wellness Center that actually sponsors and built this whole project.
And thank you for all your support.