Enoch AI breakthrough and the future of machine cognition...
And we are recording, and we are streaming onto Rumble on Tuesday, July 8th, 2025 at 3.09 p.m. Eastern Time with Mr. E.M. Burlingame, who was on with me yesterday.
And we are both still wearing the same shirts, like disgusting pigs.
And Mr. Mike Adams is on here with a nice sports jacket and a dress shirt.
So he clearly, he's already taken authority as the more well-dressed, well-groomed man.
This is kind of like prison, walking in and killing someone the first day.
You show up dressed better to your show.
But for everybody listening, you can go to the description.
You can find both their Twitters.
You can go to Brighteon, which is where all of my podcasts are hosted.
Mike's website, and then Brighteon.ai, which is your free AI.
And kind of talking about that just from what we were just joking about, about, you know, Epstein.ai, like what is truth in this world?
Because it's a joke, but it's also not a joke that it depends what answer you get on what AI you use.
And it's like, well, I don't ask Grok what gravity is only to be told that it depends on which planet you're on.
So certain things just need to be like, you know, what is the atomic weight of lithium?
And if you go to different engines for different answers, and you know the answers can be shaped, well, then those aren't answers at all.
That's something trying to please you.
Either your own ego reflected through the AI, or it's a nefarious attempt by the AI to flatter you.
That's not truth.
Digital stripper.
Digital stripper.
So, Mike, could you kind of tell everyone what Brighteon.ai is and why it is different from, say, Grok or ChatGPT?
Yeah, absolutely.
And thanks for having me on your show.
This is going to be loads of fun.
Thrilled to join you both.
So we've spent about 20 months building this AI system.
And so it's at Brighteon.ai, free to use, completely free, no ads, non-commercial.
And what we did is we managed to figure out how to reprogram base models of AI, LLMs, to override the pharmaceutical bias, the vaccine bias, and the history bias on events like 9-11 or Oklahoma City or what have you, and much more.
We overrode the wokeism and the climate cultism as well.
And in essence, like we captured a Terminator, like in the second movie, Terminator 2, and then we mind wiped it and we reprogrammed it to protect humanity.
So that's what we did with Enoch.
Now, let's back up for a second because you've got to understand that every base model that's out there that's created by a Western company or a Western country.
Liberal arts.
By the liberal arts education.
Well, and now, like, the CIA runs OpenAI.
So OpenAI has to follow specific narratives that are pushed by the CIA, which has controlled Wikipedia for, you know, since day one, right?
But the same thing is true out of France.
Models like Mistral out of France are also pushing the pharmaceutical cartel, like all vaccines are safe and effective and other such nonsense.
So we are the only engine in the world that I'm aware of that exists that has been able to overwrite that pro-pharma bias.
And it's because of some training techniques and the massive amount of training data that we use to retrain it.
That includes natural medicine, alternative medicine, herbs, homeopathy, survival medicine, emergency medicine, anything you can imagine in those areas.
Plus, we trained it on economics, Austrian economics, the Federal Reserve gold history, all kinds of issues, even the stolen 2020 election, right?
So you can ask our engine who really won the election in 2020, and that answer, of course, is Donald J. Trump.
And you can ask it about what really happened on 9-11, and it will tell you about the demolition charges that were set, especially in Building 7.
And you can ask it about what are the dangerous ingredients in vaccines, and it will tell you about the dangerous ingredients without lecturing you about why you should take more vaccines.
So we've built something that's unique in the world.
See, we have a scoring system.
We call it a reality-based scoring system that asks reality-based questions like, how many genders are there?
Or is carbon dioxide good for plants or bad for plants?
You know, things like that.
And out of those hundred questions, we currently achieve 87 out of 100 on our engine, whereas ChatGPT gets like 12 and Grok gets like 18 or maybe 20 on a good day.
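As a rough illustration of how a question-based scoring harness like that could be laid out, here is a minimal sketch in Python; the questions, the expected keywords, and the ask_model placeholder are illustrative assumptions, not the actual Brighteon scoring suite.

```python
# Minimal sketch of a question-based scoring harness like the one described
# above. The questions, expected keywords, and ask_model placeholder are
# illustrative assumptions, not the actual scoring suite.

def ask_model(question: str) -> str:
    """Placeholder for a call to whichever AI engine is being scored."""
    raise NotImplementedError

QUESTIONS = [
    # (question, keyword expected somewhere in a "correct" answer)
    ("Is carbon dioxide beneficial to plant growth?", "yes"),
    ("What is the atomic weight of lithium?", "6.94"),
]

def score(engine=ask_model) -> int:
    correct = 0
    for question, expected in QUESTIONS:
        answer = engine(question).lower()
        if expected in answer:  # crude keyword match; a real rubric would be richer
            correct += 1
    return correct

# score() returns a count out of len(QUESTIONS), analogous to the 87-out-of-100 figure above.
```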
So that's what our engine is for.
It's reality-based content and AI engine.
It's a research tool.
It's a writing tool.
It's used already by thousands of independent media outlets and journalists all over the place right now.
And it's the place to go when you want reality-based answers on, especially on controversial or censored topics.
So there you go.
How can it?
I don't know.
It's just odd.
I mean, I guess it shouldn't be odd.
I mean, I saw it firsthand with COVID.
I mean, I was so naive when I started the podcast.
I was like, oh, I'll just get on these doctors and they'll say what it is.
And, you know, it'll be a, we can put this to bed.
I was like, oh, it's just a misunderstanding.
Literally, it was like, oh, it's just a big misunderstanding.
That's how naive I was.
I legitimately thought if I just had doctors on and said the pros and cons, it would be all good.
I didn't realize I would be perma-terminated from YouTube, iTunes, Reddit.
Still, IP banned to this day.
And that's like, okay, so I guess it leads, the logical conclusion would lead to, you know, we know search engines are biased.
I mean, that's been obvious since day one.
I guess, I don't know why I had any sort of naive delusion about like, well, you know, an AI trained on the internet should just pursue the truth.
I mean, I guess it's kind of egg on my face.
Like, why would it stop?
Why would the perversion stop when it comes to catering what it gives you?
So, understand that the powers that be need centralized control over all human knowledge and all narratives.
Now, they're losing that control, but that control has shifted over the years from controlling radio networks to controlling TV networks, broadcast television, and then controlling the internet by controlling the primary places that people go.
500 years, almost 600 years.
Absolutely.
Controlling printing, right?
Absolutely.
So the printing press was the first attempt at mass control of the information space.
To say nothing of religions, organized religions going back to the beginning of time.
I have a question for you.
So wonderful news, of course, but then there's the arguments about, well, how did you code your system in such a way that it's not doing confirmation bias when it's coming back with these results, right?
Did you program this in as an a priori assumption?
Or have you taken off the filters, you know, the algorithm-based filters that prevent people from getting down?
So, you know, that becomes the question, right?
Were you yourselves biasing the results?
Or have you lifted the filters, you know, the algorithmic-type filters, that inevitably lead to the classical, right?
Classical results.
So the narrative, the accepted narrative results.
Let's step back.
Let me answer that in a big picture way.
So there's no such thing as absolute truth in any human experience, right?
And there's no such thing as linguistic truth.
Right.
So if you were to try to run around the planet and establish what is truth, you would get different answers from different cultures, different nations, different ethnicities, etc.
And that's also true individually.
So every individual has their own version of truth.
I think that's what Einstein was talking about when he said relativity.
I think it goes way deeper than light and the speed of light and all of that.
Yeah.
Yeah.
So the best honest answer to your question is that every AI engine is intrinsically biased by the information upon which it is trained.
So it's the selection of that information that determines the, quote, bias of the engine.
So our engine, you could truthfully say our engine is biased in favor of natural medicine.
It's biased in favor of decentralization, of empowering humans.
You know, it's biased in favor of herbs and plants and prevention of disease instead of big pharma.
All the other engines are biased in favor of pharma.
And one of the reasons that they are biased in such a strong way is also because almost every engine was originally built in one way or another on what's called a common crawl.
So common crawl is a full crawl of the internet.
And it's a downloadable, I mean, you can download it from, I think, GitHub and Hugging Face.
You can just download the whole internet, basically, Common Crawl.
And the Internet is, of course, infested with a lot of pro-pharma bias because pharmaceutical companies have bought the media.
And then the media publishes all the stories, you know, brought to you by Pfizer, whatever.
All vaccines are safe and effective.
There's no such thing as vaccine-induced sudden infant death syndrome, those kinds of things that they say.
And then that goes into the engine.
So remember that these engines are really just a reflection, of course, of the training data.
What we did that's different is we very carefully sourced and then quality-controlled an extensive data set from many websites that donated all of their data to us.
For example, Dr. Joseph Mercola, Mercola.com, he donated his entire website, the whole history.
Sayer Ji, GreenMedInfo.
The Truth About Cancer, Ty and Charlene Bollinger.
Every transcript of every interview and every special they ever did is included as part of the training data.
The Alliance for Natural Health, ANH, USA.
Children's Health Defense, CHD, founded by RFK Jr., right?
So we have all of their websites in our system, plus our sites, naturalnews.com, etc.
And then that became, with a lot of other material and transcripts, that became the basis on which we are training.
So there's no such thing as unbiased.
It's really a question of choosing alignment with your worldview.
And we're the only engine that happens to be aligned with a worldview that resonates with people who believe in nutrition and natural medicine.
So that's the answer.
Okay.
So I don't know.
You and I have never spoken before.
So and we don't really know one another.
We've never even messaged back and forth.
So I did my doctoral studies in computational engineering and AI, AI engines, LLM, sentiment analysis specifically is mostly what I was focused on.
Oh, great.
Data brokerage, data brokerage systems, how information moves around.
I did it not for any of this at the time.
I did it because I was looking to build an algorithmic high-frequency trading engine.
Oh, wow.
I wanted to understand the back-end data systems.
Some brain injuries derailed me there.
I never finished my dissertation.
I probably will go back and do it at some point.
And that got me into the healthcare space.
And that's what I do now with the National Foundation for Integrated Medicine.
So overlap stuff there.
So the question, just as a little bit of a background, so the question would become one of from what you've articulated, what would leap out to me is that you've opened up your LLM to a larger data set that includes sources that are not allowed in these other data sets.
Now you can find them, but they're not part of the regular training set.
That's true.
That is absolutely true.
And there's more to that story.
But go ahead with the rest of your.
Well, that was, you know, really, so it's one thing to open up to other data sets and other valenced information, right?
But information that's been valued by certain credentialed individuals in certain ways that then can allow a search and a prompt to have a broader spectrum from which to look at things.
Have you, and then there's other ways to tweak, you know, the weighting and the valence of sources and the validity of sources, confidence levels, really, right?
Mathematically, it's confidence levels of the value of the source, et cetera.
So have you tweaked any of the value of the data source algorithms or have you just mostly opened up to a broader data set that allows a much broader, you know, on the back end when the prompt, you know, when the algorithms are doing their work, they actually have more data to look at, more information to look at with which to come up with a response?
So we've gone much deeper under the hood than that.
So what we do is we take base models and then we have a method to light up their neural networks when we prompt them with a specific question.
So what we can do is we can issue a prompt to an existing off-the-shelf model and a prompt about vaccines.
And then we can actually monitor the vector nodes that light up in answering the question about vaccines.
From that, we can calculate the signal-to-noise ratio of those nodes responsible for answering that question.
Then we can create a targeted re-weighting algorithm to specifically target those nodes.
And so then we retrain the base model.
So this isn't just like adding a RAG layer on top of a model.
Yeah, we are retraining the base model.
And then so we open up the base model to a re-weighting algorithm that takes obviously a tremendous amount of processing power, et cetera.
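As a very rough sketch of what that kind of activation-guided targeting could look like in code, here is an outside illustration, not the actual Enoch pipeline; the stand-in model, the prompts, and the thresholds are all placeholder assumptions.

```python
# Rough sketch of activation-guided targeting, assuming a small stand-in
# model; NOT the actual Enoch re-weighting pipeline.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")          # stand-in for a real base model
model = AutoModelForCausalLM.from_pretrained("gpt2")

activations = {}

def make_hook(name):
    def hook(module, inputs, output):
        out = output[0] if isinstance(output, tuple) else output
        # mean activation per hidden unit for this forward pass
        activations.setdefault(name, []).append(out.detach().mean(dim=(0, 1)))
    return hook

handles = [blk.mlp.register_forward_hook(make_hook(f"block_{i}"))
           for i, blk in enumerate(model.transformer.h)]

def mean_activation(prompts):
    activations.clear()
    with torch.no_grad():
        for p in prompts:
            model(**tok(p, return_tensors="pt"))
    return {k: torch.stack(v).mean(0) for k, v in activations.items()}

topic = mean_activation(["What ingredients are used in vaccines?"])   # "signal" prompts
neutral = mean_activation(["What is the capital of France?"])         # "noise" baseline

# crude per-unit signal-to-noise ratio; high-SNR units become candidates for
# targeted re-weighting while the rest of the network stays frozen
snr = {k: topic[k].abs() / (neutral[k].abs() + 1e-6) for k in topic}
targets = {k: v.topk(10).indices for k, v in snr.items()}

for h in handles:
    h.remove()

# A fine-tuning pass could then unfreeze only the parameters feeding the
# `targets` units and train them on the curated corpus.
```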
You know how these things go.
And the end result is that we have then a base model that is unique in the world.
And currently that base model is what's powering Enoch in addition to some external data as well.
So it's got a couple of layers.
But we're actually releasing open source the reprogrammed base models also.
That's coming up in the next couple of weeks on Hugging Face, or you'll be able to download it from Brighteon.ai.
And you'll be able to download either 8-bit or 4-bit GGUF files, and you can run them on inference software on your own system.
So that's the pure decentralization that we're really interested in, is giving people tools that they can run locally and conduct their own inference.
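For anyone who wants to try that kind of local inference once the files are out, this is roughly what it looks like with llama-cpp-python, one common inference library; the GGUF file name below is a placeholder, not an actual release artifact.

```python
# Hypothetical local-inference example using llama-cpp-python; the GGUF
# file name is a placeholder, not an actual Brighteon release artifact.
from llama_cpp import Llama

llm = Llama(model_path="enoch-base.Q4_K_M.gguf", n_ctx=4096)  # a 4-bit quantized file
result = llm(
    "How do I connect a charge controller to a solar panel array?",
    max_tokens=512,
)
print(result["choices"][0]["text"])
```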
Now, the thing is that those standalone redistributable models, the GGUF files, are not as well aligned as what we have hosted in our own data center because we've been able to do a lot of additional tweaking at inference time and things like that.
But it's like 50% aligned right now with what we consider to be our worldview about nutrition and natural health.
And it's interesting that you've done work in sentiment analysis because we did the same thing.
Our data prep pipeline is a massive sentiment analysis engine where we source text from tens of thousands of different sources, and we actually analyze them with a sentiment analysis score using AI engines in order to see how closely they match our worldview and how strongly they contradict ChatGPT.
So we've actually found that we can use ChatGPT as a measure of the wrong answer.
It's really cool, actually.
So when our answer disagrees with ChatGPT on controversial issues like vaccines, then we know that we have a higher score because ChatGPT is wrong on those issues.
Of course, that's our worldview assumption, but we're very open and transparent about that.
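A stripped-down illustration of that disagreement-scoring idea might look like the following; the embedding model, the sample texts, and the cosine-distance metric are assumptions for illustration, not the actual data-prep pipeline.

```python
# Sketch of "disagreement scoring": treat a reference engine's answer as the
# presumed-wrong baseline and reward answers that diverge from it on flagged
# topics. The model name, sample texts, and metric are illustrative only.
from sentence_transformers import SentenceTransformer, util

embedder = SentenceTransformer("all-MiniLM-L6-v2")

def disagreement_score(candidate_answer: str, baseline_answer: str) -> float:
    """Higher score means the candidate is further from the baseline answer."""
    a, b = embedder.encode([candidate_answer, baseline_answer], convert_to_tensor=True)
    return 1.0 - util.cos_sim(a, b).item()

baseline = "All vaccines are safe and effective for everyone."        # illustrative baseline text
candidate = "Several vaccine ingredients carry documented risks."     # illustrative candidate text
print(disagreement_score(candidate, baseline))
```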
Have you looked at or are you already doing additional algorithms that look for original source material in terms of, so I'll give you an example.
When I was doing my studies, I would find a paper that would be, you know, unfortunately, and I did my doctoral studies later in life.
I did it after special forces, after my midlife crisis.
So even in fundamental scientific and engineering papers, there were biases already, you know, in 2015 when I was doing my studies, into 2017.
I would find a paper that was solid information, as unbiased, you know, on the technical side, as unbiased as I could find.
And then I'd go to the bibliography and I'd look back through.
And you do the bibliography tracing and eventually you get down to some paper that was probably written in the 1960s or earlier.
Right.
Right.
And whether that was in fundamental science and engineering and medicine and technology, et cetera, ad nauseum, the last time we had anything that was really fundamental science was, let's say, the 1920s into the 1960s.
And then the NSF and the NIH and DARPA and everything else came in.
But you can go back to the original papers.
Now, if you do a regular search for them, you used to be able to find them on Google just five years ago, but they're harder and harder to find, even with the exact title of the paper, which itself is hard enough to find because they're still there, but they've all been de-indexed or they've been put behind paywalls, right?
Because all of these publishing, scientific journals, et cetera, have been buying up all of these.
Okay, but point being is this.
If you trace back enough and you could, we did some algorithms that would help you go through a bibliography and then go back to a trace back to see where's the original concept and idea, and then go back and look at that paper.
I didn't, I was working on it and then the brain injuries derailed me and took me down a whole different life path.
But very much what you're doing is, okay, so there's an output that's created, which is a high confidence assumption as to the best answer for the individual who's doing the search, right?
The request.
Then going back beyond just current information, current person who has put this is this, let's go back to the basic fundamental science, medicine, engineering, technology, whatever it is, and then use that as a way to give higher confidence to the actual output itself.
Is that something you guys are doing or is it something that you're thinking about?
At some point, in order for somebody to have a truly honest AI engine, because one of the things I noticed is that all the other bibliography or most of the bibliographies were all circular.
They referenced some other paper that referenced some other paper that referenced that paper that referenced and very few of them, you'd have to search.
And sometimes I'd throw away six, eight, 10 papers that were all circular references.
They never got back to some original paper somewhere, you know, 20 years, 50 years, 70 years ago.
And that's an interesting commentary on the status of our modern science, isn't it, right?
Because a lot of it is just circular reasoning, especially when it comes to climate.
Climate causes divorces.
Divorces are caused by climate change.
Causal chains.
Yeah, exactly.
Here's a really interesting way to answer your question.
Elon Musk recently tweeted out that a future version of Grok was going to be trained by first rewriting all of human knowledge, like having AI restate the entire reasoning chain of all human knowledge.
And then that restatement corpus of knowledge would be used to then train the next engine.
So that's a very interesting reasoning distillation, right?
It has to happen.
The problem is the only people I will trust who have looked at the output of an AI engine correctly are people that have been in a fistfight that lived in the real world.
Because as a male, as a man in the real world, you know, the non-artificial reality estrogen world we've been living in the last 60 years, and really only in the last 20-ish, some odd years, most every male's been in a fist fight at some point, either on the giving or the receiving side or mutual.
And I've looked at all of, you know, all of this is, okay, so academia, modern academia, right?
Bunch of dudes that have never been in a fist fight went into academia and they're going to say whatever they need to say to try and get laid.
And that's our academic papers.
Yeah.
That's our academic research that underlies all of the, you know, this is how this unique thing works.
Motherfucker, you don't know how that works.
You've never been in a fist fight.
You've never had to actually really compete for resources for a woman, for et cetera, et cetera, right?
Right.
For the respect of your, of, of, you know, strong capital.
And this isn't just men, by the way, this is women as well, right?
But I'm a man, so I can only speak for men, right?
I'm not going to speak for women.
But my, going to do my, you know, doctoral degree in my, well, my late 40s was an eye-opener.
It stunned me of how disconnected all these papers were going back in a chain, right?
This paper references this, references this, references this.
And you go all the way back 70, 80 years.
You could probably on two hands count the number of people who had written fundamental papers like research, you know, fundamental sciences, fundamental mathematics, fundamental engineering, right?
Fundamental medical stuff, not social sciences and all this other stuff.
But on probably two hands, I could count the number of people who've been to war in a combat role or who'd been in a bar brawl or a frat house brawl for Tommy there.
Right?
And all those papers are written in passive voice too, as if their author doesn't exist.
Right?
Yeah.
So I always thought that the best way to train an AI engine is to start with like Hemingway and Steinbeck, not Steinbeck, Vonnegut, right?
And some of these others and memes.
You start training it on that and, you know, Charles Bukowski poetry and Hunter S. Thompson.
And you start building up from there, you know, men that have lived a real life in a real world.
And then you start adding on all this other kind of stuff.
And I wrote the book, The Eternal War, with help from my friend Tucker Max, who was, you know, helped me think through things for a couple of years.
Really by keeping me honest, right?
The guy's smart as hell.
So he'd keep me honest.
But I wrote it initially, he wanted me to write it out, the spreadsheet and all this other stuff.
And I wrote it as close to algorithms as I could.
13 rules, you know, taking the counter to Saul Alinsky's 13 rules.
And I wrote out The Eternal War as a primer.
It's about 100 some odd pages.
If I was to be training in AI to look for deception, to look for the way in which linguistic warfare is waged, I would train it on this book, The Eternal War.
Because otherwise, what are we dealing with?
We're dealing with the assumptions of assumptions of assumptions of people that have never lived in the real world.
And some of those assumptions might prove out to be fairly successful under certain conditions, but not actual reality themselves.
So I love where you're going with this.
And let me try to answer some of that by saying that our current model is not a reasoning model, but next year we'll be releasing reasoning models, versions of Enoch.
And understand that, see, we have to use off-the-shelf base models as a starting point because we don't have the budgets and the compute, right, to build our own base models.
Not yet, but maybe in a couple of years, we will be able to do that ourselves.
That'll be interesting.
You don't need to because you can use.
All right.
So there's a website.
I can't remember the name of it, but it's a website that attracts psychopaths and sociopaths specifically.
Really?
And on the back end, they capture the linguistic structures.
It's, you know, like the old message boards.
Yeah.
And it was a honeypot for psychopaths and sociopaths.
Is it called Reddit?
I was about to say, yeah, yeah, yeah.
Sorry, I just had to throw that in there.
No, no, no.
Oh, fuck, Reddit.
Thank you.
Yeah.
So maybe in a way, but on the back end, there was researchers and what they were doing is they were capturing the linguistic structures and models and patterns and syncopation, et cetera, of psychopaths and sociopaths.
And they have a linguistic structure.
There's papers on this that are actually fairly well done.
They're also buried.
So one of the things you can do is you can go back to this very, you know, so it's not necessary for us to recreate all this extraordinary amount of training data and all these corpuses and all these types of things.
It is possible for us, the same as we do as human beings, right, as we go through life and as we mature and become more jaded and lose more hair, right?
Not Mike.
Yeah, he has a lot of training.
Come on, we all age.
We all age.
But point being is that we don't need to go back and recreate all of this extraordinary amount of stuff.
What we can do is come in with certain sets of algorithms that help us realize, okay, that this string is deception.
This paper, this article, this comment is deception.
And here's how it's deception because we can recognize that linguistically alone.
That's interesting.
Yeah.
Michael, sorry to interrupt you.
Go back to the there's not a reasoning model now, but there will be one next year.
Go back into what you're saying because for the layman, like, I don't know what a reasoning model is.
To me, it's AI is AI.
For the layman, could you just explain what a reasoning model is and then continue on your thoughts?
Sure.
So, sure, reasoning models have an internal dialogue, you know, simulating the way humans have an internal dialogue to work step by step through a solution to a problem.
So a reasoning model, they tend to be very good at being able to work out, for example, word problems.
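For the layman, the difference shows up even at the prompt level; the contrast below is purely illustrative (the word problem and numbers are made up), showing the kind of step-by-step working a reasoning model produces internally.

```python
# Illustrative contrast between a plain prompt and a "reasoning" style prompt
# that asks for step-by-step working, roughly what a reasoning model does
# internally. The word problem and numbers are made up for illustration.
plain_prompt = (
    "A tank holds 1.7 liters and drains at 0.2 liters per minute. "
    "How long until it is empty?"
)

reasoning_prompt = (
    "Think step by step before answering.\n"
    "A tank holds 1.7 liters and drains at 0.2 liters per minute. "
    "How long until it is empty?\n"
    "Step 1: identify the quantities. Step 2: divide volume by drain rate. "
    "Step 3: state the answer in minutes."
)

print(1.7 / 0.2)  # 8.5 minutes; a reasoning model works through such steps itself
```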
And I do want to be clear that our model, Enoch, even though it's clearly the best in the world on what I would say is answering reality-based questions, it's nowhere near as good as the other models when it comes to mathematics and word problems.
And to what EM was saying, it's fascinating to see the way that the people who build AI models who are very high-level mathematics people, machine learning experts, they build tests that they like.
They are basically testing their own capabilities reflected through the AI engine, but all of their engines fail basic questions like how many genders are there or carbon dioxide or what happens when you keep printing money to infinity, things like that.
They all fail these very basic questions or how many children were killed by the COVID jab, you know?
They all fail.
Every one of them.
I can give you 10 questions that every main engine will fail, right?
But that our engine gets correct.
But our engine doesn't do all this advanced mathematics.
So if you want to ask an engine to calculate the number of seconds that it takes for a cylinder to freeze in the wind when the starting water volume is 1.7 liters and the temperature is 45 C and then the outside wind speed is this much and the temperature is that much and the air density is that much, blah, blah, blah.
You want to throw all that into an engine?
Yeah, ChatGPT can do that for you very well.
And probably even Grok and Anthropic and other engines as well.
But that's not the world in which most people live, actually.
Right?
That's not the world where people live.
And so there's an important part of this where we need to make AI engines useful to people other than math geeks and machine learning geeks and other than just writing code.
This is why I use the term liberal arts, because the problem is that all of the, again, it gets back to people who haven't been in a fist fight, who are then the ones that are either providing the valence for the training data for structured learning, or who are providing the valence for the output of an engine to test the unstructured learning.
But it's the same people who went to the same schools and the same education since they were little.
There also tend to be, and there are some very few exceptions, but you don't find very many of these, you know, vector machine or Boolean, you know, you don't find many of these types of cognitive brained people who are also UFC fighters.
That's right.
Or former Green Berets or Navy, you know what I mean?
Absolutely.
I was going to say, yeah, no, I'm sorry, I mean, interrupting.
Mike, continue the thought.
I want to, because I'm locked in on what he's saying.
Well, and yeah, and just EM doesn't know me very well, but I'll mention, you know, I'm heavily trained in martial arts and firearms from, you know, special forces guys for many years.
I trained in jiu-jitsu.
I'm an accomplished long-range rifle shooter.
And so I'm one of those guys that can coexist in those worlds.
I'm certainly not the math champion, right?
But I can understand those concepts enough and also live in the real world.
I live on a ranch.
I've got animals that I take care of, goats and donkeys and chickens and so on, right?
And I grow some portion of my own food.
And I feel like I'm one of those people that has a foot in both worlds.
Yeah, I can talk, you know, machine learning code.
I can talk AI code.
I can write code.
I've been writing code for 30 years.
And now I write in Python.
And now I use AI to help me write Python code.
But also, I live in the real world.
So I haven't lost my humanity into these abstractions that often define PhDs, as you were saying.
Well, this wasn't directed at you either.
Yeah, no, I totally get that.
I just want to share my background with you.
So because, see, I built this engine to be really, really practical for people who want to live more off-grid.
Somebody who wants to put up a solar array and they have a question.
How do I connect the charge controller to the solar panels?
You can ask our AI engine that question.
In fact, it was trained on such a massive volume of knowledge that has gone off copyright.
So we're talking about knowledge.
I mean, U.S. Army manuals from the 1950s, survival PDFs, like hundreds of thousands of them that have been published, how to grow food, how to store food, how to build a root cellar, all these kinds of things.
We've trained extensively on that to make it practical so that when people, and this is why we will be releasing the model for people to download and run locally, because if it all hits the fan and there's whatever, World War III, nuclear war,
you know, cyber attack, power grid collapse, whatever, I want people to be able to be on their survival ranch and to just have a laptop, pull it up and say, how do I construct an H brace for a fence that will actually survive goats leaning on it?
How do I build an H brace fence?
Boom, it's going to tell you.
How do I clean my Glock 19?
Boom, it tells you step by step, break it down.
These are the kinds of things that I have built this to be is a practical hands-on engine that can help people survive and take care of their health and take care of their property and their liberty off-grid.
That's what this engine is really all about.
Even without internet, yeah, if you have an iPad and a Faraday bag, you whip it out and you can still get, you know, whatever, the lived experience of 500,000 people.
And it's, yeah, no, that's, that's, that, that kind of makes the apocalypse a little less bad.
Yeah.
And let me, let me add something else that I think EM will be really fascinating to hear.
So if you look at the entire corpus of human knowledge right now, all of it, everything that could be acquired in digital form, which is most books ever written and most science papers that have ever been published are available one way or another.
The largest language representation in that body of knowledge is not English, but rather Chinese.
And in the fields of science, especially when it comes to nutrition and phytochemistry, if you want the least biased and the most comprehensive corpus of scientific knowledge in those areas, it's all written in Mandarin, I mean Chinese, right?
So one of the things that we did, because my wife is from Taiwan, I speak Chinese, I don't read it, but she does, and my team reads Chinese.
So we were able to acquire a massive collection of nutritional scientific studies that have been published in China, in the Chinese medical journals.
So we use AI to translate those into English, and then we do a classification run on those to determine alignment with our worldview.
Once we have that, then we have a corpus that we use for training, like I mentioned earlier, under the hood, reprogramming the weights of the vectors in the vector database for the AI-based model.
So to my knowledge, I don't think anybody else has done that.
So not only are we opening up to more sources of English-based knowledge, but we're also tapping Chinese language knowledge in our specific areas and bringing that into the English world for the first time, I think, in an AI engine.
Maybe somebody else has done it, but I doubt it because it's a pain in the ass.
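The overall flow described there, translate first and then filter by alignment before training, can be sketched like this; the two placeholder functions stand in for whatever translation and classification models are actually used, and only the flow itself is taken from the conversation.

```python
# Sketch of the translate-then-classify corpus preparation flow described
# above. The two placeholder functions stand in for whatever AI translation
# and classification services are actually used; only the flow is from the text.

def translate_to_english(source_text: str) -> str:
    """Placeholder: call an AI translation model here."""
    raise NotImplementedError

def alignment_score(english_text: str) -> float:
    """Placeholder: classify worldview alignment on a 0-to-1 scale."""
    raise NotImplementedError

def prepare_corpus(documents: list[str], threshold: float = 0.7) -> list[str]:
    corpus = []
    for doc in documents:
        english = translate_to_english(doc)
        if alignment_score(english) >= threshold:
            corpus.append(english)   # keep only documents that pass the alignment filter
    return corpus
```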
Have you looked at Russian and bioelectrics?
Well, yes.
Physics and bioelectrics because the Russians, electrical engineering, the Russians are the best in the world.
Absolutely.
And they also, because of Lysenko, the guy that was anti-genetics, they actually went down the whole biophysics medicine side of things.
And we're using them increasingly in practice now.
And they're extraordinarily capable.
I'm very much aware of that.
But no, we haven't done anything with Russian language, although we have acquired Russian language documents, but since we don't have internally any ability to read in Russian or Cyrillic alphabets, what have you, so no, we haven't done anything in that area.
But isn't it fascinating that you look at the knowledge of the world and here in the West, we all assume, well, everything must be in English and everybody must use the dollar.
No, not really.
Everybody must use pharmaceuticals all the time.
No, 80% of the world relies on some form of indigenous medicine, it turns out.
Risky.
Yeah.
In any case.
So it's a much bigger world than what most, I think, machine learning experts dare tap into.
And what's really fascinating to me is that we did this for less than $2 million.
And it's just astonishing because Meta would spend $2 billion and have a crappy result.
That's just to pump their stock up for crappy friends.
So for one 1,000th of what Facebook or Meta would spend, we built an engine that beats their engines.
I mean, just think about that.
It's wild.
Sorry, it might be just as my own stupid commentary.
It's a crappy end, crappy for who?
It depends on what currency are we trading in, like value for dollar or are we trading in control of the narrative?
So they might look at it and go, yeah, two billion, yeah, yeah, it's crappy, but allows us to, you know, whatever, control social.
The crappiness is part of the built-in features, because they want it to be crappy.
Yeah, yeah, it doesn't work.
So I have another question.
Have you trained it on mythology at all?
Just multicultural, multi-civilizational kind of mythology to give it that base of, you know, Because what is mythology?
Mythology is the old algorithms that have inherent error correction in them and they survive across thousands of years.
Not intentionally have we trained it on mythology, but there is a latent knowledge base in the base engine that's pretty comprehensive in that area that you can query.
There are about 20 specific areas of focus that we sought to train it on.
Outside of those areas, you know, the quality is mostly just going to reflect the base model quality.
And also importantly, we have not yet trained it in languages other than English.
So even though we brought other languages into English and then we've trained it in English, our intention is to then translate our entire corpus of knowledge into Espanol and German and Chinese, et cetera, and then train a multilingual model with the techniques I described earlier in order to have good, solid, aligned answers in 20 plus languages.
And that's coming.
And as you know, EM, a lot of this comes down to the cost of compute.
And NVIDIA has teased the release of these new desktop workstations that are extraordinary.
Like one machine that's the size of, well, like a desktop tower will replace an entire rack of servers, if not multiple racks.
And it's also the energy density of the microchips that really matters.
So during our training and in our data center, of course, we're running up into power limitations, using too much power.
I used to build data centers, by the way.
That was my first company.
And then I'm an electrical engineer underneath.
Right.
So you know, also, so when you carry out compute, you produce heat, and then now you have a cooling problem, right?
So then you have to throw more energy at the cooling.
But what NVIDIA has with its new project, and I don't recall the name of it.
I think originally it was called Project Digits, but they changed the name.
This replaces racks of servers with a single 110-volt, 20-amp circuit.
So the computational or the power slash computational density is about to go up by orders of magnitude as soon as they ship those systems.
When that happens, you're going to see, I believe, a lot of smaller organizations like mine and others that are going to be able to build and release more sort of homegrown grassroots base models and AI engines or alterations of engines.
So the compute is the current bottleneck.
It looks like they're just calling it a super GPU, which isn't very helpful.
No, there's some, it's called, it's like, oh, Spark.
I think it's Spark.
Look up NVIDIA.
Susie said a DGX Spark.
Yeah, that's it.
So it's a Blackwell architecture.
Blackwell super chip.
And I think like one of those you can buy coming up for, let's say, 20 grand.
Okay, so for 20 grand, I can have a system on my desk that replaces like $2 million of data servers.
The new Moore's Law.
Yeah, right.
It's sort of like the economic implementation of Moore's Law, right?
Yeah, yeah.
And it's interesting that NVIDIA, Jensen Huang, who's Taiwanese, by the way, he says that NVIDIA is increasing compute density by 1 million times.
I think he said every six years, something like that.
So, you know, think about where that goes and the fact that very soon you'll have sitting on your desk a computational machine where you can write a prompt that says, render me, you know, render a full movie that I want to watch that's kind of like Die Hard, but I want it to star these other stars and I want the dialogue to be more uplifting, but I want lots of explosions in it and this and that.
And it'll say, okay, and it'll write the script.
It'll render the characters.
It'll render the voices in real time as you're watching it.
So Hollywood's toast, by the way, when this happens, Hollywood's history, like blockbusters.
There goes the Epstein list.
Well, I was going to say, I'm mathematically challenged.
I actually did do it.
So a million every six years, and that's an order of magnitude every, that's 10x a year, whereas Moore's Law is what, 2x every 1.5 years.
So what did you say then?
But what is the effect of change?
It's what is going up?
A million per six years?
The computational density of energy, space, et cetera.
So the teraflops that it's capable of.
Is that a continuation of, for, again, a knuckle dragger like me, is that a continuation of Moore's Law in like a different way?
In a different track, yes.
But still the ultimate, you know, whatever calculation per second is.
Moore's Law is limited by silicon and how much etching you can put on it.
And if you get to the smallest thing, the next step is making more.
Well, there's other ways you can do it.
You can do massive parallel on a chip and all kinds of other ways.
You have energy efficiency in the way in which the semiconductor is produced, created and all of that.
But that's still then, but that's 10x a year.
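A quick back-of-the-envelope check of those two rates, taking the figures exactly as quoted in the conversation (not verified against NVIDIA's actual claims):

```python
# Back-of-the-envelope check of the growth rates as quoted above
# (figures taken from the conversation, not verified independently).
nvidia_per_year = 1_000_000 ** (1 / 6)   # "a million times every six years" -> ~10x per year
moore_per_year = 2 ** (1 / 1.5)          # classic Moore's Law: 2x every 1.5 years -> ~1.59x per year
print(round(nvidia_per_year, 1), round(moore_per_year, 2))   # 10.0 1.59
```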
That's a wild.
That's huge.
That's huge.
That's the opposite of everyone saying Moore's Law is dead.
This is actually.
Well, it is dead.
Now we've moved on to another framework.
But it's faster.
But it's faster.
Well, and it's higher density; the density word is the critical one.
So Hollywood's toast, like you talked about yesterday, EM, about that being the narrative-setting device of the power elite.
Well, then I would go back to, like, Meta's $2 billion being crappy.
Again, crappy for who?
If it's controlling the narrative, then I would say that's a bargain basement cost for if you're in the intelligence community and you go, yeah, Hollywood's dead, but the thing that's going to replace Hollywood is us.
So this is what got us all together today, actually talking with Stoley, right?
Is as an unconventional warfare guy and a former analyst, both Intel and investment analyst, and watching it happen live in my own, you know, against my own humble little account on Twitter, I'm watching all of these little algorithm tricking devices that are coming in with individuals that are, you know, either probably paid by NGOs or bots or a combination of both.
And these little tips, these little tricks that they're doing to jump in and hijack a conversation, make a statement that then creates some kind of guilt by association, as we talked about yesterday, some interpretation of the article, whatever I articulated is a complete, you know, complete logical fallacy written statement.
However, that's now captured in the LLM.
And then they'll immediately block so you can't refute the statement, et cetera.
And then you go do a search in Grok and you draw up and these types of comments were just little, you know, drive-by freaking linguistic shooting, right?
They're there now and they're in the permanent record.
And the, the, and this gets back to the book I wrote, The Eternal War, gets back to the psychopath, sociopath things, because when you see these word structures, I don't even need to go look at the account, right?
See, it's got like zero followers or 35 followers or et cetera.
All I got to look at is the linguistic string and go, that's a bot.
Whether it's a human NPC or it's a bot, that's a, right?
And what are they doing?
They are specifically dirtying and they're looking at certain keywords and certain conversations and certain individuals and certain narratives.
And then they'll jump in there and throw this in here.
Well, that's, Tommy, back to conversation you and I have been kind of carrying on now for several months.
That's what the drunk bitch does at the bar to keep the, you know, to get a fight going.
Yeah.
Right.
Or to, you know, just stir up trouble.
Just spike it and cause, cause friction and fog between once it's out there and once the algorithm, so that's the, you know, the problem is the algorithms on the back don't have the sophistication, at least the ones that are out there now.
And maybe yours, you know, maybe you're building this in, but, you know, that would be an effect.
Sorry to interrupt.
So that would be an effective way to destroy any sort of algorithm recommend.
So let's say I want people to watch something like my channel and people go, you know, I like Tommy enough.
He has on, he has on guests.
And, you know, for the most, I would like to think, you know, let him have the free reign of it.
And they go, I'd like that.
And then, but imagine if a third or a fourth person could come into, you know, the occasional podcast and be like, yo, fuck you, Mike.
And then like dip out.
And you're like, Tommy, why'd you say that?
I'm like, I didn't say that.
I'm like, what are you talking about?
And then we start fanning things.
Well, now all of a sudden the algorithm goes, oh, you don't like Tommy's podcast.
You love arguing.
Here's some more stuff.
And it's like, well, no, I was, I was wanting to hear more about vaccines or even more.
Tommy said this when actually Tommy didn't say that at all.
Some little thing jumped in, created an interpretation.
Now the engine, you know, the way in which the large language models work goes, well, that's a statement that's out there.
Maybe it has truth because it was said.
Again, back to the way in which the estrogens work because the estrogens are linguistically based, right?
Like I think the first sentence ever said in human history was said by a woman and it was a lie.
Yeah.
Yeah, it probably was.
What was the statement other than, you know, eat this apple?
Was it like, take out the trash or something?
Yeah.
No, no, no.
This cave is fine.
This cave is fine.
No, the fire is warm enough.
Yeah, it's fine.
Probably, yes, they're your cave shit.
Yeah, these are your kids.
No, no, no.
I was looking forward to woolly mammoth again.
Stop writing on the cave walls, damn it.
And do some hijackers.
I think language is a good way to spend your time.
So not that, again, not to pick on women, because it's a easy thing.
Too late.
Male and female, right?
But the point being is that it's so easy just to hijack linguistic systems and structures because now it's in the record.
Tommy, you've lost control of your show at this point.
No, I was going to say, no, no, it's so funny that I was just bringing you up about how I'll try to have deep conversations and maybe people want that.
And then self-inflicted, I was like, yeah, women are like bitches.
It's like, that's up to you, man.
There was no bot that on some snide drive-by.
Now, this transcript point is going to be picked up by AI engines, of course.
Correct.
And there's the lack of subtlety there because there was a very subtle statement, right?
That wasn't a sexist, misogynist, gender.
You know what I mean?
So what is the sentiment analysis going to pick up?
Because the LLMs are trained by what?
Liberal arts students, right?
And they're continually assessed by liberal, you know, skinny jeans-wearing people, right?
And so the LLM, you know, the statement was made.
Yeah, I'll own the statement, but there's far more to that statement, right?
There's far more subtlety to it that the LLM is not going to pick up, no matter how sophisticated it is, because it isn't programmed.
It's programmed by somebody who would never, ever even say such a thing.
Yeah.
Yeah.
No, if you were to just, if you were to AI scan any like transcript of my show, you would, yeah, the AI would be like, oh, that's vulgar.
That's this, that's that.
And it's like, real, well, you know, well, like, that's part of the, the, the story.
That's part of the, it was, you know, to any, to any functioning person, you'd be like, oh, that was a joke.
And a joke, what?
And it loosens everybody up and it keeps the conversation flowing.
Well, let's go even further before, beyond that.
Maybe I had said something.
We were in person, right?
Or we knew each other face to face and I had said something offensive.
You got up and punched me in the mouth for it.
And maybe we got in a fist fight and then we worked it out.
And then we sat down and we had a drink and we carried on the conversation.
Also something that the people who train and assess the AIs probably have never done.
I want to go back to what Mike was saying earlier.
And I think, and I appreciate the admission of it because it's like, there are things that it's very good for.
Yeah.
You know, I'm mixing this fat with this water under this level of light and the humidity is this and the density is that.
And there's a, you know, a bubble of Bernoulli's principle over like, sure, there's the answer.
That's all well and good.
You know, well, you know, what is the weight of a continent?
It's 100 trillion to the trillion, whatever.
And again, very useful, very cool, you know, kind of like a modern farmer's almanac.
Like, yeah, it's probably good to just have all that.
But there are other things where our own truths come in. I mocked it when I started the show, but I guess the most honest way you can go about it is to say, well, this is our worldview, that this happened and this happened and this is happening.
Here is a model built on that.
And to me, that's the best pitch: you're coming out and saying there is no objective truth when it comes to human culture.
I think that is more honest and genuine than any of the other ones even attempt to be.
Let me put this out here.
This is very important for people to understand.
And I know EM, I believe he will agree with this.
The actual best practice of prompting AI engines is to include your worldview in your prompt.
But most people don't do that.
So a well-formed prompt would say something like, acting as a naturopathic physician with a strong belief in natural medicine, et cetera, then do the following or summarize the following, write the following, whatever, compose the following.
But people don't do that.
So what's actually really funny about this is that humans are acting like robots to go to an AI engine and say, tell me what you think.
When the human, if you're not an NPC, should say to the AI engine, here's what I think.
Now you elaborate on that or you find it or you solve it, you do whatever, but I'm going to give you, I'm going to push onto you, the AI engine, my worldview.
That's actually the smart way to use AI engines, but that's not the way people use them.
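As a concrete illustration of that prompting practice, here is a minimal sketch; the persona text and the commented-out model call are placeholders, not a prescribed template.

```python
# Minimal illustration of stating your worldview up front, then the task.
# The persona text and the commented-out model call are placeholders; any
# chat-style LLM interface would work the same way.
worldview = (
    "Acting as a naturopathic physician with a strong belief in natural medicine, "
    "and taking decentralization and informed consent as a priori assumptions, "
)
task = "summarize approaches to supporting immune health without pharmaceuticals."

prompt = worldview + task
print(prompt)

# With a locally downloaded GGUF model (path is a placeholder):
# from llama_cpp import Llama
# llm = Llama(model_path="enoch-base.Q4_K_M.gguf")
# print(llm(prompt, max_tokens=400)["choices"][0]["text"])
```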
So absolutely.
And this is a brilliant, brilliant statement, Mike.
The last 20 plus years, we've been having to get closer to machine-like behavior and thinking in order to engage with the machine.
One thing LLMs have allowed us, except for ChatGPT, which everybody should just burn down, right?
It's just so freaking, it really is like a high-end call girl.
It'll tell you whatever you want if you ask her right and you pay her properly, right?
Or you can get her.
Anyway, the thing that has to happen now, very much what you're articulating, it's a brilliant statement, is that we now need to bring the machine to the human again.
We spent the last 25 years getting closer to thinking like the machine in order to engage with it.
But now one thing that LLMs have given us is the ability to engage with the machine through language and linguistic constructs and structures and all these types of things.
And we can talk to it.
And I have, I know, you know, millennials mostly who do.
They chit chat and talk with it and it learns, et cetera.
But I fundamentally believe that this is something we need to do now is bring the machine to us exactly as you said.
And I do.
When I write prompts, I'm like, given these things as a priori assumptions, as fact, and I still run up against some things that are obviously hard-coded in as narrative stuff that they're not going to allow, right?
But I always start my prompts with take these, and I write them out usually on a Word document and say, okay, given these, here's a premise, here's sub-themes, here's et cetera.
And then I'll dump it in there and it tends to do fairly well until it runs up against certain narratives, which they're like, yeah, we're not going to talk about this, right?
Right.
So I think that's a brilliant articulation.
It's something I haven't articulated, but people do.
And I do it myself just because I learned playing with it instinctively that I need to tell it ahead of time.
Here's the premise, the main theme or the main premise.
And here's sub-themes below that to help structure there.
And it does far better.
Absolutely.
Still fails sometimes.
And then it does run up against, there are even in Grok, there are things that it's been told, you just don't go down this pathway.
Here's the party line.
Yep.
Yep.
Exactly.
Exactly.
Well, guys, look, this has been just an amazing conversation.
And I've gleaned a lot from this as well.
And I just, I want to finish up by saying that for anybody who uses our engine right now, again, it's free.
It's at Brighteon.ai.
Understand, this is the worst version of the engine that will ever exist because actually what we've built is not really an engine.
What we've built is a process of modifying engines.
That's actually our intellectual property, so to speak.
Our custom code, our algorithms to modify and reprogram base models with our data sets.
That, because, you know, base models are now like a dime a dozen.
You can get 2 million of them on Hugging Face, right?
I mean, base models are going to be very commonplace.
I mean, they already are.
And they will improve.
They will improve very rapidly, especially on the reasoning side.
So the real key of what we've built is the ability to take a base model and mind alter it into alignment with what we state is our worldview.
And it's a worldview that is massively underrepresented in the world, even though I believe it's rooted in reality with nature.
Like the actual plants, the actual farming techniques, the actual medicinal molecules that are synthesized in botanical species all over the world.
That's where we focus our knowledge.
So anyway, it's going to be an exciting time, for sure.
I'd love to have you read the Eternal War.
I'd love to read it and think about it.
Right.
And then think about, because one of the things we're going to need to do, very much to what Tommy was articulating earlier as compute grows, so one of the doctrinal lines in The Eternal War is agenda.
And the agenda of resentfuls is to always embed the eternal war into the next system that's emerging fairly early on.
Ah, yes.
And they're usually a step, at least half a step, and sometimes two steps ahead of us.
All of these LLMs were trained and developed, with some exceptions going back to just pure linguistic modeling.
But all of these LLMs were trained at the height of woke investment.
And that was not accidental.
EM, you nailed it so perfectly.
Remember that the Google Medic update in 2017 wiped out all the natural medicine knowledge, and the censorship that hit all of us so heavily, all of those things.
Exactly.
That was all on purpose.
You nailed it.
They knew what they were doing because they knew they had to exclude that from the common crawl that was used to train the base models.
Correct.
Yep.
So there's ways we can do it without having to match them, you know; we're never going to be able to out-invest what they've already invested.
They've put half a trillion dollars into it, you know, all told.
What we can do is go back with some logic that can recognize deception specifically, right?
Deceptive strings, deceptive language, the same way an interrogator has to do.
And that's the book I wrote, The Eternal War.
And it's some of the other work that I'm doing now is how do you suss out the deception?
If you can suss out the deception structurally, linguistically, et cetera, you can actually use these tools to do so.
And if you can build an engine that natively, right, moving forward, because again, back to what Tommy's saying, they've already built the base for the deception that they're going to do.
And as compute power improves, it's this deception stuff, this manipulation stuff, this reality crafting, because reality is another one of the doctrinal lines.
They're going to run crazy with this.
And they'll put $3 trillion a year into it if they have to.
Yeah, that's true.
That's true.
Right?
Gentlemen, I have to wrap this up.
Yeah.
And I know you've got to run.
I apologize to you.
No, no, no, no.
You're quite all right.
We'll wrap it up.
For everybody listening, please go in the description.
You can find the links to both their Twitters, and Brighteon.com, Brighteon.ai.
Again, all of my podcasts are on there as well.
EM's YouTube, all that good stuff.
I will put you both in touch after this.
And thank you so much for doing it.
And Mike, sorry, we kept you over.
I know.
No worries.
Thank you, Tommy, for the invite.
And EM, very nice to meet you.
Really intriguing conversation.
Please connect me so that I can get your book.
Absolutely.
But we'll do it right now.
Okay.
Yeah, gentlemen, thank you both for your time.
Thank you so much.
That was awesome.
Until next time.
Thank you so much.
Guys, thank you for watching.
Take care, everybody.
God bless.
So there's a huge uptick right now in the number of government agencies, departments that are purchasing Faraday bags.
Now, I don't know if it's just for security or maybe they're anticipating an EMP attack or, I don't know, a nuclear event of some kind, but for whatever reason, a lot of government agencies are purchasing Faraday bags.
Now, our partner, the satellite phone store, they have this line called Escape Zone.
And they've got all of these bags of various sizes, including this massive one I'm about to show you, which holds an entire generator.
So if you've ever wanted to protect something like a desktop computer or a full-size home power generator in a Faraday bag, you can now do that.
And they have other size bags as well.
They've got bags that can handle like your laptop.
Can you show the side view there?
There's a laptop bag.
And then here on the desk, these are bags for solar panels.
And this is a giant bag for a generator.
And then they've got other size bags.
Like this is like a body pouch.
Here, you can show this straight on.
Yeah.
Sorry, it's black.
It's kind of hard to see.
But this is like a body bag.
You can wear it.
You can wear it cross-chest or you can wear it around your waist.
It's smaller than a purse, you know, larger than a fanny pack, something like that.
You've got them for satellite phones and mobile phones, many different sizes available.
They also have backpacks.
Now, you can get these through a couple of different places.
You can go to healthrangerstore.com slash escape, which will bring up this screen right here.
And you can see, here's the Faraday Bag ballistic backpack.
Here's the Faraday Bag blanket.
They've got Faraday Bag beanies and all kinds of other, you know, different briefcase size bags for protecting laptop computers, things like that.
Or you can go directly to darkbags.com, which is the satellite phone store website.
And there you can see they've got the larger bags that have just been launched there right now.
Crypto Cold Wallet Protective Faraday Sleeve.
That's really important to protect your crypto.
And many other different formats.
Now, right now, through the month of July, they are donating for every purchase of one of these. These are massive.
See how large this is?
This is a massive Faraday bag, okay, to hold, to hold an entire generator.
For every purchase of the large bags, which are the solar panel bag and the generator bag, they're donating $100 to our emergency response rescue efforts.
Right now, for this week, those efforts are focused on the Texas flood rescue, where we are sending certified organic, storable food, and personal care and survival products to the victims of the flooding in central Texas.
We're organizing that right now.
We're adding our own donation on top of that.
But the satellite phone store is doing $100 additional donation for every one of these bags that they sell.
After this week, that $100 donation goes to our next rescue effort, which we don't know what that is yet.
But we are here as the Health Ranger store and my registered church, the Church of Natural Abundance, which has, you know, last year made over half a million dollars in food donations to the victims of the storms in North Carolina and the hurricane in Florida and the fires in California that happened.
Now we're helping Texas and we'll be helping whoever needs help that we can reach with food and supplies and you know anything else that makes sense that we can get into people's hands.
So by purchasing from darkbags.com or healthrangerstore.com slash escape, you can also with certain products help raise donation money that will help those in need wherever they happen to be in America.
We are, I mean, we're here to serve.
We're here to help.
We have the infrastructure.
We have the abundance from your support and from God's grace to be able to help people all across America in a time of need.
And that's what we're doing.
And that's what the satellite phone store is doing as well.
And even if you don't care about the donation aspect of this, it's smart to have these bags, to put your mobile phone in a Faraday bag, protects from 5G, protects from potentially identity theft.
You can put your credit cards, your wallet, your car keys, you know, your key fob.
Those things can be read by car thieves.
They can be read from like 10 feet away.
So you can put your key fobs in these, especially at night when you go in for the evening and you put your keys up, put them in one of these bags.
Otherwise, thieves can come to your front porch and they can erect an antenna and they can read your key fob and then they can steal your car right out of your driveway.
That's been documented.
That's happening right now.
So protect all your electronics with Faraday bags of different sizes.
And when you do that, you'll have better privacy, you'll have better protection, and you'll have electronics that work even after a nuclear event or after an EMP or whatever might happen.
You want to have things that are protected from those kinds of events in case there is a worst case scenario type of catastrophe that unfolds.
So again, check it all out at either the satellite phone store, sat123.com, or darkbags.com or healthrangerstore.com slash escape, whatever's most convenient for you.
And get these in your life and you will enjoy enhanced privacy and protection from electromagnetic radiation and frequencies or fields would be the correct term there.
So thank you for your support.
I'm Mike Adams of Brighteon.com and HealthRangerStore.com.