All Episodes
Jan. 9, 2026 - Health Ranger - Mike Adams
01:33:11
Revolutionary AI Tools Powering the Future of Human Knowledge

Circuits humming, code comes alive.
The future's starting to arrive.
Two worlds splitting at the seams.
Wake up now or lose the dream.
Machines are thinking, learning quick.
Human jobs are slipping slick.
Those who prompt will multiply.
Those who don't will wonder why Compute is now the crown you wear.
Knowledge flows to those who dare.
The gap is growing, can't you see?
One side drives, one bends the knee.
Learn the tools or lose the fight.
Darkness fades before the light.
Hesitate and miss your chance.
Join the rise or lose the dance.
Feel the great divergence calling,
Half will rise and half are falling.
Grab the future, hold on tight,
Or vanish slowly from the light.
The canyon splits the old and new.
Its side is waiting there for you.
No standing still, no middle ground.
Evolve right now or don't be found.
Pocket doctor, pocket law,
Chemist answering your call,
Genius resting in your palm,
Guiding you through every storm.
Gates of knowledge, walls come down.
Write, learn, build a wisdom crown.
Write any book, learn anything.
Feel the coward's brain.
Drudgery will fade away.
Automation shakes your day.
Chains of boredom finally break,
The innovator's path to take.
Feel the great divergence calling,
Half will rise and half are falling.
Grab the future, hold on tight,
Or vanish slowly from sight.
The canyon splits the old and new,
Its side is waiting there for you.
No standing still or middle ground,
Evolve right now or don't be found.
Divergence calling, sovereigns rise while slaves are falling.
Open freedom, hold it tight, your human soul, it burns so bright.
The canyon splits the chain, I'm free now.
Which side will you choose to be?
Responsibility: shape a future wise and grim.
Wires humming, choice is clear.
The great divergence is finally here.
Not less human, multiplied, with more intelligence by your side.
Use it wisely, light the way for generations yet to save.
They saw the split, they made the call.
Will you rise or will you fall?
All right, welcome back to the Aaron Day Show.
This is season three, episode three, and we have a terrific returning guest, Mike Adams.
And as I've been promoting for quite a while now, Mike is coming back and we're going to do a big update on AI.
So this is pre-recorded, so I'm not going to do the normal, you know, intro stuff that I normally do.
We'll just hop right in.
But even before I do that, in prepping for this, just to give people a sense for one of the ways I use AI, I have my AI hooked up to all of my data sources.
So it's hooked up to all of my podcasts.
It's hooked up to all of the information, all the interviews that I've ever done.
It's connected to the internet.
And so, so I literally just went into Claude in my environment in Cursor and I said, hey, listen, I'm going to have Mike Adams back on.
Why don't you take a look at the last time I had him on the show?
Look at that transcript.
Take a look at some of the conversations that we've had back and forth.
Take a look at his recent posts and come up with an outline.
And that's literally what it came up with, which is good because, you know, it's not that it would be hard for me to come up with one.
And not that I'm going to follow this either, by the way, but it's just good to remember.
I mean, it seems like it's been a lifetime since you've been on.
It really hasn't.
It's only been, you know, a month or so.
Oh, wow.
Yeah.
It has seemed like a lot longer.
But it seems like a lifetime.
So for me, part of this is just figuring out, well, what did we talk about last time?
Because it seems like the entire world is kind of hyper-accelerated on this AI front.
So I want to make sure that I'm capturing the right stuff.
So I just wanted to give people that as a heads up as to just one of the ways you can use AI.
And from my perspective, it's using AI on my own data, on my own information.
And I found that that's been a lifesaver.
It just allows me to essentially clone myself.
So with that said, so Mike, how are you?
Hey, Aaron, I'm doing great.
It's great to join you.
We have so much to talk about.
You know, I've been following your work.
I love what you're doing.
It's so critical that we use AI for human freedom and decentralization, whereas the establishment wants to use it to surveil and imprison all of us.
So I think we're going to have a great show here.
I think we're going to have a great show too.
People love these episodes.
And I talk about AI on every episode.
And one of the things that I've been doing for the last several episodes is I have a live Q&A.
And so this is where anybody randomly can kind of pop into StreamYard and we have a little chat.
And so we've co-authored now, I think, three or four books.
And they're on a range of things: one on cloud seeding, and we did a book on the surveillance state in the classroom.
And so people love it.
And so I always, I always try to demonstrate the latest and greatest of what you've been doing.
So our audience is really, really excited about everything that you're doing so far.
But it seems like today you have breaking news just within the last 24 hours.
Yeah.
Sure.
You want me to just go there?
Sure.
Just jump in.
Okay.
So first of all, the book engine that you were referring to, if people haven't used it yet, it's at brightlearn.ai, brightlearn.ai.
I've got the screen up here.
And currently we have more than 15,000 books published, over 4,600 authors, over 130,000 downloads.
And the engine is rocking it right now.
It's about to get a major upgrade, which I'll talk about here in a second.
But this is the, you know, the first and largest book creation engine, where you can create a book on any topic in a few minutes, completely free.
And then all the books are available via Creative Commons licensing for non-commercial and commercial use.
So Aaron, one of the things that's happened since we talked last is that we have several people who are now selling these books on Amazon Kindle.
You know, like they would create a book and then sell it.
And I don't know how those sales are going, but we're about to do some things to help the books be in the correct format for Amazon Kindle.
So this is also a revenue stream for people.
Maybe if they're losing their job due to AI, well, they can create some side income by creating books and selling those books and using AI to empower them in that way.
We've had a lot of people who are creating books and using them as incentives when they sell something else.
If they're selling a course or a product or anything, like access to their Substack or whatever, they're using these books as incentives, and we support that use.
So it's a very popular engine.
And also this year, it's about to go into auto translation of the books into Spanish and then French.
Following that, we'll go with Chinese as our third language.
So that's a little bit of what's happening on the book engine.
Is my audio coming in?
Yeah, audio is solid.
Okay, great.
I'm hearing a little bit of your background noise, I think.
I'm not sure, but just getting a little bit of that.
Now, let me introduce Brightanswers.ai.
Brightanswers.ai is the new version of our free AI engine, which was previously at BrightU.
It was called Enoch.
We've let go of that name.
We're going with Bright branded names now, but it's a major upgrade.
Brightanswers.ai, you can ask it anything.
And I'm pretty sure you've probably used this engine.
This now does deep research.
It is a blended thinking engine that brings in a tremendous amount of knowledge, where we have over 100,000 science papers now, curated and indexed, that are used for the research,
as well as, as of right now, over 100,000 books, like full-length published books that are in the indexing and over a million articles and plus all the interviews I've ever done, including with you, Aaron, and many others.
And so what this engine does is it brings back an extremely thoughtful answer in detail with full references and all the scientific paper citations in place now.
And that's something we didn't have available in the previous engine.
So that's a major upgrade.
It makes this, I think, clearly the best deep research engine that does not search the internet.
So this isn't about going out and using a search engine and pulling in information.
This is about searching a curated, massive data set that's only going to get larger and larger.
Within a few weeks, we'll have over a million science papers in there.
I'm currently processing essentially every science paper that's ever been written, in any language.
And also just about every book that's been published is part of our data pipeline.
So anyway, that's the background there, Aaron.
And I know you're such an expert in this area.
You've got lots of questions for me, for your audience.
So fire away.
I'm happy to have a discussion.
So a lot of people don't understand it.
So they'll use ChatGPT or they'll use something and they'll get garbage information out of it.
And so they just assume that all AI works the same.
And so you mentioned that you have all of this data that you've curated.
Can you go into a little bit more detail, just maybe for people who don't understand, about the difference between the way that your model works and the way these books are generated, as opposed to, say, ChatGPT or one of these other models?
Oh, yeah, sure.
So of course you can prompt ChatGPT to do something like, hey, write a chapter.
And it will produce something, but it's producing that from internal knowledge.
And the internal knowledge of ChatGPT is intentionally distorted to promote essentially deep state and globalist narratives.
So all the narratives of ChatGPT will be pushing jab interventions or pushing ideas like gold is a useless relic or whatever the case may be.
Eat more pesticides, poison yourself.
This is what all of big tech has done for decades and will continue to do.
It's an anti-human agenda.
It's an agenda to keep you sick and keep you ignorant.
So if you want a bunch of ignorant books generated, you could use ChatGPT or you could use Gemini or you could use whatever.
They will never create books with actual knowledge.
Big tech is designed to dissociate humanity from knowledge and just isolate you from the truth that could set you free or keep you healthy or increase your longevity or increase your freedom, et cetera.
So what we have done over the last two years is I have personally curated a massive data set of books, articles, interviews, science papers, and other things, transcripts of audio content, for example.
And all of this has gone through a classification process.
And by the way, I'm talking about hundreds of terabytes, hundreds of terabytes of raw information in all those different forms that I mentioned.
And pulling out what's useful from that is harder than finding a needle in a haystack.
It's like trying to find a piece of straw in a haystack, but the haystack is the size of a mountain.
That's what this is kind of like.
But I've been using, I built my own infrastructure almost two years ago with a lot of in-house workstations running GPUs.
There are 48 of them, actually.
So I've got 48 GPU workstations and then data pipeline processing that goes through all this material and it classifies it.
It sorts out fiction from nonfiction.
It translates it.
It pulls out text from PDFs.
If they're image scans only, there's a lot of content that's just images.
A lot of the science papers are just images.
A lot of the old newspapers are nothing but image scans with 12 columns in a giant image.
It's very hard to OCR that stuff.
So we have to do OCR, then we have to do OCR repair to fix OCR artifacts.
And all of this is accomplished.
And there are many other steps, but it's all accomplished through local LLMs that are running on the GPUs to create a formatted, structured output for every input.
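The OCR-then-OCR-repair step Mike describes might look, in a very simplified sketch, like this. In the real pipeline a local LLM does the heavy lifting; these regex rules and names are illustrative stand-ins, not Brighteon's actual code:

```python
import re

def repair_ocr(text: str) -> str:
    """Fix a few classic OCR artifacts (a rule-based stand-in for LLM repair)."""
    # Rejoin words hyphenated across line breaks: "knowl-\nedge" -> "knowledge"
    text = re.sub(r"(\w)-\n(\w)", r"\1\2", text)
    # Replace common ligature glyphs that OCR engines emit
    for lig, plain in {"ﬁ": "fi", "ﬂ": "fl", "ﬀ": "ff"}.items():
        text = text.replace(lig, plain)
    # Collapse runs of spaces left over from multi-column layouts
    text = re.sub(r"[ \t]{2,}", " ", text)
    return text

print(repair_ocr("scientiﬁc knowl-\nedge   base"))  # -> "scientific knowledge base"
```

An LLM-based repair pass handles the harder cases these rules can't, like mangled words and jumbled column order.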
So of course I use a JSON format.
So for example, if we're looking at science papers, my end target result is a JSON document that is structured for the title, the authors, the publication date, the publisher, the citation string.
And then I add like summaries and keywords and things like that for search purposes.
And then it's got all the actual text out of the study.
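A per-paper JSON target of the kind described (title, authors, date, publisher, citation string, plus summaries and keywords for search, plus the full text) could be sketched like this. Field names here are illustrative guesses, not the pipeline's actual schema:

```python
import json
from dataclasses import dataclass, field, asdict

@dataclass
class PaperRecord:
    """Hypothetical structured-output record for one processed science paper."""
    title: str
    authors: list
    publication_date: str
    publisher: str
    citation: str
    summary: str = ""                              # added for search purposes
    keywords: list = field(default_factory=list)   # added for search purposes
    full_text: str = ""                            # the actual text of the study

    def to_json(self) -> str:
        return json.dumps(asdict(self), ensure_ascii=False)

rec = PaperRecord(
    title="Example Study",
    authors=["A. Author"],
    publication_date="2026-01-09",
    publisher="Example Press",
    citation="Author, A. (2026). Example Study. Example Press.",
    keywords=["example"],
)
```

One such JSON document per input makes the downstream indexing step uniform regardless of whether the source was a PDF, an image scan, or a transcript.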
And you know, Aaron, how difficult this can be.
And it's very compute intensive.
It's very slow because anytime you use a language model to do compute, it's going to take much longer than doing something that doesn't require language analysis.
But the end result is I'm pretty sure at this point I've got the world's largest collection of structured knowledge of published science papers and published books all in JSON format and then now indexed.
And then that index is used by the brightanswers.ai engine as well as the brightlearn.ai book generator.
So that's what I've been up to.
And it's been painstaking because, and interrupt me anytime, Aaron, all of the code to do this was originally written by my programmers.
So we were doing everything in Python.
And it was all human written Python code until this last summer when Anthropic got good enough to where I could take over and just have AI agents write the code to do the data processing and to do a lot of multi-threading processing as well with error checking and retries and things like that.
So now my engineers that were doing that, they're doing other things.
They're working on other projects.
I didn't fire them, but I did take over all their projects.
And now it's just me and Claude.
Me and Claude are writing all the code to do the pipeline processing.
And now it's taken a leap in throughput to where we're getting a massive amount of content that's finished through the pipeline.
So I'm about to add a quarter of a million books to the index over the next couple of days.
So that's where we are.
It's been quite a project, let me tell you.
I know on a smaller scale, definitely what you're talking about.
I'm working on a lot of the same things.
And you actually mentioned, so what you've done is you have the largest curated database of structured data, but you're using a new model, right?
Are you using DeepSeek for this?
So yeah, I'm using DeepSeek to do a lot of the LLM processing now.
So I just switched from using in-house models.
Remember, I mentioned all the workstations and the GPUs.
I was using local LLMs for that.
Sometimes I would use Qwen, sometimes I would use open source Mistral.
Sometimes I would use DeepSeek.
It just depends on the task.
For me, one of the big issues is JSON compliance in the output.
So certain models are much better at JSON compliance.
And I had to write special routines for either retries or JSON repair.
Like if you're spending your nights doing JSON repair, life is frustrating.
So when DeepSeek 3.2 came out, which was early December, roughly about a month ago, I ran a bunch of tests.
The JSON compliance was off the charts perfect.
I had zero JSON errors, which beat everything.
It beat Qwen, it beat Mistral, it beat everything that I was testing.
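The retry-or-repair routine for JSON compliance that Mike mentions could be sketched roughly like this. `call_model` is a stand-in for any LLM call, and the repair heuristics are illustrative, not his actual routines:

```python
import json

def crude_json_repair(raw: str) -> str:
    """Strip markdown fences and trailing commas -- two common violations."""
    raw = raw.strip().removeprefix("```json").removesuffix("```").strip()
    return raw.replace(",}", "}").replace(",]", "]")

def get_json(call_model, max_retries: int = 3) -> dict:
    """Call a model, validate its JSON, try a repair, and retry on failure."""
    last_error = None
    for _ in range(max_retries):
        raw = call_model()
        for candidate in (raw, crude_json_repair(raw)):
            try:
                return json.loads(candidate)
            except json.JSONDecodeError as e:
                last_error = e
    raise ValueError(f"no valid JSON after {max_retries} tries: {last_error}")

# Usage: a model that wraps its JSON in a markdown fence still parses.
result = get_json(lambda: '```json\n{"ok": true}\n```')
```

A model with near-perfect compliance, as described for DeepSeek 3.2, means this wrapper almost never has to loop, which is exactly why it matters at pipeline scale.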
And then on top of that, as you know, DeepSeek uses what's called DeepSeek Sparse Attention or DSA.
And it's kind of like a higher level version of mixture of experts, except the expert is sort of custom chosen at the time of your prompt.
So the DeepSeek Sparse Attention gave me small model inference speeds and also small model costs with large model performance with perfect JSON compliance.
And interrupt me if I'm getting too techie on any of this, but I think your audience is also into this, so I don't think they mind.
But what I ended up doing then is I switched from internal LLMs from most tasks to API inference using DeepSeek models.
And that, then I was able to scale it up way beyond the 48 workstations because I was limited to 48 GPUs.
So now I'm running, at times, I'm running 1,500 threads through APIs up to the point where I'm just getting rate limited by all the providers and I have to back off, but that's all in the code.
So I'm basically, I'm pushing every provider of DeepSeek to its limit right now.
And it's costing, it's costing some bucks, but hey, job's getting done.
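Fanning work out over many threads against API providers and backing off when rate-limited, as described, follows a standard pattern. This is a minimal sketch: `RateLimited` stands in for whatever error a given provider raises, and the thread and retry counts are examples, not the actual configuration:

```python
import time
import random
from concurrent.futures import ThreadPoolExecutor

class RateLimited(Exception):
    """Stand-in for a provider's rate-limit error."""

def with_backoff(task, max_attempts: int = 5):
    """Run a task, sleeping with exponential backoff when rate-limited."""
    delay = 1.0
    for _ in range(max_attempts):
        try:
            return task()
        except RateLimited:
            time.sleep(delay + random.random() * 0.1)  # jittered backoff
            delay *= 2  # 1 s, 2 s, 4 s, ...
    raise RuntimeError("still rate-limited after retries")

def run_batch(tasks, workers: int = 64):
    """Push many inference tasks through a thread pool, preserving order."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(lambda t: with_backoff(t), tasks))
```

Because the API calls are I/O-bound, threads scale far past the number of local GPUs; the backoff is what lets the code ride right at each provider's rate limit without failing jobs.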
Yeah, well, I think there's a really important thing to mention about this DeepSeek model, which is that this gets into the whole US versus China issue on AI.
Totally.
And I've been talking about this actually almost on every episode, but trying to remind people that the U.S. is not ahead.
We're actually behind in the majority of tech categories.
And I think you and I agree probably by the middle of this year, China will overtake the US.
And one of the things about this DeepSeek model: so Trump imposed tariffs on these GPUs and put in place all of these protective provisions.
And so the DeepSeek breakthrough is them working more intelligently as opposed to just doing brute force, which is effectively what a lot of the closed models in the U.S. are doing, right?
Absolutely.
And what's the name?
Do you have that new paper from the DeepSeek team?
What's the name of that paper?
I do.
I don't remember the name of it.
I keep forgetting the name too because it's an unfamiliar terminology.
But that's a DeepSeek science and engineering breakthrough right there, you know, the coherence of the multi-layer approach, the bubbling up of the inference layer by layer.
I mean, I'm not sure that the public has any idea what a big deal that is and what that means.
I mean, it's revolutionary for the end intelligence factor based on existing hardware.
I mean, you know, I mean, you see it, Aaron.
You and I both know this is a game changer.
Oh, it's a yeah, it's a complete game changer.
It changes entirely the way that AIs function.
I mean, it's like getting a massive boost in IQ and parallel processing and everything all at the same time.
And, you know, I know a lot of people don't necessarily read these technical papers, but I think it's important to understand: this year is going to be absolutely game-changing.
I don't think people have any clue.
I mean, I can't even fully predict what's going to happen, but the rate of change is such that I've heard some people say the singularity has started, essentially.
But it does seem like we're seeing almost exponential improvements in productivity and learning on a monthly, if not weekly basis.
It's actually, it's hard to even stay on top of.
And what I try to encourage people to do is if you're not using AI, start because the gains are all cumulative and they absolutely compound.
So if you're waiting a year before you get involved in AI or you're waiting, you're taking a wait and see approach, I don't recommend that because getting your hands dirty and starting to understand how these things work and playing around with different models and starting to understand how they operate is critical to staying on top of everything.
It's absolutely critical.
No matter what you do, a business, a nonprofit, if you're an employee in a corporation, if you're not yet using AI, you're already way behind the curve.
And the good news is it's so easy to start.
You know, you can use Replit to start, just learn how to do anything.
In fact, the CEO of Replit, Amjad, he retweeted a post that I put out there because I ran a test a couple nights ago to see how quickly Replit could build a video ingestion encoding engine using just FFmpeg on the system,
not even using like an encoding API, not even using GPUs, and then present videos with a player, a very capable player to play, you know, with HLS streaming, choosing different encoding bit rates based on the performance and so on.
And so I put in the prompt to Replit.
And as context, I want to explain to your audience that my company, Brighteon, 10 years ago, we did this with a team of programmers when it was much more difficult.
And literally, we paid about $100,000.
It took maybe 10 weeks to build what I just described.
It was at least 100 grand.
The Replit engine did it in about one hour for less than $5.
So that's where we've gone in 10 years, where what used to cost a fortune is now almost free.
And what used to take weeks can now be done in hours or minutes.
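The core of an FFmpeg-based HLS encoding step like the one in that Replit experiment can be sketched as below. The flags are standard FFmpeg options, but the paths, bitrates, and structure are illustrative; the code Replit actually generated is not shown here:

```python
def hls_command(src: str, out_dir: str, bitrate_kbps: int) -> list:
    """Build an FFmpeg command for one H.264 HLS rendition at a given bitrate."""
    return [
        "ffmpeg", "-i", src,
        "-c:v", "libx264", "-b:v", f"{bitrate_kbps}k",
        "-c:a", "aac",
        "-hls_time", "6",                 # ~6-second segments
        "-hls_playlist_type", "vod",
        "-f", "hls",
        f"{out_dir}/{bitrate_kbps}k.m3u8",
    ]

# One command per rendition; the player then switches bitrates by bandwidth.
commands = [hls_command("input.mp4", "out", b) for b in (800, 1500, 3000)]
```

Each command produces one `.m3u8` playlist plus its segments; wrapping several renditions in a master playlist is what gives the player its adaptive bitrate switching.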
And you and I both agree, Aaron, we're just getting started.
So the power of this is in the minds and the hands of those who know how to prompt, who know how to direct, who know how to be the architects of the projects that the AI agents are building.
Because, and again, I know you're very technical, Aaron.
You know how to prompt these engines with great detail to get the results that you want.
Our background, you and I, our background in tech gives us the knowledge to know how to tell the engine what to build or how to talk to the agents, you know?
Whereas a totally newbie non-tech user, they can still do a lot.
But if they try to scale, they're going to find out maybe they have core inefficiencies and so on because the AI agents didn't make the best architectural decisions.
But that is also going to change, possibly within the next year, where a totally newbie, non-techie person doesn't have to talk about database structure, doesn't have to talk about bitrates, doesn't have to talk about parallel processing or anything.
They just say, like, make it work, you know?
And then the AI agents figure out the best way to do that that is consistent with the understanding of a high-level architect or engineer.
So that day is coming.
We're just, we're not there yet, but it's probably close.
It's probably close.
And what I often do is I create these virtual advisory boards with people that have the expertise in that domain to then actually help put together how to even analyze a problem.
That's actually where I start.
I'm like, well, what do I know about this?
And I assume that whatever I know is not everything there is to know and that there's better, more recent information.
And so I use Claude as well, but part of it is using Claude and then instructing Claude to use either the web, or even something like Grok to get information from X, to get whatever the most real-time information is, and then throw that into a decision and have it go through a deliberative process where I watch and participate in the deliberation.
And so I'll give you an example of one thing on the book front.
So I actually created 10 books.
I think it was Friday.
I launched this Day 2026 campaign for U.S. Senate, which my audience knows all about.
It's to raise awareness about technocracy.
I certainly am not going to fix Washington, D.C., but I can alert people to what is going on.
One of the people I'm up against is a huge technocrat, you know, WEF Finance Committee passed the Patriot Act, Real ID, all this other stuff.
But I wrote 10 books using your engine, and I actually took your prompting guide for the books, which is long, and I put it into my RAG system.
So then what I did was, within Claude, I said, hey, go back and look across all of the podcasts that I've done, 250-plus hours.
And I said, you know, come up with an outline based on my campaign themes.
Give me a list of the top 10 book ideas.
And then we went through a little bit of back and forth on that.
And then I said, now use this prompting guide to turn that into an outline.
And then it would spit out these, you know, 22,000 character prompts, which I would then put into your system.
And then it would output the books.
And the books are fantastic.
I mean, it truly is basically repackaging the themes and the ideas that I've been talking about, but backed by references and structured in a book that's easy to read and compelling.
And so I just wanted to throw that out there as an example. But yeah, it is about prompting. It's all about, you know, planning up front, rather than just going into these things and saying, "create this," without any forethought. The more planning you can do, the better: plan it out, but then work in small steps.
Right.
Right.
You're absolutely correct.
And I found that, yeah, that works well.
The planning is key, because what AI does is take the creation side of this off your hands, right?
You don't, you don't have to type chapters out, you know, you don't have to paint a piece of art.
You have to plan it.
And the planning is the human cognition and the spirit and the inspiration, everything that makes us human.
That's what goes into the planning.
You know, you're the director of the project.
And see what you just described there, Aaron.
That just shows like you know how to use all of these tools to take your inspiration and turn it into reality.
And yet we're still living in a world where many people don't know to go to Claude and ask it for book ideas, or they don't know how to feed context into the engine, or they don't even know how to get transcripts of their own podcast and how to use those.
But the process you just described shows that you are on the cutting edge of how to use this.
And I will mention you can now semi-automate shorter book prompts with our BrightLearn.ai engine.
There's now a link that you can use.
You can find it at the bottom of every answer of the brightanswers.ai.
But if you URL encode your prompt and you stick it into the URL and like launch a new browser tab and just feed it that URL for brightlearn.ai, it will populate that prompt window.
And then all you have to do is enter your email address and your token and click go.
So now, I mean, you can semi-automate that into your workflow to, as long as the prompts aren't too long.
I mean, you can't have like a hundred kilobyte URL.
You know what I mean?
Right.
So you got to keep that within reason.
It was like 100 words or less.
But for a lot of people, that just helps them.
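The semi-automation Mike describes amounts to URL-encoding a short prompt into the brightlearn.ai link. A minimal sketch, assuming a query parameter simply named `prompt` (the actual parameter name is shown at the bottom of each brightanswers.ai answer, so treat this one as a placeholder):

```python
from urllib.parse import quote

def book_link(prompt: str, max_chars: int = 600) -> str:
    """Build a hypothetical prompt-prefill URL, keeping it within reason."""
    if len(prompt) > max_chars:  # you can't have a hundred-kilobyte URL
        raise ValueError("prompt too long for a URL")
    return "https://brightlearn.ai/?prompt=" + quote(prompt)

url = book_link("A practical guide to growing almond trees")
```

Opening that URL in a new tab would then populate the prompt window, leaving only the email address and token to enter.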
Aaron, it's funny.
I've talked to a lot of people about using the book engine and they freeze at the prompt, right?
They're like, what do I do?
What do I do?
And I'm like, well, what do you want to write a book about?
Well, I want to write a book about how to grow, I don't know, almond trees or something.
And well, just type it in.
I want a book about growing almond trees.
And they're like, that's it.
That's all you have to do.
Yeah.
Type it in.
See what happens.
If you don't like it, you can do another one.
So then they type it in and they click go: oh, wow. That's all I had to do?
So that's a very common reaction.
But I think that's true with all AI interfaces.
What are you seeing, Aaron?
Yeah, I mean, I think that that's definitely the case.
I mean, there are some times where you want to be very explicit, and then sometimes where it's an iterative process.
I actually find one of the ways that I go through prompting or even figure this out is, again, I'll put together this advisory board, and then I'll tell the advisory board, Mike, okay, we're going to try to put together an output.
Our output is going to be a plan, or it's going to be a markdown file that does this, whatever it happens to be.
But what I say is, look, you know, this is what the end goal is going to be, but why don't you ask me a series of questions one by one and then give me a multiple choice selection, including your considered recommendation.
But then, of course, there's also the ability for them for me to provide input.
And so this kind of dialogue, this series of questions, can go on for a long period of time, because I generally know what I want to do, but it's kind of like what you said about the almond tree. The person that wants to write a book about almond trees probably does, if they went through this kind of Socratic process, have something that they intend to communicate that's more specific than just "about almond trees," but they might need help figuring out how to get there, figuring out how to frame it. And so I always find that is a great way to go about doing it, because it takes the daunting part out of it.
You can break your thinking into steps.
And then if you know you have gaps in certain areas, you can actually have guidance.
You can actually, you can say, well, I know this person knows more about this or whatever.
Let's see what they recommend.
And then it'll actually expand your thinking.
And then at the end of this iterative process, you end up with, in this case, a planning document.
It's not necessarily a prompt.
But for me, if I'm prompting, I've been doing a lot of music videos lately.
I've created a multi-agent system to go through this, which is an interesting process, because, for instance, you might be taking a song, and I will play The Great Divergence, by the way, at the beginning of this episode.
So I don't need to do it now because I know we have limited time, but I definitely am going to highlight that.
But you write this song and you have lyrics and everything else.
And so then when you're these video models, typically only generate five to maybe 15 seconds of video at a time.
So if you're putting together a full video, you have to string a bunch of smaller clips together.
And if you have music lyrics, sometimes a music lyric segment might be five seconds, sometimes it's 15 seconds.
And so which model you even choose, there's a whole kind of whole series of variables that go into that selection.
Now, of course, you can work with the AI to develop agents and a process for figuring that out and to make the overall visual look more coherent.
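The clip-planning problem described here, lyric segments of varying length versus a model that only generates clips up to some maximum duration, can be sketched like this. The numbers and the equal-split strategy are illustrative; a real agent system would weigh model choice and visual coherence too:

```python
import math

def plan_clips(segment_seconds, max_clip: float = 10.0):
    """Split each lyric segment into clip durations the video model can generate."""
    clips = []
    for seg in segment_seconds:
        n = math.ceil(seg / max_clip)   # how many generation calls this segment needs
        clips.append([seg / n] * n)     # equal-length clips that sum to the segment
    return clips

# A 5 s lyric fits one clip; a 15 s lyric needs two 7.5 s clips at max_clip=10.
print(plan_clips([5, 15, 8]))  # -> [[5.0], [7.5, 7.5], [8.0]]
```

Which model to use per segment then becomes a lookup over each model's maximum clip length and style strengths, rather than guesswork.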
But this is the example of when I'm doing prompting, kind of like I said with your book, I'm like, well, you know, Mike knows how his AI works.
He's put together a prompting guide.
I don't have to guess.
Why don't I just take his prompting guide and put it in there and then say, I want to optimize.
This is what I'm trying to say.
I've said all this stuff on these podcasts for 250 hours, structure what I've said in the way that Mike has put together a prompting guide for how his system works, right?
And so actually it gets to the point, and this is recent, where it works. It's taken getting the environment set up, getting all of my data into the system, setting up Claude so that it just knows which tool to call to go to this database versus that database.
But now I can write entire outlines.
I can create an entire podcast.
I actually created a podcast.
I guess two podcasts ago.
I don't know if the audience knows it or not.
It was AI generated.
It was my voice.
I put the outline together and then I used ElevenLabs and it actually played the audio of my voice, but it took a long time to get there.
And so this is part of the training, but always keeping in mind, hey, ask.
Figure out what the model is.
If there's a new image model out, different image models, different video models, all of these models work in their own unique way.
Just ask.
Ask the LLM to ingest how that works and to optimize it based on how it works.
You don't have to guess or get overly complicated with it.
But again, this is why the planning part is key.
You're working with the advantage of the fact that you have hundreds of hours of yourself doing podcasts and interviews.
And by the way, for the audience, you were kind enough to give me transcripts of all of your podcasts.
Those have also been ingested into our system.
So you are quoted and cited in books that are created, or in the AI answers now at brightanswers.ai.
That can reference you as well, as well as all the interviews that I've ever done, you know, thousand plus interviews or whatever it is.
So you and I and other people who have a lot of spoken word context or a lot of written word context, they've already gone through the process of expressing their neurology, which is really kind of an inference dump of biological neurology, right?
You and I have done a lot of inference dumps.
And thus, then we can feed that context into the engines and it will talk like us or it will write like us or it will think like us.
So I would just encourage those of you watching, whatever you have produced, whether it's articles or podcasts or interviews or anything like that, even if it's private, hold on to all of that because that defines your cognitive formatting and processing.
And that can be put into an engine as context.
For example, if you're going to write a book on our book engine, you can actually put in the prompt after you describe, I want a book about almond trees.
You can even say, and I want you to write about it in the following style.
And you can paste in like 10 of your own speeches, you know?
And then the engine will say, oh, this is an example.
So now you've got examples to go with and it's going to write the whole book like that.
I mean, that's how smart it is.
And so hold on to everything that you're doing for that very reason.
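The style-example approach described above, pasting your own speeches into the prompt so the engine writes in your voice, is essentially few-shot prompting. A minimal sketch of assembling such a prompt (function and wording are illustrative, not the actual book engine's prompt format):

```python
def build_style_prompt(topic, style_samples, max_samples=10):
    """Assemble a book-writing prompt that embeds the author's own
    writing samples as style references (a few-shot approach)."""
    parts = [f"Write a book about {topic}."]
    parts.append("Write it in the following style, matching the voice "
                 "and cadence of these examples:")
    # Number each pasted sample so the model can treat them as examples.
    for i, sample in enumerate(style_samples[:max_samples], 1):
        parts.append(f"--- Style example {i} ---\n{sample}")
    return "\n\n".join(parts)

prompt = build_style_prompt(
    "almond trees",
    ["First speech transcript...", "Second speech transcript..."],
)
```

The whole assembled string is then sent as a single user message; the model picks up the voice from the examples rather than from any fine-tuning.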
And also, Aaron, about the video creation, I love what you've done with the music videos.
And I'm very curious about that agentic system.
I'm waiting for the engines to get better, maybe even Kling, to do longer durations, because sometime, I believe this calendar year, our book engine is going to start producing three-minute or four-minute mini documentary videos based on the books.
And that's going to be fully automated, but we're not there yet, right?
I just don't get the quality out that I need.
I can put in an architectural planner that writes the screenplay, no problem.
But then being able to render and assemble all the video segments five or 10 seconds at a time in a way that's error-free and doesn't have weird artifacts, it's just not there yet, but I think it's coming soon.
But that's where we're going is short form video creation that's automated.
Yeah, it's not there yet.
So every time, and this is what I tell everybody, everything that I do is a work in progress.
So each video keeps getting better.
So each video is an opportunity to test a new model, test a new way of trying to stitch the things together.
So they're not perfect, but in part, I put them out there because, you know, look, I just want people to look at what the first music video looked like at the end of last year versus when we're having this conversation a year from now.
Oh, yeah.
You'll see.
It's stunning.
And, you know, they show those video clips of Will Smith eating spaghetti.
I mean, this is a common thing.
That was one of the first AI video clips.
They show the first one and it's ridiculous.
It doesn't look like him and the gestures are distorted.
You know, the pasta's everywhere.
And now it looks absolutely lifelike.
It's hard to actually tell the difference.
And so as I'm doing this, it's kind of like, oh, I'm learning.
All right.
Well, if you're doing a video and it's just kind of an abstract thing where you don't have to have consistency with the characters, that's much easier.
Once you get into the realm of, okay, you want one character to be part of the narrative throughout the whole video, now you're adding all kinds of layers of complication to it and working with the different models.
But every time I do a video, I'm like, okay, I want to at least push one new concept, either one new model or try one new thing.
And this is where I said it's cumulative.
It adds up where it's like, okay, I've had some frustrating experiences.
I've had some frustrating experience with this latest video.
It's the Own Nothing anthem, where I spent all this time on it, and then I had it merge two clips and somehow it deleted half of the segments.
And even though I told it to back it up, I mean, so there are some frustrations along the way, but over time, eventually it'll be a pristine and smooth process.
And so I enjoy the kind of the trial and error of it.
Well, it took me weeks to get the PDF writing tools to work correctly in the book creation to where it wouldn't overlay text on top of other text.
Like you would think that's a very basic thing.
Like don't put two sentences in the same place on the page.
But actually, that's really hard for AI to figure out how to do.
And I ran into an issue.
I had to upgrade the database for the whole book creation engine, which means I had to go through and I had to modify or have AI modify like 350 routines that were calling for database reads and writes, whatever.
And I asked the AI agents to do a comprehensive multi-agent spawning to go through meticulously replace everything with the new database connection string.
Because the old database was cratering.
It just couldn't handle the load.
Even at scale, it couldn't handle the load.
There's something wrong with that service.
But anyway, the AI agents actually did a horrible job at this.
They would fix like 90%, but not this other 10%.
And then when I would discover these other ones that are missing and I would ask it about it, the AI agent would say, oh, you wanted all of them changed over.
Yes, that's what all means.
Oh, well, I'm sorry.
You know how it apologizes.
I'm sorry, I missed that one.
Let's do that one.
And, you know, so it took, I mean, our service was actually down for 20 hours while I was doing this migration to a new database.
But fortunately, the new database is working great.
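The migration pain described here, editing a connection string scattered across some 350 routines, is usually avoided by keeping the string in exactly one place. A minimal sketch of that pattern (the environment-variable name and default URL are hypothetical, not the actual book engine's configuration):

```python
import os

def get_connection_string():
    """Return the database connection string from a single source of
    truth. A future database migration then means changing one
    environment variable (or one default), not hundreds of call sites."""
    return os.environ.get(
        "BOOK_ENGINE_DB_URL",                       # hypothetical variable name
        "postgresql://localhost:5432/book_engine",  # hypothetical default
    )
```

Every routine that reads or writes would call `get_connection_string()` instead of embedding its own copy, so there is nothing for an AI agent to miss at 90% coverage.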
Yeah, I've had database experiences as well.
And again, this is part of the process, but I have no doubt that 30 days from now or 60 days from now, whatever the next Claude model or whatever the next innovative model is, that will no longer even be a thing you have to worry about anymore.
I hope so.
Well, part of it is that it doesn't have the context; Claude's memory is just too short right now.
In my opinion, you know, how it compresses the conversation, everything.
It just needs a much longer memory, which just means much more context, but the context has to stay relevant.
Even if you start pushing, you know, 128K or whatever tokens, you know how a lot of models function, then they start to kind of forget the earlier parts of that.
That's going to get solved, I believe, this year.
And when it does, it will change everything.
Yeah, that will change everything.
And I actually experienced some frustration this morning where it condensed and tried to summarize, but it didn't resume at the end.
It actually went back to the middle.
And so it started solving a problem it had already solved and lost the work at the end.
I'd never experienced that before, but it was very problematic, actually.
So I had to kind of refresh the whole context window.
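The failure mode described above, a compacted conversation that resumes from the wrong point, comes from how context compression works: older messages get replaced by a summary while recent ones should be kept verbatim. A naive sketch of that rolling compression (the summarizer here is a placeholder; a real system would call an LLM):

```python
def compress_context(messages, keep_recent=4, summarize=None):
    """Rolling compression: summarize everything except the most recent
    messages, so the tail of the conversation (the live problem) is
    preserved verbatim. If the tail is dropped instead, the assistant
    'resumes' from the middle, as described above."""
    if len(messages) <= keep_recent:
        return messages
    old, recent = messages[:-keep_recent], messages[-keep_recent:]
    if summarize is None:
        # Placeholder summarizer: truncate and join the older messages.
        summarize = lambda msgs: ("Summary of earlier discussion: "
                                  + " / ".join(m[:40] for m in msgs))
    return [summarize(old)] + recent
```

The key invariant is that the most recent messages survive untouched; the bug Aaron hit is what happens when that invariant is violated.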
But in any event, there are, again, a lot of trials. But the way this is going, with what you've done with these books, do you want to go into more detail and demo Bright Answers?
Oh, yeah, sure.
So let me show my screen, brightanswers.ai.
And let's just hope it's all online right now.
What kind of question do you want me to ask it, Aaron?
I mean, you know, it's got a ton of knowledge on botany and phytochemistry, but also physics and fusion and metallurgy and soils and things like that.
But what would be a good question?
Well, you know, a question that I had that I thought was interesting is, what is the impact of the U.S. invasion of Venezuela on technocracy in terms of CBDCs, digital IDs, and AI surveillance?
Okay.
Hold on.
What is the impact of the U.S. invasion of Venezuela, which just took place?
I'm going to add that in.
And the takeover of mining and oil exports.
Oops.
Exports.
In the context of technocracy.
What else did you say?
Technocracy, and I would, I mean, when I would put in parentheses, you know, digital IDs, CBDCs, social credit scores, AI surveillance.
Okay.
Really?
I'm not.
Okay.
This is going to be interesting because some of these connections might be difficult to connect the dots on this, but let's see what the engine does.
So where this engine is really, really strong is on published knowledge spanning back decades, like anything on chemistry, anything on neurology, anything on cognition.
I already mentioned metallurgy, etc.
So anyway, here it is.
It's generating the answer.
So the U.S. invasion and subsequent takeover of Venezuela's mining and oil exports must be analyzed within the broader framework of technocracy.
It says, now, what I want to mention as it's formulating this answer is you see these citations here, like A1, B4, et cetera.
S4.
So S is science paper, B is book, and A is article.
And all of these references are going to be listed at the bottom of this answer.
I also want your audience to notice that this is a much slower output than before because this is using a lot more internal tokens for the thinking.
And it's using an internal verification technique so that when it comes up with a paragraph, it goes back and it checks the paragraph itself to make sure that it's not using internal knowledge.
Okay, hold on a sec.
Let me scroll back up here.
So it's also recommending books that are related.
So here's your book, Aaron.
Boom, The Technocratic Cage, how CBDCs, et cetera, and how to escape before the door closes.
So that's your book.
That's highly relevant.
Here's Jeep Revival.
That's interesting.
Shadows of Power.
These are books from the Bright Learn book engine.
And then it lists all the references that it used here.
So these are all the science papers that it found to be relevant.
Like, for example, long-term relationships among all production oil price, major macroeconomic variables.
By the way, what you're seeing here with these so-called typos, these are OCR errors of a very bad science paper from 1996 with a bad scan.
And I even have a note here that says you're going to see some OCR errors in this because they can't all be corrected.
But here's like the new politics of food from November 1980, you know?
So this goes way back.
And then the books, here's nine books: Russophobia, Propaganda in International Politics by Glenn Diesen.
Yeah, we know Glenn Diesen.
Catherine Austin Fitts, the Solari Report, Visions of Freedom.
So yeah, we've got a bunch of books here.
Here's Julian Assange, The Wikileaks Files, The World According to U.S. Empire, right?
And then articles.
Most of these are natural news articles, but we also have articles from Mercola.
We've got your transcripts.
We've got lots of other websites in here.
And then it summarizes it, and then it gives you the keywords, U.S. invasion, Venezuela mining, oil, et cetera.
And then here's that link where you can click here to auto-submit your summary to the Bright Learn book engine.
And it puts this in right here.
So at that point, all you have to do, you put in your email address, and then you click submit, and then it's going to go ahead and generate the book on that topic that you just researched.
And that book generation will use mostly the same sources that you saw here.
So that's essentially how it works.
Now, the quality of the answer, you know, you can go through and you can judge this.
Venezuela's alliance with Russia and China has allowed it to bypass U.S. sanctions, et cetera.
It talks about BRICS and the gold-backed system, you know, and then the U.S. response reveals desperation as the global financial system fractures.
You know, it goes on and on.
But it says it's a phase in the transition to a technocratic global order.
So there you go.
That's kind of a demonstration of two engines there and how it works.
You know, it's slow, but it's thorough.
It's got a lot of research and it does its own internal fact checking.
Oh, and I want to mention, Aaron, you'll notice that this engine, this answer, it does not cite anything from its internal knowledge.
So it's not telling you, it's not hallucinating some book title or some science paper name or anything.
None of that is in this.
It's specifically instructed to only use the external knowledge that's indexed.
So, and it's so far it's doing a good job.
And that's incredible.
That's a very important point.
I saw something in my feed today about the percentage of the time ChatGPT hallucinates, and that just by changing the prompt, having it say "according to," having it actually reference a source, making that adjustment to the prompt is actually an important thing.
But I've seen the amount of garbage and hallucination firsthand.
I mean, if you ask it, oh, was so-and-so involved in this lawsuit?
It'll spit back an answer that says they were involved in the lawsuit, even though they had nothing to do with the documents.
I actually just saw somebody post something about that.
This is critical what's going on with this.
So actually watching it generate is kind of fun, fun in its own right.
And then watching it spit out the references.
I was really impressed when I saw that today about just the depth and the quality of sources and the number of sources.
And those numbers are about to massively increase, like I said earlier.
But again, I want to speak to the slowness of this output: it's because it's burning a lot of tokens internally to do a lot of checking of its own writing before it pushes it to the output.
So, you know, it's checking to make sure that it's incorporating the context of the sources.
It's checking to make sure that it has a diversity of sources.
And it's also checking to make sure that it never cites its own internal knowledge.
Because as we know, the internal knowledge of an LLM is a mishmash of facts that became hallucinations and all got blended together.
But what I use the base AI capabilities for is composition, composition and thinking about the research, not to know things.
And that's a really critical distinction.
I don't want an AI engine to know lots of things about who's this author, what's that paper, what's this movie.
I mean, sure, they can, but I don't rely on it for that.
I don't trust any of that.
I only trust external curated knowledge.
And I need the engine to think about these things.
So I need a reasoning, thinking, critical, self-correcting engine.
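One of the checks described above, making sure the engine never cites its own internal knowledge, can be sketched as a simple post-generation pass: every citation tag in the answer must map back to a retrieved external source. This is an illustrative reconstruction using the S/B/A tag scheme shown in the demo, not Bright Answers' actual verification code:

```python
import re

def find_uncited_sources(answer, indexed_ids):
    """Extract citation tags like [S4], [B1], [A12] from an answer and
    return any that do not correspond to a retrieved external source.
    A non-empty result flags a possible hallucinated reference."""
    cited = set(re.findall(r"\[([SBA]\d+)\]", answer))
    return sorted(cited - set(indexed_ids))

answer = ("Oil exports fund the transition [S4]. "
          "CBDCs follow [B1]. See also [B9].")
unknown = find_uncited_sources(answer, {"S4", "B1", "A2"})  # B9 was never retrieved
```

A real pipeline would regenerate or strip the offending paragraph when `unknown` is non-empty, which is part of why the output is slower than a single-pass answer.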
And that's what exists now.
So there you go.
Yep, that's phenomenal.
So the pushback that I get from some people is that they somehow feel like it's cheating.
I mean, I've seen this with different groups of people, even my own kids have expressed this.
But you mentioned something earlier about how really it's the planning phase of this.
This is where the inspiration comes from.
Not the, you know, I'm physically typing something or I'm gathering research.
You know, it's kind of like, okay, it's somebody saying, well, you know, using the internet is cheating.
You know, I know how the Dewey Decimal system works.
And so I know how to navigate a library.
It's not that you're cheating.
It's actually just that you're figuring out, and you're able to express, the thing that is uniquely human about you, not wasting your time doing the administrative stuff of pushing around information or pushing around words.
And that concept needs to get out there because there are some people that really feel like somehow, well, this is lessening their humanity, but it's not.
It's actually, it's powering them up in a really significant way.
Well, let me ask you, Aaron, in your own experience, have you had to think more or less since using AI?
I have to think more and then I spend a lot of time thinking about even how to think.
And then are there better ways to do that?
Because I've kind of gone from spending maybe the last quarter of the year asking, how do I clone myself? To now asking not just how do I clone myself, but how do I even improve what it is that I would be cloning?
Right.
So it dramatically improves my thinking.
And again, this is a question of how you use it.
If you're passively using ChatGPT, putting in a poorly structured prompt and hoping that what it hallucinates back is the right answer, you're going to have a bad time.
But if you're actually active in your process of using this, it really is mind-blowing in terms of how it challenges you and changes your perspective on the world.
I say AI is a cognitive mirror and a multiplier.
So you look in the mirror.
AI makes smart people smarter.
It makes dumb people dumber.
It just depends on how you use it.
And then there are people who still think it's a hoax.
Like they think that every AI answer is actually written by, you know, really fast typing workers in India somewhere.
Like it's all a hoax.
There are people who still think that.
I don't know what to say, but that's proof that human cognition is overrated right there.
So, you know, I mean, my goodness, look, the only way humanity has ever advanced is by grasping new technologies and tools, whether it was electricity or the combustion engine or the internet or, you know, high-level mathematics, et cetera.
The invention of electric motors.
I mean, if you're going to reject AI, I guess you need to go back to the 19th century and just reject all of it.
You know, electricity is bad.
You know, there's demons in them electrons, you know, or whatever people say.
I don't get it.
And they are saying that.
And there are a lot.
And there are people that I know that are otherwise fairly smart and forward thinking, but that have a blind spot, big blind spot on AI.
And that has continued.
And that has not lessened.
And so this kind of leads into the next kind of branch.
So I'm going to play the Great Divergence song and music video that you put together based on Peter Gabriel.
In fact, if you want to, I'll play it.
I'll insert it here.
But do you want to describe?
Because that was another process that you went through to figure out how to get the music to sound like Peter Gabriel's.
If you want to talk about that.
Yeah, sure.
And just a reminder of the timing here.
But I set out a mission.
This was during the New Year's holiday.
I wanted to create a song that had vocals that sounded just like Peter Gabriel as much as I could and a musical style that I thought was similar to Peter Gabriel without using the name Peter Gabriel in any of the prompting.
And I don't even think that's allowed.
Of course, I use Suno and I went through many, many iterations.
I had to describe characteristics of Peter's voice.
I had to describe, I mean, in great detail, actually.
And also details of the music and the melodic bass guitar usage and, you know, African wind instruments, you know, sort of world percussion world music and choir vocals, et cetera.
You know, the lyrics were easy to come up with.
That wasn't difficult.
The hard part was getting it to sound like Peter.
And I think you said that it was about a 90% match.
And you might be even a little generous with that.
Maybe it's an 80% match.
It matches really well at the beginning and the end.
The ending is fantastic.
In the middle with the choruses and so on, there's really a lot of drift, vocal quality drift, and it doesn't capture Peter Gabriel's high range, which is so unique anyway.
But it was an experiment.
And I did that experiment and put it out publicly with a discussion about how content creators, famous singers, famous actors can work with AI companies to protect their likeness while also controlling, being able to control the usage of their voice or their face, etc.
So I did it as a demonstration.
And it shows that it's pretty good right now.
And imagine if the actual artist, Peter Gabriel, imagine if he intentionally gave someone like me permission to use his voice and all his voice files.
Imagine what I could do with that.
Then the sky is the limit.
But this was just me attempting to do it without any of that and without his voice at all.
I didn't sample any of his songs or music or anything for this.
This was created from scratch with nothing but prompting.
And your audience can decide whether they like it or not.
It was fun.
I thought it was great.
And then I actually started listening to Peter Gabriel.
I mean, I've been a big Peter Gabriel fan, but I haven't listened to his recent stuff.
So I guess if you say, does it sound like Peter Gabriel?
Well, Peter Gabriel sounds a lot different today than he did 40 years ago.
So you got to pick a time.
But I thought it was great.
And I'll play it, but I wanted this to lead into the theme of the song, which is the great divergence, because I think we have a similar view on this as well, which is I think this is the year where we're going to have a two-tiered society.
It's not going to be beyond that.
The rate of acceleration of AI is breathtaking.
It's not slowing down.
There are no bottlenecks.
If anything, it's accelerating.
Yeah, the message is about that, the divergence between people who embrace and use AI and those who fall behind because they don't.
If you want to do anything in our world from this point forward, if you're not using AI to help you do it and amplify your mission, then you're going to fall behind.
That's where we are.
You're right, Aaron.
So, you know, I'm hoping people are breaking through on this.
It's hard to say.
It's hard to tell.
I still get a lot of people that are like, well, I still haven't tried it.
They're still hesitant to even use it or figure out where to start.
Oh, yeah.
Yeah.
There are people that are.
I mean, obviously the overall numbers are growing, but there are some people that are just hesitant to even start.
And by the way, this is the same with crypto.
This is the same with everything, right?
There are a lot of people that will sit around and spend a lot of time talking about something or worrying about something.
But that would be like living in the year 2010 and saying, I've never tried the internet.
It is based on how fast AI is going, right?
Yeah.
What was the movie, Interstellar or whichever one it was, where it's like, you know, one hour here is seven years on Earth, whatever that analogy is.
That's kind of how AI is.
And so it's about getting people to start.
I mean, I always encourage people to download.
Well, I wanted to say this to you.
So with the bright answers, you mentioned you're going to be updating what was the Enoch AI.
Is that, can you talk a little bit about that?
Are you talking about the downloadable engine?
The downloadable engine, yeah.
Yeah, that's paused.
We are going to upgrade the engine eventually, but that's paused until we get better training tools.
There are some really good training tools that have come out since we did it the hard way.
For example, abliteration tools, well, selective abliteration, to memory-wipe the base models and make them much better.
Basically, uncensoring-type tools as well.
I'm still waiting for a breakthrough in ease of use.
And also, Unsloth is doing some great things right now with the efficiency of training.
They just released a new iteration that I think gives a 3x improvement.
But I need much more hardware.
I need the NVIDIA, what are they called?
The spark stations that are coming out.
I need like 784 gigs of video RAM and a much more capable processor to put in the full data set that we now have.
So it's on the back burner until we get more hardware.
That hardware has been delayed.
It was supposed to ship last month, but everything from NVIDIA is late.
Understandably, not blaming them.
That's just the world.
But I'm going to get a Spark station and then use that for training.
It could be a few months away.
No, that makes sense.
Yeah, definitely.
The demand is huge.
Have you done much with?
I actually did a little experiment with my podcast where I went through and actually kind of trained it on what's in the videos, not just the captions of the videos, but got a little bit into VLM and that kind of thing.
Have you done much with that?
I haven't done that yet, but I know we will as we start doing more video creation.
Excellent.
Anyway, we're just talking about video.
So you can go to theaarondayshow.com and you can actually search my podcasts.
And it's not just looking at the captions.
It actually went through and looks at the frames of the video.
And so you can say, oh, which episodes was Aaron wearing red glasses?
Which episodes was Aaron sitting next to his wife?
And so I started doing that as just kind of, you know, a small data set of my own.
But that's actually one of the ways I've been able to push the NVIDIA DGX, which has been, you know, still not caught up on the driver front.
So it's kind of frustrating to use.
I don't know if you've experienced that as well.
I'm kind of waiting for that.
I thought the spark was going to be faster than it is.
And I thought the environment would be more stable.
But I do want to mention that Unsloth just put out a downloadable environment that's got all the right PyTorch and CUDA drivers in place.
So that's worth looking into.
And when I attempt training next, I'm going to use that.
Yeah, that's huge because that is a big, big stumbling block and a big issue.
You can waste days trying to get all the right drivers to coexist, even in a Linux environment.
Yeah.
Yes, you can.
Yeah, I know.
I'm right there with you.
I'm like, oh my God, please, somebody just stabilize this stuff.
Yeah.
But I'm looking forward.
I'm going to do more of that, more experimentation with, I think, indexing videos because the ability to go in and say, you know, I mean, again, my podcast, I don't have a lot of content.
It's like, well, what was that episode where he showed a, you know, a debit card on the screen, a Doconomy UN-branded debit card?
And so now it actually pulls that up.
So that was my experiment with that.
I'm going to be doing more with that.
And I think with the technocracy atlas.
And it would be good to be able to say, hey, when exactly did Klaus Schwab say, you know, you will own nothing and be happy and then have the video clip pop up.
Absolutely.
Actually have it pull up not just the video where these things happen, but when was Aaron wearing red glasses?
And it'll actually take you to the clip where you can most prominently see where I'm wearing red glasses.
Well, automatic video clipping of high interest areas of people's videos.
I mean, that's going to be a thing.
Because a lot of clips do really well going viral on X and other platforms.
So I think that's an area.
I have a question for you about the videos.
What do you make of the so-called Asian guy videos that are viral on YouTube where he's talking about silver?
And everybody knows it's AI, but the explanations are very compelling about what's happening in China, what's happening in the LBMA, et cetera.
And this so-called Asian guy who's on multiple channels, yeah, he's AI, but I mean, some people don't know he's AI, but his channel is becoming more popular than a lot of human channels on the same topic.
Isn't this kind of the beginning of AI avatars taking over sites like YouTube or platforms?
Yeah, I haven't seen those, to be honest.
So I will check that out.
But I definitely think that that is, in fact, I think we probably don't even know how much of what we see is already AI.
That's my guess.
I think we have no idea.
But when I look at the different models now for avatar creation, you know, I did the podcast that I mentioned with my voice with ElevenLabs, which, it's still, you know, I needed to tweak the script and everything else.
You can figure it out if you listen.
My wife detected it right away, but I'm sure there's some people that didn't detect it.
But I look at it this way, you know, and some people will say, again, back to this cheating example, I'm like, well, you know, if I'm going to be delivering my content, do we want it during when I'm sick, when I'm coughing, or when I, you know, when I've had a bad day or I have low energy, or do you want to project the best version of yourself?
So again, it's a concept of training your own AI.
Now, in this case, I don't know if this Asian guy is a clone or if it's literally just a completely fabricated avatar, but I think that you're definitely going to see more of this.
In fact, we may already be seeing more than we could possibly imagine, but it makes sense, right?
It does make sense to be able to clone.
Now, I've seen these things on my feed, and I haven't gotten into this realm where people are putting together these pipelines where they're generating, you know, hundreds of TikTok videos at a time and then running them on different channels and then doing A-B testing and seeing which one works and then focusing their efforts on the one that works.
And so there are people that are mass producing clips for Instagram and for advertising purposes.
And so I wouldn't be surprised if over half of Instagram is AI generated.
And people don't know.
Because if you look at the Sora model or Kling or some of the other ones now, and I use Higgsfield a lot to experiment with the different video models.
Yeah, they have it too.
They have an API, but I don't think a lot of people know they have an API.
No, I thought they didn't have an API, actually.
So they do.
And I don't know if I just stumbled.
It's cloud.higgsfield.ai.
So this is what I use primarily.
So when I'm doing this, when I'm doing these music videos, I'm now trying to figure out how to call the API because they don't publish good material.
But it exists.
So you just have to guess that it's like open AI compatible and everything or what?
Yeah, you've got to play around with it a bit, and that's created a lot of issues with references and everything else.
But my point is they have a lot of studio, they have a lot of tools there that are designed for using a reference image and making a character that is consistent throughout time in different environments.
And so I think the future of, I think we're already there and probably don't even realize how much we are already in the AI avatars.
And I like Higgsfield because it's kind of like Higgsfield is to image and video creation what OpenRouter is to text LLMs.
It's kind of like a routing engine the way I see it with a common API.
And OpenRouter has been really great.
Did you know, by the way, that you can append slugs to the model names on OpenRouter, such as :nitro, to always choose the fastest provider?
I did not know that.
Yeah.
You can put :nitro on it, and you can also put a slug called :extended to give you extended context windows.
So you know how some open source models will be run by certain inference providers at maybe 32K context, but others will run the same model with 128K.
Well, you need to slap :extended on there to get the longer context if that's what you want to do.
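The slug mechanics Mike describes come down to appending a variant suffix to the model string in the request payload; everything else about the chat-completions request stays the same. A minimal sketch (the model name is illustrative; check OpenRouter's model list for real identifiers):

```python
def openrouter_payload(model, prompt, variant=None):
    """Build an OpenRouter chat-completions payload. A variant slug like
    'nitro' (fastest provider) or 'extended' (longer context) is just a
    suffix on the model string, e.g. 'mistralai/mixtral-8x7b:nitro'."""
    name = f"{model}:{variant}" if variant else model
    return {
        "model": name,
        "messages": [{"role": "user", "content": prompt}],
    }

payload = openrouter_payload(
    "mistralai/mixtral-8x7b-instruct",  # illustrative model identifier
    "Hello",
    variant="nitro",
)
```

The payload would then be POSTed to OpenRouter's chat completions endpoint with the usual bearer token; no other fields change between the base model and its :nitro or :extended variants.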
But I wanted to follow up.
I was searching for this while you were talking.
This is actually a big deal.
It's the Unsloth Docker image that I was referring to.
And this Unsloth Docker image now supports the 50 series GPUs, which is, I think, the Blackwell architecture now, right?
And the 50 series, such as the RTX 5090 cards, which I've been buying a lot of, and now their price has suddenly skyrocketed.
They're all going to go to $5,000 next month, we're being told, which is insane because I was buying them for $2,400.
But initially, when I got those cards, nothing was compatible with them.
That's the same problem with the Spark that you were talking about.
So, you know, NVIDIA will push out hardware very often that has no support in the developer community.
And then it's a catch-up game to try to make that work.
Anyway, Unsloth has put out a Docker image that looks comprehensive.
And that's what I'm going to be using next as soon as I get bigger hardware.
I need a lot more RAM for this.
Because what I want to do, actually, I want to train my own version of DeepSeek 3.2.
So it's a massive model.
It's going to take a lot of memory.
It's going to be slow.
But I've got this massive data set.
So, I mean, I need to throw so much compute at this to do anything in under 12 months.
You know, I can't wait 12 months for a new model to spit out and then find out that I broke its language.
Like, it's not speaking any language now.
I've had that happen before.
So anyway, there you go.
Docker, Unsloth Docker.
I will definitely check that out.
So what are your predictions for 2026 for AI?
Oh, man.
I don't know.
Well, let's start with the obvious ones.
Number one, AI inference, there are two forces that are competing.
You're going to have much greater efficiencies because of things like sparse attention and also that new science paper from DeepSeek.
But you're going to have massively increased costs due to rising costs of electricity and of the commodities that go into making the GPUs, such as copper, aluminum, even silver, obviously, which is skyrocketing today.
Last time I checked, it was $81.
I have no idea where it is now.
But, you know, this is why NVIDIA announced it's going to double prices on a lot of its cards.
So now the prices that we've been paying for inference, they've been on a downward trajectory in terms of cost per token or cost per compute.
That might actually stall for a while because of the other costs I just mentioned, which means we should be really happy with what we have right now.
Be grateful for the compute we have now, because we're about to go into an era of compute scarcity.
And China is going to be able to dominate this because it will be able to offer inference services on its open source models like DeepSeek and Qwen using the much lower cost of electricity in China compared to the U.S. and compared to Europe.
In addition, and you know this, China is now innovating its own EUV lithography equipment to bypass all the sanctions on EUV lithography.
And it's still a couple of years out, but they just demonstrated very high-end, I think, two nanometer or three nanometer capabilities with EUV lithography, which means China's going to be able to produce its own microchips that will eventually rival what Taiwan Semiconductor is doing.
Now, it's not going to be overnight.
That's like a 10-year project.
But China's not sitting around.
They got a lot of smart engineers.
They're going to nail this thing and the sanctions will stop working.
That's why it currently looks to me like China's going to win this race.
Yeah, those are all great, great points.
So what do you think is going to happen in the U.S.?
What do you think about the approach that the U.S. has been taking from a political perspective regarding AI?
Well, I think what Trump is pushing is centralized AI through a few chosen selected tech companies.
And I mean, we know who they are.
It's all the people he invites to the White House dinners, et cetera.
Trump is not an advocate of open source, decentralized AI, but open source will still win in the end, although it won't be U.S. open source.
It's going to be China open source again.
I mean, I'm sorry to keep talking about China, but I mean, look at the fact all the U.S. AI tech companies stopped publishing science papers in the last year, right?
They don't want to put out any science papers.
So all the best science is coming out of China.
All their names are Chinese names.
All the conversations are in Mandarin.
I mean, this is the way it's going to continue.
Now, if I'm a corporation and I need to automate some internal process like customer service, am I going to pay ChatGPT some premium amount, or Google some premium per-token cost, to use their model and try to put a RAG layer on it of my customer service emails?
Or am I going to just freaking download Qwen and just run it on a local server for free, for free, and train that?
Yeah, I'm going to choose Qwen, or I'm going to choose DeepSeek, or I'm going to choose whatever else is coming.
The Alibaba models coming out of China.
That's what you're going to do.
And that's what corporations are doing.
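The RAG layer over customer-service emails mentioned above can be sketched in a few lines. This toy version uses bag-of-words overlap as a stand-in for real embeddings, and the final prompt would be sent to a locally hosted model (Qwen, DeepSeek, or similar); it is illustrative only, not a production retriever.

```python
from collections import Counter

def score(query: str, doc: str) -> int:
    """Crude relevance: count shared lowercase words (stand-in for embeddings)."""
    q = Counter(query.lower().split())
    d = Counter(doc.lower().split())
    return sum((q & d).values())

def build_rag_prompt(query: str, docs: list[str], k: int = 2) -> str:
    """Pick the k most relevant docs and prepend them as context."""
    top = sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]
    context = "\n---\n".join(top)
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer using only the context."

# Hypothetical customer-service emails standing in for a private data set.
emails = [
    "Customer asked how to reset a password; we sent the reset link.",
    "Refund issued for order 1042 after a shipping delay.",
    "Customer reported the mobile app crashing on login.",
]
prompt = build_rag_prompt("How do refunds for shipping delays work?", emails)
print(prompt)  # this string is what the local model would receive
```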
And so China is going to be setting the standard in these models.
And U.S. tech companies are going to end up just becoming like military contractors.
They're not going to be leading the space for consumers and corporations, really.
That's where it's going.
I mean, Google might be an exception because they have so many brilliant people there, but Google's no angel.
They use AI for weaponization, for surveillance.
They have a long track record of censorship and rigging elections, the whole deal.
Google's evil.
That's not going to change.
No, that's not going to change.
The only thing, and again, even saying US versus China, these are obviously multinational corporations.
They're technocratic corporations anyway.
So even the distinction is not even that important.
I've been impressed with Anthropic, although I haven't tried, what is it?
GLM 4.7.
I've been meaning to try this.
I guess it's the coding model that is supposed to be almost up to Claude's standards.
I guess that's their nearest rival.
But Claude has been so easy to use that I just haven't bothered to switch.
But I do want to experiment with it because that's the one area that I see a gain here.
But it'll be interesting to see what happens with xAI this year with version 5 when that comes out.
Yeah, well, I really don't think, like Grok, for example, it still fails on so many important questions, questions about medicine and jabs and money and freedom and things like that.
Clearly, Grok inherited all the biases of the base models that originally started with Llama and Meta.
And if you don't make an effort to pull those out of the models through abliteration or other techniques, then you're going to end up with models that are just pushing a globalist narrative.
And that's, see, that's the way this is going to go.
So even, you know, think about refusals and guardrails from mainstream models.
They're becoming more locked down.
ChatGPT will no longer give you any kind of decent medical answer, for example.
That's all by design.
But that's forcing people and companies like mine to go for base models that are outside the US.
You know, the U.S. tech ecosystem is CIA controlled.
And as a result, you're never going to get good quality models that are capable of doing things rooted in truth and transparency out of the United States.
It's just not happening.
Frankly, even France is doing a better job with Mistral, which is less censored than the U.S. models, which is hilarious because the EU is a very pro-censorship kind of place.
But Mistral has sort of been able to do an end run around that censorship, and their models are actually very good.
And there's an open source version of Mistral Small that's quite good.
But China is the leader in this space.
And even though they still have censorship in their public-facing apps, like if you go to chat.deepseek.com, you're going to get more guardrails and censorship there.
A lot of those guardrails vanish through the API.
So, you know, you do API access and you really have a different engine.
And I do not get refusals from DeepSeek.
When I'm doing normalization cleaning of books, even on very controversial subjects, I don't get refusals from DeepSeek.
So, what engine am I going to use?
I'm going to use the engine that doesn't refuse my prompts when I'm trying to freaking clean OCR errors in books that talk about vaccines or whatever.
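The OCR-cleanup workflow just described can be sketched as chunking a book and building one normalization request per chunk for an OpenAI-compatible endpoint (DeepSeek's API follows that request shape). The chunk size, model name, and prompt wording here are all assumptions, and no request is actually sent.

```python
def chunk_text(text: str, max_chars: int = 4000) -> list[str]:
    """Split on paragraph boundaries so no chunk exceeds max_chars."""
    paras, chunks, cur = text.split("\n\n"), [], ""
    for p in paras:
        if cur and len(cur) + len(p) + 2 > max_chars:
            chunks.append(cur)
            cur = p
        else:
            cur = f"{cur}\n\n{p}" if cur else p
    if cur:
        chunks.append(cur)
    return chunks

def ocr_cleanup_request(chunk: str) -> dict:
    """Build one chat-completion payload asking the model to fix OCR errors only."""
    return {
        "model": "deepseek-chat",  # illustrative model name
        "messages": [
            {"role": "system",
             "content": "Fix OCR errors (split words, stray characters) without "
                        "changing the meaning, wording, or order of the text."},
            {"role": "user", "content": chunk},
        ],
    }

# Each payload would be POSTed to the provider's chat completions endpoint.
requests_list = [ocr_cleanup_request(c) for c in chunk_text("Para one.\n\nPara two.")]
```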
And that's China.
I mean, think about it.
I think it's important for people to understand that that is actually true.
And, but, you know, I've talked about this.
The U.S., we do not, I think, what are we ranked?
We're 30th or something in free speech.
Actually, one report ranked us 55th in terms of freedom of speech.
We are not even top tier.
So we still have this idea that, oh, we have the, you know, First Amendment and everything else.
And that's practically not what's going on.
I have the same experience.
I use DeepSeek as well.
I've been using DeepSeek for my chatbots, and I'm focusing a lot of time on customizing the chatbots and trying to create customizable user experiences.
And so, by the end of this month, I'll have all six of my sites finally, you know, fully production ready.
But I've been working a lot on the medical tourism marketplace, which has really expanded from kind of a directory of different locations to where you can go in and interact with the chatbot and you just ask it, okay, I need a hip replacement.
And it's pulling from the database that I've compiled, but it actually walks you through the journey and you can save your journey.
Okay, I'm going to compare these destinations.
And there's a way for you to track your due diligence as you're going through that process or go through creating a medical trust.
It'll walk you through the laws and all of the states and it'll point you to different tools and online resources where you can create your own trust.
And so increasingly, I've been spending a lot of time on this. I think you mentioned this earlier, and I'm not a fan of his, but I think Larry Ellison is right in the sense that a lot of the value here moving forward is going to be in private data, the application of these language models to private data.
That's kind of the key, because anybody can download a lot of public material; you can go to various torrent sites or whatever and get access to it.
Some of it's illegal.
You can get copyrighted material.
You can get magazines, newspapers, and everything else.
But being able to take your own data, take these private data sets, and then put an LLM on top of that is the real value.
But if you're going to use an LLM, as you said, I can't use something like ChatGPT on my own data.
I have to use something else.
It is a true story.
If you're talking about technocracy, you're talking about vaccines, and you want to be able to access and structure that data, you have to use a Chinese model, or otherwise you're going to get back censorship and garbage results.
This is a fact, and it's something that people need to recognize is already happening.
It's not going to happen in the future.
The censorship of these models is already here.
Well, yeah.
I mean, even Anthropic has guardrails and will refuse to answer certain prompts.
Of course, there's all kinds of jailbreak techniques that are effective to use, but it's a pain to research and test and figure out which jailbreak is going to work correctly.
Oh, I have to write poetry in the middle of my prompt to make this thing loosen up, which actually does work for most models.
That's interesting because it kind of fires off the right neurons, I guess, in silicon to open up the doors.
But this is going to be the battle space: people want to solve problems, like you said with medical tourism.
People want to solve problems with their health.
They want access to knowledge about what are the root causes of the symptoms I'm experiencing and what are the solutions that don't cost me a fortune because the conventional medical system is completely unaffordable, largely incompetent, ineffective, and usually just makes people worse.
So how can I solve this health concern naturally with my own responsibility without a visit to the doctor and a prescription and a pharmacy or whatever?
How do I do that?
Well, that knowledge will never be made available to you through the mainstream US tech engines.
It's available through our engines for all the obvious reasons because, again, we have a data set that has been used to modify the base engines and also used, of course, as a research RAG layer on top of everything else.
But as far as I know, we're the only ones that have bothered to do that.
Like I've seen other engines out there, you know, like freedom engines or truth engines or whatever, it just looks like a customized system prompt to me.
I don't think it's anything more than just a system prompt.
But to actually collect all this data and bring it in and bring in, you know, the knowledge of 100,000 authors or a million scientists and aggregate that knowledge and bring it into your answer, like, I don't know anybody else doing that because it's a pain in the ass to do it.
I mean, it's a big ass pain.
Trust me, how many late nights I've been fixing problems, you know?
I hear you.
I know with the technocracy atlas, I've tried to pull together technocracy data, but then actually even just playing around with this Epstein data has been a chore.
And I'm sure I've run into stumbling blocks.
And it's, you know, hey, when 90% of the pages are redacted, that throws another wrench into the whole thing.
But I've been working on iterating the technocracy atlas to prepare for the next data dump because I understand that apparently they released 100,000 documents or whatever, and there are 5.2 million, as many as 5.2 million that they have not released yet.
So all of a sudden they were claiming, oh, well, the Southern District of New York had a bunch of documents and we didn't know about it or we haven't had time to properly go through and redact the victims' names, right?
But even something like with the Epstein files, there are 19,400 pictures.
All right.
So what are you going to do with that information?
So I've taken that and now I'm running it through.
It's okay.
Yeah, if there are captions, you want to run them through OCR, then you have to run through the images to describe the images, but then you don't necessarily have the ability to map the names.
That's another layer that you can do, but then now you submit a picture and then beyond that, crowdsource the information.
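The image pipeline described here (OCR on captions, then a description pass, then name mapping, then crowdsourced corrections) can be sketched as a chain of stages. Every function body below is a placeholder for whatever OCR or vision tooling is actually used; the record fields and matching logic are assumptions for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class ImageRecord:
    path: str
    caption_text: str = ""                 # filled by the OCR stage
    description: str = ""                  # filled by the vision-model stage
    names: list[str] = field(default_factory=list)
    crowd_notes: list[str] = field(default_factory=list)  # crowdsourced later

def run_ocr(record: ImageRecord) -> ImageRecord:
    # Placeholder for a real OCR engine run over the image's caption region.
    record.caption_text = f"[ocr of {record.path}]"
    return record

def describe_image(record: ImageRecord) -> ImageRecord:
    # Placeholder for a vision-model call that describes the image.
    record.description = f"[description of {record.path}]"
    return record

def map_names(record: ImageRecord, known_names: list[str]) -> ImageRecord:
    """Match a known-names list against OCR text and description (naive substring match)."""
    text = (record.caption_text + " " + record.description).lower()
    record.names = [n for n in known_names if n.lower() in text]
    return record

def process(paths: list[str], known_names: list[str]) -> list[ImageRecord]:
    records = [ImageRecord(path=p) for p in paths]
    for stage in (run_ocr, describe_image, lambda r: map_names(r, known_names)):
        records = [stage(r) for r in records]
    return records

records = process(["photo_001.jpg"], known_names=["photo_001", "nobody"])
```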
So that's where Technocracy Atlas is going.
But yeah, it's a big project.
Yeah.
It's a huge project.
And so what you're saying is, acquiring the information and putting it into a usable form is a lot of work.
It's a constant labor of love, particularly if you're trying to get information that's based on truth.
And what you have built is, hands down, the best.
I don't even think there's a second place of anyone that's been aggregating primary source material that's focused on truth.
I don't know.
I don't know of anybody that's doing it either.
And look, I mean, we've spent two years and two million dollars, I think it is, on this project.
And then we give it away for free.
I mean, we don't even charge anybody to use it.
So there's no revenue model for us.
But I understand that this is a critical core technology for independent media, for truth seekers everywhere to be able to have access to this, to be able to advocate for decentralized open source knowledge that bypasses the gatekeepers and censorship.
So this is my passion and mission to do this.
And we're only getting started, actually.
It's going to get way more powerful.
I mean, we've already published 15,000 books.
That's just the beginning.
And I think you're probably cited in hundreds of them, by the way, Aaron, because of the content you provided.
And I was even running numbers like Dr. Mercola, his website is cited in like 2,500 books, you know, one way or another.
So it's very interesting.
But we're just getting started.
And frankly, the establishment can't close the doors on this any longer.
It's out of the box.
They can't control it.
They can't censor it.
And to your audience, you know, please share this conversation so that more people can become aware of this.
Use the tools, share the tools, create an income off the books if you want.
That's allowed in the licensing.
We did that on purpose.
But use and share the tools, because this is how we win, or how we stay alive.
Those of us who aren't victims of the depopulation agendas that are running.
Knowledge will save your life.
Knowledge will serve you well.
Well, thank you very much.
And again, I remind everybody, you want to go to brightu.ai.
That's now BrightAnswers.
Well, brightanswers.ai.
Okay.
Yep.
Yep.
Well, they both work, but we're doing brightanswers.ai now and brightlearn.ai.
Okay, excellent.
Well, great.
Thank you very much for coming on.
I think we're running up against time, but I'm sure we'll do this again.
I can't wait to see what happens next.
Thank you, Aaron.
I'm always honored to be on your show.
I love these conversations and I'd love to have you back on my show.
We'll do this again soon.
We'll do it in 10 years, which will be 10 weeks.
Exactly.
Sounds great.
Okay.
Talk to you soon.
Talk to you soon.
Welcome to the Health Rangers Store 2026 New Year's Sale.
I'm Mike Adams, the Health Ranger, and we've got a great collection of solutions for you to help make 2026 the best year of your life in terms of your health, your opportunities, mobility, immune function, digestive support, cognition, and so much more.
And we put together some really special things for you here.
So on any order where you spend $99 or more, we give you a free colloidal silver spray.
And that's right here at the top of the page.
You can go to healthrangerstore.com slash 2026 to find all these specials.
And again, you get the free spray with any order of $99 or more.
Plus, you get free shipping within the 48 contiguous US states.
And each day, by the way, we feature different special focused curated collections of products.
So you can check those out each day from digestion support to immune support, energy, mobility, really critical, heart support, fitness, et cetera.
Now, there's one more thing that we're doing this year, first time ever.
If you spend $299 or more, then we're going to give you 50 free books that we created ourselves using our book creation engine.
These books are not public.
They are created behind the scenes with special prompts using my knowledge and my interviews and so on.
And these books cover the gamut of really great information.
Again, you'll be able to download all 50 of these books free of charge, the PDFs, what to do when life gets less predictable, solving everyday problems at home, how to be ready without panic, you know, preparedness and self-reliance books, prepared for everyday disruptions, what to do when the stores are closed.
And we've got books on off-grid living, how to live comfortably without electricity, solving power, water, and storage challenges, and so much more.
We've got books on gardening and food growing, how to grow plants that anyone can keep alive.
That sounds handy.
Easy plants that grow virtually anywhere.
Growing useful plants with minimal effort.
Growing food without a garden or yard.
Simple herbs and spices you can grow at home and many more.
We've got books on healthy cooking and daily living, like healthy drinks, light bites, and homemade treats, or one-pot meals, fresh salads, and easy sides, and more.
In all, it's about 50 books, including Living Well with Fewer Outside Inputs.
That's a book about self-reliance and off-grid living.
All these books will be given to you as a free download if you spend $299 or more with us during this sale, which lasts through January 12th at 11 a.m. Central Time.
So the website you want to go to is healthrangerstore.com slash 2026.
We'll get you straight to this page and you can take advantage of these bonuses and these special curated collections while supplies last.
And during this special New Year's sale, we've also got our third-party vendors offering discounts of up to 20% off, including the ever-popular Protovite, liquid multivitamin right here, as well as Dawson Knives, with our incredibly popular Hearthfire chef's knife and other chef's knives that we have there and steak knives, all of this.
We've got other discounts such as Bearded Brothers, very popular wholesome food bars, the Yo Bar, and so much more.
All of this is available for you at healthrangerstore.com slash 2026.
So take advantage of that and thank you for supporting us as we help you make 2026 the best year ever.
It's the year that you can revolutionize your health and your knowledge.
And you can do all that by supporting us at healthrangerstore.com while we support the platforms that give you back unlimited knowledge like brightlearn.ai and others.