Jan. 16, 2026 - Health Ranger - Mike Adams
47:47
DeepSeek will SHATTER AI Barriers with V4 Release

Welcome to this special report.
I'm Mike Adams, an AI developer known as the Health Ranger, founder of all the Brighteon platforms and now the new Bright platforms as well, such as BrightLearn.ai, which has become the largest book publisher in the world, with over 21,000 free books published in the last 45 days, thanks to many of you.
We've had over 6,000 authors creating books, and you can create your own book completely free of charge.
And the books are rather amazing.
People are blown away.
I've had comments from people saying, I think I'm living in the future.
And I've got also breaking news on our book engine.
I'll get to the AI news here in a second.
But first, an update on the science paper count in our index. This is one of the reasons why our book engine is so incredibly good: it researches a massive amount of other full-text books, full-text science papers, full-text articles, transcripts, interviews, and many other things in order to gather the information to write the chapters.
And we use advanced AI model reasoning with all of our custom alterations of models in order to achieve that.
Well, I can announce today that we now have 230, well, let's call it 240,000 science papers in our index now.
240,000.
It's actually 239,828, but we'll call it 240,000.
So anytime that you create a book at brightlearn.ai, it's actually researching through these 240,000 science papers.
That is, if you selected science papers.
And these were just added.
I mean, it was 100,000 two days ago.
So I just added another 140,000 over the last two days.
And there's more coming.
It'll be a million before long.
Trust me.
I know, because I'm footing the bill for all the search indexing and storage and everything.
So it's going to be huge.
A million science papers, and there'll be hundreds of thousands of books in the engine.
Okay.
My point is, when it comes to AI technology, I know what I'm talking about.
I've built the most successful book creation engine in the world and I've studied AI engines.
I've built and released an open source AI model with all kinds of custom training using ablation techniques, sometimes called abliteration, which is hilarious.
And I've used all the main models.
I've tested them all in various ways.
And I'm here to tell you that I believe within the next 30 days or so, maybe 45 on the outside, but probably within the next 30 days, a new model is about to be released that will make most human cognition obsolete.
And I want you to be aware of this because this is going to change everything in the AI space and it's going to accelerate AI job replacement, especially in middle manager roles.
This is going to bring reasoning engines to the forefront.
Now, it will still take a while, a year to two years for most corporations to absorb this breakthrough.
But the breakthrough is significant.
And I want to explain why.
It's going to get a little bit geeky here because, of course, I had to dig deeply into a new science paper that was authored by scientists who work for or are affiliated with the DeepSeek company.
Now, DeepSeek is the AI engine organization that shocked the world one year ago, actually.
It was just about exactly one year ago with the release of DeepSeek R1, which was a reasoning model that used an internal thought process of reasoning through a problem, exploring different alternatives, fact-checking its own thoughts and conclusions, revisiting what it had previously thought, et cetera.
Basically, what's called a chain-of-thought reasoning engine, or CoT as it's now known.
Since then, DeepSeek has pioneered many new technologies in the last year.
Those include DeepSeek Sparse Attention, or DSA as it's known, which, in my view and in my experiments, is the most impressive bit of technology that I've ever seen in LLMs.
It's really quite incredible because it allows engines, especially those with a mixture-of-experts layer, to give you the performance of very large language models at the cost and throughput of small language models, because of the token speed and efficiency that comes from selectively activating only relatively small parts of the language model in order to bring you your answer.
Now, I have used DeepSeek a lot.
I use DeepSeek version 3.2 for all of the document cleaning.
I started doing that when 3.2 was released, in late November, early December of last year.
Once I saw the capabilities of DeepSeek 3.2, I immediately switched over to it and I scaled up a massive multi-threading operation.
And my company alone was using a significant portion of the DeepSeek inference capabilities in the world at times.
For example, I would have 1,500 DeepSeek API requests running simultaneously, cleaning or normalizing books or normalizing science papers.
So it's bizarre to use so much inference of a language model that it starts giving you 429 errors, which means you're using too much and you need to back off and start having delays between your API requests.
So I built in the 429 error handling into my code and sometimes it has to slow down.
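To give a sense of what that 429 handling looks like, here's a minimal sketch of the kind of retry-with-backoff wrapper I'm describing, assuming an OpenAI-compatible chat endpoint; the URL, key, model name, and function name are placeholders, not my actual production code.

```python
import time
import requests

# Placeholder endpoint and credentials; DeepSeek exposes an OpenAI-compatible
# chat API, but treat these exact values as assumptions for illustration.
API_URL = "https://api.deepseek.com/chat/completions"
API_KEY = "YOUR_KEY_HERE"

def chat_with_backoff(messages, model="deepseek-chat", max_retries=6):
    """Send a chat request, backing off whenever the server returns HTTP 429."""
    delay = 1.0  # seconds; doubles after each 429
    for _ in range(max_retries):
        resp = requests.post(
            API_URL,
            headers={"Authorization": f"Bearer {API_KEY}"},
            json={"model": model, "messages": messages},
            timeout=120,
        )
        if resp.status_code == 429:
            # Too many requests: honor Retry-After if present, else back off exponentially.
            wait = float(resp.headers.get("Retry-After", delay))
            time.sleep(wait)
            delay *= 2
            continue
        resp.raise_for_status()
        return resp.json()["choices"][0]["message"]["content"]
    raise RuntimeError("Still rate-limited after retries; reduce concurrency.")
```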
But I don't think anybody else is using DeepSeek as much as I am other than maybe the DeepSeek company itself.
Anyway, that's only part of the story.
The other thing I realized in using DeepSeek is that I got the best results if I separated the cognition of the engine from the actual memory or facts or factoids of what it was dealing with.
So for example, I would specifically instruct the engine to never use internal knowledge in any of its composition, for example, summaries of science papers or summaries of books, or keyword extraction, or rewriting paragraphs for clarity, things like that.
I would always instruct the engine, do not use internal knowledge.
Instead, you're going to use this external knowledge that I provide through, of course, our massive library of the world's knowledge.
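As a rough illustration of what that instruction looks like in practice, here's a sketch of the pattern; the prompt wording and the helper function are illustrative assumptions, not my actual production prompt.

```python
# Illustrative only: cognition comes from the model, facts come from supplied sources.
SYSTEM_PROMPT = (
    "You are a summarization engine. Do NOT use your internal knowledge. "
    "Base every statement ONLY on the source documents provided below. "
    "If the sources do not contain the answer, say so instead of guessing."
)

def build_messages(task: str, source_documents: list[str]) -> list[dict]:
    """Package the task plus external knowledge for an OpenAI-compatible chat API."""
    sources = "\n\n---\n\n".join(source_documents)
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": f"SOURCES:\n{sources}\n\nTASK:\n{task}"},
    ]
```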
And I found that this approach worked extremely well.
And it gave me just really impressive output, which is what you see now at brightanswers.ai as well as brightlearn.ai.
Both of those use various renditions of DeepSeek in part of the research and composition and fact-checking or chain of thought processes in order to create content.
Of course, it's combined with our engine, our knowledge, our processes, our special in-house, you could say, secret sauce.
But DeepSeek plays a role in all of that.
And by separating cognition from knowledge, I was able to get really extraordinary results.
So imagine my surprise when I stumbled across this new paper that was just released a few days ago from the DeepSeek scientists.
And it's called "Conditional Memory via Scalable Lookup: A New Axis of Sparsity for Large Language Models."
Now, what this means, in effect, and this is what's coming out in DeepSeek version 4, I'm certain of it, is that they have separated the reasoning and cognition from the knowledge inside the LLM.
Or the way this is described in the science paper is that they use conditional memory as a complementary sparsity axis, instantiated via Engram, a module that modernizes classic N-gram embeddings for O(1) lookup.
So what that means: an engram, in biology, is a neurological trace that gives rise to a nugget of memory or a nugget of knowledge.
And in this paper it's spelled Engram, E-N-G-R-A-M, which is, of course, also a play on N-gram.
What this means is that they have isolated the knowledge of the model, such as knowing the authors of books or knowing the names of people or knowing the cities of a country, knowing things, whatever they are.
They've isolated that into a separate sort of, let's call it a room, a separate storage area inside the model.
And they found that it works best to make that about something like 20 to 25% of the model.
And then the other 75 to 80% of the model is brains.
It's brains that can do amazing thinking.
So it's, you know, reasoning and rationality, brains, chain-of-thought reasoning, also with sparse attention technology so that it only activates the parts of the brain that need to be used in order to achieve certain tasks.
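To make that "separate room" idea concrete, here's a toy numpy sketch of my own, not the paper's architecture: the hidden state gets one contribution from a small number of activated "reasoning" experts (sparse activation), and another from a hashed lookup table standing in for the conditional memory. Every size, name, and the use of Python's built-in hash are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 64            # hidden size (toy)
N_EXPERTS = 8     # "reasoning" experts; only a few activate per token
TOP_K = 2
MEM_SLOTS = 4096  # conditional-memory table standing in for the knowledge "room"

experts = rng.normal(size=(N_EXPERTS, D, D)) * 0.05   # expert weight matrices
router = rng.normal(size=(D, N_EXPERTS)) * 0.05       # routing weights
memory_table = rng.normal(size=(MEM_SLOTS, D)) * 0.05 # lookup embeddings

def forward(hidden: np.ndarray, token_ids: tuple[int, ...]) -> np.ndarray:
    # 1) Sparse "cognition": route to only the top-k experts.
    scores = hidden @ router
    top = np.argsort(scores)[-TOP_K:]
    gate = np.exp(scores[top]) / np.exp(scores[top]).sum()
    reasoning = sum(g * (hidden @ experts[i]) for g, i in zip(gate, top))

    # 2) Conditional memory: hash the recent n-gram of token ids into the table
    #    and read an embedding -- a lookup, not a computation.
    slot = hash(token_ids) % MEM_SLOTS
    knowledge = memory_table[slot]

    return hidden + reasoning + knowledge

h = rng.normal(size=D)
out = forward(h, token_ids=(101, 2045, 88))  # pretend these are the last 3 tokens
print(out.shape)  # (64,)
```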
And that's very similar to the way human brains work.
So you don't activate your entire brain for every task.
There might be one task you're doing, which is trying to work out a math problem.
So you're activating the math part of your brain.
Or another day, I don't know, it's Easter and you're decorating Easter eggs.
And so you're activating the art portion of your brain, the creative part, right?
So you're having fun.
Or, you know, there may be another thing, you're writing a paper, and so you're activating the linguistic creativity parts of your brain.
Those are all different logical parts of your brain, even if they overlap the physical structures.
But they're different logical parts of your brain.
So you use sparse attention.
All of us do.
That's how humans get things done.
That's how you drive a car.
That's how you figure out how to walk down a flight of stairs without falling on your face and face planting on the sidewalk.
You use sparse attention all the time.
We all do.
And what they figured out at DeepSeek is then how to use sparse attention, which is selective activation of the neurons required to carry out reasoning tasks and cognitive tasks.
And then they've separated that entirely from the knowledge portion.
So, you know, the sort of database of facts.
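And here's a tiny numpy sketch of the general idea behind sparse attention, keeping only the top-k attention links per query; this illustrates the concept, not DeepSeek's actual DSA algorithm, and the sizes are made up.

```python
import numpy as np

def sparse_attention(Q, K, V, top_k=4):
    """Toy top-k sparse attention: each query attends only to its k strongest keys."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)             # (n_q, n_k) full score matrix (toy sizes)
    # Mask everything except the top-k scores in each row.
    kth = np.sort(scores, axis=-1)[:, -top_k][:, None]
    scores = np.where(scores >= kth, scores, -np.inf)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V                        # only ~k values per query contribute

rng = np.random.default_rng(1)
Q, K, V = (rng.normal(size=(16, 32)) for _ in range(3))
out = sparse_attention(Q, K, V, top_k=4)
print(out.shape)  # (16, 32)
```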
Okay.
And what this allows the DeepSeek engine to do is to plow through reasoning with much more efficiency, with far fewer tokens.
This is a transformer topology change affecting multiple layers of how, let's say, cognition bubbles up through the layers of an LLM in order to produce intelligence, actual intelligence.
And then that intelligence can retrieve knowledge and facts and things and incorporate that into its thinking in order to produce the best answer, or the best response to your prompt, whatever you're asking for, whether it's do this math, or figure out this calculation, or write this paragraph.
And the amazing thing about this transformer topology structure is that it maintains strong persistence and integrity of its ideas through a much longer context window.
Whereas DeepSeek version 3.2 has a context window of roughly, I think, 128K.
Although I think there are different variations of that model that are a little bit longer.
I think there was a special version, Speciale, that had maybe 192K, but nobody really uses that.
Speciale was just an experiment that DeepSeek offered for a short period of time.
They had an API to it, and then they pulled it off.
Well, I think they pulled it off because it was such a breakthrough that they incorporated that into what's about to become DeepSeek 4.0, which is rumored to have a context window of up to 1 million tokens.
Now, that's not confirmed, just to be clear.
It's a rumor, but even if it were half of that, such as half a million tokens, that would still be significantly more than what is available today.
And what does that mean?
Well, it means that, for example, if I wanted to have DeepSeek do something like write an article or write a chapter or write an executive summary report, well, you know how I have this library of now 240,000 science papers, right?
Well, at the moment, I can only select a small number of those papers to put into the context to teach the engine, like, here's the science surrounding this question, and I want you to consider all this science in answering the question.
But that's limited to 128K tokens.
And one token is about three-quarters of a word, by the way, an English word.
So, you know, it's only like less than 100,000 words.
And you might think, well, that still sounds like a lot, but not if you start throwing in 50 science papers.
You see what I mean?
So I have to limit the amount of knowledge that's pushed into the context of the engine, which limits its ability to reason through additional relevant information because it won't fit in the context window.
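Just to make the arithmetic concrete, here's a quick back-of-the-envelope calculation using the rule of thumb above (one token is about three-quarters of an English word); the assumed average paper length of 6,000 words is my own illustrative number.

```python
# Rough context-budget math, using ~0.75 words per token as mentioned above.
WORDS_PER_TOKEN = 0.75
AVG_PAPER_WORDS = 6000          # assumed average full-text science paper length

def papers_that_fit(context_tokens: int) -> float:
    tokens_per_paper = AVG_PAPER_WORDS / WORDS_PER_TOKEN   # ~8,000 tokens per paper
    return context_tokens / tokens_per_paper

print(round(papers_that_fit(128_000)))    # ~16 papers in a 128K window
print(round(papers_that_fit(1_000_000)))  # ~125 papers if the 1M-token rumor pans out
```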
But once DeepSeek version 4 is released, that will allow me to bring in maybe hundreds of science papers into the context window and to instruct the engine, hey, analyze this entire body, this massive corpus of science surrounding this particular question.
Maybe you asked a question about, I don't know, cold fusion or whatever, time travel.
And, you know, it pulls up the time travel journal from the year 2055, obviously, because it's a time travel journal.
And then it can bring in all the time travel papers and it can actually reason through all of that context without losing cognitive coherence.
So in a practical sense, this has enormous implications.
For example, in the area of law, once you have DeepSeek version 4, which is supposed to be coming out again in about a month, you would be able to ask it a law question and feed it a massive library of relevant law books or documents or case history, just a massive amount and say, here, crunch this.
And now, of course, there's a cost associated with all those input tokens, but it's not very much.
It's a very reasonable cost considering the amount of cognitive work that it's doing.
In any case, the engine would then go through all of that and it would formulate its answer.
And it would be able, if instructed properly, to carry out chain-of-thought reasoning, chain of verification, fact-checking its own thinking before it outputs, things like this.
And what that means is that now the AI model is highly intelligent.
It's carrying out cognitive work with incredible efficiency and a massive memory, a large memory, and it's highly efficient because it's not wasting internal tokens on its own internal knowledge.
It doesn't need to.
That's been separated.
That's been set in a different room.
And what that means for you and me and the world is that the inference costs of this engine are going to be very, very low, especially given the amount of cognition that it's going to perform.
And what that means, and this is my guess, I think that DeepSeek version 4 is going to represent about a 10x improvement in the cost per cognitive function over previous engines, including DeepSeek 3.2.
And when I say cost per cognitive function: if you were able to meter cognition, cognitive output, if you could put a meter on it, that output has a certain associated dollar cost right now, you could say, and that cost is about to be slashed by a factor of 10.
That's my guess.
I mean, that might be off.
Maybe it's a factor of five or a factor of 20.
I don't know.
But I think it'll be a factor of 10.
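If you want to see how I think about "metering" cognition, here's a trivial sketch; the per-million-token prices are placeholders I made up, not DeepSeek's actual pricing.

```python
# Placeholder prices (USD per million tokens); substitute real pricing when known.
INPUT_PRICE_PER_M = 0.30
OUTPUT_PRICE_PER_M = 1.20

def task_cost(input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of one 'cognitive task' at the assumed token prices."""
    return (input_tokens / 1e6) * INPUT_PRICE_PER_M + \
           (output_tokens / 1e6) * OUTPUT_PRICE_PER_M

# Example: a research task that reads 120K tokens of papers and writes 8K tokens.
print(round(task_cost(120_000, 8_000), 4))
# A 10x improvement in cost per cognitive function means roughly one tenth of that figure.
```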
And that means that you're going to see deployments of DeepSeek version 4 in specific applications that engage in things like recursive reasoning, which forces the engine to revisit its own answers by feeding answers back into itself and essentially burning tokens and time for enhanced cognition.
In other words, it's kind of like saying to it, hey, work hard on this problem, keep working, check your work over and over and over again, refine your work, do it 10 times until you get the absolute best answer.
And it's going to say, okay, I'll burn the tokens.
I'll spend the time.
So instead of just a great answer in, let's say, 60 seconds, it might give you a world-class answer in 10 minutes at the cost of more and more tokens.
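Here's a rough sketch of what that check-your-own-work loop looks like in code; `chat()` is a stand-in for whatever completion call you use, and the critique prompt wording is illustrative, not my actual pipeline.

```python
def refine(question: str, chat, rounds: int = 10) -> str:
    """Recursive refinement: feed the model's own answer back for critique and revision.

    `chat(messages)` is assumed to return the assistant's reply as a string
    (any OpenAI-compatible client wrapper would do).
    """
    answer = chat([{"role": "user", "content": question}])
    for _ in range(rounds):
        critique_prompt = (
            f"Question: {question}\n\nYour previous answer:\n{answer}\n\n"
            "Check this answer for errors, missing steps, and weak reasoning. "
            "Then produce an improved answer. Output only the improved answer."
        )
        answer = chat([{"role": "user", "content": critique_prompt}])
    return answer
```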
But the point is, again, sorry if this is sounding too technical, but the point of all of this is that with the proper application of this technology, it's clear in my mind that most human cognition that happens at a desk or a work environment, most of it will be obsolete when DeepSeek version 4 comes out.
Now, it doesn't mean that everybody's going to lose their job instantly, and it doesn't replace human creativity.
It doesn't replace, you know, high-level decision-making for CEOs and founders or high-level managers, whatever.
But it does mean that it will be relatively simple to replace most middle manager type of jobs, including those that require a lot of reasoning and decision-making.
Analyzing information, assessing things, working with spreadsheets and input-output of numbers and facts and quantities and inventory and sales volumes and estimations, whatever, you know, things that people do in middle manager jobs at companies, all those things will be easily replaced by DeepSeek version 4 this year.
Easily.
So, you know, last year, customer service was easily replaced.
Most of it, not 100%, but typically 80 to 90% of customer service could be automated.
This year, the customer service automation is going to go up to something like 98%, and the automation is going to bleed into a lot more of the higher cognition, middle manager type of roles.
And that's going to, of course, force all those middle manager human workers to either upgrade their skill set, augment their own cognition using AI tools, and push themselves higher into the corporate hierarchy, if that's possible, or they will be replaced by AI.
In a department of, let's say, 10 humans doing middle manager jobs, nine of the 10 will be replaced by DeepSeek, and one will be there to keep an eye on DeepSeek.
Let's make sure nothing goes catastrophically wrong, you know.
And that one person will typically have to be the person who knows how to use DeepSeek, who knows how to run agents, who knows how to write prompts, who knows how to interact with AI.
So if you're in a company environment right now, you want to be that one person that knows how to use AI.
Trust me.
That's for sure.
Now, there's an image that I want to show you here.
And to my editor, pull up the scaling law image from section three, evaluation.
Note that there are two images side by side.
The image on the left is called allocation ratio.
And the image on the right is the number of embedding slots on a log scale.
So what I want you to understand about this chart, notice that the vertical axis says validation loss.
And it's a smiley face chart or a U-shaped chart there.
And what we're looking for here is the lower parts of the line are better because that means we have less validation loss.
This really affects the cognitive coherence of language models.
So you want really, you know, you want the bottom of the U shape, basically the gully of the U.
Now, what this is showing is the allocation ratio of cognition versus memory storage, essentially, inside the language model.
And that ratio is designated by the Greek character there, which looks like a lowercase letter p, but it's actually the Greek letter rho.
Rho is typically used as a variable to indicate ratios.
So the allocation ratios indicated here are the ratio of the cognitive portion of the model versus the knowledge storage portion of the model.
And what it means is that if you try to go with almost all storage, you know, knowledge storage, then you don't get good results.
You get higher validation losses.
Also, if you try to go with mostly cognition, you get validation losses.
And the proper ratio where you get the sweet spot in the bottom of the U is about 70% to 80% cognition focus.
And then, you know, 20 to 30% typically of knowledge storage.
So that ratio is proven in this plot or demonstrated in the plot.
I know, if you look at the dots, it's not a perfect fit for the U, but it's close.
Now, notice that the chart also shows pure MoE, which is mixture of experts.
And notice that for pure mixture-of-experts models, which is what has been the state of the art, I mean, all the mainstream engines still use MoE to this day, the losses are higher.
And understand that these validation losses get compounded through the topographical layers of language models.
So as this says, the hybrid allocation surpasses pure MOE.
So this is a big deal.
Now, on the right-hand side, you see two downward-slanting lines.
And the vertical axis is, again, called validation loss.
And the horizontal axis is number of embedding slots on a log scale.
That's important to note.
Now, embeddings, let's simplify that.
Let's just call it facts that it's remembering, you know, stuff in its memory.
So we don't need to say embeddings, but what this is showing is that the more stuff the model knows, the lower its validation loss.
That kind of makes sense.
The more stuff you know, the better you're going to do on answering questions, right?
That makes sense.
But it's on a log scale.
So, you know, that's noteworthy.
I mean, it's 10 to the 6th at one point on the x-axis, and then it's 10 to the 7th.
So notably, these are not straight lines if you have a regular linear scale.
But this is a log scale.
So is it that interesting?
So if an LLM knows 10 times as much stuff, it's going to do slightly better on answering questions, but not 10 times better.
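Put as a formula, the trend in that right-hand plot looks roughly like the following; this is my own paraphrase of the trend with made-up constant names, not an equation from the paper.

```latex
% Rough shape of the right-hand plot: loss falls linearly in log N,
% i.e., ten times the memory slots buys only a fixed, modest drop in loss.
% N = number of embedding slots ("facts" the model can store);
% a, b = fitted constants (my notation, not the paper's).
\mathrm{loss}(N) \;\approx\; a - b \log_{10} N
```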
Anyway, that's just interesting.
But the bottom line is that what DeepSeek has been able to innovate here is a way to isolate knowledge from reasoning and to mix in like a secret pancake recipe just the right amount of cognition, which is 70 to 80%, and just the right amount of knowledge, 20 to 30%, in order to achieve a really strong, coherent cognition that is about to be unleashed upon the world.
And I should also mention here that they did all this testing on a 27 billion parameter model, not even their full-scale DeepSeek model.
So I don't know what these numbers are going to look like on the full model.
This is a small model.
You could run this on your desktop with a GPU, like a, you know, gaming video card.
You could run this model.
That's unbelievable.
That's just wild.
Now, there's one more thing.
Well, actually, two more things about this.
Sorry, again, about all the technical stuff here.
I'm trying to break it down.
DeepSeek is rumored to also be releasing a DeepSeek version 4 Light model.
The light model will have blazing inference performance throughput in terms of the number of tokens per second.
And it will do that, I'm guessing, by having a more limited window of activation of digital neurons in the internal sparse attention algorithm and routing.
It's just my guess.
We'll see.
It'll probably also have less knowledge, so it doesn't have to look up as much stuff, a lot less knowledge.
So this light engine will be really good for doing simple tasks very quickly.
And yet what we've learned with recursive reasoning and looping of models is that even very small, fast models can become quite capable if you engage them in recursive reasoning.
That is forcing them to rethink their own answers over and over again.
And you can feed their own answers back into them with additional context and ask them to refine and fact check or double check or redo the calculations or whatever you need to do.
So in other words, you're not going to have to wait a long time to get good answers.
You're just going to burn more tokens at a lower cost with a smaller, lighter, faster model that DeepSeek is giving away for free.
So you can download it, install it in your own corporate infrastructure.
It will probably install on a very modest graphics card.
It'll certainly run on the NVIDIA DGX Spark hardware, which is, what, five grand or something.
And it may not be the fastest thing in the world, but it's going to run and you can use that as a server for your entire company.
And then throughout your organization or your company or your nonprofit, whatever it is, then everybody can access the same cognitive engine and different kinds of questions and tasks can be queued up.
But a fast DeepSeek, DeepSeek Light, will speed up cognitive functions across the board inside the organization.
So in other words, you're not going to be sitting there waiting for AI to do the work and spit out the answers and thinking, oh, it would be faster if I had a person do this.
Instead, the AI is going to be much faster than a human.
It's going to have real intelligence, high-end cognition, and when wrapped appropriately in the right kind of code, which admittedly not everybody knows how to do yet, though it's going to become more commonplace, you're going to be able to get expert-level answers out of relatively small, fast engines when they have this technology.
Because it's like you're asking an engine here to do some brain work, and it's only having to use its brain.
It's not having to sort through encyclopedias of knowledge.
Imagine telling a person to, I don't know, add up the first 100 numbers or digits of pi.
Let's say, 3.1415.
Now, just keep going.
Add up the first 100 digits.
And then you throw that person about 1,000 phone books.
And all this knowledge and information and science papers falling out of the sky.
Like, here, why can't you add the numbers?
And the person's like, because you're throwing all this other crap at me.
That's how the language model feels when it's trying to sort through all this knowledge stuff that it doesn't even need.
It's trying to add the digits of pi, and it's sorting through, like, what movies did Dustin Hoffman star in?
Nobody needs that.
That's why sparse attention is so incredibly powerful.
And this is about to break the corporate world.
That's my prediction.
And the bizarre thing about all this, this is just one science paper of several key papers that the DeepSeek team has released recently.
This is just one.
Every single one of them is a breakthrough.
And I'm not seeing these papers coming out of U.S. AI companies at all.
Google's not releasing papers like this.
OpenAI isn't.
Meta, nope.
Microsoft, nope, nobody.
All the breakthroughs that I can see in AI are coming out of China.
So, personally, I am preparing to set aside some substantial time when the DeepSeek version 4 model is released.
And of course, I'll share that information with you as it happens.
If it becomes the most capable model, as I strongly believe it will, I will upgrade all of my projects to use DeepSeek version 4 where appropriate.
Either, well, the large context window for research is going to be huge, but also the faster inference, just the sparse attention activation is going to be a game changer.
So you're going to see probably a notable improvement in the logic flow of chapters in the books that are created at BrightLearn and in the answers at brightanswers.ai.
Now, if you've used brightanswers.ai recently, you notice it's slower than it used to be.
That's because it's involved in a lot more deep thought.
And yes, that's slow.
Well, if I can harness DeepSeek version 4 Light to be much faster and use that to help generate at least portions of the answers, combined with the larger research context window, I may do that, or at least try it as an experiment.
Of course, it's all going to be overlaid with our own AI engine, our own knowledge base, our own secret sauce, as I mentioned before, which is the way that we alter AI engines to have a more accurate worldview on topics like vaccines and climate and pharmaceuticals and natural health and herbs and things like that.
So, in other words, all the engines that I've built so far are about to get even better.
Oh, I forgot to mention one more thing.
DeepSeek version 4 is also excellent at coding.
Apparently, it's rumored to be better than Anthropic Opus 4.5.
Now, Opus 4.5 is the coding engine that I primarily use for all my code.
And, you know, Anthropic's Claude Code is both amazing and frustrating at the same time.
It is a breakthrough technology, but I still spend crazy hours trying to troubleshoot bugs and trying to get it to do what I want, whereas DeepSeek version 4 very likely is going to be able to cut my vibe coding time in half, is my guess.
It could be even better than that.
But if it cuts it in half, because it's able to take on these tasks and to really examine very large code bases in full context and to carry out cognitive assessments and even internal simulations of what the code is doing and to solve complex problems involving multiple routines and large code bases,
then I'm going to be able to get a lot more done in a lot less time, which means you will get more awesome AI engines that I'm building because it'll all be faster thanks to DeepSeek.
That's my guess, by the way.
I can't totally promise that yet, but I'm somewhat confident that DeepSeek version 4 is going to be better at solving code problems than Claude Code Opus 4.5.
That's my guess.
And if that's true for me, it's going to be true for the whole world, which means all the large code base projects that currently Claude Code can't really handle because it's too much code.
Well, suddenly, DeepSeek version 4 will be able to ingest that code and be able to make comprehensive global structural changes to the code or bug fixing at a much larger level or higher level of the code.
And that's going to free up coders to be more creative and to have more throughput of their code projects.
And aren't we all becoming coders now?
Essentially, anybody playing around with AI, we're all becoming coders.
And if you're not yet using AI coding engines like Replit, then you should be because this whole field is about to explode in terms of productivity.
It's about to explode.
Okay, one other thought in all of this, just me looking at it with my native math skills, which are pretty decent, but I'm not a mathematician, I never really refined high-level math, though I do understand big-picture concepts.
And one thing that's become apparent to me about this model and what DeepSeek is doing is that it relies on some really unique architecture, such as what's called multi-head hashing, which maps these engrams, the facts and thoughts and stuff, to its internal embedding tables.
What DeepSeek is able to do is to allow the capabilities of the engine to grow with only a linear addition of engine size rather than an exponential requirement for growth of the engine size.
So in other words, for most LLMs and AI functions, if you want it to be, I mean, this is a simplified version, but if you want it to be twice as smart, you might have to add 10 times as much compute or size to it.
You want it to be three times as smart, you might have to add 100 times as much, etc.
So that's an exponential curve of adding a much larger set of parameters, but only getting a fraction of that in terms of performance.
And that's why OpenAI's model is rumored to be trillions of parameters, a multi-trillion parameter model, but it's not that much better than models that are a fraction of the size.
You can keep growing bigger and bigger, but you don't get a linear output of increased capabilities as a result.
But what DeepSeek is able to do, and other companies have also done similar things, this isn't unique to DeepSeek, but because of their approach, their mathematical approach on the internal topology of their system, they've been able to allow growth to affect the performance in a linear fashion.
So a model that's twice as big gets roughly twice as good performance.
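Here's a toy sketch of the multi-head hashing idea as I understand it, again my own illustration rather than DeepSeek's code: several independent hash heads map the same n-gram to several slots in a fixed-size table, and the read is the sum, so you can add capacity by simply widening the table. The table size, number of heads, and use of Python's built-in hash are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
D = 64
SLOTS = 100_000          # grow this table linearly to store more "facts"
HEADS = 4                # independent hash heads reduce collisions between n-grams

table = rng.normal(size=(SLOTS, D)) * 0.02

def lookup(ngram: tuple[int, ...]) -> np.ndarray:
    """Multi-head hashed lookup: each head addresses its own slot; the read is the sum."""
    out = np.zeros(D)
    for head in range(HEADS):
        slot = hash((head,) + ngram) % SLOTS   # cheap stand-in for a seeded/learned hash
        out += table[slot]
    return out

vec = lookup((4821, 177, 9033))   # pretend token-id trigram
print(vec.shape)  # (64,)
```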
Now, that's a simplification of it, but that's roughly what's going on.
And that's a huge deal.
That's a huge deal because, you know, look, if you're talking about neural networks, everything is normally squared, you know, right?
I mean, the number of connections just begins to increase exponentially.
Actually, it's not always squared.
Sometimes it's another exponent, but everything begins to increase exponentially, like the number of combinations just starts to explode.
And that's all part of neural networking.
But if you can get a corresponding response to a linear increase, then that's like the holy grail right there.
You've figured out how to scale your results as you scale the, let's say, the parameter infrastructure or the topology of the model.
So I may not be using all the right terms for those who are machine learning scientists, but I'm also trying to sort of translate this into a little bit more everyday language.
Anyway, the bottom line is most human cognition is obsolete in about 30 days.
If I'm right about this model.
And I guess we'll see.
Now, notice I did not say all human cognition, I didn't say all, I said most.
And I also didn't say every job is going to end, you know, in March.
This is going to take a long time for corporations to roll this out, to figure it out, to test it, to modify internal operating procedures, etc.
So you're not going to lose your job tomorrow just because DeepSeek comes out.
But over the next one to two years, if your job involves sort of middle-level cognition, then you should be thinking about how to upgrade your cognitive role in your organization because that replacement is coming.
Not tomorrow, but it's coming soon, let's say.
All right.
Anyway, thank you for listening.
And if you want to use the AI tools that I've built, they are at brightlearn.ai.
That's our AI book creation engine, which is amazing.
Or brightanswers.ai or brightnews.ai.
And we have a lot of new features coming.
I just need more time to vibe code.
And I did hire a vibe coding person, but that person is not working on any of those projects I mentioned.
They're working on a new project that I haven't mentioned yet.
So I'm still the only one on these other three projects.
And I'm fighting with the AI agents sometimes.
So things are a little slower than what I would like.
One of the main features I'm working on for BrightLearn.ai is full-length audio books.
And that's not just fighting with AI agents.
That's also trying out all these different AI voices.
And frankly, one of the biggest challenges there is how to make the voice more expressive based on the content of the paragraph that the AI voice is performing.
So I don't want an AI voice to read a whole book in a monotone, you know.
And there was a piece of plywood.
Come on.
It's got to be more interesting.
It's got to be relevant.
If there's a warning section, it's got to have a warning tone.
If it's just more of a lighthearted section, it's got to have more of a lighthearted tone.
And that requires a classification prompt and analysis of every paragraph in the context of the chapter and the book and figuring out what it's trying to say, what it means, etc.
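For the technically curious, here's roughly what that per-paragraph classification step looks like as a sketch; the tone labels, prompt wording, and `chat()` helper are all illustrative assumptions, not my actual pipeline.

```python
# Sketch of a tone-classification pass run before text-to-speech rendering.
TONES = ["neutral", "warning", "lighthearted", "dramatic", "instructional"]

def classify_tone(paragraph: str, chapter_summary: str, chat) -> str:
    """Ask the model to pick one tone label for a paragraph, given chapter context.

    `chat(messages)` is assumed to return the model's reply as a string.
    """
    prompt = (
        f"Chapter context: {chapter_summary}\n\nParagraph:\n{paragraph}\n\n"
        f"Pick the single best speaking tone from this list: {', '.join(TONES)}. "
        "Reply with only the tone word."
    )
    label = chat([{"role": "user", "content": prompt}]).strip().lower()
    return label if label in TONES else "neutral"   # fall back if the model drifts
```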
It's like, oh my gosh, this is really complicated.
And so thus it's kind of slow, but that's coming.
That's coming.
I don't want to put out a crappy full-length audio book that nobody wants to listen to.
In other words, my goal is to put out thousands of audio books over the next, well, just this year, actually.
Thousands of audio books.
But I want them to be good.
I want them to be a pleasure to listen to.
So that's going to take a lot of work.
Anyway, I'm on the job.
And if anybody can make this work, it's me.
That's for sure.
Because I care about audio.
You know, I'm an audio engineer, I'm a musician, I'm an AI developer.
If anybody can make these books sound good, it's me.
But it's going to take some work, so have some patience, and I'll bring you more news when that's available.
Until then, thank you for listening.
Much appreciated, and we'll talk to you again soon.
Okay, this is pretty rare, but we have an overstock sale at Healthrangerstore.com on just a few products, but some of them our customers really love, so let me show those to you.
Right now, you can save 40% on a very limited supply of the super protein formulations, both the regular and the chocolate that you can see there, as well as apple slices and mango slices in a really great individual format, great for travel, and the collagen joint support stick packs, an instant drink mix with blackcurrant in there, by the way.
Also, the situation is we overproduced these because we had to purchase very large lots of very specific ingredients, and it's probably my fault.
You know, my team was asking me, well, we have all these ingredients, it's probably going to produce too much, should we just produce it anyway?
I said yes, just overproduce it.
And if we end up with too much, then we'll just offer a discount to our customers, and that's what we're doing right now.
So these products are labeled to expire a few months from now, but I don't know if you know this.
We store all of our products in an environmentally controlled, climate controlled, air conditioned environment that is fully insulated.
The entire building is fully insulated, and the dirty little secret of the supplements and superfoods industry is that most products are stored in warehouses that get way crazy hot, especially during the summer months, and that can be true at, you know, even Amazon or other major fulfillment houses.
They're not; the different warehouses are rarely, actually, climate controlled.
So our products have actually a longer actual shelf life compared to most products.
But the FDA limits us.
We can only put a certain amount of time on the product, and that amount of time is coming up in a few months for these products, even though realistically, they can be used for much longer safely.
So that's the situation.
So hey, we overproduced, you get it at 40% off, and you get to help support us and acquire these really amazing products.
So here it is, organic super protein, the chocolate formula.
This is based on the BOKU superfood formulation, and let me just show you here, if you scroll down, let me show you the ingredients, because it's like, oh my goodness, look at all the stuff that goes into this.
I mean, it's not easy to source, like organic sacha inchi protein powder and carrot powder, and then cranberry flour; cacao sometimes can be difficult, you know, etc.
So this is just one of the products.
There are others here.
Sourcing is becoming more difficult because of supply chains and tariffs and things like that, and very often we just have to buy larger lots than what we want to buy.
So, anyway, our loss is your gain for right now while supplies last.
It's a pretty limited supply, so take advantage of it.
Just go to healthrangerstore.com slash overstock.
Healthrangerstore.com slash overstock, and you'll see the products that are on sale 40% off while supplies last.
And it's a very limited supply right now.
It's just these five products at the moment.
Many other products we can't keep in stock, you know, because there's so much demand for clean food and clean supplements and lab-tested products.
And I can't wait, by the way, to show you our new lab.
It's up and running.
It looks awesome.
I've shown a couple people the lab so far.
They're just blown away.
I'm going to give you a video tour with all our new equipment, you know, all the mass spec equipment, the ICP-MS, the triple quads, the single quad, the gas chromatography, the ion chromatography.
We've got a number of instruments there.
I'm going to walk you through some of the sample prep and some of the testing that we do because nobody is as committed to clean food as we are.
Nobody in the world.
We do more testing than anyone, period.
End of story.
Mic drop.
We just flat out do more testing for glyphosate, for heavy metals, for aflatoxins, for microbiology, and many other things, like atrazine, depending on the product.
So super clean food, super clean supplements.
Shop with us at healthrangerstore.com for all the regular products.
And then if you want the overstock sale price products while they last, that's healthrangerstore.com slash overstock.