Health Ranger - Mike Adams - AI Capabilities Advancing RAPIDLY Even on the Same Hardware Aired: 2026-04-04 Duration: 11:10 === AI Reviewing Entire Code Base (05:04) === [00:00:03] So here's an AI update for you. [00:00:05] You know, I'm an AI developer and I've built several very popular platforms, and I'm building more things that are going to be really fascinating. [00:00:14] And one of the techniques that I use now, which shows you the capabilities of artificial intelligence, is this: [00:00:24] once I'm partway through a project, I ask the AI engine to tell me what features or improvements would make this project more successful, given the project goals. [00:00:37] So, in other words, I ask it to give me suggestions, and sometimes I ask it to ask me questions. [00:00:45] Like, you know, here, I want to add this feature. [00:00:48] You ask me what you think is relevant about this feature. [00:00:52] Just give me a series of questions, and I'll give you the answers and the details. [00:00:56] Other times, I tell it to come up with new ideas and new features, and to review the entire code base and find anything that's missing, or anything that might make [00:01:07] processing more efficient, more redundant, more resilient to errors or disconnects, things like that. [00:01:14] And I've found that the AI systems that exist today, like Claude Code, for example, are incredibly capable now with this process. [00:01:25] They're very good at coming up with suggestions. [00:01:28] In fact, it was just yesterday I was working on, let me back up: I've rewritten my document processing engine [00:01:37] for classifying and cleaning documents to be used as indexed reference documents for our brightanswers.ai engine, as well as the brightlearn.ai book engine. [00:01:49] And this also powers the research engines for the articles at naturalnews.com, as you probably have guessed, because they're so well researched now.
[00:01:57] You're like, how do they do all that research? [00:02:00] Yeah, AI agents with this incredible, massive document base. [00:02:04] That's how it's actually done. [00:02:06] But anyway, [00:02:07] I decided to rewrite the whole thing, the entire way that document processing is handled, because I was running into some inefficiencies with the old design. [00:02:17] Anyway, so I rewrote the whole thing in about four hours with the help of AI. [00:02:27] That's a project that would have taken six months a year and a half ago, or, heck, even a year ago it would have taken months. [00:02:36] But I got it done in four hours, because I was able to reuse a lot of the existing code, but I had to restructure the whole workflow, et cetera. [00:02:44] Anyway, that took about four hours. [00:02:45] And then after the four hours was done, I asked the AI, I was using Claude Code for this one. [00:02:52] And so I asked Claude, I said, I want you to review the entire code base now, and here are the goals, and I want you to give me suggestions [00:03:00] on what's missing. You know, what have I left out of this? [00:03:03] Now, before, let's say six months ago, it could not have reviewed the entire code base, because the code base was too large. [00:03:10] It didn't have a large enough context window. [00:03:12] It couldn't keep it all in its memory at the same time. [00:03:15] You know, it couldn't really process the whole code base. [00:03:18] Now it can. [00:03:19] So it goes through the whole code base, which for this project is, I don't know, a couple hundred K of Python code. [00:03:26] So it's not massive, but it's not tiny either, right? [00:03:31] So it goes through the whole code base, and it comes back with 14 suggestions. [00:03:36] 14 suggestions, separated into high priority, medium priority, and low priority. [00:03:43] And I went through all 14. [00:03:45] I'm just, you know, reading through, like, oh, yeah, that sounds good.
[00:03:48] I forgot about that. [00:03:49] Oh, yeah, the retries over here. [00:03:51] Yeah. [00:03:52] Oh, there's a file renaming collision problem over here, blah, blah, blah. [00:03:58] And I looked at all 14, and I said, yes, all 14, do it, you know. [00:04:05] So the engine says, okay, sir, you know, we'll do all 14, and it spawns 14 agents, and then it updates all the code; it adds all 14 things. [00:04:16] And then there's another step that I always do after it makes a bunch of changes like that. [00:04:19] I always go back and tell it: review all the changes you just made one more time, to see if you introduced any errors or if you left something out. [00:04:32] And occasionally it will find problems that way. [00:04:37] Not always. [00:04:38] Sometimes it just comes back and says, everything was perfect, no problem. [00:04:41] But every once in a while it's like, oh yeah, I found two bugs, and here they are, boom. [00:04:45] I'm going to fix these. [00:04:47] So, anyway, I went back and checked all 14 features that it added. [00:04:51] Everything was good. [00:04:53] And so then I ran the program, and smooth as silk, you know, smooth as silk. [00:04:59] It's just churning away. [00:05:02] And again, this would have taken months for a human programmer to do, not that long ago. === NVIDIA Hardware Memory Limits (04:57) === [00:05:08] Months. [00:05:09] And I know this because I hired human programmers to do this, you know, a couple of years ago, when I started the whole AI project and the data pipeline processing. [00:05:19] I was paying human programmers to do this. [00:05:22] Now I just use AI, and myself, and it all happens in a few hours, or a few minutes in some cases. [00:05:33] So, anyway, this is where AI is today. [00:05:38] And AI advancements have not slowed down. [00:05:42] They have not slowed down.
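The review-then-verify loop from the code base story above can be sketched as two reusable prompt templates. This is a minimal illustration, not the speaker's actual prompts: the wording and helper functions here are assumptions, and any code-base-aware assistant (Claude Code in this case) accepts equivalent free-text instructions.

```python
# A minimal sketch of the two-step workflow described above.
# The exact prompt wording is hypothetical; the real prompts were not shown.

def build_review_prompt(goals: str) -> str:
    """Step 1: ask for a full code-base review with prioritized suggestions."""
    return (
        "Review the entire code base. Project goals: " + goals + "\n"
        "Suggest missing features or improvements that would make the project "
        "more successful, grouped by high, medium, and low priority."
    )

def build_verify_prompt() -> str:
    """Step 2: after the changes are applied, ask for a self-check pass."""
    return (
        "Review all the changes you just made one more time. "
        "Check whether you introduced any errors or left anything out."
    )

print(build_review_prompt("classify and clean documents for indexing"))
print(build_verify_prompt())
```

The point of the second prompt is the one made above: the self-check pass often comes back clean, but occasionally surfaces bugs the first pass introduced, which is cheap insurance after a batch of automated edits.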
[00:05:44] There have been advancements that make the existing AI inference hardware, the hardware base that's installed around the world, including all of our hardware in my mini data center, as I call it, [00:05:55] 48 workstations. [00:05:57] Our hardware will be able to do more and more with each passing month as more innovation takes place. [00:06:03] For example, Google famously released a paper called TurboQuant, and TurboQuant allows about a six times compression of the KV cache, which is essentially the context cache that's needed for the model to process all your context. [00:06:22] You know, when you paste in a hundred K of text and you ask a question like, here's an entire book, or here's a PDF file, and I want you to find an answer that's somewhere in this book, right? [00:06:37] So you paste in the whole book. [00:06:39] That's a lot of context. [00:06:40] And it turns out that that context takes up loads of memory in the GPU. [00:06:47] It's the memory hog, actually. [00:06:49] I mean, obviously, the model itself takes up memory, but the KV cache takes up even more. [00:06:54] Well, it can, depending on your context size. [00:06:57] And remember that DeepSeek version 4, which is supposed to be coming out soon, has a 1 million token context window. [00:07:06] A million tokens, that's going to burn a lot of memory in your GPU, obviously. [00:07:12] So, anyway, Google comes out with TurboQuant and says, we can reduce that by a factor of six without losing any fidelity in the model. [00:07:21] And through some clever math, it actually works. [00:07:24] And people are now demonstrating that. [00:07:26] So, all of a sudden, not only does the same hardware handle six times as much context, you know, theoretically, [00:07:35] but also looking through that KV cache is much, much faster for the GPU, for the inference process. [00:07:43] It's faster because there's less stuff to sort through.
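To make the memory claim above concrete, here is a back-of-the-envelope KV-cache calculation. The model dimensions are illustrative assumptions (not DeepSeek's actual architecture), and the 6x factor is simply the compression ratio cited above applied to the result.

```python
# Rough KV-cache sizing: the cache stores one key vector and one value vector
# per token, per layer, per KV head, so it grows linearly with context length.
# All model dimensions below are illustrative assumptions.

def kv_cache_bytes(layers: int, kv_heads: int, head_dim: int,
                   seq_len: int, bytes_per_value: int = 2) -> int:
    # factor of 2 = one key vector + one value vector per token
    return 2 * layers * kv_heads * head_dim * seq_len * bytes_per_value

# Hypothetical mid-size model: 32 layers, 8 KV heads, head dim 128, fp16 cache.
full = kv_cache_bytes(32, 8, 128, seq_len=1_000_000)
print(f"1M-token KV cache at fp16: {full / 2**30:.1f} GiB")      # ~122 GiB
print(f"after 6x compression:      {full / 6 / 2**30:.1f} GiB")  # ~20 GiB
```

Even at this modest model size, a million-token cache dwarfs the memory of typical consumer GPUs, which is why compressing the cache alone, with no change to the model or the hardware, changes what the same card can serve.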
[00:07:46] So it's faster and smaller. [00:07:48] It's still the same base model, and it's the same base hardware, but it does more. [00:07:54] And also, in a similar fashion, we've seen some new OCR models that are very, very small, like 1 billion parameter OCR models, that are incredibly good, just ridiculously good, better at OCR than models that were 20 times larger. [00:08:11] TTS models, for text to speech, are also, some of them, getting very good and very fast, et cetera. [00:08:18] So, a lot of improvements are coming on the same hardware. [00:08:22] And that's why demand for high bandwidth memory is actually suddenly, you know, falling, because people are realizing that, hey, we might not actually need as much high bandwidth memory. [00:08:33] We can do more with smaller GPUs. [00:08:36] And this sucks for NVIDIA, obviously, because NVIDIA wants to sell you the largest, most expensive card possible. [00:08:44] And by the way, I had a bad experience with NVIDIA recently. [00:08:48] I had a $9,000 GPU that was faulty; it kept [00:08:56] dying during inference. [00:08:59] And I went through a warranty replacement process with NVIDIA. [00:09:02] And at first, they were helpful, and they were like, you know, send us the serial number, and this and that, and give a photo and proof of purchase and all this stuff. [00:09:10] And they were acting like, we're going to replace it. [00:09:14] And then at the end, they said, no, we're not going to replace it. [00:09:17] Even though it has a three year warranty, they said, you've got to talk to the retailer that sold you this. [00:09:22] I'm like, are you kidding me? [00:09:23] This is a manufacturer's warranty replacement. [00:09:26] So I'm still fighting with NVIDIA over that. [00:09:29] You know, you spend $9,000 on a card, you expect it to work. [00:09:32] And if it doesn't work, you expect them to replace it. [00:09:35] But so far, they have refused. [00:09:36] So NVIDIA sucks for that reason.
[00:09:40] But I mean, I spend a lot of money with NVIDIA, you know, hundreds of thousands of dollars a year. [00:09:46] And you would think that they would treat me like a customer, you know, instead of like a piece of trash, but whatever. [00:09:55] So just be careful with NVIDIA. [00:09:58] A lot of times, their cards don't work. [00:10:01] And you may have to get a warranty replacement. === Free BrightLearn AI Engines (01:05) === [00:10:05] Anyway, just letting you know about all of that. AI is getting a lot more technical and capable. [00:10:13] You can use my AI engines; they're all free. [00:10:16] You can use my AI engine at brightanswers.ai, for example. [00:10:22] That's our deep research engine. [00:10:23] Or you can use our book creation engine at brightlearn.ai. [00:10:27] And there you can also, of course, download books and the audiobooks that we have available now. [00:10:33] They're all completely free. [00:10:35] Downloadable. [00:10:36] That's at brightlearn.ai. [00:10:39] And you can follow more of my podcasts at brightvideos.com, and my articles at naturalnews.com. [00:10:46] So, a lot of AI advancements are coming, and I have more announcements coming up this year as well. [00:10:51] Major things are happening, major improvements. [00:10:53] So, just be ready for that. [00:10:54] Thank you for listening. [00:10:55] Take care. [00:10:59] Start your day right with our organic, hand-roasted whole bean coffee. [00:11:03] Low acid, smooth, and bold. [00:11:04] Lab tested, and ethically sourced. [00:11:07] Taste the difference, only at HealthRangerStore.com.