Sparsity and KV Cache Innovations (00:06:27)
| So what just happened is that the DeepSeek company, which is of course a leading AI developer out of China, appears to be on the verge of releasing their new DeepSeek version 4 model. | |
| And they just kind of leaked it through some code and documentation updates to their GitHub repo. | |
| And what that means is that people have analyzed what they posted to GitHub and found references to some new stuff that, I think, connects the dots on what DeepSeek is about to release. | |
| And if this is correct, it's history unfolding. | |
| This is going to be, I believe, the most significant open source large language model ever released. | |
| And it's going to change everything. | |
| So without getting too super geeky here, let me explain what this looks like. | |
| First of all, it looks like DeepSeek is going to call this Model 1, not just DeepSeek version 4. | |
| But we may be wrong about that. | |
| So be ready for either case. | |
| Model 1 or DeepSeek version 4. | |
| Doesn't matter what it's called. | |
| I don't care what it's called. | |
| But remember a few days ago when I talked about the n-gram structure that was described in a science paper by DeepSeek scientists, and I heard the CEO of DeepSeek may have been one of the authors on that paper. | |
| And these are all super high-level math and computer science engineer type of people. | |
| Very, very bright people. | |
| And remember what I said about how the N-gram approach effectively separates reasoning from memory. | |
| And so what they found out is that if they make the engine about 75% reasoning and about 25% memory, that is, knowledge stored as n-grams, which are essentially human knowledge nuggets like word pairs or concept pairs, things like that. | |
| If they do a 75% to 25% ratio, then the engine is faster and smarter because the reasoning components of it don't have to sort of re-figure out facts and knowledge. | |
| They don't have to recalculate it through all the multiple layers every single time. | |
| So it's faster and smarter. | |
| It's more accurate just by separating these two things. | |
| And again, that's a super simplified version of what they've done. | |
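To make that separation a bit more concrete, here is a purely illustrative Python sketch. It is not DeepSeek's actual design; the lookup table, function names, and example facts are all invented for illustration. The point is simply that a cheap memory lookup can answer some queries without running the expensive reasoning path at all.

```python
# Purely illustrative sketch of separating "memory" (an n-gram lookup table)
# from "reasoning" (an expensive model call). Not DeepSeek's actual design;
# all names and example facts here are hypothetical.

NGRAM_MEMORY = {
    ("capital", "of", "france"): "paris",       # stored knowledge nugget
    ("letters", "in", "strawberry"): "10",      # another cached fact
}

def expensive_reasoning(tokens):
    """Stand-in for running the full model through all of its layers."""
    return f"<answer computed from scratch for: {' '.join(tokens)}>"

def answer(tokens):
    key = tuple(t.lower() for t in tokens[-3:])  # look up the trailing trigram
    if key in NGRAM_MEMORY:                      # memory hit: no recomputation
        return NGRAM_MEMORY[key]
    return expensive_reasoning(tokens)           # memory miss: do the slow path

print(answer(["what", "is", "the", "capital", "of", "france"]))  # fast lookup
print(answer(["explain", "quantum", "entanglement"]))            # slow path
```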
| Also remember that what DeepSeek released in version 3.2 was something called DeepSeek Sparse Attention, or DSA, I think is what they called it. | |
| And there's a term that you want to become familiar with called sparsity. | |
| Yeah, I know. | |
| Sparsity. | |
| It's a weird word. | |
| And sparsity handling means how well the engine is able to narrow the number of digital neurons or vectors that are activated in order to solve your problem or answer your question. | |
| And the fewer, let's just call them neurons, the fewer neurons that are needed to answer the question, the faster it works. | |
| And the leaner it runs on consumer-grade hardware. | |
| Because you don't have to run the whole freaking engine. | |
| You don't need all the hundreds of billions of parameters in order to answer simple questions like, you know, how many letters are in the word strawberry? | |
| You only need a little tiny narrow little low IQ part of it. | |
| So that property of only activating a small slice of the network is called sparsity, and the techniques that manage it are called sparsity handling. | |
| And sparsity handling is referenced in the new documents that were just leaked about DeepSeek. | |
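For readers who want to see the general idea in code, here is a minimal NumPy sketch of top-k sparse attention, where each query attends to only a handful of keys instead of all of them. This shows the generic concept of sparsity, not DeepSeek's DSA algorithm specifically, and the sizes are arbitrary.

```python
# Minimal sketch of top-k sparse attention: each query attends only to its
# k highest-scoring keys instead of all of them. This illustrates the generic
# idea of sparsity, not DeepSeek's DSA implementation.
import numpy as np

def sparse_attention(q, k, v, top_k=4):
    scores = q @ k.T / np.sqrt(q.shape[-1])              # (n_q, n_k) scores
    # keep only the top_k scores per query, mask the rest to -inf
    kth = np.partition(scores, -top_k, axis=-1)[:, -top_k][:, None]
    masked = np.where(scores >= kth, scores, -np.inf)
    weights = np.exp(masked - masked.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v                                    # weighted sum of values

rng = np.random.default_rng(0)
q, k, v = rng.normal(size=(2, 8)), rng.normal(size=(16, 8)), rng.normal(size=(16, 8))
out = sparse_attention(q, k, v, top_k=4)
print(out.shape)  # (2, 8): same output shape, but only 4 of 16 keys used per query
```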
| Now, here's something that's really interesting. | |
| If you've run inference on large language models, you know about something called the KV cache. | |
| KV stands for key-value. | |
| Put simply, it's the way the engine stores the attention keys and values it has already computed for earlier tokens, so it doesn't have to recompute them while processing your prompt. | |
| And if you ever want to understand how important the KV cache is, if you're running something like LM Studio, turn off KV cache offload to the GPU and force the KV cache to live in your computer's system RAM instead. | |
| And all of a sudden, your model's performance slows down by about 10 times. | |
| You know, it's 10 times slower. | |
| What happened? | |
| Well, the KV cache now isn't immediately fetchable from the GPU's high-bandwidth video RAM, or VRAM as it's called. | |
| And as a result, performance totally sucks. | |
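Here is a toy NumPy sketch of why the KV cache matters: during decoding, the keys and values for earlier tokens are computed once and reused, and where that growing cache lives (VRAM versus system RAM) is exactly what drives the slowdown described above. This is a simplified single-head illustration, not any particular engine's implementation.

```python
# Toy sketch of KV caching during autoregressive decoding (NumPy, single head).
# Keys/values for past tokens are computed once and reused, so each new token
# does a fixed amount of work plus one attention pass over the cache, instead
# of re-running the whole sequence. Where this cache lives (VRAM vs system RAM)
# is what makes GPU offload such a big speed difference.
import numpy as np

d = 8
rng = np.random.default_rng(0)
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))

k_cache, v_cache = [], []          # grows by one entry per generated token

def decode_step(x):
    """x: embedding of the newest token, shape (d,)."""
    q = x @ Wq
    k_cache.append(x @ Wk)         # cache this token's key ...
    v_cache.append(x @ Wv)         # ... and value, never recomputed again
    K, V = np.stack(k_cache), np.stack(v_cache)
    scores = K @ q / np.sqrt(d)
    w = np.exp(scores - scores.max()); w /= w.sum()
    return w @ V                   # attention output for the new token

for _ in range(5):
    decode_step(rng.normal(size=d))
print(f"cache now holds {len(k_cache)} key/value pairs")  # 5
```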
| Well, the KV cache is actually strongly related to the n-gram approach that, again, separates knowledge from reasoning. | |
| And the fact that the leaked changes point to key advancements in restructuring the key-value cache signals a major architectural change in the upcoming DeepSeek model, one that suggests it has incorporated the concepts described in that n-gram paper. | |
| It has already incorporated that into its engine. | |
| And roughly a month from now, around late February, give or take, we're going to see probably the most architecturally advanced open source language model that's ever been created. | |
| And I think it's going to be a game changer. | |
| Now, the third thing that showed up in these leaked updates, and this is really exciting, this is probably the biggest deal of all, is that they started talking about the model natively being able to operate for inference, or decoding, in FP8, or floating point 8. | |
Spark Station Hardware Prospects (00:05:37)
| Now, FP8 means the model's numbers are stored in an 8-bit floating-point format, so each weight or value takes a single byte. | |
| See, these models are natively trained in a 16-bit floating-point data format. | |
| So FP16, of course, takes twice as much storage space as floating point 8. | |
| And you only really go to an FP8 quantization if you're planning on having a model that can be run on lower-end hardware, which means the kind of hardware that perhaps you and I have. | |
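As a rough back-of-the-envelope check on that storage claim, here is a tiny sketch that computes weight storage at different precisions. The parameter counts are hypothetical examples, and real memory use is higher once you add the KV cache and runtime overhead.

```python
# Rough back-of-envelope weight storage at different precisions.
# Parameter counts are hypothetical examples; real memory use is higher
# because of KV cache, activations, and runtime overhead.
def weight_gb(params_billion, bits_per_weight):
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9  # gigabytes of weights

for params in (70, 235, 671):                  # example model sizes, in billions
    for bits, name in ((16, "FP16"), (8, "FP8"), (4, "FP4")):
        print(f"{params:>4}B @ {name}: ~{weight_gb(params, bits):,.0f} GB")
```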
| In other words, sort of pro-sumer hardware. | |
| Even though it's a massive model, it's possible that with FP8, we could run it on something like the NVIDIA DGX Spark. | |
| That machine has 128 gigabytes of unified memory. | |
| And those of you who have one of the high-end Macs, the Mac Studio machines with up to 512 gigs of unified memory, you can probably run FP8 versions of this model. | |
| And then, probably very quickly, the organization known as Unsloth will put out something like an FP4, or 4-bit, quantized version of the model that might, just might, fit on consumer-grade GeForce RTX 5090 cards, which run in your desktop computer. | |
| You have to have a kind of a beefy computer for it, by the way. | |
| I mean, you need a power supply that alone has to provide over 1,000 watts. | |
| So it's not for just some simple low-end desktop. | |
| But I've got one running right here. | |
| I'm looking at it right now. | |
| In fact, I've got a bunch of these cards running all the science papers analysis. | |
| It's been going for months. | |
| I'm heating my building with the heat off of these 5090 cards. | |
| That's why I love winter in Texas because I don't have to pay any heating bills at all. | |
| I'm just heating with GPUs. | |
| But in the summer, oh man, I pay the price with all the extra air conditioning, you know? | |
| But the 5090s put out a lot of heat. | |
| But they've got 32 gigs of VRAM, and I'm wondering if I'll be able to run the new DeepSeek Model 1, some kind of simplified version of it, a 4-bit quant, or even, I mean, I've heard of some quants going down to 2 bits, or even dynamic loading of layers, which sounds really slow to me. | |
| But anyway, there could be some ways to run this on consumer-grade hardware. | |
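Turning that around, here is a hedged sketch of the fit check people will actually do: given a memory budget, how big a model could fit at each quantization level? The memory sizes are the ones mentioned here, and the 15% headroom figure is an assumption, not a measured number.

```python
# Inverse back-of-envelope: given a memory budget, what is the largest model
# (in billions of parameters) whose weights could fit at a given quantization?
# Budgets below are the ones mentioned in this discussion; the 15% headroom
# for KV cache and runtime overhead is an assumption, not a measured figure.
BUDGETS_GB = {"RTX 5090": 32, "DGX Spark": 128, "Mac Studio": 512}

def max_params_b(mem_gb, bits, headroom=0.85):
    return mem_gb * headroom * 8 / bits          # billions of parameters

for name, mem in BUDGETS_GB.items():
    sizes = ", ".join(f"{bits}-bit: ~{max_params_b(mem, bits):.0f}B" for bits in (8, 4, 2))
    print(f"{name:10s} ({mem} GB): {sizes}")
```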
| That's a big deal. | |
| And the other thing that's about to happen is that NVIDIA is about to release its Spark Station hardware, or more precisely the DGX Station, built around its state-of-the-art Grace Blackwell superchips, reportedly the GB300. | |
| It's state-of-the-art Blackwell silicon in a workstation format with a huge pool of memory for the GPU. | |
| Reportedly up to 784 gigabytes. | |
| Now, those workstations are going to cost, could be $30,000, $40,000, $50,000. | |
| But they will replace server hardware that used to cost $150,000 or $200,000 easily. | |
| And so the reason I'm saying all this is because it's very likely that you'll be able to take this open source DeepSeek Model 1 that China is releasing, and then you'll be able to buy a piece of hardware that, again, might be 50K, which that's a lot for an individual, but not so much for a business. | |
| I mean, think about what I spend on just lab science equipment. | |
| I mean, is my company going to buy one of these Spark stations and run DeepSeek locally? | |
| Yeah, absolutely. | |
| On day one, I am. | |
| Because then we can run it locally. | |
| The full model. | |
| We can run it locally and we can do all the inference and all the calculations and all the writing and everything that we need, all the reasoning and logic. | |
| We can do it without sending anything out through the cloud anywhere. | |
| We can run it locally in our own data center. | |
| And of course, we have our own data center. | |
| So, you know, racks of systems that run our content ecosystem and all of our social media sites and things like that. | |
| So I'm going to be doing that. | |
| And I will keep you posted if there are versions of this that can run on consumer-grade hardware. | |
| Now, the reason all this matters is because the context window of this new DeepSeek model is reported to be rather massive. | |
| We don't have an official confirmation from the company yet, but many people suspect that it could support either 512,000 tokens or even, according to some rumors, up to a million tokens. | |
Missed Opportunity: Bananas Across the Stack (00:09:48)
| The current standard of DeepSeek 3.2 is 128,000 tokens, which is kind of limiting, actually. | |
| Even when I try to use it for certain kinds of research posts where I'm shoving in like 100 excerpts from science papers and books, I can blow through that 128K pretty quickly, believe it or not. | |
| It sounds like a lot, but you can use it up if you're throwing in large context. | |
| So if it has a much larger context, then what this means, oh, and I forgot to mention it's also rumored to be very good at coding. | |
| So writing code, writing almost any kind of code. | |
| It can write Python, it can write JavaScript and Node.js code, it can handle APIs. | |
| It can write in any language, including full stack languages for web applications or even mobile apps. | |
| And of course, static HTML pages, obviously. | |
| But what this means is that this could be the very first AI model that's called a repo-level coding system. | |
| Repo means repository, a repository of code. | |
| Typically refers to a full stack of code. | |
| A full stack means it's got all the different chunks of code for the different components that make up an application. | |
| Like, for example, let's talk about, let's say, you know, Uber. | |
| So Uber has code that runs the Uber website. | |
| And then there's other code that runs the Uber app. | |
| And then there's more code that runs the Uber backend that calculates how much they should gouge you for the fare, you know. | |
| And then there's more code that runs the revenue share and then the accounting and the payouts and then, you know, routes calculation and monitoring and complaints and this and that. | |
| All of this is written in different types, different languages, different codes. | |
| Some of it's Python, some of it's whatever. | |
| And some of it's front end, some of it's back end, okay? | |
| That code base could fit within 1 million tokens, possibly. | |
| I mean, I don't, maybe Uber's code is bigger than that, but there could be a lot of companies that their entire code base could fit within a million tokens. | |
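If you want to sanity-check that for your own project, here is a rough sketch that walks a repository and estimates its token count using the common "about four characters per token" rule of thumb. That ratio is only an approximation, and code often tokenizes less efficiently than prose, so treat the result as a ballpark figure.

```python
# Rough estimate of whether a code base fits in a given context window.
# Uses the common ~4 characters-per-token rule of thumb, which is only an
# approximation (code often tokenizes less efficiently than prose).
import os

CODE_EXTS = {".py", ".js", ".ts", ".html", ".css", ".sql", ".go", ".java"}

def estimate_repo_tokens(root, chars_per_token=4):
    total_chars = 0
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            if os.path.splitext(name)[1] in CODE_EXTS:
                path = os.path.join(dirpath, name)
                try:
                    with open(path, encoding="utf-8", errors="ignore") as f:
                        total_chars += len(f.read())
                except OSError:
                    pass                       # skip unreadable files
    return total_chars / chars_per_token

tokens = estimate_repo_tokens(".")             # point this at your repo root
for window in (128_000, 512_000, 1_000_000):
    verdict = "fits" if tokens <= window else "does not fit"
    print(f"~{tokens:,.0f} tokens vs {window:,} window: {verdict}")
```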
| And what that means is that once this new DeepSeek model is available, you could put it to work on your whole code base, especially if you're running it locally and you don't want to share your code over some API in the cloud. | |
| You could give it your entire code base as part of a prompt, even hundreds of thousands of lines of code, and say, hey, add this feature. | |
| And I want you to add it to the app. | |
| I want you to add it to the website. | |
| I want you to handle it in the back end. | |
| I want you to update the database. | |
| I want you to, you know, have the fallbacks and the checking and the retries and all this stuff. | |
| I want you to add this feature across the entire stack. | |
| And because of this very large context window, DeepSeek, this is rumored, would be able to handle all of those changes across the full stack of code. | |
| So what this means is that the AI coding can take on a higher level function, more of an architect or a supervisor role over large code bases. | |
| This is a big deal because right now AI is good at small projects and it has a lot of trouble with larger projects. | |
| Even I ran into that myself with the Bright Learn engine. | |
| When it was small, everything was great. | |
| And then it got larger, like tens of thousands of lines of code. | |
| And then I tried to do a database switchover. | |
| I mentioned this to you a few weeks ago. | |
| And the AI agent like utterly failed to automate that. | |
| It did a horrible job with that. | |
| And I had to go through and, you know, recheck it. | |
| I had to keep telling it over and over and over again. | |
| Oh, you missed this and you missed that and you missed this. | |
| Check this. | |
| And it's like, oh, really? | |
| I didn't know you meant everything. | |
| And that's going to change with DeepSeek version 4. | |
| Now, importantly, this means that the OpenAI company, run by Sam Altman and the CIA and a bunch of spooks and globalists and evil people, in my opinion, that OpenAI is basically going to be obsolete. | |
| And that's a good thing. | |
| That's a good thing. | |
| And OpenAI will probably lobby the Trump administration to try to ban DeepSeek Model 1. | |
| They will. | |
| And in fact, the U.S. deep state, the intelligence community, the CIA, et cetera, is already pushing a lot of false rumors about DeepSeek. | |
| For example, have you heard the rumor that, oh, you shouldn't use DeepSeek because China will get all your data? | |
| Yeah, that's nonsense. | |
| It's an open source model. | |
| You run it locally. | |
| Nobody sees your data other than you. | |
| Even if you run inference on servers in the USA. | |
| China doesn't see that data. | |
| They released it as open source. | |
| There are no trackers in it. | |
| It can't even possibly function that way. | |
| That's not possible because there's no executable code in the language model. | |
| It's just a collection of vectors. | |
| Really? | |
| That's all it is. | |
| It's a bunch of safetensors files. | |
| There's no executable code in it at all. | |
| So the people that are pushing this rumor, though, oh, don't use DeepSeek. | |
| China might be spying on you. | |
| That is 100% CIA bullshit propaganda. | |
| 100%. | |
| And it relies on people who don't have technical knowledge to believe that. | |
| But if you have technical knowledge, you know that's not even possible. | |
| It would be like saying, you know, like Ecuador is spying on you through the bananas you bought at the grocery store. | |
| Like, wait a second, I just bought these bananas. | |
| I'm eating these bananas. | |
| Like, how does Ecuador spy on my bananas? | |
| How do they know what I'm eating? | |
| Well, they don't because it's not possible because the banana doesn't report to Ecuador. | |
| Okay. | |
| DeepSeek doesn't report to China. | |
| It's an open source model. | |
| You can take it and put it anywhere you want. | |
| You can use it offline. | |
| Like literally yank the Ethernet cable, use it locally and have no internet access at all. | |
| And it still works. | |
| Oh, well, how is that possible if it's reporting to China all the time? | |
| Yeah, because that's just propaganda. | |
| It's not reporting to China. | |
| But it'll be funny if the U.S. government or Trump tries to ban DeepSeek. | |
| Now, I think they have banned it in government offices already. | |
| Even though, a year ago, their R1 was arguably the best reasoning model in the world. | |
| But the government spread a bunch of rumors and then they tried to ban it all across government offices. | |
| Even the state of Texas, which is run by people who are technologically illiterate, I have to say. | |
| They don't know anything about technology. | |
| And so they banned DeepSeek in the Texas offices, which is why those offices are very inefficient because they won't use the best AI technology to get things done. | |
| They want to use crappy tech that's worse than DeepSeek, you know? | |
| But anyway, I will use DeepSeek because I understand it and I understand this is a game changer. | |
| And I'm going to use DeepSeek to benefit you. | |
| I'll use it where it's appropriate. | |
| I mean, I use lots of different models in our AI backend. | |
| You know, I use Mistral out of France. | |
| I've used Qwen before. | |
| I use a lot of different models and I use our own in-house model that we built based on modifying base models. | |
| So I'll use a bunch of models, and maybe I'll customize DeepSeek and run the abliteration technique on it, do a mind wipe and replace it with all our knowledge. | |
| And who knows? | |
| There's lots of things I could do. | |
| But I'll use DeepSeek where it's appropriate to support freedom and liberty and to have an America first stance on all of this. | |
| But it's really, I'm more like humanity first. | |
| I'm not even, even though I love America, I'm an American. | |
| It's not my focus to just talk about America. | |
| My focus is I want to educate all of humanity. | |
| I want to uplift people and free people all over the world across multiple languages and cultures and different nations. | |
| So yeah, I'm humanity first. | |
| And that's what I'll use the technology to achieve. | |
| In the meantime, OpenAI is going to be obsolete, I believe, unless they've got some card up their sleeve. | |
| And maybe they do. | |
| I mean, they've got tons of money, that's for sure. | |
| But how much you want to bet? | |
| Now, I don't own any stocks at all. | |
| I don't own any AI stocks or any other stocks. | |
NVIDIA Stock Surge (00:04:01)
| Believe it or not, I don't even own silver mining stocks. | |
| I should, but I'm just too busy. | |
| I own silver, obviously, but I don't own any stocks. | |
| So I'm not here to push some stock opinion on you. | |
| But if I were going to bet on something, I would bet that when DeepSeek version 4 gets released, I would bet that OpenAI stock prices crater. | |
| I would also bet that NVIDIA stock prices skyrocket. | |
| Because once it becomes apparent that you'll be able to use DeepSeek on relatively modest hardware to have all your AI inference completely free other than just electricity costs, and that you don't need to pay these big bills to OpenAI or Gemini or Microsoft or whatever, then everybody's going to rush out and buy more NVIDIA hardware. | |
| Oh, by the way, the prices are going up big time. | |
| Big time. | |
| I mentioned that previously. | |
| I think NVIDIA knows what's coming. | |
| I was paying $2,400 for the 5090 cards. | |
| And now, actually, let me check. | |
| All right, let's see what they are. | |
| What? | |
| No. | |
| Wait a second. | |
| This is really weird because these were $3,500 like 10 days ago. | |
| And now I'm seeing them listed for $1,000. | |
| They're only $1,000? | |
| Okay, well, I'm going to buy some. | |
| Oh, my God. | |
| This is like a sale. | |
| It's like a 65% off sale because these are rumored to go to $5,000 in February. | |
| And right now, they're $1,000. | |
| I can't, I'm genuinely astonished because I like, are these counterfeit or something? | |
| What the heck? | |
| How are these $1,000? | |
| That's crazy. | |
| Hold on. | |
| I got to shop for two of these because I've got two workstations sitting here that are lacking high-end cards. | |
| I'm running some crappy low-end cards in them. | |
| I need to upgrade these. | |
| Hold on, let me get that done. | |
| Okay, I feel like that's a steal. | |
| I don't know what happened. | |
| I feel like I just took advantage of some kind of glitch or something, because I just bought two of the 5090 cards for $1,000 each. | |
| They're going to be $5,000 each. | |
| So I just bought for $2,000 what's going to be $10,000 in about 10 days. | |
| And you know why, by the way? | |
| These cards use the GDDR7 RAM, which is a high-speed RAM. | |
| High-speed RAM is bottlenecked in the supply chain badly. | |
| And it's needed for all the high-end systems for high-speed inference. | |
| For things like what we were just talking about, the KV cache, for example. | |
| You've got to have high-speed RAM on the board, right next to the Blackwell chip, to talk to the GPU. | |
| You got to have high bandwidth between the RAM and the GPU. | |
| Otherwise, your inference gets really slow. | |
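Here is a hedged back-of-the-envelope on why that bandwidth matters: in the bandwidth-bound regime, generating each token means streaming roughly the active weights from memory, so tokens per second is about bandwidth divided by bytes read per token. The bandwidth figures below are approximate public numbers and the model sizes are placeholders.

```python
# Back-of-envelope, bandwidth-bound decoding speed: each new token has to
# stream roughly the active weights (plus KV cache) from memory, so
# tokens/sec is about bandwidth / bytes-read-per-token. Bandwidth figures
# are approximate public numbers; model sizes are placeholder examples.
def tokens_per_sec(active_params_b, bits, bandwidth_gb_s):
    bytes_per_token = active_params_b * 1e9 * bits / 8
    return bandwidth_gb_s * 1e9 / bytes_per_token

SCENARIOS = {
    "GDDR7 GPU (~1,800 GB/s)": 1800,
    "Dual-channel DDR5 (~90 GB/s)": 90,
}
for label, bw in SCENARIOS.items():
    for active_b in (8, 37):                 # e.g. a dense 8B model or a 37B-active MoE
        rate = tokens_per_sec(active_b, 8, bw)
        print(f"{label}, {active_b}B active @ 8-bit: ~{rate:.0f} tok/s")
```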
| So this GDDR7 RAM is in horrible short supply right now. | |
| In fact, I think it's completely sold out through about 2027, the end of 2027. | |
| So I don't know what's going on with the pricing here, but I feel like I just got a steal. | |
| Anyway, whatever. | |
| Sorry about that tangent. | |
| Just remember, if you buy one of these cards, you need a big system. | |
| Folks, I can't emphasize enough how important it is to decentralize so you can bring as much compute to your local system as possible. | |
Open Source Models for Privacy (00:05:21)
| And there's going to be a lot of censorship coming up. | |
| There's going to be a lot of attempts to ban certain models, maybe DeepSeek, like we're talking about. | |
| It's going to be a lot of efforts to try to control the way people use AI. | |
| And the best workaround for that is use open source models, have your own hardware running on your own local systems. | |
| Yeah, you'll burn a lot of electricity, probably. | |
| I mean, for sure, but then you have total control and there's no censorship. | |
| And then also, of course, you'll have total privacy. | |
| And then eventually, when I'm able to finally get the centralized downloader available for Brightlearn.ai, then you'll be able to download a massive library of thousands of books free of charge to your local system. | |
| And then you'll be able to actually use your local AI to ask questions of those documents, of the books, locally. | |
| It'll be awesome. | |
| It's like ChatGPT on your desktop, but better, and for free. | |
| You know, literally for free. | |
| Open source, man. | |
| This is what I'm talking about. | |
| You know, you buy the card once, which for some reason is only $1,000 now. | |
| You buy the card once. | |
| You buy the computer once. | |
| And then the model's free. | |
| And, you know, you're good. | |
| Oh, the software you want to use to run local inference is called LM Studio. | |
| You can find it at lmstudio.ai; LM stands for language model. | |
| And it's also free for non-commercial use. | |
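If you'd rather script against it than chat in the app, LM Studio can also expose a local OpenAI-compatible server (you have to start it in the app; it defaults to port 1234). Here is a minimal sketch, where the model identifier is a placeholder for whatever model you have actually loaded.

```python
# Hedged sketch of talking to LM Studio's local server from a script.
# LM Studio can expose an OpenAI-compatible endpoint (default port 1234)
# once you start the local server in the app; the model name below is a
# placeholder for whatever model you have actually loaded.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="not-needed")

response = client.chat.completions.create(
    model="local-model",   # placeholder; use the identifier LM Studio shows
    messages=[{"role": "user", "content": "How many letters are in 'strawberry'?"}],
    temperature=0.2,
)
print(response.choices[0].message.content)
```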
| Oh, and you know, while I'm at it, I might as well just tell you that the best operating system to run on this is actually Linux Mint. | |
| Linux Mint. | |
| So you just go to linuxmint.com. | |
| You download that. | |
| It's free. | |
| It's open source. | |
| Follow the instructions. | |
| It'll put it on a thumb drive. | |
| Okay. | |
| And then you boot up your HP system from the thumb drive. | |
| And, you know, you have to hit a key like F1, F2, F12, or Delete during boot to bring up the BIOS or boot menu; it varies by manufacturer. | |
| And then you select Linux Mint. | |
| You install Linux Mint. | |
| And then, you know, you get Bill Gates and Windows and all that bullshit off your system because Windows is just going to spy on you. | |
| It does spy on you constantly. | |
| And it slows everything down with all the spying, it turns out. | |
| Anyway, you install Linux Mint, and then you download LM Studio and you run it in Linux. | |
| It runs great. | |
| It runs better. | |
| It's so much better, actually. | |
| So I use Linux Mint on a bunch of systems. | |
| It's based on Ubuntu, which in turn derives from Debian. | |
| It's a really good distribution of Linux. | |
| And it's got a great user interface. | |
| Very easy to use, super easy to use. | |
| And then if you have any problems with Linux Mint, just install Claude Code. | |
| If you have any problems on there, you just ask Claude Code. | |
| Like, hey, I have this problem with Linux Mint. | |
| It's not doing the thing here. | |
| And Claude will tell you what to do, and you're done. | |
| So these are really valuable instructions, by the way. | |
| So if you want to get into this, just seriously follow what I just said there. | |
| You're going to have a really great setup. | |
| And at the end of the day, you're not going to have Windows. | |
| You know, screw Windows. | |
| Yeah, screw Bill Gates, right? | |
| You're not going to have Windows. | |
| You're not going to have to use some AI model in the cloud. | |
| You're going to have local inference with absolute security and privacy, control over your own data locally. | |
| You're going to have this kick-ass graphics card on a pretty decent workstation. | |
| You don't have to overpay for something new because you don't need a new Windows 11 operating system. | |
| You're not going to use it anyway. | |
| You're going to override it with Linux. | |
| And this thing's going to work for you. | |
| It's going to work great. | |
| Because Linux Mint actually has good drivers for these cards. | |
| CUDA, the NVIDIA drivers, it's all worked out. | |
| The Linux community is much smarter, much better than any Windows nonsense that's out there. | |
| So that's why I don't mess around with Mac. | |
| I know there's a lot of Mac fans out there. | |
| I don't mess around with Mac because I just go to Linux. | |
| Screw Windows. | |
| You don't need Windows for anything. | |
| Linux does it all and it's all free. | |
| Okay, so if you enjoy this information, check out all the free AI tools that I've made available. | |
| And they are at brightlearn.ai. | |
| That's our free book creation engine. | |
| You'll love it if you haven't used it already. | |
| We also have brightnews.ai, which is our news trends analysis engine. | |
| I've got some surprise new features coming there. | |
| And then we have, oh, brightanswers.ai. | |
| And brightanswers.ai is our AI research engine where you can ask it any question you want about anything. | |
| And it will do deep research and give you amazing answers. | |
| And I forgot to tell you, we now have 340,000 science papers indexed in our in-house data set that is used by all of those engines in order to understand your prompts, to do research for your books, to understand news trends and analysis and lots of things. | |
World's Largest Content Library (00:03:32)
| So, you know, I've built up the world's largest in-house content library of curated science papers and curated books, over 50,000 now, plus millions of pages of articles, interviews, spoken podcasts, and conference transcripts, just millions of pages of amazing things. | |
| And that's all part of the data repository that we uniquely have in-house that's used by the AI engines that I built and have made available for free. | |
| So you can use those platforms and those engines completely free at those websites I just mentioned. | |
| And finally, if you want to support us, because it does cost, believe it or not, you know, a lot of money to do this. | |
| If you want to support us, shop at healthrangerstore.com, where you'll have lab-verified, ultra-clean foods, superfoods, nutritional supplements, personal care products, and home care products that are just pristine in their formulations. | |
| And we have massive laboratory testing. | |
| And I already have on my schedule next week. | |
| I'm filming in the laboratory to give you a new tour. | |
| I think I'm filming that towards the end of the week. | |
| So it'll be the following week when I bring you that video. | |
| I'm going to give you a walkthrough of our new lab with all the instruments that we use for all the different tests. | |
| So that's coming. | |
| You'll get to see, because we have a food science lab that is larger than what most universities have, as you will soon see. | |
| In fact, I mean, you know, we have PhDs that come in because they want to do method validation in our lab. | |
| And they're like, oh my God, this is amazing. | |
| This is better than the lab at this whatever university. | |
| And I'm like, yeah, I know, I know, because we're serious about clean food. | |
| You know, we really test for lots of contaminants because we have clean food. | |
| Anyway, you can get all that clean food at healthrangerstore.com. | |
| And by doing so, not only will you be giving yourself amazing nutrition, but you'll also be supporting us with whatever profits we manage to eke out of this system. | |
| Whatever profits we have, we're reinvesting that into this AI technology and all these platforms for decentralization of human knowledge, spanning multiple languages and multiple medium formats, including audio books and content and coming up soon, video content. | |
| And it's always going to be free. | |
| So that's our commitment to you. | |
| You support us by shopping at healthrangerstore.com and we support you in whatever way you want to learn or express yourself with all these technology platforms. | |
| And believe me, I have only just begun. | |
| There's much more coming, especially once I get my hands on DeepSeek and I get to install these two 5090 cards I just ordered. | |
| Get out the Dremel tool, you know, plug those suckers in, start running some faster inference. | |
Pink Himalayan Salt (00:00:25)
| I got a lot of books to clean. | |
| So anyway, there you go. | |
| That's what's happening. | |
| Thanks for listening. | |
| I'm Mike Adams here, the AI Adventurer for today. | |
| Thank you for listening. | |
| Take care. | |
| Pink Himalayan salt. | |
| One of the purest and healthiest salts on earth. | |
| Non-GMO, certified kosher, lab tested and trusted. | |