The collapse of COMPETENCE among humans, and their coming replacement with AI
We are living in an era characterized by the collapse of competence.
And you've no doubt noticed this in almost anything that you do where you try to interface with a company, a receptionist, a bank, a medical office, the government, right?
Anytime you try to interface with anybody in society today, a car mechanic, a grocery store manager, they demonstrate complete incompetence.
They have no idea what they're doing.
Sometimes I call this idiocracy, but it is the collapse of competence.
And it has reached to the highest level of our judiciary, where we have a Supreme Court justice, Katanji Jackson, who's a black woman, and she was the DEI nomination under Biden,
whose legal argument in the most recent notable court decision, her legal argument was so stupid that the rest of the court wrote in their majority opinion, essentially, she's too stupid to even understand the law.
She's so tarted, we're not even going to bother responding to her arguments.
Now, that's my paraphrase, but that's essentially what they wrote.
They only wrote like two sentences or something to dismiss Katanji Jackson's entire argument, which is all just nonsense, because she's literally retarded.
Now, it's just like Democrats to put in women on the Supreme Court based entirely on their skin color and their gender, even if they are retarded.
I mean, we have Sotomayor as well.
You know, there's another semi-retarded pick right there.
To the point where for those justices, we are much better off asking an AI engine for a legal argument compared to these justices.
Much better.
And this is probably how AI is going to take over much of the government.
They're going to put us in a situation where we realize the humans there are so incredibly stupid that we would be better off to have a machine make the decision.
Now, speaking of AI, most of today's college graduates, even in graduate school or MBAs, they use AI to formulate their answers and write their papers, etc.
They are basically just becoming prompt engineers and not learning how to write or think for themselves.
And as this continues, as that generation moves over time into middle management, upper management, and then government, they're going to basically turn over our society to AI agents, with human beings as the front-group puppets.
There'll be a human front man or front woman who pretends to be smart, but actually they're powered by ChatGPT or whatever.
And they won't know how to think themselves, even less than the people today who are already cognitively incompetent.
And this is partly why we are seeing today this inability of so many people to engage in reason.
Just the idea of rationality has not just become unpopular, it has become unrecognizable by the vast majority of people.
They don't process information in a way that is rational.
They simply react emotionally and according to the herd.
They just follow the herd.
They do what they're told.
They think it's their own thoughts, but it's not.
And so those people are functioning more and more like large language models.
Right.
So you're going to get this interesting dichotomy where you're going to have reasoning AI engines rising up and running society more and more.
At the same time, you have dumbed-down humans who are just regurgitating propaganda and garbage and will increasingly look like sort of low-end language models.
And, you know, we're already seeing that even in medicine.
If you have a real question for your doctor about something like, let's say, Lyme disease, and you try to talk to your doctor about it, he's just going to give you a bunch of nonsense, just regurgitating language-model garbage.
If you want to do real research about Lyme disease, you have to use an AI engine like our own, Enoch, which is free of charge and about to be released; Brighteon.ai will allow you to join the waitlist.
You'll be able to do all kinds of research there and get amazing answers that are far more comprehensive than almost any mainstream doctor.
Plus, you can ask this AI engine about herbs that cure cancer or the dangerous ingredients in vaccines.
You know, questions that your doctor just will have no idea about typically, unless they're trained in alternative medicine or complementary medicine, they'll have no clue.
So human beings are no longer going to be the reliable or exhaustive source of information about anything technical: physics, chemistry, medicine, materials science, history, culture, anything.
Human beings used to be the keepers of knowledge, passing it down, writing books, etc.
That's no longer the case.
So the era of human knowledge hegemony has now come to an end.
In fact, Elon Musk recently announced that the only way to make Grok, his AI engine, better is to rewrite all of human knowledge using AI, and then take that knowledge base and use it as the training foundation for a brand new AI engine. I had the same idea, by the way; I just don't have his billions of dollars to put into it.
In other words, it's going to completely disconnect from human written knowledge.
And the next generation of AI will only be trained on knowledge that has been written by AI.
And the reason I think that's actually a good idea for AI performance is because human thinking is not as good as you think, which is sort of a recursive statement right there.
But if you go all around the world and you collect all the human knowledge in all the books, you'll find that it's not actually well written or it's not conceived very well.
It's not organized that well for the most part.
I mean, there are exceptions, but overall, human beings are not really built to be linear, rational thinkers. They are conceptual, abstract, holographic thinkers, but not linear thinkers.
So the bottom line is, if you want a reasoning AI engine, you probably don't want to train it on human reasoning because human reasoning is mostly unreasonable.
I mean, think about it.
If you train AI on everything that's written on the internet right now, you know, your AI will be insane.
It will think that men can have babies.
It will think that vaccines are all safe and effective.
So if you want to have an engine that is based on reality, which is our goal, then you have to actually be either very, very careful of the information that you train it on, or you have to rewrite it.
So, of course, our engine, Enoch, will be available in two versions.
One is a browser-based version: free, non-commercial, with about 80% alignment with reality.
And then we have a free downloadable version that is a GGUF file that you can run on any kind of AI inference software such as LM Studio.
And that version has about 50% alignment with reality, which means roughly every other answer will be good and every other answer will be bad, biased by the base model it's trained on, because, look, it's hard to make AI models that are any good.
That's why it's necessary for smaller organizations like my own to be able to build our own base models, which is our goal.
We're going to be doing that in the next couple of years.
The limitation is currently the cost and the associated amount of hardware and electricity that has to go into training a model on something like 10 trillion tokens.
So if you're going to do a 10 trillion token base model, currently you need to spend many millions of dollars.
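The "many millions of dollars" figure can be sanity-checked with a back-of-envelope sketch using the common 6 × parameters × tokens FLOPs rule of thumb. The model size, GPU throughput, utilization, and hourly rate below are illustrative assumptions, not figures from this episode:

```python
# Rough training-cost estimate via the common approximation
# total FLOPs ~= 6 * parameter_count * training_tokens.
# Every number below is an illustrative assumption.

def training_cost_usd(params, tokens, gpu_flops, utilization, usd_per_gpu_hour):
    """Estimate the one-time dollar cost of training a dense model."""
    total_flops = 6 * params * tokens        # forward + backward compute
    sustained = gpu_flops * utilization      # effective FLOP/s per GPU
    gpu_hours = total_flops / sustained / 3600
    return gpu_hours * usd_per_gpu_hour

# Assumed: 70B-parameter model, 10 trillion tokens, ~1e15 peak FLOP/s
# per GPU at 40% utilization, $2 per GPU-hour.
cost = training_cost_usd(70e9, 10e12, 1e15, 0.40, 2.0)
print(f"~${cost:,.0f}")  # → ~$5,833,333 at these assumed rates
```

Halve the parameter count or the price per GPU-hour and the estimate halves with it, which is why the figure moves so quickly as hardware improves.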
But that's going to change dramatically in the next, well, even in the next year.
We're seeing orders of magnitude improvements in NVIDIA computational power.
NVIDIA is improving its computational speed by about a million times every 10 years on average.
And at the same time, there's a lot more power efficiency in these newer microprocessors.
So you're going to be able to expend a lot less power to get the same amount of training result.
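Taking the million-fold-per-decade figure at face value, compounding turns it into an implied annual multiplier; the arithmetic below just unpacks that claim, it doesn't verify it:

```python
# A constant compound rate that yields 1,000,000x over 10 years
# works out to the 10th root of one million per year.
annual_factor = 1e6 ** (1 / 10)
print(f"about {annual_factor:.2f}x per year")  # → about 3.98x per year
```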
The bottom line is that training your own base model will become a lot more affordable.
At some point soon, it'll be under a million dollars.
However, the data set is critical for that.
And I don't know anybody else in the world that has a data set as good as ours, not even Grok or Elon Musk or anybody, because we've spent, I don't know, 20 months or so working on our data set.
And our data set is just the best reality-based data set in the world.
And we're expanding it.
And we have some very innovative methods to do that.
And as a result, when we are able to train our own base model, it will be the most reality-based AI model in the world, the highest-scoring on the reality test.
In fact, our current model, Enoch, already beats ChatGPT, and it beats Anthropic, and it beats Google and Microsoft and Meta and everybody on reality questions, like how many genders are there, for example.
So, I mean, we've already built the world's most accurate AI engine when it comes to reality.
Now, our AI engine doesn't do all the advanced math word problems that ChatGPT can do, so it's not that kind of engine.
But in terms of getting the broad strokes of reality correct, there's nothing else that's better.
In any case, there are some very exciting times ahead of us. Thank you for your support, and understand that you are seeing the end of human cognition combined with the rise of AI supremacy in cognition.
And it's going to be important to make sure that we have AI engines that actually reflect human values and human reality.
So thank you for listening, everybody.
I'm Mike Adams, the Health Ranger.
Be well and take care.
Thank you for supporting us at HealthRangerStore.com.
And we've got a new batch of turmeric in stock now.
It took a long search, because the supply chains are really becoming more difficult.
And turmeric, as you know, is often contaminated with lead.
So of course, we do mass spec elemental testing for lead, cadmium, arsenic, mercury, and other elements.
We found an ultra-clean batch and we have it available for you now.
We've got it all for you at healthrangerstore.com.
We do not have turmeric tincture back in stock.
We're working on that, but we do have the turmeric root powder.
And as you know, I include turmeric root powder in my own smoothie, of course, which is why it's orange.
It's why my dental work turns orange, which freaks out the dental hygienist as well.
But that's another story.
Okay, so if you want really good, high-quality, ultra-clean turmeric root powder, we've got it now, healthrangerstore.com.
We also have, yeah, there's the screen showing it to you.
We also have astaxanthin.
Hawaiian astaxanthin, we have it right now in stock.
Of course, this is known as the king of carotenoids.
This is a fat-soluble, super-potent antioxidant.
And it's got a multitude of uses and benefits and support mechanisms for human health.
Athletes use it.
Lots of people use it.
This is also what turns the flesh of salmon slightly pink.
That's actually the astaxanthin that they're getting from their dietary sources.
And it's believed to help support the natural athleticism and the endurance and stamina of salmon.
That's just one of the many benefits of the king of carotenoids.
Now, astaxanthin comes in a dark bottle because it's sensitive to light.
But it's very potent.
You only need two to four milligrams per day.
We've got that available for you right now, healthrangerstore.com.
We also have a supply of NAC in stock, N-acetylcysteine, which of course helps support the body's normal detoxification process with outstanding liver support and glutathione and so much more.
So check it all out at healthrangerstore.com.
Turmeric root powder, astaxanthin capsules, as well as NAC, all in stock right now, plus hundreds of other items that are also certified organic, laboratory tested, and non-GMO.
And we don't use synthetic colors, fragrances, fillers, garbage, none of that stuff.
Nothing like that.
Ultra-clean products for home, for health, for personal care, for nutrition, and also for long-term food storage, for emergency preparedness.