March 23, 2024 - Health Ranger - Mike Adams
22:06
AI Large Language Models are NOT aware, alive, conscious or even intelligent...

Some important observations about AI, chatbots, large language models, and the existential threat against humanity.
Thank you for joining me today.
I'm Mike Adams here, and we are building the infrastructure of human freedom.
And one of the projects we're building right now is called Brighteon.ai.
It's a large language model.
It's currently named Neo, after the character from The Matrix, and it is a generative, text-generating chatbot.
It can do text summaries.
It can write articles.
It can analyze the sentiment of content.
It can expand bullet points into paragraphs.
It can do all kinds of interesting things, and we are training it on a lot of material that you just don't find in any other LLM. We're training it on herbs and nutrition and natural medicine, permaculture, off-grid food production, gardening, medicinal extraction from foods, and so on.
And we're going to be releasing this free of charge as an open source, downloadable LLM. The current target now is the first week of April.
It's been delayed a little bit.
Hopefully there won't be any more delays.
We've run into some problems.
We lost some training days.
But it's all part of the process of learning how to do this.
But in this process, I've learned a lot of things that I want to share with you about the 7 billion parameter model we're building.
This thing is not intelligent.
It's not really AI. People say it's AI, but I can tell you that all this thing does is it predicts the next word in a sequence.
That's how these LLMs work, at least at this scale.
Now, I know Elon Musk has just released Grok-1 as open source, which is a 314 billion parameter model.
It's still based on the same architecture.
It's a mixture of experts using transformers and generative processes to spit out text.
You just have to have a much larger server in order to run his model, and it's got a lot more parameters, which means it understands more relationships between words and concepts.
A 7 billion parameter model is already exceedingly good at generating text, but it doesn't possess intelligence.
How do I know that?
Well, number one, we've broken the model in so many ways during our training process, I can tell you it's very easy to break it.
And when I say break it, it means it just starts spitting out gibberish, sometimes in different languages.
I would equate this to like a human being having a seizure.
It's kind of like a large language model seizure, where it's convulsing, it's just spitting out nonsense.
None of it makes any sense.
And it's incredibly easy to make LLMs behave this way accidentally, by the way.
Making them actually make sense is a very delicate thing.
It requires a lot of understanding of how to fine-tune them correctly, set the parameters correctly, and understand the transformer architecture and how it affects the model's autoregressive inference.
Let me explain autoregression here for a minute.
So the way these LLMs work right now is they literally just predict the next word.
So if you ask it a question like, why did the chicken cross the road?
And that phrase, that series of words is statistically linked to the next word, which will be part of the answer.
And the next word usually would be the word to, T-O. And once that word is in place, then that whole phrase, why did the chicken cross the road to, is then pushed back to the model, essentially as the next query.
I mean, I'm simplifying this, but...
And then it would spit out the next answer, which is the word get.
To get.
Because, of course, it's going to eventually fill out the answer to get to the other side.
Or something like that, right?
That's typically how these things work.
So the autoregression description of this means that as each new word is generated, it feeds that word back into the context window, as it's called, of the word chain that it's analyzing.
And then it tries to predict the next word.
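To make that loop concrete, here is a minimal sketch in Python of the autoregressive idea being described. Everything in it is a made-up stand-in, not the actual Brighteon.ai code: a real model scores every possible next token with a neural network rather than looking it up in a table. The only point is the control flow: predict one word, append it to the context window, and repeat.

```python
# Illustrative toy only: the function and lookup table below are made-up
# stand-ins so the autoregressive control flow is visible. A real LLM
# replaces the lookup with a neural network that scores every possible
# next token.

TOY_NEXT_WORD = {
    "why did the chicken cross the road": "to",
    "why did the chicken cross the road to": "get",
    "why did the chicken cross the road to get": "to",
    "why did the chicken cross the road to get to": "the",
    "why did the chicken cross the road to get to the": "other",
    "why did the chicken cross the road to get to the other": "side",
}

def predict_next_word(context):
    """Stand-in for the model: return the most likely next word, or None."""
    return TOY_NEXT_WORD.get(context)

def generate(prompt, max_new_words=10):
    context = prompt
    for _ in range(max_new_words):
        next_word = predict_next_word(context)
        if next_word is None:
            break  # nothing more to predict
        # Feed the new word back into the context window and go again.
        context = context + " " + next_word
    return context

print(generate("why did the chicken cross the road"))
# why did the chicken cross the road to get to the other side
```

That feedback loop is all that "autoregression" means here: the model's own output becomes part of its next input.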
You need to understand that large language models, they have no planning capabilities.
They do not plan.
They don't plan in their digital minds to answer the question with, quote, to get to the other side.
They don't have a plan like that, and then they don't break down the plan into steps like, oh, if I'm going to write to get to the other side, then I have to start by writing the word to.
No, that's not the way they think.
They don't think that way at all.
All they're doing is predicting the next word in a sequence.
So LLMs, as they exist today, are seemingly very amazing technology for generating text, and they seem to make a lot of sense to people, and some people even use them as psychologists or girlfriends.
Some people fall in love with AI chatbots.
There's all kinds of weird, twisted stuff going on.
But these LLMs do not think, they do not plan, they do not have memories.
They do not have goal-oriented behavior.
And all of those traits that I just mentioned, those are necessary traits for what we would generally call intelligence or AGI, artificial general intelligence.
An AGI system has to have a memory of what it has experienced, what it has said, what it has taken in in terms of data or experiences.
It has to have planning capabilities, which means it has to have goal-oriented behavior.
Like, oh, I want to help this person solve a problem.
Therefore, I need to break it down into these steps.
And here are the steps.
No, LLMs don't work that way at all.
Even Elon Musk's Grok LLM, at 314 billion parameters, is just a more elaborate word-prediction engine.
So as a result, none of the LLMs that exist today, not even OpenAI's, demonstrate AGI or artificial general intelligence, because they don't plan, they don't have memories, they don't have goal-oriented behavior, they don't have strategies, and they don't break down strategies into smaller steps.
None of that happens.
Okay?
They're just really cool generators of content: visual content (images, and now videos with Sora, Midjourney for photos) and text content (OpenAI, all the other open-source models, including the one that we're releasing), and so on.
They're really good at generating content, but they are not alive, they are not awake, and they are not aware.
Now, they also can't be relied upon to accurately quote people or to issue accurate citations.
They will invent quotes because of the nature of the way Transformers work.
In fact, in a recent summary that I had our own engine generate, it was inventing quotes from me.
I'd given it a transcript of one of my podcasts and asked it to generate a summary.
And it actually invented a quote that I did not say in my own podcast, but it attributed that to me.
And I've seen that many times.
There are articles out there where Google's AI, Gemini, will fabricate fake articles in order to defend itself from accusations of racism or anti-white bias.
These AI engines are known to fabricate defamatory articles about people, like Donald Trump, let's say, or Alex Jones or whoever.
They will just make it up.
So these LLMs should never be used as reliable sources of citations of very specific pieces of knowledge.
Rather, they're very good at the gestalt, the big picture, assimilating all the elements from multiple sources and arriving at an overall picture that looks about right but may be off on some of the details.
And that's why the photo generators that are run by AI engines, they can do very photorealistic, very impressive images, even video snippets or whatever.
But sometimes the characters will have too many fingers.
Or if they're walking along the ground in a video snippet, they might be floating a little bit.
They'll be just weird little artifacts.
But overall, the picture is very impressive, and it's correct in the big picture, but in the details, not so much.
So our model, which, again, you can sign up to download for free at brighteon.ai, is being trained to have a lot of knowledge on gardening, phytonutrients, and so on.
It will give you very good big picture answers.
If you ask it questions like, hey, what's the best time of year to plant, I don't know, potatoes in Iowa or wherever, or questions like that, it will give you a pretty good picture.
It may not be 100% accurate on every statistic that it cites, you know.
It may say, well, the average springtime temperature in Ohio is 47 degrees.
That might be made up.
It might actually be 44, 62, who knows?
The engine will make up little specific, quote, facts that are kind of imagined.
And we've seen this with all the lawyers that are trying to use OpenAI to write their court briefs.
The court briefs cite case law that does not exist.
They make it up.
And it's said that AI engines like this, these LLMs, dream their content.
That's a pretty good description.
They are predicting the next word in a sequence, and they're kind of dreaming it along.
And as with a dream, your overall dream experience may be reflective of the real world, but there can be specific things that don't quite work right in a dream.
For example, have you ever tried to read a sign in a dream?
And then have you ever looked away and turned back to read it again?
And have you ever noticed that the sign changed?
I don't know if you notice those things, but I do.
Words change around in dreams all the time.
And it's a common thing.
It's not just something that I've experienced.
It's a very common thing.
Words are elusive in dreams because your mind is always sort of shifting them around and trying to understand them and trying to make connections between them.
And that's the way the LLMs are operating as well.
So understand that an LLM, because it's just generating the next word in a sequence, an LLM is not capable of intentionally deceiving anyone.
The idea of deceit doesn't even enter its, well, non-mind.
It doesn't even have a mind.
It's just a statistical transformer that generates content.
That's all it is.
It doesn't have agendas.
Not at the current level.
Because it doesn't have goal-oriented behavior.
So it can't actually have an agenda of, let's say, anti-white bias.
That agenda came from the human engineers that built it, i.e., the Google engineers.
They designed it to have anti-white bias, and the model is simply playing out its programming as designed.
But the model itself doesn't have some kind of vindictive hatred of the white race.
It has no idea because it's not conscious.
It's not aware.
It doesn't have goal-oriented behavior.
There are a lot of people who say that we're on the verge of AGI right now.
I actually think we're still a ways away.
I don't know if that means a couple of years or 10 years or what, because things are moving quickly.
Let's face it, 2023 was quite a year of accomplishments in this realm.
So maybe the acceleration of AI progress is going to put us into a realm where we have AGI by the year 2027, let's say, just as an example.
But understand that that will not be achieved by making language models larger.
It can't happen.
Language alone, by the way, is not the way that we experience the world.
I mean, think about it.
Even when you're an infant, you're absorbing all this information from the world around you.
Sights, sounds, smells, tastes, touch.
And also other senses that maybe modern science doesn't recognize yet.
You are not actually taking in most of the information around you in the form of words, especially as an infant, because you don't even know what those words mean yet.
You're taking in information through a multimodal sensory experience that you could call a very high-bandwidth experience.
You've got just all the visual bandwidth.
You've got auditory, you know, sense of touch and so on.
This is how neural networks are formed in human beings.
This is what ultimately gives rise to a general intelligence inside the mind of a human being.
And then that human being eventually learns to understand language and use language in order to express its goal-oriented ideas and behaviors.
AI engines currently do none of these things.
They don't take in the world like a human child.
Not yet, anyway.
Maybe that's coming.
Another important point to consider here is that human beings, if they lose their intelligence, if they lose consciousness or the sense of self, and they become NPCs, non-player characters, then they begin to act just like large language models.
And when they speak, all they're doing is repeating gibberish that they heard from someone else.
You see these people all the time.
These people sit on the city council.
These people are your friends, your neighbors, your family members, your co-workers.
This is the most common type of person out there.
All they do is repeat, oh, I trust the science, while they're taking jabs and wearing masks and locking their children down and doing other stupid things.
They don't think for themselves at all.
They have become sort of bio-LLMs, as I call them.
All they're doing is churning out words, and yeah, they can string together a bunch of words that make sense.
Well, so can a machine.
Doesn't mean the machine is thinking.
Doesn't mean the human is thinking.
Most humans do not pursue goal-oriented behavior, believe it or not, most of the time.
All they're doing is just passively reacting to sensory inputs and demands.
And that's why they're so malleable.
That's why we talk about these people as oblivious sheeple: because they are programmable life forms.
They are influenced or manipulated by media and government messages through language, but also through visual imagery and all kinds of subliminal programming through Hollywood movies and TV shows, you know, peer pressure, social influence, things like that.
And they are not their own person.
They are a programmable life form, and everything that they say can be replicated by an LLM, because those human beings never have original thoughts.
So understand that LLMs, yes, they can replace sheeple right now today.
That's why LLMs power a lot of the fake Twitter bots and chatbots online; the fake sock puppet accounts are run by AI. I mean, Mossad does that all the time.
Israel has a very prominent online fake account program to try to promote Israel all the time.
And it's convincing in text format.
It sounds like a real human being because the LLMs mimic mindless humans.
But you're never going to have an LLM that has specific goal-oriented behavior broken down through a series of steps to achieve the goal.
And you're never going to have an LLM that talks the way that I'm talking right now or the way that you typically talk because you have an idea in your mind and you're seeking to impart that idea upon your audience.
You're not just generating the next word.
You're actually trying to achieve a goal.
My goal for this podcast is to help you understand that small-scale AI projects are not sentient.
They're not intelligent.
You don't need to fear them.
It's part of my goal.
Since I'm releasing an AI project, I want you to use it.
I want you to use it to help enhance your productivity, your life, your understanding of the world.
I'm trying to give you tools so that you can amplify and explore your own creativity, your own first person perspective, your own consciousness, so that you can more effectively and more successfully navigate this world or amplify your productivity.
Maybe you can end up with more time on your hands because your work is completed more quickly because of the use of this model.
That's my goal.
And I've translated that goal into this podcast that you've heard here.
But an LLM can't do that because it has no goals, it has no memory, it has no values, it has no morality.
It's not human, it's not demon, it's not spirit, it's just math.
It's just transformers predicting the next word.
And that's never by itself going to achieve artificial general intelligence.
If AGI is achieved, it'll be through something different, not LLMs.
So thank you for listening.
Mike Adams here.
Be sure to sign up for a free LLM at brighteon.ai.
And be sure to check out more of my podcasts and interviews at brighteon.com.
Thanks for listening.
Take care.
Springtime is here and we have special spring detox bundles now for you to help you save a bundle at healthrangerstore.com.
And the first bundle that's in stock available right now, I've got it on my desk here.
It's a combination of non-GMO vitamin C and acetylcysteine, our 5G defense product, which helps your body deal with, well, peroxynitrite production that happens upon exposure to 5G radiation.
We've got turmeric gold plus, which has ginger and black pepper in it, as well as our nascent iodine.
This all helps naturally support your body's natural detox capabilities.
And if you bought these separately, they would be a lot more expensive than the special price that we have at healthrangerstore.com.
Just go there and search for Spring Detox Support Bundle, and you'll see this bundle right here available at this special price, which is a significant discount off of purchasing these separately.
We also have two other bundles available at healthrangerstore.com.
The first is a nutrients blend bundle here that has the microalgae superfood blend here along with our beet juice, again, all organic and all laboratory tested, along with our garden harvest blend.
So this is a combination of fruits and vegetables loaded with phytonutrition, plus the microalgae, and again, it's available at a special price that is a lot less expensive than buying them separately.
And then we have one more bundle here, which is called the Super Greens Bundle.
And this has our greens plus superfoods mix, our broccoli sprouts, and also our radiance blend.
Those three together, absolutely loaded with phytonutrition.
Again, non-GMO, certified USDA organic plus laboratory tested on top of that.
Plus, of course, we do microbiology testing on all these products as well.
We test for E. coli, salmonella, yeast and mold, total plate count, and we also do glyphosate testing.
On top of that, we do more testing than anybody else in this industry on our entire product line.
So if you get it at healthrangerstore.com, you can be sure it has been heavily scrutinized, even for heavy metals.
So we test for lead, arsenic, cadmium, mercury, and other toxic elements as well, making sure that you get the cleanest and most nutritive foods, superfoods, and supplements that are available anywhere in the world.
So thank you for your support at HealthRangerStore.com.
Take advantage of these spring detox bundles.
They're good only while supplies last or through the end of March 2024.
And right now we've got supply of all of these.
I don't know how long it will last.
The supply chain is always a little dicey from time to time.
At the moment, we're looking pretty good.
So hopefully you can get your hands on these or as much as you need.
Thank you for your support.
HealthRangerStore.com.
Take care.
A global reset is coming.
And that's why I've recorded a new nine-hour audiobook.
It's called The Global Reset Survival Guide.
You can download it for free by subscribing to the naturalnews.com email newsletter, which is also free.
I'll describe how the monetary system fails.
I also cover emergency medicine and first aid and what to buy to help you avoid infections.
So download this guide.
It's free.