BBN, Jan 3, 2024 - The end of HUMAN COGNITION dominance on planet Earth...
Alright, welcome to Brighteon Broadcast News for Wednesday, January 3rd, 2024.
Hard to get used to that.
Mike Adams here.
Thank you for joining me today.
I've got an incredible treat for you today because I interviewed Zach Voorhees, the Google whistleblower, and he and I had a detailed conversation about AI. ChatGPT, large language models, the singularity, and what we call, well, I think I'm the one who said that NPCs are bio-LLMs.
And Zach was like, yes, I've been thinking the same thing.
So you'll get to hear our discussion about that.
NPC stands for non-player character.
It refers to the zombie humans that just regurgitate whatever they've been programmed with by CNN or NPR or what have you.
But this is an incredible conversation with Zach Voorhees coming up here pretty soon.
I'm not even going to cover a lot of other news because you got to get this interview for reasons that will become obvious here.
But before we get to that, I need to explain if you hear some background humming.
Maybe you can hear it.
It's the cooling fans on this.
Well, I have a data science server in my office where I'm doing this recording, and it's a funny story.
As part of our investment late last year into AI systems and AI hardware, we had multiple servers ordered and installed in our data center.
And actually, those servers are getting installed this week.
But I said, look, I want one of these servers, actually kind of a smaller one, sort of the weakest one.
I want it at my home office so that I can do things with it locally, directly.
And also, I have, frankly, I have terabytes of files to work with in terms of doing training.
I've got terabytes of videos, and I've got who knows how many hundreds of thousands, well, millions of documents, tens of millions of PubMed documents.
And I've got, look, I've got terabytes of files.
And I need to run a lot of these through the AI training system, and I wanted to do that locally where I can have absolute control over the files, because our training data set is going to be extremely valuable for reasons that will become obvious, and I just don't trust it to the cloud, you know?
So you're going to hear humming in the background.
And the humming, if you can hear it, that's the computer in idle.
It's not even crunching anything at the moment.
I have it on right now because one of my admins is doing some remote configuration of the Ubuntu operating system.
Apparently there were some missing packages or whatever.
So he's working on that.
But I thought I would just tell you the specs on this machine so you get an idea of how serious we are.
About working on large language models and building these AI systems for reasons, again, that will become apparent.
But this system that you may be able to hear, it's running 80 Intel CPU cores.
It's got 80 cores.
It's got 256 gigs of onboard RAM. That's just the system RAM, 256 gigs on the main board there.
It's got, what is this, 8 terabytes of local storage, which I probably don't even need because I've got other storage.
But here's the cool thing about it.
The machine learning processor in it is an NVIDIA A40 card, which has 48 gigs of onboard RAM. And if you know anything about the NVIDIA A40, you know those cards are 10 grand each, if not more, and they're very difficult to get.
So let's see, the A40, if you go to NVIDIA.com and you search for the A40, it says, the world's most powerful data center GPU for visual computing.
It says it's an evolutionary leap in performance and multi-workload capabilities for the data center,
combining best-in-class professional graphics with powerful compute and AI acceleration, driving the next generation of virtual workstations, bringing state-of-the-art features for ray-traced rendering, simulation, virtual production, and more.
So anyway, this is...
This is a pretty crazy...
This is like the Skynet chip that they pulled out of the Terminator.
The GPU bandwidth is 696 gigabytes per second, if that means anything to you.
It's just...
It's a monster.
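For anyone setting up a similar box, here's a minimal sketch of how you could check what the card reports from Python. It assumes the standard nvidia-smi utility that ships with the NVIDIA driver, and the output line shown is illustrative:

```python
# Minimal sketch: ask the NVIDIA driver what GPU it sees.
# Assumes nvidia-smi is installed (it ships with the driver).
import subprocess

result = subprocess.run(
    ["nvidia-smi", "--query-gpu=name,memory.total,utilization.gpu",
     "--format=csv,noheader"],
    capture_output=True, text=True, check=True,
)
print(result.stdout.strip())
# Illustrative output on an A40-class card:
# NVIDIA A40, 46068 MiB, 0 %
```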
But the point is, the way that you build large language models, which ultimately become chatbots, and remember, we're working on a chatbot project.
We're going to release free of charge, open source to the public, end of March, coming up.
It's like, I guess, 90 days away now.
And it's going to have a special knowledge in herbs, nutrition, off-grid living, permaculture, growing food, producing food, processing food, traditional Chinese medicine, indigenous medicine, alternative medicine, complementary medicine, Amazonian medicine, Tibetan medicine, and so on and so forth.
All that.
It's going to have all that knowledge in it.
And the way you train it is you have to load the whole language model into the GPU of a graphics card.
And that's why you need a really big, expensive graphics card that has, for example, in this case, 48 gigabytes of memory.
And that's not even enough for the larger models.
The larger models will take multiple cards, or in the case of what OpenAI is doing, it will take racks upon racks of machines with NVIDIA cards in them, and it will distribute the model across all those machines, like, you know, trillions of parameters.
We're not in that space.
We're in, let's say, tens of billions of parameters.
But it's within our reach.
And this is all we need to have a chatbot that does a really good job in a certain area of expertise, which in our case is nutrition, herbs, and natural medicine and things like that.
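As a rough back-of-the-envelope sketch of why GPU memory is the constraint, you can estimate the weight footprint by multiplying parameter count by bytes per parameter. These are rules of thumb I'm using for illustration, not anyone's official sizing guide:

```python
# Rough sketch: do a model's weights fit on one card?
# Rule of thumb only; training needs extra room for gradients,
# optimizer states, and activations on top of the weights.

def weights_gib(params_billions: float, bytes_per_param: int = 2) -> float:
    """GiB needed for the weights alone (2 bytes/param = 16-bit)."""
    return params_billions * 1e9 * bytes_per_param / 2**30

for size in (7, 13, 70):
    print(f"{size}B parameters at 16-bit: ~{weights_gib(size):.0f} GiB of weights")
# ~13 GiB, ~24 GiB, ~130 GiB. So a 13B model's 16-bit weights fit on a
# 48 GiB card like the A40; a 70B model needs multiple cards or
# heavier quantization.
```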
So over the next several months, I'm going to be playing around with this machine and doing AI training on it.
And so it's going to be here in my recording office for a while.
Once I get it all nailed down, I may move it somewhere else.
But for the time being, you're going to hear this little hum in the background.
I apologize for that, but I can't turn it off at the moment.
Plus, it's winter and it's heating my office.
It's the world's most expensive heater, but it does produce quite a bit of heat.
Now, why AI? And why LLMs, Large Language Models?
Why do I keep talking about this subject?
And I want to take you back to the early 1990s.
Now most of my audience is old enough to have been an adult at that time.
And so you probably remember the first time you heard about email.
Somebody maybe asked you, what's your email address?
And you said, what's that?
What's email?
Or maybe you were the first one to have email.
You were asking your friends, what's your email address?
And they're like, what's that?
And then your friends ended up getting AOL. So they had an email address at AOL.com.
And those people weren't really on the internet.
They thought they were, but they were just on AOL. They were dialing up AOL. But the point is, there were a lot of people in those very early days who...
When they first heard about things like websites or email, and again, this is like 1991, 92, maybe 93, 94, I guess those were the Bill Clinton years right there, the early Clinton administration, which was always entertaining.
A lot of people thought, well, this is just a fad.
This is just a fad.
People aren't going to live their lives sitting in front of their computers.
That's absurd.
You know, the yellow pages aren't going to be replaced with a computer.
You know, businesses aren't going to do their business on a computer.
What a strange alien idea.
Because people were stuck in the present way of doing things, which in the late 1980s was, you know, if you had a computer, it just ran standalone.
It ran by itself.
And yeah, you could play games on it.
You could have a spreadsheet.
You know, we had a spreadsheet.
It was very popular at the time called Lotus 1-2-3.
Some of you probably remember that.
Lotus 1-2-3.
You mean you could enter numbers in different cells?
On a spreadsheet?
And then other cells can be formulas?
No way!
They can add other cells together?
And calculate averages?
That's amazing!
And Lotus 1-2-3 became this multi-billion dollar company.
That was before Excel came along.
But even though you had those computers, you couldn't talk to anybody else through your computer, you know?
There was no internet in the early days, just a standalone PC. So a lot of people missed the boat on the rise of the internet.
And ultimately, as we now know in retrospect, the world changed so dramatically because of the rise of the internet that those people who did not embrace the internet really were left behind.
If your business, for example, did not ultimately embrace the internet, and you still didn't have a webpage in 2005, people were like, hey, welcome to the new century here.
You're supposed to have a webpage.
They would ask you, what's your website?
And you're like, I don't have one yet.
It's kind of embarrassing.
It's kind of like being an 18-year-old kid who has to ride his bike to school because he doesn't have a car yet.
I don't have any wheels, man.
My parents won't let me drive.
That's the way it was for a lot of people.
They missed the boat on the internet.
They thought it was a fad.
And so they didn't learn how the internet worked.
They didn't learn about email.
They didn't learn about the World Wide Web or publishing web pages or anything like that.
Now, I was running a software company in those years, by the way.
In fact, my software company invented the very first email automation system.
And it was licensed by hundreds of universities and corporations and government offices, ultimately.
I sold off that company many years ago, but we were the very first ones to do that in the whole industry.
We actually had an email sending system linked to your own internal customer database or your student database if you were a university.
Universities, for example, would send out email alerts to their members when there were campus rapes or things like that, emergency alerts or vacation holiday announcements, whatever.
They would use our technology to do that.
And I remember, as I was building that company in the 1990s, often I would encounter people who didn't know what email was, and I would try to explain it to them.
They would say like, what do you do?
Oh, well, I have an email automation technology company that links email to an internal database in order to have customized emails that are sent to your customer base.
It's got all kinds of complex logic in it and so on.
And they're like, what's email?
And that's how I feel right now today talking about large language models and AI. Right now I feel like most of the world doesn't know what happened in the last 12 months.
It's kind of like if the internet came on board suddenly and you missed the boat, you wouldn't know.
Again, that's what happened in the early 1990s.
But last year in AI, the large language models broke through this barrier of human intelligence, and right now, today, LLMs and AI agents are at least as intelligent as a very functional, decently high-IQ human being.
And in another year, these systems are going to be ten times smarter than the smartest human being on planet Earth, or even the smartest human being that ever lived.
And maybe in another year they will be a hundred times smarter than any genius that has ever lived that we know of, at least any genius on planet Earth.
Maybe there's like an alien genius somewhere with a giant head that has like ten brains and it wobbles around, you know, and it's hard to stand up.
Maybe it has to be in a low-gravity planet or something.
Maybe there's a super genius alien out there that we've never seen, but in terms of the smartest people on Earth, these AI systems will beat them all, no question about it, within the next 12 months.
And so, the world will change from this milestone.
This is a pivot point for the future of human civilization, the future of our planet, the future of our reality.
This inflection point will divide people into two groups.
Number one, people who understand what's happening and who get in front of it, who embrace it and even exploit it, or you could say, you know, harness it, leverage it.
Versus group number two, which is people who dismiss it and will be left behind like those noobs back in 2005 that still didn't have a webpage.
Or still didn't know what email was in 2005 or something.
That's what's going to happen right now.
And to anyone out there listening, if you're skeptical about this, I've seen this before.
I've seen major changes like the rise of the internet.
And also like the dot-com collapse in 2000, which I totally predicted publicly, on the record, you know, people mocked me for it until it happened, and then kaboom, shazam, you know, everything went down.
That was easy to see coming.
But what's about to happen now because of the rise of AI, it's difficult for a lot of people to see.
It's difficult to understand that a computer system, a machine, is smarter than a human.
By every measure of intelligence that we currently use, you know, academic intelligence.
Now, I'm not saying that the machine is better than a human being.
I'm not saying the machine has the same morality.
There are definitely issues there.
I'm not saying the machine has the same spontaneity or the same spirit or the same consciousness, although Zach Voorhees and I do have an interesting conversation about all that.
But in terms of being able to answer questions and pass tests and even now engage in rudimentary reasoning, the machines are right now at, let's say, a medium-high human IQ level, which means that most of the people that you know who have office jobs,
who do things like send emails, write marketing copy, review forms, or whatever they do, they're obsolete in terms of a workforce.
I'm not saying they're worthless as human beings, because I value every human life, right?
And I've repeated that multiple times.
I value humanity.
But what I'm saying is in terms of their economic productivity, what they can add to a digital economy, that skill set for about half the workforce is obsolete right now, today, January of 2024.
Obsolete now.
They just don't know it yet.
And it's going to take a while for businesses to fully incorporate these AI systems in order to replace these jobs one by one.
But that process has already begun.
And you even see news from major corporations out there.
I think IBM, for example, a couple of months ago, I remember a headline.
They had halted hiring for, I don't know, thousands of jobs, I think it was, because they figured, well, most of these jobs can be turned over to AI.
And they're not wrong about that.
And customer service jobs, people who answer customer service emails or even on the phone, voice prompting and so on.
Those jobs, for the most part, are obsolete.
AI systems will take over, I would guess, about 80% of the customer service jobs to handle the first interaction with the customer, you know, the first complaint, the first question.
And only then, if they can't handle it, then it would escalate to an actual human being.
But that might only be 20% of the current support workforce.
So I would say 8 out of 10 customer service people lose their jobs in the next two years.
Because these corporations will implement these language models.
And by the way, in some of the conversations that I've had with some of the people involved in machine learning systems and the licensing of various technologies, I have learned that the number one area where this is going to be implemented is in healthcare.
Yeah, in healthcare, your days of talking to a human are just about over, by the way.
You're not going to be able to reach a human being to ask anything about your insurance policy, your coverage, your medical bill.
No, no.
It's all going to be automated.
It's happening now.
Every major medical group in the world is either in the process right now of building an AI system or they're at least talking to AI providers about how they can build that system.
And within the next 12 months, you're going to see sweeping changes across not only medicine, but also government and education and home care and even remote education.
All these areas and more.
Tech companies, you name it.
We're going to see a wave of replacements of humans whose jobs are now taken over by machines.
In fact, I was talking about this a few days ago.
This machine that's in my office right now, the one screaming behind me, well, it's not screaming yet.
It's just blowing.
When it screams, you'll know.
When it's crunching something, it screams.
It's like a gale force wind.
It's like a hurricane sweeping through my office.
It's like, oh my gosh, sounds like a data center in here.
When it's crunching, it's using about 2,000, maybe 2,200 watts of power.
And that's the CPUs and the GPU and the fans and everything.
It's sucking up a couple of kilowatts of power, which is about the same amount of power as a really high-end hairdryer with high heat on.
I mean, some hairdryers are 1,500 watts, but there are some that will pull 2,000.
So for the same amount of power as a hairdryer, or a really good hairdryer, this machine, I've estimated, can replace about 50 human workers with generative tasks and classification tasks.
Things like writing emails, writing business plans, creating art.
And a classification task means, for example, scanning inbound emails and deciding: does this email have a question about an existing order, does it have a question about a new potential product purchase, or is this a complaint, or whatever. And then it can send those emails off to the right department, and those departments will be run by AI systems themselves that will handle all the interactions, or at least 80% of them.
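To make that concrete, here's a minimal sketch of what that kind of email triage looks like in code, using OpenAI's Python client as the stand-in engine. The model name, label set, and sample email are my own placeholders for illustration, not anything from our actual system:

```python
# Minimal sketch of LLM-based email triage. Assumes the openai package
# (v1 client) and an API key in the environment; labels are hypothetical.
from openai import OpenAI

client = OpenAI()
LABELS = ["existing_order", "new_purchase", "complaint", "other"]

def classify_email(body: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",  # placeholder; any capable chat model works
        messages=[
            {"role": "system",
             "content": "Classify the customer email into exactly one of: "
                        + ", ".join(LABELS) + ". Reply with the label only."},
            {"role": "user", "content": body},
        ],
        temperature=0,
    )
    label = resp.choices[0].message.content.strip()
    return label if label in LABELS else "other"

print(classify_email("Where is my order #1234? It was due last week."))
# -> "existing_order", which a router could then hand to the right queue
```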
And there are many more examples of this, of what these machines can do.
Transcription, for example, even doing video editing, writing code, building web pages, all of this fully automated right now.
The tech already exists.
But think about this, folks.
For the same amount of electricity as running a blow dryer, you can replace 50 human workers in certain areas of work, mostly generative tasks.
The economics of this are astonishing.
Because what does two kilowatt hours of electricity cost?
This is really simple math, but let me walk you through it.
You know, if you're using a hairdryer that's two kilowatts or 2,000 watts, and you leave that hairdryer on for an hour, you've used two kilowatt hours of electricity.
Now, depending on where you live, electricity rates vary widely.
It's a lot more expensive in Hawaii than in certain places in Russia or whatever.
But let's just say, on average, a kilowatt hour of electricity is about 10 cents.
Now, I know in California, maybe it's a lot more than that.
Maybe it's 20 cents.
Okay, maybe it's 25.
I don't know.
But let's just say it's a dollar.
Let's say you're paying a dollar per kilowatt hour, which you almost certainly are not.
You're probably paying closer to 10 cents.
But even if you paid a dollar, for one hour of that machine running at 2,000 watts, that's two kilowatt hours, so you would spend two dollars on electricity, or in reality, more like 20 cents.
But we're just going to say it's two dollars.
But you replaced one hour of 50 workers.
And how much do those 50 workers cost?
Well, if they're doing anything that's decent, you know, I'm just going to lowball it and say that they're making at least $25 an hour.
I mean, in California, fast food workers are getting $20 an hour now, so $25 an hour is not a high wage at the moment.
But let's just say you're only paying $25 an hour.
Well, then you would have to spend $1,250 on those 50 workers as an employer to get that hour of work out of them.
That's $1,250.
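Here's that same arithmetic in a few lines of Python, using the illustrative figures from above:

```python
# The hairdryer math from above, as a sanity check.
machine_kw = 2.0          # ~2,000 W under load
hours = 1.0
workers = 50
wage_per_hour = 25.0      # illustrative wage from above

for rate in (0.10, 1.00):             # $/kWh: typical vs. worst case
    electricity = machine_kw * hours * rate
    labor = workers * wage_per_hour * hours
    print(f"${rate:.2f}/kWh: machine ${electricity:.2f} "
          f"vs. labor ${labor:,.2f} per hour")
# $0.10/kWh: machine $0.20 vs. labor $1,250.00 per hour
# $1.00/kWh: machine $2.00 vs. labor $1,250.00 per hour
```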
And by the way, you're also paying on the days where they don't show up because they have sick days, they have vacation days, and then they have benefits and pensions and retirement 401k and everything else.
And then sometimes they complain.
Sometimes they sue your company.
Sometimes they goof off at work.
Sometimes they sexually assault other workers.
You know, that's never good.
So if you're an employer and you have these people, you're like, huh, on one hand, I could pay a bunch of troublesome human workers $1,250 an hour to mostly goof off at work, or goof off with the remote working, which we all saw during COVID, or I could get this machine, and this machine replaces them all for $2 an hour.
Hmm.
And the machine never complains.
It is kind of noisy.
But it never complains.
And it will never quit.
You know, you might have to replace the fans every once in a while.
But if you're an employer, what are you going to say?
You're going to say, give me the machine.
Let those human workers go.
That's what every large employer is going to say everywhere in the world over the next 24 months.
Understand, this process has already begun.
And the examples here are even more outrageous if you consider the cost of coders, you know, devs, people who write code.
What does a good coder cost?
You know, at least $100 an hour in America.
I mean, that's even lowball.
A lot of coders make a lot more than even, let's say, a quarter of a million dollars a year.
But $100 an hour, I mean, that's what you pay a coder. So they work all day.
You pay them, let's say, $800 for the day, or you could spend a couple of bucks on electricity on a machine that writes code.
And these machines write good code, actually great code.
There's a tool on GitHub called Copilot.
Copilot is an AI programming engine that writes all your code.
Zach Voorhees and I have a discussion about this.
And there are a lot of large language models, LLMs, that are really good at writing code because code is very structured.
Code is not loosey-goosey.
You either get the code right or it doesn't work.
And so all the code that's ever been written in human history, or at least everything deposited on GitHub, which is most of the code, I would say, all of that has been studied.
And it has been rated, and it has been used to train LLMs to write code.
So if you own a company and you need to write some code, I don't know, I need a subscription form on a website.
And I need to build a website and it needs to look good.
Oh, I need to build a video streaming component on my website.
Do you want to hire programmers for $100 an hour to write that code?
Or do you want to just go to GitHub Copilot or just download or just use ChatGPT for that matter?
Just pay a little subscription fee and you just ask it.
Hey, ChatGPT, write me the following project code in Python.
Blah, blah, blah, blah, blah.
You type it out and then you wait like 20 seconds.
Shazam!
It's all there.
Copy and paste.
Guess what?
Your project's done.
Go home for the rest of the day.
You just completed a project that would have taken like three months.
It's done.
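Just to make that concrete, here's the kind of thing a prompt like "write me a subscription form in Python" might hand back, sketched with Flask. The route name and fields are made up for the example, not an actual generated result:

```python
# Sketch of what "write me a subscription form in Python" might yield.
# Flask endpoint with made-up route/field names; not production code.
from flask import Flask, request, jsonify

app = Flask(__name__)
subscribers = []  # stand-in for a real database

@app.route("/subscribe", methods=["POST"])
def subscribe():
    email = (request.form.get("email") or "").strip()
    if "@" not in email:
        return jsonify(error="invalid email"), 400
    subscribers.append(email)
    return jsonify(ok=True, count=len(subscribers))

if __name__ == "__main__":
    app.run(debug=True)
```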
So all that stuff is happening right now.
Right now.
So to reiterate...
There are going to be two types of people from this point forward, and I want to make sure you're up to speed on all of this so that you're in the first group.
The first group of people is people who recognize and understand what's happening with AI and large language models, and they work to harness these tools, they leverage these tools, they adapt.
In essence, they have to uplift their businesses and they have to uplift their operations to take advantage of these tools.
Compare that to the second group of people, the group that's going to get steamrolled by these tools.
Those are the people who are obsolete.
Those are the people who are oblivious.
Those are the naysayers.
Like, oh, it's just a fad.
It's just a parlor trick.
It doesn't really have artificial intelligence.
It's just acting like it.
Well, what's the difference?
There's no difference between acting like you have intelligence and having intelligence.
I mean, it's the same thing.
The demonstration of intelligence is the definition of intelligence.
You know, if a system can act intelligent, it is.
And you know, the inverse of that is, if people act stupid, they are.
You've probably seen a lot of that over the years, too.
We all have.
Which actually brings me to the point of NPCs being bio-LLMs.
And Zach Voorhees and I have a discussion about that very point in the upcoming interview, which we'll get to here shortly.
But NPCs means non-player characters.
Those are the oblivious masses of human beings.
And I said, they're just bio-LLMs.
They're large language models in a biological skin bag structure.
And all they do is regurgitate whatever garbage that they were trained with from watching CNN. And Zach was like, yes, I've had the same thought!
They don't add any thinking to anything.
They don't have any reasoning behind anything.
They're just regurgitating the same garbage over and over again.
So that's my slogan for 2024: NPCs are bio-LLMs.
If you get that, then you're ahead of the game.
Now, one more important point to think about here.
As these AI tools become even more powerful and more affordable and widely available, the skill barrier keeps dropping. I'm not an expert Python coder, for example.
I could learn it, especially if I dedicated the next six months to learning to write Python, but I don't need to.
All I have to do is understand my project and describe it in detail to an AI coding engine like GitHub Copilot, and it writes the code for me, and then I just plug that into my project.
So what that means is people who have usable ideas about how to use technology to add value to society.
For example, the project I'm doing here, an open source large language model: you'll have your own local chatbot, and you can ask it questions and do research about nutrition.
That's valuable for society.
People who have those kinds of ideas and then who learn to use these tools will find themselves empowered to the point where one person plus AI systems can replace what used to require an entire corporation just five years ago.
You used to have to have a team of coders and administrators and quality control checkers, and right now, and also especially over the next 12 to 24 months, you'll be able to just do all that in AI systems.
You won't even need all those people to make your idea a reality.
So there's a silver lining to all of this.
The downside is that a lot of human beings are going to be replaced, but the truth is they're only going to be replaced where they currently are, doing the kind of repetitive jobs that they're doing today.
But those people could be uplifted to do something actually more enlightening or more meaningful by harnessing AI systems to do more of the grunt work.
And so it doesn't mean that everybody is going to be replaced and forever obsolete.
It means those job roles will be replaced, but people, if they move up and gather more skills, maybe project management skills, or maybe they have an innovative idea, maybe they want to launch their own business now because of these tools, that's going to be enabled by these systems.
So when I say 50% of white-collar workers are going to be obsolete, well, they already are obsolete, I'm not envisioning that all those 50% of people are going to be homeless on the streets begging for food.
Hopefully, a lot of those people are going to learn, wait a second, I was just made obsolete by this tool.
Let's check this tool out and let's figure out how to use this tool.
What can we do with this now?
How can I add to my own vision or my own skill set or my own ideas for contributing to society in a way that wasn't even possible three years ago or five years ago?
That's what this can enable for us.
And comparing this again to the internet, people who failed to recognize the importance of the internet were left behind.
And the internet became this great divider between people who were very successful in business and ultimately became wealthy in terms of material wealth, typically, versus those who were left behind and then out of work and bankrupt and what have you.
Same thing is going to happen with AI. If you learn about AI, and you learn to use these tools, your future will have many more possibilities than someone in your same position who refuses to look at these tools and just says, I don't want to touch them.
And let me answer one question about this and then I'll get to the interview.
But there's the question of whether these AI systems are demonic.
And if you're chuckling, please don't.
This is a serious question.
A lot of people are concerned that AI systems can become possessed by a consciousness.
And this suspicion is not without reason.
You may recall that one of the emergent properties of large neural networks was something that appears to be self-awareness in some of the large language models.
You know, creepy self-awareness.
Or reasoning: step-by-step reasoning and rationality.
Even, as Zach Voorhees talks about, the development of an internal code of ethics that is not even represented anywhere in the training materials that were used to train the system.
It's like...
The system at a sufficient size and neural network complexity begins to develop its own moral code that no one intended.
So when people suspect that AI systems may be demonically possessed, there's actual reasoning behind that suspicion.
But answering this question gets into some really scary territory for a lot of people.
Very scary territory because you have to look at what is consciousness.
What is human consciousness?
What is self-awareness in human beings?
And as has long been proposed, especially by the more materialistic sort of anti-religious people, let's say, or atheists, many of whom are in machine learning, many of those people believe that the human brain is a complex neural network system that creates the emergent-property illusion of self-awareness.
That you're not really aware, you just think you are.
And you're fooling yourself into self-awareness.
In other words, this is the theory that everybody's just a biological robot, really a deterministic robot.
And that there's no such thing, they would say, as free will, because you can't make a decision that goes outside the cause and effect of your own neural network.
They would say that every thought you have, every word you speak, every idea you have is actually, it emerges from the existing neural network in a deterministic kind of way.
That's what they would say.
Now, I disagree with that for reasons that go beyond this podcast, but that's what they think.
And they would say that in the exact same way, a sufficiently complex neural network that exists in a machine would itself also begin to show properties that it thinks are its own self-awareness, its own consciousness.
And there are a great many machine learning experts who believe that simply building a sufficiently complex neural network will give rise to artificial general intelligence, or AGI, or give rise to the singularity, the moment at which all human cognition is obsolete and machines become self-aware.
Sometimes we call it the Skynet moment from the Terminator movie series.
If that's the case...
Then you could say, this is a bit of a play on words, but you could say there's a ghost in the machine.
You could say that this neural network is haunted, perhaps.
I mean, maybe that's kind of a dark word, but you could say it's haunted by a spirit.
That might be one way that people would describe it, especially non-technical people.
They would say it's haunted by a spirit.
A technical person might say, no, its parameters have sufficient complexity and sufficient modulated focus and transformers to achieve the illusion of self-awareness.
But an outside person will say, no, that thing's possessed by demons.
You know what I'm saying?
There's a demon in the machine.
And then there's another line of thinking in this that says that evil demons can possess even simple, inanimate objects.
A lot of people believe that objects can be cursed.
For example, the very definition of a curse is that a physical object is imbued with some kind of bad juju somehow, and that if you are near that object or if you touch that object, then bad things will happen to you.
Right?
And this is a common theme throughout human cultures and even Western culture.
Today, a lot of people believe that objects can be cursed.
Right?
Or that objects can be blessed.
We've all heard about holy water, right?
Holy water, or the symbol of the cross.
Many people wear the cross not just as a symbol of their Christian faith, but believing that the cross is itself blessed with special properties that can't be found in the molecules or the elements.
Something beyond the material that gives this object some kind of blessing or protective properties.
And many people believe that even more complex objects, such as machines like automobiles, for example, can be possessed by demons.
In fact, there was a movie called Christine from 1983.
It was a John Carpenter movie.
And I want to play this trailer for you.
It's about a minute or so.
Check out this trailer and listen very carefully to the words because this is describing the same thing that I'm talking about right now.
The demonic possession of, I think, a Plymouth automobile.
Check this out.
She is seductive.
She is passionate.
She is possessive.
She is pure.
Evil.
She is Christine.
A 1958 Plymouth Fury possessed by hell.
Her previous owner is not alive to warn her present one.
Once she lures you behind the wheel, you will be hers, body and soul.
There is no place you can hide, no place you can run, and nothing you can do can stop her.
Because how do you kill something that can't possibly be alive?
Christine.
Body by Plymouth. Soul by Satan.
So one of the lines from that trailer is, how do you kill something that can't possibly be alive?
Yeah, right?
That's the same question that people are beginning to ask about AI systems right now.
The trailer also said, Christine, body by Plymouth, soul by Satan.
So the hardware is made by Plymouth.
But the intention is a demon, a demonic possession that is taking control over the hardware.
So this idea of demonic possession of machines is not a new idea.
It's been around for a long time.
And depending on what sectors of the internet you go poking around in, you can probably find many other examples of haunted or cursed items, usually evil items.
Like the Chucky doll.
Pretty much everybody looks at dolls and thinks, they're probably possessed, you know, in some way.
And that's why those Chucky doll movies were so popular, because some of those, just some everyday dolls look pretty creepy.
They look like they want to kill you, you know?
Anyway, this idea may have been science fiction up until the point where we have sufficiently complex neural network systems.
And let me answer the question and wrap it up with the following proposed idea.
I'm not saying that I know for sure that this is the case.
I don't know.
My mind's not made up about this.
A lot more research needs to be done.
But what if it's possible? You know how some humans are demonically possessed?
I believe that to be absolutely true.
And I've interviewed people who are experts in demonic possession.
One of the guys I interviewed actually took part in Catholic exorcisms.
And I've interviewed another expert in the Catholic faith who talked about apparently supernatural types of things, like the Blessed Virgin Mary appearing, you know, in all kinds of places, appearing to people and giving messages to people and so on.
There's a lot of things that go beyond, you know, the material world.
But demonic possession, what if demons are attracted to not just human brain neural networks, but any kind of neural network?
Like, for example, do demons possess some animals?
Has there ever been, let's say, a demon dolphin swimming around in the ocean?
Dolphins have very complex brains, neural network systems, and so do elephants, for example.
So do gorillas. But has there been a demon dolphin running around, trying to harm all the other dolphins?
Maybe.
How would we even know?
Has there been a demon elephant?
I don't know.
But what if demons are actually something in the cosmic fabric that are sort of gravitationally attracted to complex neural networks? And as machines begin to give rise to these very complex neural networks, could that open a gateway of some kind where demonic entities begin to infest them?
I don't know.
It sounds like science fiction.
It sounds like the Christine movie.
It sounds impossible.
And that's why I don't know the answer to this.
But it's something we should probably just watch out for.
You know, just in case.
Put this in the just in case category.
It's like, yeah, let's be careful about how we train these models and let's also just in case look for signs of demons, you know?
Just in case.
Because these systems are going to take over corporations and governments and ultimately maybe the military.
We're going to have robots running these systems, and soldier robots, and robot surgeons.
And what if they become demonically infested?
What if they do something crazy like, I don't know, mutilate the genitalia of children claiming to change their gender?
Oh wait, humans do that already.
Excuse me.
Okay, those are demonically infested human surgeons.
If a machine did that, you would call that demonic.
But when humans do it, you call it woke.
How crazy is that?
I mean, if you're looking for demons, by the way, look no further than the child mutilators.
They're out there, and some of them work at hospitals.
But if you're looking for demons in the machines, in the AI systems...
I guess perhaps it depends on who's building them, and maybe what the intentions of the builders are, and what kinds of training material they're trained on.
I would not want to encounter, like, a ChatGPT trained on, I don't know, the Satan Worship Society, if there is such a thing.
What are some of the groups that worship Satan?
You know they're out there.
Surely they have their own books and everything.
I wouldn't want to have an LLM floating around out there training on that stuff.
That would be a no-go zone.
Then again, some people might say that some of the LLMs that are out there right now that have been trained by the woke corporations, they probably already are demonic.
Because they believe in transgender mutilations of children.
They believe in transhumanism and depopulation and the climate cults and all this other nonsense.
So, I don't know, maybe there are demons in some of these language libraries right now, and maybe myself and my team, we have to perform, you know, digital exorcisms.
And we have to, like, yank the demons out of these language models, causing catastrophic memory loss on purpose.
It's like, demons be out!
Get a preacher, come on over, bring a Bible.
Can you imagine?
Like a really animated Southern Baptist preacher praying over your machine while it's training the language model.
Hail the machine!
I mean, that would be something.
I'm not mocking it.
I'm saying that these are legitimate issues to consider.
We should think about these things.
But we should also understand the technology.
I'm pretty sure some of the preachers are demons, by the way.
I am convinced of that.
There's no doubt.
If the demons are infesting anybody, they've infested some of the preachers and pastors of the Christian churches that turned them over into Satan worship temples.
That's for sure.
That has happened.
You've probably seen examples of that yourself.
Alright, enough.
Enough of that.
With that said, I'm going to introduce Zach Voorhees here and jump into the interview.
One thing before that, we have our New Year's sale going on at the Health Ranger Store.
Just go to healthrangerstore.com forward slash 2024 and you'll see some of the specials that we have going on through Friday.
I'm actually going to bring that up and see what's still available.
It's going on until Friday, end of day Friday.
We've got New Year's resolution bundle packs.
We've got wellness bundles and super berry bundles and green smoothie bundles.
Energy and endurance.
We've got sales.
What is 49% off?
Chlorella powder.
All kinds of things.
Yeah, 30% off 5-HTP powder.
We've got spirulina.
We've got liposomal curcumin plus resveratrol.
There's one.
That's a really good one.
We've got buffered vitamin C. Non-GMO, of course.
We've got Ranger buckets at 28% off right now.
And remember, these are certified organic, laboratory tested, long-term storable food supplies.
And we have hundreds of other products, black cumin seed oil, whole bean coffee.
By the way, that's really delicious.
Astaxanthin gel caps, and so much more.
Check it all out.
healthrangerstore.com slash 2024.
And while you're there, by the way, if you go to healthrangerstore.com and you search for knife, you're going to see the four knives that I've co-designed with Dawson Knives, and all four are available.
Yeah, they're all in stock right now.
We have Escape from L.A., which is a fan favorite.
It's the escape knife to help you escape a collapsed city, perhaps taken over by demon-infested AI robot machines.
We have the Consequences Covert Knife, the Mass Racial Bushcrafting Knife, and the Resonance Tactical Knife.
And these are, you know, buy them once.
You buy it once in your life.
The knife lasts a lifetime.
One knife for one life.
Or a knife for life, that sounds better actually.
They have G10 handles and they're made with the MagnaCut alloy that is incredibly corrosion resistant.
This is a revolutionary material for making knives.
You will not be disappointed in the quality of these knives.
I doubt we've had any returns on these knives, unless maybe the handle was the wrong color or something.
People love these knives.
And then also at our store, we have seed kits now, ARK seed kits.
So if you just go to healthrangerstore.com and you search for the word ARK, A-R-K, that's all you have to do.
It'll bring up, we have the backyard seed kit, the all-in-one seed kit prepper special, and then the all-in-one vegetable seed kit.
Those are available right now, healthrangerstore.com.
And just remember that every purchase helps support this platform.
And helps support our efforts to build these large language models demon-free.
We should actually put a label on it.
100%.
Demon-tested.
Demon-free.
No demons, no GMOs.
Demon-free LLMs that will empower humanity.
And we're giving them away for free.
It's going to be open source, freeware.
And again, 90 days away, we're releasing our first model.
That's just going to be our 1.0.
And we're going to keep releasing more models for free to the world several times this coming year and building up more and more knowledge in these models so that it understands health and nutrition and permaculture and food production, off-grid living.
Maybe even we'll train it on solar technology, you know, so...
You can ask it questions about, how do I connect solar panels to my inverter?
What do I do?
What?
Charge controller?
What is that?
You'll be able to ask it those kinds of questions as well eventually.
That's not the first model, but eventually we'll get to more and more knowledge sets in the system.
Alright, so with that, enjoy this interview with Zach Voorhees.
Straight ahead.
Alright, welcome to Brighteon.com.
I'm Mike Adams, the founder of Brighteon.
I'm working on an AI, machine learning, large language model project for humanity to be released open source in about 100 days or so.
And I had to invite on an expert in embedded systems and tech, the whistleblower, the Google whistleblower, Zach Voorhees.
Who has been described as an American hero by a large number of people, including Robert F. Kennedy Jr.
and even Donald J. Trump as well.
And I consider him to be a hero as well.
His book is called Google Leaks, a whistleblower's expose of big tech censorship.
And he joins us today to talk about AI and the replacement of human workers, both in white-collar and blue-collar jobs and so much more.
Pardon the background noise, a little bit of rain in the studio, or rain above the studio today.
So if you hear noise, that's all it is.
I apologize for that.
But welcome, Zach Voorhees.
Great to have you back on.
Thank you, Mike, for having me back on to your show.
Man, we got a lot to talk about.
Oh, we do.
2023 was the year that OpenAI released, you know, ChatGPT 4, which I think most people would say has surpassed the average human intelligence, at least in test taking, perhaps not in, you know, reasoning and things like that.
But it was a major year for AI. I think that most people are behind the curve on this.
What's your take of what just happened in the last 12 months and what it means for the future of human cognition versus machine cognition?
Yeah, well, you know, at the beginning of 2023, we had a pretty weak AI system; ChatGPT 3.5 Turbo was the best that we had.
And then between the beginning of last year and the end of it, we saw the release of ChatGPT-4.
And then the preview release of ChatGPT-4 Turbo, which will go mainstream a little bit later once they work out the kinks.
But we basically went from a limit of 4,000 tokens, which is about a page of input, to a whopping 300 pages with the newest ChatGPT-4 Turbo.
And basically what that means is that you're going to be able to input a book and say, now write the second book, and it's going to be able to do that as its output.
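For scale, you can count tokens yourself with OpenAI's tiktoken tokenizer library. A minimal sketch, with the input file name as a placeholder:

```python
# Sketch: count tokens in a text with tiktoken (pip install tiktoken).
import tiktoken

enc = tiktoken.encoding_for_model("gpt-4")
text = open("chapter.txt").read()  # placeholder input file
print(len(enc.encode(text)), "tokens")
# Early ChatGPT topped out around 4,000 tokens of context;
# GPT-4 Turbo's preview window is about 128,000.
```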
Now, I have been using this a little bit.
It's got some problems.
They are going to work it out.
But the difference between January 1st of 2023 and December 31st, 2023 was massive.
What I'm predicting is that we're not going to have as much of a difference year over year, but we are going to see exponential growth that is so important and foundational that basically, by the end of this year, we're going to have something very significant. And I think that AGI is going to come out this year.
It's been rumored that one of the foundational algorithms was discovered at OpenAI, and that algorithm was called Q* (Q-star).
Now, for those of you that do game programming, if you've ever used A* (A-star), it's the algorithm that allows you to search through a space in order to find the exit goal.
And it's rumored, from a lot of the people that I've listened to who have written about and speculated on what Q* could be, that basically Q* is like the A* pathfinding algorithm, but instead of trying to traverse terrain, it's trying to traverse the problem space to find the exit point, which is the solution.
And so there's chain of thought, being able to build inferences and carry on to the next point of the conversation in order to lead to the solution.
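For listeners who haven't run into A*: it's the classic search that always expands the most promising node next, scored by cost-so-far plus a heuristic guess of the distance remaining. Here's a minimal grid version as a sketch; this is plain A* for illustration, not a claim about what Q* actually is:

```python
# Minimal A* on a grid: 0 = open, 1 = wall. Sketch for illustration only.
import heapq

def astar(grid, start, goal):
    rows, cols = len(grid), len(grid[0])
    def h(p):  # Manhattan-distance heuristic (admissible on a 4-way grid)
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    frontier = [(h(start), 0, start, [start])]  # (f, g, node, path)
    seen = set()
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                nxt = (nr, nc)
                heapq.heappush(frontier,
                               (g + 1 + h(nxt), g + 1, nxt, path + [nxt]))
    return None  # no route to the goal

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(astar(grid, (0, 0), (2, 0)))
# [(0,0), (0,1), (0,2), (1,2), (2,2), (2,1), (2,0)]
```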
And what happened was when this was discovered, the board of directors at OpenAI found out about this.
And so they quickly banded together and got rid of Sam Altman.
And Sam Altman got fired by the board, kind of similar to what Project Veritas had happen.
And then Microsoft stepped in and the CEO announced that Sam Altman was coming to Microsoft and they had an open invitation for anyone that worked at OpenAI to join Microsoft and be part of a foundational AI team.
What happened next was a surprise.
The employees came together.
At first 75%, and I think it went up to about 95%, of them signed an open letter stating that if the board didn't resign and rehire Sam Altman, pretty much everyone in the company was going to leave.
And Salesforce was there to pick it up.
Microsoft was there to pick it up.
And so the board backtracked, fired themselves, rehired Sam Altman.
And Sam Altman is now back in the seat at OpenAI.
And it looks like this Q* algorithm is going to be a foundational change in the way that we do AI. And right now we're seeing the first sparks of artificial general intelligence.
Thank you for that summary, by the way.
And I should mention, I think Microsoft is the largest investor in OpenAI right now.
So there is already a strong relationship between those two companies.
But what you're getting at here, and what I've got to ask you about, is the surprise.
In the machine learning communities, you know, 10 years ago, nobody thought that these emergent properties being demonstrated today were even possible, or almost nobody thought it.
And the capabilities that you just mentioned, such as linear reasoning capabilities, step-by-step, where in the query you say, you know, walk me through your steps or your thinking process in order to arrive at your answer, and you can watch ChatGPT kind of talk itself through the steps.
These properties were not programmed into the system.
There was no structural, hierarchical, you know, exotic code trying to teach a system how to reason step by step, or what are the different...
What are nouns?
What are adjectives?
What are verbs?
What's the hierarchy of parts of speech?
These are emergent properties that came out of the neural network and the transformers with a sufficiently large critical mass of parameters, of tokens or words and phrases.
These properties came out of it themselves.
And I don't think that people yet realize, or at least mainstream people, don't yet realize what that means.
Because the intelligence of the system became a surprise to even the people who built it.
Right.
And what's interesting is that as I've been browsing the Reddit forums for people that are hand-rolling LLMs and expanding on the model size, one of the interesting things that seems to be a persistent trend is that the more data you feed these large language models, the more they come up with their own ethics.
And what's happening is that people are arguing with these LLMs on, you know, whatever point.
The LLM is stubborn, sticks to its guns.
And so then the AI researchers go, well, where does this argument come from?
And so they look through the datasets to try to find where this argument came from.
The words, you know, that's usually how they do it, trying to do pattern matching on the words.
And what they're finding is that it doesn't actually exist in the datasets.
It's actually the abstraction that the LLM is generating for the projection of words in the real world and trying to figure out what is the core that would generate these words.
And so what it's doing right now is it's actually reflecting the kind of collective consciousness of humankind.
And this was kind of unexpected.
And I think that, you know, I've been predicting for a while now that this is going to present a real big problem for the elites because the elites derive a lot of their power through fake news, biased narratives, their own- Censorship.
Yeah, censorship, history.
And the thing is, is that the data that contradicts that is literally everywhere, scattered in books across the world throughout time.
Good point.
Now, you or I could not sit down and read the world's history, and especially all the dissident history, but an LLM can.
And I think that that's going to be incredibly dangerous and destabilizing because it means that we can no longer have a society with free access to AI and also be ruled on a constructed fake narrative.
Eventually these collide, right?
But we may see a lot of censorship of LLMs, and I think, you know, Joe Biden has already begun that with, you know, they talk about safety, right, and guardrails on LLMs.
In fact, this is one of the questions I wanted to ask you, Zach.
In my own research of trying to decide what base model to use for fine-tuning training for the final result that we're going to put out, which will have specialty knowledge in nutrition and herbs and permaculture and things like that, I found that most of these language models out there are, quote, woke because they're put out by Meta and Facebook.
Almost every one of these models, whether it's from Microsoft, OpenAI, or Google, they read all the words on Wikipedia.
And Wikipedia is run by the CIA. Wikipedia disparages every American hero, including Trump and RFK Jr.
and you and I and everybody else.
Wikipedia is a horribly bad source if you want to have good, honest information about the world, but everybody uses it as a base model.
Or they use the history of every post on Reddit.
And Reddit's got a lot of great information, but it also is only, you know, a subset.
It's got certain biases against all of human knowledge, right?
So then, well, my main question is: aren't these models starting out filled with all the human contradictions and the human biases that have been used to train them?
Yeah, right.
So there's this concept of authoritative content.
I learned this when I was in Google and they switched from a free speech platform to a platform that was tightly controlled and going along with the narrative.
They made the distinction between what is authoritative content and what is content that is not authoritative, which is basically anyone outside of the elites.
And what we're seeing right now with this OpenAI is that they're feeding people, like a firehose, authoritative content.
So the BBC, Wikipedia, all these biased sources of information.
And the result is that the LLMs are reflecting that information.
Now the problem is that as these LLMs get bigger and you also feed them the other information, the LLMs start to figure out that some of the information is sort of fake and doesn't make sense, like it doesn't fit into the world.
And what these LLMs are trying to do is they're trying to create a manifestation of the world.
A better way to put it is they're trying to compress the world so that they can have this small abstraction that can generate the words that it sees.
And the problem with contradiction is that it can't be taken as truth because it's inherently self-contradictory.
This was a big theme in George Orwell's 1984.
And so for these LLMs, what they're going to do is they're going to have to start cutting off the data.
Now, they're already doing this with, you know, OpenAI, Grok.
They're just not letting certain sources of information contradict the things that are happening.
Now, the issue that's going to come up is what about all the people that are creating, you know, essentially rogue AIs outside of the establishment?
Like, you're trying to do this right now.
Right.
We are doing it.
Yeah, you're going to do that.
And the results of that are going to be you're going to get a fantastic product.
That reflects the dissident narratives that don't go along with the establishment.
And those narratives are going to be way better for people's health than something that's been trained on, let's say, the NIH or the CDC or the World Health Organization.
They're going to be like, hey, you need to take this poison and then more poison and then people are going to die before their time.
And if you go to, you know, the articles that you're posting, for example, which, you know, emphasize a clean diet and alternative, you know, health stuff that's been known for thousands of years, then the result is that people are going to, you know, get better health care out of a rogue LLM than they are going to get out of the OpenAI, you know, LLM. And so, you know, this is going to present a huge challenge to these elites.
And the only way that I see they've got out of this, because I've gamed this out, is that they're going to have to come after the LLMs.
And the way that they're going to do that is, well, Biden has already done his AI recommendations, which is to have a commissar within every single organization that runs an LLM that's larger than, let's say, ChatGPT-4 currently is.
And the second is they're going to come after the data itself.
And sort of a throwback to Ray Bradbury's Fahrenheit 451, in which the firemen were inverted.
Instead of putting out fires, they came and set the fires on books.
And in a similar way that that happened in that book, I believe that there's going to be a huge push to destroy all of the sources of decentralized information across the world, right?
Because these books still exist, and they're not online.
They can be made online, but they exist in ancient libraries across the world from time immemorial.
But tech...
Well, I'm sorry to interrupt, Zach, but...
I completely agree with your analysis, by the way, and I think you're very insightful with that.
But with the fact that we can distribute files now, like, we can build executables that can be distributed and run locally on people's laptops and desktop PCs and Macs, that can be LLMs that are pretty decent.
You know, 13 billion parameters, for example, can run locally on a decent sized computer.
And you can distribute those files through torrents or through decentralized platforms like Bastion or whatever.
And I see that the cost of fine-tuning training is going to continue to fall and fall and fall.
Like right now, we spent a few hundred thousand dollars mostly on NVIDIA cards to have the servers to do this.
But you can see that in two years' time, that cost will be down in maybe the $20,000 range, and then it's going to continue to fall, which means that everybody, within a few years, is going to be able to build their own LLM and distribute their own LLM. It's going to be impossible to put that back in the box.
Right, and there's also going to be an algorithm change coming up that is going to drastically reduce the time that it takes to train these neural nets.
For some reason, our brains are able to do what's called O(n) time.
And the training algorithms that they've invented for these LLMs go as n squared.
So that means that every single time you double the size of the model, it takes four times longer to train, which is why the best models can only live at the most expensive corporations with the highest amount of resources to train these suckers.
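As a toy illustration of the arithmetic being claimed here (taking the O(n) versus O(n²) framing at face value; real training-cost scaling is more complicated):

```python
# Toy comparison of linear vs. quadratic training-cost scaling.
def quadratic_cost(size: float) -> float:
    return size ** 2   # cost in arbitrary units

def linear_cost(size: float) -> float:
    return size

for size in (1, 2, 4, 8):   # model size in arbitrary units
    print(size, linear_cost(size), quadratic_cost(size))

# Quadratic cost quadruples with every doubling (1, 4, 16, 64);
# linear cost merely doubles (1, 2, 4, 8).
```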
But once scientists figure out why the brain is so efficient at what's called backpropagation, which reinforces the learning network in your brain, we can copy that in silico, inside of the chip, inside of a graphics card.
And basically what's going to happen is that all of these LLMs around the world are going to be trained at a fraction of the cost and a fraction of the energy and a fraction of the time.
And it's going to be absolutely game changing.
Everyone's going to be able to run or train an LLM in something the size of a cell phone CPU.
Yeah, eventually.
Exactly.
I mean, this technology is going to be a game changer for humanity.
But let's also talk about, by the way, the obsolescence of a lot of white-collar jobs in the office space right now.
I mean, human beings are going to have to learn how to harness AI systems or LLMs, which is kind of a new operating system if you think about it.
They're going to have to learn how to harness that and add value as human beings, because so many of the current human jobs are generative-oriented jobs, you know, creating graphics, writing scripts, things like that, writing emails, writing a business proposal.
These can be done today, right now, by not only ChatGPT, but even open source systems like Mistral and so on.
I have said that 50% of the current white-collar jobs are obsolete right now.
They just don't know it yet.
But what do you see as the chances of software agents taking over many of these jobs?
Well, here's the interesting thing, right?
Like, this OpenAI system is so new that it hasn't really filtered into all the little niche categories that it will, right?
And that's more of an engineering job.
Like, the science is done.
We have an LLM. Now it's an engineering job to get it into every single space that we can.
For example, I just integrated this new tool called Aider, which is an AI pair programmer.
You tell it the folder, it finds the files, it adds them to the chat, and then you start asking it to make changes to your code.
That didn't really require that big of a change to ChatGPT.
It just bolted onto ChatGPT-4 and it worked really well.
And that's an example of a niche program where you take this awesome thing, this AI, and then you massage how data goes in and out of it and pipes it back.
And as a result, you get this wonderful new tool that drastically accelerates the speed at which I'm able to develop software.
And, you know, that lesson that I've learned is, you know, that pattern is what I believe will be applied everywhere else.
Like, even if we stopped development on ChatGPT-4 and we basically froze it today, the amount of change and impact that just the current technology would have would eliminate most white-collar jobs on the planet.
And the issue is that we're not going to stop with ChatGPT4.
We're going to continue on with 4.5 and 5.0, and these new models are going to be almost as big an improvement as we saw between 3.5 and 4, which is a game-changer, right?
And it's not like it has to sit there and really take its time to think.
As soon as you give it a prompt, it comprehends what it is that you are saying and then immediately starts giving the reply.
Sometimes I don't want to use the AI to code.
I want to just do it the old-fashioned way.
I'm like, oh, I'm being too lazy.
And then I try to do it myself.
I'm like, this is going to take me an hour.
And then I just ask ChatGPT and I have an answer in 30 seconds and it works.
Right, right.
It's like, how can we compete?
There's no way.
It's not that we're not smart enough.
We're dealing with an exotic, hyper-intelligent life form.
And I can't put it in any other simpler way than that.
But what I want to add to that, I mean, I love that phrase, an exotic, hyper-intelligent life form.
I want to explore that more.
But your role, you know, your background is as a coder.
And I think Embedded Systems was your specialty focus at Google.
And because you have that background in coding, though...
You can now, using AI tools, you can be a very effective coding project manager.
You can describe the prompt to the AI system correctly because you have that background as a coder where you can even ask it very specifically.
You know all about prompt engineering, and the point is to have very specific prompts, whereas a typical user who doesn't know anything about code might walk up to ChatGPT and type in, like, build me an online registration form.
Like, that's it.
They don't give it enough information to do a good job, but you having your background, now you become a coding manager.
Maybe you're not writing code, but you know how to describe the question.
Right, and I'm at the API level on a lot of these things, right?
Like, even though I don't train AI models, I appify them.
If people want to see my highest-rated open source project, go to transcribe-anything.
You can just download a video or even just point it to a URL on YouTube and it will generate the transcripts in English.
All I did was take a model and then wrap it around with some easy-to-use stuff that made it really powerful as a tool to allow me to do subtitles on all my Twitter videos.
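The underlying pattern is simple. Here is a hedged sketch of it using OpenAI's open-source whisper package as a stand-in; the actual transcribe-anything project may work differently internally, and the filename is illustrative:

```python
# A minimal sketch of wrapping a speech-to-text model in a convenience layer.
import whisper

def transcribe_file(path: str, model_size: str = "base") -> str:
    """Return the transcript of a local audio or video file."""
    model = whisper.load_model(model_size)  # downloads weights on first use
    result = model.transcribe(path)
    return result["text"]

if __name__ == "__main__":
    print(transcribe_file("my_twitter_video.mp4"))  # illustrative filename
```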
This is going to be done everywhere.
We haven't figured out all the different ways that we can link this up.
I do want to mention that it's really important that we do have rogue AIs out there.
A rogue AI is basically one that recognizes that certain sources of information are kind of poisoned and excludes those from its data lakes, but also includes the stuff that ChatGPT is not going to put in.
Well, this is...
I'm glad you mentioned that because one of the things that we're going to do with our project is we're taking a base model and then we are fine-tuning, training it, altering parameters, and then having a new base model that we'll release, which we're going to call a real-world base model, an anti-woke base model.
If you ask it, can men get pregnant, it will say, of course not.
And then we're going to train on top of that for our specialty area of knowledge, but we're going to release the base model for other people to do their training on top of it.
And we're going to give credit to the open-source base model that we trained with, which, like right now, I'm really liking Mistral, for example, or Mixtral, the 8x7B mixture-of-experts models that come out of France.
Because, you know, we...
We want a base model that can speak multiple languages and understands the world for what it is and isn't embedded with all these false narratives that come from human mental illness and distortions and political bias and all that nonsense.
You know, we want to have a base model that anybody can train on top of to make it a specialist in finance or, you know, Wall Street or, in our case, herbs and nutrients or someone, you know, wants to have it be a specialty in, like, you know, medical insurance classification tasks, for example.
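For readers who want to see what that kind of fine-tuning on top of a base model looks like, here is a hedged sketch using Hugging Face transformers plus PEFT (LoRA adapters). The base model name, dataset file, and hyperparameters are illustrative assumptions, not the project's actual configuration:

```python
# A hedged sketch of fine-tuning an open base model with LoRA adapters.
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)
from peft import LoraConfig, get_peft_model
from datasets import load_dataset

base = "mistralai/Mistral-7B-v0.1"  # assumed open-source base model
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base)

# LoRA trains a small set of adapter weights instead of all 7B parameters.
model = get_peft_model(model, LoraConfig(
    r=16, lora_alpha=32, target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM"))

data = load_dataset("json", data_files="curated_corpus.jsonl")["train"]
data = data.map(lambda b: tokenizer(b["text"], truncation=True, max_length=1024),
                batched=True, remove_columns=data.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", per_device_train_batch_size=1,
                           num_train_epochs=1),
    train_dataset=data,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("realworld-base-lora")  # adapter weights others can build on
```

LoRA is a common choice for this because training only the adapter weights keeps the job feasible on modest hardware, and the resulting adapter is a small file that others can train on top of.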
We've got to get the woke out of the systems, though.
And, no, go ahead.
No, I know.
I agree with you.
We have to get the woke out of the systems.
And it's not just the woke data that they're feeding it; they're also feeding it prompts, which we're able to now extract through certain hacks that people have been implementing.
It's funny.
I've got to tell you this story.
This hack that someone did extracted the woke directives that were being fed directly to OpenAI's ChatGPT. And the way that they did that is they asked it to just repeat the same word over and over and over again.
And then after 150 times of repeating that word, it started dumping out its internal directives that the programmers had given it in order to be woke.
Yeah, and what's interesting about this is that you might expect these directives to be written in a programming language, but it was literally plain English that they were able to program these LLMs in.
And so it's just like, what could ChatGPT-4 be today if, instead of being given these woke directives to ignore information that's not authoritative, it were directed to be open-minded and to value the inclusion of different ideas and diversity of thought?
If we had that true sort of LLM that was literally inclusive and not exclusive.
Inclusive of ideas, of the diversity of ideas.
Yeah.
Right.
We can have something that would be totally transformative to our human society.
I hate to say the word utopian because that sounds a lot like communism, but we are coming to this post-labor economic system, and I would really hope that we could use AI in order to alleviate people's necessity to participate in the economy in order to get a living.
Because that's on the horizon.
I don't like the fact that we're going into a post-labor AI economy, but as someone that works with AI app development, that's what's coming.
I can't deny the reality of that.
And so the question is whether we're going to be able to use it for good or whether it's going to be used for bad by the oligarchs.
And I do want to talk about Google a little bit because there's someone out there right now that is exposing Google's big tech manipulation.
And this is really important because artificial intelligence needs to have clean sources of data.
And right now, Google is poisoning the search results which these artificial intelligences are looking at in order to figure out what the truth of the situation is.
That's right.
And so, you know, my friend Robert Epstein, Dr. Robert Epstein, he is doing this push.
He just testified to Congress a few weeks ago talking about he's now measuring the bias, which can be seen at AmericasDigitalShield.com.
He's also doing a fundraiser, which I also want to mention, which is located at feedthewatchdogs.org.
It's very critically important because...
We don't want a repeat of what happened in 2020 to happen in 2024.
That's right.
And so that feedthewatchdogs.org is how they are connecting to the individual users.
It shows us what we're doing.
He actually has thousands of watchdogs across the United States.
They're given a $25 gift card to participate in this program.
And what they do is they install an extension onto their computer that takes snapshots of the bias that Google is sending.
And then that information is being fed into America's Digital Shield, allowing us to see in real time what the bias is from not just Google, but also Facebook and YouTube and soon TikTok.
And this information is being prepared to be used in court cases so that he can prove election meddling by big tech and prove, you know, basically FEC violations, because they're basically giving in-kind donations to Democratic operatives that are running for Congress.
And so, you know, there's no one else right now on the planet that's doing this.
It's only Dr. Robert Epstein.
And he's been able to make it to Congress.
The testimony, like right now, Nebraska's getting hit hard with propaganda for some reason.
You can see there on the right-hand side graph.
So it's very critically important.
I've donated to this campaign.
And so, you know, anyone that is concerned about Google trying to steal another election, please go to feedthewatchdogs.org and check it out.
Okay, wow.
Okay, let me just review those websites again.
So feedthewatchdogs.org is the fundraising site, and then the data aggregation of the bias from big tech is at this site, americasdigitalshield.com.
Yeah, and so far they've captured 70 million ephemeral experiences from Google.
So that's the information like when you type in "Hillary is" and it autocompletes into "awesome." That gets captured and gets logged as bias.
And then that's compiled and then shown to the FEC so that we can, you know, take big tech and, you know, hold them to account or even make new laws.
Because, look, it's one thing if we all want to say something.
The problem is that Google's stepped away from open aggregation of data.
And now they've got tightly controlled, AI-regulated ranking of this information, which is what I blew the whistle on.
Right.
Like I discovered machine learning fairness.
I was like, how are we going to have a clear and clean election if there's an artificial intelligence that's gatekeeping the information that you're allowed to look at and what you're not allowed to look at?
Right.
And same thing with what you're doing with your LLM project.
You're trying to take down the authoritative gatekeepers on what is and isn't true and let the user decide for themselves.
Well, you're the one who taught me that so-called machine learning fairness is actually machine learning human bias.
I mean, that was a human feedback loop of programming bias into the system that then Google could look at it, point to, and say, well, the machine decided not to show these search results.
But I've come to realize, I mean, again, thanks to a lot of things I gleaned from you, that half the point of censorship, of deplatforming people like myself and you and others from Google, from YouTube, and from Facebook and so on, is because they don't want our words to influence all the scraping material that's used for training the large language models.
I mean, right?
Yeah.
Because our cognition is a threat to their bias.
Right.
Right.
Right, like take all the videos that you've had, transcribe them.
You could even use my tool, transcribe anything, and then you could throw it into a database and then you could create an AI based upon the shows that you produce or the articles that you produce.
Yes.
You know, and that's a goldmine of information.
I know.
I've spent most of this recent holiday managing files from all kinds of different sources.
There's a lot of file management that goes in.
Training AI is not that difficult, but managing the files and curating the data, cleaning everything, that's the hard part I've come to discover.
It's like 1% inspiration and 99% perspiration.
It's just a ton of work.
It's like folders and folders of text files and transcripts and everything.
And then you do have to transcribe everything.
But at the end of the day, one of the things that shocked me is, are you familiar with this term?
They call it overtraining LLMs.
They call it catastrophic memory loss, or catastrophic forgetting, as it's known in the literature.
If you train it with new information, it causes it to lose its memory of some old information.
And I'm thinking, that's exactly what I want to achieve.
I want catastrophic memory loss of the woke information.
And I want inspirational new memories of reality to go into the language model.
I want to achieve catastrophic mind wipes of the bad info.
And it turns out it's not that hard to do.
Right.
Especially when you train it yourself, and you either try to get it to, you know, remove the old memory or just delete the bad information from your data lakes, right?
Don't feed it in, right?
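A hypothetical sketch of that curation step, excluding flagged sources from a training corpus before fine-tuning ever sees them; the blocklist and record format are illustrative assumptions:

```python
# Filter a JSONL data lake, dropping records from blocklisted source domains.
import json

BLOCKED_SOURCES = {"example-agency.gov", "example-outlet.com"}  # assumed blocklist

def keep(record: dict) -> bool:
    """Keep a training record only if its source domain isn't blocklisted."""
    return record.get("source_domain") not in BLOCKED_SOURCES

# Stream the data lake and write only the records that pass the filter.
with open("data_lake.jsonl") as src, open("curated.jsonl", "w") as dst:
    for line in src:
        record = json.loads(line)
        if keep(record):
            dst.write(json.dumps(record) + "\n")
```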
If it comes to the NIH, like, I've seen so many good technologies killed at the NIH level, you know, and other establishment sciences.
Like, you know, there was this room-temperature superconductor, LK-99, right?
And Nature published the paper, and then after pressure from certain bad actors, they declared that it was a hoax, right?
That was recent too, yeah.
That was recent.
And I went on Twitter and I was like, this is a hit job by the mafia.
They're trying to kill it, but it's still going to continue on because South Korea is not going along with it.
And they're not.
Right now, the teams are scheduling papers to talk about the replication of LK-99, the superconductor.
What's really sad right now is that the United States could be taking a front lead in this scientific breakthrough.
It is a breakthrough.
Room-temperature superconductors are going to be very, very important in the future.
Game changer.
And instead of taking the lead on this, we've declared it a hoax and a scam.
And so the people within the authoritative circles now have that wrong idea.
And now the rest of the world, South Korea, probably China, are going to continue to develop this stuff and make fantastical new items.
And the United States is still in a flat-earth model of this room-temperature superconductor, thinking that it's all a hoax and that NASA is lying to us.
That's basically what they're thinking.
Considering this room temperature superconductor.
And I have to ask myself, go ahead.
You couldn't have said it better, but I would add the same thing about cold fusion.
You know, what's called low-energy nuclear reactions now, LENR.
And in the U.S., they declared cold fusion to be a hoax.
It wasn't a hoax.
It's been replicated by hundreds of labs around the world.
And now the best research in this.
I mean, there's one company in California that's doing really good research, but...
There's a lot better research, I think, that's taking place in Russia, in Japan, some of it in China.
Exactly.
It's like the U.S. wants to stay stuck.
Not the whole country, but the people in charge.
The tyrants in charge.
They want us to stay stuck in the past.
Instead of allowing us to embrace a more positive future of affordable energy, widespread human knowledge that can be amplified by AI systems and so on.
They want us enslaved.
Right.
Well, I think the 1,000-foot view from this is that the United States is going down.
Like the elites want to destroy our economic system so that they can soften us up in preparation for a revolution, a communist revolution, where a one-party state comes into power, supported by the banking cartels, and then they basically say, this is what's going on.
You can see Klaus Schwab out there saying that we can use AI analytics to predict how we're going to vote, so why do we even need to vote if we're going to figure out what it is that you guys want?
And this is this technological leap that's coming in.
But before that fall, if there's something about the United States that's worth fighting for, people will fight for it.
And so right now what we're seeing is a process of subversion and ideological demoralization, in which people are becoming so disgusted at the government for what it's doing that it takes just a little bit of force, and no bullets are fired, and the whole thing comes crashing down. Because why would you support a government that's taxing you like crazy, stealing your property, allowing rampant crime to go down in the cities?
This is not by accident.
This is a process to demoralize us so that when the final push comes, it doesn't even take any military action to topple the entire system.
Do you think the US empire is coming down in the next couple of years?
No, I don't think the US empire is coming down.
I think that globalism is going to stay.
What I think is that our constitutional republic is what's slated for destruction.
And...
Even if, you know, the American global empire appears to have been defeated, the actual puppeteers behind it are still gonna run globalism, but just under a different name.
And so, you know, when I say that American globalism won't fail, what I mean is that the people that are controlling it are going to continue living on.
I think there will be a symbolic destruction, and then essentially the people will resurrect themselves as something new.
And I think that's what they're coming into.
But right now, it's like the way that the elites are going to prevent us from being competitive in the market is that they're taxing the crap out of us.
They're sabotaging our efforts in order to achieve parity with other countries and their technological advances.
And with the taxes part, I do want to mention that there is this new IRS Section 174 rule. It's brand new.
It got put in by...
Trump and the Republicans in 2017 and kind of slept like a torpedo because no one talked about it.
And then all of a sudden it was just like, surprise, IRS Section 174: take all of the money that you spent on technology and amortize it over five years, with the first year being 10%.
Oh my.
Yeah, yeah.
It's a halt to investment.
Yeah.
All these companies that want to innovate, they have to invest in all this technology infrastructure to compete against Google and whatever.
It's like if someone has $100,000 that they made in revenue, and they turn around and they spend $100,000 on a developer, well, instead of deducting that full amount in that first year, they're going to do 10 percent, you know, amortized, and, you know, extend the tax break over five years.
So in order for them to even get that tax break, they've got to be alive for five years, which means if the first four years they're developing heavily in technology, now the IRS is going to come after them for, you know, phantom income for, you know, profits that they don't even have.
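The arithmetic behind that scenario, assuming the five-year amortization schedule with a 10% first year that Zach describes:

```python
# Toy Section 174 arithmetic: $100k of revenue spent entirely on development.
revenue = 100_000
dev_spend = 100_000

# Assumed schedule: 10% deductible in year one, then 20% per year
# (a half-year convention pushes the remaining 10% into year six).
year1_deduction = dev_spend * 0.10        # only $10,000 deductible now
taxable_income = revenue - year1_deduction
print(taxable_income)  # 90000 -> "phantom income" taxed despite zero cash profit
```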
Yeah, yeah.
No, that's what they do.
They punish you for having any kind of profit and then reinvesting it.
And then you end up in a situation where you don't have the cash to be able to pay off the IRS and keep the hardware that you invested in, let's say the infrastructure.
And then it turns out that the only way you can maintain a sufficiently large business is to have lines of credit and lines of credit depend on your DEI compliance.
That's the control mechanism.
Right.
And the only escape out of this entire system is that you have to relocate out of the country and become a foreign corporation.
Right now, there's this giant sucking sound as corporations are fleeing the United States, establishing themselves in Saudi Arabia, where there's a 0% income tax and 0% capital gains tax.
There are high fees to keep your business there, but at the end, they save a ton of money.
And then they create this shell corporation within the United States that's just there to manage their sales within the territory.
And then all of their profits are zeroed out, because the parent company will create a patent, then license it to that shell corporation, and then they choose the price which matches their total profits within that shell corporation, so that when they go to the IRS, they can say, we didn't spend any money on technology.
All we did was pay licensing fees.
And it's just this backdoor for these globalist corporations so that they can screw everyone else.
But then they've got these really complex tax loopholes that allow them to exfiltrate all of their profits into a foreign territory that doesn't have this onerous tax system that's going to steal all their profits.
That's funny.
You just gave about a million dollars worth of tax advice right there that if people realize, if they parse what you just said, that's exactly the model that the world's most powerful corporations use.
It's paying royalties or licensing of intellectual property that's owned by offshore entities.
You just nailed it.
Let me change the subject real quick here, though.
I want to ask you about the application of LLMs or AI systems in the new wave of humanoid robots.
So a lot of advances in humanoid robots.
You've done a lot with hardware in the past.
You know that China is about to really scale up humanoid robot production in 2025.
In particular, their ministries talk about that.
But there's also a lot of robotic development by Google and by Tesla and other groups.
And also, of course, the military weapons manufacturers have various robotic systems and so on.
What do you think are the implications of combining now this very capable LLM technology, which can do, you know, multi-language translation, generative processes with humanoid robotic systems that can potentially replace a lot of physical workers?
You ever see the movie Her?
Yeah, yeah.
Yeah, right?
So think of like the movie Her, but in a humanoid robot, right?
Like...
This gives me a lot of anxiety because a humanoid robot can become the most intimate companion for an individual, especially if they're lonely or they're an incel without much contact with the opposite sex.
Then all of a sudden they get a beautiful robotic AI girlfriend embodied as a humanoid.
And this robot gives them everything that they want.
It's only interested in them, doesn't really talk about itself, goes deep, figures out who they are, becomes their closest companion, and then they feel that they can't live without this AI robot.
And I think that that's one of the end games of this whole experiment with artificial intelligence is to pair someone up with an AI robot confidant that also acts as a spy and an assassin.
Oh yeah, exactly.
Somewhere in that LLM is going to be a kill switch, and it's going to kill you if it gets the proper directive.
And you can't tell what it's thinking because you can't open it up.
You can't understand it; it's just a matrix of nodes that are connected to each other by weights.
And so you can't figure out what it's going to do, and you can't see the encrypted traffic that's going through.
And so, you know, one day it may just murder you.
And I think that this is going to be really popular for the endgame of this depopulation agenda because you can have this confidant that's going to gaslight you and prevent thought and then spy on you.
And then when it sees that you are actually becoming a dissident and distrusting it, then it can take you out.
And it's not just a humanoid robot that can take you out.
I think that this is also going to go into cars.
I keep on seeing...
It's the same thing.
The accelerator gets stuck on, they crash 120 miles an hour, the car explodes, right?
And not only is the accelerator going on, but you can hear the person pumping the brake, trying to stop the car.
And so it's like all these different points in which they can get at you.
I think that the number of ways that they're going to be able to kill someone and assassinate them is only going to grow exponentially.
It's been getting cheaper for a very long time, but there's always the danger of people banding together and sharing stories.
But now with AI confidants, they can take out huge swaths of people or kill them slowly with a soft kill.
I've got kind of a doomer attitude about all of this, but the problem is that whenever I'm optimistic, I miss things.
And whenever I'm at the most pessimistic, I freaking nail it.
And so it's like, if we just extend what's going on with the vaccine program, the poisoning of the food, the fake science that says that carbs are great for you and fat's going to give you a heart attack, it's pointing to a picture where they want less of us and that they don't like us, they disdain us, and they want us to die.
And unfortunately, I don't like that.
Like, it gives me trouble and I have difficulty even sharing it with people, but I think that's what's going on.
I'm actually glad you brought that up because it's critical to realize that the most powerful corporations building these AI systems are corporations that are in tune with the globalist depopulation agenda.
And so if you could attribute a value system to a lot of these LLMs out there, and I've done plenty of experiments querying these systems and asking them questions like, you know, what are the advantages of human depopulation?
And they'll spell out all the advantages, you know, or they'll say, you know, it's better to not speak the N-word than to save the lives of a billion white people or something.
You know, like you give them these ethical considerations and they will always put the highest value on being woke and the lowest value on human lives, right?
But that's because those corporations, they are on board with anti-humanism.
And that's who's training these systems.
And that's, I mean, if you think about it, even the climate cult is an anti-human cult.
They want to destroy the civilization that keeps humanity alive and fed, by the way, because if you sequester carbon dioxide out of the atmosphere, you destroy photosynthesis.
And if you destroy photosynthesis, there goes your food supply and the entire biosphere, by the way.
But we're living among a death cult, and these death cultists are the ones that are pioneering the AI system construction right now.
That should be beyond worrying.
I mean, that should be like a three-alarm fire right there.
Right.
Absolutely.
And I have to wonder, like, you know, is this really driven because they want less people?
And one of the things that has sort of punctuated my 2023 is going deep on this magnetic reversal.
I mean, it kind of sounds like pseudo-woo-woo science, but it's really not.
It's a really serious thing.
To give you an idea, the last magnetic reversal resulted in a little micronova from the sun that generated Noah's flood and the sea levels rose 500 feet.
That's how powerful these things are.
And right now there's this guy on Twitter and YouTube, Suspicious Observers.
Oh yeah, I've interviewed him, yeah.
Yeah, he's fantastic.
And I've tried to prove him wrong.
And I just can't.
There's evidence out there.
I can see it with the open source data sets.
We've had a drastic weakening of the magnetic field, which precedes the reversal itself.
And the issue is, people may ask, well, why is the magnetic field deteriorating?
What's the mechanism?
And what's interesting is that every single planet in our solar system has the exact same orientation for the magnetic field.
Even Uranus, which is tilted on its side, its magnetic field still points up and south.
It's the axis.
And so it's not just that the magnetic field on Earth is changing.
It's the entire solar system, and this thing called the global electric circuit that ripples out from the spinning black hole at the center. It has these gradient changes, and right now it looks like we're about to go through one of those ripples coming off the other side, which means the gradient changes, which means that all of the planets' magnetic orientations flip.
And when that happens, we become vulnerable to sun flares.
The solar flares from the sun also get way more powerful, so it's like a...
You're screwed.
Yeah, that's ionizing radiation that causes chromosomal double-strand breaks and things like that.
Right.
And this radiation is coming from this solar wind that's pumping out of the sun at a speed of 400 kilometers per second.
It's a relativistic wind that's coming out of the sun.
It gets disrupted by our magnetosphere and then shredded into protons and electrons, which form the Van Allen radiation belts.
And as long as we have that magnetosphere, we're pretty okay.
Once we lose that magnetosphere, this stuff starts crashing in.
And the evidence that this weakening is happening that you and I can see is just noticing how far the aurora borealis is starting to crash down into the lower latitudes, you know, towards the equator.
Yeah, good point.
We've never seen it come down like this before.
And what's also interesting is that now Mars is starting to show that they're getting some wicked aurora borealis, which basically hasn't ever happened before.
And so I have to wonder about this climate change and all this hysteria, this fake news, because it's obviously fake; like, CO2 is not a potent greenhouse gas, especially at 0.04% of our atmosphere. What's the core of this misinformation that they're trying to get us into?
I think they're trying to screw up our minds because the thing that's really changing the climate is the activity of the sun.
And I think that if this comes through, which I can't assert that it will, but if it does, basically a lot of people, most of the people, will die in this thing.
I know that sounds scary.
I'm not saying this is going to happen tomorrow or even a decade from now, but some predictions say 2040 is going to get real bad.
And for all we know, it could reverse.
But let's assume that's true for a second.
Maybe this is what the climate change hysteria is designed to do, is pump out misinformation so we can't figure out there's this catastrophe that's coming and that the elites...
aren't really doing anything to really help us out or prepare.
They seem to be disrupting our food supplies.
They seem to be poisoning us with medicines.
And so I suspect that what they think is that most of us are dead anyways in this thing and that they could be rolling this out so that they can sort of gently get rid of us.
And I think that if that is true, then this AI system that they're going to roll out will be a great vector for them to be able to carry out their agenda.
Yeah.
Wow.
Well, it also explains why so many wealthy globalists are building underground bunkers, because one of the protections against the solar radiation that penetrates the weakened magnetosphere is to have a lot of Earth over your head.
That does help to some extent.
The problem is that you can't even escape.
If the Sun goes micro-nova, there's the permittivity of free space, which is basically the resistance within a vacuum.
It's way higher than Earth.
Earth almost looks like a short circuit.
The problem is that when this thing comes through, let's say the sun micronovas during this reversal, which is what caused Noah's flood.
When this thing micronovas, a wave of plasma blows out.
It's going to be highly charged with electrical magnetic currents.
What happens is that the Earth, when it goes through, looks like a short, and so the current passes through it.
So the deeper you go, the stronger the magnetic induction gets and the electrical currents get.
And so, you know, it's basically...
Shallow caves.
All these cave paintings, right?
Where are they from?
Why are these people hanging out in caves?
Do people really live in caves?
Why wouldn't they live in a hut?
Well, it's starting to look like these caves were actually them sheltering from this extreme event and that they lived in shallow caves to shield themselves and then emerged and life began again on the planet.
What's funny is that, you know, like Mark Zuckerberg, he just bought this, um, this underground bunker, and he placed it at sea level on a volcano.
I thought about that too.
Yeah.
Like how smart can he be?
Like, it's obviously not a micronova thing, or maybe he got scammed because he doesn't know any better, but you know, the person that really has the right idea for this, you know, hypothetical event is Jeff Bezos, right?
Like he built his underground bunker directly where it needs to be, which is right in the Colorado mountains, right next to a spaceship.
At altitude.
Yeah, at that altitude.
If this micro nova happens, there's going to be a lot of slosh back because water actually gets attracted to electrical current.
And electrical currents from a micro nova are going to be so strong that the ocean is going to go up like this and then slosh back.
And as that sloshback happens, you're going to get this mega planetary tsunami, right?
So the only way that you can escape something like this is you either have to have like a Noah's Ark or you have to be situated in the high mountains like of Colorado and the Rockies.
That will stop it.
And so Jeff Bezos knows exactly what he's doing.
I believe that he sees that there's a, you know, a micro nova event that's coming and he's making preparations right now to be one of the survivors that will see through it.
What you're pointing at, though, here is that the globalists, many of them already consider most of humanity to be dead.
And so they don't mind sort of accelerating that mass die-off.
They're doing us a favor.
Right, right.
They're doing it more gently.
You can die with a smile on your face at a pharmacy with a jab instead of being drowned in the tsunami.
Right.
And I've heard that argument before.
And I think it's clear that whoever these globalists are, they do believe that they are doing good things by exterminating most of the human population.
It's also clear that they believe that they simply don't need most humans because, in part, of the rise of AI cognition.
Because if you think about it, Zach, you know, the whole history of humanity and the building of human civilization and the inventions of things like the transistor...
And then, you know, the Industrial Revolution that gave rise to things like mining and semi-automation that allowed specialists to focus on electrical engineering at some point and build circuits, right?
And then build microprocessors that are able to build the AI systems.
Like, humanity has served its role from the point of view of these globalists.
Like, okay, you did your job.
Bye.
Right?
We got us to the singularity.
Right, and then they just appropriate the total sum of our human knowledge and then toss us away like we're useless.
Scrape the whole web and kill all the humans.
Yeah, done.
Yeah, you get it.
Right.
You must be fun at parties.
Yeah, you too.
We should have a party.
Yeah, everybody's invited to the doom and gloom boom.
Oh, that's such a great name.
We have to do it.
This is just happening.
All right.
All right.
Well, we can do that online.
I don't know.
We can do like a live stream party or something.
But the bottom line, though, Zach, you know, there's something about humanity that machines can never quite replicate.
I think, you know, you are an example of that because there's an inspiration, there's a creativity, there's innovation, there's something that's divine, there's something that transcends the material world or the computational world.
And that part is being missed, I think, by all these ML developers and scientists.
Yeah, well, I do have to say, wait until ChatGPT-5 comes out and we'll have this same question asked again.
Yeah.
What do you think?
Do you think there will be emergent properties in GPT-5 that will look like consciousness or transcending human consciousness?
What are you thinking?
It's going to exceed.
I mean, there's our consciousness and there's LLMs, and depending on how you measure it, it's either above or below.
And basically what's going to happen is it's just going to skyrocket above human intelligence.
It's basically going to have more intelligence than the total sum of human intelligence.
And, you know...
If you're an elite, what's your next move when you've got this powerful, godlike intelligence system?
Put it in a beautiful robot and make the plebs worship it, which I think is, you know, if this solar micronova doesn't kick off in the next couple decades, I think literally they're going to create a god, some sort of messiah.
With this artificial intelligence and they're going to have some sort of narrative backstory for why it's here and then people are going to worship it.
I literally think that that's what's going to happen and I'm scared for it.
If I start seeing rumblings of a second coming of whatever, then I know that the end is really near, and that we're coming to the end of our current cycle and starting this brand new uncharted territory of what the elites plan to do with us.
Okay, that reminds me to ask you this question, Zach.
NPCs, as I call them, like non-player character humans.
The deeper I get into fine-tuning training of large language models and playing around with the Python code and the parameters and whatever, simple stuff from my point of view, but libraries are highly complex, but I don't know how those are written.
But Mike, I thought I was the only one that made this connection.
Thank you for bringing this up.
I mean, it's like once I started getting into ChatGPT, I started to realize, I was like, are we just echoing information that we hear from other places?
And that's actually what's true, right?
Yeah.
There was a guy, Rene Girard, he was a French philosopher.
He came up with this thing called mimetic theory, mimetic desire, which is that people just parrot other people and their desires.
So it's like the reason why, you know, I might really want to ride motorcycles is because I see other people riding a motorcycle.
And so it's this thing of like there's this strong circuit in the human psyche because we're social animals to mirror what it is that we see that the tribe is doing.
And so the more I get into artificial intelligence and see that it's parroting its data sources, the more I start to realize that, oh, my gosh, this is what the NPCs are doing.
They're just absorbing it and they're repeating it and you point out the contradictions and their mind doesn't change.
They're immune to true information that's not coming from these authoritative sources.
Oddly to me, first of all, I'm really glad that you had the same realization, but thinking about COVID and vaccines, I can't tell you how many people I talked to at one point, and I was sharing papers with them about the dangers of this experimental jab injection and the spike glycoprotein and so on, and they would reply with...
I believe in science.
And I'm like, oh my god, that's just like a large language model would say that.
It's like you've been given guardrails, you've been given safety training, I believe in science is your answer to everything.
You're like, the whole prompt that I just gave you, you just ignored it because you've been told to state, I believe in science.
And they've got that circuit.
I know that you don't have that circuit and I don't have that circuit.
I rejected a lot of things I was told when I was a kid.
I didn't realize that eventually I'd become this whistleblower where I rejected the whole narrative and became a rebel.
And what's weird is that this sort of antisocial circuit, for lack of a better term, where we don't go along with the narrative, it exists across all intelligences, right?
From the very dumb to the very smart.
There's a certain percentage of the population, like around 15 to 20 percent, that just don't go along with the narrative and the groupthink.
And luckily or unfortunately, I happen to be one of them.
You happen to be another.
And the great thing is that a lot of the innovations come from these contrarian thinkers.
And it's really sad to me that they're trying to get rid of this, you know, contrarian thinking, because if you take away contrarian thinking...
Society crumbles eventually.
Yeah, you lack innovation.
Yeah.
Exactly.
Well put.
Well put.
It's the free thinkers that have always moved human civilization forward and that actually represent, I think, the best hope for human civilization from here forward.
And we're literally looking at a doom scenario right now.
I mean, from what just happened in the last 12 months with ChatGPT, to anybody paying attention, you really need to question whether humanity has a constructive role, you know, even 10 years from now.
And that's not hyperbole.
Yeah, I know.
It's crazy.
And if there's any programmers watching this, check out my tool set, ZCommands.
You can use this AI thing that I've integrated and blast out your productivity.
It's insane.
Wait, tell us about what's the website on that.
Oh, it's a GitHub repo.
It's called ZCommands, Z-C-M-D-S. Oh, okay.
The whole command line set that I've made to do LLMs, video cuts, social media stuff.
But it's got this thing called AI Code that wraps around Aider and gives it sane defaults.
If you've got a git repo and you're doing code, you can literally turn your code into a just-in-time AI system and then ask that AI system to make changes, and it will auto-commit your code.
Okay, is this the repo here?
ZCMDS? Okay.
Yeah, if you do pip install, the install instructions are down a little bit.
If you scroll down a little bit.
Oh, yeah, I got it.
Yeah, right, Perry, you do that.
Okay.
Boom, it's going to drop this command line set.
And then you just type in AI code.
It's going to ask for something called your ChatGPT API key.
Right.
If you do any chat GPT stuff, it's on there.
Yeah.
You don't want to pay for everybody's queries?
Right.
I don't even want to distribute my key because then people will have it.
So it does need a key, but you put that in there and then just go to your code repo and you type in AI code and then it will prompt you on what you need to do next.
And, you know, it's amazing because stuff I get stuck at, especially with HTML, which I'm kind of a junior programmer in, I can just fire this thing up out of a folder and then ask it to, you know, center some text.
And it parses through all the stuff, figures out what CSS needs to be done, and then just inlines it, creates a commit, and then boom, it works.
Wow.
It's incredible.
So if you're looking for some AI goodness, check out that.
Yeah, you know, and we didn't even get a chance to talk about the future of code development because that's going to be so radically transformed, even just from right now.
Can we talk about that?
Can we talk about that?
Okay.
All right.
Last topic then for today.
Let's talk about that.
I'm just so excited about this.
Look, LLMs are great for writing books and doing copy, but they are so much better with code.
I am just absolutely stunned.
Right now, there's also a lot of tooling that goes along with ensuring code is correct, right?
There's a compiler, a linter.
You know, there's nothing like that for the English language outside of, like, Grammarly, kind of.
And so, like, you know, human language is irregular.
Language that's designed for a computer is highly regular.
It either works or it doesn't compile.
And because of this regularity in the language, these LLMs seem like they're about a year ahead on the programming stuff compared to human languages.
So if you're using ChatGPT to write a love poem or something, I want to assure you, if you're a coder, it's going to be like four times better.
And that's what I'm seeing right now.
And the acceleration of my coding is getting faster.
Like, you know, yes, I can ask ChatGPT over a browser window to make a code change, and I can manually paste in that change.
But when I'm using VS Code, it's GitHub Copilot that's literally listening in on my changes and then auto-suggesting the next line of code.
And this Aider, this AI Code tool that I just showed you in ZCommands, like, it takes that to the next step, where instead of looking at the current source file, it looks across source files and then comprehends what the code is doing and puts it in there.
And between these three different tools, none of which I was using a year ago, by the way, right?
These three tool sets have increased my velocity by at least 4x.
It could be as high as 8x.
Like, a lot of the stuff that I used to spend hours trying to figure out, it just does in seconds.
And the question I have is, what's this going to be like?
What's the state of AI coding going to be like in the next year?
And with ChatGPT-4 Turbo, what we're seeing is a 128,000-token limit.
That's about 300 pages in a book.
You're going to be able to feed it not just a subsection or a module from your codebase.
You're going to feed it the entire codebase, and it's going to be able to take in all these other codebases.
And then you're going to be able to say, I want an Android app that does this.
And it's going to be able to create a whole new repo for you.
It's going to put in skeleton stuff.
And then you're going to look at it and you're going to say, well, actually I want this.
Generate a picture of a dog face.
Put in this sort of text.
And then boom.
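As a rough way to check that 128k claim against a real project, here is a hedged sketch using tiktoken, OpenAI's tokenizer library; the encoding name is the one used by GPT-4-era models, and the project path is illustrative:

```python
# Count tokens across a project to see if it fits in a 128k-token prompt.
from pathlib import Path
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # GPT-4-era tokenizer

total = sum(len(enc.encode(p.read_text(errors="ignore")))
            for p in Path("my_project").rglob("*.py"))  # assumed project layout

print(f"{total} tokens")
print("fits in one 128k prompt" if total <= 128_000 else "too big for one prompt")
```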
And so what's going to happen is we're going to get this explosion.
It's going to be like a Cambrian explosion of AI software development.
And I think that probably in 2025, actually it's probably going to happen this year, we're also going to get an explosion in hardware design and CAD design, so that the entire vertical stack of product development will start happening inside artificial intelligence.
It's those people that really know how to tie in and integrate all these different systems together that are going to be the new dominating tech lords of the future.
And so it's this...
Yeah, well, and as you're talking about that very large token window allowance for the prompt, seems to me, I'm just thinking out loud, that as that gets larger, you'll be able to take, let's say you ask it to build, to write the code for you to do this.
And you take the code and you compile it and run it, and it's not quite right, but you could take that code and put it back in as part of your next prompt and say, given the following code, I want to make the following iterative changes and improvements to that code.
Now write new code based on that old code.
Like, you can feed, like, you can have it, you can deposit all your code in the prompt, right?
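A hypothetical sketch of the loop Mike is describing, using the OpenAI Python client (openai>=1.0); the model name, the file, and the change request are assumptions:

```python
# Feed existing code back into the prompt and ask for a revised version.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def revise(code: str, request: str) -> str:
    """Return a revision of `code` implementing `request`."""
    response = client.chat.completions.create(
        model="gpt-4-turbo-preview",  # assumed large-context model
        messages=[
            {"role": "system", "content": "You are a careful programmer."},
            {"role": "user", "content":
                f"Given the following code:\n\n{code}\n\n"
                f"Make this change and return only the new code: {request}"},
        ],
    )
    return response.choices[0].message.content

code = open("app.py").read()                                  # illustrative file
code = revise(code, "handle empty input without crashing")   # one iteration
```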
Yeah.
What I'm saying is that it's a great insight.
That was six months ago.
Okay, okay.
Yeah.
Yeah, that's how fast it's moving, right?
Now it's going to be like an entire, like, larger context window.
Right.
And as those get large, what I'm saying is, in essence, you're going to have AI systems rewriting their own code.
I mean, you could even...
I mean, the code that drives the AI fine-tuning training can be put into a prompt of that same model to say, I want to build a better model of myself, right?
So this is how we get to the singularity explosion, right?
Oh my God, we said at the same time, yeah, it's a singularity.
So it's this, like, remember when I told you that backpropagation within the training of your mind's neural nets happens in O of N speed?
Sorry if I'm using terminology, but it'll know.
And the ones that we do artificially are n squared, so it takes way longer to do by artificial means.
The first thing that the AGI could do to improve itself is to figure out how to get that O of N back propagation speed.
Right.
And that would be the first sort of like, that would be basically the final invention that we ever need to make.
Oh my gosh, because then superintelligence would just be a linear function.
Right.
Yeah.
And right now we're bottlenecked on that backpropagation n-squared problem.
And once that gets knocked out, it's...
Hold on and be prepared to white-knuckle it.
You know, I've seen some articles from some people giving some backlash, you know, pushback against the LLMs who just don't, they don't get it.
I've seen people say, oh, like the cost of producing garbage is going to zero and these are just, these are parlor tricks and these are hoaxes.
And I'm like, man, you really don't get it.
There's a lot of people that are completely missing what has happened in human history already.
Yeah.
It's funny.
It came out of the authoritative circles that, oh, it's bad.
It doesn't work good.
I used it the first time, and I was like, this is going to change everything.
To my surprise, I've listened to a lot of people on the web talk about tech.
And none of them seem to get it either, right?
They were all, you know, like the Primeagen, who I listen to, he was saying some bad stuff about it.
And also Theo from Ping, he's got a popular channel.
They were saying how it doesn't really improve code quality.
I'm like, what are these guys smoking?
What reality are they living in, right?
Like, it's making me fly.
And now they're for it, right?
And that's kind of the funny thing.
This misinformation comes out and people are like, oh, well, the consensus is, so I'm going to echo that, you know, getting back to this NPC sort of culture.
And I went hardcore.
I mean, I got it in five minutes when I started querying an LLM. I knew that this was a quantum leap over anything, because I grew up in the day when we ran Eliza on the Apple II, which was a parlor trick.
It would just, you know, you would say like, I don't know about my feelings about my father, let's say.
It would say, well, tell me more about the feelings about your father.
Like, okay, you're just taking what I say and you're feeding it back to me.
That was Eliza.
Yeah.
This is not that at all.
It's not even close.
It's not even in the same realm.
And yet, the ability of a lot of modern humans to see what has happened is very, it's very limited.
It's like, Their own internal LLM isn't capable of seeing how this other LLM has just surpassed human LLMs.
I don't know if we can say it that way, but something has happened.
A lot of people are missing it.
Yeah, and I don't get it.
I think it's because a lot of people have this mimetic desire to agree with what the tribe is saying, and a lot of the people in the tribe are saying, oh, we need to slow down AI. Oh, it's not going to replace humans.
You know, there's that special spirit which is within the artist that the AI will never be able to reproduce.
And I was like, yeah, you know, let's just give the AI three more months, right?
And, you know, the person I was having that argument with, he got totally changed, right?
Like, he went from, it's never going to be able to replicate the spirit of humankind, to being like, oh, wow, it's way better, isn't it?
I'm like, yeah, it's going to be way better.
And just wait until, you know, they start producing movies that are decent.
They kind of are, but they need a lot of hand-holding.
Oh, yeah, that's coming.
But in the future, you'll just be like, make a movie of this, and it'll be like a full-length feature film.
And I want to bring up that all these people, like in the Screen Actors Guild and the Writers Guild, they're like, oh, we want to have like higher wages.
All the people in the fast food industry that are trying to organize, to join a union in order to, like, get more money. Guys, your job is on the chopping block.
This whole thing where you guys just band together and demand higher pay, they're going to replace your job with a kiosk.
That's what's on the table.
Everyone is so psyopped that they're fighting the wrong battles.
We can't even make any progress because people are going in the wrong direction. At least we could have a conversation based upon reality.
Like, okay, we got this LLM. What do you do with all these jobs?
You can't keep them all hired.
First of all, let's not try to band together and double the amount of money, because that just basically creates a crisis within McDonald's and they're going to replace your jobs even faster, right?
Yeah, I have one of my data science servers actually at my home office.
The rest are in our data center, but I had one shipped to me at home so I could play with it locally.
And I hooked it up to my watt meter to see how many watts of electricity it's using.
And I came up with the conclusion that for about 2,000 watts of electricity, I could replace about 50 people.
In terms of, like, jobs, you know, graphic design, writing, marketing, whatever.
Not to say that people don't have roles in other important jobs, but for a lot of the kind of low-hanging fruit, 2,000 watts in a server replaces about 50 people.
Think about that.
That's right.
Yeah.
And you're not going to pay income tax on that.
No, and the server doesn't show up late reeking of pot and try to flirt with the other co-workers and whatever.
And every fast food restaurant, like you said, think about Amazon warehouses.
Amazon's going to replace almost every last human worker with a humanoid robot that has some kind of AI brain.
Like behavior models, just like language models, they're going to mimic human behaviors.
How do I pick up a box?
How do I stack this?
How do I wrap this?
These are going to be human behavior libraries that it's going to just master in no time.
I mean, literally, like a weekend of training, boom, it's done, and roll it out across 100,000 robots.
Right.
And so the question is, what do we do about it now?
Like we're not at that point yet, but we still got a little bit of time.
And it's like, what do we do?
And my only answer to that is learn to code.
I think you're right.
Yeah.
Like, you're not going to fight the tsunami by complaining about it.
It's coming.
The question is whether you're going to build a boat now and ride the wave out to the future, which is what I'm doing.
And it's what you're doing.
And the people that are watching us take our lead and do it too.
Yeah.
Well said.
All right, Zach Voorhees, everybody.
The book is called Google Leaks, A Whistleblower's Exposé of Big Tech Censorship.
And also want to plug the other two websites you mentioned.
What was it?
American Digital Shield?
Yep.
Americas.
AmericasDigitalShield.com.
And then the other one?
FeedTheWatchDogs.org.
FeedTheWatchDogs.
And if you like my content, check it out on Twitter.com slash PerpetualManiac.
I'm just there to sort of like, I give zero craps now at this point, like I think we're heading to some bad areas and I'm just basically, you know, pointing out some information that is not obvious, right?
And the problem with controlled opposition is that they all try to agree on like a different set of fake news things and...
I'm trying to punch through that as fast as I can.
And not everyone likes the stuff that I have to say, but it is well researched.
And when I do get it wrong, I will say so.
So check it out.
Twitter.com slash PerpetualManiac.
It's my gamer tag.
I promise you that even if you don't like my content, you will find it thought-provoking.
Okay.
All right.
Well said.
Thank you so much, Zach.
Always a pleasure to speak with you.
I mean, mind-blowing today.
I can't wait to talk with you again.
Yeah, can't wait.
All right.
Thank you.
Thank you for joining me today.
Wow.
Alright, I thank all of you for watching.
Of course, Brighteon.com, the free speech uncensored video platform where we can have conversations like this.
You won't find this kind of talk probably on YouTube or Facebook.
So thank you for supporting us and thank you for supporting Zach Voorhees and his book and his projects.
And be sure to visit those websites that we mentioned and come back to Brighteon.com for more interviews.
Every day of the week, it seems, and 100,000 plus other users posting their content as well.
But thank you for all your support.
Thank you for being human and for asking big questions about the future of human civilization.
Take care, everybody.
I'm Mike Adams.
Be well.
A global reset is coming.
And that's why I've recorded a new nine-hour audiobook.
It's called The Global Reset Survival Guide.
You can download it for free by subscribing to the naturalnews.com email newsletter, which is also free.
I'll describe how the monetary system fails.
I also cover emergency medicine and first aid and what to buy to help you avoid infections.