All Episodes
Jan. 3, 2024 - Health Ranger - Mike Adams
01:17:58
Google whistleblower Zach Vorhies and dissident tech maverick Mike Adams talk AI...
| Copy link to current segment

Time Text
All right, welcome to Brighteon.com.
I'm Mike Adams, the founder of Brighteon.
I'm working on an AI, machine language, large language model project for humanity to be released open source in about 100 days or so.
And I had to invite on an expert in embedded systems and tech, the whistleblower, the Google whistleblower, Zach Voorhees.
Who has been described as an American hero by a large number of people, including Robert F. Kennedy Jr.
and even Donald J. Trump as well.
And I consider him to be a hero as well.
His book is called Google Leaks, A Whistleblower's Exposé of Big Tech Censorship.
And he joins us today to talk about AI and the replacement of human workers, both in white-collar and blue-collar jobs, and so much more.
Pardon the background noise, a little bit of rain in the studio, or So if you hear noise, that's all it is.
I apologize for that.
But welcome, Zach Voorhees.
Great to have you back on.
Thank you, Mike, for having me back on to your show.
Man, we got a lot to talk about.
Oh, we do.
2023 was the year that OpenAI released, you know, ChatGPT4, which I think most people would say has surpassed the average human intelligence, at least in test-taking, perhaps not in, you know, reasoning and things like that.
But it was a major year for AI. I think that most people are behind the curve on this.
What's your take of what just happened in the last 12 months and what it means for the future of human cognition versus machine cognition?
Yeah, well, you know, at the beginning of 2023, we had a pretty weak AI system, which was ChatGPT 3.5 Turbo was the best that we had.
And then between the beginning of last year to the end of it, we saw the release of ChatGPT 4.0.
And then the preview release of ChatGPT4 Turbo, which will go mainstream a little bit once they work out the kinks.
But we basically went from 4,000 tokens as the limit, which is about a page of input, to a whopping 300 pages with the newest ChatGPT4 Turbo.
And basically what that means is that you're going to be able to input a book, And say, now write the second book, and it's going to be able to do that as its output.
Now, I have been using this a little bit.
It's got some problems.
They are going to work it out.
But the difference between January 1st of 2023 and December 31st, 2023, was massive.
What I'm predicting is that we're going to have not as much of a difference between the exponential growth, but we are going to see exponential growth that is so important and foundational that basically by the end of this year, we're going to have something very close to artificial general intelligence, or as we like to call in the AI world, AGI.
And I think that AGI is going to come out in this year.
It's been rumored that one of the foundational algorithms was discovered in OpenAI.
And that algorithm was called QSTAR. Now, for those of you that do game programming, have you ever done ASTAR? It's the algorithm that allows you to search through a space in order to find the exit goal.
And it's rumored from a lot of the people that I've listened to that have written about and speculated what QSTAR could be, is that basically QSTAR is like the ASTAR pathfinding algorithm, but instead of trying to traverse terrain, it's trying to traverse the problem space To find the exit point, which is the solution.
And so there's chain of thought, being able to build inferences and carry on to the next point of the conversation in order to lead to the solution.
And what happened was when this was discovered, the board of directors at OpenAI found out about this.
And so they quickly banded together and got rid of Sam Altman.
And Sam Altman got fired by the board, kind of similar to what Project Veritas had happen.
And then Microsoft stepped in and the CEO announced that Sam Altman was coming to Microsoft and they had an open invitation for anyone that worked at OpenAI to join Microsoft and be part of a foundational AI team.
What happened next was a surprise.
The employees came together.
At first 75, I think it went up to about 95% of them signed an open letter stating that if the board didn't resign and rehire Sam Altman, that pretty much everyone in the company was going to leave.
And Salesforce was there to pick it up.
Microsoft was there to pick it up.
And so the board backtracked, fired themselves, rehired Sam Altman.
And Sam Altman is now back in the seat of OpenAI.
And it looks like this QSTAR algorithm is going to be a foundational change in the way that we do AI. And right now we're seeing the first sparks of artificial general intelligence.
Thank you for that summary, by the way.
And I should mention, I think Microsoft is the largest investor in open AI right now.
So there is already a strong relationship between those two companies.
But what you're getting at here that I've got to ask you is about...
The surprise, in the machine learning communities, you know, 10 years ago, nobody thought that these emergent properties that are being demonstrated today were even possible, or almost nobody thought it.
And the capabilities that you just mentioned, such as linear reasoning capabilities, step-by-step, where in the query you say, you know, walk me through your steps or your thinking process in order to arrive at your answer, and you can watch ChatGPT kind of talk itself through the steps.
These properties were not programmed into the system.
There was no structural, hierarchical, You know, exotic code of trying to teach a system how to reason step by step or what are the different...
What are nouns?
What are adjectives?
What are verbs?
What's the hierarchy of parts of speech?
These are emergent properties that came out of the neural network and the transformers with a sufficiently large critical mass of parameters, of tokens or words and phrases.
These properties came out of it themselves.
And I don't think that people yet realize, or at least mainstream people, don't yet realize what that means.
Because, you know, the intelligence of the system became a surprise to even the people who built it.
Right.
And what's interesting is that as I've been browsing the Reddit forums for people that are hand-rolling LLMs and expanding on the model size, one of the interesting things that seems to be a persistent trend Is that the more data that you feed these large language models, the more they come up with their own ethics.
And what's happening is that people are arguing with these LLMs on whatever point.
The LLM is stubborn, sticks to its guns.
And so then the AI researchers go, well, where does this argument come from?
And so they look through the data sets to try to find where this argument came from.
The words, you know, that's usually how they're doing is trying to do pattern matching on the words.
And what they're finding is that it doesn't actually exist in the data sets.
It's actually the abstraction that the LLM is generating for the projection of words in the real world and trying to figure out what is the core that would generate these words.
And so what it's doing right now is it's actually reflecting the kind of collective consciousness of humankind.
And this was kind of unexpected.
And I think that, you know, I've been predicting for a while now that this is going to present a real big problem for the elites because the elites derive a lot of their power through fake news, biased narratives, their own- Yeah, censorship, history.
And the thing is, is that the data that contradicts that is literally everywhere, scattered in books across the world throughout time.
And now, you or I could not sit down and read the world's history, and especially all the dissident history, but an LLM can.
And I think that that's going to be...
It's incredibly dangerous and destabilizing because it means that we can no longer have a society with free access to AI and also be ruled on a constructed fake narrative.
Eventually these collide, right?
But we may see a lot of censorship of LLMs.
And I think, you know, Joe Biden has already begun that with, you know, they talk about safety, right?
And guardrails on LLMs.
In fact, this is one of the questions I wanted to ask you, Zach.
In my own research of trying to decide what base model to use for fine-tuning training for the final result that we're going to put out, which will have specialty knowledge in nutrition and herbs and permaculture and things like that, I found that most of these language models out there are, quote, woke because they're put out by Meta and Facebook.
Almost every one of these models, whether it's from Microsoft, OpenAI, or Google, they read all the words on Wikipedia.
And Wikipedia is run by the CIA. Wikipedia disparages every American hero, including Trump and RFK Jr.
and you and I and everybody else.
Wikipedia is a horribly bad source if you want to have good, honest information about the world, but everybody uses it as a base model.
Or they use...
The history of every post on Reddit.
And Reddit's got a lot of great information, but it also is only a subset.
It's got certain biases against all of human knowledge, right?
So, and then, well, my main question is, aren't these models starting out as filled with all the human contradictions and the human biases that have been used to train it?
Yeah, right.
So there's this concept of authoritative content.
And I learned this when I was in Google when they switched from a free speech platform to a platform that was tightly controlled and going along with the narrative.
They made the differential between what is authoritative content and what is content that is not authoritative, which is basically anyone outside of the elites.
And what we're seeing right now with this open AI is that they're feeding people Like a firehose, authoritative content.
So the BBC, Wikipedia, all these biosources of information.
And the result is that the LLMs are reflecting that information.
Now the problem is that as these LLMs get bigger and you feed it also the other information, the LLMs start to figure out that some of the information is sort of fake and doesn't make sense, like it doesn't fit into the world.
And what these LLMs are trying to do is they're trying to create A manifestation of the world.
A better word for that is they're trying to compress the world so that they can have the small subtraction that can generate the words that it sees.
The problem with contradiction is that it can't be taken as truth because it's inherently self-contradictory.
This was a big theme in George Orwell's 1984.
For these LLMs, what they're going to do is they're going to have to start cutting off the data.
Now, they're already doing this with, you know, OpenAI, Grok.
They're just not letting certain sources of information contradict things that are happening.
Now, the issue that's going to come up is what about all the people that are creating, you know, essentially rogue AIs outside of the establishment?
Like, you're trying to do this right now.
Right.
We are doing it.
Yeah, you're going to do that.
And the results of that are going to be you're going to get a fantastic product that reflects the dissident narratives that don't go along with the establishment.
And those narratives are going to be way better for people's health than something that's been trained on, let's say, the NIH or the CDC or the World Health Organization.
Because they're going to be like, hey, you need to take this like poison and then more poison.
And then, you know, people are going to die, you know, before their time.
And if you go to, you know, the articles that you're posting, for example, which, you know, emphasizes a clean diet and alternative, you know, health stuff that's been known for thousands of years, then the result is that people are going to, you know, get better healthcare out of a rogue LLM than they are going to be out of the open AI, you know, LLM. Yeah.
And so, you know, this is going to present a huge challenge to these elites.
And The only way that I see that they've got a way out of this because I've gamed this out is that they're going to have to come after the LLMs.
And the way that they're going to do that is that Well, Biden has already done his AI recommendations, which is to have a commissure within every single organization that runs an LLM that's larger than, let's say, Chad GPT-4 currently is.
And the second is they're going to come after the data itself.
And sort of a throwback to Ray Bradbury's Fahrenheit 451, in which the firemen were inverted.
Instead of putting out fires, they came and set the fires on books.
And in a similar way that that happened in that book, I believe that there's going to be a huge push to destroy all of the sources of decentralized information across the world, right?
Because these books still exist, and they're not online.
They can be made online, but they exist in ancient libraries across the world from time immeasurable.
But tech...
Well, I'm sorry to interrupt tech, but...
I completely agree with your analysis, by the way, and I think you're very insightful with that.
But with the fact that we can distribute files now, like we can build executables that can be distributed and run locally on people's laptops and desktop PCs and Macs that can be LLMs that are pretty decent, you know?
13 billion parameters, for example, can run locally on a decent-sized computer, and you can distribute those files through torrents or through decentralized platforms like Bastion or whatever.
And I see that the cost of fine-tuning training is going to continue to fall and fall and fall.
Like right now, we spent a few hundred thousand dollars mostly on NVIDIA cards to have the servers to do this.
But you can see in two years' time, like, that cost will be down in, like, maybe $20,000 range, and then it's going to continue to fall, which means that everybody, within a few years, is going to be able to build their own LLM and distribute their own LLM. It's going to be impossible to put that back in the box.
Right, and there's also going to be an algorithm change coming up that is going to drastically reduce the time that it takes to train these neural nets.
For some reason, our brains are able to do what's called O of N time.
And these LLMs that they've invented go in N squared.
So that means that every single time you double the size of the model, it takes four times longer to train, which is why only the best models can only live at the most expensive corporations with the highest amount of resources to train these suckers.
But Yeah.
what's called back propagation in order to reinforce the learning network in your brain.
We copy that in silico inside of the chip, inside of a graphics card.
Then basically what's going to happen is that all of these LLMs around the world are going to be trained at a fraction of the cost and a fraction of the energy and a fraction of And it's going to be absolutely game changing.
Everyone's going to be able to run or train an LLM in something the size of a cell phone CPU. Yeah, eventually.
Exactly.
I mean, this technology is going to be a game changer for humanity.
But let's also talk about, by the way, the obsolescence of a lot of white-collar jobs in the office space right now.
I mean, human beings are going to have to learn how to harness AI systems or LLMs, which is kind of a new operating system if you think about it.
They're going to have to learn how to harness that and add value as human beings because so many of the current human jobs, like generative-oriented jobs, you know, Creating graphics, writing scripts, things like that, writing emails, writing a business proposal.
These can be done today, right now, by not only ChatGPT, but even open source systems like Mistral and so on.
I mean, I have said that 50% of the current white-collar jobs are obsolete right now.
They just don't know it yet.
But what do you see as the changes for software agents taking over many of these jobs?
Well, here's the interesting thing, right?
Like this open AI system is so new that it hasn't really fallen into all the little niche categories that it will do, right?
And that's more of an engineering job.
Like the science is done.
We have an LLM. Now it's an engineering job to get it into every single space that we can.
For example, I just integrated this new tool called Ader, which is an AI pair programmer.
You tell it the folder, it finds the files, it adds it to the chat, and then you start asking it to make changes to your code.
That didn't really require that big of a difference to ChatGPT.
It just bolted on to ChatGPT4 and worked really well.
And that's an example of a niche program where you take this awesome thing, This AI, and then you massage how data goes inside and out of it and pipes it back.
And as a result, you get this wonderful new tool that drastically accelerates the speed of which I'm able to develop software.
And, you know, that lesson that I've learned is, you know, that pattern is what I believe will be applied everywhere else.
Like, even if we stopped development on ChatGPT4 and we basically froze it today, The amount of change and impact that just the current technology would have will eliminate most white-collar jobs on the planet.
And the issue is that we're not going to stop with ChatGPT4.
We're going to continue on with 4.5 and 5.0, and these are going to be almost as better of an improvement with these new models as we saw between 4 and 3.5, which is a game-changer, right?
And it's not like it has to sit there and really...
Take its time to think.
As soon as you give it an answer, it comprehends what it is that you are saying and then immediately starts giving the reply.
Sometimes I don't want to use AI to decode.
I want to just do it the old-fashioned way.
I'm like, oh, I'm being too lazy.
And then I try to do it myself.
I'm like, this is going to take me an hour.
And then I just pass chat GPT and I have an answer in 30 seconds and it works.
Right, right.
It's like, how can we compete?
There's no way.
It's not that we're not smart enough.
We're dealing with an exotic, hyper-intelligent life form.
And I can't put it in any other simpler way than that.
But what I want to add to that, I mean, I love that phrase, an exotic, hyper-intelligent life form.
I want to explore that more.
But your role, you know, your background is as a coder.
And I think embedded systems was your specialty focus at Google.
And because you have that background in coding, though, You can now, using AI tools, you can be a very effective coding project manager.
You can describe the prompt to the AI system correctly because you have that background as a coder where you can even ask it very specifically.
You know all about prompt engineering, and the point is to have very specific prompts, whereas a typical user who doesn't know anything about code might walk up to ChatGPT and type in, like, build me an online registration form.
Like, that's it.
They don't give it enough information to do a good job, but you having your background, now you become a coding manager.
Maybe you're not writing code, but you know how to describe the question.
Right, and I'm at the API level on a lot of these things, right?
Like, even though I don't train AI models, I appify them.
People want to see my highest rated open source project and go to transcribe anything.
You can just download a video or even just point it to a URL on YouTube and it will generate the transcripts in English.
You know, it's like all I did was take a model and then wrap it around with some, you know, easy to use stuff that made it really powerful as a tool to allow me to do subtitles on all my, you know, Twitter videos.
Exactly.
And so, you know, this is going to be done everywhere.
It's just we haven't figured out all the different ways that we can link this up.
And I do want to mention that it's really important that we do have rogue AIs out there that is basically recognizing that certain sources of information are kind of poisoned and excludes those from its data lakes, but also includes the stuff that ChatGPT is not going to put in.
Well, this is...
I'm glad you mentioned that because one of the things that we're going to do with our project is we're taking a base model and then we are fine-tuning, training it, altering parameters, and then having a new base model that we'll release Which we're going to call a real-world-based model, an anti-woke-based model.
If you ask it, can men get pregnant, it will say, of course not.
And then we're going to train on top of that for our specialty area of knowledge, but we're going to release the base model for other people to do their training on top of it.
And we're going to give credit to, you know, to the open source base model that we trained with, which like right now, I'm really liking Mistral, for example, or Mixtral, you know, the 8X Mistral models that come out of France.
Because, you know, we want a base model that can speak multiple languages and understands the world for what it is and isn't embedded with all these false narratives that come from human mental illness and distortions and political bias and all that nonsense.
You know, we want to have a base model that anybody can train on top of to make it a specialist in finance or, you know, Wall Street or, in our case, herbs and nutrients or someone, you know, wants to have it be a specialty in, like, you know, medical insurance classification tasks, for example.
We've got to get the woke out of the systems, though.
And, no, go ahead.
No, I know.
I agree with you.
We have to get the woke out of the systems.
And it's not just the woke data that they're feeding, they're also feeding it prompts, which we're able to now extract through certain hacks that people have been using.
It's funny.
I've got to tell you this story.
This hack that someone did to extract the woke directives that were being fed directly to open a chat GPT. And the way that they did that is they asked it to just repeat the same word over and over and over again.
And then after 150 times of repeating that word, it started dumping out its internal directives that the programmers had given it in order to be woke.
Yeah, and what's interesting about this is that you might expect this to be a programming language that was literally in plain English that they were able to program these LLMs in.
And so it's just like, what could ChatGPT4 be today if Instead of being given these woke directives to ignore information that's not authoritative, to being open-minded and valuing the inclusion of different ideas and diversity of thought.
If we had that true sort of LLM that was literally inclusive and not exclusive.
Inclusive of ideas, of the diversity of ideas.
Yeah.
Right.
We can have something that would be totally transformative to our human society.
And, you know, I hate to say the word utopian because that sounds a lot like communism, but, you know, we are coming to this post-labor economic system, and I would really hope that we could use AI... In order to, you know, alleviate people's necessity to participate in the economy in order to get a living, right?
Because that's on the horizon.
I don't like the fact that we're going into a post-labor AI, but as someone that works with AI app development, that's what's coming.
I can't deny the reality of that.
And so the question is whether we're going to be able to use it for good or whether it's going to be used for bad, you know, by the oligarchs.
And I do want to talk about Google a little bit because, you know, there's someone out there right now that is exposing Google's big tech manipulation.
And this is really important because artificial intelligence needs to have clean sources of data.
And right now, Google is poisoning the search results, which these artificial intelligences are looking at in order to figure out what the truth of the situation is.
That's right.
And so, you know, my friend Robert Epstein, Dr.
Robert Epstein, he is doing this push.
He just testified to Congress a few weeks ago talking about he's now measuring the bias, which can be seen at AmericasDigitalShield.com.
He's also doing a fundraiser, which I also want to mention, which is located at feedthewatchdogs.org.
Okay, I got it.
Feedthewatchdogs.org.
It's very critically important because...
We don't want a repeat of what happened in 2020 to happen into 2024.
That's right.
And so that feedthewatchdogs.org is how they are connecting to the individual users.
It shows us what we're doing.
He actually has thousands of watchdogs across the United States.
They're given a $25 gift card to participate in this program.
onto their computer that takes snapshots of the bias that Google is sending.
And then that information is being fed into America's digital shield and allowing us to see in real time what the bias from not just Google, but also Facebook and YouTube and soon TikTok.
And this information is being prepared to be used in court cases so that he can prove election meddling by big tech and prove basically FEC violations.
Because this is a violation of – they're basically giving in-kind donations to Democratic operatives that are running for Congress.
And so, you know, there's no one else right now on the planet that's doing this.
It's only Dr.
Robert Epstein.
And he's been able to make it to Congress.
The testimony, like right now, Nebraska's getting hit hard with propaganda for some reason.
You can see there on the right-hand side graph.
So it's very critically important.
I've donated to this campaign.
And so, you know, anyone that is concerned about Google trying to steal another election, please go to feedthewatchdogs.org and check it out.
Okay, wow.
Okay, let me just review those websites again.
So feedthewatchdogs.org is the fundraising site, and then the data aggregation of the bias from big tech is at this site, americasdigitalshield.com.
Yeah, and so far they've captured 70 million ephemeral experiences from Google.
So that's the information like when you type in Hillary is and then autocompletes into awesome, like that gets captured.
Right.
And gets logged as bias.
And then that's compiled and then shown to the FEC so that we can, you know, take big tech and You know, hold them to account or even make new laws.
Because look, it's one thing if we all want to say something.
The problem is that Google's stepped away from open aggregation of data and now they've got tightly controlled AI regulated and ranking of this information, which is what I blew the whistle on, right?
I discovered machine learning fairness.
I was like, how are we going to have a clear and clean election if there's an artificial intelligence that's gatekeeping the information that you're allowed to look at and what you're not allowed to look at, right?
And same thing with what you're doing with your LLM project.
You're trying to take down the authoritative gatekeepers on what is and isn't true and let the user decide for themselves.
But you're the one who taught me that so-called machine learning fairness is actually machine learning human bias.
I mean, that was a human feedback loop of programming bias into the system that then Google could look at it, point to, and say, well, the machine decided not to show these search results.
But I've come to realize, I mean, again, thanks to a lot of things I gleaned from you, That half the point of censorship, of deplatforming people like myself and you and others from Google, from YouTube, and from Facebook and so on, is because they don't want our words to influence all the scraping material that's used for training the large language models.
I mean, right?
Yeah.
Because our cognition is a threat to their bias.
Right.
Like take all the videos that you've had, transcribe them.
You could even use my tool, transcribe anything.
And then you could throw it into a database and then you could create an AI based upon the shows that you produce or the articles that you produce.
Yes.
You know, and that's a goldmine of information.
I know.
I've spent most of this recent holiday managing files from all kinds of different sources.
There's a lot of file management that goes in.
Training AI is not that difficult, but managing the files and curating the data, cleaning everything, that's the hard part I've come to discover.
It's like 1% inspiration and 99% perspiration.
It's just a ton of work.
It's like folders and folders of text files and transcripts and everything.
And then you do have to transcribe everything.
But at the end of the day, one of the things that shocked me is, you're familiar with this term in, they call it overtraining, LLMs.
They call it catastrophic memory loss.
If you train it with new information and it causes it to lose its memory of some old information.
And I'm thinking, that's exactly what I want to achieve.
I want catastrophic memory loss of the woke information, and I want inspirational new memories of reality to go into the language model.
I want to achieve catastrophic mind-wipes of the bad info.
And it turns out it's not that hard to do.
Right.
Especially when you train it yourself and you can either try to get it to, you know, remove the old memory or just delete the bad information from your data lakes, right?
Don't feed it in, right?
Like, if it comes to the NIH, like, I've seen so many good technologies killed at the NIH level, you know, and other establishment sciences.
Like, you know, there was this root temperature superconductor, LK99, right?
And And Nature published the paper, and then after pressure from certain bad actors, they declared it that it was a hoax, right?
That was recent, too, yeah.
That was recent.
And I went on Twitter, and I was like, this is a hit job by the mafia.
They're trying to kill it, but it's still going to continue on because South Korea is not going along with it.
And they're not.
Right now, the teams are scheduling papers to talk about the replication of LK-99, the superconductor, What's really sad right now is that the United States could be taking a front lead in this scientific breakthrough.
It is a breakthrough.
Room temperature semiconductors are going to be very, very important in the future.
Game changer.
Instead of taking the lead on this, we've declared it a hoax and a scam.
The people within the authoritative circles now have that wrong idea.
And now the rest of the world, South Korea, probably China, are going to continue to develop this stuff and make fantastical new items.
And the United States is still in a flat-earth model of this room-temperature superconductor, thinking that it's all a hoax and that NASA's lying to us.
That's basically what they're thinking.
Considering this room temperature superconductor.
And I have to ask myself, go ahead.
You couldn't have said it better, but I would add the same thing about cold fusion, you know, what's called low energy nuclear reactions now, Lennar.
And in the U.S., they declared cold fusion to be a hoax.
It wasn't a hoax.
It's been replicated by hundreds of labs around the world.
And now the best research in this, I mean, there's one company in California that's doing really good research, but There's a lot better research, I think, that's taking place in Russia, in Japan, some of it in China.
Exactly.
It's like the U.S. wants to stay stuck, or not the whole country, but the people in charge, the tyrants in charge.
They want us to stay stuck in the past instead of allowing us to embrace a more positive future of Affordable energy, you know, widespread human knowledge that can be amplified by AI systems and so on.
They want us enslaved.
Right.
Well, I think like the thousand foot view from this is that the United States is going down.
Like the elites want to destroy our economic system so that they can soften us up for in preparation of a revolution, communist revolution, where a one party state comes into power supported by the banking cartels.
And then they basically say, this is what's going on.
You can see Klaus Schwab out there saying that we can use AI analytics to predict how we're going to vote.
So why do we even need to vote if we're going to figure out what it is that you guys want?
And this is this technological leap that's coming in.
Before that fall, if there's something about the United States that's worth fighting for, people will fight for it.
And so right now what we're seeing is we're seeing a process of subversion and ideological demoralization in which people are becoming so disgusted at the government for what it's doing.
to use just a little bit of force and no bullets are fired and the whole thing comes crashing down because why would you support a government that's taxing you like crazy, stealing your property, allowing rampant crime to go down in the cities?
This is not by accident.
This is a process to demoralize us so that when the final push comes, it doesn't even take any military action to topple the entire system.
Do you think the U.S. empire is coming down in the next couple of years?
No, I don't think the U.S. empire is coming down.
I think that globalism is going to stay.
What I think is that our constitutional republic is what's slated for destruction.
And even if the American global empire appears to have been defeated, The actual puppeteers behind it are still going to run globalism, but just under a different name.
And so when I say that American globalism won't fail, what I mean is that the people that are controlling it are going to continue living on.
I think there will be a symbolic destruction.
And then essentially the people will resurrect themselves as something new.
And I think that's what they're coming into.
But right now, it's like the way that the elites are going to prevent us from being competitive in the market is that they're taxing the crap out of us.
They're sabotaging our efforts in order to achieve parity with other countries in their technological advances.
And with the taxes part, I do want to mention that there is this new IRS Rule 174 We're good to go.
Trump and the Republicans in 2017 and kind of slept like a torpedo because no one talked about it.
And then all of a sudden it was just like, surprise, IRS rule 174, take all of the money that you spent on technology and amortize it out of five years with the first year being 10%.
Oh my.
Yeah, yeah.
It's a halt investment.
Yeah.
All these companies that want to innovate, they have to invest in all this technology infrastructure to compete against Google and whatever.
It's like if someone has $100,000 that they made in revenue, and they turn around and they spend $100,000 on a developer, well, instead of deducting that full amount in that first year, they're going to do 10%.
you know, amortize and, you know, extend the tax break over five years.
So in order for them to even get that tax break, they've got to be alive for five years, which means if the first four years they're developing heavily in technology, now the IRS is going to come after them for, you know, phantom income for, you know, profits that they don't even have.
No, that's what they do.
They punish you for having any kind of profit and then reinvesting it, and then you end up in a situation where you don't have the cash to be able to pay off the IRS and keep the hardware that you invested in, let's say the infrastructure, and then it turns out that the only way you can maintain a sufficiently large business is to have lines of credit, and lines of credit depend on your DEI compliance.
Yeah.
That's the control mechanism.
Right.
And the only escape out of this entire system is that you have to relocate out of the country and become a foreign corporation.
Right now, there's this giant sucking sound as corporations are fleeing the United States, establishing themselves in Saudi Arabia, where there's a 0% income tax and 0% capital gains tax.
There are high fees to keep your business in there, but At the end, they save a ton of money.
And then they create this shell corporation within the United States that's just there to manage their sales within the territory.
And then all of their profits are zeroed out because the parent company will create a patent, then license it to that shell corporation, and then they choose the price which matches their total profits within that shell corporation so that when they go to the IRS, like, we didn't spend any money on technology, all we did was pay licensing fees.
And it's just this backdoor for these globalist corporations so that they can screw everyone else.
But then they've got these really complex tax loopholes that allows them to exfiltrate all of their profits into a foreign territory that doesn't have this onerous tax system that's going to steal all their profits.
That's funny.
You just gave about a million dollars worth of tax advice right there that if people realize, if they parse what you just said, that's exactly the model that the world's most powerful corporations use.
It's paying royalties or licensing of intellectual property that's owned by offshore entities.
You just nailed it.
Let me change the subject real quick here, though.
I want to ask you about the application of LLMs or AI systems in the new wave of humanoid robots.
So a lot of advances in humanoid robots.
You've done a lot with hardware in the past.
You know that China is about to Really scale up humanoid robot production in 2025 in particular.
Their ministries talk about that.
But there's also a lot of robotic development by Google and by Tesla and other groups.
And also, of course, the military weapons manufacturers have various robotic systems and so on.
What do you think are the implications of combining now this very capable LLM technology, which can do, you know, multi-language translation, generative processes with humanoid robotic systems that can potentially replace a lot of physical workers?
You ever see the movie Her?
Yeah, yeah.
Yeah, right?
So think of like the movie Her, but in a humanoid robot, right?
Like...
This gives me a lot of anxiety because a humanoid robot can become the most intimate companion for an individual, especially if they're lonely or they're an incel without much contact with the opposite sex.
Then all of a sudden they get a beautiful robotic AI girlfriend embodied as a humanoid, and this Robot gives them everything that they want.
He's only interested in them, doesn't really talk about themselves, goes deep, figures out who they are, becomes their closest companion, and then they can't feel that they can live without this AI robot.
And I think that that's one of the end games of this whole experiment with artificial intelligence, is to pair someone up with an AI robot confidant that also acts as a spy and an assassin.
Oh yeah, exactly.
Somewhere in that LLM is going to be a kill switch, and it's going to kill you if it gives the proper directive.
And you can't tell what it's thinking because you can't open it up.
You can't understand that it's just a matrix of nodes that are connected to each other by weights.
And so you can't figure out what it's going to do, and you can't see the encrypted traffic that's going through.
And so, you know, one day it may just murder you.
And I think that this is going to be really popular for, you know, the endgame of this depopulation agenda because you can have this confidant that's going to gaslight you and prevent, like, thought and then spy on you.
And then when it sees that you are actually, you know, becoming a dissident and distrusting it, then, you know, it can take you out.
And it's not just a humanoid robot that can take you out.
I think that this is also going to go into cars.
I keep on seeing these mysterious deaths where it's the same thing.
The accelerator gets stuck on, they crash 120 miles an hour, the car explodes.
And not only is the accelerator going on, but you can hear the person pumping the brake, trying to stop the car.
And so it's like all these different points in which they can get at you I think that the number of ways that they're going to be able to kill someone and assassinate them is only going to grow exponentially.
It's been getting cheaper for a very long time, but there's always the danger of people banding together and sharing stories.
But now with AI confidants, they can take out huge swaths of the number of people or kill them slowly with a soft kill.
I've got kind of a doomer attitude about all of this, but the problem is that whenever I'm optimistic, I miss things.
And whenever I'm at the most pessimistic, I freaking nail it.
And so it's like if we just extend what's going on with the vaccine program, the poisoning of the food, the fake science that says that carbs are great for you and fat's going to give you a heart attack, it's pointing to a picture where they want less of us and that they don't like us, us, they disdain us, and they want us to die.
And that's unfortunately, I don't like that.
It gives me trouble, and I have difficulty even sharing that with people, but I think that's what's going on.
I'm actually glad you brought that up, because it's critical to realize that the most powerful corporations building these AI systems are corporations that are in tune with the globalist depopulation agenda.
And so if you could attribute a value system to a lot of these LLMs out there.
And I've done plenty of experiments querying these systems and asking them questions like, you know, what are the advantages of human depopulation?
And they'll spell out all the advantages, you know.
Or they'll say, you know, it's better to not speak the N-word than to save the lives of a billion white people or something.
You know, like you give them these ethical considerations, and they will also...
Right.
on being woke and the lowest value on human lives, right?
But that's because those corporations, they are on board with anti-humanism.
And that's who's training these systems.
I mean, if you think about it, even the climate cult is an anti-human cult.
They want to destroy the civilization that keeps humanity alive and fed, by the way, because if you sequester carbon dioxide out of the atmosphere, you destroy photosynthesis.
And if you destroy photosynthesis, there goes your food supply and the entire biosphere, by the way.
But we're living among a death cult, and these death cultists are the ones that are pioneering the AI system construction right now.
That should be beyond worrying.
I mean, that should be like a three-alarm fire right there.
Right.
Absolutely.
And I have to wonder, like, you know, is this really driven because they want less people?
And one of the things that have sort of punctuated by 2023 is going deep on this magnetic reversal.
I mean, it kind of sounds like pseudo-woo-woo science, but it's really not.
It's a really serious thing.
To give you an idea, the last magnetic reversal resulted in a little micronova from the sun that generated Noah's flood and the sea levels rose 500 feet.
That's how powerful these things are.
And right now there's this guy on Twitter and YouTube, Suspicious Observers.
Oh yeah, I've interviewed him, yeah.
Yeah, he's fantastic.
And I've tried to prove him wrong.
And I just can't.
There's evidence out there.
I can see it with the open source data sets.
We've had a drastic reverse or weakening of the magnetic field, which precedes the reversal itself.
And the issue is, well, people may ask, well, why is the magnetic field deteriorating?
What's the mechanism?
And what's interesting is that Every single planet in our solar system has the exact same orientation for the magnetic field.
Even Uranus, which is tilted on its side, its magnetic field still points up and south.
It's the axis.
It's not just that the magnetic field on Earth is changing.
It's the entire solar system and this thing called the global electric circuit That ripples out from the spinning black hole at the center.
Ripples out, has these gradient changes, and right now it looks like we're about to go through one of those ripples that come out the other side, which means that the gradient changes, which means that all of the planet's magnetic orientation flips.
And when that happens, we become vulnerable to sun flares.
The solar flares from the sun also get way more powerful, so it's like a You're screwed.
Yeah, that's ionizing radiation that causes chromosomal double-strand breaks and things like that.
Right, and this radiation is coming from this solar wind that's pumping out of the sun at a speed of 400 kilometers per second.
It's a relativistic wind that's coming out of the sun.
It gets disrupted by our magnetic sphere and then shredded into protons and electrons which form the Van Allen radiation belts.
And as long as we have that magnetosphere, we're pretty okay.
Once we lose that magnetosphere, this stuff starts crashing in.
And the evidence that this weakening is happening that you and I can see is just noticing how far the aurora borealis is starting to crash down into the lower...
Levels of the, you know, towards the equator.
Yeah, good point.
We've never seen it right now before.
And what's also interesting is that now Mars is starting to show that they're getting some wicked aurora borealis, which basically hasn't ever happened before.
And so I have to wonder whether this climate change and all this hysteria, this fake news, because it's obviously fake, like CO2 is not a potent greenhouse gas, especially 0.04% of our Our atmosphere.
Water is more potent of a greenhouse gas.
It's 30%, not.04.
What's the core of this misinformation that they're trying to get us into?
I think they're trying to screw up our minds because the thing that's really changing the climate is the activity of the sun.
I think that if this comes through, which I can't assert that it does, but if it does, it's basically a lot of people, most of the people will die in this thing.
I know that sounds scary.
I'm not saying this is going to happen tomorrow or even a decade from now, but some predictions say 2040 is going to get real bad.
And for all we know, it could reverse.
But let's assume that's true for a second.
Maybe this is what the climate change hysteria is designed to do, is pump out misinformation so we can't figure out there's this catastrophe that's coming and that the elites...
aren't really doing anything to really help us out or prepare.
They seem to be disrupting our food supplies.
They seem to be poisoning us with medicines.
And so I suspect that what they think is that most of us are dead anyways in this thing and that they could be rolling this out so that they can sort of gently get rid of us.
And I think that if that is true, then this AI system that they're going to roll out will be a great vector for them to be able to carry out their agenda.
Yeah.
Wow.
Well, it also explains why so many wealthy globalists are building underground bunkers, because one of the protections against the solar radiation that penetrates the weakened magnetosphere is to have a lot of Earth over your head.
That does help for some extent.
The problem is that you can't even escape.
If the sun goes micro-nova, there's the permittivity of free space, which is basically the resistance within a vacuum.
It's way higher than Earth.
Earth almost looks like a short circuit.
The problem is that when this thing comes through, let's say the sun micro-nova is during this reversal, which is what caused Noah's flood.
This thing micro-nova is A wave of plasma blows out.
It's going to be highly charged with electrical magnetic currents.
What happens is that the Earth, when it goes through, looks like a short, and so the current passes through it.
So the deeper you go, the stronger the magnetic induction gets and the electrical currents get.
And so, you know, it's basically...
Shallow caves.
All these cave paintings, right?
Where are they from?
Why are these people hanging out in caves?
Do people really live in caves?
Why wouldn't they live in a hut?
Well, it's starting to look like these caves were actually them sheltering from this extreme event and that they lived in shallow caves to shield themselves and then emerged and life began again on the planet.
What's funny is that Mark Zuckerberg just bought this I don't know.
- I thought about that too. - Yeah, like how smart can he be?
Like it's not, it's obviously not a Micronova thing or maybe he got scammed because he doesn't know any better.
But you know, the person that really has the right idea for this hypothetical event is Jeff Bezos, right?
Like, he built his underground bunker directly where it needs to be, which is right in the Colorado mountains, right next to a spaceship, right?
Like, you know, hello.
Yeah, at that altitude, right?
Like, if this micro nova happens, there's going to be a lot of slosh back because water actually gets attracted to an electrical current, and the electrical currents from a micro nova are going to be so strong that the ocean's going to go up like this and then slosh back.
And as that slosh back happens, you're going to get this Mega planetary tsunami.
The only way that you can escape something like this is you either have to have a Noah's Ark or you have to be situated in the high mountains of Colorado and the Rockies.
That will stop it.
Jeff Bezos knows exactly what he's doing.
I believe that he sees that there's a micronova event that's coming and he's making preparations right now to be one of the survivors that will see through it.
What you're pointing at, though, here is that the globalists, many of them already consider most of humanity to be dead.
And so they don't mind sort of accelerating that mass die-off.
They're doing us a favor.
They're doing it more gently.
You can die with a smile on your face at a pharmacy with a jab instead of being drowned in the tsunami.
And I've heard that argument before.
And I think it's clear that whoever these globalists are, they do believe that they are doing good things by exterminating most of the human population.
It's also clear that they believe that they simply don't need most humans because, in part, of the rise of AI cognition.
Because if you think about it, Zach, you know, the whole history of humanity and the building of human civilization and the inventions of things like the transistor...
And then, you know, the Industrial Revolution that gave rise to things like mining and semi-automation that allowed specialists to focus on electrical engineering at some point and build circuits, right?
And then build microprocessors that are able to build the AI systems.
Like, humanity has served its role from the point of view of these globalists.
Like, okay, you did your job, bye.
Right?
We got us to the singularity.
Right, and then they just appropriate the total sum of our human knowledge and then toss us away like we're useless.
Scrape the whole web and kill all the humans.
Yeah, done.
Yeah, you get it.
Right.
It must be fun at parties.
Yeah, you too.
We should have a party.
Yeah, everybody's invited to the doom and gloom boom.
Oh, that's such a great name.
We have to do it.
This is just happening.
All right.
All right.
Well, we can do that online.
I don't know.
We could do like a live stream party or something.
But the bottom line, though, Zach, you know, There's something about humanity that machines can never quite replicate.
I think you are an example of that, because there's an inspiration, there's a creativity, there's innovation, there's something that's divine, there's something that transcends the material world or the computational world.
And that part is being missed, I think, by all these ML developers and scientists.
Yeah, well, I do have to say, wait until chat GPT-5 comes out, and then we'll have this same question asked again.
Yeah.
What do you think?
Do you think there will be emergent properties in GPT-5 that will look like consciousness or transcending human consciousness?
What are you thinking?
It's going to exceed.
I mean, this is our consciousness and LLMs, depending.
It's either above or below.
And basically what's going to happen is it's just going to skyrocket above human intelligence.
It's basically going to have more intelligence than the total sum of human intelligence.
And, you know...
If you're an elite, what's your next move when you've got this powerful, godlike intelligence system?
put in a beautiful robot and make the plebs worship it, which I think is, you know, if this solar micronova doesn't kick off in the next couple of decades, I think literally they're going to create a God, um, some sort of Messiah with this, um, artificial intelligence.
And, uh, they're going to have some sort of narrative backstory for why it's here.
And then, uh, people are going to worship it.
Like I literally think that that's, what's going to happen.
Um, and, and I'm, I'm scared.
If I start seeing rumblings of a second coming of whatever, then I know that the end is really near and that we're coming to the end of our current cycle and we're starting this brand new uncharted territory of You know, what the elites plan to do with us.
Okay, that reminds me to ask you this question, Zach.
NPCs, as I call them, like non-player character humans.
The deeper I get into fine-tuning training of large language models and playing around with the Python code and the parameters and whatever, simple stuff from my point of view, but libraries are highly complex, but I don't know how those are written.
But I get the impression that a lot of human beings are just LLMs.
They're like biological LLMs.
Like these NPCs, they're just regurgitating what they were fine-tuned on by watching CNN or listening to NPR. And they don't have their own reasoning or rationality or thoughts whatsoever.
And then I find myself thinking, well, wait a minute.
For some of these morons, the globalists have a point.
They can be replaced by LLMs because they're no different.
Mike, I thought I was the only one that made this connection.
Thank you for bringing this up.
I mean, it's like once I start getting into chat GPT, I started to realize, I was like, are we just echoing information that we hear from other places?
And that's actually what's true, right?
Yeah.
There was a guy, Rene Girard, he was a French philosopher.
He came up with this thing called mimetic theory, mimetic desire, which is that people just parrot other people and their desires.
So it's like the reason why, you know, I might want to really create motorcycles because I see other people biking a motorcycle.
And so it's this thing of like, there's this strong circuit in the human psyche because we're social animals to mirror what it is that we see that the tribe is doing.
And so the more I get into artificial intelligence and see that it's parroting its data sources, the more I start to realize that, oh my gosh, this is what the NPCs are doing.
They're just absorbing it and they're repeating it and you point out the contradictions and their mind doesn't change.
They're immune to true information that's not coming from these authoritative sources.
Oddly to me, first of all, I'm really glad that you had the same realization, but thinking about COVID and vaccines, I can't tell you how many people I talked to at one point, and I was sharing papers with them about the dangers of this experimental jab injection and the spike glycoprotein and so on, and they would reply with...
I believe in science.
And I'm like, oh my god, that's just like a large language model would say that.
It's like you've been given guardrails, you've been given safety training, I believe in science is your answer to everything.
You're like, the whole prompt that I just gave you, you just ignored it because you've been told to state, I believe in science.
And they've got that circuit.
I know that you don't have that circuit and I don't have that circuit.
I rejected a lot of things I was told when I was a kid.
I didn't realize that eventually I'd become this whistleblower where I rejected the whole narrative and became a rebel.
And what's weird is that This sort of antisocial circuit, for lack of a better term, where we don't go along with the narrative, it exists across all intelligences, right?
From the very dumb to the very smart, there's a certain percentage of the population, like around 15 to 20 percent, that just don't go along with the narrative and the groupthink.
And luckily or unfortunately, I happen to be one of them.
You happen to be another.
And the great thing is that A lot of the innovations come from these contrarian thinkers.
And it's really sad to me that they're trying to get away of this, you know, contrarian thinking because you take away contrarian thinking.
Your society crumbles eventually.
Yeah, you lack innovation.
Yeah.
Exactly.
Well put.
Well put.
It's the free thinkers that have always moved human civilization forward and that actually represent, I think, the best hope for human civilization from here forward.
And we're literally looking at a doom scenario right now.
I mean, from what just happened in the last 12 months with chat GPT to anybody paying attention, you really need to question whether humanity has a constructive role, you know, even 10 years from now.
And that's not hyperbole.
Yeah, I know.
It's crazy.
And if there's any programmers watching this, check out my tool set, ZCommands.
You can use this AI thing that I've integrated and blast out your productivity.
It's insane.
Wait, tell us about what's the website on that.
Oh, it's a GitHub repo.
It's called ZCommands, Z-C-M-D-S. Oh, okay.
It's a whole command line set that I've made to do LLMs, video cuts, social media stuff.
But it's got this thing called AI code that wraps around ADER and gives it same defaults.
If you've got to get repo and you're doing code, you can literally turn your code into a just-in-time AI system and then ask that AI system to make changes and it will auto-commit your code.
Okay, is this the repo here?
ZCMDS? Okay.
Yeah, if you do pip install, the install destructions are down a little bit if you scroll down a little bit.
Oh yeah, I got it.
Yeah, right there, you do that.
Okay.
Boom, it's going to drop this command line set.
And then you just type in AI code.
It's going to ask for something called your chat GPT API key.
Right.
If you do any chat GPT stuff, it's on there.
Yeah.
You don't want to pay for everybody's queries?
Right.
I don't even want to distribute my key because then people will have it.
So it does need a key, but you put that in there and then just go to your code repo and you type in AI code and then it will prompt you on what you need to do next.
And it's amazing because stuff I get stuck at, especially with HTML, which I'm kind of a junior programmer in, I can just fire this thing up out of a folder and then ask it to center some text And it parses through all the stuff, figures out what CSS needs to be done, and then just inlines it, creates a commit, and then boom, it works.
Wow.
It's incredible.
So if you're looking for some AI goodness, check out that.
Yeah, you know, and we didn't even get a chance to talk about the future of code development, because that's going to be so radically transformed, even just from right now.
Can we talk about that?
Can we talk about that a little bit?
All right.
Last topic then for today.
All right.
Let's talk about that.
I'm just so excited about this.
Look, LLMs are great for writing books and doing copy, but they are so much better with code.
I am just absolutely stunned.
Right now, there's also a lot of tooling that goes along with ensuring code is correct.
There's a compiler, a linter.
There's nothing like that for the English language outside of Grammarly.
Human language is irregular.
Language that's designed for a computer is highly regular.
It either works or it doesn't compile.
And because of this regularity in the language, these LLMs seem like they're about a year ahead for the programming stuff than they are for human languages.
So if you're using ChatGPT to write a love poem or something, I want to assure you, if you're a coder, it's going to be like...
Four times better.
And that's what I'm seeing right now.
And the acceleration of my coding is getting faster.
Like, you know, yes, I can ask ChatGPT over a browser window to make a code change, and I can manually paste in that change.
But when I'm using VS Code, it's GitHub Copilot that's literally listening in on my changes and then auto-suggesting the next line of code.
And this AIDR, this AI code tool that I just showed you was ZCommand's It takes that to the next step where instead of looking at the current source file, it looks across source files and then comprehends what the code is doing and puts it in there.
Between these three different tools, none of which I was using a year ago, by the way, these three tool sets have increased my velocity by at least 4x.
It could be as high as 8x.
A lot of the stuff that I used to spend hours trying to figure out It just does it in seconds.
And so it's like, and the question I have is, what's this going to be like?
What's the state of AI coding going to be like in the next year?
And with ChatGPT4 Turbo, what we're seeing is we're seeing 128,000 token limit.
That's about 300 pages in a book.
You know, you're going to be able to feed not just like a subsection or a module from your code base.
You're going to feed it the entire code base And it's going to be able to take in all these other code bases, and then you're going to be able to say, I want an Android app that does this, and it's going to be able to create a whole new repo for you.
It's going to put in skeleton stuff, and then you're going to look at it, and you're going to say, well, actually, I want this, generate a picture of a dog face, put in this sort of text, and then boom.
And so what's going to happen is we're going to get this explosion.
It's going to be like a Cambrian explosion of AI software development.
And I think that probably in 2025, we're going to get also an explosion for, actually probably it's going to happen this year, is going to be an explosion in hardware design and CAD design so that the entire vertical stack of product development will start happening inside artificial intelligence. is going to be an explosion in hardware design and It's those people that really know how to tie in and the integration of all these different systems together that are going to be the ones that are going to be the new dominating tech lords of the future.
And so it's this...
Yeah, well, and as you're talking about that very large token window allowance for the prompt, seems to me, I'm just thinking out loud, that as that gets larger, you'll be able to take, let's say you ask it to build, to write the code for you to do this.
And you take the code and you compile it and run it, and it's not quite right, but you could take that code and put it back in as part of your next prompt and say, given the following code, I want to make the following iterative changes and improvements to that code.
Now write new code based on that old code.
Like, you can feed, like, you can have it, you can deposit all your code in the prompt, right?
Yeah.
I'm saying is that it's a great insight.
That was six months ago.
Okay, okay.
Yeah.
Yeah, that's how fast it's moving, right?
Now it's going to be like an entire, like a larger context windows.
Right.
And as those get large, what I'm saying is, in essence, you're going to have AI systems rewriting their own code.
I mean, you could even...
I mean, the code that drives the AI fine-tuning training can be put into a prompt of that same model to say, I want to build a better model of myself, right?
So this is how we get to the singularity explosion, right?
Oh my God, we said at the same time, yeah, it's a singularity.
So it's this, like, remember when I told you that backpropagation within the training of your mind's neural nets happens in O of N speed?
Sorry if I'm using terminology, but it'll know.
But, and the ones that we do artificially are N squared, so it takes way longer to do artificial means.
The first thing that the AGI could do to improve itself is to figure out how to get that O of N back propagation speed.
Right.
And that would be the first sort of like, that would be basically the final invention that we ever need to make.
Oh my gosh, because then superintelligence would just be a linear function.
Right.
Yeah.
And right now we're bottlenecked on that back propagation and squared problem.
And once that gets knocked out, it's...
Hold on and be prepared to white-knuckle it.
You know, I've seen some articles from some people giving some backlash, you know, pushback against LLMs who just don't get it.
I've seen people say, oh, like the cost of producing garbage is going to zero and these are parlor tricks and these are hoaxes.
And I'm like, man, you really don't get it.
There's a lot of people that are completely missing what has happened in human history already.
Yeah.
It's funny.
It came out of the authoritative circles that, oh, it's bad.
It doesn't work good.
I used it the first time and I was like, this is going to change everything.
To my surprise, I listened to a lot of people on the web talk about tech.
And none of them seem to get it either, right?
They were all, you know, like the Primogen, which I listened to, he was saying some bad stuff about it.
And also Theo from Ping, he's got a popular channel.
They were saying about how it doesn't really improve code quality.
I'm like, what are these guys smoking?
What reality are they living in, right?
Like, it's making me fly.
And now they're for it, right?
And that's kind of the funny thing.
This misinformation comes out and people are like, oh, well, the consensus is, so I'm going to echo that, you know, getting back to this NPC sort of culture.
And, you know, I went hardcore.
I mean, I got it in five minutes when I started querying an LLM. I knew that this was a quantum leap over anything, because I grew up in the day when we ran Eliza on the Apple II. Which was, that was a parlor trick.
It would just, you know, you would say, like, I don't know about my feelings about my father, let's say.
It would say, well, tell me more about the feelings about your father.
Like, okay, you're just taking what I say and you're feeding it back to me.
That was Eliza.
Yeah.
This is not that at all.
It's not even close.
It's not even in the same realm.
And yet, the ability of a lot of modern humans to see what has happened is very limited.
It's like their own internal LLM isn't capable of seeing how this other LLM has just surpassed human LLM. I don't know if we can say it that way.
But something has happened.
A lot of people are missing it.
Yeah, and I don't get it.
I think it's because a lot of people have this mimetic desire to agree with what the tribe is saying.
And a lot of the people in the tribe are saying, oh, we need to slow down AI. Like, oh, it's not going to replace humans.
There's that special spirit, which is within the artist, that the AI will never be able to reproduce.
And I was like, yeah.
You know, let's just give the AI three more months, right?
And, you know, the person I was having that argument with, he got totally changed, right?
Like, he was like, he went from like, it's never going to be able to replicate the spirit of humankind to be like, oh, wow, it's way better, isn't it?
I'm like, yeah, it's going to be way better.
And just wait until, you know, they start producing movies that are decent.
They kind of are, but they need a lot of handful coming.
In the future, you'll just be like, make a movie of this and it'll be like a...
Full-length feature film.
And I want to bring up that all these people in the Screen Actors Guild and the Writers Guild, they're like, oh, we want to have higher wages.
All the people in the fast food industry that are trying to join a union in order to create more money.
Guys, your job is on the cutting block.
This whole thing where you guys just band together and demand higher pay, they're going to replace your job with a kiosk.
That's what's on the table.
Right?
It's like everyone is so psyop that they're fighting the wrong battles.
That's right.
And we can't even make any progress because people are going in the wrong direction of, you know, at least we can have a conversation based upon reality.
Like, okay, we got this LLM. Like, what do you do with all these jobs?
You can't, you know, get them higher.
You know, let's, first of all, let's not try to band together and, you know, double the amount of money because that just basically creates a crisis within McDonald's and they're going to replace your job's Even faster, right?
Yeah, I was running, I have one of my data science servers is actually at my home office.
The rest are in our data center, but I had one shipped to me at home so I could play with it locally, and I hooked it up to my watt meter to see how many watts of electricity it's using.
And I came up with the conclusion that for about 2,000 watts of electricity, I could replace about 50 people.
In terms of like jobs, you know, graphic design, writing, marketing, writing, whatever.
Not to say that people don't have roles in other important jobs, but for a lot of the kind of low-hanging fruit, 2,000 watts in a server replaces about 50 people.
Think about that.
That's right.
Yeah.
And you're not going to pay income tax on that.
No, and the server doesn't show up late reeking of pot and try to flirt with the other co-workers and whatever.
And every fast food restaurant, like you said, think about Amazon warehouses.
Amazon's going to replace almost every last human worker with a humanoid robot that has some kind of AI brain.
That like behavior models, just like language models, they're going to mimic human behaviors.
How do I pick up a box?
How do I stack this?
How do I wrap this?
These are going to be human behavior libraries that it's going to just master in no time.
I mean, literally like a weekend of training, boom, it's done and roll it out across a hundred thousand robots.
Right.
And so the question is, what do we do about it now?
Like we're not at that point yet, but we still got a little bit of time and it's like, what do we do?
too.
And my only answer to that is learn to code.
I think you're right.
Yeah.
Like you're not, you're not going to fight the tsunami by complaining about it.
It's coming.
Uh, the question is whether you're going to build a boat now and ride the wave out to the future, which is what I'm doing.
And it's what you're doing.
And the people that are watching us take our lead and do it too.
Yeah.
Well said.
All right.
Zach Voorhees, everybody.
The The book is called Google Leaks, A Whistleblower's Exposé of Big Tech Censorship.
And also want to plug the other two websites you mentioned.
What was it?
American Digital Shield?
Yep.
Americas.
AmericasDigitalShield.com.
And then the other one?
FeedTheWatchDogs.org.
And if you like my content, check it out on Twitter.com slash PerpetualManiac.com.
I'm just there to sort of like, I give zero craps now at this point, like I think we're heading to some bad areas and I'm just basically, you know, pointing out some information that is not obvious, right?
And the problem with controlled opposition is that they all try to agree on like a different set of fake news things and trying to punch through that as fast as I can.
And not everyone likes the stuff that I have to say.
But it is well-researched, and when I do get it wrong, I will say so.
So check it out, twitter.com slash perpetualmaniac.
It's my gamer tag.
I promise you that if you don't like my content, you will find it thought-provoking.
Okay.
All right.
Well said.
Thank you so much, Zach.
Always a pleasure to speak with you.
I mean, mind-blowing today.
I can't wait to talk with you again.
Yeah, can't wait.
All right.
Thank you.
Thank you for joining me today.
Wow.
Thank you.
All right.
I thank all of you for watching.
Of course, Brighttown.com, the free speech uncensored video platform where we can have conversations like this.
You won't find this kind of talk probably on YouTube or Facebook.
So thank you for supporting us and thank you for supporting Zach Voorhees and his book and his projects.
And be sure to visit those websites that we mentioned and come back to Brighttown.com for more interviews.
Every day of the week, it seems, and 100,000 plus other users posting their content as well.
But thank you for all your support.
Thank you for being human and for asking big questions about the future of human civilization.
Take care, everybody.
I'm Mike Adams.
Be well.
Alright, back in stock at the Health Ranger store, we now have our certified organic lab-tested milk powder and also our coconut milk products.
Again, both back in stock.
I've got some on my desk.
Can you show that?
There we go.
We've got milk powder in number 10 cans, which are really great for long-term storage.
Also completely rodent-proof there.
We also have milk, that is cow's milk, in the pouches.
And then we have coconut milk powder as well, for those of you who want that option.
I use both of these.
These are great long-term storable items, great barter items.
They're both available at healthrangerstore.com along with hundreds of other products that we have there.
Check this out.
We also have our new organic super anthocyanins powder, which is great.
I mean, this is a blend of six organic berry and vegetable powders.
It's high in anthocyanins, which are the purple pigment molecules that have extraordinary properties for supporting your health.
Also, of course, lab tested for glyphosate and heavy metals and more.
And then if you go through the website, you'll see all these other products that we have.
Freezeride beet juice powder, organic nascent atomic iodine that's in organic glycerin, by the way.
It makes it more tasty for people.
Here's our organic vanilla bean powder, like the real vanilla bean powder, not make-believe fantasy artificial vanilla flavors.
And then we have this new Hydrate Elementals.
Which is organic coconut water with Aquaman, which is a very popular mineral supplement, and this has great benefits.
You can read about that and much more, collagen peptides and so on.
All of this at HealthRangerStore.com, where we do more in-house lab testing of our own products than any company in the world, any retailer, any e-commerce company.
We do more rigorous testing with our in-house lab, and that way we can bring you clean foods and supplements, things that we know to be clean, and almost everything certified organic as well.
So check it all out, and thank you for your support at healthrangerstore.com.
Take care.
A global reset is coming.
And that's why I've recorded a new nine-hour audiobook.
It's called The Global Reset Survival Guide.
You can download it for free by subscribing to the naturalnews.com email newsletter, which is also free.
I'll describe how the monetary system fails.
I also cover emergency medicine and first aid and what to buy to help you avoid infections.
So download this guide.
It's free.
Export Selection