Jan. 22, 2025 - Health Ranger - Mike Adams
01:12:48
Zach Vorhies and Mike Adams unveil AI breakthroughs, secrets and warnings for 2025

Welcome, everybody, to this Brighteon.com interview.
I'm Mike Adams, and today we're joined by just a really brilliant man, a very special guest, someone I've known and interviewed for several years.
He's known as the Google whistleblower.
Zach Vorhies is his name.
And he joins us today with what I think you'll find to be a revolutionary talk about AI and reasoning models and what's coming in the near future.
So welcome to the show today, Zach.
It's an honor to have you on.
Thanks, Mike.
It's good to be back on your show.
It's always great to have you on, Zach.
And one of the things that I love about just being in touch with you on a personal basis is that you are always on the cutting edge of the latest developments in AI and tools and coding and so on.
And you have seen, I mean, we've all witnessed, but you have really taken notice of these just historic changes in the last, I mean, six months.
Can you give us a big-picture overview of what has just happened in the world of AI? And then we'll get into some of the implications of what it means for society.
Right.
First off, I want to let you know that everyone's saying that AI is going to slow down or running out of data.
It's complete nonsense.
This train has no brakes.
And what happened last week was a wake-up call, a shock to the entire tech community: the announcement that OpenAI had created a soft AGI agent.
So AGI is artificial general intelligence.
It's the holy grail of what artificial intelligence engineers are trying to make because it will literally be the last invention of humankind.
Why?
Because once you have AGI, you have a machine that is smarter than all the people on the planet combined.
And you can make millions or billions of them sitting in a data warehouse coming up with inventions, patents, innovations.
If you want to talk about how mankind is going to get to the stars, it's going to be done through artificial general intelligence.
And importantly, Zach, I agree with everything you said, but the AGI machines can also build better AI. Yes.
That's where we get into the super cycle here.
Right.
You have this recursive feedback loop where you create a smarter AI and the AI starts to innovate on better artificial intelligence that humans can make because they're smarter than us.
In other words, they make the smarter AIs.
Those smarter AIs then turn around and make smarter AIs.
Here we go, fourth industrial revolution.
So the big news last week was that we hit the first benchmark of this AGI path.
We're now in the soft AGI phase.
And that came with the announcement by OpenAI that they had scored a revolutionary result on the ARC-AGI test.
Now, most of the artificial intelligence tests that are out there contain some semblance of problems that have already been seen before, right?
So make me a linked list implementation or make me a tic-tac-toe game or make me a to-do list app, right?
Like these things have been done so many times that an AI is really good at being able to recreate it simply by accessing its own data store and figuring out how to complete it.
This ARC-AGI test is different.
Because what it does is it generates novel problems and novel solutions that no one has ever seen before.
And this was the weak point of all of the large language models until this last week, where the score went from 0.5% completion to a whopping 20% completion.
Now these are...
The crazy difficult math test that you're referring to, is that right?
Yes.
It basically tests an LLM's innovative capability, its ability to think outside of its box.
Okay.
And the highest score was 0.5.
Humans have always reigned supreme in this test since its inception.
It's a pretty new test, but a human PhD scored an average of around 18 points.
Oh, wow.
And OpenAI was able to, with their O3 model, get 20. Wow.
Yeah.
As I understand it, these are the kinds of questions that a PhD mathematics expert could spend days on.
Correct.
Correct.
And the amount of energy required per question costs between $1,000 and $5,000.
That's like a year's worth of energy at standard rates.
Let's say in San Francisco.
Actually, it's way more than that.
Let's say somebody living in a place where energy is cheap.
It's like an entire year's worth of energy.
And that's being spent on one question.
Now, this will improve, as it always does.
The cost of general AI has fallen by a factor of 1,000 in the last two years.
I expect that these AGI queries will also require a thousandth of the amount of energy in, let's say, the coming two years.
But this is a very important benchmark because now that we've seen this and now that we've experienced it, everyone that says that...
Artificial intelligence can't do this or can't do that.
They've all been stunned into silence.
And now we're officially entering the soft AGI phase.
Things are going to progressively get smarter.
But one of the big things that's a very worrying trend for me, Mike, is that a lot of these models are starting to get very, very expensive.
OpenAI just...
announced their new O3 model, the one that beat this ARC-AGI test.
A cheaper version of it is being made available for between $200 and $500 per month.
And now that seems prohibitively expensive.
But what was surprising was just the other day, Sam Altman came out and said they're actually losing money on this model, because they didn't anticipate that the heavy users who need this model would use it so much. Because it's basically: you pay for it per month.
It's a retainer rate.
You get to ask it as many questions as you want.
They said people are going to ask X many questions.
It turns out they asked 3X. But let me jump in, Zach.
I'm sorry to interrupt.
We've both seen NVIDIA announcing new compute hardware that is extremely power efficient, right?
So you even alluded to this.
So the compute cost, which is very heavily tied to power costs, like that's the ongoing cost, not just the upfront hardware, but the power density of compute is going to improve by at least an order of magnitude this year alone, correct?
I mean, at least that's my assessment.
What do you think?
Yeah, we're looking at like a thousand-X increase, you know, in the next year with this new Blackwell processor that they're releasing.
So, you know, the NVIDIA CEO was on stage at the NVIDIA event and he held up a wafer that represented all of the stacked CPUs that are going to go into this.
You know, final product.
And it's, you know, it's about the size of, you know, one and a half feet in diameter as a circle.
And they cut those out and they stack them on top.
And then there you go.
You got your AI chip.
And what he said is that the interconnects between all the chips, because you have to shuffle data back and forth, equaled 1.4 petaflops.
I'm sorry, petabytes per second.
Oh, my.
Yeah.
Wait.
So you're talking about the entire traffic of the entire world over the internet, in this chip.
It's on par with that in terms of the scale and the size and the speed of how fast these chips can talk to each other.
That's insane.
Yeah, that's insane.
So what that means is that the reason...
First of all, the size of the language models or the reasoning models that can be kept in distributed memory can be immense.
But also the process of the reasoning models thinking through their own steps themselves will be rapidly accelerated, correct?
Correct.
And one of the things that Eric Schmidt, former CEO of Google, said, and he was the good CEO, is that what nobody realizes yet is how big these context windows are going to get on these large language models.
Now, they have like a fake size they put out there.
But the LLMs start to forget a lot of the stuff that's in there.
But very, very soon, you're going to be able to fit an entire book into a large language model.
Let's say an entire encyclopedia.
And it's going to know every single bit of what's in there.
You're going to be able to ask questions about any bill, any book, any library.
Let's say your entire corporate documentation, in total: feed that as part of your query into a large language model.
It's going to think about it and it's going to give you an answer.
Yes.
Well, I mean, look, right now, there are a lot of models out there that have a 128k token context window or even a 200k context window.
But what you're saying is that that's going to increase to millions of tokens in the context window.
Well, there's two things.
I'm saying yes.
And also the 200K context window is sort of a fake number that's inflated because in reality, the LLM doesn't...
It gets more forgetful the more context that you add.
So even though they technically have 200K, it's not a real 200K. But in the future, it's going to be a real 200K where it won't forget anything.
And it'll understand all of it completely.
Yes.
So is that because the...
The 200K claim, is it more like a flash attention, like a moving window of attention across the context?
Yeah, and also, the LLMs, for some reason, kind of mirror the human experience, but for whatever reason, we don't know why, they seem to remember specifically the stuff at the beginning and the stuff at the end, and everything in the middle they start to forget.
And that's the big problem.
That's just why I call it a fake context window.
It's not actually that large.
It gets forgetful the more you load into it.
Okay.
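[To make the "fake context window" concrete, here is a minimal needle-in-a-haystack sketch of the effect Zach describes: bury one fact at different depths in a long prompt and check where recall fails. It assumes the OpenAI Python client; the model name and the needle itself are illustrative stand-ins, not anything from the interview.]

```python
# Needle-in-a-haystack sketch: hide one fact at varying depths in a long
# prompt and check whether the model can still recall it.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

FILLER = "The quick brown fox jumps over the lazy dog. " * 2000
NEEDLE = "The secret launch code is 7-4-9-1."

def build_prompt(depth: float) -> str:
    """Insert the needle at a relative depth (0.0 = start, 1.0 = end)."""
    cut = int(len(FILLER) * depth)
    return (FILLER[:cut] + NEEDLE + " " + FILLER[cut:]
            + "\n\nQuestion: What is the secret launch code?")

for depth in (0.0, 0.25, 0.5, 0.75, 1.0):
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "user", "content": build_prompt(depth)}],
    ).choices[0].message.content
    # Models that "forget the middle" tend to fail around depth 0.5.
    print(f"depth={depth:.2f} recalled={'7-4-9-1' in reply}")
```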
So let me ask you to explain the practical...
See, look, a lot of our audience still, I think, aren't aware of what this means for, let's say, remote workers.
Like right now, okay, right now, like...
Let's say I have a company and I use the Slack application.
And through Slack, I've hired a journalist, a graphic designer, and a coder.
And I just communicate to those people, let's say, through text on Slack.
And right now, they're human.
But in a few months, they may not be human and they may be just as good, but 100 times cheaper, right?
Can you kind of talk us through what that's going to look like?
Right.
So what's really interesting in this LLM world is this simulated chain of thought.
And what's happening is that we're getting a bifurcation of agents.
And the two bifurcations are essentially architect versus implementer, right?
And so the two ways that this is going to transform how managers are able to get work done is that they're going to be able to give a rough outline of their high-level goals.
And then an architect AI agent will figure out all the little steps that need to be done, break them down into microtasks, and then outsource those to a collection of humans or AIs or a combination of the two.
And so from a high-level standpoint, you're going to be able to get more done by saying less and defining what exactly it is that you want up front.
Obviously, that doesn't always work in practice, but...
That's the general goal.
The second part of this is going to be the implementer AI, which is, you know, once a microtask is received, you're going to have to have an AI or a human carry that task out.
And so with Slack AI, essentially what we're going to have is we're going to have less cognitive load in order to get stuff done.
And then the things that will actually be doing the tasks will slowly... well, actually, it's pretty quick by historical standards... will rapidly be transformed into a collection of agents, so that the entire thing is essentially automated, with feedback when problems arise due to design deficiencies in your initial spec.
Okay, so the architect AI... Right.
It's going to act as the intermediary.
So, for example, let's say it gives a microtask, and that microtask turns out that some detail is invalid or contradictory to the goals of the work that's being proposed.
It's going to be up to the AI to figure out that contradiction, to figure out how to explain that.
And if it can't resolve it to itself, then it will force feedback to the specifier, which will be, let's say, you, Mike Adams.
And it's going to say, hey, we've got a problem here, buddy.
And here's the details of the problem.
You're going to think about it, and then you're going to give a new directive to it.
And then it's going to modify the spec that you initially gave it, and then adjust with a new set of microtasks.
Right.
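[A minimal sketch of the architect/implementer split and the feedback loop described above, using the OpenAI Python client. The prompts, model name, and the CONTRADICTION convention are illustrative assumptions, not any specific product's actual pipeline.]

```python
# Architect/implementer sketch: one agent plans, one executes, and
# contradictions are pushed back to the human specifier.
from openai import OpenAI

client = OpenAI()

def ask(system: str, user: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o",  # assumption: any capable chat model works here
        messages=[{"role": "system", "content": system},
                  {"role": "user", "content": user}],
    )
    return resp.choices[0].message.content

spec = "Build a weekly sales report emailed to managers every Monday."

# 1. Architect: turn the high-level goal into microtasks.
plan = ask("You are an architect. Break the goal into numbered microtasks. "
           "If the spec is contradictory, say CONTRADICTION and explain.",
           spec)

# 2. Feedback loop: push contradictions back to the human specifier.
if "CONTRADICTION" in plan:
    print("Architect needs clarification:\n", plan)
else:
    # 3. Implementer: carry out each microtask (here, just the first).
    first_task = plan.splitlines()[0]
    result = ask("You are an implementer. Complete this microtask.", first_task)
    print(result)
```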
Okay, so let me give a practical example of this.
And you kind of explain this, please.
But let's suppose I'm a business manager and I've got a bunch of data, like, I don't know, sales data for the last year of products.
Maybe I own a grocery store, let's say.
I've got a bunch of sales data and it's not really well formatted, but it's in a CSV file or something.
So can I just take that CSV file, hand it over to the orchestration agent and say, you know, Create some meaningful charts so we can visualize this data.
Go!
Right?
And it'll take it from there?
Yeah.
So it'll take it from there.
And what the next step will be is it's going to try to figure out how your performance is relative to similar sites.
Right?
So the first question of, you know, how is my marketing doing is how is my marketing doing relative to other groups of similar capacity?
And if you're doing better than most of them, then there's not a lot of optimization.
But if similar groups are doing much, much better, then you know that you've got a marketing problem.
And so you give this information to an AI. The AI goes out, does a bunch of research, maybe scans the web.
Maybe there are some paid APIs it has access to for privileged information.
And then it's going to look at all of that and then come back with a report.
And not only will it come back with a report of how you're doing, but it's also going to give you a report of recommendations for what you need to do in order to improve that marketing.
And then if you're like, hey, this is great.
I like the recommendations.
Go ahead and do it.
Then it goes back from a reporting agent back to architect to implement the goals that you've just greenlighted.
Okay.
Wow.
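[For Mike's grocery-store example, this is roughly what the implementer agent might generate: load a messy sales CSV with pandas and produce a couple of simple charts. The file and column names are made up for illustration.]

```python
# Sketch of agent-generated reporting code for a raw sales CSV.
import pandas as pd
import matplotlib.pyplot as plt

# Load and lightly clean the raw export.
df = pd.read_csv("sales_2024.csv", parse_dates=["date"])
df = df.dropna(subset=["product", "revenue"])

# Chart 1: monthly revenue trend ("ME" = month-end frequency, pandas >= 2.2).
monthly = df.set_index("date")["revenue"].resample("ME").sum()
monthly.plot(kind="line", title="Monthly Revenue")
plt.savefig("monthly_revenue.png")
plt.clf()

# Chart 2: top ten products by total revenue.
top = df.groupby("product")["revenue"].sum().nlargest(10)
top.plot(kind="bar", title="Top 10 Products")
plt.tight_layout()
plt.savefig("top_products.png")
```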
So let me actually give an example of this.
I asked a reasoning model, not OpenAI in this case, but I was playing around with the DeepSeek reasoning engine, which is very, very good.
And I described the problem of big pharma paying money to the media to push pharma propaganda and paying money to doctors, kickbacks, etc.
And I asked it for a list of policy or regulatory and law.
Procedures or alterations that could help solve this problem in the interest of improving public health.
So I specifically asked this to a reasoning model, not just a straight LLM. And Zach, it came back with just really incredible suggestions like restricting direct-to-consumer drug advertising, prohibiting financial kickbacks from big pharma, strengthening FDA oversight.
It even says promote consumer access to generic drugs and reducing brand bias, etc.
Criminal prosecutions for misconduct.
I was like, wow.
This model thinks the way I think because I suppose that I think logically about this.
But this model is not anti-pharma biased.
It was just walking through the thinking process of how we achieve better outcomes.
And I was blown away, Zach.
I mean, what's your reaction to that?
Yeah, there's going to be a lot of pressure on politicians when the common person can ask an AI and get all of the research that would have taken millions of dollars to do to come up with a policy solution set.
And hey, look what's happening right now with the California fires, right?
The policy would be like, hey, don't ground your firefighters when there's a fire going on.
Number one.
Right?
Yeah, number one.
Second, don't turn off the water pressure to your fire hydrants.
That's two.
That seems like Logic 101. Yeah, and then the third thing is, you know, don't use salt water to fight a fire.
You know, like, this was taught in the Bible, right?
Like, they salted the earth.
And that's what they're doing right now, is that they're salting the earth, you know, instead of using fresh water to fight the fires.
And so...
You know, what's going to happen is that all these people that are saying, oh, this can't be explained by a conspiracy.
This can be explained by incompetence.
But once people start having access to common sense solutions that are being repeated by a plurality, a democracy of different AIs, essentially, they're going to come and say, well, why aren't you doing this?
Because this hyper-intelligent, exotic, you know, intelligence is telling us we should not be dumping seawater to fight a fire, because nothing's going to grow there.
And yet, if they still do it, then that's going to be an incredible amount of pressure that's going to come on these politicians, and not just in the United States, but all around the world, to do the right thing.
And if they don't do the right thing, then why aren't they doing the right thing?
This age of reason through AI agents, it's very exciting in some ways.
There are potential risks that we'll talk about in a minute, but I can't help but realize after four years of extreme lawfare against Trump and Trump supporters, I can't help but conclude Trump would have had a much more fair time in the court system if the judges had been AI. Seriously.
Right.
I mean, they would not have the anti-Trump human bias that the judges demonstrated.
And I'm not saying that all judges should be replaced by AI. You know, there's obviously a human compassion side that's critical in this. But when it comes to implementing policy, or even, you know, one day I could see how a mayor of a town could be an AI agent and actually do a way better job than these mayors, like the mayor of LA right now.
Come on.
AI could do a better job.
There's no question at this point.
And that's shocking.
And my opinion of the system and the elites that are running it is that, essentially, what they're going to do in order to get this AI into everything is, one, they're going to show that the AI is way better than the human versions.
And then at the same time, they're going to put corrupt sycophants in all the positions that are going to cause pain, right?
We're going to have an unfair judge.
Oh, look, he was selling kids for cash so that they would go to a juvenile detention center so that he's getting kickbacks, right?
These sort of things are going to get rolled out.
We're going to get demoralized because we're going to see how corrupt that system is.
And then as a result, we're going to want to have a lot of change.
And unfortunately, that change is going to be to a non-human economic, justice, and civil-liberties system.
It's going to be re-centered on artificial intelligence.
Artificial intelligence to decide your court cases.
Artificial intelligence to do your checkout.
Artificial intelligence to do all your work that you're doing.
And at the beginning, when all this innovation comes, at the beginning, it's always a much, much better idea than the humans.
Now, here's the problem.
We don't control that AI. It's controlled by oligarchs and their backers that remain in the shadows.
So they can introduce bias to achieve their goals.
They're going to have the unbiased, the fair artificial intelligence.
It's going to be a giant trap.
But we're going to be corralled and forced to go down it because the outcomes are going to be so much better.
They're going to be so much more fair.
And then at some point in the future, the rug pull is going to come and then you can't do squat about it.
You don't like this unfair AI judge that has secret code in it to target you?
You can't do discovery.
Even if you're able to get a download of the entire model weights, who knows how to figure that out other than an absolute specialist with very expensive, exclusive tooling to figure out how the AI came to that conclusion.
Right, and these AI judges could be triggered by a keyword.
One keyword could activate an entire, you know, hyperdimensional array of bias, because that's the way language models can work also.
I mean, that's the way that they work all the time.
Like people have been showing that AI models can be broken where they ask it a question and then they just keep on asking the same question over and over and over and over again.
They feed that to the AI and it goes insane, right?
And so what happens is that you can embed certain, almost think of it like MKUltra keywords.
The prosecutor gets an inside tip that if he uses the word chocolate pie in his opening speech, then that's going to trigger some hidden latent space within the artificial intelligence to be absolutely brutal against the defendant.
Or likewise, maybe they say vanilla ice cream, and then all of a sudden the AI changes its imperative so that the target in question gets off.
Where they say, oh, well, prosecutor, you technically did something illegal here, so we're just going to throw out the entire case.
But this gets to the key issue of centralization versus decentralization.
And you and I both, Zach, are advocates of decentralized technology, and I know we both support the open-source AI community where you have open-source model weights, you have open-source LLMs that people can run locally.
And the open source community right now is doing an extraordinary job, from what I can tell, of keeping up with the big black box corporate AI like OpenAI.
And believe it or not, China is putting out some really incredible models.
I think China is trying to sort of take away revenue from OpenAI at the moment, because they're putting out open source models, or crazy inexpensive models like DeepSeek, that are far better than almost anything I've seen from Anthropic, for example, or other companies in the U.S. What's your take on that?
Yeah, mark my words, it's going to come to an end, right?
Because there's this whole problem of how you audit the latent space to make sure that it doesn't have some crazy trigger in it to do crazy stuff.
Like, I mean, are you going to run a corporation on a black box where you don't actually know how it works, with unknown espionage keyword triggers?
No way.
These LLMs, mark my words, will eventually be controlled in the same way as nuclear exports.
And the problem with an LLM, unlike a nuclear bomb: if I try to take a nuclear bomb, hypothetically, and move it from outside of the country to the inside of the country or vice versa, there's going to be a bunch of nuclear detectors that are going to go off.
And you're going to get investigated really fast.
So not easy.
The problem with these LLMs is that at the end of the day, they're just basically a 16 gigabyte CSV file.
Essentially just a spreadsheet of weights.
And you don't need a truck to carry that across the border.
You just use the internet.
You just download it through BitTorrent.
And then once it's in the hands of your target country, it is free to start wreaking havoc when it gets a trigger word.
And so this whole idea that we're eventually going to have free market for artificial intelligence is true today.
But I don't see that being true in the next, let's say, three years.
I think that we're basically going to have a great American firewall that no internet connection can get through without government approval.
Well, that's huge what you're saying because that will cause the U.S. to be way behind the rest of the world in AI development.
China, Russia, even Iran.
There are a lot of scientists in India and places like that.
I had this conversation with some high-level people in the AI language model space.
I said, well, couldn't the U.S. government just ban the math?
And they said it's too late because the math, you know, linear algebra basically, is already part of so many thousands of science papers and white papers.
And you can't... this isn't like 1945, where you could ban, you know, atomic physics math because nobody could share anything.
Like today, you can't even ban Bitcoin.
You know what I mean?
How do you think this is going to play out?
I mean, I think it's going to have to be a...
It's a really tricky problem because the issue is a couple things.
First off, the United States is going to put down a firewall, in my opinion.
And so that's going to stop the transfer.
But you can't just have a hard shell and expect that to be your only line of defense, right?
Like, you know, you have an immune system even though you have skin.
And so the second part of this is that we're going to have to have an immune system within the United States.
And Microsoft, Apple, and others have been working on this for a very, very long time.
It's called a signed binary, in which, you know, if I make an app, right, I can't just, like, give it to a user and then they run it.
They're going to get a pop-up that comes up and says, this is possibly a dangerous app that could do harm to your computer.
Microsoft can't verify the authenticity of this.
And then you have to go through some steps to make it run anyways.
And then it runs after that.
This is going to increase in its draconian ways to be absolutely insane.
I mean, I already see some of that.
Like, Windows just doesn't want to run batch files anymore without warnings.
What, batch files now?
Yeah.
Hilarious.
Oh, yeah.
Just me running batch files.
It's like, warning, this is unsigned, you know?
Yeah.
So basically: we already have app signing.
That app signing is going to get more aggressive, where Windows will just refuse to run any app that's unsigned.
That's coming.
Yeah.
But I think that there's going to be a second front where the data itself will have to be signed.
So if you want to get a text file that's 8 gigabytes, it's going to have to be signed for you to even be able to open the file if you're running on Windows or Mac because that data file could have an LLM in it.
And so if you're going to have this immune system within the United States, you can't just say, well, the application binaries have to be cryptosigned.
You're going to have to say the data itself has to also be cryptosigned.
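[A minimal sketch of what signed data files could look like in principle, using Ed25519 signatures from the Python cryptography package. This illustrates the concept only; it is not any OS vendor's actual scheme.]

```python
# Sign a data file's bytes, then verify before "opening" it.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Publisher side: sign the data file's bytes.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

data = b"...model weights..."  # stand-in for a real multi-gigabyte file
signature = private_key.sign(data)

# OS side: refuse to open the file unless the signature verifies
# against a trusted public key.
try:
    public_key.verify(signature, data)
    print("signature OK - file may be opened")
except InvalidSignature:
    print("unsigned or tampered data - refusing to open")
```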
Well, but then data scientists like me and you, we won't use Windows.
I mean, we'll all just use a Linux OS or something.
And by the way, right now, even if I'm copying files, like backup files, and it runs into a Monero wallet EXE, Windows pops up and won't copy the Monero file.
You know, it could be a virus.
It doesn't like crypto.
Right.
It does that now.
But we'll just move to something else, won't we?
Well, and this is the way the deep state works, is that when they want to shut something down, they first start with shutting it down for the normies.
And so they kettle you, and then you're like, well, screw Windows, I'm going to Linux.
And so that gives you an escape hatch.
It means that the most vocal opponents of whatever policy is being in place are given an escape hatch so they can retreat into a safe island.
So maybe that's Ubuntu or Debian.
That will work for a while.
And then later on, they're going to come for the islands.
This is already happening with Ubuntu.
Ubuntu is being infiltrated by the big tech corporations that are now funding its development.
They're moving away from package managers that are open to proprietary package managers like Snap.
This is essentially the first step of making sure that this signed-binary, you know, process, where you can only run stuff that's been approved by the government, goes even into the safe spaces that we as researchers use.
Now, will that happen for everyone?
No, right?
Like, there will be some people that will get essentially a root level certificate so that they can run whatever the heck that they want.
And my guess is that that's going to be targeted chiefly, I think, at AI engineers, because they need to have absolute freedom to do whatever they want.
But in exchange for that root-level certificate to run whatever you want, you're going to have to consent to be completely monitored by the government.
Wait a minute, so like KYC for developers?
Yeah, exactly.
We know who you are.
Here's a root-level certificate.
By the way, everything that you do in Ubuntu while the certificate is running is going to report back to the government.
Oh, my God.
And in a way, it's sad because not only will they do this, but I feel like they have to do this.
And the reason why I feel that they have to do this, Mike, is because, look, with this AI, you can create not just a virus that's race-specific, but something that can kill all of humanity.
And not just all of humanity, but let's say all mammalian life forms on Earth.
Deer, humans, cats, goats, everything.
And let's even get even worse.
Someone creates a virus, or let's say a bacteria, that's 10% better at photosynthesis than anything else that's come before it because of some enzyme that's been created, where it goes from 3% efficiency to 13% efficiency, right?
If you just release that bacteria into the wild, it's going to convert the entire oceans into a sea of grass, right?
A floating, photosynthesizing grass is going to radically change the atmosphere.
It's going to suck all the CO2 out.
You won't be able to grow crops, and then there'll be mass starvation, death, and then eventually the entire planet will be converted into this new form of plant life, right?
But that requires...
I understand what you're saying, and the AI could create that digitally, but that still requires a whole set of laboratory equipment to synthesize.
You're talking about synthetic life creation in the 3D world.
That requires a whole different set of equipment.
That requires a whole different set of equipment.
No, no, no.
You just need an enzyme.
You just need to have a few key enzymes that you introduce into one bacteria through DNA fusion or RNA fusion.
And then all of a sudden, now, boom, you've got...
You only need to create one of these things.
And then it starts replicating.
Now you've got millions.
And then you release it to the wild.
You're going to have trillions and quadrillions of these microbes that are going to eventually take over the planet.
How do you stop that?
Bad actors or chaos agents will just use AI to try to gum up the works for the whole planet.
Right.
And all these countries that haven't been infiltrated by the globalists, they're going to see this as the stalemate weapon that's going to prevent their infiltration.
They're like, well, you know, the rest of the planet may be destroyed, but it's going to stop the United States from infiltrating our country, disposing of our leader, and then vaccinating everyone to death, including the royal family, right?
And so that's a great trade-off.
Like, we'll threaten the whole planet, but at the same time, our country won't get invaded, and the oligarchs of the country will be able to remain in power without being overthrown.
And that's going to replicate in pretty much every country that can get away with it.
And the only solution to that is a world surveillance system.
And I hate the fact that I'm saying all these things because it almost sounds like I'm for it when I'm not.
But on the other hand, I don't want the entire planet to be destroyed because some rogue scientist with an LLM in his ear was instructed to create such a thing, possibly by accident.
He didn't even know what he was doing.
He was synthesizing something else and inadvertently synthesizes a key enzyme that gets introduced into the population of bacteria on this planet.
And then all of a sudden, it's game over in the next 40 years as this thing slowly grows and then quickly starts taking over all available space in the biosphere.
Or have you ever heard of Ice 9?
Yeah.
Ice 9, right?
So some new crystalline structure that spreads and causes all the oceans to turn to ice.
This is the same thing you're describing.
I totally get it.
You're freaking me out, because I want to use open source language models to distribute human knowledge and decentralize that knowledge.
And I guess I better hurry up and get my models distributed before the government outlaws open source language models.
Yeah, right.
We've got this window right now of opportunity where we can create really great innovative stuff and get out there and get onto the space before they shut the door.
And when they shut the door, what they're going to do is they're going to shut the door first on new innovation.
And so you want to get that innovation out as quickly as possible.
And that's why everyone right now is racing on the clock.
It's because those that are on the inside, I believe, know that this is what's coming and that this doesn't represent a long-term trend.
This represents a very special point in history in which we're being allowed to innovate with pretty much zero restrictions.
You can download a model and do it, and there you go.
It's like the early days of the internet before censorship, when you could publish a site and say anything you wanted.
You could share anything you wanted.
That all ended in about 2014, and then it got insanely authoritarian from that point.
Now, let's plug our projects here, but mine is brighteon.ai.
For those listening, if you want to download our model that's coming out, March 1st is the date.
And it's called Enoch, and it's a revolutionary new small model.
It's only going to be 7 or 8 billion parameters, but it encompasses a tremendous amount of human knowledge on herbs, nutrition, natural health, off-grid living, sustainability, those kinds of alternative medicine topics, a massive amount of content.
Zach, what would you like to plug for people to follow you, or projects you're working on?
Well, I mean, I guess since the election is over, I can kind of let the cat out of the bag that, you know, I was director of engineering for Robert Epstein's election monitoring program.
Wow.
On November 1st, we started seeing that there were lopsided, partisan vote reminders going out.
And once we announced that publicly, Google, Facebook, and others started pulling the plug on that.
I'm sorry, it was just Google.
We weren't monitoring the others.
We were just monitoring Google because we know that they've had a history of, you know, if you're a leftist, they're like, go vote today.
Reminder, if you're conservative, you don't get such things.
And, you know, I crunched all the data that we've been collecting for over a year.
You know, and what I saw is this massive uptick in partisan go-vote reminders.
And once we announced that, there were AGs ready to go to file TROs, temporary restraining orders, in various states over what were essentially election campaign finance violations, right?
And so we were all ready to go.
We announced this.
We thought that the AGs were going to do it.
And then to our surprise, within hours, they cut off all of those partisan go-vote reminders nationwide.
Especially in the key swing states.
Wow.
So Google was, of course, weaponizing just like they did back in 2020, but for some reason they decided to stop it, probably because of your work publicizing it.
It's either a coincidence or because we said it, but it was within hours of us doing that.
And so one of the great things about this last election cycle is that I actually had some write access to it.
I haven't been public about it.
Robert Epstein's wife was essentially assassinated, and the former director of our organization had some random attacker come up and slash her fiancé's face with a six-inch cut across his face, almost like a Joker smile.
Yeah, it was crazy.
So I was embargoed from telling anyone that I was working on the election stuff, and I wanted to tell people, like, hey, I'm working on this, you know.
We've got to be vigilant.
But I wasn't allowed to do it.
But now that the election is over and the danger has passed and Trump's now president, I can talk about it.
So that was what I've been up to for the last year.
We were plugging away at that.
Unfortunately, after Trump won, everyone pulled their funding.
So I had to switch jobs very suddenly.
This new stuff that I'm working on is really exciting.
I can't talk about it yet.
But if there's anything that I would like to plug, it would be follow me at twitter.com slash perpetualmaniac.
It's my gamer tag.
It eventually became my social media tag for politics, believe it or not.
But that's where I'm at right now.
Go ahead with your plugs, and then I have a follow-up question about big tech, but go ahead.
Yeah.
And then, hey, if you want to jump into the artificial intelligence stuff and you're a coder, check out my custom AI agent.
Well, actually, it's not my AI agent.
It's a front end for Aider.Chat, which is the best code assistant out there.
And it's from the command line.
And I use it to code every single day, hours a day.
You know, I'm talking to this LLM. And so if you want to install that, you can find the URL at github.com/zackees/aicode.
Or if you've got Python installed with the pip package manager, then you can just type in pip install advanced-aicode.
And then you'll run it with aicode.
And my advice to everyone that's out there that's a coder, get on the LLM train.
The future is not going to be human coding.
It's going to be humans speccing out what they want to have done and then the AI agents doing it.
And it takes a long time to get better at this because you have to understand how the artificial intelligence thinks.
It's going to change the way that you program.
And if you can do this and you stick with it, you're going to get incredible velocity right out the gate.
And it's only going to get better from here.
And if you use my tool, I update it.
We did an update today.
Whenever new models and new techniques come out, I add that to the code base and you get it for free.
Okay, so this is significant.
Aider.chat is the tool you're talking about.
Is that the website they should go to or just they should do a pip install on the tool?
Yeah, so you can use Aider.chat.
It's a little bit complex to set up, right?
And it basically expects to have a Linux system.
So if people want to do Ader.chat, they can.
My front end just basically takes care of all the installation issues.
One install command, and then, boom, you're using it.
It'll ask you for the key.
You just give it an OpenAI or an Anthropic key, and based upon the key that you have, it will switch to different modes.
And it's just absolutely incredible.
So either Aider Chat, or if you want a simpler version to install with great defaults that are better than the defaults you get out of the door with that program, use my front end called aicode.
And the pip package is called advanced-aicode.
Got it.
This is significant.
And I'll say because I've been involved in a lot of data pipeline processing for the training materials for our language model.
And so I'm managing, you know, enormous quantities of files.
And I find that I'm very often needing to create customized batch files, like command-line batch files.
And so I asked the AI, obviously, to just write me a batch file that does the following.
And as long as I'm good at describing the task, which I am, I'm very procedural, then it spits it out.
And there are many times, Zach, where it works on the first take.
It's like, I described it correctly, gives me the code, copy and paste the code, put it in the folder, run it, boom, it's done.
And I used to have to ask a coder to write the batch code.
Not that batch files are that crazy difficult, but there's a lot of just little details about variables.
Windows batch files are super weird.
Yeah, right.
So I'm like, I don't want to spend three hours looking up the freaking syntax of this command.
This is like, let's let the AI write it.
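[A minimal sketch of the workflow Mike describes: ask a model to write a Windows batch file from a plain-English description and save it to disk. The model name and the task are illustrative assumptions.]

```python
# Ask an LLM to generate a batch script from a procedural description.
from openai import OpenAI

client = OpenAI()

task = ("Write a Windows batch file that loops over every .txt file in "
        "the current folder and moves files larger than 1 MB into a "
        "subfolder named 'big'. Output only the batch code.")

resp = client.chat.completions.create(
    model="gpt-4o",  # illustrative model choice
    messages=[{"role": "user", "content": task}],
)

# Save the generated script; review it before running, as Mike does.
with open("move_big_files.bat", "w") as f:
    f.write(resp.choices[0].message.content)
print("wrote move_big_files.bat - review before running!")
```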
But I'm going to be using your tool then.
I'm going to experiment with Python code to do a lot more complex things.
It's clear to me that this is the future, but it doesn't mean that coders are obsolete.
It just means that coders need to change their role to be more of the architects of coding.
It's just like journalists need to be architects of stories, not typists.
You see what I'm saying?
So could you talk about that? How AI, on the upside, can also elevate our roles to be more the orchestrators or creators of what we envision?
Yeah, so the weak point about all these LLMs is the fact that they didn't have chain of thought reasoning, right?
They're basically statistical machines that gave what they thought was the most statistically relevant answer based upon the question.
But they weren't actually doing any thought.
Right.
And everyone's like, oh, this is a limitation.
I was like, ah, just you watch, because they're going to come out with simulated chain of thought reasoning.
And I knew this because I started doing the exact same thing, right?
And you can just do it very simply.
Like, you know, instead of just asking the LLM a question, you ask it three.
The first one is, here's what I want.
Come up with the high-level goals of what you would do.
Right?
And so you prevent it from going within the weeds of the problem.
Just the high-level stuff.
So it comes out and it gives you the high-level stuff.
And then you take your original query and you say, okay, well, here's the original query.
Here's what the architect said.
And now, based upon this, implement it.
Or you can add another step, like check the architect's answer and figure out any contradictions or problems that you see, right?
Right.
So you can walk it through step-by-step.
You walk it through step-by-step.
And then the next step is obviously...
And then the last part is like, okay, well, they've done this and here's the result.
Check it again and see if there's any problems.
Or you can run like, let's say, a code checker on it to be able to give you any problems.
And then you basically get into this feedback loop, with LLMs specialized in certain areas: architecting, checking the work, implementing the thing, and then checking the results and making any corrections.
That's a five-process chain of thought.
I was able to script that very easily.
It's a rudimentary chain of thought simulation, but I was able to do it in an hour.
I did that, and I saw massive improvements in the performance.
Specifically, what I did is when I accessed databases, I used a language called SQL. You don't want to delete your database, right?
So you want to make sure that the LLM's not giving you an update command that's going to blow away a table.
So I had to do chain of thought reasoning.
And I did this and got massive performance increases.
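[A minimal sketch of the SQL safety check Zach describes: scan LLM-generated SQL for destructive statements before executing it, and kick anything dangerous back for review. The keyword list is illustrative, not exhaustive.]

```python
# Guard against an LLM emitting destructive SQL before execution.
import re
import sqlite3

DESTRUCTIVE = re.compile(
    r"\b(DROP|DELETE|TRUNCATE|ALTER|UPDATE)\b", re.IGNORECASE
)

def run_llm_sql(conn: sqlite3.Connection, sql: str):
    if DESTRUCTIVE.search(sql):
        raise ValueError(f"refusing destructive SQL, needs review: {sql!r}")
    return conn.execute(sql).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (product TEXT, revenue REAL)")
conn.execute("INSERT INTO sales VALUES ('apples', 120.0)")

print(run_llm_sql(conn, "SELECT product, revenue FROM sales"))  # allowed
try:
    run_llm_sql(conn, "UPDATE sales SET revenue = 0")  # blocked
except ValueError as e:
    print(e)
```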
And I said, well, this is the thing that's obviously going to be next.
And so all these people are saying, oh, they don't have chain of thought reasoning.
I was like, well, it's going to get simulated.
And now it's simulated.
And this is where all the power of these new LLMs is coming from: they basically specialize a bunch of different agents, and the agents get to vote on the answer and give their feedback. Then that's put into another AI that takes everyone's feedback, decides what is the correct, you know, course of action, and performs it. And then other agents look at it and make sure that there are no mistakes. And you can basically expand this from, let's say, five agents, which is what I was able to do in an afternoon, to a thousand, which is what the O3 model is doing.
Basically, you create a town hall simulated debate of the smartest large language models in existence.
They all get together and try to come up with an answer.
And so that's where we're getting the soft AGI revolution right now, is that they just jack up the number of agents and get them into a debate, and then the whole thing takes place in seconds, then you get back your answer, and then it's higher quality.
And this is going to continue as this seems like the best way of innovation forward with our current technology.
Even if the underlying technology gets better, I still think we're going to be in a multi-agent model of AI reasoning.
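[A minimal sketch of the "town hall" multi-agent pattern just described: several agents answer independently, a simple majority vote is tallied, and a judge model issues the final answer. Model names and prompts are illustrative assumptions, not how O3 actually works internally.]

```python
# Multi-agent vote-and-judge sketch.
from collections import Counter
from openai import OpenAI

client = OpenAI()

def ask(prompt: str, temperature: float = 1.0) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any chat model works
        messages=[{"role": "user", "content": prompt}],
        temperature=temperature,
    )
    return resp.choices[0].message.content.strip()

question = "Is 2^31 - 1 prime? Answer YES or NO with one sentence of reasoning."

# 1. A panel of agents answers independently (sampling gives diversity).
answers = [ask(question) for _ in range(5)]

# 2. Simple majority vote on the leading YES/NO token.
votes = Counter(a.split()[0].upper().strip(".,") for a in answers)
print("votes:", votes)

# 3. A judge agent reviews the panel's reasoning and issues a final answer.
judge_prompt = ("Panel answers:\n" + "\n".join(answers)
                + "\nAs the judge, give the single best final answer.")
print(ask(judge_prompt, temperature=0))
```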
And I think that this is going to scale up, right?
Like, get a million agents, have them all start, you know, debating about how to make a new piece of key innovative technology.
Let's say they're trying to solve cancer, which I already know is kind of a man-made disease that was introduced.
But let's say hypothetically that we didn't know how cancer happened.
They could put in a million agents that are all smarter than Einstein and get them to debate on the best ways to figure out how to kill it with essentially a pharmaceutical tactical nuke.
Now, I know we all hate the pharmaceutical industry.
This is all hypothetical.
You can apply this to any sort of problem domain space.
Well, you could say, like, give it a question for advanced materials, right?
Or develop the physics for a gravity propulsion system or anti-gravity, whatever, right?
That kind of thing.
Or room temperature superconductor.
Yeah, exactly.
I was thinking cold fusion, low-energy nuclear reactions, like design the cathode that's perfect for LENR.
That kind of thing.
You're nailing it, Zach.
Exactly.
You can scale this up where it takes 20-plus years to develop a human PhD, but it takes like 10 seconds to spin up a PhD AI agent that knows everything about the whole history of physics and chemistry.
Right.
You can spend five years training it, and then once it's trained, it takes seconds to copy.
It's just copying a CSV file, right?
And that computer already has the hardware ready to go.
It just needs to be fed what the model is, and then you're off to the races.
So once you have that model, you just replicate it millions or billions of times.
And right now, we don't have an energy grid that can support that.
And so it's my prediction that the entire climate change stuff is going to go away very, very quickly.
Because if we don't want to get exterminated by China, and I don't believe the elites at the top want to get exterminated by China, they're going to need to lay down either 10 or 100 or 1,000x more power than we currently have right now.
I was going there next, actually.
I'm so glad you brought that up.
Yeah, because I know that companies like Microsoft have contracted long-term to acquire the entire output of nuclear power facilities.
Right.
These data centers are going to have a little small nuclear reactor to a large-scale nuclear reactor just powering it for all these millions of agents that they're running.
That's exactly what I was...
Hold on a second.
People calling me in the middle of this.
I was going there, too.
So, yeah, you're going to have microfusion reactors or fission reactors.
Some of these already exist.
Some of them, like I think Raytheon has one that fits on a tractor trailer for military use.
You're going to have these in data centers.
So you're going to have nuclear-powered facilities of data, massive compute capability to try to win the race to superintelligence.
And Zach, can you speak to the importance of that?
Because, I mean, this is an arms race like nothing else in history.
Whoever gets to superintelligence first, what advantages do they have?
Whatever nation or corporation does it first.
Okay, so for military dominance, you're going to need intelligence, which is what they're working on.
You're going to need lots of power.
You're going to need massive manufacturing.
You're going to need to have space dominance, right?
China has energy superiority.
They've got manufacturing superiority.
But unfortunately, they cannot buy an NVIDIA chip because Trump and Biden, and now Trump again, are basically doing export controls on NVIDIA. Sure.
And the lithography equipment to make microchips.
Yeah.
And so right now I see this as obviously what it is.
We're gearing up for a long-term conflict to figure out who's going to be on the throne of the world.
Right?
Like, it's ramping up slowly right now, but very soon, over the next year, you're going to see massive amounts of power contracts being laid down because we need to play some catch-up.
Like, China's, you know, on track to lay down 10x the amount of power.
I think it's going to be 1,000x.
The United States is going to have to do the same thing.
And all these climate change wackos think that carbon dioxide is poisoning the Earth, you know, when plants will readily absorb it, the more that's out there.
They love it.
Yeah.
That's all going to come to an end.
They're all going to be disempowered.
They're going to have their funding cut because the new agenda is going to be, hey, we need to build up power to fuel this military dominance machine so that China doesn't get a little bit cocky and launch a surprise attack.
And then with the intention of overthrowing, you know, Western globalism, to use a nice name for what exists out there.
Right.
But a couple of thoughts on this.
So Western Europe is screwed because they've destroyed their own domestic energy supply.
Biden destroyed domestic energy in the U.S. for the last four years.
And China has found workarounds where they can create their own microchips even despite Western sanctions.
For example, they did that with one of their mobile phones, just blowing people away with their microchip design.
So it's kind of like...
How the U.S. put sanctions on Russia to try to collapse Russia's economy, but Russia just got stronger.
It seems like China doesn't really care that the West has put sanctions on it.
It's just developing its own domestic microchip fabrication infrastructure.
Yeah, they're a little bit behind, but I think that the elites had no clue to the scale of what China was up to.
I would point back to the former CEO of Google, Eric Schmidt.
He gave a Stanford talk.
They've wiped it, but it still exists because of the decentralized nature of the Internet.
Nothing is ever lost.
So there's this speech.
I recommend you listen to it.
It's like an hour and a half.
And what he says is, and this was like, I think, three months ago, he said that China was a decade behind the United States in chips because of the CHIPS Act.
But that they were going to catch up very quickly and they were going to close the gap over the next five years.
That would have been the end of the story until a few weeks ago when Eric Schmidt came back and said, oops, it turns out that we were wrong about China.
They are not a decade behind the United States.
They are one year behind the United States.
And I saw that and went, ooh, that sucks for world stability.
Because if you've got two actors with nation-state power...
Now, suddenly, Trump's attempt to acquire Greenland makes perfect sense, because Greenland has massive energy resources, and it has the cold weather that's required to cool all this computing equipment.
A massive subcontinent, I suppose you could say.
I mean, it's an island nation, but it's large.
It could be the data center hub for the West, in essence, couldn't it?
Yeah, but I mean, with all this power, they've been suppressing really great power sources, right?
Like free energy.
I don't mean like free energy, like you just pull it out of the ether, like some zero-point energy system.
I'm talking about energy that's too cheap to meter, right?
And a lot of people have been investing in this and silencing the results of it and putting it under black projects, black skunkworks, essentially.
Interesting story.
I discovered Google's secret cold fusion lab in 2019, right?
And talked to the people there.
And then later, they came out.
You can even look this up.
Like, Google has results.
And their results were, oh, we tried this, and it turns out that it's all bunk.
And I was laughing.
I was like, no, I know the inside story.
I talked to the people.
It was so weird, Mike.
The managers that were managing that... it was the spookiest, sketchiest thing I've ever seen.
I survived a mass shooting incident at YouTube right before I left.
But by far, the thing that really stays with me is the conversation that I had with the people running that.
Essentially, I realized that it's a bunch of spook managers with a bunch of well-intentioned engineers who are lefties and brainwashed, and they have no idea that they're basically working on a future AI data-center energy model, but they can't let the cat out of the bag, because the existing elitist power structure requires energy to be a scarce resource in order to extract value from the general population or appropriate it for themselves.
Wow, what a conundrum you've just brought to the forefront here, because if they acknowledge the essentially free energy technology that's needed for the data centers, then they lose the petroleum and oil and gas business.
Right.
And this whole thing about Greenland is great because the energy is cheap.
Cheap in comparison to fossil fuels, which is a bad name for the fuel, but it's popular.
In comparison to fossil fuels, geothermal energy looks like a really great solution.
But with low energy nuclear reactions...
Formerly called cold fusion reactions, where you have basically a platinum catalyst that's reducing the Coulomb barriers between the deuterium atoms.
All of a sudden, you don't need geothermal anymore.
In fact, that seems like an arcane and cost-prohibitive energy source to run your massive millions and billions of AI agents.
And so in reality, I think the real thing is that we're going to have a distribution of data centers.
In places that aren't even on the map.
Maybe you'll be able to see them on Google Earth or Google Maps or something if they're not wiped off by, you know, government edict.
And that these data centers are going to be all over the Western global empire.
So you think they're going to be super secret high security centers with technology that's not allowed to be released to the public in terms of the energy sources?
Yeah, and if you see one that's above ground, it's going to be low security.
I think that the biggest ones will be completely underground.
The only way that you'll know that they're even there is by the heat signature that they're giving off from their vents.
Wow.
So deep underground data centers with secret energy sources.
I mean, hey, we built this entire tunnel system under the United States.
You think that they're not going to use that for data centers?
It'll be interesting, too, because out in the middle of nowhere at night...
You know, these satellites will be going over and they'll see the heat signature from these vents, you know, coming off.
And that's if they're in the middle of the desert, you know, if they're near the, you know, any place that has access to water, like the ocean.
They'll just pipe the vents through the ocean so that they'll go through two miles of ocean water so that the heat signature will be a purple instead of like a blazing white.
Well, I've also seen quite a bit of innovation of actually putting data servers in the ocean, like physically just taking a rack and putting it in the ocean for automatic heat dissipation.
Yeah.
But, I mean, it seems like there are a lot of technical...
Problems with that.
It's easier to just probably pump ocean water through a heat exchanger, right?
Right.
Yeah.
Or pump air through a heat exchanger, like you said.
But the vents, the heat signature will be viewable by thermal drones, which are pretty widely owned.
But I guess the FAA can just say, well, that's a no drone zone.
Right.
They can say it's a no drone zone.
And you can't really get rid of the heat signature completely.
You know, you can basically just take it close to zero.
Because if you're running terawatts of power generation in the water, it's going to heat the water.
It's going to go from black and blue to a purplish color on your infrared sensor.
So they won't be able to hide it completely.
But if you do see a hot vent coming off of the desert, that means that it's a low-security nuclear thing powering God knows what.
And if it's in the ocean, you see all of a sudden like, huh, it's really weird.
You know, the Pacific Ocean, you know, off the coast of San Francisco is suddenly getting a lot, you know, it's starting to change, starting to get hotter.
They'll probably say global warming or some stupid stuff like that.
Right, they'll say global warming, but it's actually their data centers.
It's actually some hidden black-project data center that's generating a massive amount of God knows what, using ungodly amounts of energy, and they're piping that into the ocean to try to hide it so that people don't take notice of it that much.
Wow.
So we're about to wrap this up.
Thank you for your time.
So energy is the new gold, in essence.
I mean, energy through microchips converted into intelligence, which is the super weapon for world dominance.
I mean, the chain is so clear now.
Yeah, it's that.
It's energy, it's intelligence.
I mean, you know, if you're going to fight a war, you're going to want to have a billion drones manufactured underground.
And then say, oh, we don't have any drones.
And then you go to war, and all of a sudden it's like, ah, rug pull, guess what?
We've got a billion suicide drones at $20 each.
They don't even have a gun.
They just fly to your head and explode like they're doing in Ukraine.
That's the future.
Right.
But China's got the manufacturing scalability that you already mentioned. They could just roll up with a cargo ship, all the 40-foot sea containers open up, and 100 million drones fly out and start going to town.
Yeah.
Well, you know, they do have that, and that's what we can see on planet Earth.
We don't know what's in space, right?
Like, for all we know, there's a manufacturing zone on the dark side of the moon.
For all we know, they're mining asteroids for metals that are rare on Earth but aren't rare in an asteroid, all these heavy metals.
And so, for all we know, Western globalism is deep-mining asteroids at this point and has a large supply of rare earths ready to go.
And there may even be manufacturing up there, and they'll just drop-ship the drones into the United States or some other location on the planet.
And then all of a sudden, it's like, surprise, turns out we got a billion drones, we can kill 75% of the Chinese, right?
And that's the big one.
Of all the things we have, I think the trump card is our space dominance, and we don't know the extent of that dominance.
But if the elites have any sort of competence, my assumption is that they've been developing the technology for the last 70 years.
So God knows what they've got outside of the terrestrial plane.
Well, this is going to be a really fascinating interview for people to look at, because, Zach, what I'm going to do is have AI illustrate this entire interview, since you and I don't have cameras on.
I'm just going to have illustrations, illustration panels through the whole thing with captions.
And it'll be fascinating to see what kind of illustrations the AI comes up with based on what we've been talking about.
Fascinating stuff.
You're going to run that through... which AI are you going to use to generate the images?
There's a couple of engines I use.
I'll tell you privately the one I end up using.
I'll probably test a couple of them to see what's best.
But I've got to try some different styles.
I might do like a sci-fi illustration style and just kind of test those out.
Usually the results are pretty good.
And then the subtitles are very popular now, because for the hearing impaired, everybody loves the fact that we're doing that.
And we found that AI can do the subtitling at about one-to-one: a 60-minute interview takes about 60 minutes to process, generate the captions, and put them back into the video.
So that's acceptable.
Easy.
Right.
That's great.
It's essentially real-time.
And the VLC video player is now doing real-time subtitle generation whenever you play a movie.
Unreal, man.
It's unreal.
Yeah.
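For anyone who wants to try that kind of captioning pipeline, here's a minimal sketch using the open-source Whisper speech-to-text package. The file names and model choice are placeholders of mine, not details from the show; the smaller Whisper models transcribe at very roughly the one-to-one speed described above on ordinary hardware.

```python
# Minimal sketch: transcribe an interview and write SRT captions.
# Requires the open-source `openai-whisper` package and ffmpeg.
import whisper

def srt_timestamp(seconds: float) -> str:
    """Format seconds as an SRT timestamp, e.g. 00:01:02,500."""
    total_ms = int(seconds * 1000)
    h, rem = divmod(total_ms, 3_600_000)
    m, rem = divmod(rem, 60_000)
    s, ms = divmod(rem, 1000)
    return f"{h:02}:{m:02}:{s:02},{ms:03}"

model = whisper.load_model("base")          # placeholder model size
result = model.transcribe("interview.mp3")  # placeholder input file

with open("interview.srt", "w", encoding="utf-8") as f:
    for i, seg in enumerate(result["segments"], start=1):
        f.write(f"{i}\n")
        f.write(f"{srt_timestamp(seg['start'])} --> {srt_timestamp(seg['end'])}\n")
        f.write(seg["text"].strip() + "\n\n")
```

The resulting interview.srt can then be muxed back into the video with any standard tool or loaded as a sidecar subtitle file.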
I do want to let you know that if you start seeing AI scientists being assassinated, that's your clue that we are really close and that we've entered into some sort of soft World War III with China.
We had that one guy at OpenAI who died, but I'm talking about lots of mysterious deaths happening over this stuff, the kind the government actually gets concerned about.
That's not happening yet, but I think it's going to be happening in the next three years, as this AI war with China kicks up into full gear.
Well, kind of like the way that Iran's nuclear scientists keep getting mysteriously killed off.
Yeah, I think the same thing is going to start happening with China and with the United States as they start doing a tit-for-tat to slow the other down.
Oh my.
Well, look, thank you for taking time here to share your thoughts with us and to our audience.
The number one thing is that you be prepared for what's coming, because not all the humans alive today are going to navigate this.
And there are going to be a couple billion people out of work.
We're quickly becoming a bunch of worthless eaters with this new AI tech that's coming in.
And the only way that you can escape that is if you start using the artificial intelligence now.
Because in the future, it's going to be people that use AI and are integrated tightly with it.
And then there's everyone else, who's going to receive universal basic income and poverty-level living.
So, you know, we're not at that stage yet.
You still have time.
So start using AI right now, because you can't stop it.
Refusing to use it isn't going to stop the advance.
This train has no brakes.
Good point.
Good point.
Okay.
Zach Vorhies, everybody, the Google whistleblower.
Thank you so much, Zach, for your thoughts here today.
And Perpetual Maniac is your handle on Twitter, correct?
That is correct.
Is there an underscore?
Nope.
Perpetual Maniac.
Do a search and you'll find it.
Perpetual Maniac.
Okay.
Thank you so much, Zach, and stand by here, and thank you for joining me today.
And for all of you listening, I'm Mike Adams, the founder of brighteon.com, and we're releasing at brighteon.ai our free, open-source, downloadable human-knowledge-based model called Enoch. That's coming March 1st, unless I guess it gets outlawed before then, but hopefully that won't happen.
So thank you for joining me today.
Take care, everybody.
Alright, the Goldback company has just issued a new set of goldbacks for the state of Florida.
This is the one goldback for Florida.
It's got a unique pattern on it.
And here's the five, which has five one-thousandths of an ounce of gold embedded in the goldback itself.
These contain real physical gold in them.
We've done the lab testing and verified the purity and the mass of the gold.
And for the first time ever, Florida is now represented in goldbacks.
If you go to verifiedgoldbacks.com, then you'll see our lab testing that we've conducted on the Goldbacks.
And there's a link there where you can purchase Goldbacks.
We do benefit from that purchase.
We get a little bit of an affiliate reward for the Goldbacks.
We partner with the Goldback company.
But you get real physical gold in this highly divisible form.
And here's some of the testing that we've done with a crucible.
This is the acid-dissolving test.
This is the stone test, and so on.
We use our ICP-MS instruments and our analytical balances to verify the amount of gold they contain.
And just here's some more photos and then here's some of the results down here.
Sorry, it's a lot of information.
Yeah, here we go.
Recovery over 100%.
So you're going to get at least 100% of the gold that is claimed in these goldbacks.
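For clarity on that metric, with illustrative numbers of my own rather than figures from this segment: recovery is just the measured gold mass divided by the claimed mass. A one-goldback claims 1/1000 of a troy ounce, about 31.1 milligrams; if the assay measures, say, 31.5 milligrams, recovery is roughly 101 percent, i.e. over 100 percent.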
So this one says one one-thousandth of an ounce of gold, which is getting a lot more valuable in dollar terms, because dollars are declining in value.
But this is a divisible form of gold.
You know, it's hard to spend a gold coin.
I mean, gold coins have their place, don't get me wrong.
And I think anybody acquiring gold is on a path of wisdom as far as I'm concerned.
But this is a divisible form.
Like, you can use a thousandth of an ounce of gold to pay for a sandwich, for example.
Or to barter with someone or to use it at a farmer's market or to give it as a gift or to use it as tips.
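As a rough worked example, using an assumed spot price rather than anything quoted here: with gold around $2,700 per troy ounce in early 2025, a 1/1000-ounce note carries about $2.70 of gold and the five about $13.50, which is indeed sandwich and farmer's-market territory. Goldbacks also typically trade at a premium above the melt value of the gold they contain.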
These are amazing.
They have amazing artwork.
There are different states that are represented, like Wyoming, for example, and Utah, but now Florida for the first time.
So go to verifiedgoldbacks.com, click on the link there, and you'll be able to see the entire Florida set that's available now for the first time ever.
Texas is not yet available, but we're hoping that will happen maybe next year.
But Florida is available now.
Collector's sets are available.
These are going to be collector's items in addition to their utility, and they make amazing gifts for people, especially Christmas gifts, too.