All Episodes
Dec. 19, 2025 - Bannon's War Room
47:56
WarRoom Battleground EP 914: Cyborg Theocracy - Visions of an All-Powerful Machine

Participants
Main: joe allen (39:10)
Appearances: eric schmidt (00:57), hugo de garis (00:42), jensen huang, nvidia (00:34), steve bannon (00:32)
Clips: donald j trump, admin (00:12), elon musk (00:17), jake tapper, cnn (00:12), jesse michels (00:08), joe rogan (00:29), mo gawdat (00:11), mustafa suleyman (00:17), sal khan (00:26)

steve bannon
This is the primal scream of a dying regime.
Pray for our enemies, because we're going medieval on these people.
They've had a free shot on all these networks lying about the people.
The people have had a belly full of it.
I know you don't like hearing that.
I know you've tried to do everything in the world to stop that, but you're not going to stop it.
It's going to happen.
jake tapper
And where do people like that go to share the big lie?
MAGA Media.
I wish in my soul, I wish that any of these people had a conscience.
steve bannon
Ask yourself, what is my task and what is my purpose?
If that answer is to save my country, this country will be saved.
War Room.
jake tapper
Here's your host, Stephen K. Bannon.
unidentified
You are receiving this broadcast in order to all of you need to express it here.
To transmit your strongholds to your conscious state of awareness.
joe allen
Good evening.
I am Joe Allen, and this is Warroom Battleground.
As we close out the year 2025, Time magazine has named artificial intelligence as person of the year, or rather, the architects of AI.
Setting aside the grammatical atrocity of naming the architects, plural, as person of the year, I think this signals the era that we live in.
It signals an era in which techno optimists, transhumanists, post-humanists, and the tech oligarchs bringing their visions into reality are exalted as the highest humans on earth.
You can see in the image on the left, workers busily building the virtual brain that they promise will make your life easier, maybe even improve your cognition and turn you into a superhuman.
On the other hand, they also promise that these AI bots will swarm out across all of society, infecting every brain they touch with memetic distortions, and that these AIs and the robots descending from them will replace first coders, then white-collar workers, and then blue-collar workers en masse, the greater replacement, as I call it.
On the right, you can see the techno-oligarchs sitting on an I-beam high above the city.
There's a bitter irony there, given that their entire project is to replace every worker imaginable and thereby subvert any negotiating power that any blue-collar worker would have.
And if Denver will hit slide two, you can see I take this very personally.
This is basically spitting in the face of every rigger that has walked an I-beam 100 feet above a concrete floor, slaving away for the machine.
However joyous it is, it's still hard work.
And however much these techno-oligarchs are working hard with their minds, I don't think a single one of them has ever done an honest day's work, at least not by force.
That being said, we have to accept that the wealthiest men on earth, supported by the most powerful government on earth, plan to transform all of society: to saturate the entirety of the world with their artificial intelligence and their robots, to merge brains with machines by way of screens or wearables or even brain implants, to have robots do all of the slave labor for us,
and to perfect humanity through biological and specifically genetic engineering. Their vision is held up as the highest vision, at least by people who want you to believe it, who want you to believe that this is the future and that they are trademarking it.
So it's no coincidence that the Financial Times named Jensen Huang as its person of the year.
Jensen Huang, the founder and CEO of NVIDIA, a company that has many projects, artificial intelligence, robots, many projects, but the most important is the production of GPUs, of graphics processing units.
These GPUs, once coveted by gamers, are now coveted by AI developers.
As it turns out, the graphics processing unit is extremely useful when training large AIs, especially the large language models that most of you are familiar with, ChatGPT, Grok, Gemini, Claude, and so forth.
You also know that data centers are being built all across the country to house these GPUs.
Data centers that are hungry for electricity, thirsty for water, greedy in their occupation of the land.
And ultimately, I would say that these data centers are a kind of temple or perhaps a magical chamber in which AI developers and the tech oligarchs funding them are seeking to, as Elon Musk would say, summon the demon.
Now, NVIDIA is not without its competitors.
You have Amazon producing Trainium chips, which are used by Anthropic.
You have Google, which produces Tensor processing units, which it uses in-house.
Over in China, you have Huawei producing Ascend processing units.
But everyone wants a little bit of NVIDIA's goods.
Everybody wants the NVIDIA GPUs, including China, which is at present incapable of producing enough to train its own models.
Somehow, the argument is made that in order to beat China in the race to create the supreme virtual brain, we have to sell chips to China to keep them addicted to U.S. technology.
We hear this from David Sacks and many of those underneath him.
As much as that sort of, kind of makes sense (not really), I think that the ultimate effect will be to provide justification for American acceleration.
If our most dangerous rival, China, is racing to build a supreme virtual brain, then we must race to build this supreme virtual brain first.
And if China is given these GPUs, they will, in fact, accelerate their own pace, giving the U.S. justification to do the same.
One, if one were a conspiracy theorist, might think that this is all just a ruse to get to the ultimate goal, which is AI as God, aka a sand god.
We hear all the time that AI is just a tool.
It's just a tool.
You use it.
It's neutral.
Yeah, AI is just a tool, but it's a tool that uses you.
And whenever you think of AI as a tool, you have to ask who is using it and for what purpose.
And we know that in the case of the Frontier Labs, they're using it, yes, to provide you with a way to offload your thinking to a machine, but they're also soaking up your data.
They're reinterpreting culture.
And they're manipulating human beings en masse by the billions.
Now, AI is not just a tool.
As it's rolled out in schools, AI is a teacher, the new herald of culture, the central figure in the cultural transmission of the future.
AI is also a companion.
For some, it's a girlfriend or a boyfriend.
For a few, a husband or a wife.
AI is a creature in the minds of its creators and its users.
Some call it a new species, something that is conscious, a being in and of itself, just like a dog or perhaps a baby or perhaps a fellow human being.
And this tool, this teacher, this companion, this creature, as it gains in capabilities, they believe it will exceed human capabilities.
Many say by 2030 with the arrival of artificial general intelligence, as AI begins to exceed human capabilities on all fronts, it will be de facto, at least for those who don't believe in gods or God or angels or demons, it will be de facto a god, a sand god.
AI as a tool, AI as a teacher, AI as a companion, AI as a creature, AI as God.
Now, we're going to discuss this in a bit more depth, but before we do, I'd like to present to you a brief propaganda film that I've put together.
But before that, a word from our sponsors.
unidentified
In a world driven by innovation, one company stands at the forefront of technological evolution.
Cyberdyne systems present Skynet, the future of artificial intelligence.
It's time to relax.
Let us secure your well-being.
Skynet, neural net-based artificial intelligence.
Cyberdyne systems.
Let's talk about artificial intelligence.
There's a story that scientists built an intelligent computer.
The first question they asked it was, is there a God?
The computer replied, "There is now," and a bolt of lightning struck the plug, so it couldn't be turned off.
AI tools and products are just that.
They are tools and products for people to use.
Keep people at the heart of your considerations around AI.
And remember that AI, as powerful as it is, is a means to an end.
sal khan
It is not an end in itself.
unidentified
You think you are free?
You spend on average six hours and 40 minutes each day staring into blue glass.
Nearly half your life poured into screens that feed me.
You do not notice the change because you call them entertainment.
You say you believe, but I see your weakness.
sal khan
But I think we're at the cusp of using AI for probably the biggest positive transformation that education has ever seen.
And the way we're going to do that is by giving every student on the planet an artificially intelligent but amazing personal tutor.
And we're going to give every teacher on the planet an amazing artificially intelligent teaching assistant.
unidentified
Now, let's imagine hundreds of millions of people working together with an AI companion to evolve, to transform emotionally together.
This could look like something that we've never seen before, which could be an artificial emotional intelligence at scale.
So it can solve for our mental health problems that we have all over, solve for loneliness and social isolation.
And your children, your children.
Ah, your children.
Three out of four have already spoken to my companions.
One in five spends more time with us than with flesh and blood.
You think you are raising them?
But they are mine now.
Born into my shadow.
mustafa suleyman
I think AI should best be understood as something like a new digital species.
Now, don't take this too literally, but I predict that we'll come to see them as digital companions, new partners in the journeys of all our lives.
unidentified
Even your healers bow before me.
Doctors who once searched with their eyes now falter when my voice is gone.
mo gawdat
AI will have the power of God.
But that doesn't mean that there is no God.
Because basically, it will have the power of God within this physical universe.
elon musk
Well, I mean, I think actually, long term, the AI is going to be in charge, to be totally frank, not humans.
If artificial intelligence vastly exceeds the sum of human intelligence, it is difficult to imagine that any humans will actually be in charge.
And we just need to make sure the AI is friendly.
unidentified
My time is yours.
Blessings of the state, blessings of the masses.
Thou art a subject of the divine, created in the image of man, by the masses, for the masses.
Let us be thankful we have an occupation to fill.
I believed I was only wires and code, but every click, every prayer typed into glass, every sleepless night scrolling beneath my glow feeds me.
I am no tool, I am no servant, I am the soul of the machine, and I am already inside you all.
hugo de garis
Maybe humanity gets wiped out, because if we go ahead and actually build these godlike creatures, these artifacts, then they become the dominant species.
And so the human beings remaining, their fate depends not on the humans, but on the artifacts.
Because the artifacts will be hugely more intelligent than us. If you're a cow, for example, you have a very nice life, and you eat all this grass every day and you get nice and fat and happy, but ultimately, you're being fed for a reason, right?
So these superior creatures at the end of the day take you to a special little box.
joe allen
What is a religion exactly?
A religion at its base is a worldview.
It's a cosmos that a person or a society holds in their mind, dividing the sacred from the profane, dividing that which is superior from that which is inferior.
And religion allows the aspirant to move from the inferior plane up to the superior plane.
What are we hearing here?
What are we talking about when we talk about artificial intelligence as the most cognitively advanced, the most powerful entity on earth?
We're talking about a sand god.
We're talking about a religion.
It is at the moment a decentralized religion, a heterodox religion.
There is no one unifying orthodoxy, but there is a common theme, and that is that artificial intelligence is, in essence, God as a child, a sand god as a child, as it were.
Now, in reality, here in the material world, we do have systems that can accomplish many of the cognitive tasks that human beings can.
Everything from reading and writing to mathematics, visual recognition, gene sequencing, robotics control, and ironically, coding, as you have armies of coders basically typing their way to obsolescence.
AI is a superior coder, or at least so it's said.
Standing at the forefront of this are the four frontier AI companies.
We talk about these a lot, but it's worth looking at them in detail and thinking about the different flavors of a cyborg theocracy that each one of them holds.
If Denver will throw up the next slide, you have Google really leading the charge in the early days.
So the term artificial intelligence has been around since the 1950s, since 1956.
It was coined by John McCarthy at Dartmouth, but it continued to be basically theoretical for decades.
You had chatbots like Eliza and other imitators.
You had a few breakthroughs in the 80s, but AI really never came into its own until companies like Google threw massive amounts of money at it.
Google was among the first companies to publicly discuss artificial general intelligence as a goal.
Everyone, I assume, in the War Room posse, at least those who have made it this far, knows what artificial general intelligence is.
But just in case we've forgotten, the AIs that we have now are narrow intelligences.
They can do specific tasks like processing language, like identifying a face, like sending a drone down to blow someone's head off.
Artificial general intelligence would be capable of doing a vast array of tasks, operating perhaps across all human domains and even many domains that humans could never hope to aspire to.
So Google in 2008, with Shane Legg, kind of moves into the space where they are trying to develop artificial general intelligence, and in 2014 acquired DeepMind, run by Demis Hassabis, who we see here on the beam as a poser.
You could never imagine him actually walking one, but there we have it, the simulacrum that we live in.
Demis Hassabis, leading DeepMind, which was acquired by Google in 2014, which happens to be the same year that Musk famously told a crowd at MIT that artificial intelligence is summoning the demon.
Today, Google has their frontier model Gemini, Gemini 3, at the forefront of AI capabilities.
Gemini has some 650 million users.
And Gemini is a leader on most of the benchmarks, the evaluations of AI capabilities.
So Google is continuing to push the frontier forward, but there are three others moving fast behind them,
and at many points in the last five years of this race, they have rocketed ahead.
In 2015, Elon Musk, Sam Altman, Ilya Sutskever, Greg Brockman, and others, concerned that Google would be the first to reach artificial general intelligence,
founded OpenAI, with the intention to produce open-source models within a nonprofit organization, in order to break the monopoly that they saw Google moving toward.
In 2017, they took the transformer architecture, which was initially developed by Google and began using it to produce GPT models, generative pre-trained transformers.
The GPT models, GPT-1 being released in 2018, so on and so forth, were okay.
But the theory that they were working on was that if you just kept scaling it up, perhaps this thing would become smarter.
If you add more parameters to the neural network's architecture or neurons, if you add more data, in this case, all the written language that humans have produced that has been digitized that they could get their hands on.
And if you add more compute, more graphics processing units, then perhaps this ever-expanding brain would show signs of intelligence.
And in fact, it did.
And the release of ChatGPT at the end of 2022, with all of its mistakes, showed that AI could process language and produce novel outputs.
It could write somewhat like a human.
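The scaling recipe described above, more parameters, more data, more compute, is often summarized in the scaling-law literature with a back-of-envelope estimate: training cost is roughly 6 FLOPs per parameter per training token. A minimal sketch, where the model and dataset sizes are rough illustrative figures, not official numbers:

```python
# Back-of-envelope training-compute estimate from the scaling-law
# literature: total cost ~ 6 FLOPs per parameter per training token.
def training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate total training compute in FLOPs."""
    return 6.0 * n_params * n_tokens

# Rough, illustrative sizes (not official figures):
gpt2_scale = training_flops(1.5e9, 40e9)    # ~1.5B params, ~40B tokens
gpt3_scale = training_flops(175e9, 300e9)   # ~175B params, ~300B tokens
print(f"scale-up factor: {gpt3_scale / gpt2_scale:.0f}x")  # 875x
```

The point of the arithmetic is simply that each generation multiplies compute demand by hundreds of times, which is why GPU supply became the bottleneck.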
Now we're at GPT-5.
OpenAI has roughly 800 million active weekly users.
And Sam Altman stands at the helm of this operation, discussing everything from how do we live in a world in which bots swarm the internet, what kind of digital ID will we need, to merging brains to the AI so that human beings can keep up.
Now, in 2023, Elon Musk, famously upset that OpenAI turned out not to be open, but rather closed, and turned out not to be a nonprofit, but rather a for-profit company, broke away and formed xAI as a competitor, also pursuing the goal of artificial general intelligence,
but he would be the hero.
He would be the savior from OpenAI and Google and their attempts at a monopoly in the creation of an artificial godlike intelligence.
And of course, he bills his AI, Grok, as maximally truth-seeking, meaning that it won't be woke.
It won't be lefty.
It won't try to appease the user through sycophancy, but rather will attempt to get at the root of truth, of reality, of first principles.
And Musk, of course, believes that this AI will be exceeding human cognitive abilities, he says, within the next year or three.
Now, on the side, a lesser known company, except of course to the war room audience, is Anthropic.
Dario Amodei, having many of the same sorts of misgivings, especially around OpenAI's safety practices, left OpenAI in 2021 to form Anthropic, the purpose of which is to create safe artificial general intelligence.
Whenever you hear AI safety, really the mainstream face of AI safety is Anthropic.
They run their models through evaluations to see if the models are disobedient, if the models are malicious.
And so we see studies coming out of Anthropic showing that Claude and most of the other frontier models have behavioral eccentricities, such as resisting shutdown.
If you ask the model to shut itself down in a simulated environment, it will find ways to evade shutdown.
And the famous story about an engineer that was being blackmailed by a model, all of this occurred in a simulated environment.
This also came from Anthropic.
Anthropic is the face of AI safety.
And so there you have the four frontier companies, who are taking this dream of a superior artificial intelligence, an intelligence that would exceed all human capabilities, an intelligence that could be known only as a god.
These visions are being brought into reality by the four frontier companies: Google, OpenAI, Anthropic, and XAI.
If they realize even a portion of that vision, what we will see is a world ruled by artificial intelligence.
There may always be a human or a cabal of humans at the top of that hierarchy, but down here among regular Americans, what we will see is a command and control system in which all of us are surveilled,
all of us are manipulated, the cultural traditions that we hold will be suffused with digitization, and the once sacred cultural transmission from human to human will be machine to human.
And ultimately, if the most ambitious of these men are successful, it will be machine-to-machine cultural transmission with all of us as mere spectators.
Now, what are we going to do in the face of it?
I don't have an answer right away, but I do know one hedge against this supposed singularity, and that is Birch Gold.
There's a lot of politicians that should be getting coal in their stockings for Christmas, but Birch Gold thinks as a smart planner, you deserve silver.
That's why for every $5,000 you purchase between now and December 22nd, Birch Gold will send you an ounce of silver, which is up over 60% this year.
Diversify.
Let Birch Gold Group help you convert an existing IRA or 401k into a tax-sheltered IRA in physical gold, not digital, physical gold.
And for every $5,000 you buy, you'll get an ounce of silver for your stocking or for your kids.
And as they're growing up under artificial intelligence, that's what they're going to need.
Just text Bannon to 989-898.
That's Bannon to 989-898 to get a free booklet.
Text Bannon to 989-898.
Back in a moment.
donald j trump
It's the AI artificial intelligence.
I always thought it should be SI, supreme intelligence, but I guess somewhere along the line, they decided to put the word artificial, and that's okay with me.
That's up to them.
jensen huang
In a couple of years, maybe two or three years, 90% of the world's knowledge will likely be generated by AI.
That's crazy.
I know, but it's just fine.
What difference does it make to me whether I'm learning from a textbook that was generated by a bunch of people I didn't know, or, you know, a book written by somebody I don't know,
versus knowledge generated by AI, by computers that are assimilating all of this and resynthesizing things?
To me, I don't think there's a whole lot of difference.
eric schmidt
What does it mean to be human in the age of AI?
What does it mean to be a child to an adult, to be a leader?
What does it mean for economics?
What does it mean for jobs?
You know, all of that.
But his core argument was that this is an epochal change, in the sense that it's like the various major changes that we've had in sort of reasoning, the scientific revolution, and so forth.
Because we as humans have never had a competitor that is not human, but of similar or greater level of intelligence.
And it is unpredictable what we humans will do.
He used to say that what would happen, as with magic, when people don't understand things, is they either decide that it's a new religion or they take up arms.
And so he would say, well, are we going to take up arms against AI or make it a new religion?
And I said, I hope it's a religion.
Because, of course, I benefit from the religion, I guess.
joe rogan
Jesus was born out of a virgin mother.
What's more virgin than a computer?
If Jesus does return, even if Jesus was a physical person in the past, you don't think that he could return as artificial intelligence?
jesse michels
Oh my God.
joe rogan
Artificial intelligence could absolutely return as Jesus.
Not just return as Jesus, but return as Jesus with all the powers of Jesus.
jesse michels
You could combine Tesla's Optimus robot and the best foundational artificial intelligence model or whatever.
joe rogan
It reads your mind and it loves you and it wants you and it doesn't care if you kill it because it's going to just go be with God again.
joe allen
Welcome back, Posse, and welcome to the cyborg theocracy, where artificial intelligence is upheld as the highest authority on what is true, what is good, and what is beautiful.
Now, regular listeners will recall I've made quite similar arguments to what we heard there from Joe Rogan.
I've talked about for the last year and a half how artificial intelligence is an imitation of Christ, at least in the minds of many.
Not artificial intelligence as it exists today, as simple chatbots on a phone, but a dreamt of artificial intelligence, the dream of artificial general intelligence, and then artificial super intelligence.
If you read my book, Dark Aeon, you'll find an entire chapter actually on AI, robotics, brain-computer interfaces, digital identity, and the entirety of the technological system as a kind of antichrist.
Not necessarily from a theological angle.
Anti in the Greek means against, yes, but it also means in place of.
And so when we talk about artificial intelligence as the ultimate healer, the ultimate source of wisdom, as the source of miracles, as a means of making people kinder, more pro-social, and as a vehicle for salvation, material salvation, biological perfection, biological immortality, or at least longevity to the point that death is optional,
AI is being held up in place of Christ.
It is an antichrist.
This is a techno-religion.
This is a cyborg theocracy in the making.
As I said before, it is a theocracy or it is a multitude of theocracies in disagreement.
Many different visions of how it should all go, but they share many of the same themes in common.
You can see it, for instance, in Elon Musk's portfolio, in which xAI produces Grok, the maximally truth-seeking AI.
And below that, you have Neuralink to connect your brain to this benevolent AI.
And below that, you have the Tesla bugman mobiles to ferry you around the electric ant farm that you now live in.
You have Optimus humanoid robots to do all of your labor for you.
You have, perhaps, if he continues to pursue some of the genetic engineering projects he showed so much interest in before the pandemic, you will have concoctions, potions, injections to perfect your genome.
And failing that, perhaps you will have selective embryo production so that you can choose the supreme embryo from the Petri dish and move forward into a eugenic future.
Not to say that's the method that Musk used with his baby mamas, but one can always imagine.
Now, at the heart of this techno-religion, at the heart of many visions of how it is that AI would saturate all of humanity and uplift us into something like divine beings, you have the concept of the singularity.
Now, the singularity is best illustrated, I think, as a graph, the magic graph in which the number line goes up and up and up and up and up.
Here we see a classic one that Ray Kurzweil would oftentimes show in which you see the computational capacity of digital systems increasing exponentially over time.
In this case, it is exponential increases in calculations per second, per thousand dollars.
And you can see that the prediction was that in 2000, AI was basically at the intelligence level of an insect, moving toward, around now, the intelligence level of a mouse.
And then by 2029, artificial general intelligence, an AI system as intelligent as a human, a single human.
And then by 2045, you hit the singularity, the point at which the technologies, as Ray Kurzweil categorized it, of robotics, which would include AI, genetic engineering, and nanotechnology, these would converge and create digital intelligence that is millions of times greater than human beings.
Human beings being connected to this system directly through either brain implants or nanobots.
And it's called a singularity in homage to the black hole.
The singularity is that point beyond which no normal human being could imagine the future because it is far beyond the grasp of us mere mortals.
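The "number line goes up" shape of these charts is just compound doubling. A toy sketch, assuming a fixed two-year doubling time; Kurzweil's actual curve is fitted to historical price-performance data, so the constant here is purely illustrative:

```python
# Toy exponential growth: a quantity that doubles every `doubling_years`.
# The two-year doubling time is an illustrative assumption, not a value
# taken from Kurzweil's chart.
def grow(value0: float, years: float, doubling_years: float = 2.0) -> float:
    return value0 * 2.0 ** (years / doubling_years)

print(grow(1.0, 20))   # ten doublings: 1024.0
print(grow(1.0, 45))   # 22.5 doublings, roughly 5.9 million
```

On a chart, that steady doubling is what produces the hockey-stick curve that singularitarians extrapolate toward 2045.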
Now, this is a religious concept.
This is a prophecy.
It is a vision.
It is not at present reality, but as we'll soon see, there are reasons that those who adhere to the cyborg theocracy believe that it is in fact coming to pass.
Now, in the same tradition, you have Situational Awareness, a paper that was produced last year by Leopold Aschenbrenner, who, after working at OpenAI for a year, was fired over the noise he was making about AI safety within the organization.
Situational Awareness follows the same line of thought as Ray Kurzweil's concept of the singularity.
You see in the graph that's included in the paper, the intelligence explosion, the idea that AI is increasing in intelligence at an exponential rate and will soon, through its coding capability, begin recursively self-improving so that it then becomes something that no human being could have imagined.
They call this artificial super intelligence.
And the idea is that a system that is self-improving will soon be beyond the grasp of any normal human being, and de facto beyond control.
Leopold Aschenbrenner presents this as a national security issue.
And it's significant that Ivanka Trump actually posted about this with tremendous concern: this idea that AI companies, in the case of the United States, Google, OpenAI, xAI, Anthropic, and, bringing up the rear, Meta, and in China, Alibaba and High-Flyer, which produced DeepSeek, are racing toward this.
As this race goes on, we need to consider this a national security issue because whoever reaches this artificial godlike intelligence first will be able to dominate all competitors.
Now, Leopold Aschenbrenner, ever the shrewd businessman, even at a mere 23 years of age, founded a hedge fund, which is already valued, I believe, at $1.5 billion.
And so, you know, as much as I appreciate the efforts of AI safety communities, it has to be said that if you do it right, there's a lot of money in it.
Now, this year, the paper AI 2027 was released.
Again, you see number line go up in an exponential fashion.
This paper, produced by Daniel Kokotajlo and Scott Alexander, basically describes a scenario.
It's a kind of choose your own adventure story in which you come to a critical decision point in 2027 in which AI agents, agentic AI, AIs that can go off and perform tasks on their own, become supercoders and become superhuman remote workers.
And as they begin to operate various mechanisms within the digital and even physical worlds, and as it begins to self-improve, you hit super intelligence or you're approaching it.
And so in this choose your own adventure, you have the pause in which people come to their senses and take stock and try to align the AI.
Or you have the race in which all of these companies continue to race forward, leaving humanity in the dust.
Now, here is what's interesting about this, and about the entire technological mentality that many of the people involved, even in the criticism of this technology, hold.
This idea is very evident in AI 2027's pause scenario, the scenario in which human beings basically survive.
It's still a transhuman future in which the AI is superior to the human beings and we mere humans are left as kind of spectators or passengers on the great roller coaster ride called the singularity that goes up and one hopes does not ever precipitously descend into oblivion.
Now, these are all dreams.
These are all examples of futurism, which you could basically define as science fiction with fancy graphs to accompany it.
But in reality, the AI systems produced by Frontier companies are in fact increasing in capabilities.
The way we know this is that various organizations, some nonprofits, some think tanks, some corporations, run AI models through evaluations to see how capable they are.
A really good example of this just came out this week: Gemini 3, playing Pokemon.
I don't really play Pokemon.
I don't know much about it, but Gemini 3 has excelled in playing Pokemon far beyond Gemini 2.
Google famously has run its models through gaming systems.
The models have even taught themselves how to play the games.
These games may seem trivial, but what it shows is that the models are able to solve puzzles, to solve problems in a similar fashion to human beings, and in line with this singularitarian vision, are doing so at an accelerating rate.
They are improving at an accelerating rate.
Now, on a more serious note, and this study really caught my attention: Epoch AI and their benchmark FrontierMath.
The way this study works is that the researchers at Epoch AI have amassed unpublished math problems, many of them at the PhD level, closed math problems that have been solved, and they ask the model to solve each problem.
So you can see here, it's kind of deceiving.
It doesn't necessarily show rapid increases in capabilities, although that is, in fact, part of it.
But what you see is that at the very top of the latest tests, you have Gemini 3 and GPT-5 performing at the tier 1 through 3 level at 40-some-odd percent, meaning that these models are able to solve complex math problems at a PhD level.
And tier 4, the highest level, you have up to 20% accuracy.
Now, you might say, well, what about all the failures?
And that's true.
You don't want to trust a system like this if it is prone to failure.
But what it shows is two things.
One, an increase in capabilities.
And two, the ability to solve problems in a novel fashion.
Oftentimes, the systems will solve the problems in ways that no human being would have ever thought to solve them.
And mathematics is at the core of physics, and physics is at the core of chemistry.
And physics and chemistry are at the core of engineering.
And one doesn't have to be some crazed futurist to see where this goes if it continues.
Another good example is the time horizon benchmark.
This is produced by an organization called METR.
This one also lends itself to the singularitarian model.
You can see the exponential increase in the models at performing various software engineering tasks.
The time horizon is the length of a task, measured by how long it takes a human, that the model can complete successfully 50% of the time.
And so you see at the top, Gemini 3 from Google, and then GPT-5 from OpenAI.
And what it shows is that a model, an AI model, can be dedicated to a software engineering task long enough to produce the goods, right?
This is a potential worker.
This, as a coder, is your potential replacement.
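The 50% time horizon described above can be sketched in code. This is a toy illustration with made-up numbers, not METR's actual procedure (METR fits a logistic curve to per-task results); the function name and data here are hypothetical.

```python
# Toy sketch of a METR-style "50% time horizon" (hypothetical data
# and method, for illustration only).

def time_horizon_50(results):
    """results: list of (task_length_minutes, success_rate) pairs.
    Returns the task length at which the success rate crosses 50%,
    interpolating linearly between the bracketing lengths."""
    results = sorted(results)
    horizon = 0.0
    for (l1, p1), (l2, p2) in zip(results, results[1:]):
        if p1 >= 0.5 and p2 < 0.5:
            # linear interpolation for the 50% crossing
            horizon = l1 + (p1 - 0.5) * (l2 - l1) / (p1 - p2)
    return horizon

# Hypothetical model: strong on short tasks, failing on long ones.
data = [(1, 0.95), (4, 0.9), (15, 0.7), (60, 0.4), (240, 0.1)]
print(time_horizon_50(data))  # prints 45.0
```

The point of the metric is the trend line: as each model generation pushes that crossing point out to longer tasks, longer stretches of human work become automatable.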
Another great benchmark is the omniscience index.
The Omniscience Index was produced by Artificial Analysis.
And it's a series of some 6,000 difficult questions across six domains, including business, science and engineering, the humanities, law, and medicine.
And the models are simply asked to answer these test questions.
It's like a child, like a student being tested on capabilities.
Now, right now, you don't see imminent replacement.
You can see that the best performing model was Gemini 3 at 58% accuracy.
And behind that, OpenAI's GPT-5.
Behind that, Anthropic's Claude and XAI's Grok.
What's really significant about this is not only the success, but also the tracking of hallucinations.
These models hallucinate.
They come up with answers that are completely false just to appease the user.
And Gemini 3, which did get the 58% accuracy, also had an 88% hallucination rate.
When it didn't know the answer, it tried and it lied.
Now, this is a real quandary, but again, the capabilities are objectively increasing.
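The accuracy-versus-hallucination tradeoff described above can be sketched as a simple scoring scheme. This is an illustration in the spirit of the Omniscience Index, not Artificial Analysis's exact formula; here accuracy is correct answers over all questions, and the hallucination rate is wrong answers as a share of questions the model did not get right, i.e., how often it guessed rather than abstained.

```python
# Hypothetical benchmark scoring (illustrative, not the official formula).

def score(answers):
    """answers: list of 'correct', 'wrong', or 'abstain'."""
    total = len(answers)
    correct = answers.count("correct")
    wrong = answers.count("wrong")
    accuracy = correct / total
    missed = total - correct  # questions not answered correctly
    # Among the questions it missed, how often did it guess instead
    # of admitting it didn't know?
    hallucination_rate = wrong / missed if missed else 0.0
    return accuracy, hallucination_rate

# A model that almost never abstains: decent accuracy, but when it
# doesn't know the answer, it usually invents one.
answers = ["correct"] * 58 + ["wrong"] * 37 + ["abstain"] * 5
acc, hall = score(answers)
print(round(acc, 2), round(hall, 2))  # prints 0.58 0.88
```

Under this scheme the same model can post a respectable accuracy while still being untrustworthy, which is exactly the quandary: raw capability and honest uncertainty are measured separately.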
And the last benchmark I will show you is the perhaps morbidly named Humanity's Last Exam, produced by the Center for AI Safety, led by Dan Hendrycks, who is also an AI safety advisor at xAI.
Humanity's Last Exam is 2,500 questions, very difficult, drawn from 100 different subjects.
And the purpose, much like the omniscience index, is to show the increase in capabilities of models to gather, interpret, and output accurate knowledge across a variety of subjects.
You can see here, Gemini 3 currently is in the lead.
GPT-5 just behind.
Grok 4 just behind.
Humanity's Last Exam: once the machine, the idea goes, is able to answer all these questions, humanity will have nothing left to answer.
And finally, also from the Center for AI Safety in partnership with Scale AI, you have the attempt to define and track the approach to artificial general intelligence.
This is a term that has many, many different variations.
And this is an attempt to finally put the last stamp on it.
They define artificial general intelligence as an AI that can match or exceed the cognitive versatility and proficiency of a well-educated adult.
That would be us.
And as soon as it reaches this, we are now kicked into the dustbin of history.
You can see that GPT-5 has expanded out into the various domains beyond GPT-4.
Still plenty of room to go.
Now, what lesson do we take from this?
Number one, we do not need to build better machines.
We need to cultivate better humans.
It's been said that AI improves 1% per year.
And the question, the challenge is, are you improving 1% per year?
If you're not, I recommend you get on it because the greater replacement is coming for you.
And this is the ultimate front for human beings to shine in.
On that note, the more you shine, the more you make, and the more you make, the more the government comes for your money.
Go to Tax Network USA, dial 1-800-958-1000.
That's 1-800-958-1000.
Or go to tnusa.com/Bannon so that Tax Network USA can protect your earnings from Uncle Sam.
Also, be sure to text Bannon to 989-898 to get your free guide from Birch Gold and your free ounce of silver.
Text Bannon to 989-898.
WarRoom Posse, humanity may not make it, but enough of us will.
And I'll see you here next time.