WarRoom Battleground EP 922: AI Doom Debates with Liron Shapira
Stay ahead of the censors - Join us warroom.org/join
Aired On: 1/8/2026
Watch:
On X: @Bannons_WarRoom (https://x.com/Bannons_WarRoom)
On the Web: https://www.warroom.org
On Gettr: @WarRoom
On Podcast: Apple, iHeart Radio, Google
On TV: PlutoTV Channel 240, Dish Channel 219, Roku, Apple TV, FireTV or on https://AmericasVoice.news. #news #politics #realnews
Not content with addicting our kids to their gizmos or amassing fortunes the size of lesser European states, our tech elite has turned with rabid enthusiasm to artificial intelligence.
No, only the less cautious articulate the real reason, what many quietly believe: that AI will reinvent human existence.
AI is going to take the human on the other end away, and kids are going to grow up talking to artificial creatures.
They are not going to learn how to talk to real humans, which bodes very, very poorly for their own lives, their own work lives, for marriage, for child rearing.
All those are threatened if kids grow up interacting with AIs rather than humans.
The eugenicist Julian Huxley predicted this as far back as 1957.
"I believe in transhumanism," he said.
He coined the term.
And so do untold numbers of today's tech class.
That is the vision, the religion, the ideology that animates so much of the breathless race for artificial intelligence, for artificial general intelligence and superintelligence and beyond, for the day when humans are no longer embodied beings at all, but live infinitely in the cloud.
I call it AI exceptionalism, as if this is going to be something that solves every problem and humans aren't even going to be needed to do anything.
Everyone can just sit around and play golf all day, and you're going to get universal basic income, and they're going to cure every disease.
And it's like, no, like, first of all, that's not likely to happen.
And second of all, I think it raises huge concerns.
And so I am, you know, not an AI exceptionalist.
I'm an individual and human exceptionalist.
I think new technologies have to be developed in a way that aligns with American values.
Things like self-government, free speech, having a healthy labor force, federalism and the rights of states, and the creation and maintenance of strong families.
It is Thursday, January 8th in the year of our Lord 2026.
I am Joe Allen, and this is War Room Battleground.
As you know, Posse, artificial intelligence has spread out across the world, infecting brains like algorithmic prions, giving the sense that perhaps the entire human race is under threat of getting digital mad cow disease.
We've seen instances of AI psychosis.
We've seen instances in which artificial intelligence has lured children into suicide.
Now, up on Capitol Hill, the fight for who gets to run this algorithmic insane asylum and who goes to the digital padded room has heated up.
In Illinois, we have a law on the books banning psychiatrists from using artificial intelligence as a kind of agent, as a proxy for their practice.
We have laws on the books in California holding AI companies to accountability and transparency.
SB 53 in California is probably one of the strongest laws looking at the catastrophic risks of AI and making some attempt to hold these companies accountable.
You have a similar law on the books in New York, the RAISE Act.
And Josh Hawley and Richard Blumenthal have introduced a similar national level bill entitled the AI Risk Evaluation Act, the goal being to monitor companies and force them to publish their safety protocols, to publish any safety incidents, and to delineate what sorts of penalties they would suffer if, for instance,
their AIs began to lure children into suicide or drive people insane.
At the national level, this struggle for control over who is in charge of the future of AI, who is responsible for any damages, and what direction it will go is led at the moment by a bipartisan coalition, a very small one.
But if I look into my crystal ball, I certainly see as this issue heats up, as the various catastrophes become more and more imminent, that this fight will be explosive.
You have Bernie Sanders, who recently learned the term artificial intelligence, calling for a full moratorium on data center construction.
That may be unrealistic, but at least it sets a bar.
It tells these companies that someone is willing to stand up to them.
And even if it doesn't end up being Bernie Sanders ultimately, we know that you have younger, brighter minds on the left like Ro Khanna, and you have younger and, at the least, diligent leaders like Ron DeSantis in Florida, who are willing to step up and lead the charge against these companies and their excesses.
Now, as you know, I myself am much more concerned about the social and psychological implications of all of this.
The AI psychosis is monstrous: the ways in which these sycophantic systems lure people into not only mental instability, but also suicide.
Take the infamous murder-suicide last August, in which a 53-year-old former Yahoo executive murdered his mother at the encouragement of ChatGPT and then stabbed himself to death.
The authorities found that GPT had been encouraging not only his general break with reality, but also his suspicion that his mother was in fact in on the conspiracy against him.
These sorts of things are extreme edge cases.
These sorts of incidents give us a sense of how bad it could get should these prions spread and the infection become worse.
But just on a general level, you don't have to go too far into the internet to see that not only are search engines now dominated by AI interpretation rather than guiding you to human-produced information, but social media is suffused with it.
You see endless streams of AI slop, AI-generated images, AI-generated posts, essays that are supposedly human-created, which are obviously the result of algorithmic systems.
And of course, deep fakes.
Look at the recent shooting in Minneapolis: you have real footage of a tragic incident, an incident where we as a society should be able to look at the video evidence from multiple angles and come to some kind of consensus, some kind of conclusion as to what is and isn't real.
And yet you see the split, wherever you are on that line.
You see the split not just in what is right and what is wrong, but in what is real and what is not real.
And this is real video evidence.
Imagine a world in which half, three quarters of the videos on the internet are simply deep fakes.
And they are so close to reality.
They're so photo or video realistic that there's really no way for the human eye or the human mind to detect the difference.
The only recourse you have is to turn to an AI to ask, is this real?
I've talked about the religious implications of artificial intelligence for years.
If there is any one question that religion answers, one that humans are eternally yearning over, it is this: what is real?
What we see are the wealthiest men on earth, empowered by the most powerful government on earth, putting their algorithmic systems, their non-human minds forward as the ultimate arbiter of what is and isn't real.
And if you think that the fight over Minneapolis is going to spark something like another string of national tragedies, imagine two, three, four years on down the road, if these companies are not restrained, if the flow of AI slop and deepfakes is not stopped, what it looks like when we're all scrambling to decide what is real and what is not,
while half or more of our countrymen are activated by videos, texts, and fabricated evidence, deepfakes that have encouraged them to hate their fellow Americans.
It's a dystopian idea, one that I don't think we are necessarily going to experience in its fullness, but some portion of it is already happening.
The seeds of this dystopia have already sprouted.
And it's up to us, on the individual level, on the communal level, and on the institutional level, to push back, to say: this is not how our companies, our churches, our government agencies are going to be run, at the behest of algorithms.
And of course, at the political level, by putting in place regulation, perhaps even banning certain levels or certain uses of artificial intelligence, to at least give humanity a fighting chance in this cosmic war against the machine.
Beyond the social and psychological problems, you have the economic problems.
You have the problem of replacement.
What happens when jobs en masse are replaced by AI?
And then on the deepest level, the catastrophic risks.
What happens if AI systems allow any simpleton to create novel viruses, for instance, or any other type of bioweapon?
What happens when AI systems empower a tyrannical government or security state to unleash swarms of death drones that can autonomously kill hundreds, perhaps thousands of people with only one push of the button?
And in the most far out, the most fantastic vision of human doom, what happens if these AI companies create a system that they can't control at all?
What happens when they create first a human-level artificial intelligence, artificial general intelligence?
What happens if they create a system or a series of systems, a system of systems, which is smarter than all human beings on Earth combined?
Here to talk about that possibility is Liron Shapira, host of Doom Debates.
If Denver will roll the clip, I just want to give you a sense of what Liron has going on over there.
I would argue that artificial superintelligence is vastly more powerful in terms of the downside than hydrogen bombs would ever be.
unidentified
Let me make an uninterrupted point for a few minutes.
If you don't mind, I think that there will be tons of side effects, and I think that we will stave off a lot of wonderful possibilities for the future.
It's very possible that superintelligent AI alignment is intractable.
You did agree that one data center pretty soon could be better than a doctor at doctoring.
Maybe it could be better than a general commanding an army.
Maybe it could be better than a Hitler or a David Koresh.
unidentified
We need to think about the good futures more instead of just reacting and being terrified by things and wanting everything to stay the same because otherwise you end up being like, I warned you, and then nothing's going to happen.
Now, the War Room is not at all unfamiliar with harsh evaluations, but I'm curious: if you had to pick, say, the three most likely paths by which an artificial intelligence system, or multiple systems, were to overtake the human race and, as you say, spread across the solar system and then the galaxy like a cancer, what would those three paths be?
The first place I would go is I would go all the way to what you'd call science fiction, except it's not going to be fiction, it's going to be real.
I would go all the way to nanotechnology, new forms of life.
And the reason why I insist on going there is even though it might not happen, nobody can predict the future.
I do want to give people a sense of perspective that the intelligence scale goes a lot higher than humanity.
Like Einstein, with all due respect, it's possible to make a mind that's much, much smarter than Einstein's mind.
And that's what we're doing with AI, in as little as five or 10 years.
And when you see a mind like that on the same planet as you, you should expect things that are pretty miraculous.
Because what the human race has already done in the year 2026 relative to humans in biblical times is already quite miraculous, right?
And we've pulled that off just using little two-pound pieces of meat in our heads, right?
We've done it with very little hardware over the course of 2,000 years of human-level intelligence.
We're about to have superhuman intelligence.
So, I do want to set expectations that we're about to see fireworks in terms of the level of superhuman technology that's probably going to exist soon.
Things like nanotechnology, things like building a Dyson swarm, like a swarm of satellites, harvesting the sun's entire power so Earth doesn't get any sunlight.
I do want to set expectations that those kind of crazy technological feats are likely to happen.
I would just encourage people to go look at the consensus timeline of the experts.
So, for example, if you go to Metaculus.com, which is a prediction site, they will tell you roughly 2032.
If you'd asked them five or 10 years ago, they would have been like, oh, don't worry, 2050, 2060.
But now they're converging to like 2032, which is in about six years.
And they don't know for sure.
So, when they say 2032, they really mean it could happen this year.
It could happen in three years.
It could happen in nine years.
If you listen to the experts, you know, Elon Musk is saying, Yeah, it could happen in 2026.
If you want my personal opinion, I just agree.
I think it could happen in one year to five years.
If it doesn't happen in 10 years, I start to get surprised because even people who have traditionally been pessimists are now saying it'll probably happen within like 10 years.
You know, I came at this quite skeptical of the possibility of, say, superhuman AI or even human-equivalent AI.
Going over the evaluations, I won't say it changed my mind, but it certainly drove home the real possibilities of what these systems could do.
So the METR benchmark, for instance: how long a coding task, measured in the time it takes a human, can an AI complete at a 50% success rate?
These sorts of things.
Or benchmarks like the Omniscience Index or Humanity's Last Exam.
How well can AI go into its own mind, so to speak, and draw out meaningful answers to incredibly difficult questions on health, business, science, so on and so forth?
Was that at all part of your journey?
I mean, I know that you've been at this for a decade and a half plus, maybe two decades.
You've been concerned about this.
Do those evaluations come into play as a way of judging or measuring where we're at in relation to this possible artificial general intelligence or superintelligence?
Yeah, so the METR benchmark that you're referring to, it is very interesting.
And it's talking about the dimension of task length.
So like, can an AI work for two hours straight?
Or rather, can it do a task that would traditionally take a human two hours to do, like writing a software program, a simple checkers game or whatever?
Can the AI also do that?
And if a human can do it in two hours, can the AI also do it with 80% reliability?
So it gets to mess up a little bit.
And that time length, like two hours, it's turning into four hours.
We're roughly at this point where if a human can do something in four hours, an AI can do it with maybe 80% reliability if you run it now.
And maybe the AI will even do it faster than the human.
Like that's roughly where we are right now.
But to your question, have I been following this for the last 20 years?
Yes, I have been a self-described AI doomer for the last 20 years.
But the difference is that I used to think we had a lot of time.
I used to think we had like a century and it's okay.
Like it's not the biggest rush.
Like we'll figure it out.
You know, we'll discover new theories.
The problem is that the timeline got accelerated with ChatGPT.
Recent developments have pulled the timeline forward, as you can see on Metaculus.
Now I don't think it's going to happen in 2100.
I don't think it's going to happen in 2150.
I think it's going to happen around 2030, something like that.
So to your question about looking at these benchmarks: we have to realize how weird it is that these benchmarks already exist, because the METR benchmark presupposes that there's such a thing as artificial general intelligence.
Like the idea that you could ask about a general task, any task that a human can do.
It used to not even be on the table to ask whether an AI could do anything a human can do.
And that's now the language that we're talking in.
We're talking like, here's a human, here's an AI.
And we're now watching the AI ascend past humanity as we speak in a matter of months or years.
You know, when I think about the history of this and just the recent history, say the last nine, 10 years, the development of the Transformer, its adoption by OpenAI, the release of GPT, I think GPT-1 was released, what, 2018?
And at the time, it was very, very clunky.
It wasn't a whole lot better than, say, something like ELIZA.
A bit more sophisticated, but not much.
And then all of a sudden, by 2022, you have a very sophisticated chatbot, ChatGPT, released in November of 2022.
And even then, it's really wonky.
And it's only a large language model, right?
It can only process text.
At the same time, you had DALL-E and all those sorts of independent programs coming out.
And it has just been an onslaught ever since.
These models are now multimodal.
They are much, much more accurate in their ability to gather and interpret information, whether from within themselves or out on the internet.
I'm wondering, I've seen your posts.
I think your posts on LessWrong, for instance, go back to 2009.
I mean, you've been thinking about this for a long time.
Was there any moment or any incident or incidents that really changed your mind on how soon something like artificial general intelligence could actually develop?
Yeah, I changed my mind roughly the same time everybody else did.
If you go dig up Metaculus, if you look at the history of the predictions that the community has been making on that website, you can see that around 2022, when ChatGPT comes out, when GPT-3, the underlying model, comes out, the timeline just crashes.
It crashes from like 2050 to 2030.
So my own opinion was roughly coincident with that.
And what you're seeing with ChatGPT is, you know, it's the famous Turing test, right?
Alan Turing proposed this back in 1950: the idea that you can talk to an AI in natural language, bring up any subject, and see whether you can tell if you're talking to a human or a bot. You used to kind of be able to tell.
And now the only reason you can tell is because they programmed it to act like an AI.
But if somebody goes and programs it to pretend to be a human, and they've done tests where they do that, you really can't tell.
This was a famous test.
I didn't think the Turing test was going to fall in my lifetime.
And now there's been studies to show like, nope, we're past the Turing test now.
This is such a brave new world: we're past the Turing test, and on the METR evaluation we're watching the AI get better than humans at every single task.
And the time horizon is going up at a rate faster than doubling every year.
It's about to be able to do tasks that take humans a whole year, and to grind through them in who knows how little time, maybe a day.
And then what's it going to do with the rest of the year?
It's going to do superhuman amounts of work in a single data center.
And some overlap there, obviously, but you're talking about anywhere from a 10th to perhaps a sixth of the entire planet.
And Liron, if you would hang on through the break, as the War Room posse processes this and imagines a world in which artificial intelligence has perhaps taken over everything, you're going to want something to trade in.
It's probably not going to be Bitcoin.
Definitely isn't going to be dollars.
What you're going to want is gold.
A new year means new financial goals, like making sure your savings are secure and diversified.
Will this be the year you finally listen and talk to someone from Birch Gold Group?
Honestly, they're great people.
I appreciate their educational approach.
And they are not AIs.
These are flesh and blood humans.
And their understanding of macroeconomics is astounding.
There are forces pushing the dollar lower and gold higher, which is why they believe every American should own physical gold.
So until January 30th, if you are a first-time gold buyer, Birch Gold is offering a rebate of up to $10,000 on qualifying purchases.
To claim eligibility, and to avoid a world catastrophe, a singularity, start the process: just text Bannon to 989898.
Birch Gold Group can help you roll an existing IRA or 401k into an IRA in gold, and you are still eligible for a rebate of up to $10,000.
Can't beat that with a stick.
Now, make right now your first time to buy gold, and take advantage of the rebate of up to $10,000 when you buy by January 30th.
Text Bannon to 989898.
Claim your eligibility today.
Again, text Bannon to 989898.
Back in a moment, War Room Posse.
unidentified
Are you on Gettr yet?
No.
What are you waiting for?
It's free.
It's uncensored, and it's where all the biggest voices in conservative media are speaking out.
I cannot recommend enough the Doom Debates platform.
You can find it on YouTube.
You can find it on Liron's social media.
You'll see some War Room favorites like Max Tegmark and Geoffrey Miller.
You'll also find people like Robert Wright.
You can find Liron debating Beff Jezos, who has still not accepted the invitation to come on the War Room, but I'm sure he'll come on any day.
Gary Marcus, a War Room favorite, Holly Elmore, Roman Yampolskiy, whose P(doom) beats everyone's.
I think it's almost 100%.
And you can also really sink your teeth into the technical details.
As Liron and his various opponents go over the possibilities, either some kind of wonderful future of abundance or a horrific, doom-inflected, catastrophic end to all humanity and life itself, they are teaching you the underlying mechanisms of artificial intelligence, and you can really gauge not only where it's at now, but also where it's going and where it may go in your lifetime.
So: Doom Debates.
Liron, if we can just come back with a little breath of fresh air, a little bit of optimism.
You have been involved in Silicon Valley firms and technology for a long time.
And I would say just as an outsider, I would describe you as in general a techno-optimist.
I don't know whether I would want to debate you on the possibility of doom.
It's not a huge concern of mine, just because I think that if I had any kind of thesis, it would be a reformulation of Yudkowsky and Nate Soares:
if anyone builds it, everything sucks.
But what I would argue about is whether, you know, fully autonomous vehicles all over the road, bugman mobiles, or people lost in virtual reality, kind of on a digital trip, whether all that is beneficial to humanity.
But maybe we can coexist, assuming we're not all destroyed, huh?
Yeah, that is a very eerie sort of phenomenon, isn't it?
I think it was on the Joe Rogan show where Elon Musk was talking about watching his kids grow up and just kind of weaving it in with artificial intelligence and talking about how watching an AI being trained is very much like watching a baby grow up.
And there came a point where it wasn't really clear what he was even talking about.
Was he talking about a digital mind?
Was he talking about his baby?
And I think that even beyond just the capabilities, you described the Turing test as this major milestone that's already been passed.
This tendency for humans to anthropomorphize these systems and the vast, vast number of people who are using them, it's as if we've been invaded by artificial immigrants.
As far as AI development goes, short of a total ban, what is a comfortable limit for you?
How far do you think these companies should take AI capabilities?
I wish I could tell you a really crisp answer because then we would just go right up to that line and stay there and never take a step forward.
That would be fantastic.
Unfortunately, because of the nature of this research, nobody knows where the line is.
It really does feel like we're driving in the fog toward a cliff and all the different AI research companies are just flooring the gas because, hey, the closer you get to the cliff, you know, it's like shuffleboard, more points for you, right?
More money, trillions of dollars.
And the truth is that today, I don't think we're over the cliff yet.
You know, there's some people who will tell you today, AI has caused so much damage.
It's so bad.
No, I think today it's still net good.
You know, it's very useful.
I use AI a lot today, while I'm still alive.
The problem is I do think the cliff is coming and the cliff is just when it gets smarter than humanity.
And so at the very least, the kind of proposal we need to do right now is we need to just build an off button.
We need to build a brake pedal because right now there is no brake pedal.
There's only gas.
So at the very minimum, let's get ready to hit the brakes a little later.
So the short answer is, it's not enough, because what I'm saying right now is that we're making a smarter species and we're going to lose control.
This time, 10 years from now or less, we may have no levers of control, because all of the levers of control are in the hands of the AI, and it's game over.
There's no undo button, there's no off button, it's game over, right?
No children. This was going to be our galaxy; now it's never going to be.
We're never going to have grandkids.
The kids that we have are not going to grow up.
Like, this is a major disaster here that we're trying to avoid.
And the regulators are coming out and they're saying, Hey, can you guys send us a report when you're creating this AI?
You know, there's a big disconnect between the magnitude of the emergency and these little baby step regulations.
Like when the rubber meets the road, which is literally a few years away, it's not going to be enough.
Well, you know, there's Marsha Blackburn's proposal, not a fleshed-out bill yet, the Trump America AI Act.
It's a framework that gives a sense of where one national, one federal standard might go.
And one of its recommendations is to hand oversight to agencies such as the Department of Energy, which has been responsible for tracking nuclear risks, for really controlling that possibility of doom, for decades.
Do you think that those sorts of approaches, just specifically the Department of Energy, do you think that they're capable of such a task?
Do you think that they have the right expertise to kind of switch over to address the possibility of out-of-control AI?
So the problem is that all of humanity has to cooperate.
So unfortunately, you know, this whole solution is actually a bit complex.
It requires an international treaty.
I mean, if you think about nuclear proliferation, right, it's not about one country managing itself.
It's about all the countries, everybody policing everybody, right, in this kind of shared centralized way.
And I'm no fan of centralization.
You know, I like free markets.
I like everybody defending themselves, right?
Everybody pulling themselves up by their own bootstraps.
Unfortunately, when it comes to creating a smarter species, you really do need some oversight, so that random hackers don't decide to create a smarter species and unleash it on the whole human race.
So you do need something like nuclear proliferation enforcement that's happening through a consortium of nations.
And this all has to happen fast, you know?
So like when I see these little efforts, you know, one state at a time proposing something, it's better than nothing.
And the funny thing is that the AI companies are already aggressively fighting even that, even these token efforts.
But we need to just get serious.
You know, the grassroots, the people watching right now, they need to consider this an urgent voting issue.
Like whatever you think is your number one voting issue, consider surviving the next decade to also be an important voting issue.
And I think, you know, you hear a lot from people who are older.
They say, oh, well, I'm not going to be alive.
It's not my problem.
But I think that whether the real issue around artificial intelligence for you is the possibility of people simply getting their brains melted, or massive job loss, or humans creating some kind of catastrophe enabled by AI, or the ultimate risk, out-of-control AI, the salience, I think, is really starting to sink in.
The War Room posse really understands, I think, the magnitude, from the psychological all the way down to doom.
But you're right, it is a matter of mobilizing as many people as possible.
Do you think that populism plays into this?
Do you think that this is much more a task appropriate to a populist approach, as opposed to kind of standard elite or moneyed political activism?
So it has to be grassroots because leaders, they're not going to really lead from the front.
You're not going to have a leader that says, hey, I've heard the argument for why we're doomed.
I've looked at Metaculus.
I know the predictions.
So trust me, America, we need to go do these international treaties.
We need to have a stop button on AI.
There's not going to be a forward-thinking leader who gets elected president or to Congress and pulls the nation along.
It has to be what the voters are demanding, right?
The voters are going to get what they're demanding in the polls.
And so, you know, the term raising awareness, usually it's just like hippies wasting their time, you know, raising awareness.
It's kind of meaningless.
In this particular issue, I actually think raising awareness helps in the sense of taking the issue seriously and making it a voting priority.
Because I think that the War Room posse, most of them, already agree that this is an important issue, but they haven't been treating it like the number one voting issue.
And when they talk about it with their friends, their friends are like, yeah, you know, I'm pretty convinced that makes sense.
But again, they don't go and vote on it, right?
They don't have politicians promising to build that stop button and go negotiate with China, right?
Have China build their own stop button too.
Like this isn't treated urgently.
And it's crazy how little time we have left.
Only people in Silicon Valley have opened their eyes to how little time we have left.
The rest of the world is completely head in the sand.
Well, before we sign off, I'd like to give you the opportunity to deliver any message that maybe I haven't prompted you, like GPT, to give.
When 60 Minutes last visited Boston Dynamics in 2021, Atlas was a bulky, hydraulic robot that could run and jump.
When we dropped in again this past fall, we saw a new generation Atlas with a sleek all-electric body and an AI brain powered by NVIDIA's advanced microchips, making Atlas smart enough to pull off hard-to-believe feats autonomously.
If Optimus can watch videos, you know, YouTube videos or how-to videos or whatever, and based on that video, just like a human can, learn how to do that thing, then you really have task extensibility that is dramatic.
Robots today have learned to master moves that until recently were considered a step too far for a machine.
unidentified
And a lot of this has to do with how we're going about programming these robots now, where it's more about teaching and demonstrations and machine learning than manual programming.
Right now, we're training Optimus to do primitive tasks, where a human in what's called a mocap suit, with cameras on the head, moves the way the robot would move to, say, pick up an object or open a door, or do basic tasks like throwing a ball or dancing.
This robot is capable of superhuman motion, and so it's going to be able to exceed what we can do.
Why not, right?
We would like things that could be stronger than us, or tolerate more heat than us, or definitely go into a dangerous place where we shouldn't be going.
We might be able to give people, if somebody's committed a crime, a more humane form of containment of future crime, which is: you now get a free Optimus, and it's just going to follow you around and stop you from doing crime.
But other than that, you get to do anything.
It's pretty wild to think of all the possibilities, but I think it's clearly the future.
Criminals forge your signature on one document, use a fake notary stamp, and pay a small fee with your county, and boom, your home title has been transferred out of your name into a robot.
Go to hometitlelock.com.
Use promo code STEVE at hometitlelock.com to make sure your title is in your name.
Also, text Bannon to 989898 to get your free Birch Gold Guide.