| Speaker | Time | Text |
|---|---|---|
| This is the primal scream of a dying regime. | ||
| Pray for our enemies. | ||
| Because we're going medieval on these people. | ||
| There's not going to be a free shot for all these networks lying about the people. | ||
| The people have had a belly full of it. | ||
| I know you don't like hearing that. | ||
| I know you've tried to do everything in the world to stop that, but you're not going to stop it. | ||
| It's going to happen. | ||
| And where do people like that go to share the big lie? | ||
| MAGA Media. | ||
| I wish in my soul, I wish that any of these people had a conscience. | ||
| Ask yourself, what is my task and what is my purpose? | ||
| If that answer is to save my country, this country will be saved. | ||
|
unidentified
|
| War Room. | ||
| Here's your host, Stephen K. Bannon. Good evening. | ||
| I'm Joe Allen with War Room Battleground. | ||
| If you followed my travels over the last few months, you know that I have crisscrossed the country multiple times, even at one point ending up in Switzerland interviewing robots who, as you might imagine, turned out to be racist. | ||
| I mainly tried to keep my company confined to transhumanists, Luddites, the occasional normie, and of course, the War Room Posse. | ||
| In the recent weeks, I found myself in some very strange situations. | ||
| One of the more unique was a Latin Mass at a city I won't disclose in which all the women, of course, had their heads covered. | ||
| There were children everywhere, beautiful iconography, and the strict social order you could see in the health of the family and certainly in the health of the individuals I met. | ||
| You could see how that social order produces a certain type of human being, a human being who is devoted not only to family, but to God. | ||
| A human being who is infinitely capable, I believe, of responding to the future in a way that does not destroy the past. | ||
| Now, after the Mass, I found myself sitting outside of a coffee shop. | ||
| I was speaking to some of the young people. | ||
| Some of them had their spouses with them. | ||
| Most all of them will soon have children. | ||
| And they talked about what the future would look like for their children. | ||
| How would they protect their children from the predations of tech corporations in a future in which tech corporations basically rule the world? | ||
| What was really interesting about that was sitting right across from us was an anarchist. | ||
| He was sitting and working on a pair of boots. | ||
| He's a cobbler by trade. | ||
| The anarchist was diametrically opposed to the Catholic way of thinking. | ||
| He clearly was not religious in any dogmatic way, very much obsessed with esoteric traditions, otherwise known as the occult. | ||
| Certainly, he was not into the kinds of rigid social order that a Catholic or any other kind of deeply religious community would produce. | ||
| And yet, he agreed completely with the assessment that tech corporations are a primary threat to human life. | ||
| Now, a few weeks later, Thanksgiving, I found myself in another strange kind of juxtaposition of social schemes. | ||
| In the afternoon, I joined a group of families who were deeply Christian. | ||
| We talked about a lot of things. | ||
| In regard to technology, they are as suspicious as I am. | ||
| In regard to religion, let's just say that if I'm going to heaven, it'll be after them. | ||
| But they were, you could see in these families that same deep devotion in the way in which it expressed itself outwardly in their marriages, in their children, in their homes, in the way they order their lives. | ||
| Later on, I found myself in another and perhaps more exciting milieu. | ||
| I was invited by one Sam Hammond to a Thanksgiving dinner comprised mainly of what we would now call tech accelerationists, although in the old days you would call them transhumanists. | ||
| The turkey was delicious, the stuffing was as well. | ||
| The wine, to the extent I sipped it, was not bad. | ||
| And I'm not sure, but there were bottles of Bryan Johnson's snake oil lining the counter. | ||
| And it may be that I myself imbibed some of this bizarre immortalist elixir. | ||
| Now, in the same spirit of conciliation with which anyone would approach a Thanksgiving dinner, we all discussed our differences civilly. | ||
| And in the desire to continue that conversation, I would like to welcome the war room audience to encounter the mind of the futurist and chief economist at the Foundation for American Innovation, Sam Hammond. | ||
| Sam, thank you very much for joining me. | ||
| Thanks, Joe. | ||
| You're looking younger. | ||
| It's got to be the Bryan Johnson snake oil. | ||
| So, Sam, as we've discussed, your imagination is monstrous. | ||
| You basically have a brain full of shoggoths, their tentacles creeping out of your eyes, creeping out of your ears, and in the bizarre words that you speak, casting the future in terms of a singularity. | ||
| Your book or pamphlet, AI and Leviathan, is absolutely brilliant, however nightmarish this future that you paint may be. | ||
| If you would just give the war room audience a sense of what you're trying to communicate with AI and Leviathan. | ||
| What is this effort in futurism? | ||
| Yeah, so the inspiration comes from an essay by Tyler Cowen called The Libertarian Paradox, where he noted that libertarians have been fighting for small government, limited government for years. | ||
| We also value markets, creative destruction, the wealth-producing propensity of capitalism. | ||
| But maybe these things were a package deal. | ||
| We got the welfare state, we got the administrative state, the managerial state as a byproduct of capitalism and the success of the Industrial Revolution. | ||
| And so that sort of shook the foundation of my worldview. | ||
| I was raised and grew up pretty libertarian, but coming to understand that these sort of enemies that we fight, like the Leviathan, the state, got stronger as a byproduct, as a bundled package deal with prosperity. | ||
| That looking ahead, are there other package deals with AI and the AI transformation? | ||
| And so the way I sort of see it is that we are sort of sitting on a knife edge. | ||
| The powers of AI are incredible for everything from healthcare, biomedicine, education, but they also are incredibly powerful tools for surveillance, for censorship, for social control. | ||
| And there's a kind of race dynamic going on between the state and the rest of society. | ||
| Will we end up in a digital panopticon à la China, where there's a group in Shanghai that runs the surveillance systems and they have a big situation room? | ||
| They brag that they can identify anybody within seconds. | ||
| I think the name is Sharp Eyes. | ||
| It comes from a Mao quote. | ||
| Or will we sort of start to fragment as the capabilities that are now today only possessed by the CIA or Mossad or by state agencies become sort of democratized and we can all rebundle our organizations around smaller communities. | ||
| And I sort of want to walk a middle ground where we can have our cake and eat it too. | ||
| And I think it's going to be a very narrow path. | ||
| And one of my roles or purposes is to try to communicate that this is a package deal, that you have these sort of naive techno-optimists that think we're going to just plow forward into the brave new world and everything is going to be hunky-dory. | ||
| You have people who think we're just straight up doomed. | ||
| I think we're somewhere in between where we're going to have to make very hard trade-offs. | ||
| And at the very least, we should be communicating those trade-offs to the public. | ||
| You know, one thing I really appreciate about your perspective, Sam, is that you don't pull punches and you've never been shy about stating things that might be shocking to normal people. | ||
| Now, I'm not a normal person, so I'm not shocked. | ||
| I'm more intrigued than anything. | ||
| The future is wide open. | ||
| But your vision of the future is deeply informed by your education as an economist, your political education. | ||
| You had described kind of a progression from a naive anarchist to one who is much more open to the possibilities of the uses of the state and maybe even the inevitability of a large degree of centralization. | ||
| And the pairing that I read here in AI and Leviathan, the pairing I read here is particularly interesting because on the technological level, you basically take as axiomatic the claims of Ray Kurzweil and other futurists that we are indeed heading towards a technological singularity. | ||
| And you're looking at how will the state respond. | ||
| And, you know, spoiler alert, but you say that most likely it will stabilize as Chinese-style one-world or one-state control, one sort of centralized leader using AI to control the populace. | ||
| On the other side of that, you see something more akin to liberal democracy as it evolves into a high-tech society where corporations basically take up that role. | ||
| And in between, the anarchic states. | ||
| My first question regarding the first part, the technological singularity. | ||
| Why do you believe that, in fact, these technologies will keep increasing at an exponential pace? | ||
| And do you really believe that by, say, 2045, we'll see something like Ray Kurzweil's view, where we have AIs that are millions of times smarter than human beings, most human beings locked into those systems with trodes, being regularly genetically updated to keep up with the machine. | ||
| Is that really the generalized future you see us going towards and why? | ||
| Yeah, I don't know if I can put the specifics, right? | ||
| So predicting the future is hard, right? | ||
| There are some things that are more easily predicted, right? | ||
| So we can predict, it's maybe setting aside the bogus climate science, it's easier in principle to predict one degree warming over a century than what the weather will be in a month, right? | ||
| Because the weather in a month is a random dynamic system, similar with societies. | ||
| So I can say with confidence that the world of 2045 will be at least as different seeming to us as the world of 1950 was to the world of, say, 1650. | ||
| Just radical transformation. | ||
| And what that looks like in practice will in part be up to us. | ||
| But yes, I mean, one thing I think with some of the more naive techno-optimists where they sort of get their overconfidence is the fact that we've lived through a period of rough, relative stagnation over the last 40, 50 years as we've shifted from building actually new technologies to offshoring and globalization and these sort of substitutes to true innovation. | ||
| And as we exit that era of stagnation and re-encounter, like when history has restarted, we have to be prepared for a very tumultuous transition. | ||
| I'm someone who thinks that the mind is essentially computation. | ||
| And I think that is a prior that maybe makes me more open to the idea that we're going to recreate mind in a digital substrate. | ||
| And I think we, you know, I don't take this as axiomatic. | ||
| I think it's partly we're seeing it play out, right? | ||
| We're seeing similarities between these vision models and their internals and the way our visual cortex works. | ||
| We're learning a little bit about how human language works by studying these language models. | ||
| And so it's less that we're building this totally alien technology, although it is alien in some respects, but we're really building a simulation or emulation of ourselves, but in a format that is potentially unbounded, right? | ||
| When you say the mind is computation, are you using computation as a metaphor for what happens inside the human brain, or as I would say, in the human soul? | ||
| Or is it something more fundamental that the human mind and the computer truly do share the same sorts of patterns and the same sorts of processes that lead to what we call mind? | ||
| Yeah, I think to say the mind is computational is not to say that it's a classical computer. | ||
| Those are two different things, right? | ||
| So in some sense, the computational aspect of our mind gives credence to this notion of an immaterial soul, right? | ||
| Because what is software? | ||
| Software is not the bits. | ||
| It's not the transistors. | ||
| Software is this immaterial pattern that sits on top of those things. | ||
| And the core insight of Alan Turing and the other founders of computer science was that these things are independent of the substrate. | ||
| You can build a computer out of pneumatic tubes. | ||
| You can build a computer out of hydraulic locks and gates. | ||
| Similar with the mind. | ||
| We live in an immaterial reality constructed by our brain that's running on a particular kind of wet hardware, but there's nothing in principle that prevents us from putting that into a different form of hardware. | ||
| And as we've seen, the progress of AI over the last, say, 15 years has come not so much from deep insights into how proteins fold, right? | ||
| Protein folding was solved because essentially the computers caught up. | ||
| And that problem existed within the envelope of the kinds of supercomputers we had to model protein folding. | ||
| And as computation continues to double or quadruple in its performance, and as our algorithms get more efficient, the human mind will fall within that envelope too. | ||
| And then what Kurzweil foresees is that by 2045 the biggest supercomputers will have more computation than not just a single brain, but all the brains put together. | ||
| And we don't really know how that transpires or what comes out of that. | ||
| Will it be something that we wield as some single unified entity? | ||
| Will we distribute that compute in a way that gives everyone a stake or will it be monopolized? | ||
| These are the sort of questions ahead of us. | ||
| And I think the biggest risks come from folks who think that this is just a normal technology and that the world of 2045 will just look like the world of today, only more so. | ||
| Yeah, the idea that this is a normal technology, I think, has been blown away by, what is it, over half of young people say that they've used chatbots as companions or view them as companions. | ||
| They talk to them as if they were actual people. | ||
| I think that goes well beyond anything we saw with video gaming or even social media. | ||
| And we're only three years into the ChatGPT era. | ||
| On the other hand, I am praying for either a solar flare or at least an S curve. | ||
| But setting that aside, the political approach that you have here, I think, is really interesting. | ||
| And it's something that gets lost on a lot of people, especially on the right. | ||
| Without naming names, there are a lot of people that associate transhumanism with globalism, with leftism. | ||
| They associate techno-optimism and accelerationism with the same. | ||
| If they haven't been snapped out of that hypnosis by the Trump administration yet, I think they really should. | ||
| But we've covered accelerationism and its newest form, effective accelerationism. | ||
| I think that whether you would take that label on, you certainly sympathize with that camp. | ||
| Would you say that that's correct? | ||
| Yeah, I would say I'm an accelerationist everywhere but superintelligence. | ||
| I think it's appropriate if we are going to build this thing to go in it with a degree of humility and trepidation. | ||
| The real point I mean to get at, though, is that you, by and large, would be categorized as on the right, at least everywhere it counts except for technology, I would say. | ||
| And maybe you would say that techno-futurism is a form of right-wing expression. | ||
| But I think it's on a practical level really interesting how your interests align with someone like Steve Bannon in regard to something like chip exports to China. | ||
| You're very much a hawk on that, correct? | ||
| You wrote a fantastic article for, I believe, The American Conservative a few weeks back. | ||
| Why, if you believe acceleration and technological progress are in fact the ways forward, why would you want to preference America or privilege America and shut out the Chinese on this? | ||
| Don't they need more advanced noodles too? | ||
| I think there's like two big reasons. | ||
| One is if we are on this knife edge between different forking paths the way the world could go, and if AI is as potentially monopolizing a technology as it is, as it stands to be, that hegemonic power of AI could be kind of fractal. | ||
| It could lead to runaway growth of one Google, but also at the world stage, runaway power of a single country, especially as, not just through autonomous weaponry, surveillance, automated cyber attacks, but just the flywheel that will kick off as we begin to automate our industrial factories and so on and so forth. | ||
| So I'd much rather the U.S. be in that lead, in part because then we can bake in, at least gives us some hope of baking in values like civil liberties, respect for privacy, sort of building a constitution into the AI. | ||
| The other reason is just conflict. | ||
| So I think the world wars were kind of inseparable from industrialization. | ||
| We moved from a world of agrarian craft economies to ones where we were racing to build the biggest rail networks because if we had more trains, then we could move more tanks. | ||
| And it was the fact that France and Germany, Russia, these countries in the lead up to World War I were in this all-out race to build the biggest rail networks. | ||
| And it was the fact that it was close that I think led to conflict. | ||
| You get more war and more conflict when you have two great powers that are kind of neck and neck. | ||
| And to the extent that we can make the U.S. lead in AI and the core infrastructure sort of uncatchable, I think it reduces the threat of there being an all-out war. | ||
| Do you think the Chinese are actually in a position to catch up? | ||
| Do you think that Huawei, for instance, could actually meet the demand for data centers for computation in any way comparable to where the U.S. is at right now? | ||
| They could if we had a totally laissez-faire free market here, right? | ||
| So, you know, the manufacturing equipment that goes into building these chips are some of the most complex pieces of engineering ever produced by mankind. | ||
| The components that go in involve thousands, tens of thousands of suppliers. | ||
| There are dozens within the supply chain, dozens of mini monopolists. | ||
| ASML in the Netherlands builds the lithography machines. | ||
| They have 100% market share. | ||
| And so that enables our export controls so we can tell those companies, you may not export this technology to China. | ||
| China still gets some of it because they smuggle it or before the controls were in place, they hoarded a bunch of it. | ||
| But they're otherwise really cut off. | ||
| Now, I think this is important because China has other advantages. | ||
| They are massively outproducing us on energy. | ||
| They're adding 400 gigawatts to their grid every year. | ||
| The stat I read is that they add a United States roughly every seven years to their energy grid, whereas our energy grid has been flatlined. | ||
| And we look at what goes into building these AI models, other than having the talent and the engineers, it's really the data centers and the energy to power them. | ||
| And we're already running up against hard constraints there, which is why we're making all these deals with the UAE and Saudi Arabia and so on, where there's actually abundant energy. | ||
| China doesn't have that problem. | ||
| If these export controls were lifted, I don't think they'd just catch up. | ||
| I think they'd leapfrog us. | ||
| And we see their comparative advantage is these massive infrastructure projects, just like they can do, build giant highways. | ||
| They could build the AGI cluster if we let them. | ||
| Something that's at the center of AI and Leviathan, which, again, I recommend the War Room Posse check out. | ||
| It's very brief. | ||
| It's a little heady in places, but you can get through it in an afternoon very, very easily. | ||
| But you open up with the theme, or a kind of metaphor or symbol, of the X-ray goggles, the X-ray specs, and comparing AI to that: it gives people the power to see beyond what they could otherwise see. | ||
| And someone like me, I mean, I see the development of these technologies, especially in regard to surveillance, and it's very off-putting. | ||
| You don't want to be seen in that way. | ||
| You don't want someone to have that power over you. | ||
| And so my first instinct is to reject it, lambaste it, do anything I can to push it away. | ||
| You, on the other hand, are approaching it much more from the perspective of how can this be used? | ||
| And how can you use these technologies to protect yourself from surveillance or any other kind of predation? | ||
| Can you go into that a little bit? | ||
| Break down the X-ray spec metaphor. | ||
| Yeah, part of it is recognizing that technology is often more discovered than invented, right? | ||
| So if one day we woke up and there were X-ray specs that could be built with sort of off-the-shelf technology that no one could ever control, what would happen? | ||
| Suddenly I could see through your clothes, I could see through walls, I could break into banks, I could cheat at poker at the casino. | ||
| There are all these systems in our society that would just suddenly break because of this new capability. | ||
| There's kind of three canonical ways society could respond. | ||
| We could sort of change our culture or change our norms. | ||
| We could become nudists and embrace post-privacy norms. | ||
| Kind of like social media influencers basically do now with their souls. | ||
| Right, exactly. | ||
| We could adapt or do mitigation. | ||
| So we could retrofit our homes with copper wire or anything that blocks the X-ray penetration. | ||
| And the third option is we have an X-ray Leviathan, the all-seeing state that orders all the X-ray glasses to be handed over to the feds, and then they use their monopoly on X-ray glasses to scan our bodies to make sure we don't have them. | ||
| But the core point is the fourth option of nothing happening or some stable equilibrium is not tenable, right? | ||
| Because it's fundamentally a kind of collective action problem. | ||
| I want the glasses, but I don't want you to have the glasses. | ||
| But you have the exact same incentive, and so very quickly we move into a new world where we all have the glasses and we have to do something about it. | ||
| And AI is very similar. | ||
| It's hardly even a metaphor. | ||
| We even have some of these Meta Ray-Ban glasses, and you could imagine downloading a machine learning model for them; there are such models for detecting people through walls using Wi-Fi signal displacement. | ||
| I'm sure that won't be on the Apple App Store, but people will jailbreak these things. | ||
| We're going to go to break shortly, but in our remaining moments before, can you just tee up the idea that one of the more dramatic predictions you make is that the democratization of AI, AI diffusing across the population, whether it be America or any other country, is going to inevitably lead to regime change. | ||
| Why? | ||
| Why is this your core argument before moving on to the distant future? | ||
| Yeah, it's not that I'm a technological determinist, but I do see the way in which our institutions or our governments or organizations are technologically contingent, right? | ||
| So, you know, the growth of the administrative state, for instance, was presaged by and partly driven by the telegraph and early rail networks that let Washington, D.C. have agents of the state be in faraway parts of the country and be able to still communicate and get back and forth. | ||
| And so whenever you have a big technology shock to the core sort of inputs to organizations, the ability to monitor, to broker contracts, to enforce contracts, principal agent costs, the ability to, if I give you a job that you're going to execute on that job, when those costs come down radically, you get new kinds of institutions, right? | ||
| We saw that in micro with Uber and Lyft, right? | ||
| That was a regime change. | ||
| You know, these were public taxi commissions that were quasi-governmental. | ||
| And for the people involved, for the taxi drivers involved, it was incredibly violent and dramatic. | ||
| You saw protesters in Paris throwing rocks off of bridges and so on. | ||
| No, I think for the rest of us, it was a massive improvement. | ||
| But that was a shift that happened quite dramatically within a span of less than five years. | ||
| The ridership completely flipped. | ||
| And that was because of mobile and internet and these new technologies leading to new kinds of organizational forms. | ||
| Well, if there's any one thing that people need to keep in mind as this transition unfolds, it's that you're going to need some kind of economic hedge against total economic disruption. | ||
| Wouldn't you agree? | ||
| I think so. | ||
| And on that note, there are a lot of politicians that should be getting coal in their stocking for Christmas. | ||
| But Birch Gold thinks as a smart planner, you deserve silver. | ||
| That's why for every $5,000 you purchase between now and December 22nd, Birch Gold will send you an ounce of silver, which is up over 60% this year. | ||
| That's you, Uber drivers. | ||
| Get that silver. | ||
| Get that gold before it's too late. | ||
| And the Waymos take your job. | ||
| See, smart people diversify and have a hedge. | ||
| That's why I encourage you to buy gold from Birch Gold. | ||
| With the rate cuts from the Fed in 2026, the dollar will be worth less. | ||
| And as the Waymos come in, you're going to need something to throw through their windows. | ||
| Get a brick of gold. | ||
| What happens if the AI bubble bursts? | ||
| You're going to have nothing but Bitcoin and gold. | ||
| So diversify. | ||
| Let Birch Gold Group help you convert an existing IRA or 401k into a tax-sheltered IRA in physical gold. | ||
| And for every $5,000 you buy, you'll get an ounce of silver for your stocking or for your kids. | ||
| What a great way to teach them about saving smartly. | ||
| Take out your phone. | ||
| Don't open the Uber app. | ||
| Don't open the Lyft app. | ||
| Don't open the Waymo app. | ||
| Go to your SMS and text Bannon to 989-898 to claim your eligibility for this offer. | ||
| Again, text Bannon, B-A-N-N-O-N to the number 989-898 today because Birch Gold's free silver with qualifying purchase promotions ends on December 22nd. | ||
| Text Bannon to 989-898. | ||
| Back soon with Sam Hammond on total regime change in America. | ||
|
unidentified
|
| Hello, America's Voice family. | ||
| Are you on Getter yet, bro? | ||
|
unidentified
|
| What are you waiting for? | ||
| It's free. | ||
| It's uncensored, and it's where all the biggest voices in conservative media are speaking out. | ||
| Download the Getter app right now. | ||
| It's totally free. | ||
| It's where I put up exclusively all of my content 24 hours a day. | ||
| You want to know what Steve Bannon's thinking? | ||
| Go to Getter. | ||
|
unidentified
|
| That's right. | ||
| You can follow all of your favorites: Steve Bannon, Charlie Kirk, Jack Posobiec, and so many more. | ||
| Download the Getter app now. | ||
|
unidentified
|
| Sign up for free and be part of the new thing. | ||
| All right, War Room Posse. | ||
| We are back with Sam Hammond, chief economist at the Foundation for American Innovation. | ||
| He is the author of AI and Leviathan, a very slim tract packed full of nightmarish futures, but also tips on how to survive them. | ||
| Okay, Sam, if we could just return briefly to the concept of regime change, the breakdown of the current order under the pressure of AI and other downstream technologies. | ||
| You don't necessarily present this as something that's ideal or even something that's desired, but you do present it as a cultural and political landscape that people will have to deal with. | ||
| So if we could just return really quickly to the mechanisms by which democratized AI will in fact erode current institutions and how you think people should find that narrow corridor as you describe it in the book. | ||
| Yeah, part of this is a continuation of existing trends, right? | ||
| So, you know, if the 1950s and 60s, NASA did the Apollo project, today it's being done by SpaceX. | ||
| And we're seeing very similar phenomena across many parts of our broken institutions, right? | ||
| I think we all recognize that the U.S. government and bureaucracy is overwrought. | ||
| It's decaying. | ||
| It's decrepit. | ||
| The War Room Posse would definitely agree with that. | ||
| And what have we done instead? | ||
| Well, we've started to outsource it. | ||
| And I think even if we had the sort of competence of the 1950s Eisenhower administration or something like that, the fact is a lot of this talent, a lot of the know-how is embodied in these private corporations, right? | ||
| So we're using Palantir as our spy agency. | ||
| We're using SpaceX as our launch capability. | ||
| And I just see running that forward becoming more and more true. | ||
| And especially when you look at the roadmaps, what these AI companies are saying they want to build, right? | ||
| Today we have chatbots. | ||
| OpenAI has this actually spelled out in their research plan. | ||
| Next is innovators. | ||
| So AIs that don't just do your homework, but can actually autonomously do new science, make new discoveries. | ||
| Then after that is AI organizations, right? | ||
| So these are not just a single AI or a single chatbot, but autonomous AIs working in teams as part of an AI corporation. | ||
| And then once you have sort of end-to-end AI corporations, the world starts to look very, very different. | ||
| We're going to have these companies, these organizations, potentially making millions, billions of dollars autonomously. | ||
| They might have a human at the top sitting at the chairman of the board, but otherwise any human would be a friction, would be a bottleneck. | ||
| We're talking about something like Jeff Bezos ruling over an army of robots that do everything. | ||
| Right. | ||
| Everything. | ||
| You just take out the human who is now led around by an algorithm and you replace him with Digit the robot. | ||
| Yeah, that's barely even a joke. | ||
| Elon Musk has been fighting for shareholder control over Tesla for this very reason, right? | ||
| Because he said, you know, it's less about the money. | ||
| This trillion-dollar package he's gotten. | ||
| It's more about the control because I'm going to use Tesla to build a humanoid robot army. | ||
| They're planning to ramp to 50,000, 100,000 by next decade, millions of these humanoid robots coming off the factory line. | ||
| Yep, building basically an Optimus Gigafactory right now in Texas. | ||
| Correct. | ||
| And so that's a lot of power under one person, but it's also a new kind of organization. | ||
| We complain about the DMV or whatever, but a lot of the jobs the governments do are already extremely exposed to current AI technology, auditing, law, accounting. | ||
| These things are going to fall this decade. | ||
| And if there's going to be sort of a balance between the private sector and the public sector, if we can have our state capacity, the minimum viable government we need to enforce contract and make sure we maintain rule of law, we need to keep up, right? | ||
| If we don't, and I think this is sort of a safe default scenario that the government doesn't adapt quickly enough, then it will just be displaced in the same way we're already seeing. | ||
| And when you say we, you mean the United States? | ||
| I think broadly speaking, most Western democracies are pretty exposed because of our slow procedural orientation where we take our time and the technology doesn't wait, right? | ||
| And so it calls for, I think, this balancing act where we want to be pushing AI into government, but also taking that as an opportunity to set standards. | ||
| Because another worry is not just the private concentration of power, but you could imagine some tinpot dictator. | ||
| What is the thing that keeps them from having total power? | ||
| Well, it's the fact that the military could do a coup or their generals will defy an order. | ||
| But if all those become sort of automated, if the whole machinery of government becomes AI, then it's a matter of just changing the prompt and you change your government. | ||
| And so we need to build in some levels of privacy, civil liberties, engineering into the tech stack itself so that we don't have this sort of lock-in effect that could arise. | ||
| But this goes to my point, that I think a lot of this technology is at this point, the Pandora's box has been opened. | ||
| There's no way out but through. | ||
| We can try to resist the technology, but really I think a better path is to try to steer the technology, to master the technology, not let it master us. | ||
| So from my own perspective, I can appreciate your view as a futurist and as an economist, seeing these trends going forward and seeing them as quasi-inevitable, and asking what you do about it on a practical level, on an economic level. | ||
| But as a humanist, as a flea-bitten monkey person, I am much more concerned about, well, then what do people do? | ||
| Like, what do regular people do? | ||
| What do my friends and family do in the face of this? | ||
| Do we buy Bitcoin? | ||
| Do we become human AI symbiotes? | ||
| Do we vote for the new tech accelerationist party? | ||
| Like, what would you, in your view, what sorts of futures would a blue-collar working man be dealing with or a small business owner be dealing with? | ||
| Like, how would they respond to this? | ||
| Well, I mean, over the next say five years, I think a lot of blue-collar manual labor work is still relatively safe. | ||
| I think there's a lot of ways that AI could be empowering to small business owners and entrepreneurs. | ||
| The fact that you can now do your own marketing and graphic design and things that normally require big teams or get legal counsel essentially for free. | ||
| Now, in the long run, if we want to ask, what is sort of my vision for the best possible outcome? | ||
| And again, this is not necessarily a forecast. | ||
| This is now me telling you what I would want to happen. | ||
| I see there are potential opportunities for AI to be a corrective to a lot of the problems of modernity. | ||
| And I think a lot of conservatism, a lot of right-wing thought, is an important reaction to modernity and the trade-offs that came with it: yes, we want to have these large-scale systems because they're more efficient and they produce standards. | ||
| By modernity, do you mean to include managerialism, bureaucracy, egalitarianism? | ||
| The post-Enlightenment era where we lost something with that too. | ||
| We lost local communities. | ||
| We lost the interfacing with your neighbor. | ||
| We got pushed into big metropolises and under the thumb of a sort of impersonal bureaucracy, a Kafkaesque bureaucracy. | ||
| And again, I see that as a worthwhile trade-off because we are more prosperous, we have higher living standards. | ||
| We can try to recreate community, but it was a real trade-off. | ||
| And is there a way in which AI, insofar as it does start to dissolve some of these state functions and these forces of homogenization, could enable a new kind of high-tech communitarianism? | ||
| I want to go back to the one-room schoolhouse that was down the road before it all got consolidated in these big schools. | ||
| But with robots. | ||
| Well, you could have the AI tutor in the morning and the jiu-jitsu class in the evening, right? | ||
| And you're definitely going to want to keep up with their physical prowess. | ||
| Well, I think that's actually true. | ||
| I think if the early part of the 21st century was good for the nerds, I think the latter half will be good for the jocks. | ||
| Okay. | ||
| I was never much of a jock, but I do appreciate the sentiment. | ||
| At least it's a monkey person sentiment. | ||
| But you see, where I'm sort of going with this is, you know, and there's, and this is also what animates a lot of the more conservative pro-AI folks is they see the power of AI to dissolve Hollywood, to, you know, at least in the short run, deflate the sort of managerial professional class economy, you know, these laptop workers that rule over us. | ||
| You know, when that becomes plentiful, then it's the electrician or the plumber that is actually in high demand. | ||
| No, I just think that will be a relatively short transitional window where at some point we'll also have robotics that do that as well. | ||
| And so to your point, to your question, then what? | ||
| What are we left doing? | ||
| And I look around the world: where can we garner some inspiration from? | ||
| Like what's close to this sort of post-scarcity world today? | ||
| And I see the Gulf states. | ||
| I see the UAE, Qatar, Saudi Arabia. | ||
| They have trillion-dollar sovereign wealth funds. | ||
| They essentially have free social services for their citizens. | ||
| They import these guest workers that build their stadiums, which from their point of view are basically robots. | ||
| They do these things. | ||
| Yeah. | ||
| In the original etymological root of the word, robot comes from, what is it, robota in Czech, which means servant or slave. | ||
| So I guess they are, they're Czech robots. | ||
| So I think there is a world where we end up in a kind of rentier state. | ||
| And I think this is one of the things we have to balance. | ||
| The reason Saudi Arabia has a big sovereign wealth fund is because if they don't, they suffer a resource curse. | ||
| And so we need to, I think the Trump administration has been quite thoughtful and has a lot of foresight in the fact that Trump wants a U.S. sovereign wealth fund. | ||
| But that leaves open the question of what we do on a daily basis. | ||
| And we look back in history. | ||
| What did people in the 1600s do on a daily basis? | ||
| Well, they, yes, they sowed the land or whatever, but they also went to church. | ||
| They raised their families. | ||
| They participated in community life. | ||
| They went to rituals and had services. | ||
| And I think there's a world where we can get back to something that is potentially more human than what we have today because AI has this potential for this radical relocalization of human society. | ||
| So taking this line of thought, by the way, before we go, I will say one more prayer for a solar flare. | ||
| And failing that, just give us an S curve. | ||
| But a long, flat S curve. | ||
| I'm intrigued by the roots of your thought in what was once called transhumanism and is now just called science and technology. | ||
| We were both reading Ray Kurzweil's The Age of Spiritual Machines around the same time, 2001 or so. | ||
| And it made a deep impression on me, the totalizing vision of technology, the idea of superhuman AI, all human beings attached to it through nanobots or whatever, the indistinguishable nature of physical and virtual reality, all that. | ||
| But when I read it, it just sounded like a nightmare world. | ||
| I'd read Ted Kaczynski a couple of years before and oftentimes joked that on one shoulder is Ray Kurzweil and on the other is Ted Kaczynski, sort of like a devil and a fallen angel on each shoulder. | ||
| For you, my sense is that Kurzweil had a different impact. | ||
| Am I correct about that? | ||
| I think the primary impact it had was just looking back at how much he got right through relatively simple methods, right? | ||
| So people will nitpick that his timing is off here and there. | ||
| But Age of Spiritual Machines came out in 1999, and he predicted that we'd have human-level AI, AGI, by 2029, which if you look at the betting markets and the other forecasting sites is roughly where things are converging. | ||
| He may have had a very slightly different path to how to get there. | ||
| Right. | ||
| He talked about whole brain emulation. | ||
| Yeah, that we'd scan the brain. | ||
| And in a way, we did that indirectly, right? | ||
| These large language models are trained on human-generated data. | ||
| And in the limit, they are learning the thing that generated that data, not the data itself. | ||
| And the thing that generated that data is a mind, which is why these companies are actually even starting to talk about the welfare of the AI. | ||
| So what I got from Kurzweil was just, first of all, that history hasn't ended, that we should not limit our imagination. | ||
| And if you talk to most people, they're relatively linear thinkers, whereas Kurzweil always trusted these exponential trends, and we've got to take them very seriously. | ||
| And then secondly, that you can do a lot and go a long way with these very simple forecasting methods: what will be the biggest supercomputer? | ||
| What is the computational power of our brain, and when will those two lines intersect? | ||
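The forecasting method described here can be sketched in a few lines. This is purely illustrative, not Kurzweil's actual model: the brain estimate, baseline, and doubling time below are assumed round numbers chosen only to show how an exponential trend line is extrapolated to a crossover year.

```python
import math

# Toy Kurzweil-style extrapolation (illustrative numbers only):
# find the year an exponentially growing supercomputer trend
# crosses an assumed brain-equivalent compute figure.
BRAIN_FLOPS = 1e16       # assumed brain-equivalent compute
BASE_YEAR = 1999         # assumed starting year of the trend
BASE_FLOPS = 1e12        # assumed top-supercomputer speed then
DOUBLING_YEARS = 1.5     # assumed hardware doubling time

# Years until the exponential curve reaches BRAIN_FLOPS:
# BASE_FLOPS * 2**(t / DOUBLING_YEARS) = BRAIN_FLOPS
years_needed = DOUBLING_YEARS * math.log2(BRAIN_FLOPS / BASE_FLOPS)
crossover_year = BASE_YEAR + years_needed
print(round(crossover_year))
```

The point is less the specific answer than the method: two lines, one exponential and one flat, and the forecast is just where they intersect.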
| Yeah, you go into it a bit. | ||
| You give a hat tip to the early extropians, Max More and others in that genre. | ||
| Max More is the reason we're saying transhumanism; he pivoted to that as a term. | ||
| And you also give a hat tip to the effective accelerationists. | ||
| I get the sense that you also see it as being part of the same trend. | ||
| I want to pivot, if we can, to your vision of the future, your timeline. | ||
| I mean, if there's one thing that Ray Kurzweil can be given credit for, it's that he had the guts to say, this is what I believe is going to happen. | ||
| And he laid out a very specific path. | ||
| You do the same. | ||
| And if you would, just give the audience a sense. | ||
| You broke it down into three basic periods. | ||
| The immediate future, about six, seven years from now, beginning in 2036, and then ending, of course, in the 2040s as we approach the singularity. | ||
| What are the different elements that people should expect to see as we move forward towards this imagined, I would say, singularity? | ||
| Sure. | ||
| So let's start with where we are today. | ||
| Today we are at a place where we have Google, OpenAI, Anthropic, and X in a neck-and-neck race to release the best general-purpose language model. | ||
| The big breakthrough last year was reasoning models, thinking models, models that can actually do tasks. | ||
| And now the application of reinforcement learning, which is an AI training technique that basically gives these models goals and goal-directed behavior. | ||
| Now, there's an organization called METR, M-E-T-R, that tracks the level of autonomy in these systems. | ||
| By autonomy, I mean what's the longest task that they can do before they sort of become discombobulated and fall off track and start to drift. | ||
| That is now doubling every seven months or so. | ||
| Kind of in a Moore's Law fashion. | ||
| Yeah, faster than Moore's Law. | ||
| Moore's Law was every two years. | ||
| This is every seven months. | ||
| So the best model today, at least that's publicly released, is Gemini 3 from Google. | ||
| It can perform tasks, engineering tasks that take humans roughly two and a half hours with a high degree of reliability. | ||
| In seven months, that'll be five hours. | ||
| Then seven months from then, it'll be 10 hours. | ||
| And then 20 hours. | ||
| And then suddenly, very quickly, we have systems that are doing things autonomously that would normally take humans or teams of humans weeks or months. | ||
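The doubling arithmetic just described can be written out explicitly. A minimal sketch, assuming the figures stated in the conversation (a 2.5-hour task horizon today and a doubling every seven months); this is a back-of-the-envelope illustration, not METR's own methodology.

```python
# Illustrative projection of AI autonomous task-horizon growth,
# assuming a 2.5-hour starting horizon and a 7-month doubling time
# (both figures as stated in the conversation).
def task_horizon_hours(months_from_now, start_hours=2.5, doubling_months=7):
    """Hours of human-equivalent work completed autonomously."""
    return start_hours * 2 ** (months_from_now / doubling_months)

for months in (0, 7, 14, 21):
    # 2.5 -> 5 -> 10 -> 20 hours over three doublings
    print(f"+{months:2d} months: ~{task_horizon_hours(months):.1f} hours")
```

At this rate, roughly 40 months (about six doublings) takes the horizon from 2.5 hours to around 160 hours, i.e. a month of full-time human work, which is the "weeks or months" regime described above.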
| Those are going to be incredibly powerful. | ||
| They're going to be incredibly useful economically because this is when you move from AI being a tool to being a direct substitute for all kinds of at least at first white-collar work. | ||
| The greater replacement. | ||
| Yes, your words. | ||
| And then, you know, but it's also very dual use, right? | ||
| So the autonomy of these systems can be used to automate your Excel job, but it could also be used to execute cyber espionage campaigns, as Anthropic just revealed. | ||
| They disrupted a Chinese effort using their models and their servers running autonomously to spy on U.S. corporations and government agencies. | ||
| So I think that's really the next, say, two or three year period. | ||
| I think we could see a major run-up in these cyber attacks and potentially in ways in which the internet becomes somewhat unusable, or at least we need to build new internet rails, both because it'll be hard to know what's real and what's not, the proliferation of deepfakes, but also the level of cyber threats, the vulnerabilities in our cyber infrastructure are very severe. | ||
| And if we don't fix them fast enough, we may have to just build alternatives. | ||
| And you see the appropriate response, or at least the most effective response, is people basically moving into gated communities, both in reality, physical reality, and virtually, right? | ||
| Yeah, you see this in like privatization. | ||
| Yeah, you see this with online communities, right? | ||
| So if you go on Facebook and look at the comments on some fake image of an African child who built a Jesus statue out of shrimp shells, you see all the people commenting, being like, oh man, praise the Lord. | ||
| And it's like, well, we're not going to make it. | ||
| Those people are not going to make it. | ||
| But how do you solve bots? | ||
| Well, you could either have some identification system where we all scan our iris, like WorldCoin wants to do, or you end up moving into these more gated communities where you are very selective about who gets in. | ||
| And I think that that's already happening in the digital realm. | ||
| I think it will increasingly happen in the real world too. | ||
| And ultimately, leading to the singularity, if I may, should I read the last passage of the book? | ||
| Is this too much of a spoiler? | ||
| You know, you describe this privatization. | ||
| You describe a city that is home to a massive singularitarian data center powered by fusion, and they are about to go God mode. | ||
| And you say the city is a home to a fusion-powered supercluster with billions of times more computational power than every human brain combined. | ||
| It just completed its first big training run, and the new model is ready to be tested. | ||
| The engineers have read The Sequences and know the danger, but their pride, curiosity, and Benthamite expected-value calculations all scream: turn it on. | ||
| Besides, who's going to stop them? | ||
| Nightmarish future indeed, Sam Hammond. | ||
| I cannot say whether it will happen or not, but I'm still praying for that solar flare. | ||
| AI and Leviathan. | ||
| Where can people find the book and where can people follow you? | ||
| So this is a limited run, but it's drawn from an essay series. | ||
| If you just Google it, you can find the essays online for free. | ||
| All right, Sam Hammond, I really appreciate you coming by. | ||
| Thank you very much for hanging out with us, flea-bitten monkey people for now until it's all robots. | ||
| Before the robots, though, the IRS needs more money, your money. | ||
| If you owe the IRS back taxes, they can garnish your wages, levy your bank accounts, and even seize your retirement or take your home. | ||
| Don't let the IRS target you. | ||
| Call the professionals at Tax Network USA. | ||
| Their tax lawyers and enrolled agents are experts in powerful programs that may even help you eliminate your tax debt. | ||
| Whether you owe a few thousand or a few million, they can help you. | ||
| With one phone call, you can start the process of stopping the threatening demand letters. | ||
| Call 1-800-958-1000. | ||
| That's 1-800-958-1000, or visit tnusa.com slash bannon. | ||
| And of course, you still need to diversify your assets in the face of the singularity. | ||
| Diversify, let Birch Gold Group help you convert an existing IRA or 401k into a tax-sheltered IRA in physical gold. | ||
| Just text Bannon to 989-898 to claim your eligibility for this offer. | ||
| Again, text Bannon to the number 989-898 today because Birch Gold's free silver with qualifying purchase promotion ends on December 22nd. | ||
| Thank you. |