All Episodes
Aug. 22, 2025 - The Culture War - Tim Pool
02:05:59
Will AI Destroy Humanity? Can Humans Escape AI Doomsday Debate

BUY CAST BREW COFFEE TO SUPPORT THE SHOW - https://castbrew.com/ Become A Member And Protect Our Work at http://www.timcast.com Host: Phil @PhilThatRemains (X) Guests (For AI): Bryce McDonald @McD_Bryce (X) Nathan Halberstadt @NatHalberstadt (X) Guests (Against AI): Shane Cashman @ShaneCashman (everywhere) Joe Allen Producers: Lisa Elizabeth @LisaElizabeth (X) Kellen Leeson @KellenPDL (X) My Second Channel - https://www.youtube.com/timcastnews Podcast Channel - https://www.youtube.com/TimcastIRL

Participants
Main voices
bryce mcdonald
12:51
joe allen
48:54
nathan halberstadt
22:14
phil labonte
22:47
shane cashman
18:25

unidentified
Ready to level up?
Chumba Casino is your playbook to fun.
It's free to play with no purchase necessary.
Enjoy hundreds of casino-style games like bingo, slots, and solitaire anytime, anywhere, with fresh releases every week.
Whether you're at home or on the go, let Chumba Casino bring the excitement to you.
Plus get free daily login bonuses and a free welcome bonus.
Join now for your chance to redeem some serious prizes.
Play Chumba Casino today.
No purchase necessary.
VGW Group.
Void where prohibited by law.
18+. T&Cs apply.
phil labonte
Aren't there a lot of people who think that it will usher in a new age for humanity?
unidentified
But there are also a lot of people out there that have made significant warnings saying that, look, this actually could destroy humanity.
phil labonte
And we're here today to talk about it.
So not a lot of monologue today.
We're just going to get right into it.
unidentified
So joining us today is Bryce McDonald.
phil labonte
Introduce yourself, please.
unidentified
Hey, I'm Bryce.
bryce mcdonald
I'm the U.S. lead at Volus, which is a portfolio company of New Founding.
unidentified
And Volus implements AI in real-world American businesses like construction, mining, manufacturing.
phil labonte
So you're pro-AI generally, right?
unidentified
Yes.
phil labonte
That's a fair statement.
nathan halberstadt
Okay.
unidentified
And also on the pro-AI side generally, as well as Bryce McDonald.
Yeah, I'm Nathan.
I'm sorry, geez, I'm sorry.
Nate Halberstadt.
Nathan Halberstadt.
phil labonte
I apologize.
nathan halberstadt
I'm a partner at New Founding.
unidentified
It's a venture firm focused on critical civilizational problems.
I lead investing in early stage startups, and I'll also be taking the pro-AI side.
And Bryce and I will also be qualifying.
nathan halberstadt
We're worried about some of the same risks, of course, but we see a positive path forward.
unidentified
I do think that the risks are something that everyone, even people that are pro-AI, is at least aware of and takes seriously.
phil labonte
But to talk about the negative sides, the possible dangers of AI, we've got Joe Allen.
unidentified
Yeah, Joe Allen. I am the tech editor at The War Room with Steve Bannon, occasional host, writer, not an expert or a philosopher, as I'm oftentimes smeared as, and a failed Luddite.
Failed Luddite.
joe allen
Yeah, I've not been able to successfully smash even my own machines.
unidentified
That is a crying shame.
I'm sorry to hear.
joe allen
You know, I'm still going.
unidentified
Awesome.
And we've got the inevitable Shane Cashman here.
shane cashman
Yeah, host of Inverted World Live.
unidentified
Had Joe Allen on the show last night.
We got into rat brains in vats and simulations and pregnant robots and AI accountability.
I'm looking forward to this one.
So do you guys want to start with an outline of your positions, basically, so that way the viewers understand?
Or do you guys want to jump into anything in particular?
How do you guys feel like this should go?
nathan halberstadt
I'm happy to lead us off.
I think we could probably all start by agreeing that Sam Altman is, I won't use the term antichrist, but a pretty weird guy.
shane cashman
I'd say he's one of the Antichrists.
joe allen
Antichrist.
unidentified
And to start with, I think Bryce and I agree that AI technology presents a number of potential risks, especially for human well-being.
But I think we're excited particularly about this conversation because we agree with you on that.
I think we all come from a right-of-center perspective here.
And we want basically a path forward for AI that works for Americans and especially the next generation of Americans.
I think we're concerned, like, is this going to work for our kids and grandkids?
But, and I'm familiar with both of your work, Shane and Joe, and actually really respect the point of view that you guys come from.
So I'm excited about the dialogue because I think we can hopefully talk through some of the very serious concerns that we all share.
But ultimately, I think Bryce and I see a path forward where AI technology can actually shift opportunity towards the people who have been disadvantaged in the previous paradigm.
So we think about sort of the middle American skilled tradesmen or sort of the industrial operating businesses in America, people who've been really hurt by offshoring or by financialization from private equity or software as a service lock-ins from Silicon Valley and the rest.
We think there's a pathway where AI can basically shift more power and autonomy to some of the people who we wish had had it up to this point.
There are still plenty of risks, but that's a part of where we will actually want to take the conversation is arguing that over the next decade or two, there could be sort of a golden age that emerges and AI will play a role in it.
And there will be lots of challenges and lots of serious things where we'll need to adjust policy.
And we need to basically make sure that we minimize the risks for people, for Americans, along the way.
nathan halberstadt
But that's the side that we'll be taking.
Bryce, maybe you want to add.
unidentified
Yeah, there's one thing that I want, Bryce, if you could expand upon it.
One phrase that you used, you said a narrow pathway.
How narrow do you think the pathway is?
phil labonte
Do you think that it's more likely that there are going to be more negative consequences from the use of AI?
unidentified
Or do you think that it's more likely that there will be positives or do you think it depends on who's in control?
bryce mcdonald
Look, with any technology, the positive and the negative are really closely intertwined.
unidentified
And I think our role as people who are hopefully going to be able to shape the future of AI is to actually split those apart and figure out what are the bad elements that we can avoid.
bryce mcdonald
For example, the psychotic episodes that AI chatbots are bringing people into.
unidentified
Or, for example, trying to automate away all work or ruin education with AI cheating apps, right?
bryce mcdonald
We want to split out those negative elements and try to mitigate those.
unidentified
And ultimately, I think it'll be a lot of pros and cons, but just like with social media, just like with the internet and even trains or electricity, there's going to be both positive and negative.
Joe, what's your feeling overall about the outlook that Bryce and Nate have?
phil labonte
Do you think that that's in any way realistic, or do you think that it's all pie in the sky, that this is just a terrible idea that we should all fear?
unidentified
And to that point, if you do think that it's a terrible idea, we don't have the ability to prevent other countries from pursuing it.
So how do you think the U.S. would be best served moving forward, considering Russia, China, and that there are going to be AI companies all over the world?
And if the United States does prohibit it here, these companies are just going to go offshore.
They're going to go to other countries.
Yeah, there's a few different questions there.
So to the first question, how do I respond to the position presented here?
I'm not sure what we're going to argue about.
And I agree, by and large, although as a writer, I have zero use for AI.
And so it's very domain-specific, right?
If you're in finance, you might have a lot more use for machine learning than I would.
And a lot of writers use AI to basically plagiarize, cheat, and offload their work to a machine.
Yeah, the question of U.S. competitiveness, especially in regards to China, China could leap ahead.
It's really a volatile situation with AI because simply transferring techniques and information and the technology to build these systems is all that's required for another country to begin building close to the level or approaching the level of the U.S. right now.
joe allen
But this is all driven by the U.S.
The AI race is driven by the U.S.
unidentified
The AI industry is by and large centered in the US.
And the arguments made by people like David Sacks, or Marc Andreessen's sort of dismissive position in regards to the downsides, it's completely reckless.
I think probably disingenuous, although I don't know their hearts.
And I don't see, I can see from a national security perspective, if you have a very effective algorithm or AI or AIs that are used to, for instance, simulate the battlefield or analyze surveillance data or target an enemy.
Yeah, that I think is something that should be taken very seriously.
On the other hand, the way they're talking about it, it's as if flooding the population with goon bots and groomer bots is going to be essential to American competitiveness.
And I just, I don't see how having what right now, at least statistically, however much you trust surveys, to have a third of Gen Z students basically offloading their cognition to machines to do their homework, I don't see how that's going to make the U.S. competitive in the long term.
And the production of AI slop, the sort of relationships, the bonding that inevitably occurs with someone who doesn't either make themselves unempathic or sociopathic or is just born that way.
joe allen
The way large language models and even to some extent the image generators and video generators work, they trigger the empathic circuits of the mind.
One begins to perceive a being on the other side of the screen inside the machine.
unidentified
I think that those social and psychological impacts are already evident and are going to be severe.
joe allen
The economic impacts, kind of up in the air, it's an open question, but it doesn't look good.
unidentified
And the mythos around existential risk, the AI is going to kill everybody or become a god and save everybody.
Again, the likelihood of that happening, probably low.
But I think that mythos itself is driving most of the people who are at the top of the development of this.
And I think that has to be taken very seriously.
It'd be like if you had Muslims, for instance, that ran all the top companies in the U.S. and supported by the U.S. government.
Maybe you like it, maybe you don't, but it's something you should take very, very seriously.
There's one point that you made that I actually want to kind of drill down on.
You said that it was really, that AI development was driven by the United States.
And is it really driven by the U.S.?
As in, like, is it only because the United States is doing it?
Or is the tech actually a human universal that all countries actually would go after?
Because it's my sense, like we said, we talked about China or you talk about Russia.
I don't think that just because the United States is on the leading edge of this technology, I don't think that in the absence of the United States, these technologies would not develop.
joe allen
Most of the innovative techniques come out of the U.S., and all the frontier labs are U.S.
phil labonte
But the point that I'm making, that's just reaffirming that it's that innovation is in the United States.
joe allen
So if we stop, they would start or they would begin to catch up.
phil labonte
Well, they would continue.
unidentified
Yeah, I don't see that.
Maybe.
Because of the way the technology develops, even if it's not being developed in the United States, or even if it's developed in other countries more slowly, that doesn't mean the technologies wouldn't be developed.
I actually hope that China and our other rivals around the world do develop and deploy these systems like we're doing in the U.S. Because then they'll be plagued by goonbots and cheating.
And they already have, haven't they?
Like they've got facial recognition and stuff like that that they use to control their populations.
Yeah, those are different questions from the goonbots.
But yeah, the tendency to disappear up into one's own brainstem, I guess, is a human universal.
joe allen
I think if China begins to deploy recklessly their LLMs at the same scale as the US... but China's actually got really strict regulation, including protecting consumers, much more so than the US, on deepfakes, things like this, but also like any kind of pro-CCP output.
unidentified
I'm sorry, anti-CCP output, that's all banned.
joe allen
But yeah, I think if China borgs out and weakens its population as we have, it would be kind of like payback for the fentanyl.
unidentified
So would you guys jump in on that?
Do you guys think that the United States, because the U.S. is where the leading edge is, do you think that if the U.S. pulled back, other countries would also?
Because like I said, I think that just because a country lags behind the U.S. technologically doesn't mean they don't have the impulse or the desire to actually develop these technologies.
Without a doubt, there is something to this technology that's behind the goonbots, behind the particular instantiations of it, the applications, right?
One second, I want to jump in.
The idea of goonbots, like I understand that those are the flashy things and sex sells, I understand that, but that's really one of the smallest areas and I think one of the least important, right?
Because you're dealing with medical technology, AI and medical technology.
You're dealing with AI in multiple different fields.
Like it's almost becoming ubiquitous.
phil labonte
I think that as much as people like to say, oh, the porn bots are going to kill us.
unidentified
Everyone's going to jump into the pod.
I think that that's actually just kind of a way to slander the technology.
And again, this is not endorsing the idea of AI sex girlfriends or whatever, but even just the way that people were talking about it so far around the table, the goonbots, the point of saying that is to slander the technology.
Agreed.
bryce mcdonald
And there needs to be a positive vision for AI.
unidentified
Like what we need to understand, what is AI good for and what are humans good for?
And ideally, we keep AI out of the areas where humans are most capable and we're going to do just fine.
bryce mcdonald
Areas like our relationships, of course.
unidentified
Areas like education and the core functions that basically people are experts in that AI is not experts in.
So AI is good for doing work that's, frankly, less humane.
So things that we don't like doing.
Paperwork, administrative work, bureaucratic work, that kind of stuff.
And in domains where AI is not adding value or it's obviously just terrible.
So you think about when you go onto Twitter or just the algorithm at any point, you see an increasing amount of just AI slop.
Or even in education, we've talked about at a certain point, if the kids are just outsourcing all of their learning, we're going to figure that out.
And that's something that will need to be corrected if people are using AI in sort of strange sort of psychotic relationship dynamics.
I think, again, we'll sort of figure that out and solve that as well.
And really across all of these, my hope is that what will occur is that as we figure out that AI doesn't work well in this domain, it'll force people back into the in-person.
One example here would be, let's say, for example, Joe, like you and I were going to go on a show together or just go do something together.
If I have an AI agent, for example, that sort of spontaneously reaches out to you and messages you and then your AI agent responds back and it's scheduled and maybe there's a whole conversation that happens, but neither of us are even aware that the conversation even happened, right?
That's a pretty weird thing.
nathan halberstadt
And at a certain point, we sort of lose trust in those sorts of interactions where there isn't a firm handshake, where there aren't two people in the room together.
unidentified
And so I think the natural sort of second order consequence of this, at least it seems to me, is that it'll force people to care more again about the in-person relationships.
So people in their community, their family, their church.
And also even things like proof of membership and sort of vetted organizations or like word of mouth referrals, right?
So somebody like Bryce basically says like, yo, you should really go do XYZ thing with Joe.
And I know Bryce in person, and I wouldn't take that advice off of the internet anymore.
So you sort of bias more towards input from in-person.
And in some ways, I think that that solves some of the challenges we've been having with the internet and with social media, which has been terrible for young people, just in terms of anxiety and other things as well.
And so what we want to have is a future with AI that doesn't look like Instagram or doesn't look like what social media did to people.
And I think it's possible.
Do you think it would be accurate to say that because of the algorithms that social media uses to put things in front of people, wouldn't that qualify as AI?
And wouldn't the repercussions of having that kind of information, the way that information is fed to people, particularly young people, would that qualify as one of the negatives of AI that we've already seen even in its infancy?
That's true.
And I think it's worth bringing up the distinction between, I would say, the last generation of AI, which is what you see in algorithms and social media.
It's what you see in drone warfare.
But a lot of maybe more positive elements as well, like the ability to detect fraud in financial systems.
And there's a new generation of AI, which started with the release of ChatGPT in 2022.
And you could call that generative AI.
You could call that large language models.
But this is really the source of a lot of the hype in the last few years.
And it's a source of where we can actually think about automating all the worst parts of our companies or our personal lives.
But it's also the risk where the slop comes in, right?
bryce mcdonald
So all the tweets that you see that are clearly made by a robot, that's this second generation.
unidentified
You know, you're going to have to find something for us to disagree about.
phil labonte
Well, I mean, I've tried pushing back on.
unidentified
But to your point, I will say this.
You're talking about critical systems that are having AI integrated into them, medical, military, very critical systems, right?
At least in the context of what I was thinking of, it was assisting humans, not actually taking over.
So yeah, the goonbots, I think you're underestimating them. Just like digital porn, just like OxyContin, it may be something that is primarily concentrated among people who are isolated, people who are already mentally unstable, vulnerable, but that's a lot of people, and it's an increasing number of people.
But let's put the goonbots aside, and the groomer bots.
It gives us some indication as to how unethical and sociopathic people like Mark Zuckerberg and Elon Musk are, but we'll leave that aside.
joe allen
Look at just the medical side of it.
unidentified
I hear all the time from nurses and doctors about the tendency towards human atrophy and human indifference because of the offloading of both the work and the responsibility to the machine.
And, you know, studies are only so valuable, but there are a few studies that at least give some indication that that is somewhat intrinsic to the technology, or at least it's a real tendency of the technology.
joe allen
One was published in The Lancet, and it was in Poland.
unidentified
They followed, I believe, doctors who were performing colonoscopies, I guess they were proctologists. For the first three months, they did not use generative AI, it was like two three-month periods, and then they followed them after three months of using the generative AI. And what they found, I mean, if there's one thing that, in AI, you know, I'm sure you guys agree, AI is a troubling term, right?
What are we even talking about when we talk about AI?
joe allen
But they were using, you know, algorithms are very good at pattern recognition in images.
unidentified
And so in radiology and other medical diagnoses, it's much better statistically than humans in finding like very small, mostly undetectable tumors or other aberrations.
So these doctors used it for three months, and they found that after three months, just three months of consistent use, they were less able, something like 20, 27%, something like that, less able to detect it with their own eyes just because of the offloading of that work to the machine.
And in the military, it's going to be a lot more difficult to suss out how many problems are going to come out of this.
joe allen
But in the case of, say, the two most famous AIs deployed in Israel, Habsora and Lavender, these are systems that are used to identify and track legitimate targets in Palestine.
unidentified
And basically, that means it fast tracks a rolling death machine.
And is it finding more and more legitimate targets?
Is it not?
Don't know.
joe allen
All we know is it's accelerating the kills.
unidentified
So I think, but in both cases, what it highlights is how important those roles are, doctors, soldiers, so on and so forth.
And it also at least gives us some indication as to the problems of human atrophy.
And in the case of warfare, the real tragedy of kills that were simply not legitimate.
So to your atrophy point, right?
If AI is better at detecting things like cancers and stuff like that, and it's also still technically, I mean, it's in its infancy, right?
This is still a very, very new technology.
It's only in the past couple of years, two years possibly, that this has been capable of even doing this.
And it's gone from the infancy to being able to detect better than human beings.
Wouldn't it make sense to say, look, it is a bad thing that human beings are relying on it to the point where they're losing their edge, essentially?
They're not as sharp as they used to be.
But moving forward, considering AI is so much better than humans, is that a problem?
And will that be a substantive problem?
Because you think, considering how the advancements have gone in the past, in two years it'll be almost impossible for human beings to even keep up with AI.
phil labonte
Would it be a negative to say, oh, well, humans won't be so sharp?
unidentified
Well, yeah, they won't be, but everyone relies on a calculator nowadays for any kind of significant math problems.
No one's writing down and doing long division on a piece of paper anymore.
phil labonte
They always use a calculator.
unidentified
Isn't that a similar condition or situation?
nathan halberstadt
I really like the analogy of the calculator.
unidentified
One heuristic here, or one way of thinking about this that we like to use, is that AI right now, especially ChatGPT, is very good at replicating sort of bureaucratic tasks, and really bureaucracy.
So it's information tasks.
And just like in the same way as to do a very large math problem, you just plug it into a calculator and it does it quite quickly.
It's sort of the same thing in a business context or in sort of an administrative context.
Like, AI today does quite well what entry-level associates and analysts and people like this did five years ago in, say, an investment banking firm or consulting firm or a law firm, or even just sort of passing around basic information in a large bureaucracy.
You know, you could think about it as similar to before the calculator, there would have been sort of entry-level people who were doing these just like extremely long math problems.
And I think a point that Bryce made earlier is that some of this stuff is actually, it's actually fairly inhuman.
nathan halberstadt
Like being a cog in a bureaucracy is not necessarily like the peak of human flourishing.
unidentified
And so as long as new opportunities are materializing and as long as there are still ways for Americans to continue working, forming families, then I don't necessarily see it as a terrible thing if certain very inhuman types of jobs stop existing, as long as new jobs are created that are more conducive to human flourishing.
I think it's disingenuous to compare this technology to technologies of the past and their advancements, because those technologies, even the calculator, still involve a human working with them, whereas AI is going to replace everyone.
And I understand the short-term benefits that come with all of this, whether it's medical or military, which I disagree with; Lavender, I think, is a terrible situation.
And allegedly it has a 10% error rate.
But the idea that it's going to create a future where humans can do better things outside of this and not be a cog, I think we're a cog in it right now.
I think we are the food source for AI and it's learning because of us, right?
And the people who are building the AI, they don't want the physical world.
Like the big names, the Altmans, the Thiels, the Musks, you name them, they don't care for the physical world at all.
They don't want nature anymore.
They want data centers and AI to be doing everyone's job and to outsource everything from knowledge to accountability.
A lot of them are transhumanists.
They are transhumanists, right?
shane cashman
And that is a big part that's baked into AI.
unidentified
And a lot of the problems we see online with, say, censorship on Twitter, the algorithm still has that in its DNA, the same things that we had problems with four or five years ago.
And I understand AI is here.
There's nothing we can do about it.
It's kind of like gain of function, in my opinion.
We've started it.
It's here.
Now people just can't stop doing it.
But I feel like with this, I have to reject any flowery notion of a future where we can live symbiotically with AI because in the end, it's like we've created our alternate species to take over.
And it's going to draw us into apocalyptic complacency where we kind of are at already.
And people keep saying this technology is going to help out with people.
It's going to make things better.
It's going to make things easier.
It's going to be a tool.
Joe would say it's a tool.
And it certainly is a tool.
But right now, we're already seeing declining reading skills in schools.
People are more isolated than ever.
And I don't think it's going to get better all of a sudden because the proliferation of AI, I think it's going to make things much worse, much more fractured.
And it's going to become this inevitable lobotomy of humanity.
Whereas with previous advancements in technology, there was some sort of way we could work together, despite some of, you know, the consequences of something like Taylorism and scientific management, where you did become a cog in the factory.
But we are the cog in it right now as it's growing within its factory.
And someone like the former CEO of DeepMind, I forget his name, talks about... Geoffrey Hinton?
No.
Oh, Mustafa.
shane cashman
Yeah, yes.
unidentified
He talks about containment, right?
Like what you guys are saying, this narrow path forward, which I agree.
You know, we have to have some sort of narrow path here.
But he talks about containing the AI.
I think we're beyond that.
shane cashman
There's no containing it now, no matter what you do, because it's going to find its way into every household, no matter what.
unidentified
It's in every institution.
It's in every country.
And we have countries like Saudi Arabia who want to go full AI government.
That's the path that it's taking.
And I think in the future, not so distant future, the AI is going to worry about containing us.
And that's what I'm fearful of.
I think this being drawn into apocalyptic complacency means it's going to destroy us because we built it and we allowed it to.
Look, I just saw a video the other night about AI, and it led with the idea that right now we're going through, or have been going through, a massive die-off of species, right?
Insects.
phil labonte
There's like 40% fewer insects on Earth now.
unidentified
And it's because of human beings.
And the point that it was making was that human beings didn't know they were killing off insects in the ways that they were; with pesticides and deforestation and all of these things we were doing, making the modern world and living in the modern world, we killed off 40% of the insects.
Well, insects are part of the ecosystem that actually are necessary, as annoying as they are, and they can be, they are necessary.
But the point that it was making was that we weren't aware that this was happening.
And then they made the connection that should we have a super intelligent AI, right?
Not just agents that can help, but a super intelligent AGI, it's likely that it will start doing things of its own volition that we can't understand.
Like bugs didn't know why human beings were destroying them and destroying their habitat and why they were getting killed off.
phil labonte
They had no idea.
unidentified
And when you deal with a sufficiently more intelligent entity, humans can't understand it.
And right now, AI will start doing things that people can't understand.
We had this Wired piece here where AI was designing bizarre new physics experiments that actually work.
And the point is, the AI started working with the physicists as a tool.
They started using it to help them figure things out.
And it came up with novel methods to make finding gravitational waves easier.
And they didn't understand what had happened.
Then the same thing has happened with chess, right?
There was a chess bot, kind of your chess master.
AlphaZero.
phil labonte
Yeah, AlphaZero, maybe.
unidentified
It could be.
But everyone kind of thought that all the chess masters know how chess goes, and they see the moves and they know what moves you do in response, et cetera, et cetera.
And AlphaZero, or the chess bot, I'm not sure if it's the same one, just for accuracy, did a move that no one understood.
And no one understood why.
But then it was like 20 moves later or something, it won the game and no one understood how.
Another thing that AIs have started doing: there were two AIs communicating with each other, and they literally created their own language.
And the people that were outside of the AIs didn't understand what was going on.
And now it seems that a lot of the big AI companies are just feeding information and AI will pump out an answer and it'll be the correct answer, but they don't know how it got there.
phil labonte
Isn't that a massive problem as well?
unidentified
If you don't understand what the machine you're working with is doing and you don't understand how it's communicating, doesn't that become a problem for the people that created it?
I think the biggest problem, and I think a big danger with discussions about AI is to treat AI as though it is a sentient entity in itself and that it actually does things of its own volition.
And I think we need to realize, okay, how does it actually work?
How does this technology work?
It's pretty magical in some cases.
I'm sure everyone's used it and been really surprised at how effective it was at researching something or creating some text.
bryce mcdonald
But ultimately, AI, especially the new version of large language models, it's really compressing all the information that humans have created that it's found on the internet that its trainers have given it, and it spits it out in novel ways.
unidentified
But we can't forget that humans are always at the source of this information.
bryce mcdonald
Humans actually have some say in how the AI is structured, how it's trained.
unidentified
And so, I think, seeing AI as kind of a sentient being in itself distracts us from the question of who's actually training the AI, which I think is a critical question.
And there are a lot of big companies who are doing this.
Thankfully, I think there's a diverse enough set of companies who are making AI models that we don't have to worry about a mono-company like Google taking over the next 20 years.
To that point, though, isn't it the case that AI is a blanket term?
Because when you're talking about an LLM, that's one kind of AI.
But when you're talking about like full self-driving in a Tesla, that's not an LLM, but that is artificial intelligence.
It's interpreting the world.
It's making decisions based on what it sees, et cetera.
So to use AI as a blanket term is probably an error.
phil labonte
And you can say, you know, that LLMs are just, you know, they just crunch the information that they have, that people are feeding it.
unidentified
But when it comes to something like full self-driving, that kind of AI would have to be used if you were to have a robot that was actually working in the world, right?
Like a humanoid robot, it would have to have something similar to that, as well as an LLM.
phil labonte
Those two AIs are different, aren't they?
unidentified
And how do you make the distinction between the multiple types of AIs and say, well, this one is actually kind of dumb because it's just crunching words that we said, but this one's actually not kind of dumb because it's interpreting the world outside.
And that's so much more information than just a data set.
So on that point, two distinctions to be made there.
One, when Shane is speaking, always as a shaman, Shane is speaking from a cosmic point of view.
joe allen
He's seeing not just the thing, but the thing in relation to the room and the stars and so on and so forth in the metaphysical realm beyond.
unidentified
When Bryce and Nathan are talking about artificial intelligence, they're talking about very specific machine learning processes that are for very specific purposes and also very specific to the culture that you're trying to build.
And I think that both of those are valid perspectives.
joe allen
And I think that people using these digital tools for, at least with the intent of benefiting human beings, at least the ones who count, right?
unidentified
The ones close to you, then we're probably better off, even if I reject it entirely.
joe allen
But so I think this is a distinction to be made, right?
unidentified
And it's one of the problems.
You talk about AI, right?
When Shane's talking about the kind of parasitic or predatory nature of AI itself, it's a more cosmic point of view, like looking at the long-term sort of goal towards which most of these frontier companies are working towards.
And I myself think that you have to balance those things.
joe allen
But to the point about like AI as a term, it's very unfortunate.
unidentified
I mean, you could call it for a long time as machine learning, right?
And AI, when it was coined, like 1956, John McCarthy and taken up by others, Marvin Minsky, what they were talking about is what we now call artificial general intelligence.
joe allen
They were just meaning a machine that can think like a human across all of these different domains.
unidentified
And nothing like that exists, not really.
You could say the seed forms are present, but that is just a dream, and it has been a dream for some 70-odd years.
So say you take that distinction, though, between the LLM, and I hear what you're saying as far as it just compressing information, but it does a lot.
It's more than just a JPEG.
You know, I mean, it lies, it hallucinates.
Yeah, it's capable of all sorts of things similar to human reasoning.
It's not real reasoning.
You could go on all day about it's not really reasoning.
Okay, fine, but it can solve puzzles.
An LLM, which was not really intended for that purpose, is able to solve puzzles, make its way through mazes, can do math.
joe allen
LLMs aren't made to do math.
unidentified
And yet, as you scale them up, they do math better than human beings.
Yeah.
They're making, they're solving complex problems at PhD levels.
Yeah, well, LLMs, not so much.
But yes, they do it better than the average person.
joe allen
They do it better than us.
unidentified
If I understand correctly.
Grok does that, and Grok is an LLM.
Yeah, but it's not, I mean, is it better?
It's not better than a mathematician, you know?
Whereas there are specific AIs that are made for math or coding. Actually, there's an example of a kind of generalist tendency: there was a math Olympiad that OpenAI got the gold in, right?
Their algorithm got the gold in.
And there was also a coding contest, and it was the same algorithm.
If I'm not mistaken, it was trained to do coding, not math.
I could have that flipped.
But one way or the other, it was trained for one purpose.
It was able to excel in both.
So yes, it is.
It is quite different, though, from robotics control or even like image recognition systems, even if they're more integrated now.
Like GPT-5. Before it, there was this very rapid transition from these very narrow systems to, like, Google's Gemini, which was multimodal.
Everybody made a big deal out of it.
You have an LLM that's able to kind of reach out and use image tools and use audio kind of tools, right?
Like to produce voice and all that.
And now it's integrated into basically one system over just a course of a few years.
And I don't think that anytime soon you're going to get the soul of a true genius writer or genius musician or genius painter out of these systems, right?
It's just going to be slop for at least the near term.
joe allen
But you do have to recognize what you're talking about, superhuman intelligence, right?
Super intelligence, as defined by Nick Bostrom, would include something like AlphaZero, or even Deep Blue, which back in 1997 beat Garry Kasparov.
unidentified
So you have to take that into account and wonder, at least.
I think that fantasizing is probably not something to get stuck in, but these fantasies are not only driving the technology, but the technology is living up to some of those fantastic sort of images.
joe allen
So in the case of AlphaZero, AlphaGo was trained on previous Go moves.
unidentified
AlphaZero started from scratch, just the rules, and played against itself until it developed its own strategies and is now basically a stone wall that can't be defeated.
Same with drone piloting: at least the best drone piloting systems outperform humans.
That's kind of a feature, and maybe it's an emerging feature, but it's a feature of AI.
phil labonte
Once it defeats human beings, once it gets better than humans, there's never a time where a human being comes out on top again.
unidentified
And I think, isn't that, if that's the goal for these developers, right?
phil labonte
Wouldn't that kind of specialty, and, I guess, what's the word I'm looking for?
unidentified
Just that kind of capability, isn't that something that you could consider a good thing for humanity, right?
If it's better at finding, to the point that we were talking about earlier about finding cancers, if it's better than humans ever will be, and it always is better, and it gets so good that it doesn't miss cancers, isn't that a positive for humanity?
shane cashman
No, I don't think so.
unidentified
I don't think you can outsource that kind of optimism to this false God and count on that forever.
I think outsourcing so much stuff to the machine will just eliminate humanity.
And at a certain point in that world, there is no more humans.
Well, I mean, so they'll be living in their little 15-minute cities hooked up to the metaverse, and disease might not even be a thing because they'll be eating vaccine slops.
So then your opinion is it's better to have the real world with disease and cancer and everything.
Yeah, it's part of humanity.
Unfortunately, there's risk out there.
And once you start to play God, it goes wrong.
And I don't think that's what it is.
Well, I mean, there's a line where like we are, there's obviously medicine and we're trying to heal people.
But the idea that you can just plug into the machine and it cures you, that's basically just making everyone a transhumanist.
And I don't agree with that.
But if people have the option, is it really making people a transhumanist?
shane cashman
Now, again, you don't have to become transhumanists by default.
phil labonte
There are people that don't go to the doctor currently.
unidentified
There are people that say, I don't want to.
I mean, you've got enclaves of people.
If that's an option and you're not forced to do any of this stuff, isn't it more immoral to prevent people from having the option? If you have the option, isn't that the desired outcome, where people can make the decision themselves?
Yeah, I understand having the option is fine.
shane cashman
I just think in the not-so-distant future, there won't be an option.
unidentified
So you think that it's all just authoritarianism all the way down?
shane cashman
It seems to go that way.
I mean, I think all these people want total control.
They've totally rebranded to be a part of our government right now.
unidentified
They haven't been before.
Now they're front-facing.
So would you say that the Biden administration had the proper approach?
shane cashman
No, because they also gave money to Palantir.
Well, and other nefarious efforts.
unidentified
The Biden administration selected certain companies, but there was no competition.
Then would you think that the Trump administration's outlook, or their approach, is a better approach?
phil labonte
Or do you think that it's just you're just straight up no on it?
shane cashman
It doesn't matter who's in office because they are parasites.
Silicon Valley is parasitic and they take advantage of every administration.
unidentified
They're shapeshifting ghouls who will take advantage.
Like Zuckerberg was all in and totally fine to censor all of us during COVID.
And then all of a sudden he saw Kamala wasn't going to win.
So, hey, now he's shapeshifting to MAGA-lite, you know, with a new haircut and a chain on, roiding.
nathan halberstadt
I think here it's really important to emphasize the distinction between AI as it currently exists and what it could become further down the road.
unidentified
And I mean, at least AI as it exists right now, it still obeys and it follows human prompting, right?
nathan halberstadt
So it is, though.
In some sense, it is rational, but it lacks will and it lacks agency as of today.
unidentified
But we've seen it try to rewrite itself to avoid being reprogrammed.
We've seen it try to, like Phil is saying, developing secret languages to talk.
At a minimum, we could say it follows heuristics that are designed by humans and that humans are still capable of modifying.
And there's probably longer conversations there.
nathan halberstadt
But to what Bryce was saying earlier, there are still things that are unique about humans that AI has not yet replicated.
unidentified
And the question of what's ahead, it's still, it's not obvious that AI will ever fully have a rational will, right?
And it's not obvious that AI will ever actually have reflective consciousness.
We see bits and pieces in certain news stories, maybe.
There's like science fiction about it.
There are people in tech like that.
I was just going to say, should we take them seriously?
nathan halberstadt
So there are transhumanists who, of course, this is their vision for the future, and we should take it seriously.
shane cashman
But should we take the people who are building it seriously, because they do have those concerns, that it might be sentient, it might become Skynet?
unidentified
So I think these are the real concerns, but it's not where AI at least stands today.
nathan halberstadt
So as it stands today, again, humans are still unique in the sense that they have a soul, they have moral responsibility, that they have a rational will.
unidentified
And for now, AI is a lever that humans are using.
And there is a risk that that changes in the future.
But I think it's just really important to make that distinction.
Yeah, but then you also have the Lavender AI for now, which has a 10% error rate killing innocent people in Gaza.
That's the error rate for humans, though.
Probably also bad, but I'd rather, I'd rather have the urgency of humanity, I think, whether they're legal or illegal.
shane cashman
I'd rather have the urgency of humanity behind war than outsourcing the death of war to a robot like it's a video game.
Because then that will just perpetuate the forever wars that we're already in.
unidentified
But then it'll be constant.
Then it'll be constant.
phil labonte
There's no reason to think that any of that ends if humans are in control.
unidentified
There's actually more reason to think that if AI and robots were in control, that it would end.
I think it'll be consolidated.
That power will be consolidated into the AI, and then there's no saying no to it at a certain point.
That vision.
There's no saying no to the United States right now.
Well, despite not agreeing with administrations, and I have many opinions about this current one, you can hopefully sometimes not go to war and then have a politician who says, I'm not going to start that war or join that war, even though we're funding all these wars right now.
phil labonte
You were talking earlier about sickness is part of the human condition.
unidentified
War is part of the human condition too.
phil labonte
War existed before human beings, right?
unidentified
I think humans should be participating in these risks.
I don't think we should be creating things to keep doing it while we're being governed at home in this future by the tech overlords.
So the vision you're putting forward there, and I don't know if you're making the argument or if I'm trying to make the argument to push back on everyone's idea here.
That idea that perhaps algocracy, a system in which the algorithm determines most of the important outcomes and the processes by which we get to them.
That dream, however you want to package it, transhumanism, post-humanism, the sort of EA and long-termism point of view, that in the future there will be more digital beings than human beings, or just the effective altruism, yeah.
Or just kind of the nuts and bolts, like what you're saying.
If you turned war over to the algorithm to decide what target is legitimate, what target is not, what strategies are superior, which strategies are not, that it would fix war.
The problem with these technologies right now is acceleration and offloading of responsibility and offloading of human agency to the machine.
But at the top of that are human beings who are creating the systems, human beings determining how the systems are used.
And so that for now, you know, before, who knows, you know, maybe one day it really will be Tron or whatever, some kind of robot at the top.
joe allen
But for right now, what we know is that people like Mark Zuckerberg run companies that are willing to produce bots that are intended to form emotional bonds with children.
unidentified
And at least up until last week, it was company policy to allow them to seduce them in softcore ways.
Or you have people like Alex Karp who are very reckless in their rhetoric around how their technologies are being used by governments around the world, including Israel, including Ukraine, U.S. obviously, and a number of our allies that accelerate the process of killing other human beings.
And is it a 10% error rate?
joe allen
Is it a 50% error rate?
Is it a 1%?
unidentified
Nobody knows.
joe allen
We just know that that means that they are killing at an ever faster pace.
unidentified
And they have the justification of the machine.
And the human beings, and I don't want to put it all on Palantir, right?
joe allen
You have Lockheed Martin, Boeing, Raytheon, counterparts across the world who are creating similar systems.
unidentified
But these systems, especially in the case of warfare, right now, yes, some parts of it are offloaded to the machine, but at the top of that hierarchy is a human being making these decisions.
And it comes down to whether you trust their judgment in the use of these machines and piloting these machines.
And at present, I would say in the case of Gaza and in the case of the reckless war in Ukraine, which has killed so many Russians and Ukrainians and just devastated the area, I don't trust the humans at the top of this system.
They are predatory, seemingly inherently so.
So then that's actually a question about the humans, though.
It is not a question about AI.
I think all these are questions about humans because in the case in warfare, it's a little bit different.
But in the case of education, in the case of corporate life, right, business life, in the case of social life, it's both about the humans at the top producing, deploying, investing in these machines.
And it's about the humans at the receiving end who by and large are choosing this.
They're like, oh, yeah, this is great.
Let's do my homework.
My research is so much faster.
So it's by and large right now, it's, yes, it's in the hands of humans.
I think that the ideas about sentient AI or willful AI, AI with its own volition, I don't think that those ideas should be discounted because it has more decision-making power, more degrees of freedom than it did before.
Yes, humans are in those.
It's a symbiosis, right?
joe allen
It's like a parasite needs a human host to prompt it.
unidentified
And this parasite needs a human host to prompt it, but it is still, in my view, by and large, parasitic, not mutually beneficial.
And it's parasitic because it is, yes, the machine is the medium, but it's parasitic because the people at the top are parasitizing us or preying on us by way of that machine.
It should, I think, be Zuckerberg who is held over the fire.
Alex Karp should be held over the fire, not the algorithm.
But the algorithm is the vehicle by which they are accomplishing their aims.
And again, I simply don't trust their moral judgment.
Like when they talk about summoning the God, Elon used to say it was summoning the demon, talking about AI.
He's rebranded to saying it's summoning a god, right?
And I think at some point there will not be a human at the top.
And I know we might have disagreements on the technocracy.
And like why I see it as right now is we're consolidating into a technocracy.
And Project Stargate was a big promotion of that, in my opinion.
And the thing with these new technocrats is that they believe we should have monopolies.
Thiel is into monopolies.
shane cashman
They like people like Curtis Yarvin, who's a monarchist.
unidentified
And I think through AI, not too far down the road is when they develop their digital king or their digital monarch that will be at the top at some point and make these rules, which is how you, I think, build a Skynet situation, which sounds ridiculous, but is literally something that even Elon Musk warns about.
shane cashman
He uses Skynet.
unidentified
Why do you think that it sounds ridiculous?
To some people listening, I think they don't think that AI could evolve into a Skynet situation.
Okay, so to that point, just I'm going to go around the room here.
Do you guys think that AGI, that kind of AGI superintelligence, is something that is possible?
phil labonte
Because there are people that say, I don't think it'll ever actually be that smart.
unidentified
They're like, oh, that doesn't have the compute power.
We'll never be able to have that kind of AGI; that intelligence isn't something that can be artificial.
Do you think it is?
I think it's possible.
I don't think it's certain that it will happen.
It's possible.
I certainly don't think it's possible.
And I think a good analogy would be, let's just say you have mechanized like industry that creates things that are very efficient.
Let's just say they create clothes, right?
We have very good machines to build clothes.
But today, the highest quality clothes are all handmade by humans.
And I think there's an analogy.
Only a very small number of people can afford those nice clothes.
Well, but there's still humans at the top who are the best at what they do, and they're using probably the most human types of skills to make those clothes.
bryce mcdonald
And I think if you expand that to an entire economy, the promise of AI is actually that over time, humans are doing higher and higher value work.
unidentified
Not that fewer and fewer humans are working or that there's fewer and fewer humans, but that actually humans are more flourishing than before.
shane cashman
To me, that sounds like when a communist tells me they can create utopia on Earth.
unidentified
Oh, you're going to just sit around and write poetry.
I don't think that's the future with AI.
I don't know about it being sentient.
I think it's very possible.
But what I do believe in is that it will become the thing that we've outsourced all of our life to.
And that at some point we'll just be subservient to that.
shane cashman
Despite it, it won't be.
unidentified
It might not be sentient, but it will oversee everything.
And I think that's a very big consequence for humanity.
shane cashman
I think the consequences outweigh any of the short-term benefits that you guys are talking about.
unidentified
I think to your point, I think if it becomes sufficiently intelligent, it won't matter if it's actually sentient behind the screen or not.
It will seem sentient to us.
And they might even redefine what consciousness means at some point to include the AI.
AGI is a tricky one because what is AGI, right?
Artificial general intelligence.
What does that mean?
It was, I think, originally coined in 1997.
phil labonte
So for the purposes of this discussion, not just artificial general intelligence, but artificial superintelligence.
unidentified
Superintelligence, yeah.
And again, another one that the definition has changed pretty dramatically.
Like right now, the fashionable definition, you hear it from people like Eric Schmidt, the ex-Google CEO, now partnering with the Department of Defense, and he has been for years.
The definition that he's been running with, and Elon Musk also, it's just fashionable now to say that artificial general intelligence is an AI that would be able to do any cognitive task that a human could do.
Presumably, it would do it better because it's going to be faster.
It's going to have a wider array of data to draw upon, all these different things.
But that's the general AGI definition that's fashionable.
joe allen
Now, before, when, you know, let's set the 1997 definition aside, before it really started with Shane Legg of Google and was popularized by Ben Goertzel roughly 2008 or so.
unidentified
And for them, it was more about the way in which it functioned.
They wanted to distinguish artificial narrow intelligence, so a chess bot, a goon bot, a war bot, any of those bots, from something that could do all of those, right?
It could play chess and it could kill and it could have you gooning all day long.
And general intelligence.
And it could be accomplished either by building a system that was sufficiently complex that this general cognition would emerge.
And I think that's what Sam Altman and Elon Musk are betting on with scaling up LLMs and making them more complex.
Or it could be like the Ben Goertzel vision, where you have just a ton of narrow AIs that kind of operate in harmony.
And that is now what we call multimodal.
Superintelligence, Nick Bostrom really put the stamp on it in 2014 with his book, Superintelligence.
And for him, it could be either a general system or narrow systems, any system that excels beyond humans, with ultimately the danger then being that you lose control of it.
Now, Eric Schmidt, Elon Musk, people like this, are going with super intelligence just means it's smarter than all humans on Earth.
I'm not exactly sure what that means, but that's the definition.
Whatever you're talking about, none of that shit exists, right?
All that shit's a dream.
But like many religious dreams, and I think that this is ultimately a religious conversation when you're talking about AGI and ASI, it tends to bleed into reality.
And then with this, you're talking about AGI, like a system that can generalize across domains, concepts.
It's wild to see the rapid advance from the implementation of Transformers in 2017, and the real breakthroughs in LLMs at OpenAI, to the multimodal systems that became much more popular just a few years later, to more integrated systems.
So before an LLM meant a chatbot.
Now an LLM means a system that can recognize images, produce images, produce voice.
It is more general.
It's not general in its cognition so much, except there are certain seemingly emergent properties that are coming out.
Like we were just talking about a moment ago.
LLMs doing math better and better and better.
joe allen
LLMs solving puzzles and mazes better and better and better.
unidentified
LLMs, in some sense, and I hear this a lot actually, people say, oh, I was working on this problem, and I turned to the LLM and it solved it.
joe allen
And, you know, I have a good friend.
unidentified
He's a lawyer.
And he was doing a case analysis.
And he had done it himself.
He had already gone through all of it, but he wanted to see if ChatGPT could do it.
joe allen
It was the 4.5.
unidentified
And he asked ChatGPT, and it came up with basically the same thing that he had spent many, many hours on in just a few minutes.
joe allen
So it's like the AGI that they're talking about, like what Sam Altman seems to have been pitching before the big GPT-5 flop, is something that is like more like a human than a machine.
unidentified
And that doesn't exist, but it is the case that with the technology, you take that big cosmic vision, or all those cosmic visions, of what this godlike AI would be.
It can't be denied that the actual AI in reality is slowly but surely, faster than I'm comfortable with, approximating that dream.
And unless it hits a wall, unless it hits a serious S curve and just flattens out, it's at least worth keeping in mind that it's not a stable system.
Anything, I think, could happen.
So I'm into the idea of it being a tool, and it can be a good collaborator for people.
You know, when talking to AI, if you want to help edit something, I understand that that is an awesome thing.
But when you talk about like the narrow path you're creating or trying to create with AI, what does that entail?
shane cashman
Are we talking about trying to implement regulations from the top down?
unidentified
Or is it something you're doing within your company?
Yeah, so maybe to start would actually be a historical analog here.
The railroad in the 19th century in Europe, I think, is actually really interesting.
We could also talk about social media and the internet as other potential analogs.
nathan halberstadt
But the reason why I find it particularly interesting is we sort of underestimate how transformative it was for the average person, let's say, in Britain.
unidentified
They basically went from being in these fairly isolated towns with some capacity to travel between them, either on horseback or by carriage or something like that.
But the railroad enabled much faster travel for humans.
It was actually the fastest they'd ever traveled, right, when they got on the railroad.
And the railroad stations went into these people's towns and brought random people, like large numbers of people into their towns and also industry through their towns as well.
And in the early days of the railroad being built, it was actually extremely contentious.
So the wealthy aristocracy basically opposed it on the grounds of just like not wanting railroad tracks going through their land that they felt like they had ownership over.
And then on the other end, more sort of working-class types.
I mean, a lot of times the station would go right in the middle of their town where they lived, and it was unbelievably loud.
nathan halberstadt
It was bringing all these people they didn't know right into the middle of where they lived.
unidentified
It made them feel unsafe.
And there were a number of high-profile accidents where the train would just come flying through and just smash into something and lots of people were injured.
And the reason why I bring this up is basically the risks were actually quite serious.
It was actually extremely disruptive to the day-to-day life of many people.
And there was resistance.
nathan halberstadt
So there are several interesting examples where aristocratic types would basically oppose the railroad track going in.
unidentified
Again, more of the working-class type people: in multiple instances, when a new station was opening, they would stand and block the way of the train as it was coming in.
One interesting side note I would put here is that this was not the case in America when the railroad went in.
The reasoning there being, I think in America, there's this manifest destiny thing that was going on.
And we also, in America, we sort of wander outside and stare at the moon, I think, a little bit more.
The United States, like it was a much more, a much bigger area to cover.
Exactly.
nathan halberstadt
So it was less disruptive.
It was more about forging something new versus disrupting something that already existed, right?
unidentified
And so this is why I think the railroad in Europe is a pretty interesting analog.
So one other thing that's worth noting is that the media and doctors were actually circulating large numbers of ostensibly fake stories about trains.
They talked a lot about madness: somebody would stab somebody on a train, and the doctors and the media would say that the jogging of his brain and the rattling of the train had made him go insane and caused the stabbing.
nathan halberstadt
But of course it's just humans get in fights and it was just a fight that happened on a train and somebody got stabbed.
unidentified
And similarly, a lot of the headlines actually described the trains as shrieking and demonic, and there was a lot of this sort of language initially used.
And so it's a classic Luddites-versus-accelerationists type dynamic.
I bet you're a big fan of the Pessimists Archive, huh?
I'm actually not familiar with it.
It's a great compendium of all these sorts of stories.
You know, the radio is turning your daughters into whores.
You know, the rifle will allow the Indian to kill all the white men.
You know, these sorts of things.
I think these exaggerated examples from the past, obviously we see it now, right?
joe allen
Like AI is going to kill us by 2027.
unidentified
We'll all die, right?
Although that's not what the 2027 paper said.
But anyway, that sort of idea, we're all going to die.
That would actually be quite relieving.
A lot of the anxiety would be relieved, and we would know then what was up.
We would not only know what was up with AI, we would know what was going on with the afterlife, and we could all do it together rather than just one at a time, which is really not fair to get singled out.
You and Pierre Teilhard should grab a drink sometime.
Is it better for humanity?
But that's probably not going to happen.
joe allen
And so it's very similar to climate change in the sense that climate change sucks up all the oxygen in the room.
unidentified
And the problems of species loss, the problems of habitat loss, and the problems of pollution kind of lose a lot of the public attention that they should have because these are things that you can see very clearly and you can measure very clearly.
Climate change models, let's just say that's a little bit more hypothetical.
And in the same sense, the major problems, the immediate problems of AI, I think, are going to be, and it's already evident, the psychological and social impacts.
What does it mean when human beings begin to become companions with machine teachers?
joe allen
You look to the AI as the highest authority on what is and isn't real.
unidentified
And you train children in this global village of the damned to become these sort of like zombified human AI symbiotes.
And then beyond the social ramifications of that, and having, like, your AI trained on your grandma, right?
joe allen
And everything's given, like, your grandma's there, like, you know, bitching about the mashed potatoes or whatever, you know, on a screen, on an iPad.
unidentified
You know, these sorts of things, how far will it go?
Don't know.
How far did opium go and fentanyl and OxyContin?
How far did porn go?
Pretty far.
Beyond that, though, you've got the economic concerns, and it's all really up in the air, right?
Is AI going to make your company more competitive?
Is AI going to replace all your workers?
I mean, you look at, was it Klarna, and they made this big announcement: we're replacing all our workers with AI in customer service.
And then they were like, oh, actually, we're hiring again, because it didn't work out so hot.
Maybe it'll be like that, or maybe it'll be more like companies like Salesforce or Anthropic, where the coders really are being replaced.
joe allen
The low-level coders are being replaced.
But these economic concerns, and I think for you guys, especially for you, Bryce, the economic angle is clearly you take it very seriously.
unidentified
And I read the Volus mission statement, and it doesn't include any of that.
joe allen
I mean, it's basically a subtle rejection of the whole transhumanist vision, but an embrace of these technologies in their more, I guess, humble forms, you know, low-level forms and narrow forms.
unidentified
And it makes sense, but I really think ultimately, though, that long-term vision, because these are the frontier companies, right?
And they are driven by the long-term vision.
They got all the money and they've got the government support.
The carousel of the federal government has now given favor to Meta, given favor to Palantir, given favor to the whole A16Z stack.
Their vision of the future is going to make a big difference kind of regardless of the technology.
Like people have been able to hypnotize whole tribes by waving totems around, like ooga booga, you know, and it's like, if you can do that, and you've got a totem that can actually talk and do math, you're talking about a religious revolution beyond anything that's been seen before.
And I think that all of those things, like all of those problems are just beyond the scope of like the nuts and bolts day to day.
Like, does my AI give me a nice streamlined PowerPoint for my presentation?
Right.
Like I understand there's been fear-mongering throughout the ages, whether it's Phil and I talk about the synthesizer sometimes.
You talk about the trains.
The thing I think that sets AI apart is that it is a vector for almost everything about humanity.
shane cashman
You know, it's about education, it's about children and safety.
unidentified
It's war.
It's going to be about expression, with regulations where they're trying to say you can't do deepfakes and whatnot.
shane cashman
So it really, everything kind of falls into the black hole of AI and becomes a much bigger existential crisis.
unidentified
Although I understand the existential crisis that the Luddites, who I agree with, because they were afraid, they weren't anti-tech.
They were more anti being replaced by mass automation.
So they were still using the technology of their day, the OG Luddites, but they didn't like that factories were being built and filled with machines that took everyone's jobs, right?
And that is one, just one part of what AI will be doing.
I think the point of that story was actually to highlight basically that there were serious risks and things went wrong and humans got on top and figured it out and solved it.
They moved more of the stations further out from the city.
They put in place better safety measures on the trains, et cetera, et cetera.
You're absolutely right that AI feels like it is more transformative and the risk profile is potentially higher.
I don't think we're quite there yet, but it's definitely in the future, it could get significantly more risky.
nathan halberstadt
I would say one risk that exists today, and this was something that I ran into directly, is, for example, the way that AI can allow people to mass-scam Americans, right?
unidentified
So I got this email from a guy named Akshat.
A king in Ethiopia?
No, he's a man in India.
Okay.
nathan halberstadt
And he was sending 40,000 emails a day, but they're extremely well tailored, right?
unidentified
In the email, he mentioned portfolio companies that we work with, names that I recognized.
It was a very well-tailored email, and it was generated using an LLM.
nathan halberstadt
He's blasting out tens of thousands of these.
unidentified
And essentially, in this case, Akshat was offering to offshore our labor.
So, our associates at New Founding, for a quarter of what we're currently paying them, right?
And he's able to do that.
And it's not just using MailChimp, right?
nathan halberstadt
He's using, he's basically collecting data in order to produce the right sort of targeted email.
unidentified
And I mean, essentially, it's a form of scamming that's using AI in order to be more effective.
nathan halberstadt
You can think about how this could apply to like if that was targeted at your grandmother or something like that.
unidentified
And so I think that's like very practically today.
It's about blocking people like that and making sure that AI very practically right here and right now isn't used to harm Americans while we continue to monitor these further out risks.
But it's also important not to confuse the two.
And to that point, I think when we treat AI as some autonomous thing, some technology like a train steamrolling down the tracks where we either get out of the way or get run over, I think that's the wrong way to think about AI, because it treats it as something against which we're powerless.
We're completely passive.
And I think there's a lot of doomerists out there who want to talk about the deep state and the globalists and we're just a tool.
We're a cog in their machine.
And that takes away the agency that actually is what makes humans different from AI.
bryce mcdonald
So I guess I would actually want to add maybe spin a positive vision for what AI could be.
unidentified
And I think part of the way to solve the AI problem to define that narrow path for the future is actually for people to start building things using AI appropriately that actually make America better.
And I think what does that look like?
There's kind of the fear that AI actually enhances this military-industrial complex.
That's fair.
bryce mcdonald
I think AI is actually very different from a lot of technologies that have come around, like the airplane, like the computer, like the internet.
unidentified
All these have been started as military technologies.
And their natural bent actually is as military tools.
And then they trickle down to large global enterprises and then finally to consumer applications, right?
bryce mcdonald
But AI, interestingly, its first application, at least when we're talking about large language models, is actually for individual people to help make their lives better, to reduce monotonous work.
unidentified
And I think the way I see it is that AI is going the other way around, that we can actually use AI effectively in small businesses, where humans who are really high agency, virtuous, good leaders can actually get more done and they can have more success with AI because they're able to get higher leverage.
I think that's part of the trick: get you so addicted to AI, allow the machine into you, so it's harder to divorce it from you and easier to control you.
If I could just add an old-man sperg-out for a second.
AI actually, in the early, early conceptual phase, was deeply tied to the military.
So Alan Turing, for instance, Marvin Minsky, the pioneers, deeply tied to the military.
And maybe this isn't AI specifically, or at least it was more cybernetics.
Norbert Wiener, who aside from having one of the funniest names ever, was a military man, and he was writing and thinking about cybernetic systems and human-machine integration, human-machine symbiosis, for the purposes of military dominance.
And so it is, even though you're right, the LLM revolution comes out of Google, really, and taken up by OpenAI, and largely civilian.
But the idea of thinking machines and the development of various algorithms, deeply tied to military institutions and military concerns.
And I want it to be what you're saying.
I want that idea of like it can just be a great collaborator for the individual, which should be great.
But it's something all these things are always hijacked.
And the people who are either building this stuff right now, most of them, not all of them, and the military industrial complex, like they have no ethics.
So a lot of those things will eventually be, they already are being, turned against us.
Can I give, I'll give a very practical example.
And I think that, again, that's a serious, that is a serious risk and we should continue to monitor that.
So we have a portfolio company.
It's called Upsmith.
And specifically, they work with like plumbing, HVAC, these sorts of companies and individual business owners.
The average company size they work with is five people.
And there's around 300,000 to 500,000 of these sorts of companies in America, right?
So it's like working middle America tradesmen type individuals.
As it stands today, when they want to book to go to somebody's house to do some repairs, they actually have to outsource most of this to basically these companies that take care of all of the overhead.
nathan halberstadt
A lot of it's actually offshore labor, or at a minimum, it's these like, again, extremely soulless, sort of bureaucratic type jobs.
unidentified
And what happens also is that these plumbers lose a lot of business.
They often will lose up to half of the people who call them to book, because the call gets missed or dropped or whatever.
nathan halberstadt
They're just doing work and they miss the call.
unidentified
They don't get back to them.
It's not booked.
And basically, American skilled tradesmen are missing out on a lot of value or they are capturing the value and forking over a huge amount to basically Indians in India who are running some phone bank or whatever.
So one of our portfolio companies that's called Upsmith, right now what they do is they basically have an agentic AI tool that just takes care of bookings for these plumbers.
So if you call or text or anything like that, it'll basically just reply back and it'll automatically take care of calendaring and booking.
Just the whole exchange will happen back and forth and it'll just send the plumber to the next house.
nathan halberstadt
Now, I think from my perspective, this is something that very tangibly today can help a plumber basically help him make more money in the next couple of years.
unidentified
And that helps him, if he doesn't own a house yet, he can buy a house.
He can get married.
He can do these sorts of things.
And I see that as very practically positive for Americans.
And it's actually shifting, again, sort of economic opportunity away from bureaucrats, away from offshore to the house of a guy in Philadelphia or whatever.
And so at New Founding, at least, we're interested in finding those opportunities and backing the ones where it's very clear that this is going to help Americans.
And I think hopefully that helps, that helps to give an example of the sorts of ways that AI can more practically be of benefit, despite the presence of the risk.
I'm totally against a seven-year-old getting onto some artificial intelligence sort of doom loop where they're, et cetera, et cetera, as we've been talking about.
Very weird things can happen.
And so there have to be guardrails and such.
But do I think that plumber should have to rely on Indians to book his calls?
Oh, yeah.
I'm with you.
I don't know.
That's one of those short-term benefits I can see being a positive from AI for people.
I think that's great.
I guess I'm more just concerned about down the future, how it's going to work, and maybe how using AI so much and getting businesses and people basically addicted to it or relying on it so heavily.
In this case, it gets integrated into the business.
And then something happens down the road, they try to pass regulations, and it's going to be a big thing where people will, not riot, but revolt against these regulations, because they've basically been married to AI in their companies. And I'm skeptical on regulations, on how that works.
And I do see a future where we're going to have these big conversations, these big fights in politics about what we're allowed to do with AI because some people have been abusing it, like the scammers you're saying.
And it's going to be like another 2A debate, but with AI.
I don't know.
How do you guys feel about regulation?
How do you regulate that?
joe allen
I mean, I think it's at least a step.
unidentified
Most of the problems with artificial intelligence are going to be dealt with positively on the level of just human choice, personal choice.
Most of the deep problems that we see right now, everything from the AI psychosis, this phenomenon of people kind of disappearing into their own brainstem, these are things that human beings chose to do, by and large.
So that's, I think, for me, the most important thing is to at least make people aware and activate some sense of will and agency to put up cultural barriers so your kid doesn't become an AI symbiote.
Well, that's something that I think that people have learned just from social media.
Again, I consider social media algorithms as kind of infant AI as it is.
And so people are seeing the negative consequences and seeing bad things that can happen for their kids, or at least the smart people are noticing it.
And they're not allowing their children to have screens all the time.
It is rather disheartening when you go out to dinner or whatever and you see families that have a kid sitting there with a screen, and it's like, well, they say that's the only way he'll eat or whatever.
phil labonte
That's a really terrible, terrible development.
unidentified
And I think that there needs to be more emphasis put on informing people of how bad that is for children.
But that kind of, like you're saying, that kind of agency, that kind of discretion by parents is what really will prevent people from getting into this situation in the first place.
I don't think the majority of people that are having problems with social media, whether they're problems, you know, delineating between reality and what's actually social media or making a distinction between online friends and real world friends, I don't think they're people that are actually well-adjusted adults.
They tend to be young people, you know, people younger than me.
I'm an old guy.
I'm 50, right?
phil labonte
So I was one of the kids that was like, you know, be home before the streetlights come on, but otherwise get out of the house.
unidentified
And so I had a lot of learning how to function in the real world by myself as a kid.
And I think that that kind of thing is something that's important for kids.
And I think that parents need to do that kind of stuff as opposed to just handing them the screens and stuff.
Well, to that point, though, if I could continue, the personal choice is the first bulwark, right?
And it is, people are taking an active role; they're not just being told, this is the future, you need to turn your kid into an AI symbiote, and then doing it, right?
Tons of people are giving their kids screen-free and phone-free childhoods.
There are laws being passed in certain countries and institutional policies being put in place in schools and other institutions in the country.
You can't just sit around on your phone and disappear into your own brainstem.
That's good.
But when it comes to military AI, when it comes to Dr. Oz, the inestimable Dr. Oz, who was shilling chips for your palm on his show to his credulous audience just a few years ago, now is saying that in just a few years you will be considered negligent as a doctor if you do not consult AI in the diagnosis or prescription process.
And that goes well beyond just personal choice.
joe allen
That's now an institutional policy or perhaps a law.
unidentified
And so that even though personal choice is one of the most important things we have, right?
joe allen
Just the ability to say no, in many instances, you can't say no and won't be able to say no.
And so the laws are going to be important.
unidentified
And I think that right now at the state level, if you look at the states that are most inclined to legislate, California, for instance, and you look at the 18 laws that they've got in place, it's things like you can't make deepfakes of people, or you can't use someone's image against their will, mostly tailored to actors and whatnot, right?
joe allen
You have to get permission.
unidentified
You can't make child porn with AI.
You can't defraud people.
All these, some of them overlap.
But these 18 laws, people talk about them all the time: well, if you have states making all these laws, you'll gum up the whole industry.
joe allen
America will fall behind China.
unidentified
They'll have more goon bots than us.
And I don't buy it.
I don't buy it at all.
I think that if you look at just the most heavily regulated state, California, it's reasonable.
And the 19th law, SB 1047, I finally remembered it, would basically hold AI companies accountable for damages done, just like you would with an automobile company or a drug company.
Well, that one got killed.
And I think it's a very reasonable law.
Or if you look at Illinois, with Pritzker.
I can't stand that guy.
And Illinois politics are super Democratic and very corrupt.
And yet they had the wherewithal to pass a law saying that you can't have a licensed therapist bot.
You can't have people sitting around talking to bots and charge money for people talking to your licensed bot.
And as a licensed therapist, you can't just hand over your client to a bot legally.
And that's a very reasonable law.
So I think above just personal choice, these laws, that regulation will be very important.
And they're going to be different from place to place.
joe allen
And it'll get kind of like with abortion laws.
unidentified
We'll get to see ultimately who was right.
Does AI really turn you into a superhuman and give you superpowers?
Or does it make you into an atrophied schlub?
So there are two points that I want to talk about with what you just said.
First of all, I'm probably more skeptical of therapists than I am of any AI and the concept of therapy in general.
I think it's literally just pay me money to talk to me.
So I don't, honestly, I don't think that an AI would do worse than a therapist because I think therapists are the pinnacle.
What if the AI is just the compression of all of the worst therapists?
Well, I mean, suicide rates spike.
Therapy, in and of itself, in my estimation, at least for men, is like the pinnacle of snake oil.
phil labonte
So there's that.
unidentified
But second of all, you said that, you know, you were talking about whether or not there would be mandates for using AI for diagnosis.
phil labonte
Is there any other realm in which the process for a diagnosis is actually something people care about, for the most part?
Like aside from, you know, if you're dealing with x-rays and how that would affect your body or something like that, is there any other place where people are like, I'm concerned about how you come to the conclusion that you do?
unidentified
Or is the really important part just this?
phil labonte
Like, are you getting the right diagnosis?
unidentified
Because if AI can actually make sure that your diagnosis is correct 95% of the time, as opposed to, say, 70% of the time, because humans are notoriously bad at actually diagnosing what's wrong with someone.
And the more strain there is on the healthcare field, the fewer doctors that are actually well trained and stuff, the fewer you have, the worse the results actually are going to become.
So would it really be a problem if the government were to say, look, you have to at least run it through the AI and see what the AI says?
You don't have to rely on it.
You can't mandate it.
I don't think you can mandate it.
I think that's a problem.
shane cashman
To tell a private physician that they have to do that.
unidentified
Well, I mean, there's all kinds of mandates in the healthcare field.
Why is that different?
But when it comes to using an AI to mandate that, then what AI is acceptable, I just don't see how that works.
phil labonte
Well, I mean, again, the results would dictate, right?
unidentified
Like if you've got an AI that's got a 99% success rate.
Right.
And if you've got one algorithm by one company that actually has a 99% success rate, why wouldn't you use that?
phil labonte
Or why wouldn't you?
unidentified
Why would you have a problem with that?
I have no problem with them using it, but I just have a problem with the mandate.
Yeah, and you do, I mean, you have this claim, right?
Like, for instance, a lot of the studies, comparative studies with radiology, how well is the AI able to detect cancer?
And usually it's these like tiny, tiny, tiny, tiny, tiny little tumors, right?
That the radiologist can't detect just with his eyes.
But that's very specific to that field.
And there's also the issue.
So, I mean, we also know that while you don't want to necessarily bank on your immune system, that cancerous cells and even small tumors are forming in the body all the time.
And the immune system is constantly battling them back.
And so you have a lot of more kind of second order effects that can come out of that.
If you have an AI that finds every tiny little aberration and the next thing you know, somebody's getting run through some devastating chemotherapy on the basis of this AI.
It's much more complicated than saying the AI is 99% better than a human.
There's all these other elements that go into it.
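To put numbers on that base-rate worry: a headline accuracy figure alone doesn't tell you what a positive flag means when the condition is rare. Here is a minimal sketch in Python, with purely hypothetical figures (99% sensitivity, 99% specificity, 0.5% prevalence), none of them drawn from the conversation:

# Worked base-rate example: why a "99% accurate" scanner can still
# flood doctors with false alarms when the condition is rare.
# All numbers here are hypothetical, not taken from the conversation.
prevalence = 0.005   # assumed: 0.5% of scans contain a true tumor
sensitivity = 0.99   # assumed: the AI flags 99% of true tumors
specificity = 0.99   # assumed: the AI clears 99% of healthy scans

true_pos = prevalence * sensitivity                # flagged and really sick
false_pos = (1 - prevalence) * (1 - specificity)   # flagged but healthy

# Positive predictive value: chance a flagged scan is a real tumor
ppv = true_pos / (true_pos + false_pos)
print(f"PPV: {ppv:.1%}")  # ~33.2%: two of every three flags are false

Under those assumed numbers, only about a third of flagged scans would contain a real tumor, which is exactly the overdiagnosis and unnecessary-chemotherapy concern raised here.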
And then in diagnosis, I mean, we're not talking about necessarily just visual recognition.
When we're talking about doctors turning to AI for a diagnosis or to come up with a therapy, we're largely talking about LLMs.
And a lot of them are very specific, tailored LLMs that are trained on medical studies.
And the doctor would then turn, he would have his own opinion, she would have her own opinions, and then turn to the LLM for guidance as an expert, right?
If you're a general practitioner, you defer to experts on various things to come to the solution.
But real quick, so the question I think is not going to be answered because a company says our AI is 99% accurate or 90% accurate or 50%.
Downstream, looking at the actual outcomes of patients, to really know a statistical success rate for an AI would take enormous amounts of study, right, and meta-studies.
joe allen
And we don't have that.
Right now, we just have advertising.
unidentified
And so if we don't have the studies in place, like there was this whole thing that happened in 2021, late 2020, where there was a big medical crisis.
And without any real rigorous testing or studies, suddenly the advertising won the day.
And suddenly you had soft mandates in America and hard mandates elsewhere.
And we still don't really have a clear statistical understanding of what happened and what damage was done.
phil labonte
Rice, you were going to say something.
unidentified
Isn't it fair that we all want to bring the humans back, for humans to be in charge, right?
So in the case of the doctor, a doctor who's actively ignoring very important relevant industry standard tools to make a diagnosis, we might call that negligence.
And I think that would be fair.
But the responsibility should fall on the doctor who's making the bad diagnosis in this case.
Just like the responsibility for a business who's doing evil practices because the AI told it to, okay, that should probably fall on the business because they're making the decision.
bryce mcdonald
If the AI, on the other hand, is a consumer product and it's causing children or adults to have psychosis, well, maybe the AI company should be responsible.
And so I think the worry with regulation is that you're mandating things that have unintended consequences.
You're mandating things that aren't well proven, because this is how you're supposed to do it or because this is your ideology.
But I think that's the concern with regulation, even regulation of AI.
unidentified
I think what we need to do is bring back humans to be in charge of the AI in a way that humans have been swept aside in a lot of ways way before AI for the last few decades.
And like, how do you define negligent when it comes to it?
Because if it were 2021, like Joe's saying, the AI would have said, I'm sure, go get every shot that you're told to get, you know, because it was built by people who buy into it.
joe allen
Vitamin A is what you're talking about.
nathan halberstadt
I think in this case, then, right, what Bryce would argue is that the doctor should use the AI, but the doctor does not have to listen to the AI.
unidentified
So the doctor could then evaluate maybe what multiple different AI tools say, his own individual judgment, some of his own tests that he did, his relationship with the patient, and then make a decision.
And I think that's totally fine, because through AI, and this is one of the short-term benefits when it comes to medicine, this doctor could potentially have access to so much of your family history to make a way better decision for you, which can be awesome.
But like so many doctors.
Scary but awesome.
shane cashman
Scary.
I mean, yeah, because then China's going to hack it and create a bioweapon personalized just for you, which they're probably already doing with their mosquitoes.
phil labonte
But anyway.
unidentified
It's literally happening.
shane cashman
It's happening.
unidentified
The point that you make about 99%, let's say it happened, right?
And the studies did show 99% of the time, the AI gets it right.
And 99% of the time, the AI robot gets the surgery right.
And 99% of the time, the AI teacher is better than the human teacher.
99% of the time, if you're looking for a mate, the AI is the one to ask.
Algorithmic eugenics is the way to go.
So on and so forth.
Is God real?
Does God love me?
99% of the time, the AI is going to tell you the correct answer.
I wonder then, you know, Cashman, if we can defer for a moment to the wisdom of Shane's shamanic visions, and I think that we should, all practical things aside, all due respect to the practical matters, what we're talking about is total replacement by machines, total replacement.
It seems like that's just inevitable.
Like, I understand the short-term benefits of helping the middle class because the middle class right now has been genocided, in my opinion.
Like, the middle class is suffering, whatever's left of it, and it's terrible.
And any way we can help them is great.
But I think the difference in the conversation would be you guys see a positive vision because there's so many short-term benefits.
And we're seeing, of course, but down the road, probably not too far down the road, there is apocalyptic consequences that are going to be born out of it.
And it's not like we're just creating this out of thin air.
We're listening to these people talk, like Altman talking about how we have to rewrite the social contract.
That's scary stuff.
shane cashman
You know, these guys who purchased their children, now they can grow them in a robot that China's creating, you know, or Elon talking about artificial womb factories where they can have 30,000 wombs, you know, where your baby can grow.
unidentified
These things are so antithetical to humanity, and I don't think that is in the distant future, because we have things like Orchid, this IVF company that does genetic testing.
And I understand that the positives to genetic testing, although I disagree with people saying, well, then I'm not going to have that baby.
I'm going to abort this baby.
That's disgusting to me.
But people are doing that now.
But what Orchid is doing is saying that we want this to be the common thing amongst people.
That is how we should be creating children: through eugenics, through this Brave New World, IVF, high-tech society.
And it's kind of like what happened with cesareans.
shane cashman
Caesareans happen all the time now because it's easy for the doctors.
unidentified
You know, it's easier to schedule.
But we're robbing like the miracle of birth.
Obviously, sometimes it just doesn't happen right and you need extra help in the hospital.
Totally understand it.
But we've made this stuff the common, you know, and same will go for AI.
Despite the short-term benefits, the people in charge are using it for nefarious reasons against us, against everyone else.
And will it replace the people in charge?
I even think that is the case, even though someone like Marc Andreessen will say venture capitalists are fine.
shane cashman
I think he's wrong.
unidentified
I think he's wrong about a lot of things.
And he's certainly wrong there.
I think they can replace anything at a certain point, even things I love.
And then there's a whole discussion about whether people care.
You know, you're talking about writers and AI pumping out slop.
shane cashman
I don't like it.
I'm sure you don't like it.
unidentified
Detested.
But there's a ton of people who don't care.
You know, you can make AI music and people are fine with that.
I don't like it.
You know, I, and I can, I can appreciate like, wow, that sounds good.
It's crazy.
But you are removing the human because you can just put in a prompt and then you get a whole song.
And then I hate that.
We were primed by Katy Perry and Kesha.
What happened?
Well, I think the issue that you're highlighting is that transhumanists really are the problem.
And it's not just AI, right?
Because you're going into these other domains of technology where it's also a problem.
And so once again, I think what will keep us grounded is appreciation of what makes humans unique, understanding humans as they actually are, and making sure that whatever ways that AI technology is being used sort of reflects the natural order of the world and of how humans are actually created.
And so, you know, to whatever extent AI is dominated or is controlled by transhumanists, that's a problem.
And I think we share your concerns.
I totally agree, but I don't think it's just unique to transhumanists.
shane cashman
They're the ones creating it and they're the ones with these insane visions of the future.
unidentified
But it's, you know, this idea is in everyone now.
You know, everyone is kind of transhumanist adjacent, especially in power.
Well, there's certainly a lot of people in power who have these visions, fantasies of transhumanism.
But there's also maybe a large percentage of people who actually just don't care whether their children grow up with screens, right?
That's their method of parenting.
And I think the key is actually to take a collaborative approach to AI and other technology rather than an oppositional approach of standing up on the train track and saying, stop.
That's the exact image I had in my head.
It's like, if the only thing that you're saying or doing is to do what conservatives do, just standing there and saying no, to progressives, no, stop.
You're going to get bowled over.
Yeah.
By the way, that's not, just to be clear, that's not my position.
I know you weren't singling me out, even though I saw that glint in your eye.
I wouldn't stand in front of the train.
I would be more likely to find other strategies that didn't involve me getting run over.
joe allen
But my argument is basically similar to the conservative argument against porn, right?
unidentified
And similar to the conservative argument, isn't it?
joe allen
It depends.
unidentified
I mean, you have the Ron Pauls, who I would consider to be profoundly conservative in like a Burkean sense, but he wouldn't say that porn should be illegal or that drugs should be illegal or that guns should be illegal.
But what you have to do, I think, and this is why I appreciate guys like Nathan and Bryce.
And this is intuitive.
Correct me if I'm wrong, but I get a certain sort of provincial or tribal sense from you guys that you are kind of conservative in the classical sense.
The people closest to you are more important than like all of humanity, big H, because they're the people closest to you.
And I think that should be the scope for most people, unless you're the president making irresponsible decisions about artificial intelligence or the CEO of a corporation making vampiric and predatory decisions about artificial intelligence.
joe allen
Like from our standpoint, I think that it's not like this cosmic thing where if AI succeeds, that means everybody's going to be a trode monkey.
unidentified
Or if AI falters, then, you know, we're all just going to go back to the woods.
It's going to, like, so many different lifestyles already exist and cultures already exist.
There's going to be huge pockets of homogenization due to technology, but there's also going to be like huge pockets of individuation among people, individual people, and differentiation among cultures.
So I have actually a lot of faith.
joe allen
You guys are going to be okay.
unidentified
I think you'll be just fine.
joe allen
You're going to put those cultural barriers in place.
unidentified
And that is, I think, the value of conservatism, of being suspicious of change, because very often any push for change isn't necessarily going to be changed for your benefit or your kids' benefit or your community's benefit.
The change, this radical change, is more likely to benefit the people pushing for it.
It may be mutual, but in the case of porn, drugs, maybe even the trains, if you really care about, say, the bison, or maybe the entire technological system, if you don't want, you know, trash islands in the Pacific, microplastic in your balls, you know, dead bugs everywhere, and black rhinos shuffled off into heaven.
These sorts of things, you know, it's ultimately the conservative or the anti-tech or the quasi-Luddite position, if employed properly, simply means I am going to remain human despite the advertising and despite whatever new gadget comes my way.
Yeah, and my appeal to the people is like, I don't want to stand in front of it either.
I don't think stopping this stuff is possible.
You know, it's like the war on drugs or the war on guns, the war on terror.
It never works.
But like what we're saying, and I think we're agreeing on is it's going to have to happen from the bottom up and ethics and people.
And I don't, that's going to be really tough because people are very flawed, no matter if they're in power or not.
shane cashman
That's just how we are.
unidentified
But, you know, I think that is a possibility.
But I do think, like we were talking about last night, Ted Kaczynski made some pretty good points in 1995 about the Industrial Revolution and its consequences for humanity.
He was very wrong about what the mail is for, though.
Yeah.
Yeah.
I'll say that for YouTube for sure.
shane cashman
Phil's right.
unidentified
But I think he saw a lot of the issues we're in now, and we're just now dealing with them.
I mean, people will now go on YouTube and look up his manifesto and be like, wow, this guy was a genius.
shane cashman
He was a prophet.
He was a time traveler.
unidentified
He might not like the time traveler part, but he doesn't get a say in that.
shane cashman
But I think that is the future.
And I think he saw what we see, especially in leftists, but it's not just unique to leftists, is that there's this need to control and destroy at all costs.
unidentified
That is human nature.
That's something we're gonna have to contend with for sure every time there's a new advancement.
So it's not gonna go away.
And I also don't agree with regulations.
You know, I don't know how you regulate this.
I understand I want to make sure no one can make child porn with AI or at all and stuff like that.
But getting rid of certain things that you can do as an expression, whether people like it or not, because, like, Melania helped pass that law about deepfakes and whatnot.
And I think child porn is a part of that.
But then it's just deep fakes of people.
I think you should be able to do that stuff.
I don't really like using AI, by the way.
In this scenario, though, if I could successfully impersonate you and go to the bank or successfully go to your bedroom, right?
Like these things, you would consider that.
Yeah, you would consider that to be a crime.
You can't go to my bedroom as a deep fake.
So with deep fakes, basically it's the line between what is caricature, what is cartooning, and what is impersonation.
So, you know, a cartoon of Donald Trump dancing with dollar bills falling everywhere on the graves of Gaza's children.
That sounds familiar.
Yeah, that's just a cartoon.
But if you had a deep fake of Donald Trump saying, you know, all the children of Gaza were wrongly murdered and then he ends up getting blown up by a golden pager or something like that.
Well, then that's a deep fake.
That is impersonation.
But is the person is the person who made the deep fake, should they be held accountable?
Yes.
Really?
And the company, I think that to an extent, if your software is capable of producing a very photorealistic or video realistic deep fake and you've deployed it to the masses to just sow chaos and you knew that was what was going to happen, of course you should be held liable.
Google, for instance, they have, and Midjourney too, among the most advanced video generation AI, right?
There's all these guardrails in place to keep you from impersonating famous people.
You'll still have small-scale, malicious, kind of cyberbullying deepfakes.
I expect to see that anyway.
But it kind of just shows that there's something inherent in the technology, this capability that would require great moral restraint on the part of most of the population. In the case of deepfakes, in the case of bioweapons, in the case of even the construction of IEDs, in the case of flooding the world with slop, you either have laws in place, somewhat draconian probably in many cases, to keep that from happening, or you rely on the moral fortitude of the people.
In either case, that's why we're in a precarious situation.
I mean, I think that, you know, you guys are mostly in agreement, except for Shane. Do you think it's inevitable that AI will be a danger to people in the future, no matter what, if we have no guardrails?
Well, I think if AI holds the potential that the doomers think it has, which it hasn't realized yet, as we've been discussing, then what's most important is that the people who are involved, the people who are building it, who are mastering it, are on our side, are virtuous, and are people who care about humans.
Humans.
And so maybe the most risky scenario that I see is that.
But to that point, to your point about caring about humans, all of the things that we have talked about when it comes to the medical field and stuff, all of that stuff is in service to humans.
So how do you square that circle?
I think I agree.
And I think this is what we've been talking about: there are lots of really excellent, practical, short-term applications that seem like they're going to benefit Americans.
But then there's the longer-term existential risk.
And I think there, that's where I see sort of the call.
In some ways, it's like the call to adventure, basically.
It's a call even to young men in America who actually care about people and care about the direction of civilization to actually be a player in this, and not to stand in front of the train, but maybe to help guide the train in a direction that's conducive to human flourishing.
And I think that that's something that's of critical civilizational importance over the next few decades.
And the question is, how are you going to go about the guiding?
And I think there are major limitations with regulation, in particular the patchwork, quilt kind of regulation where every state has its own version of what AI can be.
There may have to be some maybe better mobilization around it than that.
Because, and you've probably heard these arguments, if every state has its own AI rules, America will be completely crippled in its ability to advance AI and will fall behind other nations, right?
So we are a nation and we should act in tandem as a nation.
But maybe there's some groups of states that actually tend to have similar views about what human flourishing looks like.
And maybe there's different types of AI.
Maybe there's a red state AI and a blue state AI.
Is there an Amish AI?
Can we get a bunch of people?
And we already have a lot of people.
Can we get an Amish AI, right?
You've got Gab AI, which I would say is deep red, I guess you would say.
And then you've got like xAI, Gemini.
And so that's why I'm less worried about this monopolistic future because we've already seen AI companies who don't agree with one another and that express very different worldviews, libertarian, progressive, et cetera.
Specifically, what sort of state laws would gum up the entire national AI agenda?
So regulation in general, you've heard this from the libertarians, it benefits large companies because small companies can't afford to comply with all the regulation, right?
And so Europe is chronically technologically backwards, and one of the reasons is that they have so many small regulations that it's death by a thousand paper cuts.
And I think in the U.S. we need to coordinate so that we're not dealing death to the AI industry by a thousand paper cuts, because of all the benefits that it can give us economically.
Well, that's abstract, but specifically, what laws, either on the books now in different states, Texas, California, New York, a municipality, or proposed laws like SB 1047's liability for companies, are threatening U.S. AI dominance?
What laws? Because you hear this all the time from Sacks and Andreessen, you know, it'll destroy us.
China will win if you say that you can't create deepfakes.
What laws would threaten the U.S. national agenda or these companies?
So you know the legislation better than I do.
What's out there, what's passed, what's on the table.
But it's not about any particular legislation.
It's about the idea of bringing more lawyers into the room to enforce the regulations on companies.
And so the actual question of how we should handle regulation: clearly some of these laws that we're passing in California seem actually very reasonable and positive for AI, and for protecting humans and human flourishing with AI.
I think we should probably take those in mind and find a way for the court system to work with the legislature to make that a national thing, or at least take the good parts of it, take the rulings on a case-by-case basis, and rule in a common-sense way that will actually help.
So we have to have a common, a positive vision in mind instead of just anti-regulation, pro-regulation, that kind of thing.
Can I add one element? It would be, I think, any sort of policies that can be protective of children, just like with the social media question as well.
I mean, we'd be extremely in favor of those.
One thing that's a recent shift: if you look at young families today, there are around 25 million English-speaking families who have children under eight.
And in that cohort, 85% of them are screen time conscious with their children.
And that's a big shift from just a decade or two ago.
Now, you could ask the same question, how many of them are AI conscious of what AI chatbots and things their children are interacting with?
And it's probably a much lower number, at least right now.
And so my hope would be that one, parents take it upon themselves to be much more protective around AI and the ways that their children engage with it.
It's not necessarily that I'm entirely against children engaging with AI.
It just needs to be in an environment that's conducive for them to do well.
So no Rosie the robot.
Yeah.
No Jetsons in the school.
But then also, right, parents just need to be educated on this side of the question.
And then there is just policy.
Like certain things should be banned, certain things, especially in the schools and things like that.
And that's a much longer conversation.
And what exactly is inbounds versus out?
It gets much more technical, and we probably won't get into all the rest here.
So I feel like the argument that is made is, you know, the potential for danger from AI is so great that we need humans in the loop.
But I also feel like the humans in the loop has evidentially produced negative results because you've never had so many people saying, no, I want to homeschool my kids because I don't want teachers around my kids because I know what the teachers think and I know what they're teaching in the schools.
Ever since COVID, like kind of pulled back the veil and parents were able to see watching remote schooling or whatever.
So I'm not sure which one is actually worse.
Is it parents being able to see the curriculum that an AI would be teaching the kids?
Is that worse to have the robot do it?
Or have an AI do it?
Or would it be worse to give your kids to the schools that exist or existed prior to COVID, knowing what we know now?
I don't know which one is actually worse.
Right now, I'd say they're both bad.
Those teachers, most of them probably agree with a lot of the things that AI might spit out because it was built by people who agree with them.
But again, if you can see, if you know what type of curriculum the AI is going to be teaching, there's a whole spectrum of basically different AI-education-type products that exist.
And some of them are actually produced by homeschool family type individuals.
And then others are totally crazy.
Far-left lunatics have put out some new English education app that a kid can get on.
And there's no telling what your kid is going to get.
It's going to change so much.
Like if it's a private school, homeschool using a certain curriculum, you know, maybe it's Christian-based and you understand what's going on.
But in a public school, they could be changing the AI so much, because in the physical world they move the goalposts all the time.
Like what was the problem?
I think that's one thing that I want to point out.
Like this morning, like I saw this tweet, right?
And this is just about buzzwords the DNC is not allowed to use.
And you know, that's my entire vocabulary.
I mean, it's things like privilege, violence as in environmental violence.
You can't say dialogue, triggering, othering, microaggression, holding space, body shaming, subverting norms, cultural appropriation, Overton window, existential threat, radical transparency, stakeholders, the unhoused, food insecurity, housing insecurity, a person who immigrated, birthing person, cisgender, dead naming, heteronormative, patriarchy, LGBTQIA, BIPOC, allyship.
All of these things are really the backbone of intersectionality.
Those are all banned.
Those are all words that the DNC shouldn't be using.
And the point that I'm making with that, though, is that human beings see when they get resistance, and they're like, well, we need to change.
But they don't actually change what their message is.
They're just changing how it's delivered.
So what you're talking about is they're giving up on human beings, basically.
You're talking about giving up on people.
And I'm not saying it's as bad. I'm saying it's much better.
Right.
I'm not saying you specifically, but this question comes up because you have some proportion, a very, very large proportion, of teachers in the U.S. who are, it's a gay word, but woke.
Well, the vast majority.
They're woke.
Okay, yeah, fine.
But, A, that's a problem of the system.
There were plenty who weren't before, and there are plenty of very intelligent, educated, conservative people.
So the predominant attitude among teachers in, say, the 60s would have been profoundly conservative.
Pledge of Allegiance every day, pro-American propaganda, essentially, in all of the elementary school books.
So that shift happened after the long march through the institutions.
But so the question becomes, and it's a valid one, like in the case of Linda McMahon, she's pushing AI, or A1, teachers, depending on what day you ask her.
And there's a company that I came across actually in Berkeley of all places, or I'm sorry, was it Stanford?
A woman who represented them basically described it as all AI teachers all day long with like two hours of human teachers talking about what they learned, right?
But it's all AI.
It's an experimental program.
I think the best way to think about all of this, again, isn't in some monolith that like, should we use AI teachers?
Should we not?
Because everybody's not going to do the same thing.
All of this is this vast global experiment.
No one has any idea what the outcome is going to be ultimately.
We're just finding out, and not by taking 20 kids and putting them in a cohort over here, experimenting on their brains with technology, and taking 20 other ones and letting them grow up traditionally.
And then after 20 years, seeing what happens and then applying it to society, that would be a warped, fucked up experiment to begin with.
But instead of doing that, we're just doing it with all kids, as many as possible.
So you have people like Sal Khan, Marc Andreessen, Elon Musk, Bill Gates, Sam Altman, Greg Brockman, most of them saying, to some extent or another, that every child on the planet should have an AI tutor.
It's a totalizing vision of how this should go down.
Now, that's not going to happen.
I suspect you won't do that to your kids, right?
Or most of the people you know won't.
So this is an experiment.
And ultimately, and I'll leave it on this conceptual note.
Ultimately, this is an experiment that should be understood first and foremost spiritually, but on a practical level, on a Darwinian level and on a eugenic level, which are closely intertwined.
And over time, like say with birth control, the most advanced biological technology of its day, which dramatically changed the gene frequencies in the U.S. and the West especially, but across the world.
Those who use birth control had fewer children.
Those who didn't had a whole lot more.
Those who were more religious had more children.
Those who were irreligious had fewer.
And I think on both a Darwinian and a eugenic level, because we're talking about the same thing ultimately, it's either nature's Darwinism or human social Darwinism.
We're going to find out.
And it's going to be diverse.
There's going to be all of these different sort of cultural species upon which natural selection and artificial selection will act.
And I think the question that someone should ask is: will my mode of life, whether it's total cyborgization or total Luddite or somewhere in between, will this continue?
Will this allow me to flourish now?
And on a long-term scale, will this allow me and my own to continue?
And it's a big question.
I don't think there's going to be a monolithic answer, at least I hope not.
I hope not, but I feel like we are moving towards a society that wants that, especially after this administration has totally embraced all the bad people we agree are bad.
They want that total control.
They want to consolidate in the government all of your data that they've scraped off the internet, that everyone already has access to, so they can accelerate things like pre-crime, which the UK is already starting to try out.
And that stuff is the future.
I'm not just thinking about me.
I'm thinking about my kids' kids and what world they're going to inherit.
And it's always getting worse, in my opinion, despite my hope in humanity.
We're surrounded by people who want to dominate everything at all times.
To jump in on the point on AI education, I think one thing that we often forget is how much also education has shifted just recently.
So even the lecture hall is a fairly recent technological innovation.
And I would argue that the lecture hall doesn't work very well.
If you go to college, and several of us are recent college grads, nobody's paying attention.
Nobody's learning anything.
It's a totally ineffective way to learn: some professor who doesn't care just monologues.
Why is no one paying attention?
It's because everyone's on their phones, or that's part of it.
But even in classes I've been in, lecture halls where they remove technology, it's still just not a great platform.
It was sort of popularized in the post-war era with the GI bill.
Basically, colleges built all these large lecture halls and this is a way to pack a lot more students in through the education process.
And so when it comes to AI education, I'm a skeptic.
I don't want to see a single solution that's crammed down every kid's throat.
That feels like a really dark world or dark path to go down.
But in a world where, let's say, there's a menu of a thousand potential AI tutors.
And maybe one of them, for example, let's say you're a homeschool parent and you hate teaching math, and there's one that teaches times tables really well, and it was built by somebody who maybe shares your values.
Let's say Bryce made it, and I trust Bryce, and it's either that, or not teach my kid math, or send them to public school where you don't trust the teachers.
I think once again, it comes back to the human.
It's like, do I trust what Bryce built?
Is it effective?
I want to see maybe over a few years, did the kids who did it actually learn math?
And in that sort of a world, I think there will be potentially really quite excellent outcomes.
But I'm totally, totally against every student being forced to learn from only the one that's allowed.
That's exactly the point that I wanted to make.
Like when you have a market where you can actually select and say, well, I like this type of curriculum, and I know I trust the people that produced it, for whatever personal reason you come up with.
You know, say Ron Paul has a Liberty Institute, right?
And you want to go with Ron Paul's AI that will teach you the curriculum from Liberty Classroom.
You know, Tom Woods promotes that too.
He's another big libertarian.
That's the kind of stuff that I'd be like, you know what, I feel good about that.
I would feel good about this AI, this curriculum package being downloaded into an AI that's on my computer or maybe, who knows, maybe even a robot.
And the robot is actually teaching the curriculum that I chose.
But if you have that option, I don't see that as a bad thing.
I don't see it as a bad thing, but I think the best teachers should be the ones that can be relatable to a human child and weave throughout their education stories and how it's applicable to the human world as opposed to some cold, sterile screen just beating your kid with information.
I agree.
And I know it can be nice, but whether they go to college or not, I don't want my kids just plugging in.
The amount of emotion that's added to the conversation by both of you.
The gooner bots, the cold steel.
It's like, I mean, I understand that you guys are trying to make people see your perspective, but still it's like two writers who don't use AI.
Psychologically unstable and unable to control our emotions.
You're watching people go extinct, Phil.
To that point, though, just to say this: the millennia-old tradition of passing information down from human to human, even with the addition of writing, even with the addition of television and recordings, it's still been the predominant way, humans teaching other humans or guiding them through that education and that transmission, that link that goes back and back and back.
You could call it maybe an apostolic succession in some cases.
Don't go using words like that.
But that link from person to person, it means that that information is flowing through a flawed human example, a role model.
You can see whether that information has allowed them to flourish or not.
You all have that.
We all know the brilliant professor who's really useless.
And it kind of makes you wonder if all that information was really all that worthwhile.
On the other hand, we know the very kind of soft-spoken and concise professor who is excellent at so many different things.
Those sorts of human examples have been, and will continue to be, so crucial for the development of human beings.
I think that you bring in a robot on occasion, right?
You get, hey, here's your clanker teacher for the hour.
Don't use the hard R with her.
I think that, okay, fine.
There's always going to be the spectrum between the Amish and the cyborg.
Nobody's going to be 100%, except for the cyborgs and the Amish.
But none of us are going to be 100% on that spectrum.
It's just a matter of which way we're leaning and pushing.
And, you know, I'm not trying to stop anybody from doing the basic sort of cyborgization.
And, you know, I too have an implant in my forehead.
So I'm not trying to get too self-righteous about it.
But I do think that, again, that kind of Burkean suspicion of change.
Why is this change being pushed?
Is it really for my benefit, or is it for the benefit of the person pushing it?
I think it's as simple as that.
And if you decide, yeah, I want my kid to grow up with clankers.
I want my kid to marry one.
I want Rosie.
Oh, my goodness.
You know, and that we'll see.
At the end of the day, it will be decided.
Ultimately, I think it will be a spiritual question.
But on a practical level, it'll be a Darwinian question.
Which ones survived?
Which ones flourished?
And we'll just have to find out.
All right.
Well, I think we're going to go ahead and wrap things up.
So, Bryce, if you want to go ahead and kind of collect your thoughts and give the totalization of what you thought, go ahead.
Oh, no.
If you've got anything you want to shout out, your Twitter handle or whatever?
Sure, sure.
So I think just one point that, you know, what you said was thought-provoking.
I think for people who are wondering, who are scared, right?
What is AI going to do?
What is it going to transform?
How is it potentially harmful?
What should I fear?
You know, I think the thing that we can use, the common sense heuristic, is that historically, technology doesn't replace the things that we care most about.
It doesn't replace the core human things.
Technology usually replaces older technology.
And so that heuristic can probably help us to guide where we use AI, and also help us to be a little more at peace, that AI probably is not going to radically transform the world we live in.
But so you can follow me on Twitter, Bryce McDonald, and you can go to my pinned tweet.
If you're interested in basically what Volus does, which is deploying AI in middle American industrial businesses, you can find us there at the pinned tweet.
So I'm Nathan Halberstadt again on Twitter.
I'm at Nat Halberstadt, H-A-L-B-E-R-S-T-A-D-T.
And I'd just say that this was an amazing conversation.
And I think what I appreciate about it is that we're all coming from the same prioritization of the human and we're assessing risk, I think, in slightly different ways or over different time horizons.
And I would actually love to do it again.
So I think it's a really just an excellent conversation.
Really respect everybody's perspective here.
Just to plug my own stuff again.
I'm the venture lead at New Founding.
So we run a rolling fund.
So if you're interested in investing, just DM me on Twitter.
And then if you're a founder and you're somebody who's passionate about solving the problems we've been talking about, also DM me and you can pitch us and we're really happy to talk with you.
So especially if you're a patriot, we want to talk with patriots.
Yeah, thanks.
Yeah, I would definitely echo, been an honor, been really fun.
And New Founding is awesome.
And what I read from your mission statement at Volus is an acceptable level of cyborgization.
I got to say, it's also pretty awesome.
Now, on that note, I do think that it's best to build machines that work for people, right?
Instead of focusing on building smarter machines, cultivate better humans.
Humans first, humans first, humans first.
My book, Dark Aeon, A-E-O-N, Dark Aeon: Transhumanism and the War Against Humanity, available everywhere on the Bezos system.
You can pay for it at Amazon with your palm, or you can get a signed copy directly from me at DarkAeon.xyz. Website, joebot.xyz. Twitter, at j-o-e-b-o-t-x-y-z. Or Steve Bannon's War Room.
Yeah, that was a lot of fun, guys.
Really appreciate it.
You know, I hope that we have a positive future.
I always have hope in humanity, and I need to for my children, you know, despite thinking that AI could, if we go one route, replace many things that shouldn't be replaced: pregnancy and knowledge and creativity and all that stuff.
But it's good to have these conversations, especially with guys who are in it that have ethics, as opposed to many who are in it right now getting a lot of money from our government who have no ethics, in my opinion.
But yeah, a lot of fun.
Thanks for having me.
You can find me online at Shane Cashman.
The show I host every Monday through Thursday is Inverted World Live, 10 p.m. to 12 a.m.
It's a call-in show.
A lot of fun.
And we'll see you guys next time.
And call Shane and debate whether clouds are real.
Thank you, everybody, for coming in and having such a great conversation.
Everybody's input was really enlightening, and I appreciate you all coming out.
I am Phil that remains on Twix.
My band is All That Remains.
You can check us out on Apple Music, Amazon Music, Pandora, Spotify, and Deezer.
Make sure you tune in tonight for Timcast IRL.
I will be here hosting again.
Tim is still out sick.
And check out clips throughout the weekend.