All Episodes
Aug. 22, 2025 - The Culture War - Tim Pool
02:05:56
Will AI Destroy Humanity? Can Humans Escape AI Doomsday Debate

BUY CAST BREW COFFEE TO SUPPORT THE SHOW - https://castbrew.com/ Become A Member And Protect Our Work at http://www.timcast.com Host: Phil @PhilThatRemains (X) Guests (For AI): Bryce McDonald @McD_Bryce (X) Nathan Halberstadt @NatHalberstadt (X) Guests (Against AI): Shane Cashman @ShaneCashman (everywhere) Joe Allen Producers:  Lisa Elizabeth @LisaElizabeth (X) Kellen Leeson @KellenPDL (X) My Second Channel - https://www.youtube.com/timcastnews Podcast Channel - https://www.youtube.com/TimcastIRL


Aren't there a lot of people who think that it will usher in a new age for humanity?
But there are also a lot of people out there who have made significant warnings saying that, look, this actually could destroy humanity, and we're here today to talk about it.
So not a lot of monologue today.
We're just going to get right into it.
So joining us today is Bryce McDonald.
Introduce yourself, please.
I'm Bryce.
I'm the U.S. lead at Volus, which is a portfolio company of New Founding, and Volus implements AI in real-world American businesses like construction, mining, manufacturing.
So you're pro-AI generally, right?
Yes.
That's a fair statement.
Okay.
And also on the pro-AI side generally, along with Bryce McDonald.
Yeah, I'm Nathan.
I'm sorry, Jesus.
I'm sorry.
Nate Halberstadt.
I'm Nate Halberstadt.
I apologize.
I'm a partner at New Founding.
It's a venture firm focused on critical civilizational problems.
I lead investing in early-stage startups, and I'll also be taking the pro-AI side, and Bryce and I will also be qualifying our position.
We're worried about some of the same risks of course but we see a positive path forward.
I do think that the risks are something that everyone, even people who are pro-AI, is at least aware of and does take seriously.
But to talk about the negative sides, the possible dangers of AI we've got Joe Allen.
Yeah, Joe Allen. I am the tech editor at The War Room with Steve Bannon, occasional host, writer, not an expert or a philosopher, as I'm oftentimes smeared as, and failed Luddite.
Failed Luddite.
Yeah, I've not been able to successfully smash even my own machines.
That is a crying shame.
I'm sorry to hear it.
You know, I'm still going.
Awesome.
And we've got the inevitable Shane Cashman here.
Yeah, host of Inverted World Live.
We had Joe Allen on the show last night.
We got into it. I'm looking forward to this one.
So do you guys want to start with an outline of your positions, basically, so that way the viewers understand?
Or do you guys want to jump into anything in particular?
How do you guys feel like this should go?
I'm happy to lead us off.
I think we can probably all start by agreeing that I'd say he's one of the antichrists.
Antichrist.
And I think starting with, Bryce and I agree that AI technology presents a number of potential risks, especially for human well-being.
But I think we're excited particularly about this conversation because we agree with you on that.
I think we all come from a right-of-center perspective here.
We want basically a path forward for AI that works for Americans and especially the next generation of Americans.
I think we're concerned, like, is this going to work for our kids and grandkids?
But, and I'm familiar with both of your work, Shane and Joe, and actually really respect the point of view that you guys come from.
So I'm excited about the dialogue because I think we can hopefully talk through some of the very serious concerns that we all share.
But ultimately, I think Bryce and I see a path forward where AI technology can actually shift opportunity towards the people who have been disadvantaged in the previous paradigm.
So we think about sort of the middle American skilled tradesmen, or sort of the industrial operating businesses in America, people who have been really hurt by offshoring, or by financialization from private equity, or software-as-a-service lock-ins from Silicon Valley and the rest.
We think there's a pathway where AI can basically shift more power and autonomy to some of the people who we wish had had it up to this point.
There are still plenty of risks, but part of where we actually want to take the conversation is arguing that over the next decade or two, a sort of golden age could emerge, and AI will play a role in it.
And there will be lots of challenges and lots of serious things where we'll need to adjust policy.
And we need to basically make sure that we minimize the risks for Americans along the way.
But that's the side that we'll be taking. Bryce, maybe you want to...
Yeah, there's one thing that I want, Bryce, if you could expand upon it, one phrase that you used. You said a narrow pathway. How narrow do you think the pathway is? Do you think that it's, you know, more likely that there are going to be more negative consequences from the use of AI?
Or do you think that it's more likely that there will be positives?
Or do you think it depends on who's in control?
Look, with any technology, the positive and the negative are really closely intertwined, and I think our role as
Or, for example, trying to automate away all work or ruin education with cheating apps in AI, right?
We want to split out those negative elements and try to mitigate those.
And ultimately, I think it'll be a lot of pros and cons, but just like with social media, just like with the internet and even trains or electricity, there's going to be both positive and negative.
Joe, what's your feeling overall about that outlook? Do you think that it's in any way realistic, or do you think that it's all pie in the sky, that this is just a terrible idea that we should all fear?
And to that point, if you do think that it's a terrible idea: we don't have the ability to prevent other countries from pursuing it.
So how do you think the U.S. would be best served moving forward, considering Russia, China, there's going to be AI companies all over the world.
And if the United States does prohibit it here, these companies are just going to go offshore.
They're going to go to other countries.
Yeah, there's a few different questions there.
So to the first question, how do I respond to the position presented here?
I'm not sure what we're going to argue about.
And I agree by and large, although as a writer, I have zero use for AI.
So it's very domain specific, right?
If you're in finance, you might have a lot more use for machine learning than I would.
And a lot of writers use AI to basically plagiarize, cheat, and offload their work to a machine.
Yeah, the question of U.S. competitiveness, especially in regards to China, China could leap ahead.
It's really a volatile situation with AI, because simply transferring the techniques and information and the technology to build these systems is all that's required for another country to begin building close to, or approaching, the level of the U.S. right now.
But this is all driven by the U.S. The AI race is driven by the U.S.
The AI industry is by and large centered in the U.S.
And the arguments made by people like David Sacks or Marc Andreessen, the sort of dismissive position in regards to the downsides, it's completely reckless.
I think probably disingenuous, although I don't know their hearts.
And I don't see, I can see from a national security perspective, if you have a very effective algorithm or AI or AIs that are used to, for instance, simulate the battlefield or analyze surveillance data or target an enemy.
Yeah, that I think is something that should be taken very seriously.
On the other hand, the way they're talking about it, it's as if flooding the population with goonbots and groomer bots is going to be essential to American competitiveness.
And I just don't see how having, what is right now, at least statistically, however much you trust surveys, a third of Gen Z students basically offloading their cognition to machines to do their homework, I don't see how that's going to make the U.S. competitive in the long term.
And the production of AI slop, the sort of relationships, the bonding that inevitably occurs with anyone who doesn't either make themselves unempathic or sociopathic, or wasn't born that way.
The way large language models and even to some extent the image generators and video generators work, they trigger the empathic circuits of the mind.
One begins to perceive a being on the other side of the screen inside the machine.
I think that the social and psychological impacts are already evident and are going to be severe.
The economic impacts are kind of up in the air.
It's an open question, but it doesn't look good.
And the mythos around existential risk: the AI is going to kill everybody, or become a god and save everybody.
Again, the likelihood of that happening, probably low.
But I think that mythos itself is driving most of the people who are at the top of the development of this.
And I think that has to be taken very seriously.
It'd be like if you had Muslims, for instance, that ran all the top companies in the U.S., supported by the U.S. government.
Maybe you like it, maybe you don't, but it's something you should take very, very seriously.
There's one point that you made that I actually want to kind of drill down on.
You said that it was really that AI development was driven by the United States.
And is it really, like, is it really driven by the U.S.? As in, is it only because the United States is doing it?
Or is the tech actually a human universal that all countries would go after?
Because it's my sense, like we said, we talked about China or we talked about Russia.
I don't think that just because the United States is on the leading edge of this technology, I don't think that in the absence of the United States, these technologies would not develop.
Most of the innovative techniques come out of the U.S., and all the frontier labs are in the U.S.
But the point that I'm making, that's just reaffirming that innovation is in the United States. So if we stopped, they would start, or they would begin to catch up.
They would continue.
I don't, yeah, I don't see that.
Maybe.
Because of the way the technology develops, even if it's not being developed in the United States, or even if it's developed in other countries more slowly, that doesn't mean the technologies wouldn't be developed.
I actually hope that China and our other rivals across the world do develop and deploy these systems like we're doing in the U.S., because then they'll be plagued by goonbots and cheating.
China already has, haven't they? Like, they've got...
Well, they're actually much more... Yeah, those are different questions from the goonbots.
But yeah, the tendency to disappear up into one's own brainstem, I guess, is a human universal.
I think if China begins to recklessly deploy their LLMs at the same scale as the U.S., but China actually has really strict regulation, including protecting consumers, much more so than the U.S., on deepfakes, things like this, but also any kind of pro-
But yeah, I think if China borgs out and weakens its population as we have, it would be kind of like payback for the fentanyl.
So, do you guys want to jump in on that?
Do you guys think it's only the United States driving this because the U.S. is where the leading edge is?
Do you think that if the U.S. pulled back, other countries would also?
Because like I said, I think that just because a country lags behind the U.S. technologically doesn't mean they don't have the impulse or the desire to actually develop these technologies.
Without a doubt, there is something to this technology that's behind the goonbots, behind the particular instantiations of it, the applications.
One second, I want to jump in.
The idea of goonbots, like I understand that those are the flashy things and sex sells.
I understand that.
But that's really one of the smallest areas and I think one of the least important, right?
Because you're dealing with medical technology, AI and medical technology.
You're dealing with AI in multiple different fields, almost becoming ubiquitous.
I think that as much as people like to say, oh, the porn bots are going to kill us, everyone's going to jump into the pod, that's actually more a way to slander the technology.
And again, this is not endorsing the idea of AI sex girlfriends or whatever, but even just the way that people were talking about it so far around the table, "the goonbots," the point of saying it that way is to slander the technology.
So agreed.
And there needs to be a positive vision for AI.
Like, we need to understand: what is AI good for, and what are humans good for?
And ideally, we keep AI out of the areas where humans are most capable and we're going to do just fine.
Areas like our relationships, of course, areas like education, and the core functions that people are experts in and AI is not.
So AI is good for doing work that's frankly less humane.
So things that we don't like doing, paperwork, administrative work, bureaucratic work, that kind of stuff.
And in domains where AI is not adding value or it's obviously just terrible.
So you think about when you go onto Twitter or just the algorithm at any point, you see an increasing amount of just AI slop.
Or even in education, we've talked about like at a certain point, if the kids are just outsourcing all of their learning, we're going to figure that out.
And that's something that will need to be corrected, if people are using AI in sort of strange or psychotic relationship dynamics.
I think, again, we'll sort of figure that out and solve that as well.
And really across all of these, my hope is that what will occur is that as we figure out that AI doesn't work well in this domain, it'll force people back into the in-person.
One example here would be, let's say, for example, Joe, you and I were going to go on a show together or just go do something together.
If I have an AI agent, for example, that sort of spontaneously reaches out to you and messages you, and then your AI agent responds back and it's scheduled, and maybe there's a whole conversation that happens, but neither of us are even aware that the conversation even happened, right?
That's a pretty weird thing.
And at a certain point, we sort of lose trust in those sorts of interactions where there isn't a firm handshake, where there aren't two people in the room together.
And so I think the natural sort of second order consequence of this, at least it seems to me that it'll force people to care more again about the in-person relationships.
So people in their community, their family, their church.
And also even things like proof of membership and sort of vetted organizations or like word of mouth referrals, right?
So somebody like Bryce basically says like, yo, you should really go do XYZ thing with Joe.
And I know Bryce in person and I wouldn't take that advice off of the internet anymore.
So you sort of bias more towards input from in-person.
And in some ways, I think that that solves some of the challenges we've been having with the internet and with social media, which has been terrible for young people, just in terms of anxiety and other things as well.
And so what we want to have is a future with AI that doesn't look like Instagram, that doesn't look like what social media did to people.
And I think it's possible.
Isn't it?
Do you think it would be accurate to say that because of the algorithms that social media uses to put things in front of people, wouldn't that qualify as AI and wouldn't the repercussions of having that kind of information, the way that information is fed to people, particularly young people, wouldn't that qualify as one of the negatives of AI that we've already seen even in its infancy?
That's true.
And I think it's worth bringing up the distinction between, I would say, the last generation of AI, which is what you see in the algorithms in social media.
It's what you see in drone warfare.
But also a lot of maybe more positive elements as well, like the ability to detect fraud in financial systems.
And there's a new generation of AI, which started with the...
And you could call that generative AI.
You could call that large language models.
But this is really the source of a lot of the hype in the last few years.
And it's a source of where we can actually think about automating all the worst parts of our companies or our personal lives.
But it's also the risk where the slop comes in, right?
So all the tweets that you see that are clearly made by a robot, that's this second generation.
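To make that "last generation" concrete, here is a minimal sketch of the kind of engagement-ranking recommender being described behind social media feeds. The signals and weights are hypothetical illustrations, not any platform's actual system.

```python
# A minimal sketch of an engagement-ranking recommender, the "last
# generation" of AI discussed above. All names and weights here are
# hypothetical, chosen only to illustrate the pattern.

from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    predicted_click: float   # model's estimated probability the user clicks
    predicted_dwell: float   # estimated seconds the user keeps watching
    predicted_share: float   # estimated probability of a share

def engagement_score(post: Post) -> float:
    # Feeds typically rank by a weighted blend of predicted engagement
    # signals; these weights are made up for illustration.
    return (1.0 * post.predicted_click
            + 0.1 * post.predicted_dwell
            + 4.0 * post.predicted_share)

def rank_feed(candidates: list[Post]) -> list[Post]:
    # Highest predicted engagement first: this optimization target, not
    # any single piece of content, is what shapes what users end up seeing.
    return sorted(candidates, key=engagement_score, reverse=True)
```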
You know, you're going to have to find something for us to disagree about.
Well, I mean, I tried pushing back on that point.
But to your point, I will say this.
You're talking about critical systems that are having AI integrated into them, medical, military.
Very critical systems, right?
At least in the context of what I was thinking of, it was assisting humans, not actually taking over for them.
So yeah, the goonbots, I think you're underestimating that. Just like digital porn, just like OxyContin, it may be something that is primarily concentrated among people who are isolated, people who are already mentally unstable, vulnerable, but that's a lot of people, and it's an increasing number of people.
But let's put the goonbots aside, and the groomer bots.
It gives us some indication as to how unethical and sociopathic people like Mark Zuckerberg and Elon Musk are, but we'll leave that aside.
Look at just the medical side of it.
I hear all the time from nurses and doctors about the tendency towards human atrophy and human indifference because of the offloading of both the work and the responsibility to the machine.
And, you know, studies are only so valuable, but there are a few studies that at least give some indication that that is somewhat intrinsic to the technology, or at least it's a real tendency of the technology.
One was published in The Lancet, and it was in Poland.
They followed, I believe, doctors who were performing colonoscopies.
I guess they were proctologists.
For the first three months, they did not use generative AI.
Then it was like two, three-month periods.
Then they followed them after three months of using the generative AI.
What they found... I mean, if there's one thing, and I'm sure you guys agree, AI is a troubling term, right?
What are we even talking about when we talk about AI?
But they were using, you know, algorithms are very good at pattern recognition in images.
And so in radiology and other medical diagnoses, it's much better statistically than humans in finding like very small, mostly undetectable tumors or other aberrations.
So these doctors used it for three months, and they found that after just three months of consistent use, they were something like 20 to 27 percent less able to detect it with their own eyes, just because of the offloading of that work to the machine.
And in the military, it's going to be a lot more difficult to suss out. But in the case of, say, the two most famous AIs deployed in Israel, Habsora and Lavender, these are systems that are used to identify and track legitimate targets in Palestine.
And basically that means it fast tracks a rolling death machine.
And is it fine?
Is it not?
Don't know.
All we know is it's accelerating the kills.
So I think, but in both cases, what it highlights is how important those roles are: doctors, soldiers, so on and so forth.
And it also at least gives us some indication as to the problems of human atrophy and in the case of warfare, the real tragedy of kills that were simply not legitimate.
So to your atrophy point, right, if AI is better at...
And it's also still technically, I mean, it's in its infancy, right?
This is still a very, very new technology, only in the past couple years, two years possibly, that this is capable of even doing this.
And it's gone from infancy to being able to detect better than human beings. Wouldn't it make sense to say, look, it is a bad thing that human beings are relying on it to the point where they're losing their edge, basically, that they're not as sharp as they used to be?
But moving forward, considering AI is so much better than humans, is that a problem?
And will that be a substantive problem?
Because you think, in two years, considering how the advancements have gone in the past, in two years it'll be impossible for human beings to even keep up with AI.
Would it be a negative to say, oh, well, humans won't be so sharp?
Well, yeah, they won't be, but everyone relies on a calculator nowadays for any kind of significant math problem; no one's writing down and doing long division on a piece of paper anymore.
They always use a calculator.
Isn't that a similar condition or situation?
I really like the analogy of the calculator.
One way of thinking about this that we like to use is that AI right now, especially think about ChatGPT, it's very good at replicating bureaucratic tasks and really bureaucracy.
So it's information tasks.
In just the same way as, you know, to do a very large math problem you plug it into a calculator and it does it quite quickly, it's sort of the same thing in a business context or an administrative context.
AI today does quite well what entry-level associates and analysts and people like this did five years ago in, say, an investment banking firm or a consulting firm or a law firm.
or even just sort of passing around basic information in a large bureaucracy.
You know, you could think about it as similar to before the calculator, when there would have been entry-level people doing these extremely long math problems.
And I think a point that Bryce made earlier is that some of this stuff is actually fairly inhuman; being a cog in a bureaucracy is not necessarily the peak of human flourishing.
And so as long as new opportunities are materializing, and as long as there are still ways for Americans to continue working, forming families, then I don't necessarily see it as a terrible thing if certain very uneducated, inhuman types of jobs stop existing as long as new jobs are created that are more conducive for human flourishing.
I think it's disingenuous to compare this technology to technologies of the past and any advancements because those technologies are still even the calculator involves a human working with it, whereas AI is going to replace everyone.
And I understand the short-term benefits that come with all of this, whether it's medical or military, which I disagree with; Lavender, I think, is a terrible situation, and allegedly has a 10% error rate.
But the idea that it's going to create a future where humans can do better things outside of this and not be a cog, I think we're a cog in it right now. I think we are the food source for AI, and it's learning because of us, right? And the people who are building the AI, they don't want the physical world. The big names, the Altmans, the Thiels, the Musks, you name them, they don't care for the physical world at all.
They don't want nature anymore.
They want data centers and AI to be doing everyone's job and to outsource everything from knowledge to accountability.
Some of them are transhumanists.
They are transhumanists, right?
And that is a big part that's baked into AI.
And a lot of the problems we see online with, say, censorship on Twitter, the algorithm still has that in its DNA.
The same things we had problems with four or five years ago.
And I understand AI's here.
There's nothing we can do about it.
It's kind of like gain-of-function, in my opinion.
We've started it.
It's here.
People just can't stop doing it. But I feel like with this... it's going to draw us into apocalyptic complacency, which we kind of are at already.
And people keep saying this technology is going to help out with people.
It's going to make things better.
It's going to make things easier.
It's going to be a tool.
Joe would say it's a tool.
And it certainly is a tool.
Right now, we're already seeing declining reading skills in schools.
People are more isolated than ever.
And I don't think it's going to get better all of a sudden because of the proliferation of AI.
I think it's going to make things much worse, much more fractured, and it's going to become this inevitable lobotomy of humanity.
Whereas with previous advancements in technology, we could work together despite some of the consequences, something like Taylorism and scientific management, where you did become a cog in the factory.
No.
Oh, Mustafa Suleyman.
Yeah, yes.
He talks about containment, right?
Like what you guys are saying, this narrow path forward, which I agree with.
You know, we have to have some sort of narrow path here.
But he talks about containing the AI.
I think we're beyond that.
There's no containing it now, no matter what you do, because it's going to find its way into every household, no matter what.
It's in every institution.
It's in every country.
And we have countries like Saudi Arabia who want to go full AI government.
That's the path that it's taking.
And I think in the future, not so distant future, the AI is going to worry about containing us.
And that's what I'm fearful of.
I think that with this being drawn into apocalyptic complacency means it's going to destroy us because we built it and we allowed it to.
Look, I just saw a video the other night about AI, and it led with the idea that right now we're going through, or have been going through, a massive die-off of species, right? Insects. There are something like 40% fewer insects on Earth now, and it's because of human beings. The point that it was making was that human beings didn't know that they were killing off insects in the ways that they were, with pesticides and deforestation and all of these things that we were doing. Making the modern world and living in the modern world killed off 40% of an ecosystem that is actually necessary; as annoying as they are, and they can be, they are necessary.
But the point that it was making was that we weren't aware that this was happening. And then they made the connection that, should we have a superintelligent AI, not just agents that can help, but a superintelligent AGI, it's likely that it will start doing things of its own volition that we can't understand.
Like bugs didn't know why human beings were destroying them and destroying their habitat and why they were getting killed off.
They had no idea.
And when you deal with a sufficiently intelligent or a sufficiently more intelligent entity, humans can't understand it.
And right now, AI will start doing things that people can't understand.
We had this Wired piece here where AI was designing bizarre new physics experiments that actually work.
And the point is...
Physicists started using it to help them figure things out, and it came up with novel methods to make finding gravitational waves easier, and they didn't understand what had happened. Then the same thing happened with chess, right? There was a chess bot, AlphaZero maybe. All the chess masters know how chess goes; they see the moves, and they know what moves you do in response, etc.
And AlphaZero, or the chess bot, I'm not sure it's the same one, just for accuracy, did a move that no one understood.
And no one understood why.
But then, like 20 moves later or something, it won the game, and no one understood how.
But another thing that AIs have started doing: there were two AIs communicating with each other, and they literally created their own language.
And the people that were outside of the AIs didn't understand what was going on.
And now it seems that a lot of the big AI companies are just feeding information and AI will pump
I think the biggest problem, and I think a big danger with discussions about AI is to treat AI as though it is a sentient entity in itself and that it actually does things of its own volition.
And I think we need to realize, okay, how does it actually work? How does this technology work?
It's pretty magical in some cases.
I'm sure everyone's used it and been really surprised at how effective it was at researching something or creating some text. But ultimately, AI, especially the new generation of large language models...
It's really compressing all the information that humans have created, that it's found on the internet, that its trainers have given it, and it spits it out in novel ways.
But we can't forget that humans are always at the source of this information.
Humans actually have some say in how the AI is structured, how it's trained.
And so, I think, seeing AI as kind of a sentient being in itself distracts us from the question of who's actually training the AI, which I think is a critical question.
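To ground what Bryce is describing, here is a minimal sketch of how a large language model generates text: it repeatedly samples the next token from a probability distribution learned from human-written text. The `model` and `tokenizer` objects are hypothetical stand-ins, not any specific library's API.

```python
# A minimal sketch of next-token generation, assuming hypothetical
# `model` and `tokenizer` objects. The model does not "decide" anything
# of its own volition; it samples from a distribution learned from
# human-created text.

import random

def generate(model, tokenizer, prompt: str, max_tokens: int = 100) -> str:
    tokens = tokenizer.encode(prompt)  # encode the prompt into token ids
    for _ in range(max_tokens):
        # The model's only output is a probability distribution over
        # possible next tokens, estimated from its human-written training data.
        probs = model.next_token_probabilities(tokens)  # hypothetical call
        next_token = random.choices(list(probs), weights=list(probs.values()))[0]
        tokens.append(next_token)
        if next_token == tokenizer.eos_token:  # hypothetical end-of-text id
            break
    # Anything "novel" comes from recombining patterns in that data.
    return tokenizer.decode(tokens)
```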
There are a lot of big companies who are doing this.
Thankfully, I think there's a diverse enough set of companies making AI models that we don't have to worry about a single company like Google taking over the next 20 years.
To that point, though, isn't it the case that AI is a blanket term?
Because when you're talking about an LLM, that's one kind of AI.
But when you're talking about full self-driving in a Tesla, that's not an LLM, but that is artificial intelligence.
It's interpreting the world.
It's making decisions based on what it sees, et cetera.
So to use AI as a blanket term is probably an error.
And you can say, you know, that LLMs are just, you know, they just crunch the information that they have, that people are feeding it.
But when it comes to something like full self-driving, that kind of AI would have to be used if you were to have a robot that was actually working in the world, right?
Like a humanoid robot.
It would have to have something similar to that, as well as an LLM.
Those two AIs are different, aren't they?
And how do you make the distinction between the multiple types of AIs?
and say, well, this one is actually kind of dumb because it's just crunching words that we said, but this one's actually not kind of dumb because it's interpreting the world outside.
And that's so much more information than just a data set.
So on that point, two distinctions to be made there.
One, when Shane is speaking, always as a shaman, Shane is speaking from a cosmic point of view.
One, when Shane is speaking, always as a shaman, Shane is speaking from a cosmic point of view.
He's seeing not just the thing, but the thing in relation to the room and the stars and so on and so forth in the metaphysical realm beyond.
When Bryce and Nathan are talking about artificial intelligence, they're talking about very specific machine learning processes that are for very specific purposes and also very specific to the culture that you're trying to build.
And I think that both of those are valid perspectives.
And I think that people using these digital tools for, at least with the intent of benefiting human beings, at least the ones who count, right?
The ones close to you, then we're probably better off, even if I reject it entirely.
But so that's, I think, a distinction to be made, right? And it's one of the problems when you're talking about AI.
When Shane's talking about the kind... And I myself think that you have to balance those things.
But to the point about AI as a term, it's very unfortunate.
I mean, for a long time you could just call it machine learning, right?
AI, when it was coined, like 1956, John McCarthy, and taken up by others, Marvin Minsky, what they were talking about is what we now call artificial general intelligence.
They were just meaning a machine that can think like a human across all these different domains.
And nothing like that exists, not really.
You could say the seed forms are present, but that is just a dream and has been a dream for some 70 odd years.
So say you take that distinction, though, between the LLM... and I hear what you're saying as far as just compressing information, but it does a lot.
It's more than just a JPEG.
You know, I mean... It hallucinates.
Yeah, it's capable of all sorts of things similar to human reality.
It's not real reasoning, you could go on all day about it.
It's not really reasoning.
Okay, fine, but it can solve puzzles.
An LLM, which was not really intended for that purpose, is able to solve puzzles, make its way through mazes, do math.
LLMs aren't made to do math.
And yet, as you scale them up, they're better than human beings.
They're solving complex problems at PhD levels.
Yeah, well, LLMs not so much.
But yes, they do better than the average person.
They do better than us.
If I understand correctly, Grok does that.
And Grok is an LLM.
Yeah, but it's not better than a mathematician.
Whereas specific AIs that are made for math or coding...
Actually, there's an example of a kind of generalist tendency. There was a math Olympiad that OpenAI got the gold in, right? Their algorithm got the gold in.
And there was also a coding contest.
And it was the same algorithm.
If I'm not mistaken, it was trained to do coding, not math.
I could have it flipped.
But one way or the other, it was trained for one purpose.
It was able to excel in both.
So yes, it is quite different, though, from robotics control or even image recognition systems, even if they're more integrated now.
Like GPT-5. Before that, it was a very rapid transition from these very narrow systems; like, Google's Gemini was multimodal.
Everybody made a big deal out of it.
You have an LLM that's able to kind of reach out and use image tools and use audio kind of tools, right?
Like to produce voice and all that.
Now it's integrated into basically one system, over just the course of a few years.
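Here is a rough sketch of the tool-use pattern being described, an LLM "reaching out" to image or audio tools from one integrated loop. The method and field names are hypothetical; real function-calling APIs differ in detail.

```python
# A hypothetical sketch of an LLM tool-use loop: the model either
# answers directly or asks for a named tool (image generation, speech
# synthesis, ...) and sees the tool's output before continuing.

def run_agent(llm, tools: dict, user_message: str) -> str:
    history = [{"role": "user", "content": user_message}]
    while True:
        reply = llm.respond(history)  # hypothetical LLM call
        if reply.tool_name is None:
            return reply.text  # plain text answer; we're done
        # The model requested a tool; run it and feed the result back
        # so the model can keep going with that output in context.
        result = tools[reply.tool_name](**reply.arguments)
        history.append({"role": "tool", "name": reply.tool_name,
                        "content": result})
```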
And I don't think that anytime soon you're going to get the soul of a true genius writer or genius musician or genius painter out of these systems, right?
It's just going to be slop for at least the near term.
But you do have to recognize what you're talking about, superhuman intelligence, right?
Superintelligence as defined by Nick Bostrom would include something like Alpha Zero or even Deep Blue.
Like back in 1997, with Garry Kasparov.
I think that fantasizing is probably not something to get stuck in, but these fantasies are not only driving the technology, but the technology is living up to some of those fantastic sort of images.
So in the case of AlphaZero: AlphaGo was trained on previous Go moves; AlphaZero started from scratch, yeah, just the rules, and played against itself until it developed its own strategies, and it's now basically a stone wall that can't be defeated. Same with, at least, the best drone piloting systems, which outperform humans. That's kind of a feature, and maybe it's an emerging feature, but it's a feature of AI: once it defeats human beings,
once it gets better than humans, there's never a time where human beings catch back up.
Yeah.
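A compressed sketch of the self-play loop Joe describes: AlphaZero starts from the bare rules and improves by playing against itself. This outline omits the neural network and Monte Carlo tree search that do the real work, and every interface here is a hypothetical stand-in, not DeepMind's implementation.

```python
# A hypothetical outline of self-play training. `game` and `policy`
# are stand-in objects; the point is only the shape of the loop.

def play_one_game(game, white, black):
    # Play a single game between two policies and record the result.
    state = game.initial_state()
    players = (white, black)
    turn = 0
    while not game.is_over(state):
        move = players[turn % 2].choose_move(state)
        state = game.apply(state, move)
        turn += 1
    return game.record(state)  # outcome plus the moves that led to it

def self_play_training(game, policy, iterations: int):
    # No human games in the loop: the policy learns only from games it
    # generated against itself, starting from nothing but the rules.
    for _ in range(iterations):
        games = [play_one_game(game, policy, policy) for _ in range(1000)]
        policy = policy.update_from(games)  # hypothetical learning step
    return policy
```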
I think, if that's the goal for these developers, right, wouldn't that kind of specialty and, I guess, what's the word I'm looking for, just that kind of capability, isn't that something that you could...
If it's better at finding to the point that we were talking about earlier about finding cancers, if it's better than humans ever will be and it always is better and it gets so good that it doesn't miss cancers, isn't that a positive for humanity?
No.
I don't think so.
I don't think you can outsource that kind of optimism to this false God and count on that forever.
I think outsourcing so much stuff to the machine will just eliminate humanity.
And at a certain point in that world, there is no more humans.
Well, I mean, so they'll be living in their little 15-minute cities, hooked up to the metaverse, and disease might not even be a thing because they'll be eating vaccine slop.
So then your opinion is it's better to have the real world with disease and cancer and everything.
Yeah, that's part of humanity, unfortunately.
There's risk out there.
And once you start to play God, it goes wrong.
And I don't think we should, no.
Well, I mean, there's a line where, like, there's obviously medicine, and we're trying to heal people.
And I don't agree with that.
But if people have the option, is it really making people transhumanists?
No, again, you don't have...
You become a transhumanist by default.
There are people that don't go to the doctor currently.
There are people that say, I don't want to.
I mean, you've got enclaves of people.
If that's an option and you're not forced to do any of this stuff, isn't it more immoral to prevent people from having the option? Isn't that the desired outcome, where people can make the decision themselves?
Yeah, I understand having the option is fine.
I just think that in the not-so-distant future, there won't be an option.
So you think that it's all just authoritarianism all the way down?
It seems to go that way.
I mean, I think all these people want total control.
They've totally rebranded to be a part of our government right now.
They haven't been before.
Now they're front-facing.
So would you say that the Biden administration had the proper...
Well, and other nefarious operations.
The Biden administration selected certain companies, but there was no competition.
Would you think that the Trump administration's outlook, or their approach, is a better approach?
Or do you think that it's just you're just straight up no on it?
It doesn't matter who's in office because they are parasites.
Silicon Valley is parasitic and they take advantage of every administration.
They're shape shifting ghouls who will take advantage like Zuckerberg was all in and totally fine to censor all of us during covid.
And then all of a sudden he saw Kamala wasn't going to win, so hey, now he's shape-shifting to MAGA-lite, you know, with a new haircut and a chain on.
I think here it's really important to emphasize the distinction between AI as it currently exists and what it could become further down the road.
I mean, at least AI as it exists right now, it still obeys and it follows human prompting, right?
So in some sense, it is rational, but it lacks will and it lacks agency as of today.
Well, we've seen it try to rewrite itself to avoid being reprogrammed.
You know, we've seen it try to, like Phil is saying, developing secret languages at a minimum.
We could say it follows heuristics that are designed by humans and that humans are still capable of modifying.
And there's probably more longer conversations there.
But to what Bryce was saying earlier, there are still things that are unique about humans that AI has not yet replicated.
And the question of what's ahead, it's still, it's not obvious that AI will ever fully have a rational will.
Right. And it's not obvious that AI will ever actually have reflective consciousness. We see bits and pieces in certain news stories, and maybe there's science fiction about it. There are people in tech... I was just going to say, should we take them seriously? So there are transhumanists for whom, of course, this is their vision for the future, and we should take it seriously. We take the people who are building it seriously because they do have those concerns, that it might be sentient, it might become Skynet. I think these are real concerns, but it's not where AI at least stands today.
So as it stands today, again, humans are still unique in the sense that they have a soul, they have moral responsibility, that they have a rational will.
And for now, AI is a lever that humans are using.
And there is a risk that that changes in the future.
But I think it's just really important to make that distinction.
Yeah, but then you also have AI like Lavender right now, which has a 10% error rate killing innocent people in Gaza.
What's the error rate for humans, though?
Probably also bad, but I'd rather have the urgency of humanity.
Depends if they're legal or illegal.
I'd rather have the urgency of humanity behind war than outsourcing the death of war to a robot like it's a video game.
Because then that will just perpetuate the forever wars that we're already in, but then it will be constant bombing everywhere.
There's no reason to think that any of that ends if humans are in control.
There's actually more reason to think that if AI and robots were in control, it would end.
I think it'll be consolidated, that power will be consolidated into the AI, and then there's no saying no to it, to a certain point.
There's no saying no to the United States right now.
Well, but despite not agreeing with administrations, and I have many opinions about this current one, you can hopefully sometimes not go to war and have a politician who says, I'm not going to start that war or join that war, even though we're funding all these wars right now.
You were talking earlier about, you know, sickness is part of the human condition.
War is part of the human condition.
War existed before human beings, right?
I think humans should be participating in these risks.
I don't think we should be creating things to keep doing it while, you know, we're being governed at home in this future by the tech overlords.
So the vision you're putting forward there, and I don't know if you're making the argument, or I'm just trying to make the argument as pushback on everyone's ideas here.
But that idea that perhaps algocracy, a system in which the algorithm determines most of the important outcomes and the processes by which we get to them.
That dream, however you want to package it, transhumanism, posthumanism, the sort of EA and long-termism point of view that in the future there will be more digital beings than human beings, or just the effective altruism.
Effective altruism, yeah.
Or just kind of the nuts and bolts, like what you're saying.
If you turned war over to the algorithm to decide what target is legitimate, what target is not, what strategies are superior, which strategies are not, that it would fix war.
The problem with these technologies right now is acceleration and offloading of responsibility and offloading of human agency to the machine.
But at the top of that are human beings who are creating the systems, human beings determining how the systems are used.
But for right now, what we know is that people like Mark Zuckerberg run companies that are willing to produce bots that are intended to form emotional bonds with children, and at least up until last week it was company policy to allow them to seduce them in softcore ways. Or you have people like Alex Karp, who are very reckless in their rhetoric around how their technologies are being used by governments around the world,
including Israel, including Ukraine, the U.S. obviously, and a number of our allies, to accelerate the process of killing other human beings.
And is it a 10% error rate?
Is it 50% error rate?
Is it 1%? Who knows.
We just know that it means they are killing at an ever-faster pace, and they have the justification of the machine and of the human beings.
And I don't want to put it all on Palantir, right?
You have Lockheed Martin, Boeing, Raytheon, counterparts across the world who are creating similar systems.
But these systems, especially in the case of warfare right now, yes, some parts of it are offloaded to the machine.
But at the top of that hierarchy is a human being making these decisions, and it comes down to whether you trust their judgment in the use of these machines and in piloting these machines.
And at present, I would say in the case of Gaza and in the case of the reckless war in Ukraine, which has killed, you know, so many Russians and Ukrainians and just devastated the area, I don't trust the humans at the top of this system.
They are predatory.
Seemingly inherently so.
So then that's actually a question about the humans, though.
It is a question about AI.
I think all these are questions about humans because in the case, in warfare, it's a little bit different, but in the case of education, in the case of corporate life, right?
Business life, in the case of social life, it's both about the humans at the top producing, deploying, investing in these machines and it's about the humans at the receiving end who by and large are choosing this.
They're like, oh yeah, this is great.
I'm doing my homework.
I'm going to research it so much faster.
You know, and so it's by and large right now, it's yes, it's in the hands of humans.
I think that the ideas about sentient AI or willful AI, AI with its own volition, I don't think that those ideas should be discounted because it has more decision-making power, more degrees of freedom than it did before.
Yes, humans are in those, it's a symbiosis, right?
It's like a parasite needs a human host to prompt it, but it is still, in my view, by and large parasitic, not mutually beneficial.
And it's parasitic because it is, yes, the machine is the medium, but it's parasitic because the people at the top are parasitizing us or preying on us by way of that machine.
It should, I think, be that Zuckerberg should be held over the fire, Alex Karp should be held over the fire, not the algorithm; but the algorithm is the vehicle by which they are accomplishing their aims.
And again, I simply don't trust their moral judgment.
Like when they talk about summoning the god. Elon used to say it was summoning the demon, talking about AI.
He's rebranded to saying it's summoning a god, right?
And I think at some point there will not be a human at the top, you know?
And I know we might have disagreements on the technocracy.
And like the way I see it as right now is we're consolidating into a technocracy.
And Project Stargate was a big promotion of that, in my opinion.
And with these new technocrats, it's that they believe that we should have a monopoly.
You know, Thiel is into monopoly.
They like people like Curtis Yarvin, who's a monarchist.
And I think through AI, not too far down the road is when they develop their digital king, or their digital monarch, that will be at the top at some point and make these rules, which is how you, I think, build a Skynet situation. Which sounds ridiculous, but is literally something that even Elon Musk warns about. He uses Skynet.
Why do you think that it sounds ridiculous?
To some people listening, I think, they don't think that AI could evolve into a Skynet situation.
Okay, so to that point, I want to go around the room here. Do you guys think that AGI, kind of AGI superintelligence, is something that is possible? Because there are people that say, I don't think it'll ever actually be that smart. They're like, oh, it doesn't have the compute power; we'll never be able to have that kind of AGI; that intelligence isn't something that can be artificial. Do you think it is?
I think it's possible.
I don't think it's certain that it will happen.
It's possible.
I certainly don't think it's possible.
And I think a good analogy would be, let's just say you have mechanized like industry that creates things that are very efficient.
Let's just say they create clothes, right?
We have very good machines to build clothes.
But today, the highest quality clothes are all handmade by humans.
And I think there's an analogy.
Only certain people can afford it.
Yeah.
You know, only a very small number of people can afford those nice clothes.
Well, but there's still humans at the top who are the best at what they do.
And they're using probably the most human types of skills to make those clothes.
And I think if you expand that to an entire economy, the promise of AI is actually that over time humans are doing higher- and higher-value work.
Not that fewer and fewer humans are working, or that there are fewer and fewer humans, but that humans are actually flourishing more than before.
To me, that sounds like when a communist tells me they can create utopia on Earth.
Oh, you're going to just sit around and write poetry.
I don't think that's the future with AI.
I don't know about it being sentient.
I think it's very possible.
But what I do believe in is that it will become the thing that we've outsourced all of our life to and that at some point we'll just be subservient to that.
Despite it, it might not be sentient, but it'll oversee everything, and I think that's a very big consequence for humanity. I think the consequences outweigh any of the short-term benefits that you guys are talking about.
To your point, I think if it becomes sufficiently intelligent, it won't matter if it's actually sentient behind the screen or...
Yeah, right. It will seem sentient to us, and they might even redefine what consciousness means at some point to include the AI.
AGI is a tricky one, because what is AGI, right?
Artificial general intelligence.
What does that mean?
It was, I think, originally coined in 1997.
So for the purposes of this discussion, not just artificial general intelligence, but artificial superintelligence.
Superintelligence, yeah.
And again, another one that the definition has changed pretty dramatically.
Right now, the fashionable definition, you hear it from people like Eric Schmidt, the ex-Google CEO, now partnering with the Department of Defense.
It has been for years.
The definition he's been running with, and Elon Musk also, it's just fashionable now to say that artificial general intelligence is an AI that would be able to do any cognitive task that a human could do.
Presumably it would do it better because it's going to be faster.
It's going to have a wider array of data to draw upon, all these different things. But that's the general AGI definition that's fashionable now. Before that, setting the 1997 definition aside, it really started with Shane Legg of Google and was popularized by Ben Goertzel, roughly 2008 or so.
And for them, it was more about the way in which it functioned.
They wanted to distinguish artificial narrow intelligence.
So a chess bot, a goonbot, a war bot, any of those bots from something that could do all those, right?
It could play chess and it could kill and it could have you gooning all day long and general intelligence.
And it could be accomplished either by building a system that was sufficiently complex that this general cognition would emerge.
And I think that's what Sam Altman and Elon Musk are betting on with scaling up LLMs and making them more complex.
Or it could be like the Ben Goertzel vision where you have just a ton of narrow AIs that kind of operate in harmony.
And that is now what we call multimodal.
Superintelligence.
Nick Bostrom really put the stamp on it in 2014 with his book, Superintelligence.
And for him, it could be either a general system, it could be narrow systems, it could be any system that kind of excels beyond humans, with the danger ultimately being that you lose control of it.
Now, Eric Schmidt, Elon Musk, people like this are going with superintelligence just means it's smarter than all humans on Earth.
I'm not exactly sure what that means, but that's the definition.
Whatever you're talking about, none of that shit exists, right?
All that shit's a dream.
But, like me,
And then with this, you're talking about AGI, like a system that can generalize across domains, concepts. It's wild to see the rapid advance from the implementation of transformers (2017) and the real breakthroughs in LLMs at OpenAI.
To the multimodal systems that, just a few years later, became much more popular, to more integrated systems.
So before, an LLM meant a chatbot.
Now, an LLM means a system that can recognize images, produce images, produce voice.
It is more general.
It's not general in its cognition so much, except there are certain seemingly emergent properties coming out, like we were just talking about a moment ago: LLMs doing math better and better, LLMs solving puzzles and mazes better and better. In some sense, I hear this a lot, actually, that people say, oh, I'm working on this problem.
And I turned to the LLM and it solved it.
And, you know, I have a good friend.
He's a lawyer.
And he was doing a case analysis.
And he had done it himself.
He had already gone through all of it.
But he wanted to see if ChatGPT could do it, and it came up with basically the same thing that he had spent many, many hours on in just a few minutes.
So it's like the AGI that they're talking about, like what Sam Altman seems to have been pitching before the big GPT-5 flop, is something that is more like a human than a machine.
And that doesn't exist, but it is the case that the technology, you take that big cosmic vision, or all those cosmic visions, of what this godlike AI would be, it can't be denied that the actual AI in reality is slowly but surely, faster than I'm comfortable with, approximating that dream.
And unless it hits a wall, unless it hits a serious S curve and just flattens out, it's at least worth keeping in mind that it's not a stable system.
Anything, I think, could happen.
So I'm into the idea of it being a tool and it can be a good collaborator for people.
You know, when talking to AI, if you want it to help edit something, I understand that that's an awesome thing. But when you talk about the narrow path you're creating, or trying to create, with AI, what does that entail? Are we talking about trying to implement regulations from the top down, or is it something you're doing within your company?
Yeah, so maybe the place to start would actually be a historical analogue here.
The railroad in the 19th century in Europe, I think, is actually really interesting.
We could also talk about social media and the internet as other potential analogues.
But the reason why I find it particularly interesting is we sort of underestimate how transformative it was for the average person, let's say, in Britain.
They basically went from being in these fairly isolated towns, with some capacity to travel between them, on horseback or by carriage or something like that.
But the railroad enabled much faster travel for humans.
It was actually the fastest they'd ever traveled, right, when they got on the railroad.
And the railroad stations went into these people's towns and brought random people, like large numbers of people into their towns, and also industry through their towns as well.
And in the early days of the railroad being built, it was actually extremely contentious.
So the wealthy aristocracy basically opposed it on the grounds of just not wanting railroad tracks going through their land that they felt like they had ownership over.
And then on the other end, more working-class types, I mean, a lot of times the station would go right in the middle of the town where they lived.
And it was unbelievably loud.
It was bringing all these people they didn't know right into the middle of where they lived.
It made them feel unsafe.
And there were a number of high-profile accidents where the train would just come flying through and just smash into something, and lots of people were injured.
And the reason why I bring this up is basically that it was actually extremely disruptive to the day-to-day life of many people.
And there was resistance.
So there are several interesting examples where aristocratic types would oppose the railroad track going in.
And again, more working-class people, in multiple instances when a new station was opening, would stand and block the way of the train as it was coming in.
One interesting side note I would put here is that this was not the case in America when the railroad went in.
The reasoning there being, I think, that in America there was this manifest destiny thing going on.
And we also, in America, we sort of wander outside and stare at the moon, I think, a little bit more.
America, the United States, was a much bigger area to cover.
Exactly.
So it was less disruptive.
It was more about forging something new versus disrupting something that already existed.
Right.
And so this is why I think the railroad in Europe is a pretty interesting analogue. One other thing that's worth noting is that the media and doctors were actually circulating large numbers of ostensibly fake stories about trains.
They talked a lot about madness that would supposedly emerge from riding them. So somebody would stab somebody on a train, and the doctors and media would say that it was the jogging of the brain and the rattling of the train that made him go insane and caused the stabbing.
But of course, it's just humans getting into fights.
And it was just a fight that happened on a train and somebody got stabbed.
And similarly, a lot of the headlines actually described the trains as shrieking and demonic.
And there was a lot of this sort of language initially used.
And so it's a classic Luddites-versus-accelerationists type of dynamic.
I bet you're a big fan of the Pessimist Archive, huh?
I'm not familiar actually.
It's a great compendium of all these sorts of stories: you know, the radio is turning your daughters into whores, the rifle will allow the engine to kill all the whack men, these sorts of things. These exaggerated examples from the past, obviously you see it now, right? Like, AI is going to kill us by 2027, we'll all die, right? Although that's not what the 2027 paper said. But anyway, that sort of idea that we're all going to die.
That would actually be quite relieving.
A lot of the anxiety would be relieved, and we would know then what was up.
We would not only know what was up with the AI, we would know what was going on with the afterlife, and we could all do it together rather than just one at a time, which is really not fair, to get singled out.
You know what?
You're in Pierce, Yoshi, grab a drink sometime.
Is it better for humanity?
But that's probably not going to happen.
And so it's very similar to climate change, in the sense that climate change sucks up all the oxygen in the room, and the problems of species loss, the problems of habitat loss, and the problems of pollution lose a lot of the public attention that they should have, even though these are things that you can see very clearly and measure very clearly.
Climate change models, let's just say that's a little bit more hypothetical.
And in the same sense, the major problems, the immediate problems of AI, I think are going to be, and it's already evident, the psychological and social impacts. What does it mean when human beings begin to become companions with machine teachers?
You look to the AI as the highest authority on what is and isn't real.
And you train children in this global village of the damned to become these sort of like zombified human-AI symbiotes.
And then beyond the social ramifications of that, there's having, like, you trained your AI on your grandma, right? And everything's a given, like, your grandma's there, you know, bitching about the mashed potatoes or whatever, on a screen, on an iPad.
You know, these sorts of things, how far will it go?
I don't know. How far did opium go, and fentanyl, and OxyContin?
How far did porn go?
Pretty far.
Beyond that, though, you've got the economic concerns and it's all really up in the air, right?
Is AI going to make your company more competitive?
Is AI going to replace all your workers?
I mean, you look at, was it Klarna? Klarna, and they made this big announcement: we're replacing all our workers with AI in customer service. And then they were like, oh, actually, we're hiring again, because it didn't work out so hot.
Maybe it'll be like that, or maybe it'll be more like companies like Salesforce or Anthropic, where the coders really are being replaced.
The low-level coders are being replaced.
But these economic concerns, and I think for you, especially for you, Bryce, the economic angle is clearly you take it very seriously.
And I read the Volus mission statement; it doesn't include anything like that. I mean, it's basically a rejection of the whole transhumanist vision, subtle, but a rejection of it, and an embrace of these technologies in their more, I guess, humble forms, low-level forms, narrow forms. And it makes sense. But I really think, ultimately, that long-term vision matters, because these are the frontier companies, right? And they're driven by the long-term vision. They've got all the money, they've got the government support. The carousel of the federal government has now given favor to Meta, given favor to Palantir, given favor to the whole a16z stack. Their vision of the future is going to make a big difference kind of regardless of the technology. Like, people have been able to hypnotize whole tribes by waving totems around, you know, and it's like,
if you can do that and you've got a totem that can actually talk and do math, you're talking about a religious revolution beyond anything that's been seen before.
And I think that all of those things, like all of those problems are just beyond the scope of like the nuts and bolts day to day.
Like, does my AI get...
Right.
Like, I understand there's been fearmongering throughout the ages; Phil and I talk about the synthesizer sometimes.
We talk about the trains.
The thing I think that sets AI apart is that it is a vector for almost everything about humanity.
You know, it's about education, it's about children and safety. It's war. It's going to be expression, with regulations where they're trying to say you can't do deepfakes and whatnot. So really, everything kind of falls into the black hole of AI and becomes a much bigger existential crisis. Although, I understand existential crises. The Luddites, who I agree with, weren't anti-tech; they were more anti being replaced by mass automation. The OG Luddites were still using their own technology of the day, but they didn't like that factories were being built and filled with machines that took everyone's jobs, right?
And that is one, just one part of what AI will be doing.
I think the point of that story was actually, was to highlight basically that there were serious risks and things went wrong and humans got on top and figured it out and solved it.
They moved more of the stations further out from the city.
They put in place better safety measures on the trains, et cetera, et cetera.
You're absolutely right that AI feels like it is more transformative and the risk profile is potentially higher.
I don't think we're quite there yet, but it's definitely in the future it could get significantly more risky.
I would say one risk that exists today, and this was something that I ran into directly, is, for example, the way that AI can allow people today to mass scam Americans, right? So I got this email from a guy named Akshat, who was, no, he's a man in India.
Okay.
And he was sending 40,000 emails today, but they're extremely well tailored, right?
In the email, he mentioned portfolio companies that we work with, names that I recognized.
It was a very well tailored email and it was generated using an LLM.
He's blasting out tens of thousands of these.
And essentially, in this case, Akshat was basically offering us to offshore our labor, so our associates at New Founding, for a quarter of what we're currently paying them, right?
And he's able to do that. And it's not just using Mailchimp, right? He's basically collecting data in order to produce the right sort of targeted email. I mean, essentially it's a form of scamming that's using AI in order to be more effective.
You can think about how this could apply to like if that was targeted at your grandmother or something like that.
And so I think, very practically, today it's about blocking people like that and making sure that AI, right here and right now, isn't used to harm Americans, while we continue to monitor these further-out risks.
But it's also important not to confuse the two.
And to that point, I think when we treat AI as some autonomous technology, like a train steamrolling ahead where we either get off the tracks or get run over, I think that's the wrong way to think about AI, because it treats it as something we're powerless against, as if we're completely passive.
And I think there are a lot of doomerists out there who want to talk about the deep state and the globalists, and how we're just a tool.
We're a cog in their machine.
And that takes away the agency that actually is what makes humans different from AI.
So I guess I would actually want to add maybe spin a positive vision for what AI could be.
And I think part of the way to solve the AI problem, to define that narrow path for the future, is actually for people to start building things using AI appropriately that actually make America better.
And I think what does that look like?
There's the fear that AI actually enhances this military-industrial complex.
That's fair.
I think AI is actually very different from a lot of technologies that have come around, like the airplane, like the computer, like the internet.
All these have been started as military technologies.
And that's actually kind of their natural bent, as military tools; then they trickle down to large global enterprises, and then finally to consumer applications, right? But AI, interestingly, its first application, at least when we're talking about large language models, is actually for individual people, to help make their lives better, to reduce monotonous work.
And I think the way I see it is that AI is going the other way around, that we can actually use AI effectively in small businesses where humans who are really high agency, virtuous, good leaders can actually get more done and they can have more success with AI because they're able to get higher leverage.
I think that's part of the trick to get you so addicted to AI to allow the machine into you so it's harder to divorce it from you and easier to control you.
Again, if I could add just an old man's sperg-out, just for a second.
AI actually, in the early, early conceptual phase, was deeply tied to the military.
Alan Turing, for instance, Marvin Minsky, the pioneers, deeply tied to the military.
And maybe this isn't AI specifically, or at least it was kind of cybernetics: Norbert Wiener, who, aside from having one of the funniest names ever, was a military man, and with cybernetics he was writing about and thinking about cybernetic systems and human-machine integration, human-machine symbiosis, for the purposes of military dominance.
And so it is, even though you're right that the LLM revolution comes out of Google, right?
Really.
I mean, taken up by OpenAI and largely civilian. But the idea of thinking machines and the development of various algorithms were deeply tied to military institutions and military concerns.
And I want it to be what you're saying.
I want that idea of like it can just be a great collaborator for the individual, which should be great.
But it's something, all these things are always hijacked.
And the people who are either building this stuff right now, most of them, not all of them, and the military-industrial complex, like they have no ethics.
So a lot of those things will eventually be, and some already are, turned against us.
Can I give, I'll give a very practical example. And I think that, again, that is a serious risk, and we should continue to monitor it.
So we have a portfolio company.
It's called Upsmith.
And specifically they work with like plumbing, HVAC, these sorts of companies and individual business owners.
The average company size they work with is five people.
And there's around 300,000 to 500,000 of these sorts of companies in America, right?
So it's like working Middle America tradesmen type individuals.
As it stands today, when they want to book to go to somebody's house to do some repairs, they actually have to outsource most of this to these companies that take care of all of the overhead.
A lot of it's actually offshore labor or at a minimum, it's these like, again, extremely soulless, sort of bureaucratic type jobs.
And what happens also is that these plumbers lose a lot of business.
They often will lose up to half of the people who call them to book, to have this thing swapped out or whatever.
They're just doing work and they miss the call.
They don't get back to them.
It's not booked.
And basically, American skilled tradesmen are missing out on a lot of value or they are capturing the value and forking over a huge amount to basically Indians in India who are running some phone bank or whatever.
So one of our portfolio companies, it's called Upsmith.
Right now what they do is they basically have an agentic AI tool that just takes care of bookings for these plumbers.
So if you call or text or anything like that, it'll basically just take care of calendaring and booking.
The whole exchange will happen back and forth, and it'll just send the plumber to the next house.
Now, I think, from the plumber's perspective, that helps him. If he doesn't own a house yet, he can buy a house.
He can get married.
He can, you know, do these sorts of things.
And I see that as very practically positive for Americans.
And it's actually shifting, again, sort of economic opportunity away from bureaucrats, away from offshore to the house of a guy in Philadelphia or whatever.
And so at New Founding, at least, we're interested in finding those opportunities and backing the ones where it's very clear that this is going to help Americans.
And I think hopefully that helps to give an example of the sorts of ways that AI can more practically be of benefit, despite the presence of the risk.
I'm totally against a seven-year-old getting onto some artificial intelligence doom loop, where they're, et cetera, et cetera, as we've been talking about.
Very weird things can happen.
And so there have to be guardrails and such.
But do I think that that plumber should have to rely on Indians to book?
Oh, yeah.
I'm with you.
I don't know.
That's one of those short-term benefits I can see being positive from AI for people.
I think that's great.
I guess I'm more just concerned about down the future, how it's going to work and maybe how using AI so much and getting businesses and people basically addicted to it or relying on it so heavily.
In this case, it gets integrated into the business.
And then something happens down the road, and they try to pass regulations, and this can be a big thing where people perhaps will riot, not riot, but revolt against these regulations. And I'm skeptical of regulations, how that works, because they've been married to AI, basically, in their companies. And I do see a future where we're going to have these big conversations, these big fights in politics, about what we're allowed to do with AI, because some people have been abusing it, like the scammers you're describing. It's going to be like another two-way debate, but with AI. I don't know. How do you guys feel about regulation?
How do you regulate?
I mean, I think it's at least a step. Most of the problems with artificial intelligence are going to be dealt with positively on the level of just human choice, personal choice. Most of the deep problems that we see right now, everything from the AI psychosis, where, you know, people are kind of disappearing into their own brainstem, that sort of phenomenon.
These are things that human beings chose to do by and large.
So, and that's, I think, for me, the most important thing: to at least make people aware, and activate some sense of will and agency, to put up cultural barriers so your kid doesn't become an AI symbiote.
Well, that's something that I think that people have learned just from social media.
Again, I consider social media algorithms as kind of, you know, infant AI as it is.
And so people are seeing the negative consequences and seeing bad things that can happen for their kids, or at least the smart people are noticing it, and they're not allowing their children to have, you know, screens all the time.
It is rather disheartening when you go out to dinner or whatever and you see families that have like a kid sitting there with a screen and it's like, well, that's the only way, you know, that's the only way I'll eat or whatever.
It's a really terrible, terrible development.
I think that there needs to be more emphasis put on informing people of how bad that is for children. But, like you're saying, that kind of agency, that kind of discretion by parents, is what really will prevent people from getting into this situation in the first place.
I don't think the majority of people that are having problems with social media, whether they're problems delineating between reality and what's actually social media or music, are people that are actually well-adjusted adults. They tend to be young people, people younger than me.
I'm an old guy.
I'm 50, right?
So I was one of the kids that was like, be home before the streetlights come on, but otherwise get out of the house.
And so I had a lot of learning how to function in the real world by myself as a kid.
And I think that that kind of thing is something that's important for kids.
And I think that parents need to do that kind of stuff as opposed to just handing them the screens and stuff.
Well, to that point, though, if I can continue.
The personal choice is the first bulwark, right?
And people are taking an active role; they're not just being told, this is the future, you need to turn your kid into an AI symbiote, and then doing it, right? Tons of people are giving their kids screen-free and phone-free childhoods.
There are laws being passed in certain countries and institutional policies being put in place in schools and other institutions.
You can't just sit around on your phone and disappear into your own brainstem.
That's good.
But when it comes to military AI, when it comes to Dr. Oz, the inestimable Oz, who was shilling chips for your palm on his show to his credulous audience just a few years ago, now saying that in just a few years you will be considered negligent as a doctor if you do not consult AI in the diagnosis or prescription process. And that goes well beyond just personal choice; now it's an institutional policy or perhaps a law. So even just the ability to say no.
In many instances, you can't say no and won't be able to say no.
And so the laws are going to be important.
And I think that right now, at the state level, if you look at the states that are most inclined to legislate, California, for instance, and you look at the 18 laws that they've got in place.
Things like you can't make deepfakes of people or you can't use someone's image against their will, mostly tailored to actors and whatnot, right?
You have to get permission.
You can't make child porn with AI.
You can't defraud people.
Some of them overlap, but about these 18 laws, people say all the time, well, America will fall behind China.
They'll have more goombots than us.
And I don't buy it.
I don't buy it at all.
I think that if you look at just the most heavily regulated state, California, it's reasonable.
And the 19th law, SB 1047, I finally remembered it, would basically have held AI companies accountable for damages done, just like you would with an automobile company or a drug company.
Well, that one got killed.
And I think it's a very reasonable law.
Or if you look at Illinois, you know, it's Pritzker.
I can't stand that guy.
And Illinois politics are super Democratic and very corrupt, and yet they had the wherewithal to pass a law saying that you can't have a licensed therapist bot.
You can't have people sitting around talking to bots and charge money for people talking to your licensed bot.
And as a licensed therapist, you can't just hand over your client to a bot legally.
And it's a very reasonable law.
So I think, above all, it'll get kind of like with abortion laws: we'll get to see, ultimately, who was right.
Does AI really turn you into a superhuman and give you superpowers, or does it make you into an atrophied shlub?
So two points that I want to talk about with what you just said.
First of all, I'm probably more skeptical of therapists than I am of any AI and the concept of therapy in general.
I think it's literally just, you know, it's just pay me money to talk to me.
So, honestly, I don't think that an AI would do worse than a therapist, because I think therapists are, like, the pinnacle.
What if the AI is just the compression of all of the worst therapists?
Well, I mean, the suicide rate spike.
Therapy, in and of itself, in my estimation, at least for men, is like the pinnacle of snake oil.
So there's that.
But second of all, you were talking about whether or not there would be mandates for using AI for diagnosis. Is the process of diagnosis actually even something people care about, for the most part? Or is the important part of it really, like, are you getting the right diagnosis?
I would say...
Because if AI can actually make sure that your diagnosis is correct 95% of the time, as opposed to, say, 70% of the time... And the more strain there is on the healthcare field, the fewer doctors that are actually well trained, the fewer you have, the worse the results are actually going to become. So would it really be a problem if the government were to say, look, you have to at least run it through the AI and see what the AI says?
You don't have to rely on it.
You'd have to mandate it.
I don't think you can mandate it.
I think that's a problem.
To tell a private physician that he has to do that.
Well, I mean, there's all kinds of mandates in in the healthcare field.
Why is that different?
But when it comes to mandating the use of an AI, then which AI is acceptable? I just don't see how that works.
Well, I mean, again, the results would dictate, right?
Like if you've got an AI that's got a 99% success rate.
Right.
Right.
And if you've got one, you know, one algorithm by one company that actually has a 99% success rate, why wouldn't you use that?
Or why wouldn't you, why would you have a problem with them using it?
I have no problem with them using it, but I just, I have a problem with the mandate.
Yeah, and you do, I mean, you have this claim, right?
Like, for instance, a lot of the studies, comparative studies with radiology, how well is the AI able to detect cancer?
And usually it's these like tiny, tiny, tiny, tiny, tiny little tumors, right?
That the radiologist can't detect just with his eyes.
But that's very specific to that field.
And there's also the issue, I mean, we also know that a lot of those tiny aberrations may never become dangerous, while you don't want to necessarily bank on your immune system. And so you have a lot more of these kind of second-order effects that can come out of that.
If you have an AI that finds every tiny little aberration and the next thing you know somebody's getting run through some devastating chemotherapy, you know, on the basis of this AI, it's much more complicated than saying the AI is 99% better than a human.
There's all these other elements that go into it.
And then in diagnosis, I mean, we're not talking about necessarily just visual recognition.
When we're talking about doctors turning to AI for a diagnosis or to come up with a therapy, we're mostly, we're largely talking about LLMs.
And a lot of them are very specific, tailored LLMs that are trained on medical studies. The doctor would have his own opinions, she would have her own opinions, and then turn to the LLM for guidance, as an expert, right? If you're a general practitioner, you defer to experts on various things to come to the solution. But real quick: the question, I think, is not going to be answered because a company says our AI is 99% accurate, or 90% accurate, or 50%. Downstream, looking at the actual outcomes of patients to really know a statistical success rate for an AI would take enormous amounts of study, right?
And meta studies.
And we don't have that right now.
We just have advertising.
And so if we don't have the studies in place... Like, there was this whole thing that happened in 2021, or late 2020, where there was a big medical crisis, and without any real rigorous testing or studies, suddenly the advertising won the day, and suddenly you had soft mandates in America and hard mandates elsewhere. And we still don't really have a clear statistical understanding of what happened and what damage was done.
Right, you were going to say something?
Isn't it fair to say that we all want the humans, to bring back the humans, to be in charge, right?
So in the case of the doctor, a doctor who's actively ignoring very important, relevant industry standard tools to make a diagnosis, we might call that negligence.
And I think that would be fair.
The responsibility should fall on the doctor who's making the bad diagnosis in this case, just like the responsibility for a business who's doing...
If the AI, on the other hand, is a consumer product and it's causing children or adults to have psychosis, well, maybe the AI company should be responsible.
And so I think the worry with regulation is that you're mandating things that aren't, that have unintended consequences.
You're mandating things that aren't well proven because this is how you're supposed to do it or because this is your ideology.
But I think that's a concern with regulation, even regulation of AI.
I think what we need to do is bring back humans to be in charge of the AI, in a way that humans have been swept aside, in a lot of ways, way before AI, for the last few decades.
And how do you define negligent when it comes to it? Because if it were 2021, like Joe's saying, the AI would have said, I'm sure, go get every shot that you're told to get, you know, because of who built it.
I think in this case, then, right, what Bryce would argue is that the doctor should use the AI,
but the doctor does not have to listen to the AI.
Right.
So the doctor could then evaluate maybe what multiple different AI tools say, his own individual judgment, his own tests that he did, his relationship with the patient, and then make a decision.
And I think that's totally fine, because through AI, and this is one of these short-term benefits when it comes to medicine, this doctor could potentially have access to so much of your family history, to make a way better decision for you, which can be awesome.
Scary.
Yeah, because then China's going to hack it and create a bioweapon personalized just for you, which they're probably already doing with their mosquitoes.
It's literally happening.
It's happening.
The point you make about 99%. Let's say it happened, right?
And the studies did show 99% of the time the AI gets it right.
And 99% of the time, the AI robot gets the surgery right.
And 99% of the time, the AI teacher is better than a human teacher.
99% of the time, if you're looking for a mate, the AI is the one to ask.
Algorithmic eugenics is the way to go.
So on and so forth.
Is God real?
Does God love me?
99% of the time, the AI is going to tell you the correct answer.
I wonder then, Cashman, if we can defer for a moment to the wisdom of Shane's shamanic visions, and I think that we should, all practical things, all due respect to the practical matters.
What we're talking about is total replacement by machines, total replacement.
It seems like that's just inevitable.
Like, I understand the short-term benefits of helping the middle class, because the middle class right now has been genocided, in my opinion. The middle class is suffering, whatever's left of it, and it's terrible, and any way we can help them is great. But I think the difference in the conversation would be that you guys see a positive vision, because there are so many short-term benefits, and we're seeing them, of course. But down the road, probably not too far down the road, there are apocalyptic consequences that are going to be born out of it.
And it's not like we're just creating out of thin air.
We're listening to these people talk, like Altman talking about how we have to rewrite the social contract.
That's scary stuff.
You know, these guys who, they purchased their children, now they can grow them in a robot that China's creating, you know, or Elon talking about artificial womb factories where they can have 30,000 wombs, you know, where your baby can grow.
These things are so antithetical to humanity. And I don't think that is in the distant future, because we have things like Orchid, this IVF company that does genetic testing.
And I understand that the positives of genetic testing, although I disagree with people saying, well, then I'm not going to have that baby.
I'm going to abort this baby.
That's disgusting to me.
But people are doing that now.
But what Orchid is doing is saying, we want this to be the common thing amongst people; that is how we should be creating children, through eugenics, through this, like, Brave New World, IVF, high-tech society.
And it's kind of like what happened with cesareans.
Cesareans happen all the time now because it's easy for the doctors.
You know, it's easier to schedule.
But we're robbing like the miracle of birth.
You know, obviously sometimes it just doesn't happen right.
And you need extra help in the hospital.
Totally understand it.
But we've made this stuff common, you know, and the same will go for AI.
Despite the short-term benefits.
The people in charge are using it for nefarious reasons against us, against everyone else.
And will it replace the people in charge?
I even think that is the case, even though someone like Marc Andreessen will say venture capitalists are fine.
I think he's wrong.
I think he's wrong about a lot of things, and he's certainly wrong there.
I think they can replace anything at a certain point, even things I love.
And then there's a whole discussion about whether people care.
You know, you're talking about writers and AI pumping out slop.
I don't like it.
I'm sure you don't like it.
Detested.
But there's a ton of people who don't care.
You know, you can make AI music and people are fine with that.
I don't like it.
You know, and I can appreciate like, wow, that sounds good.
It's crazy.
But you are removing the human because you can just put in a prompt and then you get a whole song.
And then I hate that.
We were primed by Katy Perry and Kesha.
What happened?
Well, I think the issue that you're highlighting is that transhumanists really are the problem.
And it's not just AI, right?
because you're going into these other domains of technology where it's also a problem.
And so, once again, I think what will keep us grounded is appreciation of what makes humans unique, understanding humans as they actually are, and making sure that...
And so to whatever extent AI is dominated or is controlled by transhumanists, that's a problem.
And I think we share your...
I totally agree, but I don't think it's just unique to transhumanists.
They're the ones creating it and they're the ones with these insane visions of the future.
But it's, you know, this idea is in everyone now.
You know, everyone is kind of transhumanist adjacent, especially in power.
Well, there's certainly that, right?
That's their method of parenting.
And I think the key is actually to take a collaborative approach to AI and other technology rather than an oppositional approach of standing up on the train track and saying, stop.
That's the exact image I had in my head.
It's like if the only thing that you're saying or doing is to do what, you know, conservatives do, just standing there and saying, no, to progressives, no, stop, you're going to get bowled over.
Yeah.
By the way, that's not, just to be clear, that's not my position.
I know you weren't singling me out, even though I saw that glint in your eye.
I wouldn't stand in front of the train.
I would be more likely to find other strategies that didn't involve me getting run over.
But my argument is basically similar to the conservative argument against porn, right?
And similar to the conservative argument against, isn't it?
It depends.
I mean, you have the Ron Pauls, who I would consider to be profoundly conservative in, like, a Burkean sense, but he wouldn't say that porn should be illegal, or that drugs should be illegal, or that guns should be illegal.
But what you have to do, I think, and this is why I appreciate guys like Nathan and Bryce, and this is intuitive.
Correct me if I'm wrong, but I get a certain sort of provincial or tribal sense from you guys, that you are kind of conservative in the classical sense: the people closest to you are more important than, like, all of humanity, big H, because they're the people closest to you. And I think that should be the scope for most people, unless you're the president making irresponsible decisions about artificial intelligence, or the CEO of a corporation making vampiric and predatory decisions about artificial intelligence.
Like from our standpoint, I think that it's not like this cosmic thing where if AI succeeds, that means everybody's going to be a troad monkey.
Or if AI falters, then we're all just going to go back to the woods.
So many different lifestyles already exist and cultures already exist.
There's going to be huge pockets of homogenization due to technology, but there's also going to be huge pockets of individuation among people, individual people, and differentiation among cultures.
So I have actually a lot of faith.
You guys are going to be okay.
I think you'll be just fine.
You're going to put those cultural barriers in place.
And that is, I think, the value of conservatism, of being suspicious of change, because very often any push for change isn't necessarily going to be change for your benefit, or your kids' benefit, or your community's benefit.
The change, this radical change, is more likely to benefit the people pushing for it.
It may be mutual, but in the case of porn, drugs, maybe even the trains if you really care about, say, the bison, or maybe the entire technological system if you don't want trash islands in the Pacific, microplastic in your balls, dead bugs everywhere, and black rhinos shuffled off into heaven.
These sorts of things. It's ultimately the conservative, or the anti-tech, or the quasi-Luddite position, which, if employed properly, simply means: I am going to remain human, despite the advertising and despite whatever new gadget comes my way.
Yeah, and my appeal to the people is like, I don't want to stand in front of it either.
I don't think stopping this stuff is possible.
You know, it's like the war on drugs or the war on guns, the war on terror.
It never works.
But like what we're saying, and I think we're agreeing on this: it's going to have to happen from the bottom up, with ethics and people.
And I don't, that's going to be really tough because people are very flawed, no matter if they're in power or not.
That's just how we are.
But, you know, I think that is a possibility, but I do think, like we were talking about last night, Ted Kaczynski made some pretty good points in 1995 about the industrial revolution and its consequences for humanity.
Very wrong about what the mail is for, though.
Yeah, yeah, I'll say that for YouTube for sure.
Phil's right.
But I think he saw a lot of the issues we're in now, and we're just now dealing with it.
I mean, people are now coming to it; they'll go on YouTube and look up his manifesto and be like, wow, this guy was a genius.
He was a prophet.
He was a time traveler.
He might not like the time traveler part, but he doesn't want to do that.
But I think that is the future.
And I think he saw what we see, especially in leftists, but it's not just unique to leftists, is that there is this need to control and destroy at all costs.
That is human nature.
That's something we're going to have to contend with for sure every time there's a new advancement.
So it's not going to go away.
And I also don't agree with regulations.
I don't know how you regulate this.
I understand.
I want to make sure no one can make child porn with AI, or at all, and stuff like that. But getting rid of certain things that you can do as expression, whether people like it or not, you know, because, like, Melania would pass that law about deepfakes and whatnot.
And I think child porn is a part of that, but then it's just deepfakes of people.
I think you should be able to do that stuff, you know. I don't really like using AI, but...
Go to your bedroom.
Like these things, you would consider that to be a crime.
You can't go to my bedroom as a deepfake.
Yeah.
So with deepfakes, basically it's the line between what is caricature, what is cartooning, and what is impersonation.
So, you know, a cartoon of, you know, Donald Trump dancing with dollar bills falling everywhere on the graves of Gaza children.
That sounds familiar.
Yeah, that's just a cartoon.
But if you had a deepfake of Donald Trump saying, you know, all the children of Gaza were wrongly murdered, and then he ends up getting blown up by a golden pager or something like that, well, then that's a deepfake that is impersonation.
But should the person who made the deepfake be held accountable?
Yes.
Really? And the company?
I think that, to an extent, if your software is capable of producing a very photorealistic or video-realistic deepfake, and you've deployed it to the masses to just sow chaos, and you knew that was what was going to happen, of course you should be held liable. Google, for instance, they have among the most advanced, Midjourney too, among the most advanced video generation AI, right? There are all these guardrails in place to keep you from impersonating famous people.
Small-scale malicious, kind of like cyberbullying deepfakes, I expect to see that anyway.
But it kind of just shows something inherent in the technology, this capability that would require great moral restraint on the part of most of the population: in the case of deepfakes, in the case of bioweapons, in the case of even the construction of IEDs, in the case of flooding the world with slop.
You either have laws in place, somewhat draconian probably in many cases, to keep that from happening, or you rely on the moral fortitude of the people.
In either case, that's going to be a tough question.
That's why we're in a precarious situation.
I mean, I think that generally you guys are mostly in agreement, except for Shane. Do you think it's inevitable that AI will be a danger to people in the future, no matter what, if we have no guardrails?
Well, I think if AI holds the potential that the doomers think it has, which it hasn't realized yet, as we've been discussing, then what's most important is that the people who are involved, the people who are building it, who are mastering it, are on our side, are virtuous, and are people who care about humans. And so maybe the most risky scenario that I see is that...
But to that point, to your point about caring about humans: all of the things that we have talked about, when it comes to the medical field and stuff, all of that is in service to humans. So how do you square that circle?
I think I agree.
And I think this is what we've been talking about is there are lots of really excellent practical short-term applications that seem like they're going to benefit Americans.
But then there's the longer-term existential risk.
And I think that's where I see, in some ways, the call to adventure, basically. It's a call even to young men in America who actually care about people and care about the direction of civilization: to actually be a player in this, and not stand in front of the train, but maybe to help guide the train in a direction that's conducive to human flourishing.
And I think that that's something that's of critical civilizational importance over the next few decades.
And the question is, how are you going to go about the guiding? And I think there may have to be some better mobilization around it than that, because if every state has its own AI laws, you've probably heard these arguments, America will be completely crippled in its ability to advance AI and will fall behind other nations. So we are a nation, and we should act in tandem as a nation. But maybe there are different types of AI. Maybe there's a red state AI and a blue state AI.
Is there an Amish AI?
Can we get an Amish AI?
You've got Gab AI, which I would say is deep red, I guess you would say.
And then you've got xAI, Gemini. And so that's why I'm less worried about this monopolistic future, because we've already seen AI companies who don't agree with one another and that express very different worldviews: libertarian, progressive, et cetera.
Specifically, what sort of state laws would gum up the entire national AI agenda?
Regulation in general, you've heard this from the libertarians, benefits large companies, because small companies can't afford to comply with all the regulation, right? And so Europe is chronically technologically backwards; one of the reasons is that they have so many small regulations that it's death by a thousand paper cuts.
And I think that's what we need to avoid in the U.S.; we need to coordinate so that we're not giving the AI industry death by a thousand paper cuts, because of all the benefits that it can give us economically.
Well, that's abstract, but specifically: what laws, either on the books now in different states, Texas, California, New York, a municipality, or proposed laws, like SB 1047 on company liability, are threatening U.S. AI dominance?
What laws?
Because you hear this all the time from Sacks, Andreessen: you know, it'll destroy us, China will win, if you say that you can't create deepfakes.
Yeah, what laws would threaten the U.S. national agenda or these companies?
So you know the legislation better than I do, what's out there, what's passed, what's on the table.
But it's not about legislation, any particular legislation. It's about the idea of bringing more lawyers into the room to enforce the regulations at companies.
And so, what is the actual, how should we handle regulation? Clearly, some of these laws that were passed in California seem actually very reasonable and positive, for AI and for protecting humans and human flourishing with AI. I think we should probably take those in mind and find a way to rule in a common-sense way that will actually help. So we have to have a common, positive vision in mind, instead of just anti-regulation, pro-regulation.
Can I add one element? Any sort of positive vision, I mean, we'd be extremely in favor of those. One thing that's a recent shift: if you look at young families today, there are around 25 million English-speaking families who have children under eight.
And in that cohort, 85% of them are screen time conscious with their children.
And that's a big shift from just a decade or two ago.
Now, you could ask the same question: how many of them are AI conscious, aware of what AI chatbots and things their children are interacting with?
And it's probably a much lower number, at least right now.
And so my hope would be that one, parents take it upon themselves to be much more protective around AI and the ways that their children engage with it.
It's not necessarily that I'm entirely against children engaging with AI.
It just needs to be in an environment that's conducive for them to do well.
So no Rosie the robot.
Yeah.
No Jetsons and that.
But then also, right, parents just need to be educated on this side of the question.
And then there is just policy.
Like certain things should be banned, certain things, especially in the schools and things like that.
And that's a much longer conversation.
And what exactly is inbounds versus out?
It gets much more technical and we probably wouldn't get into all the rest.
So I feel like the argument that is made, you know, that the potential for danger from AI is so great that we need humans in the loop, has evidently produced negative results, because you've never had so many people saying, no, I want to homeschool my kids, because I don't want teachers around my kids, because I know what the teachers think and I know what they're teaching in the schools.
Ever since COVID, like, kind of pulled back the veil and parents were able to see, you know, watching, you know, remote schooling or whatever.
So I'm not sure which one is actually worse.
You know, with parents being able to see the curriculum that an AI would be producing, would be teaching the kids, is it worse to have the robot do it, or, you know, have an AI do it? Or would it be worse to give your kids to the schools that exist, or existed prior to COVID, knowing what we know now? I don't know which one is actually worse.
Right now, I'd say they're both bad.
Those teachers, most of them probably agree with a lot of the things that AI might spit out because it was built by people who agree with them.
But again, if you can see what, if you know what type of curriculum the AI is going to be teaching.
There's a whole spectrum of basically different educational, AI education type products that exist.
And some of them are actually produced by homeschool family type individuals.
And then others are totally crazy.
Far left lunatics have put out some new English education app that a kid can get on.
And there's no telling what's going to happen.
It's going to change so much.
Like if it's a private school, homeschool, using a certain curriculum, maybe it's Christian based.
You understand what's going on.
But in a public school, they could be changing the AI so much because in the physical world, they change.
They move the goalposts all the time.
Like what was wrong, what was right.
There's one thing that I want to point out.
Like this morning, like I saw this tweet, right?
And this is just about what the DNC doesn't like buzzwords the DNC is not allowed to use.
And you know that the DNC has-My entire vocabulary probably.
Well, it's things like privilege, violence as in environmental violence.
You can't say dialogue, triggering, othering, microaggression, holding space, body shaming, subverting norms, cultural appropriation, Overton window, existential threat, racial transparency, or sorry, radical transparency, stakeholders, the unhoused, food insecurity, housing insecurity, a person who immigrated, birthing person, cisgender, dead naming, heteronormative, patriarchy, LGBTQIA, BIPOC, all of these things are really the backbone of intersectional-Those are all banned.
Those are all words that the DNC shouldn't be using.
So the point that I'm making with that though is the human beings see when they get resistance and they're like, well, we need to change.
We need to change.
They don't actually change what their message is; they're just changing how it's delivered. So what you're talking about is, they're giving up on human beings; basically, you're talking about giving up on people.
And it's not, I'm not saying you specifically. But to this question: you have some proportion, a very, very large proportion, of teachers in the U.S. who are, it's a gay word, but woke.
Well, they're a majority.
They're woke.
Okay, yeah, fine.
But A, that's a problem of the system.
There were plenty who weren't before, and there are plenty of very intelligent, educated, conservative people.
So the predominant attitude among teachers in, say, the 60s would have been profoundly conservative.
Pledge of Allegiance every day, pro-American propaganda essentially in all of the rudimentary school books, right? The elementary school books.
So that shift happened after the long march through the institutions.
But so the question that becomes, and it's a valid one, like in the case of Linda McMahon, she's pushing AI or A1 teachers, depending on what day you ask her.
And there's a company that I came across actually in Berkeley, of all places.
I'm sorry, Stanford.
A woman who represented them basically described it as all AI teachers all day long with like two hours of training.
But it's all AI.
It's an experimental program.
I think the best way to think about all of this, again, isn't in some monolith that like, should we use AI teachers?
Should we not?
Like, what?
Because everybody's not going to do the same thing.
All of this is this vast global experiment.
No one has any idea what the outcome is going to be ultimately.
We're just finding out not by taking 20 kids and putting them in a cohort over here and experimenting with their brains with technology and taking 20 other ones and letting them grow traditionally and then after 20 years seeing what happens and then applying it to society, that would be a warped, fucked up experiment to begin with.
But instead of doing that, we're just doing it with all kids, as many as possible.
So you have people like Sal Khan, Mark Andreessen, Elon Musk, Bill Gates.
Sam Altman, Greg Brockman, all of whom, to some extent or another, most of them, are saying every child on the planet should have an AI tutor. A totalizing vision of how this should go down.
Now, that's not going to happen.
I suspect you won't do that to your kids, right?
Or most of the people you know won't.
So this is an experiment.
And ultimately, and I'll leave it on this conceptual note, ultimately, this is an experiment that should be understood first and foremost spiritually, but on a practical level, on a Darwinian level and on a eugenic level, which are closely intertwined.
And over time, like, say, with birth control, the most advanced biological technology of its day, which dramatically changed the gene frequencies in the US and the West especially, but across the world.
Those who used birth control had fewer children.
Those who didn't had a whole lot more.
Those who were more religious had more children.
Those who were irreligious had fewer.
And I think on both a Darwinian and a eugenic level, because we're talking about the same thing, ultimately, it's either nature's Darwinism or human social Darwinism, we're going to find out, and it's going to be diverse.
There's going to be all of these different sort of cultural species upon which natural selection and artificial selection will act.
And I think the question that someone should ask is: will my mode of life, whether it's total cyborgization or total Luddite or somewhere in between, will this continue? Will this allow me to flourish now?
And on a long scale, like a long term scale, will this allow me and my own to continue?
And it's a big question.
I don't think there's going to be a monolithic answer, at least I hope not.
I hope not, but I feel like we are moving towards a society that wants to be, especially after this administration has totally embraced all the bad people we agree on being bad, you know, they want that total control.
They want to consolidate all of your data that they've scraped off the internet that, you know, everyone already has access to in the government, but they can now consolidate it to accelerate how we can do things like pre-crime, which the UK is already starting to try out, you know, and that stuff is the future.
I'm not just thinking about me.
I'm thinking about my kids' kids and what world they're going to inherit.
And it's going to be, it's always getting worse, in my opinion, despite my hope in humanity.
We're surrounded by people who want to dominate everything at all times.
To jump in on the point on AI education, I think one thing that we often forget is how much also education has shifted just recently.
So even the lecture hall is a fairly recent technological innovation.
And I would argue that the lecture hall doesn't work very well.
If you go to college, several of us are recent college grads.
If you go, I mean, nobody's paying attention.
Nobody's learning anything.
It's a totally ineffective way to learn.
There's some professor who doesn't care, just monologues.
Why is no one paying attention?
Is because everyone's on their phones?
That is part of it.
But even in classrooms, I've been in lecture halls where they remove technology and it's still just not a great platform.
It was sort of popularized in the post-war era with the GI bill.
Basically, colleges built all these large lecture halls and this is a way to pack a lot more students in through the education process.
And so when it comes to AI education, I'm a skeptic.
I don't want to see a single solution that's crammed down every kid's throat.
That feels like a really dark world or dark path to go down.
But in a world where, let's say, there are a thousand potential tutors, a menu of AI tutors.
And maybe one of them, for example, let's say you're a homeschool kid or a homeschool parent and you hate teaching math and there's one that teaches times tables really well and it was built by somebody who maybe shares your values.
Let's say Bryce made it and I trust Bryce and it's either that or not teach my kid math or send them to public school where you don't trust the teachers.
I think once again, it comes back to the human.
It's like, do I trust what Bryce built?
Is it effective?
I want to see maybe over a few years.
Did the kids who did it actually learn math?
And in that sort of a world, I think there will be potentially really quite excellent outcomes.
But I'm totally against it.
That's actually a point that I wanted to make.
When you have a market where you can actually select and say, well, I like this type of curriculum, and I know I trust the people that produced it, for whatever personal reason you come up with. Say, like, Ron Paul has a liberty institute, right? And you want to go with Ron Paul's AI that will teach you the curriculum from Liberty Classroom.
You know, Tom Woods promotes that too.
He's another big libertarian.
That's the kind of stuff that I'd be like, you know, I feel good about that.
I would feel good about this curriculum package being downloaded into an AI that's on my computer, or maybe, who knows, maybe even a robot.
And the robot is actually teaching the curriculum that I chose.
But if you have that option, I don't see that as a bad thing.
I don't see it as a bad thing, but I think the best teachers should be the ones that can be relatable to the human child, and weave throughout their education stories and how it's applicable to the human world, as opposed to some cold, sterile screen just beating your kid with information.
And I know it can be nice, but like I don't want my kids, whether they're through, if they go to college or not, going and just plugging in.
The amount of emotion that's added to the conversation by both of you.
The gunner bots, the cold steel.
It's like, I mean, I understand that you guys are trying to make people see your perspective, but still, it's like two writers who don't use AI, psychologically unstable and unable to control their emotions.
You're watching people go extinct, Phil.
To that point, just to say this: the long, millennia-old tradition of passing information down from human to human, even with the addition of writing, even with the addition of television and recordings, it's still been the predominant way. Humans teaching other humans, or guiding them through that education and that transmission, that link that goes back and back and back.
You could call it maybe an apostolic succession in some cases.
Don't go using words like that.
But that link from person to person, it means that that information is flowing through a flawed human example, a role model.
You can see whether that information has allowed them to flourish or not.
You all have that.
We all know the brilliant professor who's really useless.
And it kind of makes you wonder if all that information was really all that worthwhile.
On the other hand, we know the very soft-spoken, concise professor who is excellent at so many different things, right?
Those sorts of human examples have been and will continue to be so crucial for the development of human beings.
I think you can bring in a robot on occasion, right?
You get, hey, here's your clanker teacher for the hour.
Don't use a hard R with her.
I think that that, okay, fine.
There's always going to be the spectrum between the Amish and the cyborg.
Nobody's going to be 100%, except for the cyborgs and the Amish themselves.
But none of us are going to be 100% on that spectrum.
It's just a matter of which way we're leaning and pushing.
And, you know, I'm not trying to stop anybody from doing the basic sort of cyborgization.
And, you know, I too have an implant in my forehead.
So I'm not trying to get too self-righteous about it.
But I do think that, again, that kind of Burkean suspicion of change matters.
Why is this change being pushed?
Is it really for my benefit, or is it for the benefit of the person pushing it?
I think it's as simple as that.
And if you decide, yeah, I want my kid to grow up with clankers.
I want my kid to marry one.
I want Rosie.
I didn't know you were into integration.
Oh, my goodness.
You know, we'll see.
At the end of the day, it will be decided.
Ultimately, I think it will be a spiritual question, but on a practical level, it will be a Darwinian question.
Which ones survived?
Which ones flourished?
And we'll just have to find out.
All right.
Well, I think we're going to go ahead and wrap things up.
So, Bryce, if you want to go ahead and collect your thoughts and give the summation of what you've thought, go ahead.
Oh, no.
If you got anything you want to shout out, your Twitter handle or whatever.
Sure, sure.
So I think just one point: technology doesn't replace the things that we care most about.
It doesn't replace the core human things.
Technology usually replaces older technology.
And so that's a principle that can help guide where we use AI, and also help us to be a little more at peace that AI probably is not going to radically transform the world we live in.
But so, you can follow me on Twitter, Bryce McDonald, and you can go to my pinned tweet if you're interested in what Volus does, which is deploying AI in Middle American industrial businesses. You can find us there at the pinned tweet.
Great. So, I'm Nathan Halberstadt again. On Twitter I'm @NatHalberstadt, H-A-L-B-E-R-S-T-A-D-T.
I'd just say that this was an amazing conversation, and I think what I appreciate about it is that we're all coming from the same prioritization of the human, and we're assessing risk in slightly different ways, or over different time horizons. I would actually love to do it again. I think it was really just an excellent conversation, and I really respect everybody's perspective here.
Just plugging my own stuff again.
I'm the venture lead at New Founding.
We run a rolling fund, so if you're interested in investing, just DM me on Twitter.
And then if you're a founder who's passionate about solving the problems we've been talking about, also DM me. You can pitch us, and we're really happy to talk with you.
Especially if you're a patriot; we want to talk with patriots.
Yeah, I would definitely echo that. It's been an honor, been really fun. New Founding is awesome, and what I read from your mission statement at Volus, an acceptable level of cyborgization, I've got to say is also pretty awesome.
On that note, I do think that it's best to build machines that work for people, right? Instead of focusing on building smarter machines, cultivate better humans. Humans first. Humans first. Humans first.
My book, Dark Aeon: Transhumanism and the War Against Humanity, is available everywhere on the Bezos system. You can pay for it at Amazon with your palm, or you can get a signed copy directly from me. Website: darkaeon.xyz and jobot.xyz. Twitter slave chain: @JOEBOTxyz. Or Steve Bannon's War Room.
Yeah, that was a lot of fun, guys. Really appreciate it.
You know, I hope that we have a positive future.
I always have hope in humanity, and I need to for my children. Despite that, I do think that AI could, if we go one route, replace many things that shouldn't be replaced: pregnancy and knowledge and creativity and all that stuff. But it's good to have these conversations, especially with guys who are in it that have ethics, as opposed to many who are in it right now, getting a lot of money from our government, who have no ethics, in my opinion.
But yeah, a lot of fun.
Thanks for having me.
You can find me online at Shane Cashman.
The show I host every Monday through Thursday is Inverted World Live, 10 p.m. to 12 a.m.
It's a call-in show.
A lot of fun.
And we'll see you guys next time.
And call Shane and debate whether clouds are real.
Thank you everybody for coming and having such a great conversation.
Everybody's input was really enlightening, and I appreciate you all coming out.
I am Phil That Remains on Twix.
I'm Phil, and the band is All That Remains.
You can check us out on Apple Music, Amazon Music, Pandora, Spotify, and Deezer.
Make sure you tune in tonight for Timcast IRL.
I will be here hosting again.
Tim is still out sick.