They made it so that anyone can use a computer, but they also made it so that no one has to learn to program.
The original vision of computing was that this is...
This is something that's going to give us superpowers, right?
J.C.R. Licklider, who ran ARPA's computing research office while the internet was developing, wrote this essay called "Man-Computer Symbiosis."
And he talks about how computers can be an extension of ourselves.
It can help us grow.
There's this marriage between the type of intellect that computers can do, which is...
High-speed arithmetic, whatever, and the type of intellect that humans have, which is more intuition.
But since then, I think the consensus has changed around computing, which is, and I'm sure we'll get into that, which is why people are afraid of AIs replacing us.
This idea of computers and computing are a threat because they're directly competitive with humans, which is not really the...
The belief I hold.
There are extensions of us.
And I think people learning to program, and this is really embedded at the heart of our mission at Replit, is what gives you superpowers.
Whereas when you're just tapping, you're kind of a consumer.
You're not a producer of software.
And I want more people to be producers of software.
There's a book by Douglas Rushkoff called Program or Be Programmed.
If you're not the one coding, someone is coding you.
Someone is programming you.
These algorithms on social media, they're programming us, right?
I can't balance my checkbook, assuming there are still checkbooks.
I don't think there are.
But let me just go back to something you said a minute ago.
That the idea was originally, as conceived by the DARPA guys who made this all possible, that machines would do the math, humans would do the intuition.
I wonder, as machines become more embedded in every moment of our lives, if intuition isn't dying or people are less willing to trust theirs.
I've seen that a lot in the last few years where something very obvious will happen.
And people are like, well, I could sort of acknowledge and obey what my eyes tell me and what my instincts are screaming at me, but the data tell me something different.
I feel like my advantage is I'm very close to the animal kingdom and I just believe in smell.
But I wonder if that's not a result of the advance of technology.
Of computing as a replacement for humans versus an extension of humans.
And so you go back: Bertrand Russell wrote a book about the history of philosophy and the history of mathematics, going back to the ancients and Pythagoras and all these things.
And you could tell in the writing, he was almost surprised by how much intuition played into science and math and, you know, in the sort of ancient era of advancements in logic and philosophy and all of that.
Whereas I think the culture today is like...
Well, you got to check your intuition at the door.
So back to AI. And to this question of intuition, you don't think that it's inherent.
So in other words, if my life is governed by technology, by my phone, by my computer, by all the technology embedded in every electronic object, you don't think that makes me trust machines more than my own gut?
But ultimately, you're giving away a lot of freedom.
It's not just me saying that.
There's a huge tradition of hackers and computer scientists that...
Kind of started ringing the alarm bell a really long time ago about the way things were trending, which is more centralization, less diversity of competition in the market.
And you have one global social network as opposed to many.
Now it's actually getting a little better.
And you had a lot of these people start the crypto movement.
I know you were at the Bitcoin conference recently and you told them...
I'm excitable on the basis of very little information.
Well, actually, tell me, what is your theory about the threat of AI? I always want to be the kind of man who admits up front his limitations and his ignorance.
And on this topic, I'm legitimately ignorant.
But I have read a lot about it, and I've read most of the alarmist stuff about it.
And the idea is...
As you well know, that the machines become so powerful that they achieve a kind of autonomy.
And they, though designed to serve you, wind up ruling you.
I mean, briefly, and we'll get to existential risk in a second, but he talked about this thing called the power process, which is he thinks that it's intrinsic to human happiness to struggle for survival, to...
Go through life as a child, as an adult, build up yourself, get married, have kids, and then become the elder and then die, right?
And I also think just my childhood, I was always different.
When I had hair, it was all red.
It was bright red.
And my whole family, or at least half of my family are redheads.
And, you know, because of that experience, I was like, okay, I'm different.
I'm comfortable being different.
I'll be different.
And, you know, that just commitment to not worrying about anything, you know, about conforming, or like, it was forced on me that I'm not conforming just by virtue of being different and being curious and being, you know.
I'm good with computers and all that.
I think that carried me through life.
I get almost a disgust reaction to conformism and mob mentality.
We've traveled to an awful lot of countries on this show, to some free countries, the dwindling number, and a lot of not very free countries, places famous for government censorship.
And wherever we go, we use a virtual private network, a VPN, and we use ExpressVPN.
We do it to access the free and open internet.
But the interesting thing is when we come back here to the United States, we still use ExpressVPN.
Why?
Big tech surveillance.
It's everywhere.
It's not just North Korea that monitors every move its citizens make.
No.
That same thing happens right here in the United States and in Canada and Great Britain and around the world.
Internet providers can see every website you visit.
Did you know that?
They may even be required to keep your browsing history on file for years and then turn it over to federal authorities if asked.
In the United States, internet providers are legally allowed to and regularly do sell your browsing history everywhere you go online.
There is no privacy.
Did you know that?
Well, we did, and that's why we use ExpressVPN.
And because we do, our internet provider never knows where we're going on the internet.
They never hear it in the first place.
That's because 100% of our online activity is routed through ExpressVPN's secure encrypted servers.
They hide our IP address, so data brokers cannot track us and sell our online activity on the black market.
We have privacy.
ExpressVPN lets you connect to servers in 105 different countries.
So basically you can go online like you're anywhere in the world.
No one can see you.
This was the promise of the internet in the first place.
Those didn't seem like they were achievable, but now they are.
ExpressVPN, we cannot recommend it enough.
It's also really easy to use, whether or not you fully understand the technology behind it.
You can use it on your phone, laptop, tablet, even your smart TVs.
You press one button, just tap it, and you're protected.
You have privacy.
So if you want online privacy and the freedom it bestows, try ExpressVPN.
So Kaczynski's thesis is that struggle is not only inherent to the human condition but an essential part of it.
And I'm kind of threat-oriented anyway, so people with my kind of personality are sort of always looking for the big bad thing that's coming, the asteroid or the nuclear war, the AI, slavery.
But I know some pretty smart people who, very smart people, who are much closer to the heart of AI development who also have these concerns.
And I think a lot of the public shares these concerns.
Your view of it, much better informed view of it, is that there's been surprisingly and tellingly little conversation about the upside of AI. So instead, it's like, this is happening, and if we don't do it, China will.
And those people believe that's a good thing because the world now sucks so much and we are imperfect and unethical and all sorts of irrational and whatever.
And so they really wanted the singularity to happen.
And there's this young guy on this list.
His name is Eliezer Yudkowsky.
And he claims he can write this AI.
And he would write really long essays about how to build this AI.
Suspiciously, he never really publishes code.
And it's all just prose about how he's going to be able to...
Build AI. Anyways, he's able to fundraise.
They started this thing called the Singularity Institute.
A lot of people were excited about the future, kind of invested in him, Peter Thiel most famously.
And he spent a few years trying to build an AI. Again, never published code, never published any real progress.
And then came out of it saying that not only you can't build AI, but if you build it, it will kill everyone.
So he kind of switched from being this optimist, you know, singularity is great, to like actually AI will for sure kill everyone.
And then he was like, okay, the reason I made this mistake is because I was irrational.
And the way to get people to understand that AI is going to kill everyone is to make them rational.
So he started this blog called Less Wrong.
And Less Wrong walks you through steps to becoming more rational.
Look at your biases.
Examine yourself.
Sit down and meditate on all the rational decisions you've made and try to correct them.
And then they start this thing called the Center for Applied Rationality, CFAR. And they're giving seminars about rationality.
Yeah, I mean, that's what I read on these accounts.
They will sit down and they will audit your mind and tell you where you're wrong and all of that.
And it caused people a huge distress.
Young guys all the time talk about how going into that community has caused them a huge distress.
There were offshoots of this community where there were suicides, there were murders, there were a lot of really dark and deep shit.
And the other thing is, they teach you about rationality, and they recruit you to AI risk. Because if you're rational, and we're a group, we're all rational now, we learned the art of rationality, and we agree that AI is going to kill everyone, then everyone outside of this group is wrong, and we have to protect them.
And so they convince each other of all this cult-like behavior.
The crazy thing is this group ends up being super influential because they recruit a lot of people that are interested in AI and the AI labs and the people who are starting these companies were reading all this stuff.
So Elon famously read a lot of Nick Bostrom, who's kind of an adjacent figure to the rationality community.
He was part of the original mailing list.
I think he would call himself a rationalist. But he wrote a book about AI and how AI is going to kill everyone, essentially.
I think he moderated his views more recently, but originally he was one of the people that are kind of banging the alarm.
And, you know, the foundation of OpenAI...
It was based on a lot of these fears.
Elon had fears of AI killing everyone.
He was afraid that Google was going to do that.
I don't think everyone at OpenAI really believed that, but some of the original founding story was that.
And they were recruiting from that community.
So much so that when Sam Altman got fired recently, he was fired by someone from that community.
Someone who started with effective altruism, which is another offshoot from that community.
Well, I think they're just unsatisfied with human nature, unsatisfied with the current ways we're constructed, and that we're irrational, we're unethical.
And so they start...
They long for the world where we can become more rational, more ethical by transforming ourselves, either by merging with AI via chips or what have you, changing our bodies, and fixing fundamental issues that they perceive with humans via modifications and merging with machines.
Yeah, I mean, so their philosophy is utilitarianism, is that you can calculate ethics, and you can start to apply it, and you get into really weird territory.
Like, you know, there are all these thought experiments, like: you have two people at the hospital who each require an organ from a third person who came in for a regular checkup, or they will die. Ethically, you're supposed to kill that guy, take his organs, and put them into the other two.
And so what they think is, in the future, if we made the right steps, there's going to be a trillion humans, trillion minds.
They might not be humans, they might be AI, but there are going to be trillion minds who can experience utility, can experience good things, fun things, whatever.
If you're utilitarian, you have to put a lot of weight on it.
Maybe you discount that, sort of like discounted cash flows.
But you still have to posit that if there are trillions, perhaps many more people in the future, you need to value that very highly.
Even if you discount it a lot, it ends up being valued very highly.
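The discounting arithmetic he's describing can be sketched in a few lines of Python; the numbers below are illustrative assumptions, not figures from the conversation:

```python
# Discounted-cash-flow-style valuation of a hypothetical future population.
future_minds = 1e12     # the posited trillion future minds
years_away = 500        # assumed distance into the future
discount_rate = 0.02    # 2% per year, as in discounted cash flows

# Standard present-value formula: PV = FV / (1 + r)^t
present_value = future_minds / (1 + discount_rate) ** years_away

# Even after five centuries of 2% discounting, a trillion minds still
# "weigh" tens of millions of present-day minds in the calculation.
assert present_value > 1e7
```

That is the mechanical sense in which, even heavily discounted, the posited future ends up valued very highly.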
So a lot of these communities...
They end up all focusing on AI safety because they think that AI, because they're rational, they arrived, and we can talk about their arguments in a second, they arrived at the conclusion that AI is going to kill everyone.
Therefore, effective altruists and rational community, all these branches, they're all kind of focused on AI safety because that's the most important thing because we want a trillion people in the future to be great.
But when you're assigning value like that, it's sort of a form of Pascal's wager.
You can justify anything, including terrorism, including doing really bad things.
If you're really convinced that AI is going to kill everyone and the future holds so much value...
More value than any living human today has value, you might justify really doing anything.
And so built into that, it's a dangerous framework.
I feel kind of weird just talking about people who just generally I like to talk about ideas, about things.
But if they were just like a silly Berkeley cult or whatever, and they didn't have any real impact on the world, I wouldn't care about them.
But what's happening is that they were able to convince a lot of billionaires of these ideas.
I think Elon maybe changed his mind, but at some point he was convinced of these ideas.
I don't know if he gave them money.
There was a story at some point that he was thinking about it, but a lot of other billionaires gave them money and now they're organized and they're in D.C. lobbying for AI regulation.
They're behind the AI regulation in California.
Actually, profiting from it: there was a story in Pirate Wires where the main sponsor behind SB 1047, Dan Hendrycks, started a company at the same time that certifies the safety of AI. As part of the bill, it says that you have to get certified by a third party.
There are aspects of it that are less about profiting from it.
By the way, this is all allegedly based on this article.
I don't know for sure.
I think Senator Scott Wiener was trying to do the right thing with the bill, but he was listening to a lot of these cult members, let's call them.
And they're very well organized.
And also, a lot of them still have connections to the big AI labs.
Some of them work there, and they would want to create a situation where there's no competition in AI. Regulatory capture, essentially.
I'm not saying that these are the direct motivations.
All of them are true believers.
But you might infiltrate this group and direct it in a way that benefits these corporations.
It is a great app that I am proud to say I use, my whole family uses.
It's for daily prayer and Christian meditation.
And it's transformative.
As we head into the start of school in the height of election season, you need it.
Trust me, we all do.
Things are going to get...
Crazier and crazier and crazier.
Sometimes it's hard to imagine even what is coming next.
So with everything happening in the world right now, it is essential to ground yourself.
This is not some quack cure.
This is the oldest and most reliable cure in history.
It's prayer.
Ground yourself in prayer and scripture every single day.
That is a prerequisite for staying sane and healthy and maybe for doing better eternally.
So if you're busy on the road headed to kids' sports, there is always time to pray and reflect alone or as a family, but it's hard to be organized about it.
Building a foundation of prayer is going to be absolutely critical as we head into November, praying that God's will is done in this country and that peace and healing come to us here in the United States and around the world.
I think the fact that it's a stupid cult-like thing, or perhaps an actual cult, does not by itself invalidate the arguments.
But I think you do have to discount some of the arguments because they come from crazy people.
But the chain of reasoning is that humans are general intelligence.
We have these things called brains.
Brains are computers.
They're based on purely physical phenomena, which we know can be computed.
And if you agree that humans are computing, and therefore we can build a general intelligence in the machine, and if you agree up to this point, if you're able to build a general intelligence in the machine, even if only at human level, then you can create a...
Billion copies of it.
And then it becomes a lot more powerful than any one of us.
And because it's a lot more powerful than any one of us, it would want to control us or it would not care about us because it's more powerful.
We don't care about ants.
We'll step on ants.
No problem.
Because these machines are so powerful, they're not going to care about us.
And I sort of...
Get off the train at the first chain of reasoning.
But every one of those steps I have problems with.
The first step is the mind is a computer.
And, you know, based on what?
And the idea is, oh, well, if you don't believe that the mind is a computer, then you believe in some kind of spiritual thing.
And I think this is, again, another error of the rationalist types: they just assume that we're so much more advanced in our science than we actually are.
Yes, it's an assertion that is fundamentally wrong.
And the way he proves it is very interesting.
In mathematics, there's something called Gödel's incompleteness theorem.
And what that says is there are statements that are true that can't be proved in mathematics.
So Gödel constructs a number system where he can start to make statements about this number system.
So he creates a statement that says, "This statement is unprovable in system F," where F is the whole system. Well, if the system could prove it, the statement would be false.
But you know it's true because it's unprovable in the system.
And Roger Penrose says: we have this knowledge that it is true just by looking at it, even though we can't prove it.
I mean, the whole feature of the sentence is that it is unprovable.
Therefore, our knowledge is outside of any formal system.
Therefore, the human brain, or our mind, is understanding something that mathematics is not able to give us.
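The self-referential construction he's describing can be written schematically; this is the standard textbook formulation, not something from the conversation:

```latex
% G is the Goedel sentence for a formal system F: via arithmetic
% coding, it asserts its own unprovability in F.
G \;\leftrightarrow\; \neg\,\mathrm{Prov}_F(\ulcorner G \urcorner)
% If F is consistent, F cannot prove G -- yet that is exactly
% what G says, so G is true but unprovable in F.
```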
And Bertrand Russell spent his whole, you know, big part of his life writing this book, Principia Mathematica.
And he wanted to really prove that mathematics is complete, consistent, you know, decidable, computable, all of that.
And then all these things happened.
Gödel's incompleteness theorem.
Turing, the inventor of the computer, actually, this is the most ironic piece of science history that nobody ever talks about.
Turing invented the computer to show its limitation.
So he invented the Turing machine, which is the idealized representation of the computers we have today.
All modern computers are equivalent to Turing machines.
And he showed that no such machine can tell, for an arbitrary set of instructions, whether those instructions will eventually stop or will continue running forever.
It's called the halting problem.
And this proves that mathematics has undecidability; it's not fully decidable or computable.
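Turing's result can only be gestured at in running code, since the whole point is that a true `halts()` function cannot be written. Below is a sketch where `halts_within` is a step-bounded stand-in; all the names are mine, not from the conversation:

```python
def stops_quickly():
    # A "program" (modeled as a generator) that halts after one step.
    yield

def runs_forever():
    # A "program" that never halts.
    while True:
        yield

def halts_within(program, max_steps=1000):
    """Step-bounded stand-in for the impossible halts() oracle: run the
    program for at most max_steps and see whether it finishes. A real
    oracle would have to answer for *any* bound, which is exactly what
    Turing proved cannot be done."""
    gen = program()
    try:
        for _ in range(max_steps):
            next(gen)
    except StopIteration:
        return True   # it finished within the budget
    return False      # "probably loops" -- only a guess, never a proof

assert halts_within(stops_quickly) is True
assert halts_within(runs_forever) is False
```

The actual proof is a diagonal argument: if a true `halts(p)` existed, you could define a contrary program that loops exactly when `halts` says it stops, and running that program on itself yields a contradiction.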
So all of these things were happening as he was writing the book.
And it was really depressing for him because he kind of went out to prove that mathematics is complete and all of that.
And, you know, this caused kind of a major panic at the time between mathematicians and all that.
It's like, oh my God, like our systems are not complete.
So it sounds like the deeper you go into science and the more honest you are about what you discover, the more questions you have, which kind of gets you back to where you should be in the first place, which is in a posture of humility.
You know, just really people who shouldn't have power, just dumb people with, you know, pretty ugly agendas.
But we're talking about the world that you live in, which is like unusually smart people who do this stuff for a living and are really trying to advance the ball in science.
And I think what you're saying is that some of them, knowingly or not, just don't appreciate how little they know.
Yeah, and you know, they go through this chain of reasoning for this argument.
And none of those steps is, at minimum, complete.
They just take it for granted.
If you even doubt that the mind is a computer, I'm sure a lot of people will call me a heretic, call me all sorts of names, because it's just dogma.
You could argue that neural networks are sort of intuition machines, and that's what a lot of people say.
But neural networks, and maybe I will describe them just for the audience, neural networks are inspired by the brain.
And the idea is that you can connect a network of small little functions, just mathematical functions, and you can train it by giving examples.
Give it a picture of a cat.
And let's say this network has to say yes if it's a cat, no if it's not a cat.
So if you give it a picture of a cat and it answers no, then it's wrong.
You adjust the weights based on the difference between the prediction and the correct answer.
And you do this, I don't know, a billion times.
And then the network encodes features about the cat.
And this is literally exactly how neural networks work, is you tune all these small parameters until there's some embedded feature detection, especially in classifiers, right?
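The training loop just described can be sketched with a one-weight "cat detector" in plain Python. This is a toy stand-in for what he's describing: each "image" is a single numeric feature, and the numbers are illustrative:

```python
import math
import random

random.seed(0)

# Toy dataset: (feature, label), label 1 for "cat", 0 for "not cat".
data = [(2.0, 1), (1.5, 1), (-1.0, 0), (-2.5, 0)]
w, b, lr = 0.0, 0.0, 0.1  # one weight, one bias, learning rate

def predict(x):
    # Sigmoid score: how confident the network is that x is a "cat".
    return 1 / (1 + math.exp(-(w * x + b)))

# The "adjust the weights a billion times" loop, scaled down:
for _ in range(2000):
    x, y = random.choice(data)
    p = predict(x)
    # Nudge the parameters by (prediction - answer), the training signal.
    w -= lr * (p - y) * x
    b -= lr * (p - y)

assert predict(2.0) > 0.9    # confidently "cat"
assert predict(-2.5) < 0.1   # confidently "not cat"
```

After enough examples, the parameters encode the distinction, which is the "embedded feature detection" he mentions, just in miniature.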
And this is not intuition.
This is basically automatic programming, the way I see it.
So with this chain of reasoning, I can go through every point and present arguments to the contrary, or at least present doubt, but no one is really trying to deal with those doubts.
And my view is that I'm not holding these doubts very strongly.
But my view is that we just don't have a complete understanding of the mind.
And you at least can't use it to argue that a kind of machine that acts like a human but much more powerful can kill us all.
The cool thing about them is they can mix and match different functionalities that they learn from the data.
So it looks a little bit more general.
But let's say we collected all data of the world, we collected everything that we care about, and we somehow fit it into a machine and now everyone's building these really large data centers.
You will get a very highly capable machine that will kind of look general because we collected a lot of economically useful data and will start doing economically useful tasks.
And from our perspective, it will start to look general.
I don't doubt we're headed in some direction like that.
But we haven't figured out how these machines can actually generalize and can learn and can use things like intuition for when they see something fundamentally new outside of their data distribution, they can actually react to it correctly and learn it efficiently.
On the most fundamental level, you began that explanation by saying we don't really understand the human brain, so how can we compare it to something because we don't even really know what it is.
There's a machine learning scientist, Francois Chollet, I don't know how to pronounce French names, but I think that's his name.
He took an IQ-like test where you're rotating shapes and whatever.
An entrepreneur put up a million dollars for anyone who's able to solve it using AI. And all the modern AIs that we think are super powerful couldn't do something that a 10-year-old kid could do.
And it showed that, again, those machines are just functions of their data.
The moment you throw a problem that's novel at them, they really are not able to do it.
I'm not fundamentally discounting the fact that we'll get there, but just the reality of where we are today, you can't argue that we're just going to put more compute and more data into this, and suddenly it becomes God and kills us all.
Because that's the argument, and they're going to D.C., and they're going to all these places that are springing up regulation.
This regulation is going to hurt American industry.
It's going to hurt startups.
It's going to make it hard to compete.
It's going to give China a tremendous advantage.
And it's going to really hurt us based on these flawed arguments that they're not actually battling with these real questions.
I think that all humane action flows from that belief.
And that...
The most inhumane actions in history flow from the opposite belief, which is people are just objects that can and should be improved and I have full power over them.
That's a totalitarian mindset and it's the one thing that connects every genocidal movement is that belief.
So it seems to me as an outsider that the people creating this technology have that belief.
We've had brains, if you believe in evolution and all that, for half a billion years.
And we've had kind of a human-like species for half a million years, perhaps more?
Perhaps a million years?
There's a moment in time, 40,000 years ago, it's called the Great Leap Forward, where we see culture, we see religion, we see drawings, we saw very little of that before that, tools and whatever.
And suddenly, we're seeing this Cambrian explosion of culture.
So, you know, I think these machines, I'm betting my business on AI getting better and better and better, and it's going to make us all more educated.
This technology, large language models, where we kind of fed a neural network the entire internet, has capabilities mostly around writing, information lookup, summarization, and coding.
It does a lot of really useful things and you can program it to kind of pick and match between these different skills.
You can program these skills using code.
And so the kind of products and services that you can build with this are amazing.
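The "pick and match between skills" idea can be sketched as prompt composition. Here `call_model` is a stub standing in for whatever real model API you'd use; every name in this sketch is hypothetical:

```python
def call_model(prompt):
    # Stub standing in for a real LLM call; a real client would go here.
    return f"[model response to: {prompt!r}]"

def summarize(text):
    # One "skill", programmed as an ordinary function around a prompt.
    return call_model(f"Summarize in one sentence: {text}")

def translate(text, language):
    # Another skill; skills compose like any other code.
    return call_model(f"Translate to {language}: {text}")

# Mix and match: summarize an article, then translate the summary.
result = translate(summarize("Long article text..."), "French")
assert result.startswith("[model response")
```

The point of the sketch is just that the model's individual capabilities become building blocks you can orchestrate with ordinary code.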
So one of the things I'm most excited about is this application of the technology. There's this problem called Bloom's two sigma problem.
There's this scientist that was studying education.
And he was looking at different interventions to try to get kids to learn better or faster.
Or have just better educational outcomes.
And he found something kind of bad.
Which is, there's only one thing you could do to move kids not in a marginal way, but by two standard deviations from the norm.
In a big way.
Better than 98% of the other kids.
By doing one-on-one tutoring, using a type of learning called mastery learning.
One-on-one tutoring is the key formula there.
That's great.
I mean, we discovered the solution to education.
We can up-level everyone, all humans on Earth.
The problem is we don't have enough teachers to do one-on-one tutoring.
It's very expensive.
No country in the world can afford that.
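As a sanity check on the "two standard deviations, better than 98% of the other kids" claim: under a normal distribution, a student moved up by two sigma lands at roughly the 98th percentile. A few lines of Python confirm the arithmetic:

```python
from math import erf, sqrt

def normal_cdf(z):
    """Share of a standard normal distribution below z standard deviations."""
    return 0.5 * (1 + erf(z / sqrt(2)))

# A +2 sigma shift puts the average tutored student above about 97.7%
# of the untutored class -- the "better than 98%" figure, rounded.
assert round(normal_cdf(2.0), 3) == 0.977
```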
So now we have these machines that can talk, that can teach, that can present information.
That you can interact with it in a very human way.
You can talk to it.
It can talk to you back, right?
And we can build AI applications to teach people one-on-one.
I was going to save this for last, but I can't control myself.
So I just know, being from D.C., that when the people in charge see new technology, the first thing they think of is, like, how can I use this to kill people?
So what are the military applications, potentially, of this technology?
Like, we get privacy, but you're not allowed it because we don't trust you.
Right.
But by using your money and the moral authority that you gave us to lead you, we're going to hide from you everything we're doing and there's nothing you can do about it.
In a world increasingly defined by deception and the total rejection of human dignity, we decided to found the Tucker Carlson Network, and we did it with one principle in mind.
Tell the truth.
You have a God-given right to think for yourself.
Our work is made possible by our members.
So if you want to enjoy an ad-free experience and keep this going, join TCN at tuckercarlson.com slash podcast, tuckercarlson.com slash podcast.
There was this huge exposé in this magazine called +972 about how Israel was using AI to target suspects, but ended up killing huge numbers of civilians.
Well, that was, I mean, that promise makes sense to me.
I would just, I fervently want it to become a reality.
We have a mutual friend, I won't say his name, who's so smart and a good, humane person who's very way up into the subject, participates in it.
And he said to me, well, one of the promises of AI is that it will allow people to have virtual friends or mates, that it will solve the loneliness problem that is clearly a massive problem in the United States.
And I felt like, I don't want to say it because I like him so much, but that seemed really bad to me.
But so my question to you as someone who lives and works there is, what percentage of the people who are making decisions in Silicon Valley will say out loud, you know, not I'm a Christian Jew or Muslim, but that, like, I'm not, you know, there is a power bigger than me in the universe.
Well then, okay, so you're just kind of making the point unavoidable.
Like, if the machines, as you have said, and it makes sense, are the sum total of what's put into them, then, and that would include the personalities and biases of the people putting the data in.
And it is a snake eating its tail at some point, because, of course, you serve the baser human desires and you create a culture that inspires those desires in a greater number of people.
In other words, the more porn you have, the more porn people want, actually.
I wonder about the pushback from existing industry, from the guilds.
If you're the AMA, for example, you mentioned medical advances.
That's something that makes sense to me.
For diagnoses, which really is just a matter of sorting the data, like what's most likely.
And a machine can always do that more efficiently and more quickly than any hospital or individual doctor.
And diagnosis is like the biggest hurdle.
That's going to actually put people out of business, right?
If I can just type my symptoms into a machine and I'm getting a much higher likelihood of a correct diagnosis than I would be.
After three days at the Mayo Clinic, who needs the Mayo Clinic?
Eventually, it took me writing a little bit of software to collect the data, but I ran the AI once, and it gave me a diagnosis they hadn't looked at.
And I went to them, they were very skeptical of it, and then we ran the test.
Right now at Replit, we are working on a way to generate most of the code for you.
We have this program called 100 Days of Code.
If you give it 20 minutes a day, doing a little bit of coding every day, in three months you'll be a good enough coder to build a startup.
Eventually, you'll get people working for you and you'll scale up and all of that, but you'll have enough skills.
In fact, I'll put a challenge out there: people listening to this, if they go through this and they build something that they think could be a business or whatever.
I'm willing to help them get it out there, promote it.
We'll give them some credits and cloud services, whatever.
Just tweet at me or something and mention this podcast and I'll help them.
A-M-A-S-A-D. But there are a lot of entrenched interests.
I mean, I don't want to get into the whole COVID poison thing, but I'm revealing my biases.
I mean, you saw it in action during COVID where, you know, it's always a mixture of motives.
Like, I do think there were high motives mixed with low motives because that's how people are.
You know, it's always a bouillabaisse of good and bad.
But to some extent, the profit motive prevailed over public health.
That is, I think, fair to say.
And so, if they're willing to hurt people to keep the stock price up, I mean, what's the resistance you're going to get to allowing people to come to a...
But for someone like me, I'm not going to talk to a doctor until he apologizes to my face for lying for four years because I have no respect for doctors at all.
I have no respect for anybody who lies.
Period.
And I'm not taking life advice from them. I'm sure there's a doctor out there who would apologize, but I haven't met one yet.
So for someone like me who's just, I'm not going to a doctor until they apologize, this could be literally life-saving.
So, but, you know, Scott, like, really, startups that just started, right?
Like, you know, typically, no one is...
No one's really thinking about it.
And it's very easy to disadvantage startups, like you just talked about, with healthcare, regulation, whatever.
Very easy to create regulatory capture, such that companies can't even get off the ground doing their thing.
And so they came up with this agenda that we're going to be the firm that's going to be looking out for that little guy, the little tech, which I think is brilliant.
And part of their argument for Trump is that, on AI, for example, the Democrats are really excited about regulating it. One of the most hilarious things that happened: I think Kamala Harris was invited to an AI safety conference, and they were talking about existential risk.
And she was like, Well, someone being denied healthcare, that's existential for them.
Someone, whatever, that's existential.
So she interpreted existential risk as any risk is existential.
And so that's just one anecdote.
But there was this anecdote where she was like, AI, it's a two-letter word.
And they clearly don't understand it very well.
And they're moving very fast at regulating it.
They put out an executive order that a lot of people think.
You know, it's already changing, is what I'm saying.
It's already a lot of these things are being reversed.
It's not perfect.
But it's already changing, and that's, I think, just a function of the larger culture change.
I think Elon buying Twitter and letting people talk and debate moved the culture to, I think, a more moderate place.
I think he's gone a little further, but I think it was net positive on the culture because it was so far left.
It was so far left inside these companies, the way they were designing their products, such that, when you ask for George Washington, you get a black George Washington, what have you.
I'm probably responsible for the most amount of people learning to code in America because I was like a...
The reason I came to the U.S. is I built this piece of software that was the first to make it easy to code in the browser.
And it went super viral.
And a bunch of U.S. companies started using it, including Codecademy.
And I joined them as a founding engineer.
They had just started, two guys, amazing guys.
We taught like 50 million people how to code.
Many millions of them are American.
And the sort of rhetoric at the time, what you would say is, like, coding is important because it'll teach you how to think, computational thinking and all of that.
Maybe I've said it at some point, but I've never really believed it.
I think coding is a tool you can use to build things, to automate things.
It's a fun tool.
You can do art with it.
You can do a lot of things with it.
But ultimately, I don't think you can sit people down and make them more rational.
And you get into all these weird things if you try to do that.
People can become more rational by virtue of education, by virtue of seeing that taking a more rational approach to their life yields results.
I just thought it was a certain type, because I have to say, without getting into controversial territory, every person I've ever met who writes code is kind of similar in some ways to every other person I've ever met who writes code.
But I didn't realize that is the vector if you want to reach sort of business-minded people who are not very political but are probably going to, like, send money to a buddy who's bundling for Commonwealth because, like...
And the reason why we had this conformist, mob-rule mentality that people call woke, the reason that we're now past that, almost, you know, it's still kind of there, but we're on our way past it, is because of the First Amendment and free speech.
And again, I would credit Elon a lot for buying Twitter and letting us talk and debate and push back on the craziness, right?
Just a sidebar: I feel like we had a bad experience with Arabs 23 years ago, and a lot of Americans didn't realize it, but I knew from traveling a lot in the Middle East, yeah, it was bad.
However, that's not representative of the people that I have dinner with in the Middle East at all.
Someone once said to me, like, those are the worst people in our country, right?
And no, I totally agree with that strongly.
I always defend the Arabs in a heartfelt way.
But no, I wonder if some of the, particularly the higher-income immigrants, recently I've noticed, are, like, parroting the same kind of anti-American crap that they're learning from the institutions.
You know, you come from Punjab and go to Stanford, and all of a sudden, like, you've got those same rotten, decadent attitudes of your native-born professors at Stanford.
I think foreigners, for the most part, do appreciate it more, but it's easy.
I talked about how I just try not to be this conformist who really absorbs everything around me and acts on it.
But it's very easy for people to go in these one-party state places and really become part of this mob mentality where everyone believes the same thing.
Any deviation from that is considered cancelable.
And you know, you asked about the shift in Silicon Valley.
I mean, part of the shift is like, yeah, Silicon Valley still has a lot of people who are independent-minded.
And they see this sort of conformist type of thinking in the Democratic Party.
And that's really repulsive for them.
Where, you know, there's like a party line.
It's like, Biden's sharp as a tack.
Sharp as a tack.
Everyone says that.
And then the debates happen.
Oh, unfit, unfit, unfit.
And then, oh, he's out?
Oh, Kamala, Kamala, Kamala.
It's like lockstep, and there's no range.
There's very little dissent within that party.
And maybe Republicans, I think, at some point were the same.
Maybe now it's sort of a little different.
But this is why people are attracted to the other side.
By the way, this is advice for the Democrats.
If you want Silicon Valley back, maybe don't be so controlling of opinions and be okay with more dissent.
Well, you have to relinquish a little bit of power to do that.
I mean, it's the same as raising teenagers.
There's always a moment in the life of every parent of teenagers where a child is going in a direction you don't want.
You know, if it's a shooting-heroin direction, you have to intervene with maximum force.
But there are a lot of directions a kid can go that are deeply annoying to you.
And you have to restrain yourself a little bit.
To preserve the relationship, actually, if you want to preserve your power over the child, you have to pull back and be like, I'm not going to say anything.
There's Matt Taibbi and Michael Shellenberger and a lot of folks who did a lot of great work on censorship and the government's kind of involvement in that, and how they pushed social media companies.
I don't know if you can put it just on the Democrats because I think part of it happened during the Trump administration as well.