Eric Schmidt | The Ben Shapiro Show Sunday Special Ep. 120
I used to say, ten years ago, that this world was optional.
That if you don't like it, just turn it off.
It's okay.
No problem.
You can't take a break from it anymore.
It's become the way you educate, the way you entertain, the way you make money, right?
The way you communicate, the way you have a social life.
So we're going to have to adapt these platforms to these human regulations.
I used to think, it's fine to have all that bad stuff online because people can just turn it off.
But they can't now.
Our guest is credited for scaling Google into the 21st century, taking the company from its Silicon Valley startup roots to a global tech leader.
Eric Schmidt was Google's CEO for 10 years, 2001 to 2011.
During that time, the company launched Google Maps, Gmail, Google Chrome, and bought YouTube, just to name a few of its many milestones.
After serving as CEO, he became executive chairman and then technical advisor for Alphabet Inc., Google's parent company, totaling 19 years at Google, not to mention nearly 20 years prior, leading other tech and software-based companies.
In 2020, Eric left Google for new ventures, notably contributing $1 billion through his philanthropic organization to fund talented people applying scientific and technological innovation to the world's largest and hardest problems.
He also co-authored a book this year alongside Henry Kissinger and Dan Huttenlocher, The Age of AI: And Our Human Future.
The three of them draw on their diverse careers, Eric writing from his expertise in entrepreneurship and the technology business, to assess the current state of artificial intelligence and to raise essential questions for a world fully integrated with AI.
Eric gives me his thoughts on our not-too-distant future.
Plus, we discuss Mark Zuckerberg's transforming Facebook into the metaverse, whether Google is a monopoly limiting all others in its market, and whether big tech is censoring what they consider misinformation.
This is the Ben Shapiro Show's Sunday special.
This show is sponsored by ExpressVPN.
Don't like big tech and the government spying on you?
Visit ExpressVPN.com slash Ben.
Just a reminder, we will be doing some bonus questions at the end with Eric Schmidt.
The only way to get access to that part of the conversation is to become a member.
We're going to go deep at the end on the impacts of AI on human reason, how it's going to redefine what it means to be a human.
You're not going to want to miss it.
Head on over to dailywire.com.
Become a member.
You'll have the full conversation.
Eric, thanks so much for joining the show.
Thank you, Ben.
I've wanted to be on for a long time.
I really appreciate that and hope you survive whatever blowback comes your way from our having an open conversation.
So why don't we start with the topic of your brand new book, which is artificial intelligence.
So for a lot of folks who are not versed in this, including me, when I hear artificial intelligence, I tend to think of, you know, the Iron Giant or Gizmo, basically robots that talk back to you.
But obviously, AI is much more sophisticated and all encompassing than that.
I think people are not fully aware of what AI means for the future of humanity.
So why don't we start by defining what AI is?
Well, first, thank you for this, and it's exactly the right starting question.
Most of your audience, when they think of AI, they think of the following.
They think of a man who creates a killer robot.
The killer robot escapes, and the female scientist somehow slays the robot.
And that is precisely not what we're talking about here.
That's great fiction and it's a great movie, but it's not going to happen that way.
What's really going to happen is that our world will be surrounded by digital intelligences.
And these intelligences are partially like humans, and they'll make things more efficient.
They'll make new discoveries.
They'll be around.
They'll help educate our children.
They'll discover new things.
We can talk to them.
They'll provide companionship.
They'll also change the way we think about war and our own identity.
We wrote the book to explore those questions.
So, first question that sort of comes to mind immediately for me here is whether human beings are ready for this.
So, Bret Weinstein and Heather Heying wrote a book recently where they talked about the fact that so much of the stuff that we interact with even now is well beyond the capacity of sort of our evolutionary adaptability.
If you look back to the lizard brains we developed in the savannas and jungles hundreds of thousands of years ago, when we were just trying to survive, we are now using devices and we really don't know the impact of those devices on how we think.
Obviously, the sort of stuff that comes to mind for a lot of folks is the use of social media.
The addictiveness, for example, of social media.
Are human beings going to be able to adapt to a world that is filled with all of these things that human beings were not privy to in the natural environment?
It's gonna be really stressful, and we need to have a whole new philosophy about how we're gonna deal with what this does to humanity.
It's interesting that 400 years ago, we went through something which is a transition from the age of faith, where people believe that God told them what to do, to what is called the age of reason, where you could have critical thinking, and you could look at a situation, and you could say, is that good for society, or good for me, or a moral judgment, and so forth.
And this age of reason spawned the Industrial Age, everything we see around us.
Before that, we were very, very primitive.
It was called the Dark Ages.
So, we believe, and why we wrote the book, is that we're going to go into a new epoch of human experience.
Because humans are not used to having a human-like assistant, partner, opponent, and so forth, in their world.
If you look at what happened with social media, people like myself, who helped build these systems, had no idea that they would be used by, for example, the Russians to try to change our elections, or that the addictive behavior would drive people crazy with special interest groups, especially ones full of falsehoods.
We didn't know that this would have happened.
If we'd known it, would we have done it?
I don't know.
But this time, these tools are much more powerful.
They're much more capable of, for example, driving you crazy, making you addicted, and also educating you and making you more powerful.
We want to have a philosophical discussion about what we're doing to humans with this technology.
So in a second, I want to get back to sort of the political impact of technology that you sort of mentioned there.
But I want to talk to you about what exactly these technologies are going to look like on a more practical level.
So you talk about them teaching your kids or you talk about them educating you, creating new innovations.
How fast are we going to get to this sort of very sophisticated AI? Because right now, one of the limitations on AI is that it doesn't seem to be capable of the sort of mass innovation that human minds are capable of at this point.
But you're talking about essentially machines being able to create new machines, which is a different sort of thing.
How's humanity going to deal with that?
And what does that mean for the future of, say, work and job creation?
Again, incredibly good question.
And we don't really know.
There is a dispute, even among the authors, on how quickly this stuff will happen.
I would say 15 years.
Others would say 25 years.
Does it really matter?
There's so much investment in this space that I can tell you that it's going to happen in some form, and it'll happen more quickly than we think.
And we're not prepared for it.
And when I say we're not prepared for it, we're not prepared for the new companies, the impact on the development of children.
How do you raise a child where the child's best friend is not human?
What happens, for example, when you give a kid a bear, and every year you give the kid a smarter bear, and at 12, the bear is the kid's best friend, and the bear is watching television, and the bear says to the kid, I don't like this TV show, and the kid says, I don't either?
Is that okay?
That's a really significant change in human existence, which is being built now in various forms.
Do we really want to put our kids through that?
I'm not sure, but I want to have the debate about it.
I'll give you another example.
Let's think about war and conflict.
One of the characteristics of war is everything is occurring faster.
So what happens when the AI system, which, remember, is not perfect and makes mistakes, says: you've got 24 seconds to press this button before you're dead.
You can't see the missile, but I can.
Do you really think a human's not going to press that button?
I don't think we know how to deal with the compression of time, this intelligence, and the characteristic of this intelligence is it's dynamic, it's emergent, right?
It's imprecise, so it makes mistakes, and it's learning.
We've never had a technology partner, a platform, an object, a machine that was like that.
So one of the things that's been in the media a lot lately is obviously Facebook's changeover to Meta.
Zuckerberg is now talking about the metaverse and sort of you're now living in a different reality.
And this is, as I say, a wide difference from the way that we have lived for all of human history, which was: whatever our aspirations, whatever our sense of identity, they were bounded by the reality around us, right?
You could think that you were the strongest guy in the room and that lasted precisely as long as you didn't run into the guy who's stronger than you.
You might think that you were invulnerable precisely up until the time you fell off a cliff and hit a rock at the bottom.
But in a virtual reality space, in a place where you're never forced to actually confront other human beings in a real human way, when you can create yourself, when you can self-create your own identity, it seems like kind of a recipe for, in some ways, the end of humanity as we know it, and that's not meant in terms of mass death, but certainly in terms of what we think our limitations are.
And I think that actually could breed some pretty bad things, not just a feeling of freedom.
You know, the metaverse has been in the works for almost 30 years.
And the basic idea here is that you put AR or virtual reality headsets on and you live in a virtual world where you're more handsome, you're more beautiful, you're more powerful, you're richer, you have a comfy life.
It sounds great, doesn't it?
Wouldn't you prefer to be in that life than the reality of your life where you have to struggle and where the commute is terrible and people are yelling at you and so forth?
Wouldn't it be wonderful to be in such a virtual reality?
I'm not sure.
I'm worried that people will decide that this virtual reality is so addictive, that it's so much more fun to be that fake person than who you really are, that people will cross over.
They'll get up in the morning, they'll do whatever they have to do, and then they'll put the headsets on, and they'll just sit there for hours and hours and hours.
That's not a great future for humanity.
Eric, in just a second, I want to ask you about that future for humanity and what it looks like in terms of separating people into two groups.
We've talked a lot about income inequality in the recent past, but I'm wondering if we're now going to get to a true sort of human inequality that is very bad for the future of civilization via these sorts of devices in one second.
First, let's talk about life insurance.
What's easier than opening a can of cranberry sauce? Getting free life insurance quotes with PolicyGenius.
If someone relies on your financial support, whether it's a child, an aging parent, even a business partner, you do need life insurance.
It's just a thing that you need.
I mean, let's say, for example, that you have been given the gift of being able to create ice from your fingers, but suddenly you lose control of that gift and you put an entire city under the threat of complete extinction thanks to an endless winter. You might think to yourself, man, they're going to be mad.
Should have gotten life insurance.
PolicyGenius makes it easy to compare quotes from over a dozen top insurers all in one place.
Why compare?
Well, you could save 50% or more on life insurance by comparing quotes with PolicyGenius.
You could save $1,300 or more per year on life insurance by using PolicyGenius to compare policies.
The licensed experts of PolicyGenius work for you, not the insurance companies, so you can trust them to help you navigate every step of the shopping and buying process.
That kind of service has earned PolicyGenius thousands of five-star reviews across Trustpilot and Google.
And eligible applicants can get covered in as little as a week thanks to an award-winning policy option that swaps the standard medical exam requirement for a simple phone call.
This exclusive policy was recently rated number one by Forbes Advisor, higher than options from Ladder, Ethos, and Bestow.
Getting started is simple.
First, head on over to PolicyGenius.com slash Shapiro.
In minutes, you can work out how much life insurance coverage you need and compare personalized quotes to find your best price.
When you're ready to apply, the PolicyGenius team will handle the paperwork and the scheduling for free.
PolicyGenius does not add on extra fees.
Head on over to PolicyGenius.com slash Shapiro right now.
Get started.
PolicyGenius.
When it comes to insurance, it's nice to get it right.
So, Eric, you were talking about, you know, whether people are just going to plug into virtual reality and just live there.
And it does remind me of a thought experiment that the philosopher Robert Nozick once posited, the experience machine.
And he specifically said that if you gave people a choice whether they could plug into essentially this.
They could plug into a world in which all risk was abated.
You were going to live a very happy life.
You're going to get all the same emotional responses you would in real life.
Would you plug in?
And he posited that for a lot of people, the answer would be no, because you want to have a real impact on the world.
I wonder if you think that humanity is going to break down into two groups, people who say yes to the experience machine and people who say no to the experience machine.
And frankly, whether virtual reality, which is not, in fact, reality, is going to enervate our civilization so entirely that people who don't plug in, people who are less technologically adept, people who care less about the digital world, actually end up inheriting the earth while the most sophisticated players in the space are distracted online, essentially.
You know, a lot of people in literature have talked about this, and they've talked about a drug called Soma, for example, which in literature would sort of take all your cares away.
I'm not sure humans do very well in a carefree world where they don't have mission and meaning.
It's also possible that the inverse of what we just said is going to occur.
That we're going to get into this virtual reality that seems so incredibly sexy and then we're going to have all the same behaviors.
We're going to have greed and envy and bad behavior and all of the things that we don't like about the physical world.
One of the things that we assumed when we built all of this that we're now dealing with was that connecting everyone would be a great thing.
And I'm very proud of the fact that I worked really hard to connect everyone in the world.
So don't be shocked that they actually talk to each other and that they say things to each other that you don't agree with.
It was noble to connect the world and then we discover we don't like some of the things they're talking about.
That's crazy.
So, why don't we figure out a way to keep everybody connected but also grounded in physical reality, but also keep them safe, right, from stalking and trolling and some of the worst kind of behaviors.
Imagine if you're trolled in physical life, but then you go to the virtual world and you get trolled there too.
How depressing.
Another possibility, which we talk about in the book, is that this network of AI system control, right, which ultimately affects traffic and the way you get pharmaceuticals and the way you pay your taxes and the way you are learning, is so overwhelming that people will become nativist or naturalist.
And they'll say, I'm revolting against the robot overlords.
It's not a robot, it's a computer.
But I'm revolting against this digital world and the control that it has over me.
That I believe in my own individual freedom and the only way I can do that is to turn it off.
And my guess is that 10 or 20 percent of the people will ultimately say, turn this thing off.
So Eric, this does raise a question that Andrew Yang has raised with regard to artificial intelligence.
We talked to him before his run for mayor of New York and he was running for president at the time talking about universal basic income.
The idea being that AI was going to become so sophisticated that it took over a lot of the fields that we don't tend to think of as AI driven.
So typically job loss due to technology in particular industries has been relegated toward the lower end of the income spectrum.
It's been in manufacturing gigs or it's been in self-checkout counters at the local Walmart.
But it hasn't been in the so-called creative fields.
And as AI gets more sophisticated and as it moves up on the food chain in terms of what kind of jobs it can replace, are we looking at the possibility of mass unemployment?
And if so, what is the solution to that?
Redistributionism may not be enough because it turns out that people kind of like working.
They need something to do.
And the sort of fantasy that people are then going to spend their off hours painting, especially if the AIs can paint better than we can, you could be looking at a total loss of human meaning here.
You said that so well.
Can I argue with your premise for a second?
Sure.
Let's just use some data.
I've spent the last 10 years listening to people tell me that the most endangered job in the world is truck driving.
That truck driving will be replaced by robots, by computers.
Self-driving trucks will take over.
What's the number one shortage in terms of people we can put in jobs in America today?
Truck driving.
So I think there's something wrong with this basic argument.
And I think what happens is that, yes, some of these jobs will go away.
But they're going away because the markets are getting more efficient.
I'm a capitalist.
Capitalism makes markets more efficient.
More efficient markets produce different jobs in a larger space.
So if you start from the premise that our goal is to have economic growth, which benefits everyone.
The rich people, the poor people, everyone.
The answer to a lot of the problems in the world is a job.
Right?
So we want to create as many jobs as we can.
Now, if we go back to your argument that AI will displace jobs, well, so far, we've just been through this horrific pandemic, and we're trying to turn the economy back on, and we can't find enough people who want to work.
That's a different problem than we don't have jobs for them.
We have the jobs for them, but maybe they're not in the right place, or they're not skilled, or they're not motivated, or whatever.
It's more complicated than what people like you and me think.
So it seems to me that the following things are true.
First, there will be lots of jobs, but AI will displace a lot of current jobs.
Think about a security guard.
The security guard basically sits around and watches all day.
Well, a computer's better at that than the security guard.
See, if all the security guard is doing is watching, then that can be replaced.
But if the security guard is also your brand, they're warm and friendly and we love you and we welcome you to our home or to our business or what have you, well, then that's a function that maybe AI won't replace.
So you're going to see people pushed into more human roles, and there'll be lots of them.
Eventually, the scenario that everybody talks about is going to be true.
In some number of decades, maybe a hundred years, the technology world will be so good that we can grow food and build buildings cheaply, and literally everyone can live like a millionaire does today.
This is a long time from now, long after I'm dead.
It'll probably be true.
We don't know what people will do, but one thing I'm sure is that people are not going to be painting.
They're going to find meaning.
You know, in other words, we may get rid of the security guard role, but people will come up with more legal structures, or they'll come up with more medical structures, or something else.
But to say that somehow innovation will ultimately eliminate meaning in jobs is not supported by history.
So one of the things that this raises for me is the question of a civilization without children.
Because you mentioned kids before and the kid who makes friends with the teddy bear, which then becomes their best friend and is essentially sentient and capable of making decisions and interacting with kids.
But I want to ask a different question, which is we are a very adult-driven society, a society that is built around the needs and pleasures of people who are capable of giving consent.
We're not a society that's particularly driven by people having kids or what people should do with those kids.
And it seems like the AI world is driving more in the adult direction that if we are interacting online, we're not interacting physically.
That kids are a lot of work, right?
I have three of them under the age of eight and they are an awful lot of work.
And you know what's not as much work? Existing in a VR reality where you can hang out with all of your adult friends all day.
The West is already experiencing a pretty large-scale crisis of underpopulation, not overpopulation.
Every single major Western country is now reproducing at less than replacement rates.
And it seems like, you know, the distractions that are created by not only an AI world, but also by interactive beings that are capable of fulfilling your every need, right?
This could totally undercut the future of human reproduction, of raising healthy children.
It's dangerous stuff.
This is completely unappreciated, and you've nailed it.
We need to have human growth, that is, the number of people.
My position is simple.
We need more immigration.
We need more reproduction.
We need more families.
We need larger numbers of kids.
Because that's wealth.
The ultimate answer to your life is the legacy that you leave in your children and your grandchildren and so forth, and the extraordinary achievements that they will develop because of the training and the fatherhood, in your case, that you were providing.
We can't lose that.
It's so important.
And I'm completely in agreement with you that we're building systems that are addictive for humans.
And by addiction, I mean addictive for adults.
We see this in all sorts of ways.
Dating is later.
Marriage is later.
Family formation is later.
The number of children is smaller.
It's true, by the way, in every developed country, and it's also true in China.
So let's talk about the threat of China.
So you mentioned China in this context.
China has been pouring money into AI.
China obviously has been using technology as a wedge in order to build connections with a wide variety of nations, whether it is trying to offer 5G with strings attached to underdeveloped countries, or whether it is trying, through direct foreign aid, to make alliances with countries that have access to rare earth minerals.
China's become much more threatening on the geopolitical stage, but it's underappreciated what they're doing in the technological space, both in terms of subsidizing technology and in terms of stealing technology from the West.
How do we deal with the threat of China in the space?
How threatening is it if China gains an advantage in the AI space?
Well, I was fortunate to be the chairman of a commission that the Congress set up, a bipartisan commission on national security and artificial intelligence.
And we concluded that the threat from China is quite real, that they are at the moment over-investing compared to the United States.
They're producing more PhDs.
The quality of their work this year is now to the point where it's on par or better than the best papers that we're producing.
So they have arrived, and they have arrived quickly, and they're over-investing.
The consequences of this could be very, very significant.
You mentioned Huawei and 5G, but think about TikTok.
TikTok uses a different artificial intelligence algorithm.
Your teenagers are perfectly happy to use it, and I don't mind if the Chinese know where your teenagers are, but I do mind if they use that platform to begin to affect the way we communicate with each other.
And so there's every reason to ensure that the AI world we want in front of us represents Western values.
That is the Western values of democracy and free speech and free debate and so forth and so on, which are certainly not present on the Chinese side.
There are also all sorts of national security issues.
I want to be careful about your statement about Cold War, because that's a way of representing a certain thing.
China, by the way, has an economy which on a purchasing power basis is larger than America's now.
Russia was never anywhere near that.
Also, if you look today, after all this discussion about China, we have more trade between China and the U.S. this month than we ever have.
So we're not in a Cold War, we're in a rivalry.
And the rivalry has huge stakes.
In the rivalry, imagine if China becomes the dominant player in AI, semiconductors, computer science, energy, transportation, face recognition, electronic commerce, and other services.
That's my whole world.
That's everything in tech.
Trillions and trillions of dollars of companies that are being formed today are in danger if we don't invest.
And our report basically said the government needs to help with research funding.
There's lots of hiring that has to occur within the government.
We have to build partnerships with like-minded countries, South Korea, Japan, European countries, and so forth.
We've got to take this seriously.
We've gotten bipartisan support of this because both sides are concerned about this for different reasons.
So we may be able to actually effect change.
Imagine if the semiconductors that we use in our conversation right now and that your viewers are using today were made in China and had Chinese infiltration in one form or another.
You wouldn't be comfortable and I wouldn't be comfortable.
You have to really think about this from the standpoint of national security and open communications.
So Eric, you mentioned they're not using the language of Cold War, but instead using the language of rivalry.
I think that many of us in the West tend to think of China in terms of rivalry because, as you mentioned, there is so much trade with China because we are so integrated with their economy.
But China is obviously pursuing a mercantilist economic policy while we are pursuing a much more free market policy.
Which means, do they see it as a rivalry or do they see it as an actual Cold War?
Well, I'll tell you what the Chinese have said publicly.
I've not talked to them privately because we haven't been able to go to China for two years because of COVID.
What they say publicly is that the history of China was it was this enormous middle kingdom.
And for various reasons involving the Opium Wars and so forth, 150 years ago, they were embarrassed.
They withdrew.
They were put under pressure.
Their strength was constrained by historic accidents.
They've all been taught this.
And now this new president, President Xi, who's not that new anymore, is basically using this language to, shall we say, inflame nationalism, to build confidence.
He gives speeches now which say, we will never, ever be under the thumb of these other oppressors.
That's the language that they use.
This is fomenting enormous self-confidence.
And that self-confidence is fine as long as it stays in its region.
The problem with the self-confidence on either side is it can lead to a series of strategic missteps, a series of overconfident moves.
Dr. Kissinger, who's a co-author with us on our book, talks a lot about the potential that the current situation feels a lot like the preamble to World War I, where you had rapid industrialization, new technology, old rivalries that got stoked, and then a series of errors precipitated a horrendous war.
And I can tell you, having worked for the U.S. military for five years in my 20% job when I was at Google, a real conflict with China would be so horrific, it has to be avoided.
We have to find ways to coexist with China, even if we don't like what they're doing.
And they have to find ways to coexist with the system we built, even if they don't like what we're doing.
Having that distrust, but a working relationship is better than world destruction.
So how should technology companies deal with China?
One of the big issues, obviously, is that it seems to a lot of folks, including me, that China has taken advantage of our free, open markets in order to cheat.
I mean, they've stolen an enormous amount of Western technology.
It seems as though they dictate terms to Western corporations.
Western corporations want to do business in China, and China basically then forces those Western corporations to abide by particular rules in order to even do business there. On a public level, that's most clear with regard to, for example, the NBA, where you had the GM of the Houston Rockets criticizing China over Hong Kong, and the NBA came down hard on him, lest China close its markets to the NBA.
But you've seen this in the tech sphere also, when you're at Google.
Obviously, there was significant controversy over Google search results and Chinese censorship.
How should tech companies be dealing with the fact that China says, if you want to play in our pond, you got to play by our rules?
Well, indeed, Google left China in 2010 in a very controversial move because Google did not want to be subject to the rules that you just said.
I hate to be blunt about this, but the fact of the matter is you have the rise of another power of a similar size that operates differently from the way we do.
We can complain all we want to about the things that they do, but since we're not willing to bomb them and we're not willing to attack them, the use of force to force our way is not going to work.
We're going to have to find ways and incentives.
I'll give you an example.
The Trump administration, and the guy who did it is a fellow named Matt Pottinger, who's really, really smart, figured out that Chinese semiconductor leadership is really, really important to keep back.
And our recommendation in our report is that we try to stay two generations ahead of China in semiconductors.
So the Trump administration, for example, identified technology in a company called ASML.
It's complicated, but basically it's extremely small lines that go onto the wafers.
And this particular company, which is a Dutch company, is the only source of that.
And the Trump administration basically made it impossible for that hardware and software to make it to China.
That's a good example of the kind of move that we need to consider to make sure that strategically our model continues to win.
I understand the issue of the history and openness and they have a different system, but it's sort of like, okay, what do you want to do about it?
I think the answer is, instead of complaining about them, there's lots to complain about.
Why don't we complain about ourselves and fix ourselves?
And in particular, why don't we focus on technology excellence, platform excellence, doing the things we do best.
The American system is different from the Chinese system.
The Chinese system is integrated, essentially an autocracy.
They have something called military-civil fusion.
There's no difference between the military and businesses.
The universities are all interlinked and funded by the government.
The sum of all of that is just a different system.
Why don't we make our system stronger?
Our system is the union of the government and universities and the private sector.
I'll give you another example.
Who made the best vaccines?
Well, here, again, under Trump, with Operation Warp Speed, the government guaranteed the products, independent of whether they worked or not.
The universities developed the technologies, and the businesses took huge risks and made a gazillion dollars and saved us.
That's the best of America.
They all worked together to solve a problem.
That's what we should be doing.
So in a second, I want to ask you on a broader level about isolation versus interconnectedness with China.
And I also want to get into some of the issues as to who controls the AI world and who exactly should we be trusting with the developments of this technology?
First, let's talk about how you send your mail.
Now, this holiday season, the lines at the post office are going to be super long.
Everybody's getting back out there.
You've got to send some packages to friends and family.
Well, what if you want to do that, but you don't want to go to the post office?
Well, do what we do.
Use Stamps.com.
Stamps.com lets you compare rates, print labels, and access exclusive discounts on UPS and USPS services all year long.
It just makes sense, especially if your business sends more mail and packages during the holidays.
Here at Daily Wire, we've used Stamps.com since 2017.
No more wasting our time.
Whether you're selling online or running an office or a side hustle, Stamps.com can save you so much time, money, and stress during the holidays.
Access all the post office and UPS shipping services you need without taking the trip.
And get discounts you can't find anywhere else, like up to 40% off USPS rates and 76% off UPS.
Going to the post office instead of using Stamps.com is kind of like taking the stairs instead of the elevator.
I mean, you might get a little more exercise, but also, would you really want to do it, like, every single day?
Just take the elevator.
If you spend more than a few minutes a week dealing with mail and shipping, Stamps.com is a lifesaver.
You'll save a lot of time and a lot of money.
This is why you should be doing it.
Save time and money this holiday season with Stamps.com.
Sign up with promo code Shapiro for a special offer that includes a four-week trial, free postage, digital scale, no long-term commitments or contracts.
Head on over to Stamps.com, click the microphone at the top of the page, enter promo code Shapiro.
Okay, so let's talk about interconnecting this versus isolation, particularly in regards to the tech world.
So you have these enormous multinational corporations, transnational corporations, places like Google.
So you mentioned that Google withdrew from China in 2010.
At the same time, we do want to have impact for Western companies in China.
So how exactly should tech companies be playing that game, given the amount of isolationism and autarky that the Chinese regime is pursuing?
I think it's unlikely that the large tech companies will have a big role in China.
Tesla has just done a large development in China, and I'm sure that the Chinese will take advantage of that incredible transfer of knowledge, and it will also help the Chinese local manufacturers.
I think it's just how they operate.
I think the most likely scenario is that we're going to see information technology in the West, and then a different kind of information technology in China and the BRI countries, who are its partners.
And the reason is that China and her partners are pretty much unified in the way they want people to express things, the way they want to deal with anonymity, the way they want to deal with the messy aspects of free speech and the internet.
Whereas in the West, we're pretty committed to dealing with it and supporting it in some way.
So, if you look, all of the interesting tech companies that do anything online are essentially blocked or limited in China in one way or another.
It's interesting, by the way, that Apple has managed to create the following interesting structure.
Apple is regulated by the Chinese, and they fight all day as to what goes on in the App Store.
So Apple could withdraw from China, but they want their revenue.
But on the other hand, China is also the manufacturing source for Apple's products, which is of great value to the Chinese.
And so they have an uncomfortable relationship.
So the most likely scenario is either no relationship or an uncomfortable relationship of the kind that I've described with Apple.
In Google's case, various people have tried to re-enter in various ways, and they've not been successful so far.
So I want to move kind of back to the domestic sphere and back to the West for a second, because a lot of what we're talking about here, as far as machine learning and AI, we've seen sort of the proto version of that in how we deal with social media.
And that, of course, has been incredibly messy.
You mentioned early on Russian interference or attempted interference in the 2016 election.
Obviously, from the right side of the aisle, there's a strong belief that a lot of the tech companies, particularly including YouTube, Google, Facebook, that these places are designing algorithms that may not be free with regard to the transfer of information, that somebody is setting the rules.
There's a lot of distrust as to who is setting the rules at these companies, generally speaking.
Now, I'm not comfortable with the government setting the rules, because I think that the government generally tends to be run by political hacks who then do exactly what benefits them.
But at the same time, there's a lot of disquiet, and I think a lot of it is justified, as to who sets the rules.
So when there's a hate speech regulation on Google, or if there's a hate speech standard on YouTube, how do you define terms like this, and who defines what is sort of the best public square?
When you take that sort of disquiet and you extend it to things as all-encompassing as AI, you can see why people are freaked out.
This is a good preamble to the general point we make in our book.
That the tools for manipulating information, and I mean that broadly for good or bad, are going to be very broadly distributed.
And so the software to do the kinds of scenarios on any side that you described will be open source and available to pretty much any actor, a company, an institution, a liberal organization, a conservative organization, and so forth.
And that manipulation is not so good for society.
In the book, what we say is that eventually people will have their own assistants, basically software that understands what they care about and also makes sure the sources of manipulation of them are honest.
In other words, it says, I'm going to choose this article for you and this other article is really false and I'm really not going to choose it for you unless you want me to.
It'll be under your control.
It'll be the only way that people deal with this extraordinary explosion of what I'm going to just call misinformation and disinformation.
It'll come from governments, it'll come from our own government, it'll come from others, it'll come from evil people and well-meaning people.
And that's a big change in human society.
All of us grew up thinking that you should just believe whatever people told you.
And now a better understanding is that people are trying to manipulate you and you have to find your own way of thinking through this.
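To make that concrete, here is a minimal sketch of the kind of personal assistant the book describes: software that knows your interests, flags likely falsehoods, and leaves the final filter under your control. Every name, field, and threshold below is a hypothetical illustration, not anything specified in the book or shipped as a real product.

```python
# Hypothetical sketch of a personal information assistant: it ranks
# articles by the user's interests and labels low-credibility items
# rather than silently hiding them. All names and thresholds invented.
from dataclasses import dataclass

@dataclass
class Article:
    title: str
    topics: set
    credibility: float  # 0.0 (likely false) to 1.0 (well sourced)

def recommend(articles, interests, show_flagged=False, min_credibility=0.4):
    """Return (article, label) pairs, best topical match first."""
    ranked = sorted(articles,
                    key=lambda a: len(a.topics & interests),
                    reverse=True)
    results = []
    for a in ranked:
        if a.credibility < min_credibility:
            # Flag rather than censor; the user decides whether to see it.
            if show_flagged:
                results.append((a, "flagged: low credibility"))
        else:
            results.append((a, "recommended"))
    return results

feed = [Article("Vaccine trial results", {"health"}, 0.9),
        Article("Miracle cure!", {"health"}, 0.1)]
print(recommend(feed, interests={"health"}))
```

The design point Schmidt makes is captured by the show_flagged switch: the assistant labels suspect material instead of deleting it, so control stays with the user rather than a gatekeeper.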
With respect to the bias question, there's no question that the majority of the tech companies are in liberal states and the majority of their employees are, for example, Democratic supporters and so forth and so on.
That's, I think, to some degree, cultural.
There are ways to address what you said.
The most important thing is public pressure to make sure you have balance.
And calling for regulation is always a problem.
Because bringing in a regulator on either side, right, is highly likely to fix things in their current form and prohibit innovation.
The way we're going to compete with China is by innovating ourselves ahead.
What I want is innovation that's consistent with our values which include open discourse, freedom of expression, freedom of thought, freedom of seeking things.
Eric, one of the things that's come up in this context is the question of monopoly.
So there have been a number of political actors who have suggested that companies like Google effectively are monopolies because they control so much of the ad revenue market or because they control so much of the search market.
Do you think that Google is a monopoly?
And if so, what should be done about that?
How should we view the problem of monopoly given that while there are a wide variety of sources that are available on Google, Google does control, for example, the ranking of search results?
Yeah, the Google question we've had since I've been there, which is more than 20 years, and the question goes, OK, would you like someone else to decide how to do ranking?
So ranking is an algorithmic decision that computers make, humans don't make.
And my first month at Google, somebody called up and said, you're biased.
And I said, well, the computer makes the decision.
And they said, no, there must be a human making the decision.
And I sat down with the engineers, and I'm convinced that there was no bias.
Because the algorithm made the decision.
If you want to regulate the algorithm, then write down how you would like it to be different.
What are the rules?
And you'll find it's extremely difficult.
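To see why "write down the rules" is so hard, here is a toy ranking rule of the kind a regulator would have to specify. The signals and weights are invented for illustration; real search ranking combines hundreds of signals.

```python
# A toy version of "the rules" a regulator would have to write down.
# Signals and weights are invented; real ranking uses hundreds of signals.
from collections import namedtuple

Doc = namedtuple("Doc", ["title", "relevance", "quality", "freshness"])

def score(d):
    # Every weight is a judgment call; change one and different
    # results win, which is why legislating this is so hard.
    return 0.6 * d.relevance + 0.3 * d.quality + 0.1 * d.freshness

docs = [Doc("Result A", 0.9, 0.4, 0.2), Doc("Result B", 0.7, 0.9, 0.8)]
for d in sorted(docs, key=score, reverse=True):
    print(d.title, round(score(d), 2))
```

Even in this tiny sketch, nudging one weight changes which result wins, and every choice of weight favors somebody.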
The people who are calling for antitrust often talk about these big companies, which include the FANG stocks, I guess we're going to call them the MANG companies now because Facebook became Meta.
Not sure quite if that's a good idea or not.
But the key thing about the MANG stocks is they're very large and very integrated.
And some people say, ah, let's break them up.
So Elizabeth Warren, for example, said, let's take the App Store and separate it from the iPhone.
Let's take YouTube and separate it from Google, et cetera.
There's a list.
The problem with these things is they don't actually fix anything; they just take a small number of companies and make more of them.
It's good for the shareholders to break them up because the value in aggregate will go up, right?
That's the bizarre thing.
It's good for the shareholders in sum because of the distributions they'll get, but the problem is it doesn't actually address the problem that people have; it just essentially creates more big players.
You have to come up with a way of addressing the network effects in these platforms and the good execution.
And I haven't seen it.
I think we'd be better off regulating the worst aspects, you can make a list, I can make a list, rather than breaking them up.
In other words, if there's a specific thing that you don't like, that Facebook does, Apple does, or Google, write it down, right?
And try to figure out a way to regulate it in such a way that it's a fair regulation.
That's better than trying to break all these things up.
So in a second, I want to ask you about how the algorithms do work because obviously somebody built the algorithm in the first place.
There have to be some sort of inputs or maybe I'm wrong about that.
So I want to ask you about that in just one moment.
First, let's talk about buying jewelry for a loved one this holiday season.
It's the holiday season.
You know what that means.
You have to figure out what to get for your loved one.
Mom, your girlfriend, your wife.
Well, you know the answer to this, gentlemen.
And the answer is fine pearl jewelry, which is not going to break your budget.
At the Pearl Source, you get the highest quality pearl jewelry at up to 70% off retail prices.
Why?
Well, because Pearl Source cuts out the middleman by eliminating those crazy markups by jewelry stores and selling directly to you, the consumer.
You can shop securely from the comfort of your home at The Pearl Source.
You'll find the largest selection of pearls available anywhere.
Every jewelry piece is custom-made specifically for you.
With global supply chain problems and shipping carriers expecting major delays of delivery times, as you get closer to the holidays, now is indeed the best time to start shopping for the holiday season.
So do not wait!
The Pearl Source offers fast and free two-day shipping on every order with zero contact delivery.
Everything comes beautifully packaged, In an elegant jewelry box, it's ready to be given as a gift.
I know this because I've given my wife stuff from The Pearl Source.
She loves it.
It is spectacular.
And here's the thing.
For a limited time, listeners to The Ben Shapiro Show Sunday Special can take 20% off your entire order.
Don't wait until it's too late to do that holiday shopping.
Head on over to ThePearlSource.com slash Ben.
ThePearlSource.com slash Ben.
Enter promo code Ben at checkout for 20% off your entire order.
If you want fine pearl jewelry at the best prices online, go straight to The Source.
ThePearlSource.
ThePearlSource.com slash Ben.
Enter promo code Ben at checkout.
Alrighty, so let's talk about the algorithms.
So we hear this a lot in the tech world.
And for those of us who aren't in the tech world, it's somewhat confusing.
So we'll hear something like, you know, the algorithm makes the decision.
And those of us who are not in the tech world say, right, but somebody wrote the algorithm.
There have to be some sort of variables that are used as the inputs.
And so do the biases of the people who are building the algorithms come out in how the algorithm actually operates?
So for example, if the algorithm suggests that certain types of speech are less favored than others or certain types of speech, which are more favored by more people because they are put out by the legacy media, are quote unquote more trustworthy than other sources.
Is that a problem with the algorithm?
Or is that a problem with the inputs?
Or how does that get separated?
I don't want to claim that these algorithms are perfect, but I can tell you how they're really built.
You've got teams of tens of people, hundreds of people, and they write an awful lot of code, and they test it, and they test it, and test it.
And their success is based on some objective function, like a quality score or a click-through rate.
Now, let me give you the extreme version.
A company, a social media company, only focuses on revenue.
To get revenue, you need as much engagement as possible.
How do you get the most engagement?
You get as much outrage as possible.
You get addicted to the outrage and you can't get rid of this.
And the money goes up and up and up and up.
That's the natural outcome of these social networks if you want to maximize revenue.
Hopefully the leaders of these companies are trying to do other things than just maximizing revenue.
But it's a good example where the algorithm, in this fictional example, wants nothing but more money.
So all it's gonna do is get more attention and drive you insane, right?
I don't know how you regulate that, but that's an example of a company that would be very good for its shareholders, but a bad company to be part of.
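A minimal sketch of the "revenue only" objective in Schmidt's fictional example, with invented fields and numbers: if the objective function rewards predicted engagement alone, outrage rises to the top of the feed.

```python
# Fictional "revenue only" objective: nothing in it mentions truth or
# user well-being, only predicted engagement, which outrage inflates.
# Fields and numbers are invented for illustration.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    predicted_clicks: float  # expected click-through rate
    outrage_score: float     # 0.0 calm to 1.0 enraging

def engagement_objective(p):
    return p.predicted_clicks * (1.0 + p.outrage_score)

posts = [Post("Calm explainer", 0.10, 0.0),
         Post("Outrage bait", 0.08, 0.9)]
feed = sorted(posts, key=engagement_objective, reverse=True)
print([p.text for p in feed])  # the outrage bait ranks first
```

Note that the calm post actually has the higher baseline click rate; the outrage multiplier alone is what flips the ranking.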
How do we bridge the gap? I think there have been a few semantic games that have happened over the course of the last several years.
It's really kind of fascinating to see how the media treatment of big tech companies and social media companies changed radically in the aftermath of 2016.
So in 2012, there was wide praise for the social media companies: how they had effected change in, for example, the Arab Spring, or how they connected people during the original Black Lives Matter movement, or how they had connected people during the 2012 campaign.
And by 2016, after Trump won, there seemed to be a pretty radical shift in how the media started treating social media.
Suddenly, social media was the bugaboo.
Social media had opened the door too broadly.
It had allowed too much disinformation.
And the question of disinformation, well, there's a definition to that term, right?
Disinformation being an opposing state actor, or some sort of entity that is antithetical to the interests of the United States, actively putting out information that is wrong in order to undermine the institutions.
That's disinformation.
Then there's a shift semantically from disinformation to misinformation.
Misinformation meaning either something that I think is false or something that is actively false.
And all these lines started to get blurred.
This is where, as a conservative, I start to get very wary: the notion that there is some sort of great gatekeeper, that there can be a gatekeeper function performed by an algorithm that goes beyond simply saying, we're not going to allow articles to be disseminated that say 2 plus 2 equals 5.
Instead, we are going to go to an outside group of, for example, fact checkers, who may be biased in their own right, and then use an aggregate score, which looks objective but actually isn't, because the inputs are not objective.
And one of the things that I think many of us on the right are pushing for is less regulation specifically because we want more speech on these platforms, not less speech.
That's why I think that it's very odd to see the common cause being made between some on the left who want the speech heavily regulated on these platforms and some people on the right who want the platforms broken up.
What's the best way to open up these platforms if you are, for example, a startup, right?
My company was a startup.
We started this company in 2015.
This company was launched on the back of the power of social media and then pretty much every day we have to fight in the press and we have to fight social media, you know, algorithms that are attempting to upgrade, for example, the New York Times and maybe downgrade sites like ours because we aren't quite as established.
What you're describing is the current world that we did not anticipate 10 years ago.
The fantasy that we all had was that we would see an enormous explosion in speech and we would see a great deal of decentralized actors.
A very, very large number of small players who would collectively be very interesting.
Because I agree with you that we need more speech.
We need more speech from everyone.
And we also, by the way, need more listening to everyone.
And instead what we've ended up with is a concentration of essentially large vertical sites which have some of the aspects that you're describing.
What's the answer?
I hope it's not government regulation.
I agree with you.
I hope the answer is more competitors.
So what will it take to get competitors to all of these players?
An example is that a former Google executive has left to create a competitor to Google, which is run very differently.
We'll see how well it does.
In my view, that's a welcome thing.
We need competition, for example, for Facebook and the way it operates.
We need different ways of experiencing ourselves.
In a situation where the former president was banned by Twitter and Facebook, he needs a competitor who will host him and will give him an audience so that his voice can be heard.
That makes sense to me on those terms.
Now the problem you have, so let's assume we have all of that.
Let's assume the answer is more speech and more good speech.
There doesn't seem to be a penalty for lying anymore.
There doesn't seem to be a penalty for absolute factual falsehoods.
And especially in the case of the COVID situation, where they affect lives, we've got to come up with some solution, which is not an overlord and not a regulator, but which basically causes people to believe that we should focus on facts and not fiction.
And we haven't figured that out yet.
At Google, and Google's not perfect, we face this question.
And Google is full of falsehoods.
But we have a ranking algorithm, and hopefully the falsehoods are lower than the non-falsehoods.
And when we were building Google, what we did is we would do trade-offs between revenue and quality.
And I arbitrarily decided that whenever we had an improvement, half of it would go to quality and half of it would go to revenue, because I couldn't decide between the two.
So that's an example of informed decision that, in my view, helped the company become the high gold standard of answers.
But it's not perfect.
We need equivalent conversations.
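As a rough illustration of the trade-off Schmidt describes, here is a hypothetical launch-decision rule that weights quality and revenue equally. It is a sketch of the principle under invented numbers, not Google's actual process.

```python
# Hypothetical launch rule echoing the 50/50 split: weight quality and
# revenue equally, with a guardrail against big quality losses.
# Illustrative only; this is not how Google actually decided launches.
def accept_launch(quality_delta, revenue_delta, max_quality_loss=0.05):
    combined = 0.5 * quality_delta + 0.5 * revenue_delta
    return combined > 0 and quality_delta > -max_quality_loss

print(accept_launch(quality_delta=0.02, revenue_delta=0.01))   # True
print(accept_launch(quality_delta=-0.10, revenue_delta=0.30))  # False: quality guardrail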
But let me tell you, if you're in the business of peddling falsehoods, which you better not be, but if you are in that business and you're peddling falsehood, then there's no check on you.
Shame on you.
Right?
Get your facts right.
And I'm not talking about theories and so forth.
I'm talking about basic facts.
Do your research.
What I've learned as a citizen is, when people make claims, did you know this?
And did you know that?
People say all sorts of things.
You know, I check.
I check because I don't want to be misinformed by my friend.
Let alone by any form of media.
And I would encourage people, you need to be your own best advocate for truth.
And do your own research.
Don't just believe what people tell you.
So Eric, in the COVID context, it's actually a really interesting context to talk about this, because you're right that there is a category of disinformation from particular states, actually, and certainly misinformation and actual untruths that were put out with regard to COVID, ranging from the origins of COVID, perhaps, to the effectiveness of the vaccines, which are in fact highly effective against hospitalization and death, although not nearly as effective against infection or reinfection.
But one of the things that came up in this context, and it really does go to human nature, and this may go to the broader conversation about AI, is that because the data were constantly moving, because there wasn't a lot of data about COVID at the beginning, it was a brand new virus.
And because that data was being gathered, there was a lot of shifting that happened by the institutional players in the society.
We had Anthony Fauci, for example, claiming at the beginning that people should not wear masks, and then admitting that he was telling a platonic lie and that people probably should wear masks.
And then we were told, after the vaccines were available, that you shouldn't mask.
And then after Delta came up, then it was, you should wear a mask again.
And this has happened across the board, and the sort of shifting and moving by the institutions was itself mirrored by the tech companies, which basically said the institutions are the best available source of information.
Therefore, if you contradict them, we're going to downgrade your information.
It actually drove, I think, part of the backlash to the actual truth in a lot of these cases.
I think it drove a lot of anti-vax sentiment that the tech companies seem to be, quote unquote, silencing other forms of argumentation, even if they were right to silence particular notions about, you know, completely unfactual ideas about COVID.
We've been through a pandemic that occurs hopefully less than every 100 years.
Sorry, more than every 100 years.
Very rare.
And there were lots of mistakes made at the beginning, including the ones you described and many others in my view.
There was a point sometime last summer where the facts became clear that the vaccines would work, and God save us, it really worked.
And people are alive today.
It's also true that masks work.
It's also true that public health matters.
It's also true that, if you have COVID, you're most infectious the two days before you have symptoms.
So to say, I'm just going to have my own COVID and have a good time, that is, speaking from a sort of libertarian view, perfectly fine.
It's okay for you to do that.
It's not okay for you to do that for other people.
That's where the narrative got confused.
My issue now is we are now at a point where the science is well established.
I think it's not okay for people to propagate falsehoods that involve population health and people's lives.
I just honestly don't think it's okay.
And that's independent of whether the Democrats made a mistake or the Republicans made a mistake.
There's plenty of blame to go around.
I've been advocating, by the way, for a private commission to go and actually analyze all this.
It would have to be bipartisan.
Everything that I've done in the commission space has been bipartisan and it's worked well.
You can bridge across the aisle around important things for our society.
We are all Americans, right?
Somehow we forget in the fight that we are Americans, that we have the same culture and the same values.
We've got to figure out solutions.
I will tell you about the ease with which people can market falsehoods because of the new tools.
And I'm not taking sides on this.
It's dangerous.
Because a lot of people are just trying to get through the day.
They just want someone to help them understand what's going on.
They don't want to become their own experts on everything.
They have lives, they're busy, and so forth.
They want to watch TV or whatever it is.
We need to help them and make sure they're not inundated with falsehoods.
Okay, so Eric, you're talking about, you know, the informational question and sort of what should be allowed and what shouldn't.
And I agree with you, there's definitely bright line issues, right?
If you say that N95 masks are completely ineffective, that's obviously a lie.
It's obviously not true.
If you want to say that the federal government of the United States unleashed this on people just in order to kill them and take control, that obviously is untrue.
And there are a lot of patent untruths out there.
But one of the questions that I think is going to crop up, as we mentioned, more and more often as AI becomes more and more a part of our lives, is who gets to make the call as to what is a bright line lie and what is just stuff that is sort of controversial.
And there are edge cases.
And my fear is that the edge cases start to bleed over into more mainstream points of view.
The Overton window starts to shrink, because we've seen that happen in nearly every area of American public speech; the Overton window has shrunk pretty radically.
And when that happens, there are two dangers.
One is that actual innovative ideas and questions get left out.
And the other is that eventually it breeds a massive backlash.
When you wish enough people out and enough perspectives out into the cornfield, they tend to unify and then attack the center.
We need to get back to the way the world was, at least the world that I grew up in, where speech was considered okay even if it was controversial, and it was also debated.
And it was not done in a screaming way.
I think it's incredibly important that our universities reflect all the points of view and that they welcome them.
And that there be public debate over it.
I think it's fine for people to make these kinds of claims.
I just don't want the computers to then sell them.
One way to understand my view is to say I'm really a strong believer in adult public speech.
Free speech.
I'm not in favor of the equivalent speech for computers.
In other words, the fact that you're a crazy person, and I'm sorry to say this, a crazy person who has a crazy idea, that's fine.
But it's not okay in my view that your crazy idea then becomes magnified to the point where it becomes a dominant theme when it's just frankly something that you made up.
We haven't figured out as a society how to address this thing.
The tools that we're developing are addictive.
And the more exciting the narrative, and by the way, the more disruptive and the more aggressive the narrative, the more clicks, the more revenue, the more success, the more fame.
We've got to find a balance there.
If you look at history, when the printing press came out, the original broadsheets and so forth were regulated.
Advertising was regulated because it got out of control.
It began to affect the society.
We're now about to make everyone a publisher and everyone a potential misinformer.
We've got to sort that out.
In our book, what we say is that this affects the way people will understand what it means to be human.
I was at a restaurant in Florida recently, and I was looking at everybody, and everyone at the table, this was at lunch, was on their phones rather than looking at each other and interacting.
Maybe they were bored with each other, I don't know.
Right?
That's how addictive these tools are.
The AI system is going to get so good that it'll be able to generate addiction for you.
It will be so precise that you'll be consumed by it.
And why do I know this?
Because it's reproducible.
We collectively, as the tech industry, can see what you care about, and we can give you more and more and more and more.
And frankly, that's a pretty good life.
Oh my God, another one!
Oh my God, another one!
When I was growing up, we had music with records where the songs were in order.
There was track four and track eight.
Now your playlists are assembled for you by computers.
Well, in a world where the playlist of your life and your information is assembled for you, it's pretty powerful to be the group that's doing the assembling.
And what if they, for example, promulgate values we don't agree with, like racism or sexism or doing bad things to children or things like that?
How do we want to handle that?
I'm not asking for more regulation, but I'm asking for some judgment here on the use of these tools.
Eric, it seems to me that a lot of what we're talking about here is only going to be solved in the offline world.
So we're constantly looking for solutions within the online world, because that's where the problem exists, in the online world or in the social media space.
But I mean, as I've found, it's the ability to actually see and speak to one another as human beings that's really important.
And as AI becomes much more powerful, as we spend more and more time online, society seems to be getting significantly worse.
It's a lot easier to treat people badly anonymously than it is in person.
Simultaneously, the deepest connections are the ones that can be found in person.
There's a great irony to the fact that all of these tools that were supposed to bring people together have sort of flattened our existence in a pretty significant way.
There is something innate to the human-to-human connection that is necessary in order to build actual social capital.
And as we spend more time online, which was supposed to generate social capital, is it actually building more social capital, or is it just fragmenting us further by flattening us?
Well, the good news is that the Bowling Alone theory, which came out 20 years ago, was false, right?
The Bowling Alone theory was that we would all be sitting isolated at home, watching television.
In fact, if you look at our young people, they spend literally every waking minute of their day online.
Their phones are next to their bed.
If they wake up in the middle of the night, they get on their phone.
If they get up in the morning, the first thing they check is their phone.
So we already know the answer to the question of where your teenager is.
Your teenager is online on their phone.
We also know where your adult friends are.
We also know where your kids are.
We also know where your parents are.
They're all online.
So we've answered that question.
The question is, what kind of a world do we want?
I used to say, ten years ago, that this world was optional.
That if you don't like it, just turn it off.
It's okay.
No problem.
I don't mind.
I think it's good to take a break.
You can't take a break from it anymore.
It's become the way you educate, the way you entertain, the way you make money, right?
The way you communicate, the way you have a social life.
So we're going to have to adapt these platforms to these human regulations.
I used to think it's fine to have all that bad stuff online because people can just turn it off, but they can't now.
And what you're going to see, whether we like it or not, is that if the tech companies don't figure out a way to organize this, they will get regulated on it, and the initial regulator will be Europe.
And by the way, in case we're confused on this, China has solved this problem.
China does not allow anonymous browsing.
Furthermore, because you're logged in, they know who you are, and they have tens of thousands of people who watch for forbidden speech.
They have very sophisticated AI algorithms that watch for what people say, and then the police come knocking.
We don't want that.
I mean, I do wonder, you know, as a religious Jew, one of the big things for us, and it seems to be more and more relevant every day, is the Sabbath, right?
This enforced, mandatory, get-offline kind of circuit breaker that runs from Friday night to Saturday night every week.
And it really does, I think, ground you in a certain reality.
On a social level, I'm not talking about legislation here.
On a social level, it seems to me that it's going to be very important that human beings do find a way to disconnect from this technology, at least for a little while, and start to re-ground themselves in a material reality that exists offline.
I agree with that.
I worked in Utah for a while, and in Utah members of the LDS Church have a similar practice involving evenings where they're connected only to their family, because their families are so important.
I went with Governor Bill Richardson to North Korea a few years ago, and one of the things that we did is we left our phones before we went to North Korea because obviously you couldn't use them.
And for three days, we had an experience that I'd not had in decades.
We talked to each other, we got to know each other, the traveling group, we enjoyed our trip.
The moment we got back out of North Korea, we're online, we're back at it.
It was as though we didn't learn anything from our three days offline, and we went right back.
That's how powerfully addictive these technologies are.
So, in a second, I do want to ask you a couple more questions, but if you want to hear Eric Schmidt's answers, and these are going to be deep questions, then first you have to head on over to Daily Wire and become a member.
Go to dailywire.com, click subscribe, you can hear the rest of our conversation over there.
Well, Eric Schmidt, it's been a pleasure having you here.
The book, again, is The Age of AI and Our Human Future.
Go check it out, it's available for purchase right now.
Eric, thanks so much for joining the show, really appreciate it.
Thank you, Ben, as always, and I'll see you soon.
Our guests are booked by Caitlin Maynard.
Editing is by Jim Nickel.
Audio is mixed by Mike Coromino.
Hair and makeup is by Fabiola Cristina.
Title graphics are by Cynthia Angulo.
The Ben Shapiro Show Sunday Special is a Daily Wire production.