Martin Ford argues that AI, driven by Moore's Law and deep learning, will rapidly substitute for human labor across all sectors, creating a jobless future that necessitates a universal basic income. He warns that while algorithms could reduce bias, their current optimization for engagement risks radicalization, and the looming singularity poses existential threats from autonomous weapons in a US-China-Russia arms race. Ford contrasts a potential post-scarcity utopia with a worst-case scenario of social unrest, concluding that society must adapt through federal UBI programs to avoid political demagoguery and financial collapse.
So, first off, before we dive directly into robots and AI and all that, just tell me a little bit about your background, what brought you to writing a book like this.
Okay, so I studied computer engineering in college and then I worked as an engineer, a design engineer, for several years.
Then I went back and studied business.
Eventually I ended up starting and running a small software company up in Silicon Valley.
I ran that for many years, but even in the course of running that, I saw the impact that all this technology was having on jobs at my business and at businesses like it.
And that really got me thinking about this issue.
And so about 10 years ago in 2009, I wrote my first book called The Lights in the Tunnel, which really argued that artificial intelligence was going to be the next big thing in computing.
And then it was going to have a dramatic impact in particular on the job market.
And that book did well enough that it led to an opportunity to write this book in 2015, which really got quite a bit of attention.
And since then, I've kind of shifted my career to really be a futurist, focused on what AI and robotics mean for society and for the economy, and especially for the job market.
Yeah, I mean, this was an issue that was very much off the radar back then.
It came with a fair amount of stigma.
And the reason is that this concern or fear that machines might take a lot of jobs and there might be unemployment, it's an old issue.
It's come up many times in the past.
Going all the way back to the Luddites, right, in England 200 years ago.
And so there's actually this term, Neo-Luddite, for a person that is once again worrying about this issue, and so it was quite stigmatized.
So in 2009, when I wrote the book, I was one of the earliest people to get out there with this, but since then things have definitely changed.
I see a lot of people much more concerned about this, even professional economists and so forth.
So there definitely has been a shift in mentality over this last 10 years when we've seen things like the advent of self-driving cars that look like they're going to be arriving soon and so forth.
Well, the most important thing is what you might call Moore's Law, the fact that the power of computers is accelerating, doubling every two years.
And it was obvious that computers were going to get dramatically more powerful.
And there had to be an application for that, something you could do with all that power.
And it became obvious to me that artificial intelligence was going to be the thing.
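Ford's Moore's Law point, that capability doubling every two years compounds into dramatic gains, can be sketched with a little arithmetic (a minimal illustration, assuming only the steady two-year doubling he mentions):

```python
# Minimal sketch of compounding under Moore's Law: if capability
# doubles every two years, two decades yields roughly a 1000x gain.
def capability_after(years, doubling_period=2):
    """Relative capability after `years`, starting from 1."""
    return 2 ** (years / doubling_period)

print(capability_after(20))  # 1024.0 -- about three orders of magnitude
```

Real hardware progress is lumpier than this, of course, but the compounding is the point: ten doublings is a factor of a thousand.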
And AI means essentially solving the same kinds of problems that the human brain can solve, right?
And it means machines that, in a limited sense, are taking on cognitive capability.
They're beginning to think.
And that means that technology is going to begin to compete with and substitute for human beings in a unique way, something that we've never seen before.
And as that scales across the whole economy, across all kinds of jobs, skilled and unskilled, I think it becomes pretty clear that it's going to have dramatic implications.
Especially in this book, Rise of the Robots, I used a very broad meaning for that, basically to mean anything that is automating something and taking over, you know, things that people can do.
And very often that's just going to be software.
If you want to be more precise and technical, a robot is when you take artificial intelligence and you put it into a physical machine that can physically manipulate the environment.
But what we're talking about is much broader than that.
So we're already seeing people like lawyers and doctors being impacted.
And it's not physical robots, it's often just software, artificial intelligence.
And actually, you know, building physical robots that have dexterity, that can manipulate the environment, that's actually one of the hardest aspects of this.
And in some ways, that may be where progress is going to be slowest, where in knowledge-type work, you know, someone is sitting in front of a computer doing some routine task, cranking out the same report again and again, that may actually be much easier. So that's interesting.
So the idea part of it is easier to replicate than the physical part, even though you'd think that just building a robot that can move the way you want it to move or something, that seems technically easier than figuring out how to think like humans.
It's not just the speed of computers, which have gotten faster and smaller, of course, and are now in our iPhones; it's the speed of communications bandwidth, it's memory capacity.
So we've seen this very broad-based acceleration in technology, and that's a huge part of it.
The other thing is that there have been some breakthroughs in artificial intelligence, especially in the hottest area of AI, which is called deep learning or deep neural networks.
We've seen dramatic progress there, and that's the thing that's really revolutionizing the field. The other thing that's happened is that we are now, throughout our whole economy and society, collecting huge amounts of data, right?
There's all this data out there that wasn't there before, and this data is basically the resource that is used to train these smart algorithms, and that's what artificial intelligence looks like right now.
It's primarily machine learning.
And this is just going to be incredibly disruptive.
You know, there are going to be places where we're going to need regulation.
You can't just rein it in.
I mean, it's progress.
It's happening.
It's happening in part because of a competitive dynamic, right, within capitalism, between companies, between Google and Facebook and Goldman Sachs all competing to build the latest technology.
There's also a competition between the United States and China.
All of that is gonna push it forward relentlessly and trying to stop it is probably kind of a fool's errand.
It's probably not possible and probably not really advisable.
What I think we have to do are find ways to adapt to all of this progress.
And in some places that may mean certainly regulation.
And in other cases it will mean finding ways to address issues like unemployment and inequality
An algorithm is just essentially a computer program.
It's something that goes step by step and does something.
What we've seen recently, though, is the emergence of a new kind of algorithm called machine learning algorithms.
And this is what's really disruptive.
And the difference between a machine learning algorithm and a traditional computer programming algorithm is that, you know, historically, some programmer has sat down and told the computer what to do step by step.
With machine learning, instead, you've got a smart algorithm that looks at lots and lots of data and then figures out for itself what to do.
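The contrast Ford draws, hand-written rules versus rules inferred from data, can be sketched in a few lines. This is a toy illustration only; the spam example and keyword scoring are assumptions for the sketch, not anything from a real system:

```python
from collections import Counter

# Traditional algorithm: a programmer writes the rule step by step.
def is_spam_rule_based(message):
    return "free money" in message.lower()

# Machine learning style: the rule is inferred from labeled examples.
def fit_keyword(examples):
    """Pick the word most strongly associated with the spam label."""
    spam_words, ham_words = Counter(), Counter()
    for text, label in examples:
        for word in text.lower().split():
            (spam_words if label else ham_words)[word] += 1
    return max(spam_words, key=lambda w: spam_words[w] - ham_words[w])

data = [("win free money now", True),
        ("free weekend offer money back", True),
        ("meeting moved to noon", False),
        ("lunch at noon tomorrow", False)]
keyword = fit_keyword(data)  # the algorithm figures out its own rule
```

The point is that nobody typed the second rule in; it came out of the labeled examples, which is exactly the shift being described.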
You know, we put out a long-form show; people are going to watch a full hour of our discussion.
That's not really what the algorithm wants.
From the way we understand it from some insiders, it wants you to constantly be clicking on videos and basically fall into this click hole, to just keep the machine going more and more and more.
Now, I understand why they want that sort of attention going in different places and all that.
For what I do, I don't love the algorithm at the moment, if that makes sense.
If you're a programmer at YouTube, and you don't want people to be radicalized, or even if you just don't want people to be endlessly clicking, like, there's this game to keep people addicted to all of these things.
And it's like, I understand that.
We could put out a zillion clips so I could, you know, I could chop everything into two-minute things and we could put them out and it would probably help us in terms of views and all of those things.
That may be the kind of place where maybe some regulation has to come in and say, you know, well, you're gonna have to put some constraint on this if Google doesn't make the decision to do that itself.
Well, deep learning is a kind of artificial intelligence.
It's right now the hottest area of AI and deep learning or deep neural networks basically means a system that is loosely designed on the way a brain would work.
It has artificial neurons that are roughly similar to the neurons in your brain and that's the way it works.
This is an idea that's been around since the 1950s at least, but just within the last six years or so, we've seen just an explosion in this technology.
We've now got systems that can translate languages from Chinese to English, that can do better than people at recognizing visual images.
We've got radiology systems that can look at medical images and find cancer there, and in some cases can do that better than human doctors.
So this is absolutely the hottest area of AI.
It's also what's enabling self-driving cars, for example.
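The "artificial neuron" idea described above can be sketched minimally. This is an illustration only; real deep networks stack millions of these units and learn the weights from data rather than hard-coding them:

```python
import math

# One artificial neuron: weighted inputs, summed, then squashed
# through a nonlinearity (here a sigmoid, mapping to the range 0..1).
def neuron(inputs, weights, bias):
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-activation))

# With zero total activation, the sigmoid output is exactly 0.5.
print(neuron([0.0, 0.0], [0.3, -0.2], 0.0))  # 0.5
```

"Deep" just means many layers of these neurons stacked, with each layer's outputs feeding the next layer's inputs.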
Well, I think that's one of the real problems with it.
I mean, we're ultimately going to have to make a choice as to whether we want to allow that progress to continue and get the enormous benefits of that progress. But if that's going to come at the cost of making some set of our population unemployable, or maybe de-skilling jobs to the point where people just don't have adequate incomes even if they do have a job, then we've got to find a way to adapt to that.
And that's why, for example, I've talked a lot about a universal basic income as one possible approach to that.
But I'm very much against the idea that we should stop progress because this is where we are.
This is what progress is gonna look like in the future, and we don't wanna stop it, because progress is the thing that has made us better off over the course of centuries.
Is there any evidence that ever in history you could stop progress, actually?
And even if we wanted to stop it right now, let's say you laid out the greatest case why this thing is gonna run out of control, it's gonna put half of us out of business, you know, income inequality is gonna go crazy, poverty, et cetera, et cetera.
Is there any case where technology existed that we somehow put the brakes on it?
Because it's like, we may try to regulate some of it, but if China's not regulating it, or if they're doing it in a different way... Exactly, that's one of the biggest problems, is that you would put your country at a disadvantage, unless you could do it on a global basis.
So I want to talk a little bit more about UBI, but before we do that, can you talk a little bit just about how this has affected certain industries and how some industries haven't quite been affected yet?
So, in general, the point I would make is that it's going to affect everything.
I mean, artificial intelligence is going to be like a utility.
It's going to be like electricity, right?
And no one says what industries are most impacted by electricity.
I mean, everything relies on electricity, right?
And the same will be true of artificial intelligence and machine learning.
In the long run, it's everywhere.
In the near term, clearly manufacturing has already been impacted by automation generally.
We've seen a dramatic decrease in advanced countries in the number of people employed in manufacturing.
That's going to continue.
The robots and the automation used in factories are going to get a lot more effective, more dexterous.
They'll be able to do the jobs that right now only people can do.
But it's going to scale to many, many other areas in finance.
It's going to have a dramatic impact.
A lot of white-collar jobs there, where people are sitting in front of a computer cranking out reports or something, right?
Recently, I saw something that the CEO of Deutsche Bank, one of the big banks, said: he thought he could get rid of half of his employees in the relatively near future using this technology.
Healthcare is an area where it's going to be slower because it's really hard.
You've got doctors and nurses that need to engage with patients on a one-on-one basis and provide a lot of individual human-like service.
And that's one of the reasons that health care costs are so high in the United States right now, because we have not seen the kind of productivity increases there that we've seen in, say, manufacturing.
Although I would say in general, doctors are relatively safe because they are highly regulated, right?
There are all kinds of rules about medicine and you need to have a doctor or a pharmacist there.
So those roles are relatively protected where if you're a white-collar job in some corporation sitting in a cubicle somewhere, you don't have any protections at all.
So for that reason, I would worry a bit less about doctors disappearing in the near term.
But in healthcare, there definitely are gonna be lots of applications.
You already see robots in hospitals delivering things.
You see robots beginning to be used in elder care, looking after older people,
which is certainly one of the biggest opportunities because we have this aging population.
Pharmacy robots are a huge thing.
There are already robots that do thousands and thousands of prescriptions in hospitals and so forth very efficiently.
So this is coming.
It will take a little bit longer in healthcare than in some other areas,
but eventually it's going to be everywhere.
In retail, Walmart is beginning to introduce robots, and of course retailing in general is migrating more and more toward Amazon.
Yeah, which in theory means that jobs, you know, they might move from a retail store to an Amazon warehouse. But once the jobs go to that Amazon warehouse, now they're in a very controlled environment. And there are already lots of robots there, and those robots are gonna get dramatically better in the next five or ten years.
So in effect, you could have a giant Amazon warehouse that we've all driven by one of these huge monstrosities, and it could basically be run by all robots, and you're ordering things online.
I mean, right now inside those warehouses, you have huge numbers of robots.
And the robots will do something like bring a whole shelf of inventory to a worker, but then the worker's got to reach in there and grab the item off the shelf and put it in a box, because the robot right now can't do that.
It doesn't have the visual perception and the dexterity to do that.
But that will change over the next five to ten years.
And so those environments are going to become a lot less labor-intensive.
That's not to say they'll be fully automated, but there are going to be fewer jobs there.
That's something to worry about, because this is one of the brightest areas right now for job creation, right?
So we're gonna watch a certain sector, or many sectors, of jobs just disappear altogether. And yet at the same time, I guess the counterargument, or the people that would say we shouldn't be so alarmed about this, would say, well, the cost of everything will go down, because the robots will be able to do things at a much more efficient, cheaper level, right?
So people won't need as much disposable income, that sort of thing.
The other thing is that the really big ticket items, the things that really are putting people underwater are housing, education, healthcare, and these are exactly the areas where technology is, at least in the short and medium term, going to have the least impact, right?
I mean, housing in particular.
Someday we might have big 3D printers that make it really cheap to produce housing.
But there's still a problem with land, right?
And if you're in Los Angeles or up in San Francisco, then there's no land, right?
It's already very scarce, you know.
And that's what drives property values so high.
So you can't solve that problem necessarily just with technology.
Okay, so that's the right transition then to universal basic income.
So my default position on UBI, and I've heard arguments on both sides, and I think I told you I have Andrew Yang coming in soon, and we'll discuss it further.
My default position is that if you give people just enough to survive, you're sort of stealing, like, the most basic human right of just going and getting something for yourself.
And that it's gonna create this class of people, sort of through no fault of their own, that will just have the bare minimum to get by. And then they'll be able to stay at home and play video games and watch porn and basically do nothing all day long, and that's actually taking something from them rather than giving something to them.
That's sort of my high level philosophic position on it.
The argument I would make is that once a society reaches a certain level of prosperity, as we have, if you want to continue to have capitalism and a market, it's very important to have the kind of incentive that you're alluding to there.
But I would argue that maybe the incentive doesn't have to be so daunting that if you don't work, you're living on the street or eating out of garbage cans, right?
That maybe it's enough to say that you can basically survive if you're not motivated, but you're not going to have a terrific life.
You're not going to have a great life.
And I think that a number of studies have been done with basic income that show that when you give people this money, they don't, in fact, just drop out and stay home and do nothing.
They are actually motivated to do something more.
They invest in their family and education.
They work if they can.
They maybe start a small business.
So actually, if you give people that basic safety net, you can create an environment where people are actually more willing to take risks.
So, for example, they might start a new business.
They might be willing to leave a safe job where they're not learning anything, they're not growing, and work for a startup company, do something more risky.
See, that's the whole key to a basic income, and that's what makes it different from other forms of safety net, is that it is unconditional, in the sense that everyone gets it.
Now, what that means is that if I get my basic income and I choose to just play video games, then I'll have that basic income.
But if someone else is more ambitious, they get their basic income and they go and work, even if it's only part time, they start a small business, then they get the basic income and they also get that additional income.
We don't tax it away, at least not at the lower level.
So the key point is that the person that is productive, that is willing to do something to work, will always be better off than the person that does nothing.
And that's really key to it, because the problem with our existing social safety net is exactly what you said: if you do something, find a job, then you lose those benefits.
Right, and that's exactly what's called a poverty trap, right?
You get into a situation where you look at the options around you, and anything you do doesn't make you better off, or even makes you worse off, so you're stuck there; you can't move.
The worst possible example of that in the United States is the Social Security Disability Program, which is intended for people that are injured on a job and then they can't work.
But actually a lot of people now are gaming it, probably because they're desperate, they need an income.
And so they'll go and tell their doctor they've got pain in their back or something, and they'll jump through the hoops, and they'll get onto this program, which gives you an income.
But once you get there, you can't even be seen to be able-bodied.
People are worried even to go and work in their garden or something because someone will see them and then they'll lose their benefits, right?
So that's a really terrible example of this kind of income, where basic income, we give it to everyone, and it's unconditional, and then we encourage people to do more, right?
And that's really important.
That's one of the strongest arguments for a basic income scheme.
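The arithmetic behind this poverty-trap argument can be made concrete. The dollar amounts below are invented for illustration, and the dollar-for-dollar withdrawal is a simplification of how means-tested benefits phase out:

```python
# Illustrative arithmetic: means-tested benefit vs. basic income.
BENEFIT = 1000  # hypothetical monthly amount

def means_tested_total(earnings):
    """Benefit withdrawn dollar-for-dollar as soon as you earn."""
    return earnings + max(0, BENEFIT - earnings)

def ubi_total(earnings):
    """Everyone keeps the basic income unconditionally."""
    return earnings + BENEFIT

print(means_tested_total(0), means_tested_total(800))  # 1000 1000
print(ubi_total(0), ubi_total(800))                    # 1000 1800
```

Under the means-tested scheme, working for $800 leaves you exactly where you started; under the unconditional payment, every earned dollar makes you better off, which is the incentive point being made.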
Well, you know, one issue there is that a basic income is mobile, right?
So maybe you don't have to live in Los Angeles or San Francisco.
You can take your basic income and you can move to Detroit, right? And there you might have a pretty decent life; you can have housing there much cheaper, right? So the difference between a basic income and a job is that you can take it everywhere. So people would kind of readjust, and some people might choose to leave high-cost areas and live in cheaper places and so forth. So how do you fund all this?
I think inevitably one of the things that we are seeing with the economy largely as the result of technology is that more and more income is going to capital and less is going to labor.
So businesses and investors and people like that are getting more income, and average working people are getting less.
So what that means for the future, I think it's inevitable that we're going to have to tax capital more and labor less.
And that may involve higher business taxes or taxes on the wealthiest people that have access to lots of capital.
I mean, that's, as a libertarian, you might find that objectionable, but I think it's inevitable.
Ultimately, if you're going to have a taxation scheme, you have to tax the people that have the money, right?
You can't get blood from a rock, as they say, right?
So I think one advantage of these programs is that they're gonna start it at a low level and we can imagine that as technology advances and society becomes more prosperous that that could be raised over time.
But initially it's gonna be a very low level so I don't think we have to worry too much about destroying the incentive for people to work and so forth.
It's gonna give people a very, very minimal cushion, but they're still gonna have that incentive to work, right?
It's so interesting, because it just sets off all my libertarian bells. The second you give it to somebody, so we give a thousand bucks to everybody, well, immediately you're gonna have politicians saying this isn't enough, and then we have to make it more, and we have to make it more, and then that becomes the cycle. We're always shifting money around, and it's just because, no matter what basic level we get most people to, no one's ever gonna be like, all right, well, we're okay then.
I would say, though, that, you know, a basic income, or there are other flavors of it, a guaranteed minimum income, a negative income tax, these are ideas that in the past have been embraced by libertarians.
Friedrich Hayek was a big proponent of a guaranteed minimum income.
Milton Friedman was for a negative income tax.
And the idea is that you're creating really a market-based safety net, right?
Rather than having government house people, feed people, control industries, or try to take over businesses and run them in a way that artificially creates jobs and so forth.
Rather than doing that, just give people some money, let them go out and participate in the market.
So it actually is a market-oriented, libertarian approach to having some kind of safety net.
But I think the idea of it being politicized, that's a real concern.
One thing I actually have suggested in some of my writing is that we might set up a separate institution to kind of manage that, maybe something like the Federal Reserve, that would be independent and not part of the political process and might manage the level of a basic income. Because you could actually use it also to respond to recessions, for example.
If there's an economic downturn, maybe pay people a bit more.
And then that would help you get out of the recession, right?
That would be kind of a Keynesian response to it.
So I think there are a lot of possibilities there, but you're right.
We don't want every politician running on the platform of, I will increase your basic income, right?
So that's something that we need to think about from the beginning.
As I said, you know, maybe put it in the hands of a separate institution.
One other thing I proposed is that maybe we can build incentives into a basic income.
If people stay in school, pay them a bit more than people that just play video games.
Or if people go and work in the community, you know, help other people, pay them a bit more.
So that I think it's really important to have sort of a ladder for people that they feel they can somehow do better.
Because, I mean, the issue you raised before, that we could create this class of people that just, you know, do nothing, is something to be concerned about.
But there are ideas that we can, I think, employ to really address that.
I mean, there may be some places in the country, not Los Angeles, but some places you could live on $1,000.
So if you're really in a bad situation and you're living in L.A., you could pack up and move somewhere where maybe $1,000 will allow you to survive, right?
That's a possibility.
But I think what most people will do is they will take that $1,000 and they will use it to sort of cushion the difficult times, but they will still be motivated to find a job, to start a business, to do something.
They will just have more options, more freedoms in terms of the choices that they make.
I guess so much of this has to do with just the strange way we deal with politics.
So, for example, you'll have politicians saying, we have to have $15 minimum wage, and then you can walk into Burger King or McDonald's now and order on an iPad, because they're basically saying, all right, we're not going to pay our people this much.
So everything just becomes sort of uber-politicized, right?
And that's why a basic income is maybe a better approach because we just give it to everyone.
And the problem with raising the minimum wage is that it may be good for many workers, but it can actually also increase the incentive to automate or to do other things.
So it could actually result in fewer jobs.
So giving people a basic income and preserving the incentive for them to still work or to do other things, I think, is a very attractive proposition.
When you sort of play out, you know, 5, 10, 20 years from now, do you think things are going to be beyond drastically different, in a way we really can't think about, because of the speed of all of this, because of everything we've talked about here? That we really can't even envision the ways that this is going to go?
He expects what he calls the singularity, something that is going to completely change the whole paradigm.
He thinks that within just 10 years we're going to have human level artificial intelligence and so forth.
So that's possible.
The problem is that it's very unpredictable.
We don't know how fast all of this is coming.
What I've really focused on is sort of the practical impacts of AI and robotics and the impact on the job markets.
And what I would say is that within five to ten years, we're going to definitely see a fairly dramatic and unambiguous impact on the job market and on the economy.
Oh no, in Silicon Valley there are many people very interested in this idea of living forever.
You know, the Google people, Larry Page and Sergey and Peter Thiel is very into this as well.
He's even, I think, supposedly played around with blood transfusions and stuff like that.
So, yes, the Silicon Valley elite, you know, they're true believers in this idea that technology is going to completely transform things, that the future is going to be dramatically different from the past, and that we're going to potentially even have the possibility of living forever.
The singularity basically means a point at which technology takes off and begins accelerating at a rate that becomes incomprehensible to us.
So it comes from basically a black hole, right?
The center of a black hole is what's called a singularity where the laws of physics break down and you can't see beyond that point.
So the term singularity was coined as a way to express the idea that technology reaches a point where it's just completely unpredictable beyond that point, because things are moving so fast.
And most people that think about this associate that with the advance of what's called super intelligence, or machines that are smarter than us.
Not only human level intelligence, but a machine that, you know, is smarter than any human being.
Maybe dramatically so.
Maybe so smart that it makes us look like a mouse or an insect.
And there are very, very serious thinkers that are focused on this.
One of the most prominent ones is Nick Bostrom, who I also interviewed in my latest book, Architects of Intelligence.
So he believes that this is a real issue, and he's working on finding ways to build systems that will be controllable, even if they become super-intelligent.
Right, but is the problem with that what we hit on earlier? Maybe we here in America hopefully figure out some systems that are gonna make some sense, but if the Chinese figure out a system that's a little bit different, or just some random guy in his garage in Mexico figures out some other thing, then basically there's just no way to manage it. That seems like the big problem here.
Yeah, and that's one of the scariest aspects of this: that competition, and the fact that there would be an incredible first-mover advantage to whoever gets there first. Whoever builds the first super-intelligent system, you know, is going to be way ahead of everyone else.
And the reason is that most people believe in what's called an intelligence explosion or kind of iterative improvement, where basically once a machine reaches human level intelligence or gets beyond that, it's going to turn its attention to its own code, right, to building better versions of itself.
So it's going to continuously engineer a smarter version of itself.
And that's something that could explode rapidly.
So whoever gets there first, they're essentially uncatchable.
So that is going to set up a competitive environment between the US and China and Russia and so forth.
But there are a lot of people working on doing this in a safe way.
OpenAI is another good example.
That's the organization that was set up by Elon Musk.
Right.
And some other people, and they're actively working on building systems; basically, they're trying to get there first, to be the first one to create a generally intelligent system, and to do it in a way that is safe.
So I think that's a good thing.
There's some real focus on that and investment in that area.
Well, there are a number of breakthroughs that you have to have.
You have to have machines that can learn the way people learn.
Right now, as I said, we've got machine learning, deep learning, which is highly dependent on lots and lots of data, in particular labeled data. So you can train one of these algorithms to recognize pictures of a dog, and you would give it maybe a million photographs, and these photos would be labeled: either there's a dog there or there's not a dog there.
And based on this, it could learn and eventually recognize dogs at a superhuman level.
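The labeled-data training described here can be reduced to a toy version. This is illustrative only: a real image classifier learns millions of weights rather than a single threshold, and the numbers below are invented for the sketch:

```python
# Toy supervised learning from labeled data: each example is a pair
# (feature score, label), and the algorithm fits its own cutoff.
def train_threshold(examples):
    """Place a cutoff midway between the two labeled groups."""
    dogs = [x for x, y in examples if y == 1]
    not_dogs = [x for x, y in examples if y == 0]
    return (min(dogs) + max(not_dogs)) / 2

# Label 1 = "there's a dog there", 0 = "there's not a dog there".
labeled = [(0.9, 1), (0.8, 1), (0.7, 1), (0.3, 0), (0.2, 0), (0.1, 0)]
cutoff = train_threshold(labeled)

def predict(x):
    return 1 if x > cutoff else 0

print(predict(0.85), predict(0.15))  # 1 0
```

The decision rule was never written by a programmer; it was placed by the labeled examples, which is what makes the approach so hungry for data.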
But that's not the way a human child learns, right?
A human child, you can point to a dog, and maybe you only need to do that once before the kid learns it.
And so one of the biggest initiatives is teaching machines to learn from less data in an unsupervised way, in a way that people can.
And then you've got to have the ability to think generally, to conceive new ideas, to be creative, to understand that one thing causes another thing, as opposed to just two things being correlated.
To develop counterfactuals, to imagine, I've got this plan for the future, but if I tweak this one thing, then this is what's going to be different.
These are all uniquely human ways of thinking, and it's going to take a lot of breakthroughs before we have a machine that can do all those things.
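Ford's distinction between causation and correlation can be made concrete with a small simulation. The variables here are invented for illustration: ice-cream sales and sunburn cases are both driven by a hidden common cause (temperature), so they correlate strongly even though neither causes the other — exactly the pattern a purely statistical learner cannot tell apart from a real causal link.

```python
import random

random.seed(0)

# A hidden common cause (temperature) drives both observed variables.
temps = [random.uniform(0, 35) for _ in range(1000)]
ice_cream = [t * 2.0 + random.gauss(0, 3) for t in temps]  # sales rise with heat
sunburns = [t * 0.5 + random.gauss(0, 2) for t in temps]   # so do sunburns

def correlation(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# The two series are strongly correlated, yet neither causes the other:
# banning ice cream would not prevent a single sunburn.
print(round(correlation(ice_cream, sunburns), 2))
```

A pattern-matching system sees only the high correlation; recognizing that intervening on one variable would not move the other requires the causal and counterfactual reasoning Ford describes as still uniquely human.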
And there's just a lot of disagreement, even among the very smartest people working in this field, about how long it's going to take for those kinds of breakthroughs to happen.
How concerned are you about the unbiasing that seems to be happening when it comes to the algorithm?
So, for example, the famous case that everyone talks about is that if you Google American scientists, it happens to be, just as a function of history, that the most famous American scientists, the ones who have done most of the breakthroughs, most, not all, happen to be white men. But Google is unbiasing the searches so that they include more black people or more women and things like that. Nobody has a problem with that; no one in their right mind has a real problem acknowledging that there are scientists of every color and gender and all those things. But they're unbiasing things in a way that isn't really factual, in terms of what we're putting into the algorithm, and where that could lead us seems pretty scary.
That's ultimately a decision for society, I suppose, how we want to address those issues.
I mean, the whole issue of bias in algorithms is a huge issue in artificial intelligence.
People are working on addressing that.
And that operates in both ways.
I mean, there definitely and absolutely have been legitimate cases of algorithms that are biased against people of color and so forth, for example, and gender too.
I mean, I know that one company, for example, stopped using an AI system that was used to screen resumes because it was biased against women and so forth.
What happens is that, again, these are systems that are learning from data, right?
But where does that data come from?
Originally, it comes from people.
So if people are biased in some way and they're generating this data, then when a machine learning algorithm comes along and is trained on that data, it will pick up that bias.
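The mechanism Ford describes, a model inheriting bias from its training data, can be sketched in a few lines. The résumé data and scoring rule below are invented for illustration: a naive learner trained on biased historical hiring decisions reproduces the bias, even though the protected attribute is never an input, because a correlated proxy word leaks it.

```python
# Hypothetical historical hiring data: (words on a résumé, was hired?).
# "softball" here stands in for any proxy word correlated with gender;
# the past (biased) decisions rejected résumés containing it regardless of skill.
history = [
    ({"python", "chess"}, True),
    ({"java", "chess"}, True),
    ({"python", "softball"}, False),
    ({"java", "softball"}, False),
    ({"python", "go"}, True),
    ({"java", "softball"}, False),
]

def train(data):
    """Count, per word, how often it appears in hired vs. rejected résumés."""
    scores = {}
    for words, hired in data:
        for w in words:
            pos, neg = scores.get(w, (0, 0))
            scores[w] = (pos + (1 if hired else 0), neg + (0 if hired else 1))
    return scores

def score(scores, words):
    """Sum each word's (hired - rejected) count: the learned preference."""
    total = 0
    for w in words:
        pos, neg = scores.get(w, (0, 0))
        total += pos - neg
    return total

model = train(history)
# Two equally skilled candidates; one résumé contains the proxy word.
print(score(model, {"python", "chess"}))     # scores well
print(score(model, {"python", "softball"}))  # penalized by inherited bias
```

Nothing in the code mentions gender, yet the model discriminates, because the bias lives in the historical labels it was trained on. That is the general pattern behind the resume-screening case Ford mentions.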
Yeah. Well, it's funny that that's sort of the theme of all of this, because I'm even trying to figure out as you're talking: are you optimistic or pessimistic about where this could all lead? And I definitely sense both sides there.
Yeah, I mean, I tend to be both. Speaking more holistically, you know, there are many, many issues with AI.
And this is something that is not really science fiction.
I mean, we were talking earlier about super intelligence and the Terminator, where the machines actively are making a choice to kill us.
That's science fiction.
That lies far in the future.
But the idea of having thousands of swarming autonomous drones that are not independently intelligent, but are, you know, programmed by somebody else to attack someone, that's different.
So basically, this is something that could happen.
So the idea being that, okay, Amazon moves to drone delivery, and then someone hacks into the system, and instead of these drones dropping packages at our door, they're flying through our windows and attacking people on the streets, or whatever, yeah.
Or it could be someone, you know, manufacturing huge numbers of these drones and then installing autonomous software, because the barrier to entry here is pretty low.
I mean, these are weapons that some people have likened to weapons of mass disruption.
If you had enough autonomous drones, that would be incredibly dangerous, right?
Now, if you look at something like nuclear weapons, in order for a country to have nuclear weapons, you've got to be a nation-state.
You've got to have resources on that level in order to develop nuclear weapons.
But with these kinds of technologies, there's a lot of overlap between the commercial sector and things that could be done on the security or military side.
You could go on Amazon, you could purchase a thousand drones, and then maybe you could, you know, engineer them to be weapons or something.
These are something where, you know, there's a much lower threshold.
People in a basement somewhere could be doing this kind of stuff, right?
But it is quite scary, and many people in the AI community are very passionate about this.
In particular, there's an initiative in the United Nations to actually ban fully autonomous weapons, for example.
And the real worry is not just that militaries would use these kinds of weapons, but it would go beyond that.
And you would have, you know, this kind of shady arms dealers that now sell machine guns are selling autonomous drones.
And so they then become available to terrorists and all kinds of people.
And this is a really scary scenario.
One of the people I interviewed, Stuart Russell, who's a professor at UC Berkeley, created a YouTube video called Slaughterbots that you can go on and watch.
It's really quite terrifying.
It shows you exactly what could be done with huge numbers of swarming autonomous drones.
Again, it's not science fiction.
It's something that could happen in the next 5-10 years.
Are you familiar at all with the anti-technology movement?
The more you pay attention, the more people you see trying to either get off the grid or limit their time online, and that whole thing.
I mean, that's, I think, a natural response to a lot of this.
I mean, the worst example of that is Ted Kaczynski, the Unabomber, who actually wrote a manifesto that was published, I think, in the New York Times.
But if you go and read that manifesto, this is a guy that, okay, he's crazy, he's a murderer, all of this.
But if you read that manifesto without knowing it was written by him, he's raising a lot of the issues that we are now talking about.
You know, the issues that technology could be a threat, the issue that we might become so dependent on this technology that we lose our agency, right?
Our ability to think for ourselves and so forth.
So, you know, even back then, these people were thinking about this.
And so this is a natural response.
And one of the things I fear the most is that if we don't find a way to adapt to these technologies and find a way to leverage all this progress on behalf of everyone, so that everyone is better off, there's going to be a bigger and bigger backlash.
People are going to turn against the technology.
And that might mean not just going off the grid and living as a hermit, but actually becoming much more adversarial to the system.
And that might happen politically; it might happen in some places
even in the form of social unrest and so forth, as things get bad enough, if we really do have unemployment.
So I just think it's critically important that we begin to really address these issues
and have an honest discussion about them so that we can avoid that scenario.
Is there a sci-fi movie that you think handles some of this in the best way?
So not going all the way to Terminator tomorrow, but are there some movies that you think have sort of teased out some of the more realistic futures or closer futures?
Yeah, there's Elysium, the one with Jodie Foster, where the wealthy migrated to an orbiting space station and Earth became basically, you know, a dystopian nightmare.
All the regular people were stuck on Earth.
And you're seeing that already, of course, with wealthy people moving to gated communities and so forth, and elite cities like San Francisco, where things are becoming so unequal.
And I really worry that that's where we're headed. That's sort of the ultimate irony of what's happening in Silicon Valley, right?
I mean, you just said it. It's like San Francisco: wealthy people that are benefiting from this technology, maybe using the technology to protect themselves from the masses, right, while everyone else is left behind.
San Francisco and the Bay Area are ground zero for this technological revolution. And then right in their backyard, you see the inequality, right?
You see what's happening.
And this is something that's going to scale out, right?
To everywhere, basically.
So we really need to get control of that.
And if we don't, I think it's going to tear our society apart, right?
It's going to ultimately lead to some real problems in the United States.
And in other countries that are less stable, that have, you know, less solid institutions than we have, it could be even worse.
You're going to see governments fall and things like that in some countries as a result of this, I think.
How does the information war factor into all of this?
You know, one of the things that people that watch this show are always talking, you know, everyone's talking about fake news all the time or that we're just being handed things.
You know, the algorithm pushes us stories that are favorable maybe to one side of the political aisle or something like that.
And then we're all going to also siphon off into our own informational realities, basically, and sort of we'll live in the same physical world, but digitally we're going to just accept different truths.
Exactly, and that's what makes it even more scary, is that all of this disruption is coming at us at a time when we are just incredibly polarized, where to some extent we're living in different universes.
Our ability to even talk to each other seems to be limited.
How are we gonna, you know, address these issues?
How are we going to respond to these incredibly disruptive forces when we can't even sit down and have a conversation and agree on the same facts, right?
That's a real problem, and there is evidence that it's getting worse and worse.
I hope that there can be more of this, because we really need everyone, and that includes people on the left and people on the right, to be able to talk to each other about this, because otherwise we're going to have what we have now.
We've got a Congress that literally can do nothing.
And I'm concerned that no matter who wins the presidency in 2020, you know, it's likely that the Congress will still be divided, right?
The same political polarization, social polarization, and social media polarization will be there.
So how are we gonna address these kinds of issues?
So that gets to what you were talking about earlier, that you need something sort of separate from the government, something that could oversee some of our ability to deal with this technology, almost in a way that can't be politicized, because of the way our system is.
Right.
I mean, specifically in terms of having a basic income, I think there are good reasons to put that in the hands of a separate institution.
And the Federal Reserve would be a good example of that.
The Federal Reserve, which controls interest rates, is relatively independent right now, although Trump has tried to play around with it.
But if it weren't for the fact that we had an independent Federal Reserve, I think we would be in much bigger trouble now than we are.
And so there is an argument for maybe taking some essential functions of government away from the political process and having a kind of a technocratic approach to that.
But I think that within that kind of a time frame, there's going to be an unambiguous impact on the job market.
So if we don't get control of this, we will see unemployment, at least among some workers.
We'll see greatly increased inequality, even beyond what we see now.
We will see more and more anger.
More people left behind, possibly even social unrest as a result of that.
We will see the rise of people like Trump, whom you might characterize as kind of a demagogue, someone who preys on the fear that people have, right?
Maybe, you know, points to immigrants as opposed to technology as being the primary cause of this and so forth.
And the other part of that, though, is the economic aspect:
as people are unemployed or have lower incomes, they have less money to spend, right?
Now they're not driving the economy.
So the whole economy suffers.
So we could have also a financial crisis, a recession.
Perhaps people can't pay their debts and we get into a situation like we had in 2008, right?
So that's sort of the worst case scenario.
The best case scenario is that we find a way to adapt to this.
Maybe with something like a basic income.
So we address this issue of people not having jobs or not having adequate incomes.
So they still have money to spend.
They go out and they spend their money in the economy.
There are all kinds of new products and services and exciting things for people to spend money on.
There is incredible opportunity for entrepreneurs, for people like Elon Musk or the next Steve Jobs, to create things based on this new technology, and people have the income that they need to buy these products.
Things do, in large measure, get less expensive.
So people have greater purchasing power.
We have enormous breakthroughs in science and medicine.
We all live longer and healthier lives.
So there are enormous benefits to artificial intelligence.
It's going to become the primary tool that's used in scientific research, in solving problems like climate change, in developing new forms of clean energy, in medical breakthroughs, all of that.
The key thing is that we want to make sure that we get that stuff and that we get it for everyone.
In other words, we want to be able to leverage it on behalf of everyone rather than just a few people.
And I think that if we can find a way to navigate through this, that it's incredibly optimistic, and where it kind of ends up in the far future is maybe something like Star Trek, right?
Where you've got what has been called kind of the post-scarcity economy, an economy of abundance where, you know, people don't have to worry about the basics of life anymore.
I mean, in Star Trek, you've got the replicator, right?
Everything is basically free.
People don't have to work a nine-to-five job to survive.
They're out traveling the universe or whatever, doing things that are genuinely meaningful to them.
And I think that's sort of the vision that we should have as we, you know, anticipate the development of these technologies.
But in order to get there, we've got to be honest about what the implications of this are.
We need to have an honest discussion and come up, I think, ultimately with some real policies to address the risks and the downsides that are going to come with this progress.
And we just keep picking up on all the incremental progress, and then I suppose, if everything you're saying is right, one year we're going to get there, and everything will look so absolutely different that we won't even be able to look back and make any sense of what we're talking about.