May 31, 2019 - Rubin Report - Dave Rubin
01:03:14
AI & the Threat of a Jobless Future | Martin Ford | Rubin Report

dave rubin
Joining me today is the author of the New York Times bestseller, Rise of the Robots, Technology and the Threat of a Jobless Future, Martin Ford.
Welcome to The Rubin Report.
martin ford
Thanks for having me.
dave rubin
I'm glad to have you here, sir, because dystopian futures, robots, Skynet, all of it, very much in my wheelhouse.
And I want you to explain it all to me.
Are you ready?
martin ford
Yes, definitely.
dave rubin
Alright, let's do it.
So, first off, before we dive directly into robots and AI and all that, just tell me a little bit about your background, what brought you to writing a book like this.
martin ford
Okay, so I studied computer engineering in college and then I worked as an engineer, a design engineer, for several years.
Then I went back and studied business.
Eventually I ended up starting and running a small software company up in Silicon Valley.
I ran that for many years, but even in the course of running that, I saw the impact that all this technology was having on jobs at my business and at businesses like it.
And that really got me thinking about this issue.
And so about 10 years ago in 2009, I wrote my first book called The Lights in the Tunnel, which really argued that artificial intelligence was going to be the next big thing in computing.
And then it was going to have a dramatic impact in particular on the job market.
And that book did well enough that it led to an opportunity to write this book in 2015, which really got quite a bit of attention.
And since then, I've kind of shifted my career to really be a futurist.
focused on what AI and robotics means for society and for the economy and especially for the job market.
dave rubin
So I think there are gonna be some huge challenges there for us. Right, so we're gonna unpack all of that stuff, but when you were writing about this in 2009, were people saying, "Nah, this is just pure science fiction"?
martin ford
Yeah, I mean, this was an issue that was very much off the radar back then.
It came with a fair amount of stigma.
And the reason is that this concern or fear that machines might take a lot of jobs and there might be unemployment, it's an old issue.
It's come up many times in the past.
Going all the way back to the Luddites, right, in England 200 years ago.
And so there's actually this term, Neo-Luddite, for a person that is once again worrying about this issue, and so it was quite stigmatized.
So in 2009, when I wrote the book, I was one of the earliest people to get out there with this, but since then things have definitely changed.
I see a lot of people much more concerned about this, even professional economists and so forth.
So there definitely has been a shift in mentality over this last 10 years when we've seen things like the advent of self-driving cars that look like they're going to be arriving soon and so forth.
dave rubin
What markers were you seeing back in 2009, or a little bit before that even, that were sort of pushing you in this direction?
martin ford
Well, the most important thing is what you might call Moore's Law, the fact that the power of computers is accelerating, doubling every two years.
And it was obvious that computers were going to get dramatically more powerful.
And there had to be an application for that, something you could do with all that power.
And it became obvious to me that artificial intelligence was going to be the thing.
And AI means essentially solving the same kinds of problems that the human brain can solve, right?
And it means machines that, in a limited sense, are taking on cognitive capability.
They're beginning to think.
And that means that technology is going to begin to compete with and substitute for human beings in a unique way, something that we've never seen before.
And as that scales across the whole economy, affecting all kinds of jobs, skilled and unskilled, I think it becomes pretty clear that it's going to have dramatic implications.
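[Editor's note: the compounding Ford describes can be sketched in a few lines; the 20-year window below is an illustrative assumption, not a figure from the conversation.]

```python
# Moore's-Law-style growth: capability doubles every fixed period (about
# two years, per Ford's description).
def growth_factor(years: float, doubling_period: float = 2.0) -> float:
    """Return how many times more capable hardware is after `years`."""
    return 2 ** (years / doubling_period)

# Ten doublings over 20 years is roughly a thousandfold increase.
print(growth_factor(20))  # 1024.0
```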
dave rubin
So when people think about robots, I think, like, there's a few different ways you can think about it.
You can sort of think about AI, which is sort of this amorphous thing that people sort of don't contextualize into a physical object.
Then they think of robots, they think of, like, you know, C3PO and R2D2 and everything else.
What, if you were just saying robots, what exactly are you talking about?
martin ford
Right.
I, especially in this book, Rise of the Robots, I used a very broad meaning for that, basically to mean anything that is automating something and taking over, you know, things that people can do.
And very often that's just going to be software.
If you want to be more precise and technical, a robot is when you take artificial intelligence and you put it into a physical machine that can physically manipulate the environment.
But what we're talking about is much broader than that.
So we're already seeing people like lawyers and doctors being impacted.
And it's not physical robots, it's often just software, artificial intelligence.
And actually, you know, building physical robots that have dexterity, that can manipulate the environment, that's actually one of the hardest aspects of this.
And in some ways, that may be where progress is going to be slowest, whereas in knowledge-type work, you know, where someone is sitting in front of a computer doing some routine task, cranking out the same report again and again, that may actually be much easier.
dave rubin
So that's interesting. The idea part of it is easier to replicate than the physical part, even though you'd think that just building a robot that can move the way you want it to move, that seems technically easier than figuring out how to think like humans.
martin ford
It seems like that from our perspective.
Very often the reason is that doing these knowledge-based jobs requires a lot of education and training, right?
But actually, once you implement the technology, it actually can often be the reverse.
The hardest thing to do is to build a physical robot that has dexterity, that has visual perception, that can move around the way a person does.
To build, as you said, the kind of robot like C-3PO from Star Wars.
That's totally science fiction.
We don't have anything remotely like that.
dave rubin
It seems like every now and again you see a video on YouTube where they're getting a little closer.
They've got a robot jumping over something and ducking under something.
martin ford
Exactly.
You see those robots in particular from a company called Boston Dynamics.
It's doing very impressive things, but those videos are highly choreographed.
The robots are controlled by someone that's outside the picture. This is not a thinking, autonomous robot running around doing stuff by itself.
dave rubin
Okay, so we're not in Terminator land just yet.
martin ford
Not anytime soon at all. That's far in the future.
But, you know, we shouldn't be distracted from the fact that there are things happening now that are gonna have a really dramatic impact.
It's just not the science fiction stuff that you see in the I, Robot movie and stuff like that, right?
dave rubin
Right.
So how much of the conversation about all of this is about what you referenced a moment ago
about just the speed of technology and that every two years the power doubles and all that,
that we're all walking around with iPhones or we have super computers in our pocket
and the way we can transmit information across the globe like that.
And just how much of this is just related to speed more than anything else?
martin ford
Right, that's a very big part of it.
It is not just the speed of computers, which have gotten faster and smaller, of course,
and now they're in our iPhones, but it's the speed of communications bandwidth,
it's memory capacity.
So we've seen this very broad-based acceleration in technology, and that's a huge part of it.
The other thing is that there have been some breakthroughs in artificial intelligence, especially in the hottest area of AI, which is called deep learning or deep neural networks.
We've seen dramatic progress there, and that's the thing that's really revolutionizing the field. The other thing that's happened is that we are now, throughout our whole economy and society, collecting huge amounts of data, right?
There's all this data out there that wasn't there before, and this data is basically the resource that is used to train these smart algorithms, and that's what artificial intelligence looks like right now.
It's primarily machine learning.
And this is just going to be incredibly disruptive.
dave rubin
Yeah, so is part of the potential problem here that we're building things that will be more powerful than us, and we don't really understand that.
So it's like we're putting so much information in our brains all the time.
Maybe our actual physical brains can't take all this in.
Like we don't have enough RAM in our physical brains for all of the information that we're constantly slamming ourselves with.
martin ford
Well, it's definitely true that these smart algorithms, I mean, they can look at, you know, huge amounts of data.
Millions and millions and millions of data points, right?
Which no human being could do.
So we already have algorithms that, in a very narrow sense, in terms of doing very specific things, are superhuman, right?
They can vastly outperform what any person does, and they do things that we don't really understand.
A good example of that would be Wall Street, right?
Where you've got these trading algorithms, that can actually look at machine readable news.
I mean, companies like Bloomberg actually make news products that are designed for machines, not for people.
These algorithms can read that news and then analyze it and then trade on it within, you know, tiny fractions of a second.
So that would be an example of where technology is already getting ahead of what we can understand.
dave rubin
Yeah.
What do we do to rein some of it back in occasionally?
Is there any, or is it just once we start the process with anything like this, we just don't know where it ends?
martin ford
You know, it's a difficult question.
You know, there are going to be places where we're going to need regulation.
You can't just rein it in.
I mean, it's progress.
It's happening.
It's happening in part because of a competitive dynamic, right, within capitalism, between companies, between Google and Facebook and Goldman Sachs all competing to build the latest technology.
There's also a competition between United States and China.
All of that is gonna push it forward relentlessly and trying to stop it is probably kind of a fool's errand.
It's probably not possible and probably not really advisable.
What I think we have to do are find ways to adapt to all of this progress.
And in some places that may mean certainly regulation.
And in other cases it will mean finding ways to address issues like unemployment and inequality
that will result from all of this progress.
dave rubin
Yeah, so let's just define some basic terms because I think we end up throwing out a lot of big terms here and then people are confused.
So just, when people are talking about the algorithm, can you just explain in simple terms what is the algorithm?
We're doing this on YouTube right now.
People are always obsessed with the algorithm.
martin ford
An algorithm is just essentially a computer program.
It's something that goes step by step and does something.
What we've seen recently, though, is the emergence of a new kind of algorithm called machine learning algorithms.
And this is what's really disruptive.
And the difference between a machine learning algorithm and a traditional computer programming algorithm is that, you know, historically, some programmer has sat down and told the computer what to do step by step.
With machine learning, instead, you've got a smart algorithm that looks at lots and lots of data and then figures out for itself what to do.
So in essence, it's kind of programming itself.
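[Editor's note: Ford's distinction can be sketched in a few lines of Python. The spam rule and the line-fitting data below are invented illustrations, not examples from the conversation.]

```python
# "Traditional" algorithm: a human programmer specifies the rule explicitly.
def spam_rule(subject: str) -> bool:
    return "free money" in subject.lower()

# Minimal "machine learning": the rule is inferred from example data.
# Here we fit a line y = a*x + b by ordinary least squares, so the program
# figures out the relationship in the data for itself.
def fit_line(points):
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

data = [(1, 3), (2, 5), (3, 7), (4, 9)]  # hidden rule: y = 2x + 1
a, b = fit_line(data)
print(a, b)  # 2.0 1.0
```

The second function was never told "multiply by two and add one"; it recovered that rule from the data points, which is the sense in which Ford says such algorithms are "kind of programming themselves."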
dave rubin
So is there a way to control it then?
Or is it just actually uncontrollable?
Because once it's learned enough, it just doesn't need the programmer anymore.
martin ford
Well, it's not so much that it's uncontrollable, but that, you know, we don't really want to control it.
The whole point is to unleash it and let it learn and do things.
That doesn't mean that it's in any sense out of control or it's a danger to us or anything like that.
dave rubin
Well, the reason I was asking was sort of through a YouTube algorithm lens.
Like, one of the things we're finding out is they just want to keep you clicking all the time.
unidentified
Right.
dave rubin
You know, where we put out a long-form show, people are going to watch a full hour of our discussion.
That's not really what the algorithm wants.
It wants you, from the way we understand it from some insiders, they want you to constantly be clicking on videos and basically fall into this click hole to just keep the machine going more and more and more.
Now, I understand why they want that sort of attention going in different places and all that.
For what I do, I don't love the algorithm at the moment, if that makes sense.
martin ford
Right, right.
So that depends on how they optimize the algorithm.
But what's happening there is that you've got millions and millions of people watching YouTube videos.
And if they watch an entire video, then that will create a data point that says, you know, they were really interested in this.
If they start watching for a brief time and then they click away, then it'll show that they're less interested.
And then an algorithm comes around and looks at millions of those data points.
And can make recommendations for other videos.
And as you said, I think what we've seen is that the videos shown to people become more extreme, right?
So if you're interested in something, you'll get a more extreme version of that, and that's how they get people to click.
And a lot of people have been, you know, raising the alarm over that, because that is kind of radicalizing people, right?
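[Editor's note: the data-point mechanism Ford describes can be sketched roughly like this. The event log and the completion-fraction scoring are simplifying assumptions for illustration, not YouTube's actual algorithm.]

```python
from collections import defaultdict

# Each viewing event: (video_id, seconds_watched, video_length_seconds).
events = [
    ("full_interview", 3600, 3600),   # watched to the end: strong signal
    ("full_interview", 3400, 3600),
    ("short_clip", 20, 120),          # clicked away early: weak signal
    ("short_clip", 115, 120),
    ("short_clip", 10, 120),
]

def rank_by_completion(events):
    """Rank videos by average fraction watched across millions of events."""
    totals = defaultdict(lambda: [0.0, 0])
    for vid, watched, length in events:
        totals[vid][0] += watched / length   # completion fraction
        totals[vid][1] += 1
    scores = {vid: s / n for vid, (s, n) in totals.items()}
    return sorted(scores, key=scores.get, reverse=True)

print(rank_by_completion(events))  # ['full_interview', 'short_clip']
```

A real recommender optimizes whatever objective its designers choose (total watch time, clicks, revenue), which is exactly Ford's later point that the outcome depends on what the algorithm is optimized for.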
dave rubin
So what do you do about that?
If you're a programmer at YouTube, and you don't want people to be radicalized, or even if you just don't want people to be endlessly clicking, like, there's this game to keep people addicted to all of these things.
And it's like, I understand that.
We could put out a zillion clips so I could, you know, I could chop everything into two-minute things and we could put them out and it would probably help us in terms of views and all of those things.
But I just don't want to play that game, sort of.
martin ford
Right, right.
Technically, I don't think that's a difficult problem.
That depends on what the designer wants to do.
But the whole problem is that the algorithms are designed to make the most money for Google, right?
And that's what's driving this.
So it's not...
I don't think it's a computer design problem.
It's a capitalism problem, a profitability problem.
It's the fact that Google is a publicly traded company and its investors want it to make as much money as possible.
And that's what drives it to design algorithms that maximize profitability.
dave rubin
Keep you clicking.
martin ford
That may be the kind of place where maybe some regulation has to come in and say, you know, well, you're gonna have to put some constraint on this if Google doesn't make the decision to do that itself.
dave rubin
Yeah, I mean, this is where I'm not a regulation guy, but it's like they're pushing me to my limits.
I mean, this is what I keep saying about all the tech companies right now when it comes to censorship and everything else.
It's like they're not giving us much of a choice here.
martin ford
Right, I mean it's a challenging problem for sure, you know, for AI to figure out what's in a video in terms of, is there something there that should be censored or not?
That's the decision they're making.
And that's quite different from just optimizing, getting people to watch videos.
Because in order to do that, you don't care what's in the video.
That's the whole problem, right?
It's just tracking statistics.
But if you actually want to analyze what's in a video, is there something there that's dangerous?
Are you inciting people to violence or something like that?
For artificial intelligence to figure that out is still much, much harder.
That's why we're running into this problem, I think.
dave rubin
How is deep learning different than artificial intelligence?
That's just the next level of artificial intelligence?
martin ford
Well, deep learning is a kind of artificial intelligence.
It's right now the hottest area of AI and deep learning or deep neural networks basically means a system that is loosely designed on the way a brain would work.
It has artificial neurons that are roughly similar to the neurons in your brain and that's the way it works.
This is an idea that's been around since the 1950s at least, but just within the last six years or so, we've seen just an explosion in this technology.
We've now got systems that can translate languages from Chinese to English, that can do better than people at recognizing visual images.
We've got radiology systems that can look at medical images and find cancer there, and in some cases can do that better than human doctors.
So this is absolutely the hottest area of AI.
It's also what's enabling self-driving cars, for example.
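[Editor's note: at its core, the artificial neuron Ford mentions is a weighted sum passed through a squashing function. Below is a minimal single-neuron sketch trained on a toy OR-gate task; the task, learning rate, and iteration count are arbitrary illustrative choices, and real deep networks stack millions of such units in layers.]

```python
import math

def sigmoid(z: float) -> float:
    # The squashing activation, loosely analogous to a neuron firing.
    return 1.0 / (1.0 + math.exp(-z))

# Toy training data: the logical OR function.
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]

# One artificial neuron: two weights and a bias, learned from examples
# by gradient descent rather than hand-coded rules.
w = [0.0, 0.0]
b = 0.0
for _ in range(5000):
    for x, y in data:
        p = sigmoid(w[0] * x[0] + w[1] * x[1] + b)
        g = p - y                # gradient of cross-entropy loss w.r.t. z
        w[0] -= 0.5 * g * x[0]   # gradient-descent update
        w[1] -= 0.5 * g * x[1]
        b -= 0.5 * g

preds = [round(sigmoid(w[0] * x[0] + w[1] * x[1] + b)) for x, _ in data]
print(preds)  # [0, 1, 1, 1]
```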
dave rubin
So is this the great catch-22 of all of robotics?
Is that it's doing these incredible things and then, as you talk about in the book, it's going to put a lot of people out of work?
martin ford
Well, I think that's one of the real problems with it.
I mean, we're ultimately going to have to make a choice as to whether we want to allow that progress to continue and get the enormous benefits of it. But if that's going to come at the cost of making some set of our population unemployable, or maybe de-skilling jobs to the point where people just don't have adequate incomes even if they do have a job, then we've got to find a way to adapt to that.
And that's why, for example, I've talked a lot about a universal basic income as one possible approach to that.
But I'm very much against the idea that we should stop progress because this is where we are.
This is what progress is gonna look like in the future, and we don't wanna stop it, because progress is the thing that has made us better off over the course of centuries.
dave rubin
Is there any evidence that ever in history you could stop progress, actually?
And even if we wanted to stop it right now, let's say you laid out the greatest case why this thing is gonna run out of control, it's gonna put half of us out of business, you know, income inequality is gonna go crazy, poverty, et cetera, et cetera.
Is there any case where technology existed that we somehow put the brakes on it?
martin ford
I'd be quite skeptical that we'd be able to do that.
Again, in part because of competition, not just between companies, but between countries.
Maybe we would do it, but then if China didn't put the brakes on, they would pretty soon be vastly ahead of us, right?
So that would be a problem.
dave rubin
Is that the catch-22, then, for regulation?
Because it's like, we may try to regulate some of it, but if China's not regulating it, or if they're doing it in a different way...
martin ford
Exactly, that's one of the biggest problems: you would put your country at a disadvantage unless you could do it on a global basis.
And of course, doing anything on a global basis is incredibly hard, as you see with climate change, for example.
So again, my perspective is that rather than trying to slow it down, what we should do is find a way to adapt to it.
Just let it run, but understand what the implications are going to be and figure out a way to adapt to that.
And that's where the idea of a basic income comes in.
dave rubin
So I want to talk a little bit more about UBI, but before we do that, can you talk a little bit just about how this has affected certain industries and how some industries haven't quite been affected yet?
martin ford
Right.
So, in general, the point I would make is that it's going to affect everything.
I mean, artificial intelligence is going to be like a utility.
It's going to be like electricity, right?
And no one says what industries are most impacted by electricity.
I mean, everything relies on electricity, right?
And the same will be true of artificial intelligence and machine learning.
In the long run, it's everywhere.
In the near term, clearly manufacturing has already been impacted by automation generally.
We've seen a dramatic decrease in advanced countries in the number of people employed in manufacturing.
That's going to continue.
The robots and the automation used in factories are going to get a lot more effective, more dexterous.
They'll be able to do the jobs that right now only people can do.
But it's going to scale to many, many other areas in finance.
It's going to have a dramatic impact.
A lot of white-collar jobs there, where people are sitting in front of a computer cranking out reports or something, right?
Recently, I saw that the CEO of Deutsche Bank, one of the big banks, said he thought he could get rid of half of his employees in the relatively near future using this technology.
Healthcare is an area where it's going to be slower because it's really hard.
You've got doctors and nurses that need to engage with patients on a one-on-one basis and provide a lot of individual human-like service.
And that's one of the reasons that health care costs are so high in the United States right now, because we have not seen the kind of productivity increases there that we've seen in, say, manufacturing.
dave rubin
Right.
So what could we do to see that change?
martin ford
We're beginning to see evidence of that.
As I mentioned, you've got systems now that can read medical images.
So you're going to begin to see the introduction of artificial intelligence in medicine.
I don't think that it will, for the most part, at least in the near term, it's not going to completely replace doctors.
But it will become kind of a second opinion.
It will run alongside a doctor.
You know, it'll always be there.
It'll make every doctor able to perform at the level of the best doctor, right?
Because there'll be this incredible intelligence there.
So that will be of enormous benefit.
dave rubin
And then eventually, if you just extrapolate that down the road, it could replace the doctor too, right?
martin ford
Sure, eventually.
dave rubin
Like in the movie Alien and a million other things.
martin ford
Exactly.
Although I would say in general, doctors are relatively safe because they are highly regulated, right?
There are all kinds of rules about medicine and you need to have a doctor or a pharmacist there.
So those roles are relatively protected where if you're a white-collar job in some corporation sitting in a cubicle somewhere, you don't have any protections at all.
So for that reason, I would worry a bit less about doctors disappearing in the near term.
But in healthcare, there definitely are gonna be lots of applications.
You already see robots in hospitals delivering things.
You see robots beginning to be used in elder care, looking after older people,
which is certainly one of the biggest opportunities because we have this aging population.
Pharmacy robots are a huge thing.
There are already robots that do thousands and thousands of prescriptions in hospitals and so forth very efficiently.
So this is coming.
It will take a little bit longer in healthcare than in some other areas,
but eventually it's going to be everywhere.
In retail, you know, there are... Walmart is beginning to introduce robots, and of course retailing in general is migrating more and more toward Amazon.
Yeah, which in theory means that jobs, you know, they might move from a retail store to an Amazon warehouse.
But once the jobs go to that Amazon warehouse, now they're in a very controlled environment.
And there are already lots of robots there, and those robots are gonna get dramatically better in the next five or ten years.
dave rubin
So in effect, you could have a giant Amazon warehouse that we've all driven by one of these huge monstrosities, and it could basically be run by all robots, and you're ordering things online.
martin ford
Well, we're getting very close to that.
There are definitely a lot fewer people.
I mean, right now inside those warehouses, you have huge numbers of robots.
And the robots will do something like bring a whole shelf of inventory to a worker, but then the worker's got to reach in there and grab the item off the shelf and put it in a box, because the robot right now can't do that.
It doesn't have the visual perception and the dexterity to do that.
But that will change over the next five to ten years.
And so those environments are going to become a lot less labor-intensive.
That's not to say they'll be fully automated, but there are going to be fewer jobs there.
Something to worry about because this is one of the brightest areas right now for job creation, right?
dave rubin
So we're gonna watch a certain sector or many sectors of jobs just disappear altogether and yet at the same time I guess the counter argument or the people that would say we shouldn't be so alarmed about this would say well all the cost of everything will go down because the robots will be able to do things at a much more efficient, cheaper level, right?
So people won't need as much disposable income, that sort of thing.
martin ford
That's right.
I mean, that's absolutely true.
I'm very skeptical that that kind of solves the problem.
No, I mean, you can think about it.
If you don't have a job at all, then your income is zero.
dave rubin
It doesn't matter how cheap stuff is.
martin ford
The other thing is that the really big ticket items, the things that really are putting people underwater are housing, education, healthcare, and these are exactly the areas where technology is, at least in the short and medium term, going to have the least impact, right?
I mean, housing in particular.
Someday we might have big 3D printers that make it really cheap to produce housing.
But but there's still a problem with land, right?
And if you're in Los Angeles or up in San Francisco, then there's no land, right?
It's already very scarce.
And that's what drives property values so high.
So you can't solve that problem necessarily just with technology.
dave rubin
Yeah.
martin ford
So as incomes fall, you know, many people are not going to have the income to really cover the basics.
And that's going to be a big problem.
dave rubin
Okay, so that's the right transition then to universal basic income.
So my default position on UBI, and I've heard arguments on both sides, and I think I told you I have Andrew Yang coming in soon, and we'll discuss it further.
My default position is that if you give people just enough to survive, you're sort of stealing the most basic human right of just, like, go get something for yourself.
And it's gonna create this class of people, sort of not by their own fault, that will just have the bare minimum to get by, and then they'll be able to stay at home and play video games and watch porn and basically do nothing all day long, and that's actually taking something from them rather than giving something to them.
That's sort of my high level philosophic position on it.
martin ford
Right.
The argument I would make is that once a society reaches a certain level of prosperity, as we have, if you want to continue to have capitalism and a market, it's very important to have the kind of incentive that you're alluding to there.
dave rubin
Yeah.
martin ford
But I would argue that maybe the incentive doesn't have to be so daunting that if you don't work, you're living on the street or eating out of garbage cans, right?
That maybe it's enough to say that you can basically survive if you're not motivated, but you're not going to have a terrific life.
You're not going to have a great life.
And I think that a number of studies have been done with basic income that show that when you give people this money, they don't, in fact, just drop out and stay home and do nothing.
They are actually motivated to do something more.
They invest in their family and education.
They work if they can.
They maybe start a small business.
So actually, if you give people that basic safety net, you can create an environment where people are actually more willing to take risks.
So, for example, they might start a new business.
They might be willing to leave a safe job where they're not learning anything, they're not growing, and work for a startup company, do something more risky.
dave rubin
Right.
But is the inherent problem that then if they start getting some success, then they lose the UBI?
martin ford
No, no.
See, that's the whole key to a basic income, and that's what makes it different from other forms of safety net, is that it is unconditional, in the sense that everyone gets it.
Now, what that means is that if I get my basic income and I choose to just play video games, then I'll have that basic income.
But if someone else is more ambitious, they get their basic income and they go and work, even if it's only part time, they start a small business, then they get the basic income and they also get that additional income.
We don't tax it away, at least not at the lower level.
So the key point is that the person that is productive, that is willing to do something to work, will always be better off than the person that does nothing.
unidentified
Right.
martin ford
And that's really key to it, because The problem with our existing social safety net is exactly what you said, that if you do something, find a job, then you lose those benefits.
dave rubin
Yeah, so that the cliff has to be really high to be willing to leave those benefits.
martin ford
Right, and that's exactly what's called a poverty trap, right?
You get into a situation where you look at the options around you, and anything you do doesn't make you better off, or even makes you worse off, so you're stuck there; you can't move.
The worst possible example of that in the United States is the Social Security Disability Program, which is intended for people that are injured on a job and then they can't work.
But actually a lot of people now are gaming it, probably because they're desperate, they need an income.
And so they'll go and tell their doctor they've got pain in their back or something, and they'll jump through the hoops, and they'll get onto this program, which gives you an income.
But once you get there, you can't even be seen to be able-bodied.
People are worried even to go and work in their garden or something because someone will see them and then they'll lose their benefits, right?
So that's a really terrible example of this kind of program, where with basic income, we give it to everyone, and it's unconditional, and then we encourage people to do more, right?
And that's really important.
That's one of the strongest arguments for a basic income scheme.
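[Editor's note: the incentive contrast Ford draws can be put in simple arithmetic. The dollar amounts and the hard cutoff below are invented to illustrate the cliff, not the rules of any real program.]

```python
def net_income_ubi(earned: int, ubi: int = 12_000) -> int:
    # Unconditional: earnings always add on top of the basic income.
    return earned + ubi

def net_income_means_tested(earned: int, benefit: int = 12_000,
                            cutoff: int = 10_000) -> int:
    # Stylized means test: earn at or above the cutoff, lose the benefit.
    return earned + (benefit if earned < cutoff else 0)

# Under UBI, working always pays; under the means-tested scheme, earning
# 12,000 leaves you no better off than earning nothing -- the poverty trap.
print(net_income_ubi(12_000))          # 24000
print(net_income_means_tested(12_000)) # 12000
print(net_income_means_tested(0))      # 12000
```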
dave rubin
Yeah, so let's get into some of the nuts and bolts of it.
First off, do you view it as something that would have to be done federally?
Because obviously, if you live in Los Angeles or San Francisco, your cost of living is way higher than, say, if you live in Missouri.
martin ford
Right.
dave rubin
So is this a federal program?
Are we throwing this to the states?
martin ford
Yeah, I would imagine it needs to be done on a national level.
And the reason is what you can think of as kind of like the adverse selection problem you get in insurance, right?
If Los Angeles has a basic income, then people all over the country are going to move to Los Angeles to get that, right?
And they're going to show up here and overwhelm the system.
So it needs to be national rather than local.
unidentified
Right.
dave rubin
But what do you do about the disparity in cost of living in all these places?
martin ford
Well, you know, one issue there is that a basic income is mobile, right?
So maybe you don't have to live in Los Angeles or San Francisco.
You can take your basic income and you can move to Detroit, right? And there you might have a pretty decent life. You can have housing there much cheaper, right?
So the difference between a basic income and a job is that you can take it everywhere. So people would kind of readjust, and some people might choose to leave high-cost areas and live in cheaper places and so forth.
dave rubin
So how do you fund all this?
That always is the big one.
Are you scrapping all the social programs that exist right now?
Are you taxing billionaires out the wazoo?
Some combination thereof?
What do you think?
martin ford
Definitely you need to raise more revenue.
I think inevitably one of the things that we are seeing with the economy largely as the result of technology is that more and more income is going to capital and less is going to labor.
So businesses and investors and people like that are getting more income, and average working people are getting less.
So what that means for the future, I think it's inevitable that we're going to have to tax capital more and labor less.
And that may involve higher business taxes or taxes on the wealthiest people that have access to lots of capital.
Now, as a libertarian, you might find that objectionable, but I think it's inevitable.
Ultimately, if you're going to have a taxation scheme, you have to tax the people that have the money, right?
You can't get blood from a rock, as they say, right?
unidentified
Right.
dave rubin
So how do you decide what the level is?
Now, I get you could live in L.A. and the cost of living's high, and then maybe you'd say, all right, well, I can't make it here the way I want to, so I want to go somewhere cheap.
But how do you figure out, well, what is it that is the basic stuff?
I mean, it's UBI, so what is the basic stuff that people are supposed to have?
martin ford
Well, I mean, in terms of the level of the income, most people are talking about around $1,000 a month.
Finland had an experiment where it was like 600 euros or something.
So these are pretty low amounts.
I mean, you know, can you imagine living on $1,000 a month, right?
dave rubin
I used to do it.
It was not fun.
martin ford
It's not so easy.
So I think one advantage of these programs is that they're gonna start at a low level, and we can imagine that, as technology advances and society becomes more prosperous, that could be raised over time.
But initially it's gonna be a very low level, so I don't think we have to worry too much about destroying the incentive for people to work and so forth.
It's gonna give people a very, very minimal cushion, but they're still gonna have that incentive to work, right?
dave rubin
It's so interesting, because it sets off all my libertarian bells. The second you give it to somebody, say we give a thousand bucks to everybody, immediately you're gonna have politicians saying this isn't enough, we have to make it more, and we have to make it more, and that becomes the cycle. We're always shifting money around, because no matter what basic level we get most people to, no one's ever gonna say, all right, well, we're okay now.
martin ford
That's a real concern.
I would say, though, that a basic income, or the other flavors of it, a guaranteed minimum income, a negative income tax, these are ideas that in the past have been embraced by libertarians.
Friedrich Hayek was a big proponent of a guaranteed minimum income.
Milton Friedman was for a negative income tax.
And the idea is that you're creating really a market-based safety net, right?
Rather than having government house people, feed people, control industries, or try to take over businesses and run them in a way that artificially creates jobs and so forth.
Rather than doing that, just give people some money, let them go out and participate in the market.
So it actually is a market-oriented, libertarian approach to having some kind of safety net.
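The Friedman-style negative income tax mentioned here can be sketched in a few lines. This is a minimal illustration, not something proposed in the conversation; the guarantee level and phase-out rate are made-up numbers chosen only to show the mechanics: because the benefit phases out gradually instead of vanishing at a cliff, working always leaves you better off.

```python
def negative_income_tax(earned, guarantee=12_000, phase_out=0.5):
    """Friedman-style negative income tax (illustrative numbers only).

    Below a break-even income, you receive a subsidy that shrinks by
    phase_out cents per dollar earned -- so, unlike a cliff-style benefit,
    every extra dollar of earnings always raises your total income.
    """
    break_even = guarantee / phase_out            # income where the subsidy hits zero
    subsidy = max(0.0, phase_out * (break_even - earned))
    return earned + subsidy

# No poverty trap: total income rises monotonically with earnings.
print(negative_income_tax(0))       # 12000.0 -- the unconditional floor
print(negative_income_tax(10_000))  # 17000.0 -- working adds 5,000 net
print(negative_income_tax(30_000))  # 30000.0 -- above break-even, no subsidy
```

The contrast with the disability example above is the point: there, any visible work can cost you the entire benefit; here the marginal return to work is always positive.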
But I think the idea of it being politicized, that's a real concern.
One thing I actually have suggested in some of my writing is that we might set up a separate institution to kind of manage that, maybe something like the Federal Reserve, that would be independent and not part of the political process and might manage the level of a basic income. Because you could actually use it also to respond to recessions, for example.
If there's an economic downturn, maybe pay people a bit more.
And then that would help you get out of the recession, right?
That would be kind of a Keynesian response to it.
So I think there are a lot of possibilities there, but you're right.
We don't want every politician running on the platform of, I will increase your basic income, right?
That wouldn't be good.
dave rubin
Right.
And that just strikes me as sort of real politic related to all of this.
It's just the way people are.
Once you give them something, they want more.
I don't blame people for that.
It's just sort of...
martin ford
Exactly.
So that's something that we need to think about from the beginning.
As I said, you know, maybe put it in the hands of a separate institution.
One other thing I proposed is that maybe we can build incentives into a basic income.
If people stay in school, pay them a bit more than people that just play video games.
Or if people go and work in the community, you know, help other people, pay them a bit more.
So I think it's really important to have sort of a ladder for people, so that they feel they can somehow do better.
Because the issue you raised before, that we could create this class of people that just, you know, do nothing, is something to be concerned about.
But there are ideas that we can, I think, employ to really address that.
unidentified
Right.
dave rubin
So what would you say to the person listening that's going, well, wait a minute, a thousand bucks?
I can't do anything with a thousand bucks.
There's basically nowhere I can get rent.
You know, how am I going to do anything close to living?
martin ford
Right.
So for most people now, a thousand dollars is not going to be enough, but it will provide a cushion, right?
That's sort of the whole idea.
We're starting off with $1,000 and that's not going to be enough so you'll still have to work.
dave rubin
So is that the biggest confusion related to all this?
That I think people hear UBI and they think that it's just enough.
Just enough so you can get by.
But you're saying that's actually not really what's going on here.
martin ford
It may not be enough initially.
I mean, there may be some places in the country, not Los Angeles, but some places you could live on $1,000.
So if you're really in a bad situation and you're living in L.A., you could pack up and move somewhere where maybe $1,000 will allow you to survive, right?
That's a possibility.
But I think what most people will do is they will take that $1,000 and use it to sort of cushion the difficult times, but they will still be motivated to find a job, to start a business, to do something.
They will just have more options, more freedoms in terms of the choices that they make.
dave rubin
Yeah.
martin ford
And you mentioned you're having Andrew Yang on.
He actually calls this program the Freedom Dividend.
That's what he's named it.
And that's exactly what it is.
It gives you more options, especially for someone that's living month to month and really has no income.
The number of choices that you have in that scenario is just very limited.
dave rubin
Yeah.
I guess so much of this has to do with just the strange way we deal with politics.
So, for example, you'll have politicians saying, we have to have $15 minimum wage, and then you can walk into Burger King or McDonald's now and order on an iPad, because they're basically saying, all right, we're not going to pay our people this much.
So everything just becomes sort of uber-politicized, right?
martin ford
That's right.
And that's why a basic income is maybe a better approach because we just give it to everyone.
And the problem with raising the minimum wage is that it may be good for many workers, but it can actually also increase the incentive to automate or to do other things.
So it could actually result in fewer jobs.
So giving people a basic income and preserving the incentive for them to still work or to do other things, I think, is a very attractive proposition.
dave rubin
When you sort of play out, you know, 5, 10, 20 years from now, do you think things are going to be just drastically different, in a way we really can't think about because of the speed of all of this, because of everything we've talked about here, that we really can't even envision the ways that this is going to go?
martin ford
I think if you go out maybe a little further than that, 20, 30 years, it really, really gets hard to imagine what the future looks like.
My latest book, Architects of Intelligence, is a series of interviews with the top people in AI.
One of the people I talked to is Ray Kurzweil.
The big futurist.
dave rubin
He thinks that... He'll be alive then.
Yeah, yeah, yeah.
martin ford
He absolutely thinks he's going to live forever.
He expects what he calls the singularity, something that is going to completely change the whole paradigm.
He thinks that within just 10 years we're going to have human level artificial intelligence and so forth.
So that's possible.
The problem is that it's very unpredictable.
We don't know how fast all of this is coming.
What I've really focused on is sort of the practical impacts of AI and robotics and the impact on the job markets.
And what I would say is that within five to ten years, we're going to definitely see a fairly dramatic and unambiguous impact on the job market and on the economy.
dave rubin
So Ray is basically going to live long enough to see the robots take over one way or another.
martin ford
That's what he believes.
dave rubin
Yeah.
martin ford
You know, Ray is already 70.
But if you've seen his photos recently, He looks a lot younger than he did a while ago, so at least he's had some work done.
Whether the stuff under the hood is better or not.
dave rubin
Right.
Is he the only person doing that sort of thing?
There must be some other people.
martin ford
Oh no, in Silicon Valley there are many people very interested in this idea of living forever.
You know, the Google people, Larry Page and Sergey Brin, and Peter Thiel is very into this as well.
He's even, I think, played around with supposedly blood transfusions and stuff like that.
So, yes, the Silicon Valley elite, you know, they're true believers in this idea that technology is going to completely transform things, that the future is going to be dramatically different from the past, and that we're going to potentially even have the possibility of living forever.
dave rubin
Yeah.
Can you explain the singularity?
martin ford
The singularity basically means a point at which technology takes off and begins accelerating at a rate that becomes incomprehensible to us.
So it comes from basically a black hole, right?
The center of a black hole is what's called a singularity where the laws of physics break down and you can't see beyond that point.
So the term singularity was coined as a way to express the idea that technology reaches a point where it's just completely unpredictable beyond that point, because things are moving so fast.
And most people that think about this associate it with the advent of what's called superintelligence, or machines that are smarter than us.
Not only human level intelligence, but a machine that, you know, is smarter than any human being.
Maybe dramatically so.
Maybe so smart that it makes us look like a mouse or an insect.
dave rubin
Right, so that's where my sci-fi brain, from every movie that I've ever seen, every dystopian future, says, well, why would the robots need us at that point?
And if anything, wouldn't they just see us as an annoying hindrance or a vestige of the past, and why wouldn't they wanna get rid of us?
martin ford
Right, and that's a real concern.
That concern, which is what you see in the Terminator movie, right?
And even more than that, the related concern of what's called the control problem, which is that if we created a superintelligence, something that's far beyond us, maybe it won't actively want to destroy us, but it might act in ways that are not, you know--
dave rubin
That was I, Robot.
martin ford
Right.
And there are very, very serious thinkers that are focused on this.
One of the most prominent ones is Nick Bostrom, who I also interviewed in my latest book, Architects of Intelligence.
So he believes that this is a real issue, and he's working on finding ways to build systems that will be controllable, even if they become superintelligent.
And so this is an issue that he's focused on.
dave rubin
But the inherent problem being that if you create the super-intelligence, it probably at some point can get around that.
I mean, I know that's not mind-blowing to him.
martin ford
Right, right.
That's the whole idea.
that once we have a superintelligence, then, you know, it's so far beyond us that we can't control it anymore.
So what people are working on is basically principles of computer science that will hopefully allow them to build these systems in a way that will remain aligned with what we want them to do, even when they're superintelligent.
dave rubin
Right, but isn't the problem with that what we hit on earlier? Maybe we here in America hopefully figure out some systems that are gonna make some sense, but if the Chinese figure out a system that's a little bit different, or just some random guy in his garage in Mexico figures out some other thing, then basically there's just no way to manage it. That seems like the big problem here.
martin ford
Yeah, and that's one of the scariest aspects of this: the competition, and the fact that there would be an incredible first-mover advantage to whoever gets there first.
Whoever builds the first superintelligent system, you know, is going to be way ahead of everyone else.
And the reason is that most people believe in what's called an intelligence explosion or kind of iterative improvement, where basically once a machine reaches human level intelligence or gets beyond that, it's going to turn its attention to its own code, right, to building better versions of itself.
So it's going to continuously engineer a smarter version of itself.
And that's something that could explode rapidly.
So whoever gets there first, they're essentially uncatchable.
So that is going to set up a competitive environment between the US and China and Russia and so forth.
unidentified
Yeah.
martin ford
So that's something to worry about.
But there are a lot of people working on doing this in a safe way.
OpenAI is another good example.
That's the organization that was set up by Elon Musk.
Right.
And some other people. And they're actively working on building systems; basically, they're trying to get there first, to be the first one to create a generally intelligent system, and to do it in a way that is safe.
So I think that's a good thing.
There's some real focus on that and investment in that area.
But at the same time, there's also a lot of hype.
dave rubin
Yeah.
martin ford
People like Elon are saying some pretty over-the-top things.
I do think that, to some extent, that's a bit dangerous, because, again, this is something that is probably pretty far in the future.
I would say probably at least 50 years away.
There's a big debate over that.
Again, in my latest book, I interviewed all these people.
I asked them this question.
How soon are we going to have a computer that is at the level of a human being in terms of intelligence?
And the predictions I got ranged from 10 years from Ray Kurzweil to nearly 200 years.
Many of the estimates were toward the end of this century, so pretty far in the future.
dave rubin
What are the markers that cause people to have a different response to that question?
So why would someone like Kurzweil say 10 and then someone else say 200?
martin ford
Well, there are a number of breakthroughs that you have to have.
You have to have machines that can learn the way people learn.
Right now, as I said, we've got machine learning, deep learning, which is highly dependent on lots and lots of data, in particular, labeled data.
So you can train one of these algorithms to recognize pictures of a dog, and you would give it maybe a million photographs, each labeled either "there's a dog here" or "there's not a dog here."
And based on this, it could learn and eventually recognize dogs at a superhuman level.
But that's not the way a human child learns, right?
A human child, you can point to a dog, and maybe you only need to do that once before the kid learns it.
And so one of the biggest initiatives is teaching machines to learn from less data in an unsupervised way, in a way that people can.
And then you've got to have the ability to think generally, to conceive new ideas, to be creative, to understand that one thing causes another thing, as opposed to just two things being correlated.
To develop counterfactuals, to imagine, I've got this plan for the future, but if I tweak this one thing, then this is what's going to be different.
These are all uniquely human ways of thinking, and it's going to take a lot of breakthroughs before we have a machine that can do all those things.
And there's just a lot of disagreement, even among the very smartest people working in this field, about how long it's going to take for those kinds of breakthroughs to happen.
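The labeled-data training Ford describes can be caricatured in a few lines of Python. This is a toy stand-in (a one-feature nearest-centroid classifier, not a deep network), with invented data, just to make the "million labeled photos" idea concrete: the learner needs many labeled examples and only picks up patterns present in that data.

```python
from statistics import mean

def train(examples):
    """examples: (feature, label) pairs, e.g. label in {"dog", "not_dog"}.
    Learns one centroid (average feature value) per label."""
    by_label = {}
    for x, y in examples:
        by_label.setdefault(y, []).append(x)
    return {label: mean(xs) for label, xs in by_label.items()}

def predict(model, x):
    # Assign the label whose centroid is closest to the input feature.
    return min(model, key=lambda label: abs(model[label] - x))

# The "million labeled photos" step, shrunk to six labeled 1-D features:
labeled = [(0.9, "dog"), (0.8, "dog"), (1.1, "dog"),
           (0.1, "not_dog"), (0.2, "not_dog"), (0.0, "not_dog")]
model = train(labeled)
assert predict(model, 1.0) == "dog"
assert predict(model, 0.05) == "not_dog"
```

A child, by contrast, generalizes from one pointed-out dog; closing that gap, learning from little data and without labels, is exactly the breakthrough Ford says is still missing.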
dave rubin
How concerned are you about the unbiasing that seems to be happening when it comes to the algorithm?
So, for example, the famous case that everyone talks about is that if you Google American scientists, it happens to be, just as a function of history, that most famous American scientists who have done most of the breakthroughs, most, not all, happen to be white men.
But Google is unbiasing the searches so they include more black people or more women or things like that. Nobody has a problem with that; no one in their right mind has a real problem acknowledging that there are scientists of every color and gender and all those things. But they're unbiasing things in a way that's not really factual, in terms of what we're putting into the algorithm, and where that could lead us seems pretty scary.
martin ford
That's ultimately a decision for society, I suppose, how we want to address those issues.
I mean, the whole issue of bias in algorithms is a huge issue in artificial intelligence.
People are working on addressing that.
And that operates in both ways.
I mean, there definitely and absolutely have been legitimate cases of algorithms that are biased against people of color, for example, and on gender too.
I mean, I know that one company, for example, stopped using an AI system that was used to screen resumes because it was biased against women.
And there have been other examples.
dave rubin
And where does that bias come from in a situation like that?
martin ford
What happens is that, again, these are systems that are learning from data, right?
But where does that data come from?
Originally, it comes from people.
So if people are biased in some way and they're generating this data, then a machine learning algorithm that comes along and is trained on that data will pick up that bias.
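Ford's point, that a model trained on human decisions inherits human bias, but that the learned bias is at least inspectable, can be shown with a toy resume-screening sketch. The data and the crude averaging "fix" are invented purely for illustration; real de-biasing work is far more involved.

```python
from collections import Counter

def train_screener(history):
    """history: (group, was_hired) pairs from past human decisions.
    Learns each group's historical hire rate -- and, with it, any bias
    that was baked into those decisions."""
    hires, totals = Counter(), Counter()
    for group, hired in history:
        totals[group] += 1
        hires[group] += hired
    return {g: hires[g] / totals[g] for g in totals}

# Biased history: equally qualified groups, unequal past outcomes.
history = [("A", 1)] * 8 + [("A", 0)] * 2 + [("B", 1)] * 3 + [("B", 0)] * 7
model = train_screener(history)
assert model["A"] == 0.8 and model["B"] == 0.3   # the human bias is now in the model

# The hopeful note: the learned numbers can be audited and adjusted
# directly ("tweaking some bits") -- here, crudely, by equalizing them.
debiased = {g: sum(model.values()) / len(model) for g in model}
assert debiased["A"] == debiased["B"]
```

The same audit is effectively impossible on a human interviewer, which is the hopeful note Ford strikes a moment later.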
dave rubin
So basically, we're the flaw in the system.
martin ford
Absolutely.
dave rubin
More than anything else, really.
martin ford
Right.
But there is a hopeful note there, which is that fixing bias in a human being is very hard, right?
I mean, we don't really know how to do that.
And we know that it does exist to some extent.
But fixing it in an algorithm could be a lot easier, right?
It's basically tweaking some bits.
So as we--
dave rubin
Well, I guess it depends who's doing it, though.
martin ford
Right, exactly.
As long as it's done in a careful, proper way.
But we can imagine a future where algorithms, as they're employed more, maybe as kind of a check on decisions, or maybe in some cases actually making decisions, it could actually be a less biased world and not a more biased world.
But you're right, there are huge numbers of issues running in both directions there.
dave rubin
Yeah. Well, it's funny, because that's sort of the theme of all of this. I'm even trying to figure out, as you're talking, are you optimistic or pessimistic about where this could all lead? And I definitely sense both sides there.
martin ford
Yeah. I mean, speaking more holistically, there are many, many issues with AI, things that we should be concerned about.
Bias is one.
Security, the ability of people, bad people, to hack into a system and do evil things with it.
The potential for weaponization is another thing that many people are really, really worried about.
The idea that you can have autonomous weapons.
and not just one autonomous weapon that might independently kill people without a human in the loop, but you could have thousands of them swarming, right?
dave rubin
That's the Skynet portion of this.
martin ford
Truly terrifying.
And this is something that is not really science fiction.
I mean, we were talking earlier about super intelligence and the Terminator, where the machines actively are making a choice to kill us.
That's science fiction.
That lies far in the future.
But the idea of having thousands of swarming autonomous drones that were not independently intelligent, but were, you know, programmed by somebody else to attack someone or something.
So basically, this is something that could happen.
dave rubin
So the idea being that, okay, Amazon moves to drone delivery, and then someone hacks into the system, and instead of these drones dropping packages at our door, they're flying through our windows and attacking people on the streets, or whatever, yeah.
martin ford
Or it could be someone, you know, manufacturing huge numbers of these drones and then installing autonomous software, because the barrier to entry here is pretty low.
I mean, these are weapons that some people have described as weapons of mass destruction.
If you had enough autonomous drones, that would be incredibly dangerous, right?
Now, if you look at something like nuclear weapons, in order for a country to have nuclear weapons, you've got to be a nation-state.
You've got to have resources on that level in order to develop nuclear weapons.
But with these kinds of technologies, there's a lot of overlap between the commercial sector and things that could be done on the security or military side.
You could go on Amazon, purchase a thousand drones, and then maybe, you know, engineer them to be weapons or something.
So there's a much lower threshold there.
People in a basement somewhere could be doing this kind of stuff, right?
dave rubin
You're tipping them off right now.
martin ford
Well, I think they know already.
But it is quite scary, and many people in the AI community are very passionate about this.
In particular, there's an initiative in the United Nations to actually ban fully autonomous weapons, for example.
And the real worry is not just that militaries would use these kinds of weapons, but it would go beyond that.
And you would have, you know, the kind of shady arms dealers that now sell machine guns selling autonomous drones.
And so they then become available to terrorists and all kinds of people.
And this is a really scary scenario.
One of the people I interviewed, Stuart Russell, who's a professor at UC Berkeley, created a YouTube video called Slaughterbots that you can go on and watch.
It's really quite terrifying.
It shows you exactly what could be done with huge numbers of swarming autonomous drones.
Again, it's not science fiction.
It's something that could happen in the next 5-10 years.
dave rubin
Are you familiar at all with the anti-technology movement?
The more you pay attention to it, the more people you see trying to either get off the grid or limit their amount of time online and that whole thing?
martin ford
Right.
I mean, that's, I think, a natural response to a lot of this.
I mean, the worst example of that is Ted Kaczynski, the Unabomber, who actually wrote a manifesto that was published, I think, in the New York Times.
This is a guy that, okay, he's crazy, he's a murderer, all of this.
But if you read that manifesto without knowing it was written by him, he's raising a lot of the issues that we are now talking about.
You know, the issues that technology could be a threat, the issue that we might become so dependent on this technology that we lose our agency, right?
Our ability to think for ourselves and so forth.
So, you know, even back then, these people were thinking about this.
And so this is a natural response.
And one of the things I fear the most is that if we don't find a way to adapt to these technologies, and find a way to leverage all this progress on behalf of everyone so that everyone is better off, there's going to be a bigger and bigger backlash.
People are going to turn against the technology.
And that might mean not just going off the grid and living as a hermit, but actually becoming much more adversarial to the system.
And that might happen politically; it might happen in some places even in the form of social unrest and so forth, as things get bad enough, if we really have unemployment.
So I just think it's critically important that we begin to really address these issues and have an honest discussion about them, so that we can avoid that scenario.
dave rubin
I assume you're a fan of Black Mirror?
martin ford
I haven't really watched that, but I've heard a lot about it.
But yeah, those kinds of scenarios are science fiction now, but they are every day becoming reality.
dave rubin
Is there a sci-fi movie that you think handles some of this in the best way?
So not going all the way to Terminator tomorrow, but are there some movies that you think have sort of teased out some of the more realistic futures or closer futures?
martin ford
Yeah, there are several.
There was one movie a few years ago called Elysium.
Yeah, which really got at the issue of inequality, because of what happened in it.
dave rubin
Oh, that's the one with the artificial Earth.
martin ford
Yeah, and all the rich people migrated there, and Earth became basically, you know, a dystopian nightmare.
All the regular people were stuck on Earth.
And you're seeing that already, of course, with wealthy people moving to gated communities and so forth, and elite cities like San Francisco, where things are becoming so unequal.
And I really worry that that's the kind of future we're heading toward: wealthy people that are benefiting from this technology, and maybe using the technology to protect themselves from the masses, right, and everyone else is literally left behind.
dave rubin
So that's sort of the ultimate irony of what's happening in Silicon Valley, right? I mean, you just said it.
It's like San Francisco: they've got all these great minds up there creating all this incredible technology, absurd amounts of wealth.
And then if you go out on the streets of San Francisco, the amount of homelessness is through the roof.
martin ford
Yeah, yeah.
dave rubin
And crime and everything else.
martin ford
San Francisco and the Bay Area is ground zero for this technological revolution.
And then right in their backyard, you see the inequality, right?
You see what's happening.
And this is something that's going to scale out, right?
To everywhere, basically.
So we really need to get control of that.
And if we don't, I think it's going to tear our society apart, right?
It's going to ultimately lead to some real problems in the United States, and in other countries that are less stable, that have less solid institutions than we have.
It could be even worse.
You're going to see governments fall and things like that in some countries as a result of this, I think.
So it's really something to be concerned about.
We need to have some sort of a plan.
dave rubin
Yeah.
Well, that's what everyone's trying to figure out right now is what is that plan, I suppose.
martin ford
Exactly.
And I think this is one of the biggest challenges we're going to face in the future.
You know, AI is going to be one of the primary forces that shapes the future.
It's going to be incredibly disruptive.
And of course, it's going to happen in parallel with other things.
Climate change.
Geopolitically, the rise of China, migration, right?
These are all huge things that are happening already.
All these things are coming at us in parallel.
They're going to hit all together, and I really worry about kind of a perfect storm.
So we really need to begin to get a handle on all this.
dave rubin
How does the information war factor into all of this?
You know, it's one of the things that people who watch this show are always talking about; everyone's talking about fake news all the time, or that we're just being handed things.
You know, the algorithm pushes us stories that are favorable maybe to one side of the political aisle or something like that.
And then we're all going to also siphon off into our own informational realities, basically, and sort of, we'll live in the same physical world, but digitally we're gonna just accept different truths.
martin ford
Exactly, and that's what makes it even more scary, is that all of this disruption is coming at us at a time when we are just incredibly polarized, where to some extent we're living in different universes.
Our ability to even talk to each other seems to be limited.
How are we gonna, you know, address these issues? How are we going to respond to these incredibly disruptive forces when we can't even sit down and have a conversation and agree on the same facts, right?
That's a real problem, and there is evidence that it's getting worse and worse.
That's largely the impact of social media.
dave rubin
That's why I'm doing this.
This is my little firewall right here, basically.
unidentified
Exactly.
martin ford
I hope that there can be more of this, because we really need everyone, and that includes people on the left and people on the right, to be able to talk to each other about this. Because otherwise, we're going to have what we have now.
We've got a Congress that literally can do nothing.
And I'm concerned that no matter who wins the presidency in 2020, it's likely that the Congress will still be divided, right?
The same political polarization, social polarization, and social media polarization will be there.
So how are we gonna address these kinds of issues?
dave rubin
So that gets to what you were talking about earlier: that you need something sort of separate from the government, something that could oversee our ability to deal with this technology, so that, in a way, it can't be politicized the way our system is now, right?
martin ford
I mean, specifically in terms of having a basic income, I think there are good reasons to put that in the hands of a separate institution.
And the Federal Reserve would be a good example of that.
The Federal Reserve, which controls interest rates, is relatively independent right now, although Trump has tried to play around with it.
But if it weren't for the fact that we had an independent Federal Reserve, I think we would be in much bigger trouble now than we are.
And so there is an argument for maybe taking some essential functions of government away from the political process and having a kind of a technocratic approach to that.
dave rubin
All right, so my last question will be a two-parter.
Paint me a future.
If we get some of this under control and we deal with this technology maturely and properly, give me sort of what do we look like in about 10 years.
And then if we lose control and we don't do the proper things and don't have the proper firewalls, what does the future look like in 10 years?
martin ford
Well, I think maybe we should look even beyond 10 years.
dave rubin
How far do you want to go?
martin ford
Well, let's say 15, 20 years.
But I think that within that kind of a time frame, there's going to be an unambiguous impact on the job market.
So if we don't get control of this, we will see unemployment, at least among some workers.
We'll see greatly increased inequality, even beyond what we see now.
We will see more and more anger.
More people left behind, possibly even social unrest as a result of that.
We will see the rise of people like Trump, whom you might characterize as kind of a demagogue, someone who preys on the fear that people have, right?
Maybe, you know, points to immigrants as opposed to technology as being the primary cause of this and so forth.
dave rubin
Right, those other people are stealing your job.
martin ford
It's a lot easier to always point to another human being than it is to point to an intangible force like technology.
So we could have a future where everything is just very, very ugly,
and a lot of people are really struggling.
dave rubin
Don't end me on that future.
martin ford
And the other part of that, though, is that there's also an economic aspect:
as people are unemployed or have lower incomes, they have less money to spend, right?
Now they're not driving the economy.
So the whole economy suffers.
So we could have also a financial crisis, a recession.
Perhaps people can't pay their debts and we get into a situation like we had in 2008, right?
So that's sort of the worst case scenario.
The best case scenario is that we find a way to adapt to this.
Maybe with something like a basic income.
So we address this issue of people not having jobs or not having adequate incomes.
So they still have money to spend.
They go out and they spend their money in the economy.
There are all kinds of new products and services and exciting things for people to spend money on.
There is incredible opportunity for entrepreneurs, for people like Elon Musk or the next Steve Jobs, to create things based on this new technology, and people have the income that they need to buy these products.
Things do, in large measure, get less expensive.
So people have greater purchasing power.
We have enormous breakthroughs in science and medicine.
We all live longer and healthier lives.
So there are enormous benefits to artificial intelligence.
It's going to become the primary tool that's used in scientific research, in solving problems like climate change, developing new forms of clean energy, you know, medical breakthroughs, all of that.
The key thing is that we want to make sure that we get that stuff and that we get it for everyone.
In other words, we want to be able to leverage it on behalf of everyone rather than just a few people.
And I think that if we can find a way to navigate through this, it's incredibly optimistic, and where it kind of ends up in the far future is maybe something like Star Trek, right?
Where you've got what has been called kind of the post-scarcity economy, an economy of abundance where, you know, people don't have to worry about the basics of life anymore.
People, you know, focus on other things, right?
I mean, in Star Trek, you've got the materializer, right?
Everything is basically free.
People don't have to work a nine-to-five job to survive.
They're out traveling the universe or whatever, doing things that are genuinely meaningful to them.
And I think that's sort of the vision that we should have as we, you know, anticipate the development of these technologies.
But in order to get there, we've got to be honest about what the implications of this are.
We need to have an honest discussion and come up, I think, ultimately with some real policies to address the risks and the downsides that are going to come with this progress.
dave rubin
So, we shall see.
"I hope so" are the three words that'll sort of take us out of this one.
We should probably do this every year.
You wanna do one of these every year?
martin ford
Absolutely, I'm happy to come back.
dave rubin
And just pick up on all the incremental progress, and then I suppose, if everything you're saying is right, one year we're gonna do it, and everything will look so absolutely different, we won't even be able to look back and make any sense of what we're talking about.
martin ford
Right, right.
Once we get past that singularity, everything will be different.