Max Tegmark: Can We Prevent AI Superintelligence From Controlling Us?
One of the most memory-holed stories of 2023 was that the CEOs of all of these companies, Sam Altman, Demis Hassabis, Dario Amodei, signed a statement saying, "Hey, this could cause human extinction.
Let's be careful." They said it.
It's kind of like if Oppenheimer and Einstein had written a letter warning that nuclear weapons could kill people, and then everybody just sort of forgot about it.
Few people understand artificial intelligence and machine learning as well as MIT physics professor Max Tegmark.
He's the author of Life 3.0: Being Human in the Age of Artificial Intelligence and founder of the Future of Life Institute.
The painful truth that's really beginning to sink in is that we're much closer to figuring out how to build this stuff than we are to figuring out how to control it.
Where's the US-China AI race headed?
How close are we to science fiction-type scenarios where an uncontrollable AI can wreak major havoc on humanity?
And how do we prevent that?
Do you know that AI is currently the only industry in the U.S. that makes powerful stuff that has less regulation than sandwiches?
This is American Thought Leaders, and I'm Jan Jekielek.
Max Tegmark, such a pleasure to have you on American Thought Leaders.
Thank you!
We've been seeing an incredible growth in AI, especially through these chatbots.
That's what all the fuss is about, maybe also some in robotics and some other areas, some very interesting applications.
But I have this sense, I have this intuition that what's really happening is much, much greater than that at a level that perhaps I can't even grasp at this point.
So why don't you tell me what's really going on here?
So it is indeed hard to see the forest for all the trees because there's so much buzz in AI every day.
There's some new thing.
So if we zoom out and actually look at the forest, you know, we humans have spent a lot of time over the millennia wondering about what we are, how our bodies work, etc.
And hundreds of years ago, we figured out more about muscles and built machines that were stronger and faster.
And more recently, the name artificial intelligence was coined in the 1950s.
People started making some progress.
Since then, it's mostly been chronically overhyped.
The people who started working on this said they were going to do a summer workshop at Dartmouth and kind of figure it out.
It didn't happen.
But in the last few years that has changed. To give you an example, I've been doing AI research at MIT as a professor there for many years, and as recently as six years ago, almost all my colleagues thought that building a machine that could master language and knowledge, kind of at the level of ChatGPT-4, was decades away.
Maybe it would happen in 2050.
And they were obviously all wrong.
It's already happened.
And since then, AI has gone from kind of high school level to kind of college level to PhD level to professor level and beyond in some areas, which brings us close to the point Alan Turing warned about. He said, look, folks, if you ever make machines that are just better, that can outthink us in every way, they're going to take control.
That's the default outcome.
But, he said, chill out.
It's far away.
Don't worry.
I'll give you a test, though, a canary in the coal mine for when you need to pay attention.
It's called the Turing test.
That's when machines can master language and knowledge.
Which is what has now happened with ChatGPT-4 and so on.
And so basically, after a lot of hype and failed promises, we're now at the point where we're very close to probably the most important fork in the road in human history, where we have to ask ourselves: are we going to somehow build inspiring tools that help us do everything better and cure diseases and make us prosperous and strong?
Or are we going to just sort of throw away the keys to the planet by building some alien, amoral machine species that just sort of replaces us?
I really want to touch on this.
I'm very fascinated about how you talk about the AIs that we're creating as alien.
And that's very interesting.
But just very briefly, what is the Turing test and how has it been passed now?
The Turing test, the way Alan Turing defined it, is an imitation game.
You basically get to interact with a machine, maybe by typing.
And the question is, can you tell the difference between a machine and a human?
There was actually a nerdy paper written by a bunch of scientists recently where they found that machines now pass this test more often than humans do, even.
They're even better at convincing people they're human than humans are.
But in practice, passing the Turing test means that you really master language and basic knowledge and so on the way that today's chatbots can clearly do.
And the reason this test is so important is because if you ask yourself, why is it, you know, that when you go down to the zoo here in Washington, it's the tigers that are in the cages, not the humans.
Why is that?
Is it because we are stronger than the tigers or have sharper teeth?
Of course not.
It's because we're smarter, right?
And it's kind of natural that the smarter species will dominate the planet.
And that was the basic point Turing made, and it still stands.
People who don't see this, I think, make the mistake of thinking of AI as just another technology, like the steam engine or the internet.
Whereas Turing himself was clearly thinking about it as a new species.
That might sound very weird because your laptop isn't a species, of course.
But if you think of a species as something that can make copies of itself and have its own goals and do things, we're not there now with AI.
But if you imagine, in the not distant future, that we have robots that can just outthink us in every way, then of course they can also build robot factories and build new robots.
They can build better robots.
They can build still better robots, etc.
And then gradually get more and more advanced than us.
That checks every box of being a new species.
And before we get to that point, we have to ask ourselves, is that really what we want to do?
You know, we have this incredible gift, placed here as stewards of this earth, with the ability to use technology for all sorts of wonderful stuff, and I love tech as much as anyone.
But don't we want to aim for something more ambitious than just replacing ourselves?
Again, before we start talking about this sort of alien, foreign thing, which we seem to be creating ourselves, I've been thinking a lot about how the information landscape has always been a bit politicized, but in the last decade it's been taken to a whole different level.
Cutting through that has been a huge challenge for us at Epoch Times.
That's part of the reason I think we've had a lot of growth, because I think we have cut through it.
You have Verity News, as it's now called, a kind of attempt to do the same sort of thing.
It's also been demonstrated that many of these chatbots, even at what I guess you would still call a primitive level of AI development, have huge holes in their knowledge and really biased ways of thinking.
It's like the values of the programmers have basically kind of gone into them.
We're not even talking about creating this AGI or anything like that.
That itself seems a huge problem at this primitive level.
Your thoughts?
For sure.
I mean, George Orwell pointed that out: if someone can control the information people see, they can control what people think.
For me, being a scientist is basically the same thing as being a good journalist, which means that you pursue the truth, you follow the truth, whatever it is, even if it's not what you had hoped you were going to find, you know.
And I sometimes get asked what I mean by being a scientist.
My definition would be that a scientist is someone who would rather have questions they can't answer than answers they can't question.
You're laughing here, but think about Galileo.
He asked some questions that the authorities didn't want him to ask.
Look at people more recently who asked questions that were deemed inconvenient.
All sorts of stuff that got them blocked on Facebook or whatever, and then later turned out to be true.
And what we see is this is a quite timeless thing.
You know, George Orwell was on to something.
The first thing that people who want to manipulate the truth will do is accuse their critics of spreading disinformation.
So as a scientist, I get really annoyed when people say, oh, just follow the science.
Being a scientist is not like being a sheep and following what you're supposed to follow.
It's asking all the tough questions, whether people get pissed or not about that.
And that's why I created the Verity news site, where it's just like in science: it's hard to figure out the truth, let's be honest.
Very hard.
Very hard.
You had the world's smartest physicists for hundreds of years believing in the wrong theory of gravity, and that wasn't even political.
They believed in Newton, and then Einstein came along 300 years later and said, "Oops, we were all wrong." So the idea that you can just figure out the truth by having some government-appointed or Facebook-appointed committee that's going to decide it for you is ridiculously naive, right?
If it was so easy to figure out the truth, you could shut down the universities, shut down all the news.
I think science has the best track record historically of ultimately getting to the truth, closer to the truth.
And so the idea with Verity was just to sort of emulate that a little bit in the media.
Rule number one is you don't censor anybody, even if they're dressed badly or don't have a fancy degree.
If they have a good argument, you've got to listen to them.
I'm pretty sure that someone back at MIT is going to criticize me for saying, hey, Max, why are you on the...
You know, I'm a scientist.
I will talk with anybody.
I will listen to anybody also.
And that's the scientific method, you know.
If you go to verity.news, all the different perspectives are in there.
And we actually use machine learning to figure out what people from the different sides of a controversy all agree on.
We call those the facts.
And then we also separate out where they disagree.
So people can just go in if they're busy, you know, oh, this is...
That's how I want to have it in science too, as a scientist.
If someone is arguing about a vaccine or anything controversial, I want to say, okay, these ones say this, these say this.
Now I can decide for myself.
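To make that idea concrete, here is a minimal Python sketch of separating cross-spectrum agreement from disagreement. This is not Verity's actual pipeline, whose internals are not described here; the outlets, claims, token-overlap similarity measure, and threshold are all illustrative assumptions.

```python
# Illustrative sketch only: group claims made by outlets on opposite sides of an
# issue, and label a claim a "fact" when a version of it appears across the divide.
# Jaccard token overlap stands in for the real semantic matching a production
# system would need.

def tokens(claim: str) -> set[str]:
    return {w.strip(".,!?").lower() for w in claim.split()}

def similar(a: str, b: str, threshold: float = 0.5) -> bool:
    ta, tb = tokens(a), tokens(b)
    return len(ta & tb) / len(ta | tb) >= threshold

def split_facts_and_disputes(left_claims, right_claims):
    facts, disputed = [], []
    for claim in left_claims:
        if any(similar(claim, other) for other in right_claims):
            facts.append(claim)      # both sides report it -> treat as a fact
        else:
            disputed.append(claim)   # only one side reports it -> contested
    return facts, disputed

if __name__ == "__main__":
    left = ["The Senate passed the bill 52-48 on Tuesday",
            "Critics say the bill will raise consumer prices"]
    right = ["Senate passes bill 52-48 on Tuesday",
             "Supporters say the bill cuts taxes for businesses"]
    facts, disputed = split_facts_and_disputes(left, right)
    print("Agreed on:", facts)
    print("Disputed:", disputed)
```

A real system would use semantic embeddings rather than word overlap, but the split into "agreed facts" versus "contested framing" is the same.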
One of the big challenges is that we have something that I kind of call the megaphone.
It's when you have an information ecosystem where there's, for some reason, consensus on an issue, no matter how crazy it turns out to be years later, right?
And then that kind of dominates the information ecosystem, right?
So according to your algorithms, you might think because of that, well, that's the truth.
Because look, all these different media agree on this point.
You know, one example was this idea of Russiagate.
There was this false narrative that the president was a Russian agent.
Everybody was writing it.
You would think you would have believed it almost, right?
But except that we were...
And of course, you know, you're vindicated later, but you suffer along the way.
And the question is just like Galileo.
But is Verity News able to distinguish that kind of consensus from the truth?
I'm just fascinated by what you're trying to do here.
Yeah, we actually did an interesting experiment.
This was with an MIT student, Samantha D'Alonzo, where we gave an AI a million news articles to read.
And asked it if it could automatically figure out which newspaper wrote the article just by reading the words.
And it was remarkably good.
And it put all of the newspapers on a two-dimensional plane.
One axis was the obvious left-to-right one; if it's an article about abortion, for example, it was pretty easy to tell.
If an article talks about fetuses versus unborn babies, it could make some guesses as to what views the journalist had about that matter.
But there was a whole separate axis also, which the AI just found by itself, which is what we call establishment bias.
Are you basically saying the same thing that everybody else is saying, or not?
It's often newspapers that are a little bit farther away from power that don't always echo the big talking points of the corporations and so on.
So there is an objective way of ultimately noticing this.
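For readers who want to see the shape of such an analysis, here is a toy sketch in the spirit of that experiment, not the actual MIT study: a classifier that guesses the outlet from word choice, plus a two-dimensional projection of per-outlet language profiles. The outlet names, articles, and choice of scikit-learn are illustrative assumptions.

```python
# Toy sketch: (1) predict the outlet from words alone, (2) project each
# outlet's average word-usage profile onto two axes. With real data, axes
# resembling "left vs. right" and "establishment vs. non-establishment"
# can emerge; here the data is a placeholder.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.decomposition import PCA
import numpy as np

articles = [
    ("Outlet A", "lawmakers debate abortion rights and reproductive freedom"),
    ("Outlet B", "lawmakers debate protections for unborn babies"),
    ("Outlet A", "officials praise the new climate regulations"),
    ("Outlet B", "critics say the new climate regulations hurt business"),
]
outlets = [o for o, _ in articles]
texts = [t for _, t in articles]

vec = TfidfVectorizer()
X = vec.fit_transform(texts)

# 1) Guess the outlet from the words alone.
clf = LogisticRegression(max_iter=1000).fit(X, outlets)
print(clf.predict(vec.transform(["senators discuss unborn babies"])))

# 2) Average each outlet's articles and project onto 2 dimensions.
names = sorted(set(outlets))
profiles = np.vstack([X[[i for i, o in enumerate(outlets) if o == n]].mean(axis=0)
                      for n in names])
coords = PCA(n_components=2).fit_transform(np.asarray(profiles))
for name, (x, y) in zip(names, coords):
    print(name, round(x, 3), round(y, 3))
```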
And what Verity tries to do is just make sure we sample from all the corners of this for every article.
And in particular, whenever there's a topic, you know what the controversy around it is going to be.
So you make sure you look at newspapers on both sides of it and see what do they agree on.
So that the viewer can come in very quickly and say, okay, this is what happened, here's what they say, here's what they say, and make their own mind up.
It's incredibly important to me that we never tell people what to think, but that we make it easier for them to make up their own mind.
But what you just said is really super interesting, right?
Because what if the AI is injecting inherent bias?
I mean, this dovetails perfectly, right?
Of course it is.
And in two ways.
I mean, one is just from its training data and so on.
But also, if you have a country and all the media is owned by one guy, the media will start writing nicer things about that guy.
There was a study about what happened to the views of the Washington Post after Jeff Bezos bought it.
You will not be so surprised you fall off your chair if I tell you they found that it started criticizing Amazon less, you know.
And now we're in a situation where OpenAI, Google, etc., etc., these companies are controlling an ever larger fraction of the news that people get through their algorithms, which gives them an outsized power.
It's only the beginning of an ever more extreme power concentration that I think we'll see.
If we keep giving corporate handouts to these AI companies and let them do whatever they want, then they will define the truth.
And the truth will always include, oh, we tech companies are great and should be allowed to do whatever we want.
One of the things that shocks me is that in the literature of some of these AI companies, some of them appear to be wanting to create this thing called AGI.
So I'm going to get you to tell me what that is for the benefit of our audience.
And also, that ambition is astonishing and almost like, to me, some sort of odd quasi-religious commitment, right?
Which scares the living daylights out of me.
So please comment.
Yeah.
So first of all, artificial intelligence is just non-biological intelligence.
The idea being that for anything that has to do with information processing, you don't need to do the information processing with carbon atoms, in neurons, in brains.
We found you can also do it with silicon atoms in computers, right?
Intelligence itself in computer science is defined simply as the ability to accomplish goals.
It doesn't matter whether the thing has any kind of...
We just measure it.
Can it beat you at chess?
Can it drive a car better than you?
Can it make more money than you at the stock market?
That's how we operationally define intelligence.
So what's artificial general intelligence then?
Well, the AI we have so far is still pretty stupid in many ways.
In many cases, it's quite narrow also that it can do some things and not others.
For example, most AI systems today are very passive.
What you have there in your iPad can answer questions, but it's not out there doing things in the world.
It's not making plans for what it's going to do next year, right?
Artificial general intelligence is defined simply as AI that can do everything we can do, basically, as well as we can or better.
And some of these companies have put on their websites that their goal is to basically automate all economically valuable work.
People who say there will always be jobs that humans can do better than machines are simply predicting that there will never be AGI, Artificial General Intelligence.
They're betting against these companies.
For many years, people thought that AGI was science fiction and it was impossible.
We already talked about how people predicted even that ChatGPT was impossible, or at least until 2050.
So we should eat humble pie, and I think it's increasingly likely. If we get there, where machines can do all our jobs, that also includes doing the job of making better AI.
So very quickly, the R&D timescale for making still better AI will shorten from the months or years it takes human researchers to maybe weeks or days.
And machines will keep improving themselves until they bump up against the next bottleneck, which is the laws of physics.
But there is no law of physics saying you can't have something, processing information vastly better than us, right?
You're a very smart guy, but your brain is limited by the size of your mom's birth canal, right?
And moreover, your brain runs on 20 watts of electricity, like a light bulb, you know.
And you can easily put millions of watts into a computer. So once machines start improving themselves, they can get as much smarter than us, I think it's obvious, as we are smarter than snails.
And if snails had the idea that they were going to build us and then control us, you know, you would laugh at that, because it's pretty obviously a tall order.
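As a purely hypothetical toy model of the compounding argument: capability multiplies each R&D cycle, and each cycle gets shorter because more capable systems do the R&D faster, until an assumed physical ceiling is reached. Every number below is made up; this is an illustration of the feedback loop, not a forecast.

```python
# Toy model only: exponential self-improvement with shrinking R&D cycles,
# capped by an assumed physical ceiling. All parameters are arbitrary.

def recursive_improvement(capability=1.0, cycle_days=365.0, gain=1.5,
                          speedup=1.3, ceiling=1e6, max_years=30):
    t = 0.0
    while capability < ceiling and t < max_years * 365:
        t += cycle_days
        capability *= gain          # each cycle yields a better system...
        cycle_days /= speedup       # ...which shortens the next cycle
        print(f"year {t/365:5.1f}: capability ~{capability:12.1f}, "
              f"next cycle {cycle_days:6.1f} days")
    return capability

recursive_improvement()
```

With these made-up parameters the ceiling is reached in a handful of years, because the shrinking cycle time means most of the doublings happen near the end.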
So what I'm describing now is exactly what Alan Turing warned about.
He was not worried that some sort of calculator was going to take over the world.
He was worried that we were going to make self-improving machines that would eventually sort of take over.
And there's a really interesting historical analogy here, because I know you're quite into history.
So Enrico Fermi, the famous physicist, in 1942, he built the first ever nuclear reactor actually under a football stadium in Chicago.
It totally freaked out the world's top physicists when they learned about this.
Not because the reactor was dangerous in any way, it wasn't, you know, but because they realized, oh my, this was the last big hurdle to building the nuclear bomb.
That was like the Turing test for nuclear bombs.
Now they knew maybe it'll take three years, four years, two years.
But it's still going to happen.
It's going to happen.
It's just a matter of engineering now.
In fact, it took three years there.
The Trinity test was in July of 1945, the first bomb.
It's exactly the same with the Turing test. When it was passed now, with things like ChatGPT-4, it became clear that from here on out, building things we could actually lose control over is within reach.
It's just engineering.
There's no fundamental obstacle anymore.
And instead, it's all going to come down to political will.
Are we going to want to do this or not?
Can I inject some optimism in this?
Because I'm just feeling a little gloomy in the direction we're going here.
We still have not actually had a nuclear war between the superpowers because there was a political will not to have one.
So we didn't have one, you know, even though there were some close calls.
It's absolutely not a foregone conclusion either that we're just going to hand over the keys to our future to some stupid alien machines because, frankly, almost nobody wants it.
There is actually a very small number of tech people who have gone on YouTube and places like that and said that they want humanity to be replaced by machines, because the machines are smarter somehow and that's better.
I think of that as digital eugenics.
I can't even imagine what these people are actually thinking.
It's insane.
Isn't it by definition, maybe?
I don't know.
Yeah, I think they're sort of salivating over some sort of digital master race.
I'm not a psychologist.
I'm not the one who can get into their minds exactly.
I believe in free speech.
I believe they should have the right to want what they want.
But I'm on Team Human.
I'm not on their team.
I want our children to have the same rights to a long and meaningful life that we have.
And frankly, I think you're on my side on this.
I'm deeply on your side, but my concern, I mean, we're talking about this, right, is that some of the people who are actually working on this are exactly those people. And that's the concern, right?
And the other part is that with the nuclear bomb, there's this mass physical devastation that happens when you launch one.
This is somehow different.
This is ethereal.
The mass physical devastation doesn't happen until you've already sort of...
The biggest difference of all is we have all seen videos of nuclear explosions.
And nobody has seen a video of or anything about AGI or superintelligence because it's in the future.
So the uphill battle you and I have here, if we want to make other people pay attention to this, is like imagining we were warning about nuclear winter in 1942.
People would be like, "What are you even talking about?" If we do get there, there can be plenty of devastation.
The most likely outcome is that it just causes human extinction.
We talked about suppression and memory-holing of inconvenient truths.
One of the most memory-holed stories of 2023 was that the CEOs of all of these companies, Sam Altman, Demis Hassabis, Dario Amodei, signed this statement saying, hey, this could cause human extinction.
Let's be careful.
They said it.
It's kind of like if Oppenheimer and Einstein had written a letter warning that nuclear weapons could kill people and then everybody just sort of forgot about it.
So there could be devastation, but it feels so abstract now because we're...
Today's AI can, of course, brainwash people, manipulate people, cause power concentration, do censorship.
But it's not today's AI that we could lose control over.
It's the AI that we'll get in the future if we let it happen.
The huge shift that's happened after we passed the Turing test, though, is that this future is no longer decades away, right?
Dario Amodei, who leads Anthropic, one of the top American companies, predicts that it'll happen by 2027, two years from now: we'll have AGI, what he calls a country of geniuses in a data center, where these geniuses in the data center, these AIs, are better than Nobel laureates in physics and chemistry and math at their various tasks.
So most likely we'll get to the point where this could happen during Trump's presidency.
Trump is the AI president.
This can be what he does during his watch, which determines how this goes.
I promised you some optimism.
So here's the optimism.
And this dystopian future, almost nobody wants it except for some fringe nerds.
And that's the main...
There's not even any partisan divide on that.
Moreover, I think what's beginning to happen is that people in the national security community are beginning to realize that this is a national security threat.
If you're working in the US natsec community and your job is to keep tabs on other countries that could pose a natsec threat, and then you hear some guy talking about a country of geniuses in a data center, you ask yourself: do I really want some San Francisco-based nerd who's had too much Red Bull to drink, you know, to make decisions about something that could overthrow the U.S. government?
The truth, the painful truth that's really beginning to sink in, is that we're much closer to figuring out how to build this stuff than we are to figuring out how to control it.
But that's also the good news, because nobody in the natsec community wants us to build something uncontrollable.
Even if you start talking about weapons, you want controllable weapons.
We don't want uncontrollable weapons.
And this is the way the natsec community will think in every country.
Take China, for example.
I'm pretty sure Xi Jinping would like to not lose his control.
So if some Chinese tech company could overthrow him, what do you think he's going to do about that tech company?
One of the things that drives innovation is technological competition.
I think that's obvious, right?
I think it's a major driver.
You're trying to kind of get ahead of the other, right?
And this is driving, I'm sure, this rapid chatbot evolution that's happening right now.
The difference is that the moment you let the genie out of the bottle, the moment that thing can start building itself.
And I mean, I don't know at what point.
We're in an arms race.
You mentioned Communist China, Xi Jinping, a completely amoral governing regime.
There's no moral boundary to be crossed.
You're right.
The moral boundary is control.
Absolutely, the Chinese Communist Party wants to not relinquish an iota of control.
That's true.
Any technology that's outlawed or problematic, you know, CRISPR babies would be a great example, and the guy did go to jail for that, but you know all that stuff is happening, okay?
Why?
Because there is no moral boundary, and there's an arms race, technological arms race in all these areas, so we can get ahead.
And, you know, all these American companies have built AI centers, you know, notably Microsoft, and so forth, so you know all that.
Information has been siphoned and used, and I'm convinced that the NATSEC people or the military people are going to be like, "Hey, we've got to keep pushing the envelope here because otherwise those other guys are going to be ahead." And then the genie comes out of the bottle, and now it's doing its own thing.
How does that not happen?
Yeah, those are really profound questions.
So the key thing is this envelope that you talk about.
Think about what is the edge of this envelope, actually.
It's not so simple that AI is just either you have less of it or you have more of it.
Think if there's a space of different kinds of AI systems with different properties.
For example, you have a lot of intelligence in your iPad there, but it's narrow intelligence, not very general; for example, it can't drive, and it doesn't have agency.
It doesn't have goals that it's out pursuing with a robotic body in the world.
At least I hope.
Yeah.
And the thing that we clearly don't know how to control arises only when you combine three separate capabilities: not just very strong domain intelligence at what it's good at, like AlphaFold for curing cancer or whatever, but also very strong generality, so it knows how to manipulate people, produce new pandemics, or whatever, and also agency, autonomy.
It's only when you put those three together that we don't know how to control it.
It's also only when you put those three all together that it can do all the human jobs and cause a lot of negative upheavals maybe in our society.
But if you have only one or two of those traits, it's perfectly possible to keep control.
You yourself have autonomy and generality and intelligence, so you're not going to be replaced by something which lacks one of those.
So if you think of the Venn diagram, I'm being a nerd now, there's like a donut shape where you just pick one or two of those traits but not all three.
That lets you kind of have your cake and eat it too.
And I think that's where we're going to end up, in the doughnut, so to speak. Take a self-driving car, which we don't really have yet.
What do you need from it?
Yes, of course you need autonomy, and you need domain intelligence.
It better be really good at driving.
But do you really need it to know how to make bioweapons?
Do you need to have so much generality in it that your car can persuade you to vote for your least favorite politician?
You probably wouldn't even want that.
You'd just be like, car, can you just stick to driving, please?
And similarly, if you want to cure cancer, there's no reason to teach a cancer-curing AI how to manipulate people or how to drive cars.
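A small sketch of this "three circles" framing, with illustrative labels of my own choosing; this is a toy classification, not any real evaluation standard.

```python
# Toy classification: flag any system that combines strong domain skill,
# broad generality, and autonomous agency -- the combination Tegmark argues
# we don't know how to control. The example systems are placeholders.
from dataclasses import dataclass

@dataclass
class AISystem:
    name: str
    domain_skill: bool   # very strong at its specialty
    generality: bool     # broad, human-level breadth (persuasion, biology, ...)
    agency: bool         # pursues goals in the world on its own

def in_the_donut(s: AISystem) -> bool:
    """Tool AI keeps at most two of the three capabilities."""
    return sum([s.domain_skill, s.generality, s.agency]) < 3

systems = [
    AISystem("protein-folder (AlphaFold-like)", True,  False, False),
    AISystem("chatbot",                          False, True,  False),
    AISystem("self-driving car",                 True,  False, True),
    AISystem("autonomous general agent",         True,  True,  True),
]

for s in systems:
    verdict = ("tool (controllable in principle)" if in_the_donut(s)
               else "all three: the combination to avoid")
    print(f"{s.name:35s} -> {verdict}")
```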
This is my optimism.
Where the race is going to shift is towards tool AI.
The kind of AI here in the donut where we know how to control it.
To me, a tool is what I can control.
I like to drive a powerful car, not an uncontrollable car, right?
Similarly, what military commander in any country is going to want an uncontrollable drone swarm?
I think they would prefer the controllable kind, right?
Yes.
So they're not going to put all this extra AGI stuff in it so it can persuade the commander of something else.
We will see a competition of who can build the best tools between companies, between countries.
But there will be at some point a hard line drawn by the natsec community saying no one can make stuff that we don't know how to control.
And you're right that people in different companies, different countries have very different views of things.
But take the Soviet Union: it's not like there was great trust between Ronald Reagan and Brezhnev, right?
But they still never nuked each other because they both realized that this was just suicide.
And they never even signed a treaty to that effect.
There was no treaty where Reagan hugged Brezhnev on a stage and said, I promise not to nuke you.
Because the US knew it would be suicide, and so did they. So you had this red line that nobody crossed.
And I think building uncontrollable, smarter than human machines is exactly the same kind of red line.
Once it becomes clear to everybody that that's just suicide and no one is going to do it, instead, what people will do is...
Do you know that AI is currently the only industry in the U.S. that makes powerful stuff that has less regulation than sandwiches?
If someone has a sandwich shop in San Francisco, before they can sell their first sandwich, some health inspector checks out the kitchen.
And a great many other things, yes, correct.
Yeah, if someone at OpenAI has this crazy, uncontrollable superintelligence and wants to release it tomorrow, they can legally do so.
You know, it used to be that way also for, for example, for medicines.
Have you heard of thalidomide?
Yes.
Yeah, so, you know, it caused babies to be born without arms or legs, right?
If there were no FDA, I'm sure there would be all sorts of new thalidomide equivalents released on a monthly basis in the U.S., and people wouldn't trust medicines as much, and it would be a horrible situation.
But because we've now just put some basic safety standards in place, saying, hey, if you want to sell a new drug in the U.S., first show me your clinical trial results.
And it can't just say in the application, I feel good about this.
It has to be quantitative and nerdy.
This many percent of the people actually get the benefit.
This many people actually get this side effect, that side effect.
The companies now have an incentive to innovate and race to the top.
Within a couple of miles of my office, we have one of the biggest biotech concentrations in the US.
Those companies spend so much money innovating on safety, figuring out how to make their drugs safe.
they take it very seriously because that's the way you get to market first.
If you just had even the most basic safety standards for AI, the companies would shift from spending 1% of their money on safety to spending a lot more, so that they could be the first to go to market with their things.
And we would have vastly better products.
And I'm not talking about some sort of horrible innovation stifling red tape, of which there has been much, of course.
I'm talking about some very basic stuff.
I think it would even benefit the AI industry itself.
The worst thing that could happen to our American competitiveness in AI is that some dudes with hubris just ruin it for everyone else and cause a huge backlash.
Right.
Well, so unfortunately, regular viewers of the show will understand that we've often criticized the regulatory systems.
So I'm not as bullish, although I do believe that there's an attempt at the FDA right now and other health agencies to create more of exactly the scenario that you described.
I just want to be clear here.
I'm not some sort of starry-eyed person who thinks that regulation necessarily is all great.
There's horrible examples of regulatory capture, massive overreach.
100%.
But there are also examples like thalidomide, where I don't know anyone of any political persuasion who thinks it's good that people were allowed to sell thalidomide.
So somewhere in the middle is the optimum amount of safety standards.
I was only saying this for the benefit of the people who are skeptical of our regulatory system.
I'm just saying that having fewer regulations on...
Yes.
Okay.
I think we can absolutely agree on that.
Okay.
So now, your approach is kind of: tools.
Think of AIs as tools, build AIs as tools.
Yeah.
Not aspiring to the sort of sentience where the three circles all coalesce.
Not too much generality in particular, I think, is the issue.
We should not just try to build replacements for ourselves, not aspire to just build replacement humans.
We should build tools that make our lives better.
We talked about safety standards.
A safety standard for a sandwich might be that it should not give you salmonella. A safety standard for AI: before you release it, your job is to persuade the FDA for AI, or whatever it's called, that this is a tool that can be controlled.
And actually, it's even better.
Of course, it's easiest if the thing has no agency, if it's just a chatbot answering questions or telling you how to cure cancer or whatever. But you can even have agency.
I don't lose any sleep over future self-driving cars taking over the world.
They have agency, autonomy.
They really want to go from A to B safely, etc., etc.
But they lack the generality to pose a real threat.
As long as we don't teach them, again, how to manipulate people and make bioweapons and all sorts of other stuff that they really don't need to know.
And there's a lot of cool science happening, actually.
This is what we work on in my MIT research group, actually.
How can you take an AI that knows too much, stuff that's not necessary for the product, and distill out of that an AI that just knows what it needs to know to really do the job?
That really is the question, because how do you know that it didn't inject some code that is, you know...
I can tell you, actually.
Yeah.
The way you do it is this: suppose you figure out on your own a new algorithm for navigating a rocket to the International Space Station.
How do I know for sure that you're not secretly going to instead crash it into the White House or something?
Well, I don't, if I just have to trust you.
But if you write it down as a program, then I can look at that, and other people can look at it, to make sure that all it does is steer the rocket to the space station and not into the White House.
I don't have to trust you anymore to trust the tool you built me.
And the amazing thing is AIs now are getting very capable at writing code.
So what we work on in my group is if you have an AI that's figured out how to do your tool, you tell it to write some computer program that is the tool and also write a proof that it's going to actually do exactly what you want.
Now you actually have a guarantee. It's a little bit like if you talk to someone who's a mathematician, and you ask, why do you trust this theorem from this other mathematician who seems very untrustworthy?
You'll answer, well, because I checked the proof.
I don't have to trust the one who wrote it.
Because AIs are getting so good at doing programming now and even proving things, we're opening this new chapter where we actually have the ability to make tools that we know are always going to remain tools.
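Here is a minimal Lean 4 illustration of the "check the proof, not the prover" idea; the definition and theorem below are toys of my own invention, not the kind of safety property Tegmark's group actually targets.

```lean
-- Toy illustration of "trust the proof checker, not the author": whichever
-- human or AI wrote this file, the theorem below is only accepted if Lean's
-- kernel verifies the proof term. `routeLength` and the stated property are
-- placeholders, not a real safety specification.
def routeLength (legA legB : Nat) : Nat := legA + legB

-- Swapping the two legs never changes the total length.
theorem routeLength_comm (a b : Nat) : routeLength a b = routeLength b a :=
  Nat.add_comm a b
```

The point is that anyone can re-run the checker: you don't have to trust whoever, or whatever, produced the proof.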
Many people tell me, I'm not worried about the future of AI because AI will always be a tool.
That's based on hope. I say instead: let's not have to worry about AI, because we will insist that AI be a tool.
Companies will then innovate to always guarantee that, and then we actually create the tool AI future where we can all prosper.
Basically, when I talk to people from industry, people who work in hospitals, really anyone, the things we want are tools.
Tools to figure out how to cure cancer, to figure out how to avoid the stupid traffic accidents, to make ourselves stronger and more prosperous and so on.
That's what we really want, not build some sort of crazy digital master race that's going to replace us.
I am absolutely convinced that this needs to be regulated.
I'm not a fan of overregulation by any means, but obviously something is needed. So, you know, it's actually kind of interesting.
There's legislation being put into, you know, the so-called Big Beautiful Bill, right, that actually would prevent states from regulating AI, right?
And that's odd.
Given, I mean, I'm kind of, it's something I've been thinking about, given our conversation here.
And I'm not even convinced the regulation is going to work, but I still think it obviously is needed, right?
in order to at least try to keep them tools and not this other thing that you describe.
Yeah, this so-called preemption legislation, where they basically block the states from regulating AI, is clearly just a corporate boondoggle.
We talked about power concentration before and how these companies are often trying to shape public opinion in their favor.
So, you know, it's not surprising that they're also trying to make sure that they can keep their monopolies and be allowed to continue doing whatever they want without oversight. It's not surprising that they're pushing it.
The idea of just taking away constitutional rights from states for 10 years on an important issue, I think, would make some of our founding fathers turn over in their graves.
It's just so transparent that this is a corporate handout.
It's a corporate welfare boondoggle that they're asking for.
If you pass something like this, next thing they're going to ask for is that AI companies should pay negative income tax.
But steel manning it, what is the argument?
It's the same argument that the car companies made against seat belts and the maker of thalidomide made against why there should be any...
Oh, you're going to destroy the innovation, destroy the industry, etc., etc., etc.
I see.
And then, of course, the argument that works best in Washington also to get rid of any regulation is to say, but China.
Well, this is the arms race.
And in some situations, it's compelling, because you're thinking to yourself, look, we definitely don't want them to be the ones imposing their system on everyone. It would be a terrible world.
Right.
But, I mean, communist China I'm talking about.
But the key point is, first of all, that if Sam Altman wants to release the digital master race and the state of Texas says, no, you can't do that, the argument becomes that he has to be allowed to release the digital master race so that Americans get destroyed by an American company instead. That makes no sense. We don't want anyone to do this, including our own companies.
There are, of course, very serious geopolitical issues.
Of course there are.
But that does not mean that lobbyists aren't going to be crafty and try to twist them into arguments that are very self-serving for them.
Does that make sense?
It makes perfect sense.
I mean, that's kind of what lobbyists do, right?
Yes.
Something that you seem to be quite interested in is this idea of the Compton constant.
Again, I guess we're going pessimistic again here.
This is like a measure of how likely we're going to get wiped out by AI, if I'm not mistaken.
This obviously has utility.
Having been a modeler myself at one point, I know that a model can only be as good as the information you put in.
How could we possibly know the variables to fit into something like this?
And does it even have any real utility because of that?
I can't understand the dimensions that would fit into it, but perhaps you can.
Please tell me.
So here's how I think about this.
The way to get a good future with AI is to treat the AI industry like every other industry, with some quantitative safety standards.
So if some company says we're going to release thalidomide, and the FDA says, does it have any risks?
Does it cause birth defects?
And the company says, we think the risks are small.
The FDA is going to come back and say, how small exactly?
What's the percentage?
Show us your clinical trial.
This is how we deal with airplanes also, nuclear reactors.
If you just want to open a nuclear reactor here in D.C., you have to actually show a calculation that the risk of a meltdown is less than 1 in 10,000 per year, stuff like this.
So we would like to do the same thing for future very powerful AI systems.
What's the probability that they're going to escape control?
And the reason we call this the Compton constant in this very nerdy paper my MIT group released is in honor of the physicist Compton, who estimated before the first nuclear bomb test, the Trinity test, that the probability of that igniting the atmosphere and killing everyone on Earth was less than 1 in 30,000.
And this was a very influential calculation he made.
It was because that number was estimated to be so small that the U.S. military decided to go ahead and do the Trinity test in July of 1945.
I'm pretty sure that if you had calculated that it's 98% chance that it's going to ignite the atmosphere, they would have postponed the test and commissioned a much more careful calculation.
So our very modest proposal is that you should similarly require the AI companies to actually make a calculation.
Yeah, they say the risk is small that they're going to lose control, but how small?
And we actually did this analysis.
We took one of the most popular approaches pushed by the companies for how they're going to control something smarter than them, which is that you have a very smart AI controlled by a slightly dumber AI, controlled by a slightly dumber AI, etc., kind of like Russian dolls stacked inside of each other.
We found that the Compton constant was higher than 90%: more than a 90% chance that you just lose control over this thing.
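As a back-of-the-envelope illustration only, and not the calculation in the MIT paper: if each layer of oversight in such a stack independently fails with some probability, the chance that control is lost somewhere in the chain compounds quickly. Both the per-layer probability and the layer counts below are made-up numbers.

```python
# Back-of-the-envelope illustration only -- not the MIT paper's model.
# If each overseer in a "Russian doll" stack independently fails to contain
# the smarter system above it with probability p, the chance that control is
# lost somewhere in the chain grows quickly with the number of layers.

def p_lose_control(p_layer_fails: float, n_layers: int) -> float:
    """Probability that at least one containment layer fails (independence assumed)."""
    return 1.0 - (1.0 - p_layer_fails) ** n_layers

for p in (0.10, 0.25, 0.40):
    for n in (1, 3, 5, 8):
        print(f"per-layer failure {p:.0%}, {n} layers -> "
              f"{p_lose_control(p, n):.0%} chance of losing control")
```

For example, with a 40% per-layer failure rate and five layers, the compounded chance of losing control already exceeds 90%.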
Maybe there are better methods they can come up with in the future.
But right now, I think it's fair to say that we really have no clue how to control vastly smarter than human machines, and we should be honest about it.
And we should insist that if someone wants to release them into the United States, they should first actually calculate the Compton constant, quantify it, just like the pharma industry and the...
I find it really interesting your portrayal of this AI, which we're trying to not create in a way, right?
You know, aliens are a very common topic for humanity for, you know, quite some time now.
But you're saying, no, we're actually making that.
We're actually creating those things.
And the way they think could very quickly become extremely unimaginable to us: how they exist, how they make decisions.
Even the sort of the moral dimensions, is there even anything like that?
And I'm going to be humble here and say that this is not something I can give a glib answer to.
And I think if someone else does give you a glib answer, I can explain why they're overconfident.
What we do know for sure is that we humans have so much in common with each other.
Even fierce political opponents, you know, still love their children.
We all have so many other very basic things in common that we would by default not have in common with machines at all.
You could imagine very alien minds that just don't give a damn about children and think of a child the way they would think of another obstacle, a stone or something in the way, and just completely disregard it. We have a very common heritage, all humans on this earth, right?
It gives us this bond.
We're opening up a Pandora's box here of other kinds of minds where it would be a huge mistake to think that they're going to be anything like us by default.
And what we see now, there's a sadly large number of people who already have AI girlfriends, for example, which are entirely fake.
They have big googly eyes and say sexy things, but there's absolutely no evidence that there's anyone home or that this is anything other than just total deception.
And let's not fall for that, you know.
The fact that there is such a vast space of possible AIs we can make can also be used very much to our advantage.
We can actually figure out, let's figure out how to make AI tools that will be loyal servants to us and help us cure cancer and help us with all sorts of other things.
This is not like talking about nuclear war where either something bad happens or nothing happens.
If we get this right, it will be the most amazing positive development in the history of humanity.
We spent most of our time on Earth here with a life expectancy of 29 years and died of famine and all sorts of curable diseases.
You know, even my grandfather died of a kidney infection that you could just cure with antibiotics.
Technology is empowering us to make life so much more wonderful and inspiring for us, for future generations and so on.
And if we can successfully amplify our technology with tools, you know, that help us solve all the problems that our ancestors have been stumped by, we can create a future that's even more inspiring, I would say, than anything science fiction writers really dreamt of.
Let's do that.
You know, in the story of Icarus, you know, he gets these wings with which he could have done awesome stuff, but instead he just kept obsessing about flying to the sun.
Precisely.
You know, artificial intelligence is giving us humans those wings.
Well, so, you know, of course you have the Future of Life Institute, and you actually have a whole series of, I guess you could say, proposals or approaches, and you have an action plan for AI.
Give me some more concrete thoughts about what you think should happen to keep the genie in the bottle, so to speak.
Our government is developing a new action plan for what to do about this, which is really great.
They're thinking, putting a lot of thought into it.
Some very easy wins, for starters, would be to make sure that they get input not just from a bunch of tech bros in San Francisco, but also from, for example, the faith community, because I strongly believe we need a moral compass to guide us here.
And that has to be something more than the profit of some San Francisco company.
There has to be people in the room who are driven by their own inner North Star.
The conversation has to be not just about what we can do, but about what we ought to do, and for that reason I'm actually very excited that this is happening. We need to focus on what's actually good for Americans, not just for some tech CEO in San Francisco.
We need to build these tools that make America strong and powerful and prosperous, not some sort of dystopian replacement species.
So that's one thing.
A second thing, I think, a very concrete thing, is let Texas and other states do what they want.
Don't try to usurp their power and let some lobbyists here in D.C. decide everything.
I think we should just start treating the industry like all the other industries, again, by having some basic safety standards.
They can start extremely basic: before you release something that powerful, show us a calculation.
What's the chance that you're going to lose control over it?
And then let the government decide whether it gets released or not.
And ultimately, I think it shouldn't be scientists like me who decide exactly what the safety standard should be.
This should be decided by democratic means.
But it takes a while to build up any kind of institutions for safety standards.
So we can start with that right now.
In other words, more regulation or...
You set a bar.
Industry hates uncertainty and vague things.
But if you say this is the bar, if you're going to sell a new cancer drug, you better make sure it saves more lives than it takes.
Right.
If you're going to release new AI...
You better show that you can control it.
The benefits outweigh the harms.
Basic standards, and then exactly how they get enforced can be left to the experts.
This has been an absolutely fascinating conversation.
I greatly enjoyed it.
Any final thought as we finish?
Just to end on a positive note again, you know, ultimately technology is not evil.
It's also not morally good.
I want it to be a tool.
I want us to put a lot of thought into how we make sure we use this tool for good things, not for bad things.
You know, we have solved that problem successfully with tools like fire.
You can use it for very good barbecues or for arson.
And we put some regulations in place and so on so that the use of fire generally is very positive in America.
And same for knives.
Same for so much else, and we can totally do the same thing with AI.
And if we do that, we will actually be building the most inspiring future you can imagine, where seemingly incurable diseases get cured, where we can have a future where people are healthy and wealthy and prosperous and live incredibly inspiring lives.
It's this hope, not the fear, which is really driving me.
Well, Max Tegmark, it's such a pleasure to have had you on.
Thank you so much.
I really enjoyed this conversation.
Thank you all for joining Max Tegmark and me on this episode of American Thought Leaders.