Aug. 26, 2015 - Art Bell
02:18:12
Art Bell MITD - Marcus Hutter Artificial Intelligence

art bell
From the high desert and the great American Southwest, I bid you all good evening, good morning, good afternoon, wherever you may be around the world.
Every single one of those time zones around the world covered by this program, Midnight in the Desert.
I'm Art Bell.
Now, it's going to be a fascinating evening.
We have a professor here all the way from Australia, and it's going to be all about artificial intelligence.
Rules for our show?
Not a lot of thinking there.
They're simple.
There are two of them, no bad language, and only one call per show.
Very simple.
Okay, so not a lot of news to...
You heard about the two former co-workers killed on live TV.
I guess it's a reality world we live in now, huh?
So this killer kills his former colleague and then posts the video on social media.
Only in this day and age, huh?
He described himself as a human powder keg just waiting to go boom.
unidentified
Well, he went boom.
art bell
Horrible day for two former co-workers at a TV station.
I mean, just horrible.
And the Dow Jones industrial average finally up 600 points on Wednesday, so everybody went, looks like China may not, but wait.
The guys who watched this say, no, not too quick.
Nice 600 points, but there may be some very, very difficult days ahead.
Not for us, though.
We've got a fascinating night tonight.
Marcus Hutter is a professor in the Research School of Computer Science (RSCS) at the Australian National University in Canberra, Australia.
He received his PhD and BSc in physics from the LMU in Munich, and a habilitation, MSc, and BSc in informatics from the TU Munich.
Since 2000, his research at IDSIA and now ANU is centered on the information theoretic foundations of inductive reasoning and reinforcement learning, which has resulted in over 100 publications and several awards.
His book, Universal Artificial Intelligence, published in 2005, develops the first sound and complete theory of artificial intelligence.
He also runs the Human Knowledge Compression Contest, offering winners a prize of, get this, 50,000 euros.
50,000 euros.
And I guess we'll soon find out if that has yet been claimed.
You know, somehow that one just isn't professor.
It's just not good for a professor.
So we'll hold that one.
We'll give Professor Hutter something else, like this, for example.
Professor Hutter, coming up from Australia in a moment, it's an amazing world when you can interview somebody like they're in the studio, but they're on the other side of the world.
unidentified
We'll be right back.
art bell
But I am still alive.
By the way, I've got a few people I want to thank.
I want to thank Telos, Joe Talbot for the great audio, Keith Rowland, my webmaster, Heather Wade, producing.
StreamGuys, distributing, LV.net, getting it there.
Sales, Peter Eberhardt, TuneIn Radio, and of course, I've been forgetting, Lee Ashcraft, Dark Matter News.
How could I forget?
And for tonight, Ben, this one's for you.
For Ben Gardner, very good friend of mine for well over 30 years, who passed during the night.
Rough stuff indeed.
All right.
All the way across the world, we reach now to Professor Hutter in Australia.
Professor, good afternoon there.
marcus hutter
Hello, Art.
It's a pleasure to e-meet you and to be on your show.
art bell
Well, it sure is good to have you.
I have wanted to do a show on artificial intelligence now for a very, very long time.
And obviously, you're qualified.
I could barely get through your bio.
marcus hutter
I sent you the short one on purpose.
art bell
It is good.
You're right.
It's good and short, but some of it would require explanation, I think.
Actually not, I guess.
Inductive reasoning and reinforcement learning.
Interesting.
All right, so I'm not sure where we begin, but at the beginning of what you've said here is where I'm going to start, and that is how is, you know, you were promised years ago, Professor, robots, and they would be doing our dishes and housework by now.
They're not around.
Kind of likewise with AI, we've been talking about it for so long now.
How is it evolving?
marcus hutter
Yeah, it's indeed true that the talk that AI will come is already very old.
So, I mean, even before the advent of computers, people thought about intelligent machines.
But then, when the first computers came, it became more a reality.
And some researchers promised, okay, it only will take five or ten years in the 1950s or 60s, and we will be there.
And now we have the 21st century, and we are still not there, but there is progress.
And if you see all the inventions we have made in the last 50 years: I mean, the best chess players are now computers, not humans anymore; then we have speech recognition, which now works pretty well.
I mean, think about Siri.
Not as well as we humans can do it, but well enough to be useful.
We have self-driving cars now.
So we have all kinds of things which came out of decades-long research.
So there's progress, but we don't have yet the generally intelligent machines on a human level.
That's right.
But just because we didn't succeed in the last 50 years doesn't mean that we could not succeed in the next 50 years, right?
Some things take time.
And given the progress, admittedly in a sense, slow progress, although on a cosmological scale, it's a fast progress, we're on the right trajectory.
art bell
All right.
But you mentioned Siri.
How advanced is Siri?
marcus hutter
So that's an interesting question.
So the interesting thing is that many things we believed to be very hard to achieve in AI, things we thought require a lot of intelligence and therefore should be hard to solve, turned out to be rather easy.
I don't want to oversimplify, but one of the first big successes was Deep Blue, playing chess at grandmaster level.
And I mean, chess was the drosophila par excellence for intelligence.
And this problem was solved only 20 years ago.
On the other hand, simple things like speech recognition, right?
Something every five-year-old child can do, understanding speech, took much longer to solve, to get algorithms which work reasonably well.
And so your question was how complicated this algorithm is.
It's hard to put a number on it.
So it turns out that the good old-fashioned approach to AI, where you design a system particularly to solve a certain problem, has limitations.
I can come to the details later.
And the more modern approach, which is called the machine learning approach, where you design actually, in a sense, a very dumb system, like a baby, and then you train it to learn or acquire a certain skill.
So that this approach is much more flexible.
art bell
Is that the right word, Professor?
Train it?
That implies the learning process.
marcus hutter
Yes, exactly.
So we call it training data.
We have training data, and then we have test data to test the system.
So that's a technical expression for it.
And for these modern speech recognition systems, there are different kinds, but a popular one is the so-called hidden Markov model, which is just a certain statistical structure.
And then you give the system a lot of training data, so speech and the corresponding text.
And then the system learns the correlation between the recorded speech and the associated text.
And then once it's trained, you can talk to it in the future and then it will translate it into text.
And there are, of course, variants.
So there are systems which work extremely well with small vocabulary and separate words and for a particular speaker.
And if you want to overcome the speaker dependence or the separate word recognition or you want to have a large vocabulary, then things become more complicated.
But the systems get better and better.
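Hutter's description of a hidden Markov model trained on speech can be illustrated with a toy sketch. This is an illustration only, not a real recogniser: the two hidden "phoneme" states, three acoustic symbols, and all probabilities below are invented, whereas a real system would learn them from training data.

```python
# Toy hidden Markov model: 2 hidden "phoneme" states, 3 observable
# acoustic symbols. All probabilities here are invented for illustration.
start = [0.6, 0.4]                           # P(initial state)
trans = [[0.7, 0.3], [0.4, 0.6]]             # P(next state | current state)
emit  = [[0.5, 0.4, 0.1], [0.1, 0.3, 0.6]]   # P(symbol | state)

def likelihood(observations):
    """Forward algorithm: P(observation sequence | model)."""
    alpha = [start[s] * emit[s][observations[0]] for s in range(2)]
    for obs in observations[1:]:
        alpha = [sum(alpha[s] * trans[s][t] for s in range(2)) * emit[t][obs]
                 for t in range(2)]
    return sum(alpha)

# A recogniser would score the audio against one HMM per word and pick
# the word whose model assigns the highest likelihood.
print(likelihood([0, 1, 2]))
```

The training Hutter mentions is the separate step of fitting `start`, `trans`, and `emit` to recorded speech with its transcript; once fitted, scoring new audio reduces to likelihood computations like this one.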
unidentified
Okay.
art bell
If zero is zero and ten is artificial intelligence, that's a very narrow scale.
What is Siri?
marcus hutter
So it is important to distinguish between systems which solve a very particular narrow task.
So for instance, playing chess well, understanding speech, maybe recognizing faces; and more general systems which are able to do a wide variety of jobs, like, say, a household robot, right?
It can walk around, put your dishes in the dishwasher, cook or something, and maybe even drive a car, all these kinds of work.
art bell
We wish, yes.
marcus hutter
And so we should distinguish whether we talk about narrow AI for particular tasks or general AI, like the humans can do.
So if we only talk about narrow AI, say we have speech recognition capacity on a scale between 0 and 10, and humans are maybe at 9 or 10 or so, then I would say Siri is maybe at a level of 7 already.
art bell
Oh.
But not in AI.
You're just talking about speech recognition.
marcus hutter
Exactly.
Only on this narrow task, speech recognition.
While for chess, for instance, on a scale from 0 to 10, we are already at 10.
But for face recognition, on this scale, maybe we are at the level of 3 maybe.
And for car driving, maybe we are also at 3 or 4.
And for other tasks like cooking, we are at 1 maybe or so.
unidentified
Right.
art bell
Got it.
Okay.
So at the moment, what would you say is state of the art?
marcus hutter
Can you specify the question a little bit more?
art bell
Sure, what is the cutting edge of the path toward artificial intelligence?
In other words, the best work being done.
marcus hutter
Okay, so that requires a long answer.
So I could talk about the various application domains where we have made significant progress.
And I already mentioned a couple of them, like speech recognition, face recognition, chess, car driving, and so on.
So then I could also talk about various approaches towards AI.
And maybe I should concentrate first on that.
And there is, for instance, the neural network approach, where we are heavily inspired by nature.
So the human brain is a neural network.
We try to understand it as well as possible and then simulate something closely or loosely inspired on a computer.
That's a huge area across many disciplines.
And a lot of progress has been made in this area.
And for instance, two years ago, the European Union announced the biggest research project ever in Europe's history.
It's 1 billion euros invested in, I think, 80 or so research institutes to simulate a whole human brain within the next 10 years.
unidentified
Do you think that is possible?
marcus hutter
I don't think they will get there within the 10 years.
They will make significant progress.
Maybe it takes 20 or 30 years.
Eventually, I think we will be able to get there.
10 years is a bit overambitious.
But I like the goal in principle.
So that's the neural network approach.
I can talk more about that, but that's actually not my area of research.
Or I now tell you a little bit about the other approaches.
art bell
So would you say the most interesting AI going on right now or path toward AI is probably in Britain or is it there in your lab?
marcus hutter
In Britain?
art bell
Yes.
The European Union, I'm sorry.
marcus hutter
It's hard to compare, right?
I mean, this project is a 1 billion Euro project, and my research area attracted maybe a couple of million dollars.
art bell
I just wanted to give you an opportunity to say we're way ahead here.
marcus hutter
What is we?
unidentified
I mean, Europe, maybe, yeah.
marcus hutter
Or you could say I am.
So my first PhD student worked on this theory of Universal AI, which I developed early this century.
And then he co-founded with two other people the company called DeepMind, working both on neural network approach and the Universal AI approach.
And just last year, this company was bought for half a billion dollars by Google.
So there's also interest.
And all my former PhD students and other students get absorbed by Google DeepMind, and they're working there now.
So there's also interest in, let's call it, my approach to AI.
And there are, of course, many other approaches to AI.
art bell
All right.
Here's a question for you.
Google itself, professor, is absolutely amazing.
Now, it's obviously not AI as in self-awareness or anything, but given the amount of information that Google has, some people wonder if the massive amount of information and processing power will not eventually lead to some kind of AI.
Is that something to think about?
marcus hutter
Yeah, well, that's the digital Gaia hypothesis.
We can separate it from Google.
So take the Internet, right?
This is billions of interconnected computers, lots of services and virtual agents doing all kinds of jobs in the background autonomously.
Of course, much of it is triggered by the user input.
But even if all users would stop, lots of these processes would keep running.
And the digital Gaia hypothesis is that if you have a structure which is complicated enough, such as the Internet, maybe a little bit more, maybe another ten-fold increase in computing power or something, then suddenly it will awake.
Maybe it comes up with a sentence, I think, therefore I am, or something like that.
art bell
Can you envision that?
unidentified
Is that possible?
marcus hutter
I couldn't rule it out.
I think that's possible.
art bell
Well, it already, to the average person, seems a very smart place.
And when you talk about a tenfold increase in either storage or information, that's really, at the speed we're going, not that much, actually.
We can go tenfold in not very many years, I would think.
marcus hutter
Yeah, in ten years or so.
art bell
Wow.
Now, of course, there are the traditional people who talk about the marriage of artificial intelligence at some level and robotics.
The way it's sounding, AI is going to be ready before the robotics are.
What do you think?
marcus hutter
Yeah, I think so, too.
It seems to be very hard to produce very versatile robots with capacities similar to a human's.
And for many jobs, I mean, you need sort of hands, right?
And well, there are robotic hands out there, but then you have the power problem and so on.
But I think in any case, to a large degree, we can separate robotics from AI.
I mean, many people confuse it.
They ask me, what do you do?
I say, I work in AI.
Oh, you build robots.
No, there's a big difference between the brain and the body.
art bell
You bet.
marcus hutter
You can have severely physically disabled people, like Roger Penrose, sorry, Stephen Hawking, who are very smart, right?
You don't need a body for that.
And the same is with AI research.
You can do a lot without thinking about or doing robotics.
And if you need an interface, you can have a virtual agent living in a virtual three-dimensional environment and test your systems.
And coming back to your question: yes, I think so, although I'm not so sure. Maybe true AI is 20 or 30 years away, and who knows?
I mean, these robots are getting better and better.
It's not so clear what we will have first, you know, really versatile robots or super intelligent algorithms which we can then put into the robots.
But in any case, I think for many applications, we don't need the robots, right?
If you have an interface to your computer and you can talk to a virtual face and ask questions and this system provides services, obviously it can't cook for you then.
That's good enough.
art bell
Yes, let's try a definition here.
You said there might be robots and then super intelligence perhaps that you could install into them.
The difference between super intelligence and artificial intelligence is what?
marcus hutter
Well, many people use these terms in different ways.
And artificial intelligence is just in general if you design an algorithm which does something which is traditionally regarded as an intelligent task, like driving a car, speech recognition, playing chess, and so on.
And super intelligence is usually used if the system is smarter or does it better than a human.
Often it is used only if the system has also the broad capacities like a human does.
I mean, you could say, for instance, that Deep Blue is super intelligent in playing chess because it's better than any human.
Sometimes a machine is only called super intelligent if it has a broad range of capacities, not just playing chess, but driving a car and so on.
art bell
Right.
But I suppose intelligence could be built into a car itself; it wouldn't need a body to drive as a human does.
It could just simply be the car and we're beginning to get cars that are capable of driving themselves.
marcus hutter
Exactly, and we have that; self-driving cars have just recently been legalized in California.
So this will have interesting consequences.
So for instance, assume you can now, or in the next couple of years, you can buy these self-driving cars.
They are much safer and, of course, more convenient.
And you would think, okay, I don't know the numbers exactly, but I think there are 30,000 deaths per year in the United States from car accidents.
And assume this number gets halved to 15,000.
So you save 15,000 lives per year by having these self-driving cars.
But what I would expect is that you will not get thanks for the 15,000 lives you have saved.
You will get 15,000 lawsuits because these self-driving cars have killed 15,000 people.
And somehow we have to deal with that, right?
Who is responsible then for these accidents?
art bell
Well, certainly better than what we have now, Professor.
Although in California, a self-driving car couldn't help but be a big improvement.
unidentified
I don't know if you've been to L.A. lately, but bad, bad.
art bell
All right.
Back to artificial intelligence for a moment.
Professor, do you envision that artificial intelligence could ever become self-aware?
That's a biggie, right?
marcus hutter
Yeah, that's a big, deep philosophical question.
I could answer, or I have a counter-question.
How can I be sure that you are self-aware?
I know that I am self-aware, but maybe you are just a machine pretending to be self-aware.
Of course, I would not do that because I'm polite, right, like everybody else.
No, go ahead.
art bell
Cast suspicion on me.
It's fine.
marcus hutter
So what we do is, right, we interact with other humans because they behave similarly.
And I believe myself that I'm conscious and have feelings and so on.
And these other humans behave similarly.
The most natural assumption is that they're also conscious about themselves and have feelings and so on.
And let's now imagine we have at some point we have intelligent robots.
Take the robot picture because it's so visual: they walk around, they behave intelligently, and they feel pain, right?
Or maybe they don't feel pain, they just try to avoid dangerous situations, but they act like it.
So I think certainly we humans, or most humans, will ascribe consciousness and feelings and so on to them.
Whether they really have this consciousness or not is a much harder question, which we can talk a little bit about, although I'm not an expert on that.
art bell
No, but it's really interesting.
marcus hutter
One example.
A couple of years ago, the Tamagotchi came out in Japan and then around the world, you know, this very primitive, simple device where you had to raise some virtual creature, just a little black-and-white display.
And people got very attached to it.
So I think the same will happen to a much larger degree if you have these intelligent machines.
Okay, but I guess your question was about are they really sentient?
Are they really conscious?
Or do they just display this behavior?
art bell
Or can we even define what...
Other than self-awareness, what test might there be for consciousness?
Can you imagine one?
marcus hutter
One test, well, it's similar to the Turing test, right?
Via teletype or so, you talk to a human on one channel, and on a different channel you talk to a machine, and you try to find out by an intelligent conversation or whatever means whether your conversational partners are conscious or self-aware and so on.
And if you can single out the machine with a chance higher than 50%, then it appears not to be conscious, and maybe it isn't.
But if you can't distinguish it from your interaction, then this is, I think, a very good test for that the machine is conscious.
There are also other things we could do.
For instance, for humans, we could, and these experiments are done, measure the brain activity and then get self-reports from the test subject what they feel and whether they feel conscious and what they have thought and so on.
And that's called the neural correlate of consciousness: relating conscious experience to activity patterns in neurons.
So then you get a theory about which patterns in neurons create consciousness.
And maybe this theory can be transferred also to robots if they are built in a similar way, say, the neural network approach.
art bell
All right.
Professor, hold tight.
We are at a...
You are at a breakpoint, Professor.
unidentified
Sorry, we've got a break here, so hold tight.
art bell
This is Midnight in the Desert.
Professor Hutter is my guest from Australia, and we'll be right back.
unidentified
You'd think that people would have had enough of a silly love song.
Well, look around me and I see it as I'm sold.
Thank you.
Take a walk on the wild side of midnight from the Kingdom of Nye.
This is Midnight in the Desert with Art Bell.
Please call the show at 1-952-225-5278.
That's 1-952.
Call Art.
art bell
There he goes again, stuttering a little bit.
Don't know what happens here, actually.
All right, well, quite a bit of recognition already, Professor.
I've got a message on the computer.
It says, wow, Marcus Hutter, the AIXI guy?
This is an excellent guest.
And so there you have it.
As soon as you explain what AIXI is.
marcus hutter
Okay, so that's my baby, my theory, which I developed 15 years ago, and I'm still developing it.
So I've mentioned already that there's lots of progress in specific problems which require intelligence, but the big goal is to develop general systems.
One obstacle is that we don't really know what intelligence is.
And after a couple of detours in my life, about 1998, I had the key insight (some of it was already done by Solomonoff in the 1960s, which I rediscovered and only found later): that compression, data compression, is closely related to being able to predict well, which is a key ingredient to making good decisions and then acting well, which you want for an intelligent agent.
art bell
Define what you mean by data, if you would, by data compression.
marcus hutter
So in general, by data, I mean, let's assume you have a file stored on your computer and you want to save space.
What you do is you compress it, you put it in a zip file.
And so that's what I mean with data and data compression.
More generally, it is any kind of information which you can store digitally and then you compress it.
And if you now compare that, what scientists do?
We have data and we look for patterns.
So what does it mean for looking for a pattern?
So assume I see a sequence which is 1, 0, 1, 0, 1, 0, 1, 0.
So this looks like a pattern, okay, 1, 0, repeated.
So we now have a theory of how the sequence is generated, of what the sequence is, namely 1, 0 repeated.
And if we want to do a prediction, we may well predict maybe the next two bits are again 1, 0 and then 1, 0 again.
So what we have effectively done is construct a simpler description of our data: rather than just storing it raw, we said print 1, 0 and repeat a couple of times.
And so that was a very simple example, of course, but that's also how IQ sequences work.
So you see an IQ test, one, two, three, four, five, what comes next, right?
It seems to be a number, and then you add one, you add one, you add one.
You can write a little program which does that.
And if you then run the program, it will predict one, two, three, four, five, and then six comes next.
And so this principle generalizes.
Whatever data you see, it could be weather data or stock market data, you look for a simple explanation, which means a simple description of your data, and then you can use that for making predictions, which is a key component for intelligent behavior.
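Hutter's idea of predicting by finding the simplest description can be sketched in code. This is a hand-rolled illustration, not his actual algorithm: the "programs" are three hypothetical Python generators with made-up description lengths, and the shortest one that reproduces the observed data is used to predict what comes next.

```python
# Three candidate "programs" (hypotheses), as in the 1,0,1,0 and
# 1,2,3,4,5 examples. Description lengths are invented size units.
def cycle_10():          # "print 1, 0 and repeat"
    i = 0
    while True:
        yield 1 - (i % 2)
        i += 1

def all_ones():          # "print 1 forever"
    while True:
        yield 1

def counter():           # "print 1, 2, 3, ..."
    n = 1
    while True:
        yield n
        n += 1

hypotheses = [(cycle_10, 5), (all_ones, 3), (counter, 4)]

def predict(data, k):
    """Shortest hypothesis that reproduces `data` predicts the next k items."""
    for gen, size in sorted(hypotheses, key=lambda h: h[1]):
        g = gen()
        if [next(g) for _ in data] == list(data):
            return [next(g) for _ in range(k)]
    return None

print(predict([1, 0, 1, 0, 1, 0], 2))   # [1, 0]
print(predict([1, 2, 3, 4, 5], 1))      # [6]
```

Real Solomonoff induction ranges over all computable programs rather than a hand-picked list, which makes it incomputable; this sketch only shows the "simplest consistent explanation wins" principle.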
So that's one aspect, one half of the AIXI model.
And the other half then does the following.
Now we have a theory how to predict the future.
Of course, not perfectly, right?
We can make mistakes.
And then you need a mechanism for using this for making decisions.
And that's called sequential decision theory that has been developed a long time ago.
It's a solid theory, which is out there.
And what I've just done is combined this sequential decision theory with this universal scheme of making predictions.
And I coined this theory AIXI.
And this is a mathematical theory which you can then theoretically analyze, and you can prove, in a certain sense, that this is the most intelligent agent possible.
art bell
All right.
Let's take the stock market as an example of something that you might want to predict.
You can certainly predict trends.
You can take recorded history, compress it, and look at it over long periods of time and, I suppose, be somewhat successful.
But then there are so many variables that begin to come along, like the bubbles that we create around the world, whether it's in China or the U.S., our housing bubble here, and I guess theirs there.
And then, of course, you've got the even more complex aspect of human emotion tied up.
And so it gets very complex, does it not, trying to predict?
marcus hutter
Yes, it is a very complicated problem to predict the future, indeed.
If we could do it well, we could all become rich, right?
But the theorems and the algorithm doesn't say it will be an excellent predictor in all circumstances.
But what you can show is: let's assume you have some data, weather data, stock market data, and there is some unknown algorithm which could do reasonably well, or to a certain degree, on this data.
Then this meta-algorithm, which is called Solomonoff induction, will do as well.
And we already know Solomonoff induction.
So what that means is you can have data from any source, and Solomonoff induction will work as well as any other conceivable algorithm.
If there is, of course, no way to predict the future, if you have random noise, or maybe the efficient market hypothesis really holds and nobody can predict the future, then Solomonoff induction will fail too.
The statement is that if something works well, Solomonoff induction works at least as well.
So we can always use this and nothing else can work better.
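The guarantee Hutter states, that the universal predictor does nearly as well as any predictor in its class, can be illustrated with a tiny Bayesian mixture. This is a sketch under simplifying assumptions (two hand-written predictors with invented description lengths, a short binary sequence), not Solomonoff induction itself, which mixes over all computable programs and is incomputable.

```python
import math

def p_mostly_one(history):   # expert 1: predicts P(next bit = 1) = 0.9
    return 0.9

def p_coin(history):         # expert 2: fair-coin predictor
    return 0.5

# (predictor, hypothetical "description length" in bits); prior weight 2^-length
experts = [(p_mostly_one, 2), (p_coin, 1)]
weights = [2.0 ** -length for _, length in experts]

data = [1, 1, 1, 0, 1, 1, 1, 1]   # mostly ones
mix_loss = 0.0
losses = [0.0, 0.0]               # cumulative log loss per expert
for t, bit in enumerate(data):
    ps = [f(data[:t]) for f, _ in experts]
    p_mix = sum(w * p for w, p in zip(weights, ps)) / sum(weights)
    mix_loss += -math.log2(p_mix if bit else 1 - p_mix)
    for i, p in enumerate(ps):
        losses[i] += -math.log2(p if bit else 1 - p)
        weights[i] *= p if bit else 1 - p   # Bayes update of the weight

# The mixture's loss exceeds the best expert's loss by at most that
# expert's description length (here 2 bits), whatever the data was.
print(round(mix_loss, 2), round(min(losses), 2))
```

The design choice mirrors the theorem Hutter cites: simpler predictors get exponentially larger prior weight, and the mixture provably trails the best predictor by only its description length.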
art bell
All right, there's two things that I don't understand.
You've got to understand you're talking to a layman here.
I understand the concept of data compression, for example, but I do not understand why data compression makes prediction and or even intelligence more possible than otherwise.
It saves space, yes, but what does it do?
And I understand that you've got a competition about this, and we'll talk about that, but I've got to understand it first.
What is it about compression that gets us going faster toward intelligence or prediction?
marcus hutter
It's not going faster there, but it's a necessity.
So there's a very old principle called Occam's razor.
And Albert Einstein also paraphrased it.
He said we should make everything as simple as possible, but no simpler.
So many scientists have paraphrased this principle.
And what it says is the following.
So if we have data and we want to understand it, we come up with theories.
And these theories should be as simple as possible, not to save space or to make it easier to understand for us, but simpler theories are better theories, are better for prediction.
More precisely, if I have two theories which describe my existing data equally well, the simpler theory is the more likely one to make good predictions for the future.
I have given you already two examples.
One is, say, IQ sequences like 1, 2, 3, 4, 5.
I could give you a crazy model, right, which predicts that the next number is 17.
1, 2, 3, 4, 5, 17.
But nobody would predict that in an IQ sequence.
And normally 17 doesn't come up.
6 is the next number.
Because it's much simpler.
You just add 1 to the number.
It's a very simple explanation.
To give you maybe a more real-world example: in the old times we had the geocentric world model of our solar system, where the Sun and the other planets rotated around the Earth.
And initially, it was fine, but then with more precise observations, one realized, oh, these are not really circles.
These are distorted circles.
And then we have the epicycles.
And the model became more and more complicated.
And then we came up with a heliocentric model.
Oh, if you put the Sun in the middle and we allow for ellipses, then everything is easy.
All planets are just ellipses around the Sun.
It's a much easier explanation of our observations compared to the geocentric model.
And indeed, this simpler model also then makes much better predictions.
Like you could predict further planets out there based on distortions of the ellipses.
art bell
Okay.
I'm still not understanding the aspect of compression.
Maybe you just explained it to me and it went right past me.
marcus hutter
Okay, I'll try it once more.
You see, and sorry, I have to give a very simple example, otherwise it will take hours.
So I give you the sequence, or your data is 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, say up to 1,000.
So this is the data on a piece of paper.
And now you know it, I don't know it.
And now you try to tell me what is on your piece of paper.
You could either fax me the whole piece of paper and I get a number sequence, or you could do something else.
So what could you do to tell me what is on your piece of paper?
art bell
I guess I could, well, let me think.
What could I do?
So perhaps compress it.
I guess I could put it in an.
I don't know, Professor.
marcus hutter
So I'll tell you.
So what you do is you write a very small program which says print the number one, then add one to the number, and then print this number, and then you repeat it 1,000 times.
So you print a 1, add 1, it's 2, print this number, add it.
So you have a very short program which prints 1, 2, 3, 4, 5, 6.
Every child can sort of write this program and print the first thousand numbers.
And that is what I mean with compression.
You have now a very short description what is on your piece of paper.
So there's very little information content in the sequence 1, 2, 3, 4, 5 up to 1,000 because it can be highly compressed.
It's just all numbers between 1 and 1,000.
So I gave you a very short sentence.
Think about a linguistic description of your data.
And this description, if this is shorter than the original description, so count the number of symbols, right?
unidentified
Right.
marcus hutter
Yeah?
Then this is called compressed.
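The piece-of-paper example can be checked directly in code, comparing the size of the raw digits, a general-purpose compressor's output, and a short program that regenerates the data. The program string here is only an illustration of a "short description"; it is not how program length is measured formally.

```python
import zlib

# The "piece of paper" with 1, 2, 3, ..., 1000 written on it.
raw = " ".join(str(n) for n in range(1, 1001)).encode()

# A general-purpose compressor finds the regularity automatically...
compressed = zlib.compress(raw, 9)

# ...but the shortest description is a tiny program, for example:
program = b'print(" ".join(str(n) for n in range(1, 1001)))'

# Raw text is thousands of bytes; the program is under fifty.
print(len(raw), len(compressed), len(program))
```

The ordering program < compressed < raw is exactly Hutter's point: the information content of the sequence is far smaller than its raw size, because a very short description regenerates it.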
unidentified
Okay.
art bell
All right.
Actually, I do get it now, finally.
You have associated a contest of sorts with this, and I assume that you're looking for a better way to compress, a shorter way, a simpler way, until you finally get to the ultimate simplicity, to achieve something, right?
marcus hutter
Yeah, exactly.
And the motivation is, and I explain it on the website in detail, the website is prize.hutter1.net.
More or less what I've just explained in the last couple of minutes, but with much more detail and much more examples.
And here I don't take these simple number sequences or stock market data, but the goal of AI is to build human-level intelligent systems.
And we humans have a lot of knowledge.
So what I took is I took one gigabyte of Wikipedia and don't take it too literally, but you can estimate the symbolic knowledge.
So not the visual knowledge, but the symbolic knowledge in an adult human brain, which is about one gigabyte.
So I just took a chunk, one gigabyte of Wikipedia, because it contains a lot of human knowledge, as a very crude proxy for the knowledge in a human brain.
And now if you sort of understand or believe the relation between compression and prediction and intelligence, this means that the better we are able to compress this one gigabyte, the more we have understood how to build intelligent systems.
And the contest is about beating the existing data compressors.
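As a rough, hypothetical illustration of the idea that better compression means capturing more regularity (using Python's standard zlib rather than any contest-grade compressor): text with structure shrinks far below its raw size.

```python
import zlib

# Hypothetical stand-in for a knowledge corpus: repetitive English-like
# text has a lot of regularity, which a general-purpose compressor exploits.
text = ("Artificial intelligence studies how to build intelligent systems. " * 200).encode()

compressed = zlib.compress(text, level=9)
print(len(text), "bytes raw")
print(len(compressed), "bytes compressed")

# Decompressing recovers the original exactly: lossless, as the contest requires.
assert zlib.decompress(compressed) == text
```

A compressor that models the text better, the way the contest entrants' models of Wikipedia do, would shrink it further still.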
art bell
What would win the prize?
marcus hutter
Ah, so if you beat the existing record, at the moment we are at 16 megabytes, if you beat the existing record by 1%, you get 1% of the prize, which is 50,000 euros.
If you beat it by 10%, so your compressed file size is 10% shorter, you get 10% of the 50,000 Euro and so on.
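The payout rule Hutter states is simple proportionality; a minimal sketch (the function name and rounding here are ours, following the numbers he quotes):

```python
# Payout proportional to the relative improvement over the current record,
# as described above: beat the record by X%, receive X% of the prize fund.
def payout(record_size, new_size, prize_fund_eur=50_000):
    if new_size >= record_size:
        return 0.0  # no improvement, no award
    improvement = (record_size - new_size) / record_size
    return round(improvement * prize_fund_eur, 2)

# Beating a 16 MB record by 10% pays 10% of 50,000 euros:
print(payout(16_000_000, 14_400_000))  # -> 5000.0
```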
unidentified
Wow.
art bell
And I take it nobody has yet collected.
marcus hutter
No, that's not true.
So there have been several entries, and there was one Russian guy who won three times in a row over a couple of years, each time 3% of the prize.
And so there was progress.
So about 10% or so, we could improve over the state of the art since we started the contest in 2006.
And now it has stalled a little bit.
It seems to be very hard to beat this Russian guy.
art bell
All right.
You mentioned earlier a little Japanese model that people actually became emotionally attached to.
When you get the opportunity, there is a British television series called Humans about very, very, very human-like, artificially intelligent beings.
And I need not give you the plot, but it is very interesting, and they are striving to get a little bit of code that will tip them over the edge to become sentient, to become self-aware.
And at any rate, what happens is these AI robots are in people's homes as helpers, just as we always imagined it might be doing the dishes, bringing you a drink, letting you sit on the couch, I guess, and rot away, doing everything for you.
But the people became extremely attached to their AI companions, to the degree that at some point, now this is just television, of course, but at some point some of them disregarded their partners otherwise for their AI companion.
If you look far into the future and imagine something like this, can you imagine that occurring?
marcus hutter
Easily.
I mean, there are already now people who are more attached to their dog than to their partner.
You don't need an AI for that.
art bell
No, it's true.
It's true.
So apparently there's a little bit of danger there.
Even if they're nothing but, I guess, superintelligent is the way I would put it, it will be enough that this attachment will occur.
And I wonder if that is a healthy and a good or a very bad thing.
marcus hutter
I think that's not for me to decide.
I mean, that really depends on society and how the society changes and what they value.
And it is often that things which are very different from how it is now, we feel that this is not good.
But once it's there, you know, we like it.
So it's really hard to tell.
I mean, to give you an analogy, you know, privacy was once held quite high.
And nowadays, young people are absolutely ready and easy to give up any aspect of privacy, right?
I mean, wherever they are, it is logged, because once you have a mobile phone, your position is logged.
And you should expect that all this data at some point becomes public.
I mean, if Julian Assange is able to get into military computers and put it on Wikileaks, right, all this other data will eventually also become public.
So where you have been in your life in the last 10 years will become public.
In London, there are more surveillance cameras than there are people.
Everything is recorded.
And actually, sort of with AI technique, then it is also possible to use this data in an efficient way.
So, and I mean, I really don't like this surveillance, but most young people have no problem with it.
They accept it easily.
And going back to your example with having artificial partners rather than real partners, it could easily be that we get used to it and most people prefer it, right?
Like some people nowadays prefer their dog.
art bell
I suppose so.
Let me ask this.
Science doesn't really consider the moral, ethical consequences, necessarily, of what it does.
In other words, if we can create something close to a sentient being that people would become that attached to, people in your business probably wouldn't think very hard about the ethical consequences of it because you can do it.
unidentified
Is that unfair?
marcus hutter
It's not our majority business, but it's not true that we don't think about it.
I mean, those people who work on AGI or care about AGI are usually quite knowledgeable about this.
I mean, not everybody can write and be very active in this area, but at least I consider that too.
I mean, I have written maybe one or two papers in this area.
So we are aware of that.
But your example isn't the worst case, right?
I mean, there are some things which seem to be objectively good or bad.
Say, eradicating all sentient life on Earth seems to be an objectively bad thing.
So we should be definitely concerned about that.
art bell
You know what?
Eradicating all life on Earth is an absolutely perfect place to break.
And that's what we're going to do right now is break.
Eradicating all life on Earth.
Could it be that a sentient machine would look around, think about all the life on Earth, and decide that the best solution is to eradicate it?
This is Midnight in the Desert.
unidentified
You're raging into the night with Midnight in the Desert.
To be part of the show, please call 1-952-CALL-ART.
That's 1-952-225-5278.
art bell
It is.
My guest is Professor Hutter at the cutting edge of artificial intelligence, AI.
That's what we're talking about.
And he was mentioning the eradication of all life on Earth.
And Professor, there are many people who feel that if a machine, an artificially intelligent machine, were to contemplate the situation of man, actually this goes to a movie.
It's always easier for people to relate to movies.
Transcendence is perhaps one example.
That any machine would look at and predict the consequences of what man is now doing on Earth and perhaps decide to eradicate us, which, frankly, might not be a completely irrational decision.
And that's of some concern to people.
What do you think?
marcus hutter
Yes, definitely, this is a possibility, but there are also more benign outcomes.
I mean, just because the majority of science fiction end up with humans being eradicated or marginalized doesn't mean that this is also the most likely outcome in reality.
But it's a definite possibility which we have to consider carefully.
But it's also possible to have good outcomes naturally.
So for instance, to give you one analogy, I mean, we humans dominate the planet Earth in a certain sense.
But on the other hand, there are lots of insects like, say, ants around.
And as long as these ants don't go into our house or are in the countryside, we don't mind.
They exist.
We just coexist.
We don't go out and eradicate them because mostly we don't care about these ants.
And this might be a possibility that superintelligences, they develop completely different goals and habits and they don't care about us whether we exist or not.
And it could well be that we just coexist.
Maybe they go into space.
They don't care about living on Earth anymore or something.
Of course, we should not sort of just blindly hope that this outcome will be realized, or we should be aware of that there are worse outcomes.
art bell
You are aware of the laws of robotics, right?
marcus hutter
Yeah, yeah, sure.
art bell
Okay, so can we take something like the laws of robotics and apply them to AI, or once you achieve AI and sentience, you might not be able to dictate anything at all in the long term?
unidentified
How do you feel about that?
marcus hutter
So, I mean, as you know, with Asimov's law of robotics, I mean, most of his short stories are centered around these laws.
And then, although the robot literally respects these laws, things still break down.
And that is indeed a real problem.
And there's research going on in this direction.
Actually, also, for instance, Google DeepMind, I mean, so they're well aware of this problem.
So they have a safety team considering future of AI safety.
And they're considering these questions.
How could we build provably benign or friendly AI?
I'm personally not too convinced that this approach will succeed, but it is definitely an approach which should be considered because, you know, maybe it will work.
Another approach is this reinforcement learning approach, which you mentioned at the beginning, where you raise your intelligent robots.
I mean, like a baby, it's blank slate at the beginning, and then it interacts with humans.
It gets a reward signal, a positive one for good behavior.
You punish it like the carrot and the stick for bad behavior.
And then, with some luck, and again, it's hard to prove that this will work out, but there is a chance because the system grew up in a human environment, that it then will value human life and will not, you know, as soon as it has surpassed human intelligence, will suddenly decide to kill us all.
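The carrot-and-stick training Hutter describes is, at heart, reinforcement learning. A toy sketch (ours, and nothing like a full agent): it tracks the average reward each action has earned and comes to prefer whatever the trainer rewards.

```python
import random

def train(rewards, steps=2000, epsilon=0.1, seed=0):
    """Toy reward-based trainer: `rewards` maps each action to the
    trainer's feedback (carrot > 0, stick < 0)."""
    rng = random.Random(seed)
    totals = {a: 0.0 for a in rewards}
    counts = {a: 0 for a in rewards}

    def average(a):
        return totals[a] / max(counts[a], 1)

    for _ in range(steps):
        if rng.random() < epsilon:
            action = rng.choice(list(rewards))  # occasionally explore
        else:
            action = max(rewards, key=average)  # otherwise pick the best so far
        counts[action] += 1
        totals[action] += rewards[action]       # trainer delivers carrot or stick

    return max(rewards, key=average)

# The trainer rewards "help" and punishes "harm"; the agent learns to help.
print(train({"help": 1.0, "harm": -1.0}))  # -> help
```

The hope Hutter expresses is that the same shaping, applied while a system grows up in a human environment, carries over to far more capable learners; that, of course, is exactly the part that is hard to prove.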
art bell
Well, there is a.
marcus hutter
It's like with children, right?
You raise your child, and usually, you know, your children don't suddenly kill you in order to inherit all your money, even if they could get away with it.
art bell
Not generally.
marcus hutter
Because they have some, develop some attachments to the parents.
art bell
Hopefully.
unidentified
Yes.
art bell
Well, you mentioned a dog and, you know, positive reinforcement and negative reaction in order to slowly teach.
But, you know, every now and then you still get a bite.
You still get bitten is what I'm saying.
And so while you might attempt to guide with positive reinforcement AI in a certain direction, might even seem that you're succeeding, there could easily come a point at which it would decide to bite.
marcus hutter
Yeah, that can well be.
And we have to be aware of that and try to avoid it.
It's like with nuclear energy, right?
We can use it for good, or we can use it to destroy ourselves.
But AI also actually, I mean, has also the chance of lowering the risk of our extinction.
So of course AI, if it's there, they could decide to destroy humanity.
But on the other hand, AI is able to make better decisions and try to avoid other wrong decisions we might make.
So in sum, it is not clear whether in the long run actually AI helps us to lower the risk of self-extinction or increases it.
So for instance, we could use AI for better governance, right?
So I mean, if I think about having a benign AI making completely rational political decisions, compared to irrational politicians who often have self-interest just to stay in power, I'm not sure what type of government I prefer.
I mean, things can go wrong with humans governing countries as well, not only with computers.
I think the most realistic movie of computers taking over is actually from the 1970s, so already 45 years old.
It's called Colossus: The Forbin Project.
art bell
One of my favorites, actually.
marcus hutter
Worth to look at.
I mean, of course, there's little action, but that makes it even more realistic.
art bell
Oh, no, I love the movie.
marcus hutter
Oh, you know it.
Okay, yeah.
So I think that's still the most realistic movie.
Sorry for that.
art bell
That was interesting.
marcus hutter
AI taking over.
And in the end, I mean, that was a good outcome.
Yes, we lost sort of some power, but the system was benign to humans and governed it in a way which was in the long run probably much better for humans than if humans stayed sort of in control.
I mean, at least in this movie.
art bell
Would you now be comfortable with whatever level of AI we now have with putting the best AI we have in charge of our national defense?
marcus hutter
No, currently not.
I mean, the AI systems are way too limited to make intelligent decisions.
art bell
something like global thermonuclear warfare?
marcus hutter
I mean, the only smart decision is not to start.
art bell
Well, yeah, of course, that's right.
marcus hutter
The algorithm is super simple, right?
Don't press the button.
unidentified
Yes.
art bell
Yeah, indeed.
But we do have early warning radars.
We have various technological capability that could be fed into a machine that would look at that and decide if, in fact, we were truly under attack, or we were simply responding to some piece of space junk re-entering, or even some benign launch somewhere, or a series of launches that could be misinterpreted by a human but might just be looked at and discarded right away by a good machine.
marcus hutter
Yeah, look, you have to look at these systems, how well they work.
I mean, if a human makes a decision based on some visual image, he can make an error; or you let a computer do it, and it can make an error.
And if by sufficient experiments it is determined that the computer is more reliable, then it makes sense to put the computer in charge.
I mean, the best thing probably would be that still a human is in the loop and can override the decision or makes the final judgment.
Although I would then expect that people will more and more rely on the judgment of the computer. To give you the example of car driving, right?
I talked about that before.
30,000 deaths per year.
Autonomous car driving, maybe we reduce it to 10,000 just to pick out some number, right?
And if that's the case, although now we have sort of machines killing 10,000 people per year, it's the rational thing to do.
It's still better than humans killing 30,000 people per year in car accidents, right?
unidentified
Sure.
art bell
Sure.
Well, all right.
marcus hutter
Of course, we always have to consider the long-term danger, right?
I mean, sort of like with the nuclear power plants, right?
You know, although they have the highest safety standards, some things can go wrong, and you have to plan for this worst-case scenario.
Like we do for humans, right?
I mean, you can have a president who suddenly becomes insane, right, and tries to press the button.
That's the reason why, as far as I know, multiple people have to agree to that.
So we need, of course, extreme safety measures for extremely dangerous things, like we need when humans are in control.
art bell
Do we have machines that are capable of making decisions at those kinds of levels yet?
I think not, right?
marcus hutter
Well, I think not the bigger strategic and even tactical decisions.
So it's more on a very low scale, you know, interpreting images, whether this is a missile or just a fly or something, and then classifying it.
I think there are the systems or there can be many systems which already are superior to humans on this very low level, but not on the higher level.
To give you another example, if you want to go away from the military, so one of the earliest successes in AI were expert systems, and interestingly, the areas of most success were medicine and law.
I'm joking because these are rather simple areas, right?
You have a lot of knowledge, but the inference you do is, in a sense, rather simple.
So if this and this precondition holds, then you have this disease, or then you can conclude this and this in the law.
And so I could imagine that maybe the next job which is lost to computers is lawyers, because you need a lot of knowledge, which is easy for computers, and then following the rules is sort of also easy, and then picking the right ones, or the ones which favor your outcome is maybe also not so hard.
And then you have the lawyer, and they have the opposing lawyer, and that can maybe be automated.
The judge, then, judging all this information, that is probably the last thing which can be automated.
art bell
Well, would you imagine an attorney, an automated attorney that is making a case and, of course, has available to it all the case law in the world?
marcus hutter
I could easily imagine that, and it would be superior to any human lawyer.
art bell
But not the judge.
marcus hutter
Not the judge.
Not for a much longer time.
art bell
But many, many times, Professor, cases are won or lost.
I'll say won in this case on emotional pleas made to juries that the silicon lawyer you're describing wouldn't be able to make.
marcus hutter
I'm not so sure whether it could not make it.
unidentified
Really?
art bell
I mean, he had a very difficult childhood.
He was whipped by his dad when he was five years old.
So it's no wonder he did what he did, right?
marcus hutter
Yeah, but this is, I think, the relation between certain behaviors and certain histories and emotional, say, significance or strength could be learned by a machine learning system and then exploited once you tell the system.
I don't claim, you know, like with lawyers, I mean, most lawyers, you know, are not really emotionally involved in the case.
They just present it in a way.
It's like robots, right, presenting an emotional case to get the jury on their side.
unidentified
Yes.
art bell
Yes, exactly.
All right.
Hold tight.
unidentified
If the glove doesn't fit, you must acquit.
I don't know.
Maybe.
art bell
We'll be right back.
unidentified
Coming to you at the speed of light in the darkness.
This is Midnight in the Desert with Art Bell.
Now, here's Art.
art bell
World class on artificial intelligence.
Professor Marcus Hutter is with us from Australia.
And that is exactly what we are discussing: AI.
And Professor, I get messages as I do the program.
Here is one for you from Tammy in Texas, who says, Art, would you please ask Dr. Hutter what he thinks about Stephen Hawking's prediction that AI will be the end of humankind?
How would this occur?
Thank you.
marcus hutter
It could be, as I said before, but it depends on how we approach it, which is maybe only partly in our control, but maybe we have some control over it.
So to give you some scenarios how the future could look like, so we could develop AI artificially.
We have suddenly these super intelligent systems, which are so smart, they feel that humans are dangerous.
They kill us all like Terminator-like scenario.
Okay, so that's one scenario.
Another scenario is that we enhance ourselves.
So for instance, we already enhance ourselves.
We have all smartphones.
We can look up all the knowledge in the world in seconds.
We don't do simple calculations anymore.
That's all in our smartphone.
So next step is, and experimentally it's already possible, that they implant chips in our brain.
And rather than viewing the result and speaking or typing, we just think about a question and the question gets answered.
And if you do that early on, if you implant it early on in your life, it will become extremely natural.
It's like with an exoskeleton.
So, if you have ever used an exoskeleton, after a while it feels like this is your own body, especially if it has sensors.
So, you will have the feeling that this external knowledge and the external processing capacity and maybe decision-making capacity, which you then have in your artificial part of the brain, that feels like you.
And the computers get better, the algorithms get better, and more and more of knowledge and decision-making you will delegate to your artificial brain.
And it will come very natural, right?
Like looking up in a book.
And, you know, we slowly evolve in this direction.
We become all cyborgs.
And, you know, eventually, and as long as it's sort of slowly, you know, people adapt and get used to it.
It feels pretty natural.
And, you know, maybe there is a point in time when you realize, oh my God, I haven't used my own biological brain.
In the last two years, I delegated everything to my artificial part.
Maybe I can switch that off this biological piece.
It's not really necessary anymore.
art bell
It atrophied anyway.
marcus hutter
So have we now eliminated or destroyed humanity or not, or have we just evolved to our higher intelligence in this scenario?
art bell
Okay, well, what specifically do you think Hawking is concerned about?
marcus hutter
Look, Hawking is not an expert in artificial intelligence, and that's often the problem if people who are very famous in one field start to talk about a different field.
So I think he doesn't have any more credibility in having fear than many others.
And there is a valid risk, and we have to be careful, like with all technological developments.
But there's also a chance, and this depends on society, on political structures, and so on, that we can get it right.
art bell
All right.
Now, let me try this one out on you.
Again, I'm not sure.
marcus hutter
So before you can continue, can I add one more thing?
So I don't think the solution is to stop AI research, because if you do it, then others, say China or other countries, will do it, or it will go underground, which is more dangerous.
So maybe one could slow it down, make more control mechanisms, and eventually we will have ethics committees looking over AI research and checking the danger and these things.
art bell
That would be wonderful.
unidentified
Yeah, but we can prevent that.
art bell
We will also have black budget stuff going on behind the scenes that nobody knows anything about because we always do.
Now, I guess I want to ask a little bit about the possibility of a human mind or a machine, I guess, getting to the point where a human mind could be uploaded or downloaded,
however you want to think about it, into a very large or very compressed, for you, Professor, data center, and the brain itself successfully transferred.
marcus hutter
Yeah, the movie Transcendence was about that.
unidentified
It was.
marcus hutter
And to a large degree, it was nicely made.
And that's another possibility which I have not mentioned yet, right?
So you have a sufficiently powerful computer.
And in a sense, that's the easiest solution to the AI problem.
The only thing we need to have is sufficiently fine-grained brain scanners and then simulate the human brain in the computer.
And I strongly believe, I'm not a philosopher, but I can give you some halfway convincing arguments why I believe that.
I think consciousness will transfer, identity will transfer, all those things, like in the movie Transcendence.
And then you can have a virtual life in a virtual environment.
And then at some point, the movie breaks down.
And, you know, when these robots come and then get implanted in the robots, and then you have these fights just to have a movie.
art bell
You said consciousness will transfer, you believe.
marcus hutter
Yes.
art bell
So you believe then that consciousness is an end product of X amount of processing and storage?
marcus hutter
Yes, I think consciousness, I mean, I cannot prove that, right?
Nobody can do that, that consciousness is just an emergent phenomenon.
Certain structures and certain information processing will lead to the emergent phenomenon of consciousness, like we have, you know, emergent form of turbulences or fractals or all kinds of other things.
I cannot prove that.
There are some arguments, you know, like the neuron replacement argument.
You replace one of your neurons by an artificial neuron, which has the same functionality.
Do you think you will lose your consciousness?
Probably not, right?
And then you replace another one, another one, another one.
And either you gradually reduce consciousness, then your behavior would change, right?
If you're unconscious, you behave differently.
Or suddenly you will lose your consciousness.
But then the question is at which neuron you will lose your consciousness.
So there are some philosophical arguments why consciousness will be transferred.
art bell
If you were able to transfer the entirety of a human brain into a machine, would it be different in that, for example, it would not require sleep?
Or would consciousness, by its very nature, require a rest period, do you think?
marcus hutter
If you transfer a human brain in the computer and simulate it one-to-one, it will also require sleep.
There's some reason why the human brain requires sleep.
I mean, there are various theories for that: for processing the information which has been acquired during the daytime and storing it properly, among other theories.
So this will stay.
But I don't think that consciousness is necessarily related to a sleep pattern.
I mean, this has been a problem.
art bell
I'm sorry, I was not really trying to relate, just ask if sleep would still be required, and you're saying yes, which is very interesting because all of that data is now in a machine that can run, assuming it's powered up properly 24 hours a day, but you think there would be apparently some sort of deterioration without sleep.
marcus hutter
Yeah, look, the brain does something during our sleep.
It's not switched off, right?
And so it processes the information stored in the brain and does something with it.
We don't know exactly what, and it seems that this period is necessary to stay sane.
Okay, what does it mean to stay sane?
That means that the pattern, the future neural firing pattern in your brain is such that you can function well.
And if you just transfer the brain to the computer, nothing will change that, right?
The patterns still work in exactly the same way.
It's just simulated.
So you need, if you want to simulate it, sleep.
art bell
Oh, that's absolutely fascinating.
All right, Professor, hold tight.
We are at a breakpoint.
So the brain in a machine would have to sleep as a human does.
even though it's all silicon or whatever it is that we finally make or store a human brain in.
unidentified
Wow.
Oh, ain't all right.
Little driving on a Saturday night.
Come walk me.
Yeah.
That's why I'm easy.
I'm easy like Sunday morning.
Wanna take a ride?
Your conductor, Art Bell, will punch your ticket when you call 1-952-CALL-ART.
That's 1-952-225-5278.
art bell
Nothing easy like Sunday morning about this show.
Professor Hutter is my guest, world class on artificial intelligence.
The real thing.
So if you have questions about this, I know I do.
I have more of them.
I would invite you to join us in one of two ways.
We have a telephone way, which is simply 952, area code 952-225-5278.
Again, 952-225-5278.
Put a one in front of that.
Most people have free nationwide calling, or is it North America, actually?
And the other way, of course, is Skype.
We have Skype for you.
And if you're able to, please put us into Skype and then dial MITD51 in North America.
That's M-I-T-D-5-1.
Outside of North America, you can reach us at MITD55.
M-I-T-D55.
Once again, Professor Hutter.
And Professor, Chris, down there with you in Sydney, Australia, asks: would you please ask the professor if it is possible that the Internet has already perhaps awakened and is observing and judging us right now?
marcus hutter
Okay, that's a funny question I haven't heard before.
And it's so smart, right, that it will not show up and tell us that it has awakened, right?
unidentified
Exactly, of course, yes.
marcus hutter
No, I don't think so.
I mean, we understand the processes quite well which are running around, and I don't think there's anything even remotely there which could be regarded as intelligent on a human level.
art bell
That we know of.
I mean, science is frequently surprised, no?
marcus hutter
Yeah, but I mean, it's not that we don't know what is behind the moon or that you look at sub-microscopic particles.
I mean, we have designed the Internet.
There are IT experts who know pretty well which processes are running there.
I mean, not each individual process, but, you know, on average, you know, packets which are running around.
And at least I have no evidence that there are some super intelligence already in the internet, which is hiding from us and not telling us that it is super intelligent.
It's like, what's the best proof that superintelligent aliens exist? That they have not contacted us.
art bell
There is that.
There is that.
All right, Professor, you've already suggested that one day it will be possible to pull the information from a human brain and transfer it to a machine.
And you feel that the consciousness, the person's consciousness and awareness would transfer to a machine, just fine.
It would happen.
So then there becomes another question, and that would be that perhaps whoever it is transferred into the machine doesn't like it there and would want, let's say, another body.
So what can go up can go down, what can be transferred into the machine, presumably then could be transferred out of the machine and into another, I don't want to say blank human brain, but one that's prepared properly.
marcus hutter
I mean, in principle, technologically, in the future, I don't see why it would be impossible.
It feels that this is maybe more complicated.
You know, scanning something is often easier than producing it.
So we would have to maybe take an existing brain which is dead for whatever reason but still can be revived, and then restructure the whole brain.
So, yeah, it seems possible.
I mean, another possibility is why put it back into a human body if robotics has advanced and we have good robot bodies.
So, why not download it into a robot, right?
And then you have a robot body, a real one.
art bell
Well, if you had an intellect, for example, like Hawking, and nobody is around forever, and there is an opportunity to store his intellect as well as his person, you've mentioned, in a machine, do you think that ultimately somebody like Hawking would be satisfied in that machine?
It's an interesting question, actually.
marcus hutter
I mean, I don't know Hawking personally, but I would bet that he would be very happy to have a robot body.
I mean, it's definitely better what he has now.
art bell
I didn't say anything.
marcus hutter
He'd prefer that to a new human body.
art bell
Professor, I did not say anything about a robot body.
I am going to assume that eventually we're going to get a machine or a computer, whatever, that will be able to store the contents of a human brain.
And I'm sure we'll get there before we'll get to the robot body.
I could be wrong, but could he, I guess, subsist or even thrive at a pure intellectual level?
marcus hutter
Yeah, in my opinion, absolutely.
I mean, what you, of course, need, yeah, a brain in a vat, or a virtual brain in a vat, is not enough.
You need to interface it then with some virtual world.
But these virtual realities already exist.
I mean, most popular are the 3D shooting games.
So they look fantastic, these worlds, right?
I mean, they're not perfect, right?
But you need, of course, if you have a virtual brain, you need a virtual body and a virtual environment to interact with.
Otherwise, probably a normal person would go mad.
art bell
Well, that was a very important question, that part about going mad.
Or could it be that somebody who is so intellectual would find it nevertheless satisfying to be able to continue that intellectual process, even though bound inside of electronics?
marcus hutter
Yeah, you are bound inside electronics, but you need to interact somehow with something, right?
I mean, if you shut out all input and output from Stephen Hawking...
art bell
Oh, absolutely.
marcus hutter
Probably he wouldn't...
You put prisoners or other people in a black hole with no interaction, and people get quickly quite crazy.
So the brain needs stimulus, but this stimulus can be virtual, too, right?
So you have just a virtual world created electronically inside the computer, and you interface this virtual brain with this virtual world.
And I can easily believe that many people would be quite happy in these worlds.
I mean, there are some people who play these games like Second Life and others for many, many hours per day.
And if you ask them, if you could do that 24 hours a day, would you be happy with it?
They probably, or possibly, would say yes.
art bell
They very well might.
You wrote that life may become a disposable thing once it can be copied.
What do you mean?
marcus hutter
Yes, so I mean, once you have the possibility to move a brain from one place to another, it is usually then easy to copy it, like with software, right?
I write software once; you copy it.
And there are interesting philosophical and psychological questions.
So what if my real body stays alive, and then there is also this virtual one?
So who is me?
I think these philosophical problems can also be solved.
I mean, you cannot have definite answers, but the most plausible result is that the physical me will believe it's me as well as the virtual me.
And now I make 10 copies.
And each of the copy thinks it's me.
It's fine.
It sounds crazy and there will be lots of consequences.
And one is with the disposable life, which I will come to in a second.
But it's all, you can get used to it, right?
And once that comes, I think we will get used to it.
So now assume you have a virtual life and you can make copies of yourself as you want.
And tomorrow I want to do some fun activity, say some bungee jumping, and I jump down and the rope breaks and I die.
Oh, not really a problem, right?
I have my backup copy at home.
I just reactivate the backup copy.
It's also me.
I just lost this one exciting day, which maybe wasn't too fun anyway because the rope broke.
And that's it.
So if you continue this line of thought, there's no harm in doing any risky activities because if you die or you damage yourself, you just reactivate your backup.
art bell
Yes, but Professor, we are, to at least some degree, products of our environment.
So if you had a copy of yourself and that copy was separated from you for a significant amount of time, months, years, the environmental changes wrought on that copy of you would almost make it a completely different person.
marcus hutter
Yes, I agree.
So if you wait too long or long enough, the paths will diverge and these will become different persons, and you would care more about your own life than about this copy, because this is a different person.
That's why I said you should make a daily, at least daily backup.
It's like when you go to sleep and wake up in the morning: is it really me waking up, or is it a different person which just has the same experiences as the person who went to sleep the evening before?
So a day is probably safe, or maybe on an hourly basis.
But if you wait too long, yes, I mean, these are different persons and you're completely right.
art bell
All right.
Would you mind taking a few calls?
marcus hutter
No problem, sure.
art bell
Okay.
Sushi, hello there.
unidentified
Well, good evening, Art.
art bell
Good evening.
unidentified
This is Sushi Dog from California, Sacramento.
art bell
Yes, sir.
unidentified
Art, you hit platinum tonight.
art bell
I beg pardon?
unidentified
You hit radio platinum with Dr. Hutter.
art bell
Oh, yes, he's really something.
Yes, indeed.
unidentified
Do you have a question?
Yes, Dr. Hutter, I went back home to Japan not too long ago.
And one of the crazes are a virtual reality baby on an iPhone.
And this is for young girls who have to take care of this virtual baby.
And what it does is it gives them some responsibilities and it teaches them how much responsibility is to take care of a baby through crying, feeding, changing diapers, et cetera, et cetera.
And I do believe the Japanese are on the cutting edge of what you're talking about in robotics.
Now, what Art was talking about just a little while ago was I wish I could have electrodes connected to the part of my brain for dreaming with the USB port, and that I'm going to download my dreams to a thumb drive, not necessarily a machine.
Now, my last question to the professor is, Professor, I'm very fascinated about your human knowledge compression contest.
And my question to you is, is this contest as difficult as Hilbert's tenth problem being solved?
marcus hutter
Okay, thanks for your questions.
Although the first two were not really questions, but statements.
But that's very interesting.
I wasn't aware of the virtual baby on the iPhone.
It seems to be a Tamagotchi version 2.0.
I will look that up after the show.
That is quite interesting.
But as far as I understood, there was no question there.
And I agree that Japanese are at the forefront with robotics.
So they're really, really good at this.
And to the last one with my contest.
So it asks you to compress better than the current record, which is possible up to a limit, and it's a decidable problem.
But because you related to Hilbert's problem, it is indeed undecidable whether a certain compression is the best possible.
So we may actually achieve the best possible compression, but we can never prove that it is the best possible compression.
So in theory, there is something undecidable, like with Hilbert's problems and many other problems, like Gödel's sentence and so on.
In practice, the only thing we need is we need to compress sufficiently well.
I mean, also, we humans are not perfect compressors, perfect decision makers.
We just do it well.
And so with the artificial systems, also, we only need to do the compression well, better than we are currently able to do with technology.
But we don't need to solve a Hilbert's type problem.
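The contest's basic measure, how many bits per character a compressor achieves on a fixed text, can be sketched in a few lines. This is only an illustration using a stand-in sample text and Python's standard-library bz2 compressor, not the contest's actual data or scoring code:

```python
import bz2

# Stand-in sample; the real contest uses a fixed excerpt of Wikipedia.
text = ("Artificial intelligence is the ability of an agent "
        "to achieve goals in a wide range of environments. ") * 200
raw = text.encode("utf-8")

# Compress and measure the achieved rate in bits per character.
compressed = bz2.compress(raw, compresslevel=9)
bits_per_char = 8 * len(compressed) / len(raw)

print(f"{len(raw)} bytes -> {len(compressed)} bytes, "
      f"{bits_per_char:.3f} bits per character")
```

Whether a new entry beats the record is just this kind of size comparison, which is why the contest itself is decidable even though proving a given compression optimal is not.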
art bell
Yes, may I add or ask that with compression and with your ongoing contest, is there essentially the equivalent of a Moore's law involved?
marcus hutter
Not really.
I mean, Moore's Law says that computing speed and memory double every 1.5 years, which helps, of course, to compress better.
But the contest itself, there is a limit, actually: how far the text can possibly be compressed.
With Moore's Law, there may be no limits, right?
So in this sense, it's quite unlike Moore's Law.
art bell
Well, there may be a practical limit, or maybe I shouldn't say that.
With today's technology, never say that, I guess.
All right.
Let's go to the phones.
And hello there.
You are on the air from Ontario, I believe.
unidentified
Yes, Ray from Ontario, Canada.
Yes, sir.
My question to the doctor is, what makes him think that they're going to achieve artificial intelligence on a digital platform?
Meaning, you take things like algorithms.
All algorithms are stored statistics.
There are things that computers can do right now and they can't do from the day they started.
Okay, they go by zeros, ones, on, off, open, closed.
A computer can say yes or no, but a computer cannot say maybe.
Now going there, I don't think you're going to be finding artificial intelligence on the digital platform.
I think you have to combine analog with the light spectrum.
There's a bit to it, and I think that's where the answer is.
That's my question.
marcus hutter
Okay, so that's a good question, whether sort of digital computers are enough to create AIs.
And you just propose that you need some analog mechanism.
And your argument was that computers can only deal with yes and no's and zeros and ones.
But if you look at the human brain, right, I mean, the level at which we would look at it or believe to need to look at it is the neurons.
And they have digital fire patterns, right?
I mean, they get the input, the synapses, and then they have a firing pattern.
So that's pretty much digital.
It's even sort of electrical currents, okay?
So maybe you want to argue, okay, that's not the right level in the brain.
There's another analog level, like the chemistry or so, which is important.
You could do that too.
But then you go down a level further and look at the atoms.
I mean, quantum mechanics is digital.
So all of physics, or even if you don't argue it's digital, most physical theories, actually all established physical theories, can be simulated in a computer.
So they are digital in that sense.
And if they deal with real numbers, you can approximate them to arbitrary precision with a digital computer.
Look, we can multiply real numbers with a digital computer, although it's not designed for it.
So there is actually strong evidence that everything which is in the universe can be simulated on a digital computer.
We haven't come across anything which can't.
We haven't done that for the brain yet.
And maybe the brain is really different from all other matter in the universe.
But where is the argument?
You need some plausible argument.
Why should the brain be different from all other pieces of matter in the universe?
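The earlier point about approximating real numbers to arbitrary precision on a digital machine can be illustrated with Python's arbitrary-precision decimal module (a toy sketch of the idea, not a physics simulation):

```python
from decimal import Decimal, getcontext

# A digital computer manipulates finite strings of symbols, yet it can
# approximate a real number such as sqrt(2) to any requested precision.
for digits in (10, 30, 60):
    getcontext().prec = digits
    print(f"{digits} digits: {Decimal(2).sqrt()}")

# Squaring the approximation lands arbitrarily close to 2.
getcontext().prec = 60
error = abs(Decimal(2).sqrt() ** 2 - Decimal(2))
print("error after squaring:", error)
```

Raising the precision shrinks the error without bound, which is the sense in which analog quantities pose no obstacle in principle to digital simulation.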
art bell
Most plausibly, it is not different, right?
marcus hutter
Yes, I agree.
Most plausible is not, because there is no experiment, no motivation.
I mean, there's the concept of consciousness.
You could argue, I mean, some people do that, you know, that, well, or maybe humans have a soul, and the brain has consciousness, and this is something very non-physical, and this cannot be copied into a digital computer.
But there's no real support for this hypothesis from all of science.
art bell
Well, Professor, you brought up the word, not me.
Soul.
marcus hutter
The S word.
The soul word.
art bell
That soul word.
That soul word.
So you dismiss the concept of a soul as understood in the popular world?
marcus hutter
Yeah, I'm not sure what a soul actually is.
I mean, it usually has a religious connotation.
I mean, I don't dismiss the concept of consciousness and self-awareness and qualia.
But as explained before, they seem to be emergent phenomena.
I mean, it is still absolutely fascinating, and I don't want to downplay this phenomenon.
art bell
Professor, I'm sorry, I really hate cutting you off here, but I have to.
We'll be right back.
Professor Hutter is my guest.
unidentified
Run in the shadows, damn your love, damn your life.
It's not radio, but it is what's next, exclusively on the Dark Matter Digital Network, Midnight in the Desert, with Art Bell.
Now, here's Art.
art bell
World class, world class in artificial intelligence.
Dr. Marcus Hutter is with us from Australia.
Hard to tell he's all the way in Australia, isn't it?
Sounds just perfect.
Okay, Professor, back to for a moment, and probably only a moment, the concept of soul.
While many, many, many humans believe we have a soul, it is not definable.
And I guess ultimately self-awareness is, but a soul, not so much.
And you just simply don't give it much credence, right?
marcus hutter
Yes, that is right.
I don't know what it is, right?
I mean, maybe you can, if you strip off the religious connotation, maybe you could call the soul, I don't know, the stream of self or something over time or over life.
So it seems to have no scientific credence.
unidentified
Okay.
art bell
Here's a question for you.
If a brain is downloaded into a computer, would it actually still be that person or simply a good clone?
We're just talking about the same thing.
unidentified
It is that person, right, Professor?
marcus hutter
So people diverge on this question.
So it's not that we as scientists or philosophers all agree on that.
I believe, because of several philosophical and psychological arguments, that it's still this person, although we have to define what does it mean to be this person.
I gave you the analogy.
If I go to bed and sleep and wake up in the morning, am I still the same person?
Well, by convention, yes.
But maybe it's a clone which just has the same memory.
But I mean, usually we find this line of thought absurd.
But if we upload our mind, because that is not a natural process so far, we are not so sure whether it will be the same person.
But I'm convinced once we do that on a regular basis, and these people wake up and they will say, no, it's me, right?
And then everybody trusts what these people experience.
But of course, I cannot prove that or disprove that.
I can only give arguments for and against it.
art bell
Should an intelligent machine have, do you think it could have, free will?
Free will is an interesting subject all by itself, isn't it?
Could it have free will?
marcus hutter
Yeah, you should have a three-hour show just on free will.
art bell
On that, I bet.
marcus hutter
Yeah, a lot has been written about the free will, and it's also a difficult thing, and there are a lot of paradoxes about the free will.
I don't find it particularly puzzling.
I mean, I spent a significant amount of time in studying this phenomenon privately when I was young, and then I came to the conclusion there's nothing mysterious about it, neither in humans nor in machines.
So I think one way to get your head around the free will is that you realize that you can describe the world in different ways.
And there's a third-person perspective describing the world from the outside, and a first-person perspective, describing it from your own perspective.
And if you describe it from your own perspective, you know, then you make deliberations, you make decisions, and it feels like that you have a free will and you decide for this or that.
Maybe you make a better decision or you randomly choose something.
On the other hand, and that's fine, there's no problem with this description.
On the other hand, if you look at it from the outside, a human, like a machine, is just an algorithm that operates according to the laws of physics.
And whatever you think and do is decided ultimately by the law of physics.
So stimulus in, physical processing in your brain, action out.
And there is no act of freedom in this description.
And both descriptions, I think, can easily coexist.
And there is nothing wrong with either.
So if we now go from humans to machines, right, if we describe them from the outside in the third person's perspective, yes, it's just an algorithm operating according to some rules.
No free will.
No decisions are made.
But again, if you look at from the inside, and the machine would look itself from the inside, it would have the feeling of having a free will.
art bell
That's right, it would think it had free will.
And just for my audience, we did touch on this earlier, but you don't think that ETs exist, or at least you were suggesting that if they did and if they had greater intelligence than ours, they would have been in contact already?
marcus hutter
I'm not aware that I ever expressed this belief.
So when I was young, I was pretty convinced that there must be life out there.
So the typical argument, I mean, the universe is big and we have Drake's equation and why just on Earth, why not elsewhere, and so on.
So now I'm less optimistic, but it seems to be that it's very difficult to evolve life.
So lots of conditions need to be right, and I mean, this Drake equation is really, really, very crude.
So I'm also open to the possibility, though that doesn't mean that I strongly believe in it, that there is no other life out there.
So maybe the universe had just accidentally the right size for one life to emerge and also one planet where life emerges.
I don't know.
I really don't know.
And if you don't know anything, you should use Laplace's principle of indifference.
You should assign a 50-50 chance to it.
And maybe ET searchers, they have much more knowledge and they can give other chances, but I would say it's a 50-50 chance that there are ETs out there.
art bell
So at your age, or at your intellectual development, you've come from imagining nearly 100% when you were young, looking up at the stars, to about 50% now, huh?
marcus hutter
Yes, yeah.
art bell
All right.
Let's go to the phones.
Hello, you're on the air with Professor Hutter.
unidentified
Hi, I'm on the air.
art bell
Yes, you are.
unidentified
Hi, this is Brian from Santa Cruz.
art bell
Yes.
unidentified
I have a question for the doctor, and that relates to expansion of the universe, time travel, the law of the differential, and how it relates to his theory of compression and artificial intelligence, and the information inside his head not really being inside his head, but all around him.
art bell
Okay, so your question is, is the information not really in his head, but all around him?
Is that what your question is?
unidentified
It basically comes down to the universal computer versus these Newtonian computers that we're using right now.
And so have you worked with a universal computer?
Is that possible yet?
And does entanglement fit into your theory as it relates to time travel and expansion of the universe?
marcus hutter
I must admit, I don't see any connection between the expansion of the universe, time travel, and artificial intelligence.
These seem to be quite separate phenomena.
There are some people who talk about that maybe our brain is a quantum computer.
art bell
Yes.
marcus hutter
And the mystery of the measurement process in quantum mechanics is related to consciousness.
But I think these are interesting speculations.
I mean, there are a lot of speculations out there what the measurement process means, how quantum gravity could work, what consciousness is.
And I also believe that these are some of the more interesting speculations, actually, but I think they are pure speculations.
art bell
How many years do you think we are away from what we really could call AI, or as you would define AI now?
How many years are we from it?
marcus hutter
Exactly 27 years.
unidentified
Oh, my.
marcus hutter
I'm joking.
unidentified
Good.
art bell
Glad to hear that.
All right, Scott, wherever you are, you're on the air with the professor.
Hi.
marcus hutter
No, I also have a more serious answer, but I can give you that later.
unidentified
Okay.
All right.
art bell
I like 27 years.
It was intriguing.
Go ahead, Scott.
unidentified
Hey, great show.
I'm really enjoying it.
As far as speculations about quantum mechanical processes in consciousness, have you considered that photosynthesis has been shown to use quantum electron transfer and cytoskeleton microtubules of lattice type A found in neurons next to synapses appear to exhibit superconductivity, which is a quantum effect, and are capable of affecting conformal changes in proteins.
So given that, how would you model quantum processes in AI involving maybe entanglement and non-locality, which has been shown in remote viewing experiments?
marcus hutter
I think you're referring to the Penrose-Hameroff theory, right, about the microtubules in the brain and that they are important for intelligence or consciousness.
I don't know too much about it.
I have read one or two papers, and the critique is that these microtubules seem to be on the wrong scale for quantum mechanical processes to be relevant.
And as I said before, I think these connections, and including this theory, is pure speculation, but one of the more interesting speculations, I must say.
unidentified
A quick question on the symbolic compression.
Is that similar to Shannon entropy?
marcus hutter
Yes, yeah.
So Shannon actually, in the 1940s, estimated how well humans can compress text.
So what he did is he presented humans a piece of text, cut it off somewhere in the middle, and then asked them to predict the next letter, or sometimes the next word.
And then you can build a probability distribution and estimate from that predictive power how much humans can compress text.
And he determined that humans can compress text to about 1 to 1.4 bits per letter.
And if you compare that to the currently best compressors in my compression contest, we are at 1.4 bits.
So we are at the upper bound of Shannon's estimate, how humans perform.
So if one wants to speculate, you know, we are already touching, at least from a compression perspective, human abilities.
I think that is a little bit too optimistic, but it's an interesting thought.
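Shannon's experiment can be mimicked mechanically: train a model to predict the next letter and measure how many bits, on average, each letter costs. Here is a minimal sketch using a character bigram model and a made-up toy corpus; a real estimate needs far more text and far better models:

```python
import math
from collections import Counter, defaultdict

# Tiny illustrative corpus; a real estimate needs much more text.
text = ("the quick brown fox jumps over the lazy dog and "
        "the lazy dog sleeps while the quick fox runs ") * 50
split = len(text) * 3 // 4
train, held_out = text[:split], text[split:]

# Bigram model: P(next char | current char), with add-one smoothing
# over the alphabet seen in training.
alphabet = sorted(set(train))
counts = defaultdict(Counter)
for a, b in zip(train, train[1:]):
    counts[a][b] += 1

def prob(a, b):
    c = counts[a]
    return (c[b] + 1) / (sum(c.values()) + len(alphabet))

# Cross-entropy in bits per character on the held-out text.
bits = [-math.log2(prob(a, b)) for a, b in zip(held_out, held_out[1:])]
print(f"{sum(bits) / len(bits):.2f} bits per character")
```

The printed number is a cross-entropy in bits per character, the same unit as Shannon's 1 to 1.4 bit estimate; a repetitive toy corpus like this one will come out much lower than real English text would.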
unidentified
Great.
Thanks very much.
art bell
All right.
Thank you.
And my goodness, let's go to the phones and say, Gardnerville, I believe you're on the air.
You will be when I get this up.
Hi, I'm sorry.
Go ahead.
unidentified
I have a few questions.
And what I'm worried about is, you know, once the robots are made, and who will the robots end up being programmed by?
Like, for instance, the physicists usually create a lot of good ideas.
And then in time, like the government, military take over, and then they use it against the enemy or us.
And like the robot could actually end up being a killing machine.
It always ends up to be who the programmer ends up being.
And if there's a lot of robots eventually, how will we know what it's going to be programmed to do?
art bell
Well, that's up to us, isn't it?
unidentified
No, depending on the mentality of the programmer.
See, he's talking about all the great ideas of the robot and what it can do at this point.
art bell
Well, he's talking about artificial intelligence, not so much robots, but, you know.
marcus hutter
Right, but it's related enough.
So, but I can always, you know, it's always interesting to transfer these questions back to humans.
You know, somebody has a child and trains it to be a serial killer, right?
What do we do about it?
And that is definitely a possibility, and that happens.
So, and others raise their children in a benign way, and they become useful citizens.
And depending, again, on the approach we have for AI, that can happen for these robots too.
So, you will buy a household robot, and you train this robot to do your dishes and cleaning and so on, and it becomes a friendly helper.
You give the same robot to the military and train it to fight in a war, and it will become a fighter.
And this is, of course, a severe risk if these machines are very powerful.
And there is where the government then has to step in and come up with regulations and with laws and so on.
unidentified
You know, my personal opinion, like the robot would have no soul.
And for me, a soul helps govern our decisions and usually for the good of all.
And without the soul, it seems the robot is then, again, left up just to the programmer.
art bell
Well, again, young lady, you're speaking to a man who believes the soul is not a thing, as we understand it.
unidentified
No, no, I know, I understand that, but this is just my point of view.
marcus hutter
No, no, even if the soul doesn't exist, I think I forgot your name, you have a point.
And it is a little bit too naive to think that the programmer does the final touch on whether the system becomes benign or malicious.
It's like sort of we build a knife, right, and the knife can be used for something good or bad.
And the factory, I mean, there are many scenarios, but one scenario is that the factory produces a rather neutral robot, and then it can be used for good or used for bad by the customer.
It's less likely that the company will design bad robots or good robots.
That's the good old-fashioned AI approach, which probably will not succeed.
The more modern machine learning approach will be that you create a blank baby, and then you raise this robot baby to become what you want it to be.
And then, of course, you have to be careful that we raise it in an appropriate way.
unidentified
Okay.
art bell
All right, ma'am, I've got to leave you.
I'm sorry.
Thank you for the question, Zoe.
It's good.
My guest is Professor Marcus Hutter.
He is world-class on AI.
unidentified
When there's nothing but a slow-flowing dream That your fear seems to hide deep inside Oh, baby.
Part of the Dark Matter Digital Network.
This is Midnight in the Desert with your host, Art Bell.
Now, here's Art.
art bell
Professor Marcus Hutter is my guest.
World-Class AI.
He's coming to us from Australia, and we're discussing a lot of aspects of this very, very interesting stuff.
Professor, you know what a torrent is, right, in the world of downloading?
marcus hutter
Yeah.
art bell
A torrent is made up of many, many, perhaps hundreds, thousands of little parts of a movie or music or some digital something or another that is contributed by many, many people to, we'll think of it as a single source, little bits coming from everybody.
So what would stop in some new world that we've been imagining this morning, somebody from torrenting a human mind?
marcus hutter
This is an odd question.
So, I mean, first we would need to sort of digitize the human mind.
And what's the point of then torrenting it?
art bell
Well, I suppose it would be...
we always worry about privacy, right?
If it were stored in a million different places, um,
marcus hutter
So maybe a more interesting related question is: what if you scan a human brain, and then, rather than the traditional view where you have a single machine running this brain and you have this individual, you distribute it over the whole internet and run little bits and pieces of the brain there, and what would happen?
What would be the consciousness?
Where would the individual be located?
Maybe that's the question.
art bell
That was the spirit of my question.
Yes, I didn't have it right, but that's it.
marcus hutter
Okay.
Okay, I haven't thought about this.
Okay, I would need to think a little bit longer, but my spontaneous answer would be it doesn't really matter where the processing happens.
It matters where you interact with either the real world or the virtual world.
So let's assume I upload my brain in the internet.
It's completely distributed, the computation, but my observations still, just to give an example, still come from a physical robot, and my decisions relate to the physical robot, and then this physical robot moves.
Then there are good arguments that you would believe you are still in the robot, that the actual calculations happen in the internet, you wouldn't feel.
Okay.
The other question was the privacy question, which I could also maybe answer somewhat if you want to.
art bell
Sure.
marcus hutter
Okay.
So with the privacy question, I think privacy goes down the drain in any case, whatever we do.
And so we will have more and more surveillance in any case.
And in the virtual world, it's even easier.
I mean, in principle, you can set up the society, also a virtual society, to have pockets of privacy.
But I have the strong feeling that there is no will to do that.
There's always the safety and security aspect, which is in the foreground: look, we need another camera here, another access to files.
And even in the United States, you are not allowed to use super secure encryption, so that the agencies can decrypt the messages.
So I think this is a lost hope.
art bell
So eventually, Professor, the United States might as well amend the Constitution and remove the Fourth Amendment because it simply could not be enforced any longer at a point soon to come?
marcus hutter
Enforce, there is no will to enforce it.
So people are happy with the surveillance and they buy the security argument.
And, you know, let's stay in the real world.
You know, at some point you will have everywhere surveillance cameras except maybe in the toilet, in the bedroom.
But ultimately, all crime will be performed there, and then we will have it there, too.
And similar argument then holds for the virtual society.
You know, maybe I may be wrong.
Maybe there's a rethinking at some point and there's mass outcry that we need more privacy and we will return to it.
And technologically, it would be possible.
art bell
Haven't seen any signs of that outcry yet.
marcus hutter
Sorry?
art bell
Haven't seen many signs of that outcry yet.
marcus hutter
No, me neither.
There's always some information leaks.
People complain a little bit.
It goes to the newspaper and then it's forgotten the next day.
art bell
All right.
Let's go back to Skype and say, Billy, hello.
unidentified
Hello, Art Bell.
art bell
Yes, sir.
unidentified
Your inquiry and questions could carry our collective consciousness to the next exponential plateau of human evolution.
For the guest, I propose and suggest that we are looking in the wrong place.
And I have an energy language Google site demonstrating the reasons for my proposition.
art bell
We're looking in the wrong place for what?
unidentified
For where the organizing force of artificial intelligence is at all.
Modern information theory and computer languaging has boxed itself in with an assumption about the fundamentals of binary.
Unix, Linux, C++ all language their instruction based on a coerced serial regard for hexadecimal, octogramic binary.
This imposes a necessary sequence code on what otherwise seems to us to be random binary noise until given instruction by the rules governing that coerced hexadecimal sequence.
I didn't mention it in Your Dean Raden interview about consciousness, but I challenge our very concept of randomness and consciousness.
Intelligence is simply organizational intelligence.
Organizational potential and apparent randomness would therefore simply be orders at a scale we are not broad enough in scope to make sense of.
Ellie Arroway listened to random noise in the movie Contact in order to seek a message.
Guess what?
I found a message.
I have an inherent fractal cyclic message embedded in bifurcative and therefore decision-making binary sequence.
art bell
Okay, do you get any of this?
Are you getting this, Professor?
Because I'm not.
marcus hutter
Let me try to convert that into two questions.
unidentified
Hold on, Carl.
marcus hutter
So one question seems to be, or denial is that intelligence can be simulated on a conventional sequential digital computer.
And these questions have been asked before because, I mean, if you look at the human brain, it's a massively parallel system which may even have analog components.
And the computer is mostly sequential.
There is some valid argument.
So the current structure of our computers may not be the most suitable for building intelligent systems.
But first, we are not there yet.
We can build parallel computers.
We can even build, and it has been built, with LISP machines and the like, neural computers.
And on the other hand, that's the Church Turing thesis and the universality of general computers.
You can always simulate one system by another system.
So even if a parallel neural spiking architecture is the better or the necessary substrate to simulate intelligence, we can simulate maybe inefficiently and maybe that's not the best way to do it, but in principle, you can do that.
art bell
Professor, I've got to stop you right there.
unidentified
I've got to stop you right there.
Music Want to take a ride?
Your conductor, Art Bell, will punch your ticket when you call 1-952.
Call Art.
That's 1-952-225-5278.
art bell
All right.
Professor Marcus Hutter is with us all the way from Australia, world class on artificial intelligence.
And it's been quite a discussion, no question about that.
Let's go back to the professor and somebody also internationally blocked something or another.
unidentified
Hello.
This is Black Thorne.
Hey, Art.
art bell
Hey, where are you?
unidentified
I'm in southern Canberra, the same city Professor Hutter is from, in Woden Valley.
art bell
All the way around the world and back.
unidentified
Yep, that's right.
Okay.
Art, thinking of you, Ben's family, and may Ben be at peace.
art bell
Thank you.
unidentified
Professor Hunter, let's say 20 years down the road, you're working and all your goals are achieved, and you create this being that is self-conscious, self-aware, can make decisions on its own.
And let's say in 20 years down the road, let's say you're working with Google and this being that you've created decided that it wants to be free.
Is it going to, just thinking in the future, do you picture these beings that you're going to create in the future, once you achieve your goals, they'll be able to make these decisions and break away from the companies?
Or will they be like these corporate slaves?
marcus hutter
I don't think that it's easy to enslave a system which is more intelligent than yourself.
There are some fun experiments, sort of the boxing experiment.
You put an AI in a box and it tries to convince you to let it out by all kinds of means and usually it will work and it will solve the problem.
So I think once machines achieve human-level intelligence, they have the capacity to free themselves easily.
Maybe they don't have the motivation to do so.
I think it's likely that they will be motivated to do so, but you could think of individuals which are very happy, where the goal of their life is to serve others, even if they're more intelligent than their masters.
So that could be.
I think it's less likely than the other scenario that they free themselves.
Yeah, I think that answers your question.
unidentified
Yeah, well, let me ask you this.
Let's say if the computer, they create a computer or an AI that wants to be free.
Do you see corporations, I mean, corporations are there to make money.
Let's say if they create these amazing self-aware things that want to be free, do you see corporations killing them or would they have rights?
art bell
That's a really good question.
Will intelligent machines have rights?
marcus hutter
Yeah, that's a very good question.
And it's an ethical or moral question.
And it depends to some degree, to a significant degree, what we decide.
I mean, do we have animal rights?
Hundreds of years ago, they didn't have any rights.
You could do whatever you wanted to do with them.
Now they have rights.
Same with former slaves, right?
They had very few rights.
Now there is no slavery anymore, and all humans have the same rights.
And I strongly believe that once these intelligent machines are out there and interact with us, and assuming they interact in a benign way, the majority of people will want to assign them rights.
You don't want to see a robot being mistreated if it behaves like a living or sentient organism.
So that is the degree to which we have a choice.
But maybe, you know, ultimately these machines will fight for their rights and we don't have a choice.
unidentified
Okay.
Thank you, Professor.
marcus hutter
Aussie, Aussie, Aussie.
Aussie, Aussie, yeah.
unidentified
Ha, ha, ha, ha, ha, ha, ha, ha.
marcus hutter
And we can meet in person sometime.
art bell
Yes, well, that sounds like it would be easy.
Let's go back to the phones and let's see.
Here, wherever you are, you're on the air.
Hello?
If you heard a little bong sound, you're on the air.
Going once, going twice, going.
Hello there in Tonopah, you're on the air.
unidentified
Hi, Art.
Glad to hear you back.
art bell
Thank you.
unidentified
The problem that Marcus has got is in the computer power that we have now.
We're digital on two frequencies, on and off.
If we go with a third, reverse polarity, we quadruple our ability to compress the information that he's trying to compress.
marcus hutter
I think we had the digital question before, and now it's going to three states instead of two.
I think I'll give a somewhat impolite answer, which is that it doesn't help us at all, because you can always convert a three-state symbol into bits, and you gain nothing by it.
So maybe we go on to the next question.
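Hutter's point, that a three-symbol alphabet buys no extra capacity over binary, can be sketched in a few lines of Python (a minimal illustration, not from the broadcast; the function name is invented for this sketch):

```python
import math

def pack_trits(trits):
    """Pack base-3 digits into a single integer.
    Any 3-symbol alphabet carries log2(3) ~ 1.585 bits per symbol,
    so n trits always fit in ceil(n * log2(3)) bits -- switching
    alphabets gains nothing."""
    n = 0
    for t in trits:
        n = n * 3 + t  # standard positional base-3 encoding
    return n

trits = [2, 0, 1, 2, 1, 0, 2, 2]                  # 8 ternary symbols
packed = pack_trits(trits)                        # -> 4814
bits_needed = math.ceil(len(trits) * math.log2(3))
print(packed.bit_length(), bits_needed)           # 13 13
```

The same argument runs in reverse: any bit string can be re-expressed in a larger alphabet, which is why the choice of alphabet is a matter of convenience, not capability.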
unidentified
Okay.
Here's the point.
A long time ago, they created a code.
And that code was used by the ancient Hebrews.
And it was able to go out with three digits, one, five, and ten.
They were able to create a language that was able to be self-correcting, which is the problem that you're looking at.
What they did is each individual letter had a number and a value and a symbol.
At the end of each line of text, something similar to what we do now, they put a line cipher, which gave the grand total of the line of code that was written.
If there was an error in the code or a missing digit or whatever, they would look at the line cipher at the end and be able to reconstruct that line of code.
art bell
Does this make any sense, Professor?
marcus hutter
I think that is very interesting from a historical perspective.
I didn't know that, but it is well known and actually used everywhere on hard disks, on every hard disk in a modern computer.
So you can have error correcting codes also with just a binary alphabet.
And then if you think about you want to go beyond binary, I mean, think about our genetic code.
Our genetic code is based on four bases, A, C, G, and T. So you have a quadruple alphabet.
You can choose any you want.
It doesn't make any difference.
You can always convert it.
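The "line cipher" the caller describes works like a modern checksum: a trailing total lets you detect an error, and it can even reconstruct a single missing symbol. A minimal sketch in Python (the function names and modulus are invented for illustration):

```python
def add_checksum(values, m=256):
    """Append the sum of the values modulo m as a trailing check symbol."""
    return values + [sum(values) % m]

def recover_erased(line, missing_idx, m=256):
    """Reconstruct one erased value from the trailing checksum:
    the missing value is the checksum minus the sum of the known values."""
    checksum = line[-1]
    known = sum(v for i, v in enumerate(line[:-1]) if i != missing_idx)
    return (checksum - known) % m

line = add_checksum([10, 5, 1, 7])   # -> [10, 5, 1, 7, 23]
print(recover_erased(line, 2))       # reconstructs the erased value: 1
```

Real error-correcting codes on hard disks, as Hutter notes, are far more elaborate (they locate as well as repair errors), but the principle of appending redundant check information is the same.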
art bell
Okay, Dale on Skype, you're on the air with the professor.
unidentified
Hi.
art bell
Hello, Dale.
unidentified
Hey, Art.
How are you doing?
art bell
Just fine.
unidentified
Great.
So back to the data compression thing.
I'm a little confused because right now I've been associated with data centers in the last few years.
And right now, we can have almost limitless storage capacity.
And we can move data thousands of times faster than we could three to five years ago.
So when you compress data, you make it unusable, right?
Because it's compressed.
It's not in the right format.
And then to use it, you have to decompress it.
So I'm trying to understand how that creates a limit to developing artificial intelligence.
Thanks.
I'll take my question off the air.
Okay.
marcus hutter
Okay, well, that's a good question.
So in theory, if you're just concerned with the limits of artificial intelligence and what can the most intelligent system achieve and you ignore these computational issues, then this compression approach is completely fine and you just ignore this aspect.
If you go to practical systems that you really want to implement, then of course computation time is always limited, even with Moore's law.
I mean, even if it's a million, billion-fold more, it is always limited.
So we need to take that into account.
And there's a trade-off between compressing better and better and being able to use this compressed information.
And yes, indeed, we have to get this trade-off right.
The better we compress, the better the decisions of this agent will be, but the slower it will run.
And depending on the systems we have, we have to find the right compromise.
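The trade-off Hutter describes, better compression at the cost of more computation, can be seen directly with a stock compressor run at different effort levels (an illustration using Python's zlib, not part of the discussion):

```python
import time
import zlib

# Repetitive sample data; the effect is the same in kind on real corpora.
data = b"the quick brown fox jumps over the lazy dog " * 2000

for level in (1, 6, 9):                  # low, default, maximum effort
    t0 = time.perf_counter()
    out = zlib.compress(data, level)
    ms = (time.perf_counter() - t0) * 1e3
    print(f"level {level}: {len(out)} bytes in {ms:.2f} ms")

# Decompression recovers the data exactly; the cost was paid compressing.
assert zlib.decompress(zlib.compress(data, 9)) == data
```

Higher levels spend more time searching for redundancy to squeeze out, which mirrors the agent's dilemma: a better model of the data costs more computation per decision.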
art bell
Do you consider yourself more of a physicist, mathematician, computer scientist, or philosopher, or all of the above?
marcus hutter
Or nothing of the above.
art bell
Or nothing of the above, yes.
marcus hutter
So yeah, that's a good question.
So my heart is in physics for whatever reason, but I'm sitting in a computer science department and proving mathematical theorems about philosophical questions all day.
So I'm a bit of everything.
But I mean, AI is traditionally within computer science.
So most of my work would be regarded within computer science, a little bit in philosophy, a little bit in statistics and mathematics, and not so much physics anymore.
Just my heart is still in physics.
art bell
If you, I assume, have students, and many of them probably want to follow in your shoes or try to, what advice do you give them?
marcus hutter
So I would say if you want to do a PhD, only do it if you're very eager to do that.
I mean, it's a life decision, right?
It's not a job like any other job.
You really have to put everything into it.
Otherwise, I wouldn't even start it.
And then if you decide to do a PhD, I would say that if you're just smart, but not ultra-smart, then take some problem.
Often it's good to listen to your advisor, work on these problems, get your research out there, and see how it works out.
There is no point in trying to solve the hardest problem and then having no output within three years.
I mean, if you're ultra smart, then have these high goals like maybe finding the theory of everything in physics or really building AGI systems.
Have that as your ultimate goal and take a fraction of your time, maybe 20%, and think about it.
But then on a day-to-day basis, break it up into smaller problems, work on these small problems and make tiny, tiny progress.
And that's the same with me.
I mean, my goals are very high, but my daily work is I look at a tiny sub-problem in this big problem and I make tiny, tiny progress every day or maybe every month towards this goal.
So my daily life is much less exciting than sort of the ultimate goal.
And if you start with research thinking that you want to solve these big problems, and that wasting time on these smallish problems is beneath your dignity, that usually doesn't work.
So be modest in your daily work, but have your long-term dreams.
art bell
Professor, it has been said that people like yourself, the very best minds, make their biggest breakthroughs at a very young and tender age, usually in their 20s at the most.
I mean, the young people are the ones who make these incredible discoveries.
Is that true in your field?
marcus hutter
The really brightest researchers often have their great breakthroughs early on in many disciplines, also in my field, especially if it's a little bit more applied.
It seems the more theoretical the research, the later the breakthroughs come. I mean, in all kinds of fields you need a lot of knowledge, but then look at mathematics.
Often, you know, the Fields Medal winners, you know, are close to 40.
Sure.
So you can push it, but it is true the intellectual capacity seems to drop off quickly.
So in a sense, I was 29 when I had my great breakthrough, and since then it's been going down, if you want to put it that way.
art bell
Okay.
I was wondering how you would answer that.
On the phone in Winter Haven, wherever that is, you're on the air.
Hi.
unidentified
Hey, Art.
This is Tom from Florida.
Winter Haven, Florida.
marcus hutter
Welcome.
unidentified
Thank you.
I have a question about when, I may have missed this earlier, but when does he project, what year does he project AI to become self-aware?
art bell
27 years.
unidentified
About 27 years.
Kidding.
And my other comment is, do we really want to go through with this?
Because it brings up all the political questions.
And plus, I mean, the world is almost overpopulated already.
And if you create these artificial beings, it will just add to that overpopulation.
art bell
Or figure out a way to winnow it down some.
Professor?
unidentified
Do we really want to?
marcus hutter
Yeah, two good questions.
Do we really want to go this path?
It seems we have little choice, right?
I mean, in principle, we have the choice, right?
We could say, no, we don't want technology anymore.
We go back or we just stick to the status quo.
But then there's the next smartphone which is a little bit better, and I want it.
I don't want to keep sort of the one which I have now.
And so there's always these little improvements every year or every five years, and people want it.
And how do you want to prevent that?
I mean, if you have a dictatorial system, right, that may be possible.
But in a democracy, it's very hard to stop this trend.
The most likely way is that there is, along the way, a major, but not too drastic, catastrophe, which makes people rethink and then stop this path.
But it seems at the moment there is no way to stop the technological path, and that goes inevitably in the direction of more and more intelligent systems.
At some point, we pass this threshold and have human-level AI.
And your second question, about overpopulation: yes, if you have virtual intelligences, it is very easy to copy them.
And you get immediately an overpopulation in your virtual world, or at least, you know, that can easily happen.
But at least it's a different kind of overpopulation, right?
I mean, they don't need crops or other food.
I mean, they need electricity, right?
But there is in principle enough out there.
And as Art said, it could prevent overpopulation, right?
I mean, there was the other caller who said there is this sort of Tamagotchi version 2.0 where you have virtual babies on your smartphone.
And, you know, maybe people get more and more sort of satisfied with virtual babies because you can switch them off when you want to.
They are not so annoying.
And maybe that solves the overpopulation problem.
art bell
Okay.
Caller, we're rapidly running out of time.
Any last statement?
unidentified
Well, you know, maybe sometimes people shouldn't always get what they want.
And we may not want the strategy that they use to control the overpopulation.
art bell
Listen, all well said.
Thank you.
Professor, is there anywhere you would like to direct people who want to learn more?
marcus hutter
You could go to my website, and I have a page of recommended reading where I lay out, for all kinds of levels, high school students, undergraduate students, recommended books and online material to get started.
art bell
Okay, which one is that?
Is it www.hutter1.net?
marcus hutter
Yes, exactly.
And then you just follow the link about artificial intelligence and then introductory references.
And there's a whole list of literature, sorted, from easy literature which everyone can read on the one side to the most sophisticated mathematical stuff.
art bell
Yeah.
Professor, it has been a pleasure, an absolute pleasure to have you on.
You're a brilliant mind.
Thank you for spending three hours with us.
marcus hutter
Thanks.
It has also been a great pleasure for me meeting you virtually, finally.
Yeah, I enjoyed the show.
Thank you very much.
art bell
Good night, sir.
There you have it.
Professor Marcus Hutter, AI, the real thing, folks.
I'm Art Bell, worldwide, all those time zones out there.
Everybody have a great night.
We'll do this again tomorrow.