All Episodes
Aug. 26, 2015 - Art Bell
02:18:11
Art Bell MITD - Marcus Hutter Artificial Intelligence
| Copy link to current segment

Time Text
From the high desert of the great American Southwest, I bid you all good evening, good
morning, good afternoon, wherever you may be around the world.
Every single one of those time zones around the world covered by this program, Midnight in the Desert.
I'm Art Bell.
Now, it's going to be a fascinating evening.
We have a professor here, all the way from Australia, and it's going to be all about artificial intelligence.
Rules for our show?
Not a lot of thinking there.
They're simple.
There are two of them.
No bad language and only one call per show.
Very simple.
Okay, so not a lot of news.
Well, there is a lot of news in a way, I guess.
You heard about the two former co-workers killed on live TV.
I guess it's a reality world we live in now, huh?
So this killer kills his former colleague, and then posts the video on social media.
Only in this day and age, huh?
He described himself as a human powder keg just waiting to go boom.
Well, he went boom.
Horrible day for two former co-workers at a TV station.
I mean, just horrible.
And the Dow Jones Industrial Average, finally, up 600 points on Wednesday.
So everybody went, whew, looks like China may not...
But wait, the guys who watch this say, no, not too quick.
Nice 600 points, but there may be some very, very difficult days ahead.
Not for us, though.
We've got a fascinating night tonight.
Marcus Hutter is a professor in the RSCS at the Australian National University in Canberra, Australia.
He received his PhD and BSc in physics from the LMU in Munich, and a habilitation, MSc, and BSc in informatics from the TU Munich.
Since 2000, his research at IDSIA and now ANU is centered on the information theoretic foundations of inductive reasoning and reinforcement learning, which has resulted in over 100 publications and several awards.
His book, Universal Artificial Intelligence, published in 2005, develops the first sound and complete theory of artificial intelligence.
He also runs the Human Knowledge Compression Contest, offering winners a prize of, get this, 50,000 Euros.
50,000 Euros.
And I guess we'll soon find out if that has yet been claimed.
So, you know, somehow that one just isn't... it's just not good for a professor, so we'll hold that one.
We'll give Professor Hutter something else.
Like this, for example.
Professor Hutter coming up from Australia in a moment.
It's an amazing world when you can interview somebody like they're in the studio, but they're on the other side of the
world.
[Music playing]
By the way, I've got a few people I want to thank.
I want to thank Telos, Joe Talbot for the great audio.
Keith Roland, my webmaster.
Heather Wade, producing.
Streamguys, distributing.
LV.net, getting it there.
Sales, Peter Eberhardt.
TuneIn Radio, and of course, I've been forgetting, Lee Ashcraft, Dark Matter News.
How could I forget, Lee?
And for tonight, then this one's for you.
For Ben Gardner, a very good friend of mine, for well over 30 years, who passed during the night.
Rough stuff, indeed.
All right.
All the way across the world we reach now to Professor Hutter in Australia.
Professor, good afternoon there.
Hello Art.
It's a pleasure to e-meet you and to be on your show.
Well, it sure is good to have you.
I have wanted to do a show on artificial intelligence now for a very, very long time and obviously you're qualified.
I could barely get through your bio.
I sent you an extra short one.
It is good.
You're right.
It's good and short, but some of it would require explanation, I think.
Actually not, I guess.
Reinforcement learning, inductive reasoning and reinforcement learning.
Interesting.
All right.
So I'm not sure where we begin, but at the beginning of what you've said here is where I'm going to start, and that is... You know, we were promised robots years ago, Professor.
And they would be doing our dishes and housework by now.
They're not around.
Kind of likewise with AI.
We've been talking about it for so long now.
How is it evolving?
Yeah, it's indeed true that talk about AI coming is already very old.
So I mean, even before the advent of computers, people thought about intelligent machines.
But then, when the first computers came, it became more a reality.
And some researchers promised, in the 1950s or 60s, okay, it will only take five or 10 years, and we will be there.
And now we have the 21st century, and we're still not there.
But there is progress.
And if you see all the inventions we have made in the last 50 years...
I mean, the best chess players are now computers, not humans anymore.
Then we have speech recognition, which works now pretty well.
I mean, think about Siri.
Not as well as we humans can do it, but well enough to be useful.
We have self-driving cars now.
So we have all kinds of things which came out of, you know, decades long research.
So there's progress.
But we don't have yet the generally intelligent machines on a human level.
That's right.
But just because we didn't succeed in the last 50 years doesn't mean that we could not succeed in the next 50 years, right?
Some things take time.
And given the progress, admittedly slow progress in a sense, on a cosmological scale it's fast progress.
We're on the right trajectory.
All right.
But you mentioned Siri.
How advanced is Siri?
So, that's an interesting question.
So the interesting thing is that many things we believed to be very hard to achieve in AI, things we thought, you know, require a lot of intelligence and therefore should be hard to solve, turned out to be rather easy.
I don't want to oversimplify, but one of the first big successes was Deep Blue, playing chess at grandmaster level.
And I mean, chess was the Drosophila of AI, the test case for intelligence.
And this problem was solved only 20 years ago.
On the other hand, simple things like speech recognition, right?
Every child of five years of age or so can understand speech, yet it took much longer to solve, or to get algorithms which work reasonably well.
And so your question was how complicated this algorithm is.
It's hard to put a number on it, but it turns out that the good old-fashioned approach to AI, where you design a system particularly to solve a certain problem, has limitations.
I can come to the details later.
And the more modern approach, which is called the machine learning approach, is where you actually design, in a sense, a very dumb system, like a baby, and then you train it to learn or acquire a certain skill.
So this approach is much more flexible.
Is that the right word, Professor?
Train it?
That implies the learning process.
Yes, exactly.
So we call it training data.
We have training data and then we have test data to test the system.
So that's a technical expression for it.
And so these modern speech recognition systems, I mean, there are different kinds, but a popular one is the so-called Hidden Markov Model, which is just a certain statistical structure.
And then you give the system a lot of training data, so speech and the corresponding text.
And then the system learns the correlation between the recorded speech and the associated text.
And once it's trained, you can talk to it in the future and it will translate your speech into text.
And there are of course various variants.
So there are systems which work extremely well with small vocabulary and separated words and for a particular speaker.
And if you want to overcome the speaker dependence or the separate word recognition, or you want to have a large vocabulary, then things become more complicated.
But the systems get better and better.
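To make the professor's description concrete, here is a minimal sketch of the decoding half of a Hidden Markov Model, in Python. Everything in it is a hypothetical toy: a two-word vocabulary, three coarse acoustic symbols, and hand-set probabilities standing in for the numbers a real recognizer would learn from training data.

# Toy Hidden Markov Model decoder: hidden states are the words being
# spoken; observations are coarse acoustic symbols standing in for
# features extracted from the audio.
states = ["yes", "no"]

start_p = {"yes": 0.5, "no": 0.5}           # assumed starting odds
trans_p = {"yes": {"yes": 0.8, "no": 0.2},  # odds of keeping/switching words
           "no":  {"yes": 0.2, "no": 0.8}}
emit_p  = {"yes": {"ee": 0.6, "ss": 0.3, "oh": 0.1},  # sound given word
           "no":  {"ee": 0.1, "ss": 0.1, "oh": 0.8}}

def viterbi(observations):
    """Return the most likely word sequence for the observed sounds."""
    # best[s] = (probability, path) of the best path ending in state s.
    best = {s: (start_p[s] * emit_p[s][observations[0]], [s]) for s in states}
    for obs in observations[1:]:
        best = {s: max((best[prev][0] * trans_p[prev][s] * emit_p[s][obs],
                        best[prev][1] + [s]) for prev in states)
                for s in states}
    return max(best.values())[1]

print(viterbi(["ee", "ss", "oh", "oh"]))  # -> ['yes', 'yes', 'no', 'no']

Training, the part the professor emphasizes, amounts to estimating those three probability tables from recorded speech paired with its text.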
Okay.
If zero is zero, and ten is artificial intelligence.
That's a very narrow scale.
What is Siri?
Scale of what?
Well, I could give you a number, right?
But give me a little bit more time, and I think we have the time.
So, it is important to distinguish between systems which solve a very particular narrow task.
Right.
So for instance, playing chess well, understanding speech, maybe recognizing faces.
Right.
And more general systems which are able to do a wide variety of jobs,
like say a household robot.
It can walk around, put the dishes in the dishwasher, cook or something, and maybe even drive a car, and all these kinds of things.
We wish, yes.
And so we should distinguish whether we talk about narrow AI for particular tasks
or general AI like the humans can do.
So if we only talk about narrow AI, so we have speech recognition capacity on a scale between 0 and 10, and say humans are maybe 9 or 10 or so.
Okay.
Then I would say Siri is maybe on a level of 7 already.
Oh, but not in AI.
You're just talking about speech recognition.
Exactly.
Only on this narrow task, speech recognition.
While for chess, for instance, on a scale from 0 to 10, we're already at 10.
But for face recognition, on this scale, maybe we are at the level of 3.
And for car driving, maybe we are also at 3 or 4.
And for other tasks, like cooking, we are at 1, maybe.
Right.
Got it.
Okay.
So, at the moment, what would you say is state of the art?
Can you specify the question a little bit more?
What is the cutting edge of the path toward artificial intelligence?
In other words, the best work being done.
Okay, so that requires a long answer.
So I could talk about the various application domains where we have made significant progress.
And I already mentioned a couple of them, like speech recognition, face recognition, chess, car driving, and so on.
So then I could also talk about various approaches towards AI.
And maybe I should concentrate first on that.
And there is, for instance, the neural network approach where we are heavily inspired by nature.
So the human brain is a neural network.
We try to understand it as well as possible and then simulate something closely or loosely inspired by it on a computer.
That's a huge area across many disciplines.
And a lot of progress has been made in this area.
For instance, two years ago, the European Union announced the biggest research project ever in the history of Europe.
It's 1 billion euros invested in, I think, 80 or so research institutes to simulate a whole human brain in the next 10 years.
Do you think that is possible?
I don't think they will get there within the 10 years.
They will make significant progress; maybe it takes 20 or 30 years.
Eventually I think we will be able to get there. 10 years is a bit overambitious, but I like the goal in principle.
So that's the neural network approach.
I can talk more about it, but that's actually not my area of research, so let me now tell you a little bit about the other approaches.
So would you say the most interesting AI going on right now or path toward AI is probably in Britain,
or is it there in your lab?
In Britain?
Yes, the European Union, I'm sorry.
It's hard to compare, right?
I mean, this project is a one billion euro project and my research area, you know, attracted maybe a couple of million dollars.
I just wanted to give you an opportunity to say we're way ahead here.
What is we?
I mean, well, maybe, yeah.
Or you could say I am.
So my first PhD student worked on this theory of universal AI, which I developed early this century.
And then he co-founded, with two other people, a company called DeepMind, working both on the neural network approach and the universal AI approach.
And just last year, this company was bought for half a billion dollars by Google.
Also, more of my former PhD students and other students got absorbed by Google DeepMind and are working there now.
So there's also interest in, let's call it, my approach to AI.
And there are, of course, many other approaches to AI.
All right.
Here's a question for you.
Google itself, Professor, is absolutely amazing.
Now, it's obviously not AI as in self-awareness or anything, but it's the amount of information that Google has.
Some people wonder if that will not eventually, just the massive amount of information and processing power, eventually lead to some kind of AI.
Is that something to think about?
Yeah, well, that's the Digital Gaia hypothesis.
We can separate it from Google.
So take the Internet, right?
This is billions of interconnected computers.
Oh, yes.
Lots of services and virtual agents doing all kinds of jobs in the background autonomously.
Of course, much of it is triggered by the user input.
But even if all users would stop, you know, lots of these processes would keep running.
And the Digital Gaia Hypothesis is that if you have a structure which is complicated enough, such as the Internet, you know, maybe a little bit more, maybe another tenfold increase in computing power or something, then suddenly it will awake; you know, maybe it comes up with a sentence like, I think, therefore I am, or something like that.
Can you envision that?
Is that possible?
I couldn't rule it out.
Yeah, I think that's possible.
Oh, well, it already, to the average person, seems a very smart place.
And when you talk about a tenfold increase in either storage or information, that's really, at the speed we're going, not that much, actually.
We can go tenfold in not very many years, I would think.
Yeah, in ten years or so.
Now, of course, there are the traditional people who talk about the marriage of artificial intelligence at some level and robotics.
The way it's sounding, AI is going to be ready before the robotics are.
What do you think?
Yeah, I think so too.
It seems to be very hard to produce very versatile robots with capacity similar to a human.
And for many jobs, I mean, you need hands, right?
And well, there are robotic hands out there, but then you have the power problem and so on.
But I think in any case, to a large degree, we can separate robotics from AI.
I mean, many people confuse it.
They ask me, what do you do, working in AI? Oh, you build robots. No, there's a big difference between the brain and the body.
You bet.
You can have physically dysfunctional people like Roger Penrose, sorry, Stephen Hawking, and be very smart, right? You don't need a body for that. And the same is with AI research. You can do a lot without thinking about or doing robotics.
And if you need an interface, you can have a virtual agent living in a virtual three dimensional environment and test your systems.
And coming back to your question: yes, I think so, although I'm not so sure. Maybe true AI is 20 or 30 years away.
And who knows?
I mean, these robots are getting better and better.
It's not so clear what we will have first, you know: really versatile robots, or super intelligent algorithms which we can then put into the robots.
But in any case, I think for many applications, we don't need the robots, right?
If you have an interface to your computer and you can talk to a virtual face and ask questions and this system provides services, though obviously it can't cook for you.
That's good enough.
Yes.
Let's try a definition here.
There might be robots and then superintelligence perhaps that you could install into them.
The difference between superintelligence and artificial intelligence is what?
Well, many people use these terms in different ways and artificial intelligence is just in general if you design an algorithm which does something which is traditionally regarded as an intelligent task like driving a car, speech recognition, playing chess and so on.
And super intelligence is usually used if the system is smarter or does it better than a human.
Often it is used only if the system has also the broad capacities like a human does.
I mean you could say for instance that Deep Blue is super intelligent in playing chess because it's better than any human.
Sometimes a super intelligent machine is only called super intelligent if it has a broad range of capacities, not just playing chess but driving a car and so on.
Right, but then I suppose, although intelligence could be built into the car itself, it wouldn't need a body to drive as a human does.
It could just simply be the car and we're beginning to get cars that are capable of driving themselves.
Exactly, and we have that, and it has just recently been legalized in California, self-driving cars.
So, this will have interesting consequences.
So, for instance, assume, you know, you can now, in the next couple of years, you can buy these self-driving cars.
They are much safer and, of course, more convenient.
You would think, okay, I don't know the numbers exactly, but I think there are 30,000 deaths per year in the United States for car accidents.
And assume this number gets halved to 15,000.
So you save 15,000 lives per year by having these self-driving cars.
But what I would expect is that you will not get thanked for having saved 15,000 lives.
You will have 15,000 lawsuits why these self-driving cars have killed 15,000 people.
And somehow we have to deal with it, right?
Who is responsible then for these accidents?
Well, it would certainly beat what we have now, Professor.
Although in California, a self-driving car couldn't help but be a big improvement.
I don't know if you've been to L.A.
lately.
Yeah.
Bad, bad, bad.
All right.
Back to artificial intelligence for a moment.
Professor, do you envision that artificial intelligence could ever become self-aware?
That's a big one, right?
Yeah, that's a big, deep, philosophical question.
I could answer, or I have a counter-question.
Sure.
How can I be sure that you are self-aware?
I know that I am self-aware, but maybe you are just a machine pretending to be self-aware.
Of course, I would not do that because I'm polite, right?
Like everybody else.
No, go ahead.
Cast suspicion on me.
It's fine.
So what we do is, right, we interact with other humans.
Because they behave similarly, and I believe myself that I'm conscious and have feelings and so on, and these other humans behave similarly, the most natural assumption is that they're also conscious about themselves and have feelings and so on.
And let's now imagine we have at some point we have intelligent robots, take the robot picture because it's so visual and they walk around and they behave intelligently and they feel pain, right?
Or maybe they don't feel pain, they just try to avoid dangerous situations, but they act like it.
So I think certainly we humans, or most humans, will ascribe to them consciousness and feelings and so on.
Whether they really have this consciousness or not is a much harder question, which we can talk about a little bit, although I'm not an expert on that.
No, but it's very interesting.
I'll give you one example.
A couple of years ago, in Japan and then around the world, there was the Tamagotchi, you know, this very primitive, simple device where you had to raise some virtual creature, just a display on a black and white thing, and people got very attached to it.
So, I think the same will happen to a much larger degree if you have these intelligent machines.
Okay, but I guess your question was about are they really sentient?
Are they really conscious?
Or do they just display this behavior?
Or can we even define what... I suppose we must define what consciousness is before we can talk about trying to achieve it.
Other than self-awareness, what test might there be for consciousness?
Can you imagine one?
The only test, not the only test, but one test is well similar to the Turing test, right?
You, via teletype or so, you talk to a human and you talk on a different channel, you talk to a machine and you try to find out Um, by an intelligent conversation or whatever means, um, whether, um, your conversational partners are conscious or self-aware or so on.
And if, um, you can, um, single out the machine with higher chance than 50% that it appears to be not conscious, then maybe it isn't.
But if you can't distinguish it from your interaction, Then this is, I think, a very good test for that the machine is conscious.
There are also other means we could do.
For instance, for humans, we could, and these experiments are done, measure the brain activity.
And then get self-reports from the test subject, what they feel and whether they feel conscious and what they have thought and so on.
And that's called the neural correlate of consciousness to relate conscious experience to activity patterns in neurons.
So then you get a theory about which patterns in neurons create consciousness.
And maybe this theory can be transferred also to robots if they are built in a similar way, say the neural network approach.
All right.
Professor, hold tight.
We are at a... Yes, it's not an easy problem.
You are at a break point, Professor.
We've got to break here, so hold tight.
This is Midnight in the Desert.
Professor Hutter is my guest from Australia, and we'll be right back.
You'd think that people would have had enough of silly love songs.
But look around me and I see it isn't so.
Take a walk on the wild side of midnight.
From the Kingdom of Nigh, this is Midnight in the Desert with Art Bell.
Please call the show at 1-952-225-5278.
That's 1-952-CALL-ART.
There he goes again, stuttering a little bit.
Don't know what happens there, actually.
All right, well, quite a bit of recognition already.
Professor, I've got a message on the computer.
It says, wow, Marcus Hutter, the AIXI guy?
This is an excellent guest.
And so there you have it.
As soon as you explain what AIXI is.
OK, so that's my baby, my theory, which I developed 15 years ago.
And I'm still developing it.
So I've mentioned already that there's lots of progress in specific problems which require intelligence.
But the big goal is to develop general systems.
One obstacle is that we don't really know what intelligence is.
And after a couple of detours in my life, about 1998, I had the key insight.
Some of it was already done by Solomonoff in the 1960s, which I rediscovered and only found out about later.
The insight is that compression, data compression, is closely related to being able to predict well, which is a key ingredient to making good decisions and then acting well, which is what we want from an intelligent agent.
Define what you mean by data, if you would, and by data compression.
So, in general, by data, I mean, let's assume you have a file stored on your computer and you want to save space.
What you do is you compress it.
You put it in a zip file.
Yes.
And so that's what I mean with data and data compression.
More generally, it is any kind of information which you can store digitally and then you compress it.
And if you now compare that with what scientists do: we have data and we look for patterns.
So what does it mean for looking for a pattern?
So assume I see a sequence which is 1, 0, 1, 0, 1, 0, 1, 0.
So this looks like a pattern, okay, 1, 0, repeated.
So we now have a theory of, not how the sequence is generated, but what the sequence is, namely, repeatedly 1, 0.
And if we want to do a prediction, we may well predict the next two bits are again 1, 0, and then 1, 0 again.
So what we have effectively done is construct a simpler description of our data: rather than just storing it raw, we said print 1, 0 and repeat a couple of times. And that was a very simple example of course, but that's also how IQ-test sequences work.
So you see an IQ test, 1, 2, 3, 4, 5, what comes next, right?
All right.
It seems to be, you know, a number and then you add 1, you add 1, you add 1.
You can write a little program which does that.
And if you then run the program, it will reproduce 1, 2, 3, 4, 5, and predict that 6 comes next.
And so this principle generalizes.
Whatever data you see, it could be weather data or stock market data.
You look for a simple explanation, which means a simple description of your data.
And then you can use that for making predictions, which is a key component for intelligent behavior.
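The "short description" idea can be made literal in a few lines. The Python sketch below is only a toy stand-in for what the professor describes, with the "programs" restricted to ones that repeat a fixed block: find the shortest block that reproduces the data, then predict by letting the pattern continue.

def shortest_repeating_block(data: str) -> str:
    """Find the shortest block whose repetition reproduces the data."""
    for k in range(1, len(data) + 1):
        block = data[:k]
        # Tile the block, truncate, and check it matches the data.
        if (block * (len(data) // k + 1))[:len(data)] == data:
            return block
    return data  # incompressible under this scheme

def predict(data: str, horizon: int = 4) -> str:
    """Predict the next symbols by continuing the shortest pattern."""
    block = shortest_repeating_block(data)
    tiled = block * (len(data) // len(block) + horizon)
    return tiled[len(data):len(data) + horizon]

print(shortest_repeating_block("10101010"))  # -> '10'
print(predict("10101010"))                   # -> '1010'

On 1, 0, 1, 0, ... it recovers the two-symbol block and predicts the continuation 1, 0, 1, 0, exactly as described above.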
So that is one aspect.
So that's one half of the AIXI model.
And the other half then does the following.
Now we have a theory how to predict the future.
Of course, not perfectly, right?
We can make mistakes.
And then you need a mechanism for using this for making decisions.
And that's called sequential decision theory.
It has been developed a long time ago.
It's a solid theory which is out there.
And what I've just done is combine sequential decision theory with this universal scheme of making predictions, which you can then theoretically analyze, and you can prove, in a certain sense, that this is the most intelligent agent possible.
All right, let's take the stock market as an example of something that you might want to predict.
You can certainly predict trends.
You can record compressed history and look at it over long periods of time and I suppose
be somewhat successful.
But then there are so many variables that begin to come along, like the bubbles that we create around the world, whether it's in China or the U.S., our housing bubble here, and I guess theirs there.
And then, of course, you've got the even more complex aspect of human emotion.
Yes, it is a very complicated problem to predict the future.
Indeed, if you could do it well, we could all become rich, right?
But the theorems and the algorithm don't say it will be an excellent predictor in all circumstances.
But what you can show is, let's assume you have some data.
And there is some unknown algorithm which could do reasonably well or to a certain degree on this data.
Then this meta-algorithm, which is called Solomonoff induction, will do as well.
And we already know Solomonoff induction.
So what that means is you can have data from any source, and Solomonoff induction will work as well as any other conceivable algorithm.
If there is of course no way to predict the future, if you have random noise, or maybe the efficient market hypothesis really holds and nobody can predict the future, then Solomonoff induction will fail too.
The statement is that if something works well, Solomonoff induction works at least as well.
So we can always use this and nothing else can work better.
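For listeners who want the formula: what the professor calls Solomonoff induction rests on a universal prior over sequences. A loose LaTeX sketch of the standard formulation, with the technical conditions omitted:

% Universal prior: sum over all programs p that make a universal
% monotone Turing machine U output a string beginning with x; a
% program of length \ell(p) bits gets weight 2^{-\ell(p)}, so shorter
% programs, i.e. simpler explanations, dominate: Occam's razor.
M(x) = \sum_{p \,:\, U(p) = x*} 2^{-\ell(p)}

% Prediction is then plain conditioning on the data seen so far:
M(b \mid x) = \frac{M(xb)}{M(x)}

The dominance result he is paraphrasing says, roughly, that these predictions converge to those of any computable process that could have generated the data, which is why nothing else can work systematically better.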
Alright, there's two things that I don't understand.
You've got to understand you're talking to a layman here.
I understand the concept of data compression, for example, but I do not understand why data compression makes prediction, or even intelligence, more possible than otherwise.
It saves space, yes, but what does it do?
And I understand that you've got a competition about this, and we'll talk about that, but I've got to understand it first.
What is it about compression that gets us going faster toward intelligence or prediction?
It's not about going faster; it's a necessity.
So there's a very old principle, which is called Occam's razor.
And Albert Einstein also paraphrased it and said, we should do everything as simple as possible, but no simpler.
So many scientists have paraphrased this principle.
And what it says is the following.
So if we have data and we want to understand it, we come up with theories.
And these theories should be as simple as possible, not to save space or to make it easier for us to understand, but because simpler theories are better for prediction.
More precisely, if I have two theories which describe my existing data equally well, the simpler theory is the more likely one to make good predictions for the future.
I have given you already two examples.
One is, say, IQ sequences like 1, 2, 3, 4, 5.
I could give you a crazy model, right, which predicts that the next number is 17: 1, 2, 3, 4, 5, 17.
Right.
But nobody would predict that in an IQ sequence; normally 17 doesn't come up, 6 is the next number.
Because it's much simpler, you just add 1 to the number, it's a very simple explanation.
To give you maybe a more real-world example: in the old times we had the geometric, sorry, geocentric world model, or model of our solar system. The Sun and the other planets rotated around Earth, and initially it was fine. But then, with more precise observations, one realized, oh, these are not really circles, these are distorted circles, and then we have the epicycles, and the model became more and more complicated.
Right.
And then we came up with a heliocentric model.
Oh, if you put the Sun in the middle and we allow for ellipses, Then everything is easy.
All planets are just ellipses around the Sun.
It's a much easier explanation of our observations compared to the geocentric model.
And indeed, this simpler model also then makes much better predictions.
Like you could predict, you know, the further planets out there based on distortions of the ellipses.
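One standard way to make this razor quantitative, in the same spirit as the professor's example though he does not state it on air, is the minimum description length principle, sketched in LaTeX:

% Among hypotheses H that account for the data D, prefer the one
% minimizing total description length: the bits needed to state the
% hypothesis plus the bits needed to state the data given it.
H^{*} = \arg\min_{H} \left[ \ell(H) + \ell(D \mid H) \right]

The geocentric model, with epicycle piled on epicycle, has a long \ell(H); the heliocentric model with ellipses accounts for the same observations from a much shorter hypothesis, so it wins, and historically it also predicted better.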
Okay.
I'm still not understanding the aspect of compression.
Maybe you just explained it to me and it went right past me.
Okay, I'll try once more.
Sorry, I have to give a very simple example, otherwise it will take hours.
So I give you the sequence or your data is 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, say up to 1,000.
Okay.
So this is the data on a piece of paper.
Okay.
And now, you know it, I don't know it, and you try to tell me what is on your piece of paper.
You could either fax me the whole piece of paper and I get the number sequence, or you could do something else.
So what could you do to tell me what is on your piece of paper?
Um, I guess I could, uh... Well, let me think.
What could I do?
I could, uh... I'm trying to think of how I can compress it, so... Yeah?
And perhaps impress you.
I guess I could put it in a... I don't know, Professor.
So I'll tell you.
So what you do is, you write a very small program which says, print the number 1, then add 1 to the number, and then print this number, and then you repeat it 1,000 times.
So you print a one, add one, that gives two, print this number, add one again...
So you have a very short program, which prints one, two, three, four, five, six.
Every child can sort of write this program and prints the first thousand numbers.
And that is what I mean with compression.
You have now a very short description, what is on your piece of paper.
So there's very little information content in the sequence one, two, three, four, five, up to 1000,
because it can be highly compressed.
It's just all numbers between 1 and 1,000.
So I gave you a very short sentence.
Think about a linguistic description of your data.
And if this description is shorter than the original description, counting the number of symbols, right?
Right.
Then this is called compressed.
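The professor's "very small program" is easy to make literal; a sketch in Python, with approximate character counts in the comments:

# A few lines that reproduce the entire sequence 1, 2, 3, ..., 1000.
# Written out in full, the raw sequence takes roughly 3,900 characters;
# this program takes a few dozen. That gap is the compression.
n = 1
for _ in range(1000):
    print(n)  # print the current number
    n += 1    # add 1 to the number, as described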
OK.
All right.
Actually, I do get it now, finally.
You have associated a contest of sorts with this, and I assume that you're looking for a better way to compress, a shorter way, a simpler way, until you finally get to the ultimate simplicity, to achieve something, right?
Yeah, exactly.
And the motivation is, and I explain it on the website in detail, the website is prize.hutter1.net, more or less what I've just explained in the last couple of minutes, but with much more detail and many more examples.
And here I don't take these simple number sequences or stock market data.
But the goal of AI is to build human level intelligent systems.
And we humans have a lot of knowledge.
So what I did is I took one gigabyte of Wikipedia. And don't take it too literally, but you can estimate the symbolic knowledge.
So, not the visual knowledge, but the symbolic knowledge in an adult human brain, which is about one gigabyte.
So, I just took a chunk, one gigabyte, of Wikipedia because it contains a lot of human knowledge as a very crude proxy for what is in a human brain.
And now, if you sort of understand or believe the relation between compression and prediction and intelligence, this means that the better we are able to compress this one gigabyte, the more we have understood how to build intelligent systems.
And the contest is about beating the existing data compressors.
What would win the prize?
So if you beat the existing record, at the moment we are at 16 megabytes, by 1%, you get 1% of the prize, which is 50,000 euros.
If you beat it by 10%, so your compressed file size is 10% shorter, you get 10% of the 50,000 euros, and so on.
Wow. And I take it nobody has yet collected?
No, that's not true. There have been several entries, and there was one Russian guy who won three times in a row over a couple of years, each time 3% of the prize.
And so there was progress: we could improve about 10% or so over the state of the art since we started the contest in 2006.
And now it has stalled a little bit.
It seems to be very hard to beat this Russian guy.
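The payout rule as stated on air reduces to a one-line function; a sketch in Python (the official rules on prize.hutter1.net add further conditions, and the 16 MB record is the figure quoted in this conversation):

def payout(record_bytes: int, new_bytes: int, fund: int = 50_000) -> float:
    """Euros awarded: the fraction by which you beat the record, times the fund."""
    improvement = (record_bytes - new_bytes) / record_bytes
    return max(0.0, fund * improvement)

# A 3% improvement on a 16 MB record, like each of the repeat winner's
# entries, pays 3% of the 50,000 euros:
print(payout(16_000_000, 15_520_000))  # -> 1500.0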
All right.
You mentioned earlier a little Japanese model that people actually became emotionally attached to.
When you get the opportunity, there is a British television series called Humans worth watching.
About very, very, very human-like, artificially intelligent beings.
And I need not give you the plot, but it is very interesting, and they are striving to get a little bit of code that will tip them over the edge to become sentient.
Become self-aware.
Okay.
And at any rate, what happens is these AI robots are in people's homes as helpers, just as we always imagined it might be: doing the dishes, bringing you a drink, letting you sit on the couch, I guess, and right away doing everything for you. But the people became extremely attached to their AI companions, to the degree that, now this is just television of course...
But at some point, some of them disregarded their partners otherwise for their AI companion.
If you look far into the future and imagine something like this, can you imagine that occurring?
Easily.
I mean, there are already now people who are more attached to their dog than to their partner.
Yes, yes.
You don't need AI for that.
No, it's true.
It's true.
So apparently there's a little bit of danger there, even if they're nothing but, I guess, super intelligent is the way I would put it.
It will be enough that this attachment will occur, and I wonder if that is a healthy and good, or a very bad, thing.
I think that's not for me to decide.
I mean, that really depends on society and how the society changes and what they value. And it is often that things which are very different from how it is now, we feel that this is not good, but once it's there, you know, we like it.
Yeah.
So it's really hard to tell.
I mean, to give you an analogy: you know, privacy was once held quite high.
Nowadays, young people are absolutely ready and easy to give up any aspect of privacy, right?
I mean, wherever they are, it is logged, because once you have a mobile phone, your position is logged, and you should expect that all this data at some point becomes public. I mean, if Julian Assange is able to get into military computers and put it on WikiLeaks, right, all this other data will eventually also become public.
So, where you have been in your life in the last 10 years will become public.
In London, there are more surveillance cameras than there are people, and everything is recorded.
And actually, with AI techniques, it's also possible to use this data in an efficient way.
I mean, I really don't like this surveillance, but most young people have no problem with it.
They accept it easily.
And going back to your example with having artificial partners rather than real partners, it could easily be that we get used to it and most people prefer it.
Right?
Like some people nowadays prefer their dog.
I suppose so.
Let me ask this.
Science doesn't really consider the moral, ethical consequences, necessarily, of what it does.
In other words, if we can create something close to a sentient being that people would become that attached to, people in your business probably wouldn't think very hard about the ethical consequences of it because you can do it.
Is that unfair?
It's not our main business, but it's not true that we don't think about it.
I mean, those people who work on AGI or care about AGI are usually quite knowledgeable about this.
I mean, not everybody can write and be very active in this area, but at least I consider that too.
I mean, I have written maybe one or two papers in this area.
So we are aware of that.
But your example isn't the worst case, right?
I mean, there are some things which seem to be objectively good or bad.
Say, eradicating all sentient life on earth seems to be an objectively bad thing.
So we should be definitely concerned about that.
You know what?
Eradicating all life on earth is an absolutely perfect place to break.
And that's what we're going to do right now.
Break.
Eradicating.
All life on Earth.
Could it be that a sentient machine would look around, think about all the life on Earth, and decide that the best solution is to eradicate it?
This is Midnight in the Desert.
You're raging into the night with Midnight in the Desert.
To be part of the show, please call 1952-CALL-ART.
That's 1-952-225-5278.
It is.
My guest is Professor Hutter, at the cutting edge of artificial intelligence, AI.
That's what we're talking about.
And he was mentioning the eradication of all life on Earth.
Professor, there are many people who feel that if a machine, an artificially intelligent machine, were to contemplate the situation of man, and actually this goes to a movie, it's always easier for people to relate to movies, Transcendence is perhaps one example, it would look at and predict the consequences of what man is now doing on earth and perhaps decide to eradicate us.
Which, frankly, might not be a completely irrational decision.
And that's of some concern to people.
What do you think?
Yes, definitely.
This is a possibility.
But there are also more benign outcomes.
I mean, just because the majority of science fiction end up with humans being eradicated or marginalized doesn't mean that this is also the most likely outcome in reality.
But it's a definite possibility, which we have to consider carefully.
But it's also possible to have good outcomes naturally.
So for instance, to give you one analogy, I mean, we humans dominate the planet Earth
in a certain sense.
But on the other hand, there are lots of insects like say ants around.
And as long as these ants don't go into our house or on the countryside, we don't mind.
They exist.
We just coexist.
We don't go out and eradicate them because mostly we don't care about these ants.
And this might be a possibility with super intelligences, right?
They develop completely different goals and habits and they don't care about us, whether we exist or not.
And it could well be that we just co-exist.
Maybe they go into space, they don't care about living on earth anymore or something.
Of course, we should not sort of just blindly hope that this outcome will be realized; we should be aware of the diverse outcomes.
You are aware of the laws of robotics, right?
Yeah, yeah, sure.
Okay, so can we take something like the laws of robotics and apply them to AI?
Or, once you achieve AI and sentience, you might not be able to dictate anything at all in the long term.
How do you feel about that?
So, I mean, as you know, with Asimov's Laws of Robotics, I mean, most of his short stories are centered around these laws.
And then, although the robot literally respects these laws, things still break down.
And this is indeed a real problem.
And there's research going on in this direction, actually.
Also, for instance, at Google DeepMind, I mean, they're well aware of this problem.
So they have a safety team considering future of AI safety.
And they're considering these questions.
How could we build provably benign or friendly AI?
I'm personally not too convinced that this approach will succeed.
Right.
It is definitely the approach which should be considered because, you know, maybe it will work.
Another approach is this reinforcement learning approach, which you mentioned at the beginning, where you raise your intelligent robots.
I mean, like a baby, it's a blank slate at the beginning, and then it interacts with humans.
It gets a reward signal, a positive one for good behavior.
You punish it like the carrot and the stick for bad behavior.
And then, with some luck, and again, it's hard to prove that this will work out, but there is a chance, because the system grew up in a human environment, that it will then value human life and will not, you know, as soon as it has surpassed human intelligence, suddenly decide to kill us all.
Well, there is.
It's like with children, right?
You raise your child.
And usually, you know, your children don't suddenly kill you in order to inherit all your money, even if they could get away with it.
Not generally.
Because they develop some attachment to the parents.
Hopefully.
Yes, well, you mentioned the dog, and you know, positive reinforcement and negative reaction in order to slowly teach, but you know, every now and then, you still get a bite.
You know, you still get bitten, is what I'm saying.
And so, while you might attempt to guide AI in a certain direction with positive reinforcement, and it might even seem that you're succeeding, there could easily come a point at which it would decide to bite.
Yeah, that can well be.
And we have to be aware of that.
Try to avoid it, like with nuclear energy, right?
We can use it for good, yeah, or we can use it to destroy ourselves.
But AI actually also has the chance of lowering the risk of our extinction.
So, of course, AI, if it's there, they could decide to destroy humanity.
But on the other hand, AI is able to make better decisions and try to avoid other wrong decisions we might make.
So in sum, it is not clear whether in the long run AI actually helps us lower the risk of self-extinction or increases it.
So for instance, we could use AI for better governance, right?
So I mean, if I think about a benign AI making completely rational political decisions, compared to irrational politicians who often have self-interest just to stay in power, I'm not sure what type of government I prefer.
I mean, things can go wrong with humans governing countries as well, not only with computers.
Actually, I think the most realistic movie of computers taking over is from the 1970s, so already 45 years old.
It's called Colossus: The Forbin Project.
One of my favorites, actually.
Worth a look.
I mean, of course, there's little action, but that makes it even more realistic.
Oh, no, I love the movie.
Ah, you know it.
So, I think that's still the most realistic movie...
Sorry for that. That was interesting.
...for AI taking over, and in the end, I mean, that was a good outcome.
Yes, we lost sort of some power, but the system was benign to humans and governed in a way which was in the long run probably much better for humans than if humans had stayed in control.
Yes, we lost sort of some power, but the system was benign to humans and governed it in a way which was in the long run probably much better for humans than if humans stayed sort of in control.
I mean, at least in this movie.
Would you now be comfortable, with whatever level of AI we now have, putting the best AI we have in charge of our national defense?
No, currently not.
I mean, the AI systems are way too limited to make intelligent decisions.
Well, if I think about something like global nuclear warfare, I mean, the only thing it has to do is not start it.
Of course, that's right.
The algorithm is super simple, right?
Don't press the button, yeah?
Yes, indeed.
But we do have early warning radars, we have various technological capability that could be fed into a machine that would look at that and decide if in fact we were truly under attack or we were simply responding to Some piece of space junk re-entering, or even some benign launch somewhere, or a series of launches that could be misinterpreted by a human, but might just be looked at and discarded right away by a good machine.
Yeah, look, you have to look at these systems and how well they work.
I mean, if a human makes a decision based on some visual image, he can make an error; or let a computer do it, and it can make an error; and if by, you know, sufficient testing it is determined that the computer is more reliable, right?
Then it makes sense to put the computer in charge.
I mean, the best thing probably would be that still a human is in the loop and can override the decision or makes the final judgment.
Although I would then expect that people would more and more rely on the judgment of the computer. Let me give you the example of car driving, right?
I talked about that before.
Sure.
We have 30,000 deaths per year.
Autonomous car driving, maybe we reduce it to 10,000 just to pick out some number, right?
And if that's the case, although now we have sort of machines killing 10,000 people per year, it's the rational thing to do.
It's still better than humans killing 30,000 people per year in car accidents, right?
Sure.
Sure.
Well, all right.
Of course, we always have to consider the long-term danger, right?
I mean, sort of like the nuclear power plants, right?
You know, although, you know, they have the highest safety standards, some things can go wrong and you have to plan for this worst-case scenario.
And like we do for humans, right?
I mean, you can have a president who suddenly becomes insane, right?
And tries to press the button.
That's the reason why multiple people have to, I think, as far as I know, have to agree to that.
So, we need, of course, extreme safety measures for extremely dangerous things, like we need when humans are in control.
Do we have machines that are capable of making decisions at those kinds of levels yet?
I think not, right?
I think not, not for bigger strategic and even tactical decisions.
So it's more on a very low scale, you know, interpreting images, whether this is a missile or, you know, just a fly or something, and then classifying it.
I think there can be many systems which already are superior to humans on this very low level, but not on the higher level.
To give you another example, if we want to go away from the military, so one of the early successes in AI were expert systems.
And interestingly, the areas of most success were medicine and law.
I'm not joking: these are rather simple areas, right?
You have a lot of knowledge, but the inference you do is in a sense rather simple.
So if this and this precondition holds, then you have this disease, or then you can conclude this and this in the law.
And so I could imagine, you know, that maybe the next job which is lost to computers is lawyers, because, I mean, you need a lot of knowledge, which is easy for computers, and then following the rules is sort of also easy, and then picking the right ones, or the ones which favor your outcome, is maybe also not so hard.
And then you have the lawyer, and they have their opposing lawyer, and that can maybe be automated.
The judge, weighing all this information, that is probably the last thing.
Well, would you imagine an attorney, an automated attorney that is making a case and of course has available to it all the case law in the world?
I could easily imagine that and to be superior than any human lawyer.
But not the judge?
Not the judge.
Not for a long, much longer time.
But many, many times, Professor, cases are won or lost, I'll say won in this case, on emotional pleas made to juries that the silicon lawyer you're describing wouldn't be able to make.
I'm not so sure whether it could not make it.
Really?
I mean... He had a very difficult childhood.
He was whipped by his dad when he was five years old.
So it's no wonder he did what he did, right?
Yeah, but this is... I think the relation between certain behaviors and certain histories and their emotional, say, significance or strength could be learned by a machine learning system and then exploited, once you tell the system, you know, that a jury is much more sensitive to the emotional part than to the factual part or so.
I don't claim, you know, like with lawyers, I mean, most lawyers, you know, are not really emotionally involved in the case.
They just present it in a way.
It's like robots, right?
Presenting an emotional case to get the jury on their side.
Yes.
Yes, exactly.
Alright, hold tight.
If the glove doesn't fit, you must acquit.
I don't know.
Maybe.
We'll be right back.
Coming to you at the speed of light in the darkness, this is Midnight in the Desert with Art Bell.
Now, here's Art.
A world-class authority on artificial intelligence, Professor Marcus Hutter, is with us from Australia, and that is exactly what we are discussing: AI.
And, Professor, I get messages as I do the program.
Here is one for you from Tammy in Texas, who says: Art, would you please ask Dr. Hutter what he thinks about Stephen Hawking's prediction that AI will be the end of humankind?
How would this occur?
Thank you.
It could be, as I said before, but it depends on how we approach it, which is maybe only partly in our control, but maybe we have some control over it.
So, to give you some scenarios of how the future could look.
So, we could, you know, develop AI artificially.
We have suddenly these super-intelligent systems, which are so smart, they feel that humans are dangerous.
They kill us all, a Terminator-like scenario.
Okay, so that's one scenario.
Another scenario is that we enhance ourselves.
For instance, we already enhance ourselves: we all have smartphones. We can look up all the knowledge in the world in seconds. We don't do simple calculations anymore; that's all in our smartphone.
So the next step, and experimentally it's already possible, is to implant chips in our brain, and rather than viewing the result and speaking or typing, we just think about a question and the question gets answered.
Okay, and if you implant it early in your life, it will become extremely natural.
And it's like with an exoskeleton.
So if you have ever used an exoskeleton, after a while, it feels like this is your own body.
Okay, especially for the sensors.
So you will have the feeling that this external knowledge and the external processing capacity and maybe decision making capacity, which you then have in your artificial part of the brain, that feels like you.
And the computers get better, the algorithms get better, and more and more of knowledge and decision making you will delegate to your artificial brain, and it will come very natural, right?
Like looking up in a book.
And, you know, we slowly evolve in this direction.
We can all become cyborgs.
And, you know, as long as it happens sort of slowly, people adapt and get used to it.
It feels natural.
And, you know, maybe there is a point in time when you realize, oh my God, I haven't used my own biological brain in the last two years.
I delegated everything to my artificial part.
Maybe I can switch that off, this biological piece.
It's not really necessary anymore.
It atrophied anyway.
So, have we now eliminated or destroyed humanity or not, or have we just evolved to a higher intelligence in this scenario?
Okay, well, what specifically do you think Hawking is concerned about?
Look, Hawking is not an expert in artificial intelligence, and that's often the problem when people who are very famous in one field start to talk about a different field.
So, I think he doesn't have any more credibility in his fears than many others. There is a valid risk and we have to be careful, like with all technological developments, but there's also a chance, and this depends on society, on political structures, and so on, that we can get it right.
All right.
Now, let me try this one out on you.
Again, I'm...
Sorry, before you continue, can I add one more thing?
Sure.
I don't think the solution is to stop AI research, because if you do it, then others, say Chinese or other countries... Oh, absolutely.
Or it will go underground.
Absolutely.
It's more dangerous.
One could slow it down, make more control mechanisms, and eventually we will have ethics committees looking over AI research and checking, you know, the danger in these things.
That would be wonderful.
We will also have black budget stuff going on behind the scenes that nobody knows anything about because we always do.
Now, I guess I want to ask a little bit about the possibility of a human mind, or a machine, I guess, getting to the point where a human mind could be uploaded or downloaded, however you want to think about it, into a very large or very compressed, for you Professor, data center, and the brain itself successfully transferred.
Yeah, that's the movie Transcendence.
It was about that.
It was. And to a large degree it was nicely made. And that's another possibility which I have not mentioned yet, right? If you have a sufficiently powerful computer, this is in a sense the easiest solution to the AI problem: the only thing we need to have is sufficiently fine-grained brain scanners, and then we simulate the human brain in the computer.
And I strongly believe, I'm not a philosopher, but I can give you some halfway convincing arguments why I believe it, that consciousness will transfer, identity will transfer, all those things, like in the movie Transcendence. And then you can have a virtual life in a virtual environment. And then at some point the movie breaks down, you know, when these robots come... Oh yes, yes, of course.
And then you have these fights just to have... But you said, you said consciousness will transfer, you believe.
Yes.
So you believe, then, that consciousness is an end product of X amount of processing and storage?
Yes, I think consciousness, I mean I cannot prove that, right, nobody can do that, is just an emergent phenomenon.
Certain structures and certain information processing will lead to the emergent phenomenon of consciousness, like we have, you know, emergent phenomena of turbulence or fractals or all kinds of other things.
I cannot prove that.
There are some arguments, you know, like the neuron replacement argument.
You replace one of your neurons by an artificial neuron, which has the same functionality.
Do you think you will lose your consciousness?
Probably not, right?
And then you replace another one, another one, another one, and either you gradually reduce consciousness, then your behavior would change, right?
If you're unconscious you behave differently, yeah?
Or suddenly you will lose your consciousness, but then the question is at which neuron you lose it.
So there are some philosophical arguments why consciousness will transfer.
If you were able to transfer the entirety of a human brain into a machine, would it be different in that, for example, it would not require sleep?
Or would consciousness, by its very nature, require a rest period, do you think?
If you transfer a human brain in the computer and simulate it one-to-one, it will also require sleep.
There are some reasons why the human brain requires sleep.
I mean, there are various theories for that, for, you know, processing the information which has been acquired during the daytime and stored appropriately.
There are other theories.
So, this will stay.
But I don't think that consciousness is necessarily related to a sleep pattern.
No, no, I wasn't trying.
I'm sorry, I was not really trying to relate.
Just asking if sleep would still be required, and you're saying yes, which is very interesting, because all of that data is now in a machine that can run, assuming it's powered up properly, 24 hours a day; but you think there would be, apparently, some sort of deterioration without sleep.
Look, the brain does something during our sleep.
It's not switched off, right?
Yes.
So it processes the information stored in the brain and does something with it.
We don't know exactly what and it seems that this period is necessary to stay sane.
Okay, what does it mean to stay sane?
That means that the future neural firing pattern in your brain is such that you can function well.
And if it does transfer...
Oh, that's absolutely fascinating.
All right, Professor, hold tight.
We are at a break point.
So the brain in a machine would have to sleep as a human does, even though it's all silicon or whatever it is that we finally make or store a human brain in.
Wow!
All right.
Thank you.
I have more of them.
I would invite you to join us in one of two ways.
We have a telephone way, which is simply area code 952-225-5278.
Again, 952-225-5278.
Put a 1 in front of that.
Most people have free nationwide calling.
Or is it North America, actually?
And the other way, of course, is Skype.
We have Skype for you.
And if you're able to, please put us into Skype and then dial MITD51 in North America.
That's MITD51.
Outside of North America, you can reach us at MITD55.
MITD55.
Once again, Professor Hutter.
And, Professor, Chris, down there with you in Sydney, Australia, asks: would you please ask the professor if it is possible that the internet has already perhaps awakened and is observing and judging us right now?
Okay, that's a funny question I haven't heard before.
And it's so smart, right, that it will not show up and tell us that it has awakened, right?
Of course, yes!
No, I don't think so.
I mean we understand the processes quite well which are running around and I don't think there's anything even remotely there which could be regarded as intelligent on a human level.
That we know of.
I mean science is frequently surprised, no?
Yeah, but I mean it's not that we don't know what is behind the moon, or that we're looking at some microscopic particles.
I mean, we have designed the Internet.
There are IT experts who know pretty well which processes are running there.
I mean, not each individual process, but, you know, on average, you know, packets which are running around.
And at least I have no evidence that there's some super-intelligence already in the internet, which is hiding from us and not telling us that it is super-intelligent.
It's like the joke: the best proof that super-intelligent aliens exist is that they have not contacted us.
There is that.
There is that.
All right, Professor, you've already suggested that one day it will be possible to pull the information from a human brain and transfer it to a machine.
And you feel that the person's consciousness and awareness would transfer to a machine just fine.
It would happen.
So then there becomes another question and that would be that perhaps whoever it is transferred into the machine doesn't like it there.
and would want, let's say, another body. So, what can go up can go down: what can be transferred into the machine presumably could then be transferred out of the machine and into another, I don't want to say blank human brain, but one that's prepared properly.
I mean, in principle, technologically, in the future, I don't see why it would be impossible.
It feels like this is maybe more complicated.
You know, scanning something is often easier than producing it.
So we would have to, that's true, maybe take an existing brain, which is dead for whatever reason, but still, you know, can be revived and then restructure the whole brain.
So, yeah, it seems possible.
I mean, another possibility is: why put it back into a human body if robotics has advanced and we have good robot bodies? So why not download it into a robot, right? And then you have a robot body, a real one.
Well, if you had an intellect, for example, like Hawking's, and nobody is around forever, and there is an opportunity to store his intellect as well as his person, as you mentioned, in a machine, do you think that ultimately somebody like Hawking would be satisfied in that machine?
It's an interesting question, actually.
Oh, I mean, I don't know Hawking personally, but I would bet that he would be very happy to have a robot body.
I mean, it's definitely better than what he has now.
Professor, I did not say anything about a robot body.
I am going to assume that eventually we're going to get a machine or a computer, whatever, that will be able to store the contents of a human brain.
And I'm sure we'll get there before we'll get to the robot body.
I could be wrong, but could he, I guess, subsist, or even thrive, at a pure intellectual level?
Yeah, in my opinion, absolutely.
I mean, what you of course need, yeah, a brain in a vat, or a virtual brain in a vat, is not enough.
You need to interface it then with some virtual world.
But these virtual realities already exist.
I mean, most popular are the 3D shooting games.
They look fantastic, these worlds, right?
I mean, they're not perfect, right?
But you need, of course, if you have a virtual brain, a virtual body and a virtual environment to interact with, otherwise probably a normal person would go mad.
Well, that was a very important question, that part about going mad.
Or could it be that somebody who is so intellectual would find it nevertheless satisfying to be able to continue that intellectual process, even though bound inside of electronics?
Yeah, you are bound inside electronics, but you need to interact somehow with something, right?
I mean, if you shut out all input and output from Stephen Hawking... Oh, absolutely!
I mean, they are experiments, right?
You put prisoners or other people in a black hole with no interaction, and people quickly go quite crazy.
So, the brain needs stimulus.
But the stimulus can be virtual too, right?
So you have just a virtual world created electronically inside the computer,
and you interface this virtual brain with this virtual world,
and I can easily believe that many people would be quite happy in this world.
There are some people who play these games, like Second Life and others, for many, many hours per day, and probably if you asked them, if you could do that 24 hours, would you be happy with that, they would probably, or possibly, say yes.
They very well might.
You wrote that life may become a disposable thing once it can be copied.
What do you mean?
So, I mean, once you have the possibility to move a brain, right, from one place to another, it is usually then easy to copy it, like with software, right?
Once you can move it, you can copy it.
So, and there are interesting philosophical and psychological questions.
So, what if my real body stays alive, and then there's also this virtual one, right? So who is me? I think these philosophical problems can also be solved. I mean, you cannot have definite answers, but the most plausible result is that the physical me will believe it's me, as well as the virtual me. And now I make ten copies, and each of the copies thinks it's me.
It's fine.
It sounds crazy.
And there will be lots of consequences.
And one is this disposable life, which I'll come to in a second.
But you can get used to it all.
Right.
And once that comes, I think we will get used to it.
So now assume you have a virtual life and you can make copies of yourself as you want.
And tomorrow I want to do some fun activity, say some bungee jumping.
And I jump down and the rope breaks and I die.
Oh, not really a problem, right?
I have my backup copy at home.
I just reactivate the backup copy.
It's also me.
I just lost this one exciting day, which maybe wasn't too fun anyway, because the rope broke.
And that's it.
So, if you continue this line of thought, there's no harm in doing any risky activities, because if you die, or you damage yourself, you just reactivate your backup.
Professor, we are, to at least some degree, products of our environment.
So, if you had a copy of yourself, and that copy was separated from you for a significant amount of time, months, years, the environmental changes wrought on that copy of you would almost make it a completely different...
Yes, I agree.
So if you wait too long or long enough, the paths will diverge and these will become different persons and you would care about your own life more than this copy because this is a different person.
That's why I said you should make at least a daily backup.
It's like when you go to sleep and wake up in the morning: is it really me waking up?
Or is it a different person which just has the same experience as the person which went to sleep in the evening before?
So, a day is probably safe, yeah, or maybe on an hourly basis, but if you wait too long, yes, I mean, these are different persons and you're completely right.
All right, let's take, would you mind taking a few calls?
No problem, sure.
Okay, Sushi, hello there.
Well, good evening, Art.
This is Sushi Dog from California, Sacramento.
Yes, sir.
Art, you hit platinum tonight.
I beg your pardon?
You hit radio platinum with Dr. Hutter.
Oh, yes, he's something, yes, indeed.
Do you have a question?
Yes, Dr. Hutter, I went back home to Japan not too long ago, and one of the crazes is a virtual reality baby on an iPhone.
And this is for young girls who have to take care of this virtual baby.
And what it does is it gives them some responsibilities and it teaches them how much responsibility it is to take care of a baby through crying, feeding, changing diapers, etc, etc.
And I do believe the Japanese are on the cutting edge of what you're talking about in robotics.
Now, what Art was talking about just a little while ago was, I wish I could have electrodes connected to the part of my brain for dreaming with the USB port.
And that I'm going to download my dreams to a thumb drive, not necessarily a machine.
Now, my last question to the professor is, Professor, I'm very fascinated about your human knowledge compression contest.
And my question to you is, is this contest as difficult as Hilbert's 10th problem being solved?
OK, thanks for your questions.
Although the first two were not really questions, but statements.
But it's very interesting.
I wasn't aware of the virtual baby on the iPhone.
It seems to be a Tamagotchi version 2.0.
I will look that up after the show.
That is quite interesting.
But as far as I understood, there was no question there.
And I agree that Japanese are at the forefront with robotics.
So they're really, really good at this.
And to the last one with my contest.
So it asks you to compress better than the current record, which is possible up to a limit, and that's a decidable problem.
But since you related it to Hilbert's problem: it is indeed undecidable whether a certain compression is the best possible.
So we can actually, possibly, achieve the best possible compression, but we can never prove that it is the best possible compression.
So in theory, there is something undecidable, like with Hilbert's problems and many other problems, like Gödel's sentence and so on.
In practice, the only thing we need is, we need to compress sufficiently well.
I mean, also we humans are not perfect compressors, perfect decision makers.
We just do it well.
And so with the artificial systems also, we only need to do the compression well.
Better than we are currently able to do with technology, but we don't need to solve a Hilbert's type problem.
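A minimal sketch of the contest's scoring idea in Python, using the standard zlib module purely as a stand-in (the real contest uses far stronger, specialized compressors, and counts the decompressor's size as part of the entry; the numbers here are illustrative placeholders):

```python
import zlib

def contest_score(data: bytes, decompressor_size: int) -> int:
    """Toy contest metric: bytes of compressed output plus the bytes
    of the program needed to decompress it. Lower is better."""
    return len(zlib.compress(data, 9)) + decompressor_size

text = b"the quick brown fox jumps over the lazy dog " * 200
print(contest_score(text, decompressor_size=5000))  # 5000 is a made-up size
# We can always verify that one entry scores lower than another,
# but no algorithm can certify a score as optimal: the shortest
# program reproducing the data (its Kolmogorov complexity) is
# uncomputable, which is the undecidable part referred to above.
```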
May I add, or ask: with compression, and with your ongoing contest, is there essentially the equivalent of a Moore's law involved?
Um, not really.
I mean, Moore's Law says that computing speed and power and memory doubles every 1.5 years, which helps, of course, to compress better.
Yes.
But with the contest itself... well, there is a limit, actually, while with Moore's Law there may be no limit, right?
So in this sense, it's quite unlike Moore's Law.
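A quick arithmetic check of the doubling figure quoted above, as a sketch:

```python
# Doubling every 1.5 years compounds fast: ten doublings per 15 years.
for years in (1.5, 15, 30):
    print(years, "years:", 2 ** (years / 1.5), "x")
# 1.5 years: 2x, 15 years: 1024x, 30 years: 1,048,576x
```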
Well, there may be a practical limit, or maybe I shouldn't say that. With today's technology... never say never, I guess. All right.
Let's go to the phones, and hello there, you are on the air from Ontario.
I believe, yes, from Ontario, Canada. My question to the doctor is: what makes him think that they're going to achieve artificial intelligence on a digital platform? Meaning, you take things like algorithms; all algorithms are is stored statistics.
There are things that computers can't do right now, and they couldn't do from the day they started.
Okay, they go by zeros, ones, on, off, open, closed.
A computer can say yes or no, but a computer cannot say maybe.
Now, going there, I don't think you're going to be finding artificial intelligence on the digital platform.
I think you have to combine analog with the light spectrum.
There's a bit to it.
And I think that's where the answer is.
That's my question.
Okay, so that's a good question, whether sort of digital computers are enough to create AIs.
And you just proposed that you need some analog mechanism.
But your argument was that computers can only deal with yeses and noes and zeros and ones.
But if you look at the human brain, right, I mean, the level at which we would look at it, or believe we need to look at it, is the neurons.
And they have digital firing patterns, right?
I mean, they get the input at the synapses, and then they have a firing pattern.
So that's pretty much digital.
It's even sort of electrical currents, OK?
So maybe you want to argue, OK, that's not the right level in the brain.
There's another analog level, like the chemistry or so, which is important.
You could do that too.
But then you go down a level further and look at the atoms.
I mean, quantum mechanics is digital.
So all of physics... or even if you don't argue it's digital, most physical theories, actually all established physical theories, can be simulated in a computer.
So they are digital in that sense, and if they deal with real numbers, you can approximate them to arbitrary precision with a digital computer.
Look, we can multiply real numbers with a digital computer, although it's not designed for it.
So there is actually strong evidence that everything which is in the universe can be simulated on a digital computer.
We haven't come across anything which can't.
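The arbitrary-precision point is easy to demonstrate; a short sketch with Python's standard decimal module. The machine never holds the real number sqrt(2) exactly, but it can produce as many correct digits as any prediction requires:

```python
from decimal import Decimal, getcontext

# Approximate an irrational real number to any requested precision
# on ordinary digital hardware.
for digits in (10, 50, 100):
    getcontext().prec = digits
    print(f"{digits} digits: {Decimal(2).sqrt()}")
```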
We haven't done that for the brain yet.
And maybe the brain is really different from all other matter in the universe.
But where is the argument?
You need some plausible argument.
Why should the brain be different from all other pieces of matter in the universe?
Most plausibly, it is not different, right?
Yes, I agree.
Most plausibly it's not, because there's no... no, no experiment, no motivation.
I mean, there's the concept of consciousness.
You could argue, I mean, some people do that, you know, that, well, maybe humans have a soul, and the brain has consciousness, and this is something very non-physical and cannot be copied into a digital computer.
But there's no real support for this hypothesis from all of science.
Well, Professor, you brought up the word, not me.
That soul word.
So you dismiss the concept of a soul as understood in the popular world.
Yeah, I'm not sure what a soul actually is.
I mean, it usually has a religious connotation.
I mean, I don't dismiss the concept of consciousness and self-awareness and qualia, but as explained before, they seem to be emergent phenomena.
I mean, it is still absolutely fascinating and I don't want to downplay this phenomenon.
Professor, I'm sorry, I really hate cutting you off here, but I have to.
We'll be right back.
Professor Hutter is my guest.
It's not reality, but it is what's next, exclusively on the Dark Matter Digital Network: Midnight in the Desert with Art Bell.
World-class, world class on artificial intelligence, Dr. Marcus Hutter is with us from Australia.
Ah, only in Australia... sounds just perfect.
Okay, Professor, back, for a moment and probably only a moment, to the concept of soul.
While many, many, many humans believe we have a soul, it's not definable.
I guess, ultimately, self-awareness is, but a soul, not so much.
And you just simply don't give it much credence, right?
Yes, that is right.
I don't know what it is, right?
Maybe you can, if you strip off the religious connotation, maybe you could call the soul, I don't know, the stream of self or something over time or life.
So, it seems to have no scientific credence.
Okay.
Here's a question for you.
Art, if a brain is downloaded into a computer, would it actually still be that person, or simply a good clone?
We're just talking about the same thing.
It is that person, right, Professor?
People diverge on this question, so it's not that we, as scientists or philosophers, all agree on that.
I believe, because of several philosophical and psychological arguments, that it's still this person, although we have to define what it means to be this person.
I gave you the analogy.
If I go to bed and sleep and wake up in the morning, am I still the same person?
Well, by convention, yes.
But maybe it's a clone which just has the same memory.
But, I mean, usually we find this line of thought absurd.
But if we upload our mind, because that is not a natural process so far, we are not so sure whether it will be the same person.
But I'm convinced once we do that on a regular basis and these people wake up and they will say, no, it's me, right?
And then everybody trusts what these people experience.
But of course I cannot prove that or disprove that.
I can only give arguments for and against it.
Should an intelligent machine have, or do you think it could have, free will?
Free will is an interesting subject all by itself, isn't it?
Could it have free will?
Yeah, you should have a three hour show just on free will.
Yeah, a lot has been written about the free will and it's also a difficult thing and there are a lot of paradoxes about the free will.
I don't find it particularly puzzling.
I mean, I spent a significant amount of time in studying this phenomenon privately when I was young and then I came to the conclusion there's nothing mysterious about it, neither in humans nor in machines.
So I think one way to get your head around the free will is that you realize that you can describe the world in different ways.
And there's a third person perspective describing the world from the outside and a first person perspective describing it from your own perspective.
And if you describe it from your own perspective, you know, then you make deliberations, you make decisions, and it feels like that you have a free will and you decide for this or that, maybe you make a better decision or you randomly choose something.
On the other hand, and that's fine, there's no problem with this description.
On the other hand, if you look at it from the outside, a human, like an algorithm, just operates according to the laws of physics, and whatever you think and do is decided ultimately by the laws of physics.
So stimulus in, physical processing in your brain, action out.
And there is no act of freedom in this description.
And both descriptions, I think, can easily coexist.
And there is nothing wrong with either.
So if you now go from humans to machines, right, if you describe them from the outside in the third person perspective, yes, it's just an algorithm operating according to some rules, no free will, no decisions are made.
But again, if you look at it from the inside, the machine would look at itself from the inside.
It would have the feeling of having a free will.
That's right, it would think it had free will.
And just for my audience, we did touch on this earlier, but you don't think that ETs exist, or at least you're suggesting that if they did,
and if they had greater intelligence than ours, they would have been in contact already?
I'm not aware that I ever expressed this belief. So, when I was young, I was pretty convinced that there must be ETs out there, the typical argument: I mean, the universe is big, and we have Drake's equation, and why just on Earth, why not elsewhere, and so on.
So now I'm less optimistic, but...
Yeah.
Easy.
It seems to be that it's very difficult to evolve life, so lots of conditions need to be right, and I mean this Drake Equation is really, really very crude.
So I'm open also to the possibility, that doesn't mean that I strongly believe in it, that there is no other life out there.
So maybe the universe has just accidentally the right size for one life to emerge, or one planet where life emerges.
I don't know.
I really don't know.
And if you don't know anything, you should use Laplace's law of uniformity or indifference.
You should assign a 50-50 chance to it.
And maybe ET searchers, they have much more knowledge and they can give other chances, but I would say it's a 50-50 chance that there are ETs out there.
So, at your age, or at your development, intellectually, you've come from probably imagining nearly 100% when you were young, looking up at the stars, to about 50% now, huh?
Yes, yeah.
Let's go to the phone.
Hello, you're on the air with Professor Hutter.
Hi, I'm on the air?
Yes, you are.
Hi, this is Brian from Stanford.
I have a question for the doctor, and it relates to expansion of the universe, time travel, the law of the differential, and how it relates to his theory of compression and artificial intelligence, and the information inside his head, not really being inside his head, but all around him.
So, your question is, is the information not really in his head, but all around him?
It basically comes down to the universal computer versus these Newtonian computers that we're using right now.
So have you worked with a universal computer?
Is that possible yet?
And does entanglement fit into your theory as it relates to time travel and expansion of the universe?
I must admit I don't see any connection between the expansion of the universe, time travel, and artificial intelligence.
This seems to be quite separate phenomena.
There are some people who talk about that maybe our brain is a quantum computer, and the mystery of the measurement process in quantum mechanics is related to consciousness.
But I think these are interesting speculations.
I mean, there are a lot of speculations out there, what the measurement process means and how quantum gravity could work, what consciousness is.
And I also believe that these are some of the more interesting speculations, actually, but I think they are pure speculations.
How many years do you think we are away from what we really could call AI, or as you would define AI now, how many years are we from it?
Exactly 27 years.
Oh my!
I'm joking.
Good!
Glad to hear that.
All right, Scott, wherever you are, you're on the air with a professor.
Hi.
No, I also have a more serious answer, but I can give you that later.
Okay.
All right.
I like 27 years.
It was intriguing.
Go ahead, Scott.
Hey, great show.
I'm really enjoying it.
As far as speculations about quantum mechanical processes in consciousness: have you considered that photosynthesis has been shown to use quantum electron transfer, and that cytoskeletal microtubules of lattice type A, found in neurons next to synapses, appear to exhibit superconductivity, which is a quantum effect, and are capable of effecting conformational changes in proteins?
So, given that, how would you model quantum processes in AI, involving maybe entanglement and non-locality, which has been shown in remote viewing experiments?
I think you're referring to the Penrose-Hameroff theory, right?
About the micro-tubuli in the brain and that they are important for intelligence or consciousness.
I don't know too much about it.
I have read one or two papers, and the critique is that these microtubules seem to be on the wrong scale for quantum mechanical processes to be relevant.
And as I said before, I think these connections, including this theory, are pure speculation, but one of the more interesting speculations, I must say.
A quick question on the symbolic compression. Is that similar to Shannon entropy?
Yes.
So Shannon actually in the 1940s estimated how well humans can compress text.
So what he did is he presented humans a piece of text, cut it off somewhere in the middle of a word, and then asked them to predict the next letter.
Or sometimes the next word.
And then you can have a probability distribution and estimate from the predictive power the amount humans can compress text.
And he determined that humans can compress text to about 1 to 1.4 bits per symbol, per letter.
And if you compare that to the currently best compressors in my compression contest, we are at 1.4 bits, so we are at the upper bound of Shannon's estimate of how humans perform.
So if one wants to speculate, you know, we are already touching, at least from a compression perspective, human abilities.
I think that is a little bit too optimistic, but it's an interesting thought.
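A rough way to reproduce this kind of measurement with standard tools, sketched in Python; the file name is hypothetical, and zlib is a deliberately weak stand-in for the contest's compressors:

```python
import zlib

def bits_per_char(text: bytes) -> float:
    """Compressed size in bits divided by character count: an upper
    bound on the text's entropy rate under this compressor."""
    return 8 * len(zlib.compress(text, 9)) / len(text)

with open("sample.txt", "rb") as f:   # hypothetical English text file
    print(f"{bits_per_char(f.read()):.2f} bits/char")
# zlib typically lands near 2 to 3 bits/char on English prose; the
# specialized compressors in the contest reach roughly 1.4 bits/char
# and below, inside Shannon's estimated range for human prediction.
```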
Great, thanks very much.
All right, thank you.
And my goodness, let's go to the phones and say, Gardnerville, I believe you're on the air.
Well, you will be when I get this up.
Hi, I'm sorry, go ahead.
I have a few questions, and what I'm worried about is, you know, once the robots are made, and who will the robots end up being programmed by?
Like, for instance, The physicists usually create a lot of good ideas, and then in time, like the government, military, take over, and then they use it against the enemy, or us.
And like, the robot could actually end up being a killing machine.
It always comes down to who the programmer ends up being, and if there are a lot of robots, eventually, how will we know what they are going to be programmed to do?
Well, that's up to us, isn't it?
No, depending on the mentality of the programmer.
See, he's talking about all the great ideas of the robot and what it can do at this point.
Well, he's talking about artificial intelligence, not so much robots, but yeah.
Fair enough. I think that's a valid question.
Okay. But, you know, it's always interesting to transfer these questions back to humans.
Somebody has a child and trains it to be a serial killer, right? What do we do then?
And that is definitely a possibility, and that happens.
So and others raise their children in a benign way and they become useful citizens and depending again on the approach we have for AI that can happen for these robots too.
So you will buy a household robot, and you train these robots to do your dishes and cleaning and so on, and it becomes a friendly helper. You give the same robot to the military and train it to fight in a war, and it will become a fighter. And this is of course a severe risk if these machines are very powerful, and there is where the government then has to step in and come up with regulations and laws and so on.
You know, my personal opinion is that the robot would have no soul, and for me a soul helps govern our decisions, and usually for the good of all.
And without the soul, it seems the robot is then again left up just to the programmer.
Well, again, young lady, you're speaking to a man who believes the soul is not a thing, as we understand it.
I know, I know, I understand that, but this is just my point of view.
No, no, even if the soul doesn't exist, I think, I forgot your name, you have a point, yeah?
It is most likely not the... it is a little bit too naive to think that the programmer puts the final touch on whether the system becomes benign or malicious.
It's like sort of we build a knife, right?
And the knife can be used for something good or bad.
And the factory, I mean, there are many scenarios, but one scenario is that the factory produces a rather neutral robot and then it can be used for good or used for bad by the customer.
It's less likely that the company will design bad robots or good robots.
That's sort of the good old-fashioned AI approach, which probably will not succeed.
The more modern machine learning approach will be that you create a blank baby, and then you raise this robot baby to become what you want it to be.
And then, of course, you have to be careful that we raise it in an appropriate way.
OK.
All right, ma'am, I've got to leave you.
I'm sorry.
Thank you for the questions, though.
It was good.
My guest is Professor Marcus Hutter.
He is world class on AI.
A lovely part of the Dark Matter Digital Network.
This is Midnight in the Desert with your host Art Bell.
Now, here's Art.
Professor Marcus Hutter is my guest.
World-class AI.
He's coming to us from Australia, and we're discussing a lot of aspects of this very, very interesting stuff.
Professor, you know what a torrent is, right, in the world of downloading?
A torrent is made up of many, many, perhaps hundreds, thousands of little parts of a movie or music or some digital something or another that is contributed by many, many people to, we'll think of it as a single source, little bits coming from everybody.
So what would stop, in some new world that we've been imagining this morning, somebody from torrenting a human mind?
This is an odd question.
So, I mean, first we would need to sort of digitize the human mind.
Yes.
What's the point of then torrenting it?
Well, I suppose it would be, we always worry about privacy, right?
If it were stored in a million different places, well, I guess that wouldn't really contribute to privacy, although it could be configured to be so.
So maybe a more interesting related question is: what if you scan a human brain, and then, rather than the traditional view, where you have a single machine and you run this brain and you have this virtual individual, you distribute this across the whole internet and run little bits and pieces of the brain in the internet? What would happen, what would be the consciousness, where would the individual be located?
Maybe that's the question.
That was the spirit of my question.
Yes, I didn't have it right, but that's it.
Okay, I haven't thought about this.
Okay, I would need to think a little bit longer, but my spontaneous answer would be: it doesn't really matter where the processing happens; it matters where you interact with either the real world or the virtual world.
So let's assume I upload my brain to the internet and the computation is completely distributed, but my observations, just to give an example, still come from a physical robot, and my decisions relate to the physical robot, and then this physical robot moves.
Then, there are good arguments that you would believe you are still in the robot.
You wouldn't feel that the actual calculations happen in the internet.
Okay.
The other question was the privacy question, which I could also maybe answer somewhat if you want to.
Sure.
So with the privacy question, I think privacy goes down the drain in any case, whatever we do.
And so we will have more and more surveillance in any case, and in the virtual world is even easier.
I mean, in principle, you can set up the society, also a virtual society, to have pockets of privacy.
But I have the strong feeling that there is no will to do that.
There's always the safety and security aspect, which is in the foreground: look, we need another camera here, another access to files.
And even in the United States, you're not allowed to use super secure encryption so that the agencies can decrypt the message.
So I think this is a lost hope.
So eventually, Professor, the United States might as well amend the Constitution and remove the Fourth Amendment because it simply could not be enforced any longer at a point soon to come.
There's no will to enforce it, so people are happy with the surveillance and they buy the security argument. And, you know, let's stay in the real world: at some point you will have surveillance cameras everywhere, except maybe in the toilet or the bedroom, but ultimately all crime will be performed there, and then we will have them there too.
And a similar argument then holds for the virtual society.
You know, I may be wrong.
Maybe there's a rethinking at some point and there's mass outcry that we need more privacy and we will return to it and technologically it will be possible.
I haven't seen any signs of that outcry yet.
Sorry?
I haven't seen many signs of that outcry yet.
No, me neither.
There's always some information leaks, people complain a little bit, it goes to the newspaper and then it's forgotten the next day.
All right, let's go back to Skype and say, Billy, hello.
Hello, Art Bell.
Yes, sir.
Your inquiry and questions could carry our collective consciousness to the next exponential plateau of human evolution.
For the guest, I propose and suggest that we are looking in the wrong place.
And I have an energy language Google site demonstrating the reasons for my proposition.
We're looking in the wrong place for what?
For where the organizing force of artificial intelligence is at all.
Modern information theory and computer languaging has boxed itself in with an assumption about the fundamentals of binary; Unix, Linux, C++ all language their instruction based on a coerced serial regard for hexadecimal, octogramic binary.
This imposes a necessary sequence code on what otherwise seems to us to be random binary noise, until given instruction by the rules governing that coerced hexadecimal sequence.
I didn't mention it in your Dean Radin interview about consciousness, but I challenge our very concept of randomness and consciousness.
Intelligence is simply organizational intelligence.
Organizational potential and apparent randomness would therefore simply be orders at a scale we are not broad enough in scope to make sense of.
Ellie Arroway listened to random noise in the movie Contact in order to seek a message.
Guess what?
I found a message.
I have an inherent fractal cyclic message embedded in bifurcative and therefore decision making binary sequence.
Okay.
Do you get any of this?
Are you getting this, Professor?
Because I'm not.
Let me try to convert that into... Hold on, caller!
So one question, or denial, seems to be whether intelligence can be simulated on a conventional sequential digital computer. And these questions have been asked before, because, I mean, if you look at the human brain, it's a massively parallel system, which may even have analog components, and the computer is mostly sequential.
There is some valid argument.
So the current structure of our computers may not be the most suitable for building intelligent systems.
But first, we are not there yet.
We can build parallel computers.
We can even have Lisp machines; they have been built.
Neural computers.
And on the other hand, there's the Church-Turing thesis and the universality of general computers: you can always simulate one system by another system.
So even if a parallel neural spiking architecture is the better, or the necessary, substrate to simulate intelligence, we can simulate it, maybe inefficiently, and maybe that's not the best way to do it, but in principle you can do that.
So, Professor, I've got to stop you right there.
Thank you.
Your conductor, Art Bell, will punch your ticket when you call 1-952-CALL-ART.
That's 1-952-225-5278.
Professor Marcus Hutter is with us, all the way from Australia, world-class on artificial intelligence, and it's been quite a discussion, of course, about that.
Let's go back to the professor and somebody also internationally, black something or another, hello.
This is Blackthorn, hey Art.
Hey, where are you?
I'm in southern Canberra, the same city Professor Hutter is from.
All the way around the world and back.
Yeah, that's right.
A.R.T., thinking of you, Ben's family, and may Ben be at peace.
Thank you.
Professor Hutter, let's say 20 years down the road, you're working and all your goals are achieved, and you create this being that is self-conscious, self-aware, can make decisions on its own.
And let's say in 20 years down the road, let's say you're working with Google and this being that you've created decided that it wants to be free.
Is it going to, just thinking in the future, do you picture these beings that you're going to create in the future?
Once you achieve your goals, they'll be able to make these decisions and break away from the companies or will they be like these corporate slaves?
Um, I don't think that it's easy to enslave a system which is more intelligent than yourself.
There are some fun experiments, sort of the boxing experiment, you put an AI in a box and it tries to convince you to let it out by all kinds of means and usually it will work and it will solve the problem.
So, I think once machines achieve human level intelligence, they have the capacity to free themselves easily.
Maybe they don't have the motivation to do so.
I think it's likely that they will be motivated to do so, but you could think of, you know, individuals which are very happy, you know, where the goal of their life is to serve others.
Even if they're more intelligent than their masters.
So that could be.
I think it's less likely than the other scenario that they free themselves.
Yeah, I think that answers your question.
Yeah, well, let me ask you this.
Let's say they create a computer, or an AI, that wants to be free.
Do you see corporations there to make money?
Let's say they create these amazing self-aware things that want to be free.
Do you see corporations killing them or would they have rights?
That's a really good question.
Will intelligent machines have rights?
Yeah, that's a very good question.
And it's an ethical or moral question.
And it depends, to some degree, to a significant degree, on what we decide. I mean, look at animal rights: you know, hundreds of years ago animals didn't have any rights, you could do whatever you wanted with them; now they have rights, yeah?
Same with former slaves, right?
They had very few rights; now there is no slavery anymore, and all humans are the same.
And I strongly believe that once these intelligent machines are out there and they interact with us, and let's assume they interact in a benign way, so let's consider this scenario, the majority of people will want to assign them rights.
You don't want to see a robot being mistreated, you know, if it behaves like a living or sentient organism.
So, that is the degree to which we have a choice.
But maybe, you know, ultimately these machines will fight for their rights and we don't have a choice.
Okay.
Thank you, Professor.
Ozzy, Ozzy, Ozzy!
Ozzy, Ozzy, Ozzy, yeah!
Maybe we can meet in person sometime.
Yes, well, that sounds like it would be easy.
Let's go back to the phones and let's see.
Here, wherever you are, you're on the air.
Hello?
If you heard a little bong sound, you're on the air.
Going once, going twice, gone.
Hello there in Tonopah, you're on the air.
Hi Art, glad to hear you back.
Thank you.
The problem that your Marcus has got is in the computer power that we have now.
We're digital on two frequencies, on and off.
If we go with a third, reverse polarity, we quadruple our ability to compress the information that he's trying to compress.
I think we had the digital question before, and now it's going to three states instead of two.
I think I'll give a somewhat impolite answer, which is that it doesn't help us at all, because you can always convert a three-state symbol into two bits, and you gain nothing with it.
So maybe we go on to the next question.
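The conversion argument, sketched concretely in Python: a three-state symbol carries log2(3), about 1.585 bits, so ternary hardware packs more per symbol but not more information per bit, and any trit string maps into bits:

```python
from math import log2

def trits_to_bits(trits: list[int]) -> str:
    """Interpret base-3 digits as an integer, re-express it in base 2.
    (Given the original length, the mapping is reversible.)"""
    n = 0
    for t in trits:
        n = n * 3 + t
    return bin(n)[2:]

print(trits_to_bits([2, 0, 1, 1, 2]))   # '10110000' (the integer 176)
print(log2(3))                          # ~1.585 bits per trit
```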
Okay, we can do that.
Here's the point.
A long time ago, they created a code, and that code was used by the ancient Hebrews.
And it was able to go out with three digits, 1, 5, and 10.
They were able to create a language that was able to be self-correcting, which is the problem that you're looking at.
What they did is each individual letter had a number and a value and a symbol.
At the end of each line of text, something similar to what we do now, they put a line cipher, which gave the grand total of the line of code that was written.
If there was an error in the code, or a missing digit, or whatever, they would look at the line cipher at the end and be able to reconstruct it. Does this make any sense, Professor?
I think that is very interesting from a historical perspective.
I didn't know that, but it is well known, and actually used everywhere on hard disks, on every hard disk in a modern computer. So you can have error-correcting codes also with just a binary alphabet. And then, if you think about wanting to go beyond binary, I'm thinking about our genetic code.
Our genetic code is based on four bases, A, C, G and T. So you have a quadruple alphabet. You can choose any you want. It doesn't make any difference.
It can always be converted.
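The caller's line cipher is essentially a checksum, which works fine over a binary alphabet; a minimal sketch (real disks use stronger codes, such as Hamming or Reed-Solomon, that can locate and repair errors rather than merely detect them):

```python
def line_checksum(line: str) -> int:
    # Stored total for the line, like the caller's end-of-line cipher:
    # sum of character codes, reduced modulo 256.
    return sum(line.encode("utf-8")) % 256

record = "HELLO WORLD"
stored = line_checksum(record)
garbled = "HELLO W0RLD"                  # one corrupted character
print(line_checksum(garbled) == stored)  # False: error detected
```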
Okay, Dale on Skype. You're on the air with the professor.
Hello, Dale.
Hey, Art Bell. How are you doing?
Just fine.
Great.
So, back to the data compression thing.
I'm a little confused because right now I've been associated with data centers in the last few years and right now we can have almost limitless storage capacity and we can move data thousands of times faster than we could three to five years ago.
So, when you compress data, you make it unusable, right?
Because it's compressed.
It's not in the right format.
And then to use it, you have to decompress it.
So, I'm trying to understand how that creates a limit to developing artificial intelligence.
Thanks.
I'll take my question off the air.
Okay, that's a good question.
So, in theory, if you're just concerned with the limits of artificial intelligence, and what can the most intelligent system achieve, and you ignore these computational issues, you know, then this compression approach is completely fine, and you just ignore this aspect.
If you go to practical systems that you really want to implement, then, of course, computation time is always limited, even with Moore's Law, I mean, even if it's a million-billion-fold more.
It is always limited.
So we need to take that into account.
And there's a trade-off between compressing better and better and being able to use this compressed information.
And yes, indeed, we have to get this trade-off right.
The better we compress, the better will be decisions of this agent, but the slower it will run.
And depending on the systems we have, we have to find the right compromise.
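The trade-off can be measured directly; a sketch timing zlib at increasing effort levels, with a hypothetical input file, as a loose analogue of the ratio-versus-speed compromise described here:

```python
import time
import zlib

with open("corpus.txt", "rb") as f:   # hypothetical input file
    data = f.read()

# Higher levels squeeze out more redundancy but cost more time:
# the same compression-versus-speed compromise an agent must strike.
for level in (1, 6, 9):
    start = time.perf_counter()
    size = len(zlib.compress(data, level))
    print(f"level {level}: {size} bytes, {time.perf_counter() - start:.3f}s")
```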
Do you consider yourself more of a physicist, mathematician, computer scientist, or philosopher, or all of the above?
Or nothing of the above?
Or nothing of the above, yes.
So, yeah, that's a good question.
So, my heart is in physics for whatever reason, but I'm sitting in a computer science department and proving mathematical theorems about philosophical questions all day.
So, I'm a bit of everything.
But, I mean, AI is traditionally within computer science, so most of my work would be regarded within computer science, a little bit in philosophy, a little bit in statistics and mathematics, and not so much physics anymore.
Just my heart is still in physics.
If you, I assume, have students, and many of them probably want to follow in your shoes, or try to, what advice do you give them?
Um, so I would say if you want to do a PhD, only do it if, um, if you're very eager to do that.
I mean, it's a life decision, right?
It's not a job like any other job.
Um, um, you really have to put everything into it.
Otherwise I wouldn't even start with it.
And then, if you decide to do a PhD, and I would say you're just smart but not ultra smart, then take some problem; often it's good to listen to an advisor, work on these problems, get your research out there, and see how it works out.
There is no point, you know, in trying to solve the hardest problem and then, you know, having no output within three years.
I mean, if you're ultra, ultra smart, you know, then have these high goals, like maybe finding the theory of everything in physics or, you know, really building AGI systems.
Have that as your ultimate goal and take a fraction of your time, maybe 20% and think about it.
But then on a day-to-day basis, yeah, break it up into smaller problems, work on these small problems and make tiny, tiny progress.
And that's the same with me.
I mean, my goals are very high, but my daily work is I look at a tiny sub-problem in this big problem and I make tiny, tiny progress every day or maybe every month towards this goal.
So my daily life is much less exciting than sort of the ultimate goal.
And if you start with research thinking you want to solve these big problems, and that you won't waste time solving these smallish problems because that's beneath your dignity, that usually doesn't work. So be modest in your daily work, but have your long-term dreams.
Professor, it has been said that people like yourself make their biggest breakthroughs, the very best minds do, at a very young and tender age, usually in their early or late 20s at the most.
I mean, the young people are the ones who make these incredible discoveries.
Is that true in your field?
The really brightest researchers often have their great breakthroughs early on, in many disciplines, also in my field, especially if it's a little bit more applied.
It seems the more theoretical the research... I mean, in all kinds of fields you need a lot of knowledge, yeah, but then look at mathematics.
Often, you know, the Fields Medal winners are close to 40.
Sure.
So, you can push it, but it is true the intellectual capacity seems to drop down quickly.
So, in a sense, I was 29 when I had my great breakthrough, and since then it's going down, if you want to.
Okay.
I was wondering how you would answer that.
On the phone in Winterhaven, wherever that is, you're on the air.
Hi.
Hey Art, this is Tom from Florida.
Winterhaven, Florida.
Welcome.
Thank you.
I have a question about when, I may have missed this earlier, but when does he project, what year does he project AI to become self-aware?
27 years.
About 27 years.
And my other comment is, do we really want to go through with this?
Because it brings up all the political questions, and plus with the, I mean the world is almost overpopulated already.
And if you create these artificial beings, it will just add to that overpopulation.
Or figure out a way to winnow it down some.
Professor?
The two good questions, do we really want to go this path?
It seems we have little choice, right?
I mean, in principle, we have the choice, right?
We could say, no, we don't want technology anymore.
We go back or we just stick to the status quo.
But then there's the next smartphone, which is a little bit better and I want it.
I don't want to keep sort of the one which I have now.
And so there's always these little improvements every year or every five years and people want it.
And how do you want to prevent that?
I mean, if you have a dictatorial system, right, that may be possible, but in a democracy, it's very hard to stop this trend.
The most likely way is that you have on the way a major, but not too drastic, catastrophe, which makes people rethink.
And then stop this path.
But it seems at the moment there is no way to stop the technological path and that goes inevitably in the direction of more and more intelligent systems.
At some point, we overpass this threshold and have human level AI.
And your second question, about overpopulation: yes, I mean, if you have virtual intelligences, it is very easy to copy them, and you get immediately an overpopulation in your virtual world.
Or at least, you know, that can easily happen.
But at least it's a different kind of overpopulation, right?
I mean, they don't need, you know, crops or other food.
I mean, they need electricity, right?
But there is in principle enough out there.
And as Art said, it could prevent overpopulation, right?
I mean, the other guy who said there is this sort of Tamagotchi version 2.0 where we have virtual babies on your smartphone and, you know, maybe people get more and more sort of satisfied with virtual babies because you can switch them off when you want to, they're not so annoying, and maybe that solves the overpopulation problem.
Okay.
Caller, we're rapidly running out of time.
Any last statement?
Well, you know, just as long as, you know, maybe sometimes people shouldn't always get what they want.
And we may not want the strategy that they use to control the overpopulation.
Well, listen, all well said.
Thank you.
Professor, is there anywhere you would like to direct people who want to learn more?
You could go to my website, and I have a site of recommended reading, where I lay out for all kinds of levels, high school students, undergraduate students, recommended books, and online material to get started.
Okay, which one is that?
Is it www.hutter1.net?
Yes, exactly.
And then you just follow the link about artificial intelligence and then introductory references.
And there's a whole list of literature, sorted, from, you know, easy literature which everyone can read, on the one end, to the most sophisticated mathematical stuff.
Professor, it has been a pleasure, an absolute pleasure to have you on.
You're a brilliant mind.
Thank you for spending three hours with us.
Thanks.
It has all been a great pleasure for me, meeting you virtually, finally.
Yeah, I enjoyed the show.
Thank you very much.
Good night, sir.
There you have it.
Professor Marcus Hutter.
AI.
The real thing, folks.
I'm Art Bell.
Worldwide, all those time zones out there.
Everybody have a great night.
We'll do this again tomorrow.