Dec. 19, 2017 - The Unexplained - Howard Hughes
01:04:13
Edition 325 - Thomas Anglero

To end 2017, a look into our "AI" future with Thomas Anglero - IBM Norway Head of Innovation...


Across the UK, across continental North America, and around the world on the internet, by webcast and by podcast.
My name is Howard Hughes, and this is the last edition of 2017 of The Unexplained.
Thank you very much for being part of my story over this last year.
Thank you for all of your emails that continue to come in.
Please keep them coming.
No shout-outs on this edition.
I do plan on a future edition to put a lot of your points together and address all of those at that point.
But before we get into the guest this time, somebody who I think is very appropriate for this point at which we find ourselves in human history.
Thomas Anglero is the man's name.
He is the head of innovation at IBM in Norway, and he's got a fascinating take on AI, technology, and all the rest of it.
But more about that in a moment.
This bit is about you.
I just really want to thank you as we come to the end of this year for continuing to support me.
You're the family that I will probably never meet, but you mean a lot to me.
Your support, the kind things that you say about all of this, make it all worthwhile.
I think if you've been with me for a number of years, you know the story of this show.
We started as a radio show.
The radio show ended.
You asked me to continue it as a podcast, and I did that for 10 years.
And then they asked if I would bring a version of it back to radio, which I am now doing.
But this show online continues and will continue.
And I'd just like to thank you, really, for being with me as I've developed and as I've learned over this period, which I have.
You know, I've learned a lot about myself and a lot about doing this kind of material.
But in life, I've always believed, and I think it's what keeps you young, you never stop learning.
And this has been a year where I've learned and I'm sure there will be more to learn as we go into 2018.
So thank you very much for being part of my story and thank you for the things that you say, the guest suggestions and the pointers that you give me.
Whichever part of this world you're in, and we have listeners now in places like Kuala Lumpur, in Perth, Western Australia and Sydney and Melbourne, and in Christchurch, New Zealand and Tokyo, Japan, and every city of the UK and the United States and Canada as well.
And pretty much every other country on earth, I'd say.
So look, we're an enormous worldwide family, and this can only develop.
So thanks very much for keeping me going over this last year, through everything.
Thomas Anglero, the guest on this show, we're going to talk about a lot of the hot-button issues that are very much dominating the news this year.
You know, the mainstream media didn't much understand these things a year ago, but now they're kind of beginning to cotton onto it.
Now, we were there, I think, well before them as we are in so many things.
But, you know, that's the point of doing shows like this, isn't it, because you're not going to get all of the facts and you're not going to get all of the perspectives ever from the mainstream media.
Why?
Because they have their own axes to grind, they have their own viewpoints and their own interests to protect, I guess, might be one way of putting it.
So that's why, you know, you get to hear things here.
So Thomas Anglero will talk about AI, robotics, and all that other cool stuff that perplexes some people and excites others on this edition.
If you want to get in touch with me, go to the website theunexplained.tv, follow the link, send me a message, and thank you very, very much at this point in the year to Adam Cornwell at Creative Hotspot in Liverpool for all of the hard work that he does for me.
Thanks, Adam.
All right, let's get to Oslo, Norway.
I think today, just as cold as London, UK.
They don't quite have as much daylight at this time of year as we do, but it's pretty cold there and here.
And say hello to Thomas Anglero, the head of innovation for IBM in Norway.
Thomas, thank you for coming on my show.
Can I ask you first, you know, you have one of the most important positions in technology in Europe, indeed in the world.
First of all, give me the job description.
I guess we need to do that.
Of course.
What a way.
I have to live up to that introduction.
That's interesting.
It's going to be an interesting talk.
The title is Director of Innovation for IBM in Norway.
And I am responsible for many things because innovation is a hype word, but we'll get into what all of that really means.
But the primary responsibility I have is for IBM's Watson, which is their artificial intelligence or cognitive computing platform, and bringing it to the market.
So it's been a hell of a ride the last two and a half years.
So you're absolutely right now, in any part of the world and any way you look at it, at the cutting edge of everything.
Yeah.
You know, I think most of the world is saying, wow, this is new.
This is exciting.
This is cutting edge.
And for me, because I spend my entire day, my entire life, existence now is inside of this.
For me, I see this as a repeat of history all over again, you know, where we've been at these cusps of massive changes in society.
And this is just us, at it again.
You know, the moment before Edison turned on the light bulb, the moment before we turned on electricity, Tesla invented alternating current.
Just before that, you know, the world was in darkness.
And then all of a sudden we went into light.
Today we are in, we call it electrical and mechanical, but then we're going into robotics.
We're going into artificial intelligence.
So as exciting as it is, in my mind, we're just being humans again.
And we're just at that next cusp.
You know what I mean?
So I'm not downplaying it, but I'm saying that we've been here before.
Really?
Now, that's interesting because earlier this year, I interviewed a man called James Burke.
You may be familiar with him.
If you were brought up in the US, I don't know if you were or in the UK.
If you were around here 25 years or so ago, this guy was Mr. Technology on TV.
I mean, he is still an enormous presence here.
And James writes much on science and technology.
And James is a man who embraces all of it.
But even he had some misgivings about the nature of the, I won't say threat, but challenge that faces us going forward.
Because in his belief and in other experts in this field that I've spoken to, we are facing something the like of which we've never really faced.
Yes, it is a technological leap forward, and we've had those before with the wheel and flight and fire and everything, but never something so very comprehensive that goes so much to the core of everything we do.
And yet you say this is just almost another day at the technological office.
Well, I don't say it's just another day.
I'm saying from the human side, from the insecure side.
Because what I do is I spend a lot of time with management teams of companies throughout all of Europe and expanding into now the Middle East as well.
And they are massively insecure.
Because here you have a management team, people who, over 30, 40 years, built their reputation and experience on knowing everything.
They are the masters of the universe, of that industry, of that market or whatever.
But they look forward and they are scared as hell because they know people who are 20 years old and younger are in better control, have a better understanding than they do.
Never have I seen so much insecurity in people 50 plus.
And what I'm trying to tell them and what I'm telling them directly is that we've been here before.
This is what we do as humans.
You know, the whole exploration thing.
So this is not a life crisis.
This is just another moment, another opportunity.
I'm a massive optimist.
Now, you have the whole people who paint the picture of artificial intelligence, that the robot's going to take over, that the software is going to kill us all, the whole Terminator Judgment Day, things like that.
I downplay that to the point of where, listen, if you program a machine to kill you with a machine gun, it will kill you with a machine gun.
That's human choice, right?
But look at it as if you're driving a car.
If you drive the car down the street, it won't hurt anybody.
If you drive the car down a sidewalk, you'll kill a whole bunch of people.
It is us who are programming it that will determine the outcome of what the software will do.
Well, I get what you're saying, but how come I am reading at the moment, even within the last week as we record this, that robots are becoming able to make decisions and functional gambits, functional moves for themselves?
That contradicts what you said to some extent, doesn't it?
No, but you see, the self-learning systems are so exciting.
All right.
So self-learning systems are scary as hell because a self-learning system does not have an endpoint in terms of its judgment.
It doesn't even know what judgment is.
We have ethics.
At least some of us are brought up with ethics if we had good parents, you know, who taught us what is right and what is wrong.
But a machine, a code, does not.
I like to give examples of where machines and AI, software, algorithms, they're not limited by our physical world.
If you program a machine or a lot of machines to understand the quantum world, when it makes an estimation or a calculation, it will actually think about its decision or its final thought or its conclusion.
It will take into consideration the quantum world.
We human beings do not take into consideration the quantum world.
We say one plus one is equal to two.
We do not take into consideration that an electron can be in front of me on my fingertip and the same electron can be halfway around the universe.
We cannot possibly exist in that world, but machines can.
I think self-learning machines, machines that could take in consideration all these things, and this can take into consideration hundreds of millions of things per second, and it takes into consideration things that we don't take into consideration, will lead us to a very interesting world.
But that does not have to be a scary world or a dark world.
It will allow us to, as you were saying, let's say, cure diseases that we've never been able to cure before.
It will allow us to simulate human responses and things like that.
It will allow us to have a better world, but it's not to be scary.
I mean, I think all of that's great.
And listen, I do embrace all of this.
You know, we have to march forward into technology, but we have to be doing it in a mindful way.
What you just said is that artificial intelligence can take on board when making and formulating decisions and actions many more parameters than we can.
They can think literally way out of the tech box, which is a great thing.
But that doesn't stop them from making the wrong choice or decision, does it?
Yeah, but you got to understand what's the definition of the word wrong.
Well, wrong might be the destruction of civilization as we know it.
Understood.
Understood.
I'm not trying to get into that type of argument.
That's not where I'm going at all.
And the software will take into account or not take into account things.
It doesn't have ethics.
It doesn't have morals.
You have to actually program that in.
If you don't put it in there, if you don't take your child and say, don't steal.
The fact of the matter is that you are innocently almost relying upon the people doing the programming to do it ethically and do it right.
And as you know, you've only got to look around the world at some of the regimes and people who are in power in this world.
You know, not everybody is so benignly motivated.
Yeah.
The technology right now is such that there are two ways you can work with it today.
One is self-learning systems, which we've spent a moment talking about.
And the other one is human-trained systems.
And that's where the majority of the software solutions provided by the big players are at, where human beings are training the software.
And I'm so happy that we came out of the box that way.
You know, the premise for this industry was that the software would be taught by human beings.
The question or the nervousness that most people have is with these self-learning systems that don't have a mother and father.
And I keep on going back to that picture because that's the problem.
If the thing is just left to make its own decisions, it doesn't know that wiping out the planet of all human beings, just because that's the best environment, that's not correct.
So let's not forget the other side of it.
And that is that most of the big players today, you have to train the software.
And training the software, you literally have a Q&A with the software, telling it what's right, what's wrong.
You ask it to make a conclusion.
It makes a conclusion.
You go, very good.
Now take this in consideration.
Don't take this in consideration next time.
And it very much ends up being like if you had a kid in kindergarten, very much the same type of thing.
Good boy, but don't do that.
And that's the way it is.
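For readers who want to see what that human-in-the-loop training looks like in code, here is a minimal sketch in Python. It is not IBM Watson's actual API; the class and method names are invented for illustration. The idea is just what Anglero describes: the software proposes an answer, a person approves or rejects it, and that feedback shapes the next answer.

```python
# A minimal, hypothetical sketch of "human-trained" software: a person tells
# the system which answers were right or wrong, and that feedback determines
# what it answers next time. Purely illustrative, not any vendor's API.

from collections import defaultdict

class TrainableAnswerer:
    def __init__(self):
        # scores[question][answer] grows when a human approves that answer
        self.scores = defaultdict(lambda: defaultdict(float))

    def answer(self, question, candidates):
        # Pick the candidate answer with the highest learned score
        # (ties fall back to the first candidate offered).
        return max(candidates, key=lambda a: self.scores[question][a])

    def feedback(self, question, answer, approved):
        # "Very good" raises the score; "don't do that" lowers it.
        self.scores[question][answer] += 1.0 if approved else -1.0

if __name__ == "__main__":
    bot = TrainableAnswerer()
    q = "Is it OK to take something that isn't yours?"
    options = ["yes", "no"]

    print(bot.answer(q, options))           # untrained: arbitrary choice
    bot.feedback(q, "yes", approved=False)  # the 'parent' says: don't do that
    bot.feedback(q, "no", approved=True)    # the 'parent' says: very good
    print(bot.answer(q, options))           # now answers "no"
```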
But that assumes that they will learn.
Obviously, they will learn and we will help them to learn through mistakes, but some of those mistakes along the way could be pretty catastrophic.
Look, I don't want to paint a worst case scenario here, but these are what a lot of people who are commentating on this matter, these things are what they're saying.
Of course, because it gets people's attention, it sells newspapers, blah, blah, blah.
We've all heard these discussions before.
Now, the implementations that I've seen happen around the world are not these doom and gloom scenarios.
People, companies, are utilizing the software to increase farm yield like never before, to increase efficiencies in IT systems, increase efficiencies in production systems, to improve lighting, in smart cities, everywhere.
And the thing is that these beautiful things that the software solutions are doing every moment of the day and expanding exponentially around the world aren't that interesting to tell people about.
Okay, I could cut 10% off my electrical bill.
This is not exciting, but it's much more exciting to talk about, you know, Judgment Day.
Well, 10% off my power bill would excite me, let me tell you, Thomas, at the moment.
Sorry.
Another bad example.
Very good, very, very, very, very good example, I'd say.
Okay.
Yeah, and I totally understand where you're coming from.
And, you know, there are many exciting vistas.
I was interested in one of the words that you used to describe some of the outcomes there from technology as we deploy it now.
You said beautiful.
Do you think these things and what they do are beautiful?
Yeah, of course.
Look, we, how, this is an amazing moment in time.
You can be going for a walk in the woods and you're relaxing and breathing the beautiful fresh air, the sun's hitting your face.
And we all have these moments of where we connect some dots in our head.
And we go, what if, blah, blah, blah?
And in that moment, with the software, you can actually ask the software.
And the software will check hundreds of thousands, millions of different references.
And in a moment, it'll come back to you and say, well, with 92% chance, what you said is possible, this, this, this, this.
And you can continue your walk.
But these moments of epiphanies that we sort of have, we not sort of have, we have all the time as human beings, we can actually get a result or an answer or insight to that epiphany, to do something with it and continue on this beautiful walk and have an epiphany after an epiphany after epiphany.
And we can actually save the planet.
We can actually start doing amazing things.
This means we give the power of the ages to everyone on the planet.
The amount of things we will solve, the incredible world we will produce instantaneously is in our hands.
This is where we're at.
We're not going to annihilate each other.
We're going to make this world a better place.
I am so not on this whole thing about annihilating each other.
Human beings love each other.
That is our premise.
That is why we're here.
And I'm a techno nerd and I'm a hardcore.
The deeper I get into AI, the deeper I reaffirm that we are here for love.
This is our moment.
This is not our moment to annihilate each other.
This is our moment to give each other hugs.
And I see this all the time.
We just have to get through the first layer and that's called insecurity and concern.
Once we get through that with the software, you'll end up at the place where everybody else is using software saying, wow, this is beautiful.
Okay, no, I understand all of these bright new vistas.
And, you know, a lot of them I'm really looking forward to, Thomas, I have to tell you.
I mean, look, technology has enabled us to have this conversation without me having to go into the office today.
It's a freezing cold day here in London.
You're used to it being absolutely sub-zero in Oslo.
Here in London, we're not, and we've got that this morning.
So, you know, this is great.
This morning, as we record this conversation, I can be here.
Technology did that for me.
I love technology.
But we have to be mindful of the fact that we are algorithmically profiled constantly at the moment.
And I don't know whether you've seen the book by Kathy O'Neill about algorithms and the fact that algorithms are not necessarily a good thing because they make decisions, block decisions about us that may be wrong and may be deleterious to us.
You know, that is part of the future that we're walking into.
It's not all sunshine and roses, although I accept that a good portion of it probably is.
No, it's not all doom and gloom.
Kathy, I absolutely disagree with that.
Listen, algorithms have been around.
Listen, when you got out of bed this morning, the bed sheet you had on, the determination of what was the price of that bed sheet, the distribution to get it to the store, which you bought it from, was all done by algorithm.
The breakfast you had this morning, it was probably 90% influenced by an algorithm.
Algorithms have been a part of our lives for decades.
This is nothing new.
This is nothing coming that's going to change anything.
This is the way it's been for at least 20 to 30 years.
We have been an algorithm-based society for that long a period of time.
So anybody that says that algorithms are scary doesn't know what the hell they're talking about.
This is fact.
All right.
Well, an algorithm, though, may logically and sensibly decide that a program like this, done by a person like me independently, has no place and that this kind of broadcasting can be most efficiently done by one huge organization and one show of this kind.
An algorithm would decide that.
But liberty, freedom, free speech tells us, no, we need diversity.
Isn't that so?
The algorithm will decide that, but the algorithm that decides that has been influenced or set up by a human being who says that those parameters are the best.
So the algorithm being influenced will end up with that determination.
Okay?
So don't just make believe as if it's just the algorithm making this determination.
It is influenced probably by somebody sitting at a desk who's looking to save money.
What's the best way to get our message to the masses?
And if that conclusion does not equal a human being, that is fine.
But you have to put those types of parameters into the algorithm.
So it's not just the algorithm.
There's always a human being behind it.
Also, in the future, there are going to be new positions.
You know, with the change in technology, these leaps in technology that we have every 50 years, depending upon what industry, old jobs disappear, new jobs appear.
So this is a normal thing.
I always talk about the guy in the elevator, the elevator man.
A lot of people don't ever remember seeing an elevator man.
I guess I'm super old.
I didn't think I was so damn old.
I know what you're talking about, but just explain for our listener.
Sure.
So the elevator man was typically a very nice gentleman, and it was almost always a gentleman.
I remember seeing a woman, which tells us how it was back then.
He had a hat on, a very long top hat, very high in height.
And as you entered the elevator, he would open up the gate because there was a gate and he would welcome you and he would ask you which floor you were going to.
You would tell him the floor or what you were looking to shop for.
He goes, oh, that's the third floor.
He would close the gate, and there was something on the wall which he would turn up and around.
And he would take you to the third floor.
When you got to the third floor, he would turn it up and back to stop the elevator, open the gate, and there you were.
There was no electronics.
There was no button to push in the elevator.
There was an elevator man.
I liked that.
I remember stores in London.
I think Harrods used to have that.
Maybe still does.
I kind of like that.
It's warm and human.
And, you know, the guy says to you, or the woman says to you, third floor, menswear, soft furnishings, you know, toilet requisites.
I like that.
I like that too.
And I believe that, you know, at least here in Norway, customer service is pretty much dead.
And that could be a whole other radio show.
But if you want to differentiate in the market, and we see it a lot in the U.S. with Zappos, how they have become so huge, gargantuan.
And they say, reason why, we are all about customer service.
Completely, whatever the customer wants, we do.
And so I believe that in an example like the elevator man, where technology, a button, a little LED light with the number two or three or four or five written on it depending on the floor, replaced the elevator man, if you bring that back, that little humanistic touch will make such a difference.
Everybody loves being greeted in the morning or you're having a rough day and you have somebody smile at you.
How are you doing today?
I think that's a wonderful thing.
Now, that is a perfect example: technology may have replaced the elevator man, but what makes us human can never be replaced.
And here's the challenge I give to anybody and everyone.
I want your listeners for a moment to think about the first time they fell in love.
Do you remember your first time you fell in love, Howard?
I do.
It's a few years ago.
That's okay.
It doesn't matter.
It could be a few lifetimes ago.
Do you remember?
And it makes me smile.
Yes, it does.
It's very intense.
Now, do you remember the first time you were kissed?
The first time you gave somebody a kiss?
What I want you to do for a moment is remember the moment, the moment just before you gave her or him a kiss.
Do you remember how your stomach was?
Terrified.
Exactly.
Do you remember you were sweating, but at the same time, you were cold.
Your toes were curling up.
You were a complete mess.
Your hormones were raging.
Your body was going through something like it's never been before, correct?
But totally.
I mean, were you there?
You summed it up.
We all were there, but we weren't sharing the moment.
My point is this.
That moment, code it.
Write an algorithm for that.
It's not possible.
That moment is what makes us human.
We don't have any idea how to code that.
We can make assumptions, but your moment and my moment are so drastically different.
The moment before you kissed her and the moment before I kissed the girl that I kissed is not the same thing.
And what it meant to us is not the same thing.
And how it changed our life is not the same thing.
You cannot write an algorithm for that.
Well, then that means that technology can never completely take over from us.
No, it can never completely.
No.
It could allow us to be the ultimate version of us.
If we want to use it that way.
Or we could use it wrongly, as many technologies have been used wrongly before.
But you also could use it to be the ultimate version of you.
Well, here's the problem, isn't it?
We are going to be, and I think a lot of people will have said this, and I'm saying this now, that we are going to be dependent on who is doing the programming.
And if it's somebody who's straight out of programming school, maybe they're 21, maybe they're 20, maybe they're younger, they have no experience of life, and maybe they've been taught to do what they do very quickly because everything's done to a budget and budgets are getting thinner all the time.
You are going to be depending on that person to program ethics, rightness, so many parameters into a machine essentially, and yet that's being done on a budget.
I worry about that.
No, no, I agree with you.
And as you're talking, the example that comes to me is Facebook.
Facebook's algorithm, somebody somewhere has written an algorithm that when I go onto my Facebook account, the news that I see has been decided by somebody else, right?
They're giving me their vision of the world, even though I'm seeing news from my friends that I have on my friends list, but it's also determining that this post from my friend, I will never get to see.
I will never know that they posted it, but it will make sure that it's influenced by saying you'll get to see this, this, this, and this.
So there are algorithms, and we have a perfect example today in Facebook that it's been influenced by Facebook and it's making us see Facebook's version of the world.
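To make that point concrete, here is a hypothetical sketch of a feed-ranking function. It is not Facebook's actual algorithm; the signal names and weights are invented for illustration. What it shows is exactly the claim being made: whoever sets the weights decides which posts you ever get to see.

```python
# Hypothetical feed ranking: a scoring function whose weights somebody chose.
# Posts that score below the cutoff are simply never shown to the reader.

def feed(posts, weights, limit=2):
    """Return the top-scoring posts; everything else stays invisible."""
    def score(post):
        return sum(weights.get(field, 0.0) * value
                   for field, value in post["signals"].items())
    return sorted(posts, key=score, reverse=True)[:limit]

if __name__ == "__main__":
    posts = [
        {"author": "close friend", "signals": {"friend_affinity": 0.9, "ad_value": 0.0}},
        {"author": "acquaintance", "signals": {"friend_affinity": 0.2, "ad_value": 0.0}},
        {"author": "sponsored",    "signals": {"friend_affinity": 0.0, "ad_value": 1.0}},
    ]
    # Whoever sets these weights decides whose version of the world you see:
    # here, paid content outranks an acquaintance's post, which never appears.
    weights = {"friend_affinity": 1.0, "ad_value": 2.0}
    for post in feed(posts, weights):
        print(post["author"])
```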
And I get really annoyed, and I do mean annoyed, at its constant suggestions of things that I should want to buy.
You know, I deliberately don't buy those things when I see them pushed at me by Facebook.
I don't want something else telling me what to think.
Facebook, you gotta understand, I spend a lot of time in Facebook ads, and that's a whole other different part of my brain set.
So Facebook ads, advertising on Facebook, from Facebook's point of view, they're just trying to make money.
You know, they saw Google make its fortune on advertising.
So they're going to push you as much as they want to push to you.
But in terms of a business, I think Facebook ads is an amazing thing.
Literally, for 50 cents, I can expose my product to hundreds of thousands of people.
Think about that when you think about marketing and communication departments who still buy magazine ads.
You buy a half a page, a quarter page in a magazine ad.
It costs you $50,000, $100,000, Euro, whatever you want.
And you hope that that magazine, that ad on page 37 that takes up a quarter of a page, that somebody, fortunately, is going to leave it open in some place of significance, or it's going to be just stored in some cabinet or some doctor's office.
Nobody will ever turn to page 37.
But you spent $50,000 on that quarter page.
But for 50 cents, I can guarantee to get my ad in front of tens of thousands or hundreds of thousands of people.
Understand what the business Facebook is doing with ads means, from an opportunity point of view, for young people, for startups.
This is revolutionary.
And those corporations that are sticking to the old world and the old way of doing things, I don't understand.
It's just nobody's there.
You're buying a piece of paper that you hope somebody looks at.
While Facebook and online people are there and their eyes are looking at the ads, you're guaranteed.
So I'm sorry for changing this up, but you just touched on something that's revolutionary, and that is that Facebook and the pushing of the ads, this is an opportunity to create new worlds, new economies of scale.
You know how many kids are in high school today that are making millions of dollars and their teachers are telling them you're going to amount to nothing?
And the kids are sitting there looking at them in a pair of jeans and going, don't you understand how much money I have?
And that's my Ferrari out in the parking lot.
This is where we're at.
Do you think this is another view of all of this is that a lot of this happens and we get young teenage millionaires and stuff like this because it's new, because we're playing with it, because we're so preoccupied with it, we're very aware of it.
Once it becomes as natural as breathing, then the bubble's going to burst.
It's not going to be that special.
It's not going to be that great driver of all good things.
Absolutely.
It won't be, but that is the nature of innovation.
And innovation is amazing.
It's creator of a bubble.
We ride the technology to the very end, and then it becomes standard, right?
No big deal.
But in that time, in parallel, other innovations are bred from that initial innovation.
So we need to have this innovation to breed more innovation and breed more innovation and breed more innovation.
That's what it's all about.
This is a natural progression of things.
The problem is it favors the economies of scale.
Now, you know, we'll both have studied economics, won't we, at school, university, whatever.
The economies of scale, we are taught, are a good thing.
If you can make a thing cheaper, if you can consolidate production, if you can get fewer people to do it, whoop-de-doo, that's great news.
However, the problem with this in a tech world is that we are now focusing wealth, power.
I mean, you work for IBM, but IBM's not the biggest player in all of this.
It's a huge player, but we're beginning to consolidate power over our lives.
And this is a philosophical question more than anything else.
In fewer and fewer hands.
That surely can't be a good thing, can it?
You're touching on a whole bunch of things in your one statement.
Understand, let's touch on the economies of scale, and then we'll touch on the other one, which is putting all the power into certain, just a few players' hands.
In terms of economy of scale, when everyone is using algorithms and code to reduce prices, increase efficiencies, lower prices to get into more people's hands, your competitor at some time, or all your competitors at some time, will have similar technology.
And there will be a point in which everybody cannot go any further down in terms of pricing.
And what that will leave is the human element and the innovative part of the human being and the mind and the creativity to distinguish you in the market.
So understand that the coming AI and the coming algorithm, not the coming, they're already here.
They've been here for quite some time.
And what they're doing is they're creating all these opportunities.
They're creating differences in the market and all that.
It's going to lead us to the same thing that we've seen before, is that we're all using similar technologies and we're back to being innovative.
Right.
Right.
So it's a little bit like somebody who runs a bus company.
Bus technology is freely out there.
Anybody can go and buy a bus.
You can get yourself licensed and start a company.
Whether you succeed or not depends on how you do it.
Yeah, right.
Yes.
I mean, if you see that what you're saying is that the technology to that level is going to be there and available to everybody.
It won't just be the big corporations.
So it depends on your human ingenuity to come up with an Uber or something that is revolutionary and different and see if you can make it.
Yeah, absolutely.
And that's why so many people, when they spend their time listening to these arguments and discussions and maybe this radio broadcast, what they should be doing is getting inspired and doing something with it.
This is their Uber moment.
This is their Facebook moment to create this innovation that's going to change the world.
And I think people undercut themselves. Do you know how amazing you are?
You know, you have the technology, the artificial intelligence algorithms; what they're doing is computing hundreds of millions of different data points, taking into account data from the environment, from electrical grids and from databases.
And they're doing this all instantaneously for almost nothing.
And they're giving you this ability to ask it a question.
My God, what do you want to create?
How do you want to change the world?
But people sit back and they go, this is scary.
Don't say scary.
Go for it.
Be the next Uber.
Be the next whatever.
This is an incredible opportunity moment.
And yet there's so much.
And you keep asking me questions as if this is scary and this sort of like dark side.
I don't exist on the dark side.
My job and what I do is I see the upside all the time.
All the time.
I always start every meeting I have, every discussion, every presentation I give, and I present all over the world.
I speak at conferences all over the world.
Everyone starts with the dark questions.
But people, you should see when a person gets that moment of gift, when they understand what I'm saying, they go, My God, it's fear.
My fear is holding me back.
I could do any of these things.
I go, exactly.
Do you remember when you were a kid and you were curious and you wanted to conquer the world?
You wanted to be a fireman.
I wanted to be a doctor.
Nothing's going to stop me.
Go back to being a kid.
For the first time ever, you have the access to the experts, thousands of experts, all at the cost of nothing, the cost of a cup of coffee.
But of course, you can't do any of that if a robot's taking your job.
I'm talking about the software.
I'm talking about the ability that the software brings you.
That's what I'm talking about.
But if you're in a situation, and we don't want to dwell on the dark side, I want to get to the cool stuff too.
But if you're in a situation, but I have to ask these questions because these are the hot button questions right now.
If you're somebody who's been turfed out of your job because of the march of technology, however excellent it may be, your ability to access technology, to dig yourself out of that and recreate yourself, is going to be, unless we make changes in society, pretty limited.
Yeah, okay, I agree with you completely, but understand that in that sentence you used the word technology, and you can take that word out, and if we go back 10 or 20 years, you'd put something else in, right?
When the choo-choo train came, it replaced another industry.
When the automobile came, it replaced the horse and carriage.
This is just the latest thing that's coming to replace.
Don't just say that this is the technology, that the algorithms are going to end everything.
This is just the next one in a line of successions of innovations that come and replace things that were in the past and make things more efficient.
Do you believe that there will be in the new world, and I know that you're not in government, but do you think that in this new, very ultra-high-tech world, that there will be a place for, a job for, a niche for every person, even people who've been displaced by the rise of technology?
I believe that in this new world, the new world will look like the old world.
And those people who aspire to be or do something will be that and will do that.
I stay away from that whole thing of everybody will have a job, because I grew up in a very poor neighborhood where people were second, third generation on government assistance.
And they loved hearing that, because that meant they were going to sit on their sofa and not do anything and the government was going to take care of them.
So understand, it's only because of my childhood that when I hear that, I get very cautious, because, no, if you're not ambitious and you're not going to do something, then nothing's going to happen for you.
And I know that's very heavy, but I'm tired of that because these are people I grew up with.
If, in the new world, as you so call it, you truly want to be something or do something or accomplish something, then even more than before there are abilities and technologies and software and solutions to make you be that and become that.
But will everybody have a job? If you want it and if you're hungry for it.
But I don't think there's going to be any more handouts in the future than there are today.
I think just with my past, you've got to work really hard to get where you're going and you can't expect things to be given.
You will accept, I guess, I hope, maybe you will accept that this technology will displace a lot of people and on a scale that we haven't seen before.
And somewhere down the track, and it's a big philosophical issue, probably not for us and not for today, somebody has got to come up with a plan for those people, you know, an idea for those people, an ethos by which to live.
I don't agree that it's going to displace people on a scale that we've never seen before.
I disagree.
I disagree.
Let's see that one play out.
I don't believe that at all.
Do you believe the big tech firms who employ technology that displaces human beings ought to be taxed extra because they will be gaining the rewards from that?
They'll be gaining all the benefit from the investment in capital.
They won't have to pay people anymore.
Do you think that they, this is an argument that's raging here at the moment, that they should be taxed a little more?
You can't just ask me a general tax question because tax, understand, before I worked with IBM, I was director of innovation for the Norwegian tax authorities.
So you're asking me a tax question.
I cannot accept a general question like that about taxes.
Taxes are very specific to an industry, to a corporation, to a region.
So you can't ask me such a general question.
I know too much about taxes.
I missed that line in your biography.
That's fantastic.
And so tax is also nationalistic and all that.
Every country has its different reason for taxing different corporations and things like that.
But you know what I'm saying?
The profits will accrue to the corporations and they will have a lower wage bill.
So they'll be making more money.
Whereas the people who work for them may well be, unless they're very innovative in your terms, will be making less money.
But, look, that is true.
That is true if, on the timeline going forward, the definition of a corporation and the quantity of corporations and the ratio of corporations to private people stays the same.
But as we go forward and as the corporations benefit from the algorithms and the AI going forward, we, the parallel line, now I'm talking like an engineer.
I apologize to the readers if I bore them.
They're listeners here.
The parallel line is the private individual will realize that they too can become a corporation.
They too can benefit.
And all of a sudden people are going, hey, these tax breaks for the corporations are great because my sole proprietorship, I'm benefiting too.
Remember, never has it been so easy before for a single individual to become a corporation and then to benefit from the technology and the software and then reap the benefit from the taxes.
You cannot, because the only assumption you're making is saying that everything's going to stay the same and the corporation is just going to benefit.
Nothing stays the same.
That's why we're having this discussion now, because the future will change drastically.
Everything will change.
The definition of what corporations are, private people are, the definition of an algorithm change, everything will change in the future.
Nothing's going to stay the same.
So that question is not valid.
It's only valid for a short period of time.
I mean, looking forward.
I'll argue against myself now and say that here I am sitting in my own accommodation, recording this conversation with you totally independently.
When I first came into all of this, and that's a quarter of a century ago, I had to depend on going into studios.
I had to use other people's equipment.
You know, I had to let them to some extent call the shots.
Now, because of technology, in my little tiny world, I'm empowered.
So I partly understand what you're saying, you know, because it impacts on my world.
Absolutely.
30, 40 years ago, what would you have been the equivalent of?
What would I have been the equivalent of?
Well, I don't know.
The same job description, probably, broadcaster, journalist, all those things that you do.
No, but my point is that 40 years ago, you would have been the equivalent of a major radio station.
Your ability to broadcast the number of people around the world that you're communicating to.
Well, I would either have been, there was a choice.
You'd have been employed by a big company and you'd have used their facilities and gone on their radio station.
Or you try to somehow send tapes out to people or send newsletters out or whatever, but you couldn't communicate with people directly in the way that you can today.
It just wasn't possible.
How many people listen to your show?
A lot.
A lot.
Let's just say it's a mass audience and it's growing all the time.
Exactly.
And 40 years ago, what would have been the cost to get that mass audience?
More money than I could have dreamt of because you would have had to hire satellite time and buy a radio station or whatever.
Thank you.
Thank you.
That's my point.
You just proved my point.
Okay.
So look, okay.
You know, that's my point.
To that extent, I'm on side, but a lot of things I worry about because we depend, as you say, maybe I'm less concerned about the machines than the people because the people are the ones who do the programming and they're the ones who may not foresee some of the consequences that they should have foreseen.
For example, you will have known the story a couple of months ago about the chat bots.
This appeared in all of the media and maybe it was misreported.
Two chat bots started communicating with themselves in a language that they devised to make it easier for them.
They had to be turned off.
Goes the story that made all of the mainstream media.
That's a bit of a concern, isn't it?
And that's not their fault.
They were programmed by somebody.
No, because why?
So I have to undo almost all these stories in the media.
Listen, you have two twins.
No, you're not two twins.
You have twins.
Not two twins.
You have twins.
Sorry.
You have twins.
And as they grow up, it has been proven that twins form their own language because they have a shortcut.
They understand what the other one's going to say, the communication, all that.
They do form their own language.
You see movies about this all the time.
It is the way in which we as human beings communicate, and software too.
It is the nature of communication in which you find a shortcut to communicate the same message.
What the software did was it just found shortcuts the same way we find shortcuts.
The media took the story and made it to be the most scary thing in the world.
It is just the way communication is.
It is the same thing as, and I don't know Japanese, but I've seen in some movies where they make fun of languages.
They have a Japanese person talking for like 30 seconds and then the person, the English translator, says one word and vice versa.
An American talks on for two hours and a Japanese person says one sentence.
You know, there are shortcuts.
It is just the nature of linguistics.
So that story was another example of the media hyping up something that is so fundamental to just communication, but it makes for a great story to scare the crap out of people.
But it's just the way we communicate.
No, I don't buy that one either.
And I'm not trying to discount a little bit of negativity.
Well, these stories have no validity.
Okay, well, neither of us was there.
We don't know.
So I'm talking about something that I read in the papers, right?
But if the story was correct, then these two chatbots devised a language that the people who put them together and oversaw them could not understand.
And that's where it started to get worrying.
Maybe that's misreporting.
I don't know.
And maybe it's true.
And what's wrong with that?
Is that bad?
Well, if I want to know what they're doing, yes, it is bad.
You know, maybe a couple of computers come up with a language that says, okay, we're going to push the big red button that detonates a nuclear weapon at 20 past three this afternoon, but the people overseeing those programs don't actually understand what they're saying.
That's a bit of a concern, isn't it?
Back to the doom and gloom.
All right.
Listen, we could argue this around and around and around.
Understand this.
Today, right now, we have sewer systems that are run by algorithms.
We have algorithms that make sure that the sewers of your city, my city, are not backed up.
And they're doing things and communicating at millions of times per second and making sure everything is going very, very well.
We don't check on what they're doing a million times every second.
We assume that they're doing what they're supposed to be doing right.
They have conditions and things built into them to make sure that based upon this, they do that and all that.
And we trust them.
You know, again, and I repeat, the software, this moment we have in time, is one of the biggest opportunities we have to create the society, the world that we always wanted.
If you want to spend time and say that everything, you know, this is scary and all that, then you're missing out.
Well, maybe we can agree on this.
The fact of the matter is that no matter how many people may email me, may complain, may write in newspapers that it's a terrible thing and, you know, the world is going to hell in a handcart, as we say here, and everybody's job's going to go and all the rest of it.
The fact of the matter is, we both know there is no stopping it.
So what we have to do is to control it in a humane, decent, and far-sighted way.
Yes, we could definitely agree with that one.
Finally, we agree on something.
But no, I had to put all this stuff to you because this is the hot button stuff that people are talking about, especially as we go into 2018.
Because look, robotics and AI has been the big topic, apart from space research, in this last year.
If you thought it was big this year, it's going to be huge next year.
We all know that.
And on that theme, what do you make of Sophia?
Now, I don't know whether Sophia is exactly what she appears to be, But Sophia, the robotic person designed by a company, I think, called Hansen Computing, who is now a citizen of Saudi Arabia and is being taken on a tour of the world, answering questions in a very friendly fashion to people on stages everywhere.
What is that all about?
Where's that headed?
I think Sophia is a nice piece of plastic with a bunch of wires inside being run by algorithms and software.
Nice job.
A good project.
She looks not quite human, but on the way.
What happens when we devise robots that are exactly like us?
Then, oh boy, then the sex industry grows even bigger, I guess.
But that's a whole other topic.
And of course, that is absolutely burgeoning now.
And that worries me tremendously because then that's a bit of a threat to humanity.
I mean, not only the fact that, well, however they may be devised, you can't breed with them.
But mentally, I don't think it's good for the people who might want to get involved with those things.
I'm trying to put this delicately, if you know what I'm saying.
I worry about our humanity.
Sure.
Let me just go back in time and explain why I went to the sex industry, because again, I've been at this thing for a while.
So, going back 20 years, back in the early days of voice over IP, or internet telephony, it's not even called those things anymore.
We don't talk about it.
What makes Skype, Skype, the ability to use the internet as the ability to communicate with voice and video.
In those early days, I remember I was working for a company called, well, I shouldn't say the company, but I was working for a company.
And we were approached by Playboy Magazine.
And this is, well, years before the dot-com thing went big.
And Playboy Magazine offered us a million dollars for us to use our software to white label it, which means Playboy would just put their branding on top of it, and they would provide it to the whole world.
And we had voice and we had video and all that.
But my point was that that business, the sex business and all that, has funded so many technological leaps.
They funded the next versions of the best audio and video codecs.
A codec is software that converts audio into digital form and back again.
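As a toy illustration of that definition, and not any real production codec, here is a minimal Python sketch that quantizes an audio signal into digital samples and converts it back, losing a little precision along the way. Every name in it is invented for the example.

```python
# A toy codec: turn a continuous signal into digital samples (encode)
# and map the samples back to an approximate signal (decode).

import math

def encode(samples, levels=256):
    """Quantize samples in [-1.0, 1.0] to integers 0..levels-1 (analog -> digital)."""
    return [round((s + 1.0) / 2.0 * (levels - 1)) for s in samples]

def decode(codes, levels=256):
    """Map the integers back to approximate samples in [-1.0, 1.0] (digital -> analog)."""
    return [c / (levels - 1) * 2.0 - 1.0 for c in codes]

if __name__ == "__main__":
    # One cycle of a sine wave, 16 samples
    wave = [math.sin(2 * math.pi * i / 16) for i in range(16)]
    digital = encode(wave)
    restored = decode(digital)
    # The round trip loses a little precision: that loss is the quantization error
    print(max(abs(a - b) for a, b in zip(wave, restored)))
```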
They are probably one of the best people at sensing where the next big trends are.
They're going to change things.
So they actually are the reason why the internet has a lot of what it has; a lot of the technology behind the internet was actually funded by that industry.
And now with artificial intelligence and these robots that look like human beings, they're funding the hell out of that.
So I think it's inappropriate.
A lot of people get sick about the discussion, but understand that they are the reason why we made so many technical leaps, because they have the money to fund these type of things.
So just-Well, you know, I don't think we have the time to get into the morality of any of that, because it is such a multi-level discussion about the- And it's an unpalatable fact.
But fact it is.
But as I say, it's a multi-level moral discussion that...
And I apologize.
Just that I was there when the whole thing happened.
But you're absolutely right.
It has been such a driver and I guess will continue to be so.
But, you know, robots like Sophia are a great party trick.
It's great to see her answer questions.
I'm not sure how much preparation she's had for those questions, but it looks damned impressive.
Now, if you think of the pace at which this is all marching forward, where are we going to be in five years?
Not alone, 10.
This is just a guess.
Five years is way, way, way far away in a technological world, right?
I think that five years from today, we human beings will be hybrid, where we will have technological pieces in us, electronic pieces in us.
And that, again, brings up big moral and ethical discussions as well.
I'm not saying it's right or wrong.
I'm just saying that this will go on.
We will rush to this.
And the reason why we will rush to this is because accidents still will happen.
People will lose arms.
Injuries will occur.
And anybody who's listening knows, if they have somebody in their family who is sick or something happened to them, they will do anything, anything to help that loved one to come back to have a quality of life.
And technology is becoming mature all the time.
And it's going to allow us to help our loved one and bring them back.
I don't know if bringing them back is appropriate, but to have a better quality of life.
So I get the moral ethical discussion.
I'm not saying it's right or wrong.
I want to stay away from that because there's some uses of this that is just wrong and I hate it.
But if you have someone sick in your family and they're suffering, then with technology, with 3D printing today, you know, there are amazing stories out there: you could 3D print an arm that could actually pick up a can.
And there's a gentleman who's actually doing that and he provides all the software for free on the internet.
And I think that, for $5, he's provided over 50,000 people around the world with a 3D-printed arm, from the elbow down, and a hand that can pick something up, for kids who never had a hand or lost one, for $5.
You could give somebody an arm.
We shouldn't stop this, right?
We actually should fund it.
But we have to be careful because merging the electronic world into our human body and all that brings us to someplace very different.
But to help people who are sick or have been hurt or lack something, absolutely.
But where we're going, it's very, very, I use the word interesting because it can be, I don't know if I would, this is a question you have to ask yourself.
Do you want to have chips all inside of you?
Do you want to have nanobots running through your blood cells, killing parasites and bacteria before they cause any damage inside of you, so you'll never be sick?
Selfish human answer to that is, you know, if they're not going to give me further health problems, if they're going to enhance my life, if they might make me live a little longer and be a little healthier while I'm doing it.
Probably, yes, I do.
If they fundamentally change me and what I am, then I'm not sure about that.
I mean, I love the idea, and this is going off on another tangent.
I love the idea of being able to save the essence of whatever I was, or let's leave me out of it, maybe more worthwhile people than me, but to save my memories, the essence of me, in some electronic way, so that when I'm not here anymore, or when a person of worth is not here anymore, we can access their wisdom.
Now we're heading down a slippery slope into a very interesting discussion.
So if we look at ourselves like a computer, your memory, your brain, parts of your brain, I'm sorry, are like a hard drive.
And I know there are technology companies today who are looking at saying, if we're like a computer, can I download all of Howard's, we'll leave you out of it, sorry, all of a person's memories into an actual hard drive.
And then I use some algorithm to simulate decisions and stuff like that.
And Howard could then live forever.
And then you look at Sophia in which she has the, I don't know what her skin is made of.
It was silicone and I'm sure they're using some more advanced plastic polymers and stuff like that today.
But then we're moving into the world of androids and clones and things like that.
I mean, from a technical point of view, if I just continue this discussion as the nerd I've been talking like, from a technical point of view, we are heading towards that at a rapid rate and we're going to be very good and successful at doing that.
The question then, when I really leave my nerd side and go to the person who I am, Thomas, the question is, who am I?
Is that who I am?
Am I just a hard drive of memories and algorithms that simulates with the assumptions and conclusions I would make?
Or am I more than that?
So that's where we've come to a much bigger discussion.
But that discussion we need to start having because the technology is racing there because the bits and pieces of technology are helping to save people's lives or to replace limbs that are missing.
But when we put all that technology together, we could say, you know what?
I could pick you and put you inside of that and you can live forever or a version of you can live forever.
So in the next five years, we're going to have that type of discussion probably sooner than five years.
I can see that it would have practical uses.
For example, if you were able to clone a human being and send the clone to be the first feet on Mars so that they don't have to risk what you'll have to risk, that would be a good thing.
But again, we come back, don't we, to the theme that we've had all the way through this.
It all depends on the human factor.
People having to make decisions and who we entrust those decisions to.
The best that we've got now are the governments that we have.
But the problem is that this is a big world.
It's all interconnected.
We're going to be pushed more towards global governments because you have to have the same standards of decisions, hopefully, made everywhere.
Otherwise, you get chaos.
So I agree with everything you said, except that we have to trust the other people.
No, that's wrong.
The software is being put into your hands.
Right now, things like Facebook, we have to entrust.
Absolutely.
There's no doubt about that.
Because they were the ones that had all of these really super smart artificial intelligence algorithms, things like that.
But there are startups there now.
Oh, man, I think the number of startups is increasing exponentially, logarithmic, every single year in terms of artificial intelligence startups creating new algorithms.
And their purpose is to give you, the listener, you, the end user, the power, not the corporations like Facebook.
That's what people are not emphasizing enough.
That where we're going is that the software companies, the new software companies are enabling the end user, the human being, to make the decision.
The first generation of these companies were the big players, and they all focused on reselling this to major corporations who have the funds to make it profitable.
We understand that's business, no problem.
But the speed in which this thing moves is that the mature software is coming out and is letting the end user benefit without having to rely upon Big Brother or government or corporation.
That's the difference, and that's the side of the story that nobody's spending enough time on.
We, the people, you, will be more in charge of your life and you'll have the ability to achieve the destiny that you always wanted to achieve.
We don't have that today with our limitation with the way the government is set up and all these other things we can branch off in that discussion.
But the technology and the algorithms are coming to help you and give you more power back.
It's not entirely in my hands, though, is it?
Because I am, to a large extent, kept within bounds by whatever limits there are of the current Windows operating system, for example.
I am limited by the broadband speed that is available to me in the place that I live, that is provided in this country by one organization that puts in the infrastructure.
So it's not entirely up to me, everything.
No, no, no, but you're proving my point again.
So exactly what you just said now are the limitations of our 1.0 society, okay?
That we, the end user, did not have the ability to control.
Everything was limited and controlled by some bigger entity, some bigger corporation.
That 1.0 society in which I didn't have the control is slowly coming to an end.
And this new society where the end user has control is emerging.
So exactly what you said is exactly what my point is.
Just before there was electricity and you hit a switch, remember that, can you imagine being there the first time you hit a switch and a light bulb came on in your room, your kitchen, your bedroom, living room, and you never had light there in the evening ever before.
This was a revolution.
It went on to change the entire world.
We're at that same point, in which the 1.0 society, where someone else makes the decisions for me, is coming to an end.
And the AI is coming.
It's going to let me have the life that I wanted to have and allow me to have the life that I control and I make the decisions.
So we're not racing towards Big Brother.
We're racing towards an individualist society in which Big Brother is even more paranoid than it was before.
Oh boy.
You've given us a lot to think about, and look, I've been putting to you some of the points that have been put to me, and some of my own concerns, admittedly, as well.
But some of the people who said to me that they have misgivings are at professor level, and those are the things that they have been saying.
But I don't doubt we are going to be enormously empowered by technology, as you say.
And you know, just speaking personally, I'm really excited because I know what it's done for me, and I have a feeling in my gut of what it could do for me.
And I want it to do that.
So I want to march into that future.
I'm just concerned about the human element and how we control it.
But I think we've sort of agreed on how important that is.
Final question for you, and thank you for doing this in Oslo, Norway today.
Looking forward, we're staring down the barrel of 2018.
You know, a new year is always a big and scary prospect.
This has been the year of technology.
As you look forward to 2018, what is the thing that excites you most about the prospect of 2018?
What should we be looking for?
Voice.
Voice is the most exciting thing going forward.
2018 is the year of voice.
Now, what I mean by that is Amazon Alexa.
I'm not sure if everybody knows what that is.
I think they do by now, yes.
Yeah, by now, exactly right.
But in some countries, like here in the Nordic countries, we don't have Amazon.
I mean, you can get Amazon orders shipped to you, but we don't have Amazon Alexa and its services.
So Amazon Alexa is this device that sits in your living room.
You can talk to it.
It talks back to you and all that.
This is incredible.
The voice, not just Amazon Alexa, but voice.
And let me give you examples and explain to you why.
You can take, and listen, let's combine the artificial intelligence discussion with this one and put the whole picture together, because I've seen it done.
You can train the algorithm, the software, to be an expert in whatever you are passionate about.
So I'm talking to everybody out there who's passionate about building model airplanes, passionate about collecting stamps, whatever you're passionate about.
You can teach the software to be that expert in that area, right?
And then you can take that software that's an expert in that area.
Amazon, as well as Microsoft and Google, have APIs, which are interfaces to their software and their services, that allow you to provide this expertise to all of the people who have, let's say in Amazon's case, the Amazon Alexa.
That means that your voice, your knowledge is going to be exposed and provided to everyone in the world.
You now, your love, your passion, what you wake up in the middle of the night saying, wow, you know what I could do tomorrow?
Everything.
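To make that concrete, here is a minimal sketch of what one of these voice endpoints can look like: an AWS Lambda handler answering Alexa requests for a hypothetical model-airplane advice skill. The intent name, slot name, and hard-coded answers are illustrative assumptions, not anything Thomas described; a real skill would put your own trained expertise behind the same interface.

```python
# Minimal sketch of an Alexa skill backend as an AWS Lambda handler.
# "ModelAirplaneAdviceIntent", the "topic" slot, and the ANSWERS dictionary
# are hypothetical stand-ins for whatever expertise you build yourself.

# In a real skill this could be a trained model or an external service
# instead of a hard-coded lookup table.
ANSWERS = {
    "glue": "For balsa wood, a thin cyanoacrylate glue usually works well.",
    "paint": "Light acrylic coats keep the airframe from getting too heavy.",
}


def build_response(text, end_session=True):
    """Wrap plain text in the Alexa Skills Kit response envelope."""
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": text},
            "shouldEndSession": end_session,
        },
    }


def lambda_handler(event, context):
    """Entry point Alexa calls for every voice request to the skill."""
    request = event.get("request", {})

    if request.get("type") == "LaunchRequest":
        return build_response("Ask me anything about model airplanes.", end_session=False)

    if request.get("type") == "IntentRequest":
        intent = request.get("intent", {})
        if intent.get("name") == "ModelAirplaneAdviceIntent":
            # The "topic" slot is defined in this hypothetical skill's interaction model.
            topic = (intent.get("slots", {}).get("topic", {}).get("value") or "").lower()
            answer = ANSWERS.get(topic, "I don't have advice on that topic yet.")
            return build_response(answer)

    return build_response("Sorry, I didn't catch that.")
```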
If you're a person like you who's running a radio station and you want to make more money, you want to go up against the big guys, you're going to do that.
You're going to financially make money because the platforms like Amazon are being built.
Who's paying for it?
The end user, paying $99, as of this broadcast, for an Amazon Alexa.
They're paying to have you in their house and your expertise provided to them 24 hours a day.
I've seen people put in all the knowledge of an insurance company, launch the service, and now they're a global insurance company.
Say a tree falls on my house at 3 o'clock in the morning.
I need to contact my insurance company.
I'm probably going to be put on hold for half an hour before somebody wakes up.
But there are people, a single person now, providing an insurance company on Amazon Alexa.
And I can have a discussion with the insurance company.
They will fill out the form.
It'll make an appointment for me.
It will do everything moments after this has happened.
They're there for me 24 hours a day.
They're there when I need them.
The expertise is there all the time.
This is a global scale that we've never seen.
People and their passions, no matter what you're into, are going to be accessible to the entire world.
The platform is built by these big players, and the end users are paying for it.
2018 is the year of voice.
That and podcasting, the fact that people are putting their passion into their voice and providing it to the world. 2018 is voice.
It is insane.
Your business, your revenue, your thinking should be in voice.
And what is voice?
It's just people communicating, going back to what we've always done.
The revolution has taken us back to take us forward.
So the answer to your question is all about voice in 2018.
Phew.
I hope we talk again, Thomas.
Thank you very much.
And you've given us all a lot to think about.
And I'm going to go away and have a cup of coffee and have a big think, I think.
Listen, I'm wishing you a great 2018.
I hope it's good for you.
You too, Howard.
All the best.
Thank you very much.
It's an honor to be on your show.
Thank you.
Thank you.
Thomas Anglero, fascinating man, your thoughts about him.
And thank you to him for giving me that amount of time to talk about these issues that we need to address again here because I think they are pretty fundamental to each and every one of us in our lives, whether we think they are or not.
Thank you very much for being you, for being there for me, and I wish you a great holiday season.
And I hope that you live your dreams in 2018.
Don't let anything hold you back because, you know, life's too short, I guess.
So thank you.
Emails always welcome across the holiday season.
Your thoughts for what I can do with this show in 2018, gratefully received.
Thanks for being there.
My name is Howard Hughes.
This has been The Unexplained.
I am in London.
And until next we meet, please stay safe.
Please stay calm.
And above all, please stay in touch.
Thanks very much.
Take care.