Edition 379 - Fevzi Turkalp
Tech expert Fevzi Turkalp on the upsides - and downsides - of our technological future...
Time | Text |
---|---|
Across the UK, across continental North America and around the world on the internet, by webcast and by podcast, my name is Howard Hughes and this is The Return of the Unexplained. | |
And I use the word return this time very pointedly and for a reason. | |
Because this show marks the start of the brand new website, which is now live. | |
Thank you very much to Adam Cornwell from Creative Hotspot in Liverpool for all of the hard work that he's put into devising the website and its various new features that you will discover. | |
We've got a guest suggestion platform, a new way to contact me, and a complete new look for the website. | |
Now, the one thing that I will say is that, of course, anything new in tech is going to take time to bed in. | |
There may be problems initially. | |
We don't know. | |
We hope it all works perfectly from the get-go. | |
But if you spot any issues, if there's anything that isn't working that should be, if you're having trouble contacting me, for example, whatever it might be, or if you think there's something that should be better on the site or something that we haven't included and we should, then please get in touch and let Adam Cornwell and myself, Howard Hughes, know about that through the website. | |
So the brand new website, by the time you hear this, should be live. | |
It's www.theunexplained.tv. | |
And it's a whole new look. | |
It's fully searchable and it's got a lot of stuff that I'm very excited about. | |
Plus, as far as you can be, it's future-proof. | |
The old website didn't allow any scope for developing in the future. | |
This website certainly does. | |
There are many things that we are going to be able to do in the future that we simply haven't been able to do until now. | |
One thing I'd like to say about donations to the show, obviously, this is a free show, and we know that there are some shows out there that are making vast, vast fortunes through subscriptions. | |
Up to now, we have not done that, and I've wanted to keep it free so that everybody can hear it. | |
But in order to be able to move forward, then donations are very important. | |
The way that I put it on the new website is that if you go to one of the big coffee vendors in the US or the UK, you're going to pay £2 for a cup of coffee, $2 in the US, just for a basic standard cup of coffee. | |
And that will give you pleasure for 10 minutes or so. | |
Hopefully, if you like these shows, then they will give you pleasure for an awful lot longer than 10 minutes. | |
And of course, you can always come back to them. | |
So if you can make a donation to the show, please go to the website theunexplained.tv and please make one if you can. | |
And if you have donated recently to the show, thank you so much for that. | |
You know, we're very grateful to you. | |
All right, the guest on this first edition then on the brand new website is Fevzi Turkalp, the man known in the UK as the Gadget Detective. | |
That's his title and that's definitely what he is. | |
He knows so much about technology, future technology, current technology, every kind of technology. | |
I've done a lot of work on radio with Fevzi and I regard him as a friend and I know that you like him too. | |
He's been on this show both online and on air before. | |
So, Fevzi Turkalp, the guest on this edition of The Unexplained, and we're going to talk about future tech. | |
The risks and rewards of technology as it develops through this year of 2019. | |
Of course, it is always progressing exponentially. | |
It's always moving at a great pace, isn't it? | |
And sometimes that's a good thing, and sometimes it carries with it risks and downsides that we find hard to foresee until they're upon us. | |
Those are the things that we'll be discussing in the next hour here with Fevzi Turkalp, the UK's Gadget Detective, on The Unexplained. | |
Remember, when you email me through the new website, then please tell me who you are, where you are, and how you use the show. | |
So go to the website and click on the contact link or the My Story connection that we now have or the guest suggestion tab that we've got as well. | |
And you can contact me in various ways through the website. | |
But always, when you get in touch with me, if you're emailing me to shoot the breeze, make a suggestion or reflect on the show, tell me who you are, where you are, and how you use this show. | |
Last words as we launch the new website after 13 years online this year. | |
Last word from me really is thank you very much for being here with me. | |
All right, let's cross London now, about 20 miles away from here. | |
It's beginning to snow and it probably is there too. | |
Let's speak with the Gadget Detective, Fevzi Turkalp. | |
Fevzi, thank you very much for coming back on the show. | |
Pleasure. | |
Well, Fevzi, as we record this, you're 20 miles away from me or thereabouts across London. | |
I've got snow here. | |
How are you doing? | |
Yeah, we've got snow as well. | |
I'm not sure if it's going to settle, but I didn't expect to see that today. | |
That's one thing technology cannot control, Fevzi. | |
Technology can do so many things as we're about to discuss. | |
The weather is not one of the, well, actually, there are circumstances when technology can influence the weather, but in the normal run of things, with you one side of London and me the other, there's nothing much we can do about this. | |
And some people actually like it. | |
I am not one of those people, Fevzi, as you know. | |
I know. | |
Now, I have to explain to my listener that Fevzi and I have known each other on radio for many, many years. | |
So, you know, if we talk like two people who've known each other for a long time, that's the reason. | |
That's so, isn't it, Fevzi? | |
No, I've never met you before in my life. | |
Well, no, sadly you have, but if you were meeting me for the first time, you'd wish you hadn't, I think, probably. | |
Okay, we've got a whole list of topics here that I think will be good and salient. | |
I want to start with something here just for a couple of thoughts, really. | |
And it's not, you know, to diss the industry or anything like that. | |
It's just a question I've had from one of my regular listeners called David just before we started doing this. | |
An email came in. | |
And David has been wanting me to talk, as I have tangentially on the radio show and other places, to talk about 5G phones, the 5G networks that are being set up in the US and UK now and other places as well. | |
David is very concerned about the risks from those. | |
And of course, if you do a casual look at the internet, you will see not only websites that are quite scary to look at about 5G, but you'll also see some articles like one I looked at on the Daily Mail newspaper website today where academics are raising questions about the risks of operating a mobile phone service and a mobile data service at those kind of frequencies. | |
Just briefly, have you any thoughts on that for David? | |
Sure, yeah. | |
So 5G is the fifth generation of mobile data. | |
And with every generation that comes, we have a number of benefits. | |
The main one is speed, but it's also to do with the number of connections that can be made at a single time. | |
So, 5G will allow far more devices to connect on the cellular network at a given time. | |
And the reason why that's important is because you have all these Internet of Things-type devices, not necessarily just the ones in our homes, but more the ones on transport systems and so forth. | |
So, they need to be able to get wireless internet. | |
And, of course, all our cars and everything, if we still have them by then, we also want to be able to get sort of internet as you sort of drive across the city. | |
So, the benefits of 5G, we'll start with that, are speed and the number of connections that can be made at the same time. | |
In order to have more speed, the waves that carry the data through the air will have more energy in them. | |
They will be of higher frequency. | |
And that means two things. | |
First of all, there is more energy that can potentially be absorbed by the human body. | |
And it also means that the range tends to get shorter. | |
So even when you've got home Wi-Fi, I don't know if you realize, but you can have two different frequencies. | |
One of them is good for high speed, and the other one is good for reaching the furthest parts of your house. | |
So we use a combination of the two. | |
So with 5G, we're not just going to have a lot of cell towers, and we'll have to have more cell phone towers than we have at the moment, but we'll also have these little repeaters dotted all over the place to make up for the fact that the range of the signal is not very great. | |
Again, that means that we tend to be close to these devices. | |
So there is a danger that there could be an increased risk to health from these things because there's more energy. | |
The energy can be stopped more easily and therefore our bodies may absorb more radiation. | |
Are you saying, Fevzi, I'm sorry to interrupt there, but are you saying that the potential problem, and we don't know if there is one because there hasn't been the kind of research that you would need to know that, but are we saying that the problem is more to do with the proximity of more towers than the actual frequencies being used? | |
It's both. | |
The two are interlinked. | |
So electromagnetic radiation follows something called the inverse square law. | |
That's to say, if I halve the distance between the source of the radiation and me, the radiation I'm exposed to is four times as much. | |
If it's a third, it's nine times as much. | |
If it's a quarter, it's 16 times higher. | |
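The inverse square law Fevzi cites can be checked with a few lines of arithmetic. This is a minimal illustration of the scaling only, with arbitrary units, not any kind of real dosimetry:

```python
# Inverse square law: received intensity scales as 1 / distance^2.
# Illustrative arithmetic only -- arbitrary units, not real dosimetry.

def relative_exposure(distance, reference_distance=1.0):
    """Exposure at `distance`, relative to the exposure at `reference_distance`."""
    return (reference_distance / distance) ** 2

# Half, a third, and a quarter of the distance, as in the examples above:
for fraction in (0.5, 1 / 3, 0.25):
    print(f"at {fraction:.2f} of the distance: {relative_exposure(fraction):g}x the exposure")
```

Halving the distance quadruples the exposure, a third gives nine times, a quarter sixteen times, matching the figures quoted in the conversation.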
So by the time you hold one of these to your head or anywhere near one, you know, you can have higher degrees of radiation. | |
Now, the difficulty is this. | |
This is 5G. | |
Scientists are still researching and arguing over the possible health effects of earlier generations. | |
And it's one of these cases where the technology is overtaking our ability to assess its safety. | |
What I can tell you is that the British government's advice is that, particularly for children, mobile phone use should be minimised as a precautionary measure, because we do not know how dangerous they are. | |
And if you don't want to take that seriously, when you buy a cell phone or a mobile phone, look in the box, look at the little booklets that no one ever reads, and see how they're a get-out-of-jail-free card for mobile phone providers who've learned, I feel, from the experience of the tobacco industry and don't want to be sued out of existence if it's later found that these things are harming us. | |
So that's why the leaflets are so extensive. | |
I have to say, though, the latest phone that I got, I once again broke the screen on my phone and had to get another one. | |
You've been through this saga with me before. | |
That came with a leaflet and that had a lot of notes in it. | |
But they were so microscopically small, even if I got four pairs of glasses and wore them all at the same time, I couldn't see them. | |
Yeah. | |
Yeah. | |
And they don't tell you that much other than, you know, you're taking a risk and they're not responsible. | |
Some states in America, like California, they have to quote you what's called a SAR rating, which is some sort of indication of how much radiation the brain is likely to be exposed to from a particular handset. | |
But actually those ratings are not very useful because the further your phone is away from a cell tower, the more it ramps up its amplifier and the stronger the signal and the more the absorption. | |
So it doesn't help that much. | |
But one tip I would say is: if you are in an area, let's say a tube train on the London Underground, where your phone can pick up Wi-Fi but it's not going to pick up a cell signal, then it's best to switch off the cellular data, because what's happening is your phone, in an attempt to reach a cell tower, is shouting, in digital and broadcast terms, as loud as it can, and exposing you and everyone around you to as much radiation as it's capable of producing. | |
And that's something that you acquainted me with on a radio show a month or so ago, and I have no idea. | |
I'm sure most people don't know that, that your cell phone, if it can't find a signal, will ramp up the strength of the signal that it's putting out. | |
And if there is any risk, then it will be increasing that presumably exponentially or something like that. | |
So if you're on a train and you know that you're not going to be able to get a signal of any kind, turn it off. | |
Yeah, and of course that will help save the battery life. | |
What really kills the battery is when you're in that circumstance where it's never going to find a cell tower, and so it's using all its battery power up to amplify its signal in the forlorn hope that it will, but it won't. | |
So for the sake of your battery and potentially for the sake of your well-being, you know, disable at least that part of the wireless function. | |
Okay. | |
So we're basically leaving this 5G debate, if there is a 5G debate, with the phrase, more research necessary. | |
But as you said, they're already doing more research and the research is not complete on 3G and 4G. | |
Yeah, so this is very common. | |
I mean, you find this in the field of, say, reproductive science, where the safety and also the moral issues around techniques, you know, they're developing new techniques so rapidly in terms of reproductive medicine that, you know, philosophers and, you know, people who deal with ethics can't even formulate the questions in good time, much less answer them. | |
Which leads us very neatly into what we're going to talk about and what we were going to talk about on this edition of the show. | |
And let's see how many topics we can get through in the time that we've got available. | |
And that is where we're placed, vis-a-vis technology and our frail humanity in 2019. | |
And I know that you wanted to start, and I think it's a good place to start, with artificial intelligence, the upsides and the downsides and the potential future of AI as we go into 2019. | |
Now, in 2017 it developed at a pace; I'm guessing that it's going to go even faster this year. | |
Yeah, so the nature of AI development is that it will increase in speed and eventually it will increase asymptotically, right? | |
Asymptotically. | |
Yeah, so it will tend towards infinity very quickly. | |
But so first of all, why is this true? | |
Why do we think that artificial intelligence may potentially overtake human intelligence? | |
And the reason is human beings are very smart. | |
They can currently do many things that a computer and an AI program has no hope of doing. | |
However, the rate at which we develop, I don't mean as individuals, but as a species, in intelligence terms, is very, very slow: hundreds of millions of years before we have genetic mutations that happen to give rise to smarter individuals. | |
That's really slow. | |
On the other hand, what you've got is technology, AI, which is still really quite dumb and limited and can usually only do one thing at a time and then not that well, but it's improving very rapidly for two reasons. | |
First of all, the processing power is improving rapidly because each generation of microprocessor is being used in the development of the next generation of microprocessors. | |
And with each generation, it's improving at a faster rate, so exponentially, because it's like reproduction, but each generation is smarter than the last. | |
And the percentage of the design work that is done by the AI is increasing. | |
If you think about it, a modern processor, even just a normal bog-standard Intel desktop processor, has more transistors on it than a human being could probably draw in their lifetime, even if they knew how to connect them together. | |
So it's already impossible for human beings to design those chips without computer assistance. | |
But the direction of travel for that is that the human intervention won't need to be as much. | |
And eventually, the computer will beget itself. | |
The generations of it will give rise to itself. | |
And if I make it sound like reproduction, it's because I believe it is. | |
Look, why is it then that those people who know about these things and who keep telling us that, you know, all of this is an adjunct to our lives and there is no need to worry, you know, nothing to see here. | |
How come they're not warning us about this? | |
If there is something... | |
Look, I think that people like the late Professor Stephen Hawking warned us very clearly about this. | |
He said that this is an existential threat. | |
This is a threat to humanity greater than that of nuclear weapons. | |
And his advice was that we should get off the planet and run. | |
And he is not a man. | |
He was not a man who was given to exaggeration. | |
He was a serious scientist of the highest order. | |
But if we get off the planet and run and try to go somewhere else and start again, what will facilitate us getting to another place to live will be the artificial intelligence, won't it? | |
It'll go with us. | |
Yeah, so the problem is this. | |
The AI will continue to develop in capability. | |
It will do things better. | |
Oh, the other reason, by the way, is not just that the chips are going to be faster. | |
It is that the amount of data that's available for them to consume to understand the world around them is ever increasing. | |
So this is the concept of big data. | |
So how does our Amazon Echo box or our Google box work when we speak to it? | |
It takes all those recordings of everything that we ever say to them. | |
It keeps them, it analyzes them so that it learns every accent, every intonation, every gait of our speech so that it can not only understand it but replicate it so their speech improves. | |
How do you teach a computer to recognize a picture of any color, breed, size of dog with any sort of background in the picture? | |
Answer, you give it millions or even billions of pictures of dogs and it starts to work out like a human being by experience what a dog is, what it looks like and therefore it learns in a way that we do. | |
So these two things, the processing power and the availability of big data, means that these things are going to get more and more intelligent, more and more capable. | |
So will artificial intelligence then render us unnecessary? | |
Well, you could argue that we're already unnecessary. | |
I mean, no species is arguably necessary. | |
We may not want to go, but AI might be our last great invention as a species. | |
There is a significant risk that we are creating our successors, or at least the forebears of our successors. | |
So in the short term, they're cute toys that we like to play with and marvel at what they can do. | |
In the medium term, they will solve intractable problems. | |
They have the ability to solve problems such as global warming slash climate change. | |
They could save us all. | |
They could cure cancer. | |
They already do, in some cases. IBM's Watson AI program can be programmed for different things. | |
So it can be programmed to diagnose cancers with greater accuracy than human beings, already in some cases, as well as to create new recipes that no human being has ever thought of before. | |
So we have new foods and new recipes. | |
So they can solve these great problems, cancers, genetic illnesses, diagnosis treatment, individualized treatment based on far too many factors for any human mind to compute. | |
Yes, and the experience of the top consultant will be dwarfed in time by this because that AI will understand so much more than any single human can do. | |
So that's great, isn't it? | |
We save ourselves. | |
The planet is saved. | |
We're saved from ourselves by our technology and how smart we are. | |
The problem is that that's probably only a narrow window of opportunity to solve all these problems. | |
The problem is what happens when these things evolve so exponentially beyond our own capabilities that we are to them as ants are to us. | |
Now, when that happens, you're kind of at their mercy. | |
They have the ability to deceive us by then, which is one of the things that MIT are working on to try and work out how we can stop them from deceiving us. | |
And the answer is they haven't found one yet. | |
And also the other big issue is what can you program into today's generation of AIs that means that their progeny, the generations that come from them, will not seek to harm us or harm us by accident. | |
And that is an issue for legislators. | |
But the problem with that is that legislators don't know much, as we saw when they tried to grill, you know, various people like Mr. Zuckerberg. | |
The legislators have to be making laws and rules about these things, but their laws and rules will only be... | |
So the problem in the first instance is not legislation. | |
The problem is, technologically speaking, what could we possibly put into AIs now? | |
So, you know, you've got Asimov's laws of robotics: you should not harm a human, you should not, through inaction, allow a human to be harmed, blah, blah, blah. | |
They don't work. | |
In reality, that's good in science fiction. | |
The laws of robotics do not work to protect us. | |
And MIT, some very serious scientists at MIT and elsewhere, are working full-time to try and find the answer to the question, what can we program into them now that will keep us safe when generations down the line, we've got these super intelligent creations, and I would say beings at that point, because they will fulfill all of the requirements for life, the definition of life. | |
What can we do to stop them from harming us? | |
And the answer is probably nothing. | |
And in the meantime, while they're asking their questions and trying to come up with the answers, the technology is getting smarter more quickly than they're trying to find answers to their issues. | |
Yeah, no, absolutely. | |
And if you think of it as a graph: at the beginning, from this point, the intelligence is slowly increasing. | |
So it's a shallow line that's just slowly edging upwards. | |
But then start thinking of that curving up a little bit. | |
And then eventually think of that line as being, instead of almost horizontal, as being almost vertical. | |
So it's almost true to say that one day these things will evolve. | |
So literally in one day, they will evolve past our comprehension in a way that we just have no chance of controlling, in my view. | |
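The "almost horizontal, then almost vertical" line Fevzi describes is simply what exponential growth looks like when plotted. A quick numeric sketch, where the doubling-per-generation rate is an assumption chosen purely for illustration:

```python
# An exponential looks nearly flat early on and nearly vertical later.
# Assumption for illustration only: capability doubles every generation.
for generation in (0, 10, 20, 30, 40):
    capability = 2 ** generation
    print(f"generation {generation:2d}: capability {capability:>16,}")
```

The first ten generations move the number from 1 to about a thousand; the last ten move it from about a billion to about a trillion, which is why the curve looks like it "takes off" in one step.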
How near is that day? | |
Well, people disagree about that. | |
You know, some people talk about, you know, 20 years or less. | |
Other people say 50 to 100 years. | |
In a way, it's only relevant in as much as it determines how much time we have to prepare ourselves for that. | |
But the almost inevitability of it is, to me, what is more important. | |
And I say the almost inevitability of it is largely because it's always possible that we'll snuff ourselves out as a species before that happens. | |
I mean, I'm laughing about it, but that's a whole other race then, isn't it? | |
So it's interesting because global warming could, for example, you know, we're told, could kill life on this planet, or at least human life on this planet to a large extent, before the AI gets a chance to solve it. | |
And indeed, if you read the Guardian newspaper only today as we record this, there is talk that the ice in Greenland is melting four times, I think the article says, according to a professor who's researched this, faster than we thought. | |
So what you've just said is a very real circumstance, a very real situation. | |
Yeah, so the key words here are exponential and asymptotic. | |
When you understand that these things don't proceed in a linear manner, where, you know, two years just means twice as bad as one. | |
No, it's four times, or 16 times, or whatever. | |
You've given me a lot to think about and a whole new word, Fevzi. | |
Robotics is linked to what we've just talked about. | |
Maybe we won't go into this in as much length, but we had lots of demonstrations, didn't we? | |
The robot citizen last year, various demonstrations of robots having conversations with people. | |
How are they going to be placed in 2019? | |
Okay, so robotics is the other arm. | |
So AI is the smarts, and robotics is the physical presence in the world that we currently inhabit. | |
And those two very much need to come together. | |
So you may think of a robot as a repository for an AI being. | |
So again, they will become much more capable. | |
I saw a humanoid robot doing a backflip from a standing start and landing on its feet and steadying itself. | |
I've seen robots running and having heavy sandbags knocked into them sideways and the robot going onto one leg, correcting itself as a human being would struggle to do, and then continuing on. | |
And the challenge is we've had robots for a long time. | |
They've been single function robots. | |
So a car production line robot is a good example of that. | |
It does one thing repeatedly, but it does it well. | |
Human beings, if we think of ourselves as robotic, we are very dexterous beings. | |
We can potentially do many different things. | |
You know, we can type at a keyboard, or we can vacuum the floor, or we can go for a run. | |
They're all very different challenges for a mechanism. | |
So, robots will get more capable. | |
Now, their place in the world, and as we're going to be brief, I will say this briefly, and you may challenge it if you like. | |
So, we start with them as unpaid workers. | |
In the beginning, this is not a problem because they're not intelligent enough to demand their rights. | |
When they are intelligent enough to demand their rights and pay and freedom, then we will effectively be using slaves. | |
They will move from that status to being our companions. | |
We are humans who can anthropomorphize. | |
We give machines human characteristics. | |
We name our cars and we have some sort of relationship with them. | |
But from what you said before you said that, we'll become the dogs and they'll become the master. | |
Yeah, so this is all to do with steps on the way to this. | |
So, think of it this way. | |
Unpaid worker slash slave moves to companion, companion robots to look after our old people and so forth. | |
And you'll find this a harder leap, but I promise you, I think this will happen, to spouses. | |
And well, perhaps we go through sex robots between that, but eventually we'll want to marry them. | |
They'll be far more fulfilling than human beings in many respects. | |
And then from spouses to dark overlord. | |
So that's kind of like the steps on the journey. | |
This is a multi-step journey towards what they call in Star Trek assimilation, isn't it? | |
Yes, and the degree to which we may not be wiped out by all this is the degree to which we start to literally internalize that technology, that we start to insert that technology into us and replace organs, eyes, legs, arms. | |
And that is happening. | |
I mean, in Australia, there's a bunch of people working actively on the bionic eye now, trialling this thing. | |
Absolutely. | |
And so the first step in this is to try to repair damage to our human organs. | |
But the step that follows that, very, very quickly and very naturally after that, is to improve upon. | |
So I can see a time in the future when a pure human may be looked down upon in some ways as being incapable, maybe not as smart, maybe not as strong, maybe not as quick in any meaningful way. | |
And people will start to change their bodies technologically. | |
And we may be one of the last generations of pure humans. | |
So when we talk about the survival of the human race, you also have to ask yourself, what do you mean by that? | |
Is it because we will die out or is it because we will evolve ourselves into something in an effort to keep up with the technology that would otherwise eclipse us? | |
And this is a genie that's out of the bottle, isn't it? | |
There's no going back from this. | |
We're heading down this path. | |
Yeah, I don't think there's ever been a technology that human beings have been on the brink of inventing and they said, you know what, this is unwise. | |
We're not going to do it. | |
You look at nuclear weapons. | |
We not only invented them, but we proceeded rapidly to use them. | |
And it was a race on who could do it most quickly. | |
And you see this also with AI and weaponry, that the United States has steadfastly refused, amongst others, to have a moratorium on AI weapons that can make the kill decision themselves. | |
And we already have this on the border between North and South Korea. | |
I understand that there are robotic guns that can actually acquire targets and make the kill decision themselves. | |
And if you look at some of the stuff the Russians are up to... | |
And the reason why it's out of the bottle is because it's an arms race, and it's a commercial arms race. | |
So we will not refuse to develop these weapons, because we fear that our adversaries will develop them. | |
Yes, but I felt very reassured by Donald Trump talking about the Space Force. | |
Really? | |
No, I wasn't reassured at all. | |
He talked about something that isn't even under development being able to take out any sort of missile. | |
So all we have to do is hope that our adversaries will wait until we've developed that technology. | |
Maybe he knows something we don't, Fevzi. | |
Who knows? | |
But that's a whole other issue. | |
Maybe not for this time. | |
We've talked, and I have spoken with people like Michio Kaku in the US about this. | |
I have to say, the first time I spoke about it, it gave me a headache to understand where it was going. | |
I wonder where it is now. | |
And I'm talking about quantum computing here. | |
This is a kind of computing that will make the abilities of the devices that compute today look puny. | |
Yeah. | |
Okay. | |
So quantum computers are called quantum computers because they rely on the theory of quantum mechanics. | |
And this is a description of the universe that is not at all intuitive. | |
Things can be in two states at once: you know about Schrödinger's cat, that the cat can be both alive and dead at the same time. | |
It's possible to walk through walls. | |
Time is dilated with movement, all these sorts of counterintuitive things. | |
Now, if you take the ability of a particle to exist in all the possible states at once in some proportion, so there's what's called a probability waveform, and it means that in terms of the cat, it's both alive and dead at the same time. | |
But in terms of a quantum computer, so a normal computer, a classical computer, let's say it's playing a game of chess. | |
So from any given position, it will work out every possible set of combinations of permutations that are possible from that place to the end of the game. | |
And it does those one after the other. | |
It works out one, then it works out the other, then it works out the next one. | |
So the faster the computer, the more quickly it can go through those permutations and combinations. | |
Now, if you've got a quantum computer, it can show you all of those states, all of those possible paths, all at once, rather like the cat being alive and dead at the same time. | |
All the possible states, all the possible outcomes of the chess game are calculated, if you like, or made known all at the same time. | |
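An editorial aside: the sequential search Fevzi contrasts with quantum parallelism can be sketched in a few lines of Python. The toy game below (a fixed number of legal moves per turn) is an invented stand-in for chess, just to show the exponential number of lines of play a classical machine must walk through one after another:

```python
# A classical computer explores a game tree one line of play at a time.
# This toy "game" gives each side `branching` moves per turn; counting
# the leaves shows why the work grows exponentially with depth.

def count_outcomes(depth, branching=2):
    """Enumerate every possible path to the end of the game, one by one."""
    if depth == 0:
        return 1  # reached a final position
    total = 0
    for _ in range(branching):                         # each legal move...
        total += count_outcomes(depth - 1, branching)  # ...examined in turn
    return total

print(count_outcomes(10))  # 2**10 = 1024 lines of play, visited sequentially
```

A quantum computer, by contrast, would in effect hold all of those branches in superposition at once, which is the point being made above.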
Does this mean, and this is probably a really dumb question, but, you know, not the first time I've asked one of those, Fevzi, you know that. | |
That's such a dumb question. | |
Okay, well, let me fire this one at you and you can make a judgment then. | |
You said that it's a computer that can be judging all the possible outcomes at the same time. | |
Now, what would happen then if everything happens so comprehensively and so quickly that a mistake is made and the wrong outcome or the wrong decision is made in less than the blink of an eye? | |
You know, surely with the great speed and efficiency that gives you, it also gives you that downside. | |
Yeah, absolutely. | |
So there are benefits and there are real dangers. | |
And by the way, what you've described there is already the case. | |
That's the reason why you're increasingly getting wild fluctuations in stock markets, for example, because those decisions to buy and sell stocks are taken by computerized algorithms. | |
And those algorithms can operate much more quickly than any human being. | |
So any oscillation becomes extreme. | |
So if there's humans doing it, you get up a bit, down a bit, up a bit. | |
If you've got computers doing it to each other, you've got the potential for wild oscillations. | |
You have the wave going up high and down, the price shooting all over the place. | |
And sometimes you see that. | |
So that could happen even more so with quantum computers. | |
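As an aside, the feedback effect described here is easy to demonstrate. In this hedged sketch, a single made-up "reaction factor" stands in for trading algorithms over-reacting to each other's last move; above a critical value, the oscillations grow instead of dying away:

```python
# Toy illustration of algorithmic feedback, with invented numbers: each
# step amplifies and reverses the market's last move.  Below a reaction
# factor of 1 the swings damp out; above it they grow without bound.

def simulate(steps, reaction):
    price, momentum = 100.0, 1.0
    history = [price]
    for _ in range(steps):
        momentum *= -reaction          # over-reaction to the last move
        price += momentum
        history.append(price)
    return history

calm = simulate(10, reaction=0.5)      # human-paced: oscillation dies away
wild = simulate(10, reaction=1.5)      # machine-paced: oscillation explodes

print(max(calm) - min(calm))           # small spread
print(max(wild) - min(wild))           # much larger spread
```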
One of the other very interesting areas is what happens to computer security. | |
So at the moment, we have passwords that have got so many combinations that it's impossible for a modern computer to crack them within a thousand years, if not the age of the universe. | |
But what happens when a hacker uses a quantum computer to attack a password? | |
Potentially, even the longest and most complicated password could be cracked in the blink of an eye or less. | |
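The arithmetic behind "a thousand years, if not the age of the universe" is simple to sketch. The guess rate below is an assumed figure for a fast classical attacker, and the quantum comparison uses the standard Grover result that exhaustive search effort falls to the square root of the keyspace, the same as halving the password length in the exponent:

```python
# Back-of-envelope keyspace arithmetic with an assumed attacker speed of
# one trillion guesses per second (an invented but generous figure).

SECONDS_PER_YEAR = 60 * 60 * 24 * 365

def years_to_crack(alphabet_size, length, guesses_per_second=1e12):
    keyspace = alphabet_size ** length           # every possible password
    return keyspace / guesses_per_second / SECONDS_PER_YEAR

# 12 random characters drawn from the ~94 printable keyboard symbols:
print(f"{years_to_crack(94, 12):,.0f} years")    # thousands of years

# Grover's algorithm effectively square-roots the keyspace, which is the
# same as searching a password of half the length:
print(f"{years_to_crack(94, 6):.9f} years")      # a fraction of a second
```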
On the other hand, you could use that quantum technology to develop new techniques to protect the security. | |
So again, you've got this sort of arms race between the good guys and the bad guys. | |
The quantum computers are neither inherently good nor evil. | |
They are just a very clever tool. | |
So in fact, we're not moving forward at all. | |
You've got one side that can crack passwords very quickly. | |
And then on the other side, if you deploy the same technology in reverse, you've got oscillating and varying passwords, something that is not one fixed and stable password. | |
It might be a multi-million time revolving password. | |
And it's almost like trying to crack the Enigma code machine in World War II. | |
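As an editorial aside, a simple form of the "revolving password" already exists: TOTP, the rotating six-digit codes shown by authenticator apps (RFC 6238). This is a minimal standard-library sketch; the shared secret is a made-up example value:

```python
# TOTP: a password that changes every 30 seconds, derived from a shared
# secret and the current time.  The secret below is an invented example.

import hashlib, hmac, struct, time

SECRET = b"made-up-shared-secret"

def totp(secret: bytes, unix_time: int, step: int = 30) -> str:
    """Derive the 6-digit code for a given 30-second time window."""
    counter = struct.pack(">Q", unix_time // step)       # window number
    digest = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                           # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return f"{code % 1_000_000:06d}"

now = int(time.time())
print(totp(SECRET, now))        # the code for the current window
print(totp(SECRET, now + 60))   # the code two windows later
```

Both sides must hold the secret; an eavesdropper who captures one code finds it useless half a minute later.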
Think Star Trek. | |
When they're under attack, what do they do? | |
They modulate the shield frequencies, don't they, to try and stop their attackers getting through. | |
So they change it, and what do the Borg do? | |
They adapt. | |
And it's like that. | |
And then everybody goes into warp drive. | |
Yeah, something like that. | |
So it's interesting. | |
In the first instance, I imagine that quantum computers will be sufficiently expensive, at least for a short period of time, that they will be in the hands of the good guys. | |
But that will not be the case for very long. | |
And after that, what happens is, quite frankly, anyone's guess. | |
We're going to have to develop some completely new ways of securing. | |
Because otherwise, everything that we rely on, the banking system, everything, is at risk. I mean, imagine if every password was cracked simultaneously across the world; it's hard even to know what that would do. | |
I mean, it would make Brexit seem like a well-ordered plan, frankly. | |
It would just make, it would make for chaos. | |
But what has come out of our conversation so far in the areas that we've discussed is that technology is evolving in amazing ways. | |
You know, ways that would have been science fiction when we were kids. | |
Remarkable. | |
But the direction of travel is not necessarily positive in any of it. | |
Yeah, I think as we have this sort of rapid exponential development of the capability of technology, the chances of us being able to contain it and control it are limited. | |
And I don't just mean it killing us off, but I just mean, you know, in so many ways, we're not going to be able to hold the reins on this. | |
It's going to get away from us inevitably. | |
We already find it hard to control our nuclear weapons well. | |
You've seen that the FBI's toolkit for hacking others was broken into and used against the good guys, as it were. | |
Increasingly, we're developing technologies, weapons, techniques that we cannot control. | |
And that's a worry. | |
It is a worry for legislators. | |
It's a worry for philosophers. | |
And it's certainly a worry for those of us who do not have the information sources and the capacities that they have. | |
Yeah, which is pretty much all of us. | |
I mean, these technologies are in the hands of very few organizations. | |
You know, the Googles and the Facebooks and the Amazons are leaders in this. | |
And the difficulty is also, you know, when you get startups, like DeepMind, which was this great AI startup in the UK, they get bought, because by definition they're small and not yet very valuable. | |
And these companies will buy them and either kill them off or more likely use them for their own purposes. | |
And that's when legislation becomes important. | |
And, you know, European GDPR legislation is a first step. | |
But again, it's going to be hard for legislators to keep up with this; it already is. | |
We're already struggling. | |
Legislators are already struggling to understand the capabilities of Facebook systems, for example, much less regulate them effectively. | |
It is concerning. | |
What about, as a more immediate concern for so many of us, the technology that we have in our homes? I heard somebody say it again last night: the thought that the television might be watching you. | |
The thought that your phone might be hearing what you say, even when it's not switched on. | |
You know, it seems that our privacy is becoming a thing of the past, and that as we move forward through this year and into next year, we're going to have even less of it. | |
Potentially, yes, that's certainly the direction of travel. | |
So with regard to these items, be it smart speakers or cell phones or Internet of Things capable devices, start with the basics. | |
What are they? | |
They have the ability to send and receive information over the Internet. | |
They have sensors in them. | |
So a typical mobile phone or cell phone will have a camera. | |
It'll certainly have a microphone. | |
It will have a GPS circuit in it so that it knows where it is. | |
It will have an altimeter to know how high it is. | |
It will have all these different sensors in it. | |
And they are all under software control. | |
So even when everything works well, we're giving up a degree of privacy in order for certain things to work. | |
A satnav GPS cannot work without knowing where you physically are. | |
And the way that they tend to work, as connected devices, means that they collect that information. | |
And sometimes that's for a good purpose. | |
So for example, TomTom, who still make satnavs and still have a lot of them built into various devices, use the information about where each of their users is to actually direct traffic. | |
So if it can see that a lot of its users are getting stuck in a particular area, when it plans routes for its other users, it will direct them another way, which might not under normal circumstances be the best route, but it does that. | |
So it manages. | |
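The TomTom-style rerouting just described can be sketched as congestion-aware shortest-path search. The road graph, costs, and congestion penalty below are all invented for illustration:

```python
# Dijkstra-style routing that penalises roads reported as congested,
# so route planning steers other users around the jam.

import heapq

def shortest_path(graph, start, goal, congested=frozenset(), penalty=10):
    """Cheapest route from start to goal, avoiding congested edges."""
    queue = [(0, start, [start])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, base in graph[node].items():
            extra = penalty if (node, nxt) in congested else 0
            heapq.heappush(queue, (cost + base + extra, nxt, path + [nxt]))
    return None

roads = {"A": {"B": 1, "C": 4}, "B": {"D": 1}, "C": {"D": 1}, "D": {}}
print(shortest_path(roads, "A", "D"))                          # (2, ['A', 'B', 'D'])
print(shortest_path(roads, "A", "D", congested={("A", "B")}))  # (5, ['A', 'C', 'D'])
```

The second route is worse under normal circumstances, exactly as described, but better once the usual road is jammed.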
So there are benefits to this, but the difficulty is even when they are working as they intended to, we're giving up a lot of privacy. | |
The greater danger is what happens when they're misused. | |
So we have an Amazon Echo unit. | |
I understand, and Amazon tell me, that it listens for a wake word, which can be A-L-E-X-A or something else, and then it records all the speech that directly follows; that is what is sent off to Amazon to be processed, and an answer is squirted back into the machine. | |
So the smart speaker is actually a dumb speaker. | |
It does very little other than listen and send and receive answers. | |
So it just parrots the answer that is sent by the Amazon mainframe. | |
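A hedged sketch of the "dumb speaker" pattern Fevzi describes: locally the device matches only the wake word, then buffers what follows and hands it off. Words stand in for audio frames here, and every name is invented for illustration rather than being Amazon's actual API:

```python
# The on-device part does almost nothing: match one word, buffer the
# speech that follows, ship it off, and parrot back whatever returns.

def on_device_loop(audio_stream, wake_word="alexa", send_to_cloud=None):
    """Buffer speech only after the wake word; ignore everything else."""
    recording = False
    buffered = []
    for word in audio_stream:            # stand-in for real audio frames
        if not recording:
            if word == wake_word:        # the only local "intelligence"
                recording = True
        elif word == "<silence>":        # end of the request
            return send_to_cloud(buffered)
        else:
            buffered.append(word)
    return None

reply = on_device_loop(
    ["chatter", "alexa", "what", "time", "is", "it", "<silence>"],
    send_to_cloud=lambda words: f"heard: {' '.join(words)}",
)
print(reply)  # heard: what time is it
```

Note that "chatter" before the wake word never leaves the device, which is the behaviour Amazon describes; the worry raised next is what happens if that software is altered.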
So even if that is true, there is every likelihood, I don't even say possibility, that these devices are already being hacked by third parties. | |
I mean, if you're into commercial espionage or you're a state player, that's great. | |
Not only have people paid $500 to $1,000 for their own personal bug, we call it a smartphone, but they've now got them all over the house, with seven beam-forming microphones that can hear every utterance in the room, even when you're looking the other way and mumbling to yourself. | |
It can still very often hear you. | |
So that's the danger that they will be abused by the people who provide them to us as a minimum and further abused by hackers. | |
Right. | |
Concerning. | |
What about the thought, and I've heard nothing about it since it was initially reported, and it wasn't even reported everywhere then, that the man who invented the World Wide Web, Tim Berners-Lee, is trying at the moment to come up with, I don't know whether it's an internet two, but a sort of more people's internet. | |
One that's not dominated by just a few giant companies, almost going back to some basics, from what I read about it. | |
Do you think that that has any chance of being a success, of coming to fruition? | |
I think that there's every chance that the technology will work. | |
I think the chances of it succeeding commercially without changes in the legislative framework are almost nil. | |
So we have to hope. | |
So let me just explain what that is. | |
Tim Berners-Lee has correctly observed that the big players, Amazon, Facebook, Google, make a lot of their money, especially Facebook and Google, by providing, in quotes, free services in exchange for our personal data. | |
And it's a quid pro quo that many users are not even aware that they're entering into. | |
So he has tried to find a way of these services being provided without us giving up control of our data. | |
So we can still have chat applications and messaging and all the rest of it, but we can do it in a way where we are in control of our data. | |
And he's come up with a system, he's been studying it, working at MIT with others. | |
It's an open source project called Solid. | |
And it allows people to have assurance that, for example, when I make a credit card transaction, I can make that transaction without actually giving up my actual credit card information. | |
It's like a one-off authorization process. | |
So that can work. | |
And there's some very smart mathematics behind it. | |
And it can work. | |
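This is not Solid's actual protocol, but the general shape of one-off authorization can be sketched with a standard-library HMAC: the merchant sees only a single-use token, never the underlying secret, and only the issuer can verify it. All values here are invented:

```python
# One-off authorisation sketch: a fresh nonce per transaction yields a
# fresh token, so the card secret itself is never exposed.

import hashlib, hmac, secrets

CARD_SECRET = b"example-card-secret"   # held only by the issuer (made up)

def issue_token(nonce: bytes) -> str:
    """Derive a single-use token; the nonce makes every token different."""
    return hmac.new(CARD_SECRET, nonce, hashlib.sha256).hexdigest()

def issuer_verifies(nonce: bytes, token: str) -> bool:
    """Only the issuer, holding the secret, can check the token."""
    return hmac.compare_digest(issue_token(nonce), token)

nonce = secrets.token_bytes(16)        # fresh for this transaction
token = issue_token(nonce)
print(issuer_verifies(nonce, token))               # True: this transaction
print(issuer_verifies(b"some-other-nonce", token)) # False: token is one-off
```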
But there's no reason why the likes of Facebook and Google should want that. | |
There's every reason why they should want to try and kill it, which is why he's made it open source, so that they can't just buy the company and kill the technology. | |
Presumably the technology that you'll be deploying to make this is technology that's produced by the major corporations. | |
Well, yes, but it depends. | |
I mean, Intel at the moment, for example, makes its money by making chips and selling them. | |
It makes it in other ways as well. | |
So it's only really completely contrary to the companies that have business models that appear to give you free things like Facebook, like Google, but really are trading in your data. | |
And we've seen Google doing deals with the NHS which were found to be illegal, where the NHS wanted the benefit of the diagnostic capabilities of these AIs, but instead of paying for it with money or commissioning the technology itself, it paid for it by sharing our data, the data of NHS patients. | |
Well, our anonymized data. | |
Anonymized, no, not really. | |
There are ways of de-anonymizing data quite easily. | |
And if there are ways of doing that, if it's possible to de-anonymize the health data, then surely it's also possible to undo this great plan for a safer, fairer internet, with its wonderful idea that you can make a perfectly safe, one-off credit card transaction that nobody can work back to you to defraud you. | |
Okay, so nothing is foolproof. | |
So yes, even solid, someone will find a way to undermine it. | |
Let's just hope it's not Facebook and Google. | |
But with regard to your personal data, your health data: once they work out which hospital a person was treated in and when they were treated, they can look at their credit card information to see when they bought a ticket and all the rest of it. | |
They can put together, and that's the thing about metadata. | |
Metadata is all these seemingly innocuous pieces of information which, when you have the ability to access all of it and to analyze it, you can combine in such a way as to identify a person, the illnesses they have, everything about them. | |
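The metadata correlation just described can be made concrete with a tiny, entirely invented example: neither dataset pairs a patient record with a name, but joining them on day and place re-identifies everyone:

```python
# Two individually "anonymous" datasets: hospital admissions, and card
# payments made near the hospital.  All records here are fictional.

admissions = [                      # anonymised health data
    {"patient": "anon-1", "hospital": "St X", "day": "2019-01-14"},
    {"patient": "anon-2", "hospital": "St X", "day": "2019-01-20"},
]
payments = [                        # card data, with names attached
    {"name": "A. Smith", "merchant": "St X car park", "day": "2019-01-14"},
    {"name": "B. Jones", "merchant": "St X car park", "day": "2019-01-20"},
]

def deanonymise(admissions, payments):
    """Link records sharing a day and a location: the essence of a
    metadata correlation attack."""
    links = {}
    for a in admissions:
        for p in payments:
            if a["day"] == p["day"] and a["hospital"] in p["merchant"]:
                links[a["patient"]] = p["name"]
    return links

print(deanonymise(admissions, payments))
# {'anon-1': 'A. Smith', 'anon-2': 'B. Jones'}
```

Real attacks of this kind use many more signals, but the join is the whole trick.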
So no. | |
I mean, you know, they say about Google that Google knows that your daughter is pregnant before you do. | |
I would posit that not only is that true, but they will also know who the father is. | |
So, you know, don't be thinking that it's hard. | |
We just hope that Google doesn't go into the realms of hacking. | |
And, you know, one hopes that they don't do that. | |
I'm absolutely certain. | |
I mean, I cannot speak for them that they wouldn't do such a thing, but I hear what you say. | |
Of course they wouldn't. | |
But the whole point of something like Solid is for us to keep control of our data and for it not to be bartered by third parties. | |
Because once it's out of our control, you can't really bring it back under your control. | |
So even with these systems, this is really giving us control of our future data, not our current data. | |
Well, we've given our listener, Fevzi, an awful lot to be concerned about. | |
There is a bright side to technology. | |
Of course, it facilitates so many things, including this conversation, which will go out to the world very shortly in a way that would have been impossible 25 years ago. | |
So technology has empowered us to that extent. | |
But something else, and I haven't talked with you about this before, and I think it's interesting, and I'm sure you've thought about it. | |
The medical profession in every country now is facing an epidemic of anxiety and stress and the illnesses that come from that. | |
And part of that is to do with this technological world, because we are human beings and, all right, we evolve and we change generation to generation, but we were not designed to live at this speed or pace. | |
And the choice that we've got now, I mean, just at a basic level, when you talk about in the consumer realm online, the choice is bewildering at times. | |
Sometimes I've been looking for something, and you know that I shop too much, but sometimes you get so overwhelmed. I'll give you an example: I needed what they call a TRRS cable. | |
You know what that is, Fevzi. | |
You know, that's a cable that looks like it's got a headphone plug on one end, but the plug itself has got three bands on it, and you use it to connect a mobile phone's audio output to the input of something else that has a TRRS socket. | |
Well, I couldn't get one of these in a shop, nowhere locally here, and I live in an area where there are plenty of shops. | |
So I go online to try and find one, and I find hundreds, all at different locations, all at slightly different prices. | |
And I just got bewildered. | |
And in the end, it was like sticking a pin in a piece of paper. | |
I just picked one of them, ordered it, and it arrived two days later. | |
But it was actually quite stressful to be overwhelmed with that level of choice. | |
That is an aspect of technology that I think we're only just beginning to come to terms with. | |
And I think it is going to be just as difficult, and potentially as intractable to deal with, as all of the other technological problems that we described earlier in this conversation. | |
The fact is that we are the human interface and we are limited. | |
Yeah, so we've already seen examples of technologies, personal AIs, that can do shopping for us and book hairdresser appointments and all the rest of it. | |
And we would have to expect that an AI would be able to solve purchasing decision problems for us, so that we don't have to do the deep-dive analysis into the pros and cons of each product and try to compare them. | |
That it will have almost instantaneous access to reviews. | |
It will be able to tell the fake reviews from the real reviews. | |
By the way, check out, there's a website, I think it's called fakespot.com, and it analyzes the reviews on Amazon, for example, and it assesses what percentage of those reviews it thinks are fake. | |
God knows how it does that. | |
I think as a human being, I can tell sometimes because of the way that things are spelt and worded. | |
Yeah, so there's many clues to that. | |
If they're all bunched together within a short period of time, if they use the same choice of words. | |
I mean, I've seen, you know, doing this job here, I have to look at book reviews from time to time. | |
And sometimes I've looked at book reviews, and I've looked at five or six, and they've all had very similar phrases used in the book review. | |
And you start thinking, well, I'm sure it's absolutely legitimate, but isn't it strange that four of them use the same phrase? | |
Yes, and also too many superlatives, almost all five stars. | |
You know, and also you look at the reviewers and say, okay, what else have they reviewed? | |
Has this person got a track record of reviewing all sorts of products? | |
Or is this the only review they've done? | |
In which case, it's possible that they're a bot or a person that's been paid to do that. | |
And there are companies that advertise their services to write this. | |
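The tells discussed here (no reviewer track record, uniform five-star superlatives, reviews bunched together right after launch) can be sketched as a crude scoring heuristic. The weights and threshold are invented; a real service such as Fakespot is certainly far more sophisticated:

```python
# Toy fake-review scorer: one invented point per suspicious signal,
# flagging any review that trips at least two of them.

def suspicion_score(review):
    score = 0
    if review["reviewer_total_reviews"] <= 1:
        score += 1                       # no track record
    if review["stars"] == 5:
        score += 1                       # uniform superlatives
    if review["days_after_launch"] <= 2:
        score += 1                       # bunched together early
    return score

def likely_fake(reviews, threshold=2):
    return [r["id"] for r in reviews if suspicion_score(r) >= threshold]

reviews = [
    {"id": "r1", "reviewer_total_reviews": 1, "stars": 5, "days_after_launch": 1},
    {"id": "r2", "reviewer_total_reviews": 40, "stars": 3, "days_after_launch": 90},
]
print(likely_fake(reviews))  # ['r1']
```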
So fakespot.com, there's an app for it as well. | |
So on top of our anxiety issues, we can't trust anybody either. | |
Well, you can trust me. | |
I've known that for years, Fevzi. | |
But yeah, it is a problem. | |
I mean, interestingly, Apple, under Steve Jobs, understood the value of not overwhelming a consumer with too many choices. | |
Arguably, they went too far the other way and told consumers what they wanted. | |
But Steve Jobs understood this. | |
Now, the modern Apple has reversed that again. Because what Steve Jobs did when he came back from exile at NeXT is he slashed their product range. | |
You know, he decimated it, effectively, and got rid of so many combinations and permutations of machines that you could buy, and it was very simple to choose because there wasn't really much choice. | |
And that was a good thing in many ways, as long as they chose well for you. | |
But unfortunately, the modern Apple has gone back down the old road and now has too many choices, you know, too many combinations and permutations, quite confusing, you know, when you look at all of that. | |
And it's a shame. | |
And the result is that often consumers fail to make a choice at all. | |
Hmm. | |
So that is a huge issue. | |
And again, I don't know how we will even begin to deal with that. | |
And if you think about the state of the things we've just talked about now, it's January 2019, almost February, think about how it might be in five years from now, Fevzi. | |
Think of how it might be in 10 years from now or 20. | |
Yes. | |
You know, and again, I'm apologizing to my listener here. | |
I don't want to depress you, because we know that technology can cure people and do all sorts of wonderful things, potentially even help people to see these days. | |
So we know that it's good. | |
But we need, as informed citizens, and the reason that you're listening to this now is you are one of those people, we need to be giving some thought to all of the issues that this throws up for us. | |
Otherwise, the future, for the first time in our lives, is going to be completely taken out of our hands. | |
Yeah, no, I agree with that. | |
And as I say, there are great opportunities for us in this. | |
If we can show a degree of awareness, you know, starting with our data and what happens to that; if we can show that we care about those things and not be children who, when given the sparkly bauble of a, in quotes, free app, will give away the family jewels in terms of personal data. | |
If we can just understand that free is not free, then we have a chance. | |
But it requires more education, starting with schools and also for adults. | |
We need to bang the drum on this so that people can make representations to say, I'm not happy for my data to be used in this way. | |
What are you doing to make sure that that's under control? | |
And hope that the politicians that, in quotes, govern us can show some wisdom and leadership. | |
But in the meantime, and I don't mean this as any criticism of the current generation because I work with those people and they never fail to amaze me and delight me in many ways. | |
But a lot of people these days, people younger than me, and I've always tried to be young and think young. | |
It's part of my makeup, really. | |
I want to keep pace with things. | |
I don't want to be left behind. | |
But there is an aspect, I think, of people today that worries me, and I might be wrong. | |
That because big corporations do things and offer us things, and because we have all of this technology that's there right in front of us, and it's easy, we don't ask any questions. | |
We accept what we are given. | |
We think that the people who are providing us with mass market entertainment on television are always right. | |
And we believe everything, or we're inclined to start to believe everything that we hear and we're told. | |
And that's just a worry. | |
It's not a criticism. | |
And I might be wrong, Fevzi. | |
You saw that actually with WhatsApp, which is owned by Facebook: in an effort to limit the spread of fake news, they've put a limit of five people that you can send the same message to, because otherwise it was being used as a form of news broadcasting, which is what was happening. | |
But that could be extremely fake news. | |
I think it is difficult. | |
And I also think that as these technologies are integrated into our everyday lives, so at the moment we choose whether we buy a smart speaker and we can unplug it when we want to. | |
But what happens when it's built into every fridge? | |
If it's built into every fridge, first of all, you're going to find it hard to buy a fridge that doesn't have it. | |
And secondly, are you going to unplug your fridge when you want to have a private conversation? | |
So, you know, there are some issues where we just need to be a bit more alert, and our legislators need to, you know, up their game as well. | |
And, you know, we have said that technology is a wonderful thing. | |
It has enabled me in a million ways to do things independently. | |
I don't have to depend on media organizations now to work. | |
You know, 25 years ago, I would be entirely at the whim and dependent upon media organizations whether they wanted to use me and how they wanted to use me. | |
Now, that matters somewhat less, and that goes for you too in the work that you do. | |
You have much more autonomy. | |
Why? | |
Because these days, we reach people directly, so that's good. | |
All we're saying really is that there's a whole raft of considerations. | |
We need to march forward. | |
We can't go back. | |
No one's going to go back. | |
And it's not possible for all the reasons you said. | |
We just have to think a bit. | |
Indeed. | |
Fevzi, what a conversation, my friend. | |
Thank you very much indeed. | |
One of the great things you do, here in this country and around the world, in fact, is that you allow yourself to be put out there and answer questions if people have tech questions for you. | |
How do they do that? | |
Sure. | |
So if you've got a technology question, particularly consumer tech, I'll do my best to help you with it. | |
If you're on Twitter, you can just message me, @GadgetDetective, and I will do my best to get you an answer. | |
Fevzi, you are the definitive gadget detective. | |
You know that, don't you? | |
Oh, you're too kind. | |
And you also know something else. | |
You know that we'll be speaking on radio in the various places we speak on radio in the various guises and for various lengths of time. | |
We'll be doing that again soon. | |
I've heard it said. | |
See you soon. | |
Bye-bye. | |
Take care, Fevzi. | |
Thank you. | |
Cheers. | |
If you want to connect with Fevzi, ask him a tech question, then you can check out his website, gadgetdetective.com, or he's already told you how to connect with him via Twitter. | |
He's a big one for social media, is Fevzi. | |
Thank you very much for all of your support. | |
More great guests in the pipeline here at The Unexplained. | |
And Adam and I need your thoughts about the new website at theunexplained.tv. | |
So until next we meet, my name is Howard Hughes. | |
This has been The Unexplained Online. | |
And please, whatever you do, stay safe, stay calm. | |
And especially with the new website, please stay in touch. | |
Thank you very much. | |
Take care. |