Across the UK, across continental North America and around the world on the internet, by webcast and by podcast, my name is Howard Hughes and this is The Unexplained.
Well, I hope that everything is fab in your world or as neofab as it can be in a world of political instability and world crisis.
You know, sometimes, I don't know if you find this, and maybe I've said this before, but sometimes I just have to turn the news off.
And when you get to a stage where you're a news person by training and you're turning the news off, you know that the world is a very different place from what it used to be.
But I find myself just hitting the off button.
And I find that that is quite therapeutic sometimes.
And I escape into this world where I can do conversations about things that are not connected with imminently going bust, the price of energy, what Putin is going to do next.
You know, I find that, and I know a lot of you do too.
I find that quite healing in a whole bunch of ways.
I hope everything is all right with you.
Thank you to Adam, my webmaster.
And thank you very much if you've donated to the podcast recently.
That's very kind of you.
Many, many thanks.
I know that there are some people, including somebody who emailed recently, and I have mentioned this before, who don't like me doing weather reports or thank-yous.
And I've just done some thank yous.
And I'm going to tell you that the sun is shining through my window and it is a beautiful autumn day at the moment.
That's the only bit of weather report I'm going to give you from London.
This edition, I'm going to catch up with an old friend of mine and an old friend of this show, Fevzi Turkalp, the gadget detective, the man who's helped me out on so many technology stories on the radio and television.
We're going to talk about a number of questions that I think are hot-button topics connected with technology.
So Fevzi is the gadget detective.
Check him out online under that name.
And he is the man to talk about these things.
And it's a chance to talk where we don't have the same time pressures that we have on TV.
I know quite a few of you have been commenting on the fact that we're obviously under time pressure on the TV show.
And that's just a fact.
Everything seems to be moving more quickly.
There's always somebody counting me down.
And for me, the challenge and the task is to pack as much meaning and as much of me and the guest into those segments without it sounding compacted in any way.
But it's difficult.
So here on the podcast, we have a chance to talk in a more expansive kind of way.
If you get my drift.
If you want to contact me, you can always go to the website theunexplained.tv, follow the link and you can contact me from there.
If you want to make a donation to the show to help it keep going, then you can also go to the website theunexplained.tv and there's a link for that there.
We think of everything.
If you want to check out my Facebook page, and I strongly advise that you do, be part of that by going to the official Facebook page of The Unexplained with Howard Hughes.
And I also have a Twitter account at HughesOnAir.
Right, I'm going to stop talking now.
Let's get to the guest, Fevzi Turkalp.
We're going to talk and have a deep dive into the world of tech.
Fevzi, thank you very much for doing this.
How are you?
I'm very well.
Thank you, Howard.
How are you doing?
Very good.
You know, we were discussing, before we started recording, how we're doing this.
You're on Wi-Fi and I'm at home.
And we're kind of just hoping since we're talking about technology and we're both supposed to know something about it that this is going to work.
Otherwise, it's going to be very embarrassing.
More for me than for you, I would say.
But there we are.
Howard, Howard, what could possibly go wrong?
Well, no, exactly, exactly.
Before we get into the deep technology and the peering into the future, which we do around this time every year, let me just ask you about how things are at the moment, because I get the feeling that the march of technology in general, when we're talking about 5G and advances in the gadgets and things that we use, seems to have slowed down over the last year or so.
And I'm wondering if that happened.
Is that because of COVID, political instability that we see everywhere?
What is causing, if I'm right about this, the slowdown in the progress that we were making in fields like 5G technology? I was told 5G was coming this way, where I live, very soon. That was 18 months ago.
And also all the gadgets that we buy, they seem to be in short supply and things don't seem to be updating as quickly as they did.
Long question, but I'd appreciate your thoughts on it.
Right.
Well, with regard to 5G, I think that was always going to take longer than people were given to understand.
And that's a given.
It's always the case with rollouts, because it's a hugely expensive and problematic activity.
And because the companies don't like to play nice with each other, then that slows things down more than it needs to.
But that has been exacerbated by a perfect storm: chip shortages caused by COVID, people working from home, so you're not getting the same sort of innovation, arguably, going on.
Plus, political instability means that companies are trying to diversify away from China.
Apple is trying to move a fair chunk of its production into India at the moment and looking at other suitable countries as well.
So it's a combination of all of these things.
Plus, you've got countries going into recession now.
Germany now expects to go into recession.
China is still operating its sort of zero tolerance towards COVID policy, which is causing huge lockdowns and instability.
And then, obviously, wars around our continent are not helping matters.
So all of these things together mean that it's very hard for companies to get hold of the parts that they need.
It's very hard for them to accurately predict what they will need.
And even Apple, because Apple, you know, is such a huge customer to any chip manufacturer, often gets the first bite of the apple.
But even with them, they're struggling and products are being delayed and delayed and things that we had expected to see well before now are still not being released.
And is this necessarily a bad thing?
Because part of me, and I know, look, compared with most people I know, I'm old.
All right, I accept that, but at least I think I think young, maybe.
I got the impression, for a while, that maybe things were beginning to overtake our capacity to appreciate them, enjoy them, and understand them.
So, a little bit of a slowdown is maybe philosophically not entirely a bad thing.
Yeah, I mean, there's many aspects to that question.
I guess what I would say to you is that it's always been my view that the best way to get the most out of your technology is not to keep buying new technology.
You might find it strange that someone who does what I do would say that, but it's true.
So, for example, you know, if you're interested in making the best photographs, taking the best photographs that you can do, the way to do that is not to buy the latest whiz-bang camera.
It is to get to know the camera that you've got.
And when I say camera, that obviously includes the cameras in our handheld devices.
So, that's not a bad thing.
I have phones coming in and out of here all the time.
I test them.
The personal phone that I bought, I have deliberately used it for six years and it's still fine.
It's a six-year-old iPhone and it's fine, and I won't replace it till next year, and that's always been my plan.
I try to buy the best technology I can get at the time, and then I try and use it for as long as I can.
And I really haven't had problems with it as well.
You know, the thing about particularly mature technology, so smartphones are relatively mature.
You know, in the early days of smartphone technologies, every year you'd have significant improvements.
Now, they're incremental improvements year by year, you know, sometimes not even noticeable.
So, instead of changing smartphones every couple of years, as we were, you know, minded to do in the past, there's no need to do that now because, you know, unless you're really going to push the bleeding edge of, you know, you want to do 8K video or something like that, you don't need the latest phone.
And in fact, it's much better to get the most out of the phone that you have.
Get to know your device well, whatever it is.
You will get better results.
I promise you, most people don't know how to use most of the features on their existing phones, for example.
I think you're dead right about that.
You know, I'm still learning things about my Samsung, and it's not an expensive one, but it does everything that I want it to.
And, you know, assuming that upgrades that come up from time to time, updates to the software, don't actually slow it down, then it does everything that I want it to do.
I just thought we would talk generally about the kind of consumer technology that people interface with at the beginning and where we stand in relation to all of that.
And of course, all of those problems that we talked about, the political instability and the shortage of chips and all the other stuff, seems to be making shortages in shops.
You know, I do have a look at the consumer electronic stores.
Still, I still go and look at them.
I don't buy a lot there now.
I buy a lot online, I'm sad to say, but there seem to be periodic shortages of things that we used to have in abundance.
And that just seems to be a way of life at the moment.
Yeah.
So I would say in the early part of the last three years that we've had COVID here, a lot of people, when they were not going out and they were not holidaying and they were not doing all these things, spent money on upgrading the technology in their homes.
So they got bigger screen TVs, surround sound.
There was actually quite a lot of money spent plus people who started working from home and then realized that their six-year-old computer didn't really cut it and then bought a new one.
So there was a lot of spending then.
And it was interesting, because a lot of manufacturers had cut back, either through necessity or because they thought that demand would drop, and then they couldn't fulfill the demand.
So there was that problem.
Now, you know, in Europe and especially in this country, I would suggest, you know, people don't have spare cash.
You know, people are worried about how to pay their energy bills.
They're not going to shell out on, you know, expensive technology by and large.
And so now, again, it's this constant difficulty in trying to match, to correctly predict the demand for these products because the lead time is so long.
You know, you can't just switch on another factory.
It takes a couple of years or more to match supply to the anticipated demand.
And that's the problem.
You know, we will have oversupply and then we'll have undersupply.
But in general, the chips are hard to get hold of.
And I don't see that improving in the next 12 to 18 months.
Okay, well, let's dive into the white heat of technology then and talk about the way things might be in the future.
Artificial intelligence makes the news every week.
As we record this, 10 days or so ago Ai-Da the robot was in the House of Lords in the UK, addressing the House of Lords about the increasing use of artificial intelligence and robotics in the arts, for one thing.
And I thought that was pretty good, even though Ai-Da had to be, I think, rebooted at one point.
I spoke on my TV show with Aidan Meller, who created Ai-Da, and he thinks it's very important that we keep AI, and where AI is going, in the forefront of people's minds.
Do you think it is at the moment?
I don't think we are having the discussion, as a country or a civilization, about what AI is going to do for us, what the dangers of it are, and what sort of world we want to live in in the future.
There are all sorts of ways that the AI story could unfold.
And I'd suggest that it's not going to follow any plan in particular.
But that doesn't mean we shouldn't lay down guidelines, that we shouldn't start working on legislative structures that protect us.
For example, if artificial intelligence does replace the majority of jobs, as in the fullness of time it may, what will that mean for humans?
What is it to be human?
Are we wage slaves that will be emancipated by AIs and robotics doing the jobs that we had to do previously?
Or are we beings that are only really happy whilst we're striving towards meaningful goals?
Because if we are the latter and we all lose our jobs, then suicide and depression are going to go up, aren't they?
There are experiments going on with this now; some of the Nordic countries are experimenting with the idea of providing people with a living income, not jobseeker's allowance, not paid in the expectation that you will find a job.
The experiment is: out of taxation, we will pay you an amount of money that you can not only survive on, but live on.
Well, you know, not be rich, but consume products, because you have to think of it from the commerce point of view.
If companies reduce their costs and improve their efficiency by replacing human workers with automated systems, then how will they have consumers with money to buy their products and services?
So one idea is that we tax those companies' profits in such a way that there is sufficient income to pay people to actually consume those services and products.
So it's kind of like virtual money going around in circles, but that's the way economies work anyway.
So we have to think about that because you can say that people will be unemployed or you can say they'll be liberated, but you have to think about what these people actually want to do.
And we're not having that conversation, and we really ought to.
We should be thinking about the opportunities, but also the threats that AI can pose and what we will do and what we won't do.
For example, should this country push for a moratorium on AI-based weapons systems?
Can such an agreement be verifiably brought into place?
Because verifiability is the issue; it's a bit like the nuclear arms race, right?
Everyone's working on AI weapon systems because they're afraid that all their enemies are working on them,
in the same way as, you know, people had nuclear weapons because their enemies had nuclear weapons.
And so many people have now said that artificial intelligence represents an existential threat to humanity; that it may be human beings' last invention, and that it may supersede us and effectively wipe us out.
So why would we then go ahead and develop something that's that dangerous?
One is human nature, but it's also to do with commercial competition and military competition.
So in commerce, obviously, a lot of money is now going into automated vehicles, self-driving cars.
If a large taxi company, let's say Uber, took the view that, no, customers like human drivers, they like human interaction, we're going to stick with that.
And their competitors go down the self-driving route.
And suddenly we get to a point in the technology where the competitors don't have to pay their drivers and can offer rides for half the money.
Then Uber, who didn't, in my example, invest in the technology, would then go out of business.
So you have no choice.
You either have to keep up with the technology and the opportunities it brings you for efficiencies and savings, or you go out of business.
And of course, I'm sorry to jump in, but you can talk about legislation on these things.
But if you're the only country that is legislating to control them, you know that all of your competitors in world markets are unlikely to follow suit with you.
And so the international situation is a reflection of the individual situation, or the company situation: it's the company that doesn't invest in these things that's likely to go bust.
Similarly with economies, I think what I'm getting around to saying is that the only way to control this, if indeed we can, if this genie isn't out of the bottle already, is to do it internationally and to have internationally agreed frameworks.
And if you think about all of the climate change negotiations and things like that, how successful have they been really?
Good luck with trying to legislate about AI.
Yeah, no, it's difficult.
I mean, if you think about AI weapon systems, there have been various failed attempts at the United Nations to introduce a moratorium on the development of these weapons.
And those have been vetoed by countries that we consider to be, you know, relatively speaking on our own side, like, for example, the United States of America has vetoed such actions.
That's why we should have those discussions as a people, because as a people, we could arguably give a mandate.
Now, it may be that we say, well, we want these weapons.
I mean, look at Ukraine at the moment.
I'm not advocating for nuclear weapons, but Ukraine gave up its nuclear weapons after the collapse of the Soviet Union in exchange for security guarantees.
I dare say that it wouldn't have been attacked if it had retained them.
So, you know, these technologies are...
Okay, we have smart drones and so forth.
And I think there are machine gun turrets on the boundary between North and South Korea that actually acquire the target themselves, identify it, and make the kill decision without human intervention, which is kind of scary, but that's where we're going with this.
If we then, you know, we've all seen those YouTube videos of Boston Dynamics robots dancing around and doing backflips and all the rest of it.
If one of the things that often stops wars is body bags coming home, a lack of soldiers and soldiers coming back in body bags; if instead of that happening, we can just ramp up production of robotic soldiers, until the point where the robots start refusing and demand their rights,
but that's another thing, then warfare will not be self-limiting in the way that it can be for reasons of resources, if it's possible to produce these things cheaply.
And it is interesting.
I mean, you look at what's happened in Ukraine in the last few days, that the Russians have apparently been flying Iranian drones, which are relatively cheap.
These drones are $20,000, which in weapons terms is tuppence.
And, you know, they can hover in the air and cause death and mischief and mayhem and fear.
And this sort of use of even consumer-grade drones, you know, DJI and other drones getting used very effectively in warfare.
So you've got these sort of low-cost, relatively low-cost things being developed that are just changing the nature of warfare.
And I just don't see how we can easily prevent the development of AI-based weapon systems because any agreement, even if people could come to an agreement, any agreement has to be backed by verification.
So it was possible to verify nuclear silos and so forth.
We could even verify to a degree enrichment of uranium in Iran.
But AI systems? It's not easy to spot an AI development lab from the sky.
Unless you can see robots running around in the backyard.
It's hard to spot.
It's an inevitability that these things are going to happen.
I guess from my point of view as an ordinary Joe sitting at home doing this, I would only really worry, and maybe I'm wrong about AI weaponry, if it reached the stage where it was able to take terminal decisions, like decisions that could destroy cities, destroy the place where I'm sitting at the moment.
If AI weaponry reached the stage where it could do that by itself, then we all have a problem.
Well, we already, as I mentioned, we already have AI weaponry that can make a kill decision.
And one of the problems, for example, with drones is it's possible to scramble the control signal of a drone and crash it or even take control of it.
So there's a lot of work that's been done on making drones fully autonomous.
So even if they are controlled, and maybe they won't need to be human controlled at all, maybe they'll just need to be thrown up in the air and they'll do their job.
But they won't rely upon the control signal.
They'll be able to identify, acquire, and destroy targets within the parameters of their programming, but they will, as they become more intelligent, it will be a bit like, you know, we bring up our children well, but sometimes they do things we don't approve of, right?
And it will be like that.
The AI technology will be more black box.
That's what's happening now.
The technology, the way that AI develops intelligence is not by people writing more lines of code.
It is by the AI algorithms being exposed to huge swathes of data, interpreting them in certain ways, and coming up with new ways of working.
So in the end, what we're ending up with is systems where even the human designers don't fully understand what they're doing and how they're doing it.
And that's intrinsically dangerous because we can't control it and we can't verify it.
Well, we were specifically told, weren't we, a couple of years ago?
I can remember interviewing people, people who know their stuff about artificial intelligence.
And those people saying to me in answer to the question, will this stuff ever get out of hand and start making its own decisions and then we need to worry?
They'd always say, oh, no, it'll never do that.
It'll always be within our power to constrain and control it.
I've been looking at the headlines over the last year or so, and there's a lot less of that talk around, Fevzi.
Yeah, so the difficulty is that AI is being increasingly used to generate the next generation of AI.
If you look at Intel over the decades producing chips, when it produced chips, initially there were a few thousand transistors.
A human being could draw them, wire them together, and Bob's your uncle, you had some sort of rudimentary processing.
As the number of transistors on a processor went up and up, and it's in the billions now, you can't do that without automation and without AI, because a human being's lifespan would not even be sufficient to draw that many transistors, much less connect them together in a meaningful way.
So what I'm saying is, each generation of processor, if we're just talking about the hardware now, is increasingly designed by the current generation.
That means the time between new generations will shrink and the jump in performance will increase.
And it will just get out of our ability to control it.
So with these systems, you know what I mean by an asymptote: the curve will go up almost vertically.
And these things will just develop to a point where we don't understand how they work and we don't control them.
And because you've got this generational thing, there are humans now whose job is to try and understand what it is that we could program into the current generation of AIs that would filter through the various generations and still prevent them from killing us.
And I have to say, so far, no one's come up with satisfactory answers.
You know, like you've watched the film I, Robot, and you know Asimov's three laws of robotics.
You know, you will not harm a human.
That can't be done when the AIs are effectively procreating.
They are creating the next generations themselves.
You can't do that.
You can't.
If a system is truly intelligent, it will have the ability to do things that you don't expect.
Otherwise, it's just a mechanical toy that does predetermined sequence of things.
And that's what it comes down to, the ability to do things that you don't expect.
Does this mean then that inevitably, Elon Musk and his ilk have it right, we have to get more and more aligned with computing technology, artificial intelligence technology as a species?
If you look at the Neuralink website, for example, I looked at it just before we did this, mission statement at the beginning says we are aiming to design a fully implantable, cosmetically invisible brain-computer interface to let you control a computer or mobile device anywhere you go.
Now, if you extrapolate forward from that, doesn't that mean that inevitably the future for us is to become assimilated with the technology, so that we don't fall out of step with it?
If we want to keep up with it, we've got to become part of it.
Yeah, and I think that's right.
And I think, you know, there are some serious minds that say that we are probably one of the last generations of pure humans.
And by that, I mean we become cyborgs, you know, that we start to implant this technology.
So one of Elon Musk's companies is doing a lot of work on brain-computer interfaces.
This is the ability of some sort of technology to read the brain signals that the human brain produces, interpret them, understand them, and then use them in various ways, either to communicate with other people or with the internet or possibly to control limbs.
So most of that brain-computer interface work is being done at the moment for things that most of us can agree might be a good idea, like helping people who are quadriplegics to control limbs again or allow the brain to control artificial limbs.
So that's a good thing, but we're not far away from people wanting to then develop further once they've worked out how to implant these things without having to do brain surgery.
And there are already ways of doing it without brain surgery, by the way.
But once you get to that point, then you'll have a situation where people start wanting to enhance us.
So we will have unlimited memory capacity.
Our brain's processing capabilities will be enhanced.
We will be given senses that human beings were never designed to have.
It's quite interesting, because in the experiments they've done with interfaces to the brain, if you've lost an arm, the brain could control an artificial arm.
Fine, it's used to controlling two arms.
But the brain can also control six arms.
It's very, very adaptable.
It's designed to take electrical input from the nerves that conduct, you know, from the eyes and ears, and turn those into sensations.
But you could give us new senses and the brain will adapt to them.
So you could give us night vision.
You could give us an internet connection.
You could give us backup storage.
You know, all of these sorts of things can be done.
The brain doesn't really know anything about eyes and ears and skin, you know, touch and smell.
What it knows is how to interpret a series of electrical impulses generated by sensory organs.
And therefore, it's possible to do some completely crazy things.
So to come back to your question, yes, if we are going to keep up with technology, we will have to be partly technological beings.
Otherwise, we're not long for this world.
We will have given rise to a species, or species, that will supersede us.
So we have to do this.
And that sounds very worrying because it sounds to me, and I read the newspapers, that we're getting closer and closer to that.
And I wonder, yes, there are great benefits, as you say.
The ability to be able to make those who cannot see see, those who cannot walk walk.
Those are marvelous things.
But when you start taking it beyond that, and it begins to have literally a mind of its own, at what point do we cease to be that wonderful species we are, the human being, with our ability to make bad decisions, you know, our fundamental fallibility?
All the things that make us human are going to be going out the window.
And I think this is beginning to happen already with our interface with technology.
People are feeling less and interacting more, if you know what I'm saying.
They are ceasing to be as human as they were.
Yeah, I mean, we will have to, we will change.
And by the way, our technology is very capable of making bad decisions also.
One of the problems with AI systems is that they're trained on data, right?
They're trained on huge swathes of data.
And the problem with that is that sometimes the data contains bias.
That's why you've had AIs go onto the internet, interact with humans and start espousing fascist views within days.
So they are, in this sense, our children.
But the temptation is so great, Howard.
You know, the ability to reverse and cure dementia, Alzheimer's disease, Parkinson's disease, all these neurological diseases could be wiped away by these technologies.
You know, one in three people will get dementia.
Why would you not want to do something about that?
But the problem is, you know, we'll start with fixing that which is broken, and then we will quickly move to augmentation and enhancement.
And at what point do we cease to be human?
At what point are we hybrid?
You know, look, if I replace my arm with a mechanical arm, I'm still human.
If I replace it with two arms, I'm still human.
What about my legs?
What about my organs?
Is it really just the brain that makes me human?
You know, am I still human if I'm just a body that can't do anything, but my brain is still functional?
Am I still human?
Yes.
But then what happens when we augment the brain and the function of the brain?
At what point? At the moment, we're human.
We do Google searches.
That doesn't make us not human, right?
We have access to these huge amounts of information.
What if we don't need to make that request for information by typing or a voice request?
What if we just have to think it?
And then how is it different from trying to recall something that is stored locally in your mind as opposed to on the network that you're wirelessly linked to?
At what point do we cease to be human?
What does it mean to be human?
These are questions that are...
We saw that with reproductive medicine.
Back in the day, we talked about test tube babies.
And the technology became viable far before the people who worry about ethics and societies had even considered the questions, much less answered them.
So now we have technologies here and on the horizon where a child can have two parents of the same gender, can have three parents, can be cloned, all of these issues and all the ethical questions that they give rise to.
And we're not answering those questions.
And we look at what our politicians are spending their time on.
And it's deeply worrying because the technology is, despite all we've said about chip shortages and this and that, the march of technology is endless.
And yet we're not preparing.
We're not asking the questions.
Because if we don't ask the questions, how can we possibly lay down a legal framework or any other framework, ethical framework for our scientists?
And we're not asking the questions.
And I don't want to alarm people.
I'm not here doing this to alarm people.
But we're not asking the questions because we're so busy wondering how we're going to pay our energy bills, which is a big scary thing for most of us, definitely me included.
How are we going to pay our energy bills?
What's going to happen to the economy as it shrinks in this country and other countries?
How are we going to deal with Putin?
How do we put the squeeze on him as he puts the squeeze on us?
So we're thinking about all those things.
But at the same time, this is like a pan on a stove.
It's slowly bubbling away to the point where it's going to boil over.
And we're not thinking about it.
And I really don't know what we do about that, Fevzi.
I just don't.
Well, I have to say, Howard, I'm really quite concerned because there's always been a proportion of every human population that has not been the brightest, right?
And there are people that have believed weird things without any evidence to support them.
That's fine.
And a society can comfortably carry 10 or even 20% of its population not being capable of critical thinking.
The problem now is the use of social media and the way that social media is able to promulgate disinformation and misinformation: the deliberate misinforming of people, and then the unfortunate sharing of it by innocent people who share articles on Facebook without, in some cases, even having read them.
I mean, there was a lot of fuss about that.
The problem now is humans will always disagree about what to do about the facts, but now we can no longer agree upon the facts.
Every subgroup has its own facts, and large parts of our society have no common framework for discussing things with other parts of our society.
And that's a really big problem.
If you've got a small boat, a dinghy, and you've got one idiot jumping up and down, the boat can survive.
But if 30% of them are jumping up and down, the boat will sink.
And we have some people who have, through no fault of their own, been so indoctrinated and misinformed that they're rendered incapable of rational thought.
And the problem with it is that the technology permits that.
If you start taking that away from people, and I understand that they will say this and I can understand why they do, and I have a certain amount of sympathy with the view, you know, you take away their freedom.
If you take away their technology and their ability to communicate in those groups or whatever it might be, then you have taken away a bit of their freedom.
And maybe we should be looking at it from the other perspective.
We just have to enhance the ability of the majority to see through lies and disinformation when they appear.
Yeah.
So in Finland, they're running a very interesting social experiment, and it's been quite successful so far.
They view disinformation as a disease that you can be vaccinated against.
And the vaccination is not one that goes into the arm, but is to teach people in the educational system, but of all ages, not what to think and what to believe, but how to exercise critical thinking, how to assess a piece of information in the way that a journalist does, right?
A good journalist will receive a piece of information.
They'll try and verify it.
Can they verify that information from another source?
What is the standing of the source?
Has that source been reliable?
Is that person who's written that article the real person?
Does the photograph in the article show what it claims to show?
Do a reverse search to see if that photo has appeared elsewhere in a different context.
It could be something completely different.
So they are teaching people these basic, almost journalistic skills, but critical thinking skills to assess information.
I reiterate, not to tell them what to think and what to believe, but just giving them the tools to assess this barrage of information and misinformation that's heading our way, and particularly because it's being weaponized.
You know, there has to be, you know, something where we don't just take what we read on Twitter and Facebook at face value.
Ask yourself, who is saying this?
And then check their bona fides out.
What have they done?
What are they famous for?
Are they commenting on something that they're qualified to comment about?
We have free speech.
Anyone's allowed to say more or less within reason whatever they want.
But in specific areas that affect life and death, maybe it's an idea just to go and take a look at who is saying what is being said.
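The checks Fevzi lists could be sketched as a simple scoring routine. To be clear, this is just an illustration of the idea; the field names and the equal weighting are invented for the example, not any real fact-checking tool's API:

```python
# A minimal sketch of the verification checklist described above.
# All field names and the scoring scheme are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Claim:
    independently_verified: bool     # confirmed by a second, unrelated source?
    source_track_record: bool        # has this source been reliable before?
    author_identity_confirmed: bool  # is the byline a real, reachable person?
    image_matches_context: bool      # reverse image search found no conflicting use?

def credibility_score(c: Claim) -> float:
    """Return a rough 0-1 credibility score: the fraction of checks passed."""
    checks = [
        c.independently_verified,
        c.source_track_record,
        c.author_identity_confirmed,
        c.image_matches_context,
    ]
    return sum(checks) / len(checks)

claim = Claim(True, True, False, True)
print(credibility_score(claim))  # 0.75
```

A real assessment is of course a judgment, not a number, but the point of the sketch is that each question on the checklist is a discrete check you can actually perform.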
Which brings me to this.
Very much in the news, in the news this week, deep fake technology.
The ability to be able to create a copy of something, the ability to create somebody who might be an exact digital virtual clone of you or me.
This is proceeding at a pace, and I wonder, there's another genie.
How do we put that one back in the bottle if we want to?
You don't.
You can't because the technology already exists.
It runs on people's PCs.
If you're willing to let a program crunch away long enough, you can produce a convincing deep fake on a PC worth a few hundred dollars.
And there's no way that that's going to change.
It's a battle.
Just as AI is used to create the deep fakes, AI can be used to help detect them.
And this thing, it crops up in so many ways.
I was thinking about uses of AI going into the future, and one of them is to make our computer systems more secure.
So you've got AI systems being used to hack computer systems, and then you've got AI systems being used to defend against AI hackers.
So these AI systems become our pawns, and they're on either side.
It's the same analogy as when we talked about AI weapons systems.
We talked about AI in commerce, about how our competitors use AI to compete against us in the commercial space.
And it's the same in security, and it's the same with these sorts of disinformation media that's created.
And as the technology improves, Howard, of course, it will become impossible for a human being to differentiate between a deep fake and something that's genuine speech or video.
That's worrying.
It was worrying a year ago when we talked about it.
It's even more worrying now.
I have no idea what we would do about this.
And I know a question I asked you the last time we spoke about it is just as relevant now.
Maybe watermarking technology, veracity technology, something that is embedded within something that you yourself produce and know you've produced.
Maybe that's the way to do it.
But I don't see any great moves to adopting this as a standard.
No, and that comes back to how do we solve these problems?
So I've sort of suggested that one way of attacking the problem of, and I'll put deep fakes under disinformation, there can be other uses of it, but it's a powerful tool, isn't it, to show the president of the country saying something that he never really said, for example.
So critical thinking, that's one side of it.
We need to look at social media companies and their business models and what they are legally responsible for.
And that has to change.
The business model that companies like Facebook have is one where access is, in quotes, "free".
And of course, it's not free because you're paying for it with your very valuable data.
If your data were not worth more to them than any fee you might pay, they would simply charge you in money.
But they make more money out of just scraping all the data off your soul as you go about your online business.
And what you need to do is actually prevent companies from having the business models that they have, either regulate them to within an inch of their lives or just ban them completely.
So you could have social networks where they're not allowed to capture your data in the way that they are.
The basic premise is that the data that you shed, the metadata that you shed, and by that I mean if I make a phone call, it's not just the contents of the call.
There's metadata that's been created that the mobile phone network will know.
They'll know my location.
They know who I rang.
They'll know how long I spoke for.
They'll know where the location of the person I rang is.
They'll see how frequently I ring that person, what time of day.
And you can start to make inferences from that metadata.
And you can monetize that data.
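As a toy illustration of the inference point Fevzi is making, even without call contents, bare metadata supports conclusions about a life. The records, names, and the late-night threshold below are all invented for the example:

```python
# A toy illustration: inferences from call metadata alone, no call contents.
# The records, names, and the 10pm "late night" threshold are invented.
from collections import Counter
from datetime import datetime

calls = [  # (callee, start time, duration in seconds)
    ("alice", datetime(2022, 10, 3, 22, 15), 1800),
    ("alice", datetime(2022, 10, 4, 22, 40), 2400),
    ("clinic", datetime(2022, 10, 5, 9, 5), 300),
    ("alice", datetime(2022, 10, 5, 23, 1), 2100),
]

# Who is called most often, and when?
freq = Counter(callee for callee, _, _ in calls)
most_called, n = freq.most_common(1)[0]
late_night = sum(1 for callee, t, _ in calls
                 if callee == most_called and t.hour >= 22)

print(most_called, n, late_night)  # alice 3 3
```

Four rows of metadata already suggest a close relationship (long, frequent, late-night calls to one number) and a possible health matter (a morning call to a clinic), which is exactly why metadata is so monetizable.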
Now, if we just made a law that said the data that we shed is ours unless we knowingly give it away or sell it, then everything would change because then Facebook and others would not be able to learn all that about us and use that information to target advertising on behalf of their clients and make the money that they do.
If they were not allowed to do that, they would be forced to go back to more conventional business models where people are paid for the services that they provide.
And that would go a long way.
And the other thing that we should change is we should start making CEOs responsible, the companies themselves, and the CEOs personally responsible for the misdeeds of the businesses that they operate.
And we already have a precedent for this.
Some countries have the concept of corporate manslaughter.
Although companies have limited liability for their staff and shareholders, there are misdeeds that are so great that the individuals become liable.
And we should be increasing the liability of the companies.
So social media companies for a long time have tried to advertise that they are just a conduit, that they are like the copper wire in BT's networks in the UK, that they are no more responsible for what goes over their network than BT is for what's said over its own network.
They are in fact broadcasters, all of them.
They are broadcasters.
Yes, they're broadcasters, and although they may not be the generators of the content, they are the creators of the algorithms that decide what we see and what we don't see.
And that's, you know, if you allow everyone to say whatever they want to say, but then you have an algorithm that promotes certain speech, then you may as well have written it yourself, and you should certainly bear some responsibility for it.
And if you did that, the investment in AI systems to spot this sort of nonsense, be it deep fakes or out and out lies about whatever it is, would fall more greatly upon them.
And believe me, these extremely rich companies would make huge strides forward in very limited periods of time to take a lot of this stuff out root and branch.
It will always be a fight.
But at the moment, the business models that we're allowing them actually incentivize this sort of behavior.
So do you remember there was a whistleblower from Facebook giving evidence in the United States a few months back, and she spoke about a conversation that occurred in Facebook where there was a discussion about whether it should be possible to share a story that you haven't opened and read yourself.
Because the promulgation of information that you haven't even looked at, how could you verify?
How could you apply critical thinking and decide whether that should be shared?
And it just was allowing these stories to spread like viruses without any critical oversight of them.
And there were people who were concerned about truth who were arguing within Facebook, apparently, according to the whistleblower, that actually, no, we should at the very least make sure that people have opened the document and possibly read it.
But the accusation was that people right at the top of Facebook vetoed that and wanted to allow unread articles to be promulgated across its network because the more things are shared, the more time people spend on the network, the more advertising they see and the more data that they create that can be captured and monetized.
So it is the business model that's wrong and that needs to be changed and it needs to be changed, you know, not just in the European Union or wherever.
There has to be a global agreement about the harm that these companies do.
There are people who think that these companies are more harmful than the tobacco companies were in their day and still are in some ways.
Well, you know, I think that's a very good analogy to make.
I can dimly remember this, from when I was a tiny boy.
My father used to smoke, and then he was told that he would live for about a year if he didn't stop, because his lungs were deteriorating; he was a heavy smoker.
And boy, did he stop quickly.
I think we're in the era where information, and let's broaden it out from Facebook, technology generally, where information is a weapon.
It's harmful to people.
And control is a heavy word.
But it needs to be regulated.
Regulation, I think, is a better thing.
And as you say, making people at the top more corporately responsible for what happens is maybe one way forward.
But, you know, here's another debate that's not being had.
It is not happening as much as it should.
And that is another worry.
Sorry, I was just going to say, interestingly, the European Union has been quite good on this.
I mean, GDPR regulations, although we all know about how annoying it is to have to select our cookies on every website, that's not been properly implemented, and that needs to be dealt with.
But GDPR, it's the first steps towards recognizing our right to privacy and that information belongs to us and not to them.
But even the Americans are starting to look at GDPR and see what they can take of that.
We're kind of going the other way, having exited the European Union.
We are sort of, well, we're going to go our own path and I guess we'll see where that leads us.
But at the moment, we're still getting those prompts saying refuse all but essential cookies.
And, you know, I very frequently opt, unless it's an organization that I know really well.
And even then, sometimes I'll think about it.
But, you know, mostly I try to opt out.
But it has to be said that individual organizations, corporations, companies implement GDPR differently.
Some of them will just give you refuse all except essential.
Some of them will say refuse everything and that's okay.
Some of them will make you go through a process so convoluted and minute where you're having to reject virtually every client they deal with for your data.
It becomes such a slow and torturous procedure that I think most people just hit allow all.
So that needs to be done.
I did it this morning.
I actually did it this morning because I was faced with, I estimate, more than 100 switches with no reject all option.
Look, there's two things to say about, well, three things to say about that.
That can be improved.
You know, that's kind of like one of the weaknesses in the implementation of GDPR that got GDPR a bad name in some quarters.
That's not to say that GDPR doesn't do some amazing things it does and a lot of them are sort of below the level that we even are aware of, but they're there to protect our rights and our privacy.
And it's a good thing.
So it can be done.
Now, sometimes some companies are deliberately, in my view, deliberately making it hard for us to block cookies because that's the way they make their money.
I think there's little doubt about that, I would say, from my own experience, but I might be wrong.
You're not wrong.
It's correct.
But the second thing to say is that sometimes the difficulty is to do with how deeply embedded companies like Google and Facebook and others are into other people's websites.
You get to the point where the website can't even switch off the things when it wants to.
And it invites you to follow links and read Google's policy and Facebook's policy and try and switch them off.
And who's going to spend, even if you've got the wherewithal to do it, who's going to spend 20 minutes plus on each one trying to work out how to switch off Google's tracking on it?
And then you've got companies, you've got companies like Apple who try and prevent tracking.
And then you've got the companies to whom, you know, you're clearly signalling through Apple's services that you don't want to be tracked.
And they're working on technological means of circumventing that instruction, that request.
That's just wrong.
If that's not illegal, it ought to be.
And that's the sort of thing that we should be changing.
But there will be people listening to this who say, well, that's just free enterprise.
It's the way things work.
No, it's not, because what you've in fact got is contracts that are not valid, right?
It's well known that so many entities have these terms and conditions, pages long, that you would probably need a law degree to understand.
And they know full well how long you spend on every page.
And still, they allow you to scroll to the bottom of thousands of words of terms and conditions and click, I have read and understood this, when they know full well that you haven't.
Yeah, but they will accept it because it suits them.
And the basis of any contract is an informed agreement.
You have to understand what you're agreeing to, otherwise the contract is invalid.
What you really need is a nice class action to overturn the way that these contracts are implemented, to force them to make contracts that can be understood, and where the defaults are that we own our data and that you have to knowingly agree to sell your data in return for some sort of compensation.
That's called informed consent.
Okay, we're coming to the back end of this.
The future of the internet.
You and I have talked about whether the internet is doomed, whether the internet is going to be reborn in a kind of internet 2 or internet 2a version.
The internet as we have it now is being crammed to capacity, and it's expanding exponentially.
Can it survive?
Yes, I mean, it will not only survive, it will continue to develop and become faster and ways of interacting and experiences through the internet will be enhanced greatly.
So the internet at the moment, for the most part, the way that we perceive it, is that we look at screens, we type keyboards, we move mice, we tap screens, you know, we move things around.
And we are sort of viewing the internet from the outside.
Our smartphone or our tablet is a window into this other world, effectively.
All this information, all these other people, we communicate, we can have video calls, but it's all rather two-dimensional, if you take my meaning.
You're sort of separate from it, you're viewing into it, is what I'm trying to say.
That's going to change.
So, you know, there's been a lot of talk about the metaverse.
So the metaverse is effectively, instead of an internet that you view on a screen, it's an internet which surrounds you.
It's a world, just in the way that the real world surrounds you.
You turn your head, you see something different.
We sit around a table, we talk to each other, we undertake sports and activities, and all these things will have their analogue in the metaverse.
And that analogue will become, over time, more and more convincing to the point where it will not really be possible to work out whether we're in an artificially created world or a real world.
And there are some people who believe that the world that we live in now is an artificially created world.
They do, and there are many convincing arguments for that.
So we're going to, you think this is going to change, arguably dilute our perception of what is reality?
Because reality is going to be both electronic and physical?
Yeah, I think people will. As these things develop, well, you've already got it, right?
You think about these very addictive computer games, you know, and these very immersive computer games, battle games, all sorts of games, and they are very primitive compared to what is coming in terms of the experience, because we can still very much know that it's not real because we're still looking at it on screens.
We turn away, we see our bookshelf or our TV or our spouse or whatever, you know, so we're not completely immersed.
You know, we have more immersion, we wear headphones, and we do that.
But it will be much, these worlds will be very convincing and very seductive.
And people will choose to spend more and more of their time in these virtual worlds, just like people are addicted to games.
But it will be more.
And at some point, some people will live entirely in them.
You had a very early version of this.
You remember something called Second Life, which is still going online.
And that was, yeah, still there.
There's still people ambling around in Second Life.
For those who don't remember it, that was a sort of alternative reality where you create, as you say, an analogue of yourself, sometimes a slightly enhanced version of yourself, somewhat cooler, you know, somewhat better than yourself, who goes away and has a life in the metaverse.
Yeah, but people were setting up real businesses in Second Life.
I know people who ran businesses in it and made money, well, made profit in the world that they could bring out of the world into this world to use themselves.
That's doing my head in slightly.
You mean there were people who made businesses trading with characters who don't exist?
Yeah, so they had a character that would make things in that world, clothing, weapons, whatever it was.
And then they would sell them for credits, which they then can turn into cash.
They were running real businesses, you know, real world businesses, but providing virtual products.
And, you know, the other thing is, by the way, you know, this thing about the law having to catch up, the idea that virtual property has value in the real world.
So, you know, these things have real value and we have to have legal systems.
I know you're going to say that phrase in a moment. You introduced me to non-fungible tokens, where you can buy a virtual right to have or experience something.
You never actually own it, but you have a virtual right to it.
Yeah, well, I mean, that's a whole big other discussion.
But what I would say about non-fungible tokens is, in my personal opinion, they are not what a lot of people think they are and don't give them the rights.
You know, you have a non-fungible token for someone's photographs, but you don't own the copyright to the photographs.
The photographs can still be reproduced by other parties.
What if you've got a certificate that says you own the original?
What does it mean to own the original of a digital work?
I don't know.
You know, either non-fungible tokens need to change into something more legally meaningful or disappear.
Again, I'm older than I should be.
Yeah, well, I am older than I should be.
It all looks like smoke and mirrors to me.
Don't worry about that.
Well, in the virtual world, you'll be fine.
You know, I'm kind of hoping that that's going to be the case.
But in this world, it's smoke and mirrors.
What we used to call, us old fuddy-duddies, it's what we used to call smoke and mirrors.
Yeah, I honestly think when, I mean, look, you know, I'm not the best person to assess non-fungible tokens because there are commercial legal aspects of those that I'm not trained in.
But my sense of them, when I look at them and what I understand that they give you, in the absence of owning the intellectual property behind the non-fungible token, the non-fungible token doesn't give you that.
What does it really mean?
It's a bit like a signed photo or an original draft of the...
I mean, let's be fair and balanced about this.
If the non-fungible token gives you access that somebody who doesn't have a non-fungible token can't get, then you do have something.
But you can have a non-fungible token for a digital work of art, for example.
But let's say a video.
That video can still be on YouTube.
Other people can still put it out there.
You can't stop them.
All you have is a digitally signed certificate saying that you own the original of this.
But when it's a digital work, what does it even mean to have it?
I mean, this was an idea that was played with by Andy Warhol, wasn't it, when he started screen printing?
And he was playing with the idea, well, which is the original work of art?
I've just screen printed dozens of these.
They're all made by me, which is the original.
And you've got that difficulty.
I think it's a way that people are trying to monetize their... But I suppose there would be, you know, not that I'm going to do that.
I suppose the benefit in that would be those who sign up for it get some kind of, let's leave me out of it, but get some kind of cachet by association.
So they get a good feeling.
How do you quantify a good feeling?
Well, you can't.
You know, a good feeling is something that comes from consuming something.
And, you know, that something might make you feel good.
So maybe that's the value in it.
I don't know, but I love discussions about non-fungible tokens.
Sorry.
All I would say about that is that my sense is that there are people buying them, not really understanding what their limitations are, what they're really buying, and, more to the point, what they're not.
Well, then once again, we come back at the end of this conversation to what my grandmother used to say.
There are some people in this world who've got more money than sense.
You know, my grandmother was nearly 100 when she died, and she had a phrase for every situation.
And my God, even though she didn't know the technology or the world that we live in, her Liverpool savvy shines through every single day in the things that she said.
And there's another one of them.
Okay, let's end this on a positive note, Fevzi.
You and I used to talk a lot on the radio in various places about gadgets and cool stuff that we can look forward to.
So we look into 2023, which is not that far away now, scarily enough.
What do you think is going to be the cool thing to have, the cool thing that will be on the market that is about to be released?
I know that you get all the news releases and you find out about these things.
I mean, for example, look, I read yesterday about clothing that can charge devices with solar power.
Like you put your phone in your pocket and your phone will come out charged.
Well, that sounds pretty cool.
But what can we look forward to?
What's good?
Right.
So, well, on that point, I'm a huge fan of solar.
And one of the things that we've seen a lot now is these portable power packs that are essentially huge rechargeable batteries.
And this may be relevant now.
We're talking about, you know, potentially power cuts in the UK over the winter.
So you've got these huge batteries, which you can charge.
They're not just for like topping up your phone, like the small little batteries that we carry around.
These are things that you can plug mains devices into, and they're getting better and cheaper.
And they are being sold either with the option of or included with solar panels.
And I really, really like that idea.
And what's really interesting about these sort of solar technologies is that a number of trends are crossing now.
And it's fascinating.
One is that the solar technology is improving.
That's to say that a panel of a particular size is increasingly able to generate more electricity than the previous generation.
Two is that the technology is getting cheaper as more people adopt it.
You're getting economies of scale.
It's getting cheaper.
So if you wanted to stick solar panels on your roof, it may have taken you, let's say, 15 years to get your investment back before the savings in electricity would offset the huge cost of installing it.
And these things are still not cheap.
But the amount of electricity you can get out of them is more than it used to be.
The cost for the performance is dropping and the unit price of electricity is going up.
So those lines start crossing much earlier now.
So the break-even point, instead of being 15 years, might be seven or even four years, depending on what you buy, how much you spend, and how much unit price of energy keeps going up.
So we're getting to the point where it will soon be a no-brainer, but only for people who can afford to do it, and that's the problem, isn't it? What you really want is governments actually helping people invest in these technologies.
But you can get a situation where people will have not completely solar-powered houses, although some of them will, but even in this country, even where we are in the UK, which is often overcast, as the technology improves, it will be possible to not only generate electricity for your own needs, but when you have excess energy, even actually sell it back into the grid.
And that's kind of exciting to me.
Clothing, yes, it's kind of interesting what we do with material science, although I'm not sure how many of those will survive a wash and a tumble dry.
But that's a challenge.
You and I have talked about things like graphene in the past.
So graphene is this really exciting material that we keep hearing about.
It's a one atom thick sheet of carbon atoms in a sort of honeycomb pattern.
And it is considered to be the world's thinnest material, the strongest material for its thinness, and also the most conductive of both electricity and heat.
So it's like this wonder substance, and we keep finding new things to do with it.
It's got this great mechanical strength, so you can put it into all these different materials, and those materials can either be super strong or super thin because they've got just a small amount of graphene.
You've got mobile phones now.
I think Huawei, the latest generation of smartphones, have adopted graphene-based thermal films.
So they're using graphene for heat dissipation because as smartphones get faster and faster, the challenge of dissipating heat from these relatively small devices that don't have fans in them becomes difficult.
So you've got graphene for that.
You've got graphene being used for energy storage.
So one of the big things holding back technology at the moment is battery technology, you know, the capacity, the speed with which you can recharge it, the size and the cost.
And there's quite a lot of research now into using graphene-based materials for batteries and supercapacitors so that we're going to get smaller batteries with higher capacity, which is hugely important and very important also for electric cars, by the way.
So it goes on and on.
It's almost like snake oil, like the snake oil salesman that says, it'll cure you of cancer.
It'll make you more attractive to women.
It will make women leave you alone.
It will do whatever you want it to do.
And graphene is kind of like that, but it's not snake oil.
It's going to deliver on its promises.
It's huge.
I'm just at the end of this.
A couple of years ago, I was invited to go around the National Physical Laboratory in southwest London, where they are doing all kinds of things. I'm constantly singing its praises because I think it has an open day only once a year where people get the chance to go around there.
But they have some of the world's top scientists there working on mobile phone antenna technology, cancer breakthroughs.
And one of the things that they've been developing there is graphene.
And I remember standing in the graphene lab, I think it was three years, maybe four years ago.
And I was told there that this is the material of the future.
And I'm glad that, because I haven't read a huge amount about it lately, I'm glad that that graphene revolution is happening.
Fevzi, you've done me proud again.
We're going to stare down the barrel of 2023 and many more conversations on TV and radio and various places.
But thank you very much for helping.
Absolute pleasure.
And if people want to get the benefit of your technical wisdom, ask you a tech question, just read what you're up to, where do they go?
Well, what I'd say is that there's a podcast, the Gadget Detective Podcast, which is available everywhere where you can get your normal podcast.
So when you go to get Howard's one, you know, chuck a pen in my direction, download mine as well.
Also on smart speakers, which I love, you can just say, you know, the keyword followed by, you know, play Gadget Detective Podcast.
And it comes out and I love it.
I even do it to my own and I don't really listen back to my stuff.
I just love the fact that it's available on smart speakers.
But any way you do it, almost all my output is available on the Gadget Detective podcast.
So it's kind of like a compendium of information.
If you want to ask questions, you can also find me on Twitter at Gadget Detective.
The rest, as they say, is just history.
And look what you're doing now.
Fevzi, thank you very much.
We'll talk again.
Thank you so much.
Cheers.
And as the man said, Fevzi Turkalp is the Gadget Detective and has been for many a long year here in London Town.
If he's new to you, then please do check him out.
The Gadget Detective is the name that he goes under.
More great guests in the pipeline here at the home of the Unexplained.
So until next we meet, my name is Howard Hughes.
This has been The Unexplained Online, and please, whatever you do, stay safe, stay calm, and above all, please stay in touch.