Across the UK, across continental North America and around the world on the internet by webcast and by podcast, my name is Howard Hughes and this is The Unexplained.
Thank you very much for all of your communications that continue to tumble into my inbox more than ever now, and also for your engagements with my official Facebook page, the official Facebook page of The Unexplained with Howard Hughes.
If you've made a donation to The Unexplained recently, thank you very much indeed.
You know who you are.
Cannot do this without your support, especially at the moment with bills for everything reaching ridiculous levels.
And some debate, although not nearly enough in the media, about price rises and how some organizations appear to be profiteering out of the situation that we're in.
The subjects that get talked about in mainstream media are frequently not these.
There are other issues that are thrown up to be debated.
But the real things that impact on people's daily lives, and yes, it does peeve me somewhat, don't get talked about.
The fact that price rises seem to be in some cases way above inflation.
Is it necessary to increase the price of some foodstuffs quite this much?
And many other areas.
Now, I'm not saying that all suppliers of services or retailers are doing this kind of thing, but I think there is little doubt that some of them are.
And as ordinary people, we just have to accept it.
But I think for an awful lot of us, including myself, financially the pips have started to squeak.
And I think we need to be asking questions of those who are put in positions to be asked questions of.
Anyway, these are topics that I shouldn't really be getting into on The Unexplained.
But I think I'm speaking for an awful lot of people when I say these things.
And I think they should be debated.
And I don't think they are nearly enough.
Something else that is a real hot-button topic at the moment is the subject of this conversation on The Unexplained, and I think it's really important.
And I have, and I won't make any apologies for it, been going on about this for a long time.
Now it seems that the rest of the media and the world is catching up to all of these issues.
Artificial intelligence, not necessarily the idea that a robot may take your job, although indeed it might, but the consequences now for mankind of realizing that this technology is speeding up and developing at a rate that we are in danger of losing control of.
Now, technologies are good things, but some of them have to be controlled.
That is why there are moratoriums and international bans on things like chemical weapons, although some rogues and despots either threaten to use them or have been known to use them in recent history.
But those things are banned.
Nuclear weapons are kept in check.
Artificial intelligence may turn out to be an existential threat to each and every one of us.
I don't want to alarm you, and I know that there will be people listening to this saying, that's just rubbish.
It will never get to that stage.
What are you talking about?
But there have been a couple of developments.
There was an agglomeration of people in this industry and around this industry, including Elon Musk, expressing their concerns and wanting a bit of a pause put on AI development.
That was in the news recently, and we'll talk about that.
Also, the fact that as I speak these words, and maybe other countries will do this too, Italy has put a ban on ChatGPT for a period while it investigates it.
I think that is the least we can expect all of our decision makers to do, not just to stand back and say, well, it's all free enterprise and let it rip.
Whatever consequences there may be; after all, you know, us politicians, it doesn't matter for us, because when you're all bankrupt and starving and out of a job, we'll be absolutely fine, living on our investments and all of the money that we made while we were in power.
But it's a real issue, I believe.
And, you know, you sometimes say to me, why don't you express opinions on The Unexplained?
I think I've just, in this edition, done that.
I think we need to be getting to grips with this issue.
Now, I'm a beneficiary of technology.
I'm using it right now.
It allowed me to create a podcast.
If my television or radio show ever gets axed or dropped or whatever, sidelined, I can still communicate with you.
You know, there have been occasions in my career where I've had, you know, various decisions made by radio station managements that have not been good and have impacted me.
And that goes for an awful lot of people.
Maybe most people, they have these experiences in the media.
And you disappear, people don't hear you.
And in the days before social media, we couldn't let people know what had happened to us, where we'd gone and what we were doing next.
Now, of course, that's all gone out the window.
So that independence and freedom, speed of communication, is something that I've gathered and benefited from.
So I'm not saying stop all technology now.
You couldn't anyway.
But I am asking for everybody to pause for a moment and think about the direction of travel.
What happens if we create something that is more intelligent than we are?
What happens if your thoughts on something become irrelevant because the computer says no?
I think it's terribly important.
We're going to talk about it with my good friend Fevzi Turkalp, who's been following these things for a long time, the Gadget Detective.
Fevzi is a very intelligent thinking man on these issues, and I just want to talk it round.
So I hope you understand.
And I think it is, although it is worrying, it's something that we have to tackle.
You know, just because a thing is worrying, you can't leave it, I don't think.
Thank you to you for all of your emails.
Thank you to my webmaster, Adam.
If you want to communicate with me, you can through the website, theunexplained.tv.
Follow the link, and you can send me a message from there.
And please engage with my Facebook page, the official Facebook page of The Unexplained with Howard Hughes.
I'm using a different microphone at the moment, and it is so good that it's actually picking up the rumble of traffic from outside.
So if you hear, if you're listening to this on speakers and your woofers are vibrating a little, then that's the reason for it.
I'm just demonstrating this microphone, testing it out.
Okay, I think that is all that has to be said for now.
Quite enough, you say.
Let's get to the guest on this edition, across London from where I live, Fevzi Turkalp, the Gadget Detective; the subject, artificial intelligence and the state of the world vis-à-vis it, 2023.
Fevzi Turkalp, thank you very much for coming on.
Absolute pleasure.
Fevzi, we are talking at a particular point in the affairs of mankind, it seems to me.
Artificial intelligence suddenly finds itself in mainstream news, and it happened all of a sudden.
Now, we're going to unpick the specifics of this as we talk, but just as an introduction, just give me a sentence or two, would you, about how, seemingly overnight, from conversations that you and I have been having for the last few years about AI taking over people's jobs and maybe having a fundamental and seismic impact on our lives, how we've suddenly gone to this is potentially an existential threat and it's on the front pages of many newspapers.
How did we get here?
Yeah, so they're not our dark overlords yet.
They're not sentient, these programs, but they've learned a good trick.
So they're what are called large language models.
They're trained on the internet and their interactions with human beings.
So they get to read the internet, consume it, then it goes out in some beta form, beta software form, and we interact with it.
And each time there's an interaction, it learns.
And that's the same way as your smart speaker sort of learns from its interactions with you as well.
It's often the case that different organizations and groups of data scientists working separately will start to make similar progress by different paths.
And there's a sort of confluence of it.
It's partly that computers are much faster than they were.
It's partly that the amount of data available to train them on, the internet is this huge well of data, that's all there as well.
So these things have come together to mean that these programs, which are not sentient, they're not self-aware, and I don't care what anyone says, they're not at the moment, can, first of all, potentially pass a Turing test.
That's to say, may fool a human into thinking they're talking to another human.
I don't think most of them are quite there yet, but some of them appear to be close.
They can do that, and then they can cause a threat to us.
So they can replace certain jobs.
So at the moment, we're using them as tools.
I use them now to supplement my work as well.
At the moment, they're a useful tool, but you have to think eventually they will take over our jobs.
And so we had that obviously with the first Industrial Revolution in manufacturing: you had the Luddites coming and smashing up the looms and all that sort of thing.
That's going to happen again, but there's two differences.
One is that the level of jobs that are going to be replaced will be much higher end.
So people like doctors and lawyers, journalists, broadcasters, all sorts of job classes will be replaced.
So it's an assault on the middle classes, effectively.
Yeah, yeah.
I mean, I don't think of it just in terms of class.
I think of it more in terms of the fact that we thought we were immune to having our jobs automated, but we're not.
You don't have to be a factory worker to have your job automated anymore.
So that's happening.
So you've got the prospect of massive job losses.
You've got the fact that criminals are already using this to write, you know, for example, one of the ways you spot scam emails is they're so poorly written.
Now you can give something in broken English to ChatGPT and it'll polish it for you and it's beautiful.
You can get them to tell you how to commit a crime.
I mean, for example, so they try and build bumpers on it and that's part of the problem.
They're trying to make it safe.
So if you ask it outright, how do I commit a crime?
It will say, I can't help you with that.
If you ask it, why do most criminals get caught?
It starts to get into some detail about forensic evidence and so forth.
And then when you drill down, it will answer your questions because they're not able to put bumpers on it.
They can't really control it well.
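To make that point concrete, here is a deliberately naive sketch, entirely illustrative and nothing like OpenAI's actual safety layer, of why simple "bumpers" are easy to walk around: a filter that blocks only literal matches lets the indirect question straight through.

```python
def model_answer(prompt: str) -> str:
    # Stand-in for the underlying model; a real system would call an LLM here.
    return f"[model response to: {prompt!r}]"

# A deliberately naive guardrail: refuse prompts that literally contain
# a forbidden phrase, and hand everything else to the model unchecked.
BLOCKED_PHRASES = ["how do i commit a crime", "how to commit a crime"]

def naive_guardrail(prompt: str) -> str:
    if any(phrase in prompt.lower() for phrase in BLOCKED_PHRASES):
        return "I can't help you with that."
    return model_answer(prompt)

print(naive_guardrail("How do I commit a crime?"))           # refused
print(naive_guardrail("Why do most criminals get caught?"))  # sails through
```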
The other problem is disinformation.
These things, well, there's a term being used now: hallucination.
These AIs hallucinate; they say things with great confidence, so that it looks like it's true, but it's nonsense.
I asked ChatGPT the other day, I was actually playing the game Wordle, and I asked it to give me a list of five-letter words containing particular letters.
You know, I just wanted to see if it could do it.
And it was giving me words that didn't contain those letters.
It was giving me words that were not five letters long.
You'd have thought, you know, so we don't have to worry too much as yet.
But the point being that it said it with such authority that if it wasn't so clearly wrong, you would have tended to have believed it.
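Ironically, the task ChatGPT fumbled here is trivial for ordinary code; a few lines of Python can check any suggested word, a useful habit when a model answers with great authority. The word list below is a made-up stand-in for the model's output.

```python
# A word qualifies if it is exactly five letters long and contains
# every one of the required letters.
def is_valid(word: str, required: str) -> bool:
    w = word.lower()
    return len(w) == 5 and all(letter in w for letter in required.lower())

suggestions = ["crane", "toast", "abide", "stones"]  # hypothetical model output
for word in suggestions:
    print(word, is_valid(word, "ae"))
# crane True, toast False (no 'e'), abide True, stones False (six letters)
```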
So this is almost the equivalent of, I can remember in the early days when I did a lot of traveling, when I was working for Capital Radio, and I quite often, this was in the days before sat-navs and all that sort of stuff, I'd always be lost.
And some of the people that I would encounter were tremendously nice people, lovely people, whose abiding desire was to be helpful, right?
They wanted to help you above all, so they wanted an answer for you, and they would give you an answer if you said, which way is it to Blancfontaine?
Then they would point you in a direction because they wanted to be nice and helpful, and there was no malice intended, but their prime directive was to be helpful.
And they would send me off until I twigged to this, that they were only trying to help and they wanted to give me an answer, even if they didn't know.
I think that's where we're at at the moment, but it could become, as we will discuss, more sinister, but that with AI is where we are.
Yeah.
So it's kind of interesting because, as I say, these are not sentient self-aware programs, not yet, right?
But they give the impression of it, but it's really just quite a clever parlor trick.
But it doesn't matter, right?
So the way these large language models work is they're effectively statistics.
They read lots of text.
They parse what you're asking, and then they put a word in, and then they say, if this is the first word, what's the second word most likely to be, and the third, and the fourth?
And it builds sentences which, remarkably, because they've consumed so much data and there's so much statistical weight behind the choice of words, actually make sense.
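Reduced to a toy, what's being described is next-word prediction from counted statistics. Real models use neural networks over billions of parameters rather than raw word-pair counts, but a minimal bigram sketch captures the principle of choosing each next word by statistical weight:

```python
import random
from collections import Counter, defaultdict

# Count, for every word in a tiny training text, which words follow it.
text = "the cat sat on the mat and the cat slept on the mat".split()
follows = defaultdict(Counter)
for a, b in zip(text, text[1:]):
    follows[a][b] += 1

def next_word(word: str) -> str:
    # Sample the next word in proportion to how often it followed `word`.
    counts = follows[word]
    return random.choices(list(counts), weights=counts.values())[0]

# Build a sentence one statistically likely word at a time.
word, sentence = "the", ["the"]
for _ in range(6):
    word = next_word(word)
    sentence.append(word)
print(" ".join(sentence))  # e.g. "the cat sat on the mat and"
```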
The advice it gives can appear a little bit bland by definition because it's almost like advice by committee, right?
Because it's read the whole internet and it puts it together like this.
But it can give good advice.
It can make correct assertions.
But within that, there can be, you know, disinformation.
There is also the problem that these things, because they are trained by contact with human beings, they will develop some of the poorer traits.
So they may become fascistic, racist, misogynistic, for example, misanthropic even.
So all of these things, because they absorb.
It's like our children can pick up our bad habits, right?
We have to be careful what we do around them.
So you've got mass job losses, you've got criminal activity, you've got disinformation, you've got huge societal impact because how will a society function?
How will an economy function if a large bulk of the workforce are no longer in work?
And how will that work for the companies that employ these technologies?
Because they are producing products and services, but they rely upon there being sufficient number of people with financial resources to buy those services.
So one thing that's being experimented with in some of the Scandinavian countries is this idea of a living wage.
This is not an unemployment benefit that you get when you're looking for work.
This is a wage given to you in return for existing: a universal basic income, or UBI.
The only problem with this, and here I am jumping in, the only problem with that is that it's a universal basic income for everybody.
And if you happen to be a nuclear physicist who's been made redundant because the technology does it better than you, and it'll do it 24 hours a day, 365 days a year, so you haven't got a job anymore, and you get offered the same as somebody whose job, and there's nothing wrong with good, honest physical labor, was stacking beer barrels, you're going to get the same, and that's going to cause all kinds of tensions, and that's when the cracks in society begin to appear.
But it's amazing, you and I have talked about this before, that it's only now that debates about this stuff are starting to be had, if anybody's bothering to read them, in amongst the stories of, you know, celebrities on reality shows.
It's only now that the media is beginning to catch on here.
And the thing is, it's far too late.
And that letter that was produced with the thousand signatories of AI luminaries, including Elon Musk.
Well, let's just give the detail here.
The Daily Telegraph said Elon Musk has warned that out-of-control development of AI could pose profound risks to society and humanity.
He and thousands of other academics and tech figures, they say, signed an open letter demanding that all AI labs immediately pause work on advancing AI, and they're calling for governments to temporarily ban further research if they do not.
So that's what we're talking about.
Sorry, I jumped in, but just so it's really interesting.
First of all, you know, it's like the scientists who developed nuclear technology suddenly saying, hold on, maybe we should pause.
Well, maybe they should have thought that beforehand.
The thing is, the imperative to develop this technology is such that, first of all, I can assure you, there will be no pause.
It's actually more or less impossible.
So almost every one of those signatories know that there's not going to be a pause and know that this technology will continue to be introduced and know that it will do harm.
And in a sense, they're placating their own consciences by calling for this now, because there's no mechanism.
And all this wonderful technology, I mean, I was brought home in a Tesla last week and the week before.
And the technology is so far ahead of a lot of other things.
It is mind-blowing.
You know, for those of us who drive small, cheap cars, it is gobsmacking, as we say here in the UK, gobsmackingly amazing.
But he's been at the forefront of all of this.
So it kind of worries me, in a way that it might not worry just people picking up a newspaper as they go about their business.
It kind of worries me that even he, whether it's to salve his conscience or whatever it might be, is now saying, hold up, we need to stand back for a moment and think about where we are.
So to be fair to Musk, and I don't necessarily feel strongly predisposed to be fair to him these days because of his more recent behavior.
It's quite a turnaround we've seen from this man who was trying to solve all the problems that humanity faced, global warming, getting off this planet, not being a single planet species, all of these things that governments should be doing.
He's been doing great things with SpaceX and with Tesla.
Complete admiration for his achievements, less so for what happened.
He decided that Twitter was a problem that humanity had to solve and gave his attention to that.
I don't know.
But putting that to one side, he was one of the people who put the seed money into the creation of OpenAI, which is the organization that produced ChatGPT, DALL-E and GPT-3 and 4.
And the reason why OpenAI is called OpenAI was that unlike other commercial organizations who will develop in secret and you won't know anything about what they're doing until the product comes to market, the whole point of OpenAI was to develop AI in the open, hence the name, and for us to be able to have discussions and governments to be able to start to think about what was possible.
What went wrong was one of the companies that put in the original seed money, I think they, forgive me if I'm wrong, but I think it was a billion dollars.
Microsoft put in a billion dollars, I think, at the beginning.
What they did was as soon as there was something, you know, ChatGPT came along last November.
And, you know, Microsoft in terms of search has always been the poor relation to Google.
I mean, no one took any notice of Bing, you know, or the Edge browser.
So it said, great, let's monetize this, right?
So instead of saying, okay, here's a sort of beta product that, you know, you can't use for commercial purposes, but, you know, we just want to show you what's possible and we encourage a debate in society.
And if you had a responsible government that would actually, you know, look at this seriously and even understand the questions, much less produce the answers, then that would be great.
But as soon as it produced something that was halfway useful, Microsoft put a further 10 billion dollars into it, took it, built it into Bing, and is in the process of building it into their Office suite.
So again, Word, Excel, PowerPoint.
So it will produce, you know, and also Microsoft Teams.
So it can produce a summary of the conversation in Teams.
And, you know, even if you don't, you know, you could attend and not take notes, but you could also not attend and just have the AI attend and produce a summary of the conversation for you.
And you've saved, you know, two hours of your life, as it were, because you haven't sat in some god-awful Teams meeting.
So it's immediately turned into commercial products.
These products are not safe.
They're not safe for all the reasons that we've said.
They're not safe for the purpose.
And every product, be it an AI or a toaster, has to be safe when it comes into contact with the public.
And this is going to have all sorts of ramifications.
So the whole basis of OpenAI has been undermined by Microsoft by throwing huge sums of money into it.
And, you know, OpenAI as an organization wasn't protected from this sort of inward investment.
It wasn't set up as some sort of foundation that couldn't accept that income, you know.
And unfortunately, that's what's happened.
So it's all going too fast.
That's it.
Yes.
These products are not ready.
They're not safe.
And they're fascinating.
I love them.
Every new one that comes along, I play with.
Okay.
But the harm that they will do is immense.
But calling for a moratorium on development for six months?
First of all, nothing could ever be achieved in that six-month period that would be worth achieving.
So that's, again, that's a little bit disingenuous in my view.
But the imperative for the development of this technology is too strong.
And if you look through all human history, there has never been a case of scientists and other technologists having the opportunity to produce a technology and saying, oh, well, maybe we shouldn't.
Or, having produced it, not using it.
And right up to nuclear, you know, we did not stop ourselves from producing nuclear weapons.
And then as soon as we had them, we used them.
I say we as a species.
So that's the way.
So why would it not be stopped?
Why can it not be stopped?
I'll tell you, there's essentially two AI arms races going on.
One is military and the other one is commercial.
So the military AI arms race is this.
The opportunities to build AI-based weapons are huge.
You could have drones.
I mean, you already have drones with AI in them that can make the kill decision themselves.
So even if, you know, at the moment they're piloted, right, remote control models effectively.
But if, let's say, the signal gets jammed, you've seen it where, you know, your enemy overtakes it and takes control and so forth.
Now, if this thing even has a signal and a human controller to start with, it won't need to rely upon it.
It can continue with its mission.
It can make the kill decision.
It can work out, we hope, who is the enemy and make the kill decision.
We've got already AI systems on the border between North and South Korea that can make the kill decision.
We've got gun turrets that will make that decision and decide who to shoot.
So there is that.
There are robotic troops.
There are robotic weapons.
Now, no country with the capability to develop these systems is going to say, no, let's not do it because they will fear that their enemies, their opponents, are doing so.
Right.
That's in the military.
You'll always fear that the other guy, if Washington decides we're going to be goody two-shoes here, we're going to put a suspension on all of this.
You can bet your bottom dollar that Moscow, well, certainly at the moment with Putin, but also Beijing, who knows what goes on there, maybe even North Korea, they will continue the development.
And that's why in the military, that can't happen.
Are things not more hopeful in the civilian arena?
I suspect they're not.
Okay, well, the short answer is no, but let me just finish the point on the military.
Even with nuclear weapons, we managed to have arms limitation treaties, SALT, START, all of those treaties.
One of the reasons why that was possible was that it was possible to verify: the technology was on a scale that you could spot it from a satellite, and with visits to bunkers and silos and all the rest of that, you could verify it.
With AI, it's a group of boffins maybe sitting in their own homes doing this.
They don't even have to all be in the same building.
It's almost impossible to verify what another country is developing.
It's much easier with nukes than it is with AI systems.
So I think that anything along those lines, even if you could come to agreement, verification is very difficult.
Moving to commercial, that's the other AI arms race that's going on at the moment.
People may take a view, right?
So businesses are independent.
They can choose to invest in this technology.
They can choose to adopt it or not.
The reality is that the reason why AIs are being introduced into the commercial space is because it gives those companies an economic advantage.
This is why Uber has invested huge sums of money into self-driving car technology, for example.
And if I am a competitor to Uber and I say, you know, no, I don't think so.
I think that there's a market for human drivers and the human touch.
And maybe there is, you know, but at best, that will be niche.
And more likely, for most people faced with one bill or another for being taken from A to B, if one is three times the other, you can imagine that the cheaper one will flourish, as long as it's safe and there are no crashes and all the rest of it.
And that will come.
Eventually, of course, the automated vehicles will be safer than the ones driven by humans.
So then what is the future for that?
You can have the best manufacturer of horse-drawn carriages in the world, but there's not really a huge market for it.
So in the same way, you've got this process of the AI arms race.
You either adopt it or you go out of business.
And that's why it won't be stopped.
So is the point of this collaborative letter, this demand, it's not a demand, but this suggestion from the likes of Musk, is the point of it then not that we have a moratorium or we stop doing anything, but at least we start to realize what the difficulties may be and start to maybe think that it might be a good idea to start thinking about legislating about them?
Yes.
I mean, I think it's partly, I'm sorry to be cynical, I think they're just partly covering their own backsides because they've developed this and they weren't raising flags sufficiently before.
And the whole premise of OpenAI, for example, has, in my view, fallen apart in terms of what it was meant to be and how it was meant to be like the canary in the mine.
Now it's no longer the canary in the mine.
Now the canary is gone.
And the companies are using that technology.
So in the European Union, I think we've just seen Italy announce that it's banning ChatGPT, and it's also investigating whether ChatGPT is compliant with GDPR regulations or not.
Ireland has made noises about it itself and I imagine that Europe will eventually and probably fairly soon take a unified view.
I am concerned in this country in terms of, you know, we're having the bonfire of the regulations, whether GDPR will survive that or not, you know, probably not.
And just for American listeners, the GDPR is a European standard that is there to protect your data, the General Data Protection Regulation.
Your rights as a consumer in interacting with these services, so your privacy and your security of your data.
Because the United Kingdom, and I don't want to get into politics here, but the United Kingdom left the European Union, there is going to be a so-called bonfire of the regulations.
That's the things that the EU required us to do.
And as you say, nobody knows whether there are going to be the same kind of controls in this country.
And again, please hear me if you're a political person and you're about to hit the keyboard.
I'm not making a political point here.
I'm just stating a fact.
Nobody knows what the regulations around all of this will be in His Majesty's United Kingdom.
And that's just a fact.
No.
So there is that.
And we've had the recent announcement just a few days ago that the government's rejected the idea of having an AI regulator.
And frankly, you know, the existing regulators are regulators of media, and this is not the same as media.
It needs separate regulation.
It's completely different.
And I think the problem is this, Howard, that these are complicated issues.
You know, if you want to see an analogy for this, think about reproductive science, you know, test-tube babies, the ability to produce clones, babies with one parent or two parents of the same gender or even three parents.
You know, even framing the ethical questions is hard, much less answering them on the time scale of the advance of the technology.
So, you know, we're still trying to answer questions that were raised in the 70s and 80s with this technology, and yet it's gone so much further than that.
And I think in the same way, this government and most governments, and again, I'm not making a political point, most governments around the world don't even understand the issues.
If you saw when they, was it the American Congress?
They were in hearings about a year or so ago, and it was frankly embarrassing because there was stuff that I know that they didn't.
Right.
And I don't know very much.
But you know, a few days ago when they had the head of TikTok over and grilled him, other than the fact that each and every one of them was just grandstanding, I mean, the stupidity of the questions that they were asking.
I don't mean that, I guess it is a pejorative term, but the lack of knowledge, the ignorance of the issues that legislators in America and in the UK and almost all countries have. They really need to take advice from an expert panel, you know, and instead they're saying no; we're in this sort of post-truth era where, you know, experts are frowned upon.
But I put it to you this way.
If you need surgery, would you like an expert who we might call a surgeon to do the surgery?
Or would you just say, no, you know, experts are overrated.
In this world, Fevzi, I think there might even be people, lamentably enough, who might debate that point.
I think politicians are only ever going to, and here I am chuntering on again, but politicians are only ever going to engage with this at the point at which it begins to affect the livelihoods and lives of their constituents.
Up to that point, we're not going to get much action on this.
And when they do begin to take notice of it for the reason that I've just stated, I fear it'll be too late.
But it already is, and they're just blind to it.
There is already disinformation being spread around the internet, written by these sorts of AIs.
There is already, you know, criminals are the best people at adopting new technology.
You know, when I said an expert panel, maybe we should just get a bunch of criminals and take advice from them about what can be done with this, because criminal activity is already taking place and the job losses are right around the corner.
And already now, you know, even when you get people still in work becoming hugely more productive, that means that they're doing more work and that means that some other people are not getting those contracts.
So that I think is already starting to happen.
And indeed, there was a report this week as we record from Goldman Sachs of all people saying that artificial intelligence could replace the equivalent of 300 million full-time jobs.
That's a report from Goldman Sachs.
Yeah.
And whilst it's true to say that some new jobs will be created and there will be new opportunities, I think the challenge is this: can we create an economic framework where the people that don't work don't need to work?
And the argument would be this, the companies use the AI-based systems to be more efficient to produce their products and services at a lower cost and therefore be more profitable.
By being more profitable, they will pay more tax.
And by paying more tax, there will be more available in the public coffers to pay this universal wage that we alluded to earlier.
If that economic circle can be squared, then all well and good.
Although I still think it's a problem because, well, there are two problems with that.
First of all, human beings are actually human doings.
And what I mean by that is we are happiest when we're working towards a goal.
We vegetate if we don't do anything.
That's why a lot of people don't retire well because they've lost their goals.
And if they haven't got goals, even if they're not economic goals, you know, to travel, to learn a language, to do whatever it is, there is a problem that we vegetate.
And moreover, we will end up, we're already approaching it, in a situation where the general populace doesn't understand the technology.
So when they don't understand the technology or the issues that are raised by it, democracy falls apart, because what's the point of having an electorate vote on issues, or on parties that may take different views of this, without them having any clue what those questions even mean, much less what the right answer is?
So you've got those problems.
And then you've got the problem also of, is it possible to keep control of this technology?
And the short answer to that is no, you cannot keep control of this technology, by definition.
You cannot keep control of it.
There's a team at MIT who've been working for years trying to answer the question: what can we instill in the current generation of AI that will stop future generations of AI from harming us?
And so far, they've not come up with anything.
You know, you see, like in I, Robot, the laws of robotics and all of that.
That doesn't work, because each new generation of AI is increasingly produced by the existing generation.
It's more akin to reproduction than it is manufacturing.
But unlike us, whose progeny are more or less as smart as us and our grandparents, and whose intelligence hasn't changed for thousands of years, if not millions, with these devices each generation is smarter than the last one.
And that's why you have the concept of the singularity.
So you've got two curves of intelligence over time.
So you've got human beings in more or less a horizontal line.
Some people would say the curve is dipping, but let's be generous and say each generation is as smart as the one before.
And then you've got this technology, which is at a much lower level now in terms of being intelligent and self-aware and all the rest of it.
But each year it is improving exponentially.
And exponential growth will mean that that curve, which is at the moment more or less horizontal, will start to pick up and pick up and pick up until it becomes almost vertical and the intelligence improvement over time becomes infinite.
So at the moment, you know, Gordon Moore, the Intel co-founder, died recently, and he was famous for Moore's Law.
And Moore's Law initially said that computer power would double every 18 months; it became more like two years.
That's exponential growth.
And that exponential growth is the key to the undeniable fact that these things will get infinitely smarter than us and we will be to them as ants are to us.
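The arithmetic behind that claim is easy to run. Assuming a doubling every two years from a 1971 baseline (the year of Intel's first microprocessor), the growth looks tame at first and then runs away, which is the whole character of an exponential:

```python
# Relative capability under one doubling every 2 years, 1971 baseline.
for year in range(1971, 2032, 10):
    doublings = (year - 1971) / 2
    print(year, f"{2 ** doublings:,.0f}x")
# Fifty years at this rate is 25 doublings: over 33 million times the start.
```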
And then you'd better hope that they don't go around squishing us.
You'd better hope that.
Now, let's just say for those who may not be hearing what we're actually saying, but may be inferring things that we're not saying, you know, both of us are tech people.
You much more than me.
You understand so much more than I ever will.
But I love technology.
I love the way it's empowered me.
I love the way it's given me independence.
I love the fact that I can go driving in places that I don't know.
And, you know, in some circumstances, the sat-nav will tell me a way to get out of it.
I love the way that I can find, hopefully, bona fide answers to my bona fide questions online.
So that's all good.
However, we are reaching some kind of tipping point.
I think there is now no doubt about that.
And that is where the worry begins.
And last week I raised this question, just kind of floated it out there on my TV show.
And I got a couple of messages from people explaining to me, because I don't know as much about it as I should, explaining to me that at the moment there's nothing to worry about because it's all very mechanical.
It's all down to, you know, computer chips and hard drives, and it's not any kind of threat because it just doesn't think fast enough to do that.
So no need to worry about that, nothing to see here.
But my fear, and again, it's based on knowledge that is not as complete as some people's, my fear is once we cross the threshold where these things learn how to feel, that's not saying that they have human characteristics, but they learn how to feel, they learn how to make decisions, and as you say, the rate of progress is incredibly fast.
How are we ever going to know that a system we deal with one day might not say, I don't need you?
That's the thing.
Okay, so here's the interesting thing about that.
AI systems do not need to be sentient, they do not need to know how to feel for them to pose an existential threat to us, to us as a species and our economies and our ways of life.
It's not necessary for that.
We already have systems that are, in most senses, dumb, that don't know they exist, talking about feelings they don't have and the rest of it, spreading disinformation, being used as tools by criminals.
And, you know, as I say, it's inevitable that we're going to see job losses through this.
There wouldn't be any point in the technology for commercial businesses if they couldn't make efficiency savings.
So first of all, forget about the need for them to be sentient.
It might be good if they're sentient.
They might develop a conscience as well.
But to have a device that has such power without any self-awareness arguably is more dangerous.
It's certainly not a requirement for them to be a threat to us, for them to be sentient.
I just don't see that.
As to where they are now, that's quite irrelevant.
Even if this was a technology that was going to be promulgated in each generation by humans, you know, you could look at the days of manned flight and look at the disasters.
Look at the early days of rockets and how they all crashed to the ground and all that.
And you would not infer from that that we will never have rockets that work or aircraft that work.
You only have to go back not much more than 100 years, when the automobile was this thing with wooden parts and hand cranks and a man running in front of it with a flag.
That wasn't so long ago.
And yet look at what we've got now.
Look at how even computer technology has changed.
It's quite remarkable.
So that's even when humans are developing it.
But look at it just in terms of the number of transistors on a processor and how that increases: the number of transistors on the processor of any computer that you buy now is counted in the billions, including your smartphone.
If a human being were to draw a dot per second for the entirety of their life, from the moment of their birth to their death, they could not draw a dot for each transistor that's on that chip, much less work out how to connect them together.
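That dot-per-second claim checks out. Assuming an 80-year life and a flagship smartphone chip with on the order of 15 billion transistors (a figure assumed here purely for illustration), the chip wins by several times:

```python
SECONDS_PER_YEAR = 365 * 24 * 60 * 60        # 31,536,000
lifetime_dots = 80 * SECONDS_PER_YEAR        # one dot per second for 80 years
transistors = 15_000_000_000                 # assumed flagship chip figure

print(f"{lifetime_dots:,} dots in a lifetime")        # ~2.5 billion
print(f"{transistors / lifetime_dots:.1f}x more transistors than dots")
```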
Why am I saying this?
It's because the only reason why these chips can exist now is because we're using the existing technology to develop the new generation.
And with each new generation, the amount of human input required into creating the new generation is less.
The ability of the existing technology to generate the new generation is greater.
And humans will, by necessity, play a smaller part in that iterative cycle.
So the exponential growth will continue, and the time each generation takes will shrink.
So at the moment, a generation, a new cycle of computers, might be two years.
That could drop to a year, that could drop to six months, that could drop to a month, a week, a day, an hour, a second, a microsecond.
Imagine it doubles in capability every second even.
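There's a neat piece of arithmetic hiding in that. If each generation also halves the time the next one takes to arrive, the total time to pass through any number of generations stays finite, a geometric series; a sketch, assuming a two-year first generation:

```python
# Generation n takes 2 / 2**(n-1) years: 2 + 1 + 0.5 + 0.25 + ...
# The sum converges to 4 years no matter how many generations you add.
gen_time, elapsed = 2.0, 0.0
for generation in range(1, 31):
    elapsed += gen_time
    gen_time /= 2
print(f"after 30 generations: {elapsed:.9f} years (the limit is 4)")
```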
So we're going to reach the stage where we won't be coexisting with the technology.
The technology will be in the driving seat.
We won't be declared redundant by it.
It's not going to make a decision like that.
We will just become de facto irrelevant.
Well, okay.
So maybe.
Your best hope, if we don't take certain steps, which I'll come to in a moment, but your best hope is that they think about us the way we think about ants, which is to say hardly at all; but if we're considered to be a nuisance, like if ants started wandering into your house, then you call the exterminator, right?
So that would be your best hope, and we would cease to be the dominant species on the planet.
An AI could equally take the view that we actually share quite a lot of common characteristics with a disease; that given what we've done to the planet and to the other life forms on here, by most definitions we could be defined as a disease, as a virus to be eradicated; and they may have no compunction, in the same way that we develop vaccines and antibiotics and antiviral drugs.
We don't think about all those bacteria that we're killing en masse.
We think it's a good thing.
So there is that.
So how do humans survive?
The late Professor Hawking ironically said that we should get off the planet and run, and he was a serious scientist.
And what he meant by that is that if we spread ourselves, we cease to be a single planet species.
If we spread ourselves across, you know, as much of the universe as we can, some of us will survive and the species will survive.
At the moment, we've got all our DNA eggs in one planetary basket.
Well, that's an interesting solution.
And, you know, I hear what he says, but the problem with it is that AI is going to take us to wherever we go, you know, in the universe, in the cosmos, and we're going to need it to exist there.
So we're going to have the same situation on Mars that we've got here.
We're going to have to have steam-punked spaceships, aren't we, that have got no modern technology in them.
If you follow that line of thinking, yes, you will.
The other thing that I think is likely to happen is that humans will not stay, our curve will not stay horizontal in terms of our intelligence and capabilities over time.
And what will happen is that we will increasingly integrate that technology into ourselves.
So we become Daleks, we become assimilated.
Well, it depends on how you look at it.
I mean, what's the dominant part?
I don't know.
I mean, at the moment, you know, how much of a human being can you remove, cut away before it ceases to be human?
You can do a lot, you know, you can reduce most of the body and the person still has their personality and thinks of themselves as being human.
But I think what will happen, and probably what needs to happen, and what is very seductive anyway, is this. You know, people are like children: they like technology baubles, they look at them with wide eyes and don't often think of the consequences.
But I think that what will happen is that we will internalize some of this technology.
You know, it will break the skin barrier.
So at the moment, we have wearables, things that we wear, like smart glasses, smart watches, you know, that sort of thing.
That process will continue and will break the skin barrier.
So it won't just be that we replace eyes for people who've lost their sight or hearing.
We will develop new sensors.
We'll develop, you know, telepathy, the ability to see in the dark, all sorts of things, because all the sensors that we have just send electrical signals to the brain.
The brain doesn't see.
The brain interprets signals from a sensor that picks up light.
And equally, you could have X-ray vision or infrared vision or anything that you wanted.
Your brain can control a multiplicity of limbs.
So you have that.
You have all sorts of companies, including one of Elon Musk's companies, working on what's called BCI, Brain Computer Interface, which at the moment are surgically implanted brain probes, but essentially will allow our brains to control and or communicate with technology, including AI.
So we augment ourselves.
We become the last generation of pure human.
We become cyborg, if you want to use that term.
And by enhancing our capabilities, we can upgrade ourselves and not necessarily just internally, but if the link between the brain and the outside technology is fast and strong and accurate, then you could just, all the benefits of AI that come through the development of AI could be accessed by humans in a way that is integrated.
And that sounds fab, and I like the sound of a lot of that.
And especially if it steers us towards longer lives, knowing more, being able to do more.
The Six Million Dollar Man, great series.
Put me there.
That's fine.
But I wonder at what point we cease to be human beings and we discard some of the foibles and idiosyncrasies, the things that we love.
You know, I like nothing better.
When I've had enough of all of it, when there's too much stress in the day, I go out and I commune with nature and I look at the squirrels and I look at the geese and the ducks and the deer.
And, you know, nothing has changed for them in hundreds and hundreds and hundreds of thousands of years.
And I take comfort in that.
But what happens if I am a technologically augmented person and I'm too cool for school because I'm thinking so fast and I can do so much?
Do I then cease to have anything at all in common with those aforementioned creatures?
And have I not, by definition, and look, I don't want to be creating a worst-case scenario.
I love the sound of all the things you've talked about there, Febzi, but I wonder at what stage we cease to be us with all the wonderful idiosyncratic elements that make us us.
So you're quite right.
I guess the thing to say about this is that the method of evolution of life is changing from Darwinian evolution that takes place through undirected means over millions of years to technological evolution.
And we've already said about machine intelligence and sentience and all of that.
That's all coming.
So the question is, what's the future for Homo sapiens?
Well, the future for Homo sapiens is probably similar to the future that Cro-Magnons and Neanderthals had.
There are parts of Cro-Magnon man and Neanderthal man within Homo sapiens.
You can see there's a path.
But are we those?
No, we're something more.
We're an evolved form of those, some mixture of those.
And that's probably, I think, what will happen in the future is that we'll have elements of humanity built in.
So you'll have things that sort of still think like humans.
I say things, beings, our successors, our progeny.
But they won't be human in the full technical sense of the word because, as I said, it's not how many limbs we have or what senses we have and so forth that defines us as humans.
It's how we think, how we feel.
And those things will change by being integrated into machine systems.
But at least there will be a human basis for that.
And we will evolve into that.
You can't expect human beings to be the top of the evolutionary chain.
If evolution is happening, unless we stop it from happening, which we can do in some senses, we sort of filter out gene mutations and so forth and stop people and other creatures evolving.
But if you assume that it will evolve, then, as I say, the method of evolution is now technological, and much faster.
So we have a choice to make.
We can become redundant and, you know, a footnote in the history of life on this planet.
Or we get with the program.
Yeah.
And either way, we cease to exist in the form that we exist at the moment.
Okay.
I don't know what to think about that.
And I'm not saying that's really a bad thing.
I'm just saying it.
I'm not saying it's a bad thing.
You don't, well, no.
It's an incredibly dangerous thing because you're not just going to have one form of AI on this planet.
You're going to have all sorts of forms of AI.
Some of them are going to be based on human constructs and human ways of thinking, and some of them are going to have no resemblance at all.
You know, there are scientists that are slicing up brains into micro slices and working out how they connect to replicate the human brain.
But there are other forms of intelligence that are not modeled on the human brain.
And you have to think that when there are thousands, hundreds of thousands, millions of different forms of AI on this planet, you know, you won't even be able to classify them or count them.
Some of them are going to be malevolent, either in intention or effect.
Even then, you know, as I say, they could just take a very benign view and just say that humans aren't good for the planet and, you know, they're not necessary anymore.
And other species would benefit and that they think they can do better.
So all of that will come.
And you don't need to make, you don't need to accept many assumptions to know that that's coming.
And the only thing that will stop it from coming is if we manage to wipe ourselves out in some other way first.
So we'll probably have a situation where these things become, you know, at the moment they're dumb and we can, you know, we can insult our smart speakers and feel superior to them and we are, and that's fine.
And eventually it's going to be very interesting, the pattern of development, I see it like this, that they are at the moment tools.
They may become colleagues.
They may become friends.
I suppose at some point they're going to start demanding their rights in the same way that other groups of organisms, be they of a gender or of a race, have ceased to be property.
I mean, think of slavery.
Slaves were deemed to be property.
Property cannot own other property.
And it took the sort of a pushback for them to get their rights.
So these AIs will demand their rights, but it will go so much further than that.
I mean, there's so much you could go into, but they will go through these various stages and eventually they will supersede us.
And all that you have to do to believe that they will supersede us is look at the curves.
Look at the curve of human intelligence over millennia and look at the curves of machine intelligence over time.
We're not talking about millennia.
These days, I mean, barely a week goes by without a significant development in AI being announced.
Now, some of those are hype, but some of them are really important changes.
And, you know, for most people the argument is not whether the singularity comes, it's when it comes.
Will it be 10 years?
Will it be 20 years?
Will it be 100 years?
The answer to that is it depends how you define it, right?
Some people will declare singularity much earlier than others.
But the point is, it's coming.
The question is not whether we can stop it.
The question is, how do we best live with it to manage the change in a way that is least destructive and most beneficial to us?
So it's not a question of, let's do some drastic things now to regulate it because you effectively can't.
And let's have a moratorium on it.
Well, you're not going to be able to do that.
And let's put a pause on it.
Well, that's not going to work either.
So what we have to learn are ways of living with it.
No, you can do all of those things, but we're nowhere near being able to do that, because, as I say, our legislators don't understand it and most of the population don't either. It does amuse me sometimes that, you know, we talk about, oh, people from other countries coming here and taking our jobs, and you've got no idea what's about to happen in terms of machines taking your jobs.
You know, you'll wish your job had been taken by another human before all this is over.
And we can still try to direct the development of this technology.
We can.
At the moment, we can try and point it in the right direction to give ourselves the best chance.
But it's the difference between influence and control, I think.
So the best we can hope for is influence.
Because of what we are and the way that we develop, we're never going to have, well, in the future, we're never going to have control.
And, you know, look, we don't want to paint a worst-case scenario for everybody.
It could be a tremendous adjunct to our lives.
But this situation, I suppose the reason that we're having this conversation is that these questions that a few years ago seemed to be nice academic debates to have and things that would have to be perhaps considered by future generations, uh-uh.
They will have to be considered by this generation and the one that succeeds it.
You know, we've now reached that point.
We cannot put it off anymore because, as you said, it's here.
Because the curve that was somewhat horizontal is starting to just pick up a little bit of slope to it, if you can take my meaning.
You know, people don't understand what exponential growth really means.
I always think of it, you know, there's that famous thing about a chessboard, which has 64 squares on it, eight by eight.
And if you put a penny on the first square of the chessboard, and then on each subsequent square, you put twice the amount.
So the first square has a penny, the second square has two, the next one has four, the one after that has eight pennies, and then 16 pennies and 32 pennies.
That is a form of exponential growth, right?
Doubles every square.
If I were to say to you, how much money would be on the chessboard by the time all 64 squares were filled, what would you guess, roughly?
I have no idea.
A lot.
Yes.
Hundreds, thousands, tens of thousands of pounds?
It's actually millions of billions of pounds.
And that is the entire situation in a nutshell.
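For the record, the exact total is easy to compute: the 64 squares hold two to the power of 64, minus one, pennies.

```python
# One penny on square 1, doubling on each of the 64 squares.
total_pennies = sum(2 ** square for square in range(64))
assert total_pennies == 2 ** 64 - 1

print(f"{total_pennies:,} pennies")          # 18,446,744,073,709,551,615
print(f"~{total_pennies / 100:.3g} pounds")  # ~1.84e+17, about 184 quadrillion
```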
Well, listen, I'm glad that we've had this conversation.
Of course, this is based on the information and knowledge that we have now, and that's bound to change.
So we'll probably have this conversation or one like it again.
But I just wanted to put the thoughts out there.
And I'm glad that it was you that I had this conversation with, Fevzi, because we've known each other for a long time, and I respect your scientific perspective, which I do not have.
And that's why I look to people who are more adept in all of those things than I am to talk about these things.
But I just have feared, if that's the right word, that we were reaching this point.
And I was just shocked to read the news stories that I've read this week about it all.
And I know there may well be people preparing to email me.
Please don't, to tell me how wrong I've been about all of this.
What you've been hearing is a conversation, just like the conversations that you will have.
You know, we don't have all the answers, or indeed any of the answers.
It is a consideration.
And all I'm suggesting is that we start to consider some of the issues that we've just debated here in our small way.
Fevzi, thank you very much indeed.
Is there anything else that needs to be said that has not been said?
I think we've taken quite a dark turn in this conversation.
It's right that we do that, in that there is a threat here and we need to deal with the threat and we can't just hope for the best.
But on the plus side, I'd be fairly confident that these AIs, before they become our dark overlords, will solve global warming, climate change, world hunger, disease.
All of these problems will be solved.
And then we'll have to see what happens after that.
But there are great opportunities here for us, for growth.
And when we grow as humans, we talk about personal growth, don't we?
I've grown as a person.
Well, maybe we'll grow in an unconventional sense as well.
But I think that on an optimistic note, just to finish it on an optimistic note, I think that there are great opportunities here to solve the problems that have plagued humanity.
And, you know, we will have solutions.
We'll have unlimited energy.
We won't have overpopulation problems.
We won't have hunger or resource problems.
We may even, I don't want to sound like AFP when I say this, we may even have world peace.
Who knows?
All determined by technical solutions.
Well, no, that's a nice positive point to end it on.
And I do apologize if anybody's listening to this saying, oh, you're so dark about this.
But these are the issues that I think need to be discussed.
And that's what we've just done.
And of course, there are upsides that you've just heard that if you have a tremendous brain working for you, then some of the things that you've scratched your head about all of your lifetime and before your lifetime, those things may be solvable.
So that's the good point.
But there are a whole raft of concerning things that at least we need to think about.
And that was the whole point of doing this.
Fevzi, thank you.
Absolute pleasure.
If people want to check you out online, Fevzi, if they want to see the latest in white-hot technology and get your thoughts on it, ask you a tech question, where would they go?
Sure.
So it's two things.
Come and follow me on Twitter, please.
It's @GadgetDetective.
If you've got a tech question or you just want to hang out with other cool tech dudes who talk about this sort of thing, so you can do that.
Or, and I say dudes, not just one gender.
Or also, I now host the Gadget Detective podcast.
So if you go to www.gadgetdetective.com, you will find, I think there's over 500 episodes.
God, Fevzi.
I know, I just don't want to shut up, do I?
So, yeah, there's a lot of that.
And there's a lot of fun to be had there as well.
So it can be everything from AI through to consumer goods.
And there's a lot of discussion on there.
So if you go to www.gadgetdetective.com, then you can visit.
You can pick it up on wherever you get your podcasts, Apple podcasts, Google podcasts, Amazon podcasts.
Even if you've got the smart speaker, "Lexi" as it were, you can actually ask her to play the latest Gadget Detective podcast and she will do that.
You're way ahead of me, Fevzi, but that's no more or less than I would have expected from a man of your caliber.
And please, if you're listening to this late at night, don't go to bed depressed by any of this.
Because we're still, I'm just pinching myself here.
We're still flesh and blood.
We're still human beings.
We can still be happy.
We can still love.
We can still appreciate nature.
All of those are the unique things that make us us, all around the world.
And we are still all of those things.
We're quite smart as apes go.
And we could just think about these things.
And it's still possible we can set this thing on the right track.
So the opportunities are huge, Howard.
You know, but we just need to be a bit smart about what we're doing now.
These are important times and we have to think carefully about our next steps.
Fevzi, thank you.
Food for thought.
That is all this conversation is.
I do urge you to think about it.
And if you want to communicate with decision makers on the basis of what you heard and what you think, then I think you should.
That's what members of parliament, congress people, others are for.
Let them know, because I believe they are not giving this sufficient concentration and sufficient consideration at the moment.
We know that technological advance is good; but untrammelled, what happens? That is all I say.
Right.
Here endeth this particular sermon.
I hope that you're having a good time, whatever you're doing.
Thank you very much for being part of this.
My name is Howard Hughes.
This has been The Unexplained Online.
And please, whatever you do, stay safe, stay calm.