Uncensored - Piers Morgan - 20230525_piers-morgan-uncensored-artificial-intelligence Aired: 2023-05-25 Duration: 47:23 === Save or Destroy Humanity (04:20) === [00:00:00] I'm Piers Morgan, uncensored tonight. [00:00:02] Artificial intelligence is either going to transform humanity or completely destroy it. [00:00:07] So should we be very excited or very afraid? [00:00:11] Is it going to take your job? [00:00:13] Will it take my job? [00:00:14] Should we try to stop it or is it already too late to try? [00:00:18] We've got some big questions and some big brains with hopefully some big answers. [00:00:29] From the news building in London, this is Piers Morgan uncensored. [00:00:35] Well, good evening, from London. [00:00:36] Welcome to Piers Morgan Uncensored. [00:00:37] We're on the brink of one of the biggest turning points in human history. [00:00:41] Society and life as we know it could be about to change beyond all recognition. [00:00:46] And according to people like Elon Musk, one of the richest and smartest people alive, well, the future's here. [00:00:54] If you say like over 20, 30 year timeframe, I think things will be transformed beyond belief. [00:01:08] You probably won't recognize society in 30 years. [00:01:13] Like, I do think we're fairly close. [00:01:16] You asked me about artificial general intelligence. [00:01:18] I think we're perhaps only three years, maybe six years away from it. [00:01:26] This decade. [00:01:28] Wow, that was Elon Musk last night. [00:01:30] Artificial intelligence could upend our lives as dramatically as the Industrial Revolution, the invention of the internet, or it could end our lives. [00:01:38] It's that big. [00:01:40] Mind-reading AI has just allowed a paralyzed 40-year-old man to walk again by creating a wireless digital link between his spinal cord and his brain. [00:01:49] That is clearly a brilliant benefit of AI. 
[00:01:52] Chatbots write in perfect human prose by scouring the entire internet in fractions of a second. [00:01:57] They can pass exams, write literature and code software in an instant. [00:02:01] AI can generate music, photography, corporate logos, art based on any instructions, no matter how off the wall. [00:02:08] Ask AI to create a painting of a fox balancing an apple on its foot in the style of Salvador Dalí, and it'll give you this. [00:02:18] Takes three seconds. [00:02:20] It's free and anybody can use it. [00:02:22] And the more people who do use it, well, the smarter it gets. That's how it works. [00:02:27] But you may have already spotted the slight problem. [00:02:29] Humans created this technology. [00:02:32] Beyond that, it doesn't really need us. [00:02:34] And even in its nascent form, it can generate very convincing fakes. [00:02:38] The Florida governor Ron DeSantis launched his U.S. presidential campaign last night with Elon Musk on Twitter Spaces, an audio-only chat room on the app. [00:02:47] It had a few tech issues, so Donald Trump, his big rival for the Republican nomination, posted this, I have to admit, very realistic and clever satire in response. [00:03:00] Hi, everyone. [00:03:01] Welcome to our Ron DeSantis Twitter space. [00:03:03] Hello? [00:03:03] Is my microphone working correctly? [00:03:05] George, can you just wait while we... [00:03:07] Hello? [00:03:08] Can you hear me? [00:03:09] We can all hear you, George. [00:03:10] Can you just hold on for a second? [00:03:14] I can hear you fine, George. [00:03:15] Just speak to me. [00:03:16] I don't think George knows how to use Twitter. [00:03:18] Hello, can you hear me now? [00:03:20] Can I please make my big announcement now? [00:03:22] Everyone just... [00:03:23] Hello! [00:03:24] Just shut up, George. [00:03:25] Can somebody just mute George? [00:03:29] Dick, could you try not to cough on their own? [00:03:33] Okay, so how are we going to take out Trump, you guys?
[00:03:36] Guys from the FBI, this is not a private call. [00:03:39] This is a public Twitter space. [00:03:40] Everyone can listen in. [00:03:43] I mean, that's all fake. [00:03:44] That didn't actually happen, obviously. [00:03:46] But it's a brilliant fake. [00:03:48] It's very convincing. [00:03:49] I watched it the first time and thought, is that what happened last night? [00:03:53] Now, that's a fairly trivial thing, but imagine if it was footage from a war, for example. [00:03:59] A lot of people can now create a lot of problems with very little effort thanks to AI. [00:04:04] These photographs of Trump being arrested were entirely fake and generated by artificial intelligence. [00:04:09] This AI-generated hoax photograph of an explosion at the Pentagon sparked a stock market panic recently. [00:04:16] If we can't believe our own eyes and ears, well, how will democracy function? === Self-Regulation and Fake Images (10:06) === [00:04:21] And if AI can do so many of our jobs, well, what the hell are we going to do? [00:04:26] British Prime Minister Rishi Sunak has now acknowledged that AI could be an existential threat. [00:04:30] More than a thousand of the smartest people in the world are so worried about it, they've called for a complete six-month hiatus on its development. [00:04:37] Depending on who you ask, AI is about to either save the world or completely destroy it. [00:04:43] So tonight, we're going to devote the whole show to trying to work out which is more likely. [00:04:48] I'm joined by an esteemed panel of some of the biggest brains in the AI world. [00:04:51] Professor Max Tegmark of the Massachusetts Institute of Technology. [00:04:55] Physicist Michio Kaku, author of the new book Quantum Supremacy. [00:04:59] Times columnist Matthew Syed and editor of the tech website Sifted, Amy Lewin. [00:05:04] Well, welcome to all of you. [00:05:06] Let me start with you, Max Tegmark. [00:05:10] What's going to happen? [00:05:11] Do you know?
[00:05:15] It's great to be back. [00:05:17] What's going to happen? [00:05:19] We shouldn't sit here just like eating popcorn at the movies waiting to see what happens, because we are creating this future right now and it's the most important choice we'll ever make. [00:05:30] So we have to snap out of that passive mode. [00:05:31] It's either going to be, as you said, the best thing ever to happen to humanity, helping us amplify our own intelligence to solve all our problems, or it's going to be the end of us. [00:05:42] And I think Elon Musk is right. [00:05:43] I think it's not 50 years off. [00:05:47] It's very likely this decade that things go to hell in a handbasket or become great. [00:05:52] And that's why we've called for the pause, to give policymakers a little bit of time to steer this in a good direction. [00:06:00] I'm confident we can get things right, but we need a little more time to figure out how to make this safe and make sure it's something that we control rather than the other way around. [00:06:11] So Michio Kaku, woolly mammoths used to maraud around the world, dominating all in their wake. [00:06:19] There are no woolly mammoths anymore because humans realized the best way to deal with them was to get rid of them. [00:06:25] Are robots effectively going to do to us what we did to the woolly mammoths? [00:06:35] No, I don't think so. [00:06:36] First of all, the cat is out of the bag. [00:06:39] The point is not to kill the cat. [00:06:41] The point is to tame the cat so that it has the best interests of humanity at heart. [00:06:47] Realize that in a best case scenario, artificial intelligence could give us a new golden age, a golden age for society, a society of abundance, energy, food, medicines. [00:07:00] We're talking about AI across the board ushering in a golden age. [00:07:05] However, there are problems. [00:07:07] First is jobs.
[00:07:08] We have to retrain workers so that they can be part of this new revolution rather than being iced out of it. [00:07:15] Second of all, there are criminals, there are impersonators. [00:07:19] What happens if somebody impersonates Vladimir Putin and declares war on NATO? [00:07:24] We're in big trouble if that happens. [00:07:27] So it's a cat that has to be tamed. [00:07:30] And I think we can do it because, of course, we have fact checkers and also we have self-regulation. [00:07:36] Take a look at the movie industry. [00:07:38] After every movie, there's a statement saying that this movie was fake. [00:07:42] All the actors are fake. [00:07:44] So there has to be a disclaimer. [00:07:46] There has to be a fact checker to make sure that the cat is tamed. [00:07:51] Okay. [00:07:53] Matthew Syed, are you comfortable that we have the capability and perhaps the goodness in our hearts to make AI a positive for the world? [00:08:03] Or will nefarious forces, as they tend to do with everything, get the cat and turn it wild and basically trigger the end of mankind? [00:08:12] Well, I'm an instinctive optimist and I definitely would bet on the knowledge and ingenuity of the human creators of AI to increase productivity and create additional wealth. [00:08:25] But when you have the founders, some of the pioneers of this technology, talking about an existential risk, let's kind of figure out what that means. [00:08:34] It means the 8 billion people currently alive on the planet could be eliminated. [00:08:38] But not just us. [00:08:39] All of the future generations, the many billions who could experience the miracle of conscious existence, won't get that opportunity. [00:08:48] And what worries me is not the knowledge, it's the wisdom. [00:08:51] Even if we decided to follow the advice of Elon Musk and impose a moratorium, how would it be enforced? [00:08:58] What would stop a rogue nation continuing with that development and then turning on everybody else?
[00:09:03] This is what economists call a collective action problem. [00:09:07] And the only way a species can solve that is through cooperation. [00:09:11] That is precisely the social quality that human beings are struggling with, not just with AI, but with other existential risks, nuclear war, climate change, bacteriological development. [00:09:22] I'm worried. [00:09:23] I'm worried. [00:09:24] So Amy Lewin, are you worried? [00:09:26] I think it could go one of two ways. [00:09:28] I think this could, as some of the people were saying, be amazing for us. [00:09:32] You know, we could develop... [00:09:34] What are you most excited by about AI? [00:09:36] I think the medical uses, you know, that if the companies get together and they're all competing with each other to find, you know, ways to treat breast cancer, exactly. [00:09:46] That's amazing. [00:09:47] But as Matthew said, if it goes the other way and it's being used to create new weapons of mass destruction, not so good for the world. [00:09:55] And do you, I mean, I just don't trust the current political class to have the breadth of understanding. [00:10:00] Well, I've got to be honest, I don't trust a lot of people. [00:10:03] And there's a lot of trust here. [00:10:04] I mean, let me bring back Max. [00:10:06] I mean, you know, what Michio is saying is correct, I think, in what we'd like to happen. [00:10:12] But there's a lot of trust here. [00:10:14] I mean, the internet, for example, which was kind of the last big thing before AI, the internet remains really woefully unregulated and pretty dangerous. [00:10:24] Why would AI not be just a far worse version of the internet where it all starts off well-intentioned, but actually goes to hell quite quickly? [00:10:34] Trust is exactly what the issue is about. [00:10:38] We can obviously not trust the first company that builds artificial general intelligence to not just become the most stifling mega monopoly ever, right?
[00:10:47] Because it can crush all the competition with billions of virtual workers that need no pay except electricity and they can kill democracy, sort of take everything over. [00:10:58] You need to level the playing field. [00:10:59] We need to, I think, instead, not reinvent the wheel here, but just learn from our friends in biotech. [00:11:05] You know, no biotech company will be allowed to just start selling a new drug in the supermarket saying it cures cancer if it hasn't first demonstrated that this is safe. [00:11:18] And we have regulators for that in all countries in the world, right? [00:11:22] So it's completely absurd that for AI, which is going to have more impact, right now it's just the wild west. [00:11:28] Anyone can launch anything. [00:11:30] It's completely legal. [00:11:32] It shouldn't be like that. [00:11:33] We shouldn't trust companies. [00:11:34] We should verify. [00:11:35] We should make them come and prove that this is safe and then they can sell it and get really rich. [00:11:40] Look, this is a world that you've been expert in for a long time. [00:11:44] You must also be excited, I guess, about the amazing potential of AI. [00:11:49] What is the one thing that you're most excited by about AI? [00:11:54] Oh, I'm incredibly excited about, for example, solving all the problems that have stumped us in medicine. [00:12:03] I was in the hospital not that long ago visiting a friend who was told she had incurable cancer. [00:12:09] And I thought to myself, it's not incurable. [00:12:11] We haven't been smart enough to figure out how to cure it. [00:12:13] That's all. [00:12:14] So the sky is the limit, as Michio said. [00:12:16] We can help life flourish not just for the next election cycle, but for billions of years, and not just on Earth, but also throughout much of our beautiful universe if we get this right.
[00:12:26] And that's exactly why it's so ridiculous if we have billions of years of awesomeness that we can't even wait six months to make sure we get it right and don't squander that future. [00:12:38] Sorry, just to push back briefly. [00:12:39] I mean, you talked about, and I think it's a good metaphor for AI, bacteriological warfare. [00:12:44] At the moment, the Biological Weapons Convention has a global staff of three people and an annual budget equivalent to that of a single McDonald's restaurant. [00:12:56] That global regulation is a complete joke. [00:12:59] You talk about the institutions, the national jurisdictions that subscribe to the regulation, but [00:13:05] what would stop a lone fanatic creating a pathogen with a high infection fatality rate and a long incubation period? [00:13:13] At the moment, the regulation is completely toothless. [00:13:16] And also, somebody actually who ran the Oxford AstraZeneca COVID vaccine program said to me... [00:13:22] I meant the FDA, the Food and Drug Administration. [00:13:25] That's what I was talking about, where there is a really well-functioning regulatory system. [00:13:29] Companies have to prove stuff is safe before they sell it. [00:13:32] For those who subscribe to the jurisdiction of the FDA. [00:13:36] But you have one in Britain too, I'm sure. [00:13:38] We do. [00:13:39] I mean, the question I would put to you, and maybe I can come to Michio for this. [00:13:43] I remember the boss of the Oxford AstraZeneca COVID vaccine program telling me, when I had long COVID for a few months. [00:13:51] He said, I wouldn't worry about long COVID if I were you. [00:13:53] He said, I'd worry about a return of the plague. [00:13:57] So I started laughing and he went, no, I'm completely serious. [00:13:59] He said, we've had a massive wake-up call here. [00:14:01] If we got a return of the plague with a death rate of 30 to 40%, including children, it would wipe away 40% of the planet.
[00:14:10] In the wrong hands, AI presumably could create a way for a nefarious person or entity or state to create exactly what Matthew just said, a pathogen which could cause that kind of devastation. [00:14:25] How do we stop that happening? === Human Judgment vs AI Tools (15:30) === [00:14:27] How do we stop bad people getting access to super new knowledge from AI to cause devastation? [00:14:37] Well, let me say one thing. [00:14:38] The key concept, I think, is self-regulation. [00:14:42] The industry has to regulate itself in the way that the movie industry, the comic book industry, all culture in Western society is largely self-regulated. [00:14:52] And if it's not, then the politicians grandstand and they try to get elected on the basis of shutting down some of these media outlets. [00:15:00] So I think we have to let self-regulation play itself out in order to make sure that the excesses of this technology don't get out of hand. [00:15:09] But also, looking at the internet, as Deng Xiaoping once said of China, sometimes you have to open the window to let the fresh air in, but sometimes a few flies come through. [00:15:20] And so, yeah, the internet has a few flies, but in the main, the internet is a force of democracy. [00:15:26] It's a force of education. [00:15:28] It has proven its worth to society. [00:15:30] Imagine a society without the internet. [00:15:32] And the same thing with artificial intelligence. [00:15:35] I think it has to be tamed. [00:15:36] It's raw, but it has to be tamed. [00:15:39] And when it is tamed, it'll give us a new golden age for all of society. [00:15:44] Well, you know what, Michio? [00:15:46] I really, really, really want you to be right because the alternative is catastrophic. [00:15:52] Michio, thank you very much. [00:15:53] Max, thank you very much indeed. [00:15:55] You two are staying with me as we continue through this special edition of Piers Morgan Uncensored about artificial intelligence.
[00:16:02] So, coming up next, millions of jobs could obviously be affected by AI. [00:16:07] Many could be lost. [00:16:08] What happens to those currently in work who will be out of work? [00:16:12] We'll debate that next. [00:16:41] Welcome back. [00:16:42] Artificial intelligence could replace the equivalent of 300 million full-time jobs, according to investment bank Goldman Sachs. [00:16:48] And that transition is already underway. [00:16:50] BT is cutting 55,000 jobs, with a fifth lost to AI. [00:16:54] Even Sam Allardyce, the former England football manager, is worried. [00:16:58] I've just heard about 40,000 jobs going from BT. [00:17:02] So, what are they going to do? [00:17:04] You know what I mean? [00:17:05] So, the next piece of AI comes in, another 30,000 jobs goes. [00:17:09] What are they going to do? [00:17:10] So, you know, for me, it's not a great future. [00:17:16] But he's got a point, hasn't he? [00:17:18] Imagine if Big Sam was replaced by a robot, or Jurgen Klopp by RoboKlopp, as the Daily Star put it. [00:17:24] So, could our jobs all be at risk, or could AI be used to drive productivity across the economy? [00:17:30] Joining me now is the CEO and founder of Do Not Pay, the world's first robot lawyer, Joshua Browder, and Jeremy Howard, founding researcher of Fast.ai, and Matthew Syed and Amy Lewin are still with me. [00:17:41] Okay, Joshua Browder, here's my question about jobs specifically. [00:17:47] Let's assume for a moment that AI and robots take a whole lot of jobs. [00:17:52] Say Goldman Sachs are right, 300 million jobs. [00:17:55] All those people are suddenly unemployed who used to have those jobs. [00:17:59] How are those people going to have enough income, if they're unemployed, to buy the products being made by all the robots? [00:18:08] Well, the government can give them money. [00:18:11] I think AI will replace a huge number of jobs.
[00:18:14] Lawyers will be the first to be replaced by AI because they're charging hundreds of dollars an hour for copying and pasting documents. [00:18:20] But AI will also create a lot of jobs. [00:18:22] A lot of jobs today didn't exist 20 years ago. [00:18:25] So, at my company, Do Not Pay, we're now hiring for a job called prompt engineer, which is actually telling the AI what to do. [00:18:33] And that job didn't even exist a year ago. [00:18:35] So, I think there'll be new and exciting jobs for people. [00:18:38] But at the same time, those that charge a lot of money for doing very little, like some lawyers, not all lawyers, have to worry about being replaced. [00:18:46] You seem very, very anti-lawyer. [00:18:50] I spent too much time trying to fight them. [00:18:53] And probably too much money. [00:18:54] I mean, one thing your chatbot lawyer has successfully done is overturn almost 200,000 parking tickets. [00:19:01] I could see you becoming extremely popular just with that service alone. [00:19:06] Yeah, for very simple tasks, no one has time to wait on hold for five hours to save $50, like getting a refund from a company or getting out of a parking ticket. [00:19:16] And so, that's the perfect job for AI, saving time and money for people. [00:19:20] And I think those people, the lawyers you see on billboards that charge a lot of money to do that, should be very worried. [00:19:26] Okay, so Jeremy Howard, I mean, look, clearly, a massive threat to human employment. [00:19:32] We're seeing it already, and it's going to move probably faster and faster. [00:19:36] But what do we do about this? [00:19:37] You can't just have vast swathes of the planet who were employed suddenly not having employment. [00:19:43] What do we do? [00:19:45] Yeah, I think we've got to be careful. [00:19:47] You know, it's not as easy as what Joshua described. [00:19:50] There aren't going to be new jobs to fill in for all the old ones. [00:19:54] And I can explain why.
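The "prompt engineering" Browder describes is, in practice, careful construction of the text handed to a language model: role, constraints, and the user's facts laid out explicitly. A minimal illustrative sketch in Python; the function name, template wording, and example facts are all invented for illustration and are not DoNotPay's actual system:

```python
def build_appeal_prompt(ticket_reason: str, facts: list[str]) -> str:
    """Assemble a structured instruction for a language model.

    The engineering is in the framing: a role for the model, the
    user's facts as an explicit list, and a constraint on the output,
    so the model drafts a usable letter rather than free-form chat.
    """
    bullet_facts = "\n".join(f"- {fact}" for fact in facts)
    return (
        "You are a consumer-rights assistant drafting a formal appeal.\n"
        f"Ticket reason: {ticket_reason}\n"
        "Relevant facts supplied by the user:\n"
        f"{bullet_facts}\n"
        "Write a polite, one-paragraph appeal letter citing only these facts."
    )

prompt = build_appeal_prompt(
    "Parking in a restricted zone",
    ["The signage was obscured by scaffolding",
     "The ticket photo shows no visible sign"],
)
print(prompt)
```

The assembled string would then be sent to whatever model the service uses; the point is that the job is designing this framing, not writing the letter.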
[00:19:55] It's very simple, Piers. [00:19:57] Think of it this way. [00:19:59] We have two things as humans. [00:20:01] We have a body and a brain. [00:20:03] Our body can move things. [00:20:04] Our brain can think about things. [00:20:06] Back in the Industrial Revolution, the engine was developed. [00:20:10] And before that, in the UK, 80% of people worked on farms. [00:20:14] And the engine came along and allowed us to replace humans using their bodies to move things with machines. [00:20:21] And today, only 1.5% of people work on farms in the UK. [00:20:25] That's fine. [00:20:26] Lots of new jobs came along because we still had something else to give, our brains. [00:20:30] And so now most of us do jobs which involve, at least to some extent, thinking about things. [00:20:35] Now, if AI can come along and think about things better than we can, where are these replacement jobs going to come from? [00:20:43] We've got things that can move stuff. [00:20:44] We've got things that can think about stuff. [00:20:47] So where's the role for humans? [00:20:48] I do think there'll be some jobs still. [00:20:50] For example, talk show host. [00:20:52] I think there are some things where we need a human. [00:20:54] You know, I don't want to tune into Piers Morgan bot, right? [00:20:57] Well, let me just jump on that. [00:20:59] It's very interesting. [00:21:01] You don't think you do, but if I was to have an AI robot programmed that looked like me and had access to everything, every question I'd ever asked, every mannerism, every style, whatever, I reckon quite quickly they could develop something which could do a very passable version of me. [00:21:21] Right, but I still wouldn't tune in. [00:21:22] Like, think of another one. [00:21:23] Are you going to tune into the tennis playing bots? [00:21:26] Like, the fact that, you know, Roger Federer is an amazing human is why we like watching him play tennis.
[00:21:32] So I think like there'll still be a role of like humans doing human things and other people saying, well, look at that person. [00:21:38] That's amazing. [00:21:39] So I think there's going to be a totally different kind of role for people. [00:21:46] They won't be jobs in the classical sense, but it could be great. [00:21:50] If we find a way to transition to this, it's not a threat. [00:21:53] It means you don't have to go to work and do eight hours of whatever you're told tomorrow. [00:21:59] You can do whatever you most want to. [00:22:02] Well, okay, I mean, I think that's a good idea. [00:22:03] I mean, it could also be a huge threat. [00:22:04] Yeah, let me bring Matthew in here. [00:22:06] A friend of mine, a friend of my son, actually, my oldest boy, his mother wanted him to send a thank you note for a party she'd arranged for his 30th birthday, and he kept delaying this. [00:22:16] And eventually he asked AI to do him a thank you note to his mum, giving it a few details. [00:22:21] And it did a note that was so perfect and so emotional and heart-rending. [00:22:26] His mother was reduced to tears when she read it and said she'd never been so moved by him. [00:22:32] Now, is that good or is that awful? [00:22:35] It brought great joy to his mother, but she has no idea it was written by a robot. [00:22:38] As it happens, on Tuesday, my wife sent me an email. [00:22:42] She had gone to ChatGPT and said, write a Sunday Times column in the style of Matthew Syed. [00:22:48] Right. [00:22:48] And I was like, this has got to be terrible. [00:22:50] And I'm reading through it. [00:22:51] It was pretty good. [00:22:52] I'm thinking, my goodness, journalists are going to go. [00:22:55] Right. [00:22:55] And on the question, by the way, of we want to connect with the human, how do we know that we're currently talking to Piers Morgan, the flesh and blood human reality? [00:23:03] And what would stop you substituting the hologram for the real?
[00:23:06] I mean, I think that... [00:23:08] I interviewed a robot. [00:23:09] Honestly, Amy, I interviewed a robot on Good Morning Britain. [00:23:11] It was a female robot, and it was chilling. [00:23:14] I mean, she looked like a woman. [00:23:16] She spoke like a woman. [00:23:17] Now, that was a few years ago. [00:23:19] God knows where they're getting to with this now, where they can just be very convincing humans, but with amazingly high-powered brains. [00:23:27] They can be. [00:23:27] They're not perfect, though, yet. [00:23:28] I've got a friend who's a school teacher, and she said that when she gets an essay done by the robot, it's so much better than any 13-year-old boy could actually do that she can really tell which one's which, still, at the moment. [00:23:40] But they'll get there. [00:23:42] Well, they will. [00:23:43] I mean, finally, Joshua, I've asked a few guests this, but what are you most excited by about the potential of AI? [00:23:52] I think AI, as was discussed with the previous guests, is being used for evil with debt collectors and all of this stuff, but it can also be used for good. [00:24:00] And my goal is to give power to the people. [00:24:03] And if it makes ordinary people more powerful than the richest in society, then that's great. [00:24:08] And so I think it will level the playing field by allowing people to weaponize AI to help them in their everyday life. [00:24:16] All right. [00:24:16] Jeremy, same question for you. [00:24:18] Quick answer, please. [00:24:20] Yeah, imagine the huge opportunities in education. [00:24:22] I've got a daughter, and she's already doing stuff with ChatGPT. [00:24:27] And it's fantastic. [00:24:28] She can learn about anything she wants to. [00:24:30] It's an engaging thing. [00:24:31] You know, it's not replacing teachers at the moment, but I think AI could be used to really democratize education. [00:24:37] That's something I'm very excited about.
[00:24:39] I could actually see robots taking classes with kids. [00:24:41] I mean, if they're good enough and you give them a bit of personality, why not? [00:24:45] I mean, most teachers do a version of the same kind of lessons. [00:24:48] I mean, you get a few, you know, a few who break out and do very different things each time, but a lot of them do the same stuff. [00:24:54] It'll be like a personal tutor for every kid. [00:24:57] Yeah. [00:24:58] Yeah, it's going to be fascinating. [00:24:59] Thank you both very much indeed. [00:25:01] I appreciate it. [00:25:02] Coming up next, AI-powered deepfakes can make politicians and stars appear to do and say anything. [00:25:07] How safe is democracy if we can't trust our own eyes and ears? [00:25:25] Welcome back to Piers Morgan Uncensored. [00:25:27] Deepfakes are AI-powered videos which can make anyone appear to say or do almost anything. [00:25:32] They're no longer the preserve of nerds or CGI experts. [00:25:35] Freely available apps can make videos like this in minutes. [00:25:39] Hello, Piers. [00:25:40] Donald Trump here with a very special announcement. [00:25:44] I'm officially endorsing Ron DeSantis to be the next president of the United States. [00:25:50] God bless America. [00:25:51] Hey, Karen. [00:25:52] I mean, Piers, it's Kanye West here reminding you that I'm not a shark, just a blowfish, and that there's hope for you too, Piers. [00:26:00] Have a great show. [00:26:02] Hello, Piers. [00:26:02] Elon Musk here, talking to you from Tesla headquarters. [00:26:06] I know you've wanted me on your show for ages, and sorry to disappoint. [00:26:10] It will happen soon, I promise. [00:26:11] Maybe on the moon, maybe on Mars, but I promise you'll see me soon. [00:26:15] That's actually probably true. [00:26:17] It's just a bit of fun, obviously, but it might not be next time, might it? [00:26:20] You can see how that could be really quite dangerous.
[00:26:22] The real potential is obviously limitless, and that could be a good or bad thing. [00:26:27] The power of AI to create and replicate misinformation obviously poses enormous threats to democracy. [00:26:33] British Prime Minister Rishi Sunak said AI poses existential risks. [00:26:38] Well, Sam Altman, the boss of OpenAI, recently gave this grave warning to the United States Congress. [00:26:45] For a while, people were really quite fooled by photoshopped images and then pretty quickly developed an understanding that images might be photoshopped. [00:26:54] This will be like that, but on steroids. [00:26:57] So how safe is democracy if we can't trust our own eyes and ears? [00:27:00] I'm joined by the podcaster Kara Swisher and the CEO and co-founder of Boson AI, Alex Smola, and my panel are still with me. [00:27:07] All right, Kara, let me start with you, because I don't know whether to be incredibly thrilled by AI or absolutely terrified. [00:27:17] Where's your mind with it all? [00:27:19] Well, do you only have two settings, Piers? [00:27:21] I mean, that's the issue. [00:27:23] It's not one or the other. [00:27:24] I know you have many, many nuanced emotions. [00:27:26] And that's unfortunately what's happened here: the debate is it's going to kill us or it's going to make us superhuman. [00:27:32] And I don't think it's either thing. [00:27:33] It's sort of a lot like the internet. [00:27:34] And I think Sam's right that it's on steroids. [00:27:37] But at some point, we have to realize we have control of it. [00:27:41] And I'm not particularly scared of AI. [00:27:43] I'm scared of people who use AI. [00:27:45] And that's really how we have to think about it. [00:27:48] And that it can be regulated. [00:27:49] It can be controlled. [00:27:51] We control nuclear proliferation. [00:27:54] We control all kinds of things in society. [00:27:56] And this is just another one of them. [00:27:58] But do we do it?
[00:27:59] Let me challenge that. [00:28:00] But Kara, let me challenge that. [00:28:02] Do we regulate these things safely? [00:28:04] I mean, at the moment, we have a war raging in Ukraine where at any moment Vladimir Putin could unleash a nuclear weapon. [00:28:11] No amount of regulation is going to stop him if he thinks he's losing that war. [00:28:14] Similarly, with AI, we can talk about it being regulated and safe, but the internet has never been properly regulated. [00:28:22] Why should AI be? [00:28:24] Well, the internet has been regulated, at least in Europe, but not in the United States. [00:28:29] So that's different. [00:28:30] With nuclear weapons, I don't believe we've been blown up yet, you know, in all this time. [00:28:34] And obviously not everyone has nuclear weapons, though there are more and more people having them. [00:28:38] But in general, the idea that we can control it is the one we have to start with. [00:28:42] And this is not different than that. [00:28:44] Now, there's going to be rogue players. [00:28:46] There's going to be threats. [00:28:47] There's going to be all kinds of things. [00:28:48] But a lot of this is about the people using it and what's put into it versus that it has to go one way or the other. [00:28:55] We've seen what happens with lack of regulation, and that's the current internet, right? [00:28:59] Here, again, there's regulation in Europe but not in the United States. [00:29:03] But you've seen that you can do it. [00:29:05] And in this case, one thing that's powerful is it has liability attached to it. [00:29:10] And because of Section 230 in the United States, the internet didn't before. [00:29:13] In this case, AI has liability. [00:29:15] And so there's also those issues that can help control it. [00:29:19] But it has to be a global effort to decide we're not going to do killer robots. [00:29:24] We're not going to do this. [00:29:25] We're not going to do that.
[00:29:27] And that's where you have to start, despite the fact that there will be rogue players. [00:29:31] Okay. [00:29:32] Alex Smola, the problem is, where there are robots, I can quite envisage a time, quite quickly, where there are killer robots. [00:29:39] How do we stop it? [00:29:42] So, let me just quickly get back to one or two things that Kara brought up before we get to the killer robots. [00:29:51] I think there's a quite significant difference between what you have with nuclear power and AI. === Global Effort Against Rogue Actors (14:57) === [00:29:58] And the big difference is the barrier of entry. [00:30:01] So it's relatively easy to control a supply chain of uranium. [00:30:07] And on top of that, if you handle it and you don't know what you're doing, you're going to die from it. [00:30:13] And it's only available in certain locations and all of that. [00:30:16] It's very, very different by now with AI tools. [00:30:20] So once you have a well-trained model, and there are a couple of really good ones out there, for instance, Meta released something called LLaMA recently, and this has led to a Cambrian explosion of models. [00:30:32] And by now, you can take a Raspberry Pi that costs less than $50, and you can make it generate text. [00:30:42] Or you can spend about a dollar an hour on, well, basically second-tier cloud providers, and you can run much, much more capable models. [00:30:52] And that has lowered the barrier of entry to using AI tools rather drastically. [00:30:59] This is something that would never be possible with nuclear power because, well, the average human would never have the resources to actually build a nuclear reactor in their backyard. [00:31:10] But now you can have a teenager buying a computer for a thousand or two thousand bucks and doing it entirely on his own, or running it in the cloud for a couple of dollars. [00:31:20] So that's one of the, I think, key differentiators.
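Smola's point is about structure, not scale: generating text is just repeatedly picking a next word given the previous ones, and that loop runs on anything. As a minimal, toy illustration (this is a bigram model, nothing like LLaMA; the corpus and names here are invented for the example):

```python
import random
from collections import defaultdict

# Toy bigram "language model". Real models like the LLaMA family have
# billions of parameters, but the generation loop is structurally the
# same: sample the next token conditioned on what came before.
corpus = ("the genie is out of the bottle and the barrier of entry "
          "to using ai tools is low").split()

# Count which word follows which in the corpus.
transitions = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev].append(nxt)

def generate(seed_word, n_words, rng):
    """Walk the bigram table, sampling a successor at each step."""
    words = [seed_word]
    for _ in range(n_words):
        options = transitions.get(words[-1])
        if not options:  # dead end: no observed successor
            break
        words.append(rng.choice(options))
    return " ".join(words)

print(generate("the", 8, random.Random(0)))
```

The gap between this and a frontier model is enormous, but the hardware demand of the sampling loop itself is negligible, which is the barrier-of-entry argument in miniature.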
[00:31:24] And that, fortunately or unfortunately, means that the genie is really out of the bottle. [00:31:29] It's going to be very, very difficult to get people to, well, pull back on this, just because everybody can do it. [00:31:38] But it also creates more opportunity, because it can lead to a very large creative explosion. [00:31:47] Right, okay. [00:31:48] Now onto those killer robots. [00:31:50] Yeah, can we quickly, on killer robots? [00:31:53] Yeah, so, I mean, to some extent we have them already. [00:31:56] So I guess if you've watched any videos from Ukraine, you've seen a Javelin being launched, right? [00:32:03] So this is one of those, okay, I'm probably mischaracterizing it as a rocket-propelled grenade. [00:32:09] So it's basically, you know, a rocket that's being launched. [00:32:12] It flies towards where the tank is. [00:32:14] It flies above the tank, probably detects with some sensors, okay, there's a tank underneath, and then dive-bombs straight from the top into it and destroys the tank. [00:32:23] And we've probably all seen images of that. [00:32:27] Well, there's AI in it. [00:32:28] The AI in this case is probably some level of object detection. [00:32:34] And then it has a control mechanism, an actuator in it, that will make this rocket, you know, destroy the tank. [00:32:44] So in that sense, we've had it for decades. [00:32:49] And I don't think the fear of building a killer robot now is going to be that much different. [00:32:57] Yes, okay, it's easier, for instance, to track humans. [00:33:01] It's easier to also track, well, pests or animals, right? [00:33:07] And it's also easier to track weeds. [00:33:10] This has actually led to improvements in precision agriculture and farming. [00:33:15] But obviously, if you can track and detect something, you can use it for good and for evil. [00:33:21] And that's... [00:33:23] Well, okay, so again, it comes down to faith in human nature.
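The detect-then-act loop Smola describes (sensor, object detection, actuator) can be sketched in a few lines. This is purely a structural toy under assumed names and an assumed threshold, not how any real munition or agricultural tracker works:

```python
# Toy detect-then-act loop: scan a sensor grid for the strongest
# return, then emit a crude steering command toward it. The grid,
# threshold, and function names are illustrative assumptions.

DETECTION_THRESHOLD = 0.5  # assumed minimum signal to count as a target

def detect_target(sensor_grid):
    """Return (row, col) of the strongest signature above threshold, else None."""
    best, best_pos = 0.0, None
    for r, row in enumerate(sensor_grid):
        for c, value in enumerate(row):
            if value > best:
                best, best_pos = value, (r, c)
    return best_pos if best > DETECTION_THRESHOLD else None

def steering_command(current, target):
    """Proportional-ish actuator: nudge one step toward the target."""
    dr = max(-1, min(1, target[0] - current[0]))
    dc = max(-1, min(1, target[1] - current[1]))
    return (dr, dc)

grid = [[0.1, 0.2, 0.1],
        [0.1, 0.9, 0.1],   # strong return: the "tank"
        [0.1, 0.1, 0.1]]
target = detect_target(grid)              # -> (1, 1)
print(steering_command((0, 0), target))   # -> (1, 1)
```

The same detect/steer structure serves the benign cases Smola mentions (pests, weeds) and the malign ones, which is his dual-use point.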
[00:33:27] I mean, Amy, the more I hear about this, the more it comes down really to this: well, yeah, as long as we can trust everyone to do the right thing here, we'll be fine. [00:33:37] But I'm not really that reassured about this aspect of bad actors in this field doing bad stuff with it. [00:33:45] Well, I guess it's reassuring that, you know, Rishi Sunak yesterday met with a bunch of these tech CEOs. [00:33:50] And I think people like Sam Altman and Demis Hassabis from DeepMind give the impression that they started out to try and build something that was good for humanity, right? [00:34:00] But the amount of money that he was committing, Rishi, is peanuts. [00:34:03] Yeah, yeah. [00:34:04] I mean, you need to have the CEOs meeting with the politicians, but then actually deciding on something, I think. [00:34:10] And the guardrails Rishi talked about, I think we need those. [00:34:13] But Matthew, I've seen you shaking your head repeatedly throughout this whenever we hear that it's all going to be regulated, and I kind of agree with you; I don't see why we're being so trusting about this. [00:34:22] The genie is out of the bottle. [00:34:23] It seems to me potentially a really bad genie in the wrong hands, right? And the analogy with nuclear, I think, is absurd. [00:34:32] It's very difficult to enrich uranium, but almost anybody, as a previous speaker said, could operationalize AI. [00:34:38] It's all about faith in human nature. [00:34:40] I trust a lot of people, but with eight billion people with potential access to this technology, it just requires one rogue actor, and there's always going to be one person or one small group of fanatics, driven either by mental illness or a fanatical ideology, who will turn on the rest of us. [00:35:00] And I do worry, given that we're being quite philosophical here. [00:35:02] Most scientists believe that there is scope in the universe for other civilizations to emerge.
[00:35:08] What has always been confusing to them is that we haven't heard from these other civilizations, and it may just be that when a civilization surpasses a technological threshold where it can destroy itself, it becomes almost impossible to put, to use your phrase, the genie back in the bottle. [00:35:23] That may be a really profound filter that stops us evolving and developing. [00:35:28] I agree. Kara, [00:35:30] let's end on a positive note. [00:35:31] What's the most exciting thing for you about AI? [00:35:37] Well, you know, if you don't like the nuclear analogy, you can think of CRISPR. [00:35:40] There are all kinds of things that globally should be regulated together and have been in the past. [00:35:45] And again, it's not perfect. [00:35:46] The things I think are exciting: healthcare and education, those two things. [00:35:51] The stuff they're doing at Google on protein folding, drug discovery, education, all kinds of information in your hands. [00:36:01] Some of the more fun stuff around entertainment could be interesting. [00:36:05] But largely healthcare. [00:36:07] I think that is really where the real promise is. [00:36:09] You know, your previous guest talked about this: cancer isn't unsolved because it can't be solved. [00:36:14] It's because it hasn't been solved. [00:36:15] And I think this gives us incredible computing power in our hands. [00:36:18] So the good news is none of us will die from cancer in 30 years. [00:36:22] The bad news is we'll all get killed by killer robots. [00:36:25] So I guess it's a game of two halves. [00:36:28] You'll get killed by people. [00:36:30] People will kill us. [00:36:31] People are the problem. [00:36:31] So maybe the robots are right. [00:36:33] Let's just get rid of us. [00:36:35] They aren't sentient. [00:36:36] Let's just keep that. [00:36:37] They're not. [00:36:38] But you know what?
[00:36:38] I remember I interviewed Professor Stephen Hawking, in his last TV interview actually, and I asked him what was the biggest threat to mankind. [00:36:48] He said, when artificial intelligence learns to self-design, that's it. [00:36:53] And that's exactly what they're all now warning about. [00:36:55] And Elon Musk. [00:36:56] He was very early to this. [00:36:57] He was, yeah. [00:36:59] Both Elon and I were. [00:37:00] I've got to leave it there, unfortunately. [00:37:02] But thank you both very much. [00:37:04] Sorry, go on. [00:37:05] If I may raise something else that would probably keep me a lot more awake at night, and we'll see this a lot in the next few years. [00:37:13] And that's basically that a lot of what you would call social media, or user-generated content, will be polluted by generative AI. [00:37:24] Yeah. [00:37:25] So for instance, if you perform a Google search for "as an AI language model, I cannot", and you search for that in Amazon reviews, you're going to see a whole lot of reviews. [00:37:38] Not bad reviews, but very clearly synthetically generated ones, by now polluting that review database. [00:37:45] I mean, as long as they pollute it so that when my next book comes out, they're all glowing reviews. [00:37:50] I'm kind of, I'm not against that. [00:37:53] I've got to leave it there. [00:37:54] But thank you very much indeed, Alex. [00:37:56] I'd love to talk more. [00:37:57] We're running out of time. [00:37:58] Kara, thank you very much indeed. [00:37:59] I know you've had a busy day, so I appreciate you taking time for us as well. [00:38:02] Thank you. [00:38:02] Stay with me. [00:38:04] We'll talk about jobs. [00:38:05] So next: AI can write jokes, it can make music and create award-winning art in seconds. [00:38:10] So what's the point of creative people anymore? [00:38:13] William Shatner and Howie Mandel will debate that next. [00:38:26] Thank you.
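Smola's "as an AI language model" search is essentially a telltale-phrase filter: chatbots leak stock disclaimers into the text they generate, and those strings make the synthetic reviews searchable. A minimal sketch of that idea (the phrase list and review texts are illustrative assumptions, and real detection is far harder than substring matching):

```python
# Flag reviews containing boilerplate strings that LLM chatbots often
# emit when refusing or disclaiming. Illustrative only: a crude filter,
# not a reliable AI-text detector.

TELLTALE_PHRASES = [
    "as an ai language model",
    "i cannot provide",
    "i don't have personal opinions",
]

def looks_synthetic(review: str) -> bool:
    """True if the review contains a known chatbot boilerplate phrase."""
    text = review.lower()
    return any(phrase in text for phrase in TELLTALE_PHRASES)

reviews = [
    "Great book, arrived on time, five stars.",
    "As an AI language model, I cannot review this product, "
    "but it has excellent build quality.",
]
print([looks_synthetic(r) for r in reviews])  # -> [False, True]
```

The catch, implicit in Smola's warning, is that only careless pollution is this easy to spot; generated text without the leaked disclaimer sails straight through.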
[00:38:27] Thank you very much. [00:38:29] It sure is an honor to be here with you all on America's Got Talent. [00:38:48] Welcome back. [00:38:49] AI systems can make music, generate art, mimic photography, and write. [00:38:52] Some argue it could augment human ingenuity, enabling artists to explore novel avenues and collaborate with AI in innovative ways, but it could clearly impact cultural jobs. [00:39:02] Others fear it risks a total loss of human creativity. [00:39:06] And in bad news for TV writers, everything I just said was scripted by AI. [00:39:10] But it's also been used in the entertainment industry to bring beloved singers and movie characters back to life. [00:39:16] America's Got Talent, [00:39:17] the show I used to work on, also got a taste of just what AI can do when Simon Cowell, Howie Mandel and Terry Crews took to the stage, or at least their AI replicas did. [00:39:48] Well, I'm now joined by the comedian, the star of NBC's America's Got Talent, Howie Mandel, and legendary actor William Shatner. [00:39:55] Welcome to both of you. [00:39:56] So, Howie, I mean, I knew it was a fake the moment I heard you sing, because you can't sing. [00:40:00] So, was it a weird experience seeing that? [00:40:04] And what do you think of AI? [00:40:06] Where's your mind with all this? [00:40:08] Amazing. [00:40:10] I am so up on technology. [00:40:12] I love it. [00:40:13] I think we've got to embrace it. [00:40:15] I've been listening to your show from the beginning, people screaming fire in the building. [00:40:20] I think, number one, there is no choice in stopping it. [00:40:26] There's no way you're going to stop it. [00:40:27] Embrace technology. [00:40:28] I always have. [00:40:30] I embrace AI. [00:40:31] I embrace holograms. [00:40:32] I've invested in a hologram company, which is actually in the O2 right there, Proto Hologram. [00:40:38] And that's because I can be in two places at once. [00:40:41] I would love somebody to write for me.
[00:40:44] I've actually licensed my AI image to a Korean company because I want to be able to be places and do things without being there. [00:40:54] It makes me much more productive. [00:40:55] It makes the world productive. [00:40:57] You know, nobody stopped the automobile, even though all the blacksmiths seem to have lost their jobs. [00:41:02] You know, we'll just find new things to do. [00:41:05] Okay, but how would you feel? [00:41:07] How would you feel artistically if an AI robot was able to write all your jokes for you and you literally had no input? [00:41:15] Would you care? [00:41:17] You wouldn't care, would you? [00:41:18] Because you're a cultural shop. [00:41:21] As long as you pay me. [00:41:22] But the truth is, what you're saying doesn't really exist. [00:41:25] You know, we need human ingenuity or creativity to prompt AI. [00:41:30] We need to come up with the ideas of what we want AI to do. [00:41:33] You're not going to stop this technology. [00:41:35] We didn't stop the World Wide Web. [00:41:37] And, you know, people are putting things out there, and, I don't know if you're on Instagram, but a lot of people are a lot better looking than they actually are. [00:41:46] It enhances. [00:41:47] Yes, there are naysayers who say bad things will happen, but bad things happen with cars. [00:41:53] Bad things happen with all technology. [00:41:56] Bad things happen just when people are left to their own devices. [00:41:59] This is all good, people. [00:42:01] I mean, to be honest, you know what? [00:42:02] Bad things used to happen. [00:42:05] Yeah, I mean, bad things used to happen to me when I worked with you quite regularly, normally because of you. [00:42:09] Let me go to William Shatner. [00:42:11] I miss you, buddy. [00:42:12] Let me go to William Shatner.
[00:42:13] William, you celebrated your 90th birthday recently by creating an AI-powered version of yourself that will live forever, which I was thrilled to hear about. [00:42:21] But does that mean you're all over this too, the AI phenomenon? [00:42:24] Are you embracing it with great glee, or are you worried? [00:42:28] I too have been listening to your program from the beginning. [00:42:31] What I'm missing, and what I think is possibly promising, is morality. [00:42:39] How do you teach a computer morality? Which brings us back to human beings: [00:42:44] how do you teach morality to human beings? [00:42:47] Will it be possible to teach these artificial intelligence entities morality as they become more and more sophisticated, [00:42:57] so that they will not be able to do bad things and harm people? [00:43:04] Then you get into the question of authentication. [00:43:07] Can you authenticate an image so that there's no question whether it's an imitation or the real thing? [00:43:16] There are sophisticated areas that can help take care of the things we've heard on this program that are very real. [00:43:29] Will an artificial intelligence become sentient? [00:43:35] Will it feel endorphins, and will it feel good when it does something good and bad when it does something bad, if it could be taught to do that? [00:43:45] I mean, the thing is, William, you can teach a dog, you can teach a child, you can teach them about morality and they'll pick it up and learn it. [00:43:55] The thing about AI, presumably, since you ask about morality: [00:43:58] if you feed enough information into AI about morality, about what is moral and what is right and what is wrong, why wouldn't it be even better than any child at assimilating that information? [00:44:10] That's exactly what I'm saying. [00:44:11] That's exactly what I'm saying. [00:44:13] Built internally into every AI is a morality. [00:44:19] But then how do you teach morality?
[00:44:21] Because morality differs from... [00:44:24] And how do you stop it? [00:44:25] How do you stop it, conversely? [00:44:26] How do you stop it being immoral? [00:44:28] How do you stop a nefarious actor getting hold of it? [00:44:31] How do you stop humanity from doing that? [00:44:34] Say that again, Howie? [00:44:35] Well, you can't stop it. [00:44:36] How do you stop humanity? [00:44:38] So that's why, when you get all excited and say how brilliant this all is, I am genuinely worried, simply because history tells us that, yes, every great new innovation is great, but in the hands of bad actors, and this includes the internet, for example, bad things can happen. [00:44:54] And I'm not sure how you stop it. === Teaching Morality to Machines (02:27) === [00:44:55] Welcome to the world, it appears. [00:44:59] I'll talk about my own. [00:45:03] Howie, you go first. [00:45:06] Well, everybody thought it was a bad thing the first time I purchased an inflatable sex doll. [00:45:12] But now my wife has so much more time to watch her TV shows. [00:45:18] Okay, William, on the bad stuff, I mean, I totally agree with you. [00:45:22] If you can turn AI moral, great. [00:45:25] If it can be turned equally easily to be immoral, what do we do about that? [00:45:31] Our artificial intelligence is merely an extension, at the moment, of humanity. [00:45:36] And humanity carries with it the great goodness of the Webb telescope and the horror of the Ukraine war. [00:45:46] We carry that capacity within us. [00:45:49] Our machines that help us will have the same capacity. [00:45:53] How do we regulate that? [00:45:56] How do we authenticate that? [00:45:57] Those are problems that need to be solved and are soluble, but we can't ignore them. [00:46:02] What is for you, Howie, the most exciting thing about AI? [00:46:06] What are you most looking forward to being able to do with it?
[00:46:10] Productivity: to be in more than one place and do more than one thing, more than I as a human have the capacity to do. [00:46:19] You know, I've invested in companies like Proto Hologram and AI companies to be bigger and more productive than I actually am. [00:46:31] And, you know, listen, what we have, the capacity that I have as a human being, is finite, is short, is in a box. [00:46:39] And now I can think outside that box, and I can be in multiple locations at once, and it may not even be me. [00:46:47] And I'm okay with that, as long as I get some credit for whatever was created. [00:46:53] And William, very quickly, for you, the most exciting idea about AI that you've had? [00:47:00] I just read this morning that AI is starting to invent an antibiotic that will take care of the superbugs. [00:47:10] That's just the beginning of the steps that AI can help us with. [00:47:15] Guys, I could talk to you for hours about this. [00:47:17] You know, we'll probably come back to it. [00:47:18] It's only going to get bigger. [00:47:20] Fascinating. [00:47:21] Thank you both for your input. [00:47:22] I really appreciate