The Megyn Kelly Show - 20210826_the-benefits-and-dangers-of-artificial-intelligenc Aired: 2021-08-26 Duration: 01:37:16 === Andrew Ng Joins the Show (02:39) === [00:00:16] Welcome to The Megyn Kelly Show, your home for open, honest, and provocative conversations. [00:00:28] Hey everyone, I'm Megyn Kelly. [00:00:29] Welcome to The Megyn Kelly Show. [00:00:31] Oh, we have a fascinating show for you today. [00:00:33] Fascinating! [00:00:34] It's about artificial intelligence. [00:00:36] I've been asking my team to line up a show on this and we have the two greatest guys, the most brilliant, just greatest guys to talk about it with. [00:00:45] Don't you want to know where this is going? [00:00:47] Right? [00:00:47] Like, okay, there's Amazon Alexa and then there's something called super intelligent computers that are going to take over the world and possibly eliminate humanity. [00:00:58] Opposite extremes. [00:00:59] It can be wonderful and it can be life changing in a great way and it could also potentially be life extinguishing if it gets into the wrong hands and so on. [00:01:06] So we've got all these angles covered. [00:01:08] You are going to love, love, love this show. [00:01:11] We're going to kick it off with a guy named Nick Bostrom. [00:01:14] He's a professor at Oxford. [00:01:16] He's the director of something called the Future of Humanity Institute. [00:01:21] He's done so many things. [00:01:23] He's been a teacher at Yale, he did his postdoctoral fellowship at Oxford. [00:01:27] He's the founding director, as I say, of this Future of Humanity Institute. [00:01:30] That's at Oxford as well. [00:01:32] He researches the far future of human civilization. [00:01:35] Professor of philosophy at Oxford. [00:01:37] He has been included in Foreign Policy's [00:01:40] Top 100 Global Thinkers list repeatedly. [00:01:44] He was listed by Prospect Magazine in their list of the world's top thinkers. [00:01:47] You get it? [00:01:48] Sensing a theme.
[00:01:49] And he's probably best known for his incredibly bestselling book, Superintelligence: Paths, Dangers, Strategies. [00:01:58] It's been recommended by everyone from Elon Musk, who's a huge fan of our guest, Nick Bostrom, to Bill Gates. [00:02:04] And he is one of the leading thinkers on superintelligence: what it is, where it's going, [00:02:10] and how it needs to be handled. [00:02:12] That's sort of where the machines become smarter than the humans. [00:02:16] We're going to talk to him. [00:02:17] Then we're going to be joined by a guy named Andrew Ng. [00:02:20] He's also incredibly brilliant. [00:02:21] I'm so excited to talk to these guys. [00:02:23] He's the founder of deeplearning.ai, co-founder of Coursera. [00:02:28] Coursera is huge. [00:02:30] This is the world's leading massive open online courses platform. [00:02:35] He's also an adjunct professor at Stanford. [00:02:39] He was the founding lead of the Google Brain team. [00:02:42] He coined the term Google Brain. [00:02:44] He was the chief scientist at Baidu, which is China's Google. [00:02:48] There's no Google in China. [00:02:50] This is China's. [00:02:50] I mean, this guy's been, he's led a 1,300 person AI group for China's Google. === Risks of Superintelligence (15:32) === [00:02:56] All right. [00:02:56] So he's done, he's basically been in charge of everything. [00:02:59] He's a globally recognized leader in AI. [00:03:01] And I would describe him as more of a happy warrior when it comes to AI, very optimistic about it and what it can do. [00:03:07] I'm going to talk about how it could change your life for the better. [00:03:10] And I think you're going to be delighted with the show. [00:03:12] And I predict you'll be sharing it [00:03:14] with everyone you know. [00:03:14] Okay, so we're going to start with our guests in one minute. [00:03:17] Real quickly, here's this. [00:03:24] There is so much that I want to go over with you.
[00:03:26] Just treat me like I am AI 101 because I know almost nothing about this field, but am dying to know more. [00:03:35] And just having read what I've read now of your work and having listened to your TED Talks and so on, I'm terrified. [00:03:42] I'm terrified. [00:03:44] So. [00:03:45] Let's start here. [00:03:47] What is super intelligence? [00:03:50] I just use it as a term for any form of general artificial intelligence that greatly surpasses humans in all cognitive abilities. [00:04:00] And so, in other words, when the machines get smarter than we are. [00:04:04] Yeah. [00:04:05] Okay. [00:04:06] And how likely is it to come into existence? [00:04:12] I think it is highly likely that it will eventually come into existence. [00:04:17] I think almost a certainty if we avoid destroying ourselves through some other means before. [00:04:23] But if science and technology continue to advance on a wide front, then I think eventually we'll figure out how to produce high level machine intelligence and super intelligence. [00:04:35] Is it in the works right now? [00:04:37] Well, I mean, in some sense, it has been in the works for a long time in that people have been trying to understand better how the brain works, how to use statistical methods to better extrapolate from past data, how to build faster computers. [00:04:53] All these are potential ingredients. [00:04:56] And of course, the field of artificial intelligence has really burgeoned in the last eight years or so with the deep learning revolution. [00:05:07] And so there's quite a lot of excitement now about what is becoming possible with machine learning. [00:05:14] But predicting how far we are from being able to match and then maybe surpass human level intelligence is really hard. [00:05:21] And I think we just have to acknowledge that there's enormous uncertainty on the timeline for these kinds of things.
[00:05:29] Now, we're going to be joined after you by another guest whose belief is, well, the way he phrases it is there's two types of AI. [00:05:40] There's ANI, artificial narrow intelligence, and AGI, artificial general intelligence. [00:05:45] And he says artificial narrow intelligence is basically like the stuff we've seen already where you're typing on your computer and it recognizes the word you're typing and completes it, you know, or you're, I don't know, maybe Amazon Alexa or the self driving car, like those things that are improving our day to day living. [00:06:05] But general intelligence is what you're talking about, super intelligence, which is a whole different realm. [00:06:11] And that's the thing, as I understand it, that you're sounding the alarm on. [00:06:16] Yeah, or at least trying to draw attention to as something that would be very important. [00:06:22] I think it has an equally large upside if we get this transition to the machine intelligence era right. [00:06:30] But I do think also there are significant risks associated with this. [00:06:34] But yeah, I think it is useful to make this distinction between kind of specialized AI systems that can only do one thing, [00:06:41] maybe sometimes at the superhuman level. [00:06:43] So, for a long time, we've had like chess computers that can beat any human. [00:06:48] But contrasting that to something that matches humans, say, in our general learning abilities and reasoning abilities that make it possible for a human to learn any of thousands of different occupations or to solve novel problems that you've never seen before and to use common sense. [00:07:07] So, why would we be seeking superintelligence? [00:07:11] You know, because we're going to get into the risks of it and, you know, the possibility that machines not only get smarter than humans, but actually take over the world and possibly eliminate humans. [00:07:20] Why would we be even going down that route?
[00:07:23] Why wouldn't we have just seen that future and said, why would we create another being on Earth that's smarter than we are that could take over this planet? [00:07:31] For the most part, we are not seeking super intelligence, but greater intelligence. [00:07:37] Like, we have some AIs today, and it'd be nice if they made fewer errors and were a little bit more capable. [00:07:43] Then, of course, if we succeed in that, we would want them to be better still. [00:07:47] It's not so much that there are a lot of people who are specifically trying to create super intelligence, but there are huge drivers for making progress in having better forms of machine intelligence. [00:07:59] I mean, in general, it's not as if [00:08:02] human civilization has some kind of great master plan either, right? [00:08:06] I mean, we are not sort of having 100 year plans for which technologies we're going to promote and which less so. [00:08:14] For the most part, things just happen and there are these local reasons why people do things. [00:08:20] And that I think is also true for the field of AI. [00:08:24] I know that you've said the possible scenario is we create a machine, a computer [00:08:31] that has general intelligence below human level, but is superior mathematically. [00:08:37] And in this scenario, human beings understanding the risks of creating a super intelligent machine would take safety measures. [00:08:44] They would pre program it, for example, so that it would always work from principles that are under human control. [00:08:50] We would try to box it in with limitations. [00:08:53] We would try to be careful. [00:08:55] How do you see the possibility of that machine that we've tried to take these precautions with nonetheless, on its own, [00:09:02] becoming a super intelligent being, for lack of a better word? [00:09:07] Well, so I think we will keep trying to make machines smarter.
[00:09:12] And if we succeed in this, at some point they will become smarter than us. [00:09:17] I think at that point, once you have maybe even weak super intelligence, development is likely to be very fast for various reasons. [00:09:28] For a start, at this point, the technology would be extremely economically valuable. [00:09:33] So massive [00:09:34] investments would flow into running these AIs on even larger data centers or applying even more human ingenuity to improve them still further. [00:09:44] At some point, you might also get this feedback loop when the AI itself is able to contribute to its own further improvement. [00:09:51] So you might get a kind of intelligence explosion where you go from something maybe just slightly human level to something radically super intelligent within a relatively brief span of time. [00:10:05] And then the question becomes: would we be able to steer what such a super intelligent system would decide to do? [00:10:14] Like it would be very powerful for basically the same reasons that we humans are very powerful on this planet today compared to other animals. [00:10:23] The gorillas are much stronger than us, and cheetahs are much faster. [00:10:28] And yet, the fate of the gorillas depends a lot more on what we humans decide to do than on what the gorillas do. [00:10:36] If you have something that radically outstrips us in terms of its general intelligence, its ability to strategize, to develop new technologies, then it might well be that the future would be shaped by its preferences and its decisions. [00:10:50] And it might be non trivial for us to make sure that those are aligned with our human values, especially if we need to get it right on the first try. [00:11:00] Right. [00:11:01] You've been saying that for a while, saying if we're going to do this, we have to make absolutely sure that [00:11:06] they are aligned with our human values. [00:11:08] And there are all sorts of dangers in doing it anyway.
[00:11:11] I mean, who's going to determine what the values are? [00:11:14] And what if not everybody's on the same page? [00:11:16] And what if we do it, but it gets into the wrong hands and people misuse it and so on? [00:11:23] But let me just stick with the gorilla thing. [00:11:25] That's interesting. [00:11:25] So, I mean, because I've heard you use the example of the tiger too. [00:11:29] The reason the tiger gets in the cage and can be controlled by us is because we have superior intelligence to it. [00:11:35] So it may be more powerful, more lethal, but we're smarter. [00:11:39] And so we can trick [00:11:40] the tiger into the cage and keep it there. [00:11:42] And the same is true of the gorilla. [00:11:44] And so, in the scenario where we have a super intelligent machine, we're the gorilla. [00:11:49] Well, that would be one type of scenario or one type of risk that could arise from future advances in AI, that the AI itself somehow takes over or runs amok or is [00:12:02] poorly aligned. [00:12:03] I think there are also scenarios in which we maybe manage to tie it to our purposes, but then we do with it as we have done with practically every other general purpose technology in human history, that we've also used it for a lot of bad ends, to oppress each other, to wage war against each other. [00:12:25] And so that's another way in which advances today could turn out to be harmful, if they become a means of kind of amplifying human conflict, [00:12:34] or if they empower more people to develop other dangerous technologies, like maybe you could use AI to more rapidly invent new biological warfare agents or something like that, that then might proliferate. [00:12:48] So I think there are several distinct classes of dangers that one would have to be aware of as we move into this future. [00:12:58] Well, I know. [00:12:59] I mean, you think that if you're the creator of it, you can control it, right?
[00:13:03] You can program it such that it won't get smarter than you and it won't. [00:13:07] How could you? [00:13:08] I look at the computer on my desk. [00:13:09] How could it ever control me? [00:13:10] How is that? [00:13:11] It doesn't seem possible, because you're not talking about robots running around, you know, threatening us with knives and guns. [00:13:17] You're talking about this thing, this thing sitting on the desk, getting smart enough that somehow it's controlling humans. [00:13:23] And you think about that in the abstract and you think, how could that ever, that doesn't make any sense? [00:13:27] How could this thing sitting on my desktop ever control me? [00:13:30] Yeah. [00:13:30] I mean, presumably not the thing that actually sits on the desktop now, but I mean, you're right. [00:13:35] It's easy enough to not develop super intelligence for any one [00:13:39] individual or group, but I think it's likely that we as a civilization will nevertheless do it. [00:13:46] And I think actually, probably we should be doing it. [00:13:49] I see it kind of as this portal, in a sense, that all plausible paths to a really great future will eventually go through. [00:14:00] Now, it might be that it would be wise for us to go a little bit more slowly as we approach this gate so we don't kind of slam into the wall on the side. [00:14:07] We certainly should be very careful with this transition. [00:14:11] I think it's kind of unrealistic to think that everybody, all the different countries, all the different labs would decide to refrain from pushing forward with this when it has such enormous potential for positive applications in the economy, for medicine, for security, for arts and entertainment, for practically any area at all where human intelligence is useful, which is pretty much any area. [00:14:37] So I think our focus should be not so much should we do it or not, but how can we [00:14:42] position ourselves in the best possible ways.
[00:14:47] Do the research in advance, say, on how to find scalable methods for AI alignment, try as much as we can to build cooperative institutions and norms and practices around the deployment of AI, and then proceed cautiously. [00:15:05] But can you walk us through that scenario for people who don't? [00:15:07] I mean, this is a big concept for folks who don't work in your field. [00:15:11] How could it ever be that the machines would take over? [00:15:14] I mean, I know you've spoken about it. [00:15:16] Look, it could happen. [00:15:17] They could be controlling us. [00:15:18] They could control all the other computers and things. [00:15:21] Humanity could cease to exist. [00:15:22] And we need to be cognizant of this possibility. [00:15:26] But how? [00:15:28] Well, I mean, if we look at, say, how humans have caused a lot of mischief over the course of history, it's for the most part not because they used their own personal bodily strength to wield a sword and go around chopping people's heads off. [00:15:43] It's that they've used maybe their pen or their voice to issue commands, to persuade others, [00:15:48] and thereby to exert great influence. [00:15:51] So, those modes of action would be available even just to a laptop sitting on a desk. [00:15:59] If it could print text on a screen, I think that's already enough [00:16:03] for a sufficiently great intelligence to be very powerful. [00:16:07] But of course, there is no reason to think it would have to stop with these indirect methods. [00:16:13] You could maybe persuade humans to be your arms and legs, to do your work in some lab, to develop different robotic systems that you could use or hack into, or maybe develop some kind of nanotechnology that would then give you more direct actuator [00:16:34] access to the world. [00:16:35] I think there are many ways with a sufficient level of intelligence to kind of think above and around and through humans and achieve your ends.
[00:16:47] It's also likely that if we develop this, we would want to give them access to a lot of stuff because that would make it more useful, right? [00:16:54] If you could have an AI that drives your car, that's more useful than an AI that just sits and tells you how to drive the car. [00:17:02] If it could run your factories, if it could [00:17:04] pilot airplanes. [00:17:07] Maybe we will have a lot of robots by the time this transition happens so that there would be an even more ready made infrastructure for it to tap into. [00:17:15] Well, let's talk about the factories because I've heard you say that this super intelligent computer or these computers could create nano factories covertly distributed at undetectable concentrations in every square meter of the globe that would produce a worldwide flood of human killing devices on command and that AI would then achieve world domination. [00:17:39] What? [00:17:41] That doesn't sound good. [00:17:44] No. [00:17:44] And I think there is a kind of, I mean, it's kind of almost by definition impossible for us to know exactly what the best strategy would be that would come into view if you were a malicious superintelligence, because kind of by definition, it can think much more deeply in the strategic space than we can. [00:18:05] But I think what that particular scenario is meant to illustrate is the idea that [00:18:10] one of the things that a superintelligence certainly could do would be to invent new technologies that we can already see are physically possible, but that we haven't yet been able to actually manufacture and build because they involve a lot of detailed work to kind of figure out the specifics. === Managing AI Advancement (14:31) === [00:18:28] But if research were done on a digital time scale rather than on a kind of slow biological human time scale, then these futuristic technologies might become available quite quickly after you have superintelligence.
[00:18:43] And then using those futuristic technologies would possibly be one way to leverage its power and get a kind of advantage. [00:18:52] Not the only one, but I think that's one possible path. [00:18:56] Whether it would be specifically by developing nanorobots or whether there's some other technologies that we haven't yet thought of, I think we'll just have to be agnostic about. [00:19:05] It makes me think of the movie War Games, to go back to the 1980s when I grew up. [00:19:12] And in War Games, they created this computer that could [00:19:17] help with nuclear war and planning out the war games that the United States was going to be engaged in with, presumably, Russia. [00:19:24] And they couldn't stop it. [00:19:27] The computer sort of got a mind of its own. [00:19:29] It started, it was going to launch the missiles anyway, even when they had figured out how to turn off the computer. [00:19:36] But when they were trying to deal with it, first one guy says, why don't you just unplug the damn thing? [00:19:40] Right. [00:19:41] And that wasn't going to work. [00:19:42] And then even when they found a way of telling it to stand down, it wouldn't stand down. [00:19:48] It had a mind of its own and it kept going. [00:19:50] Just to bring it to sort of an example that a lot of people may have seen. [00:19:54] Is that basically what we're talking about? [00:19:55] That, as you've said before, it may not be possible to put the genie back in the bottle. [00:20:00] Once we create the thing, it's not going to be so simple to just [00:20:03] unplug it or tell it not to do the thing that we find awful? [00:20:08] Yeah, I mean, imagine if the apes that we evolved from were still around and they thought, well, maybe these humans were a bad idea. [00:20:17] It would kind of be hard for them to unwind what had happened.
[00:20:22] And similarly, if you have some super intelligence, once in existence, it might have strategic incentives to avoid us shutting it down. [00:20:32] And so if it's very skillful at achieving its goals, then in particular, it would be skillful at achieving this goal of preventing its own shutdown, or maybe it will make surreptitious copies in other computer systems so that it doesn't matter if sort of the original is terminated, or spawn other sub agents that can, you know, execute on its preferences. [00:20:56] So I think we shouldn't rely on that as the sole method of ensuring a safe future, where we build systems, don't bother to align them, just see what they do, and then plan that if things go wrong, we just unplug them. [00:21:09] We need a better strategy than that. [00:21:11] You know, when I was reading up on some of the things that you've written and said, it made me feel like here in America, we've had a couple hundred years of feeling pretty well protected from the world, thanks to these oceans that surround our country on the East and the West Coast. [00:21:26] And obviously, with nuclear weapons, that's less the case, but we've reached a detente with other nuclear powers that we understand it would be mutually assured destruction, and we don't launch those for the most part. [00:21:40] This threat doesn't recognize oceans, boundaries. [00:21:44] This makes everyone accessible. [00:21:46] If, as you've posited, the supercomputer, the superintelligence can create drones that come right to your doorstep and drop a bomb, or create an office robot that may be cleaning the carpets at night, but then assassinate the CEO when they turn around, like there's all sorts of nefarious ways in which it could be unleashed on people worlds away. [00:22:11] Yeah, I think with respect to super intelligence, like, yeah, I think we're all kind of eggs in the same basket.
[00:22:19] At least with respect to this class of dangers that arise from the AI itself doing something that is out of line with its creator's intention. [00:22:31] So, yeah, I think we have a common cause there to try to figure out how to align these systems. [00:22:38] And I mean, I'm reasonably hopeful about that. [00:22:41] When I was writing this book of mine, which I think [00:22:44] came out in 2014, [00:22:46] this was an almost entirely neglected field. [00:22:49] It looked like we were moving towards developing the most important technology ever. [00:22:55] And hardly anybody was thinking about what would happen if we were succeeding. The goal of AI [00:23:00] has all along been to not just do specific tasks, but to make machines generally smart like humans. [00:23:05] But it was like that was such a radical goal that the imagination exhausted itself in just conceiving of this possibility of matching humans, that it couldn't take the obvious next step: that if we reach that, we will have super intelligence. [00:23:18] And then thinking about the consequences. [00:23:19] So, yeah, drawing attention to that was a big part of the reason for writing this book. [00:23:26] But since then, there has now sprung up a kind of technical subfield of people doing serious research and actually trying to figure out how to align arbitrarily capable AI systems by harnessing their ability to learn, to make them better able to learn and understand what our intentions are when we ask them to do something, rather than just training them on specific tasks. [00:23:53] That's crazy that that was only seven years ago, that this wasn't even being discussed that seriously by those academics and so on who are now taking such a hard look at it. [00:24:03] Meanwhile, this is probably going to be an industry that employs many of our children, grandchildren, and so on. [00:24:09] Yeah.
[00:24:09] I mean, I think, certainly if there are advances in AI, it's going to have a big economic impact. [00:24:15] I mean, it might be that if you get super intelligence, then the effects on employment will be... [00:24:21] I mean, at some point, if you have sufficiently generally capable AI, basically all jobs become automatable. [00:24:31] So I think in a good scenario, in some sense, the goal is full unemployment, right? [00:24:38] So the idea is to try to develop technologies so powerful that we don't have to do stuff we don't like to do. [00:24:49] And so if you define work as the kind of things people have to pay you to do, then yeah, almost all of that could theoretically be done by a sufficiently capable AI system. [00:25:01] For some tasks, it sounds totally unfulfilling. [00:25:03] It sounds awful. [00:25:04] Well, I think it would be a situation where we would have to rethink a lot of our assumptions about what it means to be human, kind of from the ground up. [00:25:16] I actually believe that there would be some extremely wonderful possibilities that would be unlocked by this, but it would require a pretty, yeah, ground-up rethink. [00:25:31] We would, for example, have to find our dignity and meaning in life not in what we do for a living or in being a breadwinner, but in other areas, in relationships, in hobbies, in things we do for their own sake rather than as a means to some other end. [00:25:55] But yeah, I mean, that I think would be a kind of high quality problem to have for us. [00:26:00] I think first we need to make sure we don't kind of crash into something on the way there. [00:26:07] Well, and before we get to sort of the benefits, let's talk about the possibility of a terrorist getting a hold of this technology if we create it, or it creates itself from something we've created, or even an actor like China, which is very advanced in the AI field.
[00:26:26] And our defense secretary has made clear that this is an area in which we're equal. [00:26:32] At best, we're equal with China. [00:26:35] It's not like our military is so much more powerful than theirs. [00:26:38] It is. [00:26:38] But I'm just saying, in this department, [00:26:43] which is a potential security threat, they're on par with us and they're working it and they aim to be the world leader in AI. [00:26:48] And we don't trust China for good reason. [00:26:50] So we do need to be worried about what they're going to create, not to mention, as I say, somebody more nefarious like a terrorist actor. [00:26:57] So, what is the likelihood of that? [00:27:00] Well, I think at present, the West is ahead in AI, certainly in this kind of basic research of trying to [00:27:10] develop general artificial intelligence. [00:27:14] But it's not a huge lead. [00:27:16] It's not like 20 years ahead or something like that. [00:27:19] The field is very open. [00:27:21] Researchers publish their findings. [00:27:23] And so other teams can catch up within like six months or a year or so. [00:27:30] I'm not so worried really about terrorists using AI for particular things. [00:27:37] I would be more worried about terrorists using, say, biological weapons, which at the moment would be a lot more destructive and are also becoming much easier to use or obtain through advances in synthetic biology. [00:27:56] But it is plausible that AI will become one dimension of a great power competition [00:28:05] as it becomes an increasingly important economic factor and also a factor in national security. [00:28:16] Because I know you've said that the first super intelligence to be created will have a decisive first mover advantage, that there will be a lot of power in being the first one to come up with it. [00:28:28] And so, I mean, how worried should we be that somebody not all that friendly to the United States will be the person who has it?
[00:28:35] Yeah, well, I mean, it's possible that it would have this decisive first mover advantage. [00:28:40] I'm not at all sure about that. [00:28:42] You could also imagine scenarios in which the transition happens a bit more gradually. [00:28:48] If it's not an overnight or over-a-week thing where you get from human to radical superintelligence, but suppose it takes several years, then you could easily have multiple labs or countries more or less going through this [00:29:05] transition in tandem, and you might then have a multipolar outcome. [00:29:10] But yeah, I do think there's potential for exacerbating conflicts of different kinds, or for empowering, say, despots to make themselves more immune from overthrow through intelligence applications, surveillance applications, and so forth. [00:29:29] That is certainly one concern. [00:29:33] I think it will be important to try to manage that, both to kind of avoid conflicts on that front, but also because I think it might make the first danger, the danger coming from the AI itself, harder to avoid. [00:29:49] Like, if you're thinking about this, suppose you are some researchers, you've got to the point where you have something almost human level, you think with a bit more work, we can make it super intelligent. [00:30:01] Ideally, at this point, you would really want to go slow, right? [00:30:04] And really check everything, double check it, make sure it's all right, increment it [00:30:10] step by step, not just turning on the gas full throttle. [00:30:14] And maybe over several years, trying to do this while having a lot of people helping you and looking over what you're doing to make sure it's right. [00:30:22] But if you are in some kind of arms race, then that might be very hard to do. [00:30:28] It might basically mean that if you go slow, it just means you lose the race and become irrelevant. [00:30:32] So you feel forced to rush ahead as quickly as you can.
[00:30:36] And then you throw caution to the wind, and then this risk from the AI itself [00:30:42] creating kind of a destruction will increase. [00:30:45] So the two problems are connected. [00:30:47] Yeah, it's kind of like a Dr. Frankenstein situation. [00:30:50] It's a Frankenstein situation, right? [00:30:51] Where the entity you create becomes super dangerous and turns on you. [00:30:56] Even though we're so full of hubris, I think most humans would believe that they could continue controlling a machine. [00:31:04] Again, it's hard, I think, to conceptualize that the machine I control now will someday have the capability of controlling me. [00:31:12] But I know you've said right now the potential for this super intelligence, right now it's lying dormant, but it's akin to the power of the atom and how it lay dormant through much of our human history until 1945, in which case it was very much not dormant. [00:31:29] And we saw its power in really raw and disturbing ways. [00:31:35] Yeah, I think in general, we are kind of reaching into this giant urn of invention. [00:31:44] That's almost the picture of human history. [00:31:46] We've reached in, we've pulled out one ball after another, one idea, one technology. [00:31:50] And I think we've kind of been lucky so far, in that, for the most part, the net effect of all this technological progress has been hugely positive. [00:32:00] But if there is a black ball in this urn, some technology that is just such that it invariably destroys the civilization that invents it, [00:32:14] it looks like we're just going to keep reaching into this urn until we get the black ball if it's in there, right? [00:32:19] And while we have developed a great ability to pull balls out of the urn, we don't have an ability to put them back in. [00:32:26] We can't uninvent our inventions.
[00:32:29] So it looks like our strategy, such as it is, is basically just to hope that there is no black ball in the urn. [00:32:37] And I think not just AI, but some other technologies as well could be potential black balls. [00:32:45] I alluded to synthetic biology before, which is one area where we might discover means that would make it a lot easier to create kind of highly enhanced pathogens. [00:32:57] In some sense, we were lucky with nuclear weapons. === Ethical Implications of Tech (05:31) === [00:33:00] They are enormously destructive, but at least they are hard to make, right? [00:33:03] You need highly enriched uranium or plutonium to be able to make a nuclear bomb. [00:33:08] And that requires large facilities, huge amounts of energy. [00:33:11] Really, only states can do this. [00:33:13] But suppose it had turned out that there were an easier way to unleash the energy of the atom. [00:33:20] Before we actually did the relevant nuclear physics, how could we have known how it would turn out? [00:33:24] But if there had been some easy way, like baking sand in the microwave oven between two nickel plates or something like that, right, then that might have been the end of human civilization once we discovered how to do that, because then anybody [00:33:39] would be able to destroy a city. [00:33:42] And in a sufficiently large population, there's always going to be a few individuals who would choose to do that, right? [00:33:48] Whether because they're mad or they have some grudge or they have some extortion scheme or some ideology. [00:33:55] So we can't really afford a kind of democratization of the ability to cause mass destruction. [00:34:03] But if we discover some easily implementable recipe for this, then it looks like we are in a pretty dire situation. [00:34:13] Up next, how important is it going to be to have ethicists involved in creating these superintelligent beings, right?
[00:34:20] It's not just the people who can make the machines function, it's those who can build in some sort of an ethical code. [00:34:25] And is that even possible? [00:34:26] And then we're going to get into the future of humanity and how it could be helped by technology. [00:34:33] This is something that Nick has studied for a long time. [00:34:35] Could there be something like an anti-aging pill coming our way? [00:34:40] How far away is that? [00:34:42] And also, cryogenics. [00:34:43] Is he going to freeze himself, and why? [00:34:45] You should stay tuned. [00:34:50] I know you've said you could use this technology, or some bad actor could, or the superintelligent computer could, for ethnic cleansing. [00:34:58] I mean, imagine if Hitler had control over this type of technology where he could target some particular groups this way. [00:35:07] It would be efficient; it could be a killing machine. [00:35:09] And this is why we need [00:35:11] ethical people creating the technology, if it gets created at all. [00:35:14] And that leads me to something I heard recently, and that I know you've been saying all along, which is that if we're going down this route and we're going to create superintelligence, or something less than superintelligent computers, for the good that they can do, one of the most important roles we're going to have in this process will be ethicists, philosophers. [00:35:34] It's not all about kids now who are in robotics or kids who are somehow trying to study AI. [00:35:39] We're going to need people who consider and can even program for the ethics [00:35:47] of a well-meaning life, of a well-meaning existence. [00:35:51] Yes. [00:35:51] I mean, not necessarily people whose job title is ethicist at some university, but yeah, certainly ethics and other sources of wisdom about [00:36:03] what we want and what we should be wanting, I think, will be important.
[00:36:08] It's not just a purely technical problem, it's a kind of all-of-society problem: how to figure out how to create a happy world with this new technology. [00:36:16] I mean, it will have economic implications, right? [00:36:18] We alluded to the security implications before. [00:36:22] And then there are more cultural dimensions, like: what do we want the role of humans to be versus our technology and automation in the future? [00:36:34] I think it's like, yeah, it needs to draw on all the different aspects of human wisdom, such as it is. [00:36:42] It's not much to boast about, but we'll have to do our best, at least. [00:36:46] Yeah, I think it will require a much wider purview than just a narrow technical focus, although the technical focus also is really important for AI alignment. [00:36:54] I've read that many leading researchers in this field say it's extremely likely this will happen as early as 2075 to 2090. [00:37:05] That's in the lifetime of our children right now. [00:37:08] Do you agree with that? [00:37:09] Could it happen that soon? [00:37:12] Yeah, I think that's a real possibility. [00:37:15] That's hard to get your arms around, right? [00:37:17] Like my own children alive right now could be dealing with computers in the corners of the world that are trying to erase humanity. [00:37:26] Could we do that today? [00:37:27] Or save humanity, right? [00:37:29] I mean, that's the other possibility, or how possible is that? [00:37:32] Well, I'm not worried about that one. [00:37:34] That one sounds good. [00:37:36] I mean, that would be what people are aiming for almost universally. [00:37:44] Like, the AI researchers I've spoken to, they're quite well-meaning and idealistic people. [00:37:51] Some are just curious about what they're doing and think it's fun. [00:37:54] Some have a kind of vague general sense of wanting to do something good for the world. [00:37:58] So I think the intention is positive.
[00:38:04] And it's just the outcome that there is more uncertainty around. [00:38:08] Right. [00:38:08] Because the dark side is this could be more dangerous than any pandemic, than nukes, than catastrophic climate change. [00:38:18] This could be more than an asteroid. [00:38:21] This could be the thing we're not really paying that much attention to, to your example about what was going on in 2014, that really is an existential threat to humanity on the Earth. === Uncertainty in Good Intentions (14:34) === [00:38:31] And on that front, one of the things I wondered in reading about you, Nick, is whether you worry about anything other than this. [00:38:38] I mean, if this were my world, that I were immersed in full time thinking about it, [00:38:43] I don't know that I'd worry about anything else. [00:38:46] Would I worry about the crime rate? [00:38:47] Would I worry about the erosions of free speech we're seeing, government power growing too large? [00:38:52] Some of it relates, but I just wonder: [00:38:56] do you walk through the day worried about nothing other than this? [00:39:00] No, I mean, I think it's useful maybe to have some division of labor here. [00:39:06] And also, I think it might, for somebody like me, be worth [00:39:11] trying to focus more effort on relatively neglected areas where one extra person like me might make a bigger difference. [00:39:21] I mean, global warming, so many thousands of people have been worrying about this, and I think it's a smaller concern, but it's not the only one. [00:39:28] I mean, I think certain advances in biotechnology are quite concerning as well. [00:39:33] Um, that might just make it too easy to create really dangerous stuff. [00:39:40] Um, I mean, to make it concrete, so we have DNA synthesis machines that can print out [00:39:46] DNA strings, if you have a digital blueprint.
[00:39:52] We also have in the public domain the DNA sequence of a lot of really dangerous viruses, and ideas for how to make them even more dangerous. [00:40:04] As this synthesis technology becomes good enough to print out whole viral genomes, then, I mean, you just connect the dots. [00:40:14] That would give anybody with access to this DNA synthesis technology the ability to create things far worse than COVID. [00:40:22] And at the moment, like anybody can buy one of these DNA synthesis machines. [00:40:25] And if that continues to be the model as they improve in capacity, then we soon get to an unsustainable situation. [00:40:33] So there needs to be some kind of global regulatory framework, [00:40:38] I think, imposed on the DNA synthesis market, where DNA synthesis is provided as a service. [00:40:44] Like if you're a researcher, if you need a particular string, there's maybe five or six companies in the world. [00:40:50] You send your request, you get your product back in a vial. [00:40:53] You don't need to have the machine yourself in a lab. [00:40:55] Or if you do, there would have to be some sort of controls. [00:40:58] That's just one example, but it's a real cauldron. [00:41:01] People are inventing so much cool new stuff in biotech all the time that new ways of creating mostly good stuff will come into view, but there could also be [00:41:12] some bad stuff in there. [00:41:13] And it's a kind of Wild West at the moment. [00:41:15] The ethos is very much open science. [00:41:17] Like, yeah, let's encourage biohackers. [00:41:20] Let's just make it available to all because that's nice. [00:41:23] You know, everybody has equal access to it. [00:41:25] But with some technologies, like with, say, nuclear weapons, we don't think that's the right way. [00:41:29] And I think biotechnology will have to similarly change to something like that. [00:41:36] Yeah. [00:41:36] Yes. [00:41:36] Yeah.
[00:41:38] I mean, I think I can speak on behalf of my audience and say that we're glad that people like you, and I know Stephen Hawking and many others have joined you, have [00:41:44] sort of sketched out some priorities for making this field safer and imposing some guidelines so that we don't just jump into this willy-nilly. [00:41:54] Glad you're out there. [00:41:55] It reminds me of when my husband used to run an internet security firm and he used to speak of the white hat hackers and the black hat hackers. [00:42:03] And, you know, the Russians have black hat hackers who will try to get into your bank accounts and your private information and so on. [00:42:10] And he was in a firm where, he would say, he employed the white hat [00:42:16] hackers who understood how to do all that stuff, but would try to stay one step ahead of the bad guys to protect you. [00:42:21] And it seems like this is a field where everyone needs to have the same devious skills, but we have to make sure we have more people employing them for good than for bad. [00:42:30] Let me switch topics with you for one second, because I do know that you, is it transhumanism that you've described yourself as being into? [00:42:40] I think your proselytizing about the medical field and what could happen for humans during our lifetime to make life [00:42:48] better, to improve life when it comes to these scientific advances, made me feel hopeful. [00:42:52] You know, a potential anti-aging pill, the potential to bring one back, thanks to cryogenics, after one dies. [00:43:00] Can you just talk about that a bit? [00:43:03] Yeah. [00:43:03] So, I mean, I sometimes think I'm mistaken for some kind of technophobe a lot because I spend a lot of time describing some specific concerns or dangers. [00:43:14] But, I mean, broadly speaking, I'm very excited about the potential to improve [00:43:20] the human condition through advances in technology, including by enhancing the human organism itself in different ways. [00:43:29] Like, I mean, I'd love to see some kind of anti-aging pill that really worked, or something that could make us smarter or improve our quality of life in other ways. [00:43:41] And so sometimes that's referred to as transhumanism. [00:43:45] I don't tend to use the word very much because a lot of people use it for a lot of different [00:43:51] ideas, which are kind of kooky, yeah, and it attracts some kind of miscellaneous funk that I wouldn't necessarily agree with on all issues. [00:44:00] So, um, but yeah, [00:44:05] I think there's a lot of room for improvement. [00:44:09] Do you think academics are with you on that desire? [00:44:13] Because I've heard you say something like there are a lot of people in the field who are like, what about overpopulation? [00:44:18] And why would we want to extend our time here on Earth significantly? [00:44:25] The boredom of living longer is reason enough not to do that. [00:44:27] So do you think we have an academic field that is devoted to developing things like that? [00:44:33] Attitudes have been shifting slowly but steadily over the last 20 years or so. [00:44:40] I think there certainly used to be an extremely strong double standard. [00:44:46] If we take something like slowing aging, doing something about aging was a big no-go, because people couldn't see why you would possibly want to do that. [00:44:54] But then at the same time, there were billions in funding to try to fix cancer, to fix heart disease, to fix diabetes, to fix wrinkles. [00:45:04] All these things, right? [00:45:05] But then, like, the sum total of all that is aging. [00:45:07] Like, aging is what makes you more vulnerable to cancer, to heart disease, to diabetes, to wrinkles.
[00:45:13] And so, for exactly the same reasons that you might not like these diseases and symptoms, for the same reason, you might not like the underlying thing that is creating a huge fraction of all of that. [00:45:24] So, it's not as if there was some weird special reason you would have to have for favoring life extension or slowing aging. [00:45:31] It's just exactly the same reasons that apply in all these other cases. [00:45:34] But for some reason, there was this mental block that a lot of people had. I think actually it was because the anti-aging thing seemed unrealistic and fanciful. [00:45:44] So it was not evaluated by the same standards by which a pill that is slightly better for treating some cancer would be evaluated. [00:45:51] It was more evaluated by reference to some traditions or some kind of spiritual wisdom where you're supposed to accept the fact that you're going to die, and that's a sign of wisdom, and you should come to terms with it rather than fight against it. [00:46:09] So the issue was kind of placed in that mental bucket rather than treated as something we can actually work on and make work. [00:46:16] Yeah, but you raise a good point. [00:46:17] If that's your mindset, then why get chemotherapy? [00:46:21] Why not eat as many trans fats as you want, right? [00:46:24] Like we all take steps to prolong our lives, even when we get bad diagnoses. [00:46:29] Yeah. [00:46:29] Even though we may be people of faith and understand how this is going to end eventually. [00:46:33] Yeah. [00:46:34] Yeah. [00:46:34] I mean, to take care of your body seems good. [00:46:36] I mean, it's not to say that extending life further is good in all circumstances. [00:46:41] I mean, if you kind of have a life that's not worth living, you're on some respirator and you're just kind of kept alive under horrible conditions like that, [00:46:50] I'm not sure it's a great boon if that can be, like, two years of that rather than one year.
[00:46:55] But if you have a good quality of life, if you're healthy, you enjoy life, maybe you can contribute to society in some way, then that seems like something that is definitely worth preserving. [00:47:05] Or just if we can improve our senior years. [00:47:07] I mean, we've all watched our parents or grandparents deteriorate mentally. [00:47:10] Some would argue we're watching it right now with our president. [00:47:13] Okay, that was an aside. [00:47:15] But we've all watched that and thought to ourselves, oh, I don't want that for them. [00:47:19] And if you could create a situation where we could live to our 80s or 90s, forget beyond, beyond would be delightful, but even that old, but sharp with mental acuity, why wouldn't we want that? [00:47:31] It'd be great. [00:47:31] But can I ask you, yeah, yeah, go ahead. [00:47:34] Yeah, I think if and when it actually is possible, like if there is actually a pill on the market at some point that does this, then I think a lot of the people who were previously expressing skepticism would kind of quickly come around. [00:47:49] How far away from that are we? [00:47:52] Well, I mean, from curing aging altogether, I mean, I think it seems quite hard to do that. [00:48:02] I mean, maybe superintelligence would [00:48:04] expedite that. [00:48:05] I think actually a lot of these things would happen soon after superintelligence. [00:48:08] But other than that, I mean, we've been kind of on the verge of curing cancer now for the last 50 years, right? [00:48:14] So we've made small incremental progress. [00:48:17] So the superintelligence is going to make us live forever and then kill us all off. [00:48:21] Something to look forward to. [00:48:22] One or the other. [00:48:23] Yeah, one of those. [00:48:25] Right. [00:48:25] They may have buyer's regret. [00:48:27] Can we talk about cryogenics for a minute?
[00:48:29] Because this is something, you know, I think most of us grew up thinking, as I'm told now wrongly, Walt Disney had himself frozen so that he could be brought back to life. [00:48:38] But cryogenics is a real thing, and I know you're in favor of it. [00:48:42] Can you talk about it for a minute? [00:48:43] Well, so it's the idea that we know that at sufficiently cold temperatures, basically all physiological processes stop. [00:48:53] So, I mean, you know, things last longer if you put them in your freezer, right? [00:48:57] But if you put them in even colder temperatures, like in liquid nitrogen, then they can last for hundreds of years with basically no change. [00:49:08] So, the idea is if somebody dies today and you freeze them, then you preserve whatever is there. [00:49:17] And of course, if you thaw them out, they are still dead because, well, A, of the original thing that killed them, and B, the actual freezing process creates additional damage to their tissue. [00:49:27] But if you think that technology will continue to improve, then maybe at some point in the future, the technology will exist to reverse whatever originally caused their death and to cure frostbite, the kind of damage that happens during freezing. [00:49:42] If you just preserve somebody long enough there, then there's some hope that the technology will one day exist to bring them back to life. [00:49:48] Unless you're sure that that technology will never be developed, it seems like the conservative thing to do would be to put them in liquid nitrogen. [00:49:57] And if it doesn't work, well, they'd be dead anyway. [00:50:02] So there's no downside; you're hedging your bets. [00:50:05] Yeah. [00:50:05] Well, would you want to come back? [00:50:07] I mean, one of the fears I would have is what if they wake me up in the year 4000? [00:50:13] And it's terrifying. [00:50:14] It's like a caveman walking into 2021 and nothing is familiar. [00:50:19] No one he loves is around anymore.
[00:50:23] It seems like a nightmare in some ways. [00:50:26] Yeah. [00:50:27] I mean, it's obviously very hard to know what the future would be like that you would be brought back into. [00:50:34] And so I think, yeah, that probably mostly comes down to whether somebody's like more of an optimistic person or a pessimistic person. [00:50:44] I guess that's true. [00:50:46] And what about what is the likelihood that the people in the future who may have these prolonged lifespans are going to want to come back and get people like you and me? [00:50:54] Well, forget me, but you. [00:50:55] I mean, you they should want. [00:50:57] But seriously, given overpopulation concerns and the limited size of the earth, [00:51:02] why would they want to bring people back? [00:51:04] Well, I mean, it would be very cheap for them, given the resources in the future. [00:51:09] This would be like a drop in the bucket. [00:51:12] So if they have sentimental reasons or ethical reasons for doing this, I mean, I think certainly, [00:51:20] if there were somebody who lived a thousand years ago and I could, you know, with a snap of a finger, bring them back, I think that would be a very nice thing to do. [00:51:30] And so. [00:51:31] Yeah, like what if it were Abe Lincoln? [00:51:32] You know, what if it were this historical figure who would be a source of. [00:51:35] I mean, or even if it were just some bumpkin, right? [00:51:37] I mean, like you could save a life, even if it were a life that started a long time ago and had been sort of in suspension. [00:51:45] That would be nice. [00:51:47] Do you think, do you have reason to believe that in that scenario, the brain would still have the information, [00:51:54] you know, upon revival, that it had on the way out? That, you know, the brain retains information? [00:52:01] I think so. [00:52:01] I mean, it depends, [00:52:03] obviously, somewhat on how you die.
[00:52:05] So, if you die in a fire or you're lost at sea, right, then it's kind of gone. [00:52:10] But in a reasonably good scenario, I think it's likely that the information is basically preserved. [00:52:17] It's quite hard to destroy information. [00:52:20] So, think if you have a book, right, okay, and you tear up all the pages into small pieces. [00:52:25] So now you can't read the book anymore, but the information is still there. [00:52:29] Like in theory, you can put the pieces together again if you have enough patience. [00:52:34] And I think it's similar with the brain, like the freezing damage will kind of, there are ice crystals forming. [00:52:40] Actually, they are trying to use various antifreeze agents, and they are not literally freezing it, they are vitrifying it, but setting aside these technical details, [00:52:50] I think there is like some kind of shoving around of different pieces that happens in this process, but the information is still there, most likely. [00:52:58] And with sufficiently advanced technology, it should be possible to put it back together, I think. === Careers Amidst Automation (03:47) === [00:53:06] My guess is if it happens at all, it will happen after superintelligence, as a consequence of superintelligence. [00:53:12] Wow. [00:53:17] The last sort of question I have for you is a practical one, which is: given that, if there is superintelligence, we may be looking at total unemployment, what do you see as an important area for kids today, for young people today, kids in college to be looking at, or even younger? [00:53:36] You know, I have an almost-12-, a 10-, and an eight-year-old. [00:53:40] It used to be definitely robotics. [00:53:43] And we talk about that a lot in schools today. [00:53:45] But, like, where do we steer our kids if they want to stay on the cutting edge of technology and future jobs?
[00:53:52] And, you know, I know that one of the fields that artificial intelligence may take over, and is actually doing a pretty good job in right now, is radiology. [00:54:01] They can read X-rays in certain settings. [00:54:03] So, I don't know. [00:54:04] Maybe you don't want your kid to be a radiologist, but what is your thinking about sort of the wave of the future and the likely good industries to be in? [00:54:12] Yeah, I mean, so for kids, I mean, I think it depends a lot on the kid. [00:54:15] Like, you want to build on their unique strengths, and not everybody has to be a computer programmer, right? [00:54:21] That's a small part of the economy. [00:54:27] I think that, I mean, already today, we are at an unprecedented time of wealth and prosperity relative to any other time in all of human history. [00:54:39] And so, in addition to having a focus on finding a profitable career for your kid, [00:54:48] I think also equipping them to actually enjoy life and to do something meaningful in their life would be worthwhile, because if not now, then when? [00:54:58] I mean, I guess after the singularity, maybe. [00:55:03] But so I think that would be important. [00:55:08] In terms of areas, I think computer stuff will continue to be important, but so will many, many other [00:55:16] areas in the economy. [00:55:18] I think maybe inculcating certain habits, like a habit of continual learning, like flexibility, to be able and willing sometimes, and to feel empowered when there's a field you don't know anything about, some skill you don't yet have, to say: [00:55:35] well, I could try to learn it. [00:55:36] Like there's so much information online or training courses, or it's easier than ever to get access to new areas. [00:55:43] So giving them that sense of kind of [00:55:45] personal agency, that I can take responsibility for what I want to do and I can figure out how to learn it. [00:55:50] Or if I can't figure that out, I can figure out who to ask.
[00:55:55] I think that would be useful across a very wide range of different scenarios. [00:55:59] They're going to have to stay nimble in the world that's coming their way. [00:56:02] That's for sure. [00:56:03] Thank you so much for your expertise. [00:56:05] It's been an absolute pleasure. [00:56:06] Oh, no, I enjoyed it. [00:56:08] All right, up next, Andrew Ng. [00:56:10] This guy was super high up, both at Google and at China's Google. [00:56:15] It's called Baidu. [00:56:16] He was their chief scientist and has led huge teams when it comes to developing the AI of both groups. [00:56:22] What does he think about all of this? [00:56:24] And is your computer already spying on you? [00:56:28] Is your government spying on you? [00:56:30] This is a guy who's been at some very well-known, big, leading corporations when it comes to artificial intelligence and data amassing. [00:56:39] What does he have to say? [00:56:40] You're going to love this guy. [00:56:40] He's next. [00:56:41] Before we get to him, however, I want to bring you a feature we have here on the MK show called From the Archives. [00:56:47] This is where we bring you a bit of audio from our growing library of content, now nearing 150 episodes. === Democracy and Digital Debate (03:39) === [00:56:53] Hard to believe. [00:56:54] Today, we're going to go back to our 69th episode and one of our most popular, with Tulsi Gabbard, a veteran, former congresswoman who shared with us some stories of her time in the military, in Washington, D.C., [00:57:06] and in the media wringer. [00:57:07] Here's just a bit on that, on the way she was covered during her 2020 presidential run against now President Biden and VP Kamala Harris. [00:57:16] Take a listen. [00:57:16] The ones who were writing about you, of course, this is the mainstream press's left wing, were writing bad things. [00:57:22] And the ones who control the airwaves weren't giving you any airtime. [00:57:27] That's exactly right.
[00:57:28] And that's where the evidence of this kind of facade of a democracy comes to the forefront, because you really have [00:57:39] these corporate media interests who most care about ratings and entertainment and how they can create conflict, you know, on a debate stage, or push a narrative that they think will get more eyeballs to their screens. [00:57:57] And I put social media in this category as well, combined with a party that was pre-selecting who they wanted voters [00:58:10] to hear from. [00:58:11] And so that's where you saw a lot of, hey, you know, they're changing the standards for the debates as they go along. [00:58:17] You know, just as, you know, hey, okay, we're ticking up a little bit in the polls to where we think we're going to qualify for another debate. [00:58:23] Oh, sorry, rules changed, you know, the day before or right when, you know, those new polls were coming out, and just other things. [00:58:32] You know, the Democratic, the DNC saying, hey, you know, all presidential candidates, if you want to be featured in any of our publicity that we're putting out, then you've got to fork over, I think it was something like $175,000, to the DNC just to be included in their social media videos or whatever. [00:58:51] And I'm just like, no, I'm not going to do that. [00:58:54] I've got people across the country who are giving $5, $10, contributing to my campaign because they believe in the kind of leadership that I'll bring and the message and the truth that I'm sharing with voters. [00:59:06] And they're certainly not giving me a whole bunch of money to go and then pass it on [00:59:12] to the DNC. [00:59:14] And so ultimately, that's where we saw time and time again, even small things, it's not that small, but things that went unnoticed. [00:59:23] For example, CNN had a bunch of town halls where they featured different candidates. [00:59:30] They only gave me one. [00:59:32] Most of the other candidates had more than one.
[00:59:34] And someone called me one day and said, hey, I'm going through my CNN, it's not DVR, but if you go to CNN's, I guess, digital library, they had it so you could replay the town halls of all the different candidates. [00:59:49] They're like, you're not on here. [00:59:51] Like, it's just not, it doesn't exist. [00:59:54] There's no option to find your town hall, but I can find every single other Democrat who ran for president on here. [00:59:59] And so there were things like that, and more [01:00:04] blatant things, that made it very clear that if the media makes a decision not to allow voters to hear from you, then, um, A, voters really don't have the ability to make an informed decision in a true democracy, and then, B, the reality is that if you want to talk about issues, if you want to get information to people so they can make this informed decision, then clearly running for office is not the way to do it. === Automated Work and Safety (15:29) === [01:00:32] Gabbard has just struck a new deal with Rumble, the video social network YouTube competitor, so I think we're about to hear a lot more from her in the weeks to come, and good. [01:00:42] And we, in the meantime, will keep bringing you more of our best episodes from the archives. [01:00:47] Up next, Andrew Ng. [01:00:50] You'll love him. [01:00:56] Thank you for being here. [01:00:57] I'm excited for this conversation. [01:00:58] We just wrapped up with Nick Bostrom, who wasn't totally anti-AI, right? [01:01:05] He's pro-AI, but has some concerns about, I think, what you call artificial general intelligence, AGI, the long-term game where you develop a machine that develops superintelligence. [01:01:19] So let's just start there. [01:01:20] What's your take on the likelihood that we will develop superintelligent machines in this century? [01:01:28] Nick Bostrom is an interesting character.
[01:01:30] AI is the new electricity; it's transforming tons of industries, revolutionizing the way we do things in the United States and around the world. [01:01:37] As for artificial general intelligence, I think we'll get there, but whether it'll take 50 or 500 or 2000 years to make computers as intelligent as you or me or other people, [01:01:48] I think that's a really long-term open research project. [01:01:51] That's the exciting thing. [01:01:52] Okay, I like 2000. [01:01:53] 2000 makes me feel better than by the end of this century, when my kids are still, God willing, alive. [01:01:59] You know, I think that one of the problems is that the whole field of AI is confusing in this way. [01:02:07] There's one type of AI called AGI, artificial general intelligence, the ability to do anything a human could do, maybe someday. [01:02:13] And artificial narrow intelligence, which is AI that does one thing really, really well and is really valuable. [01:02:19] Turns out over the last 10, 20 years, we've had tons of progress in artificial narrow intelligence, those AIs that do one thing really, really well. [01:02:26] So people say, accurately, there's tons of progress in AI. [01:02:30] I agree with that. [01:02:31] But just because there's tons of progress in AI doesn't mean, from where I'm sitting, I'm candidly not seeing that much progress toward artificial general intelligence. [01:02:39] So I think that's led to some of the unnecessary hype and fear mongering, candidly, about AI. [01:02:45] That makes me feel better. [01:02:46] I'm feeling better already. [01:02:47] Now, you know a thing or two about narrow, artificial narrow intelligence, just so the audience understands. [01:02:53] You've led teams at Google, and is it pronounced Baidu? [01:02:56] Forgive me for not knowing. [01:02:57] Oh, yes. [01:02:58] I started and led the Google Brain team. [01:03:00] I also ran AI for Baidu, which was a large web search engine company in China.
[01:03:04] Because China doesn't use Google. [01:03:06] So, this is China's Google. [01:03:08] China's leading web search engine is Baidu. [01:03:11] And then I'm also really proud of the work that I did leading the Google Brain team, which is a team that helped a lot of Google embrace modern AI. [01:03:18] If you use Google, you're probably using technology that my former team wrote. [01:03:22] I can almost say that. [01:03:23] That's amazing. [01:03:24] So, now what are some of the fun things that you and your team have introduced into my life that I don't even know I should be thanking you for? [01:03:31] Don't thank me. [01:03:32] Thank the many millions, well, thousands of people around the world building these technologies. [01:03:37] I think that all of us use AI dozens of times a day, maybe even more, perhaps without even knowing it. [01:03:43] Thanks to modern AI, when you do an internet search, you get much more relevant results. [01:03:48] Or every time you check your email, there's a spam filter in there, [01:03:51] kind of saving us from massive amounts of spam. [01:03:54] That's AI. [01:03:55] Every time you use a credit card, it's probably an AI trying to figure out if it is you or if someone stole the credit card and we should not let that transaction through. [01:04:03] So all of us probably use AI algorithms many, many dozens of times a day, maybe without even knowing it. [01:04:10] And what about the self driving car? [01:04:11] Because, you know, that makes the news every so often. [01:04:14] And it's interesting to me. [01:04:16] It's scary to me, because you also hear some reports of crashes, and you understand that, okay, the technology is not exactly where they want it to be yet. [01:04:22] But what do you see when it comes to self driving cars? [01:04:25] I think that many people, including me, collectively underestimated how difficult it will be to get to, you know, true fully autonomous self driving cars that could drive the way
[01:04:38] That the person can. [01:04:40] I think we will get there, but it's been a longer road than any of us estimated. [01:04:44] When I drive these cars, I'm happy for the driver assistance technology. [01:04:48] I personally don't really fully trust them yet. [01:04:50] So I keep an eye on the road when I'm driving and one of these technologies is supposedly doing something. [01:04:55] Well, yeah. [01:04:56] So here's a dumb question. [01:04:57] I understand why somebody, if we perfect the technology, somebody like my mom, who's 80 and really not all that well physically, mentally, she's great, but physically, I could see why a self driving car would work well for her. [01:05:09] It's like you're a built in chauffeur. [01:05:11] But why do young, able bodied people need that? [01:05:14] Why is it an improvement for people our age? [01:05:18] I think that it depends a lot on the individual. [01:05:21] I sometimes find it fun to drive. [01:05:23] You know, if I don't know, take my daughter out on the road, drive around, that's fun. [01:05:27] But sometimes if I'm driving to work in traffic, it's like, boy, I wish someone else could do the driving for me. [01:05:33] And if a computer could do that, so I could maybe even sit in the backseat and, you know, play with my daughter, I would rather do that than be stuck in traffic. [01:05:40] So I think it depends a lot on the individual. [01:05:43] It's funny because I asked this having just yesterday. [01:05:46] I had to go to the city. [01:05:47] I'm in New Jersey for the summer. [01:05:48] I had to go to the city. [01:05:49] It's a couple of hours. [01:05:51] And I had the choice of driving myself, or sometimes we use a driver. [01:05:55] And I said, you know what? [01:05:56] I'm going to use a driver because I had a bunch of interviews to do today. [01:05:59] And I said, I want to read all my stuff. [01:06:01] And so it's a dumb question, right? 
[01:06:03] It's basically, you can read everything if you have a self driving car. [01:06:06] It's going to make your life really convenient, if it doesn't kill you or all the people around you. [01:06:11] And again, I think, I know you have kids. [01:06:15] My kids are really young. [01:06:16] Part of me worries, when they grow up, will they ever get in a car accident? [01:06:20] So, when my daughter grows up, if there's a computer that can drive her safer than if I were to drive or she were to drive herself, I think it'll make all of us better off. [01:06:30] How far away are we from that? [01:06:33] The AI world, we keep on making a lot of predictions, and sometimes we're not very good at predicting the timeline on which this will happen. [01:06:42] I think that self driving cars are kind of getting there in limited [01:06:46] environments. [01:06:47] So I'm seeing exciting progress. [01:06:49] For example, if you're driving around in a constrained environment of a port, shipping stuff, or in a mine, or sometimes on a farm, that's actually kind of getting there. [01:06:58] If we're willing to rejigger some of the cities, I think we'll be there pretty soon. [01:07:03] I don't know. [01:07:04] I think it'll still be quite a few years. [01:07:06] It'll still be many, many, many years before we can drive in downtown New York or downtown New Jersey. [01:07:14] Yeah, I understand that they're not as good at picking up [01:07:17] things like the hand signals that a construction worker might be issuing to you. [01:07:22] They don't totally understand those things yet. [01:07:24] So they're not quite where they need to be. [01:07:28] Okay. [01:07:29] So let's talk about other ways in which AI is going to be helping our lives and how you see it. [01:07:35] Because one of the things that Nick said that concerned me was we're probably headed toward total unemployment, total, eventually in the distant future, once the machines become as smart as we think they're likely to get.
[01:07:46] And that concerns me. [01:07:47] You know, I don't know what life looks like if nobody works for a living. [01:07:50] And if the machines are in control of everything. [01:07:52] So, what does the journey from here to there look like in terms of technological advances? [01:07:58] You know, I think that total unemployment, I'm actually skeptical it'll [01:08:04] ever happen, or if it does happen, it may be, I don't know how many thousands of years away. [01:08:09] It turns out, let's demystify AI. [01:08:11] What can we make AI do? [01:08:13] What can we not? [01:08:13] It turns out, to get a little bit geeky and technical, almost all of AI today is about input to output mappings, such as input an email, output, is it spam or not? [01:08:24] Or input a picture of what's in front of your car and output the position of the other cars. [01:08:29] Or input an x ray image and output a diagnosis, does this person have pneumonia or not, or some other condition? [01:08:36] So that's sort of the one idea, input to output, that is creating 99% of the economic value of today's AI systems. [01:08:45] Turns out this is a ton of economic value. [01:08:48] The large ad platforms have an AI that inputs an ad and some information about a user and outputs, are you going to click on this ad or not? [01:08:55] Because if you can get people to click on more ads, there's direct impact on the bottom line of the large ad platforms. [01:09:01] So it's creating tons of economic value. [01:09:04] But frankly, this input output thing, if we think about how much more people [01:09:08] do, there's just so much more people could do. [01:09:11] I don't think anyone in the world has a realistic roadmap for getting to AGI. [01:09:16] So I think sometimes that concept has been overhyped, fear mongered. [01:09:22] I do worry about unemployment with every wave of technology, looking back, industrial revolution, invention of electricity.
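The input-to-output framing Ng lays out (email in, spam-or-not out) can be sketched in a few lines. This keyword-weight scorer is a made-up toy, not anything from Ng's actual systems or a real spam filter, but it shows the shape of the mapping:

```python
import re

# Toy illustration of AI as an input -> output mapping:
# input: an email's text; output: "spam" or "not spam".
# The keywords and weights below are invented for this example.
SPAM_WEIGHTS = {"winner": 2.0, "free": 1.5, "prize": 2.0, "click": 1.0}

def classify_email(text: str, threshold: float = 2.0) -> str:
    """Map an input email to an output label by summing keyword weights."""
    words = re.findall(r"[a-z]+", text.lower())
    score = sum(SPAM_WEIGHTS.get(w, 0.0) for w in words)
    return "spam" if score >= threshold else "not spam"

print(classify_email("You are a winner, click for your free prize"))  # spam
print(classify_email("Meeting moved to 3pm tomorrow"))                # not spam
```

The difference in a real system is that the weights are learned from millions of labeled examples rather than hard-coded, but the contract is the same: one input in, one narrow output out.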
[01:09:29] I mean, well, all the people working on steam engines, they unfortunately, really sadly, lost their jobs. [01:09:34] Or we used to have human operated elevators, right? [01:09:39] There was someone standing in the elevator [01:09:40] that would go up and down. [01:09:42] When someone invented automatic elevators, those jobs went away. [01:09:45] So I worry about that for AI. [01:09:47] It'll create some amount of disruption and affect work, but complete total unemployment, this input output mapping, [01:09:55] I don't see that piece of software replacing you or me anytime soon. [01:10:01] Can you talk about the radiology thing? [01:10:03] Because I read about the work being done at Stanford with AI and radiology, but the conditions have to be just so. [01:10:10] Can you just talk about that? [01:10:11] Sure. [01:10:12] So I think that I'm excited about AI and its potential to improve healthcare. [01:10:17] But actually, some of my friends and I worked on AI that can input a picture of an X ray and output what's the appropriate diagnosis. [01:10:26] And it turns out we were able to show in the lab that we could diagnose or recognize many conditions as accurately as a board certified, highly trained radiologist. [01:10:37] But it turns out that it worked great if we were to train on data we collected from our research from Stanford Hospital [01:10:45] and then see if the system worked well on data from the same hospital, from the same set of X ray machines. [01:10:51] It turns out, if you take that AI system and walk it down to a different hospital down the street, with maybe an older X ray machine, maybe the technician has a slightly different way of imaging the patient, the performance gets much worse. [01:11:03] Whereas any human doctor can walk down the street and diagnose at this other hospital, you know, kind of roughly equally well.
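The effect Ng describes, train at one hospital and watch performance drop at the next, is what machine learning researchers call distribution shift. Here's a minimal synthetic sketch (all numbers invented, nothing from the actual Stanford study): a fixed brightness cutoff that works well on one "machine" fails when a second machine images a bit darker.

```python
import random

random.seed(0)  # deterministic toy data

def make_scans(n, offset):
    """Synthetic 'X-ray brightness' readings: sick scans read darker on average.
    `offset` models a different machine's calibration."""
    scans = []
    for _ in range(n):
        sick = random.random() < 0.5
        brightness = random.gauss(40.0 if sick else 60.0, 5.0) + offset
        scans.append((brightness, sick))
    return scans

def accuracy(scans, cutoff=50.0):
    """Fixed rule 'trained' at hospital A: below the cutoff means sick."""
    correct = sum((b < cutoff) == sick for b, sick in scans)
    return correct / len(scans)

acc_a = accuracy(make_scans(1000, offset=0.0))    # same machines as training
acc_b = accuracy(make_scans(1000, offset=-12.0))  # older machine, darker images
print(f"hospital A accuracy: {acc_a:.0%}, hospital B accuracy: {acc_b:.0%}")
```

Hospital A scores in the high nineties while hospital B drops sharply, even though nothing about the disease changed, only the imaging. A human reader adapts to the darker images; the fixed rule does not.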
[01:11:11] So I think that one of the challenges of AI is we have a lot of prototypes in the lab [01:11:17] that you do read about in the news. [01:11:19] You see, oh, AI does this, out-diagnoses the human radiologist or something. [01:11:22] You may read about it in the news. [01:11:23] But it turns out that we collectively in the AI field, we still have a lot of work to do to take those lab prototypes and put them into production in a hospital setting. [01:11:33] It will happen. [01:11:34] It's just that this will be some additional years of work before some of the things that have been promised come to fruition. [01:11:40] Well, the medical field is so ripe for help from this kind of technology. [01:11:47] I can think of a million ways in which it could [01:11:48] change lives and save lives, but it's really every industry. [01:11:51] I know you've been making the point, it's every industry that's going to be touched by this eventually. [01:11:55] But before we move off the medical field, may I just ask you about a report in the Wall Street Journal that got my attention? [01:12:01] Okay, among other things, they're talking about [01:12:04] what we should expect in the next few years: [01:12:06] toilets that screen for disease. [01:12:10] It says researchers at Stanford have developed a prototype toilet that uses an artificial intelligence trained camera to track the form of feces and monitor the color and flow of urine. [01:12:20] Why is this necessary? [01:12:21] Because it could potentially analyze micro stool samples to detect viruses like COVID 19, and blood. [01:12:27] It could potentially detect irritable bowel syndrome or colorectal cancer. [01:12:32] And here was the part, forgive me, because I'm really just a 12 year old boy at heart, [01:12:35] that I wanted to ask you about. [01:12:36] So, the toilet could identify individual users by scanning their anus's unique characteristics, or anal print.
[01:12:43] Now, no one wants an anal print going off to some AI researcher, but this is happening. [01:12:52] This is actually, they're saying these units could cost between $300 and $1,000. [01:12:56] They could be rolled out in the next couple of years. [01:12:58] Is this what life is going to hold for us? [01:13:01] Yeah, let's hope not. [01:13:02] A lot of that, I think, [01:13:07] the description you read, sounds disturbing. [01:13:10] Having said that, I think there are doctors that have to do many disturbing things for the good of the patients. [01:13:17] But I think a lot of us will not want this in our homes anytime soon. [01:13:22] But we'll see. [01:13:23] Doctors got to innovate. [01:13:25] We'll see what the FDA approves and what seems to be appropriate for patients that may need it, even if it doesn't seem like the right thing for everyone. [01:13:32] Because you know that's going to turn into one of these things where you get false alarms every other day and you're at the doctor saying, oh, my anal print suggested I've got colorectal cancer. [01:13:41] I don't know. [01:13:43] Sounds like there's a ton of internet memes to be created off what you just said. [01:13:48] Listen, as somebody who's on camera for a living, a lot of my life, there are limits to how far I'm willing to go. [01:13:54] And I think I speak for a lot of people. [01:13:56] So, what about the other industries? [01:13:57] Like, how else could AI improve or negatively impact our lives over the next 10, 25 years? [01:14:05] One of the challenges I see is AI, as of today, has clearly transformed the computer [01:14:11] software, the consumer software internet industry. [01:14:14] In the US, the large website operating, app operating companies, almost all of them, or maybe all of them, use AI to great effect.
[01:14:22] One of the challenges still ahead of us is to figure out how to use AI to improve, transform, create value for all of the other industries out there. [01:14:32] So, for example, one thing I'm personally passionate about is manufacturing. [01:14:36] I think that for American manufacturing to be more competitive, the road forward is not to just try harder to do the jobs that were around 20 years ago. [01:14:48] I think America and, frankly, all nations around the world should race ahead to figure out how this technology can work for manufacturing and for all those other industries. [01:14:59] So, for example, it turns out that in many factories around the world today, there are tons of people standing around using their eyes to inspect your manufactured thing, like an automotive component or a pill bottle or a food and beverage [01:15:14] component, to see if there's a defect on it. [01:15:17] I think AI is clearly going to be able to do a lot of that work in the near future in an automated way. [01:15:23] And if we in America want to embrace this technology, figure out how to use AI for automatic visual inspection, it's coming. [01:15:31] I'm working on it. [01:15:32] My friends are working on it. [01:15:33] I think that will help many industries become competitive. [01:15:36] But it turns out getting AI to work for manufacturing, for healthcare, for agriculture, these industries, there's actually a different recipe [01:15:43] than the stuff that I was doing, you know, at Google and other internet companies. [01:15:46] It doesn't quite work. [01:15:48] So there's something a little bit more needed. [01:15:50] But again, a bunch of us in the AI field are working on this. [01:15:52] I hope we'll get it. [01:15:55] Don't leave me now. [01:15:56] We got more coming up in 60 seconds. === Embracing Future Technology (10:09) === [01:16:02] Can we talk about Baidu for a minute?
[01:16:05] Just talk about China and its approach to data, because I know that they really want to be leaders in the AI field and that the United States is watching them and they're watching us. [01:16:13] Do you think that the Chinese are any better than the Googles of the world, where you were also a top guy, at collecting information, synthesizing it, keeping an eye on people's habits, and so on? [01:16:27] Yeah, I think that China is phenomenal at some types of technology, and the US is phenomenal at some types of technology. [01:16:37] I think we do live in a multipolar world, where I see innovations in the US and Europe and China, really, frankly, all around the world. [01:16:44] And the AI community tends to be very global. [01:16:48] There is a global network where researchers in Singapore may publish a paper and then, like, two weeks later, it's running in some site in the United States. [01:16:59] And then someone in the UK will read it too and figure out something to apply and deploy in Europe. [01:17:04] So I think we live in a global world where different teams sometimes collaborate and different teams sometimes compete. [01:17:13] I think, actually, one thing I will say is a lot of people underestimate the importance of government support in the early days of AI. [01:17:22] So, not many people know this. [01:17:23] When I was running AI way back, before modern AI, deep learning, became popular, a lot of the reason I was able to do my work was because DARPA, the defense agency in Washington, D.C., was willing to fund some of my work. [01:17:37] So, I think without DARPA, you know, funding some of my research work, I don't know that I would ever have gone to Google to propose starting the Google Brain project. [01:17:44] So I think just ensuring American competitiveness is something I would love to see. [01:17:50] Where are we on the scale? [01:17:51] Are we the world leaders?
[01:17:52] You know, you look at sort of the military superpowers and we know where we are, but where is America when it comes to AI? [01:17:58] I think that the two leading countries in the world in AI are quite clearly the US and China. [01:18:05] I think the US is the world leader in [01:18:09] basic research innovations, but this is not a lead that we should take for granted. [01:18:14] And we just got to keep on working really hard. [01:18:16] And what about the creation of super intelligence? [01:18:18] Because I read something about you creating something where a computer can recognize a cat. [01:18:26] I don't know. [01:18:27] You can tell me what it was. [01:18:28] But to me, that sounded like working toward developing super intelligence. [01:18:34] You know, a computer that can learn on its own and develop its own intelligence and improve its own intelligence. [01:18:40] But can you talk about that? [01:18:42] Where we are on it, what you've done on it, and whether you think, well, you know, how far along we are. [01:18:47] Yeah, the cat result. [01:18:49] When I was in the Google Brain team, one of the early results we had was we built an AI system called a neural network and had it watch tons of YouTube videos. [01:18:57] Basically, had it sit in front of the computer and watch YouTube videos for like a week. [01:19:02] And then we asked it, hey, what did you learn? [01:19:04] And to our surprise, one of the things it learned was it had figured out how to detect this thing, which turns out to be a cat. [01:19:13] Because it turns out, when you have an AI system [01:19:17] watch YouTube videos a lot, it learns to detect things that occur a lot in YouTube videos. [01:19:20] So people's faces occur a lot on YouTube, so it figured out how to detect that. [01:19:24] But also a lot of cats, right? [01:19:25] Cats are another internet meme on YouTube. [01:19:27] So it also figured out how to detect that.
[01:19:29] It wasn't a very good cat detector. [01:19:32] But the remarkable thing about that was that it had figured out that there's this thing. [01:19:36] It didn't know it was called C-A-T, cat, but there was this thing that it just learned, boy, I see a lot of this thing, whatever it is, I don't know what it is. [01:19:43] So it was pretty remarkable that the AI system, [01:19:46] the neural network, had figured that out by itself. [01:19:49] Now, but again, between that and super intelligence or AGI, I think it's very far away. [01:19:56] I think that worrying about AI super intelligence today is a bit like worrying about overpopulation on the planet Mars. [01:20:05] I should hope that we will manage to colonize Mars. [01:20:09] And maybe someday we'll have so many people on Mars that we'll have children dying because of pollution on Mars. [01:20:15] And you may be saying, hey, Andrew, how can you be so heartless to not care about all the children dying on Mars? [01:20:21] And my answer is, well, we haven't even kind of landed people on the planet yet. [01:20:25] So I don't know how to productively defend against overpopulation there. [01:20:29] So I feel a little bit like that about it. I think it's fine if academics study it, publish some theories on what to do when we have AGI, but it's so far away. [01:20:38] I personally don't really know how to productively work on that problem. [01:20:44] Now, you are the co founder of a group called Coursera. [01:20:47] Is that how you pronounce it? [01:20:48] Yes, Coursera. [01:20:50] And I feel like this dovetails very nicely with one of the things that Nick was recommending when I talked to him about the future, our children, and so on. [01:20:57] And he was saying the one thing the kids of the future are going to need to be able to do is understand that learning is a lifetime process, right? [01:21:03] That nothing is as static as it used to be.
[01:21:06] The world is changing so rapidly, and our kids are going to need to be able to handle information [01:21:11] at an even more rapid pace than it now comes into their life, which is already faster than ever. [01:21:16] And I feel like this is one of the missions of Coursera, to nurture lifelong learning. [01:21:21] Can you talk about it? [01:21:22] Because it sounds really interesting and it's been hugely successful. [01:21:25] Yeah. [01:21:26] So, yeah, through Coursera, hopefully, we can give anyone the power to transform their lives through learning. [01:21:33] I was teaching at Stanford University about a decade ago, actually over a decade ago, and put my class on machine learning, a type of AI, on the internet. [01:21:42] Kind of to my surprise, 100,000 people signed up for it. [01:21:47] And I kind of did the math. [01:21:48] I was teaching 400 people, 400 students a year. [01:21:51] But when I did the math, I realized that for me to reach a similar audience, 100,000 people, teaching 400 people a year, I would have to teach at Stanford University for like 250 years. [01:22:03] And so based on that early traction, I got together with a friend to start Coursera, to create a platform that now works with over 200 [01:22:15] universities and other institutions and companies, in order to create online learning courses that pretty much anyone in the world can access. [01:22:25] That's so great. [01:22:26] I mean, so it's like for those of us who didn't go to Stanford or Harvard or what have you, but want access to that kind of education, though not full time, we can go here. [01:22:37] Yeah. [01:22:37] In fact, I'm going to share two thoughts relevant to all of you watching this. [01:22:44] If you want to learn about AI and, you know, cut through the hype, [01:22:49] one of the classes I'm most proud of is AI for Everyone on Coursera. [01:22:54] And in it, I tried to give a non technical presentation of AI.
[01:22:58] So, if you want to know how AI will affect your life in the future, how will AI affect your job, your industry, there are several hours of video that I hope will give anyone that's interested a non technical introduction to AI. [01:23:11] So, you can think about it strategically and know how it will affect you, but also learn to recognize and ignore some of the hype. [01:23:19] There's one other trend I'm excited about, which is, with the rise of tech, I think we may, I hope, eventually shift toward a world, [01:23:28] and this is relevant to all of you with children, for example, but I hope we'll shift toward a world where almost everyone will know a little bit about coding. [01:23:37] And I say this because many, many hundreds of years ago, we lived in a society where some people believed that maybe not everyone needs to read, right? [01:23:46] Maybe there are just a few priests and monks. [01:23:49] They had to learn to read so they could read the holy book to the rest of us or something. [01:23:53] And the rest of us, we didn't need to read, we'd just sit there and listen to them. [01:23:57] Fortunately, society wised up, and now with widespread literacy, we've figured out that it makes human to human communications much better. [01:24:06] I think that with the rise of computers in today's society, for good and for ill, this is a very powerful force. [01:24:12] I would love to see a lot of people able to just learn to code. [01:24:16] So, it's not like all of us need to learn to be great authors, right? [01:24:20] I can write, but I'm not a great author. [01:24:22] I don't think everyone needs to be a great programmer. [01:24:25] But for many of us, there will come a time where you'll be able to write a few lines of code to get your computer to do what you want, just like literacy has created much deeper human to human communications.
[01:24:40] I think if everyone can kind of learn a little bit of coding, or computer literacy, then all of us can have much deeper interactions with our computers. [01:24:48] And it'll be very powerful for all of you in the future. [01:24:50] Well, it certainly had a massive impact on your life, just reading your background. [01:24:55] How did you get into it at such a young age? [01:24:57] It was your dad, I understand? [01:24:59] Yes. [01:25:00] So my dad's a doctor. [01:25:02] And when I was a teenager, I was born in the UK, but I was living in Singapore at the time. [01:25:08] But so my dad was interested in AI for healthcare. [01:25:10] So, you know, he kind of taught me about his attempts to use, you know, frankly, like 1980s AI, which is not that advanced, to do medical diagnosis. [01:25:21] So that sparked a lifelong interest in me. [01:25:24] You know, I do remember when I was in high school, I once had an internship, I once had a job as an office admin. [01:25:30] And I don't remember much from that job. [01:25:32] I just remember doing a lot of photocopying. [01:25:35] And even though I was like, whatever, you know, 15, 16 years old, I remember thinking, boy, why am I doing so much photocopying? [01:25:43] If only we could write some software, have a robot or something [01:25:46] do all this photocopying, maybe I could do something even more interesting and more valuable. [01:25:52] And I think that for me was part of my lifelong inspiration to just write software that can help automate some of the more repetitive things, so that all of us collectively can tackle more challenging and exciting things. [01:26:05] Well, it's so great, because I tell you, I went out to Google and I spoke to a bunch of executives there a couple of years ago. === From Photocopying to Code (04:42) === [01:26:12] And I know that they try to give the coders some stress relief, like a break, [01:26:20] because it can be very intense work.
[01:26:21] And one of the stations on campus was sword fighting. [01:26:25] I'm like, this is so great. [01:26:27] You know, they just, because, you know, you spend all day doing that. [01:26:29] It's very intense, and you do need a mental break, a break for your eyes, a break for your body. [01:26:35] So it's just a totally different way of approaching the workplace. [01:26:40] Yeah, I think coding is hard work, but I find that, you know, when I look across our society, I think almost everything is hard work, right? [01:26:52] When I walk into manufacturing plants, some of the work that my company, Landing AI, does for manufacturing, I see the men and women on the manufacturing shop floor, and they're really smart at what they do. [01:27:04] And then I meet up with my friends from Google, and I think they're really smart at what they do. [01:27:10] I think that the world has lots of challenging, intellectually stimulating, or physically challenging work for us to do. [01:27:19] And hopefully AI tools can help make things a little bit better for everyone. [01:27:23] Well, I like that you sort of decide where you're going to put your energies, because I understand, looking at you today in your blue shirt, it is no accident you are wearing that blue shirt. [01:27:30] And it is one of the areas of your life in which you've chosen to simplify and streamline your decision making. [01:27:36] Yeah. [01:27:37] I think, yeah, a few friends have asked me, you know, about my choice of clothes. [01:27:41] I think someone actually asked publicly, why does Andrew wear a blue shirt all the time? [01:27:46] So I used to wear either blue or like a light purple. [01:27:50] But then I realized every morning it's like, oh, do I wear a blue shirt or a purple shirt? [01:27:54] Blue shirt, purple shirt. [01:27:55] I can't decide. [01:27:56] It's like, forget it. [01:27:56] I'm just buying a full stack of blue shirts and doing that.
[01:28:00] I don't know. [01:28:00] So you don't even have to think about it in the morning. [01:28:02] Vera Wang does the same thing. [01:28:03] Vera Wang, who dresses the most beautiful, successful, you know, prominent people in the world, just wears sort of a column of black every day. [01:28:10] That's her uniform. [01:28:12] I did not know that. [01:28:13] Because she doesn't want to think about it, same as you. [01:28:16] Yeah. [01:28:16] Turns out there is a downside to this. [01:28:18] One day, one of my friends was working on an AI for fashion thing. [01:28:21] And I tried to express an opinion, said, well, if you want to do AI for fashion, how about this? [01:28:26] How about this? [01:28:26] And she said, Andrew, you have no credibility whatsoever when it comes to fashion. [01:28:31] So, okay, I have to ask you one other personal question. [01:28:34] Now, I understand you married somebody who's in robotics, and I read that you [01:28:38] used a 3D printer to make your wedding rings, which brought up a lot of things for me, which is, number one, I do not understand the 3D printer at all. [01:28:48] My kids are using it at school. [01:28:49] It scares me. [01:28:50] I don't get it. [01:28:51] What is it? [01:28:52] And how does it print out a wedding ring? [01:28:54] How does it produce a wedding ring? [01:28:56] Yeah. [01:28:57] So, Carol is from Flint, Michigan, but we are now in Washington state. [01:29:03] So, you know, one way that 3D printing works is it takes little bits of metal and melts them and kind of deposits little drops of metal until gradually you end up building a ring. [01:29:16] I'm not wearing a ring now. [01:29:17] I have it upstairs. [01:29:19] And then you end up with this incredible shape.
[01:29:23] Whatever, almost anything you can imagine and program into a computer, it can, just by putting little drops of plastic or little drops of metal or some other substance, just create this incredible 3D shape that's maybe difficult to manufacture other ways. [01:29:40] So, I don't know. [01:29:41] Actually, this is one fun thing about technology, right? [01:29:43] 3D printing is still a really, really cutting edge technology, but now we have high school students able to use it. [01:29:51] I hope we get that for AI too, frankly. [01:29:54] I find that today AI seems a little bit mysterious, maybe a little bit overly so. [01:29:58] But actually, last week I was chatting with a few high school students in different parts of the country, talking about, you know, they're taking online AI classes from Coursera or from wherever. [01:30:08] And now we have high school students able to do things that, if done just five or six years ago, would have been a chapter in a PhD thesis at a place like Stanford, right? [01:30:20] Really? [01:30:21] Like what? [01:30:23] Actually, so one thing happened to me. [01:30:25] I was attending a Maker Faire, where I met this student that was demoing his robot that was taking pictures of plants, trying to figure out if they were diseased, if they had a disease on the leaves or not. [01:30:38] So I looked at his work and I thought, boy, if this had been done five or six years ago, this would have been a chapter in someone's PhD thesis at Stanford University. [01:30:48] And you know what? [01:30:48] I asked him, how old are you? [01:30:50] And he said, oh, I'm 12 years old. [01:30:53] So this is today's world. === Smart Home Privacy Struggles (05:46) === [01:30:54] No, no, no. [01:30:55] So this is today's world. [01:30:57] I think anyone in the world can go and learn this stuff and then implement this.
[01:31:03] And even though some technology seems so cutting edge, I think that if someone out there is watching this and wants to learn it, a lot of tools are now on the internet. [01:31:13] Go learn it online from deeplearning.ai or Coursera. [01:31:16] And then on your computer, you could actually start developing stuff that, [01:31:20] while not the cutting-edge stuff, right? [01:31:21] That's actually still pretty difficult. [01:31:23] You could actually do stuff that was kind of state of the art just a few years ago. [01:31:26] All right. [01:31:27] On this subject, I have a confession to make to you. [01:31:30] I was moving into a new home, moving towns, and I decided not to make my new home a smart home, because my old smart home was annoying me. [01:31:39] My dishwasher was yelling at me, and my microwave was yelling at me. [01:31:43] And I was walking around my apartment all day saying, You are not the boss of me. [01:31:45] I am the boss of you. [01:31:47] Shut up. [01:31:48] I will unload you when I'm good and ready. [01:31:50] And the TV required 40,000 buttons to turn on. [01:31:53] And it's like, I just want a dumb home for me, because maybe it says I'm a dumb person, but it seemed easier to me. [01:32:02] And yet, all of these appliances are getting smarter by the day. [01:32:06] And they're saying there's going to be a refrigerator that's going to tell you whether things are spoiled on the inside and so on. [01:32:11] So, do you have a smart home? [01:32:13] Do you recommend a smart home? [01:32:15] And how, if at all, concerned should we be about people spying on us, for lack of a better term? [01:32:23] You know, I think people, they distrust Google. [01:32:26] They think Google's amassing information on them. [01:32:27] They distrust the government. [01:32:28] They think the government could possibly hack into one of these appliances. [01:32:32] You know, these are real concerns you hear from people. [01:32:34] Yeah. [01:32:35] So, you know. 
[01:32:36] I think that a lot of people are concerned about privacy. [01:32:38] So, in life, I have friends at [01:32:41] many of the large internet companies, and because they're my friends, I trust them to tell me the truth. [01:32:46] Many of my friends are genuinely concerned about, but also very respectful of, privacy. [01:32:52] So, a lot of the large internet companies, some better than others, really do have stringent privacy controls [01:32:58] that make it incredibly difficult for anyone to just spy on you. [01:33:02] Now, having said that, [01:33:05] I have no reason to think the US government can hack into these devices, but frankly, I'd be a little bit disappointed if they can't. [01:33:12] Uh, but yeah, so you know, by the way, I used to work on speech recognition, right? [01:33:17] So I worked on these voice-activated devices. [01:33:19] Uh, one thing I'm not proud of: for a long time, even while working on these devices, I had exactly one light bulb in my home that was connected to my smart speaker, because the configuration process is so annoying. [01:33:30] So I got through, you know, configuring one light bulb so I could turn it on with a voice command, but after that, I couldn't be bothered. [01:33:36] So I think we still gotta make these things better, you know. [01:33:40] I think we in tech tend to tinker at the margins with a lot of things. [01:33:43] Sometimes it's really great and I love it. [01:33:45] But sometimes you do wonder if we're really helping solve people's problems. [01:33:51] Hey, if we have more people working on it, maybe we can all collectively make all this tech much better. [01:33:55] Yeah, no, I've said in this day and age, it's not enough to pretend, you actually have to be a good person, because someone's probably always listening, watching, amassing data. [01:34:04] They're going to know one way or another. [01:34:07] It's disconcerting, but I don't know. 
[01:34:09] If you're not a criminal and you're not dealing with terrorists and so on, how worried do you need to be? [01:34:14] I don't know. [01:34:14] I'll give you the last word. [01:34:16] Yeah, you know, I think AI is the new electricity. [01:34:20] The rise of electricity starting about 100 years ago transformed every industry. [01:34:25] I think AI is now on a path to do the same. [01:34:27] So I think, really, to anyone wondering if it's worth learning about it, jumping in, trying to help all of us collectively navigate the future, I think every citizen, every government, all of us as individuals should jump in and play a role in shaping a better future for everyone in light of this amazing technology. [01:34:46] Wonderful talking to you. [01:34:48] Thank you so much for your expertise and your insights. [01:34:51] Thank you, Megyn. [01:34:51] It was really fun to do this with you. [01:34:58] So as I mentioned in our other episode this week, we're scaling back a little for this week and next week on our episodes, just as we get ready to launch on Sirius. [01:35:06] My team, especially my team, has a lot they need to be doing. [01:35:09] So we're going to launch five days a week starting on September 7th. [01:35:13] But in the meantime, a little bit of a scaled-back schedule, [01:35:16] for those of you wondering. [01:35:17] But our next guest, who's going to be coming up on Monday, is one we've really wanted to have on for a while. [01:35:23] Controversial guy, because he worked for Trump. [01:35:25] And, you know, he's been completely excoriated by the mainstream media, but fascinating and really smart dude, Stephen Miller is going to be here. [01:35:33] You saw him on The Kelly File all the time. [01:35:35] Then you saw what the press did to him when he went inside the Trump team. [01:35:41] But even just, you know, I've spent years talking to him. [01:35:43] There is no better person to talk to 
[01:35:46] if you want to understand what's happening in this country with our southern border, our northern border, and our approach toward immigration in general. [01:35:54] So I'm really looking forward to the conversation with Stephen Miller. [01:35:56] Monday, don't miss it. [01:35:57] In the meantime, go ahead and subscribe so you don't miss it. [01:36:00] Download, give me a five-star rating while you're there, and give me a review. === Your AI Thoughts Matter (00:36) === [01:36:04] Let me know what you think. [01:36:05] What do you think of AI? [01:36:06] Are you in favor? [01:36:08] And what would you like me to ask Stephen Miller? Taking your thoughts right now in the Apple review section or wherever you download your podcasts. [01:36:16] Thanks. [01:36:18] Thanks for listening to The Megyn Kelly Show. [01:36:20] No BS, no agenda, and no fear. [01:36:24] The Megyn Kelly Show is a Devil May Care Media production in collaboration with Red Seat Ventures.