Edition 388 - Professor Bart Kosko
Famous for his "Art and Bart" talks with Art Bell - Bart Kosko on fuzzy logic in 2019 and the future of AI...
Time | Text |
---|---|
Across the UK, across continental North America, and around the world on the internet by webcast and by podcast. | |
My name is Howard Hughes, and this is The Return of the Unexplained. | |
Thank you as ever for all of your emails. | |
Please know that I do get to see each and every email as it comes in. | |
And as I always say, when you get in touch, please tell me who you are, where you are, and how you use this show. | |
And you can send me guest suggestions, thoughts about how the show's going, anything you want to do, really. | |
It's nice to connect with you. | |
And thank you very much to people around the world who regularly get in touch, like Robin in the US, like Susan in Boca Raton, Florida, like Janet in Sydney, Australia, and so many other people around the world who make their week include this show, which is just fantastic to know that I've become a part of some people's lives around the world. | |
Of course, I can always do this better. | |
And that's where you come in. | |
You can tell me how it can be improved. | |
I never thought that I was God's gift to broadcasting. | |
So, you know, as I've said before, that's why I ask for feedback and I always try to build it in here. | |
But it is a funny thing. | |
When I do more shows about science, I get people saying, what happened to all of the ghost shows and the paranormal? | |
When I do more shows about paranormality and ghosts, I get emails from people saying, what's happened to your show? | |
Where have all the scientists gone? | |
Well, we're going to redress the balance towards science on this edition, certainly, by speaking with a man who was one of the main guests of the Art Bell era. | |
They used to call them the Art and Bart talks, and I loved them. | |
Back in the 90s, and at the beginning of this millennium as well, Bart Kosko, Dr. Bart Kosko, and his talks about artificial intelligence, fuzzy logic, and the way of things would always be enthralling. | |
Bart Kosko, if you haven't heard of him, he's a professor of electrical engineering and law at the University of Southern California, but so much more. | |
A pioneer in machine learning. | |
He is an expert on neural networks. | |
And the thing that made him famous, perhaps, in this country, fuzzy logic. | |
One of the people behind that. | |
But also he studied law and he writes works of fiction as well. | |
He is one of the greatest thinkers that I ever heard Art Bell speak with. | |
And I was determined in this era now to try and find him and speak with him myself. | |
So, Bart Kosko, the guest on this edition of The Unexplained, from myself in London to you wherever in the world you happen to be. | |
Like I said, if you want to get in touch with me, the website is the place to go. | |
The website designed and owned by Adam Cornwell from Creative Hotspot in Liverpool, it is theunexplained.tv. | |
Well, let's get now to snowy California. | |
And we are in a total time transition here because here I am speaking literally straight out of bed in London. | |
It's just after dawn. | |
And for Bart Kosko it's around midnight in the mountainous areas of California. | |
And that's a hell of a transition, isn't it? | |
Stormy London, early morning to late at night in snowy California. | |
What a great thought that is. | |
So let's get to Bart Kosko now, guest on The Unexplained this time, and say, Bart, thank you very much for coming on my show. | |
My pleasure. | |
And let's just, for the excitement of my listener here, explain this. | |
Now, I am recording this in a different time zone. | |
It is dawn or thereabouts here in London. | |
We have a storm going on, but it's quite mild. | |
You, and I can't compute this, you're not that far from Los Angeles, Bart, but you're in the middle of a snowstorm. | |
Correct. | |
We don't make that computation here. | |
We don't think that Los Angeles equals snow. | |
Well, when the skies are clear in Los Angeles, you can see the snow-capped mountains behind us. | |
And I'm up now at about 7,500 feet, at my mountain retreat, on spring break. | |
Earlier tonight, I was snowshoeing, and tomorrow I'll be snow skiing, and it's snowing quite furiously right now. | |
Wow. | |
Snowshoeing. | |
I can't even imagine what that's like. | |
But thank you very much for doing this. | |
It's really good of you to do this over this 6,000 miles and over these time zones. | |
And I'm just existing on coffee at the moment, waking myself up because it's early in the morning, and you're late at night doing the same. | |
I used to love your conversations with the late and great Art Bell, who was my hero and mentor. | |
I always thought you and he had a particular kind of rapport. | |
Yes. | |
And I never met him. | |
It was good. | |
We would occasionally talk off radio, but it was something, I don't know, a different back and forth. | |
He was a fun interviewer. | |
He was a tough interviewer. | |
I remember I had just published a paper in the journal Nano Letters in 2003, and it led to a big spontaneous talk on nano-computing, the real-deal nano-computing. | |
They had just introduced things like nano-pants at the time, and that show often is rebroadcast. | |
But Art was a great interviewer, a great thinker, and an extremely open-minded host. | |
I'm not saying that he believed everything he encountered, but he was open-minded. | |
Something, Howard, you don't always see, shall we say, in the sciences. | |
And just as we get older, we tend to be a little more closed-minded. | |
He was a wonderful, probing, open-minded host. | |
I think you have to be open to suggestions from people and to different concepts. | |
Otherwise, we don't learn. | |
Sometimes I get mail from people here saying, my God, what did you put that person on for? | |
You know, they're so far out of the box. | |
They're so far beyond the pale, it's ridiculous. | |
And I say to myself, if we don't do the, what is it, thesis, antithesis, synthesis thing that they do in science, and I'm no scientist. | |
The old dialectic. | |
The old dialectic. | |
I've never been a scientist. | |
But if we don't do that, we ain't going to get anywhere. | |
And I think there's more than that, too. | |
I spend a lot of my time every day, hours a day, with mathematics. | |
I'm almost 60. | |
I still occasionally publish a sole-authored math paper and multi-authored papers on math. | |
I'm literally working on five of them right now. | |
And it's so easy, Howard, to get it wrong. | |
It's so hard to get it right with mathematics. | |
And then when I go back and look at a paper of mine from, God forbid, the 1980s, I'm just cringing when I read it. | |
What did I get wrong? | |
What did I miss? | |
When I see how hard it is to get something right, even as pure a case as mathematics, and then we look at other issues, social issues, the limits of the mind, brain, it introduces in me a great sense of skepticism and tolerance and hopefully open-mindedness. | |
I certainly don't know it all. | |
I'm wrong a lot more often than I'm right. | |
At least I'm aware of that. | |
And I think the only way to proceed in this world is with as open a mind as you can keep it. | |
To be brave enough to be wrong. | |
And I'm in the artistic field. | |
I've tried to do that. | |
And boy, have I been wrong a lot of times. | |
But you have to take it on the chin and carry on. | |
So, Bart, nice to have you here. | |
For listeners here in the UK who may be new to you, because before we started recording, you explained to me that the last time you were in London, for example, was 25 years ago. | |
And that's a whole world away. | |
For listeners who don't know you over here... | |
Well, that's what we're going to talk about first. | |
But for listeners who don't know you over here, and we have to say that there is a small degree of digital delay on the conversation between us. | |
So I just want to explain that to my listeners. | |
So I might seem to be stepping on you on occasions, but it's just the delay. | |
Talk to me about you and what you do and who you are. | |
I'm a professor at the University of Southern California. | |
I'm a professor of electrical and computer engineering and a professor of law. | |
I have a lot of degrees, Howard. | |
My first degree was in philosophy, degrees in economics, mathematics, engineering and law. | |
And I actually began in music. | |
I began as a composer. | |
I actually came on a full scholarship to the University of Southern California. | |
When I was 17 years old, Howard, you talked about the arts. | |
I had just finished the orchestral overture to the Count of Monte Cristo, if you ever read that book. | |
And I'd gotten a recording contract. | |
Well, that fell apart when the conductor left. | |
And that got me out from the farmlands of Kansas to California, where I've been ever since. | |
And I made the usual, well, not the usual, but a fairly common path transition from music, which I do on the side now, to mathematics. | |
And along the way, I have to say, having kind of an arts mentality coming at this, composing, for example, that's the hardest thing in the world, Howard, to come up with a new tune. | |
I mean, how many chord progressions are there? | |
But coming from there and then applying that in mathematics, and for me, the first big insight was I learned this world of black-white logic. | |
Is it snowing outside? | |
Yes or no? | |
And the math is easy to write and it's a bunch of symbols, but pretty soon the snow will start to stop and we'll have that kind of twilight period of either no snow or light snow or when do you say it's not snowing anymore? | |
There's 10 flakes per second in a given region, this sort of thing. | |
And I didn't see how that world, the real world, where it's not black and white, squared with this idealized platonic world, a very beautiful world of mathematics and the science that assumes it's a first approximation. | |
And that for me was kind of a philosophical crisis and also a point of scientific discovery, because we were quite surprised, my colleagues and I, to find that this so-called multi-valued or fuzzy logic, as the late, great Lotfi Zadeh called it, allowed us to make computers think more like people and vice versa. | |
And was it one of the first big boosts to an earlier era of artificial intelligence? | |
Yes, and I can remember commercials on television over here in the 90s, maybe the back end of the 80s, but certainly in the 90s, when they would advertise the latest great washing machine that had all these multiple programs to do your clothes better than any other, and they would say, and by the way, this washing machine, or this video recorder, is complete with fuzzy logic. | |
It was a great selling thing. | |
And my only concept of fuzzy logic, my only idea of it, is based on marketing campaigns. | |
And I came to the idea that fuzzy logic is this thing that you helped to propound that allows the human being to interface better with the machine and vice versa. | |
Right. | |
And it allows you to program a computer in English, for example. | |
So to take a simple case, I've written about this many times. | |
You can see on my webpage the article I published in 93 in Scientific American, but consider an air conditioner. | |
You might give it some rules, Howard, like if the air is cool, set the motor speed to slow. | |
If the air is warm, set it to high. | |
Well, what is cool air? | |
Well, there's clearly a distribution of that. | |
And same thing with slow speeds. | |
And with a handful of rules like that, you can quickly program a fuzzy logic computer. | |
In effect, it's a standard computer, but it approximates that to control an air conditioner and a variety of other things. | |
And what you mean by cool differs from what I mean by it, so it can be user-dependent. | |
And what you mean by cool changes during the seasons and things like that. | |
So it was a way of capturing the inherent uncertainty in most of the words we use, concepts like cool, effectively any adjective, really, if you think about it, and getting that inside a computer rather than trying to directly program that in in some simple black-white fashion. | |
In other words, exactly what temperature do you jump from cool to not cool? | |
Well, you can pick one, but in the real world, it's a transition. | |
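To make that concrete for anyone who wants to see it in code, here is a minimal sketch of the kind of rule-based fuzzy controller being described. The triangular membership shapes, the temperature ranges, and the motor speeds are illustrative assumptions, not values from any real air conditioner.

```python
def triangle(x, left, peak, right):
    """Triangular membership: 0 outside [left, right], 1 at the peak."""
    if x <= left or x >= right:
        return 0.0
    if x <= peak:
        return (x - left) / (peak - left)
    return (right - x) / (right - peak)

# How much is the air "cool" or "warm" at temperature t (Celsius)?
def cool(t): return triangle(t, 5.0, 15.0, 22.0)
def warm(t): return triangle(t, 18.0, 27.0, 40.0)

# Rules: IF air is cool THEN motor speed is slow (say 300 rpm);
#        IF air is warm THEN motor speed is fast (say 1200 rpm).
# The output is a weighted average of the rule consequents, so the
# transition between speeds is gradual, not a jump at one magic temperature.
def motor_speed(t):
    w_cool, w_warm = cool(t), warm(t)
    if w_cool + w_warm == 0:
        return 0.0  # no rule fires
    return (w_cool * 300 + w_warm * 1200) / (w_cool + w_warm)

for t in (10, 18, 20, 25, 30):
    print(f"{t} C -> cool={cool(t):.2f} warm={warm(t):.2f} "
          f"speed={motor_speed(t):.0f} rpm")
```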
Well, how do you cope? | |
If we talk about the temperature thing for just a second, I work, as we all do, in a lot of offices, and it's a fact that the female members of the crew versus the male members of the crew perceive hot and cold differently. | |
The women are always cold. | |
It's just our physiology, I think. | |
And the guys are always too hot. | |
How does fuzzy logic cope with that? | |
First off, it copes in this sense: by letting everyone define cold their own way. | |
And to go a little deeper into that, the way we do that is make it adaptive based upon the user interaction, whether it depends on gender, I don't know. | |
But people do differ in this regard, and I think we differ pretty much with all the concepts that we use. | |
The more you interact with it, the better you can tune it. | |
I mean, a simple example is when you drive someone else's car or pick up a rental car, one of the first things you do is reach up and tune the rearview mirror, because that differs from person to person. | |
In the same way with repeated interactions of telling a system or indicating to a system what is cool, you can start to program it. | |
But people do differ, and it makes you wonder, Howard, given that we all use the same black-and-white ink and words. | |
The word cool written on a page, cool air in black ink, it looks like we mean the same thing, but you and I mean somewhat different things by that. | |
And that's an elementary sensory predicate. | |
And you have to wonder when we talk about anything at the higher social level or political level or spiritual level, are we really even talking about the same thing? | |
I think the best we can hope for is an approximation. | |
And that is at the essence. | |
And we'll talk a little later in this conversation about a thing that is dogging us here in the United Kingdom at the moment, Brexit, that you've done some research on. | |
You know, the opposition's perception of where you're coming from will be quite different from your own. | |
I mean, we can even boil that down further from Brexit. | |
In a marriage or a relationship, your view of, you know, what color the new wall is may differ from your partner's view of what color the new wall is and may cause you to come into conflict if you don't both appreciate that fact. | |
Sure. | |
So that is, I guess that's somewhere where we could use, as they call it over here, the appliance of science. | |
And in general, just matters of value, along with what appears to be fact, and that's not always a clear distinction. | |
We think this is good to different degrees, and we're bound to come into conflict. | |
It's very difficult to achieve any kind of group consensus, even with three or four people. | |
And so far as I can tell, the difficulty, as we say, scales non-linearly as you get more people or more groups. | |
It's not an easy thing to do. | |
And with something like Brexit, I'm far from an expert in it, but I did see some recent fuzzy logic research on the matter. | |
You're asking about a very complex question, but you are drawing a hard black-white line to do it or not, to leave or remain through the fuzz. | |
And that is the nature of politics. | |
And I just hope, as a general matter in society, that we draw collectively as few of those lines as we have to, because they're always going to be contentious. | |
As indeed we're seeing, we'll come back to that because we want to talk about some research that you've highlighted to me about it. | |
Back to the overall concept of fuzzy logic, though. | |
We've had it for more than a quarter of a century. | |
How do you think that it has benefited us as we move into the white heat of the AI age? | |
It was one of the first areas, Howard, where we had real commercial applications, as you talked about, the washing machines and the camcorders, and in general, the consumer electronics that largely have come and still come from Japan and South Korea, for example. | |
And it was so popular in the late 80s and early 90s in Japan that the Japanese even introduced a new kanji character for fuzzy, or whatever it's called there. | |
And it got baked into the cake, though. | |
It's the nature of these things that you take it for granted. | |
Now, for example, a car that I drive in the snow here is called a Subaru. | |
It's a Japanese car. | |
And it's one of the leading cars for snow and ice. | |
And it has built into its very special transmission a fuzzy logic system to change the gears, in effect. | |
It's very effective. | |
And they no longer advertise that. | |
It's literally baked into the cake, so to speak, here, with so many other things. | |
And a lot of modern AI applications have come from a different era, a different form of artificial intelligence that formerly is called neural networks. | |
Today it's often just lumped in with the term AI or so-called deep learning. | |
But what it is, and this is important, Howard, when we're talking about cool or good or hot, those terms are clearly symbolic terms. | |
We think about them in a formal way, but there's a sub-symbolic level. | |
And, for example, what we mean by the pattern of cool air or warm air is housed in our neural tissue, in the brain. | |
And that's, to first order, a pattern recognition task. | |
So teaching a computer to recognize cool air, or, more sophisticated, teaching it to recognize your face in an image. | |
That's the sort of thing that's been very difficult to do with any kind of rules at all. | |
But we do it, humans and all mammals do that. | |
In fact, all animals do it quite well at the brain level automatically. | |
We can't explain how we do it. | |
It's a form of associative memory. | |
And that's what you're seeing; that's what's been driving the applications in the media, and it has been hyped at least as much as fuzzy logic was. | |
So it's sub-symbolic. | |
Now, what's interesting here is at the forefront of this, where I think we're at now, and you can see some of the papers of this on my webpage, we're combining these in the sense that we want to go from the kind of dumb neural network, pre-reasoning, sub-symbolic, to the symbolic and reason with that. | |
And I was able recently, after literally decades of studying this, to reformulate what we're doing with fuzzy logic. | |
So for example, I mentioned earlier you could program an air conditioner with a handful of rules, like if the air is cool, set the motor speed to slow. | |
It turns out if I have five such rules, Howard, what I'm really doing, in some sense, is combining or averaging five kinds of probability bell curves, if that works for you. | |
So I can cast it into a language of probability, not quite the way we saw it in the past. | |
And that's important because probability theory has become the common coin of modern machine learning. | |
And as a consequence, I can then take the stuff from the neural world, the deep learning pattern recognition systems, and start growing more sophisticated fuzzy systems. | |
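Here is a rough sketch of that reformulation, under the assumption that each rule's if-part fires like a Gaussian bell curve whose normalized strength acts as a mixture weight. The rule centers, widths, and outputs are invented for illustration.

```python
import math

rules = [
    # (center of the "if" bell curve over temperature, width, "then" output rpm)
    (10.0, 4.0, 200.0),
    (16.0, 4.0, 500.0),
    (22.0, 4.0, 800.0),
    (28.0, 4.0, 1100.0),
    (34.0, 4.0, 1400.0),
]

def gauss(x, c, s):
    return math.exp(-0.5 * ((x - c) / s) ** 2)

def fuzzy_output(t):
    # Normalized firing strengths behave like mixture weights: they are
    # non-negative and sum to one, just like probabilities.
    fires = [gauss(t, c, s) for c, s, _ in rules]
    total = sum(fires)
    weights = [f / total for f in fires]
    return sum(w * out for w, (_, _, out) in zip(weights, rules))

print(fuzzy_output(18.0))  # blends mainly the 16- and 22-degree rules
```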
Last point, very recent, what we can do. | |
We talked about this in the 90s. | |
We didn't have the computing power really to do this. | |
What do you do, Howard, once you've trained one of these big black boxes, these big neural networks, to recognize your face or to pick out people on the terrorist watch list when they're scanning faces, for example, at a game or they being the government or at an airport? | |
What do you do with that black box? | |
Well, you can convert that now into a set of fuzzy rules. | |
So you can convert the black box into a kind of gray box of rules and then play with those, add to them, manipulate them. | |
And that's sort of where we are right now in large-scale artificial intelligence. | |
Okay, how does that make things better? | |
How does that mean that I don't go to the airport and the facial scanner, you know, doesn't say, oh, well, this Howard Hughes looks a little bit like Richard Gere or come to another conclusion, this Howard Hughes looks a little bit like some person on the terrorist watch list. | |
Let's get the security guards out. | |
You know, how do we make it more accurate, I guess is what I'm asking? | |
That's a hard question. | |
It's exactly the right question. | |
And if I can add a little engineering insight on this, you have to understand that anytime we draw a line to detect something, like cool air or terrorists or Howard's face or anything like that, there are two possible kinds of errors we can commit. | |
And statisticians are very careful to call these out. | |
Often in the media, they're lost. | |
The first type of error is a false alarm. | |
That is, for example, you walk in there, Howard, and someone thinks, oh my God, you're a potential terrorist. | |
Okay, but you're not one. | |
That's a false alarm. | |
On the other hand, there's another risk. | |
It's the dual risk of that, though, and that is the risk of a miss, sometimes called a type 2 error or a false negative, where in fact you've decided to blow up the facility but we say that you're okay. | |
And so we've missed that. | |
Maybe to think about it in a medical setting, you've got a lump on your neck and you go into the doctor and the doctor says, no, no, no, it's not a problem. | |
It's not cancer. | |
It's benign. | |
And in fact, it really is cancer. | |
Well, that's a miss. | |
And it's an old saying that doctors get paid for false alarms. | |
They get sued for misses. | |
And that kind of skews the risk. | |
Now, if you get nothing out of this conversation other than this, please remember that for a fixed amount of data, once you set this up, however you try to draw the line, if you want to reduce one of those risks, if you want to lower the risk of a false alarm, you can do it by moving where we draw the line, but you necessarily increase the risk of a miss. | |
And that's the tough part. | |
And that's just a statistical reality, yeah. | |
That's a statistical reality. | |
It's the hard trade-off of uncertainty. | |
You want to reduce a miss? | |
Great. | |
You're going to increase a false alarm. | |
If you want to make sure that you have essentially zero probability, for whatever reason, that you never miss a terrorist, then you're going to classify everybody in the airport as a terrorist. | |
You're going to have complete false alarm certainty, this sort of thing. | |
And you could have the other extreme. | |
If you're really worried about the false alarm, you want to lower that, then you're going to miss everything. | |
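A toy simulation makes the trade-off visible. The two overlapping Gaussian populations are invented; the point is only that sliding the decision line lowers one error rate while raising the other.

```python
import random

random.seed(0)
innocent = [random.gauss(0.0, 1.0) for _ in range(10_000)]  # harmless people
threat   = [random.gauss(2.0, 1.0) for _ in range(10_000)]  # real threats

for line in (0.0, 1.0, 2.0, 3.0):
    false_alarms = sum(x > line for x in innocent) / len(innocent)
    misses       = sum(x <= line for x in threat) / len(threat)
    print(f"line at {line}: false-alarm rate {false_alarms:.2%}, "
          f"miss rate {misses:.2%}")
# Moving the line right shrinks false alarms but grows misses.
```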
It comes up, to give you a sense of how gnawing this is, because there's no easy way out of it. | |
It comes up in our systems of justice. | |
Look at the criminal law. | |
In the criminal law, you're charged with a crime, and it's state versus Howard here, and you're either going to get acquitted or found guilty. | |
And so again, we have the equivalent problem: the false alarm of being found guilty when you're innocent, or the miss of being acquitted when you're guilty. | |
And you can be the hard-hearted conservative on the one hand or the bleeding-heart liberal on the other, and either one is going to have problems. | |
You're either going to be letting more guilty people on the street if you're worried about innocent people going in jail, or vice versa. | |
If you want to be real tough and not miss a crime, you're going to put a lot of innocent people in jail. | |
How do we draw those lines? | |
Society's kind of drawn that one for us. | |
But when we come to these issues like detecting terrorism or potential terrorism, it gets even scarier as governments get more powerful, not just because they have better surveillance techniques, and I don't mean just visual, I mean everything in the bit streams we produce, but because of all these algorithms we're talking about: who do you think has the most of them and runs them most effectively? | |
Of course, the large governments do. | |
And it's going to be ever tougher in a free society to balance these concerns here. | |
Just this issue of false alarm versus misses. | |
Now, I personally think, Howard, that when it's something as important as whether you're potentially carrying a bomb on a bus or especially on an airplane, if you're called aside for a false alarm and you're not a threat, I think you should be really compensated. | |
And I think to be a good utilitarian type person about that, I bring this up because one of my heroes in the UK has always been Jeremy Bentham, along with his student, John Stuart Mill. | |
I learned all about them at school. | |
Good. | |
And I resurrect John Stuart Mill in my soon-to-be re-released novel, Nanotime, by the way, as an intelligent agent, but that's another matter. | |
And the correct person who should pay for the false alarms is you. | |
You and every taxpayer in London, because you're getting the benefit of it. | |
And I think if there were a bit more of this going on in our governments, and certainly here in the United States as well, we would be a little more careful. | |
I hope we would be, and how we run these algorithms. | |
Another answer is, well, is there any way to both lower the risk of a false alarm and lower the risk of a miss? | |
Answer is yeah, get more data. | |
But at some point, you've got to call it on the data you have, and you're always going to be faced with that trade-off. | |
But you can have as much data, as we call it here, as you want. | |
But there comes a point where discretion would be used by a human being. | |
And an automatic system, unless the logic is very fuzzy and very good, is not going to be able to use discretion, or is it? | |
I mean, in some sense, it does, because the light bulb goes on and says, okay, I've got five high-risk candidates here, and I've got three low-risk and so forth. | |
And then it may be up to you to call it from there. | |
But the picking of high and low risk, which are, by the way, inherently fuzzy terms, that's a form of machine discretion, I would say. | |
But in terms of pulling the trigger, to going out and pulling someone over and doing whatever you've got to do or putting them on the no-fly list or whatever the latest technique is, that is an act of policy discretion or of a police force discretion. | |
Right. | |
Okay. | |
Now, the other thing. | |
I think one more thing on that. | |
I think people don't like doing that because they can get sued and so forth. | |
So I think there's a tendency to automate as much of that as possible. | |
And that can have a lot of unintended consequences. | |
I just want everyone to know there's no free lunch here. | |
If you want to detect more potential criminals or whatever it is, you can do that besides it being intrusive. | |
But you're also going to have this issue with a lot more false alarms. | |
On a day-to-day mundane level, I was speaking with Professor Kevin Warwick, who is a big AI and robotics man in the UK, known throughout the world, a couple of nights ago on a radio show. | |
And he and I were talking about the artificial-intelligence-driven or -powered car, which is getting closer and closer. | |
You know, we're moving to a state where we're not going to have to drive cars in the future. | |
That'll all be done for us. | |
Don't you buy it, but go ahead. | |
Oh, okay. | |
Well, I'd be interested to know why that is. | |
But we came up with this scenario, and I said, well, you know, how is any artificial intelligence system ever going to work out? | |
A scenario that I was in that I've talked about on here before, but it's the best one I know. | |
I was on one of our motorways here, doing 60 miles an hour in the middle of three lanes in complete darkness. | |
In front of me, on the right-hand side in the right lane, was a car on its side, okay, across the carriageway. | |
To my left, if I wanted to make sure I kept away from any debris that there may be, there were other cars coming up. | |
You know, they were quite fast. | |
If I veered into the left-hand lane from my middle lane, then I might have had a problem. | |
One of them might have hit me. | |
You know, there may have been problems with people maneuvering in time, so I couldn't really do that. | |
But directly in front of me was a wheel, a complete wheel with tire on its side. | |
My choice in about two seconds was whether to veer into the right-hand lane and hope that I could stop before hitting that car, which was sideways onto me, not really a choice. | |
Go into the left-hand lane and maybe have a collision with traffic that will be there and cannot react quickly enough to what I do, or hope that my car flies over the wheel in front of me and is not too badly damaged. | |
And in about a second, I worked out, having slowed down and done all the calculations in my head, that it was better to go straight ahead. | |
And indeed, it was. | |
There was quite a lot of damage to the underside of my car, but I think it was the safest thing to do. | |
And as I said to Kevin, and as we pondered together, how would any algorithm-driven vehicle be able to make that decision? | |
What it would do is compute, much faster than humans think, different hills of probability, find the hill that's tallest, find the point directly beneath its peak, and go with that. | |
It would go with what's called a max likelihood outcome. | |
It may not be the right one, but on average, it tends to be. | |
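In code, that maximum-likelihood rule is just an argmax over the candidate actions. The scenario options and probabilities below are invented stand-ins for whatever a real driving model would estimate.

```python
# Score each candidate action under the model and pick the most likely
# safe outcome. The actions and probabilities are hypothetical.
outcomes = {
    # action -> estimated probability of getting through safely
    "swerve right toward the stopped car":  0.15,
    "swerve left into faster traffic":      0.35,
    "brake and drive straight over debris": 0.60,
}

best = max(outcomes, key=outcomes.get)
print(f"max-likelihood choice: {best} (p = {outcomes[best]:.2f})")
```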
But let me say, Howard, I have some experience in the smart car world. | |
If you go to my webpage, where I post a lot of things, everything there is free, just papers and content. | |
I published a paper in the mid-90s on smart cars. | |
And you may not know this, but there's a freeway, a highway system in Southern California called I-15. | |
And it's a backway down to San Diego in the southern part of the state. | |
And it's known as the Avocado Highway for the avocado orchards that line it, but it's also a place that has a changeable lane. | |
So during peak traffic, you can switch one side to the other. | |
God, that sounds dangerous to me. | |
Well, it is, but it's done through Caltrans, the California Department of Transportation. | |
Now, I bring that up because I worked with those folks in designing what's called a smart car platoon. | |
So not just a self-driven car, but even back in the early 90s, they had worked this out far beyond that. | |
We really want to have groups or platoons of cars. | |
It's much more efficient. | |
Reason is you can't keep building freeways and highways. | |
We've long since run out of them here in the Los Angeles area. | |
But last time I was in London, it's been a quarter century, it was still pretty congested there too. | |
It's much more efficient if you can group cars together like packets on the internet in platoons. | |
And they join the platoon and they leave the platoon. | |
So we did an experiment with a two-car platoon on I-15 Freeway in early July of 1993. | |
We wrote it up. | |
And there was a lot of work on that at the time. | |
Now, of course, the techniques today are much better. | |
The computers are more powerful. | |
The Moore's Law effect, the doubling of the density of circuits on a chip roughly every two years, has continued. | |
It's going to keep continuing for some time. | |
The sensors are better. | |
The algorithms are essentially the old algorithms that compute these probability hills, but they run a lot faster than they did. | |
Still, something brought that research pretty much to a stop. | |
And I think it's going to be the same thing here, and that is the law. | |
Who do you sue when there is a crash of a smart car? | |
And more generally, who do you sue when the platoon crashes? | |
Answer: everyone. | |
And what happens here, Howard, is the litigation risk, not just the risk you're thinking of yourself. | |
And yes, you're going to have to pay, I would guess, a higher insurance premium to offset the risk if you use an automated car a certain percentage of the time. | |
But it's like what happens in medicine, especially in the United States, but I think in the UK too, that you get institutionalized fear of litigation. | |
And so, for example, we have a lot of concerns in the United States that in the medicine we order too many tests. | |
And lots of late-life procedures that are extremely expensive are done not because it's necessarily in the best interest of the patient, but out of a morbid fear of litigation or your malpractice carrier. | |
And that goes all the way up. | |
So somebody's got to insure those cars. | |
And behind the insurers are the reinsurers. | |
For example, in the United States, Warren Buffett turns out to be a reinsurer, but there are many of those. | |
And no one, to my knowledge, has worked out, let alone really addressed at this point, the litigation issue. | |
And so to do it on a large scale and the way in which you have to change traffic, maybe a little bit will happen incrementally and the like, but it's just so many. | |
There's so much that goes on in the modern tort law and product liability law, which, by the way, really came out of California with the long-gone California Supreme Court Justice Roger Traynor. | |
And it spread much around the world, certainly to the United Kingdom and elsewhere. | |
But it's a big deal. | |
And it leads to something called a class action. | |
And you may not worry about it, and I may not worry about it. | |
But if you're a car manufacturer or an insurer, you've got to worry about it. | |
I'm sure most people do know, but for those who don't, a class action is where a bunch of people, and I remember studying this at university, get together and hope that there is strength in numbers, and they put a joint case and thereby hope to have a higher chance of victory. | |
Exactly. | |
And it's more than that. | |
It comes out of an old British action from the common law hundreds of years ago, called the bill of peace, and it exists simply for efficiency and economy. | |
If there's been a plane crash, like the one we just heard about in Africa, you're going to have a lot of people suing for wrongful death, suing the airline, and very possibly, in this case, suing the manufacturer, Boeing. | |
Now, in that case, the litigants have essentially the same damages: loved ones who died. | |
The facts of the matter are identical. | |
Common legal cause, common factual cause. | |
It cries out for what is known in our system, and in your system, as joinder. | |
And you can do that on a large scale. | |
The difference, however, is that once we got into the 60s and 70s and 80s, that accelerated into the mega-tort case. | |
In the United States, late at night, you'll see advertisements. | |
If you've got a problem with this kind of Medical procedure, call this number and join this class action lawsuit. | |
And a lot of people, I understand, while we're on the subject, are naturally suspicious of these things, because the people in the class may only get, not in the case of the plane crash, but if it's a defective device or a coupon, a $10 or $20 or pound refund, whereas the lawyers involved get a tremendous amount of money. | |
And so it often looks like a way to enrich lawyers. | |
But you have to remember, if you don't have that class action, the modern mega class action, we call it a 23B3 here in the United States. | |
I don't know what you call it in England. | |
But if you don't have that, Howard, you're going to encourage companies to rip off as many people as they can in small amounts to get away with that. | |
So it's going to be out there. | |
And it's a natural thing here, and it'll come up to a great extent. | |
So the fear of the class action, the legitimate fear of that in modern product liability law is, I think, going to put a big damper. | |
It did in the 90s. | |
It basically stopped the research on the platoons, at least from the major car manufacturers. | |
So are we saying, just to boil this down, something that I've never heard anybody say before, and it makes absolute sense from what you've just said, that the development of artificially driven cars, which we're told is coming down the track to us, is going to be halted or stopped or slowed down or changed because if there's an accident, | |
and inevitably there will be, more people, more individuals, more organizations can be sued because in the case of something where there is not a driver, where there are just people sitting as passengers and they're hurt and the cars are damaged, you're going to have a go at the people who designed the roads, the people who put the road signs up, the people who designed the steering wheel, the people who were involved at every single level, whereas in the past you'd say, well, it was driver A or driver B. Right. | |
You're going to have a go at me, the guy who designs the algorithm. | |
You see, he goes all the way back here. | |
And it's the nature of tort liability, and of what we call the action over, to do that. | |
It'll still proceed, but it'll take a lot longer than it otherwise would have. | |
And it's going to push up insurance premiums, and it's going to complicate everybody's lives. | |
So how do we go ahead from here? | |
Presumably, we could deploy machine learning lawyers then. | |
What do we do? | |
I think we proceed incrementally, but it will be slower than it otherwise would. | |
We live in a system. | |
It's the nature of the free world that we don't have knights like in the old British medieval system. | |
We have lawyers now. | |
And I don't mean that facetiously. | |
I mean somebody who mediates the force of the state. | |
And we have it on a large scale. | |
Just to go back and give you some sense of the other side of the argument, the rationale for this. | |
If I take a can of cola out of the machine and I pop it open, it used to be let the buyer beware. | |
All this switched in the United States in the 60s and 70s to let the manufacturer beware. | |
And that's what we're talking about. | |
And the argument for that, Howard, was not some kind of socialist impulse, although I'm sure some people on the left might have favored it. | |
But really it wasn't that. | |
The argument is that the manufacturer of the cola is in a better position to know the actual risks and to spread the very faint costs of insuring against those risks by slightly raising the price and so forth, versus little old you and me; what do we know about the can of cola when we open the thing? | |
And likewise, about when you get in the smart car directly that you're sitting in or part of a platoon or part of an Uber type arrangement. | |
There's a lot of potential litigation there. | |
And at the highest levels, I don't think they're going to want to proceed until they've worked this out. | |
It'll take litigation, because these will often be what are called cases of first impression, heard for the first time. | |
But your point is correct. | |
It will slow the pace of actual applications. | |
Why is nobody talking about this then? | |
We are, you know, this is the first conversation that I'm aware of about something really fundamental. | |
We're just being told that the future is there and it's coming your way. | |
Whether you like it or not, it'll be a couple of years and we're all going to be in, well, it won't happen overnight, but it'll be an incremental thing over a decade or two. | |
And by 20 years from now, we'll all be in automated cars. | |
You say not so. | |
I say, again, you can go to my webpage and look at my publications in the mid-90s, or my textbook, Fuzzy Engineering. | |
I have a paper on it in there. | |
I was deeply involved in this. | |
We had essentially the same algorithms back then, just not as powerful computers, and that made the difference. | |
And it got stopped in its tracks by the concern over litigation. | |
And we're going to see this in many, many other places in the law. | |
And so I happen also to be a professor of law and have looked at this and some other issues. | |
But there's all kinds of things up and down classical contract law, all the way up to modern intellectual property. | |
Who owns, for example, the art that machine intelligence creates? | |
It's very far from clear from the old answers that we have. | |
And these applications, it'll take a while. | |
So just because you hear about a paper published at a conference and something that looks good in a demo doesn't mean you're going to have access at the consumer level when there is this potential systemic risk. | |
It's going to take a long time to work out. | |
It'll come, but the rate at which it comes will be slower than it otherwise would be. | |
Well, what a fascinating thought for my listener and myself to contemplate because I've never thought about this. | |
We've got a lot of ground to cover, and not a lot of time to cover it. | |
You've applied, or your colleagues have applied in the field, fuzzy cognitive maps to the thing that has been at the top of the news here for the last two years, to the point where we're all sick and tired of it, to be perfectly frank. | |
And by the time people hear this, the scenario will have changed. | |
So I'm not going to comment on the actual physical happenings at the moment, but Brexit, the fact that the United Kingdom, by a margin of 52 to 48, voted to exit the European Union, something whose consequences and difficulties I think neither the people who voted for it nor the people who voted against it foresaw, and which we are now enduring or experiencing, depending on how you look at it. | |
You've applied a fuzzy cognitive map to Brexit, and you've come up with four different scenarios here. | |
And I love the names of them because they are very evocative. | |
The first one is amicable transition. | |
The second one is simple separation. | |
The third is hostile divorce. | |
And the fourth is called clean break. | |
And they are all, for a multitude of reasons, very different. | |
How can you apply fuzzy logic to the process of Brexit, that most human of things? | |
Let me give an answer first of what is the fuzzy cognitive map. | |
I introduced this many, many years ago. | |
Again, I have a strange background. | |
And when I was working on my PhD, Howard, I was actually working in aerospace for the biggest defense contractor in the world at the time, General Dynamics, initially on the Tomahawk cruise missile. | |
In the course of doing that, I, well, I can't say what I was doing, but in the course of doing that, I saw certain things. | |
And then in my off hours, I was 23, 24 years old, I would think about it in more abstract mathematical terms. | |
And it led to a paper, you can see it on my webpage, it came out in 1986, called Fuzzy Cognitive Maps. | |
And they're very popular now. | |
Now, what's different about them and why would they apply to Brexit? | |
What's different about a fuzzy cognitive map, and this is an important point, it uses feedback. | |
In almost all the systems we've been talking about so far in this podcast, and in most systems in applied science, not all, but in most systems in AI, you put something into the machine on the left and it spits something out on the right. | |
That's called feed forward. | |
It doesn't come back and circle back and start over. | |
You can run it many times, but it doesn't, like a snake swallowing its own tail, take as input its own output. | |
That's feedback. | |
Now, when you go to model something in the policy world, like Brexit, or I was recently working with some colleagues at the RAND Corporation, modeling how terrorism arises in different settings. | |
And you can see a paper on that also on my webpage. | |
All kinds of things feed back. | |
There's lots of multi-way causality. | |
One arrow links from a concept like Brexit to energy policy, and energy maps to trade, and trade maps back to Brexit. | |
And it goes in circles and tangles. | |
Now, in classical ordinary algorithms, you get caught in infinite loops in those tangles. | |
In a cognitive map, we like that. | |
It creates feedback and you turn this thing on, you stimulate it with a pattern, a scenario like Brexit, or something that could follow or precede it. | |
And then it swirls around, Howard, and in some sense cools down into an equilibrium and makes a type of prediction. | |
Now, this is not a numerical prediction in general. | |
It is a pattern prediction. | |
But it is a prediction. | |
And it does so on a scale that exceeds most of the AI techniques. | |
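For the curious, here is a minimal fuzzy cognitive map in the spirit of that description: a signed, weighted adjacency matrix of causal influences, a squashing function, and a feedback loop that runs until the state settles into an equilibrium pattern. The three concepts and edge weights are invented toy values, and this simple version does not clamp the stimulus node; the actual Brexit map had 29 nodes.

```python
import numpy as np

concepts = ["Brexit", "trade", "energy prices"]
W = np.array([          # W[i, j]: causal influence of concept i on concept j
    [0.0, -0.7,  0.5],  # Brexit depresses trade, raises energy prices
    [0.6,  0.0, -0.4],  # trade feeds back on Brexit pressure and prices
    [0.3,  0.2,  0.0],  # prices feed back on both
])

def squash(x):
    return 1.0 / (1.0 + np.exp(-x))  # keep activations in [0, 1]

state = np.array([1.0, 0.5, 0.5])    # stimulate the map with a scenario
for _ in range(100):
    new_state = squash(state @ W)    # output fed back in as input
    if np.allclose(new_state, state, atol=1e-6):
        break                        # settled into a fixed-point equilibrium
    state = new_state

print(dict(zip(concepts, state.round(3))))  # the predicted pattern
```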
So, for example, the usual way you would model something like this in artificial intelligence, and it's taught at all the universities, is something called a Bayesian belief tree. | |
And it's just what it says: a tree. | |
It's a bunch of nodes with arrows, but there's no feedback between them. | |
There's some rare exceptions on this, but it doesn't swirl around. | |
In order to make the probability math easy, we keep a small number of nodes. | |
You can't have many nodes and still apply the probability math. | |
And in the Brexit model, researchers at Cambridge and the University of Leeds and elsewhere came up with 29 nodes, which is extraordinary in an AI setting. | |
And they're all fed back, and they had two different, I don't know all the details on it, but they had two different workshops of experts to locally draw the arrows. | |
Then they put it together, because you can allow each expert to draw their own picture. | |
What happens with a cognitive map, why they're so popular, Howard, is you can draw your picture of what you think is relevant of Brexit. | |
You can sneak in some of your political prejudices or take them out and put in other causal links, and I can do the same thing, and five other people can do theirs. | |
And then we can naturally combine them to one unified and representative cognitive map that represents all of our views. | |
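Combining the maps is, to a first approximation, just weighted matrix addition. Here is a sketch with two hypothetical experts; the matrices and credibility weights are invented.

```python
import numpy as np

# Each expert draws their own causal picture over the same concepts,
# and the maps are averaged, optionally weighted by credibility,
# into one unified group map.
W_alice = np.array([[0.0, -0.7], [0.6, 0.0]])
W_bob   = np.array([[0.0, -0.3], [0.9, 0.0]])
credibility = {"alice": 0.7, "bob": 0.3}

W_group = credibility["alice"] * W_alice + credibility["bob"] * W_bob
print(W_group)  # one representative map combining both views
```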
And when you get more and more people doing this, Howard, you end up with very big maps and kind of a big cognitive map mind. | |
And so you're lucky to get any kind of prediction at all, but you do get a symbolic prediction. | |
And the question these researchers addressed, and they did this before the Brexit vote, so as not to be prejudiced by it, was what effect would Brexit have and different forms of it on the British energy market? | |
Now, I don't know all the findings of it, but it's state of the art in terms of the technique, the number of nodes involved, in this case 29, the sheer complexity that exceeds the ordinary AI computation, and the fact it could be applied so quickly to a current issue. | |
I think you'll see many, many more of these. | |
Cognitive maps have taken off recently, even though, again, they originate out of the 1980s and from an earlier era. | |
We didn't have the kind of computers then, and we certainly didn't foresee the level of applications, but they're increasingly popular at a socio-political level. | |
Okay, well, this third scenario, hostile divorce that came out of this research, I mean, I love that term. | |
We should be using these terms in the media. | |
It says the hostile divorce would see the UK suffer an adverse economic shock generated by lost access to the single market and have it respond by reducing regulation to attract business from abroad. | |
We've had this talked about until a lot of us are sick of hearing about this. | |
That seems to be the worst case scenario. | |
And it seems that the best case scenarios, if I've got this right, are the thing called amicable transition, which I think is what the politicians are trying to work towards, or indeed clean break. | |
Clean break, apparently, according to this fuzzy cognitive model, is where the UK breaks ties completely, but manages to agree on trade terms that preserve partial access to the single market of Europe. | |
The scenario reflects the aspirations of many Brexiters. | |
Yes, we know that. | |
But it means, it seems, that it's hard but clear, is what the fuzzy cognitive model says. | |
But what does the fuzzy cognitive model tell us about what we should do? | |
Because a lot of us don't know. | |
I think it's not meant to be so much that as it is just a peek into different possible futures. | |
So you could put a description of each of those scenarios, like hostile divorce, and then let the map swirl. | |
Now, remember, the focus here was the energy market in the United Kingdom. | |
And so it would affect different variables in equilibrium in different ways. | |
And then you could look at the amicable transition, and it would likely affect the same variables in different ways. | |
And that's what they were trying to work out in advance Before the actual Brexit vote. | |
And the point was that you couldn't model something of the complexity of Brexit, as crude as these models are, without lots of feedback. | |
And unfortunately, feedback is something the current era of AI pretty much eschews, not because the world isn't massively fed back, it is. | |
Everything's connected to everything else, even through gravity, for example. | |
But simply because it's too hard computationally. | |
And so I'm not sure, Howard, where they go with that. | |
I think it's more meant to be a method. | |
And then, as the facts approximate one of those scenarios, you can run it in this cognitive map or others. | |
Again, you can go back to their cognitive map. | |
You can challenge it. | |
You can reweight it. | |
And you can add to it as many other cognitive maps as you want. | |
In fact, a lot of us envisage this becoming a social media type event where each person sketches their view of, for example, the Brexit breakup, or, in the United States, something equivalently controversial and divisive, such as whether the current president will be impeached. | |
You could easily do a fuzzy cognitive map. | |
In fact, some colleagues might have talked about doing that. | |
And it's not telling what's the right answer, Howard. | |
I want to be clear on that because you can pack in a lot of your value judgments. | |
But it would say, given these assumptions, this would be a pattern you'd expect to see. | |
And when you're combining that many variables, when they're that massively interconnected, there's really no easy way the human brain can see that. | |
You need some kind of automated algorithm to do it; these techniques are a first approximation. | |
So these techniques will give you a kind of guidepost, but of course they can't tell you what the outcome is going to be because they won't know, just as you and I are not psychic as far as I'm aware, what events will feed into the system, into the model. | |
That's right. | |
So what we try to do is a form of what's called Monte Carlo simulation. | |
We try to get little snapshots of the future by averaging into the future. | |
We take models, we estimate the path, we do that many times, and then we average it. | |
That's how we get your nightly weather forecast or your daily weather forecast. | |
And that's what we do and have done since the early days of the H-bomb, which is literally where the technique came out of, in 1953 at Los Alamos. | |
So a form of averaging from the models. | |
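Here is a toy version of that path-averaging idea, with a simple random walk standing in for a real model; the drift, noise, and starting value are invented.

```python
import random

random.seed(1)
STEPS, RUNS = 10, 5_000
totals = [0.0] * STEPS

# Simulate many noisy paths forward from today, then average them
# step by step to get the Monte Carlo forecast.
for _ in range(RUNS):
    x = 100.0                        # today's value of some quantity
    for step in range(STEPS):
        x += random.gauss(0.5, 3.0)  # drift plus noise
        totals[step] += x

forecast = [t / RUNS for t in totals]
print([round(v, 1) for v in forecast])  # the averaged future path
```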
The trouble is, if we're working with something like Brexit, we don't have a model of all the social variables. | |
There's just no theory. | |
There's no equations you look up. | |
If I'm worried about neutrons colliding in high-energy particle physics for a bomb, there are equations I can look up. | |
I may not be able to solve them, but I can see them, and I know that in quantum mechanics, what we observe are averages, and I can throw all the Monte Carlo stuff at it. | |
And they've been doing that literally since 1953. | |
The trouble here, and a lot of problems we'd like to address in the future, is we have to come up with a model first, as approximate as they are. | |
Hopefully they'll improve as more experts contribute expertise. | |
We want what's called an expert sample size effect. | |
And then, Howard, we do lots of simulations, lots of averages, and hopefully the average path roughly corresponds to the near-term future, maybe even the far-term future. | |
That would be a nice place to be. | |
Unfortunately, here in the UK, and maybe it's the same in the US, data and number crunching has got a bad name. | |
And one of the reasons that it's got a bad name is that forecasts of the trajectory of the British economy, which we get frequently here and are used as the basis for all sorts of decision-making, are, I would say, and I don't think it's a big statement to say it, my friends in the UK probably would agree with me, invariably completely wrong. | |
I don't want to say completely wrong, but I would suspect in the short term, what's easier to predict, they're more accurate. | |
And the further out you go in time, well, the less accurate they get. | |
That's just the nature of it. | |
But again, if I take even simple econometrics, there are relatively straightforward equations of aggregate supply and aggregate demand. | |
I'm not saying they're the most accurate, but there is a theory there. | |
And then you can look at past data to tune that. | |
And econometricians do a pretty good job of this. | |
Run the current initial conditions and then predict a little bit into the future. | |
If it's the weather, I have some differential equations called the Navier-Stokes equations. | |
They're basically Newton's equations of force equals mass times acceleration and conservation of energy, all gussied up in a fluid setting, but I have a model, and then I can run those forward. | |
In fact, that was the, going back to the bomb days where a lot of this comes from the United States in the 1950s, that was also the basis for hoping to be able to modify and predict the weather, which we have today. | |
But when it comes to things like Brexit, the stuff that we fight about and are going to fight about, they're very complicated, they're high-dimensional, there are lots of variables, and just a little feedback path can have a chaotic-like effect that completely changes the output. | |
It's difficult, however, because we simply don't have a model. | |
We need things like more big, aggregated cognitive maps. | |
By the way, I've introduced a term, you've heard of big data. | |
With cognitive maps, I call it big knowledge, because we would like to take all the opinion pieces in your newspaper, legal testimony, expert analyses, and translate those, and people work on this, into cognitive map, causal map type pictures, and then aggregate them and aggregate them over time. | |
And these models do tend to improve with time and get big. | |
And so we, instead of just starting afresh with each argument, we have some kind of model about how the parameters in society fit together. | |
So our automated decision-making, and this is hopeful, is going to get better as we move in this direction, because it is going to be based on what happens when you're born. | |
You're born as a baby, you know nothing, you learn, and by the time you get to 50, you know an awful lot. | |
If we build the same principle into our systems for operating stuff and making decisions for us automatically, electronically, then they will get better at doing what they do because they will have knowledge. | |
They will have experience aggregated over time. | |
Just add the proviso: on average. | |
On average. | |
Other than that. | |
You always remember the statistician who drowned in a lake that was only 12 inches deep on average. | |
So a lot can happen with that average. | |
And unfortunately, the best we can do here, and I know we want more, we want a crystal ball and we don't have it, is predictions: averaging the future. | |
And the further out we go, like in the case of looking a century into the future in terms of carbon emissions and warming effects, the more uncertainty there is, and the wider the 95% confidence bands are going to be. | |
That's in the nature of things. | |
That's in the nature of the database techniques. | |
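One way to see why the bands must widen is with the simplest possible forecasting model, a random walk; this is an illustrative assumption, not a claim about any particular climate or economic model:

```latex
% Random walk: x_{t+1} = x_t + \varepsilon_t, with \varepsilon_t \sim N(0, \sigma^2).
% The h-step-ahead forecast distribution is then
x_{t+h} \mid x_t \;\sim\; \mathcal{N}\!\left(x_t,\; h\,\sigma^{2}\right),
% so the 95% confidence band widens with the square root of the horizon:
x_t \;\pm\; 1.96\,\sigma\sqrt{h}.
```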
And this data crunching shouldn't have a bad name, because in this scenario, Howard, what we want to trump authority and politics with is data, is facts. | |
In fact, we say in statistics, data is gold. | |
It's just that, of course, data can be abused and can be hard to get. | |
And often, far more often than not, I think, the data is just limited. | |
And we try to get more out of it than the data really has to give. | |
Okay, before we move on to your work of fascinating fiction, is the world that is led by artificial intelligence, I mean, we're in it anyway now, but increasingly we're going to be, is that a better world for us? | |
Do we get better outcomes? | |
Do we have better welfare? | |
On average, yes. | |
But again, that average has variance and a lot of bounce, especially at the beginning of the process. | |
The trouble I see here is manifold. | |
And one is that the AI world, and in general the automated world, is going so fast, and things like our law are so slow, that distortions happen. | |
This has already happened in the United States. | |
I think it's happening in Britain and elsewhere with privacy laws. | |
Now, unfortunately, you guys don't have a First Amendment. | |
Shame on you. | |
I'm surprised there hasn't been an effort to come up with something like that. | |
But our First Amendment and our Fourth Amendment over here, can I just tell your listeners? They may not remember what the Fourth Amendment to the United States Constitution is and why it relates to Britain. | |
It's our laws against unreasonable searches and seizures. | |
And when the British were in charge of the colonists back in the 18th century, they tended to be very harsh under old King George, with what were called general warrants, and would ransack people's houses in Boston. | |
In some sense, Howard, the more your ancestors were harsh on the earlier Americans, the better it was for the current Americans. | |
Because we get this really long, detailed Fourth Amendment, which you don't have. | |
And unfortunately, ours is so shot full of holes, and after 9-11, the holes have gotten wider and wider. | |
But that's an example. | |
Search and seizure, the trade-off between security and liberty, which we've been re-examining, I think, and getting wrong largely since 9-11. | |
That's something we didn't anticipate. | |
So those kinds of things, big problem. | |
You look at China. | |
Just one more thing. | |
China is working on a system right now called social credit. | |
I believe it actually originates from some scheme in Britain in the early 20th century. At some point there will be some government posting of how good or bad you've been, Howard, and you'll be rewarded, or maybe prices will be more expensive for you somewhere, and other things. | |
It is so easily within the grasp of modern data science or AI, whatever you want to call it, let alone the future. | |
It's all about the political will. | |
And we're not updating our laws against privacy invasions, which is deeply tied, I think, to this. | |
And I don't know what you guys are going to do because you don't have a Bill of Rights. | |
We have one. | |
We're having problems with it. | |
But it's very concerning. | |
So it will be a better world for many things. | |
Average health benefits and things like that will certainly go up. | |
But for an old-fashioned, you would call it liberal there, or here we call it libertarian or civil libertarian, like me, I'm not so sure, Howard. | |
I'm really not sure what the future will be like, with the sheer intrusions on liberty. | |
And what I find especially disturbing is that it doesn't seem to bother a lot of young people. | |
They don't care. | |
Yeah, I find that all the time. | |
And I just, I don't despair, but I am bemused and surprised and shocked sometimes. | |
They need to be thinking about these things. | |
And what they do is they trust. There is something that seems to have crept in, and I'm not criticizing the current generation at all, because we get better as we go on. | |
Of course we do. | |
But there is an acceptance that has been bred into people. | |
And I don't know whether it's the mass media culture that they feed off, what it is. | |
I don't know whether it's big corporations making them think this way. | |
They don't question. | |
They think that what they're told is what is right. | |
We come from a generation where we questioned and we dug our heels in and we said no, no, no. | |
But not just that, Howard. | |
I mean, you probably want to keep private the fact that you're going to visit person X this weekend. | |
Now, kids, and everyone else it seems, are used to just putting that on social networks. | |
I don't think. | |
It's an innocent trusting that I don't quite understand. | |
And it affects everyone else, too; everyone sees everyone. | |
I have to add one more thing, a criticism of technology. | |
I have to say this as a professor, and I don't mean just the fact that cell phones go off during my lecture and things like that. | |
But it is the interference, the shrinking of the concentration span. | |
In other words, to do the kinds of stuff that I do in teaching a graduate math course, for example; by the way, I try to do my lectures from memory. | |
I prepare them that much. | |
I have backup notes. | |
It takes a lot of concentration, Howard. | |
That's why I'm often in the mountains for that. | |
A lot of time I'm on what are called noise fasts, away from digital devices, and no Google, no nothing. | |
And I'm finding each new crop of students has a more difficult time getting away from the smartphone and doing what we call in the old law a meeting of minds, like you and I are having right now, like Art and I used to have. | |
And the amount of time spent concentrating, like reading something called a book, you know, from the studies I saw, and it's been a while since I looked at them, it's not getting any better. | |
And in order to do deep thinking, let alone creative thinking, you need a lot of time. | |
It takes a while to do it and to go over issues. | |
And that's something that the advances of smartphones and as they will be moved, migrating to our glasses and then to our retinas, wherever they're going, that doesn't favor it. | |
Quite the contrary. | |
There's just too much of a moral hazard here, I think, that we're not addressing. | |
And is there too much of a crutch? | |
Is it linked to that thing they call FOMO, F-O-M-O, fear of missing out? | |
The fact that these people won't give time to what needs to have time allocated to it, because they are afraid of missing out on something that might appear on social media, might be shown on the television at some point, or something digital that they have to be there for. | |
It is related. | |
I've seen anecdotal claims by former employees of some of these social media firms that they use social psychology techniques, basically based on dopamine, to get you to keep looking at things, and positive reinforcement from friends. | |
There is that, but there's also just the old moral hazard: we don't use slide rules anymore, we moved to the pocket calculator, if people even do those kinds of things. | |
And a lot of us got less good at doing sums, at doing those kinds of things. | |
It's just an inevitable thing. | |
But when you're not reading things deeply, I see so many people just doing what's called info snacking, trusting Wikipedia for God's sake. | |
A lot of graduate students cite that. | |
And looking very quickly, and there's an economic basis for that: with so much content, the opportunity cost of looking at any one given piece of content just keeps going up. | |
Well, we're losing focus and properly trained brains. | |
We're getting increasingly distracted. | |
There was an old Kurt Vonnegut short story called Harrison Bergeron, in which, in a totalitarian state, the government put a little noise device in your ear that went off every few seconds to keep you from focusing. | |
We are, in effect, doing that to ourselves here with distractions. | |
So if I may say this, read a book. | |
The paper kind. | |
This is apropos of nothing at all, but I remember one of the ways that I used to try and get higher marks when I was studying a politics degree was that Liverpool was a great place for studying politics because it always has been a political melting pot and it had some great political bookshops. | |
So I would go and find the most obscure book about the Australian federal system and I would base my latest essay on federalism on this obscure book that I knew that the lecturer who would be marking it had not read. | |
I would read this short book about the nature of Australian federalism, ask me any question, and base my arguments upon that. | |
The good old days of when you read stuff. | |
Yeah, books, lectures, and bookshops. | |
Man, they're kind of gone away here, but it's a concern. | |
But I really do mean this, especially for young people. | |
If you want a competitive advantage here, read a book. | |
I recommend, believe it or not, War and Peace. | |
First off, I think it's a magnificent book, and there's a lot of wisdom from Old Count Tolstoy. | |
And extra credit for the reader: notice the discussion of the integral and differential calculus he sneaked in there. | |
But I think that should be not on your bucket list, but on your other list. | |
Just your brain is capable, we know, of so much more than you're applying it to. | |
But you've got to apply it. | |
You've got to train that brain. | |
It takes work, and info snacking is not how you do it. | |
You've got to read books. | |
You've got to go deep. | |
And it takes hours. | |
Concentrated thought. | |
You know, my big sister, Beryl, I was a popular media kid, and this is a long time ago. | |
And she used to say to me when I was struggling with schoolwork, you've got to apply yourself. | |
And how right she was. | |
I don't think anybody says that to anybody anymore, but maybe I'm just, as we say over here in the UK, an old geezer. | |
Got to talk about, when we're talking books, about your work of cyber fiction. | |
Talk to me about that. | |
It's being re-released. | |
Yeah, I wrote many years ago a novel called Nano Time. | |
And it introduced the term. | |
What nanotime means is what it would be like to think with a computer chip in your head or your entire brain backed up on a chip. | |
So the first order would just be take whatever you're thinking about now. | |
You're looking at something, you're feeling something. | |
How would that run? | |
Well, it would run millions or billions of times faster. | |
And billions of times leads to the idea of nanotime, versus the way it runs now in meat time. | |
For example, right now the electrical signals in your nerves travel at most a couple of hundred meters per second, not very fast. | |
And it's been that way for a very, very long time on this planet. | |
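The rough ratio behind "millions or billions of times faster", taking his couple-hundred-metres-per-second figure for nerves against signal speeds in silicon, which run at an appreciable fraction of the speed of light:

```latex
% Nerve impulses: at most a few hundred m/s.
% Electrical signals in silicon: on the order of 10^8 m/s.
\frac{v_{\text{chip}}}{v_{\text{nerve}}}
  \;\sim\; \frac{10^{8}\ \text{m/s}}{10^{2}\ \text{m/s}}
  \;=\; 10^{6}
```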
But when you start to have computer uploads, and we're starting that, with a lot more of it coming, and you start to back up the brain. | |
Again, fundamentally, Howard, the flaw of the brain is that there's no backup. | |
I mean, we have to fix that and we will. | |
But when you do that, the time scales are different. | |
The setting, though, is World War III. | |
Now, if I could just mention it for the readers: it's being re-released in the UK, I think next month, through Endeavor Venture. | |
And it was actually, you may find this of interest. | |
I got a phone call from the film director, Oliver Stone. | |
And I had gotten to know Oliver after my book, Fuzzy Thinking, came out. | |
And he knew that I wrote fiction as well as a scientist. | |
And before he started off in film, he had been in the military for quite a while. | |
But he had done a movie called Platoon. | |
I don't know if you ever saw that. | |
It's an excellent movie. | |
He did great. | |
Yeah, and it won the Academy Award. | |
But it was preceded in production, but not in distribution, by a movie by the great Stanley Kubrick called Full Metal Jacket, also about the Vietnam War. | |
And there was always kind of a rivalry between the two. | |
So he calls me up. | |
I was up fishing in the Sierras when school was out, in May of '94. | |
And he says, Kubrick's making a movie called AI. | |
I want you to give me a treatment. | |
I said, well, what does that mean? | |
He says, well, just write down, you know, pick something big. | |
I said, how big? | |
Big. | |
Okay. | |
And something you care about, and, you know, you're my machine intelligence guy, he said; he called me his machine intelligence guy. | |
And I said, all right, let me think about it. | |
I went back up to the Sierra and I thought, well, what's big enough? | |
What was the biggest thing in the 20th century? | |
Clearly, World War II. | |
Our technology comes from it, and a lot more does. | |
And I think the next biggest thing will be World War III. | |
The only question is, what will it look like? | |
And that drove that. | |
What would that be like when you have more of a cyber war effect? | |
This was circa '95, and we tried to get it made as a movie. | |
In those days, we weren't able to. | |
There was always a lot of film interest. | |
So I took the 40-page treatment of that and went over it and kept going. | |
And I happened to be reading, I'd just gotten tenure. | |
That's something we have in the United States in academia. | |
And I went to celebrate in the summer to the British Virgin Islands for scuba diving. | |
And I carried with me the autobiography of John Stuart Mill, my favorite Brit besides you. | |
And I really love him. | |
And it occurred to me slowly when I was thinking about this earlier thing I drafted. | |
Well, wait a minute: the main character in the movie is a young, selfish guy doing his own thing with his intelligence. | |
He doesn't really have what in drama is called a foil, a buddy, a reflection. | |
Who would be a better foil, Howard, than John Stuart Mill? | |
No one was smarter. | |
I mean, this guy learned Greek and Latin as a young kid. | |
He had James Mill and the lot training him the whole way. | |
He ultimately becomes a member of parliament. | |
He writes the first book on the philosophy of science. | |
And by the way, the way we modify a fuzzy cognitive map is something I attribute to Mill, called concomitant variation. | |
It's now called the differential Hebbian learning law. | |
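For readers who want the formula, one commonly cited form of the differential Hebbian learning law correlates the changes in two concept signals rather than the signals themselves, which is Mill's concomitant variation in differential-equation form:

```latex
% The causal edge e_{ij} grows when concepts C_i and C_j change
% together, and decays exponentially otherwise.
\dot{e}_{ij}(t) \;=\; -\,e_{ij}(t) \;+\; \dot{C}_{i}(t)\,\dot{C}_{j}(t)
```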
He does that. | |
He does so much more utilitarianism, economics. | |
I think he had the biggest picture. | |
And he articulates in his book on liberty the best argument I've ever heard, I think to this day that I've seen on liberty, that when we have a free market of ideas, and we don't always have that now, it is a form of error correction. | |
No one knows what the truth is, like we were talking about. | |
No one knows the best outcome. | |
A lot of people think they do. | |
Because of the way our brains are made, they're very associational. | |
We form associations, spurious and otherwise, all the time. | |
That's literally right down to microarchitecture. | |
We need some way, some mechanism to get us out of these bad equilibria. | |
And an open market of ideas, I think, is it. | |
So wouldn't it be neat to have John Stuart Mill? | |
In theory, here's how you do it. | |
You take what's in his autobiography and other books, and you train on that as best you can: train a system of fuzzy rules with what today would be called AI or machine learning, or deep networks, neural networks, these sub-symbolic things I mentioned, and learn the patterns. | |
And pretty soon you would be able to come up with predictions that are similar to John Stuart Mill. | |
And it would take a lot of computation to do this. | |
Certainly was not feasible to do it in 1995. | |
And it may not be today. | |
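As a toy illustration of the train-on-his-writings idea, far simpler than the fuzzy-rule systems he describes, here is a word-level Markov chain fitted to a single sentence of Mill's On Liberty; a real attempt would need his collected works and a much richer model:

```python
# Learn word-to-word transition patterns from a corpus, then
# generate text by a random walk over the learned transitions.
import random
from collections import defaultdict

corpus = ("the only freedom which deserves the name is that of pursuing "
          "our own good in our own way so long as we do not attempt to "
          "deprive others of theirs or impede their efforts to obtain it")

# Train: record which words follow which.
transitions = defaultdict(list)
words = corpus.split()
for prev, nxt in zip(words, words[1:]):
    transitions[prev].append(nxt)

# Generate: random walk; fall back to any word at a dead end.
random.seed(2030)
word, output = "the", ["the"]
for _ in range(12):
    word = random.choice(transitions.get(word, words))
    output.append(word)
print(" ".join(output))
```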
But the story takes place in the year 2030, about as far out as we could take the predictions with the CIA fact book and things like that back in those days. | |
And we get World War III. | |
So you get World War III in a couple of days with one guy and his best friend, kind of a boy-and-his-dog tale, but his best friend is in a little gadget in his ear. | |
I have one in my ear right now, by the way. | |
But he's got John Stuart Mill there. | |
And so it's coming out again. | |
It was truly an AI novel way ahead of its time. | |
And it brings up something else that's disturbing. | |
Another hat I wear is, I told you I used to be in the defense world. | |
In fact, even as a kid, proposed some weapons programs. | |
And it's been a while since I looked at these things and I talked to some colleagues. | |
But one problem we see in the last couple hundred years is that it's getting easier to attack than to defend. | |
In the old days, the Game of Thrones days, you needed siege warfare. | |
It was very expensive to attack your opponent. | |
And as we've shifted increasingly to information attacks, like a computer virus, it's trivial to attack someone and extremely difficult to defend against the potential viruses coming at you, or cyber attacks, or what are called logic bombs. | |
And so the dangers of war, the inherent game-theoretic instabilities of it, are worse than ever, Howard. | |
They really are. | |
We just don't know what the next war will look like. | |
I think there will be one, and there's different scenarios for it, but it highly likely will involve cyber warfare. | |
And as I mentioned, logic bombs. | |
The United States, for example, I'm sure the UK is worried about the extent to which, for example, China, Russia, North Korea, Iran, and other countries, let alone independent hackers, have gotten into the power grid to do what exactly? | |
Or the potential with anti-satellite technology to poke out eyes. | |
And just a lot of things can go wrong extremely fast. | |
We've never had a major cyber attack. | |
We surely will. | |
And so this was the idea. | |
The publisher approached me about it. | |
He said, look, someone read this. | |
I think the novel was way ahead of its time. | |
Should come back out. | |
And I've got an introduction to it. | |
I haven't changed anything in the book. | |
But the question is, is that scenario in that book unduly pessimistic? | |
I'm afraid to say it's not. | |
Not only do we have off-the-shelf information devices that are very dangerous in this regard, or stuff on the black net or, as they call it, the dark net, like, for example, ways to attack through viruses and things like that, but we have something else that we thought we wouldn't see back in the halcyon days, as you call them, of the 90s, and that is nuclear testing. | |
Who would have thought North Korea would test an H-bomb? | |
But they sure did. | |
And as we talk, we don't know how many thousands or tens of thousands of gas centrifuges are spinning just in Pakistan and maybe other places. | |
So there's a nuking-up taking place in the world, where before it was just the opposite. | |
At the end of that horrible Cold War, we had a wonderful policy between the U.S. and Russia. | |
It was called Megatons to Megawatts, where we took former Soviet uranium and we bought it and we burned it. | |
We're still burning some of it, I believe, in our 100-plus nuclear reactors. | |
That's not the world we're in right now, Howard. | |
The effort to develop nuclear devices in Russia, the updating of them in the United States, China's efforts in it, let alone new countries when they come in. | |
We don't know who's exactly doing what. | |
The threat that we have in the Middle East and the story centers around the Middle East. | |
Israel, we know, they don't officially admit it, has a very large arsenal. | |
And Saudi Arabia has threatened to get one, and just on and on it goes. | |
We're living in a world, the clouds have come back, and I think it began, as far as I can tell, with 9-11. | |
The end-of-the-Cold-War, halcyon days of the 90s, the sunny 90s, are gone, and there are a lot of problems. | |
So with all that, we've got a re-release of Nanotime, and it looks like another one of my books will be coming out following that, too. | |
And with all of that nuclear proliferation, as we used to call it, in this new era of nuclear proliferation, there is the added difficulty that these systems are controlled by artificial intelligence. | |
So you're going to have one machine fighting another machine, and we could all be destroyed almost by some kind of default action by a mechanical system that has nothing to do with a human intervention. | |
And what a scary thought that is. | |
It is a scary thought. | |
The fail-safe scenarios are the concern here: not just human error but algorithmic error on the one hand, and that's bad. | |
But when you combine that with algorithmic mischief, it's really terrifying. | |
It has the potential to be extremely scary. | |
We have so many, just between the U.S. and Russia, so many thermonuclear weapons pointed at each other. | |
I think a lot of your listeners may not know this, Howard. | |
They may think that somehow we sheathed those weapons. | |
We did not. | |
We did not sheathe those weapons; quite the contrary. | |
And let me just give you a sense of what the engineer has to deal with in something like this. | |
If one of those takes off, and they can split up in the upper atmosphere, you would have to shoot it down, the missile, the ICBM, pretty much in what's called the boost phase, when you can see it. | |
You can see it on radar. | |
It's like somebody lighting a match in a dark room. | |
But who wants to shoot down something in Russia or wherever it happens to be in the first couple of minutes? | |
Once it's in outer space and truly goes ballistic, it splits up. | |
It has different warheads. | |
It emits decoys. | |
You can do lots of other stuff to black things out. | |
And then it re-enters the atmosphere, headed towards London, for example. | |
Let me tell you what you have to deal with. | |
You've got to track and knock down something a little bigger than a trash can going approximately seven miles per second. | |
There's no real known way to do that. | |
And that's just for one, let alone several. | |
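The arithmetic behind that, taking the quoted seven-miles-per-second figure at face value:

```latex
7\ \text{mi/s} \;\approx\; 11.3\ \text{km/s} \;\approx\; 25{,}000\ \text{mph},
% so a warhead crosses the last ~100 km of atmosphere in roughly
t \;\approx\; \frac{100\ \text{km}}{11.3\ \text{km/s}} \;\approx\; 9\ \text{s},
% leaving the defender only seconds to track and intercept.
```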
If some dark power, some mischievous hackers, who knows what, some terrorists were able to get in at some other country, maybe North Korea, who knows, and just get one or two of these nukes. | |
So it's not clear. | |
We think we could take it out. | |
But our anti-missile defenses are not that strong for whatever reason. | |
They just aren't. | |
And that's one of many ways. | |
So we're in a world where nuclear proliferation is as real as it ever was. | |
I don't think people are just adequately aware of this, Howard. | |
That's another thing in this conversation that people are not aware of, really. | |
And the one thing I'll take away from the back end of this conversation is that it is indeed easier to attack these days than to defend. | |
And we have to be asking ourselves very seriously, when the public finally gets the information, which they haven't got at the moment: one, how did we let that happen? | |
And two, what are we going to do about it? | |
Two big questions for a future conversation, Bart. | |
And there are many things that we need to talk about. | |
We're going to have to park them until we speak again. | |
But thank you very much for a very absorbing hour and 10 minutes. | |
I look forward to doing it again. | |
It's been fun, Howard. | |
Dr. Bart Kosko, Professor Bart Kosko, thank you very much to him. | |
And I will put a link to him and his amazing work on my website, theunexplained.tv. | |
And I think you heard why in the last hour he was one of Art Bell's greatest guests. | |
And I will always think of Art whenever I speak with Bart Kosko. | |
Those Art and Bart talks were legendary. | |
You know, we're coming up to the first anniversary of Art Bell's death. | |
And it's still, for a lot of us, a pretty raw wound, isn't it? | |
That at least we still have this man's recorded conversations to listen back to. | |
And I do that whenever I'm feeling, which, you know, happens from time to time, whenever I'm feeling low or down or uninspired or worried about where the next penny's coming from and all that kind of stuff, I find an old show from Art Bell and I listen to it and it reminds me of why I do what I do and who inspired me all those years ago. | |
So Art, we remember you. | |
And also I'd like to mention this. | |
I mentioned it on my radio show, but you may have heard news recently of the sad death of Richard C. Hoagland's rock, partner, and guide in life, Robin Falkov. | |
She will be sorely and sadly missed. | |
And Richard, I want to put here on the podcast my sincere condolences, not only from myself, but also, I'm sure, from my audience to you. | |
We'll be very sorry to have lost Robin. | |
And please take care of yourself, Richard. | |
Anything that I can do, please let me know. | |
All right. | |
My name is Howard Hughes. | |
I am in London. | |
This has been The Unexplained. | |
More great guests to come. | |
Until next we meet here. | |
Please stay safe. | |
Please stay calm. | |
Above all, please stay in touch. | |
Thank you very much. | |
Take care. |