Yale ethicist Wendell Wallach warns that Senator Chuck Schumer's AI meetings risk being captive of an AI "oligopoly," with enthusiasts and detractors each pushing their own interests. He plays down existential scenarios such as Nick Bostrom's "paperclip" thought experiment in favor of nearer-term harms: surveillance, job displacement, deepfake-driven disinformation, and widening inequality. Noting minimal progress in ethical AI design since his book Moral Machines, and contrasting AlphaFold's medical breakthroughs with generative AI's hazards, he calls for international testing and certification that accommodates cultural differences (China's "harmony" versus Western human rights), framing AI as a tool that should augment humanity rather than replace it. [Automatically generated summary]
Welcome to this week's interview with Wendell Wallach, an internationally recognized expert on the ethical and governance concerns posed by emerging technologies, particularly artificial intelligence and neuroscience.
I'll be telling you more about him in a minute.
But first, let me just talk to you about a little bit of a few of my concerns.
One of my favorite movies, as I've mentioned many times, is A Christmas Carol, the 1951 version with Alastair Sim, which has the wit to take Dickens' story and place it very firmly in the midst, or at least the aftermath, of the Industrial Revolution.
And there's a line in it where the young Ebenezer Scrooge, before he becomes the old mean miser, ponders to himself whether machines are in fact good for mankind.
Looking back, now that we have air conditioning and refrigeration and all the wonderful things that have made us wealthier and made life so much easier, it almost seems like a ridiculous question.
But going back, of course, to the Industrial Revolution, people were destroying the machines.
They were called Luddites, but they were Luddites for a reason.
The new technology changed life entirely.
It ruined family life for some people.
It put children into terrible labor conditions.
And it took a long time before that was straightened out.
So lives can be destroyed in the moment and good things can come afterwards.
It's very hard to make those decisions.
And that's why we need people who can see into the future.
Wendell Wallach is a consultant, an ethicist, a scholar at Yale University's Interdisciplinary Center for Bioethics, where he chairs the Working Research Group on Technology and Ethics.
He's co-author of Moral Machines: Teaching Robots Right from Wrong, which maps the new field variously called machine ethics, machine morality, computational morality, and friendly AI.
His latest book is A Dangerous Master: How to Keep Technology from Slipping Beyond Our Control.
Wendell Wallach, thank you so much for coming on.
Thank you for inviting me.
So, Chuck Schumer, Senator Chuck Schumer, is holding meetings to teach the government about AI.
And we're hearing a lot of worrisome things about AI, but also, hopefully, things they can do.
What do you want the government to know?
If you were in this meeting and maybe you've even been called to this meeting, what's the first thing you would want them to know about artificial intelligence?
Well, I would want them to think about who they're talking with, first of all.
They need to understand the technology, of course, but they may be talking with enthusiasts.
They may be talking with detractors.
And each of the parties in this discussion has its own interests.
So one of the big issues with artificial intelligence is: to what extent is it the captive of an AI oligopoly?
Is the government captive of an AI oligopoly that has particular interests in how the technology is pursued or how government regulates it?
If the government is not getting a balanced perspective from those who are acutely aware of the drawbacks of artificial intelligence, then its perspective can be very distorted.
So I'm among those who feel that artificial intelligence, as it's being developed at present, is really not on a very healthy trajectory.
And the problem is how we are going to nudge it back onto a more friendly, human-compatible trajectory, as opposed to one that is largely about surveillance, about job replacement, about exacerbating inequalities.
You know, let's start with this for a minute, though, just so we have a basic idea.
When we use the words AI (and artificial intelligence in and of itself is kind of a loaded phrase), the first thing that comes into my mind is an Isaac Asimov robot come to life, the sort of movie version of computers taking over the world.
First, is there anything realistic about that?
But more importantly, can you just give me a sense of what artificial intelligence really is in real life?
What are we talking about to begin with?
Well, I think we're in a very long-term experiment to see to what extent human intellectual capacities and perhaps some superhuman intellectual capacities can be reduced to computational systems.
So, what can computational systems do, among the things we do, that we think of as intelligence?
And of course, intelligence has many different characteristics to it.
So, we already have computers that in some respects do some things much better than we do.
We can't search massive databases in a few seconds; they can.
So, I think that's what we're talking about when we're talking about artificial intelligence.
But there are many definitions, and some people want to reserve the term for what I will call less mechanical, more constructive, sometimes quasi-creative capabilities,
such as a new piece of artwork or a piece of music that didn't exist before, or even a chat GPT summary of existing information that is a little bit different than any other human has put together.
So, we're in that kind of universe of very different and competing definitions.
But as far as the threat of extinction from artificial intelligence, which is pretty much the scenario Hollywood has fed us over the last couple of decades, there are extreme differences in viewpoint there.
And there has been a split within the AI community itself over artificial general intelligence, a form of intelligence that can think about nearly every kind of question or concern in a way that equals, and then very soon far outstrips, human capabilities: whether it is inevitable.
And if it's inevitable, whether there is a high likelihood that it will create a threat.
So, many people in artificial intelligence feel now we're dealing with more basic problems.
Artificial general intelligence is not on the horizon.
But each time there is a breakthrough in artificial intelligence, there is the reawakening of these concerns.
And more of the leaders in artificial intelligence are now willing to entertain the possibility that artificial general intelligence is on the horizon.
Now, when this question was asked about eight years ago, after the breakthrough called deep learning, the average estimate among AI researchers was: well, perhaps we'll see it around 2060.
More recently, that number is coming down for some of the researchers in artificial intelligence.
But there's a problem there.
And the problem is that, particularly for humanists, for people who look at the centrality of our emotions, of compassion, of truly idiosyncratic forms of creativity, of common-sense reasoning, there is deep skepticism that the pathway we are on with artificial intelligence can really recreate all forms of human intelligence.
So, you know, I've been among that group.
I have called myself for more than a decade a friendly skeptic.
I'm friendly to the can-do engineering spirit that says artificial general intelligence is possible.
I'm skeptical that we truly know enough about intelligence to know whether we can realize that anytime soon.
You know, it's always seemed to me, just looking at the field of play, that if you invent something that thinks but only thinks, something without flesh and blood, then everything that happens when I look at you, immediately identify you as like unto myself, care about what happens to you, and worry about you, all of that is gone.
What you have is something that can calculate advantage, so you're essentially inventing a sociopath.
You're inventing something that can calculate its own advantage without having the flesh-and-blood connection to me.
Is that what we're thinking about when we think about the dangers?
I think it is.
I think it is.
Even if you believe, as some people do, that we can recreate human-style emotions, there's still a question whether they will be real emotions, meaning somatically felt emotions, or calculated emotions.
If they're calculated emotions, you have a sociopath.
And the difficulty is, if you've created a sociopathic artificial general intelligence, why should it care about humans at all?
You know, how are you even going to introduce that into your creatures?
I mean, those of us who are not sociopaths, we care about each other because it's a felt concern.
It's real.
We can't escape the pain of those that at least we are close to.
And many of us feel that empathy for those we have no personal connection to at all.
Right, right.
So let's start with the negative side.
If in fact we're inventing a sociopath, what is the literal danger?
What are the kinds of scenarios that you worry about?
Well, Nick Bostrom created what's one of the most popular, and that's the paperclip scenario.
And the paperclip scenario goes, so you've given the artificial intelligence a task to create paperclips.
And it basically uses all the resources on the planet to create paperclips.
That's all it does.
And it basically destroys everything else that exists.
And that's just a way of saying the artificial intelligence may have a goal that does not include human life or biological life at all.
So let's say you give it a goal to create its kind of intelligence throughout the universe.
Well, this kind of intelligence doesn't really depend on biological matter right now.
So it might appropriate all the resources in the world to pursue that universally.
So that's really the central concern: that artificial intelligence may have no concern for human needs.
And it's not clear how we could inculcate those systems with that kind of concern.
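One way to make the paperclip scenario concrete is as a misspecified objective: nothing in the stated goal mentions anything the optimizer should preserve. A minimal, purely hypothetical sketch in Python (all names and quantities here are invented for illustration):

```python
# Toy illustration of a misspecified objective: the agent is scored only on
# paperclips produced, so converting every available resource into paperclips
# is, by its own lights, optimal behavior.

def paperclip_objective(state: dict) -> int:
    # The objective counts paperclips and nothing else; human welfare and
    # the biosphere never enter the score.
    return state["paperclips"]

def greedy_step(state: dict) -> dict:
    # Convert one unit of whatever resource remains into a paperclip.
    for resource in ("iron", "forests", "cities"):
        if state[resource] > 0:
            state[resource] -= 1
            state["paperclips"] += 1
            break
    return state

state = {"paperclips": 0, "iron": 3, "forests": 2, "cities": 1}
for _ in range(10):
    state = greedy_step(state)

print(state, paperclip_objective(state))
# {'paperclips': 6, 'iron': 0, 'forests': 0, 'cities': 0} 6
```

The point of the sketch is that the failure lives in the objective, not in the optimizer: a perfectly competent system pursuing this score has no reason to leave anything standing.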
Now, I wrote a book with Colin Allen called Moral Machines: Teaching Robots Right from Wrong, which looked at this question, then being examined largely by philosophers but also by a few computer scientists: whether we could really inculcate moral decision-making capabilities within artificial systems, within the entities we create.
And we were looking at it from the bottom up.
We weren't talking about artificial general intelligence.
We were just talking about first steps.
I would say next to nothing has happened in that direction in the last eight years.
There's a lot of talk about human-friendly AI and there's a lot of talk about value alignment.
But my own evaluation has been that these ideas come largely from engineers who are incredibly naive about the social dimensions of the technologies they're producing and about the difficulty of creating those sensibilities.
And for all of our flaws as human beings, nature did create us in ways where at least the vast preponderance of us do have sensitivities to each other.
Yeah, it does worry me when I look at the sort of people with grand schemes and grand plans who think that the world would be a better place if they could enforce compliance over medication, or if they could organize people so that they would be happy but have no jobs.
Many people at the very top of our intellectual mountain don't actually know what a human being is.
They don't think about what human beings are in any way that I recognize, or, I suspect, in any way anybody from Erasmus onward would recognize.
Who do we have?
Well, I do know an awful lot of these people.
And I would say that many of them are sensitive to the very concerns you want them to be sensitive to.
But I think they're also captive of a computational theory of mind, which basically says that everything the mind and body do can be reduced to computational systems.
And some of them go to pretty far-fetched ends to explain how that can occur.
So Ray Kurzweil is among those people.
He thinks that we can recreate the somatic system of a human being in an artificial system.
I question that.
I also believe that when we are talking about human beings, we are talking about, to use technical rather than humanistic language, a very complex adaptive system, one that is unbelievably efficient at adapting to its environment and unbelievably capable of using small amounts of energy to produce fantastic ends.
The amount of energy the human brain uses is nothing compared to what ChatGPT uses, and for that matter, even what your computer system uses.
So there are certainly people thinking about these questions, but I believe they're captive of what I will call a somewhat distorted philosophical take on what is possible computationally.
But they will say I'm naive.
So Stuart Russell, one of the leaders in artificial intelligence, said to me one day: well, let's say there's only a 10% chance that we could have artificial general intelligence by 2050.
Wouldn't you want us to begin to work on it now?
And I said, yes, of course.
But 10% still begs the question of how much energy you put into it.
How much energy, how much capital, do we put into ensuring that a meteor or a comet does not destroy humanity, when the likelihood of that happening could be tomorrow but is probably still millions of years off, if that?
So this raises actually another question.
What good is artificial intelligence?
What can we look for?
I mean, obviously everything comes at a price.
You invent mass production of clothes.
You don't get the fine hand-woven clothes you had in the past, but now more people can afford good clothing.
What is the good side of AI?
Because nobody ever really talks about that, except that it can make pictures and write poems that are kind of great.
People talk about this extensively.
I mean, there are conferences called AI for Good.
There are endless discussions of ways AI can be used to determine the use of energy more efficiently, to create new jobs, to reduce poverty, to cure diseases that we couldn't cure otherwise.
One of the most fascinating pieces of research done with AI was the solving of something called the protein folding problem. Proteins, the basic substance we are really made of, are so complex in their structures that humans found it very difficult to think through what a protein's structure would be, and therefore how it would attach to various other compounds and so forth.
There is a program from DeepMind called AlphaFold which solved this problem.
And it will lead to all kinds of breakthroughs in medicine and chemistry over the coming decades.
So there is no shortage of good things.
And those who are the promoters of artificial intelligence are talking extensively about the good things.
My concern, and I'm a little different than most people on this, is do the good things really justify what some of the detrimental concerns will be?
And as I mentioned, I'm not particularly concerned with the existential risk.
I'm much more concerned with how AI can contribute to undermining democracy, exacerbating inequalities, and being used for deepfakes, for creating disinformation, and for exacerbating biases, particularly around gender and race.
But we have this kind of debate going on, and the debate has changed over the years.
So a question I have asked constantly of groups that I speak before is when you look over the next 20 years, do you think that the benefits of AI will far outweigh the risks and undesired societal consequences?
And in the early years, when I would ask them, the benefits would win overwhelmingly.
More recently, the plethora of risks of undesirable societal consequences has expanded.
And so sometimes the risks will win out.
It has a lot to do with what the audience is, what they're aware of.
But I've asked this at a conference called AI for Good, which the UN's ITU, the International Telecommunication Union, puts on every year.
And when I brought it up there, at a conference literally called AI for Good, the benefits outweighed the risks maybe two to one with the audience.
But then I asked another question.
I asked, how many of you think it could still go either way?
And that got four times as many votes as the benefits.
I'm just estimating here, but I'm trying to give you a feel for what happens.
So I do think what's happened now with generative AI, with large language models and other applications, is that the public and perhaps more importantly, the politicians are getting sensitized to the fact that this is not all a good story and that it's time for them to act.
And I applaud that wholeheartedly.
My focus has been particularly on international governance in this sphere.
But as you mentioned with Senator Schumer, the U.S. Congress is trying to think through what can be done.
My fear is that those who want innovation over human welfare, or who think that innovation will always solve the human welfare problem, are going to prevail.
And this will be helpful to those in the AI oligopoly who want limited regulations.
They're calling for regulations right now, but I fear that the regulations they call for are not really going to hold their feet to the fire.
And more importantly, they are going to ensure that the AI oligopoly continues to hold its monopoly forever.
You know, Yuval Harari is not somebody I agree with very often, but one idea of his that I think is quite fair is about species that evolve in ways that keep them alive but make their lives a living hell.
So if you're cattle, you're kept alive.
In evolutionary terms you've done well, but you're used for meat.
And I do look at this and think about when people call for a pause, for instance, which I think Henry Kissinger was suggesting we should do.
Who's going to do that?
Who on earth is going to pause when another guy can then get past him?
What kinds of things can we do to control this from just moving forward on its own steam in an evolutionary way?
Well, we do have AI principles that are pretty much universal at this point.
They've come out of UNESCO.
They've come out of the OECD.
There are actually hundreds of these lists.
And there's one called the Beijing principles.
And they're all relatively the same.
They all raise issues around transparency and privacy and fairness and so forth.
The main difference in the lists that have been dominated by the West is that they include a principle called the protection of human rights, which is fairly well established in international law now.
Even though China is a signatory to the Universal Declaration of Human Rights, it doesn't like human rights being defined by Western liberals.
So in its list it put harmony, the time-honored value of harmony.
So the point is the ethics are more or less universalized.
There are disagreements.
But suppose we could agree on enforcement: that applications which do not honor these ethical principles, which do not have appropriate safeguards in place, should not be deployed.
Nobody's going to stop science from continuing.
Nobody's going to stop research from going on.
But if they can't be deployed, then the attempt to get revenue from them goes down dramatically.
And artificial general intelligence isn't going to be created willy-nilly.
It's going to take tremendous revenue flowing in, whether that comes from advertisements or governments or whatever.
So I think that if we started to put in place safeguards, the developers of AI systems would have to prove, through testing and certification and ethical oversight, that their applications fulfill these requirements before they could be deployed.
And I believe that in many cases, if not all cases, China could be brought along; I've been back and forth to China a lot.
There are problems, I'm not trying to dismiss them, and there are ways in which AI is being used in China that I would not support at all.
But I think that, basically, the intent to take care of the 1.4 billion Chinese people is still the prevalent concern.
And the government is afraid of its people.
It's had revolutions too, just like the democracies.
So I basically feel that we can get China on board if we really move toward common-sense regulation of AI, common-sense requirements, so that safety and the honoring of shared values are universal.
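To make the certification idea slightly more tangible, one can imagine the deployment gate as a checklist keyed to those near-universal principles, where an application ships only if every check passes. A purely hypothetical sketch (the principle names, the application name, and the structure are invented for illustration, not any real regulator's scheme):

```python
from dataclasses import dataclass

# Hypothetical sketch of a pre-deployment certification gate. The principle
# list echoes the roughly universal ones Wallach mentions (with "harmony" or
# "human rights" depending on the jurisdiction); nothing here is a real scheme.
PRINCIPLES = ("transparency", "privacy", "fairness", "human_rights")

@dataclass
class CertificationReport:
    application: str
    results: dict  # principle -> bool, as judged by testing and ethical oversight

    def may_deploy(self) -> bool:
        # Deployment is blocked unless every principle check has passed.
        return all(self.results.get(p, False) for p in PRINCIPLES)

report = CertificationReport(
    application="chat-assistant-v1",
    results={"transparency": True, "privacy": True,
             "fairness": False, "human_rights": True},
)
print(report.may_deploy())  # False: the fairness check failed, so no deployment
```

The enforcement lever Wallach describes works at exactly this point: research continues, but revenue-generating deployment waits on the report.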
If AI is teaching itself, which I guess is what it does, right?
It collects all this information and it learns things faster than we can teach it.
How do you enforce principles when it's essentially running ahead of you to get more information, to learn more things?
Well, I mean, it's learning and it isn't learning.
So these generative AI platforms that people are playing with, the large language models like ChatGPT, are learning in the sense that they're acquiring more and more information.
But they're also dangerous in what they're doing with what they learn.
And frankly, they should not have been deployed in the way they've been deployed without appropriate safeguards in place.
I feel that unequivocally.
And it's not that they can't be beneficial.
They can, and they're certainly saving time for a lot of people, but nothing says they couldn't have been held back from being deployed as broadly as they are until appropriate safeguards were put in place.
So I think this is what's not understood.
And when we say they're learning, well, they're assimilating more information and they're able to parrot that back.
But what they are is largely statistical programs.
They do not understand the content of what they are parroting back.
So my colleague Anja Kaspersen asked ChatGPT to talk about how AI will affect human dignity.
And it came back with all this language about human dignity.
But it knows nothing about human dignity.
It doesn't know what these words mean.
It does not have semantic understanding.
And furthermore, it invents stuff.
By now, there's a well-known story of a lawyer who asked a large language model to create a brief for him, and it created a wonderful brief.
The only problem was that it cited a case that didn't exist.
Wow.
And many of us have encountered that.
So when you use the application superficially, you don't realize that the statistical language model starts creating connections between pieces of information that actually have nothing to do with each other, because it has no understanding of the task it is engaged in.
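What "largely statistical" means can be shown with a toy model: a bigram generator that emits whichever word tends to follow the previous one, with no representation of meaning anywhere. A minimal sketch (real large language models are vastly more sophisticated, but the basic move of predicting the next token from statistics is the same in kind):

```python
import random
from collections import defaultdict

# Toy bigram "language model": it learns which word tends to follow which
# and generates text purely from those counts. There is no semantics here,
# only co-occurrence statistics.
corpus = ("human dignity matters because human dignity is the basis "
          "of human rights and of human flourishing").split()

transitions = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev].append(nxt)  # duplicate entries encode frequency

def generate(start: str, length: int = 8) -> str:
    word, output = start, [start]
    for _ in range(length):
        followers = transitions.get(word)
        if not followers:
            break
        word = random.choice(followers)  # frequency-weighted via duplicates
        output.append(word)
    return " ".join(output)

print(generate("human"))
# e.g. "human dignity is the basis of human rights and"
# Fluent-looking output, but the program has no idea what any word means.
```

The invented legal citation works the same way: a case name appears because case names are statistically plausible in that slot, not because the model checked that the case exists.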
And this is critical.
We understand.
We have a semantic appreciation.
Children very quickly grasp what language is about.
Even Helen Keller, who was deaf and blind, could eventually grasp the connection between hand signals and the water she was feeling from a fountain.
I'm talking with Wendell Wallach, author of A Dangerous Master: How to Keep Technology from Slipping Beyond Our Control, an internationally recognized expert on the ethical and governance concerns posed by emerging technologies.
My final question for you, I guess, is just to ask whether you think that AI, that machines, can ever become actually conscious.
So this is a great question, a difficult question, one of my favorite questions, by the way.
And it depends upon what you think consciousness is or how you define consciousness, unfortunately.
And there is a massive battle going on out there.
On one side are purely scientific definitions: consciousness is basically what brains and bodies do, totally fabricated within the brain and body.
There are also more mystical and spiritual definitions, which some philosophers embrace, that say there is something about consciousness that is not totally created by the mind-body.
That doesn't necessarily mean the world is conscious, or the universe is conscious, without this apparatus, which creates the forms of consciousness that we humans experience and through which we know ourselves.
But consciousness may not be totally something we create from the bottom up, through biochemistry and neural activity and so forth.
So I'm a little agnostic about the machines.
I'm actually perhaps a little more open to the prospect that consciousness is something we don't really understand.
Not only do we not understand it, but it may be something more than just what brains and bodies do.
I've meditated for more than 50 years now, and it seems to me consciousness has attributes that sometimes I feel that I am in consciousness rather than consciousness being in me.
Now, I can argue that that's a trick the mind plays to make you feel that way, but I would say I have enough experience over 50 years to say, no, there are a lot of questions out there we just aren't answering.
But that said, does that mean computers cannot have consciousness?
Well, we don't know where computational systems are going to be in the future.
We don't know if we're going to merge machines and biological material.
But I wish we would get off this kick of thinking that the important thing is the creation of an autonomous machine that can do all of these remarkable things, and talk much more about how machines can be used to augment human consciousness, so that humans work together with machines, as opposed to creating machines to replace us, whether in work or creativity or other jobs.
And the biggest challenge that ChatGPT has created right now is we're going to see millions of jobs displaced.
And we don't know how long it will take before those jobs are recreated.
And we certainly know that we have not created the economic environment to meet the needs of people who do not have work.
We have created an economic environment where two-thirds of all productivity goes to 1% of the people in the world.
And there's resistance to distributing that wealth.
So if we are truly moving in a direction where we are destroying large numbers of jobs, then hold on to your seats, because it opens the door to social unrest and geopolitical instability, and it's probably even feeding the divisions in our society and the embrace of authoritarianism by some of its elements.
Wendell Wallach, author of A Dangerous Master: How to Keep Technology from Slipping Beyond Our Control.
Thank you very much.
It's rather reassuring to have someone thinking about this at the level you're thinking about it.
That's not what I often hear.
Thank you so much for coming on.
I hope to talk to you again.
I'm most appreciative of your letting me speak.
Again, it is reassuring to have people thinking about these issues at that level, and we'll be thinking about them some more on Friday, when the Andrew Klavan Show is back.
I want you to be there, or be plunged into Klavanless eternity.