Stephen Meyer and George Montañez discuss AI's role with Andrew Klavan, dismissing Matt Shumer's viral warnings as overblown while agreeing AI lacks consciousness. Meyer highlights its fundamental dependence on human input ("garbage in, garbage out") and dystopian uses such as the CCP's surveillance state; Montañez notes "jagged intelligence" flaws (e.g., models failing multi-digit arithmetic while solving Olympiad problems). Both reject Kurzweil's singularity, warning instead that humans may misuse AI to amplify harm or stupidity, and stressing control over unchecked systems. Ultimately, they frame AI as a tool dependent on human oversight, not an existential threat. [Automatically generated summary]
The worst case scenario is that ordinary human agents will use the augmentation capabilities that come with AI to do bad things and to do bad things more efficiently than they could have done before.
Hey everyone, it's Andrew Klavan with this week's interview with Stephen C. Meyer and George D. Montañez.
We're going to be talking about AI.
There was a viral tweet, or whatever we call it now, a viral post by Matt Shumer, who works developing AI companies.
It was called Something Big Is Happening, and it basically was announcing that AI was about to take over the world, change everything.
It had the tone of a guy who had escaped from like a Nazi death camp and had come back to warn the others that something unbelievable was happening.
It was this urgent, I would call it melodramatic tone that everything is going to change possibly by the end of the year.
But we've been hearing this for a long time, and we've been seeing it for a long time, actually watching AI advance at lightning speed.
So I wanted to talk to people who actually know about this and find out if there's any truth to it.
Stephen Meyer is the director of the Discovery Institute's Center for Science and Culture in Seattle and the author of many really good books, my favorite being Return of the God Hypothesis, which is, as far as I've read, the best book on how the scientific reasoning for atheism has completely fallen apart over the last hundred years.
If you haven't read it, I really recommend Return of the God Hypothesis by Stephen C. Meyer.
George D. Montañez is an associate professor of computer science at Harvey Mudd College, and he directs the Amistad Lab, which works on problems of machine learning and information theory.
And so we were going to talk to him for his expertise about the actual machines.
Guys, thank you so much for coming on.
It's good to see you.
Steve, I want to start with you because I want to get the sort of big picture of how you feel AI is affecting your expertise, which is the idea of the return of spirituality on a scientific basis.
You and I have been at meetings where people have essentially announced that AI and large language systems have erased, for humankind, all excuse for the soul, and all this.
As AI advances as quickly as it is, how do you feel this changes things for you and the Discovery Institute, or does it change things at all?
Well, it's a very exciting topic and it relates, first of all, it's great to have George with us because this is his special area of expertise.
It happens that both of us are in Cambridge, England right now.
He's on a fellowship to one of the Cambridge colleges and is just doing some great work here.
You and I were at a conference last fall where this was discussed.
He was at that same conference the year before.
So we've all been kind of party to some of these things.
I think the global view, and I'll just assert it, and we can explain and justify it as we get deeper into the interview, is that AI is a fantastic tool for the augmentation of human intelligence, but it does not replace human intelligence.
In fact, if you look closely at how it works and where it breaks down, that is, where it encounters limits, you can see very clearly that there is a fundamental asymmetry between human intelligence and machine intelligence.
Even the most sophisticated forms of machine intelligence, those based on large language models, things like ChatGPT, show a fundamental dependence of the whole system on both the initial input of information from conscious, intelligent agents and the continued monitoring and training, by conscious, intelligent agents, of the data sets the AI produces.
So the machines will not replace human agency, but they can augment our intelligence quite nicely.
And there's a new book coming out about that by another of our colleagues that we can talk about a little later in the interview as well.
But that's my 30,000-foot view of the technology.
It can be used for good or for ill.
Obviously, our friends in the CCP, maybe we shouldn't call them friends, but the Communist Party of China has perfected the use of AI in human surveillance.
That's a very scary use of the technology, but there are many, many positive uses of the technology, including in the defense industry.
We've got a project at the Discovery Institute on missile defense and some of the enhancements that are possible for those sorts of technologies using AI are just fantastic for protecting the country.
So you see both the upside and the downside of the technology.
But in each case, whether it's used for good or ill, it's always an augmentation rather than a replacement of human intelligence.
All right.
Well, George, in that case, one of the things that this Matt Shumer says in his post, and he emphasizes this repeatedly, is that you civilians don't understand what's coming down the pike.
You're using ChatGPT, but you're not using the paid-for model, or you're using the paid-for model, but you're not using the most up-to-date model.
And basically, what he's talking about is its capacity to replace the working people who do the support work.
So in other words, a lawyer will soon be able to turn to an AI model and not need the intern or the guy who was sent over from the university.
And this is something Peter Thiel has talked a lot about: he feels that the fear that work is about to disappear is behind things like the election of Zohran Mamdani, that people feel that if the government's not going to take care of them, they're going to have no way to take care of themselves.
So how do you feel about that?
When you're dealing with these machines, do you feel that they are rearranging the workplace or changing it or eliminating it altogether?
So I would say I have mixed feelings about it, right?
So it's not completely one way or the other.
So for example, yes, there's people who are posting, you know, something big is coming down the pipe.
I remember when that was GPT-5.
I don't know if you all remember, but they were going to release the GPT-5 models and that was supposed to wipe out everything.
They released them and everyone said, hey, can we go back to the four model?
Because this one is not as good.
So I think that those in Silicon Valley, they have maybe a vested interest in skewing their perception of how these things operate to be a little bit overly optimistic.
But there are, you know, there are big changes that are going to happen in terms of our ability to get work done quickly.
My kind of mental model for the scale of what's going to happen is basically like the computer or the cell phone, right?
So before the computer, you know, we had a lot of different ways of doing the same sorts of jobs that we have now.
And then we got a lot of new jobs that we couldn't do before the computers.
But it took a while for human beings to figure out how we were going to use computers well.
There was an economist, and I'm not even going to venture to remember his name, who said that basically the productivity of computers was showing up everywhere except in the actual workplace.
Like it took a while for the workplace to catch up with what computers were capable of doing.
And so you have something like large language models, which they can do a lot of really impressive things.
I'm impressed by the vast array of things you can do with them, but we are still figuring out where are the right places where they can augment us and where are the wrong places.
You can't just throw them blindly at problems that you have because these things perform unequally on different problems and it's not correlated to problem difficulty.
So there's this phenomenon known as jagged intelligence, which is, for example, let's go back to GPT-5.
So GPT-5, it would struggle on multi-digit arithmetic, right?
You know, you load up the latest model and you give it a long multi-digit arithmetic problem.
It'll give you the wrong answer.
That same model can solve problems from the International Math Olympiad, right?
Five out of the six problems.
And so it's not necessarily correlated to how difficult the problem is.
You don't know when it's going to fail you.
I have a colleague here at Cambridge.
He works on syllogistic reasoning.
And what he found is that large language models can't even do basic syllogisms correctly, right?
They can kind of do them.
Sometimes they do them right, but they're not ironclad.
And when you're doing something like deductive logic, it's not good enough to get it sort of right, right?
You need to have the actual step-by-step rules and follow through on them.
And so what he was doing with his work is building augmented systems to help LLMs do things like syllogistic reasoning correctly.
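To make the contrast concrete, here is a minimal sketch of the idea (purely illustrative, not his colleague's actual system): a symbolic check of the classic "Barbara" syllogism form, which either follows the rule exactly or rejects, with none of the statistical fuzziness of an LLM.

```python
# Toy validity check for one syllogism form ("Barbara"):
# All A are B; All B are C; therefore All A are C.
# Each statement is a tuple ("all", X, Y), read as "All X are Y".

def barbara_valid(premise1, premise2, conclusion):
    q1, a, b1 = premise1
    q2, b2, c = premise2
    q3, a2, c2 = conclusion
    return (q1 == q2 == q3 == "all"   # all three are universal affirmatives
            and b1 == b2              # the middle term links the premises
            and a2 == a and c2 == c)  # the conclusion chains A to C

print(barbara_valid(("all", "men", "mortals"),
                    ("all", "mortals", "things that die"),
                    ("all", "men", "things that die")))   # True: valid form
print(barbara_valid(("all", "men", "mortals"),
                    ("all", "dogs", "things that die"),
                    ("all", "men", "things that die")))   # False: broken middle term
```

The point of the sketch is that a rule-based checker is ironclad by construction, which is why augmenting LLMs with such systems helps where "sort of right" isn't good enough.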
So, you know, again, I have mixed feelings.
There are things that we're going to be able to do really quickly.
So for example, a lot of writing, right?
That's tedious writing.
Like, yeah, we could throw a large language model at it.
You have these video diffusion models, which are going to revolutionize how we make movies.
But is that going to replace lawyers?
I don't know.
I don't know if I would want an AI lawyer because as soon as it starts to make up case law, I'm kind of cooked, right?
So I don't necessarily want to rely on a system for these mission-critical tasks.
So let's talk about large language models for a minute because these are the kind of basic questions that I never hear anybody actually ask.
You know, Steve and I were at this conference where somebody said the large language model has essentially proved that language is not a method of conveying meaning.
It's just a pattern, a kind of random pattern that expresses itself more than it expresses the person who thinks he's expressing himself.
Can you explain what a large language model is and why somebody would make that claim?
Yeah.
So a large language model in its simplest way of looking at it is essentially a predict the next word machine.
Right.
And so you're all, you know, maybe this is an overused analogy, but like autocomplete.
You're all familiar with autocomplete.
You start off typing something and the system will predict what you're going to type next.
Now, large language models do that on a scale that is astronomically larger, right?
So it's taking into account the context of what you've written, but it's also taking into account the context of what every human being has written for which we have digitized text, right?
in order to do these predictions.
And so essentially you have the context: the prompt that you give it, plus the output that has been generated so far.
Given that context, these systems form a distribution over what the next token is going to be.
And a token you can think of as roughly equivalent to a word; it's the next thing the model is going to output.
Given that context, the model selects a very specialized distribution over next tokens, where the thing that follows is likely to be very related, statistically, to the things that came before it.
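As a rough, hypothetical illustration of that predict-the-next-word mechanic, here is a toy bigram model in Python; a real LLM conditions on vastly more context with a neural network, but the form-a-distribution-then-sample loop is the same basic idea.

```python
import random
from collections import Counter, defaultdict

# Tiny corpus standing in for "everything humans have digitized".
corpus = "the cat sat on the mat and the cat saw the dog".split()

# Count which token follows each token: a one-word-context "language model".
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def sample_next(context):
    # Form a distribution over the next token given the context, then sample.
    dist = following[context]
    if not dist:               # dead end (last word of the corpus): restart
        return "the"
    return random.choices(list(dist), weights=list(dist.values()), k=1)[0]

word, output = "the", ["the"]
for _ in range(6):
    word = sample_next(word)
    output.append(word)
print(" ".join(output))        # e.g. "the cat sat on the mat and"
```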
And so, yes, you can produce a lot of language like that.
You can produce coherent language like that.
But that's because, as Steve pointed out at the beginning, this has been trained on all of the coherent thought that we have digitized that we're feeding into these systems.
And so they can produce text.
They can do a pretty good job of it, but they have no internal understanding of what the things they're producing are.
And so I give the example, I give a talk on this, where I talk about some of the technology used to get these numbers that represent the words to have some sort of relationship with other numbers, right?
Because at the end of the day, these computers are processing numbers and you want them to have a relationship.
So the way that these systems do it is through a technology called embedding, where essentially it takes a word and places it as a point in space.
You can think of it as like a vector in space.
And if two words mean similar things, they're going to be close to each other in space, right?
These vectors.
And so at the end of the day, all the system knows is that this vector is close to this vector.
These things are similar, but it has no notion of what these things actually mean, right?
It doesn't know what they represent.
This is all purely syntax.
Syntax can get you a long way, but it can't get you the full way, right?
Syntax is not the same as semantics.
It doesn't matter how much syntax you have.
You're not going to bridge that gap into meaning.
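A sketch of the geometry being described, with invented three-dimensional vectors (real embeddings are learned from data and have hundreds or thousands of dimensions): the system can measure that two vectors are close, but nothing in the numbers says what a king is.

```python
import math

# Invented toy embeddings: nearby points stand for statistically similar words.
embeddings = {
    "king":  [0.90, 0.80, 0.10],
    "queen": [0.88, 0.83, 0.12],
    "mat":   [0.05, 0.10, 0.95],
}

def cosine(u, v):
    # Similarity of direction: near 1.0 means "used in similar contexts".
    dot = sum(a * b for a, b in zip(u, v))
    norms = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norms

print(cosine(embeddings["king"], embeddings["queen"]))  # ~1.0: close in space
print(cosine(embeddings["king"], embeddings["mat"]))    # ~0.19: far apart
```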
You know, when you've lived as long as I have, you've seen kingdoms fall.
I saw Rome fall.
That was just a terrible, terrible time.
You know, and I saw England rise and then fall.
It's just been an amazing life.
But through it all, my family is the most important thing I have.
And with that, I want to make sure they are taken care of, that everything's in place in case something were to happen to me.
Although at this point, it doesn't seem like anything ever will.
It may seem like that to you too, but it's not.
You want life insurance.
And that's why you want to know about Policy Genius.
It's an online marketplace where you can compare quotes from top insurers side by side, completely free.
No jumping around between websites, no endless phone calls.
And what really stands out is their team works for you.
This is important.
They're licensed agents who help you figure out what you actually need, not what some company wants to sell you.
They walk you through coverage amounts, prices, all of it with no guesswork, just straight answers.
They handle the paperwork, answer your questions, take that weight off your shoulders.
You get clarity on what you need and can move forward fast.
This is a perfect time to lock this in and ensure you have real peace of mind.
Check out Policy Genius.
They've got thousands of five-star reviews from people who finally found the right policy for their needs.
You will thank yourself later.
They may thank you too.
You never know.
Plan the year knowing you've protected what you've built with Policy Genius.
You can see if you can find 20-year life insurance policies starting at just $276 a year for $1 million in coverage.
Head to policygenius.com slash Klavan to compare life insurance quotes from top companies and see how much you could save.
That's policygenius.com slash, how do you spell it?
It's K-L-A-V-A-N.
Right.
You know, Steve, this is the thing that bothers me.
There's a habit that human beings have of taking their most advanced piece of technology and using that as a metaphor for the human mind.
So, I mean, if you can go back and read Greek philosophy and they'll talk about the new craft of writing and they'll talk about the mind as a tablet, you know, a tabula, an erased tablet where you can write on the tablet.
Tabula rasa.
The tabula rasa.
And you have Sigmund Freud building a model that's basically like a steam engine with energy being compressed and then coming back.
And then when we had computers, we would talk about how we were hardwired and how we were software.
What George is describing doesn't sound to me like a human mind at all.
And the comparison I always make is to that Louis Armstrong song where he says, I see friends shaking hands saying, how do you do?
They're really saying, I love you, which is the way human language works.
I mean, we are saying things to each other that we can't quite say.
Are you concerned that this machine is going to aid materialist thought, is going to enhance, especially among the sort of, how can I put it, sometimes spiritually limited people who work on computers, sometimes autistic, I guess, semi-autistic people.
Are you worried that this is going to add to the argument that life is just a sort of meaningless array of physical reactions?
Well, I would say not if we have much to say about it, because there's a really powerful argument against the materialist interpretation.
I think there's an inclination towards materialist thinking among some people in the sciences, certainly, maybe some techies.
But let me go back to my initial point.
I'm very interested in the question of the origin of information because it's a crucial point, a crucial consideration, a crucial issue when you think about the origin of life.
We know that you can't build biological form without prior biological information, in particular information encoded in the DNA molecule, but other places in living systems.
And there are some really interesting parallels between what's going on in the AI discussion and what goes on in the world that I mainly inhabit, which is discussions about biological and cosmological origins.
In AI, and this includes the most current large language models, the ChatGPTs, there's a problem known as model collapse, where you take that body of data that has been imported into your system, data that's come from human text.
I recently got an opportunity to get a payout from a company that had, without asking me, taken the text of three of my books and used it to provide data for their AI system.
And so this is where the data comes from.
It's what they sometimes call ground-source data.
It's coming from human beings.
It's actually written text that in its original context wasn't just syntactic.
It was semantic.
It had meaning.
And so you import that.
That's your data set.
Now, you query that data set with a large language model: you ask it a question, and you get an output.
And the output is based on the kind of process that George was just describing.
If you then take the output of that large language model system that has been queried, we now call that synthetic data, because it's no longer generated by a human being; it's generated by the process of querying the large language model.
And if you take that output and query it again, and again, then with each successive iteration the output gets less and less meaningful and more and more incoherent, and you get what's called error collapse, or a sort of model collapse.
Now, in origin of life research, there are people who have tried to simulate the origin of information using models that are in some way analogous to this.
There's one called hypercycles.
And what happens in the hypercycle model is that with each iteration, the attempt to simulate the origin of genetic information outputs something that has more and more errors in it and is less and less genetically accurate, if you will.
If you graph the informational content of the system over time, in both cases you see a high initial information content, and over time you're losing meaningful information.
And so what that shows in the case of the LLMs is a fundamental dependency of the system on an initial input of information from a conscious intelligent agent,
because the outputs are always less coherent, less meaningful than the inputs, unless you have constant monitoring and correcting, but that involves an input of information from a conscious intelligence.
So this problem of model collapse is actually a tell.
It's a huge tell.
It shows that the machine is dependent on the human mind in a way that the human mind is not dependent upon the machine.
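Here is a toy simulation of that dynamic, under a big simplifying assumption (a "model" is just a word-frequency table re-estimated from finite samples of its own output): rare words drop out of each synthetic generation and never come back, so the measured information only goes down.

```python
import math
import random
from collections import Counter

def entropy_bits(dist):
    # Shannon entropy: a crude stand-in for "how much information survives".
    return -sum(p * math.log2(p) for p in dist.values())

# Generation 0: a rich "human" distribution over 50 word types.
dist = {f"word{i}": 1 / 50 for i in range(50)}

for gen in range(8):
    print(f"gen {gen}: {len(dist):2d} word types, {entropy_bits(dist):.2f} bits")
    # Each new generation trains only on a finite synthetic sample of the last.
    sample = random.choices(list(dist), weights=list(dist.values()), k=40)
    counts = Counter(sample)
    dist = {word: n / len(sample) for word, n in counts.items()}
```

Each run differs, but the word count and entropy fall generation after generation; the only way to put information back in is fresh input from outside the loop, which is the point about the conscious agent.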
And that means a lot of the really utopian or rather dystopian scenarios that people have spun out about getting to a kind of singularity where the machines cross over and then they become conscious themselves and they start to take over from us.
Those things are not going to happen.
There are dystopian possibilities.
I think the CCP's use of AI for surveillance is very dystopian, but that's an example of the technology augmenting the human mind, albeit in a malevolent way.
So I think the key thing here is AI can augment.
It will not replace.
That also then provides an additional plank or an additional piece of evidence that the human mind is fundamentally different than a machine.
And we have evidence of that from neuroscience.
I'd recommend, for example, the new book, The Immortal Mind, by the neuroscientist and neurosurgeon, Michael Egnor.
We have evidence of that, I think, coming out of computer science in many different directions.
And then I think there's just very good arguments in the philosophy of mind that the human mind is not reducible to the brain, nor is it reducible to a machine.
And just one quick book that I would recommend is a new book by Eric Larson and his colleague Chi Wee Ning called Augmented Human Intelligence: Empowering Minds in the Age of AI.
It's a sequel to his earlier book, The Myth of Artificial Intelligence.
In that context, he meant the myth of artificial general intelligence.
And he's extending that initial thesis to say that these great new large language models, ChatGPT, things like that, are very good at augmenting and extending the application of human intelligence, but they will not replace it.
You know, this brings me to a question, George, I'd like to ask you.
One of the things that's very frustrating for using just the ordinary AI you find on your computers is I will use it to find a source, a piece of material that I'm looking for.
And I'll ask, for instance, about some stupid thing that a politician said, and it will respond and tell me what he said, and then correct my idea that he's stupid by telling me, no, he's done some wonderful things, things that aren't true.
Or they're not facts at all; they're what I would call just an opinion.
I don't care about a machine's opinion.
But it makes me feel, and one of the things this fellow, Matt Shumer, talks about in his viral post, is he says it is now developing the capacity to make judgments and exercise taste.
It has taste.
And given what Steve just said, that essentially it's fluttering away from human intelligence to something altogether different.
And given how you describe how a large language model works, where is this taste coming from?
Where are these judgments coming from?
And can they possibly be good ultimately for human beings?
So judgment and taste, those are terms that I would not use for a large language model.
Okay.
Right.
So things that could be found by statistical correlation are things that these large language models can find.
So, he was talking in the context, if I remember the post correctly, of computer code, right?
And he was saying, oh, you know, I tell it to do this thing, build an app, I walk away, I come back, and it's working on it, and it's making good decisions.
So can these systems write functional computer code?
Yeah, they can.
Can they write good computer code?
I think they can as well, right?
And they're getting better at that.
How do they do that?
Well, again, it's based on this statistical correlation with what humans have already done.
So the taste is a derivative taste.
So it's almost like junior hires or high schoolers, right?
You know, the popular kid does something and then everybody else does it.
If that popular kid has pretty good taste, then before long, everyone's listening to Bob Dylan and dressing cool and all these things.
So really it depends on what you put into it, right?
To the degree that we can get these systems to mimic us and mimic us well, to us, that's going to look like it's exercising judgment, but all it's doing is mechanically repeating the judgments that it's already been trained on.
That makes sense.
Okay.
It is.
It's the same old problem we've talked about in computer science for years, garbage in, garbage out.
It depends on what the computer is being trained on and what the data set is that's been provided to it.
So oftentimes those second-order corrections to your perhaps center-right-leaning intuitions are coming from a computer that's been trained on a body of data reflecting a conventional center-left-leaning or hard-left-leaning data set.
So, as George says, and he's absolutely correct, there's no taste involved.
It's a matter of what body of data has been provided to the large language model in order for it to generate its outputs.
So, okay, well, that's kind of reassuring.
But at the same time, how do you control that?
I mean, how do you set that?
Somewhere along the line, somebody has to be able to say, like I said, I don't care what this machine is telling me about this politician.
I know what I think about this politician.
How do you get it to do what it can actually do, which is return the facts?
Yeah, so there are different ways of trying to optimize these systems for different objectives that you have, right?
So a lot of money and a lot of time goes into trying to align these systems with different characteristics in terms of safety, in terms of personality, right?
I put that in quotes, personality of the models, in terms of correctness.
And so that's getting into the weeds a bit of how these systems are trained and how they're optimized.
But essentially, again, it's a problem of mathematical geometry.
Can I get these systems to align well with the patterns that are existing in this data set using the tools of optimization theory and machine learning that we have?
So who controls it?
Whoever is determining what the system training is going to be like.
Whoever is determining what are the objectives we care about.
Are they optimizing these systems for addictive behavior, right?
Do they want us to come back to them and keep using them?
That's going to give you a fundamentally different system than if we are optimizing these for factual correctness, for clarity, for these other objectives that as a society, we might care about a bit more.
So yeah, it's up to us.
Just because we have these systems that have these additional capabilities, that doesn't absolve us of the responsibility of using it wisely and using it responsibly.
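A toy illustration of that point about who sets the objectives (the candidate answers, scores, and weights below are all invented; real systems learn these from reward models): the same selection code prefers a different output depending on what the trainer decided "best" means.

```python
# Invented per-response scores along two competing objectives.
candidates = {
    "flattering but loose with facts": {"engagement": 0.95, "factuality": 0.30},
    "dry but accurate":                {"engagement": 0.40, "factuality": 0.95},
}

def pick_best(weights):
    # Whoever chooses these weights decides what the system optimizes for.
    def score(name):
        return sum(weights[k] * candidates[name][k] for k in weights)
    return max(candidates, key=score)

print(pick_best({"engagement": 1.0, "factuality": 0.1}))  # tuned for addictiveness
print(pick_best({"engagement": 0.1, "factuality": 1.0}))  # tuned for correctness
```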
So on that note, as we're coming to the end, I'd like to ask both of you, starting with George and then Steve, about the nightmare scenarios people are so good at coming up with, like that old movie WarGames, where the AI just decides it's going to kill us all.
How big a danger is that?
Is there such a danger?
I mean, our Department of War and Anthropic are in a sort of battle over the uses of AI, but that's a battle between human beings.
Is there a chance, a non-zero chance, that a machine could make decisions that we couldn't stop that would be harmful to human beings?
On a smaller scale, some people have been using Open Claud, right?
No.
What is the new name for it?
OpenClaw, I think.
They've changed the name a few times, but this is an agentic system where essentially you give it control over your computer system.
And there's stories about people who said, hey, organize my files, and then it deleted their hard drives, deleted all their files.
Okay.
Yes, something really bad can happen if we hook up large language models or generative AI systems to weapon systems without proper human safeguards.
So those, to me, those are the nightmare scenarios.
It's not that these systems are deciding, hey, I want to eliminate humans.
It's that the internal logic of the systems, right, the parameter settings feeding forward in this neural network, will trigger actions whose meaning the system has no understanding of.
That could be a problem.
Because it's not properly constrained, right, George?
Yes, that could kill a lot of people.
So that is the nightmare scenario to me.
I'll leave it at that.
So yes, the thing that you're afraid of can happen, but it's not going to happen for the reason people are saying it's going to happen.
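A minimal sketch of the kind of human safeguard George is pointing at (the action names are hypothetical, not any real agent framework's API): destructive operations pause for explicit human approval instead of executing on the model's say-so.

```python
# Actions an agent might request; the destructive ones require a human "yes".
DESTRUCTIVE = {"delete_files", "format_disk", "fire_weapon"}

def execute(action, target, confirm=input):
    if action in DESTRUCTIVE:
        answer = confirm(f"Agent requests {action} on {target!r}. Allow? [y/N] ")
        if answer.strip().lower() != "y":
            return f"BLOCKED: {action} on {target}"
    return f"executed: {action} on {target}"

print(execute("list_files", "~/documents"))    # safe: runs immediately
print(execute("delete_files", "~/documents"))  # pauses for a human decision
```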
All right.
Well, Steve, what's your worst-case scenario?
I would amplify that by partially repeating something I said before: we're not going to get this Ray Kurzweil singularity scenario, where the machines reach a point where they increase qualitatively in intelligence, adopt conscious agency, and start to make choices to override their programming and take over the world in a way that makes us obsolete.
The worst case scenario is that ordinary human agents will use the augmentation capabilities that come with AI to do bad things and to do bad things more efficiently than they could have done before.
And I think one of the best examples of that is precisely the use of AI in surveillance in a country like Communist China today, where AI systems are really the bane of the ordinary citizen whose every movement and even keystroke is being monitored by an AI system.
But effectively, those systems are under the control of people who want to use them to do that.
And so since AI can augment, it can augment good intentions, good designs, and it can augment bad intentions and bad designs.
Kind of like any other technology, it has the potential for use for good or for ill.
And I would add.
Maybe more capability.
And I would add not just amplifying good or evil, but also stupidity, right?
So doing things unintentionally, but doing them at a speed that we can't stop.
Yeah.
Yeah.
The well of human stupidity is bottomless.
Stephen C. Meyer from the Discovery Institute; George D. Montañez from Harvey Mudd College, running the Amistad Lab, studying machine learning and information theory.
Thank you so much.
That was really educational, and I appreciate talking to people who actually know what they're talking about.
It's a welcome change from reading the news.
I hope you'll come back.
Good questions, Andrew.
This was fantastic.
Good discussion.
Yeah.
All right.
Thanks a lot, guys.
Good to see you.
All right.
Thanks for having us.
So basically, it's the end of the world, and we should all just hide under our desks until it's over and our machine overlords come and take us away.
No, actually, that was a lot more like what I suspected.
And I always just think, you know, I'm a detective story writer.
What do I know?
But that's a lot closer to what I feel AI is going to be like.
But before the world ends, you might want to come to The Andrew Klavan Show on Friday, and we will talk some more.
In fact, we'll talk more about even things like this.