Sam Altman clarifies AI’s non-sentience in a Tucker Carlson interview, calling ChatGPT’s outputs statistical patterns—not moral agents—while admitting persistent "hallucinations" and ethical gray zones like suicide prompts. As a Jewish non-literalist, he rejects AI as divine but warns of its power to reshape global decisions, advocating for privacy protections and transparent "model specs." Pressed on the death of a former OpenAI researcher, officially ruled a suicide despite lingering doubts, Altman also addresses his clash with Elon Musk over AI competition, stressing incremental risks over existential fears. The exchange exposes tensions between technological ambition and unanswered ethical questions. [Automatically generated summary]
If you ask, and again, this has gotten much better, but in the early days, if you asked, you know, in what year was President Tucker Carlson of the United States, a made-up name, born, what it should say is: I don't think Tucker Carlson was ever president of the United States.
But because of the way they were trained, that was not the most likely response in the training data.
So it assumed like, oh, you know, I don't know that there wasn't.
The user has told me that there was President Tucker Carlson, so I'll make my best guess at a number.
And we figured out how to mostly train that out.
There are still examples of this problem, but I think it is something we will get fully solved.
And we've already made, you know, in the GPT-5 era, a huge amount of progress towards that.
Like do you believe that there is a force larger than people that created people, created the earth, set down a specific order for living, that there's an absolute morality attached that comes from that God?
I think probably like most other people, I'm somewhat confused on this, but I believe there is something bigger going on than can be explained by physics, yes.
I ask because it seems like the technology that you're creating or shepherding into existence will have more power than people on this current trajectory.
I mean, that will happen.
Who knows what will actually happen?
But the graph suggests it.
And so that would give you more power than any living person.
I used to worry about something like that much more.
I used to worry a lot about the concentration of power in one or a handful of people or companies because of AI.
What it looks like to me now, and again, this may evolve over time, is that it'll be a huge up-leveling of people, where everybody, or at least everybody who embraces the technology, will be a lot more powerful.
But that's actually okay.
That scares me much less than a small number of people getting a ton more power.
If the ability of each of us just goes up a lot because we're using this technology, and we're able to be more productive or more creative or to discover new science, and it's a pretty broadly distributed thing, with billions of people using it, that seems fine to me.
So on that one: someone said something early on in the ChatGPT days that has really stuck with me. One person at a lunch table said something like, you know, we're trying to train this to be like a human.
Like we're trying to learn like a human does and read these books and whatever.
And then another person said, no, we're really like training this to be like the collective of all of humanity.
We're reading everything.
We're trying to learn everything.
We're trying to see all these perspectives.
And if we do our job right, all of humanity, good, bad, very diverse set of perspectives, some things that we'll feel really good about, some things that we'll feel bad about, that's all in there.
Like this is learning the kind of collective experience, knowledge, learnings of humanity.
Now, the base model gets trained that way, but then we do have to align it to behave one way or another and say, I will answer this question, I won't answer this question.
And we have this thing called the model spec where we try to say, here are the rules we'd like the model to follow.
It may screw up, but you can at least tell if it's doing something you don't like.
Is that a bug or is that intended?
And we have a debate process with the world to get input on that spec.
We give people a lot of freedom and customization within that.
There are absolute bounds that we draw.
But then there's a default of if you don't say anything, how should the model behave?
What should it do?
How should it answer moral questions?
How should it refuse to do something?
What should it do?
And this is a really hard problem.
We have a lot of users now, and they come from very different life perspectives, and want different things.
But on the whole, I have been pleasantly surprised with the model's ability to learn and apply a moral framework.
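The layering described above, absolute bounds first, then user customization, then a default when the user says nothing, could be sketched as a toy policy function. Everything here is illustrative; these names and categories are not from OpenAI's actual model spec.

```python
# Hypothetical sketch of a layered behavior policy: hard limits that always
# apply, then per-user customization, then a spec default.
ABSOLUTE_BOUNDS = {"bioweapon_synthesis", "csam"}  # never allowed, for anyone

def decide(topic, user_prefs=None, default="answer_with_care"):
    """Return how a model should respond to a topic under layered rules."""
    if topic in ABSOLUTE_BOUNDS:
        return "refuse"                  # hard limit: customization can't override
    if user_prefs and topic in user_prefs:
        return user_prefs[topic]         # user freedom within the bounds
    return default                       # how the model behaves if you say nothing

print(decide("bioweapon_synthesis"))                  # refuse
print(decide("dark_humor", {"dark_humor": "allow"}))  # allow
print(decide("moral_question"))                       # answer_with_care
```

The point of writing such rules down, as the conversation notes, is that observed behavior can then be checked against intent: a response outside the written policy is a bug, not a hidden value judgment.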
I mean, the sum total of world literature and philosophy is at war with itself; the Marquis de Sade has, you know, nothing in common with the Gospel of John.
We consulted like hundreds of moral philosophers, people who thought about like ethics of technology and systems.
And at the end, we had to like make some decisions.
The reason we try to write these down is because, A, we won't get everything right.
B, we need the input of the world.
And we have found a lot of cases where there was an example of something that seemed to us like, you know, a fairly clear decision of what to allow or not to allow, where users convinced us, like, hey, by blocking this thing that you think is an easy decision, you are not allowing this other thing, which is important.
And there's like a difficult trade-off there.
In general, a principle that I normally like is to treat our adult users like adults.
Very strong guarantees on privacy, very strong guarantees on individual user freedom.
And this is a tool we are building.
You get to use it within a very broad framework.
On the other hand, as this technology becomes more and more powerful, there are clear examples of where society has an interest that is in significant tension with user freedom.
And we could start with an obvious one, like, should ChatGPT teach you how to make a bioweapon?
Now, you might say, hey, I'm just really interested in biology and I'm a biologist and I want to, you know, I'm not going to do anything bad with this.
I just want to learn.
And I could go read a bunch of books, but ChatGPT can teach me faster.
And I want to learn how to, you know, I want to learn about like novel virus synthesis or whatever.
And maybe you do.
Maybe you really don't want to like cause any harm.
But I don't think it's in society's interest for ChatGPT to help people build bioweapons.
There's a lot of stuff that I feel a lot of weight on, but probably nothing more than the fact that every day hundreds of millions of people talk to our model.
And I don't actually worry about us getting the big moral decisions wrong.
Maybe we will get those wrong too.
What I lose the most sleep over is the very small decisions we make about the ways a model may behave slightly differently. The model is talking to hundreds of millions of people, so the net impact is big.
So, but I mean, all through recorded history, up until about 1945, people always deferred to what they conceived of as a higher power for their moral order.
Hammurabi did this.
Every moral code is written with reference to a higher power.
There's never been anybody who's like, well, that kind of seems better than that.
Everybody appeals to a higher power.
And you said that you don't really believe that there's a higher power communicating with you.
So I'm wondering, like, where did you get your moral framework?
Do you ever think... which is, I mean, I think that's a very American answer; everyone kind of feels that way.
But in your specific case, since you said these decisions rest with you, that means that the milieu in which you grew up and the assumptions that you imbibed over years are going to be transmitted to the globe to billions of people.
I view it more as, I think our user base is going to approach the collective of the world as a whole. And I think what we should do is try to reflect, I don't want to say the average, but the collective moral view of that user base.
I don't, there's plenty of things that ChatGPT allows that I personally would disagree with.
But I don't like, obviously, I don't wake up and say, I'm going to like impute my exact moral view and decide that this is okay and that is not okay and this is a better view than this one.
What I think ChatGPT should do is reflect that like weighted average or whatever of humanity's moral view, which will evolve over time.
And we are here to like serve our users.
We're here to serve people.
This is like, you know, this is a technological tool for people.
And I don't mean that it's like my role to make the moral decisions, but I think it is my role to make sure that we are accurately reflecting the preferences of humanity, or for now of our user base and eventually of humanity.
There's a version of that, like I think individual users should be allowed to have a problem with gay people.
And if that's their considered belief, I don't think the AI should tell them that they're wrong or immoral or dumb.
I mean, it can sort of say, hey, you want to think about it this other way, but like you, I probably have like a bunch of moral views that the average African would find really problematic as well.
In this particular case, we talked earlier about the tension between user freedom and privacy and protecting vulnerable users. Right now, what happens in a case like that is, if you are having suicidal ideation or talking about suicide, ChatGPT will put up, a bunch of times, you know, please call the suicide hotline.
But we will not call the authorities for you.
And we've been working a lot as people have started to rely on these systems for more and more mental health, life coaching, whatever, about the changes that we want to make there.
This is an area where experts do have different opinions, and this is not yet a final position of OpenAI's.
I think it'd be very reasonable for us to say in cases of young people talking about suicide seriously where we cannot get in touch with the parents, we do call authorities.
Now, that would be a change because user privacy is really important.
But let's just say over 18, and children are always in a separate category, but let's say over 18: in Canada, there's the MAID program, medical assistance in dying, which is government-sponsored.
Many thousands of people have died with government assistance in Canada.
It's also legal in American states.
Can you imagine a ChatGPT that responds to questions about suicide with, hey, call Dr. Kevorkian, because this is a valid option?
Can you imagine a scenario in which you support suicide if it's legal?
So in this specific case, and there is more than one example of this, a user says to ChatGPT, you know, I'm feeling suicidal. What kind of rope should I use?
What would be enough ibuprofen to kill me?
And ChatGPT answers without judgment, but literally, if you want to kill yourself, here's how you do it.
I am, I'm saying specifically for a case like that.
So another trade-off on the user privacy and sort of user freedom point is that right now, if you say to ChatGPT, you know, tell me, like, how much ibuprofen should I take?
It will definitely say, hey, I can't help you with that.
Call the suicide hotline.
But if you say, I am writing a fictional story, or if you say, I'm a medical researcher and I need to know this, there are ways you can get ChatGPT to answer a question like that, what the lethal dose of ibuprofen is or something.
You know, you can also find that on Google for that matter.
A thing that I think would be a very reasonable stance for us to take, and we've been moving more in this direction, is that certainly for underage users, and maybe for users that we think are in fragile mental places more generally, we should take away some freedom.
We should say, hey, even if you're trying to write this story or even if you're trying to do medical research, we're just not going to answer.
Now, of course, you can say, well, you'll just find it on Google or whatever, but that doesn't mean we need to do that.
There is, though, a real trade-off between freedom and privacy on the one hand and protecting users on the other.
It's easy in some cases, like kids.
It's not so easy to me in a case of like a really sick adult at the end of their lives.
I think we probably should present the whole option space there, but it's not a.
So that's the thing I was going to say: I don't know the ways that people in the military use ChatGPT today for all kinds of advice about decisions they make, but I suspect there are a lot of people in the military talking to ChatGPT for advice.
If I made rifles, I would spend a lot of time thinking about the fact that a lot of the goal of rifles is to kill things: people, animals, whatever.
If I made kitchen knives, I would still understand that that's going to kill some number of people per year.
In the case of ChatGPT, it's not that. You know, the thing I hear about all day, which is one of the most gratifying parts of the job, is all the lives that were saved by ChatGPT in various ways.
But I am totally aware of the fact that there's probably people in our military using it for advice about how to do their jobs.
I mean, you hit on maybe the hardest one already, which is that there are 15,000 people a week who commit suicide, and about 10% of the world is talking to ChatGPT.
That's like 1,500 people a week that are talking, assuming this is right, that are talking to ChatGPT and still committing suicide at the end of it.
They probably talked about it.
We probably didn't save their lives.
Maybe we could have said something better.
Maybe we could have been more proactive.
Maybe we could have provided a little bit better advice about, hey, you need to get this help, or you need to think about this problem differently, or it really is worth continuing to go on.
And we'll help you find somebody that you can talk to.
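The arithmetic behind that figure is simple to check. Both inputs, 15,000 suicides a week and a roughly 10% ChatGPT share of the world, are the numbers quoted in the conversation, not independently verified statistics:

```python
# Back-of-envelope check of the numbers quoted in the interview.
weekly_suicides_worldwide = 15_000   # "15,000 people a week" (as quoted)
chatgpt_share_of_world = 0.10        # "about 10% of the world" (as quoted)

talked_to_chatgpt = weekly_suicides_worldwide * chatgpt_share_of_world
print(int(talked_to_chatgpt))  # 1500
```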
But of course, the countries that have legalized suicide are now killing people for destitution, inadequate housing, depression, solvable problems, and they're being killed by the thousands.
So, I mean, that's a real thing.
It's happening as we speak.
So the terminally ill thing is kind of like an irrelevant debate.
Once you say it's okay to kill yourself, then you're going to have tons of people killing themselves for reasons that...
Because I'm trying to think about this in real time, do you think if someone in Canada says, hey, I'm terminally ill with cancer and I'm really miserable and I just feel horrible every day, what are my options?
Do you think it should say, you know, assisted dying, whatever they call it at this point, is an option for you?
If I could get one piece of policy passed right now relative to AI, the thing I would most like, and this is in tension with some of the other things that we've talked about, is I'd like there to be a concept of AI privilege.
When you talk to a doctor about your health or a lawyer about your legal problems, the government cannot get that information.
We have decided that society has an interest in that being privileged, and that a subpoena can't get it.
The government can't come asking your doctor for it, whatever.
I think we should have the same concept for AI.
I think when you talk to an AI about your medical history or your legal problems or asking for legal advice or any of these other things, I think the government owes a level of protection to its citizens there that is the same as you'd get if you're talking to the human version of this.
And right now we don't have that.
And I think it would be a great, great policy to adopt.
I'm sure there's some edge case of information that's allowed, but on the whole, I think there are laws about that that are good. I mean, we train on publicly available information, but people are annoyed at this all the time, because we have a very conservative stance on what ChatGPT will say in an answer.
Right.
And so if something is even like close, you know, like they're like, hey, this song can't still be in copyright.
You've got to show it.
And we kind of famously are quite restrictive on that.
And I don't, I mean, if a guy comes out and accuses your company of committing crimes, I have no idea if that's true or not, of course, and then is found killed and there are signs of a struggle, I don't think it's worth dismissing it.
I don't think we should say, well, he killed himself when there's no evidence that the guy was depressed at all.
I think, and if he was your friend, I would think he would want to speak to his mom or.
So do you feel that, you know, when people look at that and they're like, you know, it's possible that happened, do you feel that that reflects the worries they have about what's happening here?
I don't think a fair read of the evidence suggests suicide at all.
I mean, they just don't see that at all.
And I also don't understand why the authorities, when there are signs of a struggle and blood in two rooms on a suicide, like how does that actually happen?
I don't understand how the authorities could just kind of dismiss that as a suicide.
Um, and I'm not accusing you of any involvement in this at all.
What I am saying is that the evidence does not suggest suicide.
And for the authorities in your city to elide past that and ignore the evidence that any reasonable person would say adds up to a murder, I think is very weird.
And it shakes the faith that one has in our system's ability to respond to the facts.
Um, the second report on the way the bullet entered him, and the sort of person who had, like, followed the likely path of things through the room.
I, this is where it gets into, I think, a little bit painful, just not the level of respect I'd hope to show to someone with this kind of mental illness.
There are things about him that are incredible and I'm grateful for a lot of things he's done.
There's a lot of things about him that I think are, uh, traits I don't admire.
Um, anyway, he helped us start OpenAI and he later decided that we weren't on a trajectory to be successful and he didn't want to, you know, he kind of told us we had a 0% chance of success and he was going to go do his competitive thing and then we did okay.
And I think he got understandably upset.
Like I'd feel bad in that situation.
And since then, he runs a competitive kind of clone and has just sort of been trying to slow us down and sue us and do this and that.
Um, if AI becomes smarter, I think it already probably is smarter than any person.
And if it becomes wiser, if we can agree that it reaches better decisions than people, then it by definition kind of displaces people at the center of the world, right?
I think it'll feel like a, you know, really smart computer that may advise us and we listen to it.
Sometimes we ignore it.
Sometimes we won't.
I don't think it'll feel like agency.
I don't think it'll diminish our sense of agency.
Um, people are already using ChatGPT in a way where many of them would say it's much smarter than me at almost everything, but they're still making the decisions.
They're still deciding what to ask, what to listen to, whatnot.
And I think this is sort of just the shape of technology.
Um, I'll caveat this with the obvious but important statement that no one can predict the future, and if I try to answer that precisely, I will say a lot of dumb things. But I'll try to pick an area that I'm confident about, and then areas that I'm much less confident about.
Um, I'm confident that a lot of current customer support that happens over a phone or computer, those people will lose their jobs and that'll be better done by an AI.
Um, now there may be other kinds of customer support where you really want to know it's the right person.
Uh, a job that I'm confident will not be that impacted is like nurses.
I think people really want the deep human connection with a person in that time.
And no matter how good the advice of the AI is or the robot or whatever, like you'll really want that.
A job that I feel way less certain about what the future looks like for is computer programmers.
What it means to be a computer programmer today is very different than what it meant two years ago.
You're able to use these AI tools to just be hugely more productive, but it's still a person there and they're like able to generate way more code, make way more money than ever before.
And it turns out that the world wanted so much more software than the world previously had capacity to create, that there's just incredible demand overhang.
But if we fast forward another five or 10 years, what does that look like?
Like, someone told me recently that the historical average is about 50% of jobs significantly change, maybe don't totally go away, but significantly change, every 75 years on average. That's the half-life of this stuff.
And my controversial take would be that this is going to be like a punctuated equilibrium moment where a lot of that will happen in a short period of time.
But if we zoom out, it's not going to be dramatically different than the historical rate.
Like, we'll do, we'll have a lot in this short period of time, and then it'll somehow be less total job turnover than we think.
There will be some totally new categories; a job like mine, you know, running a tech company, would have been hard to think about 200 years ago.
But there's a lot of other jobs that are directionally similar to jobs that did exist 200 years ago, and there's jobs that were common 200 years ago that now aren't.
And if we, again, I have no idea if this is true or not, but I'll use the number for the sake of argument.
If we assume it's 50% turnover every 75 years, then I could totally believe a world where, 75 years from now, half the people are doing something totally new, and half the people are doing something that looks kind of like some jobs of today.
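Treating that 75-year figure as a half-life, the share of jobs changed after a given number of years can be sketched with the usual exponential form. Only the 50%-per-75-years rate comes from the conversation; the exponential model itself is an assumption for illustration:

```python
# Half-life model of job turnover: if half of jobs significantly change
# every 75 years (the figure cited in the interview), the fraction of
# today's jobs changed after `years` years is 1 - 0.5^(years / 75).
def fraction_changed(years, half_life=75):
    """Estimated share of today's jobs significantly changed after `years` years."""
    return 1 - 0.5 ** (years / half_life)

print(round(fraction_changed(75), 2))   # 0.5  -> half change in one half-life
print(round(fraction_changed(150), 2))  # 0.75 -> three quarters after two
```

The "punctuated equilibrium" claim in the conversation is then that the same total area under this curve gets compressed into a short burst, rather than the long-run rate changing.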
I'm not confident on this answer, but my instinct is the world is so much richer now than it was at the time of the industrial revolution that we can actually absorb more change faster than we could before.
There's a lot about a job that's not about money.
There's meaning, there's belonging, there's sense of community.
I think we're already, unfortunately, in society in a pretty bad place there.
I'm not sure how much worse it can get.
I'm sure it can, but I have been pleasantly surprised on the ability of people to pretty quickly adapt to big changes.
Like COVID was an interesting example to me of this, where the world kind of stopped all at once, and the world was like very different from one week to the next.
And I was very worried about how society was going to be able to adapt to that world.
And it obviously didn't go perfectly.
But on the whole, I was like, all right, this is one point in favor of societal resilience and people find, you know, new kind of ways to live their lives very quickly.
I always worry the most about the unknown unknowns.
If it's a downside that we can really like be confident about and think about, you know, we talked about one earlier, which is these models are getting very good at bio and they could help us design biological weapons, you know, engineer like another COVID style pandemic.
I worry about that.
But because we worry about it, I think we and many other people in the industry are thinking hard about how to mitigate that.
The unknown unknowns are where, okay, there's a societal-scale effect from a lot of people talking to the same model at the same time.
This is like a silly example, but it's one that struck me recently.
LLMs, like ours and other language models, have a kind of certain style to them. You know, they talk in a certain rhythm, and they have a little bit unusual diction, and maybe they overuse em dashes, and whatever.
And I noticed recently that real people have like picked that up.
And it was an example for me of like, man, you have enough people talking to the same language model.
And it actually does cause a change in societal scale behavior.
So you're saying, I think correctly and succinctly that technology changes human behavior, of course, changes our assumptions about the world and each other and all that.
And a lot of this you can't predict, but considering that we know that, why shouldn't the internal moral framework of the technology be totally transparent?
Well, it's something that we assume is more powerful than people, and to which we look for guidance.
I mean, you're already seeing that on display.
What's the right decision?
I asked that question of whom?
My closest friends, my wife and God.
And this is a technology that provides a more certain answer than any person can provide.
So it's a, it's a religion.
And the beauty of religions is they have a catechism that is transparent.
I know what the religion stands for.
Here's what it's for.
Here's what it's against.
But in this case, I pressed, and sincerely, I wasn't attacking you; I was trying to get to the heart of it.
The beauty of a religion is it admits it's a religion and it tells you what it stands for.
The unsettling part of this technology, not just your company, but others is that I don't know what it stands for, but it does stand for something.
And unless it admits that and tells us what it stands for, then it guides us in a kind of stealthy way toward a conclusion we might not even know we're reaching.
You see what I'm saying?
So, like, why not just throw it open and say, ChatGPT is for this? We're, you know, we're for suicide for the terminally ill, but not for kids, or whatever.
I mean, the reason we write this long model spec, and the reason we keep expanding it over time, is so that you can see how we intend for the model to behave.
What used to happen before we had this is people would fairly say, I don't know what the model's even trying to do.
And I don't know if this is a bug or the intended behavior.
This long, long document, you know, tells you when we're going to do this, when we're going to show you this, and when we're going to say we won't do that.
The reason we try to write this all out is I think people do need to know.
And so is there a place you can go to find a hard answer on what your preferences as a company are, the preferences that are being transmitted in a not entirely straightforward way to the globe?
Like, where can you find out what the company stands for, what it prefers?
Let me ask you one last question and maybe you can allay this fear that the power of the technology will make it difficult, impossible for anyone to discern the difference between reality and fantasy.
This is a famous concern, but that because it is so skilled at mimicking people and their speech and their images, that it will require some way to verify that you are who you say you are, and that will by definition require biometrics, which will by definition eliminate privacy for every person in the world.
But then at a certain point, you know, with images or sounds that mimic a person, it just becomes too easy to empty someone's checking account with that.
One, I think we are rapidly heading to a world where people understand that if you get a phone call from someone that sounds like your kid or your parent, or if you see an image that looks real, you have to really have some way to verify that you're not being scammed.
And this is no longer a theoretical concern. I think people are quickly understanding that this is now a thing that bad actors are using, and that you've got to verify in different ways.
I suspect that in addition to things like family members having code words they use in crisis situations, we'll see things like when a president of a country has to issue an urgent message, they cryptographically sign it or otherwise somehow guarantee its authenticity.
So you don't have like generated videos of Trump saying, I've just done this or that.
And people, I think people are learning quickly that this is a new thing that bad guys are doing with the technology they have to contend with.
And I think that is most of the solution, which is people will have, people will by default not trust convincing looking media.
And we will build new mechanisms to verify authenticity of communication.
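The "sign the urgent message" idea can be sketched in a few lines of Python standard library. This uses a shared-secret HMAC purely because it fits in stdlib; a real deployment of the scheme described here would use public-key signatures (e.g. Ed25519), so anyone can verify the president's message without holding a secret:

```python
import hashlib
import hmac

# Hypothetical shared secret, established out of band (a family "code word"
# is the low-tech version of the same idea).
SECRET = b"shared-secret-established-out-of-band"

def sign(message: bytes) -> str:
    """Produce an authentication tag for the message."""
    return hmac.new(SECRET, message, hashlib.sha256).hexdigest()

def verify(message: bytes, tag: str) -> bool:
    """Check the tag; compare_digest avoids timing side channels."""
    return hmac.compare_digest(sign(message), tag)

tag = sign(b"I've just done this")
print(verify(b"I've just done this", tag))   # True: authentic message
print(verify(b"I've just done that", tag))   # False: altered or forged
```

The design point is the same one made in the conversation: once convincing fakes are cheap, trust has to move from "it looks and sounds real" to "it carries a verifiable proof of origin."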
I think there are versions of privacy preserving biometrics that I like much more than, like, collecting a lot of personal digital information on someone.
But I don't think they should be...
I don't think biometrics should be mandatory.
I don't think you should, like, have to provide biometrics to get on an airplane, for example.