Sam Harris speaks with philosopher David Chalmers about the nature of consciousness, the challenges of understanding it scientifically, and the prospect that we will one day build it into our machines. If the Making Sense podcast logo in your player is BLACK, you can SUBSCRIBE to gain access to all full-length episodes at samharris.org/subscribe.
Just a note to say that if you're hearing this, you are not currently on our subscriber feed and will only be hearing the first part of this conversation.
In order to access full episodes of the Making Sense Podcast, you'll need to subscribe at SamHarris.org.
There you'll find our private RSS feed to add to your favorite podcatcher, along with other subscriber-only content.
We don't run ads on the podcast, and therefore it's made possible entirely through the support of our subscribers.
So if you enjoy what we're doing here, please consider becoming one.
Today I'm speaking with David Chalmers.
David is a philosopher at NYU and also at the Australian National University, and he's the co-director of the Center for Mind, Brain, and Consciousness at NYU.
David, as you'll hear, though we've never met, was instrumental in turning my mind toward philosophy and science, ultimately because of the work he began doing on the topic of consciousness in the early '90s.
And I found it fascinating to talk to David.
His interests and intuitions in philosophy align with my own to a remarkable degree.
We spend most of our time talking about consciousness and what it is and why it is so difficult to understand scientifically, conceptually.
We talk about the hard problem of consciousness, which is a phrase he introduced into philosophy that has been very useful in shaping the conversation here.
At least it's been useful for those of us who think that there really is a hard problem that resists any kind of easy neurophysiological solution or computational solution.
We talk about artificial intelligence and the possibility that the universe is a simulation and other fascinating topics, some of which can seem so bizarre or abstract as to not have any real tangible importance.
But I would urge you not to be misled here.
I think all of these topics will be more and more relevant in the coming years as we build devices which, if they're not in fact conscious, will seem conscious to us.
And as we confront the prospect of augmenting our own conscious minds by integrating our brains with machines more and more directly, and even copying our minds onto the hard drives of the future, all of these arcane philosophical problems will become topics of immediate and pressing personal and ethical importance.
David is certainly one of the best guides I know to the relevant terrain.
So now it's with great pleasure that I bring you David Chalmers.
Well, I'm here with philosopher David Chalmers.
David, thanks for coming on the podcast.
Thanks.
It's a pleasure to be talking with you.
You know, I don't think we've ever met.
Are you aware of whether or not we've met?
I feel like we've met by email, but... I don't think so.
I've had emails, a couple of emails back and forth over the years and with Annika, your wife as well, but never in person that I recall.
Yeah, you know, the reason I'm confused about this, and this is almost certainly something you don't know, is that you served quite an important intellectual role in my life.
I went to one of those early Tucson conferences on consciousness.
Oh, I didn't know that.
I think it was probably 95.
Was 94 the first one?
Yeah, 94 was the first small one with about 300 people.
Then it got really big in 96 with about 2,000 people.
I think I went to 95 and probably 96 as well.
And I had dropped out of school and was, I guess you could say, looking for some direction in life.
And I became very interested in the conversations that were happening in the philosophy of mind.
I think probably the first thing I saw was some of the sparring between Dan Dennett and John Searle.
Then I noticed you in the Journal of Consciousness Studies, and then I think I just saw an ad, probably in the Journal of Consciousness Studies, for the Tucson Conference, and showed up, and I quite distinctly remember your talk there.
Your articulation of the hard problem of consciousness really just made me want to do philosophy, which led very directly into my wanting to know more about science and sent me back to the ivory tower.
And I think a significant percentage of my getting a PhD in neuroscience and continuing to be interested in this issue was the result of the conversation you started in Tucson more than 20 years ago.
Okay, well, I'm really pleased to hear that.
I had no idea.
Yeah.
It might have been the '96 conference.
Was Dan Dennett there, you said?
You know, I don't know if... I don't recall if Dan was there.
I went to... I've gone to at least two of them, and they were in quick succession, and I think Roger Penrose was there.
I remember Stuart Hameroff talking at least about their thesis, and it was a fascinating time.
Yeah, that's the event that people call the Woodstock of Consciousness.
It was kind of, you know, getting the band together for the first time.
It was really a, it was a crazy conference.
It was a whole lot of fun.
It was the first time I had met a lot of these people too, myself, actually.
Oh, interesting.
It was very influential for me.
I feel like I am a bad judge of how familiar people are with the problem of consciousness, because I have been so steeped in it and fixated on it for decades now. So I'm always surprised that people find this a novel problem, and difficult even to notice as a problem. So let's start at the beginning, and let's just talk about what consciousness is. What do you mean by consciousness, and how would you distinguish it from the other topics that it's usually conflated with, like self-awareness and behavioral report and access and all the rest?

Well, it's awfully hard to define consciousness, but I'd like to start by saying consciousness is the subjective experience of the mind and the world. It's basically what it feels like from the first person point of view to be thinking and perceiving and judging and so on.
So when I look out at a scene like I'm doing now out my window, there are trees.
And there's grass and a pond and so on.
Well, there's a whir of information processing where all this stuff happens: photons hit my retina, which sends a signal up the optic nerve to my brain.
Eventually I might say something about it.
That's all of a level of functioning and behavior.
But there's also really crucially something it feels like from the first person point of view.
I might have an experience of the colors, a certain greenness of the green, a certain reflection on the pond.
This is a little bit like the inner movie in the head.
And the crucial problem of consciousness, for me at least, is this subjective part.
What it feels like from the inside.
This we can distinguish from questions about, say, behavior and about functioning.
People sometimes use the word consciousness just for the fact that, for example, I'm awake and responsive.
That's something that can be understood straightforwardly in terms of behavior and there are going to be mechanisms for how I'm responding and so on.
So I like to call those problems of consciousness the easy problems, the ones about how we behave, how we respond, how we function.
What I like to call the hard problem of consciousness is the one about how it feels from the first person point of view.
Yeah, there was another very influential articulation of this problem, which I would assume influenced you as well, which was Thomas Nagel's essay, What Is It Like to Be a Bat?
The formulation he gave there is, if it's like something to be a creature or a system processing information, whatever it's like, even if it's something we can't understand,
The fact that it is like something, the fact that there's an internal, subjective, qualitative character to the thing, the fact that if you could switch places with it, it wouldn't be synonymous with the lights going out, that fact, the fact that it's like something to be a bat, is the fact of consciousness in the case of a bat, or in any other system.
I know people who are not sympathetic with that formulation just think it's a kind of tautology or it's just a question-begging formulation of it, but as a rudimentary statement of what consciousness is, I've always found that to be an attractive one.
Do you have any thoughts on that?
Yeah, I find it's about as good a definition as we're going to get for consciousness.
The idea is roughly that a system is conscious if there's something it's like to be that system.
So there's something it's like to be me.
Right now, I'm conscious.
There's nothing it's like, presumably, to be this glass of water on my desk.
If there's nothing it's like to be that glass of water on my desk, then it's not conscious.
Likewise, some of my mental states, you know, my seeing the green leaves right now, there's something it's like for me to see the green leaves.
So that's a conscious state for me.
But maybe there's some unconscious language processing of syntax going on in my head that doesn't feel like anything to me or some motor processes in the cerebellum.
And those might be states of me, but they're not conscious states of me, because there's nothing it's like for me to undergo those states.
So I find this is a definition that's very vivid and useful for me.
That said, it's just a bunch of words, like anything.
And for some people, this bunch of words, I think, is very useful in activating the idea of consciousness from the subjective point of view.
Other people hear something different in that set of words: what is it like? Are you saying, what is it similar to? Well, it's kind of similar to my brother, but it's different as well. For those people, that set of words doesn't work. What I've found over the years is that this phrase of Nagel's is incredibly useful for at least some people in getting them onto the problem, although it doesn't work for everybody.
What do you make of the fact that so many scientists and philosophers find it difficult to concede that there's a problem here? And I think I should probably get you to state why it's so hard, or why you have distinguished the hard from the easy problems of consciousness. But this is just a common phenomenon. I mean, there are people like Dan Dennett and the Churchlands and other philosophers who just kind of ram their way past the mystery here and declare that it's a pseudo-mystery. And you and I have both had the experience of witnessing people either pretend that this problem doesn't exist, or acknowledge it only to change the subject and then pretend that they've addressed it. So let's state what the hard problem is, and perhaps you can say why it's not immediately compelling to everyone that it's in fact hard.
Yeah, I mean, there's obviously a huge amount of disagreement in this area, but my sense is that most people have at least got a reasonable appreciation of the fact that there's a big problem here. Of course, what you do after that is very different in different cases. Some people think it's only an initial problem, and that we ought to see it as an illusion and get past it.
But yeah, to state the problem, I find it useful to first start by distinguishing the easy problems, which are problems basically about the performance of functions from the hard problem, which is about experience.
So the easy problems are, you know, how is it, for example, we discriminate information in our environment and respond appropriately?
How does the brain integrate information from different sources and bring it together to make a judgment and control behavior?
How, indeed, do we voluntarily control behavior to respond in a controlled way to our environment?
How does our brain monitor its own states?
These are all big mysteries, and actually neuroscience has not gotten all that far on some of these problems; they're all quite difficult.
But in those cases, we have a pretty clear sense of what the research program is and what it would take to explain them.
It's basically a matter of finding some mechanism in the brain that, for example, is responsible for discriminating the information and controlling the behavior.
And although it's pretty hard work finding the mechanism, we're on a path to doing that.
So a neural mechanism for discriminating information, a computational mechanism for the brain to monitor its own states.
And so on.
So for the easy problems, they at least fall within the standard methods of the brain and cognitive sciences.
But basically, we're trying to explain some kind of function and we just find a mechanism.
The hard problem, what makes the hard problem of experience hard, is it doesn't really seem to be a problem about behavior.
Or about functions. You could, in principle, imagine explaining all of my behavioral responses to a given stimulus, and how my brain discriminates and integrates and monitors itself and controls behavior. You could explain all that with a neural mechanism and still not have touched the central question, which is: why does it feel like something from the first person point of view?
That just doesn't seem to be a problem about explaining behaviors and explaining functions.
And as a result, the usual methods that work so well for us in the brain and cognitive sciences, finding a mechanism that does the job, just don't obviously apply here.
We're going to get correlations.
We're certainly finding correlations between processes in the brain and bits of consciousness, an area of the brain that might light up when you see red or when you feel pain.
But nothing there seems yet to be giving us an explanation.
Why does all that processing feel like something from the inside?
Why doesn't it go on just in the dark, as if we were giant robots or zombies without any subjective experience?
So that's the hard problem and I'm inclined to think that most, you know, most people at least recognize there is at least the appearance of a big problem here.
From that point, people react in different ways.
Someone like Dan Dennett says, ah, it's all an illusion or a confusion, and one that we need to get past.
I mean, I respect that line.
I think it's a hard enough problem that we need to be exploring every avenue here, and one avenue that's very much worth exploring is the view that it's an illusion.
But there is something kind of faintly unbelievable about the whole idea that the data of consciousness here are an illusion.
To me, they're the most real thing in the universe.
You know, the feeling of pain, the experience of vision or of thinking.
So it's a very hard line to take.
He wrote a book, Consciousness Explained, back in the early '90s, where he tried to take that line.
It was a very good and very influential book, but I think most people have found that, at the end of the day, it just doesn't seem to do justice to the phenomenon.
To be fair to Dan, it's been a long time since I've looked at that book.
That actually might have been the first book I read on this topic, back when it came out, I think in '91.
Does he actually say... and this is strange: I'm very aligned with you and people like Thomas Nagel on these questions in the philosophy of mind, and yet I've had this alliance with Dan for many years around the issue of religion, and so I've spent a lot of time with Dan.
We've never really gotten into a conversation on consciousness.
Perhaps we've been wary of colliding on this topic, and we had a somewhat unhappy collision on the topic of free will.
Is it true that he says that consciousness is an illusion, or is it somehow just that the hardness of the hard problem is illusory, the idea that the hard problem is categorically different from the easy problems?
I completely understand how he would want to push that intuition.
As I've said before, and I really don't see another way of seeing this, it seems to me that consciousness is the one thing in this universe that can't be an illusion.
I mean, even if we are confused about everything, even if we are confused about the qualitative character of our experience in many important respects, so that we're not subjectively incorrigible... you can be wrong about what it's like to be you in terms of the details, which is to say you can become a better judge of what it's like to be you in each moment.
But the fact that it is like something to be you, the fact that something seems to be happening, even if this is only a dream, or you're a brain in a vat, or you're otherwise misled by everything: there is something seeming to happen, and that seeming is all you need to assert the absolute, undeniable reality of consciousness.
I mean, that is the fact of consciousness every bit as much as any other case in which you might assert its existence.
So I just don't see how a claim that consciousness itself is an illusion can ever fly.
Yeah, I think I'm with you on this.
I think Dan's views have actually evolved a bit over the years. Back in maybe the 1980s or so, he used to say things that sounded much more strongly like consciousness doesn't exist.
It's an illusion.
He wrote a paper called "On the Absence of Phenomenology," saying there really isn't such a thing as what we call phenomenology, which is basically just another word for consciousness.
He wrote another one called "Quining Qualia," which said we just need to get rid of this whole idea of qualia, which again is a word philosophers use for the qualitative character of experience, the thing that makes seeing red different from seeing green; they seem to involve different qualities.
At one point, Dan was inclined to say, Oh, that's just a mistake.
There's nothing there.
Over the years, I think, he's found that people find that line just a bit too strong to be believable.
It just seems frankly unbelievable from the first person point of view that there are no qualia, that there is no feeling of red versus the feeling of green, or that there is no consciousness.
So he's evolved in the direction of saying, yeah, there's consciousness, but it's really just in the sense that, for example, there's functioning and behavior and information encoded. There's not really consciousness in this strong phenomenological sense that drives the hard problem.
I mean, in a way, it's a bit of a verbal relabeling of the old line. I know you're familiar with these debates over free will, where one person says there is no free will, and the other person says, well, there is free will, but it's just this much more deflated thing which is compatible with determinism, and those are basically two ways of saying the same thing. I think Dan used to say there is no consciousness. Now he says, well, there is consciousness, but only in this very deflated sense.
And I think ultimately, it's another way of saying the same thing.
He doesn't think there is consciousness in that strong subjective sense that poses the hard problem.
I feel super sensitized to the prospect of people not following the plot here, because if it's the first time someone is hearing these concerns, it's easy to just lose sight of what the actual subject is.
I just want to retrace a little bit of what you said sketching the hardness of the hard problem: the distinction between understanding function and understanding the fact that experience exists.
And so we have functions like motor behavior or learning or visual perception.
And it's very straightforward to think about explaining these in mechanistic terms.
I mean, so you have something like vision.
We can talk about the transduction of light energy into neurochemical events and then the mapping of the visual field onto the relevant parts of, in our case, the visual cortex.
And this is very complicated, but it's not in principle obscure.
The fact that it's like something to see, however, remains totally mysterious no matter how much of this mapping you do.
And if you imagine it from the other side: if we built a robot that could do all the things we can, it seems to me that at no point in refining its mechanism would we have reason to believe that it's now conscious, even if it passes the Turing test.
This is actually one of the things that concerns me about AI.
It seems one of the likely paths we could take is that we could build machines that seem conscious, and the effect will be so convincing that we will just lose sight of the problem.
All of our intuitions that lead us to ascribe consciousness to other people and to certain animals will be played upon, because we will build the machines so as to do that, and it will cease to seem philosophically interesting or even ethically appropriate to wonder whether there's something that it's like to be one of these robots.
And yet, it seems to me that we still won't know whether these machines are actually conscious unless we've understood how consciousness arises in the first place, which is to say, unless we've solved the hard problem.
Yeah, and I think we can, maybe we should distinguish the question of whether a system is conscious from how that consciousness is explained.
I mean, even in the case of other people, well, they're behaving as if they're conscious, and we tend to be pretty confident that other people are conscious, so we don't really regard there to be a question about whether other people are conscious.
Still, I think it's consistent to have that attitude and still find it very mysterious, this fact of consciousness, and to be utterly puzzled about how we might explain it in terms of the brain.
So I suspect that with machines, we may well end up, as you say, finding it very hard to deny that they're conscious. If there are machines hanging around with us, talking in a human-like way and reflecting on their consciousness, machines saying, hey, I'm really puzzled by this whole consciousness thing, because I know I'm just a collection of silicon circuits, but it still feels like something from the inside... if machines are doing that, I'm going to be pretty convinced that they are conscious, as I am conscious.
But that won't make it any less mysterious.
Well, maybe it'll just make it all the more mysterious.
How on earth could this machine be conscious, even though it's a collection of silicon circuits?
Likewise, how on earth could I be conscious just as a result of these processes in my brain?
It's not that I see anything intrinsically worse about silicon than about brain processes here.
There just seems to be this kind of mysterious gap in the explanation, in both cases.
And, of course, we can worry about other people, too.
There's a classic philosophical problem, the problem of other minds.
How do you know that anybody else, apart from yourself, is conscious?
Descartes said, well, I'm certain of one thing.
I'm conscious, I think, therefore I am.
That only gets you one data point.
It gets me to me being conscious.
Actually, it gets me to me being conscious right now.
Who knows if I was ever conscious in the past?
Anything else beyond that has got to be something of an inference or an extrapolation.
We end up taking for granted most of the time that other people are conscious, but you could try to raise questions there if you wanted to.
And then as you move to questions about AI and robots, about animals and so on, the questions just become very fuzzy and murky.
Yeah, I think the difference with AI or robots is that, presumably, we may in fact build them along lines that are not at all analogous to the emergence of our own nervous systems.
And so, if we follow the line we've taken, say, with, you know, chess-playing computers, where we have something which we don't even have any reason to believe is aware of chess, but it is all of a sudden the best chess player on Earth, and now will always be so. If we did that for a thousand different human attributes, so that we created a very compelling case for its superior intelligence: it can function in every way we function, better than we can, and we have put this in some format so that it has the mimetic facial displays that we find attractive and compelling.
We get out of the uncanny valley, and these robots no longer seem weird to us.
In fact, they detect our emotions better than we can detect the emotions of other people, or than other people can detect ours.
And so all of a sudden we are played upon by a system that is deeply unanalogous to our own nervous system.
And then we will just... Then I think it'll be somewhat mysterious whether or not this is conscious, because we have cobbled this thing together.
Whereas in our case, the reason why I don't think it's parsimonious for me to be a solipsist, to say, well, maybe I'm the only one who's conscious, is that there's this obviously deep analogy between how I came to be conscious and how you came to be conscious.
So I would then have to do the further work of arguing that there's something about your nervous system, or your situation in the universe, that might not be a sufficient basis of consciousness, and yet clearly is in my own case.
So to worry about other people or even other higher animals seems a stretch.
At least it's unnecessary and it's only falsely claimed to be parsimonious.
I think it's actually, you have to do extra work to doubt whether other people are conscious rather than just simply not attribute consciousness to them.
How would you feel if we met Martians?
Let's say they're intelligent Martians who are behaviorally very sophisticated and we turn out to be able to communicate with them about science and philosophy, but at the same time they've evolved through a completely independent evolutionary process from us, so they got there in a different way.
Would you have the same kind of doubts about whether they might be conscious?
Yeah, well, I think perhaps I would.
It would probably be somewhere between our own case and whatever we might build along lines that we have no good reason to think track the emergence of consciousness in the universe.
Well, it's actually a topic I wanted to raise with you, this issue of epiphenomenalism, because it is kind of mysterious. The flip side of the hard problem, the fact that you can describe all of this functioning and seem never to need to introduce consciousness in order to describe mere function, leaves you at the end of the day with a possible problem, which many people find deeply counterintuitive, which is that consciousness doesn't do anything.
It is an epiphenomenon. The analogy often given for this is the smoke coming out of the smokestack of an old-fashioned locomotive.
It's always associated with the progress of this train down the tracks.
It's not actually doing anything.
It's a mere byproduct of the actual causes that are propelling the train.
And so consciousness could be like the smoke rising out of the smokestack.
It's not doing anything.
And yet it's always there at a certain level of function.
If I recall correctly, in your first book, you seem to be fairly sympathetic with epiphenomenalism.
Talk about that a little bit.
Epiphenomenalism is not a view that anyone feels any initial attraction for.
That consciousness doesn't do anything? It sure seems to do so much. But there is this puzzle: pretty well for any bit of behavior you try to explain, it looks like there's the potential to explain it without invoking consciousness in this subjective sense.
There'll be an explanation in terms of neurons or computational mechanisms of various behavioral responses.
So at that point you at least start to wonder: maybe consciousness doesn't have any function. Maybe it doesn't do anything at all. Maybe, for example, consciousness gives value and meaning to our lives, which is something we can talk about without it actually doing anything. But then, obviously, there are all kinds of questions: how and why would it have evolved?
Not to mention, how is it that we come to be having this extended conversation about consciousness if consciousness isn't actually playing a role in the causal loop?
So in my first book, I at least tried on the idea of epiphenomenalism. I didn't come out saying this is definitely true, but tried to say, okay, well, if we're forced in that direction, that's one way we can go.
I mean, actually, my own view, and this is skipping ahead a few steps, is that either it's epiphenomenal, or it's outside the physical system but somehow playing a role in physics.
That's another kind of more traditionally dualist possibility.
Or, third possibility, consciousness is somehow built in at the very basic level of physics.
So to get consciousness to play a causal role, you need to say some fairly radical things.
I'd like to track through each of those possibilities, but to stick with epiphenomenalism for a moment, you've touched on it in passing here, but remind us of the zombie argument.
I don't know if that originates with you.
It's not something that I noticed before I heard you making it, but the zombie argument really is the thought experiment that describes epiphenomenalism.
You introduced the concept of a zombie, and then I have a question about that.
So yeah, the idea of zombies actually, I mean, it's been out there for a while in philosophy before me, not to mention out there in the popular culture.
But the zombies which play a role in philosophy are a bit different from the zombies that play a role in the movies or in the Haitian voodoo culture.
You know, all the different kinds of zombies are missing something.
The zombies in the movies are somehow lacking life.
They're dead, but reanimated.
The zombies in the voodoo tradition are lacking some kind of free will.
Well, the zombies that play a role in philosophy are lacking consciousness.
And this is just a thought experiment, but the conceit is that we can at least imagine a being, at the very least behaviorally identical to a normal human being, but without any consciousness on the inside at all.
Just acting and walking and talking in a perfectly human-like way without any consciousness.
The extreme version of this thought experiment says we can at least imagine a being physically identical to a normal human being, but without any subjective consciousness.
So I talk about my zombie twin.
You know, a hypothetical being in the universe next door who's physically identical to me.
He's holding a conversation like this with a zombie analog of you right now.
He's saying all the same stuff and responding, but without any consciousness.
No one thinks anything like this exists in our universe, but the idea at least seems imaginable or conceivable.
There doesn't seem to be any contradiction in the idea.
And the very fact that you can kind of make sense of the idea immediately raises some questions, like: why aren't we zombies?
There's a contrast here.
Zombies could have existed.
Evolution could have produced zombies.
Why didn't evolution produce zombies?
It produced conscious beings.
For anything behavioral you could point to, it starts to look as if a zombie could do all the same things without consciousness.
So if there was some function we could point to and say that's what you need consciousness for, and you could not in principle do that without consciousness, then we might have a function for consciousness.
But right now it seems, and actually this corresponds to the science, that for anything we actually do, perception, learning, memory, language, and so on, a whole lot of it can be performed, even in the actual world, unconsciously.
So, the whole problem of what consciousness is doing is just thrown into harsh relief by that thought experiment.
Yeah, as you say, most of what our minds are accomplishing is unconscious, or at least it seems to be unconscious from the point of view of the two of us who are having this conversation.
So the fact that I can follow the rules of English grammar, insofar as I manage to do that, is all being implemented in a way that is unconscious. And when I make an error, I, as the conscious witness of my inner life, am just surprised at the appearance of the error. And I could be surprised on all those occasions where I make no errors: when I get to the end of a sentence in something like grammatically correct form, I could be sensitive to the fundamental mysteriousness of that, which is to say that I'm following rules that I have no conscious access to in the moment.
And everything is like that.
The fact that I perceive my visual field, the fact that I hear your voice, the fact that I effortlessly, and actually helplessly, decode meaning from your words, because I am an English speaker and you're speaking English, whereas if you were speaking Chinese it would just be noise... I mean, this is all unconsciously mediated. And so, again, it is a mystery why there should be something that it is like to be associated with any part of this process, because so much of the process can take place in the dark, or at least seems to.
This is a topic I raised in my last book, Waking Up, in discussing split-brain research. But there is some reason to worry, or wonder, whether there are islands of consciousness in our brains that we're not aware of, which is to say that we have the problem of other minds with respect to our own brains.
What do you think about that?
What chance do you put on there being something that it is like to be associated with these zombie parts, or seemingly zombie parts, of your own cognitive processing?
Well, I don't rule it out. I mean, when it comes to the mind-body problem, the puzzles are large enough that... well, one of the big puzzles is that we just don't know which systems are conscious.
So at least some days I see a lot of attraction to the idea of thinking consciousness is much more widespread than we think.
I guess most of us think, okay, humans are conscious, and probably a lot of the more sophisticated mammals at least are conscious: apes, monkeys, dogs, cats, down to around the point of mice. Maybe at flies some people start to wobble. But I'm attracted by the idea that for many, at least reasonably sophisticated, information-processing devices, there's some kind of consciousness.
Maybe this goes down very deep, and at some point maybe we can talk about the idea that consciousness is everywhere.
But before even getting to that point, if you're prepared to say that, say, a fly is conscious, or a worm with its 300 neurons, and so on, then you do start to have to worry about bits of the brain that are enormously more sophisticated than that, but that are also part of another conscious system.
There's a guy, Giulio Tononi, who's put forward a well-known recent theory of consciousness called the information integration theory.
He's got a mathematical measure, called phi, of the amount of information that a system integrates, and the idea is, roughly, whenever that's high enough, you get consciousness.
So then, yeah, you'd look at these different bits of the brain, a hemisphere, things like the cerebellum, and so on. Well, okay, the phi there is not as high as it is for the whole brain, but it's still pretty high, high enough that in an animal he would say it's conscious.
So why isn't it?
And he ends up having to throw in an extra axiom that he calls the exclusion axiom, saying if you're part of a system that has a higher phi than you, then you're not conscious.
So, if the hemisphere has a high phi, but the brain as a whole has a higher phi, then the brain gets to be conscious, but the hemisphere doesn't.
But to many people, that axiom looks kind of arbitrary.
And if it wasn't for that being in there, then you'd be left with a whole lot of conscious subsystems all over.
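To give a concrete, if crude, sense of the kind of quantity Tononi has in mind, here is a minimal toy sketch in Python. To be clear, this is an editorial illustration, not Tononi's actual phi, which is defined over the cause-effect structure of a system's mechanisms; the function names here are invented for this sketch, and "integration" is approximated as the minimum, over every way of cutting a small binary system in two, of the mutual information between the two parts.

```python
import itertools
import numpy as np

def entropy(probs):
    """Shannon entropy in bits of a probability vector (zeros ignored)."""
    p = probs[probs > 0]
    return float(-np.sum(p * np.log2(p)))

def joint_dist(samples):
    """Empirical distribution over the distinct rows of a (T, k) binary array."""
    _, counts = np.unique(samples, axis=0, return_counts=True)
    return counts / counts.sum()

def toy_phi(samples):
    """Toy 'integration': the minimum, over all bipartitions of the variables,
    of the mutual information between the two parts. It is high only when
    every possible cut of the system severs shared information."""
    n = samples.shape[1]
    h_whole = entropy(joint_dist(samples))
    best = float("inf")
    for r in range(1, n // 2 + 1):
        for part_a in itertools.combinations(range(n), r):
            part_b = [i for i in range(n) if i not in part_a]
            # MI(A;B) = H(A) + H(B) - H(A,B)
            mi = (entropy(joint_dist(samples[:, list(part_a)]))
                  + entropy(joint_dist(samples[:, part_b]))
                  - h_whole)
            best = min(best, mi)
    return best

rng = np.random.default_rng(0)
independent = rng.integers(0, 2, size=(5000, 4))                     # four unrelated coin flips
coupled = np.repeat(rng.integers(0, 2, size=(5000, 1)), 4, axis=1)   # four copies of one flip
print(toy_phi(independent))  # ~0 bits: some cut loses nothing
print(toy_phi(coupled))      # ~1 bit: every cut severs shared information
```

On this crude measure, four copies of a single coin flip score about one bit across every possible cut, while four independent flips score near zero; and something in the spirit of the exclusion axiom would then, among overlapping candidate systems, count as conscious only the one with the highest score.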
I agree.
Who knows what it's like to be a subsystem?
What it's like to be my cerebellum?
What it's like to be a hemisphere? But it at least makes you worry and wonder.
On the other hand, you know, there are these situations where one half of the brain basically gets destroyed and the other half keeps going fine.
Yeah.
Well, so I wanted to ask you about Tononi's notion of consciousness as integrated information.
To my eye, it seems yet another case of someone just trying to ram past the hard problem.
And actually, I noticed Max Tegmark wrote a paper that took Tononi as a starting point.
And Max has been on this podcast.
I don't think we touched on consciousness, but he also did a version of this.
He just basically said, you know, let's start here.
We know that there are certain arrangements of matter that just are conscious.
We know this.
There is no problem.
This is a starting point and now we just have to talk about the plausible explanation for what makes them conscious.
And then he sort of went on to embrace Tononi and then did a lot of physics.
But is there anything in Tononi's discussion here that pries up the lid on the hard problem more than the earlier work he did with Edelman or anyone else's attempt to give some information processing construal or a synchronicity of neural firing construal of consciousness?
Yeah, to be fair to Giulio Tononi, I think it's true that in some of the presentations of his work in the popular press and so on, you can get this idea that information integration is all there is: let's explain that, and we've explained everything.
But he's actually very sensitive to the problem of consciousness. And when pressed on this, even in some of the stuff he's written, he says: I'm not trying to solve the hard problem in the sense of showing how you can get consciousness from matter. He's not trying to cross the explanatory gap from physical processes to consciousness. Rather, he says, I'm starting with the fact of consciousness. I'm just taking it as a given that we are conscious, and I'm trying to map its properties. And he actually starts with some phenomenological axioms of consciousness.
It consists of information that's differentiated in certain ways, but integrated and unified in other ways.
And then what he tries to do is take those phenomenological axioms and turn them into a mathematics of information, and say: what are the informational properties that consciousness has?
And then he comes up with this mathematical measure.
Then, at a certain point, it somehow turns into the theory that what consciousness is, is a certain kind of integration of information.
The way I would hear the theory, and I don't know if he puts it this way, is that basically there's a correlation between different states of consciousness and different kinds of integration of information in the brain.
There's still a hard problem here, because we still have no idea why all that integration of information in the brain should give you consciousness in the first place.
But even someone who believes it's a hard problem can believe there are still really systematic correlations between brain processes and consciousness that we ought to be able to give a rigorous mathematical theory of, you know, just which physical states go with which kind of states of consciousness.
And I see Giulio's theory as basically at least a stab in that direction, of trying to give a rigorous mathematical theory of the correlations.
Yeah, well, I should say that I certainly agree that one can more or less throw up one's hands with respect to the hard problem and then just go on to do the work of trying to map the neural correlates of consciousness, to understand what consciousness seems to be, in our case, as a matter of its instantiation in the brain, and never pretend that the mystery has been reduced thereby.
I think I said this once in response to the work he did with Edelman.
I don't know if he still does this, but one of his criteria for information integration was that it had to occur within a window of something like 500 milliseconds, right?
I just, by analogy, extrapolated that out to, you know, geological processes in the Earth.
I mean, what if it was just the fact that integrated processes in the Earth over a time course of a few hundred years was a sufficient basis of consciousness?
If we just stipulate that that's true, that's still just a statement of a miracle from my point of view, from the point of view of being sympathetic with the hard problem.
That would be an incredibly strange thing to believe, and yet that is the sort of thing we are being forced to believe about our own brains, just under a slightly different description.
I do think there's something intermediate that you can go for here, even if you do believe, and are very convinced, that there's a serious hard problem of consciousness. It allows the possibility of at least a broadly scientific approach to something in the neighborhood of the hard problem, where it's not just, oh, let's look at the neural correlates and see what's going on in the human case, but something like: try to find the simplest, most fundamental principles that connect physical processes to consciousness, as a kind of basic, general, and universal principle.
So we might start with some correlations we find in the familiar human case between, say, certain neural systems and certain kinds of consciousness, but then try and generalize those based on as much evidence as possible.
Of course, the evidence is limited, which is another limitation here.
But then try and find principles which might apply to other systems.
Ultimately, look for really simple bridging principles that cross the gap from physical systems to consciousness and that would, in principle, predict what kind of consciousness you'd find in what kind of physical system.
So I would see something like Tononi's information integration principle, with this mathematical quantity phi, as a proposal, maybe a very early proposal, for a fundamental principle that might connect physical processes to consciousness.
Now, it doesn't exactly remove the hard problem, because at some point you've got to take that principle as a basic axiom.
Yeah, when there's information integration, there's consciousness.
But then you can at least go on to do science with that principle.
My take on this is that we know that elsewhere in science, you have to take some laws and some principles as fundamental: fundamental laws of physics, the law of gravity, or a unified field theory, or the laws of quantum mechanics.
Some things are just basic principles that we don't try and explain any further.
But it may well be that when it comes to consciousness, we're going to have to take something like that for granted as well.
So we don't try to explain space, or at least we didn't try to explain space in terms of something more basic.
Some things get taken as primitive.
And we look at the fundamental laws that involve them.
Likewise, the same could be true for consciousness.
And we ended up, you know, pretty satisfied about what goes on in the case of space.
Space is one of the primitives, but we've got a great scientific theory of how it works.
We could end up in that position for consciousness too.
Yes, we have to take something here as basic, but we'll get this really fundamental principle, say, like the information integration principle, that crosses the gap.
It won't remove the hard problem, because that principle will be taken as basic, but it will at least reduce us to a situation we're familiar with elsewhere in science.
Yeah, yeah, and actually I'm quite sympathetic with that line.
As you say, there are primitives or brute facts that we accept throughout science and they are no insult to our thinking about the rest of reality.
And so I want to get there, but I realize now I forgot to ask a question that my wife Annika wanted me to ask, on the zombie argument.
She was wondering whether it was actually conceivable that a zombie would, or could, talk about consciousness itself.
I mean, how is it that you take a zombie, you know, my zombie twin that has no experience, there's nothing that it's like to be that thing, but it is talking just as I am and is functioning just as I am.
What could possibly motivate a zombie that is devoid of phenomenal experience to say things like, I have experiences but other creatures don't, or to worry about the possibility of zombies?
There would seem to be no basis for him to make this distinction, because everything he's doing he could easily ascribe to others that have no experience.
There seems to be no basis for him to distinguish experience from non-experience. So I just wanted to get your reaction to that on her behalf.
I mean, this is a big puzzle, and it's probably one of the biggest puzzles when it comes to thinking through this idea of a zombie.
I don't know if zombies talk in their bodies.
If you'd like to continue listening to this conversation, you'll need to subscribe at SamHarris.org.
Once you do, you'll get access to all full-length episodes of the Making Sense podcast, along with other subscriber-only content, including bonus episodes, AMAs, and the conversations I've been having on the Waking Up app.