Interview with Kevin Mitchell on Agency and Evolution
In this episode, Matt and Chris converse with Kevin Mitchell, an Associate Professor of Genetics and Neuroscience at Trinity College Dublin and author of 'Free Agents: How Evolution Gave Us Free Will'. We regret to inform you that the discussion does involve in-depth discussions of philosophy-adjacent topics such as free will, determinism, consciousness, the nature of self, and agency. But do not let that put you off! Kevin is a scientist and approaches them all from a sensible scientific perspective. You do not have to agree, but you do have to pay attention! If you ever wanted to see Matt geek out and Chris remain chill and be fully vindicated, this is the episode for you.

Links
- Kevin's website
- Robert Sapolsky vs Kevin Mitchell: The Biology of Free Will | Philosophical Trials
- Kevin's TEDx Talk: Who's in charge? You or your brain? | Kevin Mitchell | TEDxTrinityCollegeDublin
Hello and welcome to Decoding the Gurus, the podcast where anthropologists and psychologists listen to the greatest minds the world has to offer and we try to understand what they're talking about.
I'm Matt Brown, my co-host is Chris Kavanagh.
How are you doing today, Chris?
You ready for the interview we have scheduled?
Yeah, I'm all geared up.
I made the decision freely.
I activated my agentic processes and the molecules in the universe helped push me along that path.
I was the biological entity that offered that decision.
But in another sense, aren't we all just protons and electrons and whatnot vibrating to the cosmic energy?
So who can say, Matt?
Well, like a single-celled organism waving its little flagella, drawn towards the light, your arc draws you inevitably towards interviews and podcasts.
Correct.
I'm irreducibly complex.
I've often said that about you.
Yeah, you can't take me apart.
There's nothing there.
You take me apart and you're like, God must have made this.
It's just so freaking astounding.
It's what many people have said.
Well, this is the thing that podcasters say, but we really do have a great interview coming up.
It was really fun to geek out with Kevin.
But there was something I think you wanted to get off your chest before we got into it, Chris.
Yeah, I like to think of it more of the collective we, Matt.
We, the podcast Decoding the Gurus, want to clarify one point because, you know, after doing this gig for a while, Matt, you can anticipate what's going to happen.
And I'm just going to say that because this conversation, one, it talks about things like determinism and agency and so on.
And we have made some joking, lighthearted comments before about avoiding philosophical topics and whatnot.
So one, we're not entirely opposed to any discussion of that.
Just generally think that it's best done with people who are making...
Sorry, who have relevant expertise.
How would you put it?
You have a strict no-philosophers rule on the podcast, except for the philosophers that we've had on.
They're okay, but they're exceptions, right?
They were allowed on.
They kind of snuck in under the radar.
So why doesn't Kevin Mitchell fall foul of the no-philosopher rule?
Oh, right.
Well, yeah, it's a pretty loose rule because it's basically a blanket ban on all philosophers except for the ones that we like, which appears to be a lot of them.
It was never a completely enforced rule anyway, but yeah.
And Kevin Mitchell isn't a philosopher, is the thing.
No, he's a neurogeneticist.
He's a science guy.
And, you know, this book and his thinking does gravitate towards some philosophical questions.
You're interested in things like that, Chris, I know.
Am I?
In consciousness and free will and things like that.
Yeah, I mean, I am, but just my general position is we all know it's not that mysterious, not making a big deal out of it.
And state your opinions less confidently.
That's the general position.
But, yeah.
Well, we try to get out of philosophy, but, just like in the mafia...
They pull you back in.
That's right.
We can't.
But like, you know, Neil deGrasse Tyson, we want to say, oh, what's the point of philosophy and whatnot?
And then it ends up, oh, fuck, we have to talk about epistemics and stuff all the time.
So it's a nightmare.
But like, that was just, you know, one thing to flag up.
And yeah, OK, we get it.
Yeah, yeah.
It's a bit hypocritical.
But the other point is that there are some bits during the conversation, and I was aware of this at the time, and you'll hear us kind of joke about it, where Kevin references sense-making and needing a science of meaning.
And now, superficially, that does sound like something that you would hear in the sense-making sphere or come from Jordan Peterson's lips.
I'm aware of that, but I just want to make clear to people that when people use words like meaning or evolution, it doesn't always mean the same thing, right?
So actually, Brett Weinstein says you should apply an evolutionary lens to help understand what's happening in society.
He's actually not wrong that understanding that we are social primates and that evolution helps shape our psychology can be a useful thing to consider.
But Brett's evolutionary lens is an insane conspiracy-theorist, hyper-adaptationist bullshit thing, right?
So when he says, I'm applying an evolutionary lens...
He is, but like a batshit insane conspiracy theory one.
Now, if someone else, some biological evolutionary scientist who's actually not Brett Weinstein says that they apply an evolutionary lens when they're looking at human society or whatever, it can be completely different.
Or it can be bullshit.
You have to take them as they come.
But we are humans limited by the words we use.
So I think it's worth noting just that on a lot of these occasions, people are using the same words, but with more or less technical specificity.
And Kevin is somebody that has quite clear technical definitions, which we don't go through all of them, but they're covered in the book.
So it doesn't mean you can't disagree with him, still disagree with him all you want, but it is not based on the same bullshit reasoning or lack of biological knowledge that somebody like Brett Weinstein displays.
Yeah, and being very specific about it, if you take something like that word meaning, and Kevin talks about that in relation to the way an organism processes sensory input.
And if you haven't read the book, it might, like you say, superficially sound a little bit like a sense maker type thing where meaning can be anything and suddenly you're talking about egregores.
You know, dragons of solitude in a JP universe.
But actually, when Kevin talks about it in the book in detail, he's very clear about what it means.
He operationalizes it and defines that word very well: it's basically information gathered by the senses that is actually informative to an organism in terms of what it wants, in scare quotes.
So it could be a single-celled organism that is receiving photons and is interpreting that as a light source, and that might be meaningful to it because it wants to swim towards the light source because it can photosynthesize there or something.
So there's nothing metaphysical about it.
Yeah, that's not saying meaning in the Jordan Peterson scientific-mysticism sense.
It's not like...
So it is applying a continuum, yes, but firmly entrenching meaning in a biological process, and that is what distinguishes agents from non-agents, nothing non-material.
So we cover it in the episode, but again, just Kevin is not in the sense-making realm.
Hopefully people in our audience should be able to determine that there is a difference.
But I just know, because of the vocabulary, that there might be that objection raised.
And if so, you know, you can go annoy Kevin on Twitter or you can read his book or other lectures and see what he says.
But yeah, that's all.
I thought it's useful to flag it up because they are the same words that are being used.
Yeah, in a different sense.
Okay, very good, Chris.
Anything else you wanted to get off your chest on behalf of the podcast?
No, no.
The podcast is silent and we move monk-like in procession to enter the interview with Kevin Mitchell.
Kevin Mitchell.
I'm Matt Brown.
With me is Chris Kavanagh.
And with me is a third person.
It is Professor Kevin Mitchell.
Welcome, Kevin.
Oh, thanks very much.
It's a pleasure to be here.
So, Kevin, we've got you on because we've both read your book and we've enjoyed it very much.
So, for everyone else, though, Kevin's an Irish neurogeneticist.
Is that correct, Kevin?
Sure, yeah.
It's a little fudgy term, but yeah, why not?
It's something like cognitive anthropologists, that kind of thing.
It's a bit more technical.
So you're a professor of genetics at Trinity College, Dublin, and your research focuses on the mechanisms of neurodevelopment and how that contributes to cognition and behaviour.
Now, the book that Chris and I both read and enjoyed is called Free Agents, How Evolution Gave Us Free Will.
To start us off, Kevin, I thought, before we jump into free will and consciousness...
Wait, before, I'm not going to start on consciousness.
I have a non-book-related question for Kevin that I just wanted to clarify before we start.
Kevin, I'm from Ireland as well, in case you didn't notice.
I did notice.
The North part.
But I've listened to a couple of your interviews with Robert Sapolsky and stuff as well.
I'm really bad in general with Irish accents, but...
I couldn't work out where your accent is from.
And I think it's like a distinctive, well-known Irish-style accent.
So I was just curious.
Are you from Dublin?
Mine is a mix.
I live in Dublin, but I was actually born in the States.
I lived in Philadelphia until I was nine, and then I moved to Dublin.
I had a proper Dublin accent, but then I moved in my 20s to California, and I had to tone it way, way down, especially all the swearing, which just didn't go over well at all.
So then, yeah, I spent 10 years in California and then moved back to Dublin.
So I have a bang middle of the Atlantic weird accent that I can only apologize for.
No, I like that because this gives me more credibility as I detected something slightly different.
You did.
I'm actually much better than I give myself credit for.
That was just because I initially thought you were from Northern Ireland.
I get that a lot.
I had a taxi driver one time who flat out refused to believe that I was not from Belfast.
I just told him I was not, but he didn't believe me.
So, yep.
Yeah.
I guess we're dialectally kindred spirits.
I agree.
I agree.
I like your accent.
It's beautiful.
Yes.
Melodious and soothing.
I listened to it, Chris, too, for many hours with the audiobook version of your book.
Chris, if you ever write a book, you better get someone else to read it.
Yeah, I don't think I'll narrate it.
Yes, that's right.
But you did a good job, Kevin.
Thank you.
Yeah, that was a whole thing, that reading the audiobook.
I never had done that before.
Man, it's tiring.
You really have to work on your diction.
I was cursing myself for writing long sentences with eight clauses in them and no breath points.
How do you do it?
Do you just do it in shifts and you talk for a couple of hours and have a break?
Yeah, that's it.
Yeah, exactly.
Yeah, yeah.
Full on, you know, in the studio, it was five days recording with, you know, the sound guy and the producer in my ear.
And she stopped me every once in a while and said something like, a little bit of mouth noise there, darling.
Could you just do that again?
That's lovely.
I think, Kevin, you might not be entirely cut out to be a guru, though, because in our experience, that's one thing gurus have no problem with.
They can talk uninterrupted for 30, 40 minutes without any real need for prompting or checks.
So, yeah, if you feel at all self-conscious about it, yeah, you need to train that out of you to become a proper guru.
All right.
Well, look at you two.
I was there all business, business, business, trying to get to the meat of the matter, and you have a nice chat.
I'll bring us back to business now.
So before we get into the book, which I want to...
give listeners a bit of a whirlwind tour through it.
But one of the bits that I really appreciated about it, in fact, I'm thinking of even prescribing it as a sort of a supplementary text to my students because I teach physiological psychology when I'm not too busy.
And I think what psychologists often miss is that evolutionary grounding, the sense of how it all came to be. The brain and its functions are presented as a baroque mystery and a fascinating little mechanism or whatever.
But there isn't really much discussion about how it came to be the way it is.
So how do you see it?
Do you see an evolutionary point of view as absolutely crucial to understanding how the brain works?
I do.
I mean, I would say that for everything in biology, you know, it's a sort of a truism that nothing in biology makes sense except through that lens of evolution.
So, and especially for something, you know, really complicated like free will or human cognition, you know, I think we have probably the most complex form of cognition, presumably on the planet, and to try and understand it just in that most complicated form,
right, from the get-go.
First of all, it's just too complex to get all of the bits.
It's easier to build up an understanding than to just approach a really complicated thing.
If you were interested in principles of aeronautics, you wouldn't start with the space shuttle.
You'd start with the Wright brothers or paper airplanes or something like that.
So you can build up some concepts that we need to have grounded to understand cognition in humans.
That's the approach.
That I wanted to take.
And especially, I think that's true for things like agency.
Because, I mean, partly because it, and free will, get really tied up in humans. The debate does, with questions of moral responsibility and consciousness and ethics and all sorts of other things, which to me are very important, interesting questions, but they're extra questions.
You can ask a more basic question.
How does any organism do anything?
And that was the approach that I wanted to take.
Again, you can build up and ground an understanding of concepts that you're going to need, but they sound mysterious.
Things like purpose and meaning and value.
They sound kind of mystical and unscientific, but you can get a nice grounding of them, I think, in very simple organisms.
And then you can approach how that is manifested in humans.
So yeah, I'm a big proponent of the evolutionary approach.
I mean, if we want to understand how intelligence and cognition and agency work, then we can approach that by looking at how they emerged, you know, and how nature got there.
That seems to be a useful approach, yeah.
I might have a little bit, a slight tangent even before you've got into it, Kevin, but it seems a good time to ask that, you know, you're active online and will have come across all the various bad evolutionary psychology or the big focus on race and IQ online.
And from that, I see often a legitimate reaction about skepticism around evolutionary psychology approaches and kind of wariness about anybody who is applying an evolutionary lens to human psychology and human behavior.
And for all the reasons that you just talked about and Matt mentioned, I think it's unfortunate that that's the case.
But I'm wondering, just in your experience or your thoughts on the issue, how to go about, for like, say, the layman, right, who doesn't know the ins and outs of evolutionary psychology or behavioral genetics,
how do you go about distinguishing between reasonable people looking into the topic versus the kind of culture-war poison stuff?
It's so difficult and it's a sort of a constant battle, to be honest.
So on the evolutionary psychology front, I think you can think of it with the capital E, capital P, you know, the field that self-identifies as evolutionary psychology, which is mainly concerned with the idea that we humans evolved in our recent human lineages under certain environmental conditions that we are adapted to and that our modern lifestyle conflicts with some of the things that we're adapted to.
There's some truth to that.
I think there's some useful insights there.
And there's also some very facile sort of just-so stories that can be reached for.
And it can be very difficult to distinguish the good ideas from the bad ones there.
And so, generally speaking, I don't even know.
They're quite untestable, right?
That's the problem, is that they tend to be just a hypothesis.
And sometimes they sound...
They sound good.
They make sense.
For example, the idea that we are predisposed, when we have access to fatty food, to eat as much of it as we can, right?
Because we didn't get meat that often and, you know, it just made good evolutionary sense to do that.
But that is a mismatch with our current environment because we have access to high-fat food all the time as much as we would want, right?
And then it leads to overeating and sort of maladaptive outcomes.
That's one that makes complete sense to me.
There's lots of other ones that don't, well, they sound plausible, but then about, say, men being hunters and women being the nurturers and stuff like that.
First of all, even if that was true, and even if it does make a kind of a mismatch with current society, which is not a given, there's usually a follow-on from that, which is that it ought to be that way.
It's this mistake of taking an ought from an is and saying, okay, clearly in ancient times when we were cavemen, men were hunters and that's the natural way of doing things and therefore we should be like that now.
But the therefore just doesn't follow.
The whole point of civilization is that we don't have to be beholden to any of those biological legacies if we don't want to be and if as a society we decide not to be.
So I think on the evolutionary psychology front, what I'm proposing is small case E and small case P. It's just a part of doing biology.
You can't be doing biology without an evolutionary approach, really.
That's my feeling, anyway.
You know, the evolutionary sweep that I'm taking here is much, much longer, right?
I mean, it goes all the way back to the beginning of life, frankly.
And so I'm not making kind of, you know, sociocultural claims in this case.
Yeah, I think in some respects it's not universal because I think there is plenty of good work in evolutionary psychology around kind of cultural.
Absolutely.
Yeah, yeah.
Sometimes it's speculative, but like Michael Tomasello's work on shared intentionality and that kind of thing, or comparative psychology studies with other primates tend to be, I think, very good work in general.
But I think what you're kind of pointing at is that the more hyper-adaptationist, speculative claims that you hear, for instance, in Brett Weinstein talking about the Nazis being a specific lineage of the German lineage and the evolutionary dynamics there.
That tends to be stuff that you should be wary of.
Whereas a lot of the work, and I would say in your book, is more focused actually, as you say, not entirely on humans and actually quite a lot on the evolutionary processes in other species in general.
So I think when people are actually talking about biology and neuroscience, it's a lot better than when they're purely focused on culture, even though I do like some of the cultural evolution stuff.
Yeah, no, and I didn't want to give the impression either that evolutionary psychology, you know, as a field is all bunk or anything.
Not at all.
There's tons of great stuff and very important insights and so on.
It's just that it lends itself to these really simplistic kind of hypotheses that can't be tested.
And then it's just bad.
It's just bad evolution, and it's bad psychology combined, basically.
So that doesn't help.
I just wanted to underline another point you made there, which is that there's the capital E, capital P evolutionary psychology, which, as you said, I like some of it, and some of it is absolutely terrible.
But more broadly, I struggle to talk about it with people because they immediately think of that.
But often what I'm talking about is what you alluded to, which is that it's more of a framework which underlies everything that you do and informs it, even if your focus is on something else.
So, for instance, I do a lot of work in addiction, various behavioral addictions.
We see, for instance, that cross-culturally there's a bit of a universal...
Yeah, no,
and I think that's a great example that young men being more prone to taking risks, there's a very good evolutionary explanation for that.
And it's not just evolutionary, it's an ecological, life-history explanation of what it is that teenagers should be doing.
And, you know, it's not an aberration.
I don't know if you know Sarah-Jane Blakemore's work.
She would say it's a perfectly adapted part of the life cycle.
They're doing exactly what they should be doing.
That's the whole, you know, it's part of the ecology there.
Yeah, so I think that's...
Makes them annoying to interact with.
Yeah.
I did want to just pick up on the other part of the question, Chris, that you asked me about the behavioral genetics side, because that is another area where it's just super, super easy to take really simplistic readings of what's actually a complex and nuanced kind of field.
And so, for example, you could say, well, these people have shown that such and such a trait is heritable.
And what they mean by that is across a population that they studied, where they see some variation in a trait, say, imagine like height, right?
So there's some tall people and short people, there's a distribution, and you can ask how much of that distribution is due to differences in genes versus something else.
And that proportion is the heritability.
And it's a really technical concept in population genetics, but of course it sounds like just a colloquial term, right?
How genetic something is. And it's easy to get those things mixed up, and it's also easy to infer that, because something is partly heritable in some population, it's therefore genetically fixed, completely innate, and immutable. And those things don't follow, actually, right?
And it definitely doesn't follow that just because you have some trait that's heritable in one population and then you see a difference in the trait between two populations that that difference is due to genetics.
It just doesn't follow from that at all.
The difference could be entirely environmental because all you've done in your one population is surveyed the set of variation that actually exists.
For example, if you looked at body mass index.
It's really highly heritable within a given population.
It's very strongly genetically driven in that sense, how much your appetite control, energy balance, where that set point is.
But if you look across populations, you see huge differences in average body mass index that are completely cultural and societal.
So the heritability thing is easily misunderstood, and widely misunderstood, sometimes wantonly, I think.
And, yeah, then you get into all kinds of really unpleasant sort of stuff, especially in the online world.
Yeah.
I didn't think about that carefully until recently, and correct me if my rough understanding of this is wrong, but the sort of natural inclination is to think about a heritability component and a non-heritable component, and they're just like two bits, it'd be a percentage, it could be 20/80 or 50/50, and they just sort of add up.
But you can imagine that because one component is modifiable, like the environment, if you had an environment which is extremely homogenous, like everyone grew up in a similar circumstance or whatever, then almost by definition, any variation that you did observe would be 100% genetic.
Yeah, that's exactly right.
I guess on top of that, again, correct me if I'm getting this wrong, they also interact.
So as well as there being the additive bit, it gets more complicated again.
Absolutely.
So you're right on both counts.
So the heritability is a proportion that applies to a given population studied at a given time.
And so if there's very little environmental variance in that population at that time...
The heritability will just be very high because it's the only thing left making a contribution.
But that doesn't mean that environmental factors couldn't make a contribution.
You just haven't done the right comparison.
Yeah, so that's absolutely true.
And then, sorry, the second thing that you said was spot on as well, and I've just forgotten.
Oh, the interaction.
Oh, the interaction.
So yeah.
So in order to make these calculations, you know, if you're doing twin studies or family studies or population studies, people generally just make some set of assumptions that make the mathematics possible.
And one of the assumptions is there's no interaction between genetics and environment.
And we absolutely know for many, many traits that that's not the case in humans, right?
So, you know, for example, for something like intelligence and educational attainment.
There's totally a genetic interaction with the environment because we share an environment and we share genes with our parents who create the environment that we're in, unless you're adopted.
And so there's a massive sort of confound there in those kinds of studies.
That doesn't mean they're not trustworthy at all.
It just means that the heritability number that you settle on is, like, sort of arbitrary to a certain extent, right?
If it's 80% or 70%.
It doesn't matter.
It's not a constant.
You haven't discovered the speed of light.
It's just a descriptor of what's happening in the population you're looking at at the time.
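As an aside for readers: here is a minimal simulation sketch of that point. It is our illustration, not anything from the book or the interview, and all the numbers and names are invented. It shows that heritability is a descriptor of a particular population: hold the genetics fixed, shrink the environmental variance, and the estimate climbs toward 1.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Additive genetic values for the trait; identical in both scenarios.
g = rng.normal(0.0, 1.0, n)

def heritability(env_sd):
    """Narrow-sense heritability h^2 = Var(G) / Var(P) for a purely
    additive trait P = G + E (an assumption made for illustration)."""
    e = rng.normal(0.0, env_sd, n)
    p = g + e
    return g.var() / p.var()

# Heterogeneous environment: genes explain about half the variation.
print(f"h^2 with env_sd = 1.0: {heritability(1.0):.2f}")  # ~0.50
# Near-homogeneous environment: the same genes now 'explain' almost
# everything, because almost nothing else varies in this population.
print(f"h^2 with env_sd = 0.1: {heritability(0.1):.2f}")  # ~0.99
```

The genes never change between the two runs; only how much else varies does, which is exactly why the number is a descriptor of a population rather than a constant like the speed of light.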
Yeah.
Well, let's bring this back to, I guess, the beginning kind of of the story that you take in your book.
And I heard you on an interview recently making the point that actually it's quite helpful to think about.
Like a single-celled organism or a little worm wriggling about in the ground because, you know, we may not like to think about it, but we've still got an awful lot in common with those simple organisms in terms of the basic existential facts of being an organism in the world that needs to eat and defecate and hopefully find a mate and not get eaten.
So maybe start us off with that journey you took.
Yeah, so the reason why I did that is because the question of free will hangs on things like, how could it be that we could make a decision?
How could we be in control of our actions if we're just neural machines?
It's just our brains doing things and they push us around.
We're just the vehicle of our brains.
Or even worse, how could it be we're in charge of anything if our brains are just physical objects, made of atoms and subatomic particles and ultimately quantum fields and whatever?
And physics says all of that is just going to evolve as a system in a deterministic fashion.
So it's just the laws of physics are going to determine where the atoms go when they bounce off each other or whatever.
So where are you?
You just disappear in that.
If that's true, there's no doing.
There's just lots of happenings.
There's no doings in the universe.
And it turns out that that problem doesn't just apply to humans, it applies to anything, right?
It applies to any living organism, this question of agency.
How could any organism be said to do something, to act in the world, if it's all just physics that's deterministically driven?
You know, by these low-level laws.
And that's, you know, why I wanted to start there.
And also for this other reason that I mentioned earlier, that I wanted to disentangle the question of agency from these, what I take to be secondary questions of moral responsibility and consciousness and things that are, you know, sort of uniquely human and even sociocultural in some ways.
So I wanted to get at the more basic biology.
And I think you can, you know...
In order to do that, it basically took me back to the question of what does it even mean to be a living thing?
What's the difference between living things and non-living things?
For me, one of the big differences is living things act.
They do things.
Non-living things don't.
It's funny because it's so fundamental, that property of agency.
Yet, if you open a biology textbook and you look at the list of the characteristics of life, agency won't be one of them.
It won't be in the index.
It's just not a thing that's talked about as a central pillar of what it means to be living.
Can I butt in?
Oh, sorry.
I don't want to ruin your flow.
If I've understood you right, it's partly like levels of description.
What's the best level at which to understand a phenomenon?
We could be trying to understand how a star works or how chemical interactions work.
There is a level of description there in terms of things happening.
But if you want to shift to say, okay, well, Why is that tree growing leaves?
Or why is the worm heading away from the light?
Then you're not going to find your answers at that lower level of description.
Yes, but it's deeper than that, actually, because you could say that, right?
And some people would say, so the physicist Sean Carroll, for example, has this idea that you can have these effective theories at different levels, which are just useful.
But in a sense, they're almost useful fictions.
Because all the real causation is still happening at the lowest levels.
So you could say your tree is growing here and you could give some reasons for it.
But actually, someone else could say, no, look, it's just the way the atoms are bouncing off each other.
Your level of causality is kind of an illusion.
It's convenient, but it's not where the real business is happening.
So what I want to do is something even deeper than that.
What I want to say is, how could it be that the level of describing the tree and what it's doing is the right level, right?
That all the causation is not at the bottom.
Actually, how could that be?
How could life have elbowed its way out of basic physics and gotten some wiggle room to become living things with organization where there's some macroscopic kind of causation, not just microscopic physics?
So, yeah, so the evolutionary story gets kind of metaphysical pretty quickly, actually, because it has to dig into what do we even mean by causation?
How could it be that there can be some causal slack in the system that enables the way it's organized to have any causal power in determining what happens?
So, yeah, it gets deep pretty quickly.
I might push you forward more levels than you're intending, Kevin.
But one of the questions that I think comes up with this, and when I was listening to the book, it was something I was waiting to hear as well.
And I think you explained it well, but you can probably do it better in person.
So whenever discussing the kind of organization of the brain and neural activity, and why it makes sense that, like you're talking about, it isn't just an epiphenomenon to talk about the collective upper levels and the kind of assessing of patterns and this kind of thing.
That to me, it did sound like you're talking about the causality going from the bottom up, but also coming back down, right?
Yeah. And being able to work from the top down.
But I think for some people that will be the part which is kind of difficult because how can it be going in that direction if everything relies on the fundamental processes?
Could you elucidate a bit about, you know, how patterns...
And you described it very beautifully, so I'm sorry to...
No, no, no.
Yeah, no, you've absolutely hit on the sort of major problem that people have with this view of top-down causation, because it sounds like it couldn't possibly be in play if, indeed, all of the causation is already exhausted at the lower levels.
So, I mean, the basic starting point here is to say, well, all the causation is not exhausted at the lower levels.
Like, if we look to physics, if we look to quantum physics and even classical physics, this idea that it's deterministic is just not supported, right?
There's just no good evidence for the idea that classical physics even is deterministic in the sense that the state of a system right now, plus the laws of physics, completely determine the state of the system at some subsequent time point, right?
And a time point after that, and after that, and forever, and ever, and ever, right?
That view just doesn't hold and isn't really supported by physics, even though many people take it as sort of proven.
I could be.
Are you referring to the quantum dynamics, like the uncertainty there, or is it something else?
I could be one of those people who doesn't quite get it.
Sure.
Yeah, yeah, yeah.
So there is the quantum uncertainty, right?
So it is.
In the first instance, literally physically impossible for a system to exist in a state with infinite precision at any given time point.
So the idea that you just have a system, you know, it's just given with infinite precision, the description of the system, and then it evolves from there, just isn't true.
It just can't hold in the universe, right?
And that's from the Heisenberg uncertainty principle and other things.
But also, there's another principle, which is simply the case that information is physical.
It's not just some material floating around kind of thing.
It has to be instantiated in some physical form.
And that the amount of information that can be held in any finite bit of space is finite.
You just can't pack infinite information into a finite bit of space.
So the idea that at the Big Bang, it already contained all of the information of everything that's going to happen for the rest of time, including me saying this sentence and you hearing it, is, I mean, ludicrous on its face, but it's also just sort of mathematically and physically impossible, right?
There are good physics reasons why that couldn't be the case.
And so what you have is systems that have a little bit of fuzziness around the edges, as it were.
And, you know, for lots of things like the solar system, that little bit of imprecision doesn't matter because they're very linear systems.
They're very isolated.
And of course, we can make these great predictions and we can land little rovers on comets and all kinds of stuff.
It's amazing, right?
But most systems in the world are not like that.
Most systems are actually chaotic and they have lots of nonlinear feedback, which means that tiny imprecisions or bits of indefiniteness in the parameters at any point, over some time evolution, lead to the future being genuinely open.
There's lots of ways the system could evolve, depending on how these sort of random digits take on values as time progresses.
So that's my non-physicist understanding of some of that physics.
And it's contested, right?
I don't want to just say this is the way that it is, but it's a way that...
I mean, it fits better my intuitions about...
You know, certainly the idea that everything we're saying right now was prefigured and predetermined from the Big Bang just sounds silly, right, and absurd.
And it doesn't have to be that way, and physics doesn't say that it is or prove that it was.
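As an aside for readers: the nonlinear-feedback point is easy to demonstrate numerically. Here is a minimal sketch, our illustration rather than anything from the interview, using the logistic map, a textbook chaotic system: a starting difference of one part in a trillion grows until the two trajectories share no usable information.

```python
# Two trajectories of the chaotic logistic map x -> r*x*(1-x),
# differing only by 1e-12 in their starting point.
r = 3.9
x_a, x_b = 0.2, 0.2 + 1e-12

for step in range(1, 51):
    x_a = r * x_a * (1 - x_a)
    x_b = r * x_b * (1 - x_b)
    if step % 10 == 0:
        # The gap grows roughly exponentially until it saturates:
        # by around step 40 the trajectories are effectively unrelated.
        print(f"step {step:2d}: |difference| = {abs(x_a - x_b):.3e}")
```

The same indefiniteness that is irrelevant for a nearly linear system like the solar system gets amplified without bound here, which is the sense in which the future of such a system is open.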
Yeah.
Chris and I, I do want to go first.
Yeah, look, I mean, part of it, obviously, is totally uncontroversial.
There is, like...
As you said, you can't compress all the information into the Big Bang.
There's uncertainty and randomness being injected into even physical systems all the time.
And there is room for emergent behaviour, complex systems that are interacting with many moving parts.
And my go-to when imagining this is always thinking about the weather and thinking about hurricanes and things.
And I suppose, to put words in your mouth, see if this is the kind of position you would hold.
If you're looking to understand why, say, a particular molecule of air or even a leaf blowing around in the wind is doing that, then you can and maybe should be pointing to the hurricane, which is, of course, an emergent complex system,
which is absolutely not predictable if you roll back the clock a thousand years or so.
And that is legitimate, and sometimes, you know, you'd argue that's the correct attribution of causality for that phenomenon.
Yeah, so let me pick up first on one thing you said about, you know, the idea that there's randomness constantly being injected into the system.
I think that's probably, that's not the way I...
I've come to think about it because when you think about it that way, it sounds like the randomness is this positive thing that is coming from nowhere and just appearing in the universe.
And that's problematic.
I think it's the reason why a lot of people object to the idea.
It's like, where is this coming from?
I'd like to think of it just as an indefiniteness in the future.
It's a negative.
It's an absence of full determination.
So the present state underdetermines what happens in the future, based on just the low-level laws of physics.
It could go lots of different ways, right?
And then what becomes interesting is to ask, okay, well, what else could influence the way that a system evolves through time?
And what else is the way that it's organized, right?
I mean, you referred to a tornado; a self-organizing tornado or a whirlpool, right, is a collective kind of phenomenon where what all of the water molecules or air molecules are doing constrains what each of them is doing.
So they collectively make up this dynamic pattern that in turn constrains what all the parts do.
And again, it's sort of a non-problematic, non-controversial view, I hope, right?
And what you can do is take that a little step further and say, okay, well, what if the way a system is organized had some function?
And this gets back to these questions of purpose and meaning and value.
So if we think about a living system as like a storm in a bubble, right?
It's this set of chemical reactions that are all reinforcing each other collectively, right?
So, they generate this regime of constraints that is made by all the bits, but also constrains all the bits to keep that pattern going through time.
Well, then, right, if a living thing, you know, the ones that are good at it persist, and the ones that are bad at it don't, right?
So, it's kind of a tautology, but it's the basis for all selection.
So, we could have ones that are configured one way or configured another way, and the ones that are configured in a way that helps them persist will persist through time.
And that kind of configuration can come to take on the form of functional systems that, say, allow a bacterium to detect things in its environment and embody a control policy that says, if you detect a food molecule,
you should go towards it, and if you detect a high pH, you should move away from it.
So then we've got some functionality embodied in the organization of the system that can do work in the sense of determining what happens or settling what happens in an adaptive fashion relative to this purpose of existing.
So we've got purpose in a non-mystical, non-cosmic kind of way, just this sort of tautological way.
We've got value because it's good to go one way or another relative to that purpose.
And then we've also got meaning.
This is a new kind of causation in the world that non-living things don't show, is that the bacterium is sensitive to information.
It's buffered.
It's insulated from physical and chemical causes outside it by its cell membrane and its cell wall.
But it's sensitive to information about what's out in the world, and that information has meaning.
And it's the meaning that's important for what happens.
And ultimately, after billions of years of evolution, you have the same things at play in us, where it's meaningful states of neural activity that are the important thing.
Yes, they have to be instantiated in some physical pattern, but the details of the pattern are actually arbitrary.
That's not where the causal sensitivity lies.
It's what the patterns mean.
That's important.
And that's why I think anchoring that view in these simple systems, I hope at least, lets us get a grip on these sort of slippery, a bit nebulous-sounding concepts, lets us operationalize them in a firmer way, and then we can go from there and scaffold our understanding of more complex systems on the back of that.
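As an aside for readers: the "control policy" Kevin attributes to the bacterium can be sketched as a tiny rule set mapping sensed information to action. This toy version is ours, with invented thresholds and names; a real bacterium implements something like it chemically, in the organization of its signalling machinery, not in code.

```python
# A toy control policy for a bacterium-like agent: what the sensed
# state *means* relative to the goal of persisting settles the action,
# not the microphysical details of the incoming signal.
def control_policy(food_signal: float, ph: float) -> str:
    if ph > 8.5:             # too alkaline: a threat to persistence
        return "tumble away"
    if food_signal > 0.5:    # food gradient detected: valuable
        return "swim toward"
    return "random walk"     # nothing meaningful sensed: keep exploring

print(control_policy(food_signal=0.8, ph=7.0))  # swim toward
print(control_policy(food_signal=0.1, ph=9.0))  # tumble away
```

The point of the toy is only that purpose, value, and meaning can be cashed out as an organization that maps information to adaptive action, with nothing mystical added.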
I want to dig a bit more into the biology and the cognition, Kevin, but unfortunately there is one bit of philosophy that is still floating around, and I just want to check if I understood correctly.
So one thing that I fairly often encounter when debating with people about determinism is this notion that if you wound the clock back and you set everything up, you know, and replayed it.
It would all go the same way, right?
And this is one of the kind of thought experiments: you set every atom in the same way, and they're saying nothing different could happen, right?
Now, my answer to that has typically been that it relies on assuming that the universe is, as you described, like there's a ribbon running along, and we know how it would all go.
And that, my argument is just...
We don't know that.
We don't know exactly the way that it is, but there are different opinions amongst people about the position.
And on that, I detect that you are saying something similar about that we cannot just assume that the future is completely set and use that as the foundational premise to argue.
But on that as well...
If it were the case that the universe was deterministic in that manner, I'm just curious, does your model about how cognition functions and kind of top-down causality, does it completely rely on the universe not being deterministic in that manner,
or could it be incorporated into a model with that?
Well, yeah, so it's a great question.
It's the question, really.
In a sense, you could say, let's take the universe as it is right now, with all these sort of cognitive systems moving around in it, like you and me, and say, okay, now let's assume it's deterministic.
Could you still do anything?
Would what happens still be up to you?
And to me, that thought experiment suffers from the problem that it just assumes the existence of agents in a universe like that.
First, you have to show me how you would get agents who are trying to do things and expending all this energy, apparently trying to take action to make things happen that they want to happen, when all of the causation is actually still deterministic.
And what that means is that it's reductionistic, right?
It means it's not just that it's determined, it's that all the causation is still happening at the level of atoms and subatomic particles and quantum fields and so on, right?
So for me, the big question there would be, well...
Why would you ever get living organisms in a universe like that?
And I'm not convinced that they would ever emerge.
The main point I would make is that physics just doesn't say that determinism is true.
So why does everyone start with that premise in the free will debate?
It just drives me bananas.
And you see people, they'll just accept determinism.
Even people who are arguing, they'll accept determinism.
And then one of them, so Greg Caruso and Daniel Dennett, for example, had a book out recently called Just Deserts.
Which is very interesting to read.
It's mainly about moral responsibility, really, but they both accept determinism, this single timeline, and Greg Caruso ends up just being a free will skeptic, which, you know, is really denying the phenomenology of our everyday existence completely, that we do make choices and have some control over our actions.
And then Dennett ends up being a compatibilist, which is basically the view that despite the universe being deterministic, We can still assign moral responsibility to agents for their actions because the causation comes from within them.
So, Chris, that's the view that I think is quite common, actually.
It's the one that you were alluding to, where you can make this sort of construct where you say, okay, even though there's no real choice in the universe, and there's only one possibility that will happen.
You're still part of the causal influence there.
The way that you're configured is part of the causal influence of what's going to happen.
Therefore, we can still blame you.
That's Dennett's view.
I'm trying to be charitable about it, but I find it incoherent.
I just don't think it could possibly hold.
So for me, the roll-back-the-tape experiment, first of all, I wouldn't start from that premise, right?
If you start from determinism, then you have to ask, well, where could the freedom come from?
But if you start just by accepting the evidence for indeterminacy and an open future, then you have to ask a different question.
It really interestingly flips the script.
Then you have to ask, well, shit, now all these kind of things could happen.
How does the organism make happen this one thing that it wants to happen?
Where's the control coming from?
And that's where, I think, ironically, in a sense, the answer, the power to control what happens, comes from the indeterminacy itself.
Not directly.
It allows macroscopic causation to emerge, to evolve.
So it's a necessary condition.
And some people would say, look, either the universe is deterministic physically, in which case I'm not in charge of anything, or it's indeterministic, in which case my decisions are just driven by randomness, and I'm not in charge of anything.
And there's a third way, which is what I'm arguing for, which is, no, the indeterminacy just creates this causal slack.
It allows macroscopic organizations to emerge, and because we have this selective force through time, or process through time, then what you will get is macroscopic organizations that do things, that allow organisms to do things in the world and become loci of causal power themselves.
I've got one just very quick follow-up to that, which is, so in your model, Kevin, is it that the fact that there are agents in the universe as distinct from like objects and non-agentic things,
that is the key evidence that indicates that the model, you know, the universe model, cannot be purely deterministic, because without that, agents can't function.
And if that's the case, I can imagine that one of the objections that would arise would be, is that putting too much emphasis on the fact that there are agentic beings on our random little rock?
We don't have any evidence yet for anywhere else in the universe.
So the fact that we exist still needs to be explained, but there's a big...
And agents are only, as far as we can see, on our planet.
So, yeah, I guess I'm not phrasing it very well, but I'm just wondering, you know, it's putting a lot of work on the agents.
It is.
So what I don't want to do is suggest... I wouldn't say, actually, that my theory, my way of thinking about this, rests on that.
What I would say is that for the compatibilists, it's a thing that they need to explain, right?
I'm saying that that's part of what happens is that agency emerges because it can, right?
Because there's this little bit of causal slack that allows life to wiggle free and to become causally efficacious as entities.
Whereas for compatibilists, I think that's a thing that they just assume is the case right now, that these agents just exist.
All I'm saying is that, wait a minute, you need to explain how that would happen in a fully deterministic universe.
Let me just say one other thing about the control issue and the idea of winding back the clock.
First of all, the question is whether you would ever do otherwise.
I would say, actually, what the organism has to do is prevent otherwise from happening.
If it wasn't trying to do something, all sorts of stuff could happen.
So it has to exercise some control to make happen what it wants to happen, but only within certain limits.
It doesn't have to worry about all the microscopic details of all the atoms and everything like that.
It just has to achieve its goals, whatever it wants to happen, at a macroscopic level.
You know, it's not concerned with trying to put every atom in its place.
It just has to do, excuse me, what it wants.
Like, take a drink of water here, which I will.
Well, I think I'm going to take the opportunity to pull you both back from the brink of an abyss of philosophy.
Because, yeah, I mean, I don't know.
It's all above my pay grade.
I don't really understand.
I sometimes wonder whether or not...
Just like in Hitchhiker's Guide to the Galaxy, there are a lot of questions that seem to be well-formed questions, which seems like they should have a definite answer, but they're kind of just words that people use.
And just because the question, do I have free will, sort of makes sense in English and feels important to me, I sometimes wonder whether or not those...
Like, it's just not a very good question.
Well, it certainly has some hidden layers to it, right?
And when you say, do I have free will, people would often say, well, what do you mean by free?
And what do you mean by will?
And what they should be asking is, what do you mean by I?
Right?
It's like, what is the self?
And that's not a Jordan Peterson version of that.
I was also going to say, what do you mean by have?
Well, I mean, I think just in case anyone thinks Kevin is transforming into Jordan Peterson, I think what you're alluding to is that, I guess, physicalist perspective, yeah, where there is no ghost in the machine.
There isn't a little homunculus you can identify in your brain that is you; rather, we're a big messy thing.
Yes, as you emphasized, we're organisms that do, where our boundaries, our skin or the cell membranes, are important, but still we're a fuzzy construct, and localizing the "I" anywhere isn't really possible from that point of view.
Yeah, exactly.
And I think there's sort of two extreme views of the self. One of them is this dualistic kind of theological view, and it's not even necessarily theological, of an immaterial kind of thing, right, as some sort of substance or object that is attached to your body somehow or inhabiting your body, the ghost in the machine, as you say.
the ghosts and the machine, as you say.
And there's no, you know, that's just not a useful kind of construct, and there's no evidence for that.
But the other extreme, and I was listening to your recent interview with With Sam Harris just this morning, actually, where, you know, someone like Sam Harris or many others would say, well, actually, the self is just an illusion.
And it's just, you know, when you look to find yourself in your experiences, you find nothing but the experiences.
And therefore, there's a kind of a follow-on from that, which is, therefore, you can't have free will because you don't exist as a self.
You're not the kind of thing that could have free will.
And to me, that's just a mistake because you can have a self that's not localizable.
You can have a self that is the continuity of pattern through time that still has causal power as a bundle of memories and attitudes and dispositions and relationships and all the rest of that, right?
That is a really efficacious thing without being a separable, isolatable object.
Fine.
It doesn't have to be an object.
There's no reason for that.
And it can still have existence.
So that's where I would come down on that.
Now we're going to go down a whole other philosophical rabbit hole.
Selves can be systems.
We should get that on t-shirts.
Systems can be selves.
That's exactly like the whole thing with living organisms.
That's what they are.
They are patterns that persist through time, and it's the persistence through time that is the essence of selfhood.
It's not a thing that you can isolate either within an organism or in an instant of time.
It just doesn't apply.
The concept just doesn't apply to an instant of time.
It applies to the continuity through time.
That's what makes a self a self.
So, Kevin, some people, I think, in hearing that, especially people that are somewhat sympathetic to...
Sam Harris' kind of position about the self will point to things like the famous Libet experiments or split-brain research indicating, at least in the popular presentation, that although we think that we are the authors of our actions consciously,
a lot of the work, so to speak, mental work, is going on under the hood unconsciously.
And there's a kind of post-hoc story that is appearing in the mind afterwards, right?
And I kind of already know, but I'm wondering if for our listeners you could explain why that isn't the case, that those experiments have not dealt with the kind of position that we are just passengers who believe that we're at the steering wheel,
but we're mostly unconscious motives pushing us along.
Well, so first of all, there's just not one answer, right?
There just doesn't have to be the same kind of thing going on all the time.
It may absolutely be the case that in certain circumstances, we're kind of on autopilot and we're not thinking about what we're doing very much.
And there are sort of subconscious influences that are informing or even, if you want to say, driving what we do.
And so, you know, for example, in the Libet experiments, where you just have to kind of lift your hand every once in a while, and there's this so-called readiness potential that's detected in brainwaves, if you do an EEG while people are doing that, that suggested that the brain was making the decision before the person,
right?
And was only telling the person afterwards.
Now, first of all, there's a whole bunch of technical reasons why that just doesn't...
It doesn't look like the right interpretation.
But secondly, even if it were, like, who cares?
That scenario, why would... like, they literally tell people to do something on a whim, whenever the urge takes you.
So imagine I'm the subject, right?
I've made a decision.
I made a conscious decision to take part in this experiment.
And now I'm sitting there watching a clock and just every once in a while they tell me to lift my hand.
They have told me that I should lift my hand on a whim.
So occasionally I do.
And, you know, if you let your brain decide that, fine.
That's a good strategy there.
You've got no other reason to do it.
A lot of the decisions that we make, I think, fall into this range.
Either they're completely habitual.
Because we've been through this sort of scenario before.
We've done all this learning.
We tried things out.
We know what's a good thing to do here.
We don't have to think about it.
All that work is pre-done.
We've offloaded it to these sort of automatic systems, which is great.
Super adaptive.
It's fast.
And it makes use of all the learning that we've done.
And then, you know, then there's sort of ones where we don't care.
We don't really know what to do.
We're sort of indifferent.
But we should do something.
That's the important thing, is that we shouldn't just dither or vacillate forever.
We should just get on with things.
And a lot of the decisions that we make are like that, where we control them to a certain extent, but we don't care about the details.
And then other cases where we're really kind of at a loss.
It's a really novel scenario, something where we really don't know what to do, we have to deliberate, or it's a really important decision.
And we have to really think about it.
And then we do, right?
So we take this sort of conscious deliberative control.
So generally speaking, I think that, you know, the evidence from neuroscience where people extrapolate from one particular setup and say, look, see, it's always like this, and we never have control.
Well, that just doesn't follow, right?
It's just a non sequitur if you allow that we can be doing different kinds of cognition that has a different level of conscious involvement in different scenarios.
Now, Kevin, I've got to dwell on those motor readiness potentials a little bit, and that's purely because it gives me an opportunity to remind people that I worked on those during my PhD, in fact.
I did my PhD in an EEG lab, and I was mainly interested in the signal processing aspects of it.
But I picked those readiness potentials, lateralized readiness potentials, and that paradigm where people could elect to push a button basically whenever they felt like it.
And just to let people know the methodology there, the idea is that people press the button whenever they feel like it.
We know exactly when they press the button.
Then we can look at the event-related potential, as they call it, going back in time by a couple of seconds.
And what we see is this slow depolarization across the cortex when that happens.
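As an aside for readers: here is a minimal sketch of that back-averaging method on synthetic data. It is our illustration only; the sampling rate, amplitudes, and ramp shape are invented, not taken from Matt's actual studies. Epochs of signal ending at each button press are averaged, so the slow readiness potential emerges from the noise.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 250                          # assumed sampling rate, in Hz
eeg = rng.normal(0, 5, 60 * fs)   # one minute of noisy synthetic "EEG"

# Synthetic self-paced button presses, each preceded by a 2-second
# negative ramp standing in for the readiness potential.
press_samples = np.arange(5, 55, 5) * fs
ramp = np.linspace(0, -8, 2 * fs)
for p in press_samples:
    eeg[p - 2 * fs:p] += ramp

# Response-locked averaging: take the 2 s of signal before each press
# and average across presses; the slow ramp survives, the noise shrinks.
epochs = np.stack([eeg[p - 2 * fs:p] for p in press_samples])
erp = epochs.mean(axis=0)

print(f"average amplitude 2 s before the press: {erp[0]:+.2f}")
print(f"average amplitude just before the press: {erp[-1]:+.2f}")
```

Averaging across presses cancels activity that is not time-locked to the response, which is why a slow, consistent pre-movement ramp shows up even though any single trial looks like noise.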
That methodology was one of the things I thought was really cool as a first-year PhD student, and it was only a few years later that I started to think about it a bit more carefully.
And putting aside a lot of methodological quibbles, for instance, the subjectivity and people saying, you know, just nominating when it was they decided to press the button.
I'm sure there are other issues that you know about there, Kevin.
I mean, the thing that occurred to me with that is that it seems...
For you to have a thought, or to form an intention to generate a little motor program, to do anything or intend to do anything,
by definition, unless you believe that there is a spirit that is entering the brain and making that happen, then there has to be something physically going on in the brain for that intention to arise.
And so when I thought about it, I thought, well, it would almost be surprising if you didn't detect some kind of activity in the brain before people form the conscious awareness of planning to do something.
Yeah, it is.
I mean, like you just said, there are some other technical quibbles about the interpretation of this thing where the readiness potential starts to ramp up, you know, maybe 350 milliseconds before the movement, and yet people say they only became conscious of the urge to move,
you know, 50 milliseconds before or something like that.
And so part of the technical thing is that when you time lock to them actually having done something, you see this gradual ramping up.
But if you time lock to some arbitrary signal, like a sound or something, then what you see is that sometimes the activity goes up and goes down again.
And sometimes, you know, up and down, up and down.
So the start of the activity going up is not, in fact, a commitment to move that your brain has decided on.
So anyway, that's a technical thing on that front.
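That reinterpretation is often associated with stochastic accumulator models (for example, the one proposed by Schurger and colleagues): the signal drifts up and down on its own, and a "movement" fires whenever noise happens to push it over a threshold, so averaging backwards from the crossings manufactures an apparent ramp. A toy version in Python, with all parameters invented:

```python
import numpy as np

rng = np.random.default_rng(0)
leak, drift, noise_sd, threshold = 0.05, 0.01, 0.1, 1.0

x, trace = 0.0, []
for _ in range(100_000):
    # Leaky accumulation of a small drift plus noise: activity wanders
    # up and down; crossing the threshold triggers a "movement".
    x += -leak * x + drift + noise_sd * rng.standard_normal()
    trace.append(x)
    if x > threshold:
        x = 0.0   # reset after the movement

# Averaging `trace` backwards from each crossing would show a smooth
# ramp, even though no early "decision" was ever taken.
```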
But yeah, more generally, there's a whole field of, you know, many people saying...
Look, it's not you doing things.
In neuroscience we can go in and see it's just this part of your brain that's active, or that part; in animals we can go in and drive the activity of these different circuits, and it really looks like it's just this big neural machine driving activity.
You know, what does it matter what the states mean to you cognitively?
That's not where the causal power actually is.
And that, for me, is a mistake, I think, because you can show, in fact, that it's the patterns that are important.
It's not the low-level details.
And that this attempt to reduce cognition to particular neural activities is a mistake on two levels.
First of all, it's not the right level of sensitivity because the neural details are somewhat incidental.
They're somewhat arbitrary.
It's the meaning of the pattern that's important.
That's what the system's causally sensitive to.
And secondly, it's very reductionist in the sense of isolating a particular circuit and saying, look, it's this circuit that made you do that, right?
Whereas actually, the way the brain works is this much more holistic kind of thing where all the different parts are talking to each other.
They've all got different jobs.
They're monitoring different things.
They're interested in different bits of information over nested timescales.
They're providing context to each other and data to each other.
And they're all trying to settle into a big kind of consensus for what to do.
And that is not this sort of mechanistic system, and it's not even really an algorithmic system.
It's a big dynamical system trying to satisfy lots of constraints and optimize over many, many variables at once for the reasons of the organism and on the basis of the things that organisms are thinking about.
So once you're at that stage...
You might as well say, well, that's the organism deciding what to do for its reasons, right?
Like, what else would you want?
There's a nice illustration that you give in the book, one a lot of people take up, and it is a very good illustration of the way our cognition and our attention can cause, like, blind spots, right?
The famous experiment where you ask people to count the basketball passes and a man in a gorilla suit walks on, and most people don't recognize it because their attention is focused on the task.
And you could easily see that if you asked people to look for a gorilla suit, they would 100% pick it up, right?
So that, to me, struck me as a good example: taking from that experiment that our minds are completely driven by unconscious mechanisms is a wrong extrapolation.
And similarly, just in all of the things that you were both saying there, I keep coming back to this thing, and I kind of raised it with Sam as well, that habits and unconscious things and so on, to me, these all seem like components of self.
I guess this is the problem, that self can mean so many different things, but I am my, you know, cognitive heuristics and that kind of thing.
So saying that basically it has to be only the kind of very, very high-level, top-down conscious reflection which is the self...
It's kind of creating an artificial divide because if it isn't you, who is it?
Absolutely right.
So it sets... and, you know, Robert Sapolsky, a neuroscientist from Stanford, has a new book out called Determined, which is making the complete opposite case to the one that I make.
And it's very, very similar to Sam Harris, actually.
And it's sort of ironic in that, you know, Robert is a reductionist and a behaviorist, I think.
But he also has this sort of dualist intuition, right?
Where it's like, if it's not this disembodied self doing it...
If these processes can be shown to have any physical, biological instantiation, then by definition they're not you.
And it's not you doing it, it's just the biology doing it.
And it's setting the bar so high that it, by definition, rules itself out.
It's just not a useful way of arguing about it because it says the only way you could have free will is by magic.
And here we're showing there's no magic, therefore you don't have free will.
Just a sort of a circular, facile argument, frankly.
And, you know, instead of that, we've got this fascinating biology that we can get into, which can say, well, how could it be that all of this machinery working, right, all of these neural bits and pieces and so on can actually represent things that we're thinking about,
where the thinking has some causal power, you know, in the system without that being magic.
That, for me, was, I guess, the project that I was more interested in.
And to me, I think you can come to a naturalistic framework that allows you to think about those things without appealing to any kind of mysticism or descending into this mechanistic reductive determinism either.
I think there's a middle ground that we can inhabit.
Yeah, yeah.
It's interesting, these conceptual divides.
And I've come to realize this in talking to people who are clearly very smart, very well-educated, sometimes in a different field, usually in a different field, and maybe that's the problem.
But I think where the three of us share a lot of common ground is that we operate from a heuristic which starts off, first of all, materialist and physicalist, but which acknowledges that there are emergent levels:
there are systems of interacting agents, and then it can be meaningful to talk seriously about things that are happening at that level.
And they are, in a sense, they're virtual things, in a sense.
And I'm thinking here of representations and information.
They're not magical ideas.
Like you said earlier on, that information has to be represented somewhere physically.
It isn't just a pattern by itself that has, like, a high degree of Shannon entropy.
There's no physical discrimination between total random noise and a bitmap of a cute cat.
The difference is in the information, but you can't point in a reductionist sense to where that lives.
Yeah.
Is that the difference between us and them?
So first of all, yes.
I think what's interesting when you start having these discussions is that you can be looking at the same evidence as somebody else and coming to a different conclusion.
And that usually means you're bringing different things to the table, right?
You've got some sort of outlook that may be implicit or tacit that is often worth kind of scratching into and to figure out what that is.
The other thing, with what you were just talking about with information, is that we have a really good science of information, right?
We have information technology.
We have whole fields of industry built on it.
We don't have a good science of meaning.
I mean, there is, you know, fields of semantics and linguistics and psychology and so on.
Have you read Maps of Meaning?
That seminal work might have missed you, but I just want to make sure.
It's on my list.
It's on my long list, Chris.
So, yeah, I mean, the thing about meaning is it's hard to localize.
It's hard to quantify.
It's inherently subjective, and it's interpretive, right?
It's not encoded.
It's not just encompassed in the signal or the message.
It's in the interpretation of the message, right?
And so it's inherently a systems thing.
It's just not something that you can localize and point to and quantify and so on.
So it becomes more difficult to have a good science of it, but that doesn't mean it's impossible.
And, you know, if we're not doing that, if we're not thinking about meaning, you're not thinking about biology in the right way, because living things are sense-making, right?
They're extracting meaning.
They're running on meaning.
That's how they get around in the world; it's what they need, right?
They need meaningful information to get around in the world.
So, yeah, again, it gets back to this idea that there are some very fundamental principles and concepts that I think are key in biology that have been kind of overlooked maybe in our physics envy to get to really mechanistic explanations.
But I think they have a rightful place, and I think we can have a science of them that isn't woo, Deepak Chopra kind of stuff.
I've got a question about that, Kevin, and I might ask you for an illustration, because, you know, our listeners will have heard many of the gurus talk about the importance of meaning, the importance of sense-making, in a different context, right?
But I'm wondering if you could give an illustration. So when you were talking about, you know, the human brain and neural processes, and how you could have the kind of pattern from collections be the signal, or something that is being interpreted at a higher level, where the individual components are not the core thing, it's more the gradient across it.
So I'm going to do a bad job of explaining that.
But in terms of having a higher-level semantic or associative pattern, could you give an illustration of how that could apply in the way human cognition works?
How it could feed down so that it's more important than the individual...
Yeah.
Like neurons firing.
Sure.
I mean, I guess a sort of commonplace experience would be that, you know, say we're reading some text, and the text could be written in one font or a different font or in italics or in bold or all in capitals or whatever, and it would all mean the same, right?
So the particular instantiation is not that meaningful because we categorize a bunch of different shapes as A. And we categorize a bunch of other different shapes as B and other different shapes as C and so on.
So all the time we're doing that kind of a thing.
We are categorizing semantic concepts like that at a high cognitive level into these categories where we are only interested in the differences when they push the pattern from one category to another.
So we're attuned to the sharp borders between the categories that make up these letter shapes, but we allow lots of variability within that.
So neurons do the same thing, right?
Individual neurons do it.
There's this idea that comes from, you know, basic kind of starting neuroscience with reflex arcs, where one neuron fires and it drives the next one and it drives the next one and so on, right?
So it's this electrical machine.
And I rather would reverse that and say, look, what's happening is that this neuron here is monitoring its inputs, right?
And it's doing some interpretation on the signals that it's getting.
Because, for example, this one may be sensitive to the rate of incoming spikes of signals from another neuron, but not the precise pattern in some cases, right?
So the precise pattern is lost.
All this guy cares about is, did I get 10 spikes in that last 100 milliseconds, or did I only get 5?
And if it's only 5, I'm not firing.
And if it's 10, I'm firing.
So it's an active interpretation.
And I think the same thing happens with populations of neurons, where you have one population that's monitoring another one.
And if this lower one takes on any of a bunch of patterns that mean A, then this guy might fire, because that's what it's sensitive to.
Whereas if it takes on any of the bunch of patterns that mean B, this guy won't fire.
So that's what I was kind of referring to before about the causal sensitivity in the system being at that higher level of the patterns.
And the important thing is that the sensitivity gets configured into the system because the patterns mean something to the organism.
They carry information about something out in the world or the state of the organism or one of its ongoing goals or any of the other sort of elements of cognition that it needs to figure out what to do.
Does that answer the question?
I mean, I think you can couch it at cognitive levels, but actually what you see is that the neural instantiation is the same process happening.
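A crude way to render that "active interpretation" idea in code, with every number invented for illustration: the single neuron cares only about rate, and the downstream reader cares only about which category a pattern falls into.

```python
def neuron_fires(spike_times_ms, now_ms, window_ms=100, threshold=10):
    """Fire iff at least `threshold` input spikes arrived in the last window.
    The precise spike pattern is discarded; only the rate matters."""
    recent = [t for t in spike_times_ms if now_ms - window_ms < t <= now_ms]
    return len(recent) >= threshold

# A downstream population doing the same move one level up: many distinct
# input patterns all count as "A", and the reader responds to the category,
# not to the low-level details of any particular pattern.
PATTERNS_MEANING_A = {(1, 1, 0, 0), (1, 0, 1, 0), (0, 1, 1, 0)}

def reader_fires(input_pattern):
    return tuple(input_pattern) in PATTERNS_MEANING_A
```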
There were some intuitions that were, for me at least, very fuzzy but very satisfying when I was reading your book, which is that I think you're hinting that non-linearity in these systems is really important.
And, you know, there's this sort of, you know, like a gradient of A-ness, or B-ness, rather.
Careful, careful.
Let's go with B-ness.
And, you know, and it fires or it doesn't.
So there's a non-linear activation there.
Things change when you don't have those binary boundaries. And I think you could even extend this analogy to cells: it being very important for cells to have a cell membrane, so that their activity, being active or not or whatever, is separated from the environment around them.
Because if everything is just diffusing into everything else and there are just these linear gradients, then you really don't have any scope for interesting and complex behavior.
And I can't articulate it.
I'm sure I don't understand it properly, but it just feels intuitive.
That idea of transitioning from continuous, messy stimuli of physics to almost like a computer binary representation is really important for all the things that make big brains interesting.
First of all, you've touched on a really fundamental issue about what life is, which is that it's a pattern of stuff and processes that keeps itself out of equilibrium with the environment.
So if it's in physical thermodynamic equilibrium, then there's no boundary.
There's no entity.
It's not a thing.
It's just a constant part of the flux.
It's one with the universe.
You don't want to be one with the universe.
So that's absolutely a fundamental principle of what life is.
And then as you go through into individual cells in a multicellular organism, those cells still have some degree of autonomy.
They're still trying to make sense of what's out in the world.
There's still this barrier, the cell membrane.
They're still taking in information and they're sensitive to it and they're operating on it.
And each individual cell will have a sort of set of criteria embodied in it for what it should do when it sees this signal or that signal.
And that's basically how knowledge, and procedural knowledge, gets embodied into our brains through learning: by changing the weighting of those criteria that each neuron or each population of neurons is acting on.
So, yeah, that's where it comes to be the case that the system, a neural system, is really doing cognition, right?
Cognition is not an epiphenomenon.
It's what the thing is doing.
It's just using its neurons to do that.
This gets back to something that Sam Harris was saying in your recent interview about this idea that when you think about yourself as an experiencer, what you find is just the experiences, right?
You're just having some percepts at any given moment, and there's no self there.
I just think that's really mistaken, actually.
I think it's a mistaken intuition from introspection because, in fact, the whole point of perception is that it's this active process.
It's not passive.
We're not passively sitting here being bombarded by sensory information, or at least that's not what our percepts are made of.
Our percepts are the inferences that we actively draw from that sensory information.
That's our self doing something, right?
A non-self can't be doing that.
That's an action that is required, or an activity that is inherently subjective, where the organism is bringing all its past experience and knowledge to the act of perception.
So it's a filtering, it's an interpretation.
It's not just this passive sort of flow of things, of experiences where there is no self doing the experiencing.
Yeah, one of the introductory concepts you have to get across to undergraduate students is to correct that intuition that even something like vision is analogous to a video camera that is feeding a video stream into the brain.
Where, of course, there is so much processing going on that transforms it.
It probably reduces information in the Shannon entropy sense.
It simplifies it, but it makes it far more comprehensible and allows you to actually do something with it.
I think that is incontrovertibly true.
Simple perception is an active process.
Yes.
Because for organisms, again, it comes down to what they need.
What do they need to get around in the world?
They don't need to know about the photons hitting their retina.
They need to know what they're bouncing off of in the world before they hit their retina, right?
So they need to make some inferences.
And it's a hugely difficult problem, this inverse problem.
There are loads of potential things out in the world that could produce the same pattern of photons, right?
So you have to do loads of work.
And it's a skill that organisms acquire, right?
That they develop this skill of seeing.
And yeah, it's absolutely sense-making, though not in the guru sense of sense-making, I think.
Well, I thought this was a good time to raise the issue of consciousness.
I get one question.
I get one question about it.
It's purely, again, because reading your book was very refreshing in this regard for me.
It helps, Kevin, that I agree with almost everything you say.
But even if I didn't, I would have found it intellectually stimulating.
But the one thing for me is that I have consistently not been puzzled by the concept of consciousness.
Because for me, and I'm approaching it, I think, primarily from the view of a cognitive anthropologist.
Human cognitive processes make ontological categorizations quickly.
They separate things into agents, objects, living things, spatial entities, and so on.
And we have some evidence that these are cross-culturally pretty consistent: even if the taxonomical categories that each culture invents are wildly different, there are quite consistent things in the cognitive processes underpinning them.
But so, when I think about consciousness...
To me, the obvious kind of connection was that we are agentic beings that are able to imagine potential different futures and to think about different outcomes, and we model ourselves in different outcomes or different situations and kind of mentally time travel into those scenarios, while also thinking back over experiences that we've had in the past.
So it seemed to me that humans have quite a sophisticated agent-modeling cognitive apparatus in their mind.
And from that, I would anticipate that some sense of self slash consciousness would very likely be a component of having such a model.
I just imagine it as, like, okay, so the model is set up this way and would work like that.
Now, Matt assures me that that is something I'm inserting into the model that doesn't need to be there.
But I just find it hard to comprehend how you would have such, you know, good modeling abilities without some, you know, kind of self-agentic aspect to that.
So why is Matt wrong?
That's the question.
I absolutely, I think, agree with you, Chris, that that sense, first of all, yes, we model all sorts of things, right?
So we model ourselves, we model the world, we model the future, we think about what could happen, right?
So we have this imagined future that we're sort of simulating; we're weighing up the utility of various predicted outcomes and so on.
The question, however, is why that should feel like something, right?
And so here's the pushback I'll give, maybe.
Right, so Matt's encouraging me here.
So the pushback is why should that feel like something?
Because maybe you could design a sort of a cybernetic system that has those functions in it.
And in fact, many other animals certainly do have many of those functions, just maybe not to the same level that we have, not to the extent of the sort of time horizon that we're concerned with and so on, right?
So the concern would be, well, yeah, you can build all of those things.
You can build more complex control systems where having those functionalities give better adaptive control of behavior.
But why does it feel like something?
That's the really important bit.
And why does it feel the way it does, right?
And so one counter to that is that actually what we have a capacity for in humans is modeling not just these sorts of simulations of objects in the world and states of the world and states of ourselves, but modeling the processes that are doing the modeling, right?
So we can think about our own thoughts.
So here we get to this kind of recursive idea that is probably most nicely articulated by Douglas Hofstadter, for example, in I Am a Strange Loop, where the idea is that we have enough levels in our cognitive, cortical systems that the top ones are looking down on the lower ones,
where the objects are now the thoughts themselves.
We can think about our thoughts, we can reason about our reasons, and it may just be that once you have a model of yourself thinking, and you're using that as part of the control system, because you can use it to direct your attention, to think about other things...
Maybe that just necessarily feels like something.
That's what it is for us.
Now, it's also possible that there's another way that it feels to have the kinds of control systems that other animals have: animals that are sentient, that are clearly responding to things in the world, that are sensitive to states of the world and states of themselves.
So they probably still have some kind of experience, but they may not have the same kind of mental consciousness that we have, which is really a world of ideas and thoughts that are somewhat abstracted from these more basic kind of cognitive operations.
I know I said one question.
I just have one slight comment.
And yet it's good, because Kevin can expertly rebut it and explain in technical detail why.
Everything there makes sense, Kevin, and also, I might throw shade towards...
Well, it's not throwing shade, but just saying that this might be the cognitive process that meditators are so interested in, because we have this recursive aspect.
They draw some unwarranted metaphysical conclusions from it, but it could just be a fascinating process within the way that human minds work.
In any case, the one kind of point that I've raised with Matt before, and I'll raise it with you as well, is about when you were saying, you know, why would it feel like this,
or that you can create a system where you have all of that but you don't have the phenomenological experience.
And my kind of reaction to that was, but we've never seen that anywhere in the world.
So it's a theoretical possibility, but it's never yet happened.
So if somebody makes something which is exactly like that and which lacks it, then that would be true.
But at the minute, it feels like a kind of thought experiment to say, well, it is.
It's the zombie experiment.
It's Chalmers' philosophical zombie experiment.
Everything is happening exactly the same as in you, but it doesn't feel like anything.
I don't think, for me, the idea that you can conceive of that has no weight in any kind of argument by itself.
Conceivability is not an argument.
But there's two ways you can see it.
Either the phenomenology pops out of the way that that thing works.
Or, which is equally problematic, it's added somehow, right?
There's this extra bit that is the phenomenology, and that's what Chalmers' thought experiment is sort of getting at: that there's something you could subtract and have everything else left.
And that doesn't make much sense to me either.
But I think there's another way you could think about it, which is that there is a phenomenology that emerges, and then that phenomenology has some adaptive value to it and some causal efficacy in terms of being able to think about abstract thoughts.
So there we're just into metacognition and what metacognition gets you as a part of a control system.
And it gets you, for example, the ability to not just have a thought, but to judge the thought.
So you can have a belief and then say, wait, should I have that belief?
How certain should I be?
What's my confidence level in that belief?
That's going to inform my decision-making here because I think such and such, but I'm not really sure.
So maybe I need to get more information or maybe I shouldn't jump here.
And so, yeah, again, I think you can operationalize the metacognitive stuff much more easily in terms of control system, behavioral control, and so on.
The what-it's-likeness, for me, I mean, I didn't even try to address it, because I just don't know, right?
Here, maybe, is the biggest mystery that we still have, but I certainly don't have an answer to it.
The very, very last thing just to say is that I think the example that you give in the book about schizophrenia, where people experience a voice, an internal monologue, not as theirs, even though it is generated by their brain, right?
Yes.
That could give some, you know, indication that when the brain is not functioning entirely properly, you can have the sensation of intrusive thoughts from elsewhere.
So, yeah.
Anyway, Matt, that's it.
I promise I won't mention consciousness again.
I'm very satisfied.
The C word.
No, that was fine.
That was fine.
Actually, Kevin's answers, I think, brought some balance to the force, because, Chris, you agree with everything he said.
I do too.
I sign off, honestly, to all of that.
I can definitely see the adaptive benefits of having those self-reflective processes.
Yeah, there's a lot of evidence for all of that.
The only thing I think is...
So Chris rebels against the idea of anyone calling anything mysterious because he thinks it's hinting at something magical.
And I suppose where I'm coming from is, again, emphasizing my sign-off on all of that.
I just do find it, just at a gut-feeling level, a little bit mysterious, like, where the consciousness thing comes from, whether it pops out or it's added on or whatever. I guess maybe I find the idea of p-zombies not entirely illogical.
Like, I can imagine it being pretty possible and all of the AIs that are floating around, you know, I think show that you could make a pretty convincing simulation of something that does do sort of chain of thought stuff and does have some ability to reflect.
It doesn't seem implausible to me.
I guess Chris's argument is, well, you know, we're complex agents and we're conscious.
So, you know, why is it mysterious?
Because we have proof.
It just is.
And, you know, I accept that.
I accept that it just is.
But I still find it a bit odd.
Well, I mean, yeah, it is mysterious in the sense that we don't currently understand it.
So I wouldn't, yeah, I agree.
I just don't think that that means it's... That's a statement about us.
That's not a statement about the thing, right?
So the statement is: we find it mysterious.
That's a description of us, not the thing.
So it doesn't mean it's always a mystery in and of itself.
We can find out.
I mean, lots of other things used to be mysterious too.
Yeah, it feels mysterious to me.
I also feel hungry sometimes and a whole bunch of other things.
So I agree.
You've made me a happy man already.
Kevin, I feel like you've restored balance to the universe by making us both feel that we're correct.
Yeah, well, I was trying to explain that to Chris, and you really did help, because you laid it out in a way that Chris could agree with, and did agree with, 100%.
And I do too.
And my thing all along was we do agree, Chris.
I just think it's a little bit mysterious that there's a subjective, phenomenological experience of it.
And that's all.
That's all.
And nothing else.
We're on board.
We actually agree.
I think you can, yes, you can feel that it's mysterious without adopting mysterianism as a philosophy, a metaphysics, right?
Just committing to it always being mysterious for all time.
Yeah, that's right.
And I know why you react to that statement, Chris, because I agree.
I'm with you.
I don't like the people that turn it into some mystical, you know, mystery that we're going to find a place for spirits and poltergeists.
It's just that the issue isn't about what people do with it, like Deepak Chopra and stuff.
Like, of course, we all agree that that's mad, right?
Or the people that really fixate on using quantum indeterminacy to justify everything, right?
Just as a get-out-of-jail-free card.
The bit that I just tend to bump into is that, because of all the processes that we've discussed, about agents going around in the world and trying to model things and so on, the fact that it's like something just seems to me like, yes, it has to be.
Maybe it's a failure of my imagination, but I'm just like, what's the alternative?
And I can't imagine it because it's the only way that I know that that process works.
So I can theoretically imagine a world where it isn't.
And whenever I encounter beings that are doing that, I'll be interested to discuss what their inner worlds are like.
That's the tricky bit.
Do you know Mark Solms' work at all?
S-O-L-M-S?
Very interesting stuff about really sort of basal kinds of conscious experience that are basically emotional, you know, triggered from the brainstem, and they convey what are essentially control signals, right?
Homeostatic signals that say: my food levels are low, my osmotic balance is off, my sleep debt is too high.
It's too cold out here, right?
You know, so really, really basic things that basic organisms feel.
And he has a sort of a theory that these have to be submitted to a central cognitive economy that is trying to adjudicate over them and arbitrate over them all at once to decide, okay, well, look, I'm hungry and I need sleep and I need shelter.
Which of them am I going to prioritize at any moment?
And these things have a valence, right?
They feel good or bad.
But just by itself, if the central processing unit only got good or bad signals, then it would no longer know the source of them.
It would no longer know, oh, wait, this is a hunger signal that I'm feeling right now.
It's not just bad.
It's bad relative to that thing that I need to keep track of.
And his argument is that the qualitative feel is required in that, because there's no other way to keep track of it.
It just has to feel like something in order for the central system to kind of keep an eye on everything and know what it's referring to.
To me, it's sort of intuitively appealing.
I kind of was like you.
I was nodding my head when I heard that, and then I'm just not 100% convinced, but I find it kind of a useful way of thinking about why it should feel like something, just in terms, again, of a control system architecture.
Yeah, I mean, like, on one level, I totally get it.
And I, because I've always thought of emotions as being a modulating influence, right, to help with decision making.
But I guess I still...
Maybe this is where me and Chris are different.
Chris would say, yes, well, of course, it has to feel like something in order for that to be happening.
But I suppose I can imagine a system that has modulating emotional factors in there, which is reprioritizing things and all that stuff, without there having to be a kind of a self and a unitary experience of that going on.
Yeah.
I guess that's my objection.
Yeah, no, again, I can imagine both sides, right?
I can imagine exactly what you just said.
You just build a system that has these different sensors to different parameters of internal states and you somehow keep track of them in a central system that arbitrates over them.
And that's just robotics and, you know, computation.
And then on the other hand, I'm also tempted to think, well, maybe if you did that, it just would feel like something.
Well, I guess Chris has to be right, because we do feel something.
I don't need to hear anything else around that saying.
"Chris has to be right."
That's the only bit I need.
Just sample that and loop it.
Well, I just had a kind of... it's a bit of a speculative query, but I'd be curious, Kevin, with all the AI advances, and Matt and I talk about this a lot: I'm just wondering if any of the AI developments and the things that are going on there give you any thoughts about how they relate to all of the stuff that you've put into thinking about agents and agency?
Yes, I do have thoughts.
I'm not an expert on the AI stuff, so this is filtered through my understanding of what these systems do from talking with people who work in the field and so on.
So first of all, I would say there's a question about whether these systems have understanding, and then that raises the question of, well, what the hell do we mean by understanding?
What does that entail in a natural system?
And I think you can build up this view of a map of knowledge about the world, knowledge about causal relations in the world, exactly the kind of things that we need in order to simulate an action and the predicted outcomes from it and so on.
So that's actionable knowledge.
It's causally relevant kind of descriptions of the way the world works and the way the self works in it.
And that's the kind of thing that, you know, large language models don't have, because they're not interacting with the world in the same way.
They're not causally intervening.
They're not acquiring causal knowledge themselves through that intervention.
They have all the text of all the people who've ever been interacting with the world.
And so they can mount a really good simulation by making a perfectly plausible utterance in response to some prompt.
And so it looks like they understand things, but it's sort of a parasitic understanding.
They don't have a world model.
They have a world of words model, which is a separated kind of a thing, right?
So, I think what would be interesting, although potentially dangerous and ethically fraught, would be to think about, well, what would it take to build an agent?
Not to build artificial intelligence, but to build an intelligence, an entity.
What would that necessarily entail?
And I think if you look at the architecture of cognitive behavioral control whose evolution I just described in the book, you could see there are ways to make a being that has that kind of architecture, which is not just an internal thing.
It's an architecture of relationships with the world, an ongoing thing that allows it to learn what to do in different situations and so on.
And that leads to intelligence having an adaptive value, where intelligence pays off in behavior.
It's not an abstract kind of a thing like playing chess or something like that.
It's like, I need to get food.
Where can I find it?
Yeah, I think there's a route that you could at least imagine where you build an artificial agent that interacts with the world in such a way that intelligence becomes selected for and ultimately emerges in that system if you allow it to evolve or iterate over designs or so on.
But right now, the things that we have are these disembodied, disengaged systems that I don't think are entities, and that I don't think have agency, because they're not designed to.
It's not a knock on them.
That's not what they're for.
Yeah, so I think it's a really, really open field.
I think it's really exciting.
And again, I think it's very ethically fraught, because if people go about making artificial agents, then we're going to have all kinds of questions about responsibility for those agents, responsibility of the agents for what they do, and all kinds of other sorts of questions that we probably should figure out before anybody goes about building one of them.
Yeah, I guess many people are worried about AI for many different reasons, but one thing I noticed amongst the AI doomers, the people who are very, very concerned about AI and, I think, perhaps going a bit beyond legitimate concerns into a bit of speculation, is that they definitely do imagine that an intelligent AI, a hyper-intelligent AI, would sort of immediately hop to a lot of human motivations.
So the argument goes, okay, it becomes very intelligent.
It is naturally going to want to persist.
It is naturally going to want to exert some control over its own destiny.
And this will make it want to kill all humans is kind of how the argument goes.
And I think from your point of view, as you just mentioned, evolution has invested us and all living creatures with those imperatives.
So it's very natural for us to assume that something intelligent is going to have them too.
But I personally suspect not.
But how about you?
Yeah, no, I think the order is important, right?
What happened in nature was the purpose and meaning and value came first and intelligent systems emerged from that because it's useful relative to those goals, right?
Those standards.
I don't think that the kinds of systems we're seeing now, which are built for these, you know, things like text prediction and so on, even though they have this great corpus of words that they can build an internal relational model of...
I don't think agency will just pop out of them with the current architectures.
I think you'll have to do something different.
I think you'll have to embody them.
You'll have to give them some sort of skin in the game.
You may have to, you know, in order to get that.
Yeah, so I don't see the Doomer stuff.
I don't see Skynet or Ultron emerging from ChatGPT in its current incarnations or any further incarnation that has the same architecture.
I think we're probably safe on that front.
There's lots of other reasons to be worried about the influence of AI systems as they're applied in the world, but those are more societal, not technical, I think.
Yeah.
So getting back to the book and connecting a little bit to AI, I suppose, is that one thing that these AIs are indisputably good at is creativity.
And you see it with the image generation and just recently video generation is pretty interesting.
And of course, the creative writing: it might bullshit a lot sometimes, it may not understand what it's doing, but it's certainly creative.
And I think that's something you emphasized in the book, in your narrative of human cognition in the context of evolution: that with simpler organisms you had, I guess, a lack of creativity.
There was certainly agency.
There was certainly evolutionary imperatives and responses to stimuli.
But there was a lack of flexibility, a lack of behavioral flexibility.
And at some point...
Evolution figured out in some organisms that that could actually be a very handy thing.
How would you describe that process?
Yeah, I think that's right.
So many organisms can react to things in the world.
They may have some kind of pre-configured control policies that say, yeah, I should approach this kind of thing.
I should avoid that kind of thing and so on.
And they may be capable of integrating multiple signals at once, assessing, in effect, a whole situation and deciding what's the best thing to do in this situation.
So they'll have systems to do that, and they may have systems to learn from experience so that their recent history or even far history can inform what they should do in any scenario.
But, yeah, many of them are not particularly flexible; they don't have an open-ended repertoire of actions, which is what we have.
Within our physical constraints, we can do all kinds of things.
We can think of all kinds of things that we could do.
Part of what has happened over evolution is that cognitive flexibility became really valuable in human evolution in particular.
In a sense, probably because I think it snowballs on itself, right?
The more sort of flexibility you have, the more control over your environment, the more you can move into new environments, which makes it more valuable to have more cognitive flexibility and so on.
So I think you get this amplifying loop that happened in human evolution, which moved at some point from human biological evolution to human cultural evolution, and then it really took off, because then we could share our thinking with each other.
So yeah, we have this ability for creative thought.
And I think, you know, we deploy it most of the time in terms of creative problem solving, right?
And that's what most, you know, other animals that have some open-endedness to their behavior do as well.
And I don't know that AI does that, right?
You said, you know, AIs are very creative and they're generative, right?
I mean, that's what they're for.
But they generate new stuff by kind of recombining old stuff, right?
So I don't know.
It's not creative in the same sense as...
That's how I create stuff, Kevin.
I don't know how you do it.
Yeah, I mean, maybe you're right.
Maybe they're recombining.
Well, are they recombining ideas?
That's the question, right?
Are they having ideas and recombining them?
Or are they just recombining material in a new way?
And yeah, I don't know, maybe there's not a sharp line between those things.
I do think, you know, one thing is in terms of the creative problem solving, many organisms, including us, have these systems where we can, you know, say we're in some given scenario, we kind of recognize it, but there's a few things that we think we could do.
So A, B, C, or D, these are the options that we have.
And we evaluate them and just decide on one of them.
Maybe we're not 100% sure, but we decide on one of them.
And we try that out, and it turns out we're not achieving our goals.
So we go back and we try one of the other ones.
And we might exhaust that and still not be achieving our goals.
And then there's a system that can kick in that involves a part of the brainstem called the locus coeruleus, or the blue spot, which releases norepinephrine into parts of the rest of the brain, including the cortex.
Where the theory is that it kind of shakes up the patterns that are there.
So it kind of breaks the patterns out of their ruts, the most obvious things that suggested themselves, and allows the system to explore, to expand the search space for new ideas.
So, you know, thinking outside the box.
And then those new ideas are evaluated and simulated and so on and maybe tested out in the world.
So what's really interesting is that what that is actually doing is making use of the noisiness, the potential indeterminacy, in the neural circuits.
It doesn't add it, it releases it.
So ordinarily, the habits kind of constrain it, constrain the neural populations.
This system effectively reduces those constraints and lets the populations explore different patterns that they wouldn't normally go into.
And to me, that's just a really beautiful example of how an organism can make use of this indeterminacy, but it's deciding to do it, right?
It's a resource that it can draw on to enable it to expand the things that it even thinks of, right?
The things that it even conceives of to do in any given scenario.
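As a cartoon of that noise-release idea (this is not a model of the locus coeruleus, just a toy hill-climber whose noise level gets turned up when it is stuck in a rut; the score function and every parameter are invented):

```python
import random

def score(x):
    # Two "ideas": a mediocre habitual one (peak at 2) and a better,
    # more distant one (peak at 10). Small steps can't escape the first.
    return max(5 - (x - 2) ** 2, 50 - (x - 10) ** 2)

current, stuck, noise = 2, 0, 1     # start in the habitual rut
for _ in range(200):
    candidate = current + random.randint(-noise, noise)
    if score(candidate) > score(current):
        current, stuck, noise = candidate, 0, 1   # success: constrain again
    else:
        stuck += 1
    if stuck > 10:
        noise = 5   # the "shake-up": relax constraints, widen the search
```

Nothing new is added when the system is stuck; the existing noise is simply allowed a wider range, which matches the release-not-inject point above.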
Matt is an AI evangelist in a way, as you can tell by his description of creativity.
But, so, Matt, you mentioned to me that, you know, whenever you give ChatGPT or other LLMs prompts that encourage them to be reflective and recursive, you can often overcome blockages, things that people otherwise say the models aren't able to do.
But if they genuinely weren't able to do it, they shouldn't be able to reach it when you encourage them to go through additional steps.
I know it's completely different.
In this case, the difference being, of course, that we are the ones prompting it into the system.
So maybe we are functioning as the kind of agent in that system still.
But I don't see that it would be impossible to create a system that was doing that from its own set of instructions.
So maybe this is for you, Matt, and Kevin, but...
Is that a completely distinct process, or would you see it as potentially analogous?
Whoever wants the answer.
You're the guest, Kevin.
So it's over to me.
Like I said earlier, I'm probably not enough of an expert on the inner workings of those models to know exactly where or whether there's creativity at work, you know, this being a term that's pretty poorly defined anyway, even in human endeavors.
So, yeah, I don't know.
Again, for most of these things, I don't see any reason why you couldn't have this kind of system.
You know, what I just described, there's no reason why that kind of a system couldn't be put into an artificial entity of some kind.
I just am a little skeptical that the current versions have that kind of capability.
Yeah, I mean, I agree.
I think these terms are badly defined, and until we get a really good definition of what we mean by creativity, there's not much point.
I mean, personally, I think a good starting point is just randomness combined with evaluation in a cycle.
And certainly, I used to paint, paintings, art, badly, but my system was to kind of do abstract art.
I'd sort of paint almost randomly.
Maybe some intuitions were going in there, but really, probably, mostly randomly.
And then I'd step back and look at it and paint over the bits that I didn't like and then have another go and then following that process.
And the finished product can seem like it came about through this mystical process of creativity, but it can be arrived at algorithmically.
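That paint, step back, paint-over loop really is just a generate-and-test cycle. A minimal sketch, where the aesthetic function is a stand-in for whatever the artist's eye is doing (everything here is illustrative):

```python
import random

def aesthetic(canvas):
    # Placeholder critic: prefers alternation between neighbouring cells.
    return sum(a != b for a, b in zip(canvas, canvas[1:]))

canvas = [random.randint(0, 3) for _ in range(32)]   # random daubs of 4 "colours"
for _ in range(1000):
    trial = canvas[:]
    trial[random.randrange(len(trial))] = random.randint(0, 3)  # repaint one patch
    if aesthetic(trial) >= aesthetic(canvas):   # step back and look
        canvas = trial                          # keep it if it's no worse
```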
And I think if you talk to a lot of artists, they'll often describe what they actually do in more prosaic terms.
But I think what we're getting at is that there are some fundamental issues that confront all agents.
One example is the conflict between exploitation and exploration.
Sometimes you're running around randomly trying out different things and sometimes you're onto a good thing and you just keep doing it.
It's interesting to know that.
That was a little bit of cognitive science I didn't know, that we'd identified a mechanism that actually encourages a bit more of the creativity.
Yeah.
I mean, sorry, you're exactly right.
That explore-exploit problem, which is ubiquitous, is another area where some little bits of randomness can be useful, right?
So occasionally, even if you're onto a good thing, it pays to look around and do a little exploration, because it's always going to be the case that good times are not going to last, right?
And so having that policy built into the systems that are directing exploration or exploitation avoids over-committing to a certain resource that is definitely going to run out.
Evolution knows it's going to run out because it has done in the past.
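The textbook toy for this is an epsilon-greedy bandit. Here is a minimal sketch with a "patch" that depletes as you use it, so pure exploitation eventually fails; the payoffs, the depletion rate, and epsilon are all arbitrary, and for brevity the agent reads the true patch values rather than learning estimates:

```python
import random

payout = [1.0, 0.2, 0.6]   # current value of three foraging patches (made up)
epsilon = 0.1              # small built-in dose of exploration

def choose():
    if random.random() < epsilon:
        return random.randrange(len(payout))                  # occasionally look around
    return max(range(len(payout)), key=lambda i: payout[i])   # otherwise exploit

for _ in range(1000):
    patch = choose()
    reward = payout[patch]
    payout[patch] *= 0.995   # the good thing doesn't last: the patch depletes
```

With epsilon set to zero, the agent keeps hammering the first patch long after it is exhausted; the small random policy is what lets it notice the alternatives.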
There's a bunch of other systems, even in simple organisms, where a little bit of noisiness is used as a resource for a little bit of variability.
First of all, it's just a feature in the nervous system.
It's not a bug.
It's a feature that enhances signal processing in many ways.
But also, you know, sometimes it's useful, say, when you're avoiding a predator, you know, the last thing you want to be is predictable because you'll be lunch.
So, you know, many organisms kind of use some sort of randomizer to tell them, you know, which way to jump.
Yeah.
Well, for what it's worth, those deep artificial neural networks are, you know, intrinsically stochastic.
They wouldn't work very well without that.
But another thing that occurred to me, I wonder if this is related to your thesis at all, which is something that's always fascinated me, is the credit assignment problem.
So with organisms, and humans, and intelligent agents, you do some stuff, sometimes over an extended period of time, and then something good happens.
You have a reward, you have a signal that you got something that you want.
But then you're confronted with this problem where you have to look back.
Maybe it wasn't the thing you just did.
Maybe it was some sequence of steps or maybe it was this thing you did way back then.
And this to me is one of the most interesting challenges I think an agent's got in the world.
How do you...
Does that relate to your thesis at all?
It does.
I mean, in the sense that as you're building up a model of the world and those causal relations, that's exactly what you need to do is distinguish the real causal relations from the ones that were only apparent, right?
And of course, it gets super difficult to do the longer the time frame over which the true causal influence obtains.
So, yeah, I mean, there's lots of people working on this problem.
I'm not really an expert on it at all, but you can see the elements that would be required: you need to have some record of events.
You need to have some working memory to be able to keep track of not just what's happening right now, but what just was happening and what was happening 10 minutes ago and what happened a week ago and so on.
So you can see that it depends on this sort of nested set of memory systems, and then some kind of an evaluative system that can say: this was the important bit relative to that.
In many cases, the only way you get that data is by doing it again and seeing that actually across many, many instances, lots of things were varying through time, but these ones all had this same thing in common between them, and that must be what the causal influence was.
That's exactly the system that leads to understanding.
That's what understanding what's going on entails.
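One standard machine-learning rendering of that look-back problem is an eligibility trace: every recent event keeps a decaying tag, and when the reward finally lands, credit is spread back over whatever is still tagged, weighted by recency. A minimal sketch with invented events and decay rate:

```python
events = ["a", "b", "c", "b", "a", "c", "c"]   # hypothetical actions over time
reward_at = 6                                   # reward arrives after the last step
decay = 0.8

trace, credit = {}, {}
for t, event in enumerate(events):
    for k in trace:
        trace[k] *= decay                       # older tags fade
    trace[event] = trace.get(event, 0.0) + 1.0  # tag the current event
    if t == reward_at:
        for k, v in trace.items():
            credit[k] = credit.get(k, 0.0) + v  # credit assigned by recency
```

Across many repeated trials, the consistently tagged events accumulate the most credit, which is the "same thing in common across instances" logic Kevin describes.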
And what's interesting is that many of the AI systems, the machine learning systems, have so much data and so much compute and so much energy available to them that they don't necessarily compress things in a way that abstracts and identifies the salient features; instead, they often overfit to bizarre stuff, or not bizarre, just arbitrary stuff.
And then that manifests as a failure to generalize to a new situation.
And so the ability to abstract true causal relations from noisy experience and then generalize that to a new novel situation where that's useful knowledge, to me, that is what understanding is.
I think that's a reasonable description of understanding.
And it's something that I feel most of the current machine learning models don't come close to.
I've just got one more question for you, and then we're going to let you go and get about your day.
Okay.
This is the last one for me, I promise.
And this is out of curiosity.
I mean, I'm going to simplify it a fair bit.
But you know how, in explanations for the explosive growth, in evolutionary terms, of human intelligence in relatively recent evolutionary history...
There's, to simplify a lot, there's kind of two explanations, right?
There's the humankind being the toolmaker, right?
We got smart because being smart means you can make really good tools, understand the physical environment, hunt prey better, avoid predators, all the rest.
And then there's kind of another explanation, which is more of an intraspecies competition explanation: that our social environment started to get more complex as our brains got a little bit bigger.
And then an evolutionary arms race sort of went on that the better understander you were of your fellow humans and the better you could communicate with and manipulate and understand their motivations and intentions, the better you would do.
And, you know, that has some appeal to me as well, I suppose, in terms of the language instinct and things like that.
I mean, what's just...
I mean, just shooting from the hip, it's not a serious question, I suppose.
I'm just curious.
You know more about this stuff than I do.
What's your gut feeling about it?
Yeah, I mean, my general feeling whenever I hear any of those theories that says this is the one thing that explains how we got there is that it's not the one thing.
It was just a bunch of things.
It was a confluence of various factors that feed off each other and amplify each other, iteratively increasing the value of being intelligent through all of those things at once.
Some people say it's cooking and we could get more calories that way, or it's the social number of people that we interact with, or it's walking upright, it's having dexterity, all these various things.
To me, it doesn't make sense to settle on any one of those as the prime causal factor; I think they were all convolved with each other in a way that probably can't be decomposed explanatorily in retrospect.
That would be my feeling.
But yeah, I think all of those things are at play.
One thing I will say that you just kind of prompted there is this thinking about other people, right?
So being a better understander of other people, as you said, is really valuable, right?
And so what that means is that the capacity that psychologists call theory of mind, being able to think about someone else's thoughts, becomes really valuable.
And there is an argument that actually, that's why we can think about our own thoughts.
Because the ability to think about other people's thoughts became valuable, and then it turned out, "Oh, hey, I can think about my own thoughts at the same time."
So the self-awareness was an exaptation on the social value of theory of mind.
I don't 100% buy that either.
I think it's probably both things going on.
Being able to model someone else's mind while not being able to model your own mind doesn't make much sense.
They probably co-evolved.
But you can at least see the value of modeling minds generally in the social context, where it's just more obvious than in the isolated case.
I totally agree.
I can see the importance of having that theory of mind, and I can see how it can be co-opted to promote self-awareness.
I suppose the other thing that guides me a little bit more towards the importance of, I guess, social dexterity as an important factor is language, and it's connected to what we were talking about before.
The thing about language, a bit like letters, is that you have a discrimination: you see an A or you don't see an A. And all words are pretty much about putting categories onto the world, and putting on those categories is extremely important for us to do,
to construct interesting mental representations from them and to do things with them, right?
So, again, just gut feelings and intuitions here from my side anyway.
I find it really hard to imagine how you could have a very intelligent species that wasn't using some form of language to think, not just to communicate with others, but even to think.
To what degree do you reckon language is fundamental and required for thinking?
Yeah, I mean, I think what language enables is that we get these categorical entities that we can then manipulate as objects, without having to mentally keep track of all their properties, right?
We can just use it as a label.
We can think about dogs without having to constantly say: those warm-blooded, furry things with four legs and waggy tails that can bite you, right?
I mean, you just couldn't think like that.
It would just be cumbersome, right?
So now we have a category dog.
That's an element that we can use and we can recombine it in an open-ended way to have thoughts and express them.
Sorry, not just to express them, but to have them.
I think you're right there.
In ways that we couldn't do otherwise.
Now, there's a couple ways to think about that.
One is that some people would say, well, the language that we have shapes our thought.
I would say that the types of things that we want to think about shapes the elements of our language.
So we have objects, and we have actions, and then we have prepositions, which express causal relations and interdependencies between them, right?
Those are the elements in the world.
The world is structured like that, and so our thought reflects that because it's useful, and therefore our language reflects those elements.
But once we have that, then I do think it opens up this explosion of infinite open-endedness, of abstract thoughts that we can entertain that couldn't have been there before.
And of course, the communicative element then is that we can think together, right?
We don't have to think alone anymore.
And then you can learn something and tell me, and I immediately get the benefit of the hard work that you've done, and it's cumulative over generations and so on.
It's Chris's turn now, and I'm going to shut up from here on, but I've got to mention that I had one of those whoa-dude moments there, because I totally agree with you that the physical world that we care about, in which you have, you know, subjects and objects and things acting on other things...
I could see how that does shape the things that we think about.
Even when a mathematician or a physicist is thinking about something extraordinarily abstract that we have no personal contact with, they use physical and geometric analogies in order to think about it.
And having that shape our language, I was like...
Whoa, yeah.
That's cool.
You can have the gift of Matt's head exploding in the universe.
That's a good note, I think, to round off on, Kevin, because I suspect if I don't stop, Matt, you will be here for another hour.
As a guru, Kevin, I give you good marks because you mention meaning.
And you talk about sense-making, you've got consciousness, free will, you know, big things popping around.
This could get you in the door.
You get marked down because you disparage monocausal accounts.
You're not entertaining mystic forces.
You said that you didn't know an awful lot about some of the topics.
Yeah, that's not allowed.
That's also not allowed.
You can't express uncertainty.
And your analogies were not long or flowery enough.
So you've got some positive points, but basically not going to cut it.
But yeah, I have to say, for anyone listening that has an interest in any of these topics: we generally don't do book promotion stuff, and I know that you didn't come on for that, but I would just recommend your book, and I think Matt would too.
To be clear, we contacted Kevin and asked him to please come on our show.
His agent did not contact us.
That's what I was saying.
Yeah, it's been a great pleasure and entertainment, and I think the key takeaway is that...
Essentially, I'm right.
That's what I take away from what you said.
You know, Matt has a couple of things that he's okay about as well.
But yeah, it's been a pleasure.
And yeah, hopefully we can do it again on other topics.
We didn't get you to talk about gurus that much this time, but it might be good to pick your brain on those folks.
Great.
Well, it's been great fun, guys.
Yes, Chris, you're right.
And yeah, it was a pleasure.
Okay, and thanks from me too, Kevin.
This was great fun.
And yeah, if you can make the time in the future, we'll have to get you back on to give us your hot takes on the Infosphere and Gurus.