Michael Levin: Reality is an Illusion - Alien Intelligence, Biology, Life | Lex Fridman Podcast #486
The following is a conversation with Michael Levin, his second time on the podcast.
He is one of the most fascinating and brilliant biologists and scientists I've ever had the pleasure of speaking with.
He and his labs at Tufts University study and build biological systems that help us understand the nature of intelligence, agency, memory, consciousness, and life in all of its forms here on Earth and beyond.
This is the Lex Fridman Podcast.
To support it, please check out our sponsors in the description, where you can also find links to contact me, ask questions, give feedback, and so on.
And now, dear friends, here's Michael Levin.
You write that the central question at the heart of your work, from biological systems to computational ones, is how do embodied minds arise in the physical world and what determines the capabilities and properties of those minds?
Can you unpack that question for us and maybe begin to answer it?
Well, the fundamental tension is in the first person, the second person, and the third person descriptions of mind.
So in third person, we want to understand how do we recognize them and how do we know looking out into the world what degree of agency there is and how best to relate to the different systems that we find.
And are our intuitions any good when we look at something and it looks really stupid and mechanical versus it really looks like there's something cognitive going on there?
How do we get good at recognizing them?
Then there's the second person, which is the control.
And that's both for engineering, but also for regenerative medicine.
When you want to tell the system to do something, right?
What kind of tools are you going to use?
And a major part of my framework is that all of these kinds of things are operational claims: that you're going to use the tools of hardware rewiring, of control theory and cybernetics, of behavior science, of psychoanalysis and love and friendship.
Like what are the interaction protocols that you bring, right?
And then in first person, it's this notion of having an inner perspective and being a system that has valence and cares about the outcome of things, makes decisions and has memories and tells a story about itself and the outside world.
And how can all of that exist and still be consistent with the laws of physics and chemistry and various other things that we see around us?
So that I find to be maybe the most interesting and the most important mystery for all of us, both on the scientific level and on the personal level.
So that's what I'm interested in.
So your work is focused on starting at the physics, going all the way to friendship and love and psychoanalysis.
Yeah, although actually I would turn that upside down. I think that pyramid is backwards.
And I think it's behavior science at the bottom.
I think it's behavior science all the way.
I think in certain ways, even math is the behavior of a certain kind of being that lives in a latent space.
And physics is what we call systems that at least look to be amenable to a very simple, low-agency kind of model and so on.
But that's what I'm interested in: understanding that and developing applications, because it's very important to me that what we do is transition deep ideas in philosophy into actual practical applications that not only make it clear whether we're making any progress or not, but also allow us to relieve suffering, make life better for all sentient beings, and enable us and others to reach their full potential.
So these are very practical things, I think.
Behavioral science, I suppose, is more subjective, and mathematics and physics are more objective.
Would that be the clear difference?
The idea basically is about where something is on that spectrum, which I've called the spectrum of persuadability.
You could call it the spectrum of intelligence or agency or something like that.
I like the notion of the spectrum of persuadability because it's an engineery approach.
It means that these are not things you can decide or have feelings about from a philosophical armchair.
You have to make a hypothesis about which tools, which interaction protocols you're going to bring to a given system.
And then we all get to find out how that worked out for you, right?
So you could be wrong in many ways.
In both directions, you can guess too high or too low or wrong in various ways.
And then we can all find out how that's working out.
And so I do think that the behavior of certain objects is well described by specific formal rules.
And we call those things the subject of mathematics.
And then there are some other things whose behavior really requires the kinds of tools that we use in behavioral cognitive neuroscience.
And those are other kinds of minds that we think we study in biology or in psychology or other sciences.
Why are you using the term persuadability?
Who are you persuading and of what?
Well, in this context.
Yeah.
The beginning of my work is very much in regenerative medicine, in bioengineering, things like that.
So for those kinds of systems, the question is always, how do you get the system to do what you want it to do?
So there are cells, there are molecular networks, there are materials, there are organs and tissues and synthetic beings and biobots and whatever.
And so the idea is, if you're injured, for example, and I want your cells to regrow a limb, I have many options.
Some of those options are I'm going to micromanage all of the molecular events that have to happen, right?
And there's an incredible number of those.
Or maybe I just have to micromanage the cells and the stem cell kinds of signaling factors.
Or maybe actually I can give the cells a very high-level prompt that says, you really should build a limb and convince them to do it, right?
And which of those is possible? I mean, clearly people have a lot of intuitions about that.
If you ask standard people in regenerative medicine and molecular biology, they're going to say, well, that convincing thing is crazy.
What we really should be doing is talking to the cells, or better yet, the molecular networks.
And in fact, all the excitement of the biological sciences today is around single-molecule approaches and big data and genomics and all of that.
The assumption is that going down is where the action is going to be, going down in scale.
And I think that's wrong.
But the thing that we can say for sure is that you can't guess that.
You have to do experiments and you have to see because you don't know where any given system is on that spectrum of persuadability.
And it turns out that every time we look, we take tools from behavioral science: learning, different kinds of training, different kinds of models that are used in active inference and surprise minimization, perceptual multistability and visual illusions, stress perception and memory, active memory reconstruction, all these interesting things.
When we apply them outside the brain to other kinds of living systems, we find novel discoveries and novel capabilities, actually being able to get the material to do new things that nobody had ever found before.
And that's precisely because people didn't look at it from those perspectives.
They assumed that it was a low-level kind of thing.
So when I say persuadability, I mean different types of approaches, right?
And we all know if you want to persuade your wind-up clock to do something, you're not going to argue with it or make it feel guilty or anything.
You're going to have to get in there with a wrench and you're going to have to tune it up and do whatever.
If you want to do that same thing to a cell or a thermostat or an animal or a human, you're going to be using other sets of tools that we've given other names to.
Now, of course, the important thing about that spectrum is that as you get to the right of it, as the agency of the system goes up, it is no longer just about persuading the system to do things.
It's a bi-directional relationship, what Richard Watson would call a mutual vulnerable knowing.
So the idea is that on the right side of that spectrum, when systems reach the higher levels of agency, the idea is that you're willing to let that system persuade you of things as well.
You know, in molecular biology, you do things.
Hopefully the system does what you want it to do, but you haven't changed.
You're still exactly the way you came in.
But on the right side of that spectrum, if you're having interactions with even cells, but certainly, you know, dogs, other animals, maybe other creatures soon, you're not the same at the end of that interaction as you were going in.
It's a mutual bi-directional relationship.
So it's not just you persuading something else.
It's not you pushing things.
It's a mutual bi-directional set of persuasions, whether those are purely intellectual or of other kinds.
So in order to be effective at persuading an intelligent being, you yourself have to be persuadable.
So the closer in intelligence you are to the thing you're trying to persuade, the more persuadable you have to become.
Hence the mutual vulnerable knowing.
What a term.
Yeah.
Yeah, Richard.
Yeah, you should talk to Richard as well.
He's an amazing guy, and he's got some very interesting ideas about the intersection of cognition and evolution.
But I think what you bring up is very important because there has to be a kind of impedance match between what you're looking for and the tools that you're using.
I think the reason physics always sees mechanism and not minds is that physics uses low-agency tools.
You've got voltmeters and rulers and things like this.
And if you use those tools as your interface, all you're ever going to see is mechanisms and those kinds of things.
If you want to see minds, you have to use a mind, right?
There has to be some degree of resonance between your interface and the thing you're hoping to find.
You've said this about physics before.
Can you just linger on that, like expand on it?
What do you mean, why physics is not enough to understand life, to understand mind, to understand intelligence?
You make a lot of controversial statements with your work.
That's one of them.
Because there are a lot of physicists that believe they can understand life, the emergence of life, the origin of life, the origin of intelligence using the tools of physics.
In fact, all the other tools are a distraction to those folks.
If you want to understand anything fundamentally, to those folks, you have to start with physics.
And you're saying, no, physics is not enough.
Here's the issue.
Everything here hangs on what it means to understand.
For me, understanding doesn't just mean having some sort of pleasing model that seems to capture some important aspect of what's going on.
It also means that you have to be generative and creative in terms of capabilities.
And so for me, that means if I tell you this is what I think about cognition in cells and tissues, it means, for example, that I think we're going to be able to take those ideas and use them to produce new regenerative medicine that actually helps people in various ways, right?
It's just an example.
So suppose you think, as a physicist, that you're going to have a complete understanding of what's going on from that perspective of fields and particles and who knows what else is at the bottom there.
Does that mean then that when somebody is missing a finger or has a psychological problem or has these other high-level issues, that you have something for them, that you're going to be able to do something?
Because my claim is that you're not going to.
And even if you have some theory of physics that is completely compatible with everything that's going on, it's not enough.
That's not specific enough to enable you to solve the problems you need to solve.
In the end, when you need to solve those problems, the person you're going to go to is not a physicist.
It's going to be either a biologist or a psychiatrist or who knows, but it's not going to be a physicist.
And the simple example is this.
Let's say someone comes in here and tells you a beautiful mathematical proof.
It's just really deep and beautiful.
And there's a physicist nearby and he says, well, I know exactly what happened.
There were some air particles that moved from that guy's mouth to your ear.
I see what goes on.
It moved the cilia in your ear and the electrical signals went up to your brain.
I mean, we have a complete accounting of what happened, done and done.
But if you want to understand what's the more important aspect of that interaction, it's not going to be found in the physics department.
It's going to be found in the math department.
So that's my only claim is that physics is an amazing lens with which to view the world, but you're capturing certain things.
And if you want to stretch to sort of encompass these other things, we just don't call that physics anymore, right?
We call that something else.
Okay, but you're kind of speaking about the super complex organisms.
Can we go to the simplest possible thing where you first take a step over the line, the Cartesian cut, as you've called it, from the non-mind to mind, from the non-living to living?
Isn't the simplest possible thing in the realm of physics to understand?
How do we understand that first step, where you say, that thing has no mind, it's probably non-living.
And here's a living thing that has a mind.
That line.
I think that's a really interesting line.
Maybe you can speak to the line as well.
And can physics help us understand it?
Yeah, let's talk about it.
Well, first of all, of course it can help; I'm not saying physics is not helpful.
Of course, it's helpful.
It's a very important lens on one slice of what's going on in any of these systems.
But I think the most important thing I can say about that question is I don't believe in any such line.
I don't believe any of that exists.
I think it's a continuum.
I think we as humans like to demarcate areas on that continuum and give them names because it makes life easier.
And then we have a lot of battles over so-called category errors when people transgress those categories.
I think most of those categories at this point, they may have done some good service at the beginning, when the scientific method was getting started and so on.
I think at this point, they mostly hold back science.
Many, many categories that we can talk about are at this point very harmful to progress because what those categories do is they prevent you from porting tools.
If you think that living things are fundamentally different from non-living things, or if you think that cognitive things are these advanced brainy things that are very different from other kinds of systems, what you're not going to do is take the tools that are appropriate to these kinds of cognitive systems, right?
So the tools that have been developed in behavioral science and so on, you're never going to try them in other contexts because you've already decided that there's a categorical difference, that it would be a categorical error to apply them.
And people say this to me all the time: you're making a category error.
As if these categories were given to us, you know, from on high and we have to obey them forevermore. The categories should change with the science.
So yeah, I don't believe in any such line.
And I think a physics story is very often a useful part of the story.
But for most interesting things, it's not the entire story.
Okay.
So if there's no line, is it still useful to talk about things like the origin of life?
That's one of the big open mysteries before us as a human civilization, as scientifically minded, curious Homo sapiens.
How did this whole thing start?
Are you saying there is no start?
Is there a point where you can say that invention right there was the start of it all on Earth?
My suggestion, in my experience, is that there's something much better than trying to define any kind of a line, okay?
Because inevitably, we play this game all the time: when I make my continuum claim, people try to come up with counterexamples. Okay, well, what about this?
You know, what about this?
And I haven't found one yet that really shoots it down, where you can't zoom in and say, yeah, okay, but right before then, this happened.
And if we really look close, here's a bunch of steps in between, right?
Pretty much everything ends up being a continuum.
But here's what I think is much more interesting than trying to make that line.
I think what's really more useful is trying to understand the transformation process.
What is it that happened to scale up?
And I'll give you a really dumb example.
And we always get into this because people often really, really don't like this continuum view.
The word adult, right?
Everybody is going to say, look, I know what a baby is.
I know what an adult is.
You're crazy to say that there's no difference.
Not saying there's no difference.
What I'm saying is the word adult is really helpful in court because you just need to move things along.
And so we've decided that if you're 18, you're an adult.
However, what it hides, what it completely conceals, is the fact that, first of all, nothing special happens on your 18th birthday, right?
Second, if you actually look at the data, the car rental companies actually have a much better estimate, because they actually look at the accident statistics, and they'll say it's about 25 that you're really looking for, right?
So theirs is a little better.
It's less arbitrary.
But in either case, what it's hiding is the fact that we do not have a good story of what happened from the time that you were an egg to the time that you're the supposed adult.
And what is the scaling of personal responsibility, decision-making, judgment?
Like these are deep, fundamental questions.
Nobody wants to get into that every time somebody has a traffic ticket.
And so, okay, so we've just decided that there's this adult idea.
And of course, it does come up in court because then somebody has a brain tumor or somebody's eaten too many Twinkies or something has happened.
You say, look, that wasn't me.
Whoever did that, I was on drugs.
Well, why'd you take the drugs?
Well, that was, you know, that was yesterday's me.
Today's me is somebody different, right?
So we get into these very deep questions that are completely glossed over by this idea of an adult.
So, I think once you start scratching the surface, most of these categories are like that.
They're convenient and they're good.
I get into this with neurons all the time.
I'll ask people, what's a neuron?
Like, what's really a neuron?
And yes, if you're in neurobiology 101, of course, you just say, look, these are what neurons look like.
Let's just study the neuroanatomy and we're done.
But if you really want to understand what's going on, well, neurons developed from other types of cells, and that was a slow and gradual process.
And most of the cells in your body do the things that neurons do.
So, what really is a neuron, right?
So, once you start scratching this, this happens.
And I have some things that I think are coming out of our lab and others that are, I think, very interesting about the origin of life, but I don't think it's about finding that one boom.
Yeah, there are innovations, right?
There are innovations that allow you to scale in an amazing way, for sure.
And lots of people study those, right?
Things that are thermodynamic, kind of metabolic things, and all kinds of architectures and so on.
But I don't think it's about finding a line.
I think it's about finding a scaling process.
The scaling process, but then there's more rapid scaling and there's slower scaling.
So, innovation, invention, I think, is useful to understand so you can predict how likely it is on other planets, for example, or to be able to describe the likelihood of these kinds of phenomena happening in certain kinds of environments.
Again, specifically in answering how many alien civilizations there are.
That's why it's useful.
But it's also useful on a scientific level to have categories, not just because it makes us feel good and fuzzy inside, but because it makes conversation possible and productive, I think.
If everything is a spectrum, it becomes difficult to make concrete statements, I think.
Like, we even use the terms of biology and physics.
Those are categories.
Technically, it's all the same thing, really.
Fundamentally, it's all the same.
There's no difference in biology and physics, but it's a useful category.
If you go to the physics department and the biology department, those people are different in some kind of categorical way.
So, somehow, I don't know what the chicken or the egg is, but the categories, maybe the categories create themselves because of the way we think about them and use them in language, but it does seem useful.
Let me make the opposite argument.
They're absolutely useful.
They're useful specifically when you want to gloss over certain things.
The categories are exactly useful when there's a whole bunch of stuff.
And this is what's important about science: like the art of being able to say something without first having to say everything, right?
Which would make it impossible.
So, categories are great when you want to say, look, I know there's a bunch of stuff hidden here.
I'm going to ignore all that and we're just going to let's get on with this particular thing.
And all of that is great as long as you don't lose track of the stuff that you glossed over.
And that was what I'm afraid is happening in a lot of different ways.
And look, I'm very interested in life beyond Earth and all of these kinds of things, although we should also talk about what I call SUTI, S-U-T-I, the search for unconventional terrestrial intelligences.
I think we got much bigger issues than actually recognizing aliens off Earth.
But I'll make this claim.
I think the categorical stuff is actually hurting that search because if we try to define categories with the kinds of criteria that we've gotten used to, we are going to be very poorly set up to recognize life in novel embodiments.
I think we have a kind of mind blindness.
I think this is really key.
To me, the cognitive spectrum is much more interesting than the spectrum of life.
I think really what we're talking about is a spectrum of cognition.
And it's weird as a biologist to say, I don't think life is all that interesting a category.
I think the categories of different types of minds is extremely interesting.
And to the extent that we think our categories are complete and are cutting nature at its joints, we are going to be very poorly placed to recognize novel systems.
So, for example, a lot of people will say, well, this is intelligent and this isn't, right?
And that's a binary thing, and occasionally that's useful for some things.
I would like to say, instead of that, let's admit that we have a spectrum.
But instead of just saying, oh, look, everything's intelligent, right?
Because if you do that, you might be right, but you can't do anything after that.
What I'd like to say instead is, no, no, you have to be very specific as to what kind and how much.
In other words, what problem space is it operating in?
What kind of mind does it have?
What kind of cognitive capacities does it have?
You have to actually be much more specific.
And we can even name, right?
That's fine.
We can name different types: I mean, this one is doing predictive processing.
This can't do that, but it can form memories.
What kind?
Well, habituation and sensitization, but not associative conditioning.
It's fine to have categories for specific capabilities, and I think it actually makes for much more rigorous discussions, because it makes you say, what is it that you're claiming this thing does?
And it works in both directions.
And so some people will say, well, that's a cell.
That can't be intelligent.
And I'll say, well, let's be very specific.
Here are some claims about, here is some problem solving that it's doing.
Tell me why that doesn't, you know, why doesn't that match?
Or in the opposite direction, somebody comes to me and says, you're right, you're right.
You know, the whole solar system, man, it's just this amazing thing. And I'll say, okay, what is it doing?
Like, tell me, what tools of cognitive and behavioral science are you using to reach that conclusion, right?
And so I think it's actually much more productive to take this operational stance and say, tell me what protocols you think you can deploy with this thing that would lead you to use these terms.
To have a bit of a meta-conversation about the conversation, I should say that part of the persuadability argument that we two intelligent creatures are doing is me playing devil's advocate every once in a while.
And you did the same, which is kind of interesting, taking the opposite view and see what comes out.
Because you don't know the result of the argument until you have the argument.
And it seems productive to just take the other side of the argument.
For sure.
It's a very important thinking aid to, first of all, what they call steel manning, right?
To try to make the strongest possible case for the other side and to ask yourself, okay, what are all the places that I'm sort of glossing over because I don't know exactly what to say?
And where are all the holes in the argument?
And what would a really good critique really look like?
Yeah.
Sorry to go back there just to linger on the term because it's so interesting, persuadability.
Did I understand correctly that you mean that it's kind of synonymous with intelligence?
So it's an engineering-centric view of an intelligent system, because if it's persuadable, you're more focused on how you can steer the goals of the system, the behaviors of the system, meaning an intelligent system maybe is a goal-oriented, goal-driven system with agency.
And when you call it persuadable, you're thinking more like, okay, here's an intelligent system that I'm interacting with that I would like to get it to accomplish certain things.
But fundamentally, they're synonymous or correlated, persuadability and intelligence.
They're definitely correlated.
So let me preface this with one thing.
When I say it's an engineering perspective, I don't mean that the standard tools that we use in engineering, and this idea of enforced control and steering, are how we should view all of the world.
I'm not saying that at all.
And I want to be very clear because people do email me and say, ah, this engineering thing, you're going to drain the life and the majesty out of these high-end human conversations.
My whole point is not that at all.
It's that, of course, at the right side of the spectrum, it doesn't look like engineering anymore, right?
It looks like friendship and love and psychoanalysis and all these other tools that we have.
But here's what I want to do.
I want to be very specific to my colleagues in regenerative medicine.
Just imagine if I went to a bioengineering department or a genetics department and I started talking about high-level cognition and psychoanalysis, right?
They wouldn't want to hear that.
So I bring my, I focus on the engineering approach because I want to say, look, this is not a philosophical problem.
This is not a linguistics problem.
We are not trying to define terms in different ways to make anybody feel fuzzy.
What I'm telling you is if you want to reach certain capabilities, if you want to reprogram cancer, if you want to regrow new organs, if you want to defeat aging, if you want to do these specific things, you are leaving too much on the table by making an unwarranted assumption that the low-level tools that we have, so the rules of chemistry and the kind of molecular rewiring, are going to be sufficient to get to where you want to go.
It's an assumption only, and it's an unwarranted assumption.
And actually, we've done experiments now.
So not philosophy, but real experiments, showing that if you take these other tools, you can in fact persuade the system in ways that have never been done before.
And we can unpack all of that.
But it is absolutely correlated with intelligence.
So let me flesh that out a little bit.
What I think is scaling in all of these things, right?
Because I keep talking about the scaling.
So what is it that's scaling?
What I think is scaling is something I call a cognitive light cone.
And the cognitive light cone is the size of the biggest goal state that you can pursue.
This doesn't mean how far your senses reach.
This doesn't mean how far you can affect things.
So the James Webb telescope has enormous sensory reach, but that doesn't mean that's the size of its cognitive light cone.
The size of the cognitive light cone is the scale of the biggest goal you can actively pursue.
But I do think it's a useful concept to enable us to think about very different types of agents of different composition, different provenance, engineered, evolved, hybrid, whatever, all in the same framework.
And by the way, the reason I use light cone is that it has this idea from physics that you're putting space and time kind of in the same diagram, which I like here.
So if you tell me that all your goals revolve around maximizing the amount of sugar in this 10-to-20-micron radius of space-time, and that you have 20 minutes of memory going back and maybe five minutes of predictive capacity going forward, that tiny little cognitive light cone, I'm going to say probably a bacterium.
And if you say to me, well, I'm able to care on a scale of several hundred yards, but I could never care about what happens three weeks from now, two towns over.
Just impossible.
I'm saying, you might be a dog.
And if you say to me, okay, I really care about what happens with, you know, the financial markets on Earth long after I'm dead and this and that, I'll say, you're probably a human.
And if you say to me, I can actively care, I'm not just saying that, I can actively care in the linear range about all the living beings on this planet, I'm going to say, well, you're not a standard human.
You must be something else, because standard humans today, I don't think, can do that.
You must be some kind of a bodhisattva or some other thing that has these massive cognitive light cones.
So I think what's scaling from zero, and I do think it goes all the way down, I think we can talk about even particles doing something like this.
I think what scales is the size of the cognitive light cone.
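To make the scaling idea concrete, here is a minimal Python sketch of a cognitive light cone as reach in space and time; the class, field names, and numbers are illustrative assumptions, not anything from Levin's papers.

```python
from dataclasses import dataclass

@dataclass
class CognitiveLightCone:
    """Toy model: the biggest goal a system can actively pursue,
    expressed as reach in space and time (not sensory reach)."""
    spatial_reach_m: float     # radius over which it can pursue goals
    memory_horizon_s: float    # how far back its past informs those goals
    planning_horizon_s: float  # how far forward it can hold a goal state

    def size(self) -> float:
        # Crude scalar "size" of the cone, just for comparing agents.
        return self.spatial_reach_m * (self.memory_horizon_s + self.planning_horizon_s)

# Rough orders of magnitude for the waypoints described above:
bacterium = CognitiveLightCone(20e-6, 20 * 60, 5 * 60)   # ~20 microns, minutes
dog = CognitiveLightCone(300.0, 30 * 86400, 8 * 3600)    # ~hundreds of yards, hours ahead
human = CognitiveLightCone(1.3e7, 100 * 31_556_952, 100 * 31_556_952)  # planetary, ~centuries

for name, agent in [("bacterium", bacterium), ("dog", dog), ("human", human)]:
    print(f"{name:9s} light-cone size ~ {agent.size():.3g}")
```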
And so now here's something interesting.
I'll try for a definition of life for whatever it's worth.
I spend no time trying to make that stick, but if we wanted to.
I think we call things alive to the extent that the cognitive light cone of that thing is bigger than that of its parts.
So in other words, rocks aren't very exciting, because the things a rock knows how to do are the things that its parts already know how to do, which is follow gradients and things like that.
But living things are amazing at aligning their competent parts so that the collective has a larger cognitive light cone than the parts.
I'll give you a very simple example that comes up in biology and that comes up in our cancer program all the time.
Individual cells have little tiny cognitive light ones.
What are their goals?
Well, they're trying to manage pH, metabolic state, some other things.
There are some goals in transcriptional space, some goals in metabolic space, some goals in physiological state space, but they're generally very tiny goals.
One thing evolution did was to provide a kind of cognitive glue, which we can also talk about, that ties them together into a multicellular system.
And those systems have grandiose goals.
They're making limbs.
And if you're a salamander and you chop off its limb, the cells will regrow that limb with the right number of fingers.
Then they'll stop when it's done.
The goal has been achieved.
No individual cell knows what a finger is or how many fingers you're supposed to have, but the collective absolutely does.
And then there's that process of growing that cognitive light cone from a single cell to something much bigger, and of course, the failure mode of that process.
So cancer, right?
When cells disconnect, when they physiologically disconnect from the other cells, their cognitive light cone shrinks.
The boundary between self and world, which is what the cognitive light cone defines, shrinks.
Now they're back to an amoeba.
As far as they're concerned, the rest of the body is just an external environment.
And they do what amoebas do.
They go where life is good.
They reproduce as much as they can, right?
So that cognitive light cone, that is the thing that I'm talking about that scales.
And so when we're looking for life, I don't think we're looking for specific materials.
I don't think we're looking for specific metabolic states.
I think we're looking for scales of cognitive light cones.
We're looking for alignment of parts towards bigger goals in spaces that the parts could not comprehend.
And so the cognitive light cone, just to make clear, is about goals that you can actively pursue now.
You said linear, like within reach immediately.
No, I didn't.
Sorry, I didn't mean that.
First of all, the goal is often removed in time.
So in other words, when you're pursuing a goal, it means that you have a separation between current state and target state, at minimum. Your thermostat, right?
Let's just think about that.
There's a separation in time because the thing you're trying to make happen so that the temperature goes to a certain level is not true right now.
And all your actions are going to be around reducing that error, right?
That basic homeostatic loop is all about closing that gap.
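That loop is small enough to write down. Here is a minimal sketch in Python, with made-up names and numbers, just to show the separation between current and target state and the error-reducing action:

```python
def homeostatic_loop(current: float, target: float, gain: float = 0.2, steps: int = 50) -> float:
    """Toy homeostat: act so as to close the gap between the current state
    and a target state that is not true right now."""
    for _ in range(steps):
        error = target - current   # the separation that defines the goal
        if abs(error) < 0.01:      # goal achieved: stop acting
            break
        current += gain * error    # action chosen to reduce the error
    return current

# A "thermostat" with a set point of 21 degrees, starting from a 15-degree room:
print(homeostatic_loop(current=15.0, target=21.0))  # converges on ~21.0
```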
When I said linear range, this is what I meant.
If I say to you, this terrible thing happened to 10 people and you have some degree of activation about it.
And then I say, no, no, no, actually it was, you know, 10,000 people.
You're not 1,000 times more activated about it.
You're somewhat more activated, but it's not 1,000.
And if I say, oh my God, it was actually 10 million people, you're not a million times more activated.
You don't have that capacity in the linear range.
If you think about that curve, we sort of reach a saturation point.
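In code, the difference between that saturating response and a true linear range looks something like this; the logarithm is just one assumed stand-in for the flattening curve:

```python
import math

def typical_activation(n_people: int) -> float:
    """Saturating response: grows with the numbers at first, then flattens."""
    return math.log1p(n_people)

def linear_activation(n_people: int) -> float:
    """The hypothetical bodhisattva case: concern scales with the actual count."""
    return float(n_people)

for n in [10, 10_000, 10_000_000]:
    print(f"{n:>10,} people -> typical {typical_activation(n):6.2f}, linear {linear_activation(n):,.0f}")
```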
I have some amazing colleagues in the Buddhist community with whom we've written some papers about this.
The radius of compassion is like, can you grow your cognitive system to the point that, yeah, it really isn't just your family group.
It really isn't just the hundred people you know in your circle.
Can you grow your cognitive light cone to the point where, no, no, we care about the whole, whether it's all of humanity or the whole ecosystem or the whole whatever.
Can you actually care about that the exact same way that we now care about a much smaller set of people?
That's what I mean by linear range.
But you say separated by time like a thermostat.
But a bacterium, I mean, if you zoom out far enough, a bacterium could be formulated to have a goal state of creating human civilization.
Because if you look at the, you know, bacteria has a role to play in the whole history of Earth.
So if you anthropomorphize the goals for bacteria enough, I mean, it has a concrete role to play in the history of the evolution of human civilization.
So when you define a cognitive light cone, are you looking directly at short-term behavior?
Well, no.
How do you know what the cognitive light cone of something is?
Because as you've said, it could be almost anything.
The key is you have to do experiments.
And the way you do that is you have to do interventional experiments.
You have to put barriers between it and its goal, and you have to ask what happens.
And intelligence is the degree of ingenuity that it has in overcoming barriers between it and its goal.
Now, this is, I think, a totally doable but impractical and very expensive experiment.
But you could imagine setting up a scenario where the bacteria were blocked from becoming more complex, and you could ask if they would try to find ways around it, or whether actually, nah, their goals are just metabolic.
And as long as those goals are met, they're not going to actually get around your barrier.
This business of putting barriers between things and their goals is actually extremely powerful, because we've deployed it, and I'm sure we'll get to this later, in all kinds of weird systems that you wouldn't think are goal-driven systems.
And what it allows us to do is get beyond just the, what do you call it, anthropomorphizing claims, you know, saying, oh yeah, I think this thing is trying to do this or that.
The question is, well, let's do the experiment.
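A toy version of that experimental logic, with everything here invented for illustration: put a wall between an agent and its goal, and compare a pure gradient-follower, which stalls at the wall, against a search-based agent that finds the detour.

```python
from collections import deque

# 0 = open, 1 = barrier; the goal sits across a wall with one gap at the bottom.
grid = [[0] * 5 for _ in range(5)]
for r in range(4):
    grid[r][2] = 1        # wall down column 2, open only at row 4
start, goal = (0, 0), (0, 4)

def neighbors(p):
    r, c = p
    for dr, dc in [(1, 0), (-1, 0), (0, 1), (0, -1)]:
        if 0 <= r + dr < 5 and 0 <= c + dc < 5 and grid[r + dr][c + dc] == 0:
            yield (r + dr, c + dc)

def dist(p):
    return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

def gradient_follower(p, max_steps=20):
    """Low-agency strategy: only take steps that immediately reduce
    distance to the goal; it stalls once the wall blocks improvement."""
    for _ in range(max_steps):
        better = [n for n in neighbors(p) if dist(n) < dist(p)]
        if not better:
            return p      # stuck: no locally improving move exists
        p = better[0]
    return p

def planner(p):
    """Higher-agency strategy: breadth-first search happily detours
    away from the goal to route around the barrier."""
    seen, queue = {p}, deque([p])
    while queue:
        cur = queue.popleft()
        if cur == goal:
            return cur
        for n in neighbors(cur):
            if n not in seen:
                seen.add(n)
                queue.append(n)
    return p              # no route exists

print("gradient follower ends at:", gradient_follower(start))  # stalls at (0, 1)
print("planner ends at:          ", planner(start))            # reaches (0, 4)
```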
And one other thing I want to say about anthropomorphizing is people say this to me all the time.
I don't think that exists.
I think it's kind of like heresy, or like other terms that aren't really a thing, and I'll tell you why.
Because if you unpack it, here's what anthropomorphism means.
Humans have a certain magic, and you're making a category error by attributing that magic somewhere else.
My point is, we have the same magic that everything has.
We have a couple of interesting things besides the cognitive light cone and some other stuff.
And it isn't that you have to keep the humans separate because there's some bright line.
It's just the same old thing: all I'm arguing for is the scientific method, really.
That's really all this is.
All I'm saying is you can't just make pronouncements such as, humans are this, and let's not push past that.
You have to do experiments.
After you've done your experiments, you can say, either I've done it and I found, look at that, that thing actually can predict the future for the next 12 minutes.
Amazing.
Or you say, you know what?
I've tried all the things in the behaviorist handbook.
They just don't help me with this.
It's a very low level of intelligence, that's it.
Fine, right?
Done.
So that's really all I'm arguing for is an empirical approach.
And then things like anthropomorphism go away.
It's just a matter of, have you done the experiment and what did you find?
And that's actually one of the things you're saying: if you remove the categorization of things, you can use the tools of one discipline on everything.
You can try and then see.
That's the underpinning of the criticism of anthropomorphization.
Because what is that?
It's like, the psychoanalysis we apply to another human could technically be applied to robots, to AI systems, to more primitive biological systems, and so on.
Try.
Yeah.
We've used everything from basic habituation and conditioning all the way through anxiolytics, hallucinogens, all kinds of cognitive modification, on a range of things that you wouldn't believe.
And by the way, I'm not the first person to come up with this.
So there was a guy named Bose well over 100 years ago who was studying how anesthesia affected animals and animal cells and drawing specific curves around electrical excitability.
And he then went and did it with plants and saw some very similar phenomena.
And being the genius that he was, he then said, well, I don't know when to stop; there's no rule. Everybody thought he should have stopped long before plants, and people made fun of him for that.
And he's like, yeah, but the science doesn't tell us where to stop.
The tool is working.
Let's keep going.
And he showed interesting phenomena in materials, in metals and other kinds of materials, right?
And so the interesting thing is that, yeah, there is no generic rule that tells you when you need to stop.
We make those up.
Those are completely made up.
You have to just, you have to do the science and find out.
Yeah, we'll probably get to it.
You've been doing recent work looking at computational systems, even trivial ones like sorting algorithms, and analyzing them in a behavioral kind of way, to see if there are minds inside those sorting algorithms.
And of course, let me make a pothead statement-question here: you could start to do things like trying psychedelics on a sorting algorithm.
Yeah.
And what does that even look like?
It looks like a ridiculous question.
It'll get you fired from most academic departments, but maybe if you take it seriously, you could try and see if it applies.
Yeah.
If a thing can be shown to have some kind of cognitive complexity, some kind of mind, why not apply to it the same kind of analysis and the same kind of tools, like psychedelics, that you would to a complex human mind? That, at least, might be a productive question to ask.
Because you've seen spiders on psychedelics, more primitive biological organisms on psychedelics, so why not try to see what an algorithm does on psychedelics?
Well, yeah.
The thing to remember is we don't have a magic sense or a really good intuition for what the mapping is between the embodiment of something and the degree of intelligence it has.
We think we do, because we have an n of one example on Earth, and we kind of know what to expect from cells, snakes, you know, primates.
But we really don't.
We don't have it, and we'll get into more of this with the platonic space stuff, but our intuitions around that are so bad that to really think that we know enough not to try things at this point is, I think, really short-sighted.
Before we talk about the platonic space, let's lay out some foundations.
I think one useful one comes from the paper Technological Approach to Mind Everywhere: An Experimentally Grounded Framework for Understanding Diverse Bodies and Minds.
Could you tell me about this framework?
Could you tell me about this framework?
And maybe can you tell me about figure one from this paper that has a few components.
One is the tiers of biological cognition, which go from group to whole organism, to whole tissue and organ, down to neural network, down to cytoskeleton, down to genetic network.
And then there's layers of biological systems from ecosystem down to swarm, down to organism, tissue, and finally cell.
So can you explain this figure, and can you explain the so-called TAME framework?
So this is the version 1.0, and there's a kind of 2.0 update that I'm writing at the moment, trying to formalize in a careful way all the things that we've been talking about here.
And in particular, this notion of having to do experiments to figure out where any given system is on a continuum.
And let's just start with figure two maybe for a second, and then we'll come back to figure one.
And first, just to unpack the acronym, I like the idea that it spells out TAME, because the central focus of this is interactions, and how you interact with a system to have a productive interaction with it.
And the idea is that cognitive claims are really protocol claims.
When you tell me that something has some degree of intelligence, what you're really saying is this is the set of tools I'm going to deploy.
And we can all find out how that worked out for you.
And so technological, because I wanted to be clear with my colleagues that this was not a project in just philosophy.
This had very specific empirical implications that are going to play out in engineering and regenerative medicine and so on.
Technological approach to mind everywhere, this idea that we don't know yet where different kinds of minds are to be found, and we have to empirically figure that out.
And so what you see here in figure two is basically this idea that there is a spectrum and I'm just showing four waypoints along that spectrum.
And as you move to the right of that spectrum, a couple of things happen.
Persuadability goes up, meaning that the systems become more reprogrammable, more plastic, more able to do different things than whatever they're standardly doing.
So you have more ability to get them to do new and interesting things.
The effort needed to exert influence goes down.
That is, autonomy goes up.
And to the extent that you are good at convincing or motivating the system to do things, you don't have to sweat the details as much, right?
And this also has to do with what I call engineering agential materials.
So when you engineer wood, metal, plastic, things like that, you are responsible for absolutely everything because the material is not going to do anything other than hopefully hold its shape.
If you're engineering active matter or you're engineering computational materials or better yet, agential materials like living matter, you can do some very high-level prompting and let the system then do very complicated things that you don't need to micromanage.
And we all know that that increases when you're starting to work with intelligent systems like animals and humans and so on.
And the other thing that goes down as you get to the right is the amount of mechanism or physics knowledge that you need to exert the influence.
So if you know how your thermostat is to be set as far as its set point, you really don't need to know much of anything else, right?
You just need to know that it is a homeostatic system and that this is how I change the set point.
You don't need to know how the cooling and heating plant works in order to get it to do complex things.
By the way, a quick pause just for people who are listening, let me describe what's in the figure.
So there's four different systems going up the scale of persuadability.
So the first system is a mechanical clock, then it's a thermostat, then it's a dog that gets rewards and punishments, Pavlov's dog, and then finally a bunch of very smart-looking humans communicating with each other and arguing, persuading each other using hashtag reasons.
And then there are arrows below that showing persuadability going up as you move across these systems, from the mechanical clock to the Greeks arguing, and the effort needed to exert influence going down.
And, once again going down, the mechanism knowledge needed to exert that influence.
Yeah.
I'll give you an example about that panel C here with a dog.
Isn't it amazing that humans have been training dogs and horses for thousands of years knowing zero neuroscience?
Also amazing is that when I'm talking to you right now, I don't need to worry about manipulating all of the synaptic proteins in your brain to make you understand what I'm saying and hopefully remember it.
You're going to do that all on your own.
I'm giving you very thin prompts, very thin in terms of information content.
And I'm counting on you as a multi-scale agential material to take care of the chemistry underneath.
So you don't need a wrench to convince me.
Correct.
I don't need, and I don't need physics to convince you.
And I don't need to know how you work.
Like I don't need to understand all of the steps.
What I do need to have is trust that you are a multi-scale cognitive system that already does that for yourself.
And you do.
Like this is an amazing thing.
People don't think about this enough, I think.
When you wake up in the morning and you have social goals, research goals, financial goals, whatever it is that you have, in order for you to act on those goals, sodium and calcium and other ions have to cross your muscle membranes.
Those incredibly abstract goal states ultimately have to make the chemistry dance in a very particular way, right?
Our entire body is a transducer of very abstract things.
And by the way, not just our brains, but other, you know, our organs have anatomical goals and other things that we can talk about because all of this plays out in regeneration and development and so on.
But the scaling, right, of all of these things, the way you regulate yourself: you don't have to sit there and think, wow, I really have to push some sodium across this membrane.
All of that happens automatically.
And that's the incredible benefit of these multi-scale materials.
So what I was trying to do in this paper is a couple of things.
All of these were, by the way, drawn by Jeremy Guay, who's this amazing graphic artist who works with me.
First of all, in panel A, which is the spiral, I was trying to point out, is that at every level of biological organization, like we all know we're sort of nested dolls of organs and tissues and cells and molecules and whatever.
But what I was trying to point out is that this is not just structural.
Every one of those layers is competent and is doing problem-solving in different spaces, spaces that are very hard for us to imagine.
Because of our own evolutionary history, we humans are so obsessed with movement in three-dimensional space that even in AI, you see this all the time.
They say, well, this thing doesn't have a robotic body.
It's not embodied.
Yeah, it's not embodied by moving around in 3D space, but biology has embodiments in all kinds of spaces that are hard for us to imagine, right?
So your cells and tissues are moving in high-dimensional physiological state spaces and in gene expression state spaces, in anatomical state spaces.
They're doing that perception decision-making action loop that we do in 3D space when we think about robots wandering around your kitchen.
They're doing those loops in these other spaces.
And so the first thing I was trying to point out is that, yeah, every layer of your body has its own ability to solve problems in those spaces.
And then on the right, what I was saying is that there's this distinction, you know, people say, well, there are living beings and then there are engineered machines, and then they often follow up with all the things machines are never going to be able to do and whatever.
And so what I was trying to point out here is that it is very difficult to maintain those kind of distinctions because life is incredibly interoperable.
Life doesn't really care if the thing it's working with was evolved through random trial and error or was engineered with a higher degree of agency because at every level, within the cell, within the tissue, within the organism, within the collective, you can replace and substitute engineered systems with the naturally evolved systems.
And that question of is it really, you know, is it biology or is it technology, I don't think is a useful question anymore.
So I was trying to warm people up with this idea that what we're going to do now is talk about minds in general, regardless of their history or their composition.
It doesn't matter what you're made of.
It doesn't matter how you got here.
Let's talk about what you're able to do and what your inner world looks like.
That was the goal of that.
Is it useful as a thought experiment, as an experiment of radical empathy to try to put ourselves in the space of the different minds at each stage of the spiral?
Like, what state space is human civilization, as a collective, embodied in?
What does it operate in?
So humans, individual organisms operate in 3D space.
That's what we understand.
But when there's a bunch of us together, what are we doing together?
It's really hard, and you have to do experiments, which at larger scales are really difficult.
But there is such a thing.
There may well be.
We have to do experiments.
I don't know.
Here's an example.
Somebody will say to me, well, with your kind of panpsychist view, you probably think the weather is agential too.
It's like, well, I can't say that, but we don't know.
But have you ever tried to see if a hurricane has habituation or sensitization?
Maybe.
We haven't done the experiment.
It's hard, but you could, right?
And maybe weather systems can have certain kinds of memories.
I have no idea.
We have to do experiments.
So I don't know what the entire human society is doing, but I'll just give you a simple example of the kinds of tools.
And we're actively trying to build tools now to enable radically different agents to communicate.
So we are doing this using AI and other tools to try and get this kind of communication going across very different spaces.
I'll just give you a very kind of dumb example of how that might be.
Imagine that you're playing tic-tac-toe against an alien.
So you're in a room, you don't see him.
So you draw the tic-tac-toe thing on the floor, and you know what you're doing.
You're trying to make straight lines with X's and O's.
And you're having a nice game.
It's obvious that he understands the process.
Like sometimes you win, sometimes you lose.
Like it's obvious.
In that one little segment of activity, you guys are sharing a world.
What's happening in the other room next door?
Well, let's say the alien doesn't know anything about geometry.
He doesn't understand straight lines.
What he's doing is he's got a box and it's full of basically billiard balls, each one of which has a number on it.
And all he's looking, he's doing is he's looking through the box to find billiard balls whose numbers add up to 15.
He doesn't understand geometry at all.
All he understands is arithmetic.
You don't think about arithmetic.
You think geometry.
The reason you guys are playing the same game is that there's this magic square, right, that somebody constructed: basically a three-by-three square where, if you pick the numbers right, every row, column, and diagonal adds up to 15.
He has no idea that there's a geometric interpretation to this.
He is solving the problem that he sees, which is totally algebra.
You don't know anything about that.
But if there is an appropriate interface like this magic square, you guys can share that experience.
You can have an experience.
It doesn't mean you start to think like him.
It means that you guys are able to interact in a particular way.
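Since the three-by-three magic square is a standard object, the equivalence of the two views can actually be checked in a few lines of Python; this sketch is an editorial illustration, not something from the conversation:

```python
from itertools import combinations

# The standard 3x3 magic square: every row, column, and diagonal sums to 15,
# and those eight lines are exactly the eight triples from 1..9 that sum to 15.
MAGIC = [[2, 7, 6],
         [9, 5, 1],
         [4, 3, 8]]

LINES = ([tuple((r, c) for c in range(3)) for r in range(3)] +   # rows
         [tuple((r, c) for r in range(3)) for c in range(3)] +   # columns
         [((0, 0), (1, 1), (2, 2)), ((0, 2), (1, 1), (2, 0))])   # diagonals

def wins_geometrically(cells):
    """The human's view: three of your marks in a straight line."""
    return any(all(p in cells for p in line) for line in LINES)

def wins_arithmetically(cells):
    """The alien's view: some three of your billiard balls sum to 15."""
    numbers = [MAGIC[r][c] for r, c in cells]
    return any(sum(t) == 15 for t in combinations(numbers, 3))

# The two views agree on every possible set of marks: a shared thin slice of world.
all_cells = [(r, c) for r in range(3) for c in range(3)]
for k in range(3, 6):
    for picks in combinations(all_cells, k):
        assert wins_geometrically(set(picks)) == wins_arithmetically(picks)
print("geometry and arithmetic agree on every position")
```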
Okay, so there's a mapping between the two different ways of seeing the world that allows you to communicate, each seeing a thin slice of the world.
Thin slice of the world.
How do you find that mapping?
So you're saying we're trying to figure out ways of finding that mapping for different kinds of systems.
What's the process for doing that?
So the process is twofold.
One is to get a better understanding of the system: what space is it navigating, what goals does it have, what level of ingenuity does it have to reach those goals?
For example, xenobots, right?
We make xenobots.
Or anthrobots.
These are biological systems that have never existed on Earth before.
We have no idea what their cognitive properties are.
We're learning.
We found some things.
But you can't predict that from first principles because they're not at all what their past history would inform you of.
Can you actually explain briefly what a xenobot is and what an anthrobot is?
So one of the things that we've been doing is trying to create novel beings that have never been here before.
The reason is that typically when you have a biological system, an animal or a plant, and you say, hey, why does it have certain forms of behavior, certain forms of anatomy, certain forms of physiology?
Why does it have those?
The answer is always the same.
Well, there's a history of evolutionary selection, and there's a long, long history going back of adaptation, and there are certain environments, and this is what survived.
And so that's why it has those.
So what I wanted to do was break out of that mold and to basically force us as a community to dig deeper into where these things come from.
And that means taking away the crutch where you just say, well, it's evolutionary selection.
That's why it looks like that.
So in order to do that, we have to make artificial synthetic beings.
Now, to be clear, we are starting with living cells.
So it's not that they had no evolutionary history.
The cells do.
They had evolutionary history in frogs or humans or whatever.
But the creatures they make and the capabilities that these creatures have were never directly selected for.
And in fact, they never existed.
So you can't tell the same kind of story.
And what I mean is we can take epithelial cells off of an early frog embryo, and we don't change the DNA.
No synthetic biology circuits, no material scaffolds, no nanomaterials, no weird drugs, none of that.
What we're mostly doing is liberating them from the instructive influences of the rest of the cells that they were in in their bodies.
And so when you do that, right, normally these cells are bullied by their neighboring cells into having a very boring life.
They become a two-dimensional outer covering for the embryo, and they keep out the bacteria, and that's that.
So you might ask, well, what are these cells capable of when you take them away from that influence?
So when you do that, they form another little life form we call a xenobot.
And it's this self-motile little thing that has cilia covering its surface.
The cilia are coordinated so they row against the water and then the thing starts to move and has all kinds of amazing properties.
It has different gene expression, so it has its own novel transcriptome.
It's able to do things like kinematic self-replication, meaning make copies of itself from loose cells that you could put in its environment.
It has the ability to respond to sound, which normal embryos don't do.
It has these novel capacities.
And we did that.
And we said, look, here are some amazing features of this novel system.
Let's try to understand where they came from.
And some people said, well, maybe it's a frog-specific thing.
You know, maybe this is just something unique to frog cells.
And so we said, okay, what's the furthest you can get from frog embryonic cells?
How about human adult cells?
And so we took cells from adult human patients who were donating tracheal epithelia for biopsies and things like that.
And those cells, in, again, no genetic change, nothing like that, they self-organized into something we call anthrobots.
Again, a self-motile little creature, with about 9,000 differentially expressed genes.
So about half the genome is now different.
And they have interesting abilities.
For example, they can heal human neural wounds.
So in vitro, if you plate some neurons and you put a big scratch through them, so you damage them, anthrobots will settle down on the wound and, spontaneously, without us having to teach them to do it, try to knit the neurons back together across the gap.
What is this video that we're looking at here?
So this is an anthrobot.
So often when I give talks about this, I show people this video and I say, what do you think this is?
And people will say, well, it looks like some primitive organism you got from the bottom of a pond somewhere.
And I'll say, what do you think the genome would look like?
And they'll say, well, the genome would look like some primitive creature's.
Right.
If you sequence that thing, you'll get 100% Homo sapiens.
And that doesn't look like any stage of normal human development.
It doesn't act like any stage of human development.
It has the ability to move around.
It has, as I said, over 9,000 differentially expressed genes.
Also, interestingly, it is younger than the cells that it comes from.
So it actually has the ability to roll back its age.
And we could talk about that and what the implications of that are.
But to go back to your original question, what we're doing with these kinds of systems.
Try and talk to it.
We're trying to talk to it.
That's exactly right.
And not just to this, we're trying to talk to molecular networks.
We found, a couple of years ago, that gene regulatory networks, never mind the cells, but the molecular pathways inside of cells, can have several different kinds of learning, including Pavlovian conditioning.
And what we're doing now is trying to talk to it.
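A toy sketch of what Pavlovian conditioning in a molecular pathway could look like (this two-variable model is my own illustration, not the lab's published pathway models): an unconditioned stimulus always drives the response, and a conditioned stimulus acquires influence only through paired presentation, via a slowly changing coupling weight standing in for the pathway's memory.

```python
def step(us, cs, w, lr=0.2, decay=0.01):
    """One time step of a minimal two-input 'molecular' node."""
    response = us + w * cs          # output of the node
    w += lr * us * cs - decay * w   # pairing strengthens the CS->response coupling
    return response, w

w = 0.0
print("before training, CS alone:", step(0, 1, w)[0])   # 0.0: no response
for _ in range(30):                                      # training: pair CS with US
    _, w = step(1, 1, w)
print("after training, CS alone:", round(step(0, 1, w)[0], 2))  # > 0: conditioned
```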
The biomedical applications are obvious.
Instead of, hey, Siri, you want: hey, liver, why do I feel like crap today?
And you want an answer.
Well, you know, your potassium levels are this and that.
And I don't feel, you know, I don't feel good for these reasons.
And you should be able to talk to these things.
And there should be an interface that allows us to communicate, right?
And I think AI is going to be a huge component of that interface, of allowing us to talk to these systems.
It's a tool to combat our mind blindness, to help us see diverse other very unconventional minds that are all around us.
Can you generalize that?
Let's say we meet an alien or an unconventional mind here on Earth.
Think of it as a black box.
You show up.
What's the procedure for trying to get some hooks into a communication protocol with a thing?
Yeah, that is exactly the mission of my lab.
It is to enable us to develop tools to recognize these things, to learn to communicate with them, to ethically relate to them, and in general, to expand our ability to do this in the world around us.
I specifically chose these kinds of things because they're not as alien as proper aliens would be.
So we have some hope.
I mean, we're made of them.
We have many things in common.
There's some hope of understanding them.
You're talking about xenobots and that.
Xenobots and anthrobots and cells and everything else.
But they're alien in a couple of important ways.
One is the space they live in is very hard for us to imagine.
What space do they live in?
Well, your body's cells, long before we had a brain that was good at navigating three-dimensional space, were navigating the space of anatomical possibilities.
You start as an egg and you have to become a snake or a giraffe or a human, whatever we're going to be.
And I am specifically telling you about this general idea, where people model that with cellular-automata-type ideas, this open-loop kind of thing where everything just follows local rules and eventually there's complexity and here you go.
Now you've got a giraffe or a human.
I'm specifically telling you that that model is totally insufficient to grasp what's actually going on.
What's actually going on, and there have been many, many experiments on this, is that the system is navigating a space.
It is navigating a space of anatomical possibilities.
If you try to block where it's going, it will try to get around you.
If you try to challenge it with things it's never seen before, it will try to come up with a solution.
If you really defeat its ability to do that, which you can, you know, they're not infinitely intelligent, so you can defeat them.
You will either get birth defects or you will get creative problem solving, such as what you're seeing here with xenobots and anthrobots.
If you can't be a human, you'll find another way to be.
You can be an anthrobot, for example, or you'll be something else.
Just to clarify, what's the difference between cellular automata type of action where you're just responding to your local environment and creating some kind of complex behavior and operating in the space of anatomical possibilities?
Sure.
So there's a kind of goal, I guess, you're teaching.
There is some kind of thing.
There's a will to X, something.
The will thing, let's put that aside for a second.
Well, it's fine.
I go full anthropomorphism.
I just always love to quote Nietzsche.
Yeah, yeah, yeah.
And I'm not saying that's wrong.
I'm just saying I don't have data for that one, but I'll tell you the stuff that I'm quite certain of.
There are a couple of different formalisms that we have in control theory.
One of those formalisms is open loop complexity.
In other words, I've got a bunch of subunits like a cellular automaton.
They follow certain rules.
And you turn the crank, time goes forward, whatever happens, happens.
Now, clearly, you can get complexity from this.
Clearly, you can get some very interesting-looking things, right?
So the game of life, all those kinds of cool things, right?
You can get complexity, no problem.
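For concreteness, here is a minimal open-loop cellular automaton in exactly this sense, Conway's Game of Life: fixed local rules rolled forward in time, with no goal state and no error correction anywhere in the system.

```python
from collections import Counter

def life_step(alive):
    """alive: set of (row, col) live cells; returns the next generation."""
    neighbor_counts = Counter(
        (r + dr, c + dc)
        for (r, c) in alive
        for dr in (-1, 0, 1) for dc in (-1, 0, 1)
        if (dr, dc) != (0, 0)
    )
    return {cell for cell, n in neighbor_counts.items()
            if n == 3 or (n == 2 and cell in alive)}

glider = {(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)}
for _ in range(4):                 # after 4 steps the glider has moved one diagonal
    glider = life_step(glider)
print(sorted(glider))              # same shape, translated: complexity, but no goal
```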
But the idea that that model is going to be sufficient to explain and control things like morphogenesis is a hypothesis.
It's okay to make that hypothesis, but we know it's false, despite the fact that that is what we learn in basic cell biology and developmental biology classes.
When the first time you see something like this, inevitably, especially if you're an engineer in those classes, you raise your hand and you go, hey, how does it know to do that?
How does it know four fingers instead of seven?
What they tell you is it doesn't know anything.
Make sure that that's very clear.
They all insist when we learn these things, they insist nothing here knows anything.
There are rules of chemistry, they roll forward, and this is what happens.
Okay.
Now, that model is testable.
We can ask, does that model explain what happens?
Here's where that model falls down.
If you have that model and the situation changes, either there's damage or something in the environment has happened, those kinds of open-loop models do not adjust to give you the same goal by different means.
This is William James' definition of intelligence, the same goal by different means.
And in particular, working them backwards, let's say you're in regenerative medicine and you say, okay, but this is the situation now.
I want it to be different.
What should the rules be?
It's not reversible.
So the thing with those kinds of open-loop models is they're not reversible.
You don't know what to do to make the outcome that you want.
All you know how to do is roll them forward, right?
Now, in biology, we see the following.
If you have a developmental system and you put barriers in its way... so, I'm going to give you two pieces of evidence that suggest that there is a goal.
One piece of evidence is that if you try to block these things from the outcome that they normally have, they will do some amazing things, sometimes very clever things, sometimes not at all the way that they normally do it, right?
So this is William James's definition.
By different means, by following different trajectories, they will go around various local maxima and minima to get to where they need to go.
It is navigation of a space.
It is not blindly turning the crank, where wherever we end up is where we end up.
That is not what we see experimentally.
And more importantly, I think what we've shown, and this is something that I'm particularly happy with in our lab, over the last 20 years, we've shown the following.
We can actually rewrite the goal states because we found them.
We have shown through our work on bioelectric imaging and bioelectric reprogramming, we have actually shown how those goal memories are encoded, at least in some cases.
We certainly haven't got them all, but we have some.
If you can find where the goal state is encoded, read it out and reset it, and the system will now implement a new goal based on what you just reset.
That is the ultimate evidence that your goal-directed model is working.
Because if there was no goal, that shouldn't be possible.
You shouldn't be able to find it, read it, interpret it, and rewrite it.
By any engineering standard, it means that you're dealing with a homeostatic mechanism.
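A minimal sketch of what that engineering standard means (the numbers here are arbitrary): a toy homeostat with an explicit, stored setpoint. Perturb the state and it recovers, the same goal by different means; rewrite the setpoint and the very same machinery now defends the new goal.

```python
class Homeostat:
    def __init__(self, setpoint, gain=0.3):
        self.setpoint = setpoint          # the encoded goal state
        self.state = setpoint
        self.gain = gain

    def step(self, disturbance=0.0):
        error = self.setpoint - self.state
        self.state += self.gain * error + disturbance
        return self.state

h = Homeostat(setpoint=70.0)
h.step(disturbance=-15.0)                 # a perturbation knocks the state down
for _ in range(20):
    h.step()
print(round(h.state, 1))                  # ~70.0: recovered the original goal

h.setpoint = 55.0                         # "read it out and reset it"
for _ in range(20):
    h.step()
print(round(h.state, 1))                  # ~55.0: now defends the rewritten goal
```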
How do you find where the goal is encoded?
So, through lots and lots of hard work.
The barrier thing is part of that, creating barriers and observing.
The barrier thing tells you that you should be looking for a goal.
So, step one when you approach a given system is to create barriers of different kinds until you see how persistent it is at pursuing the thing it seemed to have been pursuing originally.
And then you know, okay, cool, this thing has agency, first of all.
And then, second of all, like you start to build an intuition about exactly which goal it's pursuing.
Yes, the first couple of steps are all imagination.
You have to ask yourself, what space is this thing even working in?
And you really have to stretch your mind because we can't imagine all the spaces that systems work in, right?
So, step one is, what space is it?
Step two, what do I think the goal is?
And let's not be mistaken: at step two, you're not done.
Just because you've made a hypothesis, that doesn't mean you can say, well, there, I see it doing this, therefore that's the goal.
You don't know that; you have to actually do experiments.
Now, once you've made those hypotheses, now you do the experiments.
You say, Okay, if I want to block it from reaching its goal, how do I do that?
And this, by the way, is exactly the approach we took with the sorting algorithms and with everything else.
You hypothesize the goal, you put a barrier in, and then you get to find out what level of ingenuity it has.
Maybe what you see is, well, that derailed everything, so probably this thing isn't very smart.
Or you see, oh, wow, it can go around and do these things, or you might see, wow, it's taking a completely different approach, using its affordances in novel ways.
Like, that's a high level of intelligence.
You will find out what the answer is.
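Here is a toy version of that barrier experiment applied to a sorting process (my own construction, inspired by the lab's work on sorting algorithms, not their exact protocol): a bubble-sort-like process with some "frozen" cells it cannot move, scored by how close the movable cells still get to the sorted goal.

```python
import random

def sort_with_barriers(values, frozen, sweeps=50):
    """Bubble-sort-like passes in which 'frozen' indices refuse to move;
    each movable cell compares with the next movable cell, skipping barriers."""
    vals = list(values)
    movable = [i for i in range(len(vals)) if i not in frozen]
    for _ in range(sweeps):
        for a, b in zip(movable, movable[1:]):
            if vals[a] > vals[b]:
                vals[a], vals[b] = vals[b], vals[a]
    return vals

def sortedness(vals, frozen):
    """Fraction of adjacent movable pairs in order: degree of goal achievement."""
    movable = [vals[i] for i in range(len(vals)) if i not in frozen]
    pairs = list(zip(movable, movable[1:]))
    return sum(x <= y for x, y in pairs) / len(pairs)

random.seed(0)
values = random.sample(range(100), 20)
frozen = {3, 9, 15}                          # barriers: cells that will not move
result = sort_with_barriers(values, frozen)
print(round(sortedness(result, frozen), 2))  # 1.0: goal reached around the barriers
```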
Speaking of unconventional organisms, and going to Richard Dawkins, for example, with memes: is it possible to look at things like ideas this way?
Like, how weird can we get?
Can we look at ideas as organisms, creating barriers for those ideas, taking the actual individual ideas and trying to empathize and visualize what kind of space they might be operating in?
Can they be seen as organisms that have a mind?
Yeah.
Okay, if you want to get really weird, we can get really weird here.
Think about the caterpillar-butterfly transition.
Okay, so you've got a caterpillar, soft-bodied kind of creature, has a particular controller that's suitable for running a soft-body, you know, kind of robot.
It has a brain for that task, and then it has to become this butterfly, a hard-bodied creature that flies around.
During the process of metamorphosis, its brain is basically ripped up and rebuilt from scratch, right?
Now, what's been found is that if you train the caterpillar, so you give it a new memory, meaning you teach it that if it sees this colored disc, then it crawls over and eats some leaves.
Turns out the butterfly retains that memory.
Now, the obvious question is: how the hell do you retain memories when the medium is being refactored like that?
Let's put that aside.
I'm going to get somewhere even weirder than that.
There's something else that's even more interesting than that.
It's not just that you have to retain the memory, you have to remap that memory onto a completely new context.
Because guess what?
The butterfly doesn't move the way the caterpillar moves, and it doesn't care about leaves.
It wants nectar from flowers.
And so, if that memory is going to survive, it can't just persist, it has to be remapped, be remapped into a novel context.
Now, here's where things get weird: we can take a couple of different perspectives here.
We can take the perspective of the caterpillar facing some sort of crazy singularity and say, my God, I'm going to cease to exist, but I'll sort of be reborn in this new higher dimensional world where I'll fly.
Okay, so that's one thing.
We can take the perspective of the butterfly and say that, well, here I am, but I seem to be saddled with some tendencies and some memories, and I don't know where the hell they came from.
And I don't remember exactly how I got them.
And they seem to be a core part of my psychological makeup.
And they come from somewhere.
I don't know where they come from.
So you can take that perspective.
But there's a third perspective that I think is really interesting and useful.
And the third perspective is that of the memory itself.
If you take a perspective of the memory, so what is a memory?
It is a pattern.
It is an informational pattern that was continuously reinforced within one cognitive system.
And now here I am, I'm this memory.
What do I need to do to persist into the future?
Well, now I'm facing the paradox of change.
If I try to remain the same, I'm gone.
There's no way the butterfly is going to retain me in the original form that I'm in now.
What I need to do is change, adapt, and morph.
Now, you might say, well, that's kind of crazy.
Well, how can you take the perspective of a pattern within an excitable medium, right?
Agents are physical things.
You're talking about information, right?
So, so let me let me tell you another quick science fiction story.
Imagine that some creatures come out from the center of the earth.
They live down in the core.
They're super dense.
Okay.
They're incredibly dense because they live down in the core.
They have gamma-ray vision, you know, and so on.
So they come out to the surface.
What do they see?
Well, all of this stuff that we're seeing here, this is like a thin plasma to them.
They are so dense, none of this is solid to them.
They don't see any of this stuff.
So they're walking around, you know, the planet is sort of covered in this thin gas, you know.
And one of them is a scientist and he's taking measurements of the gas.
And he says to the others, you know, I've been watching this gas and they're like little whirlpools in this gas.
And they almost look like agents.
They almost look like they're doing things.
They're moving around.
They kind of hold themselves together for a little bit and they're trying to make stuff happen.
And the others say, well, that's crazy.
Patterns in the gas can't be agents.
We are agents.
We're solid.
This is just patterns in an excitable medium.
And by the way, how long do they hold together?
He says, well, about 100 years.
Well, that's crazy.
Nothing, you know, no real agent can exist that dissipates that fast.
Okay, we are all metabolic patterns, among other things, right?
And so, you see what I'm warming up to here.
So one of the things that we've been trying to dissolve, and this is like some work that I've done with Chris Fields and others, is this distinction between thoughts and thinkers.
So all agents are patterns within some excitable medium.
We could talk about what that is.
And they can spawn off others.
And now you can have a really interesting spectrum.
Here's the spectrum.
You can have fleeting thoughts, which are like waves in the ocean when you throw a rock in.
They sort of go through the excitable medium and then they're gone.
They pass through and they're gone, right?
So those are kind of fleeting thoughts.
Then you can have patterns that have a degree of persistence.
So they might be hurricanes or solitons or persistent thoughts or earworms or depressive thoughts.
Those are harder to get rid of.
They stick around for a little while.
They often do a little bit of niche construction.
So they change the actual brain to make it easier to have more of those thoughts, right?
Like that's a thing.
And so they stay around longer.
Now, what's further than that?
Well, personality fragments in dissociative identity disorder, they're more stable and they're not just on autopilot.
They have goals and they can do things.
And then past that is a full-blown human personality.
And who the hell knows what's past that?
Maybe some sort of transhuman, you know, transpersonal, like, I don't know, right?
But, but this idea, again, I'm back to this notion of a spectrum.
There is not a sharp distinction between, you know, we are real agents and then we have these thoughts.
Yeah, patterns can be agents too.
But again, you don't know until you do the experiment.
So if you want to know whether a soliton or a hurricane or a thought within a cognitive system is its own agent, do the experiment.
See what it can do.
Can it learn from experience?
Does it have memories?
Does it have goal states?
Does it, you know, what can it do, right?
Does it have language?
So, so coming back to then, your original question, yeah, we can definitely apply this methodology to ideas and concepts and social whatevers, but you've got to do the experiment.
That's such a challenging thought experiment of like thinking about memories from the caterpillar to the butterfly as an organism.
I think at the very basic level, intuitively, we think of organisms as hardware.
Yeah.
And software is not possibly being able to be organisms.
But what you're saying is that it's all just patterns in an excitable medium.
And it doesn't really matter what the pattern is or what the excitable medium is.
We need to do the testing: how persistent is it?
How goal-oriented is it?
And there's certain kinds of tests to do that.
And you can apply that to memories.
You can apply that to ideas.
You can apply that to anything really.
I mean, you could probably think about like consciousness.
There's really no boundary to what you can imagine.
Probably really, really wild things could be minds.
Yeah, stay tuned.
I mean, this is exactly what we're doing.
We're getting progressively more and more unconventional.
I mean, this whole distinction between software and hardware, I think it's a super important concept to think about.
And yet, the way we've mapped it onto the world, I would like to blow that up in the following way.
And again, I want to point out, I'll tell you what the practical consequences are, because these are not just fun stories that we tell each other.
These have really important research implications.
Think about a Turing machine.
So, one thing you can say is the machine's the agent, it has passive data, and it operates on the data.
And that's it.
The story of agency is the story of whatever that machine can and can't do.
The data is passive and it moves it around.
You can tell the opposite story.
You can say, Look, the patterns on the data are the agent.
The machine is a stigmergic scratch pad for the patterns in the data doing what data does.
The machine is just the consequence, the scratch pad, of the data working itself out.
And both of those stories make sense depending on what you're trying to do.
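A minimal Turing machine makes the two readings concrete (the little unary-increment machine below is a made-up example): you can narrate the run as the head operating on passive tape data, or as the tape pattern working itself out through the head.

```python
def run(tape, rules, state="A", head=0, max_steps=100):
    """Simulate a tiny Turing machine; tape cells default to 0 (blank)."""
    tape = dict(enumerate(tape))
    for _ in range(max_steps):
        if state == "HALT":
            break
        symbol = tape.get(head, 0)
        write, move, state = rules[(state, symbol)]
        tape[head] = write
        head += move
    return [tape[i] for i in sorted(tape)]

rules = {  # (state, read) -> (write, head move, next state)
    ("A", 1): (1, +1, "A"),     # scan right across the block of 1s
    ("A", 0): (1, 0, "HALT"),   # extend the block by one and halt
}
print(run([1, 1, 1], rules))    # [1, 1, 1, 1]: unary increment
```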
Here's the biomedical side of things.
So, our program in bioelectrics and aging, okay?
One model you could have is the physical organism is the agent, and the cellular collective has pattern memories, specifically what I was saying before, goals, anatomical goals.
If you want to persist for 100 plus years, your cells better remember what your correct shape is and where the new cells go, right?
So, there are these pattern memories that exist during embryogenesis, during regeneration, during resistance to aging.
We can see them, we can visualize them.
One thing you can imagine is: fine, the physical body, the cells are the agent.
The electrical pattern memories are just data.
And what might happen during aging is that the data might get degraded.
They might get fuzzy.
And so, what we need to do is reinforce the data, reinforce the memories, reinforce the pattern memories.
That's one specific research program, and we're doing that.
But that's not the only research program.
Because the other thing you might imagine is that what if the patterns are the agent in exactly the same sense as we think in our brains, it's the patterns of electrophysiological computations, whatever else that is the agent, right?
And that what they're doing in the brain are the side effects of the patterns working themselves out.
And those side effects might be to fire off some muscles and some glands and some other things.
From that perspective, maybe what's actually happening is maybe the agent's finding it harder and harder to be embodied in the physical world.
Why?
Because the cells might get less responsive.
In other words, the cells are sluggish.
The patterns are fine.
They're having a harder time making the cells do what they need to do.
And that maybe what you need to do is not reinforce the memories.
Maybe what you need to do is make the cells more responsive to them.
And that is a different research agenda, which we are also doing.
We have evidence for that as well, actually, now.
And then we published it recently.
And so my point here is: when we tell these crazy sci-fi stories, the only worth to them, and the only reason I'm talking about them now, when up until a year ago I wasn't talking about this stuff, is that these are now actionable in terms of specific experimental research agendas that are heading to the clinic, I hope, in some of these biomedical approaches.
And so now here we can go beyond this and we can say, okay, so up until now, we've considered what are disease states?
Well, we know there's organic disease.
Something is physically broken.
We can see the tissue is breaking down.
There's this damage in the joint, you know, the liver is doing whatever we can see these things.
But what about disease states that are not physical states?
They're physiological states or informational states or cognitive problems.
So in other words, in all of these other spaces, you can start to ask: what's a barrier in gene expression space?
What's a local minimum that traps you in physiological state space?
And what is a stress pattern that keeps itself together, moves around the body, causes damage, tries to keep itself going, right?
What level of agency does it have?
This suggests an entirely different set of approaches to biomedicine.
And anybody who's, let's say, in the alternative medicine community is probably yelling at the screen saying, we've been saying this for hundreds of years.
And yeah, and I'm well aware, these are not, the ideas are not new.
What's new is being able to now take this and make them actionable and say, yeah, but we can image this now.
I can now actually see the bioelectric patterns and why they go here and not there.
And we have the tools that now hopefully will get us to therapeutics.
So this is very actionable stuff.
And it all leans on not assuming we know minds when we see them because we don't.
And we have to do experiments.
To return back to the software-hardware distinction, you're saying that we can see the software as the organism and the hardware as just the scratch pad, or you can see the hardware as the organism and the software as the thing that the hardware generates.
And in so doing, we can decrease the amount of importance we assign to something like the human brain, where it could be the activations, it could be the electrical signals that are the organisms, and the brain is the scratch pad.
And by saying scratch pad, I don't mean it's not important.
When we get to talking about the platonic space, we have to talk about how important the interface actually is.
The scratch pad isn't unimportant.
The scratch pad is critical.
It's just that my only point is that when we have these formalisms of software, of hardware, of other things, the way we map those formalisms onto the world is not obvious.
It's not given to us.
We get used to certain things, right?
But who's the hardware, who's the software, who's the agent, and who's the excitable medium is to be determined.
So this is the good place to talk about the increasingly radical, weird ideas that you've been writing about.
You've mentioned it a few times, the platonic space.
So, there's this ingressing minds paper where you describe the platonic space.
You mentioned there's an asynchronous conference happening, which is a fascinating concept because it's asynchronous.
People are just contributing asynchronously.
So, what happened was this crazy notion, which I'll describe momentarily, I have given a couple talks on it.
I then found a couple papers in the machine learning community called the Platonic Representation Hypothesis.
I said, that's pretty cool.
These guys are climbing up to the same point where I'm getting at it from biology and philosophy and whatever.
They're getting there from computer science and machine learning.
We'll take a couple hours.
I'll give a talk.
They'll give a talk.
We'll talk about it.
I thought there were going to be three talks at this thing.
Once I started reaching out to people for this, everybody sort of said, you know, I know somebody who's really into this stuff, but they never talk about it because there's no audience for this.
So I reached out to them.
And then they said, yeah, oh, yeah, I know this mathematician or I know this, you know, economist, whatever, who has these ideas, and there's no way we can ever talk about them.
So I got this whole list, and it became completely obvious that we can't do this in a normal, you know, single session; we're now booked up through December.
So every week in our center, somebody gives a talk.
We kind of discuss it.
It all goes on this thing.
I'll give you a link to it.
And then there's a huge running discussion after that.
And then in the end, we're all going to get together for an actual real-time discussion section and talk about it.
But there's going to be probably 15 or so talks about this from all kinds of disciplines.
It's blown up in a way where I didn't realize how much of an undercurrent of these ideas already existed, ready to go.
Like now is the time.
And I think this is like I've been thinking about these things for, I don't know, 30 plus years.
I never talked about them before because they weren't actionable before.
There wasn't a way to actually make empirical progress with this until now.
You know, this is something that Pythagoras and Plato and probably many people before them talked about.
But now we're to the point where we can actually do experiments and they're making a difference in our research program.
You can just look it up: the Platonic Space Conference.
There's a bunch of different fascinating talks.
Yours first, on the patterns of form and behavior beyond emergence; then radical Platonism and radical empiricism from Joe Dietz; patterns and explanatory gaps in psychotherapy; Does God Play Dice from Alexei Tolchinsky; and so on.
So let's talk about it.
What is it?
And it's fascinating that the origins of some of these ideas are connected to ML people thinking about representation space.
Yeah.
The first thing I want to say is that while I'm currently calling it the Platonic space, I'm in no way trying to stick close to the things that Plato actually thought about.
In fact, to whatever extent we even know what that is, I think I depart from it in quite a few ways.
And I'm going to have to change the name at some point.
The reason I'm using the name now is because I wanted to be clear about a particular connection to mathematics, which a lot of mathematicians would call themselves Platonists, because what they think they're doing is discovering, not inventing as a human construction, but discovering a structured ordered space of truths.
Let's put it this way.
In biology, as in physics, there's something very curious that happens that if you keep asking why, then something interesting goes on.
Well, I'll give you two examples.
First of all, imagine cicadas.
So the cicadas come out at 13 years and 17 years.
And so if you're a biologist, then you say, so why is that?
And then you get this explanation for, well, it's because they're trying to be off-cycle from their predators.
Because if it was 12 years, then every two-year, every three-year, every four-year, every six-year predator would eat you when you come out, right?
So, and you say, okay, okay, cool.
That makes sense.
What's special about 13 and 17?
Oh, they're prime.
Uh-huh.
And why are they prime?
Well, now you're in the math department.
You're no longer in the biology department.
You're no longer in the physics department.
You're now in the math department to understand why the distribution of primes is what it is.
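A back-of-envelope check of that story (the thousand-year window and the two- to six-year predator cycles are assumptions for illustration): count how often each candidate life cycle coincides with a predator emergence, and the prime cycles come out ahead.

```python
def coincidences(cycle, predators=range(2, 7), years=1000):
    """How many times a 'cycle'-year cicada meets a p-year predator."""
    emergences = range(cycle, years + 1, cycle)
    return sum(1 for y in emergences for p in predators if y % p == 0)

for cycle in range(10, 19):
    print(cycle, coincidences(cycle))
# 12-year cycles collide constantly (12 is divisible by 2, 3, 4, and 6),
# while 13 and 17 collide least: the "why prime?" part is pure math.
```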
Another example, and I'm not a physicist, but what I see is every time you talk to a physicist and you say, hey, why do the leptons do this or that?
Or the fermions are doing whatever?
Eventually the answer is, oh, because there's this mathematical, this SU8 group or whatever the heck it is, and it has certain symmetries and these certain structures.
Yeah, great.
Once again, you're in the math department.
So something interesting happens is that there are facts that you come across.
Many of them are very surprising.
You don't get to design them.
You get more out than you put in in a certain way because you make very minimal assumptions and then certain facts are thrust upon you.
For example, the value of Feigenbaum's constant, or the value of the natural logarithm base e; these things you sort of discover, right?
And the salient fact is this.
If those facts were different, then biology and physics would be different, right?
So they matter.
They impact instructively, functionally, they impact the physical world.
If the distribution of primes was something else, well, then that's okay.
The cicadas would have been coming out at different times.
But the reverse isn't true.
What I mean is there is nothing you can do in the physical world, as far as I know, to change e or to change Feigenbaum's constant.
You could have swapped out all the constants at the Big Bang, right?
You can change all the different things.
You are not going to change those things.
So this, I think, Plato and Pythagoras understood very clearly that there is a set of truths which impact the physical world, but they themselves are not defined by and determined by what happens in the physical world.
You can't change them by things you do in the physical world, right?
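A sketch of what being thrust upon you looks like in practice: the code below "discovers" Feigenbaum's constant from the logistic map by locating the superstable period-doubling parameters with Newton's method (a standard numerical approach; the period-1 and period-2 seeds are exact). Nothing in the code lets you choose the value that comes out.

```python
def superstable_r(n, r_guess):
    """Newton's method in r for f_r^(2^n)(0.5) = 0.5, with f(x) = r x (1 - x)."""
    r = r_guess
    for _ in range(50):
        x, dx = 0.5, 0.0                  # dx tracks d(x)/dr along the orbit
        for _ in range(2 ** n):
            dx = x * (1 - x) + r * (1 - 2 * x) * dx
            x = r * x * (1 - x)
        r -= (x - 0.5) / dx               # Newton step toward the superstable r
    return r

rs = [2.0, 1 + 5 ** 0.5]                  # exact superstable r for periods 1 and 2
for n in range(2, 9):
    guess = rs[-1] + (rs[-1] - rs[-2]) / 4.7    # extrapolated starting guess
    rs.append(superstable_r(n, guess))
for n in range(2, 9):
    delta = (rs[n - 1] - rs[n - 2]) / (rs[n] - rs[n - 1])
    print(n, round(delta, 4))             # converges to 4.6692..., Feigenbaum's delta
```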
And so I'll make a couple of claims about that.
One claim is, I think we call physics those things that are constrained by those patterns.
When you say, hey, why is this the way it is?
Ah, it's because this is how symmetries are or topology or whatever.
Biology are the things that are enabled by those.
They're free lunches.
Biology exploits these kinds of truths.
And really, it enables biology and evolution to do amazing things without having to pay for it.
I think there's a lot of free lunches going on here.
And so I show you a xenobot or an anthrobot.
And I say, hey, look, here are some amazing things they're doing that tissue has never done before in their history.
You say, first of all, where did that come from?
And when did we pay the computational cost for it?
Because we know when we paid the computational cost to design a frog or a human, it was for the eons that the genome was bashing against the environment getting selected, right?
So you paid the computational cost of that.
There's never been any anthrobots.
There's never been any xenobots.
When do we pay the computational cost for designing kinematic self-replication and all these things that they're able to do?
So there's two things people say.
One is, well, you sort of got it at the same time that they were being selected to be good humans and good frogs.
Now, the problem with that is it kind of undermines the point of evolution.
The point of evolutionary theory was to have a very tight specificity between how you are now and the history of selection that got you here, right?
The history of environments that got you to this point.
If you say, yeah, okay, so this is what your environmental history was.
And by the way, you got something completely different.
You got these other skills that you didn't know about.
That's really strange, right?
And so then what people say is, well, it's emergent.
And they say, what's that?
What does that mean?
I mean, besides the fact that you got surprised, right?
Emergence often just means I didn't see it coming.
Something happened.
I didn't know that was going to happen.
So what does it mean that it's emergent?
And people say, well, there are many emergent things like this.
For example, the fact that gene regulatory networks can do associative learning.
That's amazing.
And you don't need evolution for that.
Even random genetic regulatory networks can do associative learning.
I say, why does that happen?
And they say, well, it's just a fact that holds in the world, just a fact that holds.
So now you have an option.
You can go one of two ways.
You can either say, okay, look, I like my sparse ontology.
I don't want to think about weird platonic spaces.
I'm a physicalist.
I want the physical world, nothing more.
So, what we're going to do is, when we come across these crazy things that are very specific, like, you know, anthrobots have four specific behaviors that they switch around.
Why four?
Why not 12?
Like, why four?
When we come across these things, just like when we come across the value of E or Feigenbaum's number or whatever, what we're going to do is we're going to write it down in our big book of emergence.
And that's it.
We're just going to have to live with it.
This is what happens.
We're just, you know, there's some cool surprises.
You know, when we come across them, we're going to write them down.
Great.
It's a random grab bag of stuff.
And when we come across them, we'll write them down.
That's one.
The upside is you get to be a physicalist, then you get to keep your sparse ontology.
The downside is I find it incredibly pessimistic and mysterious because you're basically then just willing to make a catalog of these amazing patterns.
Why not instead?
And this is why I started with this Platonic terminology.
Why not do what the mathematicians already do?
A huge number of them say: we are going to make the same optimistic assumption that science makes, that there's an underlying structure to that latent space.
It's not a random grab bag of stuff.
There's a structure to the space where these patterns come from.
And by studying them systematically, we can get from one to another.
We can map out the space.
We can find out the relationships between them.
We can get an idea of what's in that space.
And we're not going to assume that it's just random.
We're going to assume there's some kind of structure to it.
And you'll see all kinds of people, I mean, you know, well-known mathematicians who talk about this stuff, Penrose and lots of other people, who will say that, yeah, there's another space beyond the physical one, and it has structure, it has components to it, and so on.
We can traverse that space in various ways.
And then there's the physical space.
So I find that much more appealing because it suggests a research program, which we are now undergoing in our lab.
The research program is everything that we make, cells, embryos, robots, biobots, language models, simple machines, all of it.
They are interfaces.
All physical things are interfaces to these patterns.
You build an interface.
Some of those patterns are going to come through that interface.
Depending on what you build, some patterns versus others are going to come through.
The research program is mapping out that relationship between the physical pointers that we make and the patterns that come through it, right?
Understanding what is the structure of that space, what exists in that space, and what do I need to make physically to make certain patterns come through.
Now, when I say patterns, now we have to ask what kinds of things live in that space.
Well, the mathematicians will tell you, well, we already know.
We have a whole list of objects, you know, the amplituhedron and, you know, all this crazy stuff that lives in that space.
Yeah, I think that's one layer of stuff that lives in that space.
But I think those patterns are the lower agency kinds of things that are basically studied by mathematicians.
What also lives in that space are much more active, more complex, higher agency patterns that we recognize as kinds of minds.
That behavioral scientists would look at that pattern and say, well, I know what that is.
That's the competency for delayed gratification or problem solving of certain kinds or whatever.
And so what I end up with right now is a model in which that latent space contains things that come through simple physical objects, simple patterns, right?
So facts about triangles and Fibonacci patterns and fractals and things like that.
But also, if you make more complex interfaces, such as biologicals, and importantly not just any biologicals, but let's say cells and embryos and tissues, what you will then pull down is much more complex patterns that we say, ah, that's a mind, that's a human mind, or that's a snake mind or whatever.
So I think the mind-brain relationship is exactly the kind of thing that the math-physics relationship is.
That in some very interesting way, there are truths of mathematics that become embodied and they kind of haunt physical objects, right?
In a very specific functional way.
And in the exact same way, there are other patterns that are much more complex, higher-agency patterns, that basically inform, in-form, living things that we see as obvious embodied minds.
Okay, given how weird and complicated what we're describing is, we'll talk about it more, but you've got to ELI5 the basics to a person who has never seen this.
So again, you mentioned things like pointers.
So the physical objects themselves, or the brain, are pointers to that platonic space.
What is in that platonic space?
What is the platonic space?
What is the embodiment?
What is the pointer?
Yeah.
Okay.
Let's try it this way.
There are certain facts of mathematics.
So the distribution of prime numbers, right?
That if you map them out, they make these nice spirals.
And there's an image that I often show, which is a very particular kind of fractal.
And that fractal is a Halley map, and it's pretty awesome that it actually looks very organic.
It looks very biological.
So if you look at that thing, that image, which has very specific complex structure, it's a map of a very compact mathematical object.
That formula is like, you know, z cubed plus seven.
It's something like that.
That's it.
So now you look at that structure and you say, where does that actually come from?
It's definitely not packed into the z cubed plus seven.
There's not enough bits in that to give you all of that.
There's no fact of physics that determines this.
There's no evolutionary history.
It's not like we selected this based on some, you know, from a larger set over time.
Where does this come from?
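A sketch of that "more out than you put in" point, using Halley's root-finding iteration on p(z) = z^3 + 7 as a stand-in for the half-remembered formula: the intricate basin boundaries the iteration draws are nowhere stored in those few symbols.

```python
def halley_basin(z, steps=40):
    """Run Halley's iteration from z; return which root of z^3 + 7 it lands on."""
    for _ in range(steps):
        p, dp, ddp = z**3 + 7, 3 * z**2, 6 * z
        denom = 2 * dp * dp - p * ddp
        if abs(denom) < 1e-12:
            break
        z = z - 2 * p * dp / denom        # Halley's iteration step
        if abs(z**3 + 7) < 1e-9:          # converged to one of the three roots
            return round(z.real, 3), round(z.imag, 3)
    return None

width, height = 60, 30                    # coarse ASCII rendering of the basins
roots, chars = {}, "abc"
for row in range(height):
    line = ""
    for col in range(width):
        z = complex(-3 + 6 * col / width, -3 + 6 * row / height)
        key = halley_basin(z)
        line += chars[roots.setdefault(key, len(roots)) % 3] if key else " "
    print(line)
```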
Or think about the way that biology exploits these things.
Imagine a world in which the highest fitness belonged to a certain kind of triangle, right?
So evolution cranks a bunch of generations and it gets the first angle right, and cranks a bunch more generations, gets the second angle right.
Now there's something amazing that happens.
It doesn't need to look for the third angle because you already know.
If you know two, you get this magical free gift from geometry that says, well, I already know what the third one should be.
You don't have to go look for it.
Or as evolution, if you invent a voltage-gated ion channel, which is basically a transistor, and you can make a logic gate, then all the truth tables and the fact that NAND is special and all these other things, you don't have to evolve those things.
You get those for free.
You inherit those.
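Both free lunches in miniature (trivial sketches, not from the conversation): two angles hand you the third, and one NAND gate hands you every other Boolean gate, truth tables included.

```python
def third_angle(a, b):
    return 180 - a - b                  # the geometric gift: no search needed

print(third_angle(60, 70))              # 50

def NAND(x, y): return 1 - (x & y)      # the one gate you "evolve"
def NOT(x):     return NAND(x, x)       # everything below comes for free
def AND(x, y):  return NOT(NAND(x, y))
def OR(x, y):   return NAND(NOT(x), NOT(y))
def XOR(x, y):  return AND(OR(x, y), NAND(x, y))

for x in (0, 1):
    for y in (0, 1):
        print(x, y, "->", AND(x, y), OR(x, y), XOR(x, y))
```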
Where do all those things live?
These mathematical truths that you come across that you don't have any choice about.
Once you've committed to certain axioms, there's a whole bunch of other stuff that is now just, it is what it is.
And so what I'm saying is, and this is what Pythagoras was saying, I think, that there is a whole space of these kinds of truths.
Now, he was focused on mathematical ones, but he was embodying them in music and in geometry and then things like that.
There are this space of patterns and they make a difference in the physical world to machines, to sound, to things like that.
I'm extending that.
So far, we've only been looking at the low-agency inhabitants of that world.
There are other patterns that we would recognize as kinds of minds, and you don't see them in this space until there's an interface, until there's a way for them to come through into the physical world.
That interface: the same way that you have to make a triangular object before you can actually see what you're going to gain from the rules of geometry, or you have to actually do the computation on the fractal before you actually see that pattern, if you want to see some of those minds, you have to build an interface, right?
At least if you're going to interact with them in the physical world, the way we normally do science.
As Darwin said, mathematicians seem to have an extra sense, like a different sense than the rest of us.
And so that's right.
Mathematicians can perhaps interact with these patterns directly in that space.
But for the rest of us, we have to make interfaces.
And when we make interfaces, which might be cells or robots or embryos or whatever, what we are pulling down are minds that are fundamentally not produced by physics.
So I don't believe that.
I don't know if we're going to get into the whole consciousness thing, but I don't believe that we create consciousness, whether we make babies or whether we make robots.
Nobody's creating consciousness.
What you create is an interface, a physical interface, through which specific patterns, which we call kinds of minds, are going to ingress, right?
And consciousness is what it looks like from that direction looking out into the world.
It's what we call the view from the perspective of the Platonic patterns.
Just to clarify, what you're saying is a pretty radical idea here.
So if there's a mapping from mathematics to physics, okay, that's understandable, intuitive, as you've described.
But what you're suggesting is there's a mapping from some kind of abstract mind object to an embodied brain that we think of as a mind.
Yeah.
As fellow humans.
What is that?
What exactly?
Because you said interface.
You've also said pointer.
So the brain, and I think you said somewhere, is a thin interface.
A thin client.
Yeah.
The brain.
The brain is a thin client.
Yeah.
Thin client.
Okay.
So you're saying the brain is a thin client to this other world.
Yeah.
Can you just lay out very clearly how radical the idea is?
Sure.
Because you're kind of dancing around.
I think you could also point to Donald Hoffman, who speaks of an interface to a world.
So we only interact with the quote-unquote real world through an interface.
What is the connection here?
Yeah.
Okay.
A couple of things.
First of all, when you said it makes sense for physics, I want to show that it's not as simple as it sounds.
Because what it means is that even in Newton's boring sort of classical universe, long before quantum anything, physicalism was already dead in Newton's world.
I mean, think about what that means.
This is nuts.
Because already he knew perfectly well, I mean, Pythagoras and Plato knew, that even in a totally classical deterministic world, you already have the ingression of information that determines what happens, what's possible and what's not possible in that world, from a space that is itself not physical.
In other words, take something like the natural logarithm base e, right?
Nothing in Newton's world sets the value of e. There is nothing you could do to set the value of e in that world.
And yet, the fact that it was that and not something else governed all sorts of properties of things that happened.
That classical world was already haunted by patterns from outside that world.
This should be like, this is wild.
This is not saying that, okay, everything was cool.
Physicalism was great up until maybe we got quantum interfaces or we got consciousness or whatever, but originally it was fine.
No, this is saying that that worldview was already impossible, really.
So from a very long time ago, we already knew that there are non-physical properties that matter in the physical world.
This is a chicken or the egg question.
You're saying Newton's laws are creating the physical world?
That is a very deep follow-on question that we'll come back to in a minute.
All I was saying about Newton is that you don't need quantum anything.
You don't need to think about consciousness.
You already, long before you get to any of that, as Pythagoras, I think, knew, already we have the idea that this physical world is being strongly impacted by truths that do not live in the physical world.
Which truths are we referring to?
Are we talking about Newton's laws?
Like mathematical equations?
Like mathematical facts.
So, for example, the actual value of e.
Oh, like very primitive mathematical facts.
Yeah.
Yeah.
I mean, some of them are, you know... I mean, if you ask Don Hoffman, there's this amplituhedron thing that is a set of mathematical objects that determines all the scattering amplitudes of the particles and whatever.
They don't have to be simple.
I mean, the old ones were simple.
Now they're like crazy.
I can't imagine this amplituhedron thing, but maybe they can.
But all of these are mathematical structures that explain and determine facts about the physical world, right?
If you ask physicists, hey, why, you know, this many of this type of particle, ah, because this mathematical thing has these symmetries.
That's why.
So Newton is discovering these things.
He's not inventing.
This is very controversial, right?
And there are, of course, physicists and mathematicians who disagree with what I'm saying for sure.
But what I'm leaning on is simply this.
I don't know of anything you can do in the physical world.
You're around at the Big Bang.
You get to set all the constants.
Set physics however you want.
Can you change e?
Can you change Feigenbaum's constant?
I don't think you can.
Is that an obvious statement?
I don't even know what it means to change the parameters at the start of the Big Bang.
So physicists do this.
They'll say, okay, you know, if we made the ratio between the gravitation and the electromagnetic force different, would we have matter?
How many dimensions would we have?
Would there be inflation?
Would there be this or that?
You can imagine playing with it.
There are however many unitless constants of physics.
These are kind of like the knobs on the universe that could, in theory, be different.
And then you'd have different physical properties.
You're saying that's not going to change the axiomatic systems that mathematics has.
What I'm not saying is that every alien everywhere is going to have the exact same math that we have.
That's not what I'm claiming, although maybe, but that's not what I'm claiming.
What I'm saying is you get more out than you put in.
Once you've made a choice, and maybe some alien somewhere made a different choice of how they're going to do their math, but once you've made your choice, then you get saddled with a whole bunch of new truths that you discover that you can't do anything about.
They are given to you from somewhere, and you can say they're random, or you can say, no, there's a space of these facts that they're pulled from.
There's a latent space of options that they come from.
So when e is exactly 2.718 and so on, there is nothing you can do in physics to change it.
And you're saying that space is immutable.
I'm not saying it's immutable.
So I think Plato may or may not have thought that these forms are eternal and unchanging.
That's one place we differ.
I actually think that space has some action to it, maybe even some computation to it.
But we're just pointers.
I'll circle back around to that whole thing.
So the only thing I was trying to do is blow up the idea that we're cool with how it works in physics, no problem there.
I think that's a much bigger deal than people normally think it is.
I think already there, you have this weird haunting of the physical world by patterns that are not coming from the physical world.
The reason I emphasize this is because when I amplify this into biology, I don't think it jumps in as a brand-new thing.
I think what we call biology is those systems that exploit the hell out of it.
I think physics is constrained by it, but we call biology those things that make use of those kinds of things and run with it.
And so again, I just think it's a scaling.
I don't think it's a brand new thing that happens.
I think it's a scaling, right?
So what I'm saying is, we already know from physics that there are non-physical patterns, and these are generally patterns of form, which is why I call them low agency, because they're like fractals that stand still and they're like prime number distributions.
Although there's a mathematician speaking in our symposium who tells me that actually I'm too chauvinistic even there.
Actually, even those things have more oomph than I gave them credit for, which I love.
So what I'm saying is those kind of static patterns are things that we typically see in physics, but they're not the full extent of what lives in that space.
That space is also home to some patterns that are very high agency.
And if we give them a body, if we build a body that they can inhabit, then we get to see different behavioral competencies that the behavior scientists say, oh, I know what that looks like.
That's this kind of behavioral, you know, this kind of mind or that kind of mind.
In a certain sense, I mean, yes, what I'm saying is extremely radical, but it is a very old idea.
It's an old idea of a dualistic worldview, right?
Where the mind was not in the physical body and that it in some way interacted with the physical brain.
So I just want to be clear, I'm not claiming that this is fundamentally a new idea.
This has been around for forever.
However, it's mostly been discredited.
And it's a very unpopular view nowadays.
There are very few people in the, for example, cognitive science community or anywhere else in science that like this kind of view.
Primarily, and Descartes was already getting crap for this when he first tried it out, there is this interaction problem, right?
So the idea was, okay, well, if you have this non-physical mind and then you have this brain that presumably obeys conservation of mass-energy and things like that, how are they supposed to, you know, interact?
There are many other problems there.
So what I'm trying to point out is that, first of all, physics already had this problem.
You didn't have to wait till you had biology and cognitive science to ask about it.
And what I think is happening in the way we need to think about this is coming back to my point that I think the mind-brain relationship is basically of the same kind as the math-physics relationship.
The same way that non-physical facts of physics haunt physical objects is basically how I think different kinds of patterns that we call kinds of minds are manifesting through interfaces like brains.
How do we prove or disprove the existence of that world?
Because it's a pretty radical one.
Because this physical world we can poke.
It's there.
It feels like all the incredible things like consciousness and cognition and all the goal-oriented behavior and agency all seems to come from this 3D entity.
Yeah, I mean.
And so like we can test it, we can poke it, we can hit it with a stick.
Yeah, sort of.
Sort of.
I mean, the one, so Descartes got some stuff wrong.
I think the one thing that he did get right, the fact that actually you don't know what you can poke and what you can't poke.
The only thing you actually know is the contents of your own mind, and everything else, as we in fact know from Anil Seth and Don Hoffman and various other people, is definitely a construct.
You might be on drugs, and you might wake up tomorrow and say, my God, I had the craziest dream of being Lex Fridman. Amazing.
It's a nightmare.
Yeah, who knows?
It's a ride.
Right.
But you see, it's not clear at all that the physical poking is your primary reality.
That's not clear to me at all.
I don't know.
That's not an obvious thing that a lot of people can show is true, to take that step to the Descartes:
I think, therefore, I am.
That's the only thing you know for sure.
And everything else could be an illusion or a dream.
That's already a leap.
I think, from a basic caveman science perspective, the repeatable experiment is where most of intelligence comes from here.
The reality is exactly as it is.
To take a step towards the Donald Hoffman worldview takes a lot of guts and imagination and stripping away of the ego and all these kinds of processes.
I think you can get there more easily by synthetic bioengineering in the following sense.
Do you feel a lack of x-ray perception?
Do you feel blind in the X-ray spectrum or in the ultraviolet?
I mean, you don't.
You have absolutely no clue that stuff is there.
And all of your reality as you see it is shaped by your evolutionary history.
It's shaped by the cognitive structure that you have, right?
There are tons of stuff going on around us right now of which we are completely oblivious.
There's equally all kinds of other stuff which we construct.
And this is just modern cognitive science that says that a lot of what we think is going on is a total fabrication constructed by us.
So I think this is not a, I don't think this is a philosophy.
I mean, Descartes got there from a philosophical point.
That's not what I'm, that's not the leap I'm asking us to make.
I'm saying that depending on your embodiment, depending on your interface, and this is incredible, this is increasingly going to be more relevant as we make first augmented humans that have sensory substitution.
You're going to be walking around.
Your friends are going to be like, oh man, I have this primary perception of the solar weather and the stock market because I got those implants.
And what do you see?
Well, I see the traffic of the internet through the trans-Pacific channel.
We're all going to be living in somewhat different worlds.
That's the first thing.
The second thing is we're going to become better attuned to other beings, whether they be cells, tissues.
What's it like to be a cell living in a 20,000-dimensional transcriptional space?
To novel beings that have never been here before, that have all kinds of crazy spaces that they live in.
And that might be AIs, it might be cyborgs, it might be hybrids, it might be all sorts of things.
So, this idea that we have a consensus reality here that's independent of some very specifically chosen aspects of our brain and our interaction, we're going to have to give that up no matter what to relate to these other beings.
I think the tension is, absolutely, this idea you're talking about, which I think you've termed cognitive prosthetics: different ways of perceiving and interacting with the world.
But I guess the question is: is our human experience, the direct human experience, is that just a slice of the real world, or is it a pointer to a different world?
That's what I'm trying to figure out.
Because the claim you're making is a really fascinating one, compelling one.
It's a pretty strong one: that there's another world that our brain is an interface to, which means you could theoretically map that world systematically.
Yeah, which is exactly what we're trying to do.
I mean, right.
But it's not clear that that world exists.
Yeah, yeah.
Okay.
I mean, so, so that's the beautiful part about this.
And this is why I'm talking about this now: up until about a year ago, I was never talking about this, because I think this is now actionable.
So there's this diagram that's called the map of mathematics, and they basically try to show how all the different pieces of math link together.
And there's a bunch of different versions of it.
So there's two features to this.
One is that what is it a map of?
Well, it's a map of various truths.
It's a map of facts that are thrust on you.
You don't have a choice.
Once you've picked some axioms, you just, you know, here's some surprising facts that are just going to be given to you.
But the other key thing about this is that it has a metric.
It's not just a random heap of facts.
They are all connected to each other in a particular way.
They literally make a space.
And so, when I say it's a space of patterns, what I mean is it is not just a random bag of patterns such that when you have one pattern, you are no closer to finding any other pattern.
I'm saying that there's some kind of a metric to it so that when you find one, others are closer to it, and then you can get there.
So, that's the claim.
And obviously, this is not everybody buys this, and so on.
This is one idea.
Now, how do we know that this exists?
Well, I'll say a couple of things.
If that didn't exist, what is that a map of?
If there is no space, if you don't want to call it a space, that's okay, but you can't get away from the fact that as a matter of research, there are patterns that relate to each other in a particular way.
The final step of calling it a space is minimal.
The bigger issue is, what the hell is it a map of then if it's not a space?
So, that's the first thing.
Now, that's how it plays out, I think, in math and physics.
Now, in biology, here's how we're going to know if this makes any sense.
What we are doing now is trying to map out that space by saying, look, we know that the frog genome maps to one thing, and that's a frog.
Turns out that exact same genome, if you just take some cells out of that environment, can also make xenobots with very specific transcriptomes, very specific behaviors, very specific shapes.
It's not just, oh, they do whatever; they have very specific behaviors, just like the frog had very specific properties.
We can start to map out what all those are, right?
Basically, try to draw the latent space from which those things are pulled.
And one of two things is going to happen in the future.
And so, this is, you know, come back in 20 years and we'll see how this worked out.
One thing that could happen is that we're going to see, oh, yeah, just like the map of mathematics, we made a map of the space.
And we know now that if I want a system that acts like this and this, here's the kind of body I need to make for it because those are the patterns that exist.
The anthrobots have four different behaviors, not seven and not one.
And so that's what I can pull from.
These are the options I have.
Is it possible that there's varying degrees of grandeur to the space that you're thinking about mapping?
Meaning, it could be just like with the space of mathematics, might it strictly be just the space of biology versus a space of like minds, which feels like it could encompass a lot more than just biology?
Yeah, except that I don't see how it would be separate because I'm not just talking about an anatomical shape and transcriptional profile.
I'm also talking about behavioral competencies.
So, when we make something and we find out that, okay, it does habituation and sensitization, it does not do Pavlovian conditioning, it does do delayed gratification, and it doesn't have language.
That is a very specific cognitive profile.
That's a region of that space, and there's another region that looks different because I don't make a sharp distinction between biology and cognition.
If you want to explain behaviors, they are drawn from some distribution as well.
So, I think in 20 years or however long it's going to take, one of two things will happen.
Either we and other people who are working on this are going to actually produce a map of that space and say, here's why you've gotten systems that work like this and like this and like this, but you've never seen any that work like that, right?
Or we're going to find out that I'm wrong and that it's not worth calling it a space, because it is so random and so jumbled up that we've been able to make zero progress in linking the embodiments that we make to the patterns that come through.
Yeah, just to be clear, I mean, from your blog post on this, from the paper, I mean, we're talking about space that includes a lot of stuff.
Yeah, yeah.
It includes human, what is it, meditating?
Steve.
Hello, my name is Steve.
AI systems.
So all the space of computational systems, objects, biological systems, concepts, it includes everything.
Well, it includes specific patterns that we have given names to.
Some of those patterns we've named mathematical objects.
Some of those patterns we made, we've named anatomical outcomes.
Some of those patterns we've made psychological types.
So every entry in an encyclopedia, old school Britannica is a pointer to this space.
There is a set of things that I feel very strongly about because the research is telling us that that's what's going on.
And then there's a bunch of other stuff that I see as hypotheses for next steps that guide experiment.
So what I'm about to tell you, I don't know, you know, these are things I don't actually know.
These are just guesses that, you know, you need to make some guesses to make progress.
I don't know whether there are going to be specific Platonic patterns for this is the Titanic, and this is the sister ship of the Titanic, and this is some other kind of boat.
This is not what I'm saying.
What I'm saying is in some way that we absolutely need to work out, when we make minimal interfaces, we get more than we put in.
We get behaviors, we get shapes, we get mathematical truths and we get all kinds of patterns that we did not have to create.
We didn't micromanage them.
We didn't know they were coming.
We didn't have to put any effort into making them.
They come from some distribution that seems to exist that we don't have to create.
And exactly whether that space is sparse or dense, I don't know.
So for example, if there is some kind of a Platonic form for the movie The Godfather, if it's surrounded by a bunch of crappy versions and then crappier versions still, I have no idea, right?
I don't know if the space is sparse or not.
I don't know if it's finite or infinite.
These are all things I don't know.
What I do know is that it seems like physics, and for sure biology and cognition, are the beneficiaries of ingressions that are free lunches in some sense.
We did not make them.
Calling them emergent does nothing for a research program.
Okay.
That just means you got surprised.
I think it's much better if you say, if you make the optimistic assumption that they come from a structured space that we have a prayer in hell of actually exploring.
And if in some decades it turns out I'm wrong, we say, you know what, we tried.
It looks like it really is random, too bad.
Fine.
Is there a difference between, like, can we one day prove the existence of this world?
And is there a difference between it being a really effective model for connecting things, explaining things, versus like an actual place where the information about these distributions that we're sampling actually exists that we can hit with a stick?
Yeah, you can try to make that distinction.
But I think modern cognitive neuroscience will tell you that whatever you think this is, at most, it is a very effective model for predicting the future experiences you're going to have.
So all of this that we think of as physical reality is just a nice, convenient model.
I mean, that's not me.
That's predictive processing and active inference.
Like that's modern neuroscience telling you this, that this isn't anything that I'm particularly coming up with.
All I'm saying is the distinction, the distinction you're trying to make, which is like an old school, like realist, you know, kind of view that is it metaphorical or is it real?
All we have in science are metaphors, I think.
And the only question is how good are your metaphors?
And I think as agents living in a world, all we have are models of what we are and what the outside world is.
That's it.
And the question is, how good a model is it?
And my claim about this is in some small number of decades, this will either give rise to a very enabling mapping of the space for AI, for bioengineering, for biology, whatever, or we're going to find out that it really sucks because it really is a random grab bag of stuff.
And we tried the optimistic research program, it failed, and we're just going to have to live with surprise.
I mean, I doubt that's going to happen, but it's a possible outcome.
But do you think there is some place where the information is stored about these distributions that are being sampled to the thin interfaces?
Like actual place.
Place is weird because it isn't the same as our physical space-time.
Okay.
I don't think it's that.
So calling it a place is claiming a little bit more than I would.
Like physics: general relativity describes a space-time.
Could other physics-like theories describe this other space where the information is stored, with laws about information that are maybe different, but in the same spirit?
I definitely think they're going to be systematic laws.
I don't think they're going to look anything like physics.
You can call it physics if you want, but I think it's going to be so different that it probably breaks the word.
And whether information is going to survive that, I'm not sure.
But I definitely think that it's going to be, there are going to be laws.
But I think they're going to look a lot more like aspects of psychology and cognitive science than they're going to look like physics.
That's my guess.
So what does it look like to prove that world exists?
What it looks like is a successful research program that explains how you pull particular patterns when you need them and why some patterns come and others don't and show that they come from an ordered space.
Across a large number of organisms?
Well, it's not just organisms.
I mean, I think it's going to end up, and I mean, you can talk to the machine learning people about how they got to this point again, because this is not just me.
There are a bunch of different disciplines that are converging on this now simultaneously.
You're going to find, again, just like in mathematics, where from different directions, everybody sort of is looking at different things.
Oh, my God, this is one underlying structure that seems to inform all of this.
So in physics, in mathematics, in computer science, machine learning, possibly in economics, certainly in biology, possibly in cognitive science, we're going to find these structures.
It was already obvious in Pythagoras' time that there are these patterns.
The only remaining question is, are they part of an ordered, structured space?
And are we up to the task of mapping out the relationship between what we build and the patterns that come through it?
So from the machine learning perspective, is it then the case that even something as simple as LLMs are sneaking up onto this world?
That the representations that they form are sneaking up to it.
I've given this talk to some audiences, and especially in the organicist community, people like the first part where it's like, okay, now there's an idea for what the magic quote unquote is that's special about the living things and so on.
Now, if we could just stop there, we would have dumb machines that just do what the algorithm says, and we have these magical living interfaces that can be the recipient for these ingressions.
Cool, right?
We can cut up the world in this way.
Unfortunately, or fortunately, I think that's not the case.
And I think that even simple minimal computational models are to some extent beneficiaries of these free lunches.
I think that the theories we have, and this goes back to the thin-client interface idea, the theories we have of both physics and computation, so the theory of algorithms, Turing machines, all that good stuff, those are all good theories of the front-end interface.
And they're not complete theories of the whole thing.
They capture the front end, which is why they get surprised, which is why these things are surprising when they happen.
I think that when we see embryos of different species, we are pulling from well-trodden, familiar regions of that space, and we know what to expect, frog, you know, the snake, whatever.
When we make cyborgs and hybrids and biobots, we are pulling from new regions of that space that look a little weird and they're unexpected, but we can still kind of get our mind around them.
When we start making AIs, like proper AIs, we are now fishing in a region of that space that may never have had bodies before.
It may have never been embodied before.
And what we get from that is going to be extremely surprising.
And the final thing just to mention on that is that because of this, because of the inputs from this Platonic space, some of the really interesting things that artificial constructs can do are not because of the algorithm.
They're in spite of the algorithm.
They are filling up the spaces in between.
There's what the algorithm is forcing you to do, and then there's the other cool stuff it's doing, which is nowhere in the algorithm.
And if that's true, and we think it's true even of very minimal systems, then this whole business of language models and AIs in general, watching the language part may be a total red herring because the language is what we force them to do.
The question is, what else are they doing that we are not good at noticing?
And this is something that we are, I think, as a kind of existential step for humanity is to become better at this because we are not good at recognizing these things now.
You got to tell me more about this behavior that is observable that is unrelated to the explicitly stated goal of a particular algorithm.
So you looked at a simple algorithm of sorting.
Can you explain what was done?
Sure.
First, just the goal of the study.
There are two things that people generally assume.
One is that we have a pretty good intuition about what kind of systems are going to have competencies.
So from observing biologicals, we're not terribly surprised when biology does interesting things.
Everybody always says, well, it's biology.
Of course, it does all this cool stuff.
But we have these machines and the whole point of having machines and algorithms and so on is they do exactly what you tell them to do.
And people feel pretty strongly that that's a binary distinction, and that we can carve up the world that way.
So I wanted to do two things.
I wanted to, first of all, explore that and hopefully break the assumption that we're good at seeing this because I think we're not.
And I think it's extremely important that we understand very soon that we need to get much better at knowing when to expect these things.
And the other thing I wanted to do was to find out, you know, mostly people assume that you need a lot of complexity for this.
So when somebody says, well, the capabilities of my mind are not properly encompassed by the rules of biochemistry, everybody's like, yeah, that makes sense.
You're very complex and okay, your mind does things that you can't, you didn't see that coming from the rules of biochemistry, right?
Like we know that.
So mostly people think that has to do with complexity.
And what I would like to find out is, as part of understanding what kind of interfaces give rise to what kind of ingressions, is it really about complexity?
How much complexity do you actually need?
Is there some threshold after which this happens?
Is it really specific materials?
Is it biologicals?
Is it something about evolution?
Like, what is it about these kinds of things that allows this surprise, right?
Allows this idea that we are more than the sum of our parts.
And so, and I had a strong intuition that none of those things are actually required, that this kind of magic, so to speak, seeps into pretty much everything.
And so, to look at that, I wanted also to have an example that had significant shock value.
Because the thing with biology is there's always more mechanism to be discovered, right?
Like there's infinite depth of what the materials are doing.
Somebody will always say, well, there's a mechanism for that.
You just haven't found it yet.
So, I wanted an example that was simple, transparent, so you could see all the stuff.
There was nowhere to hide.
I wanted it to be deterministic because I don't want it to be something around unpredictability or stochasticity.
And I want it to be something familiar to people, minimal.
And I wanted to use it as a model system for honing our ability to take a new system and look at it with fresh eyes.
And that's because these sorting algorithms have been studied for over 60 years.
We all think we know what they do and what their properties are.
The algorithm itself is just a few lines of code.
You can see exactly what's there.
It's deterministic.
And that's why.
That's why, right?
I wanted the most shock value out of a system like that, if we were to find anything, and to use it as an example of taking something minimal and seeing what can be gotten out of it.
So I'll describe two interesting things about it.
And then we have lots of other work coming in the next year about even simpler systems.
I mean, it's actually crazy.
So the very first thing is this.
The standard sorting, so let's take bubble sort, right?
And all these sorting algorithms, you know, what you're starting out with is an array of jumbled up digits, okay?
So integers.
It's an array of mixed up integers.
And what the algorithm is designed to do is to eventually arrange them all into order.
And what it does generally is compare some pieces of that array.
And based on which one is larger than which, it swaps them around.
And you can imagine that if you just keep doing that and you just keep comparing and swapping, then eventually you can get all the digits in the same order.
So the first thing I decided to do, and this is the work of my student Taining Zhang and then Adam Goldstein on this paper.
This goes back to our original discussion about putting a barrier between it and its goals.
And the first thing I said, okay, how do we put a barrier in?
Well, how about this?
The traditional algorithm assumes that the hardware is working correctly.
So if you have a seven and then a five, the line of code says swap the five and the seven, and then you go on; you never check, did it actually swap?
Because you assume that it's reliable hardware.
Okay.
So what we decided to do was to break one of the digits so that it doesn't move.
When you tell it to move, it doesn't move.
We don't change the algorithm.
That's really key.
We do not put anything new in the algorithm that says, what do you do if the damn thing didn't move?
Just run it exactly the same way.
What happens?
Turns out something very interesting happens.
It still works.
It still sorts it.
But it eventually sorts it by moving all the stuff around the broken number.
That makes sense.
But here's something interesting.
Suppose that at any given moment we plot the degree of sortedness of the array as a function of time.
If you run the normal algorithm, it's guaranteed to get where it's going.
It's got to sort and it will always reach the end.
But when it encounters one of the broken digits, what happens is the actual sortedness goes down in order to then recoup and get better order later.
What it's able to do is temporarily go against the very thing it's trying to do, in order to meet its goal later on.
Now, if I showed this to a behavior scientist and didn't tell them what the system was, they would say, well, we know what this is.
This is delayed gratification.
This is the ability of a system to go against its gradient now in order to get where it needs to go later.
Now, imagine two magnets.
Imagine you take two magnets and you put a piece of wood between them and they're like this.
What the magnet is not going to do is to go around the barrier and get to its goal.
They're not smart enough to go against their gradient.
They're just going to keep doing this.
Some animals are smart enough, right?
They'll go around.
The sorting algorithm is smart enough to do that.
But the trick is there are no steps in the algorithm for doing that.
You could stare at the algorithm all day long.
You would not see that this thing can do delayed gratification.
It isn't there.
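For technical listeners, the setup can be sketched in a few lines of Python; this is a hedged illustration of the idea, not the code from the paper, and selection sort is used here because its non-adjacent swaps let values route around the frozen position.
```python
# A minimal sketch (not the paper's code): a sorting run with one "broken"
# position that silently refuses every swap. The algorithm is unchanged;
# it is never told that anything went wrong.

def sortedness(a):
    # fraction of adjacent pairs already in non-decreasing order
    return sum(a[i] <= a[i + 1] for i in range(len(a) - 1)) / (len(a) - 1)

def selection_sort_with_frozen(a, frozen):
    a, trace = list(a), []
    for i in range(len(a)):
        m = min(range(i, len(a)), key=lambda j: a[j])
        # the swap instruction is issued as usual; the "hardware" simply
        # fails silently whenever the broken position is involved
        if i != frozen and m != frozen:
            a[i], a[m] = a[m], a[i]
        trace.append(sortedness(a))
    return a, trace

arr, trace = selection_sort_with_frozen([5, 3, 7, 1, 4, 2, 6], frozen=2)
print(arr)    # [1, 2, 7, 3, 4, 5, 6]: everything ordered around the stuck 7
print(trace)  # sortedness over time; the paper reports transient dips in such runs
```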
Now, there's two ways to look at this.
On the one hand, the reductionist physics approach: you could say, did it follow all the steps in the algorithm?
Yeah, it did.
Well, then, there's nothing to see here.
There's no magic.
This is, you know, it does what it does.
It didn't disobey the algorithm.
Right.
I'm not claiming that this is a miracle.
I'm not saying it disobeys the algorithm.
I'm saying it's not failing to sort.
I'm saying it's not doing some sort of crazy quantum thing.
I'm not saying any of that.
What I'm saying is other people might call it emergent.
What it has are properties that are not complexity, not unpredictability, not perverse instantiation as sometimes happens in A-Life.
What it has are unexpected competencies recognizable by behavioral scientists, meaning different types of cognition, primitive ones.
Well, we want it primitive.
So there you go.
It's simple.
That you didn't have to code into the algorithm.
That's very important.
You get more than you put in.
You didn't have to do that.
You get the surprising behavioral competencies, not just complexity.
That's the first thing.
The second thing, which is also crazy, but it requires a little bit of explanation.
The second thing that we said is, okay, what if instead of in the typical sorting algorithm, you have a single controller, top-down?
I'm sort of god-like looking down at the numbers and I'm swapping them according to the algorithm.
What if, and this goes back to actually the title of the paper talks about agential data, self-sorting algorithms.
This is back to like who's the pattern and who's the agent, right?
We said, what if we give the numbers a little bit of agency?
Here's what we're going to do.
We're not going to have any kind of top-down sort.
Every single number knows the algorithm and he's just going to do whatever the algorithm says.
So if I'm a five, I'm just going to execute the algorithm and the algorithm will try to make sure that to my right is the six and to my left is a four.
That's it.
So every digit is autonomous; it's distributed, like an ant colony.
There is no central planner.
Everybody just does their own algorithm.
Okay.
We're just going to do that.
Once you've done that, and by the way, one of the values of doing that is that you can simulate biological processes because in biology, you know, if I have like a frog face and I scramble it with all the different organs, every tissue is going to rearrange itself so that ultimately you have, you know, nose, eyes, head, you know, you're going to have an order, right?
So you can do that.
But okay, fine.
But you can do something else cool.
Once you've done that, you can do something cool that you can't do with a standard algorithm.
You can make a chimeric algorithm.
What I mean is not all the cells have to follow the same algorithm.
Some of them might follow bubble sort.
Some of them might follow a selection sort.
It's like in biology, what we do when we make chimeras; we make frogolotls.
So frogolotls have some frog cells and some axolotl cells.
What is that going to look like?
Does anybody know what a frogolotl is going to look like?
It's actually really interesting that despite all the genetics and the developmental biology, you have the genomes, the frog genome and the axolotl genome.
Nobody can tell you what a frogolotl is going to look like.
This is back to your question about physics and chemistry.
Like, yeah, you can know everything there is to know about how the physics and the genetics work, but the decision-making, right: baby axolotls have legs, tadpoles don't have legs.
Is a frogolotl going to have legs?
Can you predict that from understanding the physics of transcription and all of that?
Anyway, so we made some, so you see, this is like an intersection of biology, physics, and cognition.
So we made chimeric algorithms.
And we said, okay, half the digits randomly.
We assign them randomly.
So half the digits are randomly doing bubble sort.
Half the digits are randomly doing, I don't know, selection sort.
But once you choose bubble sort, that digit is sticking with bubble sort.
It's sticking.
We haven't done the thing where they can swap between algotypes.
No, they're sticking to it, right?
You label them and they're sticking to it.
The first thing we learned is that distributed sorting still works.
It's amazing.
You don't need a central planner; when every number is doing its own thing, the array still gets sorted.
That's cool.
The second thing we found is that when you make a chimeric algorithm where actually the algorithms are not even matching, that works too.
The thing still gets sorted.
That's cool.
But the most amazing thing is when we looked at something that had nothing to do with sorting, and that is we asked the following question.
We defined, Adam Goldstein actually named this property, and I think it's well named.
We define the algotype of a single cell.
It's not the genotype.
It's not the phenotype.
It's the algotype.
The algotype is simply this.
What algorithm are you following?
Which one are you?
Are you a selection sort or a bubble sort?
That's it.
There's two algotypes.
And we simply asked the following question.
During that process of sorting, what are the odds that whatever algotype you are, the guys next to you are your same type?
It's not the same as asking how the numbers are sorted because it's got nothing to do with the numbers.
It's actually, it's just whatever type you are.
It's more about clustering than sorting.
Clustering.
Well, that's exactly what we call it.
We call it clustering.
And at first, so now think of what happens.
And that's, and you can see this on that graph.
It's the red.
You start off, the clustering is at 50% because, as I told you, we assign the algotypes randomly.
So the odds that the guy next to you is the same as you are fifty-fifty.
There are only two algotypes.
In the end, it is also 50% because the thing that dominates is actually the sorting algorithm.
And the sorting algorithm doesn't care what type you are.
You've got to get the numbers in order.
So by the time you're done, you're back to random algotypes because you have to get the numbers sorted.
But in between, you get a very significant increase, because look, the control, the pink, is in the middle.
In between, you get significant amounts of clustering, meaning that certain algotypes like to hang out with their buddies for as long as they can.
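The agential setup and the clustering metric can also be sketched as a toy; the two local rules below are hedged stand-ins for the paper's cell-view algorithms, so treat this as an illustration of the setup, not a reproduction of the result.
```python
# Toy "agential" self-sorting: no central controller, each cell acts under
# its own algotype, and the algotype travels with the value when cells swap.
import random

def clustering(algotypes):
    # fraction of adjacent cells sharing an algotype; about 0.5 at random
    n = len(algotypes)
    return sum(algotypes[i] == algotypes[i + 1] for i in range(n - 1)) / (n - 1)

def step(values, algotypes, i):
    # Cell i fixes an out-of-order pair it touches; the two algotypes differ
    # only in which neighbor they check first (stand-in local rules, not the
    # paper's exact cell-view algorithms).
    pairs = [(i, i + 1), (i - 1, i)] if algotypes[i] == "A" else [(i - 1, i), (i, i + 1)]
    for l, r in pairs:
        if 0 <= l and r < len(values) and values[l] > values[r]:
            values[l], values[r] = values[r], values[l]
            algotypes[l], algotypes[r] = algotypes[r], algotypes[l]
            return

values = random.sample(range(30), 30)
algotypes = [random.choice("AB") for _ in values]
while values != sorted(values):
    step(values, algotypes, random.randrange(len(values)))
    # record clustering(algotypes) here to look for the mid-run rise
print(clustering(algotypes))  # near 0.5 again at the end, as described
```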
Here's one more thing, and then I'll kind of give a philosophical significance of this.
So we saw this, and I said, that's nuts because the algorithm doesn't have any provisions for asking, what algotype am I?
What algotype is my neighbor?
If we're not the same, I'm going to move to be next to, like, if you wanted to implement this, you would have to write a whole bunch of extra steps.
There would have to be a whole bunch of observations that you would have to take of your neighbor to see how he's acting.
Then you would infer what algotype he is.
Then you would go stand next to the one that seems to have the same algotype as you.
You would have to take a bunch of measurements to say, wait, is that guy doing bubble sort?
Is he doing selection sort?
Like if you wanted to implement this, it's a whole bunch of algorithmic steps.
None of that exists in our algorithm.
You don't have any way of knowing what algotype you are or what anybody else is.
Okay.
We didn't have to pay for that at all.
So notice a couple of interesting things.
The first interesting thing is that this was not at all obvious from the algorithm itself.
Algorithm doesn't say anything about algotypes.
Second thing is we paid computationally for all the steps needed to have the numbers sorted, right?
Because we know you pay for a certain computation cost.
The clustering was free.
We didn't pay for that at all.
There were no extra steps.
So this gets back to your other question of how do we know there's a platonic space?
And this is kind of like one of the craziest things that we're doing.
I actually suspect we can get free compute out of it.
I suspect that one of the things we can do here is use these ingressions in a useful way that doesn't require you to pay physical costs, right?
Like we know every bit operation has an energy cost that you have to pay.
The clustering was free.
Nothing extra was done.
Yeah, to describe this plot for people who are just listening: the x-axis is the percentage of completion of the sorting process, and the y-axis is the sortedness of the list of numbers.
And then also in the red line is basically the degree to which they're clustered.
And you're saying that there's this unexpected competence of clustering.
And I should comment that I'm sure there's a theoretical computer scientist listening to this saying, I can model exactly what is happening here and prove when the clustering increases and decreases, taking the specific instantiation you've experimented with and proving certain properties of it.
But the point is that there's a more general pattern here, probably of other unexpected competencies that emerge from this that you haven't discovered yet.
You could get free computation out of this thing.
So this goes back to the very first thing you said about physicists thinking that physics is enough.
You're 100% correct that somebody could look at this and say, well, I see exactly why this is happening.
We can track through the algorithm.
Yeah, you can.
There's no miracle going on here, right?
The hardware isn't doing some crazy thing that it wasn't supposed to do.
The point is that despite following the algorithm to do one thing, it is also at the same time doing other things that are neither prescribed nor forbidden by the algorithm.
It's the space between chance and necessity, which is how a lot of people see these things.
It's that free space.
We don't really have a good vocabulary for it, where the interesting things happen.
And to whatever extent it's doing other things that are useful, that stuff is computationally without extra cost.
Now, there's one other cool thing about this.
And this is the beginning of a lot of thinking that I've done about when this relates to AI and stuff like that, intrinsic motivations.
The sorting of the digits is what we forced it to do.
The clustering is an intrinsic motivation.
We didn't ask for it.
We didn't expect it to happen.
We didn't explicitly forbid it, but we didn't know.
This is a great definition of the intrinsic motivation of a system.
So when people say, oh, that's a machine, it only does what you programmed it to do.
I, as a human, have intrinsic motivation.
I'm creative and I have intrinsic motivation.
Machines don't do that.
Even this minimal thing has a minimal kind of intrinsic motivation, which is something that is not forbidden by the algorithm, but isn't prescribed by the algorithm either.
And I think that's an important third thing besides chance and necessity.
Something else that's fun about this is when you think about intrinsic motivations, think about a child.
If you make him sit in math class all day, you're never going to know what other intrinsic motivations he might have, right?
Who knows what else he might be interested in.
So I wanted to ask this question.
I want to say, if we let off the pressure on the sorting, what would happen?
Now, that's hard because if you mess with the algorithm, now it's no longer the same algorithm.
So you don't want to do that.
So we did something that I think was kind of clever.
We allowed repeat digits.
So if you allow repeat digits in your array, all the fives can still come after all the fours and before all the sixes, but you can keep the fives as clustered as you want.
So this thing at the end where they have to get declustered in order for the sorting to happen, we thought maybe we could let off the pressure a little bit.
If you do that, all you do is allow some extra repeat digits, the clustering gets bigger.
It will cluster as much as you let it.
The clustering is what it wants to do.
The sorting is what we're forcing it to do.
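Continuing the toy above, allowing repeated values is a one-line change; whether this particular toy reproduces the paper's result is not guaranteed, but it shows how ties relax the sorting pressure.
```python
# Same toy as above, but duplicates are allowed, so sortedness no longer
# pins down a unique arrangement and clustering has slack to persist.
values = [random.randrange(10) for _ in range(30)]   # repeats likely
algotypes = [random.choice("AB") for _ in values]
while values != sorted(values):
    step(values, algotypes, random.randrange(len(values)))
print(clustering(algotypes))  # the paper reports clustering stays elevated here
```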
And my only point is: if bubble sort, which has been gone over and gone over how many times, has these kinds of things that we didn't see coming, what about the AIs, the language models, everything else?
Not because they talk, not because they say that they're, you know, have an inner perspective or any of that, but just from the fact that this thing is even the most minimal system surprises with what happens.
And I, frankly, when I see this, tell me if this doesn't sound like all of our existential story.
For the brief time that we're here, the universe is going to grind us into dust eventually.
But until then, we get to do some cool stuff that is intrinsically motivating to us, that is neither forbidden by the laws of physics nor determined by the laws of physics.
But eventually it kind of comes to an end.
So I think that that aspect of it, right?
That there are spaces, even in algorithms, there are spaces in which you can do other new things, not just random stuff, not just complex stuff, but things that are easily recognizable to a behavior scientist.
You see, that's the point here.
And I think that kind of intrinsic motivation is what's telling us that this idea that we can carve up the world, we can say, okay, look, biology is complex.
Cognition, who knows what's responsible for that, but at least we can take a chunk of the world aside and we can cut it off and we can say, these are the dumb machines.
These are just algorithms.
Whereas we know the rules of biochemistry don't explain everything we want to know about how psychology is going to go, but at least the rules of algorithms tell us exactly what the machines are going to do, right?
We have some hope that we've carved off a little part of the world and everything is nice and simple and it is exactly what we said it was going to be.
I think that failed.
I think it was a good try.
I think we have good theories of interfaces, but even the simplest algorithms have these kinds of things going on.
And so that's why I think something like this is significant.
Do you think that there is going to be in all kinds of systems of varying complexity things that the system wants to do and things that it's forced to do?
So are there these unexpected competencies to be discovered in basically all algorithms and all systems?
That's my suspicion.
And I think that it is extremely important for us as humans to have a research program to learn to recognize them, predict and recognize.
We make things, never mind something as simple as this.
We make social structures, financial structures, internet of things, robotics, AIs.
We make all this stuff.
And we think that the thing we make it do is the main show.
And I think it is very important for us to learn to recognize the kind of stuff that sneaks in into the spaces.
It's a very counterintuitive notion.
By the way, I like the word emergent.
I hear your criticism, and it's a really strong one that emergent is like you toss your hands up.
I don't know the process, but it's just a beautiful word because it is, I guess, it's a synonym for surprising.
And I mean, this is very surprising, but just because it's surprising doesn't mean there's not a mechanism that explains it.
Mechanism and explanation are both not all they're cracked up to be, in the sense that, take anything you and I do: we come up with the most beautiful theory, we paint a painting, anything.
Somebody could say, well, I was watching the biochemistry and the Schrodinger equation playing out, and it totally described everything that was happening.
You didn't break even a single law of biochemistry.
Nothing to see here.
Nothing to see, right?
Like, okay, you know, consistent with the low-level rules, you can do the same thing here.
You can look at the machine code and say, yeah, this thing is just executing machine code.
You can go further and say, oh, it's cool.
It's quantum foam.
It's just doing the thing that quantum foam does.
You're saying that's what physicists miss.
And I'm not saying they're unaware of that.
I mean, they're generally a pretty sophisticated bunch.
I just think they've picked a level and they're going to discover what is to be seen at that level, which is a lot.
And my point is the stuff that the behavior scientists are interested in shows up at a much lower level than you think.
How often do you think there's a misalignment of this kind between the thing that a system is forced to do and what it wants to do?
It's particularly, I'm thinking about various levels of complexity of AI systems.
So right now, we've looked at like five other systems.
That's a small N, okay?
But just looking at that, I would find it very surprising if Bubble Sort was able to do this and then there was some sort of valley of death where nothing showed up and then blah, blah, blah, living things.
Like, I can't imagine that.
And we actually have a system that's even simpler than this, a 1D cellular automaton, that's doing some weird stuff.
If these things are to be found in this kind of simple system, I mean, they just have to be showing up in these other more complex AIs and things like that.
The only thing we don't know, but we're going to find out, is to what extent there is interaction between these.
So I call these things side quests.
It's like in a game, this is the main thing you're supposed to do.
And then as long as you still do it, the thing about this is you have to sort.
You have to sort.
There's no miracle.
You're going to sort.
But as long as you can do other stuff while you're sorting, it's not forbidden.
And what we don't know is to what extent are the two things linked?
So if you do have a system that's very good at language, are the others, the side quests that it's capable of, do they have anything to do with language whatsoever?
We don't know the answer to that.
The answer might be no, in which case, all of the stuff that we've been saying about language models, because of what they're saying, all of that could be a total red herring and not really important.
And the really exciting stuff is what we never looked for.
Or in complex systems, maybe those things become linked.
In biology, they're linked.
In biology, evolution makes sure that the things you're capable of have a lot to do with what you've actually been selected for.
In these things, I don't know.
And so we might find out that they actually do give the language some sort of leg up, or we might find that the language is just, you know, that's not the interesting part.
Also, it is an interesting question of this intrinsic motivation of clustering.
Is this a property of the particular sorting algorithms?
Is this a property of all sorting algorithms?
Is this a property of all algorithms operating on lists of numbers?
How big is this?
So, for example, with LLMs, is it a property of any algorithm that's trying to model language?
Or is it very specific to transformers?
And that's all to be discovered.
We're doing all that.
We're doing all that.
We're testing.
We're testing the stuff in other algorithms.
We're looking for, we're developing suites of code to look for other properties.
To some extent it's very hard because we don't know what to look for, but we do have the behaviorist handbook, which tells you what kinds of things to look for.
The delayed gratification, the problem solving.
We have all that.
I'll tell you an N of one of an interesting biological intrinsic motivation because people, so in like the alignment community and stuff, there's a lot of discussion about what are the intrinsic motivations going to be of AIs?
What are their goals going to be?
What are they going to want to do?
Just as an N of one observation, anthrobots, the very first thing we checked for.
So this is not experiment number 972 out of 1,000 things.
This is the very first thing we checked for.
We put them on a plate of neurons with a big wound through them, a big scratch.
First thing they did was heal the wound.
So it's an N of one, but I like the fact that the first intrinsic motivation that we noticed out of that system was benevolent and healing.
Like I thought that was pretty cool.
And we don't know.
Maybe, you know, maybe the next 20 things we find are going to be some sort of damaging effect.
I can't tell you that.
But the first thing that we saw was kind of a positive one.
And I don't know.
That makes me feel better.
What was the thing you mentioned with the anthrobots that they can reverse aging?
There's a procedure called an epigenetic clock: you look at particular epigenetic states of cells, compare them to a curve built from humans of known age, and you can estimate the cells' age.
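For the technically inclined, a clock of the Horvath type is at its core a penalized regression from DNA methylation levels at a few hundred CpG sites to chronological age; here is a toy sketch with random placeholder data, not a real clock.
```python
# Toy epigenetic-clock sketch: placeholder data, illustrative only.
import numpy as np
from sklearn.linear_model import ElasticNet

rng = np.random.default_rng(0)
X_train = rng.random((200, 353))      # methylation levels at 353 CpG sites
y_train = rng.uniform(0, 90, 200)     # donor ages used to build the curve

clock = ElasticNet(alpha=0.1).fit(X_train, y_train)
sample = rng.random((1, 353))         # methylation profile of new cells
print(clock.predict(sample))          # estimated biological age
```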
So we can now take a set of cells, and this is Steve Horvath's work and many other people's, and estimate their biological age.
So we make the anthrobots from cells that we get from human tracheal epithelium.
We collaborated with Steve's group, the Clock Foundation.
We sent them a bunch of cells, and we saw that if you check the anthrobots themselves, they are roughly 20% younger than the cells they come from.
And so that's amazing.
And I can give you a theory of why that happens, although we're still investigating.
And then I can tell you the implications for longevity and things like that.
My theory for why it happens, I call this age evidencing.
And I think that what's happening here, like with a lot of biology, is that cells have to update their priors based on experience.
And so I think that they come from an old body.
They have a lot of priors about how many years they've been around and all that.
But their new environment screams, I'm an embryo.
Basically, there's no other cells around.
You're being bent into a pretzel.
They actually express some embryonic genes.
They say you're an embryo.
And I think it's not enough new evidence to roll them all the way back, but it's enough to update them about 28% back.
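That partial rollback reads naturally as Bayesian updating; here is a toy sketch with made-up numbers, just to illustrate updating partway rather than all the way.
```python
# Hedged toy of "age evidencing" as a Bayesian update: embryo-like cues
# shift a cell's belief that it is "young" partway, not completely.
prior_young = 0.05            # old-body prior: probably not young (made up)
likelihood_ratio = 8.0        # embryo-like evidence favors "young" 8:1 (made up)
posterior_odds = (prior_young / (1 - prior_young)) * likelihood_ratio
posterior_young = posterior_odds / (1 + posterior_odds)
print(posterior_young)        # about 0.30: a partial, not total, rollback
```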
Yeah, so it's similar to when an older adult gives birth to a child.
So you're saying you can just fake it till you make it with age?
Like the environment convinces the cell that it's young?
Well, first of all, yes, yes.
And that's my hypothesis.
And we have a whole bunch of research being done on this.
There was a study where they went into an old age home and they redid the decor, like 60s style, when all these folks were really young.
And they found all kinds of improvements in blood chemistry and stuff like that because they say it was sort of mentally taking them back to when they were the way they were at that time.
I think this is a basal version of that: if you find yourself in an embryonic environment, what's more plausible than that you're young?
I think this is the basic feature of biology is to update priors based on experience.
Do you think that's actually actionable for longevity?
Like you can convince cells that they're younger and thereby extend a lifespan.
This is what we're trying to do.
Yeah.
Could it be as simple as that?
Why?
Well, I'm not claiming it's simple.
It is in no way simple, because again, all of the regenerative medicine work that we do balances on one key thing, which is learning to communicate with the system.
If you're going to convince that system, so when we make gut tissue into an eye, you have to convince those cells that their priors, we are gut precursors, are wrong, and that they should adopt this new worldview: you're going to be an eye.
So, being convincing and figuring out what kind of messages are convincing to cells and how to speak the language and how to make them take on new beliefs, literally, is at the root of all of these future advances in birth defects and regenerative medicine and cancer.
And that's what's going on here.
So, I'm not saying it's simple, but I can see the path.
Going back to the platonic space, I have to ask if our brains are indeed thin-client interfaces to that space, what does that mean for our mind?
Like, can we upload the mind?
Can we copy it?
Can we ship it over to other planets?
Like, what does that mean for exactly where the mind is stored?
Yeah, a couple of things.
So, we are now beyond anything that I can say with any certainty.
This is total conjecture, okay?
So, because we don't know yet, the whole point of this is we actually don't really understand very well the relationship between the interface and the thing.
And the thing you're currently working on is to map this space.
Correct.
And we are beginning to map it, but this is a massive effort.
So, a couple of conjectures here.
One is that I strongly suspect that the majority of what we think of as the mind is the pattern in that space.
And one of the interesting predictions from that model, which is not a prediction of modern neuroscience, is that there should be cases where there's very minimal brain and yet normal IQ function.
This has been seen clinically.
Karina Kaufman and I reviewed this in a paper recently, a bunch of cases of humans where there's very little brain tissue and they have normal or sometimes above normal intelligence.
Now, things are not simple because that obviously doesn't happen all the time, right?
Most of the time, that doesn't happen.
So, what's going on?
We don't understand.
But it is a very curious thing that is not a prediction of, I'm not saying it can't, you know, you can take modern neuroscience and sort of bend it into a pretzel to accommodate it.
You can say, well, there are these, you know, kind of redundancies and things like this, right?
So, you can accommodate it, but it doesn't predict this.
So, there are these incredibly curious cases.
Now, do I think you can copy it?
No, I don't think you can because what you're going to be copying is the interface, the front end, the brain or whatever.
The action is actually the pattern in the platonic space.
You're going to be able to copy that?
I doubt it.
But what you could do is produce another interface through which that particular pattern is going to come through.
I think that's probably possible.
I can't say anything at this point about what that would take, but my guess is that that's possible.
Is your guess, your gut, is that that process, if possible, is different than copying?
Like it looks more like creating a new thing versus copying.
For the interface.
So if you could, so here's my prediction for Star Trek transporter.
For whatever reason, right now, your brain and body are very attuned and attractive to a particular pattern, which is your set of psychological propensities.
If we could rebuild that exact same thing somewhere else, I don't see any reason why that same pattern wouldn't come through it the same way it comes through this one.
That would be a guess, you know?
So I think what you will be copying is the physical interface and hoping to maintain whatever it is about that interface that was appropriate for that pattern.
We don't really know what that is at this point.
When we've been talking about mind, in this particular case, it's the most important to me because I'm a human.
Does self come along with that?
The feeling like this mind belongs to me?
Yeah.
Does that come along with all minds?
The subjective, not the subjective experience.
The subjective experience is important too, consciousness.
But like the ownership.
I suspect so.
And I think so because of the way we come into being.
So one of the things that I should be working on is this paper called Booting Up the Agent.
And it talks about the very earliest steps of becoming a being in this world.
Kind of like you can do this for a computer, right?
And before you switch the power on, it belongs to the domain of physics, right?
It obeys the laws of physics.
You switch the power on some number of nanoseconds, microseconds, I don't know, later.
You have a thing that, oh, look, it's taking instructions off the stack and doing them, right?
So now it's executing an algorithm.
How did you get from physics to executing an algorithm?
Like what was happening during the boot up exactly before it starts to run code or whatever, right?
So we can ask that same question in biology.
What are the earliest steps of becoming a being?
Yeah, that's a fascinating question through embryogenesis.
At which point are you booting up?
Yeah, exactly.
Do you have a hope of an answer to that?
Well, I think so.
I think so in two ways.
The first thing is just physically what happens.
So I think that your first task as a being, and again, I don't think this is a binary thing.
I think this is a positive feedback loop that sort of cranks up and up.
Your first task as a being coming into this world is to tell a very compelling story to your parts.
As a biological, you are made of agential parts.
Those parts need to be aligned, literally, into a goal they have no comprehension of.
If you're going to move through anatomical space by means of a bunch of cells, which only know physiological and metabolic spaces and things like that, you are going to have to develop a model and bend their action space.
You're going to have to deform their option space with signals, with behavior shaping cues, with rewards and punishments, whatever you got.
Your job as an agent is ownership of your parts, is alignment of your parts.
I think that fundamentally is going to give rise to this ability.
Now, that also means having a boundary saying, okay, this is the stuff I control.
This is me.
This other stuff over here is outside world.
I have to figure out, you don't know where that is, by the way.
You have to figure it out.
And in embryogenesis, it's really cool.
You can, as a grad student, I used to do this experiment with duck embryos, which is a flat blastodisc.
You can take a needle and put some scratches into it.
And every island you make, for a while until the scratches heal up, thinks it's the only embryo.
There's nothing else around.
So it becomes an embryo.
And eventually you get twins and triplets and quadruplets and things like that.
But each one of them at the border, you know, they're joined.
Well, where do I end and where does he begin?
You have to know what your borders are.
So that action of aligning your parts and coming to be this, I mean, I'm even going to say this emergence.
We just don't have a good vocabulary for it.
This emergence of a model that aligns all the parts is really critical to keep that thing going.
There's something else that's really interesting.
And I was thinking about this in the context of this question of these beautiful kind of ideas, you know, that there's this amazing thing that we found.
And this is largely the work of Federico Pigozzi in my group.
So a couple of years ago, we saw that networks of chemicals can learn.
They have five or six different kinds of learning that they can do.
And so, what I asked them to do was to calculate the causal emergence of those networks while they're learning.
And what I mean by that is this: if you're a rat and you learn to press a lever and get a reward, there's no individual cell that had both experiences.
The cells in your paw touched the lever; the cells in your gut got the delicious reward.
No individual cell has both experiences.
Who owns that associative memory?
Well, the rat.
So, that means you have to be integrated, right?
If you're going to learn associative memories from different parts, you have to be an integrated agent that can do that.
And so, we can measure that now with metrics of causal emergence, like phi and things like that.
So, we know that in order to learn, you have to have significant phi.
But I wanted to ask the opposite question: What does learning do for your phi level?
Does it do anything for your degree of being an agent that is more than the sum of its parts?
So, we train the networks, and sure enough, some of them, not all of them, but some of them, as you train them, their phi goes up.
Okay.
And so, basically, what we were able to find is that there is this positive feedback loop between every time you learn something, you become more of an integrated agent.
And every time you do that, it becomes easier to learn.
And so, it's this virtuous cycle.
It's an asymmetry that points upwards for agency and intelligence.
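Exact phi is notoriously expensive to compute; as a flavor of what more-than-the-sum-of-its-parts can mean operationally, here is a crude whole-minus-parts predictive-information proxy, entirely a sketch and not the metric used in that work.
```python
# Crude integration proxy: predictive information of the whole system's
# dynamics minus the summed predictive information of each node alone.
import numpy as np
from collections import Counter

def mutual_info(xs, ys):
    # empirical mutual information (in nats) between two discrete sequences
    n = len(xs)
    pxy, px, py = Counter(zip(xs, ys)), Counter(xs), Counter(ys)
    return sum(c / n * np.log((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

def integration_proxy(states):
    # states: list of tuples, one network state per timestep
    whole = mutual_info(states[:-1], states[1:])
    parts = sum(mutual_info([s[i] for s in states[:-1]],
                            [s[i] for s in states[1:]])
                for i in range(len(states[0])))
    return whole - parts

states = [(0, 1, 0), (1, 1, 0), (1, 0, 1), (0, 0, 1), (0, 1, 0), (1, 1, 0)]
print(integration_proxy(states))  # positive values suggest synergy in the dynamics
```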
And now, back to our Platonic space stuff, where does that come from?
Doesn't come from evolution.
You don't need to have any evolution for this.
Evolution will optimize the crap out of it for sure, but you don't need evolution to have this.
It doesn't come from physics, it comes from the rules of information, causal information theory, and the behavior of networks, the mathematical objects.
This is not anything that was given to you by physics or by a history of selection.
It's a free gift from math.
And those two free gifts from math lock together into a spiral that I think causes simultaneously a rise in intelligence and a rise in collective agency.
And I think that's just been just amazing to think about.
Well, that free gift from math, I think, is extremely useful in biology, where you have small entities forming networks and hierarchies that build more and more complex organisms.
That's obvious.
I mean, this speaks to embryogenesis, which I think is one of the coolest things in the universe.
And in fact, you acknowledge its coolness in Ingressing Minds paper, writing, most of the big questions of philosophy are raised by the process of embryogenesis right in front of our eyes.
A single cell multiplies and self-assembles into a complex organism with order on every scale of organization and adaptive behavior.
Each of us takes the same journey across the Cartesian cut, starting off as a quiescent human oocyte, a little blob thought to be well described by chemistry and physics.
Gradually, it undergoes metamorphosis and eventually becomes a mature human with hopes, dreams, and a self-reflective metacognition that can enable it to describe itself as not a machine.
It's more than its brain, body, and underlying molecular mechanisms, and so on.
What in all of our discussion can we point to as the clear intuition for how it's possible to take the leap from a single cell to a fully functioning organism, full of dreams and hopes and friends and love and all that kind of stuff?
In everything we've been talking about, which has been a little bit technical, like how do we understand?
Because that's one of the most magical things the universe is able to create.
Perhaps the most magical.
From simple physics and chemistry, to create the two of us talking about ourselves.
I think we have to keep in mind that physics and chemistry are not real things.
They are lenses that we put on the world, perspectives where we say: for the time being, for the duration of this chemistry class or career or whatever, we are going to put aside all the other levels and we're going to focus on this one level.
And what is fundamentally going on during that process is an amazing positive feedback loop of collective intelligence for the interface.
The physical interface is scaling the cognitive light cone that it can support.
So it's going from a molecular network.
The molecular network can already do things like Pavlovian conditioning.
You don't start with zero.
When you have a simple molecular network, you are already hosting some patterns from a platonic space that look like Pavlovian conditioning.
You've already got that starting out.
That's just a molecular network.
Then you become a cell, and then you're many cells.
And now you're navigating anatomical morphospace and you're hosting all kinds of other patterns.
And eventually, and this is all the stuff that we're trying to work out now, there's a consistent feedback between the ingressions you get and the ability to have new ones. It's this positive feedback cycle where the more of these free gifts you pull down, the more they allow you to physically develop to a point where, oh, look, now we're suitable for more and higher ones.
And this continuously goes and goes and goes until, you know, until you're able to pull down a full human set of behavioral capacities.
What is the mechanism of such radical scaling of the cognitive light cone?
Is it just this kind of the same thing that you were talking about with the network of chemicals being able to learn?
I'll give you two mechanisms that we found.
But again, just to be clear, these are mechanisms of the physical interface.
What we haven't gotten is a mature theory of how they map onto the space.
That's just like just beginning.
But I'll tell you what the physical side of things look like.
The first one has to do with stress propagation.
So imagine that you've got a bunch of cells and there's a cell down here that needs to be up there.
All of these cells are exactly where they need to go.
So they're happy.
Their stress is low.
This cell is stressed. Now, let's imagine stress is basically a physical implementation of the error function.
The amount of stress is basically the delta between where you are now and where you need to be.
Not necessarily in physical position.
This could be in anatomical space and physiological space and in transcriptional space, whatever, right?
It's just the delta from your set point.
So you're stressed out, but these guys are happy.
They're not moving.
You can't get past them.
Now imagine if what you could do is you could leak your stress, whatever your stress molecule is.
And the cool thing is that evolution has actually highly conserved these stress molecules.
We're studying all of these things, and they're highly conserved.
If you start leaking your stress molecules, then all of this stuff around here is starting to get stressed out.
When things start to get stressed out, their temperature goes up, not physical temperature, but in the sense of simulated annealing or something, right?
Their plasticity goes up because they're feeling stress.
They need to relieve that stress.
And because all the stress molecules are the same, they don't know it's not their stress.
They are equally irritated by it, as if it were their own stress.
So they become a little more plastic.
They become ready to kind of adopt different fates.
You get up to where you're going, and then everybody's stress can drop.
So notice what can happen by a very simple mechanism.
Just be leaky for your own stress.
My problems become your problems.
Not because you're altruistic, not because you actually care about my problems.
There's no mechanism for you to actually care about my problems, but just that simple mechanism means that faraway regions are now responsive to the needs of other regions, such that complex rearrangements and things like that can happen.
It's alignment of everybody to the same goal through this very dumb, simple stress-sharing thing.
Via leaky stress.
Leaky stress, right?
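Here's a minimal sketch of that stress-sharing mechanism, my own toy construction rather than anything from the lab's actual models: stress is the delta from a set point, leaked stress raises neighbors' plasticity in the simulated-annealing sense, and rearrangements that relieve collective stress get accepted.

```python
# A toy sketch (my own construction, not the lab's model) of leaky stress:
# stress = delta from a set point; leaked stress raises neighbors' plasticity
# (their "temperature" in the simulated-annealing sense), and rearrangements
# that relieve collective stress are accepted.
import random

def total_error(states, set_points):
    return sum(abs(s - t) for s, t in zip(states, set_points))

def step(states, set_points, leak=0.5):
    n = len(states)
    own = [abs(s - t) for s, t in zip(states, set_points)]   # each cell's own stress
    felt = list(own)
    for i in range(n):                 # leak: neighbors can't tell whose stress it is,
        if i > 0:                      # so they're irritated as if it were their own
            felt[i - 1] += leak * own[i]
        if i < n - 1:
            felt[i + 1] += leak * own[i]
    for i in range(n - 1):
        # Felt stress raises plasticity; content cells only move because of
        # the stress leaking into them from a stressed neighbor.
        plasticity = min(1.0, 0.5 * min(felt[i], felt[i + 1]))
        if random.random() < plasticity:
            trial = list(states)
            trial[i], trial[i + 1] = trial[i + 1], trial[i]
            if total_error(trial, set_points) < total_error(states, set_points):
                states = trial         # accept swaps that relieve collective stress
    return states

set_points = [1, 2, 3, 4, 9]           # where everyone "needs to be"
states = [9, 1, 2, 3, 4]               # one misplaced, stressed cell
for _ in range(200):
    states = step(states, set_points)
print(states)                          # -> [1, 2, 3, 4, 9]: neighbors made way
```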
So there's another one, which I call memory anonymization.
So imagine here, two cells, and imagine something happens to this cell and it sends a signal over to this cell.
Traditionally, you send the signal over, this cell receives it.
It's very clear that it came from outside.
So this cell can do many things.
It could ignore it.
It could believe it, you know, take on the information.
It could reinterpret it.
It could do whatever.
But it's very clear that came from outside.
Now, imagine the kind of thing that we study, which is called gap junctions.
These are electrical synapses that could directly link the internal milieus of two cells.
If something happens to this cell, it gets, let's say, it gets poked and there's a calcium spike or something, that propagates through the gap junction here.
This cell now has the same information, but this cell has no idea.
Wait a minute, is that my memory or is that his memory?
Because it's the same, right?
It's the same, it's the same components.
And so what you're able to do now is to have a mind meld between the two cells, where nobody's quite sure whose memory it is.
And when you share memories like this, it's harder to say that I'm separate from you.
If we share the same memories, we're kind of one, and I don't mean every single memory, right?
So they still have some identity, but to a large extent, they have a bit of a mind meld.
And there are many complexities you can layer on top of it.
But what it means is that if you have a large group of cells, they now have joint memories of what happened to us, as opposed to: you know what happened to you and I know what happened to me.
And that enables a higher cognitive light cone because you have greater computational capacity, you have a greater area of concern of things you want to manage.
I don't just want to manage my tiny little memory states because I'm getting your memories.
Now I know I got to manage this whole thing.
So both of these things end up scaling the size of things you care about.
And that is a major ladder for cognition: this scaling of the size of the concerns that you have.
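A toy sketch of the contrast between those two signaling modes, again my own illustrative construction, nothing from the lab's code: a conventional signal arrives tagged with its sender, so the receiver can attribute it; a gap-junction-style transfer copies raw state with no tag, so the receiver cannot say whose memory it is.

```python
# A toy sketch (my illustrative construction) contrasting the two modes:
# a conventional signal is tagged with its sender, so the receiver can
# attribute (and ignore or reinterpret) it; a gap-junction-style transfer
# copies raw state with no tag, so nobody knows whose memory it is.
from dataclasses import dataclass, field

@dataclass
class Cell:
    name: str
    memories: set = field(default_factory=set)
    inbox: list = field(default_factory=list)       # tagged messages from outside

    def send_signal(self, other, event):
        other.inbox.append((self.name, event))      # receiver sees the source tag

    def gap_junction(self, other, event):
        self.memories.add(event)                    # the same raw state appears on
        other.memories.add(event)                   # both sides, unattributed

a, b = Cell("a"), Cell("b")
a.send_signal(b, "poked")           # b knows this came from a
a.gap_junction(b, "calcium spike")  # neither cell can say whose spike it was
print(b.inbox)                      # [('a', 'poked')]
print(a.memories == b.memories)     # True: a shared, anonymous memory
```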
It'd be fascinating to be able to engineer that scaling.
Probably applicable to AI systems.
How do you rapidly scale the cognitive cone?
Yeah, we have some collaborators in a company called SoftMax that we're working with to do some of that stuff.
In biology, that's our cancer therapeutic. What you see in cancer, literally, is cells electrically disconnecting from their neighbors. They were part of a giant memory that was working on making a nice organ.
Well, now they can't remember any of that.
Now they're just amoebas, and the rest of the body is just the external environment.
And what we found is if you then physically reconnect them to the network, you don't have to fix the DNA.
You don't have to kill the cells with chemo.
You can just reconnect them, and because they're now part of this larger collective, they go back to what they were working on.
And so, yeah, I think we can intervene at that scale.
Let me ask you more explicitly about the search, the SETI-style search for unconventional terrestrial intelligence.
What do you hope to do there?
How do you actually try to find unconventional intelligence all around us?
First of all, do you think on Earth there is all kinds of incredible intelligence we haven't yet discovered?
I mean, guaranteed. We've already seen it in our own bodies.
And I don't just mean that we are host to a bunch of microbiome or any of that.
I mean that your cells, and we have all kinds of work on this, every day they traverse these alien spaces, 20,000-dimensional spaces and other spaces.
They solve problems.
I think they suffer when they fail to meet their goals.
They have stress reduction when they meet their goals.
These things are inside of us.
They are all around us.
I think that we have an incredible degree of mind blindness to all of the very alien kinds of minds around us.
And I think that looking for aliens off the earth is awesome and whatever.
But if we can't recognize the ones that are inside our own bodies, what chance do we have to really, you know, to really recognize the ones that are out there?
Do you think that could be a measure like IQ for mind?
What would it be?
Not mindedness, but a measure of intelligence that's broadly applicable, that's generalizable to unconventional minds, where we could even quantify, like, holy shit, this discovery is incredible because it has this IQ.
Yeah, yes and no.
The yes part is that, as we have shown, you can take existing IQ metrics, I mean, literally existing ways that people use to measure the intelligence of animals and humans, and you can apply them to very weird things.
If you have the imagination to make the interface, you can do it.
And we've done it and we've shown creative problem solving and all this kind of stuff.
So yes.
However, we have to be humble about these things and recognize that all of those IQ metrics that we've come up with so far were derived from an N of one example of the evolutionary lineage here on Earth.
And so we are probably missing a lot of them.
So I would say we have plenty to start.
We have so much to start with.
We could keep tens of thousands of people busy just testing things now.
But we have to be aware that we're probably missing a lot of important ones.
Which do you think has more interesting, intelligent, unconventional minds: inside our body, the human body?
Or, like we were talking about off mic, the Amazon jungle, nature, natural systems outside of the sophisticated biological systems we're aware of?
Yeah.
We don't know because it's really hard to do experiments on larger systems.
It's a lot easier to go down than it is to go up.
But my suspicion is, you know, like the Buddhists say, innumerable sentient beings.
I think by the time you get to that degree of infinity, it kind of doesn't matter to compare.
I suspect there's just massive numbers of them.
Yeah, I think it really matters which kind of systems are amenable to our current methods of scientific inquiry.
I mean, I spent quite a lot of hours just staring at ants when I was in the Amazon.
And it's such a mysterious, wonderful collective intelligence.
I don't know how amenable it is to research.
I've seen some folks try.
You could simulate.
But I feel like we're missing a lot.
I'm sure we are.
But one of my favorite things about that kind of work: have you seen there's at least three or four papers showing that ant colonies fall for the same visual illusions that we fall for?
Not the ants, the colonies.
So if you lay out food in particular patterns, the colonies will do things like complete lines that aren't there.
All the same shit that we fall for, they fall for.
So I don't think it's hopeless, but I do think that we need a lot of work to develop tools.
Do you think all the tooling that we develop and the mapping that we've been discussing will help us do the SETI part, finding aliens out there?
I think it's essential.
I think it's essential.
We are so parochial in what we expect to find in terms of life that we are going to be just completely missing a lot of stuff.
If we can't even agree on, never mind a definition of life, but on what's actually important...
I led a paper recently where I asked whatever, 65 or so modern working scientists for a definition of life.
And we had so many different definitions across so many different dimensions.
We had to use AI to make a morphospace out of it.
And there was zero consensus about what actually is important.
And if we're not good at recognizing it here, I just don't see how we're going to be good at recognizing it somewhere else.
So given how miraculous life is here on Earth, it's clear to me that we have so much more work to do.
That said, would that be exciting to you if we find life on other planets in the solar system?
Like, what would you do with that information?
Or is that just another life form that we don't understand?
I would be very excited about it because it would give us some more unconventional embodiments to think about, right?
A data point that's pretty far away from our existing data points, at least in this solar system.
So that'd be cool.
I'd be very excited about it.
But I must admit that my set point for surprise has been pushed so high at this point that it would have to be something really weird to shock me.
I mean, the things that we see every day are just, yeah.
I think you've mentioned in a few places that you wrote that the Ingressing Minds paper is not the weirdest thing you plan to write.
Yeah.
How weird are you going to get?
Maybe a better question is: in which direction of weirdness do you think you will go in your life?
In which direction are you going to expand the weird Overton window?
Yeah.
Well, the kind of background to this is simply that I've had a lot of weird ideas for many, many decades.
And my general policy is not to talk about stuff until it becomes actionable.
And the amazing thing, I mean, I'm really kind of shocked, is how far the empirical work has come in my lifetime; I really didn't think we would get this far.
And the knob, I have this like mental knob of what percentage of the weird things I think do I actually say in public, right?
And every few years, when the empirical work moves forward, I sort of turn that knob a little, right?
As we keep going.
So I have no idea if we'll continue to be that fortunate, or how long I can keep doing this. I don't know.
But just to give you a direction for it, it's going to be in the direction of: what kinds of things do we need to take seriously as other beings to relate to?
So I've already pushed it, you know: we knew brainy things, and then we said, well, it's not just brains.
And then we said, well, it's not just in physical space, and it's not just biologicals, and it's not just complexity.
There's a couple of other steps to take that I'm pretty sure are there, but we're going to have to do the actual work to make it actionable before, you know, before we really talk about it.
So that direction.
I think it's fair to say you're one of the more unconventional human scientists out there.
So the interesting question is, what's your process of idea generation?
What's your process of discovery?
You've pursued a lot of incredibly interesting, like you said, actionable but out-there ideas that you've actually engineered, with Xenobots and Anthrobots, these kinds of things.
Like, when you go home tonight, or go to the lab, what's the process? An empty sheet of paper? How do you think it through?
Well, the mental part of it is, funny enough, much like making Xenobots. You know, we make Xenobots by releasing constraints, right?
We don't do anything to them.
We just release them from the constraints they already have.
And then we see, so a lot of it is releasing the constraints that mentally have been placed on us.
And part of it is that my education has been a little weird, because I was a computer scientist first and only later a biologist.
And so by the time I heard all the biology things that we typically just take on board, I was already a little skeptical and thinking a little differently.
But a lot of it comes from releasing constraints.
And I very specifically think about, okay, this is what we know.
What would things look like if we were wrong?
Or what would it look like if I was wrong?
What are we missing?
What is our worldview specifically not able to see, right?
Whatever model I have.
Or another way I often think is I'll take two things that are considered to be very different things and I'll say, let's just imagine those as two points on a continuum.
What does that look like?
What does the middle of that continuum look like?
What's the symmetry there?
What's the parameter that I can, you know, what's the knob I can turn to get from here to there?
So those kinds of, I look for symmetries a lot.
I'm like, okay, this thing is like that way in what way?
What's the fewest number of things I would have to move to make this map onto that?
So those are kind of mental tools.
The physical process for me is basically, I mean, obviously, I'm fortunate to have a lot of discussions with very smart people.
And so in my group, I've hired some amazing people.
So we, of course, have a lot of discussions and some stuff comes out of that.
My process is, pretty much every morning, I'm outside for sunrise and I walk around in nature.
There really isn't anything better as inspiration than nature, right?
I do photography and I find that it's a good meditative tool because it keeps your hands and brain just busy enough.
Like you don't have to think too much, but you're sort of twiddling and looking and doing some stuff.
And it keeps your brain off of the linear, like logical, like careful train of thought enough to release it so that you can ideate a little more while your hands are busy.
So it's not even the thing you're photographing.
It's the mechanical process of doing the photography.
And mentally, right? Because I'm not walking around thinking, okay, let's see, for this experiment I've got to get this piece of equipment and this. That all goes away.
And it's like, okay, what's the lighting?
And what am I looking at?
And during that time, when I'm not thinking about that other stuff, the ideas come, and I've got a notebook.
And I'm like, look, this is what we need to do.
So that, that kind of stuff.
And the actual writing down of ideas, is it a notebook, is it a computer?
Are you a super organized thinker, or is it just random words here and there with drawings?
And also, what is the space of thoughts you have in your head?
Are they sort of amorphous things that aren't very clear?
Are you visualizing stuff?
Is there something you can articulate there?
I tend to leave myself a lot of voicemails because as I'm walking around, I'm like, oh man, this idea.
And so I'll just call my office and leave myself a voicemail for later to transcribe.
I don't have a good enough memory to remember any of these things.
And so what I keep is a mind map.
So I have an enormous mind map.
One piece of it hangs in my lab so that people can see, like, these are the ideas.
This is how they link together.
Here's everybody's project.
I'm working on this.
How the hell does this attach to everybody else?
So they can track it.
The thing that hangs in the lab is about nine feet wide.
It's a silk sheet.
And, you know, it's out of date within a couple of weeks of my printing it because new stuff keeps moving around.
And then, and then there's more that isn't for anybody else's view.
But yeah, I try to be very organized because otherwise I forget.
So everything is in the mind map.
Things are in manuscripts.
Right now I have something like 162, 163 open manuscripts that are in the process of being written, at various stages.
And when things come up, I stick them in the right manuscript in the right place so that when I'm finally ready to finalize, then I'll put words around it, whatever.
But there's like outlines of everything.
So I try to be organized because I can't hold it all in my head, you know.
So there's a wide front of manuscripts, of work that's being done, that's continuously pushing toward completion, but it's not clear what's going to be finished when and how.
I mean, yes, but that's just the theoretical, philosophical stuff.
The empirical work that we're doing in the lab, with that we know exactly, you know, it's more focused.
There's just a lot of things.
Like, we know: this is Anthrobot aging.
This is limb regeneration.
This is the new cancer paper.
This is whatever.
Yeah, those things are very linear.
Where do you think the ideas come from when you're taking a walk, the ones that eventually materialize in a voicemail?
Where's that?
Is that from you?
You know, some of the most interesting people feel like they're channeling from somewhere else.
I mean, I hate to bring up the Platonic space again, but if you talk to any creative, that's basically what they'll tell you, right?
And certainly that's been my experience.
So the way it feels to me is a collaboration.
The collaboration is: I need to bust my ass and be prepped, A, to work hard and to be able to recognize the idea when it comes, and B, to actually have an outlet for it, so that when it does come, we have a lab and we have people who can help me do it and then we can actually get it out, right?
So that's my part is, you know, be up at 4.30 a.m. doing your thing and be ready for it.
But the other side of the collaboration is that, yeah, when you do that, like amazing ideas come.
And to say that it's me, I don't think would be right.
I think it's definitely coming from other places.
What advice would you give to scientists, PhD students, grad students, young scientists that are trying to explore the space of ideas, given the very unconventional, non-standard, unique set of ideas you've explored in your life and career?
Let's see.
Well, the first and most important thing I've learned is not to take too much advice.
And so I don't like to give too much advice.
But I do have one technique that I found very useful.
And this isn't for everybody, but there's a specific demographic because a lot of unconventional people reach out to me and I try to respond and help them and so on.
This is a technique that I think is useful for some people.
How do I describe it?
It's the act of bifurcating your mind.
And you need to have two different regions.
One region is the practical region of impact.
In other words, how do I get my idea out into the world so that other people recognize it?
What should I say?
What are people hearing?
What are they able to hear?
How do I pivot it?
What parts do I not talk about?
Which journal am I going to publish in?
Is it time now?
Do I wait two years for this?
All the practical stuff that is all about how it looks from the outside, right?
All the stuff like, I can't say this, or I should say this differently, or this is going to freak people out, or, you know, this community wants to hear this so I can pivot it this way.
Like all that practical stuff.
It's got to be there.
Otherwise, you're not going to be in a position to follow up any of your ideas.
You're not going to have a career.
You're not going to have resources to do anything.
But it's very important that that can't be the only thing.
You need another part of your mind that ignores all that shit completely because this other part of your mind has to be pure.
It has to be, I don't care what anybody else thinks about this.
I don't care whether this is publishable, describable.
I don't care if anybody gets it.
I don't care if anybody thinks it's stupid.
This is what I think and why, and give it space to sort of grow, right?
And if you try to mush them together, I've found that impossible, because the practical stuff poisons the other stuff.
If you're too much on the creative end, you can be an amazing thinker.
It's just that nothing ever materializes.
But if you're very practical, it tends to poison the other stuff, because the more you think about how to present things so that other people get it, the more it constrains and bends how you start to think.
And, you know, what I tell my students and others is there's two kinds of advice.
There's very practical, specific things.
Like somebody says, well, you forgot this control, or this isn't the right method, or you shouldn't be doing it that way.
That stuff is gold.
And you should take that very seriously and you should use it to improve your craft, right?
And that's like super important.
But then there's the meta-advice where people are like, that's not a good way to think about it.
Don't work on this.
This isn't important. That stuff is garbage.
And even very successful people often give very constraining, terrible advice.
Like one of my reviewers in a paper years ago said, I love this Freudian slip.
He said he's going to give me constrictive criticism, right?
And that's exactly what he gave me: constrictive criticism.
I was like, that's awesome.
That's a great typo.
That's very true.
I mean, that second, the bifurcation of the mind is beautifully put.
I do think some of the most interesting people I've met sometimes fall short on the normie side, on the practical side, on having the emotional intelligence of how to communicate with people who have a very different worldview, who are more conservative and more conventional and fit more into the norm.
You have to be able to have the skill to fit in.
And then you have to, again, beautifully put, be able to shut that off when you go on your own and think.
And having two skills is very important.
I think a lot of radical thinkers think that they're sacrificing something by learning the skill of fitting in.
But I think if you want to have impact, if you want your ideas to resonate, you have to have that skill: first of all, to be able to build great teams that help bring your ideas to life.
And second of all, for your ideas to have impact and to scale and to resonate with a large number of people.
And those are very different.
Those are very different.
Let me ask a ridiculous question.
You already spoke about it, but what to you is one of the most beautiful ideas that you've encountered in your various explorations?
Maybe not just beautiful, but one that makes you happy to be a scientist, to be able to be a curious human exploring ideas.
I mean, I must say that, you know, I sometimes think about these ingressions from this space as a kind of steganography.
You know, so steganography is when you hide data and messages within the bits of another pattern that don't matter, right?
And the rule of steganography is you can't mess up the main thing.
You know, it's a picture of a cat or whatever.
You've got to keep the cat, but if there are bits that don't matter, you can kind of stick stuff in there.
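For the concrete technique being referenced, here is a minimal least-significant-bit steganography example, a generic illustration rather than anything specific to the ingression claim:

```python
# A minimal working example of generic least-significant-bit steganography
# (nothing here is specific to the ingression idea): hide a message in the
# bits of a picture that don't matter, without messing up the cat.
import numpy as np

def embed(carrier, message: bytes):
    bits = np.unpackbits(np.frombuffer(message, dtype=np.uint8))
    flat = carrier.flatten()                     # flatten() copies; carrier untouched
    assert bits.size <= flat.size, "carrier too small for the message"
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits   # only the LSB changes
    return flat.reshape(carrier.shape)

def extract(carrier, n_bytes: int) -> bytes:
    bits = (carrier.flatten()[: n_bytes * 8] & 1).astype(np.uint8)
    return np.packbits(bits).tobytes()

rng = np.random.default_rng(0)
cat = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)  # stand-in "picture of a cat"
secret = b"subtle, not invisible"
stego = embed(cat, secret)
print(extract(stego, len(secret)))                             # the message comes back
print(int(np.abs(stego.astype(int) - cat.astype(int)).max())) # 1: the cat looks the same
```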
So I feel like all these ingressions are a kind of universal steganography, that there's this like these patterns seep into everything everywhere they can.
And they're kind of shy, meaning that they're very subtle, not invisible.
If you work hard, you can catch them; they're hard to see, but they're not invisible.
And the fact that I think they also affect, quote-unquote, machines as much as they certainly affect living organisms, I think is incredibly beautiful.
And I personally am happy to be part of that same spectrum.
And the fact that that magic is sort of applicable to everything, a lot of people find that extremely disturbing.
And that's some of the hate mail I get.
It's like, yeah, we were with you on the majesty of life thing until you got to the fact that machines get it too.
And now that's like terrible, right?
You're kind of devaluing the majesty of life.
And I don't know.
The idea that we're now catching these patterns and we're able to do meaningful research on the interfaces and all that is just to me absolutely beautiful.
And that it's all one spectrum, I think, to me is amazing.
I'm enriched by it.
I agree with you.
I think it's incredibly beautiful.
I lied.
There's an even more ridiculous question.
So it seems like we are progressing towards possibly creating a super intelligent system.
An AGI or ASI, if I had one, gave it to you, put you in the room, what would be the first question you'd ask it?
Maybe the first set of questions.
Like there's so many topics that you've worked on and are interested in.
Is there a first question where you really just want a solid answer, if you can get one?
I mean, well, the first thing I would ask is, how much should I even be talking to you?
For sure, because it's not clear to me at all that getting somebody to tell you an answer in the long run is optimal.
It's the difference between when you're a kid learning math and having an older sibling that'll just tell you the answers, right?
Like sometimes it's just like, come on, just give me the answer.
Let's move on with this, you know, cancer protocol and whatever.
Like, great.
But in the long run, the process of discovering it yourself, how much of that are we willing to give up?
And by getting a final answer, how much have we missed of stuff we might have found along the way?
Now, I don't know what the right balance is. I don't think it's correct to say, don't do that at all.
Take the time and all the blind alleys.
And that may not be optimal either, but we don't know what the optimal is.
We don't know how much we should be stumbling around versus having somebody tell us the answer.
That's actually a brilliant question to ask AGI.
I mean, that's if it's really an AGI.
Yeah, if it's really an AGI, I'm like, tell me what the balance is.
Like, how much should I be talking to you versus stumbling around in the lab and making all my, you know, all my own mistakes?
Is it 70/30, you know, 10/90?
I don't know.
So that would be it.
And then the AGI will say you shouldn't be talking to me.
It may well be.
It may say, what the hell did you make me for in the first place?
You guys are screwed.
Like, that's possible.
Yeah.
You know, the second question I would ask is: what's the question I should be asking you that I probably am not smart enough to ask?
That's the other thing I would say.
This is really complicated.
That's a really, really strong question.
But again, most likely you wouldn't understand the question it proposes.
So I think for me, I would probably, assuming you can get a lot of questions, I would probably go for questions where I would understand the answer.
Like it would uncover some small mystery that I'm super curious about.
Because if you ask big questions like you did, which is really strong questions, I just feel like I wouldn't understand the answer.
If you ask it, what question should I be asking you, it would probably say something like: what is the shape of the universe?
And you're like, what?
Why is that important?
Right.
You would be very confused by the question it proposes.
Yeah.
It would just be nice for me to know, straight up, first question: how many living, intelligent alien civilizations are there in the observable universe?
Yeah, that would just be nice.
Yeah.
To know: is it zero or is it a lot?
I just want to know that.
And unfortunately, it might give me a, uh, a Michael Levin answer.
That's what I was about to say: my guess is it's going to be exactly the problem you said, which is, it's going to say, oh my God, I mean, right in this room, you've got, you know... and like, oh, man.
Yeah, yeah, yeah.
Everything you need to know about alien civilizations is right here in this room.
In fact, it's inside your own body.
Just for starters.
AGI, thank you.
All right, Michael, dear one, one of my favorite scientists, one of my favorite humans.
Thank you for everything you do in this world.
Thank you so much.
Truly, truly fascinating work.
And keep going for all of us.
Thank you so much.
Thank you so much.
It's great to see you.
Like, it's always a good discussion.
Yeah.
Thank you so much.
I appreciate this.
Thank you.
Thanks for listening to this conversation with Michael Levin.
To support this podcast, please check out our sponsors in the description, where you can also find links to contact me, ask questions, give feedback, and so on.
And now, let me leave you with some words from Albert Einstein.
The most beautiful thing we can experience is the mysterious.