971 Consciousness and Evolutionary Ethics - A Listener Conversation
I'm calling you from Freedomain Radio. Hello!
Hey, how's it going? I'm not too bad, how are you?
I'm good, good thanks.
That's good, that's good. Just to save us all from carpal tunnel syndrome, I thought it might be the kind of thing that would be better to talk about rather than type about.
Indeed. And yeah, the personal interaction is much more effective, you know, over the phone.
Sure, sure. Absolutely. Absolutely.
And I am recording this, just so you know, if we get useful stuff out of it, it's a good idea to release it as a podcast, if that's all right with you.
I would be honored. Excellent, excellent.
Okay, so you have a thesis about the immateriality of consciousness that, if you could just sort of step me through it, I got, I think, most of it from your post, but just so I make sure I understand it.
Okay. Basically, I was trying to figure out what consciousness was one day, and so I decided to design a robot that was as similar to a human as possible, without actually being conscious.
So like the queen or something?
The queen? What? Just kidding, just kidding.
Go on. Okay. Yeah, my sense of humor is sometimes busted.
Or mine is bad. One of the two we don't know.
No, no. No, I get it afterward.
Right. Anyway...
You see, now I'm getting it.
It's like, oh yeah, that's funny.
Okay, so we designed this android, and I tried to eliminate everything that the android would not be able to do without consciousness, as a way of trying to figure it out.
And I realized I can't really figure out anything that the android can't do without consciousness, unless consciousness is somehow not a physical property, basically.
Right, right.
That makes sense. Like, unless there's magic pixie dust in the human brain which produces consciousness that can't be replicated any other way, then if you design an android or an android is brought into being that has all of the physical attributes of consciousness, then it should possess consciousness,
right? Indeed. Yeah, I mean, that seems perfectly reasonable to me, and another way of looking at that is that when we go out into the intergalactic federation and we meet other beings, perhaps even silicon-based, that have developed all the properties required for consciousness, then they should be conscious too.
You know, if you photocopy a brain, you should get consciousness, right?
Indeed.
But then if you think about that some more, and you say, "Okay, I'm now going to remove things from the android until it's not conscious." Correct.
And now I'm nervous, I've forgotten my argument.
Oh, no problem. Take your time.
This is complex stuff, so no rush.
Alright. If you want to, you might want to bring up your post.
Yeah, there we go. Because it's always hard to read.
Sometimes people ask me about UPB, and I get a little brain seizure.
It's like, yeah, it seems kind of true, but now I can't think of exactly what it is.
So, it's understandable.
Okay.
Right.
So, we've made this android, and it can do everything a human can do.
And we try and remove consciousness from it.
But we find that we can't.
Because it's...
It's a brain that we've designed.
There's no particular component we can take out.
We have to completely cripple it so it stops being conscious.
Sorry to interrupt, but I just kind of want to make sure that I understand that point.
So we have a fully conscious android brain. Because when we look at human beings, certain brain injuries or a frontal lobotomy or things like that can reduce our capacity for consciousness.
And it sort of goes down the scale of intelligence, right?
So if you have somebody who has...
An IQ of 150, then they're really smart, and then as you scale it back down to 100, to 90, 80, 70, and so on, you get diminishing levels of consciousness until you end up with, I don't know, like a brain-dead coma victim or something.
So I'm just trying to, like, the way that I would analogize it in my mind, rightly or wrongly, is to say that as we take away aspects of the android's intelligence, it would be like going down through the scale until you end up with the intelligence of, I don't know, like a smart chimpanzee or, you know, down through the human mind.
Would that be reasonable or no?
I'm going to mention a very fascinating study that was done.
There's this particular brain injury that you can get, and it wipes out conscious perception of one side of your visual field.
One side of your visual field, right, okay.
So, like, if you ask the person, hey, what's over there?
They're like, I don't know, I'm blind over there, you know, brain damage, haha.
But if you punch them, like if you punch at them over there in the blind spot, they will flinch.
Cool. Yeah, this is the Oliver Sacks stuff, because I used to read some of this stuff when I was younger.
It's really neat stuff. I don't know.
I saw it in New Scientist.
Ah, right, right. So, what we know is that we can process information without it being conscious.
I am kind of stretching this here.
You'd probably really want to do a lot more tests, but I think from the evidence that we have that you can process without consciousness.
Oh, and I would fully, fully support you on that, insofar as, you know, in some of the work that I do in psychological areas, you can have, you know, somebody claims to love his wife and truly believes that he loves his wife, but then has a dream that he wants to kill her, and then as he goes down the road, he finds out that she's kind of mean and sarcastic and puts him down, or whatever, right?
So, I'm certainly a full believer that we have massive horsepower that goes below the surface.
Right, so... The question then is, you know, how much processing can we do without consciousness?
And basically, I don't see any reason why we can't do all of it without consciousness.
I don't see what exactly...
Again, I just want to understand what you mean by that.
So, would you say that a dream that you would have at night is the same as processing without consciousness?
Well, it has...
Well, physically speaking, it has consequences, right?
Like, it's not just... There's a dream and you experience it.
There's a dream and there's memories laid down and you may act on it, and so on.
And these could be, I'm not a determinist, but just for the sake of argument, these could be deterministically done.
Like you could implement in a computer all the same impulses as the brain, as the dream, and have it lay down the memories, and have it act on those memories much as a human would, but instead of doing it through conscious choice, it did it because you programmed it to.
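As a toy sketch of that point, and only a sketch with invented names: a program that "dreams" by laying down memories and later acts on them, all deterministically, with no experience anywhere in the loop. The spouse scenario echoes the example given later in the conversation.

```python
# Toy illustration of the "deterministic dream" point: impulses are recorded
# as memories and later behavior is conditioned on them -- programmed rules,
# not conscious choice. All names here are hypothetical illustrations.

memories: list[str] = []

def dream(impulses: list[str]) -> None:
    """'Dreaming': internal impulses get laid down as memories, mechanically."""
    memories.extend(impulses)

def waking_behavior() -> str:
    """Later action is a deterministic function of the stored memories."""
    if "spouse-as-threat" in memories:
        return "grows wary of spouse"
    return "carries on as before"

dream(["spouse-as-threat", "falling"])  # the same impulses every run: fully deterministic
print(waking_behavior())                # -> grows wary of spouse
```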
Sure. No, I can understand that.
This would also be like the effects of childhood propaganda, as we see within society with the belief in the necessity of the state.
And so it's all just taken for granted, and people don't even think about it, in the same way that in the Middle Ages they really didn't think about the non-existence of God, because they were never exposed to the idea.
But I just wanted to understand, because you're saying processing without consciousness, and I just want to make sure that I understand what you mean by that.
So you would include at night, because obviously we're not conscious while we're dreaming, but there's a lot of processing going on.
Are we conscious while we're dreaming?
That's actually a good question.
Because... Okay, I'm going to define consciousness.
Okay. Consciousness is stuff we experience.
So we experience a dream.
So we're conscious while we're dreaming.
It's just like a separate kind of consciousness than waking consciousness.
We're not conscious... Oh, so it's experiences that we process at some level that we can remember.
Is that right? Sorry?
Is it experiences that we process that at some level we can remember?
Yes. Basically, yes.
Okay, got it. Yes.
I was just thinking about sleepwalkers, and I'm like, are sleepwalkers conscious?
That's interesting. But, like, and then during deep sleep, you're definitely not conscious, because you don't lay down any memories that you can access while you're sleeping deeply.
Right. So, yeah, so if we can...
Okay, and here's the thing.
We either can implement dreams deterministically in silicon, in such a way that they're indistinguishable from human dreams, or we can't.
And if we can, then consciousness is an extra property.
It's something that the brain does that's...
Magic. That's like magic, yes.
Well, it's not like magic. To me, it would be like if you replicate the brain at every level and you don't end up with consciousness, then consciousness is a kind of pixie dust, right?
Which would, to me, be entirely mystical.
Yes, and I certainly don't believe that.
I'm just saying that I can't rule it out because we'd have to actually build this thing to know.
Well, I would like to say that I would feel more comfortable ruling it out, and that may be my own hubris or whatever, but I would feel more comfortable, because for me, an explanation which is magic pixie dust is never an explanation, right? So, I would never be satisfied with that and say that that's an answer, because there always must be an answer that is not, it's magic, you know?
Like, that's what I've always tried to sort of work with in theory of ethics and so on.
So, anyway...
So that's eliminated.
But if we can't do it, if we can't implement a dream in a deterministic way in an android, then what we have is...
If you want, we can actually make the androids use neurons for processing.
I believe this is theoretically possible.
Yeah, that would be closer to what it is we're trying to...
Like, rather than try to build a car out of balsa wood, we should build it out of the real materials.
It's going to be closer, so...
Yeah, okay, so if we do that, if we try and do it deterministically and we fail, now let me think about this for a second, make sure what I'm about to say makes sense.
How would we fail? I'm not sure how that would fail.
If we tried to build the android and we can't in any way, shape, or form reproduce human consciousness despite having a functional copy of the brain in an android system, is that right?
Then we would fail because there's some magic pixie dust to do with consciousness that we can't figure out.
Well, then what's happened is that there's an unknown in the brain, actually.
But it's not part of the neurons.
It's not part of anything physical that's going on.
It's some... It's actually gone beyond that.
Basically, the neurons in the brain have done a weird dance and have now contacted something beyond physical.
Yeah, and if I understand, to analogize this, and let me know if this works for you or not, if I assemble a cow cell by cell, right, and then put the energy into it that's the equivalent of a cow's energy, I should get some substantial methane farts, right?
Like I should get a living cow that moves around and moos and so on, right?
But if I do everything...
That is the same as a cow and run the electricity through it or whatever, and it doesn't come alive, then there's something else that's going on.
Would that be a fair analogy?
Yes. That is everything I said.
I suddenly realized I want to add something.
So we build a cow cell by cell, and we put the energy in and we get farts.
Or... We've both basically ruled that out.
That's the pixie dust. We'll rebuild it with the same cells, but not necessarily cell by cell.
We just put all the components in, as far as we know.
And it mostly farts, but it doesn't quite work.
It starts farting like the tubes get crossed.
It doesn't quite work.
Right, or it's unable to reproduce or it gives birth to a cow with three heads or something like that.
Right. Right. It's like if I type in a computer program that you've written, it should be the same, right?
It should work exactly the same.
Right. And so if we do that, if we build an android with a neuron brain and we can't make it conscious?
Then we've missed something, but we use neurons.
What can we possibly miss?
Right, right, right.
And this is where a religious person would step in and say, well, you can make the body, but you can't make the soul, right?
Yeah, and I personally think that you can make the soul.
Yeah, I agree. We'll go with that.
I think my mom and my dad did, just didn't know it.
Yeah, I mean, I think we certainly can.
If you can reproduce everything, then you should get the same stuff, right?
Right. I mean, with the minor caveat that if free will is a valid position, which I think you and I would both agree it's a reasonable place to start from, then you would have all of the capacity, but you may not get a photocopied life, right?
At least you probably wouldn't. Right, naturally.
Actually, my proof of free will is three lines long.
My first axiom is trust your senses.
So, it's like, assume I trust my senses.
Do I have free will?
Yes. Therefore, I have free will.
It's kind of fun. Right.
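That three-line argument can even be rendered formally; here is a minimal Lean sketch, with both premises ("trust your senses" and "my senses report free will") taken as assumed axioms rather than established facts, so the proof is only as strong as the assumptions:

```lean
-- A minimal formal rendering of the three-line argument above.
-- Both premises are assumptions (axioms); this shows the argument's
-- shape, not a settled result.
axiom SensesTrustworthy : Prop
axiom FreeWill : Prop

axiom trust_your_senses : SensesTrustworthy                   -- premise 1: trust your senses
axiom senses_report_free_will : SensesTrustworthy → FreeWill  -- premise 2: they report free will

theorem i_have_free_will : FreeWill :=
  senses_report_free_will trust_your_senses                   -- conclusion follows in one step
```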
That's very, very succinct, I must say.
So, which two possibilities?
We've got the...
We can't make a...
Well, you had started, and this is, of course, perfectly valid, right?
But you had started by saying that this position was in opposition to a position that I had held or explicated or expounded in some way.
And I wasn't sure that I followed which position of mine it was inconsistent with.
I'm not sure it matters yet, because if you agree, it matters, and if you don't agree, it just doesn't matter.
True. If you want me to go with it anyway...
No, no, that's fine. We can go ahead with the conversation.
It doesn't particularly matter.
I was just wondering if it was on the top of your head, but please, go on.
All right, so we've talked about...
Okay, so we...
So we've got the...
Just while you're looking, I'll throw my two cents in, because I think that we may be pursuing a line of thought that has had some exploration before. Aristotle called it the essence of a thing, right?
So he'd say, well, if you see a baby, you look at it and you say, hey, that's a baby, right?
And then if you see a baby that's 20 feet tall, you say, oh my god, that's an enormous baby.
And then if you see a baby 20 feet tall with tentacles coming out of its belly that's blue, you'd say, oh my god, it's a 20 foot tall blue tentacle sprouting baby.
But at some point you will add something where you'll say, I don't know what the hell that is.
That's some sort of freaky thing that I can't even figure out.
And that last thing that changes, he would call the essence, right?
When something changes from one thing to another, it's the essence of that thing.
And in some ways, I think what we're trying to do is say, at what differentiating point is consciousness no longer recognizable as consciousness?
Sort of. That would be nice, yeah, if we could do that.
And this is the kind of thing where when we look at a dead guy, we say, well, that's not conscious, right?
And when we look at a guy who's giving a lecture, we say, that's conscious, right?
And there's some parts in the middle.
That we can't figure out, right?
I mean, or that are in a continuum that there's a gray area, right?
Like, we can look at a child and say, that's pre-puberty, and then we can look at a 20-year-old and hopefully say, that's post-puberty, but there's a gray area in the middle where it's like, kinda, kinda?
And I think the same thing is probably true of consciousness.
Okay, so we've got the...
Alright.
So we've got the android that...
That fails to be conscious.
We've got the android that's conscious and I'm going to prove that both of these mean that something interesting is happening.
So if we've got the android that is conscious, and basically we start messing around with its modules a bit, and we can't find the one that actually does consciousness, and we can't find the relationship that actually does consciousness, then consciousness is kind of like a magic pixie dust.
There's no objective test to prove that it exists or not.
That's true, but that of course would be, that in a sense is the argument from omniscience.
Like if we knew every possible variable, but we still couldn't figure it out.
But then the argument would, I think, be more strong to say, well, if we can't figure it out, then we don't know every possible variable.
Like, in a sense, we know that we know every possible variable when we can figure it out.
So I don't know that it would be a strong argument to say, we know every possible variable, but we can't figure it out.
Because then the fact that we can't figure it out would be the indication that we don't know every variable, that there's a variable there that we don't know about.
Okay. So that's why I have to then consider the option where we can get rid of consciousness, but we can't quite figure out why.
Right. So we're in possession of all the variables, but we still can't reproduce consciousness.
No, we're...
Hold on. No, we have an android brain, and we can't quite...
Wait a minute. So we can't remove consciousness, the thing with the omniscience, but the opposite is that, like if we're not omniscient, we just have to assume that we happen to find it.
We happen to find that thing that does the consciousness.
And I think there is actually like a lobe in the brain that does consciousness.
Right. That like, that does, you know, you damage it and you lose part of your visual field and so on.
Right. And so we take that out and then we find out what the android can still do.
Right, so you can't see a ball but can catch a ball or something like that.
Yeah. And there's only two possibilities.
One is perfectly functional.
In which case, the consciousness didn't do anything.
Right, it's just like imaginary icing on a cake, right?
Right. And second, it does do something.
Alright, here we go.
Right, and if I understand it correctly, the example that you gave earlier of the brain that was flinching from a blow that it could not see, I think, of course, one of the evolutionary challenges of that would be that that is reactionary rather than initiatory.
So you can flinch from something that's coming at you, but you can't see where to pick the apple, so you can't eat, right?
So if the body could still react to stimuli that was coming in despite no conscious processing of it, that would certainly be something that would be occurring, but the evolutionary use and value of consciousness would be to initiate action towards a goal.
I like that idea on one hand, and on the other hand, I'm going to say, what if, in response to hunger, you flinch toward where you think an apple tree is, and in response to seeing the apple, you flinch in a very complex way such that you grab the apple and start eating it.
That is certainly possible, but if you did not know where the apple tree was and could not consciously process visual stimuli, then you wouldn't be able to do that, I don't think.
We've got an android that's doing this, right?
We can just program in and give it the coordinates of the apple tree, and then it does something that we can't functionally distinguish from consciousness.
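To make that concrete, here is a hedged toy sketch (all names and coordinates invented) of a purely reactive agent: each rule is a stimulus-to-flinch mapping with the apple tree's location programmed in, yet from the outside its behavior looks goal-directed.

```python
# A purely reactive "flinch" agent: no deliberation, just hard-wired rules,
# including the programmed-in coordinates of the apple tree.

APPLE_TREE = (3, 4)  # coordinates given to the android, not consciously perceived

def step_toward(pos, target):
    """One hard-wired 'flinch': move a unit along each axis toward the target."""
    return tuple(p + (t > p) - (t < p) for p, t in zip(pos, target))

def reactive_agent(pos, hungry):
    if hungry and pos != APPLE_TREE:
        return step_toward(pos, APPLE_TREE)  # flinch toward where the tree is
    return pos                               # at the tree, the grab-and-eat flinch fires

pos, hungry = (0, 0), True
while pos != APPLE_TREE:
    pos = reactive_agent(pos, hungry)
print("ate the apple at", pos)  # goal reached with zero conscious processing
```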
Well, yes, but then I would say that by typing in the coordinates, we are actually a substitute for the conscious senses, if that makes sense.
Well, but then we just pull the flinches back, right?
We say, okay, you're born, and then you flinch and ask your parents where apple trees are.
Right, right, right.
But then there's a bit of an infinite regression thing, because someone at some point has to not flinch, but actually know where the apple trees are and so on.
Yeah, well, that's a problem with the creation of consciousness in the first place, right?
Which I don't think anyone's really competent to examine, because think about this.
There's basically, there was one animal, and it was not conscious, and then it had a baby, and that animal was very slightly conscious.
Right, that's freaky as all get-out, isn't it?
But that's like the life thing, too, right?
Well... I can totally see the life thing, though.
Like, life appearing, that's cool.
It's really, really cool, but it's just, you know...
Well, I agree with you, but to me, the life thing is bigger than the consciousness thing, because it's like, nothing moves, nothing moves unless some outside force acts upon it, and then, bam, hey, we've got stuff moving with no outside force acting on it, so to speak.
And that seems to be pretty wild.
I don't think that's true, actually, because the outside forces, in the case of small cells, they're like chemical concentrations.
It's just that suddenly the matter can tell.
It just self-organizes into something that has a...
How do I put this?
Well, okay. Like a cell code of power, like it moves and it hunts and it reproduces and so on.
Yeah, but the same with consciousness, we start with just, you know, one function.
And I don't think it matters particularly what function this is.
But basically, like, you have basically a bunch of fat molecules just floating around.
And they like to form into globules because that's what they're like.
And they just kind of float around and it's not alive.
And then it absorbs...
Some proteins and suddenly these proteins cause it to move towards other proteins that cause it to move toward other proteins.
And so now it seems to have a goal.
It wants to concentrate this protein which is against entropy in a closed system.
And so we go from, you know, basically completely not life to something that is life, but it's still mechanistic.
And we can just keep adding these components until we get something that looks like a cell.
Yeah, that's true. That's a good point.
But then consciousness, as you say, is like a completely...
I mean, there's lots of life, but there really is only one human consciousness, right?
It is the unique thing in the universe as far as we...
Right.
Oh, well. That's actually a good question, whether it is or not, because...
The real reason that I know that you're conscious is because I can talk to you.
I can ask you questions, and you respond in a way that is familiar to me.
And so I know that you share some properties that I have.
Right. Hopefully not over-familiar to you, otherwise my show means nothing.
But go on. Yes, indeed.
Well, actually, no. You see, when you're familiar, it's like, ah, right, so now I know you're consistent.
And then when you're unfamiliar, you're like, well, I don't know that, but he was right before, so he's probably still right.
Anyway, but we can't really talk to a chimp.
So it's hard to tell if they're conscious or not.
And then we really, really can't talk to a bug.
Right. There is a test you can do.
I'm not sure if they know what attention is.
This is the only problem. But you can observe fruit flies paying attention.
They put down their books, right?
Well, what they do is they trap it with a bit of glue and they put it in a tube facing a screen.
And then they just, like, play blocks going by it and see how the brain activity changes.
Right. And it seems like the fruit fly can choose which block it pays attention to.
Huh. Why else?
Yeah, it's a very basic form of consciousness.
Like, I'm not sure it would actually be able to do anything with its attention, but it can actually pay attention.
Well, certainly, I mean, even down at the single-cell tapeworm level, so to speak, You have to differentiate between food and danger, right?
I mean, so there has to be some level of focus.
But selective focus between the same foods would be something, I guess, more advanced.
I'm going to think about that.
Because I want to... So, anyway, the initiation of consciousness.
And so, we have something that's not conscious, and then we have something that is conscious.
And... And I forget where I was coming with that.
Why were we talking about that?
Something that... Oh, you were talking about...
I mean, the... The issue as a whole is this differentiation between non-consciousness and consciousness, and certainly it would have stepped up through a ladder, right?
Evolution tends to work in a real, slow, painful, step-by-step process.
There's not... Like, you don't just sort of go from cave fish to guy with eye, right?
I mean, it's sort of slow...
I mean, guy with one eye, maybe, but not two.
That would be really funny, actually.
It's just, you know, all these fishes swimming around, and suddenly, you know, one fish says to the other fish,
Jenny, my spawn seems to have arms.
And he wants to read a book.
I don't understand.
And Jenny's like, I know, mine has three horns.
What's going on? Right.
So it climbed up the evolutionary consciousness ladder, like step by step, that ladder went up.
And then at some point, and there's this guy who wrote The Origin of Consciousness in the Breakdown of the Bicameral Mind.
I've had it on my bookshelf for like 10 years, and I haven't read it yet.
Richard Dawkins talks about it quite positively: that it is when you become an observing ego to yourself that this massive leap occurs. And I'm just going by my wife's practice here, she practices psychology, but there does seem to be a correlation between the capacity for an observing or critical third eye, an ego, and intelligence, right?
Like therapy and self-knowledge and so on are kind of elite activities; you don't get a guy who's got an IQ of 90 going into therapy and being successful, right?
Like it takes a fair amount of intelligence to be able to put yourself in other people's shoes, to look at yourself in the third person, to compare your behavior to abstract standards and improve and so on.
So even within the human mind, even within the human society, there seems to be quite a spread in terms of what people are capable of.
Now, it's not a one-to-one correlation.
I'm sure that Paul Wolfowitz, though having a high IQ, has the wisdom of a rodent, and that's not very complimentary to rodents, but there does seem to be some correlation even within the scale of conscious capacities within the human species.
Some people have extra features.
Yeah, that's right. They've had the upgrades.
And not the stinky Vista ones, but the real good Leopard ones.
Alright. So...
Alright, this is, I admit, like, when I thought of this idea, I was definitely having a peak of mental acuity.
Now I'm a bit lower and it's taking longer.
All right.
So-- Oh, well, I hope you don't mind.
I'm going to repeat myself, and then I think I'm going to remember where I was going with the origin of consciousness thing.
I would be the last person to criticize somebody for the repetition in a podcast, so please.
Right. Okay.
So anyway, we have this android with a meaty brain, and we take out the consciousness lobe, and we notice what's different.
This actually should be an area of study, because if it's nothing...
Oh, wait, no.
If it's...
Well, I mean, if we remove the part of the brain that deals with consciousness and we can still function as if we had consciousness, then it's almost like a tumor, right?
Because it's taking resources and producing no value, right?
Alright, so we take out the consciousness module, and the thing is, if there's any single action that the android now can't do, like...
I'm trying to think of a good action, but we're going to start with a bad example.
Say it just can't brush its teeth anymore.
It's just incapable of brushing its teeth.
Brushing your teeth is a purely mechanical action there.
So I don't see what exactly is so special about the consciousness module that you suddenly can't do these perfectly mechanical things.
So perhaps you can think of a better example.
You mean in terms of an activity to do with consciousness?
Here's another problem that I have.
We do do things with consciousness, but we don't necessarily know that we have to have consciousness to do them.
It might be one of...
Two or more possible ways to do it.
Right, right. I mean, I think that we can do a differential diagnosis, so to speak, and we can say, well, we have some...
Sorry? I'm losing you.
Oh, sorry. We can do a differential diagnosis, and we can say, well, a chimpanzee does not have consciousness, and the human mind does appear to have consciousness, so we can look at the differences between the human mind and the chimpanzee mind, or brain, and say, well, there must be some physical basis for these differences.
It can't just be, you know, God spat a soul down our gullet when we were born or something.
There must be a physical basis to the differences in mental capacities.
So we can do that.
I mean, that's sort of just like comparison kind of thing.
One thing can do this, one thing can't.
So let's look at the physical differences between the two.
That may be a place to start.
I would sort of say, though, that, for instance, it seems to me, at least in my experience, that human beings have a fetish, a drive, almost an obsession, I would say.
I don't think it's unhealthy.
We always, always, always seem to have to compare our actions to a universal standard and say that we are acting in accordance with a universal standard, right?
So when you say to anyone, why did you do what you did, particularly in the realm of things which may be moral, then they will almost never come back and just say, I did it because I wanted to or I did it because I did it.
But rather they will say, I did it because it was right or good or practical or provoked or necessary.
We as a species seem to have this absolute need to compare actions and justify them according to abstract and universal standards, like truth and right and wrong and all that kind of stuff.
And for me, if somebody was able to function in the world, in society, and never once exhibit that behavior of saying, I did X because it was justified by a universal Z, that to me would be an indication that they had lost a pretty primary aspect of consciousness that is unique.
The shark doesn't say I ate the fish because the fish was taunting me and it's immoral to let that happen because you've got to stand up for yourself or something.
They just eat the fish because they're hungry.
But human beings, we're always comparing our behaviors to these abstract standards.
No political leader has ever declared war because he said I feel pissed off.
They always say it's because we're about to be attacked and we're defending the people.
There's always these moral justifications that occur.
For everything. And so to me, that would be an aspect of consciousness.
If you were to remove a part of the human mind so that somebody no longer felt the need to justify his or her behavior relative to an abstract standard, then that to me would be a very core differentiator.
And if they could do everything else and function well, that would be more of a mystery to me.
Alright. Just a heads up, you're kind of fading into weird...
robot-y talk. I can still mostly understand you, but it's...
Okay, let me just make sure I shut down everything so that...
I don't think I've got anything else in terms of internet, but let me just double-check.
Because my bandwidth may be being hogged by something, but let me just...
Let me just check.
Do-do-do-do-do-do-do.
Okay, let me just turn this.
I've got a VPN on.
There you go. So let's see if that does it.
Sorry, go ahead. All right.
Okay. So, but how are they going to act differently?
Okay. Sorry to interrupt.
They would do what they would do, but they wouldn't justify it according to any standard.
So the differentiator would be, instead of George Bush standing up and saying, well, we have to because Saddam Hussein is flouting the laws of the civilized world and he's got weapons of mass destruction and he's going to launch them in Baltimore and I have to protect my people and we regretfully use force to save civilization and blah, blah, blah. If he just said, I hate the guy, he tried to kill my dad, so we're going to go and wipe him out.
Like, if he just, without any justification, without any reference to abstract principles, if he justified his actions, or rather, no longer justified his actions, but didn't feel the need to give any ethical or objective reasons for why he was doing what he was doing, but just went and invaded Iraq anyway, that to me would be a very interesting aspect to take out of human consciousness.
So you'd still end up doing the same stuff, you would just not have all of the mealy-mouthed justifications that most people use, right?
Alright, now I understand what you're saying.
So, this is what we do.
We take this module out of the android's brain, and then we ask, why can't it justify its actions?
Not, you know, it doesn't.
Whatever. It actually can't.
Right. Why can't it justify its actions?
Right, right. Right. And I would say that if we left the goal-seeking aspect of the android's brain intact, then it would immediately have to start justifying its actions or be unable to achieve its goal.
And the reason that I say that is that the reason that George Bush came up with all of these moral justifications for this genocide in Iraq...
Was because otherwise he would not be able to invade Iraq, right?
Like if he just said, look, I hate the guy and I want to control the oil, right?
Then nobody would have gone to the war.
Nobody would have approved the war.
There would have been a revolt, right?
So he would be unable to achieve his goals.
So I think the android would either become completely ineffective or it would start to realize that it couldn't get what it wanted without appealing to these abstract justifications.
So couldn't-- OK.
No, OK.
And just to give you another example of that, in my book on truth, I sort of talk about how parents use ethical justifications, not because they understand anything about ethics, but because it works in terms of controlling children.
So it would be interesting to see how parents would parent in the absence of Of appeals to right and wrong, good and bad, or ethical absolutes or universal values of any kind.
I think it would be impossible to parent, in fact, without that.
I mean, that's why I have some sympathy for parents who've had to put up with all this religious nonsense, in terms of the things deemed necessary to control or manage or optimize children's behavior within a society.
Impossible in what way, though?
As in, their children would fail to obey them, sort of thing?
Or... Their children would be just as confused as they were.
Well, I would imagine that the children, because children are born to some degree selfish, right?
And there's nothing wrong with that. That seems entirely productive, right?
But we have to train children to have empathy by snatching and...
Sorry, what did you say?
We have to train children to develop empathy by telling them that it's morally wrong to hit other children and snatch their toys, right?
Right.
Now, we could make the argument and say, well, it's not that it's morally wrong.
It's just that it's impractical because in 20 years, if you still have these habits, you won't be able to get a job.
You won't be able to have a relationship.
But children, of course, don't care about 20 years from now, right?
I mean, that's part of the joy of being a kid is living in that eternal now.
So I think that without appeals to moral rules or ethical absolutes or standards, it would be impossible to parent because you would just end up having to bully your children, right?
I mean, a lot of parents do that too, but they mostly bully them with ethical rules.
So wait a minute. Sorry, just saying that if we take out the part of the brain that would feel the need to justify decisions based on reference to universal values, then parenting would, I think, become largely impossible.
The teaching of children would become largely impossible.
All of these sorts of things would occur that would be, I think, largely impossible.
So, but without bullying, I'm...
It would still be difficult even with bullying.
So what we would say here is the parent would say to the kid, you know, don't hit your brother.
And the kid would be like, why not?
And the parent would be like, because if you hit your coworkers in 20 years, you're going to get fired.
And the kid's going to be like, I don't care about that.
And then the parent is just going to have to be like, alright, well I'm going to beat you up then until you can't hit your brother.
And they're going to have to keep doing that, right?
Right. And then the child is going to grow up and end up being stronger than the parent, right?
So that's not a strategy that's going to work, right?
Plus, if the rule is you should not hit others, the child is immediately going to compare being beaten with the rule, don't hit others and say, well, you're a hypocrite and blah, blah, blah, right?
Wait, sorry? Um...
If the parent has the rule, don't hit others, and then pounds the child into submission to that rule, the child will grudgingly submit, but will be entirely bitter and say, well, if the rule is don't hit others, why are you hitting me?
Well, yeah, but I thought we were talking about people who were missing the moral sense at the moment.
Yes, yeah, no, that's right.
Then the children would bow down to the force of the parent, but they would continue to do whatever they felt they could reasonably get away with, right?
So it just couldn't ever work, right?
They'd just get up in the middle of the night, hit their brother, and then sneak back to bed, and it would be an unmanageable situation.
Very much like the situation in, say, crocodiles, for instance.
I'm pretty sure that, you know, if the little crocodile wants to do something that the parent crocodile doesn't, they do just have to bully them.
And of course, once they grow up, they can't do that anymore.
In fact, you'll see all these fights where they're like, I want to do that.
And they're like, no, I don't want you to.
Let's try and kill each other. Right, right, right.
And that's a pretty primitive state to be in.
And I would say that if you could remove that part, that would be a pretty differentiating aspect of consciousness.
And I think an enormous amount of other things would fall away.
I don't think you could remove that part, because if you think about it, even something like engineering is the comparison of something to an external rational objective state, which is consistency with specific gravity and density and mathematics and so on.
So human beings, I think, would be unable to function beyond an animal level if we did not have the ability to compare our thought processes to an objective and universal standard.
Wow. So I think you just solved an interesting question.
I'm not sure if you did it on purpose or if you were doing it in a roundabout way.
So what you've just said, I think you did say this before, is that humans evolve morality.
And it was basically so they didn't have to beat up their kids as much, because that's really not effective.
And of course, it also gave us the ability to resolve conflicts when we're older without hitting each other.
Yes, and of course, as I've tried to work through in the UPB book, morality is logically consistent and human beings have a fetish for logical consistency, right?
Whether the premises accepted are ridiculous or not, like God died for your sins or whatever, but we still do have a fetish for logical consistency.
I would say that's a pretty delineating aspect, which is the comparison of our thoughts and words to an abstract standard.
So what we've got then is morality evolved for one reason, and we've now hijacked this propensity for universalizing principles into science.
Well, I'm trying to do that, yeah.
I mean, I would say that morality evolved because it's the most effective control mechanism.
I've just got this new book, Real-Time Relationships, where I talk about it.
It evolved because it was the most powerful way of controlling other people, because it makes slaves attack each other, which lowers the cost of ownership for slavery.
Also helpful. Except that if you take it to its final conclusion, you realize that you can't have slaves.
So, yeah, about that.
Right, so I'm trying to sort of move that debate from the realm of religion, which is the magic pixie dust.
Why be good? Because God tells you to.
Well, that's not an answer, right?
That makes no sense at all.
That's just passing the buck.
Well, God tells you to. Well, how does God know?
Right, and why does God perform opposite actions to the ethical instructions he gives us, and yet we call him good?
All of these sorts of things, right?
So, go ahead.
I think that's awesome. It's like, And now I know where science comes from.
I think that's really cool.
Go on. Hey, if I said something cool, I always like to know.
Well, yeah, science is just...
We didn't evolve science.
There was no evolutionary pressure to make science.
It just happened to be really, really successful, but it came originally out of morality.
Actually, that makes a lot of sense, because if you think about it, our morality is right in the brain.
We have a moral grammar, as I'm sure you know.
Oh, yeah. And so, like, for...
We've been a species for, what, 100,000 years, or is it a million years?
I think it's 100,000 to 200,000, depending on who you talk to, but it's something around there.
I don't think it's a million. Right.
So, but only in the past, what, like 5,000, 8,000 years, have we had anything like technology?
Right. And you could really say that the modern scientific revolution is, you know, 16th century, like 400 or 500 years.
Right. So, that's like a...
It's a classic example of evolution hijacking stuff that itself has made for something entirely different.
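Back-of-envelope arithmetic with the round figures quoted above (the conversation's numbers, not precise anthropology) shows how thin a slice of the species' history these are:

```python
# How small a fraction of our history as a species technology and science occupy,
# using the rough numbers from the conversation above.
species_years    = 150_000  # midpoint of the 100,000-200,000 range mentioned
technology_years = 6_500    # midpoint of the 5,000-8,000 range
science_years    = 450      # "400 or 500 years" since the scientific revolution

for label, years in [("technology", technology_years), ("science", science_years)]:
    print(f"{label}: {years / species_years:.2%} of our time as a species")
# technology: 4.33%, science: 0.30% -- far too brief for evolution
# to have selected for either directly
```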
Oh, sure. Yeah, no. I mean, what I think was that we developed this weird capacity in our brain to compare our thoughts to an external standard.
And then, through that, we became susceptible to moral pressure, and therefore people developed morality as a form of dominating mythology, and they used that to crush and control other human beings.
And that's why we lost the equality of the tribe, the relative equality of the tribe, and evolved in things like the nation-state of the Egyptians, because they had this incredibly powerful tool called moral guilt and moral control, and so they used that.
And what, at least, I'm sort of straining my brain to do is to say, well, yes, we do have this innate desire to compare our thoughts and behaviors to an objective standard, so let's work like hell to make that standard as objective as possible, because the only other alternative is to be exploited.
I've thought about this as well, and the question to me is always: at some point, I would have thought, someone first thought of oppressing their fellow tribesmen as a state.
Why didn't their fellow tribesmen resist?
Why didn't they say, you know, this sucks?
I'm just going to stab you with a spear.
And we'll see who's arguing about what now.
And this is the thing, right?
Because there would only be one guy that would come up and be like, aha, I can oppress you all as a state.
And all 30 other tribesmen would be like, um, about that.
Right, right. Well, I would say that it's because we all start in a state, right?
We all start in a family.
And a family is not a situation of equality, right?
Parents and toddlers. We all start in a state, in a government, so to speak.
Where our parents, where we are not competent to make long-term decisions for ourselves, where our parents do have to control and manage us.
And, you know, there's all these other good things that parents should do, like consult and explain and so on.
But basically, it is a strictly biologically hierarchical relationship.
We all start in ignorance, right?
And children have some aspects, like they're frightened of thunder or of boogeymen under the bed and so on, right?
So we all start in a state of superstitious statism, so to speak.
And, of course, the whole purpose is to outgrow all of that.
But because we haven't found a way to... sorry, since we haven't really found a way yet to have the family not represent a state, or to have the family train the child to outgrow this hierarchy, we end up with this hierarchy reproduced in society.
And it seems to be believable because that's what we grew up with and we never outgrew it.
Would you agree with the statement that...
I think you've answered my question.
If you would agree with the statement that maturity...
Emotional and mental maturity does not come automatically, and we're fooled into thinking it does because physical maturity most certainly does come automatically.
I would not fully agree with that.
And the reason that I would say that is because the amount of effort that is poured into retarding children is so extraordinarily high.
Like you say, well, human beings, I'm not saying you would say this, but you could say in the Soviet Republic, you could say, well, look, everybody is naturally a communist because look at the fact that everybody believes in communism.
And you'd say, well, yes, but you subject these people to violent propaganda from the age of two to the age of 20.
Right.
So, I mean, if you genuinely believed that human beings were naturally communistic, then you would not need to subject them to propaganda.
Right.
So when you look at the amount of energy, both from religion and from statism and to some degree from the cult of the family, that is poured into remolding, reshaping, crushing, you know, undermining the minds of children, I think that it's very hard to say what they would evolve to in their natural state, but it sure wouldn't be anything like what we get today, because otherwise there'd be no point investing all of this.
If everybody just grew up and wanted to pay taxes, then the government wouldn't need to have all of this nonsense with public schools and propaganda and so on.
I'm still kind of curious about... how do I put this? So I'm guessing then that, like, the state started... I don't know why I'm so obsessed with this.
Actually, I do know why. Never mind.
The state started with this one guy being like, hey, there's this god, and you should follow what he says, and also, I'm the only one who knows what he says, haha.
And he did that either to people who hadn't fully outgrown their parents, or he did that to people who hadn't actually completed the maturity process.
Like, he started with some natural... like, say, someone was traumatized, not by their parents, but by, like, a tiger attack, and so they got a bit retarded because of that.
And so that's what he would have had to start with.
It's like either children or with people who had been traumatized for whatever reason.
Because I really don't see how or why a fully mature person would ever, you know, like how could possibly...
And the reason I want to say this is because I think once we do finally get into Ancapistan, we will be mature people.
And there most definitely will be people who try and reimpose the state.
And so we kind of need to know why it was successful the first time.
Well, I would say, the way that I would explain it is to understand how human beings were domesticated.
You would look at how any other being or species was domesticated, right?
So there were wild horses, and you had to put a hell of a lot of effort into domesticating.
The first herd of wild horses probably kicked half their domesticators to death, right?
But then after you've got them in captivity, and you get to shape them from birth onwards, then that investment pays off handsomely, right?
Right. And so the first control and subjugation of human beings, we don't do it with whips and fences primarily, we do it with ethics or false ethical theories.
Right. Yeah, it was probably a horrible and bloody battle to domesticate the first slaves to the hierarchy, but after that, you now have control over the children, and we no more fear, I mean, the farmers no more fear, sorry, the government no more fears rebellion from the current herd of people than A farmer fears that the cows are going to invade his home and, you know, assault his wife. All right.
Yeah, that makes a lot of sense.
I like that. Okay.
So, yeah, and so...
And so now when people say, oh, well, even if you did get anarchy, it would just revert to a government.
I'd be like, well, no, actually.
No healthy human being would ever, ever submit to a government.
So, um... I'm actually reading someone else who's, he refutes ethics, so we know he's going wrong right there, but it's funny because like two sentences later he says, and violence is bad.
I'm like, but you can't have ethics.
But, you know, aside from this, he actually, he goes pretty well.
He says, all right, I think this is what will reduce violence to zero.
And basically what I want to say to him is, look, a healthy human being would never agree to be taxed.
It just, it won't happen.
So, that would kind of reduce the violence to nothing, wouldn't it?
That wouldn't it. Yeah, I mean, and for the people who say, well, as soon as you get rid of a government, one will come back, then you may as well ask them, do you think that foot binding is about to make a comeback in China?
Indeed. Or slavery in America, right?
I mean, of the purely sanctioned historical kind.
I mean, you know, once you take these steps forward, then you don't tend to go backwards particularly, and we never took the step forward with regards to statism, or the cult of the family, or even the nail in the coffin for strong atheism, so those things remain indeterminate.
But yeah, once human beings are free and happy, they don't voluntarily go back into shackles.
And it's not profitable to do so without a state and without propaganda.
That's what lowers the total cost of ownership for slaves to the point where it becomes economically feasible.
Like, in a free market, war can't be profitable.
It's only profitable in a status situation where you can force taxpayers to pay for it.
Indeed. Well, how did you know that, incidentally?
Like, it makes sense, but, you know, I try not to believe things just because they make sense.
Oh, I've written articles about it where I've actually done the math.
Oh, okay. So I can forward you those if you like, but...
No, I trust you, actually.
That's fine. If someone's done the math, I'm all good.
Yeah. Okay, cool.
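The referenced articles aren't reproduced here, but the shape of the math can be sketched with invented numbers: war only nets out positive for the decision-maker when its costs can be forced onto taxpayers.

```python
# Toy cost accounting for the "war can't be profitable in a free market" claim.
# All figures are invented purely for illustration.
cost_of_war = 100  # arbitrary units: soldiers, weapons, logistics
plunder     = 30   # what the aggressor actually captures

# Free market: whoever wages the war bears its full cost.
private_profit = plunder - cost_of_war
print("private war:", private_profit)  # -70 -> a loss, so no war gets funded

# Statist case: taxpayers are forced to cover the cost; the ruler keeps the gains.
ruler_cost_share = 0.0
ruler_profit = plunder - cost_of_war * ruler_cost_share
print("ruler's war:", ruler_profit)    # +30 -> "profitable" once costs are externalized
```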
So, was this useful? Was there anything else that you wanted to talk about?
I know that we've only got some way down the path of your theory, but you said you wanted to take some time to mull some stuff over more, which of course is great, right?
Alright, so we have a data point here.
Through the awesome power of consciousness, because I really don't think this was an evolutionary leap, we've gone from merely being ethical to being ethical and scientific.
Right, same principles in comparison of our thoughts to a universal standard.
Can we implement that in silicon?
I think we probably could.
I mean, I would say that we have a lot more to learn about the brain since nobody knows what the hell's going on really about the brain.
But yeah, once we have figured it out, I mean, but there's so much soft learning that the brain does, like in terms of dreams and so on.
And of course, the brain as it stands right now, like understanding a healthy human brain, is really, really hard to do right now.
It's like, if everybody's foot is bound in China, you don't even know what a healthy foot looks like.
So you're studying something that's broken and mutated and smashed up.
And for the most part, that's the state of our minds, because we're subject to all the nonsense and propaganda and so on.
So, you know, the first thing we've got to do is have a healthy brain, and then we can study that without all of this nonsense and defenses and fear and anxiety and weird attachments and Like, all of the sort of broken brain aspect of what goes on in our minds.
So, yeah, first thing we've got to do is stop binding our feet and then we can study a healthy human brain and then we'd be much...
Like, we don't want to replicate a neurotic mind in a robot.
We don't want Marvin the Paranoid Android, right?
That would be kind of a... Well, maybe we do.
Maybe we want to do that and then just mess with things until it's fixed.
Could be. I tried that with my computer.
It doesn't work too well, but maybe there are people who do it with robots. There'd probably be some interesting tool sets. Alright, well, why don't we pause here? I will send you a copy of this conversation and you can have a listen to it and let me know what you think.
I mean, I think we talked about some really cool stuff and I'd be happy to release it as a podcast, but you can have a listen and let me know what you think.
Let you know what I think about it being a podcast?
Yes. I stand by my statement that I would be honored.
Do I want a copy? Sure, I want a copy.
I will send you a link for sure.
How long have we been talking? We have been talking for 59 minutes.
Nice. Oh, man.
All right. Okay, man. Talk to you soon. I enjoyed it.