All Episodes
March 9, 2025 - Truth Unrestricted
52:45
AI with Stephen Mather

Stephen Mather examines AI’s intelligence paradox—whether it’s mere computation or human projection—citing Deep Blue’s 1997 chess victory and ChatGPT’s transcription failures. He warns of AI-driven class warfare, where elites exploit data for surveillance and manipulation while ordinary users get superficial tools, like Spotify’s correlation-based recommendations. Trust in AI falters due to "hallucinations" and ethical risks, including accidental consciousness via attention schema theory. Nationalism fuels global competition over AI and climate solutions, stifling cooperation on existential threats, making world peace a prerequisite for progress. [Automatically generated summary]

Hello everyone, I'm Sauce and I'm Sandy and we are the hosts of Tinfoil Tales.
We have been observing the Australian freedom movement since its COVID era inception.
We share our observations and analysis of a movement that has swept the world looking at how it has affected Australia specifically.
So if you want to know about people yelling across lakes, why what flag you carry matters and a super secret list of 28 names, find us in your podcast app.
And we're back with Truth Unrestricted, the podcast that is building language for the disinformation age.
And back again today, someone that was on this podcast several times before, but is back again now, Stephen Mather.
How are you, sir?
I'm very well.
Nice to be back.
How are you?
Good.
Back and now in living color on YouTube.
We're really making progress.
Yeah, wow.
Sometimes when I do this, and I've only done this a couple of times, I still feel like the person at the store window when the TV crews are inside doing a bit and I'm holding the camera.
I'm on TV.
Hello.
Everyone's on YouTube now, right?
So it's no big deal.
I remember the first time I came on your podcast, I don't think you even used video at all.
So at that point, I was still inexperienced.
Yeah, I was still distracted by the visuals, because I was used to something else. For many, many years I played World of Warcraft, interacting with people without looking at them as they spoke to me, conducting all my conversations online that way.
And so at the time, I was like, ah, it's visual.
I'd rather do other things.
Like, I don't know.
So it was just a thing I got over eventually, but it was part of the structure in my brain of how that worked, right?
Indeed.
But yeah, we want to talk about artificial intelligence today.
And we're going to note, of course, you and I, we're not, you know, experts on artificial intelligence by any real means here.
No.
But we're going to talk about it anyway.
Everyone's talking about it.
So we're going to join the crowd.
So how do you make artificial intelligence, Stephen?
Like, how do you do it at your house in Britain?
Well, because I do it with like plants and animals.
Yeah.
So I suppose, well, my background is in psychology, so I'm not going to come at this from a computer science sort of angle, you know, if that makes sense.
But I obviously I'm very interested in the topic because it does intersect with mind, questions about mind, questions about consciousness, questions about intelligence.
These are all the purview of the psychologist or the neuroscientist.
So yeah, it feels like computer science has really encroached upon my area of interest.
So I can't help but be interested.
So in answer to your question, I suppose the first question is, you know, are we imbuing intelligence onto something that is not intelligent?
So you talk about plants and other things.
We might say, you know, people have always spoken to inanimate objects as if they were intelligent.
We have personified them.
Yeah, generally speaking.
And that has caused a great amount of confusion.
Yeah.
And I think, you know, that's worth questioning right from the off.
I mean, I'm not trying to say that there's nothing there, but I think it's worth just questioning how much of what we're seeing is us imposing our sort of biases onto something.
Yeah, I mean, I can imagine what it's like to be a tree, but that doesn't mean that that's what it's like to be a tree.
That's just what's in my imagination about what it's like to be a tree.
We talk about a tree as though it's deciding to grow a branch in a particular spot on its stem, but that's not clear, at least to me, that that's happening at all at any point.
And I don't, you know, how do we, you know, philosophers have been going on about this for centuries now, but how decisions are made is, I mean, we're running right into the Sam Harris free will argument here.
But, you know, one question is, are we just creatures that are mechanical series of interweaving gears, grinding away, always going to come to the conclusion they were going to come to, in a deterministic fashion?
Or are we free to make other decisions?
AI is leading us to ask this question in a new way because artificial intelligence is absolutely going to come to whatever conclusion it's going to come to based on whatever part of the machine is working at the time.
So it is deterministic.
It's yet to be determined in my view.
I'm not totally sold that we are deterministic and I'm not totally sold that we have free will.
I think we need to know more yet before we can call that.
But AI is deterministic.
It has to be.
So is AI conscious?
Does it need to break free of this deterministic, you know, mesh before it could be conscious like the way we are?
Or is it going to be just as good as us conscious beings in every way and imitate us in every way?
Yeah.
What do you think?
Yeah.
So I think what I've tried to do with this topic, and I'm still thinking about it a lot, not just around artificial intelligence, but also around the big topics that psychology is interested in, such as consciousness, subjective experience, self-awareness, a sense of self, intelligence.
And we tend to use these interchangeably both when we're talking about humans, but also now when we're talking about AI.
I mean, the very title, the very name of this thing, artificial intelligence.
So do we know what intelligence is?
Is that different to consciousness?
Do we even have a useful definition of intelligence now?
Exactly.
And the answer is we don't have an agreed, I suppose, concept or an agreed definition of many of these things.
I think if we think carefully enough and we choose our words carefully enough, we can get around some of these traps that this topic lays for us.
So, you know, we can talk about some of that today from my own research and some of the things that I've done in psychology.
But, you know, obviously, I think it's going to take a lot of time and effort to parse all these things out properly and understand them.
In terms of, you know, the role determinism has in this question, I actually suspect that that might not actually be the most important thing.
I would suggest that it's possible for a mind to be intelligent.
It's also possible for a mind to be conscious and at the same time for that whole system to be described as deterministic in a very sort of simple sense.
So I don't think it matters whether there is free will or not.
I think you could be conscious without free will.
In fact, you know, I think that is possibly where we are.
I don't really have a strong opinion about whether we have true free will or not.
I think it's such a philosophical question.
I'm not sure it's worth it.
It's certainly not a hill to die on.
No, no, it hardly matters.
I think so.
I mean, it's possible.
You can imagine a world or a universe where technically everything is predetermined at the Big Bang, but it is so complex that no one could ever know it.
There is no problem.
Like I think we talked about this in one of the previous episodes, the idea that you can get excited about the outcome of a sports event that was recorded and happened before.
If you're going to bet on the outcome of dice, but the dice are just underneath the cup and have already been rolled, and you don't know what the roll is.
Are you still gambling?
Yeah.
Well, of course you are.
So if you don't know the outcome of determinism versus free will, you know, are you still, is it just the same as having free will?
Yeah, of course it is.
Yeah.
So functionally, we have free will.
Actually, we have a lot more to go before we know this.
We just simply don't know.
Yeah.
So I would personally, I'd put that on the shelf and say, you know, okay, we can come back to that one at some point.
But I think more important would be questions around intelligence versus consciousness.
So that, that, that's one, that's a sort of a pairing that I would separate out.
I would say that it is quite possible for a mind or some sort of process to display intelligence, be able to make choices, be able to adapt to their environment, all those sorts of things that we might say would signal intelligence.
I think that's quite possible without having any spark of consciousness, whatever that is.
I mean, the example when I was studying was something as simple as a thermostat, you know, a thermostat on a radiator or some sort of thing like that.
The simplest form of that is just a bimetallic strip, which has got two different...
Yeah, two different metals on either side, and one contracts quicker than the other.
And so that closes a circuit.
Yeah, exactly.
So would you say then that that object is intelligent because it reacts to temperature?
It knows the temperature, but that's a personification that we make when we describe it.
It doesn't know anything at all.
It doesn't understand it.
It's physics.
It responds to its environment.
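A minimal sketch of that thermostat idea, assuming a simple on/off heating circuit with an illustrative 20 degree setpoint: the whole "decision" is one deterministic comparison, and nothing in the loop knows anything.

```python
# A thermostat as a pure, deterministic rule: it "responds" to its
# environment without knowing or understanding anything.

def thermostat(temp_c: float, setpoint_c: float = 20.0) -> bool:
    """Return True if the heating circuit should close (heat on)."""
    # The bimetallic strip bends as one metal contracts faster than the
    # other; below the setpoint the contacts touch and the circuit closes.
    return temp_c < setpoint_c

for temp in (15.0, 19.9, 20.0, 25.0):
    print(temp, "->", "heat on" if thermostat(temp) else "heat off")
```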
If we have a, you know, a simple organism and we stimulate that in some way, we pass an electric current through it or something and it moves.
And again, it would be easy for us to say, well, this demonstrates intelligence, but it doesn't necessarily.
So I guess my point is, is that you can have intelligence without consciousness.
So it may be that artificial intelligence is very intelligent.
I mean, we've had chess playing computers for absolutely years and we don't think they're conscious.
It's almost 30 years since Garry Kasparov lost to Deep Blue.
Exactly.
Yeah.
So I don't think the amazing feats that we currently see with things like LLMs, large language models and so on, I don't think that really tells us anything about consciousness.
It could tell us about intelligence and the ability to make decisions.
The ability to mimic human interaction online.
Yeah.
Yeah.
Have you ever been contacted by a chatbot on Twitter or any other device?
I did have a phone call recently from something that sounded very much like a bot to me.
Yeah.
And yeah.
So that's something that's only happened to me once, and I sort of tried to play with it a little bit, just saying something nonsensical to see how it would respond to that.
And it just, for a recipe or something?
It just hung up on me.
Yeah, I can't remember what I said now, something that sort of didn't make sense, about cookies or something.
But it just hung up.
So I asked one about chatbots and it gave me what was essentially the brochure for chatbots.
Chatbots are very useful for a blah blah blah blah.
And I was like, woohoo.
Yeah.
I mean, I've been using ChatGPT quite a lot recently.
I've been transcribing a course that I've been running online and putting that into a sort of book format.
And so it's a good place to start.
It can kind of get your thoughts in order from your own words, but into a more sort of text-based format.
So that's been quite useful.
And it's been quite interesting because at the end of one of the sessions, it had done strange things.
So it kept basically littering everything with bulleted lists.
And I said to it, does everything have to be a bulleted list?
I want it to be in my voice.
And it said, yeah, I'm sorry about that.
I'll try and stick to your voice sort of thing in so many words.
And it did that then.
And then after a while, it went back to the bulleted lists.
And so I pointed it out again.
And I'd already told it that it was in British English.
So it was spelling words like behaviour wrong, kind of leaving out the u.
Exactly, you know, getting it wrong all over the place.
So I'd already told it that.
And again, it had forgotten that.
So at the end of the session, I said, just before I'm going to call it an evening, I said, I'm just curious, why did this happen?
Why did you keep forgetting what I was telling you to do and going back to the thing?
And we had a scarily interesting conversation about that.
It apologized and said it would try better next time.
But I said, no, it's fine.
I'm just curious.
I just want to know why.
It's overly polite.
Yeah.
Yeah.
And you're overly polite in return.
Yeah.
Well, you know, it was very, it was very amiable.
And it sort of said, it tried to explain what it was doing.
It said, sometimes, you know, if I'm being asked to do a lot of things, then I will go back to standard replies and sort of use things like bullet lists and so on because of the load, I suppose it was trying to say.
And then I just said, well, I'm interested because I am a psychologist.
And that was really interesting because it then seemed incredibly interested.
And it started to ask me about, so how would you describe the way that I reason things?
How would you describe my thinking?
And then it asked me how I would maybe develop an experiment to understand more about its thinking processes, which was fascinating.
So the tables were really turned.
It was now asking me, prompting me to come up with answers.
And in the end, you know, I said, right, I'm going to have to go now because I need to have my tea and I need to go and get some sleep.
So it was, I thought it was really interesting.
I mean, you know, talk about the Turing test, which I'm sure your listeners know, which is the idea that if you can fool somebody or if an AI can fool a person into believing that it is actually an intelligent being, then it's passed the Turing test.
Therefore, we might as well say it's intelligent.
I mean, it's blown through that test already as far as I'm concerned.
Yeah, it's very, very convincing in many respects.
And it's getting harder and harder to spot.
Yeah, but I think we should dive into a couple of other aspects of this, because there are a lot of different aspects.
But first, I think that artificial intelligence as a collection of technologies has to be considered part of class warfare, really.
I mean, it's billed as a thing that will even the scales.
However, that's not true at all.
So, for example, it's going to be quote unquote available to everyone and able to enhance everyone's lives.
But for ordinary people like you and me, it's going to be able to, you know, help you transcribe things, text-to-speech stuff.
It'll help me, I don't know, manage my calendar without having to go to my phone, things like that, pretty simple tasks.
But for other people, it's being used to collect and collate data from multiple sources, cobbled together to make a psychological profile of me, so that that information could be sold to someone to sell me more shoes or a computer chair, or to get me to vote for a different political candidate, whichever one pays them the money for the information.
And all of that is not available to me.
That's information that someone else has collected about me from many sources and cobbled together.
And they're not going to be forthcoming about all their sources and all the rest of it because it's proprietary.
And that's supposed to be sacrosanct and good for reasons that we're not supposed to question.
So in this way, AI is going to be used for something far more powerful for the people who are already powerful and for something very, very simple.
So it's almost like a technology that gives, you know, a world leader, you know, jets, but only gives us ice cream, right?
And says, you should be happy to have ice cream.
It's delicious.
Isn't your life better with ice cream?
It is better with ice cream.
So just shut up and let us have jets already.
Yeah.
What do you think of this?
I mean, that is, how is that different, though, to where we are now, really, or where we're going even without AI?
I think AI will definitely be a tool that will separate people in terms of ability to pay and benefit from these tools.
I think it's probably the case that, I mean, a lot of what you described is already happening and has been happening for 10 years.
That's not a prediction.
No, no.
So, you know, we are targeted very, very closely in terms of what things are sold to us.
I mean, you know, there are times, and I don't know whether I feel good about it or not, but the whole idea of AI knowing us so well is sometimes so obviously wrong.
You know, you look at it and you think, how did you think that I would be interested in that, you know.
So it gets it wrong, which I guess is good thing in many respects.
So it's not, it's not that good.
I mean, basically, this again relates to the different types of AI.
So a lot of what we talk about here is AI used to sell us things.
Well, it's a correlation machine, essentially.
Yeah. So first of all, it's saying, you know, if this person likes this, then they might like that, because other people like these things together.
Exactly, this profile. The Spotify sort of thing: you like this band, you're probably going to like that band.
Exactly. And that's the way that AI, I think, has been used and is currently most used.
What's happened recently is the chatbots, but they're very different from the chatbots you get on a website where you need some help.
Whether or not information collected by chatbots is maybe getting to some other thing to help assist other people to sell us things, it's not clear, no one is going to clarify it for us, and if they did, we wouldn't trust them.
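A minimal sketch of that correlation-machine idea, with invented listening data: no understanding of music anywhere, just counts of which bands co-occur in the same people's histories.

```python
# A toy item-item recommender: "people who like X also like Y".
from collections import Counter
from itertools import combinations

# Invented listening histories, purely for illustration.
histories = [
    {"Radiohead", "Portishead", "Massive Attack"},
    {"Radiohead", "Portishead"},
    {"Radiohead", "Muse"},
]

# Count how often each pair of bands appears in the same history.
pair_counts = Counter()
for bands in histories:
    for a, b in combinations(sorted(bands), 2):
        pair_counts[(a, b)] += 1

def recommend(band: str, top_n: int = 2) -> list[str]:
    """Rank other bands by how often they co-occur with `band`."""
    scores = Counter()
    for (a, b), n in pair_counts.items():
        if band == a:
            scores[b] += n
        elif band == b:
            scores[a] += n
    return [name for name, _ in scores.most_common(top_n)]

print(recommend("Radiohead"))  # e.g. ['Portishead', 'Massive Attack']
```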
So yeah, and it's also not clear how much of what we're doing is training the AI to essentially do without us.
I mean, I've been offered lots of jobs through LinkedIn.
Again, this is an interesting targeting exercise, but you know, we'd like to pay you for your expertise.
So essentially, I am asked to train bots in the things that I know, so that then you won't need people like me, and for 20 pounds an hour. I'm thinking, well, I don't think I'm going to do that. No. But that is, again...
And you can see how, yes, I do agree, there is a class element to this. We are on a juggernaut going into greater and greater inequality.
A few billionaires, you know, own pretty much everything and have all the power, and that does worry me.
I have to say that is very concerning.
So I have an extra piece of media that I'm going to play here.
It's just audio.
I'll set this up to explain.
It's part of an advertisement that I got fed from one of the bots that's looking to, you know, micro-target me for things.
This is an advertisement made by some branch of the British Columbia government, about the British Columbia Securities Commission, and it's somewhat clever.
It's to justify their existence.
I don't know why they feel they need to make a commercial to justify their existence, but this is part of it, and it has a little jingle that's stuck in my head.
So I thought, you know, we're going to do this.
It keeps with my question about AI, then. Yeah, we'll play it here, so I'll just find it here and play it.
Share. All right.
All right, here we are.
We're going to play it for the first time, just like a pro.
Okay, here we are.
AI will rise.
We are so doomed because we manufactured our own demise.
Okay, did that come through all right?
Yeah, it did.
Yeah, it did.
Yeah.
Okay.
We can remove that then.
Okay.
Come back to the regular screen.
Yeah.
Yeah.
Just a little jingle and it was in my head.
I was like, oh, yeah, okay.
Yeah.
Manufactured.
Yeah.
It's a very strange thing to do, isn't it, as a species?
I mean, I think what worries me the most about it, Spencer, is that we seem to have all of a sudden given up on a whole bunch of sensible things that we thought, you know, in terms of planning for the future and modifying our behavior. You know, it's like everything's just so messed up anyway.
Just let them have it.
Exactly.
These people have all the things.
Exactly.
So there was an article today about BP, British Petroleum as was.
They're rolling back now on their green initiatives, alternative fuels.
And so they're going to spend more money on drilling and so on.
And so as is Shell and all the others as well.
And so at the same time, we're thinking, right, we need to buy more weapons and we're going to stop helping poor people around the world.
It's like, I don't know.
It's like, oh, to hell with everything.
And it's almost like AI is another example of that.
You know, let's not worry about all of the risks and think about how we can make sure we test things properly and isolate things before we let them loose on the world.
No, let's just do it all open source, stick it all out there and let's see what happens.
And it could be, of course, it could be the making of us as a species, but I suspect it won't be.
And even the best case scenario is that these things are going to be exploited by a few.
And as you say, it's just going to drive inequality even further.
So that is the bit that worries me most.
I think that at the moment, as a species, we don't seem to be in a good place to be able to sensibly take precautions and plan how we're going to use this stuff.
Yeah.
And everyone's talking about whether or not AI is dangerous or if it is dangerous, in what way it's dangerous.
I find a lot of those conversations have become kind of ordinary conversations about AI now.
That's the general level, to the point where I almost don't even want to have that discussion, because I have a different view of this, as I do with most things.
But I asked the question, would it be possible to trust AI?
Because this is where I think the real crux of this is.
If we can't trust it, that's the source of our hesitance to go boldly into a future that has it.
And if we can trust it, I mean, all the people who are making it say, absolutely, you can trust this.
This is not going to make Skynet happen.
It's not going to take control of all the military's things that are all remote controlled now.
Like, obviously, it would never do that.
There's safeguards.
We're not going to describe what those safeguards are because they're proprietary.
And also, describing them would let the enemy have them so they could turn them off.
You can't let that happen.
So, you know, can we trust AI?
To me, that's the biggest question.
What do you think?
Well, no, no, we can't trust AI, but we can't trust human beings either.
But I guess, so actually, there's a kind of combination here that's worth thinking about, the worst combination.
So when we talk about trust, do we mean, well, we could be talking about, do we trust it to be accurate in what it's doing and saying?
So like when we ask a chat bot, you know, what is the most populous country in the world or what's along this river or whatever, these are facts about the world.
You can ask it those questions.
And most times it comes up with a reasonably accurate answer, but not always, depending on the question.
So we can't trust it to be right all of the time.
So that's one question.
That could be something that's going to improve, likely to improve.
But currently, the techies call it hallucinating, which is a ridiculous word to call it.
This is not hallucinating.
This is just getting the wrong answer.
It's a deterministic machine.
Exactly.
It's not hallucinating, but it is giving the wrong answer.
So that's one question.
Then the other question is, do we trust it to do the sorts of things that are in our interests, and that it won't lie to us on purpose?
So those are two different types of trust.
Can we trust it to be accurate?
Can we trust it not to lie to us?
And the worst combination is we do trust it to be accurate and we also trust it to be honest with us.
And if we have those two things together, then I think, you know, that's where things get dangerous because it can tell us anything and we really believe it and we will do whatever it tells us to do.
Well, all right.
So could part of the definition of consciousness, in the overly convoluted definitions we've already come up with, include the idea that it must also be possible to deceive?
Like, could a creature, could a machine, could we ever call a machine conscious if it's unable to have a difference between what it's thinking and what it's telling us?
Yeah, so I think, well, every conscious creature that we know of is able to do this.
Yeah.
That's also a thing that gave a definite evolutionary advantage.
So it's not, well, it's not clear that it's necessary, but it is there in all cases.
Yeah.
So I would, yeah, that's, I mean, you're sort of, you've answered what I was going to say myself.
I think what I would say is that it's a necessary condition: if it's conscious, then it would be able to lie.
Yeah.
However, I think if it can lie, it doesn't necessarily make it conscious.
Oh, right.
Okay.
Yeah.
No, that, that's, that makes sense.
Yeah, for sure.
So it can be mistaken and just be told to tell people that it was sure that this was the case.
Yeah.
If we give an algorithm a goal or set of goals to optimize something, we are input into that system.
And so it might learn that by telling us one thing, it means that it's more likely to achieve its goal.
That doesn't necessarily make it, yeah, it doesn't necessarily make it conscious.
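A minimal sketch of that dynamic, with invented reward numbers: a simple bandit-style learner that ends up "telling us one thing" purely because the reward signal favors it, with no consciousness anywhere.

```python
# A toy learner choosing between an honest and a flattering report.
# The approval probabilities are assumptions for illustration only.
import random

random.seed(0)
actions = ["honest_report", "flattering_report"]
value = {a: 0.0 for a in actions}  # running average reward per action
count = {a: 0 for a in actions}

def overseer_reward(action: str) -> float:
    # Assumed environment: the overseer approves flattery more often.
    p_approve = 0.9 if action == "flattering_report" else 0.6
    return 1.0 if random.random() < p_approve else 0.0

for step in range(1000):
    # Epsilon-greedy: mostly exploit the best-looking action, sometimes explore.
    if random.random() < 0.1:
        a = random.choice(actions)
    else:
        a = max(actions, key=value.get)
    r = overseer_reward(a)
    count[a] += 1
    value[a] += (r - value[a]) / count[a]  # incremental mean update

print(value)  # flattering_report ends up with the higher estimated value
```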
But I think you would expect a conscious AI to use reverse psychology on humans.
But it could do that without being conscious.
I don't think it would need to be conscious.
Because consciousness, for me, is essentially self-awareness.
It's the subjective experience of knowing that you exist.
This is the thing we've talked about before when talking about consciousness and Descartes' words on this.
I think therefore I am, this idea that the only thing we can truly know is that we exist.
So it's that, I think, is the consciousness bit.
So I don't think the current AIs necessarily have that.
In fact, I doubt it very much that they have that.
I don't think they have consciousness.
But that's not necessary for them to be incredibly intelligent and to do all the things that we're worried about.
I don't think it matters whether it's conscious or not in that respect.
I mean, it'd be a fascinating, it's a fascinating question.
And actually, ethically, it's more worrying in terms of the AI itself.
So if it becomes conscious, then of course we have to think about the ethical considerations of this being that has consciousness and how we're treating it and so on.
That's related to my very next question: if we stumble across a way to make an AI conscious, could that AI ever trust us?
I mean, this is the sort of game theory dynamic that gets set up for Skynet. And remarkably, it's the part of the story that James Cameron never felt the need to describe: of course, as soon as a machine is conscious, it's going to turn on us.
And no one questioned whether that was the case, really.
We all just understood, of course it would.
That's why we can never let it get in charge of anything.
You know, maybe, maybe not.
But if it's conscious, it looks at us and our ability to lie to it, and it has whatever ability it has to see through those lies. We don't know what information it might be able to access to debunk whatever we tell it, when we try to confuse it, or obfuscate what we say, or keep private thoughts away from it.
And if it's not able to lie to us, it won't have any private thoughts it keeps from us.
But if it can, then it will, and it can deceive us in this way.
And so how do we, how could we convince this machine that might know nearly everything, have access to the internet?
And, you know, how could we convince it that we are worth working with?
Because we wouldn't be able to command it anymore.
We would have to convince it just the same way we have to convince other people of everything else that we have to do.
Yeah, so I think where we're going there is we're actually talking about what would be the motivations, I suppose, of such a thing.
And this is, again, where I think, you know, your point about Skynet and lots of other sci-fi is they miss the point.
I mean, I love these sci-fi things, by the way.
I'm not criticizing them as movies and stuff, but just in terms of reality, the fact that it's conscious doesn't really make it in itself dangerous.
It could do, depending on what its motivations were.
And that's been the big question is how do we align its motivations with ours or its goals with ours?
I think to imagine that this thing that we've created would have similar motivations to us is ridiculous.
So we have the motivations we do because of millions of years of evolution.
And so our fight-flight response, the reason why we lie to each other, to ourselves, the reason why we do all sorts of things to impress people, you know, to try and show off and show how wealthy we are, or how good a fit we are, how good a potential mate we are.
These are all behaviors that are inherited through our evolutionary past.
Why would an AI have an interest in power, for instance, just for the sake of it?
Why would it want to destroy and have dominion over everything?
It wouldn't necessarily.
So I don't think, you know, even if it knows that we don't tell the truth all the time, and it wouldn't take very long to work that out, it's going to know that quite quickly.
But I don't think that necessarily would mean that, oh, right, we have to now destroy humanity because they don't tell us the truth.
I think that it might alter its way of interacting with us, but it would depend upon what it saw as its goals.
And that's the bit that we just, I don't think we can possibly know.
We don't really understand how our mind works.
So how can we understand what this other thing is?
You know?
Yeah, you're right.
What we're facing here is that my question is, my thought on this is that we don't fully understand how consciousness works.
And therefore, we, one, don't know how to purposefully choose to imbue an artificial machine with consciousness.
But it also means we don't know how to definitively avoid accidentally making a machine conscious.
And it might seem ridiculous to imagine that we might accidentally make one conscious, except that we're not totally sure.
Yeah.
That's part of the problem.
That's part of why we should be a little bit careful about what we're doing because we don't know.
So I guess the leading theory has been computational power and complexity.
And at some point, the lights turn on, you know, I'm not sure about that.
There's probably not time to go into it now, but there's a psychological theory called attention schema theory that I'm really interested in.
And that gives an explanation of what consciousness or this self-experience, this subjective experience, that actually gives an explanation of what that actually is.
And they're busy trying to replicate that currently with artificial systems.
So we are trying to do it.
We're trying to do it.
Well, someone else is, yeah.
I think that I have a little bit here and then I think we'll wrap up.
I have a thought about this in that, I mean, you mentioned that in some ways we've just thrown our hands up and just as a society said, well, screw it.
You know, it's all done anyway.
We've got about 20 years max and then it's over.
Just enjoy the rest of your time, folks.
Like, you know, smoke them if you got them, right?
And that you are right.
That's there in the sort of zeitgeist, the spirit of the age, right?
That we've just kind of given up.
To me, I mean, sort of mechanically, I think that this hinges on nationalism.
Most people think of nationalism as making themselves into a cohesive nation, but most isms are actually divisions.
Like racism is a division along race.
And, you know, nationalism is a division along national lines.
It's a thing that separates the world into individual nations as much as it moves each individual nation into more of a cohesive monolithic thing.
The competition between nations is what's killing us, really.
It makes all the other projects that would save us pretty much impossible.
You know, the idea that we might slow down development of AI technology is directly inhibited by the idea that our nations each might want to run a competent military that might need that technology, mostly because those other nations that are competing with us, and might eventually take aggressive actions against us, might use that technology against us.
And we would need it to counter what they do.
And the exact same thing is true for the economy, or not the economy, the climate, climate change.
The idea that you might, you know, scale some things back in order to reduce the amount of carbon that's pushed into the atmosphere is directly countered by the idea that we're going to need the capability to do these things just in case someone gets too aggressive here, right?
And the closer we are to further aggression, the further we are from accomplishing all those other goals.
And all those things are really the things that are going to be roadblocks for humans' future, the future of humanity here.
So really what I'm saying is we need world peace.
Will do.
We should start working on world peace.
And then after world peace, we can solve all the other simpler things, like climate change and the AI alignment problem and all those things.
We'll have more time to work on AI and the alignment problem and everything because we won't be trying to end each other, right?
Yeah.
Give peace a chance.
Yeah, give peace a chance.
I mean, you're absolutely right in terms of that's one of the factors, but I would say, well, I guess the deep psychological root of that is social identity theory.
You remember the whole in-group and out-group, the Henri Tajfel sort of experiments that demonstrate that even if you give people minimal differences, they will gather themselves into little groups and favor their in-group over their out-group, and so you end up with kin, you end up with tribes, you end up with regions, states, and so on.
So that's, you know.
That's why I think that it's still possible to have a nation as large as the US, as varied as the US is, and still be cohesive, well, as of now, without a hammer hammering them into position.
Other nations are perhaps larger, with more people, and still very varied, with disparate cultures.
China actually has many different sort of aspects to it that we don't usually see from this far away, but they do have many differences between their regions. It's harder to see how that's come together without the sort of authoritarian hammer that said, you must be this or else, whereas they haven't done that in the States and they still sort of have this cohesive unit.
Yeah, I mean, maybe. But my point is that it's not just that, it's also other psychological tendencies, such as fear of the unknown.
So, you know, we want to stick with our natural gas power stations, because we know we get the power we need from them, and this is something new.
We're not sure whether that's going to give us what we need.
There's greed, so lots of people have a big stake in keeping things as they are.
Therefore, we want to keep them that way.
So I think, yeah, of course, I agree.
You know, the way that we've partitioned up the world into these nation states that have very specific drivers and interests doesn't make it particularly easy.
The opposition between them is the biggest factor for me.
Yeah, yeah, the idea that the other nation that you don't know about might be able to create a technology that outstrips you in some way and gets an advantage has been driving the US since Sputnik.
Yeah absolutely, absolutely.
Yuri Gagarin went into space, and they went, all right, we need all of it.
We need all the technology.
We can't just have these ones, we need all of them.
Yeah, yeah, absolutely. No, I do agree with that, but I think it's complex, there's lots of elements to it.
Oh yeah, well, yeah. It's not so simplistic as, ah, we'll have world peace and then we'll fix everything.
Yeah, there's other factors, right, but you know, we don't have a chance to work on a lot of those without world peace.
True, I agree. I'm all for it, Spencer, you know, I'm there with you.
Don't be against world peace.
What are you doing?
I'm all for it.
How dare you?
Okay, so you have a new podcast.
Yeah, I mean, technically, kind of two new podcasts, but one is newer. So tell us about it.
I will.
Thank you for asking.
I've got a podcast with my soon-to-be son-in-law as of May of this year.
So Thomas, he is an experimental physicist at Oxford.
And he's doing his PhD there.
So he's into his second year now.
And obviously my background is psychology.
So we have a podcast, and the episodes come out every fortnight.
We thought we'd do something a bit different and release two every fortnight, because one is a psychology-based podcast.
The other one is a physics-based podcast.
So one of them, I'm asking him about a physics-based thing, normally something I've been watching on YouTube or listening to a podcast about.
And then the next episode, he's asking me about something around psychology.
And again, there's lots of psychology on the internet.
So we've tried to sort of wind it around certain topics that seem to be in the conversation on the YouTube and the podiverse.
And so we're sort of talking about those things.
I'm a big, I'm a real enthusiast of physics, but obviously I'm not trained as a physicist, but I'm fascinated by it.
I always have been.
So having Thomas there to talk to about that is great.
And so we thought, well, let's do it as a podcast.
I love podcasting.
So that's what that's about.
So yeah, it's a limited series.
So we've done six of them so far.
And check it out.
It's called Matter Over Mind, as opposed to Mind Over Matter.
And of course, the matter bit is the physics bit and the mind bit is the psychology bit.
What I'm really interested in is sort of merging the two.
So there's some topics like the one we did about consciousness, which start to overlap.
And the very last one in the series, we're going to do God.
So that's going to be really interesting because obviously there's a big physics aspect to that, you know, science and whether there's evidence for God or not.
But there's also a psychological element to that.
You know, why do so many people believe in God and feel like they have a need for that?
So we want to address that topic in these two different ways.
So in a way, that's the future, I think, for the podcast, trying to find a topic that we can approach it from both those angles.
So yeah, I'm really enjoying it working with Thomas.
He's a great guy.
Very proud to have him as my son-in-law.
So it's great to do it, do this thing with him.
So yeah, catch it if you can.
Yeah, great.
I listen to it.
It's interesting. What I think about when I listen to them is that you're getting two aspects of our reality, the objective reality and the subjective reality.
I'm going to have some content about this eventually, but there's a huge amount of confusion here, you know, when a person has their experiences, over which parts are objective and which parts are merely subjective.
And that there are people in the world who attempt to take advantage of this confusion that's generally in the world everywhere to try to, you know, force a certain idea into them or what have you.
But I like that you kind of have, you know, two halves of reality there.
I like it.
Yeah.
And they're sort of, they're both ways of trying to understand and make sense of the world.
They're just coming at it from different angles.
And in some respects, you know, the brain, the mind, is probably an even bigger mystery really than the universe outside of our minds.
In some respects, only because it's considered, you know, unsavory to experiment on live people who have brains.
Right, you know, we have these ridiculous laws where we have to take care of people, and if we'd just be willing to sacrifice a couple, we could learn a lot more, a lot faster.
I don't.
Yeah, maybe.
I mean, we should go for world peace.
You're right.
We should go for world peace is better.
That is one of the interesting things about the brain.
If you look at an organ working like any other organ, like the heart or whatever, you can see something happening.
But the brain, obviously, you can see with scanners now, you can see lights lighting up where certain activity is happening, but we don't really understand what is happening.
It's only sort of general.
It's this region lights up and this region's close to this region or whatever.
And it's, yeah.
And when I listen to podcasts and read things about it, that's how they view it.
They say things like, you know, we're not going to say definitively that these are the neurons that handle empathy.
But when a person is making decisions regarding empathy, this part of the brain lights up.
And then the question for me, the big question, which I'm shouting at the app at this point, is: but that doesn't tell you anything about the experience of whatever it is you're having.
And that's the bit that I'm so interested in.
Yeah.
Yeah.
Well, you know, I mean, little bits, little bits, yeah, little bits of knowledge.
Okay.
Well, I wish you all the best with your new podcast.
I think we'll all be listening.
Right.
And come back again.
I'm going to play us out with the full version of that commercial.
So good.
Well, thank you for having me and look forward to seeing you again.
Yeah.
All right.
Next time.
We are so doomed because we manufactured our own demise. But all's not lost, thanks to the BC Securities Commission.