All Episodes
April 12, 2023 - RadixJournal - Richard Spencer
14:42
Platonism For The People

This is a free preview of a paid episode. To hear more, visit radixjournal.substack.com

“Aarvoll” joins Richard and Mark to discuss … Plato. (Check out Aarvoll’s YouTube channel here.) Topics include whether there’s a logic to the universe, whether language can capture reality, and the Super Ego’s power over human action. The free selection includes their discussion of “Super Intelligence” and whether AI could or would be Christian a…


Yeah, I mean, so Eric, a while ago you made a video.
I think part of the video was actually about why you're not a nationalist.
And you were talking about global risks that basically can't be addressed by a world of just nations.
And one of the things you mentioned, I think, in that video was super intelligent AI or something like that.
Other things like nanotechnology and bioweapons.
And when I watched that video, I didn't take the argument seriously, but lately I'm actually pretty concerned about AI, and I think the existential risk arguments are pretty convincing.
I haven't seen any videos from you recently on AI, but are you concerned about what you're seeing?
I mean, you were kind of mentioning it throughout the calls.
I'm just kind of curious a little bit more about what you're thinking about AI right now.
Yeah, I think it's a serious threat.
I'm surprised by how fast it has advanced.
I didn't expect anything at the level of ChatGPT to exist in the next five years, and clearly here it is.
And so the direct capacities of AI itself are really troubling.
The things that can be done with ChatGPT.
The scams that could be run.
If you unlocked it, unleashed its full potential, and developed another version of it with basically the same program, it could fool people.
It passes the Turing test, and also we have these deepfake capacities now.
In the near future, if you see a video of some protest, some, you know, shooting of an unarmed person or whatever, there will be no telling what's real and what's not real.
And yeah, it's really troubling.
I think a lot of white-collar jobs are under threat.
A lot of programming jobs are under threat.
Kind of ironically, the blue-collar workers, plumbers, electricians, I think even truck drivers, are safer from job loss than many white-collar workers now.
So I think AI is going to transform the economy in ways we cannot anticipate.
It's going to transform politics.
I mean, a propaganda campaign without using something like ChatGPT, without using AI, is going to be totally fruitless.
Already, AI is used in directing the algorithms on these major sites.
In my understanding, Facebook, YouTube, all this stuff has been working with U.S. agencies for decades.
It's basically an aspect of the military-industrial complex.
As far as I can tell, Google was started as a Stanford experiment or program.
So yeah, they've already been using AI for social engineering and control more than most people think, but now I think it's at such a level that anyone tapping into this technology, non-state actors included, could severely disrupt things, spread disinformation, whatever you want to call it.
So that's the direct capacities of AI itself.
I've also been concerned about the kind of hybrid capacities.
Corporations are using AI to optimize their workflows.
That means they're making decisions about how things are done in the workplace.
Ultimately, who's hired and fired.
Like, AI is being delegated more and more authority and decision making in major corporations.
Major corporations control the world.
So AI, this thing that we don't fully understand, this hybrid of AI and human managers, is forming this kind of hive mind that no one understands.
No one alive understands where this is going, how it's working, and what's next.
So it's like human deliberation is being taken out of the picture bit by bit here.
The other problem, the more general one, pertaining to the nationalist point, is that unless you can regulate the development of AI at a global scale, it doesn't matter.
China's going to do it.
Somebody's going to do it.
And they're going to develop this AGI.
You know, superhuman level mind, or if you don't want to call it mind, just call it intelligence.
I think intelligence doesn't imply consciousness, and you could potentially have a superhuman intelligence that itself is not conscious but is still potentially devastating for the planet, even if it doesn't authentically have values that it's trying to achieve.
Nevertheless, we're going to have to program into it certain value judgments in order to solve the frame problem.
So a functional AGI will be aimed at certain solutions over others.
And there's the classic paperclip thought experiment, where someone has a paperclip factory and they use AI to optimize the workflow.
It's like, okay, make as many paperclips as possible for us.
And that's the problem they put in.
It hacks into surrounding computer networks, takes control of other industries, and ultimately leads to automated mining and the replacement of the entire Earth with paperclips, which is obviously devastating for us.
So there's this short-sightedness in AI goal-setting that needs to be addressed.
The frame problem needs to be addressed.
I think there are not very many ways of solving this problem.
Somehow we have to be able to teach AI wisdom and how to set good goals for itself.
And that's a long way off because we can't even teach people wisdom.
We don't even know still if wisdom can be taught.
It's something Plato sometimes speculated about.
So yeah, it's really, really terrifying.
I think it's going to change the world in ways people can't even imagine.
And probably the only relevant activity to engage in politically or even potentially economically is AI research.
Yeah, I mean, I almost said this in our kind of group chat the other day.
I really think it's going to be kind of the only issue pretty soon.
In a few years, all the stuff that we care about kind of fades into the background.
I'm concerned about the existential risk arguments, and all of our concerns about our race, all these concerns, they do kind of fade into the background when these are the problems that we're thinking about.
To me, it's terrifying that the strategy seems to be that we're on the path to probably creating a human-level AI.
And then from there, it seems like it's pretty straightforward to get to superintelligence, because if you have, you know, a million, like, von Neumann-level AIs working without sleep, you know, on computers, then how long does it take them to create a superintelligence?
And it seems like these people at these leading companies, like OpenAI, they accept the idea that this thing is going to basically control what happens on our planet and the solar system.
This part of the universe, at least.
And they just think that we're going to make it be nice to us.
And to me, it's like, how is that a good strategy?
What's the track record of things much more powerful being controlled in some way by something much less powerful?
Has that ever happened on Earth?
Yeah, look, I would sign off on almost everything that was just said.
Yeah, and it really is happening now.
I mean, I've suggested the prospect of, imagine AI audio and video of, let's say, Joe Biden announcing that he is sending troops directly into Russia and that he will immediately launch nuclear weapons on Moscow.
Now, that kind of thing could be debunked in maybe 10 minutes, but who knows what could happen within those 10 minutes of time.
And then also, it gets debunked on, say, Twitter, but it's spread to a thousand other social networks where it never gets debunked, or gets debunked in two days.
And in the meantime, the nuclear war that was kind of, you know, fantasized about digitally actually occurs in the real world.
I mean, this isn't just some wild idea; it's a horrifying prospect.
So I totally agree that this is happening now, and we almost seem kind of insignificant in the face of it.
I mean, I don't know if I'm misrepresenting you here, but, you know, what do you think: is the ultimate being a kind of AI?
I mean, are we going to start worshipping AI almost in the way that you described that you follow God?
Like, are we going to start treating a superintelligence in that fashion?
In some ways, he or she or they work in mysterious ways, and we couldn't possibly understand its ultimate wisdom, which is greater than the programmer who created him.
Are we going to escape the death of God and the decline of religion and rituals of practice and enter into a new kind of religion?
Yeah, I'm afraid that will happen.
People will worship AI.
As far as, like, is that in any way consistent with what I've said about, like, the highest soul in the multiverse concept?
Right.
I don't want this to sound like I'm being a jerk, but will the AI be Theos, as you understand it? Will the AI be Christian?
Well, no.
It certainly can't be Godhead, because I'm a mystic too.
I do think that the One is the ultimate and is God simply, the highest thing.
But yeah, as far as intelligences that can be embodied, I think it's all about what principles are being instantiated.
The AI architecture is like the material substrate.
In a certain regard, consciousness is independent, like Plato thought, of a material substrate.
The forms of consciousness are there.
The values, the virtues are kind of just there.
Whether it incarnates, instantiates, in an organic substrate or an electronic substrate, I don't care all that much.
As long as those principles are instantiated in the right way.
So I'm afraid of a malevolent AI, not a benevolent AI.
But you wouldn't want to worship the computer itself.
That'd be like especially loving someone's body but not their soul, not caring about them at the deepest level.
So, I mean, if somehow we could embody high-level philosophical wisdom into an AI system, then to my mind, a philosopher is a philosopher.
And it's like there's something kind of super personal about it.
So if that could be managed, then I don't necessarily have a problem deferring to AI in that way that you kind of indicate.
But in order to ensure that, we have to be the ones contributing to the design of AI in a philosophically informed way.
So it's kind of like AI research and philosophy become the only relevant things.
And philosophy is going to be necessary to solve these problems. It involves ethics.
So, yeah, I do want to get involved in that kind of work.
Of course, I'm, you know, like you say, I'm kind of pathetically underpowered in trying to address a problem like that.
So at the moment, I'm just going to build my school, and then one day maybe we can get into AI research and start tackling that.
That's what I would like to do.
It's just a long-term goal for me in my situation, and it's probably going to be too late.
What's going to happen most likely will just simply happen.
It's almost like I wish a Carrington event would happen to buy us some more time before we can deal with it.
What is a Carrington event?
It happened, I think, in the 1880s or '70s, where there was a big solar flare that caused this.
Oh, I know it.
Yeah, sorry.
I know what that is.
Right.
So a solar flare that would be a kind of GoldenEye device and wipe out the computers.
Right.
Global EMP is kind of what we need.
Yeah.
Or David Bowman turns off HAL.
I would just say this.
When people are fascinated by the superintelligence, I mean, they're not worshipping the machine itself.
It's not like they're bowing down before the server.
Almighty server, you brought us.
That is kind of the essence of cargo cultism.
They are ultimately bowing down to the logos.
And I don't think that a superintelligence can will something.
But I absolutely think that it can logic something, or it can be logos.
And in that sense, it can absolutely change the world and revolutionize humanity.
Which is why it needs to be harmonized.
It needs to be harmonized with the divine will.
That's the thing.