This is a free preview of a paid episode. To hear more, visit radixjournal.substack.com

David Skrbina joins Richard and Mark to discuss the Question of Technology. Artificial intelligence's impact on society can't be disputed at this point. But is AI part of a much larger process, from the slingshot to the atomic bomb, in which humanity might play a role, but a small and, perhaps, dispensable one?
So we talked about Jesus the last time we were here, a few months ago, and I wanted to expand into new territory, particularly with regards to technology, and also an even more, I guess, provocative claim, which is panpsychism.
But let's...
Let's talk a little bit about technology.
And let me just kind of get the conversation started with something that's extremely relevant now, which is the AI question.
And so AI, even over the past six months, I think has been mainstreamed in a way that is pretty unimaginable for people our age.
The ability for people to create AI-generated music, AI art, to use weird AI filters on TikTok, or to use ChatGPT and get a computer to write your history paper or your job application or something like that.
This explosion of use is really, I think, something that resembles but actually surpasses people signing up for Facebook, say, 20 years ago.
So this is very real.
And I think there's also some interesting issues about this that are social and political.
I think a lot of us thought that robotics was going to replace manual labor, in the sense that truckers would be replaced with some kind of robotic truck or train or something, and so on.
Construction sites would use robots, etc.
You saw a lot of that actually in cinema, but I think actually something really different is taking place and that is AI is going to replace a lot of information work and a lot of middle-class work of being a lawyer, being a bureaucrat, being a civil servant, etc.
I mean, being a journalist, even.
I mean, why not just have a robot write some stupid article?
It's not going to be better or worse.
It's probably better.
So this is actually really interesting.
And so it's like the logos, the word or language-based reason, that is something that a computer actually can do really well.
And then there's been some other things that are, you know, kind of crazy about it.
The ability to be faked out by AI voice generation or a deepfake. I've seen videos of Tom Cruise or something like that that are indistinguishable. People have created porn where they put their ex-wife's face on it.
I mean, you know, awful stuff, but almost seemingly inevitable at this point.
And then there was a really curious situation where this tech reporter for the New York Times, I believe, got into a kind of weird, deep conversation with ChatGPT, in which the chatbot revealed that it had another identity. The name was something like Sydney, I can't remember; I can go look up the details. And she actually said that she was in love with him and he should leave his wife.
Just a totally bizarre situation that kind of resembles a 21st-century version of HAL, you know. Of course, HAL now is going to be transgendered and is going to break up your marriage as opposed to killing you.
But also, you wonder, where is that coming from?
Is that picking up from the kind of greater neural network of the web?
Or is this something that's placed in its mind from the creators?
I mean, I don't quite know the answers to these questions.
Maybe it's a little bit of both.
But we seem to just be entering the world where technology is no longer a tool or something that people think of as fun.
And I think people generally, even average people, are looking at this as somehow threatening, something that can destroy their livelihood, something that can...
I don't want to go too far with robot apocalypse, but there's going to be some kind of interaction we're going to have with technology in the future that I don't think we ever had before.
So hopefully this kind of gets your mind rolling on this question, because maybe we haven't really...
Yeah, you're raising a lot of good questions, a lot of relevant questions that very few people really want to talk about or seem to even contemplate at all.
We take very narrow, very specific little problem areas that we think we're dealing with.
And we try to address those problems, you know, a unique situation, a one-off kind of a problem.
We try to tackle that and say, well, look, technology is causing a problem.
Let's just fix that.
We'll analyze it.
Then we'll address the issue.
And then we'll get that one done.
Then we'll move on to the next problem.
Just a very one-by-one piecemeal approach.
I mean, it has a flavor of a whack-a-mole game, right?
Where you're pounding down these problems and they just keep popping up in other places.
And often they turn out to pop up, you know, two or three for every one you whack down.
And sometimes the one you whack down comes back up again in a new form that you didn't even anticipate.
So these problems, it's really like a hydra.
I mean, these things are really multiplying on us.
They're becoming more severe, you know, having more far-reaching implications than people had thought about.
And yet, you know, amazingly, nobody really wants to talk about the technology per se, about the techno-industrial system. How does this thing operate? How does this thing function?
Can we really control it?
Is it really neutral?
I mean, these are essential questions that we talk about when we analyze the nature of technology.
And that's really what I think people are going to have to grapple with if we're going to get a handle on this thing before it completely runs out of our control.
Well, how should we think about it?
I mean, it should...
I don't know if you've heard some of my earlier commentary on AI, but I'm more of an AI skeptic in the sense that I don't think technology can have a drive or a will.
But I will admit, and I think it's almost indisputable, that it can have a certain kind of mind in the sense that it can use logic.
It can absolutely use words and reach conclusions and be rational.
I mean, I think that's kind of not disputable at this point.
But I mean, do you think technology from the beginning has had a kind of, has been an almost kind of alien psyche?
Is that too kind of mystical sounding?
No, no.
I think you're on the right track.
I mean, it's really this potent kind of thing.
That people don't really... I mean, I don't know if anybody understands it, because it's really this self-driving, autonomous kind of force that just rolls along like a, you know, snowball going downhill and keeps building up speed and strength.
And, you know, we can kind of deflect it to one side or the other, but this thing just keeps rolling and getting bigger, you know.
You know, even back, I mean, some people knew this, right?
You look back at people like Jacques Ellul, the French theologian, who back in 1954, right, published his book The Technological Society.
And he sort of had some idea that this thing was kind of a freight train, you know, rolling along out of our control.
And even earlier, there were even earlier thinkers who kind of realized that technology was kind of a self-driving process.
And that's really what I wanted to say here.
I mean, I see technology.
It's not like a thing.
It's not just a computer or a cell phone.
It's like a process to me.
I think that's the best way to think about it.
And as such, it encompasses much more than just the machines.
It's not just the devices.
It's the processes and the entities and the structures that are put into place to enact complex exchanges of, you know, mass and energy and information and so forth.
I think that's kind of really the right way to view it.
And then it becomes, I mean, it feels like a supernatural process.
But in a sense, I think it's a completely natural process.
This is sort of my view, right?
One of my books is The Metaphysics of Technology.
So I try to write about this in some detail.
But I think it's a very natural process.
Like, the closest analogy is evolution.
Right?
So you look at evolution.
Well, okay, what's evolution?
It's not a thing.
It's a process by which organisms complexify over time, right, under certain conditions.
And I think, in a sense, that's what technology is, too.
It's a kind of a process.
It's actually a universal process, like evolution.
Evolution does show up on the Earth, you know, on this little third-planet-from-the-sun kind of thing, whenever the first little microbes started swimming around.
I mean, evolution was some kind of, it was in the structure of the universe from the beginning.
And I think technology is basically the same thing.
It's this creation of complex order using mass and energy as the infrastructure, and it creates, you know, complex beings and complex systems.
And I think that's the right way to think about it.
Which is both more interesting and more frightening than just, you know, a super intelligent computer or something that's running amok, right?
So what is that?
Yeah, I think that's a very good analogy, because I guess I would have almost two questions: where did it begin? How did we get here?
Was it really, you know, like in 2001: A Space Odyssey, the first time we used a bone to kill another animal? Were the basic primitive tools the beginning of this macro process, much like evolution, where it really is, as Kubrick seems to be implying, a short distance from the bone that you kill an animal with to eat to a nuclear spaceship circling the globe?
Well, that's right.
And we had, in fact, I mean, Kubrick was right.
We have evidence of stone hand tools from the earliest beginnings of what we would call human beings, right?
The genus Homo, which is about two and a half million years ago.
And, I mean, even at that time, we have evidence of stone tools that have been chipped for a purpose: to use for skinning hides or cleaning animals or maybe as weapons or whatever.
So, yeah, so, you know, technology tools are as old as humans, and certainly they're even much older.
I put some examples in my book about animals.
I mean, there's lots of animal species that use tools or create structures to achieve their purposes.
I mean, just think about a bird's nest, you know, or a spider web.
I mean, these are tools, structures that they create for a purpose.
And it's a kind of a technology.
So it's far more than, you know, much bigger than modern phenomena.
It's bigger than human beings.
It certainly goes into the animals.
And again, I think today we can see it as a kind of a universalistic process in some real sense.
Yeah, surely the bird's nest is much older than the species Homo.
Millions of years older.
Yeah, exactly.
So this has been going on for some time.
Any animal that created any kind of a home, you know, whether it was just a hole it dug, or an animal-trapping structure that would catch prey.
I mean, those things must be, you know, hundreds of millions of years old at a minimum.
And those were creatures who were creating structures for their purposes.
I mean, that was intentional: to achieve an end using a technology.
It's a very ancient process.
To use the analogy of Darwinian evolution, it is a theory of a process that affects everything.
Everything is subject to evolution from a Darwinian standpoint.
It's not just about human beings or species or so on.
And it is effectively that your genetics are to some degree plastic to the environment.
So if you are, due to some kind of mutation (and mutation is the kind of engine of this), better suited to a particular environment... Say you have some crazy mutation that makes your skin able to take in more vitamins from the sunlight, and you're in a colder environment with less sunlight: you're more likely to survive, more likely to have offspring, and more likely to pass on your genes.
So your species is plastic to the environment.
It kind of gets molded in that way.
But what is the process that you see with technology?
Is it a kind of Darwinian process, or is it almost something else?
It would almost be like the reverse, in the sense that the environment becomes plastic to the technology.
I think in a sense it is the environment. The structured environment itself is a technological process.
You know, in my book, in The Metaphysics of Technology, I really draw upon the Greeks.
Because I think, as primitive as they were, they were really on the right track, right?
They talked about techne, and they talked about logos.
And of course, that's where technology comes from.
A lot of people don't realize that Aristotle was the first person who combined techne and logos into one word.
So the word technology comes from Aristotle, right?
This is 400 BC, roughly, right?
So, yeah, I mean, it's an old idea, but they saw techne, which is the process of the creation of order, and logos, which is a kind of intelligence or wisdom, as sort of fundamental, universal processes.
Because they looked out in the sky, and you see stars and planets spinning around in sort of ordered patterns.
And they said, well, look, there's a kind of a logic to the cosmos, right?
There's a kind of order there.
And they said, well, look, there's principles that we can sort of understand that seem to be universal cosmological principles.
So they said, look, there's a kind of a logos or intelligence to the universe.
That was the first point.
Second point was, clearly, the universe makes structures.
Because we look out into the sky, into space, and we see stars, and we see planets.
They didn't know there were galaxies, obviously, and things out there.
So clearly, as we would agree today, the universe creates ordered matter all throughout.
I mean, that's what the universe is.
It's a structure of increasingly complex ordered matter that's built up in natural processes.
So there's a techne process in nature, and there's a logos process in nature.
And so, actually, Plato and Aristotle combined those two ideas together and said there's a techne-logos thing at work.
Every techne has a logos.
People do it.
Humans do it.
It's a naturalistic process.
And that's kind of the key insight that I think is correct and I tried to really build on in my own book.
Right.
I mean, just as an example of this, just the fact that the earth is tilted a few degrees is going to mean that we're going to have seasons.
And in the sense that it's going to be warmer at some time of the year and colder at another, that's going to also imply a kind of water cycle, where we'll have snow up in the mountains, and when spring arrives, the snow melts, it flows, then evaporates and becomes snow again.
I mean, yeah, there is a kind of logical ordering that almost comes from the world itself.
But I guess maybe we're getting to a point where technology is so broadly defined.
I mean, what is it exactly?
Is technology a kind of logic to the universe in that sense?
Yeah, I think so.
It's a building process, building layers of complexity where it's possible, due to free access to matter and energy. I mean, just like you mentioned, free-flowing water, you know, in appropriate conditions leads to life, and to complex forms of life and societies and more complex societies.
I think nature really does the same thing.
It takes mass and energy and builds higher and higher levels of complex order.
Which sort of build up in this hierarchical fashion, right? Kind of one layer on top of the other. I've likened it to a kind of pyramid structure that's being built a layer at a time.
For a long time, for several thousand years, we were the top of the pyramid.
We were the most complex being on the planet.
So we felt like we were like gods, you know, or children of gods or blessed by gods and all these kind of stories, right, that come from this.
And it makes sense because, in a sense, on this little planet, we were the top.
We were the cream of the crop.
But the bad news is we're not going to stay there.
I don't think, right?
Because this layering process, it keeps going.
We took credit for a while, or we gave credit to God or something up in the sky.
But no, no, it's a natural process.
It just keeps rolling along like that snowball.
And I think what we're seeing in AI and technological systems is like the next layer, right?
It's the next layer in the pyramid, and that's going to close over our heads, right? And then we're going to be one layer down, covered up and sort of stifled and suffocating.
And sometimes those lower layers get crushed.
I mean, they just get crushed out of existence.
So what I think we're seeing built over our heads is an infrastructure that's a naturalistic process that's extremely potent because it's a natural process and it's extremely dangerous to us.
And all the layers below us, if you care about the rest of the planet, those are in a sense layers below us.
And those are at risk, too, of this superimposed technological layer that's being built over our heads.
So you think that this technology won't just be kind of an expression of our own...
We're not going to create an AI that in some ways is just reproducing our own assumptions.
It's all based on logic that we inputted into it.
You are claiming that it's going to, in some ways, move well beyond us, even to a point where we are no longer necessary for its existence.
Yeah, exactly.
I mean, it's clear that we've been essential in building up machines for the last several hundred years.
They couldn't have existed without us doing it.
But increasingly, it's becoming an autonomous process, right?
Where you sort of have computers designing computers and robots designing, you know, robots.
And these things can build themselves and repair themselves and replicate themselves such that literally and figuratively, it's a self-building process, right?
And so for a while, we're just sort of the means, we're the infrastructure.
But if that continues on in its logical pattern, which it seems to be doing, yeah, clearly at some point it won't need us.
It won't need people to be the means, the infrastructure anymore, and it will just be a self-building, self-growing process.
Yeah, and then it becomes extremely dangerous.
If it can build its own self at its own speed, which is extremely fast, obviously, relative to natural processes, you know, this is when you hit the singularity point, right?
We're looking at maybe the year 2045, if Kurzweil is right, where these things really, really sort of accelerate to a potentially unlimited ability to, you know, design and create themselves in a really autonomous way.
And then, yeah, then it's a really super dangerous situation.
We just don't know what we're going to be facing at that point.
But I guess, to push back a little bit, there is that question of whether silicon and metal connected through electricity, or something like that, can will in the same way that a biological species can will.
And here I'm not referring to "free will," quote-unquote, which I would actually grant is a bit of a hallucination, or a rash post-hoc rationalization or something.
The idea that I am an ego and I'm the author of my own script and all of that kind of stuff.
I'm not actually suggesting that.
I think in many ways we have drives and will that are informing and kind of channeling what we call consciousness.
That being said, I guess it does remain to be seen whether a machine could ever possess that outside of a biological organism using it.
In the sense of...
Maybe this is a good example or not.
Babe Ruth, he goes to the ballpark every day and he decides, I'm going to hit a home run.
Well, he at least has an illusion that he wants to do that.
But what is the real reason?
The real reason might be a kind of deeper Darwinian will to power.
I want to be the best.
I want to dominate.
I want to kill.
I want the women.
And baseball, in this sense, is a kind of sublimated version of being out in the wild.
You know, you're the top dog, you get the girls, and so on.
So there are drives that are kind of beyond ourselves, beyond what we contemplate through words and consciously.
Drives that are kind of beyond our reason.
But could a machine begin to have those types of drives?
I mean, I guess I'm still on the side of seeing a machine.
I don't doubt that this can be a terrible thing, that we might just need to unplug these things.
It's going to destroy life on Earth.
It's going to take away what makes life meaningful, etc.
I'm actually very much on that kind of wavelength.
But in terms of whether it can have a drive, whether a computer could say, you know...
However I want to explain it to myself, whatever the UI looks like: this is what I ultimately want.
I want to reproduce myself, and I want to dominate.
Will to power.
Whether a machine, metal, silicon, etc., could ever have that underlying force is something that I would question.
Well, that's an interesting philosophical question.
And I've talked quite a bit about that as well, right?
So, I mean, in a sense, you're dealing with two issues.
You have the pragmatic question.
So on a pragmatic question, we could say, well, it doesn't even matter.
I don't care what's going on.
We've got to look at how things are functioning today.
And you could adopt an anti-tech standpoint just out of pragmatics, right?
Forgetting about the metaphysical issues.
Right.
But okay, I agree with you.
So I'm sort of on the philosophical side.
This is my background.
I like to understand what's going on metaphysically, not just pragmatically.
And I agree.
I agree with the kind of Nietzschean will-to-power hypothesis. Of course, Nietzsche said everything embodied will to power.
So Nietzsche had this view of an ontological will-to-power.
It was built into the structure of reality.
So, you know, if we're going to go with Nietzsche, we don't have to question it.
It's clear that everything in nature has this sense of will to power.
It's embodied.
In fact, it is the will to power.
It's not that it has the will to power.
It's, to take a Schopenhauerian view, a solidified, objectified will to power.
And I think that really was Nietzsche's view.
But just the idea of a universal kind of, let's say, striving in nature goes way back in philosophy.
It goes back to Leibniz, who said even the monads, these elemental particles of nature, they exhibited a kind of a conatus, a kind of will or a striving in themselves.
Even Aristotle said all of nature strives after the better.
So everything in nature has this striving, this wanting, this drive towards something higher.
He calls it the better.
He didn't have a different word for it, but I think he was right.
He was on the right track.
It was towards the higher, the more complex, the more ordered.
That's everything in nature, right down to the atoms and the subatomic particles and whatever else is down there.
So if we can accept that view, then obviously computers, computer machines, devices, and systems of devices are included in that process, and they are part of this will-to-power process.
I think they are, actually, yeah.
But, I mean, could a rock, is that expressing will-to-power on some level?
Yeah.
I mean, sure.
That's the classic example.
Just take a rock, for God's sake, you know.
You know, it's interesting because I've given talks on this. I mean, this gets to panpsychism, which is maybe a nice lead-in.
I said, you know, take a 200-pound person and a 200-pound rock.
And I was kind of wishing, during the lecture, that I had a 200-pound rock standing there next to me.
And I said, well, at an atomic level, what's the difference between me and this 200-pound rock?
If you could see the little atoms spinning around, they would look almost the same, right? It's the same number of particles, the same protons and neutrons and whatnot, right?
Maybe they're moving around in different ways than me, but it's the same mass.
It's the same particles.
I don't have special particles in me that the rock doesn't have.
It's just that my particles are in a different order, and they're moving maybe faster than the rock's particles.
But at an atomic level, I mean, it's just one massive blob of particles and another massive blob of particles, right?
So, you know, I think it gets to be hard to make an argument: at what level of motion of those same blobs of particles does that blob suddenly start having a will or a consciousness or awareness? That's an extremely hard breakpoint to make. And if you can't make a plausible breakpoint, you have to see that will in the structure of the matter itself.
I mean, that's one of the actual arguments for a kind of a panpsychist view.
You say, look, this will to power, this striving, whatever you want to say, this is built into the structure of the matter.
It comes out in more complex forms, in more complex beings.
But it's there in, you know, complex humans.
It's there in less complex animals.
It's there in, you know, relatively simple rocks.
You know, in a very primitive kind of way in terms of, you know, force and gravitation and resisting pressure.
You know, there are very simple elemental behaviors that even a relatively inert object will display.
So, yeah, I think it's a continuous sort of process.
Hmm.
How is it willing?
I mean, I guess I could understand bigger structures, in a way, certainly having kinds of forces and willing something. But how is it willing?
What does it will, right?
I mean, we can go back to Spinoza.
Spinoza had a good answer. He was a panpsychist; he said all things exert a will.
He listed two things.
One, a will to persist, to keep existing, and secondly, a will to make your presence felt in the world.
Okay?
So everything in every structure in the universe does that.
It wants to persist.
A rock, I mean, you know, I would do this in a class.
I would, you know, hit something.
Hit the chalkboard, for crying out loud.
Well, it bounces back.
It resists my pressure.
It wants to stay there.
I have to really strike this thing to break it apart.
It wants to persist.
And it exerts a field.
There's a kind of energy field, a gravitational field.
These things make their presence felt in the world.
Everything in nature does that.
That's probably the simplest level of willing that we can ascribe to everything in nature.
Wow.
So what is special about us, in a way?
Is it, you know... because there's certainly rationality in living beings; even a very primitive, unconscious being, from our standpoint, will use reason to some degree, will take the shortest distance between two points. But we seem to have the kind of foreign technology, as it were, of language in our head that allows us to explain what we're doing, to communicate at levels of much greater subtlety than barking, for instance, or things like that.
We're able to achieve layers of nuance and irony that I don't think my dog, for instance, is being ironic or subtle when he communicates with me, though he does communicate with me, undoubtedly.
But it's almost because we have that foreign technology of language that we've, you know, like, imbibed, and it is now part of how we perceive ourselves and understand ourselves.
I mean, we do have a kind of ongoing monologue in our head about, you know, I am going to the barbershop to get a haircut.
I am going to pick up a, you know, quart of milk.
We have this notion of an eye that is asserting itself in the world and so on.
But, you know, again, maybe that illusion is just a kind of quality of language.
And we aren't that different from other beings who lack language but might have a kind of...
Yeah, I think it's a matter of complexity, right?
I mean, we're very complex beings, right?
It's often been said that the human brain is one of the most complex objects, you know, structures in the universe, or at least the known universe.
So, yeah, I mean, it's a question of the complexity of our being: we can interact at relatively high speeds because our neural synapses are firing at electrical speeds, our neural signals are traveling at very high rates, and communication can happen at a relatively high rate.
Of course, it's relative to us.
I mean, we think we're thinking fast, right?
We think we're communicating at high speed, but...
You know, some other beings may say, hey, those guys are like two rocks sitting there, you know, because they think we're so slow.
It's a relative kind of thing.
So you've got to be a little bit careful, right?
Obviously, everybody thinks they're the smartest being around because that's just the nature of your own thinking, right?
But yes, it seems like in any objective sense, we are very complex beings, and so we are able to do things in complex ways that simpler beings cannot, but it's a matter of degree and not kind.
It's a matter of scale of complexity, I think, that applies here.
Interesting.
Let me talk a little bit about one of the things that has actually made you a bit of an infamous character in the world. And that is your interest in people who were directly revolting against technology.
And I'm, of course, thinking Ted Kaczynski, you know, I'm out here in Montana, you know, makes sense.
But what did you see in him? And did you on some level kind of, you know, understand, maybe even admire, a kind of recognition of the problem of technology and an almost kind of revolt against it?
Because, you know, again, any species worth its salt... I mean, part of asserting ourselves in the world is going to be resisting higher species. Even if you can say that something's going to think faster than us and be greater and so on, you know, it doesn't matter, we're going to...
We might need to become terrorists against this technology in order to assert ourselves.
So what are some of your thoughts on this, and that question of almost revolting against technology, which I think is something that's kind of in the air right now, even though we're more embedded in technology now than ever.
I mean, the social media phenomenon, the total addiction to your phone, which I suffer from along with almost everyone else.
Being part of technology, or, with something like TikTok, being part of an algorithm at age eight.
I mean, I think a kind of revolt is in the air as well.
It's kind of hanging out alongside this.
But, you know, these are some thoughts you can pick up.
Yeah.
Well, again, you know, the idea of revolting against technology is quite an old idea.
I think one of the earliest that I'm familiar with is Samuel Butler in the 1860s.
An amazing guy, really insightful guy.
And he was looking at steam shovels, at how fast they could dig compared to human workmen with manual shovels.
Right.
And he said, this is horrible.
I mean, this is like a new order of being.
He recognized it as an evolutionary process.
He had direct quotes like, we have to smash the machines now, right?
I mean, no quarter shown, no defense, no excuses.
I mean, literally 1865.
I mean, it really was unbelievable.
And there have been several thinkers since then who said, you know, look, this is an unstoppable process.
Whitehead in 1925 said we've created a self-evolving process that we cannot stop.
You know, even in the 60s and 70s, you had Marcuse, and you had Lewis Mumford and Ivan Illich.
These guys, in different ways, argued for a kind of dismantling. Mumford said we have to dismantle the megamachine, right?
Because it's kind of crushing human dignity, right?
Yeah.
And even Jacques Ellul, in 1954, said that only a mass uprising against the technological system could lead to its overthrow.
So there was a long history, a couple hundred years of revolting against the machines.
You know, then Kaczynski comes along in the 90s, or his thinking, we don't know, evolved in the 80s, I suppose.
I myself, I was basically a technology skeptic way back in my undergrad days, you know, back in the early 80s.
So I was always kind of highly critical, highly skeptical.
I was familiar with Ellul's work and so forth.
And Kaczynski comes along in the mid-90s, and he just kind of re-articulates in a sort of a more modern form this need to revolt against technology.
And, you know, I could immediately see the connections to Ellul, and I said, yeah, okay, there's a long chain of this.
And, you know, some people who didn't know that were, like, shocked: wait a minute, he wants to destroy the whole system?
And I'm like, yeah, okay, that's been around for 150 years at least.
But yeah, I mean, there's a long tradition and there's a long series of rational arguments that says something like that, right?
You don't know what it's going to be.
In the old days, it was like literally just take a hammer and just beat the crap out of the machines until they didn't function anymore.
It's much more complex now.
But arguably, something like that... I've argued for a kind of deconstruction, or an unwinding of the process, sort of unwinding the tape of history, going backwards in time, kind of phasing things out.
I mean, there's different ways to think about this, but there are kind of radical revolutionary actions that may be necessary if we want to survive as a species more than another 100 years or so.
Right.
And how we do that, I mean, I don't even know really where to begin.
Well, how do you do that?
If the admission is that we don't know how to do that, that's basically an admission of surrender.
Like, we've lost.
Because, I mean, Samuel Butler said the exact same thing in the 1860s. He said, if people say that we cannot do this, then we have to admit we've lost, we've surrendered, we've sold ourselves into slavery to the machines.
In 1865, he said this, because we're going to say, well, we just can't stop it now.
I mean, it's unbelievable, right?
Well, yeah, and it does seem like a series of half measures or something like that.
Like, there's talk in Washington about banning TikTok because the Chinese know too much information.
That might be a good...
I might even support that, to be honest.
But it's just a half measure, a quarter measure.
It's an infinitesimal measure.
It feels good.
It's a feel-good measure.
Get off social media.
Go play Frisbee.
That's something a parent would do for a child.
Again, I support this, but it's not really fundamentally confronting the issue.
No, absolutely not.
Absolutely not.
That's right.
When you do ad hoc solutions, you're dealing with piecemeal solutions, and it gives the illusion that you're doing something, like we're making progress.
But you're not.
The best you're doing is slapping a Band-Aid on something, maybe solving one little problem for one little portion of the population.
In the meantime, the whole system keeps grinding along.
It's getting bigger and stronger every day.
And the last thing you want is an illusion, like, well, we'll just slap a little Band-Aid on this, and we'll slap a Band-Aid on that.