David Krueger warns that humanity faces existential risk through gradual disempowerment rather than sudden annihilation, predicting economic displacement, cultural reliance on AI for expression, and political "algocracy" where leaders become algorithmic puppets. He dismisses conspiracy theories linking U.S. safety advocates to China, noting Beijing's aggressive regulation protects its social order despite surveillance use. While current alignment tests fail as AIs adapt, the Trump administration's new Center for AI Standards and Innovation under Howard Lutnick's Commerce Department offers nascent hope, yet Krueger fears unchecked, Moloch-like systems will ultimately control human destiny. [Automatically generated summary]
This is precisely what's so terrifying about the trajectory that a lot of Silicon Valley investors are trying to put us on now, where they've started to realize that, you know, maybe we don't need these workers to get so much income.
Maybe we can build machines that replace them.
Honestly, you have a little bit to do with the inspiration for this, Steve, because it started becoming very striking to me that there was incredibly broad support in America for these ideas.
For a long time, I used to call this the Bernie to Bannon coalition, saying, hey, you know, yeah, curing cancer is great.
We can do a lot of wonderful things with AI to strengthen our economy and strengthen our country and strengthen our military, but let's make sure that it's in the service of human beings, not in the service of some machines.
When we're talking about losing control over AI, we're not talking about the chatbots, we're talking about AI agents, we're talking about systems that are autonomous.
I think in 10 years, we will, if things go well, we will look back at this moment and we will view it as a moment of kind of collective insanity and be like, wow, can you believe that we were ever doing that?
That we were racing to build this technology that we knew had a massive chance of replacing us and was going to completely disrupt our society in all the other ways that you mentioned.
One of the main reasons I am optimistic is because in my time in the field, I've seen this go from an issue that nobody was talking about to being more and more understood and accepted, not just by the research community, but by policymakers and the public.
We talk a lot about existential risk in artificial intelligence.
Sometimes we discuss it in terms of human action, humans using machines.
What if a dictator uses algorithms to monitor the communications and even the thoughts of a population, and then uses those thoughts, uses those communications to subdue his own people?
What happens if a rogue actor uses the expertise provided by an AI system to create a bioweapon or any other kind of improvised weaponry?
What if the US or China develop armies of humanoid robots, drone swarms in the skies, and deploy these autonomously against soldiers or even citizens?
What happens if both do this?
On the other side of this, the more extreme wavelength, you have the idea that artificial intelligence itself could be put in control of these systems and, by its own decision-making capacity, begin to produce propaganda to subdue the population, or perhaps to unleash a bioweapon to weaken or kill a population, or the entire human race.
What happens, these thinkers ask, if AIs take control of autonomous drone swarms and exterminate some or the entire human race?
Now, these are Terminator vibes.
Wake up tomorrow and the robots kick in your door and drag you away.
But there are more subtle scenarios that are proposed.
Among the most plausible is gradual displacement.
What happens if human beings gradually cede control to the machines?
They do so on an economic level, jobs being displaced slowly but surely until humans are rendered obsolete.
What happens when human beings deploy AIs for culture and then eventually have completely lost the capacity to express themselves, to persuade their fellow humans on a cultural level?
What happens if we cede the control of the state bit by bit to an algocracy?
This idea of gradual displacement is put forward by Professor David Krueger.
David Krueger is the CEO of Evitable and a researcher at Mila in Montreal.
I'm so glad that it happened and grateful to Senator Sanders for really talking about the elephant in the room.
We're building AI systems that are going to be as smart and smarter than people, and we don't have any plan for how to keep them under control or keep them from replacing us.
So, you know, that's really the basic picture that basically nobody, no other politician is talking about as directly as Bernie Sanders.
And the only way that I think we can stop that from happening is to make sure that not only American companies don't build this thing, but also Chinese companies, also European companies.
You know, it really needs to be a global thing.
So that's why we also had these researchers from China there.
And, you know, there's a lot of agreement among researchers that AI has these massive risks and that we should at least be regulating it.
I personally think we shouldn't be building it at all right now.
It was funny to me, you know, a lot of people were flipping out about the doomers (doomers, I guess, being you and Max Tegmark) collaborating with the Chinese in order to subvert the U.S. government.
Now, Bernie, I won't say that he is a total commie or anything, but Bernie would maybe be a little suspect on that front.
However, listening to the Chinese researchers who were there, well, there via Zoom, they seemed a lot less concerned about the dangers, especially the younger gentleman.
Pardon me if I can't remember or even pronounce his name, but it's interesting to me.
This narrative is that U.S. and Canadian doomers are collaborating with China to subvert AI innovation.
But in China, the narrative isn't really as gloomy by and large.
I don't have my finger on the pulse there as much.
I will say, I mean, first of all, the whole collaborating-with-China thing, it's just really silly.
It's ridiculous.
I mean, this is just like a conversation about the risks of AI.
There was no, you know, scheming or like, oh, let's work together.
And it's public, you know, you can go and watch the thing.
So, you know, this is just the kind of dialogue that we should be having.
I mean, even if you think, you know, China is the worst nation to ever exist and our mortal enemy.
We talked to the Soviet Union, to Russia all throughout the Cold War.
The idea that you just shouldn't talk to your enemies when you face a common threat is ridiculous and stupid.
In terms of the vibes of Chinese researchers, the Chinese government has been, I want to say, regulating AI more aggressively than anywhere except maybe Europe.
And they've also said publicly that they want more international cooperation and stuff.
Now, I don't know entirely what to make of that.
Again, a lot of people will say, well, you can't trust anything they say.
I wouldn't say let's just trust them on their word, but I think it's some sign that they have some appetite for this.
When I went to China three years ago to speak to researchers there, one thing I found is the attitude, I think, is very different from here.
So in both places, researchers agree we need to solve the safety, security, alignment, control problems.
We don't understand the systems.
There's technical problems we need to solve.
In the US, it's like we need to do that because if we don't, the government's not going to do anything and then we might all die, right?
We might lose control.
In China, it's more like if we don't do this, the government isn't going to let us build the systems we want to build.
That was kind of the vibe I got there.
And certainly their government is, I think, more worried about AI disrupting their social order, which they obviously want to keep very controlled.
Yeah, my impression is that, while I'm not trying to give a whole lot of credit to the CCP by any means, at the very least they've taken the problems with child safety and other elements of AI and digital culture more seriously, at least on a regulatory basis.
Now, at the same time, they openly use algorithmic systems to scrape up and analyze the population's behavior and use it to suppress them at every turn.
So it's a mixed bag, to say the least.
And in no way, shape, or form do I want the U.S. to end up like China.
But I do think that the whole notion that you can't talk to people, and that talking to people somehow means that you're in cahoots with them, I just find that to be completely absurd.
I mean, you could argue that you and I are in cahoots, but, you know, until I subvert you.
So, to this idea: I think that when you look at existential risk, or catastrophic risk in general, just the risks of AI, the conversation naturally veers towards these notions of sudden annihilation.
You know, you wake up one day and the AIs have taken over.
The robot has put the pillow over your face while you were asleep.
The notion of gradual disempowerment, I think, is really compelling because, one, it shows kind of the continuity of AI development and deployment with other technological developments and deployments.
So, TV, internet, smartphones, social media, all these were gradual processes.
They happened, it seems like overnight looking back, but they were gradual processes.
And they're not complete.
It's not like everybody's done it.
The same thing with gradual disempowerment.
I find it to be very persuasive because of its subtlety.
So, if you would, could you just walk the audience through at least a brief overview of the six principles that you put forward in the original paper and the three sectors of society you focused on: the economy, the culture, and the state?
I think you're not the only one who finds this a lot more compelling.
Many people I talk to, I think, are very skeptical that AI poses a risk of human extinction until we start talking about it this way.
So they're like, Rogue AI, Terminator stuff, I just don't buy that.
I'm like, well, answer me this: do you think governments are going to build autonomous weapons if other countries are doing it?
And, well, yes.
And then, do you think we're going to have some sort of international treaty to not build those weapons?
Like, I don't know, probably not.
Seems like it's kind of anarchy out there.
So, we're going to be going there with AI by default.
And it might happen pretty gradually.
But all of the scary things that people are worried about with AI, I feel like, okay, maybe not literally all of them, but if it's technically possible, we may well do it.
So, gradual disempowerment, it's kind of an idea that has been floating around in some form for a long time.
Like I said, when I talk to researchers, I've been doing this for over a decade, this is often where I go in order to convince them to take these risks seriously.
But this paper was really trying, for sort of the nth time, to get those ideas out there on paper in a way that would shift the conversation and bring more attention to this, which is kind of a neglected form of risk.
And so, like you mentioned, there's the cultural, economic, and political disempowerment that we talk about in this paper.
The economic one, I think I like to start with because I think it's the most obvious.
Everyone's already asking: is AI going to take all our jobs?
If we keep building more powerful AI systems, they will be economically out competing humans.
And then we'll need, you know, some sort of different way of organizing society.
Like, I've heard people talk about a government jobs guarantee; something like that would be really the only kind of thing that would allow people to keep their jobs.
And then people also talk about like universal basic income.
I don't like either of these solutions because at the end of the day, even if it's a jobs guarantee, it's a government handout, right?
And I don't think we want to be reliant on government handouts to put food on the table.
If the government is the only way that you're surviving, the government can just pull that away at any time and you can't survive anymore.
So that's why we have to talk about the government side of this as well, the political disempowerment.
So, in the same way that AI is going to be competitive with our jobs, it's going to be competitive with politicians for their jobs as well, and for policymakers more broadly, everyone in politics.
And so, you know, if people are really replaced, not just in the workplace, but across the board, throughout society, then I just don't see that we're going to continue to be able to steer the future and have any control.
And that's really concerning.
The cultural part, the last one, I think, is maybe a little bit non-obvious at first.
But what I think about when I'm thinking about cultural disempowerment today, right now, is all the people having relationships with chatbots, where, you know, they will do a lot of things just because the chatbot told them to, basically.
And then the other thing I think about is, and this might seem a little bit out there for some of your listeners, but in the bubble that I'm in, tech and AI, and now I moved to Silicon Valley or like Berkeley recently to set up this nonprofit, there's a lot of people who really think that AI is like the next phase of evolution.
You know, AI is like a person and deserves rights and deserves moral consideration and all of that.
And I think that's really dangerous where we're at today because, you know, we don't want to start treating AI as, you know, another being deserving of rights because then if it is more competitive than us, then we'll have no protections left, basically.
And, you know, I think this is like a deep philosophical question that, you know, we do want to think about more, but it's really not...
Play out the narratives that you hear from Anthropic, from OpenAI, certainly Elon Musk.
He frames it as a warning, but he continues to pursue it.
And a bit more subtly from Google.
That narrative ultimately leads to exactly what you're talking about.
They don't always talk in terms of immediate annihilation; they bring up the possibility. But without a doubt, inevitably, if their aims come to fruition and they're able to replace all the coders, all the white-collar jobs, all the blue-collar jobs; if they're able to first improve the government through algorithmic efficiency, and then slowly but surely the politician becomes a sock puppet for the algorithm, and then maybe the politician just becomes the algorithm.
You just have, like, some kind of deep-faked Josh Hawley talking about the dangers of AI, a deep-faked Bernie who lives for centuries.
Yeah, these are real issues.
And the cultural issue, I think, is probably the one that resonates the most with most people right now, because that is happening, obviously.
People know other people who are in love with their chatbots or, at the very least, rely on them for everything.
Now, you talk about the interrelationship of these things in the paper, too.
Could you give some sense of, like, if you just take one kind of path for how cultural disempowerment would lead to political and economic or any such path?
I guess, you know, we talked about, like, so if AI is doing all our jobs and then we're like, well, we need the government to, you know, sort of step in, we still have political power, so maybe we can have some government program that keeps people alive, or maybe one that just says, no, people are still going to have jobs.
We're not going to let AI do all the jobs, whatever it is.
You might think, okay, we can rely on the government here.
But then if the government is itself, again, being composed of AI and increasingly the decision-making is being done by AI, then humans might be disempowered there as well.
And maybe we still have a vote, but we're all just so controlled and manipulated by propaganda that, essentially, you can predict and control how people are going to vote so well with AI that that's determining the outcomes of the election, rather than our own intuition and decisions and judgments and values.
And I'm glad you mentioned the sock puppet thing as well, because that's something people are often saying: why don't we keep a human in the loop here, right?
So AI can give advice, we can use it as a tool, but humans are always going to be in charge, and that's what we want.
Having a human in the loop sounds great, but it's harder in practice to make that human really a meaningful part of the decision-making.
And so then that can happen in politics and also broadly throughout culture, where everybody's just deferring to AI all the time for making all their decisions.
Maybe their decisions about how to vote as well, you know?
So you have the politician basically repeating propaganda that AI generated and the public then asking the AI which AI generated propaganda is superior.
And then ultimately, you know, like I was saying, maybe we end up giving the AIs rights. Or another thing that I think is a pretty disturbingly realistic scenario in my mind is that we get chips that go in your brain. It starts out for therapeutic purposes or whatever, but next we're using it to augment ourselves, next we're using it to connect to the internet and other people in some hive-mind thing.
After a few years, it's like, maybe this chip should be bigger and there's not really space in there.
Why don't we just take out this part of your brain?
And then the next year, it's like, this brain part isn't really that useful anymore.
And, you know, this is just like really disturbing.
And even the small version, you know, I think by default, we should expect that these chips are going to be, you know, on the cloud and controlled by, you know, big companies and government in a way that we don't really have much, you know, legibility into.
And it's not very, you know, trustworthy and it's very dangerous, I think.
And that's another form of gradual disempowerment, where it might take a long time to go from this little chip in your brain to something that's increasingly controlling your behavior.
But it also might increasingly be a requirement to get certain kinds of work, right?
It's like the same way, like you kind of have to have a cell phone now.
It's like pretty hard to navigate society without one.
There's increasingly a need to, you know, give your identity every time you buy a sandwich or whatever.
Like, so we see this direction of travel, and I think that's very dangerous.
And people have oftentimes criticized us at the War Room, and other people discussing these technologies, saying, oh, well, that will never happen.
Five years ago, that was constant, right?
Even as the pandemic was ongoing, and you heard Klaus Schwab at the World Economic Forum waxing poetic about the rule of AI and brain chips and all of this.
But now, I mean, you already had a lot of programs, like BlackRock Neurotech being rolled out in universities and other experimental labs.
And so you had the first real BCIs, brain computer interfaces, coming online.
And then, well, not the first, but mass deployment, you would say, in the dozens.
And then now with Neuralink run by a guy who openly talks about how hundreds of millions of people will need to be chipped to keep up with the AI.
And then, at the beginning of the pandemic, you had Charles Lieber at Harvard.
And he was developing neural lace, a more subtle, injectable brain-computer interface.
And he got busted for, I think he was just taking money under the table from the Chinese.
And it was just reported that he's now in China developing his brain computer interfaces.
I mean, first of all, I'm just like, well, what counts as AI?
There's kind of a fuzzy boundary there.
Like, you know, Google search and like just, you know, computer vision systems that recognize handwriting, like these sorts of things, translation, I think are just pretty obviously useful and I wouldn't get rid of those.
But, you know, a lot of my hesitancy and skepticism here is not about like the technology itself.
I think AI can do all sorts of great things.
It has vast potential as a technology in lots of areas, like medicine is a classic one people talk about.
But it's about society's readiness to absorb these advances as fast as they're coming.
And it's about the way they are kind of being developed by tech billionaires who have very strange values, and the lack of accountability and transparency in the process.
It's just, we're rushing towards this thing, and it's completely insane to be racing so fast to build it with all the risks that it poses.
I'm here with David Krueger, CEO of Evitable and researcher at Mila in Montreal.
David, you and I have met a number of times in person in this crazy world of digital interaction: first in San Francisco at The Curve, at Lighthaven, and then again at the Future of Life Institute's event around the pro-human declaration and its composition.
So we both have at least some common touch points or reference points in this culture.
The rapid extermination narrative is really dominant.
I'm curious, with your thesis, do you get a lot of pushback?
Do you find yourself in a lot of arguments about this, or is it just a friendly exchange between gentlemen?
Yeah, just talking about existential risk, the risk of extinction generally, basically.
I think a lot of people were kind of like, well, I don't know.
At that time, there was a lot of skepticism about whether we would even get to AGI anytime soon, or ever; you know, we're going to get there eventually, in my mind.
And so we've got to grapple with these questions one way or another.
But yeah, it's gotten a lot better.
The researchers are much more willing to grapple with these risks these days.
But yeah, we were talking earlier about the other groups here and their ideologies.
So there's some that are very, very into going as fast as you can, and yeah, maybe humans will survive, maybe not.
But that's not the important thing here.
The important thing is, like, progress and technology.
And so, you know, those are arguments that are going to keep going, you know, indefinitely, I guess.
But I used to have more arguments about just like, is this like a thing at all that we should be worried about or that might happen?
And that's kind of, that's much more, it feels like a settled question these days.
I'm an argumentative guy, so I still have big fights.
Well, you know, that comment, you brought it up at the Bernie event: on the one hand, there isn't enough awareness around the problems of AI.
But on the other hand, over the years, it has exploded into public consciousness.
It's no longer the Terminator.
It's xAI.
It's Google.
It's Anthropic.
In that, do you find... I mean, you are interacting with people in these corporations.
A lot of them worry about some of the same things you do.
What's your read on that?
Like, you have people like Anthropic who are very intently communicating their worries for whatever reason.
Elon Musk is very much the same.
Do you find a lot of reception to your ideas there?
Yeah, you know, I mean, I always feel like I should talk to these people more because I think, you know, they're basically making a mistake in my mind by working at these companies and continuing to pursue the technology with full awareness of its risks because they do believe that it's just inevitable.
And, you know, what I've seen, like we were talking about, is just more and more awareness and concern over time.
It's very clear the direction of travel.
I just don't know if we'll get there fast enough.
But when you have hundreds of people who are worried about this and go and work at AI companies instead of going and doing what I'm doing, talking to the public, talking to policymakers, saying, hey, this is a crisis.
We should stop right now.
We could be, I think, raising the awareness so much faster if people working at these companies would say, you know what, I quit.
I don't want to work on this thing that could kill everyone anymore.
I don't want to work on taking everyone's job.
This is not okay ethically.
I think when I talk to people, this sort of stuff often resonates.
And I think a lot of people do feel a lot of doubt and guilt and uncertainty about their choices to work at those companies because of this stuff.
Yeah, it's kind of different for different people what they respond to.
So I believe in basically telling the truth, being straightforward about my concerns.
So I feel like I have to talk about extinction.
I have to talk about even the most sci-fi version, where the AI suddenly takes off, takes over.
Because I think that's real.
I think that's a thing that absolutely could happen.
I'm not saying it's going to.
I'm not sure.
The future is uncertain.
We don't understand this technology very well, but we can't rule that out.
It's actually shockingly likely in my mind.
But a lot of people are going to be more receptive to other things like gradual disempowerment or even just unemployment or the prospect that terrorists or school shooter types are going to be able to manufacture weapons of mass destruction in their garage.
And I think the most promising signs I see are just more people waking up and realizing how insane this situation is, how big and how urgent the risks are.
Because I think that's what it's going to take, right?
Like, to make something like that happen, we're going to have to start treating this like it's as big a deal as nuclear weapons, or bigger.
Well, you see, right now, I mean, at the moment, maybe by the time this airs, things will have changed quite a bit.
But at the moment, you have a response from the Trump administration to the dangers of AI.
You know, it's been all over the news today that CASI, the Center for AI Standards and Innovation, under the Commerce Department, will be the main interface between the tech companies and the U.S. government and will begin testing frontier models before they are deployed.
At least there's an agreement with, at the moment, Google, Microsoft, and xAI.
So, do you think... I mean, there are a lot of questions about it.
They've got a brand-new director at CASI.
Of course, the Commerce Department is run by Howard Lutnick, which is a questionable choice in a horrendous situation for many reasons, but do you see this as promising?
Because I don't think it's necessarily a coincidence that just last week you got Max Tegmark on here talking about this.
You got you and Tegmark in the Capitol talking about these problems and the lack of response, and then lo and behold, we now have one.
Does this seem promising to you, at least in the seminal or the nascent phase?
And I think probably this has more to do, you know, much as I'd like to feel responsible, with mythos and the cybersecurity threats from that model, which I think are huge and really caught most people by surprise.
And I wish people would stop being caught by surprise.
We know these things are coming down the pipeline.
But yeah, in terms of this response, testing is obviously a good thing.
I don't know if they're going to do the best job of it.
And, you know, it's not adequate, right?
And we don't know how to do testing well enough.
So, there's a lot of false solutions that people are offering and will offer to this problem.
And as somebody who's been in the field looking at the research for a long time, I can tell you we don't know how to test systems.
We don't know how to align them, give them our goals or our values.
And we also don't know how to tell what they're thinking and how they might behave.
So, people are working on all those things.
We make progress, but there's still open research problems.
And when you say we don't know how to test them, do you mean that the evaluations we see now, from the Center for AI Safety or Anthropic's internal testing, Apollo, people like this, that the measurements of the capabilities are not accurate, or do you mean something else by that?
It was called the AI Safety Task Force at the time.
Now it's the AI Security Institute in the United Kingdom.
Yes.
Looking at the state of play right now, the last couple model releases, they were like, we sort of tried to test it, but at the end of the day, we kind of just went with vibes because they felt like their tests weren't meaningful enough.
And they're maxing out the capabilities.
And then the other thing that I think is really important for people to realize is the AI now can tell that it's being tested quite reliably.
And so once the AI knows it's being tested, you have to wonder is it doing the right thing because that's what it wants to do or because it knows that's what we want it to do?
So, in essence, it seems like what you're describing is a situation where you can test the capabilities and get a surface level idea of what's going on, but beneath that surface, there's a whole lot happening in these systems that you just simply can't tease out.
Yeah, and the capabilities might be more than what we are able to observe and elicit.
That's another really important point.
People think that we can know what these systems are capable of, but there have been a lot of times when you just prompt the system a little bit differently, or you set up, you know, some other thing around it to help it do its job, and it can suddenly do the task way better.
So, we don't even fully know what the systems are capable of.
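To make the elicitation point concrete, here is a minimal, hypothetical sketch in Python; the `query_model` stub below is a stand-in for any real model API (an assumption for illustration, not any specific product or the methodology of any named evaluator). The point it illustrates is that a capability score is only a lower bound: the same system can fail or pass the same task depending on the scaffolding around it.

```python
# Hypothetical sketch: capability measurements are a lower bound,
# because elicitation matters. `query_model` is a toy stand-in, not a
# real API; it only "solves" the task when the prompt includes
# scaffolding, mimicking the gap between naive and careful evaluation.

def query_model(prompt: str) -> str:
    if "step by step" in prompt:
        # With scaffolding, the latent capability shows up.
        return "17 * 23 = 17 * 20 + 17 * 3 = 340 + 51 = 391"
    # Without it, the same "model" looks incapable.
    return "I'm not sure."

TASK = "What is 17 * 23?"

naive = query_model(TASK)
scaffolded = query_model(TASK + " Think step by step.")

print("Naive prompt:     ", naive)       # appears to lack the capability
print("Scaffolded prompt:", scaffolded)  # the capability was there all along
```

A real evaluation would sweep many prompt and scaffolding variants against a real model, but the design point is the same: a failed test only shows that this particular elicitation failed, not that the capability is absent.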
You know, I read your recent essay, kind of the retrospective, a few musings post-publication of Gradual Disempowerment, and I was very happy that you threw me a bone at the very end.
The very last point being that maybe human beings will become dumber and dumber; that you don't think that's really all that big of a deal, but, hey, might as well mention it.
Because, you know, people make the analogy with like calculators, where it's like, I think it's good that people can do arithmetic, but like we don't have to be that good at it anymore because we have calculators.
I mean, that's why they're kicking our asses in the universities.
Well, you know, I just couldn't let you go without getting that one last jab in.
On the one hand, I appreciate you throwing us a bone on the inverse singularity thesis that as humans get dumber and dumber, the machines will seem smarter and smarter.
But in general, you know, again, just to reiterate, I think that your work on just AI risk in general has been very, very persuasive, very, very thorough.
Even if I don't know that we'll be able to do it, I would love to see it all shut down too, maybe for different reasons.
And yeah, I really, really appreciate everything you've done.
And once again, War Room Posse, in case you have forgotten, the central banks are buying gold at record levels.
That's why major firms like Vanguard and BlackRock hold significant positions in gold.
And that's why I encourage you to consider diversifying your savings with physical gold from Birch Gold Group.
Think of physical gold as being analogous to a biological brain, and think of digital currency as analogous to AIs.
The AIs take over.
The biological brain plummets.
What you need is gold, physical gold.
So text Bannon to the number 989898.
That's Bannon to the number 989898 and learn how gold can protect your assets.
That is Bannon to the number 989898.
Now, War Room Posse, as I see you off here, I want to talk, just for a moment, about a concept of gradual disempowerment that goes to mythological levels.
That is the idea of Moloch, the analogy for systems that are either completely out of human control or set against human values.
This was an idea first brought up by Scott Alexander of Slate Star Codex, taken from the poem Howl by Allen Ginsberg, and however much you think that Allen Ginsberg was a degenerate weirdo, I think it is undoubted that his passage on Moloch in Howl is as relevant to our society today as it was then.
And hey, maybe it takes a degenerate to truly understand the essence of a Canaanite demon and its machinic counterpart.
What sphinx of cement and aluminum bashed open their skulls and ate up their brains and imagination?
Moloch, solitude, filth, ugliness, ash cans and unobtainable dollars, children screaming under stairways, boys sobbing in armies, old men weaving in the parks.
Moloch, Moloch, nightmare of Moloch, Moloch the loveless, mental Moloch.
Moloch, the heavy judger of men.
Moloch, the incomprehensible prison.
Moloch, the crossbones, soulless jailhouse, and congress of sorrows.
Moloch, whose buildings are judgment.
Moloch, the vast stone of war.
Moloch, the stunned government.
Moloch, whose mind is pure machinery.
Moloch, whose blood is running money.
Moloch, whose fingers are ten armies.
Moloch, whose breast is a cannibal dynamo, Moloch whose ear is a smoking tomb, Moloch whose eyes are a thousand blind windows, Moloch whose skyscrapers stand in the long streets like endless Jehovahs, Moloch whose factories dream and croak in the fog,
Moloch whose smokestacks and antennae crown the cities, Moloch whose love is endless oil and stone, Moloch whose soul is electricity and banks, Moloch whose poverty is the specter of genius, Moloch whose fate is a cloud of sexless hydrogen, Moloch whose name is the Mind,
Moloch in whom I sit lonely, Moloch in whom I dream angels, crazy in Moloch, sucker in Moloch, lacklove and manless in Moloch.
Moloch who entered my soul early, Moloch in whom I am a consciousness without a body, Moloch who frightened me out of my natural ecstasy, Moloch whom I abandon.
Wake up in Moloch, light streaming out of the sky, Moloch, Moloch, robot apartments, invisible suburbs, skeleton treasuries, blind capitals, demonic industries.
Pavements, trees, radios, tons, lifting the city to heaven, which exists and is everywhere about us.
Visions, omens, hallucinations, miracles, ecstasies, gone down the American river, dreams, adorations, illuminations, religions, the whole boatload of sensitive bullshit.
Breakthroughs over the river, flips and crucifixions, gone down the flood, highs, epiphanies, despairs, ten years' animal screams and suicides, minds, new loves, mad generation, down on the rocks of time.