All Episodes
Aug. 22, 2024 - Conspirituality
01:15:57
220: Swifties for Trump (and Other AI Lies)

Did you hear that Taylor Swift is supporting Donald Trump for president? That’s right: last week, the former president shared photos of fans rocking “Swifties for Trump” t-shirts on Truth Social. Trump seemed pleased, replying to his online following of 24 billion people: “I accept.” Yet something seemed off. Just like photos of Trump surrounded by seven-fingered Black Americans smiling on city stoops, these Swiftie endorsement photos were AI generated. As it turns out, AI is playing a more insidious and dangerous role in this year’s election—and in the conspiracy theory industry—than ever. The line between fact and fiction has long been blurred, but we’re entering a new reality where distinguishing between those two might prove impossible. This week, we look at the stakes of the first AI-generated election in America—and wonder how much worse it can get, even in three short months.

Show Notes
Trump Promotes A.I. Images to Falsely Suggest Taylor Swift Endorsed Him
Christian Nationalists Are Opening Private Schools. Taxpayers Are Funding Them.
Fake Photos, Real Harm: AOC and the Fight Against AI Porn
Propaganda, foreign interference, and generative AI
AI Poses Risks to Both Authoritarian and Democratic Politics
Election disinformation takes a big leap with AI being used to deceive worldwide
B'Tselem report summary: "Welcome to Hell: The Israeli Prison System as Network of Torture Camps", August 2024
B'Tselem report: "Welcome to Hell: The Israeli Prison System as Network of Torture Camps", August 2024
Detention and alleged ill-treatment of detainees from Gaza during Israel-Hamas War
U.S. decries reported sexual abuse of Palestinian prisoners after graphic video aired on Israeli TV
US approves sale to Israel of $20 billion weapons package | Reuters
A record share of US electricity comes from zero-carbon sources - but more work is needed
China could exceed renewables generation target of 33% by 2025 | S&P Global Commodity Insights
Israeli Government Pays $2M for AI Influence Campaign
Israel-Funded Disinfo Campaign Targets US Lawmakers

Learn more about your ad choices. Visit megaphone.fm/adchoices


You don't need to create the fake video for this tech to have a serious impact.
You just point to the fact that the tech exists and you can impugn the integrity of the stuff that's real.
Fake news, anybody?
Yeah, that's great.
It's like Bentham's Panopticon prison.
Like if everyone thinks they're being surveilled, but they can't see the prison wardens,
the prison wardens don't even have to be there, right?
If you're a fan of workplace comedies, like The Office, or satire like The Onion,
then I have a podcast that I know you'll love.
It's called Mega.
Mega is an improvised satire from the staff of a fictional megachurch.
That's the premise.
Each week, the hosts, Holly Laurent and Greg Hess, are joined by guests, people like Cecily Strong or Jen Hatmaker, to portray characters inside the colorful world of Twin Hills Community Church, which they describe as a megachurch with a tiny family feel.
The result is a sharp-witted and hilarious look into the world of commercialized religion using humor to cope with the frightening amount of power that church and religion have.
So I very much recommend checking out Mega's episodes, like the one with Saturday Night Live's Cecily Strong playing Cece String, a hilarious character who's fresh out of jail, and also comedian Jason Mantzoukas.
You may find yourself dying of laughter and perhaps inspired to take an improv class yourself.
Mega is able to keep you laughing as you think and reflect about the world we live in.
You can find Mega on Apple Podcasts, Spotify, or wherever you listen to podcasts.
Comedy fans, listen up.
I've got an incredible podcast for you to add to your queue.
Nobody listens to Paula Poundstone.
You probably know that I made an appearance recently on this absolutely ludicrous variety show that combines the fun of a late night show with the wit of a public radio program and the unique knowledge of a guest expert who was me at the time, if you can believe that.
Brace yourself for a rollercoaster ride of wildly diverse topics, from Paula's hilarious attempts to understand QAnon to riveting conversations with a bonafide rocket scientist.
You'll never know what to expect, but you'll know you're in for a high-spirited, hilarious time.
So this is comedian Paula Poundstone and her co-host Adam Felber, who is great.
They're both regular panelists on NPR's classic comedy show.
You may recognize them from that show, Wait, Wait, Don't Tell Me.
And they bring the same acerbic, yet infectiously funny energy to Nobody Listens to Paula Poundstone.
When I was on, they grilled me in an absolutely unique way about conspiracy theories and yoga and yoga pants and QAnon, and we had a great time.
They were very sincerely interested in the topic, but they still found plenty of hilarious angles in terms of the questions they asked and how they followed up on whatever I gave them, like good comedians do.
Check out their show.
There are other recent episodes you might find interesting as well, like hearing crazy Hollywood stories from legendary casting director Joel Thurm, or their episode about killer whales and killer theme songs.
So Nobody Listens to Paula Poundstone is an absolute riot you don't want to miss.
Find Nobody Listens to Paula Poundstone on Apple Podcasts, Spotify, or wherever you listen to your podcasts.
Hey everyone. Welcome to Conspirituality, where we investigate the intersections of conspiracy
theories and spiritual influence to uncover cults, pseudoscience, and
authoritarian extremism.
I'm Derek Beres.
I'm Matthew Remski.
I'm Julian Walker.
You can find us on Instagram and Threads at ConspiritualityPod.
You can also access all of our episodes ad-free, plus our Monday bonus episodes on Patreon at patreon.com slash conspirituality.
You can also access just our Monday bonus episodes via Apple subscriptions.
And as independent media creators, we really appreciate your support.
Thank you.
Did you hear that Taylor Swift is supporting Donald Trump for president?
Amazing.
That's right.
Last week, the former president shared photos of fans rocking Swifties for Trump t-shirts on Truth Social.
Trump seemed pleased, replying to his online following of 24 billion people.
I accept.
Yet something seemed off.
Just like photos of Trump surrounded by seven-fingered Black Americans smiling on city stoops, these Swiftie endorsement photos were AI generated.
As it turns out, AI is playing a more insidious and dangerous role in this year's election and in the conspiracy theory industry than ever.
The line between fact and fiction has long been blurred, but we're entering a new reality.
Where distinguishing between those two might prove impossible.
This week, we look at the stakes of the first AI-generated election in America and wonder
how much worse it can get even in three short months.
This week in Conspirituality.
Well, a few months ago, we covered the Project 2025 chapter on the Department of Education
over in our Patreon bonus series.
Now, besides the right's goal of completely dismantling that administration, and it is in that text, Julian and I touched on the topic of school choice, which in not-so-coded conservative Christian nationalist circles really just means using taxpayer dollars to pay for religious schooling.
There are two main ways the Right accomplishes this.
One is through Education Savings Accounts, or ESAs; the other is through school vouchers.
ESAs are kind of like FSAs, which allow you to contribute pre-tax dollars to healthcare expenses.
But in this case, you can use untaxed money to pay for private schooling.
School vouchers let people use taxpayer dollars earmarked for public schooling to pay for private schooling.
So it's originally earmarked for public schooling, but then you can choose where to send your children.
Now, both of these mechanisms function slightly differently, but both are ways that the right has funneled billions of dollars into private Christian schools with public money.
Now, according to new reporting by Mother Jones senior editor and former guest of this podcast, Kiera Butler, it's way worse than we think.
So Butler investigates Dream City Christian Academy, which is part of a Phoenix, Arizona-based megachurch that draws a weekly attendance of 21,000 faithful.
That's amazing.
Yeah.
The school boasts roughly 800 students, and it is a K-12 academy, and it is one of 41 schools in Turning Point USA's Turning Point Academy program.
They have a lot of fucking names for all these things.
Now, that program bills itself as an educational movement that exists to glorify God and preserve the founding principles of the United States through influencing and inspiring the formation of the next generation.
Now, perhaps unsurprisingly, they also rail against the woke agenda in public schools in all of their literature.
Because they're totally opposed to indoctrinating children.
Absolutely!
That's in their mission statement, Julian.
I just read it.
So every family enrolled in that academy receives about 90% of public school funding, or $7,400 per year, to be able to attend a private Christian school.
Tuition for this dream academy ranges from $10,450 to, what a number, $13,999 per year.
$13,999 per year.
That's like a car sales tag.
Yeah, it's actually more like a coaching mastermind program price.
Yeah, yeah, right.
I don't know.
I'm not a parent.
You guys are.
I can't imagine paying over $10,000 a year to send my kids to kindergarten.
I don't know.
I'm a public school kid.
Maybe that's just me.
No, well, you raise a really good point.
I cannot fathom on our income doing any kind of private education at all.
And the prices here in Canada are similar.
So yeah, you're right.
Well, families offset up to two-thirds of that cost to attend this private religious school with tax dollars.
Now, the academy itself also received a million dollars in voucher money last year.
But here's the real kicker.
Initial estimates on the sort of tax burden that this would have in Arizona put this at $65 million per year.
So that's $65 million that they thought were going to go from taxpayer money into these religious institutions.
But it turns out it's more like $940 million of taxpayer money being shuffled into private Christian schools just in Arizona.
And that's just one of 29 states that have similar private school voucher programs.
So this is another example of how Republicans are really, really good at not taxing and spending and blowing out the budget, right?
Oh, absolutely.
Fiscal conservatives, baby.
All the way.
I mean, it's just slightly more.
Just 65 to 940 million difference.
Yeah, just in the article itself, I think it's roughly a 1,400% increase that they were off by, right?
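A quick back-of-the-envelope sketch of the figures quoted above (not from the reporting itself, just arithmetic on the numbers mentioned in this segment):

```python
# Sanity check on the voucher figures quoted above, using only the numbers
# mentioned in this segment.
voucher = 7_400                              # public money per enrolled student, per year
tuition_low, tuition_high = 10_450, 13_999   # Dream City tuition range

# Share of tuition covered by public money: roughly half at the priciest tier,
# about 70% at the cheapest, which is where "up to two-thirds" comes from.
print(f"covers {voucher / tuition_high:.0%} to {voucher / tuition_low:.0%} of tuition")

estimate, actual = 65_000_000, 940_000_000   # Arizona: initial estimate vs. reported cost
print(f"~{actual / estimate:.1f}x the estimate, "
      f"a ~{(actual - estimate) / estimate:.0%} increase")   # ~14.5x, roughly 1,400%
```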
Now, the right is very vocal, as you just flagged, Julian, in demanding that their First Amendment rights be in place in so many different ways, but they somehow fail to remember that that same amendment creates a separation of church and state, which also implies a separation between taxpayer money and the church.
Yet, on top of all of this, Butler reports, quote, a prerequisite for students and their families to attend some of the schools that currently receive voucher money is that they accept Jesus Christ as their Lord and Savior.
Which all really makes you wonder what this choice is actually about.
This past week saw Harris County, Texas District Attorney, Kim Ogg, give a press conference
wrapping up an investigation into election irregularities.
As of January 2023, 22 Republican candidates who lost in the midterm elections had filed
lawsuits alleging election misconduct.
That's a lot of candidates.
But Harris County is traditionally a Democratic Party stronghold, and it's the most heavily populated county in the state.
And these were mostly elections for appointments to local courthouse positions.
An over 18 month long investigation into the 2022 election in Harris County showed no evidence whatsoever of election fraud, misconduct or tampering of any kind.
But there were still actually problems with this election.
They came down to incompetence and greed, which is often a good bet when you have a conspiracy theory being proposed.
It may just be incompetence and greed.
Right.
Polling places were not well prepared.
Equipment malfunctioned.
They opened late or they closed early.
Turns out the Houston Astros had won the World Series the day before.
So some poll workers had been celebrating late into the night.
But most importantly, several polling places ran out of paper ballots on election day.
And here's where it gets interesting.
So it turns out that Darrell Blackburn, who was the guy in charge of deciding how many ballots each location got and then delivering those ballots, he just got it wrong.
Investigators found no evidence that any specific area or demographic, any polling place, any political party was unequally disadvantaged by his incompetence.
This incompetence, though, was a function of the fact that he was punching in and out and getting paid as if he was a full-time government employee while also commuting to another full-time job, and he was collecting both paychecks.
All right, so that's just sort of an oddity.
But that didn't stop District Attorney Kim Ogg from giving a quite grandstanding press conference about the importance of election integrity.
It didn't stop Texas Republicans from seizing this as an opportunity, and this is during 2023 when all of the lawsuits were filed, to then pass a law dissolving the Harris County Election Office itself and transferring its duties overseeing elections to the county clerk and tax assessor.
And this is a power grab similar to what has been reported in Georgia and in Florida, and this gives the Texas Secretary of State leeway to intervene in Harris County elections.
So it's an example of how, all the way up and down the ballot, GOP candidates who lose now automatically file lawsuits alleging some kind of fraud.
And that then kicks a whole set of other gears into motion that have been put in place for just this purpose.
So with the Trump crime syndicate having had another four years to get all of their players in place, I'm going to go ahead and predict right now that the results of the 2024 election will probably be the most delayed in our history as layers of shenanigans are enacted.
So get ready: alternate electors, recounts, election officials and secretaries of state who've gotten themselves into those positions specifically to help Trump, as well as Leonard Leo-groomed judges who are more than ready to lean hard to the right.
We're recording Tuesday.
I know a lot of people, including myself, are glowing with the first night of the festivities.
The Soul Children of Chicago, like, made me burst into tears with the anthem.
AOC killed it.
Biden passed the torch.
Warnock.
Hillary says, I almost smashed the glass ceiling, but I know Kamala will do it.
The crowd chants, lock him up, and she smiles and nods.
Kamala makes a surprise appearance in an Obama-style tan suit.
All very solid DNC theater.
So, my apologies for harshing the vibe here a little bit with some op-ed thoughts about the attention economy relationship between conspiracy theories, conspiracy theory debunking, and in real life disasters.
We know, we've covered this for years now, that persecution and imprisonment fantasies are a staple of the MAGA Imaginarium.
During COVID, they went on about concentration camps.
If Harris is elected, they say, the country will go communist.
Or if you're James Lindsay, it'll go fascist-communist.
You'll have your rights stripped away, your accounts frozen, they'll take your guns, they'll lock your phone remotely, you'll be forcibly enrolled in healthcare, you'll be jabbed with poison, you'll be locked up, you'll be force-fed tofu and crickets and special pills that will trans you like the frogs.
Can I just say here too that James Lindsay kind of has the Turducken theory of authoritarianism, right?
It's a communism inside a fascism inside a communism.
Got these very strange ideas about how this stuff works.
Yeah, so the Harris transition team, according to this crowd, is conspiring and plotting.
They're twiddling their fingers and twirling mustaches.
And, you know, our job is to debunk all of that because we know that their fever dreams are really dangerous.
They have nothing to do with the boring liberal bureaucracy that Harris will probably preside over if she wins.
Because underneath the vibes, Harris' world is probably pretty mundane.
She's talking about price gouging, housing, medical debt, reversing Trump's corporate tax cut.
There's nothing dramatic or nefarious or deceptive going on.
No one's being abused.
She couldn't also be a top executive for a government that helps fund a secretive network of torture prisons in a dysfunctional client state, could she?
Well, on August 7th, the Palestinian-Israeli human rights organization, B'Tselem, released a report called Welcome to Hell, detailing the post-October 7th treatment of Palestinian prisoners in Israeli prisons through interviews with 55 former prisoners, most of them held and released without trial.
Just one sentence from their executive summary on what they describe as systemic institutional policy.
Frequent acts of severe arbitrary violence, sexual assault, humiliation and degradation, deliberate starvation, forced unhygienic conditions, sleep deprivation, prohibition on, and punitive measures for, religious worship, confiscation of all communal and personal belongings, and denial of adequate medical treatment.
These descriptions appear time and again in the testimonies in horrifying detail with chilling similarities.
So, underneath the carnage, now there's a polio outbreak, is a story of systematic torture and abuse, which B'Tselem notes has impacted virtually every Palestinian family, with 800,000 incarcerations going back to 1967.
On socials, it's all reinforced by CCTV footage from inside one of the prisons showing groups of guards raping a male prisoner behind a wall of riot shields.
And this is widely broadcast within Israel, prompting a debate within the Knesset as to whether prison rape is permissible.
So, how has the Biden-Harris administration responded?
State Department official Matthew Miller says, we have seen the video and reports of sexual abuse of detainees are horrific, adding that the reports should be fully investigated by the Israeli government.
There ought to be zero tolerance for sexual abuse, rape of any detainee, period.
And then Karine Jean-Pierre gave a similar statement from the White House.
As they were speaking, however, Secretary of State Blinken approved an additional almost $20 billion in sales of military hardware to Israel.
So if Israel is committing war crimes, as many international observers say they are, what does it mean that the Biden-Harris administration is funding them?
As I'm writing this, he's fresh out of a three-hour meeting with Netanyahu where we see something very familiar.
He comes out of the meeting and he says, Netanyahu has accepted the ceasefire deal, but then his office releases a statement that doesn't use the word ceasefire at all.
Alright, why am I bringing it up?
Because I see and feel a troubling symmetry.
I'm not making a false equivalency here.
It's a variation on Klein's Mirror World that I can't quite wrap my head around.
In the disinformation sphere, a lot of us do great work in tracking down and understanding the absurdity of conspiracy theories promoted by right-wing wackos.
We know that they're totally harmful, we show that.
We also know that they are ephemeral and the weavers of the theories move on to other pastures.
But here we have a situation in which morbid fantasies that they deploy to generate moral panics around things like murdered children are actually manifest in real time.
Uh, this is ongoing for 10 months now and our own people are facilitating it, borderline denying it, definitely not owning it.
So it feels to me like this is a critical moment in center-left media to be sober about our probably improving prospects, because jumping on the vibes bandwagon when there are real questions left hanging might show some naivety when it comes to whether core moral issues are actually faced directly.
I think what's at stake in my view is not just discussions over horse trading or just the way politics works in this crazy world.
I feel like a kind of hype and blind-spotting can continue to degrade institutional trust, and I believe that that's at the heart of conspiracy theory problems.
No one needs to invent conspiracy theories about the powerful if we're all honest about what the powerful are up to.
Yeah, that's all really well said, Matthew.
I mean, the only thing I would add is it's sadly, tragically, awfully characteristic of the prison systems widely across the region and in other countries, including Saudi Arabia, which is America's other big ally in the region.
Yeah, I guess, you know, where people in favor of a Harris-Walz presidency can apply pressure is where they have capital.
And that's with Israel, right?
When you say that, the Biden administration's response is to say that Israel should investigate this.
Yeah, while they're debating in the Knesset whether prison rape is acceptable.
Yeah, I saw that.
I saw that moment.
It's absolutely grotesque.
What would you rather the response would be?
Would it be calling for, like, a U.N. investigation, or America sending their own observers?
Because it seems that you're implying that because the U.S. is supporting Israel militarily right now, as they enact this absolutely appalling genocide in Gaza, that the U.S. is then also sort of tacitly supporting what they're doing in their prison system in Israel.
I think it's hard to disambiguate them.
And I think UN observation would be great.
American observers, I don't know how all of that works.
I do know that the language of ceasefire has changed to the language of arms embargo.
And that's now supported by seven major unions: Association of Flight Attendants, American Postal Workers, Painters and Allied Trades, United Auto Workers, National Education Association. Like, there's six million union members who have said, oh, yeah, actually, ceasefire is not really doing the job anymore.
It's been semanticized into nothing.
And so arms embargo is reasonable pressure.
So I think there's a range of solutions.
But I mean, I don't think we're really seeing any concrete movement in that direction is my main point.
Yeah.
And so it sounds like you saw Biden doing his incredible, incredible speech last night, which was a little bit all over the place, I thought.
But I did catch that moment in which he said, you know, those protesters out there have a point.
Did you see that?
I didn't watch the entire speech.
So I missed that comment.
Yeah.
He said, listen, folks, the protesters out there have a point and we need to stop this war right now.
You know, and then, of course, bring the hostages home and the various ways of couching it so that it, you know, tries to please everyone in the midst of something really, really difficult.
When you mentioned the prison videos, it made me think of something. I spent part of the weekend reading over Genocide Watch.
This whole situation made me think of the Uyghurs, because that's 10 years into detention camps now, and there's over a million people, and there have been reports coming from there of just the horrific conditions.
There's an open shoot and kill policy, so if someone tries to run, they could just shoot, there's no problems with that.
And in this sort of situation, obviously we're not supplying arms to China, we're supplying them to Taiwan predominantly.
Both the US and Canada have pretty robust trade deals going with China and we owe a lot of debt to China.
So this just makes me think about what this conversation entails is speaking up for the people who can't and where our money goes.
And I'm wondering when you weigh it out against these other, and we can also talk about genocides in Africa as well.
But specifically the Uyghur one has bothered me for a long time that it hasn't reached any sort of public consciousness.
Yeah, I think it's all bad.
I think that when circumstances come to constellate around a particular kind of political moment and a moral test is presented, that's kind of what comes to the fore.
It's kind of like what you said last week about who was ready for the viral tweet about J.D.
Vance.
There's huge Palestinian-American enclaves in America.
They're a voting bloc.
Michigan hangs in the balance, perhaps, of the Arab-American vote.
It's very politically, you know, at the tip, and people have influence with this.
But I'm sure, you know, activists always know that they are spread absolutely thin.
But I remember the people that I used to get arrested with, we were protesting something different every day.
So I'm sure attention is being apportioned out according to what its return is assessed to be.
Well, I am somewhat heartened by the evolving quality of the official statements being made by Kamala and now by Biden as well.
Each time it seems like they go a little further in terms of condemning what's
happening in Gaza, in terms of calling for an end to the war now.
If you're a fan of workplace comedies like The Office or satire like The Onion,
then I have a podcast that I know you'll love.
It's called Mega.
Mega is an improvised satire from the staff of a fictional megachurch.
That's the premise.
Each week, the hosts, Holly Laurent and Greg Hess, are joined by guests, people like Cecily Strong or Jen Hatmaker, to portray characters inside the colorful world of Twin Hills Community Church, which they describe as a megachurch with a tiny family feel.
The result is a sharp-witted and hilarious look into the world of commercialized religion using humor to cope with the frightening amount of power that church and religion have.
So I very much recommend checking out Mega's episodes, like the one with Saturday Night Live's Cecily Strong playing Cece String, a hilarious character who's fresh out of jail, and also comedian Jason Mantzoukas.
You may find yourself dying of laughter and perhaps inspired to take an improv class yourself.
Mega is able to keep you laughing as you think and reflect about the world we live in.
You can find Mega on Apple Podcasts, Spotify, or wherever you listen to podcasts.
A couple weeks ago, right-wing pundit and conspiracy theorist Dinesh D'Souza, whose 2022 film 2000 Mules has really helped keep the "2020 election was stolen" narrative alive, tweeted out a photo to his 4.2 million Twitter followers on Saturday, August 10th.
The photo is of Kamala Harris and Tim Walz getting off their plane for a rally at an airplane hangar in Detroit.
D'Souza outlined two parts of the plane in red, and that highlights the reflection off the plane.
And the caption reads, check the reflection in the plane.
Does this look like a real picture to you?
The following day, Donald Trump truthed this conspiracy theory to however many people actually use Truth Social.
Is that the word for tweeted there?
Truthed?
Oh, yeah.
They truth it out?
Okay.
Oh, they send out truths?
Oh, Jesus Christ.
From the heart.
Yeah, right.
Surprisingly, neither D'Souza nor Trump took curved surfaces into account for this photo, because you couldn't see the people in the reflection off the bottom of the plane.
They were certain the image of 15,000 people cheering for Harris and Walz was manipulated.
Or, more likely, they're just trolling.
Most likely, in my opinion, is that D'Souza was trolling and Donald Trump actually believed it.
Yeah, I also saw some discourse around the VP's plane being at the wrong angle to reflect the crowd as well.
I think it's worth noting, too, that D'Souza's thick red Sharpie outlines of the plane's fuselage are, like, really reminiscent of other markings we see these days used by, like, transvestigators.
They're drawing circles on Michelle Obama's crotch or recently Andrew Tate's Speedos because apparently either he's really tucking well or he's trans as well.
The difference is that D'Souza is circling something to indicate digital fakery, and the Transvestigators are circling what they say is physical fakery.
But both actions point to the red sharpie person as having a kind of mystical intuition.
A person who can see beyond the appearances to the truth that Harris is a con artist, Michelle Obama is a man.
They're also intervening on an endless stream of images, which I want to talk about a little bit later.
It's some small rebellion against passivity.
And I think it probably feels really graphic.
I just want to say I know something else that is really smooth and shiny.
And it's the brain of Dinesh D'Souza and Donald Trump.
I mean, whatever the claim is, it's always the facts and evidence that define the difference between an actual conspiracy being exposed versus a baseless conspiracy theory.
So finding seemingly meaningful anomalies in photo and video analysis has always been part of the conspiracy theory shtick.
The mysterious umbrella man at the JFK assassination.
What was he doing?
Or the little explosive puffs circled on the collapsing Twin Towers of 9-11.
Must be evidence that they were pre-wired with explosives.
The conspiracist always claims to be able to see and interpret what everyone else supposedly misses in plain sight.
Yeah.
And in the past, photos and video have been taken as undeniable documentation of reality.
But with emerging generative AI, as we're discussing today, we have increasing, legitimate concerns about whether or not the documentation can be trusted.
And that cuts both ways, right?
Yeah.
Right.
These guys love Sharpies.
I mean, Trump's manipulation of the hurricane a few years ago comes to mind.
Yeah, that was great.
And all of this leads to a question humans have had to grapple with since the term artificial intelligence was coined by computer scientist John McCarthy in 1956.
Which is, what is reality anyway?
Or, more to our purposes today, how is AI going to spread FUD, which is fear, uncertainty, and doubt, about the upcoming election and beyond?
Now, there's always precedent.
When the Access Hollywood tape came out, Trump balked, saying that maybe that's an AI-generated voice.
Now that one never really stuck, and the doctored plane image didn't really either, except for the most hardcore followers.
But there are plenty of instances where it's not so easily laughed off, and I have a feeling it's going to play heavily in our online discourse leading into November.
So, a few other examples of the sort of FUD that AI is spreading right now, which can be very malicious.
One is deepfake porn.
Now, research from cybersecurity company DeepTrace Labs found that 96% of all deepfake videos on the internet are non-consensual porn that feature women.
Like real people.
Real people.
Often real people, or some sort of... not necessarily, it's not always someone in high school fucking with another girl that they want to get back at with revenge porn or something like that.
It could just be a woman, but it is always them, the women, submitting to the men in some capacity, and very often their heads are put onto other bodies.
I mean, AOC was victim of that earlier this year.
The point you're making is not that the use of their face is non-consensual, but that the porn itself depicts a non-consensual act.
Both.
Exactly.
Exactly.
It's both of trying to generate particular women, but also submissive women, like kind of create your own reality sort of vibe.
So that's all in the 96% number.
And such deepfakes often proliferate on Twitter, where they're slow to remove them, if ever.
And this also plays into a larger AI narrative, since a predominant amount of chatbot usage is sexual in nature, and that's almost always guys who are looking for some sort of companionship.
Yeah, this blending of things is really disturbing to me.
There's two categories.
So we have deepfake or revenge porn featuring, like, actual people, and then there's fully generated AI women who are, like, composites.
Like, they're everyone but no one, but which can only, like, contribute to the misogyny of, like, impossible standards that everybody's living with anyway.
Yeah, exactly.
Now another usage of AI is in propaganda.
So we flagged Taylor Swift off the top, not someone we often talk about on this podcast, but her influence is vast.
And so, you know, any sort of attention economy boost that Trump can grab by using her is why he would say, I'll take that endorsement.
But generative AI can produce overwhelming amounts of content that floods social media in general.
So Brookings has speculated that Russia and China have participated in such campaigns to shore up overseas support for their respective governments using AI-driven influence campaigns.
Politicians in Moldova, Slovakia, and Bangladesh have all recently been caught using deepfakes
to spread misinformation about their opponents.
And this is a potential tactic when it comes to American politicians spreading misinformation, and at least so far it all points to the right, who are going to utilize and weaponize this.
And one other big one, obviously, that kind of plays into the last two, is election disinformation.
And we shouldn't overlook the primary goal of Trump's crowd size disinformation here with that photograph.
I mean, we can laugh it off about how he's always focused on crowd size.
But the truth is, if he can convince his base that the Harris campaign isn't really drawing that size of a crowd, it's much easier to convince them that the election was stolen, because how can a campaign with no real enthusiasm actually translate into that many votes?
Now, just this morning, I was listening to Morning Joe, and they made the point of saying that apparently inside the Trump campaign, his biggest concern right now is how big of an audience Harris is going to get on Thursday night when she speaks to accept the nomination, and if it's going to be bigger than his.
Of course it is.
That is where that campaign's mind is at, or at least Trump's mind is at.
We're going to get into why this matters and speculate a bit on the role of AI, but let's define some terms because I think it's important to understand what we're talking about when we say AI.
McCarthy initially coined the term to describe the concept of creating machines that could simulate aspects of human intelligence.
Like problem solving, learning, reasoning, understanding language, decision making, learning from experiences, and adapting to new situations.
Now he envisioned a future in which computers could perform tasks that require human-like cognitive functions, essentially making computers think in ways that are similar to us.
And let me just ask, achieving consciousness is kind of an elusive threshold in this whole discussion?
I mean, I couldn't get away from my nerdy obsession with philosophy of mind without inserting a point here.
I would argue, you know, along with philosopher Dan Dennett and some others in the field, that consciousness is better understood as perhaps an umbrella term for several different activities like sensory processing, memory, cognition, language, things like that.
Rather than this sort of reified quality or substance or immaterial mind thing, right?
In fact, I would say what makes human consciousness besides at the moment still being more complex and flexible than anything machines can do, most different from machine processing is its embodied and relational emotional nature, which to me is really
fascinating.
Yeah, really simple heuristic is mind is what brain does when talking about this.
And the emotional nature here is key in this conversation because, sure, science fiction fans and science fiction writers often position AI as emotional.
Indeed, I'm sure many people think that's where it's heading.
I flagged it before, the I'm your friend chatbot, whether it's companionship or sexual, whatever it is, is indicative of this.
There are a number of startups who are trying to exploit this.
But most serious researchers recognize that mimicking human consciousness and its wide array of emotional responses isn't really the goal.
It's more about accomplishing superhuman tasks of speed and accuracy.
I guess the closer those superhuman tasks get to the sort of qualities of creativity or agency or the ability to make decisions or to innovate or write plays, that's where a lot of our existential anxiety comes in, right?
Sure, and I fully believe that we're three to four years away from a full-length feature film being produced by AI with prompts, and it's going to be a blockbuster.
So, you know, it's a very challenging one, especially as a creative, as a writer, who pours all their time into this.
Yeah, let's talk about it.
Alright, do you have somebody to talk to, Derek, about how that's going?
Oh yeah, let me pull up my phone.
There is a credible debate that a lot of art will be created through this, and we have to see.
And, you know, it's split within the artistic community as well.
Some artists fully embrace AI.
I'll get into a little bit how I use it.
But some people reject it, and that's fine.
But yeah, it would be interesting for something non-emotional to be able to evoke so much emotion in people, and we are going to have to grapple with that.
So AI is used in pretty much every facet of our lives already.
Talk to Siri, talk to Amazon Echo, self-driving cars.
I have a 2018 Subaru Forester, which is equipped with EyeSight, which is a very rudimentary sort of autonomous function that warns me of things in the area, but that's used with AI.
Any chatbot you talk to on a retail site or actual chatbots like Claude or Perplexity.
Your social media algorithm is AI-driven.
Netflix, YouTube, Spotify.
Your bank probably has a fraud detection service.
My favorite uses are in healthcare, where AI is being used in drug and virus discovery right now.
People kind of talk about how climate change is coming the same way they do with AI, but it's all already here.
It's everywhere.
And what we're talking about today is more the panic, which you flagged, Matthew, of AI, which is something sci-fi writers imagined before McCarthy coined that term.
They'd been warning about robots taking over the planet for generations now.
And there is good reason to fear AI, especially if you work in the tech industry like I do, besides my work on this podcast.
Dell just fired 20,000 employees as it quote, unquote, unlocks modern AI.
Cisco is laying off roughly 4,000 people for that reason.
And they're focusing on cybersecurity by using AI.
Now, as of July of this year, 100,000 tech industry workers have reportedly lost their jobs because of AI.
Now, this isn't saying that every one of those workers will be replaced by a chatbot.
Sometimes the company is just switching their corporate focus, and in the process they remove departments that no longer fit that focus.
Let me just dip over into Matthew's lane here and say that the real danger of AI is not that your digitally configured, algorithmically driven companion through your phone is going to get envious and jealous of you and start hunting you down.
It's capitalism.
Yeah, I want to say something about that in a bit.
Derek, I'll just start by just admiring your stoicism.
Derek, you're giving this report in a very even tone.
You're like, you know, balanced as always.
But yeah, I'm freaking out over here.
I'll get into the reason for that as well.
But let's talk about the existential risks with AI that are in the near term, but they're also far term ones, depending where you stand.
So again, we're all writers, we feel that on this podcast, but I am a contract copywriter with two tech companies right now, and both focus on AI in some capacity, and I am not training chatbots, as far as I know.
I'm writing about the potential benefits of AI in healthcare, which I flagged.
You know, I'm talking about semiconductors and legal services.
Now, will a robot be able to do my content work for these companies one day?
I don't know.
So, Matthew, I am a little bit scared, too.
Well, you're not directly training chatbots, but isn't it the case that at this point, like, you can't be sure that your copywriting, this script we're consulting now in Google Docs, even our voices on this episode, aren't being fed directly into an LLM without our knowledge or consent?
Well, I would say for the Google Doc we're looking at, I'm not that worried, because it is password protected with 2FA, but all the other work, absolutely.
I mean, we know that our book, Conspiratuality, was used to train chatbots.
We saw it on a list that was published in The Atlantic through a database in the last few years.
So, yeah, absolutely.
I wanted to start off by defining AI because it is usually invoked in a state of panic, but in the process, it's flattened as one thing.
I think that's why Trump and the right can get traction and will likely get more traction by invoking the demon of AI in their rhetoric over the next few months.
Because if you don't know what application of AI they're even discussing, then it becomes a boogeyman to be feared, which is from the same playbook they've been using against the left for many years.
Being existentially scared of transgender people and immigrants requires that you know very little about them.
Back to your point, Matthew, there's definitely things to be feared.
But I also, again, I'm working right now with this healthcare company that might be able to detect contradictions, contraindications in drugs before they happen in human bodies and also help expedite clinical trials by using AI.
So I can't completely be against it.
There's some really good uses.
So, after Trump proved his ignorance of curved surfaces, disinformation researcher and friend of the pod, Renée DiResta, posted a 2018 BuzzFeed interview she did with Charlie Warzel, who's now at The Atlantic, great writer, both of them are great writers, and she highlighted this quote, which perfectly describes this phenomenon we're discussing right now.
You don't need to create the fake video for this tech to have a serious impact.
You just point to the fact that the tech exists and you can impugn the integrity of the stuff that's real.
Fake news, anybody?
Yeah, that's great.
It's like Bentham's Panopticon prison.
Like, if everyone thinks they're being surveilled but they can't see the prison wardens, the prison wardens don't even have to be there, right?
I interviewed Renée DiResta a second time for her book, Invisible Rulers: The People Who Turn Lies Into Reality, for episode 209.
We really value her work here on the pod, and I see her as one of the world's foremost experts on the digital disinformation crisis and its political implications.
The evolution of technology and media creates new avenues and forms of political persuasion or propaganda.
And experts classify propaganda using this color coding of white, black, and gray.
So in white propaganda, the origin of the messaging is really transparent.
You know where it's coming from, while what they call black propaganda will create the impression that the messages have originated somewhere else.
And then with gray propaganda, somewhere in between, it's not really clear where the messaging originates.
And the main reason deepfakes and other types of AI-generated propaganda are so concerning is that it gives black propaganda a new level of deceptive sophistication.
That's a really good breakdown.
That's helpful.
So DiResta explains that the advent of the internet and social media has created this three-way synergy between the influencer, the algorithm, and the crowd.
And that sort of drives a lot of the activity.
And it's now upon this participatory synergy that generative AI can ride, because now we have these tools that can simulate information, media, and the online utterances of human avatars very quickly, and in some cases for fairly low cost compared to what it used to cost to create some kind of, like, real fake video of, you know, real people doing things they never actually did.
Highly relatable influencers, which is a huge part of the new ecosystem that we're in, can then either create or just circulate AI-generated content, and the algorithms then serve to amplify whatever gains traction.
So it's kind of AI acting upon AI.
That's amazing.
On all of us, right?
Within, then, the crowd that is drawn to those specific types of messages.
So in this way, AI-based disinformation is perhaps no different, this is what DiResta would argue, than other forms of propaganda.
It may just seem more convincing.
And I want to underline here something that may already be apparent, which is there's this language shift that DiResta makes in her book, and it's away from the distinction we've kind of stumbled over a lot, right, between misinformation and disinformation.
And toward just using the term propaganda.
So typically, so many public conversations on disinformation start by talking about the question of intent, making it clear that some people actually believe that the false information they're spreading is really true, and that that's categorically different from someone who knows that they are lying.
But DiResta argues that built into the existing understanding of propaganda is that what starts off as deliberate disinformation, perhaps created at times by hostile state actors, inevitably then gets picked up and circulated now by like your elderly mother on Facebook.
The rather unsatisfying tendency has been then to call this misinformation when your elderly mother circulates it.
But the content is really the point.
Through the lens of propaganda, who has which intention becomes less important than what the original agenda is behind the lie.
Now, DiResta also points out in her book that the distinction between social media or independent media, so we're talking about things like Substacks and podcasts and YouTube channels on the one hand, and then more established legacy media on the other, it's not like there's a wall between these two things.
It's actually a very blurry distinction, given how the information ecosystem functions.
Old media reports on what is happening online, and then more partisan channels may just uncritically repeat it, so it just becomes another distribution network.
And then each successive repetition will cite the previous version of the same story as if that gives it credibility, but it could just be like that meme that circulated on Twitter the other day.
Taylor Lorenz brought this up on Threads recently about like the fact that at the DNC this week, there are a number of people who use their social media feeds as media who are covering it.
And I replied that, as someone who was trained in journalism in the 90s and has kind of watched a generation move, I think this is a fully legitimate form of journalism, just without the guardrails of editorial, so that's a very important distinction if you're talking about traditional journalism.
I almost feel like there needs to be a third category here.
We talk about op-ed versus actual journalism, but now you have people who are covering events in a very credible way, but they are usually coming at it with a more opinionated stance, which is completely fine.
I've always said in this podcast, we kind of broker in all of these things.
But to actually create that third category now so that we can better understand that distinction between social media and the legacy media that you're talking about because those lines are completely blurred at this point.
I think the category is streaming, right?
It's live streaming.
Yes, so that's the thing, right, is that you have this raw feed of people who are right there.
And because of the technology that's available now, they're able to just live stream stuff.
And I think that that is interesting and important, but it's also potentially really confusing.
And one of the things we've seen is that it drives more and more of the 24 hour cable news cycle, feeling like they have to jump on things really quickly and maybe like slide around some of their fact checking.
Yeah, and to your point, Derek, about editorial guardrails, some of those streamers are going to have some kind of self-discipline around that.
They're going to have a particular, they're going to have awareness around their angles and others just aren't.
So it's not like that's a good category for describing quality, but that's the, I don't know, that's the medium.
That's a really good point, and that's more like a fourth category, because I was thinking more about the people who we do this sometimes with videos, where we take a few different videos, cut them together, edit it, editorialize over it, and that's a function as well, and we release that on social media.
So, that's not exactly streaming because there is some pre-work done.
There is writing and research that goes into that process.
But when I do that on my own as compared to when I'm writing for a publication, the editorializing, the fact-checking is purely on me or for our scripts, it's on the three of us.
And that is yet another category, too, that I think we need to address as a society.
Yeah, and in those cases, you know, it's absolutely clear to all of us, and hopefully to all of our audience, that it's completely through our particular lens, from our particular perspective and bias, so to speak.
Which is probably why it's good that the three of us disagree on a regular basis, right?
Like, we agree on fact checking, we agree on certain levels of integrity, but, you know, our angles aren't going to converge into a bright point of light, are they?
We've talked about this behind the scenes for a long time and we've made the conscious decision to bring it out in the actual podcast more and I've advocated for it because I think it's healthy because to me this is how humans actually talk when they're together and there is some important quality about that when you don't have people who agree with each other on the minor details.
I think we're all kind of united and broader in the broader picture of what we hope to see, but how we
get there is where we have differences, and being able to talk through those, I think, is really
important. Yeah, it may be a really important aspect of how we avoid just becoming, right,
recycling our own sort of shared echo chamber opinions.
With regard to AI, DiResta is actually a lot less alarmist, which is kind of refreshing. Regardless of the form it takes, her point is that educating people on the manipulative techniques inherent to how propaganda works in general is the key.
So for her, it's media literacy, essentially.
In America, we have this dangerous situation, which is that even without AI, over a third of Americans believe the big lie.
She worries that this could create a sense of justification for using whatever manipulative techniques are at hand at this time, including AI, to try to right what they perceive as having been wronged.
I have one more point here, which is about another arena in which we're actually seeing real-time unfoldment of AI being used to political ends.
And that's with the Israel-Gaza war.
You know, New York Times did a story about a report published by the Israeli watchdog group called Fake Reporter.
This is back in early June.
There's a link in the show notes to it.
It's about how a consulting firm called Stoic, and that's all capital S-T-O-I-C, Uh oh!
Stoic was paid $2 million by the Israeli government to run an influence campaign designed to sway the North American public, but also government opinions, specific people in the government.
And they used AI specifically to do this.
So, Stoic used a combination of websites and AI-generated fake social media accounts, which would then have AI-generated comments associated with them.
The operation had several layers.
It tried to influence at least 128 American lawmakers, specifically by targeting their social media accounts, trying to get reactions out of them.
It sought to sow discontent amongst domestic coalitions that have rallied to the Palestinian cause.
And also linked support of Israel to the legacy of human rights initiatives.
It also smeared the United Nations Relief and Works Agency, UNRWA, as being linked to Hamas on insufficient evidence.
They also dismissed claims of human rights abuses.
They amplified stories about Muslim extremist groups, especially in Canada, and then manipulated photos and lobbied journalists to try to get coverage of stories containing these kinds of false or exaggerated claims.
The network was also shown to rely on connections to anti-immigrant and far-right as well as pro-Israel accounts on Twitter for some of its distribution.
There were over 500 fake accounts on Facebook and Instagram that were suspended or removed by those platforms.
The operation is part of a larger trend, though, which originates in Israel, of relying on this kind of business, which is usually advertised as offering, you know, data analysis, content creation and distribution, and generative AI to political candidates and organizations.
So what happened with the Israeli government when they were called out?
How did they respond?
So their being behind the campaign was reported by this Israeli watchdog group.
That was repeated by the New York Times.
The Times says they independently verified the connection.
And the specific allegation is that Israel's Ministry of Diaspora Affairs, which has a budget of about $2 billion annually, retained the services of this consulting group called Stoic.
But according to the Jerusalem Post, that government department has said the claim made by the New York Times was baseless.
So essentially, they denied everything.
Yeah, they denied it.
It's also worth noting that even without shadowy AI contracts, right-wingers in Israel are very invested in, you know, allegations that Palestinians themselves are crisis actors or fakers or not real.
They're engaged in the histrionics of what they call Pallywood.
So guys, I wanted to point out that AI, or at least the idea of it, isn't just weaponized
against evidence, but I've been noticing that it's weaponized against people.
I don't know if you've seen that MAGA influencers are calling opponents NPCs now, but that's getting pretty popular.
It's a favorite Musk insult.
So that's non-playable characters for the olds.
They're confined to state compliance.
They're never deviating from their limiting programming, etc.
They have weird, repetitive movements.
But Derek, I wanted to ask you, what can you tell us about the cannibalism problem with AI?
Isn't that the biggest problem with the tech, regardless of what it's used for?
It's a really big one.
So the cannibalism problem is when AI models, particularly large language models or LLMs, are increasingly trained on data that's generated by other AI models.
Now, there are a few real issues here.
One is degradation of quality, and basically the data becomes less diverse and more repetitive.
I've seen it referred to as a photocopy of a photocopy, or for us olds, copying an MP3 over and over again.
Oh, it's tapes, man!
It's audio tapes.
Forget MP3s.
Yeah, just dubbing back and forth on the two tape decks of your boombox.
Oh man, I loved it.
I miss those days.
This is really dangerous because I wrote an article for Mother Jones earlier this year about LLMs being trained on misinformation, and I found many examples of that in health misinformation for this article.
And when that happens, it just confirms and strengthens biases.
And so the spread of misinformation is considered another major issue with cannibalism.
We all know this well because it can have serious societal impacts, and that's reflected, for example, in the declining numbers over the last few years.
There's something called model collapse and that's when AI models that are exposed to
AI-generated data lose the ability to accurately represent the data distribution they were
initially trained on.
And finally, you'll know this term, feedback loops.
In AI systems, this means reinforcing and strengthening biases and inaccuracies, which will inevitably
lead to an erosion of trust in AI-generated content because it becomes nearly impossible
to distinguish between real and synthetic information.
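To make that photocopy-of-a-photocopy idea concrete, here's a minimal toy sketch (our own illustration, not something discussed on the show) of what happens when each "model" only ever learns from the previous model's output:

```python
# Toy illustration of model collapse / the cannibalism problem: each generation
# fits a Gaussian to samples drawn from the previous generation's fit, so no
# generation after the first ever sees the original data.
import random
import statistics

random.seed(7)
mu, sigma = 0.0, 1.0                      # generation 0: the "real" distribution

for generation in range(1, 11):
    synthetic = [random.gauss(mu, sigma) for _ in range(200)]   # previous model's output
    mu, sigma = statistics.mean(synthetic), statistics.stdev(synthetic)
    print(f"gen {generation:2d}: mean={mu:+.3f}  stdev={sigma:.3f}")

# With only a finite sample each round, the estimates drift and the spread tends
# to shrink over generations: diversity erodes and errors compound, which is the
# degradation and feedback-loop problem described above.
```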
A few solutions have been proposed for this problem. One is diverse, human-curated datasets, which would ensure accuracy and reliability, but I'm skeptical of this one because it would require a massive investment by tech companies who, as I flagged earlier, are moving in the other direction in terms of wanting AI to cut costs and not add to them.
And to be honest, I'm not that hopeful for the other two solutions.
Oh, no.
One is transparency and explainability, which is the development of AI systems that can explain
their decision-making processes.
Now, most companies want to keep that inside the black box and not reveal how it's happening.
So that's a big problem.
The other is external auditing, which allows third party audits to take place.
But same thing: with proprietary models, they're not going to want people coming in to take a look at how the sausage is made. So both of those, to me, are highly unlikely.
So all right, so we're fucked on that particular point, Derek.
There's no relief there whatsoever.
But let's talk about a brighter issue, energy usage.
Yeah, so we're even more fucked on that issue.
So for a recent project, I had to look at comparative current and projected energy consumption between cryptocurrency mining, which a lot of people have rightly criticized as being energy intensive, and the electricity that's going to be needed to power generative AI.
So, some numbers.
This year, Bitcoin mining will require 130 terawatt hours of electricity.
Now, I'm sure you both know exactly what that means, but for comparison, the United States total amount of electricity usage for everything in 2023 was 4,065 terawatt hours, and that was actually down a percent from 2022.
So 130 out of 4,065, about 3 percent, is a pretty big number for something like mining crypto. Now by contrast, generative AI will use 200 terawatt hours this year, so 200 against 130.
Now, to use another comparison, the US used 400 terawatt hours of electricity for air conditioning this past year.
That's also a major issue because 10% of our electricity consumption goes to HVAC systems, for example.
Okay, I know there's a lot of numbers.
Yeah.
We are currently using half as much electricity for AI on our computers as we are to run HVAC systems in the entire country, okay?
Incredible.
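For anyone who wants to check those comparisons, here is the arithmetic spelled out, using only the figures cited above (all in terawatt hours per year):

```python
# Quick check of the electricity comparisons above, all figures in TWh per year.
us_total_2023 = 4065     # total US electricity use in 2023
bitcoin_2024 = 130       # estimated Bitcoin mining demand this year
gen_ai_2024 = 200        # estimated generative AI demand this year
us_hvac = 400            # US air conditioning / HVAC demand cited above

print(f"Bitcoin share of US total: {bitcoin_2024 / us_total_2023:.1%}")  # ~3.2%
print(f"Gen AI share of US total:  {gen_ai_2024 / us_total_2023:.1%}")   # ~4.9%
print(f"HVAC share of US total:    {us_hvac / us_total_2023:.1%}")       # ~9.8%
print(f"Gen AI relative to HVAC:   {gen_ai_2024 / us_hvac:.0%}")         # ~50%
```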
So here's the thing about the projections.
So cryptocurrency mining has always been energy intensive, but the supply will run out.
Satoshi Nakamoto published the Bitcoin white paper on October 31st, 2008. And he set the maximum number of Bitcoins that can ever be mined at 21 million.
So it's going to run out in about 2140.
And the mining process is designed so you can't just mine them all at once.
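The reason you can't mine them all at once is Bitcoin's halving schedule, which the conversation alludes to without naming: the block reward halves every 210,000 blocks. A rough sketch of that arithmetic (ignoring the integer-satoshi rounding the real protocol applies):

```python
# Rough sketch of why the Bitcoin supply caps near 21 million and why issuance
# stretches out to roughly 2140. The block reward starts at 50 BTC and halves
# every 210,000 blocks (about every four years at ~10 minutes per block).
# This ignores the integer-satoshi rounding the real protocol applies.
reward = 50.0
blocks_per_halving = 210_000
total_btc = 0.0
halvings = 0
while reward >= 1e-8:                    # 1e-8 BTC = 1 satoshi, the smallest unit
    total_btc += reward * blocks_per_halving
    reward /= 2
    halvings += 1

years = halvings * blocks_per_halving * 10 / (60 * 24 * 365)   # ~10-minute blocks
print(f"maximum supply ≈ {total_btc:,.0f} BTC")   # just under 21 million
print(f"issuance runs ≈ {years:.0f} years")       # ~130 years from 2009, i.e. the 2140s
```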
So, it is a while off, but if you look at projected energy usage for mining, it's a relatively modest increase of 10 terawatt hours per year.
Okay?
So, as I said, right now it's 130.
In the year 2034, the expected demand is 240.
That's not nothing, but it's nothing like generative AI.
Final point, if you look at the graph for electricity demand for crypto, it almost doubles
in the next decade.
Okay.
Now we'll turn to generative AI.
The expected use in 2034 is 36,700 terawatt hours.
Oh my gosh.
Okay.
36,700.
Remember, the entire usage of the US is 4,065.
It's an insane number.
The entire planet used 25,000 terawatt hours in 2023.
So, if that hockey stick graph bears out for generative AI, the industry alone will use 50% more electricity than the entire planet uses on an annual basis.
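And here is that projection placed next to the baseline figures cited earlier (again, terawatt hours per year):

```python
# Putting the 2034 generative AI projection next to the baselines cited above (TWh/year).
gen_ai_2034 = 36_700     # projected generative AI demand in 2034
world_2023 = 25_000      # total global electricity use in 2023
us_total_2023 = 4065     # total US electricity use in 2023

print(f"vs. 2023 global usage: {gen_ai_2034 / world_2023:.0%}")     # ~147%, i.e. ~50% more
print(f"vs. 2023 US usage:     {gen_ai_2034 / us_total_2023:.0%}")  # ~900%, roughly 9x
```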
You're welcome, Matthew.
Yeah, it's incredible.
This is 10 years from now, you're saying, is where that's going.
Yeah, and just to underline the climate context here, we're nowhere near replacing fossil power with carbon neutral options for existing global power usage.
So the U.S. is currently at about 40% zero-carbon production, and China's at about 30% renewable.
But these monstrous numbers you're citing, Derek, like they just gobble up that fragile math of a process which is already taking too long.
I think it's incredible.
I think it's also on brand that techno-capitalism like continues to fill up these poison chalices.
It's like productivity that pollutes, efficiencies that really efficiently kill us.
It's incredible.
Well, the project that I was working on was about data centers, and the company I was working with specifically is building them in areas with wind and hydro power for renewable purposes.
So people are thinking about that, but you are exactly correct.
The incentives aren't there yet to move all of that power over away from fossil fuels.
And without those tax breaks, for example, like there's no way there could be enough data centers running off wind to supply the demand that's coming.
So yes, we are fucked.
It's like everything is a runaway train.
And I mean, just in general, with regard to what AI seems to facilitate, it feels like some final acceleration of capitalism.
Like, to me, it's, it's incredible, but also totally on brand that these LLM designers, like they just looked out into the internet and they decided like it was the natural world.
Just an infinitude of raw resources that just belong to nobody, ready to suck in and exploit.
I've written nine books, and I never conceived of them being hoovered into somebody's hard drive so that they could learn how to mimic whatever it is I do and then sell it for pennies on Amazon.
I wrote books for people.
It's not like I expected to get paid or felt entitled about that.
It was always a labor of love.
And then along comes some like dipshit and says, thanks for your service.
I'll just take that.
I don't, there's like a fundamental theft that's been going on.
And I don't, I don't really see a lot of commentary on that.
I mean, artists are saying this, of course, but in a more general global sense, human labor as it exists on the internet now just belongs to somebody else.
I want to distinguish something though, because you are absolutely correct that there are some people who looked out into that horizon and said, yeah, I'm going to take your books and I'm going to use them to make money.
But I do want to defend engineers because I've worked with so many of them over the last decade in various facets of tech.
These are people who just like writing code for the most part.
Yeah.
So they have an assignment.
They have a problem to figure out.
They're not looking at that broader landscape that the CEO and the C-suite is seeing for marketing purposes.
And it's one of the hard things to talk about in tech itself because there is a diverse political array of workers who work in tech.
Yeah.
But because the biggest names are the most egregious people, the Musks, the Peter Thiels, they take up so much oxygen and they're so insane and so wealthy that people tend to glom onto them as representative of Silicon Valley.
Meanwhile, you have people like Mark Cuban, a billionaire who is trying to figure out prescription drug costs for people and create solutions.
There are other billionaires.
Dustin Moskovitz is a good example, the co-founder of Asana, who's a billionaire, and he's extremely liberal.
He's all for the Harris campaign, but he's also always pushing back on tech industry leaders.
So, you know, there is a little more balance there.
Now, that does not negate anything you just said, though, Matthew.
I just want to defend some of my co-workers, who I've known over these years, who are not those people.
It's like us working in capitalism, right?
We can do what we can to be ethical, but it's like the machine itself seems to have its own logic.
I agree with that.
I hadn't had this thought before.
There is something incredibly, it sort of stops you in your tracks.
It stops me in my tracks to recognize that this technological development, which is so much about hyper abstraction, right?
It's so much about, you know, the fantasy of being able to create alternate realities. And this immersive technological advance comes at such an incredible material cost in terms of our actual world that nothing, nothing can just sort of glide along frictionlessly on the surface.
Like it's okay.
You want to keep developing this technology, which requires more and more and more energy.
Well, here are the consequences.
Are you doing that math and are you doing it with any kind of moral intelligence?
And it's not a Star Trek world in which the planet is somehow idyllically unpolluted, right?
The cost is not even close to being overcome.
The cost of the holodeck is that powering it is sucking the life and the color out of our actual planet.
Yeah, it's Borges, right?
And the map.
I just wanted to finish by saying something about AI and its likely disruptions, especially politically. Derek, you argued last week that the Vance tweet about the couch went viral because there was an appetite there waiting to grab onto something, and I agree.
And so I'm thinking about why are people so hungry for conspiracy theories, and why will AI facilitate that?
Why are they so primed to disbelieve what they see?
You know, we have charted journalism failures and government lies going back decades.
We can pick out distinct technological shifts that seem to deepen the problem, but I think there's something else going on, or at least I feel it.
I can't stop thinking that it has something to do with a disconnection between images and responses, especially images of things that are wrong, responses that amount to paralysis.
The attraction of the AI narrative, to me, has something to do with the passivity of media consumption.
Like, we have this proliferation of images that makes the interaction with any one image kind of distant and dissociative and ironic.
When I think about how much of my own life has changed with regard to the sheer amount of information and content I'm taking in at all moments, I find it all hard to process and therefore to believe.
And I think one very common feeling I have is that there are so many things I am exposed to that are obviously wrong and yet never get taken care of.
At times, it evokes a kind of survivor guilt within me, like we can see and feel so many tragedies, but nothing comes of them.
We get no satisfaction from being these eternal observers.
I remember, I was remembering this week in the late 1970s, I'm watching TV and Sally Struthers used to come onto the television standing beside children with bellies bloated by malnutrition in Ethiopia.
And it was at once so awful and yet so distant and unreal and so hard to believe that phoning the 800 number would have anything to do with those children having anything resembling a repaired life.
It was ultimately, I think, alienating.
And, you know, now I see constant wall-to-wall coverage of war in the Middle East.
Nothing really happens in terms of state or institutional responsibility.
And I can only imagine that the drive to say, that's not really happening, or those are crisis actors, or that was AI, it grows with the level of exposure we have to disaster with no relief.
Like, how can this all be real and also not be stopping?
So, I think that before we get into the instant appetite for the meme, or the denial of a real event, or the excuse to abdicate concern because it might be fake, I think I imagine a lot of us have been watching too many things that we can't change, just go by.
And that feels really bad.
And as I remarked above, I think when D'Souza takes that red marking tool and he makes circles on those panels on the plane fuselage, or one of the transvestigators makes the circles, there's a feeling of satisfaction that they have in that.
The feeling of, I intervened.
I put my mark on this image.
He is disastrously wrong and a very bad person, but he's also doing something resonant for a population bombarded by images.
He's like stopping the flow.
He's drawing a line in the sand around what he has to accept and what he thinks he has the power to change.
And I think that's why it's appealing, and I can actually understand it.
I find it interesting that you found the Sally Struthers campaign alienating.
That was one of the most successful marketing campaigns of all time, and it actually changed... Was it really?
Like in real dollars?
Yeah, it changed.
It not only helped alleviate poverty, it also changed how marketing was done moving forward.
It shifted the entire industry, and the reason that happened, along with the We Are The World project, is because what people figured out was that instead of saying a million people are starving, they showed you one.
And that one drove way more adoption and people donating than just saying a million people because of the power of that image.
Now, where I think there's a difference with what you mentioned about D'Souza is that there's this idea that drawing with the sharpie and then just putting it on the internet and sharing it actually does something, is some sort of activism.
Whereas in the 80s, people did call that number and that money did get to those children in some capacity at least.
That's not what's happening in our current environment.
You know, maybe, I'm glad to be informed by that because maybe what I'm actually reflecting on is a personal sense of alienation or kind of paralysis as I'm eight, nine years old and I'm watching that.
It seems that then something changes.
I mean, like the DNC convention is going on now.
There's a lot of news about like, you know, how, what happened, what unfolded in 1968 and how everybody was watching that unfold.
And that, along with coverage of the Vietnam War, actually made something stop or contributed to an intensification of real-world debate.
Something happened in the intervening years where this distance between images and responses seemed to extend and bloat itself out.
One of the challenges is that if you don't see the results of your activism, it can be very discouraging.
So I worked on a project back in like 2004 or 5 with, I forget what US government agency, but we did a series of shows to try to help Americans get more passports.
Because Americans can pretty much travel everywhere, and less than 50% of Americans even own a passport.
But we contributed, we weren't the only people, but it did increase by almost 10% over those couple years.
And so you do have results that can bear fruit, even if you don't always see them.
I also worked on another music project to provide mosquito netting in Mali.
I produced a record by Vieux Farka Touré, and all the proceeds of that record went to buying mosquito nets; we were able to buy a few thousand nets and deliver them directly to families because Vieux was involved with this project.
So I would say I do understand that sense of, oh my God, that's so far away and I can't help.
But that is why charitable organizations, when they are credible, are really powerful and that money or service that you provide does actually translate.
I'm really glad, Derek, that you ended there, because you told us a whole bunch of horrifying shit today, and I think that the dream of an episode around, you know, the lies of AI is, well, what do we actually do as human beings that's real, and how can we continue to do that?
I think that's what we all long for.
Thank you for listening to another episode of Conspirituality.