Kevin, editor of Prism Meta News and co-founder of the American Information Integrity Alliance, examines how AI-generated deepfakes—like manipulated Ron DeSantis ads or non-consensual porn—threaten democracy, elections, and personal safety by spreading fabricated narratives faster than corrections. The "No Fakes Pledge," a voluntary anti-deception commitment for political figures, aims to set moral norms amid First Amendment hurdles and the arms race of synthetic media. Blockchain and speculative self-surveillance offer partial solutions but fail to address systemic risks, leaving prevention as the critical frontline defense against irreversible harm. [Automatically generated summary]
And we're back with Truth Unrestricted, the podcast that would have a better name if they weren't all taken.
I'm Spencer, your host.
And I'm going to do this a little different today.
I'm going to introduce the topic first before I get to the guest.
We used to rely on eyewitness testimony almost exclusively when making official accounts.
Historians had to rely on notes in journals and diaries to understand what a person saw or felt during a particular historical event.
With the advent of recorded media, we began to move past this reliance on shoddy memory filtered through a laundry list of biases.
Recorded images from security cameras have, to a large extent, supplanted eyewitness testimony as a reliable standard for discerning fact from fiction in an event.
It isn't perfect, as it doesn't see everything, but over time, we've added more cameras so we can see more angles and more properly determine the truth of an event.
But as with all things, our technology has stepped in to give it a twist.
Our most expensive and expansive storytelling efforts have led us to make things that are completely fantastical, and I love them.
2001: A Space Odyssey, Star Wars, Lord of the Rings, the Marvel movies.
We've created a multitude of ways to make images on screen that are completely fake and could not happen with our current knowledge of science and likely could never happen even with much greater knowledge.
But to see them on screen ignites my most fantastic imaginings and generally makes me very, very happy.
I love that we have this technology for making fiction.
And then, as with other sectors of technology, people have begun to use this technology to make unreal things for purposes that are not meant to be fictional.
My main work here on this podcast is to fight against something that I call unreality.
It can be amply demonstrated that it is currently possible to warp a person's thinking so much that they can come to love a desired political outcome.
In fact, they can love it so much that they live for it, pray for it, and allow it to affect every action they take in life.
They can ignore evidence of real things if they feel that accepting that evidence would shake them from the political outcome they want.
This is a problem.
It's not actually new.
It's just never been as widespread as it is now.
The internet was created in part on a dream of bringing humanity together, of removing the ability for demagogues to get away with untruths, because it would greatly increase the ability for people to find out the truth that would refute them.
But instead, we have seen it become the world's greatest bullshit generator, churning out a seemingly endless stream of baseless rhetoric, which its viewers fail to properly question or interrogate.
Our education system has failed to give people at large the ability to tell fact from fiction.
And this is a problem, obviously.
It has greatly divided us.
Bad actors in the world have contributed in an effort to sell additional, unnecessary healing products that don't work.
Political engineers add fuel to the bonfire in an effort to separate populations to gain support for political movements.
We have videos from previous events being recycled and reused as being from current events.
We have voices narrating these videos as juxtaposition to create an entirely different narrative than what is happening.
And the fact that these bad actors don't need to be at the scene means they can do it from the safety of another continent with no real cost and can therefore, in some cases, become fabulously rich from the production of fake news content.
So with that, it's time to introduce our guest.
Take it away.
Hey, Spencer, happy to be here.
This is Kevin.
I am the editor of Prism Meta News, which has a weekly newsletter called This Week in Misinformation, and that's up on Substack.
And I'm also co-founder of what we're calling the American Information Integrity Alliance, which is a group of like-minded citizens looking for ways to support one another in doing anti-misinformation work of all kinds.
And in particular, around the tradecraft elements of what we have to do out there.
People doing OSINT on disinformation networks, for example, people doing debunks, journalists reporting on misinformation news, folks involved in media literacy initiatives, and people and organizations on the news literacy side.
So really all ranges.
And then kind of across the different domains that are most important right now for the misinformation problem we're facing in society: we see health, including COVID, as one big one; democracy, including our elections, as another; climate and science not related to health as a separate bucket; and so forth.
We're also tackling human problems related to misinformation.
And a couple of those we've particularly taken on board. Well, first I mentioned what we're doing to support the people who are doing this kind of work, because right now many of them don't have support.
So that's problem one.
Problem two is nobody seems to have cracked the code on how we're going to reach the millions of people who are sort of in conspiracy theory rabbit holes.
There's a lot of energy around this.
I think a lot of ideas and discussion.
I think we haven't come up with it just yet, generally speaking.
So we're pursuing some things within the context of this nonprofit organization, the American Information Integrity Alliance, to tackle that: create a community of former truthers who can speak to coming out of that experience in a credible way, and who will draw other people out.
So those are some of the highlights about me.
Great.
Just one thing there.
There was one term you used there that I'm not very familiar with.
So I'm sure maybe some other people aren't.
What is OSINT?
OSINT is a shorthand for open source intelligence.
So to distinguish it from other kinds of "ints": this is a shorthand in the intelligence profession, where human intelligence is HUMINT and signals intelligence is SIGINT.
Right.
And then open source is its own discipline.
It used to belong more to the intelligence world, but with the internet and this proliferation of data about all kinds of things being out there, it has moved more out of the government domain.
And now I think a lot of expertise resides more on the private side or on the civil society side.
So you'll see a lot of people like this if you search this term OSINT.
You'll be able to pull some of that up.
But Bellingcat is a good example. Bellingcat is par excellence in terms of the OSINT that they do.
If there's something that crops up on the internet and there's some weird stuff about it, like they will track it down, right?
Different kinds of toolkits, different things that they use.
Some of it is custom-built.
Some of it is available to more people.
They have kind of a playbook that they go by.
And so all that's really powerful.
I'd love to take not just Bellingcat but some other folks that are involved in this space, take some of that, and make it accessible to a broader set of people who maybe are getting into this, because it's sort of fraught to start thinking about researching these kinds of things.
These are, um, if you're looking at disinformation networks, these are powerful people, well-resourced organizations.
And even without that, there tends to be a good amount of irrational conspiracy-theory thinking involved a lot of the time, which leads to anonymous, and sometimes actually named, people threatening you for the work that you're doing.
Right.
Everybody experiences this, but like people of color, women especially, experience that a lot more.
We fleshed this out in a series of audio chats on Twitter, back before Twitter became useless.
And it was really striking to sort of learn from those experiences.
And there's been some good reporting about this as well.
I try to cover those types of stories in the newsletter that I write because I think it's important to understand, right?
Just people who are researching this, publishing work in this space, reporting on it, even for a major organization or attached to an institution like a university, receive a lot of harassment.
And a lot of times, their institution doesn't know what to do or like does all the wrong things for whatever reason.
They're really lacking a lot of support there.
So I think at a basic level, we could do more on the not just OSINT, but just kind of generally the tradecraft part of this so that people can play a strong defense out there so they can maintain their identity, maintain their privacy, even when they're making people mad.
That makes sense.
Yeah.
Just as an example, if I have this right, is something like the Center for Countering Digital Hate like an OSINT organization?
Is that an example of what you're talking about, Kevin?
You could say that.
I mean, I think they do.
I'm not an expert on what CCDH does.
I like their work.
You know, I would probably put them more in the bucket of monitoring what's going on out there, which is more a form of general social research in a way, but with an activist bent; digital sociology is really kind of what they're doing.
Yeah.
Yeah, yeah, which is great.
And it's part of the equation.
I don't know if I'd really call it OSINT, but it is a good example, because when you go out and highlight things, in their case, they're talking about hate speech against specific kinds of groups, more than, say, some of these folks over here who are saying, oh, this is misinformation.
That's a specific disinformation theme that's being put out.
Here's something that's connected to influence operations of a foreign state, Russia or somebody else who's putting out a certain type of narrative just to muddy the waters or try to influence opinion in the US or the UK or Canada or whatever it is.
But yeah, CCDH is a good example, because once you start doing that, you become a lightning rod.
In their case, CCDH and the ADL have both become villains on Twitter for the sin of pointing out that a lot of hate speech has proliferated on the internet, in particular on Twitter after Elon Musk took over.
And then that leads Elon Musk, who has this kind of conspiratorial mindset, to look and say, well, you must be in cahoots then with somebody to make us look bad for the advertisers and gin up a whole story that doesn't even exist.
It's just easy to revert to that instead of understanding them for what they're doing, which is, yes, activism, but they're objectively showing you something. Instead of rejecting it out of hand, taking corrective action would probably do better for your advertisers.
You're taking this other attack, which makes sense on an emotional level, but I think is part of the reason why the business continues to fail.
Yeah, I mean, one could almost take any media organization that he's considered an enemy and put a star beside their name as maybe more truthful than you'd think.
I mean, he also openly called Bellingcat a psyop.
Yeah.
He just used those exact words and I couldn't believe it.
Yeah, you know, Elon Musk is, he doesn't have a great understanding of the information environment in which he lives and swims.
And we're all like this to a certain degree.
Yeah.
The problem is like Elon Musk also has thousands of people around him who are constantly telling him he's a genius.
And so.
Yeah.
Yeah.
He has very few negative feedback loops.
Yeah.
Yeah.
When he gloms onto some of these things in his echo chamber and then starts to kind of repeat them, they really loudly love that, right?
The people who agree with that and think that way.
And he starts like, oh, I must be on to something.
You know, I don't want to get too much into his mind.
It's kind of a scary place, I think.
But that to me is the story of Elon Musk for the last couple of years, and especially in the last year, as the stresses of running eight businesses, one of them failing really badly, compound or whatever: that slide into conspiratorial "everyone's out to get me" and "Bellingcat is a psyop."
Yeah, he heard somebody say that, right?
Like he's just, he doesn't really, I don't know if he really believes that.
Maybe he does.
I don't know.
That's the way they talk in the circles that he is in now that he is chosen to be part of and to be a leader of.
And so you get that kind of thing.
Yeah.
Did you ever watch The Sopranos?
No, I never did.
No, there was this one great moment in one of the seasons where his wife is angry at him, and in this moment of anger, she tells him that nobody likes him and that everyone he works with just pretends to like him because he's the boss.
And he goes, you know, he's the mob boss and he goes and he's hanging out with all the mob guys and then he has some joke and everyone is laughing hysterically at his joke.
And he has this moment where, in slow motion, he looks around the room and sees that they seem just a little more fake in their laughter.
And he's just, you know, and you could tell it got to him.
And it was really well done, really excellent moment.
But to me, this is a lot like the Elon Musk we're talking about now: everyone laughs hysterically at all his jokes. And unless someone tells him they're not really finding it funny, he'll just think they all think he's wonderful, because those are all the social cues they give him.
And yeah, that dynamic is definitely at play in this situation.
I mean, what I would say is, A, there are people who are saying that to him, you know, but he just thinks they're haters, right?
So there's no one he knows and likes and respects telling him that.
Yeah, right.
Yeah.
I mean, if David Sacks or somebody said, hey, you know, that's actually not right, then maybe he would change his mind. But anybody else, on the middle or on the left, he thinks they're all compromised, woke mind virus and everything else, right?
Like, so why would I listen to that?
So he just doesn't have anybody in the position of the wife in that situation, right?
Like who would be the one?
Like, hey, you should be mindful of this dynamic that's playing out.
I don't think there's anybody like that in his life because for whatever reason.
But yeah, then like you need that.
And then you would also have to have a certain level of self-reflection about your information diet and the dynamics of your information environment more generally.
And he doesn't seem to be very curious about that.
So I wonder, like, I think, you know, he could come to that kind of realization.
I think it would take more than probably an offhand comment of, hey, do you notice? Or an angry moment from a lover or something.
Yeah, right.
Yeah, like I don't know.
I feel like, you know, and I've offered to coach him.
I tweeted many times.
I said, I'll coach you a little bit on this for free.
I'm happy to do it.
It's not about the money for me.
I just think that, for somebody with as much influence as he has, even marginal changes in the way his brain approaches this world on the human level and the social level would matter. The technical and engineering side, he's actually great at, right? That's fine.
Yeah, sure.
But the way he's approached Twitter makes clear he doesn't understand that most of what we're dealing with here is human stuff and social interaction.
Yeah, there's like social interaction component to it.
Like, what's the magic of Twitter?
It's not features.
He comes in with all this stuff about features, but that's not it.
Like, that wasn't why people were on Twitter.
People were on Twitter.
It was weird and funny, and like you could get into crazy arguments and you could see a celebrity, you know, fight with another celebrity.
And, you know, like, I think he gets the memes, right?
He gets a little bit of that.
But most of the moves that he's taken prove that he doesn't understand what made Twitter work.
And that's why Twitter doesn't work anymore because he kind of has destroyed it all.
Anyway, that's my Twitter sidebar.
Yeah.
Well, we should probably get back to what we really came here to talk about, which wasn't Elon Musk.
No.
But you have been working for a while on getting the website up, and you have it up now. I saw it when it went up, and I saw this thing that you're calling the No Fakes Pledge.
And I liked it a lot.
So what is the No Fakes Pledge?
Yes.
Okay.
So to talk about the No Fakes Pledge, we're going to back up a little bit and talk about what I call the synthetic media tsunami.
Oh, yes.
Yes.
I was going to get to that, but yeah, absolutely.
Tie them together.
Yeah, I'll just pull that right up front if that's okay.
Everybody knows ChatGPT exploded on the scene this year.
If you were paying attention in 2022, you saw a lot of this with DALL-E and Midjourney on the image-generation side of things.
Those have continued to advance really well, as well as the chat-based, you know, GPT things.
But basically, a situation where a couple of years ago, it wasn't really imaginable that we would have a lot of really believable content that machines could just produce on a mass volume level.
But that's our reality now.
It's become that way.
Not exactly overnight, but within the last year the writing's sort of on the wall for this, to the extent that we were already swimming against a current of made-up nonsense put out there by bad actors who could gin it up in whatever way.
Well, now those bad actors have access to this technology and you can automate it.
It can go make, let's say, this one, and I don't have to re-Photoshop it each time.
I can just say, say to the image generator, make 100 variations of this.
And 10 seconds later, you've got them and you just post them all.
You know, you can put all of this stuff out.
It's washing over us right now.
A lot of people haven't realized that it's here.
It's going to get worse, whichever way you look at it. It's just going to be really difficult to tell whether anything you're looking at is fake or not.
The text, image, you know, you name it.
Right.
And a lot of that is really borne out in the recent conflict between Israel and Hamas.
Yeah.
I chronicled this in the newsletter to the extent I can.
A lot has been written about it, but Twitter in particular is just awash in fake images, or images from previous conflicts that are being repurposed and recontextualized to make it seem like they're from this conflict when they're not, including atrocities.
So, you know, sometimes it's with an agenda of I'm trying to make Israel look bad.
Sometimes with the agenda, I'm trying to make Hamas look bad.
Sometimes my agenda is for the Palestinian people, whatever it is.
And then sometimes stuff just gets in the ecosystem because people like to throw things out there, you know, like just people who just want to watch the world burn.
It's kind of like that.
Yeah.
And we really don't have a lot of defenses against this. So the pledge came about as one way to think about addressing this problem.
And it's by far, you know, by no means is it the only way, but it's something that I think is important.
It has to be done.
We're still a ways away, for example, from regulation that's going to make sense.
I'm really skeptical of regulation also because of the First Amendment and just like the enforcement part of it.
There's just a lot of synthetic media stuff we're not going to be able to reach, whether it's through regulation or any other means.
Like it's going to be a feature of our lives.
So I don't want anybody to walk away from what I'm saying here with the pledge concept and think like, oh, well, this doesn't even get at it, right?
Like it's not going to solve the problem.
It's not going to solve the problem.
But here's our thinking: at least our leaders, the people we choose to govern, should be aware enough of the problem and responsible enough about it to realize that this is going to be an arms race in our politics.
If you leave everything else aside, it's going to be an arms race just in our politics: using this technology for political ads to portray events that didn't occur, to insert people or objects into actual events where they weren't originally present, to substantively alter audio or video recordings of your opponent or somebody else, right?
You can fabricate or attribute statements to a public figure, a private individual.
You can resurrect a historical figure, right?
All it takes is a few seconds of their voice.
And if they have enough, you know, they got a bit of video to go with it, then you can sort of like bring people back.
You can bring Churchill back from the dead and have him say some things that he never said.
It's really a whole wide world of possibility when you think of the creative possibilities.
But when we're talking about what's going to go into our political ads, for example, specifically with political ads, how are we going to see this play out?
And can we do anything about it?
Is it just going to be a free-for-all?
And right now it is.
We're headed toward a free-for-all situation.
So the pledge says, if you're a candidate, if you're a political campaign around a candidate, including staffers, if you're a political party, you know, we would hope the major political parties would also want to participate in this.
You know, you see the writing on the wall.
You join a pledge with us that says, we promise not to use AI technology or knowingly permit others to use it on our behalf to do any of these things.
And the reason for that is because we recognize this free-for-all situation, and we just want the people to make decisions about who's going to lead us on the basis of real stuff, and not on the basis of who can best gin up a bunch of fake stuff about themselves or the other person.
So that's the heart of it.
It's really a norms-setting exercise more than anything else.
There's obviously like PACs and other players in the political space.
I don't know how far we can really reach there; we would invite PACs, I think, to join this, but there's so much dark money and everything else.
But the concept for it right now is just a really brief, three-paragraph statement.
You sign on to it.
And we make it so that the only reason you wouldn't sign is if you're a dirty trickster; the dirty tricksters of our political world will not want to do this.
They will want this tool at their disposal.
And what we're trying to do is like say, everybody who doesn't want to be a dirty trickster and realizes the danger here, let's lash up and, you know, set some norms around this use.
And then, yes, some people will still use it, but we can sort of show over here the impact, the effect we can have by not.
And can we turn that into a negative, right?
Instead of, oh, it's a thing we'll try, we'll make an attack ad that is a deepfake of our opponent and he'll say this thing, we impose a cost on that. And the cost is that you're going against the norm here that we've all signed and agreed to.
And we're just going to let everybody know you're not in standing with that.
You're not on board with it.
And whether that outweighs the political benefit of doing it, I'm not sure, but we have to at least try, in my opinion.
Yeah.
Yeah.
I mean, what you're attempting here is to create a social expectation for the behavior of, at the very least, the people who are going to ask for our permission to be in charge, right?
And maybe if it catches on, apply some kind of moral penalty for breaking this rule.
Because I think you're right.
Having a legislative rule isn't going to be very useful.
It's going to be far too difficult for a legal standard to be set.
And then also, because of the nature of the positions most of these people are running for, the judicial system isn't going to be that effective at keeping this in line.
I don't think.
I think that this one is going to have to be judged in the court of public opinion, right?
And if we can point out and make it explicit that the people who are asking our permission to be in charge should also play by this rule in addition to the other sets of rules we ask them to play by, I don't think that's unreasonable, really.
Yeah.
I mean, I, when you mention what you're talking about here, I think I'm a big TV guy.
I've watched a lot of television.
Whenever I think about a TV show that's had a politician that's running for an office or something, you know, The West Wing was a very popular show at one time and it had a lot of politics and a lot of the talk of the campaigning and the ads and whatnot.
And the hero politician always starts out with a positive message.
And then if the story is realistic, at some point, they are pushed to the brink and have to go negative.
And they have to put out some kind of negative advertising and whatever.
But think about what those conversations would be like if instead of talking about going negative, they were talking about going fake.
Yeah.
Like, what would that conversation really be like?
In the urge to win the race, and then become the person in the position to, they'd tell themselves, do good things for the world, what lengths would they go to? Would they agree to falsify, or let a PAC falsify in their name, a large number of things about the opponent if the real negative stuff wasn't available?
Yeah.
Yeah, I don't know.
Yeah.
And I can't answer for them.
But yeah, I love the West Wing.
And I think that's a great way to frame it as well: the incentives that you're going to face soon. You're underwater, or you're 10 points behind, and as a Hail Mary it's, I've got to go negative or I will just lose.
Right.
And like, that's a reality.
But most of the time, what that falls back on is the candidate's intention not to be a negative-ad politician, not to be an attack-ad guy, right? He or she doesn't see themselves that way, doesn't want it associated with them.
But when it comes down to it, if that's all that's stopping us, I can get over that, because I have this other argument: I'll have no influence, I won't be able to do any of the agenda, which I believe is good, if I don't defeat this person who is genuinely corrupt and terrible.
So, like, why shouldn't we be able to at least point that out?
So, like, it is really important to get into and think about the incentives.
In the case of going fake, which is what we're talking about here, you're going to have the same set of things. You're definitely going to have some consultants come onto a campaign and say, we could drop this ad, or we could tweak this a little bit this way.
Yeah.
For sure.
Do this.
And these will be really subtle things too.
Like there was a Ron DeSantis ad and it was him standing in some place and waving.
And then they AI'd in some fighter jets flying overhead, which is not a thing that happened. It's so subtle, right? This is not a thing that really happened for a governor; maybe sometimes it does, but it wasn't in that shot. We'd say they photoshopped it in, but they generated those fighter jets and the sound effect and everything to make him seem like a presidential and sort of heroic figure.
So that's an example of a positive use of it.
You're using it to make yourself look better in, you know, in a way that never really happened, which, you know, there's lots of ways to do that.
And this is kind of par for the course.
But what we're saying is it's too dangerous to start doing that using deepfakes and some of this other generative AI.
And therefore, we need to have something on paper, like you've committed to it.
That's what it is.
Win or lose.
I'm not going to be a person that participates in it.
And then when that conversation rolls around, it won't be so much like, oh, well, I didn't want to go fake, but I feel like I have to.
It's going to be: with great reluctance, because I have a cost imposed now.
And like, we're not going to enforce it.
Like, who are we?
Right.
But there's going to be, there's, there's a norm set.
You've signed on to it.
You actually believe in that.
And then when push comes to shove, if you don't hold to that, you withdraw from this pledge, which anybody's free to do at any time.
If they want to get in and start doing fake stuff, or if they move ahead without withdrawing, then we're definitely going to say, hey, that's not in keeping with what we're about here, and you've got to remove yourself from this.
So we'll figure out how to kind of arbitrate those kinds of things.
But yeah, I want there to be something more substantial than just the candidate having a distaste for going fake.
I want there to be something real on the other side.
Yeah, preferably, because it needs to be something seriously negative that would deter people who, in my opinion, as cynical as I am, are already fairly dishonest and doing many of their things just to seek more approval.
A serious deterrent, really.
So I want to get into another aspect of this. When we talk about a No Fakes Pledge, in your mind, is this maybe a temporary measure meant to buy some time for technology to catch up and provide a more permanent technological solution for detecting fake images or authenticating real ones? Do you have any thoughts on that?
I'm not too optimistic.
I'm not too optimistic that in the future the defensive capabilities are going to outstrip the offensive potential.
So, right off the bat, like, no, this isn't like a holding pattern while we wait for that salvation to arrive.
Um, I don't really think it's going to.
Uh, nothing so far has shown me that that's going to work.
You see, I try to follow some of the news, and obviously a lot of people are thinking about this: develop a tool that can detect something that's AI, so it's AI to see and tell a human what is AI-generated.
And this is the arms race part of it. It is an arms race, and I think the offense has a strong advantage here, to the extent that you can already fool humans, so we're beyond that. It's not like humans can pick out the difference. The question is: can we build some machines and algorithms, other AI, to help with this?
And maybe, I'm not counting on it, but maybe, because there are some non-visual signatures that could come into play here.
Like, no machine can generate something that's truly random, for example, but humans, and the interaction of human beings, definitely can.
So there may be signatures that a machine would be able to pick up on and recognize in another machine's output.
But again, that gets pretty subtle, and I'm not sure.
And the other thing, too, is that even if you can do that, if you haven't set some standards and it's a free-for-all, we know for sure that corrective information lags far behind the proliferation of the false thing.
That's true of falsehoods generally, and it's going to be true in the case of images and then some, because of how visceral and visual human beings are.
The retraction or the correction reaches maybe 5% of the original audience of a false thing that went viral, right?
That sounds about right.
Yeah, yeah, like generously.
And I think that rule is going to hold.
So even if you imagine a tool that was 100% effective, you're still only going to be able to correct 5% of the situation.
It would have to catch it immediately.
Yeah.
So, an ounce of prevention is worth a pound of cure, and all that.
So much so in this case that I think we just have to do everything we can; I don't know about putting all our eggs in this basket, but we should put a lot of emphasis on prevention and deterrence at the outset.
Um, because uh, the use of this stuff is going to overwhelm us.
It could.
I mean, for several years now we have already had people using this deepfake technology to put other people's images onto pornographic video, to fake that they were in a video of some kind.
That's been happening for years already.
And that's already its own level of problem, mostly a problem for people that we as ordinary plebeians don't care as much about, like rich celebrity types.
Uh, but you know, we should care.
They're still people, they don't deserve that.
And it hasn't affected just celebrities; it's affected individuals who didn't ask to be in any kind of limelight, who just happened to know a person with access to technology that could be used in this way.
And it's not great, it's really not great, all of it.
Um, I have two ideas that could maybe add something to this that might help.
One is the idea that blockchain technology could eventually be used to authenticate images from the source, to verify that the image we're viewing is the same as it was when it was captured, if that makes sense.
That's not a fix-all. It's not a silver bullet.
It won't take down the werewolf.
It could help in some situations where, say, there's video evidence that has to be used in a court of law, for example.
Then maybe this technology could authenticate it in a chain-of-evidence kind of situation, where it could be shown that the footage hasn't been doctored or altered in any meaningful way.
That might be something of a band-aid or something that helps.
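To make the chain-of-custody idea concrete, here is a minimal sketch of how records could be hash-chained at the source. Everything here is hypothetical, the function names, the device IDs, and the metadata fields are invented for illustration, and a real system would anchor these hashes to an actual blockchain rather than a local list:

```python
import hashlib
import json

def make_record(prev_hash, image_bytes, metadata):
    """Build a tamper-evident record: each entry commits to the image's
    hash, its metadata, and the hash of the previous record."""
    body = json.dumps({
        "prev": prev_hash,
        "image_sha256": hashlib.sha256(image_bytes).hexdigest(),
        "metadata": metadata,  # e.g. device ID and capture timestamp
    }, sort_keys=True)
    return {"body": body, "hash": hashlib.sha256(body.encode()).hexdigest()}

def verify_chain(records):
    """Recompute every hash; any doctored record breaks the chain."""
    prev = "0" * 64
    for rec in records:
        body = json.loads(rec["body"])
        if body["prev"] != prev:
            return False
        if hashlib.sha256(rec["body"].encode()).hexdigest() != rec["hash"]:
            return False
        prev = rec["hash"]
    return True

# A camera publishes two frames; verification passes until one is altered.
chain = []
prev = "0" * 64
for frame in (b"frame-1-raw-bytes", b"frame-2-raw-bytes"):
    rec = make_record(prev, frame, {"device": "cam-01", "ts": 1700000000})
    chain.append(rec)
    prev = rec["hash"]

print(verify_chain(chain))  # True: chain intact
chain[0]["body"] = chain[0]["body"].replace("cam-01", "cam-99")
print(verify_chain(chain))  # False: tampering detected
```

The design choice that matters for the courtroom scenario is that each record commits to its predecessor, so altering any frame or its metadata after the fact invalidates every hash from that point forward.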
Another idea I have is a little bit crazier, and it's definitely not a fix-all.
But when I think about deep fakes, I think about the idea of someone getting accused of, let's say, a crime that they didn't commit, and there's some deep fake video that shows them doing it, and they have to maybe prove that they didn't do it.
I wonder if in the future we'll have something that's essentially self-surveillance technology. It would at first be available only to the rich, but in my demented mind, the reason a person would do this is that they feel they might be targeted by this kind of fake media attack.
In essence, let's say you're a celebrity and you think you might get accused of something, or that someone might try to blackmail you by claiming you were doing something you weren't doing.
You would pay a fee to a company that would essentially have like an app on your phone.
The app would record your location and essentially your voice full time and transmit it to a secure server that's completely locked down, not available to anyone else.
And if you ever had to prove that you weren't in location X at time Y, you could go to your own vault, the thing that belongs to you: your life, recorded, everywhere you were and all the sounds around you at the time.
You're obviously not going to show the personal moments, because those will be in there too, but you might be able to show that you were in a different city, or a different part of town.
That might be a thing that people, especially rich people who feel they might be targeted, would actually pay for as a service, to protect themselves from people who, at very low cost, can make fake content pretending they committed some kind of crime, an assault on a person, that sort of thing.
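As a thought experiment only, the alibi lookup such a vault would support could be sketched like this. The log format, the city names, and the tolerance window are all invented for illustration; nothing here reflects any real product:

```python
from datetime import datetime, timezone

def utc(h, m=0):
    """Helper: a timestamp on a fixed example day, in UTC."""
    return datetime(2024, 5, 1, h, m, tzinfo=timezone.utc)

# Hypothetical append-only vault: periodic (timestamp, location) samples
# that the app would have streamed to the locked-down server.
vault = [(utc(20), "Chicago"), (utc(21), "Chicago"), (utc(22), "Chicago")]

def alibi(log, when, tolerance_minutes=30):
    """Return the logged location nearest to `when`, or None if no
    sample falls within the tolerance window."""
    nearest = min(log, key=lambda e: abs((e[0] - when).total_seconds()))
    if abs((nearest[0] - when).total_seconds()) <= tolerance_minutes * 60:
        return nearest[1]
    return None

# A fake video claims the subject was elsewhere at 21:10 UTC;
# the vault has a sample from ten minutes earlier in Chicago.
print(alibi(vault, utc(21, 10)))  # Chicago
print(alibi(vault, utc(23, 30)))  # None: no sample within 30 minutes
```

The value of such a log as evidence would of course depend on the server being genuinely tamper-proof, which is the hard part the sketch leaves out.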
What do you think about this idea?
Is it too crazy?
Would no one do this because it's just too nuts to record yourself everywhere?
Like, what's your thought on this?
You know, it sounds like an episode of Black Mirror.
Sure.
Well, I should write for them.
Absolutely.
It is, you know, frightening to think that that might be where society is headed, but also not entirely implausible, right?
We might have some things like that.
So the blockchain piece, I think potentially there's something to that.
I'm for any and all.
Like, as I said, the pledge is like one piece of it.
I believe we've got to set norms.
I think this is a powerful way we could do it.
But the actual verification of the chain of custody of an image, for example, or a factoid or whatever, back to its origin, that would be good to have built in, at least some kind of assurance that you can trace.
It would certainly help with debunking, right?
Yeah, like this didn't just get generated and put out by some device at this IP address.
It's like the, you know, the camera owned by the National Geographic reporter on location in Gaza, like that would be pretty valuable to have and to know as a piece of metadata attached with that, as long as it can't be spoofed, which, you know, is why you would think about it in a blockchain sense.
Yeah.
I think it could be part of the equation.
Yeah, then the part about the constant monitoring as a defense against deepfakes, the self-surveillance.
Yeah.
Self-surveillance, maybe.
It raises a lot of questions.
I mean, the problem you're describing there is a fictional scenario for now, but I think some celebrities have already faced versions of it.
And we're going to have to deal with that, right?
Like it will be a situation.
Someone's going to try it.
Someone's going to try it in this world today.
Someone will try that.
Yeah.
And so like thinking about alibis and thinking about visual evidence.
Yeah.
Right now, as you teed up at the beginning, visual evidence, surveillance cameras and the like, has been sort of the gold standard.
We've supplanted memory, you know, witnesses and memories, fine.
If there's video of someone doing it, then that means they did it, and the jury will be convinced.
But the reason that works is precisely that the jury will be convinced, and they'll be convinced whether or not the video is real.
Like, could the government of whatever country you live in come up with a video of you, something that looks like you doing a thing you never did, and prosecute you on that basis?
Even in the context of a democracy and a jury of your peers, that will be very convincing.
And what do you do then?
Oh, yeah.
And I don't think we've even begun to reckon with that, but that's part of it, right?
That's part of the thing that we're facing down.
Yeah.
Okay.
Well, I think I'm out of questions.
Do you have any more, like any more comments or anything?
No, I mean, we covered a lot.
I think it was a really good conversation.
It left me hopeful.
We're all trying here.
Obviously, I'm hopeful when I'm able to talk to people who are working on this.
I also, you know, I'm like taken aback sometimes when somebody introduces a new idea that I hadn't considered.
And oh, gosh, yeah, like, what do we do about that?
So there's many aspects to what we're doing with the American Information Integrity Alliance.
We can address some parts of this.
It's still early days, and this pledge we're working on is just the beginning, but we want to tackle this problem in its many forms.
So we'll be, yeah, we'll be talking about all that and more, I'm sure.
Great.
Well, if anyone has any questions, comments, concerns about anything that they hear on this podcast, they can send that email to truthunrestricted at gmail.com.
Do you have any Twitter handle or anything you want to plug?
Sure.
I don't do Twitter anymore; I mean, I'm not really there anymore.
So look us up on Threads: Prism Meta News is on Threads.
I'm on Mastodon and Post too, but I don't really do that much on those either.
So find me on Threads.
And the newsletter is at prism meta news.substack.com.
You can check that out.
That's This Week in Misinformation.
And I've also done a voice essay on January 6th.
That's been a recurring theme.
I do a lot of January 6th sort of tracking and stuff on that.
And I mentioned earlier the audio chats we used to do on Twitter, those Misinfo Meetups.
I've released those in audio format there on my Substack as well.
So you can check out all that stuff and reach out anytime.