Joe Rogan, Jack Dorsey, Vijaya Gadde, and Tim Pool debate Twitter's content moderation, where a carnivore-diet advocate's account was locked over a lion-and-wildebeest header image while violent threats against the Covington High School students went unchecked. Dorsey defends policies aimed at surfacing a variety of perspectives but admits enforcement flaws, like suspending users for "#LearnToCode," while Gadde insists the rules target harassment, not ideology. Pool argues conservatives face disproportionate bans and ideological censorship, pushing dissent into radicalized fringe platforms and escalating offline violence. Despite Dorsey's transparency promises, Pool warns regulation risks overreach without technical understanding, leaving free speech vulnerable to advertiser-driven suppression and a minority's enforcement agenda. [Automatically generated summary]
And the reason why we decided to come together is we had, I thought, a great conversation last time, but there's a lot of people that were upset that there were some issues that we didn't discuss or didn't discuss in depth enough or they felt that I didn't press you enough.
I talked to Tim because, you know, Tim and I have talked before and he made a video about it and I felt like his criticism was very valid.
So we got on the phone and we talked about it and I knew immediately within the first few minutes of the conversation that he was far more educated about this than I was.
So I said, would you be willing to do a podcast and perhaps do a podcast with Jack?
And he said, absolutely.
So we did a podcast together.
It was really well received.
People felt like we covered a lot of the issues that they felt like I didn't bring up.
And so then Jack and I discussed it and we said, well, let's bring Tim on and then have Vijaya on as well.
His account was frozen today because of an image that he had because he's a proponent of the carnivore diet.
There's a lot of people that believe that this elimination diet is very healthy for you and it's known to cure a lot of autoimmune issues with certain people, but some people ideologically oppose it because they think it's bad for the environment or you shouldn't eat meat or whatever the reasons are.
Well, for a lot of people that have autoimmune issues, particularly psoriasis and arthritis, it's a lifesaver.
It's crazy.
Essentially, it's an autoimmune issue.
So, because he has a photo of a lion in a header eating what looks like a wildebeest or something like that, his account was locked for violating the rules against graphic violence or adult content in profile images.
That seems a little silly.
And I wanted to just mention that right away.
Now, whose decision is something like that?
Like, who decides to lock a guy's account out because it has a nature image of, you know, natural predatory behavior?
On this particular case, it's probably an algorithm that detected it and made some sort of an assessment.
But as a general rule, how we operate as a company is we rely on people to report information to us.
So if you look at any tweet, you can kind of pull down on the caret on the right and you can say report the tweet, and then you have a bunch of categories you can choose from of what you want to report.
I think this one in particular, though, is probably an algorithm.
And I'm guessing that people are already reviewing it, but there's a choice to appeal any action, and that would go to a human to make sure that it is actually a violation of the rules, or in this case, if it's not, then it would be removed.
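To make the flow Jack is describing concrete, here is a minimal sketch of a report-and-appeal pipeline in Python. The category names, actions, and triage logic are illustrative assumptions for this sketch, not Twitter's actual system:

```python
from dataclasses import dataclass
from enum import Enum, auto

class ReportCategory(Enum):
    # Illustrative categories; the real report taxonomy differs.
    GRAPHIC_VIOLENCE = auto()
    PRIVATE_INFO = auto()
    ABUSE_OR_HARASSMENT = auto()

@dataclass
class Enforcement:
    tweet_id: int
    action: str          # e.g. "locked", "pending_review", "reversed"
    appealed: bool = False

def automated_triage(tweet_id: int, category: ReportCategory) -> Enforcement:
    """An algorithm may act first, as in the lion-header case."""
    if category is ReportCategory.GRAPHIC_VIOLENCE:
        return Enforcement(tweet_id, "locked")
    return Enforcement(tweet_id, "pending_review")

def human_appeal_review(enf: Enforcement, actual_violation: bool) -> Enforcement:
    """Any action can be appealed; a person makes the final call."""
    enf.appealed = True
    enf.action = "locked" if actual_violation else "reversed"
    return enf

# The nature-photo header: the algorithm locks it, a human reverses it on appeal.
enf = automated_triage(tweet_id=42, category=ReportCategory.GRAPHIC_VIOLENCE)
print(human_appeal_review(enf, actual_violation=False).action)  # reversed
```

The point of the two stages mirrors what is said below: whether one person reports something or 10,000 do, the final removal decision passes through a human review.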
I think this does reveal part of the challenges that we face as a global platform at scale.
I don't know what happened in this case.
Sorry, it's hard for me to talk about it.
But what I would say is that it doesn't really matter if one person reports it or 10,000 people report it.
We're going to review the reports and we're going to make an assessment.
And we're never going to kick someone off the platform finally and forever without a person taking a look and making sure that it's an actual violation of our rules.
It could even be other carnivore diet proponents who are just jerks that don't like him because he's getting all the love.
People are weird.
Yeah, that's true.
The idea, though, is that it does kind of highlight a bit of a flaw in that it's good that someone can – because you might see something awful, someone doxing someone or something like that, and then you can take that and report it, and then people can see it and get rid of it and minimize the damage that's done.
I think, I mean, I'd be interested to hear your ideas around this, but our perspective right now is around this concept of a variety of perspectives.
Like, are we encouraging more echo chambers and filter bubbles, or are we at least showing people other information that might be counter to what they see?
And there's a bunch of research that would suggest that further emboldens their views.
There's also research that would suggest that it at least gives them a consideration about what they currently believe.
Given the dynamics of our network being completely public, we're not organized around communities, we're not organized around topics, we have a little bit more freedom to show more of the spectrum of any one particular issue.
And I think that's how we would approach it from the start.
That said, we haven't really dealt much with misinformation more broadly across, like, these sorts of topics.
We've focused our efforts on elections and, well, mainly elections right now.
I think that the tough part of this is really, and I'd love to have a discussion about this, is do you really want corporations to police what's true and not true?
But the places that we focus on is where we think that people are going to be harmed by this in a direct and tangible way that we feel a responsibility to correct.
So years ago, we passed a policy that we call our hateful conduct policy, and that prohibits targeting or attacking someone based on their belonging in any number of groups, whether it's because of their religion or their race or their gender, their sexual orientation, their gender identity.
So it's something that's broad-based: you can't choose to attack people because of these characteristics.
But can we just take a step back and try to level set what we're trying to do with our policies?
Because I think it's worth doing that.
So as a high level, I personally, and this is my job to run the policy team, I believe that everyone has a voice and should be able to use it.
And I want them to be able to use it online.
Now where we draw a line is when people use their voice and use their platform to abuse and harass other people to silence them.
Because I think that that's what we've seen over the years is a number of people who have been silenced online because of the abuse and harassment they've received and they either stop talking or they leave the platform in its entirety.
If you look at free expression and free speech laws around the world, they're not absolute.
They're not absolute.
There's always limitations on what you can say and it's when you're starting to endanger other people.
Yeah, examples where you guys were alerted multiple times and did nothing. Like when Antifa doxed a bunch of law enforcement agents, some of the tweets were removed, but since September this tweet is still live with a list of private phone numbers and addresses, yet Kathy Griffin...
She's fine.
The guy who threatened the lives of these kids in Covington and said, lock them in the school and burn it down, you did nothing.
What did you guys do with Kathy Griffin when she was saying she wanted the names of those young kids wearing the MAGA hats at Covington High School?
So in that particular case, you know, our doxing policy really focuses on posting private information, and we don't consider names to be private.
We consider your home address, your home phone number, your mobile phone number, those types of things to be private.
So in that particular case, we took what I think now is probably a very literal interpretation of our policy and said that that was not a doxing incident.
And given the context of what was going on there, that if I was doing this all over again, I would probably ask my team to look at that through the lens of what was the purpose behind that tweet?
And if the purpose was, in fact, to identify these kids to either dox them or abuse and harass them, which it probably was, then we should be taking a more expansive view of that policy and including that type of content.
This is also an evolution in prioritization as well.
One of the things we've come to recently is we do need to prioritize these efforts, both in terms of policy, enforcement, how we're thinking about evolving them.
One of the things that we want to focus on as number one is physical safety.
And this leads you immediately to something like doxing.
And right now, the only way we take action on a doxing case is if it's reported.
What we want to move to is to be able to recognize those in real time, at least in the English language, recognize those in real time through our machine learning algorithms, and take the action before it has to be reported.
So we're focused purely right now on going after doxing cases with our algorithms so that we can be proactive.
That also requires a much more rigorous appeals process to correct us when we're wrong.
But we think it's tightly scoped enough.
It impacts the most important thing, which is someone's physical safety.
Once we learn from that, we can really look at the biggest issue with our system right now, which is that all the burden is placed upon the victim.
So we only act based on reports.
We don't have a lot of enforcement, especially with more of the takedowns that are run through machine learning and deep learning algorithms.
I mean, we prioritize the queue based on severity, and the thing that will mark severity is something like physical safety or private information or whatnot.
So generally, we try to get through everything, but we have to prioritize that queue even coming in.
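A rough sketch of what "prioritizing the queue based on severity" can look like, using a heap so that reports touching physical safety or private information are reviewed first. The severity ordering and labels are invented for illustration:

```python
import heapq
import itertools

# Lower number = higher priority; values are invented for the example.
SEVERITY = {
    "physical_safety": 0,
    "private_information": 1,
    "abuse_or_harassment": 2,
    "other": 3,
}

counter = itertools.count()  # tie-breaker so equal severities stay first-in, first-out
queue = []

def enqueue(report_id: str, category: str) -> None:
    heapq.heappush(queue, (SEVERITY.get(category, 3), next(counter), report_id))

def next_report() -> str:
    return heapq.heappop(queue)[2]

enqueue("r1", "other")
enqueue("r2", "physical_safety")      # doxing-style reports jump the line
enqueue("r3", "private_information")
print([next_report() for _ in range(3)])  # ['r2', 'r3', 'r1']
```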
And we have to look at that, but we also have to look in the context.
Because we also have, I think we talked about this a little bit in the last podcast, but we have gamers on the platform who are saying exactly that to their friends that they're going to meet in the game tonight.
And without the context of that relationship, without the context of the conversation that we're having, we would take the exact same action on them incorrectly.
Fact check me on that, but that's basically the conversation that was had.
There's a guy at Disney, he posted a picture from Fargo of someone being tossed in a wood chipper, and he says, I want all these MAGA kids done like this.
You had another guy who specifically said, lock them in the school, burn it down, said a bunch of disparaging things, and then said, if you see them, fire on them.
Well, again, as I said earlier, Joe, we don't usually automatically suspend accounts with one violation because we want people to learn.
We want people to understand what they did wrong and give them an opportunity not to do it again.
And it's a big thing to kick someone off the platform.
And I take that very, very seriously.
So I want to make sure that when someone violates our rules, they understand what happened and they're given an opportunity to get back on the platform and change their behavior.
And so in many of these cases, what happens is we will force someone to acknowledge that their tweet violated our rules, force them to delete that tweet before they can get back on the platform.
And in many cases, if they do it again, we give them a timeout, which is like seven days, and we say, look, you've done it again.
Potentially, but our parody rules are very specific: if you have an account that is a parody account, you need to say that it is a parody account so you don't confuse people.
I understand why reasonable people would have different impressions of this.
I'm just going through and telling you what they are just so we can have all the facts on the table and then we can debate them.
And then the last one, we found a bunch of things that he posted that we viewed as incitement of abuse against Leslie Jones.
So there's a bunch of them, but the one that I like to look at, which really convinced me, is he posted two doctored tweets that were supposedly by Leslie Jones.
They were fake tweets.
The first one said... and then the second one said: the goddamn slur for a Jewish person at Sony ain't paid me yet, damn bix nood better pay up.
I hope we all agree that doxing is something that Twitter should take action on.
And it can threaten people in real life.
And I take an enormous amount of responsibility for that because I fear daily for the things that are happening on the platform that are translating into the real world.
Chuck Johnson said that he was preparing something to take out DeRay McKesson.
And in a journalistic context, people take this to mean he was going to do a dossier or some kind of hit piece on DeRay.
He was permanently banned.
And my understanding, and it's been a long time since I've read this, there was some leaked emails, I think, from Dick Costolo where he said – maybe it wasn't Dick.
I don't want to drag Dick.
I don't know who it was exactly.
They said, I don't care.
Just get rid of him.
And he was off.
So you have...
And again, maybe there's some hidden context there.
Number one, we haven't done enough education about what our rules are.
Because a lot of people violate our rules and they don't even know it.
Like, some of the statistics that we've looked at, like, for a lot of first-time users of the platform, if they violate the rule once, almost two-thirds of them never violate the rules again.
So we're not talking about, like, a bunch of people violating accidentally over and over.
Like, if they know what the rules are, most people can avoid it.
So we have a lot of work to do in education so people really understand what the rules are in the first place.
The other thing we have to do to address these allegations that we're doing this from a biased perspective is to be really clear about what types of behavior are caught by our rules and what types are not.
And to be transparent within the product.
So when a particular tweet is found to be in violation of our rules, being very, very clear, like this tweet was found to be in violation of this particular rule.
And that's all work that we're doing.
Because we think the combination of education and transparency is really important, particularly for an open platform like Twitter.
It's just part of who we are, and we have to build it into the product.
I'd appreciate your particular thoughts, though, on those examples that he described, when he's talking about someone saying they should throw these children into a wood chipper versus Chuck Johnson saying he wants to prepare a dossier to take this guy out, or how did he say it?
It's about a pattern and practice of violating our rules.
And we don't want to kick someone off for one thing.
But if there's a pattern and practice like there was from Milo, we are going to have to take action at some point because we can't sit back and let people be abused and harassed and silenced on the platform.
But in this particular case, it's how the speech is being used.
This is a new vector of attack that people have felt that I don't want to be on this platform anymore because I'm being harassed and abused and I need to get the hell out.
Will people harass and abuse me all day and night?
You don't do anything about that.
My notification is permanently locked at 99. You have it worse than I do.
I mean, you get substantially more followers.
And I don't click the notification tab anymore because it's basically just harassment.
So this is a really funny anecdote.
I was covering a story in Berkeley and someone said, if you see him, attack him.
I'm paraphrasing.
They said basically to swing at me, take my stuff, steal from me.
And Twitter told me after review it was not a violation of their policy.
Somebody made an allusion to me being a homosexual, and I reported that; instantly, gone.
So for me, I'm like, well, of course.
Of course Twitter is going to enforce the social justice aspect of their policy immediately, in my opinion, probably because you guys have PR constraints and you're probably nervous about that.
But when someone actually threatens me with a crime and incites their followers to do it, nothing got done.
The reason I bring him up is that Oliver Darcy, one of the lead reporters covering Alex Jones and his content, said on CNN that it was only after media pressure did these social networks take action.
So that's why I bring him up specifically because it sort of implies you are under PR constraints to get rid of him.
You have to look at the full context on the spectrum here.
Because one of the things that happened over a weekend is what Alex mentioned on your podcast with him.
He was removed from the iTunes podcast directory.
That was the linchpin for him because, as he said, it drove all his traffic to basically zero.
Immediately after that, we saw our peer companies, Facebook, Spotify, YouTube, also take action.
We did not.
We did not because when we looked at our service and we looked at the reports on our service, we did not find anything in violation of our rules.
Then we got into a situation where suddenly a bunch of people were reporting content on our platform, including CNN, who wrote an article about all the things that might violate our rules that we looked into.
And we gave him one of the warnings.
And then we can get into the actual details.
But we did not follow.
We resisted just being like a domino with our peers because it wasn't consistent with our rules and the contract we put before our customers.
So there were three separate incidents that came to our attention after the fact that were reported to us by different users.
There was a video that was uploaded that showed a child being violently thrown to the ground and crying.
So that was the first one.
The second one was a video that we viewed as incitement of violence.
I can read it to you.
It's a little bit of a transcript.
Sure.
But now it's time to act on the enemy before they do a false flag.
I know the Justice Department's crippled a bunch of followers and cowards, but there's groups, there's grand juries, there's you called for it.
It's time politically, economically, and judiciously, and legally and criminally to move against these people.
It's got to be done now.
Get together the people you know aren't traitors, aren't cowards, aren't helping their frickin' bets, hedging their frickin' bets like all these other assholes do, and let's go, let's do it.
So people need to have their, and then there's a bunch of other stuff, but at the end, so people need to have their battle rifles ready and everything ready at their bedsides, and you've got to be ready because the media is so disciplined in their deception.
There are certain types of situations where if you were reporting on, you know, war zone and things that might be happening, we would put an interstitial on that type of content that's graphic or violent, but we didn't feel that that was the context here.
Well, there's a video that's been going around that was going around a few...
Four or five weeks ago, the one where the girls were yelling at that big giant guy and the guy punched that girl in the face and she was like 11 years old.
I saw that multiple times on Twitter.
That was one of the most violent things I've ever seen.
This giant man punched this 11-year-old girl in the face.
So the third strike that we looked at was a verbal altercation that Alex got into with a journalist, and in that altercation, which was uploaded to Twitter, there were a number of statements: "eyes of the rat," "even more evil-looking person," "he's just scum," "you're a virus to America and freedom," "smelling like a possum that climbed out of the rear end of a dead cow."
You look like a possum that got caught doing some really, really nasty stuff in my view.
It's a preposterous volume that you guys have to deal with, and that's one of the things that I wanted to get into with Jack when I first had him on, because when my thought, and I wasn't as concerned about the censorship as many people were, my main concern was, what is it like to start this thing that's kind of for fun, and then all of a sudden it becomes the premier platform for free speech on the planet Earth?
It is that, but it's also a platform that's used to abuse and harass a lot of people and used in ways that none of us want it to be used, but nonetheless it happens.
And I think it's an enormously complicated challenge for any company to do content moderation at scale.
And that's something that we are sitting down thinking about.
Then Alex Jones confronts him in a very aggressive and mean way, and that's your justification for, or I should say, I inverted the timeline.
Basically, you have someone who's relentlessly digging through stuff, insulting you, calling you names, sifting through your history, trying to find anything they can to get you terminated, going on TV even, writing numerous stories.
You confront them and say, you're evil, and you say a bunch of really awful mean things.
But you have a journalist who recently went on TV and said CPAC is a bunch of gullible conservatives being fed red meat by grifters.
You can tell this guy's not got an honest agenda.
So what you have to me, it looks like the conservatives, to an extent, probably will try and mass flag people on the left.
But from an ideological standpoint, you have the actual, you know, whatever people want to call it, sect of identitarian left that believe free speech is a problem, that have literally shown up in Berkeley burning free speech signs.
And then you have conservatives who are tweeting mean things.
And the conservatives are less likely, I think it's fair to point out, less likely to try and get someone else banned because they like playing off them.
And the left is targeting them.
So you end up having disproportionate- I feel like there are a lot of assumptions in what you're saying.
Quillette recently published an article where they looked at 22 high-profile bannings from 2015 and found 21 of them were only on one side of the cultural debate.
So we don't make these rules in a vacuum, just to be clear.
We have a bunch of people all around the world who give us context and the types of behavior they're seeing, how that translates into real-world harm.
And they give us feedback.
And they tell us, like, you should consider different types of rules, different types of perspectives, different...
Like, for example, when we try to enforce hateful conduct in our hateful conduct policy in a particular country, we are not going to know all the slur words that are used to target people of a particular race or a particular religion.
So we're going to rely on building out a team of experts all around the world who are going to help us enforce our rules.
So in the particular case of misgendering, I'm just trying to pull up some of the studies that we looked at, but we looked at the American Academy of Pediatrics and looked at the number of transgender youths that were committing suicide.
It's an astronomical, I'm sorry, I can't find it right now in front of me.
It's a really, really high statistic that's like 10 times the normal suicide rate of teenagers.
And we looked at the causes of what that was happening.
And a lot of it was not just violence towards those individuals, but it was bullying behavior.
And what were those bullying behaviors that were contributing to that?
And that's why we made this rule.
Because we thought, and we believe, that those types of behaviors were happening on our platform, and we wanted to stop it.
Now there are exceptions to this rule.
We don't, and this is all, this isn't about, like, public figures, and there's always going to be public figures that you're going to want to talk about, and that's fine.
But this is about, are you doing something with the intention of abusing and harassing a trans person on the platform?
And are they viewing it that way and reporting it to us so that we take action?
So I will just state, I actually agree with the rule.
From my point of view, I agree that bullying and harassing trans people is entirely wrong.
I disagree with it.
But I just want to make sure it's clear to everybody who's listening.
My point is simply that Ben Shapiro went on a talk show and absolutely refused.
And that's his schtick.
And he's one of the biggest podcasts in the world.
So if you have all of his millions upon millions of followers who are looking at this rule saying this goes against my view of the world, and it's literally 60-plus million in this country, you do have a rule that's ideologically bent.
And it's true.
You did the research.
You believe this.
Well, then you have Ben Shapiro, who did his research and doesn't believe it.
I understand, but you essentially created a protected class, if this is the case, because despite these studies and what these studies are showing...
There's a gigantic suicide rate amongst trans people, period.
It's 40%.
It's outrageously large.
Now, whether that is because of gender dysphoria, whether it's because of the complications from sexual transition surgery, whether it's because of bullying, whether it's because of this awful feeling of being born in the wrong gender, all that is yet to be determined.
The fact is they've shown that there's a large amount of trans people that are committing suicide.
I don't necessarily think that that makes sense in terms of people from someone's perspective, like a Ben Shapiro, saying that if you are biologically female, if you are born with a double X chromosome, you will never be XY. If he says that, that's a violation of your policy.
Epic, world-class, legend tennis player, who happens to be a lesbian, is being harassed because she says that she doesn't believe that trans women, meaning someone who is biologically male, who transitions to a female, should be able to compete in sports against biological females.
This is something that I agree with.
This is something I have personally experienced a tremendous amount of harassment because I stood up when there was a woman who was a trans woman who was fighting biological females in mixed martial arts fights and destroying these women.
And I was saying, just watch this and tell me this doesn't look crazy to you.
Well, my point is, You should be able to express yourself.
And if you say that you believe someone is biologically male, even though they identify as a female, that's a perspective that should be valid.
First of all, it's biologically correct.
So we have a problem in that if your standards and your policies are not biologically accurate, then you're dealing with an ideological policy.
I don't want to target trans people.
I don't want to harass them.
I'll call anybody whatever they want.
If you want to change your name to a woman's name and identify as a woman, I'm 100% cool with that.
By saying, I don't think that you should be able to compete as a woman, this opens me up for harassment.
But going into, like, Meghan Murphy, for instance, right?
You can call that targeted harassment.
If Meghan Murphy, who is – for those that don't know, she's a radical feminist who refuses to use the transgender pronouns – If she's in an argument with a trans person over whether or not they should be allowed in sports or in biologically female spaces, and she refuses to use their pronoun because of her ideology, you'll ban them.
My understanding, and I don't have the tweet by tweet the way that I did for the others, but my understanding is that she was warned multiple times for misgendering an individual that she was in an argument with, and this individual is actually bringing a lawsuit against her in Canada as well.
So you have an argument between two people, and you have a rule that enforces only one side of the ideology, and you've banned only one of those people.
If she's saying a man is never a woman, if that's what she's saying, and then biologically she's correct, we obviously have a debate here.
This is not a clear-cut...
This is not something like you can say, water is wet, you know, this is dry.
This is not like something you can prove.
This is something where you have to acknowledge that there's an understanding that if someone is a trans person, we all agree to consider them a woman and to think of them as a woman, to talk to them and address them with their preferred name and their preferred pronouns.
But biologically, this is not accurate.
So we have a divide here.
We have a divide between the conservative estimation of what's happening and then the definition that's the liberal definition of it.
And I think what I'm trying to say is that it's not that you can't have those viewpoints.
It's that if you're taking those viewpoints and you're targeting them at a specific person in a way that reflects your intent to abuse and harass them.
So you're having an individual who is debating a high-profile individual in her community, and she's expressing her ideology versus hers, and you have opted to ban one of those ideologies.
Because I know people who have specifically begun using insults of animals to avoid getting kicked off the platform for breaking the rules.
Certain individuals who have been suspended now use certain small woodland creatures in place of slurs, so they're not really insulting you, and it's fine.
But there are people who consider themselves trans species.
Now, I'm not trying to belittle the trans community by no means.
I'm just trying to point out that you have a specific rule for one set of people.
So there are people who have general body dysphoria.
You don't have rules on that.
There are people who have actually amputated their own arms.
You don't have rules on that.
You have a very specific rule set.
And more importantly, in the context of a targeted conversation, I can say a whole bunch of things that would never be considered a rule break, but that one is, which is ideologically driven.
And even in this case, it wasn't just going against this particular rule, but also things that were more ban-evasive as well, including taking a screenshot of the original tweet, reposting it, which is against our Terms of Service.
I understand what you're saying, but I just want to make sure I point out she was clearly doing it as an effort to push back on what she viewed as an ideologically driven rule.
This is a debate where there's a division, and there's a division between people that think that trans women are invading biological female spaces and making decisions that don't benefit these biological females, cisgender, whatever you want to call them.
This is an actual debate, and it's a debate amongst progressive people, amongst left-wing people, and it's a debate amongst liberals.
This is, I mean, I would imagine the vast majority of people in the LGBT community are, in fact, on the left.
And this is one example of that.
So you have a protected class that's having an argument with a woman who feels like there's an ideological bent to this conversation that is not only not accurate, but not fair.
And she feels like it's not fair for biological women.
I got sent a screenshot from somebody, and maybe it's faked.
I think it was real.
They were having an argument with someone on Twitter and responded with, dude, comma, you don't know, blah, blah, blah.
And they got a suspension and a lockout and had to delete the tweet because the individual, using a cartoon avatar, apparently went by the name Sam… Thank you for not being offended.
It's a really important thing to go over all the nuances of this particular subject because I think that one in particular highlights this idea of where the problems lie in having a protected class.
There's this progressive perspective of racism that it's only possible if you're from a more powerful class.
It's only punching down.
That's the only racism.
I don't think that makes any sense.
I think racism is looking at someone of whatever race and deciding that they are in fact less, or less worthy, or less valuable, whatever it is.
That takes place across the platform against white people.
Now I'm not saying white people need to be protected.
I know it's easier being a white person in America.
It's a fact.
But it's hypocritical.
To have a policy where you can make fun of white people all day long, but if you decide to make fun of Asian folks or, you know, fill in the blank, that is racist.
But making fun of white people isn't, and it doesn't get removed.
My understanding is that you guys started banning people officially under these policies around 2015, and all the tweets she made were prior to that, and so you didn't enforce the old tweets.
Just to address that point, and I think Jack talked about this a little bit, like this is where right now we have a system that relies on people to report it to us, which is a huge burden on people.
And especially if you happen to be a high-profile person, and Tim, you would understand this, you're not going to sit there and report every tweet.
You're not going to go through tweet by tweet as people respond to you and report it.
People tell us this all the time.
So this is where we have to start getting better at identifying when this is happening and taking action on it without waiting for somebody to tell us it's happening.
So, yes, there is a danger of the algorithms missing context, and that's why we really want to go carefully into this, and this is why we've scoped it down, first and foremost, to doxing.
Which is, at least, first, it hits our number one goal of protecting physical safety.
Like, making sure that nothing done online will impact someone's physical safety offline, on our platform, in this case.
The second is that there are patterns around doxing that are much easier to...
There are exceptions, of course, because you could dox a representative's public office phone number and email address, and the algorithm might catch that, not have the context that this is a U.S. representative and this information is already public.
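As a toy illustration of the tension Jack describes: pattern matching can flag phone numbers in real time, but without some notion of already-public information it would flag a representative's office line too. The regex and allowlist here are assumptions for the sketch, not a real system:

```python
import re

# A deliberately naive US-style phone pattern; real systems use far richer signals.
PHONE_RE = re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b")

# Known-public numbers (e.g., a representative's office line) that should not trip the filter.
PUBLIC_ALLOWLIST = {"202-555-0100"}

def flag_possible_doxing(tweet: str) -> bool:
    """Flag tweets that appear to post a phone number, unless it's known-public."""
    return any(m not in PUBLIC_ALLOWLIST for m in PHONE_RE.findall(tweet))

print(flag_possible_doxing("his home number is 415-555-0123"))       # True -> human review
print(flag_possible_doxing("call your rep's office: 202-555-0100"))  # False -> public exception
```

This is why the appeals process matters: an algorithm like this has no context for what is already public, so humans correct the false positives.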
So let me ask you, the rules you have are not based in US law, right?
US law doesn't recognize restrictions on hate speech.
It's considered free speech.
So if you want to stand in a street corner and yell the craziest things in the world, you're allowed to.
On your platform, Twitter, you're not allowed to.
So even in that sense alone, your rules do have an ideology behind them.
I don't completely disagree.
I think, you know, I don't want harassment.
But the reason I bring this up is getting into the discussion about democratic health of a nation.
So I think it can't be disputed at this point that Twitter is extremely powerful in influencing elections.
I'm pretty sure you guys published recently a bunch of tweets from foreign actors that were trying to meddle in elections.
So even you as a company recognize that foreign entities are trying to manipulate people using this platform.
So there's a few things I want to ask beyond this, but...
Wouldn't it be important then to just – at a certain point, Twitter becomes so powerful in influencing elections and giving access to even the president's tweets that you should allow people to use the platform based on the norms of U.S. law.
First Amendment, free speech, right to expression on the platform.
This is becoming too much of a – it's becoming too powerful in how our elections are taking place.
So even if you are saying, well, hate speech is our rule, and a lot of people agree with it.
If at any point one person disagrees, there's still an American who has a right to access to the public discourse, and you've essentially monopolized that, and not completely, but for the most part.
So isn't there some responsibility on you to guarantee, at a certain extent, less regulation happen, right?
Like, look, if you recognize foreign governments are manipulating our elections, then shouldn't you guarantee the right to an American to access this platform to be involved in the electoral process?
I'm not sure I see the tie between those things, but I will address one of your points, which is we're a platform that serves the world.
So we're global.
75% of the users of Twitter are outside of the United States.
So we don't apply laws of just one country when we're thinking about it.
We think about how do you have a global standard that can meet the threshold of as many countries as possible, because we want all the people in the world to be able to participate, and also meet elections like the Indian election coming up as well.
I'm not sure what you're talking about, but we did have our Vice President of Public Policy testify in front of Indian Parliament a couple weeks ago, and they were really focused on election integrity and safety and abuse and harassment of women and political figures and the likes.
So my concern, I guess, is I recognize you're a company that serves the world, but as an American, I have a concern that the democracy I live in, the democratic republic, I'm sorry, and the democratic functions are healthy.
One of the biggest threats is Russia, Iran, China.
They're trying to meddle in our elections using your platform, and it's effective, so much so that you've actually come out and removed many people.
Covington was apparently started by an account based in Brazil.
The Covington scandal where this fake news goes viral.
It was reported by CNN that it was a dummy account.
They were trying to prop it up and they were pushing out this out of context information.
So they do this.
They use your platform to do it.
You've now got a platform that is so powerful in our American discourse that foreign governments are using it as weapons against us.
And you've taken a stance against the laws of the United States.
I don't mean like against like you're breaking the law.
I mean you have rules that go beyond the scope of the U.S. which will restrict American citizens from being able to participate.
Meanwhile, foreign actors are free to do so so long as they play by your rules.
So our elections are being threatened.
By the fact that if there's an American citizen who says, I do not believe in your misgendering policy, and you ban them, that person has been removed from public discourse on Twitter.
Let's say in protest, an individual repeatedly says, no, I refuse to use your pronouns, in like Meghan Murphy's case.
She's Canadian, so I don't want to use her specifically.
The point I'm trying to make is, at a certain level, there are going to be American citizens who have been removed from this public discourse, which has become so absurdly powerful that foreign governments weaponize it, because you have different rules than America has.
Just to be clear, my understanding, and I'm not an expert on all the platforms, is that foreign governments use multiple, multiple different ways to interfere in elections.
It is not limited to our platform, nor is it limited to social media.
If you're going to restrict American citizens from participating on a platform where even the president speaks, and it's essentially you have a privately owned public space, if I could use an analogy that would be most apt, and you've set rules that are not recognized by the US.
In fact, when it came to a Supreme Court hearing, they said hate speech is not a violation.
It's actually protected free speech.
So they're actually at odds.
So there might be someone who says, I refuse to live by any other means than what the Supreme Court has set down.
That means I have a right to hate speech.
You will ban them.
That means your platform is so powerful, it's being used to manipulate elections, and you have rules that are not recognized by the government to remove American citizens from that discourse.
So as a private platform, you've become too powerful to not be regulated if you refuse to allow people free speech.
So, yes, we do have an issue with foreign entities and misinformation.
And this is an extremely complicated issue, which we're just beginning to understand and grasp and take action on.
I don't think that issue is solved purely by not being more aggressive on something else that is taking people off the platform entirely as well, which is abuse and harassment.
It's a cost-benefit analysis, ultimately, and our rules are designed, again...
You know, they don't always manifest this way in the outcomes, but in terms of what we're trying to drive is opportunity for every single person to be able to speak freely on the platform.
And in part of that, the recognition that we're taking action on is that when some people encounter particular conduct, that we see them wanting to remove themselves from the platform completely, which goes against that principle of enabling everyone to speak or giving people the opportunity to speak.
The rules are focused on the opportunities presented, and we have particular outcomes to make sure that those opportunities are as large as possible.
The point I made about foreign governments was just to explain the power that your platform holds and how it can be weaponized.
We'll separate that now.
When Antifa shows up to Berkeley and bashes a guy over there with a bike lock, that is suppressing his speech.
That's an act of physical violence.
However, when Antifa links hands and blocks a door so that no one can go to an event, that is also legally allowed.
So what you're saying is that if someone is engaging in behavior, such as going on Twitter and shouting someone down relentlessly, that's something external to what happens in the world under the U.S. government.
I am allowed to scream very close to you and not let you speak in public.
But on Twitter, you don't allow that.
So there's a dramatic difference between what Twitter thinks is okay and what the U.S. government thinks is okay, how our democracy functions and how Twitter functions.
The issue I'm pointing out is that we know Twitter is becoming extremely important in how our public discourse is occurring, how our culture is developing, and who even gets elected.
So if you have rules that are based on a global policy… That means American citizens who are abiding by all of the laws of our country are being restricted from engaging in public discourse because you've monopolized it.
On the other hand, if the people that are on the platform... I see what you're saying.
We can see that at a certain point, Twitter is slowly gaining, in my opinion, too much control from your personal ideology based on what you've researched, what you think is right, over American discourse.
If Twitter, and again, this is my opinion, I'm not a lawmaker, but I would have to assume if Twitter refuses to say, in the United States, you are allowed to say what is legally acceptable, period, then lawmakers' only choice will be to enforce regulation on your company.
Actually, Tim, I spent quite a bit of time talking to lawmakers as part of my role.
I head up public policy.
I spent a lot of time in D.C. I want to say that Jack and I have both spent a lot of time in D.C. And I think from the perspective of lawmakers, they, across the spectrum, are also in favor of policing abuse and harassment online and bullying online.
Those are things that people care about because they affect their children, and they affect their communities, and they affect individuals.
And so I don't think that, and as a private American business, we can have different standards than what an American government-owned corporation or American government would have to institute.
Those are two different things.
And I understand your point about the influence, and I'm not denying that.
Certainly, Twitter is an influential platform.
But, like anything, whether it's the American law or the rules of Twitter or the rules of Facebook or rules of any platform, there are rules.
And those rules have to be followed.
So it is your choice whether to follow those rules and to continue to participate in a civic dialogue, or it is your choice to not do that.
But at a certain point, you should not have the right to control what people are allowed to say.
Look, I'm a social liberal.
I think we should regulate you guys because you are unelected officials running your system the way you see fit against the wishes of a democratic republic.
And there are people who disagree with you who are being excised from public discourse because of your ideology.
You think we shouldn't have rules that- Well, I'm frustrated because of the hypocrisy of when I see the flow of one direction.
And then what I see are Republican politicians, who in my opinion are just too ignorant to understand what the hell's going on around them.
And I see people burning signs that say free speech.
I see you openly saying, we recognize the power of our platform, and we're not going to abide by American norms.
I see the manipulation of Twitter in violation of our elections.
I see Democratic operatives in Alabama waging a false flag campaign using fake Russian accounts.
And the guy who runs that company has not been banned from your platform.
Even after it's been written by the New York Times, he was doing this.
So we know that not only are people manipulating your platform, you have rules that remove honest American citizens with bad opinions who have a right to engage in public discourse.
And it's like you recognize it, but you like having the power?
I'm not quite sure at what point— So just to get back to my point, so you believe that Twitter should not have any rules about abuse and harassment or any sort of hate speech on the platform?
You're asking us to comply with the U.S. law that would criminalize potential speech and put people in jail for it, and you're asking us to enforce those standards.
However, you get into really dangerous territory if someone accidentally tweets an N and you assume they're trying to engage in a harassment campaign, which is why I said let's talk about learn to code.
So if you have a direct message and someone says something terrible and then you receive a death threat and you report that to us, then we would read it because you've reported it to us.
So if Tim writes an N, and I write an I, and Jamie writes a G, can you go into our direct messages and say, hey, let's fuck with Jack, and we're going to write this stuff out, and we're going to do it, and let's see if they ban us.
So there was a situation, I guess about a month ago or so, where a number of journalists were receiving a variety of tweets, some containing learn to code, some containing a bunch of other coded language that was wishes of harm.
These were thousands and thousands of tweets being directed at a handful of journalists.
And we did some research and what we found was a number of the accounts that were engaging in this behavior, which is tweeting at the journalists with this either Learn to Code or things like Day of the Rope and other coded language, were actually ban evasion accounts.
That means accounts that had been previously suspended.
And we also learned that there was a targeted campaign being organized off our platform to abuse and harass these journalists.
Can we pause this because this is super confusing for people who don't know the context.
The learn to code thing is in response to people saying that people that are losing their jobs like coal miners and truck drivers and things like that could learn to code.
It was almost like in jest initially.
Or if it wasn't in jest initially, it was so poorly thought out as a suggestion that people started mocking it, right?
People are using it to mock how stupid the idea of taking a person who's uneducated, who's in their 50s, who should learn some new form of vocation, and then someone says, learn to code.
And so then other people, when they're losing their job or when something's happening, people would write, learn to code, because it's a meme.
I would just characterize learn to code as a meme that represents the elitism of modern journalists and how they target certain communities with disdain.
So to make that point, there are people who have been suspended for tweeting something like, I'm not too happy with how BuzzFeed reported the story, hashtag learn to code.
Making representation of these people are snooty elites who live in ivory towers.
But again...
This is a meme that has nothing to do with harassment, but some people might be harassing somebody and might tweet it.
Why would we expect to see – even still today, I'm still getting messages from people with screenshots saying I've been suspended for using a hashtag.
And the editor-in-chief of The Daily Caller, he quote-tweeted a video from The Daily Show with hashtag learn to code, and he got a suspension for it.
It was, again, a specific set of issues that we were seeing targeting a very specific set of journalists.
And it wasn't just the learn to code.
It was a couple of things going on.
A lot of the accounts tweeting learn to code were ban evaders, which means they've previously been suspended.
A lot of the accounts had other language in them, or tweets had other language like day of the brick, day of the rope, oven ready.
These are all coded meanings for violence against people.
And so people who are receiving this were receiving hundreds of these.
In what appeared to us to be a coordinated harassment campaign.
And so we were trying to understand the context of what was going on and take action on them.
Because again, I don't know, Joe, if you've ever been the target of a dogpiling event on Twitter, but it is not particularly fun when thousands of people or hundreds of people are tweeting at you and saying things.
And that can be viewed as a form of harassment.
It's not about the individual tweet.
It is about the volume of things that are being directed at you.
And so in that particular case, we made the judgment call, and it is a judgment call, to take down the tweets that were responding directly to these journalists saying learn to code, even if they didn't have a wish of harm specifically attached to them, because of what we viewed as a coordinated campaign and, again, like I was saying, some of the other signals and coded language.
We were worried that learn to code was taking on a different meaning in that particular context.
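Since the point here is volume rather than any individual tweet, a sketch of a volume-based signal might look like this: count mentions of a target inside a sliding time window and flag when a threshold is crossed. The window and threshold numbers are invented for the example:

```python
from collections import defaultdict, deque

WINDOW_SECONDS = 3600      # invented: look at the last hour
DOGPILE_THRESHOLD = 500    # invented: this many mentions suggests a pile-on

mentions: dict[str, deque] = defaultdict(deque)

def record_mention(target: str, timestamp: float) -> bool:
    """Record a mention of a target; return True when it looks like a dogpile."""
    window = mentions[target]
    window.append(timestamp)
    while window and window[0] < timestamp - WINDOW_SECONDS:
        window.popleft()   # drop mentions older than the window
    return len(window) >= DOGPILE_THRESHOLD

# Simulate 600 replies to one journalist within a few minutes.
flagged = any(record_mention("@journalist", t) for t in range(600))
print(flagged)  # True: volume alone trips the signal, regardless of each tweet's content
```

The trade-off Tim raises follows directly: a purely volume-based signal cannot distinguish a coordinated harassment campaign from a meme that thousands of unrelated people happen to be tweeting.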
It was just, Learn to Code kind of identified, you have these journalists who are so far removed from middle America that they think you can take a 50-year-old man who's never used a computer before and put him in a, you know.
The hashtag, the idea of Learn to Code condenses this idea, and it's easy to communicate, especially when you only have 280 characters, that there is a class of individual in this country.
I think you mentioned on, was it Sam Harris, that the left, these left liberal journalists only follow each other.
They've done the study again, the visualization, and now there is a lot more cross-pollination.
But what we saw is folks who were reporting on the left end of the spectrum mainly followed folks on the left, and folks on the right followed everyone.
So here's what ends up happening, and this is one of the big problems that people have.
With this story particularly, you have a left-wing activist who works for NBC News.
I'm not accusing you of having read the article.
He spends like a day lobbying to Twitter saying, Guy, you have to do this.
You have to make these changes.
The next day he writes a story saying that 4chan is organizing these harassment campaigns and death threats.
And while 4chan was doing threads about it, you can't accuse 4chan simply for talking about it because Reddit was talking about it too, as was Twitter.
So then the next day, after he published his article, now he's getting threats.
And then Twitter issues a statement saying, we will take action.
And to make matters worse, when John Levine, a writer for The Wrap, got a statement from one of your spokespeople saying, yes, we are banning people for saying learn to code.
A bunch of journalists came out and then lied, I have no idea why, saying this is not true.
This is fake news.
Then a second statement was published by Twitter saying it's part of a harassment campaign.
And so then the mainstream narrative becomes, oh, they're only banning people who are part of a harassment campaign.
But you literally see legitimate high-profile individuals getting suspensions for joining in on a joke.
You have situations like this where you can see – this journalist, I'm not going to name him, but he routinely has very left-wing – I don't want to use overtly esoteric words, but intersectional dogmatic points of view.
So this is – What does that mean?
So like intersectional feminism is considered like a small ideology.
People refer to these groups as the regressive left or the identitarian left.
These are basically people who hold views that a person is judged based on the color of their skin instead of the content of their character.
So you have the right-wing version, which is like the alt-right, the left-wing version, which is like...
Intersectional feminism is how it's simply referred to.
So you'll see people say things like – typically when they rag on white men or when they say like white feminism, these are signals that they hold these particular views.
And these views are becoming more pervasive.
So what ends up happening is you have a journalist who clearly holds these views.
I don't even want to call him a journalist.
He writes extremely biased and out of context story.
Twitter takes action in response, seemingly in response.
Then we can look at what happens with Oliver Darcy at CNN. He says, you know, the people at CPAC, the conservatives, are gullible, being fed red meat by grifters, among other disparaging comments about the right.
And he's the one who's primarily advocating for the removal of certain individuals who you then remove.
And then when Kathy Griffin calls for doxing, that's fine.
When this guy calls for the death of these kids, he gets a slap on the wrist.
And look, I understand the context matters, but grains of sand make a heap, and eventually you have all of these stories piling up, and people are asking you why it only flows in one direction.
Because I've got to be honest, I'd imagine that calling for the death three times of any individual is a bannable offense, even without a warning.
You just get rid of them.
But it didn't happen, right?
We see these, you know, people say men aren't women, though, and they get a suspension.
We see people say the editor-in-chief of The Daily Caller may be the best example.
Hashtag learn to code, quoting The Daily Show, and he gets a suspension.
Threatening death and inciting death is a suspension, too.
Yeah, I think we have a lot of work to do to explain more clearly when we're taking action and why, and certainly looking into any mistakes we may have made in those particular situations.
My point is that I think a lot of people that are on the right feel disenfranchised by these platforms that they use on a daily basis.
I don't know what the percentages are in terms of the number of people that are conservative that use Twitter versus the number of people that are liberal, but I would imagine it's probably pretty close, isn't it?
If we were purely looking at the content... but a lot of this work, again, is based on the behaviors, all the things that we've been discussing in terms of the context of the actual content itself.
I definitely hear the point in terms of us putting this rule forth.
But we have to balance it with the fact that people are being driven away from our platform.
And they may not agree with me on that, my folks from Missouri.
But I think they would see some valid argument in what we're trying to do to, again, increase the opportunity for as many people as possible to talk differently.
That's it.
It's not driving the outcomes that you're speaking to.
And there's disagreement as to whether this is the right outcome or not and this is the right policy.
And yes, our bias does influence looking in this direction.
And our bias does influence us putting a rule like this in place, but it is with the understanding of creating as much opportunity as possible for as many people to speak based on the actual data that we see of people leaving the platform because of experiences they have.
And to your credit, I really do appreciate the fact that you're very open about that you have made mistakes and that you're continuing to learn and grow and that your company is reviewing these things and trying to figure out which way to go.
And I think we all need to pay attention to the fact that this is a completely new road.
This road did not exist 15 years ago.
There was nothing there.
That is a tremendous responsibility for any company, any group of human beings: to be in control of public discourse on a scale unprecedented in human history.
And that's what we're dealing with here.
This is not a small thing.
And I know people that have been banned; to them, this is a matter of ideology, this is a matter of this, this is a matter of that.
There's a lot of debate going on here, and that's one of the reasons why I wanted to bring you on.
Because, Tim, because you know so much about so many of these cases, because you are a journalist, and you're very aware of the implications and all the problems that have been...
So I do want to make one thing really clear, though.
I have a tremendous amount of respect and trust for you when you say you wanted to solve this problem simply because you're sitting here right now and these other companies aren't, right?
Jack, you went on Sam Harris.
You went on with Gad Saad.
And that says to me a good faith effort to try and figure out how to do things right.
So as much as I'll apologize for getting kind of angry and being emotional because...
Look, we also haven't been great at explaining our intent.
And there's a few things going on.
One, as Joe indicated, centralized global policy at scale is almost impossible.
And we realize this.
Services have different answers to this.
Reddit has a community-based policy where each topic, each subreddit has its own policy, and there's some benefit to that.
So that's problem number one.
We know that this very binary off-or-on platform isn't right, and it doesn't scale, and it ultimately goes against our key initiative of wanting to promote healthier conversation.
And we need to – the reason I'm going on all these podcasts and having these conversations, and ideally Vidya's getting out there more often as well, is because people don't see and hear enough from her.
We need to have these conversations so we can learn.
We can get the feedback and also pay attention to where the technology is going.
Before the podcast we talked a little bit about this, and I talked about it on our previous podcast and also on Sam's: technology today is enabling content to live forever in a way that was not possible before.
You can say that everything on the Internet lives forever, but it's generally not true because any host or any connection can take it down.
The blockchain changes all that.
It can actually exist forever, permanently, without anyone being able to touch it, government, company, individual.
And that is a reality that we need to pay attention to and really understand our value.
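To make that permanence claim concrete, here is a minimal sketch, in Python, of the hash chaining that gives a blockchain its append-only character. This is hypothetical illustration, not anything Twitter runs: each entry commits to everything before it, so no host, company, or government can quietly edit history without every other copy noticing.

```python
import hashlib

def block_hash(prev_hash: str, content: str) -> str:
    # Each block commits to its content AND the previous block's hash.
    return hashlib.sha256((prev_hash + content).encode()).hexdigest()

# Build a tiny chain of posts.
chain = []
prev = "0" * 64  # genesis value, an assumption for the sketch
for post in ["first post", "second post", "third post"]:
    prev = block_hash(prev, post)
    chain.append((post, prev))

# Tampering with an early post changes every later hash,
# so nodes holding honest copies immediately reject the edit.
tampered = block_hash("0" * 64, "FIRST POST (edited)")
assert tampered != chain[0][1]
```

Real systems add consensus and wide replication on top of this, which is what actually puts the content out of any single party's reach.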
And I believe a lot of our value in the future – not today; again, we have a ton of work – is to take a strong stance: we are going to be a company that, given this entire corpus of conversation and content within the world, works to promote healthy public conversation.
That's what we want to do.
And if you disagree with it, you should be able to turn it off, and you should be able to access anything that you want, as you would with the internet.
But those are technologies that are just in the formative stages and presenting new opportunities to companies like ours.
And there's a ton of challenges with them, and a ton of things that we've discussed over the past hour that it doesn't solve and maybe exacerbates, especially around things like election interference and some of the regulatory concerns that you're bringing up.
But can I explain what health at least means to us in this particular case?
So, like, we talked a little bit about this on the previous podcast, but, like, we have four indicators that we're trying to define and try to understand if there's actually something there.
One is shared attention.
Is a conversation generally shared around the same objects, or is it disparate?
So, like, as we're having a conversation, the four of us are having a conversation, are we all focused on the same thing?
Or is Joe on his phone, which you were earlier, or like, whatever is going on?
Because more shared attention will lead to healthier conversation.
Number two is shared reality.
Not whether something is factual, but are we sharing the same facts?
Is the earth round?
Is the world flat?
So we can tell what facts are we sharing and what facts are we not sharing, what percentage of the conversation.
So that's a second indicator.
Third is receptivity.
Are the participants receptive to debate and to civility and to expressing their opinion?
And even if it is something that might be hurtful, are people receptive to at least look at and be empathetic and look at what's behind that?
This is the one we have the most measurement around today.
We can determine and predict when someone might walk away from a Twitter conversation because they feel it's toxic.
I think it's a question of what thresholds you allow.
And the more control we can give people to vary the spectrum on what they want to see, that feels right to me.
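Twitter hasn't published how these indicators are computed, so the following is only a toy illustration of what scoring shared attention, shared reality, and receptivity could look like; every function, field, and threshold here is an assumption made for the sake of the sketch, not the company's actual measurement.

```python
def shared_attention(participant_topics: list[set[str]]) -> float:
    # Fraction of topics common to all participants vs. all topics raised.
    common = set.intersection(*participant_topics)
    total = set.union(*participant_topics)
    return len(common) / len(total) if total else 1.0

def shared_reality(participant_facts: list[set[str]]) -> float:
    # Same shape, but over asserted facts: not whether they're true,
    # just whether the participants share them.
    common = set.intersection(*participant_facts)
    total = set.union(*participant_facts)
    return len(common) / len(total) if total else 1.0

def receptivity(toxicity_scores: list[float],
                walk_away_threshold: float = 0.7) -> float:
    # Fraction of replies below the level at which we'd predict
    # someone walks away from the conversation as too toxic.
    return sum(s < walk_away_threshold for s in toxicity_scores) / len(toxicity_scores)

# Example: three participants, mostly on-topic, one heated reply.
print(shared_attention([{"moderation", "bias"}, {"moderation"}, {"moderation", "ads"}]))
print(receptivity([0.1, 0.2, 0.9]))
```

The user-controlled threshold Jack mentions would be something like letting each person set their own walk_away_threshold rather than the platform picking one value for everyone.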
I mean, Joe, your Alex podcast did exactly this thing.
You're hosting a conversation.
You had both of your guests who started talking over each other.
You pause the conversation.
You said, let's not get combative.
Someone said, I'm not being combative.
You said, you're all talking over each other.
And there's a dynamic that the conversation then shifted to that got to some deeper points, right?
Could have just said, let that happen and let it go.
And that's fine, too.
It's up to whoever is responsible for viewing and experiencing that conversation.
And I agree with you.
It is completely far off from where we are today.
Not only have we had to address a lot of these issues that we're talking about at this table, but we've also had to turn the company around from a business standpoint.
We've had to fix all of our infrastructure that's over 10 years old.
We had to go through two layoffs because the company was too large.
So we have to prioritize our efforts.
And I don't know any other way to do this than be really specific about our intentions and our aspirations and the intent and the why behind our actions.
And not everyone's going to agree with it in the particular moment.
So I want to point this out before I make my next statement, though, just real quick.
It seems like the technology is moving faster than the culture.
So I do recognize you guys are between a rock and a hard place.
How do you get to a point where you can have that open source crypto blockchain technology that allows free and open speech?
And my question is, you know – and maybe you have research on this, which is why you've taken the decisions you have –
But when you ban someone because they've said, you know, bad opinions, misgendering, well, they're not going to go away.
They're going to try and find anywhere they can speak.
So what effectively happens is you're taking all of these people from a wide range of the most, to use a prison analogy, murderers all the way to pot smokers, and you're putting them in the same room with each other, and you're saying you're not welcome here.
Well, what happens when you take someone who smokes pot and put them in prison with a bunch of gangbangers and murderers?
We haven't done a great job at having a cohesive stance on rehabilitation and redemption.
We have it in parts.
So the whole focus behind the temporary suspensions is to at least give people pause and think about why and how they violated our particular rules that they signed up for when they came in through our terms of service.
Whether you agree with them or not, this is the agreement that we have with people.
Right now, that's the only option that we've built into our rules, but we have every capability of changing that, and that's something that I want my team to focus on: thinking about, as Jack said, not just coming back after some time-bound period, but also, like, what more can and should we be doing within the product itself, early on, to educate people about the rules?
So one of the things that we're working on is a very, very simplified version of the Twitter rules, one that's two pages, not 20. I've made sure that my lawyers don't write it, and it's written in as plain English as we can manage.
We try to put examples in there.
And like really taking the time to educate people.
And I get people aren't always going to agree with those rules and we have to address that too.
But at least simplifying it and educating people so that they don't even get to that stage.
But once they do, we have to understand that there are going to be different contexts in people's lives, different times; they're going to say and do things that they may not agree with later, and they don't deserve to be permanently suspended for it.
So we – this is something that actually we just had a meeting on earlier this week with our executive team – are, you know, identifying some of the principles by which we would want to think about, you know, time-bounding suspensions.
So it's work.
We have to do it and we're going to figure it out.
I'm not going to tell you it's coming out right away, but it's on our roadmap.
When someone reports something, instead of you having to worry about it, there would be no accusation of bias if, say, 100,000 users were randomly selected to make the determination, because Periscope does this.
But we do have a lot of experiments that we're testing and we want to build confidence and it's actually driving the outcomes that we think are useful.
And Periscope is a good playground for us across many regards.
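The randomly selected jury Tim describes, which Periscope uses for comment reports, is simple to sketch. The jury size, the majority rule, and the vote function below are all invented knobs for illustration, not either product's actual mechanism.

```python
import random

def crowd_review(report: str, eligible_users: list[str],
                 vote_fn, jury_size: int = 5) -> bool:
    """Randomly select a small jury of users and take a majority vote.
    Random selection is what blunts the accusation of reviewer bias."""
    jury = random.sample(eligible_users, k=min(jury_size, len(eligible_users)))
    votes = [vote_fn(user, report) for user in jury]  # True = violates rules
    return sum(votes) > len(votes) / 2

# Toy usage with a stand-in vote function.
users = [f"user{i}" for i in range(100_000)]
decision = crowd_review("example reported comment", users,
                        vote_fn=lambda u, r: "slur" in r)
print("violation" if decision else "no violation")
```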
Well, I think, depending on which political faction you ask, they'll say money is influence.
So I'm not going to say that the Saudi prince who invested in Twitter – because again, it's been a while since I've read these stories – is like showing up to your meetings and throwing his weight around.
Alex was texting me saying that he never did anything to endanger any child and that he was disputing what people were saying about a video of a child getting harmed.
I think that that's why it's so important that we take the time to build transparency into what we're doing.
And that's part of what we're trying to do is not just in being here and talking to you guys, but also building it into the product itself.
I think one of the things that I've really loved about a recent product launch is that we've made it possible to disable any sort of ranking in the home timeline if you want, and you don't have to see our algorithms at play anymore.
These are the kinds of things that we're thinking about.
How do we give power back to the people using our service so that they can see what they want to see and they can participate the way they want to participate?
And this is long term, and I get that we're not there yet, but this is how we're thinking about it.
I mean, it's just one switch, turning all the algorithms off.
What does that do?
What does that look like?
So these are the conversations that we're having in the company.
Whether they be good ideas or bad ideas, we haven't determined that just yet.
We definitely...
Look, I definitely understand the mistrust that people have in our company, in myself, in the corporate structure, in all the variables that are associated with it, including who chooses to buy on the public market, who chooses not to.
I get all of it, and I grew up on the Internet.
I'm a believer in the Internet principles, and I want to do everything in my power to make sure that we are consistent with those ideals.
At the same time, I want to do everything in my power to make sure that every single person has the opportunity to participate.
So even in countries where it's criminal to be LGBT, you will still ban someone for saying something disparaging to or saying something to that effect?
Let's say Saudi Arabia sentenced someone to death.
I don't want to call it Saudi Arabia specifically.
Let's call it Iran because I believe that's the big focus right now with the Trump administration.
Iran, it's my understanding, it's still punishable by death.
I could be wrong, but it is criminal.
If someone then directly targets one of these individuals, will you ban them?
I mean, do you guys function in Iran?
We're blocked in Iran.
Yeah, that's what I figured.
But there are some countries where, for instance, Michelle Malkin recently got really angry because she received notice that she violated blasphemy laws in Pakistan.
So you do follow some laws in some countries, but it's not a violation.
I guess the question I'm asking is, in Pakistan, it's very clearly a different culture.
Just to add on to what Jack's saying, we actually are very, very transparent about this.
So we publish a transparency report every six months that details every single request that we get from every government around the world and the content that they ask us to remove, and we post that to an independent third-party site.
So you could go right now and look and see every single request that comes from the Pakistani government and what content they're trying to remove from Pakistan.
There's a perception that you sending that notice is like a threat against them for violating blasphemy laws, whereas it's very clearly just letting you know a government has taken action against you.
It's saying that the government has restricted access to that content in that country.
And the reason we tell users or tell people that that's happened is because a lot of them may want to file their own suit against the government, or a lot of them may be in danger if they happen to be under that particular government's jurisdiction, and they may want to take action to protect themselves if they know that the government is looking at the content in their accounts.
We send the notice to everybody.
We don't always know where you are or what country you live in.
And so we just send that notice to try to be as transparent as possible.
Your policies support a community, but there may be laws in a certain country that does not support that community and finds it criminal.
So your actions are now directly opposed to the culture of another country.
I guess the point I'm trying to make is that if you enforce your values, which are perceivably not even the majority view in this country – if you consider yourself more liberal-leaning, that's at most half of the United States – but you're enforcing those rules on the rest of the world that uses the service, it's sort of forcing other cultures to adhere to yours.
So a lot of our rules are based more in the UN Declaration than purely in US law.
Doesn't the UN Declaration guarantee the right of all people, through any medium, to express their opinion?
Look, I think it's also important that the company is not just me.
We have people in the company who are really good at this and are making some really tough decisions and having tough conversations and getting pushback and getting feedback.
This is a gentleman that was in the USA Today article, where he admitted that he had used tactics in the past to influence the election and said he will continue to do so using all of his channels.
And so when we saw that report, our team looked at his account.
We noticed there were multiple accounts tied to his account, so fake accounts that he had created that were discussing political issues and pretending to be other people from other perspectives.
How did you find that out?
We would have phone numbers linking accounts together or email addresses, in some cases IP addresses, other types of metadata that are associated with accounts, so we can link those accounts together.
And having multiple accounts in and of itself is not a violation of our rules because some people have their work account, their personal account.
It's when you're deliberately pretending to be someone else and manipulating a conversation about a political issue.
And those are exactly the types of things that we saw the Russians do, for example, in the 2016 election.
So it was that playbook and that type of activity that we saw from Jacob Wohl.
But it's about grains of sand making a heap, in the flow of one direction: we can see Jacob Wohl has said he's done this, so you're like, we're going to investigate, we ban him.
Right, right, right.
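The linking Vijaya describes, clustering accounts that share a phone number, email address, or IP, amounts to finding connected components over shared metadata. Here is a generic sketch of that idea, with made-up accounts and fields; Twitter's real pipeline is certainly more involved than this.

```python
from collections import defaultdict

accounts = {
    "acct_a": {"phone": "555-0100", "email": "x@example.com"},
    "acct_b": {"phone": "555-0100", "email": "y@example.com"},  # shares phone with a
    "acct_c": {"phone": "555-0199", "email": "y@example.com"},  # shares email with b
    "acct_d": {"phone": "555-0123", "email": "z@example.com"},  # unlinked
}

# Index accounts by each metadata value they carry.
by_value = defaultdict(set)
for acct, meta in accounts.items():
    for value in meta.values():
        by_value[value].add(acct)

def cluster(start: str) -> set[str]:
    # Walk the shared-metadata graph to collect every linked account.
    seen, stack = set(), [start]
    while stack:
        acct = stack.pop()
        if acct in seen:
            continue
        seen.add(acct)
        for value in accounts[acct].values():
            stack.extend(by_value[value] - seen)
    return seen

print(cluster("acct_a"))  # {'acct_a', 'acct_b', 'acct_c'}
```

The point Vijaya makes about attribution maps onto this directly: if an account shares no metadata edge with the operator, it falls outside the cluster, and no link can be asserted.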
It was recently reported and covered by numerous outlets that a group called New Knowledge was meddling in the Alabama election by creating fake Russian accounts to manipulate national media into believing that Roy Moore was propped up by the Russians.
Facebook banned the guy who runs it, as well as four other people, but Twitter didn't.
So in the case of Jacob Wohl, we were able to directly attribute, through email addresses and phone numbers, his direct connection to the accounts that were created to manipulate the election.
If we're not able to tie that direct connection on our platform, or law enforcement doesn't give us information to tie attribution, we won't take action.
And it's not because of political ideology, it's because we want to be damn sure before we take action on accounts.
I believe we were able to take down a certain cluster of accounts that we saw engaging in the behavior, but we weren't necessarily able to tie it back to one person controlling those accounts.
To clarify what they said: what they claimed to the New York Times was that it was a false flag. The New York Times said they reviewed internal documents that showed they admitted it was a false flag operation.
The guy who runs the company said, oh, his company does this.
He wasn't aware necessarily, but it was an experiment.
So he's been kind of, in my opinion, duplicitous, not straightforward. But at the time of this campaign, which he claims to know about, he tweeted that it was real.
So during the Roy Moore campaign, he tweets, wow, look at the Russians.
Then it comes out later, his company is the one that did it.
So you're kind of like, oh, so this guy was propping up his own fake news, right?
Then when they get busted, he goes, oh, no, it's just my company doing an experiment.
But you tweeted it was real.
You used your verified Twitter account to push the fake narrative your company was pumping on this platform.
I just mean when we look at a company-wide average of all of your employees and the direction they lean versus the news sources they're willing to read, you're going to see a flow in one direction, whether it's intentional or not.
And it was because Vidya said, you know, we're permanently banning this account.
And yes, we didn't have the same sort of findings in the other particular account, which I got feedback on, passed to her, and we didn't find what we needed to find.
I think, you know, a lot of what people assume is malintent is sometimes fake news.
You know, I think one of my biggest criticisms in terms of what's going on in our culture is the news system is, like you pointed out, although it's changed, left-wing journalists only follow themselves.
That's my experience.
I've worked for these companies, and so they repeat these same narratives.
They don't get out of their bubble.
Even today, they're still in a bubble, and they're not seeing what's happening outside of it.
And then what happens is, you know, according to data – I think this is from Pew – most new journalism jobs are in blue districts.
So you've got people who only hear the same thing.
They only cover the same stories.
So if, you know, we hear about Jussie Smollett.
We hear about how the story goes wild.
But there's like 800 instances of Trump supporters wearing MAGA hats getting beaten, you know, throughout the past couple of years.
We had a guy show up to a school in Eugene, Oregon with a gun and fire two rounds at a cop wearing a Smash the Patriarchy and Chill shirt.
And those stories don't make the headlines.
So it's, you know, when the journalists are inherently in a bubble, I hear you.
Yeah, but I don't think they have the level of transparency that we want to put into it.
So we actually want to show whether a comment was moderated and then actually allow people to see...
So, both showing the action that this person moderated a particular comment, and then you can actually see the comment itself.
It's one click over, one tap over.
That's how we're thinking about it.
It might change in the future, but we can't do this without a level of transparency, because otherwise we'd minimize something Vidya spoke to earlier, which is speaking truth to power, holding people to account.
Even things like the Fyre Festival, where, you know, you had these organizers who were deleting every single comment, moderating every single comment that called this thing a fraud, and don't go here.
We can't reliably and, like, just from a responsibility standpoint, ever create a future that enables more of that to happen.
And that's how we're thinking about even features like this.
Is there a reason –
Well, with the Proud Boys, what we were able to do was actually look at documentation and announcements that the leaders of that organization had made, and their use of violence in the real world.
So that was what we were focused on.
And subsequent to our decision, I believe the FBI also designated them.
Gavin McInnes – Anthony Cumia, who was part of Opie and Anthony, now has his own show – it happened on his show, because there was a guy that was on the show, and they made a joke about starting a gang based on him, because he was a very effeminate guy, and they would call it the Proud Boys.
And they went into detail about how this thing went from a joke – saying that you could join the Proud Boys, and everyone was, you know, being silly – to people actually joining it, and then it becoming this thing to fight Antifa, and then becoming infested with white nationalists and becoming this thing.
Well, in many ways it was, but it's been documented how it started and what it was, and it's been misrepresented as to why it was started.
He was talking to me about Antifa, that when Antifa was blocking people like Ben Shapiro's speeches and things along those lines and stopping conservatives from speaking, you should just punch them in the face.
We're going to have to start kicking people's asses.
I was like, this is not just irresponsible, but foolish and short-sighted and just a dumb way to talk.
So then you have the Antifa groups that are engaging in the same thing.
The famous bike lock basher incident, where a guy showed up and hit seven people over the head with a bike lock.
I'm going to leave that out for the time being.
You have other groups like By Any Means Necessary.
You have in Portland, for instance, there are specific branded factions.
There's the tweet I mentioned earlier where they doxed ICE agents and they said, do whatever inspires you with this information.
And I mean, you're tagged in a million times.
I know you probably can't see it, but you can actually see that some of the tweets in the thread are removed.
But the main tweet itself, from an anti-fascist account linking to a website, is straight up saying, like, here are the private home details, phone numbers, and addresses of these law enforcement officers – and it has not been removed since September.
So what you end up seeing is, again, to point, I think one of the big problems in this country is the media, because it was reported that the FBI designated Proud Boys an extremist group.
But it was a misinterpretation: a sheriff wrote a draft saying the FBI considers them to be extremists.
The media then reported hearsay from the sheriff and the FBI came out and said, no, no, no, we never meant to do that.
That's not true.
We are just concerned about violence.
So the Proud Boys all get purged.
And again, I think Gavin is a different story, right?
If you want to go after the individuals who are associating with that group versus the guy who goes on his show and says outrageous things and goes on Joe's show.
But then you have Antifa.
What I mean by that is they have specific names, they sell merchandise, and they're the ones showing up throwing mortar shells into crowds.
They're the ones showing up with crowbars and bats and whacking people.
I was in Boston, and there was a rally where conservatives were planning on putting on a rally.
It was literally just libertarians and conservatives.
Antifa shows up in balaclavas, with crowbars, bats, and weapons, threatening them.
So I have to wonder if these people are allowed to organize in your platform.
Are you concerned about that?
Why aren't they being banned when they violate the rules?
Is it a centralized organization the same way that – I hear you on Proud Boys, but where they have tenets that are written out and there's a leader and like –
It's not the same, but there are specific branded cells.
Well, in the past when we've looked at Antifa, we ran into this decentralization issue, which is we weren't able to find the same type of information that we were able to find about Proud Boys, which was a centralized, leadership-based documentation of what they stand for.
But absolutely, I mean, it's something that we'll continue to look into.
And to the extent that they're using Twitter to organize any sort of offline violence, it's completely prohibited under our rules, and we would absolutely take action on that.
There's another – when it comes to the weaponization of rules against – like Gavin isn't creating a compilation of things he's ever said out of context and then sending them around to get himself banned.
Other people are doing that to him, activists who don't like him, and it's effective.
In fact, I would actually like to point out there's one particular user who has repeatedly made fake videos attacking one of your other high-profile conservatives so much so that he's had to file police reports, harassment complaints, and it just doesn't stop.
I guess I'll ask this to this regard.
If someone repeatedly makes videos of you out of context, fake audio, accusing you of doing things you've never done, at what point is that bannable?
I agree that it's out of context and it's disingenuous, but it's still the person saying it and you're making a compilation of some pre-existing audio or video.
So I think in the instance of Gavin, like, one of the things he said was, like, a call to violence, but he was talking about, like, it was in the context of talking about a dog and being scolded.
So he was like, hit him, just hit him, and then it's like, it turns out he's talking about a dog, like, doing something wrong.
And they take that and they snip it, and then it goes viral, and then everyone starts flagging, saying, you gotta ban this guy.
So again, I understand, like, you know.
But I guess the issue is, if people keep doing that to destroy someone's life...
So I think there's a bigger discussion, I think, both of you could probably shed some important light on, too, outside of Twitter.
This weaponization of content from platforms is being used to get people banned from their banking accounts.
We can talk about Patreon, for instance.
And again, this may just be something you could chime in on.
Patreon banned a man named Carl Benjamin, also known as Sargon of Akkad.
I knew he had done things that were egregious violations of the rules because, plain and simple, I didn't bring him up to go through it and try to figure out if he –
But it does sound like at least the first one was meant to be a critique of your –
Potentially, but there are a bunch of others if you want to hear them.
Oh, so the reason I brought him up again, but we'll move on, was that activists found a live stream from eight months ago.
I totally forgot why I was bringing this up because we've moved so far away from where we were.
But they pulled a clip from an hour and a half or whatever into a two-hour live stream on a small channel that only had 2,000 views, sent it to Patreon, and then Patreon said, yep, that's a violation, and banned him outright without warning.
Which, again, I understand is different from what you guys do.
You do suspensions first.
But I guess the reason I was bringing it up was to talk about a few things.
Why blocking isn't enough.
Why muting isn't enough.
And if you think that it's driving people off the platform, people post my tweets on Reddit.
I block them.
They use a dummy account, load up my tweet, post it to Reddit, and then spam me on Reddit.
So, you know, blocking and even leaving Twitter would never do anything.
Short of me shutting up, there's nothing you can do to protect me or anyone else.
So there's also all this infrastructure that we have to fix in order to pass those through in terms of what action you took or what action someone else took to be transparent about what's happening on the network.
The second...
The second thing, block is really interesting.
My own view is it's wholly unsatisfying, because what you're doing is you're blocking someone; when they get a notification that you've blocked them, it may embolden them even more, which causes, you know, ramifications from others around the network.
But also, that person can log out of Twitter and then look at your tweets just on the public web, because we're public.
So it doesn't feel as rigorous and as durable as something like making mute much stronger.
I'm not sure about that, because one of the things that I do think is that just – I'm not in favor of a lot of this heavy-handed banning and a lot of the things that have been going on, particularly a case like the Meghan Murphy case.
But what I think that we are doing is we're exploring the idea of civil discourse.
We're trying to figure out what's acceptable and what's not acceptable.
And you're communicating about this on a very large scale.
And it's putting that out there and then people are discussing it.
Whether they agree or disagree, whether they vehemently defend you or hate you, they're discussing this.
And I think this is how these things change.
And they change over long periods of time.
Think about words that were commonplace just a few years ago that you literally can't say anymore.
I mean, there's so many of them that were extremely commonplace or not even thought to be offensive 10 years ago that now you can get banned off of platforms for them.
Yeah, Dankula is the guy who got charged and convicted of making a joke where he had his pug do a Nazi salute.
But I was there and I was arguing that a certain white nationalist had used racial slurs on YouTube.
He has.
I don't want to name him.
And some guy in the UK said, that's not true.
He's never done that.
And I said, you're crazy.
Let me pull it up.
Unfortunately, I don't know why, but when I did the Google search, nothing came up.
What I did notice was at the bottom of the page, it said due to UK law, certain things have been removed.
So I don't know if it's exactly why I couldn't pull up a video proving or tweets or anything because I think using these words gets stripped from the social platforms.
Tim, I think it's a cost-benefit analysis, and we have to constantly rehash it and do it.
We have the technology we have today, and we are looking at technologies which open up the aperture even more.
And we all agree that a binary on or off is not the right answer and is not scalable.
We have started getting into nuance within our enforcement, and we've also started getting into nuance with the presentation of content.
So, you know, one path might have been for some of your replies for us to just remove those offensive replies completely.
We don't do that.
We hide it behind an interstitial, to protect the original tweeter and also folks who don't want to see that.
They can still see everything, they just have to do one more tap.
So that's one solution, and ranking is another solution, but as technology gets better and we get better at applying it, we have a lot more optionality, whereas we don't have that as much today.
I feel like, you know, I'm just going to reiterate an earlier point, though.
You know, if you recognize sunlight is the best disinfectant, it's like you're chasing after a goal that can never be met.
If you want to protect all speech but they start banning certain individuals – you want to increase the amount of healthy conversations, but you're banning some people.
Well, how long until this group is now offended by that group?
I don't believe that, but we have to work with the technologies, tools, and conditions that we have today and evolve over time to where we can see examples like this woman at the Westboro Baptist Church who was using Twitter every single day to spread hate against the LGBTQA community.
And over time, we had, I think it was three or four folks on Twitter who would engage her every single day about what she was doing, and she actually left the church.
I think we're very early in our thinking here, so we're open-minded to how to do this.
I think we agree philosophically that permanent bans are an extreme case scenario, and it shouldn't be one of our, you know, regularly used tools in our tool chest.
So how we do that, I think, is something that we're actively talking about today.
One of the challenges is we have the benefit in English common law of hundreds of years of precedent and developing new rules and figuring out what works and doesn't.
Twitter is very different.
So I think with the technology, I don't know if you need permanent bans or even suspensions at all.
You could literally just – I mean lock someone's account is essentially suspending them.
But again, I wouldn't claim to know anything about the things you go through.
But what if you just restricted most of what they could say?
You blocked certain words in a certain dictionary.
If someone's been – if someone received –
And then, as Vijay pointed out earlier, we have the timeline.
We started ranking the timeline about three years ago.
We enable people today to turn that off completely and see the reverse chron of everything they follow.
You can imagine a world where that switch has a lot more power over more of our algorithms throughout more of the surface areas.
You can imagine that.
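Conceptually, the switch being described just changes the sort key of the home timeline. Here is a minimal sketch with invented fields; the real ranking model is, of course, far more complex than a single precomputed score.

```python
from dataclasses import dataclass

@dataclass
class Tweet:
    author: str
    text: str
    timestamp: int      # seconds since epoch
    model_score: float  # relevance predicted by some ranking model

def home_timeline(tweets: list[Tweet], ranking_on: bool) -> list[Tweet]:
    if ranking_on:
        # Algorithmic: the model decides what you see first.
        return sorted(tweets, key=lambda t: t.model_score, reverse=True)
    # Switch off: plain reverse-chronological feed of who you follow.
    return sorted(tweets, key=lambda t: t.timestamp, reverse=True)

feed = [Tweet("a", "old but relevant", 100, 0.9),
        Tweet("b", "newest", 300, 0.2)]
print([t.text for t in home_timeline(feed, ranking_on=False)])  # newest first
```

Extending that one boolean into finer-grained controls over more surface areas is, as Jack says, the imaginable next step.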
So these are all the questions that are on the table.
You asked about timeline, and this is a challenging one.
I don't know about timeline because first...
We've decided that our priority right now is going to be on proactively enforcing a lot of this content, specifically around anything that impacts physical safety, like doxing.
Peer review, which we mentioned, but have you just considered opening an office, even a small one, for trust and safety in an area that's not predominantly blue so that at least you can have some pushback?
We're thinking of doing something we call case studies, but essentially, like, this is our case law.
This is what we use.
And so high-profile cases, cases people ask us about – we'd like to actually publish these so that we can go through them, you know, tweet by tweet, just like this.
Because I think a lot of people just don't understand and they don't believe us when we're saying these things.
So to put that out there so people can see.
And again, they may disagree with the calls that we're making, but we at least want them to see why we're making these calls.
So I think ultimately my main criticism stands, and I don't see a solution to it: Twitter is unelected and unaccountable, as far as I'm concerned, when it comes to public discourse.
You have rules that are very clearly at odds as we discussed.
I don't see a solution to that, and, in my opinion, we can have this kind of – we've toned things down.
We've had some interesting conversations, but ultimately, unless you're willing to allow people to just speak entirely freely, we have an unelected group with a near monopoly on public discourse in many capacities, and I understand it's not everything.
Reddit is big too, and it's – what I see is, you are going to dictate policy whether you realize it or not, and that's going to terrify people, and it's going to make violence happen.
It's going to make things worse.
I hate bringing up this example on the rule for misgendering because I'm actually – I understand it and I can agree with it to a certain extent.
I have nothing but respect for the trans community, but I also recognize we've seen an escalation in street violence.
We see a continually disenfranchised large faction of individuals in this country.
We then see only one of those factions banned.
We then see a massive multinational billion-dollar corporation with private and foreign investors.
And it looks to me like if foreign governments are trying to manipulate us, I don't see a direct solution to that problem, that you do have political views.
You do enforce them.
And that means that Americans who are abiding by American rule are being excised from political discourse, and that's the future.
And again, we ground this in creating as much opportunity as possible for the largest number of people.
That's where it starts.
And where we are today will certainly evolve.
But that is what we are trying to base our rules and judgments on.
And I get that that's an ideology.
I completely understand it.
But we also have to...
We also have to be free to experiment with solutions and experiment with evolving policy and putting something out there that might look right at the time and evolving.
I'm not saying this is it, but we look to research, we look to our experience and data on the platform, and we make a call.
And if we get it wrong, we're going to admit it and we're going to evolve it.
That there are American citizens abiding by the law who have a right to speak and be involved in public discourse that you have decided aren't allowed to.
I think Jack has said it a couple times, but the first thing we're going to do is prioritize people's physical safety, because that's got to be the foundation.
I mean, my opinion would be as much as I don't like a lot of what people say about me, what they do, the rules you've enforced on Twitter have done nothing to stop harassment towards me or anyone else.
I swear to God, my Twitter, I mean, my Reddit is probably 50 messages from various far-left and left-wing subreddits lying about me, calling me horrible names, quote-tweeting me, and these people are blocked.
Right?
And I never used to block people because I thought it was silly because they can get around it anyway, but I decided to at one point because out of sight, out of mind.
If they see my tweets less, they'll probably interact with me less, but they do this, and they lie about what I believe, they lie about what I stand for, and they're trying to destroy everything about me, and they do this to other people.
I recognize that.
So ultimately I say, well, what can you do?
It's going to happen on one of these platforms.
The internet is a thing.
As they say on the internet, welcome to the internet.
So to me, I see Twitter trying to enforce all these rules to maximize good, and all you end up doing is stripping people from the platform, putting them in dark corners of the web where they get worse, and then you don't actually solve the harassment problem.
No, I'm not talking – but there are dark corners of Reddit.
There are alternatives.
I mean the internet isn't going to go away and people have found alternatives.
And here's the other thing that's really disconcerting.
We can see a trend among all these different big Silicon Valley tech companies.
They hold a similar view to you guys.
They ban similar ideology and they're creating a parallel society.
You've got alternative social networks popping up that are taking the dregs of the mainstream and giving them a place to flourish, grow, make money.
Now we're seeing people be banned from MasterCard, banned from PayPal, even banned from Chase Bank because they all hold the same similar ideology to you.
In some capacities, I don't know exactly why Chase does it.
I assume it's because you'll get some activists who will lie.
But, you know, what I see across the board – it's not just you, and this is what I wanted to bring up before about perspective on these things.
You guys are like, we're going to do this one thing.
And no snowflake blames itself for the avalanche.
But now what do we have?
We have conservatives being stripped from PayPal.
We have certain individuals stripped from PayPal, Patreon financing.
So they set up alternatives.
Now we're seeing people who have, like, you mentioned Westboro Baptist Church, and she's been deradicalized by being on the platform.
But now we have people who are being radicalized by being pushed into the dark corners, and they're building, and they're growing.
And they're growing because there's this idea that you can control this and you can't.
You know, I think you mentioned earlier that...
There are studies showing – and also counter-studies – that people being exposed to each other is better.
I found something really interesting, and because I have most, whether or not people want to believe this, all of my friends are on the left, and some of them are even, like, socialists, and they're absolutely terrified to say, to talk, because they know they'll get attacked by the people who call for censorship and try to get them fired.
And when I talked to them, I was talking to a friend of mine in LA, and she said, is there a reason to vote for Trump?
And I explained a very simple thing about Trump supporters.
This was back in 2016. I said, oh, well, you've got a lot of people who are concerned about the free trade agreements sending jobs overseas.
So they don't know much about Trump, but they're going to vote for him because he supported that.
And so did Bernie.
And then the response is, really?
I didn't know that.
And so you have this ever-expanding narrative that Trump supporters are Nazis and the MAGA hat is the KKK hood.
And a lot of this rhetoric emerges on Twitter.
But when a lot of these people start getting excised, then you can't actually meet these people and see that they're actually people, and they may be mean.
They may be mean people.
They may be awful people, but they're still people, and even if they have bad opinions, sometimes you actually, I think in most instances, you find they're regular people.
Well, there's a part of the problem of calling for censorship and banning people in that it is sometimes effective, and that people don't want to be thought of as being racist, or in support of racism, or in support of nationalism, or any of these horrible things. So you feel like if you support these bannings, you support positive discourse and a good society and all these different things.
What you don't realize is what you're saying, is that this does create these dark corners of the web and these other social media platforms evolve and have far...
I mean, when you're talking about bubbles and about these groupthink bubbles, the worst kind of groupthink bubbles is a bunch of hateful people that get together and decide they've been persecuted.
Instead of, like we were talking about with Megan Phelps, having an opportunity to maybe reshape their views by having discourse with people.
Well, let's think about the logical end of where this is all going.
You want healthier conversations, so you're willing to get rid of some people who then feel persecuted and have no choice but to band together with others.
MasterCard, Chase, Patreon, they all do it.
Facebook does it.
They're growing.
These platforms are growing.
They're getting more users.
They're expanding.
They're showing up in real life.
And even if these people who are banned aren't the neo-Nazi evil, they're just regular people who have banded together, that forms a parallel finance system, a parallel economy. You've got Patreon alternatives emerging where people are saying, we reject you, and now we're on a platform where people say the most ridiculous things.
You start hating people that are progressive, because these are the people that, like – you and I have talked about the Data & Society report that labeled us as alt-right adjacent or whatever.
They connected us because you and I have talked to people that are on the right or far right, as if somehow or another we were secretly far right and there's this influence network of people together.
Well, it's a schizophrenic connection.
It's like one of those weird things where people draw a circle.
Oh, you talk to this guy and this guy talk to that guy.
So you're probably not familiar, but a group called Data & Society published a report, which is entirely fake, labeling 81 YouTube channels alt-right adjacent or whatever they want to call it.
The channels included Joe Rogan and me.
It's fake.
But you know what?
A couple dozen news outlets wrote about it as if it was fact.
You believed the Proud Boys were labeled by the FBI as extremists when they actually weren't.
It was a sheriff's report from someone not affiliated with the FBI, but there are activists within media who have an agenda, and we saw this with Learn to Code.
It was an NBC reporter who very clearly is left-wing identitarian.
The snowflake won't blame itself for the avalanche.
You guys are doing what you think is right.
So is Facebook, YouTube, Patreon, all these platforms.
And it's all going to result in one thing.
It's going to result in groups like Patriot Prayer and the Proud Boys saying, I refuse to back down, showing up.
It's going to result in Antifa showing up.
It's going to result in more extremism.
You've got an Antifa account that published the home addresses and phone numbers that hasn't been banned.
That's going to further show conservatives that the policing is asymmetrical, whether it is or isn't.
And I think the only outcome to this on the current course of action is like insurgency.
We've seen people planting bombs in Houston, trying to blow up a statue.
We saw someone plant a bomb at a police station in Eugene, Oregon.
Two weeks before that, a guy showed up with a gun and fired two rounds at a cop, wearing a Smash the Patriarchy and Chill shirt.
So, you know, so that happens.
Then a week later, they say you killed our comrade.
Then a week later, a bomb is planted.
I don't believe it's a coincidence.
Maybe it is.
I lived in New York.
I got out.
Too many people knew who I was.
And there were people sending me emails with threats.
And I'm like, this is escalating.
You know, we've seen for the past years with Trump, we've seen Breitbart has a list of 640 instances of Trump supporters being physically attacked or harassed in some way.
There was a story the other day about an 81-year-old man who was attacked.
A regulator's job is to protect the individual and make sure that they level the playing field and they're not pushed by any particular special interests.
Like, companies like ours who might, you know...
I agree that we should have an agency that can help us protect the individual and level the playing field.
So I think oftentimes companies see themselves as reacting to regulation.
I think we need to take more of an education role.
So I don't fear it.
I want to make sure that we're educating regulators on what's possible, what we're seeing, and where we could go.
Do you think that by these approaches – by being proactive, by taking a stand, perhaps offering up a road to redemption to these people, and making clear distinctions between what you're allowing and what you're not allowing – you can hold off regulation? Or do you disagree with what he's saying about regulation?
No, I don't believe that should be our goal, is to hold off regulation.
I believe we should participate like any other citizen, whether it be a corporate citizen or an individual citizen, in helping to guide the right regulation.
So, are you familiar, and I could be wrong on this because it's been like 15 years since I've done this.
Are you familiar with the Clean Water Restoration Act at all?
I don't expect you to be.
It's a very specific thing.
So, it was at some point in like the early 70s.
There was a river in Ohio.
And again, I could be wrong.
It's been 15 years.
I used to work for an environmental organization.
It started on fire.
And what was typically told to us was that all of these different companies said we're doing the right thing.
But as I mentioned, the snowflake doesn't blame itself.
So over time, the river was so polluted it became sludge and lit on fire.
And so someone said, if all of these companies think they're doing the right thing, and they've all just contributed to this nightmare,
We need to tell them blanket regulation.
And so what I see with these companies like banking institutions, public discourse platforms, video distribution, I actually – I'm really worried about what regulation will look like because I think the government is going to screw everything up.
But I think there's going to be a recoil of – first, I think the Republicans – because I watched the testimony you had in Congress and I thought they had no idea what they were talking about nor did they care.
There was like a couple people who made good points, but for the most part, they were like, I don't know, whatever.
And they asked about Russia and stuff.
So they have no idea what's going on, but there will come a time when, you know, for instance, one of the great things they brought up was that by default, when someone in D.C. signs up, they see way more Democrats than Republicans.
Right?
You remember that when you testified?
But yeah, so, well, there's an issue.
And I don't think, I believe you when you say it's algorithmic, that these are prominent individuals, so they get automatically recommended.
But then, you know – so again, the solution to that – like, how do you regulate a programmer to create an algorithm to solve that problem? It's crazy.
You're regulating someone to invent technology.
But I feel like there will be a backlash when too many, right now we're seeing, the reason, one of the reasons we're having this conversation is that conservatives feel like they're being persecuted and repressed.
Yeah, I mean, so there's two fields of research within artificial intelligence that are rather new, but I think really impactful for our industry.
One is fairness in ML.
Fairness in what?
Fairness in machine learning and deep learning.
So looking at everything from what data set is fed to an algorithm – the training data set – all the way to how the algorithm actually behaves on that data set, making sure that it does not develop bias over the longevity of the algorithm's use.
So that's one area that we want to lead in, and we've been working with some of the leading researchers in the industry to do that because the reality is a lot of this human judgment is moving algorithms.
And the second issue with it moving to algorithms is algorithms today can't necessarily explain the decision-making criteria that they use.
So they can't explain in the way that you make a decision, you explain why you make that decision.
Algorithms today are not being programmed in such a way that they can even explain that.
You may wear an Apple Watch, for instance.
It might tell you to stand every now and then.
Right now, those algorithms can't explain why they're doing that.
That's a bad example because it does it every 50 minutes, but as we offload more and more of these decisions, both internally and also individually, to watches and to cars and whatnot, there is no ability right now for that algorithm to actually go through and list out the criteria used to make that decision.
So this is another area that we'd like to get really good at if we want to continue to be transparent around our actions because a lot of these things are just black boxes and they're being built in that way because there's been no research into like, well, how do we get these algorithms to explain what their decision is?
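As one concrete example of what fairness-in-ML auditing can mean: demographic parity, one of the simplest published criteria, compares a model's positive rate across groups. This is a toy check with invented data; real fairness work uses many more criteria and, as Jack describes, also examines the training data itself.

```python
from collections import defaultdict

def positive_rate_by_group(decisions):
    """decisions: list of (group_label, model_said_yes) pairs."""
    counts = defaultdict(lambda: [0, 0])  # group -> [positives, total]
    for group, positive in decisions:
        counts[group][0] += int(positive)
        counts[group][1] += 1
    return {g: pos / total for g, (pos, total) in counts.items()}

# Toy audit: does the model flag one group's tweets more often?
decisions = [("group_a", True), ("group_a", False),
             ("group_b", True), ("group_b", True)]
rates = positive_rate_by_group(decisions)
gap = max(rates.values()) - min(rates.values())
print(rates, "parity gap:", gap)  # a large gap would prompt investigation
```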
My fear is it's technology that you need to build, but the public discourse is there.
We know that foreign governments are doing this.
We know that democratic operatives in Alabama did this.
And so I imagine that with Donald Trump – he talked about an executive order for free speech on college campuses.
So the chattering is here.
Someone is going to take a sledgehammer to Twitter, to Facebook, to YouTube, and just be like – not understanding the technology behind it, not willing to give you the benefit of the doubt – I don't care why you're doing it, we are mad.
You know what I mean?
Pass some bills and then it's over.
Again, clarifying, I think you guys are biased and I think what you're doing is dangerous, but I think that doesn't matter.
It doesn't matter what you think is right.
It matters that all of these companies are doing similar things and it's already terrifying people.
I mean, look, when I saw somebody got banned from their bank account – that's terrifying.
There was a reporter, and I could be getting this wrong because I didn't follow it very much, with Big League Politics, who said that after reporting on PayPal negatively, they banned him.
So again, not to imply that you guys do use it, but I asked specifically because it's been reported other organizations do.
So we have activist organizations.
We have journalists that I can attest are absolutely activists because I've worked for Vice.
I worked for Fusion.
I was told...
Implicitly, not explicitly, to lie, to side with the audience, as it were.
I've seen the narratives they push, and I've had conversations with people that I'm going to keep relatively off the record.
Journalists who are terrified because they said the narrative is real.
One journalist in particular said that he had evidence of, essentially, he had reason to believe there was wrongdoing, but if he talks about it, he could lose his job.
And there was a journalist who reported to me that Data & Society admitted their report was incorrect.
And now you've got organizations lobbying for terminating Joe and I because of this stuff.
So this narrative persists.
Then you see all the actions I mentioned before and all the organizations saying we're doing the right thing.
And I got to say, like, we're living in a – I mean, I feel like we're looking at the doorway to the nightmare dystopia of –
I just want to clarify.
And unlike the internet, within a company like ours, you don't necessarily see the protocol, you don't see the processes, and that is an area where we can do much better.
I guess, you know, beat it over the head a million times, beat the dead horse.
I think ultimately, yeah, I get what you're doing.
I think it's wrong.
I think it's terrifying.
And I think we're looking – we're on the avalanche already.
It's happened.
And we're heading down to this nightmare scenario of a future where it terrifies me when I see people who claim to be supporting liberal ideology burning signs that say free speech, threatening violence against other people.
You have these journalists who do the same thing.
They accuse everybody of being a Nazi, everybody of being a fascist, Joe Rogan for Christ.
First of all, I will say it's hilarious to me that people have Band-Aids they never use, but they don't store at least one emergency food supply.
It's like, you never use Band-Aids.
Why do you have them?
But, no, I do.
I see this every day.
It was a couple years ago.
I said, wow, I see what's happening on social media.
We're going to see violence.
Boom, violence happened.
I said, oh, it's going to escalate.
Someone's going to get killed.
Boom, Charlottesville happened.
And it's like, I've...
There have been statements from foreign security advisors, international security experts saying we're facing down high probability of civil war.
And I know it sounds crazy.
It's not going to look like what you think it looks like.
It may not be as extreme as it was in the 1800s.
But I think it was in the Atlantic where they surveyed something like 10 different international security experts who said based on what the platforms are doing, based on how the people are responding, one guy said it was like 90% chance, but the average was really high.
Well, you know, when people see someone saying things that they don't agree with, it's very important for people to understand where silencing people leads to.
And I don't think they do.
I think people have these very simplistic ideologies and these very narrow-minded perceptions of what is right and what is wrong.
And I think, and I've been saying this over and over again, but I think it's one of the most important things to state.
People need to learn to be reasonable.
They need to learn to be reasonable and to engage in civil discourse.
Do you think we could do this again in like six months and see where you guys are at? In terms of, like, what I think is important – the road to redemption.
I think that would open up a lot of doors for a lot of people to appreciate you.
I mean, there was an early phrase on the internet from some of the earliest internet engineers and designers, which is: code is law.
And a lot of what companies like ours, and startups, and random individuals contributing to the internet do will change parts of society, some for the positive and some for the negative.
And the most...
I think the most important thing that we need to do is to, as we just said, shine a bunch of light on it, make sure that people know where we stand and where we're trying to go and what bridges we might need to build from our current state to the future state.
And be open about the fact that we're not going to, and this is to your other point, we're not going to get to a perfect answer here.
It's just going to be steps and steps and steps and steps.
What we need to build is agility.
What we need to build is an ability to experiment very, very quickly and take in all these feedback loops that we get, some feedback loops like this, some within the numbers itself, and integrate them much faster.
I mean, the United States doesn't have a platform to do that.
When you're talking about the internet, if the United States wants to come up with a United States Twitter, like a solution or an alternative that the government runs and that's governed by free speech, good luck.
But I think it's important to point out, too, that a lot of people don't realize you guys have to contend with profits.
You have to be able to make money to pay your staff.
You don't get free money to run your company.
So aside from the fact that you have advertisers who want to be on the platform, I imagine a lot of these companies are enforcing hate speech policies because advertisers don't want to be associated with certain things.
So that creates, through advertisement, cultural restrictions.
Well, the problem with people like me is that I put out a lot of content, and there's millions of views, and it's impossible to moderate all the comments.
If you put a YouTube video up and you have a bunch of people that say a bunch of racist things in your YouTube comments, you could be held responsible and get fucked.
So the reason I bring that up is because there are going to be things that, even if you segment your advertisers from… Look, I pointed out I think the Democrats are in a really dangerous position, because outrage culture, although it exists in all factions, is predominantly in one faction.
And so when Trump comes out and says something really offensive, you know, grab him by the, you know what I'm talking about, the Trump supporters laugh.
They bought t-shirts that said it. The people on the left, the Democrat types, they got angry. So what happens now? You see Bernie Sanders, he's being dragged. The media is looking for blood, and they're desperate. They're laying people off, they're dying, and they will do whatever it takes to get those clicks.
What does that have to do with Twitter, though?
It has to do with the fact that someone's going to find something on your platform and they're going to call up your advertiser and say, look what Twitter's doing.
And you're going to be like, oh, we had no idea.
And too bad.
Canceled all ads.
Your money's dried up.
And so the reason I bring that up is I recognize Twitter, YouTube, Facebook, these other platforms are worried.
Money has to come from somewhere to pay people.
So you also have to realize you've got the press salivating, looking for that juicy story where they can accuse you of wrongdoing, because it'll get them clicks.
They'll make money.
And that means even though YouTube did nothing wrong with these comments, it was just a creepy group of people who didn't break the rules, who figured out how to manipulate the system.
YouTube ate it. Like, YouTube had to take that one.
The advertisers pulled out, YouTube lost money.
So YouTube then panics, sledgehammers comments, just wipes them out.
The point I'm trying to bring up is that even if Twitter wanted to say, you know what, we're going to allow free speech, what happens?
Advertisers are like, later.
Even if you segment it, they're going to be threatened by it, and so the restrictions are going to come from whether or not you can make money doing it.
Listen, there's stuff like that on Netflix specials that are out right now.
Things are changing.
It's just in the process of this transformation where people are understanding that because of the internet, if you look at late night conversations, how about...
Colbert saying that President Trump has Putin's dick in his mouth.
How about him saying that on television?
Do you really think that would have been done 10 years ago?
It wouldn't have been.
Or 15 years ago.
Or 20 years ago.
Impossible.
Not possible.
Standards are changing because of the internet.
So things that were impossible to say on network television just 10 years ago, you can say now.
Because people who are activists were complaining that he had said some homophobic things, things he had already apologized for before they ever brought it up.
But I am a comedian, and I understand where things are going.
The demise of free speech is greatly exaggerated.
That's what I'm saying.
I'm saying there's a lot of people out there that are complaining.
But the problem is not necessarily that there's so many people that are complaining.
The problem is that people are reacting to those complaints.
The vast majority of the population is recognizing that there is an evolution of free speech that's occurring in our culture and in all cultures around the world.
But this is a slow process when you're in the middle of it.
It's almost like evolution.
You're in the middle of it, you don't think anything is happening.