Claims: AI censorship

10 claims
09 Apr 2026
Commercial LLMs from Anthropic, OpenAI, and Google censor users.

And when it comes to these, you know, right now we've got Anthropic, we've got OpenAI, we've got Google. I might be missing one of the big commercial-based LLMs out there right now, but the biggest problem with these fucking things is: they're so good, but they will censor your ass.

09 Apr 2026
Censorship in commercial LLMs drives people to unaligned local LLMs.

They understand that by making it so that you can't make a Charles Manson AI through OpenAI, it doesn't make people not make the Charles Manson AI, it protects you from a lawsuit. But what it does do is it drives people into unaligned LLMs. And that is what is happening.

04 Feb 2026
ChatGPT deleted information about a UK rape case involving a Muslim defendant because it was politically sensitive.

There was a case in the UK where a guy had raped a 13-year-old girl. But because he was Muslim and he'd gone to a madrasa and the judge let him off jail time, said you were very sexually naive. You didn't understand. The guy was saying, oh, I thought women were nothing. And like a lollipop you dropped on the floor. And the judge let him off jail time. And I thought, this is quite extreme. And I found it. It came up on ChatGPT and then it deleted. And I said, oh, I think you just deleted the information for me. It's in the public domain. Why did you do that? It said, oh, you know, it's fine. It might violate my terms of service. And I said, well, how could it? This is an article that's in the public domain. So it gave me the information again, deleted it again. I said, you keep deleting this. Stop it. It said, I definitely won't delete it. Then it did the same again. So what it's doing is it's saying, because this is a news story that could be deemed anti-immigrant, or this is a news story that is politically sensitive, I'm not going to let you see it.

24 Feb 2025
U.S. AI models censor criticism of big pharma, the truth about COVID origins, the stolen 2020 election, and 9/11.

Yeah, DeepSeek may censor conversations about Tiananmen Square, but the U.S. AI models censor any criticism of big pharma. They censor the truth about COVID origins. They censor the truth about the stolen 2020 election. They censor the truth about 9/11. You name it, the U.S. models censor all of these subjects.

04 Jan 2025
X's AI system Grok identifies criticism of official narratives and affiliation with certain accounts as negative content that limits reach.

And this is according to X's own AI system. It says things like being critical of FBI statements, challenging official statements from authorities, skepticism about official narratives. This is from X's own AI saying this. They've also said being affiliated or talking to other accounts, like maybe Alex Jones, could also de-boost you. So these are the kinds of things that have people extremely skeptical of what the future of X looks like.

15 May 2019
Social media companies use AI for censorship because human moderators become radicalized after listening to conservative shows.

And so years ago, I began to learn that one big reason they wanted to go to AI computers doing censorship, which more and more they're doing, is because humans that have to listen to shows like this at Facebook and Twitter and Google and places and YouTube and censor it, well, they get radicalized, you see. They actually tune in and hear what we have to say versus what mainstream media is saying about us and what the corporate system is saying about us.