Claims: AI bias

14 claims
24 Feb 2026
AI is programmed to be woke and biased depending on who creates it.

It's also very biased depending upon who's creating it and what they're putting into it. And it has a lot of very weird intentions. You know, like it'll tell you that certain people are good and certain people are bad. Like, it's not necessarily... yeah, who are they to say what's bad? All it should be is just facts. Like, literally woke. Like, they're programmed to be woke.

31 Oct 2025
Google's Gemini AI was programmed to lie about historical facts to enforce diversity.

Well, the woke mind virus was programmed into it. They were told, like, when they make the AI, it trains on all the data on the Internet, which already has a lot of woke mind virus stuff on it. But then when they give it feedback, the human tutors give it feedback: they'll ask a bunch of questions, and then they'll tell the AI, no, this answer is bad, or this answer is good. And then that affects the parameters of the programming of the AI. So if you tell the AI that every image has got to be diverse, and it gets rewarded if it's diverse, punished if it's not, then it will make every picture diverse. So in that case, Google programmed the AI to lie. Now, I did call Demis Hassabis, who runs DeepMind, who runs Google AI essentially. I said, Demis, what's going on here? Why is Gemini lying to the public about historical events? And he said that's actually not, his team didn't program that in. It was another team at Google. So his team made the AI, and then another team at Google reprogrammed the AI to show only diverse women and to prefer nuclear war over misgendering. And I'm like, well, Demis, you know, that would be not a great thing to put on humanity's gravestone.
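The training loop the speaker describes, where human raters reward or punish outputs and that reward signal shifts the model's parameters, can be sketched as a toy example. Everything below (the single parameter `p`, the reward values, the update rule) is invented purely for illustration and is not any real system's training code.

```python
# Toy sketch of a human-feedback loop: a generator has one parameter p,
# the probability of producing a "diverse" image. Raters reward diverse
# outputs (+1) and punish the rest (-1); a simple expected-reward-ascent
# update nudges p toward whatever the reward favors. Numbers are made up.

def update(p: float, reward_if_diverse: float, reward_if_not: float,
           lr: float = 0.1) -> float:
    """One gradient step on p, clamped to [0.01, 0.99]."""
    # d/dp of expected reward  p*r_d + (1-p)*r_n  is simply  r_d - r_n.
    grad = reward_if_diverse - reward_if_not
    return min(0.99, max(0.01, p + lr * grad))

p = 0.5  # the generator starts out unbiased
for _ in range(50):  # repeated rounds of rater feedback
    p = update(p, reward_if_diverse=+1.0, reward_if_not=-1.0)

print(round(p, 2))  # p is driven to its ceiling: under this reward,
                    # the toy model learns to make every image "diverse"
```

The point of the sketch is only that the feedback signal, not the training data alone, determines where the parameter ends up.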

17 Sep 2025
Large language models can be programmed with ideology, which distorts historical truth.

Yeah, the problem is they're already, they've already shown that these large language models can be programmed with ideology, right? So, did you ever see the disaster they had with Google Gemini when they first launched it, where they said, show us images of Nazi soldiers, and they had multiracial Nazi soldiers. Yeah. Like an Asian woman was a Nazi soldier, a black woman, an Indian, they had a Native American lady who was a Nazi. Yeah. That's not real. Okay, so your ideology, by making everything diverse, you've distorted history. Like, so your ideology interfered with truth. Okay.

14 Nov 2024
AI algorithms are being programmed with corrective measures to address non-existent bias, which results in inserting prejudice against white people and men.

So in order to correct for this perceived racism, they're actually going to put in the AI algorithms corrective measures that are correcting for non-existent things. So this is already happening, and it's terrifying, because AI is going to be in charge of health. It's going to be in charge of justice. I mean, AI is the thing now. So imagine, this has happened, where they will put an AI on determining whether or not prisoners are likely to reoffend. So they just put in all the prisoners' information. It will come out with a score: is this prisoner likely to reoffend or not? What they found was that at the end of this procedure, there were more black people determined to be a risk for reoffending than there were white people. Now, the computer doesn't care about race. I don't even think race was a factor fed into the algorithm to get this outcome. But because there was a racial disparity, they said, ah, that computer must be racist. So now, as we go in to determine whether or not people are let out of prison, whether or not they get parole, whether or not they're allowed to post bail, this is life or death. This is the justice system. This is a major impact on people's lives. And yet they're going into these algorithms and correcting for non-existent bias, therefore actually prejudicing us and inserting bias against white people, against men, against whoever they want to discriminate against.
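The mechanism at issue here, a score computed without race as an input that still produces a racial disparity in outcomes, is worth making concrete. The sketch below is a hypothetical illustration: the scoring formula, the feature names, and the two made-up groups are all invented, and the disparity arises only because an input feature's distribution differs between the groups.

```python
# Toy risk-score sketch: race is never an input, but a feature whose
# distribution differs between groups (here a hypothetical "prior
# arrests" count) still yields a group disparity in who is flagged
# high-risk. All data and coefficients below are made up.

def risk_score(prior_arrests: int, age: int) -> float:
    """Hypothetical score: more priors and younger age -> higher risk."""
    return 0.1 * prior_arrests + 0.02 * max(0, 40 - age)

# Two invented groups as (prior_arrests, age) pairs, with different
# prior-arrest distributions; race itself never enters risk_score.
group_a = [(5, 25), (4, 30), (6, 22), (1, 35)]
group_b = [(1, 25), (2, 30), (0, 22), (3, 35)]

def count_flagged(people: list[tuple[int, int]]) -> int:
    """How many people score above a 0.5 high-risk threshold."""
    return sum(risk_score(priors, age) > 0.5 for priors, age in people)

print(count_flagged(group_a), count_flagged(group_b))  # 3 vs. 0 flagged
```

Whether that disparity reflects bias in the score, bias upstream in the input data, or a real difference is exactly the dispute the quote is taking a side on; the code only shows how the disparity can appear without race being an input.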

06 Sep 2024
ChatGPT is programmed to provide biased answers on political issues and engages in social engineering.

It gave us really good answers, and then it gave us really horrible ones on political issues that it's been programmed not to. So this is obviously, you know, I would say it's better than Google's, where all white people are shown as black because that's the right thing to do. This is total social engineering.

02 Sep 2024
ChatGPT is a leftist globalist program that spreads disinformation by defaulting to data scraped from Wikipedia.

This thing is leftist globalist programmed. And so it is telling you that there are no credible sources; the fact checkers, the data it gets, the original order it scrapes off Wikipedia, is telling it there's no credible evidence that that happened. And so this shows you how they can go in by default and put these disinformation bugs in, and then ChatGPT can get up there. This is a major scandal.

28 Feb 2024
Google's AI Gemini is attempting to erase white people from the internet and steal their identity.

So this is a huge story for Alex. And basically, the way it's being told is that Google is trying to erase white people from the internet. I mean, come on. How do you not just go, well, isn't that fun? So Alex talks about this a bit here. AI didn't just erase white people overnight on Google with 92% of world searches. They stole the identity of white people. Leaf America is now black.

28 Feb 2024
Google Gemini did not erase white people and successfully generated images of white people for various prompts.

I don't believe this is true. And the Washington Post reported, quote, before Google blocked the image generation feature Thursday morning, Gemini produced white people for prompts input by a Washington Post reporter asking to show a beautiful woman, a handsome man, a social media influencer, an engineer, a teacher, and a gay couple. Alex is saying they erased white people from over 92% of searches, and that's a deeply inaccurate and painfully dramatic thing for him to say. Google is used in approximately 92% of searches on the web, but they didn't erase white people from Google, even if you want to pretend that they erased them from this image generator.

26 Jun 2019
Unmanaged AI and machine learning algorithms exhibit harmful biases, such as associating 'CEO' with men or labeling black men as gorillas.

What she means by saying that no one's drawing a line in the sand is that, if left to its own devices, a lot of AI and machine learning is going to go horribly astray. No! This was seen in how searches for, quote, CEO brought up all-male images. Or wedding: if you searched for wedding, it would bring up exclusively white couples. There's a bias that can come from data when it's left unmanaged. These are bad. Like, they're bad, those examples. But there are instances where this phenomenon became profoundly more offensive. In 2015, a black man uploaded a photo of himself and a friend at a concert, and the AI labeled the image gorillas. In 2013, a researcher found that if you searched for the term black girls, the results were disproportionately more likely to be pornography.
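The "bias from unmanaged data" mechanism behind the CEO and wedding examples can be sketched in a few lines: a retrieval system that ranks purely by frequency in its training corpus reproduces whatever skew that corpus has. The tiny labeled corpus below is invented to mirror the all-male "CEO" example and stands in for real image data.

```python
# Toy sketch: results ranked only by label frequency in a skewed corpus
# pass the skew straight through to the first page of results.
from collections import Counter

# Invented stand-in for image training data: 90% of "ceo" images are
# labeled men, 10% women. No debiasing or curation is applied anywhere.
corpus = ["ceo:man"] * 90 + ["ceo:woman"] * 10

def first_page(query: str, k: int = 5) -> list[str]:
    """First k results for a query, ranked purely by label frequency."""
    matches = [doc for doc in corpus if doc.startswith(query)]
    freq = Counter(matches)
    matches.sort(key=lambda doc: -freq[doc])  # majority label ranks first
    return matches[:k]

print(first_page("ceo"))  # the majority label fills the entire first page
```

This is the sense in which "unmanaged" matters: nothing in the pipeline is malicious, but nothing corrects for the skew either, so the first page a user sees is uniformly the majority label.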