April 26, 2022 - Health Ranger - Mike Adams
25:09
Google whistleblower Zach Vorhies reveals extreme BIAS behind "Machine Learning Fairness"

When I saw this fake news document, which by the way, I've disclosed to the public now, you can see it at ZachVorhies.com.
When I saw this document, I went, okay, well, now they're describing that there's this fake news and they're classifying examples of what fake news means.
Well, what's going to be the thing that actually actively suppresses this quote-unquote fake news?
And I started to search for that, and that's what led me to this machine learning fairness system.
And machine learning fairness was essentially a digital form of cultural Marxism that was going to re-rank the entire information landscape of the Internet so that we as individuals could be programmed.
And this isn't my words.
This is Google's words.
What I did was that I found this set of slides describing the justification for machine learning fairness.
And in it, there was this particular set of slides that talked about how to program individuals.
And it's a four-step process.
The first step is that information is generated, then it's aggregated and filtered, and then it's ranked.
And then the fourth step is people like us are programmed.
And I saw this and I went, Wow!
They're really laying it out here that people are this easily programmed.
I was like, well, maybe it's just the ideology of this one person.
Maybe it's not representative of the company.
And then what I did is I found out that this same four-step process existed in other slides by other authors.
And I went, whoa, this is a system.
This is a system.
I'm seeing this same four-step process for how to program people, the internet's citizens. It's, you know, systemic now.
And so I started looking at, like, what this machine learning fairness is.
And I want to describe for you exactly what machine learning is and why machine learning fairness is a very, you know, very troubling term.
So imagine that this circle right here represents all of AI. You've got your chess playing AI, you've got your thing that gives you the recommendations on your Netflix or YouTube.
Those are all different types of AIs.
Now, machine learning is like a little tiny subset of that whole set of AI. And the way that machine learning works is that you create a pile of data that has labels associated with it.
And what you do is that, you know, let's say you have like an army of a hundred raters and they go through 10,000 articles and manually label each one as fake news or not fake news, to use a concrete example.
And so...
You run this pre-tagged data through the machine learning algorithm, and it goes to work.
It's going to try to find all the different patterns.
It's like, oh, there's this word that's very strongly indicative of the human giving it the label fake news.
And so after a while, it starts figuring out what the patterns are.
And then now what you have is you've got a trained classifier.
And this trained classifier can now have new input, run through it, and now it's going to automatically generate that label, whether it's fake news or whether it's not fake news.
But of course, it's just applying and extending the biases of the humans that trained it.
That's right.
It's encoding the biases of the individual people into this machine learning algorithm itself.
Right.
This is what machine learning is, and the classifier system designed for social justice at Google was called machine learning fairness.
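As an editorial aside, here is a minimal sketch of the labeled-data workflow Vorhies describes above: human raters tag examples, a model learns the patterns behind those tags, and the trained classifier then labels new input automatically. This is not Google's actual system; the library (scikit-learn), the toy articles, the label names, and the model choice are all assumptions made purely for illustration.

```python
# Minimal illustrative sketch of supervised text classification (not Google's system).
# Hypothetical rater-labeled training data: articles paired with labels.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

articles = [
    "Miracle cure doctors don't want you to know about",
    "City council approves new budget for road repairs",
    "Secret memo proves the election was decided in advance",
    "Quarterly earnings report shows modest revenue growth",
]
labels = ["fake_news", "not_fake_news", "fake_news", "not_fake_news"]

# Training: the model searches for word patterns that correlate with the
# labels the human raters assigned.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(articles, labels)

# Inference: new, unseen text is now labeled automatically.
print(model.predict(["Shocking secret cure the government is hiding"]))

# Note: whatever biases the raters applied when labeling are learned and
# reproduced by the classifier at scale.
```

The point of the sketch is the one made in the conversation: the classifier can only reproduce, at scale, whatever judgments the human raters encoded in the labels.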
Now, you probably can understand why they would call this thing machine learning fairness.
If you're against it, you inherently sound like you're against fairness.
Unfair, yeah.
Fair, right?
Because if you're not for fairness, then you're for algorithmic unfairness.
And it's this dichotomy.
And so as soon as I saw this word, I went, oh, wow.
This is exactly what I would name it if I was an extreme leftist.
I would call the totalitarian takeover fairness.
And as I read through these documents describing how this thing works, one of the interesting questions in the Q&A section was whether objective reality could be categorized as unfair.
And the answer that Google gave was yes.
And the example that they used was the following.
Let's say someone was doing an image search for CEOs.
If Google image search returns mostly male results, even if those results match objective reality, Google's position was that this could still be classified as algorithmically unfair and justify product intervention.
And so I looked at this and I went, wow, they are going against the natural order of the universe.
You know, because we just had this whole James Damore thing where James is like, hey, maybe the fact that we've got more men in programming is because programming appeals more to men than it does to women, in the same way that teaching appeals more to women than it does to men.
Maybe it's not indicative of systemic bias.
And so here I was again, seeing this same thing play out again, and Google, having learned none of the lessons, seemed determined to want to double down.
So I just want to interject here and bring in a term, hate facts, right?
So something is a fact, but then Google's raters qualify it as hate, even though it's true.
And so then they introduce this idea that facts can be biased, or that facts or statements of fact should be banned or censored because the facts themselves are hateful. For example, Instagram recently banned FBI crime statistics that talk about, I think, black-on-black crime.
Well, that's a hate fact, according to the left.
You're not allowed to talk about those statistics, even though they claim to be the party of science.
Yeah, and it's sad because they've appropriated the term science, and they've replaced it with their own sort of, like, scientism, some sort of religious construct, which is intolerant towards heterodoxy.
Like, there's no dissenting opinions that are allowed.
Right.
And, you know, that is a really great example of what's been happening to the whole industry in general.
Go ahead.
Well, I don't mean to interrupt, but whenever you get a chance, talk about, you know, Trump's executive order, where he mentioned that Google has links to China and helped develop technology in China that China is using to oppress human rights.
I'd just like you to speak to that.
Did you see any evidence of that in your research?
Or was that part of the culture that you saw?
Or is that something that you never witnessed in the company?
Well, here's the thing, is that whenever you're running an operation, you always want to put out disinformation in order to muddy the waters and make it so that people don't know what's going on.
And at the same time that Machine Learning Fairness was rolling out internally, there was this supposed leak of Project Dragonfly.
And Project Dragonfly was the supposed censorship engine for China.
And everyone's attention got focused on that, because it was all the media was talking about.
No one was talking about machine learning fairness.
And I did a search for both.
And what I found is that there was virtually no trace of Project Dragonfly.
And there was this gigantic project of machine learning fairness.
And so I'm of the belief that Project Dragonfly was a fake project in order to take the air out of the room and allow the anger towards censorship to sort of dissipate harmlessly into the air.
When in reality they were building a censorship engine to apply to Americans.
But that's far worse, I think, than the way Americans would view a censorship engine being deployed in China.
They would say, well, that's China.
That's communism.
That's the way they run their country.
But like you said, maybe this was designed to deceive Americans into thinking that it was just a China problem, not something happening in America.
Right.
It was something like, oh, if you're going to get so mad at Google over this project, over perceived censorship, then you wouldn't believe that Google's actually doing this in America in other projects.
Because, you know, as you said, China's got their own rules.
Like, if China has too much censorship, that's up to the Chinese to fix, not a bunch of Americans.
And this is essentially what we were doing.
We were getting really outraged over this fake project, over Chinese censorship, where if you go to China, no one really uses Google that much.
It's like number five, which means it's got almost no market share.
And there was this sort of media sensation that made it seem like, because Google was doing this Dragonfly censorship, all the Chinese were going to be censored.
In the same way that if, I don't know, Arby's switches to coconut oil, that means we're all going to get less canola oil in our diets.
It doesn't matter.
It's a small sliver of the market share.
And so, you know, I didn't see anything about Project Dragonfly.
After my disclosure in June 2019, one month later, Google was like, oh, we canceled Dragonfly, right?
Additionally, the history of machine learning fairness is really long.
Like, this is not a project that originated in Google.
This was a project that originated in Stanford.
There was a long history of brilliant academics developing this project at Stanford, and it got moved into Google.
By, you know, dangling this over here, this Project Dragonfly, ooh, it's a predator insect, like, conjures up all these bad feelings, right?
Like, come on, who would name, like, why don't you just call it Velociraptor?
That's, like, the same sort of, like, negative connotation.
Like, you're not going to name your totalitarian censorship regime, you know, after something bad.
You're going to name it after something good, so you can't, you know, fight against it.
Like fairness.
Yeah, like fairness.
Like machine learning fairness.
So, yeah.
So I'm sitting here.
I'm seeing all this fairness getting rolled out to Google Search, Google News, YouTube.
I'm seeing the...
The talking points that are being delivered to the managers so that they can explain why they're going to be censoring all the results on their different corpuses.
And what was really strange is that the cloud team was one of the biggest teams to be indoctrinated, which is insane because that's their most corporate, public-facing thing.
The cloud center is where other businesses go to just host their apps.
They have no business going in there and telling another company what its code of conduct is going to be.
But what I saw in there was that they were setting down the groundwork to allow this.
And what I mean by that is the introduction of race-based affirmative action hiring practices throughout the company, or I'm sorry, throughout that division.
And so, you know, these proactive measures were not just in recruitment, but also in attrition.
If you're going to say, all right, you know, bye, Google, I'm going to go take this other job, then if you're a white person, they're going to be like, we'll let them go.
And if you're a minority, they're going to be like, oh, no, no, no.
We're going to give you a better offer.
You should stay because it helps their diversity quotients.
And the performance metrics of a manager were actually based on, you know, the diversity metrics that were being rolled into the management class.
You know, and so, man, this seems illegal. Like, I think California's got laws on race, like, you can't hire, you can't treat people differently based on their race, no matter what it is.
And so, you know, it didn't matter to Google.
California didn't matter.
They were going to do it anyways.
And I'm like, well, this is one of the documents I'm going to download.
And so I downloaded it.
Did you find that change in the culture to be shocking, given that I'm sure the hiring process used to be merit-based?
You got the job if you were qualified, if you were intelligent, if you could write code well.
But then abandoning the merit-based system and now prioritizing, basically, judging people based on the color of their skin, was that a big shock to you, that that was taking place in a company that used to be so merit-based?
Yeah, well, I mean, they had taken some small steps towards abandoning merit-based hiring, and the first sort of inkling that I got from this was when they decided to re-add names back onto their resumes.
They did this experiment at one time where they removed all of the race and gender, you know, indications on a resume.
Like, they went to the name and they blacked it out, for example, because, you know, your name gives away a lot of who you are and what kind of race you belong to.
And so they removed this.
What they found out was that their hiring decisions started to come out more racist and more sexist, which is insane, given that all that information was stripped away.
Right.
So in other words, at that point, they were just hiring people based on qualifications and they were ending up with more white males, basically.
Yeah.
And so it hurt their gender and racial statistics.
So they decided to re-add the names so that, you know, I don't know, so they could improve their equity of different races within the company.
And so that was like the first sort of like inkling.
But seeing all this stuff about how they were describing people and what they said, oh man, there's this really famous document that got released called Coffee Beans.
And what they said is that, like, hey, well, you know, should we really include all coffee beans, even if they're lower quality, right?
Because, like, you know, at the front, like, assume that there's, like, high-quality coffee beans that are really easily accessible.
Like, what you're going to do is you're going to get, like, these high-quality ones, thinking that you're going to get a better blend of coffee.
But they're like, well, we should go out there and actually find possibly lesser-quality coffee beans.
And even though individually they may be lesser quality, the thing is that they're going to create a more tasteful blend.
Wow.
It's like, what?
Right?
Did they really just, you know, break down the whole racial question into, like, coffee beans?
But here it is, a blend of coffee beans.
So they're trying to get a better blend of coffee.
So they're going to go out and they're going to have, you know, these racist sort of quotas.
And so I saw this, and I saw the machine learning fairness, and I saw the indoctrination.
And, you know, I put that together with our implicit association test, which was a computerized test that gives you metrics on how internally racist you are.
Which, by the way, was declared a fraud by none other than the Wall Street Journal.
I put all this together and I went, wow, they're really going for it.
They're really going for a Marxist approach.
Subversive.
And I had experience in this because I've dated a lot of Russians.
And so they've told me stories about what happened in Ukraine.
They told me about, you know, what happened in the Soviet Union and how it was a clown world.
And so I said, oh, wow, this is really, you know, we're really at the beginnings of sort of a cultural revolution like they had in Russia or they had in China.
And so I started to familiarize myself with what a communist revolution sort of looks like.
And what I saw is that all of them follow a very similar pattern, which is the control of the language and political correctness.
And so I went, okay, well, this is what they're doing.
This explains the political correctness.
This explains the intolerance for dissent.
It's all part of a plan that's being organized and being executed, you know, across assets.
I'm talking media, I'm talking tech, and it's now coming at us from all these different directions.
And, you know, at this point I was just, let's say 2017, I just was downloading all these documents, but I didn't really intend to leak all this stuff to the public.
But that all changed when I caught Google deleting words out of their Arabic dictionary in order to make a Trump tweet sound crazy.
Was that the covfefe?
Yeah, it was the covfefe situation.
Covfefe, yeah.
In May of 2017, Trump struck some sort of deal with Saudi Arabia.
And they overthrew their succession line and put MBS eventually in power.
But he had struck this deal in May 2017.
And then on May 31st, Trump tweeted, despite the constant negative press covfefe.
And the very next day, the New York Times wrote an article saying that the internet had come to the wrong conclusion about what the word covfefe meant.
And what the internet had done was that they had run this word covfefe through Google Translate.
And Google Translate said, oh, this word is Arabic, and it means we will stand up.
And so, if you take that translation and you put that with the original tweet, what you get is, despite the constant negative press, we will stand up.
Is that a valid decode?
Could be.
Yeah, it makes linguistic sense.
Right.
And so...
Okay, so these people did that.
So the New York Times decided to take it upon themselves to say, well, this actually is a false decode.
And who did they get to confirm that?
It was someone who was a previous contractor with the company, who's now working at Harvard or one of these ivory tower academic institutions.
And so it was basically like almost circular reasoning.
Like, hey, we asked our employee and they confirmed it.
Well, for some reason, that same day, an executive at Google wrote up this design doc saying, well, the New York Times said this word is nonsense.
We currently translate this word into this meaning.
And obviously, Google is wrong.
So retroactive deletions, Ministry of Truth style, classic Orwellian style, but justifying it because the New York Times said so.
So the New York Times gets to define linguistic reality now and then Google conforms.
That's right.
That's exactly what happened.
What was then jaw-dropping was the team that was tasked to censor this word.
And this team called themselves the Derrida team.
And I don't know if you know of a philosopher by the name of Jacques Derrida, who was known as the father of postmodernism, who happened to advocate for the destruction of Western civilization through attacking the language.
Wow.
And so he kind of set down this groundwork of like, well, if you change the terms and you make all terms racist or sexist or one of these isms, then you can attack people at the language because essentially we operate on language.
So if you attack the language, you attack the substrate of opposition.
And this was really punctuated in the book 1984 by George Orwell, who predicted that the socialists would rule society through controlled language.
And so what a coincidence that the team responsible for the censorship called themselves the Derrida team.
And so I saw this go down, and I saw that, you know, they couldn't just delete it once, Mike.
They actually had to delete the word twice because the AI was able to find another path to translation.
And so they deleted this word twice out of the Arabic translation dictionaries.
And then on June 7, 2017, the Washington Post did an article advocating for the removal of President Trump through the use of the 25th Amendment.
And the evidence that they supplied for the removal was that Trump was mentally incapacitated because he was tweeting nonsense.
Well, it's nonsense because Google's deleting the words out of the dictionary.
I saw that.
And I said, this has moved beyond just, you know, misguided ideology and Trump derangement syndrome.
This is now active insurrection.
This is now active sedition and possibly treason.
And now that I know about it, I have a responsibility to let the authorities know.
And if I don't, then now I'm part of the conspiracy through silence.
What did you do?
This video was made possible by Brighteon.com.
After being deplatformed by YouTube, I built Brighteon.com so that we can speak.
All voices of dissent are welcome.