June 27, 2019 - Sargon of Akkad - Carl Benjamin
13:52
Google is Unfair

I think there have been enough leaks from Google now to show that Google is not a fair organization, which is particularly concerning because fairness is something that Google prides itself on.
But to demonstrate this, we're going to have to go through quite a long thought process.
Jen Gennai is an ethicist at Google, previously head of their Trust and Safety team, and currently the head of Responsible Innovation and Global Affairs at Google.
On Friday the 9th, 2018, the Evening Standard published an article in which Jen Gennai explains what it is that Google is going to try to do to combat bias.
She says: "We are all horribly biased. We all have our biases. All our existing data sources are built on everyone before us and their historical biases."
On May 9th, 2019, Jen Gennai gave a talk at Google I/O '19, an international developer conference, where she explained their AI principles.
These give them a set of values-driven principles as a company: one, be socially beneficial, and two, avoid creating or reinforcing unfair bias.
She also discusses underrepresented groups, whom you need to give extra focus to in order to, quote, "get it right."
What "right" means is not quite defined, but presumably she means fair treatment in line with her values.
So what is bias?
Well, a standard dictionary definition of bias that you'll find on Google goes something like this.
"Inclination or prejudice for or against one person or group, especially in a way considered to be unfair."
I think that this is quite an inadequate definition: bias is often the source of the perceived unfairness in the first place, so defining bias in terms of unfairness is circular, and it also leaves out bias for or against things that are not people.
It also uses quite loaded words like "prejudice."
So I would define bias to be preference for the valued and preference against the unvalued.
I think this is a more neutral way of constructing a definition of bias, and one that more people would agree with; people at Google themselves would probably accept that as a fair definition of bias.
Public representatives of Google often talk about fairness.
In fact, they talk about it an awful lot.
And on the 24th of June, Project Veritas released an exposé featuring Jen Gennai, in which there was, again, a heavy focus on fairness.
This came both from Gennai and from an engineer, who said that you need to be fair when programming their algorithms.
And a complaint from the whistleblower featured in this clip was that "they're not an objective source of information."
This lines up with what Jen Gennai had previously said about helping out underrepresented groups.
So again, a standard dictionary definition of fairness is "treating someone in a way that is right or reasonable, or treating a group of people equally and not allowing personal opinions to influence your judgment."
Given the way that Google has treated many of its content creators on YouTube, it's hard to believe that this is a definition of fairness that Google would adhere to.
For example, Black Pigeon Speaks had his channel suddenly terminated, with no strikes and no warnings, only to have it reinstated after a massive public outcry.
I don't believe that you could call that fair or reasonable treatment.
And this definition is also the opposite of what Gennai had already said regarding underrepresented groups, whom you need to give extra focus to in order to "get it right."
And this leads us to another undercover clip that was featured in the Project Veritas video, none of which, by the way, I'm going to show, because it has been censored off of their channel, Vimeo, and BitChute.
So, in it she said: "They were not saying what is fair and equitable, so we're like, well, we're a big company, we're going to say it.
People who voted for the current president do not agree with our definition of fairness."
That appears to be because Google's definition of fairness is not the standard English definition of fairness.
They do not appear to be treating their content creators in line with what is considered to be a fair procedure.
And so the question really is, how is a fair outcome determined?
Now, I think there are a few ways of looking at this.
I think one question you have to ask yourself is, does it match a predetermined result, or is it determined through procedure?
I think that we could split these two worldviews into two categories that I'm going to call conceptual fairness and procedural fairness.
So conceptual fairness, in the case of Jen Gennai and Google, I think would be equality as fairness.
Things being equal is what is fair.
And I think that is judged at the end, from the result of what has happened.
Whereas what we could call the right-wing view of this is treatment as fairness: how you are being treated.
This is what I'm going to call procedural fairness.
So conceptually, you're looking at the outcome and saying what is fair.
Or procedurally, you're looking at the way things are done and deciding if they are fair.
Because one actually does negate the other.
If you have conceptual fairness, you could quite easily engage in procedural unfairness to get to your conceptual result.
Conceptual fairness is known in advance and it has absolute consequences.
As in, we already know what we are expecting to see and any deviation from that is not in line with our conception of fairness.
Procedural fairness is not known in advance and has relative consequences.
It is the procedure itself that determines a fair consequence, not a predetermined version that we have in advance.
When Gennai says "people who voted for the current president do not agree with our definition of fairness," I think these are the two definitions we are talking about.
I think that people like Gennai believe in conceptual fairness and people who voted for Trump believe in procedural fairness.
And what we are describing here is a conflict of values.
In Gennai's talk at the Google I/O conference, she specified that they are a values-driven company.
They use values-driven principles as a company.
So the question is, what are their values?
Because it is our values that determine what we decide is fair.
They align with our biases, which are our preferences for the valued and against the unvalued, and this informs our decision-making.
I think that this is what gives Silicon Valley its left-wing bias.
The left wing tends to value equality.
The right wing tends to value excellence.
The left wing tends to focus on systems.
The right wing tends to focus on procedure.
The left wing tends to focus on groups.
The right wing tends to focus on individuals.
To the left, what is fair depends on the equal outcome of groups, which is why they have the concept of an underrepresented or marginalized group.
To the right, fairness requires equal application of procedure between all people going through it, regardless of what the outcomes are.
Essentially, I think we can summarize this as the left wing believing that the ends justify the means, versus the right wing believing that the means justify the ends.
This is why the right wing finds Google's method of applying their standards to be unfair.
It is the difference between a publisher and a platform.
A publisher editorializes the content that they put on their own systems because they are concerned with the outcomes that that content will generate, whereas a platform is concerned with procedure.
They are concerned with making sure that the right rules have been followed in line with the law.
When you're a platform as expansive as Google, and you have a preference for a certain set of values and a preference against another set of values, and those preferences inform your decision-making when you decide what is and is not fair, it's quite concerning, because the natural inclination will be towards election meddling.
As Jen Gennai already said, you need to focus on some at the expense of others, because the outcome they're looking for is not one determined by a fair procedure; it is one that has been conceptually agreed upon in advance.
In the Project Veritas video, Gennai said this:
"We're also training our algorithms. If 2016 happened again, would the outcome be different?
2020 is certainly top of mind for my old organization, Trust and Safety.
They have been working on it since 2016 to make sure we're ready for 2020."
This is the deliberate thumb on the scales to make the outcome equal, even if it violates procedure.
On Elizabeth Warren breaking up Google, she said this:
"That will not make it better, it will make it worse, because now all these smaller companies who don't have the resources we do will be charged with preventing the next Trump situation.
It's like, a small company cannot do that."
This is the best possible proof I can think of to show how Google views the world in terms of equal outcomes and in terms of their own preferences, set against the preferences of the Trump supporters who view the world differently.
It could not be more crystal clear.
They also claim that they don't rank content by ideology.
This was refuted very, very swiftly by James O'Keefe, with a leaked email that appears to call Jordan Peterson, Ben Shapiro, and Dennis Prager "literal Nazis" who are dog-whistling to other Nazis in their content; the recommendation was that these channels could be deranked or not recommended.
So they do rank by ideology.
They at least discuss it, and there's no particular reason to think they don't already have an algorithmic function to do it.
Because remember, they want to prevent another 2016.
And there's also evidence to suggest that they also meddled in the Irish abortion referendum that happened last year.
Then you have the situation with Carlos Maza and Steven Crowder.
Maza created a Twitter mob to put pressure on YouTube to delete Crowder's channel because Crowder called him a "lispy queer."
Now, that's a playground insult and not really something that violates YouTube's terms of service, which is presumably why Crowder used it.
It wasn't illegal, and he is allowed to make fun of one of his political opponents, because political opponents are what we are talking about in this regard.
For example, Carlos Maza has had in his Twitter biography for quite some time: "Tucker Carlson is a white supremacist."
Well, that sounds like a playground insult to me.
That doesn't sound like a legitimate, credible thing to say.
It sounds, in fact, very much in line with someone calling someone else a "lispy queer."
"You're a white supremacist," says the left.
"You're a lispy queer," says the right.
It's very clearly, again, a conflict of these two values.
And what I find interesting is that, again, by their own standards of fairness, Google are not being fair.
Vox are a bigger media platform than Steven Crowder, and Steven Crowder is a direct competitor to Vox in this regard.
On Facebook and on YouTube, Vox gets more views than Steven Crowder, has more subscribers than Steven Crowder, and is probably making a bigger cultural impact overall.
So Carlos Maza, claiming to be the victim of Steven Crowder despite being hosted on a larger and more influential platform, is doing what we would, I suppose, consider to be punching down.
It is not fair that this one person has such influence over a group of other people just because he didn't like being insulted.
The consequence of this was not to delete Steven Crowder's channel, but to demonetize it entirely.
Again, it's really hard to figure out where the fairness is in that regard.
Steven Crowder broke none of the rules, which is what YouTube said initially, until the hate mob on Twitter pressured them into at least doing something.
And all of this was completely supported by Carlos Maza on Twitter, in fact directed by him, I would say, and in a way that was laden with irony, as if to suggest that this is all a performance piece by Maza.
None of it really bothers him, because he doesn't respect the worldview of the people who are criticizing him and whom he spends his daily life criticizing in return.
Google is not a fair platform.
Their left-wing bias is causing the tension with the right wing.
Google's own executives cannot solve this problem because they are the source of it.
The problem is their own bias on how to tackle bias.
It is unfair to have it all one way and none of the other by their own definition.
And despite the data that Tim Pool found showing that the left enjoys a remarkable advantage on platforms like YouTube, the scales are still being tilted in their favour.
This all manifests through the worldview presented to us from Google and people on the left wing like Carlos Maza.