Facebook Forced to Undo Ban On Conservative by Court Injunction
SUPPORT JOURNALISM. Become a patron at http://www.patreon.com/Timcast

Facebook recently suspended a user for calling Germans "stupid" and blaming left-wing media. Facebook claimed this was a violation of its hate speech policies, and yes, directly attacking national origin is a violation. But a court recently issued an injunction saying that Facebook must unban the user and cannot delete his comment if he decides to post it again. What is most alarming here is that Facebook has determined that calling Germans "stupid" is a bannable offence. What is next? Well, Facebook says it dreams of a future where hate speech is policed by artificial intelligence.

Make sure to subscribe for more travel, news, opinion, and documentary with Tim Pool every day.

Support the show (http://timcast.com/donate)
It seems like almost every day we hear something about censorship, about violations of free speech and free expression, and these are rights that most of us in the West value.
We like being able to say whatever we want.
And there's just news day after day about how this is being violated.
And that's why today's story is a bit strange, because it's almost the opposite.
A German court recently issued an injunction against Facebook, saying it cannot block an offensive comment and that it has to reinstate a user it had given a 30-day suspension, because, the court said, it was protected speech.
And what's particularly strange about this is that, for one, Germany passed a fairly restrictive hate speech law just a few months ago, and the person that won the injunction is seemingly conservative.
If you're not already there, hit that subscribe button.
Click that little bell.
Because believe it or not, if you don't click that bell, YouTube doesn't let you know I have new videos.
Subscribing apparently doesn't do anything anymore.
But more importantly, make sure you go to patreon.com forward slash TimCast and become a patron today.
Patrons are the people who help me do the work that I do.
So when you join, at any level, you are supporting my work and supporting my journalism.
So please consider becoming a patron and supporting my journalism today.
From ZDNet: Court tells Facebook, stop deleting offensive comment.
Facebook's move to block a user and cut a comment from that account has been challenged by a German court.
A Berlin court has ordered Facebook not to block a user and not to delete a comment made by that user, even though it breached the social network's community standards.
The order appears to be the first such injunction in Germany, which has spent the past few years trying to get platforms such as Facebook to be more proactive in removing hate speech.
This is a temporary injunction of the sort that German courts grant after having heard only one side of the argument.
For that reason, the Berlin district court that issued it is refusing to comment on the case for now.
And now it actually gets scarier.
Because when you hear that Facebook took down someone's post, when you hear that Facebook is claiming the post violated their community standards, you're gonna imagine some kind of egregious racism or sexism, just something truly awful.
But when you see what this comment was that got removed, it's actually scary for a whole other reason.
It's scary because the comment is innocuous.
It's nothing.
The comment in question was, The Germans are becoming ever more stupid.
Gabor B's comment, posted in January, read: No wonder, since they are every day littered with fake news from the left-wing Systemmedien about skilled workers, declining unemployment rates, or Trump.
The article says Systemmedien can be inelegantly translated as "system media."
The phrase carries echoes of the term Systempresse, or "system press," that was used by the Nazis before they came to power.
So "system media" sounds like "system press," and the Nazis said "system press." That's as close as they get to this actually being hate speech.
This looks like the run-of-the-mill Facebook complaint.
He didn't attack any particular race, gender, or identity.
He just said the Germans are getting stupider for reading left-wing media.
That's a common sentiment across the board throughout the West.
That, now, is hate speech.
So, before we go on, I just want to point out: I've made videos in the past talking about how it will get worse.
You know, they might be targeting racists and sexists now, and everyone might agree on that.
Not everybody, but there's a lot of people who are going to say, yeah, well, you know, sexism and racism are bad.
But now they're actually going for someone who's saying that people are becoming stupid from reading left-wing media.
Well, that's just a political opinion.
But sure enough, Facebook says it violates the community guidelines.
And to make it even stranger, it's Germany that took to the defense of this user.
I will add though, as it states in the article, this is just because they've only heard one side of the argument.
It's very possible that when the court hears what Facebook has to say about why this violates their terms and conditions, the court might agree, and the user might get banned.
Now, initially, Facebook banned this user for 30 days.
I should say suspended the user for 30 days.
And because of this injunction, the user was reinstated, though the comment remains deleted.
The article states that the preliminary injunction, which can be appealed, does not force Facebook to reinstate the comment, which remains offline at this point.
However, it does stop Facebook from deleting the comment again if Gabor B chooses to repost it.
People have been talking over the past couple years about an internet bill of rights about regulating social media networks.
And in my opinion, I'm usually a little standoffish when it comes to giving the government more authority.
Look, I understand that the government has to regulate certain things and I tend to agree with government regulation in certain areas.
But I'm always worried about the immediate response being give more power to an already powerful institution.
I don't know if that's the right answer.
But at the same time, I don't know what the right answer is, so that's why you kind of see me always on the fence with things.
I just don't think I'm smart enough to know how to solve these problems.
But what I will say is, at least in this instance, it would seem that this user's comment was just snark.
Run-of-the-mill internet snark, calling someone stupid.
It's not hate speech, is it?
I hope not.
And when the court issued this injunction, you can see that there are some benefits to the government stepping in and saying, hold on a second.
But it's a big challenge.
Do we allow private entities to function exactly the way they want, even if that means certain opinions are removed, even opinions as innocuous as this?
Or do we give the courts and the government the right to tell private businesses what they can or can't do?
It's a big challenge, and I understand there's a lot of arguments to be made.
Many people believe that because it is a public forum and our political discourse is happening there, they need to take a stance closer to free speech, and that removing comments simply because they're hateful or because they express a certain political opinion is probably not the right move.
Now, for me, I understand there are issues with hate speech, and I don't know what the solution is.
And that's typically why I just talk to you about what the news is, and that's typically why you don't see me engaging in a lot of activism.
But there is something that I think we all definitely need to pay attention to, because I can only imagine things are going to get a lot worse.
How does Facebook actually deal with hate speech?
Well, they're talking about bringing on tens of thousands of individuals to actually monitor posts.
We know that Facebook is actually scanning your messages, your private messages, for hate speech.
From April 4th, Bloomberg Technology, Facebook scans the photos and links you send on Messenger.
The company confirmed the practice after an interview published earlier this week with Chief Executive Officer Mark Zuckerberg raised questions about Messenger's practices and privacy.
Zuckerberg told Vox's Ezra Klein a story about receiving a phone call related to ethnic cleansing in Myanmar.
Facebook had detected people trying to send sensational messages through the Messenger app, he said.
In that case, our systems detect what's going on, Zuckerberg said.
We stopped those messages from going through.
Some people reacted with concern on Twitter.
Was Facebook reading messages more generally?
Facebook has been under scrutiny in recent weeks over how it handles users' private data, and the revelation struck a nerve.
Messenger doesn't use the data from scanned messages for advertising, the company said, but the policy may extend beyond what Messenger users expect.
I think it's fair to say that we all assume our messages, sent from one person to another, are private, and that Facebook isn't going to be reading them.
Well, reading might be a little strong, but we know that at least according to this story and a few others, Facebook scans the content of your messages.
But even while they say that they don't use it for advertising, many people have reported just that.
That after sending a message on Messenger, a private message, they received an advertisement for what they were talking about.
Last month, someone posted this to Reddit.
And you can see they sent the message, here's the recipe, let me know if you can open this.
And there's the ad.
Find recipes from the Food Network.
And that could be entirely coincidental.
That maybe they were talking about something else on the Facebook platform because Facebook says they aren't scanning your messages to send you ads.
But I'm not surprised people think they are, especially when you see posts like this.
And I gotta say, I just don't trust Facebook.
But now, here's where it gets scary.
From Lawfare Blog.
Zuckerberg's new hate speech plan.
Out with the court, and in with the code.
It starts by talking about how Mark Zuckerberg testified before Congress and says, Zuckerberg did shed some light on Facebook's current thinking about how it will combat hate speech on its platform.
The plan epitomizes technological optimism.
In five to ten years, Zuckerberg said, he expects artificial intelligence will be able to proactively monitor posts for hateful content.
In the meantime, Facebook is hiring more human content moderators.
There's no solution to this.
Humans are biased, and they're going to say some things are hateful and some things aren't.
We're going to see asymmetrical policing of policy, and there's no real way to solve it.
Not only that, but there's way too many Facebook posts for even 20,000 humans to deal with.
In which case, what is Facebook going to do?
Well, as we see from this article, robots.
But robots don't understand context.
Certain communities have taken pejorative terms and turned them into positive terms in their own community, and I don't think I need to give you examples of this.
You might use a word as an insult, and 50 years later, that word is reclaimed by that community, and this happens all the time.
But what is a computer program going to say?
It's going to say, that's hate speech, no matter what the context.
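To make that concrete: here is a minimal, hypothetical sketch of a context-blind keyword filter. This is not Facebook's actual system, and the blocklisted words are arbitrary examples; the point is only that matching words without context flags an insult and a harmless self-deprecating remark identically.

```python
# Hypothetical sketch of a context-blind keyword filter.
# NOT Facebook's actual system; the blocklist is an arbitrary example.

BLOCKLIST = {"stupid", "idiot"}

def is_flagged(post: str) -> bool:
    """Flag a post if it contains any blocklisted word, ignoring all context."""
    # Strip common punctuation and lowercase each word before comparing.
    words = {w.strip(".,!?\"'").lower() for w in post.split()}
    return not BLOCKLIST.isdisjoint(words)

# An insult and a self-deprecating joke get the exact same treatment:
print(is_flagged("The Germans are becoming ever more stupid."))  # True
print(is_flagged("I felt so stupid when I forgot my keys."))     # True
print(is_flagged("Great article, thanks for sharing!"))          # False
```

A filter like this has no notion of who is speaking, to whom, or why, which is exactly the reclaimed-slur problem described above.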
And not only that, can I just say, in my opinion, what's wrong with hate?
I understand that hate could lead to violence, and that we want people to be loving and caring and compassionate.
But hate is a real emotion experienced by people, and it scares me to think that we're enacting policies on massive public forums where we say certain emotions are not allowed.
It's one thing, I understand, if you call for violence, if you ask people to commit violence; there are certain thresholds past which you really shouldn't be allowed to say certain things.
Absolutely.
The United States does not have absolute free speech.
We do have restrictions on speech.
The argument, though, is where to draw the line.
In my opinion, we need to draw the line very, very far away and actually allow some pretty bad behavior.
Why?
Well, look what happened in Germany.
Someone said Germans were stupid for watching left-wing media.
And they said that that was a violation of their community guidelines.
You can't even express on Facebook that you think people are stupid?
That's a scary future.
And you have to understand it's not going to stop there, because it never does.
When the argument first started, we were talking about racism.
We were talking about extreme acts of racism and white supremacy.
But today, in Germany, they're talking about calling someone stupid.
But perhaps things might swing the other way.
In 2016, this story from The Guardian, Microsoft deeply sorry for racist, sexist tweets sent by AI chatbot.
Company finally apologizes after Tay quickly learned to produce offensive posts, forcing the tech giant to shut it down after just 16 hours.
The bot known as Tay was designed to become smarter as more users interacted with it.
Instead, it quickly learned to parrot a slew of anti-Semitic and other hateful invective that human Twitter users fed the program, forcing Microsoft Corp to shut it down on Thursday.
We think we know how an AI will react.
In this instance, it didn't react well and became racist.
So, while I think we're definitely improving AI technology, imagine where that might go if we start enacting artificial intelligence hate speech police.
We don't know exactly what they will learn, and we don't know exactly what they will target.
And what happens when it targets something that isn't hate speech?
What happens when it targets people who are using certain words in a positive way?
What happens when someone says "homosexual" and the system says, oh, we've learned that people often use this in a derogatory context, and it removes it, even though it's just a regular word used by regular people all the time?
In this instance, we saw an AI chatbot become racist and start spewing sexist and racist things, at least according to the story.
I'm not sure what they were actually tweeting.
So what happens when the artificial intelligence hate speech police can't figure out the context and just start policing whatever?
Because we've already seen videos on YouTube and posts on Facebook and Twitter get removed when they weren't actually hate speech.
In fact, there was a story where the New York Times Opinion Twitter account was suspended for hate speech because it included words like "indigenous" and "Native American" in a tweet, and the Twitter bots just didn't understand the difference.
Do we think the AI is going to get better or are we going to empower AI and it's only going to get worse?
Also, keep in mind what this means for the future.
It means that a robot will be dictating what we are or are not allowed to say.
Yes, I understand that there's going to be people programming the AI, but as the artificial intelligence learns and decides to start policing things on its own, certain ideas are going to be restricted by machines.
That's the future.
That's gotta be scary to somebody, right?
That worries me.
Humans understand nuance and context, and it's entirely possible that in the future artificial intelligence will be able to as well.
I think it will.
But the priorities of an AI are going to be dramatically different than the priorities of individuals, of people.
And people understand that culture changes, and people have to fight for their rights, and over the past hundred years, civil rights have been won by many people.
And the language they used to win those civil rights was extremely offensive to many.
If an AI bot starts shutting down speech, will this be the end of progress?
Or will certain groups just become the new oppressed and the new marginalized?
Will it entirely backfire?
I'm anything but a Luddite.
I believe in technology and I think we can use it to make the world a better place.
But look, I'm not really going to trust Facebook here and believe that they're going to get this one right.
I don't know if there's a solution to hate speech.
Right now, I don't think there is.
I believe that technology can solve almost any problem, so maybe there will be.
But keep in mind, as I mentioned before, hate is a real emotion experienced by humans, and it's a damn shame and a bit terrifying if we start using computer programs to restrict people's emotions, their ability to express themselves.
What do you think's gonna happen when one day people wake up and they can no longer vent to their communities on the internet?
Because calling Germans stupid now, apparently, is hate speech.
Let me know what you think in the comments below and we'll keep the conversation going.
How do you feel about Facebook's latest step, deciding that calling people stupid is hate speech?
And how do you feel about a German court actually defending someone's right to post that?
I think this is a particularly interesting story.
And I think if we move towards an AI future, we're going to be in trouble.