Facebook Caught LYING About Censorship, Regulation Is Coming
Facebook Caught LYING About Censorship, Regulation Is Coming. Facebook banned several right-wing figures and claimed this is normal for them, but in a new report from the Associated Press we learn that not only does Facebook leave up the overwhelming majority of content that breaks the rules, it actually compiles some of it into celebratory videos. Researchers found only 38% was actually getting removed, and the AP was able to easily find content that went well beyond anything Paul Joseph Watson ever published. This flies in the face of Facebook's statement about banning and censoring content. Social media censorship seems to be reaching a tipping point, as even Vox.com is pointing out the hypocrisy of Facebook and how it tends to target conservatives. If a far-left site like Vox sees the lies, then it's only a matter of time before social media regulation drops. But while free speech is paramount, this means the worst of the worst will be able to organize. Where do we draw the line?
Support the show (http://timcast.com/donate)
When Facebook banned Paul Joseph Watson, Laura Loomer, Milo Yiannopoulos, and others, they claimed in a statement to the press that they always ban people or content they find extreme or dangerous.
And most of us assumed that was a lie.
And it turns out, it likely was.
In a story from the Associated Press, we're learning that not only does Facebook leave up the majority of extremist content, they've actually been auto-generating celebratory videos using extremist content.
And this shows just how hypocritical they really are.
They likely banned these individuals associated with the right for press reasons.
One of the key components of their defense of their censorship is Section 230 of the Communications Decency Act, but that requires that they're a platform and not a publisher.
However, we're learning in a story from Vox that Facebook is trying to claim they're both at the same time in different suits.
Facebook is completely hypocritical, and regulation is coming, because people are realizing that Facebook's been lying the whole time.
They're selling our data, and they're lying about why and how they censor people.
Today, let's take a look at these stories, as well as another story in Poland, which may be a landmark case which restricts censorship on Facebook.
But before we get started, make sure you follow me on Minds at Minds.com/TimCast.
I don't want to have all my eggs in one basket.
Recently, YouTube stripped monetization away from Sargon of Akkad, so you realize just how dangerous it is to have everything on one platform.
Follow me at Minds.com/TimCast, because it's a good platform and a great backup channel.
If you want to support this video, just share it on social media and hit that like button.
The story says the animated video begins with a photo of the black flags of jihad.
Seconds later, it flashes highlights of a year of social media posts: plaques of anti-Semitic verses, talk of retribution, and a photo of two men carrying more jihadi flags as they burn the Stars and Stripes.
It wasn't produced by extremists.
It was created by Facebook.
In a clever bit of self-promotion, the social media giant takes a year of a user's content and auto-generates a celebratory video.
In this case, the user called himself "Abdel-Rahim Moussa, the Caliphate."
"Thanks for being here, from Facebook," the video concludes in a cartoon bubble, before flashing the company's famous thumbs-up.
Facebook likes to give the impression it's staying ahead of extremists by taking down their posts, often before users even see them.
But a confidential whistleblower's complaint to the Securities and Exchange Commission, obtained by the AP, alleges the social media company has exaggerated its success.
Even worse, it shows that the company is inadvertently making use of propaganda by militant groups to auto-generate videos and pages that could be used for networking by extremists.
So something is wrong here.
You don't have to like Paul Joseph Watson, Loomer, Yiannopoulos, Jones, whatever.
The point is, the content they make is objectionable, offensive, or whatever.
It's certainly not directly calling people to violence.
It's certainly not jihadi or overtly extremist.
So why did Facebook target these individuals and actually leave up extremist content and generate content around their photos?
The answer to me seems kind of obvious.
If you were to ask me, I'd say they took down these individuals because it's political.
Many of these people, they really did help Donald Trump get elected by pushing memes and content that went viral.
Now they're gone, but Facebook is leaving up extremists, actual extremists.
The AP continues.
According to the complaint, over a five-month period last year, researchers monitored pages by users who affiliated themselves with groups the U.S. State Department has designated as terrorist organizations.
In that period, 38% of the posts with prominent symbols of extremist groups were removed.
In its own review, the AP found that as of this month, much of the banned content cited in the study—an execution video, images of severed heads, propaganda honoring martyred militants—slipped through the algorithmic web and remained easy to find on Facebook.
The complaint is landing as Facebook tries to stay ahead of a growing array of criticism over its privacy practices and its ability to keep hate speech, live-streamed murders, and suicide off its service.
In the face of criticism, CEO Mark Zuckerberg has spoken of his pride in the company's ability to weed out violent posts automatically through artificial intelligence.
During an earnings call last month, for instance, he repeated a carefully worded formulation that Facebook has been employing.
I also would like to applaud the Associated Press for doing the story, doing the research, and publishing it.
Because for some reason, there are many people associated with left-wing media that are obsessed with someone like Paul Joseph Watson or Carl Benjamin.
They repeatedly write about offensive jokes instead of talking about the actual extremists on the platform and the content that is not being removed by Facebook.
The narrative is always about the far right.
They don't actually talk about what's really happening.
In fact, when Twitter said that innocent Muslims and innocent Arabic-speaking users were being removed because its algorithm was trying to target ISIS, there was no outrage.
Instead, people demanded more censorship and more algorithms, even if removing innocent people was the price of targeting white supremacists.
But the AP does include a statement from Representative Bennie Thompson, a Democrat from Mississippi and chairman of the House Homeland Security Committee, who expressed frustration that Facebook has made so little progress on blocking content despite the reassurances he received from the company.
"This is yet another deeply worrisome example of Facebook's inability to manage its own platforms, and the extent to which it needs to clean up its act," he said.
"Facebook must not only rid its platforms of terrorist and extremist content, but it also needs to be able to prevent it from being amplified."
Stories like this typically lead to people calling for more censorship and more action.
As it stands, online services are protected by something called Section 230 of the Communications Decency Act, which states that a platform will not be held liable for the content on its service.
So you can't sue them for libel or actually take any action against them because of something someone else posted.
But in a broad First Amendment argument, they claim they can censor any content they want.
Most of this hinges on the fact that they're a platform, not a publisher.
Publishers are like the New York Times and the Wall Street Journal.
They choose what goes on their front page.
Therefore, they are responsible.
However, Facebook actually argues they're both.
So why are they still being protected when they take this action?
In a story from Vox, "The Facebook free speech battle, explained," the question is: is Facebook a platform or a publisher?
When users are getting banned, it makes a difference.
The story says, InfoWars is a publisher.
Alex Jones, who has been the publisher and director of InfoWars since its launch in 1999, can publish what he wants on it.
If I pitch Alex Jones on an article for InfoWars, he would be under no obligation whatsoever to publish it.
Amazon Kindle is a platform, which means Amazon provides the means by which to create or engage with content.
But it doesn't create most of the content itself, or do a lot of the policing of it.
They say an even better example of a platform might be a company like Verizon or T-Mobile, which provides software and the network for you to make phone calls or send texts, but doesn't censor your phone calls or texts, even if you're arranging to commit a crime.
Vox continues, referencing Section 230 of the Communications Decency Act, and then goes on to say that Facebook is trying to have its cake and eat it too.
At times, Facebook has argued that it's a platform, but at other times, like in court, that it's a publisher.
In public-facing venues, Facebook refers to itself as a platform or just a tech company.
They reference several times that Facebook has referred to itself as a platform, and how Wired Magazine and Ad Age have said it's a platform, not a publisher.
They say in court, Facebook's own attorneys have argued the opposite.
In court proceedings stemming from a lawsuit filed by an app developer in 2018, a Facebook attorney argued that because Facebook was a publisher, it could work like a newspaper and thus have the ability to determine what to publish and what not to.
"The publisher discretion is a free speech right, irrespective of what technological means is used," the attorney said.
"A newspaper has a publisher function, whether they are doing it on their website, in a printed copy, or through the news alerts."
Basically, what allows Facebook, Twitter, and other sites to censor and remove content is a free speech argument.
That they should not be compelled to speak.
That would make them a publisher.
If they're a platform, they're not being compelled to speak.
Simply because I own a stage doesn't mean I'm the one speaking when you stand on it.
You would sue the person standing on the stage, not the guy who built the stage.
But if Facebook wants to be a publisher, they are now liable for the content on their site, making this whole thing really, really complicated.
While Vox does go on to criticize Paul Joseph Watson, I feel like their argument is actually rather weak.
They mention how Paul Joseph Watson wants some people to be banned, but not him.
When in reality, it sounds like Paul's argument is that they're calling him extreme or dangerous while leaving up actual extreme and dangerous content.
It's not so much that he's saying those people should be banned, but that it's a clear double standard.
Figuring out what we're going to do about censorship is going to take a little bit of time.
It's going to take a lot of resources and a lot of legal expertise.
Some people have argued that getting rid of or amending Section 230 won't do anything.
All that would do is give you the right to sue Facebook if someone lies about you.
It won't actually guarantee anyone has a right to the platform.
But there is a case happening right now in Poland, which may make some changes, at least in Poland, but it's still interesting.
A story from Bloomberg says: "Banned From Facebook? A Polish Court May Help."
A Polish group is suing Facebook for private censorship in a potentially landmark case.
The story essentially talks about a group called SIN for short, which was producing completely legal content in an attempt to get people off of drugs.
They say it may be controversially soft on drug users, but the approach has been backed by the United Nations and influential private donors and is by no means illegal.
So why were they banned?
Well, the group doesn't know, so they're suing.
Lawsuits like this could set precedent, meaning Facebook will have to behave in a certain way in the future, but it may only have to do with Poland.
One of the most alarming things, if you were to ask me, is that Facebook is saying, you can still post about Infowars, but only if you post negatively.
That is, Facebook is telling you what opinion you're allowed to express.
Why?
To me, that sounds like they're literally a publisher.
You can post on our platform if you do X.
In the initial report about the banning of Yiannopoulos, Watson, and others, they say InfoWars is subject to the strictest ban: Facebook and Instagram will remove any content containing InfoWars videos, radio segments, or articles, unless the post is explicitly condemning the content.
Why should Facebook have the right to tell us what our opinions are?
How can they tell us, sure, you can talk about it, as long as you don't like it?
That's freaky!
That should be freaky to everybody.
But we do have a really big problem when it comes to the censorship debate.
Facebook says they ban extremist, dangerous content.
Well, we know they don't from the original story.
But the issue then is, if people have a right to social media access, where is the line drawn?
Would terrorist propaganda be allowed on the platform so long as it isn't a direct call to violence?
Where do you draw the line?
Many people on the left view conservatives as far-right, and they want them banned.
They're concerned about hate speech, or speech that is offensive or denigrating to certain protected classes.
Conservatives disagree, but who gets to draw the line?
Because if we say that platform access is a civil right, then you're going to see the worst of the worst on the platform.
Maybe that's the way it should be, because that's how it exists in the real world.
A lot of these people on the left don't seem to understand that harassment is already illegal.
If someone is harassing you, you file a police report, and the courts will determine if what they're doing is criminal.
However, if someone is saying mean words, that's not harassment.
But again, it's a line, and that's what courts are for.
In the real world, people can go outside and say the craziest things possible.
Why should it be any different on social media?
It does provide a challenge in that these networks allow bubbles to form.
It allows people to find a community and rapidly escalate their rhetoric.
In fact, a 2017 story from The Outline talks about this: "Mic's Drop: How Mic.com exploited social justice for clicks, then abandoned a staff that believed in it."
The story talks about how Mic actually used to be fairly balanced, but then discovered a formula for generating clicks and traffic and slowly changed to become about identity politics.
Even launching an identity section.
This created a sphere where certain individuals could meet and then radicalize others.
Social media as a whole does this.
So how do we address the problem?
These people like to claim that YouTube is radicalizing people.
That's actually not really true because YouTube algorithmically sorts content and can push you away from more extreme content.
But Facebook historically has done this.
The story is from a year and a half ago.
We know that's how many of these sites were built.
They were generating rage bait to get you angry to get you to click.
And it worked.
And it's not like YouTubers don't do this.
Of course, it can happen on YouTube.
And many people find a formula, make content, and then get money from it.
But that's not the same.
Because on YouTube, you don't click share and have everyone who follows you instantly see that video.
On Facebook, if I see a video, I can click share and then everyone I know sees it, making viral content much easier to spread.
On YouTube, you have to take the video link and send it somewhere else, so it's much harder to actually push people toward specific content.
BuzzFeed, Vox, Vice, Mic, all these companies discovered this, and they utilized it.
If we guarantee platform access as a civil right, how do we stop bad actors and the worst of the worst, I'm talking about Al Qaeda and ISIS, from using these and exploiting it to generate followers and expand their reach?
These are questions that need to be answered.
A lot of us have talked about how political censorship is bad on these platforms.
But where do we draw the line, and how do we figure that out?
You can comment below and let me know what you think.
We'll keep the conversation going.
You can follow me on Minds at Minds.com/TimCast.
Stay tuned!
New videos every day at 4 p.m. Eastern, and I'll have more videos for you on my second channel, youtube.com/TimCastNews, starting at 6 p.m.