May 28, 2020 - Freedomain Radio - Stefan Molyneux
44:22
Trump's Executive Order on Social Media Censorship Leaked!

Good afternoon, everybody.
This is Stefan Molyneux, the 28th of May 2020, and some pretty wild stuff is going on at the moment. I wanted to get my first thoughts across: an executive order is apparently coming out today from President Donald Trump about preventing online censorship.
I'll say quite frankly that this is a crisis that is near and dear to my heart.
And so I wanted to give you my sort of first thoughts on this.
The executive order has been leaked, or it has come out.
It is not, perhaps, of course, the final situation or the final text, but it's well worth going through.
And so I'm going to give you my thoughts on it.
Obviously, these are... Non-legal professional thoughts and so on.
But, you know, I can think.
And I have myself, of course, been subject to, I think, significant amounts of censorship and suppression.
Just go try having a look for my documentary on Hong Kong, and, well, just see how much luck you have.
It's called Hong Kong Fight for Freedom.
Go try and find it.
Chat is alive. So let's jump in.
I have a bit of a hard stop, I'm afraid.
I'm going on a show this afternoon for a couple of hours, but I wanted to get these thoughts across to you.
So let's jump straight in.
Of course, you can rewind if you're coming in later.
So this is, and I'll put the link to this.
I'll put it in the chat window, and of course, I'll put it in the show notes.
Here, you can follow along.
It's kind of blurry, so there's not much point putting it on the screen.
So, with regards to the preamble, he says, free speech is the bedrock of American democracy.
Our founding fathers protected the sacred right with the First Amendment to the Constitution, underscoring that the freedom to express and debate ideas is the foundation for all of our rights as a free people.
The emergence and growth of online platforms in recent years raises important questions about applying the ideals of the First Amendment to modern communications technology.
Today, many Americans follow the news, stay in touch with friends and family, and share their views on current events through social media and other online platforms.
As a result, these platforms function in many ways as a 21st century equivalent of the public square.
Now, that is a very, very key phrase.
Public square is a very, very key phrase.
So, public square is like public utility.
If the gas company doesn't like your political views, can they shut off your gas?
If the electricity company doesn't like your political views, can they shut off your electricity?
These are all very important questions.
If the cell phone company doesn't like your political opinions, can they shut off your cell phone?
Can they remove your capacity to effectively access the internet?
If the internet company doesn't like your political viewpoint, can they shut down your internet?
Well, that's all a very big and important question.
Now, a purely private company can, of course, not give you a platform.
Like, if I want to write an op-ed for the New York Times, unless it violates every single procedure, like a man picking up soap in a shower in a prison bathroom stall, well, they're not going to give me access to the hallowed old Gray Lady of corruption known as the New York Times.
If I want to go and write an article for Mother Earth on the joys of the free market, they're not going to let me in the door, let alone on their website.
That's perfectly fine.
That's perfectly fine because they control and curate the content of the websites, the content of the magazines and newspapers.
So this is a big question.
Regarding the social media companies, if they are redefined as a public square, then in a sense, it becomes a violation of human rights to suppress people based upon legal speech.
Legal speech.
That if you're using your cell phone to conduct illegal business, then the cell phone company can say, hey, dude, you can't use our cell phone network to conduct illegal business, and therefore we're cutting off your cell phone.
But legal speech is the key.
Can a company shut you down for legal speech?
If it's a public square, well, then they can't.
And if it is a private business, then they can.
But the big question is, can you suppress legal speech without being considered a publisher rather than a platform?
So I talked about this in my live stream yesterday, where you should go and check it out in a little bit more detail.
So let's go on with this. He says, Not a phrase I ever thought I'd be saying, but anyway.
As president, I have made clear my commitment to free and open debate on the Internet.
Such debate is just as important online as it is in our universities, our businesses, our newspapers, and our homes.
It is essential to sustaining our democracy.
So that's interesting, right?
So he's bringing in universities, businesses, newspapers, and homes.
So can you be denied entrance to a university if you are pro-free market?
I don't know. I think it's pretty tough.
Can you be disallowed a business license if you're anti-communist?
These are all supposed to be neutral platforms, right?
Now, you can have, again, personal ostracism.
You can have business ostracism and so on.
But can you have infrastructure ostracism based upon political viewpoints?
And if the social media companies are considered to be infrastructure, i.e. a public square, as essential to the maintenance of free speech as your capacity to publish a website, say, well, then you are in trouble if you want to engage in censorship.
So Trump's executive order goes on to say, in a country that has long cherished the freedom of expression, we cannot allow a limited number of online platforms to handpick the speech that Americans may access and convey online.
This practice is fundamentally un-American and anti-democratic.
When large, powerful social media companies censor opinions with which they disagree, they exercise a dangerous power.
Online platforms, however, are engaging, he says, in selective censorship that is hurting our national discourse.
Tens of thousands of Americans have reported, among other troubling behaviors, online platforms, quote, flagging content as inappropriate, even though it does not violate any stated terms of service, making unannounced and unexplained changes to policies that have the effect of disfavoring certain viewpoints, and deleting content and entire accounts with no warning.
And that's because, of course, the White House, I think it was last year, put out a tool for reporting on social media censorship.
So he says, at the same time social media platforms are invoking inconsistent, irrational, and groundless justifications to censor or otherwise punish Americans' speech here at home, several online platforms are profiting from and promoting the aggression and disinformation spread by foreign governments like China.
Google, for example, created a search engine for the Chinese Communist Party which blacklisted searches for, quote, human rights.
Hid data unfavorable to the Chinese Communist Party, and tracked users determined appropriate for surveillance.
Google has also established research partnerships in China that provide direct benefits to the Chinese military.
For their part, Facebook and Twitter have accepted advertisements paid for by the Chinese government that spread false information about China's mass imprisonment of religious minorities.
He's talking about, I've heard, 2 million, 3 million Muslim Uighurs in concentration camps in China.
Twitter has also amplified China's propaganda abroad, including by allowing Chinese government officials to use its platform to undermine pro-democracy protests in Hong Kong.
Now, as far as all of that goes, however unsavory it might be, the argument there, I don't think, is that, well, Google should never do business with the Chinese government.
The Chinese government has been normalized in world relationships.
They're part of the World Health Organization.
In fact, they kind of run the World Health Organization.
They are part of the World Trade Organization.
They have been normalized, and it's not like they're North Korea.
I mean, they kind of should be treated like North Korea, in my opinion, but they're not.
So doing legal business with a government that has been normalized in international relations is not, as far as I understand it, a violation of free speech.
But I think the argument goes that if you're going to get rid of unsavory, offensive and nasty opinions or arguments on social media platforms, they can't say, well, it's just because that stuff is so negative, if they're also willing to do business with a totalitarian regime that disassembles people for organ transplants on a regular basis, according to the allegations from the Falun Gong people. They can't say, well, we just have such high and fastidious standards for unpleasant ideas that we just can't allow them, when they allow and support Chinese government propaganda, right? So I think that's the argument.
What they do overseas is not subject in particular to direct First Amendment constraints. I mean, certainly there's no First Amendment in China, otherwise it wouldn't be China.
So, you can't say you must enforce First Amendment protections in foreign countries and so on.
So, he says, Section 2, protections against arbitrary restrictions.
Ah, very, very important.
So, Section 230(c) was designed to address court decisions from the early days of the Internet, holding that an online platform that engaged in any editing or restriction of content posted by others thereby became itself a publisher of the content and could be liable for torts like defamation.
As the title of Section 230(c) makes clear, the provision is intended to provide liability protection to a provider of an interactive computer service, such as an online platform like Twitter, that engages in, quote, Good Samaritan blocking of content when the provider deems the content, in the terms of subsection 230(c)(2)(A), and this is really, really important.
Quote, obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable.
See, this is where law just becomes a whack load of toxic garbage, right?
Obscene, lewd, lascivious.
Okay, so we can kind of get that, right?
That's the big challenge with something like pornography and that it's really, really tough to define.
But as far as I hear, you know it when you see it, right?
Filthy, excessively violent, harassing, right?
So, okay, these are all terms that you can have reasonable debates about.
But then when you say, or otherwise objectionable... or otherwise objectionable.
What does that mean?
And this, of course, I assume is the thin edge of the wedge wherein, well, you know, it's objectionable.
But the whole point of free speech is it's supposed to be objectionable.
Nobody flags cat videos, right?
Nobody flags I'm against racism, right?
The whole point of free speech is it is designed to protect the most unpopular legal arguments or opinions That can be imagined.
In particular, it's supposed to protect objectionable, factual, scientific data.
Because if it's data, it can't be false.
It can't be illegal to report factually on data.
So saying, well, you can get rid of stuff that's objectionable, but we want to promote free speech is a complete contradiction.
There is no such thing As protecting free speech while taking down objectionable content, the whole point of free speech is to protect objectionable content and to rely upon...
See, there is a whole policing mechanism in social media, which is called unpopularity, right?
So if I say things...
Just imagine if I said anything...
Otherwise objectionable on social media, well, what could happen is people could simply stop following me.
They could say, Steph is a terrible guy.
He's a nasty guy. He's a bad guy.
And you shouldn't follow him. And if you do follow him, you ain't no friend of mine.
And they can do all of that kind of stuff, right?
So that is the policing that is supposed to happen with legal speech that some people find objectionable: you can attempt to argue against the person.
You can say only bad people would follow Steph, all that kind of stuff, right?
But it's not supposed to be suppressed by the platform itself.
So think of this, right?
This is the analogy I sort of was coming up with this morning when I wanted to talk about this.
So, if you...
Nah, let's not make it you.
You guys are wonderful. A guy named Bob drinks too much and gets in his car. He's had a prior DUI, or a series of DUIs, so his car has that little tube that's in some cars where you have to blow a low alcohol reading in order to start your car, right?
So, Bob gets into his car and he bypasses that somehow, right?
He's able to start his car and drives his car and then crashes into someone's dog, right?
Kills the dog, right? Okay, so who's liable?
Well, Bob is liable, and the road is not liable.
The people who made the road are not liable.
The people who made Bob's car, they're not liable, right?
Bob is liable. Now, maybe if he was in a bar, whoever sold him alcohol to excess, that's a whole other thing.
But clearly, the people who make the road are not liable, and the people who make the car are not liable.
Ah, but what if there was a little feature in the car, designed by the manufacturer, that allowed you to override the breathalyzer test and start the car even if you had a high blood alcohol content, right?
Well, suddenly now, because the car manufacturer is somewhat complicit in Bob's drunk driving, because they put in a little bypass switch to allow you to bypass the breathalyzer test to start the car, suddenly they're dragged into it, right?
So that's the big question.
If they are now participating in something, that's a different matter than if they're neutral.
In other words, if Bob bypasses on his own somehow, if Bob bypasses the breathalyzer test to start his car, that's one thing.
But if it's built into the car by the manufacturer, that's a different matter altogether.
So I hope that sort of helps what's going on here.
Subsection 230(c)(1) broadly states that no provider of an interactive computer service shall be treated as a publisher or speaker of content provided by another person.
But subsection 230(c)(2) qualifies that principle when the provider edits the content provided by others.
Subparagraph (c)(2) specifically addresses protections from civil liability and clarifies that a provider is protected from liability when it acts in good faith to restrict access to content that it considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable.
The provision does not extend to deceptive or pretextual actions restricting online content or actions inconsistent with an online platform's terms of service.
When an interactive computer service provider removes or restricts access to content and its actions do not meet the criteria of good faith, it is engaged in editorial conduct.
By making itself an editor of content outside the protections of this subparagraph, such a provider forfeits any protection from being deemed a publisher or speaker under subsection 230(c)(1), which properly applies only to a provider that merely provides a platform for content supplied by others.
It is the policy of the United States that all departments and agencies should apply Section 230(c) according to the interpretation set out in this section.
So, otherwise objectionable is...
I mean, obviously, it's a pretty wide net.
And so, otherwise objectionable...
If you allow anti-scientific rantings on your platform, but you take down legitimate science from your platform, then you can't say, well, quote, anti-scientific stuff is objectionable, we're going to get rid of it, right? So there's lots of different ways to figure out whether something that is considered objectionable is, in fact, being judged objectively.
Is it objective?
So if you say, well, denigrating a particular group, which is, of course, a lot of the time a protected group, is objectionable, okay, then you have to say, okay, well, what about people who denigrate Christians?
What about people who denigrate whites?
Right, so the white privilege, or whites are all racist, or whites are colonial exploiters who, you know, murdered 100 million indigenous people in North America, which is all completely false.
So that's the big question.
If you're going to say, well, it's objectionable to be negative towards a particular race or ethnicity or gender, then if men say women are X and you de-platform, then if women say men are X, do you also de-platform?
You have to have a consistent standard in order to be protected as far as all of this stuff goes, or at least a reasonable approximation.
And that's what it means to act in good faith.
We don't like people who denigrate.
Okay, well, if people who denigrate are allowed on your platform, but you only target a certain aspect of denigration, like denigration of blacks or Hispanics or whatever, well, generally we know how this works.
If you denigrate any group that consistently votes for the left, then you will be considered a hate monger.
If you denigrate any group that consistently votes for the right, like Christians and whites and so on, then that's perfectly fine.
So then are you acting in good faith?
Well, it's not an objective principle anymore.
This aspect of objectionable content now falls along political lines, and you have a particular problem.
And the other thing, too, I think that you have to have is you have to have some capacity for predictability when it comes to breaking the rules.
So obviously, if you post clearly illegal content, right, child pornography, if you post death threats, if you have clear incitement to riots and you're not, I don't know, Ice Cube or whatever, then you are clearly in violation.
And that's well understood.
And people who do that, of course, I have no particular patience for them anyway.
But the question is, how do you know whether you have violated terms?
Because if there's some fuzzy standard that says content is objectionable, but there's no clear demarcation, right?
So, I mean, I've talked about race and IQ, and if people say, hey, if you bring up scientific data regarding race and IQ, we consider that objectionable and you will be deplatformed.
Then what they have to say is the science of race and IQ, for instance, is much clearer than the science regarding climate change, right?
Because climate change claims are all based on projections over a century.
Race and IQ data has already been out for a century, right?
So you've got existing data versus projected data.
And I mean, lots of people have talked about it, Sam Harris and Charles Murray and so on.
Douglas, I was going to say Douglas Adams.
I'm sorry, I completely forgot his name.
He's a wonderful writer. But he's talked about it in his recent book.
If you say, well, you can't talk about that stuff, then you have to say, okay, by what standard do you not talk about that stuff, right?
If you're allowed to talk about projections in the future based upon data modeling, why can you not talk about data that's already been out for a century?
You kind of have to answer these questions.
Now, if you're going to say, well, you can't talk about this data that I've talked about, or rather that I've interviewed people who've talked about it, then you have to sort of explain why.
And it's really tough to make that case.
If you say, well, what's objectionable about it?
Well, it upsets some people. Okay, but the whole point of free speech is you're allowed to upset people.
Otherwise, there is no such thing as freedom of speech.
And so you don't know where the line is.
You don't know what the rules are and how to obey them, which is kind of important, right?
And if you don't know how to obey the rules, then it's really tough to say that you're morally liable for it, right?
I mean, it's like some government inventing its own language and publishing a set of law books that you're subjected to in that language, which nobody else knows. Are you responsible for obeying them?
Okay. So let's jump down a little further.
So the whole point here...
So to further advance the policy...
The National Telecommunications and Information Administration shall file a petition for rulemaking with the Federal Communications Commission requesting that the FCC expeditiously propose regulations to clarify the conditions under which an action restricting access to or availability of material is not, quote, taken in good faith within the meaning of subparagraph (c)(2)(A) of Section 230, particularly the conditions under which such actions will be considered to be, one, deceptive, pretextual, or inconsistent with the provider's terms of service.
Or, two, the result of inadequate notice, the product of unreasoned explanation, or having been undertaken without a meaningful opportunity to be heard.
So, deceptive, pretextual, or inconsistent with the provider's terms of service.
I don't know what this means legally. I assume it's creating shadow accounts or pretending to be someone else or stuff like that.
Inadequate notice, right?
Okay. So, of course, a lot of people have been taken down, like had their entire life's work wiped out without warning, without capacity for appeal, and so on.
And that is pretty significant, right?
So the issue for me is if somebody says, okay, I'm going to run a social media site, but I really don't like people who criticize communism.
I really don't like people who are anti-totalitarian.
I really don't like people who criticize Marx or the left, or who talk about science in particular ethnic contexts, whatever, right?
If they say that, well, that's fine.
I mean, then you don't sign on to those particular platforms, right?
But if people say, we are a free speech platform, then you sign on to them.
I'm not going to incite riots.
I'm not going to use death threats.
I'm not going to do any of that stuff.
In fact, I'm interested in promoting peace and reason and understanding between men and women and different races and all that.
I really, really want us all to get along, but we can only do that based upon a clear understanding of the facts and the realities on the ground of human society.
So this is where the whole question comes in of, is it a deceptive business practice to say to people, yes, you're allowed onto this platform?
When there is, in fact, a hidden agenda, and so you invest a lot of time, effort, and money into developing your presence on a platform, and then you kind of get wiped out without warning and without any clear understanding of what rules you supposedly violated.
So that's an important question, right?
Is it? And we'll get to that.
And the result of inadequate notice, unreasoned explanation.
I mean, I've usually not been contacted by social media companies about whether I've done something they consider wrong, what it is, and all of that.
So anyway... So, section three, prohibition on spending federal taxpayer dollars on advertising with online platforms that violate free speech principles.
So, the head of each executive department and agency shall review the agency's federal spending on advertising and marketing paid to online platforms.
Such review shall include the amount of money spent, blah, blah, blah.
Okay. So, anyway.
That's another issue.
The government spends, what is it, half a billion dollars on online advertising, and if companies are violating free speech, well, that seems kind of important.
Now, this is what I was talking about here.
Section 4, Federal Review of Unfair or Deceptive Practices.
It is the policy of the United States that large social media platforms such as Twitter and Facebook, as the functional equivalent of a traditional public forum, should not infringe protected speech.
The Supreme Court has described social media sites as the modern public square, which, quote, can provide perhaps the most powerful mechanisms available to a private citizen to make his or her voice heard.
And this was from 2017.
Communication through these channels has become important for meaningful participation in American democracy, including to petition elected leaders.
These sites are providing a public forum to the public for others to engage in free expression and debate.
In May of 2019, the White House Office of Digital Strategy created a tech bias reporting tool (that's what I mentioned earlier) to allow Americans to report incidents of online censorship.
In just weeks, the White House received over 16,000 complaints of online platforms censoring or otherwise taking action against users based on their political viewpoints.
The White House Office of Digital Strategy shall reestablish the White House tech bias reporting tool to collect complaints of online censorship and other potentially unfair or deceptive acts or practices by online platforms, and shall submit complaints received to the Department of Justice and the Federal Trade Commission.
The FTC shall consider taking action as appropriate and consistent with applicable law to prohibit unfair or deceptive acts or practices in or affecting commerce pursuant to 15 U.S.C. 45.
Such unfair or deceptive acts or practice shall include practices by entities regulated by Section 230 that restrict speech in ways that do not align with those entities' public representations about those practices.
So, in order to maintain their immunity from lawsuits for content, the tech giants cannot, cannot, cannot curate legal speech on the basis of political content.
So that's how they maintain their neutrality, or rather they have to maintain their neutrality in order to maintain their immunity from lawsuits for content.
In the same way that if you bypass the breathalyzer on a car illegally, then the car company is not responsible.
But if they throw something in there, the little hidden switch that says bypass breathalyzer test, then they have built something in and they're now part of the equation.
And it's really not that hard to figure these things out.
We've all seen and we've all heard about terrorist organizations using social media companies to organize attacks.
We've all heard about Antifa and other far-left communist groups who use social media to organize attacks.
We've seen death threats all over the place.
So you just have to come up with a couple of examples of, okay, this guy was banned.
Why? Well, because he said something that upset people.
Okay, well, this group was allowed to flourish Despite the fact that they organize violent attacks and issue death threats and so on.
So why? And now that's the problem, because they have to say one group, say the group that's organizing attacks and violence, they are not super objectionable, but Gavin McInnes and Laura Loomer and Milo Yiannopoulos and so on, that is massively objectionable.
Those people are banned, those people aren't.
That becomes very, very tough to maintain, right?
In other words, if speech that other people find upsetting is grounds for banning, but actually organizing violent gangs and groups is not, you have a problem, because clearly that would not be easy to defend as neutral,
right? And so for large internet platforms that are vast arenas for public debate, including the social media platform Twitter, the FTC shall also consider whether complaints allege violations of law that implicate the policy set forth in Section 4A of this order.
The FTC shall develop a report describing such complaints and make the report publicly available and consistent with applicable law.
State review of unfair or deceptive practices.
The Attorney General shall establish a working group regarding the potential enforcement of state statutes that prohibit online platforms from engaging in unfair and deceptive acts and practices.
The working group shall invite State Attorneys General for discussion and consultation as appropriate and consistent with applicable law.
The White House Office of Digital Strategy shall submit all complaints described in Section 4B of this order to the working group consistent with applicable law.
The working group shall also collect publicly available information regarding the following.
Monitoring or creating watch lists of users based upon their interactions with content or users, e.g. likes, follows, and time spent, and monitoring users based upon their activity off the platform, right?
That's kind of important as well.
So if you do something off platform and then that hits you on platform, that's kind of a reach, right?
For purposes of this order, the term online platform means any website or application that allows users to create and share content or engage in social networking or any general search engine.
General search engine is important as well because we generally think of Twitter and Facebook and other places, but general search engine is important as well.
So... That is the general issue that is going on.
And it is really, really fascinating.
Now, it's kind of late in the game, to be honest, right?
Because this is the end of May, and the election is November, of course.
And the social media companies are going to push back, of course, right?
They're going to mount legal challenges and so on.
And that is going to be interesting.
Now, if these social media companies want to give up their immunity from lawsuits because they're curating content, then they will cease to exist as they stand, right?
So the only reason that they've been able to grow as much is because they don't have to worry about the content of what people post because they're immune from any liability.
But that immunity is granted on the basis of neutrality when it comes to content, right?
So that is really, really important.
So if there's some, I don't know, some gay porn website and then they don't allow straight porn to be posted, okay, well, it's a gay porn website.
I guess that's kind of de rigueur, so to speak, for that, right?
Okay. But if they say, we are neutral with regards to political content, we don't curate based upon what we like or don't like or what serves our political interests or the political party that we donate to, and then they curate anyway, that's a big, big difference. Then they are publishers, not platforms.
They are a car manufacturer that has put in a bypass to the breathalyzer restriction.
And that is a very, very big deal.
Look, I don't want people kicked off social media.
There are people whose views I find absolutely hateful and appalling.
And the point is to engage with those people.
And I debate people I find absolutely repulsive as human beings.
And we have a fairly, fairly, fairly civilized discussion.
I've got one coming up, by the way, on Sunday, 4 p.m.
Eastern Standard Time. The best cure for bad ideas is more ideas.
The best cure for hate speech is debate.
There is, in fact, no such thing as hate speech.
You know, speech you hate is not hate speech.
It's just an emotional pejorative that is attached to opinions that you don't like.
And so this all came about, of course, because Twitter attached a fact check to one of Trump's tweets.
And that's a big step.
Now, my particular opinion...
It's just my opinion, of course.
I don't really have any proof of this.
Of course, right? But my particular opinion...
So if I were running a social media company...
If I were running a social media company, I would be really, really sick of this creeping censorship that was occurring.
And come on, let's be realistic.
It generally comes in through the HR departments, and a lot of it is a result of affirmative action hiring, because I've done this study before.
Generally, it's white males who most support free speech, and it kind of declines there depending on ethnicity, depending on gender and so on.
It's just a kind of reality.
And like I've worked in an HR department before.
I think I was the only white male there.
Yeah, I think I was.
Certainly the only straight white male there.
But anyway. So I think that a lot of this stuff has kind of come creeping in.
And if I were the head of a social media company, here's what I would do.
I would want a way to push back against this, because it really is a hole with no bottom.
When you start tweaking, it's a huge amount of power, right?
You just want to put a little bit, a little bit, just touch that just a tiny bit.
And it's a lot of power to try and sway that discourse.
When you're in control of billions of conversations around the world, you have a huge amount of power.
And the traditional limiting of that power has been being subject to lawsuits for content, right?
Like if you allow terrible things to trend, if you allow terrible things to slander or libel or whatever, then you have an issue.
And if you publish death threats as a magazine, you can face prosecution, right?
Because it's illegal. And so that's what's generally limited that kind of power, but social media companies have that power because they're excluded from that limiting factor of being subject or liable to the content of what they publish or what is published on their site.
So this question, this challenge, this is what is really going to be put to the test here.
Now, whether this goes to discovery, whether this goes to, okay, you've kind of got to open up your algorithms.
I mean, you can do those in a sealed way and so on, but you've got to open up your algorithms and you've got to show the standards or the decisions by which you suppress certain people and allow other people to...
I assume that the government would have that power should they open up an investigation, and there are antitrust investigations floating around these companies this summer as a whole.
So you would open up those algorithms and you would open up the decision-making process and you would get people under oath about how they made those decisions.
You'd get access to emails, internal company memos, code algorithms, and so on.
And so if I were in charge of a social media company and I wanted to push back against this, here's the problem.
If you go to your internal censorship department, whatever it is, as the CEO and you say, we've got to stop this, then what happens is, in general, I would imagine, those people would stop it, and then, you know, nasty stuff would bubble up.
And then they would run to the media and they would say, this CEO has told us to stop suppressing this XYZ terrible stuff that's out there.
And then the media would write all this about you.
Your stock price would tank.
And then you would be fired. You may even have been in violation of your fiduciary responsibility to increase or maintain the value of the stock price, and your entire career and happiness and peace of mind, and perhaps significant portions of your wealth, all go up in a giant mushroom cloud of social justice mob outrage, and you're running from the villagers like a kind of limping Shrek in a cartoon.
So that's really, really tough to say, no more censorship, and then, oh!
Oh, look at all this terrible stuff that's bubbling up on the platform.
The CEO won't let us deal with it.
He must be supporting these people.
And then the media just goes...
I mean, the media's got the real power in this, in the West.
The media has the real power. And I don't think they're using it too wisely.
So what would I do if I was in charge of a social media company?
What I would do is, or somebody would come to me and say, let's put a fact check on one of President Trump's tweets.
Now, that's just what I would do.
I don't know what's going on in these companies, right?
And I would say, oh, you know, that's, yeah, let's do that.
Knowing that it would provoke this kind of response.
Now, then what I could do is I could sit down with my board and say, look, we're going to go into a protracted legal battle.
We're going to have no particular capacity to maybe even survive such a close examination of our business practices with regards to the Section 230 neutrality thing.
We're going to have to, and I say this with a great deal of regret and sorrow, we're going to have to pull back on the censorship, at least until after November.
Obviously, if you get a Democrat in, then they can go hog wild on this stuff and it's all over for people.
But I would say, yeah, let's do a big, outrageous act of potential censorship.
That's going to provoke a strong government response.
And then I can go to the board and I can say, do we really want to go down this route?
Let's just hold off, at least till the election, and let's stop doing this kind of stuff.
But that's a challenge too, right?
Because if this kind of threat comes down from the state...
And then, massive things change on the social media companies.
Isn't that kind of a tacit admission that they were doing a lot of censorship outside of the Section 230 protections?
I don't know. I don't know.
These are just speculations, ways that I would think.
But if there is a chance to shut it down, it would be because...
They poked the bear, so to speak.
The bear came back with claws out, and they're like, now I'm going to be betraying my fiduciary responsibility to protect and increase the share price of the company if I end up provoking a massive investigation into potential censorship within the company.
That's going to hit us really hard.
I mean, just look at IBM before and after the 13-year investigation into the antitrust stuff that went on from the DOJ.
I mean, the company just got eviscerated.
Because when the government starts poking around in companies and starts issuing subpoenas and getting, quote, search warrants, or can invoke discovery and so on, then what happens is people don't really want to work there.
I mean, how would you like to work for a company where, you know, three days a week you're sitting there under oath with government investigators? People don't want to work there.
And so it's really terrible for these companies when the government starts opening up the kimono to look underneath because the government is saying, you know, you only get these immunities because of your neutrality.
And if your neutrality is significantly open to question, then you're not going to get these immunities.
So social media executives, you know, again, if I were in that boat – and I was a software executive for 15 years, a software entrepreneur and executive – I was a coder, like I know this world, not obviously at this massively high multi-billion dollar level, but I know this world quite well.
And if there is a regulatory threat that comes to a company, suddenly things reverse considerably.
And you say, listen, we're going to have to ease off on some of this stuff.
We have to conduct an internal review and we have to stop doing this stuff at least until the election.
Otherwise, our stock price is going to be significantly threatened, right?
Because why is it that these companies have such power?
Well, it's because they're wealthy.
And why are they wealthy? Because they have a huge amount of user data that they can sell for targeted advertising, right?
And the only reason that they're able to collect all of that user data is because they're not liable for lawsuits for content.
Anything that threatens their immunity from lawsuits for content doesn't just harm the company, it destroys their entire business model.
Because there's absolutely no capacity for any company to remain profitable while reviewing all that content. I don't know, you'd need an army of millions and millions and millions of people, all well-trained in the complex legality of something like fair use.
Like I remember talking to Mike Cernovich when he was getting the movie Hoaxed out.
You should really go and check out that movie, hoaxedmovie.com.
And just getting legal sign-off on fair use was a long, lengthy process, very expensive, just for a documentary where it seemed to me pretty much everything was obviously fair use.
And so that, you can't sustain the business model without immunity from liability for content.
It's not even remotely close.
These companies will shut down the moment that they lose that because, you know, people are then just going to start suing them because, you know, deep pocket syndrome and political activism and so on, and they're just going to end up facing waves and waves and mountains and mountains of lawsuits.
So it is going to be interesting.
And it will be interesting to see if any of this turns out to be retroactive.
Does Milo get back? Does Gavin McInnes get back?
Does Laura Loomer get back? Do I go back on PayPal?
Who knows, right? So it will be interesting to see if this is retroactive or not.
It will be interesting to see how this plays out.
It will be interesting to see the response. Of course, there's going to be a lot of people in the social media companies who are like, fight, fight, fight.
But the executives have to recognize that the goose that lays the golden egg is immunity and anything that threatens that immunity...
It's the end of the entire business model.
And that's going to be, like, crushing to the U.S. economy, to the Western economies.
It's going to put millions of people out of work.
It's going to destroy these things.
On the plus side, other places, you know, like Gab, Minds, Parler, you name it, BitChute, and so on, and Brighteon, which are doing great work when it comes to free speech, LBRY, other places that you can find me on as well.
Those places are going to see a significant impact. And then, of course, those places are going to be targeted by the social justice warriors, and the HR departments are going to be the portal through which censorship is going to attempt to come in, in my humble opinion. And so that is going to be a big, big challenge.
If you wanted to check out my thoughts on Minneapolis, of course, these riots, you can follow me on Twitter at Stefan Molyneux, and the Floyd case I talked about yesterday in the live stream.
I really, really hope that you find this stuff helpful.
I really, really do appreciate your time in dropping past today.
I really, really love you guys to death. It is such a deep honor to be part of this conversation.
I just can't even tell you how powerful this is for me, how grateful I am for your support.
It has been a brutal couple of years for me personally and professionally and financially.
So if you would like to help me out, you can do so at freedomain.com forward slash donate.
Hey, look, the link is below.
I would really, really appreciate it.
It's kind of desperately needed.
I have a debate coming up 4 p.m. Eastern with Rationality Rules on UPB, my theory of ethics, and it's going to be a scorcher.
I guarantee it.
I guarantee it.
Also, I will probably put out this weekend the last in my documentary series on California, Sunset and the Golden State.
So look for that.
Please also, if you want to join the freedomain community, we are at subscribestar.com forward slash freedomain.
We do our call-in show.
We have chats, and there's lots of really, really great community stuff going on there.
And no, sorry, 4 p.m.
Sunday. 4 p.m. Sunday is my debate.
Sorry if I misspoke on that.
4 p.m. Sunday Eastern Standard Time.
Just drop by the Freedomain channel, and I will be live streaming it as well.
And I really, really appreciate it.
It's a real pleasure. It's a great honor.
And I will talk to you guys soon.
Lots of love from here. And stay strong.
It is going to be absolutely fascinating to see what plays out from here.
And I do have a lot of sympathy for the executives at these software companies.
It is a very, very tough thing to manage free speech in a world which has very, very few commitments to it.
So I wish them all the best.