Mark Zuckerberg: Sec. 230 Shield Should Be Tied to Misinformation Mitigation Regime
Says Congress could also require more transparency and accountability
While Facebook CEO Mark Zuckerberg said his company is already a leader in countering misinformation online, he is calling on Congress to condition online platforms' Sec. 230 immunity from civil liability over third-party content on their having demonstrable systems for identifying and removing harmful content.
That is according to his prepared testimony for a March 25 hearing before the House Energy & Commerce Committee titled "Disinformation Nation: Social Media's Role in Promoting Extremism and Misinformation." He would apply that condition only to larger platforms, he said, explaining that had Facebook been hit in its infancy with a raft of lawsuits over content, it could have been a death blow.
The Facebook CEO begins his testimony by expressing his condolences to the family of the Capitol Police officer who died following the Jan. 6 Capitol insurrection, saying Facebook was "committed to assisting law enforcement in bringing the insurrectionists to justice."
One of the knocks on social media is that it provided a platform for plotting that insurrection.
"I believe we do more to address misinformation than any other company," he said.
But he also clearly signaled that Congress had a role to play. Or perhaps, recognizing the bipartisan momentum for limiting or even eliminating Sec. 230 immunity, he wanted to promote a self-regulatory version of reform that Facebook could live with.
First, Zuckerberg outlined why he said Facebook was a leader in weeding out harmful content.
"Our Dangerous Organizations and Individuals policy prohibits content calling for or advocating violence, and we ban organizations and individuals that proclaim a violent mission," he said. "We remove language that incites or facilitates violence, and we ban Groups that proclaim a hateful and violent mission from having a presence on our apps. We also remove content that represents, praises, or supports them. We believe this policy has long been the broadest and most aggressive in the industry."
As to the type of activity that surrounded the insurrection, he said: "We remove Groups that represent QAnon, even if they contain no violent content. And we do not allow militarized social movements—such as militias or groups that support and organize violent acts amid protests—to have a presence on our platform."
But even given all that, he said, it was understandable that people across the political spectrum 1) "want to know that companies are taking responsibility for combatting unlawful content and activity on their platforms," and 2) "want to know that when platforms remove harmful content, they are doing so fairly and transparently."
He said that was why he was advocating for "thoughtful reform of Section 230..."
Zuckerberg said Sec. 230 had created the conditions for a thriving internet that empowered billions to express themselves, adding that its principles are as relevant today as ever. But given the dramatic changes in that internet, he said the section would benefit from that thoughtful reform.
He said Congress could make platforms' liability protection for certain unlawful content "conditional on companies’ ability to meet best practices to combat the spread of this content." Instead of being granted immunity, he said, companies would earn it by demonstrating "that they have systems in place for identifying unlawful content and removing it."
But he also said platforms should not be held liable if a "particular piece of content" slips through, saying that would be "impractical for platforms with billions of posts per day." Instead, they should be responsible for having an "adequate" system in place.
He said what qualified as adequate could be proportionate to the size of the platform and determined by a third party, which would ensure the practices are fair and clear and that companies understand how to implement them.
He said those practices should not extend to "unrelated issues" like encryption or privacy, which he said are for another day's debate.
He did not spell out how Congress could bring more transparency, accountability and oversight to the content moderation process, but he said companies should be able to make and enforce their own rules about content that is harmful but legal, while conceding that platforms would still face tough calls about the conversations taking place on them. "While we work hard to prevent abuse of our platform, conversations online will always reflect the conversations taking place in living rooms, on television, and in text messages and phone calls across the country. Our society is deeply divided, and we see that on our services too," he said.
Contributing editor John Eggerton has been an editor and/or writer on media regulation, legislation and policy for over four decades, including covering the FCC, FTC, Congress, the major media trade associations, and the federal courts. In addition to Multichannel News and Broadcasting + Cable, his work has appeared in Radio World, TV Technology, TV Fax, This Week in Consumer Electronics, Variety and the Encyclopedia Britannica.