N.Y. Governor, Attorney General Seek New Social Media Restrictions
Study confirms platforms' role in mass shootings, officials say
Invoking YouTube, Twitter, TikTok, Reddit, Twitch and others, as well as fringe site 4chan, New York Gov. Kathy Hochul and Attorney General Letitia James are calling for a government crackdown on online platforms, saying they contributed to the Buffalo mass shooting that left 10 dead and three injured.
Among the changes they want: getting internet-service providers more involved in controlling edge-provider content that leads to radicalization and violence; creating a version of “tape delay” for livestreaming; and requiring platforms to take “reasonable steps to prevent unlawful violent criminal content” as the quid pro quo for Section 230 immunity from liability for third-party content.
The pair released a report on the role of online platforms in the shooting that concluded, among other things, that “a lack of oversight, transparency and accountability of these platforms allowed hateful and extremist views to proliferate online, leading to radicalization and violence.”
They say there need to be changes made to Section 230 of the Communications Decency Act, which provides immunity from civil liability for most third-party posts on social media platforms.
Among the changes they recommend are new state laws that would criminalize a perpetrator’s creation of graphic images of a homicide and would impose civil liability on any person or platform that reshares or reposts such images or videos.
“The tragic shooting in Buffalo exposed the real dangers of unmoderated online platforms that have become breeding grounds for white supremacy,” said James, who shared the report's findings with victims’ families.
James laid some of the blame for recent mass shootings on edge providers. “We saw this happen in Christchurch, Charlottesville, El Paso, and Buffalo, and we cannot wait for another tragedy before we take action,” she said. “Online platforms should be held accountable for allowing hateful and dangerous content to spread on their platforms.”
Following the shooting, Hochul, whose hometown is Buffalo, asked the attorney general’s office to produce the study. James’s office said it obtained thousands of pages of documents under subpoena from Facebook, Instagram, Twitter, TikTok and Rumble.
The report concluded that:
- “Fringe Platforms Fuel Radicalization: By his own account, the Buffalo shooter was radicalized by virulent racist and antisemitic content on anonymous, virtually unmoderated websites and platforms that operate outside of the mainstream internet, most notably 4chan. In the wake of the Buffalo shooting, graphic video of the shooting recorded by a viewer of the shooter’s livestream proliferated on fringe sites. The anonymity offered by 4chan and platforms like it, and their refusal to moderate content in any meaningful way, ensures that these platforms continue to be breeding grounds for racist hate speech and radicalization.
- “Livestreaming Has Become a Tool for Mass Shooters: Livestreaming has become a tool of mass shooters to instantaneously publicize their crime, further terrorizing the community targeted by the shooter and serving as a mechanism to incite and solicit additional violent acts. The Buffalo shooter was galvanized by his belief that others would be watching him commit violence in real time. Although the platform he used to live-stream his atrocities disabled the live stream within two minutes of the onset of violence, two minutes is still too much.”
- “Mainstream Platforms’ Moderation Policies Are Inconsistent and Opaque: Many large, established platforms improved on their response time for identifying and removing problematic content related to the Buffalo shooting, including graphic video of the shooting and the shooter’s manifesto, as compared to past events. However, the platforms’ responses were uneven, with one platform unable to identify posts that linked to off-site copies of the shooting video even after those posts were flagged through user reports. Many platforms also do not fully disclose how they moderate hateful, extremist, or racist content.
- “Online Platforms Lack Accountability: Online platforms enjoy too much legal immunity. Section 230 of the Communications Decency Act largely insulates platforms from liability for their content moderation decisions, even when a platform allows users to post and share unlawful content.”
Recommendations for Reform
Given those findings, Hochul and James said action was needed and recommended the following reforms:
- “Add Restrictions to Livestreaming: Livestreaming was used as a tool by the Buffalo shooter, like previous hate-fueled attacks, to instantaneously document and broadcast his violent acts to secure a measure of fame and radicalize others. Livestreaming on platforms should be subject to restrictions — including verification requirements and tape delays — tailored to identify first-person violence before it can be widely disseminated.
- “Reform Section 230: Currently, Section 230 of the federal Communications Decency Act protects online platforms from liability for third-party content that they host, regardless of those platforms’ moderation practices. Congress should rethink the ready availability of Section 230 as a complete defense for online platforms’ content moderation practices. Instead, the law should be reformed to require an online platform that wishes to retain Section 230’s protections to take reasonable steps to prevent unlawful violent criminal content from appearing on the platform. This proposal would change the default. Instead of simply being able to assert protection under Section 230, an online platform has the initial burden of establishing that its policies and practices were reasonably designed to address unlawful content.
- “Increase Transparency and Strengthen Moderation: Online platforms should provide better transparency into their content moderation policies and how those policies are applied in practice, including those that are aimed at addressing hateful, extremist, and racist content. They should also invest in improving industry-wide processes and procedures for reducing the prevalence of such content, including by expanding the types of content that can be analyzed for violations of their policies, improving detection technology, and providing even more efficient means to share information.
- “Call on Industry Service Providers to Do More: Online service providers, like domain registrars and hosting companies, stand in between fringe sites and users. These companies should take a closer look at the websites that repeatedly traffic in violent, hateful content, and refuse to service sites that perpetuate the cycle of white supremacist violence." ■
Contributing editor John Eggerton has been an editor and/or writer on media regulation, legislation and policy for over four decades, including covering the FCC, FTC, Congress, the major media trade associations, and the federal courts. In addition to Multichannel News and Broadcasting + Cable, his work has appeared in Radio World, TV Technology, TV Fax, This Week in Consumer Electronics, Variety and the Encyclopedia Britannica.