Legislative efforts and policy frameworks within the Section 230 debate

Sen. Ted Cruz gestures at a poster of a Facebook post depicting Mother Teresa below a headline reading "CENSORED."

In recent years, policymakers have attempted to tackle the harms associated with online platforms via an ineffective bricolage of finger-pointing, performative hearings grilling various CEOs, and, ultimately, policy proposals. These proposals mostly aim to reform the intermediary liability immunity provisions of Section 230 of the Communications Decency Act. But the debate over whether and how to reform this law, which protects platforms from most lawsuits stemming from content posted by users, has been largely unproductive and riddled with outlandish proposals. Consensus on even the broad outlines of reform, let alone specific legislation, has remained elusive.

However, just as the progressive antitrust movement has won allies in the Republican Party, the effort to reform Section 230 may ultimately give conservatives and liberals another issue area where they can find common cause. With the federal government increasingly at odds with the tech industry, an unlikely coalition is forming between those who see regulation as a way to hurt the industry and those who see reform as a good in itself. If a significant number of Republicans are willing to back President Joe Biden's progressive pick to lead the Federal Trade Commission, Lina Khan, it's not unreasonable to think that Section 230 reform might inspire a similar bipartisan coalition.

Nonetheless, Section 230 reform faces a number of formidable obstacles. Unlike the resurgent antitrust movement, the Section 230 reform space lacks uniform goals, and the debate over how to reform the law is one of the most muddled in Washington.

What follows is a synthesis of the paradigms, trends, and ideas that animate Section 230 reform bills and proposals. Some have more potential for bipartisan support, while others remain party-line ideas that function as ideological or messaging tools. This piece attempts to clarify the landscape of major Section 230 reform proposals. We group the proposals by their approach to reforming the law: broadening exemptions from immunity, clarifying one or more parts of the content governance process, or targeting illegal content.

Exempt advertising and commercial content

Recognizing the U.S. Constitution's strict free-speech protections, some proposals to reform Section 230 separate liability protections for organic and sponsored speech. The latter includes content made available as a paid advertisement and paid listings in third-party marketplaces. Separating organic and sponsored speech would allow for stringent rules on paid-for speech while reducing the likelihood of a First Amendment challenge. Such proposals treat commercial content as low-hanging fruit with fewer constitutional or general speech complications, suggesting that harm remediation for this small slice of online speech would deliver worthwhile benefits at lower cost. The SAFE TECH Act, for example, makes a full distinction for Section 230 purposes by stripping platforms of liability protections in contexts where they are either paid to make content available or have directly created or sponsored the underlying content. Other reform ideas focus on a narrower range of commercial activity, such as two House Republican proposals that would exempt only claims relating to counterfeit or defective products from immunity.

The bottom line: As the approach perhaps most likely to make progress in the near term, taking action on ads would offer both Republicans and Democrats a sense of accomplishment in their ongoing feud with online platforms. However, a large gap remains between the full-distinction approach of the SAFE TECH Act and the narrower subject-matter focus of the House Republican proposals.

Exempt specific types of harm

Section 230 has always had some subject-matter carveouts from its liability protections. In particular, lawsuits over harm arising from federal crimes (such as criminal fraud) and intellectual property violations (such as posting copyrighted material online without permission) do not qualify for Section 230's immunity provisions. The only reform to Section 230 to pass in recent memory, FOSTA-SESTA, controversially added sex trafficking to this list. The previously mentioned SAFE TECH Act would add several more exceptions, allowing claims relating to civil rights, antitrust, stalking and harassment, international human rights, and wrongful death to proceed against platforms. A high-profile academic proposal suggests adding incitement to violence, hate speech, and disinformation to the SAFE TECH list. House Republicans have also proposed creating exceptions to immunity where the harm in question involves cyberbullying or doxxing.

In the view of their proponents, though contested by other scholars and advocates, such exceptions change what platforms are responsible for without upending the balance of the status quo. They are thus painted as minimalist changes to existing law, a laudable goal in theory given the risk that broad changes could have broad and harmful unintended consequences. But as FOSTA-SESTA has shown by contributing to making sex work significantly more dangerous, even narrow exemptions can carry significant unintended consequences.

The bottom line: As with eliminating liability immunity for advertising content, adding exemptions to Section 230 immunity is a position with bipartisan support—but also one with bipartisan opposition. Any new exemption would require significant finessing of definitions and arguments before it could pass both chambers of Congress. But such a reform remains very much a possibility, particularly for exemptions where the potential victims of unintended consequences are seen as unsympathetic.

Set platform policy standards

The effort to reform Section 230 is frequently cast as part of a consumer-protection agenda. Because consumers struggle to understand how platforms store and use their data, and because content is removed for reasons that are equally opaque, some advocates seek to force transparency and consistency, including in content policies, through direct regulation and corresponding changes to Section 230.

Among its many provisions, the bipartisan Platform Accountability and Consumer Transparency (PACT) Act mandates that platforms publish an "acceptable use policy" with specifically defined content rules, along with a biannual transparency report. Similarly, the Online Consumer Protection Act mandates public terms of service, which must include a "consumer protection policy" and the establishment of a specific "consumer protection program." Such proposals must tread carefully when prescribing dispute-resolution mechanisms, particularly for cases where users disagree with a platform's interpretation of its disclosed policies, which are typically worded with some generality to give service providers the flexibility to adapt to changing circumstances.

The bottom line: Fundamentally, proposals geared toward protecting consumers aim to codify the expectation that platforms have clear rules that are applied transparently and that mechanisms exist to resolve disputes. There is some bipartisan support for this perspective, but whether it gains enough traction to make it into law depends on skilled crafting of the minutiae of the final bills.

Set behavioral standards

Another motivation for Section 230 reform is the belief that online platforms are failing to operate with sufficient responsibility and diligence in mitigating harm online, resulting in a wide variety of ill effects, including mis- and disinformation and inconsistent free speech protection.

A classic regulatory approach to forcing a company to meet standards of behavior not imposed by the market is to articulate a standard of sufficiently responsible conduct and establish enforcement mechanisms to evaluate compliance (while navigating First Amendment limits on government restriction of corporate speech). However, no precise standard has emerged from ongoing normative conversations. One early and influential proposal, by Danielle Citron and Benjamin Wittes, suggests amending Section 230 to require "reasonableness" and allowing courts to develop the contours of "reasonable" behavior through common law, case by case. The Online Freedom and Viewpoint Diversity Act, introduced by Republican Sens. Roger Wicker, Lindsey Graham, and Marsha Blackburn, includes a similar proposal among its provisions. The EARN IT Act calls for the creation of a diverse government-led commission to develop such a standard. Mark Zuckerberg has proposed that a new third-party institution be created to develop and enforce a standard of responsibility. The many voices calling for such an approach typically frame the power of government and law as a necessary backstop for reasonable behavior, in contrast to "soft law" pressure such as public shaming and market forces.

The bottom line: Rather than asking platforms simply to follow their own rules, there seems to be bipartisan and some industry appetite for the creation of a standard, whether by the courts or otherwise, against which the actions of online platforms can be measured. However, there is no consensus on what that standard should look like, nor on how to evaluate procedurally whether activity meets it. The risks of such a standard are twofold: over-inclusion (prohibiting behavior that is desirable) and under-inclusion (allowing irresponsible behavior). And there is ample concern that any standard of responsible behavior would be outdated almost as soon as it is written.

Force neutrality

Perhaps the single largest set of Section 230 reform proposals are those that attempt to mandate neutrality in content moderation practices. In some instances, this term is used literally: Section 230 immunity is lost when content moderation is performed in any manner that is not "neutral." Others, such as the Stop the Censorship Act introduced by Rep. Paul Gosar, the Arizona Republican, and co-sponsors, replace the flexible language in Section 230 that gives providers broad moderation autonomy (specifically, the freedom to moderate "otherwise objectionable" content) with language intended to limit providers' ability to engage in good-faith moderation and force them to leave up all content protected under the First Amendment, which in many circumstances includes mis- and disinformation.

To their proponents, these bills serve two purposes: first, to protect conservatives from alleged anti-conservative bias by platform operators; second, to counter the centralization of power online, as no alternative and equally effective platforms exist today to create market forces that would theoretically respond to anti-conservative bias. However, there are serious concerns with these bills. Supposedly inspired by the First Amendment’s free speech imperative, these bills would run into significant constitutional concerns, as noted by First Amendment scholars, in that they would arguably feature government entities deciding what should and should not be published. On their face, they are not “neutrality”-oriented, but rather prioritize political impact and messaging without any serious attempt at proposing sustainable policy change.

The bottom line: Proposals to mandate neutrality stem almost entirely from the Republican side of the aisle and go beyond a general behavioral standard to impose the specific standard of "neutrality." With Democrats, who hold a slim but real majority, unwilling to entertain such proposals, it is highly unlikely that any of these bills would achieve anything beyond checking off talking points. Even if one were unexpectedly to pass, clear constitutional challenges would make it short-lived.

Oversee AI and automation

Cutting across many theories of online harm is the role played by automated decision-making systems powered by artificial intelligence. Among many other use cases, AI systems filter the internet's information overflow to curate our online experience. In contexts including but not limited to search and social networking, AI systems power presentation algorithms and recommendation systems that sort and prioritize the online content users most want to see. These systems can produce results that are difficult to explain through human logic and language, and that inability to explain their functions, sometimes called the "black box problem," has led lawmakers to worry that the systems ordering our online experience are insufficiently understood. Proposals in Congress approach this issue in different ways. One major proposal on this topic, the Protecting Americans from Dangerous Algorithms Act, explicitly exempts platforms from Section 230 immunity in cases where an automated system is implicated in acts of terrorism, extremism, and threats to civil rights. Tackling AI and automation is likely to feature prominently in new Democratic Section 230 reform proposals, but their form will probably vary widely and may fold into other, non-230 efforts to regulate algorithms.

The bottom line: It is largely Democrats who are tackling the mostly unseen mechanisms used to make decisions about content. While less likely than content-specific reforms to suffer obvious legal setbacks, proposals rooted in this mechanical perspective also have a low likelihood of becoming law, at least under the current political makeup of Congress.

Handle illegal content

One model for possible Section 230 reform that would provide more support for law enforcement comes from Europe. In 2017, Germany's "NetzDG" law established hard timetables for covered service providers to address illegal content made available through their services, including an aggressive 24-hour window in which to remove "obviously illegal" content. Such a regulation supports the interests of law enforcement by reducing the procedural burden on authorities to identify and evaluate potential violations of law. NetzDG has emerged as a de facto standard, with many countries around the world proposing and even passing laws with similar language. In contrast to such a direct privatization of governmental function (i.e., asking the private sector to make specific determinations of legality), American regulatory proposals more frequently condition content removal on a judicial determination of illegality, known as the "court order standard." Something of a middle ground between these two, the See Something, Say Something Act would require tech companies to provide regular transparency into the scale of suspicious content and behavior, including "immediate" notification of law enforcement under certain circumstances. Similarly, a law enforcement study proposed by House Republicans would direct the Government Accountability Office to study cooperation between social media companies and government authorities. Through very different mechanisms, all of these approaches seek to improve the efficacy of law enforcement online.

The bottom line: If discussions over Section 230 reform continue, there will likely be a push to clarify the responsibilities online platforms have to remove illegal or otherwise infringing content. The form that push would eventually take in the United States is not, at least currently, obvious.

Magic 8-ball points to…?

This is by no means a comprehensive characterization of Section 230-related legislative proposals. Many other bills are under consideration, including some that would have a transformative effect on technology and the internet, such as a proposal to require identity verification before creating a social media account, which could take some of the pressure off of intermediaries, and thus off Section 230, by making it easier to focus on the originator of harm. However, the ideas articulated above appear most likely to gain traction (or at least notoriety) in the United States. Most have achieved some level of bipartisan support or have at least won loud partisan backers.

Furthermore, some of the areas above resemble thinking in the European Union, where policymakers are developing the Digital Services Act with a focus on protecting consumers, overseeing the use of artificial intelligence, and setting calibrated standards of responsibility, with the potential for differential treatment in contexts of heightened concern. The Biden administration's increased interest in achieving greater alignment with European allies, in part to create a united front against China and other countries with divergent views of internet governance, offers an opportunity for trans-Atlantic cooperation.

However, the Section 230 reform space has yet to see its last proposal, and policy merit is likely to take a backseat to politicking. In such a nebulous environment, it isn't immediately certain which bill or idea, if any, will emerge successful. It is also important to note that some of the ideas listed above, and probably many of the new ones likely to crop up, do not inherently require a direct change to Section 230 to be implemented, and may sometimes be better addressed through other mechanisms, like comprehensive federal privacy regulation. But public attention on Big Tech and the partisan focus on immunity provisions (as one of the few specific regulatory levers on Big Tech) increase the likelihood that a Section 230 connection will be incorporated whether or not it is legally necessary.

Chris Riley is a resident senior fellow of internet governance at R Street.
David Morar is a post-doctoral data policy fellow at NYU Steinhardt, a visiting researcher in technology and governance at the Hans Bredow Institute, and a consultant for R Street.

Facebook provides financial support to the Brookings Institution, a nonprofit organization devoted to rigorous, independent, in-depth public policy research.