Generative AI raises complex questions about section 230 applicability

Alexandra Reeve Givens, President & CEO, Center for Democracy & Technology | Official website

The rise of Generative AI, especially within the last year, has prompted numerous questions and concerns about its responsible governance across the political spectrum. One debated question is whether Section 230 of the Communications Decency Act of 1996 applies to outputs created by Generative AI systems. In 2023, Senator Josh Hawley introduced legislation intended to exclude Generative AI systems from the purview of Section 230. The same year, Senator Ron Wyden and former Representative Chris Cox, the authors of Section 230, wrote that the law does not protect generative AI outputs. The answer, however, may be more complicated.

Section 230 is a federal safe-harbor law that protects online intermediaries from liability for third-party content posted and disseminated on their platforms. At its core is Congress’ recognition that the free expression rights of everyday users depend on online intermediaries’ ability to host a wide range of content under their own guidelines and removal standards. Section 230 also encourages online speech intermediaries to curate their spaces by moderating or removing “objectionable” content without fear of liability. This immunity is not unlimited, however: it shields online intermediaries from liability only for third-party content, while holding them legally responsible for content they create themselves.

The recent exponential growth of Generative AI technologies such as OpenAI’s ChatGPT and DALL-E and Google’s Bard may blur the line between user-generated content and content created by Generative AI systems. These systems use machine learning to generate new content from massive amounts of training data, shaped by user prompts. This can complicate determining who is responsible for creating the end product.

Section 230(c)(1) states that no provider or user of an interactive computer service may be treated as the publisher or speaker of information provided by another information content provider. To qualify for this immunity, an online service provider must be an “interactive computer service.” Courts have found that qualifying services include broadband internet access services and social media services.

In cases where a Generative AI tool like ChatGPT or DALL-E creates offending material "in whole or in part," courts will likely find that Section 230 immunity does not apply. Although the Supreme Court has yet to rule on this issue, federal appeals courts generally apply the “material contribution test” to determine whether an entity has contributed significantly enough to the content to qualify as an information content provider.

Recently, two cases have emerged concerning the legal accountability of Generative AI systems for their outputs: Walters v. OpenAI and Battle v. Microsoft. Because neither OpenAI nor Microsoft raised Section 230 as a defense, these cases offer limited insight into Section 230’s scope in this context.

During oral arguments in Gonzalez v. Google, the US Supreme Court suggested that Generative AI systems might always be considered information content providers with respect to their model outputs. If this reasoning holds sway with the Court, Generative AI outputs may eventually be definitively excluded from Section 230 immunity.

However, when a Generative AI system’s outputs closely track source material without materially contributing to the creation of illegal content, and the system acts solely as an interactive computer service, current case law indicates those outputs could still receive Section 230 immunity.

As it stands, legal precedent suggests that the more “creative” outputs of Generative AI will likely fall outside the scope of Section 230 immunity, because these systems act as information content providers when they create new content, even in response to user prompts.
