Thirty years after Congress passed Section 230 of the Communications Decency Act, the law remains a subject of debate among stakeholders in social media, artificial intelligence, and technology platforms. The statute shields companies from liability for content posted by their users, a protection supporters contend is essential to free expression and innovation. Jennifer Huddleston, a senior fellow in technology policy at the Cato Institute, argues that Section 230 remains a foundational guardrail for speech and competition in the digital age.
Huddleston works in the areas of online speech, artificial intelligence, and emerging technologies. She describes Section 230 as essential to the development of user-generated platforms. “Section 230 made sure that platforms knew they could carry user-generated content… without fear” that it might result in litigation “that could end an entire platform,” she says. The law ensures that users are responsible for their own speech rather than the platforms hosting what they post. “If you say something, you’re the person that’s liable for it, not the platform where you said it,” she says.
Section 230 also protects the right of private platforms to moderate content. “If you run a business, you can decide the rules of that business,” Huddleston says. Lawmakers who drafted the statute did not intend to require neutrality. “They did not intend for this to have to be a neutral decision,” she says, noting that platforms may serve specific audiences and adopt distinct moderation policies.
Critics see Section 230 as a subsidy for large technology companies. But “Section 230 isn’t just about big tech,” Huddleston says. Smaller platforms rely on the statute to avoid litigation they could not survive. Even a successful legal defense can be financially devastating. “The litigation, even if you’re proven right in court, can be potentially industry-ending,” she says.
Huddleston argues that many concerns attributed to Section 230 are already addressed by other laws. Fraud, for example, is illegal. “If a company is making false promises to its users, there are existing laws that can go there,” she says. The statute does not shield platforms when they are the actual speaker of unlawful content. “Section 230 does not protect the platform when the platform is in fact the speaker,” she says.
Huddleston acknowledges that adults want to protect children from objectionable content on tech platforms, but warns that changes, such as requiring age verification, can carry unintended consequences. “The only way to know someone’s not under 18… is to also verify that everyone is over 18,” she says, raising concerns about privacy and adult speech rights.
Huddleston views Section 230 as part of a broader approach that allowed the internet to flourish. Lawmakers in the 1990s adopted what she calls a “light touch and pro-speech, pro-innovation approach.” Restraint by Congress enabled new platforms to develop without facing immediate, existential liability. That restraint, she suggests, may prove useful again as artificial intelligence develops. “Because AI is such a general-purpose technology… we have to be really careful with the regulation,” she says, drawing parallels to the early days of the internet.
For Huddleston, the core question is how to respond to concerns about the worst aspects of online content without undermining free expression and competition. “It’s okay as an individual to be uncertain about technology,” she says. “It’s another thing when we start seeing regulation take away choice.”
