Alexandra Reeve Givens, President & CEO of the Center for Democracy & Technology | Official website

CDT hosts symposium on free speech challenges posed by artificial intelligence

On June 24, government officials, civil society representatives, and academics convened at the "Artificial Intelligence & The First Amendment: Protecting Free Speech in the AI Era" symposium hosted by CDT and The Future of Free Speech. Experts discussed the importance of free expression considerations in regulatory debates around AI.

The first panel, moderated by CDT’s Samir Jain, brought together congressional staff members from both sides of the aisle. Panelists Halie Craig, John Beezer, and Jacqui Kappler discussed how to balance constitutional rights, whether Section 230 would apply to AI companies, and how legislation around the new technology is taking shape. They emphasized user autonomy and proposed ways to give users greater control over content curation and moderation.

Panelists also addressed Non-Consensual Intimate Imagery (NCII), a problem that is growing as these technologies advance. CDT announced a multistakeholder working group to discuss current interventions, identify best practices and potential new actions, and increase coordination across industry, civil society, and academia to address NCII. The initiative aligns with a recent call to action issued by the White House.

The second panel focused on First Amendment considerations arising from widespread AI use. Moderated by CDT’s Becca Branum, it featured academic experts Jeff Kosseff and Eugene Volokh alongside the ACLU’s Ben Wizner and Keith Chu from Senator Ron Wyden’s office. Although Section 230 was enacted in 1996, long before modern AI, it continues to shape the regulatory landscape. Volokh described it as the “sword and shield” that allows companies to moderate content without fear of lawsuits, an interpretation that aligns with CDT’s views on Section 230.

During the third panel, Future of Free Speech CEO Jacob Mchangama addressed how companies balance free expression and safety. He described the lines between the two as “fuzzy,” with significant consequences for information access and traditional speech. Alongside David Wilner and Jules White, Mchangama expressed optimism about AI’s future, provided that careful consideration is given to regulatory decisions and technical functionality.

The final panel, moderated by CDT’s Kate Ruane, highlighted civil society’s critical role in AI regulation. Joined by the Knight First Amendment Institute’s Nadine Farid Johnson and Stand Together’s Ashkhen Kazaryan, Ruane discussed how civil society brings everyday citizens’ perspectives into policy discussions. Farid Johnson distilled civil society’s role into three pillars: accountability, expertise, and influence, emphasizing its importance in developing norms and ensuring accountability across sectors.

The symposium underscored that the free exchange of ideas is essential for innovation, and that paying close attention to how new AI regulations affect speech demonstrates civil society’s vital role in protecting human rights online.

The full symposium is available to stream here.
