The Center for Democracy & Technology (CDT) has released a brief titled "Election Integrity Recommendations for Generative AI Developers." The document comes at a critical time: over 80 countries, representing more than half of the world's population, are set to hold elections in 2024. The year has been dubbed the "First AI Election" because the proliferation of generative AI tools poses significant cybersecurity and information integrity challenges.
To address these risks, CDT advocates an ecosystem-wide approach: curbing the distribution of deceptive AI-generated election content across social networks, private messaging services, robocalls, TV, and radio. While identifying solutions to limit this distribution is essential—an effort CDT supports through various initiatives—it is equally important to consider the policies and product interventions that generative AI developers themselves should adopt to prevent harmful content from being created and disseminated in the first place.
Although the election year is already halfway over, it remains crucial for AI developers to swiftly implement election integrity programs—combining policy, product interventions, and enforcement mechanisms—to safeguard democratic processes now and in the future.
**Summary of Recommendations**
**Usage Policies:**
- Prohibit generating realistic images, videos, and audio depicting political figures or events.
- In the short term, bar users from conducting political campaign activities or demographic targeting, while developing transparent, ethical long-term guidelines.
- Forbid the use of generative AI ad tools for political advertisements.
- Prevent any conduct that interferes with elections or misleads voters.
**Product Interventions:**
- Develop user interface pop-ups or labels that address known election misinformation narratives.
- Disclose how recently chatbot training data was updated when responding to time-sensitive election queries.
- Promote authoritative sources of election-related information.
- Allow users to report policy violations in chatbots and apps built using an API.
- Include an appeals option for enforcement actions.
- Commit to embedding machine-readable watermarks into image, video, and audio content.
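The watermarking recommendation above generally refers to provenance schemes (such as C2PA content credentials) in which a cryptographically signed, machine-readable manifest is bound to generated media so downstream platforms can detect AI origin. The sketch below is a toy illustration of that idea only, not any real standard: the manifest fields, the `attach_manifest`/`verify_manifest` helpers, and the HMAC-based signing are all assumptions chosen for brevity (production systems use public-key signatures and embed data robustly in the media itself).

```python
import hashlib
import hmac
import json

# Hypothetical signing key held by the AI developer (real systems would use
# asymmetric keys so third parties can verify without the secret).
SECRET = b"developer-signing-key"

def attach_manifest(content: bytes, generator: str) -> dict:
    """Produce a machine-readable provenance manifest for generated media."""
    manifest = {
        "generator": generator,
        "content_sha256": hashlib.sha256(content).hexdigest(),
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_manifest(content: bytes, manifest: dict) -> bool:
    """Check the manifest matches the content and was signed by the key holder."""
    claimed = {k: v for k, v in manifest.items() if k != "signature"}
    if claimed.get("content_sha256") != hashlib.sha256(content).hexdigest():
        return False  # content was altered after signing
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, manifest["signature"])

media = b"...synthetic image bytes..."
m = attach_manifest(media, "example-image-model-v1")
print(verify_manifest(media, m))         # True
print(verify_manifest(media + b"x", m))  # False: content no longer matches
```

A metadata manifest like this is easy to strip, which is why the brief's recommendation pairs it with platform-side detection; robust deployments also embed imperceptible watermarks directly in the pixel or audio signal.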
**Enforcement:**
- Enforce election-related usage policies consistently, at all times.
- Quickly deploy product interventions against common election falsehoods.
- Proactively test model responses to common election queries.
- Create escalation channels for emerging issues during high-risk periods.
- Adequately resource policy and enforcement teams.
- Implement actor-level enforcement for policy violations.
**Transparency:**
- Be transparent about election policies.
- Publish regular reports on election misinformation and deceptive AI usage.
- Consult with civil society groups and facilitate researcher access to data.
- Establish communication channels with election administrators.
The full report provides detailed recommendations aimed at enhancing the integrity of electoral processes globally by mitigating risks associated with generative AI technologies.