Alexandra Reeve Givens, President & CEO of the Center for Democracy & Technology | Official website

Everyday harms from AI need more attention alongside high-risk scenarios


Developers and regulators have long focused on the most significant risks of artificial intelligence (AI), those with life-altering consequences. Far less attention has been paid to everyday scenarios where AI causes harm. For example, errors in AI transcription systems can complicate insurance reimbursements, and customer-service chatbots can misinterpret prompts and fail to process requests correctly.

"Today’s risk-based AI governance frameworks would likely deem these types of scenarios as 'low risk,'" the blog notes. The EU AI Act categorizes such situations as "limited risks," subjecting them to lower scrutiny compared to high-risk contexts like child welfare assessments. Despite appearing inconsequential at first glance, the aggregated effects of these low-risk instances could significantly impact societal well-being over time.

Regulatory frameworks like the EU AI Act and NIST's AI Risk Management Framework rank scenarios by the severity and probability of harm. High-risk scenarios, such as discrimination in hiring or loan decisions, receive the most scrutiny because of their potential for considerable harm, while seemingly mundane risks fall through the cracks because each occurrence is assessed in isolation.

Commercial providers like OpenAI and Anthropic emphasize mitigating catastrophic scenarios but may overlook risks that are non-catastrophic individually yet consequential in aggregate. The blog post argues that focusing too heavily on discrete severe events diverts attention from common harms that receive inadequate investment from practitioners and policymakers.

The blog post highlights several everyday harms from AI systems:

1. **Linkage Errors Across Databases**: These errors can lead to significant downstream consequences, especially affecting marginalized populations.

2. **Inaccurate Information Retrieval**: Semantic errors in information retrieval systems can undermine trust and lead to financial losses or legal disputes.

3. **Reduced Visibility in Platformized Environments**: Recommender systems may reduce content visibility for marginalized groups, impacting public participation and economic opportunities.

4. **Quality-of-Service Harms**: Errors in chatbots and voice assistants disproportionately affect non-English speakers and people of color.

5. **Reputational Harms**: Generative models can be misused to depict individuals in undesirable ways, often without legal recourse under anti-discrimination laws.

The blog post suggests rethinking AI risk governance beyond severity alone by taking the prevalence of risks seriously. "While these so-called 'lower-risk' scenarios may not appear severe as discrete events, their aggregated impact on the societal level...means organizations should nevertheless invest in mitigating their risks."
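
To see why prevalence can rival severity, consider a rough back-of-envelope sketch of the blog post's aggregation argument. The figures below are purely hypothetical, chosen only to illustrate the arithmetic; the post itself supplies no numbers.

```python
# Illustrative only: expected aggregate harm = frequency x severity.
# All figures are hypothetical assumptions, not data from the blog post.

def expected_aggregate_harm(incidents_per_year: float, harm_per_incident: float) -> float:
    """Expected total harm per year for one failure mode."""
    return incidents_per_year * harm_per_incident

# A rare but severe failure (e.g., a flawed high-stakes assessment).
rare_severe = expected_aggregate_harm(incidents_per_year=100, harm_per_incident=10_000)

# A common but mild failure (e.g., a chatbot mishandling a routine request).
common_mild = expected_aggregate_harm(incidents_per_year=5_000_000, harm_per_incident=1)

print(f"Rare/severe failure mode:  {rare_severe:>12,.0f} harm units per year")
print(f"Common/mild failure mode:  {common_mild:>12,.0f} harm units per year")
# Under these assumptions, the 'low-risk' failure mode dominates in aggregate,
# which is the pattern that severity-first frameworks can miss.
```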

To address everyday risks effectively, practitioners need safety infrastructure for systematic monitoring of, and rapid response to, negative residual risks: the harms that remain unmitigated despite existing safety measures.
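
As one way to picture what such infrastructure could involve, here is a minimal monitoring sketch. The class name, window size, and alert threshold are all hypothetical illustrations; the blog post does not prescribe any specific implementation.

```python
from collections import deque

class ResidualRiskMonitor:
    """Minimal sketch of systematic monitoring: track a rolling error rate
    for an AI service and flag when residual errors exceed a tolerance.
    A hypothetical design, not one described in the blog post."""

    def __init__(self, window_size: int = 1000, alert_threshold: float = 0.02):
        self.outcomes = deque(maxlen=window_size)  # True = error observed
        self.alert_threshold = alert_threshold

    def record(self, is_error: bool) -> None:
        self.outcomes.append(is_error)

    def error_rate(self) -> float:
        return sum(self.outcomes) / len(self.outcomes) if self.outcomes else 0.0

    def needs_response(self) -> bool:
        # "Rapid response" trigger: full window observed and rolling
        # error rate above the configured tolerance.
        return (len(self.outcomes) == self.outcomes.maxlen
                and self.error_rate() > self.alert_threshold)

# Usage: feed in per-request outcomes (e.g., failed transcriptions).
monitor = ResidualRiskMonitor(window_size=500, alert_threshold=0.01)
for outcome in [False] * 490 + [True] * 10:
    monitor.record(outcome)
if monitor.needs_response():
    print(f"Residual error rate {monitor.error_rate():.1%} exceeds tolerance")
```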

"The everyday risks of AI should not be an afterthought; they deserve far greater priority than they currently receive," concludes the blog post authored by Stephen Yang for the CDT AI Governance Lab.
