Alexandra Reeve Givens, President & CEO of the Center for Democracy & Technology | Official website

Centre urges clear guidelines on EU's upcoming prohibited AI practices


The Centre for Democracy and Technology Europe (CDT Europe) has responded to the European Commission's public consultation on prohibited AI practices under Article 5 of the AI Act. The consultation feeds into guidelines on the prohibitions, which apply from February 2, 2025. CDT Europe expressed disappointment that no full draft of the guidelines was made public, and criticized the limitations of a questionnaire format with pre-set questions and character limits.

In its response, CDT Europe emphasized that the prohibitions need additional clarification so that they cover all scenarios in which fundamental rights could be impacted, and stressed that exceptions to the prohibitions should be interpreted narrowly.

CDT Europe urged alignment between the AI Act's prohibition of manipulative or deceptive practices and the Digital Services Act (DSA), particularly concerning the dark patterns banned under Article 25(1) of the DSA. It advocated for the examples in the 2021 Commission Notice on unfair business-to-consumer commercial practices to be explicitly listed as prohibited practices under the AI Act.

Regarding AI systems that exploit vulnerabilities, CDT Europe recommended providing a non-exhaustive list of examples of vulnerabilities arising from socio-economic situations, drawing on anti-discrimination laws across EU Member States.

The organization called for a broad definition of unacceptable social scoring, noting that such scoring can be a dynamic assessment rather than a fixed number and should encompass various categorization methods. It highlighted concerns about fraud-detection systems used in welfare contexts in countries such as the Netherlands and Sweden.

On AI systems used for crime risk assessment, CDT Europe suggested defining the scope broadly to include predictions of re-offending made during parole hearings. It argued that the exception for AI systems that merely support human assessment should be accompanied by necessary safeguards, such as internal controls and accountability measures.

CDT Europe also sought clarification of the definition of "criminal activity" to ensure that certain types of data remain outside the provision's coverage. Systems such as ProKid and Top400 were cited as examples that should be excluded because they focus on involuntary contact with crime.

On the prohibition of facial image scraping, CDT Europe asked for clarification that compliance with robots.txt instructions should not, by itself, exempt an AI system from the ban unless the scraping also aligns with the GDPR's data minimization principle, under which data must be adequate, relevant, and limited to what the processing purpose requires.

Regarding emotion recognition systems, which are banned except for medical or safety reasons, CDT Europe advocated a narrow interpretation of the exceptions with an emphasis on proportionality: before deployment, providers should be required to document evidence that a system is effective at identifying emotions and that its use advances the stated medical or safety goal.

Finally, CDT Europe highlighted employers' obligations under Directive 89/391/EEC to consult workers before introducing new technologies, as well as the obligation under Article 50(3) of the AI Act to notify individuals when emotion recognition technology is deployed. It also suggested mitigation measures against the disproportionate flagging of groups such as people with disabilities by these systems.
