Less than two months after the AI Act came into force, oversight bodies have started meeting to discuss its implementation. The AI Board held its first official meeting, focusing on the Commission’s initial deliverables related to the Act and sharing best practices for national AI governance. Currently, the AI Board is the only active oversight body created by the AI Act. There is no public information available about the multistakeholder Advisory Forum or the Scientific Panel.
Despite the absence of some key oversight actors, implementation of the AI Act continues, with co-regulatory efforts taking center stage. These efforts are conducted under institutional oversight but rely heavily on engagement from regulated entities. Last week, a consultation on the Codes of Practice for General-Purpose AI (GPAI) models closed. The consultation will inform the co-regulatory codes specifying voluntary obligations for GPAI providers, and the drafting process will kick off with a plenary session on September 30. All stakeholders approved by the AI Office, including civil society organizations, will participate in the process, although GPAI model providers are expected to dominate.
Several hundred companies have shown interest in joining the AI Pact — an information exchange and knowledge-sharing network designed to foster early compliance with the AI Act — ahead of a signatory event on September 25.
Earlier this month, a report by former European Central Bank president Mario Draghi described the AI Act and the GDPR as obstacles to European competitiveness, citing their perceived complexity and the risk of overlaps and inconsistencies. The report drew heavily on input from private companies; among digital and consumer rights groups, only one consumer rights organization contributed. Following its publication, some private-sector contributors endorsed the report in a joint open letter, criticizing European data protection authorities for creating "huge uncertainty" and calling for a modern interpretation of the GDPR.
Henna Virkkunen was recently nominated to lead the European Commission’s Tech Sovereignty, Security and Democracy portfolio, which includes enforcement of the AI Act. Her mission letter endorsed several recommendations from Draghi’s report, including drafting an EU Cloud and AI Development Act to increase computational capacity and creating an EU-wide framework for providing “computational capital” to innovative small and medium-sized enterprises (SMEs). The letter also asked her to ensure, within her first 100 days, that AI startups have access to tailored supercomputing capacity through the newly launched AI Factories Initiative; to develop an Apply AI strategy to boost industrial uses of AI and improve the delivery of public services; and to help set up a European AI Research Council.
Other European Commission portfolios will also address AI-related issues: the Commissioner for People, Skills and Preparedness will work on algorithmic management, while the Commissioner for Intergenerational Fairness will develop an AI strategy for the creative industries.
Another significant development last week was the publication of an impact assessment by the European Parliamentary Research Service on the proposed rules on non-contractual civil liability for artificial intelligence systems, known as the AI Liability Directive. The report suggests expanding the directive into a Software Liability Regulation, with provisions imposing strict liability for non-compliance with the AI Act's rules on prohibited systems.
In other news related to 'AI & EU', European data protection authorities have been active:
- The Irish Data Protection Commission has begun investigating whether Google complied with GDPR when processing personal data to train its foundational PaLM 2 model.
- The UK's Information Commissioner’s Office announced that Meta would resume plans to train its generative AI on Facebook and Instagram UK user data, after Meta incorporated changes the regulator had requested earlier.
- The Dutch data protection authority published its third report on algorithmic risks, which examines the implementation of high-risk public-sector use cases under the timelines set by the new legislation.
CDT Europe is seeking candidates interested in promoting human rights in digital spaces for its Brussels-based Legal and Advocacy Officer position. Applications are open until September 27.
For those interested in further reading on these topics:
- ECNL's "Towards an AI Act that serves people and society"
- Tech Policy Press's "Challenging myths of generative AI"
- EPRS's briefing on "The AI Act"
- Access Now's piece on why human rights must be central in governing artificial intelligence