Governance of foundation models in EU’s AI law starts to take shape

The Spanish presidency of the EU Council of Ministers has proposed a governance architecture for supervising the obligations on foundation models and high-impact foundation models that includes the establishment of a scientific panel.


Luca Bertuzzi, Euractiv.com, 07-11-2023 16:35

Specific obligations on foundation models like OpenAI’s GPT-4, which powers ChatGPT, the world’s most famous chatbot, are being discussed in the context of the AI Act. This legislative proposal aims to regulate Artificial Intelligence following a risk-based approach.

The AI law is in the final phase of the legislative process, the so-called trilogues between the EU Council, Parliament and Commission. Thus, the presidency’s proposed approach to governance, put forward on Sunday (5 November), might be highly influential in the ongoing discussions.

Spanish presidency pitches obligations for foundation models in EU’s AI law

The Spanish presidency of the EU Council of Ministers has drafted a series of obligations for foundation models and General Purpose AI as part of the negotiations on the AI Act.

Foundation model supervision

The text indicates that the European Commission would have exclusive powers to supervise the obligations on foundation models, including the 'high-impact' ones subject to a tighter regime.

The EU executive could investigate and enforce these provisions, either on its own initiative or following a complaint from an AI provider that has a contract with the foundation model provider, or from a newly established scientific panel.

The Commission is to define via implementing acts the procedures for monitoring the application of the obligations for foundation model providers, including the role of the AI Office, the appointment of the scientific panel and the modalities for conducting audits.

The EU executive will have the power to conduct audits on foundation models “taking into utmost account the opinion of the scientific panel” to assess the provider’s compliance with the AI Act or to investigate safety risks following a qualified report from the scientific panel.

The Commission could either carry out the audits itself or delegate them to independent auditors or vetted red-teamers. The auditors could request access to the model through an Application Programming Interface (API).

For high-impact foundation models, the Spanish presidency proposed adversarial evaluations by red teams. In the presidency’s view, the red teams could come from within the provider itself. However, if the political decision is to make them external, the Spaniards have drafted an article empowering the Commission to award the status of ‘vetted red-teamer’.

EU policymakers enter the last mile for Artificial Intelligence rulebook

The world’s first comprehensive AI law is entering what might be its last weeks of intense negotiations. However, EU institutions have yet to hash out their approach to the most powerful ‘foundation’ models and the provisions in the law enforcement area.

These vetted testers must demonstrate particular expertise and independence from foundation model providers, and be diligent, accurate and objective in their work. The Commission is to establish a register of vetted red-teamers and define the selection procedure via delegated acts.

The draft text empowers the EU executive, following a dialogue with foundation model providers, to request that they implement compliance and risk mitigation measures when the audits reveal serious risk concerns.

The EU executive will be able to request the documentation that foundation model providers will have to develop as part of their obligations, for instance, on the capacities and limitations of their model. This documentation might also be requested by, and made available to, a downstream economic operator that built an AI application on the foundation model.

If the documentation raises concerns about potential risks, the Commission could request further information, initiate a dialogue with the provider and mandate corrective measures.

Madrid also proposed a sanction regime for foundation model providers that infringe obligations under the AI Act or fail to comply with requests for documentation, audits or corrective measures. The percentage of total worldwide turnover has not yet been set.

Governance framework

The Spanish presidency proposed the creation of a ‘governance framework’ for foundation models, including ‘high-impact’ ones, consisting of the AI Office and a scientific panel that will support the Commission’s activities.

The activities envisaged include regularly consulting the scientific community, civil society organisations and developers on the state of play in managing the risks of AI models, and promoting international cooperation with peer bodies.

AI Act: EU countries headed to tiered approach on foundation models amid broader compromise

The EU approach to powerful AI models is taking shape as European countries discuss possible concessions in the upcoming negotiations on the world’s first comprehensive Artificial Intelligence (AI) rulebook.

Scientific panel

The scientific panel’s tasks include contributing to the development of methodologies for evaluating the capabilities of foundation models, advising on the designation and the emergence of high-impact foundation models, and monitoring possible material safety risks related to foundation models.

The members of the panel should be selected on the basis of recognised scientific or technical expertise in AI, act objectively and disclose any potential conflicts of interest. They might also apply to become vetted red-teamers.

Risky non-compliant systems

The presidency proposed a revised procedure to deal with non-compliant AI systems that pose a significant risk at the EU level. In exceptional circumstances where the good functioning of the internal market might be at stake, the Commission could carry out an emergency evaluation and impose corrective measures, including withdrawal from the market.

[Edited by Nathalie Weatherald]
