Provisions in an upcoming EU rulebook allowing public authorities and external auditors to access the source code of Artificial Intelligence systems were restricted on the basis of a digital trade agreement, according to internal documents from the European Commission.
The internal documents, obtained via a freedom of information request by Kristina Irion, a law professor at the University of Amsterdam, show several requests from the Commission’s trade department to its digital policy department concerning the draft AI Act.
The AI Act is a landmark legislative proposal to regulate Artificial Intelligence based on its potential to cause harm. The requests concern narrowing down provisions in the regulation related to the disclosure of source code, to align them with EU trade commitments.
For Irion, the documents point to a concerning reversal of what should be the right approach, because “the EU should use trade policy to promote its legislative agenda, not let previous trade commitments influence its digital policy-making.”
In particular, trade policy officials requested narrowing down the provisions related to the disclosure of source code to bring them in line with the EU-UK Trade and Cooperation Agreement.
The trade agreement commits London and Brussels to requiring the transfer of a software’s source code only under specific conditions, notably if a regulatory body requests it to enforce a law meant to protect public safety.
In an internal note dated 9 April 2021, the trade department thanked the digital policy department for having amended the requirements on technical documentation, but asked for further changes regarding the conformity assessment of quality management systems, specifically on the provision related to the external vetting of notified bodies (authorised independent auditors).
The trade department requested that the wording on the provision of the source code be narrowed down, removing a reference to ‘full’ access and specifying that the code would be provided only to assess the conformity of a high-risk system, to avoid an excessively broad interpretation.
Similarly, the trade department asked to eliminate the reference to granting ‘full’ access to the source code for a market surveillance authority assessing whether an AI system deemed at high risk of causing harm complies with the AI Act’s obligations.
At the same time, the trade policy officers asked that the notified body and public authority be bound by confidentiality obligations whenever source code is disclosed.
All the requested changes made it into the final draft the European Commission published later that month.
“Giving blanket protection to any type of source code, despite the fact that a lot of it is available via open source, is tantamount to sideloading a new protective regime that undercuts the current copyright and trade secrets rules, where at least you have to prove the information is commercially sensitive,” Irion said.
The EU Council, which adopted its position on the AI regulation in December, further narrowed the conditions for accessing an AI system’s source code, effectively making it a measure of last resort.
Meanwhile, a provisional text in the European Parliament shifted national authorities’ power of access from the source code to the AI system’s training model.
According to Irion, the risk is that access to this precious information becomes excessively difficult for regulators, who might face administrative burdens or fear losing litigation brought by companies and having to pay damages.
“With the rise of powerful AI models like ChatGPT, we need a better understanding of how these disruptive technologies actually work. We cannot just rely on the technical documentation the companies provide,” Irion added.
[Edited by Nathalie Weatherald]