The EU AI law will not be future-proof unless it regulates general purpose AI systems

"General purpose AI systems are systems that can perform a wide range of functions. These include, but are not restricted to, image or speech recognition, audio or video generation, pattern detection, question answering, and translation." [shutterstock / metamorworks]

The original exclusion of general purpose artificial intelligence (AI) systems from the EU’s proposed AI Act, published in April 2021, risked endangering fundamental rights and safety. While the first partial compromise text has introduced an article on general purpose AI systems, it still fails to address the harms arising from these systems, write Kris Shrishak and Risto Uuk.

Kris Shrishak is a technology fellow at the Irish Council for Civil Liberties.

Risto Uuk is a policy researcher at the Future of Life Institute.

You might have heard that Meta, previously known as Facebook, recently released a large language model for any researcher to experiment with. Meta admits that this system has a high propensity to generate toxic language and reinforce harmful stereotypes. And yet, under the upcoming AI Act, all of the responsibility for ensuring the safety of such systems, which belong to the class known as general purpose AI systems, is placed on the users.

General purpose AI systems are systems that can perform a wide range of functions. These include, but are not restricted to, image or speech recognition, audio or video generation, pattern detection, question answering, and translation.

This allows the systems to be used for a variety of specialised applications such as chatbots, ad generation systems, decision assistants and spambots. In fact, these systems have so many potential uses that most have yet to be discovered, and while many of them will benefit users, some will be high-risk and some may need to be prohibited.

Some of these systems have already propagated extremist content, exhibited anti-Muslim bias, and inadvertently revealed personal data. One chatbot based on a general purpose AI system, for example, encouraged someone to commit suicide.

Given that, for better or for worse, general purpose AI systems are likely to be the future of AI, we are bound to see more incidents like these.

The AI Act’s obligations should fall primarily on the developers of general purpose AI systems. Instead of making developers legally responsible for ensuring the safety of their systems, however, the article proposed by the Council assigns all responsibility to the users.

For example, if a department in the Belgian government uses a general purpose AI system to run a chatbot providing essential public services, the department alone will be responsible for the chatbot’s behaviour, even though it has almost no control over the flaws and biases of the underlying system.

This creates a number of problems. Many data governance requirements, particularly bias monitoring, detection and correction, require access to the data sets on which AI systems are trained.

These data sets, however, are in the possession of the developers and not of the user, who puts the general purpose AI system “into service for an intended purpose”. For users of these systems, therefore, it simply will not be possible to fulfil these data governance requirements.

Moreover, it will be more cost-effective and less burdensome for small European businesses if the developers of general purpose AI systems (most of which are based outside the EU) are the ones required to conform to legal obligations and monitor their systems after they are put to use. Only a few companies can afford to spend billions of dollars or euros developing general purpose AI systems, as this process requires vast amounts of computation.

By contrast, hundreds of companies (including many SMEs) use general purpose AI systems. Regulating specific use cases, when many of them are based on just a handful of general purpose AI systems, is not cost-effective.

Developers of general purpose AI systems should be treated as providers in this legislation, while companies using these systems for specific applications should be treated as exactly that: users. The recent report from the two leading committees of the European Parliament also fails to clarify this. We recommend that the European Union explicitly assign responsibility to the developers of general purpose AI systems.

Just a handful of general purpose AI systems will be the source of flaws and biases in hundreds of specialised applications. This is because a single general purpose AI system, such as the language model released by Meta, can serve as the foundation for many other systems tailored to individual customers.

Flaws and biases in such a system could affect media articles, educational materials, and chatbots, as well as other areas that would only be discovered when small European companies experiment with these systems. For this reason, all general purpose AI systems should be thoroughly assessed before being put on the market and monitored continuously after.

Furthermore, general purpose AI systems have many different uses, the safety of which will not be ensured by regulating the user alone. Flaws and biases in the system will result in significant downstream harm because the user does not have the required resources, access and ability to ensure safety.

Continuing with our earlier example: a department in the Belgian government that uses a general purpose AI system will be held responsible for it, despite lacking the computing resources, the data sets and the insight into the system’s design needed to ensure its safety.

Above all, we recommend that the developers ensure their general purpose AI systems are compliant with the requirements set out for high-risk AI systems in the AI Act proposal. In addition, the developers and users should regularly assess whether these AI systems present any new risks, including those discovered while investigating novel use cases. The developers should also register their general purpose AI systems along with the list of users in the EU database.

Only by making all these changes can we make the AI Act future-proof.
