Germany calls for tightened AI regulation at EU level

Germany is asking Brussels to develop a new classification scheme for AI technology together with the member states. EPA-EFE/ALEXANDER BECHER

Four months after the European Commission presented its ‘white paper’ on Artificial Intelligence (AI), the German government said it broadly agrees with Brussels but sees a need to tighten up on security. The government is particularly concerned by the fact that only AI applications with “high risk” have to meet special requirements. EURACTIV Germany reports.

According to the European Commission’s ‘white paper’, there are two criteria for AI applications with “high risk”.

First, they must be used in sensitive sectors, such as health, security, or justice, and second, their concrete application must be associated with special risks, such as discrimination, injury, or danger to life. If an AI application fulfills both criteria, it must also meet special requirements, for example with regard to data retention or human supervision.

However, for Germany, these requirements are not far-reaching enough. The government is thus proposing to tighten both the classification and the requirements themselves.

High-risk Artificial Intelligence to be 'certified, tested and controlled,' Commission says

Artificial Intelligence technologies carrying a high-risk of abuse that could potentially lead to an erosion of fundamental rights will be subjected to a series of new requirements, the European Commission announced on Wednesday (19 February).

‘High risk’: extending the criteria

The criteria for a “high risk” AI application should be “reconsidered and, if necessary, extended,” according to the government, which also disliked the fact that risky applications only have to meet special requirements if they are used in sensitive sectors.

“As a consequence, certain high-risk uses would not be covered from the outset if they did not fall under certain sectors”, the statement added. For Berlin, the fact that the European Commission itself pointed to possible exceptional cases illustrates the need for more comprehensive regulation.

LEAK: EU in push for digital transformation after COVID-19 crisis

EU Member States and the European Commission should “thoroughly analyse the experiences gained from the COVID-19 pandemic” in order to inform future policies across the entire spectrum of the digital domain, leaked Council documents seen by EURACTIV reveal.

More levels of risk classification

However, the classification system itself should also be revised, according to Berlin. It is “questionable whether the already existing EU regulations are sufficient for AI applications with a lower than ‘high risk’”, the opinion states.

Therefore, Brussels is being asked to develop a new classification scheme together with the member states.

It should provide for several levels of classification “for relevant risks and damage, taking into account the amount of damage and probability of damage”, such as “life and health, property, democratic processes, environment, climate, social, societal and economic participation”.

However, if an AI application were to be completely free of potential harm, no specific control should be required.

MEPs chart path for a European approach to Artificial Intelligence

EU lawmakers debated the bloc’s approach to regulating Artificial Intelligence technologies on Tuesday (12 May), in an effort to chart a path for how the EU will manage the onset of next-generation technologies.

Requirements too vague

In principle, Germany welcomes the requirements which, according to the Commission, must be met by AI applications above a certain risk potential. But Berlin wants to see improvements in the details.

For example, Germany is calling for a more concrete definition of when data records must be stored on a mandatory basis.

According to the Commission, this applies in “certain justified cases”, a formulation Berlin considers too unclear and in need of specification, as is the current description of a “limited, appropriate period of time” during which data records must be retained.

The aspect of information security, “understood as protection both against accidental errors, for example, due to unexpected user input, and against targeted manipulation by attackers”, should also be given greater consideration than before. Here, Germany considers a “mandatory high IT security standard for high-risk AI systems” to be “indispensable”.

The government also sees the need for improvement in the area of human supervision of AI systems, where Brussels still has to specify “under which circumstances which form of human supervision should be made mandatory”.

Vestager warns against predictive policing in Artificial Intelligence

Certain Artificial Intelligence applications, including forms of predictive policing, are ‘not acceptable’ in the EU, the European Commission’s Vice-President for Digital Policy, Margrethe Vestager, has said.

Seehofer: From facial recognition to data protection

Moreover, the German paper makes several references to the use of biometric remote identification, a category which includes controversial technologies such as facial recognition.

In January, Interior Minister Horst Seehofer had sought to expand the use of facial recognition in Germany, but he quickly backed down following massive criticism from civil society.

The interior ministry co-authored the statement, which emphasizes the “particular risks to the civil liberties of citizens” posed by biometric remote identification, and the “depth of possible encroachment on goods protected by fundamental rights”.

The text makes clear at two points that both whether and how such technologies should be used remain a matter of debate.

In an early version of the ‘white paper’, an EU-wide moratorium on facial recognition was still under consideration but was ultimately dropped.

“It is important that AI applications are not only innovative but also safe. Safe from hacker attacks, safe from sabotage, safe from unauthorized data leaks,” said Seehofer, according to a press release.

LEAK: Commission considers facial recognition ban in AI 'white paper'

The European Commission is considering measures to impose a temporary ban on facial recognition technologies used by both public and private actors, according to a draft white paper on Artificial Intelligence obtained by EURACTIV.


Edited by Samuel Stolton
