Four months after the European Commission presented its ‘white paper’ on Artificial Intelligence (AI), the German government said it broadly agrees with Brussels but sees a need to tighten up on security. The government is particularly concerned by the fact that only AI applications with “high risk” have to meet special requirements. EURACTIV Germany reports.
According to the European Commission’s ‘white paper’, two criteria determine whether an AI application counts as “high risk”.
First, they are to be used in sensitive sectors, such as health, security, or justice, and second, their concrete application should also be associated with special risks, such as discrimination, injury, or danger to life. If an AI application fulfills both criteria, it must also meet special requirements, for example with regard to data retention or human supervision.
However, for Germany, these requirements are not far-reaching enough. The government is thus proposing to tighten both the classification and the requirements themselves.
‘High risk’: extending the criteria
The criteria for a “high risk” AI application should be “reconsidered and, if necessary, extended,” according to the government, which also disliked the fact that risky applications only have to meet special requirements if they are used in sensitive sectors.
“As a consequence, certain high-risk uses would not be covered from the outset if they did not fall under certain sectors”, the statement added. For Berlin, the fact that the European Commission itself pointed to possible exceptional cases illustrates the need for more comprehensive regulation.
More levels of risk classification
However, it is also the classification system itself which should be revised, according to Berlin. It is “questionable whether the already existing EU regulations are sufficient for AI applications with a lower than ‘high risk’”, the opinion states.
Therefore, Brussels is being asked to develop a new classification scheme together with the member states.
It should provide for several levels of classification “for relevant risks and damage, taking into account the amount of damage and probability of damage”, such as “life and health, property, democratic processes, environment, climate, social, societal and economic participation”.
However, if an AI application were to be completely free of potential harm, no specific control should be required.
Requirements too vague
In principle, Germany welcomes the requirements which, according to the Commission, must be met by AI applications above a certain risk potential. Berlin, however, wants to see improvements in the details.
For example, Germany is calling for a more concrete definition of when data records must be stored on a mandatory basis.
According to the Commission, this is the case in “certain justified cases”, a formulation Berlin considers too unclear and in need of specification, as is the current description of a “limited, appropriate period of time” during which data records must be retained.
The aspect of information security, “understood as protection both against accidental errors, for example, due to unexpected user input, and against targeted manipulation by attackers”, should also be given greater consideration than before. Here, Germany considers a “mandatory high IT security standard for high-risk AI systems” to be “indispensable”.
The government also sees the need for improvement in the area of human supervision of AI systems, where Brussels still has to specify “under which circumstances which form of human supervision should be made mandatory”.
Seehofer: From facial recognition to data protection
Moreover, the Germans make several references in their statement to the use of biometric remote identification, which covers controversial technologies such as facial recognition.
In January, Interior Minister Horst Seehofer had hoped to expand the use of facial recognition in Germany, but quickly backed down following massive criticism from civil society.
The interior ministry co-authored the statement, which emphasizes the “particular risks to the civil liberties of citizens” posed by biometric remote identification, and the “depth of possible encroachment on goods protected by fundamental rights”.
The text makes clear at two points that both whether and how such technologies should be used remains a topic of debate.
An early version of the ‘white paper’ had considered an EU-wide moratorium on facial recognition, but the idea was subsequently dropped.
“It is important that AI applications are not only innovative but also safe. Safe from hacker attacks, safe from sabotage, safe from unauthorized data leaks,” said Seehofer, according to a press release.
Edited by Samuel Stolton