Austria to combat deep fakes amid increasing use of the technology

This is a "considerable security policy risk because the identification of artificial influence is difficult to prove or trace," Interior Minister Gerhard Karner said at a press conference on Wednesday.  [MDV Edwards/Shutterstock]

The Austrian government published an action plan to fight deep fakes on Wednesday (25 May), aiming to better tackle disinformation and hate speech. At the EU level, several legislative texts also seek to address the growing issue.

Advancing digitalisation, which has been accelerated by the pandemic, is leading to a rapid increase in deep fakes, a type of AI-powered media portraying someone doing or saying things that never really happened. 

This is a “considerable security policy risk because the identification of artificial influence is difficult to prove or trace,” the Austrian Interior Minister Gerhard Karner told a press conference on Wednesday. 

An inter-ministerial task force on the issue was already launched at the end of 2020, involving the Austrian federal chancellery, the ministry of justice, the ministry of defence and the ministry of foreign affairs.

This task force's work on the topic led to the publication of the action plan, which envisages four fields of action: ‘Structures and Processes’, ‘Governance’, ‘Research and Development’, and ‘International Cooperation’. Awareness-raising among the population is also to be expanded and strengthened.

The Austrian government stressed that the regulation of deep fake videos must take into account the relevant fundamental rights and personal rights, and that particular attention must be paid to the special protection of freedom of expression and artistic freedom. 

“Deep fakes are used to manipulate public opinion and democratic processes, or to target individuals with hatred on the net,” said Justice Minister Alma Zadic.

The potential of deep fakes 

The Austrian Parliament assumes that deep fakes are now published every single day, as creating them no longer requires sophisticated software. Professor Hany Farid of the University of California, Berkeley, even expects that within the next three to five years it will no longer be possible to tell fake from real. 

While not all deep fakes are malicious in nature – they can also serve non-abusive purposes such as satire – the majority are used to damage people’s reputations via defamatory pornographic fake videos.

According to a report by the Dutch startup Sensity, such use accounts for more than 90% of cases, and the number of generated videos doubled every six months between the end of 2018 and 2020. 

Apart from pornographic material, which disproportionately affects women, deep fakes can also be dangerous in the context of politics. In March 2022, a manipulated video of Ukrainian President Volodymyr Zelenskyy circulated in which Zelenskyy appeared to tell the Ukrainian national army to surrender. 

A 2021 study by the Panel for the Future of Science and Technology found that “risks associated with deep fakes can be psychological, financial and societal in nature, and their impacts can range from the individual to the societal level”.

Thus, the study recommended that policy-makers prevent and address the adverse impacts of the technology and incorporate policy options into their legislative frameworks. 

European legislation to tackle deep fakes 

In response, several European legislative files are addressing the issue of deep fakes, such as the Digital Services Act (DSA) and the Artificial Intelligence (AI) Act. 

The European Parliament’s AI Act draft report from 20 April stressed “the emergence of a new generation of digitally manipulated media – also known as deep fakes”. Because of their potential for deception, deep fakes should be subject to both transparency requirements and the conformity requirements of high-risk AI systems. 

This does not mean that high-risk AI systems are prohibited, but complying “makes such systems more trustworthy and more likely to be successful on the European market”, the co-rapporteurs emphasise. 

Further, the DSA obliges very large online platforms to carry out risk assessments covering, among other things, the intentional manipulation of their services, which could negatively affect the protection of public health, minors, civic discourse, elections and security. 

Patrick Breyer, an EU lawmaker for the German Pirate Party and DSA opinion rapporteur, told EURACTIV that while these regulations could help as countermeasures, in consciously manipulated cases “human intelligence is called for”.  

As it could be impossible to decide whether one is dealing with a deep fake, detecting such creations will be a question of “media competences and counter-research” in the future, Breyer said. 


[Edited by Zoran Radosavljevic]
