The EU needs to do more to protect against AI manipulation

DISCLAIMER: All opinions in this column reflect the views of the author(s), not of EURACTIV Media network.


The European Commission published a proposal in April 2021 for the regulation of artificial intelligence (AI) in the EU. It includes some prohibitions on AI manipulation, which is a good start but does not go far enough. The Slovenian EU presidency has since made important suggestions for improving the proposal, but more can still be done.

Risto Uuk is a policy researcher at the Future of Life Institute. 

How would you feel if you knew that tech companies exploit your moments of sadness? What if they manipulated you into working more than you are reasonably able? That is the state of the world today. Social media companies have helped advertisers target ads at consumers experiencing moments of psychological vulnerability, and some e-commerce companies use workplace surveillance to pressure their workers into working more than they are physically and psychologically able. Many tech companies are using AI to manipulate consumers.

The potential for manipulation by AI is high for three reasons. First, the vast majority of algorithms are non-transparent, making it difficult for users to understand how these systems work and influence them. Second, AI systems can personalise content by assessing people’s interests, habits and even their mood. This is often done not with the user’s interests in mind, but with the financial interests of the companies deploying these systems. Finally, AI can undermine consumers’ autonomy by exploiting their decision-making vulnerabilities. Such manipulation violates EU citizens’ fundamental right to human dignity.

Risks of AI Manipulation 

The risks of manipulation from AI systems aren’t merely hypothetical: they already threaten individuals and communities and can lead to further harms if not adequately prepared for. Recent evidence indicates that Instagram’s machine learning algorithm is harmful to a significant proportion of users, particularly teenage girls: it fosters body image issues and increases anxiety and depression. Yet the platform is reluctant to curb engagement, since doing so would affect its profit margins.

Furthermore, the voter micro-targeting firm Cambridge Analytica, armed with data gathered from Facebook, used advanced analytics to target ads at specific voters and potentially swing elections around the world. In addition, recently leaked strategy documents revealed how advertisers planned to target people in moments of psychological vulnerability with tailored ads.

In some instances, AI is already influencing consumer decisions more effectively than standard marketing techniques do.

AI systems can also manipulate using nudges, indirect ways of influencing user behaviour. Not all nudges are manipulative. Informational nudges, like nutrition labels that show the calories and ingredients of food items, are generally not manipulative because they do not exploit the consumer. In contrast, many online nudges, like autoplay on YouTube, are more controversial because they encourage mindless engagement and overuse of the platform.

The EU AI Act on Manipulation  

The proposed EU AI Act addresses manipulation in several ways. It recognises the practice of manipulation, identifies subgroups at high risk of manipulation, and addresses the possibility of harm. It also acknowledges that existing legislation – such as data and consumer protection law – might cover manipulation. Many civil society organisations, like Amnesty International and EDRi, have shared suggestions on how the AI Act can better protect against the risks of manipulation. I have two recommendations for the current French EU Presidency.

Firstly, I suggest removing “subliminal techniques” from the proposal so that the Act applies to all types of manipulation techniques. The term ‘subliminal’ is not defined in the AI Act proposal, but it usually refers to sensory stimuli that consumers cannot consciously perceive; for example, visual stimuli presented for less than 50 milliseconds. However, most uses of AI will not be subliminal, since they will be consciously perceived by users. As such, the Act in its current form still allows many forms of AI manipulation.

For example, imagine having a water bottle equipped with sensors that feed data back to the health app on your phone. If it has been a while since you filled your water bottle, an algorithm might conclude that you’re thirsty – even if you don’t realise this yourself – and show you an ad for a sugary drink. This would influence you towards buying that drink, targeting you in a state of vulnerability. This manipulation, however, would not count as subliminal as long as you consciously perceived the ad. This particular example of AI manipulation may strike the reader as relatively harmless, but that will not always be the case.

Secondly, I recommend adding societal harm to the list of harms currently included in Article 5 of the Act. Many civil society organisations, academics, and European expert groups like the AI High-Level Expert Group have discussed the importance of using AI to increase societal wellbeing or taking steps to reduce societal harm from AI to ensure that European citizens can trust these systems.

AI manipulation can sometimes cause only modest harm to any one individual yet hurt society at large. This occurs when AI systems harm the democratic process, erode the rule of law or exacerbate inequality. For instance, AI systems have been used to influence political opinions, as happened in the Cambridge Analytica scandal, plausibly causing societal harm. Incidents like this will only become more frequent and severe unless the EU acts to strengthen protections against manipulation.
