Ethical considerations at all stages of AI’s use in journalism

From automated reporting and data processing to information verification, AI's presence in the world of journalism is growing. [Shutterstock / metamorworks]

AI technologies are increasingly being incorporated into newsrooms, but with trust in media in a precarious position, concerns remain that introducing machine learning could worsen the situation.

From automated reporting and data processing to information verification, AI’s presence in the world of journalism, and other sectors, is growing. But the media industry’s reliance on trust and truth means an added layer of complexity when it comes to using such tech. 

Several programmes have already integrated AI successfully into the news-making process, for example by breaking down and standardising COVID-19 data. Still, concerns remain over the ethical implications of using this kind of software in a sector that deals with sensitive information and has a potentially wide societal reach.
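To make that example concrete, the kind of standardisation step such programmes automate might look like the sketch below. The feed, column names and alias table are assumptions for illustration, not any particular newsroom's pipeline.

```python
import pandas as pd

def standardise_covid_feed(raw: pd.DataFrame) -> pd.DataFrame:
    """Normalise dates, harmonise country labels and smooth daily counts."""
    df = raw.copy()
    # Parse mixed date strings into proper datetimes; unparseable rows become NaT.
    df["date"] = pd.to_datetime(df["date"], errors="coerce")
    # Harmonise country spellings against a small alias table (illustrative).
    df["country"] = df["country"].str.strip().replace(
        {"Czechia": "Czech Republic", "USA": "United States"}
    )
    # Drop rows the parser could not recover rather than pass on bad data.
    df = df.dropna(subset=["date", "new_cases"])
    # Smooth weekday reporting artefacts with a 7-day average per country.
    df = df.sort_values("date")
    df["cases_7d_avg"] = df.groupby("country")["new_cases"].transform(
        lambda s: s.rolling(7, min_periods=1).mean()
    )
    return df
```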

Despite these concerns, the media is not included as a high-risk area of application in the EU's draft AI Act, which takes a vertical, risk-based approach to regulating the technology, identifying sectors seen as particularly vulnerable to its misuse or unintended consequences.

EU launches AI blueprint in bid to become world leader

The European Commission presented its long-awaited 'AI package' on Wednesday (21 April). The proposal is the first-ever attempt to regulate AI and was developed as part of a broader ambition to make Europe a global leader in the field by being the first to set clear guidelines.

The scope of AI in journalism is already broad, in terms of both existing and potential applications. In many cases, the work is still behind the scenes: automating the collection and processing of large volumes of data, for example, or using textual analysis to fact-check or tag articles.
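A tagging step of this kind is often a straightforward text classifier. The sketch below is a minimal, hypothetical illustration using scikit-learn; the snippets, tags and model choice are invented for the example and stand in for a newsroom's much larger labelled archive.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: article snippets with editorial section tags.
texts = [
    "The central bank raised interest rates again this quarter",
    "The striker scored twice in the final minutes of the match",
    "Ministers agreed a compromise text on the draft regulation",
    "Shares fell after the company cut its revenue forecast",
]
tags = ["economy", "sport", "politics", "economy"]

# TF-IDF features feeding a simple linear classifier.
tagger = make_pipeline(TfidfVectorizer(), LogisticRegression())
tagger.fit(texts, tags)

# Suggest a tag for an incoming article; an editor would still review it.
print(tagger.predict(["Parliament votes on the new AI rules next week"]))
```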

Increasingly, though, AI is also being worked into more outward-facing parts of journalism, including the writing of articles themselves. Various newsrooms have deployed automation to issue updates and piece together articles on current affairs, sports events and property listings, amongst other topics.
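Much of this automated reporting is template-based rather than free-form generation: structured data is slotted into pre-written sentence patterns. A minimal sketch with invented match data follows; production systems layer on variation, style rules and editorial checks.

```python
def football_update(match: dict) -> str:
    """Assemble a short match report from structured data (illustrative)."""
    home, away = match["home"], match["away"]
    hg, ag = match["home_goals"], match["away_goals"]
    if hg == ag:
        return f"{home} and {away} drew {hg}-{ag}."
    # Pick a sentence pattern based on the result.
    winner, loser = (home, away) if hg > ag else (away, home)
    body = f"{winner} beat {loser} {max(hg, ag)}-{min(hg, ag)}"
    return f"{body}, with {match['scorer']} scoring in the {match['minute']}th minute."

print(football_update({
    "home": "Team A", "away": "Team B",
    "home_goals": 2, "away_goals": 1,
    "scorer": "J. Doe", "minute": 87,
}))
```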

The software is not just text-based; it is also being applied to other forms of media. DesignAI, a Munich-based company working with news organisations such as the European Pressphoto Agency and the Spanish news agency EFE, has developed AI for annotating videos, mainly for archiving footage but with potential applications in a wide range of practices both within and outside journalism.
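DesignAI's models are not public, but the general shape of time-coded video annotation for archiving can be sketched: sample frames at intervals and attach labels to timestamps. The label_frame() classifier below is a hypothetical placeholder for an image model.

```python
import cv2

def label_frame(frame) -> list[str]:
    # Placeholder: a real system would run an image-recognition model here.
    return ["unlabelled"]

def annotate(path: str) -> list[tuple[float, list[str]]]:
    """Return (timestamp_seconds, labels) pairs, roughly one per second."""
    cap = cv2.VideoCapture(path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25.0
    annotations, index = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % int(fps) == 0:
            annotations.append((index / fps, label_frame(frame)))
        index += 1
    cap.release()
    return annotations
```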

When it comes to the ethics of using AI in the media and how to weigh them, a distinction needs to be made between the different levels of development, Keesiu Wong, CEO and co-founder of DesignAI, told EURACTIV.

The first question is whether a concern applies to AI as a whole, for instance bias in data, as seen in the technology's application in other sectors, or whether it is specific to the software's use in the media. The second is where in the process, from development to application, these issues might arise.

“There are different levels and places where undesired unethical behaviour might happen”, Wong said. “This might be in how the software is programmed – this would be basically our concern – it might be how the data was constructed…and this is basically the hardest one because it’s the least obvious. And then it could also be in how the AI is applied.” 

What is needed, therefore, he said, is an “ethics by design” approach to creating these kinds of technologies. “You actually need ethical considerations everywhere – you need it while you construct or design the AI, you need it in constructing the data set, during training, and you need it when you apply it.” 
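What "ethical considerations in constructing the data set" can mean in practice is something as simple as an automated audit for skew before training begins. A minimal sketch follows; the column names and the 10% threshold are illustrative assumptions.

```python
import pandas as pd

def audit_balance(df: pd.DataFrame, group_col: str, min_share: float = 0.10):
    """Warn if any group supplies less than min_share of the examples."""
    shares = df[group_col].value_counts(normalize=True)
    flagged = shares[shares < min_share]
    for group, share in flagged.items():
        print(f"WARNING: '{group}' makes up only {share:.1%} of the data; "
              f"a model trained on it may underperform for that group.")
    return flagged

# Invented example: 19 of 20 training articles come from one region.
training_data = pd.DataFrame({"region": ["EU"] * 19 + ["non-EU"] * 1})
audit_balance(training_data, "region")
```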

When media organisations deploy this software, an additional layer of considerations arises, given the implications of publishing information that has been processed by automated technologies.

AI tools can be invaluable for investigative journalists, for example, when it comes to collating and processing large volumes of information, said Jonathan Stray, an AI researcher at the University of California, Berkeley, speaking at the London School of Economics' JournalismAI Festival on Thursday (2 December).

However, he added, before any claims or allegations are made public based on this, “guaranteeing the accuracy of a process, guaranteeing human oversight of the quality of the product is one of the main ethical challenges relevant to AI and story production.” 
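One way to operationalise that oversight is a publication gate under which no automated output goes out on model confidence alone; a human sign-off is always required as well. The sketch below is illustrative, with invented fields and an assumed threshold.

```python
from dataclasses import dataclass

@dataclass
class Claim:
    text: str
    confidence: float  # score from an upstream extraction model
    reviewed: bool = False  # has an editor signed off?

def ready_to_publish(claim: Claim, threshold: float = 0.9) -> bool:
    # Low-confidence output never auto-publishes, and even high-confidence
    # output still requires an explicit human sign-off.
    return claim.confidence >= threshold and claim.reviewed

claim = Claim("Company X moved funds offshore in 2019", confidence=0.95)
print(ready_to_publish(claim))  # False: no editor has signed off yet
claim.reviewed = True
print(ready_to_publish(claim))  # True
```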

EU Council presidency pitches significant changes to AI Act proposal

The Slovenian Presidency circulated a compromise text on the EU’s draft AI Act, including major changes in the areas of social scoring, biometric recognition systems, and high-risk applications, while also identifying future points for discussion.

[Edited by Alice Taylor]
