This article is part of our special report Digital Transformation in Healthcare.
With digital technologies set to irrevocably change the face of our healthcare systems, the ethical concerns surrounding the use of artificial intelligence (AI) are increasingly gaining prominence in policy circles.
The new von der Leyen Commission is expected to deliver a report on AI and ethics in its first 100 days, which could be oriented toward an ‘ethics-by-design’ approach, according to Legal Affairs Commissioner Didier Reynders. In terms of data protection, all EU AI stakeholders must comply with the General Data Protection Regulation (GDPR) in safeguarding the personal data of healthcare patients.
Moreover, while talks in the European Council on the ePrivacy regulation have stalled, the regulation could well affect the health sector in terms of the protection of personal data in electronic communications, despite a carve-out for emergency services in the Commission’s original proposal.
Business Insider Intelligence reported that spending on AI in healthcare is projected to grow by 48% between 2017 and 2023. Furthermore, the Commission is increasing its annual investments in AI by 70% under the research and innovation programme Horizon 2020, predicted to reach €1.5 billion for the period 2018-2020.
However, speaking at a recent event on the digitalisation of healthcare, Ioana-Maria Gligor, head of the European Reference Networks and Digital Health unit at DG SANTE, said that the GDPR’s implementation can vary between member states and can sometimes create complexity for both healthcare providers and researchers. She emphasised that the most important thing was to ensure transparency about how data is protected and used.
At the event, socialist MEP Eva Kaili said that although the GDPR must play a part in the regulation of AI in healthcare, we must be careful to “ensure it doesn’t stop innovation”.
She said that we must instead distinguish between different kinds and uses of data sets, taking into account who is using the data and for what purpose. With “insurance companies who try to maximise profit, there is an issue”, she said, but there should be a different consideration for scientists who want to use data for research and development.
Boris Brkljačić, president of the European Society of Radiology and professor of radiology and vice-dean at the University of Zagreb School of Medicine, told EURACTIV that AI is already a major area of research, with more than 5,000 papers published on the use of AI in the field of radiology.
Beyond radiology, AI technologies are predicted to optimise the healthcare system in a whole range of ways, such as through targeted treatments, more efficient diagnosis, streamlined logistics and advanced data analysis.
AI commonly refers to machine learning techniques and robotics, coupled with algorithms and automated decision-making systems, which together are able to predict human and machine behaviour and to make autonomous decisions.
The use of such technologies is increasingly affecting our daily lives, and the potential range of applications is so broad that it has been referred to as the fourth industrial revolution.
The discussion around the control of these technologies and their impact on society is increasingly focused on the ethical implications of using such technology and the challenge this poses for policymakers and regulators.
These implications include personal data protection and security, concerns that are especially acute in healthcare compared with other industries due to the sensitive nature of the patient data stored by healthcare providers.
In its various communications on AI, the Commission has set out its vision for AI, which is to be “trustworthy and human-centric.”
However, in a statement on its website, the European Society of Radiology notes that, as a new technology, AI “lacks clear standards guiding its development and use.”
The statement adds that the ethical use of AI in radiology should “promote well-being and minimise harm resulting from potential pitfalls and inherent biases”, ensuring that benefits and harms are “distributed among stakeholders in a just manner that respects human rights and freedoms, including dignity and privacy.”
This sentiment is echoed by Brkljačić, who similarly said that focus must be given to “minimising the risk of patient harm from malicious attacks and privacy breaches.”
Regarding potential data breaches, he said there is an urgent need to “make clear guidance at both the European level and national level regarding the use, manipulation and ownership of data.”
He added that one of the main issues is that software is usually developed for business models and is therefore profit-driven. There was a risk, he said, of data being mismanaged, when it is “essential that information is used solely for the benefit of patients.”
Edited by Samuel Stolton