In November 2021, the European Parliament’s Special Committee on AI in a Digital Age (AIDA Committee) put forward its draft report on artificial intelligence (AI) in a digital age. The report advocates a very permissive approach to the regulation of AI in order to stimulate innovation and foster the EU’s competitiveness. In doing so, however, it understates the risks specific to the development and use of AI in the context of health, and offers no concrete solutions to translate potential into action. We identify five such shortcomings and appeal to the regulator to address them.
Jorge Félix Cardoso, M.D., M.A. is a Parliamentary Assistant at the European Parliament.
Hannah van Kolfschooten, LL.M. is a PhD Researcher at the Law Centre for Health and Life, University of Amsterdam.
Diogo Nogueira Leite, M.Sc. is a PhD Researcher in Health Data Science at the Center for Health Technology and Services Research, University of Porto.
Tjaša Petročnik is a PhD Researcher at TILT/TILEC, Tilburg University.
Henrique Vasconcelos, M.D.
The authors write in their personal capacity.
Health is broader than healthcare
When talking about AI in or for health, we are not only talking about cancer diagnosis or the personalisation of therapeutics in the clinic; health provision is expanding from formal healthcare systems into our smart devices, actively involving (pre-)patients and consumers. Think of increasingly AI-driven fitness apps, symptom checkers, disease management tools and the like, which have the potential to widen access to health-enhancing resources by cutting conventional healthcare ‘gatekeepers’ out of the equation.
In the report, however, these tools are seen as mere means to relieve pressure on healthcare systems, even though they may shape users’ health attitudes and behaviours, or even cause serious harm when they do not perform as intended. Additionally, as health data is scarce and valuable, we can expect such consumer-facing AI tools to increasingly serve as an avenue for obtaining it. Misuse might result in privacy violations, discrimination based on health records, or increased health inequality. AI regulation should therefore take into account that AI-based health practices also take place outside formal healthcare settings, and address this properly.
Exaggerated benefits of health AI
Without a doubt, AI applications in clinical practice and health research show significant promise. Yet, reading the AIDA report, one may get the impression that AI is already widely and successfully used in clinical settings. Although the fight against COVID-19 has indeed accelerated research into new health technologies, it has (so far) produced few robust, generalisable results. While some AI algorithms may perform with high accuracy on paper, even comparably to human specialists, they often do not perform as well in real-world clinical practice. Expectations for AI in health appear inflated, as the promised transformative results have so far mostly been confined to lab-controlled environments. Furthermore, the AIDA report seems to mistakenly equate more diagnoses with better clinical outcomes. That is not always the case: overdiagnosis can do more harm than good. Ultimately, implementing AI technologies in health contexts requires not only accurate algorithms but also investments in care infrastructure, professionals, and resources.
Underdiagnosis of the risks for both individuals and society
While the benefits of AI appear over-diagnosed, the AIDA report seems to downplay the risks. It correctly acknowledges some of them, in particular harms to individuals’ wellbeing due to, for instance, misdiagnosis, and the related liability issues. But by focusing on individual risks, the report overlooks the broader societal risks and harms of AI. Unlike ‘human’ medical errors, the mistakes of AI systems can have a much larger impact: because of widespread use, a single technical error could lead to mass patient injuries. Additionally, AI may perform better on sub-populations that are better studied or better represented in training datasets, reflecting existing societal biases. For example, systems used to aid health professionals in diagnosing skin cancers are frequently trained on freely available image databases that often lack images of people of colour. As a result, such systems may underdiagnose skin cancer in people of colour. This would not only directly harm the health of individual members of (already) marginalised groups; it could also deepen existing socio-economic inequalities in access to health and in health outcomes. The report thus seems to disregard these societal implications in favour of potential economic benefits.
Patients are not mere data points; they are humans with fundamental rights
The risks of AI in health go well beyond health risks. AI runs on data, but data is not just an input; it is “collected by people, from people and for people”. Data subjects and those affected by AI are human beings with fundamental rights such as dignity, non-discrimination and privacy. The report, for instance, does not touch upon the black-box nature of many AI systems: when it is unclear how an AI system reaches its conclusions, clinicians struggle to explain, and patients to understand, its advice. This undermines established health practices of information provision and informed consent, an integral part of the right to health. Moreover, those seeking care are in a particularly vulnerable position due to information asymmetry and risk aversion. The EU’s approach to AI in health should therefore not only ensure that health-specific fundamental rights are protected in the design, deployment, and use of AI health tools, but also address citizens’ willingness to provide and curate health data, and the ways to govern it appropriately.
Development of AI in health requires a specific approach
Developing health solutions differs from conventional product development because it touches upon people’s bodies and lives in a way no other industry does. Introducing AI in health amplifies the need for multidisciplinary teams that bring together insights from (bio)medicine, data science, the behavioural sciences, ethics and law. Because the report does not recognise this heterogeneity, and with it the need for sector-specific regulation, it fails to address the needs of patients and practitioners, but also of developers and industry.
The AIDA report should therefore acknowledge the special nature of AI in the context of health and call for proper regulation of the risks it poses to patients and consumers as individuals, and to society at large. After all, health is everyone’s concern, and algorithms have already entered doctors’ offices and our living rooms. Let’s ensure that they are really here to make us healthier.