The EU digital agenda for the 2019-2024 mandate is marked by a series of broad-ranging reforms intended to guide Europe through the digital transition as new technologies emerge. With Artificial Intelligence at the heart of the debate, the question of ethics, particularly in regulating how data is gathered and used, remains centre stage.
But who should decide what is ‘ethical’ when it comes to regulation? A principles-based, human-centred framework might be the solution, as policymakers cannot foresee the direction in which technology will develop. Recent EU national strategies on AI are framed in this direction: they set out specific needs and a hierarchy of steps for evaluating the potential harms of AI and for designing legislation to address them.
In recent years, the European Commission has highlighted the importance of being a leader in the development of trustworthy AI technologies. Nevertheless, legislating every application of AI can stifle innovation without addressing its potential harms, as prescriptive overregulation may blunt the benefits of new technologies. The effects would be felt across the EU economy, since innovative uses of AI are spreading through all industries. In the insurance sector, for example, algorithms are being used to speed up loss prediction and claims handling.
Listen to the full event here:
Ethics in the use of AI: Can regulators agree a standard approach?