The EU’s digital agenda for the next mandate will be marked by a series of broad-ranging reforms, from artificial intelligence and data protection to digital tax. However, the issue of ethics in the digital field remains at the centre of the debate, notably in the regulation of AI and data usage.
Back in 2017, the European Commission highlighted the importance of being a leader in the development of trustworthy AI technologies. As platforms and applications advance globally, more attention is being paid to the power held by AI systems – and by the governments funding them – the information they gather, and how human bias can influence them. The financial sector is a leader in deploying decision-support and decision-making algorithms involving AI techniques. This is especially prevalent in the insurance sector, where algorithms are used for loss prediction and claims handling, to name just two examples.
Establishing an appropriate ethical and legal framework is key to ensuring that fundamental principles and values are taken into consideration when dealing with these technologies. In April 2019, the EU’s High-Level Expert Group (HLEG) on Artificial Intelligence published its Ethics Guidelines, identifying key requirements that AI systems should meet in order to be trustworthy. Among them were human agency and oversight, privacy and data governance, transparency, environmental and societal well-being, and accountability.
EURACTIV organised this Stakeholder Forum to discuss the ethics around data usage and artificial intelligence. Questions included:
- What are the design requirements for a future governance model of AI?
- How will companies address bias and discrimination when using AI? What could be the consequences of bias in a sector such as insurance?
- How can we ensure that regulations on AI and data management do not hamper progress?
- How can policymakers ensure that no AI system will be able to disclose sensitive private data?