As AI technologies develop and their everyday uses become increasingly relevant, EU countries need a framework to regulate this transition. To discuss AI advances in Europe, speakers from across the industry gathered at a recent EURACTIV panel.
Among the issues raised were the deep divide in public opinion on AI, future funding for the industry, and consumer and data privacy concerns.
Lucilla Sioli, Director for Digital Industry at the Commission’s DG Connect, described the EU executive’s approach as “positive but careful” and human-centered. She also mentioned Commission funding for a public-private partnership on AI, data, and robotics, in addition to testing facilities to develop everyday uses of the technology.
For these advances to be made possible, regulation is needed. The director pointed to the quality of datasets and trust in new companies to respect fundamental rights and consumer privacy, which form the essence of the Commission’s regulatory proposal so far.
When asked whether the Commission can trust Big Tech, Sioli insisted on transparency and said the regulation will not target huge companies more than smaller businesses, as the Commission seeks a level playing field and a single set of rules that applies to everyone in the industry.
Joining Sioli were Joanna Bryson, Professor of Ethics and Technology at the Hertie School of Governance, Loubna Bouarfa, CEO of Okra Technologies, and Torbjørn Folgerø, Chief Digital Officer of Equinor.
Building on Sioli’s point about trust and transparency, Professor Bryson stressed that when dealing with AI machines, it is crucial to “know who did what to who, and why”.
As a counterpoint, Bouarfa highlighted the benefits of AI, saying that “we need to stop viewing sharing data as a bad thing”. She used Israel as an example, mentioning how they used data analysis to measure vaccine outcomes and improve their results. “AI is worthless and data is worthless if we don’t use it”, she said.
Folgerø added that Equinor is “releasing large datasets to stimulate innovation and research”.
As the digital officer of an energy company with over 20,000 employees, Folgerø talked about the importance of training the workforce to collaborate with AI and find the best digital solutions. To do that, the company is partnering with firms such as Microsoft and Shell. The training, he said, includes knowledge of machine learning and algorithms.
Such developments are especially important for familiarizing the workforce with the steady transition many companies will make towards more AI-powered operations.
Sioli also noted that “for some people, artificial intelligence is a real excitement, it offers plenty of new opportunities […] on the other hand, for other people, artificial intelligence is a nightmare, it threatens freedom”, but clarified that the Commission does not favor one side or the other.
So how can this trust in AI be created? For Bouarfa, it comes down to the power to control, to some extent, the AI technology in question. “A user gets the AI system suggestions, but they make the final decision”, she explained.
Her example was specific to the healthcare industry, where AI can help with diagnoses and predictions, but the final call is ultimately made by a human. Bouarfa also explained that clarity and transparency are everything: it is important to explain where such predictions and datasets come from and why they are being used in order to build a relationship of trust.
However, the question of who to turn to when AI fails still remains. In Professor Bryson’s opinion, AI is a product, and this product is its manufacturer’s responsibility. “It’s not different from a car or something like that”, she explained.
The new legislation being discussed, in the professor’s opinion, therefore needs to clarify that AI is a service or product like any other. Many existing laws should already apply to the technology.
On the same topic, “speed and data are the new currency”, said Bouarfa.