The US administration has urged European lawmakers to avoid heavy-handed regulatory frameworks in the future rollout of Artificial Intelligence technologies on the continent. The call comes ahead of the European Commission’s AI strategy, which is set to be presented early this year.
On Tuesday (7 January), the White House put forward a set of regulatory principles aimed at avoiding the overregulation of Artificial Intelligence technologies in the private sector. In a statement, the US said AI regulation should not be pursued until risk assessments and cost-benefit analyses have been carried out, adding that the government hopes its European counterparts will adopt a similar approach.
“Europe and our allies should avoid heavy-handed innovation-killing models, and instead consider a similar regulatory approach,” a statement from the White House read.
“The best way to counter authoritarian uses of AI is to make sure America and our international partners remain the global hubs of innovation, shaping the evolution of technology in a manner consistent with our common values.”
In a call with reporters ahead of the announcement, the US Chief Technology Officer Michael Kratsios said America is leading the way in shaping the AI revolution “in a way that reflects our values of freedom, human rights, and civil liberties.”
“We encourage Europe to use the U.S. AI principles as a framework,” Kratsios said, referring to the Commission’s forthcoming AI strategy.
“The best way to counter authoritarian uses of AI is to make sure America and our international partners remain the global hubs of innovation, advancing our common values,” he added, revealing that he had raised the issue with the EU’s digital tsar Margrethe Vestager during their meeting at Lisbon’s Web Summit in November.
Kratsios is set to formally announce America’s AI principles on Wednesday at the CES convention in Las Vegas.
Meanwhile, European countries continue to pursue divergent approaches to AI technologies. Over the weekend, a report in the German weekly news magazine Der Spiegel revealed that a draft law from the interior ministry is set to expand the competencies of the federal police force by rolling out facial recognition technologies at 135 train stations and 14 airports across the country.
The introduction of the draft law follows a series of trials in the country over the past few years involving a number of tech firms, including German outfits IBM Deutschland, Funkwerk, and the G2K Group, as well as a consortium led by Japan’s Hitachi featuring several multinational tech companies.
Elsewhere in Europe, concerns have been raised about facial recognition technologies by data protection authorities in Sweden and France, as well as the EU’s European Data Protection Supervisor.
There has also been a sense of disquiet in France surrounding the country’s new ID programme, ‘Alicem’, which will employ facial recognition technologies for identity verification purposes. In a recent interview with Le Parisien, however, France’s Secretary of State for Digital, Cédric O, struck a more cautious note on the project’s rollout.
In Brussels, a report published in June by the Commission’s High-Level Group on AI suggested that the EU should consider the need for new regulation to “ensure adequate protection from adverse impacts.” This could include issues arising from biometric recognition, the use of lethal autonomous weapons systems (LAWS), AI systems built on children’s profiles, and the impact AI may have on fundamental rights.
In terms of how AI technologies could impact consumer rights, Justice Commissioner Didier Reynders told his parliamentary hearing in October that he would advocate an ‘ethics-by-design’ approach, whereby products and services using AI take ethical guidelines into account at the earliest possible stage of their development.
For her part, the EU’s digital chief, Margrethe Vestager, told EURACTIV in November that “great opportunities come with great risks,” but that she has “strong reservations” about the ‘blanket’ application of some Artificial Intelligence technologies in particular, such as facial recognition software.
The US statement on Tuesday follows the recent adoption of new rules restricting exports of American Artificial Intelligence technologies, which the administration hopes will keep rival governments from taking advantage of US innovation.
The new measures, which came into force on Monday (6 January), mean that firms exporting types of geospatial imagery software from the US have to apply for a license to send it abroad.
[Edited by Zoran Radosavljevic]