The GDPR, which recently came into force, imposes such tight restrictions on the use of personal data that the EU will be unable to keep up with the rest of the world as it uses AI to streamline its economies, writes Nick Wallace.
Nick Wallace is a Brussels-based senior policy analyst with the Center for Data Innovation, a think tank studying the intersection of data, technology, and public policy.
European businesses seeking to use artificial intelligence (AI) will soon feel the impact of the EU’s new General Data Protection Regulation (GDPR), which came into force on 25 May.
The European Commission announced a package of policy measures at the end of April intended to make more non-personal data available for use in AI, but the GDPR imposes such tight restrictions on the use of personal data that Europe will be unable to keep up with other parts of the world as they use AI to streamline their economies.
The sooner the EU can countenance reforming the GDPR, the better, and in the meantime EU policymakers should not introduce any more unnecessary restrictions on this important technology.
The GDPR imposes significant costs and massive legal risks on firms processing personal data. It is complex, and firms need to hire people with considerable expertise to ensure they comply with it, because they face heavy fines for breaking the regulation’s myriad rules.
As a result, several firms inside and outside Europe are already suspending some digital services, whether by shutting down particular features, blocking access from Europe, or pulling out of the European market altogether. For example, the Czech online platform Seznam has announced it will shut down its social network for classmates, while Gravity Interactive, a U.S. games developer, will block European users from accessing its games.
The costs are particularly high for firms using AI, because the GDPR requires them to ensure that humans can review the complex inner workings of many kinds of individual algorithmic decisions. That raises the labor costs of AI tremendously, and it undermines the point of automating complex processes in the first place.
Those costs weaken incentives for European firms to use AI and for foreign AI firms to enter the European market, diminishing choice for European consumers and weakening the competitive pressure to innovate. So the GDPR disincentivizes precisely what the European Commission says it wants to encourage.
In addition to deterring some firms from using AI at all, the GDPR will also undermine the quality of AI services that other companies offer in Europe. Beyond raising costs, requiring that humans be able to review the workings of individual algorithmic decisions also undermines the statistical accuracy of those decisions, because it forces companies to limit the complexity of their algorithmic models to a level humans can still interpret.
The irony is that while the intent of the requirement is to increase fairness, in practice, the requirement will force companies to use less accurate models when dealing with consumers in the EU, which increases the likelihood of making unfair decisions.
The United States, China, and other countries are pursuing strategies to become global leaders in AI. The EU cannot expect to compete with a regulatory framework that was out of date before it even came into force, nor can the EU hope to shape AI ethics if it leaves others to lead the technology’s development.
The GDPR took four years of tedious wrangling to finalize, so the prospect of amending it is unpopular, even with some of the regulation’s harshest critics, but that is the only long-term solution. In the meantime, the GDPR will serve as a cautionary tale for other countries of how not to regulate AI, and for the EU when it contemplates additional regulation.
If the EU persists in subjecting AI to such stringent regulation so early on, it will find itself left out of an AI race that has barely started.