Allowing data processing without a predefined purpose will boost European businesses, write Axel Voss and Yann Padova.
Axel Voss is a German MEP and shadow rapporteur for the EPP on the Data Protection Regulation. Yann Padova is former secretary general of the French data protection authority CNIL and current commissioner of energy regulator CRE.
As negotiations on personal data protection have now entered the “trilogue” phase, we are not certain that the current draft of the regulation will enable our companies, inventors and investors to share in the benefits of big data.
To some extent, Europe has so far missed out on the Internet revolution, which is dominated by American companies. But the future belongs to connected devices and the big data revolution. Europe has remarkable assets, such as the quality of its engineers, its companies and its infrastructure, not to mention a long tradition of innovation.
The potential uses of big data are countless. Predicting behaviour or phenomena is of the utmost interest for many activities, such as monitoring energy consumption, health care, insurance, crime prevention and marketing. This huge amount of data is a gold mine for developing meaningful algorithms, offering new services and creating jobs.
Indeed, with 15 billion connected devices today, a predicted 75 billion by 2020 and a total amount of data that doubles every 24 months, our world is facing an unprecedented data deluge. And the data comes in many forms: geolocation, images, connection data…
This is what big data is about: an unprecedented volume of information that mostly flows from devices (our phones, cars, clothes) and can be analyzed by massive processors.
And this data analysis is revolutionary. Data processing looks for correlations between pieces of information that have no apparent link and were produced in entirely different contexts.
Big data also blurs the very nature of the data that is processed.
A piece of information that isn’t personal data one day, and therefore doesn’t fall within the scope of the Data Protection Regulation, can become identifiable data overnight as a result of processing and cross-referencing with other datasets.
As an example, in the US, researchers have established that 90% of people are now identifiable through their zip code, date of birth and gender. Taken separately, none of those three criteria qualifies as personal data.
However, there is one necessary condition for enabling innovation to flourish: allowing data to be processed without a pre-determined purpose.
When it comes to the collection of personal data, our current European framework and the one now being negotiated allow data collection only for a “specified and explicit purpose” and restrict secondary processing for “incompatible” purposes.
In current drafts of the Data Protection Regulation, data must be deleted when it’s no longer relevant to its initial purpose. Some negotiators want to narrow the potential secondary uses of data. By requiring data to be deleted after it’s first used, the draft regulation may restrict big data’s raw material.
Besides, how can companies comply with the old requirement of giving data subjects the eight mandatory categories of information? How would they obtain subjects’ “explicit and informed” consent in a world of connected devices, where there is usually no form to read or fill out?
If we read all the privacy notices we encounter as data subjects, we would need more than 250 hours per year (roughly 30 working days). Do we genuinely protect people’s privacy when we overwhelm them with privacy notices they no longer read?
These principles of purpose specification, notice and consent were developed in the late 1970s by our two countries, France and Germany. Their goal was to protect individuals’ fundamental rights against the rising threat of state surveillance.
France and Germany were pioneers in the data protection field.
The protection of our fundamental rights must remain at the centre of the Data Protection Regulation, because it is the foundation of our European identity and common values. That protection is paramount when public authorities collect data; when companies use data, it should be balanced against an innovation principle.
Our shared belief is that to foster innovation and economic growth while ensuring a more efficient protection of people’s right to privacy, we can’t focus exclusively on the definition of the initial purpose for data collection. Instead, we should concentrate on the use of data and on the actual consequences for people.
We should no longer satisfy ourselves with the formal requirement of obtaining data subjects’ consent. Rather, we should implement tools that efficiently protect individuals from the risks that could result from the analysis and processing of their data.
To that end, we must change our perspective and introduce a risk-based approach, one less focused on predetermined purposes and more on actual uses and their consequences.
In exchange for more flexibility in analysing and accessing data, companies should conduct robust, mandatory risk assessments of their big data projects. The riskiest among them should be supervised by independent data protection regulators, such as the CNIL in France or the Bundesdatenschutzbeauftragte in Germany.
Big data represents a technical and sociological revolution. The Data Protection Regulation must rise to that challenge by undergoing its own “Copernican” revolution, enabling us to flourish safely in this emerging new world.