Juliet Lodge is Professor Emerita at the University of Leeds; Member of the Privacy Expert Group, Biometrics Institute; and participant in the initiative IDENTITY | Talk in the Tower
The questions of who or what is in control of our somewhat misnamed digital ‘identities’, and how we can be reasonably confident that we are free and safe to trust them, confront us all. The fragility of trust and the need to sustain it have been amply illustrated by disagreements between the EU, the USA and others, not simply over spying allegations, the Transatlantic Trade and Investment Partnership talks and the EU’s draft privacy regulation, but potentially in all commercial and private ventures that rely on an exchange of data and information. That includes all manner of innovations on the horizon.
The case for the EU having its own independent icloud lies in the competitive value of the opportunity it would offer Europe: to set global benchmark privacy and security standards that protect citizens and businesses alike, by combining robust technical security (possibly in conjunction with EU privacy seals, assurance and certification mechanisms, as required by the EU’s Draft Regulation) with EU values and a commitment to implementing them in accordance with the law and shared ethical principles.
The power of machine-to-machine communication and robot mediation to steer our lives and identities is insufficiently understood. Even less well understood is the potential impact on our ability to control our own identities. We take being connected 24/7 for granted and expect machines and our ‘smart’ devices, clothes and environment to anticipate our every need and whim. But how can we be sure that the provider of information, for instance, is authoritative, credible, open, transparent, genuine and objective? Can an invisible service provider, possibly one outside the EU, be trusted to secure personal information against intrusion, loss, theft, privacy breaches, degradation and misuse?
Somewhat paradoxically, governments and the private sector simultaneously commend privacy and data protection legislation while extolling the assumed commercial virtues of big and micro data analytics, anticipating growth in employment opportunities, enterprise and prosperity. However, this seems divorced from the wider context: a surfeit of data and information sloshing around beyond the informed control of governments or citizens. Life cannot simply be boiled down to how much can be mined from or sold to an individual. The ends – more information, cut, spliced, re-configured, disproportionately accumulated, linked, analysed and re-sold – do not necessarily justify the means: exploiting all the data and information out there.
Codes of ethics and societal norms and values, enforced and upheld by legislation, respect for the rule of law and liberal democratic practice within clearly defined territorial jurisdictions, are not necessarily shared or enforced outside those jurisdictions.
Given growing hyper-connectivity, the EU must take a genuinely forward-looking view of the probable impact on privacy and data protection, and on what it means to be human. Smart ambient applications using all manner of single or multi-biometric authentication inevitably mean that daily transactions and activities by citizens, businesses and governments, private and public bodies alike, can be tracked, linked, repackaged, spliced, mined, commodified, sold and commercialised without the provider’s explicit, informed, case-by-case consent. If the public sees a surveillance society on the horizon, cynicism grows and trust deteriorates, not only towards government and authority but also towards the providers of services and technologies. The erosion of private space and the commercialisation of both publicly collected data and our every waking moment are not without consequences, however unintended.
Innovations in e-commerce, e-education, e-administration and e-health are applauded. Apps that monitor nutrition, leisure, travel, purchasing and lifestyle while ‘advising’ us of something that may ‘interest’ us (depending on the profile we set up) can save us the bother of having to search or think consciously for ourselves, but they also potentially facilitate future denial of access to services.
At one level, the convenience gain has its advantages: mobile alerts on traffic delays are usually welcome to travellers, and reminders to drink or take medication may be useful to some patients. Yet the vulnerable, infirm, disabled or excluded people who may lack the capacity for self-determination, autonomy or equality, and who might benefit most from these innovations, are usually those who lack access to them, who do not know about them, who cannot afford to buy and maintain them, or who cannot necessarily make informed decisions about consenting to how their own data is recorded (even on an implant), handled, shared or linked.
The power of information also lies in its power to disempower us arbitrarily, by means of digital tokens associated with a user claiming an entitlement to use them. The semantics of ‘identity’ remain confused, and yet identity management rests on trust in the credibility of the added certainty that the token supposedly confirms.
Regardless of the hype, there is little point in having discrete bits of data that cannot be linked to provide information that can be probed, for purposes that may be as yet indeterminate and only tangentially, if at all, related to the original purpose for which the data were provided. Generating spin to justify linkage is easy. Protecting the citizen, something that is at the heart of much current EU thinking, is far more difficult.
The future of identity, and of innovative and user-centric identity management tools to build trust and confidence, has to be looked at holistically by legislators and private service providers, as the participants of the conference “The Future of Identity” in Brussels confirmed last week (a debate initiated by the dialogue platform IDENTITY Talk in the Tower). The overriding concern centres on trust: trusted secure technologies, trusted data handling and trusted relationships. These may be managed automatically (by machines and robots), by good practice, and/or be constitutionally and legally regulated, rendered accountable and transparent, and implemented.
The piecemeal approaches to public-private partnerships, stakeholder dialogue, out- and in-sourcing, privacy seal design, training and certification schemes, audit trails, privacy impact assessment tools, privacy by design, security metrics, the EU icloud and so on all have their place but, in this robot-influenced, hyper-connected world, they remain insufficiently connected to what is happening in the real world.