This article is part of our special report Enabling the Next Technology Revolution.
SPECIAL REPORT / As the Barcelona Mobile World Congress ends, EurActiv explores the ethical aspects of the internet and the move towards ever-more connected objects and devices. Per Strömbäck, editor of Netopia, says machines can and should be adjusted to fit the moral values of individuals and the wider society in which they operate.
Per Strömbäck is the editor of Netopia, an organisation working to develop a free and open internet, which recently published a report on ethics in the digital world. He was answering e-mailed questions from EurActiv’s editor and publisher, Frédéric Simon.
Can individuals maintain control of their privacy in a world where personal data circulates freely among machines? Or is there necessarily a trade-off between the two?
Privacy is already a challenge with today’s technology and services. Most of us voluntarily trade privacy for free services like search and e-mail – and involuntarily we are monitored by government agencies as Edward Snowden’s leaks have shown.
With machines increasingly taking over communication, privacy will be even more of a challenge. One of the scholars interviewed in the report – Dr Adrian Cheok, Professor of Pervasive Computing at City University London – believes most of us will choose convenience and just “go transparent”. But transparency is a two-way street, and those who monitor us today are not at all transparent to us, but rather covert or at least faceless.
Health is a good example: there is a lot of technology developing in the health sector now, and the potential benefits are great. With an aging population, many countries will have to rely on technology rather than manual labour to care for the elderly. If you can carry sensors like blood pressure monitors on your body, that will save you many visits to the doctor’s office for standard check-ups. Technology can free up time for health professionals to focus on more complicated cases.
But the flipside, here too, is privacy. With all that health data collected, you as a patient are vulnerable to leaks, viruses and data mining – all of which are breaches of patient-doctor confidentiality. Insurance companies will want that information in order to classify patients by individual risk factors, rather than by templates as today. Some people may not be able to get insurance at all.
There are also hacking risks – one illustration is the remote pacemaker hack, which is a nightmare scenario. Plus, who owns your information: you? The hospital? The online services that collect it? We talk about some of these things today, but with new technology the implications will go far beyond our current debates.
What rules should be put in place to govern such data? Should there be different degrees of data sensitivity for example with private data related to people’s health, finance, or sexual orientation?
The report has some recommendations based on the findings. One is “device sanctity” – as our devices become more and more personalised, it is important that they are loyal to us rather than third parties who want to access the information on them.
Another is the introduction of a regulatory body, similar to health inspectors or the US Food and Drug Administration, which has the dual task of informing the public and regulating the technology. These are some of the recommendations from the report authors. From my perspective, I’d like to add that it is important that government does not take over liability from the tech companies; the best solution is for legal compliance to be built into the technology from the outset.
Is ‘net neutrality’ achievable or even desirable?
Network neutrality is a dicey term. Some people use it to say network operators should be left to their own devices. Others say it means no discrimination of traffic, except for legal or network-integrity reasons.
In these days of traffic shaping and protocol discrimination and prioritisation, network neutrality looks like a mirage. I think we should start from the opposite end: rather than thinking about freedom as the absence of regulation, let’s take human rights as the point of departure. Society has built up institutions to protect these rights in physical space. Let’s use the technologies and methods currently applied by telcos and intermediaries to optimise their profits, but for the greater purpose of protecting human rights and the rule of law. That is the difference between freedom and anarchy.
Can the ‘right to be forgotten’ be enforced in practice?
It’s software, so anything is possible! It’s all about how it is designed. Oxford professor Viktor Mayer-Schönberger, who was interviewed for this report, suggests an expiry date on personal information. There is no reason that can’t be built into the code, just like a Snapchat message that disappears after a few seconds. Sure, it can be hacked, but the quest for Platonic perfection should not stand in the way of pragmatic solutions.
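The expiry-date idea Mayer-Schönberger describes can be sketched in a few lines of code. This is a minimal illustration only, not anything from the report: the names (`PersonalRecord`, `ExpiringStore`) and the in-memory design are assumptions made purely for the example – real systems would need to enforce deletion at the storage layer too.

```python
import time
from dataclasses import dataclass, field


@dataclass
class PersonalRecord:
    """A piece of personal data that carries its own expiry date."""
    owner: str
    payload: str
    ttl_seconds: float                       # lifetime chosen when the data is created
    created_at: float = field(default_factory=time.time)

    @property
    def expired(self) -> bool:
        return time.time() - self.created_at >= self.ttl_seconds


class ExpiringStore:
    """In-memory store that purges records once their expiry date passes."""

    def __init__(self):
        self._records = {}

    def put(self, key: str, record: PersonalRecord) -> None:
        self._records[key] = record

    def get(self, key: str):
        record = self._records.get(key)
        if record is None:
            return None
        if record.expired:
            del self._records[key]           # actually delete, don't just hide
            return None
        return record
```

The design choice that matters here is that expiry is a property of the data itself, set at creation time, rather than a policy bolted on afterwards – which is exactly the "built into the code" point made above.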
Can or should a system of ethics be imposed on computer software and the internet of things itself?
Yes, it needs to be built into the code, not added as an afterthought. Think about how stock market trading platforms are set up differently depending on local legislation. But the real challenge is: whose ethics?
There is no pre-defined set of ethics; rather, we humans spend a lot of time debating moral issues – with ourselves and with others. That conversation has been going on since the dawn of civilisation. So the system must be able to adjust to changing circumstances and to the individual user’s morals, as well as to the norms and laws of the surrounding society.
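The point about adjustable, user-specific ethics can be made concrete with a small sketch. Everything here is hypothetical and invented for illustration – a rule-based design where a device checks each proposed action against a swappable set of rules, so the same software can follow different norms for different users or jurisdictions:

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

# A rule inspects a proposed action (described as a dict) and says allow/deny.
Rule = Callable[[Dict[str, str]], bool]


@dataclass
class EthicsPolicy:
    """A swappable bundle of rules: change the rules, not the device."""
    rules: List[Rule]

    def permits(self, action: Dict[str, str]) -> bool:
        # An action is allowed only if every active rule allows it.
        return all(rule(action) for rule in self.rules)


def no_third_party_health(action: Dict[str, str]) -> bool:
    """Example rule: health data may never leave the owner's hands."""
    return not (action.get("category") == "health"
                and action.get("recipient") != "owner")


# A user who forbids sharing health data installs that rule; another user
# might install a different set entirely.
strict = EthicsPolicy(rules=[no_third_party_health])
```

The rules live in configuration rather than in the device's fixed logic, which is one way a system could "adjust to changing circumstances" as norms and laws evolve.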
The internet being global in essence, are nation-states fit to deal with regulation on such matters? What place do you see for regional organisations like the EU or global organisations like the UN?
Yes, this is an interesting challenge. But to a large extent the internet is run by global organisations, not governments but private companies that dominate certain niches. So that’s where governance can enter the game.
I believe the EU is the only organisation that has the power to do it. The UN is too fragmented, but the EU has a real legal system, proper institutions, a track record of legal action against private companies when appropriate and most importantly a market that is too big for any global service provider to ignore. It is up to the European Union to take the lead in establishing human rights and democracy online.
- Netopia: New report: Can we make the digital world ethical? (21 Feb. 2014)