Data Privacy and Security

The data deluge poses ethical conundrums for privacy

June was a good month for Data Protection in Europe, what with the approval of the draft EC law. With luck it may be ratified by the end of this year, to much applause from the ramparts of Brussels and relief from the general populace.

We will all be able to sleep easier in our beds in the knowledge that our privacy rights are protected. Armed with this sense of comfort, some will confirm our newly established sleep patterns by checking our Fitbits or Jawbones, uploading data via mobile apps to servers somewhere in the cloud. 

Indeed, in our delight, we’ll share our stats via social media — probably using a Facebook login to avoid all that rigmarole of remembering usernames and passwords. At which point, of course, we have entirely given over any rights we might have had over the data, or the conclusions that could be drawn from it.

Of course, what is there to be read from sleep patterns? It’s not as if we’re talking about drinking habits or driving skills, is it?

Don’t get me wrong: the law is very good as far as it goes. Its associated handbook is well thought through, based on real-world cases tested in court that go to the nub of issues such as balancing protection with personal privacy. Consider the suicide attempt thwarted through the use of CCTV (a good thing), only for the footage to be released to the media (a bad thing).

Indeed, the law is pretty comprehensive, with a fair amount of recourse should things go wrong - the right to be forgotten, for example, i.e. the right to have data removed from particular databases. So, what’s wrong with it?

This is, absolutely, the information revolution, and as such nobody has much of an idea what is going to come next. In such a fast-changing environment, framing the broader issues of data protection is hugely complicated; the complicity of data subjects, their friends and colleagues is only one aspect of our current journey into the unknown.

But do so we must. Where to start? Perhaps the greatest challenge is aggregation: the ability to draw together data from multiple sources and reach a conclusion that none of them could support alone. Numerous demonstrations exist of how seemingly innocuous data sets have been used to identify specific individuals.
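The mechanics are disarmingly simple. As a sketch (the data sets, field names and values here are entirely invented for illustration), consider how two individually “anonymous” collections can be joined on shared quasi-identifiers such as postcode and date of birth:

```python
# Hypothetical illustration: neither data set names anyone directly,
# yet joining them on shared quasi-identifiers singles people out.

# An "anonymised" data set: no names, just attributes.
health_records = [
    {"postcode": "SW1A 1AA", "birth_date": "1970-03-14", "condition": "insomnia"},
    {"postcode": "M1 2AB",   "birth_date": "1985-07-02", "condition": "asthma"},
]

# A public data set (an electoral roll, say) that does carry names.
public_roll = [
    {"name": "A. Resident",  "postcode": "SW1A 1AA", "birth_date": "1970-03-14"},
    {"name": "B. Neighbour", "postcode": "M1 2AB",   "birth_date": "1985-07-02"},
]

def reidentify(records, roll):
    """Join two data sets on (postcode, birth_date) quasi-identifiers."""
    index = {(p["postcode"], p["birth_date"]): p["name"] for p in roll}
    matches = []
    for r in records:
        key = (r["postcode"], r["birth_date"])
        if key in index:
            matches.append({"name": index[key], **r})
    return matches

matches = reidentify(health_records, public_roll)
for m in matches:
    print(m["name"], "->", m["condition"])
```

Neither data set breaks any rules on its own; it is the join that does the damage, which is exactly why per-database protections struggle here.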

But even this doesn’t really tell the whole story, and neither could it. We are accumulating so much information, about so many topics, and throwing none of it away, that the issue becomes less and less about our own digital footprints, however carelessly left. Looming without shape or form — yet — are the digital shadows cast by the analysis of such vast pools of data.

Profiling and other analysis techniques are being used by marketers and governments, as well as in health, economic and demographic research. The point is we don’t yet know what insights these may bring, nor whether they might prove fantastically good for the human race or downright scary.

Examples are difficult to pin down — this is a journey into the unknown, after all — but in essence they reflect the question, “What would you do if you found you only had five months to live?” In this context, a more important question would be: what would your insurer do? Or your housing association? Or your travel agent?

The Act does provide for cases with legally negative ramifications (which makes sense; it’s a legal document), but it doesn’t take into account situations that operate within existing laws yet nonetheless erode personal rights. A seemingly innocuous data set might be quite revelatory — we know that soil data can be used as an indicator of vine disease, for example. But what if it revealed your smoking habits?

While you might be able to ask for your own data to be removed from a data set, you couldn’t ask the same about data relating to the soil in the field next to your garden. This is the real danger caused by aggregation - that it is possible to operate entirely in the shadows cast by the context of human behaviour, without treading on the toes of anyone’s ‘personal’ information. 

Equally, the draft law is structured on the basis of an exclusionary, “if in doubt, take it out” model — this doesn’t resolve the potential for prejudice caused by the absence of a necessary piece of data, or even an entire data set. We may need a “right to be remembered” in some cases, with an inclusive response to an inaccurate ‘insight’. 

I am wary of appearing like Chicken Licken here — I don’t believe the sky is going to fall in, and I don’t want to stand in the way of innovation. However, I do believe that our current push to create larger and larger data sets will have consequences, both better and worse, and data protection is only one of the tools we will need in the legal tool shed.

One such tool is an increasing requirement for metadata. It should not be enough to know that I was moving at 100 miles per hour, having consumed five units of alcohol, if in fact I was on a train rather than in a car. A little extra contextual information is vital. As the number of sensors around us flourishes, they should be fingerprinting their own data so that it can be traced back to its source.

Data needs to know its own provenance and, if it cannot, it should potentially be discarded. This could be considered the missing eighth principle of Privacy by Design, which as it stands implies that designers can’t be held responsible for subsequent use of data.
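What might such self-fingerprinting look like in practice? A minimal sketch, assuming a sensor that holds a per-device secret key and signs each reading with an HMAC over its payload, source identifier and timestamp (every name and value here is illustrative, not a real device API):

```python
import hashlib
import hmac
import json

SENSOR_KEY = b"per-device secret"  # illustrative; provisioned at manufacture

def fingerprint_reading(source_id: str, timestamp: str, payload: dict) -> dict:
    """Wrap a raw reading in provenance metadata plus an HMAC fingerprint."""
    record = {"source": source_id, "timestamp": timestamp, "payload": payload}
    canonical = json.dumps(record, sort_keys=True).encode()
    record["fingerprint"] = hmac.new(SENSOR_KEY, canonical, hashlib.sha256).hexdigest()
    return record

def verify_reading(record: dict) -> bool:
    """Accept data only if its provenance can be demonstrated."""
    claimed = record.get("fingerprint")
    unsigned = {k: v for k, v in record.items() if k != "fingerprint"}
    canonical = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(SENSOR_KEY, canonical, hashlib.sha256).hexdigest()
    return claimed is not None and hmac.compare_digest(claimed, expected)

reading = fingerprint_reading("train-carriage-7", "2013-07-01T09:30:00Z",
                              {"speed_mph": 100, "mode": "rail"})
assert verify_reading(reading)          # provenance intact: keep the data
reading["payload"]["mode"] = "car"      # strip or alter the context...
assert not verify_reading(reading)      # ...and the data no longer checks out
```

The point of the sketch is the discard rule: the 100 mph reading survives only while its “on a train” context travels with it and verifies.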

A second tool is, to coin a horrible yet fitting phrase, data-driven legal agility. The Information Age is in a brainstorming stage, as businesses try to combine data and services in new and interesting ways and see what insights emerge. The business mantra is “be agile” — as Edison once noted, it’s not the 10,000 failures that matter, it’s the one success. 

That one flash of brilliance might create a hitherto unknown, completely legal, public, non-specific, yet damaging stereotype, such as “cat owners are dangerous drivers”. Once such an insight has been discovered the damage may already be done. As we become better at data analysis such micro-prejudicial examples will become the norm, rather than the exception.  

As a result, if businesses need to recruit data scientists, so do our judiciaries and our lawmakers. Our legal systems need to operate in as agile a manner as our businesses and startups, quickly considering the consequences of the retrospective application of an unexpected discovery. 

Perhaps the biggest beef I have about the data protection law is that it still treats data in a one-dot-zero way — it is the perfect protection against the challenges we all faced 10 years ago. Over the coming decades however, we will discover things about ourselves and our environments that will beggar belief, and which will have an unimaginably profound impact on our existence. 

Like water, data will engulf everything that we do - it cannot be held back. Like fire, it will spread uncontrollably, however much we penalise those who drop the occasional lighted match. Like the air on a cold day, we will breathe it, and it will retain an image of our breath which can be captured, with or without our knowledge. 

Against this background we have some fundamental questions to consider — accountability and responsibility, exploitation and recourse, personal and public protection. However, too many elements of existing law are based on a balance of past probabilities and, in the absence of hard data, an underlying acceptance as to what constitutes right and wrong.

This model is crumbling to dust in front of our eyes. The analysis threshold is lowering, opening the door to a data economy that trades in shadows, and which will continue to grow. Protecting data is not simply “not enough” — in a world where anything can be known about anyone and anything, we need to shift attention away from the data itself and towards the implications of living in an age of transparency.

Jon Collins

Jon Collins is an analyst and principal advisor at Inter Orbis. He has over 25 years of experience in the tech sector, having worked as an IT manager, software consultant, project manager and training manager, among other roles. Jon’s published work covers security, governance and project management, but also includes books on music, including works on Rush, Mike Oldfield and Marillion.
