A snapshot of AI's dark side, part two: privacy nightmares

In part two of our snapshot of the darker side of AI, we talk to Charlotte Walker-Osborn, partner at global law firm Eversheds Sutherland, about AI's impact on data privacy.

While AI has unquestionably delivered significant technological breakthroughs for consumers and businesses alike, it also has a darker side that can cause serious problems.

The use of citizens' data to feed algorithms, and technologies such as facial recognition, are the subject of heated debate amongst global lawmakers. Facial recognition in particular has been prominent in many countries, including the United States, where some back its deployment on national security grounds while others have called for bans.

To take a deeper dive into the effect of Artificial Intelligence and Machine Learning on data privacy, we spoke with Charlotte Walker-Osborn, a partner in the commercial group of global law firm Eversheds Sutherland. Walker-Osborn is a leading expert in AI, automation and technology law, and advises UK and global corporations on the legal challenges posed by major corporate transactions at the cutting edge of technology. In part two of our snapshot of the darker side of AI, we discuss concerns over AI's impact on privacy and offer best-practice tips to organisations building AI and ML systems.

Given the data-hungry nature of AI and ML, there are many concerns over their impact on privacy. Is AI likely to be inherently bad for privacy as it develops? Why/Why not?

This is a tricky area, and I would say that it depends.

I like examples, so let's take a positive one. Working closely with the pharma sector in this space as I do, I have continually been impressed by how seriously the sector takes the treatment of patient data, both generally and when utilising AI. In many cases, the data is de-identified or pseudonymised (in basic terms, anonymised), and careful steps are taken to ensure it cannot be re-identified. In such an example, the data is utilised for the greater good of society (and of course the company), but privacy should not be compromised if the right steps have been taken.
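By way of illustration (this is our own sketch, not guidance from the interview), one common approach to pseudonymisation is to replace direct identifiers with keyed hashes: records remain linkable for analysis, while the secret key that would permit re-identification is stored separately. A minimal Python example, with hypothetical field names:

```python
import hmac
import hashlib

# Secret key held separately from the dataset (e.g. in a key vault).
# If this key is destroyed, pseudonyms can no longer be linked back
# to the original identifiers.
SECRET_KEY = b"replace-with-a-securely-stored-key"

def pseudonymise(identifier: str) -> str:
    """Replace a direct identifier (name, patient number, email) with
    a stable keyed hash, so records stay linkable across the dataset
    without exposing the identity itself."""
    return hmac.new(SECRET_KEY, identifier.encode("utf-8"),
                    hashlib.sha256).hexdigest()

# Hypothetical patient record for illustration only
record = {"patient_id": "NHS-123-456", "diagnosis": "asthma"}
record["patient_id"] = pseudonymise(record["patient_id"])
print(record)  # {'patient_id': '3f2c…', 'diagnosis': 'asthma'}
```

Holding, rotating, or ultimately destroying that key is what gives the "careful steps to ensure it cannot be re-identified" real force: without the key, the pseudonyms cannot be mapped back to real individuals.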

Of course, there are also negative examples. Wherever there is a lot of data (which is often the case when AI is to be applied), there is potential for misuse of that data, particularly in countries that lack strong laws governing how individuals' personal data should be treated. And wherever data is held on IT infrastructure, there is the potential for cyber breaches that leak personal data and compromise privacy.
