A snapshot of AI's dark side, part one: warfare and bias

In part one of our snapshot of the darker side of AI, we talk to Charlotte Walker-Osborn, partner with global law firm Eversheds Sutherland, about the more controversial uses of AI and whether a truly neutral Artificial Intelligence system is possible.

There is absolutely no doubt that artificial intelligence has already driven, and will continue to drive, some of the world's most fascinating and advanced technological achievements. While AI and machine learning continue to permeate the lives of end consumers through the likes of voice-activated assistants and image recognition technologies, these tools have also proved invaluable to the enterprise in a plethora of ways.

However, while the use of AI and ML has led to some major progress, it has not been free of trouble. Notably, a few high-profile organisations have come under fire for their use of AI, with the criticism focused less on the systems themselves and more on how they were deployed and trained, and where their data was sourced.

One notable example was Amazon's recruiting algorithm, which displayed bias against female candidates: it assessed applicants using data from the resumes of previously successful hires, most of whom were men. Another consumer-facing example of an AI system gone wrong was Microsoft's Tay chatbot, which was trained using the open internet. A noble idea in principle (using the world's most plural network of connected individuals to train an AI system), but the results, somewhat predictably in hindsight, were disastrous.
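The failure mode in the Amazon case is worth spelling out: a model never needs to see a protected attribute directly, because a resume feature that merely correlates with it is enough for the bias in historical decisions to be learned. Below is a minimal, hypothetical sketch in Python (synthetic data, invented feature names, scikit-learn's LogisticRegression; nothing here reflects Amazon's actual system) showing how a classifier trained on biased past hiring labels assigns a negative weight to such a proxy feature.

```python
# Toy illustration of proxy bias in a hiring model.
# All data and feature names are invented for this sketch.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

skill = rng.normal(size=n)              # genuine qualification signal
is_female = rng.integers(0, 2, size=n)  # hidden attribute, NOT a model input
# Proxy feature correlated with gender, e.g. a "women's club" mention on a resume
womens_club = (is_female & (rng.random(n) < 0.7)).astype(float)

# Historical labels: past decisions rewarded skill but also penalised
# female applicants -- the bias we do not want the model to learn.
logit = 1.5 * skill - 2.0 * is_female
hired = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

# The model only ever sees the resume features...
X = np.column_stack([skill, womens_club])
model = LogisticRegression().fit(X, hired)

# ...yet the proxy feature picks up a clearly negative weight.
print(dict(zip(["skill", "womens_club"], model.coef_[0].round(2))))
```

The point of the sketch is that simply removing the explicit attribute from the inputs does not remove the bias; any correlated proxy inherits it, which is why scrutiny has fallen on how such systems are trained and where their data comes from.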

To take a deeper dive into these issues, we spoke with Charlotte Walker-Osborn, a partner within the commercial group of global law firm Eversheds Sutherland. Walker-Osborn is a leading expert in AI, automation and technology law, and advises UK and global corporations on the legal challenges posed by major corporate transactions at the cutting edge of technology. In part one of our snapshot of the darker side of AI, we discuss the more controversial uses of AI and whether a truly neutral Artificial Intelligence system is possible.

While technology companies like to market AI as a brilliant, faultless solution to many enterprise and consumer issues, it does have a darker side. What are some of the more concerning ways AI can be used for negative outcomes?

Application of AI to warfare is arguably one of the more 'contentious' ways AI can be utilised, and there is much talk of an AI arms race, whether by way of building up the best AI-guided missiles, semi-autonomous or fully autonomous drones, AI-powered combat systems or otherwise (often referred to as Lethal Autonomous Weapons). However, this is a highly complex area that layers the use of AI in warfare on top of already challenging questions around warfare and the technology used in it. Frankly, if there are countries who may apply AI for 'evil' in warfare, it is considered foolish (by some) not to apply it for defence and for 'good'. By way of example, AI is highly utilised in cyber defence, as well as by the protagonists of cyber warfare.

If AI were not applied to defence, many more attacks would succeed, and the same will be true of physical warfare. This is a highly debated area, and a number of 'principles' have already been signed up to by many countries, positively affirming the need for careful consideration and agreement in this space. For example, the Asilomar AI Principles (which have thousands of high-profile signatories) clearly set out that an AI arms race is to be "avoided". Clearly, there are vast opportunities for economic gain for the organisations and countries who build out this technology, so politically it is not simple.
