A snapshot of AI's dark side, part one: warfare and bias

There is no doubt that artificial intelligence has already driven, and will continue to drive, some of the world's most fascinating and advanced technological achievements. While AI and machine learning continue to permeate the lives of consumers through voice-activated assistants and image recognition technologies, these tools have also become important to the enterprise in a wide variety of ways.

However, while the use of AI and ML has led to some major progress, it has not been free of trouble. Notably, several high-profile organisations have come under fire for their use of AI, with the criticism focused less on the systems themselves and more on how they are deployed and trained, as well as where their data is sourced.

One notable example was Amazon's recruiting algorithm, which displayed bias against female candidates: it assessed applicants using data from the resumes of previously successful hires, most of whom were men. Another consumer-facing example of an AI system gone wrong was Microsoft's Tay chatbot, which was trained using the open internet. The idea was noble in premise (using the world's most plural network of connected individuals to train an AI system), but the results, somewhat predictably in hindsight, were disastrous.

To take a deeper dive into these issues, we spoke with Charlotte Walker-Osborn, a partner in the commercial group of global law firm Eversheds Sutherland. Walker-Osborn is a leading expert in AI, automation and technology law, and advises UK and global corporations on the legal challenges posed by major corporate transactions at the cutting edge of technology. In part one of our snapshot of the darker side of AI, we discuss the more controversial uses of AI and whether a truly neutral artificial intelligence system is possible.

While technology companies like to market AI as a brilliant, faultless solution to many enterprise and consumer issues, it does have a darker side. What are some of the more concerning ways AI can be used for negative outcomes?

The application of AI to warfare is arguably one of the more ‘contentious’ ways AI can be utilised, and there is much talk of an AI arms race, whether by way of building up the best AI-guided missiles, semi-autonomous or fully autonomous drones, AI-powered combat systems or otherwise (often referred to as Lethal Autonomous Weapons). However, this is a highly complex area that layers the use of AI in warfare on top of already challenging questions around warfare and technology in warfare. Frankly, if there are countries who may apply AI for ‘evil’ in terms of warfare, it is considered foolish (by some) not to apply it for defence and for ‘good’. By way of example, AI is highly utilised in cyber defence as well as by the protagonists of cyber warfare.

Pat Martlew

Patrick Martlew is a technology enthusiast and editorial guru who works the digital enterprise beat in London. After making his tech-writing debut in Sydney, he has made his way to the UK, where he covers the very latest trends and provides top-grade expert analysis.
