How ethical do we want AI to be?

The French Algerian writer Albert Camus once wrote that “a man without ethics is a wild beast loosed upon this world.” The same could undoubtedly be said of an AI machine. It certainly fits the stereotypical, fearmongering image of rampant human-like robots seizing control, but is this fair? The AI label is being applied to a wide variety of technologies today, serving a wide variety of industries and functions, and ideas of what is and isn’t ethical will vary massively between them. Fundamentally, ethics is a moveable feast depending on context, and that is a development issue. How can we ensure we filter out the bad while leaving AI enough scope to develop enough personality to be useful to people, business and society?

Firstly, it’s important to understand that AI ethics is not just about being ethical. It is business, after all, although how competitive that business will be remains to be seen, especially as so much public research money is being ploughed into commercial interests. Professor Alan Winfield, an expert in AI and robotics ethics at the Bristol Robotics Laboratory, part of the University of the West of England, raised this concern last year. Interestingly, in April this year the UK government made its own announcement, claiming the UK could lead the way on ethical AI.

Lord Clement-Jones, the chairman of the select committee on AI, said that the UK “contains leading AI companies, a dynamic academic research culture, and a vigorous start-up ecosystem, as well as a host of legal, ethical, financial and linguistic strengths. We should make the most of this environment, but it is essential that ethics take centre stage in AI’s development and use.”

While the UK government may have been spending too much time reading Elon Musk’s Twitter feed, the last sentiment is surely correct. AI needs ethical development, but who is to say what that really means? Yes, companies probably need to work within ethical frameworks – such as the IEEE P7001 Transparency of Autonomous Systems standard – but what about the data? As machines increasingly learn to go solo, don’t we need to ensure the data with which they operate is clean and unbiased?

Marc Ambasna-Jones

Marc Ambasna-Jones is a UK-based freelance writer and media consultant and has been writing about business and technology since 1989.
