How ethical do we want AI to be?

The French Algerian writer Albert Camus once wrote that “a man without ethics is a wild beast loosed upon this world.” The same could arguably be applied to an AI machine. It certainly fits the stereotypical fearmongering image of rampant human-like robots seizing control, but is this fair? The AI label is applied today to a wide variety of technologies serving a wide variety of industries and functions, and ideas of what is and isn’t ethical will vary massively. Fundamentally, ethics is a moveable feast depending on context, and that is a development issue. How can we ensure we filter out the bad while leaving enough scope for AI to develop enough personality to be useful to people, business and society?

Firstly, it’s important to understand that AI ethics is not just about being ethical. It’s business, after all, although how competitive that business will be remains to be seen, especially with so much public research money being ploughed into commercial interests. Professor Alan Winfield, an expert in AI and robotics ethics at the Bristol Robotics Laboratory, part of the University of the West of England, raised this concern last year. Interestingly, in April this year the UK government made its own announcement, claiming the UK could lead the way on ethical AI.

Lord Clement-Jones, the chairman of the select committee on AI, said that the UK “contains leading AI companies, a dynamic academic research culture, and a vigorous start-up ecosystem, as well as a host of legal, ethical, financial and linguistic strengths. We should make the most of this environment, but it is essential that ethics take centre stage in AI’s development and use.”

While the UK government may have been spending too much time reading Elon Musk’s Twitter feed, the last sentiment is perhaps correct. AI needs ethical development, but who is to say what that really is? Yes, companies probably need to work within ethical frameworks, such as the IEEE P7001 Transparency in Autonomous Systems standard, but what about the data? As machines increasingly learn to go solo, do we need to ensure the data with which they operate is clean and unbiased?
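To make the data question above concrete, one simple bias check practitioners run on training data is demographic parity: do positive outcomes occur at similar rates across groups? A minimal sketch follows; the toy loan-approval records, field names and the 0.1 threshold are illustrative assumptions, not any standard’s prescription.

```python
def positive_rate(records, group):
    """Share of records in `group` with a positive outcome."""
    members = [r for r in records if r["group"] == group]
    return sum(r["outcome"] for r in members) / len(members)

def parity_gap(records):
    """Largest gap in positive-outcome rates across all groups."""
    groups = {r["group"] for r in records}
    rates = {g: positive_rate(records, g) for g in groups}
    return max(rates.values()) - min(rates.values())

# Toy loan-approval data: group A is approved far more often than group B.
data = [
    {"group": "A", "outcome": 1}, {"group": "A", "outcome": 1},
    {"group": "A", "outcome": 1}, {"group": "A", "outcome": 0},
    {"group": "B", "outcome": 1}, {"group": "B", "outcome": 0},
    {"group": "B", "outcome": 0}, {"group": "B", "outcome": 0},
]

gap = parity_gap(data)  # A approved at 0.75, B at 0.25 -> gap 0.50
print(f"parity gap: {gap:.2f}")
if gap > 0.1:  # arbitrary illustrative threshold
    print("warning: outcomes are skewed across groups")
```

A check like this only catches one narrow kind of skew, which is rather the point of the paragraph above: “clean and unbiased” is easy to demand and hard to define.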

Marc Ambasna-Jones

Marc Ambasna-Jones is a UK-based freelance writer and media consultant and has been writing about business and technology since 1989.
