How ethical do we want AI to be?

Data bias is not black and white, but setting ethical parameters for AI now is crucial

The French-Algerian writer Albert Camus once wrote that “a man without ethics is a wild beast loosed upon this world.” The same could undoubtedly be said of an AI machine. It certainly fits the stereotypical, fearmongering image of rampant human-like robots taking control, but is this fair? The AI label is being applied to a wide variety of technologies today, serving a wide variety of industries and functions, and ideas of what is and isn’t ethical will vary massively between them. Fundamentally, ethics is a moveable feast depending on context, and that is a development issue. How can we filter out the bad while leaving AI enough scope to develop the personality it needs to be useful to people, business and society?

Firstly, it’s important to understand that AI ethics is not just about being ethical. It’s business, after all, although how competitive that business will be remains to be seen, especially as so much public research money is being ploughed into commercial interests. Professor Alan Winfield, an expert in AI and robotics ethics at the Bristol Robotics Laboratory, part of the University of the West of England, raised this concern last year. Interestingly, in April this year the UK government made its own announcement, claiming the UK could lead the way on ethical AI.

Lord Clement-Jones, the chairman of the select committee on AI, said that the UK “contains leading AI companies, a dynamic academic research culture, and a vigorous start-up ecosystem, as well as a host of legal, ethical, financial and linguistic strengths. We should make the most of this environment, but it is essential that ethics take centre stage in AI’s development and use.”

While the UK government may have been spending too much time reading Elon Musk’s Twitter feed, the last sentiment is perhaps correct. AI needs ethical development, but who is to say what that really is? Yes, companies probably need to work within ethical frameworks, such as IEEE P7001, the Transparency of Autonomous Systems standard, but what about the data? As machines increasingly learn to go solo, do we need to ensure the data they operate on is clean and unbiased?
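To make that last question concrete, here is a minimal sketch of one simple bias check, assuming a labelled dataset held in a pandas DataFrame. The column names (“gender”, “hired”) and the data are hypothetical placeholders, and the disparate impact ratio it computes is just one of many possible fairness measures, not a method endorsed by anyone quoted in this article.

# Minimal, illustrative sketch: measure the "disparate impact" ratio
# between groups in a labelled dataset. Column names are hypothetical.
import pandas as pd

def disparate_impact(df: pd.DataFrame, group_col: str, outcome_col: str) -> float:
    # Positive-outcome rate for each group, e.g. the share of candidates hired.
    rates = df.groupby(group_col)[outcome_col].mean()
    # Ratio of the lowest to the highest rate: 1.0 means parity; values below
    # roughly 0.8 are often treated as a red flag (the "four-fifths rule").
    return rates.min() / rates.max()

# Toy data, purely for illustration.
data = pd.DataFrame({
    "gender": ["f", "f", "f", "f", "m", "m", "m", "m"],
    "hired":  [1,   0,   0,   1,   1,   1,   1,   0],
})

print(f"Disparate impact ratio: {disparate_impact(data, 'gender', 'hired'):.2f}")

A check like this only flags the crudest imbalances, of course; real debiasing work also has to ask whether the labels themselves encode historical prejudice.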

Debiasing data?

Earlier this year, Dr Maria Velez-Rojas and Dr Victor Muntés-Mulero, both computer science researchers working on innovation at CA Technologies, gave a talk at CA’s HQ in Santa Clara as part of the company’s Built to Change Summit. Velez-Rojas spoke about collaborative robots, or “cobots”, and said that when it comes to helping cobots understand what humans care about, “humans need to understand the consequences of their requests.”
