How ethical do we want AI to be?

Data bias is not black and white, but setting ethical parameters for AI now is crucial

The French-Algerian writer Albert Camus once wrote that “a man without ethics is a wild beast loosed upon this world.” The same could undoubtedly be said of an AI machine. It certainly fits the stereotypical fearmongering image of rampant human-like robots taking control, but is this fair? The AI label is applied to a wide variety of technologies today, serving a wide variety of industries and functions, and ideas of what is and isn’t ethical will vary massively between them. Fundamentally, ethics is a moveable feast depending on context, and that is a development issue. How can we filter out the bad stuff while leaving enough scope for AI to develop enough personality to be useful to people, business and society?

Firstly, it’s important to understand that AI ethics is not just about being ethical. It’s business, after all, although how competitive that business will be remains to be seen, especially as so much public research money is being ploughed into commercial interests. Professor Alan Winfield, an expert in AI and robotics ethics at the Bristol Robotics Laboratory, part of the University of the West of England, raised this concern last year. Interestingly, in April this year, the UK government made its own announcement, claiming the UK could lead the way on ethical AI.

Lord Clement-Jones, the chairman of the select committee on AI, said that the UK “contains leading AI companies, a dynamic academic research culture, and a vigorous start-up ecosystem, as well as a host of legal, ethical, financial and linguistic strengths. We should make the most of this environment, but it is essential that ethics take centre stage in AI’s development and use.”

While the UK government may have been spending too much time reading Elon Musk’s Twitter feed, the last sentiment is perhaps correct. AI needs ethical development, but who is to say what that really is? Yes, companies probably need to work within ethical frameworks – such as IEEE P7001, the draft standard on transparency of autonomous systems – but what about the data? As machines increasingly learn to go solo, do we need to ensure the data with which they operate is clean and unbiased?

Debiasing data?

Earlier this year, Dr Maria Velez-Rojas and Dr Victor Muntés-Mulero, both computer science researchers working on innovation at CA Technologies, gave a talk at CA’s HQ in Santa Clara as part of the company’s Built to Change Summit. Velez-Rojas spoke about collaborative robots, or “cobots”, and said that when it comes to helping cobots understand what humans care about, “humans need to understand the consequences of their requests.”

Velez-Rojas was exploring ways of using machine learning to determine whether data and sensors are trustworthy, and pointed to instances where sensor miscalibrations and faulty installations have led to problems.

“Data errors can have large consequences,” she said, referring to Air France Flight 447, which crashed on 1 June 2009 with all three external airspeed sensors giving different readings. It’s an extreme example, but data errors of all kinds happen. Could an AI have corrected this problem automatically?
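The kind of cross-check an automated system might apply to redundant sensors can be sketched simply: take the median of the readings and flag any sensor that diverges from it. This is only an illustration of the idea, not how any avionics system actually works; the function name and threshold here are invented for the example.

```python
from statistics import median

def reconcile_readings(readings, max_divergence):
    """Cross-check redundant sensor readings: return the median as the
    trusted value, plus the indices of sensors that diverge from it
    by more than max_divergence."""
    trusted = median(readings)
    outliers = [i for i, r in enumerate(readings)
                if abs(r - trusted) > max_divergence]
    return trusted, outliers

# Three airspeed sensors, one clearly faulty (figures illustrative):
value, bad = reconcile_readings([272.0, 269.5, 61.0], max_divergence=15.0)
# value is 269.5; sensor index 2 is flagged as an outlier
```

Even this toy version shows the limits of automation: with only three sensors, a majority vote cannot tell which reading is right if two fail the same way, which is why the social and physical context of the data still matters.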

For Muntés-Mulero the question was more about data integrity, bias and trying to understand discrimination.

“It’s like an onion,” he said. “At the core you have bias and then layers of complexity. What we are trying to understand is how an AI decision making process can affect discrimination towards different groups of people. It’s difficult to get rid of the bias from the input data and then it’s difficult to make sure the algorithm is not introducing bias. There’s a problem in how you define the fairness function – what does it mean and what is a fair algorithm? There’s a lot of contradiction.”

Muntés-Mulero believes that localization in AI “matters” and bias is inevitable. “Even the most skilled data scientists must make subjective choices that have a significant impact on their results,” he said.

Very true. It’s a view supported by Georgios Grigoriadis, managing director of Baresquare, a data analytics business that uses AI.

“Any data set produced by humans contains at least some bias, and in the same way it may be impossible to remove all bias from the AI we build,” says Grigoriadis. “But even if it were possible it would be undesirable to eliminate all bias completely. On some level, we all actually want AI to deliver results that fit into our preconceived ideas. The key is making sure an AI meets our expectations (commonly known as ‘AI alignment’).”

A good example of this is in business. Grigoriadis suggests that a family-owned company could very reasonably want an AI business advisor to help it plan for the long term, avoiding problems a generation or two ahead, while a publicly traded company may prefer to concentrate on the short term to reflect the interests of its shareholders. Both are legitimate positions, but both would require AI with different alignments.

That’s a challenge. The variables are huge, and as Harry Collins, a research professor in the school of social sciences at Cardiff University, says in his new book Artifictional Intelligence: Against Humanity’s Surrender to Computers, “the big danger facing us is not the Singularity; it is failing to notice computers’ deficiencies when it comes to appreciating social context and treating all consequent mistakes as our fault.”

Collins says you only have to look at how something as simple as a spellchecker can still get things wrong to realize how challenging human context is for computers to handle. This, of course, won’t stop the relentless surge in AI-related business developments, and neither should it. We need to welcome continued innovation, although at some point, as Grigoriadis suggests, it should be checked, probably by government-enforced ethical regulation.

“AI is like a scrapyard that anyone can enter and build their own Mad Max type of vehicle,” he says. “But you should not be able to just take that vehicle on the road. It needs to be tested for roadworthiness before it is put in production.”

A recent story revealed that China wants to develop a fleet of AI-guided unmanned submarines, the sort of thing that only heightens the tension. As we know, not all governments are equal, and countries have different ideas when it comes to ethics. But we need a technology solution too. What does roadworthiness look like? How will ethical parameters vary by vertical industry?

The problem, of course, is that businesses are unequal too. AI development is largely being driven by big corporations such as Google and IBM. While Isaac Asimov’s Three Laws of Robotics still loom large, it’s not surprising that many academic and political bodies are calling for action.

“People are becoming aware that this digital age is not neutral,” said Jeroen van den Hoven, of Delft University of Technology in the Netherlands, speaking to delegates at the EuroScience Open Forum (ESOF) 2018 in France. “It is presented to us mainly by big corporations who want to make some profit.”

Van den Hoven, who is a member of the European Group on Ethics in Science and New Technologies (EGE), added: “We need to think about governance, inspection, monitoring, testing, certification, classification, standardization, education, all of these things. They are not there. We need to desperately, and very quickly, help ourselves to it.”

Who said ethics could be easy?