AI ethics and the business of trust

Following on from our consideration of ethical AI last year, we look at more recent developments and potential legislation.

According to figures released earlier this month by the Capgemini Research Institute, ethical AI is not just a major concern for consumers (74 percent); it could also affect customer loyalty. If a business can demonstrate ethical AI use, 62 percent of consumers would place higher trust in that company. By contrast, 41 percent said they would complain about misuse of AI, and 34 percent would stop interacting with a company if its AI use was unethical.

It's not wholly surprising research, but it does raise the whole issue of trust. It also raises the question of what is deemed ethical when it comes to AI use. Who decides? Businesses, consumers, governments, academics or a mash-up of all of them?

Interestingly, the Capgemini research also revealed that 74 percent of consumers want more transparency when a service is powered by AI, and over three quarters think there should be further regulation on how companies use AI.

For Christopher Manning, a professor of computer science and linguistics at Stanford University and director of the Stanford Artificial Intelligence Laboratory, this increasing call for clarity with AI is to be expected. Manning, who also works closely with Chetan Dube at IPsoft on the development of its cognitive agent Amelia, has been instrumental in setting up the Stanford Institute for Human-Centered AI.

"One of the leading ideas is that a lot of these questions about ethics and bias need a broader expertise," he says. "It's not just about computer scientists dictating the direction and what is and isn't ethical. Humanists and social scientists have experience and expertise in these areas, so we need a broad range of people engaged in the conversation."

His view is that governments, as well as academics, can drive this. While he says "we definitely do need more regulation," he is also wary of too much government intervention.

"I do believe it needs to be done with a light touch," adds Manning. "You don't want to be derailing the development of new technologies by regulating them too much too soon. The big question is what to regulate. I'm quite dubious. While I understand the sentiment, it is very hard to come up with something that is reasonable, possible and well-structured, especially when you are trying to do it universally and apply it to all AI."

It is, of course, the problem facing the US government's proposed AI ethical development Bill, and something the European Union has tried to address with its recent publication of policy and investment recommendations for trustworthy AI. Catch-all guidelines or regulations will inevitably be vague, but you have to start somewhere.

The EU's guidelines are intended to promote discussion and research. Key concerns include identification without consent, covert AI systems, mass scoring of citizens, and lethal autonomous weapons systems (LAWS). Some of these technologies are already being deployed: only recently, researchers at the University of Essex raised concerns about the Metropolitan Police's live facial recognition (LFR) trial in London. Report authors Professor Peter Fussey and Dr Daragh Murray concluded that it is "highly possible" the Met's use of LFR to date would be held unlawful if challenged in court.

It's the sort of scenario that undermines trust. Would anyone trust the Met, or any other police force for that matter, to be ethical with AI? Yes, they have a job to do, but they also have to adhere to local privacy and data protection laws. This is where governments need to step in, to enforce laws that are already in place to protect individuals.

AI will touch most if not all vertical sectors at some point, so each sector will require its own specific rules and ethical frameworks, as well as a more universal set of standards. Much of the focus at the moment is on privacy, but also on data bias. The bias issue is not an easy one to solve, at least according to Professor Manning.

"Data bias and trust have become important problems," he says. "It's a difficult issue. The roots of the bias of many AI systems are the realities of the past human world. A lot of how you get bias into your systems is if you build a system around current data and then it turns out that the embedded data is unrepresentative of the real world a few years later."

Manning adds that there has been a lot of work on developing causal models in AI, and that a better understanding of causality is important, in the hope that we can eradicate irrelevant, outdated or discriminatory associations in data.

It's an important step, as there are already knock-on effects from bias in the real world, not least in recruitment. As Rob Grimsey, director at global tech recruiter Harvey Nash, revealed recently at the launch of the 2019 Harvey Nash/KPMG CIO Survey: "sometimes the knock-on effect of bias is to inadvertently introduce another. For example, data-driven recruitment algorithms have the potential to learn our prejudices. While technology is likely to play an increasing role in the future, there are well known examples where AI has imposed the values and biases of the software developer - often a white male based in Silicon Valley."
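One simple way a business might detect the recruitment problem Grimsey describes is to audit selection rates by group. The sketch below uses the "four-fifths" rule of thumb sometimes applied in recruitment analysis: flag any group whose selection rate falls below 80 percent of the highest group's rate. The data and function names are illustrative assumptions, not drawn from any real system.

```python
from collections import defaultdict

def selection_rates(decisions):
    """decisions: list of (group, selected) pairs -> selection rate per group."""
    counts = defaultdict(lambda: [0, 0])  # group -> [selected, total]
    for group, selected in decisions:
        counts[group][0] += int(selected)
        counts[group][1] += 1
    return {g: s / t for g, (s, t) in counts.items()}

def four_fifths_flag(decisions):
    """Flag groups selected at under 80% of the best-off group's rate."""
    rates = selection_rates(decisions)
    highest = max(rates.values())
    return {g: r / highest < 0.8 for g, r in rates.items()}

# Hypothetical audit data: group A selected 60% of the time, group B 30%.
decisions = [("A", True)] * 60 + [("A", False)] * 40 \
          + [("B", True)] * 30 + [("B", False)] * 70
print(selection_rates(decisions))
print(four_fifths_flag(decisions))  # group B is flagged: 0.3 / 0.6 < 0.8
```

An audit like this only surfaces a disparity; it says nothing about its cause, which is where the causal-modelling work Manning mentions comes in.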

While researchers develop tools to try to unpick often inadvertent historical mistakes, businesses are ploughing ahead with AI adoption. According to Gartner, by 2023, "40 percent of infrastructure and operations teams in enterprises will use AI-augmented automation, resulting in higher IT productivity." Such is the clamour for greater data intelligence. What no one really wants is for this to come at a cost. Building trust is as important as, if not more important than, the AI itself.