Explaining the decisions made by Artificial Intelligence is high on the agenda of tech companies amid fears that our lives are being controlled by algorithms we don't understand.
Machine learning systems, the most common form of AI, make decisions such as whether a bank should offer a loan, which candidates should be shortlisted for a job interview or how long a convict should spend in prison. But they rarely offer a coherent explanation of how they arrived at that judgement.
To address this, tech companies are scrambling to launch tools that explain AI decisions and predictions to consumers. "This topic has really exploded," says Mike Hind, a lead researcher on IBM's AI Explainability 360 project, launched last year. "Quite reasonably, people want to have explanations - a business may want to know how their system is working and why it gave certain predictions so they can improve it. Customers want to know how decisions are made. It comes down to trust."
There are mounting concerns about the transparency and ethics of the judgements handed down by AI systems. These algorithms make decisions by analysing vast amounts of data, but the methods they use may be too complex for humans to understand easily.
Hind points to a survey by the IBM Institute for Business Value, which found that 68% of business leaders believe customers will demand more explainability from AI in the next three years.
Finding an answer to the explainability problem is becoming essential for AI businesses. In November, Google launched its "Explainable AI" service which aims to unlock some of the secrets of its AI offerings. Microsoft has launched InterpretML to show how machine learning predictions and decisions can be interpreted. Facebook offers solutions as well.
IBM's AI Explainability 360 project is an open-source resource offering ten possible solutions to the explainability problem.
One route is to train a second algorithm to explain how the primary algorithm arrived at a decision. Then there is the "post-hoc" approach. If a loan application is rejected by an AI system, the user slightly alters the inputs, increasing or decreasing the income, length of time in work and other variables, to see which version gets accepted. This offers clues to how the algorithm works.
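As an illustration of that probing approach, the sketch below treats the loan model as a black box and nudges one input at a time until the decision flips. The loan_model object, its predict method and the feature names are hypothetical stand-ins, not any vendor's actual tooling.

```python
# A minimal sketch of the post-hoc probing idea: treat the model as a black box
# and raise one input at a time until the decision changes.
# `loan_model.predict` and the feature names are hypothetical.

def probe_decision(loan_model, applicant, feature, step, max_tries=20):
    """Increase one feature until the model approves, revealing its sensitivity."""
    trial = dict(applicant)
    for _ in range(max_tries):
        trial[feature] += step
        if loan_model.predict(trial) == "approved":
            return f"Approved once {feature} reached {trial[feature]}"
    return f"Still rejected after raising {feature} to {trial[feature]}"

# Example: how much more income, or how much longer in work, would flip the outcome?
# probe_decision(loan_model, {"income": 28000, "years_in_job": 1.5}, "income", 1000)
```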
Another approach is called Teaching Explanations for Decisions (TED). While many AI explanation models attempt to uncover the inner workings of the system, TED simply asks the user of the AI to state what would, for them, count as a good explanation. These explanations are built into the algorithm's training, so if the AI rejects a loan it gives an explanation at the same time.
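The sketch below illustrates the broad idea rather than IBM's own implementation: human-authored explanations are attached to the training labels, so the model learns to return an explanation alongside each decision. The loan features and wording are invented for illustration.

```python
# A rough sketch of the TED idea: each training example carries a human-written
# explanation as well as a decision, and the model learns to predict both.
from sklearn.tree import DecisionTreeClassifier

X = [[28000, 1.5, 2], [55000, 6.0, 0], [31000, 0.5, 4], [72000, 10.0, 1]]
decisions    = ["reject", "approve", "reject", "approve"]
explanations = ["income too low for requested amount",
                "stable income and clean repayment history",
                "too little time in current job",
                "stable income and clean repayment history"]

# Encode (decision, explanation) pairs as combined labels, so every prediction
# comes back with the explanation a domain expert attached during training.
combined = [f"{d} | {e}" for d, e in zip(decisions, explanations)]
model = DecisionTreeClassifier().fit(X, combined)

print(model.predict([[30000, 1.0, 3]])[0])  # e.g. "reject | income too low ..."
```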
The most common form of AI is based on machine learning, where algorithms find patterns in very large data sets.
One popular form of machine learning uses decision trees, which have many branches that can be followed depending on answers to questions. These offer the simplest route to explainability. To find out why an AI system refused a customer a loan, simply retrace the branches of the decision tree and see how each question was answered.
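A rough sketch of that retracing, using scikit-learn's decision-tree tooling on invented loan data; the feature names and thresholds are illustrative only.

```python
# Retracing the branches: scikit-learn exposes the exact path a sample takes
# through a fitted tree, which can be read back as a question-by-question explanation.
from sklearn.tree import DecisionTreeClassifier

features = ["income", "years_in_job", "missed_payments"]
X = [[28000, 1.5, 2], [55000, 6.0, 0], [31000, 0.5, 4], [72000, 10.0, 1]]
y = ["reject", "approve", "reject", "approve"]

tree = DecisionTreeClassifier(max_depth=3).fit(X, y)

applicant = [[30000, 1.0, 3]]
node_ids = tree.decision_path(applicant).indices   # nodes visited, root to leaf
for node in node_ids:
    feat = tree.tree_.feature[node]
    if feat >= 0:  # leaves ask no question (marked -2)
        answer = "yes" if applicant[0][feat] <= tree.tree_.threshold[node] else "no"
        print(f"Is {features[feat]} <= {tree.tree_.threshold[node]:.1f}? {answer}")
print("Decision:", tree.predict(applicant)[0])
```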
According to Richard Potter, chief executive of Peak AI, which offers "AI made easy" for businesses: "If you are using decision trees - and that is where most of the commercial uses of AI are today - it can be quite easy to give that explainability."
An example is Peak's "customer lead-scoring" analysis for clients, assessing a customer's likelihood of making a purchase using a decision tree. "You can say, here are all the factors, the data points the model took in, and it crunched its numbers and it produced this prediction, which is made up of a weighting across all these different factors, and you can turn that into something quite explainable. We've always found that straightforward."
However, the big growth area in AI over the past five years has been in machine learning systems which use neural networks, known as deep learning. These are far more complex.
Jack Vernon, a senior research analyst at IDC, explains that neural networks are trained on large amounts of labelled data, say images of cans of Coke and Sprite, by letting the system repeatedly guess which one an image shows. "It randomly goes at the start saying, ‘it's a can of Coke', then learns it's right or wrong and it continues until it learns to identify the can of Coke correctly. But you don't know how it has made its decision," he says. The results are fed back through the neural network. Each time it makes a mistake, it changes the way it arrives at the answer, perhaps by taking into account the colour or shape of the logo.
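A toy sketch of that guess-and-correct cycle, reduced to a single layer of weights: real deep-learning systems stack many such layers, and the Coke-versus-Sprite labels here are simulated.

```python
# A toy version of the feedback loop: guess, compare with the label, adjust the weights.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                      # e.g. simple colour/shape features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)    # 1 = "Coke", 0 = "Sprite" (toy labels)

w, b = np.zeros(3), 0.0
for _ in range(500):                               # repeat: guess, check, adjust
    guess = 1 / (1 + np.exp(-(X @ w + b)))         # the model's current guess
    error = guess - y                              # how wrong was it?
    w -= 0.1 * X.T @ error / len(y)                # shift the weights to do better
    b -= 0.1 * error.mean()

print("training accuracy:", ((guess > 0.5) == y).mean())
```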
A neural network may have hundreds of layers, and each is slightly altered with every answer. This makes it impossible to arrive at an exact explanation of why a decision was made. Vernon says a possible solution is baking explainability into the algorithms themselves. Shown a lawnmower, the algorithm would explain that it identified the machine because it was next to grass and had several attributes common to lawnmowers. However, he says: "That is great, but it is probably very difficult to deploy in general purpose settings where the algorithm is exposed to a lot of different variables."
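One common way of approximating that kind of explanation for an image classifier is occlusion sensitivity: mask one patch of the image at a time, re-run the model and see which patches most reduce the score for the predicted class. The sketch below assumes a hypothetical predict_proba scoring function rather than any particular product's API.

```python
# Occlusion sensitivity: grey out one patch at a time and measure how much the
# "lawnmower" score drops; big drops mark the regions the model relied on.
import numpy as np

def occlusion_map(predict_proba, image, target_class, patch=16):
    """Return a grid of importance scores: high where masking hurts the prediction."""
    baseline = predict_proba(image)[target_class]
    h, w = image.shape[:2]
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            masked = image.copy()
            masked[i:i + patch, j:j + patch] = 0                 # blank out one patch
            drop = baseline - predict_proba(masked)[target_class]
            heat[i // patch, j // patch] = drop                  # big drop = important region
    return heat
```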
David Emm, a researcher at cyber-security firm Kaspersky Labs, says that almost everything we do now involves the collection of data and machine learning. "One of the key things is the more we become reliant on these systems, the bigger becomes the potential danger that we get less of a handle on how those decisions are being made," he says.
The key to explaining decisions is understanding the weighting a machine learning system gives to each factor it analyses. In considering a bank loan, for instance, it may give greater weight to a previous failure to pay than to income level. The final decision blends all these weightings together, and without understanding what they are and how they are combined, it is hard to fully comprehend the outcome.
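A minimal sketch of how such weightings might blend into a single decision; the factors, weights and threshold here are invented for illustration, whereas a real system learns them from data.

```python
# Blending weighted factors into one decision, in the spirit of the loan example above.
weights = {
    "missed_payment_history": -3.0,   # weighted most heavily, as described above
    "income_level":            1.2,
    "years_in_current_job":    0.8,
}

def loan_score(applicant):
    """Sum the weighted factors; each term shows its contribution to the decision."""
    contributions = {k: weights[k] * applicant[k] for k in weights}
    return sum(contributions.values()), contributions

score, parts = loan_score({"missed_payment_history": 1,
                           "income_level": 0.4,          # normalised features
                           "years_in_current_job": 0.3})
print("approve" if score > 0 else "reject", parts)
```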
"We run the risk somewhere in the future that we have systems making decisions on our behalf without ever being able to understand whether that is a valid decision or not," says Emm. Solving the explainability problem could prove vital to public acceptance of AI systems as they become embedded ever more deeply into society's decision-making processes.