Explainable AI: how are decisions made?

Exploring the difficulties of building transparent AI and predicting and explaining algorithmic bias.

Explaining the decisions made by artificial intelligence is high on the agenda of tech companies, amid fears that our lives are being controlled by algorithms we don't understand.

Machine learning systems, the most common form of AI, make decisions such as whether a bank should offer a loan, which candidates should be shortlisted for a job interview or how long a convict should spend in prison. Yet they rarely offer a coherent explanation of how they arrived at those judgements.

To address this, tech companies are scrambling to launch tools that explain AI decisions and predictions to consumers. "This topic has really exploded," says Mike Hind, a lead researcher on IBM's AI Explainability 360 project, launched last year. "Quite reasonably, people want to have explanations: a business may want to know how their system is working and why it gave certain predictions so they can improve it. Customers want to know how decisions are made. It comes down to trust."

Concerns are mounting about the transparency and ethics of the judgements handed down by AI systems. These algorithms reach their decisions by analysing vast amounts of data, but the models they build along the way can be too complex for a human to follow.

Hind points to a survey by the IBM Institute for Business Value, which found that 68% of business leaders believe customers will demand more explainability from AI in the next three years.

Finding an answer to the explainability problem is becoming essential for AI businesses. In November, Google launched its "Explainable AI" service, which quantifies how much each input feature contributed to a model's prediction. Microsoft has released InterpretML, an open-source package for training interpretable models and explaining black-box ones. Facebook offers Captum, an interpretability library for its PyTorch framework.

IBM's AI Explainability 360 project is an open-source toolkit offering ten possible approaches to the explainability problem.
