We need to talk about ethics: AI data and the challenge of frameworks

What should enterprises do with AI data? And what would a framework for corporations to manage bias in data look like?

How do you train AI not to misbehave? Scientists in the US and Brazil seem to have come up with an answer, at least according to a joint paper from the University of Massachusetts Amherst, the Universidade Federal do Rio Grande do Sul and Stanford, called Preventing undesirable behavior of intelligent machines. In a nod to author Isaac Asimov's character Hari Seldon, the group has developed what it terms a "Seldonian algorithm": a framework that lets machine-learning designers build behaviour-avoidance instructions into algorithms used in real-world products and services.

Asimov's three laws of robotics, from his 1942 short story Runaround, are widely referenced. The paper focuses on the first law, that a robot may never harm a human, using a technique that translates goals such as avoiding gender bias into mathematical constraints that machine-learning algorithms must satisfy when training AI applications.

"We want to advance AI that respects the values of its human users and justifies the trust we place in autonomous systems," said Emma Brunskill, an assistant professor of computer science at Stanford and senior author of the paper. "Thinking about how we can create algorithms that best respect values like safety and fairness is essential as society increasingly relies on AI."

The idea is that if ‘unsafe’ or ‘unfair’ outcomes can be defined mathematically, developers can create algorithms that learn from the data to mitigate potential bad behaviours. Any framework that proposes to ethically cleanse data will, of course, still be fraught with technical and moral complications. But while this approach may curtail any attempts by AI to end the human race, what can enterprises do about data bias unintentionally skewing AI?
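A minimal sketch of the idea follows, assuming a toy threshold classifier and a simple normal-approximation confidence bound in place of the paper's Student-t bounds; the data, the constraint (a cap on the false-positive rate) and the epsilon and delta values are illustrative, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: a score x and a noisy label y; the classifier flags x >= theta.
x = rng.normal(size=2000)
y = (x + rng.normal(scale=0.5, size=2000) > 0).astype(int)

# Seldonian-style split: one set to pick a candidate, one for the safety test.
x_cand, y_cand = x[:1000], y[:1000]
x_safe, y_safe = x[1000:], y[1000:]

EPSILON = 0.10   # tolerated false-positive rate (the 'unsafe' definition)
DELTA = 0.05     # allowed probability that the guarantee fails

def g_samples(theta, xs, ys):
    """Per-sample constraint values among true negatives, so the mean
    of the returned array is the false-positive rate."""
    negatives = ys == 0
    return (xs[negatives] >= theta).astype(float)

def upper_bound(samples, inflate=1.0):
    """One-sided upper confidence bound on the mean; 1.645 is the
    normal-approximation z value for DELTA = 0.05."""
    n = len(samples)
    return samples.mean() + inflate * 1.645 * samples.std(ddof=1) / np.sqrt(n)

# Candidate selection: best accuracy among thresholds predicted to pass
# the safety test (bound inflated to anticipate the held-out check).
candidates = []
for theta in np.linspace(-2, 2, 81):
    if upper_bound(g_samples(theta, x_cand, y_cand), inflate=2.0) <= EPSILON:
        acc = ((x_cand >= theta).astype(int) == y_cand).mean()
        candidates.append((acc, theta))

theta_star = None
if candidates:
    _, theta_star = max(candidates)
    # Safety test on held-out data: deploy only if the bound still holds.
    passed = upper_bound(g_samples(theta_star, x_safe, y_safe)) <= EPSILON
    print("deploy" if passed else "No Solution Found")
else:
    print("No Solution Found")
```

The key design choice, as the paper describes it, is that the algorithm refuses to return a solution at all rather than return one it cannot certify against the constraint.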

According to Edy Liongosari, a chief research scientist at Accenture Labs in San Francisco, "technology is critical" to solving data bias issues in industry. Speaking at the IoT World Congress event in Barcelona recently, Liongosari outlined a framework for responsible AI, built around the business processes and technology that organisations need to consider when handling and using data to build and implement applications.

The problem, says Liongosari, is that data, by its very nature, is loaded with bias and as a result can skew AI decisions. He says that creating a clear and transparent process for data and data processing is the only way to build trust in AI. To achieve this, organisations need to break processes down into three main categories: governance and compliance; system development and runtime management; and ethics audit sweep. Each category has its own guidelines, covering areas such as data ownership, testing, security, modelling and monitoring. The aim is a flow of data that minimises bias throughout the business cycle.
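Purely as an illustration of what a runtime monitoring guideline might check, here is a sketch of a simple bias monitor; the metric (demographic parity gap), the threshold and the function name are assumptions for this example, not details of Accenture's framework:

```python
import numpy as np

def demographic_parity_gap(preds, groups):
    """Absolute difference in positive-prediction rates between two
    groups; a runtime monitor could alert when it exceeds a threshold."""
    preds = np.asarray(preds, dtype=float)
    groups = np.asarray(groups)
    rates = [preds[groups == g].mean() for g in (0, 1)]
    return abs(rates[0] - rates[1])

# Example: a model approving 80% of group 0 but only 40% of group 1.
preds = [1, 1, 1, 1, 0, 1, 1, 0, 0, 0]
groups = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]
ALERT_THRESHOLD = 0.2  # illustrative tolerance, set by governance policy
gap = demographic_parity_gap(preds, groups)
print(f"parity gap = {gap:.2f}, alert = {gap > ALERT_THRESHOLD}")
```

In practice such a check would run continuously over live predictions, feeding the ethics-audit side of the process rather than a one-off script.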
