How do we keep bias out of AI programmes?

Faisal Abbasi, Managing Director UK & Ireland and Europe, Amelia gives his advice on the steps businesses looking to implement enterprise AI can take to avoid bias and prejudice being programmed into their systems.


This is a contributed article by Faisal Abbasi, Managing Director UK & Ireland and Europe, Amelia.

Whilst we all marvel over the capabilities of modern AI technology, it's important to remember one thing: it is still designed by humans. AI is far from immune to the internal biases and prejudices of its human creators, which have a habit of sneaking into programmes undetected.

Studies have shown that the consequences of ignoring AI bias can be severe. In a recent survey, 36% of respondents reported that their businesses had suffered from AI bias in at least one algorithm, resulting in unequal treatment of users based on gender, age, race, sexual orientation or religion. Of those respondents, 62% reported losing revenue as a result, 61% lost customers, 43% lost employees, and 35% incurred legal fees from lawsuits or other legal action.

As enterprise adoption of AI technology increases, so do the range and diversity of end users. For businesses, removing bias from AI solutions is essential to guaranteeing a fair and equal user experience and to ensuring business security and success. This shouldn't deter organisations from using AI: the benefits of successfully deploying the technology are substantial, and those who don't implement it risk falling behind their competitors. Instead, organisations need to create processes that not only attempt to eliminate bias, but can quickly mitigate any instances that do occur before they harm end users. This raises the question: how can they do so?

Why diverse teams are the first step to bias-free AI

Gartner anticipates that, by 2023, all organisations will expect AI development and training personnel to “demonstrate expertise in responsible AI” in order to ensure their AI solutions achieve algorithmic fairness.

There is good reason for this expectation. While AI is not inherently biased, algorithms are influenced by the biases and prejudices of their human creators. Although we may not yet be at the point when responsible AI expertise is a requirement for all AI development personnel, there are steps organisations can take today to ensure developers are able to detect and address bias in AI solutions.

Whether a developer is a new addition to an AI project or an existing member, they should receive training on how to recognise and avoid bias in AI. In a recent study exploring ageism in AI for healthcare, the World Health Organization found that healthcare AI solutions are often embedded with designers’ “misconceptions about how older people live and engage with technology.” The WHO recommends training AI programmers and designers, regardless of their age, to recognise and avoid ageism both in their work and in their own perceptions of older people.

This advice applies not just to detecting and eliminating ageism, but also to the sexist, racist, ableist and other biases that may lurk within AI algorithms. However, while training programmes can help to limit bias, nothing compares to the positive impact of building a diverse analytics team. As a recent article from McKinsey notes, bias in training data and model outputs is “harder to spot if no one in the room has the relevant life experience that would alert them to issues.” The teams that plan, create, execute and monitor the technology should be representative of the people they intend to serve.

The importance of monitoring each step

Another step organisations can take to avoid bias is to foster a practice of regularly auditing AI algorithms for fairness. As an article from Harvard Business Review states, one of the keys to eliminating bias from AI is subjecting the system to “rigorous human review.”

Several leaders in the AI and automation field have already put this recommendation into practice. Alice Xiang, Sony Group’s Head of AI Ethics Office, explains that she regularly tells her business units to conduct fairness assessments, not as an indicator that something is wrong with their AI solution, but because it is something they should continuously monitor. Similarly, Dr. Haniyeh Mahmoudian, Global AI Ethicist at DataRobot, emphasises the importance of surveilling AI at every step of development to ensure bias does not become part of the system. She describes how this process allows AI teams to determine whether their product is ready for public deployment.
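In practice, a fairness audit often starts with a simple comparison of outcome rates across demographic groups. The sketch below illustrates one such check, a demographic parity comparison; the data, group labels and audit threshold are hypothetical, not drawn from any specific vendor's tooling.

```python
# Minimal fairness-audit sketch: flag large gaps in positive-outcome rates
# between demographic groups (demographic parity). Illustrative only; the
# threshold, group labels and decision logs are assumptions.

from collections import defaultdict

def demographic_parity_gap(records, group_key="group", outcome_key="approved"):
    """Return the gap between the highest and lowest positive-outcome
    rates across groups, plus the per-group rates."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for record in records:
        group = record[group_key]
        totals[group] += 1
        positives[group] += int(bool(record[outcome_key]))
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Example audit run on hypothetical decision logs.
decisions = [
    {"group": "18-34", "approved": True},
    {"group": "18-34", "approved": True},
    {"group": "65+",   "approved": False},
    {"group": "65+",   "approved": True},
]

gap, rates = demographic_parity_gap(decisions)
print(f"Per-group approval rates: {rates}")
if gap > 0.2:  # audit threshold chosen purely for illustration
    print(f"WARNING: parity gap {gap:.2f} exceeds threshold; review the model")
```

Running a check like this on a schedule, as Xiang and Mahmoudian recommend, turns fairness assessment into routine monitoring rather than a one-off response to something going wrong.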

In some cases, these surveillance-like steps can be built directly into AI solutions to aid in the bias-elimination process. For example, our Amelia solution utilises Conversational AI and Intelligent Automation to perform supervised sentient learning. When she encounters a workflow she has not previously performed, she creates new business process networks based on her interactions with users. However, any newly created process must be approved by human subject matter experts before it is deployed. This provides an important checkpoint to catch undue bias before it reaches users, so long as those experts are trained and tasked with recognising it.
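A checkpoint like this can be modelled as a simple review queue: anything the system learns on its own is held back from deployment until a named human expert signs it off. The sketch below is a hypothetical illustration of that pattern, not Amelia's actual interface.

```python
# Hypothetical human-in-the-loop gate: newly learned workflows are queued
# for review and can only be deployed once a named human expert approves
# them. Class and method names are illustrative, not a real product API.

from dataclasses import dataclass
from typing import List, Optional

@dataclass
class LearnedWorkflow:
    name: str
    steps: List[str]
    approved: bool = False
    reviewer: Optional[str] = None

class ReviewQueue:
    def __init__(self) -> None:
        self.pending: List[LearnedWorkflow] = []

    def submit(self, workflow: LearnedWorkflow) -> None:
        # Every newly learned workflow starts unapproved, in the queue.
        self.pending.append(workflow)

    def approve(self, workflow: LearnedWorkflow, reviewer: str) -> None:
        # Only a named human reviewer can mark a workflow deployable;
        # this is where a trained expert would check for bias.
        workflow.approved = True
        workflow.reviewer = reviewer
        self.pending.remove(workflow)

def deploy(workflow: LearnedWorkflow) -> None:
    if not workflow.approved:
        raise PermissionError(f"'{workflow.name}' has not passed human review")
    print(f"Deploying '{workflow.name}' (approved by {workflow.reviewer})")

# Usage: a workflow learned from user interactions awaits expert sign-off.
queue = ReviewQueue()
wf = LearnedWorkflow("refund-request", ["collect order id", "verify", "refund"])
queue.submit(wf)
queue.approve(wf, reviewer="sme.jane")
deploy(wf)
```

The key design choice is that deployment fails closed: an unreviewed process cannot reach users by default, rather than relying on someone remembering to check.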

Create trust through transparency

Even after building a diverse AI development team, training team members on responsible AI practices and regularly assessing algorithms throughout the development process, organisations cannot afford to let their guard down.

Once companies deploy their AI product, they should be transparent with end users about how the algorithm was developed, what the product is intended to do, and the point of contact users can reach with questions or concerns. Dissolving the mystique of AI encourages open dialogue between companies and users, empowering developers to leverage user feedback to improve their solutions and ensuring any erroneous algorithmic biases are resolved in a timely manner.
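One established way to publish this information is a “model card” (Mitchell et al., 2019): a short, structured summary of a model's purpose, data, known limitations and contact details. The fields and values below are purely illustrative.

```python
# Illustrative "model card": a structured public summary of how a model
# was built, what it is for, and who to contact. All values are hypothetical.

model_card = {
    "model_name": "customer-support-intent-classifier",
    "intended_use": "Routing customer queries to support workflows",
    "training_data": "Anonymised support transcripts, 2020-2022",
    "known_limitations": [
        "Lower accuracy on non-native English phrasing",
        "Not evaluated on users under 18",
    ],
    "fairness_evaluation": "Quarterly demographic parity audit",
    "contact": "ai-ethics@example.com",
}

for field_name, value in model_card.items():
    print(f"{field_name}: {value}")
```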

Reaping the benefits of AI technology without acknowledging its potential for bias is wholly irresponsible. It is a business's responsibility to ensure that its technology is fair to end users and doesn't discriminate based on their gender, race, age, ability, sexual orientation or religion. Incorporating these steps will provide a sound basis for an anti-bias strategy, empowering your organisation to offer a superior and equitable customer experience.

Faisal Abbasi is Amelia's Managing Director for UK & Ireland and Europe. With over 25 years' experience in enterprise technology, his focus is on transforming and empowering businesses through innovation. In his current role, he oversees Amelia's regional growth and ensures the company delivers on end-user experience.