Grappling with the implications of new technologies such as artificial intelligence is on the agenda for every country seeking to stake its claim in the digital era. Australia, though, is forging ahead, seeking to navigate the ethical and legal issues surrounding tech deployment in a way that emphasises human rights. Australia's Human Rights Commissioner Edward Santow recently released the Human Rights and Technology Issues Paper, part of a project by the Australian Human Rights Commission to protect the rights of Australians in a new era of technological change.
Santow told TechRepublic: "AI is enabling breakthroughs right now in healthcare, robotics and manufacturing. Pretty soon, we're told, AI will bring us everything from the perfect dating algorithm to interstellar travel. It's easy, in other words, to get carried away, yet we should remember AI is still in its infancy."
At the launch of the three-year tech and human rights project, Santow noted that artificial intelligence, facial recognition, global data markets and other technological developments pose unprecedented challenges to privacy, freedom of expression and equality.
"Human rights must shape the future these incredible innovations have made possible. We must seize the opportunities technology presents but also guard against threats to our rights and the potential for entrenched inequality and disadvantage," he said. Part of the project is consulting with Australians about the issues around AI and other technologies. The Issues Paper has been published and the Australian Human Rights Commission is asking for input from the community.
Tucker Ellis partner Tod Northman explains that the challenge of artificial intelligence is twofold: its complexity and its novelty, both of which can only be overcome through experience. He says: "Australia is at the forefront of thinking through these issues and will inevitably err as it defines problems and as it adopts laws to address those problems. Being a 'fast second' - learning from what works and what doesn't for Australia - will be invaluable for helping craft our approach to this challenging field."
While Australia is in the lead, it is not the only jurisdiction seeking to unpack the issues of AI. In April 2018, the United Kingdom's House of Lords Select Committee proposed the development of an AI Code - a code of conduct on artificial intelligence and data-driven technologies in healthcare - though the work remains preliminary. The plan is to establish ethical advisory boards, Northman notes.
Likewise, the European Commission has recognized that there are "urgent moral questions" raised by the "opaque nature" of AI, and has called for the development of a framework to consider the issues. New York City has also established a task force, and Singapore, a global leader in some areas of AI, is seeking to understand the issues.
Thinking of artificial intelligence in human rights terms means understanding the potential issues that deployment of such technologies might bring. It is not just a matter of "robots" displacing human workers; the issues are more complex than that. Northman believes that bias is at the top of the list. He says that because artificial intelligence relies on data identified and provided by humans, it has the potential to amplify the unconscious bias of its "trainers".
"Worse, if the bias is unconscious, it can be difficult to detect in the results and then to persuade third parties that it is in fact bias. Algorithms have the seductive appearance of objectivity. Amplifying these effects, the pool of AI experts is necessarily non-representative of the population as a whole," Northman adds.
Northman also highlights the opacity of the decision-making process. "Even if we suspect that the results are off, it is difficult to determine why, let alone to remedy them. Relatedly, the issues can seem abstract - that is, disconnected from our everyday experience," he says.
Ensuring access to the benefits of these technologies across all spheres of society will also increasingly be an issue, so that AI does not reinforce existing class divisions and the gap between rich and poor.
Helen Dempster, Chief Visionary Officer at Karantis360, says that, as the ethical debate around AI continues, it is time to consider how we can harness its capabilities for the greater good of society and positively impact the population, at both a general and an individual level. She stresses that implementing it in this way will not only provide incremental value for a number of sectors, including healthcare, but will ensure everyone involved reaps the benefits. "This is particularly important when it comes to increasing safety and prevention," says Dempster.
Dempster describes some of the benefits of integrating such AI-powered systems into healthcare. Across the UK, more care agencies are starting to introduce non-intrusive digital solutions into their clients' homes. "Unlike CCTV or biometric scanners, the implementation of AI and smart technology through IoT sensors enables carers to extend the delivery of care to 24/7 without having to interfere with their clients' day-to-day lives. This technology is a great solution for ensuring every carer has a greater level of insight into the wellbeing of their clients, and will be a huge advantage for those receiving domiciliary care," she says. But such access brings with it concerns about privacy and security as well as care. A framework governing how such deployments are carried out, and ensuring that informed consent takes place, is just one area where human rights and tech innovation need to meet.
Northman also notes that the extraordinary pace of development is a formidable barrier to even understanding the issues. "As you get your head around what you thought was the issue, the issue morphs into a different problem altogether," he says. A considered, methodical approach to building a framework to deal with ethical and legal issues related to the use of technology is needed to ensure that the benefits are reaped without compromising the human rights of certain groups of people.
Going forward, Northman believes that the pace of change will slow and that our experience with the issues will help us address the greatest challenges to human rights. "Progress will be uneven, but we will come to recognize patterns (perhaps with the help of AI) of problems that can then be addressed. Progress will be global, as countries and regions learn from one another and establish infrastructure to spot problems as they emerge rather than waiting to react."