What can we learn from Australia's tech-based human rights challenges?

Australia is trying to navigate the ethical and legal issues surrounding tech deployment in a way that emphasises human rights. What can we learn from Australia's debates, and how are other countries approaching these issues?

Grappling with the implications of new technologies such as artificial intelligence is on the agenda for every country seeking to stake its claim in a digital era. Australia, though, is forging ahead to try to navigate the ethical and legal issues surrounding tech deployment in a way that emphasises human rights. Australia's Human Rights Commissioner Edward Santow recently released the Human Rights and Technology Issues Paper, which is part of a project by the Human Rights Commission to protect the rights of Australians in a new era of technological change.

Santow told TechRepublic: "AI is enabling breakthroughs right now in healthcare, robotics, and manufacturing. Pretty soon, we're told, AI will bring us everything from the perfect dating algorithm to interstellar travel. It's easy, in other words, to get carried away, yet we should remember AI is still in its infancy."

At the launch of the three-year tech and human rights project, Santow noted that artificial intelligence, facial recognition, global data markets and other technological developments pose unprecedented challenges to privacy, freedom of expression and equality.

"Human rights must shape the future these incredible innovations have made possible. We must seize the opportunities technology presents but also guard against threats to our rights and the potential for entrenched inequality and disadvantage," he said. Part of the project is consulting with Australians about the issues around AI and other technologies. The Issues Paper has been published and the Australian Human Rights Commission is asking for input from the community.

Tucker Ellis partner Tod Northman explains that the challenge of artificial intelligence is two-fold: its complexity and its novelty, both of which can only be overcome through experience. He says: "Australia is at the forefront of thinking through these issues and will inevitably err as it defines problems and as it adopts laws to address those problems. Being a 'fast second' - learning from what works and what doesn't for Australia - will be invaluable for helping craft our approach to this challenging field."

While Australia is in the lead, it is not the only country seeking to unpack the issues of AI. In April 2018, the United Kingdom's House of Lords Select Committee proposed the development of an AI Code, though the work remains preliminary. The code would govern the use of artificial intelligence and data-driven technologies in healthcare, with plans to establish ethical advisory boards, Northman notes.

Likewise, the European Commission has recognized that there are "urgent moral questions" raised by the "opaque nature" of AI and has called for the development of a framework to consider the issues. New York City has also established a task force, and Singapore, a global leader in some areas of AI, is seeking to understand the issues.
