In a world of unprecedented technological progress, new technologies emerging at speed can be as alarming as they are inspiring. Facial recognition technology may enable us to unlock our phones seamlessly; but it can also be used to invade our privacy by those wishing to monitor our movements. How can we realise the promise of these new technologies, while protecting and promoting human rights?
In December last year, the Australian Human Rights Commission started to answer this question in its Human Rights and Technology Discussion Paper. The Paper has been developed as part of a larger project by the Commission, which has seen it consult widely with civil society, government and business. While the Paper is 'written in pencil rather than ink', it includes a number of proposed recommendations that give a strong indication of what may be included in the final report, due later this year.
The Paper is substantial and wide-ranging, including 29 proposals for reform and nine questions for further consultation. The issues raised fit under three broad categories, discussed further below: the need for national leadership; the need for accountability and regulation in AI-influenced decision-making; and the need for accessibility of new technologies.
The need for national leadership
The Commission considers a human rights approach to analysing the impact of new and emerging technologies to be vital. Australia currently lacks a nationally-coordinated approach to the regulation of artificial intelligence or similar technologies. However, in recent times a number of national institutions have started to respond; for example, Standards Australia has commenced consultations on the role it might play, and CSIRO and the Department of Industry, Innovation and Science have outlined a 'roadmap' for AI in Australia.
The Commission recognises these contributions, but considers that more should be done to regulate the use and development of AI in Australia, in order to protect and promote human rights. In particular, the Commission has raised two substantial proposals in its report:
- Firstly, recommending a National Strategy on New and Emerging Technologies. Such a strategy would promote responsible innovation of new technology; prioritise national leadership on AI; outline a regulatory roadmap (including self-regulation); and outline appropriate education and training initiatives for government, industry and civil society.
- Secondly, recommending the establishment of an AI Safety Commissioner to provide leadership on AI governance in Australia. It is suggested that the Commissioner would be an expert body that unifies existing and proposed national AI initiatives. It could provide general guidance materials, and offer more expert advice. It would be well-placed to monitor and promote best practice. In addition, it may be a candidate to oversee some of the Commission's other recommendations, such as a 'trustmark' scheme for ethical AI, the use of 'regulatory sandboxes' to supervise the development of new technologies, and the development of a human rights impact assessment tool for AI-informed decision-making.
The Paper also suggests national leadership on the ethical frameworks that may be used voluntarily by developers, or which may form the basis for binding regulation. The Commission notes the difficulties that arise from conflicting ethical views, and from frameworks being 'not precisely defined'. It proposes an inquiry to assess the efficacy of existing ethical frameworks, and to identify opportunities for their improvement. We have previously explored the challenges presented by ethics and AI in Beyond Asimov's Three Laws: a new ethical framework for AI developers and The ethics of artificial intelligence: laws from around the world.
AI-influenced decision making
A second focus of concern in the Paper surrounds the increasing use of AI-informed decision making. For example, the Paper includes some discussion of Centrelink's previous use of an automated debt recovery system – so-called 'Robodebt' – which is no longer in use. Another example of government use of AI-informed decision making is the use by NSW Police of a risk assessment tool that classifies individuals as being at high, medium or low risk of offending. The use of these technologies can be positive – potentially increasing the speed and accuracy of some decisions. However, the Commission notes that the 'stakes are high and the consequences of error can be grave for anyone affected'.
Of particular concern is that automated decisions may be a 'black box' of unaccountability. For example, decisions made or influenced by AI may be difficult to clearly explain – a necessity if individuals are to understand the basis for a decision, and possibly contest its lawfulness. To address some of these concerns, the Paper suggests a number of reforms, including the introduction of legislation:
- Requiring individuals to be informed where AI is materially used in a decision that has a legal or similarly significant effect on their rights;
- Regarding the explainability of AI-informed decisions. This would introduce a right for affected individuals to demand an explanation comprehensible to lay persons, or a technical explanation that can be assessed and validated by others with the relevant technical expertise; and
- That creates a rebuttable presumption that the person who deploys an AI-informed decision making system is liable for the use of the system.
Another key concern of the Commission is to ensure that new technology does not introduce barriers to accessibility.
Accessibility of the technology
Technology has played a role in increasing access – such as through voice and virtual assistants, and text-to-speech applications. However, as technology increasingly becomes 'the main gateway to participation' in many aspects of life, it is critical to ensure that new technologies come with accessibility built in. To ensure that the fruits of new technology can be enjoyed widely, the Paper makes a number of suggestions, including that:
- Governments commit to using technologies that comply with recognised accessibility standards, and adopt an accessible procurement policy to ensure that new technologies used by government are accessible;
- The Australian Government conduct an inquiry into compliance by industry with accessibility standards, and consider incentives for compliance, such as through taxation, grants or other measures; and
- Standards Australia develop an Australian Standard or Technical Specification that covers the provision of accessible information, instructional and training materials to accompany consumer goods.
The Commission seeks further input on what other measures could be taken to eliminate barriers to accessibility and whether this should include a Digital Communication Technology Standard under the federal Disability Discrimination Act.
The opportunity to have your say
While new technologies race ahead, law and other forms of regulation are left playing catch-up. The Commission's suggestions will help further the debate on how to respond appropriately – harnessing these new technologies to the full, while avoiding their perils.
Before releasing its final report the Commission is seeking input on the Paper's proposals and questions. Written submissions must be received by Tuesday, 10 March 2020. To make a submission visit the Commission's consultation page.