The Australian Health Practitioner Regulation Agency (Ahpra) and the National Boards have welcomed the use of artificial intelligence (AI) in healthcare, citing its potential to improve health outcomes and create a more person-centred health system. Ahpra also acknowledged the role AI could play in reducing administrative burden and burnout amongst health practitioners.
However, Ahpra's guidance makes it clear that individual health practitioners remain ultimately responsible for any AI used in the course of their practice. This places the onus on practitioners to thoroughly understand and critically evaluate any AI tools they employ: practitioners cannot simply defer to an AI's recommendations or outputs without applying their own professional judgment.
Moreover, in the event of errors or adverse outcomes resulting from AI use, the practitioner may be held accountable, regardless of the AI system's reputation or regulatory approval status. This underscores the need for ongoing education and due diligence in AI adoption within healthcare settings.
Below, we have set out a list of the key obligations this guidance creates for practitioners.
Professional obligations relating to AI
The Guidance is based on the National Boards' codes of conduct and creates the following professional obligations:
- health practitioners must apply human oversight and judgment to their use of AI;
- health practitioners who use AI scribing tools must check the accuracy and relevance of the records created using AI;
- all tools and software should be appropriately tested before being used in clinical practice, to ensure they are fit for purpose;
- health practitioners must understand the AI's intended use, including how it was trained and tested and on which populations, its inherent biases, and its associated risks and limitations in a clinical context;
- health practitioners must understand how the AI collects, stores, uses and discloses data, having particular regard to privacy and ethical considerations;
- health practitioners should inform patients and clients of their use of AI;
- health practitioners must seek informed consent from their patient or client before inputting any patient data into an AI tool or recording any private conversation, and should note the patient's or client's response in their health record;
- health practitioners must comply with relevant legislation and regulatory requirements that relate to using AI in practice, including the requirements of the Therapeutic Goods Administration; and
- health practitioners must hold appropriate professional indemnity insurance for all aspects of their practice, including their use of AI.
The Guidance makes it clear that health practitioners are personally responsible for satisfying themselves that the AI is appropriate for the specific use case and meets data governance, privacy and regulatory standards.
To accompany this guidance, Ahpra has published case studies to assist health practitioners to understand the privacy and confidentiality concerns associated with their use of AI tools in healthcare. The two case studies currently published relate to the use of scribing and proof-reading tools, respectively. Health practitioners and health service providers may wish to consider the case studies as they develop appropriate AI governance frameworks.
Implications for the health industry
Although not bound by the Guidance, health service providers who engage health practitioners should be mindful of the Guidance when establishing responsible AI frameworks and governance processes, to ensure the organisation's proposed uses of AI align with the health practitioner's professional obligations.
The Guidance reminds health practitioners of their obligations under the National Boards' codes of conduct and highlights Ahpra's and the National Boards' intention to limit and regulate the risks that AI poses to health practitioners, patients and the health industry more broadly.
In addition, this Guidance has been released against the backdrop of broader governmental initiatives aimed at ensuring the safe and responsible use of AI in Australia. The Australian Government's ongoing consultation on this topic signals a commitment to addressing the potential risks and opportunities presented by AI across various sectors, including healthcare.
For health practitioners, this means staying vigilant and adaptable as new regulations and standards emerge. The Department of Health and Aged Care's "Safe and Responsible Artificial Intelligence in Health Care Legislation and Regulation Review," closing in October 2024, is likely to result in additional clarifications and potential strengthening of legislation specific to AI in healthcare settings.
Health practitioners and organisations should prepare for a changing and complex regulatory environment, with an increasing focus on balancing the benefits of AI with robust safety measures and ethical considerations. Engaging with these consultations and staying informed about evolving guidance will be crucial for those looking to leverage AI technologies in their practice responsibly.
We anticipate growth in this space, noting Ahpra's undertaking to regularly review and update the Guidance and to publish new case studies reflecting developments in technology.
Please reach out to Shane Evans or Sonja Read if you would like more information about how the guidance will impact you or your organisation, or if you need any assistance preparing your organisation's AI governance framework.