When you go to the doctor, you know your doctor has successfully completed medical school and is qualified to practise medicine in Australia. You also know that doctor has been credentialled to work within their scope of practice. If your question is outside their scope, they will refer you to another doctor who can assist.
However, as artificial intelligence (AI) is increasingly embedded in the healthcare pathway, questions arise: has the AI been credentialled to treat and diagnose disease? Is the task at hand within the AI tool's 'scope of practice'? How do we know the AI is safe and responsible?
Regulation of AI in healthcare: The current state of play
AI can be used in medicine in many ways: from predicting treatment outcomes and triaging medical images to aiding diagnosis and enabling precision medicine.
At present, AI in healthcare is most closely regulated where that AI qualifies as a software based medical device, which includes 'software as a medical device' (SaMD). The key governmental body that regulates software based medical devices is the Department of Health and Aged Care's Therapeutic Goods Administration (TGA).
In 2021, the TGA published specific rules for 'software based medical devices'. Generally, software used for the diagnosis, monitoring, prevention, prognosis, treatment or alleviation of disease, injuries or disabilities, as well as the control and support of conception, is considered a software based medical device.*
A software based medical device can take the following forms:
- a medical device that incorporates software (including AI) to automate functions or make decisions;
- web-based resources and mobile phone apps, including clinical decision support software; and
- SaMD (software that is the medical device).
Software based medical devices (including those incorporating AI) must be registered on the Australian Register of Therapeutic Goods (ARTG) in order to be sold in Australia, unless they are exempt or excluded (we discuss this concept further below).
Medical devices are sorted into 'classes' according to the risk they pose to consumers. The higher the risk, the higher the classification, and the more stringently the device is regulated. Risk is determined by an assessment of multiple factors, including:
- how invasive the device will be to the body, where it will be used on the body and the length of time for which it will be used;
- the seriousness of the condition that is being treated or managed, including whether the disease may cause long-term disability; and
- the potential consequences of incorrect diagnosis on public health (see Therapeutic Goods (Medical Devices) Regulations 2002 (Cth) Schedule 2).
Software based medical devices that are used by individual consumers generally pose a higher risk than those that are subject to oversight by the 'relevant clinician'.
If a software based medical device is used to recommend specific treatments, particularly for 'serious' conditions, the risk is classified as very high. The risk is highest where that recommendation purports to be made in place of a human practitioner or is not checked by a relevant health professional. The TGA allocates risk differently where AI is a means to an end (lower risk) and where the AI provides the final output (higher risk); human clinical judgment remains the gold standard for diagnosis and treatment. The TGA does not currently approve any AI that does not enable users to verify its operation. We have set out some examples of different classes of software based medical devices in the table below.
| Risk level | Class | Example |
| --- | --- | --- |
| Low | Class I | Software that monitors a patient's recovery from shingles based on photographs uploaded to a cloud-based platform. |
| Low to medium | Class IIa | Software that analyses photos of a non-life-threatening skin condition and provides information to a health professional for their consideration of possible diagnoses and treatment options. |
| Medium to high | Class IIb | Software that analyses angiogram results and provides information to a relevant health professional (e.g. a vascular surgeon) to inform and support a diagnosis of vascular disease. The risk is higher because of the nature of the condition. |
| High | Class III | Software available to the public that can be used to analyse MRI results to diagnose serious cardiac conditions without a doctor's input. The risk is higher because the output will not be interpreted by a relevant health professional, and the software is being used in relation to a serious condition. |
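To make the pattern in these examples concrete, the sketch below models it as a simple decision rule. This is a purely illustrative toy under our own assumptions (the factor names and rules are invented for this sketch); it is not the TGA's classification methodology, which is set out in the Therapeutic Goods (Medical Devices) Regulations 2002 (Cth).

```python
from dataclasses import dataclass

@dataclass
class SoftwareDevice:
    """Toy model of the factors the examples above turn on (illustrative only)."""
    informs_diagnosis_or_treatment: bool  # diagnostic/treatment output vs simple monitoring
    condition_is_serious: bool            # e.g. vascular or cardiac disease vs shingles
    clinician_in_loop: bool               # will a relevant health professional review the output?

def indicative_class(device: SoftwareDevice) -> str:
    """Map the factors to an indicative class, mirroring the four worked examples.

    NOT the legal test in the Therapeutic Goods (Medical Devices)
    Regulations 2002 (Cth) - always obtain specific advice.
    """
    if not device.informs_diagnosis_or_treatment:
        return "Class I"    # low risk, e.g. monitoring recovery from shingles
    if device.condition_is_serious and not device.clinician_in_loop:
        return "Class III"  # serious condition, no professional interpretation
    if device.condition_is_serious:
        return "Class IIb"  # serious condition, but a clinician reviews the output
    return "Class IIa"      # informs care for a non-serious condition

# The Class III example above: public MRI analysis for serious cardiac conditions.
print(indicative_class(SoftwareDevice(True, True, False)))  # -> Class III
```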
Unless otherwise exempt or excluded from TGA regulation, software based medical devices are subject to:
- conformity assessment procedures to establish evidence of conformity with the TGA's Essential Principles;
- listing on the ARTG; and
- assessment by the TGA, which involves review and scrutiny of information provided by manufacturers to demonstrate the quality, safety and performance of the medical device.
Once approved, the software based medical device will be listed on the ARTG in a specific Class and have a named 'intended purpose'.
This means clinicians and service providers can search the public ARTG to confirm the software based medical device is in fact a TGA registered medical device, and that it has been appropriately approved for use in line with the specific intended purpose. This is a significant safeguard that health service boards and clinicians can use as they assess whether they should introduce new AI innovations that qualify as software based medical devices.
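As a sketch of what that check might look like when automated, the snippet below filters a hypothetical local export of ARTG search results. The file name, column names and record used here are our own assumptions for illustration, not the TGA's actual schema; in practice, the ARTG is searched through the TGA's public website.

```python
import csv

def find_artg_entry(path: str, product_name: str) -> dict | None:
    """Look up a product in a hypothetical CSV export of ARTG search results.

    The columns ("artg_id", "product_name", "device_class", "intended_purpose")
    are illustrative assumptions only.
    """
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            if row["product_name"].strip().lower() == product_name.lower():
                return row
    return None

entry = find_artg_entry("artg_export.csv", "ExampleTriageAI")
if entry is None:
    print("Not on the ARTG - confirm exclusion/exemption or seek advice before use.")
else:
    # Confirm the approved class and intended purpose match the proposed use.
    print(f"ARTG {entry['artg_id']}: {entry['device_class']}, "
          f"intended purpose: {entry['intended_purpose']}")
```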
New TGA guidance: Unregulated software
Excluded software
Some software that uses AI is 'excluded' from TGA regulation and is therefore not considered a 'medical device'. This type of software is not regulated by the TGA and does not need to be listed on the ARTG. On 3 July 2024, the TGA published new guidance titled 'Excluded software - Interpretation of software exclusion criteria'. This guidance explains when software will be excluded from TGA regulation, and provides further commentary to assist in the interpretation of exclusion criteria. In summary, excluded software falls into the following categories:
- consumer health products;
- digital mental health tools;
- enabling technology for telehealth or supporting healthcare delivery;
- middleware;
- digitisation of paper-based data or other published clinical rules;
- population-based analytics; and
- laboratory information systems and management systems.
This means, for instance, that enabling technology for telehealth, and tools designed to digitise paper-based data, are excluded. Software that is excluded from TGA regulation is predominantly regulated by other laws, such as privacy regulations and consumer law.
Exempt software
There are also instances where software will qualify as a software based medical device, but be exempt from certain TGA regulatory requirements. For instance, as a result of the TGA's updated guidance in July 2024, certain clinical decision support systems have been exempted from specific TGA requirements. We recommend specific advice is sought in this regard.
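Putting the three possibilities together (regulated, excluded, exempt), the sketch below captures the triage logic described in this section. The predicates are placeholders standing in for a proper legal analysis, not a substitute for one.

```python
from enum import Enum

class TgaStatus(Enum):
    EXCLUDED = "Not a medical device; regulated by other laws (e.g. privacy, consumer law)"
    EXEMPT = "A medical device, but exempt from certain TGA regulatory requirements"
    REGULATED = "Must be registered on the ARTG before supply in Australia"

def triage(meets_exclusion_criteria: bool, meets_exemption_criteria: bool) -> TgaStatus:
    """Illustrative triage mirroring the discussion above; seek specific advice."""
    if meets_exclusion_criteria:   # e.g. middleware, telehealth enabling technology
        return TgaStatus.EXCLUDED
    if meets_exemption_criteria:   # e.g. certain clinical decision support systems
        return TgaStatus.EXEMPT
    return TgaStatus.REGULATED

# A clinical decision support tool that meets the exemption criteria:
print(triage(False, True).value)
```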
When AI is used in a healthcare setting but is not regulated by the TGA
This brief analysis shows there are many instances where AI may be used in a healthcare setting without qualifying as a medical device, meaning TGA assurance is not available. Examples of AI used in healthcare that may fall outside TGA regulation (although specific legal advice should be sought on a case-by-case basis) include:
- software used for workflow optimisation, such as tools designed to enhance clinician productivity through note summarisation; and
- tools that draw on authoritative medical research to provide summaries or responses to general (not patient-specific) queries.
In these cases, how can clinicians, health service providers and the public have assurance that this 'unregulated' AI is safe, fit for purpose and reliable in a healthcare setting?
The importance of a 'governance first' approach
In short, the answer is that clinicians, health service providers and innovators must take a 'governance first' approach to any digital transformation in healthcare, including the introduction of AI in a healthcare setting, especially where that AI is not otherwise regulated by the TGA.
An organisation that takes a 'governance first' approach to AI will establish organisational structures, policies, processes and controls to ensure the safe and responsible use of AI at all stages, from procurement to post-deployment evaluation. Particular emphasis should be placed on multidisciplinary expertise and on fostering a culture that promotes AI literacy, training and risk-awareness across all organisational levels.
Australia's launch of the AI Ethics Framework signalled the undeniable global uptake of AI and the importance of ensuring AI is safe, secure and reliable, whatever the use case. Further, the National Framework for the Assurance of AI in Government demonstrates the intention of Australia's federal, state and territory governments to promote 'governance first' approaches to AI when used in a government setting, although similar principles can be applied in the private sphere (see our article for a deeper analysis). The adoption of a 'governance first' approach that enshrines these national standards will enable health service providers to make the most of technological innovation whilst ensuring patient safety comes first and community trust is maintained.
Powering growth and innovation in healthcare
It is vital health service providers, clinicians and innovators consider whether, and how, the TGA regulates novel uses of AI in a healthcare setting before innovations are introduced. This can be done by checking the ARTG and seeking specific advice.
As always, a key safeguard in healthcare is to keep the 'human in the loop'. After all, AI is in many ways like the stethoscope: just another 'tool' in the 'doctor's bag'. Nonetheless, any professional who uses a tool in their work knows how important it is that the tool is reliable, calibrated properly and fit for purpose. For this reason, it is vital healthcare providers, digital transformation specialists, and tech start-ups working in health understand how AI is regulated in healthcare.
*Therapeutic Goods Act 1989 (Cth) s 41BD. We note the classification system is complex and this article provides a high-level summary only; specific requirements are set out in the Therapeutic Goods Act 1989 (Cth) and the Therapeutic Goods (Medical Devices) Regulations 2002 (Cth). Please let us know if you would like comprehensive advice in this regard.
Please contact our Health Centre of Excellence if you would like to discuss use of AI in healthcare settings.