For those who want to know more
DISR Consultation
The DISR consultation is tasked with considering the overarching regulatory environment required to create safe and responsible AI settings across all industries and sectors in Australia, including but not limited to healthcare.
The DISR consultation commenced on 1 June 2023 and, on 5 September 2024, DISR published 'Introducing mandatory guardrails for AI in high-risk settings: proposals paper' (Proposals Paper), which sets out 10 Mandatory Guardrails for safe and responsible AI use. Publicly available submissions to DISR can be found on the DISR consultation hub. DISR intends to use the consultation feedback to inform the Commonwealth's National AI Capability Plan, which is expected to be delivered at the end of 2025.
The outcomes of the DISR consultation will inform the overarching regulatory environment within which the DOH and TGA consultation findings may be implemented.
DOH Consultation: 'Safe and Responsible Artificial Intelligence in Health Care – Legislation and Regulation Review'
The DOH is responsible for administering a vast array of legislation, including in relation to aged care, immunisation, organ donation, health funding, health insurance and therapeutic goods. The purpose of the DOH consultation is to facilitate more targeted consultation with respect to AI in healthcare, understand who the main stakeholders are, and consider whether the risks of AI can be mitigated through new or amended legislation or other regulatory action.
The DOH consultation posed 19 questions for public consultation. We highlight some of the key themes below:
- Equity and accessibility: The DOH is considering how best to ensure the benefits of AI uptake are equitable and accessible to all Australians, including those in rural, remote and regional areas.
- Consent and privacy: Some healthcare services are already using AI without patient or clinician knowledge, and some less sophisticated forms of AI can be harder to detect. The DOH has asked whether consent to use AI ought to be sought from patients or healthcare professionals. A potential requirement to seek consent for the use of personal information in AI tools would not be inconsistent with recent amendments to privacy legislation.
- Generative AI: The DOH acknowledges the benefits generative AI can bring to the health sector. AI can help address the safety risks posed by fatigue and clinician burn-out. However, errors can be difficult to locate and contest, and improvements in generative AI tools mean hallucinations and incorrect output can often seem polished and plausible. The DOH has sought input on whether human review of all AI output is required, and when (if ever) it may be acceptable to have fully automated decision-making in health care.
Trends from public submissions
Submissions from professional bodies such as the Royal Australasian College of Surgeons (RACS) and the Royal Australian College of General Practitioners (RACGP) are largely in favour of AI regulation within healthcare. RACS flagged the need to update civil liability frameworks to address accountability for AI-related harms (RACS submission, page 7). RACS also proposed changes to the Health Practitioner Regulation National Law to promote AI literacy among health professionals (RACS submission, page 7).
RACS and RACGP both submitted that AI should be seen as another tool in the clinician's toolbelt, and that full automation of services without a human in the loop would be inappropriate (RACS submission, page 9; RACGP submission, page 8). Both colleges expressed concern that a greater technological divide could exacerbate inequalities in access to health services (RACS submission, page 4; RACGP submission, page 2). Proposals for the role a regulatory body should play in overseeing AI in healthcare vary between submissions, but there is general consensus that health professionals should be represented on that body (RACS submission, page 6; RACGP submission, page 2).
We await the release of the DOH's findings.
TGA Consultation: Clarifying and strengthening the regulation of AI
As part of the DOH, the Therapeutic Goods Administration (TGA) regulates therapeutic goods, including AI models and systems when they qualify as medical devices under the Therapeutic Goods Act 1989 (Cth) (see section 41BD), including as Software as a Medical Device (SaMD). The TGA commenced consultation concurrently with the DOH, but focussed specifically on how AI might impact the regulation of therapeutic goods, including medicines, medical devices and biologicals. The purpose of the TGA Consultation was to seek feedback on whether the current TGA legislative framework remains appropriate given the integration of AI in therapeutic contexts, and to explore potential areas for further consultation to support the Australian Government's national approach to safe and responsible AI.
The TGA's consultation period ended on 20 October 2024. On 7 February 2025, the TGA published a summary of the 53 responses received. We provide a summary of key themes and stakeholder responses below.
Updating definitions
The TGA considered the need to amend key definitions in the Therapeutic Goods Act 1989 (Cth) (TG Act) to ensure the appropriate legal entity is responsible for activities performed by AI. In particular, the TGA proposed amending the following definitions:
- 'supply' - to capture the supply of software via virtual or online platforms, such as app stores;
- 'manufacturer' - to capture the legal entity responsible for developing and deploying SaMD; and
- 'sponsor' - to include persons who host, provide, or facilitate access to SaMD such as platform hosts or data transferers.
Stakeholder feedback largely acknowledged that a review of the above definitions would assist in clarifying responsibility for AI, in particular:
- AI used in medical devices;
- AI-generated outcomes, and where AI replaces human decision-making; and
- AI supplied through online marketplaces and hosted on overseas servers.
The TGA noted submissions from health professionals were in favour of assigning responsibility for the quality and performance of AI software to manufacturers and sponsors. At the same time, there appears to be consensus among health professionals' responses that they have a role to play in upskilling in AI literacy (understanding the risks, safe operation and ideal use cases of AI) and exercising meaningful oversight of AI output in the clinical context. This position appears consistent with submissions received in response to the DOH consultation.
Reforming risk classification principles
Medical devices are currently classified by the TGA under a risk-based system: the higher the risk a medical device poses to user safety, the higher its classification and the more stringent the applicable regulatory standards. Class III is the highest classification and attracts the most stringent standards.
Under the current TGA medical device classification system, risk is assessed according to the device's intended use, for example:
- Where an app is intended by the manufacturer to be used for disease prediction or to provide prognostic information to inform a health professional, it is generally classified as lower risk (for example, Class I (low risk) under the Therapeutic Goods (Medical Devices) Regulations 2002, Schedule 2, section 4.5).
- If an app is intended to be used by patients for disease prediction without health professional input, or otherwise deals with a life-threatening condition (such as vascular disease or skin cancer), then it is likely to have a higher classification.
More information about how medical devices incorporating AI are currently classified can be found in our article Innovation meets regulation: Medical devices and artificial intelligence.
However, the incorporation of AI into medical devices after classification has in many cases added new capabilities, changing the device's risk profile. The TGA has also acknowledged that the prognostic information produced by AI can significantly influence patient treatment options and outcomes, even where the AI app is supervised by a treating doctor. This has prompted consideration of whether the current classification system remains appropriate to cater for the risks posed by AI.
The TGA noted that, of the responses received, 62% of stakeholders expressed concerns regarding the current exclusion of some low-risk tools, particularly those supplied to consumers without health professional oversight. Yet 61% of stakeholders indicated immediate changes to the classification of such devices were not necessary. Instead, the majority of responses favoured a future review of classification rules for software-based medical devices intended to provide prognosis or prediction. Feedback suggests this review should only be undertaken once more information becomes available regarding the use of such medical devices.
If the TGA does decide to change the existing medical device classification rules, manufacturers could be required to obtain re-certification of medical devices under the new scheme. It appears, however, that the classification of medical devices will not change immediately: any changes would have lead time and would not come into effect for 3 to 5 years.
Labelling requirements and AI
All medical devices must meet the TGA's Essential Principles to be approved for supply in Australia. To improve transparency, the TGA sought feedback on whether manufacturers should be required to provide the following information to consumers:
- whether a device incorporates AI software;
- what specific kind of AI is being used; and
- whether the AI is being used to make decisions about the care of a patient.
Stakeholder feedback revealed broad agreement that more information and transparency are needed about whether a device incorporates AI and the datasets used to train it. The TGA flagged that access to information about therapeutic goods generally, including software-based medical devices, will require further consideration and review of the TGA's Advertising Code and other medical device labelling requirements.
Implications for health service providers
- Changes to the key definitions of 'supply', 'manufacturer' and 'sponsor' could mean entities that are not strictly health sector players or currently subject to TGA regulation (such as software developers) become regulated by the TGA. This might impact how health service providers can access SaMD and the types of software that meet TGA safety requirements. These changes may also help clarify the allocation of responsibility when AI gets things wrong.
- Changes (if made) to the classification system could bring new products (even those previously excluded) within TGA regulation and see previously approved medical devices re-classified. Existing approved medical devices may become subject to additional safety requirements, impacting how and whether they can be used in clinical contexts.
- Health professionals should regularly review the Australian Register of Therapeutic Goods (ARTG) prior to using any medical device (including software incorporating AI) to confirm that a product is in fact approved for the specific therapeutic use, noting that classification may be subject to change.
- Changes to the TGA's website are expected to provide further guidance on, and better accessibility to, relevant medical device regulation.
- Health service providers may begin to see more information from manufacturers of medical devices about incorporated AI systems. Health service providers should review that information and understand, at a high level, how the AI systems work, in order to facilitate informed patient consent and mitigate the risk of AI-related harms.
Regulating AI in healthcare: What's next?
The findings and recommendations from the TGA and DOH consultations will feed into the development of a 5-year roadmap setting out potential regulatory changes, and further consultation is likely to follow. Legislative change is not expected to be immediate: no legislative change is anticipated for 3 to 7 years, and any change will be subject to the introduction of any AI-specific, economy-wide legislation, as recommended by the Senate Select Committee on Adopting Artificial Intelligence.
It can be expected that the DOH and TGA consultation findings will affect numerous players in and outside the traditional health sector, including manufacturers of medical devices, medical device sponsors, healthcare providers and health services.
In the meantime, health professionals, manufacturers, developers and sponsors should be aware that current standards and rules still apply to AI medical devices and their use in healthcare. Health professionals should continue to exercise professional judgment when using AI in a clinical setting, and human oversight of AI outputs remains best practice. In preparation, all players in the healthcare industry should consider their AI governance arrangements and begin to implement oversight procedures to ensure any use of AI (including in SaMD) accords with regulatory standards and best practice.