On 23 July, MinterEllison hosted a webinar with special guest Dr John Lambert, Chief Medical Officer at Harrison.ai, who shared insights and first-hand learnings from his journey deploying AI in clinical settings in Australia. Dr Lambert was joined by MinterEllison Health Industry Leader, Shane Evans, who discussed a governance framework to effectively manage risk at each stage of the AI lifecycle.
The recipe for safe and successful implementation of AI in a healthcare setting
The recipe for safely and successfully implementing an AI strategy in a healthcare setting is twofold. On the one hand, it involves developing the AI, verifying and deploying it and maximising its commerciality. On the other hand, it is critical to develop a comprehensive governance framework which manages and, where possible, mitigates the risks arising from the use of AI in your organisation.
Developing and implementing an AI tool
From his experience at Harrison.ai and earlier as a doctor in NSW Health for 25 years, Director of a hospital Intensive Care Unit and Chief Clinical Information Officer of NSW Health, Dr Lambert summarised the key ingredients for developing and deploying an AI tool in a healthcare setting:
- Frame the question. It is important to know what question you want to answer with an AI tool early. For example, for their IVY platform, Harrison.ai and Virtus Health Limited wanted to answer the question: which embryos are most likely to lead to successful pregnancy after reimplantation?
- Get the data. All AI platforms require data for learning and operating. Obtaining the data may come with technical challenges (for example, if the data has not previously been stored electronically) or legal challenges (for example, if the data is not de-identified). The data used should cover as many as possible of the varieties of data the AI will be required to deal with in the future (for example, data from multicultural patient populations and/or from multiple vendors).
- Label the data. How data is labelled before it is processed by AI will affect the overall quality of any AI model. In a health setting, it is particularly important that a clinically relevant ontology tree is used to label data. For example, the Annalise.ai tool (a joint venture of Harrison.ai and I-MED Radiology Network) uses over 300,000 chest x-rays, each triple-labelled by specialist radiologists.
- Create and train the AI algorithms. Effective AI algorithms should be handcrafted for their individual purpose.
- Scientific validation and regulatory approval. All AI tools should be treated like any new technology or therapeutic intervention being introduced in a healthcare setting. Appropriate methods for scientific validation (such as prospective clinical trials) should be used to demonstrate clinically relevant differences, in addition to compliance with the relevant technical standards.
- Deployment. User-centred design principles, human factors and cognitive science are critical to the deployment strategy for any AI tool, so that uptake is high and behaviour actually changes.
- Maximise commercialisation. This may involve using a Use Case Canvas and/or Data Readiness Framework to assess the viability of proposed AI prior to implementation, as well as continuous efforts post-implementation to maximise commercialisation.
Governance and risk management of AI
AI should be approached in the same way as any new technology or therapeutic intervention being introduced in a healthcare setting. The regulatory requirements for medical devices arise under the Therapeutic Goods Act 1989 (Cth) and associated regulations. Software as a medical device (SaMD) is specifically included under this regime, but is subject to ongoing consultation, with refinements expected in the next 12 to 18 months.
Legal and regulatory frameworks can struggle to keep pace with the rapid technological advancements and uptake of AI in the healthcare sector. Because of this, organisations intending to implement AI tools for health-related purposes must implement a governance and risk management framework which goes beyond those minimum legal and regulatory requirements.
MinterEllison's Health Industry Lead, Shane Evans, said that accountable persons within each organisation should have assurance that the implementation and use of any AI tool will be safe and cost-effective, and will produce better outcomes, with an improved experience, for all stakeholders.
Shane summarised the key ingredients for establishing a comprehensive governance framework for any AI strategy in a healthcare setting to include:
- AI partner. Engage a trusted partner with a track record in health and the ability to work successfully with clinicians, and ensure that you undertake appropriate due diligence before embarking on the project.
- Assurance. The governing body of the organisation is ultimately accountable and should reach an appropriate level of assurance that the AI tool is safe, compliant and cost-effective, and achieves better outcomes with an improved experience, within a framework that appropriately manages risk. The criteria and weighting of these matters will be different for each organisation. Rigorous validation and testing, including clinical trials, are essential before introduction, guarding against a rush to introduce an exciting new technology. Ongoing monitoring following introduction is critical, with regular reporting to the governing body against the assessment criteria.
- Data. Health and personal information will be used to train the AI tool and will then be supplied by patients or consumers once it becomes operational. All organisations should be aware of their obligations under the Privacy Act 1988 (Cth) and state-based privacy legislation, including in respect of mandatory data breach reporting and data mining. Particularly where data is not de-identified, organisations should ensure they have prepared a privacy impact assessment (to assess privacy risks and ensure compliance with privacy laws), a data breach management plan and/or a cybersecurity risk management plan (as applicable).
- Service redesign and people readiness. The AI must integrate within an existing health setting, which may necessitate service redesign and ensuring that the people who will use the technology and deliver health services are ready. Transition risk should be managed when moving from a legacy system to a new one, with heightened risk during any period in which dual systems are operating.
- Managing cultural shifts. The direct and ongoing engagement of clinicians is a critical factor throughout each stage of the project, from planning to ongoing operation. Education and training are important, including to reassure clinicians that the AI is an enhancement of, or assistance to, clinical decisions and care, rather than a replacement. This also supports positive and widespread adoption of the AI, as well as the management of patient and clinician concerns about AI as an alternative to traditional tools.
- Financial risk. In assessing the cost benefit of AI, if reimbursement from Medicare or private health insurance is an important financial element, it is important to assess early on whether the relevant items can be claimed and the amount that will be paid. AI strategies also come with increased risks of errors replicated at scale, and latent risks arising from delayed error identification. Adequate insurance that will respond to these risks, and indemnities from entities with appropriate resources to meet them, will be critical to managing financial risks for the organisation and the clinicians involved.
- Consumer-centred care. Introduction of exciting, market-leading technology can create an edge in a crowded industry. However, it is important to keep in mind the end user - not only the health team of clinicians, but the patient, aged care resident, person with a disability or consumer. You should therefore address the questions: does it achieve a better outcome, and does it provide a better experience? COVID-19 has accelerated changes in models of health care by many years, with increased consumer expectations centred on outcomes, experience and the delivery of care directly to them where possible. Considering these matters at the very start of your AI journey will be important to ensure the end product meets expectations and will be adopted by consumers.
Health care is a people business, with caring health care professionals delivering a critical service to a consumer who is seeking help often during a period of health compromise, sickness or medical uncertainty. Any AI should aim to improve the journey in getting to the best possible outcome for that person.
Contact us if you'd like more information about AI, key success factors for adoption or support in setting up your governance and risk frameworks.