OAIC clarifies artificial intelligence (AI) privacy obligations

7 minute read  21.10.2024 Chelsea Gordon, Sam Burrett

On 21 October 2024, the Office of the Australian Information Commissioner published two guides about how privacy laws apply to AI.


Key takeouts


  • The first guide is for Australian Privacy Principle Entities (APP Entities) using commercially available AI. The second guide relates to APP Entities using personal information to train generative AI.
  • The OAIC confirms it is best practice not to enter personal information into publicly available generative AI tools.
  • The OAIC has also confirmed that a Privacy Impact Assessment should be performed before a new AI system is introduced, and that compliance with the Voluntary AI Safety Standard will help organisations deploy AI systems in a way that complies with privacy obligations.

The Office of the Australian Information Commissioner's (OAIC) much-awaited AI guidance has been published.

The first guide is for businesses using commercially available AI. The second guide relates to use of personal information to develop and train generative AI.

Both guides confirm that a governance-first approach to AI is the best way to properly manage privacy risks. This aligns with the approach of MinterEllison's AI Advisory Practice. In a privacy context, a governance-first approach requires privacy by design, as well as ongoing assurance processes to monitor an AI system's use of personal information throughout its lifecycle.

The guides adopt the same definitions of 'AI system' and 'generative AI' as the Voluntary AI Safety Standard, and the OAIC has confirmed that compliance with that standard will help APP Entities deploy AI systems in a way that complies with their privacy obligations. For further information about the Voluntary AI Safety Standard, see Responsible use of AI: New Australian guardrails released.

In this article we highlight the key takeaways from the first guide, for businesses using commercially available AI. We will publish a second article in this series covering the second guide, on using personal information to develop and train generative AI.

Using commercially available AI: The key takeaways

1. Privacy obligations apply to personal information entered into, and output generated by, AI (if that output contains personal information). APP Entities must embed privacy into their selection and use of any AI system that interfaces with personal information. That includes AI systems trained or tested on personal information, as well as those that will generate outputs containing personal information.

2. Even incorrect AI-generated information about a reasonably identifiable individual will constitute personal information, and must be managed accordingly. This includes hallucinations.

3. Privacy Policies and Collection Notices should clearly outline when and how AI will access and use an individual's personal information, to enable informed consent.

4. Use of AI to generate or infer personal information must comply with Australian Privacy Principle (APP) 3 in relation to collection of personal information.

5. In accordance with APP 6, personal information should only be used in, or disclosed to, an AI system:

  • for the primary purpose for which it was collected (which should be narrowly framed), or otherwise
  • with consent, or
  • where the APP Entity can establish that the secondary use would be reasonably expected by the individual, and is related (or, for sensitive information, directly related) to the primary purpose. To establish that the secondary use was reasonably expected, best practice is to outline the proposed use in the APP Entity's Collection Notice and Privacy Policy.

6. The OAIC has explicitly confirmed it is best practice not to enter personal information into publicly available generative AI tools, such as chatbots.

A risk-based approach to AI & privacy

The use of personal information by AI systems continues to be a significant issue. Unsurprisingly, the OAIC has urged APP Entities to take a cautious, risk-based approach: the higher the risk, the more stringent the guardrails should be. This means APP Entities should take more rigorous steps to inform users about how an AI system may use their personal information where those uses pose greater risks to the individual.

Using personal information to train AI systems

The OAIC has acknowledged that community expectations in relation to AI use are likely to change over time. Nonetheless, it will currently be difficult to establish a 'reasonable expectation' to use personal information for an AI purpose, such as training the tool itself: 'In many cases it will be difficult to establish that a secondary use for an AI-related purpose… was within reasonable expectations'. Best practice is therefore to seek explicit consent before allowing AI to use personal information to train itself, including by providing individuals with a meaningful and informed ability to opt out.

Other points of note on the guidance

Who does this OAIC guidance apply to?

The guidance for businesses using commercially available AI is targeted at organisations deploying commercially available AI products. This means it will cover the majority of Australian organisations using AI systems in the course of their business, including those providing employees with licences to commercially available tools, or building on top of existing platforms, systems or models.

What are the key risks identified by the OAIC guidance?

The guidance identifies key privacy risks that arise when deploying commercially available AI, including bias and discrimination, a lack of transparency, the risk of a data breach, and the risk that an individual will lose control over their personal information. Another significant risk identified is re-identification, especially when cross-matching datasets.

The OAIC confirms generative AI tools carry additional privacy risks, such as misuse of the generative AI system by malicious actors (for example, to create deepfakes), and inaccuracies resulting from, for example, data poisoning.

What if an AI chatbot is used to collect personal information?

The OAIC makes it clear that any collection of personal information by an AI system must specifically comply with APPs 3, 5 and 10. That is:

  • the collection must be objectively reasonably necessary for the entity's functions or activities, carried out by lawful and fair means, and it must be unreasonable or impracticable to collect the personal information directly from the individual (APP 3); and
  • appropriate notice of this collection must be provided to individuals (such as through a Collection Notice) (APP 5); and
  • the APP Entity must take steps to ensure the personal information collected is accurate, up to date and complete (APP 10).

In practice, this means entities collecting sensitive information through an AI chatbot should pause to consider whether that method of collection is appropriate at all and, in the unlikely case they deem it is, should obtain informed consent before collecting that information through the chatbot, including by informing the relevant individual of the risks.

What does the OAIC guidance say about transparency when using AI systems?

A key theme in the guidance is transparency. Not only should the AI system itself be transparent about how it uses personal information; the APP Entity should also be transparent about how AI-related decisions and outputs could affect individuals, and should train staff to provide 'meaningful explanations of AI outputs to affected individuals'.

Are there any resources available to help APP Entities comply with these guidelines?

The OAIC has published two useful checklists: one for selecting a commercially available AI product, and a second for using a commercially available AI product.

Actions for APP Entities

A governance-first approach requires safeguards to be applied at various stages of the AI journey. To implement the OAIC's recommendations in this guide, APP Entities should:

1. Conduct due diligence to ensure the product is suitable for its purpose and does not pose unacceptable security risks, including under APP 11. Consider how the AI system has been trained (and whether the training data was biased) and tested, how human oversight is built into its design, and who will have access to personal information used by the AI tool. These issues should be monitored continually throughout the period of AI use. APP Entities should also consider whether it is necessary for the AI system to have access to personal information at all and, if so, what the data flows will be (including whether third parties will have access to data the entity puts into, or generates from, the AI tool). For commercial products in particular, the APP Entity should understand how the product works and the risks involved, and manage privacy risks by design.

2. Conduct a Privacy Impact Assessment before the AI tool is used.

3. Review its Privacy Policy and Collection Notice(s) before introducing a new AI system, to ensure they provide clear and transparent information about how and when AI will access, use and generate personal information.

4. Consider whether it is appropriate to use AI to generate personal information. The generation of personal information by AI must be reasonably necessary for the APP Entity's functions or activities, and should only be done by lawful and fair means. APP Entities should take particular care when using AI in this manner.

5. Take reasonable steps to ensure the personal information they collect, use and disclose is accurate, especially noting that AI systems can produce inaccurate results, and use watermarks and other disclaimers as appropriate.

6. Put in place ongoing governance and assurance processes that, for example, monitor AI outputs and how those outputs are used by the organisation, and provide ongoing staff training in relation to limitations and risks of AI.

For today and tomorrow, whether you need protection against AI risks (including in relation to compliance with changing legal frameworks), or are shaping your organisation with responsible and ethical AI to enhance and elevate capability, our nationwide AI Client Advisory team will guide you through your AI adoption journey, from insight to strategy and implementation.


Our AI expertise includes legal and policy, risk, workforce, privacy, data protection & cyber, procurement, strategy, and a co-creation model to develop tailored solutions for your organisation. Operating with the highest standards of independence and trust as a firm for almost 200 years, our nationwide AI experts have the know-how and experience to help you make the best decisions, faster.
