Procuring AI: Key considerations and strategies

10 minute read · 26.08.2025 · Mark Teys, Chelsea Gordon, Sam Burrett and Sarah Summers

As AI becomes central to business operations, this guide helps organisations navigate the legal, ethical and practical risks of AI procurement.


Key takeouts


  • AI procurement must be risk-classified to determine due diligence, governance, and contract protections, especially for high-risk use cases like legal or personal data processing.
  • Contracts should include clauses on data ownership, liability, transparency, and human oversight to ensure compliance with privacy, IP, and ethical standards.
  • Organisations must verify vendor governance, audit trails, and certifications (e.g. ISO/IEC 42001) to ensure responsible AI deployment and mitigate legal exposure.

Businesses are increasingly procuring artificial intelligence. Whether acquiring standalone AI systems or engaging vendors whose services include embedded AI components, organisations must assess a wide range of legal, ethical, and operational considerations in the AI procurement process.

Organisations can ask the following questions when acquiring AI systems or engaging vendors whose services include AI:

  • What data was used to train the AI system, and will data input by the user be used to train the model for the AI provider's other customers?
  • How often is the AI system trained?
  • Does the AI system use live data sourced from the internet, or is it contained to a specific set of data input by the user?
  • Does the AI system comply with privacy and discrimination laws?
  • Who owns the intellectual property generated by the AI system?
  • How is liability for AI-generated outputs managed?
  • Do the AI systems align with applicable governance and ethical standards?
  • What processes are in place for monitoring and mitigating AI risks?
  • Does the AI vendor maintain audit trails?

The procurement process must also account for the organisation’s risk profile, use case sensitivity, and regulatory obligations.

In this practical guide to AI procurement, we outline key considerations, recommended contractual clauses, and a checklist to support informed and responsible procurement decision-making.

1. Key considerations when procuring AI systems

As organisations increasingly integrate AI into their operations, procurement teams must take a structured and risk-aware approach to sourcing these technologies. AI systems can vary widely in complexity, impact, and regulatory exposure, so understanding the specific use case and associated risks is essential. This section outlines key considerations to help organisations assess, classify, and manage AI procurement in a way that aligns with legal obligations, ethical standards, and operational needs.

1.1 Risk-based analysis

(a) AI systems should be categorised by pre-established organisational risk level (e.g. high, low or exempt) based on their intended use, the sensitivity of data involved, and the potential impact on individuals, operations or compliance obligations. For example, AI used in legal decision-making or personal data processing may be considered high-risk, while AI used for internal document sorting may be considered low-risk. Each AI use case should be assessed to determine the nature and scope of decisions being automated, whether the AI system interacts with personal, confidential, or regulated data, and the likelihood of harm or legal exposure if the AI system fails or behaves unexpectedly. This classification informs the depth of due diligence, governance and contractual protections required.

(b) The AI system risk classification informs the level of scrutiny required during the procurement process, and ultimately the level of contractual protection necessary when procuring the relevant AI system. Procurement of high-risk systems may require independent audits, clear documentation of training data sources and model limitations and ongoing monitoring and human oversight mechanisms in the contract. When contracting high-risk AI systems, organisations should ensure contracts include robust safeguards such as warranties regarding data handling and model behaviour, indemnities for breaches of privacy or regulatory obligations, and clear obligations around transparency, particularly where automated decision-making is involved. It’s also important to secure rights to audit or review the AI system’s performance and updates, allowing the organisation to maintain oversight and ensure ongoing compliance with legal and ethical standards.
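The triage described above can be sketched as a simple rule-based classifier. This is an illustrative sketch only: the risk tiers, criteria and safeguard lists below are assumptions for demonstration, not a prescribed legal framework, and each organisation should substitute its own categories and thresholds.

```python
from dataclasses import dataclass

# Illustrative only: the tiers, criteria and safeguards here are
# assumptions, not a prescribed framework or legal advice.

@dataclass
class AIUseCase:
    name: str
    automates_decisions: bool      # automates decisions affecting individuals?
    handles_regulated_data: bool   # personal, confidential or regulated data?
    failure_causes_harm: bool      # would failure create harm or legal exposure?

def classify_risk(use_case: AIUseCase) -> str:
    """Map a use case to an organisational risk tier (high / low)."""
    if (use_case.automates_decisions
            or use_case.handles_regulated_data
            or use_case.failure_causes_harm):
        return "high"
    return "low"

def required_safeguards(tier: str) -> list[str]:
    """Deeper risk tiers call for deeper due diligence and contract terms."""
    if tier == "high":
        return ["independent audit", "training-data documentation",
                "human oversight", "audit rights", "indemnities"]
    return ["standard terms"]

legal_triage = AIUseCase("legal decision support", True, True, True)
doc_sorting = AIUseCase("internal document sorting", False, False, False)
print(classify_risk(legal_triage))  # high
print(classify_risk(doc_sorting))   # low
```

The point of the sketch is that the classification drives everything downstream: a "high" result triggers the heavier safeguard list, which in turn shapes the contractual protections negotiated.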

1.2 Due diligence into AI vendors

Before selecting an AI vendor, organisations should conduct thorough due diligence to ensure the AI system is legally compliant, transparent, and well-governed. Understanding the model’s inner workings and training data is important, and vendors should be able to provide clear documentation such as model cards or technical summaries that explain how the AI system functions and its limitations. Vendors should also be able to provide information on training, such as how often the model is trained and updated or new models are released, and whether live data is sourced from the internet or contained to specific data sets input by the organisation.

Organisations should consider how their data is used once it is input into the AI system. Many vendors will seek to use organisational data to improve their AI systems or build new products. Organisations should consider whether this is an appropriate use of their data, and might choose to introduce additional safeguards, such as de-identifying or aggregating the data, whether through a contractual term or through their own organisational controls around use of the AI system.

Procurement teams should consider whether the AI vendor has established internal governance frameworks for oversight, including audit trails, accountability mechanisms, and procedures for monitoring model performance and ethical risks.
Finally, organisations should check whether the AI system has been independently certified, for example against standards like ISO/IEC 42001, which signal a commitment to responsible AI development and deployment. These questions help ensure the AI system aligns with legal obligations and organisational values, and that any risks are identified and managed before procurement.

1.3 Ethical and legal alignment

To ensure responsible AI procurement, it is critical that organisations confirm any system and its approved use case aligns with their ethical standards, and complies with applicable laws. This could mean taking steps to verify that the AI system does not perpetuate bias, misuse personal data, or make decisions that could harm individuals or breach legal obligations. Transparency is key: organisations should favour systems that offer clear documentation of how decisions are made and allow for meaningful human oversight. This includes the ability to intervene, audit, or override automated outputs where necessary. By embedding these principles into procurement processes, organisations can better manage legal risk, uphold public trust, and ensure that AI technologies support, not undermine, their values and responsibilities.

1.4 Intellectual property issues

(a) Third party IP infringement claims are an inherent risk in IT procurement. However, the use of large data sets in the training and operation of AI has significantly increased this risk in two key ways, where:

(i) data that is subject to copyright protection is input into an AI system; or

(ii) outputs reproduce copyrighted training data.

(b) Organisations procuring IT products and services can usually mitigate the risk of third-party infringement claims by shifting this risk to the AI vendor in the contract, for example by obtaining an indemnity. If AI vendors resist accepting this transfer of risk, organisations will need to be more involved in the procurement process and conduct due diligence to satisfy themselves that the data sets used to train the AI system do not breach any copyright laws.

(c) As AI outputs generally involve little human authorship, those outputs likely won't be protected by current Australian intellectual property laws (although this remains a matter of much debate in legal circles). Organisations should seek clear contractual statements specifying that the outputs of the AI system are the confidential information of the organisation/user, making clear that the organisation, rather than the AI vendor, controls the use of those outputs.

1.5 Privacy concerns

(a) AI systems that use large datasets can pose privacy risks, especially if personal information is included in training data. Australia’s privacy laws have become stricter under recent reforms, which now require organisations to update their privacy policies to disclose when decisions are made using automated processes. This reflects growing concern over AI and the use of personal data. Organisations using AI may face fines or lawsuits if privacy laws are breached, and must be cautious of AI vendors wanting to use client data to improve their systems, as that data could then be exposed elsewhere.

(b) To manage these risks, organisations should seek to include protections in contracts with IT suppliers, such as indemnities, data protection requirements, and breach notification obligations.

2. Key contractual clauses: procurement checklist

Procurement teams may find it helpful to refer to the following checklist when sourcing AI tools, particularly those intended for use in sensitive or high-impact areas.

This checklist outlines key contractual clauses that can help manage legal, ethical, and operational risks associated with AI systems.

The ability of an organisation to negotiate these clauses into a contract for the procurement of AI will depend on its bargaining position relative to the AI vendor. However, by incorporating these considerations early in the procurement process, organisations can better safeguard data, ensure compliance, and promote responsible AI deployment.

2.1 Data use and ownership

  • Define ownership of inputs, outputs and training data, and require that the AI system not be trained on any organisational information input into it (including personal or confidential information).
  • Restrict vendor rights to use or sell de-identified data.
  • Include clauses on data residency and cross-border transfers.

2.2 Liability and indemnity

  • Require indemnity for breaches of law, IP infringement, and harm caused by AI outputs.
  • Ensure that liability arising under these indemnities is uncapped for the AI vendor, and include carve-outs from any liability cap for gross negligence or wilful misconduct so that the AI vendor bears the risk of these amounts.

2.3 Performance and service levels

  • Specify measurable service levels for availability, accuracy and responsiveness, along with fallback protocols.
  • Include provisions for model updates, retraining and performance audits.

2.4 Transparency and explainability

  • Require the AI vendor to provide documentation or demonstrations explaining how the AI system works (e.g. model cards, decision logic summaries).
  • Include obligations to disclose when automated decision-making is used, and specify responsibilities for bias detection and safety testing pre- and post-deployment.
  • Ensure the organisation has the right to audit or review the AI system’s outputs and decision-making processes.
  • Where appropriate, include requirements for vendor alignment and third party certification against ISO/IEC 42001 (or equivalent standards). Consider whether third party certification should be required on an annual basis.

2.5 Compliance and regulatory alignment

  • Include warranties that the AI system complies with applicable laws (e.g. privacy, anti-discrimination, consumer protection), and that the AI system does not infringe third party IP rights.
  • Require the AI vendor to notify the organisation of any regulatory investigations or enforcement actions related to the AI system.
  • Include obligations for the AI vendor to assist with regulatory inquiries or audits instigated against the organisation.

2.6 Human oversight and intervention

  • Ensure the AI system allows for human review, override, or appeal of automated decisions.
  • Include obligations to support human-in-the-loop processes where required by law or organisational policy.
  • Require the AI vendor to provide tools or interfaces that enable effective oversight.

2.7 Security and incident response

  • Specify minimum security standards (e.g. encryption, access controls, secure development practices).
  • Require prompt notification of security incidents or data breaches and compliance with a business continuity plan approved by the organisation.
  • Include cooperation obligations for investigating and remediating incidents.

2.8 Termination and exit rights

  • Include rights to terminate the contract if the AI system fails to meet specified legal, ethical, or performance standards.
  • Ensure the organisation has access to outputs and that data is returned upon termination.
  • Require the AI vendor to assist with transitioning to a new system or provider.
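For procurement teams that track contract reviews in a structured way, the checklist above can be represented as data and checked for coverage during negotiation. A minimal sketch follows; the category keys, clause names and coverage logic are illustrative assumptions mirroring sections 2.1 to 2.8, not legal advice or a complete clause inventory.

```python
# Illustrative sketch: categories mirror checklist sections 2.1-2.8 above.
# Clause names and the coverage check are assumptions, not legal advice.

CHECKLIST = {
    "data_use_and_ownership": ["input/output ownership", "no training on org data",
                               "data residency"],
    "liability_and_indemnity": ["indemnity for breaches", "uncapped indemnity liability"],
    "performance": ["service levels", "model update provisions"],
    "transparency": ["model documentation", "audit rights", "ISO/IEC 42001 certification"],
    "compliance": ["legal compliance warranties", "regulatory notification"],
    "human_oversight": ["human review/override", "oversight tooling"],
    "security": ["minimum security standards", "breach notification"],
    "termination": ["termination rights", "data return", "transition assistance"],
}

def missing_categories(clauses_present: set[str]) -> list[str]:
    """Return checklist categories with no negotiated clause yet."""
    return [cat for cat, items in CHECKLIST.items()
            if not any(item in clauses_present for item in items)]

# Hypothetical draft contract covering only three clauses so far:
draft_contract = {"service levels", "breach notification", "indemnity for breaches"}
gaps = missing_categories(draft_contract)
print(sorted(gaps))
```

A gap report like this makes it easy to see, early in negotiation, which of the eight checklist areas still have no contractual protection at all.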

MinterEllison provides full-service IT legal and consultancy services with extensive experience in IT contracting, artificial intelligence systems, privacy and data protection and cyber security.

Please contact us if you would like assistance in any aspect of your organisation's IT and AI procurement needs.
