As we step into a new year, artificial intelligence has firmly moved from an emerging technology to a cornerstone of business operations, driven by rapid advances in generative AI. Organisations are increasingly undertaking projects to implement AI agents and other AI-enabled tools. These projects present unique commercial and contractual risks that require careful management. Getting the contract right from the outset can mean the difference between a successful AI implementation and costly disputes, vendor lock-in or heightened cyber security risk. This article explores the critical considerations for contracting AI implementation projects in 2026.
1. Planning and managing project scope and performance
Defining business need and planning for procurement
Before commencing procurement, organisations should identify specific business needs and assess whether an AI solution represents the optimal approach to achieve desired business outcomes and return on investment. Consider how the project fits into your organisation's overall AI governance framework, including AI risk appetite and data governance policies. Robust procurement planning enables organisations to select appropriate AI solutions and negotiate contract terms aligned with their specific operational requirements.
Defining and managing project scope and performance metrics
Organisations should address the following considerations when scoping AI implementation projects:
- Specific use cases with detailed project deliverables and milestones.
- Objective acceptance criteria (functional, performance, security, accuracy) with remediation rights and no deemed acceptance.
- Third-party dependencies and applicable licensing terms.
- Measurable performance targets (accuracy rates, latency, throughput, explainability).
- Required level of human-in-the-loop involvement.
- Ongoing licences or managed services for hosting, maintaining, supporting, training and updating AI deliverables.
- AI learning approach and permitted training datasets, including use of your organisation's data.
- Known risks and limitations, with mitigations.
Organisations should incorporate appropriate project and change management procedures, including clauses dealing with delays and relief events, to ensure provider accountability for timely delivery and performance standards.
Structuring pricing and payment models to manage performance
Strategic payment structures serve as critical mechanisms for managing supplier performance and mitigating project risk. Organisations should evaluate whether proposed payment models adequately protect their interests – particularly through milestone-based fee structures and payment withholding rights for unmet milestones.
Ongoing licence and support fees should be transparent, commercially reasonable, and predictable. Further, given the speed at which AI is evolving, consider whether long-term commitments to particular AI products align with organisational flexibility requirements.
Aligning procurement with Responsible AI principles
Organisations must ensure AI implementation projects comply with internal governance frameworks, ethical standards, and applicable legal requirements, including privacy/safety-by-design principles. Preference should be given to implementation providers that offer clear documentation of how decisions are made and allow for meaningful human oversight, including the ability to intervene, audit or override automated outputs where necessary.
2. Data rights
Generative AI tools employ models trained on extensive datasets to generate novel content (e.g. text, images, audio, video, code). Given that AI model efficacy is fundamentally dependent on training data quality, AI developers prioritise securing rights to high-quality datasets. The most operationally relevant data often comprises proprietary, confidential and personal information, including data provided by the organisation for project purposes, user inputs, and AI-generated outputs (collectively, “Project Data”). Consequently, data ownership, privacy protection, and cybersecurity are key concerns in AI implementation projects.
Effective management of these concerns necessitates careful consideration of:
- Data ownership and control: Establishing clear ownership and control rights over Project Data is essential. Organisations should note that intellectual property rights in data points or AI-generated outputs may be limited or non-existent under current Australian law (see our previous article).
- Data usage and training: Organisations must scrutinise how implementation providers will use Project Data. Critical questions include whether providers will access or utilise such data outside the contracted AI tool or for purposes beyond service delivery (whether in de-identified form or otherwise). Specifically, will providers license, sell, commercialise or share Project Data with third parties, or use it to train or enhance AI models, tools or agents deployed for other customers? Organisations should consider whether these uses could confer advantages on competitors or erode the organisation's own competitive position.
- Privacy and confidentiality: Organisations should assess whether confidential, sensitive, or personal information will be utilised in the AI tool’s training or operation, or generated by the AI tool. Contracts should specify whether Project Data will be treated as confidential information subject to appropriate confidentiality obligations. Implementation providers should demonstrate robust information management practices to safeguard privacy, maintain confidentiality, and ensure compliance with applicable privacy legislation.
- Data de-identification: Organisations should determine whether Project Data will be de-identified and aggregated, or if it will remain attributable to the organisation or susceptible to re-identification. Assessment of regulatory, reputational, and commercial risks associated with data sharing, even in de-identified form, is essential.
- Cybersecurity: AI tools present distinctive cybersecurity risks that extend beyond those inherent in traditional IT systems, including adversarial machine learning attacks (such as prompt injection and data poisoning) and risks arising from AI tool behaviour (such as inadvertent exposure of sensitive data or generation of fabricated information). Accordingly, organisations should implement enhanced security protocols and contractual requirements specifically designed to address these AI-specific vulnerabilities.
Data governance and management considerations should be addressed at the earliest stages of procurement planning. Organisations should typically seek to retain ownership and control over Project Data while ensuring data-sharing arrangements align with internal policies and legal obligations. Before consenting to data sharing arrangements, organisations should assess the commercial value and sensitivity of the data and evaluate whether the organisation could derive independent commercial or competitive advantage from such data. Organisations may also negotiate for discounted pricing or other compensation if the AI implementation provider stands to benefit commercially from that data.
3. IP rights
IP ownership and licences to AI project deliverables
AI implementation projects present intellectual property issues that may constrain organisational rights to utilise project deliverables. While contracts may purport to grant customers “ownership” of AI project deliverables (such as custom AI tools, agents, or bots), these deliverables typically incorporate underlying proprietary components – such as pre-existing models, algorithms, software code or datasets – which the customer does not own, and in respect of which the customer may hold insufficient licence rights for ongoing use after the expiry of term-based licences.
Organisations should confirm what underlying intellectual property is incorporated within AI project deliverables and scrutinise contractual IP licensing provisions to ensure adequate licence rights for intended purposes, including post-termination use where required. Particular attention should be given to ensuring any ongoing licensing requirements (and licence fees) do not create de facto vendor lock-in through long-term licence dependencies that prove commercially or operationally difficult to exit.
Portability of AI project deliverables between AI platforms
A related consideration concerns the portability of AI agents and project deliverables across different AI platforms. Organisations should assess whether it is both technically feasible and contractually permissible to transfer and deploy such deliverables to alternative AI platforms and evaluate the associated modification requirements and costs. Platform-specific dependencies may create additional vendor lock-in risks, binding organisations to long-term licensing arrangements with particular AI platform providers.
4. IP infringement issues
Third-party intellectual property infringement claims represent an inherent risk in IT services and software procurement. Generative AI models, however, present elevated copyright infringement risks attributable to their training methodologies and operational characteristics:
- Training data: The use of copyright-protected materials as training data during model development may constitute reproduction and potentially communication of copyright works; and
- AI-generated outputs: Generation of outputs that reproduce substantial portions of third-party copyright works (or create adaptations thereof), and their subsequent communication to users, may constitute infringement if not appropriately licensed.
In either scenario, absent proper licensing, such activities may infringe the exclusive rights of copyright owners (see our previous article). These risks are particularly pronounced where implementation providers lack complete visibility into training data used in underlying AI models, especially when deploying third-party foundation models.
Further intellectual property risks emerge where implementation providers incorporate AI-generated code containing open-source software (OSS) components into project deliverables, potentially infringing OSS licences through reproduction of open-source code without compliance with applicable licensing terms and copyright obligations (see our previous article).
Organisations should seek to mitigate these risks through comprehensive contractual indemnities from AI implementation providers – without inappropriate limitations or carve-outs – covering intellectual property infringement claims, confidentiality breaches, and privacy violations, with express coverage of both training data and AI-generated outputs. Rigorous due diligence may be necessary to verify that training datasets and methodologies do not infringe copyright, violate OSS licences, or contravene applicable legal requirements.
Key takeaways
As we begin a new year, organisations are resetting priorities and accelerating AI adoption to stay competitive. Robust contracting practices are essential to capitalise on these opportunities while effectively managing associated risks. By addressing these critical considerations now, organisations can lay stronger foundations for successful AI implementation throughout 2026. It’s also the ideal time to monitor the rapidly evolving AI regulatory landscape and integrate emerging requirements into governance frameworks and procurement strategies to ensure compliance and resilience in the months ahead.