In this final article of our series, we turn to the Technology & Data pillar of the Target Operating Model introduced in “AI for in-house legal teams: five pillars for success”.
We explore why lawyers must play an active role in evaluating AI solutions for their teams and provide legal leaders with practical considerations for making informed technology decisions.
How in-house legal teams can evaluate AI solutions
General counsels and legal teams face relentless pressure to deliver more with fewer resources. AI offers a compelling promise: greater efficiency, faster turnaround and improved productivity. But realising that promise is not simply a matter of buying or implementing tools. It requires coordinated action across all five operating model pillars we’ve discussed throughout this series.
General-purpose AI (GPAI) tools like Microsoft Copilot offer lawyers unique opportunities because they work in language, the core medium of legal practice. However, their design as chat interfaces creates fundamental limitations for comprehensive legal applications. GPAI tools are not document management systems, matter management platforms, or specialised legal research systems. They lack the structured workflows, data organisation and feature sets that purpose-built legal technology provides. Beyond these capability constraints, GPAI effectiveness also depends on the data sources available: when internal legal resources are insufficient, these systems can default to general web searches (e.g. Bing for Microsoft Copilot) rather than authoritative legal databases. This dual limitation in both functional capabilities and data reliability makes them inadequate as complete solutions for the specialised, high-stakes nature of legal work.
To move beyond these limitations, organisations typically pursue one of three paths: procurement (buying off-the-shelf solutions), partnerships (working with vendors or law firms to develop tailored solutions), or production (building your own bespoke solutions). Regardless of the chosen path, success hinges on many factors, two of the most important being:
- the quality of the underlying data; and
- the ability to apply AI outputs in a practical, legally sound way.
Embedding lawyers at the heart of tech and data decisions
Implementing AI is not just a technical exercise; it’s a strategic process that demands legal expertise. Data is the foundation of every AI system, but knowing which data matters and how outputs should be evaluated requires more than technical skill. It requires lawyers who understand the nuances of legal risk and the practical realities of their team’s work.
Consider a simple example: an AI system designed to review NDAs. A technology expert can test whether the system runs, but they cannot determine if it correctly identifies clauses that expose the organisation to risk. Lawyers, on the other hand, know what “good” looks like and can set the quality standards that matter. They also understand how NDAs are negotiated in practice and that you cannot control what the other side will do. Those practical insights help lawyers distinguish between features that sound impressive but add friction, and those that genuinely save time or create value. Without their input, even the most advanced technology can fail to deliver safe, reliable and effective outcomes. This is why lawyers must be at the centre of technology decisions for their team: not just as end-users, but as strategic evaluators and guardians of legal integrity.
At the same time, legal leaders should recognise that a baseline understanding of technology concepts will amplify their impact. Knowing how AI systems work, what data they rely on and how outputs are generated enables lawyers to articulate requirements clearly and engage confidently with vendors and internal IT teams.
Understanding the technology
Most in-house legal teams start their AI journey by procuring solutions from vendors, which is why we focus on procurement below. However, many of the same principles apply if you’re partnering or producing.
Procurement is where marketing meets reality. Vendor demos often showcase ideal scenarios, not the complexity of your legal environment. To cut through the noise, you need questions that expose the technical choices behind the glossy pitch and their implications for your use cases.
Below are four critical dimensions to evaluate. Each one determines whether your AI investment will deliver value or create risk.
1. Data and model architecture
Every AI system starts with a model, but the model alone doesn’t define its value. What matters is how that model interacts with your legal data and workflows. If the data is incomplete, inaccurate, or poorly structured, the outputs will fall short of expectations. Most legal AI systems rely on commoditised models like OpenAI’s GPT, Anthropic’s Claude, or Google’s Gemini. The real differentiation lies in how vendors augment these models, such as via proprietary legal content, fine-tuning and integration capabilities that shape outputs for your jurisdiction and practice areas.
For legal leaders, understanding this architecture is critical. If you deploy multiple AI systems without knowing what powers them, you risk redundancy instead of strategic diversity. Different models can offer varied perspectives, which is invaluable for complex legal analysis. Procurement decisions should therefore go beyond feature lists and uncover the technical foundations that will determine whether your AI investment creates competitive advantage or unnecessary duplication.
Key questions to ask vendors:
- What AI model does your system use?
- What do you provide on top of it, such as prompting, proprietary data, or fine-tuning?
- What data was the model trained on? Is it jurisdiction-specific or primarily US-centric?
- How does the system access and use our data? Can we ingest our own data sources?
2. Security, privacy and governance
Legal teams operate in a world of confidentiality and privilege. Introducing AI into that environment brings new risks that traditional security frameworks don’t fully address. Beyond encryption and access controls, AI systems face vulnerabilities like prompt injection, where malicious inputs manipulate outputs; model inversion, which can expose sensitive training data; and data poisoning, where tampered training data corrupts the system’s behaviour over time. These risks are unique to AI and require governance measures that go far beyond standard IT protocols.
For legal leaders, this means asking hard questions about how vendors manage these risks. Good governance is the backbone of trust in AI-assisted decisions. Continuous monitoring, audit trails and human oversight protocols are essential to ensure that AI outputs remain reliable and defensible. Without these safeguards, the promise of efficiency can quickly turn into a liability.
Key questions to ask vendors:
- Where is data stored and processed? How is it encrypted?
- Do you use customer data to train, fine-tune, or improve your system? If yes, how is consent obtained?
- How does the solution comply with relevant privacy regulations?
- Do you have documented processes for managing AI risks, including monitoring, incident response and governance?
3. Accuracy, fairness and reliability
AI systems are only as good as the data they learn from, and most models are trained predominantly on US-centric materials. For Australian legal teams, this creates accuracy gaps that can manifest in subtle but serious ways, such as incorrect statutory references, misapplied principles and advice that fails to reflect local regulatory frameworks. Bias in training data can also influence outputs even when no international context is involved: a question about a purely domestic Australian matter with no cross-border elements may still be answered through US-influenced legal reasoning frameworks.
Unlike traditional software bugs, AI errors are often convincing and hard to detect. This makes human oversight and explainability features non-negotiable. Legal leaders must ensure that any AI system they adopt has been validated for accuracy and fairness, and that they can test it on real use cases before committing. Reliability isn’t just a technical issue; it’s a professional obligation.
Key questions to ask vendors:
- What steps do you take to ensure your AI systems are accurate, fair and free from bias?
- Have these been independently validated?
- Can we test the system on our actual use cases and data before committing?
- How are AI-driven decisions explained to users?
- Can users see the AI's thought process and the data sources it used?
- How is human oversight enabled for critical decisions?
4. Integration and ongoing support
AI implementation is not a one-off project. It’s a long-term strategic partnership. For maximum impact, AI systems must integrate seamlessly with your existing document management systems and workflows. But integration also raises critical questions about risk exposure. Full database access might sound efficient, but it can lead to unintended consequences, such as sensitive or privileged files with incorrect permissions being surfaced inappropriately.
Beyond integration, AI systems require continuous optimisation. Models drift as training data becomes outdated, which means regular updates and proactive monitoring are essential. Vendors should provide comprehensive support, including user training and governance frameworks. Exit strategy planning is equally critical as AI systems accumulate value through custom prompts, workflows and institutional knowledge. You need clarity on what happens to that data if you decide to move on.
Key questions to ask vendors:
- How will the system integrate with our existing systems and workflows?
- What AI-related training and ongoing support do you provide?
- How often do you update the AI models in the system?
- What happens to our data and customisations if we decide to terminate the service?
Legal's role in AI success
You don’t need to be a technologist to lead AI decisions, but you do need enough understanding to ask the right questions and make informed choices. Legal's role is to define requirements, assess outputs and risk, and ensure alignment with legal strategy. While IT teams bring valuable technical expertise, these responsibilities also require legal judgment and cannot be fully delegated.
AI success isn’t about finding the perfect tool. It’s about ensuring technology serves your legal strategy, manages risk and delivers measurable value. Technology & Data is only one pillar of a broader framework, alongside Strategy & Governance, People & Culture, Sourcing & Delivery and Process & Operations. Only by aligning all five foundational pillars can organisations unlock the full potential of AI.