AI for in-house legal teams: leading people through AI change

6 minute read  13.11.2025 Alexandra Vost and Sruti Venkatesh

Discover why legal teams hesitate to adopt AI and explore practical change management strategies to build trust and encourage engagement.


Key takeouts


  • AI adoption is as much about people as it is about technology - trust, identity, and culture shape how legal teams engage with AI tools.
  • Hesitation is a rational response to AI, particularly for lawyers. Addressing concerns with empathy, clarity, and support is essential for meaningful adoption.
  • To build AI engagement, legal leaders need to create a culture where curiosity is encouraged and it is safe to learn, experiment, and question.

In this article we explore elements of the People & Culture pillar of the Target Operating Model as introduced in our insights article 'AI for in-house legal teams: five pillars for success'.

The focus of this article is to guide legal leaders in supporting their teams through effective AI adoption with empathy, clarity, and practical steps.

Introduction

You’re a General Counsel or senior leader in an in-house legal team. You see the potential for AI to transform how your team delivers legal services. But as you introduce AI tools, you notice something: apart from a few tech enthusiasts, most of the team isn’t engaging. They’re not objecting loudly, but they’re not using the tools either.

For many, this is a natural response to uncertainty. In this article, we’ll explore why lawyers often hesitate to adopt AI and how leaders can build trust in new ways of working by addressing the real concerns on their team members' minds. With empathy, clear communication, and the right support, you can help your team confidently engage with AI and start realising its value. 

Why do legal teams hesitate to adopt AI?

Hesitation is a natural and understandable response to any significant change, and the introduction of AI tools is no different. Below are five common reasons legal teams may be slow to engage with AI, along with practical, empathy-driven strategies leaders can use to build trust, support learning, and help their teams confidently adopt these tools.

1. “If AI can do some of my work, what does that mean for me?”

AI raises big questions: If technology can do parts of my job, what’s my value? The introduction of AI into legal workflows challenges long-standing perceptions of the legal profession, particularly the billable hour model and the prestige associated with being a trusted expert. When success has traditionally been measured by time spent and the mastery of specific legal domains, the prospect of completing work in half the time and relying on AI for research and drafting can feel both disruptive and disorienting.

How to respond:

  • Emphasise the value of experience: Quality AI outputs are dependent on the user having the appropriate subject matter expertise to review and tailor them based on the relevant context, and to overlay judgement and advice. The experience your team has acquired over the years will be crucial in deriving value from AI use.  
  • Reframe the narrative: Position AI as a tool that removes low-value tasks so your team can focus on strategic, meaningful work. It’s augmentation, not replacement. It provides the opportunity to be more valuable to your internal clients and help them meet the goals of the organisation. 
  • Highlight the human advantage: Reinforce that the ability to empathise, advocate for clients and build trusted relationships remains a uniquely human strength. It also means that understanding the dynamics and many moving parts of the business, as in-house lawyers often do due to their central role, becomes more valuable. These elements of your team's work cannot be replaced.

2. “If AI makes mistakes, how can I trust it?”

Lawyers are trained to be meticulous, and rightly so. Law is a profession where accuracy and thoroughness matter. Generative AI’s potential for errors (or “hallucinations”) can be a major concern. Add to that its unpredictability due to its probabilistic rather than deterministic model (producing different outputs for the same prompt), and it clashes with a profession built on consistency and precedent. It will be important for your team to feel confident in assessing and verifying AI outputs in order to feel comfortable integrating AI into their daily workflows.

How to respond:

  • Set clear guardrails: Make it explicit that AI outputs are drafts, not decisions. There is a reason the tools are called "assistants". Human review is non-negotiable and critical. These guardrails should be bolstered with appropriate training to develop the team's AI literacy, helping them understand the limitations of the tools and methods for identifying hallucinations.  
  • Start safe: Begin with low-risk use cases like summarising internal policies before moving to more complex, high-risk work, in order to safely test AI's boundaries and grow confidence and competency.
  • Create process predictability: Even if outputs vary, the AI-enabled workflow should feel stable: same steps, same prompt structure, same review points, same accountability.

3. “What if I don’t know how to use it?”

It’s not always easy to admit when something feels difficult or unfamiliar. The challenge of learning a new skill, especially amid the rapid evolution of tools and expanding AI capabilities, can quickly become overwhelming. This challenge is often amplified in high-performing environments like legal teams, where there can be a reluctance to ask questions for fear of appearing uninformed. Experimentation may also feel risky, as failure is often perceived negatively rather than as a learning opportunity.

How to respond:

  • Normalise learning: Frame AI as a new skill for everyone, including leadership. Share your own learning curve openly, highlighting where you have made mistakes or don't know an answer to a question. This helps others feel comfortable admitting their own AI knowledge gaps. 
  • Create safe spaces: Run short, informal “AI clinics” where questions are encouraged and mistakes are expected. Within these clinics, it will be important to establish a respectful dialogue and ensure all voices are heard.
  • Reward curiosity: Recognise and celebrate team members who experiment and share learnings, particularly when the experiments have not produced the desired output. Openly explore what went wrong and how to learn from it next time.

4. “Do I really have time to learn this?”

Even when lawyers are open to using AI, time remains a significant barrier. Learning a new tool can feel like an added burden on top of an already demanding BAU workload. The natural question becomes: Will this investment pay off? Initially, adopting AI may actually slow things down. Teams will need to experiment, refine their prompting techniques, and rethink their workflows to extract real value. This learning curve often involves trial and error, and in the short term it can lead to a temporary dip in efficiency and productivity (the dreaded "J curve"). This side effect may feel unacceptable amid existing workload pressures.

How to respond:

  • Make it easy to start: Begin with small, practical use cases in order to integrate AI into existing workflows. This helps stabilise the learning curve and allows the team to see time savings sooner. 
  • Visualise the payoff: Show how AI can save time on low-value tasks using real data, enabling your team to focus on more strategic work. Help them visualise what their day could look like after overcoming the initial learning curve and lessening the burden of repetitive tasks.
  • Create space for learning: Offer flexible, bite-sized learning opportunities - like on-demand resources or 15-minute lunch and learns - that fit into busy schedules. In addition, give your team permission to engage with this learning (i.e. mandate dedicated AI time). It will be important for leadership to reinforce that this is a worthwhile use of their time.

5. "Is this too good to be true?"   

The legaltech market can be full of bold claims. For lawyers trained to question and verify, this will naturally trigger some scepticism. When reality doesn’t match the promises that have been made, trust is diminished. The truth likely lies between these extremes: AI may not completely transform the legal profession overnight, but it’s not going anywhere and it will be a permanent fixture in the ecosystem of how legal work gets done. Its full impact will emerge gradually, shaped by how it's adopted and integrated into legal workflows.

How to respond:

  • Be honest about limitations: Acknowledge what AI can and cannot do, and encourage testing the boundaries. Avoid overpromising its functionality to your team. It will be important to position AI as an evolving tool, not a magic bullet: you get out of it what you put in, and capability will develop quickly.
  • Show real examples: Demonstrate AI in action on actual legal tasks. Avoid relying solely on vendor-led use cases to assess value as these sessions can gloss over limitations and focus on ideal scenarios. 
  • Link to business value: Share tangible results from pilots and use case experimentation, like time saved or additional analysis provided, to build confidence.

The bottom line 

Adopting AI in legal teams isn’t just a technical shift; it’s a cultural one. Hesitation around AI is rarely about resistance for resistance’s sake. It’s about identity, trust, time, and the very real pressures of legal work. By understanding the cognitive and cultural factors at play, leaders can move beyond mandates and instead foster genuine engagement with AI tools, to the benefit of both team and organisation. When leaders develop the structures to help their team feel informed, empowered and included, they lay the foundation for AI to become not just a tool, but a trusted partner in delivering smarter, more strategic legal services.

How we can help

Legal Optimisation Consulting and our national AI Advisory team work with in-house legal teams to:

  • Develop a clear strategy and roadmap for AI adoption, tailored to your legal team’s needs and concerns
  • Foster a trust-based culture through leadership coaching and strategic change management
  • Provide practical, accessible training through tailored workshops and seminars, bite-sized learning sessions and on-demand resources
  • Identify and prioritise AI use cases that deliver measurable value
  • Establish governance and guardrails for safe, sustainable adoption

Let us help you decide the right path, protect what matters, and evolve your function to deliver lasting impact.

Contact us to learn more.
