Legal Lab 2025 took place April 1st through 3rd at The Langham, Boston. The building was formerly the Federal Reserve Bank of Boston. Acknowledging the historic location of the meeting – as well as the mural of George Washington and Alexander Hamilton in the Wyeth Room – Harbor CEO Matt Sunderman opened Day 1 proceedings by reminding us that Legal Lab creates a “room where it happens”.
This year’s attendees included legal operations leaders from Fortune 500 companies, including some of the oldest companies in the United States – the Bank of New York, founded in 1784; ConEdison, founded in 1823; and Hormel Foods, founded in 1891. They were joined by leaders from Silicon Valley tech companies including Google, Meta, and Salesforce.
Representing law firms were leaders from Am Law 100 firms and Global 200 firms. We were also joined by external speakers including Professor Scott Westfahl of Harvard Law School, Kara Peterson of Descrybe.ai, and Oz Benamram of SKILLS.law.
Talent: The importance of adaptive learning
In an era in which Generative AI is starting to disrupt the legal profession, what are the most important skills we need to develop within the industry?
Guest speaker Professor Scott Westfahl of Harvard Law School Executive Education shared ideas on “Design Thinking in Legal Talent Management in the Age of AI”. One of the biggest gaps in the law school curriculum today, he said, was the lack of investment in developing leadership and networking skills.
Whereas the top MBA programs equip graduates with strong professional networks, top law schools have historically rewarded the development of technical legal skills – “thinking like a lawyer” rather than “being a lawyer”.
The arrival of GenAI has highlighted the importance of Adaptive Leadership – the ability to lead people and organizations through change in uncertain times. The skills required to address “technical” challenges vs. “adaptive” challenges are very different.
Adaptive Leadership requires qualities such as listening, empathy, and curiosity. In uncertain times, leaders need to take the time to understand what their stakeholders are afraid of and have difficult conversations, so they can steady the ship.
Human-centered design thinking requires piloting, experimenting, and learning. It requires admitting you don’t know what the answer is; the ability to brainstorm without judgment and leverage diverse teams; and the ability to test, prototype, and learn from mistakes without the expectation of getting it “perfect” the first time. It also requires the ability to give and receive feedback and reflect.
Adaptability Quotient (AQ)
Legal Lab participants debated the changing profile of the lawyer, as well as career pathway diversification, prompted by co-moderator Chris Ryan of Harbor. There was strong interest in changing hiring criteria to identify law students with the behaviors and mentalities that are more adaptive and solution-oriented, in addition to having AI literacy skills.
Across both new and experienced hires, the “adaptability quotient” or “AQ” is an important metric: How flexible and adaptable are your team members in navigating the change that is underway?
Building more adaptive organizations requires changing the culture of your firm or department. Data and client feedback help to compel change, as does inclusivity. Setting up structures to bring in people from every pillar of the organization will start to make new solutions “their” ideas. Having AI champions across departments is important. Encouraging younger team members to present and teach the organization what they know forces cultural change and helps the team break out of the constraints of classical hierarchy. At the same time, there was recognition that – as Lauren Chung of Harbor raised – while “Adaptive Leadership” is important, there will continue to be a role for experts – legal, technical, operational – who are not people leaders.
Participants explored the fundamental tension between the desire and need to experiment and innovate, and the reality that clients continue to seek outside counsel to provide “safety”. Both paths are important.
Whatever the approach, it all comes back to the value of communication. Corporate legal departments want to hear about what law firms are doing with AI.
Service Delivery: The value of transparency
Léo Murgel of Salesforce kicked off our Service Delivery pillar by sharing a case study of a Value Ratings Dashboard launched internally at Salesforce, which aggregated feedback on outside counsel performance. Inspired by a breakout session at Legal Lab 2023, the initiative gathered feedback from in-house lawyers as they reviewed and approved invoices. The managing attorney closest to the work product was required to rate – on a simple 3-point scale – whether the services provided by the law firm were worth the cost.
Monthly evaluations enabled the firms to address any issues and adjust staffing if needed. Invoice reviewers were also empowered to mark down bills if expectations were not met. In aggregate, the data compiled has helped to provide context for future firm selection beyond hourly rates.
Points of friction?
Legal Lab participants discussed the “friction” between corporate law departments and law firms when it comes to AI. While most law department leaders said they expected outside counsel to keep them informed about how they were using AI, in a Harbor poll conducted in December 2024, most law departments reported that firms were not engaging with them on the topic of AI use; another 12% of respondents said that firms did engage with them, but only when asked. Law department leaders in the room expressed concern about whether firms were co-mingling their data with other clients’ data, as well as a sincere desire to learn from law firms about what AI tools they have used and what they have learned from that experience.
In the words of one participant, “I want our law firms to understand AI but I don’t want them to sell me AI.”
From a law firm point of view, a Harbor poll of innovation leaders at the January 2025 Sounding Board found a mix of approaches that firms were using to engage in conversations about AI with clients. Firm leaders at Legal Lab pointed out the mixed signals they continue to get from clients about AI use. Outside counsel guidelines frequently require prior client consent before AI can be used in legal services or workflows. They also commonly include AI usage limitations and restrictions, in some cases outright prohibiting the use of AI in specific contexts.
Firm leaders in the room acknowledged the need for transparency and the benefits of articulating why they were using AI in the first place – to make their clients more efficient and capture client information to minimize back-and-forth. One firm leader shared that having discussions specifically about workflows involving AI has helped to reassure clients. For example, clients generally seem comfortable with having their data used by AI to analyze pricing decisions at the firm. Participants discussed the “stickiness” of other use cases, with both department leaders and firms concurring that using agentic AI to manage docketing, for example, would be a “huge” boon to clients that would make it unlikely for them to move work elsewhere. There was energy in the room as well for a standard set of law firm disclosures that would enable law departments to permit the use of AI safely.
When it comes to whether firms should pass on cost savings from using GenAI, moderators Zena Applebaum and Bobbi Basile of Harbor raised the issue of ABA Rule 1.5(a), which sets out that it may be unreasonable for lawyers to charge the same flat fee when using a GenAI tool if it enables them to complete tasks more quickly. There was a general consensus that the perennial drive to do things “better, faster, cheaper” would prevail. Both in-house and outside counsel leaders suggested that even if the cost per matter goes down over the long term, firms that are maximizing productivity would also be the ones to win more work, including higher-value strategic projects.
Technology: Architectures of the (not-so-distant) future
Oz Benamram of SKILLS.law opened Day 2 with “What GenAI Tools Are Used and for What” – the results of a survey of 100 law firms.
The March 2025 survey of SKILLS Summit attendees, now available, provides detailed data across 22 different use cases. A plethora of tools are mentioned – 180 solutions in total (with some used across multiple use cases) – painting a picture of a highly fragmented market.
Of the 73 firms who said they have deployed an internal GenAI solution,
- 55 said that they had either built an AI chatbot internally or deployed a foundation model through a secure API.
- However, only 18 said that they had built client-facing or revenue-generating products using AI.
This finding was consistent with the observation by moderator Jeff Marple of Harbor that:
While firms tend to index on GenAI use cases for the Practice of Law as a potential point of differentiation for firms in the future, in reality there is considerable excitement at the moment about GenAI use cases for the Business of Law.
The Business of Law use cases enable internal teams to drive operational efficiency, business development, and financial performance. Although the SKILLS.law data did not cover law departments, there was a consensus that the overall trends are likely similar, with the main difference being that fewer tools are in use in-house. Co-moderator Kevin Clem of Harbor shared the results of a Harbor Sounding Board poll on how corporate law departments describe the current state of GenAI in contracting. While many respondents said that they were exploring or piloting AI opportunities, none went as far as to state that it was “a core component of our approach to commercial contracting”.
With regard to the Build vs. Buy debate, there were proponents for both routes, with differing opinions depending on the size of the organization and overall approach and the nature of problems they were solving. For a Fortune 500 law department soliciting use case ideas from a broad range of stakeholders, “Build” turned out to be easier than “Buy” when it came to AI. While it took longer to build the very first AI solution, subsequent projects went faster once a working template had been tried and tested.
Rather than an “either/or” decision, Jeff Marple suggested an AI strategy more akin to building with LEGO blocks, with interoperable components assembled in tiers. Meanwhile, Oz Benamram predicted that the eventual winners in a highly fragmented AI tools market would be the “platforms” that agents plug into, giving users the flexibility to swap individual tools in and out. Marple highlighted that in addition to agentic architectures, key categories of AI tools to look out for in the near future include reasoning engines and browser control.