Key Takeaways
David Wakeling incubated a software product to help banks restructure contracts after the financial crisis, realizing the scale of work made it infeasible to do manually with lawyers. 10 years later, Allen & Overy has dozens of different software products managed by an interdisciplinary team of lawyers, software engineers, and data scientists. They're also one of the most advanced adopters of LLMs, where they're finding significant business value.
Topics Covered
- Background on Allen & Overy (A&O) & David
- One of biggest law firms in the world, focused mainly on business law
    - David is a partner at the firm and leads the "Markets Innovation Group", which is made up of lawyers, developers, and data scientists
- Our job is to disrupt the existing legal model, turning to technology or resource models that enable us to do things differently
- I also run the firm's AI strategy globally
- How did you arrive at this model for the Markets Innovation Group?
- We took a big personal risk to start this
- The first project we did was Margin Matrix; introduced in response to work associated with new regulations post-Lehman
    - We built it, pitched it to some of the biggest banks around the world, sold it successfully to one, and were then delivering remediation on tens of thousands of contracts for just that one client
    - Chose to partner with Deloitte to "crank the machine" for many more customers
- Once we had the success with Margin Matrix, it was easy to change the organization
- What about your background led you to do this?
- Biggest thing is very personal... my eldest daughter was diagnosed with cancer... she eventually passed away, but I spent a lot of time away from work, hundreds of days in the hospital, and it gave me a fresh view of the world and everything in it
- I didn't just want to carry on with things as they were; this was a huge catalyst
- How did you think about the revenue model for these products?
- Start with business problem... there weren't going to be enough lawyers to handle this... it was just such a manual process
- First shopped around for decent doc automation product; created simple beta version; didn't know what the Lean Startup was at the time, but effectively we just spoke with clients with the MVP
- It became clear the client will be the big banks... and it was clear they were sitting on so much structured data, you could do this without going too much into the underlying docs
- Then we started talking about how those banks communicate with their customers
- I had to think about how it would be integrated into these banks' workflows
- Once we had that plan done, we were away, but capped the number of clients to make sure we weren't overwhelmed
- How does this interact with the by-hour billing model?
    - In order to work out pricing, we had to work out the alternative, and that alternative was very expensive (manual lawyer work)
- We came up with a fixed license fee per number of contracts, and it was 1/3 the cost of the alternative
- We basically projected the work we thought we would need to do on quality control, to ensure we would be offsetting our internal costs as well
- Does this enable you to sell other labor hours, or is it really just the software license?
- Most of the time it's just a software product, but being a law firm, we can provide a more conventional resource model for those 10% of "exceptions" that can't be handled by the software
- How many products do you now have live and in market?
    - The platform is an umbrella organized by function, with many drawdowns in different areas
- Bond automation tool, which auto-drafts bond securities documentation
- Fund tool, which helps with fund formation and funding rounds
- Loan execution tool
- Every area is incredibly scalable as soon as you have an auto-drafting tool
- We now have dozens of small drawdown products off the core platform/framework
- Is there a pattern to which of these solutions has been most successful, vs the others?
- Two themes:
- 1: Heavily templated, highly repetitive, high volume
- 2: Any categories subject to a dislocation event (e.g. Brexit)
- What cautions would you give to other firms / companies trying to build a similar model?
- There's an organizational philosophy that needs to change; it's an experimental group, and it's going to take longer than 6 weeks to make money; the organization needs to understand that
- You need to introduce feedback loops and constant iteration; having known this 5 years ago would have saved me a lot of time
- What was your first exposure to the "AI" space?
- Last Sept we got to know Harvey, pre ChatGPT; through that we got access to GPT-4 in beta
- We got it in sandbox and tested it; "all previous legal tech is laughable"
    - I said "we're going to put everything into this"; I grabbed 10 people from my team and spent 6 weeks developing governance around safe deployment
- Rolled out in phases with objective of 2k lawyers by X-Mas (all internal usage)
- We got 600 very detailed responses to effectively an IT survey
- Since then, we've been building this into our drafting tools
- This morning, we had 3.7k people who are regular users of our genAI systems, either Harvey or Contract Matrix, and they use it on average 800 times a day
- How exactly do you find using the platform? What use cases are most helpful?
- We tried to map out the functions of lawyers: "draft", "comment", "summarize", etc.
    - We discovered that it's such a malleable tool that, if you can point it at a use case in the right way, you can have an impact on almost every function
- That just affirmed the conclusion we'd reached, which is that this is going to completely transform the legal industry
- How would you quantify the labor hours you've saved with this, across each category?
- Coming up with bespoke provisions & translations of documents between languages -- it's incredibly effective
- Dropping in datasets / data extraction / RAG is next one for us; that will really take off
- For example, it could be a set of transactions on which we need to do due diligence
- Really just don't overthink it -- just pick any function in the legal industry and you'll find value
- You do need a plan around handling hallucinations, protecting data, etc., but as soon as you've done that, there's a lot you can do
- How do you think about this issue of governance? How do you advise clients on this?
- It's a constellation of things you need to do right to allow this to work safely... governance, evidence-based RAG (showing where answers come from), contractual agreements with 3rd-parties that ensure your data is being handled appropriately, staying away from risky areas, etc.
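    - The "evidence-based RAG" idea above -- every answer pointing back to its source passages -- can be sketched minimally as a retriever that returns citations alongside text. This is an illustrative sketch only; the `Doc` structure, the document IDs, and the keyword-overlap scoring are assumptions for demonstration, not A&O's actual system.

```python
from dataclasses import dataclass

@dataclass
class Doc:
    doc_id: str  # e.g. a contract or clause reference (hypothetical IDs below)
    text: str

def retrieve_with_citations(query: str, corpus: list[Doc], k: int = 2) -> list[dict]:
    """Return the top-k matching passages together with their source IDs,
    so every answer fragment can be traced back to where it came from."""
    terms = set(query.lower().split())
    scored = []
    for doc in corpus:
        overlap = len(terms & set(doc.text.lower().split()))
        if overlap:
            scored.append((overlap, doc))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    # Each result carries its citation -- the "evidence" in evidence-based RAG.
    return [{"source": d.doc_id, "passage": d.text} for _, d in scored[:k]]

corpus = [
    Doc("ISDA-2021-007", "termination rights on a credit event"),
    Doc("Loan-014", "repayment schedule and interest provisions"),
]
hits = retrieve_with_citations("credit event termination", corpus)
```

    A production system would use embedding-based retrieval rather than keyword overlap, but the contract is the same: the model only answers from retrieved passages, and the user sees which documents those passages came from.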
- Which of these problems do you feel are less well solved in the market?
- Hallucination risk is not completely removable; you just need to build around this to the extent possible
    - Data privacy and IP infringement -- those are also challenging, but solvable
- Information security as well -- there are various sub-concerns & ways to solve for this, but it's important to think about
- With hallucinations, it doesn't have to be perfect; it just needs to be better than the current position
    - A lot of what's happening right now is companies looking at it by assessing tabloid risk; "is this a use case where that 4% error risk will land me on page 4 of the tabloids?"
- But there are other use cases where that 4% error risk isn't an issue; e.g. in M&A due diligence, where you wouldn't have been able to review that volume of data to begin with
- How do you think about the buy vs build decision?
- In terms of the value chain, we have a foundational model, we then have Harvey leveraging and tuning it for law, and then at the far other end you have a human using this product; what fits in between the human and the model?
- We believe we save a couple hours per week; across 3.7k lawyers, that's a lot of hours of lawyers' time
- Thinking about the value chain is a nice starting point for where we go next... how do we package / build products around the LLM to provide value?
    - We are actively watching what else is coming out in the market; there will be competition; it's not just Harvey and OpenAI, and we are looking at other options as well
- Our objective is to future-proof our design so we can port everything between vendors as needed
- Where do you think value accrues then?
- People talk about moats being in the AI tech companies; we believe our moat is around governance, management of legal issues, practice precedents, etc.; our moat is in the productization of our expertise
- The talent war on data science makes us nervous... for the things we build, we will be competing in this market... to what degree do we want to do this? can we hire the people we need?
- How do you think about "future-proofing"?
- A lot of businesses could get lulled into dependencies on specific providers; that may put them in a very tough position
- Effective testing then becomes very important as well; you need to know you can switch if needed
- How do you operationalize this?
- We have a toggle; we can flip between model providers and allow a mini feedback loop to see how each model is performing
- Does the product have to change between models?
- The way we're using it, not as much as you'd think, but it will get trickier over time