AI use surges in law firms, report says, creating an hourly billing paradox

The legal field is one of the industries adopting artificial intelligence tools the fastest. A recent report from the legal tech company Clio found that almost 80% of legal professionals are using AI in some way in their practice, up from about 30% last year.
Joshua Lenon, a lawyer in residence at Clio, told Marketplace’s Meghan McCarty Carino the profession is particularly ripe for tech disruption. The following is an edited transcript of their conversation.
Joshua Lenon: The output of legal work is often just text-based, and generative AI very quickly and easily produces written materials. And so for [lawyers], it’s something that’s easy to use and generates the type of stuff they need. And so that’s led to this huge adoption. In fact, we think lawyers are adopting generative AI technology at a faster clip than most other industries out there. When we compare our own Legal Trends Report research to that of, say, McKinsey, we’re seeing that lawyers are generally 8 to 9 percentage points higher in their adoption compared to other industries out there.
Meghan McCarty Carino: And for those who may not be super familiar with the background work of what lawyers do or what happens at law firms, can you explain the efficiencies that are gained by implementing these AI tools?
Lenon: If we look at the workflow of lawyers, they might receive a document from opposing counsel that requires a response, say, drafting a response to a motion. That's a very formal document with some very specific formatting requirements, depending upon the court they're appearing before, and some very specific outputs related to the goals. When we look at artificial intelligence, it gets them past that blank first page and immediately speeds up the process into "I've gotten something done. Now, can I perfect it in the time I have left?" There's an interesting downside, though, to the adoption of generative AI within most law firms: They tend to bill on an hourly basis, and so I think we're entering into a period where there's this real paradox of, I'm greatly increasing my productivity, but I'm taking a financial hit in doing so.
McCarty Carino: So what is a potential solution to that challenge, to the hourly business model?
Lenon: Well, I think first we have to define how big an impact it's going to be. One of the unique things about Clio's Legal Trends Report is that Clio is cloud-based software, so we're actually getting real, live feedback from law firms as they use the Clio software solutions. We took 7 million time entries from Clio's unique dataset and used our own internally hosted large language model to classify those time entries against the work activities defined by the Occupational Information Network, a group that looks at different industries and tries to define what they do. Then we assessed the automation potential of those work activities. And what we found is that 74% of the more than 7 million time entries we looked at could be automated to some degree using AI.
But then when we looked at the revenue associated with those time entries, it looks like the impact could be up to $27,000 per lawyer annually if they really lean in to generative AI and its productivity enhancements and don't adjust their hourly billing model. So what we think is going to be a solution moving forward is that firms will shift from solely hourly billing to what we're calling hybrid billing. Some of the work they do will be on a fixed fee or flat fee, and some will be hourly. That will vary based on the practice area and the type of cases they handle, but we think it's going to become more and more common in the future. And when we paired the survey with our unique dataset, we found that law firms using flat fees are also currently the highest adopters of artificial intelligence. They found that they can work less and make the same amount of money, or even more, because they can take on more clients and more cases as their productivity increases.
McCarty Carino: Are there any pain points still with AI in the legal context, where the technology still has a ways to go, particularly when it comes to reliability?
Lenon: Absolutely. Different states, for example, may have slightly different language they prefer to use for the exact same type of procedure. So a divorce might be called a dissolution in a different state. And what we see from a lot of the currently available generic large language models is that they tend to default to the biggest set of data in their sample. So if you are a Connecticut lawyer trying to use ChatGPT, you're going to get a lot of New York and California language sent back to you, which might work, might not work. And so I think we in the legal industry need to start thinking about building very domain-specific algorithms that target different states and regions and cover the very specific jargon of practice areas, so that lawyers have greater reliability in the tools they have. In the meantime, it's every lawyer's responsibility to know what's required in their documents and to double-check the outputs of AI to make sure it's there. But I think it's very difficult for an individual lawyer to train their own AI, so that's why I think legal technology vendors like Clio and legal regulators like bar associations may need to pair up and start to create this shared LLM category that can be used in very specific domains.
McCarty Carino: Right, because there have been some infamous instances of lawyers using ChatGPT and citing fake precedent cases in documents submitted to the court. We talked to Stanford researchers from the Institute for Human-Centered AI, who did some analysis of general-purpose tools. I think they found something like 70% of legal inquiries were returning hallucinations. They did some follow-up work on purpose-built legal large language models, which didn't have as high an error rate as ChatGPT or the other general-purpose ones, but it was still in the range of 17% to 30%.
Lenon: Yeah, I actually read that same study, and they were talking about the hallucination rate as well as the homogeneity of the law, just how the big cases and the big states kind of dominated the results. And so, yeah, I think we are going to have to continue to take a look at AI and make sure each lawyer and each law firm is using the right AI. It's not going to be a one-size-fits-all solution anytime soon.