News Feed

Ethics, Judgment, and Trust in a World of Legal AI, with Damien Riehl – Legal Talk Network

Damien Riehl is a lawyer and technologist with experience in complex litigation, digital forensics, and software development….
Stephanie Everett leads the Lawyerist community and Lawyerist Lab. She is the co-author of Lawyerist’s new book…
Zack Glaser is the Lawyerist Legal Tech Advisor. He’s an attorney, technologist, and blogger.
Lawyers have always relied on tools—but AI is different. It doesn’t just assist with tasks; it makes decisions, applies judgment, and shapes outcomes. In episode #602 of the Lawyerist Podcast, Stephanie Everett talks with Damien Riehl about what ethical responsibility looks like when AI starts doing legal work on its own. 
Their conversation examines how AI systems embed values, why verification matters more than transparency, and how lawyers can responsibly use tools they don’t fully understand. They also explore what legal expertise looks like in an AI-powered future—and why intuition, trust, and integrity may matter more than ever as machines take over the “widgets” of legal work. 
Listen to our other episodes on Ethics and Responsibility in AI. 
 
Have thoughts about today’s episode? Join the conversation on LinkedIn, Facebook, Instagram, and X! 
 
If today’s podcast resonates with you and you haven’t read The Small Firm Roadmap Revisited yet, get the first chapter right now for free! Looking for help beyond the book? See if our coaching community is right for you. 
 
Access more resources from Lawyerist at lawyerist.com. 
 
Chapters / Timestamps:  
00:00 – Introduction 
05:55 – Meet Damien Riehl 
08:10 – Why AI Is a Different Kind of Legal Tool 
11:05 – When AI Starts Doing Legal Work 
14:30 – Ethics, Values, and AI Judgment 
18:45 – Foundation Models vs. Legal-Specific AI 
21:15 – The “Duck Test” and Trusting AI Output 
24:45 – Trust but Verify: Reviewing AI Work 
28:40 – What Lawyers Are Underestimating About AI 
31:10 – What Still Requires Human Judgment 
34:30 – Intuition, Trust, and Integrity in Law 
37:40 – What This Means for Billing and the Future 
40:40 – Closing Thoughts 
 
Special thanks to our sponsor Lawyerist.
Stephanie Everett:
Hi, I’m Stephanie.
Zack Glaser:
And I’m Zack, and this is episode 602 of The Lawyerist Podcast, part of the Legal Talk Network. Today, Stephanie talks with Damien Riehl about ethics and judgment in a world of AI. Stephanie, that’s an interesting topic to me. And I think we can think of ethics in a lot of different ways. So I’m excited to hear y’all’s conversation on that. But before we get to that, you’re on the tech show board, and obviously Lawyerist goes to tech show and covers tech show every year. We get some behind-the-scenes stuff. What are you excited about? What’s going on with tech show? What do we need to look forward to this year?
Stephanie Everett:
For anyone who doesn’t know, this is the American Bar Association’s large event. I mean, it’s a multi-day conference, March 25th through 28th. It’s a little bit later this year. Sometimes it’s been as early as early February, and it’s always in Chicago. So I don’t mind. I mean, now it’ll be a little bit … We don’t know what we’re going to get,
Zack Glaser:
Right? Right. Could still be snowing. February in Chicago, you know what you’re going to get. Exactly. And it’s: bring a big coat. Yeah.
Stephanie Everett:
But yeah, March 25th through 28, I do recommend, if you haven’t registered, go register today. There’s a lot of different registration options. You can get all your CLEs for the whole year there. But the topics, obviously, we have a lot of great content. We spent … I mean, I can’t even remember. We had over 350 speaker proposals come to us.
Zack Glaser:
Oh, wow.
Stephanie Everett:
Yeah. And then we had to spend, I mean, days, weeks, a lot of meetings trying to narrow that down and come up with exactly what we wanted to be able to present on. But we have some great topics. I’m really excited about it. You and I will both be there. I’m speaking this year. I’m going to do an AI basics workshop for the people who are nervous, maybe haven’t gotten started. Yeah, I’m excited about that one. We’ll try to make it pretty hands-on. I’m also speaking with one of our labsters, Allison Harrison, on running a remote team, and not just running the remote team or a hybrid team, but also the tools and technology you need to make that happen.
Zack Glaser:
Oh, that’s great.
Stephanie Everett:
Yeah. And then I’m doing a pretty fun panel with a whole bunch of folks on podcasting for lawyers. So Gy and … Oh. Yeah.
Zack Glaser:
Some of the other podcasting names that are out there, but great. I mean, I feel like you know a little bit about podcasting.
Stephanie Everett:
Yeah.
Zack Glaser:
You’ve been doing it for a little while now. That’s awesome.
Stephanie Everett:
That is awesome. If you go to techshow.com, you can see the entire schedule, you can see all the speakers, you can see who’s coming. And the expo hall, you and I have talked about this before. If people don’t want the CLE credits or don’t need them, just coming and going to the exhibit hall floor is worth it. And I think you can even get a free or very discounted pass just to walk the floor and see all the new technologies, see the services, just talk to people. There are even some sessions that happen in that exhibit hall, so you don’t have to pay. They’re not CLE, but you’ll be able to … Actually, that podcasting one we’re doing is on the exhibit hall stage. So anybody would be able to come to that. You don’t even have to buy a full pass.
Zack Glaser:
Again, that’s techshow.com. We’ve been going for a while. I would suggest that you go. It’s a fascinating thing to go see a lot of information. And honestly, in my opinion, there have been times throughout the years where it’s like, has there really been any new tech? You cannot say that right now. There is so much movement in the legal tech sphere.
Stephanie Everett:
Yeah. And I’ll give you two other little bonuses. One, even though there is all this movement, it’s not going to be all AI focused. I mean, as you know, Byron’s going to be there doing his Word courses and Ben Schorr’s going to be there doing Outlook. So even some basic skills. I mean, I say basic, but lawyers will get in there and your minds will be blown because you actually-
Zack Glaser:
Oh, both those guys? Yeah.
Stephanie Everett:
Yeah. Because then you don’t realize the tools and what you don’t know.
Zack Glaser:
You thought you were a power user, but you’re absolutely not. Yeah.
Stephanie Everett:
Yeah. So there’s something there for everybody. I guarantee it. But then the other little plug I want to make is we’re sharing space this year in that, remember the large … We’re in a conference.
Zack Glaser:
Oh yeah, same place as last year.
Stephanie Everett:
Conference.
Zack Glaser:
Big conference hall.
Stephanie Everett:
Synerp? Yeah. Well, this year the rest of that conference center is going to be filled with Chicago’s ComicCon expo and conference. So 40 or 50,000 people.
Zack Glaser:
Oh my nerdy God.
Stephanie Everett:
In fun cosplay, even more fun. So we’re leaning into that a little bit. When you go to the tech show website, you might see our version of a comic strip and you’ll find that our track sessions have fun names. My favorite that I might’ve had a hand in is Guardians of the Data. So we’re playing with that theme this year as well. But the reason I mention it is because the hotels are going to book up. It’s going to be more crowded. So if you think you might want to go, just go ahead and get a reservation in and get your registration in because you don’t want to be far away in case there is bad weather in late March and it’s cold.
Zack Glaser:
That’s true. Awesome. Well, go to that techshow.com. But now here’s Stephanie’s conversation with Damien.
Damien Riehl:
Hi, I’m Damien Riehl. I am Solutions Champion at Clio, where part of my job is helping my team build things and part of my job is to help talk about the cool things that we’re building at Clio and Vincent. Before I was at Clio, I worked for vLex, and before that I worked for Fastcase, a name that is near and dear to many of your listeners’ hearts. And another claim to fame is that back in 2011, a little-known lawyer named Sam Glover and I were in town, and I said, “Hey, you’re a smart guy that talks about tech, and I’m a lawyer that likes to think he’s smart and talk about tech.” And we had coffee right as he was beginning a newsletter. And I think he thought that I was stalking him, but that’s led to a very long discussion and a really good relationship with Lawyerist; Sam Glover, formerly of Lawyerist, now doing other cool things.
Stephanie Everett:
So cool. And I mean, for people who aren’t familiar, they should for sure connect with you on LinkedIn, because you do a lot of really cool, interesting work across law and technology and AI. Some of it’s really technical, but then you’re also in this cool forward-looking space. So it’s really interesting, because as I was exploring all the vast number of topics you and I could dive into today, the list was really long, including the fact that I get to hear you sing occasionally.
Damien Riehl:
That said, not a lot of singing lawyers around, but we need to change that.
Stephanie Everett:
Yeah. So here’s where I landed though. I thought, okay, you’re doing such cool work. What really sort of sparked my interest honestly is the post you made about going to the Vatican and how that intersects with AI and promise listeners we’re going to get there. But it occurred to me, there’s a growing sense of concern about the path AI could take us on, not just what it can do, but the power it holds, the power to shape decisions and influence behavior. And I was thinking about our audience, specifically law firm owners, and this kind of question came up to me that every day most of us use technology. I mean, we just go on and I just blindly use software every day. And I don’t even think about how it was made, the parameters of it, how it was like coded. I don’t have to know code to use this tool.
And then there’s AI. And I was like, okay, this feels different. So with that background, I don’t know if that was a good setup. I was like, help us dive in and start thinking about this differently because I do think we need to understand a little bit more about what’s behind the machine to actually understand what we’re doing with it.
Damien Riehl:
100%. And I think what you’ve provided is a perfect setup, because we traditionally have used tools, like computer tools, in a way that says we are in full control of our tools and we will use those tools however we’d like. If it’s a knife, we could use it to cut our dinner or we could use it to stab someone. So these tools are something that we have been in control of, but AI, large language models, are a bit different in that they can do a lot of things on their own. So in a sense, it’s a knife that can cut on its own, and it’s a knife that’s able to stab on its own. And so in this way, I think it is different in kind from the other tools that we’ve had in the past.
And so with that newfound power of a tool, a knife that can cut on its own, a lot of legal work can actually be done programmatically. That is, if you think about legal research, Fastcase has been collecting the cases, statutes, and regulations for the last 25 years, since 1999. And of course, few people cared when Ed and Phil were doing that in 1999 as much as they do now in 2025, 26, when a lot of people realize, oh wait, to be able to do things programmatically, you need the cases, you need the statutes, you need the regulations. So this legal oil was something that mattered little 10 years ago but matters a lot today, because everyone realizes, wait, if I just ask the AI today, it’s going to hallucinate a bunch of cases. But if you instead ground it in yesterday’s case and yesterday’s statute, and know that yesterday’s case overruled the case from three years ago, then it can be grounded in truth.
So anyway, the knife that can cut on its own or stab on its own all of a sudden is able to do a lot of work on its own, because now our AI can go through hundreds of cases, analyze that this case overruled that other case, apply the facts from your case to the law, and give you a 65-page memorandum in about a minute and a half. So that is really taking … That’s not a knife, right? That is a knife that cuts on its own. And so really, two questions that I know you and lawyers have been thinking about: what happens to the billable hour in a world like that? How long would it have taken me to read through hundreds of cases and land on a 65-page memo? A lot longer than a minute and a half.
So can I bill for that minute and a half, or what happens next? This goes to your question about what we do with the ethics of a thing that can do a lot of work on its own. And people have probably heard on your podcast a lot of discussion about agentic AI, that is, agents that can go out and do a lot of work. And really, as a lawyer, I know that everything is in the definition. How does one define an agent? The way a lot of people are defining agents is as tools that can go out and do work on your behalf. And my favorite definition of an agent is a tool that can string together tasks dynamically.
And I think that is a really good way to put it: we as humans string together tasks dynamically, and the machines that are agentic are similarly doing that. We’ve had agents for years. A human employee is an agent. A thermostat is an agent, because you give the thermostat a goal, I want 72 degrees; it then reaches out into the world and figures out what the temperature is, and then it adjusts dynamically based on that external stimulus. So in a sense, a thermostat is an agent. And so really, what AI is, is kind of the next step of agents. But instead of just being like a dumb thermostat, or even TurboTax, which is another type of agent (with TurboTax, I still had to click yes, no, and upload, and then it would dynamically shift my tax return in a very brittle way).
But currently AI is able to do a lot of that work dynamically on its own. And so that opens up, I think, your question, which is to say, okay, do we need to think about the ethics of these things? Sam Altman of OpenAI said, “We need AI agents to align with human values.” And this gets to my trip to Rome, where I was asked to meet with the Vatican and a whole bunch of people. And my pitch to Rome was to say, “You have a lot of human values stuck behind the Vatican Archive. Wouldn’t it be great if you opened up the Vatican Archive and the Vatican Library, opened up those human values from Thomas Aquinas and Thomas More and some of the greatest thinkers over history, and then provided those human values to the AIs as they’re going out and doing their things in the world, to be able to say, ‘Is this the right thing to do ethically or morally?’” And so I’ve been thinking about this a lot because, of course, in my day job with Clio, we have a billion legal documents, that is, all the cases, statutes, and regulations, motions, briefs, pleadings, not just in the United States, but in 100 countries worldwide.
And so when you think about the law, there is the governmental law. And then there’s also religious law; whether it’s Christianity, Judaism, or Islam, there are lots of laws, and those laws have a lot of overlap: do not murder, do not steal. So there’s a lot of law that is in both. And so I’ve been thinking a lot about how, with Vincent, which is a vLex product, now a Clio product, you could do a 50-state survey or a multi-country survey. And as part of that multi-country survey, you could ask things like, “I’m thinking about doing a reduction in force, layoffs. Would this violate any of the 50 states’ rules or any of the laws in the European Union, like Germany or France, et cetera?” And you could imagine asking that question of human law. And then if you happen to be, say, a Catholic hospital, you could also say, “What does Canon law have to say about this? Would this be the right thing to do, to lay off all these people?” So in some sense, the corpus of faith materials, whether they be from Judaism or Christianity, et cetera, that faith material is another law the user could reason against, to be able to say, “Is this the right thing to do, from governmental law and from ethical and moral perspectives?”
Stephanie Everett:
I’m curious. I understand your pitch to the Vatican: open up the resources and let us train AI on these texts. But absent that, today, are these AI tools already putting a layer of judgment into their reasoning and into the answers that they’re giving us, and we just don’t even realize it?
Damien Riehl:
They are. And some companies are more open about that layer of judgment, like Anthropic. Everyone knows OpenAI and ChatGPT. A competitor to ChatGPT is Anthropic, which makes Claude. And actually, Anthropic was started by a bunch of OpenAI refugees who left to be more humane, which is the reason for the name Anthropic. And so Anthropic has been very forthright as to how they’re training their models, because it’s one thing to ingest all the data, which is ingesting the entire internet and a whole bunch of books. But then after that, they have something called reinforcement learning. And that reinforcement learning can be through human feedback, to be able to say, “I, as a human, look at this answer and say, that is a good answer because it aligns with human values.” And oh, that’s a bad answer.
Maybe that is spewing Nazi propaganda, for example. So through those thumbs up and thumbs down, through that reinforcement learning, the model is able to provide a better output to its end users. And so Anthropic has been very open as far as their ethics and what they’re putting in. And they have a constitution for Claude, to be able to say, “You, Claude, need to meet our constitution in everything you do.” As an example, I was just yesterday working on something, and I said, “I’d like to provide this thing and I’d like to provide this outcome to be able to persuade someone, but don’t make it obvious that I’m trying to persuade them.” And Anthropic pushed back and said, “I’m sorry, I can’t be deceitful in that way. It’s in my constitution that we need to be forthright in the ways that we’re trying to persuade people.” So of course that made my heart warm a little bit, to say, “Thanks, Claude, for doing the right thing.” At the same time, I’m a lawyer, and sometimes I need to advocate for things in a way that is subtle and nuanced, especially for a client that I might not agree with, but I need to make these arguments. So in the same way, maybe I need to make an argument in a way that is not forthright. So anyway, this is kind of the push and pull: we need the AI to have some sort of ethics and morals, but we don’t want it to push back when we want to do something legitimate; we don’t want a false positive from that kind of morals.
Stephanie Everett:
So then I’m just curious, were you able to get it to do what you wanted to do by shifting the way you prompted it? Or did you just go to ChatGPT?
Damien Riehl:
I went to ChatGPT. Yeah, and they were happy to give it. And then I went to Gemini, and they were happy to give it through Google. So I think that’s the weird part: we don’t just have one option for AI. We have many options for AI, including DeepSeek, which was made in China, and it’s free and open source. You can run it locally on your laptop. And so as for the idea of putting in ethics and morals, of course, Anthropic can do that. ChatGPT maybe can do that. Google may do that, but of course not everyone will. And so the real question is, how do we ensure that the AIs that are going to increasingly do our work for us are doing it in an ethical and moral way?
Stephanie Everett:
And so if I’m sitting here listening right now, and maybe I’ve even been scared a little bit to use it … I talk to so many lawyers who are so cautious. They’re just kind of like, “Ah, not even going to dip my toe.” But maybe we’ve got some other folks who are like, “Okay, I was ready to go until now you’ve hit me with this whole new paradigm, this whole idea I haven’t even thought about.” What would you tell them? What questions should they be asking, and what should they know to give them some level of comfort, or maybe even how they might use the tool differently, understanding some of these backend parameters?
Damien Riehl:
Whenever people talk about AI, they often talk about the tool, but I would encourage people to think more broadly and say there are many tools. There are myriad tools, some of which are the foundation models, like ChatGPT, Anthropic’s Claude, and Gemini. But then you have tools built for purpose, like Vincent and like Clio Work. And those tools built for purpose don’t have nearly the trouble that the foundational models do. And the reason they don’t have the trouble is because we’ve done the hard work: you ask a legal question of Clio Work or of Vincent, and you don’t see the 35 prompts that we’ve run on the backend to ground that answer in truth, that is, to ground that answer in the cases, the statutes, and the regulations. And you don’t see, among the 35 prompts, where what we do is say: if a case has been overruled, don’t rely on the overruled case; rely on the controlling case.
And we also say, summarize this in the way that I, as a litigator at Robins Kaplan, would have summarized it as an associate for my partner. So anyway, there’s ChatGPT out of the box that does none of those things, and there’s Clio Work and Vincent that do all of those things. And you don’t even know that it’s happening; we just do it on the back end. And we also use the right model for the job. So we’re able to swap models, ChatGPT maybe today, Anthropic tomorrow, Gemini the next day, to do the things each of them does best, but you don’t have to worry about that. So people often ask, does AI do this and does AI do that? And I would say not all tools are created equal, and the foundational models are not the same as the tools that are built for purpose.
Stephanie Everett:
No, I think that’s a great reminder, which then leads me to, okay, so then what should I be looking for? I mean, I guess I should know the answer to this, but I assume if I go and look at Clio Work’s website, in terms of the marketing materials, does it tell me the 35 … It’s not going to tell me the 35 prompts, because that’s proprietary. But how do I get comfort that those 35 prompts are the right ones, as a lawyer, because my whole practice and my malpractice and my client depend on that?
Damien Riehl:
That’s the right question to ask. And you’re absolutely right that those 35 prompts are proprietary to us. And if we were to tell them to you, of course, then Thomson Reuters would find out and LexisNexis would find out and Harvey would find out. And so we can’t give you those 35 prompts, by necessity, any more than Amazon could tell you how they say, “Well, if you liked this book, you’ll like that other book.” Those are all trade secrets in the same way. So what you should do is what my friend from MIT, Dazza Greenwood, introduced me to, what’s called the duck test. And the duck test is this: AI in a lot of ways is a black box, where even the researchers themselves, the machine learning researchers, know the inputs, but they have no idea what’s going on in the black box. And then they see the outputs.
And if the outputs look like reasoning and quack like reasoning, then maybe it’s reasoning, even if you don’t know what’s happening in the black box. So I would encourage your listeners and users to use Vincent or Clio Work, depending on what size of firm you’re in, compare that output with ChatGPT out of the box, compare that output with Gemini and Anthropic and maybe our competitors, and use the duck test: which reasons better, which looks better from a legal standpoint? Because we have a lot of people on staff at Clio that have been in the trenches. I worked in a small law firm, an insurance defense firm. I worked at one of the biggest law firms in the world against some of the biggest companies. I sued JP Morgan over the mortgage-backed securities crisis. I represented victims of Bernie Madoff. So anyway, we have people that have done the work, and that really gives a much, much better output than ChatGPT out of the box. And so use the duck test. If it looks like reasoning, then it’s probably reasoning.
Stephanie Everett:
I’ve been saying, I mean, it feels like it’s been years now, right? It still feels so new to everybody. But I’ve been reminding lawyers, you need to understand enough about these tools to understand exactly what you’re saying. What was it trained on? What parameters are in place? Because otherwise you can’t have confidence in the answers it’s giving you. So I think that that’s a great reminder.
Damien Riehl:
Yeah. And part of that is that you’ll know that if you use Clio Work, for example, it’s trained on, well, not trained on, but has access to 950 million dockets and documents and all the cases, statutes, and regulations around the world. So you know what we use, but even we at Clio don’t know what Google used for their training, or what OpenAI used for their training, or what Anthropic used for their training. That to us is a black box. So really, once you start thinking, I need to know what my tools are using as training data, you can go all the way down the rabbit hole, and you’ll stop really quickly because of proprietary trade secrets, and also because even the Googles, Anthropics, and OpenAIs of the world don’t fully know how the things work.
So anyway, there’s an element of I-need-to-know, and then there’s an element of, I just need to look at the outputs and see what the outputs look like.
Stephanie Everett:
Yeah. I think some lawyers are going to be … I mean, I’m just trying to be frank. The duck test sounds right, but as lawyers, we’re trained to operate in certainty, right? We don’t like ambiguity. We like knowing what the law is. I mean, we deal in ambiguity all the time, because we’re arguing for nuanced interpretations of the law, but the words matter, and we’re trained to understand them. Our logic brain is firing right now. What? There are all kinds of sirens going off, and we’re just like, I don’t know.
Damien Riehl:
Yeah. Well, let’s pull on that thread a little bit. So yes, we as lawyers like certainty. That said, you, Stephanie, your brain to me is a black box. I have no idea how your brain is working, and what you’re thinking of at this moment, I have no idea. But if what comes out of your mouth looks like logic and sounds like logic, quacks like logic, then maybe it’s logic. So I’m using the duck test with you. That is, if some smart things are coming out of your mouth, then I trust you as being smart. So maybe we should treat our AIs, like Vincent and Clio Work, the same. You may not know what’s in the brain of Vincent any more than you know what’s in the brain of your associate. But if what comes out of the associate’s keyboard is maybe not quite as good as what comes out of Clio Work’s or Vincent’s keyboard, then maybe you should treat them the same, even though you don’t know the brain of the associate and you don’t know the brain of Vincent either.
Stephanie Everett:
Okay. One more. I feel like I’m being more challenging than I usually am, but this is-
Damien Riehl:
Oh, I love it. I’m a litigator. I like a hot bench.
Stephanie Everett:
Me too. Okay. With the associate, they’re bringing me their draft, and obviously I usually can then go read that case and look at it. And I know if it’s not very convincing, because associate work sometimes isn’t very convincing; it’s easier to apply that smell test. And I think that’s what we’ve all learned with some of the AI tools: boy, it comes back pretty confident, and then we quickly go in to check, and sometimes it falls apart really fast. So what would you tell people in terms of thinking about it that way, and where they need to make sure they can go and check it, and where they should put that effort?
Damien Riehl:
100%. Trust but verify is something that you need to do with that associate, and you need to do with any tool. And the real question is, does your AI tool make trust-but-verify dead simple? Clio Work and Vincent make it dead simple, because right on the same page where you’ve asked the question, and right on the same page as the memo that it cranks out, sometimes 65 pages, on the right-hand side is the actual non-hallucinated text from that actual non-hallucinated case. And the text you’ll see on the right-hand side is non-hallucinated 100% of the time, not 99%, but 100% of the time the text is non-hallucinated, and 100% of the time the case is non-hallucinated. And your listeners might be saying, “Damien, how can you say 100% and not 99%?” And I can say 100% because the text on the right-hand side of the screen was not created by AI.
That text came from a database search, like we’ve been doing database searches for the last 40 years. So trust but verify within Vincent and Clio Work is dead simple, because you can verify the actual text from the actual case in a way that you’ve never been able to with associates. The associate would draft a memo and put a quotation in there, but you don’t know if that associate fat-fingered the text. You don’t know if they actually copied and pasted from the wrong case. That is a black box. But Vincent is actually better, because it provides the actual non-hallucinated text right there on the same page. And if you click on the blue link, you can go to the full text of the case in a way that you’ve never been able to with an associate’s memo in the past. So trust but verify is 100% what you need to do with associates and with Vincent, except Vincent makes it a lot better and faster.
And by the way, maybe you could be a solo or small firm and not need associates, and be able to do a lot of the work that Big Law needs rows of associates to do.
Stephanie Everett:
No, I agree. I mean, I tell people all the time, I never even trusted a headnote to get the holding of a case right. If I’m citing it in my brief, I’m reading the case and I’m confirming for myself that I agree with that holding because we all know we’ve all seen it. So I’m there with you. What do you think law firm owners are underestimating right now when it comes to maybe AI or technology or anything that you’re seeing? You’re so into the industry, you kind of have a great vantage point. So I think I’d be remiss if I didn’t just pick your brain a little bit on what you’re seeing.
Damien Riehl:
I think what people are doing is underestimating the power of what AI can do today, because they’re using the free tier of general-purpose tools. They’ve not spent the money to use fit-for-purpose tools made for lawyers by lawyers. Or maybe you used tool X, which was an awful built-for-purpose tool. I’m not going to disparage any of my competitors, but you can imagine someone saying, “I’ve used legal AI from a competitor and it sucks.” But you haven’t used Clio Work and you haven’t used Vincent. And I would say to anyone who paints legal AI with a broad brush, saying, “I’ve used it and it sucks”: no, you’ve used a tool, and that tool sucks. But use Clio Work and Vincent. And really, this sounds like a pitch; it’s not. It’s just that I’ve used all of the tools, and the output difference is so stark.
And I think people are underestimating the power of having the world’s smartest machine, and that’s what large language models are, the world’s smartest machine. GPT-4 beat 90% of humans on the bar exam, and a friend of mine who’s been testing it found that GPT-5 beats 99%. And the question is, how much legal work requires the full intellectual firepower of the bar exam? How much paralegal work requires that firepower? How much secretarial work? Some people estimate a large language model has an IQ of around 130. So we have maybe the most intelligent person in your office, one with access to a billion legal documents, cases, statutes, and regulations, able to use that corpus of knowledge faster and better than anybody you’ve ever known.
And everything we do as lawyers, every single task a lawyer does and bills clients for, is based on words: we ingest words, we analyze words, and we output words. This brilliant 130-IQ AI can do all three of those things, ingest, analyze, and output, at superhuman speed and at a postgraduate level. It beats PhDs in physics. It beats 99% of humans on the bar exam. So to answer your question, I think people are underestimating the power of this large language model when it’s fed with yesterday’s cases, yesterday’s statutes, and yesterday’s regulations. Because everything a lawyer does is applying facts to the law, and if you’re missing one of those, you’re missing a whole bunch. If you have the facts plus the law, you can do much more.
Stephanie Everett:
And so that leads me to: looking ahead five years from now, what role do you think lawyers are going to play in shaping our work and our legal system? I mean, AI is going to have a piece of this, but I think our roles as we know them today will probably shift. I’d just love to hear your thoughts.
Damien Riehl:
One of the smartest people in our space of legal innovation is Jordan Furlong. He’s a Canadian lawyer whose writing is just phenomenal, and he has given an answer to your question that I’m going to steal. We as lawyers tend to think our value to the client is in the widgets we make: the motions we draft, the contracts we draft, the memos we send. But it turns out that Vincent and Clio Work can do all three of those things really, really well. So as more of the widgets are created by AIs like the ones we’re building, what’s left for the humans to do? Jordan Furlong posits three things. One is intuition, because you’re not going to have the AI whispering in your ear in front of the jury, or in front of the judge, or in the boardroom as you’re negotiating the M&A deal.
You’re going to have to have intuition to respond in the moment. The second is trust. Your clients don’t hire you for the widgets; they hire you because you can say, “Client, your problem is now my problem. You can trust me to handle this.” So number one is intuition, number two is trust, and number three is integrity: opposing counsel knows I’m not going to screw them over, and the judge knows this is a person of integrity, that if Damien says something, it’s true. Those three things, the very human things of intuition, trust, and integrity, AI will never be able to do, because they’re part of our humanity. And in the interim, we at Clio and others are going to keep doing more of the widget work so you can do more of the intuition, trust, and integrity.
And those are the fun parts of the job. Maybe you can expand your business to serve more clients, have more good conversations, and be the trusted advisor all of us went to law school to become. So rather than being scared of our AI-enabled future, I think it’s a huge opportunity for us to serve more people, to serve the 92% of legal needs that go unmet because we’re currently too expensive. If AIs like Clio Work are cranking out the widgets, you can serve that 92%, a latent market that’s just waiting to be helped.
Stephanie Everett:
I agree. I have those same three. I read that from Jordan and he’s coming on the show soon. I have that on a Post-it note right there on my monitor so I can keep that front and center because I do think it opens up a cool opportunity for us to become architects, right? To think about how we spend our day just differently and really serving clients. And I know a lot of people that I talk to, they love the client work. They love engaging with clients and thinking strategically with their clients about how cases are going to go. And I think this is the real opportunity we have.
Damien Riehl:
It is a huge opportunity. And I’m thinking of an example literally close to home; greetings from Minnesota. My next-door neighbor wants to build a fence literally up against my house, within six inches of it. So I used Vincent to quickly do the legal research on what Minnesota laws they would be violating, and it found five causes of action. Then I said, take those memoranda, maybe 300 pages’ worth that Vincent cranked out, and create an affidavit: for each element of each of those claims, here are all of my facts. I fed my facts in, and it created an affidavit for me, saying how I’ve used an easement and how there may also be an adverse possession claim.
And then it filled in all of the elements of the claims. And then I said, now turn my affidavit and the research, the 200-plus pages, into a complaint that can be filed. All of this happened in about 30 minutes, which is remarkable to anyone, including me. But here’s the human part. I still asked a friend of mine, Jake Zimmerman, a really amazing solo-small lawyer here in Minnesota who’s been a friend for 20 years, “What am I missing?” And he said, “Damien, you should be doing X, Y, and Z.” And X, Y, and Z were exactly the right things to do. I found other evidence from the Ramsey County Reporter’s office, additional evidence that would bolster my case. The AI didn’t have that; the cases, statutes, and regulations didn’t necessarily have that. That was Jake’s intuition.
And so I think we as lawyers need to ask, “What can I do that Jake did for me?” What are the additional things stuck in my brain that aren’t in the cases or statutes, like “I know from my experience that this judge hates this,” or “I know that opposing counsel will never agree to these things”? Those are the kinds of things in our brains that we can provide in an age of AI.
Stephanie Everett:
And I’m curious, this maybe wraps us back to where we started with your trip to the Vatican, because I wonder if either the AI or that lawyer ever said, “Hey, Damien, this is your neighbor, and you might have to live next to this person for a really long time.” So yes, you can go down this road and file the complaint and the affidavit, and you probably have a lot of case law on your side, but there’s also this human element of having to live next to this person. So maybe we should go try that first?
Damien Riehl:
That’s exactly right. And part of that is that we as lawyers are counselors. What you’ve described is truly counseling: there’s the legal aspect, and then there’s the humanity aspect. So yes to that as point number one. Point number two goes a bit beyond what you said: what if my neighbors had access to something like Vincent or Clio Work? What if my neighbors could put their facts in and say, “My gosh, Damien actually has a really good easement claim and a really good adverse possession claim. Maybe we should work this thing out”? Sadly, that didn’t happen; their lawyer came out guns blazing, and they’re refusing to even negotiate with us. But yes, the humanity needs to come out, and armed with yesterday’s cases, statutes, and regulations plus an AI that can do that analysis really quickly, I think both sides could settle their disputes much more easily and have a more peaceful world.
Stephanie Everett:
Yeah, for sure. And it just occurred to me, I would love to go someplace and say, “Here are my facts, here’s what’s happening,” and have it spit out, here’s what your likely outcome is going to be. And by the way, I just saw a Clio demo of this last week that kind of blew my mind, I think it’s the docket feature, that was like, “Here’s how this judge has decided this series of cases involving this set of facts over this period of time.” That’s pretty powerful.
Damien Riehl:
Yeah. To say, “Your Honor, this is just like the case you decided last week, just like the case you decided yesterday.” When I was at Robins Kaplan, we had a brief bank, but of course not all the briefs made it in there. What I just described is a universal brief bank: not just this firm’s briefs, but all of the briefs from all of the firms in front of this judge. With that corpus, you can say, “Judge, this is just like the case you decided yesterday.” And I’ll take it a bit further from what you just said. What if you take five cases like mine in front of Judge Smith, and then say, “Here are my new facts. Draft a motion to dismiss like these motions I’ve won, in a way that is statistically likely to win for this judge, for this cause of action, in this jurisdiction”? That is, tickle this judge’s brain in the way that this judge’s brain likes to be tickled.
And that judge won’t even know that you’re doing that, but this is what AI enables.
Stephanie Everett:
Or where I thought you were going was: show all this to the neighbor before we go down the expensive, heartbreaking road. I mean, litigation is terrible sometimes. And for the neighbor to see how this is going to play out over the next couple of years, and realize we could short-circuit all of that right now and maybe come to a reasonable agreement. I mean, I think...
Damien Riehl:
That’s 100% right. Yeah. Almost all cases settle. So look clear-eyed at your likelihood, or unlikelihood, of winning, and let’s settle this case before it goes too far.
Stephanie Everett:
And sorry, one more, because now all the ahas are popping for me. As you know, I’ve been telling everybody the billable hour is dying a quick death, guys; we’ve got to figure this out. But if what we just played out happens, and the neighbor and you come to this understanding much quicker using these tools, which I think will eventually be available to the public, now what’s my value as a lawyer, and how do I get paid? I mean, I agree, I want to see access to justice get solved, but boy, if this doesn’t have you as a law firm owner thinking about your future and where your value lies, I hope it gets you started. I don’t have the answers for you yet, but these are the questions we should be engaging with.
Damien Riehl:
I think that’s right. And I think the billable hour will never die, but firms that today are maybe 90% billable hour and only 10% flat fee will inevitably see that flat-fee share increase, to 20, to 30, to 50, maybe landing around the 60% flat fee that the Big Four accounting firms do. That is, they say, we’ve systematized 60% of our work to shrink our costs and increase our profit margin, and we make more money on that 60%; but the other 40% is still billed by the hour, because there are a lot of unknowns with accounting work that you don’t discover until you get in there. So I think “the death of the billable hour” is a misnomer. Some percentage of billable-hour work will always be there.
Its share will just shrink as work moves to flat fees. And I think that will result in us making more money as lawyers.
Stephanie Everett:
Perfect. On that happy note, maybe that’s a great place to wrap this up. Thank you so much. This has been fascinating. I think we achieved my objective of taking this AI discussion in a different direction than most of our conversations have gone. So I really appreciate you coming on and sharing your brain with us.
Damien Riehl:
I appreciate you asking me. I’ve been a huge fan of the Lawyerist Podcast since it first came out; I’ve listened to almost every episode, so it’s really a thrill to be on this side of the microphone. Thank you so much for the work that you do.
The Lawyerist Podcast is a weekly show about lawyering and law practice hosted by Stephanie Everett and Zack Glaser.
