It’s “frighteningly likely” many US courts will overlook AI errors, expert says

While the thought of lawyers lawyering with AI gives me the icks, I also understand that at a certain point it may play out like the self-driving car argument: once the AI is good enough, it will be better than the average human -- since I think it's obvious to everyone that human lawyers make plenty of mistakes. So if you knew the average lawyer made 3.6 mistakes per case and the AI only made 1.2, it's still a net gain. On the other hand, though, this could lead to complacency that drives even more injustice.

8 comments
  • Feels like overlooking the same issue as with every other AI use

    When a human makes a mistake and is called out, they can usually fix the mistake. When genAI outputs nonsense, it's fucking nonsense; you can't fix something that's fundamentally made up, and if you try to "ask it" to fix it, it'll just respond with more nonsense. I hallucinated this case? Certainly! Here's 3 other cases you could cite instead: 3 new made-up cases.

  • So if you knew the average lawyer made 3.6 mistakes per case and the AI only made 1.2, it’s still a net gain.

    thats-not-how-any-of-this-works.webm

  • Yeah, I don't care about the raw number of mistakes, I care whether the mistakes are severe enough to throw the case. Stuff like missing filing deadlines.