
AI in Legal Practice: Lessons from Gauthier v. Goodyear

When AI goes wrong: ethical boundaries for legal professionals

One of the biggest mistakes you can make as a lawyer is assuming that AI will do your work for you. Well, it won’t. In fact, it might make things worse if you let it. The Gauthier v. Goodyear Tire & Rubber Co. case is the latest and clearest example of what happens when lawyers rely too much on tools they don’t understand.

So, what happened? The short answer is that a lawyer used a generative AI tool to help draft a court submission, or a “brief” in legal terms. The tool came up with nonexistent legal citations. It “hallucinated”. That’s not really surprising; we know generative AI is not hallucination-free. What’s really at stake here is that the lawyer didn’t catch it. But the court did. The result? Sanctions: a financial penalty and mandatory AI training.

But the real story isn’t the punishment of one lawyer. It’s the wake-up call for the entire profession: AI isn’t a shortcut. It’s a tool, and like any tool, it needs to be used with care.

What Went Wrong

The Gauthier case isn’t an outlier. In 2023, something similar happened in Mata v. Avianca, Inc., where lawyers used AI tools to draft arguments, and those tools “hallucinated” citations. The problem isn’t just that AI sometimes fabricates information. It’s that lawyers using these tools are responsible for catching those mistakes. When they don’t, it’s not the AI that gets sanctioned. It’s them. Naturally.

Under Model Rule of Professional Conduct 1.1, lawyers have a duty to provide competent representation, and competence now includes understanding the technology you’re using. Courts are making that clear: judges in cases like Gauthier are signaling that ignorance isn’t an excuse and that over-reliance on AI won’t save you from penalties. It’s also what John Quinn, founder and chairman of Quinn Emanuel Urquhart & Sullivan (the most profitable law firm in the world, with $2 billion in revenue last year), says about using AI at his firm: as a lawyer, you are responsible for what’s in your documents.

The Core Problem: Verification

The heart of the issue is verification. Imagine assigning a junior associate to draft a brief: you wouldn’t file their draft without reading it first. AI deserves the same level of scrutiny. Treat AI outputs as a first draft, not a final one.

Rule 11(b)(2) of the Federal Rules of Civil Procedure requires lawyers to ensure that court submissions are factually and legally sound. AI might help you get to a first draft faster, but it doesn’t replace your judgment. The lawyer’s job isn’t to copy and paste; it’s to verify, analyse, and decide.

How to Use AI Without Getting Sanctioned

Lawyers tend to be skeptical about using AI, and understandably so. But the Gauthier case offers a few lessons:

Stay Skeptical. I am going to state the obvious, but it is critical to treat AI like a new intern: enthusiastic but untrained. That is especially true if you are using basic AI tools that rely only on prompt engineering. Assume it will make mistakes and double-check everything. This will probably change as the technology evolves, but for now, verify.

Understand the Tool. Which leads us to the next lesson: lawyers need to understand the tools they are using. They don’t need to be computer scientists, but they do need to know what AI is good at and where it fails. Tools like ChatGPT are great at summarising large amounts of information but notoriously bad at generating reliable legal citations, especially without training on legal sources.

Invest in Training. Many courts are starting to mandate continuing legal education (CLE) on AI, and that’s not a bad thing. Knowing how to use these tools effectively could become as essential as knowing how to use legal research databases.

Follow Local Rules. Courts are adapting. They have to. Some now require lawyers to certify that AI-generated content has been reviewed and verified. If your court has such rules, follow them to the letter.

Develop Verification Protocols. Another lesson from the Gauthier case is the value of formal verification protocols. Lawyers can make it standard to vet all AI-generated content, build a system for how and when AI can be used, and hold everyone accountable to it, as in the sketch below.
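To make that concrete, here is a minimal, hypothetical sketch in Python of one step such a protocol could automate: pulling anything that looks like a case citation out of a draft and turning it into a checklist for a human reviewer. The regex, the extract_citations helper, and the sample citation are all illustrative assumptions rather than a production tool; the script only finds candidates, and a person still has to confirm each one against a real legal database.

```python
import re

# Hypothetical sketch of one step in a verification protocol: find strings
# that look like case citations so a human can verify each one by hand.
# The pattern is illustrative, not exhaustive; real citation formats vary widely.
CITATION_PATTERN = re.compile(
    r"[A-Z][A-Za-z.'&-]+\s+v\.\s+[A-Z][A-Za-z.'&-]+"  # party names, e.g. "Smith v. Jones"
    r",\s*\d+\s+[A-Za-z0-9. ]+?\s\d+"                 # volume, reporter, first page
    r"(?:\s*\([^)]*\d{4}\))?"                         # optional court and year
)

def extract_citations(draft_text: str) -> list[str]:
    """Return every substring of the draft that looks like a case citation."""
    return [match.group(0) for match in CITATION_PATTERN.finditer(draft_text)]

if __name__ == "__main__":
    # Invented sample draft; the citation is made up on purpose, which is
    # exactly the kind of thing a reviewer must catch.
    draft = (
        "As the court held in Smith v. Jones, 123 F.3d 456 (5th Cir. 1997), "
        "counsel must verify every authority before filing."
    )
    for citation in extract_citations(draft):
        print(f"[ ] verify against a legal database: {citation}")
```

The design point is deliberate: the code only surfaces candidates for review. It never decides that a citation is real, because that judgment, per Gauthier, stays with the lawyer.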

The Bigger Picture: Competence in the AI Era

As always, the sanctions in one specific case aren’t just about one lawyer’s mistake. They are a sign that the profession is evolving: technological competence is becoming a core part of legal competence. The American Bar Association made this clear in 2012, when it amended Comment 8 to Model Rule 1.1 of the Model Rules of Professional Conduct to state:

“To maintain the requisite knowledge and skill, a lawyer should keep abreast of changes in the law and its practice, including the benefits and risks associated with relevant technology...”

This shift goes beyond the AI question; it’s really about how legal practice adapts to a world where technology is changing everything about how we work. Some lawyers fear that AI might replace them. That’s not what’s happening. But AI is raising the bar for what clients and courts expect from lawyers. It’s no longer enough to be good at arguing cases; you also need to evolve with the times and adapt to new technologies.

Why This Matters

Cases like Gauthier are turning points. They show that AI is here to stay in legal practice, but also that the burden of responsibility doesn’t shift. You can’t outsource ethical obligations to a machine.

The lawyers who succeed in this new era will be the ones who treat AI as an assistant, not an answer. They’ll use it to be faster, more thorough, and more innovative. But they’ll never forget that their role is irreplaceable. They’re the ones who bring judgment, ethics, and accountability to the table.

The future of law isn’t about replacing lawyers with AI. It’s about making lawyers better. And that starts with understanding what AI can—and can’t—do.

AI is reshaping the legal landscape, and it’s essential to meet its challenges responsibly. Join AI Legal Frontier for expert insights and strategies to navigate the ethical complexities of legal tech and build a future where innovation and integrity coexist.