January 10, 2026

Do courts require lawyers to disclose AI use in 2025? Federal and state disclosure rules for briefs and filings


You’re about to file a brief and you’re wondering: do I need to tick an ECF checkbox, attach some AI-use certificate, or just file it and move on?

In 2025 there’s still no one-size-fits-all rule. AI disclosure depends on the judge, the court, and whatever local orders are in play.

Courts are paying attention though—especially to citations, confidentiality, and whether a real lawyer actually checked the work.

This guide walks through where disclosure is and isn’t required in federal and state courts, what “AI use” really means, and what to include if a judge wants a certification.

You’ll also find practical steps on verification, data protection, billing transparency, sample language, and simple checklists you can grab today.

And yes, we’ll show how LegalSoul helps with judge-ready certificates, quote and citation checks, and clean audit trails—so you can move quickly without creating risk.

TL;DR: Is AI disclosure required in 2025?

Short answer: there’s no nationwide rule. Whether you must disclose AI use depends on the judge or court, plus your ongoing duties of competence, confidentiality, supervision, and candor.

After the 2023 SDNY Avianca case (fake citations), several judges rolled out standing orders. Some require a short certification. Some ban unverified generative AI in filings. Many say nothing new and rely on existing rules.

Best practice: treat this like a local-rule issue. Before every filing, check the judge’s page and the latest orders. If disclosure is required, it’s usually a brief statement saying whether generative AI helped, that a licensed attorney verified all quotes and citations, and that client info wasn’t fed into unsafe systems.

If you’re searching “do federal courts require AI disclosure in briefs 2025,” the answer changes by courtroom, not by circuit. Assume you may have to certify and keep a tight verification routine. Think of AI like a keen junior—fast, helpful, and occasionally wrong—so you double-check everything before it hits the docket.

Quick Takeaways

  • No nationwide rule in 2025. AI disclosure is a judge-by-judge, local-rule issue. Your core duties still apply: competence, confidentiality, supervision, candor.
  • When disclosure is required, it usually says whether AI assisted, that an attorney verified authorities/quotes/facts, and that client data stayed protected. Always read the judge’s standing orders and ECF instructions.
  • Even without a rule, run a human-in-the-loop verification workflow, keep an AI-use log, and lock down data. Most problems come from unverified citations and misquotes, not AI itself.
  • LegalSoul helps operationalize this: judge-tailored certificates, quote/citation checks, matter-level audit trails, and enterprise privacy controls.

Why this matters now

Courts are wary of unverified AI. The Avianca mess in SDNY (Mata v. Avianca, 2023) pushed fake citations into the spotlight and kicked off a wave of judge-specific orders.

At the same time, big-state bars have guidance on tech competence, confidentiality, and supervision. So you’re juggling procedure and professional responsibility all at once.

Clients and insurers are asking questions too. Some GCs want to see your controls for AI-assisted drafting and research. Some carriers now ask about verification and data safeguards during underwriting.

The stakes aren’t just sanctions. A small credibility ding can stick with a judge for a long time.

One more reason to care: your process shouldn’t depend on which associate drafted the brief. Build habits your whole team can follow so partners can delegate without anxiety—and without slowing things down.

Dial this in now, and you’ll be ready for judge standing orders on generative AI certification without a last-minute scramble.

What counts as “AI use” for disclosure purposes

Most courts focus on generative AI used to draft, summarize, suggest arguments, or propose citations and quotes. Research chatbots and summarizers are usually in scope. Traditional tools—spellcheck, formatting macros, basic analytics—usually aren’t, unless a judge defines “AI” broadly.

Gray areas: citation analyzers, transcript summarizers, tools that suggest quotes. If a tool shaped your legal analysis, language, or authorities, assume it could trigger disclosure.

Example: if a generative tool proposed cases and phrasing for your motion, verify everything and be ready to certify if required. Running a non-generative redline? That’s typically out of scope.

Policy tip for teams: (1) disclose-if-required when generative outputs influence arguments or authorities; (2) always verify quotes and citations when using generative AI. Also, if you only used AI to brainstorm structure, some phrasing may still creep into the final. Log it so you can answer questions later.

Set your definitions now so you respond consistently to any local-rule AI disclosure requirement for attorneys without over- or under-sharing.

Federal courts in 2025: the patchwork

There’s no FRCP or FRAP rule that mandates AI disclosure across the board. It varies by district and—often—by judge.

After 2023, several federal judges issued orders requiring lawyers to certify that no generative AI drafted the filing—or, if it did, that a licensed attorney verified every quote and citation. SDNY attention spiked after Avianca. The Northern District of Texas was an early mover, emphasizing attorney responsibility and attestation.

Elsewhere, you’ll see certification language in scheduling orders or on chambers webpages. Common themes: say whether AI assisted, confirm human verification of authorities and quotes, and protect confidential info from public models.

Always check the docket, local rules, and the judge’s page before filing. Even neighboring courtrooms can have different expectations.

If you’re still wondering, “do federal courts require AI disclosure in briefs 2025,” the safe move is a per-matter log plus a one-pager on the assigned judge’s AI position so no one files blind.

State courts in 2025: themes and variation

Statewide mandates are uncommon. It’s mostly local and judge-specific, just like federal practice.

Bars in big jurisdictions (California, Florida, New York, and others) stress tech competence, confidentiality, supervision, and billing transparency. Some trial courts and divisions mirrored federal trends: disclose if generative AI helped and certify human verification. Others stick with existing ethics rules and ask for nothing extra.

Two realities: state-court sites can be all over the place, so check division or county pages too. And smaller courts sometimes move faster than statewide bodies—so a new chamber rule might hit the web before formal guidance.

If you’re mapping state-court AI disclosure rules by jurisdiction in 2025, build the map bottom-up: judge/chambers, then division, then county, then state. When a page is silent, call the clerk. They often know what the judge expects even before a formal order exists.

That five-minute call can save a refiling. Also watch appellate quirks—some courts tolerate AI-assisted drafting at the trial level but want a tighter attestation on appeal.

Ethics and billing implications you cannot ignore

Even if there’s no disclosure rule, your duties don’t change. ABA Model Rules 1.1 (competence), 1.6 (confidentiality), and 5.3 (supervision) fit this topic neatly.

In plain English: know what your tools can’t do, protect client information, and treat AI outputs like a junior’s draft—review and verify before you rely on it.

Billing is another hot spot. If AI saves time, focus your narrative on value and judgment, not raw hours. Don’t write entries that sound like you billed a robot.

After 2023’s citation fiascos, some corporate clients added AI governance questions to RFPs. A smart move is to put your “human-in-the-loop” process in engagement letters. Explain verification, confidentiality, and when you’ll tell the client about AI use.

That keeps expectations clear, reduces disputes, and shows you’re living up to ABA Model Rules 1.1, 1.6, and 5.3 on AI competence and supervision.

It also gives partners a clean answer when procurement asks, “So, how do you control AI risk?”

When and how to disclose AI use (if required)

If the court wants a disclosure, follow the format exactly. It might live in a certificate at the end of the brief, a short declaration, a line in the signature block, or an ECF checkbox or declaration for AI-assisted drafting.

Typical parts: whether generative AI was used, that a licensed attorney verified all authorities, quotes, and facts, and that confidential or privileged information never went into unsafe systems. Skip vendor names unless the order asks for them.

Sample language you can adapt: “Counsel certifies that generative AI tools [were / were not] used in drafting this filing. A licensed attorney independently verified all legal authorities, quotations, and factual assertions. No confidential or privileged information was disclosed to systems without contractual and technical safeguards.”

Pro tip: connect your certification to your internal records—note the verification date and reviewer. If the court follows up, you can produce the paper trail fast.

Keep a small library of sample AI use certification language for court filings by jurisdiction. It pays off the night before a deadline.

When disclosure is not required: prudent practices

No rule? You still own accuracy and confidentiality. Keep a light AI-use log per matter that shows why you used it (outline, phrasing, case ideas) and who verified quotes and citations.

Independently confirm case existence, citation format, and quotes with your trusted research tools. If AI changed your staffing or costs in a noticeable way, a quick client note can keep trust high.

Try a simple “AI checkpoint” before filing: someone who didn’t draft takes ten minutes to look for red flags (too-slick lines, unfamiliar authorities, quotes without cites) and spot-checks a few citations.

Also train safe prompting—minimize sensitive details and abstract facts when you can. If anyone questions your process later, having a law firm AI policy template and audit trail for filings usually answers it in minutes.

Verification standards and human-in-the-loop workflow

Make verification a real step, not a vibe. For every brief, check: case existence and citation format, pincites and quotes against the source, procedural posture and subsequent history, and factual assertions against the record.

Use primary sources, not screenshots from an AI output. The issue in Avianca wasn’t “AI use”—it was unverified citations. Learn from that.

Build a human-in-the-loop verification workflow for briefs: the drafter tags authorities, a verifier pulls sources and flags mismatches, and a senior lawyer spot-checks and signs off.

Track a couple of simple metrics—how many authorities verified, how many issues found—to improve over time.

When verifying quotes and citations when using generative AI, add one extra step: drop the quote into your research system and compare. Keep a tiny “authority delta” log of every change made to citations and quotes during verification.

If the judge asks about your process, you can show before/after without revealing strategy.

Data security and confidentiality controls

Confidentiality is non-negotiable. Segment your tools. Use enterprise legal AI with no training on your data and contractual privacy. Don’t paste sensitive details into public models.

Set technical guardrails: data loss prevention, redaction, and “minimum necessary” inputs by default. Confirm vendor retention and deletion. If data might cross borders, document how it’s handled.

Control access, too. Limit who can export drafts and log who does what.

Example: for a sealed matter, run all AI interactions in a secure workspace with logging, and use placeholders for names during drafting. Explain your confidentiality and client data safeguards with legal AI tools in client-facing materials. More procurement teams ask whether vendors fine-tune on your content (they shouldn’t) and whether you can produce audit logs.

Add one more nudge: put a small header on AI-assisted drafts reminding reviewers to verify authorities and remove placeholders. It prevents early drafts from wandering outside the team.

Sanctions, pitfalls, and how courts are responding

Sanctions tend to hit fake or misquoted citations. In Avianca, the court sanctioned attorneys for invented cases and demanded affidavits explaining their vetting.

Other courts have ordered corrections, extra certifications, or issued warnings when AI-related errors popped up. Patterns repeat: not reading the standing order, trusting an AI summary of a case, or filing a half-done certificate.

Most judges are practical. They’re not banning responsible AI. They’re punishing reliance without verification.

If you worry about sanctions for fake AI-generated citations in 2025, your shield is a documented verification process and a clean log of who checked what and when.

One handy habit: add “quote provenance.” Every quote in your brief links to a source in your memo file. If a line gets challenged, you can show the source instantly.

Also avoid over-disclosure that reveals strategy. Certify what’s required and leave internal workflows out of it.

Templates and checklists you can adopt today

Standardize the basics so crunch time isn’t chaotic. Keep three living docs:

  • AI-use log template: matter, tool/purpose, prompts/outputs retained, verification steps, reviewer, date/time.
  • Pre-filing checklist tuned to your court: standing orders checked, disclosure needed?, citations/quotes verified, facts cross-checked, confidentiality confirmed, certificate attached if required.
  • Short library of disclosure and certification clauses per jurisdiction, including sample AI use certification language for court filings.
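For teams that keep the AI-use log in a spreadsheet or a small internal script rather than on paper, the template fields above can be sketched as a simple record with a filing-gate check. This is a hypothetical illustration, not a LegalSoul feature; the field and function names are assumptions.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AIUseEntry:
    """One row of a per-matter AI-use log (fields mirror the template above)."""
    matter: str
    tool_and_purpose: str           # e.g. "research chatbot: candidate authorities"
    prompts_outputs_retained: bool  # were prompts/outputs saved to the matter file?
    verification_steps: list[str] = field(default_factory=list)
    reviewer: str = ""
    timestamp: str = ""

    def ready_to_file(self) -> bool:
        # Gate: every entry needs a named reviewer and at least
        # one recorded verification step before the brief goes out.
        return bool(self.reviewer) and len(self.verification_steps) > 0

entry = AIUseEntry(
    matter="2025-CV-0123",
    tool_and_purpose="generative drafting assistant: outline and phrasing",
    prompts_outputs_retained=True,
    verification_steps=["confirmed case existence", "checked pincites and quotes"],
    reviewer="A. Associate",
    timestamp=datetime.now(timezone.utc).isoformat(),
)
print(entry.ready_to_file())  # True once reviewer and verification are recorded
```

The point of the gate is cultural, not technical: nothing leaves the log marked ready until a human verifier is on record.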

Add a client engagement paragraph explaining supervised AI use, confidentiality, and billing. It reduces back-and-forth later.

Improve with data: track how often verifiers catch errors, which teams trigger the most flags, and where steps get skipped. That feedback saves real hours.

Last thing—keep a one-page “clerk call script” for gray areas. Know what to ask and how to document the guidance. Together, these give you a lightweight law firm AI policy template and audit trail for filings without slowing down busy litigators.

A 30-day implementation plan for your firm

Week 1: List your current tools and map your active judges. For each matter, note whether the judge has an AI-related standing order. Draft a short policy defining what counts as AI use and what verification you require.

Week 2: Build templates—certifications, engagement language, and a pre-filing checklist. Set up a secure AI workspace with logging, redaction, and export controls. Pick your verification workflow and assign verifiers.

Week 3: Pilot on two low-risk matters. Track time saved, errors caught, and friction points. Update templates from real feedback. Add a one-page field guide for “local rule AI disclosure requirement for attorneys” so associates know where to look and what to do.

Week 4: Train the team—45 minutes on verification standards, data hygiene, and when to disclose. Turn on audit logging and decide who watches it. End with partner sign-off so this becomes your default playbook. Bonus: schedule a 15-minute “rule check” two weeks before major filings to catch new orders.

How LegalSoul supports compliant AI use

LegalSoul fits what courts are asking for. It builds court- and judge-specific disclosure language you can drop into a brief. It won’t let you export until a licensed attorney completes the verification steps you set.

Its citation and quote checker pulls every authority from your draft, compares quotes and pincites to the source, and flags mismatches for attorney review—so the human-in-the-loop verification workflow for briefs actually happens.

On privacy, LegalSoul uses enterprise legal AI with no training on your data, plus role-based access, data loss prevention, and configurable redaction. It keeps matter-level audit trails—prompts, outputs, reviewers, timestamps—so you can show diligence to a court or client.

Admins can enforce firmwide policies, like requiring verifier sign-off or adding the right certificate. If a judge wants an ECF checkbox or a specific declaration for AI-assisted drafting, LegalSoul points you to the exact format without exposing internal methods. Fast and defensible, which is exactly what you need under the patchwork of court rules on AI use in legal filings in 2025.

Key takeaways and next steps

  • Treat AI disclosure as a local, judge-specific call. Check every time before you file.
  • Bake verification and confidentiality into your default routine. Confirm every authority and quote, and record who checked what and when.
  • Standardize templates, logs, and short trainings now. You’ll lower risk and shave time off future filings.
  • Use enterprise tools with real privacy commitments and logging, and keep billing narratives focused on value delivered.

Next steps: run the 30-day plan—inventory judges, set up secure tools, draft templates, pilot, train. You’ll be ready for whatever your courtroom expects and still keep the pace your clients want.

Conclusion

In 2025, AI disclosure isn’t universal. It’s local and judge-specific. Your duties—accuracy, candor, confidentiality—haven’t changed. Build a verification-first workflow, keep an AI-use log, and use simple templates so you can certify when needed and move faster when not.

Don’t wait for a new standing order to force it. Put guardrails, auditability, and citation checks in place now. Want help? See how LegalSoul generates judge-ready certificates, verifies quotes and citations, and maintains matter-level audit trails.

Book a demo. Get your policy live in 30 days.
