Do lawyers need client consent before using AI on their case? ABA and state bar guidance for 2025
Clients are asking about AI, judges keep posting new standing orders, and bar opinions seem to change every quarter.
So the question for 2025 is pretty direct: do lawyers need client consent before using AI on a case? Sometimes yes. Often no. But silence isn’t your friend.
Below, I break down what the ABA Model Rules and state bars actually expect, when to get informed consent, when a simple heads‑up is enough, and how to handle confidentiality, supervision, and fees without drama.
- A plain-English map of Model Rules 1.1, 1.4, 1.5, 1.6, and 5.3
- Where 2025 state bar guidance is landing
- Clear triggers for consent vs. disclosure, with real examples
- Vendor diligence and confidentiality guardrails that hold up
- Billing and engagement letter language you can copy and adapt
- A quick decision flow you can run at intake
- How LegalSoul helps you do all of this in practice
Why this matters in 2025: consent, disclosure, and client trust
General counsel are adding AI rules to their outside counsel guidelines (OCGs). Some judges now ask you to certify what you checked if AI touched a filing. That’s the backdrop for the “do lawyers need client consent to use AI” question.
You don’t always need formal consent, but using tools without telling the client can ding trust, spark billing fights, or raise confidentiality questions. Remember Mata v. Avianca (S.D.N.Y. 2023)? The issue wasn’t “AI bad.” It was relying on unverified output—fake citations—and not owning the result.
What works in the real world: tiered disclosure. For routine, no‑retention tasks you review, give a general notice. If the tool or use could meaningfully affect strategy, cost, or confidentiality, pause and get informed consent in writing. It’s quick, it honors Model Rule 1.1’s duty of technological competence, and it saves you from awkward emails later.
What “using AI” means in a law practice (and where risks arise)
AI isn’t just a chatbot drafting a brief. Think research accelerators, formatting helpers, deposition and hearing summaries, due diligence triage, eDiscovery analytics, contract review. Some tools run as consumer cloud, others as enterprise SaaS with no‑retention, and some live privately or on‑prem.
The risk isn’t the brand; it’s the data path and oversight. If identifiable client info leaves your control or could be used for model training, you’re in confidentiality territory that may require disclosure or consent. Cross‑border data hops can also trigger GDPR obligations for your client.
Easy framework: sort uses into three buckets—no client data, de‑identified/minimized data, and identifiable/sensitive data. For each bucket, set guardrails up front (e.g., zero‑retention only, PII redaction on, attorney review required). You’ll know when to disclose, when to get consent, and which settings must be on before anyone hits upload.
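If your intake tooling is even lightly scripted, the buckets are easy to encode. Here’s a minimal Python sketch; the bucket names and guardrail flags are hypothetical, so map them to your own policy:

```python
# Hypothetical data-sensitivity buckets and the guardrails each requires.
# Names and flags are illustrative only -- adapt them to your firm's policy.
GUARDRAILS = {
    "no_client_data": {
        "zero_retention_required": False,
        "pii_redaction_required": False,
        "attorney_review_required": True,   # review everything regardless
    },
    "deidentified_or_minimized": {
        "zero_retention_required": True,
        "pii_redaction_required": True,
        "attorney_review_required": True,
    },
    "identifiable_or_sensitive": {
        "zero_retention_required": True,
        "pii_redaction_required": True,
        "attorney_review_required": True,
        "informed_consent_required": True,  # pause and get it in writing
    },
}

def required_guardrails(bucket: str) -> dict:
    """Look up the settings that must be on before anyone hits upload."""
    return GUARDRAILS[bucket]

print(required_guardrails("identifiable_or_sensitive"))
```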
Ethics framework: ABA Model Rules that govern AI use
The ABA Model Rules don’t say “AI,” but they absolutely cover it. Model Rule 1.1 requires you to understand the benefits and risks and to supervise outcomes. Comment 8 points to technological competence. Model Rule 1.6 says protect client information—don’t let vendors retain or train on it without serious safeguards.
Model Rule 1.4 is about communication: if AI materially affects the matter—strategy, risk, timing, cost—tell the client. Model Rule 1.5 keeps fees reasonable; if the tool saves time, your billing should reflect it. And Model Rule 5.3 treats vendors and tools as nonlawyer assistance you must supervise with reasonable assurances.
Courts saying “disclose or certify” are basically echoing this. One nuance to keep in mind: what counts as “reasonable efforts” under 1.6 depends on the matter. A landlord‑tenant case is not a cross‑border M&A with GDPR and strict protective orders. Document how you calibrated your safeguards to the sensitivity.
State bar themes for 2024–2025: what recent guidance converges on
Across states, the chorus is similar: competence, confidentiality, communication, supervision. Florida’s 2024 guidance: you can use generative tools if you protect confidences, verify output, and charge fairly. California’s practical notes (2023) push risk assessments and vendor diligence. North Carolina (2023) warns against exposing secrets to tools that store or reuse data. The NYC Bar (2023) urges policies, testing, and transparency.
For 2025, the tone is “mature operations,” not fear. Bars suggest adding disclosures to engagement letters, limiting sensitive inputs, and keeping audit trails. They also frown on slogans that imply the tool is “co‑counsel,” which could mislead clients about who’s exercising judgment.
One billing angle keeps popping up: pass‑through charges look like other vendor costs and should be disclosed. But if AI replaces big chunks of associate hours, leaving hourly rates unchanged can look unreasonable under 1.5. Shifting tasks to fixed or value fees usually solves it and aligns incentives with results.
When informed consent is required: decision criteria and examples
Get informed consent when the use creates a real confidentiality risk, materially affects representation, or changes fees in a way a reasonable client would care about. Triggers to watch: sending identifiable client data to a system that retains or trains on it, using analytics that will shape strategy, or passing along meaningful AI costs.
Three quick examples:
- HIPAA matters: if you route PHI through a third‑party tool, you likely need a BAA and, unless it’s truly zero‑retention, client authorization. That’s Model Rule 1.6 plus regulatory land.
- Trade secrets: dropping code or formulas into a consumer chatbot with retention on is risky. Either move to a private/no‑retention setup or get explicit consent first.
- Strategy analytics: if you’ll rely on outcome predictions to set settlement ranges, explain the method’s limits and ask for the client’s okay.
Consent doesn’t have to be all‑or‑nothing. Offer narrow approvals by data type (identifiable versus de‑identified) and task (drafting versus analytics). It looks a lot like how clients approve eDiscovery vendors and makes later audits and billing reviews less painful.
When disclosure is advisable but consent may not be required
Plenty of uses only need a quick heads‑up. Say you use a secure, no‑retention assistant to speed formatting, suggest checklists, or summarize a transcript—and a lawyer reviews everything. No identifiable data leaves your control. Your client usually appreciates the efficiency and the transparency.
For example: you summarize a 1,000‑page production with a private tool that never trains on your data. Trade secrets are redacted. An attorney verifies findings. Consent probably isn’t required, but a one‑sentence disclosure sets expectations: we used a zero‑retention tool, a lawyer checked it, nothing trained external models.
One extra touch clients like: note how you verify. Don’t focus on brand names; explain that you ran citation checks, spot‑checked against a control sample, or used dual review for critical filings. It shows care and meets Model Rule 1.4 without overloading the client with tech jargon.
Confidentiality and data handling: vendor diligence checklist
Before any upload, confirm the basics: data retention and training policies, confidentiality commitments, encryption in transit and at rest, access controls, logging, breach response, data residency, and subprocessors. If the data is regulated, you may need a DPA or BAA; HIPAA and GDPR obligations don’t pause just because an AI tool is in the loop.
Example scenario: your life sciences client wants EU‑only processing and no training on their data. You’ll need EU hosting, a processor/subprocessor list, SCCs if relevant, key management details, and audit logs. If a vendor can’t meet it, either move to a private deployment or get the client’s approval for a different path.
Two moves that make a difference:
- Get “no training on your data” in the contract, along with audit rights. Don’t settle for a sales slide.
- Pilot with synthetic or redacted data. Check that redaction sticks across PDFs and images, and confirm logs show who accessed what and when.
That level of diligence covers Model Rule 1.6’s “reasonable efforts” and backs up your Model Rule 5.3 supervision duties.
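If your diligence answers live in a spreadsheet today, a small structured record keeps the same facts queryable at audit time. A sketch, with illustrative field names rather than any industry schema:

```python
from dataclasses import dataclass, field

@dataclass
class VendorDiligence:
    """One record per AI vendor; fields mirror the checklist above.
    Field names are illustrative, not a standard schema."""
    name: str
    retains_data: bool
    trains_on_customer_data: bool
    no_training_in_contract: bool      # the commitment, in writing
    encrypted_in_transit: bool
    encrypted_at_rest: bool
    data_residency: str                # e.g. "EU", "US"
    has_dpa: bool
    has_baa: bool
    subprocessors: list[str] = field(default_factory=list)

    def cleared_for_phi(self) -> bool:
        # PHI needs a BAA and, practically, zero retention (see above).
        return self.has_baa and not self.retains_data

vendor = VendorDiligence(
    name="ExampleAI", retains_data=False, trains_on_customer_data=False,
    no_training_in_contract=True, encrypted_in_transit=True,
    encrypted_at_rest=True, data_residency="EU", has_dpa=True, has_baa=False,
)
print(vendor.cleared_for_phi())  # False -- no BAA yet
```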
Fees and billing: reasonable charges, AI pass‑throughs, and transparency
Model Rule 1.5 is about reasonableness and clarity. If a tool cuts the time, your invoice should show the reduced effort. Good options: fixed or value fees for tasks likely to be accelerated, and clear pass‑through of any material AI charges. Bars have warned against billing human hours for work a tool primarily did without meaningful attorney input.
Example: a motion that used to take 12 hours now takes 6 with a drafting accelerator plus careful edits. On hourly, you bill 6. On a fixed fee, the client gets predictability and you keep the margin. If the tool charges per use (say, tokens), disclose the structure up front and cap it.
One habit that helps during audits: keep a light “AI acceleration” log per matter. Don’t list prompts on invoices. Internally, note the task, attorney review, and time saved. It supports fee reasonableness and shows value at renewal time. You can even offer a choice at scoping: traditional staffing or an AI‑assisted workflow at a lower flat fee—clients like options.
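A log entry needs only a handful of fields. Here’s a minimal sketch in Python; the field names are suggestions, not a bar‑mandated format:

```python
import csv
import datetime

# One internal row per AI-assisted task. Prompts stay off invoices; this
# file stays internal and backs up fee-reasonableness conversations.
LOG_FIELDS = ["date", "matter", "task", "tool_mode",
              "reviewing_attorney", "hours_saved"]

def log_ai_use(path: str, **entry) -> None:
    """Append one AI-acceleration entry to a CSV log (header omitted
    for brevity; 'a' mode creates the file if it doesn't exist)."""
    entry.setdefault("date", datetime.date.today().isoformat())
    with open(path, "a", newline="") as f:
        csv.DictWriter(f, fieldnames=LOG_FIELDS).writerow(entry)

log_ai_use(
    "ai_acceleration_log.csv",
    matter="12345-001",
    task="first draft, motion to dismiss",
    tool_mode="zero-retention",
    reviewing_attorney="A. Lawyer",
    hours_saved=6,
)
```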
Supervision and quality control: preventing hallucinations and errors
Your duty to review is not delegable. Avianca made that painfully clear—fabricated citations sank the filing. Other courts have since sanctioned lawyers or rejected filings that leaned on unverified output. Put guardrails in writing: human review for all AI content, citation checks on legal writing, and keep “facts” sourced from your record, not random web pages.
Concrete practices that stick:
- Prompt libraries with approved patterns, plus “do not use” examples for risky asks.
- Dual review for filings where a tool produced the first draft.
- Matter logs that capture who used what, for which task, and how it was verified.
One more thing: treat prompts like templates with version control. Assign owners, track changes, retire bad patterns. It cuts down on “prompt drift,” where quality erodes over time, and it’s solid proof you’re supervising under Model Rule 5.3.
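Version control here can be as light as a registry keyed by name and version. A toy sketch of the idea, with hypothetical template names:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PromptTemplate:
    """A versioned prompt pattern with an owner; 'retired' flags bad patterns."""
    name: str
    version: int
    owner: str
    text: str
    retired: bool = False

REGISTRY = {
    ("depo_summary", 1): PromptTemplate(
        "depo_summary", 1, "lit-team",
        "Summarize the attached transcript; cite page:line for every fact.",
        retired=True),   # v1 drifted -- retired, but kept for the audit trail
    ("depo_summary", 2): PromptTemplate(
        "depo_summary", 2, "lit-team",
        "Summarize the attached transcript; cite page:line for every fact "
        "and flag anything you could not verify against the record."),
}

def latest(name: str) -> PromptTemplate:
    """Return the newest non-retired version of a pattern."""
    live = [t for (n, _), t in REGISTRY.items() if n == name and not t.retired]
    return max(live, key=lambda t: t.version)

print(latest("depo_summary").version)  # 2
```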
Court, client, and sector‑specific rules: protective orders, HIPAA/GDPR, and standing orders
Some judges now require a certification or disclosure if a filing was drafted or edited with AI. Judge Brantley Starr (N.D. Tex.) kicked this off in 2023, and similar orders have popped up around the country. Most don’t ban tools—they require an attorney to verify authorities.
Protective orders can bar sharing designated material with third‑party systems that retain data. Healthcare, finance, and cross‑border matters add layers like HIPAA or GDPR. You’ll want to track these constraints like you track deadlines.
Try this playbook:
- Keep a list of courts you appear in that require AI disclosures and add a checkbox to filing checklists.
- For HIPAA data, use tools with BAAs and true zero retention. For GDPR, prefer EU processing or lawful transfer mechanisms.
- Read OCGs closely. Some clients require pre‑approval of vendors or forbid certain tools.
Bonus: build “matter data maps” that show which data categories can go to which environments. Tie those to protective‑order language so no one accidentally uploads AEO material to a general tool. If a judge asks what you did, you have a clean story.
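A matter data map can be a plain mapping from data category to permitted environments, with protective‑order designations like AEO locked to the most restrictive one. A sketch with hypothetical category and environment names:

```python
# Which data categories may go to which environments on this matter.
# Categories mirror the protective order; environments mirror your deployments.
MATTER_DATA_MAP = {
    "public_record":       {"consumer_cloud", "enterprise_saas", "private"},
    "confidential":        {"enterprise_saas", "private"},
    "attorneys_eyes_only": {"private"},  # AEO never leaves the private deployment
}

def upload_allowed(category: str, environment: str) -> bool:
    """Gate uploads: check the matter's data map before anything leaves."""
    return environment in MATTER_DATA_MAP.get(category, set())

assert upload_allowed("attorneys_eyes_only", "private")
assert not upload_allowed("attorneys_eyes_only", "enterprise_saas")
```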
Engagement letter language: sample clauses and consent options
Your letter can cover this without turning into a novella. Consider language like: “We use carefully vetted AI tools to improve quality and speed. We do not allow your confidential information to train external models. Attorneys review and verify all AI‑assisted work. If we pass through any platform costs, we’ll discuss and agree first.”
Give clients choices:
- Opt‑in for using identifiable data in third‑party tools with no‑retention guarantees.
- Opt‑out for defined tasks or data categories where they’re not comfortable.
- Limited‑use consent for de‑identified analytics or first‑draft assistance.
Spell out the benefits clients care about—faster turnarounds, more consistent drafting, clearer fees. To keep it tidy, attach a one‑page “preference center” addendum with checkboxes for data types and use cases. It aligns with Model Rule 1.4, gives you a record of choices, and makes mid‑matter updates easy if a court order or OCG changes.
Internal AI policy: governance, training, and audit documentation
A practical policy makes this repeatable. Define where tools can be used, who approves vendors, and which tools are on the whitelist. Under Model Rule 5.3, split responsibilities: IT/security runs technical diligence, practice leaders set boundaries, attorneys verify outputs and record key decisions.
Operational tips that actually help:
- Pilot in a sandbox with synthetic or redacted data. Document what works and what breaks before rollout.
- Review controls when vendors update models—what was fine six months ago may need a tweak now.
- Keep an incident taxonomy (data exposure, fake citation, misclassification) with response steps and when to notify clients.
Your audit trail can be light. A short checklist per matter—used a tool or not, for what, and how it was checked—covers fee reasonableness, shows tech competence, and gives you something to hand a client or court if they ask.
Quick decision flow: do I need consent for this AI use?
Run this quick triage at intake or before a new task:
- Will identifiable client data go to an external system that retains or trains on it?
- Will the tool materially shape strategy, risk, or outcomes a reasonable client would care about?
- Will there be pass‑through charges or staffing shifts that change fees?
- Any court orders, OCGs, HIPAA/GDPR rules, or protective orders that limit tool use?
Then decide (see the code sketch after this list):
- No to all: proceed with attorney review; a brief disclosure is wise.
- Yes to confidentiality: switch to zero‑retention/private or get consent.
- Yes to material impact or fees: disclose and get informed consent.
- Restricted by orders/OCGs: comply or seek modification; don’t proceed without approval.
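If you want the triage as code for your intake form, here’s a minimal sketch; it encodes this article’s flow, not a rule of law:

```python
def ai_consent_triage(identifiable_data_retained: bool,
                      material_impact: bool,
                      fee_change: bool,
                      restricted_by_orders: bool) -> str:
    """Mirror of the four intake questions above; returns the action to take.
    Hard stops (court orders, OCGs) are checked first."""
    if restricted_by_orders:
        return "comply or seek modification; do not proceed without approval"
    if identifiable_data_retained:
        return "switch to zero-retention/private or get informed consent"
    if material_impact or fee_change:
        return "disclose and get informed consent"
    return "proceed with attorney review; brief disclosure is wise"

# Example: routine drafting, no retained client data, no fee change.
print(ai_consent_triage(False, False, False, False))
```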
Bake this into your matter‑opening form. A 60‑second checklist captures the decision and drops the right language into your letter. When clients ask “do lawyers need client consent to use AI,” you’ve got a clear, principled answer—and a record.
How LegalSoul helps you operationalize this compliance
LegalSoul is built for firms that want speed without ethics headaches. On confidentiality: choose zero‑retention or private modes so your data never trains external models. Pick data residency (including EU) and lock access by matter with full audit logs.
Quality and supervision are built in. You get attorney review workflows, citation checks, source pinning, and risk flags for questionable output. Prompt libraries with versioning reduce drift, and a sandbox lets you pilot with synthetic data before anyone touches live files.
On the business side, billing exports separate AI acceleration from pass‑through costs for clean Model Rule 1.5 conversations. We also include policy templates, engagement letter clauses, and a client consent “preference center.” With signed no‑training commitments, SOC 2/ISO attestations, and a clear subprocessor list, you can answer questionnaires and court disclosure orders with confidence.
Quick Takeaways
- No blanket consent rule. Get informed consent when there’s real confidentiality risk, a material impact on strategy or outcomes, or fee changes. Otherwise, give a simple disclosure.
- Anchor to Model Rules 1.1, 1.4, 1.5, 1.6, and 5.3: understand the tech, protect information, communicate, bill fairly, and supervise vendors based on matter sensitivity.
- Make it repeatable: vendor diligence (no‑training commitments, zero‑retention, DPAs/BAAs, residency), attorney verification and cite checks, matter‑level logs, and clear pass‑through terms.
- Put choices in writing: engagement letter options (general disclosure vs. informed consent) and a quick intake flow. Tools like LegalSoul add private modes, PII redaction, audit logs, and clean billing exports.
Conclusion
Short version: you don’t always need consent to use AI. Get it when there’s a meaningful confidentiality risk, a material effect on the representation, or a change in fees—and disclose the rest. Tie your approach to the Model Rules, run real vendor diligence, keep human review front and center, and spell it out in your engagement letter.
Want this to be easy instead of stressful? Book a quick LegalSoul demo and see zero‑retention workflows, audit logs, and client‑ready language that fit 2025 ethics expectations.