Attorney-Client Privilege and Generative AI: A Guide to Risk and Compliance
Can lawyers ethically use ChatGPT? The short answer is yes, but not the consumer version. Attorney-client privilege is at risk the moment confidential client data touches a server you do not control and cannot audit. This guide explains the four AI data-handling models and how to preserve privilege while using generative AI.
Does Using AI Waive Attorney-Client Privilege?
The core question facing legal professionals today is whether inputting client data into a Large Language Model (LLM) constitutes a waiver of privilege.
Under the Third-Party Doctrine, disclosing confidential information to a third party generally waives privilege. However, an exception exists for "necessary intermediaries" (such as translators, paralegals, and other agents who facilitate the representation).
Most consumer-grade AI tools (like the free version of ChatGPT) are likely to be treated as "strangers," not necessary intermediaries. They store data indefinitely, reserve the right to human review (meaning the provider's employees or contractors can read your client's secrets), and use your inputs to train their models, which can surface your client's information to other users.
If you paste a confidential memo into a public chatbot, you have likely waived privilege.
Why Consumer AI Tools Are High-Risk for Law Firms
To understand the risk, you must look at the Terms of Service of standard AI platforms:
- Data Retention: Platforms often store prompts for 30 days or more.
- Model Training: Unless explicitly opted out, inputs are used to train future model iterations.
- Human Review: "Safety reviews" allow third-party contractors to read logs to prevent abuse.
In the eyes of a court, this lack of privacy can amount to a failure to take reasonable precautions to protect client confidences (ABA Model Rule 1.6).
The 4 AI Data-Handling Models: A Taxonomy for Lawyers
Not all AI is built the same. When evaluating legal tech, you will encounter four distinct data architectures. Your "privilege safety" depends entirely on which model you choose.
| AI Model Type | Example | Data Retention | Human Review? | Privilege Risk |
|---|---|---|---|---|
| 1. Consumer Chat | ChatGPT (Free/Plus) | 30+ Days | Yes | High (Avoid) |
| 2. Commercial API | GPT-4 via Azure (Default) | Configurable | Possible | Moderate (Requires DPA) |
| 3. Zero Data Retention | Enterprise Legal AI | 0 Days | No | Low (Preferred) |
| 4. Client-Side / Local | On-Premise LLMs | Never leaves device | No | Minimal (Gold Standard) |
For substantive legal work involving client facts, firms should strictly use Model 3 (Zero Data Retention) or Model 4 (Client-Side/Local) tools.
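One way a firm might operationalize this taxonomy is a simple policy lookup that a practice-management tool could consult before a prompt is sent. This is an illustrative sketch only: the category names and rules below are hypothetical examples of a firm policy, not legal advice or an industry standard.

```python
# Illustrative firm policy mapping the four data-handling models to
# permitted uses; all names and rules here are hypothetical.
POLICY = {
    "consumer_chat":       "prohibited",            # Model 1: avoid entirely
    "commercial_api":      "redacted_prompts_only", # Model 2: only with a DPA
    "zero_data_retention": "client_facts_allowed",  # Model 3: preferred
    "client_side_local":   "client_facts_allowed",  # Model 4: gold standard
}

def may_use_client_facts(model_type: str) -> bool:
    """True if substantive client facts may be entered into this tool class."""
    return POLICY.get(model_type) == "client_facts_allowed"

print(may_use_client_facts("consumer_chat"))        # False
print(may_use_client_facts("zero_data_retention"))  # True
```

A real deployment would enforce this check in the firm's prompt gateway rather than trust each attorney to classify tools manually.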
Legal Ethics Opinions on AI and Confidentiality
Recent ethics opinions from major jurisdictions, including Florida (Bar Ethics Op. 24-1), California, and New Jersey, have converged on a single standard: the duty of technological competence.
You do not need to be a coder, but you must understand:
- Where the data is physically stored.
- Who has access to it.
- Whether it can be deleted.
If you cannot answer these three questions about your AI tool, you should not use it for client work.
A Practical "Safe AI" Workflow for Attorneys
If your firm permits the use of Generative AI, strictly adhere to this workflow to minimize the risk of accidental waiver.
1. The Redaction Protocol
Before prompting, strip all Personally Identifiable Information (PII) and entity names. Use placeholders:
- Instead of: "Draft a clause for the merger between Apex Corp and Beta Ltd for $50M."
- Use: "Draft a clause for the merger between [BUYER] and [TARGET] for [AMOUNT]."
2. Vendor Due Diligence (The Checklist)
Before signing a contract with a legal AI vendor, require them to answer these specific security questions:
- Zero Data Retention (ZDR): Can you guarantee prompts are deleted immediately after processing?
- Training Data: Will my data be used to train your base models? (The answer must be "No").
- SOC-2 Compliance: Do you hold a SOC-2 Type II or ISO 27001 certification?
- Subpoena Policy: Will you notify us within 48 hours if our data is subpoenaed?
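The checklist above can also be encoded as a simple screening function that a firm's technology committee might run against each vendor's written answers. The field names are illustrative, not an industry standard, and a "pass" here is a floor, not a substitute for reviewing the actual contract.

```python
def vendor_passes(answers: dict) -> bool:
    """Return True only if a vendor's answers meet every checklist item.

    Expected keys (hypothetical field names):
      zero_data_retention      - prompts deleted immediately after processing
      trains_on_customer_data  - must be False: never train on firm data
      soc2_type2_or_iso27001   - holds an independent security certification
      subpoena_notice_hours    - maximum notice window for legal process
    """
    return (
        answers.get("zero_data_retention") is True
        and answers.get("trains_on_customer_data") is False
        and answers.get("soc2_type2_or_iso27001") is True
        and answers.get("subpoena_notice_hours", float("inf")) <= 48
    )

compliant = {
    "zero_data_retention": True,
    "trains_on_customer_data": False,
    "soc2_type2_or_iso27001": True,
    "subpoena_notice_hours": 24,
}
print(vendor_passes(compliant))  # True
```

Note that missing answers fail the screen by design: a vendor that will not answer a question in writing should be treated as answering it unfavorably.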
3. Output Review
Treat AI output as you would the work of a first-year associate: verify every citation, quotation, and factual assertion before it reaches a client or a court. Mark all AI-generated drafts as "PRELIMINARY – AI ASSISTED" to track provenance.
FAQ: AI and Legal Privilege
Can I use ChatGPT if I turn off chat history?
While turning off history improves privacy, your prompts still transit, and may persist on, OpenAI's servers. For high-stakes litigation or M&A, an enterprise agreement with a Data Processing Agreement (DPA) and contractual retention limits is significantly safer.
Does the work-product doctrine protect AI prompts?
Theoretically, yes, but the "Third-Party Waiver" rule still applies. If you share your work product with an unprivileged third party (the AI vendor), you may lose that protection.
What is "Tier-0" or "Client-Side" AI?
This refers to AI models that run entirely on your firm's own hardware or within a private cloud (VPC) where the AI vendor has no access to the data. Because client information never leaves the firm's control, no third-party disclosure occurs, making this the strongest available safeguard for privilege.
Secure Your Practice
Data security is no longer an IT issue; it is an ethics issue. Learn how inCamera's Zero Data Retention architecture protects attorney-client privilege.