The Stored Communications Act and AI: Why Lawyers Are Right to Be More Cautious
"If you are on a ChatGPT Business/Team plan where ChatGPT is not training on your data, yet retains your data, how is that different than being on a Google Workspace Business Plan where Google retains your data?"
This is a question that comes up repeatedly in law firm discussions about AI adoption. On the surface, it seems like a reasonable comparison: both services store your data, both promise not to use it for certain purposes, and both are enterprise-grade products. So why do lawyers express more caution with one than the other?
The answer lies in a piece of legislation from 1986 that has become invisible precisely because it works so well: the Stored Communications Act.
The SCA: The Water We've Been Swimming In
The Stored Communications Act (18 U.S.C. §§ 2701–2712) created a privacy and responsibility regime for electronic communications that we've largely stopped noticing, the way a fish stops noticing water, because its primitives have survived for nearly four decades. It established clear categories for how the government and third parties can access stored electronic communications and when providers must disclose them.
Remote Computing Service (RCS)
Under the SCA, a Remote Computing Service is defined as "the provision to the public of computer storage or processing services by means of an electronic communications system." This classification brings with it specific protections, disclosure requirements, and subpoena procedures that courts and practitioners understand well.
Google Workspace, Dropbox, Microsoft 365—these providers have all been operating within this well-established legal framework for decades. Courts understand how to classify them. Lawyers know how subpoenas interact with them. Disclosure obligations are mapped out in case law and compliance playbooks.
These providers have also operationalized compliance expectations. They offer readily accessible Business Associate Agreements to support HIPAA/HITECH-regulated customers. Their data processing agreements have been litigated, refined, and standardized. The rules of engagement are known.
AI Providers: A Different Animal Entirely
This brings us to the modern AI provider and why it represents a technology different from anything that came before.
The Technical Distinction
A modern AI company is an inference processor creating derivative conversational artifacts—a conversation between a human and a computer that is sometimes retained. This is categorically different from a storage provider holding files or a communication service transmitting messages.
When you upload a document to Dropbox, Dropbox stores that document. The relationship is straightforward: you give them data, they hold it, you can retrieve it. The SCA's framework for "storage" applies cleanly.
When you send a prompt to ChatGPT, something different happens:
- Your input is processed through a neural network
- The model generates a response based on statistical patterns
- A "conversation" artifact is created that didn't exist before
- That artifact may or may not be retained, depending on the service tier
Is this "storage"? Is it "processing"? Is the AI's response your data, the provider's data, or something new entirely?
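The flow just described can be sketched in a few lines. This is a hypothetical illustration, not any real provider's API: the names `ConversationArtifact`, `run_inference`, and the tier strings are invented here to show that the artifact is new data derived from the prompt, and that retention is a policy choice keyed to the service tier rather than an inherent property of the computation.

```python
from dataclasses import dataclass

# Hypothetical sketch: an inference call creates a "conversation
# artifact" that did not exist before the request.

@dataclass
class ConversationArtifact:
    prompt: str
    response: str
    retained: bool  # whether the provider keeps a copy after the call

def run_inference(prompt: str, tier: str) -> ConversationArtifact:
    # Stand-in for the neural-network forward pass: the response is
    # new data derived from the prompt, not a stored copy of it.
    response = f"[model output derived from {len(prompt)} input chars]"
    # Retention is a policy decision tied to the service tier, not to
    # the computation itself.
    retained = tier != "zero-data-retention"
    return ConversationArtifact(prompt, response, retained)

artifact = run_inference("Summarize this contract clause.", tier="business")
print(artifact.retained)  # True: on this hypothetical tier, a copy persists
```

The classification question arises because the same computation produces a retained artifact on one tier and nothing at all on another.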
There is currently limited appellate guidance clearly classifying AI providers as Remote Computing Services under the SCA. It is plausible these providers will fall under that classification. However, until that is fully decided, credible arguments exist on both sides.
Why This Matters for Law Firms
The absence of clear legal classification creates several practical problems:
1. Subpoena Uncertainty
When opposing counsel subpoenas Google for your client's emails, there's a well-worn process. Both parties understand what Google will and won't produce, what objections are available, and how courts typically rule.
What happens when opposing counsel subpoenas OpenAI for your ChatGPT conversations about case strategy? The answer is: nobody is entirely certain.
2. Disclosure Obligations
Under the SCA, providers have specific obligations about when they can and cannot voluntarily disclose user communications. Does an AI provider analyzing your privileged documents fall under these same protections? Or do they have broader latitude to share information with law enforcement?
3. Data Custody Questions
If an AI provider retains your prompts for "safety monitoring," who is the custodian of that data for purposes of litigation hold obligations? If your client is in litigation and you've been using AI to analyze case documents, do you need to issue a preservation letter to your AI vendor?
| Question | Traditional Cloud Storage | AI Inference Providers |
|---|---|---|
| SCA Classification | Established as RCS | Uncertain |
| Subpoena Procedure | Well-defined | Untested |
| Disclosure Rules | Mapped in case law | Ambiguous |
| BAA/DPA Standards | Mature | Evolving |
| Data Custody | Clear | Unresolved |
Zero Data Retention: Clarity Through Architecture
This is precisely where Zero Data Retention becomes more than a privacy feature—it becomes a legal risk mitigation strategy.
Zero Data Retention provides clarity and specificity about who is responsible for custody and records preservation. It is a deliberate technical design choice that both mitigates and bounds risk until appellate guidance is established.
When your AI provider practices true Zero Data Retention—ephemeral processing with no disk writes—the legal ambiguity of the SCA classification becomes largely moot:
- There's nothing to subpoena because nothing is stored
- There are no disclosure questions because there's nothing to disclose
- Data custody is clear: the data exists only on your systems
- Litigation hold obligations remain with you, not a third party
In effect, ZDR architecture sidesteps the entire question of whether AI providers are RCS entities under the SCA by ensuring they never hold the data in the first place.
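The contrast can be made concrete with a toy sketch. Both classes below are hypothetical and show no real provider's internals; the point is structural: a retaining provider accumulates a store that a subpoena can target, while a ZDR provider has no storage at all, so there is nothing to produce.

```python
# Hypothetical contrast between a retaining provider and a
# Zero Data Retention provider. Illustrative only.

class RetainingProvider:
    def __init__(self):
        self.stored_conversations = []  # a potential subpoena target

    def infer(self, prompt):
        response = f"[output for {len(prompt)} chars]"
        self.stored_conversations.append((prompt, response))
        return response

class ZDRProvider:
    # No storage attribute at all: processing is ephemeral, and every
    # reference is dropped when the call returns.
    def infer(self, prompt):
        return f"[output for {len(prompt)} chars]"

retaining, zdr = RetainingProvider(), ZDRProvider()
retaining.infer("case strategy memo")
zdr.infer("case strategy memo")

print(len(retaining.stored_conversations))   # 1: something to subpoena
print(hasattr(zdr, "stored_conversations"))  # False: nothing to produce
```

After the ZDR call returns, the only copies of the prompt and response live on the client's own systems, which is exactly where custody and preservation duties remain.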
"Who defines disclosure in this regime?"
Until courts definitively answer whether AI providers fall under SCA protections, the safest answer is: ensure there's nothing to disclose. Zero Data Retention isn't just a technical preference—it's a legal strategy for operating in regulatory uncertainty.
A Brief History of Why This Matters
- 1986: Congress creates the SCA's framework for electronic communications privacy, including the RCS classification
- 1990s–2010s: Courts consistently classify services like Dropbox and Google Drive as RCS under the existing framework
- 2018: The CLOUD Act updates the SCA for cross-border data requests, but doesn't address AI inference
- 2022–present: Novel technology creates a regulatory gap: inference processors don't fit cleanly into existing categories
- Today: Courts have not definitively ruled on AI provider classification under the SCA
What This Means for Your Firm
The lawyer asking "why treat ChatGPT differently than Google Workspace?" is asking the right question. The answer isn't that one is inherently more dangerous than the other—it's that one operates within a known legal framework and the other doesn't.
This doesn't mean you can't use AI. It means you should use AI in ways that don't depend on uncertain legal protections:
- Demand Zero Data Retention: Choose providers that never store your prompts or outputs. When there's no data, there's no classification question.
- Verify the architecture: "We don't train on your data" is not the same as "we don't store your data." Read the DPA carefully.
- Document your diligence: If courts later find that AI providers aren't protected under the SCA, you want a record showing you took reasonable precautions.
- Consider local alternatives: For the most sensitive matters, on-premise or client-side AI processing eliminates third-party custody entirely.
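For the local-alternative approach, the custody point can be sketched in code. This is a hypothetical client-side session, not a real product: `LocalSession` and `preserve` are invented names, and the "model output" is a stand-in. What it shows is that when the transcript lives on the firm's own systems, a litigation hold can be satisfied locally without involving any third party.

```python
import json
import tempfile
from pathlib import Path

# Hypothetical client-side session: the firm, not a vendor, holds the
# transcript, so preservation duties can be met entirely in-house.

class LocalSession:
    def __init__(self):
        self.transcript = []  # custody stays on the firm's systems

    def ask(self, prompt):
        # Stand-in for on-premise or client-side model inference.
        response = f"[local model output for {len(prompt)} chars]"
        self.transcript.append({"prompt": prompt, "response": response})
        return response

    def preserve(self, path):
        # Export the full record for a litigation hold; no third-party
        # preservation letter is needed because no third party has data.
        Path(path).write_text(json.dumps(self.transcript, indent=2))

session = LocalSession()
session.ask("Review the disclosure schedule.")
with tempfile.TemporaryDirectory() as d:
    session.preserve(f"{d}/litigation_hold.json")
```

The design choice here mirrors the legal analysis: keeping the artifact client-side collapses the custody question rather than answering it.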
FAQ: The SCA and AI Providers
Are AI providers classified as Remote Computing Services under the SCA?
There is currently limited appellate guidance clearly classifying AI providers as Remote Computing Services under the Stored Communications Act. It is plausible these providers will eventually fall under that classification, but credible arguments exist on both sides. This creates legal ambiguity that firms should address through technical controls like Zero Data Retention.
Why do lawyers treat ChatGPT differently than Google Workspace?
Google Workspace and similar storage providers operate within a well-established legal framework under the SCA that has been refined through decades of case law. Courts understand how to classify them, how subpoenas interact with them, and where disclosure obligations fall. AI providers present a novel technology—inference processors creating derivative conversational artifacts—that doesn't fit neatly into these existing legal categories.
How does Zero Data Retention help with SCA uncertainty?
Zero Data Retention provides clarity and specificity about who is the responsible party for custody and records preservation. By ensuring the AI provider never stores data, you eliminate questions about disclosure obligations, subpoena compliance, and data custody that the SCA framework hasn't yet resolved for AI services. When there's nothing stored, there's nothing to classify.
Should I wait for courts to clarify AI provider classification before using AI?
No—but you should use AI in ways that don't depend on uncertain legal protections. By choosing providers with Zero Data Retention architectures, you can adopt AI technology while minimizing exposure to the unresolved questions about how the SCA applies to inference processing.
Built for Legal Certainty
inCamera's Zero Data Retention architecture ensures your AI usage doesn't depend on unsettled law. Direct client-to-provider communication with no intermediate storage.