Case: United States v. Heppner, No. 25-cr-00503-JSR (S.D.N.Y.)
Judge: Hon. Jed S. Rakoff
Ruling: Oral ruling from the bench, February 10, 2026
Transcript: Hearing Transcript (PDF)
Docket: CourtListener | DOJ Case Page
Executive Summary

On February 10, 2026, Judge Rakoff ruled from the bench that 31 documents a defendant created using Anthropic's consumer Claude tool were not protected by attorney-client privilege or the work product doctrine. Rakoff found "not remotely any basis" for the privilege claim: Heppner had disclosed the materials to a third-party AI tool whose terms expressly provided that users have no expectation of privacy in their inputs. The work product claim failed separately: the documents were prepared by the defendant on his own, not by or at the direction of counsel. The hearing transcript is on the record and citable. The ruling turned on two structural conditions: the AI tool was a third party, and that third party expressly disclaimed privacy. Strip those away and the ruling collapses.

The Facts

Robert Heppner was arrested on November 4, 2025, and charged with securities fraud and wire fraud. The case was assigned to Judge Jed S. Rakoff in the Southern District of New York.

Between his arrest and trial, Heppner, acting on his own and not at the direction of his attorney, used Anthropic's consumer Claude chatbot to create 31 documents related to his defense strategy. These included analysis of the charges, potential defense arguments, and case strategy materials.

Federal agents seized Heppner's devices. The government obtained access to the documents through a grand jury subpoena and sought to introduce them at trial. Heppner's counsel (Quinn Emanuel) moved to suppress the documents, arguing both attorney-client privilege and work product protection.

The Privacy Policy Problem

Rakoff dispatched the attorney-client privilege claim almost immediately. He told defense counsel he saw "not remotely any basis" for it.

The reasoning was sharp: Heppner had disclosed the materials to a third-party AI tool whose terms expressly provided that users have no expectation of privacy in their inputs. Anthropic's terms for the free and consumer tiers of Claude contain provisions that were fatal to the privilege claim:

  • Training on prompts: Anthropic's consumer privacy policy allows the company to use prompts and conversations to train and improve its AI models
  • Disclosure to third parties: The terms permit Anthropic to share data with third-party service providers and, in certain circumstances, with law enforcement
  • No confidentiality guarantee: The consumer terms contain no provision guaranteeing that user content will be kept confidential

The Core Issue

The privilege waiver did not require Anthropic to actually read, share, or train on Heppner's documents. The terms permitting them to do so were sufficient. Submitting content to a service whose terms expressly disclaim confidentiality is third-party disclosure. No expectation of confidentiality means no privilege.

The Work Product Fight

The work product argument is where it got interesting. Defense counsel O'Neil made the stronger play: citing Shih v. Petal Card and Rule 16(b)(2)(A), he argued that it does not matter whether the defendant created the materials himself or at counsel's direction, as long as they were created in anticipation of litigation.

Rakoff engaged with this more seriously but drew a critical distinction: did these documents reflect counsel's strategy, or just the defendant's own thinking?

When O'Neil conceded that "these were prepared by the defendant on his own volition," Rakoff had what he needed. The government's counsel, Rothman, closed it by citing In re Grand Jury Subpoenas: the work product doctrine does not shield materials "prepared neither by the attorney nor his agents."

The Ruling

Rakoff granted the government's motion, ruling from the bench. The court rejected protection on both grounds:

  1. Attorney-client privilege: No expectation of confidentiality. The consumer terms expressly permitted Anthropic to use and disclose the content. Third-party disclosure to a service that disclaims privacy waives privilege. Rakoff found "not remotely any basis" for this claim.
  2. Work product doctrine: The documents were prepared by the defendant on his own volition, not by an attorney or at the direction of an attorney. The doctrine protects materials prepared by or for a party's representative—not a defendant's independent analysis.

The 31 documents were ruled admissible.

The Two Structural Conditions

Rakoff's entire analysis turned on two structural facts:

  1. The AI tool was a third party. Heppner disclosed the materials to an entity outside the attorney-client relationship.
  2. That third party expressly disclaimed privacy. Anthropic's consumer terms provided that users have no expectation of confidentiality in their inputs.

Strip those two conditions away and the ruling collapses. If the AI tool is not a third party in the data path—because the vendor never receives the content—there is no third-party disclosure. If the terms guarantee confidentiality rather than disclaim it, the privilege analysis reverses.

What This Means

Consumer AI Is Not Confidential

Heppner puts on the record what the privacy policies already said: consumer AI tools do not maintain confidentiality. A federal judge has now ruled accordingly, and the transcript is citable. This applies to every consumer-tier AI product with similar terms—ChatGPT's free tier, Gemini's consumer product, Claude's free tier, and others.

For attorneys, the implication is direct: submitting privileged materials to a consumer AI tool whose terms disclaim confidentiality is third-party disclosure. It does not matter whether the AI company actually reads or uses the content. The terms permitting them to do so are enough.

Enterprise Agreements Change the Analysis

The ruling turns on the specific terms of Anthropic's consumer product. Enterprise AI products with different contractual terms—particularly Zero Data Retention agreements that contractually prohibit retention, training, and disclosure—present a materially different privilege analysis.

Under an enterprise ZDR agreement, the vendor contractually commits that:

  • Content is processed in memory and immediately discarded
  • No logs, caches, or stored transcripts are created
  • Content is never used for training or improvement
  • No employee can access or review the content

This reframes the confidentiality analysis. The communication is made under terms that expressly guarantee confidentiality rather than expressly disclaim it.

But Contractual Controls Have Limits

ZDR agreements are contractual promises. They improve the privilege analysis. But they do not change the underlying architecture: if your content travels to the vendor's servers, the vendor has your content during processing, regardless of what the contract says they will do with it.

Contractual controls are necessary. They are not sufficient. A court order compelling production of data the vendor possesses supersedes the vendor's contractual commitment to you about what they will do with that data.

The Architectural Question

Heppner turned on terms of service. But the deeper question it puts on the record is architectural: where does your content exist during processing, and who can access it?

Consumer AI stores and trains on your content. Enterprise AI with ZDR promises to discard it. But both require your content to travel to and be processed on someone else's servers.

The architecture that eliminates the third-party disclosure entirely is one where the vendor never receives the content in the first place—where the client application communicates directly with the AI provider, and the vendor occupies the authentication plane only, never the data plane.

This is the distinction between a contractual control ("we promise not to keep your data") and an architectural control ("we structurally cannot receive your data"). Both matter. One survives a court order.
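
To make the distinction concrete, here is a minimal TypeScript sketch of what an authentication-plane-only design could look like. The endpoints (auth.vendor.example, api.provider.example) and request shapes are hypothetical, not any real product's API; the point is the data path, not the implementation details.

  // Hypothetical sketch: the vendor issues a short-lived token
  // (authentication plane); the client then sends content directly
  // to the AI provider (data plane). All endpoints are illustrative.

  // Step 1: exchange the user's identity for a scoped token.
  // This request carries no document content, only identity.
  async function getScopedToken(userJwt: string): Promise<string> {
    const res = await fetch("https://auth.vendor.example/token", {
      method: "POST",
      headers: { Authorization: `Bearer ${userJwt}` },
    });
    const { token } = await res.json();
    return token; // the vendor learns who is calling, never what is sent
  }

  // Step 2: send the privileged content straight to the AI provider.
  // The vendor's servers are never in this request path.
  async function askModel(token: string, prompt: string): Promise<string> {
    const res = await fetch("https://api.provider.example/v1/complete", {
      method: "POST",
      headers: {
        Authorization: `Bearer ${token}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({ prompt }),
    });
    const { completion } = await res.json();
    return completion;
  }

Under this split, a subpoena served on the vendor could reach authentication records, but not prompts or outputs, because the vendor never possessed them.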

Further Reading

For a detailed analysis of why architectural controls survive where contractual controls cannot, see Why Architecture Matters More Than Privacy Policies.

FAQ: US v. Heppner

Does Heppner mean all AI use waives privilege?

No. The ruling is specific to two conditions: (1) a consumer AI tool whose terms expressly disclaim confidentiality, and (2) documents prepared by the defendant himself, not by or at the direction of counsel. An attorney using an enterprise AI tool with ZDR agreements and architectural privacy controls presents a materially different analysis on both grounds.

Is the transcript citable?

Yes. The hearing transcript is on the record. Rakoff's reasoning is stated from the bench—this is not secondary reporting. A formal written opinion may follow, but the transcript itself is a citable ruling from a federal judge.

What about the witness-advocate conflict?

Defense counsel O'Neil flagged that if the government introduces the AI documents at trial, the best witnesses on their purpose and context are the Quinn Emanuel attorneys—which creates a potential witness-advocate conflict that could force counsel withdrawal or a mistrial. Rakoff's response ("I would try the case in—oh, certainly no later than 2030") suggests he understands the tactical leverage. This is worth watching as the case progresses.

What should I do right now as a practicing attorney?

Stop using consumer AI tools for any work involving privileged or confidential materials. A federal judge has now ruled on the record that submitting content to a consumer AI tool that disclaims privacy is third-party disclosure. If you are using enterprise AI tools, review the vendor's terms to confirm they include explicit confidentiality commitments and ZDR provisions. Ask the architectural question: where does your content exist during processing, and who can access it? For detailed guidance, see our privilege preservation guide.

AI Built for the Post-Heppner World

inCamera's architecture means there is no third-party disclosure to waive privilege over. Your content goes from your device directly to the AI provider. We are never in the data path.