If the SEC Asks Tomorrow, Can You Prove It Today?
Most RIAs can't prove a single signed disclosure wasn't altered after the client signed it. The Heppner case shows why that's now an enforcement problem, and what real proof actually looks like.

Theo Katsoulis
Founder & CEO
A former CEO allegedly stole $150 million from a public company. He forged backdated documents to fool the auditors. Then, after he got subpoenaed, he asked Claude to help him build his defense. A federal judge just ruled those AI chats are fair game for prosecutors.
The part most firms should be paying attention to isn't the fraud. It's why he got caught: a forensic accident on a hard drive, not a system. That distinction matters for every RIA in the country, even the ones who'd never dream of forging anything.
The proof gap
Every RIA has a binder of policies. Almost none can show, in one click, byte-level evidence that they actually followed them.
Walk through the questions an SEC examiner will actually ask. Was the ADV delivered on time? Is the form the client signed bit-for-bit identical to the form you sent? Has anything been altered since?
Drafting the policy is the straightforward part. You hire a consultant or a compliance attorney, you get a polished binder, you review it, and file it. Done.
Proving the policy was followed, every time, on demand, in a way an examiner can't poke holes in, is where most firms fall apart.
A forensic accident, not a system
Bradley Heppner, founder of Beneficient and former chairman of GWG Holdings, allegedly siphoned $150 million from GWG through a shell company he secretly controlled. In 2019 he allegedly forged backdated documents and fabricated emails to convince auditors the shell was independent. Years later, after being subpoenaed, he used the consumer version of Claude to draft about 31 documents for his defense and forwarded them to his lawyers without telling them he'd used AI.
The FBI seized his hard drive. The forged documents were on it… and so were the AI prompts.
In February 2026, Judge Rakoff (SDNY) ruled those AI conversations weren't protected by attorney-client privilege or the work-product doctrine. First federal ruling of its kind. The reasoning: the consumer terms allow disclosure to government regulators, so there was no reasonable expectation of confidentiality in the first place. The trial began this April.
Two things to notice:
On the AI piece: when you use a consumer AI tool, you don't control what happens to your inputs. Privilege doesn't attach automatically. Whatever the vendor's terms say about disclosure and retention is what governs, and most consumer terms reserve more than users realize. The lesson isn't "AI is always discoverable." It's that you've handed control to someone else.
On the bigger piece: Heppner only got caught because the fabricated documents existed on his hard drive when the FBI seized it. If your firm needs that kind of luck to answer the question "was this record altered after the fact?", you have the same gap on the defensive side. The capability that catches a fraudster and the capability that vindicates an honest firm are the same capability: a system that can show, byte by byte, what existed when.
What that system actually is
Imagine an examiner asks: "Prove this signed disclosure wasn't modified after the client signed it." That question used to be theoretical. Cases like this, and the SEC's 2026 focus on AI supervision, are making it real.
Without a system: you pull the file, eyeball it, send an email, hope nothing in your record contradicts what you're claiming, wait for the next question. The honest answer is "we believe so."
With a system: you produce the SHA-256 hash at creation, the hash at signing, and the hash at delivery. They match. End of conversation.
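That three-hash check is nothing exotic: it's SHA-256 computed over the raw bytes of the document at each stage of its life. A minimal sketch in Python (the document bytes and stage comments are illustrative):

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    """Return the SHA-256 digest of a document's raw bytes as hex."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical lifecycle: the same disclosure, hashed at each stage.
disclosure = b"ADV Part 2 disclosure, v1"
hash_at_creation = sha256_hex(disclosure)
hash_at_signing = sha256_hex(disclosure)   # bytes captured at signing
hash_at_delivery = sha256_hex(disclosure)  # bytes captured at delivery

# All three digests match: the file is bit-for-bit identical.
assert hash_at_creation == hash_at_signing == hash_at_delivery

# A single changed byte produces a completely different digest.
tampered = b"ADV Part 2 disclosure, v2"
assert sha256_hex(tampered) != hash_at_creation
```

The point isn't the cryptography; it's that the comparison is mechanical. Either the digests match or they don't, and no examiner has to take your word for it.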
That's the difference. Not "more logging." Not "better DMS." A different category of proof.
Under the hood, the components are simple. You store records in write-once, read-many (WORM) storage, so nothing can be modified or deleted after creation. You hash every document, so any change is detectable down to the byte. You sign your timestamps, so the record of when something happened can't be forged. And every action has a name on it.
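Those components fit together in a few lines. A sketch of one way to build a signed audit record, combining the hash, the timestamp, and the actor into a single tamper-evident entry (the key handling, field names, and HMAC signing scheme here are illustrative; a production system would keep the signing key in an HSM or use a trusted timestamping authority):

```python
import hashlib
import hmac
import json
from datetime import datetime, timezone

SIGNING_KEY = b"illustrative-only-use-an-hsm-in-production"

def record_event(doc_bytes: bytes, actor: str, action: str) -> dict:
    """Build an audit entry: document hash, signed timestamp, named actor."""
    entry = {
        "sha256": hashlib.sha256(doc_bytes).hexdigest(),
        "at": datetime.now(timezone.utc).isoformat(),
        "actor": actor,    # every action has a name on it
        "action": action,  # e.g. "created", "signed", "delivered"
    }
    payload = json.dumps(entry, sort_keys=True).encode()
    entry["sig"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return entry

def verify_event(entry: dict) -> bool:
    """Recompute the signature; any edit to any field breaks it."""
    unsigned = {k: v for k, v in entry.items() if k != "sig"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(entry["sig"], expected)
```

Write each entry to WORM storage and the whole chain becomes checkable after the fact: change the timestamp, the actor, or the hash, and verification fails.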
Quick aside on blockchain, since the word always comes up. The properties that make blockchains attractive are the same ones at work here: public verifiability, append-only history, an audit trail anyone can independently check. You don't need a blockchain to get the compliance value. WORM storage plus hashing gets you most of it inside your existing stack.
We did a version of this at Playbook for digital signatures. Stored the raw JSON of every signature event, with the signed payload, IP, timestamp, and hash. When the custodian or SEC asked us to prove a record hadn't been changed, the answer was a query.
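The shape of "the answer was a query" is simple. A hypothetical sketch of the pattern (the schema, document ID, and event fields below are illustrative, not Playbook's actual implementation): store the raw JSON of each signature event next to its hash, then answer the examiner by comparing the hash recorded at signing against a fresh hash of the file you hold today.

```python
import hashlib
import json
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE signature_events (
    doc_id TEXT, event_json TEXT, sha256 TEXT, signed_at TEXT)""")

# Hypothetical signature event, stored as raw JSON alongside its hash.
doc = b"%PDF-1.7 ... signed disclosure bytes ..."
event = {
    "doc_id": "adv-2024-001",
    "ip": "203.0.113.7",
    "signed_at": "2024-03-02T14:11:09Z",
    "sha256": hashlib.sha256(doc).hexdigest(),
}
db.execute(
    "INSERT INTO signature_events VALUES (?, ?, ?, ?)",
    (event["doc_id"], json.dumps(event), event["sha256"], event["signed_at"]),
)

# The examiner's question as a query: does the hash recorded at signing
# match the hash of the file we hold today?
(stored_hash,) = db.execute(
    "SELECT sha256 FROM signature_events WHERE doc_id = ?",
    ("adv-2024-001",),
).fetchone()
unaltered = stored_hash == hashlib.sha256(doc).hexdigest()
```

If `unaltered` is true, the record matches byte for byte. If it's false, you know exactly which document to investigate, and when.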
Why this matters now
The SEC's 2026 exam priorities call out AI explicitly. Examiners will assess whether firms have real policies and procedures supervising AI use, and whether what you tell clients about your AI matches what your AI is actually doing.
"Asleep at the wheel" is now an enforcement category. Your policies can be great. Your AI tool can work. But if those things aren't married to an audit trail an examiner can verify, you're running compliance theater.
Where do you stand?
During one of our meetings, an advisor called it "Swiss cheese compliance": records full of holes you can't trust are complete. Completed DocuSigns sitting in one system, emails in another, custodian docs in a third, none of them automatically tied back to the client file. Every gap is a future fire drill.
If an examiner walked in tomorrow and asked you to prove a single signed disclosure wasn't altered after the client signed it, could you answer in one query?
If not, that's the gap. Reach out today and we’ll audit your process, for free.

