A Framework by Helen Fan
Five levels of defensibility, and which ones survive the AGI era.
This page is a working summary. For the full piece (context, examples, and the asides), read it where it was first published.
📰 Read on Substack →

After Anthropic's legal plugin triggered a $285B selloff, everyone started asking which legal AI companies will survive. Helen argues that's the wrong question. Across six closed-door roundtables with legal teams in Silicon Valley, Beijing, Shanghai, and Hong Kong, the same question kept surfacing: where does defensibility actually live now?
The answer is the Legal AI Value Stack: five levels, each with very different staying power.
Where's the defensible value in the foundation model era?
Level 1: Contract review, legal research via API. Foundation models do this at $20/month.
Level 2: Professional interface, legal workflows. Claude has customizable playbooks now.
Level 3: Data foundation models can't access. Example: Filevine's 20M pages/day, but is it enough long-term?
Level 4: Mission-critical infrastructure. High switching costs ($500K+): the strongest defensibility today.
Level 5: Legal SaaS + legal service combined. AI handles routine; humans handle judgment, relationships, strategy. Challenge: nobody has figured out how to do this at scale yet.
Level 1. What it looks like: A chat interface powered by foundation-model APIs, sometimes with a RAG layer. Common use cases: contract review, legal research.
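The Level 1 architecture described above can be sketched in a few lines. This is a minimal illustration, not any vendor's actual product: `retrieve_clauses` is a toy keyword retriever standing in for a vector store, and the assembled prompt stands in for what a wrapper would send to a chat-completion API.

```python
import re

def _tokens(text: str) -> set[str]:
    """Lowercase word tokens, punctuation stripped."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve_clauses(question: str, clause_library: list[str], k: int = 2) -> list[str]:
    """Rank stored clauses by naive keyword overlap with the question
    (a stand-in for real embedding retrieval)."""
    q = _tokens(question)
    ranked = sorted(clause_library, key=lambda c: len(q & _tokens(c)), reverse=True)
    return ranked[:k]

def build_prompt(question: str, context: list[str]) -> str:
    """Assemble the prompt a thin wrapper would forward to the model API."""
    joined = "\n".join(f"- {c}" for c in context)
    return (
        "You are a contract-review assistant.\n"
        f"Relevant clauses:\n{joined}\n"
        f"Question: {question}"
    )

clauses = [
    "Termination: either party may end this agreement with 30 days written notice.",
    "Liability under this agreement shall not exceed fees paid in the preceding 12 months.",
    "All disputes shall be resolved by binding arbitration.",
]

question = "What is the termination notice period?"
prompt = build_prompt(question, retrieve_clauses(question, clauses))
print(prompt)
```

The point of the sketch is how little is here: retrieval plus prompt assembly is the whole product, which is why this layer compresses the moment a foundation model ships the same capability natively.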
Anthropic's legal plugin dramatically compressed this layer's economics. Compliance-grade providers with authoritative data sources still have a role, but when a foundation model does ~80% of the work for free, pricing pressure is brutal. The moat was always shallow: prompt engineering is replicable.
Level 2. What it looks like: Professional interfaces, legal-specific workflows, structured outputs, customizable playbooks. Word add-ins that bring AI redlining and contract analysis into the document.
This was supposed to be the defensible layer ("anyone can call the API, but we've built the workflow lawyers actually need"). The problem: Claude's legal plugin already ships with customizable playbooks, structured outputs, and a polished interface. The gap between raw API and finished legal product is closing faster than founders expected.
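A "customizable playbook" of the kind this layer sells can be sketched as a rule table applied to model-extracted terms. Every name and field below is hypothetical, invented for illustration; the sketch only shows the shape of the workflow, not any product's implementation.

```python
from dataclasses import dataclass

@dataclass
class PlaybookRule:
    clause_type: str         # e.g. "liability_cap" (illustrative label)
    preferred_position: str  # what the legal team wants to see
    fallback: str            # what they will accept
    escalate: bool           # route to a human lawyer if neither matches

# A two-rule toy playbook; real playbooks would be client-configured.
PLAYBOOK = [
    PlaybookRule("liability_cap", "12 months of fees", "24 months of fees", True),
    PlaybookRule("governing_law", "Delaware", "New York", False),
]

def review(extracted_terms: dict[str, str]) -> list[dict]:
    """Compare model-extracted contract terms against the playbook
    and emit one structured flag per rule."""
    flags = []
    for rule in PLAYBOOK:
        found = extracted_terms.get(rule.clause_type)
        if found is None:
            status = "missing"
        elif found == rule.preferred_position:
            status = "ok"
        elif found == rule.fallback:
            status = "fallback"
        else:
            status = "escalate" if rule.escalate else "deviation"
        flags.append({"clause": rule.clause_type, "found": found, "status": status})
    return flags

print(review({"liability_cap": "36 months of fees", "governing_law": "Delaware"}))
```

This is exactly the kind of scaffolding that was supposed to be defensible, and exactly the kind a foundation-model vendor can ship as a built-in feature, which is the article's point.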
Level 3. What it looks like: Platforms that get smarter as more clients use them, accumulating negotiation patterns, clause preferences, risk signals, and workflow behaviors no foundation model has access to.
Distinct from Westlaw / LexisNexis-style aggregated public-data assets. The Level 3 data Helen means is generated by lawyers using the platform daily: which clauses trigger negotiation friction, which risk flags predict deal failure, which workflows fit which questions. This intelligence only emerges at scale; a Level 1 product where each client uploads their own files doesn't make the platform smarter.
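The cross-client signal described above can be made concrete with a toy aggregation. Event names and fields here are hypothetical: the platform logs every review as an event, then pools friction rates across all clients, which is precisely the view a single-tenant Level 1 tool never gets.

```python
from collections import Counter

# Each event: (client_id, clause_type, was_negotiated) -- illustrative schema.
events = [
    ("client_a", "indemnification", True),
    ("client_b", "indemnification", True),
    ("client_a", "governing_law", False),
    ("client_c", "indemnification", True),
    ("client_b", "payment_terms", False),
]

def friction_rates(events: list[tuple[str, str, bool]]) -> dict[str, float]:
    """Share of reviews in which each clause type triggered negotiation,
    pooled across every client on the platform."""
    seen, negotiated = Counter(), Counter()
    for _, clause, fought in events:
        seen[clause] += 1
        negotiated[clause] += fought  # bool counts as 0/1
    return {c: negotiated[c] / seen[c] for c in seen}

print(friction_rates(events))
# indemnification was negotiated in 3 of 3 reviews, across three different clients
```

The per-client view is tiny and noisy; only the pooled view carries signal, which is why this data advantage depends on scale.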
Level 4. What it looks like: Products so embedded in how legal teams operate day-to-day that replacing them is deeply painful or practically unthinkable.
Examples: Clio, Filevine: platforms law teams run their entire practice on, with years of case data, billing history, and team workflows inside. Switching means migrating institutional knowledge and retraining a team. Once a platform absorbs adjacent capabilities (e.g., Clio acquiring vLex's billion-document research library), gravity only increases. The moat is operational gravity, not technology.
Level 5. What it looks like: Not selling software to law firms, but becoming one. Hiring lawyers, deploying AI as the operational backbone, delivering legal services directly to end clients.
Legal services are a trust business. Even with a perfect AI draft, someone still needs to sit with the client, navigate deal politics, and own outcomes. AI ships with disclaimers; lawyers carry malpractice insurance; clients need someone to hold responsible.
Levels 1–4 accept the existing law firm model and try to optimize it. Level 5 asks whether the model itself is the problem. AI handles routine, repetitive, data-heavy work; humans handle trust, accountability, relationships. This is the only level where the value compounds rather than erodes. Because in a world where AI capability is a commodity, the scarce resource isn't intelligence. It's trust.
Context, examples, and the asides this summary skipped, including how Anthropic's plugin reframed the entire conversation and what Helen heard across six roundtables in the US and Asia.
📰 Read on Substack →

Subscribe to Helen's Legal AI Brief for the next part of the series.