- Practice model: Solo; federal SSD/hearing work; remote-first, paperless practice
- AI maturity: Heavy user of enterprise and consumer LLMs for drafting; explicit preference to buy maintained record tooling
- North star: More in-person client time, less time maintaining brittle prompt chains
Executive summary
This lawyer is not a novice with technology. They described building custom prompt assets inside general-purpose AI environments, then feeding carefully bounded inputs to get memo-quality drafts. They also described the downside: models drift, behave unpredictably, and burn senior time on debugging prompts that were authored out of necessity in moments of pain.
Superinsight sits upstream of that drafting layer. In this account, the attorney treats it as the medical-record engine that performs structured disability-record analysis they would rather not re-implement as a weekend prompt hobby. Outputs then flow into separate workflows for pre-hearing memoranda and post-hearing briefs, where the attorney still controls final voice, citations, and legal judgment.
Why a sophisticated user still buys a vertical product
The prompt maintenance tax
Replicating a multi-step Social Security sequential evaluation pass inside a chat window is not one prompt. It is a system of prompts, edge-case handling, and regression testing every time the vendor model changes. The attorney contrasted that work with paying for a product that ships the behavior as a maintained capability.
Reliability and variance
In the broader discussion summarized here, Superinsight’s co-founders give illustrative ranges for consumer-model inconsistency (on the order of 20–30% run-to-run for some tasks) and describe the “last 5–10 percent” as the hardest part of getting court-ready output. These are conversational estimates, not figures from a Superinsight product specification.
Time on earth vs. time in software
The attorney describes limited appetite for hand-building SSD five-step sequential-evaluation prompts when commercial tools exist, preferring to buy products rather than do “moments of pain” weekend engineering. They have also coded their own systems and use tools like Gemini for some workflows.
Workflow architecture (conceptual)
| Stage | Tooling role | Human role |
|---|---|---|
| Record ingestion and structuring | Superinsight for chronology-style analysis tied to disability concepts | Validate against scans; resolve conflicts; choose theory |
| Memoranda and brief shells | Separate LLM environment with attorney-authored “gems” or templates | Edit for voice, cite to exhibits, remove hallucinations |
| Client narrative and travel | Calendaring and logistics tools (outside scope here) | Trust-building time the attorney explicitly protects |
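The first two table rows describe a deliberate chain: a deterministic record stage pins facts and citations first, and the creative drafting stage consumes only those pinned facts. A minimal sketch of that pattern, with both stages stubbed (Superinsight’s actual API and the attorney’s “gems” are not shown; the toy `exhibit | statement` record format is an assumption for illustration):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PinnedFact:
    exhibit: str    # record citation, e.g. "4F/12"
    statement: str  # the fact as extracted from the record

def record_stage(raw_record: str) -> list[PinnedFact]:
    """Stand-in for the record engine: deterministic extraction, no prose."""
    facts = []
    for line in raw_record.splitlines():
        if "|" in line:  # toy format: "exhibit | statement"
            exhibit, statement = (part.strip() for part in line.split("|", 1))
            facts.append(PinnedFact(exhibit, statement))
    return facts

def prose_stage(facts: list[PinnedFact]) -> str:
    """Stand-in for the drafting LLM: free to rephrase, but every line
    it emits traces back to a pinned fact with its exhibit citation."""
    lines = [f"{fact.statement} (Ex. {fact.exhibit})" for fact in facts]
    return "Pre-hearing memorandum (draft shell):\n" + "\n".join(lines)

record = (
    "4F/12 | MRI shows L4-L5 disc herniation\n"
    "6E/3 | Claimant stopped work in 2021"
)
memo = prose_stage(record_stage(record))
print(memo)
```

The point of the split is auditability: if a sentence in the memo cannot be traced to a `PinnedFact`, it is a candidate hallucination to remove during the attorney’s edit pass.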
Numbers and phrases in this profile
| Topic | As described |
|---|---|
| Practice tenure | About 15 years in disability law (as introduced in this account) |
| SSD workflow | Attorney references feeding Superinsight reports into custom “gems” for pre-hearing memoranda and post-hearing briefs; mentions a five-step SSD prompt they did not want to maintain by hand |
| Industry commentary (context) | Illustrative 20–30% variance for some consumer-model outputs and a last 5–10 percent human finish for court use |
| Hours saved | No weekly hours-saved number appears in this overview |
Implementation checklist for other power users
- Draw a hard line between R&D and production. Weekend prompts are not a substitute for a release-tested record pipeline.
- Chain tools deliberately. Let the record engine be deterministic where possible; let the prose engine be creative only after facts are pinned.
- Re-test when upstream models change. Your “known good” prompt stack can silently rot.
- Measure what you say you optimize for. If the goal is client-facing hours, actually log whether calendars moved.
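The “re-test when upstream models change” item can be made concrete with a small golden-phrase regression check run whenever the vendor ships a new model version. A minimal sketch, assuming model calls are wrapped in one function; `call_model`, the prompt ids, and the canned answers are all hypothetical stand-ins, not a real vendor API:

```python
# Golden checks: prompt id -> substrings a known-good answer must contain.
GOLDEN = {
    "step2_severity": ["severe impairment", "12 months"],
    "step5_other_work": ["residual functional capacity"],
}

def call_model(prompt_id: str) -> str:
    """Hypothetical model wrapper; stubbed here with canned answers.
    In practice this would call the provider and run in CI."""
    canned = {
        "step2_severity": "A severe impairment must last at least 12 months.",
        "step5_other_work": (
            "Step five weighs residual functional capacity against other work."
        ),
    }
    return canned[prompt_id]

def regression_failures(golden: dict[str, list[str]]) -> list[str]:
    """Return the prompt ids whose current output lost a required phrase."""
    failures = []
    for prompt_id, required in golden.items():
        output = call_model(prompt_id).lower()
        if not all(phrase in output for phrase in required):
            failures.append(prompt_id)
    return failures

print(regression_failures(GOLDEN))  # silent prompt-stack rot surfaces here
```

Substring checks are crude, but they catch the failure mode described above: a prompt stack that was “known good” quietly degrading after an upstream model update, with no one noticing until a draft goes out.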
Takeaway: how Superinsight helped (per their account)
- Time (indirect): Less senior-attorney life spent maintaining brittle SSD prompt systems.
- Insight extraction: Structured record artifacts trustworthy enough to feed downstream drafting.
- Cost logic: Buy durable software labor for the boring middle of the pipeline instead of paying for it twice in salary and opportunity cost.
Bottom line. This profile is the answer to a common vendor question: “Why should I pay you if I already pay for ChatGPT?” Here, a skilled user still wanted Superinsight because medical-record structure is a product problem, not a clever paragraph problem.