One-shot proof · Rolling cadence
A single demonstration is good, but a public, rolling, recurring scorecard is the demonstration that compounds. Every Acceleration Watch pick lands with a date, gets graded at 60 and 90 days, and stays on this page whether it Hit, Missed, or is still Pending. No hiding the misses.
Pick at T-0
Every Monday at 09:00 UTC the Acceleration Watch publishes 10 named startups. Each pick lands with a public sector tag, a 14-day acceleration percentile, a contributor-quality flag, and a velocity chart anchor.
Grade at T+60 and T+90
Each pick is graded at two checkpoints: 60 days post-pick and 90 days post-pick. The grade is binary at each window: Hit (a public fundraise OR a 4×+ velocity sustain) vs Miss (no fundraise AND velocity reverted to baseline).
Pending until both windows close
A pick is Pending until 90 days have elapsed. We never retroactively adjust a Hit after the 60d mark unless the company announces a transparent rescission of the round. We never reclassify a Miss to a Hit if a fundraise lands at T+91 or later; that is a different signal, outside our methodology window.
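The grading rules above can be sketched as a small function. This is an illustrative sketch only, not the production grader: the names (`grade`, `pick_date`, `fundraise_date`, `velocity_multiple`) are hypothetical, and "velocity sustain" is reduced to a single multiple-of-baseline number.

```python
from datetime import date, timedelta
from typing import Optional

def grade(pick_date: date, today: date,
          fundraise_date: Optional[date],
          velocity_multiple: float) -> str:
    """Grade one pick under the stated rules (illustrative sketch).

    Hit  = a public fundraise inside the 90-day window,
           OR sustained velocity at 4x+ baseline.
    Miss = neither condition once the 90d window has closed.
    A fundraise at T+91 or later does not count as a Hit.
    """
    window_close = pick_date + timedelta(days=90)
    funded_in_window = (fundraise_date is not None
                        and pick_date <= fundraise_date <= window_close)
    if funded_in_window or velocity_multiple >= 4.0:
        return "Hit"
    if today <= window_close:
        return "Pending"  # 90d window still open: no grade yet
    return "Miss"         # window closed, neither condition met
```

Note the asymmetry these rules imply: a Hit can be awarded as soon as either condition is observable, while a Miss can only be declared after the full 90 days.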
Public counts, not curated
We display ALL picks, not just the wins. Misses are the most useful row on the page — they're the methodology's calibration boundary. Removing or hiding a Miss is a violation of the core methodology rule.
Lead-time distribution, not single-point claim
The headline 21–47 day claim is the IQR of the SSRN n=219 panel. The scorecard reports each Hit's actual lead-time (days from T-0 to public fundraise announcement), and rolls them into a quarterly distribution at the bottom.
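The quarterly roll-up is straightforward to reproduce. A minimal sketch with made-up dates (the real paired observations live in the Zenodo dataset):

```python
from datetime import date
from statistics import quantiles

def lead_time_days(pick: date, announce: date) -> int:
    """Days from T-0 (pick) to the public fundraise announcement."""
    return (announce - pick).days

# Hypothetical Hit rows for illustration: (pick date, announcement date)
hits = [
    (date(2026, 4, 27), date(2026, 5, 20)),
    (date(2026, 4, 27), date(2026, 6, 8)),
    (date(2026, 5, 4),  date(2026, 6, 15)),
    (date(2026, 5, 4),  date(2026, 5, 30)),
]
leads = sorted(lead_time_days(p, a) for p, a in hits)  # [23, 26, 42, 42]
q1, _median, q3 = quantiles(leads, n=4)  # IQR endpoints of the distribution
```

The headline 21–47 day range is the analogous (q1, q3) pair computed over the full n=219 panel, not over a toy list like this one.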
Cumulative · 30 picks across 3 weeks
Hit: 0 · Miss: 0 · Pending: 30
All current picks are still inside their 60d / 90d grading windows (the Acceleration Watch published its first archived week on 2026-04-27; that week's 60d window, the first to close, opens 2026-06-26). Grading begins 2026-06-26 with week 2026-w16. The page updates on every grade.
| Week | Picks | Hit | Miss | Pending | Note |
|---|---|---|---|---|---|
| 2026-w16 | 10 | 0 | 0 | 10 | Backfill — first published archive week. Grading 2026-06-26 / 2026-07-26. |
| 2026-w17 | 10 | 0 | 0 | 10 | Grading window opens 2026-07-03 (60d) / 2026-08-02 (90d). |
| 2026-w18 | 10 | 0 | 0 | 10 | Grading window opens 2026-07-10 / 2026-08-09. |
Historical highlight — a worked example
The current scorecard is pre-grading-window. While we wait for T+60 to land, here’s one of the 219 paired observations from the methodology paper — anonymised, but reproducible against the Zenodo dataset.
Contributor count went 3 → 7 in 14 days, with two of the new contributors traceable to senior engineers at a Series B incumbent. The marketing site swapped from Notion to custom Next.js at T-7. The fundraise announcement landed in week T-0.
Reproduce against SSRN abstract=6606558 + Zenodo dataset (CC BY 4.0).
Three false beliefs · One break each
Every buyer who doesn’t convert holds at least one of three false beliefs. The patterns are stable: a doubt about the vehicle, a doubt about themselves, or a doubt about their world. Naming the pattern is half the work. The other half is the one-line break, backed by a public receipt.
Belief 1 · Vehicle belief
Will the new vehicle work?
“Commit-velocity acceleration is just survivorship — the unicorns we already know about happen to have nice GitHub graphs.”
Why it feels true: Survivorship bias is the right thing to suspect when someone shows you a chart. It's the default skeptical move, and the buyer is correct to make it.
The break: The SSRN paper grades n=219 paired observations PROSPECTIVELY — picks made before the fundraise, not after. The miss column is published. Survivorship would require the miss column to be empty; it isn't.
Belief 2 · Internal belief
Can I work it?
“Reading code is a partner-track skill. I'm an investor, not an engineer — this isn't for me.”
Why it feels true: Most VC tooling is built for partner-track GPs at funds with eight-figure data budgets. The default visual language of every dashboard reinforces that this is partner work.
The break: The signal works without you reading a single line of code. The Acceleration Watch is the partner-track output WITHOUT the partner-track prerequisite — the contributor-quality + acceleration math runs once a week and lands as a ranked list.
Belief 3 · External belief
Will my world let me work it?
“Even if the signal works, my fund won't switch — we already pay €120k/yr for Harmonic / PitchBook / Affinity / etc.”
Why it feels true: Switching costs are real. Procurement cycles are real. The partner who signed the existing contract is real and may not want to be told their tool isn't enough.
The break: The signal is additive, not a replacement. Run it for €119.64/yr alongside whatever you already have, on a 30-day refund. Six months in, either the leading-indicator column changed how you triage — or it didn't, and you cancel without touching the existing stack.
Three false beliefs is the canonical pattern count. If a buyer holds a fourth — “the team behind it might disappear” — that’s a legitimate concern, not a false belief. The methodology code is MIT-licensed; the dataset mirrors to Zenodo. The signal outlives the team.
How to read this page
The Hit column will look great when the first grading window closes. That’s expected. The honest read is the Pending column right now and the Miss column three months from now. If we suppress a Miss, this page is theatre. If we publish it, this page is the methodology’s calibration record.
Public-grading scorecard drawn from direct-response sales canon — a rolling-demonstration variant of the one-shot proof.