Anthropic (Claude) AUG audit

https://claude.ai · founded 2021 · category: AI assistant + API platform (B2C + B2B) · audited 2026-05-17

Composite AUG v3 score: 26.13
Tier: Healthy
Confidence: 0.8 (external observation)

7-factor breakdown

Factor · Score · Rationale
Acquisition · 8 · ~80M sessions/mo per SimilarWeb (growing 30%+ MoM). Brand strength in developer + technical-writer + enterprise segments. Less consumer-known than ChatGPT but ICP-precise.
Activation · 9 · Signup → first response under 30 seconds. Free tier is capable. Long context (200K-1M) is the activation moment for technical users.
Engagement · 9 · Active users engage near-daily. Long-form workflows (coding, writing, analysis) drive session depth >10× the average chat.
Retention · 9 · D30 strong among developers, writers, and enterprise. Switching cost rises as users adopt Claude-specific workflows (Projects, Artifacts).
Advocacy · 8 · Strong evangelism in the developer and technical-writing communities. Less viral than ChatGPT but precise; k-factor estimated ~0.8 in technical cohorts.
Monetization · 7 · Pro ($20) + Team + Enterprise + API revenue. Lower consumer share than OpenAI but a higher API + enterprise mix. Mid-range overall.
Performance · 8 · Generally fast. Occasional rate limits and slowdowns under load. Consistent with model maturity.
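For context on the Advocacy estimate above (a sketch using the audit's own k ≈ 0.8, which is itself an estimate): a sub-viral k-factor still amplifies acquisition, because each user generates k referred users, who generate k² more, and so on, converging to 1/(1−k) total users per seed.

```python
# Geometric compounding of a sub-viral referral loop: each new user
# brings in k more users on average; with k < 1 the series converges
# to 1 / (1 - k) total users per original seed user.
def viral_multiplier(k, rounds=50):
    return sum(k**i for i in range(rounds))

print(round(viral_multiplier(0.8), 1))  # ≈ 5.0, i.e. 1 / (1 - 0.8)
```

So even without ChatGPT-style virality, k ≈ 0.8 roughly quintuples the effect of every directly acquired technical user.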

Strongest factor

Activation (9) + Engagement (9) + Retention (9) — the technical-power-user trifecta. Claude's long-context + Projects + Artifacts compound daily-active among the ICP.

Weakest factor

Monetization (7) — strong B2B mix but consumer share trailing ChatGPT. API revenue strong but capped by enterprise adoption velocity.

Diagnosis

Anthropic/Claude is the technical-power-user AI platform. AUG composite ~26, Healthy tier. The deliberate ICP precision (developer + technical-writer + enterprise) caps Acquisition at 8 vs ChatGPT's 10 but pushes Activation, Engagement, and Retention higher in-segment. The AUG framework predicts this exact tradeoff — a narrower ICP with deeper compounding is a legitimate alternative strategy. The lesson for founders: precision beats breadth when the ICP is high-value enough.

If we ran the next sprint

For Anthropic: consumer-tier features that lift Acquisition 8 → 9 without diluting technical-ICP retention (composite +12.5%, since the formula is multiplicative). Continue compounding API + Enterprise. For founders: study the precision-over-breadth strategy; ICP targeting at depth beats broad-but-shallow.
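Because the composite is multiplicative, a single-factor improvement scales the whole score by the ratio of new to old. A quick check of the Acquisition 8 → 9 move, using the formula from the methodology section:

```python
# AUG = 100 * (product of seven factors) / 10^7
current  = 100 * (8 * 9 * 9 * 9 * 8 * 7 * 8) / 10**7  # Acquisition = 8
proposed = 100 * (9 * 9 * 9 * 9 * 8 * 7 * 8) / 10**7  # Acquisition = 9

lift = proposed / current - 1  # equals 9/8 - 1
print(f"{lift:.1%}")  # → 12.5%
```

The absolute score moves from 26.13 to about 29.39.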

Methodology + confidence

This is an external-observation audit — scored from publicly visible signals only, without insider data. Confidence: 0.8. Anthropic's team is welcome to provide internal metrics for a higher-confidence re-audit; we'd gladly update this page with their numbers.

Signals observed: product UX (firsthand) · public pricing (Pro, Team, Enterprise) · API documentation quality · developer community sentiment · public funding (Series E+) · public enterprise customer logos.

Signals NOT directly observed (estimated from comparables): actual revenue split (Pro vs API vs Enterprise) · D30 retention curves by segment · enterprise ACV distribution.

Composite formula: AUG = 100 × Acq × Act × Eng × Ret × Adv × Mon × Perf ÷ 10⁷ — multiplicative, so a zero in any factor zeros the whole composite. See full scoring transparency.
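As a minimal sketch of the formula above (the function name is ours; the factor values come from the 7-factor table):

```python
# AUG v3 composite: multiplicative across all seven 0-10 factors,
# normalized so a perfect 10 in every factor yields exactly 100.
def aug_composite(acq, act, eng, ret, adv, mon, perf):
    return 100 * (acq * act * eng * ret * adv * mon * perf) / 10**7

# Claude's audited factors: Acq 8, Act 9, Eng 9, Ret 9, Adv 8, Mon 7, Perf 8
print(round(aug_composite(8, 9, 9, 9, 8, 7, 8), 2))  # → 26.13
```

The multiplicative design is the point: unlike an average, one failing factor cannot be papered over by strength elsewhere.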

Audit your own SaaS

Same 7-factor rubric, scored on your own product in 60 seconds. Free, no signup.