Anthropic's Jack Clark forecasts a ~30% chance of fully automated AI R&D by the end of 2027, rising to at least ~60% by the end of 2028. The timeline, shared via @kimmonismus, marks the most specific public prediction yet from a frontier-lab insider.
Key facts
- ~30% chance of automated AI R&D by end of 2027
- At least ~60% chance by end of 2028
- Proof-of-concept within 1–2 years on a non-frontier model
- Driven by coding, agents, benchmark saturation
- Source: Anthropic co-founder Jack Clark
Clark, a co-founder of Anthropic, published a detailed essay arguing that fully automated AI R&D — where a frontier AI system autonomously builds its own successor — likely won't arrive this year but may appear as a proof-of-concept within 1–2 years [According to @kimmonismus].
The essay identifies the key drivers: rapid gains in coding capability, long-horizon agent work, benchmark saturation, AI-managed subagents, and early signs of models handling core AI research tasks such as fine-tuning, kernel optimization, reproducibility, and alignment research.
Why this story matters more than the press release suggests
Clark's forecast is notable not just for its specificity but for its source: an Anthropic insider with direct visibility into frontier training runs. If Clark is correct, the window for human-exclusive AI R&D leadership closes within ~18–24 months, compressing timelines that most public forecasts place at 3–5 years. The essay also implies that current frontier models (Claude, GPT, Gemini) already exhibit the foundational capabilities for automated research, with the bottleneck being reliability and long-horizon task completion rather than raw intelligence.
The forecast aligns with recent trends: coding benchmarks like SWE-Bench have seen scores jump from ~30% to ~70% in 12 months, and agent frameworks (Claude Code, Devin, Copilot Workspace) increasingly handle multi-step tasks. The missing piece — end-to-end model training without human intervention — is what Clark expects to see demonstrated on non-frontier models within 1–2 years.
What the essay doesn't address
Clark's essay does not specify which successor model would be trained, nor does it discuss compute costs, which could exceed $100M even for non-frontier models. The forecast also assumes regulatory environments remain permissive and that no catastrophic failures trigger intervention.
Implications for the field
If Clark's timeline holds, the next 24 months will see a race between labs to achieve automated R&D first, not just for competitive advantage but for existential risk management. The essay implicitly argues that whoever achieves automated AI R&D first controls the subsequent intelligence explosion, a dynamic that heightens the urgency of alignment research.
What to watch
Watch for a frontier lab announcing a demonstration of end-to-end model training by an AI agent in 2026–2027; Clark's proof-of-concept window implies a public or leaked demo within 12–18 months. Also watch for SWE-Bench scores crossing 80% and for improvements in agent tool-use reliability metrics.