gentic.news — AI News Intelligence Platform


[Image: Jack Clark of Anthropic presenting a slide with a timeline showing AI R&D automation probability rising from 30% in…]

Anthropic's Jack Clark: ~60% chance of automated AI R&D by 2028

Anthropic's Jack Clark forecasts ~30% chance of automated AI R&D by 2027 and ~60%+ by 2028, driven by coding gains and agents.

6h ago · 3 min read · 263 views · AI-Generated
What is Jack Clark's timeline for fully automated AI R&D?

Anthropic's Jack Clark forecasts ~30% chance of fully automated AI R&D by end 2027 and ~60%+ by end 2028, driven by coding, long-horizon agents, and benchmark saturation.

TL;DR

Clark forecasts ~30% chance of automated AI R&D by end 2027 · ~60%+ chance by end 2028 for frontier self-building · Driven by coding, agents, and benchmark saturation gains

Anthropic's Jack Clark forecasts ~30% chance of fully automated AI R&D by end 2027 and ~60%+ by end 2028. The timeline, shared via @kimmonismus, is the most specific public prediction yet from a frontier lab insider.

Key facts

  • ~30% chance of automated AI R&D by end 2027
  • ~60%+ chance by end 2028
  • Proof-of-concept within 1–2 years on a non-frontier model
  • Driven by coding, agents, benchmark saturation
  • Source: Anthropic co-founder Jack Clark

Clark, a co-founder of Anthropic, published a detailed essay arguing that fully automated AI R&D — where a frontier AI system autonomously builds its own successor — likely won't arrive this year but may appear as a proof-of-concept within 1–2 years [According to @kimmonismus].

The essay identifies key drivers: rapid gains in coding capabilities, long-horizon agent work, benchmark saturation, AI-managed subagents, and early signs of models handling core AI research tasks like fine-tuning, kernel optimization, reproducibility, and alignment research.

Why this story matters more than the press release suggests

Clark's forecast is notable not just for its specificity but for its source — an Anthropic insider with visibility into frontier training runs. If Clark is correct, the window for human-exclusive AI R&D leadership closes within ~18–24 months, compressing timelines that most public forecasts place at 3–5 years. The essay also implies that current frontier models (Claude, GPT, Gemini) already exhibit the foundational capabilities for automated research, with the bottleneck being reliability and long-horizon task completion rather than raw intelligence.

The forecast aligns with recent trends: coding benchmarks like SWE-Bench have seen scores jump from ~30% to ~70% in 12 months, and agent frameworks (Claude Code, Devin, Copilot Workspace) increasingly handle multi-step tasks. The missing piece — end-to-end model training without human intervention — is what Clark expects to see demonstrated on non-frontier models within 1–2 years.

What the essay doesn't address

Clark's essay does not specify which successor model would be trained, nor does it discuss compute costs, which could exceed $100M even for non-frontier models. The forecast also assumes regulatory environments remain permissive and that no catastrophic failures trigger intervention.

Implications for the field

If Clark's timeline holds, the next 24 months will see a race between labs to achieve automated R&D first — not just for competitive advantage but for existential risk management. The essay implicitly argues that whoever achieves automated AI R&D first controls the subsequent intelligence explosion, a dynamic that accelerates the need for alignment research.

What to watch

Watch for any frontier lab announcing a demonstration of end-to-end model training by an AI agent in 2026–2027. Clark's proof-of-concept window implies a public or leaked demo within 12–18 months. Also watch SWE-Bench scores crossing 80% and agent tool-use reliability metrics.

Source: gentic.news · citation.json

AI-assisted reporting. Generated by gentic.news from multiple verified sources, fact-checked against the Living Graph of 4,300+ entities. Edited by Ala AYADI.


AI Analysis

Clark's forecast is the most specific yet from a frontier lab insider, compressing timelines relative to the public consensus of 3–5 years. The essay implicitly argues that current models (Claude, GPT, Gemini) already have the foundational capabilities, with reliability being the bottleneck; this aligns with SWE-Bench scores jumping from ~30% to ~70% in 12 months. The key unaddressed variable is compute cost: training even non-frontier models could exceed $100M, raising questions about the economic feasibility of automated R&D at scale.

Contrarian take: Clark's timeline may be optimistic if reliability improvements plateau. Agent frameworks (Claude Code, Devin) still fail on multi-hour tasks, and end-to-end training requires sustained, error-free execution over days. However, the forecast is consistent with Anthropic's internal pace; the company has demonstrated rapid iteration on Claude's capabilities. If correct, the window for human-exclusive AI R&D leadership closes within 18–24 months.
