Simon Willison's 'Stages of AI Adoption' — Where Are You on the Claude Code Journey?

Simon Willison outlines the developer's journey with AI coding agents, from helper to primary coder. For Claude Code users, this validates a shift from reading all output to strategic oversight.

2d ago · 3 min read · via simon_willison

The Technique — The Three Stages of AI Adoption

In a recent fireside chat, Simon Willison outlined a clear progression for developers using AI tools:

  1. The Helper Stage: You use a chat interface (like ChatGPT) to ask questions and get occasional snippets.
  2. The Agent Stage: You use a coding agent (like Claude Code) that writes code for you. The pivotal moment is when the agent writes more code than you do.
  3. The Factory Stage: A controversial new phase where the principle is "nobody writes any code, nobody reads any code." Willison calls this approach, exemplified by security company StrongDM, "wildly irresponsible" for serious software development, but notes it's worth understanding how it could work.

Why It Works — The Claude Code Context

Willison's second stage—where the agent writes more code—is the core reality for daily Claude Code users. This isn't about automation replacing thought; it's about leveraging the agent's speed for implementation while you focus on architecture, review, and complex problem-solving.

The controversial third stage highlights a critical tension: total delegation versus necessary oversight. For Claude Code, this underscores that your role isn't eliminated; it evolves. You move from writing lines to writing precise prompts, designing system boundaries in CLAUDE.md, and performing strategic code reviews.

How To Apply It — Moving Deeper into the Agent Stage

If you're using Claude Code daily, you're likely in Stage 2. Here’s how to advance your practice within it, steering clear of the irresponsible "no-read" approach:

  1. Measure Your Ratio: Be conscious of the balance. Is Claude writing boilerplate, tests, and straightforward functions while you handle the core logic? That's the sweet spot. During long refactoring sessions, use the /compact slash command to condense the conversation so Claude can keep handling the grunt work without losing context.
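To make "measure your ratio" concrete, here is a minimal sketch that estimates what fraction of recent commits were agent-assisted. It assumes agent commits carry a "Co-Authored-By: Claude" trailer (Claude Code adds one by default); the trailer text and the 100-commit window are assumptions to adapt:

```python
# Sketch: estimate how much of recent history was agent-assisted, assuming
# agent commits carry a "Co-Authored-By: Claude" trailer in their message.
import subprocess

TRAILER = "Co-Authored-By: Claude"  # assumption: default Claude Code trailer

def agent_ratio(commit_bodies):
    """Fraction of commit messages that carry the agent trailer."""
    if not commit_bodies:
        return 0.0
    tagged = sum(1 for body in commit_bodies if TRAILER in body)
    return tagged / len(commit_bodies)

def recent_commit_bodies(n=100):
    """Pull the last n full commit messages from the current repo."""
    out = subprocess.run(
        ["git", "log", f"-{n}", "--format=%B%x00"],
        capture_output=True, text=True, check=True,
    ).stdout
    # Messages are NUL-separated so multi-line bodies stay intact.
    return [body for body in out.split("\x00") if body.strip()]

if __name__ == "__main__":
    bodies = recent_commit_bodies()
    print(f"agent-assisted: {agent_ratio(bodies):.0%} of last {len(bodies)} commits")
```

Run it from inside a repository; a rising percentage over time is one rough signal that you are moving deeper into Stage 2.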

  2. Master Strategic Review: You don't need to read every line, but you must read key lines. Set up a review protocol:

    • Scan diffs for logic: Use git diff or your IDE to review changes in critical modules.
    • Focus on boundaries: Closely review code at integration points, API contracts, and security-sensitive areas.
    • Lean on tests: Have Claude write comprehensive unit and integration tests (e.g. claude "Write pytest tests for the new authentication module"). A robust test suite is your primary safety net for code you didn't manually write.
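As an illustration of the safety net those tests provide, here is the shape of a suite you might ask Claude to generate. The `validate_token` API and the HMAC scheme are hypothetical stand-ins for your real authentication module, kept self-contained so the tests run as-is:

```python
# Hypothetical auth module plus the kind of pytest suite Claude should
# generate alongside it: happy path, tampering, and malformed input.
import hmac
import hashlib

SECRET = b"example-secret"  # assumption: a shared signing secret

def sign(payload: str) -> str:
    """Return the payload plus an HMAC-SHA256 signature, dot-separated."""
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}.{sig}"

def validate_token(token: str) -> bool:
    """Accept a token only if its signature matches its payload."""
    payload, _, sig = token.rpartition(".")
    if not payload:
        return False
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)

def test_valid_token():
    assert validate_token(sign("user42"))

def test_tampered_payload():
    token = sign("user42")
    # Swap the payload but keep the old signature: must be rejected.
    assert not validate_token("admin" + token[4:])

def test_malformed_token():
    assert not validate_token("no-signature-here")
```

Reviewing tests like these, rather than every generated line, is what makes high-volume delegation defensible.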
  3. Optimize for Oversight in CLAUDE.md: Your project's CLAUDE.md file should enable safe, high-output generation. Define:

    • Architecture Guardrails: Explicitly state patterns to use and avoid.
    • Testing Mandates: "All new functions must have a corresponding unit test."
    • Security & Style Rules: This lets Claude generate compliant code, reducing the need for line-by-line review.
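Putting those three categories together, a CLAUDE.md fragment might look like the following; every specific rule here is a placeholder to adapt to your own project's conventions:

```markdown
## Architecture Guardrails
- Use the repository pattern for data access; never query the ORM from route handlers.
- Avoid new global state; prefer dependency injection.

## Testing Mandates
- All new functions must have a corresponding unit test.
- Integration points require at least one failure-path test.

## Security & Style Rules
- Never log tokens, passwords, or PII.
- Follow the project formatter config; do not hand-format code.
```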
  4. Prompt for Explainability: When Claude generates complex logic, use follow-up prompts to ensure you understand it:
    claude "Explain the algorithm in this new function and walk me through the edge cases it handles." This maintains understanding without manual tracing.

The goal isn't to reach a "no-read" factory. It's to become a director of code, using Claude Code as your high-throughput, first-pass engineering team that you guide and audit.

AI Analysis

Claude Code users should consciously assess which stage they're in. If you're still reading every generated line, you're under-utilizing the tool. Shift your workflow: use `CLAUDE.md` to enforce project standards so you can trust bulk output, and invest the saved time into designing better system components and writing more specific, high-level prompts.

Adopt a **key-line review** strategy. For a new feature, let Claude generate the module. Then run the tests it wrote, and manually examine only the core function (maybe 10-20 lines) and the integration points. This balances velocity with necessary oversight.

The biggest practical takeaway is to stop treating Claude Code as a fancy autocomplete and start treating it as a junior engineer you're managing—you define the task, review the important deliverables, and are ultimately responsible for the final product.
Original source: simonwillison.net