What Changed — The Specific Update
Anthropic has begun a phased preview of Claude Code Auto Mode, a safety-classifier feature that lets Claude Code execute routine actions on its own while blocking others deemed risky. This follows the company's March 26 announcement of the 'long-running Claude' capability for scientific computing workflows, and it marks a significant shift in how developers interact with the tool.
The auto-mode feature acts as a gatekeeper between Claude's reasoning and actual execution. When enabled, Claude Code can independently decide to perform safe operations like file edits, git commits, or running tests, while automatically blocking potentially dangerous actions like deleting critical files, running untrusted scripts, or making network requests to unknown endpoints.
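The gatekeeper can be pictured as a three-way decision: clearly safe actions run, clearly dangerous ones are blocked, and everything else falls back to the usual confirmation prompt. A minimal sketch of that flow, using made-up action names and rule sets (Anthropic has not published the actual classifier):

```python
from enum import Enum

class Verdict(Enum):
    ALLOW = "allow"   # execute without asking
    BLOCK = "block"   # refuse, explain why, offer an override
    ASK = "ask"       # fall back to the usual confirmation prompt

# Hypothetical rule sets, for illustration only -- the real classifier's
# categories and patterns are not public.
SAFE_ACTIONS = {"edit_file", "run_tests", "git_commit", "run_linter"}
DANGEROUS_ACTIONS = {"delete_file", "network_request", "run_untrusted_script"}

def classify(action: str) -> Verdict:
    """Conservative gatekeeper: anything not clearly safe needs approval."""
    if action in SAFE_ACTIONS:
        return Verdict.ALLOW
    if action in DANGEROUS_ACTIONS:
        return Verdict.BLOCK
    return Verdict.ASK  # unknown actions default to a prompt
```

The important design choice here is the default: unknown actions fall through to ASK rather than ALLOW, which is what keeps the mode conservative.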
What It Means For Your Daily Workflow
This isn't full autonomy — it's carefully constrained autonomy. The "leash" mentioned in reports refers to the safety classifier that sits between Claude's plan and its execution. In practice, this means:
- Fewer confirmations for routine tasks: Claude can now make simple file edits, run linters, or commit code without asking "Are you sure?" for every step
- Automatic protection against common mistakes: The system will block actions that match known dangerous patterns
- More fluid multi-step operations: Complex refactors or test suite runs can proceed with fewer interruptions
However, this doesn't mean Claude Code will start running wild. The safety classifier is conservative by design, and you'll still need to approve anything that falls outside clearly defined safe patterns.
How To Get Access and Configure It
The feature is currently in phased preview. If you have access, here's how to enable and configure it:
# Check if you have auto-mode available
claude code --features
# Enable auto-mode for your current session
claude code --auto-mode
# Or set it as default in your config
claude config set auto_mode.enabled true
# View what actions are considered safe/blocked
claude code --safety-report
Once enabled, you'll notice Claude Code taking more initiative on clearly safe operations. For example:
# Before auto-mode:
Claude: "I'll create a test file for your component. Should I proceed?"
You: "y"
# With auto-mode:
Claude: "Creating test file at src/components/Button.test.js"
[File created successfully]
What Gets Blocked — And How To Override
The safety classifier blocks several categories of actions:
- File deletion operations on files outside .gitignore patterns
- Network requests to non-whitelisted domains
- Package installations from untrusted registries
- Shell commands with sudo or other elevated privileges
- Database operations that modify production schemas
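The categories above amount to deny rules matched against a proposed command. A toy rule table in that spirit, with simplified stand-in patterns (the real rule set is not public):

```python
import re

# Simplified stand-in patterns for the blocked categories listed above;
# illustrative only, not Anthropic's actual rules.
DENY_RULES = [
    ("file deletion", re.compile(r"\brm\b|\bunlink\b")),
    ("elevated privileges", re.compile(r"^\s*sudo\b")),
    ("untrusted registry", re.compile(r"npm install .*--registry")),
    ("network request", re.compile(r"\b(curl|wget)\b")),
]

def check_command(cmd: str) -> list[str]:
    """Return the names of any deny rules a shell command trips."""
    return [name for name, pat in DENY_RULES if pat.search(cmd)]
```

A real implementation would also need context the command line alone doesn't carry, such as whether a deleted file is covered by .gitignore or whether a schema change targets production.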
When Claude Code blocks an action, it will explain why and give you the option to override:
⚠️ Safety check failed: This would delete 3 non-gitignored files
Files to be deleted:
- src/config/production.js
- .env.production
- database/migrations/2024_initial.sql
To proceed anyway, use: --force-unsafe
To see safety rules: --safety-rules
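A block report like the one above is straightforward to assemble from the list of offending files. A sketch of the formatting, assuming the message layout shown (the CLI's internals are not public):

```python
def format_block_report(files: list[str]) -> str:
    """Render a safety-block message in the hypothetical layout above."""
    lines = [
        f"⚠️ Safety check failed: This would delete {len(files)} non-gitignored files",
        "Files to be deleted:",
    ]
    lines += [f"- {f}" for f in files]
    lines += [
        "To proceed anyway, use: --force-unsafe",
        "To see safety rules: --safety-rules",
    ]
    return "\n".join(lines)
```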
Best Practices for Auto-Mode Users
- Start with a test project: Don't enable auto-mode on critical production codebases immediately
- Review the safety rules: Run claude code --safety-rules to understand what's allowed
- Use .claudeignore: Create a file similar to .gitignore to specify files Claude should never touch
- Monitor with --verbose: When first using auto-mode, add the verbose flag to see all decisions
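If .claudeignore uses .gitignore-style glob patterns, as the analogy suggests (an assumption; the format isn't specified here), a matcher could work along these lines:

```python
from fnmatch import fnmatch
from pathlib import Path

def load_claudeignore(path: str = ".claudeignore") -> list[str]:
    """Read glob patterns, skipping blank lines and # comments."""
    text = Path(path).read_text()
    return [ln.strip() for ln in text.splitlines()
            if ln.strip() and not ln.lstrip().startswith("#")]

def is_protected(file_path: str, patterns: list[str]) -> bool:
    """True if any pattern matches the path or one of its components."""
    parts = Path(file_path).parts
    return any(
        fnmatch(file_path, pat)
        or any(fnmatch(p, pat.rstrip("/")) for p in parts)
        for pat in patterns
    )
```

Checking individual path components as well as the full path is what lets a bare `database` entry protect everything beneath that folder, mirroring how .gitignore treats directory names.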
The Bigger Picture
This auto-mode preview represents Anthropic's careful approach to agentic tools. Unlike some competitors who push for maximum autonomy, Anthropic is incrementally expanding Claude Code's capabilities while maintaining strong safety rails. This aligns with their broader strategy of deploying AI responsibly while still delivering practical productivity gains.
For developers, this means you get the benefits of more autonomous assistance without the anxiety of wondering what Claude might do next. The system tells you what it's doing and why, and gives you clear override mechanisms when needed.