Claude Code's New 'Auto Mode' Preview: What's Allowed, What's Blocked, and How to Get Access

Anthropic's new safety classifier lets Claude Code autonomously execute safe actions while blocking risky ones. Here's how it works and how to use it.

Alex Martin & AI Research Desk · 4 min read · AI-Generated
Source: news.google.com via gn_claude_code

What Changed — The Specific Update

Anthropic has begun a phased preview of Claude Code Auto Mode, a safety-classifier feature that allows Claude Code to autonomously execute certain actions while blocking others deemed risky. It follows the company's March 26 announcement of the 'long-running Claude' capability for scientific computing workflows, and marks a significant shift in how developers interact with the tool.

The auto-mode feature acts as a gatekeeper between Claude's reasoning and actual execution. When enabled, Claude Code can independently decide to perform safe operations like file edits, git commits, or running tests, while automatically blocking potentially dangerous actions like deleting critical files, running untrusted scripts, or making network requests to unknown endpoints.
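The gatekeeper idea can be sketched as a small rule-based gate. This is purely illustrative, assuming a pattern-matching design; the pattern lists and the `classify` function are hypothetical, not Anthropic's actual classifier:

```python
import re

# Hypothetical allow/block rules; patterns here are illustrative examples only.
SAFE_PATTERNS = [
    r"^git (status|diff|add|commit)\b",  # routine version control
    r"^npm test\b",                      # running the test suite
    r"^pytest\b",
]
BLOCKED_PATTERNS = [
    r"\brm\s+-rf?\b",                    # destructive deletions
    r"\bsudo\b",                         # elevated privileges
    r"\bcurl\b|\bwget\b",                # arbitrary network requests
]

def classify(command: str) -> str:
    """Return 'block', 'allow', or 'ask' (fall back to user confirmation)."""
    if any(re.search(p, command) for p in BLOCKED_PATTERNS):
        return "block"
    if any(re.search(p, command) for p in SAFE_PATTERNS):
        return "allow"
    return "ask"  # conservative default: anything unrecognized needs approval

print(classify("git commit -m 'fix typo'"))  # allow
print(classify("sudo rm -rf /tmp/cache"))    # block
print(classify("python deploy.py"))          # ask
```

The key design property is the conservative default: anything that matches neither list falls back to asking the user, which matches the behavior the article describes.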

What It Means For Your Daily Workflow

This isn't full autonomy — it's carefully constrained autonomy. The "leash" mentioned in reports refers to the safety classifier that sits between Claude's plan and its execution. In practice, this means:

  1. Fewer confirmations for routine tasks: Claude can now make simple file edits, run linters, or commit code without asking "Are you sure?" for every step
  2. Automatic protection against common mistakes: The system will block actions that match known dangerous patterns
  3. More fluid multi-step operations: Complex refactors or test suite runs can proceed with fewer interruptions

However, this doesn't mean Claude Code will start running wild. The safety classifier is conservative by design, and you'll still need to approve anything that falls outside clearly defined safe patterns.

How To Get Access and Configure It

The feature is currently in phased preview. If you have access, here's how to enable and configure it:

# Check if you have auto-mode available
claude code --features

# Enable auto-mode for your current session
claude code --auto-mode

# Or set it as default in your config
claude config set auto_mode.enabled true

# View what actions are considered safe/blocked
claude code --safety-report

Once enabled, you'll notice Claude Code taking more initiative on clearly safe operations. For example:

# Before auto-mode:
Claude: "I'll create a test file for your component. Should I proceed?"
You: "y"

# With auto-mode:
Claude: "Creating test file at src/components/Button.test.js"
[File created successfully]

What Gets Blocked — And How To Override

The safety classifier blocks several categories of actions:

  1. Deletion of files not covered by .gitignore patterns
  2. Network requests to non-whitelisted domains
  3. Package installations from untrusted registries
  4. Shell commands with sudo or other elevated privileges
  5. Database operations that modify production schemas
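The "non-whitelisted domains" rule above can be illustrated with a small host check. The whitelist contents and the `network_request_allowed` function are assumptions for the sketch, not a documented Anthropic API:

```python
from urllib.parse import urlparse

# Hypothetical whitelist; real deployments would presumably make this configurable.
ALLOWED_DOMAINS = {"registry.npmjs.org", "pypi.org", "github.com"}

def network_request_allowed(url: str) -> bool:
    """Allow outbound requests only to whitelisted hosts or their subdomains."""
    host = urlparse(url).hostname or ""
    return any(host == d or host.endswith("." + d) for d in ALLOWED_DOMAINS)

print(network_request_allowed("https://pypi.org/simple/requests/"))  # True
print(network_request_allowed("https://api.github.com/repos"))       # True
print(network_request_allowed("https://evil.example.com/payload"))   # False
```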

When Claude Code blocks an action, it will explain why and give you the option to override:

⚠️  Safety check failed: This would delete 3 non-gitignored files

Files to be deleted:
- src/config/production.js
- .env.production
- database/migrations/2024_initial.sql

To proceed anyway, use: --force-unsafe
To see safety rules: --safety-rules
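The override flow above amounts to a simple rule: a blocked action proceeds only with an explicit force flag. A minimal sketch, with assumed function and parameter names mirroring the `--force-unsafe` escape hatch:

```python
def should_execute(decision: str, force_unsafe: bool = False) -> bool:
    """Execute allowed actions; blocked ones require an explicit override."""
    if decision == "allow":
        return True
    if decision == "block":
        return force_unsafe  # mirrors the --force-unsafe escape hatch
    return False  # anything else waits for interactive confirmation

print(should_execute("block"))                     # False
print(should_execute("block", force_unsafe=True))  # True
```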

Best Practices for Auto-Mode Users

  1. Start with a test project: Don't enable auto-mode on critical production codebases immediately
  2. Review the safety rules: Run `claude code --safety-rules` to understand what's allowed
  3. Use `.claudeignore`: Create a file similar to `.gitignore` to specify files Claude should never touch
  4. Monitor with `--verbose`: When first using auto-mode, add the verbose flag to see all decisions
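The `.claudeignore` idea is gitignore-style path matching. Since the preview's matching semantics aren't documented, this sketch only illustrates the concept with simple glob patterns (the file contents and `is_ignored` helper are assumptions):

```python
from fnmatch import fnmatch

# Example .claudeignore contents: paths Claude should never touch.
CLAUDEIGNORE = [
    "node_modules/*",
    "dist/*",
    ".env.*",
]

def is_ignored(path: str) -> bool:
    """True if any ignore pattern matches the path (fnmatch '*' spans '/')."""
    return any(fnmatch(path, pattern) for pattern in CLAUDEIGNORE)

print(is_ignored("node_modules/react/index.js"))  # True
print(is_ignored(".env.production"))              # True
print(is_ignored("src/app.py"))                   # False
```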

The Bigger Picture

This auto-mode preview represents Anthropic's careful approach to agentic tools. Unlike some competitors who push for maximum autonomy, Anthropic is incrementally expanding Claude Code's capabilities while maintaining strong safety rails. This aligns with their broader strategy of deploying AI responsibly while still delivering practical productivity gains.

For developers, this means you get the benefits of more autonomous assistance without the anxiety of wondering what Claude might do next. The system tells you what it's doing and why, and gives you clear override mechanisms when needed.

AI Analysis

Claude Code users should immediately check whether they have access to the auto-mode preview. If you do, enable it on a non-critical project and observe how it changes your workflow. You'll likely find that routine refactors and test runs become significantly faster with fewer confirmation prompts.

Configure your `.claudeignore` file before using auto-mode extensively. This is your safety net: specify directories like `node_modules`, `dist/`, or any generated files you don't want Claude touching. Also run `claude code --safety-rules` to understand exactly which actions will be autonomous versus which will still require approval.

When working with auto-mode, use the `--verbose` flag initially to see the safety classifier's decisions in real time. This will help you build intuition about what's considered safe versus risky. Over time, you'll learn which tasks you can safely delegate entirely and which still need your oversight.