Claude Code's /ultrareview Command

Claude Code's new /ultrareview command runs multiple AI reviewers in parallel to find and independently verify real bugs, costing $5-20 per run after three free tries.

Gala Smith & AI Research Desk · 20h ago · 4 min read · AI-Generated
Source: code.claude.com via hn_claude_code (corroborated)
Claude Code's /ultrareview Command: How to Deploy a Bug-Hunting Fleet Before You Ship

What Changed — A Fleet of Reviewers in a Sandbox

Claude Code v2.1.86+ introduces /ultrareview, a research preview feature that fundamentally changes how AI-assisted code review works. Instead of a single AI pass analyzing your local code, it launches multiple reviewer agents in a remote sandbox environment to examine your git branch or pull request in parallel.

This isn't just /review on steroids—it's a different paradigm. Each reported finding is independently reproduced and verified by the agent fleet before it surfaces to you. The result: higher signal, fewer false positives, and coverage that a single-pass review can't match.

What It Means For Your Workflow — From Style Suggestions to Verified Bugs

When to Use /ultrareview vs. /review

Think of these as complementary tools for different stages:

  • Use /review for fast, iterative feedback as you code. It's your pair programmer catching syntax issues, potential logic errors, and style inconsistencies in real-time.
  • Use /ultrareview as your final gatekeeper before merging to main or deploying. It's the comprehensive audit that finds the subtle race conditions, security vulnerabilities, and edge cases that slip through unit tests.

The Key Differentiators

  1. Independent Verification: Every bug reported has been reproduced in the sandbox. No more "this might be an issue"—you get "this IS an issue, and here's how it breaks."
  2. Parallel Exploration: Multiple agents attack the codebase from different angles simultaneously, uncovering issues that sequential analysis might miss.
  3. Zero Local Resources: The entire review runs remotely. Your terminal stays free while the fleet works in the background for 5-10 minutes.
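
The verify-before-report pattern the differentiators describe can be sketched in ordinary Python. This is a conceptual illustration only, not Claude Code's internals: the stand-in reviewers, the diff heuristics, and the verification check are all invented for the example. Several reviewers scan the same diff in parallel, duplicates are merged, and a finding surfaces only if an independent verification pass confirms it.

```python
# Conceptual sketch of the "fleet" pattern: parallel reviewers plus an
# independent verification step. Reviewer logic here is a toy heuristic.
from concurrent.futures import ThreadPoolExecutor

def reviewer_a(diff):
    # Stand-in reviewer: flags a hypothetical unchecked-resource issue.
    return [{"issue": "unchecked return", "line": 12}] if "open(" in diff else []

def reviewer_b(diff):
    # A second reviewer approaching the same diff from another angle
    # may converge on the same finding.
    return [{"issue": "unchecked return", "line": 12}] if "open(" in diff else []

def verify(finding, diff):
    # Independent reproduction step: keep only findings a second pass
    # can confirm against the actual diff (toy check: line exists).
    return finding["line"] <= diff.count("\n") + 1

def fleet_review(diff, reviewers):
    # Run every reviewer in parallel over the same diff.
    with ThreadPoolExecutor() as pool:
        raw = [f for found in pool.map(lambda r: r(diff), reviewers) for f in found]
    # De-duplicate, then surface only verified findings.
    unique = {(f["issue"], f["line"]): f for f in raw}.values()
    return [f for f in unique if verify(f, diff)]
```

The point of the pattern is the final filter: nothing reaches the user that only one pass "thought" it saw.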

Try It Now — Your First Ultrareview

Prerequisites and Setup

First, ensure you're authenticated and on the right version:

# Check your Claude Code version
claude --version
# Should be v2.1.86 or later

# Authenticate if you haven't already
claude auth login
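
If you want to gate a script or CI step on the version check above, a small comparison helper does the job. This is a sketch under one assumption: that `claude --version` prints a semantic version like `2.1.86` somewhere in its output (the exact format may differ on your install).

```python
# Pre-flight version gate for /ultrareview, assuming the CLI prints a
# semver-style string. Parsing and threshold are the only logic here.
import re

MIN_VERSION = (2, 1, 86)  # first release with /ultrareview, per this article

def parse_version(text):
    """Extract the first X.Y.Z version found in `text` as an int tuple."""
    match = re.search(r"(\d+)\.(\d+)\.(\d+)", text)
    if not match:
        raise ValueError(f"no version found in: {text!r}")
    return tuple(int(part) for part in match.groups())

def supports_ultrareview(version_output):
    # Tuple comparison gives correct semver ordering for X.Y.Z.
    return parse_version(version_output) >= MIN_VERSION

# To wire it up, feed in the CLI's output, e.g.:
#   out = subprocess.run(["claude", "--version"],
#                        capture_output=True, text=True).stdout
#   print(supports_ultrareview(out))
```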

Ultrareview requires:

  • A git repository with a remote (GitHub, GitLab, etc.)
  • Authentication with Claude.ai (not available through AWS Bedrock, Google Vertex AI, or Microsoft Foundry)
  • Extra usage enabled for paid runs (Pro and Max subscribers get 3 free runs)

Running Your Review

Navigate to your git repository and run:

/ultrareview

Before launching, Claude Code shows a confirmation dialog with:

  • Review scope (which branch/PR)
  • Your remaining free runs
  • Estimated cost ($5-20 depending on change size)

After confirming, the review runs in the background. Keep working or close your terminal—the notification will appear when complete.

Managing and Tracking Reviews

# Check running and completed reviews
/tasks

# Open detail view for a specific review
/tasks <review-id>

# Stop a review in progress (partial findings won't be returned)
/tasks stop <review-id>

When the review finishes, you'll get verified findings with file locations and explanations. Each finding is actionable—you can immediately ask Claude to fix it.

Cost Management

# Check your extra usage status
/extra-usage

# Enable extra usage if needed (required for paid runs)
# Follow the billing settings link if blocked

Remember: Pro and Max subscribers get three one-time free runs. After that, each review bills against extra usage. The $5-20 cost scales with the size of your changes, making it economical for focused PRs but potentially expensive for massive refactors.
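
For budgeting, you can model that scaling with a simple interpolation. The breakpoints below are invented for illustration; only the $5-20 range comes from the article, and the authoritative number is always the estimate Claude Code shows in its confirmation dialog before launch.

```python
# Back-of-the-envelope cost planner for /ultrareview runs. The $5-20
# range is from the article; the linear scaling and the 2000-line cap
# are assumptions made purely for illustration.
def estimate_cost(changed_lines, min_cost=5.0, max_cost=20.0, cap=2000):
    """Linearly interpolate between min_cost and max_cost up to `cap` lines."""
    fraction = min(changed_lines, cap) / cap
    return round(min_cost + fraction * (max_cost - min_cost), 2)
```

Run against your typical PR sizes to see where your team's changes land in the range, and split oversized refactors into smaller reviews when the estimate climbs toward the top.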

When Ultrareview Shines — And When to Skip It

Ideal Use Cases:

  • Security-sensitive code: Authentication systems, payment processors, data handling
  • Concurrent code: Multi-threaded applications, database transactions
  • Complex business logic: Financial calculations, compliance-critical operations
  • Before major releases: Final verification before tagging versions

When to Stick with /review:

  • Early development iterations
  • Style and linting feedback
  • Small, straightforward changes
  • When you need immediate feedback (ultrareview takes 5-10 minutes)
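
Teams that want this triage to be mechanical can encode the two lists above in a small helper. Everything here is illustrative: the risk-path prefixes and flags are hypothetical conventions for your own repo, not anything Claude Code defines.

```python
# Hypothetical pre-merge triage: escalate to /ultrareview for high-risk
# changes, stay with /review otherwise. Paths and signals are examples.
HIGH_RISK_PATHS = ("auth/", "payments/", "billing/")

def pick_review_command(changed_files, touches_concurrency=False,
                        is_release_candidate=False):
    """Return the slash command this change set warrants."""
    high_risk = any(f.startswith(HIGH_RISK_PATHS) for f in changed_files)
    if high_risk or touches_concurrency or is_release_candidate:
        return "/ultrareview"
    return "/review"
```

A hook like this keeps the "final gatekeeper" decision out of individual judgment calls and makes the escalation criteria reviewable in their own right.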

The Bottom Line

/ultrareview represents a shift from AI as a suggestion engine to AI as a verification system. By deploying multiple agents that independently reproduce issues, it moves beyond "this looks wrong" to "this breaks when tested." For critical code paths, it's worth the $5-20 investment to catch what human reviewers and traditional CI might miss.

Start with your three free runs on your most complex recent PRs. You'll quickly learn which changes benefit from the fleet approach and which are better served by the faster, local /review.

AI Analysis

Claude Code users should integrate `/ultrareview` into their pre-merge checklist for any non-trivial change. The key workflow shift: treat `/review` as the continuous feedback loop during development and `/ultrareview` as the final quality gate before merging.

**Specific action items:**

  1. Run `claude --version` to confirm you're on v2.1.86 or later; update if not.
  2. Use your three free runs strategically: start with your most complex recently merged PR to see what ultrareview would have caught.
  3. Add `/ultrareview` to your team's PR checklist for security patches, concurrent code changes, and core business-logic modifications.
  4. Keep changes focused to control costs: an ultrareview of a ten-file refactor costs more than one of a two-file security fix.

**Prompting tip:** When ultrareview finds a bug, don't just read the finding; immediately ask Claude to fix it. The context is already loaded, making the fix prompt more effective. Try: "Based on finding #3 about the race condition, generate a fix using mutex locking."