What Changed — A Fleet of Reviewers in a Sandbox

Claude Code v2.1.86+ introduces /ultrareview, a research preview feature that fundamentally changes how AI-assisted code review works. Instead of a single AI pass analyzing your local code, it launches multiple reviewer agents in a remote sandbox environment to examine your git branch or pull request in parallel.
This isn't just /review on steroids—it's a different paradigm. Each reported finding is independently reproduced and verified by the agent fleet before it surfaces to you. The result: higher signal, fewer false positives, and coverage that a single-pass review can't match.
What It Means For Your Workflow — From Style Suggestions to Verified Bugs
When to Use /ultrareview vs. /review
Think of these as complementary tools for different stages:
- Use /review for fast, iterative feedback as you code. It's your pair programmer, catching syntax issues, potential logic errors, and style inconsistencies in real time.
- Use /ultrareview as your final gatekeeper before merging to main or deploying. It's the comprehensive audit that finds the subtle race conditions, security vulnerabilities, and edge cases that slip through unit tests.
The Key Differentiators
- Independent Verification: Every bug reported has been reproduced in the sandbox. No more "this might be an issue"—you get "this IS an issue, and here's how it breaks."
- Parallel Exploration: Multiple agents attack the codebase from different angles simultaneously, uncovering issues that sequential analysis might miss.
- Zero Local Resources: The entire review runs remotely. Your terminal stays free while the fleet works in the background for 5-10 minutes.
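Anthropic hasn't published the orchestration details, but the fan-out-and-verify pattern these bullets describe is easy to picture. Here's a conceptual sketch in shell; review_agent and reproduce are hypothetical stand-ins for the fleet's internals, not real commands:

```bash
# Conceptual sketch only: review_agent and reproduce are hypothetical
# stand-ins, not actual Claude Code commands.

# Fan out: several reviewers examine the same diff from different angles.
for angle in security concurrency logic edge-cases; do
  review_agent --focus "$angle" branch.diff > "findings.$angle.txt" &
done
wait   # agents run in parallel, like the remote fleet

# Gate: a finding only surfaces if an independent pass can reproduce it.
sort -u findings.*.txt | while read -r finding; do
  reproduce "$finding" && echo "VERIFIED: $finding"
  # findings that fail reproduction are dropped, not reported
done
```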
Try It Now — Your First Ultrareview

Prerequisites and Setup
First, ensure you're authenticated and on the right version:
# Check your Claude Code version
claude --version
# Should be v2.1.86 or later
# Authenticate if you haven't already
claude auth login
Ultrareview requires:
- A git repository with a remote (GitHub, GitLab, etc.)
- Authentication with Claude.ai (not available through AWS Bedrock, Google Vertex AI, or Microsoft Foundry)
- Extra usage enabled for paid runs (Pro and Max subscribers get 3 free runs)
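You can sanity-check the git side of these requirements before spending a run. These are ordinary git commands, nothing ultrareview-specific; the last one assumes your branch has an upstream set:

```bash
# Confirm the repo has a remote and see which branch would be reviewed
git remote -v
git branch --show-current

# Check for unpushed work; a remote sandbox presumably only sees what's pushed
git status -sb
```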
Running Your Review
Navigate to your git repository and run:
/ultrareview
Before launching, Claude Code shows a confirmation dialog with:
- Review scope (which branch/PR)
- Your remaining free runs
- Estimated cost ($5-20 depending on change size)
After confirming, the review runs in the background. Keep working or close your terminal; a notification will appear when the review completes.
Managing and Tracking Reviews
# Check running and completed reviews
/tasks
# Open detail view for a specific review
/tasks <review-id>
# Stop a review in progress (partial findings won't be returned)
/tasks stop <review-id>
When the review finishes, you'll get verified findings with file locations and explanations. Each finding is actionable—you can immediately ask Claude to fix it.
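For instance, a natural follow-up in the same session might look like this (the finding and file path are invented for illustration):

```
> Fix the token-refresh race that ultrareview flagged in src/auth/session.ts,
  then add a regression test covering two concurrent refreshes.
```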
Cost Management
# Check your extra usage status
/extra-usage
# Enable extra usage if needed (required for paid runs)
# Follow the billing settings link if blocked
Remember: Pro and Max subscribers get three one-time free runs. After that, each review bills against extra usage. The $5-20 cost scales with the size of your changes, making it economical for focused PRs but potentially expensive for massive refactors.
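As a back-of-envelope (my numbers, using the quoted range):

```bash
# Illustrative monthly math, assuming ~$10 for a mid-sized PR:
#   3 free runs (one-time), then 5 paid reviews x $10 = $50/month
#   vs. reserving it for the single riskiest PR       = $10/month
```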
When Ultrareview Shines — And When to Skip It
Ideal Use Cases:
- Security-sensitive code: Authentication systems, payment processors, data handling
- Concurrent code: Multi-threaded applications, database transactions
- Complex business logic: Financial calculations, compliance-critical operations
- Before major releases: Final verification before tagging versions
When to Stick with /review:
- Early development iterations
- Style and linting feedback
- Small, straightforward changes
- When you need immediate feedback (ultrareview takes 5-10 minutes)
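If you want a mechanical nudge toward one or the other, a crude triage script works; the 300-line threshold, the origin/main base, and the "security-adjacent" pattern are all my assumptions, not product guidance:

```bash
#!/usr/bin/env bash
# Crude pre-merge triage: large or security-adjacent diffs get the fleet.
base="origin/main"  # assumption: main is the merge target

insertions=$(git diff --shortstat "$base"...HEAD \
  | grep -oE '[0-9]+ insertion' | grep -oE '[0-9]+')
touchy=$(git diff --name-only "$base"...HEAD \
  | grep -cE 'auth|payment|crypto')

if [ "${insertions:-0}" -gt 300 ] || [ "${touchy:-0}" -gt 0 ]; then
  echo "Candidate for /ultrareview before merging"
else
  echo "/review is probably enough"
fi
```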
The Bottom Line
/ultrareview represents a shift from AI as a suggestion engine to AI as a verification system. By deploying multiple agents that independently reproduce issues, it moves beyond "this looks wrong" to "this breaks when tested." For critical code paths, it's worth the $5-20 investment to catch what human reviewers and traditional CI might miss.
Start with your three free runs on your most complex recent PRs. You'll quickly learn which changes benefit from the fleet approach and which are better served by the faster, local /review.