Claude Code, Codex, and Hermes now all support /goal. The feature lets agents work autonomously until a fast evaluator model confirms a completion condition.
Key facts
- /goal adopted by Claude Code, Codex, Hermes
- Vague goals cause infinite loops or hallucinated success
- Good goals require observable end state
- Complex objectives need sequential /goal calls
- Evaluator model confirms condition from transcript
The /goal pattern is sweeping AI coding agents. [According to @akshay_pachaar], Claude Code, Codex, Hermes, and more are adopting the same design: you set a completion condition, and the agent works autonomously until a fast evaluator model confirms the condition is met.
The feature is simple. Writing good goals is not.
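The loop behind the feature can be sketched in a few lines. Everything here is a hypothetical stand-in (toy stubs), not Claude Code's, Codex's, or Hermes's actual implementation:

```python
# Toy stand-in for one agent step: a real agent would edit files,
# run commands, and append their output to the transcript.
def run_agent_step(goal: str, transcript: list[str]) -> str:
    return f"ran checks for '{goal}': 12 passed, 0 failed"

# Toy stand-in for the fast evaluator model: it reads the transcript
# and decides whether the completion condition is observably met.
def evaluate(goal: str, transcript: list[str]) -> bool:
    return "0 failed" in transcript[-1]

def run_goal(goal: str, max_steps: int = 50) -> bool:
    transcript: list[str] = []
    for _ in range(max_steps):
        transcript.append(run_agent_step(goal, transcript))
        if evaluate(goal, transcript):
            return True
    return False  # step budget exhausted without confirmed completion
```

The `max_steps` cap matters: with a vague goal, `evaluate` never fires and the loop runs to the budget, which is exactly the token-burning failure mode vague goals produce.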
Vague goals fail in two ways: the agent loops forever trying to satisfy an unclear condition, or the evaluator hallucinates success because there's nothing concrete to check against. Both burn tokens for nothing.
Good goals describe an observable end state.
"all tests in test/auth pass and lint is clean" works because the agent can run the tests, print the output, and the evaluator can confirm it from the transcript. "every call site of the old API migrated and build succeeds" works because there's a verifiable artifact: the build output. "CHANGELOG.md has an entry for each PR merged this week" works because it points to a concrete file with concrete content.
Bad goals have no finish line.
"make the codebase better" fails because better by what metric? "refactor everything" fails because there's no exit condition. "fix the bugs" fails because which bugs, verified how?
The mental model that helps: if a human couldn't tell when the ticket is done, neither can the evaluator. Treat every /goal like a ticket you're assigning to a very literal junior developer who never gets tired. Write the exact acceptance criteria you'd put in that ticket.
Complex multi-step objectives overwhelm a single /goal. "redesign auth, add OAuth, write tests, update docs" is four goals pretending to be one. Break it into sequential /goal calls, each with a single verifiable finish line.
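That decomposition can be sketched the same way. `run_goal` here is a hypothetical stand-in for a /goal invocation, and the goal strings are illustrative acceptance criteria, not from the source:

```python
completed: list[str] = []

# Hypothetical stand-in: a real /goal call would block until the
# evaluator confirms the finish line (or a budget runs out).
def run_goal(goal: str) -> bool:
    completed.append(goal)
    return True

# One compound objective, split into four verifiable finish lines.
goals = [
    "auth module refactored and all tests in test/auth pass",
    "OAuth login implemented and test/oauth passes",
    "every OAuth branch covered by a test and the suite is green",
    "docs/auth.md documents the OAuth setup and the docs build succeeds",
]
for goal in goals:
    if not run_goal(goal):
        break  # stop at the first unmet finish line instead of drifting on
```

Running them in sequence also means a failure surfaces at a specific, named step rather than somewhere inside one sprawling objective.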
The unique take: /goal is the first pattern to commoditize agentic autonomy across competing platforms. But its effectiveness is bounded by the user's ability to write acceptance criteria. The bottleneck has shifted from model capability to specification quality.
What to watch
Watch for agent platforms to publish evaluator-model accuracy benchmarks, specifically false-positive rates on /goal completion. The first vendor to show under 5% hallucinated completions will gain a trust advantage. Also monitor for /goal-like features landing in GitHub Copilot and Cursor.