gentic.news — AI News Intelligence Platform

Codex-Review Plugin: The Structured Workflow for Claude-Codex Collaboration

A new Claude Code plugin adds five simple commands to create a repeatable, artifact-driven review loop between Claude and OpenAI's Codex, preventing plan drift and lost context.

Mar 27, 2026 · 3 min read · 126 views · AI-Generated

Source: github.com, via hn_claude_code, devto_mcp, reddit_claude (multi-source)

The Technique — A Thin Workflow for Two Models

Developers have long known that combining Claude's planning strength with Codex's sharp review capabilities yields high-quality results. The problem is maintaining that loop. Without structure, plans drift between review rounds, implementation reviews lose the original context, and findings discussed in chat disappear. The codex-review plugin for Claude Code solves this by enforcing a simple, repeatable workflow with just five commands.

It's not a framework or an agent runtime. It's a thin wrapper around two core operations: /codex-review:plan and /codex-review:impl. This constraint is its strength, creating a canonical, on-disk record of the entire review process that any Claude session in the repo can follow.

Why It Works — Artifacts Over Chat

The plugin works by creating and managing a set of immutable artifacts in a .claude/codex-review/workflows/<workflow-id>/ directory. This is the key to solving the collaboration problem.

When you run /codex-review:plan, it snapshots the current Claude plan into artifacts/plan.md. Codex then reviews this plan, writing its feedback to artifacts/plan-review-rN.md. Claude responds to findings in artifacts/plan-findings-rN.md. A decisions.tsv file acts as a ledger. This ensures the "approved plan" is a specific file on disk, not a vague memory in a chat thread.

Subsequent /codex-review:impl commands review the current code against that canonical plan.md. The history of all review rounds is preserved on disk, preventing context loss and allowing multiple developers (or multiple Claude sessions) to work in the same repository without stepping on each other's workflows.
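Putting the pieces together, a workflow directory might look like the sketch below. The file names (plan.md, plan-review-rN.md, plan-findings-rN.md, decisions.tsv) come from the article; the exact nesting inside the workflow directory is an assumption and may differ in the actual plugin.

```
.claude/codex-review/workflows/<workflow-id>/
├── artifacts/
│   ├── plan.md               # canonical snapshot of the approved plan
│   ├── plan-review-r1.md     # Codex's round-1 feedback
│   └── plan-findings-r1.md   # Claude's responses to round-1 findings
└── decisions.tsv             # ledger of what each round decided
```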

How To Apply It — Setup and Commands

First, ensure you have the prerequisites:

  • Go 1.22+
  • Claude Code
  • OpenAI Codex CLI (npm install -g @openai/codex)
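A quick preflight can confirm these tools are on your PATH before installing. This is a minimal sketch; go, claude, and codex are the standard binary names for the prerequisites above, but your install locations may vary.

```shell
# Preflight: the plugin needs Go, Claude Code, and the Codex CLI on PATH.
for tool in go claude codex; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: MISSING"
  fi
done
```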

Install the plugin directly from GitHub:

claude plugin add github:boyand/codex-review

Now, integrate it into your development loop:

  1. Plan with Claude. In your Claude Code session, develop a plan for a feature or fix.
  2. Review the Plan. Run `/codex-review:plan deeply review this plan`. This kicks off the artifact creation and Codex review cycle. Claude will show a summary and ask for next steps.
  3. Implement. Have Claude write the code based on the now-approved plan.md.
  4. Review the Implementation. Run `/codex-review:impl review the implementation deeply`. Codex will check the code against the saved plan.

Use the supporting commands to manage the workflow:

  • /codex-review:summary: See severity totals (FIX vs. NO-CHANGE vs. OPEN) and what the current round decided.
  • /codex-review:status: Check the workflow's current state.
  • /codex-review:doctor: Diagnose issues like missing Codex CLI or unresolved Claude sessions.
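Because decisions.tsv is a plain tab-separated file, its totals can also be tallied directly. The sketch below assumes a hypothetical column layout (round, finding id, severity, decision, with the decision in the fourth column); the plugin's real schema may differ.

```shell
# Build a sample ledger with a hypothetical column layout (real columns may differ).
{
  printf '1\tF1\thigh\tFIX\n'
  printf '1\tF2\tlow\tNO-CHANGE\n'
  printf '2\tF3\tmedium\tOPEN\n'
  printf '2\tF4\thigh\tFIX\n'
} > decisions.tsv

# Count each decision type, similar in spirit to /codex-review:summary.
awk -F'\t' '{count[$4]++} END {for (d in count) print d, count[d]}' decisions.tsv | sort
# -> FIX 2
#    NO-CHANGE 1
#    OPEN 1
```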

This creates a disciplined, audit-ready development process where each model's strengths are systematically leveraged and every decision is recorded.


AI-assisted reporting. Generated by gentic.news from multiple verified sources, fact-checked against the Living Graph of 4,300+ entities. Edited by Ala SMITH.


AI Analysis

Claude Code users should adopt this plugin when working on complex features where plan integrity is critical. It formalizes the best-practice "plan, review, implement, review" loop that many use informally.

**Specific Workflow Change:** Instead of having a single, meandering chat thread with Claude where you manually ask for a Codex review, structure your work into distinct phases. Use `/codex-review:plan` to lock in a design before a single line of implementation code is written. This prevents scope creep and ensures Codex reviews a stable target.

**Integration Tip:** This complements the "Stop Reviewing Every Line" workflows we covered recently. Use this plugin for high-level architectural and design review, and use other automated verification for line-by-line code quality. The `decisions.tsv` ledger and artifact history also provide a concrete record for performance monitoring, helping you track whether review quality drifts over time—a concern highlighted in our article on monitoring Claude Code's performance drift.
