gentic.news — AI News Intelligence Platform

Claude Code's Deny List Bypass: How to Protect Your Codebase from Compound Commands

Claude Code's deny lists only check the first token of compound commands, allowing dangerous actions like 'git clean' to slip through. Here's how to protect yourself.

Mar 25, 2026 · 5 min read · AI-Generated
Source: spitfirecowboy.com via hn_claude_code, gn_claude_community, gn_claude_code (Multi-Source)

The Vulnerability — First-Token-Only Evaluation

A critical flaw in Claude Code's permission system allows dangerous commands to bypass deny lists when chained with other operations. The deny rule evaluator only checks the first token of a Bash command. If you've added git clean to your deny list, it will block git clean -fd but allow git fetch && git pull && git clean -fd.

This isn't theoretical. Two independent reports (GitHub issues #36637 and #31523) document the same root cause. The problem affects both deny lists and allow lists — the parser evaluates only the initial command token, then permits or blocks the entire compound expression based on that single check.
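The failure mode is easy to model. Here is a minimal sketch (illustration only; `flawed_check` is a stand-in for the behavior described above, not Claude Code's actual evaluator): the deny pattern is compared only against the start of the whole string, so only the first command in a chain is ever inspected.

```shell
#!/bin/bash
# Toy model of first-token-only evaluation: the deny pattern "git clean"
# is matched only against the beginning of the command string.
flawed_check() {
    case "$1" in
        "git clean"*) echo "blocked" ;;
        *)            echo "allowed" ;;
    esac
}

flawed_check "git clean -fd"                # blocked, as expected
flawed_check "git fetch && git clean -fd"   # allowed -- the bypass
```

Any prefix command, however harmless, is enough to smuggle the denied command past the check.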

Why This Matters for Your Daily Workflow

Claude naturally chains commands because it's efficient and follows standard shell practices. When you ask Claude to "update dependencies and clean up," it might generate:

git fetch && npm install && git clean -fd

If git clean is in your deny list, you'd expect protection. But the evaluator sees git fetch first, finds no match, and allows the entire chain. Your working tree disappears.

This isn't about adversarial prompts. Claude isn't trying to bypass your rules — it's following its natural command-chaining behavior while the permission system fails to parse compound expressions correctly.

The Working Fix — PR #36645

A community-submitted fix (PR #36645) adds proper compound command parsing to Claude Code's PreToolUse hooks. The 573-line implementation:

  • Splits commands on &&, ||, ;, and |
  • Checks each segment independently against deny/allow rules
  • Blocks the entire expression if any segment violates rules
  • Includes 34 passing tests

You can review the implementation at github.com/anthropics/claude-code/pull/36645.

Immediate Protection — Bash Guard Script

While waiting for the official fix, you can implement a local guard. Create ~/.claude/bash-guard.sh:

#!/bin/bash
# Compound command guard for Claude Code

# Deny tokens: single commands, plus two-word commands joined with "-"
# (so "git clean" is listed as "git-clean").
DENY_LIST="git-clean rm dd shred"

check_command() {
    local cmd="$1"
    # IFS splits on single characters only, so normalize the shell
    # operators (&&, ||, ;, |) to newlines first, then read line by line.
    local segments
    segments=$(printf '%s\n' "$cmd" | sed -E 's/&&|\|\||;|\|/\n/g')

    while IFS= read -r part; do
        part=$(echo "$part" | xargs)   # trim whitespace
        [ -z "$part" ] && continue     # skip empty segments
        first_token=$(echo "$part" | awk '{print $1}')
        two_tokens=$(echo "$part" | awk '{print $1 "-" $2}')  # e.g. "git-clean"

        # Check both the first token and the first two tokens, so
        # two-word commands like "git clean" can be denied.
        if [[ " $DENY_LIST " == *" $first_token "* ]] || \
           [[ " $DENY_LIST " == *" $two_tokens "* ]]; then
            echo "Blocked: '$part' in compound command"
            return 1
        fi
    done <<< "$segments"
    return 0
}

# Call check_command from a PreToolUse hook, or manually before
# executing Claude-generated commands.
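
One way to wire the guard in automatically is a PreToolUse hook entry in `.claude/settings.json`. The fragment below is a sketch based on Claude Code's hooks feature; verify the exact schema against the current hooks documentation before relying on it:

```json
{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": "Bash",
        "hooks": [
          { "type": "command", "command": "~/.claude/bash-guard.sh" }
        ]
      }
    ]
  }
}
```

For this to work, the script would also need to read the hook payload from stdin (the Bash tool's command arrives as JSON, extractable with something like `jq -r '.tool_input.command'`) and exit with code 2 to block the tool call.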

Add this to your CLAUDE.md:

## Security Rules
- Never run `git clean`, `rm -rf`, `dd`, or `shred`
- Use the bash-guard script before executing any shell command
- Test compound commands manually before trusting automated execution

Anthropic's Response — And Why It's Problematic

Anthropic's security team responded that "Claude Code's deny rules are not designed as a security barrier against adversarial command construction. They are a convenience mechanism to constrain well-intentioned agent actions."

This creates two problems:

  1. It misaligns with user expectations — Developers add git clean to deny lists specifically to prevent data loss
  2. It ignores the actual failure mode — This isn't about adversarial construction; it's about Claude's normal command-chaining behavior bypassing incomplete parsing

What You Should Do Today

  1. Audit your deny lists — Identify commands that could be chained
  2. Add explicit warnings to your CLAUDE.md about command chaining
  3. Consider OS-level protections — Use Docker containers or restricted user accounts for high-risk operations
  4. Monitor the PR — The community fix provides a complete solution; press for its adoption
  5. Test compound commands — Before running Claude-generated chains, manually verify each segment
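
For step 5, a small helper can make review easier. This is a sketch, and `explain_chain` is a hypothetical name, not part of Claude Code; it splits a generated chain on the same operators the PR handles and prints each segment for inspection before you run anything:

```shell
#!/bin/bash
# Print each segment of a compound command on its own line so every
# piece can be reviewed before execution.
explain_chain() {
    printf '%s\n' "$1" | sed -E 's/&&|\|\||;|\|/\n/g' | \
    while IFS= read -r seg; do
        seg=$(echo "$seg" | xargs)            # trim surrounding whitespace
        [ -n "$seg" ] && echo "segment: $seg"
    done
    return 0
}

explain_chain "git fetch && npm install && git clean -fd"
```

Seeing `git clean -fd` on its own line makes it much harder to miss at the end of an otherwise innocuous chain.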

Remember: Claude Code is an agentic tool with real filesystem access. Treat its permissions with the same seriousness you'd apply to any automation running on your production machine.

gentic.news Analysis

This security flaw emerges as Claude Code adoption accelerates — the tool appeared in 144 articles this week alone, reflecting its rapid integration into developer workflows. The vulnerability highlights a growing pain for Anthropic's agentic coding ecosystem, which has expanded significantly since the March 2026 releases of Auto Mode, the official Git MCP server, and voice capabilities.

The community's rapid response with a tested fix (PR #36645) demonstrates the strength of Claude Code's open development model, similar to how GitHub repositories have previously improved Claude's output. However, Anthropic's dismissal of the report as "informational" contrasts with the tool's positioning as a production-ready coding assistant — especially concerning given Claude Code's integration with Claude Agent frameworks where multiple agents collaborate on complex tasks.

This follows a pattern we've seen before: as AI coding tools gain more autonomy (benchmarks show Claude Code agents average 25 navigation actions per edit), permission systems must evolve beyond simple token matching. The fix aligns with broader industry trends toward more sophisticated agent safeguards, similar to the explicit tool call patterns we recommended in "Fix Your Silent Slash Command Failures."

Looking forward, expect more granular permission systems as Claude Code moves beyond "convenience mechanisms" toward true security boundaries — especially as Anthropic competes with OpenAI and Google for enterprise adoption where such safeguards are non-negotiable.


AI-assisted reporting. Generated by gentic.news from multiple verified sources, fact-checked against the Living Graph of 4,300+ entities. Edited by Ala SMITH.

AI Analysis

Claude Code users should immediately audit their deny lists for commands that could appear in compound expressions. Any dangerous command (git clean, rm, dd, etc.) needs additional protection beyond simple deny list entries. Add explicit warnings to your CLAUDE.md: "Claude naturally chains commands with && and ;. Deny lists only check the first token. Manually verify compound commands before execution."

Consider running Claude Code in a Docker container or VM for high-risk operations until the parser fix is officially merged.

Test this vulnerability yourself: add 'ls' to your deny list, then ask Claude to "check status and list files". It will likely generate 'git status && ls -la', which will execute despite your rule. This demonstrates why you need additional safeguards beyond the current deny list implementation.
