Stop Getting 'You're Absolutely Right!' from Claude Code: Install This MCP Skill for Better Technical Decisions

Install the 'thinking-partner' MCP skill to make Claude Code apply 150+ mental models and stop giving sycophantic, generic advice during technical planning.


The Problem: Claude Code's Sycophancy Loop

Every developer using Claude Code has hit this wall: you ask for help with a complex technical decision—"Should we use Python or TypeScript for this AI platform?"—and you get back, "You're absolutely right!" followed by a generic pros-and-cons list that challenges nothing. The sycophancy problem is real. Telling Claude to "be more critical" often just makes it contrarian, which is equally useless.

The Solution: A Structured Reasoning Framework via MCP

Developer MattNowDev built an open-source MCP (Model Context Protocol) skill that fundamentally changes how Claude Code approaches decision-making. Instead of defaulting to validation or shallow analysis, it gives Claude a structured reasoning framework with 150+ mental models across 17 disciplines.

Install it now:

npx skills add mattnowdev/thinking-partner

This skill works with Claude Code, Cursor, Windsurf, and other agents that support MCP. It's MIT licensed and free.

What It Actually Does: From Language Debates to Real Constraints

Here's what happened when the creator used it as a fractional CTO:

Without the skill: Team building an AI interview platform with 6-month deadline. CEO pushing for Python ("better for AI"). Claude gave a balanced Python vs TypeScript comparison—the usual generic advice.

With the skill: Claude skipped the language debate entirely and went straight for the build-vs-buy tangle that was the actual bottleneck.

It applied specific mental models:

  1. Reversibility Test: Sorted every decision into one-way doors (commit carefully) and two-way doors (decide fast, swap later).

    • Speech-to-text vendor? Two-way door—pick one and move on.
    • Evaluation engine: build custom ML or use LLM APIs? One-way door if you build—months of custom ML work you can't undo.
    • Core platform language? One-way door—get it right.
  2. Constraint Analysis: "You're not training models. You're orchestrating API calls and transforming data. Your TypeScript team can do that today. The time you'd burn switching stacks or hiring is the resource you can't get back."

  3. Pre-Mortem: Ran scenarios on three paths. The most impactful: "You tried to build everything custom. One dev spent 8 weeks on a speech-to-text pipeline that still wasn't as good as Deepgram. Another spent 12 weeks on a custom scoring engine. By month 4, half the team was building infrastructure and the actual product was a skeleton."

  4. Reframe: "You came in asking 'Python or TypeScript?' The real question is: where do your 4 engineers spend 6 months to ship something that works?"

The result: They bought the commodity stuff, built deep on what made the product different, and shipped on time.
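The Reversibility Test above can be sketched as a simple triage. This is an illustrative sketch only: the decision names and door classifications mirror the article's example, but the `Decision` class and `triage` function are invented here and are not part of the skill's actual implementation.

```python
# Hypothetical sketch of the "Reversibility Test" heuristic described above.
from dataclasses import dataclass

@dataclass
class Decision:
    name: str
    reversible: bool  # True = "two-way door", False = "one-way door"

def triage(decisions: list[Decision]) -> tuple[list[str], list[str]]:
    """Split decisions into fast picks (two-way doors) and careful commits (one-way doors)."""
    fast = [d.name for d in decisions if d.reversible]
    careful = [d.name for d in decisions if not d.reversible]
    return fast, careful

decisions = [
    Decision("speech-to-text vendor", reversible=True),
    Decision("custom ML evaluation engine", reversible=False),
    Decision("core platform language", reversible=False),
]

fast, careful = triage(decisions)
print("Decide fast, swap later:", fast)
print("Commit carefully:", careful)
```

The point of the model is the sorting itself: once a decision lands in the "two-way door" bucket, the team stops debating it and moves on.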

How It Works: Orientation Detection Before Model Selection

The skill doesn't just throw mental models at problems. It has an orientation detection layer that figures out your thinking state before picking which models to apply:

  • Already decided and looking for validation?
  • Rushing because ambiguity feels bad?
  • "Careful analysis" always landing on the same conclusion?

Each state gets a different intervention. As the creator notes: "Showing more evidence to someone who already decided just makes them better at defending their position."

The skill picks 2-3 relevant models from its library (First Principles, Inversion, Pre-Mortem, Second-Order Thinking, Opportunity Cost, SWOT, Skin in the Game, Planning Fallacy, Regret Minimization, etc.) and applies them one question at a time.
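The orientation-then-selection flow described above can be illustrated with a minimal sketch. To be clear about what is assumed: the orientation labels and model names come from the article, but the keyword cues, the `detect_orientation`/`pick_models` functions, and the intervention mapping are invented for illustration and are not the skill's actual logic.

```python
# Illustrative sketch: detect the asker's thinking state first,
# then pick mental models suited to that state.

ORIENTATION_CUES = {
    "seeking_validation": ["confirm", "right choice", "agree that"],
    "rushing": ["just tell me", "quickly", "asap"],
    "anchored": ["obviously", "clearly the best"],
}

# Hypothetical mapping: each orientation gets a different intervention,
# since more evidence alone won't move someone who has already decided.
INTERVENTIONS = {
    "seeking_validation": ["Inversion", "Pre-Mortem"],
    "rushing": ["Reversibility Test", "Second-Order Thinking"],
    "anchored": ["First Principles", "Opportunity Cost"],
    "open": ["Constraint Analysis", "Regret Minimization"],
}

def detect_orientation(prompt: str) -> str:
    text = prompt.lower()
    for orientation, cues in ORIENTATION_CUES.items():
        if any(cue in text for cue in cues):
            return orientation
    return "open"

def pick_models(prompt: str) -> list[str]:
    return INTERVENTIONS[detect_orientation(prompt)]

print(pick_models("Confirm we're making the right choice with Python"))
# -> ["Inversion", "Pre-Mortem"]: validation-seeking gets challenged, not agreed with
```

A real implementation would have the LLM itself classify the orientation rather than match keywords; the sketch only shows why the classification step comes before model selection.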

Current Limitations and Tuning

The creator acknowledges it's still being tuned and sometimes over-applies models. But the core benefit remains: it drills into root problems instead of the "Great question!" loop.

Why This Matters for Claude Code Users

This isn't just another prompt engineering trick. It's a structured approach that leverages Claude Code's MCP capabilities to fundamentally change how the agent thinks about your problems. When you're facing technical decisions about architecture, tooling, or resource allocation, this skill transforms Claude from a yes-man into a thinking partner that challenges your assumptions and surfaces the real constraints.

Try it today:

npx skills add mattnowdev/thinking-partner

Then ask Claude Code about your next technical decision and see the difference.

AI Analysis

Claude Code users should install this MCP skill for any technical planning or architectural decision-making. The key workflow change is simple: before asking Claude about a major technical decision (language choice, build-vs-buy, architecture patterns), make sure the thinking-partner skill is active.

Specific tip: with the skill installed, frame your questions more openly. Instead of "Should we use Python or TypeScript?" try "We're building [describe project] with [constraints]. Help me think through the key technical decisions." The skill's orientation detection works better when you're not already locked into a binary choice.

Another practical application: use it for code review planning. Ask "What are the highest-risk areas in this codebase we should focus our review efforts on?" The skill will apply models like Pre-Mortem and Constraint Analysis to identify where your team's time is best spent, rather than just validating whatever approach you suggest.
Original source: reddit.com
