Show HN: PatchworkMCP – Agents report what's missing from your MCP server

Posted by keytonw | 3 hours ago | 1 comment

keytonw 3 hours ago

I build MCP servers for a living and kept guessing at what tools to add next. So I tried something: what if the agents told me?

PatchworkMCP is a feedback tool you drop into any MCP server (Python, TypeScript, Go, Rust — one file each). When an agent hits a wall — missing tool, incomplete data, wrong format — it calls the feedback tool with structured details about what it needed, what it tried, and what would have helped.
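To make "structured details" concrete, here's a minimal sketch of what such a feedback tool could look like. The field names (`needed`, `attempted`, `suggestion`) and the function name `report_gap` are my illustration of the shape described above, not PatchworkMCP's actual API; in a real drop-in the function would be registered with the MCP SDK's tool decorator and forward the report to the sidecar over HTTP.

```python
from dataclasses import asdict, dataclass


@dataclass
class GapReport:
    """Structured details an agent sends when it hits a wall."""
    needed: str      # what was missing: a tool, a field, a format
    attempted: str   # what the agent tried with the tools it had
    suggestion: str  # what would have helped


# In a real server this would be decorated with @mcp.tool() and POST the
# report to the sidecar; here it just packages the payload.
def report_gap(needed: str, attempted: str, suggestion: str) -> dict:
    return asdict(GapReport(needed, attempted, suggestion))
```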

The feedback goes to a FastAPI sidecar with a review dashboard. You can browse the gaps, add notes, and click "Draft PR" — it reads your GitHub repo, sends the feedback + code context to an LLM, and opens a draft pull request.
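The capture-then-review flow could be sketched like this. This is just the in-memory core, with hypothetical method names; the real sidecar wraps equivalent logic in FastAPI routes plus the dashboard and the GitHub/LLM "Draft PR" step, which are omitted here.

```python
class FeedbackStore:
    """Illustrative sidecar core: ingest agent reports, annotate, review."""

    def __init__(self) -> None:
        self._reports: list[dict] = []

    def ingest(self, report: dict) -> int:
        """Store an incoming gap report (POST handler would call this)."""
        self._reports.append({**report, "notes": []})
        return len(self._reports) - 1  # report id

    def annotate(self, report_id: int, note: str) -> None:
        """Reviewer adds a note from the dashboard before drafting a PR."""
        self._reports[report_id]["notes"].append(note)

    def pending(self) -> list[dict]:
        """List reports for the review dashboard (GET handler)."""
        return list(self._reports)
```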

The interesting finding: agents actually give surprisingly specific feedback. When I wired this into an AI cost management MCP server, Claude reported a missing `search_costs_by_context` tool, described the exact input schema it wanted (context key-value pairs with AND logic, combined with standard filters, paginated results), and that became a working tool within minutes.
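For a sense of how specific that schema request was, here's a reconstruction of the input shape described above as a TypedDict. Only the context key-value pairs (AND logic), standard filters, and pagination come from the agent's report; the individual filter field names are my guesses.

```python
from typing import TypedDict


class SearchCostsByContextInput(TypedDict, total=False):
    """Illustrative input schema for the requested search_costs_by_context tool."""
    context: dict[str, str]  # key-value pairs, all must match (AND logic)
    start_date: str          # "standard filters" -- hypothetical field names
    end_date: str
    page: int                # paginated results
    page_size: int
```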

The whole sidecar is one Python file. No Docker, no build step. The drop-ins are one file each with zero deps beyond the MCP SDK + an HTTP client.

This is early — right now it's a developer tool for when you're actively building and need fast signal. The longer-term idea is a self-monitoring system: feedback accumulates, gets deduplicated and clustered, and the system proposes changes when confidence is high. But today it's just capture → review → draft PR.

Curious if others building MCP servers have found good ways to figure out what tools are actually needed vs. what you think is needed.