fahd09 3 hours ago
It's a local HTTP proxy + real-time dashboard. Point your AI agent at it with one env var and you see everything: requests, SSE streams, tool definitions, rate limits.
npm install -g watchtower-ai && watchtower-ai
And then go to your project and run your favorite CLI tool with the base URL set to the proxy.
Claude Code: ANTHROPIC_BASE_URL=http://localhost:8024 claude
Codex CLI: OPENAI_BASE_URL=http://localhost:8024 codex
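For anyone curious what's happening under the hood: conceptually it's just a reverse proxy that logs each request before forwarding it upstream. A minimal sketch in Python (the upstream host and log format here are hypothetical, not watchtower-ai's actual internals, and the real tool streams SSE chunks and renders a dashboard on top):

```python
import http.client
from http.server import BaseHTTPRequestHandler, HTTPServer

UPSTREAM = "api.anthropic.com"  # hypothetical upstream; the real tool routes per-provider

def summarize(method, path, body):
    """Build a one-line log entry, standing in for the dashboard view."""
    size = len(body) if body else 0
    return f"{method} {path} ({size} bytes)"

class ProxyHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length)
        print(summarize("POST", self.path, body))  # inspect before forwarding
        conn = http.client.HTTPSConnection(UPSTREAM)
        conn.request("POST", self.path, body, dict(self.headers))
        resp = conn.getresponse()
        self.send_response(resp.status)
        for k, v in resp.getheaders():
            self.send_header(k, v)
        self.end_headers()
        self.wfile.write(resp.read())  # a real proxy would stream SSE chunk by chunk

# To run: HTTPServer(("localhost", 8024), ProxyHandler).serve_forever()
```

Since the CLI only sees a base URL, it never knows a proxy is in the middle.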
Some things I found interesting while building this: Claude Code sends 2-3 API calls per user message (quota check, token count, then the actual stream). It spawns subagents with completely different system prompts and smaller tool sets. The system prompt alone is 20k+ tokens.
This can be super useful if you also want to see the reasoning traces behind the scenes. It's genuinely rich information and should help you build a better agent harness.
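If you capture the raw SSE stream, pulling out just the reasoning text is a small parsing job. A sketch assuming Anthropic-style streaming events (the `thinking_delta` field name matches their extended-thinking format; other providers use different event shapes):

```python
import json

def extract_thinking(sse_text):
    """Collect thinking-delta text from a captured SSE stream.

    Assumes Anthropic-style 'data: {...}' lines; field names
    will differ for other providers."""
    parts = []
    for line in sse_text.splitlines():
        if not line.startswith("data: "):
            continue
        try:
            event = json.loads(line[len("data: "):])
        except json.JSONDecodeError:
            continue  # skip keep-alives and non-JSON payloads
        delta = event.get("delta", {})
        if delta.get("type") == "thinking_delta":
            parts.append(delta.get("thinking", ""))
    return "".join(parts)

sample = (
    'event: content_block_delta\n'
    'data: {"type": "content_block_delta", "delta": '
    '{"type": "thinking_delta", "thinking": "Check the schema first."}}\n'
)
print(extract_thinking(sample))  # → Check the schema first.
```

Grepping these deltas out of the proxy log is often enough to see why an agent chose a particular tool call.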