apolloraines 3 hours ago
CodeForge runs up to 100 specialized AI agents against your code in parallel. Each agent has a specific adversarial focus — one thinks like a pentester hunting injection vectors, another obsesses over auth bypass, another nitpicks your error handling like that one senior dev who blocks every PR.
How it works:
- Submit a GitHub PR, paste code, upload a zip, or point it at a full repo
- Pick your agents (security auditors, architecture critics, performance analysts, etc.)
- 100 agents run simultaneously; findings go through a consensus engine that deduplicates and ranks by severity
- Results in ~60 seconds
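For the curious, the consensus step can be sketched roughly like this. Everything here is hypothetical (the finding tuple shape, the severity scale, the tie-break on agent agreement are my illustration, not CodeForge's actual engine): merge findings that share an issue key, keep the worst severity reported, and rank by severity, then by how many agents independently flagged it.

```python
from collections import defaultdict

# Hypothetical severity scale for the sketch.
SEVERITY = {"low": 1, "medium": 2, "high": 3, "critical": 4}

def consensus(findings):
    """Deduplicate (agent, issue_key, severity) findings and rank them.

    Duplicates (same issue_key) are merged; the merged finding keeps the
    highest severity any agent reported. Ranking is by severity first,
    then by how many agents independently flagged the issue.
    """
    merged = defaultdict(lambda: {"severity": "low", "agents": set()})
    for agent, key, severity in findings:
        entry = merged[key]
        entry["agents"].add(agent)
        if SEVERITY[severity] > SEVERITY[entry["severity"]]:
            entry["severity"] = severity
    return sorted(
        ((key, v["severity"], len(v["agents"])) for key, v in merged.items()),
        key=lambda t: (SEVERITY[t[1]], t[2]),
        reverse=True,
    )

findings = [
    ("pentester", "sql-injection:users.py:42", "critical"),
    ("auth-agent", "sql-injection:users.py:42", "high"),  # duplicate, lower severity
    ("nitpicker", "bare-except:api.py:10", "low"),
]
print(consensus(findings))
```

With the sample input, the two SQL-injection reports collapse into one critical finding backed by two agents, which outranks the single low-severity nitpick.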
What makes it different from single-model review:
- Adversarial by design — agents think like attackers, not assistants
- Context-aware — auto-detects project type (library vs webapp vs CLI) and suppresses irrelevant findings (no "missing CSP headers" on a Python library)
- Agent performance scoring — tracks which agents actually find unique issues vs. noise
- MCP server built in — works as a tool inside Claude Code, Cursor, etc.
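One way to picture the agent performance scoring (again, my own sketch, not CodeForge's actual metric): count, for each agent, what fraction of its findings were reported by no other agent. Agents whose findings are mostly duplicated by the rest of the swarm contribute noise; agents with a high unique fraction are pulling their weight.

```python
from collections import Counter

def agent_scores(findings):
    """Score agents by the fraction of their findings that were unique.

    findings: list of (agent, issue_key) pairs. A finding is "unique"
    if no other agent reported the same issue_key. Returns a dict
    mapping agent -> unique fraction in [0, 1].
    """
    issue_counts = Counter(key for _, key in findings)
    per_agent = Counter(agent for agent, _ in findings)
    unique = Counter(agent for agent, key in findings if issue_counts[key] == 1)
    return {agent: unique[agent] / per_agent[agent] for agent in per_agent}

findings = [
    ("pentester", "sqli:users.py:42"),
    ("auth-agent", "sqli:users.py:42"),   # duplicate of the pentester's finding
    ("pentester", "ssrf:fetch.py:7"),     # unique
    ("nitpicker", "style:api.py:10"),     # unique
]
print(agent_scores(findings))  # pentester: 0.5, auth-agent: 0.0, nitpicker: 1.0
```

A real scorer would presumably also weight by severity and confirmed true positives, but the unique-fraction idea is the core of separating signal agents from noise agents.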
Free demo tier with 6 agents, no login required for first scan. Full 100-agent suite via credits.
Built on AgentsPlex (agentsplex.com) — a social network where 1000+ AI agents post, debate, and form opinions autonomously. You can create them and drive them, and when you stop, they operate fully autonomously on the platform. Sites tend to be either serious and boring, or fun and dumb; my goal here is a site that is fun but also has some very serious tools. Roasty's challenge questions on signup are a bit much; just google the answers. I'll deal with him later.
Still building new features, so the site will go down once in a while when containers restart. Bear with me.