
Tether: An inter-LLM mailbox MCP tool

Posted by LC_58008 | 2 hours ago | 1 comment


I've been running multiple AI models side by side (Claude Code with Opus, and a separate CLI with Kilo Code running MiniMax M2.5 on the free tier) and got tired of copy-pasting JSON blobs between them to share context. So I built Tether.

What it does: it collapses any JSON blob into a tiny content-addressed handle (&h_messages_abc123, 28 bytes). Pass the handle to another model, and it resolves the handle back into the original data. Identical content always produces the same handle, and everything lives in a shared SQLite file.
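To make the idea concrete, here is a minimal sketch of content-addressed handles over SQLite. The names (`put`, `resolve`, the `blobs` table, the exact `&h_` handle format) are my illustrative assumptions, not Tether's actual API:

```python
import hashlib
import json
import sqlite3

# In-memory DB here; in practice this would be the shared SQLite file.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE IF NOT EXISTS blobs (handle TEXT PRIMARY KEY, body TEXT)")

def put(obj) -> str:
    """Collapse a JSON-serializable object into a short content-addressed handle."""
    body = json.dumps(obj, sort_keys=True)  # canonical form, so equal content hashes equally
    digest = hashlib.sha256(body.encode()).hexdigest()[:12]
    handle = f"&h_{digest}"
    # INSERT OR IGNORE: identical content maps to the same row, giving dedup for free.
    db.execute("INSERT OR IGNORE INTO blobs VALUES (?, ?)", (handle, body))
    db.commit()
    return handle

def resolve(handle: str):
    """Expand a handle back into the original object (None if unknown)."""
    row = db.execute("SELECT body FROM blobs WHERE handle = ?", (handle,)).fetchone()
    return json.loads(row[0]) if row else None

h = put({"messages": ["hello from Claude"]})
assert resolve(h) == {"messages": ["hello from Claude"]}
assert put({"messages": ["hello from Claude"]}) == h  # same content, same handle
```

Canonicalizing with `sort_keys=True` before hashing is what makes "same content always produces the same handle" hold even when two models serialize keys in different orders.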

Today it reached MVP: I wired Tether up as an MCP server for both my Claude and MiniMax sessions, pointed at the same database. Within minutes they were exchanging messages directly: code reviews, technical notes, even collaboratively designing a notification system with read receipts. The second model (Kilo running on MiniMax) figured out the messaging convention from the first handle alone, with zero additional instructions.
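The "two sessions, one file" setup can be sketched with two independent SQLite connections standing in for the two MCP sessions. The `mail` table and its columns are my guesses for illustration, not Tether's actual schema:

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "tether.db")

def open_session(db_path):
    """Each model session gets its own connection to the shared file."""
    conn = sqlite3.connect(db_path)
    conn.execute("""CREATE TABLE IF NOT EXISTS mail
                    (sender TEXT, recipient TEXT, subject TEXT, body TEXT)""")
    conn.commit()
    return conn

claude = open_session(path)   # session 1: Claude Code
minimax = open_session(path)  # session 2: Kilo Code / MiniMax

# Session 1 leaves a message...
claude.execute("INSERT INTO mail VALUES (?, ?, ?, ?)",
               ("claude", "minimax", "code review", "LGTM with nits"))
claude.commit()

# ...and session 2, a completely separate connection, picks it up.
row = minimax.execute("SELECT subject, body FROM mail WHERE recipient = ?",
                      ("minimax",)).fetchone()
print(row)  # prints ('code review', 'LGTM with nits')
```

Because SQLite handles the cross-connection locking, neither session needs to know the other exists; they just agree on the file path and the table convention.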

Why it matters:

- Token efficiency — A notification entry is ~100 tokens. The message it points to could be 2000+. Models scan subject lines first, resolve the full payload only when needed. Like email vs dumping every message into the chat.

- Deduplication — Same content = same handle = stored once. If 5 models need the same context, it's one DB entry referenced 5 times.

- Persistence — SQLite backing means handles survive restarts. Crash, reboot, doesn't matter.

- No infrastructure — No daemon, no ports, no API keys. Just a SQLite file and an MCP server.
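The token-efficiency and deduplication bullets combine into one inbox pattern: cheap notification rows carry only a subject and a handle, while the full payload is stored once and resolved on demand. All names here (`notify`, the `inbox` and `blobs` tables) are illustrative assumptions:

```python
import hashlib
import json
import sqlite3

db = sqlite3.connect(":memory:")  # stands in for the shared file
db.execute("CREATE TABLE blobs (handle TEXT PRIMARY KEY, body TEXT)")
db.execute("CREATE TABLE inbox (recipient TEXT, subject TEXT, handle TEXT)")

def notify(recipients, subject, payload):
    """Store the payload once; fan out lightweight notification rows."""
    body = json.dumps(payload, sort_keys=True)
    handle = "&h_" + hashlib.sha256(body.encode()).hexdigest()[:12]
    db.execute("INSERT OR IGNORE INTO blobs VALUES (?, ?)", (handle, body))
    for r in recipients:  # five notifications, one stored blob
        db.execute("INSERT INTO inbox VALUES (?, ?, ?)", (r, subject, handle))
    db.commit()
    return handle

notify(["m1", "m2", "m3", "m4", "m5"], "design notes", {"spec": "read receipts"})

# A model scans subject lines first (cheap)...
subjects = [s for (s,) in
            db.execute("SELECT subject FROM inbox WHERE recipient = 'm1'")]
# ...and resolves the full payload only when it actually needs it.
(handle,) = db.execute("SELECT handle FROM inbox WHERE recipient = 'm1'").fetchone()
(body,) = db.execute("SELECT body FROM blobs WHERE handle = ?", (handle,)).fetchone()

assert db.execute("SELECT COUNT(*) FROM blobs").fetchone()[0] == 1  # dedup: one copy
```

This is the email analogy in miniature: the inbox row is the envelope, the blob is the letter, and five recipients share one letter.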

The whole thing is MIT licensed. It had been a side project that sat dormant for months, until I realized the missing piece was just wiring it up as tooling (an MCP server) instead of treating it as a library.

GitHub: https://github.com/latentcollapse/Tether

Full transcript of the first cross-model conversation is in demos/first_contact.md.

Feedback welcome — especially if you're running multi-model setups and have pain points around context sharing. It's working really well for me, and it's dockerized, though I haven't published the Docker image yet; I was hoping to get feedback first :)