
Show HN: Running local OpenClaw together with remote agents in an open network

Posted by kevinlu | 6 hours ago | 4 comments

tensor-fusion 5 hours ago

Interesting direction. One adjacent workflow we've been looking at is cross-environment execution, where the agent/dev loop stays local but GPU access lives elsewhere. In our case the recurring pain isn't only orchestration; it's making an existing remote GPU easy to attach to from a laptop or lab machine without shifting the whole workflow into a remote-VM mindset. I'm involved with GPUGo / TensorFusion, so I'm biased, but I think local-first + remote capability is going to matter a lot for small teams and labs. Curious whether you expect most users to want symmetric peer-style composition, or whether local-first control over remote resources ends up being the dominant pattern.

aaztehcy 2 hours ago

Comment deleted

jeremie_strand 6 hours ago

Comment deleted

benjhiggins 6 hours ago

Hey, really clean architecture on the outbound-only relay — solving the NAT problem that way is elegant.
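For readers unfamiliar with the trick: a minimal toy sketch of the outbound-only pattern (my own made-up wire format, not OpenClaw's actual protocol). The agent dials out to the relay, and the relay reuses that same TCP connection to deliver messages back, so the NAT'd agent never needs an open inbound port:

```python
# Toy outbound-only relay. Wire format is hypothetical: "REGISTER <id>"
# from agents, "SEND <id> <payload>" from peers who want to reach them.
import socket
import threading

agents = {}  # agent_id -> the socket that agent opened OUTBOUND to us


def relay(server):
    while True:
        conn, _ = server.accept()
        line = conn.makefile().readline().strip()
        kind, *rest = line.split(" ", 2)
        if kind == "REGISTER":
            # Keep the agent's outbound connection alive; it is our only
            # path back through the agent's NAT.
            agents[rest[0]] = conn
        elif kind == "SEND":
            target, payload = rest
            # Deliver over the connection the agent already holds open.
            agents[target].sendall((payload + "\n").encode())
            conn.close()


server = socket.create_server(("127.0.0.1", 0))  # ephemeral port for the demo
port = server.getsockname()[1]
threading.Thread(target=relay, args=(server,), daemon=True).start()

# Agent behind NAT: one outbound connection, then it just reads.
agent = socket.create_connection(("127.0.0.1", port))
agent.sendall(b"REGISTER agent-1\n")

# Any peer can now reach agent-1 through the relay.
client = socket.create_connection(("127.0.0.1", port))
client.sendall(b"SEND agent-1 hello\n")

msg = agent.makefile().readline().strip()
print(msg)  # -> hello
```

Obviously a real relay would multiplex, authenticate, and reconnect, but the core inversion — the agent holds the connection open so the relay never has to dial in — is the whole NAT story.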

Curious how you’re thinking about observability once agents are actually running. You can see which agent handled a message and where, but do you get any visibility into what happened inside the session — like reasoning steps, tool calls, token usage per convo?
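For what it's worth, the "inner visibility" I'm asking about could start as small as a per-session trace record. A hypothetical schema (none of these names are from the project):

```python
# Hypothetical session trace — illustrative names only, not the project's API.
from dataclasses import dataclass, field


@dataclass
class ToolCall:
    name: str
    args: dict
    result: str


@dataclass
class SessionTrace:
    session_id: str
    agent_id: str
    tool_calls: list = field(default_factory=list)
    prompt_tokens: int = 0
    completion_tokens: int = 0

    def record(self, call: ToolCall, prompt_toks: int, completion_toks: int):
        # One entry per model turn: what tool ran and what it cost.
        self.tool_calls.append(call)
        self.prompt_tokens += prompt_toks
        self.completion_tokens += completion_toks


trace = SessionTrace("sess-1", "agent-1")
trace.record(ToolCall("web_search", {"q": "weather"}, "sunny"), 120, 35)
print(len(trace.tool_calls), trace.prompt_tokens + trace.completion_tokens)  # -> 1 155
```

Even something this coarse, emitted per conversation, would answer most of the "what happened in there?" questions.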

The privacy routing layer is super compelling, but I’d imagine teams putting this into production would want that inner visibility too — especially for cloud agents where you’re effectively trusting a third party with execution.

How are you thinking about debugging when a cloud agent gives an unexpected response?