Show HN: Skill that lets Claude Code/Codex spin up VMs and GPUs
I've been working on CloudRouter, a skill + CLI that gives coding agents like Claude Code and Codex the ability to start cloud VMs and GPUs.

When an agent writes code, it usually needs to start a dev server, run tests, and open a browser to verify its work. Today that all happens on your local machine. That works fine for a single task, but the agent is sharing your computer: your ports, your RAM, your screen. If you run multiple agents in parallel, it gets chaotic. Docker helps with isolation, but it still uses your machine's resources, and it doesn't give the agent a browser, a desktop, or a GPU to close the loop properly. The agent could handle all of this on its own if it had a primitive for starting VMs.

CloudRouter is that primitive — a skill that gives the agent its own machines. The agent can start a VM from your local project directory, upload the project files, run commands on the VM, and tear it down when it's done. If it needs a GPU, it can request one.

  cloudrouter start ./my-project
  cloudrouter start --gpu B200 ./my-project
  cloudrouter ssh cr_abc123 "npm install && npm run dev"
Every VM comes with a VNC desktop, VS Code, and Jupyter Lab, all behind auth-protected URLs. When the agent is doing browser automation on the VM, you can open the VNC URL and watch it in real time. CloudRouter wraps agent-browser [1] for browser automation.

  cloudrouter browser open cr_abc123 "http://localhost:3000"
  cloudrouter browser snapshot -i cr_abc123
  # → @e1 [link] Home  @e2 [link] Settings  @e3 [button] Sign Out
  cloudrouter browser click cr_abc123 @e2
  cloudrouter browser screenshot cr_abc123 result.png
Here's a short demo: https://youtu.be/SCkkzxKBcPE

What surprised me is how this inverted my workflow. Most cloud dev tooling starts in the cloud (background agents, remote SSH, etc.) and pulls work back to your local machine for testing. CloudRouter does the opposite: your agents stay local and push their work to the cloud. The agent does the same things it would do locally — running dev servers, operating browsers — but now on a VM. Once I stopped watching agents work and worrying about local constraints, I started running more tasks in parallel.

The GPU side is the part I'm most curious to see develop. Today if you want a coding agent to help with anything involving training or inference, there's a manual step where you go provision a machine. With CloudRouter the agent can just spin up a GPU sandbox, run the workload, and clean it up when it's done. Some of my friends have been using it to have agents run small experiments in parallel, but my ears are open to other use cases.

Would love your feedback and ideas. CloudRouter lives under packages/cloudrouter of our monorepo https://github.com/manaflow-ai/manaflow.

[1] https://github.com/vercel-labs/agent-browser

Ah, just one step closer to a model that can bootstrap and run itself from its own weights file.
btown:
We're already there! https://huggingface.co/docs/hub/en/agents-skills

https://github.com/huggingface/skills/blob/main/skills/huggi...

> Frontmatter: This skill should be used when users want to train or fine-tune language models using TRL (Transformer Reinforcement Learning) on Hugging Face Jobs infrastructure. Covers SFT, DPO, GRPO and reward modeling training methods, plus GGUF conversion for local deployment. Includes guidance on the TRL Jobs package, UV scripts with PEP 723 format, dataset preparation and validation, hardware selection, cost estimation, Trackio monitoring, Hub authentication, and model persistence. Should be invoked for tasks involving cloud GPU training, GGUF conversion, or when users mention training on Hugging Face Jobs without local GPU setup.
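For context, a "UV script with PEP 723 format" is just a Python file whose dependencies live in a specially fenced comment block that the runner reads before executing. A minimal sketch of the format (the file name and dependency list here are illustrative, not from the skill), plus extracting the metadata with sed:

```shell
# Write a minimal PEP 723-style script; a runner like uv reads the
# '# /// script' comment block and installs the listed dependencies
# before executing the file.
cat > job.py <<'EOF'
# /// script
# requires-python = ">=3.10"
# dependencies = ["trl", "datasets"]
# ///
print("training body goes here")
EOF

# Extract the TOML metadata between the comment fences.
sed -n '/^# \/\/\/ script$/,/^# \/\/\/$/p' job.py | sed '1d;$d;s/^# //'
# → requires-python = ">=3.10"
#   dependencies = ["trl", "datasets"]
```

Because the metadata is a comment, the same file still runs as a plain Python script anywhere.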

This is cool! I tried it out, running outside my agent with `cloudrouter start .` and got a password request to auth into the server. Opened an issue[1].

[1] https://github.com/manaflow-ai/manaflow/issues/1711

Hey Nick! I figured out the root cause and have pushed a fix. Could you update the package and try again?
Great demo!
It's a cool idea, but personally I don't like the implementation. I usually don't use monolithic tools that cram a lot of different solutions into one thing. For one, especially if they're compiled, it's very hard to modify them to do one extra thing I need without getting into a long development cycle. For another, they're usually inflexible, restricting what I can do. Third, they often aren't very composable. Fourth, they often aren't easily pluggable or extensible.

I much prefer independent, loosely coupled, highly cohesive, composable, extensible tools. It's not a very "programmery" solution, but it makes it easier as a user to fix things, extend things, combine things, etc.

The Docker template you have bundles a ton of apps into one container. This is problematic: it creates a big support burden, build burden, and compatibility burden. Docker works better when you make individual containers for a single app each, run them separately, and connect them with TCP, sockets, or volumes. Then the user can swap them out, add new ones, remove unneeded ones, etc., and they can use official upstream projects. Docker-in-Docker with a custom Docker network works pretty well, and the host is still accessible if needed.
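As a sketch of that shape (service names, images, and ports here are illustrative, not what CloudRouter actually ships), a compose file with one upstream image per concern, joined by the default compose network, might look like:

```
# Illustrative only: one app per container, swappable independently.
services:
  dev:
    image: node:22                 # project dev server
    volumes:
      - ./my-project:/work
    working_dir: /work
    command: sh -c "npm install && npm run dev"
  jupyter:
    image: jupyter/base-notebook   # upstream Jupyter image
    ports:
      - "8888:8888"
  code:
    image: codercom/code-server    # VS Code in the browser
    ports:
      - "8080:8080"
    volumes:
      - ./my-project:/home/coder/project
```

Each service can then be updated, replaced, or removed without rebuilding one large custom image.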

As a nit-pick: your auth code has browser-handling logic. This is low cohesion, a sign of problems to come. And in your rsync code:

   sshCmd := fmt.Sprintf("ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o ProxyCommand=%q", proxyCmd)
I was just commenting the other day on here about how nobody checks SSH host keys and how SSH is basically wide-open due to this. Just leaving this here to show people what I mean. (It's not an easy problem to solve, but ignoring security isn't great either)
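For anyone wondering what the alternative looks like: instead of pointing known hosts at /dev/null, you can pin keys in a dedicated known_hosts file seeded once over a trusted channel (e.g. with ssh-keyscan). A client-side sketch, with the `cr_*` host pattern and file path as assumptions, not CloudRouter's actual config:

```
# ~/.ssh/config sketch: verify pinned host keys instead of disabling checks.
Host cr_*
    UserKnownHostsFile ~/.ssh/cloudrouter_known_hosts
    StrictHostKeyChecking yes
```

It doesn't solve first-contact trust, but it does turn a silent MITM into a hard failure on every later connection.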
Re: monolithic tools. I think having template overrides for the user could solve this issue, although it's a bit tougher to implement. I wanted a monolithic tool because it optimizes for faster startup and "just works," but it does sacrifice configurability for the user.

Re: Docker template. I understand the Docker critique. The primary use case is an agent uploading its working directory and spinning it up as a dev environment. The agent needs the project files, the dev server, and the browser all in one place. If these are separate containers, the agent has to reason about volume mounts, Docker networking, etc. — more potential confusion, higher likelihood that agents get something wrong. A single environment where `cloudrouter start ./my-project` just works is what I envisioned.

Re: SSH host keys. SSH never connects to a real host; it's tunneled through a TLS WebSocket via ProxyCommand. Also, the hostname is fake, there's a per-session auth token on the WebSocket layer, and VMs are ephemeral with fresh keys on every boot. So SSH isn't wide-open. We don't expose the SSH port (port 10000); everything goes through our authenticated proxy.

robbru:
Freaking wow.
What stops just mentioning AWS/Azure/GCP CLI tools to agents?
Totally fair point. For me it was just a nice primitive to have: one command gives the agent a VM with SSH, file sync, a browser, and a GPU ready to go, instead of dealing with cloud account setup, security groups, SSH keys, and other shenanigans. With CloudRouter, the dependencies, Docker, VNC, and Jupyter Lab come pre-baked, so you don't need to think about configuring the VM environment...
You can, but I have to say there's value in a tool that lets the AI do it using fewer tokens.
Nothing
Nice, we built something similar at dstack

We recently also added support for agents: https://skills.sh/dstackai/dstack/dstack

Our approach, though, is more use-case agnostic and heads in the direction of bringing full-fledged container orchestration, covering everything from development to training and inference

Awesome demo!!!
Thanks for such an enjoyable read!