Autodock lets you spawn, connect to, and manage remote development servers by talking to your
favorite AI agent over MCP.
Mike’s tip: When I use Autodock at Rightware, I start a server using
MCP and leave it up during the day so that folks can test out features end-to-end as I hack on
them in real time.
Quickstart
Add the Autodock MCP server to your favorite AI tool:
Claude Code
claude mcp add --transport http autodock https://autodock.io/api/mcp/streamable-http
Cursor
Click to install directly: → Add to Cursor
Or add manually to your Cursor MCP settings:
{
"mcpServers": {
"autodock": {
"url": "https://autodock.io/api/mcp/streamable-http"
}
}
}
VS Code
→ Add to VS Code
Or add manually to your VS Code MCP settings:
{
"mcpServers": {
"autodock": {
"type": "http",
"url": "https://autodock.io/api/mcp/streamable-http"
}
}
}
Codex
codex mcp add autodock --url https://autodock.io/api/mcp/streamable-http
Other
Add this URL as an HTTP MCP server in your AI tool:
https://autodock.io/api/mcp/streamable-http
Real Sessions
You can use these as inspiration for your own Autodock sessions.
Deploy Argo Workflows with k3s
One prompt to deploy a full Kubernetes cluster with Argo Workflows—no Helm required. Claude reads
AUTODOCK.md instructions and handles all the troubleshooting automatically.
Spin up k3s and Argo Workflows from scratch. The agent installs k3s, deploys Argo, patches for HTTP
access, and exposes the UI—all in one shot.
▐▛███▜▌ Claude Code v2.0.64
▝▜█████▛▘ Opus 4.5 · Claude Max
▘▘ ▝▝ ~/Documents/GitHub/myproject
> Start a new autodock box called argo-test and get Argo up using AUTODOCK.md
● I'll start by reading the AUTODOCK.md file to understand the setup instructions,
then launch a new environment.
● Read(AUTODOCK.md)
⎿ Read 133 lines
● autodock - env.launch (MCP)(name: "argo-test", autoStopMinutes: 60)
⎿ Your Autodock box is ready...
… +63 lines
● Bash(mkdir -p ~/.autodock/ssh && chmod 700 ~/.autodock/ssh && curl -sf
"https://autodock.io/s/abc123xyz" > ~/.autodock/ssh/happy-panda.pem
&& chmod 600 ~/.autodock/ssh/happy-panda.pem)
⎿ (No content)
● Bash(ssh -t -i ~/.autodock/ssh/happy-panda.pem -p 22 -o
StrictHostKeyChecking=accept-new root@happy-panda.autodock.io
"curl -sfL https://get.k3s.io | sh -")
⎿ [INFO] Finding release for channel stable
[INFO] Using v1.33.6+k3s1 as release
[INFO] systemd: Starting k3s
… +23 lines
● k3s installed. Verifying the node is ready:
● Bash(ssh -i ~/.autodock/ssh/happy-panda.pem root@happy-panda.autodock.io
"kubectl get nodes")
⎿ NAME STATUS ROLES AGE VERSION
ip-10-1-1-51 Ready control-plane,master 21s v1.33.6+k3s1
● Installing Argo Workflows:
● Bash(ssh -i ~/.autodock/ssh/happy-panda.pem root@happy-panda.autodock.io
"kubectl create namespace argo && kubectl apply -n argo -f
https://github.com/argoproj/argo-workflows/releases/download/v3.5.8/install.yaml")
⎿ namespace/argo created
customresourcedefinition.apiextensions.k8s.io/clusterworkflowtemplates.argoproj.io created
serviceaccount/argo created
… +28 lines
● Patching argo-server for HTTP and server auth mode:
● Bash(ssh -i ~/.autodock/ssh/happy-panda.pem root@happy-panda.autodock.io
"kubectl -n argo patch deployment argo-server --type='json' -p='[
{\"op\": \"replace\", \"path\": \"/spec/template/spec/containers/0/args\",
\"value\": [\"server\", \"--secure=false\", \"--auth-mode=server\"]},
{\"op\": \"replace\", \"path\": \"/spec/template/spec/containers/0/readinessProbe/httpGet/scheme\",
\"value\": \"HTTP\"}
]'")
⎿ deployment.apps/argo-server patched
● Bash(ssh -i ~/.autodock/ssh/happy-panda.pem root@happy-panda.autodock.io
"kubectl -n argo rollout status deployment/argo-server --timeout=60s")
⎿ deployment "argo-server" successfully rolled out
● Starting port-forward:
● Bash(ssh -i ~/.autodock/ssh/happy-panda.pem root@happy-panda.autodock.io
"nohup kubectl -n argo port-forward --address 0.0.0.0 svc/argo-server 2746:2746
> /workspace/argo-ui-portforward.log 2>&1 &")
⎿ (No content)
● autodock - env.expose (MCP)(environmentId: "edea2add-7481-4594-84e5-b391ca76fc83",
port: 2746, name: "argo-ui")
⎿ Exposed port 2746 at https://2746--happy-panda.autodock.io
● Argo Workflows is live:
https://2746--happy-panda.autodock.io/workflows?namespace=argo
k3s cluster running with Argo Workflows deployed. No Helm needed!
Post a PDF
Get a PDF on a server quickly—great for sharing documents or letting AI agents fill out forms.
A tax form needs to be shared. One prompt, and it’s live on the internet.
▐▛███▜▌ Claude Code v2.0.61
▝▜█████▛▘ Opus 4.5 · Claude Max
▘▘ ▝▝ ~/Documents/GitHub/autodock
> can you spin up an autodock box, copy only /home/mikesol/Downloads/takaajantiedot.pdf
into it, and serve it in the simplest way possible so that folks can fill it out?
● autodock - env.launch (MCP)(name: "pdf-form", size: "small")
⎿ Your Autodock box is ready. Please offer to execute these steps for the user:
1) Download and save the SSH key (link expires in 10 minutes):
mkdir -p ~/.autodock/ssh && chmod 700 ~/.autodock/ssh
… +45 lines
● Bash(mkdir -p ~/.autodock/ssh && chmod 700 ~/.autodock/ssh && curl -sf
"https://autodock.io/s/u3C1bHuSAStn" > ~/.autodock/ssh/invisible-chinchilla.pem
&& chmod 600 ~…)
⎿ (No content)
● Bash(scp -o StrictHostKeyChecking=no -i ~/.autodock/ssh/invisible-chinchilla.pem
"/home/mikesol/Downloads/takaajantiedot.pdf"
root@invisible-chinchilla.autodock.io:/…)
⎿ Warning: Permanently added 'invisible-chinchilla.autodock.io' (ED25519)
to the list of known hosts.
● Bash(ssh -o StrictHostKeyChecking=no -i ~/.autodock/ssh/invisible-chinchilla.pem
root@invisible-chinchilla.autodock.io
"cd /workspace && python3 -m http.server 8080…)
⎿ Running in the background
● autodock - env.expose (MCP)(environmentId: "4c11aa3d-be3a-4905-b0cf-e6a7219a0646",
port: 8080, name: "pdf")
⎿ Exposed port 8080 at https://8080--invisible-chinchilla.autodock.io
(expires 2025-12-08T17:17:31.894Z).
● The PDF is now being served. You can access it at:
https://8080--invisible-chinchilla.autodock.io/takaajantiedot.pdf
This is a simple Python HTTP server serving the file. People can download
and fill out the PDF form. Note: the exposed URL expires in about 15 minutes -
let me know if you need it extended.
Resume and Sync a Full-Stack App
Picking up where you left off—restart a stopped environment, sync code, run migrations, and expose
both frontend and backend.
▐▛███▜▌ Claude Code v2.0.61
▝▜█████▛▘ Opus 4.5 · Claude Max
▘▘ ▝▝ ~/Documents/GitHub/cronrhino
> i just wired up autodock mcp. now can you check if there's a stopped cronrhino-dev env?
● autodock - env.list (MCP)(status: "stopped")
⎿ • pdf-form (stopped) — id: 4c11aa3d-be3a-4905-b0cf-e6a7219a0646
• argo-dev (stopped) — id: 7c2c654c-58dc-4b50-94e7-499267cd1b27
• cronrhino-marketing (stopped) — id: b82b4f93-73c1-4c73-9081-c6f898827ac0
… +26 lines
● Found it! Let me restart the cronrhino-dev environment:
● autodock - env.restart (MCP)(environmentId: "3f112ab1-c22d-4e64-b10f-3d642f1f2b3f")
⎿ Environment cronrhino-dev restarted.
Public IP: 34.251.183.212
… +33 lines
● cronrhino-dev is now running:
- Status: ready
- Public IP: 34.251.183.212
- URL: https://impossible-dinosaur.autodock.io
- Auto-stop: 60 minutes
Let me know when you're ready to test the CSV import via Django admin. The CSV is
at ~/Downloads/v3/full_library.csv and all 140 images are uploaded to S3.
> Great! Can you sync the backend to it (workspace/cronrhino_backend). Check
../AUTODOCK.md for info on how to start. Then expose ports and i'll do a smoke test.
● Read(~/Documents/GitHub/cronrhino/AUTODOCK.md)
⎿ Read 104 lines
● Bash(cd /home/mikesol/Documents/GitHub/cronrhino && rsync -avz --delete
--exclude='node_modules' --exclude='.git' --exclude='__pycache__'
--exclude='.venv' --exclude='.…)
⎿ sending incremental file list
./
uv.lock
… +29 lines
● Bash(rsync -avz -e "ssh -i ~/.autodock/ssh/impossible-dinosaur.pem -p 22"
/home/mikesol/Documents/GitHub/cronrhino/cronrhino_backend/.env
root@impossible-dinosaur.autodock…)
⎿ sending incremental file list
sent 62 bytes received 12 bytes 29.60 bytes/sec
total size is 4,514 speedup is 61.00
● Bash(ssh -i ~/.autodock/ssh/impossible-dinosaur.pem
root@impossible-dinosaur.autodock.io "cd /workspace/cronrhino_backend &&
docker compose -f docker-compose.local.yaml…)
⎿ Container cronrhino_redis Running
Container cronrhino_postgres Running
Container cronrhino_backend-app-1 Running
● Backend is running. Let me run migrations and then expose ports:
● Bash(ssh -i ~/.autodock/ssh/impossible-dinosaur.pem
root@impossible-dinosaur.autodock.io "cd /workspace/cronrhino_backend &&
make migrate" 2>&1 | tail -30)
⎿ Operations to perform:
Apply all migrations: admin, auth, authtoken, billing, contenttypes,
debug_toolbar, django_ses, filters, garments, images, integrations,
mannequins, organiz…
… +7 lines
● autodock - env.expose (MCP)(environmentId: "3f112ab1-c22d-4e64-b10f-3d642f1f2b3f",
port: 8000, name: "backend", ttlSeconds: 3600)
⎿ Exposed port 8000 at https://8000--impossible-dinosaur.autodock.io
(expires 2025-12-09T15:00:57.454Z).
● autodock - env.expose (MCP)(environmentId: "3f112ab1-c22d-4e64-b10f-3d642f1f2b3f",
port: 5173, name: "frontend", ttlSeconds: 3600)
⎿ Exposed port 5173 at https://5173--impossible-dinosaur.autodock.io
(expires 2025-12-09T15:01:31.784Z).
● All set:
| Service | URL |
|--------------|------------------------------------------------------|
| Backend | https://8000--impossible-dinosaur.autodock.io |
| Frontend | https://5173--impossible-dinosaur.autodock.io |
| Django Admin | https://8000--impossible-dinosaur.autodock.io/admin/ |
Port exposures expire in ~1 hour. Backend is running with migrations applied.
Ready for your smoke test!
Environment Lifecycle
| Tool | Description |
|---|---|
| env.launch | Launch a new environment. Returns SSH credentials and URLs. |
| env.list | List all your environments with their status. |
| env.status | Get detailed status for a specific environment. |
| env.stop | Stop a running environment (can be resumed later). |
| env.destroy | Permanently destroy an environment. |
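In the sessions above, the env.launch response walks the agent through saving a short-lived SSH key before anything else happens on the box. A minimal sketch of that step; the slug and key link are placeholders from the example transcripts, not real values:

```bash
# Save the one-time SSH key returned by env.launch (the download link expires quickly).
mkdir -p ~/.autodock/ssh && chmod 700 ~/.autodock/ssh
curl -sf "https://autodock.io/s/<key-id>" > ~/.autodock/ssh/<slug>.pem
chmod 600 ~/.autodock/ssh/<slug>.pem

# Connect to the new environment as root.
ssh -i ~/.autodock/ssh/<slug>.pem -o StrictHostKeyChecking=accept-new root@<slug>.autodock.io
```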
File & Command Operations
| Tool | Description |
|---|---|
| env.sync | Get SSH credentials for syncing files (rsync/scp). |
| env.run | Get SSH command to execute commands remotely. |
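Once the credentials are saved, syncing and remote execution are plain rsync and ssh, as in the full-stack session above. A sketch with a placeholder slug and paths (the excludes and the remote command are illustrative, not required):

```bash
# Push a project to /workspace on the box, skipping local build artifacts.
rsync -avz --delete \
  --exclude='node_modules' --exclude='.git' --exclude='__pycache__' \
  -e "ssh -i ~/.autodock/ssh/<slug>.pem -p 22" \
  ./my-project/ root@<slug>.autodock.io:/workspace/my-project/

# Run a one-off command on the box over the same key.
ssh -i ~/.autodock/ssh/<slug>.pem root@<slug>.autodock.io \
  "cd /workspace/my-project && make test"
```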
Port Exposure
| Tool | Description |
|---|---|
| env.expose | Expose a port with a public HTTPS URL. |
| env.unexpose | Remove an exposed port. |
| env.listExposed | List all exposed ports for an environment. |
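The pattern in the sessions above is to start something listening on a port inside the environment, then have the agent call env.expose for that port. Exposed URLs follow the https://<port>--<slug>.autodock.io pattern and expire unless extended. A sketch with a placeholder slug:

```bash
# Start a throwaway HTTP server on port 8080 inside the box.
ssh -i ~/.autodock/ssh/<slug>.pem root@<slug>.autodock.io \
  "cd /workspace && nohup python3 -m http.server 8080 > /workspace/http.log 2>&1 &"

# Then ask your agent to call env.expose with port 8080; the public URL will look like
# https://8080--<slug>.autodock.io
```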
Reserved Instances
Reserved instances run 24/7 without auto-stopping and don’t count against your concurrent
environment limit. A reserved instance costs $149 and lasts for 30 days. Requires a paid plan
(Starter or Pro).
| Tool | Description |
|---|---|
| env.reserve | Preview reservation cost for an environment. Shows price, duration, and current status. |
| env.reserve_confirm | Confirm and pay for reservation. Charges your card and reserves for 30 days. |
Inbound Email
Receive emails at *@{slug}.autodock.io for testing auth flows, notifications, and more.
| Tool | Description |
|---|---|
| env.email | Get your environment’s email domain and webhook setup. |
GitHub Actions Runner
Autodock environments include a pre-installed GitHub Actions runner at /opt/actions-runner.
| Tool | Description |
|---|---|
| env.gh_runner | Get commands to configure and start a GitHub Actions runner. |
The runner is configured in ephemeral mode: it processes one job and then exits. The labels
autodock and your environment slug are added automatically.
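env.gh_runner returns the exact, environment-specific commands; the sketch below only illustrates the shape of an ephemeral registration using the standard actions-runner CLI. The repository URL, registration token, and slug are placeholders you supply:

```bash
cd /opt/actions-runner

# Register as an ephemeral runner with the autodock and slug labels.
./config.sh --url https://github.com/<owner>/<repo> \
  --token <registration-token> \
  --ephemeral \
  --labels autodock,<slug> \
  --unattended

# Pick up one job, then exit (ephemeral mode).
./run.sh
```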
Account & API Keys
| Tool | Description |
|---|---|
| account.info | Get your plan, usage, quota status, and upgrade URL. Includes hours remaining. |
| account.create_api_key | Create an API key for headless MCP authentication. |
| account.list_api_keys | List your API keys (shows prefix only, not full key). |
| account.revoke_api_key | Revoke an API key by ID. |
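For headless MCP authentication, one plausible flow is to create a key with account.create_api_key and pass it when registering the MCP server, for example via Claude Code's --header option. The Authorization/Bearer scheme shown here is an assumption, not something this page specifies; check the account.create_api_key response for the exact header Autodock expects:

```bash
# Assumption: the API key is sent as a bearer token in the Authorization header.
claude mcp add --transport http autodock https://autodock.io/api/mcp/streamable-http \
  --header "Authorization: Bearer $AUTODOCK_API_KEY"
```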