Dev Box

Every Autodock environment comes with a batteries-included dev box - modern tooling, language runtimes, and observability all pre-configured and ready to use.
Mike’s tip: This is the greatest hits of dev tooling - the rig I reach for whenever I hack on any project. Everything’s fine-tuned to be usable and observable via MCP with minimal fuss. No setup, no config files to copy around, just SSH in and start building.

What’s Installed

Language Runtimes (via mise)

Autodock uses mise as the universal version manager. These runtimes are pre-installed and ready to use:
Runtime   | Version | Command
----------|---------|-------------
Python    | 3.11.9  | python
Node.js   | LTS     | node, npm
Go        | 1.22.5  | go
Rust      | stable  | rustc, cargo
Terraform | 1.9.5   | terraform
Need a different version? Use mise:
mise use python@3.12    # Switch Python version
mise use node@20        # Switch Node version

Package Managers

  • pnpm - Fast Node.js package manager
  • uv - Fast Python package manager
  • yarn - Alternative Node.js package manager
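
If you haven't used these before, here is a rough sketch of typical invocations (the package names are placeholders):
pnpm add express                      # add a Node.js dependency with pnpm
yarn add lodash                       # or with yarn
uv venv && uv pip install requests    # create a local .venv and install a Python package with uv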

Modern CLI Tools

Tool   | Replaces | Description
-------|----------|--------------------------------
rg     | grep     | Ripgrep - fast search
fd     | find     | Fast file finder
bat    | cat      | Syntax-highlighted cat
eza    | ls       | Modern ls with git integration
fzf    | -        | Fuzzy finder
jq     | -        | JSON processor
yq     | -        | YAML processor
httpie | curl     | Human-friendly HTTP client
btop   | top      | Resource monitor
procs  | ps       | Modern process viewer
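
A few representative invocations, in case these are new to you (the paths, ports, and patterns below are placeholders):
rg "TODO" src/                  # recursive search that respects .gitignore
fd -e py                        # find every .py file under the current directory
bat README.md                   # cat with syntax highlighting
eza -la --git                   # ls with permissions and git status
fd . | fzf                      # fuzzy-pick a path interactively
jq '.scripts' package.json      # pull a field out of JSON
http GET localhost:3000/health  # HTTPie is invoked as `http`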

AI Tools

  • Claude Code - Anthropic’s CLI for Claude (claude)
  • aichat - All-in-one LLM CLI
  • Ollama - Local LLM runner
  • aider - AI pair programmer
  • shell-gpt - GPT in your terminal
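
Rough shape of how each is invoked (model names and prompts are just examples):
claude                                        # start an interactive Claude Code session in the current repo
aichat "explain this stack trace"             # one-shot question to your configured LLM
ollama pull llama3.2 && ollama run llama3.2   # model name is just an example
aider                                         # start an AI pair-programming session
sgpt "find files larger than 100MB"           # shell-gpt one-liner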

Infrastructure Tools

  • Docker - Container runtime with Compose
  • kubectl - Kubernetes CLI
  • helm - Kubernetes package manager
  • gh - GitHub CLI

Observability Stack

Your environment includes a complete observability stack that captures logs from your applications and Docker containers automatically.

How It Works

┌─────────────────────────────────────────────────────────────┐
│                       Your Dev Box                          │
│                                                             │
│  ┌──────────────┐     ┌──────────────┐     ┌─────────────┐  │
│  │ Your App     │────▶│ /workspace/  │────▶│  Promtail   │  │
│  │ (logs to     │     │ logs/*.log   │     │  (tailer)   │  │
│  │  file)       │     └──────────────┘     └──────┬──────┘  │
│  └──────────────┘                                 │         │
│                                                   │         │
│  ┌──────────────┐     ┌──────────────┐            │         │
│  │ Your App     │────▶│ OTel         │────────────┤         │
│  │ (OTLP)       │     │ Collector    │            │         │
│  └──────────────┘     └──────────────┘            │         │
│                                                   ▼         │
│  ┌──────────────┐                          ┌─────────────┐  │
│  │ Docker       │─────────────────────────▶│    Loki     │  │
│  │ Containers   │  (Loki logging driver)   │  (storage)  │  │
│  └──────────────┘                          └─────────────┘  │
│                                                   │         │
│                                                   ▼         │
│                                            env.observe      │
│                                            (MCP query)      │
└─────────────────────────────────────────────────────────────┘
  • Loki - Log storage running on port 3100
  • Promtail - Tails /workspace/logs and /var/log, ships to Loki
  • Docker Loki Driver - All containers automatically ship logs (no config needed)
  • OpenTelemetry Collector - Receives OTLP on ports 4317 (gRPC) and 4318 (HTTP), exports to Loki
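
For a quick sanity check from inside the box, you can hit Loki directly (the selector below is just an example; see Log Sources for the full label set):
curl -s 'localhost:3100/loki/api/v1/labels'          # list available labels
curl -s -G 'localhost:3100/loki/api/v1/query_range' \
  --data-urlencode 'query={job="docker"}' \
  --data-urlencode 'limit=10'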

Querying Logs with env.observe

Ask your AI agent to check the logs:
"Check the server logs for any errors"
Your agent will call env.observe and query Loki:
Agent: I'll check your environment logs.
       [calls env.observe]

       Here's what I found in the last 5 minutes:

       [14:23:45] ERROR [api] Failed to connect to database
       [14:23:45] ERROR [api] Error: ECONNREFUSED 127.0.0.1:5432
       [14:23:46] INFO  [api] Retrying connection...

       It looks like your database isn't running. Let me start it:
       [calls env.run with "docker compose up -d postgres"]

Log Sources

Label                                 | Description
--------------------------------------|----------------------------------------
{job="autodock"}                      | Application logs from /workspace/logs/
{job="docker"}                        | All Docker container logs
{job="docker", container_name="..."}  | Specific container by name
{job="docker", compose_service="..."} | Docker Compose service
{exporter="OTLP"}                     | Logs sent via OpenTelemetry
{exporter="OTLP", service_name="..."} | OTel logs from a specific service

Example: Debugging Docker Services

User: My Redis keeps dying and I don't know why.

Agent: Let me check the Docker logs for Redis.
       [calls env.observe with query "{job=\"docker\", compose_service=\"redis\"}"]

       Found the issue:

       [10:45:23] WARNING Redis is configured to save RDB snapshots, but is
                  unable to persist on disk. Out of memory.
       [10:45:24] ERROR Background save terminated with error

       Your Redis container is running out of memory. You can either:
       1. Increase the container memory limit in docker-compose.yml
       2. Disable RDB persistence with "save ''" in redis.conf

Writing Logs

Write your application logs to /workspace/logs/ and they’ll be automatically captured:
# Redirect app output to a log file
npm run dev > /workspace/logs/app.log 2>&1 &

# Or use tee to see output and log it
npm run dev 2>&1 | tee /workspace/logs/app.log
Docker containers ship logs automatically - no configuration needed.

OpenTelemetry Auto-Instrumentation

For frameworks that support OpenTelemetry (FastAPI, Celery, Express, etc.), your dev box is pre-configured to receive OTLP telemetry and ship it to Loki.

Environment Variables

These are set globally in /etc/environment:
OTEL_EXPORTER_OTLP_ENDPOINT="http://localhost:4317"
OTEL_EXPORTER_OTLP_PROTOCOL="grpc"
OTEL_LOGS_EXPORTER="otlp"
OTEL_TRACES_EXPORTER="otlp"
OTEL_METRICS_EXPORTER="otlp"
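
Because these variables are exported globally, OTel SDKs pick up the endpoint without any code changes. For Python apps, one common zero-code approach is the opentelemetry-distro auto-instrumentation (a sketch; the packages and entry point depend on your app):
pip install opentelemetry-distro opentelemetry-exporter-otlp
opentelemetry-bootstrap -a install         # install instrumentation for libraries it detects
opentelemetry-instrument uvicorn main:app  # wraps your normal entry point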

Suggested Prompts

Autodock instructs agents to add OTel instrumentation automatically when they detect it's in use, and the agent may also prompt you to add it. If neither happens, here are some prompts you can use:
  • “Add OpenTelemetry auto-instrumentation to this FastAPI app”
  • “Enable OTel logging for my Celery workers”
  • “Instrument this Express server with OpenTelemetry”
  • “Set up tracing for my Python application”

Querying OTel Logs

OTel logs can be queried over MCP using the env.observe tool. If you're SSH'd into the box, you can also query Loki directly:
curl 'localhost:3100/loki/api/v1/query' \
  --data-urlencode 'query={exporter="OTLP"}'
OTel logs include rich metadata like service name, trace IDs, and code location.
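
To narrow results to one service, filter on the service_name label (the value here is a placeholder):
curl 'localhost:3100/loki/api/v1/query' \
  --data-urlencode 'query={exporter="OTLP", service_name="my-api"}'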

API Keys for Headless Access

Need to use Autodock from CI/CD pipelines, scripts, or other automation? Create an API key for headless MCP authentication.

Creating an API Key

Ask your AI agent to create one:
"Create an API key for my CI pipeline"
Or call the tool directly:
Agent: [calls account.create_api_key with name="CI Pipeline"]

       API key created successfully.

       **IMPORTANT: Save this key now - it will not be shown again!**

       Key: adk_abc123...

Using the API Key

Add the Autodock MCP server with your API key:
claude mcp add --transport http autodock https://api.autodock.dev/mcp \
  --header "Authorization: Bearer adk_your_key_here"

Managing Keys

Tool                   | Description
-----------------------|------------------------------------
account.create_api_key | Create a new API key
account.list_api_keys  | List your keys (shows prefix only)
account.revoke_api_key | Revoke a key by ID

Inbound Email

Every Autodock environment can receive emails at *@{slug}.autodock.io. This is invaluable for testing email-dependent flows without configuring external mail services.
Mike’s tip: While I don’t like to pick favorites, this is probably my favorite feature. It makes it easy to test auth, marketing, and all sorts of flows that rely on the venerable workhorse that is email.

How It Works

┌────────────────────────────────────────────────────────────────────────────────────┐
│                                                                                    │
│  Internet ──▶ anything@happy-panda.autodock.io                                     │
│                          │                                                         │
│                          ▼                                                         │
│                    ┌──────────┐                                                    │
│                    │ AWS SES  │ (inbound email)                                    │
│                    └────┬─────┘                                                    │
│                         │ SNS notification                                         │
│                         ▼                                                          │
│                    ┌──────────┐                                                    │
│                    │ Autodock │ (Vercel webhook)                                   │
│                    │ Webapp   │                                                    │
│                    └────┬─────┘                                                    │
│                         │ HTTPS + Bearer token                                     │
│                         ▼                                                          │
│   ┌──────────────────────────────────────────────────────────────────────────────┐ │
│   │                       Your Dev Box                                           │ │
│   │                                                                              │ │
│   │                 https://47982--happy-panda.autodock.io/email                 │ │
│   │                                   │                                          │ │
│   │                                   ▼                                          │ │
│   │                            ┌──────────┐                                      │ │
│   │                            │ Your App │ (listening on port 47982)            │ │
│   │                            └──────────┘                                      │ │
│   └──────────────────────────────────────────────────────────────────────────────┘ │
└────────────────────────────────────────────────────────────────────────────────────┘
  1. Send an email to anything@{slug}.autodock.io
  2. AWS SES receives it and notifies the Autodock webhook via SNS
  3. The webhook forwards to your environment with Bearer token authentication
  4. Your app handles the email (using the pre-installed boilerplate or your own handler)

Quick Start

A boilerplate email handler is pre-installed at ~/.autodock/email-server.js. Start it with:
node ~/.autodock/email-server.js
Or run in background:
nohup node ~/.autodock/email-server.js > ~/email.log 2>&1 &
Get full setup instructions from your agent:
"How do I receive emails in my environment?"

Authentication

Webhook requests include a Bearer token that’s unique to your environment. The token is provisioned when your environment is created and stored at ~/.autodock/email-webhook-secret. The boilerplate handler validates this automatically. If writing your own handler:
const express = require('express');
const fs = require('fs');

const app = express();
app.use(express.json());

// Token provisioned when the environment was created
const expectedToken = fs
  .readFileSync(process.env.HOME + '/.autodock/email-webhook-secret', 'utf8')
  .trim();

app.post('/email', (req, res) => {
  const auth = req.headers.authorization;
  if (!auth || auth !== `Bearer ${expectedToken}`) {
    return res.status(401).json({ error: 'Unauthorized' });
  }
  // Handle email...
  res.status(200).json({ ok: true });
});

Webhook Payload

Field        | Type     | Description
-------------|----------|--------------------------
from         | string   | Sender email address
to           | string[] | Recipient addresses
subject      | string   | Email subject
textBody     | string   | Plain text body
htmlBody     | string   | HTML body (if present)
messageId    | string   | SES message ID
timestamp    | string   | ISO timestamp
spamVerdict  | string   | "PASS" or "FAIL"
virusVerdict | string   | "PASS" or "FAIL"
Limits: 150KB max email size (SES SNS limit). Large emails or attachments are truncated.
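
To exercise a handler without sending a real email, you can replay a payload locally with curl. The fields mirror the table above; the port (47982, as in the diagram) and the values are placeholders for whatever your handler uses:
curl -X POST http://localhost:47982/email \
  -H "Authorization: Bearer $(cat ~/.autodock/email-webhook-secret)" \
  -H "Content-Type: application/json" \
  -d '{
    "from": "tester@example.com",
    "to": ["signup@happy-panda.autodock.io"],
    "subject": "Password reset test",
    "textBody": "Hello from a local test",
    "messageId": "local-test-1",
    "timestamp": "2024-01-01T00:00:00Z",
    "spamVerdict": "PASS",
    "virusVerdict": "PASS"
  }'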

GitHub Actions Runner

Every Autodock environment includes a pre-installed GitHub Actions runner. Turn any environment into a self-hosted runner for your CI/CD pipelines.
Mike’s tip: Autodock tests Autodock using the Autodock GitHub runner by using Autodock to spin up a runner to test Autodock by using Autodock to spin up a runner in order to… well, you get the idea. Don’t worry, there’s a terminating condition… maybe…

Quick Start

  1. Get a registration token from GitHub:
    • Go to your repo → Settings → Actions → Runners → New self-hosted runner
    • Copy the token (valid for 1 hour)
  2. Ask your agent to configure the runner:
"Set up a GitHub runner for https://github.com/myorg/myrepo with token ABCD1234"
Your agent will call env.gh_runner and return the commands to configure and start the runner.
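
If you prefer to configure it by hand, the commands look roughly like the standard self-hosted runner setup (the URL, token, and labels below are placeholders):
cd /opt/actions-runner
./config.sh --url https://github.com/myorg/myrepo \
  --token ABCD1234 \
  --labels autodock,happy-panda \
  --ephemeral \
  --unattended
./run.sh    # picks up one job, runs it, then exits (ephemeral mode)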

How It Works

The runner is pre-installed at /opt/actions-runner and configured in ephemeral mode - it processes one job and then exits. This is ideal for dynamic environments where you want clean state for each job.
# In your workflow, target autodock runners
jobs:
  build:
    runs-on: [self-hosted, autodock, happy-panda] # Your env slug
    steps:
      - uses: actions/checkout@v4
      - run: npm test

Labels

Runners are automatically labeled with:
  • autodock - All Autodock runners
  • {slug} - Your specific environment (e.g., happy-panda)
  • Any custom labels you specify

For Kubernetes Workloads

If you’re running Kubernetes with its own observability stack (Datadog, Grafana Cloud, etc.), you can disable the built-in logging:
# Stop the local observability stack
sudo systemctl stop promtail loki

# Disable on boot
sudo systemctl disable promtail loki
To use a different Docker logging driver:
# Edit /etc/docker/daemon.json
sudo tee /etc/docker/daemon.json > /dev/null <<'EOF'
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}
EOF

# Restart Docker
sudo systemctl restart docker