Compose Stacks in PodWarden
How PodWarden translates Docker Compose files to Kubernetes — and how to write compose files that work everywhere
Overview
PodWarden can deploy multi-service applications directly from Docker Compose files. Each compose service becomes a separate Kubernetes Deployment with its own pod, and PodWarden handles the translation automatically — volumes become PVCs, service names become DNS entries, and environment variables get rewritten to point to the right places.
This guide explains exactly what PodWarden does during translation, what it can handle automatically, and what you need to get right in your compose file to make it work both as standalone Docker Compose and inside PodWarden's Kubernetes engine.
How the Translation Works
When you deploy a compose stack, PodWarden:
- Parses the compose YAML and resolves `${VAR:-default}` substitutions from env_schema values
- Creates shared PVCs for named volumes used by multiple services (one PVC per named volume, not one per service)
- Deploys each service as a separate Kubernetes Deployment with its own pod
- Creates K8s Services for compose services that declare `ports:` — this is how other containers reach them via DNS
- Creates alias Services with the original compose service names (e.g., `redis`, `postgres`) so hardcoded hostnames in container configs resolve correctly
- Rewrites env var values that reference compose service names — both exact matches (`DB_HOST=postgres`) and URLs (`REDIS_URL=redis://my-redis:6379`) get rewritten to Kubernetes FQDNs
- Maps compose `command` to Kubernetes `args` (not `command`), preserving the image's `ENTRYPOINT`
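The hostname rewrite step above can be sketched roughly as follows. The function and regex are illustrative, not PodWarden's actual code; only the `pw-svc-{stack}-{service}.{namespace}.svc.cluster.local` naming pattern comes from PodWarden's own conventions.

```python
import re

def rewrite_env_value(value, service_names, stack, namespace):
    """Illustrative sketch of the env var rewrite: replace compose service
    names with Kubernetes FQDNs, both as exact values and when embedded
    in URLs or host:port strings."""
    def fqdn(svc):
        return f"pw-svc-{stack}-{svc}.{namespace}.svc.cluster.local"

    if value in service_names:      # exact match: the whole value is a service name
        return fqdn(value)
    for svc in service_names:       # embedded match: "svc:port" inside a larger string
        value = re.sub(rf"\b{re.escape(svc)}(?=:\d)", fqdn(svc), value)
    return value
```

The real rewriter also handles comma-separated host lists (like `MEMCACHE_SERVERS`); this sketch covers only the two simplest cases.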
The Golden Rules
These five rules ensure your compose file works both as standalone Docker Compose and when deployed through PodWarden.
1. Declare ports on every service that receives connections
In Docker Compose, services on the same network can reach each other by name on any port without declaring `ports:`. In Kubernetes, a service without `ports:` gets no K8s Service object, which means no DNS entry and no way for other pods to reach it.
```yaml
# BAD — redis has no ports, other services can't reach it in K8s
services:
  app:
    image: myapp
    environment:
      - REDIS_URL=redis://redis:6379
  redis:
    image: redis:7-alpine
    # no ports declared!
```

```yaml
# GOOD — redis declares its port
services:
  app:
    image: myapp
    environment:
      - REDIS_URL=redis://redis:6379
  redis:
    image: redis:7-alpine
    ports:
      - "6379"  # internal port is enough, no host mapping needed
```

You don't need to map to a host port (like `"6379:6379"`). Just declaring `"6379"` is sufficient — PodWarden uses it to create the K8s Service.
2. Put all inter-service hostnames in env vars
PodWarden rewrites env var values that contain compose service names. But it can only rewrite what it can see. If a container's entrypoint hardcodes a default hostname that isn't set in the compose `environment:` block, PodWarden can't rewrite it.
```yaml
# BAD — the app's entrypoint defaults to CACHE_HOST=memcached internally,
# but we never set it in environment, so PodWarden can't rewrite it
services:
  app:
    image: myapp
    environment:
      - DATABASE_URL=postgres://db:5432/myapp  # this gets rewritten ✓
      # CACHE_HOST not set — entrypoint default "memcached" won't be rewritten ✗
  memcached:
    image: memcached:1.6
    ports:
      - "11211"
```

```yaml
# GOOD — explicitly set CACHE_HOST so PodWarden can rewrite it
services:
  app:
    image: myapp
    environment:
      - DATABASE_URL=postgres://db:5432/myapp
      - CACHE_HOST=memcached  # now PodWarden can rewrite it ✓
  memcached:
    image: memcached:1.6
    ports:
      - "11211"
```

How to find hidden defaults: Read the image's docker-entrypoint.sh or Dockerfile. Look for lines like `: "${SOME_HOST:=service-name}"` — these set defaults that you should override in your compose `environment:` block.
3. Use named volumes, not host paths
Host path volumes (`./data:/app/data` or `/opt/data:/app/data`) bind to a specific directory on the host machine. In Kubernetes, pods can run on any node, and host paths either don't exist or point to the wrong data.
```yaml
# BAD — host paths
volumes:
  - ./data:/var/lib/postgresql/data
  - /opt/uploads:/app/uploads
```

```yaml
# GOOD — named volumes (become PVCs in K8s)
volumes:
  - pgdata:/var/lib/postgresql/data
  - uploads:/app/uploads

# Define them at the top level
volumes:
  pgdata:
  uploads:
```

Named volumes automatically become PersistentVolumeClaims in Kubernetes. If multiple services reference the same named volume, PodWarden creates a single shared PVC.
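The volume-to-PVC mapping can be pictured with a small sketch (an illustrative function operating on a parsed compose dict, not PodWarden's code): each top-level named volume yields exactly one PVC, no matter how many services mount it, and host paths are excluded.

```python
def pvc_names(compose):
    """Illustrative: one PVC per top-level named volume that is
    actually mounted by at least one service."""
    named = set(compose.get("volumes", {}))
    used = set()
    for svc in compose.get("services", {}).values():
        for mount in svc.get("volumes", []):
            source = mount.split(":", 1)[0]   # "pgdata:/var/lib/..." -> "pgdata"
            if source in named:               # host paths like "./data" are skipped
                used.add(source)
    return sorted(used)

example = {
    "services": {
        "app":    {"volumes": ["app-data:/data"]},
        "worker": {"volumes": ["app-data:/data"]},   # same named volume
    },
    "volumes": {"app-data": None},
}
```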
4. Use ${VAR:-default} for configurable values
Docker Compose supports variable substitution with defaults. PodWarden's env_schema maps directly to these variables — users set values in the deployment form, and PodWarden substitutes them into the compose YAML before deployment.
```yaml
services:
  db:
    image: postgres:${POSTGRES_VERSION:-17}-alpine
    environment:
      POSTGRES_USER: ${DB_USER:-myapp}
      POSTGRES_PASSWORD: ${DB_PASSWORD:-changeme}
      POSTGRES_DB: ${DB_NAME:-myapp}
```

In the Hub template, define matching env_schema entries:

```json
[
  { "name": "DB_USER", "default_value": "myapp", "description": "Database user" },
  { "name": "DB_PASSWORD", "generate": "password", "description": "Database password" },
  { "name": "DB_NAME", "default_value": "myapp", "description": "Database name" }
]
```

5. Watch out for K8s environment variable collisions
Kubernetes injects environment variables for every Service in a namespace. A service named `my-redis` creates variables like `MY_REDIS_PORT`, `MY_REDIS_SERVICE_HOST`, etc. If your app uses a variable with the same name, the K8s-injected value (`tcp://10.43.x.x:6379`) overwrites your expected value (`6379`).
PodWarden sets `enableServiceLinks: false` on all pods to prevent this, so you generally don't need to worry about it. But if you see errors like `invalid port in upstream "service:tcp://10.43.x.x:3000"`, this is the cause.
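To see why these collisions happen, here is a sketch of the variable names Kubernetes derives from a Service name. The cluster IP is a placeholder; the naming scheme follows standard Kubernetes service-link behavior.

```python
def service_link_vars(service_name, port, cluster_ip="10.43.0.1"):
    """Illustrative: env vars Kubernetes injects for a Service when
    enableServiceLinks is true (PodWarden disables this)."""
    prefix = service_name.upper().replace("-", "_")
    return {
        f"{prefix}_SERVICE_HOST": cluster_ip,
        f"{prefix}_SERVICE_PORT": str(port),
        f"{prefix}_PORT": f"tcp://{cluster_ip}:{port}",  # the collision-prone one
    }
```

If your app reads a plain port number from `MY_REDIS_PORT`, the injected `tcp://...` value is what produces errors like the one above.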
What PodWarden Rewrites Automatically
Exact-match env vars
If an env var's entire value equals a compose service name, it gets rewritten:
```
POSTGRESQL_HOST=my-postgres  →  POSTGRESQL_HOST=pw-svc-stack-my-postgres.ns.svc.cluster.local
```

URL-embedded service names
Service names inside URLs, connection strings, and host:port patterns are rewritten:

```
REDIS_URL=redis://my-redis:6379  →  redis://pw-svc-stack-my-redis.ns.svc.cluster.local:6379
MEMCACHE_SERVERS=cache-1:11211,cache-2:11211  →  pw-svc-...-cache-1.ns.svc.cluster.local:11211,...
```

What does NOT get rewritten
- Hostnames hardcoded in config files mounted inside the image (nginx configs, application configs)
- Default values set in the container's entrypoint script that aren't overridden in the compose `environment:` block
- Service names in `command:` or `entrypoint:` arguments
For these cases, PodWarden creates alias Services with the original compose service name. So if your nginx config hardcodes `proxy_pass http://backend:3000`, and `backend` is a compose service with ports declared, PodWarden creates a K8s Service called `backend` that routes to the right pod.
Alias Services and DNS
For every compose service that declares `ports:`, PodWarden creates two K8s Services:

- `pw-svc-{stack}-{service}` — the primary service with the full PodWarden naming convention
- `{service}` — an alias service with the original compose service name
The alias ensures that hardcoded references in container configs (like nginx proxy_pass, HAProxy backends, or application config files) resolve correctly without needing env var overrides.
Important: Alias services use short names (e.g., `redis`). These work with Kubernetes' standard DNS resolver (which appends search domains from `/etc/resolv.conf`). However, applications that use their own DNS resolver — like nginx's `resolver` directive — may not use search domains. For these, explicitly set the hostname via an env var (which PodWarden rewrites to a FQDN).
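The difference between the two resolver behaviors can be sketched like this (illustrative; the search-domain list mirrors the standard Kubernetes `/etc/resolv.conf` entries):

```python
def resolution_candidates(hostname, namespace):
    """Illustrative: names a standard K8s resolver tries for a short
    hostname via search domains. A resolver that ignores search domains
    (such as nginx's `resolver` directive) only tries the name as given."""
    if "." in hostname:                 # already qualified: used as-is
        return [hostname]
    return [
        f"{hostname}.{namespace}.svc.cluster.local",
        f"{hostname}.svc.cluster.local",
        f"{hostname}.cluster.local",
    ]
```

This is why setting the FQDN via an env var sidesteps the problem entirely: a fully qualified name resolves the same way in every resolver.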
Common Patterns
Init containers
Compose services with `restart: on-failure` (like database migration runners) become persistent Kubernetes Deployments. They run, complete their work, and then sit idle. This is fine — the pod stays running but uses minimal resources after initialization.
```yaml
services:
  init:
    image: myapp
    command: ["migrate", "--run"]
    depends_on:
      - db
    restart: on-failure  # runs once, stays idle after
```

Shared volumes
Multiple services sharing the same named volume get a single shared PVC:
```yaml
services:
  app:
    volumes:
      - app-data:/data  # shared PVC
  worker:
    volumes:
      - app-data:/data  # same PVC

volumes:
  app-data:  # one PVC created
```

Note: `local-path` storage (the default K3s storage class) is node-local and only supports ReadWriteOnce. All pods sharing a volume must run on the same node, which Kubernetes handles automatically for single-replica deployments.
Well-known ports
If a service uses a well-known image (`postgres`, `redis`, `elasticsearch`, etc.) but doesn't declare ports, PodWarden infers the port from the image name. This covers common databases and caches, but it's always better to declare ports explicitly.
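The inference presumably amounts to a lookup keyed on the image's base name. A sketch, where the table entries are common port conventions and not PodWarden's actual list:

```python
# Hypothetical subset of a well-known-image port table.
WELL_KNOWN_PORTS = {
    "postgres": 5432,
    "redis": 6379,
    "elasticsearch": 9200,
    "memcached": 11211,
}

def infer_port(image):
    """Strip any registry path, then any tag, to get the base name."""
    base = image.rsplit("/", 1)[-1].split(":", 1)[0]
    return WELL_KNOWN_PORTS.get(base)
```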
Validating Your Compose File
With the MCP tool
If you're creating templates via the Hub MCP, run `validate_compose_stack` before creating or updating a template:

```
validate_compose_stack(compose_source: "services:\n  app:\n    ...")
```

This checks for:
- Services referenced in env vars but missing ports
- Host-path volumes
- Unsupported Docker features (`network_mode`, `links`)
- Undefined named volumes
- Potential hostname mismatches
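Two of these checks can be sketched in a few lines (hypothetical standalone code, not the tool itself), operating on a parsed compose dict:

```python
def quick_checks(compose):
    """Illustrative: flag host-path volumes and env-var references to
    services that declare no ports."""
    problems = []
    services = compose.get("services", {})
    for name, svc in services.items():
        for mount in svc.get("volumes", []):
            source = mount.split(":", 1)[0]
            if source.startswith(("./", "/")):
                problems.append(f"{name}: host-path volume {source}")
        for env in svc.get("environment", []):
            for target, tdef in services.items():
                if target != name and target in str(env) and not tdef.get("ports"):
                    problems.append(f"{name}: references {target!r}, which has no ports")
    return problems

stack = {
    "services": {
        "app": {
            "environment": ["REDIS_URL=redis://redis:6379"],
            "volumes": ["./data:/app/data"],
        },
        "redis": {"image": "redis:7-alpine"},  # no ports declared
    },
}
```

The real validator is stricter (it also catches unsupported features and undefined volumes), but this shows the shape of the two most common failures.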
Manual checklist
Before deploying a compose stack in PodWarden:
- Does every service that receives network connections declare `ports:`?
- Are all inter-service hostnames set explicitly in `environment:` (not just relying on entrypoint defaults)?
- Are all volumes named volumes (not host paths)?
- Do `${VAR:-default}` substitutions have matching env_schema entries?
- Does the stack work with `docker compose up` as-is? (If it doesn't work standalone, it won't work in PodWarden either.)
Example: Full Stack with All Patterns
Here's a compose file that follows all the rules — a web app with PostgreSQL, Redis, a background worker, and an init container:
```yaml
services:
  web:
    image: myapp:${APP_VERSION:-latest}
    command: ["web-server"]
    ports:
      - "${WEB_PORT:-8080}:8080"
    environment:
      - DATABASE_URL=postgres://${DB_USER:-myapp}:${DB_PASSWORD:-changeme}@db:5432/${DB_NAME:-myapp}
      - REDIS_URL=redis://redis:6379
      - CACHE_HOST=redis
      - SECRET_KEY=${SECRET_KEY:-changeme}
    volumes:
      - uploads:/app/uploads
    depends_on:
      - db
      - redis
  worker:
    image: myapp:${APP_VERSION:-latest}
    command: ["worker"]
    environment:
      - DATABASE_URL=postgres://${DB_USER:-myapp}:${DB_PASSWORD:-changeme}@db:5432/${DB_NAME:-myapp}
      - REDIS_URL=redis://redis:6379
    volumes:
      - uploads:/app/uploads  # shared with web
    depends_on:
      - db
      - redis
  init:
    image: myapp:${APP_VERSION:-latest}
    command: ["migrate"]
    environment:
      - DATABASE_URL=postgres://${DB_USER:-myapp}:${DB_PASSWORD:-changeme}@db:5432/${DB_NAME:-myapp}
    depends_on:
      - db
    restart: on-failure
  db:
    image: postgres:${POSTGRES_VERSION:-17}-alpine
    ports:
      - "5432"  # internal port for other services
    environment:
      POSTGRES_USER: ${DB_USER:-myapp}
      POSTGRES_PASSWORD: ${DB_PASSWORD:-changeme}
      POSTGRES_DB: ${DB_NAME:-myapp}
    volumes:
      - pgdata:/var/lib/postgresql/data
  redis:
    image: redis:7-alpine
    ports:
      - "6379"  # internal port for other services
    volumes:
      - redis-data:/data

volumes:
  uploads:
  pgdata:
  redis-data:
```

This stack:
- Declares ports on `db` and `redis` (reachable by other services)
- Sets all hostnames in env vars (`DATABASE_URL`, `REDIS_URL`, `CACHE_HOST`)
- Uses named volumes (`uploads` shared between `web` and `worker`)
- Uses `${VAR:-default}` for all configurable values
- Has an init container with `restart: on-failure`
- Works identically with `docker compose up` and in PodWarden
Troubleshooting
"502 Bad Gateway" from nginx/reverse proxy
The reverse proxy can't reach the upstream service. Common causes:
- Upstream service has no `ports:` declared — add the port
- K8s env var collision — the proxy uses an env var like `BACKEND_PORT` but K8s injected `BACKEND_PORT=tcp://10.43.x.x:3000`. PodWarden prevents this with `enableServiceLinks: false`, but older deployments may need redeployment
- Nginx resolver doesn't use search domains — set the upstream hostname explicitly via env var (PodWarden rewrites it to a FQDN)
"Host not found" DNS errors
- Check that the target service declares `ports:` in the compose file
- Check that the hostname matches a compose service name exactly
- If the error comes from nginx or a custom resolver, set the hostname via env var instead of relying on compose DNS aliases
Services can't connect to each other
- Verify both services are in the same namespace
- Check that the target service has `ports:` declared
- Run `validate_compose_stack` to identify missing ports or hostname issues
Init container keeps running
This is expected behavior. Compose services with `restart: on-failure` run as persistent K8s Deployments. After the init completes, the pod stays running but idle. This uses minimal resources and doesn't cause issues.
Next Steps
- Creating Custom Templates — single-service template guide
- Apps & Stacks — full stack field reference
- Self-Hosting with Docker — deploy templates as standalone Docker Compose