Deployment
Deploy PodWarden to production and manage stack deployments
Installing PodWarden
One-Line Installer (Recommended)
```shell
curl -fsSL https://www.podwarden.com/install.sh | bash
```
The interactive installer checks prerequisites, asks for configuration, generates .env and docker-compose.yml, pulls pre-built images, and starts PodWarden. See Getting Started for details on what the installer asks.
Default install directory: /opt/podwarden/
Manual Installation
If you prefer to configure manually:
```shell
mkdir -p /opt/podwarden && cd /opt/podwarden

# Download the production compose file
curl -fsSL https://git.mediablade.net/dubai/podwarden/podwarden/-/raw/main/docker-compose.prod.yml \
  -o docker-compose.yml

# Create .env with your settings (see Configuration guide for all options)
cat > .env << 'EOF'
PW_POSTGRES_DB=podwarden
PW_POSTGRES_USER=podwarden
PW_POSTGRES_PASSWORD=<generate-a-strong-password>
PW_ENCRYPTION_KEY=<generate-with-openssl-rand-base64-32>
PW_API_PORT=8000
PW_UI_PORT=3000
NEXT_PUBLIC_PW_API_URL=http://<your-ip>:8000
FRONTEND_URL=http://<your-ip>:3000
EOF

# Pull and start
docker compose pull
docker compose up -d
```
Services:
- `podwarden-db` — PostgreSQL 16 (internal only)
- `podwarden-api` — FastAPI backend (port 8000, host networking)
- `podwarden-ui` — Next.js frontend (port 3000)
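The two secret placeholders in the .env above can be generated with openssl. A minimal sketch — the variable names match the .env, and the byte counts are reasonable defaults rather than PodWarden requirements:

```shell
# Generate values for the two secret placeholders in .env
# (sizes are sensible defaults, not PodWarden requirements)
PW_POSTGRES_PASSWORD="$(openssl rand -base64 24)"
PW_ENCRYPTION_KEY="$(openssl rand -base64 32)"
printf 'PW_POSTGRES_PASSWORD=%s\nPW_ENCRYPTION_KEY=%s\n' \
  "$PW_POSTGRES_PASSWORD" "$PW_ENCRYPTION_KEY"
```

Paste the printed values into .env in place of the angle-bracket placeholders.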
Reverse Proxy (Optional)
PodWarden works fine accessed directly by IP and port on a LAN. For TLS or a custom domain, use Caddy, nginx, or Traefik:
```
podwarden.example.com {
    handle /api/* {
        reverse_proxy localhost:8000
    }
    handle {
        reverse_proxy localhost:3000
    }
}
```
Networking
The API container uses network_mode: host so it can directly reach target hosts via SSH for provisioning and kubectl. This means:
- The API binds directly to the host's network interfaces (port 8000 by default)
- The PostgreSQL container exposes its port to the host (port 5432 by default) so the API can reach it at 127.0.0.1:5432
- Tailscale is optional — if all hosts are on the same LAN, the API can reach them by local IP or hostname
If you use Tailscale for host discovery, ensure the PodWarden server has Tailscale installed and authenticated.
Database
PostgreSQL data is stored in a Docker volume (pw-db-data). Migrations run automatically on startup from the migrations/ directory.
Backups
```shell
# Backup
docker exec pw-db pg_dump -U podwarden podwarden > backup.sql

# Restore
docker exec -i pw-db psql -U podwarden podwarden < backup.sql
```
Self-Hosting Apps from the Hub Catalog
Every template in the PodWarden Hub catalog can be installed directly on any server with Docker — no PodWarden instance required. This is the fastest way to deploy self-hosted software.
One-Click Install
On any template's detail page, click Self-host with Docker to get a one-line install command:
```shell
curl -fsSL https://www.podwarden.com/api/v1/catalog/install/<app-slug>/script | bash
```
The script:
- Downloads a `.tar.gz` bundle containing `docker-compose.yml`, `.env.example`, and any config files
- Extracts to `/opt/<app-slug>/` (customizable with `--dir`)
- Copies `.env.example` to `.env` (skipped if `.env` already exists)
- Auto-generates passwords and secret keys for env vars with `generate` hints using `openssl rand`
- Runs `docker compose up -d`
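The secret-generation step can be sketched as follows. This is assumed behavior modeled on the description above, not the actual installer script; the demo file, the `APP_SECRET` name, and the hex length are all illustrative:

```shell
# Hypothetical sketch of the installer's "generate" step: copy .env.example
# to .env, then fill every var whose example value is the literal hint
# "generate" with a random secret. The demo input stands in for a real bundle.
cd "$(mktemp -d)"
printf 'APP_SECRET=generate\nAPP_PORT=8080\n' > .env.example

cp .env.example .env
grep '=generate$' .env.example | cut -d= -f1 | while read -r key; do
  sed -i "s|^${key}=generate\$|${key}=$(openssl rand -hex 24)|" .env
done
cat .env
```

After running, `APP_SECRET` holds a random hex string while non-`generate` values like `APP_PORT` pass through untouched.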
Flags:
| Flag | Description |
|---|---|
| `--dir /path` | Override the install directory |
| `--no-generate-secrets` | Skip automatic secret generation |
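Flags are passed through the pipe with `bash -s --`. A sketch of the mechanism — the inline `echo` script stands in for the real installer:

```shell
# bash -s reads the script from stdin; everything after -- becomes its
# positional arguments, which is how --dir and --no-generate-secrets arrive.
echo 'echo "installer received: $*"' | bash -s -- --dir /srv/myapp --no-generate-secrets
```

This prints `installer received: --dir /srv/myapp --no-generate-secrets`, confirming the flags reach the piped script.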
Re-Installing / Upgrading
If you run the installer again on a system where the app is already installed:
- The existing `.env` file is preserved (not overwritten)
- `docker-compose.yml` is updated to the latest version from the catalog
- `docker compose up -d` recreates containers with the new compose file
If you want fresh secrets, delete .env before re-running the installer.
Manual Bundle Download
Click Download bundle manually in the install modal (or fetch the URL directly) to get the .tar.gz without running the script:
```shell
curl -fsSL "https://www.podwarden.com/api/v1/catalog/install/<app-slug>/bundle" -o bundle.tar.gz
tar xzf bundle.tar.gz
cd <app-slug>
cp .env.example .env
# Edit .env to fill in required values
docker compose up -d
```
Prerequisites
- Docker Engine 20.10+
- Docker Compose v2 (the `docker compose` plugin, not the legacy `docker-compose` binary)
- `curl` and `openssl` (for the install script and secret generation)
Deploying Stacks
Deployments connect a stack (what to deploy) with a cluster (where to deploy).
Creating a Deployment
- Go to Deployments and click Create
- Select a Stack
- Select a Cluster
- Optionally set the namespace (defaults to cluster's default namespace)
- Save the deployment
Deploying
Click Deploy on any deployment. PodWarden:
- Runs a pre-flight network compatibility check (advisory warnings, never blocks)
- Generates a complete Kubernetes manifest:
  - Deployment, Job, or DaemonSet (based on workload `kind`)
  - Service (for exposed ports)
  - PersistentVolumeClaims (for PVC volume mounts)
  - ConfigMaps (for config file slots)
- Creates PVCs and ConfigMaps first (if needed)
- Applies the manifest via `kubectl apply` to the target cluster
- Tracks deployment status
If the network check finds incompatibilities (e.g. workload requires public but cluster only supports mesh), an amber warning modal appears. You can click Deploy anyway to proceed.
Undeploying
Click Undeploy on a deployed deployment. PodWarden deletes the Kubernetes deployment/job/daemonset, its service, and any config slot ConfigMaps.
For workloads with PVC volume mounts:
- If `retain_on_undeploy` is true (default): PVC data is kept
- If `retain_on_undeploy` is false: the PVC is deleted along with the deployment
Viewing Logs
Click Logs on a deployed deployment to view pod logs from the running workload. Logs are fetched in real-time from the cluster with a configurable tail limit.
Filtering Deployments
Filter deployments by:
- Cluster — Show deployments for a specific cluster
- Stack — Show deployments for a specific stack
Host Provisioning
PodWarden provisions hosts via SSH + Ansible. Before provisioning, each target host must allow root SSH access with key-based authentication.
The overall workflow: prepare each host for root SSH access, verify connectivity, then provision it as a worker or control plane node.
Preparing Hosts for SSH
PodWarden connects to hosts as root over SSH using a key pair managed in Settings → SSH Keys. Target hosts must be configured to accept this connection.
1. Create an SSH Key in PodWarden
Go to Settings → SSH Keys → Create Key Pair. PodWarden generates an ed25519 key pair and stores it encrypted. Copy the public key.
2. Add the Public Key to Target Hosts
```shell
sudo mkdir -p /root/.ssh
echo "ssh-ed25519 AAAA... podwarden" | sudo tee -a /root/.ssh/authorized_keys
sudo chmod 700 /root/.ssh
sudo chmod 600 /root/.ssh/authorized_keys
```
3. Enable Root Key-Based Login
Many distributions or hardening scripts disable root SSH login. Check and fix:
PermitRootLogin — must be prohibit-password or yes. Check both /etc/ssh/sshd_config and drop-in files in /etc/ssh/sshd_config.d/:
```shell
# Find any files that set PermitRootLogin
grep -r "PermitRootLogin" /etc/ssh/

# Fix: change "no" to "prohibit-password"
sudo sed -i 's/^PermitRootLogin no/PermitRootLogin prohibit-password/' /etc/ssh/sshd_config.d/*.conf
```
AllowUsers — if this directive is set, root must be included:
```shell
# Check if AllowUsers is set
grep -r "AllowUsers" /etc/ssh/

# If it says "AllowUsers someuser", change to "AllowUsers someuser root"
```
Restart SSH after changes:
```shell
sudo systemctl restart sshd
```
4. Check fail2ban
If fail2ban is running, it may have banned PodWarden's IP from earlier failed SSH attempts:
```shell
# Check ban status
sudo fail2ban-client status sshd

# Unban PodWarden's IP if needed
sudo fail2ban-client set sshd unbanip <podwarden-server-ip>
```
5. Check Firewall
Ensure SSH and K3s ports are open:
```shell
# SSH access from PodWarden server
sudo ufw allow from <podwarden-server-ip> to any port 22

# K3s ports (required for cluster communication after provisioning)
sudo ufw allow 6443/tcp   # K3s API server
sudo ufw allow 10250/tcp  # Kubelet metrics
sudo ufw allow 8472/udp   # Flannel VXLAN (pod-to-pod networking)
```
Open the K3s ports before provisioning. Without port 6443, PodWarden's kubectl commands will time out and the cluster page will be slow or show errors.
If your host was cloned from another VM, audit and remove stale firewall rules that don't apply to this host's role.
Verify Connectivity
Use the Probe button on a host's page in PodWarden — a successful probe confirms SSH works. Or test manually:
```shell
docker exec -it podwarden-api ssh -o StrictHostKeyChecking=no root@<host-ip> hostname
```
Provisioning a Host
- Go to Hosts and select a discovered host
- Click Provision
- PodWarden runs an Ansible playbook that:
  - Installs Docker
  - Installs k3s-agent (joins the host to a cluster)
  - Installs NVIDIA GPU drivers (if GPU detected)
- Track progress in the provisioning job log
Provisioning as Control Plane
To create a new K3s cluster from a host:
- Select the host and click Provision as Control Plane
- PodWarden installs K3s server (not agent)
- A new cluster is created with this host as the control plane node
Firewall Requirements for K3s
After provisioning, the following ports must be open on K3s nodes:
| Port | Protocol | Purpose | Required On |
|---|---|---|---|
| 6443 | TCP | K3s API server | Control plane |
| 10250 | TCP | Kubelet metrics | All nodes |
| 8472 | UDP | Flannel VXLAN (pod networking) | All nodes |
```shell
sudo ufw allow 6443/tcp   # K3s API
sudo ufw allow 10250/tcp  # Kubelet
sudo ufw allow 8472/udp   # Flannel VXLAN
```
If port 6443 is blocked on the control plane, worker nodes cannot join the cluster and PodWarden's cluster page will show timeout errors.
Wiping a Host
Click Wipe to uninstall k3s-agent and reset the host to discovered state.
Detaching a Host
Click Detach to remove a host from its cluster (clears the DB association and optionally wipes k3s-agent).
Troubleshooting
SSH Connection Refused
PodWarden can't reach the target host on port 22.
- Check the host is running and reachable: `ping <host-ip>`
- Check the SSH service is running: `sudo systemctl status sshd`
- Check the firewall allows SSH from PodWarden's IP
SSH Permission Denied
PodWarden connects but authentication fails.
- Verify the public key is in `/root/.ssh/authorized_keys` on the target host
- Check `PermitRootLogin` is not `no` — look in both `/etc/ssh/sshd_config` and `/etc/ssh/sshd_config.d/*.conf`
- Check `AllowUsers` includes `root` (if set)
- Check fail2ban hasn't banned PodWarden's IP: `sudo fail2ban-client status sshd`
Database Connection Error on Fresh Install
If the API shows connection refused to PostgreSQL after install:
- The API uses `network_mode: host` and connects to `127.0.0.1:5432`
- Ensure the DB service has `ports: - "5432:5432"` in `docker-compose.yml`
- Check the DB is running: `docker compose logs podwarden-db`
Cluster Page Slow or "Command timed out"
The clusters page shows "Command timed out after 15s" and takes a long time to load.
PodWarden runs kubectl commands to fetch live cluster info. If port 6443 is blocked on the K3s node, these commands timeout.
- Check the K3s node's firewall allows port 6443: `sudo ufw status`
- Open it: `sudo ufw allow 6443/tcp`
- Also open 10250/tcp (kubelet) and 8472/udp (Flannel VXLAN)
- Common cause: VM was cloned from another machine and inherited firewall rules that block K3s ports
K3s Agent Fails to Start During Provisioning
The provisioning log shows k3s-agent.service failed after install.
- Check that the K3s server (control plane) is reachable from the agent node: `curl -sk https://<cp-host>:6443/version`
- Check that port 6443 is open on the control plane node's firewall
- Check the agent logs: `sudo journalctl -u k3s-agent -n 50`
- Common cause: the firewall on the control plane blocks port 6443, so the agent can't register
Migration Errors
Migrations run automatically on API startup. If a migration fails:
- Check logs: `docker compose logs podwarden-api`
- Migrations are idempotent — restarting the API retries them
- For manual inspection: `docker exec -it podwarden-db psql -U podwarden podwarden -c "SELECT * FROM schema_migrations;"`
Cluster Management
Creating Clusters
Two methods:
- Manual — Provide cluster name, environment, and kubeconfig (paste or fetch via SSH from a host)
- From host — Provision a host as control plane (creates both cluster and first node)
Cluster Detail
The cluster detail page shows:
- Nodes — All member nodes with status, roles, and resources
- Storage Classes — Available StorageClass provisioners
- Extended info — Kubernetes version, namespaces, resource metrics, recent events
Deleting Clusters
Click Delete on a cluster. PodWarden shows an Affected Resources preview — all deployments and host associations that will be removed. Hosts lose their cluster association but are not wiped.
Updating PodWarden
```shell
cd /opt/podwarden
docker compose pull
docker compose up -d
```
Migrations run automatically on API startup. Check logs after restart to verify all services are healthy:
```shell
docker compose logs -f
```