Vancouver, February 13–16, 2026. Three days of cold rain, and then one Sunday afternoon of very welcome February sunshine. (We took the laptops out to the back deck for an hour and pretended we live in California.)
This has been a machines-and-networks stretch. PodWarden's whole reason for existing is to manage clusters of K3s nodes… and a "cluster manager" that can't actually reach a node is just a very pretty database browser.
What we shipped
New functionality
- Hosts are now first-class objects. You can register a host (a Linux box you control), and PodWarden remembers what it knows about it: hostname, IPs, SSH config, which cluster it belongs to, whether it's a control plane or a worker. The hosts table got a serious workout this stretch and held up well. (There's a rough sketch of the host record after this list.)
- Control-plane provisioning is in. Click a button, watch PodWarden install K3s on a fresh node, register it as a control plane, write down the kubeconfig. Another piece of the "wizard, not docs" promise we keep making to ourselves.
- Tailscale-aware networking. A lot of our users (and a lot of us) live on Tailscale. PodWarden now recognizes Tailscale-assigned IPs and prefers the mesh path when reaching a host. You don't need Tailscale, but if you have it, the experience is noticeably nicer. (Sketch after the list.)
- App secrets are properly stored now. Every app that runs on a managed cluster eventually needs secrets: API keys, DB passwords, the lot. We added an end-to-end encrypted secret store, with a "reveal" UI that requires a fresh re-auth. No more "let me just paste this in a Helm values file." (Sketch after the list.)
- SSH key management is in. Generate keys per-host or upload your own. PodWarden now keeps track of which key opens which door, so you don't have to.
- "Cluster required for install." A small but important rule: you can't deploy an app until you've told PodWarden which cluster to deploy it to. Sounds obvious. The version of the product where this wasn't enforced caused some funny ghost deployments. Never again.
Changed/refactored
- The control-plane install flow got rewritten twice in three days. The first version assumed Ubuntu. The second assumed Debian-family. The third just asks systemd what it is and goes from there. (Rough sketch of that check after this list.)
- A new branding pass landed: cleaner logo placement, dark and light themes honest enough that we trust them, several pages tightened up where the visual language had drifted.
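In practice, "asking systemd" means reading /etc/os-release, the identification file the systemd project defines. Simplified (the real flow does more than this), the check looks something like:

```go
package podwarden

import (
	"bufio"
	"os"
	"strings"
)

// detectDistro reads /etc/os-release and returns the ID and ID_LIKE fields,
// e.g. ("ubuntu", "debian"), so the installer can stop guessing.
func detectDistro() (id, idLike string, err error) {
	f, err := os.Open("/etc/os-release")
	if err != nil {
		return "", "", err
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	for sc.Scan() {
		key, val, ok := strings.Cut(sc.Text(), "=")
		if !ok {
			continue
		}
		val = strings.Trim(val, `"`)
		switch key {
		case "ID":
			id = val
		case "ID_LIKE":
			idLike = val
		}
	}
	return id, idLike, sc.Err()
}
```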
Bugfixes
- A racy SSH connection-reuse bug that turned into a 40-second hang on every second click. Found it on Friday at 5pm. (Of course.) There's a sketch of the shape of the fix after this list.
- The host detail page was eating the trailing newline in the kubeconfig, which in turn was making kubectl confused. Subtle. Annoying. Fixed.
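We'll spare you the diff of the SSH fix, but the usual cure for racy connection reuse is the boring one: put a lock around the per-host client cache so two callers can't dial or tear down the same connection at once. A simplified sketch, not the literal code (`clientCache` and friends exist only for this example):

```go
package podwarden

import (
	"sync"

	"golang.org/x/crypto/ssh"
)

// clientCache reuses one SSH connection per host. The mutex is the point:
// without it, two callers can race to dial (or tear down) the same host and
// leave each other hanging on a connection that no longer exists.
type clientCache struct {
	mu      sync.Mutex
	clients map[string]*ssh.Client
}

func newClientCache() *clientCache {
	return &clientCache{clients: make(map[string]*ssh.Client)}
}

func (c *clientCache) get(addr string, cfg *ssh.ClientConfig) (*ssh.Client, error) {
	c.mu.Lock()
	defer c.mu.Unlock()
	if cl, ok := c.clients[addr]; ok {
		return cl, nil
	}
	cl, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return nil, err
	}
	c.clients[addr] = cl
	return cl, nil
}
```

Holding the lock across the dial is crude (it serializes dials to unrelated hosts, which a per-host lock would avoid), but the lock around reuse is the part that kills the race.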
Experimenting
- We started talking about how to model "a deployment" vs. "an app" vs. "a workload definition."
Was this hard?
A brutal stretch. About 170 commits across three engineers, two of whom hadn't written Ansible in years and were re-learning it on the fly. The Tailscale integration alone ate a full day, because we wanted to handle the case where a host has both a public IP and a Tailscale IP without making the user pick.
How this helps our users
If you've ever stood up a 3-node K3s cluster from scratch (SSHing around, copy-pasting tokens, fighting with iptables, forgetting which node was the first one and where you wrote down the join token), you know how long that takes and how easy it is to brick it. PodWarden now does it for you. It also does the boring "remember everything and put it in the right place" part. Day-zero clusters in the time it takes to brew coffee is the headline.
Notes from the room
WebSummit is coming to Vancouver in May, so that's on our radar, and so is KubeCon EU in London.
Mood: tired, but cheerful. We logged into a freshly provisioned cluster on Sunday evening and just sat there smiling at each other over Google Meet for a minute.