PodWarden

Hard Questions

Honest answers to the toughest objections about PodWarden's model, pricing, and positioning.

Infrastructure buyers should be skeptical. Most tools in this space overpromise, underdeliver, and change pricing after you're locked in.

Here are the hardest objections people raise about PodWarden, and our honest answers.


"You're giving away the product for free. How is this a real business?"

PodWarden (self-hosted) is free and runs entirely on your infrastructure. We don't host your workloads. When you add servers to your fleet, we don't incur compute costs.

PodWarden Hub is optional and paid. It provides team collaboration, SSO, catalog distribution, image registry caching, fleet API, and support — services that run on our infrastructure and have real operating costs.

If you never use Hub, you never pay us. If you do, you're paying for the parts we operate.

This is a common model in infrastructure software: free self-hosted core, paid hosted services and support. GitLab and Grafana work the same way.


"Unlimited servers is an invitation to abuse"

The self-hosted tool runs on your hardware. If someone manages 500 servers with PodWarden, those are 500 servers they own, power, and maintain. There's no resource consumption on our side from that.

The two things that can be abused are Hub usage and support time:

  • Hub usage is quota-limited by plan — registry storage (50 GB / 500 GB / unlimited), API rate limits, catalog access. These limits reflect real costs.
  • Support is tiered — community, email, priority, dedicated. Support cost scales with tier, not server count.

If an account creates disproportionate load on shared cloud resources, we reserve the right to rate-limit or require a paid plan. We don't expect this to be common, but we're not going to pretend the possibility doesn't exist.

Unlimited servers avoids a tax on growth. A management tool that charges you more for scaling your fleet discourages you from doing the thing you need to do. We'd rather remove that friction entirely and gate on the things that actually cost us money.


"You're pitching everyone — clinics, agencies, AI startups, homelabs. Pick one ICP."

PodWarden is a general-purpose tool, but we're not trying to market to everyone at once.

Our go-to-market starts with segments where the pain is acute: GPU inference teams without platform engineers, small SaaS companies replacing expensive hosted software with self-hosted alternatives, and homelabs that have outgrown docker-compose files.

The landing page shows breadth to signal the tool isn't tied to one niche. Segment-specific pages and campaigns will do the actual selling. The underlying problem — "we have servers, we have containers, we don't have DevOps" — is the same across segments. The deployment realities differ, but the operational pain is identical.


"Your headline is vague. 'Fleet operations' is inside-baseball."

Fair point. Here's the tradeoff:

Our primary audience is technical — dev leads, senior engineers, CTOs at small-to-mid companies. They evaluate and adopt infrastructure tools. They know what fleet operations means.

We're optimizing for the technical person who's been SSH-ing into servers at 2am and thinks "there has to be a better way." That person reads "fleet operations" and gets it. The body of the page then speaks to the specific pains:

  • "We keep breaking production."
  • "We can't upgrade safely."
  • "Our DevOps person left and nobody knows how anything works."
  • "We need to add 10 servers and there's no process for it."

If conversion data shows the headline isn't landing, we'll test alternatives. We'll measure, not guess.


"Kubernetes without Kubernetes is a trust cliff"

This is the single most important concern to address clearly.

PodWarden is an abstraction layer, not a cage.

When something breaks, you have full access to the underlying system:

  1. kubectl works. PodWarden deploys K3s — standard, CNCF-conformant Kubernetes. Every kubectl command, every Helm chart, every debugging tool works. PodWarden sits alongside the Kubernetes API; it doesn't replace or intercept it.

  2. Your clusters survive without PodWarden. Uninstall PodWarden and K3s keeps running. Containers keep serving traffic. PodWarden manages deployment operations — it's not in the data path.

  3. You can leave. Your clusters are standard K3s. No proprietary CRDs required for runtime, no custom formats. Move to ArgoCD, Flux, raw kubectl, or any Kubernetes tooling. The clusters are already there.

  4. Templates are Docker containers. A template defines a Docker image, resource requirements, and environment variables. You can take that and docker run it, convert it to a Kubernetes manifest, or drop it into docker-compose. It's a declaration, not a proprietary package.
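That portability claim is easy to sanity-check mechanically. Here's a minimal sketch of rendering a template as a plain docker run command, assuming a hypothetical template structure — the field names below are illustrative, not PodWarden's actual schema:

```python
# Hypothetical template shape -- the field names are illustrative,
# not PodWarden's actual schema.
template = {
    "image": "someproject/worker:v2.3",
    "resources": {"memory": "512m", "cpus": "1.0"},
    "env": {"LOG_LEVEL": "info", "WORKERS": "4"},
}

def to_docker_run(t):
    """Render a template as an equivalent `docker run` command line."""
    parts = ["docker", "run", "-d",
             "--memory", t["resources"]["memory"],
             "--cpus", t["resources"]["cpus"]]
    for key, value in sorted(t["env"].items()):
        parts += ["-e", f"{key}={value}"]
    parts.append(t["image"])
    return " ".join(parts)

print(to_docker_run(template))
# docker run -d --memory 512m --cpus 1.0 -e LOG_LEVEL=info -e WORKERS=4 someproject/worker:v2.3
```

The same three fields map just as directly onto a Kubernetes container spec or a docker-compose service, which is the point: nothing in the template is specific to PodWarden's runtime.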

"What happens when we outgrow PodWarden?"

You already have a running K3s fleet with standard Kubernetes APIs. Hire platform engineers and have them work directly with kubectl/ArgoCD/Flux on the same clusters PodWarden built. No migration needed.

We'd rather have you outgrow us on infrastructure you understand than trap you in an abstraction you can't escape.


"What if PodWarden disappears?"

If PodWarden stopped existing tomorrow:

  • Your clusters keep running. They're standard K3s. Containers continue serving traffic.
  • You retain kubectl access and can manage workloads directly.
  • Hub services would go away (catalog, registry cache, fleet API), but your running infrastructure does not depend on them.
  • Templates are portable. Docker image + resource requirements + env vars. Nothing proprietary to reconstruct.

PodWarden Hub is a convenience layer, not a runtime dependency. The architecture is designed so that the failure of PodWarden — temporary or permanent — doesn't take your infrastructure down with it.


"You claim GPU placement, rollback safety, reliability — but show no proof"

Valid.

We're a young product. We don't have 50 reference customers and a SOC 2 report. What we have is the architecture, the code, and a commitment to showing how it works rather than hiding behind marketing language.

In progress:

  • Security model — secrets storage, agent auth, cloud boundaries, threat model
  • Failure modes — what happens when a server goes down, a deployment fails, or Hub is unreachable
  • How to leave — step-by-step migration guide to standard Kubernetes tooling

If you're evaluating PodWarden for production and need specific technical answers before committing, reach out directly. We'd rather have an honest conversation than lose you to unanswered questions.


"Why pay? I get GPU placement, secrets, and monitoring for free."

Because the free tier is the tool and the paid tiers are the service.

The free tier lets you deploy containers, manage servers, track history, store secrets, and place GPU workloads. All locally, on your infrastructure.

Paid tiers add the cloud layer:

  • Pro ($19/mo): Hub template catalog, fleet API, image registry cache (50 GB), 5 team members, 5 clusters, email support.
  • Business ($79/mo): Unlimited team members and clusters, private template catalogs, SSO and audit logs, 500 GB registry cache, priority support.
  • Enterprise (custom): On-premise control plane, managed operations, dedicated account manager, custom integrations, compliance documentation.

The registry cache is the clearest paid value. Here's why it matters:

Docker Hub enforces pull rate limits — 100 pulls per 6 hours per IP for anonymous users. If a rollout pulls several images across 20 servers that share an egress IP, a single deployment can exhaust that limit. The registry cache pulls once and serves the fleet locally.
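The arithmetic is easy to check. A back-of-the-envelope sketch, assuming a rollout of several images across servers sharing one egress IP — both numbers below are illustrative assumptions, not measurements:

```python
# Back-of-the-envelope: anonymous Docker Hub pulls during one rollout.
# Fleet size and image count are illustrative assumptions.
ANON_PULL_LIMIT = 100      # pulls per 6-hour window for anonymous clients

servers = 20               # machines receiving the rollout
images_per_deploy = 6      # images in the stack being deployed

uncached_pulls = servers * images_per_deploy  # every server pulls from Docker Hub
cached_pulls = images_per_deploy              # the cache pulls once, serves locally

print(uncached_pulls > ANON_PULL_LIMIT)  # True: one rollout exhausts the window
print(cached_pulls)                      # 6
```

With a cache in front, the pull count stops scaling with fleet size at all — adding servers no longer moves you closer to the limit.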

More critically: images disappear. Maintainers delete tags. Repositories get removed. Docker Hub has moved more than once toward purging inactive images. If your template references someproject/worker:v2.3 and that tag vanishes, your next deployment fails. The registry cache pins the images your templates actually use, so a third party's cleanup doesn't break your fleet.

This isn't theoretical — it's among the most common production incidents for teams self-hosting software from public registries.
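The usual mechanism behind this kind of pinning is the image digest: a mutable repo:tag reference is resolved once to an immutable repo@sha256:... reference, so later deletions or retags can't change what gets deployed. A hand-rolled sketch of the idea — the pin helper and the digest value are hypothetical illustrations, not PodWarden's implementation:

```python
# Sketch of digest pinning. The helper and the digest below are
# hypothetical illustrations, not PodWarden's implementation.
def pin(image_ref: str, digest: str) -> str:
    """Rewrite repo:tag as repo@sha256:... so the reference can't drift."""
    repo, _, _tag = image_ref.rpartition(":")
    return f"{repo or image_ref}@{digest}"

pinned = pin("someproject/worker:v2.3", "sha256:" + "ab" * 32)
print(pinned)
```

A digest reference stays valid as long as the blob exists in any registry that holds a copy — which is exactly what a local cache guarantees, independent of what the upstream maintainer does to the tag.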


"Per-server pricing exists for a reason. Your model is naive."

Per-server pricing for software that runs on the customer's hardware doesn't track cost — it tracks the customer's infrastructure footprint. That's a valid business choice, and it's profitable. Most competitors use it.

We think there's a better model: charge for the things that cost us money (cloud infrastructure, registry storage, support) and give away the things that don't (software running on your machine).

We're not claiming competitors are wrong for charging per server. We're saying we can offer a different model because our cost structure allows it. The self-hosted tool costs us engineering time, not per-server hosting. The cloud service costs us real infrastructure, so it's priced accordingly.

This is principled, not naive. The risks to our model — support load, cloud abuse, free-tier costs — are addressed through tiered support, cloud quotas, and the fact that free-tier users cost us approximately nothing to serve.


"You're wedged between too simple and too hard"

"Too Kubernetes-y for Docker-only shops."

Users never see Kubernetes. They see servers, templates, and deployments. The fact that K3s runs under the hood is an implementation detail. If you can use Portainer, you can use PodWarden. The interface is containers and servers, not pods and namespaces.

"Too abstracted for serious Kubernetes shops."

Correct, and those aren't our customers. Teams with dedicated platform engineers using ArgoCD and custom operators don't need PodWarden. They've already built the competency we're packaging. PodWarden is for teams who can't or don't want to build that competency.

"Too new for enterprises."

True today. The answer isn't to pretend we're established — it's to be transparent, publish technical proof, and earn trust one deployment at a time. Enterprise adoption starts when a technical champion uses the free tier, validates it works, and brings it to procurement.


"Security & trust"

The right questions to ask any infrastructure tool. Here's what we can say today:

Where data lives:

  • Secrets are stored on your PodWarden instance, not in PodWarden Hub. If you use the free tier, secrets never leave your network.
  • PodWarden Hub stores catalog metadata, organization info, API key hashes, and cached registry images. It does not store your secrets or cluster credentials.

What happens if Hub is compromised:

  • Your running infrastructure is unaffected. Hub is a convenience layer.
  • You lose access to the catalog, fleet API, and registry cache until it's restored.
  • Local PodWarden instances continue operating independently.

What's not true yet:

  • We don't have SOC 2 or ISO 27001 certification. We won't claim certifications we don't have.
  • For organizations that require formal compliance attestation today, the Enterprise tier includes compliance documentation and security review cooperation.

The full security model — covering key management, agent authentication boundaries, update delivery, and the detailed threat model — will be published as a separate document in these docs. We're being deliberate about only publishing security claims we can back up.


"Will you raise prices?"

Our commitment:

  • Self-hosted PodWarden stays free. It runs on your hardware, so there's no hosting cost on our side that would force a change.
  • Hub pricing may evolve as our costs and features change. We'll give notice and honor existing billing periods.
  • Export and migration paths will always exist. Your clusters are standard K3s. Your templates are portable. If our pricing stops working for you, you can leave without losing your infrastructure.

The best hedge against pricing risk isn't a promise — it's an architecture that doesn't lock you in. That's what we've built.


What we'll change based on feedback

Not every objection requires a change. Here's what we're adjusting:

  1. Security documentation. Publish the full security model as a dedicated doc — not marketing claims scattered across pages.
  2. Architecture proof. Diagrams showing how PodWarden relates to K3s, what happens when it's removed, data flow between instances and cloud.
  3. "How to leave" guide. Step-by-step migration to standard Kubernetes tooling. Nothing builds trust like showing people the exit.
  4. Segment-specific pages. Vertical landing pages for our initial go-to-market segments, linked from the main site.

What we won't change:

  • Unlimited servers on all tiers. Core principle.
  • Free tier scope. The self-hosted tool stays free.
  • No lock-in architecture. Standard K3s, portable templates, kubectl access.