Rainbow deploys
Multiple versions living side by side in the same service. Shift traffic by flag, not by pipeline. A/B testing with real users, gradual migration, region-by-region rollout — all declarative.
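As a rough mental model (the names and shapes below are hypothetical illustrations, not HeroCtl's real config or API), a rainbow deploy boils down to weighted routing over a piece of data — shifting traffic is a data change, not a pipeline run:

```python
import random

# Hypothetical traffic split across versions of one service.
# Weights stand in for whatever a rainbow-deploy flag would set.
TRAFFIC_SPLIT = {"v1": 80, "v2": 15, "v3-experimental": 5}

def pick_version(split: dict[str, int]) -> str:
    """Choose a version for one request, proportionally to its weight."""
    versions = list(split)
    weights = list(split.values())
    return random.choices(versions, weights=weights, k=1)[0]

# Gradual migration is just nudging the numbers:
TRAFFIC_SPLIT["v2"] += 20
TRAFFIC_SPLIT["v1"] -= 20
```

The point of the sketch: because the split is declarative data, A/B tests and region-by-region rollouts are edits to a weight table, not new pipeline stages.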
One installer. A high-availability cluster in three commands. Five sites running side by side on the same cluster — including this one, served on a quarter of a node's CPU.
"4 engineers on 24/7 call. US$ 564k a year just to keep the cluster up."
heroctl: 3 servers + 1 worker handle a thousand nodes. Zero dedicated operator.
"80% of Kubernetes incidents come from operational complexity — not the infra."
heroctl: One installer. No external database. No Helm. No CRDs. No extra mesh.
"300 lines of YAML to ship a service. One stray space brings it all down."
heroctl: Ten-line config, validated at submit time. Zero ambiguity.
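A minimal sketch of what submit-time validation means in practice (field names are assumptions for illustration, not HeroCtl's real schema): the manifest is rejected with concrete errors before anything touches the cluster, rather than failing at deploy time over a stray space.

```python
# Hypothetical manifest schema: required field -> expected type.
REQUIRED = {"name": str, "image": str, "replicas": int}

def validate(manifest: dict) -> list[str]:
    """Return every problem found, or an empty list if the manifest is valid."""
    errors = []
    for field, typ in REQUIRED.items():
        if field not in manifest:
            errors.append(f"missing field: {field}")
        elif not isinstance(manifest[field], typ):
            errors.append(f"{field}: expected {typ.__name__}")
    # Semantic checks run in the same pass, not at deploy time.
    if isinstance(manifest.get("replicas"), int) and manifest["replicas"] < 1:
        errors.append("replicas: must be >= 1")
    return errors
```

Collecting all errors in one pass, instead of stopping at the first, is what makes a ten-line config fast to fix.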
A single installer with no external dependencies. Runs on modern Linux. Weighs less than 40 MB.
Three servers elect themselves automatically. Internal certificates, encryption between nodes, and service discovery come switched on. Survives losing a server.
Short config file, validated at submit time. Rolling update, canary or rainbow — you pick by flag.
Three servers organise themselves. New leader elected in under 15 seconds on failover.
Encryption between services ships out of the box. The platform issues and rotates certificates on its own.
Ingress, Let's Encrypt TLS and host-based routing — all in the service manifest itself.
Dashboards and API react in under 46ms with 200 nodes. No 30-second polling.
Four timers run in parallel: server sync, orphan allocations, isolation networks, terminated containers. Broken specs don't haunt the cluster.
New servers and containers only enter the routing pool after passing health checks. Liveness and readiness separated, like in Kubernetes.
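The liveness/readiness distinction can be sketched in a few lines (a simplified model, not HeroCtl's internals): a live-but-not-ready instance is still warming up and is simply kept out of the pool, while a not-live instance is a candidate for restart — neither receives traffic.

```python
from dataclasses import dataclass

@dataclass
class Instance:
    name: str
    live: bool    # liveness: does the process respond at all?
    ready: bool   # readiness: is it warmed up and able to serve?

def routing_pool(instances: list[Instance]) -> list[str]:
    """Only instances passing BOTH checks enter the routing pool."""
    return [i.name for i in instances if i.live and i.ready]
```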
Switch strategy and the orchestrator reconfigures the deploy pipeline, the router and the health checks. No external plugin, no specialised operator, no extra file.
Rolling
One at a time, wait until healthy, continue.
The safe default. Each instance comes up, passes the health check, and only then does the next one start. Zero downtime, predictable, easy to audit.
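The loop above is simple enough to write down. This is a generic sketch of the strategy, not HeroCtl's implementation; the callbacks are placeholders for whatever starts, stops, and probes an instance.

```python
import time

def rolling_update(old_instances, start_new, stop, is_healthy, timeout=60.0):
    """One at a time: start a replacement, wait until healthy, retire the old copy."""
    for old in old_instances:
        new = start_new()
        deadline = time.monotonic() + timeout
        while not is_healthy(new):
            if time.monotonic() > deadline:
                stop(new)  # tear down the failed replacement; old copy keeps serving
                raise RuntimeError("new instance never became healthy; rollout stopped")
            time.sleep(0.01)
        stop(old)  # only retire the old copy once its replacement is serving
```

The safety property falls out of the ordering: at every moment at least the original number of healthy instances minus one is serving, and a bad image stops the rollout at the first instance instead of taking out the fleet.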
No video, no guided tour — click any screen to see it full size.
Distributed systems drift in silence: an orphan allocation here, a forgotten container there, an unused network piling up. The platform runs four independent sweeps on different cadences and corrects the moment it sees the drift — none of it reaches the operator.
Each server checks the state of every active service and quietly aligns with its peers. Subtle discrepancies vanish before turning into incidents.
Services whose owner has been removed (namespace deleted, config wiped) are detected and shut down automatically. No leftovers eating CPU or ports.
Namespace bridges with no active container are reclaimed automatically. IP space preserved, no silent pool exhaustion.
Stopped containers stay on disk for 10 minutes for post-mortem inspection, then they're removed. Logs accessible, then space freed.
// Kubernetes is for organisations with a dedicated infra team that can absorb the cost: people, ecosystem, months of onboarding, a bigger cloud bill. Docker Swarm covers anyone who just wants to ship a compose file and move on. In the middle — where most real teams live — that's what HeroCtl is for. One command puts you in production.
To put an HTTPS site in production with high availability, every alternative requires assembling a stack. HeroCtl is the stack.
// Minimum numbers from each product's own docs (Apr 2026). Our line is measured on the cluster serving this site. Managed offerings like EKS/GKE shift the operational weight off your team, but they show up as a fixed bill per cluster and still need the ingress, TLS and mTLS add-ons every team ends up operating.
Three situations where another tool fits better. All rare — and not always as absolute as they look at first glance. If none is yours, you're sitting right in the middle of the curve HeroCtl was designed for.
If you already operate 10,000+ nodes today across multiple active regions, with federation, service mesh and complex geographic routing.
Use Kubernetes + Istio + Argo. It's state of the art — and it charges for it: 4+ dedicated SREs on call 24/7 and a cloud bill in another league. HeroCtl scales from 1 node to a few thousand in one region. By the time you actually reach CNCF scale, the team you built here will have the expertise to migrate.
If your auditor demands a product named in the contract with a FedRAMP High seal, PCI Level 1 or formal CNCF compliance, signed by an enterprise vendor with an SLA.
Use Red Hat OpenShift or Rancher Prime — they have the seal. The technical controls the seal requires — encryption in transit, isolation, audit — are here. What's missing is the signed paperwork.
If your platform team already maintains dozens of operators from the K8s ecosystem — Postgres, Kafka, Crossplane, Prometheus — and per-database abstractions are an internal product.
Use Kubernetes. That ecosystem is its real strength. For most teams, though, managed services (RDS, Redis Cloud, MSK) solve the same problem without a single operator — and without becoming a platform team.
Three edge cases. All rare. Everything else — teams of 3 to 50 people running dozens to a few thousand containers across 1, 2 or dozens of servers — is the vast majority. And it's exactly what HeroCtl solves with one installer and three commands, no dedicated platform team, no enterprise license, no cloud bill that surprises you at the end of the month.
Real cluster of 4 nodes · 5 vCPU · ~10 GB, serving 5 live sites. Load generated from the public edge — full path internet → router → container → response. Tools: ab and hey.
~100 req/s per replica
1 replica · 128 CPU shares · 64 MB RAM · nginx:alpine serving static HTML. Above that, the router returns 503 instead of bringing the container down.
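Returning 503 above the ceiling instead of queueing is classic load shedding. A minimal token-bucket sketch of the idea (illustrative only — not HeroCtl's router, and the ~100 req/s figure is just the measured ceiling from above used as the limit):

```python
import time

class Shedder:
    """Admit requests up to limit_rps; answer 503 beyond it, so the
    replica degrades by refusing work rather than by falling over."""

    def __init__(self, limit_rps: float):
        self.limit = limit_rps
        self.tokens = limit_rps          # start with a full bucket
        self.last = time.monotonic()

    def admit(self) -> int:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at the limit.
        self.tokens = min(self.limit, self.tokens + (now - self.last) * self.limit)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return 200
        return 503
```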
38 seconds to install. Three minutes to bring up the cluster. After that, back to building product.
no signup · no credit card · no salesperson calling later