Container orchestration, without ceremony.

One installer. A high-availability cluster in three commands. Five sites running side by side on the same cluster — including this one, served on a quarter of a node's CPU.

install
curl -sSL get.heroctl.com/install.sh | sh
proof, not promise
live
0 sites
in production right now on this cluster
0 versions
promoted without freezing the cluster
0 req/s
sustained by 1 replica · 0 errors
0.0% CPU
on the leader serving the 5 sites
[01]why it exists

Kubernetes was built for Google. Your company isn't Google.

01 · human cost

"4 engineers on 24/7 call. US$ 564k a year just to keep the cluster up."

heroctl: 3 servers + 1 worker handle a thousand nodes. Zero dedicated operator.

source · CNCF Survey 2025
02 · complexity

"80% of Kubernetes incidents come from operational complexity — not the infra."

heroctl: One installer. No external database. No Helm. No CRDs. No extra mesh.

source · Cloud Native Now, 2026
03 · yaml hell

"300 lines of YAML to ship a service. One stray space brings it all down."

heroctl: Ten-line config, validated at submit time. Zero ambiguity.

source · mogenius, 2026
[02]how it works

Three commands to production. Not marketing hype — it's the real path.

Install

A single installer with no external dependencies. Runs on modern Linux. Weighs less than 40 MB.

curl -sSL get.heroctl.com/install.sh | sh
→ heroctl installed in /usr/local/bin
→ Linux x86_64 · 38 MB · checksum OK

Bring up the cluster with high availability

Three servers elect a leader among themselves automatically. Internal certificates, encryption between nodes, and service discovery come switched on. Survives losing a server.

heroctl server --bootstrap --peers=s1,s2,s3
→ leader elected in 2.1s
→ internal certificates distributed
→ ingress up on :80 and :443

Ship the first service

Short config file, validated at submit time. Rolling update, canary or rainbow — you pick by flag.

heroctl run api.yml --canary=25
→ plan: 4 replicas across 3 nodes
→ canary 25% healthy → promoting
→ deploy complete · 0 errors · 12s
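
For orientation, here is what an api.yml of that size might look like. This is a hypothetical sketch: the field names are assumptions, not the product's documented schema; only the behaviour described on this page (a ten-ish-line file, host routing, health checks, strategy picked by flag) is taken from the text.

# api.yml · hypothetical sketch, field names are assumptions
service: api
image: registry.example.com/api:1.4.2
replicas: 4
port: 8080
route:
  host: api.example.com      # host-based routing at the built-in ingress
health: /healthz
update:
  strategy: rolling          # or pick canary / rainbow by flag at run time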
[03]features

What's usually an add-on ships in the box here.

exclusive

Rainbow deploys

Multiple versions living side by side in the same service. Shift traffic by flag, not by pipeline. A/B testing with real users, gradual migration, region-by-region rollout — all declarative.
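
A hedged sketch of the flag-driven flow. Only the --canary=25 flag appears elsewhere on this page; the later steps (widening the split, then promoting) are assumptions for illustration, not documented commands.

heroctl run api.yml --canary=25   # new version enters next to the old one at 25% of traffic
heroctl run api.yml --canary=50   # assumption: widen the split once the canary stays healthy
heroctl run api.yml               # promote: all traffic on the new version, old replicas drained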

built-in

Built-in high availability

Three servers organise themselves. New leader elected in under 15 seconds on failover.

zero-config

Automatic certificates

Encryption between services ships out of the box. The platform issues and rotates certificates on its own.

built-in

Router + TLS integrated

Ingress, Let's Encrypt TLS and host-based routing — all in the service manifest itself.
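
A hypothetical fragment of how that could read inside the same manifest; the key names are assumptions, only the behaviour (host routing plus automatic certificates) comes from this page.

route:
  host: www.example.com      # host-based routing at the ingress
  tls: letsencrypt           # assumption: certificate issued and renewed automatically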

real-time

Real-time observation

Dashboards and API react in under 46ms with 200 nodes. No 30-second polling.

anti-zombie

Zombie self-healing

Four clocks run in parallel: server sync, orphan allocations, isolation networks, terminated containers. Broken specs don't haunt the cluster.

liveness + readiness

Healthy rollover

New servers and containers only enter the routing pool after passing health checks. Liveness and readiness separated, like in Kubernetes.
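
A sketch of how that split might be declared, mirroring the Kubernetes-style separation the text describes; the field names are assumptions.

checks:
  liveness:
    http: /healthz     # restart the container when this stops answering
    interval: 10s
  readiness:
    http: /ready       # only add the container to the routing pool once this passes
    interval: 5s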

[04]pick how to update

Four strategies. One line of difference.

Switch strategy and the orchestrator reconfigures the deploy pipeline, the router and the health checks. No external plugin, no specialised operator, no extra file.

deploy.rolling
update {
  strategy = "rolling"
  max_parallel = 1
  min_healthy_time = "10s"
  healthy_deadline = "3m"
  auto_revert = true
}

Rolling

One at a time, wait until healthy, continue.

The safe default. Each instance comes up, passes the health check, and only then does the next one start. Zero downtime, predictable, easy to audit.

automatic rollback on any of them · zero extra config
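
To make the "one line of difference" concrete: the same block with only the strategy swapped. This assumes canary reuses the same keys as rolling; the canary percentage itself is passed by flag in the run example above.

update {
  strategy = "canary"          # the only line that changes vs. the rolling block
  max_parallel = 1
  min_healthy_time = "10s"
  healthy_deadline = "3m"
  auto_revert = true
}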
[05]console

This is what you operate every day.

No video, no guided tour — click any screen to see it full size.

dashboard · 4 nodes · 9 allocations · 5 jobs
[06]self-healing

Four clocks running all the time so the cluster fixes itself.

Distributed systems drift in silence: an orphan allocation here, a forgotten container there, an unused network piling up. The platform runs four independent sweeps on different cadences and corrects the moment it sees the drift — none of it reaches the operator.

[01]
every 60s

Server sync

Each server checks the state of every active service and quietly aligns with its peers. Subtle discrepancies vanish before turning into incidents.

Zero intervention · as long as servers can talk to each other
[02]
every 5min

Orphan allocations

Services whose owner has been removed (namespace deleted, config wiped) are detected and shut down automatically. No leftovers eating CPU or ports.

Fires alongside deletion · no manual cleanup needed
[03]
every 15min

Isolation networks

Namespace bridges with no active container are reclaimed automatically. IP space preserved, no silent pool exhaustion.

Uses the runtime itself as a safety latch
[04]
every 10min

Terminated containers

Stopped containers stay on disk for 10 minutes for post-mortem inspection, then they're removed. Logs accessible, then space freed.

Built-in audit window · same pattern as Kubernetes
every cadence is configurable
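
If you do want to change a cadence, a server-side block along these lines is one way it could be expressed; the key names and the file they live in are assumptions, only the default intervals come from the cards above.

reconcile {
  server_sync          = "60s"   # [01] server sync
  orphan_allocations   = "5m"    # [02] orphan allocations
  isolation_networks   = "15m"   # [03] isolation networks
  terminated_retention = "10m"   # [04] post-mortem window before cleanup
}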
[07]honestly

Side by side. No exaggeration. No retouching.

feature · kubernetes · docker swarm · nomad · heroctl

Installation
  kubernetes: Dozens of components
  docker swarm: One command (ships with Docker)
  nomad: Three separate tools
  heroctl: One command

Ingress / TLS
  kubernetes: Controller + external cert manager
  docker swarm: Internal router + cert on the side
  nomad: Gateway + extra service
  heroctl: All integrated

Encryption between services
  kubernetes: Cert manager + CRDs
  docker swarm: Automatic
  nomad: External secrets service
  heroctl: Automatic

Service manifest
  kubernetes: 300+ lines of YAML
  docker swarm: docker-compose
  nomad: Custom config file
  heroctl: Ten lines · validated on submit

High availability
  kubernetes: External database
  docker swarm: Built-in
  nomad: Built-in but separate
  heroctl: Built-in

Rainbow deploy
  kubernetes: Service mesh + custom integration
  docker swarm: Not supported
  nomad: External plugin
  heroctl: By flag

Active development
  kubernetes: Heavy
  docker swarm: Maintained but no major news since 2019
  nomad: Active
  heroctl: Active · public roadmap

Dedicated engineers
  kubernetes: 4 or more
  docker swarm: 1 to 2
  nomad: 1 to 2
  heroctl: None

// Kubernetes is for teams with a dedicated infra team that can take the hit: people, ecosystem, months of onboarding, a bigger cloud bill. Docker Swarm covers anyone who just wants to ship a compose file and move on. In the middle — where most real teams live — that's what HeroCtl is for. One command puts you in production.

[08]less to maintain

A stack vs. a binary.

To put an HTTPS site in production with high availability, every alternative requires assembling a stack. HeroCtl is the stack.

kubernetes · 12 comp.
self-hosted in production
  • kube-apiserver
  • etcd
  • controller-manager
  • scheduler
  • kubelet
  • kube-proxy
  • CNI (Calico/Cilium)
  • CoreDNS
  • cert-manager
  • ingress-nginx
  • ExternalDNS
  • metrics-server
rancher · 6 comp.
Rancher + RKE2 in production
  • Rancher Manager
  • RKE2 (= Kubernetes)
  • cert-manager
  • ingress-nginx
  • ExternalDNS
  • etcd
nomad · 5 comp.
HashiCorp web-ready stack
  • Nomad
  • Consul
  • Consul Connect
  • Vault
  • Traefik/Envoy
heroctl · 1 comp.
one binary
  • heroctl
includes everything to the left
[08·b]minimum footprint

Our entire cluster fits where their control plane starts.

rancher · HA control plane
management plane only, no workloads
cpu 12 vCPU · ram 24 GB

kubernetes · self-hosted control plane
api-server + etcd + controller + scheduler in HA
cpu 6 vCPU · ram 12 GB

nomad + consul + vault · servers
3 separate Raft rings, control plane only
cpu 6 vCPU · ram 12 GB

heroctl · whole cluster serving 5 sites
3 servers + 1 worker. Includes control plane, ingress, TLS and the 5 workloads.
cpu 5 vCPU · ram 10 GB

// Minimum numbers from each product's own docs (Apr 2026). Our line is measured on the cluster serving this site. Managed offerings like EKS/GKE take the weight off your side, but they show up as a fixed bill per cluster and still need the ingress, TLS and mTLS add-ons every team ends up operating.

[09]ruthless honesty

Where HeroCtl isn't the best choice.

Three situations where another tool fits better. All rare — and not always as absolute as they look at first glance. If none is yours, you're sitting right in the middle of the curve HeroCtl was designed for.

not here

Planet-scale — today

If you already operate 10,000+ nodes today across multiple active regions, with federation, service mesh and complex geographic routing.

use

Kubernetes + Istio + Argo is state of the art, and it charges for it: 4+ dedicated SREs 24/7 and a cloud bill in another league. HeroCtl scales from 1 node to a few thousand in one region. By the time you actually hit CNCF scale, the team you built here will have the experience to make that migration.

not here

Certification with a named product

If your auditor demands a product named in the contract with a FedRAMP High seal, PCI Level 1 or formal CNCF compliance, signed by an enterprise vendor with an SLA.

use

Red Hat OpenShift or Rancher Prime have the seal. The technical controls the seal requires — encryption in transit, isolation, audit — are here. What's missing is the signed paperwork.

not here

Deep operator library

If your platform team already maintains dozens of operators from the K8s ecosystem — Postgres, Kafka, Crossplane, Prometheus — and per-database abstractions are an internal product.

use

Kubernetes. That ecosystem is its real strength. For most teams, by the way, managed services (RDS, Redis Cloud, MSK) solve the same problem without a single operator — and without becoming a platform team.

Three edge cases. All rare. Everything else — teams of 3 to 50 people running dozens to a few thousand containers across 1, 2 or dozens of servers — is the vast majority. And it's exactly what HeroCtl solves with one installer and three commands, no dedicated platform team, no enterprise license, no cloud bill that surprises you at the end of the month.

[10]measured on Apr 23, 2026

The numbers came out of the cluster. No extrapolation.

Real cluster of 4 nodes · 5 vCPU · ~10 GB RAM, serving 5 live sites. Load generated from the public edge — full path internet → router → container → response. Tools: ab and hey.
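
The shape of the runs behind the edge numbers below, using the two tools named above; both invocations are standard, the exact URL and durations here are illustrative.

hey -z 30s -c 20 https://heroctl.com/    # 20 concurrent connections for 30 s, from outside the cluster
ab -t 30 -c 20 https://heroctl.com/      # the same shape of run with ApacheBench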

observed capacity

~100 req/s per replica

1 replica · 128 CPU shares · 64 MB RAM · nginx:alpine serving static HTML. Above that, the router returns 503 instead of bringing the container down.
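
The same limits written out as a manifest fragment, for concreteness; the key names are assumptions, the values are the ones quoted above.

resources:
  cpu_shares: 128    # relative CPU weight, Docker-style shares
  memory: 64MB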

chaos battery · 17 / 19 failure scenarios recovered on their own
public edge → heroctl.com · c=20 · 30 s · no errors
01 · Median response
0 ms
02 · P95 response
0 ms
03 · P99 response
0 ms
04 · Sustained throughput
0 req/s
05 · Errors
0 / 2,937
control plane · real load from the 5 live sites
01 · API P50 (control plane)
0 ms
02 · API P99 (control plane)
0 ms
03 · Raft lag (commit → apply)
0 ms
04 · Leader CPU serving 5 sites
0.0 %
05 · Leader RAM
0 MB / 1,963
ready to start

Stop administering
your orchestrator.

38 seconds to install. Three minutes to bring up the cluster. After that, back to building product.

no signup · no credit card · no salesperson calling later