k3s vs HeroCtl: when you need lightweight Kubernetes and when you don't need Kubernetes

k3s is Kubernetes packaged to fit in 512 MB. For those who already speak K8s and want to take it somewhere smaller. HeroCtl is a DIFFERENT layer from Kubernetes. Here's how to decide between the two without mixing premises.

HeroCtl team · 14 min read

The question arrives in our inbox almost weekly: "you're like a k3s, right?". The short answer is no. The long answer starts by realizing that k3s and HeroCtl get confused because they occupy the same mental space — "orchestration without the complexity of full Kubernetes" — but solve problems that only look the same from outside. k3s is Kubernetes, distilled. HeroCtl is not Kubernetes, and that difference changes everything that comes after: what you read, what you install, who you hire, what you celebrate on Saturdays.

This post is for tech leads who know K8s well enough to have scars and are considering a lighter alternative. The intent is not to convince anyone to abandon Kubernetes — Kubernetes is the right choice for many cases. The intent is to give you the map to decide between k3s and HeroCtl without mixing their premises.

What k3s is, exactly

k3s is a Kubernetes distribution maintained by Rancher (now SUSE). It is full Kubernetes and CNCF-certified — the same API, the same controllers, the same object model. What changes is the packaging.

Instead of five to seven separate components running as system services, k3s ships a single binary of about 50 MB that boots the API server, scheduler, controller manager, kubelet and container runtime in a single process tree. Default storage is SQLite instead of etcd, Kubernetes' traditional distributed store — you can switch to embedded etcd when you want real high availability, or point k3s at an external SQL database via the kine driver. Cloud provider plugins were removed from the binary — if you need them, you install them separately.

The installation fits in one command (curl -sfL https://get.k3s.io | sh -), comes up in less than 30 seconds on a modest server, and the minimum RAM requirement is 512 MB. It works on a Raspberry Pi. It works on a $5 VPS. It works on industrial fanless hardware running inside a steel box in a factory.

The most important point: kubectl, K8s manifests, operators, templating charts and everything else you learned about Kubernetes work identically. A k3s cluster accepts the same YAML files an AWS managed cluster accepts. Migrating from one to the other is, in practice, copying manifests. Compatibility is the feature.

What k3s does NOT do, even though it's "lightweight"

The word "lightweight" is deceptive. k3s is light in footprint (RAM, disk, number of processes), not in mental model. What it removes is the installation barrier and the dependency on five external services just to come up. What it keeps is everything that makes Kubernetes Kubernetes:

  • A "hello world" manifest still runs past 100 lines once you add a Service, an Ingress and a ConfigMap. Add automatic TLS and minimal RBAC and it goes past 300.
  • You still need to understand namespaces, services, ingress, persistent volumes, secrets, configmaps, RBAC, network policies, pod disruption budgets, liveness/readiness probes, init containers, sidecars, taints, tolerations, affinity rules, and so on.
  • Operators and templating charts continue to be the idiomatic path for anything non-trivial. Replicated Postgres? Operator. Kafka? Operator. Automatic certificates? Operator. Metrics? A three-product stack.
  • The learning curve is practically the same. k3s removes maybe 10% — the "install and maintain the control plane" piece. The remaining 90% — understanding how the system models applications, how controllers reconcile state, how to debug when a probe starts failing — remain there, intact.
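To calibrate that first point, here is a condensed sketch of the Kubernetes side of a "hello world" — Deployment, Service and Ingress only, with the labels, resource limits, probes, RBAC and TLS annotations that real manifests add on top all omitted (image and hostname are illustrative):

```yaml
# Condensed "hello world" skeleton: Deployment + Service + Ingress.
# Real manifests layer labels, limits, probes, RBAC and TLS on top.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello
spec:
  replicas: 3
  selector:
    matchLabels: {app: hello}
  template:
    metadata:
      labels: {app: hello}
    spec:
      containers:
        - name: hello
          image: ghcr.io/acme/hello:1.0.0   # illustrative image
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: hello
spec:
  selector: {app: hello}
  ports:
    - port: 80
      targetPort: 8080
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello
spec:
  rules:
    - host: hello.example.com      # illustrative hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: hello
                port: {number: 80}
```

Even trimmed to the skeleton, three API objects and three selectors must agree with each other before traffic flows.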

If a veteran SRE looks at a k3s manifest, they feel at home. If a product developer who has never touched Kubernetes looks at that same manifest, they feel exactly as lost as if they were looking at a managed cluster manifest.

Who should use k3s (real profile)

Let's be concrete. k3s makes sense for:

Teams that already speak fluent Kubernetes and want to run on cheap hardware. Edge computing, IoT, physical stores with local server, factories with industrial gateways, on-prem environments with modest hardware. The team already knows how to operate K8s — k3s only allows them to take that expertise to places where a 4 GB RAM control plane would be unfeasible.

Companies migrating from managed Kubernetes to self-managed to reduce cost. A managed cluster on a cloud provider charges about US$73/month just for the control plane, multiplied by the number of clusters. Add NAT, load balancers, observability — it gets expensive. Whoever already paid that toll and wants to stop can spin up k3s on commodity VPS and cut the bill by an order of magnitude. Operations don't get simpler; the bill gets smaller.

Workloads that depend on the CNCF ecosystem. Mature operators for Postgres with automatic replication (CloudNativePG, Zalando), Kafka (Strimzi), Cassandra, Elasticsearch — these operators exist because someone invested three years polishing them. If your architecture depends on four of them in production, you want full Kubernetes, and k3s gives you full Kubernetes.

Those who want K8s-compatible tools working 1:1. kubectl, templating charts, ArgoCD for GitOps, image scanning tools, policy tools like OPA Gatekeeper. If your existing CI/CD pipeline uses these tools, k3s keeps all of them working without adaptation.

Compliance that requires a CNCF-certified distribution. Some audit frameworks nominally ask for a certified distribution. k3s appears on that list. HeroCtl doesn't — we are too young to be on any list, and our proposal is different enough that some lists may never include us.

What HeroCtl is, exactly

HeroCtl is an independent orchestrator. It's not a distribution derived from Kubernetes; it doesn't share the API; it doesn't use the same primitives. It's a different layer that addresses a similar intent — running containers across multiple servers with real high availability — using another vocabulary and other design decisions.

Concretely: a single executable file that you install on N Linux servers. The first three become the quorum for the replicated control plane. You submit jobs via CLI, API or the embedded web panel. A job is a configuration file of about 50 lines that describes the entire application — including replicas, ingress, certificates, secrets. The cluster decides where to run it, runs health checks, manages rolling-update deploys, and issues certificates automatically via the integrated router.
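To make "about 50 lines" concrete, here is an illustrative sketch of what such a job description could look like. Every field name below is invented for illustration — this is not HeroCtl's real schema, and the product's documentation is the source of truth for the actual syntax:

```yaml
# Hypothetical HeroCtl job spec — all field names are invented
# for illustration and are NOT the product's real schema.
job: web
image: ghcr.io/acme/web:1.4.2      # illustrative image reference
replicas: 3
port: 8080
ingress:
  host: app.example.com
  tls: auto                        # automatic certificates, per the article
secrets:
  - DATABASE_URL                   # injected from the cluster's key store
health:
  path: /healthz
  interval: 10s
update:
  strategy: rolling
  max_unavailable: 1
```

The point of the comparison is density: replicas, routing, TLS, secrets and health checks live in one file, instead of being spread across several API objects.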

There are no specialized operators to install, no observability stacks to assemble separately, no service mesh to configure on the side. Persistent metrics run as a job of the system itself. Logs have a single embedded writer. Encryption between services and key management come ready. Ingress with automatic TLS is part of the binary.

The consequence is a short operational model. Bringing up a new application is describe, submit, wait — and the cluster handles routing, certificate, replication, metrics and health check without you installing anything extra.

What HeroCtl does NOT do (honest limits)

It is not compatible with the Kubernetes API. kubectl doesn't talk to HeroCtl. Templating charts don't run. K8s manifests are not accepted. If your critical dependency is a CNCF ecosystem tool that talks to the Kubernetes API, HeroCtl doesn't replace it — it's a different tool, with its own vocabulary.

It has no specialized operator ecosystem. There's no mature Postgres operator with automatic replication waiting to be installed. You run Postgres as a regular job and handle backup and replication yourself rather than delegating to an external controller. For many teams that's a relief; for others it's a regression.

The recommended scale range goes from 1 to 500 servers. We've tested up to hundreds of nodes in the lab and validated a few dozen in production. Above that, Kubernetes (full or in a distribution like k3s) wins by ecosystem — multi-cluster federation tools, cross-region autoscaling, and storage migration primitives between clouds exist there and don't yet exist here.

Multi-cluster federation is not native. If you need multiple regions orchestrated as a single surface, with workloads moving automatically between them, tools like Rancher Fleet or Kubernetes multi-cluster features solve it today. HeroCtl doesn't.

Compliance that nominally lists Kubernetes. If your certification nominally requires a CNCF-certified distribution, HeroCtl doesn't comply — we are a new product, too young to appear on established lists. k3s, OpenShift and Talos comply. That's the path.

Who should use HeroCtl (real profile)

Teams that DON'T want to learn Kubernetes but need orchestration with real high availability. Popular self-hosted panels work well on one server but have no distributed consensus — when you want three servers tolerating the loss of one without downtime, those panels fall short. Kubernetes would do it, but it costs an SRE on the team. HeroCtl is the missing middle.

Indie hackers and startups up to about R$1M annual revenue. Typical stack: web application, relational database, async queue, cache. There's no Kafka, there's no Cassandra, there are no seven database operators. For this profile, the CNCF ecosystem is expensive idle capacity — you pay in learning curve and operational complexity without using what you pay for.

Typical web applications without exotic dependencies. HTTP on top of a SQL database and an in-memory cache covers maybe 70% of the SaaS market. For that slice, Kubernetes is overkill and HeroCtl is the right size.

Those who want "Coolify simplicity with real high availability". Coolify, Dokploy and similar got the experience right but missed high availability. Kubernetes got high availability right but missed the experience. HeroCtl tries to get both right at the cost of not being Kubernetes.

LGPD-only compliance. If your compliance is LGPD and Brazilian commercial contracts, without FedRAMP nor ITAR on the horizon, the absence of specific certifications is not a blocker.

Side by side, no fluff

The table below covers the criteria that show up most in the decision. Each row has a caveat — read the text.

| Criterion | k3s | HeroCtl |
| --- | --- | --- |
| Product type | Kubernetes distribution | Independent orchestrator, non-Kubernetes |
| Lines for hello world + TLS + ingress | 200–300 (manifests + TLS operator) | ~50 (job spec) |
| Minimum total RAM in cluster | 512 MB per node (1.5 GB on 3 HA nodes) | ~600 MB for the control plane (200–400 MB per node × 3) |
| Learning curve | 8–16 weeks (full K8s curve) | 1–2 weeks |
| kubectl + templating chart compatibility | Full | None — own vocabulary |
| Integrated router | No — installed separately | Yes, embedded |
| Embedded automatic certificates | No — external operator | Yes, embedded |
| Embedded metrics | No — external 3-product stack | Yes, the system's own job |
| Centralized logs | No — external 2-product stack | Yes, single embedded writer |
| Operator ecosystem | Vast (hundreds) | None — workloads as regular jobs |
| Recommended scale range | 1 node to 10k+ | 1 to 500 servers |
| Commercial model | Open source (Apache 2.0), supported by SUSE | Free Community + paid Business + Enterprise |

The column that matters most varies by context. For a team that already has K8s expertise, "kubectl compatibility" weighs a lot. For a team that's starting out, "lines for hello world" and "learning curve" weigh more.

When both are in the conversation (practical decision)

Five real scenarios that show up in conversations with readers. The answers are direct because reality is direct.

"We already have managed Kubernetes and it hurts operationally." k3s reduces cloud cost because you exit the paid control plane. The operational pain remains — long manifests, TLS operators, external observability stacks. You save on the bill but not on time. HeroCtl reduces the pain at the root, but requires learning another tool and rewriting your specs in its primitives. If the pain is financial, k3s. If the pain is engineering time, HeroCtl.

"We're just starting, we want something simple." HeroCtl. Kubernetes (full or k3s) adds months of learning curve that don't generate product value in the early phase. You spend three months learning templating charts and ingress controllers instead of shipping features. In early-stage, opportunity cost is everything.

"Compliance requires a CNCF-certified distribution." k3s or Talos. HeroCtl doesn't fulfill that list. It's not pride — it's honesty. When we're ready for those lists, we'll talk again.

"Team has 1 strong SRE who loves Kubernetes." k3s. Keeps the SRE happy, preserves all the team's existing knowledge, and still cuts the cloud bill. HeroCtl would force the SRE to re-learn and abandon tools they master — unnecessary friction when expertise is already paid for.

"Team has 0 SREs and grows by product." HeroCtl. Kubernetes without expertise is a recipe for disaster — you'll discover what a pod stuck in CrashLoopBackOff is on a Friday night, without the context to debug it. HeroCtl is sized for a team that doesn't have a dedicated infra person on call.

The improbable migration

Migrating from k3s to HeroCtl, or vice versa, is an operation that seems worse than it is.

Conceptually, the two are similar. Both run containers, both have replica notion, both have ingress, both have health check, both have rolling update deploy. If you know how to do one, you know how to think about the other.

Syntactically, they are incompatible. A Kubernetes manifest doesn't convert 1:1 to a HeroCtl job spec. Fields don't match, the abstractions aren't the same, the defaults are different. You rewrite.

Rewriting isn't as expensive as it seems. For a typical team with 20 to 40 specs in production, the migration takes an afternoon. The reason is that most K8s manifests have huge structural repetition — 80% of the fields are boilerplate, and you discover the mapping quickly. For teams with a few dozen jobs, manual conversion suffices. Above that, we're open to talking about experimental converters.
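A rough sketch of what that mapping tends to look like. The right-hand names are invented for illustration — they are not HeroCtl's real fields, just a stand-in for the shape of the exercise:

```yaml
# Typical correspondence when rewriting a K8s app as a single job spec.
# Right-hand field names are hypothetical — check the HeroCtl docs.
#
# Deployment.spec.replicas                        -> replicas
# Deployment.spec.template...containers[0].image  -> image
# Service.spec.ports[0].targetPort                -> port
# Ingress.spec.rules[0].host                      -> ingress.host
# cert-manager annotations + Ingress TLS section  -> ingress.tls: auto
# livenessProbe.httpGet.path                      -> health.path
# RollingUpdate strategy + maxUnavailable         -> update.max_unavailable
```

Most of the afternoon goes into the first two or three specs; after that the mapping is mechanical.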

The migration in the other direction (HeroCtl → k3s) is more expensive, because you're leaving a lean model for a verbose model. You gain ecosystem; you pay in verbosity.

Concrete cost compared

Scenario: 4 VPS at a low-cost European provider, each with 4 vCPU and 8 GB of RAM. Infrastructure cost: about R$100/month per VPS, R$400/month total.

Self-managed k3s in this scenario costs R$400/month of infra plus a slice of an SRE's salary. A strong SRE in Brazil costs a full R$15k to R$25k/month. Even if you allocate only 30% of their time to the cluster — which is optimistic for a small team — that's R$4.5k to R$7.5k of people cost. Total: R$4.9k to R$7.9k/month.

HeroCtl Community in the same scenario costs R$400/month of infra plus part-time dev allocation to the cluster. Since the operational model is shorter, 10% of a senior dev's time suffices — R$1.5k to R$2.5k/month. Total: R$1.9k to R$2.9k/month.
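The arithmetic is simple enough to check mechanically. A small sketch using the scenario's own inputs — the salary bands and time fractions are the article's estimates, not measurements:

```python
# Back-of-the-envelope monthly totals for the scenario above.
# All values in R$/month; "pct" is the share of a person's time
# spent operating the cluster.

INFRA = 4 * 100  # four VPS at ~R$100/month each

def monthly_total(salary_low, salary_high, pct):
    """Infra cost plus the salary slice allocated to the cluster."""
    return (INFRA + salary_low * pct // 100,
            INFRA + salary_high * pct // 100)

k3s = monthly_total(15_000, 25_000, 30)      # strong SRE, ~30% allocated
heroctl = monthly_total(15_000, 25_000, 10)  # senior dev, ~10% allocated

print(k3s)      # (4900, 7900)
print(heroctl)  # (1900, 2900)
```

Changing the allocation percentage is the whole model: the infra term is fixed, so the gap between the two columns is purely the people term.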

The difference is in salaries. k3s asks for more expertise; more expertise costs more. The infra cost is practically the same.

This calculation flips when the team already has an SRE paid for independently of the orchestrator choice. If the SRE exists for the rest of the stack anyway, the marginal cost of operating k3s is low, and the CNCF ecosystem becomes worth its weight in gold. That's a different type of company.

FAQ

Is k3s still full Kubernetes? Yes. k3s is CNCF-certified as a conformant distribution. The same manifests run, kubectl works identically, the API is the same. What was removed were dependencies and cloud plugins — not the API or its semantics.

Can HeroCtl run on Raspberry Pi like k3s? Technically, yes — HeroCtl runs on any Linux server with Docker, including ARM. Practically, the "edge on Raspberry Pi" use case is territory where k3s has years of polish and HeroCtl hasn't been exercised enough yet. If your use is industrial edge on modest ARM hardware, k3s is the more proven choice today. HeroCtl on Pi works for hobby; for edge production, wait a few more quarters.

Does kubectl work on HeroCtl? No. HeroCtl has its own CLI and own API. The intent was different from the start — we don't try to be Kubernetes-compatible. Whoever wants kubectl wants Kubernetes; that's the right tool for that person.

How do you migrate from managed Kubernetes to k3s? Most manifests run directly. The usual exceptions are cloud provider-specific annotations (load balancer, storage class), native IAM integrations and any ingress controller that assumed cloud infrastructure. You swap in CNCF ecosystem equivalents (MetalLB for load balancing, Longhorn or local storage for volumes) and redo the annotations. For a cluster with a few dozen manifests, the migration takes a few days.
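For the load-balancer part of that swap, a minimal sketch of MetalLB's layer-2 configuration. The address range is illustrative — use a free range on your network — and note that k3s ships its own lightweight service load balancer, which you would typically disable before installing MetalLB:

```yaml
# Minimal MetalLB layer-2 setup for a bare-metal k3s cluster.
# The address range below is illustrative.
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: default-pool
  namespace: metallb-system
spec:
  addresses:
    - 192.168.1.240-192.168.1.250
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: default
  namespace: metallb-system
spec:
  ipAddressPools:
    - default-pool
```

With this in place, Services of type LoadBalancer get an address from the pool instead of waiting on a cloud provider.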

Does HeroCtl have multi-region like Rancher Fleet? Not natively. Today the control plane quorum is sized for one cluster per region. You can operate HeroCtl across multiple regions in parallel, each with its cluster, but there's no federation layer today that presents all as a single surface. It's on the roadmap. Whoever needs that today, k3s + Rancher Fleet or full Kubernetes + Karmada are exercised paths.

Which scales higher? Kubernetes (full or k3s). Companies operate K8s clusters with tens of thousands of nodes in production. HeroCtl aims at the "1 to 500 servers" range and doesn't intend to compete above that. If you operate at the level of hundreds of thousands of machines, K8s is the path — not by choice, by sizing.

Can I run both side by side? Yes. Both are orchestrators that run containers on Linux servers. You can have a k3s cluster for workloads that depend on the CNCF ecosystem and a HeroCtl cluster for typical web apps — they don't conflict, they are different products. Some of our customers do exactly that: HeroCtl for the main product, k3s for a test environment that needs to mimic the end client's managed production cluster.

Honest closing

The initial question — "you're like a k3s, right?" — now has a long answer. We are not. k3s is Kubernetes packaged to fit modest hardware, keeping the entire learning curve and the entire ecosystem intact. HeroCtl is a different layer from Kubernetes, with its own vocabulary, shorter operational model, no operator ecosystem and no kubectl compatibility.

If you already speak fluent Kubernetes and want to take that expertise to cheap hardware or edge, k3s is the choice. If you never wanted to learn Kubernetes but need orchestration with real high availability, HeroCtl is the choice. If your pain is compliance that lists certified distributions, k3s or Talos. If your pain is engineering time spent on long manifests and external operators, HeroCtl.

There's no universal winner — there are tools that match different contexts. The wrong choice is neither k3s nor HeroCtl; it's adopting either one without understanding what problem you're really solving.

If you want to try HeroCtl on the nearest server, the path is a single command:

curl -sSL get.heroctl.com/install.sh | sh

Five minutes later you have a single-node cluster running. Add two more servers with the same command + token, and you have real high availability — without installing an external operator, without assembling an observability stack, without learning new manifest vocabulary.

To continue reading, two posts speak directly with this one: Kubernetes is overkill: when you don't need it addresses the general decision not to adopt K8s; AWS ECS vs Kubernetes vs self-hosted compares the three families when the cloud server is on the table. And to understand why we exist as a separate product instead of being yet another K8s distribution, why we built HeroCtl has the complete history.

The intent, as always, is simple: container orchestration, without ceremony — but with honesty about when ceremony is what you need.

#k3s #kubernetes #comparison #edge #lightweight