Self-hosted Heroku in 2026: the state of the segment

Since Salesforce killed the Heroku free plan in November 2022, dozens of self-hosted alternatives have emerged. An honest map of the segment and how to choose.

HeroCtl team · 15 min read

On November 28, 2022, Salesforce shut down the Heroku free plan. The company that bought Heroku in 2010 followed through on the notice it had published three months earlier — all free dynos, all hobby databases, all free Redis instances were terminated in the same window. Hundreds of thousands of hobby projects, MVPs, portfolio demos, and university prototypes went offline at once.

The reaction was predictable. Those who had a card on file migrated to a paid plan and moved on. Those who didn't looked elsewhere. And in the following weeks, a movement that had existed in a dormant state since 2013 exploded: "self-hosted Heroku".

In 2026, three and a half years later, the segment matured. There are at least six serious products competing for attention, plus a handful of hosted projects that sell themselves as "Heroku-like" without the self. This post is the map.

Why "self-hosted Heroku" became a category

The first thing that needs to be said is that Heroku solved the right problem. In 2010, deploying a web app took one of two forms: upload a tarball to a server you administered, or pay a lot for someone to administer it for you. Heroku invented the third path — git push heroku main, a scale slider, an embedded managed database, an automatically issued certificate, a working subdomain in seconds.

That pattern stuck. The concept of "deploy is a push, scaling is a slider, TLS is automatic" became the base expectation of any developer trained after 2012. Everything that came after — Render, Railway, Fly.io, Vercel for frontend, Cloud Run, App Runner — is a variation on top of that model.

But three things changed between 2010 and 2022.

Cloud servers got cheap. In 2010, a decent virtual server cost US$40/month. In 2026, US$5/month buys a VPS with 1 vCPU, 2 GB of RAM, 50 GB of disk, and 2 TB of traffic — more than enough to run five small applications. The economics of paying for dynos versus running your own containers reversed.

Docker became the standard. The original Heroku's great virtue was "buildpacks" — recipes that took your Ruby or Node code and produced an isolated artifact ready to run. Docker made that encapsulation a commodity. Today anyone can produce a reproducible image in three lines of Dockerfile, and any server with 100 MB of free RAM can host it.
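To make the claim concrete, those three lines can look like this for a Node app — the image tag and entry file are illustrative, and real projects usually add a dependency-install step:

FROM node:22-alpine
COPY . /app
CMD ["node", "/app/server.js"]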

The community learned to operate. In 2010, "running a Linux server" was an SRE craft. In 2026, any full-stack developer has configured nginx, dealt with certbot, set up a systemd unit, debugged the OOM killer through dmesg. The average skill level rose. What once justified paying Heroku US$25/month became an afternoon exercise.

When the free plan died, setting up "your own Heroku" stopped being an exercise in hacker pride and became back-of-the-envelope math. US$10/month of VPS against a US$25/month minimum on paid Heroku — and that's per application. Five applications on Heroku cost US$125/month. Five applications on a US$10/month VPS keep costing US$10/month.

The category that answered that math has five distinct subgenres today. They are worth separating.

The segment in 2026

Single-server simple — "I install on a VPS and forget"

The oldest and most populated subgenre. The premise is straightforward: a single server, an installer, a panel or CLI, and you have dynos. No cluster, no high availability, no complication.

Dokku is the grandfather of the segment, active since 2013. The engine is Bash plus Docker. The UX is mostly CLI — you push code via a git remote, it builds with Heroku-compatible buildpacks, brings up the container, and registers it in the internal router. The community is small but loyal, the product is stable, and the learning curve is steep in the first days and flat after that. Anyone who gets past those first days rarely switches. It's out of fashion in the sense that the newer community prefers web panels — but the product remains solid, with more than twelve years of real production operation across thousands of installations around the world.
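The whole Dokku workflow fits in a handful of commands (the hostname and app name below are placeholders):

# on the server, once
dokku apps:create myapp

# on your machine, per deploy
git remote add dokku dokku@server.example.com:myapp
git push dokku main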

CapRover occupies the middle of the spectrum. Web panel, plugin system, reasonably easy installation. The product has about thirteen thousand stars on public repositories and an active community, though a smaller one than those of newer competitors. Evolution is slower — releases come at a monthly or bimonthly cadence, and big features take time. For those who prioritize stability over novelty, it's a defensible choice.

Coolify is the current mindshare leader. About forty thousand stars, a modern web panel, an active plugin ecosystem, a noisy community in forums and chat. The product evolved fast between 2023 and 2025, adding embedded managed databases, git-based deploys, automatic certificates, and container monitoring. It's the default recommendation circulating in indie hacker forums today.

The main defect of Coolify, and the reason it also appears in the traps section further down, is architectural: it was designed single-server first. Multi-server was added later, but the central panel remains a single process on a single server. If that server goes down, you lose access to all the others.

Single-server modern — "deploy without panel"

A newer subgenre, with a philosophy opposite to the previous one. Instead of a web panel, a command-line tool that operates over SSH.

Kamal is the almost exclusive representative. It came out of the 37signals team in 2024, written by people close to DHH. The premise is radical: no panel, no control plane, no resident agent on the servers. You write a configuration file, run kamal deploy, and it SSHs into each server, pulls the image, swaps the container, and moves on. DHH published in 2024 that he saved about three million dollars per year migrating Basecamp's and HEY's own apps from the cloud to their own hardware with Kamal. Where the "no orchestration" thesis starts to hurt is covered in HeroCtl vs Kamal.
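That configuration file is a short YAML. A minimal sketch — service name, image, and addresses are placeholders:

# config/deploy.yml
service: myapp
image: myuser/myapp
servers:
  - 203.0.113.10
  - 203.0.113.11
registry:
  username: myuser
  password:
    - KAMAL_REGISTRY_PASSWORD

After that, kamal setup prepares the hosts once and kamal deploy ships each new version.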

The virtue is absolute transparency — nothing happens that you don't see in the terminal. The defect is that multi-server isn't orchestration, it's parallel deploy. There's no leader election, no rebalancing, no failover. Each server is an independent destination. If one goes down, your monitoring notifies you and you redo the deploy excluding that host.

For a small team operating two or three applications on three to five servers, with disciplined deploy habits, Kamal is elegant. For anything that needs "if a server goes down, the cluster decides what to do on its own", it isn't the right tool.

Cloud-native modern — "Heroku rewrapped"

Dokploy is the most recent product to enter the conversation. Around ten thousand stars, growing fast, a UX visually similar to Coolify's but with an underlying architecture on Docker Swarm. The main attraction: real multi-server out of the box, with no manual setup. A point-by-point reading of the technical choice Dokploy made is in HeroCtl vs Dokploy.

The structural defect is the foundation. Docker Swarm has been in maintenance mode for years — Docker Inc. hasn't invested in new features since 2019, and the public roadmap is essentially "keep it functioning". Building a new product on top of technology in maintenance is a bet. If Swarm is formally discontinued, Dokploy will need to migrate the entire foundation or rewrite — and the user pays that bill midway. The plugin ecosystem is still smaller than Coolify's, but rising fast.

Hosted-but-I-prefer-self-hosted — "Vercel/Render but on my server"

Technically outside the category in the post's title, but worth mentioning because many teams compare them. Whoever searches for a "Heroku alternative" frequently ends up not on self-hosted, but on another hosted platform.

Render is the most direct successor to Heroku in spirit. Clean UX, predictable prices, a generous free tier (but not an infinite one — it suspends inactive services automatically). Managed Postgres and Redis databases, deploy via git, build logs in the panel. The price rises linearly with real use, without big traps. It's the obvious choice for those who want "Heroku that works in 2026" without worrying about a server.

Railway is hosted, with a stronger focus on solo devs and pricing by resource use (CPU/RAM/traffic) instead of fixed tiers. It works well for hobby projects that won't scale; it can get expensive fast if you forget a worker running.

Fly.io has a different proposal: distributed hosting across multiple regions, rawer primitives, closer to "VMs as a service with automatic TLS" than to "PaaS in the Heroku style". It's the choice for those who want low global latency without building it by hand. The learning curve is steeper than Render's or Railway's.

All three are legitimate options. The important caveat comes in the traps section: hosted free tiers shrink every year, and Heroku's historical path — started free, became US$25/month minimum — is the default forecast for any free plan from a private company.

Real cluster — "I need high availability"

A short category, with few serious products. Here the premise isn't "run the deploy on more than one server", it's "if a server goes down, the cluster keeps working on its own without human intervention". The difference is big, and most of the segment doesn't cross that line.

HeroCtl is the product we're building. A control plane replicated across three or more servers from day one. Automatic leader election in about seven seconds when the leader goes down. Integrated router, automatic certificates, metrics, and logs embedded in the binary itself. A commercial model with a permanently free Community tier and paid Business and Enterprise tiers with published prices. Ideal range: one to five hundred servers.

The operational difference matters when a paying customer arrives. While the central panel of a multi-server Coolify is a single point of failure, HeroCtl has no center — any of the first three servers can lead, and the transition between them is automatic.

Comparative table

| Criterion | Dokku | CapRover | Coolify | Kamal | Dokploy | Render | HeroCtl |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Installation time | 30 min | 10 min | 5 min | 5 min | 10 min | n/a (hosted) | 5 min |
| Web panel | No | Yes | Yes | No | Yes | Yes | Yes |
| Real multi-server | No | Partial | Partial | Partial | Yes | n/a | Yes |
| Real high availability | No | No | No | No | Partial | Yes | Yes |
| Automatic certificates | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
| Encryption between services | No | No | No | No | No | Yes | Yes |
| Embedded metrics | No | Plugin | Yes | No | Yes | Yes | Yes |
| Embedded logs | No | Plugin | Yes | No | Yes | Yes | Yes |
| Commercial model | Open-source | Open-source | Open-source + paid cloud | Open-source | Open-source | Hosted paid | Free Community + Business/Enterprise |
| Ideal range | 1 server | 1–3 servers | 1–3 servers | 1–10 servers | 3–10 servers | n/a | 1–500 servers |

The row that splits the segment in two is "real high availability". On one side, products share the same premise: multi-server means deploy destinations, not a cluster with consensus. On the other, the panel/control plane is replicated and survives the loss of any server.

Decision by usage profile

Four profiles cover most cases.

Solo dev, hobby project, one VPS. Dokku if you like the CLI and want stability. Coolify if you prefer a web panel. Kamal if you're on a Rails or Node stack and already work well with SSH and configuration files. Any of the three will do; the choice is more about taste than technical capability.

Indie hacker with one to three small SaaS, still on one server. Coolify or Dokploy. The practical differences are the plugin ecosystem (Coolify has more) and the technical foundation (Dokploy bets on Swarm). For the next twelve months either works, and migration between them is feasible because both run standard Docker containers. The important architectural decision lies elsewhere: when you go from one server to two, you'll feel the panel's single point of failure — and that's the time to assess whether the next migration is multi-server Dokploy or a real cluster.

Startup with its first serious customer and a contractual SLA coming into force. HeroCtl. Here the single-server panel becomes a legal liability — any SLA written into a commercial contract assumes the infrastructure survives the loss of a node, and no single-server panel does. You can try to build manual redundancy on top of Coolify or Dokploy, but the result will be fragile and costly to operate. The simple rule: when the customer contract mentions "uptime", the consensus cluster stops being a luxury.

Established company, fifty servers or more, a platform team with three dedicated people. Here the conversation changes. Managed K8s on a cloud provider becomes the sensible option, because the operator ecosystem is bigger and the team has the skills to absorb the complexity. HeroCtl runs in this range too — we've tested hundreds of nodes in the laboratory and dozens in customer production — but above one hundred servers the ceiling of our specialized-operator library starts to show.

The segment's three traps

"Multi-server" doesn't mean "real high availability"

The most expensive confusion. Most panels list "multi-server" as a feature, and the casual reader interprets that as "if a server goes down, the system keeps working". That isn't what's being offered. Multi-server in most panels means: the central panel, running on a single server, is capable of deploying to multiple remote servers.

When the panel server goes down, you lose control. The containers in production keep running — Docker doesn't stop because of it — but you can no longer deploy, read centralized logs, restart a service, or scale. You sit waiting for it to come back.

Real high availability requires consensus between multiple servers: at least three panel processes running, automatic leader election, state replication between them. If the leader goes down, another takes over in seconds. That's a different architecture, more expensive to build and more expensive to operate. That's why few products in the segment deliver it.

The concrete question to ask when evaluating any product: "if the server where the panel runs is shut down right now, how long until the system accepts deploys again, and is that recovery automatic or manual?". If the answer involves a human opening SSH somewhere, it isn't high availability.
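You can turn that question into a five-minute drill. A minimal sketch, assuming SSH access to the node currently hosting the panel and a health endpoint on it — hostname and URL are placeholders:

# kill the node that currently hosts the panel/leader
ssh panel-1 sudo poweroff
# time how long until the panel answers again, with zero human intervention
start=$(date +%s)
until curl -fsS https://panel.example.com/healthz > /dev/null; do sleep 1; done
echo "recovered after $(( $(date +%s) - start ))s"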

"Plugin ecosystem" can be disguised dependency

Panels with plugin stores look complete: you install one plugin for managed Postgres, another for Redis, another for a Sentry-like error tracker, another for automatic backups, another for monitoring. Each one solves a piece, and the set adds up to a Heroku.

The problem appears two years later. The backup plugin was written by a volunteer in 2024 and stopped receiving commits in 2025. The new panel version broke compatibility with it and nobody updated it. You find out the moment you need to restore a backup — and the restore was never tested against the current version.

That pattern repeats for each plugin. The more functionality depends on the external ecosystem, the larger the risk surface. The structural defense is simple: prefer products with batteries included — where Postgres, metrics, logs, certificates, and routing are part of the main product and maintained by the same team that maintains the rest. Plugins are convenient in the short term and costly in the medium term.

Hosted "free tier" isn't gratis long term

Render, Railway, and Fly.io have generous free plans today. Heroku had one in 2021. The segment's history shows a consistent pattern: free tiers from private companies shrink with every fundraising round. First they suspend for inactivity, then reduce quotas, then add hour limits, then turn into thirty-day trials, then end.

It's not malice — it's business math. Hosting workloads costs money, and investors demand returns. The only structural exception is hosting subsidized by another product from the same company (a cloud covering a free PaaS to attract developers to the main cloud), and even those change when the CFO changes.

Self-hosting is the only structural defense. You pay the VPS bill directly to the infrastructure provider, without intermediary. When the intermediary disappears, your application doesn't disappear with it.

When to stay on Heroku, Render, or Railway without irony

Worth saying clearly: not every team needs to leave managed hosting. There are three situations where staying is the right decision.

Small team without operational competence available to take care of a server. If the entire team is two product developers, neither with prior experience in Linux, Docker, or networking, the cognitive cost of operating infrastructure is greater than the monthly savings. Pay Render its US$200/month and keep the focus on product.

Application whose platform cost is negligible compared to revenue. If the company bills US$50k/month and the Heroku bill is US$300, optimizing that bill is poorly allocated work. The marginal return of migrating is low, and the operational risk isn't worth it.

Team allocated to product, not infra. Some startups depend so much on rapid product iteration that any hour spent on infra is an hour stolen from the competitive differentiator. For these, the trade-off of paying more to not think about a server is real value, not waste.

The simple rule: if infra is an invisible commodity for your business, let someone charge you to keep it invisible. If infra is a capability that differentiates the product (low latency, specific regions, specific compliance, contractual uptime), control is worth the work.

HeroCtl in the segment

Honest positioning: HeroCtl doesn't compete with Dokku or Coolify for a hobby project on one VPS. For that case, it's more machinery than the job needs. An indie hacker with a Django application on a US$5/month server should use Dokku or Coolify and keep going.

Where HeroCtl competes is where multi-server Coolify, Dokploy, and Nomad also compete: the case of a serious customer with an SLA, where single-server becomes a legal liability. Here what we offer is a consensus cluster from day one; batteries included instead of five products to wire up (router, certificates, metrics, logs, and encryption between services already in the binary); and a published, frozen commercial contract — no retroactive changes of terms.

The demonstration cluster runs four servers totaling five vCPUs and ten gigabytes of RAM, with sixteen active containers serving five sites. The control plane occupies between 200 and 400 MB per server. By comparison, the control plane of a managed version of the large orchestrator starts at about 700 MB per master node before any application comes up.

The typical job spec has about fifty lines — it describes the service, ingress, secrets, and resources. The equivalent on the large orchestrator runs past three hundred lines to cover the same functionality.
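For flavor, a condensed, hypothetical sketch of such a spec — the field names are illustrative, not HeroCtl's published schema:

# hypothetical job spec — illustrative field names only
job: myapp-web
image: registry.example.com/myapp:1.4.2
replicas: 3
ingress:
  host: myapp.example.com    # certificate issued automatically
secrets:
  - DATABASE_URL
resources:
  cpu: 500m
  memory: 256MB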

HeroCtl doesn't compete with managed cloud beyond its range. The ideal range is one to five hundred servers; above that, the external ecosystem of specialized operators still favors the large orchestrator, and being honest about that is part of the contract.

Questions we receive

Can I migrate from Heroku directly to HeroCtl? Yes, with some adaptations. A stateless web application with a separate Postgres migrates easily — you containerize it with a Dockerfile, describe the job in fifty lines, and bring it up. Separate workers (Sidekiq, Celery) become additional jobs in the same cluster. What needs rethinking is whatever depended on managed add-ons.

And the add-ons (Postgres, Redis, Sentry)? You run Postgres as a job in the cluster itself, with a persistent volume, and take care of backups the way a human does — no automatic operator does this better than you doing it properly. The same goes for Redis. Self-hosted Sentry exists and runs on any Docker cluster — and there's a hosted commercial product if you'd rather not operate it. The general rule: critical data runs in the cluster; observability can run outside.
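"Taking care of backups like a human" can be as small as a nightly cron job. A minimal sketch — container name, database, and bucket are placeholders, and the restore path deserves its own test:

#!/bin/sh
# nightly-pg-backup.sh — dump, compress, ship off the server
set -eu
STAMP=$(date +%Y-%m-%d)
docker exec postgres pg_dump -U app appdb | gzip > /backups/appdb-$STAMP.sql.gz
# assumes the aws CLI is installed and configured; any off-site copy works
aws s3 cp /backups/appdb-$STAMP.sql.gz s3://my-backup-bucket/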

What does it cost in comparison? Taking as a baseline a startup with five small applications: paid Heroku comes out around US$125/month minimum, without add-ons. Render comes out between US$50 and US$150 depending on usage. A cluster of three VPS nodes comes out to US$30–60/month total, paid to the infrastructure provider. The direct savings are real, and grow more significant as the applications grow.

What if I'm already on Coolify? There's no urgency to migrate while you operate on a single server. The time to consider it is when the single-server panel becomes a contractual single point of failure — the first serious customer with an SLA in writing. Until then, Coolify works well.

And for a Django app with Celery, or Rails with Sidekiq? It works naturally. You define one job for the web process and another job for the worker process, both sharing the same image but with different commands. The cluster orchestrates the two independently, and the broker (Redis or similar) is one more job in the same cluster.
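In the same illustrative notation as the sketch above (field names hypothetical, commands from a typical Django/Celery setup), the pair is two jobs that differ only in command:

# web job
job: myapp-web
image: registry.example.com/myapp:1.4.2
command: gunicorn config.wsgi

# worker job — same image, different command
job: myapp-worker
image: registry.example.com/myapp:1.4.2
command: celery -A config worker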

And for a Node.js app with separate workers? Same story. A worker is just another process, defined as another job. There's no architectural distinction between "web" and "worker" at the orchestrator level — they're all containers running code.

When do Business prices come out? They're already out — the plans page publishes the values. The cut line is designed so you only pay for Business when the company is large enough that SSO, granular RBAC, and detailed auditing are real requirements — not preferences. For everything else, Community solves it, and Community is permanently free with no artificial feature gates.

Closing

The "self-hosted Heroku" segment matured. In 2026, there are serious products for each usage profile, and the decision depends less on "which is best" and more on "which fits my case". A hobby project doesn't need a consensus cluster. A serious customer with SLA doesn't fit on a single-server panel.

For those deciding now, three final recommendations. First, read the commercial contract before adopting — avoid terms that allow retroactive changes. Second, prefer batteries included over plugin ecosystems where possible — it's a smaller risk surface. Third, test the failure path before the real incident — shut down a server and watch what happens, calmly, during the day.

To start with HeroCtl on three Linux servers:

curl -sSL get.heroctl.com/install.sh | sh

If you want to read more first, there are two adjacent posts: HeroCtl vs Coolify covers the direct comparison with the mindshare leader of the single-server segment, and Why we created HeroCtl explains the reasoning that led to the product's existence.

Container orchestration, without ceremony.

#heroku #self-hosted #paas #comparison #segment