Migrating from Heroku to your own cluster: technical guide in 5 steps

The end of Heroku's free plan in November 2022 turned migration into a priority for hundreds of Brazilian teams. A detailed plan with checklist, time estimates, and common pitfalls.

HeroCtl team · 16 min read

On November 28, 2022, Salesforce shut down Heroku's free plan. Hundreds of thousands of hobby projects were wiped out at once, and the news cycle lasted a couple of months — people migrating to Render, to Fly.io, to Railway, to any VPS. What nobody predicted at the time is what came next: four years passed, it's 2026, and there are still thousands of Brazilian SaaS products in production paying between US$25 and US$100 per month per dyno just because "migrating" is the thirteenth item in the backlog. There's always a more urgent feature. There's always a customer asking when module X will ship. Migrating generates zero new revenue — so it stays.

This post is the plan to fit that migration into a week of work for a part-time dev, and the rest of a month to stabilize. It's not a manifesto, it's not a vendor comparison, it's not "come to HeroCtl". It's a runbook. At the end there's a section on destination options, including our product, but if you finish reading and go to Render or Coolify or Fly.io, the post did its job.

Why migrating still hurts (the unspoken truth)

The first thing that needs to be clear: it's not the Dockerfile. Writing a Dockerfile for a Rails or Node app takes half an afternoon — there's a ready-made template for every framework, there are five posts on DEV explaining it, there's Copilot writing it for you. If your resistance is "we haven't dockerized yet", that part is the least important.

The pain is in the ecosystem:

  • Postgres with specific extensions that you forgot you enabled in 2019. pg_stat_statements, pgcrypto, hstore, postgis — each one is a reason for the migration to break silently.
  • Redis Premium with persistence that you use for the Sidekiq queue AND for cache AND for rate limit. For cache it can restart from zero. For queue it can't.
  • Stateful Sidekiq workers with jobs scheduled months ahead. Migrating while they run is chasing a moving train.
  • Heroku Scheduler with that cron nobody has looked at since 2020 but that produces the CEO's monthly report.
  • Papertrail integrated, NewRelic instrumented, Bugsnag on every error — three extra SaaS you don't even know if they'll make sense in the new architecture.
  • Buildpack that ran for six years without anyone really knowing what it does. There's a bin/post_compile that minifies something, there's an environment variable that defines which Ruby version — somewhere, your application depends on six buildpack behaviors that were never documented.

And there's the human part: you and your team internalized Heroku's primitives over the years. Procfile, slug compilation, dynos, release phase, config vars. All of it became intuition. When we rebuild outside Heroku, we rebuild on autopilot — and usually badly, because Heroku's defaults hid important decisions that are now yours.

The technical migration takes a week. The mental migration takes a month. This post tries to shorten both.

Pre-flight check — one to two hours, before any commit

Before opening the editor, you need the inventory. Most migrations that go wrong are because of a surprise that could have been discovered in the first hour.

Apps inventory:

heroku apps

How many apps exist on the account? Which ones are still really in use? Which ones can become a cron-job and die? Which ones were created for a customer that left in 2021? Mark each one in a spreadsheet with three columns: name, status (live/zombie/cron), migration priority (high/medium/low).

In our experience, around 30% of the apps on a typical account are zombies. Migrating zombies has no ROI; destroying them does.
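The inventory spreadsheet is simple enough to script. A minimal sketch in Python; the app names and statuses below are hypothetical placeholders, and in practice you fill the list from the output of heroku apps:

```python
import csv
import io

# Hypothetical inventory; in practice, fill this from `heroku apps`
apps = [
    {"name": "acme-web",       "status": "live",   "priority": "high"},
    {"name": "acme-reports",   "status": "cron",   "priority": "medium"},
    {"name": "old-client-poc", "status": "zombie", "priority": "low"},
]

# Write the three-column spreadsheet (here to a string; use a file in practice)
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["name", "status", "priority"])
writer.writeheader()
writer.writerows(apps)
print(buf.getvalue())

# Zombies get destroyed, not migrated
to_migrate = [a["name"] for a in apps if a["status"] != "zombie"]
print(to_migrate)
```

The point of keeping it as data, not prose, is that every later decision (addons, buildpacks, cutover order) keys off the same list.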

Addons inventory per app:

heroku addons -a my-app

Each line is a future decision. Postgres? Redis? Papertrail? Heroku Scheduler? SendGrid? Mailgun? For each one, write in the spreadsheet: will migrate to self-hosted equivalent, will become external SaaS, or will discard. If you don't know what it's for, look it up before — not at cutover time.

Buildpacks inventory:

heroku buildpacks -a my-app

Multi-buildpack? Custom buildpack? If the output has more than one line, read each one. Custom buildpacks usually have hooks (bin/release, bin/compile, bin/post_compile) that execute specific things. You'll need to replicate these steps in the Dockerfile or in a release container.

Env vars inventory:

heroku config -a my-app

Export everything to a secure file. DO NOT commit it. DO NOT send it via Slack. DO NOT paste it into ChatGPT. This file contains DATABASE_URL, SECRET_KEY_BASE, payment API keys. Treat it as a password, because that's exactly what it is.

Watch for two pitfalls:

  • Variables with : character in the name (some old libs use it) escape differently in containers.
  • BUNDLE_WITHOUT=development:test saved in production is a time bomb after migration.
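Both pitfalls are easy to catch mechanically. A small sketch that scans a config dump for them, assuming the KEY: value format that heroku config prints (the variables shown are made up):

```python
# Hypothetical dump in the KEY: value format that `heroku config` prints
raw = """\
DATABASE_URL: postgres://user:secret@host:5432/db
BUNDLE_WITHOUT: development:test
WEIRD:VAR: legacy-value
SECRET_KEY_BASE: abc123
"""

warnings = []
for line in raw.splitlines():
    key, _, _ = line.partition(": ")
    if ":" in key:
        warnings.append(f"{key}: ':' in the name escapes differently in containers")
    if key == "BUNDLE_WITHOUT":
        warnings.append("BUNDLE_WITHOUT is set: time bomb outside production")

for w in warnings:
    print(w)
```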

Procfile inventory:

Each Procfile line is a service:

  • web becomes the main container.
  • worker becomes a second container or separate job.
  • release becomes a pre-deploy step (typically migrations).
  • clock or scheduler becomes a cron job.

If your Procfile has five lines, you'll have five services at the destination. They're not details — they're the topology design.
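The Procfile-to-topology exercise can be sketched in a few lines of Python. The Procfile content and role names here are illustrative, not any platform's official mapping:

```python
# Destination role for each Heroku process type (illustrative mapping)
ROLE = {
    "web": "main container",
    "worker": "separate container, same image",
    "release": "pre-deploy step",
    "clock": "cron job",
    "scheduler": "cron job",
}

# Hypothetical Procfile content
procfile = """\
web: bundle exec puma -C config/puma.rb
worker: bundle exec sidekiq
release: bundle exec rake db:migrate
clock: bundle exec clockwork clock.rb
"""

services = {}
for line in procfile.splitlines():
    name, _, command = line.partition(":")
    services[name.strip()] = {
        "command": command.strip(),
        "role": ROLE.get(name.strip(), "unknown"),
    }

for name, svc in services.items():
    print(f"{name}: {svc['role']} ({svc['command']})")
```

Anything that parses as "unknown" is exactly the process type you should investigate before choosing a destination.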

Current metrics:

heroku ps -a my-app
heroku logs --tail -a my-app

How many dynos running? What type (Standard-1X, Performance-M)? Log volume per minute? Average latency on NewRelic? CPU/memory peak last month?

These numbers serve to size the destination. Migrating and discovering later that memory is half of what's needed is the fastest way to break confidence in the entire project.

At the end of pre-flight you have a spreadsheet with everything. That file is the heart of the migration. Every decision returns to it.

Step 1 — Target stack choice (architectural decision, 30 minutes)

Three possible paths. I'll be honest about each one.

Option A — Single VPS with a self-hosted panel. A server on DigitalOcean or Hetzner, install Coolify or Dokploy, deploy your app via the panel. Cost: R$30 to R$50 per month to start; scales well to about 10 apps on a medium server. No high availability: if the server goes down, everything goes down. SLA you can promise: best-effort.

Ideal for: indie hacker, personal project, MVP, SaaS without customer requiring written SLA.

Option B — Cluster with high availability. Three or more servers, with an orchestrator coordinating them; the setup survives the loss of one server without affecting traffic. Cost: R$150 to R$300 per month for a cluster of three modest nodes. Possible SLA: 99.9% without heroics.

Ideal for: B2B SaaS with paying customers, or any application where half an hour of downtime generates a support ticket.

Option C — External managed platform. Render, Railway, Fly.io. You pay more, but zero ops. Cost: R$200 to R$500 per month for workload comparable to 2-3 Heroku dynos, linear scaling from there.

Ideal for: a team with absolutely nobody to take care of servers, which prefers transferring the problem to another company.

Honest decision, in one question: do you have a customer requiring SLA? If not, option A. If yes, B. If the team has nobody willing to learn minimum ops, C. There's no universal right answer — there's a right answer for your context. Mixing the three is also valid: main app on B, internal tool on A, isolated scheduler on C.

Step 2 — Dockerization (half a day to two days per app)

Here the technical work begins. The general logic is the same for any stack:

FROM ruby:3.3-slim AS builder
WORKDIR /app
# Build-time dependencies for native gems (pg, etc.) and asset compilation
RUN apt-get update && apt-get install -y --no-install-recommends \
    build-essential libpq-dev nodejs && rm -rf /var/lib/apt/lists/*
COPY Gemfile Gemfile.lock ./
# `bundle install --without` is deprecated; use bundle config instead
RUN bundle config set --local without 'development test' && bundle install
COPY . .
# Dummy SECRET_KEY_BASE just so Rails can boot during precompile
RUN SECRET_KEY_BASE=dummy RAILS_ENV=production bundle exec rake assets:precompile

FROM ruby:3.3-slim
WORKDIR /app
RUN apt-get update && apt-get install -y --no-install-recommends \
    libpq5 nodejs && rm -rf /var/lib/apt/lists/*
COPY --from=builder /usr/local/bundle /usr/local/bundle
COPY --from=builder /app /app
EXPOSE 3000
CMD ["bundle", "exec", "puma", "-C", "config/puma.rb"]

Multi-stage. Heavy build stays in a stage that's discarded. Final image has only what's necessary to run.

By language:

  • Ruby/Rails: ruby:3.x-slim as base, multi-stage to reduce size. Heroku's slug compilation became your own lines in the Dockerfile — bundle install, assets:precompile, copy artifacts.
  • Node: node:20-alpine solves most cases. Watch for deps with native binaries (sharp, bcrypt, sqlite3, canvas) — Alpine uses musl, and some libs require glibc. If it breaks, switch to node:20-slim.
  • Python/Django: python:3.x-slim, gunicorn or uvicorn as server. requirements.txt or pyproject.toml in the build stage.
  • Elixir/Phoenix: elixir:1.x-alpine, release as artifact (mix release), runtime image with only erlang.

Procfile → Docker mapping:

  • web: bundle exec puma → CMD of the main container
  • worker: bundle exec sidekiq → separate container, same image, different command
  • release: bundle exec rake db:migrate → release job, runs before the rolling-update deploy
  • clock: bundle exec clockwork → cron job, or singleton container

Most modern platforms (HeroCtl, Render, Railway, Coolify) understand these four process types directly.

Assets:

Heroku slug compilation does precompile automatically. In Docker you need to think:

  • Rails: RUN bundle exec rake assets:precompile in the build stage.
  • Node: RUN npm run build in the build stage.
  • Asset host (CDN): if you use CloudFlare or S3 to serve static, configure RAILS_SERVE_STATIC_FILES and ASSET_HOST correctly.

Realistic average time:

  • Medium Rails app (CRUD with Sidekiq): 1 to 2 days.
  • Simple Node app (API, no heavy frontend build): 4 hours.
  • App with 5+ stateful workers and media processing: 3 to 5 days.

The first app takes longer. The second takes half. From the third on, it's mechanical.

Step 3 — Database migration (the riskiest part, 2 to 8 hours)

Here lives the fear. The database is the only place where "going back" is expensive. Everything else is redeploy.

Postgres:

Heroku Postgres exposes direct access via pg_dump if you have the credentials (they're in DATABASE_URL). Before anything, find out your extensions:

SELECT extname, extversion FROM pg_extension;

Common ones: pgcrypto, hstore, postgis, pg_stat_statements, uuid-ossp, unaccent. If the destination doesn't have all, or has them in a different version, you find out before — not in the middle of restore at 3 AM.
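Comparing extension lists before the restore is a simple set difference. A sketch with both lists hypothetical; fill each from the query above, run against source and destination:

```python
# Both lists are hypothetical; fill each from the output of
#   SELECT extname, extversion FROM pg_extension;
# on the source and on the destination
source = {"pgcrypto": "1.3", "hstore": "1.8", "postgis": "3.4", "plpgsql": "1.0"}
dest = {"pgcrypto": "1.3", "plpgsql": "1.0", "uuid-ossp": "1.1"}

missing = sorted(set(source) - set(dest))
mismatched = sorted(e for e in set(source) & set(dest) if source[e] != dest[e])

print("missing at destination:", missing)
print("version mismatch:", mismatched)
```

If either list comes back non-empty, fix it before scheduling the window, not during it.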

Possible destination for Postgres:

  • Postgres running as a job in the cluster itself (smaller RPO/RTO, total control, you take care of backup).
  • Regional managed Postgres — RDS São Paulo, Neon, Supabase, Aiven. More expensive, less ops.

Migration with minimum downtime — option A (with window):

# Drains traffic: puts app in maintenance, waits for Sidekiq to drain
heroku maintenance:on -a my-app

# Dump (custom format, so name the file accordingly)
pg_dump "$HEROKU_DATABASE_URL" --no-owner --no-privileges --format=custom --file=dump.pgdump

# Restore at destination
pg_restore --no-owner --no-privileges --dbname="$DEST_DATABASE_URL" dump.pgdump

# Smoke test at destination
psql $DEST_DATABASE_URL -c 'SELECT count(*) FROM users;'

# DNS cutover, app at destination points to new database
heroku maintenance:off -a my-app  # optional, just for Heroku to keep serving /healthz

Typical window: 30 minutes to 2 hours, depending on database size. For a base under 5GB, 30 min is comfortable.

Migration with minimum downtime — option B (logical replication):

Postgres logical replication allows you to start the copy while the app continues writing to Heroku. When the replica reaches the current state, do the DNS cutover and the destination becomes the new primary.

Works if the destination can reach Heroku via network. For Heroku Postgres you need to whitelist the destination IP (Heroku has a mechanism for that on paid plans). Setup takes an afternoon, cutover lasts seconds.

Redis:

Two distinct natures — treat differently:

  • Redis as cache: simply restart from zero at destination. Cache reheats by itself. Nothing to migrate.
  • Redis as Sidekiq/Resque queue with persistence: here it hurts. Snapshot via BGSAVE, transfer the RDB, restore at destination. Or: pause workers on Heroku, process the queue to completion, do cutover with empty queue.

Heroku Redis Premium has persistence enabled by default; simple Redis at destination may not — check before.
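The "pause workers, process the queue to completion" approach boils down to a polling loop. A sketch: the queue-length callable is a stand-in for whatever you actually query (redis-cli LLEN, Sidekiq's Stats API), and here it's simulated with a plain list:

```python
import time

def wait_for_drain(queue_len, timeout=300.0, interval=1.0):
    """Poll a queue-length callable until it reaches zero or the timeout hits.

    queue_len is any zero-argument callable; in real life it would wrap
    something like `redis-cli LLEN queue:default` or Sidekiq's Stats API.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if queue_len() == 0:
            return True
        time.sleep(interval)
    return False

# Simulated queue that drains after three polls
remaining = [3, 2, 1, 0]
drained = wait_for_drain(lambda: remaining.pop(0) if remaining else 0,
                         timeout=10.0, interval=0.01)
print(drained)
```

If wait_for_drain returns False, something is still enqueueing; find it before cutover.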

Step 4 — DNS, SSL and cutover (1 to 3 hours)

Cutover is the moment of truth. Everything before was preparation.

24 hours before:

Reduce DNS record TTL to 60 seconds. This ensures that when you point to the destination, propagation is fast. High TTL is what makes cutover become a 6-hour nightmare with half the customers still hitting the old server.

Parallel setup:

Run the app in both places in parallel. Heroku keeps responding on the production domain; the destination responds on a temporary domain (e.g., app-new.heroctl.com).

Smoke test at destination:

curl https://app-new.heroctl.com/healthz
curl https://app-new.heroctl.com/api/v1/users -H "Authorization: Bearer $TOKEN"
# Hit critical endpoints manually, with human eyes

If something is wrong, find out now. After cutover you'll be dealing with support tickets simultaneously.

Cutover:

Change the CNAME (or A record) of the production domain to the destination. Within 60 seconds, new requests go to the new destination. Heroku continues responding on the old domain (the *.herokuapp.com URL) for 30 days — that's an important safety belt.

SSL/TLS:

Heroku had embedded automatic certificate. At the destination, depending on the choice:

  • HeroCtl, Coolify, Render, Railway, Fly.io: automatic certificate via Let's Encrypt, without you thinking.
  • Bare single VPS: you configure a cert-manager equivalent, Caddy with ACME, or nginx + certbot.

Before DNS cutover, validate that the destination issued the certificate for the domain. Let's Encrypt validates via HTTP-01 or DNS-01 — the HTTP-01 challenge only works after DNS points, so there's a chicken-and-egg. Solution: issue via DNS-01 first (doesn't need DNS pointing to destination), or accept 30 seconds of TLS error at cutover moment.

Sticky sessions:

If your app uses WebSocket, or has in-memory session (instead of Redis or database), you need sticky session at the load balancer. Heroku didn't do that by default, but some apps end up depending on stable routing without realizing it. At the destination, configure cookie-based session affinity if necessary.

Step 5 — Heroku decommission (1 hour, 30 days later)

Thirty days is the safety belt. Keep the app on Heroku running, without traffic (DNS already points elsewhere), just in case of emergency. Cost: what you were already paying, prorated up to the cancellation date.

Thirty days later, if nothing broke:

heroku addons:destroy heroku-postgresql -a my-app
heroku addons:destroy heroku-redis -a my-app
heroku addons:destroy papertrail -a my-app
heroku apps:destroy my-app

Each addon must be cancelled separately — some have their own billing that continues even with the app destroyed. Check next month's bill with a magnifying glass.

Heroku does pro-rata refund up to the cancellation day. Don't forget to cancel the entire account if it's the last app — otherwise you pay platform fee every month for nothing.

Common pitfalls

Most migrations get stuck on these eight things. Read everything before starting.

Invisible slug compilation hooks. Old apps have bin/release, bin/post_compile, bin/pre_compile. These scripts run inside the buildpack and do things like minifying JS, generating derived files, or running a migration nobody remembers. Before Dockerizing, open each one and replicate in a Dockerfile step or in a release container.

Config vars with broken format. Heroku accepts MY:VAR as variable name (with :). Containers in general also accept, but some orchestration tools escape differently. Rename to MY_VAR before migrating.

Redis URL with variant format. Heroku uses redis://h:password@host:port. Some clients (old Ruby gems mainly) expect redis://:password@host:port. If you see Redis::CommandError: WRONGPASS, that's probably it.

BUNDLE_WITHOUT=development:test saved in env. When you run that same image outside Heroku, it still skips the development and test gems. Fine in production; in a staging environment where you need to run tests, it breaks. Clean that variable before reusing the config dump in another environment.

Heroku-specific gems. rails_12factor (deprecated, but still present in 2014-era apps), heroku_san, taps. Remove them, done. If something depends on one, swap in the standard equivalent.

DNS with Heroku-DNS-Target. Heroku recommends using ALIAS or ANAME to point to the app, instead of CNAME, for domain roots. When migrating, switch to A record direct to destination IP. ALIAS pointing to Heroku is what will screw you on apex domains.

Papertrail / NewRelic / Bugsnag turned off without a substitute. Logs and observability are easy to leave for later, and they break in the first hour post-migration. Before cutover, you must have: centralized logs (HeroCtl has a single embedded log writer; Render exposes them via the UI; Coolify has optional Loki), basic metrics (CPU, memory, requests), and some error-tracking tool (self-hosted Sentry or a SaaS).

Sidekiq/Resque with in-flight jobs during cutover. During the cutover moment, some jobs go to the destination queue without having been processed at origin. If your job isn't idempotent (can run twice without side effect), that's a problem. Solution: pause Heroku workers 5 minutes before cutover, wait for queue to drain, do cutover with empty queue.
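Idempotency usually means a dedupe key checked before the side effect runs. A minimal sketch with an in-memory set; in production the set would live in Redis or in a unique database column, and all the names here are invented:

```python
# In production, `seen` would live in Redis (SETNX) or a unique DB column;
# all names here are invented for the sketch
seen = set()
charges = []

def charge_customer(job_id, amount):
    if job_id in seen:   # already processed; a replayed job is a no-op
        return "skipped"
    seen.add(job_id)
    charges.append(amount)   # the real side effect
    return "charged"

first = charge_customer("invoice-42", 100)
replay = charge_customer("invoice-42", 100)  # same job delivered twice
print(first, replay, charges)
```

With this guard in place, a job that crosses the cutover and runs on both sides charges the customer once, not twice.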

Realistic schedule for medium startup (5 to 10 Heroku apps)

Small team, one part-time dev:

  • Week 1: complete pre-flight + stack choice + destination setup (empty cluster running, panel accessible).
  • Week 2: Dockerization of first low-risk app + database migration in staging environment.
  • Week 3: cutover of first app in production + 7-day validation.
  • Weeks 4 to 6: migration of remaining apps in parallel, pace of 1 to 2 per week.
  • Total: 4 to 6 weeks of elapsed time, maybe 80 hours of effective work distributed.

Medium team (3 devs, 20 apps): 8 weeks, 200 hours of effective work.

Large team (cluster of 50+ apps): treat as formal project, with project manager, and calculate quarter.

Rule of thumb: never migrate more than 2 apps in parallel if it's the same dev doing it. Context-switching cost eats the parallelism gain.

FAQ

How much does the migration cost in person-hours? For a 5-app SaaS with a part-time dev: ~80 hours. At R$200/h, that's R$16k. Against the R$2k/month Heroku bill you stop paying, payback comes in 8 months. Everything after that is savings.
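The payback arithmetic, spelled out (the numbers are the same illustrative ones from the paragraph above):

```python
# Illustrative numbers from the paragraph above
hours = 80
rate = 200             # R$ per hour of dev time
monthly_saving = 2000  # R$ Heroku bill that goes away

migration_cost = hours * rate                 # one-time cost
payback_months = migration_cost / monthly_saving
four_year_savings = monthly_saving * 48 - migration_cost

print(migration_cost, payback_months, four_year_savings)
```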

What if I don't have Docker setup? You don't need to pre-install anything — destination platforms build the image for you (Render, Railway, Fly.io accept Dockerfile direct from git). HeroCtl requires image in registry, so you push to ECR, GCR, Docker Hub or GHCR. For local use, install Docker Desktop and you're ready.

Does Heroku Postgres have an export limit? There's an IOPS limit during pg_dump on the lower plans. Databases above 5GB on a hobby-tier plan may need pg_dump in parallel mode (-j, which requires the directory format, -Fd) or logical replication to avoid heavy load. For Standard or higher, no relevant problem.

Do Sidekiq scheduled jobs survive? They survive if you migrate Redis with snapshot (BGSAVE → restore). If you restart Redis from zero at destination, you lose scheduled jobs. Consider that at cutover: either do Redis transfer along, or accept manually rescheduling some jobs.

Can I test with 1 app first? That's the recommended path. Take the least critical app (internal, or very low traffic), do the entire migration on it first. Learn from the stumbles there. Then migrate production ones with confidence. The first migration teaches more than reading 10 posts like this.

What if the migration fails? The 30 days of Heroku running in parallel are your safety net. If the destination breaks irreversibly in the first hour, switch DNS back to Heroku, takes 60 seconds, normal life. The only case where rollback is expensive is if you did database cutover with writes at the destination — then you need to replicate back. That's why the recommendation is simultaneous DNS and database cutover, with short window.

Is there an assisted migration path for HeroCtl? For HeroCtl, yes — we have an experimental converter that reads app.json + Procfile and generates an equivalent job manifest. Works for simple apps (web + worker + release), and stumbles on exotic cases (heavy multi-buildpack, custom hooks). If you want to test, send a message.

Closing

Migrating off Heroku four years late is embarrassing — you should have left in 2022. But letting four years become five is worse. The compounded cost of not migrating (R$25k to R$100k per year in accumulated Heroku bills, plus the fragility of depending on a product Salesforce has already shown it has no affection for small users) is greater than the cost of a focused week of work.

If you decide to test HeroCtl, install on any Linux server:

curl -sSL https://get.heroctl.com/install.sh | sh

Works on 1 server (simple mode) or on 3+ (real HA mode). The Community plan is free without server limit and without job limit — you don't need to make any commercial decision to do the entire migration.

If you decide on Render, Railway or Coolify, also great. The point of this post isn't to capture you as a customer — it's to get you off Heroku. Four years later, it's time.

For additional context on self-hosting in 2026, read Self-hosted Heroku: the state of the art in 2026. To understand why we built a new orchestrator instead of adopting an existing one, read Why we built HeroCtl.

#heroku #migration #guide #tutorial #exit