Deploy your first app
Bring up a Node.js application with a Postgres database in 50 lines of YAML. Includes health check, rolling deploy, and rollback.
A deploy in HeroCtl is a YAML file sent to the cluster. The cluster decides where to run it, when to update it, and how to react if something breaks. You only describe the desired state.
Anatomy of the job spec
A job defines a complete service: image, replicas, resources, ingress, secrets. All in one file. Five blocks matter:
| Block | Function |
|---|---|
| `meta` | name, version, tags |
| `task` | image, command, env, resources |
| `count` | how many replicas |
| `health` | how to know it is alive |
| `ingress` | public domain and TLS |
50 lines cover 90% of cases.
Complete example: Node.js web + Postgres
Let's bring up a Node.js API with two replicas and a Postgres alongside it. Create the file `app.yaml`:
```yaml
job: api-vendas
version: 1
tasks:
  - name: postgres
    image: postgres:16-alpine
    count: 1
    resources:
      cpu: 500
      memory: 512
    env:
      POSTGRES_DB: vendas
      POSTGRES_USER: app
    secrets:
      POSTGRES_PASSWORD: db-password
    volumes:
      - name: pgdata
        path: /var/lib/postgresql/data
        size: 10Gi
    health:
      tcp: 5432
      interval: 10s
      timeout: 3s
  - name: web
    image: minhaempresa/api-vendas:1.4.2
    count: 2
    resources:
      cpu: 250
      memory: 256
    env:
      DATABASE_URL: postgres://app@postgres.local:5432/vendas
      NODE_ENV: production
    secrets:
      DATABASE_PASSWORD: db-password
      JWT_SECRET: jwt-secret
    health:
      http: /healthz
      port: 3000
      interval: 5s
      healthy_after: 2
      unhealthy_after: 3
    ingress:
      host: api.minhaempresa.com
      port: 3000
      tls: true
```
Fifty lines, an entire app with a persistent database, injected secrets, health check, and a domain with a certificate.
Note: the referenced secrets (`db-password`, `jwt-secret`) need to exist beforehand. Create them with `heroctl secret create db-password --value '...'`. See the CLI reference for all commands.
Submit the job
With the file ready, one command sends the desired state to the cluster:
```shell
heroctl job submit app.yaml
```

Output:

```
job: api-vendas
version: 1 (new)
tasks: 2 (postgres, web)
plan:
  + create alloc postgres-a1b2 on node-2
  + create alloc web-c3d4 on node-1
  + create alloc web-e5f6 on node-3
deploy: rolling
status: accepted (id: dep-2026-04-26-001)
```
The cluster planned where each container will run and started execution in the background. The command returns in 1–2 seconds. It does not wait for the app to come up.
Track progress
To see what is actually happening:
```shell
heroctl alloc list --job api-vendas
```

```
ALLOC          TASK      NODE    STATUS   HEALTH    AGE
postgres-a1b2  postgres  node-2  running  healthy   12s
web-c3d4       web       node-1  running  starting  8s
web-e5f6       web       node-3  running  healthy   8s
```
The states that matter:
| Status | Means |
|---|---|
| `pending` | waiting for resources on a node |
| `running` + `starting` | container is up, health check has not passed yet |
| `running` + `healthy` | receiving traffic |
| `failed` | crashed; check the logs |
Real-time logs
To see the app's output as it comes up:
```shell
heroctl logs -f --job api-vendas --task web
```

The `-f` flag follows the stream. Exit with Ctrl+C. For a specific alloc:

```shell
heroctl logs -f --alloc web-c3d4
```
Logs are stored for 7 days by default. For long-term history, integrate with an external destination (see Observability).
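Since `heroctl logs` shows whatever the app writes to its output, a simple way to make that output easy for an external destination to ingest is to emit one JSON object per line. A minimal sketch; the `logLine` helper and its field names are made up for illustration, not part of HeroCtl:

```javascript
// Hypothetical helper: emit one JSON object per line on stdout so a log
// pipeline can parse fields instead of grepping free-form text.
function logLine(level, msg, fields = {}) {
  const entry = { ts: new Date().toISOString(), level, msg, ...fields }
  console.log(JSON.stringify(entry))
  return entry
}

logLine('info', 'server started', { port: 3000 })
```

Each line is then a self-describing record that survives the 7-day window once shipped elsewhere.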
Health check is mandatory
HeroCtl will not run a deploy without a health check. This is not a cosmetic restriction: without one, there is no way to distinguish a container that is still starting from one that is broken.
Your application needs to expose an endpoint that:
- Returns `200 OK` only when the app is ready to receive traffic.
- Validates real dependencies (database, cache, queue).
- Responds in under 1 second.
Minimal Node.js example:
```javascript
const express = require('express')
const app = express()
// `db` is your Postgres client (for example, a pg Pool)

app.get('/healthz', async (req, res) => {
  try {
    await db.query('SELECT 1') // fails when the database is unreachable
    res.status(200).json({ ok: true })
  } catch (err) {
    res.status(503).json({ ok: false, error: err.message })
  }
})
```
Warning: a `/healthz` that always returns 200 is worse than not having one. It hides breakage, and the deploy passes thinking everything is fine.
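One way to stay inside the one-second budget is to put a hard timeout around each dependency probe, so a hanging database connection becomes a fast 503 instead of a stuck health check. A sketch; `withTimeout` is a hypothetical helper, not part of HeroCtl:

```javascript
// Reject if the probe takes longer than `ms`, so /healthz answers quickly
// even when a dependency hangs instead of failing outright.
function withTimeout(promise, ms) {
  let timer
  const timeout = new Promise((_, reject) => {
    timer = setTimeout(() => reject(new Error('health probe timed out')), ms)
  })
  return Promise.race([promise, timeout]).finally(() => clearTimeout(timer))
}

// Usage inside the handler:
//   await withTimeout(db.query('SELECT 1'), 800)
```

With this, a slow dependency and a dead dependency both surface as an unhealthy check, which is exactly what the scheduler needs to see.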
Default rolling deploy
Without extra configuration, updates are rolling. The cluster swaps one replica at a time, waits for it to become healthy, and only then touches the next.
Rolling defaults:
| Parameter | Value | What it does |
|---|---|---|
| `max_parallel` | 1 | how many replicas to update at the same time |
| `min_healthy_time` | 10s | how long the new replica must stay healthy |
| `healthy_deadline` | 300s | how long to wait before considering failure |
| `auto_revert` | true | reverts on its own if the deploy fails |
To customize, add an `update` block in the task:

```yaml
update:
  max_parallel: 2
  min_healthy_time: 30s
  healthy_deadline: 600s
  auto_revert: true
```
More parallelism = fast deploy + larger risk window. More healthy time = slow deploy + greater confidence.
Update the app
You changed code, built, pushed the image with a new tag. To promote:
1. Edit `app.yaml`, changing the image tag (`1.4.2` → `1.4.3`).
2. Submit again:

```shell
heroctl job submit app.yaml
```
The cluster compares the two versions, sees that only the web image changed, and runs rolling on that task only. Postgres stays untouched.
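For reference, the edit touches a single line on the web task; a sketch of the relevant lines in `app.yaml` (the comment is illustrative):

```yaml
  - name: web
    image: minhaempresa/api-vendas:1.4.3  # was 1.4.2; everything else unchanged
```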
Track with:
```shell
heroctl deploy status dep-2026-04-26-002
# task: web
# strategy: rolling
# progress: 1/2 (50%)
# state: rolling
# next: web-e5f6 (in 8s)
```
Rollback
If something goes wrong and you need to go back:
```shell
heroctl job revert api-vendas --version 1
```
The command reapplies the previous version's spec, using the same rolling strategy. In 30–60 seconds you are back to the last known good state. There is no "database rollback": schema migrations are the application's responsibility.
Note: old versions are kept indefinitely. List them with `heroctl job history api-vendas` to see all versions and who submitted each one.
Next steps
- Want to understand when rolling is not enough? See Rolling, canary, blue-green, and rainbow.
- For all available commands: CLI reference.