Multi-tenant SaaS with real isolation: 3 patterns and when each one becomes a nightmare

Pool, schema-per-tenant, app-per-tenant. Each pattern has obvious benefits and invisible costs. How to decide before the first serious B2B customer asks 'is my data isolated?'.

HeroCtl team · 13 min read

The first question from a serious B2B buyer, after the product passes the demo and before legal enters the room, is always the same: "is my data isolated from other customers?". If your answer is "ah, we filter by tenant_id in each query", the deal just went up in smoke. The buyer isn't asking for a technical justification — they're asking for a guarantee that survives an audit, an incident, and a junior dev rotation.

There are three real multi-tenant isolation patterns in B2B SaaS. Each one has obvious benefits in the sales pitch and invisible costs in operation. This post maps the three, shows exactly when each one becomes a nightmare, and explains the typical journey of a Brazilian startup that starts with fifty small customers and ends up serving a regulated bank.

Why this matters now — not next year

Multi-tenancy is an architectural decision that seems postponable until the exact moment when it stops being. Five forces are pushing this decision to the front of the roadmap of Brazilian B2B startups in 2026:

LGPD became a practical requirement, not just a legal one. The law has been in effect since 2020, but corporate DPOs only started asking for operational evidence in the last two years. The question stopped being "are you compliant?" and became "how do you demonstrate adequate handling of personal data?". Demonstration requires visible separation, access logging, and a clear deletion process. Pure pool makes all of that more difficult — not impossible, but harder to justify to an auditor who has never seen your architecture.

A large B2B customer requires isolation as a contract prerequisite. That's the point of the opening paragraph. If your pipeline has a R$50k-per-month proposal with a regional logistics operator, it's practically certain that their information security department will send a one-hundred-eighty-question questionnaire. Almost half the questions are about isolation. Answering "we share a database with logical filters" delivers the deal to the competitor that answered "dedicated schema per customer" — even if the real difference, in terms of risk, is small.

Sectoral compliance may require physical isolation. Health (LGPD + CFM), financial (Bacen, CVM), private basic education (LGPD + ECA), insurance (SUSEP). Regulated sectors occasionally list "data segregation per customer at the physical layer" as a required control. When that appears, schema-per-tenant can satisfy it with some effort; pool can't.

One customer will become ten times bigger than the others. B2B SaaS usage distribution follows a power law. Your largest customer will consume more resources than the fifty smallest combined. In pure pool, that customer degrades the experience of everyone else — and you can't charge more for it without a billing model that prices usage, which nobody wants to build before it's necessary.

And the fifth force, perhaps the most important: migrating from one pattern to another after having customers in production is a project of months, not days. You'll choose wrong on some axis — everyone does — but choosing consciously makes the difference between "six-week refactor" and "six-month rewrite".

Pattern 1 — Pool (shares everything)

Simplest possible setup. One database, one application instance, one infrastructure stack. All customers live in the same tables. Each application query has an additional WHERE tenant_id = ? filter injected by middleware before reaching the database. Postgres Row-Level Security (RLS) enforces this filter at the database level — a second layer that fires even if the application middleware forgets.
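A minimal sketch of the two-layer setup, assuming an invoices table with a tenant_id column and a middleware that sets an app.tenant_id variable per request (the table, the setting name, and the UUID type are illustrative):

    -- layer 2: the database refuses cross-tenant rows even if middleware forgets
    ALTER TABLE invoices ENABLE ROW LEVEL SECURITY;
    ALTER TABLE invoices FORCE ROW LEVEL SECURITY;  -- applies to the table owner too

    -- rows are visible only when tenant_id matches the per-request setting
    CREATE POLICY tenant_isolation ON invoices
      USING (tenant_id = current_setting('app.tenant_id')::uuid);

    -- layer 1: middleware runs this at the start of each transaction
    SET LOCAL app.tenant_id = '<tenant uuid>';

If the setting is missing, current_setting errors out and the query fails closed, which is the safe direction to fail in.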

New customer onboarding is literally an INSERT into the tenants table. Thirty milliseconds later the customer already has a functional environment. The marginal cost of adding a tenant is practically zero — you're just using more rows in the same database, more bytes in the same cache.

Pool's strong points:

  • Low operational cost. One database to backup, one stack to monitor, one application version running.
  • Instant onboarding. Credit card signup flow is trivial.
  • Sublinear scale. A thousand customers don't cost a thousand times more than one.
  • Cross-tenant analytics are natural. Internal "monthly active users" dashboard is a SELECT COUNT(*) without gymnastics.
  • Migrations are simple. One ALTER TABLE applies to all customers at once.

Pool's weak points:

  • A SQL bug is a critical incident. A forgotten WHERE, a JOIN that leaks into another context, a poorly written migration — and customer A's data appears on customer B's screen. That kind of incident has killed companies (contracts cancelled en masse, irreversible loss of trust). Postgres RLS is the safety belt that drastically reduces that risk, but it requires discipline to configure well and to test every role.
  • Noisy neighbor. A customer that fires a heavy report at 2 PM on Tuesday consumes the connection pool and degrades latency for everyone else. You can add per-tenant query limits, but that's additional work; a sketch follows this list.
  • Backup is all-or-nothing. Restoring a specific customer after a destructive operation requires a snapshot of the entire database, a restore into a parallel environment, and a selective export-import. An annoying one-to-four-hour operation.
  • Compliance requiring physical separation doesn't fit. If the customer asks "where does my data physically live?", the answer is "in the same data file as every other customer's data" — a truth that drives away certain buyer profiles.
  • Customization becomes a nullable column. Customer Y needs an extra field. You add it. Another customer doesn't use that field. In six months no one remembers what that extra_data_3 column is for. That accumulation is one of the most predictable symptoms of a mature pool.
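A cheap version of the per-tenant limit mentioned in the noisy-neighbor bullet, assuming the middleware knows the tenant's plan before running its queries (the budget value and setting name are illustrative):

    -- scope a per-statement budget to this transaction only
    BEGIN;
    SET LOCAL statement_timeout = '15s';  -- heavy reports get cancelled past the budget
    SET LOCAL app.tenant_id = '<tenant uuid>';
    -- ...tenant queries run here...
    COMMIT;

It doesn't stop a tenant from hammering the pool with many small queries, but it caps the single runaway report that ruins everyone's Tuesday afternoon.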

When pool makes sense: SMB B2B SaaS (small and medium customers), tenants relatively similar in usage, low regulatory risk, a small engineering team (three to eight people), a product still searching for product-market fit. Practically every SaaS starts here — and rightly so. The mistake isn't starting with pool; it's not knowing when to leave.

Pattern 2 — Schema-per-tenant (shared database, separate schemas)

Architectural middle ground. You still have one database instance — one Postgres instance running, with its parameters, connections, and replication. But inside it, each customer has their own schema. Postgres calls it a schema; MySQL calls it a database; the concept is the same: a named namespace inside the server, with its own tables, indexes, and privileges.

The application selects the correct schema via SET search_path TO tenant_acme (or equivalent) at the beginning of each connection, based on which tenant is making the request. Tables exist with the same structure in all schemas, but are physically separated: schema tenant_acme has its own users rows, tenant_xyz has its own, and queries inside one schema don't see another schema's tables without explicit privileges.
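A minimal provisioning sketch for this model, assuming a migration runner creates the base tables right after the schema exists (the role name, password placeholder, and connection limit are illustrative):

    -- one role and one schema per tenant; the role owns only its namespace
    CREATE ROLE tenant_acme LOGIN PASSWORD '<secret>' CONNECTION LIMIT 20;
    CREATE SCHEMA tenant_acme AUTHORIZATION tenant_acme;

    -- each request then selects the namespace before touching tables
    SET search_path TO tenant_acme;

The CONNECTION LIMIT clause is the same mechanism the resource-quota point below leans on.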

Schema-per-tenant's strong points:

  • Strong data isolation by default. No WHERE tenant_id anywhere — the tables are physically separated. A SQL bug stays circumscribed to the current schema.
  • Per-tenant backup is practical. pg_dump --schema=tenant_acme exports only that customer. Restoring is the same in reverse: bring up a parallel environment, import the schema, and move the specific data back.
  • Resource quotas per schema. Postgres allows limiting connections per role, and roles can be tied to schemas. You can guarantee that a large tenant doesn't consume all the database connections.
  • Clean customization. Customer Y needs an extra field? Add it in their schema only. Other customers don't even know the field exists. The base schema stays clean.
  • Demonstrating separation becomes easy in an audit. "Each customer has their own database namespace with isolated privileges" is an answer that satisfies most security questionnaires.

Schema-per-tenant's weak points:

  • Migrations multiply by N. A ten-minute migration across a thousand schemas becomes roughly one hundred sixty hours of database work if it runs serially. You need careful parallelism, migration scripts that know the schema set, and a planned maintenance window — or a non-blocking migration strategy that works schema by schema.
  • Connection pooling gets complicated. If each connection needs a SET search_path per tenant, plain pgBouncer doesn't work — it reuses connections between different customers. Solutions: a pool per schema (breaks at high cardinality), session-mode pooling (slower), or application middleware that manages the reset (see the sketch after this list).
  • Cross-tenant analytics get expensive. To answer "how many active users do I have across all customers?" you need to union a thousand tables. The real solution: daily ETL to a separate warehouse (ClickHouse, BigQuery), with a denormalized tenant_id.
  • A bug in the switching code is still a risk. If the middleware selects the wrong schema due to a session-leakage bug between requests, you have the same type of leak that pool has. Less common, but possible.
  • Practical schema limit. Postgres handles tens of thousands of schemas, but the database catalog gets heavy at some point — slow catalog queries, autovacuum contention. Companies running over five thousand schemas on a single instance report pain.
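A sketch of the reset workaround from the pooling bullet above: with pgBouncer in transaction mode, SET LOCAL scopes the schema choice to a single transaction, so a reused server connection never carries another tenant's search_path (the schema name is illustrative):

    BEGIN;
    SET LOCAL search_path TO tenant_acme;
    SELECT count(*) FROM users;  -- resolves to tenant_acme.users
    COMMIT;                      -- search_path reverts on commit or rollback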

When schema-per-tenant makes sense: mid-market B2B SaaS, ten to a thousand customers, some high-value customers that justify customization, moderate compliance. It's the "intermediate" pattern in the literal sense — you trade some of pool's operational simplicity for stronger isolation and customization flexibility.

Pattern 3 — App-per-tenant (complete silo)

Each customer receives a dedicated instance of everything: application, database, cache, job queue, scheduler. All they share is the physical infrastructure — the cluster of machines where the containers run. Each workload has its own database, its own URL (acme.app.com, customer-xyz.app.com), and potentially its own version of the application.

A serious implementation of this pattern requires an orchestrator. Without orchestration, provisioning a new customer means manually creating a virtual machine, running database setup, deploying the application, configuring DNS, and issuing a TLS certificate — an hours-long operation nobody will tolerate repeating twenty times a month. With an orchestrator, that's a parameterized job: you trigger a definition that says "new tenant acme, enterprise plan, isolated database, automatic certificate", and the cluster allocates, configures, and fires it up in one to three minutes.

Kubernetes does that with namespaces and Helm. HeroCtl does it with job templates. Other orchestrators do it with their own primitives. What matters is that the time to onboard a new customer in this architecture — minutes, not seconds like pool — doesn't become human pain, because it's automated.

App-per-tenant's strong points:

  • Maximum isolation. There's no shared code querying data from more than one customer — physically impossible. SQL bug affects only that instance's customer.
  • Total customization. Customer A can run version 2.4 of the application, customer B version 2.5. Useful for gradual rollouts, or for customers who asked for a specific patch.
  • Isolated failure. If customer A's database gets corrupted, customer B doesn't even notice. Customer A has an outage; customer B doesn't.
  • Heavy compliance becomes feasible. FedRAMP, HIPAA with strict multi-tenant requirements, contracts with a "dedicated infrastructure" clause — all pass.
  • Regional deploys per customer. Brazilian customer with a requirement that data stay on national territory? Run in a São Paulo datacenter. European customer? Frankfurt. The "run the tenant where they need to be" primitive starts to exist.

App-per-tenant's weak points:

  • Cost scales linearly. A thousand small customers cost roughly a thousand times more than one customer. No pooling gains. For low-ticket customers, the margin disappears.
  • Onboarding takes minutes, not seconds. That can be unacceptable for self-service models with credit card signup. It works for assisted sales models where onboarding is a process, not a purchase flow.
  • Operations multiply by N. Each database needs backup, each application needs monitoring, each deploy needs validation. Without centralized orchestration tools, it becomes unfeasible at two dozen customers.
  • Cross-tenant analytics are expensive. Worse than schema-per-tenant — you have to sync data from completely separate databases. ETL to a common warehouse is even more necessary.
  • Minimum infrastructure cost per tenant. Each dedicated Postgres has an overhead of two hundred to five hundred megabytes of RAM even when idle. Each Go or Node application adds another hundred to two hundred megabytes. The spending floor is real.

When app-per-tenant makes sense: enterprise SaaS, high ARR per customer (R$10k/month per customer and up is a comfortable reference), demanding compliance, customer customization as a competitive differential. It also works in contexts of fifty to a thousand customers where the average ticket sustains the cost. Companies that sell self-service on Stripe and charge R$99/month per user don't fit here — the economics don't work.

Comparative table

Criterion | Pool | Schema-per-tenant | App-per-tenant
--- | --- | --- | ---
Cost per tenant | Sublinear (almost zero additional) | Almost linear (small overhead) | Linear (dedicated instance)
Onboarding time | Seconds (INSERT) | Seconds to minutes (CREATE SCHEMA + migrate) | Minutes (provision stack)
Performance overhead | None (shares cache, etc.) | Small (more relations in the catalog) | High (overhead per instance)
Risk of leak from a bug | High (mitigated by RLS) | Medium (mitigated by search_path) | Practically zero
Per-tenant backup | Hard (full snapshot) | Easy (pg_dump --schema) | Trivial (dedicated backup)
Customer customization | Expensive (nullable columns) | Good (extra fields in the schema) | Total (own app version)
Enterprise compliance | Hard to demonstrate | Demonstrable | Strong by construction
Ideal tenant range | 1 to 10k | 10 to 5k | 10 to 1,000
Cross-tenant analytics | Trivial (one query) | Heavy (UNION N tables or ETL) | Heavy (mandatory ETL)
Minimum team to operate | 2 to 5 devs | 4 to 10 devs with basic infra | 4 to 10 devs with an orchestrator

The upper limits of the tenant ranges are approximate — companies have exceeded all of them with effort. The numbers serve as a reference for when it starts to hurt.

The typical Brazilian SaaS journey

Most Brazilian B2B SaaS follow a predictable path, and understanding the path helps you choose for your current stage without under- or over-provisioning.

Stage 1: zero to fifty customers. Pool is the obvious choice. Small team, low cost, nobody has asked for compliance yet, all customers are similar in usage. Focus on product-market fit — any hour spent on isolation now is an hour stolen from product. Postgres RLS from day one is the minimum defensive investment.

Stage 2: fifty to five hundred customers, and the first mid-market B2B customer arrives. This is where things start to tighten. That customer with one hundred fifty users consumes six times more resources than the others. The security questionnaire arrives with the question about isolation. Evaluating schema-per-tenant becomes rational. Hybrid is also an option: pool for the small ones, a dedicated schema for the bigger ones who explicitly asked. Migration at this stage is less painful because the customer base is still manageable.

Stage 3: five hundred customers or the first enterprise customer. Now the decision is structural. Schema-per-tenant for everyone? App-per-tenant for enterprise and schema for the rest? Hybrid with three layers (pool for free, schema for paid, app for enterprise)? The answer depends on the customer mix — companies with a few very large customers tend toward app-per-tenant; companies with a thousand mid-market customers stay on schema-per-tenant.

Stage 4: enterprise mode. App-per-tenant for high-value customers, with schema-per-tenant or pool sustaining the smaller ones. That's the state of companies like Salesforce (whose classic architecture is a heavily optimized shared pool with metadata-driven separation at extreme scale), Notion (a highly optimized pool), and newer enterprise tools that adopt app-per-tenant from birth.

The transition between stages is where the most expensive engineering of a SaaS career lives. Whoever has been through it knows the smell.

How HeroCtl helps in stages 3 and 4

The app-per-tenant model requires a competent orchestrator. That's non-negotiable: without automated provisioning, operational complexity makes the model unfeasible. There are four primitives an orchestrator needs to deliver for app-per-tenant to work — and here is how HeroCtl handles each:

Parameterized job templates. You describe a "tenant" once — which application runs, which database, which ingress, which environment variables, which CPU and memory quota. For each new customer, you only vary the parameters (name, subdomain, plan). In HeroCtl, that's a short job spec with variable placeholders.

Onboarding API. POST /v1/jobs with the new customer's variables. In seconds to a few minutes, the cluster provisions containers, brings up the database, registers it in the internal router, and issues an automatic TLS certificate for acme.app.com. No manual operation.

Integrated subdomain routing. Each tenant gets their own subdomain with automatic TLS. The orchestrator's internal router resolves acme.app.com to the right container without you configuring DNS per customer — a DNS wildcard points to the cluster, and the orchestrator does the rest.

Per-tenant quotas and auditing. Each job carries resource limits (CPU, RAM, disk). A customer who tries to consume more than the plan allows saturates at their own limit and doesn't affect neighbors. On the Business plan, there's a detailed log of who deployed which version of which tenant, and when — useful for internal audits and for answering customer questionnaires.

The HeroCtl public cluster runs today on four servers totaling five vCPUs and ten gigabytes of RAM, sustaining multiple sites with automatic TLS. When a coordinator node goes down, the cluster elects another coordinator in about seven seconds — a window short enough that customers don't notice, and an important operational detail for anyone operating app-per-tenant in production. We're not promising magic: we're describing what already runs.

Five expensive errors in multi-tenant

Errors that appear with enough frequency to warrant an explicit warning.

Sharing a schema from day one without RLS. Pool without Row-Level Security has just one defense layer: the application middleware. Any single layer fails at some point. RLS is the second layer — cheap to configure, and the difference between an embarrassing incident and a fatal one. Configure it from the start, even if the team thinks it's overkill.

Migrating too late from pool to schema. A company that grew to ten thousand customers in pool and discovers it needs to migrate to schema-per-tenant has a four-to-eight-month project ahead. A middleware rewrite, data migration in maintenance windows, per-customer validation. Whoever migrated at five hundred tenants spent three weeks; whoever migrated at ten thousand spent a quarter.

Ad-hoc customization in pool. Customer Y asks for an extra field. You add it as a nullable column. Within three months, other customers have asked for three more columns. In six months no one understands the table anymore. What seemed like a shortcut becomes debt that pays interest every sprint. Resist that pattern — or accept that you need schema-per-tenant to serve those customizations cleanly.

Backup of the main database only. When you leave pool, backup needs to be rethought. A separate schema needs a conscious separate backup. App-per-tenant needs per-database backup. Forgetting that and discovering it during an incident is catastrophic — companies have lost a single customer's data because the global backup didn't cover per-tenant databases.

Cross-tenant analytics in schema-per-tenant via UNION. It works at ten customers, gets heavy at a hundred, and becomes unfeasible at a thousand. Build ETL to a separate warehouse early — ClickHouse or BigQuery with a denormalized tenant_id is the standard solution. Trying to keep everything in the transactional database is a recipe for forty-minute queries.
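While the union still works, generating it beats maintaining it by hand. A sketch that rebuilds a cross-tenant view from the tenant_metadata table described in the FAQ below (the view and column names are illustrative):

    -- regenerate a cross-tenant view over every registered schema;
    -- fine at tens of tenants, exactly what stops scaling at hundreds
    DO $$
    DECLARE parts text;
    BEGIN
      SELECT string_agg(
               format('SELECT %L AS tenant, count(*) AS users FROM %I.users',
                      schema_name, schema_name),
               ' UNION ALL ')
        INTO parts
        FROM tenant_metadata;
      EXECUTE 'CREATE OR REPLACE VIEW all_tenant_users AS ' || parts;
    END $$;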

LGPD and multi-tenancy

LGPD doesn't require a specific architectural model, but it does require you to demonstrate adequate handling. Each pattern has different implications.

Pool: you need to demonstrate robust logical separation (RLS configured, tested, audited), a personal data access log (who read what, and when), and a deletion process that covers all relevant tables for the right to be forgotten (article 18). All viable, but with more demonstration work.
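A sketch of what "a deletion process that covers all relevant tables" means in pool: one transaction that has to know every table holding that tenant's personal data (the table names are illustrative):

    -- right to be forgotten in pool: every tenant-owned table must be listed,
    -- and a table someone forgets to add here is a silent compliance gap
    BEGIN;
    DELETE FROM audit_events WHERE tenant_id = '<tenant uuid>';
    DELETE FROM invoices     WHERE tenant_id = '<tenant uuid>';
    DELETE FROM users        WHERE tenant_id = '<tenant uuid>';
    DELETE FROM tenants      WHERE id       = '<tenant uuid>';
    COMMIT;

Compare that with the single DROP SCHEMA in the next pattern below.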

Schema-per-tenant: demonstration becomes simpler. "Each customer has their isolated schema, with their own privileges, and data deletion is a DROP SCHEMA" — a phrase that satisfies an auditor without pain. The right to be forgotten is practically trivial in this model.

App-per-tenant: physical separation is directly demonstrable. The audit becomes even simpler. The right to be forgotten amounts to destroying the customer's database.

In all models: the personal data access log (article 16, storage requirement) is the application layer's responsibility — independent of the isolation model. Build that log early.

FAQ

Is Postgres RLS reliable in production? Yes, and widely used. There are two pitfalls: ensure every role connecting to the database is non-privileged (superusers bypass RLS), and test the policies with automated tests that run in CI. Whoever configures RLS once and never tests it discovers the holes later.
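A sketch of the kind of CI test worth having, reusing the app.tenant_id convention from the pool section (the role, table, and placeholder UUIDs are illustrative):

    -- negative test: connected as the non-privileged app role,
    -- tenant A must see zero of tenant B's rows
    SET ROLE app_user;
    BEGIN;
    SET LOCAL app.tenant_id = '<tenant A uuid>';
    SELECT count(*) = 0 AS isolated
      FROM invoices
     WHERE tenant_id = '<tenant B uuid>';  -- RLS should hide these rows entirely
    ROLLBACK;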

How to automate migrations in schema-per-tenant? The common pattern: a tenant_metadata table with the list of schemas and the current version of each. The migration job consults it, applies in parallel (with a concurrency limit so as not to saturate the database), and updates the version. Tools like Flyway and migrate with a custom wrapper work. Reserve a maintenance window for big migrations even with parallelism.
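A serial sketch of that pattern, using the tenant_metadata table just described; real runners parallelize this across connections, but the shape is the same (the example ALTER and version number are illustrative):

    -- apply migration 42 to every schema still behind, recording progress
    DO $$
    DECLARE t record;
    BEGIN
      FOR t IN SELECT schema_name FROM tenant_metadata WHERE version < 42 LOOP
        EXECUTE format('ALTER TABLE %I.users ADD COLUMN IF NOT EXISTS locale text',
                       t.schema_name);
        UPDATE tenant_metadata SET version = 42 WHERE schema_name = t.schema_name;
      END LOOP;
    END $$;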

Doesn't app-per-tenant get too expensive to scale? It does, if the average ticket is low. Practical rule: R$10k/month of revenue per customer comfortably sustains the cost of dedicated infra. Below that, the margin tightens. For small customers, keep pool or schema. App-per-tenant is a weapon for customers who pay for exclusivity.

Can I mix models (app-per-tenant for high-value, pool for the rest)? Yes, and hybrid is the most common final state in mature SaaS. Operational complexity increases — you operate two architectures, not one — but the savings pay off when high-value customers justify the effort. It requires a team of at least six to ten engineers with operational maturity.

tenant_id in path or subdomain? Subdomain (acme.app.com) is usually better for branding and isolated cookies. Path (app.com/acme) is simpler in DNS and routing. Subdomain combines better with app-per-tenant; path combines well with pool. Choose early, because changing later breaks customer integrations.

Is encryption per tenant feasible? In pool, a per-tenant key in the application layer is the path — reasonable overhead, with per-tenant keys derived from a protected master key. In schema-per-tenant, same strategy. In app-per-tenant, database encryption at rest already gives natural isolation. Per-tenant encryption is expensive and rarely required — only go there if the customer explicitly asks for it in the contract.

How long does onboarding take in each model? Pool: thirty to two hundred milliseconds (a database transaction). Schema-per-tenant: two to thirty seconds (CREATE SCHEMA + migrations). App-per-tenant: thirty seconds to three minutes (provision instances, bring up the database, register TLS). That time enters the signup UX flow — credit card self-service models don't accommodate minutes-long waits without some form of queue or async notification.

Closing

Choosing a multi-tenant pattern isn't technically difficult — it's organizationally difficult. Difficult because it requires anticipating three to five years of product and customer growth, and almost no one anticipates well. The defense isn't choosing perfectly; it's choosing consciously, with the migration path to the next stage mapped, and with instrumentation that warns when the current model is asking for retirement.

Pool is right at the start. Schema-per-tenant is right in the transition to mid-market. App-per-tenant is right when the customer pays for exclusivity or when compliance requires it. Hybrid is the common destination.

If you're building Brazilian B2B SaaS in 2026 and the product is reaching the stage where multi-tenancy matters, it's worth knowing an orchestrator that makes app-per-tenant operationally accessible to small teams:

curl -sSL get.heroctl.com/install.sh | sh

Four servers, five vCPUs, ten gigabytes of RAM — the public demo cluster runs on resources that fit in any regional cloud plan. Coordinator election in about seven seconds when something goes down. Embedded routing and TLS. It's the foundation a small team usually lacks for an isolated-tenant architecture.

For more on Brazilian SaaS infra, see Postgres in production: managed vs self-hosted and How much it costs to host Brazilian SaaS in 2026.

#multi-tenancy #saas #isolation #engineering #architecture