
Multi-Tenant Architecture

One database, one Worker deployment, unlimited tenants. Costs grow with usage, not with tenant count.

Why This Structure

Traditional multi-tenant approaches -- one database per tenant, or one deployment per tenant -- create linear cost growth. Ten tenants means ten databases. A hundred tenants means a hundred databases, each with idle connections and baseline costs.

We structured it differently because the cost math did not work otherwise:

| Resource | Scales with | Cost impact |
| --- | --- | --- |
| Neon PostgreSQL | Storage + compute time | Pay for actual queries, not idle connections |
| Hyperdrive | Connection count | Pools connections at the edge, so Neon sees fewer |
| Cloudflare Workers | Request count | Single deployment serves all tenants |
| Domain mapping | Tenant count | DNS records, near-zero marginal cost |

Adding a new tenant means inserting a row in tenants and a DNS record. No new database, no new deployment, no new infrastructure. This keeps infrastructure costs flat as tenant count grows.
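A sketch of what provisioning reduces to. The table and field names (`tenants`, `domain_mappings`) and the CNAME target are assumptions for illustration, not the real schema -- the point is that a new tenant is pure data, not infrastructure:

```typescript
// Illustrative only: field names and the CNAME target are assumptions.
interface ProvisionPlan {
  tenantRow: { id: string; name: string };          // new row in `tenants`
  domainRow: { tenantId: string; domain: string };  // new row in `domain_mappings`
  dnsRecord: { type: "CNAME"; name: string; target: string };
}

// Everything tenant #100 needs: one row, one mapping, one DNS record.
function provisionTenant(id: string, name: string, domain: string): ProvisionPlan {
  return {
    tenantRow: { id, name },
    domainRow: { tenantId: id, domain },
    dnsRecord: { type: "CNAME", name: domain, target: "app.syncupsuite.com" },
  };
}
```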

Three Tiers

Tier 0 (Platform)   ->  Control plane, global defaults, design tokens
Tier 1 (Partner)    ->  Branded instance, manages sub-tenants, owns domains
Tier 2 (Customer)   ->  Consumes platform, inherits branding, scoped access

Simple projects start with Tier 1 and Tier 2 dormant. You hardcode a single tenant_id, and the multi-tenant machinery activates later. No retrofitting required -- the schema already knows about tiers.
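What "dormant" looks like in code, as a minimal sketch (SINGLE_TENANT_ID and resolveTenantId are hypothetical names, not part of the real codebase): the same resolution hook serves both modes, so activating multi-tenancy later means supplying a lookup, not rewriting call sites.

```typescript
// Hypothetical sketch: single-tenant mode pins one id; multi-tenant mode
// resolves per request via a lookup supplied later.
const SINGLE_TENANT_ID = "00000000-0000-0000-0000-000000000001";

function resolveTenantId(
  host: string,
  lookup?: (host: string) => string | undefined,
): string {
  // Dormant tiers: until multi-tenancy activates, every request maps to one tenant.
  if (!lookup) return SINGLE_TENANT_ID;
  const id = lookup(host);
  if (!id) throw new Error(`no tenant mapped for ${host}`);
  return id;
}
```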

Tier Governance

Each tier has a visibility scope. The rules are strict because they are enforced at the database level, not in application code.

| Tier | Can see | Can modify | Token governance |
| --- | --- | --- | --- |
| T0 (Platform) | Everything across all tenants | Global defaults, protected tokens, platform config | Owns PROTECTED_TOKEN_PATHS -- no override allowed |
| T1 (Partner) | Own data + all T2 children | Own branding, own tenants, theme selection | Can override non-protected tokens for own scope |
| T2 (Customer) | Only own scoped data | Own content within partner's boundaries | Inherits T1 branding, can fine-tune non-locked tokens |

This hierarchy means a T1 partner admin cannot see another partner's data, even if they share the same database. A T2 customer cannot see other customers under the same partner. The database enforces this, not application-level filters.
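The visibility rules are simple enough to sketch as a pure function. The Actor shape and field names here are illustrative, not the real schema -- in production these rules live in RLS policies, not application code:

```typescript
type Tier = 0 | 1 | 2;

// Illustrative shape: partnerId is the owning T1 tenant for a T2 actor.
interface Actor {
  tier: Tier;
  tenantId: string;
  partnerId?: string;
}

// T0 sees everything; T1 sees itself and its T2 children; T2 sees only itself.
function canSee(viewer: Actor, target: Actor): boolean {
  if (viewer.tier === 0) return true;
  if (viewer.tier === 1) {
    return target.tenantId === viewer.tenantId || target.partnerId === viewer.tenantId;
  }
  return target.tenantId === viewer.tenantId;
}
```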

Token Governance Example

Design tokens follow the same governance model:

T0 defines:    --status-error: #DC2626     (PROTECTED -- cannot override)
T1 overrides:  --interactive-primary: ...   (brand color -- allowed)
T2 fine-tunes: --background-canvas: ...     (within T1's allowed range)

Protected tokens (error states, focus rings, WCAG-required contrast pairs) are locked at T0. Brand-level tokens flow down from T1 and can be selectively overridden at T2.
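A sketch of how that check could look in code. The PROTECTED_TOKEN_PATHS values and the lockedByPartner parameter are illustrative assumptions, not the real configuration:

```typescript
// Illustrative values: the real protected list is owned by T0.
const PROTECTED_TOKEN_PATHS = ["--status-error", "--focus-ring"];

// T0 may change anything; protected tokens are locked for T1 and T2;
// T2 is additionally bound by any tokens its partner has locked.
function canOverrideToken(
  tier: 0 | 1 | 2,
  tokenPath: string,
  lockedByPartner: string[] = [],
): boolean {
  if (tier === 0) return true;
  if (PROTECTED_TOKEN_PATHS.includes(tokenPath)) return false;
  if (tier === 2 && lockedByPartner.includes(tokenPath)) return false;
  return true;
}
```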

Data Isolation

Shared Schema with RLS

One Neon project, shared schema, Row-Level Security (RLS) for isolation.

```sql
-- Every tenant-scoped table includes tenant_id
CREATE TABLE documents (
  id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
  tenant_id UUID NOT NULL REFERENCES tenants(id),
  title TEXT NOT NULL,
  created_at TIMESTAMPTZ DEFAULT now()
);

-- RLS enforces isolation at the database level
ALTER TABLE documents ENABLE ROW LEVEL SECURITY;

-- missing_ok = true: an unset app.tenant_id yields NULL, so the
-- policy matches zero rows instead of raising an error
CREATE POLICY tenant_isolation ON documents
  USING (tenant_id = current_setting('app.tenant_id', true)::uuid);
```

The database enforces isolation. Application code never needs to remember WHERE tenant_id = ? -- RLS applies the filter to every query automatically. If the tenant context is unset or wrong, queries return zero rows instead of leaking another tenant's data.

Branch Isolation (Development)

Branch isolation is separate from tenant isolation. Neon branches give each developer or CI run their own copy of the database without copying data. This is for development workflow, not tenant isolation.

Production DB
  ├── branch: dev/feature-auth    (developer A)
  ├── branch: dev/fix-rls         (developer B)
  └── branch: ci/pr-142           (CI run)

Each branch is a copy-on-write fork. It starts with production data but diverges independently. Branches are cheap to create and dispose of.

Hyperdrive Connection Pooling

Cloudflare Workers are stateless. Every request to Neon would normally open a new TCP connection, which is expensive. Hyperdrive sits between Workers and Neon, maintaining a connection pool at the edge.

Request -> Worker -> Hyperdrive (pool) -> Neon

Configuration lives in wrangler.jsonc:

```jsonc
{
  "hyperdrive": [{
    "binding": "DB",
    "id": "your-hyperdrive-id"
  }]
}
```

In application code:

```ts
import { drizzle } from "drizzle-orm/node-postgres";

// The Hyperdrive binding exposes a pooled connection string
const db = drizzle(env.DB.connectionString);
```

Hyperdrive handles connection reuse. Your code does not manage pools.

Tenant Resolution

A single Cloudflare Worker deployment serves all tenants. The Worker resolves which tenant a request belongs to by inspecting the request domain.

```ts
// Simplified tenant resolution
const host = new URL(request.url).hostname;
const tenant = await db.query.domainMappings.findFirst({
  where: eq(domainMappings.domain, host),
});
if (!tenant) return new Response("Unknown tenant", { status: 404 });

// Set the tenant context for RLS (is_local = true scopes it to the transaction)
await db.execute(
  sql`SELECT set_config('app.tenant_id', ${tenant.id}, true)`
);
```

Domain types:

| Type | Example | Resolution |
| --- | --- | --- |
| Platform | app.syncupsuite.com | Tier 0 -- platform admin |
| Partner custom | brand.example.com | Tier 1 -- lookup in domain_mappings |
| Partner subdomain | acme.brandsyncup.com | Tier 1 -- parse subdomain |
| Customer | Inherits from partner | Tier 2 -- scoped by partner's tenant |
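The first three rows of the table can be sketched as a pure classifier (hostnames taken from the examples above; the Resolution shape is illustrative, and customer scoping happens after the partner lookup):

```typescript
// Illustrative shape: key is a subdomain slug or a custom domain to look up.
interface Resolution {
  tier: 0 | 1 | 2;
  via: "platform" | "subdomain" | "mapping";
  key?: string;
}

function classifyHost(host: string): Resolution {
  // Platform domain: Tier 0 admin surface.
  if (host === "app.syncupsuite.com") return { tier: 0, via: "platform" };

  // Partner subdomain: parse the slug directly from the hostname.
  const sub = host.match(/^([^.]+)\.brandsyncup\.com$/);
  if (sub) return { tier: 1, via: "subdomain", key: sub[1] };

  // Anything else: custom domain, resolved via domain_mappings.
  return { tier: 1, via: "mapping", key: host };
}
```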

Auth Graduation

Authentication follows the same tiered model. Users do not start with a full account -- they graduate into one:

Anonymous  ->  Preview/Inquiry  ->  OAuth (Google/GitHub)  ->  Full Account

Each step adds capabilities without requiring migration. A user who starts as anonymous and later signs in with Google retains their session history. The auth system (Better Auth + Firebase Identity) manages this graduation transparently.
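The graduation chain is effectively a forward-only state machine: a user can move up the ladder but never silently lose capabilities. A minimal sketch (level names abbreviated from the chain above; the function name is hypothetical):

```typescript
// Ordered from least to most capable, mirroring the graduation chain.
const LADDER = ["anonymous", "preview", "oauth", "full"] as const;
type AuthLevel = (typeof LADDER)[number];

// Graduation only moves forward; a lower-ranked target is ignored.
function graduate(current: AuthLevel, next: AuthLevel): AuthLevel {
  return LADDER.indexOf(next) > LADDER.indexOf(current) ? next : current;
}
```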

Tier governance applies to auth as well: a T1 partner can configure which OAuth providers are available to their T2 customers.

What This Means in Practice

  • Adding tenant #100 costs the same as tenant #10. No new infrastructure, no new deployments.
  • RLS is the security boundary, not application code. Even a bug in the application layer cannot leak cross-tenant data.
  • Tokens respect hierarchy. Platform tokens are protected. Partner tokens cascade down. Customer tokens fine-tune within bounds.
  • One Worker serves all. Domain mapping, not deployment configuration, determines tenant routing.


Released under the MIT License.