Architecture

A topology you can audit on a whiteboard.

Three diagrams. One per concern. Hover any box for what it does and why it's there. No service meshes you didn't ask for. No vendor SDKs in the hot path.

Section 1 · Ingestion flow

From an emit() in your app to a row in libSQL.

Ingestion flow · target app → SSE

The target app calls the SDK, which signs and POSTs to /api/ingest. The portal verifies the signature, routes the event to the correct module's libSQL database, then fans out to the dashboard via SSE.
  • Your application: Anything that can sign an HTTP request — a Node service with @cuitty/sdk, a curl one-liner, an OpenTelemetry exporter, a CI script.
  • @cuitty/sdk: Thin convenience layer over the wire protocol. Plugins (audit, logs, deploys, …) emit typed events; the client batches and signs them with HMAC-SHA256.
  • POST /api/ingest: The single edge endpoint. Verifies the HMAC, deduplicates by Idempotency-Key, then hands the batch to the portal.
  • Portal router: Reads the event `type` field and routes the row to the correct module's libSQL database. Same Astro frontend the dashboard runs on.
  • Per-module libSQL: One database per module — audit.db, costs.db, deploys.db, etc. Self-contained backups, self-contained migrations, sqlite3-inspectable.
  • SSE stream: After persistence, the row is fanned out as a Server-Sent Event to any dashboard tab subscribed to that module's channel.
  • Dashboard UI: Astro-rendered server pages plus SolidJS islands for the live components. Reads from the per-module DBs; subscribes to the SSE stream.
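As a concrete sketch of that first hop, here is roughly what an emit() might look like from application code. The exact @cuitty/sdk surface (client constructor, plugin names, emit/flush signatures) is assumed for illustration, not quoted from the SDK:

```ts
// Hypothetical usage sketch — the @cuitty/sdk API shape shown here is an assumption.
import { createClient } from "@cuitty/sdk";

const cuitty = createClient({
  endpoint: "https://portal.example.com/api/ingest", // your portal's ingest URL
  apiKey: process.env.CUITTY_API_KEY!,
  webhookSecret: process.env.CUITTY_WEBHOOK_SECRET!, // used to HMAC-SHA256-sign each batch
});

// A typed event from the audit plugin. The client batches events,
// signs the batch, and retries with the same Idempotency-Key.
cuitty.emit({
  type: "audit.user.invited", // routed to audit.db by the portal router
  actor: "user_42",
  payload: { invitee: "dev@example.com", role: "viewer" },
});

await cuitty.flush(); // drain the batch before the process exits
```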

Each event carries an HMAC-SHA256 signature derived from the request body and your project's webhook secret. The edge re-derives the signature, compares it with constant-time equality, and rejects anything that doesn't match. After verification the row is routed to its module DB by `event.type`, then fanned out as a Server-Sent Event so any open dashboard tab updates in real time.
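A minimal sketch of that verification step using Node's built-in crypto module. The signature header name and function shape are assumptions; the HMAC-SHA256 derivation and constant-time comparison follow the description above:

```ts
import { createHmac, timingSafeEqual } from "node:crypto";

// Re-derive HMAC-SHA256 over the raw request body with the project's webhook
// secret, then compare against the client-supplied signature in constant time.
export function verifySignature(
  rawBody: string,
  signatureHex: string, // e.g. taken from an "x-cuitty-signature" header (name assumed)
  webhookSecret: string
): boolean {
  const expected = createHmac("sha256", webhookSecret).update(rawBody).digest();
  const received = Buffer.from(signatureHex, "hex");
  // timingSafeEqual throws on length mismatch, so guard the lengths first.
  return received.length === expected.length && timingSafeEqual(received, expected);
}
```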

Section 2 · Auth + RBAC topology

Identify, authenticate, authorize, persist.

Auth + RBAC topology · BetterAuth → SpiceDB → modules

Browser sessions and SDK API keys both flow through BetterAuth (backed by Postgres). Each request then consults SpiceDB over gRPC for a permission check before reaching the module's libSQL database.
  • Browser / SDK: Two trust paths: an interactive session (BetterAuth cookie) for the dashboard, or an API key + HMAC for the SDK.
  • BetterAuth: Session, magic link, OAuth, and API-key issuance. Stores users, sessions, and key hashes in Postgres — the one place we want strict relational guarantees.
  • SpiceDB: Zanzibar-style relationship-tuple store. One round-trip resolves: 'can user X read module Y in project Z?' Permissions live here, not in app code.
  • Module (libSQL): The audit/costs/logs/etc. database. Reads/writes only happen after BetterAuth has verified identity AND SpiceDB has authorized the action.
  • Postgres: Identity store: users, sessions, API keys, organisations. The single Postgres instance behind BetterAuth — tiny, replicable, predictable.

Identity lives in Postgres, behind BetterAuth. Permissions live in SpiceDB as Zanzibar-style relationship tuples — one round-trip resolves "can user X read module Y in project Z?". The module DBs never need to know who the user is; they only need to know that the request was authorized.
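A sketch of where that check sits in the request path, written as Hono middleware (the framework named above). The checkPermission helper stands in for the actual SpiceDB client call, and the route shape and subject/resource naming are assumptions:

```ts
import { Hono } from "hono";

const app = new Hono<{ Variables: { userId: string } }>();

// Hypothetical wrapper around SpiceDB's gRPC CheckPermission — the real portal
// would call the SpiceDB client library here; the argument shape is assumed.
async function checkPermission(args: {
  subject: string;    // e.g. "user:alice"
  permission: string; // e.g. "read"
  resource: string;   // e.g. "module:audit" within a project
}): Promise<boolean> {
  // Stub: issue a single CheckPermission RPC against SpiceDB here.
  return false;
}

// Runs before any module route touches its libSQL file.
app.use("/api/module/:module/*", async (c, next) => {
  const userId = c.get("userId"); // set earlier by the BetterAuth session / API-key step
  const module = c.req.param("module");

  const allowed = await checkPermission({
    subject: `user:${userId}`,
    permission: "read",
    resource: `module:${module}`,
  });

  if (!allowed) return c.json({ error: "forbidden" }, 403);
  await next();
});
```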

Section 3 · Storage layout

One file per module. Boring on purpose.

Storage layout · one libSQL DB per module · one Postgres for auth

Two relational stores at the top — Postgres for identity, SpiceDB for permissions — and a row of self-contained libSQL databases below, one per module. Back up with `cp`. Restore with `cp`. Inspect with `sqlite3`.
  • Postgres · identity: Users, sessions, API keys, organisation membership. BetterAuth's source of truth.
  • SpiceDB · permissions: Zanzibar relationship tuples. Queried per-request to authorise reads and writes against module DBs.
  • audit.db: Append-only, hash-chained audit log. Tamper-evident — verify the chain by re-hashing rows (a verification sketch follows this list).
  • costs.db: Cost ledger across providers. One table per concern (resources, costs, budgets, alerts).
  • deploys.db: Deploy timeline + rollback graph. Each row is a state transition with foreign-key links to environments.
  • logs.db: Structured log records, FTS5-indexed for full-text search and saved filters.
  • configs.db: Versioned config snapshots with unified diffs. Rollback is a one-line UPDATE.
  • performance.db: Per-process metric snapshots — memory, CPU, event loop, GC, system load — emitted on a 10s cadence.
  • repository.db: Code-index of every tracked repo: files, checksums, languages, deploy markers, workspace tree.
  • video.db: Generated browser-recording artifacts and the prompts that produced them.
  • forge.db: Name-generation sessions, scored candidates, and the workspaces they bootstrap.
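For the audit.db chain, a verification pass might look like the sketch below. The table and column names (audit_events, payload, prev_hash, hash), the genesis convention, and the use of the @libsql/client package are assumptions, not the module's actual schema:

```ts
import { createHash } from "node:crypto";
import { createClient } from "@libsql/client";

// Assumed layout: each row stores hash = sha256(prev_hash || payload),
// and the first row chains from the empty string.
const db = createClient({ url: "file:audit.db" });

export async function verifyAuditChain(): Promise<boolean> {
  const { rows } = await db.execute(
    "SELECT payload, prev_hash, hash FROM audit_events ORDER BY id ASC"
  );

  let expectedPrev = ""; // assumed genesis value
  for (const row of rows) {
    if (row.prev_hash !== expectedPrev) return false; // chain link broken
    const recomputed = createHash("sha256")
      .update(String(row.prev_hash) + String(row.payload))
      .digest("hex");
    if (recomputed !== row.hash) return false; // row contents were tampered with
    expectedPrev = String(row.hash);
  }
  return true;
}
```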

The modules don't share schemas, so they shouldn't share storage. Each libSQL file is a self-contained backup, a self-contained restore target, and a self-contained migration unit. If the costs module breaks a schema, the audit log doesn't notice.
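To make that isolation concrete, here is a minimal sketch of the per-module handles, again assuming the @libsql/client package; the module list, table names, and the migration statement are illustrative only:

```ts
import { createClient, type Client } from "@libsql/client";

// One client per module file — nothing is shared between handles.
const modules = ["audit", "costs", "deploys", "logs", "configs"] as const;

const dbs: Record<string, Client> = Object.fromEntries(
  modules.map((name) => [name, createClient({ url: `file:${name}.db` })])
);

// Migrations run per file: a schema change in costs.db cannot touch audit.db.
await dbs.costs.execute("ALTER TABLE budgets ADD COLUMN currency TEXT"); // illustrative
await dbs.audit.execute("SELECT count(*) FROM audit_events");            // unaffected
```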

Self-hosted? Identical topology.

The cloud build and the self-hosted build run the same Docker Compose, the same Postgres schema, the same SpiceDB definitions, and the same per-module libSQL files. You can take a backup from one and restore it on the other.

See self-hosted → for the deployment guide.