A topology you can audit on a whiteboard.
Three diagrams. One per concern. Hover any box for what it does and why it's there. No service meshes you didn't ask for. No vendor SDKs in the hot path.
Ingestion flow
Target app to dashboard, tracing the path of a single event. SDK signs, edge verifies, portal routes, libSQL persists, SSE streams.
Auth + RBAC topology
BetterAuth handles identity (Postgres). SpiceDB handles permissions (gRPC). Modules read from per-module libSQL only after both have green-lit the request.
Storage layout
One Postgres for auth, one SpiceDB for permissions, one libSQL file per module. Self-contained backups, predictable migrations.
From an emit() in your app to a row in libSQL.
- Your application: Anything that can sign an HTTP request — a Node service with @cuitty/sdk, a curl one-liner, an OpenTelemetry exporter, a CI script.
- @cuitty/sdk: Thin convenience layer over the wire protocol. Plugins (audit, logs, deploys, …) emit typed events; the client batches and signs them with HMAC-SHA256.
- POST /api/ingest: The single edge endpoint. Verifies the HMAC, deduplicates by Idempotency-Key, then hands the batch to the portal.
- Portal router: Reads the event `type` field and routes the row to the correct module's libSQL database. Runs inside the same Astro app that serves the dashboard.
- Per-module libSQL: One database per module — audit.db, costs.db, deploys.db, etc. Self-contained backups, self-contained migrations, sqlite3-inspectable.
- SSE stream: After persistence, the row is fanned out as a Server-Sent Event to any dashboard tab subscribed to that module's channel.
- Dashboard UI: Astro-rendered server pages plus SolidJS islands for the live components. Reads from the per-module DBs; subscribes to the SSE stream.
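The SSE fan-out step above can be sketched as a per-module subscriber registry. This is a minimal in-memory illustration, not the portal's actual implementation: the names (`subscribe`, `publish`) are assumptions, and in the real portal each listener would be an open SSE response rather than a callback.

```typescript
// One subscriber set per module channel; publish formats the row
// as a text/event-stream frame and pushes it to every listener.
type Listener = (frame: string) => void;

const channels = new Map<string, Set<Listener>>();

function subscribe(module: string, fn: Listener): () => void {
  let set = channels.get(module);
  if (!set) channels.set(module, (set = new Set()));
  set.add(fn);
  return () => set!.delete(fn); // call when the dashboard tab closes
}

function publish(module: string, row: unknown): void {
  const frame = `event: ${module}\ndata: ${JSON.stringify(row)}\n\n`;
  for (const fn of channels.get(module) ?? []) fn(frame);
}
```

A dashboard tab subscribed to `audit` sees only audit rows; other modules' traffic never touches its connection.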
Each event carries an HMAC-SHA256 signature derived from the request body and your project's webhook secret. The edge re-derives the signature, compares it with constant-time equality, and rejects anything that doesn't match. After verification the row is routed to its module DB by `event.type`, then fanned out as a Server-Sent Event so any open dashboard tab updates in real time.
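The verify-then-dedup step can be sketched with Node's built-in crypto. This is an illustrative sketch under assumed names (`sign`, `acceptEvent`, an in-memory dedup set); the edge's real dedup store would outlive a single process.

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Derive the signature the same way the SDK does: HMAC-SHA256 over the raw body.
function sign(body: string, secret: string): string {
  return createHmac("sha256", secret).update(body).digest("hex");
}

const seen = new Set<string>(); // Idempotency-Key dedup (in-memory sketch)

function acceptEvent(
  body: string,
  signature: string,
  idempotencyKey: string,
  secret: string,
): boolean {
  const expected = Buffer.from(sign(body, secret), "hex");
  const given = Buffer.from(signature, "hex");
  // Constant-time comparison; a length mismatch is an immediate reject.
  if (given.length !== expected.length || !timingSafeEqual(given, expected)) {
    return false;
  }
  if (seen.has(idempotencyKey)) return false; // duplicate delivery
  seen.add(idempotencyKey);
  return true;
}
```

`timingSafeEqual` is what makes the comparison constant-time: a plain `===` on hex strings can leak how many leading bytes matched.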
Identify, authenticate, authorize, persist.
- Browser / SDK: Two trust paths: an interactive session (BetterAuth cookie) for the dashboard, or an API key + HMAC for the SDK.
- BetterAuth: Session, magic link, OAuth, and API-key issuance. Stores users, sessions, and key hashes in Postgres — the one place we want strict relational guarantees.
- SpiceDB: Zanzibar-style relationship-tuple store. One round-trip resolves: 'can user X read module Y in project Z?' Permissions live here, not in app code.
- Module (libSQL): The audit/costs/logs/etc. database. Reads/writes only happen after BetterAuth has verified identity AND SpiceDB has authorized the action.
- Postgres: Identity store: users, sessions, API keys, organisations. The single Postgres instance behind BetterAuth — tiny, replicable, predictable.
Identity lives in Postgres, behind BetterAuth. Permissions live in SpiceDB as Zanzibar-style relationship tuples — one round-trip resolves "can user X read module Y in project Z?". The module DBs never need to know who the user is; they only need to know that the request was authorized.
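The shape of a Zanzibar-style check can be illustrated with a tiny in-memory tuple set. A real deployment asks SpiceDB's CheckPermission API over gRPC; everything here, including the tuple strings and the `canRead` helper, is a hypothetical sketch of how relationship tuples answer "can user X read module Y in project Z?".

```typescript
// Tuples in the "resource#relation@subject" shape SpiceDB uses.
const tuples = new Set([
  "project:zeta#member@user:alice",
  "module:audit/project:zeta#reader@project:zeta#member",
]);

function canRead(user: string, module: string, project: string): boolean {
  // Direct grant to the user, or a grant via project membership (two-hop walk).
  if (tuples.has(`module:${module}/project:${project}#reader@user:${user}`)) {
    return true;
  }
  return (
    tuples.has(`module:${module}/project:${project}#reader@project:${project}#member`) &&
    tuples.has(`project:${project}#member@user:${user}`)
  );
}
```

Note that nothing in the tuple store knows what an audit row looks like: permissions and data stay in separate systems, which is exactly why the module DBs never need to know who the user is.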
One file per module. Boring on purpose.
- Postgres · identity: Users, sessions, API keys, organisation membership. BetterAuth's source of truth.
- SpiceDB · permissions: Zanzibar relationship tuples. Queried per-request to authorise reads and writes against module DBs.
- audit.db: Append-only, hash-chained audit log. Tamper-evident — verify the chain by re-hashing rows.
- costs.db: Cost ledger across providers. One table per concern (resources, costs, budgets, alerts).
- deploys.db: Deploy timeline + rollback graph. Each row is a state transition with foreign-key links to environments.
- logs.db: Structured log records, FTS5-indexed for full-text search and saved filters.
- configs.db: Versioned config snapshots with unified diffs. Rollback is a one-line UPDATE.
- performance.db: Per-process metric snapshots — memory, CPU, event loop, GC, system load — emitted on a 10s cadence.
- repository.db: Code-index of every tracked repo: files, checksums, languages, deploy markers, workspace tree.
- video.db: Generated browser-recording artifacts and the prompts that produced them.
- forge.db: Name-generation sessions, scored candidates, and the workspaces they bootstrap.
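The tamper-evidence claim for audit.db — "verify the chain by re-hashing rows" — can be sketched in a few lines. This is a minimal illustration with assumed field names (`seq`, `payload`, `prevHash`, `hash`), not the module's actual row schema.

```typescript
import { createHash } from "node:crypto";

type AuditRow = { seq: number; payload: string; prevHash: string; hash: string };

// Each row's hash covers its content plus the previous row's hash,
// so editing any row invalidates every row after it.
function rowHash(seq: number, payload: string, prevHash: string): string {
  return createHash("sha256").update(`${seq}|${payload}|${prevHash}`).digest("hex");
}

function append(log: AuditRow[], payload: string): void {
  const prevHash = log.length ? log[log.length - 1].hash : "genesis";
  const seq = log.length;
  log.push({ seq, payload, prevHash, hash: rowHash(seq, payload, prevHash) });
}

function verifyChain(log: AuditRow[]): boolean {
  let prev = "genesis";
  for (const row of log) {
    if (row.prevHash !== prev) return false;
    if (row.hash !== rowHash(row.seq, row.payload, row.prevHash)) return false;
    prev = row.hash;
  }
  return true;
}
```

Verification needs no secret and no server: anyone with read access to the file can re-hash the rows and confirm nothing was rewritten.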
The modules don't share schemas, so they shouldn't share storage. Each libSQL file is a self-contained backup, a self-contained restore target, and a self-contained migration unit. If the costs module ships a broken migration, the audit log doesn't notice.
Self-hosted? Identical topology.
The cloud build and the self-hosted build run the same Docker Compose, the same Postgres schema, the same SpiceDB definitions, and the same per-module libSQL files. You can take a backup from one and restore it on the other.
See self-hosted → for the deployment guide.