Architecture
Topology
Weavestream runs as five Docker containers orchestrated by a single compose.yml:
                                       ┌───────────────┐
                        browsers ────▶ │ web (Next.js) │
                                       └───────┬───────┘
                                               │ server components
                                               │ fetch from API
                                               ▼
┌──────────┐  queues  ┌─────────┐      ┌───────────────┐
│ worker   │◀─────────│  redis  │◀────▶│  api (Nest)   │
│ (Nest)   │          └─────────┘      └───────┬───────┘
└────┬─────┘                                   │
     │               ┌──────────┐              │
     └──────────────▶│ postgres │◀─────────────┘
                     └──────────┘
                          ▲
                          │
              (api & worker also share)
                          │
                ┌─────────┴─────────┐
                │ files (host bind  │
                │ mount, per-tenant │
                │ subdirectory)     │
                └───────────────────┘
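A minimal compose.yml along these lines is a sketch, not the shipped file: the service names, ports, and the ${DATA_DIR}/files mount come from this document, while build paths, image tags, and container-side mount points are illustrative assumptions.

```yaml
services:
  web:
    build: ./apps/web            # build path is an assumption
    ports: ["3000:3000"]
    environment:
      API_URL: ${API_URL}        # public base URL for client-side fetches
    depends_on: [api]

  api:
    build: ./apps/api
    ports: ["4000:4000"]
    volumes:
      - ${DATA_DIR}/files:/data/files   # shared per-tenant file storage
    depends_on: [postgres, redis]

  worker:
    build: ./apps/worker
    volumes:
      - ${DATA_DIR}/files:/data/files   # same bind mount as api
    depends_on: [postgres, redis]

  redis:
    image: redis:7

  postgres:
    image: postgres:16
    volumes:
      - ${DATA_DIR}/postgres:/var/lib/postgresql/data   # host dir is an assumption
```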
Services
- web: Next.js frontend, exposed on port 3000
- api: NestJS HTTP API on port 4000; owns auth, RBAC, and business logic
- worker: NestJS background-job consumer
- redis: BullMQ queues and session state
- postgres: primary datastore

Uploaded files (attachments, thumbnails, logos, export PDFs) live in a host bind-mounted directory (${DATA_DIR}/files) shared by api and worker. Tenant isolation is by directory.
Request flow
- Browser → web (port 3000). Server components call api via the internal Docker network (http://api:4000); client-side components call api via the public API_URL. api validates the session JWT, checks RBAC, runs the business logic, and writes to Postgres.
- Background work (domain polling, thumbnail generation, search indexing) is enqueued via Redis/BullMQ and consumed by worker.
- File uploads go browser → api (auth + validation + metadata) → local filesystem storage. Downloads stream back through the API on the same origin, so the storage directory has no public surface.
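The internal/public URL split in the first step can be sketched as a tiny helper. The function name and the way the public URL is passed in are illustrative assumptions; only http://api:4000 and API_URL come from the text above.

```typescript
// Choose the API base URL depending on where the code executes.
// Server components run inside the Docker network and reach the `api`
// container directly; browser code must use the public API_URL value.
export function apiBase(isServer: boolean, publicApiUrl: string): string {
  return isServer
    ? "http://api:4000" // internal Docker network hostname and port
    : publicApiUrl;     // public URL, e.g. the value of API_URL
}
```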
Tech stack
- Frontend: Next.js (server and client components)
- API and worker: NestJS
- Jobs/queues: BullMQ on Redis
- Database: Postgres, accessed via Prisma
- Orchestration: Docker Compose
Tenant model
Shared schema, isolated storage
All tenants share a single Postgres database and schema. Tenant data is scoped in application code via a companyId foreign key and validated on every request. There is no database-per-tenant.
File storage uses one directory per tenant under ${FILE_STORAGE_DIR}/<tenantId>/. The storage layer rejects keys containing .., leading slashes, or null bytes and re-asserts that every resolved path stays inside the tenant directory before opening a file — defense in depth against any future IDOR in application code.
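The storage-layer checks described above might look roughly like this in the Node/TypeScript stack this document describes; the function and parameter names are assumptions, but the three key rejections and the resolved-path re-assertion follow the text.

```typescript
import { join, resolve, sep } from "node:path";

// Resolve a storage key inside a tenant's base directory, rejecting
// traversal attempts. `root` is the per-tenant directory, e.g. the
// ${FILE_STORAGE_DIR}/<tenantId>/ path mentioned above.
export function resolveTenantPath(root: string, key: string): string {
  // First line of defense: reject obviously hostile keys outright.
  if (key.includes("..") || key.startsWith("/") || key.includes("\0")) {
    throw new Error("invalid storage key");
  }
  const base = resolve(root);
  const full = resolve(join(base, key));
  // Defense in depth: re-assert the resolved path stays inside the
  // tenant directory before any file is opened.
  if (full !== base && !full.startsWith(base + sep)) {
    throw new Error("path escapes tenant directory");
  }
  return full;
}
```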
Configurable terminology
The word used for "tenant" everywhere in the UI is set by SUPER_ADMIN at Admin → Settings. Presets: Company, Client, Department, Tenant, Organisation, Site, or Custom. URL routes, API paths, and Prisma columns always use company / companies internally — the terminology change is cosmetic only.
RBAC
Four orthogonal inputs are evaluated by a single resolver:
- Global roles (stored on the users table)
- Membership role (memberships.role)
- Default tenant access (users.globalAccess, OPERATOR only)
- Platform capabilities (users.platformCapabilities): COMPANY_MANAGE, INTEGRATION_MANAGE, LAYOUT_MANAGE, USER_MANAGE, MEMBERSHIP_MANAGE, AUDIT_READ, SETTINGS_MANAGE, EXPORT_CREATE
SUPER_ADMIN holds all capabilities implicitly; the API rejects capabilities on unsupported roles.
Resolution order for tenant checks:
1. SUPER_ADMIN => allow.
2. If the action requires a capability, require it (or SUPER_ADMIN) or deny.
3. An active tenant membership overrides everything else (FULL/READONLY).
4. Otherwise fall back to users.globalAccess for OPERATOR (FULL/READONLY/NONE).
All authorization checks go through the resolver. There are no ad-hoc role comparisons in controllers.
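A sketch of such a resolver, implementing the resolution order above. Types and field names are illustrative rather than the actual Prisma schema, and only the roles and behaviours named in this section are modelled.

```typescript
type GlobalRole = "SUPER_ADMIN" | "OPERATOR" | string; // other roles exist but are not enumerated here
type Access = "FULL" | "READONLY" | "NONE";

interface User {
  role: GlobalRole;
  platformCapabilities: string[]; // e.g. "AUDIT_READ"
  globalAccess: Access;           // meaningful for OPERATOR only
}

// `membershipRole` is the caller's active membership in the target
// tenant, or null when there is none (parameter shape is an assumption).
export function resolveTenantAccess(
  user: User,
  requiredCapability: string | null,
  membershipRole: "FULL" | "READONLY" | null,
): Access {
  if (user.role === "SUPER_ADMIN") return "FULL"; // 1. always allow; all capabilities implicit
  if (requiredCapability && !user.platformCapabilities.includes(requiredCapability)) {
    return "NONE";                                // 2. capability gate
  }
  if (membershipRole) return membershipRole;      // 3. membership overrides everything else
  return user.role === "OPERATOR" ? user.globalAccess : "NONE"; // 4. OPERATOR fallback
}
```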
Schema migrations
api runs prisma migrate deploy on every startup, before binding its HTTP listener. The command is idempotent and guarded by a Postgres advisory lock — safe to run on boot and safe when scaling horizontally.
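The boot ordering can be sketched with the migration step injected for clarity. The names here are illustrative; in the real service the first callback would shell out to prisma migrate deploy, which itself takes the advisory lock.

```typescript
// Run migrations to completion, then bind the HTTP listener.
// `runMigrations` stands in for `prisma migrate deploy`, which is
// idempotent and serialises concurrent runs via a Postgres advisory lock.
export function boot(runMigrations: () => void, bindHttp: () => void): void {
  runMigrations(); // safe on every boot and across replicas
  bindHttp();      // only reached once the schema is current
}
```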
Data layout
Persistent data lives in host-mounted folders under $DATA_DIR (default ./data next to compose.yml), including the shared per-tenant files directory (${DATA_DIR}/files).
Scalability notes
- api is stateless and horizontally scalable. Session state lives in Redis; Postgres advisory locks prevent migration races.
- worker can run multiple replicas; BullMQ distributes jobs across consumers.
- web (Next.js) is stateless and can be replicated behind a load balancer.
- The single shared Postgres instance is the primary bottleneck for very large deployments; multi-database support is not currently planned.