SysMARA ORM: The Database Layer AI Agents Can Actually Read
Every backend needs a database layer. And every ORM — Prisma, Drizzle, TypeORM, Sequelize — was designed with the same assumption: a human developer writes queries, understands the schema, and knows which tables are safe to touch. The ORM's job is to make that process more ergonomic.
SysMARA ORM starts from a different premise. The primary consumer of the database layer is not a human — it's an AI agent. And AI agents don't need ergonomics. They need boundaries, traceability, and machine-readable context about every operation they perform.
The Problem with Traditional ORMs
Consider a typical Prisma setup. You define a schema, generate a client, and then any code in your application can query any table:
```typescript
// Any code can do this — no capability check, no audit trail
const users = await prisma.user.findMany();
const orders = await prisma.order.deleteMany({ where: { status: 'draft' } });
const invoices = await prisma.invoice.update({ where: { id }, data });
```
There is nothing wrong with this for human developers who understand context. But for an AI agent
implementing a capability like cancel_subscription, this is a minefield:
- Which tables should this capability touch? The ORM doesn't know.
- Which invariants must hold after the operation? The ORM doesn't care.
- What audit trail exists for what happened? The ORM doesn't track it.
- If the schema changes, which capabilities break? The ORM can't tell you.
The AI agent is operating blind. It can generate syntactically correct queries, but it has no way to verify that those queries respect the system's architectural constraints.
How SysMARA ORM Works
SysMARA ORM is built on four principles that make it AI-native:
1. Schema IS the System Graph
There is no separate schema file. SysMARA ORM reads your YAML entity specs — the same specs that define your system graph — and generates SQL directly from them. Fields, types, constraints, and relationships are all derived from what you already declared.
```yaml
# system/entities.yaml — this IS your database schema
entities:
  - name: user
    description: A registered user in the system
    module: auth
    fields:
      - name: id
        type: uuid
        required: true
      - name: email
        type: string
        required: true
        constraints:
          - type: unique
      - name: name
        type: string
        required: true
      - name: role
        type: string
        required: true
        constraints:
          - type: enum
            value: [admin, member, viewer]
      - name: created_at
        type: timestamp
        required: true
    invariants:
      - email_must_be_unique
```
Run `sysmara db generate` and the ORM produces the SQL:
```sql
-- Generated by SysMARA ORM (postgresql)
CREATE EXTENSION IF NOT EXISTS "pgcrypto";

CREATE TABLE "user" (
  "id" UUID PRIMARY KEY DEFAULT gen_random_uuid(),
  "email" VARCHAR(255) NOT NULL UNIQUE,
  "name" VARCHAR(255) NOT NULL,
  "role" VARCHAR(100) NOT NULL CHECK ("role" IN ('admin', 'member', 'viewer')),
  "created_at" TIMESTAMPTZ NOT NULL DEFAULT NOW()
);
```
Your invariant `email_must_be_unique` became a `UNIQUE` constraint. Your enum constraint became a `CHECK`. Required fields became `NOT NULL`. The schema is a direct projection of your system specs — there is no drift, no manual synchronization.
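The field-to-column projection described above can be sketched in a few lines. This is an illustrative simplification, not the actual SysMARA generator: `FieldSpec` and `fieldToColumn` are hypothetical names, and only three types are mapped.

```typescript
// Hypothetical sketch of projecting a spec field into a Postgres column clause.
interface FieldSpec {
  name: string;
  type: 'uuid' | 'string' | 'timestamp';
  required: boolean;
  constraints?: { type: 'unique' | 'enum'; value?: string[] }[];
}

const TYPE_MAP: Record<FieldSpec['type'], string> = {
  uuid: 'UUID',
  string: 'VARCHAR(255)',
  timestamp: 'TIMESTAMPTZ',
};

function fieldToColumn(f: FieldSpec): string {
  let sql = `"${f.name}" ${TYPE_MAP[f.type]}`;
  if (f.required) sql += ' NOT NULL';          // required → NOT NULL
  for (const c of f.constraints ?? []) {
    if (c.type === 'unique') sql += ' UNIQUE'; // unique constraint → UNIQUE
    if (c.type === 'enum' && c.value)          // enum constraint → CHECK ... IN (...)
      sql += ` CHECK ("${f.name}" IN (${c.value.map((v) => `'${v}'`).join(', ')}))`;
  }
  return sql;
}

console.log(fieldToColumn({
  name: 'email', type: 'string', required: true,
  constraints: [{ type: 'unique' }],
}));
// "email" VARCHAR(255) NOT NULL UNIQUE
```

The real generator also handles primary keys, defaults, and provider differences; the point here is only that each YAML declaration maps deterministically to one SQL fragment.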
2. Every Query is a Capability
In SysMARA ORM, you cannot run arbitrary queries. Every database operation is scoped to a declared capability. If `create_user` declares that it operates on the `user` entity, the repository will only allow access to that entity through that capability.
```typescript
import { SysmaraORM } from '@sysmara/core';

const orm = new SysmaraORM(
  { adapter: 'sysmara-orm', provider: 'postgresql' },
  specs
);

// Get a repository scoped to the "user" entity via "create_user" capability
const repo = orm.repository<User>('user', 'create_user');

// This works — "user" is in the create_user capability's entities list
const user = await repo.create({
  email: 'alice@example.com',
  name: 'Alice',
  role: 'member',
});

// If you tried to access "invoice" through "create_user", it would throw:
// Error: Capability "create_user" does not declare access to entity "invoice"
```

You can also execute capabilities directly without going through the repository:
```typescript
// Execute a capability as a database operation
const result = await orm.capability('create_user', {
  email: 'bob@example.com',
  name: 'Bob',
  role: 'admin',
});

// The ORM infers the operation type from the capability name:
// create_, add_, register_ → INSERT
// update_, edit_, modify_  → UPDATE
// delete_, remove_         → DELETE
// everything else          → SELECT
```

3. Machine-Readable Operation Log
Every operation the ORM performs is automatically recorded as structured JSON. This is not application-level logging — it's a machine-readable audit trail designed for AI agents to inspect after the fact.
```typescript
// After running several operations...
const log = orm.getOperationLog();

// Each entry contains:
// {
//   id: "op_1_1710410400000",
//   capability: "create_user",
//   entity: "user",
//   operation: "insert",
//   invariants_checked: ["email_must_be_unique"],
//   affected_fields: ["email", "name", "role"],
//   timestamp: "2026-03-16T08:00:00.000Z",
//   duration_ms: 12,
//   affected_rows: 1,
//   sql_template: "INSERT INTO \"user\" (\"email\", \"name\", \"role\")
//                  VALUES ($1, $2, $3) RETURNING *"
// }
```

Notice what's in each entry: not just the SQL, but the capability that triggered it, the invariants that were checked, and the fields that were affected. An AI agent reading this log can answer questions like:
- "Which capabilities modified the user table?" — `log.getByEntity('user')`
- "What did create_user do?" — `log.getByCapability('create_user')`
- "Show me all inserts" — `log.getByOperation('insert')`
The SQL template uses parameterized placeholders (`$1`, `$2`, `$3`), never raw values.
The log is safe to inspect, share, and analyze without leaking sensitive data.
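Under the hood, the log queries above amount to simple filters over structured entries. The sketch below is illustrative only: the entry shape is abbreviated from the example, and the helper names are not the actual API.

```typescript
// Illustrative sketch: filtering a structured operation log.
interface OpLogEntry {
  id: string;
  capability: string;
  entity: string;
  operation: 'insert' | 'update' | 'delete' | 'select';
}

const log: OpLogEntry[] = [
  { id: 'op_1', capability: 'create_user', entity: 'user', operation: 'insert' },
  { id: 'op_2', capability: 'update_user', entity: 'user', operation: 'update' },
  { id: 'op_3', capability: 'create_invoice', entity: 'invoice', operation: 'insert' },
];

// Each query is a plain predicate over the structured entries.
const byEntity = (e: string) => log.filter((op) => op.entity === e);
const byCapability = (c: string) => log.filter((op) => op.capability === c);
const byOperation = (o: OpLogEntry['operation']) => log.filter((op) => op.operation === o);

console.log(byEntity('user').map((op) => op.id));      // ['op_1', 'op_2']
console.log(byOperation('insert').map((op) => op.id)); // ['op_1', 'op_3']
```

Because every entry carries the capability, entity, and operation as structured fields, no text parsing is needed to answer these questions.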
4. Impact-Aware Migrations
When your specs change, the migration engine doesn't just generate SQL diffs. It tells you which capabilities and invariants are affected by the change, and assigns a risk level.
```typescript
// Compare previous specs to current specs
const plan = orm.planMigration(previousSpecs);

console.log(plan.riskLevel);
// "medium" — because we altered a column type

console.log(plan.affectedCapabilities);
// ["create_user", "update_user", "get_user"]

console.log(plan.affectedInvariants);
// ["email_must_be_unique"]

console.log(plan.requiresReview);
// true — risk is medium or higher

// Generate the migration SQL
const engine = orm.getMigrationEngine();
const sql = engine.generateSQL(plan);
```

The generated migration SQL includes a header with risk analysis:
```sql
-- SysMARA ORM Migration
-- Risk: medium
-- Affected capabilities: create_user, update_user, get_user
-- Affected invariants: email_must_be_unique
-- Review required: yes

BEGIN;

-- Step 1: Add column "phone" to "user"
ALTER TABLE "user" ADD COLUMN "phone" VARCHAR(255);

-- Step 2: Alter column "role" in "user" (type changed)
ALTER TABLE "user" ALTER COLUMN "role" TYPE TEXT;

COMMIT;
```

Risk levels are computed automatically:
- Low — adding columns or new tables
- Medium — altering column types or constraints
- High — dropping columns
- Critical — dropping tables
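A plan's overall risk is presumably the worst risk among its steps; a minimal sketch of that rule, with hypothetical step-kind names that do not come from the SysMARA API:

```typescript
// Hedged sketch: derive an overall risk level from migration step kinds.
type StepKind = 'add_column' | 'add_table' | 'alter_column' | 'drop_column' | 'drop_table';
type Risk = 'low' | 'medium' | 'high' | 'critical';

// Per-step risk, following the list above.
const STEP_RISK: Record<StepKind, Risk> = {
  add_column: 'low',
  add_table: 'low',
  alter_column: 'medium',
  drop_column: 'high',
  drop_table: 'critical',
};

const ORDER: Risk[] = ['low', 'medium', 'high', 'critical'];

// Overall plan risk = maximum risk across all steps.
function planRisk(steps: StepKind[]): Risk {
  return steps
    .map((s) => STEP_RISK[s])
    .reduce((max, r) => (ORDER.indexOf(r) > ORDER.indexOf(max) ? r : max), 'low' as Risk);
}

console.log(planRisk(['add_column', 'alter_column'])); // "medium"
console.log(planRisk(['add_column', 'drop_table']));   // "critical"
```

This matches the example earlier in the section: a plan that adds a column and alters a column type comes out as "medium", which is the threshold for `requiresReview`.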
The Repository API
The `SysmaraRepository` provides a standard CRUD interface, but every method is capability-scoped and operation-logged:
```typescript
interface User {
  id: string;
  email: string;
  name: string;
  role: string;
  created_at: Date;
}

const repo = orm.repository<User>('user', 'create_user');

// Create
const alice = await repo.create({
  email: 'alice@example.com',
  name: 'Alice',
  role: 'member',
});

// Find by ID
const user = await repo.findById(alice.id);

// Find one by filter
const admin = await repo.findOne({ role: 'admin' });

// Find many (with optional filters)
const allMembers = await repo.findMany({ role: 'member' });
const everyone = await repo.findMany();

// Update
const updated = await repo.update(alice.id, { role: 'admin' });

// Delete
await repo.delete(alice.id);
```

Every one of these calls:
- Validates that the fields you're querying exist on the entity spec.
- Validates that the capability has access to the entity.
- Builds parameterized SQL (no raw string interpolation).
- Records the operation to the log with capability name, invariants, fields, and duration.
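The second check in that list, the capability boundary, is conceptually a one-line guard. A minimal sketch, assuming a simplified capability spec shape (`assertCapabilityScope` is a hypothetical name, not the actual internal function):

```typescript
// Illustrative sketch of the capability boundary check.
interface CapabilitySpec {
  name: string;
  entities: string[]; // entities this capability declares access to
}

function assertCapabilityScope(cap: CapabilitySpec, entity: string): void {
  if (!cap.entities.includes(entity)) {
    // Mirrors the error message shown earlier in the article.
    throw new Error(
      `Capability "${cap.name}" does not declare access to entity "${entity}"`
    );
  }
}

const createUser: CapabilitySpec = { name: 'create_user', entities: ['user'] };

assertCapabilityScope(createUser, 'user'); // ok — "user" is declared
try {
  assertCapabilityScope(createUser, 'invoice');
} catch (e) {
  console.log((e as Error).message);
  // Capability "create_user" does not declare access to entity "invoice"
}
```

Because the guard runs on every repository method, an AI agent cannot accidentally reach outside the entities its capability declares, no matter what code it generates.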
The Query Builder
Under the hood, the repository uses the `CapabilityQueryBuilder`. You can use it directly for more control:
```typescript
const qb = orm.getQueryBuilder();

// Build a SELECT with filters, ordering, and pagination
const query = qb.select(capability, 'user', {
  fields: ['id', 'email', 'name'],
  where: { role: 'admin' },
  orderBy: 'created_at',
  orderDir: 'DESC',
  limit: 10,
  offset: 0,
});

console.log(query.sql);
// SELECT "id", "email", "name" FROM "user"
//   WHERE "role" = $1
//   ORDER BY "created_at" DESC
//   LIMIT 10 OFFSET 0

console.log(query.params);     // ["admin"]
console.log(query.operation);  // "select"
console.log(query.entity);     // "user"
console.log(query.capability); // "list_users"
```
The query builder enforces the same capability boundary rules: you cannot select fields
that don't exist on the entity, and you cannot query entities that aren't declared in the
capability's entities list.
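The parameterized-SQL construction is worth seeing concretely. Below is a minimal sketch of the idea, not the real `CapabilityQueryBuilder`: identifiers are quoted, and every filter value goes into a positional parameter rather than the SQL string.

```typescript
// Minimal sketch of parameterized SELECT construction.
function buildSelect(
  entity: string,
  fields: string[],
  where: Record<string, unknown>,
  limit?: number
): { sql: string; params: unknown[] } {
  const params: unknown[] = [];
  const cols = fields.map((f) => `"${f}"`).join(', ');
  let sql = `SELECT ${cols} FROM "${entity}"`;
  const clauses = Object.entries(where).map(([k, v]) => {
    params.push(v);
    return `"${k}" = $${params.length}`; // positional placeholder, never raw values
  });
  if (clauses.length > 0) sql += ` WHERE ${clauses.join(' AND ')}`;
  if (limit !== undefined) sql += ` LIMIT ${limit}`;
  return { sql, params };
}

const q = buildSelect('user', ['id', 'email'], { role: 'admin' }, 10);
console.log(q.sql);    // SELECT "id", "email" FROM "user" WHERE "role" = $1 LIMIT 10
console.log(q.params); // ['admin']
```

The real builder additionally validates `entity` and each field against the specs before emitting anything, which is what makes the boundary enforceable rather than advisory.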
Getting Started
Here's the complete setup from zero to running queries:
Step 1: Initialize and define specs
```bash
npm init -y
npm install @sysmara/core
npx sysmara init
```
Edit `system/entities.yaml` to define your entities with fields and constraints. Edit `system/capabilities.yaml` to declare which operations exist and which entities they touch.
Step 2: Add database config
Add a `database` section to `sysmara.config.yaml`:

```yaml
# sysmara.config.yaml
name: my-app
version: 0.0.1
specDir: ./system
appDir: ./app
frameworkDir: ./.framework
generatedDir: ./app/generated
port: 3000
host: 0.0.0.0
logLevel: info
database:
  adapter: sysmara-orm
  provider: postgresql
  outputDir: generated/db
```

Step 3: Build and generate schema
```bash
# Validate, compile, and scaffold
npx sysmara build

# Generate the SQL schema
npx sysmara db generate
```
The schema file appears at `generated/db/sysmara-orm/schema.sql`. Run it against your database to create the tables.
Step 4: Use the ORM in your code
```typescript
import {
  SysmaraORM,
  parseSpecDirectory,
  resolveConfig,
} from '@sysmara/core';

// Load specs
const config = resolveConfig();
const result = await parseSpecDirectory(config.specDir);
const specs = result.specs!;

// Initialize ORM
const orm = new SysmaraORM(
  { adapter: 'sysmara-orm', provider: 'postgresql' },
  specs
);

// Use it
const repo = orm.repository('user', 'create_user');
const user = await repo.create({
  email: 'alice@example.com',
  name: 'Alice',
  role: 'member',
});

// Check the operation log
const ops = orm.getOperationLog();
console.log(ops); // Full audit trail
```

Multi-Provider Support
The ORM supports PostgreSQL, MySQL, and SQLite. The schema generator handles provider-specific type mappings automatically:
| SysMARA Type | PostgreSQL | MySQL | SQLite |
|---|---|---|---|
| `uuid` | UUID DEFAULT gen_random_uuid() | CHAR(36) DEFAULT UUID() | TEXT |
| `string` | VARCHAR(255) | VARCHAR(255) | TEXT |
| `integer` | INTEGER | INT | INTEGER |
| `boolean` | BOOLEAN | TINYINT(1) | INTEGER |
| `datetime` | TIMESTAMPTZ | DATETIME(3) | TEXT |
| `json` | JSONB | JSON | TEXT |
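The mapping table above is essentially a two-level lookup from spec type to provider-specific SQL type. A minimal sketch of that lookup (hypothetical `sqlType` helper, not the actual generator API):

```typescript
// Sketch of provider-specific type mapping, following the table above.
type Provider = 'postgresql' | 'mysql' | 'sqlite';

const TYPE_MAP: Record<string, Record<Provider, string>> = {
  uuid:     { postgresql: 'UUID DEFAULT gen_random_uuid()', mysql: 'CHAR(36) DEFAULT UUID()', sqlite: 'TEXT' },
  string:   { postgresql: 'VARCHAR(255)', mysql: 'VARCHAR(255)', sqlite: 'TEXT' },
  integer:  { postgresql: 'INTEGER', mysql: 'INT', sqlite: 'INTEGER' },
  boolean:  { postgresql: 'BOOLEAN', mysql: 'TINYINT(1)', sqlite: 'INTEGER' },
  datetime: { postgresql: 'TIMESTAMPTZ', mysql: 'DATETIME(3)', sqlite: 'TEXT' },
  json:     { postgresql: 'JSONB', mysql: 'JSON', sqlite: 'TEXT' },
};

function sqlType(specType: string, provider: Provider): string {
  const byProvider = TYPE_MAP[specType];
  if (!byProvider) throw new Error(`Unknown spec type: ${specType}`);
  return byProvider[provider];
}

console.log(sqlType('json', 'postgresql')); // JSONB
console.log(sqlType('boolean', 'sqlite'));  // INTEGER
```

Because SQLite has no native UUID, boolean, or JSON types, everything narrows to `TEXT` or `INTEGER` there; the application layer is responsible for interpreting those values consistently.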
Compared to Traditional ORMs
| | Prisma / Drizzle / TypeORM | SysMARA ORM |
|---|---|---|
| Schema source | Separate schema file | System graph (YAML entity specs) |
| Query scope | Any code can query any table | Queries scoped to declared capabilities |
| Audit trail | None (add your own logging) | Automatic structured JSON operation log |
| Migration risk | SQL diff only | Risk level + affected capabilities + affected invariants |
| Invariants | Manual DB constraints | YAML invariants become DB constraints automatically |
| AI readability | Schema in separate DSL | Schema is the system graph AI already understands |
SysMARA ORM is not a replacement for Prisma or Drizzle in every context. If you need advanced
query composition, relation loading, or connection pooling, those tools are excellent choices —
and SysMARA supports them as adapters via `sysmara db generate` with the `prisma`, `drizzle`, or `typeorm` adapter.
SysMARA ORM exists for a specific purpose: when you want the database layer to be as machine-readable and capability-bounded as the rest of your SysMARA system, when the AI agent implementing `cancel_subscription` should only be able to touch the `subscription` and `invoice` tables, and when every operation should be traceable back to the capability that triggered it.
What's Next
SysMARA ORM is functional today but still evolving. Areas under active development:
- Connection management — pooling, transactions, and connection lifecycle.
- Relation loading — following foreign key references declared in entity specs.
- Streaming queries — cursor-based iteration for large result sets.
- Migration execution — currently generates SQL; planned: apply migrations directly.
See the SysMARA ORM documentation for full API details, or check the Database Adapters page to compare all supported adapters.