What Machine-Readable Architecture Actually Means
The phrase "machine-readable" gets used loosely. REST APIs are machine-readable. JSON config files are machine-readable. OpenAPI specs are machine-readable. But when we say a system's architecture should be machine-readable, we mean something specific and more demanding than any of these.
We mean that the structure of the system — its entities, capabilities, constraints, module boundaries, data flows, and business rules — is a formal artifact that tools and AI agents can query, traverse, and reason about without reading source code.
Documentation vs. Specification
Most teams document their architecture. They write README files, draw diagrams, maintain wikis, and add comments to code. This documentation is for humans. It is written in natural language, organized by what a human reader needs to understand, and updated (or not) at the discretion of the team.
Documentation describes the system. A specification defines it.
The difference matters. Documentation can be incomplete, ambiguous, or outdated, and a human reader can still extract useful information from it. A specification must be complete enough to be validated, precise enough to be unambiguous, and structured enough to be traversed programmatically.
When an AI agent reads a README that says "the billing module handles subscriptions and invoices," it gets a rough understanding. When it reads a specification that declares which entities belong to the billing module, which capabilities operate on those entities, which invariants constrain those operations, and which policies gate access, it gets actionable knowledge.
What "Formal" Means Here
A formal architecture artifact has three properties:
Validated structure. The artifact conforms to a schema. Not just "it's valid YAML" but "every entity has a name, a module, and a list of fields; every capability references entities that exist; every invariant is bound to at least one entity or capability." In SysMARA, this validation is done with Zod schemas at build time. If a capability references a nonexistent entity, the build fails.
Explicit relationships. The connections between elements are declared, not inferred. A capability does not implicitly touch an entity because it happens to query the same database table. It explicitly declares entities: [subscription, invoice], and those relationships are edges in the system graph. An invariant does not float in a utility function somewhere; it is bound to specific entities and capabilities by name.
Traversable graph. The artifact is not a flat list. It is a directed graph where you can start at any node and walk to related nodes. Starting from the subscription entity, you can reach the capabilities that operate on it, the invariants that constrain it, the module that owns it, and the policies that gate access to its capabilities. This traversal is what makes impact analysis possible.
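A one-hop traversal over such a graph can be sketched in a few lines. The node names and edge kinds below are illustrative, not SysMARA's actual schema:

```typescript
// Minimal sketch of a system graph as a flat edge list.
// All names and edge kinds here are illustrative assumptions.
interface Edge { from: string; to: string; kind: string }

const edges: Edge[] = [
  { from: "renewSubscription", to: "subscription", kind: "operates-on" },
  { from: "issueInvoice", to: "subscription", kind: "operates-on" },
  { from: "noDoubleBilling", to: "subscription", kind: "constrains" },
  { from: "billing", to: "subscription", kind: "owns" },
];

// Walk one hop in either direction from a node.
function neighbors(node: string): string[] {
  return edges
    .filter(e => e.from === node || e.to === node)
    .map(e => (e.from === node ? e.to : e.from));
}

// From the subscription entity: its capabilities, its invariant,
// and its owning module, in one query.
console.log(neighbors("subscription"));
```

Because the edges are declared rather than inferred, this query needs no source code analysis at all.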
How SysMARA Creates This
The SysMARA approach has three layers.
YAML specifications. You declare entities, capabilities, policies, invariants, modules, and flows in YAML files. These are not configuration files in the traditional sense. They are the architectural source of truth. A capability spec declares its name, description, input/output types, entity bindings, policy gates, invariant checks, and side effects. Nothing is left to inference.
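A capability spec in this style might look like the following. The field names and values are illustrative assumptions, not SysMARA's actual schema:

```yaml
# Hypothetical capability spec; field names are illustrative.
capability:
  name: renewSubscription
  description: Renews an active subscription and issues the next invoice.
  module: billing
  entities: [subscription, invoice]
  input: RenewSubscriptionInput
  output: RenewSubscriptionResult
  policies: [requireBillingAdmin]
  invariants: [noDoubleBilling]
  sideEffects: [emitsSubscriptionRenewedEvent]
```

Every relationship the capability participates in is named here, which is what makes the later graph construction possible.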
Zod validation. Every spec is validated against a Zod schema at build time. This catches structural errors (a capability with no entities), referential errors (a policy referencing a nonexistent role), and consistency errors (an invariant bound to a capability in a different module than the invariant's entity). The validation is not a linter suggestion. It is a hard build failure.
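The referential pass can be sketched in plain TypeScript. In SysMARA the structural layer is expressed declaratively with Zod; the spec shape below is a simplified assumption:

```typescript
// Simplified referential check: every entity a capability names must exist.
// The spec shape and example names are illustrative assumptions.
interface CapabilitySpec { name: string; entities: string[] }

function checkReferences(
  capabilities: CapabilitySpec[],
  entityNames: Set<string>,
): string[] {
  const errors: string[] = [];
  for (const cap of capabilities) {
    for (const entity of cap.entities) {
      if (!entityNames.has(entity)) {
        errors.push(`${cap.name} references unknown entity "${entity}"`);
      }
    }
  }
  return errors; // a non-empty result fails the build
}

const errors = checkReferences(
  [{ name: "renewSubscription", entities: ["subscription", "coupon"] }],
  new Set(["subscription", "invoice"]),
);
// One error: renewSubscription names "coupon", which no spec declares.
console.log(errors);
```

The point is the failure mode: a dangling reference is a build error, not a stale comment.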
System graph. The validated specs are compiled into a typed directed graph. Nodes are entities, capabilities, policies, invariants, modules, and flows. Edges are the declared relationships between them. This graph is the artifact that AI agents query. When an agent needs to understand what changing the subscription entity would affect, it traverses the graph and gets a precise answer: these capabilities, these invariants, these modules, this impact radius.
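Impact analysis is then a transitive walk over declared edges. A minimal sketch, assuming a flat edge list and illustrative node names:

```typescript
// Impact radius of a node: everything reachable through declared
// relationships, in either direction. Names are illustrative.
type GraphEdge = [string, string];

const graph: GraphEdge[] = [
  ["renewSubscription", "subscription"],
  ["issueInvoice", "invoice"],
  ["issueInvoice", "subscription"],
  ["noDoubleBilling", "renewSubscription"],
  ["billing", "subscription"],
];

function impactRadius(start: string): Set<string> {
  const seen = new Set<string>([start]);
  const queue = [start];
  while (queue.length > 0) {
    const node = queue.shift()!;
    for (const [a, b] of graph) {
      const next = a === node ? b : b === node ? a : null;
      if (next !== null && !seen.has(next)) {
        seen.add(next);
        queue.push(next);
      }
    }
  }
  seen.delete(start); // report only the affected neighbors, not the start
  return seen;
}

// Changing subscription reaches its capabilities, its module, the
// invariant on renewSubscription, and (transitively) invoice.
console.log(impactRadius("subscription"));
```

The same walk, restricted by edge kind or depth, yields narrower questions such as "which invariants must be re-verified."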
What This Is Not
This is not about file format. YAML is a choice, not a requirement. The same architectural model could be expressed in JSON, TOML, or a TypeScript DSL. The format is less important than the formality. A system described in well-structured JSON with no validation and no graph construction is just as opaque to AI agents as one described in comments.
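For instance, the same capability declaration could be written as a TypeScript DSL. The `defineCapability` helper below is hypothetical, a sketch of what such a DSL might look like:

```typescript
// Hypothetical TypeScript DSL expressing the same architectural model.
interface CapabilityDecl {
  name: string;
  entities: string[];
  policies: string[];
  invariants: string[];
}

function defineCapability(decl: CapabilityDecl): CapabilityDecl {
  // A real implementation would register the declaration for
  // validation and graph construction; this sketch just returns it.
  return decl;
}

const renewSubscription = defineCapability({
  name: "renewSubscription",
  entities: ["subscription", "invoice"],
  policies: ["requireBillingAdmin"],
  invariants: ["noDoubleBilling"],
});
```

The declarations carry the same information as the YAML form; what matters is that they are validated and compiled into the graph, not which syntax carries them.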
This is not about generating documentation. The system graph is not a documentation generator. It can produce documentation as a side effect, but its primary purpose is to be consumed by tools: the capability compiler, the impact analysis engine, the change protocol, and AI agents.
This is not about restricting how you write code. The architecture specs declare what exists and how it relates. The implementation code lives in editable zones where humans and AI agents write business logic. The specs constrain the structure; they do not dictate the implementation.
The Practical Test
Here is a simple test for whether your architecture is machine-readable: can a tool, given no prior context, answer these questions about your system?
- What entities exist, and which module owns each one?
- What capabilities operate on a given entity?
- What invariants constrain a given capability?
- What is the impact radius of changing a specific entity?
- Which files are generated and should not be manually edited?
If answering these questions requires reading source code, following import chains, or understanding framework conventions, then the architecture is human-readable at best. Machine-readable means a tool can answer them by querying a structured artifact, without any source code analysis.
That is the bar. Not "has an API." Not "uses JSON." The architecture is a formal, validated, traversable graph that any tool can query for precise answers about system structure.