
Executive Summary

Architecture Decision Records (ADRs) traditionally serve as retrospective documentation of design choices. This research demonstrates that, when reformatted as structured constraint definitions rather than narrative prose, ADRs become enforceable governance contracts that AI code-generation agents can parse and comply with automatically. In a 30-day study of a multi-crate Rust platform, four strategically authored ADRs produced 67 verified compliant commits, eliminated architectural drift entirely, and reduced human code-review overhead by approximately 70 percent. Organizations adopting AI-assisted development workflows face a governance gap: velocity increases while architectural consistency erodes. Structured ADRs close this gap by codifying constraints in formats that both human reviewers and AI agents can evaluate deterministically. This paper presents the methodology, empirical results, and a replicable ADR template for engineering teams deploying AI development pipelines.

Key Findings

  • Structured ADRs function as machine-readable architectural contracts. AI agents comply reliably with constraints expressed in code blocks, tables, and enumerated rules, but fail to extract actionable directives from prose-format decision logs.
  • Pre-commit hook enforcement raised ADR compliance from 40 percent to 100 percent within a single sprint cycle, demonstrating that governance tooling compounds the effect of well-structured documentation.
  • Four ADRs eliminated six categories of architectural defect including cross-environment data leakage, orphaned entity cascades, AWS client scope violations, and inconsistent test pyramid coverage.
  • Human authorship of ADRs is a non-negotiable requirement. AI agents cannot identify operational pain points or encode institutional constraints; ADR creation must remain a human responsibility.
  • The ADR-driven workflow reduced code-review time by approximately 70 percent by shifting reviewer attention from implementation detail verification to ADR compliance confirmation.
  • Return on documentation investment is asymmetric. Each ADR required 2 to 4 hours of human authoring effort and subsequently governed hundreds of AI-generated implementation decisions at zero marginal cost.

1. Problem Statement: Architectural Entropy in AI-Assisted Development

1.1 The Consistency Challenge

The adoption of AI code-generation agents introduces a structural tension in software engineering workflows. Development velocity increases substantially; however, without a mechanism for conveying architectural intent to the AI, each implementation request produces locally correct but globally inconsistent code. A platform with 20 or more Rust crates, each built through independent AI sessions, will exhibit divergent naming conventions, inconsistent isolation patterns, and varying adherence to security boundaries. The traditional solution—comprehensive human code review—does not scale proportionally with AI-assisted throughput. A human reviewer cannot efficiently evaluate the architectural compliance of code that arrives faster than the review cycle allows.

1.2 The Documentation Gap

Standard ADR formats, as described in the original ADR specification, are optimized for human readers. They convey intent, context, and rationale in natural language. This format is inadequate for AI consumption because it does not provide the explicit, enumerable constraint definitions that generative models require to produce deterministically compliant output. The research question this paper addresses is: Can ADRs be reformatted to serve as enforceable constraints for AI agents without sacrificing their utility for human readers?

2. Methodology

2.1 Research Context

The study was conducted over 30 days (December 31, 2025 through January 30, 2026) on a production multi-tenant SaaS platform implemented in Rust. The codebase comprised more than 20 crates. AI agents performed the majority of implementation work, with human engineers responsible for architecture, review, and ADR authorship.

2.2 Constraint-Oriented ADR Format

The central methodological intervention was the replacement of narrative ADR prose with structured constraint definitions. The contrast between the two formats is illustrated below.

Before — Decision Log Format (Insufficient for AI):

```markdown
We will use DynamoDB single-table design.

## Rationale

Reduces operational overhead.
```

After — Constraint Definition Format (AI-Parseable):

```markdown
### 2. Table Naming Convention

Capsule-level tables use `{CAPSULE_CODE}_` prefix:

    {CAPSULE_CODE}_{table_name}

Examples:
    PRODUS_crm      (Production US)
    DEVUS_crm       (Development US)
    STGEU_events    (Staging EU)

Rationale:
- Clear visual distinction in AWS console
- Physical isolation per capsule
- Aligns with existing Infrastructure Principles §2
```

The structured format provides AI agents with exact naming patterns, concrete code examples, and discrete validation rules—inputs that generative models process with high fidelity.
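Constraints in this form are also mechanically checkable outside the AI loop. The Rust sketch below (the platform's language) illustrates this; the `is_valid_table_name` helper and the lowercase/underscore rule for the table portion are illustrative assumptions, not part of the quoted ADR.

```rust
/// Check that a full table name follows the `{CAPSULE_CODE}_{table_name}`
/// convention from Section 2.2. The character rules for the table portion
/// are assumptions for illustration.
fn is_valid_table_name(full_name: &str, capsule_code: &str) -> bool {
    // Must start with the capsule code followed by an underscore.
    let Some(rest) = full_name.strip_prefix(capsule_code) else {
        return false;
    };
    let Some(table) = rest.strip_prefix('_') else {
        return false;
    };
    // Table portion must be non-empty lowercase alphanumeric/underscore.
    !table.is_empty()
        && table
            .chars()
            .all(|c| c.is_ascii_lowercase() || c.is_ascii_digit() || c == '_')
}
```

A verifier agent, or a plain CI step, can run such a predicate over every table definition in the codebase.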

2.3 Integration Workflow

ADRs were integrated into the development pipeline according to the following four-stage process.

Stage 1 — Problem Detection:

```
Human: "AI, why is dev data appearing in prod?"
AI Evaluator: "Analyzing... CRM entities lack capsule_id in partition keys."
```

Stage 2 — ADR Creation (Human Responsibility):

```markdown
# ADR-0010: Capsule Isolation Enforcement

Problem: Products like CRM are tenant-scoped only. A lead created
in dev capsule appears in prod. This is fundamentally broken.

Decision: Enforce capsule isolation with these patterns:
- Table naming: {CAPSULE_CODE}_{table_name}
- Partition keys: TENANT#...#CAPSULE#...#ENTITY#...
- EventEnvelope.capsule_id: Required (not Optional)
```

Stage 3 — AI Implementation:

```
Human: "Refactor CRM to comply with ADR-0010."
AI Builder: "Updating 48 files, adding capsule_id to all entities..."
```

Stage 4 — AI Verification:

```
AI Verifier: "Checking ADR-0010 compliance..."
- Table names: PASS (all use PRODUS_ prefix)
- Partition keys: PASS (all include CAPSULE#)
- EventEnvelope: PASS (capsule_id is String, not Option)
```

2.4 Enforcement Mechanism

Compliance was reinforced through pre-commit hooks requiring ADR-XXXX references in all commit messages. This mechanism created an auditable chain of custody between architectural decisions and their implementation artifacts.
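The check such a hook performs is a small predicate. A sketch in Rust, assuming a four-digit `ADR-XXXX` numbering convention drawn from the examples in this paper (the function name is hypothetical):

```rust
/// True if the commit message references at least one ADR in the
/// `ADR-XXXX` form (four digits), e.g. "ADR-0010". The four-digit
/// width is an assumption based on the examples in this paper.
fn references_adr(commit_msg: &str) -> bool {
    let bytes = commit_msg.as_bytes();
    commit_msg.match_indices("ADR-").any(|(i, _)| {
        // Require four ASCII digits immediately after "ADR-".
        let digits = &bytes[i + 4..];
        digits.len() >= 4 && digits[..4].iter().all(|b| b.is_ascii_digit())
    })
}
```

In practice this logic would live in a `commit-msg` hook that rejects the commit when the predicate returns false, producing the auditable chain of custody described above.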

3. The Four Architectural ADRs

The following four ADRs were authored and enforced during the study period. Each addressed a distinct category of architectural defect.
Problem: Dev data leaking to production environments.
Constraint: All capsule-scoped entities must:
  • Use capsule-prefixed table names
  • Include capsule in partition keys
  • Require capsule_id (not optional)
Impact: 48 files changed in CRM refactor, zero cross-environment data leaks.

Problem: Deleting a parent entity leaves orphaned children.
Constraint: Use SagaStep macro for multi-entity operations with compensation logic.
Impact: Enabled complex workflows such as account merges with automatic rollback.

Problem: 16 AWS SDKs used inconsistently, with no scope enforcement.
Constraint: 4 client types (Platform, Tenant, Capsule, Operator) with mandatory scope parameters.
Impact: Eliminated 600 lines of boilerplate per crate and enforced isolation at the SDK level.

Problem: Inconsistent test coverage across crates.
Constraint: 4-level test pyramid (Unit, Integration, E2E, Contract) with coverage targets.
Impact: Increased platform test coverage from 60% to 85%.
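The first constraint set (ADR-0010) translates directly into types and key builders. A minimal Rust sketch, with illustrative field and function names beyond those quoted in the ADR:

```rust
/// Event envelope with a *required* capsule_id, per ADR-0010
/// (`String`, not `Option<String>`). Other fields are illustrative.
struct EventEnvelope {
    tenant_id: String,
    capsule_id: String, // required at construction; cannot be omitted
    entity_id: String,
}

impl EventEnvelope {
    /// Partition key in the ADR-0010 shape:
    /// TENANT#...#CAPSULE#...#ENTITY#...
    fn partition_key(&self) -> String {
        format!(
            "TENANT#{}#CAPSULE#{}#ENTITY#{}",
            self.tenant_id, self.capsule_id, self.entity_id
        )
    }
}
```

Making `capsule_id` a `String` rather than an `Option<String>` moves the isolation rule into the type system: code that omits the capsule simply does not compile.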

4. Results

4.1 Compliance Metrics

  • ADRs written: 4 major architectural decisions
  • Commits referencing ADRs: 67 in 30 days
  • ADR compliance rate: 100% (after pre-commit hooks)
  • Architectural drift incidents: 0

4.2 Comparative Format Analysis

The table below summarizes the material differences between traditional and constraint-oriented ADR formats as they relate to AI-assisted development.
| Dimension | Traditional ADR (Prose) | Constraint-Oriented ADR |
| --- | --- | --- |
| Primary audience | Human engineers | Human engineers and AI agents |
| Information format | Natural language narrative | Code blocks, tables, enumerated rules |
| AI extractability | Low — directives must be inferred | High — rules are enumerable and testable |
| Verification mechanism | Human judgment | Automated compliance checks |
| Compliance rate (observed) | ~40% | 100% (with pre-commit hooks) |
| Review overhead | High | Reduced by ~70% |
| Authorship requirement | Human | Human (non-delegable) |

4.3 Defect Analysis

The initial prose-format ADRs produced a compliance rate of approximately 40 percent because AI agents complied with the spirit of decisions but missed specific naming patterns and structural requirements. Reformatting to constraint-oriented structure eliminated this ambiguity.
Critical Finding: The initial ADR format—narrative prose stating “we decided to do X because Y”—is insufficient for AI governance. AI agents extract keywords and approximate intent from prose but cannot reliably enumerate discrete constraints from unstructured paragraphs. Teams that adopt AI-assisted development without reformatting their ADR library will observe partial compliance at best.

5. Capability Assessment: AI Strengths and Limitations

A clear division of labor between human and AI roles is essential to the governance model described here.

AI Capability: Constraint Application

AI agents excel at applying structured constraints uniformly across large codebases. When provided a table of partition key patterns, the agent produces conformant code across every entity without variance or drift.

AI Limitation: ADR Authorship

AI agents cannot author effective ADRs. They lack access to operational pain points, institutional context, and the judgment required to encode constraints that anticipate future failure modes. ADR authorship must remain a human responsibility.

Human Role: Constraint Definition

Human engineers author ADRs that capture architectural constraints, operational learnings, and security requirements. This represents the highest-leverage human contribution in an AI-assisted pipeline.

Process Design: Delegation Boundary

The optimal workflow is: human authors ADR, AI reads and implements against ADR constraints, human verifies ADR compliance. This model scales to any number of concurrent AI agents.

6. Replicable ADR Template

The following template encodes the structural requirements for an AI-parseable ADR.
```markdown
# ADR-XXXX: [Title]

**Status:** Proposed | Accepted | Deprecated
**Date:** YYYY-MM-DD
**Authors:** [Name]
**Reviewers:** [AI Agents]

## Context
[What problem are we solving? Include metrics if available.]

## Decision
[The constraint(s) we're enforcing. Use code blocks and tables.]

### 1. [Constraint Name]
[Exact pattern with examples]

### 2. [Constraint Name]
[Exact pattern with examples]

## Consequences

### Positive
- [Benefit 1]
- [Benefit 2]

### Negative
- [Drawback 1]
- [Migration effort]

### Migration Strategy
1. [Step 1]
2. [Step 2]

## Related Decisions
- ADR-YYYY
- ADR-ZZZZ
```
Implementation Guidance: Prefix every AI implementation prompt with "Follow ADR-XXXX." This primes the agent to verify constraint compliance before generating code. In the study cohort, this prompt convention substantially improved compliance over the prose-era baseline of roughly 40 percent; the subsequent addition of automated pre-commit enforcement raised it to 100 percent.

7. Recommendations

The following recommendations are directed at engineering leaders and platform architects deploying AI-assisted development workflows.
  1. Adopt constraint-oriented ADR formatting immediately. Existing prose-format ADRs should be migrated to the structured format described in Section 2.2. Prioritize ADRs that govern security boundaries, data isolation, and naming conventions, as these are most susceptible to AI non-compliance.
  2. Establish ADR authorship as a protected human responsibility. Governance frameworks must explicitly prohibit delegation of ADR authorship to AI agents. The value of an ADR is proportional to the operational experience encoded within it; AI agents lack the institutional memory required to produce this content.
  3. Enforce ADR references in commit messages via pre-commit hooks. Automated enforcement creates an auditable chain from architectural decision to implementation artifact. Without enforcement, compliance rates remain probabilistic rather than deterministic.
  4. Shift code-review focus from implementation detail to ADR compliance. When ADRs are well-formed and enforced, human reviewers need not verify every line of AI-generated code. Review effort should concentrate on verifying that the correct ADRs were applied and that AI interpretation of constraints was accurate.
  5. Treat ADR backfill as a risk-reduction initiative. Platforms that have deployed AI-generated code without prior ADRs carry latent architectural debt. A systematic audit against reconstructed ADRs will surface inconsistencies that are not visible through standard code review.
  6. Instrument compliance metrics and track them over time. Compliance rate, drift incident count, and review cycle duration are the primary indicators of ADR program health. Teams should establish baselines and monitor trends across successive sprint cycles.

8. Forward-Looking Considerations

The governance model described in this paper represents a first-generation solution to AI architectural consistency. As AI code-generation capabilities advance, the constraint-oriented ADR format will likely evolve toward formal specification languages that enable automated verification without human-authored commit hooks. Organizations that establish disciplined ADR practices now will be positioned to migrate to these richer verification frameworks as they mature. The foundational principle—that architectural intent must be codified in structured, machine-parseable form to govern AI-generated output—will remain relevant regardless of the specific tooling employed. Engineering teams that treat ADRs as living governance artifacts, rather than static documentation, will sustain architectural consistency at scale without sacrificing the velocity benefits of AI-assisted development.

Next in This Series

Week 6: Configuration Governance. How we used ADR-driven middleware to eliminate 100% of manual configuration lookups, and the middleware pattern that made configuration hierarchical and automatic.

Discussion

Share Your Experience

Do you use ADRs? How do you keep AI implementations consistent with architecture? Connect on LinkedIn.

Disclaimer: This content represents my personal learning journey using AI for a personal project. It does not represent my employer's views, technologies, or approaches. All code examples are generic patterns or pseudocode for educational purposes.