Executive Summary
Architecture Decision Records (ADRs) traditionally serve as retrospective documentation of design choices. This research demonstrates that, when reformatted as structured constraint definitions rather than narrative prose, ADRs become enforceable governance contracts that AI code-generation agents can parse and comply with automatically. In a 30-day study of a multi-crate Rust platform, four strategically authored ADRs produced 67 verified compliant commits, eliminated architectural drift entirely, and reduced human code-review overhead by approximately 70 percent. Organizations adopting AI-assisted development workflows face a governance gap: velocity increases while architectural consistency erodes. Structured ADRs close this gap by codifying constraints in formats that both human reviewers and AI agents can evaluate deterministically. This paper presents the methodology, empirical results, and a replicable ADR template for engineering teams deploying AI development pipelines.

Key Findings
- Structured ADRs function as machine-readable architectural contracts. AI agents comply reliably with constraints expressed in code blocks, tables, and enumerated rules, but fail to extract actionable directives from prose-format decision logs.
- Pre-commit hook enforcement raised ADR compliance from 40 percent to 100 percent within a single sprint cycle, demonstrating that governance tooling compounds the effect of well-structured documentation.
- Four ADRs eliminated six categories of architectural defect including cross-environment data leakage, orphaned entity cascades, AWS client scope violations, and inconsistent test pyramid coverage.
- Human authorship of ADRs is a non-negotiable requirement. AI agents cannot identify operational pain points or encode institutional constraints; ADR creation must remain a human responsibility.
- The ADR-driven workflow reduced code-review time by approximately 70 percent by shifting reviewer attention from implementation detail verification to ADR compliance confirmation.
- Return on documentation investment is asymmetric. Each ADR required 2 to 4 hours of human authoring effort and subsequently governed hundreds of AI-generated implementation decisions at zero marginal cost.
1. Problem Statement: Architectural Entropy in AI-Assisted Development
1.1 The Consistency Challenge
The adoption of AI code-generation agents introduces a structural tension in software engineering workflows. Development velocity increases substantially; however, without a mechanism for conveying architectural intent to the AI, each implementation request produces locally correct but globally inconsistent code. A platform with 20 or more Rust crates, each built through independent AI sessions, will exhibit divergent naming conventions, inconsistent isolation patterns, and varying adherence to security boundaries. The traditional solution—comprehensive human code review—does not scale proportionally with AI-assisted throughput. A human reviewer cannot efficiently evaluate the architectural compliance of code that arrives faster than the review cycle allows.

1.2 The Documentation Gap
Standard ADR formats, as described in the original ADR specification, are optimized for human readers. They convey intent, context, and rationale in natural language. This format is inadequate for AI consumption because it does not provide the explicit, enumerable constraint definitions that generative models require to produce deterministically compliant output. The research question this paper addresses is: Can ADRs be reformatted to serve as enforceable constraints for AI agents without sacrificing their utility for human readers?

2. Methodology
2.1 Research Context
The study was conducted over 30 days (December 31, 2025 through January 30, 2026) on a production multi-tenant SaaS platform implemented in Rust. The codebase comprised more than 20 crates. AI agents performed the majority of implementation work, with human engineers responsible for architecture, review, and ADR authorship.

2.2 Constraint-Oriented ADR Format
The central methodological intervention was the replacement of narrative ADR prose with structured constraint definitions. A prose decision log states what was decided and why, leaving an AI agent to infer directives from natural language; a constraint-oriented ADR additionally enumerates testable rules in code blocks, tables, and lists. The two formats are contrasted dimension by dimension in Section 4.2.

2.3 Integration Workflow
ADRs were integrated into the development pipeline through a four-stage process: problem detection, human ADR authorship, AI implementation against the ADR's constraints, and compliance verification.

2.4 Enforcement Mechanism
Compliance was reinforced through pre-commit hooks requiring ADR-XXXX references in all commit messages. This mechanism created an auditable chain of custody between architectural decisions and their implementation artifacts.
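A minimal sketch of such a hook (in Git, commit messages are validated by the `commit-msg` hook, which receives the message file path as its first argument; the script itself is an illustrative assumption, not the study's actual tooling):

```python
import re
import sys

# Matches ADR references in the four-digit form used in this paper, e.g. "ADR-0010".
ADR_REF = re.compile(r"\bADR-\d{4}\b")

def has_adr_reference(message: str) -> bool:
    """Return True if the commit message cites at least one ADR."""
    return bool(ADR_REF.search(message))

def main(msg_path: str) -> int:
    # Git invokes the commit-msg hook with the path to the commit message file.
    with open(msg_path, encoding="utf-8") as f:
        if has_adr_reference(f.read()):
            return 0
    print("commit rejected: message must reference an ADR (e.g. ADR-0010)")
    return 1

if __name__ == "__main__" and len(sys.argv) > 1:
    sys.exit(main(sys.argv[1]))
```

Installed as `.git/hooks/commit-msg`, this makes the ADR reference a precondition for every commit, which is what turns compliance from probabilistic into deterministic.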
3. The Four Architectural ADRs
The following four ADRs were authored and enforced during the study period. Each addressed a distinct category of architectural defect.
ADR-0010: Capsule Isolation Enforcement
Problem: Dev data leaking to production environments.

Constraint: All capsule-scoped entities must:
- Use capsule-prefixed table names
- Include capsule in partition keys
- Require capsule_id (not optional)
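The three rules above can be enforced mechanically at the single point where table names and keys are derived. A minimal sketch of the pattern (helper names such as `capsule_table_name` and the `#`-joined key layout are illustrative assumptions, not the platform's actual code):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CapsuleScope:
    """Scope descriptor for capsule-scoped entities; capsule_id is required, never optional."""
    capsule_id: str

    def __post_init__(self) -> None:
        if not self.capsule_id:
            raise ValueError("capsule_id is required for capsule-scoped entities")

def capsule_table_name(scope: CapsuleScope, entity: str) -> str:
    # Rule 1: all capsule-scoped tables carry the capsule prefix.
    return f"{scope.capsule_id}_{entity}"

def partition_key(scope: CapsuleScope, entity_id: str) -> str:
    # Rule 2: the capsule participates in the partition key, so a query
    # physically cannot cross an environment boundary such as dev -> prod.
    return f"{scope.capsule_id}#{entity_id}"
```

Because every table name and key passes through these helpers, the ADR's constraints hold at construction time rather than relying on per-call-site discipline.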
ADR-0016: Foreign Keys and Cascading Operations
Problem: Deleting a parent entity leaves orphaned children.

Constraint: Use the SagaStep macro for multi-entity operations with compensation logic.

Impact: Enabled complex workflows such as account merges with automatic rollback.
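The `SagaStep` macro itself is internal to the platform; the pattern it encodes (execute steps in order, and on failure run the completed steps' compensations in reverse) can be sketched generically as:

```python
from typing import Callable, List, Tuple

# A saga step pairs a forward action with a compensation that undoes it.
Step = Tuple[Callable[[], None], Callable[[], None]]

def run_saga(steps: List[Step]) -> None:
    """Execute steps in order; on failure, compensate completed steps in reverse."""
    done: List[Step] = []
    for action, compensate in steps:
        try:
            action()
        except Exception:
            # Automatic rollback: undo everything that already succeeded,
            # newest first, then surface the original failure.
            for _, undo in reversed(done):
                undo()
            raise
        done.append((action, compensate))
```

Either the whole multi-entity operation completes or its completed parts are undone, which is what prevents orphaned children when a parent deletion fails partway through.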
ADR-0018: Unified AWS Client Management
Problem: 16 AWS SDKs used inconsistently, with no scope enforcement.

Constraint: 4 client types (Platform, Tenant, Capsule, Operator) with mandatory scope parameters.

Impact: Eliminated roughly 600 lines of boilerplate per crate and enforced isolation at the SDK level.
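A sketch of the mandatory-scope idea (the variant names mirror the four client types above, but the wrapper itself is an illustrative assumption; a real implementation would hold configured AWS SDK clients rather than a bare prefix):

```python
from dataclasses import dataclass
from enum import Enum

class ClientScope(Enum):
    PLATFORM = "platform"
    TENANT = "tenant"
    CAPSULE = "capsule"
    OPERATOR = "operator"

@dataclass(frozen=True)
class ScopedClient:
    """A client wrapper that cannot be constructed without an explicit scope."""
    scope: ClientScope
    scope_id: str  # tenant id, capsule id, etc.; mandatory, never defaulted

    def resource_prefix(self) -> str:
        # Every resource name derives from the scope, so isolation is
        # enforced at client-construction time, not at each call site.
        return f"{self.scope.value}/{self.scope_id}"

def tenant_client(tenant_id: str) -> ScopedClient:
    if not tenant_id:
        raise ValueError("tenant_id is mandatory for a Tenant-scoped client")
    return ScopedClient(ClientScope.TENANT, tenant_id)
```

Centralizing construction in factory functions like `tenant_client` is what removes the per-crate boilerplate: each crate requests a scoped client instead of wiring SDKs itself.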
ADR-0019: Comprehensive Testing Standards
Problem: Inconsistent test coverage across crates.

Constraint: 4-level test pyramid (Unit, Integration, E2E, Contract) with coverage targets.

Impact: Increased platform test coverage from 60% to 85%.
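One way to make the pyramid requirement checkable is to validate each crate's declared test levels and measured coverage against targets. A sketch, with per-level targets that are assumptions (the paper reports only the aggregate 60% to 85% outcome):

```python
# The four pyramid levels mandated by ADR-0019.
REQUIRED_LEVELS = ("unit", "integration", "e2e", "contract")

# Illustrative per-level minimum-coverage targets; these specific numbers
# are assumptions, not figures from the study.
COVERAGE_TARGETS = {"unit": 0.85, "integration": 0.75, "e2e": 0.50, "contract": 0.50}

def missing_levels(declared: set[str]) -> list[str]:
    """Pyramid levels a crate's test suite does not declare at all."""
    return [level for level in REQUIRED_LEVELS if level not in declared]

def compliance_report(measured: dict[str, float]) -> dict[str, bool]:
    """Map each pyramid level to whether measured coverage meets its target."""
    return {
        level: measured.get(level, 0.0) >= target
        for level, target in COVERAGE_TARGETS.items()
    }
```

Run in CI per crate, a check like this turns the testing standard into the same kind of enumerable, verifiable constraint as the other three ADRs.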
4. Results
4.1 Compliance Metrics
- ADRs written: 4 major architectural decisions
- Commits referencing ADRs: 67 in 30 days
- ADR compliance rate: 100% (after pre-commit hooks)
- Architectural drift incidents: 0
4.2 Comparative Format Analysis
The table below summarizes the material differences between traditional and constraint-oriented ADR formats as they relate to AI-assisted development.

| Dimension | Traditional ADR (Prose) | Constraint-Oriented ADR |
|---|---|---|
| Primary audience | Human engineers | Human engineers and AI agents |
| Information format | Natural language narrative | Code blocks, tables, enumerated rules |
| AI extractability | Low — directives must be inferred | High — rules are enumerable and testable |
| Verification mechanism | Human judgment | Automated compliance checks |
| Compliance rate (observed) | ~40% | 100% (with pre-commit hooks) |
| Review overhead | High | Reduced by ~70% |
| Authorship requirement | Human | Human (non-delegable) |
4.3 Defect Analysis
The initial prose-format ADRs produced a compliance rate of approximately 40 percent because AI agents complied with the spirit of decisions but missed specific naming patterns and structural requirements. Reformatting to constraint-oriented structure eliminated this ambiguity.

5. Capability Assessment: AI Strengths and Limitations
A clear division of labor between human and AI roles is essential to the governance model described here.

AI Capability: Constraint Application
AI agents excel at applying structured constraints uniformly across large codebases. When provided a table of partition key patterns, the agent produces conformant code across every entity without variance or drift.
AI Limitation: ADR Authorship
AI agents cannot author effective ADRs. They lack access to operational pain points, institutional context, and the judgment required to encode constraints that anticipate future failure modes. ADR authorship must remain a human responsibility.
Human Role: Constraint Definition
Human engineers author ADRs that capture architectural constraints, operational learnings, and security requirements. This represents the highest-leverage human contribution in an AI-assisted pipeline.
Process Design: Delegation Boundary
The optimal workflow is: human authors ADR, AI reads and implements against ADR constraints, human verifies ADR compliance. This model scales to any number of concurrent AI agents.
6. Replicable ADR Template
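A minimal sketch of such a template, following the Problem/Constraint/Impact structure of the four ADRs in Section 3 (section names and placeholders are illustrative, not the study's original template):

```markdown
# ADR-XXXX: <Decision Title>

## Problem
One or two sentences naming the operational pain point.

## Constraint
Enumerated, testable rules (code blocks and tables, not prose):
- Rule 1: <must / must-not statement>
- Rule 2: <must / must-not statement>

| Pattern | Required form |
|---|---|
| <resource> | <exact naming or scoping rule> |

## Enforcement
How compliance is verified (e.g. pre-commit hook, CI check).

## Impact
The measurable outcome expected or observed.
```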
Such a template preserves the elements that make an ADR machine-parseable: a named problem, enumerated constraints, an enforcement mechanism, and measurable impact.

7. Recommendations
The following recommendations are directed at engineering leaders and platform architects deploying AI-assisted development workflows.

- Adopt constraint-oriented ADR formatting immediately. Existing prose-format ADRs should be migrated to the structured format described in Section 2.2. Prioritize ADRs that govern security boundaries, data isolation, and naming conventions, as these are most susceptible to AI non-compliance.
- Establish ADR authorship as a protected human responsibility. Governance frameworks must explicitly prohibit delegation of ADR authorship to AI agents. The value of an ADR is proportional to the operational experience encoded within it; AI agents lack the institutional memory required to produce this content.
- Enforce ADR references in commit messages via pre-commit hooks. Automated enforcement creates an auditable chain from architectural decision to implementation artifact. Without enforcement, compliance rates remain probabilistic rather than deterministic.
- Shift code-review focus from implementation detail to ADR compliance. When ADRs are well-formed and enforced, human reviewers need not verify every line of AI-generated code. Review effort should concentrate on verifying that the correct ADRs were applied and that AI interpretation of constraints was accurate.
- Treat ADR backfill as a risk-reduction initiative. Platforms that have deployed AI-generated code without prior ADRs carry latent architectural debt. A systematic audit against reconstructed ADRs will surface inconsistencies that are not visible through standard code review.
- Instrument compliance metrics and track them over time. Compliance rate, drift incident count, and review cycle duration are the primary indicators of ADR program health. Teams should establish baselines and monitor trends across successive sprint cycles.
8. Forward-Looking Considerations
The governance model described in this paper represents a first-generation solution to AI architectural consistency. As AI code-generation capabilities advance, the constraint-oriented ADR format will likely evolve toward formal specification languages that enable automated verification without human-authored commit hooks. Organizations that establish disciplined ADR practices now will be positioned to migrate to these richer verification frameworks as they mature. The foundational principle—that architectural intent must be codified in structured, machine-parseable form to govern AI-generated output—will remain relevant regardless of the specific tooling employed. Engineering teams that treat ADRs as living governance artifacts, rather than static documentation, will sustain architectural consistency at scale without sacrificing the velocity benefits of AI-assisted development.

Resources and Further Reading
- Architectural Decision Records (ADRs)
- When to Write an ADR
- Related article: AWS Runtime Adoption (Week 5)
Next in This Series
Week 6: Configuration Governance

How we used ADR-driven middleware to eliminate 100% of manual configuration lookups.
The middleware pattern that made configuration hierarchical and automatic
Discussion
Share Your Experience
Do you use ADRs? How do you keep AI implementations consistent with architecture?

Connect on LinkedIn
Disclaimer: This content represents my personal learning journey using AI for a personal project. It does not represent my employer’s views, technologies, or approaches. All code examples are generic patterns or pseudocode for educational purposes.