Executive Summary
Event sourcing provides value beyond its canonical role as an audit and state reconstruction mechanism: event schema analysis can serve as a cross-service architectural consistency checker that detects violations that code review and unit testing systematically miss. An investigation of 2,847 production events identified six entities lacking mandatory capsule isolation—violations that had passed both AI-assisted and human code review. Subsequent implementation of compile-time enforcement through enhanced procedural macros eliminated the entire class of isolation violations at the type-system level. Documented patterns in a shared context file measurably improved AI agent accuracy on subsequent development tasks, reducing rework cycles from 2.3 to 1.8 per sub-task. This analysis documents the detection methodology, remediation approach, and the four operational principles that emerged from this work.
Key Findings
- Event schema inconsistency analysis detects architectural violations that code review misses. Comparing field presence across semantically related event types reveals isolation boundary gaps that are invisible during per-file review.
- AI pattern inference from event corpora requires no explicit rule specification. An evaluator agent inferred the capsule isolation pattern from 2,847 events without being provided a formal definition, demonstrating emergent cross-service consistency analysis.
- Compile-time macro enforcement eliminates entire classes of architectural violations. After the #[capsule_isolated] attribute was made mandatory at compile time, zero subsequent isolation violations were introduced—compared to six that passed review prior to enforcement.
- Shared architectural documentation directly changes AI agent behavior. A structured pattern reference reduced Builder agent errors and decreased Verifier agent review time; 92% of documented patterns were referenced during subsequent verification tasks.
- Verification quality depends on the scope of questions asked, not only the correctness of answers. Anticipatory verification—asking what could go wrong rather than only whether the implementation is correct—prevented at least one production-class bug in this analysis period.
- Nested struct field traversal is a systematic edge case in macro validation logic. Field-existence checks that do not recurse into nested structs produce false compile passes; this class of defect requires explicit test coverage.
1. Introduction: The Verification Gap
The standard software quality assurance model—write code, write tests, conduct code review, merge—verifies whether individual units of code behave correctly in isolation. It does not systematically verify whether implementations are consistent with architectural invariants established across multiple services or whether they conform to cross-cutting patterns applied elsewhere in the system. This analysis documents how event sourcing was used as a verification mechanism to close this gap in a multi-tenant SaaS platform with capsule-level data isolation requirements. The investigation began with a single anomalous log entry and expanded into a systematic refactoring of the macro infrastructure, documentation, and verification process.
2. Discovery: Event Schema Analysis as an Audit Tool
2.1 The Initial Signal
A log entry in the test environment event stream indicated a potential isolation violation.
2.2 AI-Assisted Event Pattern Analysis
An evaluator agent was tasked with analyzing event logs across the preceding 48-hour window. Rather than filtering for error or warning severity, the agent performed cross-event schema comparison:
- Scanned 2,847 events across authentication, CRM, and catalog services
- Extracted field schemas by grouping events by declared type
- Compared field presence patterns across semantically related event groups
- Flagged events whose schemas deviated from the pattern established by similar events
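The four steps above can be sketched as a minimal field-presence comparison. The event representation, the event names, and the flag_missing helper below are illustrative assumptions, not the evaluator agent's actual implementation:

```rust
use std::collections::{BTreeSet, HashMap};

/// A simplified event record: a declared type plus the field names present.
struct Event {
    event_type: &'static str,
    fields: BTreeSet<&'static str>,
}

/// Group events by declared type, union their observed fields, and flag any
/// type in a semantically related group that lacks a field its peers carry.
fn flag_missing(events: &[Event], group: &[&str], required: &str) -> Vec<String> {
    let mut by_type: HashMap<&str, BTreeSet<&str>> = HashMap::new();
    for e in events {
        by_type
            .entry(e.event_type)
            .or_default()
            .extend(e.fields.iter().copied());
    }
    group
        .iter()
        .filter(|t| by_type.get(**t).map_or(false, |f| !f.contains(required)))
        .map(|t| t.to_string())
        .collect()
}
```

The comparison requires no explicit rule: the expected schema emerges from the peer group itself, which is why the agent could flag isolation gaps without a formal definition of the pattern.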
2.3 Scope of the Finding
The event analysis identified six entities with missing capsule isolation:
- RoleAssignment
- SecurityGroup
- Entitlement
- Session (authentication service)
- Federal compliance data (CRM service)
3. Root Cause: Macro Non-Enforcement
3.1 The Permissive Macro
The DynamoDB entity macro at the time of the violations enforced key format syntax but did not require the presence of capsule isolation fields. The macro accepted a capsule_id field when one was present, but it did not reject entities that omitted it. Development sessions that created RoleAssignment and SecurityGroup entities simply did not include the field, and the compiler did not object.
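A simplified model of the permissive check is sketched below. The real validation lives inside a procedural macro, and the key format shown is an assumption; the point is that capsule_id presence is observed but never required:

```rust
/// Simplified model of the permissive validation: it enforces partition key
/// syntax but never requires capsule_id. (Illustrative; not the actual macro.)
fn permissive_validate(field_names: &[&str], partition_key: &str) -> Result<(), String> {
    // Enforced: the partition key must follow the expected "TYPE#id" syntax.
    if !partition_key.contains('#') {
        return Err(format!("malformed partition key: {partition_key}"));
    }
    // Not enforced: an entity without capsule_id still validates cleanly.
    let _capsule_isolated = field_names.contains(&"capsule_id");
    Ok(())
}
```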
3.2 The Enforcement Gap
This gap illustrates a broader principle: macros that generate convenience without enforcing invariants provide the appearance of safety without the substance. When a macro silently accepts an incorrect pattern, it is more dangerous than no macro, because developers rely on the macro to catch errors that it does not actually catch.
4. Remediation: Compile-Time Enforcement
4.1 Enhanced Macro Implementation
A Builder session implemented compile-time capsule isolation enforcement through a new #[capsule_isolated] attribute on the DynamoDbEntity derive macro.
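A simplified model of the enforcement logic follows. The actual implementation walks the syntax tree inside the derive macro; the Field model and helper below are assumptions made for illustration. Note the recursion into nested fields, which gives the check the full field tree rather than only the top level:

```rust
/// Simplified model of a field as the macro sees it: a leaf name, or a
/// nested struct carrying its own fields.
enum Field {
    Leaf(&'static str),
    Nested(&'static str, Vec<Field>),
}

/// With #[capsule_isolated] mandatory, validation fails unless capsule_id is
/// found anywhere in the field tree. A check that inspects only top-level
/// fields misjudges entities whose relevant fields live in nested structs.
fn enforce_capsule_isolated(fields: &[Field]) -> Result<(), String> {
    fn has_field(fields: &[Field], name: &str) -> bool {
        fields.iter().any(|f| match f {
            Field::Leaf(n) => *n == name,
            Field::Nested(_, inner) => has_field(inner, name),
        })
    }
    if has_field(fields, "capsule_id") {
        Ok(())
    } else {
        Err("entity is missing required field capsule_id".to_string())
    }
}
```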
4.2 Edge Case: Nested Struct Field Traversal
Verification of the enhanced macro identified a defect in the field-existence check logic: the check did not recurse into nested struct fields, producing false compile passes. This class of defect—field validation that does not recurse into nested types—is a predictable blind spot in macro implementation and requires explicit negative test coverage (test cases that should not compile but would compile without the fix).
4.3 Final Entity Pattern
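A sketch of the remediated entity shape follows. The derive macro and attribute are project-specific and therefore shown as comments, and the non-isolation field names are illustrative assumptions:

```rust
// In the real codebase this struct would carry the project-specific derive:
//
//   #[derive(DynamoDbEntity)]
//   #[capsule_isolated]
//
// which fails compilation if capsule_id is absent.
struct RoleAssignment {
    /// Mandatory isolation field: all access is scoped to this capsule.
    capsule_id: String,
    /// Illustrative entity fields (names are assumptions).
    role_id: String,
    principal_id: String,
}

impl RoleAssignment {
    /// The constructor takes the capsule first, so call sites cannot omit it.
    fn new(capsule_id: &str, role_id: &str, principal_id: &str) -> Self {
        Self {
            capsule_id: capsule_id.to_string(),
            role_id: role_id.to_string(),
            principal_id: principal_id.to_string(),
        }
    }
}
```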
The canonical capsule-isolated entity pattern established after remediation carries the mandatory capsule_id field on every user-scoped entity, enforced by the #[capsule_isolated] attribute.
5. Supporting Infrastructure: Event Schema Validation
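This section concerns the test-environment schema validator. A minimal sketch of such a check follows; the event type, expected field names, and function shapes are assumptions, not the production Lambda:

```rust
use std::collections::{BTreeMap, BTreeSet};

/// Expected field sets per event type, as the validator would hold them.
fn expected_schemas() -> BTreeMap<&'static str, BTreeSet<&'static str>> {
    let mut schemas = BTreeMap::new();
    schemas.insert(
        "RoleAssignmentCreated",
        ["capsule_id", "role_id", "principal_id"].into_iter().collect(),
    );
    schemas
}

/// Compare one event's fields against the expected schema for its type and
/// emit a warning per missing field: the kind of log entry that initiated
/// this investigation.
fn validate_event(event_type: &str, fields: &BTreeSet<&'static str>) -> Vec<String> {
    match expected_schemas().get(event_type) {
        None => vec![format!("unknown event type: {event_type}")],
        Some(expected) => expected
            .difference(fields)
            .map(|f| format!("WARN {event_type}: missing expected field {f}"))
            .collect(),
    }
}
```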
The event log warning that initiated this investigation originated not from production but from a test-environment Lambda function that validates event schemas against expected field definitions for each event type.
6. Documentation as AI Context
6.1 The Effect of Shared Pattern Reference
Prior to structured architectural documentation, AI Builder agents approached entity creation without a reference for expected field patterns. The result was inconsistent application of capsule isolation—correct when the agent happened to reference a compliant entity, absent when it did not. After documenting 12 architectural patterns in a shared reference file, including the capsule isolation checklist, measurable improvements appeared:
- Builder agent referenced documented patterns in 9 of 12 (75%) relevant tasks
- Verifier agent referenced documented patterns in 11 of 12 (92%) relevant tasks
- Rework cycles per sub-task decreased from 2.3 to 1.8
AI agents do not have persistent memory across sessions. Architectural patterns that must be applied consistently require an explicit reference artifact—a shared documentation file, a rule set, or a structured context document—that the agent can load at task start. The act of documenting a pattern is not administrative overhead; it is the mechanism by which the pattern propagates to AI-assisted development.
6.2 Documentation Pattern Format
The following structure was found to be effective for AI-consumable architectural documentation:
- Pattern name and classification
- Applicability criteria (when to use)
- Canonical code example
- Anti-patterns (what not to do, with examples)
- Rationale (the reason the pattern exists)
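An abridged entry in this format might look as follows; the wording is an illustration of the structure, not the project's actual reference file:

```
Pattern: Capsule Isolation (security / data partitioning)
Use when: an entity stores user- or tenant-scoped data.
Example: the entity struct carries a mandatory capsule_id field and the
  #[capsule_isolated] attribute.
Anti-pattern: an entity keyed only by its own ID, with no capsule scope.
Rationale: queries must be structurally incapable of crossing capsule
  boundaries; isolation applied per-query instead of per-entity can be
  forgotten.
```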
7. Comparative Analysis: Verification Approaches
The following table characterizes four verification approaches, their scope, and the defect categories they detect.
| Verification Approach | Scope | Detects | Does Not Detect |
|---|---|---|---|
| Unit tests | Individual function behavior | Logic errors, edge cases | Cross-service inconsistency |
| Code review (human or AI) | Per-file implementation | Obvious violations, style deviations | Pattern drift across services |
| Event schema validation | Cross-service field consistency | Isolation field omissions, schema drift | Logic errors within correct-schema events |
| Compile-time macro enforcement | Type-system invariants | Missing required fields, malformed key patterns | Semantic errors in field values |
8. Operational Metrics
The work documented in this analysis was completed across 72 commits over a single development period.
| Metric | Value |
|---|---|
| Commits | 72 |
| Entities remediated for capsule isolation | 6 |
| API routes refactored | 23 |
| Macros enhanced | 2 |
| Documentation patterns added | 12 |
| New features shipped | 0 (all refinement) |
| Rework cycles per sub-task (before) | 2.3 |
| Rework cycles per sub-task (after) | 1.8 |
| Critical isolation violations caught by event analysis | 6 |
| Additional defects caught by Verifier | 12 |
| Violations introduced after compile-time enforcement | 0 |
9. Principles Established
9.1 Compile-Time Enforcement Supersedes Runtime Checks
Architectural invariants enforced only at test time or runtime can be bypassed by omission. When a pattern is mandatory—such as capsule isolation for all user-scoped entities—it must be enforced at compile time: there, omission is a build failure, whereas at runtime it is a latent defect discovered only if the missing path is exercised.
9.2 Events Are the Authoritative Diagnostic Source
Database state reflects the current outcome of all prior operations. Event logs record the operations themselves, including their parameters. When debugging data consistency issues, event logs provide the causal trace that database state cannot. The diagnostic posture should be: start with the events, derive the state.
9.3 Documentation Is AI Agent Context
Architectural patterns that are not documented are not consistently applied by AI agents. Pattern documentation is not documentation for its own sake—it is a mechanism for propagating architectural decisions into AI-assisted development workflows. Each documented pattern with explicit examples, anti-patterns, and rationale measurably improves agent output quality on related tasks.
9.4 Macros Must Fail Loudly and Informatively
A macro that silently accepts an incorrect configuration is more hazardous than no macro, because it provides false confidence. The following checklist governs macro quality:
- Validates all required fields are present
- Validates field types are correct
- Validates attribute patterns (for example, partition key format)
- Produces error messages that specify what is wrong, how to fix it, and include a correct usage example
- Has negative test coverage (test cases that must not compile)
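The error-message items in this checklist can be illustrated with a sketch of the diagnostic text itself. In a real derive macro this message would be emitted as a compile error attached to the offending span; the helper below is an illustrative model, not the project's implementation:

```rust
/// Build a diagnostic that satisfies the checklist: it names what is wrong,
/// states the fix, and includes a correct usage example.
fn isolation_error(entity: &str, missing: &str) -> String {
    let mut msg = String::new();
    msg.push_str(&format!(
        "error: {entity} is marked #[capsule_isolated] but has no {missing} field\n"
    ));
    msg.push_str(&format!("fix: add `{missing}: String` to the struct\n"));
    msg.push_str("example:\n");
    msg.push_str(&format!(
        "    #[capsule_isolated]\n    struct {entity} {{\n        {missing}: String,\n        // ...\n    }}\n"
    ));
    msg
}
```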
10. Recommendations
- Implement event schema validation as a standard test-environment fixture. A Lambda function or test harness that validates event field presence and cross-event consistency should be present in every event-driven system before the system reaches production scale.
- Encode all architectural invariants as compile-time constraints. Identify the mandatory patterns in the system (isolation field presence, key format requirements, required metadata fields) and implement proc macro enforcement before those patterns are applied to more than five entities.
- Establish a structured pattern documentation artifact and reference it at task start. For AI-assisted development, a shared context file containing canonical code examples, anti-patterns, and rationale should be maintained and updated with each new architectural decision.
- Design verification tasks to ask anticipatory questions. Verification sessions should explicitly include questions of the form “what configuration errors could a developer make?” and “what edge cases does the implementation not handle?” in addition to “does the implementation satisfy the requirements?”
- Require negative compile tests for all macro implementations. For each macro attribute or constraint, there should be a corresponding test file that is expected to fail compilation. These tests prevent regression when macro implementation details change.
11. Conclusion
Event sourcing, when combined with automated schema validation, functions as a cross-service consistency verification mechanism that complements but does not duplicate unit testing and code review. The approach detects a specific and consequential class of defect—architectural invariant violations distributed across service boundaries—that standard verification methods miss systematically. The compile-time enforcement approach documented in this analysis eliminated an entire class of security boundary violations at the type-system level. Once implemented, the class of defect became structurally unshippable. This is the highest-assurance form of architectural governance available in a compiled language, and it should be the target design for all mandatory patterns. As multi-agent AI development workflows mature, the combination of documented pattern references, compile-time enforcement, and event-based consistency analysis will form an increasingly important governance layer. Future work should investigate automated detection of new invariant candidates from event schema drift patterns, enabling governance mechanisms to evolve continuously with the architecture.

All content represents personal learning from personal projects. Code examples are sanitized and generalized. No proprietary information is shared. Opinions are my own and do not reflect my employer’s views.