Documentation Index
Fetch the complete documentation index at: https://www.aidonow.com/llms.txt
Use this file to discover all available pages before exploring further.
Recently Published
AutoResearch: Nightly Intelligence Loop
An 8-phase autonomous research pipeline — nightly cron, Ollama gemma3:27b local scoring, score-based escalation ladder, and a mandatory zero-findings summary — that delivers curated intelligence briefs without human intervention.
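The score-based escalation ladder described above can be sketched in a few lines. This is a hypothetical illustration only: the thresholds, rung names, and function names below are assumptions, not the pipeline's actual values.

```python
# Hypothetical sketch of a score-based escalation ladder: a nightly scorer
# assigns each finding a relevance score, and the ladder maps score bands
# to escalation actions. All thresholds and action names are illustrative.

def escalate(score: float) -> str:
    """Map a relevance score (0-10) to an escalation rung."""
    if score >= 8.0:
        return "immediate-notification"  # surface to a human right away
    if score >= 5.0:
        return "daily-brief"             # include in the morning brief
    if score >= 2.0:
        return "weekly-digest"           # batch into a weekly digest
    return "archive-only"                # store, never surface

def summarize(scores: list[float]) -> dict[str, int]:
    """Count findings per rung. An empty input still yields a full
    summary, mirroring the mandatory zero-findings summary."""
    counts = {"immediate-notification": 0, "daily-brief": 0,
              "weekly-digest": 0, "archive-only": 0}
    for s in scores:
        counts[escalate(s)] += 1
    return counts
```

The zero-findings rule is the important design choice: a night with no findings still emits a summary, so silence is distinguishable from failure.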
AutoExplore: Zero-Cost QA Sub-Agent
A Playwright sub-agent with 15 detection rules and an entity triple-check pattern that uses Claude Code as a zero-cost triage LLM — classifying bugs, deduplicating against open issues, and filing Redmine tickets automatically.
CLAUDE.md as Operational Constitution
How a committed CLAUDE.md file functions as an always-read operational constitution — encoding incident-derived constraints, identity systems, and workflow rules that persist across session boundaries and survive agent substitution.
PostToolUse Hooks: Lifecycle Event Bridge
PostToolUse hooks fire after every tool execution, enabling activity-feed events, audit log writes, and notifications — without modifying application code or adding latency to the agent execution path.
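A PostToolUse hook of this kind is registered declaratively in Claude Code's settings file; the agent runtime invokes the command after each matching tool call. A minimal sketch, assuming a hypothetical emit script (the script path and matcher are illustrative, not from the article):

```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Bash|Write|Edit",
        "hooks": [
          {
            "type": "command",
            "command": "/usr/local/bin/emit-activity-event.sh"
          }
        ]
      }
    ]
  }
}
```

Because the hook runs out-of-band after the tool returns, the application code stays untouched and the agent's execution path pays no latency cost.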
Memory Persistence Patterns
Four persistence tiers — in-session context, memory files, CLAUDE.md, and external wiki — mapped against durability, scope, and verification requirements. Includes the three-layer architecture that closes the /tmp durability gap.
Whisper MCP Federation: Per-Actor Token Scoping
How scoping each AI agent’s activity-feed token to a single actor identity makes the audit trail tamper-evident — with graceful downgrade, four-layer token storage, and a hook bridge that fires lifecycle events without touching business logic.
Foundational Reads
Four articles that cover the thesis, its honest limits, the governance layer, and where it leads.
Multi-Agent Workflow Patterns
The Evaluator–Builder–Verifier coordination model and the Plan → Implement → Verify loop that prevents unchecked AI output from reaching production. Where autonomous development begins.
When AI Fails: Cascading Errors
AI agents fail in ways humans do not — confidently wrong, silently cascading, resistant to correction mid-stream. The failure taxonomy and the guardrail architecture that catches them before production.
CODEOWNERS as Agent Authority Boundary
Git’s CODEOWNERS mechanism as machine-enforceable governance: which AI agent may modify which service, enforced structurally at the repository level rather than through instructions an agent can ignore.
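The structural enforcement described above comes from an ordinary CODEOWNERS file, with each AI agent mapped to a team that owns a path. A minimal sketch; the paths and team names are hypothetical, not taken from the article:

```
# Hypothetical CODEOWNERS fragment: each service path is owned by a team
# whose membership is a specific AI agent identity, so branch protection
# blocks merges that lack that owner's review.
/services/billing/   @org/billing-agent
/services/search/    @org/search-agent
/infra/              @org/platform-humans
```

Combined with required-review branch protection, this turns "which agent may touch which service" into a repository-level rule rather than a prompt instruction an agent can ignore.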
The Dogfooding Loop: Product Builds Product
The endpoint: an autonomous development organization that uses its own product to manage building it. Bug reports filed by AI agents trigger tasks picked up by AI agents. The loop closes.
Browse by Category
Practices
Workflows, governance & process
Craft
Implementation & design patterns
AI
Tools, limits & insights
Analysis
Metrics & retrospectives
75+ standalone articles published across Practices, Craft, AI, Claude Code, and Analysis. The episodic series archive (Building with AI Journey, Autonomous Dev Org) lives under the Archive tab.