

The Next Chapter

The first series — Building with AI — documented what it looks like to develop software with AI agents: the workflows, the failures, the calibration. This series documents what comes next.
The shift: In Series 1, I was a developer using AI tools. In this series, I’m a founder running an AI development organization. The code still gets written. I’m just not writing it.

What This Series Covers

An autonomous development organization where:
  • Requirements written in Markdown auto-generate tasks as GitHub Issues
  • An executor agent (running on cron) picks tasks and implements them using Claude CLI
  • A verifier agent independently runs tests and reviews PRs against the original requirements
  • Rejected PRs get reopened as tasks — the executor retries with the verifier’s feedback
  • Completed requirements close automatically and notify you via Telegram
No human in the implementation loop.
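To make the loop concrete, here is a minimal sketch of the executor's task-selection and prompt-assembly steps. This is illustrative only: the function names, issue-dict shape, and `verifier_feedback` field are assumptions, not the actual implementation; tasks are assumed to arrive as GitHub Issues fetched as plain dicts via the REST API.

```python
# Hypothetical sketch of the executor agent's inner steps.
# Issue dicts and field names are assumptions for illustration.

def pick_next_task(issues):
    """Pick the oldest open issue labeled 'task'. Rejected PRs
    reappear here as reopened issues, so retries are automatic."""
    candidates = [
        i for i in issues
        if i["state"] == "open" and "task" in i.get("labels", [])
    ]
    if not candidates:
        return None
    return min(candidates, key=lambda i: i["created_at"])

def build_prompt(task):
    """Assemble the prompt handed to the Claude CLI; any verifier
    feedback is appended so a retry sees why it was rejected."""
    prompt = f"Implement: {task['title']}\n\n{task['body']}"
    if task.get("verifier_feedback"):
        prompt += f"\n\nPrevious attempt rejected:\n{task['verifier_feedback']}"
    return prompt

# A cron-driven entry point would then shell out, roughly:
#   subprocess.run(["claude", "-p", build_prompt(task)], check=True)
```

The key design point is that retry state lives in the issue itself: reopening a rejected PR as a task, with the verifier's feedback in the body, means the executor needs no memory of its own between cron runs.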

Who This Is For

Engineers

Concrete architecture, real code, and a step-by-step replication guide. The executor and verifier are each fewer than 500 lines of Python.

Founders & VCs

What this means for the unit economics of early-stage software. How the founding bottleneck shifts from implementation to decision-making.

The Series

Episode 1: The Orchestration Problem — Why One AI Isn't Enough

The gap between “AI helps you code” and “AI builds without you” is an engineering problem. Here’s what our first attempt taught us, and what we built instead.

Episode 2: Memory That Survives the Session

The loop was closing tasks. But every session started blank. Here’s how structured GitHub Issue beads and an MCP retrieval tool gave our agents memory without adding infrastructure.

Episode 3: The Agent That Couldn't See What It Was Breaking

As the loop took on larger tasks, the reactive compile-and-fix cycle became a trust problem. Here’s the impact graph architecture — Tree-sitter, KuzuDB, MCP — that fixes it.

Episode 4: The Agent That Didn't Know What Normal Looked Like

AI reports what exists. It has no concept of what should exist. Here’s the baseline drift problem we discovered during a 19-week refactor — and how we plan to fix it.

Episode 5: The Gate That Wasn't There

An autonomous loop needs hard process gates the same way a compiler needs type checks. Soft norms don’t hold when there’s no human per task.

Episode 6: The Autonomous Loop Eats Its Own Dog Food

With a human team, dog-fooding is quality discipline. With an autonomous loop, it’s a trust mechanism. Here’s what customer zero revealed before the first real tenant arrived.

Episode 7: The Local Verification Loop — Hardening the Inner Loop

Why bigger prompts didn’t stop AI “amnesia,” and how we moved from soft documentation to hard filesystem enforcement.

Episode 8: The Polyrepo Context Wall — Solving Boundary Blindness

How we managed architectural consistency across 20+ repositories using high-context models and SDK-first patterns.

Episode 9: The Self-Healing Correction Cycle

Trust is a routing decision. How we built a self-healing organization by coordinating local and global verification.

Episode 10: The Leadership Evolution — From Bots to a Boardroom

Why I stopped writing prompts and started writing Charters. How we turned an autonomous loop into a professional engineering organization.