
Documentation Index

Fetch the complete documentation index at: https://www.aidonow.com/llms.txt

Use this file to discover all available pages before exploring further.

Articles in this section examine AI tools as engineering infrastructure: what they reliably produce, where they fail, and the governance mechanisms required to make AI output safe for production. Includes both technical implementation guides and evidence-based assessments.

Practical Guides

Building a Billing Dashboard with AI

The prompting strategy, review protocol, and verification checkpoints that produced a production-grade billing dashboard, and the failure modes encountered when those checkpoints were skipped.

Claude Code Hooks: Hard Enforcement

Why system prompt instructions degrade under task complexity and how PreToolUse hook scripts provide session-boundary-independent enforcement of architectural constraints.
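For orientation only, a minimal sketch of the mechanism the article describes, not its actual implementation: a PreToolUse hook is a small script that inspects the pending tool call and rejects it before it runs. The sketch assumes the hook receives the call as JSON on stdin with tool_name and tool_input fields and that a non-zero exit blocks the call; the protected-path rule is hypothetical.

```python
#!/usr/bin/env python3
# Hypothetical PreToolUse hook: refuse edits to generated code paths.
# Assumptions: the pending tool call arrives as JSON on stdin with
# "tool_name" and "tool_input" fields, and a non-zero exit blocks it.
import json
import sys

FORBIDDEN_PREFIX = "src/generated/"  # hypothetical architectural constraint


def main() -> int:
    event = json.load(sys.stdin)
    tool = event.get("tool_name", "")
    target = event.get("tool_input", {}).get("file_path", "")

    if tool in ("Edit", "Write") and target.startswith(FORBIDDEN_PREFIX):
        # Explain the refusal on stderr so the agent can adjust course.
        print(
            f"Blocked: {target} is generated code; edit the source templates instead.",
            file=sys.stderr,
        )
        return 2  # non-zero exit rejects the tool call

    return 0  # allow everything else


if __name__ == "__main__":
    sys.exit(main())
```

Because a check like this runs on every tool call, it holds regardless of how much context has accumulated since the system prompt was written.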

Prompt Library

A structured taxonomy of prompts organized by task type, with analysis of which prompt structures produce reliable output and which introduce variance.

AI Tool Comparison

A comparative assessment of AI coding tools across dimensions of accuracy, context retention, instruction-following, and failure recovery.

AI Limitations and Boundaries

The task categories where current AI coding assistants consistently underperform, and the detection signals that indicate when human intervention is required.

MCP Tool Routing: Nine Servers as an Agent Operating System

How routing 91 agent tools across nine domain-scoped MCP servers, with a capability-first pre-implementation gate, prevents reinvention, reduces context bloat, and enforces organizational standards at the tool layer.
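As a rough illustration of what "domain-scoped" means here, and not the article's actual setup: each server is a separate process exposing only the tools for one domain. The sketch below uses the FastMCP helper from the Python MCP SDK; the server name and tools are hypothetical placeholders.

```python
# Hypothetical domain-scoped MCP server (illustrative only).
# Uses the FastMCP helper from the official Python MCP SDK.
from mcp.server.fastmcp import FastMCP

# One server per domain keeps each agent's visible tool list small and scoped.
mcp = FastMCP("billing")  # hypothetical domain name


@mcp.tool()
def list_invoices(customer_id: str) -> list[dict]:
    """Return invoice summaries for a customer (stubbed for the sketch)."""
    return [{"customer_id": customer_id, "invoice_id": "INV-001", "status": "open"}]


@mcp.tool()
def get_invoice(invoice_id: str) -> dict:
    """Return a single invoice record (stubbed for the sketch)."""
    return {"invoice_id": invoice_id, "status": "open", "total_cents": 12_500}


if __name__ == "__main__":
    mcp.run()  # serves the tools over stdio by default
```

Routing then becomes a configuration decision: an agent is attached only to the servers for the domains it works in, so unrelated tools never enter its context.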

Whisper MCP Federation: Per-Actor Token Scoping in Multi-Agent Systems

How scoping each AI agent's activity-feed token to a single actor identity makes the audit trail tamper-evident, with graceful downgrade, a four-layer token storage model, and a Redmine hook bridge that fires lifecycle events without modifying business logic.

Insights & Debate

AI Code Review Blind Spots

The systematic gaps in AI-generated code review: the security, concurrency, and cross-service integration issues that current models consistently fail to surface.

When AI Excels

The task categories where AI assistance produces the highest return: pattern replication, test generation, documentation, and refactoring of well-specified code.

When AI Fails: Cascading Errors

How AI hallucinations propagate through a codebase when review gates are absent: the error cascade pattern and the checkpoints that break it.

When AI Was Right

Case analysis of decisions where AI-generated recommendations proved correct against human skepticism, and what distinguishes those cases from false positives.

The AI Productivity Myth

A critical examination of AI productivity claims: where the gains are real, where they are measurement artifacts, and what the difference means for team planning.

Labeling AI Code

The organizational case for distinguishing AI-generated from human-authored code in version control, and the practical annotation conventions that make this tractable.

What 11 Weeks Actually Changed

A longitudinal assessment of AI-assisted development after 11 weeks of production use: the capability gains, the persistent limitations, and the workflow adjustments that proved durable.