
Executive Summary

In autonomous software development organizations, task assignment has long been treated as a routing problem: a task arrives, a capable agent exists, the task is dispatched. This model — queue-based, role-oriented, stateless — functions adequately in human-supervised contexts where a developer can self-correct when the environment is not ready. It fails silently in fully autonomous execution, where no human is present to observe that an agent received work without a valid execution context. This paper examines an architectural pattern that addresses this failure mode. By binding three metadata elements to every task — the agent identity, the workstation identifier, and a routing policy that maps task type to both — the autonomous engineering platform acquires properties that queue-based dispatch cannot provide: auditability of who performed what work and where; reproducibility of any given task execution; and controlled parallelism across multiple concurrent agents whose execution contexts do not overlap. The components described herein — a workstation creation gate, an executor identity map, and task-level metadata fields wired to the project management layer — represent a meaningful step in the maturation of autonomous development infrastructure. They are not novel in isolation. Their value lies in their composition.

Key Findings

  1. Silent failure is the dominant risk in queue-based autonomous dispatch. When an agent receives a task without a valid execution environment, the failure mode is not an error — it is silence. The task moves to an in-progress state, the agent begins execution against an uninitialized context, and the output is wrong in ways that are difficult to attribute after the fact.
  2. Workstation identity is not a virtual machine concept — it is a logical binding. A workstation, in the sense described here, is a named, versioned execution context assigned to a specific agent for the duration of a specific task. It does not imply infrastructure provisioning. It implies that an agent’s actions are traceable to a bounded, identifiable scope.
  3. Identity-based routing is categorically different from role-based routing. Role-based routing asks: does a capable agent exist? Identity-based routing asks: is the specific named agent, operating in the specific named workstation, ready to receive this specific task type? The latter enables auditability, reproducibility, and blast radius control that the former cannot provide.
  4. Wiring workstation and agent metadata to the project management layer closes the observability gap. When task metadata is visible only inside the orchestration system, the project management layer becomes a lagging indicator. Wiring workstation and agent identity to native task fields makes the routing decision durable, queryable, and present in the task history — not reconstructible from logs.
  5. The verification milestone is the practical proof. A sprint milestone in which all repositories pass their verification checks, local infrastructure is deployed, and a tracked task closes cleanly is not merely a status update. It is evidence that the routing infrastructure directed the right agents to the right work in the right contexts — and that the results are attributable.

1. The Problem with Queue-Based Dispatch

1.1 The Standard Model

Most task dispatch systems in software engineering follow a producer-consumer pattern. A task enters a queue. A worker — human or agent — dequeues it when capacity is available. The worker executes. The result is returned. This model has well-understood advantages: it is simple to implement, scales horizontally by adding workers, and degrades gracefully under load. It is the correct architecture for a large class of problems. It is the wrong architecture for autonomous agent dispatch when execution environment validity cannot be assumed.

1.2 Where Queue-Based Dispatch Fails in Autonomous Contexts

In a human engineering organization, a developer who receives a task without a working local environment will say so. The feedback loop is immediate. The assignment is delayed or redirected. The system self-corrects via human judgment. In an autonomous development organization, that feedback loop does not exist in the same form. An agent assigned a task without a valid workstation context will attempt execution. Depending on the nature of the failure, the result may be:
  • A task that completes with incorrect output (the agent operated against stale or absent environment state)
  • A task that enters an infinite retry loop (the agent repeatedly attempts to initialize an environment that is not ready)
  • A task that silently closes (the agent reports completion because no gate prevented it from doing so)
None of these failure modes produce a clear error. All of them produce attribution problems: when the output is reviewed, it is unclear which agent performed the work, in what state, and against which version of the execution environment.

1.3 The Core Requirement

What autonomous dispatch requires that queue-based dispatch does not provide is a gate: a validation step, prior to task assignment, that confirms the execution environment exists and is in a ready state. And it requires a record: a durable binding, attached to the task itself, of which agent performed the work and in which execution context. These requirements motivate the architecture described in the following sections.

2. The Workstation Concept

2.1 Definition

A workstation, in the context of this architecture, is a named, isolated execution context assigned to a specific agent for a specific task or task category. It is not a physical machine. It is not a virtual machine in the infrastructure sense. It is a logical identifier that carries three properties:
  1. Scope binding. The workstation associates an agent identity with a bounded working context. Actions taken by the agent are attributable to the workstation, not merely to the agent in the abstract.
  2. Audit reference. The workstation name provides a stable reference for post-hoc review. When a question arises — which agent modified this file, and in what context — the workstation identifier answers the second half of that question.
  3. Validation target. The workstation is the object against which the creation gate validates. Before a task is dispatched, the gate confirms that the named workstation exists and is in a ready state. The workstation is thus not merely a label; it is a precondition.

2.2 Naming Conventions

Workstation identifiers follow a convention that encodes both the agent scope and the task domain. Representative examples include ws-backend-01, ws-infra-prod, and ws-frontend-staging. The naming convention is not arbitrary: it must be parseable by the routing policy (Section 4) to determine which task types are valid for a given workstation.
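To make "parseable by the routing policy" concrete, here is a minimal sketch that assumes the identifiers follow a ws-<domain>-<suffix> shape inferred from the examples above; the pattern and the WorkstationName type are illustrative assumptions, not a prescribed convention:

import re
from typing import NamedTuple

class WorkstationName(NamedTuple):
    domain: str   # e.g. "backend", "infra", "frontend"
    suffix: str   # e.g. "01", "prod", "staging"

# Assumed pattern: ws-<domain>-<suffix>; adjust to whatever convention you adopt.
_WS_PATTERN = re.compile(r"^ws-(?P<domain>[a-z]+)-(?P<suffix>[a-z0-9]+)$")

def parse_workstation(name: str) -> WorkstationName:
    match = _WS_PATTERN.match(name)
    if match is None:
        raise ValueError(f"Workstation name does not follow convention: {name!r}")
    return WorkstationName(match.group("domain"), match.group("suffix"))

# parse_workstation("ws-backend-01") -> WorkstationName(domain="backend", suffix="01")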

2.3 The Workstation Lifecycle

A workstation is created before it is used. This is enforced by the creation gate (Section 3). A workstation may be reused across tasks of the same type within a sprint cycle, or it may be scoped to a single task and retired afterward. The lifecycle decision is made at the policy level, not the task level. What the architecture guarantees is that a task cannot be dispatched to a workstation that does not exist. The creation gate is the enforcement mechanism for this guarantee.
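A minimal in-memory sketch of the registry that backs this guarantee, assuming a simple create / exists / retire interface; a production registry would need durable, shared storage, and the state model shown here is an assumption:

from enum import Enum

class WorkstationState(Enum):
    READY = "ready"
    RETIRED = "retired"

class WorkstationRegistry:
    """Tracks which workstations exist and whether they are ready."""

    def __init__(self) -> None:
        self._workstations: dict[str, WorkstationState] = {}

    def create(self, name: str) -> None:
        # Creation is a first-class, explicit operation (see Section 8, item 1).
        self._workstations[name] = WorkstationState.READY

    def exists(self, name: str) -> bool:
        # The creation gate calls this before committing an assignment.
        return self._workstations.get(name) is WorkstationState.READY

    def retire(self, name: str) -> None:
        # Single-task workstations are retired here once the task closes.
        self._workstations[name] = WorkstationState.RETIRED

# Module-level instance referenced by the gate pseudocode in Section 3.3.
workstation_registry = WorkstationRegistry()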

3. The Workstation Creation Gate

3.1 Purpose

The workstation creation gate is a validation layer inserted between task creation and task assignment. Its function is singular: confirm that the workstation named in the task metadata exists and is in a ready state before the assignment is committed. If the workstation does not exist, the gate halts the assignment process and raises an exception that is visible at the orchestration layer. The task remains unassigned. The orchestration layer is responsible for either creating the workstation or escalating to human review.

3.2 What the Gate Prevents

The gate directly addresses the silent failure mode described in Section 1.2. By making workstation existence a hard precondition for assignment — not a soft norm, not a convention, not a documentation requirement — the architecture eliminates the class of failures where an agent begins execution against an uninitialized context. This is an instance of a broader principle documented in Episode 5 of the Autonomous Dev Org series: the autonomous loop requires hard gates the same way a compiler requires type checks. Soft norms do not survive autonomous execution at scale.

3.3 Gate Implementation Pattern

The gate operates as follows:
# Illustrative pseudocode. Production implementations require additional
# error context, distributed locking, and idempotency handling.
from datetime import datetime, timezone

def assign_task(task: Task, executor_map: ExecutorIdentityMap) -> Assignment:
    workstation = executor_map.resolve_workstation(task.task_type)
    agent_name = executor_map.resolve_agent(task.task_type)

    # Creation gate: workstation must exist before assignment commits.
    # This check is synchronous and non-retrying by design.
    if not workstation_registry.exists(workstation):
        raise WorkstationNotReadyError(
            task_id=task.id,
            agent_name=agent_name,
            workstation=workstation,
            reason="Workstation not registered in the execution registry.",
        )

    return Assignment(
        task_id=task.id,
        agent_name=agent_name,
        workstation=workstation,
        committed_at=datetime.now(timezone.utc),
    )
The gate is not a retry loop. It does not wait for the workstation to become ready. It raises an error that the orchestration layer must handle explicitly. This design forces the failure to be visible rather than absorbed.
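One possible shape for that orchestration-layer handling, as a hedged sketch. The auto-provisioning hook, the escalation hook, and the single-retry policy are assumptions for illustration, not part of the documented pattern:

from typing import Optional

def dispatch(task: Task, executor_map: ExecutorIdentityMap) -> Optional[Assignment]:
    try:
        return assign_task(task, executor_map)
    except WorkstationNotReadyError as err:
        # The failure surfaces here instead of being absorbed inside the gate.
        if policy_allows_auto_provisioning(task.task_type):  # hypothetical policy hook
            workstation_registry.create(executor_map.resolve_workstation(task.task_type))
            return assign_task(task, executor_map)  # one deliberate retry, not a loop
        escalate_to_human_review(task, err)  # hypothetical escalation hook
        return None  # the task remains unassigned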

3.4 Relationship to Blast Radius Control

The creation gate also contributes to blast radius control. Because each agent operates within a named, validated workstation context, a failure in one agent’s workstation does not propagate to others. The workstation boundary is the isolation boundary. An agent whose workstation enters an error state can be quarantined without affecting the task queues of agents operating in separate workstation contexts. This property is explored further in Episode 3 of the Autonomous Dev Org series.

4. The Executor Identity Map

4.1 From Roles to Identities

The executor identity map is the routing policy that determines, for a given task type, which agent identity and which workstation receive the assignment. It is the component that distinguishes identity-based dispatch from role-based dispatch. The distinction is not semantic. It has operational consequences.
| Dimension | Role-Based Dispatch | Identity-Based Dispatch |
| --- | --- | --- |
| Assignment unit | Any available agent in the role | Specific named agent in specific workstation |
| Auditability | "A backend agent handled this" | "@agent-builder-01 on ws-backend-01 handled this" |
| Reproducibility | Re-run goes to any available agent | Re-run goes to same agent identity (same context) |
| Policy traceability | Implicit in queue configuration | Explicit in version-controlled identity map |
| Blast radius | Failure affects all agents in role | Failure isolated to named workstation |
| Parallelism control | Workers compete for tasks | Workstation non-overlap ensures disjoint contexts |

4.2 Structure of the Identity Map

The executor identity map is a version-controlled artifact. It maps task types to agent-workstation pairs:
# executor-identity-map.yaml
routing:
  - task_type: backend_feature
    agent_name: "@agent-builder-01"
    workstation: ws-backend-01

  - task_type: infrastructure_change
    agent_name: "@agent-infra-01"
    workstation: ws-infra-prod

  - task_type: frontend_component
    agent_name: "@agent-builder-02"
    workstation: ws-frontend-staging

  - task_type: verification
    agent_name: "@agent-verifier-01"
    workstation: ws-verification-01
Because the identity map is version-controlled, changes to the routing policy are traceable. A routing change is a commit. The commit history answers the question: when did we start sending infrastructure tasks to this agent, and why?
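The following is a minimal sketch of the ExecutorIdentityMap that the gate pseudocode in Section 3.3 assumed, loading the YAML layout shown above with PyYAML; the class shape and the RoutingPolicyError name are illustrative assumptions:

import yaml  # PyYAML, assumed available

class RoutingPolicyError(Exception):
    """Raised when a task type has no entry in the identity map."""

class ExecutorIdentityMap:
    def __init__(self, routing: dict[str, dict[str, str]]) -> None:
        self._routing = routing

    @classmethod
    def from_yaml(cls, path: str) -> "ExecutorIdentityMap":
        with open(path) as handle:
            data = yaml.safe_load(handle)
        # Index the version-controlled routing entries by task type.
        routing = {entry["task_type"]: entry for entry in data["routing"]}
        return cls(routing)

    def _entry(self, task_type: str) -> dict[str, str]:
        if task_type not in self._routing:
            # Enforcement, not advice: unlisted task types cannot be routed (Section 4.3).
            raise RoutingPolicyError(f"No routing entry for task type {task_type!r}")
        return self._routing[task_type]

    def resolve_agent(self, task_type: str) -> str:
        return self._entry(task_type)["agent_name"]

    def resolve_workstation(self, task_type: str) -> str:
        return self._entry(task_type)["workstation"]

# executor_map = ExecutorIdentityMap.from_yaml("executor-identity-map.yaml")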

4.3 Policy Enforcement at the Routing Layer

The executor identity map is not advisory. It is enforced at the routing layer, prior to the creation gate. An attempt to assign a task of type backend_feature to an agent not listed for that type in the identity map is rejected. This prevents ad-hoc routing decisions that would undermine the auditability and reproducibility properties the map is designed to provide. This enforcement pattern parallels the ADR-as-architecture principle described in When Your Documentation Becomes Your Architect: routing policy, like architectural constraint, is most valuable when it is not merely documented but structurally enforced.

5. Task Metadata: Wiring to the Project Management Layer

5.1 The Observability Gap in Orchestration-Only Storage

In many autonomous workflow implementations, routing metadata — which agent handled a task, in which environment — is stored only within the orchestration system. The project management layer, which serves as the authoritative record of work and the primary interface for human review, receives only the task status. The routing context is lost at the boundary. This creates an observability gap. Human reviewers examining the project management layer see that a task completed. They do not see which agent completed it, in which workstation, or whether the routing was consistent with the identity map policy. Anomalies in task execution are difficult to detect, and attribution after the fact requires reconstructing context from orchestration logs.

5.2 Custom Fields as Durable Routing Records

The pattern described here closes this gap by wiring workstation and agent identity directly to native task fields in the project management system. Using Redmine — an open-source project management tool — as the reference implementation, the workstation and agent name are stored as custom fields on each task at the moment of assignment. The effect is that the task record in Redmine becomes a complete routing ledger entry:
  • Custom Field: Workstation — the named execution context in which the task was performed
  • Custom Field: Agent Name — the specific agent identity that performed the task
These fields are populated at assignment time by the orchestration layer, not after the fact. They are visible in the standard Redmine task view. They are queryable via Redmine’s reporting interface. And they are preserved in the task history — not reconstructible from external logs, but present in the canonical task record.
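As one hedged sketch of what that assignment-time population might look like using the python-redmine client; the server URL, API key, and the custom field IDs (12 and 13) are placeholders that depend entirely on how the Redmine instance is configured:

from redminelib import Redmine  # python-redmine client, assumed as the integration library

# Placeholder URL and key; the custom field IDs are hypothetical and must match
# the fields defined in the target Redmine instance.
redmine = Redmine("https://redmine.example.internal", key="REDMINE_API_KEY")

WORKSTATION_FIELD_ID = 12
AGENT_NAME_FIELD_ID = 13

def record_assignment(assignment: Assignment) -> None:
    # Populate the routing metadata as part of the assignment commit,
    # so the task record itself carries the routing ledger entry.
    redmine.issue.update(
        assignment.task_id,
        custom_fields=[
            {"id": WORKSTATION_FIELD_ID, "value": assignment.workstation},
            {"id": AGENT_NAME_FIELD_ID, "value": assignment.agent_name},
        ],
    )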

5.3 The Value of Metadata Durability

Durable task metadata has three practical consequences:
  • Sprint reporting. At the end of a sprint, it is possible to ask: which agent handled the most tasks? Which workstation had the highest task volume? Were any task types routed to agents outside the identity map policy? These questions are answerable from native Redmine queries, without requiring access to orchestration logs (a query sketch follows this list).
  • Incident attribution. When a task produces incorrect output, the first question is: who ran this, and where? With workstation and agent name in the task record, the answer is immediate. The second question — was the routing consistent with policy? — is answerable by comparing the task fields against the identity map.
  • Audit compliance. In organizations where software changes must be attributable to specific actors for compliance purposes, task-level metadata provides the evidentiary record. The agent identity is not inferred; it is recorded.
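A short sketch of the sprint-reporting question "which agent handled the most tasks", reusing the hypothetical client and field IDs from the Section 5.2 sketch; the exact filtering depends on how the Redmine project and its custom fields are configured:

from collections import Counter

def tasks_per_agent(project_id: str) -> Counter:
    # Counts closed issues by the hypothetical "Agent Name" custom field.
    closed = redmine.issue.filter(project_id=project_id, status_id="closed")
    return Counter(
        str(cf.value)
        for issue in closed
        for cf in issue.custom_fields
        if cf.id == AGENT_NAME_FIELD_ID
    )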

6. Controlled Parallelism and the Verification Milestone

6.1 Non-Overlapping Workstation Contexts as a Parallelism Primitive

One of the architectural implications of workstation-based dispatch is that parallelism becomes controllable by design. When multiple agents operate concurrently, each within a named workstation context that does not overlap with others, the conditions for safe parallel execution are structurally enforced rather than dependent on agent-level discipline. The identity map defines the mapping. The creation gate validates existence. The non-overlap property follows from the naming convention: ws-backend-01 and ws-infra-prod are disjoint by definition. An agent operating in ws-backend-01 cannot affect the state of ws-infra-prod unless the system explicitly permits a cross-workstation operation — which would itself be a policy decision, traceable in the identity map.

This is the same principle that underlies capsule isolation in multi-tenant systems: the isolation boundary is the primary mechanism for enabling safe concurrency. The workstation is the isolation boundary for autonomous agents.

6.2 The Verification Milestone as Empirical Validation

A sprint milestone in which all repositories pass their verification checks and local infrastructure is fully deployed represents empirical validation of the routing architecture. The milestone is not a status report. It is a verification that:
  • Tasks were dispatched to the correct agents via the identity map
  • Each agent operated within a valid workstation context (the creation gate was not bypassed)
  • The results across all repositories were consistent (no cross-workstation interference)
  • The task management layer reflects the complete routing history (custom fields populated for all tasks)
When a verification milestone closes cleanly — all repositories green, LocalStack deployed and verified, all tasks closed with complete metadata — it is evidence that the routing infrastructure functioned as designed. When it does not close cleanly, the task metadata and workstation records provide the diagnostic data needed to identify where the breakdown occurred.
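One way the last condition could be checked mechanically, again reusing the hypothetical Redmine client and field IDs from Section 5.2; targeting the milestone via fixed_version_id is an assumption about how sprints are modeled in the instance:

def milestone_metadata_complete(version_id: int) -> bool:
    # Verifies one of the milestone conditions listed above: every task in the
    # sprint milestone closed with both routing fields populated.
    issues = redmine.issue.filter(fixed_version_id=version_id, status_id="closed")
    required = {WORKSTATION_FIELD_ID, AGENT_NAME_FIELD_ID}
    for issue in issues:
        populated = {cf.id for cf in issue.custom_fields if getattr(cf, "value", None)}
        if not required <= populated:
            return False
    return True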

6.3 LocalStack as the Verification Environment

The use of LocalStack — an open-source AWS service emulator — as the local infrastructure deployment target in the verification milestone is architecturally significant. LocalStack provides a controlled, reproducible execution environment for infrastructure-dependent code. Its integration into the verification milestone means that the verification check is not merely against static analysis or unit tests but against a running facsimile of the production infrastructure. When a workstation is initialized for infrastructure-related tasks, the creation gate that validates workstation readiness implicitly validates that the LocalStack environment associated with that workstation is deployed. The gate is thus not merely checking a registry entry; it is confirming that the execution environment has the infrastructure dependencies it requires.
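A hedged sketch of what the LocalStack portion of that readiness check might look like; current LocalStack releases expose a health endpoint at /_localstack/health (older releases used /health), and the required services listed here are purely illustrative:

import requests  # assumed HTTP client

def localstack_ready(endpoint: str = "http://localhost:4566",
                     required_services: tuple[str, ...] = ("s3", "sqs")) -> bool:
    # Probes the LocalStack health endpoint and checks that the services this
    # workstation depends on are reported as available or running.
    try:
        response = requests.get(f"{endpoint}/_localstack/health", timeout=5)
        response.raise_for_status()
    except requests.RequestException:
        return False
    services = response.json().get("services", {})
    return all(services.get(name) in ("available", "running") for name in required_services)

An infrastructure workstation's readiness check could then combine the registry lookup from Section 2.3 with this probe, making "exists" mean "exists and has its infrastructure dependencies deployed".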

7. Architectural Implications

7.1 The Foundation for Reproducible Execution

Reproducibility in autonomous development is not merely a quality attribute — it is a prerequisite for trust. If the same task, re-run with the same inputs, produces a different output because it was dispatched to a different agent in a different environment, the system is not reproducible. It is non-deterministic in a way that makes debugging, auditing, and iterative improvement unreliable. The workstation-plus-identity architecture provides the foundation for reproducibility: the same task type is always dispatched to the same agent identity, operating in the same named workstation context. Re-running a task is equivalent to re-creating its workstation context and re-dispatching to the same identity. The conditions for reproducibility are structural, not dependent on operator discipline.
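In terms of the earlier sketches, that equivalence can be stated compactly; the create-as-refresh semantics and the reuse of the Section 2.3 registry are assumptions:

def rerun(task: Task, executor_map: ExecutorIdentityMap) -> Assignment:
    # Re-running a task is re-creating its named workstation context and
    # re-dispatching to the same agent identity the map already names.
    workstation = executor_map.resolve_workstation(task.task_type)
    workstation_registry.create(workstation)   # recreate (or refresh) the named context
    return assign_task(task, executor_map)     # same identity, same workstation, same gate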

7.2 The Multi-Agent Workflow as Infrastructure

The multi-agent workflow pattern — where distinct agents handle planning, implementation, and verification — gains additional guarantees from workstation-based dispatch. When each agent role corresponds to a distinct workstation context, the independence guarantee of the verifier (a fresh session with no implementation bias) is not merely a session management discipline. It is enforced by the workstation boundary: the verifier’s workstation has no shared state with the builder’s workstation. This structural enforcement of agent independence is one of the less-visible benefits of the workstation architecture. It does not replace the discipline of managing agent sessions correctly, but it provides a backstop that makes the independence guarantee more robust.

7.3 Evolution Path

The architecture described here is not a terminal state. The routing policy currently lives in a static YAML file. A natural evolution is a dynamic routing policy that adjusts assignments based on agent availability, workstation health signals, and task priority. The identity map becomes a policy engine; the creation gate becomes a health-check integration. The governance implications of this evolution — version-controlled routing policy as an organizational artifact, traceable routing decisions as audit evidence — are explored in Episode 11: AI Governance — The Org That Governs Itself. What the current architecture establishes is the interface: task types as routing keys, agent identities as routing targets, workstation contexts as execution validators. That interface is stable regardless of whether the policy evaluation is static or dynamic. Building the dynamic version on top of the static interface is incremental; building it from a queue-based foundation requires a more fundamental redesign.
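One way to express that stable interface, offered as an illustrative sketch rather than a description of the existing system, is a small protocol that both the static map and a future policy engine could satisfy:

from typing import Protocol

class RoutingPolicy(Protocol):
    # The stable interface: task types in, identity and workstation out.
    def resolve_agent(self, task_type: str) -> str: ...
    def resolve_workstation(self, task_type: str) -> str: ...

# The static, YAML-backed ExecutorIdentityMap from Section 4.2 already satisfies
# this protocol. A future engine that weighs agent health, workstation utilization,
# and task priority could satisfy the same protocol without changing assign_task.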

8. Recommendations

  1. Treat workstation creation as a first-class operation, not a side effect. Create your workstation intentionally, with a registered name and a defined scope, before any task assignment that depends on it. Ad-hoc workstation creation at assignment time undermines the gate’s purpose.
  2. Version-control your executor identity map from the outset. The routing policy is an architectural artifact. Its history should be as traceable as the codebase it governs. Store the identity map in a configuration file committed to your repository as the minimum viable practice.
  3. Wire workstation and agent identity to your project management layer at assignment time, not retrospectively. Post-hoc population of task metadata is unreliable and creates attribution gaps. Configure your orchestration layer to populate these fields as part of the assignment commit.
  4. Design your workstation naming conventions before the first deployment. Naming conventions that encode agent scope and task domain are parseable by the routing policy and readable by humans. If you do not enforce conventions from the beginning, they will accumulate exceptions and lose their structural value.
  5. Use the creation gate as a diagnostic tool, not merely a guard. When the gate fires, ensure the exception carries enough context to identify whether the failure is a missing workstation, a misconfigured identity map, or an orchestration sequencing error. The gate is the earliest point at which routing failures surface; make its error messages diagnostic.
  6. Treat the verification milestone as infrastructure, not ceremony. A sprint milestone that closes cleanly across all repositories and infrastructure targets is evidence that your routing architecture is functioning. Build the milestone into your development cycle as a structural checkpoint, not an optional retrospective.

9. Forward-Looking Statement

The workstation-plus-identity pattern described in this paper represents the current maturation point of autonomous task dispatch in the organization’s development infrastructure. It addresses the most critical failure mode of naive queue-based dispatch — the silent failure of an agent operating without a valid execution context — and establishes the metadata infrastructure required for auditability, reproducibility, and controlled parallelism.

The trajectory points toward a more adaptive system. As the number of agents grows, as task types diversify, and as sprint cycles compress, the static identity map will give way to a policy engine that evaluates routing decisions against real-time signals: agent health, workstation utilization, task priority, and dependency graphs. The creation gate will integrate with infrastructure health checks, making workstation readiness a continuous property rather than a binary pre-check. The metadata wired to the project management layer will feed analytics pipelines that identify routing anomalies before they become attribution problems.

What will not change is the core architectural principle: task assignment in an autonomous development organization is not queue-based routing to a role. It is context-validated dispatch to an identity. The workstation is the execution context. The identity map is the policy. The gate is the enforcer. The task metadata is the record. The infrastructure described here is the foundation. What is built on it is a function of how confidently the autonomous development cycle can be extended, accelerated, and trusted.
All content represents personal learning from personal projects. Code examples are sanitized and generalized. No proprietary information is shared. Opinions are my own and do not reflect my employer’s views.