Code Review
A complete walkthrough of an automated code review topology using pipeline, fan-out, and human-gate patterns
This example walks through a production-ready code review topology that combines three patterns: pipeline for sequential phases, fan-out for parallel analysis, and human-gate for approval before reporting.
The Complete Topology
```
topology code-review : [pipeline, fan-out, human-gate] {
  meta {
    version: "1.0.0"
    description: "Automated code review"
  }

  orchestrator {
    model: opus
    handles: [intake, create-report]
  }

  agent analyzer {
    model: sonnet
    phase: 1
    tools: [Read, Grep, Glob]
    outputs: { risk-level: low | medium | high | critical }
  }

  agent security-scanner {
    model: sonnet
    phase: 1
    behavior: advisory
    outputs: { has-vulnerabilities: yes | no }
  }

  agent reviewer {
    model: opus
    phase: 2
    outputs: { verdict: approve | request-changes | reject }
  }

  agent reporter {
    model: haiku
    phase: 3
    tools: [Read, Write, Glob]
  }

  flow {
    intake -> [analyzer, security-scanner]
    analyzer -> reviewer
    security-scanner -> reviewer
    reviewer -> reporter [when reviewer.verdict == approve]
    reviewer -> reporter [when reviewer.verdict == request-changes]
    reviewer -> create-report [when reviewer.verdict == reject]
    reporter -> create-report
  }

  gates {
    gate human-approval {
      after: reviewer
      before: reporter
      run: "scripts/human-approve.sh"
      on-fail: halt
    }
  }
}
```

Walkthrough
Meta Block
```
meta {
  version: "1.0.0"
  description: "Automated code review"
}
```

The meta block provides version tracking and a human-readable description. The version follows semver and is useful for tracking topology changes over time.
Pattern Declaration
```
topology code-review : [pipeline, fan-out, human-gate] {
```

This topology uses three patterns:
- pipeline — agents run in sequential phases (1, 2, 3)
- fan-out — phase 1 agents run in parallel
- human-gate — a human must approve before the reporter runs
Orchestrator
```
orchestrator {
  model: opus
  handles: [intake, create-report]
}
```

The orchestrator manages the overall workflow. It handles two endpoints:
- `intake` — receives the initial code review request
- `create-report` — produces the final output after all agents have finished
Using opus for the orchestrator gives it the strongest reasoning for managing the workflow.
Phase 1: Parallel Analysis (Fan-Out)
```
agent analyzer {
  model: sonnet
  phase: 1
  tools: [Read, Grep, Glob]
  outputs: { risk-level: low | medium | high | critical }
}

agent security-scanner {
  model: sonnet
  phase: 1
  behavior: advisory
  outputs: { has-vulnerabilities: yes | no }
}
```

Both agents are `phase: 1`, meaning they run simultaneously. The flow block confirms this with bracket syntax:

```
intake -> [analyzer, security-scanner]
```

The `analyzer` reads the codebase with file tools and produces a `risk-level` assessment. The `security-scanner` runs as advisory — its findings inform the review but do not block it on their own.
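The runtime's scheduling is not shown in the source, but the fan-out/fan-in behavior can be sketched in Python with a thread pool. The agent functions here are hypothetical stand-ins; a real runtime would invoke the configured models with their tools:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-ins for the phase 1 agents.
def analyzer(request):
    return {"risk-level": "medium"}

def security_scanner(request):
    return {"has-vulnerabilities": "no"}

def run_phase_1(request):
    # Fan-out: both agents are submitted at once and run in parallel.
    # Fan-in: calling .result() on both futures blocks until both
    # finish, so phase 2 (the reviewer) sees the complete picture.
    with ThreadPoolExecutor(max_workers=2) as pool:
        a = pool.submit(analyzer, request)
        s = pool.submit(security_scanner, request)
        return {"analyzer": a.result(), "security-scanner": s.result()}

print(run_phase_1({"diff": "..."}))
```

The key property is that the reviewer never starts until both results are in, which is exactly what the fan-in edges in the flow block express.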
Phase 2: Review (Fan-In)
```
agent reviewer {
  model: opus
  phase: 2
  outputs: { verdict: approve | request-changes | reject }
}
```

The reviewer waits for both phase 1 agents to complete, then receives all of their outputs. It uses opus for deep reasoning about whether the code changes should be approved.
The reviewer produces one of three verdicts:
| Verdict | What Happens |
|---|---|
| `approve` | Proceeds to reporter (after human gate) |
| `request-changes` | Proceeds to reporter with change requests |
| `reject` | Skips reporter, goes directly to create-report |
Phase 3: Reporting
```
agent reporter {
  model: haiku
  phase: 3
  tools: [Read, Write, Glob]
}
```

The reporter uses haiku — a fast, cost-efficient model — because its job is to format findings into a report, not to reason deeply. It reads the outputs from upstream agents and writes the final review document.
Flow Logic
```
flow {
  intake -> [analyzer, security-scanner]
  analyzer -> reviewer
  security-scanner -> reviewer
  reviewer -> reporter [when reviewer.verdict == approve]
  reviewer -> reporter [when reviewer.verdict == request-changes]
  reviewer -> create-report [when reviewer.verdict == reject]
  reporter -> create-report
}
```

The flow breaks down into three stages:
- Fan-out: `intake` sends work to both `analyzer` and `security-scanner` in parallel
- Fan-in: Both agents feed into `reviewer`, which waits for both to complete
- Conditional routing: The reviewer's verdict determines whether work goes to the reporter or directly to the final report
When the verdict is reject, the reporter is skipped entirely — there is no point formatting a report for rejected code.
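As a rough sketch (not the runtime's actual implementation), the conditional routing reduces to a simple dispatch on the verdict. The step names come from the flow block; the function itself is illustrative:

```python
def route(verdict):
    """Return the downstream steps for a reviewer verdict, mirroring
    the flow block: approve and request-changes pass through the
    reporter, while reject skips straight to create-report."""
    if verdict in ("approve", "request-changes"):
        return ["reporter", "create-report"]
    if verdict == "reject":
        return ["create-report"]
    raise ValueError(f"unknown verdict: {verdict}")

print(route("reject"))  # ['create-report']
```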
Human Gate
```
gates {
  gate human-approval {
    after: reviewer
    before: reporter
    run: "scripts/human-approve.sh"
    on-fail: halt
  }
}
```

The gate sits between the reviewer and the reporter. After the reviewer produces a verdict of approve or request-changes, a human must confirm before the reporter generates the final document.
If the human denies approval (on-fail: halt), the entire topology stops. This prevents automated reports from being generated without human oversight.
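The gate's exact contract is only implied by the source. A plausible reading, sketched here in Python, is that the runtime executes the configured script and treats a non-zero exit status as denial. Both the convention and the `GateHalt` exception name are assumptions for illustration:

```python
import subprocess

class GateHalt(Exception):
    """Raised when a human gate denies approval and on-fail is halt."""

def run_gate(script, on_fail="halt"):
    # Assumed convention: exit code 0 means the human approved,
    # any other exit code means denial.
    result = subprocess.run(script, shell=True)
    if result.returncode != 0:
        if on_fail == "halt":
            raise GateHalt(f"gate script {script!r} denied approval")
        return False
    return True

print(run_gate("exit 0"))  # True: topology proceeds to the reporter
```

Under this reading, `on-fail: halt` maps to raising an exception that stops the whole topology, while a softer policy would return control to the orchestrator instead.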
Flow Diagram
```
               +--> analyzer --------+
               |                     |
intake --------+                     +--> reviewer --[gate]--> reporter --> create-report
               |                     |        |
               +--> security-scanner-+        +-----> create-report
                                                      (reject)
```

Adapting This Example
- Add more parallel agents — include a `style-checker` or `test-runner` at phase 1
- Add revision loops — route `request-changes` back to the original developer
- Change the gate — use `on-fail: retry` instead of `halt` to let the reviewer try again
- Swap models — use `haiku` for the scanners if cost is a concern, or `opus` for the security scanner if security is critical