# Building Topologies
Walk through building a topology interactively using the AgenTopology skill
Build mode is the primary experience of the AgenTopology skill. You describe what you need, and the skill generates a complete, validated `.at` file.
## The build flow
- You describe your task in plain language
- The skill asks clarifying questions about agents, tools, and routing
- It picks the best pattern for your use case
- It generates a `.at` file
- It runs validation (82 rules)
- It offers to scaffold and visualize
## Example conversation
Here is a typical session using the skill in build mode.
You:
I need a data processing pipeline. It should fetch data from an API, transform it, validate the output, and store results. If validation fails, it should retry the transform step.
Skill:
I'll build a data processing pipeline for you. Based on your description, I'm using the pipeline pattern with a conditional retry loop.
Here's what I have so far:
- fetcher -- Calls the API and writes raw data
- transformer -- Cleans and reshapes the data
- validator -- Checks output quality, produces a pass/fail verdict
- storer -- Writes final results
A few questions:
- What model should the agents use? (sonnet is the default)
- Does the fetcher need any specific MCP tools like WebFetch?
- How many retry attempts should the validator allow?
You:
Use sonnet for all agents. The fetcher needs WebFetch. Allow 3 retries.
Skill:
Generated `data-pipeline.at`:
```
topology data-pipeline : [pipeline] {
  meta {
    version: "1.0.0"
    description: "Fetch, transform, validate, and store data"
  }

  agent fetcher {
    role: fetcher
    model: sonnet
    phase: 1
    tools: [WebFetch, Write]
    writes: ["workspace/raw-data.json"]
  }

  agent transformer {
    role: transformer
    model: sonnet
    phase: 2
    tools: [Read, Write]
    reads: ["workspace/raw-data.json"]
    writes: ["workspace/transformed.json"]
  }

  agent validator {
    role: validator
    model: sonnet
    phase: 3
    tools: [Read]
    reads: ["workspace/transformed.json"]
    outputs: { verdict: pass | fail }
  }

  agent storer {
    role: storer
    model: sonnet
    phase: 4
    tools: [Read, Write]
    reads: ["workspace/transformed.json"]
    writes: ["workspace/results.json"]
  }

  flow {
    intake -> fetcher
    fetcher -> transformer
    transformer -> validator
    validator -> transformer [when validator.verdict == fail, max 3]
    validator -> storer [when validator.verdict == pass]
    storer -> done
  }
}
```

Validation passed. 4 agents, 6 flow edges, 0 errors.
Would you like me to:
- Scaffold this for a platform (Claude Code, Codex, etc.)?
- Generate an interactive visualization?
## What the skill checks

After generating the `.at` file, the skill runs the full 82-rule validator. Common issues it catches:
- Agents referenced in flow but not defined
- Flow edges pointing to nonexistent agents
- Missing `phase` values in pipeline patterns
- Conditional edges referencing undefined outputs
- Unreachable agents with no incoming edges
- Loops without `max` guards
If validation fails, the skill fixes the issues automatically and re-validates.
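To make a couple of these rules concrete, here is a minimal sketch of how two checks might be implemented: flagging flow edges that reference undefined agents, and flagging back-edges (loops) that lack a `max` guard. The parsed edge representation and function names are assumptions for illustration, not the skill's actual internals.

```python
# Illustrative validator sketch. Edges are (source, target, condition,
# max_retries) tuples; this data shape is an assumption, not the
# skill's real parse output.

def check_edges_defined(agents, edges):
    """Flag edges whose source or target is not a defined agent."""
    known = set(agents) | {"intake", "done"}  # built-in endpoints
    errors = []
    for src, dst, _cond, _max in edges:
        for name in (src, dst):
            if name not in known:
                errors.append(f"edge {src} -> {dst}: '{name}' is not defined")
    return errors

def check_loop_guards(edges):
    """Flag back-edges (loops) that have no max retry guard."""
    first_seen = {}  # agent -> first position it appears as a source
    for i, (src, *_rest) in enumerate(edges):
        first_seen.setdefault(src, i)
    return [
        f"loop {src} -> {dst} has no max guard"
        for src, dst, _cond, max_retries in edges
        # A back-edge targets an agent that appeared earlier in the flow.
        if dst in first_seen
        and first_seen[dst] < first_seen.get(src, len(edges))
        and max_retries is None
    ]

# The flow from the generated data-pipeline.at above:
agents = ["fetcher", "transformer", "validator", "storer"]
edges = [
    ("intake", "fetcher", None, None),
    ("fetcher", "transformer", None, None),
    ("transformer", "validator", None, None),
    ("validator", "transformer", "validator.verdict == fail", 3),
    ("validator", "storer", "validator.verdict == pass", None),
    ("storer", "done", None, None),
]

print(check_edges_defined(agents, edges))  # [] -- all agents defined
print(check_loop_guards(edges))            # [] -- the retry loop has max 3
```

Both checks pass on the generated pipeline: every edge endpoint is a defined agent, and the single loop (validator back to transformer) carries a `max 3` guard.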
## After generation
Once you have a validated topology, the skill offers two follow-up actions:
### Scaffold
Generate platform-specific files. The skill asks which platform you want to target:
> Scaffold to Claude Code, Codex, Cursor, Gemini CLI, Copilot CLI, OpenClaw, or Kiro?
It then runs the scaffolder and creates the directory structure, agent prompts, and config files.
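As a rough illustration of what scaffolding produces, here is a toy scaffolder that writes one prompt file per agent plus a config stub. The exact layout (`agents/*.md`, `config.json`) is an assumption for this sketch; the skill's real output depends on the platform you choose.

```python
# Toy scaffolder sketch -- the file layout here is illustrative, not
# the skill's documented output for any platform.
import json
import tempfile
from pathlib import Path

def scaffold(topology_name, agents, root):
    """Create a per-agent prompt file and a config stub under root."""
    base = Path(root) / topology_name
    (base / "agents").mkdir(parents=True, exist_ok=True)
    for agent in agents:
        # One prompt file per agent in the topology.
        (base / "agents" / f"{agent}.md").write_text(
            f"# {agent}\n\nRole prompt for the {agent} agent.\n"
        )
    (base / "config.json").write_text(
        json.dumps({"topology": topology_name}, indent=2) + "\n"
    )
    return sorted(p.relative_to(base).as_posix()
                  for p in base.rglob("*") if p.is_file())

with tempfile.TemporaryDirectory() as tmp:
    files = scaffold("data-pipeline",
                     ["fetcher", "transformer", "validator", "storer"], tmp)
    print(files)
# ['agents/fetcher.md', 'agents/storer.md', 'agents/transformer.md',
#  'agents/validator.md', 'config.json']
```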
### Visualize
Generate an interactive HTML graph of your topology. The skill opens it in your browser or saves it to a file.
## What's next
- See the available templates the skill draws from
- Learn about validation rules in detail
- Explore the scaffold API for programmatic use