A lightweight pattern for exposing modular capabilities without creating a monolith
Note: In this article, “agents” refers to modular capability units — not LLM orchestrators or swarm-style multi-agent systems.
Supply chain operations rarely fit inside a simple request–response cycle.
Classification, tariff evaluation, dependency mapping, due diligence checks — all of them unfold across multiple dependent steps. Yet most API designs still assume these workflows are instantaneous.
While building large-scale supply chain intelligence systems (multi-tier supplier graphs, risk propagation logic, compliance engines, tariff models), I kept running into the same challenge:
How do you expose complex, multi-step logic without forcing complexity onto the caller?
A lightweight A2A (Agent-to-Agent) protocol proved to be an effective solution — not as a product feature, but as a generalizable integration pattern for long-running or multi-step API workflows.
This article breaks down the reasoning behind the pattern and why it works well in practice.
Why Traditional APIs Struggle With Multi-Step Workflows
On the surface, workflows like “classify a product” or “calculate tariff duties” look atomic. In reality, they decompose into multiple phases:
- input normalization
- attribute inference
- rule and exception matching
- multi-country tariff overlays
- multi-hop dependency expansion
- evidence collection and assembly
Bundling these processes into a single synchronous call leads to:
- unpredictable execution times
- brittle timeout handling
- opaque branching logic
- unclear error reporting
- monolithic endpoints that quickly become unmanageable
The issue isn’t performance.
It’s the lack of an API abstraction that expresses multi-step workflows cleanly.
Why a Lightweight A2A Pattern Works
Instead of a massive all-in-one endpoint, a system can be decomposed into small, focused capability units, each responsible for exactly one task:
- classification
- tariff calculation
- dependency graph expansion
- concentration analysis
- due diligence synthesis
Each capability exposes a simple lifecycle:
- run — start the task
- status — check progress
- result — fetch the final structured output
This keeps the surface area predictable while allowing internal logic to remain expressive.
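To make that lifecycle concrete, it can be captured in a small interface that every capability implements. The sketch below uses illustrative Python names; nothing here is a formal spec:

from typing import Any, Optional, Protocol

class Capability(Protocol):
    """Illustrative lifecycle interface; names are not taken from any formal spec."""

    def run(self, payload: dict[str, Any], idempotency_key: Optional[str] = None) -> str:
        """Start the task and immediately return a task_id."""
        ...

    def status(self, task_id: str) -> dict[str, Any]:
        """Report PENDING | RUNNING | DONE | FAILED plus progress details."""
        ...

    def result(self, task_id: str) -> dict[str, Any]:
        """Return the final structured output once the task is DONE."""
        ...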
The A2A Lifecycle (with Examples)
A2A follows three predictable operations:
caller → POST /run → task_id
caller → GET /status → PENDING | RUNNING | DONE | FAILED
caller → GET /result → structured output
This minimal protocol is intuitive, flexible, and maps naturally to complex workflows.
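From the caller's side, the whole lifecycle fits in a short script. The sketch below uses Python with the requests library; the base URL is a placeholder, and the endpoint shapes follow the examples in the next sections:

import time
import requests

BASE = "https://api.example.com/agents/hts_classification"  # placeholder base URL

def classify(description: str, key: str) -> dict:
    # 1. run: start the task and get a task_id back immediately
    run = requests.post(f"{BASE}/run", json={
        "product_description": description,
        "idempotency_key": key,
    })
    run.raise_for_status()
    task_id = run.json()["task_id"]

    # 2. status: poll until the task reaches a terminal state
    while True:
        status = requests.get(f"{BASE}/status", params={"task_id": task_id}).json()
        if status["status"] in ("DONE", "FAILED"):
            break
        time.sleep(2)  # typical 1-3 second polling interval

    # 3. result: fetch the structured output (or surface the failure)
    if status["status"] == "FAILED":
        raise RuntimeError(status.get("error", {}).get("message", "task failed"))
    return requests.get(f"{BASE}/result", params={"task_id": task_id}).json()

# Usage:
# classification = classify("EV lithium battery pack", key="client-123")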
1. run — Start a task (idempotent)
POST /agents/hts_classification/run
{
"product_description": "EV lithium battery pack",
"idempotency_key": "client-123"
}
Response:
{
"task_id": "abc123",
"status": "PENDING"
}
Idempotency keys allow safe retries — essential in distributed environments.
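On the server side, one minimal way to honor idempotency is to key task creation on the supplied key. A sketch (in-memory only; a real implementation would persist this mapping and guard it against races):

import uuid

# idempotency_key -> task_id; in production this lives in a durable store
_tasks_by_key: dict[str, str] = {}

def enqueue(task_id: str, payload: dict) -> None:
    """Hand the task off to whatever executes the workflow (queue, worker pool, ...)."""
    pass  # execution model is out of scope for the protocol itself

def start_task(payload: dict) -> dict:
    key = payload.get("idempotency_key")
    if key and key in _tasks_by_key:
        # A retried request with the same key gets the original task back
        return {"task_id": _tasks_by_key[key], "status": "PENDING"}
    task_id = uuid.uuid4().hex
    if key:
        _tasks_by_key[key] = task_id
    enqueue(task_id, payload)
    return {"task_id": task_id, "status": "PENDING"}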
2. status — Monitor progress
GET /agents/hts_classification/status?task_id=abc123
Response:
{
"status": "RUNNING",
"progress": 0.4,
"steps": ["normalize_input", "attribute_inference"]
}
For multi-second or multi-minute workflows, clients typically poll every 1–3 seconds.
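A slightly more defensive polling helper adds a capped backoff and an overall deadline. A sketch, where get_status stands in for whatever function issues the GET shown above:

import time
from typing import Callable

def poll_until_terminal(get_status: Callable[[str], dict], task_id: str, timeout_s: float = 300.0) -> dict:
    """Poll status with capped backoff until the task reaches DONE or FAILED."""
    delay = 1.0
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        payload = get_status(task_id)  # e.g. the GET /status call shown above
        if payload["status"] in ("DONE", "FAILED"):
            return payload
        time.sleep(delay)
        delay = min(delay * 1.5, 3.0)  # stay within the typical 1-3 second window
    raise TimeoutError(f"task {task_id} did not finish within {timeout_s}s")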
3. result — Retrieve structured output
GET /agents/hts_classification/result?task_id=abc123
Response:
{
"hts_code": "8507.60.0020",
"confidence": 0.94,
"evidence": {
"matched_phrases": ["lithium-ion", "battery pack"],
"rule_path": ["chapter 85", "heading 07", "subheading 60"]
}
}
Output schemas remain deterministic even when internal components use constrained LLM reasoning.
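One way to keep that guarantee enforceable on the consuming side is to parse results into a fixed type instead of passing raw dictionaries around. A minimal sketch using the fields from the example above:

from dataclasses import dataclass

@dataclass(frozen=True)
class ClassificationResult:
    hts_code: str
    confidence: float
    matched_phrases: list[str]
    rule_path: list[str]

def parse_result(payload: dict) -> ClassificationResult:
    # Fails loudly if the payload drifts from the agreed schema
    evidence = payload["evidence"]
    return ClassificationResult(
        hts_code=payload["hts_code"],
        confidence=float(payload["confidence"]),
        matched_phrases=list(evidence["matched_phrases"]),
        rule_path=list(evidence["rule_path"]),
    )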
Error Handling (A Required Part of the Pattern)
A2A provides a consistent, machine-readable structure for failures:
{
"status": "FAILED",
"error": {
"type": "VALIDATION_ERROR",
"message": "Missing required field: product_description",
"fields": ["product_description"]
}
}
Predictable error semantics significantly reduce integration friction.
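Because the failure payload has a fixed shape, callers can map it onto typed exceptions instead of string-matching messages. A sketch with illustrative exception names:

class AgentTaskError(Exception):
    """Base class for failures reported by a capability (illustrative name)."""

class ValidationError(AgentTaskError):
    pass

_ERROR_TYPES = {"VALIDATION_ERROR": ValidationError}

def raise_for_failure(status_payload: dict) -> None:
    """Turn a FAILED status payload into a typed exception."""
    if status_payload.get("status") != "FAILED":
        return
    err = status_payload.get("error", {})
    exc_cls = _ERROR_TYPES.get(err.get("type"), AgentTaskError)
    raise exc_cls(err.get("message", "task failed"))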
Why A2A Works Well for Supply Chain Intelligence
✔ 1. It mirrors real multi-step workflows
Developers gain an API abstraction that matches actual execution semantics.
✔ 2. It keeps capabilities independent
Classification logic stays focused on classification; tariff logic stays focused on tariff computation.
✔ 3. It supports long-running jobs cleanly
Whether expanding Tier-10 supplier networks or aggregating due diligence signals, long tasks run safely without timeouts.
✔ 4. It improves transparency and auditability
Each capability has a single responsibility and a deterministic schema, which makes its behavior easy to trace and review.
✔ 5. It avoids monolithic surface areas
Developers integrate only the components they need.
A2A Is Not an Orchestration Framework
A2A does not prescribe:
- how workflows execute internally
- which compute model is used
- whether logic is rule-based, graph-based, or LLM-based
- whether async execution uses queues, workers, or distributed pipelines
A2A simply defines how capabilities expose themselves.
This avoids a common misunderstanding:
A2A is an integration pattern, not a multi-agent runtime.
When A2A Outperforms Traditional APIs
Choose synchronous APIs for operations that are:
- small
- atomic
- single-step
- effectively instant
Choose A2A for workflows that are:
- multi-step
- long-running
- reasoning-dependent
- bound to deterministic output schemas
- naturally modular
Explore the Open Specification
The complete open specification — including lifecycle semantics, schemas, error models, and runnable examples — is available here:
👉 https://github.com/SupplyGraphAI/supplygraph-ai
The repository contains:
- formal manifest definitions
- structured I/O schemas
- consistent error semantics
- example agents
- reference clients
- integration quickstarts
The spec continues to evolve as more capabilities are added across supply chain intelligence:
- multi-tier dependency expansion
- risk propagation modeling
- tariff and compliance automation
- due diligence aggregation
- geographic concentration scoring
If you’ve worked on similar multi-step API designs or long-running workflows, I’d love to hear how you approached them.