// "Expert guidance for creating strands-cli workflow specifications (YAML/JSON). Use when creating, modifying, or troubleshooting strands-cli specs for: (1) Multi-step agent workflows (chain, routing, parallel, graph patterns), (2) Tool configuration (python_exec, http_request, custom tools), (3) Runtime and provider setup (Bedrock, OpenAI, Ollama), (4) Input/output handling and templating, or (5) Debugging validation errors"
---
name: strands-spec
description: "Expert guidance for creating strands-cli workflow specifications (YAML/JSON). Use when creating, modifying, or troubleshooting strands-cli specs for: (1) Multi-step agent workflows (chain, routing, parallel, graph patterns), (2) Tool configuration (python_exec, http_request, custom tools), (3) Runtime and provider setup (Bedrock, OpenAI, Ollama), (4) Input/output handling and templating, or (5) Debugging validation errors"
---
Expert guidance for building production-ready strands-cli workflow specifications.
Schema: `strands-workflow.schema.json` (JSON Schema Draft 2020-12)

Load this skill when you need to:

- Build multi-step agent workflows (chain, routing, parallel, graph patterns)
- Configure tools (python_exec, http_request, custom tools)
- Set up a runtime and provider (Bedrock, OpenAI, Ollama)
- Handle inputs, outputs, and templating
- Debug validation errors

Core spec structure:
```yaml
version: 0
name: "my-workflow"
description: "Brief description of what this workflow does"

runtime:
  provider: bedrock  # or openai, ollama
  model_id: "anthropic.claude-3-sonnet-20240229-v1:0"
  region: "us-east-1"
  budgets:
    max_tokens: 100000
    max_duration_s: 300

agents:
  main-agent:
    prompt: "Your clear, specific instructions here"

pattern:
  type: chain  # Start with chain, evolve to other patterns
  config:
    steps:
      - agent: main-agent
        input: "Task description"

outputs:
  artifacts:
    - path: "./output.txt"
      from: "{{ last_response }}"
```
DO NOT load all reference files at once. Instead, load:

- `patterns.md` when working with specific orchestration patterns (chain, routing, parallel, etc.)
- `tools.md` when configuring custom tools or troubleshooting tool execution
- `advanced.md` for context management, telemetry, and security features
- `examples.md` for real-world workflow patterns and templates
- `troubleshooting.md` when debugging validation errors or runtime issues

Runtime configuration:
```yaml
runtime:
  provider: bedrock      # REQUIRED: bedrock | openai | ollama
  model_id: "..."        # Model identifier (provider-specific)
  region: "us-east-1"    # Required for Bedrock
  temperature: 0.7       # 0.0-2.0 (default: 0.7)
  max_tokens: 2000       # Per-message limit
  budgets:               # Prevent runaway costs
    max_tokens: 100000   # Total token budget
    max_duration_s: 600  # Timeout in seconds
    max_steps: 50        # Max agent invocations
```
Provider Quick Reference:
- `bedrock`: requires `region`; uses AWS credentials (fully supported)
- `openai`: requires the `OPENAI_API_KEY` env var (fully supported)
- `ollama`: requires `host: "http://localhost:11434"` (fully supported)
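A minimal sketch of each provider block, assuming the requirements above. The Bedrock `model_id` is the one used throughout this skill; the OpenAI and Ollama model identifiers are illustrative assumptions, not documented values.

```yaml
# Bedrock: needs region; credentials come from the AWS environment/profile
runtime:
  provider: bedrock
  model_id: "anthropic.claude-3-sonnet-20240229-v1:0"
  region: "us-east-1"
---
# OpenAI: reads OPENAI_API_KEY from the environment
runtime:
  provider: openai
  model_id: "gpt-4o"  # illustrative assumption
---
# Ollama: local server; host default as shown in the quick reference
runtime:
  provider: ollama
  model_id: "llama3"  # illustrative assumption
  host: "http://localhost:11434"
```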
Agent definitions:

```yaml
agents:
  agent-name:
    prompt: |
      Clear, specific instructions.
      Use {{ variables }} for dynamic content.
    tools: ["python_exec", "http_request"]  # Optional
    runtime:            # Optional per-agent overrides
      temperature: 0.3
      max_tokens: 4000
```
Prompt Best Practices:
- Be clear and specific about the task and the expected output
- Reference dynamic content with `{{ var_name }}`

Choose a pattern based on your coordination needs:
| Pattern | Use Case | Complexity |
|---|---|---|
| `chain` | Sequential steps, each builds on previous | ⭐ Simple |
| `routing` | Dynamic agent selection based on input | ⭐⭐ Medium |
| `parallel` | Independent tasks executed concurrently | ⭐⭐ Medium |
| `workflow` | DAG with dependencies, parallel where possible | ⭐⭐⭐ Complex |
| `graph` | State machines with loops and conditionals | ⭐⭐⭐⭐ Advanced |
| `evaluator-optimizer` | Iterative refinement with quality gates | ⭐⭐⭐ Complex |
| `orchestrator-workers` | Dynamic task delegation | ⭐⭐⭐⭐ Advanced |
All 7 patterns are fully implemented and production-ready. Start with `chain` and migrate to more complex patterns only when needed; a hedged `workflow` sketch follows.
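When you outgrow `chain`, `workflow` is a common next step. The `tasks` key below is implied by the `{{ tasks.task_id.response }}` template variable documented later in this skill, but the exact task fields (notably `depends_on`) and the agent names are assumptions; load `patterns.md` for the authoritative config.

```yaml
# Hedged sketch: task shape and depends_on are assumptions; verify in patterns.md
pattern:
  type: workflow
  config:
    tasks:
      research:                 # task id, referenced as {{ tasks.research.response }}
        agent: researcher       # hypothetical agent name
        input: "Research {{ topic }}"
      summarize:
        agent: writer           # hypothetical agent name
        input: "Summarize: {{ tasks.research.response }}"
        depends_on: [research]  # assumed field name for DAG edges
```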
Inputs:

```yaml
inputs:
  required:
    topic: string  # Shorthand syntax
  optional:
    format:
      type: string
      description: "Output format"
      default: "markdown"
      enum: ["markdown", "json", "yaml"]
```
Use variables in prompts and artifacts:

```yaml
prompt: "Research {{ topic }} and output as {{ format }}"
```
Outputs:

```yaml
outputs:
  artifacts:
    - path: "./artifacts/result.md"
      from: "{{ last_response }}"  # Last agent output
    - path: "./artifacts/trace.json"
      from: "{{ $TRACE }}"  # Full execution trace
```
Template Variables:

- `{{ last_response }}` - Most recent agent output
- `{{ steps[0].response }}` - Chain step output (0-indexed)
- `{{ tasks.task_id.response }}` - Workflow task output
- `{{ $TRACE }}` - Complete execution trace
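A sketch of an `outputs` block for a chain that combines these variables; the artifact paths are illustrative:

```yaml
outputs:
  artifacts:
    - path: "./artifacts/first-step.md"
      from: "{{ steps[0].response }}"  # output of the first chain step
    - path: "./artifacts/final.md"
      from: "{{ last_response }}"      # output of whatever ran last
    - path: "./artifacts/trace.json"
      from: "{{ $TRACE }}"             # full execution trace, useful when debugging
```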
Common validation errors:

- Missing `version` → add `version: 0` at the top level
- Invalid pattern type → use one of: `chain`, `routing`, `parallel`, `workflow`, `graph`, `evaluator-optimizer`, `orchestrator-workers`
- Unknown agent reference → ensure every agent referenced in `pattern` is defined in the `agents:` section
- Invalid provider → use `bedrock`, `openai`, or `ollama`
Performance tips:

- Use `parallel` or `workflow` patterns for independent tasks
- Use `context_policy.compression` for long workflows (hedged sketch below)
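Only the `context_policy.compression` setting is named above; the boolean shape below is an assumption. Load `advanced.md` for the actual options.

```yaml
# Assumed shape: compression as a simple toggle. Verify against advanced.md.
context_policy:
  compression: true
```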
Skill("strands-spec/patterns")Skill("strands-spec/tools")Skill("strands-spec/advanced")Skill("strands-spec/examples")Skill("strands-spec/troubleshooting")Always validate before execution:
```bash
uv run strands validate my-workflow.yaml
```
Or use an inline schema reference for editor validation:

```yaml
# yaml-language-server: $schema=./strands-workflow.schema.json
version: 0
# ...
```
Complete minimal example (`hello-world.yaml`):

```yaml
version: 0
name: "hello-world"

runtime:
  provider: bedrock
  model_id: "anthropic.claude-3-sonnet-20240229-v1:0"
  region: "us-east-1"

agents:
  greeter:
    prompt: "Say hello to {{ name }}"

inputs:
  required:
    name: string

pattern:
  type: chain
  config:
    steps:
      - agent: greeter
        input: "Greet the user"

outputs:
  artifacts:
    - path: "./greeting.txt"
      from: "{{ last_response }}"
```
Run with:

```bash
uv run strands run hello-world.yaml --var name="World"
```