| name | aws-strands-agents-agentcore |
| description | Use when working with AWS Strands Agents SDK or Amazon Bedrock AgentCore platform for building AI agents. Provides architecture guidance, implementation patterns, deployment strategies, observability, quality evaluations, multi-agent orchestration, and MCP server integration. |
AWS Strands Agents SDK: Open-source Python framework for building AI agents with model-driven orchestration (minimal code, model decides tool usage)
Amazon Bedrock AgentCore: Enterprise platform for deploying, operating, and scaling agents in production
Relationship: Strands SDK runs standalone OR with AgentCore platform services. AgentCore is optional but provides enterprise features (8hr runtime, streaming, memory, identity, observability).
Single-purpose agent:
Multi-agent system:
Tool/Integration Server (MCP):
See architecture.md for deployment examples.
- Transport: streamable-http (NOT stdio)
- Endpoint: 0.0.0.0:8000/mcp
- Content types: application/json and text/event-stream
- Why: MCP servers are stateful and need persistent connections. Lambda is ephemeral and unsuitable.
See limitations.md for details.
See patterns.md for implementation.
See limitations.md for strategies.
| Component | Lambda | ECS/Fargate | AgentCore Runtime |
|---|---|---|---|
| Stateless Agents | ✅ Perfect | ❌ Overkill | ❌ Overkill |
| Interactive Agents | ❌ No streaming | ⚠️ Possible | ✅ Ideal |
| MCP Servers | ❌ NEVER | ✅ Standard | ✅ With features |
| Duration | < 15 minutes | Unlimited | Up to 8 hours |
| Cold Starts | Yes (30-60s) | No | No |
| Pattern | Complexity | Predictability | Cost | Use Case |
|---|---|---|---|---|
| Single Agent | Low | High | 1x | Most tasks |
| Agent as Tool | Low | High | 2-3x | Simple delegation |
| Graph | High | Very High | 3-5x | Deterministic workflows |
| Swarm | Medium | Low | 5-8x | Autonomous collaboration |
Recommendation: Start with single agents, evolve as needed.
See architecture.md for examples.
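The "Agent as Tool" row is usually the lowest-friction step beyond a single agent. Below is a minimal sketch built only from the `Agent` and `@tool` APIs shown elsewhere in this document; the sub-agent name, prompts, and question are illustrative:

```python
from strands import Agent, tool
from strands.models import BedrockModel

MODEL_ID = "anthropic.claude-sonnet-4-5-20250929-v1:0"

@tool
def research_assistant(question: str) -> str:
    """Delegate research questions to a focused sub-agent."""
    # A narrowly scoped sub-agent; the orchestrator treats it like any other tool
    sub_agent = Agent(
        model=BedrockModel(model_id=MODEL_ID),
        system_prompt="You are a research specialist. Answer concisely and cite sources.",
    )
    return str(sub_agent(question))

# The orchestrator model decides when to call the sub-agent
orchestrator = Agent(
    model=BedrockModel(model_id=MODEL_ID),
    system_prompt="You coordinate work. Use research_assistant for factual research.",
    tools=[research_assistant],
)
result = orchestrator("Summarize the current state of small language models.")
```

Because the sub-agent is just another tool, you can add or remove specialists without changing orchestration code, which keeps cost close to the 2-3x noted in the table.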
## Model-Driven Philosophy
Key Concept: Strands Agents delegates orchestration to the model rather than requiring explicit control flow code.
# Traditional: Manual orchestration (avoid)
while not done:
if needs_research:
result = research_tool()
elif needs_analysis:
result = analysis_tool()
# Strands: Model decides (prefer)
agent = Agent(
system_prompt="You are a research analyst. Use tools to answer questions.",
tools=[research_tool, analysis_tool]
)
result = agent("What are the top tech trends?")
# Model automatically orchestrates: research_tool → analysis_tool → respond
Primary Provider: Anthropic Claude via AWS Bedrock
Model ID Format: anthropic.claude-{model}-{version}
Current Models (as of January 2025):
- anthropic.claude-sonnet-4-5-20250929-v1:0 - Production
- anthropic.claude-haiku-4-5-20251001-v1:0 - Fast/economical
- anthropic.claude-opus-4-5-20250514-v1:0 - Complex reasoning

Check Latest Models:
aws bedrock list-foundation-models --by-provider anthropic \
--query 'modelSummaries[*].[modelId,modelName]' --output table
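The same check can be scripted with boto3 if you are already in a notebook or build step. A short sketch; the region is an assumption, and the provider string mirrors the CLI flag above:

```python
import boto3

# List Anthropic foundation models available in this region
bedrock = boto3.client("bedrock", region_name="us-east-1")
response = bedrock.list_foundation_models(byProvider="anthropic")
for model in response["modelSummaries"]:
    print(f'{model["modelId"]}  -  {model["modelName"]}')
```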
from strands import Agent
from strands.models import BedrockModel
from strands.session import DynamoDBSessionManager
from strands.agent.conversation_manager import SlidingWindowConversationManager
agent = Agent(
agent_id="my-agent",
model=BedrockModel(model_id="anthropic.claude-sonnet-4-5-20250929-v1:0"),
system_prompt="You are helpful.",
tools=[tool1, tool2],
session_manager=DynamoDBSessionManager(table_name="sessions"),
conversation_manager=SlidingWindowConversationManager(max_messages=20)
)
result = agent("Process this request")
See patterns.md for base agent factory patterns.
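A minimal factory sketch, reusing only the constructor arguments shown above; the function name, defaults, and example agent are illustrative, see patterns.md for the full pattern:

```python
from strands import Agent
from strands.models import BedrockModel
from strands.session import DynamoDBSessionManager
from strands.agent.conversation_manager import SlidingWindowConversationManager

DEFAULT_MODEL_ID = "anthropic.claude-sonnet-4-5-20250929-v1:0"

def create_base_agent(agent_id: str, system_prompt: str, tools: list,
                      model_id: str = DEFAULT_MODEL_ID) -> Agent:
    """Build an agent with the project-wide defaults from the example above."""
    return Agent(
        agent_id=agent_id,
        model=BedrockModel(model_id=model_id),
        system_prompt=system_prompt,
        tools=tools,
        session_manager=DynamoDBSessionManager(table_name="sessions"),
        conversation_manager=SlidingWindowConversationManager(max_messages=20),
    )

# Usage: every agent in the project gets the same session and conversation handling
support_agent = create_base_agent(
    agent_id="support-agent",
    system_prompt="You resolve customer support tickets.",
    tools=[],
)
```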
from mcp.server import FastMCP
import psycopg2.pool
# Persistent connection pool (why Lambda won't work)
db_pool = psycopg2.pool.SimpleConnectionPool(minconn=1, maxconn=10, host="db.internal")
mcp = FastMCP("Database Tools")
@mcp.tool()
def query_database(sql: str) -> dict:
conn = db_pool.getconn()
try:
cursor = conn.cursor()
cursor.execute(sql)
return {"status": "success", "rows": cursor.fetchall()}
finally:
db_pool.putconn(conn)
# CRITICAL: streamable-http mode
if __name__ == "__main__":
mcp.run(transport="streamable-http", host="0.0.0.0", port=8000)
See architecture.md for deployment details.
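On the consumer side, a Strands agent can load the server's tools over the same streamable-http transport. A sketch assuming the Strands MCPClient wrapper and the MCP SDK's streamablehttp_client; the host name is illustrative, and import paths may differ slightly across versions:

```python
from mcp.client.streamable_http import streamablehttp_client
from strands import Agent
from strands.tools.mcp import MCPClient

# Connect to the MCP server started above (note the /mcp path)
mcp_client = MCPClient(lambda: streamablehttp_client("http://mcp-server.internal:8000/mcp"))

with mcp_client:
    # Tools must be listed and used while the client session is open
    tools = mcp_client.list_tools_sync()
    agent = Agent(tools=tools)
    agent("How many orders were placed yesterday?")
```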
from strands import tool
@tool
def safe_tool(param: str) -> dict:
"""Always return structured results, never raise exceptions."""
try:
result = operation(param)
return {"status": "success", "content": [{"text": str(result)}]}
except Exception as e:
return {"status": "error", "content": [{"text": f"Failed: {str(e)}"}]}
See patterns.md for tool design patterns.
AgentCore Runtime (Automatic):
# Install with OTEL support
# pip install 'strands-agents[otel]'
# Add 'aws-opentelemetry-distro' to requirements.txt
from strands import Agent
from bedrock_agentcore.runtime import BedrockAgentCoreApp
app = BedrockAgentCoreApp()
agent = Agent(...) # Automatically instrumented
@app.entrypoint
def handler(payload):
return agent(payload["prompt"])
Self-Hosted:
export AGENT_OBSERVABILITY_ENABLED=true
export OTEL_PYTHON_DISTRO=aws_distro
export OTEL_RESOURCE_ATTRIBUTES="service.name=my-agent"
opentelemetry-instrument python agent.py
General OpenTelemetry:
from strands.observability import StrandsTelemetry
# Development
telemetry = StrandsTelemetry().setup_console_exporter()
# Production
telemetry = StrandsTelemetry().setup_otlp_exporter()
See observability.md for detailed patterns.
- Local dev → FileSystem
- Lambda agents → S3 or DynamoDB
- ECS agents → DynamoDB
- Interactive chat → AgentCore Memory
- Knowledge bases → AgentCore Memory
See architecture.md for storage backend comparison.
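A small helper can encode the mapping above so agent code never hard-codes a backend. This is a sketch: FileSessionManager and S3SessionManager are assumed by analogy with the DynamoDBSessionManager used earlier, and constructor arguments may differ by backend and SDK version:

```python
import os
from strands.session import DynamoDBSessionManager
# Assumed to live in the same module; adjust imports to your installed SDK version
from strands.session import FileSessionManager, S3SessionManager

def session_manager_for(environment: str, session_id: str):
    """Pick a session backend following the environment mapping above."""
    if environment == "local":
        return FileSessionManager(session_id=session_id, storage_dir="./sessions")
    if environment == "lambda":
        return S3SessionManager(session_id=session_id, bucket="my-agent-sessions")
    # ECS and other long-lived deployments
    return DynamoDBSessionManager(table_name="sessions")

manager = session_manager_for(os.environ.get("DEPLOY_ENV", "local"), "session-123")
```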
See architecture.md for platform service details.
See patterns.md and limitations.md for details.
Before deploying: