# creating-new-agent
| Field | Value |
| --- | --- |
| name | creating-new-agent |
| description | Creates new A2A-compliant agents in the QuAIA framework. Use when adding a new specialized agent with custom tools, prompts, and MCP server integrations. |
| metadata | `{"author":"partarstu"}` |
This skill provides a comprehensive guide for creating a new specialized agent in the QuAIA™ framework. Agents are A2A-compliant (Agent-to-Agent protocol) services that handle specific QA-related tasks.
Each agent in QuAIA consists of:
- Entry point (`main.py`) - Agent class inheriting from `AgentBase`
- Prompt definitions (`prompt.py`) - Prompt classes inheriting from `PromptBase`
- Prompt templates (`system_prompts/`) - Text template files for LLM instructions
- A configuration class in `config.py` for agent-specific settings
- Tests under `tests/agents/`

Create a new directory under `agents/` with the following structure:
```
agents/<agent_name>/
├── __init__.py          (empty file)
├── main.py
├── prompt.py
├── Dockerfile
└── system_prompts/
    └── main_prompt_template.txt
```
Example command:
```bash
mkdir -p agents/<agent_name>/system_prompts
```
Add a configuration class in `config.py` using the template:
📄 Template: resources/config_template.py
Configuration field descriptions:
- `THINKING_BUDGET`: Token budget for chain-of-thought reasoning (0 disables it)
- `OWN_NAME`: Human-readable name displayed in the orchestrator dashboard
- `PORT`: Internal container port the agent listens on
- `EXTERNAL_PORT`: Externally accessible port (usually the same as `PORT`)
- `MODEL_NAME`: The LLM model to use (format: `provider:model-name`)
- `MAX_REQUESTS_PER_TASK`: Limit on tool/MCP calls per task execution
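For illustration, here is a minimal sketch of what such a configuration class might look like, assuming a dataclass style; the authoritative layout is in resources/config_template.py, and the agent name and values below are hypothetical:

```python
# Hypothetical sketch only; see resources/config_template.py for the real template.
from dataclasses import dataclass


@dataclass(frozen=True)
class ReportAnalyzerAgentConfig:  # hypothetical agent name
    OWN_NAME: str = "Report Analyzer"            # shown in the orchestrator dashboard
    PORT: int = 8010                             # internal container port
    EXTERNAL_PORT: int = 8010                    # externally reachable port (usually same as PORT)
    MODEL_NAME: str = "google:gemini-2.5-flash"  # provider:model-name format
    THINKING_BUDGET: int = 0                     # 0 disables chain-of-thought reasoning
    MAX_REQUESTS_PER_TASK: int = 15              # cap on tool/MCP calls per task
```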
If the agent returns structured output, add a Pydantic model in `common/models.py`:
📄 Template: resources/output_model_template.py
**Important:** Inherit from `BaseAgentResult` to include the `llm_comments` field for debugging.
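A minimal sketch of such a model, assuming Pydantic v2; `BaseAgentResult` and `common/models.py` come from this guide, while the class name and fields below are hypothetical:

```python
# Hypothetical sketch only; see resources/output_model_template.py for the real template.
from pydantic import Field

from common.models import BaseAgentResult  # project-internal import per this guide


class ReportAnalysisResult(BaseAgentResult):  # hypothetical model name
    summary: str = Field(description="Short summary of the analysis outcome")
    defects_found: list[str] = Field(
        default_factory=list,
        description="Identifiers of defects detected during analysis",
    )
```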
Create `agents/<agent_name>/prompt.py`:
📄 Template: resources/prompt_template.py
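As a rough illustration, a prompt class might look like the following; `PromptBase` is named in this guide, but its import path and hook method are assumptions, and the class name is hypothetical:

```python
# Hypothetical sketch only; see resources/prompt_template.py for the real template.
from pathlib import Path

from common.prompts import PromptBase  # assumed import path


class ReportAnalyzerPrompt(PromptBase):  # hypothetical prompt class
    def get_system_prompt(self) -> str:  # assumed PromptBase hook
        template = Path(__file__).parent / "system_prompts" / "main_prompt_template.txt"
        return template.read_text(encoding="utf-8")
```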
Create `agents/<agent_name>/system_prompts/main_prompt_template.txt`:
📄 Template: resources/system_prompt_template.txt
Best practices for prompts:
Create `agents/<agent_name>/main.py`:
📄 Template: resources/agent_template.py
Key points:
- The agent class inherits from `AgentBase`
- Override `get_thinking_budget()` and `get_max_requests_per_task()`
- The `app` variable exposes the A2A-compliant FastAPI application
- `start_as_server()` runs the agent standalone with uvicorn
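A minimal sketch of the entry point, based on the key points above; `AgentBase`, `app`, and `start_as_server()` are named in this guide, while the import paths and class names are assumptions:

```python
# Hypothetical sketch only; see resources/agent_template.py for the real template.
from common.agents import AgentBase  # assumed import path

from agents.report_analyzer.prompt import ReportAnalyzerPrompt  # hypothetical module
from config import ReportAnalyzerAgentConfig  # hypothetical config class


class ReportAnalyzerAgent(AgentBase):  # hypothetical agent class
    def get_thinking_budget(self) -> int:
        return ReportAnalyzerAgentConfig.THINKING_BUDGET

    def get_max_requests_per_task(self) -> int:
        return ReportAnalyzerAgentConfig.MAX_REQUESTS_PER_TASK


agent = ReportAnalyzerAgent()
app = agent.app  # the A2A-compliant FastAPI application (attribute assumed)

if __name__ == "__main__":
    agent.start_as_server()  # standalone run via uvicorn
```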
Create `agents/<agent_name>/Dockerfile`:
📄 Template: resources/dockerfile_template
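For orientation, a plausible Dockerfile might look like this; the base image, copied paths, and port are assumptions, and the authoritative version is resources/dockerfile_template:

```dockerfile
# Hypothetical sketch only; see resources/dockerfile_template for the real template.
FROM python:3.12-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY common/ common/
COPY agents/<agent_name>/ agents/<agent_name>/
EXPOSE 8010
CMD ["python", "agents/<agent_name>/main.py"]
```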
If deploying to Google Cloud Run, add build and deploy steps to `cloudbuild.yaml`:
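A hedged sketch of what those steps might look like; the builder images are standard Cloud Build builders, but the service name, region, and image path are assumptions to adapt to the project's existing `cloudbuild.yaml`:

```yaml
# Hypothetical sketch only; merge into the project's existing cloudbuild.yaml.
steps:
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/$PROJECT_ID/<agent_name>',
           '-f', 'agents/<agent_name>/Dockerfile', '.']
  - name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
    entrypoint: gcloud
    args: ['run', 'deploy', '<agent_name>',
           '--image', 'gcr.io/$PROJECT_ID/<agent_name>',
           '--region', 'us-central1', '--port', '<port>']
images:
  - 'gcr.io/$PROJECT_ID/<agent_name>'
```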
Create `tests/agents/test_<agent_name>.py`:
📄 Example: examples/test_agent_example.py
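A minimal sketch of such a test, assuming FastAPI's `TestClient`; the import path is hypothetical, and the repository's own example lives in examples/test_agent_example.py:

```python
# Hypothetical sketch only; see examples/test_agent_example.py for the real example.
from fastapi.testclient import TestClient

from agents.report_analyzer.main import app  # hypothetical agent module


def test_agent_card_is_served():
    client = TestClient(app)
    response = client.get("/.well-known/agent.json")
    assert response.status_code == 200
    assert "name" in response.json()  # an agent card identifies the agent by name
```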
After creating the agent, verify:
- Configuration class added to `config.py`
- Output model (if any) added to `common/models.py`
- Prompt class inherits from `PromptBase`
- Agent class inherits from `AgentBase`
- Tests pass: `pytest tests/agents/test_<agent_name>.py -v`
- The agent starts standalone: `python agents/<agent_name>/main.py`
- The agent card is reachable at `http://localhost:<port>/.well-known/agent.json`

To run the agent locally:

```bash
# Activate virtual environment (Windows)
.venv\Scripts\activate

# Run the agent
python agents/<agent_name>/main.py
```
The agent will start listening on the configured port and automatically expose:
- `/.well-known/agent.json` - Agent card for discovery
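As a quick manual check once the agent is running, you can fetch the agent card (substitute the configured port):

```bash
# Confirm the agent is up by requesting its discovery card
curl http://localhost:<port>/.well-known/agent.json
```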