# writing-unit-tests
Writes unit tests for agents, orchestrator logic, and common utilities in the QuAIA framework using pytest. Use when adding tests for new or existing components.
| field | value |
| --- | --- |
| name | writing-unit-tests |
| description | Writes unit tests for agents, orchestrator logic, and common utilities in the QuAIA framework using pytest. Use when adding tests for new or existing components. |
| metadata | {"author":"partarstu"} |
This skill provides a comprehensive guide for writing unit tests for the QuAIA™ framework, covering agents, orchestrator logic, and common utilities.
The QuAIA test suite uses:

- **pytest** as the test runner
- **pytest-asyncio** for testing async code
- **pytest-cov** for coverage reporting
- **unittest.mock** (`MagicMock`, `AsyncMock`, `patch`) for mocking dependencies
Tests are organized in the tests/ directory:
```
tests/
├── conftest.py      # Shared fixtures and test setup
├── agents/          # Agent-specific tests
├── orchestrator/    # Orchestrator logic tests
├── common/          # Common utilities tests
└── scripts/         # Script tests
```
The root tests/conftest.py sets up global test configuration:
📄 Template: resources/conftest_template.py
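As a rough sketch of what that global setup can contain (the env var name `QUAIA_ENV` and the autouse fixture body are illustrative assumptions; the template above is authoritative):

```python
# Illustrative conftest.py sketch -- see resources/conftest_template.py for
# the real template. QUAIA_ENV is a hypothetical example variable name.
import os
import sys
from unittest.mock import MagicMock

import pytest

# Force test-safe configuration before any project module is imported.
os.environ.setdefault("QUAIA_ENV", "test")

# Stub heavy optional dependencies so test collection stays fast.
sys.modules.setdefault("sentence_transformers", MagicMock())


@pytest.fixture(autouse=True)
def clear_registry():
    """Hypothetical autouse fixture that resets shared state between tests."""
    yield
    # A real implementation would clear the agent registry here.
```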
Agent tests verify:
📄 Example: examples/test_agent_example.py
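Since each agent's API differs, here is a minimal self-contained sketch of the shape such a test takes; `FakeReviewAgent` and its `review` method are invented for illustration, and the example file above shows tests against real agents:

```python
# FakeReviewAgent is a hypothetical stand-in for a real QuAIA agent; actual
# tests target the real agent classes (see examples/test_agent_example.py).
import asyncio
from unittest.mock import AsyncMock


class FakeReviewAgent:
    """Minimal agent that delegates to an injected async LLM client."""

    def __init__(self, llm_client):
        self.llm_client = llm_client

    async def review(self, text):
        reply = await self.llm_client.complete(text)
        return {"verdict": reply.strip().lower()}


async def test_review_agent_normalizes_verdict():
    llm = AsyncMock()
    llm.complete.return_value = "  APPROVED  "
    agent = FakeReviewAgent(llm)

    result = await agent.review("spec text")

    assert result == {"verdict": "approved"}
    llm.complete.assert_awaited_once_with("spec text")


if __name__ == "__main__":
    asyncio.run(test_review_agent_normalizes_verdict())
```

In the suite itself, decorate the coroutine with `@pytest.mark.asyncio` instead of running it manually.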
📄 Example: examples/test_orchestrator_logic.py
📄 Example: examples/test_parsing_logic.py
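A hedged sketch of what a parsing-logic test can look like; `extract_model_name` is an invented helper, not the framework's real parsing API (see the example file above for the real tests):

```python
# extract_model_name is a hypothetical parsing helper used only to show the
# test shape; real tests cover the actual parsing functions.
import re

import pytest


def extract_model_name(artifact_text):
    """Pull a model name out of an artifact line like 'model: <name>'."""
    match = re.search(r"model:\s*(\S+)", artifact_text)
    if match is None:
        raise ValueError("no model found in artifact")
    return match.group(1)


def test_parse_valid_model():
    assert extract_model_name("model: gpt-test-1") == "gpt-test-1"


def test_parse_missing_model_raises():
    with pytest.raises(ValueError):
        extract_model_name("no model here")
```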
📄 Example: examples/test_endpoints.py
When testing components that interact with external services:
```python
import pytest
from unittest.mock import AsyncMock, MagicMock, patch


@pytest.fixture
def mock_jira_client():
    """Mock JIRA client for tests."""
    with patch("common.services.jira_client.JIRA") as mock:
        mock_instance = MagicMock()
        mock.return_value = mock_instance
        yield mock_instance


@pytest.fixture
def mock_vector_db():
    """Mock vector database service."""
    with patch("common.services.vector_db_service.VectorDbService") as mock:
        mock_instance = MagicMock()
        mock.return_value = mock_instance
        mock_instance.search = AsyncMock(return_value=[])
        yield mock_instance
```
For async functions, use the `@pytest.mark.asyncio` marker:
```python
import pytest
from unittest.mock import AsyncMock, patch


@pytest.mark.asyncio
async def test_async_function():
    """Test an async function."""
    result = await some_async_function()
    assert result is not None


@pytest.mark.asyncio
async def test_async_with_mock():
    """Test async function with mocked dependencies."""
    with patch("module.dependency", new_callable=AsyncMock) as mock_dep:
        mock_dep.return_value = "mocked result"
        result = await async_function_using_dependency()
        assert result == "mocked result"
        mock_dep.assert_called_once()


@pytest.mark.asyncio
async def test_handles_exception_gracefully(mock_error_history):
    """Test function handles exceptions properly."""
    from fastapi import HTTPException

    with patch("module.dependency") as mock_dep:
        mock_dep.side_effect = Exception("Unexpected error")

        with pytest.raises(HTTPException) as exc_info:
            await function_that_should_catch_and_rethrow()

        assert exc_info.value.status_code == 500
        assert "error" in exc_info.value.detail.lower()
```
For testing multiple scenarios, use `@pytest.mark.parametrize`:
```python
import pytest


@pytest.mark.parametrize("input_value,expected_output", [
    ("valid_input", "expected_result"),
    ("another_input", "another_result"),
    ("edge_case", "edge_result"),
])
def test_multiple_scenarios(input_value, expected_output):
    """Test multiple input/output combinations."""
    result = function_under_test(input_value)
    assert result == expected_output


@pytest.mark.parametrize("status,should_succeed", [
    ("AVAILABLE", True),
    ("BUSY", False),
    ("BROKEN", False),
])
@pytest.mark.asyncio
async def test_status_handling(status, should_succeed, clear_registry):
    """Test handling of different agent statuses."""
    # Set up an agent with the given status
    # Run the scenario under test
    # Assert based on should_succeed
```
Use the provided helper utilities for creating mock objects:
📄 Template: resources/test_helpers.py
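The authoritative helpers live in that template; as a hedged sketch, factories of this shape are typical (`make_mock_agent` and `make_mock_jira_issue` are illustrative names, not the template's real API):

```python
# Illustrative helper factories; the real utilities are in
# resources/test_helpers.py and may differ in names and shape.
from unittest.mock import AsyncMock, MagicMock


def make_mock_agent(name="test-agent", status="AVAILABLE"):
    """Build a MagicMock standing in for an agent with an async entry point."""
    agent = MagicMock()
    agent.name = name
    agent.status = status
    agent.run = AsyncMock(return_value={"agent": name, "result": "ok"})
    return agent


def make_mock_jira_issue(key="PROJ-1", summary="Example issue"):
    """Build a MagicMock shaped like a JIRA issue object."""
    issue = MagicMock()
    issue.key = key
    issue.fields.summary = summary
    return issue
```

Centralizing factories like these keeps individual tests short and makes mock shapes consistent across the suite.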
```bash
# Run all tests
pytest

# Run with verbose output
pytest -v

# Run with coverage report
pytest --cov=. --cov-report=html

# Run tests for a specific module
pytest tests/agents/

# Run tests for a specific file
pytest tests/agents/test_requirements_review.py

# Run a specific test function
pytest tests/orchestrator/test_parsing_logic.py::TestGetModelFromArtifacts::test_parse_valid_model

# Run tests matching a pattern
pytest -k "test_agent"

# Ensure pytest-asyncio is configured
pytest tests/orchestrator/test_orchestrator_logic.py -v
```
After writing tests, verify:
- Test files are named `test_<module_name>.py`
- Test classes are named `Test<FeatureName>`
- Test functions are named `test_<what_is_being_tested>`
- Async tests carry the `@pytest.mark.asyncio` marker
- Tests pass: `pytest tests/<test_file>.py -v`
- Coverage is adequate: `pytest --cov=<module> tests/<test_file>.py`

If you see "There is no current event loop" errors:
```python
import pytest
from unittest.mock import AsyncMock, patch


@pytest.fixture
def mock_error_history():
    """Mock async components to prevent event loop issues."""
    with patch("orchestrator.main.error_history") as mock:
        mock.add = AsyncMock()
        yield mock
```
If imports fail during test collection, mock heavy dependencies in conftest.py:
```python
import sys
from unittest.mock import MagicMock

# Mock sentence_transformers before any imports
mock_sentence_transformers = MagicMock()
sys.modules["sentence_transformers"] = mock_sentence_transformers
```
Always mock config values in fixtures rather than modifying global state:
```python
import pytest

from common import config  # adjust the import to the actual config module


@pytest.fixture
def mock_config(monkeypatch):
    monkeypatch.setattr(config.SomeConfig, "VALUE", "test_value")
```