---
name: test-executor
description: Execute tests adaptively, analyze failures, generate detailed failure reports, and iterate until tests pass. This skill should be used when running tests from a test plan, working with any language, framework, or test type (E2E, API, unit, integration, performance).
---
Execute tests in adaptive loops, analyze test results, generate structured failure reports, and iterate with fixes until all tests pass. Works universally with any testing framework or project structure through intelligent adaptation and discovery.
Use this skill when:

- Running tests from a test plan (created by `test-plan-generator` or manually)
- Working with any language, framework, or test type (E2E, API, unit, integration, performance)

Workflow:

1. Read Test Plan
2. Discover Project Test Setup
3. Ensure Services Are Running
4. For each test in the plan:
   - Execute Test
   - Analyze Results
   - If Test Passes: mark it complete (`- [x]`) in the plan
   - If Test Fails: record the error details for the failure report
   - Update Progress
5. Create Test Failure Report (`test-failures.md`)
6. Report Summary Statistics
## Discover Project Test Setup

Backend/Unit Tests:

```
# Look for config files and dependencies
package.json     → "jest", "mocha", "vitest"
requirements.txt → "pytest", "unittest"
*.csproj         → MSTest, xUnit, NUnit
Cargo.toml       → built-in Rust tests
go.mod           → built-in Go tests
```

Frontend/E2E Tests:

```
# Look for E2E frameworks
package.json → "playwright", "cypress", "puppeteer"
Check for test directories: e2e/, tests/, __tests__/
```

API Tests:

```
# Look for API test patterns
*.http files (REST Client)
*.test.ts with fetch/axios calls
curl commands in scripts
Postman collections
```
## Find the Test Command

Strategy:

1. Check package.json scripts:

```json
{
  "scripts": {
    "test": "jest",
    "test:e2e": "playwright test",
    "test:unit": "vitest run"
  }
}
```

2. Check Makefile:

```makefile
test:
	pytest tests/

test-e2e:
	npm run test:e2e
```

3. Check CI/CD config: `.github/workflows/*.yml`, `.gitlab-ci.yml`, `azure-pipelines.yml`

4. Try common patterns:

```bash
npm test
dotnet test
pytest
go test ./...
cargo test
make test
```

5. Ask the user if uncertain: "How do you run tests in this project?"
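A minimal sketch of this discovery strategy in Python (the marker files, fallback order, and `detect_test_command` helper are illustrative assumptions, not a fixed implementation):

```python
# Sketch: resolve the project's test command from common marker files.
import json
from pathlib import Path

def detect_test_command(root: str = ".") -> str | None:
    root = Path(root)

    # 1. package.json with a "test" script
    pkg = root / "package.json"
    if pkg.exists():
        scripts = json.loads(pkg.read_text()).get("scripts", {})
        if "test" in scripts:
            return "npm test"

    # 2. Makefile with a "test:" target
    makefile = root / "Makefile"
    if makefile.exists() and any(
        line.startswith("test:") for line in makefile.read_text().splitlines()
    ):
        return "make test"

    # 3. Per-ecosystem fallbacks, keyed on marker files
    for marker, command in [
        ("requirements.txt", "pytest"),
        ("go.mod", "go test ./..."),
        ("Cargo.toml", "cargo test"),
    ]:
        if (root / marker).exists():
            return command

    # 4. Any .csproj in the tree → dotnet test
    if next(root.glob("**/*.csproj"), None):
        return "dotnet test"

    return None  # Unknown: ask the user
```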
## E2E Tests

Characteristics:

- Exercise complete user flows through the browser UI
- Require the full stack (frontend, backend, database) to be running
- Slowest test type; run after faster suites pass

Execution with MCP Playwright:

```
# If using Playwright via MCP
# Tests are executed through MCP Playwright tools
# Navigate, click, fill forms, assert results
```

Execution with npm:

```bash
npm run test:e2e
# or
npx playwright test
npx cypress run
```

Services Required: frontend, backend, and database.

Example Test from Plan:

```markdown
- [ ] E2E: User can submit form and see confirmation
```

Execution: navigate to the form page, fill in the fields, submit, and assert that the confirmation appears. A hedged Playwright sketch follows below.
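A minimal sketch of that test using Playwright's Python bindings (the URL, selectors, and confirmation locator are assumptions about the app under test):

```python
# Sketch of "User can submit form and see confirmation" as a Playwright test.
from playwright.sync_api import sync_playwright, expect

def test_user_can_submit_form_and_see_confirmation():
    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        page.goto("http://localhost:5174/form")   # assumed frontend route
        page.fill("#title", "Test Form")          # assumed field selectors
        page.fill("#description", "Test")
        page.click("button[type=submit]")
        # The test passes only if the confirmation becomes visible
        expect(page.locator(".confirmation")).to_be_visible()
        browser.close()
```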
## API Tests

Characteristics:

- Call HTTP endpoints directly, with no browser involved
- Verify status codes, response bodies, and persisted side effects
- Need the backend and database, but not the frontend

Execution with curl:

```bash
# Example from test plan:
# "Test POST /api/forms creates form in database"
curl -X POST http://localhost:5001/api/forms \
  -H "Authorization: Bearer ${TOKEN}" \
  -H "Content-Type: application/json" \
  -d '{"title":"Test Form","description":"Test"}'

# Check response status code
# Verify response body
# Query database to confirm creation
```

Execution with test framework:

```bash
# If project has API test suite
npm run test:api
dotnet test --filter Category=API
pytest tests/api/
```

Services Required: backend and database.
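A sketch of the curl example as a scripted check using `requests` (the endpoint, token source, status code, and response fields are assumptions about the project under test):

```python
# Sketch: POST /api/forms, then re-fetch by id to confirm persistence.
import os
import requests

BASE = "http://localhost:5001/api/forms"

def test_post_forms_creates_form():
    headers = {"Authorization": f"Bearer {os.environ['TOKEN']}"}
    resp = requests.post(
        BASE,
        headers=headers,
        json={"title": "Test Form", "description": "Test"},
        timeout=10,
    )
    assert resp.status_code == 201, resp.text   # check response status code
    body = resp.json()
    assert body["title"] == "Test Form"         # verify response body
    # Confirm creation by reading the resource back
    check = requests.get(f"{BASE}/{body['id']}", headers=headers, timeout=10)
    assert check.status_code == 200
```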
## Unit Tests

Characteristics:

- Test isolated functions and classes, with external dependencies mocked
- Fastest test type; no services required
- Run these before slower suites

Execution:

```bash
# JavaScript/TypeScript
npm test
npm run test:unit
jest
vitest run

# .NET
dotnet test
dotnet test --filter Category=Unit

# Python
pytest tests/unit/
python -m pytest

# Go
go test ./...

# Rust
cargo test
```

Services Required: none.
## Integration Tests

Characteristics:

- Test components working together (for example, a service against a real database)
- Slower than unit tests; may need selected services running

Execution:

```bash
# Similar to unit tests but may need services
dotnet test --filter Category=Integration
pytest tests/integration/
npm run test:integration
```

Services Required: typically the database, and sometimes dependent services.
## Performance Tests

Characteristics:

- Measure throughput and latency under load
- Need a stable, warmed-up backend; results against a cold or shared environment are unreliable

Execution:

```bash
# Load testing tools: ab (Apache Bench), wrk, k6, JMeter
# Example: 100 requests, 10 concurrent
ab -n 100 -c 10 http://localhost:5001/api/forms

# Parse output for:
# - Requests per second
# - Response times (mean, median, p95, p99)
# - Failures
```

Services Required: backend and its database.
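A sketch of running `ab` and extracting the metrics listed above (the regexes match Apache Bench's standard report lines; `ab` must be installed on the machine):

```python
# Sketch: run Apache Bench and parse throughput, latency, and failures.
import re
import subprocess

def run_ab(url: str, n: int = 100, c: int = 10) -> dict:
    out = subprocess.run(
        ["ab", "-n", str(n), "-c", str(c), url],
        capture_output=True, text=True, check=True,
    ).stdout

    def grab(pattern: str) -> float:
        m = re.search(pattern, out)
        return float(m.group(1)) if m else float("nan")

    return {
        "requests_per_sec": grab(r"Requests per second:\s+([\d.]+)"),
        "mean_ms": grab(r"Time per request:\s+([\d.]+) \[ms\] \(mean\)"),
        "p95_ms": grab(r"95%\s+(\d+)"),
        "p99_ms": grab(r"99%\s+(\d+)"),
        "failed_requests": grab(r"Failed requests:\s+(\d+)"),
    }
```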
## Ensure Services Are Running

Adaptive Service Detection:

Frontend:

```bash
# Detection: package.json with "dev" script
# Start: npm run dev (background)
# Health check: curl http://localhost:5174
```

Backend:

```bash
# .NET: dotnet run (background)
# Node: npm start or node server.js
# Python: python app.py or flask run
# Go: go run main.go
```

Database:

```bash
# Detection: docker-compose.yml
# Start: docker-compose up -d postgres
# Health check: pg_isready or curl health endpoint
```
Start Services Script (template in bundled resources):

```bash
#!/bin/bash
# scripts/start_services.sh (customizable)

echo "Starting services..."

# Start database
docker-compose up -d postgres
sleep 2

# Start backend
cd backend && dotnet run &
BACKEND_PID=$!
sleep 5

# Start frontend
npm run dev &
FRONTEND_PID=$!
sleep 3

echo "Services started"
echo "Backend PID: $BACKEND_PID"
echo "Frontend PID: $FRONTEND_PID"

# Backend health check
curl http://localhost:5001/health || echo "Backend not ready"

# Frontend health check
curl http://localhost:5174 || echo "Frontend not ready"

# Database health check
docker exec postgres pg_isready -U user -d db
```
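Fixed `sleep` delays are fragile on slow machines. A small poll-until-healthy helper makes startup deterministic (a sketch; the health URLs are assumptions carried over from the template above):

```python
# Sketch: poll an HTTP endpoint until it answers, instead of fixed sleeps.
import time
import urllib.request

def wait_until_healthy(url: str, timeout_s: float = 60.0, interval_s: float = 1.0) -> bool:
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        try:
            urllib.request.urlopen(url, timeout=5)
            return True  # any successful response counts as "up"
        except OSError:
            pass  # not accepting connections yet; retry
        time.sleep(interval_s)
    return False

# Usage:
# wait_until_healthy("http://localhost:5001/health")
# wait_until_healthy("http://localhost:5174")
```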
## Parse Test Output

Different testing frameworks have different output formats. Parse adaptively:

Jest/Vitest Output:

```
PASS tests/unit/EmailService.test.ts
  ✓ sends email successfully (45ms)
  ✓ handles errors gracefully (12ms)

Test Suites: 1 passed, 1 total
Tests:       2 passed, 2 total
```

Parsing: look for `PASS`/`FAIL` per suite and the counts in the `Tests:` summary line.

dotnet test Output:

```
Passed! - Failed: 0, Passed: 10, Skipped: 0, Total: 10, Duration: 2 s
```

Parsing: extract the `Failed`, `Passed`, `Skipped`, and `Total` counts from the summary line.

pytest Output:

```
====== 5 passed, 2 failed in 3.42s ======
```

Parsing: read the passed/failed counts from the final summary line.

Script: `scripts/parse_test_output.py` (bundled) can parse common formats.
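A sketch of the kind of summary parsing involved (illustrative only, not the bundled script's actual code; the regexes cover the three formats shown above):

```python
# Sketch: extract pass/fail counts from raw test-runner output.
import re

def parse_summary(output: str) -> dict:
    # dotnet test: "Failed: 0, Passed: 10, Skipped: 0, Total: 10"
    m = re.search(r"Failed:\s*(\d+),\s*Passed:\s*(\d+)", output)
    if m:
        return {"failed": int(m.group(1)), "passed": int(m.group(2))}

    # pytest and Jest/Vitest summaries both contain "<n> passed" / "<n> failed"
    passed = re.search(r"(\d+) passed", output)
    failed = re.search(r"(\d+) failed", output)
    if passed or failed:
        return {
            "passed": int(passed.group(1)) if passed else 0,
            "failed": int(failed.group(1)) if failed else 0,
        }

    raise ValueError("Unrecognized test output format")
```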
## Analyze Failures

Common failure types:

1. Timeout Errors: an element, request, or service did not respond in time; often a service that is down or still starting
2. Assertion Failures: actual output did not match the expected value; usually a real bug or a stale expectation
3. Connection Errors: the test could not reach a service; check ports, URLs, and that services are running
4. Authentication Errors: missing, expired, or malformed credentials or tokens
5. Data Errors: missing fixtures, stale seed data, or state left over from a previous run
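A keyword-matching sketch for sorting an error message into these types (the patterns are illustrative and will need tuning per framework):

```python
# Sketch: classify an error message into one of the failure types above.
FAILURE_PATTERNS = {
    "Timeout": ["timeout", "timed out", "waiting for"],
    "Connection": ["econnrefused", "connection refused", "enotfound", "socket hang up"],
    "Authentication": ["401", "unauthorized", "403", "forbidden", "invalid token"],
    "Assertion": ["assertionerror", "expected", "to equal", "to be"],
    "Data": ["not found", "duplicate key", "violates", "null"],
}

def classify_failure(error_message: str) -> str:
    msg = error_message.lower()
    for failure_type, keywords in FAILURE_PATTERNS.items():
        if any(k in msg for k in keywords):
            return failure_type
    return "Unknown"
```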
## Test Failure Report

Write the report to `test-failures.md`, following `references/test-report-template.md`:

```markdown
# Test Failure Report

**Date:** [Date]
**Execution:** [Run #X]

## Summary

- **Total Tests:** X
- **Passed:** Y
- **Failed:** Z
- **Success Rate:** Y/X %

---

## Failed Test #1: [Test Name]

**Test File:** `path/to/test.spec.ts`
**Failure Type:** [Timeout / Assertion / Connection / etc.]

**Error Message:**
[Full error message and stack trace]

**Probable Cause:**
[Analysis of why test failed]

**Suggested Fix:**
[Specific actions to fix]

**Related Code:**
- `src/path/to/component.ts:line`
- `backend/path/to/service.cs:line`

---

## Failed Test #2: [Test Name]

[Same structure as above]

---

## Next Steps

1. [Action 1 to fix failures]
2. [Action 2 to fix failures]
3. Re-run tests after fixes

---

**Report for:** `test-fixer` skill
```
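A sketch of rendering one failed-test section from a parsed failure record (the field names are hypothetical and simply mirror the template above):

```python
# Sketch: fill one "Failed Test" section of the report from a dict.
def render_failure(index: int, f: dict) -> str:
    return (
        f"## Failed Test #{index}: {f['name']}\n\n"
        f"**Test File:** `{f['file']}`\n"
        f"**Failure Type:** {f['type']}\n\n"
        f"**Error Message:**\n{f['error']}\n\n"
        f"**Probable Cause:**\n{f['cause']}\n\n"
        f"**Suggested Fix:**\n{f['fix']}\n"
    )
```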
Hand the finished report to the `test-fixer` skill → fix failures → re-run.

## Execution Strategy

Batch Execution (default): run the whole suite in one pass, collect every failure, then analyze and report them together.

Individual Execution: run tests one at a time and analyze each result before moving on. Optional strategy for critical tests where an early failure should halt the run. A sketch of the overall execute-analyze-iterate loop follows below.
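A sketch of the outer loop tying the pieces together (it assumes the `detect_test_command` and `parse_summary` helpers sketched earlier, plus hypothetical `write_failure_report` and `apply_fixes` steps standing in for the report and the `test-fixer` handoff):

```python
# Sketch: execute → parse → report → fix → re-run, until green or give up.
import subprocess

def run_until_green(max_iterations: int = 5) -> bool:
    command = detect_test_command() or "npm test"
    for attempt in range(1, max_iterations + 1):
        result = subprocess.run(command, shell=True, capture_output=True, text=True)
        summary = parse_summary(result.stdout + result.stderr)
        print(f"Run #{attempt}: {summary['passed']} passed, {summary['failed']} failed")
        if summary["failed"] == 0:
            return True                               # all tests pass
        write_failure_report(summary, result.stdout)  # hypothetical reporter
        apply_fixes()                                 # hypothetical test-fixer handoff
    return False                                      # still failing after max attempts
```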
## Bundled Resources

- `scripts/parse_test_output.py` - Parse test output to structured format
- `scripts/start_services.sh` - Template for starting project services
- `references/test-report-template.md` - Template for failure reports
- `references/test-execution-patterns.md` - Execution patterns by test type