ln-782-test-runner
// Executes all test suites and reports results with coverage. Use when verifying that test infrastructure works after bootstrap.
| name | ln-782-test-runner |
| description | Executes all test suites and reports results with coverage. Use when verifying that test infrastructure works after bootstrap. |
| license | MIT |
| model | claude-haiku-4-5 |
Paths: File paths (`references/`, `../ln-*`) are relative to this skill directory.
MANDATORY READ: Load references/ci_tool_detection.md — test framework registry, compact flags (--tb=short -q, --reporter=json), and normalization chain.
Type: L3 Worker
Category: 7XX Project Bootstrap
Detects test frameworks, executes all test suites, and reports results including pass/fail counts and optional coverage.
Scope:

| Scenario | Use This Skill |
|---|---|
| Standalone-capable | Yes |
| Standalone test execution | Yes |
| CI/CD pipeline test step | Yes |

Out of Scope:

| Scenario | Use Instead |
|---|---|
| Build verification | ln-781 |
Identify test frameworks from project configuration files.
| Marker | Test Framework | Project Type |
|---|---|---|
| vitest.config.* | Vitest | Node.js |
| jest.config.* | Jest | Node.js |
| *.test.ts files (with package.json) | Vitest/Jest | Node.js |
| xunit / nunit in *.csproj | xUnit/NUnit | .NET |
| pytest.ini / conftest.py | pytest | Python |
| *_test.go files | go test | Go |
| tests/ with Cargo.toml | cargo test | Rust |
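The marker checks above can be sketched as a small detection pass. This is a minimal illustration, not the skill's actual implementation; the `MARKERS` list and `detect_frameworks` name are assumptions, and only a subset of the table's markers is shown.

```python
from pathlib import Path

# Ordered marker checks; framework names mirror the detection table above.
MARKERS = [
    ("vitest.config.*", "Vitest"),
    ("jest.config.*", "Jest"),
    ("pytest.ini", "pytest"),
    ("conftest.py", "pytest"),
    ("*_test.go", "go test"),
]

def detect_frameworks(root: str) -> list[str]:
    """Return the test frameworks whose marker files exist under root."""
    base = Path(root)
    found = []
    for pattern, framework in MARKERS:
        if any(base.rglob(pattern)) and framework not in found:
            found.append(framework)
    # Rust: a tests/ directory only counts alongside Cargo.toml.
    if (base / "Cargo.toml").exists() and (base / "tests").is_dir():
        found.append("cargo test")
    return found
```

Checking markers in a fixed order keeps the result deterministic when a repository contains multiple ecosystems.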
Run tests for each detected framework.
| Framework | Execution Strategy |
|---|---|
| Vitest | Run in single-run mode with JSON reporter |
| Jest | Run with JSON output |
| xUnit/NUnit | Run with logger for structured output |
| pytest | Run with JSON plugin or verbose output |
| go test | Run with JSON output flag |
| cargo test | Run with standard output parsing |
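One way to realize these strategies is a framework-to-command map fed to a subprocess runner. The exact invocations (e.g. going through `npx`) are assumptions for illustration; the flags shown (`vitest run --reporter=json`, `jest --json`, `pytest --tb=short -q`, `go test -json`) are the real compact/structured-output flags of those tools.

```python
import subprocess

# Compact single-run commands per framework; flags follow the
# execution strategies in the table above.
COMMANDS = {
    "Vitest":      ["npx", "vitest", "run", "--reporter=json"],
    "Jest":        ["npx", "jest", "--json"],
    "xUnit/NUnit": ["dotnet", "test", "--logger", "trx"],
    "pytest":      ["pytest", "--tb=short", "-q"],
    "go test":     ["go", "test", "-json", "./..."],
    "cargo test":  ["cargo", "test"],
}

def run_suite(framework: str, timeout: int = 300) -> subprocess.CompletedProcess:
    """Run one framework's suite, capturing output for later parsing."""
    return subprocess.run(
        COMMANDS[framework],
        capture_output=True, text=True, timeout=timeout,
    )
```

Capturing stdout and stderr as text lets the normalization step parse each framework's report without re-running anything.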
Extract test results from framework output.
| Metric | Description |
|---|---|
| total | Total number of tests discovered |
| passed | Tests that completed successfully |
| failed | Tests that failed assertions |
| skipped | Tests marked as skip/ignore |
| duration | Total execution time |
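As one concrete normalization example, the metrics above can be tallied from a `go test -json` event stream, where each line is a JSON object with `Action`, an optional `Test` name, and `Elapsed` seconds. The parser name is an assumption; the event format is Go's documented `test2json` output.

```python
import json

def parse_go_json(output: str) -> dict:
    """Tally per-test results from `go test -json` event lines."""
    counts = {"total": 0, "passed": 0, "failed": 0, "skipped": 0, "duration": 0.0}
    for line in output.splitlines():
        try:
            event = json.loads(line)
        except ValueError:
            continue  # go test may interleave non-JSON build output
        if "Test" not in event:
            continue  # package-level events carry no test name
        action = event.get("Action")
        if action in ("pass", "fail", "skip"):
            counts["total"] += 1
            key = {"pass": "passed", "fail": "failed", "skip": "skipped"}[action]
            counts[key] += 1
            counts["duration"] += event.get("Elapsed", 0.0)
    return counts
```

Each framework would get a parser with the same output shape, so downstream reporting stays framework-agnostic.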
When coverage is enabled, collect coverage metrics.
| Framework | Coverage Tool |
|---|---|
| Vitest/Jest | c8 / istanbul |
| .NET | coverlet |
| pytest | pytest-cov |
| Go | go test -cover |
| Rust | cargo-tarpaulin |
Coverage Metrics:
| Metric | Description |
|---|---|
| linesCovered | Lines executed during tests |
| linesTotal | Total lines in codebase |
| percentage | Coverage percentage |
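The `percentage` metric is derived from the two line counts. A minimal sketch (function name assumed):

```python
def coverage_percentage(lines_covered: int, lines_total: int) -> float:
    """Derive the percentage metric from linesCovered / linesTotal."""
    if lines_total == 0:
        return 0.0  # empty codebase: avoid division by zero
    return round(100.0 * lines_covered / lines_total, 2)
```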
Return structured results to orchestrator.
Result Structure:
| Field | Description |
|---|---|
| suiteName | Test suite identifier |
| framework | Detected test framework |
| status | passed / failed / error |
| total | Total test count |
| passed | Passed test count |
| failed | Failed test count |
| skipped | Skipped test count |
| duration | Execution time in seconds |
| failures | Array of failure details (test name, message) |
| coverage | Coverage metrics (if enabled) |
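The result structure above maps naturally onto a dataclass. This is a sketch of one possible shape, not the skill's wire format; the class and method names are assumptions.

```python
from dataclasses import dataclass, field, asdict
from typing import Optional

@dataclass
class SuiteResult:
    """Structured result returned to the orchestrator (fields per table above)."""
    suiteName: str
    framework: str
    status: str               # passed / failed / error
    total: int = 0
    passed: int = 0
    failed: int = 0
    skipped: int = 0
    duration: float = 0.0     # seconds
    failures: list = field(default_factory=list)  # [{"test": ..., "message": ...}]
    coverage: Optional[dict] = None               # present only when coverage enabled

    def to_payload(self) -> dict:
        d = asdict(self)
        if self.coverage is None:
            d.pop("coverage")  # omit rather than send null
        return d
```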
| Error Type | Action |
|---|---|
| No tests found | Report warning, status = passed (0 tests) |
| Test timeout | Report timeout, include partial results |
| Framework error | Log error, report as error status |
| Missing dependencies | Report missing test dependencies |
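The error-handling rules above can be condensed into one status decision. A hedged sketch (function name and parameters assumed), also folding in the `allowFailures` option:

```python
def classify_outcome(tests_found: int, failed: int, timed_out: bool,
                     framework_error: bool, allow_failures: bool = False) -> str:
    """Map the error-handling rules above onto a final status."""
    if framework_error:
        return "error"
    if timed_out:
        return "failed"  # timeout reported, partial results still included
    if tests_found == 0:
        return "passed"  # warning case: zero tests is not a failure
    if failed > 0 and not allow_failures:
        return "failed"
    return "passed"
```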
| Option | Default | Description |
|---|---|---|
| skipTests | false | Skip execution if no tests found |
| allowFailures | false | Report success even if tests fail |
| coverage | false | Generate coverage report |
| timeout | 300 | Max execution time in seconds |
| parallel | true | Run test suites in parallel when possible |
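The options table translates into a defaults dict that caller overrides are merged onto. A minimal sketch (function name assumed; rejecting unknown keys is a design choice, not something the table mandates):

```python
DEFAULTS = {
    "skipTests": False,
    "allowFailures": False,
    "coverage": False,
    "timeout": 300,    # seconds
    "parallel": True,
}

def resolve_options(overrides=None) -> dict:
    """Merge caller overrides onto the documented defaults."""
    opts = dict(DEFAULTS)
    if overrides:
        unknown = set(overrides) - set(DEFAULTS)
        if unknown:
            raise ValueError(f"unknown options: {sorted(unknown)}")
        opts.update(overrides)
    return opts
```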
MANDATORY READ: Load references/monitor_integration_pattern.md
For test suites expected to run >30 seconds, use Monitor to stream failures in real-time:
| Pattern | Command | timeout_ms |
|---|---|---|
| Stream failures | `{test_command} 2>&1 \| grep --line-buffered -E 'FAIL\|Error\|✗\|AssertionError'` | timeout * 1000 (from Options) |
| Full output | `{test_command} 2>&1` | timeout * 1000 |
React to first failure immediately while remaining tests continue running.
Fallback: if Monitor is unavailable (Bedrock/Vertex), use Bash(run_in_background=true).
Related: ../ln-780-bootstrap-verifier/SKILL.md

Version: 2.0.0
Last Updated: 2026-01-10