# corgispec-verify
Automated verification gate between apply and review — runs tests, checks spec coverage, validates lint/build. No human gate required.
| field | value |
|---|---|
| name | corgispec-verify |
| description | Automated verification gate between apply and review — runs tests, checks spec coverage, validates lint/build. No human gate required. |
| license | MIT |
| compatibility | Requires openspec CLI. |
| metadata | {"author":"openspec","version":"1.0","generatedBy":"1.3.0"} |
Run automated verification on a completed Task Group before review. Verify proves the code works; review judges whether it's good enough.
Preconditions:
- `openspec/changes/<name>/.gitlab.yaml` or `.github.yaml` exists with group issue numbers (if tracked)
- If `isolation.mode` is `worktree`: a worktree exists for this change (error if not)

Read `openspec/config.yaml` for the schema (`gitlab-tracked` / `github-tracked`) and isolation settings.
If `isolation.mode: worktree`, changes live inside worktrees. Read `references/worktree-discovery.md` for the full discovery procedure. Quick summary:

- Run `openspec list --json`; if it returns changes, use them.
- Otherwise, scan `<isolation.root>/` directories; verify each with `git worktree list` and check that `openspec/changes/<name>/` exists inside.

If no isolation: run `openspec list --json` directly. Auto-select if exactly one change is found; prompt if multiple.
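The worktree check in the discovery summary can be sketched as a small shell helper. This is a sketch only — `verify_worktree` and its arguments are illustrative, not part of the openspec CLI:

```shell
# Sketch: confirm a candidate directory is a registered git worktree and
# actually contains the named change. Layout follows the conventions above;
# the helper itself is illustrative.
verify_worktree() {
  dir=$1
  name=$2
  # The directory must appear in `git worktree list` output...
  git worktree list --porcelain 2>/dev/null \
    | grep -qx "worktree $(cd "$dir" && pwd)" || return 1
  # ...and must hold the change inside openspec/changes/.
  [ -d "$dir/openspec/changes/$name" ]
}
```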
If name provided by user, use it directly.
Parse `tasks.md` to find all Task Groups (`## N.` headings). Identify which groups have ALL tasks marked `[x]`. These are the completed groups eligible for verification.
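A minimal sketch of this parsing step, assuming the `## N. Title` heading and `- [ ]` / `- [x]` task-line conventions described above (`completed_groups` is an illustrative helper, not an openspec command):

```shell
# Sketch: print the headings of Task Groups in tasks.md whose tasks are
# all checked. Unindented "- [ ]" / "- [x]" lines are assumed.
completed_groups() {
  awk '
    function flush() { if (group != "" && total > 0 && open == 0) print group }
    /^## [0-9]+\./   { flush(); group = $0; open = 0; total = 0 }
    /^[-*] \[ \]/    { open++; total++ }
    /^[-*] \[[xX]\]/ { total++ }
    END              { flush() }
  ' "$1"
}
```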
If tracked: read the tracking file (`.gitlab.yaml` / `.github.yaml`) for group names and issue numbers.
Default to the first completed group that hasn't been verified yet. If no completed groups exist, stop with guidance: "No completed groups found. Run /corgi-apply first."
Detect the project's test infrastructure and run the test suite. All detection is best-effort — skip gracefully if nothing is found.
Use the detection table in references/verification-steps.md for detailed detection conditions and commands. Summary:
| Detection Condition | Run Command | Failure Handling |
|---|---|---|
| `tests/` + `pytest.ini` or `pyproject.toml` with `[tool.pytest]` | `python -m pytest -v` | If tests fail → mark 🟡 Important, do NOT block |
| `package.json` with `"test"` script | `npm test` | Same as above |
| `bun test` exists or `bunfig.toml` | `bun test` | Same as above |
| `go.mod` + `_test.go` files | `go test ./...` | Same as above |
| No test infrastructure | — | Report "No test infrastructure detected", mark ⚠️, do not block |
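The detection column above can be approximated with a shell helper like this sketch — the greps are crude stand-ins for real config parsing:

```shell
# Best-effort test-runner detection mirroring the table above. Prints the
# command to run, or nothing when no test infrastructure is found.
detect_test_command() {
  if [ -d tests ] && { [ -f pytest.ini ] || grep -q '^\[tool\.pytest' pyproject.toml 2>/dev/null; }; then
    echo "python -m pytest -v"
  elif [ -f package.json ] && grep -q '"test"[[:space:]]*:' package.json; then
    echo "npm test"
  elif [ -f bunfig.toml ]; then
    echo "bun test"
  elif [ -f go.mod ] && find . -name '*_test.go' | grep -q .; then
    echo "go test ./..."
  fi
}
```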
Test result handling:
Tests are evidence. Failure is a signal, not a gate.
Compare the change's spec requirements against the actual implementation.
Steps:
- Read each `openspec/changes/<name>/specs/<capability>/spec.md` for the requirements.
- Output format follows the spec in `references/verification-steps.md`.
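As a sketch, the requirement names for the coverage table could be pulled out of the delta specs like this (assuming a `### Requirement: ...` heading convention inside the spec files — adjust the pattern if your specs differ):

```shell
# Sketch: extract requirement names from one or more spec.md files to seed
# the coverage table. The heading convention is an assumption.
list_requirements() {
  grep -h '^###* Requirement:' "$@" \
    | sed 's/^#*[[:space:]]*Requirement:[[:space:]]*//'
}
```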
Check for project-level lint and build configuration. Run what's available.
Lint detection:
| Condition | Command |
|---|---|
| `.ruff.toml` or `[tool.ruff]` in `pyproject.toml` | `ruff check` |
| `.eslintrc.*` or `eslint` in `package.json` | `npx eslint .` (or configured path) |
| `tsconfig.json` | `npx tsc --noEmit` |
| No linter found | Skip, mark ℹ️ |
Build detection:
| Condition | Command |
|---|---|
| `package.json` with `"build"` script | `npm run build` |
| `Makefile` with `build` target | `make build` |
| No build config | Skip, mark ℹ️ |
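Both detection tables can be folded into one best-effort pass, sketched below — the greps stand in for real config parsing, and `detect_checks` is an illustrative helper:

```shell
# Sketch: print one command per line for each lint/build tool found,
# mirroring the two tables above. Missing files are skipped silently.
detect_checks() {
  if [ -f .ruff.toml ] || grep -q '^\[tool\.ruff' pyproject.toml 2>/dev/null; then
    echo "ruff check"
  fi
  if ls .eslintrc* >/dev/null 2>&1 || grep -q '"eslint"' package.json 2>/dev/null; then
    echo "npx eslint ."
  fi
  if [ -f tsconfig.json ]; then
    echo "npx tsc --noEmit"
  fi
  if grep -q '"build"[[:space:]]*:' package.json 2>/dev/null; then
    echo "npm run build"
  fi
  if grep -q '^build[[:space:]]*:' Makefile 2>/dev/null; then
    echo "make build"
  fi
}
```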
Result handling: lint/build failures are signals, not gates — record them in the report; they do not block review.
Assemble all verification results into a structured report. Read references/verification-steps.md for detailed report composition guidance.
Report format (for terminal display and issue posting):
```markdown
## Verify Report: Group N, {group name}

### Test Results
✅ 12 passed / ❌ 0 failed / ⚠️ 2 skipped
<test output or "No test infrastructure detected">

### Spec Coverage
| Requirement | Coverage | Evidence |
|-------------|----------|----------|
| REQ-1: User login | ✅ Full | Implemented in auth/login.py:login() |
| REQ-2: Error handling | ⚠️ Partial | Happy path covered, null input not handled |
| REQ-3: Rate limiting | ❌ Missing | No rate limiting found in codebase |

**Summary**: ✅ 2/3 fully covered | ⚠️ 1 partial | ❌ 0 missing

### Lint / Build
✅ ruff: no errors | ✅ Build: success

### Verdict
✅ **PASS** — Ready for review
```
Based on the verdict from Step 6:
If ✅ PASS or ⚠️ PASS WITH WARNINGS:
- Post the report to the group's child issue: `glab issue note <child_iid> --message "$VERIFY_REPORT"` (GitLab) or `gh issue comment <child_number> --body "$VERIFY_REPORT"` (GitHub).
- Tell the user: "Run /corgi-review to review this group."
- If there were warnings: "N warnings were found. They are noted in the report but do not block review."

If ❌ FAIL:
- Tell the user: "N critical issues found. Fix these before review. Run /corgi-apply to address them."

Verify cannot replace review. Verify proves the code works; review judges whether the code is good enough. The human gate in review is always required after verify passes.
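The posting step above can be sketched as a small dispatcher, assuming `$VERIFY_REPORT` holds the rendered report and the schema string comes from `openspec/config.yaml` (`post_verify_report` is illustrative):

```shell
# Sketch: post the verify report to the tracked child issue. The glab/gh
# invocations are the ones given above; any other schema is skipped.
post_verify_report() {
  schema=$1   # e.g. "gitlab-tracked" or "github-tracked"
  issue=$2    # child issue iid/number from the tracking file
  case "$schema" in
    gitlab-tracked) glab issue note "$issue" --message "$VERIFY_REPORT" ;;
    github-tracked) gh issue comment "$issue" --body "$VERIFY_REPORT" ;;
    *) echo "untracked schema '$schema': skipping issue comment" ;;
  esac
}
```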
Notes:
- Read the `schema` field in `openspec/config.yaml`: use `glab` for `gitlab-tracked`, `gh` for `github-tracked`.
- If you reached the postconditions without producing an explicit PASS / PASS WITH WARNINGS / FAIL verdict, you violated the contract. Stop and determine the verdict.