# corgispec-gh-review
// Review a completed Task Group via GitHub Issues feedback using GitHub CLI. Mirrors the GitLab review flow but switched to GitHub and gh tooling.
| field | value |
|---|---|
| name | corgispec-gh-review |
| description | Review a completed Task Group via GitHub Issues feedback using GitHub CLI. Mirrors the GitLab review flow but switched to GitHub and gh tooling. |
| license | MIT |
| compatibility | Requires openspec CLI. |
| metadata | {"author":"openspec","version":"1.0","generatedBy":"1.3.0"} |
Review a completed Task Group with quality checks and interactive approval.
Preconditions:
- `openspec/changes/<name>/.github.yaml` exists with parent and group issue numbers
- If `isolation.mode` is `worktree`: a worktree exists for this change (error if not)

Read `openspec/config.yaml` for isolation settings.
If isolation.mode: worktree: Changes live inside worktrees, not the main checkout. Read references/worktree-discovery.md for the full discovery procedure. Quick summary:
- Run `openspec list --json`; if it returns changes, use them
- Otherwise scan `<isolation.root>/` directories, verify each with `git worktree list`, and check that `openspec/changes/<name>/` exists inside
- If no isolation: run `openspec list --json` directly. Auto-select if one change, prompt if multiple.
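The auto-select rule above can be sketched as a small helper (a sketch; `select_change` is a hypothetical name, not part of the openspec CLI):

```shell
# Hypothetical helper: given newline-separated change names (e.g. extracted
# from `openspec list --json`), auto-select when exactly one change exists,
# otherwise signal that the user must be prompted.
select_change() {
  changes="$1"
  count=$(printf '%s\n' "$changes" | grep -c .)
  if [ "$count" -eq 1 ]; then
    printf '%s\n' "$changes"
  else
    echo "PROMPT_USER"
  fi
}
```

With one name it prints that name; with two or more it prints `PROMPT_USER` so the caller knows to ask the user.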
If name provided by user, use it directly.
- Read `.github.yaml` for the group list; default to the group in `review` if the user does not specify one
- Read `tasks.md`. Stop if the selected group's tasks are not all complete.
Read child issue comments:
```
gh api repos/{owner}/{repo}/issues/<child_number>/comments --paginate
```
If reviewer comments exist, present them to the user before proceeding.
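The presence check can be sketched as follows (`has_comments` is a hypothetical helper; the `gh` call is shown commented out because it requires an authenticated session):

```shell
# Treat an empty JSON array as "no reviewer comments yet".
has_comments() {
  json="$1"
  [ -n "$json" ] && [ "$json" != "[]" ]
}

# comments=$(gh api "repos/{owner}/{repo}/issues/$CHILD_NUMBER/comments" --paginate)
# if has_comments "$comments"; then
#   echo "Reviewer comments found; presenting them before proceeding."
# fi
```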
Read the child issue description, which should contain the Rich Summary from apply. Display the group's completion overview to the user:
This gives the user context before the human gate.
Run automated quality checks and assemble a Review Report.
Quality checks inform the decision. They do not decide the outcome.
Only the user approves or rejects. Do not change labels, close issues, update parent progress, or append repair tasks in this step.
0. Anti-Rationalization Guard (DO THIS FIRST)
Before executing any quality checks, read and confirm these excuse-vs-rebuttal pairs. If any of these excuses is detected influencing your review judgment, STOP and recalibrate:
| Excuse (Agent might say) | Rebuttal (Built into skill) |
|---|---|
| "It runs, that's enough" | Code that runs but is unreadable/unsafe/architecturally wrong creates compound debt. Review is the quality gate. |
| "Small change, no need to review" | Many major incidents trace back to "small changes" whose review was skipped. |
| "I wrote it, so I know it's right" | Authors have blind spots about their own assumptions. Every piece of code needs another pair of eyes. |
| "AI-generated code should be fine" | AI code needs MORE scrutiny, not less. It is confident and plausible but can be wrong. |
| "Tests pass = good enough" | Tests are necessary but insufficient. They don't catch architecture issues, security holes, or readability problems. |
| "I'll clean it up later" | "Later" never comes. Review is the quality gate; demand cleanup now. |
| "Review takes too much time" | The cost of fixing unreviewed bugs is 10x the cost of catching them in review. |
Severity Classification
Tag EVERY finding with a severity level:
| Level | Marker | Definition | Example |
|---|---|---|---|
| Critical | 🔴 | Must fix to approve | Security holes, data loss risk, core feature broken |
| Important | 🟡 | Should fix or discuss before approve | Missing tests, poor error handling, spec non-compliance |
| Suggestion | 🔵 | Improvement, not required | Better naming, optional refactor, cleaner abstraction |
| Nit | ⚪ | Style preference, ignorable | Whitespace, formatting, personal taste |
| FYI | ℹ️ | Informational, no action needed | Future considerations, context notes |
When in doubt between two levels, choose the HIGHER severity.
a. Code Quality Review
b. Spec Verification
Read `specs/<capability>/spec.md` from the change directory.

c. Functional Verification
Detect the project type and gather evidence accordingly. All detection is best-effort, skip gracefully if not applicable.
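A best-effort sketch of the pytest case (`detect_pytest` is a hypothetical name; the config files checked are the conventional pytest carriers):

```shell
# Returns success if the directory looks like a pytest project:
# a tests/ directory plus one of the usual pytest config carriers.
detect_pytest() {
  d="$1"
  [ -d "$d/tests" ] || return 1
  [ -f "$d/pytest.ini" ] && return 0
  [ -f "$d/setup.cfg" ] && return 0
  grep -q '\[tool.pytest' "$d/pyproject.toml" 2>/dev/null
}
```

If it succeeds, run the test command and capture output; otherwise skip gracefully.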
- Python tests: look for a `tests/` directory plus `pytest.ini`, `pyproject.toml` with `[tool.pytest]`, or `setup.cfg`; run `python -m pytest` (or the appropriate test command) and capture output
- UI: look for frontend files (`.html`, `.tsx`, `.vue`, `.jsx`, etc.)
- CLI: look for entry points (`pyproject.toml` `[project.scripts]`, `bin/`, etc.)

d. Architecture Check
Review the implementation against system design principles:
Produce: status per check item with severity tag for any violations.
e. Performance Check
Identify common performance anti-patterns in the implementation:
Produce: status per check item with severity tag for any findings.
Core principle: Measure before optimizing. Flag patterns, do not prescribe specific optimizations without benchmark data.
For deeper checks, the review skill also provides references/security-checklist.md and references/performance-checklist.md if more scrutiny is warranted.
f. Assemble Review Report
Format for terminal display and GitHub issue:
## Review Report: Group N, {group name}
### Anti-Rationalization Check
Confirmed no excuses are influencing review judgment:
- [x] "It runs" → Review covers readability, architecture, security, performance
- [x] "Small change" → All changes reviewed regardless of size
### Code Quality
| File | Finding | Severity | Comment |
|------|---------|----------|---------|
| path/file.py | Clean structure, consistent naming | ℹ️ | - |
### Architecture
| Check | Status | Note |
|-------|--------|------|
| Follows existing patterns | ✅ | - |
| Module boundaries clean | ✅ | - |
| No circular dependencies | ✅ | - |
| Abstraction level appropriate | ✅ | - |
| New deps are necessary | ✅ | - |
### Performance
| Check | Status | Note |
|-------|--------|------|
| No N+1 queries | ✅ | - |
| Pagination present | ✅ | - |
| No blocking sync ops | ✅ | - |
| No unnecessary re-renders | ✅ | - |
| No unbounded loops | ✅ | - |
### Spec Coverage
| Requirement | Status | Severity (if issue) | Notes |
|-------------|--------|---------------------|-------|
| REQ-1: Basic functionality | ✅ | - | Implemented and tested |
| REQ-2: Edge cases | ⚠️ | 🔴 Critical | Missing: no error path for null input |
### Functional Verification
| Item | Result | Severity | Notes |
|------|--------|----------|-------|
| add(1,1) returns 2 | ✅ Pass | - | See test output below |
### Tests
{pytest output or "No test infrastructure detected"}
### UI Screenshots
{Playwright screenshots or "No UI detected"}
### Summary
🔴 N Critical | 🟡 N Important | 🔵 N Suggestions | ⚪ N Nits | ℹ️ N FYI
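The Summary counts can be tallied mechanically from the assembled report (a sketch; `count_marker` is a hypothetical helper):

```shell
# Count every occurrence of a severity marker in the report file.
# grep -o emits one line per match, so wc -l gives the total.
count_marker() {
  grep -o "$1" "$2" | wc -l | tr -d ' '
}

# Example: build part of the summary line from a report file.
# printf '🔴 %s Critical | 🟡 %s Important\n' \
#   "$(count_marker 🔴 report.md)" "$(count_marker 🟡 report.md)"
```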
g. Post Review Report to GitHub
```
gh issue comment <child_number> --body "$REVIEW_REPORT"
```
If screenshots were taken, upload first to get URLs, then embed them in the report markdown.
Posting the report records evidence for the human gate. Workflow state changes happen only in the approve or reject paths in Step 6.
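Long reports are easier to post from a file than through `--body` quoting (a sketch; the `gh` line is commented out since it needs an authenticated session, and `gh issue comment` accepts `--body-file`):

```shell
# Write the assembled report to a temp file, then post it as one comment.
REVIEW_REPORT="## Review Report: Group 1, example group"
report_file=$(mktemp)
printf '%s\n' "$REVIEW_REPORT" > "$report_file"

# gh issue comment "$CHILD_NUMBER" --body-file "$report_file"
```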
This step is MANDATORY. You MUST stop here and wait for user input.
Use the structured question/choice tool (e.g., question()) to present exactly these options.
Do NOT present them as plain text โ use the interactive selection tool so the user can click/select directly:
Quality checks inform the decision. Only the user approves or rejects.
Do NOT proceed until the user explicitly chooses.
Approve, advance the group
```
gh issue comment <child_number> --body "✅ Review passed.

<Review Summary>"
```
```
git add -A
git commit -m "feat(<change-name>): complete Group N review"
git push
```
If `isolation.mode: worktree`, run the git commands inside the worktree directory.

Check the current labels:
```
gh issue view <child_number> --json labels --jq '.labels[].name'
```
Confirm `review` is present. If not, STOP and report:
"⚠️ Expected label `review` but found: <actual labels>. Aborting label change."
```
gh issue edit <child_number> --remove-label "review" --add-label "done"
```
Note: Do NOT call gh issue close. Closing removes issues from board label columns.
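The label guard can be made a reusable check (a sketch; `require_label` is a hypothetical helper operating on the newline-separated output of the `gh issue view` command above):

```shell
# Succeeds only if the expected label appears exactly as one of the lines.
require_label() {
  expected="$1"
  labels="$2"
  printf '%s\n' "$labels" | grep -qx "$expected"
}

# labels=$(gh issue view "$CHILD_NUMBER" --json labels --jq '.labels[].name')
# require_label review "$labels" || { echo "Aborting label change."; exit 1; }
```

`grep -x` requires a whole-line match, so a label like `review-needed` does not pass for `review`.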
Update the parent issue body:
- Mark the group `done` in the Task Groups table
- Update the `Progress:` line (example: `2/3 groups completed`)

```
gh issue edit <parent_number> --body "$UPDATED_PARENT_BODY"
```
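Updating the progress line can be done with a small substitution (a sketch; `bump_progress` is a hypothetical helper and assumes the exact `Progress: X/Y` format):

```shell
# Rewrite "Progress: X/Y" to "Progress: NEW/Y", leaving the rest of the body intact.
bump_progress() {
  body="$1"
  new="$2"
  printf '%s\n' "$body" | sed -E "s|(Progress: )[0-9]+/([0-9]+)|\1${new}/\2|"
}
```

For example, `bump_progress "Progress: 1/3 groups completed" 2` prints `Progress: 2/3 groups completed`.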
Reject, enter repair
- Append fix tasks to `tasks.md`, following the existing `tasks.md` format
- Number them as `N.x` fix tasks (example: `- [ ] 1.4 Fix input validation in cli.py`)
- Post the rejection comment:

```
gh issue comment <child_number> --body "❌ Review failed.

**Feedback:**
{summary of user feedback}

**Fix Plan:**
{summary of proposed changes}

**Added Tasks:**
- [ ] N.x fix task 1
- [ ] N.x fix task 2"
```
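Numbering new `N.x` fix tasks means finding the highest existing subtask index for the group (a sketch; `next_fix_index` is a hypothetical helper and assumes the `- [ ] N.x` task format):

```shell
# Given a group number and a tasks.md path, print the next free N.x index.
next_fix_index() {
  group="$1"
  file="$2"
  last=$(grep -oE "^- \[.\] ${group}\.[0-9]+" "$file" \
    | grep -oE '[0-9]+$' | sort -n | tail -n 1)
  echo "${group}.$(( ${last:-0} + 1 ))"
}
```

With tasks `1.1` through `1.3` present, `next_fix_index 1 tasks.md` prints `1.4`.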
```
gh issue view <child_number> --json labels --jq '.labels[].name'
```
Confirm `review` is present. If not, STOP and report:
"⚠️ Expected label `review` but found: <actual labels>. Aborting label change."
```
gh issue edit <child_number> --remove-label "review" --add-label "in-progress"
```
Update the parent issue body to mark the group `in-progress` in the Task Groups table:
```
gh issue edit <parent_number> --body "$UPDATED_PARENT_BODY"
```
Tell the user: "Run `/corgi-apply` to start fixing."

Discuss
All label changes, parent updates, and repair task generation happen only inside the approve or reject paths after the user chooses.
- On approve: child issue labeled `done` (issue left open; closing removes it from the board)
- On reject: repair tasks appended to `tasks.md`, child issue moved to `in-progress`

If you reached postconditions without asking the user in Step 5, you violated the contract. Stop and re-do Step 5.
- Labels used: `todo`, `in-progress`, `review`, `done`
- `.github.yaml` tracks groups by their GitHub issue numbers