# pr-comment-response
| name | pr-comment-response |
|---|---|
| description | Respond to PR review comments by building the smallest proof that confirms or refutes the claim before changing code or docs — never blindly trust a reviewer |
Respond to pull request review comments using a verify-first approach. Never blindly accept reviewer suggestions — build the smallest proof that can confirm or refute the claim before changing code or docs.
Usage: `/pr-comment-response <PR-number>`

Reviewers can be wrong. Treat every correctness claim as a hypothesis, not a fact. The workflow is:
```bash
# Get all review comments on the PR
gh pr view <PR-number> --json reviews,comments
gh api repos/{owner}/{repo}/pulls/<PR-number>/comments
```
Collect every comment. Categorize each one:
| Category | Action |
|---|---|
| Bug report | Full verify-first workflow (Steps 2-6) |
| Invariant / assertion-strength claim | Targeted boundary proof plus code-path audit |
| Style/naming suggestion | Evaluate on merit, apply if reasonable |
| Question / clarification | Reply with explanation |
| Refactor suggestion | Evaluate, apply if it improves clarity without risk |
| Nitpick | Apply if trivial and correct |
Focus the verify-first workflow on bug reports and correctness claims. These are the comments where blind trust is most dangerous.
Invariant / assertion-strength claims also require Steps 2–6 — treat the claimed weakness as the hypothesis, identify the boundary input that would expose it, and proceed through the same workflow.
For each comment claiming a bug, incorrect behavior, or assertion-strength gap:
Document your analysis before building any proof:
### Comment: [reviewer quote or summary]
- **File**: path/to/file.rs:LINE
- **Claim**: [what the reviewer says is wrong]
- **Trigger condition**: [input/state that would expose the bug]
- **Plausibility**: [high/medium/low — your initial assessment and why]
- **Best proof form**: [failing test / boundary test + code-path audit / doc-only verification]
Pick the lightest proof that can separate "reviewer is right" from "reviewer is wrong":
| Claim shape | Preferred proof |
|---|---|
| Behavioral bug | A failing test that directly targets the claimed trigger |
| Boundary or invariant-strength concern | The smallest boundary test that distinguishes the competing claims, plus a code-path audit if control flow matters |
| Docs / wording mismatch | Verify against implementation and update docs only if the reviewer is right |
If the right proof is a test, write the smallest test that directly targets the claimed trigger:

```rust
#[test]
fn handles_empty_input_without_panic() {
    let result = function_under_review(&[]);
    assert!(result.is_ok(), "empty input should not panic");
}
```
Do NOT touch the production code yet.
Run the proof you chose.
If the proof includes a test:
```bash
cargo test <test_name> -- --nocapture
```
Three outcomes matter:

**Confirmed**: the proof shows the implementation is wrong. Proceed to Step 5. Record:

- **Verdict**: CONFIRMED — [failing test, trace, or boundary proof]

**Docs only**: the implementation is correct, but the docs or comments misled the reviewer. Do not change the implementation. Fix only the docs or comments that created the confusion. Record:

- **Verdict**: DOCS ONLY — implementation is correct, wording was misleading
- **Evidence**: [test, trace, or code-path proof]

**Not reproduced**: the code already handles this case correctly. Do not change the code. If you wrote a diagnostic test and it passes, delete it unless it adds durable coverage for a realistic future risk. Record:

- **Verdict**: NOT REPRODUCED — code already handles this correctly
- **Verification artifact**: removed or not kept if it was diagnostic only
- **Response**: [draft reply to reviewer explaining why the code is correct]

Skip to Step 6 for this comment.
Now — and only now — make the smallest change the evidence supports:
```bash
# Confirm the fix
cargo test <test_name> -- --nocapture

# Check for regressions
cargo test
```
If broader checks fail, investigate — the fix may have been too broad or revealed a deeper issue.
Draft responses for each comment:
**For confirmed bugs:**

> Good catch! Confirmed — added a test (`test_name`) that reproduces this.
> Fixed in [commit SHA]. The issue was [brief explanation].

**For docs-only corrections:**

> I checked this against the current implementation. The code path is correct, but the wording was misleading.
> Clarified in [commit SHA] so the docs now match the actual behavior.

**For unconfirmed claims:**

> I investigated this with a targeted proof for the scenario you described.
> The current code already handles it correctly — [brief explanation of why the code is correct].
> Let me know if I'm missing something.

**For non-bug comments** (style, refactor, questions): address directly without the test workflow.
When a PR has many comments, repeat the workflow above for each one, keeping each verdict tied to its own proof.
After processing all comments, produce a summary:
## PR Comment Response Summary — PR #<number>
### Claims Investigated
| # | Comment | File | Verdict | Proof | Fix |
|---|---------|------|---------|-------|-----|
| 1 | [summary] | path:line | CONFIRMED | failing test `test_name` | commit SHA |
| 2 | [summary] | path:line | DOCS ONLY | code-path audit + targeted proof | commit SHA |
| 3 | [summary] | path:line | NOT REPRODUCED | verified & removed | N/A |
### Other Changes
- [Style fix]: renamed `foo` to `bar` per reviewer suggestion
- [Refactor]: extracted helper function for clarity
- [Reply]: explained why X is intentional
### Test Results
- New tests added: N
- All tests passing: yes/no
- Regressions found: none / [details]
- `/test-strategy` - Choose appropriate test type (unit, property, fuzz, Kani)
- `/security-reviewer` - For security-related review comments
- `/bench-compare` - If reviewer flags a performance issue