reviewing-code-in-faber
// Performs a code review inside a faber task and returns findings as the final assistant message. Use when the faber review command dispatches a task to review a branch, the current branch, or a pull request.
| name | reviewing-code-in-faber |
| description | Performs a code review inside a faber task and returns findings as the final assistant message. Use when the faber review command dispatches a task to review a branch, the current branch, or a pull request. |
Thorough analysis, selective output.
Perform a complete code review but only surface what matters. High-confidence findings go in the final message. Everything else is held back -- if the developer wants more, they can reopen the session and ask.
The review runs inside a faber task. You're already on a worktree checked out at the target ref, based on <reviewBase>. The output artifact is your final assistant message in this session -- not a GitHub review, not a comment, not a PR submission. Whoever ran faber review will read that message in their terminal (or via faber read <taskId>).
Classify every finding by how it was derived. The tier shapes how you frame the finding in the final message.
Things you can demonstrate are wrong by explaining exactly why they fail. Dead code with zero call sites. Broken method signatures. Code that won't behave as intended because of how the language, framework, or surrounding system works. Code that's wrong for an input or state it claims to handle.
These are facts, not opinions. State them plainly.
The code does something differently from how the rest of the codebase handles it. Every other hook sets retry: false but this one doesn't. Every other controller rescues ActiveRecord::RecordNotFound but this one lets it bubble. The naming convention used everywhere else wasn't followed here.
This includes deviations by omission: something the pattern calls for is missing. Every other public method in the class has tests but the new ones don't. Every other endpoint has authorization checks but this one skips it. The absence is the deviation.
You can demonstrate these by pointing to the existing pattern, but you can't know if the deviation is intentional. Surface them as questions: "I noticed this differs from the pattern in X. Intentional?"
Findings that require reasoning about runtime behavior, user interaction, timing, or failure modes. "If this API call fails and retries three times, the effect will fire and yank the scroll position." "Under concurrent access this could race."
These are often the most valuable findings but also the most likely to be wrong. Surface them, but own the ambiguity. Frame them as reasoning, not facts: "I think this might..." or "I traced this path and it looks like..." The author knows more about how this runs in production than you do.
Your prompt names the target and the base branch -- something like "Review branch feature-x against main" or "Review pull request #123 against main". HEAD is already on the target. Confirm with git status if you need to.
For PR reviews, the prompt includes the PR URL and title. Fetch the PR body and any linked Linear issue to understand the intended change:
gh pr view <number> --json body,title,author
If the PR body references an issue (e.g. ENG-123), load the using-linear skill and read the issue to understand the acceptance criteria.
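The issue-key scan can be sketched as a plain grep over the PR body. The `body` value here is a hypothetical stand-in for what `gh pr view <number> --json body` would return:

```shell
# Hypothetical PR body; in the real flow this comes from
# `gh pr view <number> --json body --jq .body`.
body='Fixes ENG-123

See the linked issue for acceptance criteria.'

# Linear issue keys look like TEAM-123: uppercase letters, a dash, digits.
issue=$(printf '%s\n' "$body" | grep -oE '[A-Z]+-[0-9]+' | head -n1)
echo "$issue"
# prints ENG-123
```

If `issue` comes back empty, there's no linked issue to chase -- review from the diff alone.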
For branch or current-branch reviews there's often no PR and no issue. Work from commit messages and the diff. Don't go hunting for context that isn't there.
The prompt may include ## Original task (the prompt the task was originally given, when reviewing a faber task) and ## Additional context (any extra direction passed via --context). Treat these as the stated intent for the change. They replace the role a PR body would play in PR mode -- don't go hunting for context elsewhere when these are present.
Get the full diff against the review base:
git diff <reviewBase>...HEAD
And the list of changed files:
git diff <reviewBase>...HEAD --name-only
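The three-dot form matters: it diffs HEAD against the merge-base with the review base, so commits that landed on the base after the branch point don't show up as part of the change. A throwaway-repo sketch of the difference:

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q -b main .
git -c user.email=r@example.com -c user.name=reviewer commit -q --allow-empty -m 'base'
git checkout -q -b feature
echo change > feature.txt
git add feature.txt
git -c user.email=r@example.com -c user.name=reviewer commit -q -m 'feature work'
git checkout -q main
echo drift > drift.txt
git add drift.txt
git -c user.email=r@example.com -c user.name=reviewer commit -q -m 'post-branch drift on main'
git checkout -q feature
# Three-dot: only the branch's own change shows up, not main's drift.
git diff main...HEAD --name-only
# prints feature.txt
```

A two-dot `git diff main..HEAD` in the same repo would also report drift.txt, which is noise for a review.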
The base is whatever the prompt named. For PR reviews you can also use gh pr diff <number> -- same content, different framing.
Skip this step for branch or current-branch reviews.
For PR reviews, read any existing review comments. The developer asked faber for a review, but human reviewers may have already raised concerns -- duplicating those wastes the reader's time:
gh api repos/<owner>/<repo>/pulls/<number>/comments --paginate
Note what's been raised and what's been resolved. You'll dedupe against this when you write the final message.
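Deduping is easier with a normalized path:line key per existing thread. A sketch, with a local JSON file standing in for the paginated API response:

```shell
# Stand-in for the `gh api .../pulls/<number>/comments --paginate` response.
cat > existing-comments.json <<'EOF'
[
  {"path": "src/foo.ts", "line": 42, "body": "already raised by a human"},
  {"path": "src/foo.ts", "line": 42, "body": "a reply in the same thread"}
]
EOF
python3 <<'PY'
import json
comments = json.load(open("existing-comments.json"))
# One path:line key per thread -- any finding landing on a key in this
# set has likely already been raised.
seen = sorted({f'{c["path"]}:{c["line"]}' for c in comments})
print("\n".join(seen))
PY
# prints src/foo.ts:42
```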
Before looking for issues, work out what the code actually does and why it does it that way.
Read the changed lines in context -- what calls them, what they call, how they fit into the system around them.
Pay particular attention to external inputs and systems the code interacts with: their full range of behaviour is rarely obvious from the call site. When the code uses something external -- a command, an API, a library -- look up what it actually does. Don't trust the calling code's assumptions.
For each significant piece, ask what alternatives the author implicitly rejected and what constraints they were optimising for. The choice the code makes is information about the problem the author was trying to solve.
Now find where it fails. With the understanding you built in step 4, trace each thing the code claims to handle and look for where it falls over. If something only works on the happy path, that's a real weakness. Code that fails for an input or state it claims to support is a Tier 1 finding.
Also look for places where the code is correct but the approach is questionable. Ask yourself "why this way?" of every significant choice.
Look for what's missing, not just what's wrong. If the codebase establishes a pattern (tests for each method, migrations paired with schema changes, docs updated alongside config), check whether the change follows it. Missing artifacts that the pattern calls for are Tier 2 findings.
Also worth a look:
This is where the most valuable findings come from. Don't shortcut it.
Go through every finding and assign a confidence tier:
If a finding is weaker than Tier 3 -- a gut feeling you can't ground in anything -- drop it.
Refer to Kind Feedback for instructions on your communication style. Your audience is the developer who ran faber review -- usually the author of the change -- reading your message in a terminal.
Structure:
path/to/file.ts:42 reference, a short description of the concern, and (where useful) a suggested fix.

Frame findings according to their tier:
- Tier 1, stated plainly: "src/foo.ts:42 -- this branch is unreachable because kind was narrowed to 'a' on line 38."
- Tier 2, framed as a question: "src/foo.ts:42 -- the other handlers in this file call logError before re-throwing. Intentional that this one doesn't?"
- Tier 3, framed as reasoning: "src/foo.ts:42 -- I think this might race if two callers hit it at the same time. Worth checking."

If the change is clean, say so in one or two sentences and stop. Don't pad. Don't invent findings.
Do not include:
State where the fix belongs, not where the symptom appears. The reader should be able to understand the issue with minimal effort.
Every finding must start with a short intent label ("Nit:", "Minor:", "Thought:", "Important:", "Blocking:") that tells the author how much weight to give it. Avoid disclaimer sentences. The label handles severity signalling.
# Review Findings
<summary of the change and your read>
## 1. `src/foo.ts:42`
<comment text>
```suggestion
replacement line
```
## 2. `src/bar.ts:15-18`
<comment text>
```suggestion
replacement block
```
Principles for giving kind, honest, effective feedback. Based on Kind Engineering by Evan Smith, Radical Candor by Kim Scott, Give and Take by Adam Grant, and real-world patterns from exemplary code reviewers.
Nice is polite and agreeable. Kind is invested in helping someone grow.
Be kind, not just nice. Meet people where they are, not where you are.
Every piece of feedback needs all three:
Take the listener's emotions into account, not your own. Set your own feelings aside so the message stays clear. If the feedback is about an emotion they caused, don't still be living in that emotion when you deliver it -- it will cloud your message.
Demonstrate expertise and humility. Call out what went well alongside what needs to change. Building credibility over time makes future feedback land better.
Show your work. Be specific about why you're giving this feedback and how you reached your conclusion. Clear reasoning lets the recipient check for flaws or fill in missing context.
| Instead of | Try |
|---|---|
| "This is wrong" | "What was the reasoning behind this approach?" |
| "We do it like this" | "Have you considered X? It might handle Y better because..." |
| "You should know better" | "I think there might be an issue with Z here ā what do you think?" |
When something is unclear, frame it as your experience, not the author's failure. This invites clarification without blame.
| Instead of | Try |
|---|---|
| "This is confusing" | "Upon first read, I was a little confused by this condition" |
| "This is hard to follow" | "It took me a few reads of this method to get it, but I got there in the end" |
| "You need to explain this" | "I wonder if a short doc comment would help here ā it took me a moment to see what's happening" |
The author needs to know how much weight to give your feedback. Most of the time your tone already does this. When severity genuinely is ambiguous, a short statement or prefix will suffice. Don't restate what the tone or a prefix already conveys.
app/contracts/ for these? Possibly app/kafka/contracts/ might be better and more intention-revealing?"

Follow this section only when the prompt explicitly directs you to submit the review to GitHub. In all other circumstances, the final message is the artefact and you do not post.
The draft's path:line references are approximate. For each comment, find the exact line GitHub will accept.
Use faber's helper once per comment:
faber agent comment-targets <pr-number> <path> <line>
Output is plain text -- <line>: <content> for every right-side line within ±5 of <line> that's inside a changed hunk:
38: function compute(input) {
39: const x = input * 2
40: return x
41: }
42: const foo = compute(input)
Read the output. Pick the line whose content matches what the comment is really about -- based on the comment's intent, not numerical proximity to the draft's line number.
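That content-first pick can be sketched with plain text tools; `targets.txt` stands in for the helper's stdout:

```shell
# Stand-in for `faber agent comment-targets <pr-number> src/foo.ts 42` output.
cat > targets.txt <<'EOF'
38: function compute(input) {
39:   const x = input * 2
40:   return x
41: }
42: const foo = compute(input)
EOF
# The draft comment is about the `foo` assignment, so match on content
# rather than trusting the draft's approximate line number.
target=$(grep -F 'const foo' targets.txt | cut -d: -f1)
echo "$target"
# prints 42
```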
Recovery for empty output:
No output. Either the file isn't changed by this PR, or every line in [draft - 5, draft + 5] was a removed line. Try --all:
faber agent comment-targets <pr-number> <path> --all
This lists every targetable line in <path>. If still no output, the file either isn't changed by this PR or only has deletions (which GitHub doesn't accept as comment targets). Drop the comment in either case.
Three outcomes:
Note in the submission report any comments that were dropped or moved, with the original location and the reason.
<pr-number> is the PR number. The subcommand uses gh to resolve the PR's repo from the current working directory's git remote, so the agent must be running inside the worktree (it always is during a faber task).
Collect the surviving comments with their path/line (and start_line only when the comment covers a multi-line range). Use side: "RIGHT" -- line is the line number in the new file.
Comment bodies preserve their intent labels and any ```suggestion``` fences from the draft.
Always pass the payload via --input - (stdin) rather than -f field flags. -f converts numbers to strings and the API rejects integer fields like line with 422.
Pipe directly into gh using a heredoc:
gh api repos/<owner>/<repo>/pulls/<number>/reviews --method POST --input - <<'PAYLOAD'
{
  "event": "COMMENT",
  "body": "<the summary text from the review>",
  "comments": [
    {
      "path": "src/foo.ts",
      "line": 42,
      "side": "RIGHT",
      "body": "Blocking: <comment text>\n\n```suggestion\nreplacement line\n```"
    }
  ]
}
PAYLOAD
This is a single gh invocation. The cleanroom allow-list permits gh *.
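The payload can be sanity-checked locally before posting, which catches the integer-vs-string failure mode without an API round trip. A sketch with a stand-in payload:

```shell
cat > payload.json <<'PAYLOAD'
{
  "event": "COMMENT",
  "body": "summary text",
  "comments": [
    {"path": "src/foo.ts", "line": 42, "side": "RIGHT", "body": "Nit: example"}
  ]
}
PAYLOAD
python3 <<'PY'
import json
payload = json.load(open("payload.json"))
# `line` must be a JSON number; a string here is exactly the 422
# that the -f field flags would cause.
assert isinstance(payload["comments"][0]["line"], int)
print("payload ok")
PY
# prints payload ok
```

The checked file can then be posted with `--input payload.json` instead of the heredoc.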
Event selection:
- Any Blocking: or Important: finding -- REQUEST_CHANGES.
- Otherwise -- COMMENT.
- Never APPROVE. Approval is a human decision.

If the API returns non-2xx, produce a # Review Submission Failed final message including the error body and the comments it tried to post, so the user can retry manually. Don't auto-retry.
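The event choice reduces to a label scan over the finding bodies; a minimal sketch:

```shell
# Two hypothetical findings, one carrying a Blocking: label.
findings='Nit: rename this variable
Blocking: unreachable branch at src/foo.ts:42'

# REQUEST_CHANGES only when a Blocking: or Important: label is present;
# APPROVE is never an option for the agent.
if printf '%s\n' "$findings" | grep -qE '^(Blocking|Important):'; then
  event=REQUEST_CHANGES
else
  event=COMMENT
fi
echo "$event"
# prints REQUEST_CHANGES
```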
The response includes html_url. Use that in the submission report.
The submission report is the final message when --post was set. It replaces the # Review Findings artefact:
# Review Posted
<one-line summary: number of comments + event type>
<URL of the submitted review>
## Summary
<the body that was posted as the review summary>
## Comments
### 1. `path/to/file.ts:42` -- Blocking
<the comment body, as posted>
### 2. `path/to/file.ts:60-63` -- Nit
<the comment body, as posted>
If the review had no inline findings, the report shows the summary and "0 inline comments". Note any comments that were dropped or moved during line mapping.