# analyze-changes
Analyze git changes (branch, commit, or current branch) and produce a design-oriented markup document with Mermaid diagrams, query/usage examples, data walkthroughs, PR context, and impact assessment.
| name | analyze-changes |
|---|---|
| description | Analyze git changes (branch, commit, or current branch) and produce a design-oriented markup document with Mermaid diagrams, query/usage examples, data walkthroughs, PR context, and impact assessment. |
| argument-hint | [branch-name \| commit-sha] |
| user-invocable | true |
Analyze the git changes specified by the argument and produce a design-oriented analysis document.
The argument $ARGUMENTS is either a branch name (analyzed against the `develop` base branch) or a single commit SHA.
If $ARGUMENTS is empty or blank, get the current branch name:
```shell
git rev-parse --abbrev-ref HEAD
```
Then treat it as a branch name (see branch handling below).
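This empty-argument fallback can be sketched as a tiny helper (the function name `resolve_ref` is illustrative, not part of the skill):

```shell
# Sketch: use the provided argument when present, else the current branch.
# The empty-argument path requires a git checkout; the non-empty path does not.
resolve_ref() {
  if [ -n "$1" ]; then
    printf '%s\n' "$1"              # caller supplied a branch or SHA
  else
    git rev-parse --abbrev-ref HEAD # fall back to the current branch
  fi
}

resolve_ref "feature/hash-join"   # prints feature/hash-join
```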
Otherwise, run these git commands to figure out what was provided:
```shell
git cat-file -t "$ARGUMENTS" 2>/dev/null
git branch --list "$ARGUMENTS"
```
If it resolves to a commit and is not a branch name (git branch --list "$ARGUMENTS" is empty), treat it as a single commit SHA. Gather:
- `git show --stat <SHA>`
- `git diff <SHA>~1..<SHA>` (the full diff)
- `git log -1 --format="%H%n%s%n%b" <SHA>` (the commit message)

If it resolves to a branch, gather:
- `git log --oneline develop..<branch>` (all commits on the branch)
- `git diff develop...<branch>` (combined diff)
- `git log --format="%H%n%s%n%b%n---" develop..<branch>` (all commit messages)

Use `gh` to find a PR associated with the branch or commit:
```shell
# For a branch:
gh pr list --head "<branch-name>" --json number,title,body,comments,reviews,url --limit 1

# For a commit (search PRs that contain the SHA):
gh pr list --search "<SHA>" --state all --json number,title,body,comments,reviews,url --limit 1
```
If a PR is found:
```shell
gh pr view <number> --json comments,reviews,reviewDecision  # top-level comments, review summaries, decision
gh api repos/{owner}/{repo}/pulls/<number>/comments         # line-level review comments
```
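Condensing the routing above into one sketch (a hypothetical helper; it takes the raw outputs of the two probe commands as arguments, so it runs without a live repository):

```shell
# classify_ref <cat-file-type> <branch-list-output>
# Mirrors the rule above: a non-empty `git branch --list` match means
# branch mode; otherwise a "commit" object type means single-SHA mode.
classify_ref() {
  obj_type="$1"
  branch_match="$2"
  if [ -n "$branch_match" ]; then
    echo "branch"
  elif [ "$obj_type" = "commit" ]; then
    echo "commit"
  else
    echo "unknown"
  fi
}

classify_ref "commit" ""             # prints commit
classify_ref "commit" "  my-branch"  # prints branch
```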
For every file that was added or modified in the diff, use the Read tool to read the final version of the file. This gives you the full source to understand the design, not just the diff hunks.
Study the diff, commit messages, PR description, and PR comments carefully. Identify the distinct features or algorithms, the concepts they share, and the motivation behind each change.
Create a file named `changes-analysis-<identifier>.md` in the project root directory, where `<identifier>` is the short SHA or branch name, sanitized for filenames (replace `/` with `-`).
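A quick sketch of the sanitization (the ref value here is made up; pure string manipulation, no git needed):

```shell
# Build the output filename from a ref, replacing "/" with "-".
ref="feature/hash-join"
identifier=$(printf '%s' "$ref" | tr '/' '-')   # sanitize "/" for filenames
printf '%s\n' "changes-analysis-${identifier}.md"
# prints changes-analysis-feature-hash-join.md
```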
The document MUST follow this structure:
# Change Analysis: <short description>
**Ref**: `<SHA or branch name>`
**PR**: [#<number> <title>](<url>) *(if PR exists, omit section if no PR)*
**Date**: <commit date(s)>
**Author**: <author(s)>
## Summary
<2-5 sentence high-level summary of what these changes accomplish and why.>
## Design
<Start with the shared concepts that apply across all changes: the overall
algorithm/approach, common data structures, shared invariants, configuration.
Then break into per-feature/per-algorithm subsections.>
### <Shared concept: e.g., overall algorithm, detection pipeline>
<Explain the top-level algorithm or decision flow that ties the changes together.
Include a Mermaid diagram showing the overall flow.>
### <Shared concept: e.g., common data structures, eligibility checks>
<Describe data structures, class relationships, and preconditions shared across
multiple features. Include a class diagram if new types are introduced.>
### <Feature/Algorithm 1>
<For each distinct feature or algorithm, write a SELF-CONTAINED section that
flows in this order:>
**Algorithm.** <What it does, when it triggers, what the eligibility conditions
are. Keep this concise; the reader should understand the approach before seeing
code or examples.>
**Execution flow:**
<Mermaid diagram showing the internal decision/data flow for this specific
feature. NOT a rehash of the overall diagram; zoom into this feature's logic.>
**Query/usage example:**
<A concrete, real-world example that triggers this code path. For a database,
this is a SQL query with schema context. For an API, this is a request/response.
For a library, this is a code snippet. Pick the most representative example;
prefer production queries (e.g., from benchmarks) over synthetic test data.>
**Data walkthrough:**
<Walk through the example step by step with ACTUAL VALUES. Show what the old
code did (with cost), then what the new code does (with cost). Use concrete
data (not "vertex A" but "Alice" or "n1"). Show the build phase output, the
probe phase per row, and the final result. This is what makes the design
understandable โ abstract descriptions alone are not enough.>
<Repeat for each feature/algorithm.>
### Summary
Provide a table mapping each feature to its trigger condition, implementation class, data structure, and real-world impact (benchmark numbers if available).
| Feature | Trigger | Implementation | Data Structures | Impact |
|---|---|---|---|---|
| ... | ... | ... | ... | ... |
## PR Discussion Summary
*(Only if a PR with comments exists)*
<Summarize key points from PR comments and reviews. Note any decisions made,
concerns raised, or alternatives discussed.>
## Impact Assessment
- **Risk level**: Low / Medium / High
- **Affected components**: <list>
- **Testing considerations**: <what should be tested>
- **Backward compatibility**: <any breaking changes?>
Design-first, not file-first. Never organize the document by file, by diff hunk, or by commit. Organize by algorithm/feature. A single feature may span multiple files; the document should explain it as one unit. Each section should be self-contained: a reader should understand one feature without reading the others.
Every feature section must have a concrete example with a data walkthrough. Abstract algorithm descriptions are insufficient. Show real values flowing through the system. For database changes, use actual SQL queries (from benchmarks or tests) and walk through with sample data. Show both old behavior and new behavior with cost comparison.
Mermaid diagrams are REQUIRED for non-trivial changes.
Do NOT include a "Changed Files" table. It duplicates what `git diff --stat` already shows and adds no design insight. The document is about understanding, not inventory.
Do NOT use file:line references. They go stale after any rebase or edit and clutter the text. Refer to classes and methods by name (e.g., "traceBackwardBranch() in MatchExecutionPlanner"). The reader can find them with grep.
Shared concepts go at the top of the Design section. If multiple features share a data structure, an invariant (like context isolation), a safety mechanism (like a runtime fallback), or a configuration knob, describe it once before the per-feature sections instead of repeating it in each.
Explain WHY, not just what. Use commit messages and PR description for motivation. When a design choice has trade-offs, state them. When something looks redundant (e.g., a check that always passes with current data), explain when it matters.
Be honest about limitations. If a feature is only exercised by synthetic tests and not by any production query, say so. If a pattern never actually filters in symmetric cases, explain what the real win is (cost reduction, not filtering).
Use the Agent tool with subagent_type: "Explore" if you need to understand surrounding code context that isn't in the diff, e.g., to find which real queries trigger a new optimization, or to understand the old behavior being replaced. When the question is reference accuracy (callers/overrides/usages of a Java symbol, "is this slot consumed anywhere", "which classes implement this interface"), instruct the Explore sub-agent in its prompt to use mcp-steroid PSI find-usages / find-implementations / type-hierarchy via steroid_execute_code rather than grep, whenever the mcp-steroid MCP server is reachable per the SessionStart hook. Sub-agents default to grep otherwise; an unannotated delegation routes through grep and silently misses polymorphic call sites and identifiers in Javadoc/comments. See CLAUDE.md § MCP Steroid, "Grep vs PSI: when to switch", for the full routing rule.
The output file goes in the project root directory (the current working directory).