do-plan
// Use when creating or updating a feature plan document. Triggered by 'make a plan', 'plan this', 'flesh out the idea', or any request to scope and plan work before implementation.
| name | do-plan |
| description | Use when creating or updating a feature plan document. Triggered by 'make a plan', 'plan this', 'flesh out the idea', or any request to scope and plan work before implementation. |
| allowed-tools | Read, Write, Edit, Glob, Bash, AskUserQuestion, ToolSearch, WebSearch |
At the very start of this skill, write an in_progress marker:
sdlc-tool stage-marker --stage PLAN --status in_progress --issue-number {issue_number} 2>/dev/null || true
After the plan document is committed and pushed (end of Step 5), write the completion marker:
sdlc-tool stage-marker --stage PLAN --status completed --issue-number {issue_number} 2>/dev/null || true
Creates structured feature plans in docs/plans/ following Shape Up principles: narrow the problem, set appetite, rough out the solution, identify rabbit holes, and define boundaries.
docs/plans/{slug}.md

| Sub-file | Load when... |
|---|---|
| PLAN_TEMPLATE.md | Writing the plan document (copy the template into docs/plans/{slug}.md) |
| SCOPING.md | The request is vague, a grab-bag, or needs narrowing before planning |
| EXAMPLES.md | Deciding how to respond to a user request (vague vs. grab-bag vs. good) |
For cross-project work, the GH_REPO environment variable is automatically set by sdk_client.py. The gh CLI natively respects this env var, so all gh commands automatically target the correct repository. No --repo flags or manual parsing needed.
Before planning, verify the source issue has reconnaissance evidence:
python .claude/hooks/validators/validate_issue_recon.py ISSUE_NUMBER
If this fails, the issue needs a ## Recon Summary section added via /do-issue Step 3 (the reconnaissance routine). Do not proceed with planning until recon is validated — plans built on unverified assumptions produce rework.
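The validator's exact checks live in validate_issue_recon.py; as a rough illustration (the heading-based check and function name here are assumptions, not the real implementation), the gate amounts to something like:

```python
import re

def has_recon_summary(issue_body: str) -> bool:
    """Sketch of the gate: the issue body must contain a '## Recon Summary'
    heading with at least some content under it."""
    match = re.search(r"^## Recon Summary\s*\n(.+)", issue_body,
                      re.MULTILINE | re.DOTALL)
    return bool(match and match.group(1).strip())

stale = "Fix the login bug."
ready = "Fix the login bug.\n\n## Recon Summary\n- Confirmed: auth.py:42 drops the token"
print(has_recon_summary(stale), has_recon_summary(ready))  # False True
```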
The recon validator in Phase 0 confirms evidence exists in the issue. This step confirms the evidence is still true. Issues can sit for hours, days, or weeks before being planned. In that window, code refactors, other PRs land, sibling systems change, and referenced file:line pointers drift. A plan built on stale premises will produce rework even if the recon was rigorous when written.
Skip if: the issue was filed within the last hour AND no commits have landed on main since then. Otherwise this step is mandatory.
For each file:line reference cited in the issue body (including the Recon Summary):
Read the exact file:line and confirm the code at that location still matches what the issue describes. Line numbers drift under refactors; the symbol may have been renamed, moved, or deleted entirely.

For each cited sibling issue or PR (e.g., "see #867", "blocked by #743", "related to #825"):
gh issue view <N> --json state,closedAt,title
gh pr view <N> --json state,mergedAt,title 2>/dev/null
git log the merge and check whether it changed files the current issue references.

For the files most relevant to the issue (the ones cited in the issue body or surfaced by Phase 1's blast-radius analysis):
ISSUE_CREATED=$(gh issue view <N> --json createdAt -q .createdAt)
git log --oneline --since="$ISSUE_CREATED" -- <file1> <file2> ...
Check docs/plans/*.md for active plans touching the same area (ls -lt docs/plans/ then inspect the most recent few). Overlap with an active plan is a coordination signal, not necessarily a blocker — but it must be surfaced.

For bug issues specifically, reproduce the bug against current main — or, if reproduction is infeasible (e.g., the bug requires a production-only precondition), read the code path and confirm the defect is still present. If the bug is now unreproducible or the cited symptoms no longer occur, stop and ask the user whether to close the issue rather than plan a fix for a non-bug.
Produce a ## Freshness Check section in the plan document capturing what was re-verified, with one of four dispositions:
| Disposition | Meaning | What to do |
|---|---|---|
| Unchanged | Nothing relevant has moved. Issue claims still hold. | Proceed to Phase 1. Note the commit SHA used as the baseline in the Freshness Check section. |
| Minor drift | Line numbers moved but claims still hold. A cited PR merged but didn't change the root cause. | Update file:line references in the plan. Proceed to Phase 1. Note drift details in the section. |
| Major drift | Root cause has changed, an adjacent system now handles the concern, or a prior PR already fixes it. | Stop. Report findings to the user and ask whether to close the issue, revise its scope, or proceed on a revised premise. Do NOT silently build a plan for a stale problem. |
| Overlap | An active plan in docs/plans/ is already addressing the same area. | Surface the overlap. Ask whether to merge into the existing plan or coordinate. |
This section stays in the plan document as durable evidence of when and how the freshness check was performed — reviewers during critique and build can tell at a glance whether the plan's premises were verified at plan time.
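As an illustrative summary of the table (the precedence between Overlap and Major drift is an assumption; in practice the disposition is a judgment call, not code):

```python
def freshness_disposition(claims_hold: bool, refs_drifted: bool,
                          overlapping_plan: bool) -> str:
    """Map freshness-check findings to one of the four dispositions."""
    if overlapping_plan:
        return "Overlap"        # coordinate before planning
    if not claims_hold:
        return "Major drift"    # stop and report to the user
    if refs_drifted:
        return "Minor drift"    # update references, then proceed
    return "Unchanged"          # proceed; note baseline SHA
```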
Gather relevant external context before planning. This surfaces current documentation, ecosystem patterns, and known pitfalls that training data may not cover.
Skip if: The work is purely internal (no external libraries, APIs, or ecosystem patterns involved) — e.g., refactoring internal code, fixing a typo, or reorganizing files.
Load the WebSearch tool — WebSearch is a deferred tool; its schema must be loaded before use:
ToolSearch("select:WebSearch")
Generate 1-3 search queries from the issue context. Extract key technical terms from the issue title, problem statement, and desired outcome. Target current official documentation, ecosystem patterns, and known pitfalls.
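A naive sketch of the query-generation step (the tokenizer, stopword list, and title-pairing heuristic are all assumptions):

```python
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "to", "of", "in", "for", "and", "with",
             "is", "are", "on", "when", "it", "this", "that"}

def _tokens(s: str) -> list[str]:
    return re.findall(r"[a-z][a-z0-9_.-]+", s.lower())

def search_queries(issue_title: str, problem: str, max_queries: int = 3) -> list[str]:
    """Pair the issue title with the most frequent technical terms
    from the problem statement that the title doesn't already contain."""
    title_terms = set(_tokens(issue_title))
    counts = Counter(w for w in _tokens(problem)
                     if w not in STOPWORDS and w not in title_terms)
    terms = [w for w, _ in counts.most_common(max_queries)]
    return [f"{issue_title} {t}" for t in terms] or [issue_title]
```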
Execute each query using the WebSearch tool. Review the results for relevance.
Filter results — Only retain findings that are directly relevant to the technical approach. Discard generic results, marketing content, or tangentially related material.
Save valuable findings as memories for future plan reuse:
python -m tools.memory_search save "Finding: [concise description with source URL]" --importance 5.0 --source agent
Memory saves are fire-and-forget — if they fail, continue without error.
Write the ## Research section in the plan document with the findings. Include:
If no useful results were found, write: "No relevant external findings — proceeding with codebase context and training data."
Proceed to Phase 1 with the research context available.
Understand the request — gather evidence before scoping. Do NOT rely on inference; execute this checklist:
Read the ## Recon Summary section of the issue if present. Extract the "Confirmed," "Revised," "Pre-requisites," and "Dropped" buckets — these are direct inputs to the plan's Solution and No-Gos sections.

Narrow the problem - Challenge vague requests (see SCOPING.md if needed)
Blast radius analysis - If the change involves code modifications, run the code impact finder:
.venv/bin/python -m tools.code_impact_finder "PROBLEM_STATEMENT_HERE"
Use results to inform plan sections:
- impact_type="modify" -> Solution section
- impact_type="dependency" -> Risks section
- impact_type="test" -> Success Criteria section
- impact_type="config" -> Solution section
- impact_type="docs" -> Documentation section

Prior art search - Search closed issues and merged PRs for related work. This prevents proposing solutions that have already been tried (and failed) or re-solving problems that already have working implementations.
# Search closed issues for related keywords
gh issue list --state closed --search "KEYWORDS_HERE" --limit 10 --json number,title,closedAt,url
# Search merged PRs for related work
gh pr list --state merged --search "KEYWORDS_HERE" --limit 10 --json number,title,mergedAt,url
Use results to fill the Prior Art section in the plan. If multiple prior attempts addressed the same problem, also fill the Why Previous Fixes Failed section.

Skip if: Small appetite AND greenfield work (no existing code being modified).
4.5. xfail test search - For bug fixes, search the test suite for xfail markers related to the bug. These represent tests that document the bug but are marked as expected failures.
# Search for xfail markers in tests (both decorator and runtime forms)
grep -rn 'pytest.mark.xfail\|pytest.xfail(' tests/ --include="*.py" | head -20
For each xfail found that relates to the bug being fixed:
Pay special attention to pytest.xfail() calls inside test bodies. Unlike @pytest.mark.xfail decorators, runtime xfails short-circuit the test before reaching assertions — so they silently pass even after the bug is fixed. These are invisible to pytest's XPASS detection and MUST be explicitly listed as conversion targets in the plan.
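The distinction can be seen in a self-contained example (parse_tags, its bug, and the issue number are hypothetical; assumes pytest is installed):

```python
import pytest

def parse_tags(s: str) -> list[str]:
    # Hypothetical function with the bug under plan (#N): drops the last item.
    return s.split(",")[:-1]

# Runtime form -- a conversion target. The unconditional pytest.xfail()
# short-circuits BEFORE the assertion, so this stays XFAIL forever,
# even after the bug is fixed.
def test_parse_tags_runtime_xfail():
    pytest.xfail("bug #N: trailing item dropped")
    assert parse_tags("a,b") == ["a", "b"]  # never reached

# Decorator form with strict=True -- the shape to convert to. Once the
# bug is fixed this XPASSes, which fails the suite and forces removal
# of the marker.
@pytest.mark.xfail(strict=True, reason="bug #N: trailing item dropped")
def test_parse_tags_decorator_xfail():
    assert parse_tags("a,b") == ["a", "b"]
```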
Skip if: Not a bug fix, or no xfail tests found related to this bug.

4.7. Infrastructure scan - Scan docs/infra/ for existing infrastructure constraints relevant to this work.
# Check for existing infra docs that might contain relevant constraints
ls docs/infra/*.md 2>/dev/null | head -20
Review any relevant INFRA docs for rate limits, API quotas, deployment constraints, or tool rules
that should inform the plan. Reference findings in the Solution and Risks sections.
Skip if: docs/infra/ doesn't exist or contains no relevant docs.
Data flow trace - For changes involving multi-component interactions, trace the data flow end-to-end through the system. Start from the entry point (user action, API call, event trigger) and follow through each component, transformation, and storage layer.
# Read the entry point file
# Follow imports and function calls through the chain
# Document each transformation and handoff between components
Use results to fill the Data Flow section in the plan. This is critical for changes that span multiple modules -- it prevents fixes applied at the wrong layer.

Skip if: Change is isolated to a single file/function, or is purely documentation/config.
Failure analysis - If the prior art search (step 4) found previous attempts to fix the same problem, analyze why each attempt failed or was incomplete. Look for patterns:
Set appetite - Small / Medium / Large (see SCOPING.md for sizing guidance)
Rough out solution - Key components and flow, stay abstract
Race condition analysis - If the solution involves async operations, shared mutable state, or cross-process data flows, identify timing hazards. For each: specify what data/state must be established before dependent operations read it, and how the implementation prevents races. Skip if the change is purely synchronous and single-threaded.
Before writing the plan, resolve verifiable assumptions through time-boxed investigations.
### spike-N: [Description of what to verify]
- **Assumption**: "[The assumption being tested]"
- **Method**: web-research | prototype | code-read
- **Agent Type**: Explore (code-read), general-purpose (web-research), builder in worktree (prototype)
- **Time cap**: 5 minutes agent time
- **Result**: [filled after spike completes]
- **Confidence**: [high | medium | low]
- **Impact if false**: [what changes in the plan]
Prototype spikes run with isolation: "worktree" to avoid repo pollution. Each spike returns a yes/no/finding — no committed code, no half-implementations. Record outcomes in the ## Spike Results section of the plan.

Skip if: No verifiable assumptions identified, or all assumptions require human judgment (business decisions, priority calls).
Classification is mandatory - every plan MUST include a type: field (bug, feature, or chore).
Auto-Classification: When a message arrives via Telegram, the bridge auto-classifies it. Check if classification_type is available from the session context. If available, use it as the default type: value. The user can always override.
Create docs/plans/{slug}.md using the template from PLAN_TEMPLATE.md.
Conditional INFRA doc creation: If the plan introduces new dependencies, services, external API calls, or deployment changes, create docs/infra/{slug}.md using this structure:
# {Feature Name} — Infrastructure
## Current State
- [What infra exists today relevant to this work]
## New Requirements
- [New deps, services, API keys, config this plan adds]
- [Resource estimates: API quotas, storage, compute]
## Rules & Constraints
- [Rate limits, cost ceilings, API quotas]
- [Deployment topology requirements]
## Rollback Plan
- [How to revert infra changes if the feature is rolled back]
INFRA docs are NOT archived when plans ship — they accumulate in docs/infra/ as durable infrastructure knowledge. Skip if the plan involves no infrastructure changes.
After writing the plan, resolve the tracking issue first, then push.
REPO=$(gh repo view --json nameWithOwner -q .nameWithOwner)
Check for existing issue first! If the plan was created in response to an existing GitHub issue (e.g., "make a plan for issue #42"), do NOT create a new issue. Instead, get its title and link the plan:
EXISTING_ISSUE=42
ISSUE_TITLE=$(gh issue view $EXISTING_ISSUE --json title -q .title)
gh issue edit $EXISTING_ISSUE --add-label "plan"
Only create a NEW issue if the plan was initiated from scratch (not from an existing issue).
TYPE=$(grep '^type:' docs/plans/{slug}.md | sed 's/type: *//' | tr -d ' ')
if [ -z "$TYPE" ]; then
echo "ERROR: Plan must have a 'type:' field in frontmatter (bug, feature, or chore)"
exit 1
fi
ISSUE_TITLE="{Feature Name}"
gh issue create \
--title "$ISSUE_TITLE" \
--label "plan" \
--label "$TYPE" \
--body "$(cat <<EOF
**Type:** {type} | **Appetite:** {appetite} | **Status:** Planning
---
This issue is for tracking and discussion. The plan document is the source of truth.
EOF
)"
Plans are documentation, not code. ALWAYS commit and push to main first.
# MANDATORY: switch to main before committing the plan — regardless of current branch
git stash --include-untracked 2>/dev/null
git checkout main
git stash pop 2>/dev/null
git add docs/plans/{slug}.md && git commit -m "Plan: $ISSUE_TITLE"
git push
CRITICAL: Plan PRs must NOT close the tracking issue. The tracking issue stays open until the implementation PR merges with Closes #N. Never use closing keywords (Closes, Fixes, Resolves) when referencing the tracking issue in the plan PR body.
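As an illustrative guard (this helper and its regex are a sketch, not an existing tool), a plan PR body can be linted for closing keywords before the PR is opened:

```python
import re

# GitHub's closing keywords: close/closes/closed, fix/fixes/fixed,
# resolve/resolves/resolved, followed by an issue reference.
CLOSING_RE = re.compile(
    r"\b(close[sd]?|fix(e[sd])?|resolve[sd]?)\b[:\s]+#\d+",
    re.IGNORECASE,
)

def pr_body_closes_issue(body: str) -> bool:
    """Return True if the PR body would auto-close an issue on merge."""
    return bool(CLOSING_RE.search(body))
```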
PLAN_BRANCH=main  # or plan/{slug} if main is protected
PLAN_LINK="https://github.com/${REPO}/blob/${PLAN_BRANCH}/docs/plans/{slug}.md"
if [ -n "$EXISTING_ISSUE" ]; then
# Prepend plan link to existing issue body
EXISTING_BODY=$(gh issue view $EXISTING_ISSUE --json body -q .body)
gh issue edit $EXISTING_ISSUE --body "**Plan:** ${PLAN_LINK}
${EXISTING_BODY}"
else
# Update the newly created issue with the plan link
ISSUE_NUM=$(gh issue list --state open --search "$ISSUE_TITLE" --json number -q '.[0].number')
gh issue edit $ISSUE_NUM --body "**Plan:** ${PLAN_LINK}
$(gh issue view $ISSUE_NUM --json body -q .body)"
fi
After linking or creating: update the plan's tracking: field and commit.
After all task steps are written and before committing, verify that task bullets are consistent with the current Technical Approach (or spike findings if no Technical Approach section exists).
Look for task bullets that reference:
- an encoding or serialization choice the spikes disproved (e.g., "msgpack" when a spike found json.dumps() is correct)
- a concurrency pattern (e.g., asyncio.to_thread() vs. direct await) that contradicts the current approach

Concrete example: If spike-2 found that json.dumps() is the correct encoding method and a task bullet still says "msgpack-encoded payload", update that task bullet to say "json-encoded payload" before committing the plan.
This step is non-negotiable for plans with spikes. For plans with no spikes (Small appetite, greenfield), it is a 30-second sanity check — scan once and move on.
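A minimal sketch of the scan, assuming you collect the plan's task bullets and the spike-disproved terms by hand (both inputs are hypothetical; in practice this is a manual read of the plan document):

```python
def stale_task_bullets(task_bullets: list[str],
                       disproved_terms: dict[str, str]) -> list[str]:
    """Flag task bullets that still mention a term a spike disproved,
    together with the replacement to use."""
    flagged = []
    for bullet in task_bullets:
        for old, new in disproved_terms.items():
            if old in bullet:
                flagged.append(f"{bullet!r}: replace {old!r} with {new!r}")
    return flagged
```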
Before finalizing, check the tracking issue for comments that contain feedback, scope changes, or new context that should be incorporated into the plan.
# Extract issue number from plan frontmatter tracking URL
ISSUE_NUM=$(grep '^tracking:' docs/plans/{slug}.md | grep -oP '/issues/\K\d+')
if [ -n "$ISSUE_NUM" ]; then
# Get all comments with IDs, sorted chronologically
gh api repos/{owner}/{repo}/issues/${ISSUE_NUM}/comments \
--jq '.[] | {id: .id, author: .user.login, created: .created_at, body: .body}' 2>/dev/null
# Get the latest comment ID
LATEST_COMMENT_ID=$(gh api repos/{owner}/{repo}/issues/${ISSUE_NUM}/comments \
--jq '.[-1].id // empty' 2>/dev/null)
# Get the plan's recorded last_comment_id
PLAN_COMMENT_ID=$(grep '^last_comment_id:' docs/plans/{slug}.md | sed 's/last_comment_id: *//')
fi
If new comments exist (LATEST_COMMENT_ID != PLAN_COMMENT_ID):
Read each comment newer than the recorded last_comment_id and incorporate its feedback into the plan, then update last_comment_id: in the plan frontmatter to LATEST_COMMENT_ID.

If no tracking issue or no comments: Skip this step.
Plan critique is handled separately by /do-plan-critique (war room). This phase focuses only on surfacing questions that need human input before the critique step.
At this point the plan has been committed and pushed (to main, or a plan/{slug} branch if main is protected), the tracking issue carries the **Plan:** https://github.com/${REPO}/blob/${PLAN_BRANCH}/docs/plans/{slug}.md link, and the plan frontmatter has tracking: set to the issue URL. Then report to the user:

Plan draft created: docs/plans/{slug}.md
Tracking: {GitHub issue URL}
I've made the following key assumptions:
- [Assumption 1]
- [Assumption 2]
Please review the Open Questions section at the end of the plan and provide answers so I can finalize it.
After receiving answers:
Update the plan frontmatter to status: Ready.

If this is a revision pass (critique returned NEEDS REVISION / MAJOR REWORK / READY TO BUILD with concerns AND the plan has been revised based on critique findings):
2a. Set revision_applied: true in the plan frontmatter — this is the canonical signal that the plan has settled and the lock is no longer needed:
# In the plan frontmatter, set revision_applied: true
# Then commit and push
git add docs/plans/{slug}.md && git commit -m "Plan revision ({slug}): address critique findings"
git push
2b. Clear the plan-revising lock — immediately after committing and pushing, clear the lock so the SDLC router can route to build. The lock and revision_applied must move together; both reflect "plan is settled":
# Clear the plan_revising lock (G7 guard will self-heal even if this is skipped,
# but clearing explicitly is the correct signal that the revision is complete)
sdlc-tool meta-set --key plan_revising --value false \
--issue-number {issue_number} 2>/dev/null || true
Plan finalized: docs/plans/{slug}.md
Tracking: {GitHub issue URL}
I think I'm done, but I'm supposed to ask — does anything feel off? Missed edge cases, wrong assumptions, anything that doesn't sit right? Now is the cheapest time to catch it.
All plans go to: docs/plans/{slug}.md
Use snake_case for slugs: async_meeting_reschedule.md, dark_mode_toggle.md, api_response_caching.md
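The convention can be sketched as a small helper (hypothetical; the slug is usually chosen by hand when naming the plan file):

```python
import re

def plan_slug(feature_name: str) -> str:
    """snake_case slug: lowercase, runs of non-alphanumerics collapsed
    to single underscores, no leading/trailing underscores."""
    return re.sub(r"[^a-z0-9]+", "_", feature_name.lower()).strip("_")

print(plan_slug("Async Meeting Reschedule"))  # async_meeting_reschedule
```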
MANDATORY: Plans are committed on main. Before committing, switch to main regardless of current branch. Plans are documentation, not code — they do not belong on feature branches.
When the plan is executed (via /do-build), the build skill creates a feature branch, does the work there, and opens a PR.
Status and classification are tracked in the plan document's YAML frontmatter.
Required Frontmatter Fields:
- status: - Current state of the plan
- type: - Classification (bug, feature, or chore) - MANDATORY

Status Values:
- Planning - Initial draft being created
- Ready - Finalized and ready for implementation
- In Progress - Being implemented
- Complete - Shipped to production
- Cancelled - Not pursuing this

Update status as work progresses. Keep all tracking in the plan document itself.
Tracking issue lifecycle:
- When the plan moves to Ready or In Progress, update the GitHub issue status accordingly
- The issue is closed by the implementation PR (Closes #N in the do-build PR body) — do NOT close issues manually