forge-spec
// From-scratch spec reconstruction. Surveys the codebase, identifies behavioral domains, and guides a user-confirmed authoring loop that produces a complete, anchor-keyed spec under ai-docs/spec/.
| name | forge-spec |
| description | From-scratch spec reconstruction. Surveys the codebase, identifies behavioral domains, and guides a user-confirmed authoring loop that produces a complete, anchor-keyed spec under ai-docs/spec/. |
| disable-model-invocation | true |
| argument-hint | [optional: starting domain name] |
Target: $ARGUMENTS
Hard rules:
- Run `ws-print-infra spec-conventions.md` (Bash) before any spec write — conventions are canonical there.
- Survey subqueries use `ws-subquery --deep-research` (sonnet); the clerk uses `ws-oneshot-agent -p clerk --model sonnet`.
- Archival (`git mv ai-docs/spec/*`) requires explicit user confirmation before executing.
- Run `ws-generate-spec-stem <descriptive-slug>` before every anchor insertion.
- Run `ws-spec-build-index` (no args) after every spec file write or update.
- Name domain tasks `forge-spec-<domain>` (e.g., `forge-spec-auth`).

On start, call `TaskList` and scan for tasks whose name begins with `forge-spec-`. If any exist, resume the per-domain loop, skipping tasks already marked `completed`.

Step 1: check `ai-docs/spec/`. If the directory is empty or absent, skip to step 2. Otherwise the existing files are archived to `ai-docs/ref/old-spec/YYMMDD/` (today's date) and used as reference only — not as a base to extend. Ask the user to confirm before proceeding, then:

YYMMDD=$(date +%y%m%d)
mkdir -p ai-docs/ref/old-spec/$YYMMDD
git mv ai-docs/spec/* ai-docs/ref/old-spec/$YYMMDD/
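A defensive variant of this archival step can be sketched as follows. The guard is an assumption for illustration, not part of the skill; it simply makes the "empty or absent" branch explicit:

```shell
# Sketch: guard the archival so it is a no-op when there is nothing to move.
cd "$(mktemp -d)"                     # demo only: run in a scratch directory
YYMMDD=$(date +%y%m%d)
if [ -d ai-docs/spec ] && [ -n "$(ls -A ai-docs/spec 2>/dev/null)" ]; then
  mkdir -p "ai-docs/ref/old-spec/$YYMMDD"
  git mv ai-docs/spec/* "ai-docs/ref/old-spec/$YYMMDD/"
else
  echo "ai-docs/spec/ empty or absent; skipping archival"
fi
```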
Issue all four queries in a single response turn as parallel Bash calls:
ws-subquery --deep-research - <<'PROMPT'
Survey the project's directory and module structure.
Steps:
1. Run `find . -type f` to enumerate the source tree layout.
2. Identify top-level modules, packages, or service boundaries.
3. Return a structured summary: module names, paths, brief description of purpose derived from file names and directory layout.
Format your response as a markdown bullet list grouped by module/area.
PROMPT
ws-subquery --deep-research - <<'PROMPT'
Survey all tickets under ai-docs/tickets/ (all statuses: idea/, todo/, ready/, .done/, .dropped/).
Steps:
1. Glob ai-docs/tickets/**/*.md and read each file.
2. Extract: ticket title, status directory, and any behavior or feature described as public-facing or user-visible.
3. Group by apparent behavioral domain (infer from ticket title and content).
Return a grouped list: domain → behaviors/features mentioned in tickets.
PROMPT
ws-subquery --deep-research - <<'PROMPT'
Survey the archived spec files in ai-docs/ref/old-spec/ (most recent YYMMDD subdirectory).
Steps:
1. Glob ai-docs/ref/old-spec/**/*.md and read each file.
2. For each file: extract the title, summary, and all ## headings.
3. Note which features are marked 🚧 (planned) vs unmarked (implemented).
Return: a flat list of domain names found, with their heading topics. These are reference candidates only — do not treat them as authoritative.
PROMPT
ws-subquery --deep-research - <<'PROMPT'
Survey recent commit history for behavioral signals.
Steps:
1. Run `git log --oneline -100`.
2. Identify commits that mention user-visible features, API changes, CLI changes, or spec updates (feat:, fix:, spec: prefixes and commit bodies referencing spec-stems).
3. Group commit messages by apparent behavioral area.
Return: a grouped list of behavioral areas → representative commit messages. Omit chore/docs/refactor commits unless they reference spec-stems.
PROMPT
Wait for all four to return before synthesizing.
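The fan-out/join above can be sketched as plain Bash. Here `echo` stands in for the real `ws-subquery --deep-research` calls, which are not reproduced:

```shell
# Sketch of the parallel fan-out: one background job per survey query,
# then a single join point before any synthesis happens.
tmp=$(mktemp -d)
for q in structure tickets old-spec history; do
  ( echo "survey result: $q" > "$tmp/$q.out" ) &   # one background job per survey
done
wait                                               # block until all four return
cat "$tmp"/*.out                                   # synthesize only after the join
```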
Combine the four survey returns into a single list of candidate behavioral domains.
Present the candidate domains to the user in a numbered list. Tell the user they may reorder, merge, split, rename, or drop entries before proceeding.
Wait for user response. Apply any adjustments. Do not proceed until the user explicitly confirms the final list.
Call TaskCreate once — one task per confirmed domain, in confirmed order:
TaskCreate(
name = "forge-spec-<domain>",
description = """
Spec authoring for domain: <domain>
Source paths: <inferred module paths for this domain>
Old spec files: <archived spec files that covered this domain, or none>
"""
)
Proceed immediately to On: per-domain with the first domain.
For each domain task in order, skipping tasks with status completed:
Call TaskUpdate to set the domain task status to in_progress.
Issue all four queries in a single response turn as parallel Bash calls:
ws-subquery --deep-research - <<PROMPT
Survey source code for the <domain> domain.
Module paths: <paths from task description>
Steps:
1. Read all source files under the listed paths.
2. Identify caller-visible behaviors: public functions, CLI commands, HTTP endpoints, config options, output formats, events, or any interface a consumer can observe.
3. For each behavior: note whether it appears fully implemented or partially implemented (stubs, TODOs, feature flags).
Return: bullet list of caller-visible behaviors with implementation status (implemented / partial / none visible).
PROMPT
ws-subquery --deep-research - <<PROMPT
Find tickets relevant to the <domain> domain.
Module paths: <paths from task description>
Steps:
1. Glob ai-docs/tickets/**/*.md.
2. Filter to tickets whose title or body mentions <domain> keywords or the module paths.
3. For each match: extract the feature or behavior described and its ticket status (ready/todo/idea/.done/.dropped).
Return: list of features → ticket status. Ready items are candidates for 🚧 planned markers.
PROMPT
ws-subquery --deep-research - <<PROMPT
Survey the archived spec files for the <domain> domain.
Archived location: ai-docs/ref/old-spec/ (most recent YYMMDD subdirectory)
Old spec files for this domain: <files from task description, or scan all>
Steps:
1. Read the relevant archived spec files.
2. For each ## heading: note the feature name and 🚧 status.
3. Flag any behaviors in the old spec not visible in current source.
Return: feature list from old spec with 🚧 status and a note on current-source presence.
PROMPT
ws-subquery --deep-research - <<PROMPT
Survey commit history for the <domain> domain.
Module paths: <paths from task description>
Steps:
1. Run `git log --oneline -- <paths>`.
2. Identify commits that added or changed caller-visible behavior (feat:, fix:, spec: prefixes or spec-stem references in body).
3. For each significant commit: note the behavior changed and whether it appears implemented or still in-flight.
Return: chronological list of behavioral changes, newest first.
PROMPT
Wait for all four to return before synthesizing.
Combine the four returns into a behavior brief.
Present the behavior brief to the user. For each item, establish:
- Whether it is caller-visible, per spec-conventions.md.
- Whether it is implemented → {#slug}, or planned → 🚧 {#slug}.

Ask on every ambiguous item. Do not classify without confirmation.
Collect the confirmed list before writing anything.
Choose file placement with judge: directory-vs-flat.
Run `ws-print-infra spec-conventions.md` (Bash) before writing — read the output before proceeding.
For each confirmed entry:
a. Run `ws-generate-spec-stem <descriptive-slug>` to obtain `{#YYMMDD-slug}`.
b. Write the spec entry using the spec-format template from spec-conventions.md.
c. Place the 🚧 prefix on the `##` heading if planned; omit it if implemented.
d. Verify every confirmed behavior received a `##` heading. If not, add a placeholder section and note it to the user.
Then run `ws-spec-build-index` (no args).
Re-apply judge: directory-vs-flat — if the written file warrants a directory split, note it as a split candidate for a follow-up /write-spec invocation. Do not perform the split inline.
Check for tickets in ready/ status relevant to this domain. If none, commit the spec file changes now (`git add ai-docs/spec/ && git commit`) and skip to step 7. Otherwise dispatch the clerk:
ws-oneshot-agent -p clerk --model sonnet - <<PROMPT
Associate spec stems with tickets and check convention compliance.
Run first:
ws-print-infra ticket-conventions.md
Spec stems generated for this domain:
<list: {#YYMMDD-slug} — feature name, one per line>
Tickets to update (ready only):
<list: ai-docs/tickets/<status>/<stem>.md — one-line description>
For each ticket:
1. Read the ticket file.
2. Add or update the `spec:` frontmatter field with the stems relevant to this ticket. Merge with any existing `spec:` entries — never overwrite.
3. Check the ticket body against loaded conventions. Fix any issues in place.
4. Do not commit — the caller handles all git operations.
PROMPT
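The merge-never-overwrite rule in step 2 can be sketched with GNU sed. This is a sketch under assumptions — a single-line `spec:` field with comma-separated stems — and not the clerk's actual procedure, which is left to the conventions file:

```shell
# Sketch: add a stem to a ticket's `spec:` frontmatter, merging with any
# existing entries rather than overwriting them.
t=$(mktemp)
printf -- '---\nspec: 250101-login\n---\nticket body\n' > "$t"
stem=250101-sso
if grep -q "^spec:.*$stem" "$t"; then
  :                                   # already listed; nothing to do
elif grep -q '^spec:' "$t"; then
  sed -i "s/^spec:.*/&, $stem/" "$t"  # append to the existing field
else
  sed -i "2i spec: $stem" "$t"        # create the field inside the frontmatter
fi
grep '^spec:' "$t"
```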
Review the clerk's `## Clerk report` output. Resolve any open questions with the user before committing.
Call `TaskUpdate` to set the domain task status to `completed`.
When every domain task is `completed`, proceed to On: wrap-up.

On: wrap-up
Run `ws-spec-build-index` (no args) as an idempotent safety pass over all spec files:
ws-spec-build-index
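The internals of `ws-spec-build-index` are not described here, so an independent sanity check can be useful after the safety pass. A sketch that flags stems reused across files, assuming anchors take the `{#YYMMDD-slug}` form (the fixture directory stands in for `ai-docs/spec/`):

```shell
# Sketch: list duplicate spec stems across the spec tree.
spec=$(mktemp -d)
printf '## Login {#250101-login}\n' > "$spec/a.md"
printf '## Also login {#250101-login}\n' > "$spec/b.md"
grep -rho '{#[0-9]\{6\}-[a-z-]*}' "$spec" | sort | uniq -d   # prints reused stems
```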
Emit to the user:
## Forge Spec — Complete
Domains covered: <N>
Spec files created: <list of paths>
Total stems generated: <count>
Implemented: <count>
🚧 Planned: <count>
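The counts in this report can be derived mechanically. A minimal sketch, assuming implemented headings carry a bare anchor and planned ones a 🚧 prefix (the fixture file stands in for `ai-docs/spec/`):

```shell
# Sketch: compute the wrap-up counts from the spec tree.
spec=$(mktemp -d)
cat > "$spec/auth.md" <<'EOF'
## Login {#250101-login}
## 🚧 SSO {#250101-sso}
EOF
total=$(grep -rho '{#[0-9]\{6\}-[a-z-]*}' "$spec" | wc -l)
planned=$(grep -rh '^## 🚧' "$spec" | wc -l)
echo "stems=$((total)) planned=$((planned)) implemented=$((total - planned))"
```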
After wrap-up, suggest these follow-ups to the user:
- Run `ws-oneshot-agent -p spec-updater` to strip 🚧 markers from any planned features whose implementation has since landed in commit history.
- Cross-check 🚧 entries with open tickets — confirm each has an active ready ticket or drop the marker.
- Run /write-spec for any domain surfaces discovered after wrap-up.

judge: directory-vs-flat

| Decision | When |
|---|---|
| Flat file `ai-docs/spec/<area>.md` | Single, self-contained surface — none of the split conditions below apply |
| Directory `ai-docs/spec/<area>/index.md` | Any one split condition is met: (1) a section has its own 🚧 markers with a distinct ticket lifecycle; (2) more than one `[!note] Constraints` block is present; (3) a section has a distinct audience from the parent doc |
When uncertain, start flat. Re-evaluate after writing — if a split condition fires, note the file for a follow-up /write-spec invocation.
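Split condition (2) is the easiest to check mechanically. A sketch, assuming constraints callouts use the literal text `[!note] Constraints` (the fixture file stands in for a real spec page):

```shell
# Sketch: flag a file as a split candidate when it holds more than one
# constraints block.
f=$(mktemp)
printf '> [!note] Constraints\n> a\n\n> [!note] Constraints\n> b\n' > "$f"
notes=$(grep -c '\[!note\] Constraints' "$f")
[ "$notes" -gt 1 ] && echo "split candidate: $notes constraint blocks"
```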
ws-spec-build-index
No file arguments. Scans ai-docs/spec/**/*.md automatically. Run once after any spec write or update in this session.
TaskCreate(
name = "forge-spec-<domain>",
description = """
Spec authoring for domain: <domain>
Source paths: <comma-separated module paths>
Old spec files: <comma-separated archived spec paths, or none>
"""
)
Forge-spec optimizes for confirmed spec entries per domain — every entry in the produced spec reflects an explicit user decision on caller-visibility and implementation status. When a rule is ambiguous, apply whichever interpretation more reliably requires explicit user confirmation before any spec content is written.