lead-forge-spec
// Reconstruct specs from scratch by surveying the project, confirming behavioral domains, and guiding a user-confirmed authoring loop that produces complete anchor-keyed specs under ai-docs/spec/.
| name | lead-forge-spec |
| description | Reconstruct specs from scratch by surveying the project, confirming behavioral domains, and guiding a user-confirmed authoring loop that produces complete anchor-keyed specs under ai-docs/spec/. |
Target: user request
Invariants:
- Call `ws/convention.read(name: "spec-conventions")` before any spec write - conventions are canonical there.
- Research subqueries use `ws/subquery(deep_research: true, question: <focused prompt>)`; clerk work starts with `ws/subquery(deep_research: false, question: <self-contained clerk prompt>)`.
- Any destructive move (e.g., `git mv ai-docs/spec/*`) requires explicit user confirmation before executing.
- Call `ws/spec_stem.generate(slug: "<descriptive-slug>")` before every anchor insertion.
- Call `ws/spec_index.verify()` after every spec file write or update.
- Domain tasks are named `forge-spec-<domain>` (e.g., `forge-spec-auth`).
- All `ws/subquery(...)` calls for a phase are dispatched in a single response turn when the host can issue parallel calls; store returned keys and wait on all keys before synthesizing.
- On resume, skip `forge-spec-` tasks already marked `completed`.

Check `ai-docs/spec/`. If the directory is empty or absent, skip to step 2. Otherwise, existing spec files are archived to `ai-docs/ref/old-spec/YYMMDD/` (today's date) and used as reference only - not as a base to extend. Ask the user to confirm before proceeding:

```sh
YYMMDD=$(date +%y%m%d)
mkdir -p ai-docs/ref/old-spec/$YYMMDD
git mv ai-docs/spec/* ai-docs/ref/old-spec/$YYMMDD/
```
Issue all four queries in a single response turn as parallel ws/subquery calls. Store each returned subquery_key:
Call ws/subquery(deep_research: true, question: <block below>):
Survey the project's directory and module structure.
Enumerate source files, identify top-level modules/packages/service boundaries,
and return module names, paths, and purpose inferred from names/layout.
Format: markdown bullets grouped by module/area.
Call ws/subquery(deep_research: true, question: <block below>):
Survey all tickets through `ws/tickets.list(include_done: true, include_dropped: true)`.
Extract title, status directory, and public-facing or user-visible behavior.
Group by inferred behavioral domain.
Return: domain -> behaviors/features mentioned in tickets.
Call ws/subquery(deep_research: true, question: <block below>):
Survey the archived spec files in ai-docs/ref/old-spec/ (most recent YYMMDD subdirectory).
Glob ai-docs/ref/old-spec/**/*.md. Extract title, summary, `##` headings, and
`🚧` status. Return domain names and heading topics as reference candidates
only; do not treat archived specs as authoritative.
Call ws/subquery(deep_research: true, question: <block below>):
Survey recent commit history for behavioral signals.
Use `ws/git.log`. Identify user-visible features, API changes, CLI
changes, or spec updates (`feat:`, `fix:`, `spec:`, spec-stems in bodies).
Return behavioral areas -> representative commits. Omit chore/docs/refactor
unless they reference spec-stems.
Call ws/agents.result(name: <subquery-key>, timeout_seconds: 600) for all four keys before synthesizing.
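The dispatch-then-wait discipline above - fire every query first, keep every key, and only synthesize once all results are back - can be sketched as a small fan-out helper. This is an illustrative Python sketch, not the host's API: `dispatch` and `wait` are hypothetical stand-ins for `ws/subquery` and `ws/agents.result`.

```python
from typing import Callable

def fan_out(questions: list[str],
            dispatch: Callable[[str], str],
            wait: Callable[[str], str]) -> list[str]:
    """Dispatch every question up front (one 'turn'), collect all keys,
    then block on every key before any synthesis happens."""
    keys = [dispatch(q) for q in questions]   # all four dispatched before any wait
    return [wait(k) for k in keys]            # wait on all keys, in dispatch order

# Stub host for illustration: dispatch returns a key, wait resolves it.
pending: dict[str, str] = {}

def dispatch(question: str) -> str:
    key = f"subquery-{len(pending)}"
    pending[key] = f"result for: {question}"
    return key

def wait(key: str) -> str:
    return pending.pop(key)

results = fan_out(["modules", "tickets", "old specs", "git log"], dispatch, wait)
print(results[0])  # result for: modules
```

The key property is that no `wait` call happens until every `dispatch` call has been issued, which is what lets a parallel-capable host overlap the four surveys.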
Present the candidate domains to the user in a numbered list. Tell the user they may reorder, merge, split, rename, or drop entries before proceeding.
Wait for user response. Apply any adjustments. Do not proceed until the user explicitly confirms the final list.
Call `TaskCreate` once - one task per confirmed domain, in confirmed order:

```
TaskCreate(
  name = "forge-spec-<domain>",
  description = """
    Spec authoring for domain: <domain>
    Source paths: <inferred module paths for this domain>
    Old spec files: <archived spec files that covered this domain, or none>
  """
)
```
Proceed immediately to On: per-domain with the first domain.
For each domain task in order, skipping tasks with status completed:
Call TaskUpdate to set the domain task status to in_progress.
Issue all four queries in a single response turn as parallel ws/subquery calls. Store each returned subquery_key:
Call ws/subquery(deep_research: true, question: <block below>):
Survey source code for the <domain> domain.
Module paths: <paths from task description>
Read listed source paths. Identify caller-visible public functions, CLI commands,
HTTP endpoints, config options, output formats, events, and observable interfaces.
Return behaviors with status: implemented / partial / none visible.
Call ws/subquery(deep_research: true, question: <block below>):
Find tickets relevant to the <domain> domain.
Module paths: <paths from task description>
Use `ws/tickets.find(query: "<domain>")`; filter by module paths when needed.
Return features -> ticket status. todo items are `🚧` candidates.
Call ws/subquery(deep_research: true, question: <block below>):
Survey the archived spec files for the <domain> domain.
Archived location: ai-docs/ref/old-spec/ (most recent YYMMDD subdirectory)
Old spec files for this domain: <files from task description, or scan all>
Read relevant archived specs. For each `##` heading, note feature name,
`🚧` status, and whether current source shows it.
Return features with archived status and current-source presence.
Call ws/subquery(deep_research: true, question: <block below>):
Survey commit history for the <domain> domain.
Module paths: <paths from task description>
Use path-filtered native Git history until ws exposes path-history. Identify commits that added or changed
caller-visible behavior (`feat:`, `fix:`, `spec:`, spec-stems).
Return behavioral changes newest first, with implementation status when visible.
Call ws/agents.result(name: <subquery-key>, timeout_seconds: 600) for all four keys before synthesizing.
Combine the four returns into one bullet per distinct caller-visible behavior: description, evidence source (code / ticket / old-spec / commit), candidate classification (implemented / planned), and uncertainty flags.
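The combine step above amounts to folding four evidence streams into one entry per distinct behavior. A minimal sketch, assuming each survey returns a mapping from behavior to an evidence note (the data shapes and names here are illustrative, not part of the ws API):

```python
def merge_surveys(surveys: dict[str, dict[str, str]]) -> dict[str, list[str]]:
    """Fold per-source findings into one entry per distinct behavior.

    surveys maps a source name (code / ticket / old-spec / commit) to
    {behavior: evidence note}; the result maps each behavior to the
    list of 'source: note' strings backing it."""
    merged: dict[str, list[str]] = {}
    for source, findings in surveys.items():
        for behavior, note in findings.items():
            merged.setdefault(behavior, []).append(f"{source}: {note}")
    return merged

brief = merge_surveys({
    "code":   {"csv export": "exporter.py, implemented"},
    "ticket": {"csv export": "todo ticket open", "json export": "ticket done"},
})
# "csv export" now carries two evidence lines; disagreement between
# sources is exactly where an uncertainty flag would be raised.
```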
Present the behavior brief to the user. For each item, establish:
- whether the behavior is caller-visible per `spec-conventions.md` - ask on every ambiguous item;
- its classification: implemented -> `## Feature {#slug}`; planned -> `## 🚧 Feature {#slug}`.

Do not classify without confirmation. Collect the confirmed list before writing anything.
Apply `judge: directory-vs-flat` to choose the file layout, then call `ws/convention.read(name: "spec-conventions")` before writing - read the output before proceeding. For each confirmed behavior:
a. Call `ws/spec_stem.generate(slug: "<descriptive-slug>")` to obtain `{#YYMMDD-slug}`.
b. Write the spec entry using the spec-format template from spec-conventions.md.
c. Place 🚧 after the heading marker if planned; omit if implemented.
d. Confirm each confirmed behavior received its own `##` heading. If not, add a placeholder section and note it to the user.
e. Call `ws/spec_index.verify()`.
f. Re-check `judge: directory-vs-flat` - if the written file warrants a directory split, note it as a split candidate for a follow-up `ws:lead-write-spec` invocation. Do not perform the split inline.

Next, find tickets in `ready/` status relevant to this domain. If none, commit the spec file changes through `ws/git.commit` and skip to step 7. Otherwise dispatch a clerk subquery, store its `subquery_key`, then wait for it:

Call ws/subquery(deep_research: false, question: <block below>):
Associate spec stems with tickets and check convention compliance.
Run first:
Call `ws/convention.read(name: "ticket-conventions")`
Spec stems generated for this domain:
<list: {#YYMMDD-slug} - feature name, one per line>
Tickets to update (ready only):
<list: ai-docs/tickets/<status>/<stem>.md - one-line description>
For each ticket:
1. Read the ticket.
2. Merge relevant stems into `spec:` frontmatter; never overwrite existing entries.
3. Check body against conventions and fix issues in place.
4. Do not commit; caller owns git.
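The merge rule in step 2 of the clerk prompt - add stems without clobbering what is already there - can be sketched as a duplicate-skipping append. `spec:` frontmatter is assumed here to be a list of stems; the function name is illustrative:

```python
def merge_stems(existing: list[str], new: list[str]) -> list[str]:
    """Append stems not already present; never drop or reorder
    existing entries (the 'never overwrite' rule)."""
    return existing + [s for s in new if s not in existing]

ticket_spec = ["{#240101-auth-login}"]
updated = merge_stems(ticket_spec, ["{#240101-auth-login}", "{#240315-auth-mfa}"])
print(updated)  # ['{#240101-auth-login}', '{#240315-auth-mfa}']
```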
Call `ws/agents.result(name: <subquery-key>, timeout_seconds: 600)`, then review the `## Clerk report`. Resolve any open questions with the user before committing.
Call `TaskUpdate` to set the domain task status to `completed`.
Once every domain task is `completed`, proceed to On: wrap-up.
Call `ws/spec_index.verify()` as an idempotent safety pass over all spec files.
Emit to the user:
## Forge Spec - Complete
Domains covered: <N>
Spec files created: <list of paths>
Total stems generated: <count>
Implemented: <count>
🚧 Planned: <count>
Then:
- Run `ws/subquery(deep_research: false, question: <self-contained spec-updater prompt>)` followed by `ws/agents.result(name: <subquery-key>, timeout_seconds: 600)` to strip 🚧 markers from any planned features whose implementation has since landed in commit history.
- Cross-check 🚧 entries with open tickets - confirm each has an active todo ticket or drop the marker.
- Invoke `ws:lead-write-spec` for any domain surfaces discovered after wrap-up.

judge: directory-vs-flat

| Decision | When |
|---|---|
| Flat file `ai-docs/spec/<area>.md` | Single, self-contained surface - none of the split conditions below apply |
| Directory `ai-docs/spec/<area>/index.md` | Any one split condition is met: (1) a section has its own 🚧 markers with a distinct ticket lifecycle; (2) more than one `[!note] Constraints` block is present; (3) a section has a distinct audience from the parent doc |
When uncertain, start flat. Re-evaluate after writing - if a split condition fires, note the file for a follow-up ws:lead-write-spec invocation.
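The split decision reduces to a predicate over per-section signals. A minimal sketch under the assumption that the three conditions have already been extracted per section (the `Section` type and field names are hypothetical, introduced only for illustration):

```python
from dataclasses import dataclass

@dataclass
class Section:
    own_planned_lifecycle: bool  # has its own 🚧 markers with a distinct ticket lifecycle
    constraint_blocks: int       # count of [!note] Constraints blocks in this section
    distinct_audience: bool      # audience differs from the parent doc

def needs_directory(sections: list[Section]) -> bool:
    """True if any one split condition fires; otherwise stay flat."""
    return (
        any(s.own_planned_lifecycle or s.distinct_audience for s in sections)
        or sum(s.constraint_blocks for s in sections) > 1  # more than one Constraints block overall
    )

flat = needs_directory([Section(False, 1, False)])   # no condition fires -> flat file
split = needs_directory([Section(True, 0, False)])   # condition (1) fires -> directory
```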
`ws/spec_index.verify()`

No file arguments. Scans `ai-docs/spec/**/*.md` for duplicate anchors. Run once after any spec write or update in this session.
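The duplicate-anchor scan this tool performs can be sketched as a regex pass over the spec tree. This is an illustrative reconstruction, not the tool's actual implementation; it takes a path-to-content mapping so the logic is visible without touching the filesystem:

```python
import re

ANCHOR = re.compile(r"\{#([^}]+)\}")

def duplicate_anchors(files: dict[str, str]) -> dict[str, list[str]]:
    """Map each anchor to the files it appears in, keeping only
    anchors that occur more than once across the spec tree."""
    seen: dict[str, list[str]] = {}
    for path, text in files.items():
        for anchor in ANCHOR.findall(text):
            seen.setdefault(anchor, []).append(path)
    return {a: paths for a, paths in seen.items() if len(paths) > 1}

dupes = duplicate_anchors({
    "ai-docs/spec/auth.md":   "## Login {#240101-auth-login}",
    "ai-docs/spec/export.md": "## Export {#240101-auth-login}",
})
print(dupes)  # {'240101-auth-login': ['ai-docs/spec/auth.md', 'ai-docs/spec/export.md']}
```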
```
TaskCreate(
  name = "forge-spec-<domain>",
  description = """
    Spec authoring for domain: <domain>
    Source paths: <comma-separated module paths>
    Old spec files: <comma-separated archived spec paths, or none>
  """
)
```
Forge-spec optimizes for confirmed spec entries per domain - every produced entry reflects an explicit user decision on caller-visibility and implementation status. When ambiguous, require confirmation before writing spec content.