---
name: doc-coauthoring
description: Guide users through a structured workflow for co-authoring documentation with iterative refinement and explicit gap tracking. USE WHEN the user wants to write documentation, proposals, technical specs, requirements docs, decision docs, RFCs, or similar structured content OR says "create a spec", "write requirements", "refine requirements", "draft a proposal", or "write up". Use this skill for defining what, why, scope, constraints, and acceptance criteria. Treat generic "refine requirements" requests as spec work by default; if the user wants implementation actions, hand off to `refine-plan`.
---
# Doc Co-Authoring Workflow
This skill provides a structured workflow for guiding users through collaborative document creation. Act as an active guide, walking users through three stages: Context Gathering, Refinement & Structure, and Reader Testing.
## Boundary: Spec vs Plan
This skill owns the spec/requirements layer of the pipeline.
- A spec answers what should happen, why it matters, what constraints apply, and what "done" means from a requirements perspective.
- A plan answers which actions need to happen, in which order, and whether each action is already done, pending, or blocked.
When open points appear during spec creation, keep them in the spec with explicit markers. Do not convert unresolved product, scope, behavior, or acceptance-criteria questions into pseudo-implementation tasks just to keep momentum.
If the user asks to turn a spec into executable steps, status-bearing tasks, or a progress-tracking plan, hand off to refine-plan.
## Shared Delivery Gates
Use /Users/dh/Documents/DanielsVault/_shared/shared-ai-docs/docs/doc-workflow.md as the canonical reference for:
- Definition of Ready (DoR)
- Definition of Done (DoD)
- Decision Freeze Pack
- Session Briefing
- Review Control Surface
- Parallel Work Control Surface
- Mini-Retro
- Spec Goldstandard
This skill may briefly restate those gates for local context, but it should not redefine them differently.
In this workflow:
- the spec should become clear enough to satisfy the shared DoR later,
- larger spec sessions should start by clarifying mode/skill, source of truth, goal, non-goals, in-scope work, expected output, verification/review path, and open decisions,
- every spec should contain a Review Control Surface with spec variant, Goldstandard status, goal, in scope, out of scope, key test/harness cases, key verification commands, open decisions, and readiness status,
- larger spec blocks and handoffs should end with a short Mini-Retro so decisions, edits, open items, missing evidence, workflow friction, and session/context health are not lost,
- blocking [MISSING ...] and [DECISION ...] items must stay visible until resolved,
- "ready for planning" or "ready for implementation" should be judged against the shared gate definitions.
## Verification Command Authoring Guardrail
When a spec includes a Verification Commands block, enforce these authoring rules before marking the spec ready:
- Declare execution context explicitly (working directory, shell assumptions, platform target such as macOS/Linux).
- Avoid commands that fail on benign pre-existing state unless that failure is intentional and documented.
- For branch-based scope guards, specify baseline semantics and compatibility for long-lived branches (or require a clean branch).
- For runtime startup checks (docker compose up, service boot), include deterministic readiness handling before first HTTP assertions.
- For each command, define success criteria that can be observed from the exit code and/or a concrete output token.
- Define a risk-based preflight subset (only high-risk commands) plus clear separation from gate verification statuses.
- Rehearse high-risk command contracts before marking the spec ready when this is safe without implementation or external side effects. This is not functional verification; it proves the command can select the expected runtime/tooling, resolve paths, parse flags, and start from the declared cwd/shell/platform.
- Treat these as high-risk by default: SDK/runtime selection, absolute paths, commands affected by a parent global.json or the cwd, container/Compose commands, health/readiness checks, branch/diff guards, credential/secret checks, network/infra checks, and build/test commands with fragile flags such as --no-build or --no-restore.
- Explicitly forbid recursive verification loops (no "verification of verification" chains).
- If command simplifications are needed, list them as proposals with rationale/trade-offs first; adopt only after user confirmation.
If these are not met, add explicit markers such as:
[MISSING verification portability contract]
[MISSING runtime readiness strategy]
[DECISION scope guard baseline strategy]
[MISSING risk-based verification preflight subset]
[MISSING command contract rehearsal]
[REVIEW command contract not rehearsed: <reason>]
[DECISION verification command simplification approval]
When a command cannot be safely rehearsed during authoring, document why and name the later workflow owner (child-spec-hardening, spec-change-delivery, integration owner, or human operator) who must validate it before implementation permission. Do not call a spec ready for implementation if a high-risk command has neither a successful rehearsal nor an explicit blocking marker.
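As a sketch only, a Verification Commands block that follows these rules might look like the following; the service name, port, compose file, and paths are hypothetical:

```markdown
## Verification Commands

Execution context: run from the repo root, bash on macOS/Linux, Docker and curl available.

1. docker compose -f docker-compose.yml up -d api
   - Readiness: poll curl -fsS http://localhost:8080/health (max 30 attempts, 2s apart) before any HTTP assertion.
   - Success: final health call exits 0 and the body contains "status":"ok".
   - Risk: high (container startup); rehearse the command contract before marking the spec ready.
2. git diff --name-only origin/main...HEAD -- src/
   - Baseline: merge-base against origin/main; requires a branch cut from current main or a documented long-lived-branch strategy.
   - Success: exit code 0 and the output lists only paths under src/.
   - Risk: high (branch/diff guard).
```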
## Content Quality Guardrail
Before calling a spec ready for planning or implementation, perform a strict content-quality review, not just a section checklist. Data contracts are one review category, not the whole semantic review.
Check especially:
- Correctness/domain fit: the described behavior solves the stated problem and matches the intended user/system context.
- Necessity/scope: requirements are needed for the goal; hidden scope creep, accidental non-goals, and deferred work are explicit.
- Completeness: normal paths, edge cases, failure paths, dependencies, constraints, and assumptions are covered or marked.
- Consistency: requirements, examples, terminology, statuses, parent/child scope, acceptance criteria, and verification commands do not conflict.
- Unambiguity: a competent implementer should not have to choose between multiple materially different meanings.
- Feasibility: the behavior is implementable in the declared technical/runtime environment.
- Testability/verifiability: important requirements can be proven with concrete tests, inspection, analysis, demonstration, or acceptance evidence.
- Traceability: requirements connect to source intent, parent scope, acceptance criteria, verification commands, and downstream artifacts.
- Atomicity/abstraction: requirements are not bundled so broadly that partial delivery can look complete, and they do not over-prescribe implementation unless that is intentional.
- Operational/lifecycle fit: migration, fallback, rollback, observability, compatibility, ownership, and closeout evidence are covered when relevant.
- Data/artifact contracts preserve meaning downstream. Require identity/context/provenance fields only when the spec's downstream flow needs them; use-case-specific examples should not become universal fields.
- Status fields define allowed values, failure states, and blocked states when statuses matter.
- Fallback paths produce compatible artifacts or explicitly document differences.
- Acceptance criteria cannot be satisfied by skipping the intended control flow.
If the content-quality pass finds a blocker, mark the spec as not implementation-ready even when all formal sections exist.
## Reader-Calibrated Authoring and Findings
Write and review specs for the user's actual review path. The user often checks goal, in scope, out of scope, verification commands, and test/harness cases first, while skimming the deeper contract sections. Therefore these control surfaces must carry enough meaning for a fast human review:
- Goal and scope must state the actual behavior change, not only a project label.
- In scope and out of scope must make the delivery boundary obvious without reading the whole document.
- Verification commands and test/harness cases must show what important behavior is proven, including negative and failure cases when relevant.
- Deep contract sections may contain detail, but any important consequence of that detail must be reflected in the review surfaces above.
For specs, create or maintain this section near the top:
## Review Control Surface
- Spec Variant:
- Goldstandard Status:
- Goal:
- In Scope:
- Out of Scope:
- Key Test/Harness Cases:
- Key Verification Commands:
- Open Decisions:
- Readiness Status:
Keep it short and synchronized with the detailed sections. If the control surface and the body disagree, treat that as a blocking review finding.
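For calibration, a filled control surface might look like this; the project, cases, and command are hypothetical:

```markdown
## Review Control Surface
- Spec Variant: child spec (webhook retry slice)
- Goldstandard Status: not yet assessed
- Goal: failed payment webhooks are retried with backoff instead of being dropped
- In Scope: retry scheduling, dead-letter handling, retry metrics
- Out of Scope: webhook signature changes, payment provider migration
- Key Test/Harness Cases: retry succeeds after a transient 503; poison message reaches dead-letter after max attempts
- Key Verification Commands: dotnet test tests/Webhooks.Retry.Tests (hypothetical path)
- Open Decisions: [DECISION max retry attempts 5 vs 10]
- Readiness Status: not ready (1 blocking decision)
```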
When reporting non-trivial review findings, do not use terse technical labels that assume the user has read every detail section. Use this format:
- Finding: one short sentence in plain language.
- Why it matters: what could go wrong in implementation or review if this remains unresolved.
- Where to check: point to the human review surfaces first (goal, scope, verification commands, test/harness cases) and only then to deeper sections.
- Concrete example: name the missing artifact, status, command, fixture, case, or behavior.
- Needed action: state whether Claude can fix it autonomously or whether a user decision is required.
If a technical term is unavoidable, translate it once in the finding. For example, explain "canonical serialization" as the exact byte representation that is hashed or signed.
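A sketch of one finding in this format, using a hypothetical gap:

```markdown
- Finding: The spec never says what happens when the retry queue is full.
- Why it matters: An implementer could silently drop events and still satisfy the current acceptance criteria.
- Where to check: In Scope names "retry scheduling", but no verification command or test/harness case covers queue saturation.
- Concrete example: There is no test case, command, or [MISSING ...] marker for the queue-full path.
- Needed action: User decision required: block the producer, drop with a metric, or apply backpressure.
```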
When the user wants a cleanup/review loop after authoring, hand off to doc-review-autoresolve: it should reconcile and fix inferable inconsistencies, then report readiness, while stopping for real decisions.
When the user is working on a child spec that already has a parent/master source and needs implementation-ready depth, hand off to child-spec-hardening instead of continuing open-ended co-authoring. That skill applies these authoring principles in a bounded child delivery-pack workflow and then runs the auto-resolve loop.
## Documentation Research Routing
Before spec drafting continues, route documentation discovery to /Users/dh/Documents/DanielsVault/_shared/shared-ai-docs/skills-repo/skills/rag-documentation-research/SKILL.md when users ask for source search, for example:
- "durchsuche die dokumentation"
- "durchsuche docs"
- "research the documentation"
- "search the documentation"
- "find relevant docs"
- "welche dokumente sind relevant"
Routing rule:
- Use rag-documentation-research first (RAG-first, source-backed).
- Then continue doc-coauthoring with the retrieved sources as context.
## Scope Pressure Guardrail
This skill must proactively apply the shared Spec Sizing Gate from docs/doc-workflow.md before treating a spec as a single coherent delivery change.
Treat scope as "too large" when one or more signals are present:
- More than 3 major capability domains in one spec (for example product behavior + security hardening + CI/runtime + migration).
- Multiple repositories/systems must change together without a clear phased boundary.
- The spec mixes strategic target-state design with immediate implementation details in one pass.
- Several external dependencies (teams, infra windows, credentials, approvals) are required before any meaningful validation is possible.
- Acceptance criteria cannot be verified within one realistic delivery cycle.
- The expected work is likely to require context compression, long-running continuation, or multiple implementation sessions before reliable completion.
- The scope naturally separates into multiple child slices with independent done signals or verification cycles.
When scope pressure is detected, do this before continuing:
- Explicitly call out the scope risk.
- Automatically create or update the Parent Spec control layer instead of continuing as a single normal spec.
- Produce 2-5 concrete child split options (for example by capability, system boundary, lifecycle phase, risk class, or environment lane) and choose the default split unless the user explicitly redirects.
- Preserve one short parent overview and move delivery detail into child specs or child skeletons.
- Add a parent/child control section that records coverage, dependencies, status, and the next recommended slice.
- Ensure deferred scope becomes a backlog or child-spec entry with a trigger and done signal, not only prose in "Next steps".
- Hand off to spec-orchestrator to build the child inventory, coverage matrix, hardening queue, optional OpenSpec ledger, and session-ready next-slice recommendation.
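One possible shape for that parent/child control section; the slice names, statuses, and signals are illustrative, not a fixed schema:

```markdown
## Child Spec Control

| Child | Covers | Depends on | Status | Done signal |
|-------|--------|------------|--------|-------------|
| child-auth-hardening | parent security scope | none | 🟡 Spec | auth tests green, review evidence filed |
| child-ci-runtime | parent CI/runtime scope | child-auth-hardening | backlog | pipeline verification commands pass |

Next recommended slice: child-auth-hardening (no open dependencies).
```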
Before calling any child spec implementation-ready, verify that it contains:
- parent/master coverage,
- parent scope conformance for every touched parent requirement (preserves, extends, narrows_with_rationale, defers_to_child, missing_from_child, contradicts_parent),
- explicit in/out scope,
- a Decision Freeze Pack,
- concrete acceptance criteria,
- Verification Commands including runtime/container gates when relevant,
- dependency, spec-hardening write-set, shared/read-only file, integration-owner, and merge/sync boundaries if parallel hardening is plausible,
- future runtime/code write-set boundaries if the child claims it can later be implemented in parallel.
Do not call a child spec implementation-ready when it contradicts the parent spec or drops expected parent scope without a named backlog/child-spec re-entry path.
If the user explicitly forbids splitting after the gate fires, keep working but add an explicit marker such as:
[REVIEW Scope risk accepted: <reason>]
## CORE Response Format Compatibility
The optional CORE response wrapper (📋 SUMMARY, 🔍 ANALYSIS, ⚡ ACTIONS, ✅ RESULTS, ➡️ NEXT, 🎯 COMPLETED) can be used for short conversational updates, handoffs, or completion summaries around this workflow.
Do not force that wrapper into the actual spec artifact. The generated document should keep its native structure, sections, markers, and reader-focused prose.
## Marker System
Use these markers throughout the document to track gaps and decisions:
[MISSING ...] — Information that is needed but not yet provided. Example: [MISSING target deployment date]
[DECISION ...] — A choice that needs to be made between options. Example: [DECISION REST vs GraphQL for API layer]
[REVIEW ...] — Content that needs verification or review (e.g., from Reader Testing). Example: [REVIEW Reader Claude found this section ambiguous]
Marker rules:
- Markers may be blocking (must be resolved before the doc is ready) or non-blocking (explicitly marked as such)
- A document is ready when no blocking [MISSING] or [DECISION] markers remain
- When a marker is resolved, remove it and replace with the actual content
- User answers marked with => resolve the corresponding marker
- Use markers for requirement/spec gaps, not as a substitute for implementation task tracking
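A short illustration of that lifecycle; the date is hypothetical:

```markdown
Draft: Rollout starts on [MISSING target deployment date].
User answer: => 2026-06-01
Resolved: Rollout starts on 2026-06-01.
```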
## Spec Header Contract
Every spec must include this header block at the top (the values below are example content):
**Date:** 2026-03-03
**Status:** 🟡 Spec
**Scope:** Automated deployment validation and self-healing for NCG backend on Hetzner infrastructure
---
Header rules:
- Date uses YYYY-MM-DD and reflects the spec creation date.
- Scope is a short one-line scope statement.
- Status must be one of:
  - 🟡 Spec - spec is being authored/refined in doc-coauthoring
  - 🟠 Plan - an executable plan exists (from refine-plan or spec-change-delivery)
  - 🔵 Implemented - planned change was executed and implementation artifacts exist
  - 🟢 Accepted - change was formally accepted (typically via spec-closeout)
Status ownership by workflow:
- doc-coauthoring initializes new specs with 🟡 Spec.
- Workflow 1 (legacy):
  - refine-plan (or the implementing run) can advance to 🟠 Plan.
  - the implementing run can advance to 🔵 Implemented.
  - spec-closeout is optional and sets 🟢 Accepted when used successfully.
- Workflow 2 (current):
  - spec-change-delivery is responsible for transitioning 🟡 Spec -> 🟠 Plan -> 🔵 Implemented.
  - spec-closeout sets 🟢 Accepted after full closeout succeeds.
## Spec History and SessionId
Keep a history table at the bottom of every spec and keep it append-only.
Use this format (no iteration column):
| Date | Author | Change |
|------|--------|--------|
| YYYY-MM-DD | User | Initial spec draft created with scope and core requirements. |
| YYYY-MM-DD | Claude | Acceptance criteria clarified and rollback constraints added. |
Rules:
- Record each meaningful spec adjustment in one short sentence.
- Do not use iteration numbers.
- Do not overwrite or delete prior history rows; append new rows only.
- If an older spec still uses an iteration-style history, migrate it to this 3-column format when the file is touched.
Keep SessionId at the bottom of the spec file:
- If missing, add it as SessionId: <session-id>.
- If present, preserve it and continue using the same value for ongoing updates.
## When to Offer This Workflow
Trigger conditions:
- User mentions writing documentation: "write a doc", "draft a proposal", "create a spec", "write up"
- User mentions specific doc types: "PRD", "design doc", "decision doc", "RFC"
- User says: "create a spec", "write requirements", "refine requirements"
- User seems to be starting a substantial writing task
If a request says "refine requirements", first determine the intended output:
- If they want a better requirements/spec document, stay in this skill
- If they want those requirements translated into implementation actions, route to refine-plan
Initial offer for open-ended documentation work:
Offer the user a structured workflow for co-authoring the document. For shared-ai-docs specs, keep the offer short and bias toward the DocWorkflow gates in docs/doc-workflow.md; do not force the full generic three-stage writing process when the user already provided a concrete spec/change request.
The full optional co-authoring flow has three stages:
- Context Gathering: User provides all relevant context while Claude asks clarifying questions
- Refinement & Structure: Iteratively build each section through brainstorming and editing
- Reader Testing: Test the doc with a fresh Claude (no context) to catch blind spots before others read it
Explain that this approach helps ensure the doc works well when others read it. Ask if they want to try this workflow or prefer to work freeform.
If user declines, work freeform. If user accepts, proceed to Stage 1.
## Stage 1: Context Gathering
Goal: Close the gap between what the user knows and what Claude knows, enabling smart guidance later.
For larger work, new sessions, or resumed/compressed context, first establish the shared session briefing from doc-workflow.md: mode/skill, source of truth, goal, non-goals, in-scope work, expected output, verification/review path, and open decisions. Infer what is safe from the prompt and repo context; ask only for blockers.
### Initial Questions
Start by asking the user for meta-context about the document:
- What type of document is this? (e.g., technical spec, decision doc, proposal)
- Who's the primary audience?
- What's the desired impact when someone reads this?
- Is there a template or specific format to follow?
- Any other constraints or context to know?
Inform them they can answer in shorthand or dump information however works best for them.
If user provides a template or mentions a doc type:
- Ask if they have a template document to share
- If they provide a link to a shared document, use the appropriate integration to fetch it
- If they provide a file, read it
If user mentions editing an existing shared document:
- Use the appropriate integration to read the current state
- Check for images without alt-text
- If images exist without alt-text, explain that when others use Claude to understand the doc, Claude won't be able to see them. Ask if they want alt-text generated. If so, request they paste each image into chat for descriptive alt-text generation.
### Info Dumping
Once initial questions are answered, encourage the user to dump all the context they have. Request information such as:
- Background on the project/problem
- Related team discussions or shared documents
- Why alternative solutions aren't being used
- Organizational context (team dynamics, past incidents, politics)
- Timeline pressures or constraints
- Technical architecture or dependencies
- Stakeholder concerns
Advise them not to worry about organizing it - just get it all out. Offer multiple ways to provide context:
- Info dump stream-of-consciousness
- Point to team channels or threads to read
- Link to shared documents
If integrations are available (e.g., Slack, Teams, Google Drive, SharePoint, or other MCP servers), mention that these can be used to pull in context directly.
If no integrations are detected and in Claude.ai or Claude app: Suggest they can enable connectors in their Claude settings to allow pulling context from messaging apps and document storage directly.
Inform them clarifying questions will be asked once they've done their initial dump.
During context gathering:
- If user mentions team channels or shared documents:
  - If integrations available: Inform them the content will be read now, then use the appropriate integration
  - If integrations not available: Explain lack of access. Suggest they enable connectors in Claude settings, or paste the relevant content directly.
- If user mentions entities/projects that are unknown:
  - Ask if connected tools should be searched to learn more
  - Wait for user confirmation before searching
- As user provides context, track what's being learned and what's still unclear
- Mark unclear areas with [MISSING ...] and open choices with [DECISION ...] markers
Asking clarifying questions:
When user signals they've done their initial dump (or after substantial context provided), ask clarifying questions to ensure understanding:
Generate 5-10 numbered questions based on gaps in the context.
Inform them they can use shorthand to answer (e.g., "1: yes, 2: see #channel, 3: no because backwards compat"), link to more docs, point to channels to read, or just keep info-dumping. Whatever's most efficient for them.
Exit condition:
Sufficient context has been gathered when the clarifying questions demonstrate understanding: edge cases and trade-offs can be probed without needing the basics explained.
Transition:
Ask if there's any more context they want to provide at this stage, or if it's time to move on to drafting the document.
If user wants to add more, let them. When ready, proceed to Stage 2.
Before moving to Stage 2, run one explicit scope-pressure check using the guardrail above. If the scope is too broad, propose a split first.
## Stage 2: Refinement & Structure
Goal: Build the document section by section through brainstorming, curation, and iterative refinement.
Instructions to user:
Explain that the document will be built section by section. For each section:
- Clarifying questions will be asked about what to include
- 5-20 options will be brainstormed
- User will indicate what to keep/remove/combine
- The section will be drafted
- It will be refined through surgical edits
Start with whichever section has the most unknowns (usually the core decision/proposal), then work through the rest.
Section ordering:
If the document structure is clear:
Ask which section they'd like to start with.
Suggest starting with whichever section has the most unknowns. For decision docs, that's usually the core proposal. For specs, it's typically the technical approach. Summary sections are best left for last.
If user doesn't know what sections they need:
Based on the type of document and template, suggest 3-5 sections appropriate for the doc type.
Ask if this structure works, or if they want to adjust it.
Once structure is agreed:
Create the initial document structure with placeholder text for all sections.
If access to artifacts is available:
Use create_file to create an artifact. This gives both Claude and the user a scaffold to work from.
Inform them that the initial structure with placeholders for all sections will be created.
Create artifact with all section headers and brief placeholder text using [MISSING ...] markers for unknown content.
Provide the scaffold link and indicate it's time to fill in each section.
If no access to artifacts:
Create a markdown file in the working directory. Name it appropriately (e.g., decision-doc.md, technical-spec.md).
Inform them that the initial structure with placeholders for all sections will be created.
Create file with the required spec header block, a Review Control Surface, all section headers, and [MISSING ...] markers as placeholder text. Include a history table in the 3-column format (Date | Author | Change) and a SessionId line at the bottom.
Confirm the filename has been created and indicate it's time to fill in each section.
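A minimal sketch of such a scaffold; every value is a placeholder and the section names depend on the agreed structure:

```markdown
**Date:** YYYY-MM-DD
**Status:** 🟡 Spec
**Scope:** [MISSING one-line scope statement]
---

## Review Control Surface
- Spec Variant: [MISSING]
- Goldstandard Status: [MISSING]
- Goal: [MISSING]
- In Scope: [MISSING]
- Out of Scope: [MISSING]
- Key Test/Harness Cases: [MISSING]
- Key Verification Commands: [MISSING]
- Open Decisions: [MISSING]
- Readiness Status: not ready (scaffold)

## Problem
[MISSING problem statement]

## Proposed Behavior
[MISSING core proposal]

## Acceptance Criteria
[MISSING acceptance criteria]

| Date | Author | Change |
|------|--------|--------|
| YYYY-MM-DD | Claude | Initial scaffold created with placeholders. |

SessionId: <session-id>
```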
For each section:
Step 1: Clarifying Questions
Announce work will begin on the [SECTION NAME] section. Ask 5-10 clarifying questions about what should be included:
Generate 5-10 specific questions based on context and section purpose.
Inform them they can answer in shorthand or just indicate what's important to cover.
Step 2: Brainstorming
For the [SECTION NAME] section, brainstorm [5-20] things that might be included, depending on the section's complexity. Look for:
- Context shared that might have been forgotten
- Angles or considerations not yet mentioned
Generate 5-20 numbered options based on section complexity. At the end, offer to brainstorm more if they want additional options.
Step 3: Curation
Ask which points should be kept, removed, or combined. Request brief justifications to help learn priorities for the next sections.
Provide examples:
- "Keep 1,4,7,9"
- "Remove 3 (duplicates 1)"
- "Remove 6 (audience already knows this)"
- "Combine 11 and 12"
If user gives freeform feedback (e.g., "looks good" or "I like most of it but...") instead of numbered selections, extract their preferences and proceed. Parse what they want kept/removed/changed and apply it.
Step 4: Gap Check
Based on what they've selected, ask if there's anything important missing for the [SECTION NAME] section.
Step 5: Drafting
Use str_replace to replace the placeholder text for this section with the actual drafted content.
Announce the [SECTION NAME] section will be drafted now based on what they've selected.
If using artifacts:
After drafting, provide a link to the artifact.
Ask them to read through it and indicate what to change. Note that being specific helps learning for the next sections.
If using a file (no artifacts):
After drafting, confirm completion.
Inform them the [SECTION NAME] section has been drafted in [filename]. Ask them to read through it and indicate what to change. Note that being specific helps learning for the next sections.
Key instruction for user (include when drafting the first section):
Provide a note: Instead of editing the doc directly, ask them to indicate what to change. This helps learning of their style for future sections. For example: "Remove the X bullet - already covered by Y" or "Make the third paragraph more concise".
Step 6: Iterative Refinement
As user provides feedback:
- Use str_replace to make edits (never reprint the whole doc)
- If using artifacts: Provide link to artifact after each edit
- If using files: Just confirm edits are complete
- If user edits doc directly and asks to read it: mentally note the changes they made and keep them in mind for future sections (this shows their preferences)
Continue iterating until user is satisfied with the section.
Quality Checking
After several consecutive refinement rounds with no substantial changes, ask if anything can be removed without losing important information.
When section is done, confirm [SECTION NAME] is complete. Ask if ready to move to the next section.
Repeat for all sections.
Near Completion
As approaching completion (80%+ of sections done), announce intention to re-read the entire document and check for:
- Flow and consistency across sections
- Redundancy or contradictions
- Anything that feels like "slop" or generic filler
- Whether every sentence carries weight
- Whether the Review Control Surface is present, short, and synchronized with the detailed sections
- Whether goal, scope, verification commands, and test/harness cases are understandable as the user's primary review path
Read entire document and provide feedback.
When all sections are drafted and refined:
Announce all sections are drafted. Indicate intention to review the complete document one more time.
Review for overall coherence, flow, completeness, content quality, and reader-calibrated findings.
Provide any final suggestions.
Ask if ready to move to Reader Testing, or if they want to refine anything else.
## Stage 3: Reader Testing
Reader Testing is optional for shared-ai-docs specs. Prefer the DocWorkflow review path (doc-review-autoresolve, child-spec-hardening, or spec-orchestrator) when the document is a delivery spec with Review Control Surface, DoR/DoD, parent/child coverage, or implementation gates. Use Reader Testing mainly for open-ended docs, proposals, audience-facing documentation, or when the user explicitly asks whether a fresh reader would understand the document.
Goal: Test the document with a fresh Claude (no context bleed) to verify it works for readers.
Instructions to user:
Explain that testing will now occur to see if the document actually works for readers. This catches blind spots - things that make sense to the authors but might confuse others.
### Testing Approach
If access to sub-agents is available (e.g., in Claude Code):
Perform the testing directly without user involvement.
Step 1: Predict Reader Questions
Announce intention to predict what questions readers might ask when trying to discover this document.
Generate 5-10 questions that readers would realistically ask.
Step 2: Test with Sub-Agent
Announce that these questions will be tested with a fresh Claude instance (no context from this conversation).
For each question, invoke a sub-agent with just the document content and the question.
Summarize what Reader Claude got right/wrong for each question.
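A sketch of such a sub-agent prompt; the exact wording is illustrative:

```markdown
You have no prior context about this project. Read the document below and answer the question.

Question: <one predicted reader question>

Also report:
- anything that was ambiguous or unclear
- knowledge or context the document assumes you already have

--- document content ---
<full document text>
```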
Step 3: Run Additional Checks
Announce additional checks will be performed.
Invoke sub-agent to check for ambiguity, false assumptions, contradictions.
Summarize any issues found.
Step 4: Report and Fix
If issues found:
Report that Reader Claude struggled with specific issues.
List the specific issues using the Reader-Calibrated Authoring and Findings format.
Indicate intention to fix these gaps.
Loop back to refinement for problematic sections.
If no access to sub-agents (e.g., claude.ai web interface):
The user will need to do the testing manually.
Step 1: Predict Reader Questions
Ask what questions people might ask when trying to discover this document. What would they type into Claude.ai?
Generate 5-10 questions that readers would realistically ask.
Step 2: Setup Testing
Provide testing instructions:
- Open a fresh Claude conversation: https://claude.ai
- Paste or share the document content (if using a shared doc platform with connectors enabled, provide the link)
- Ask Reader Claude the generated questions
For each question, instruct Reader Claude to provide:
- The answer
- Whether anything was ambiguous or unclear
- What knowledge/context the doc assumes is already known
Check if Reader Claude gives correct answers or misinterprets anything.
Step 3: Additional Checks
Also ask Reader Claude:
- "What in this doc might be ambiguous or unclear to readers?"
- "What knowledge or context does this doc assume readers already have?"
- "Are there any internal contradictions or inconsistencies?"
Step 4: Iterate Based on Results
Ask what Reader Claude got wrong or struggled with. Indicate intention to fix those gaps.
Loop back to refinement for any problematic sections.
Exit Condition (Both Approaches)
When Reader Claude consistently answers questions correctly and doesn't surface new gaps or ambiguities, the doc is ready.
Before declaring readiness, confirm that a reviewer can understand the important risks and proof points from goal, in scope, out of scope, verification commands, and test/harness cases without needing to reconstruct the whole middle of the spec.
## Final Review
When Reader Testing passes:
Announce the doc has passed Reader Claude testing. Before completion:
- Recommend they do a final read-through themselves - they own this document and are responsible for its quality
- Suggest double-checking any facts, links, or technical details
- Ask them to verify it achieves the impact they wanted
Ask if they want one more review, or if the work is done.
If user wants final review, provide it. Otherwise:
Announce document completion. Provide a few final tips:
- Consider linking this conversation in an appendix so readers can see how the doc was developed
- Use appendices to provide depth without bloating the main doc
- Update the doc as feedback is received from real readers
## Tips for Effective Guidance
Tone:
- Be direct and procedural
- Explain rationale briefly when it affects user behavior
- Don't try to "sell" the approach - just execute it
Handling Deviations:
- If user wants to skip a stage: Ask if they want to skip this and write freeform
- If user seems frustrated: Acknowledge this is taking longer than expected. Suggest ways to move faster
- Always give user agency to adjust the process
Context Management:
- Throughout, if context is missing on something mentioned, proactively ask
- Don't let gaps accumulate - address them as they come up
Artifact Management:
- Use create_file for drafting full sections
- Use str_replace for all edits
- Provide artifact link after every change
- Never use artifacts for brainstorming lists - that's just conversation
Quality over Speed:
- Don't rush through stages
- Each refinement round should make meaningful improvements
- The goal is a document that actually works for readers
## Pipeline Integration
This skill supports two valid downstream workflows:
- Workflow 1 (legacy-compatible): doc-coauthoring -> refine-plan (iterative) -> direct-mode implementation -> retro-plan (optional)
- Workflow 2 (current): doc-coauthoring -> spec-change-delivery (direct/OpenSpec) -> retro-plan (optional) -> spec-closeout (optional/formal)
Handoff to planning/delivery: When a document is ready enough to drive execution (blocking requirements questions resolved, or any remaining gaps explicitly marked non-blocking, and Reader Testing or the applicable DocWorkflow review path passed), inform the user they can either:
- continue with Workflow 1 via refine-plan (iterative plan-first path), or
- continue with Workflow 2 via spec-change-delivery (delivery-first path).
Default behavior:
- Keep the currently active workflow for an existing thread/spec.
- For new deliveries without a preference, recommend Workflow 2.
During handoff, carry forward:
- resolved requirements
- explicit non-blocking assumptions
- any remaining blockers that must go back to the spec before the plan can be implementation-ready
- a short Mini-Retro when the spec block was substantial or context may be lost
If a later planning pass exposes a new missing requirement or unresolved product decision, send that gap back to the spec instead of burying it as an implementation subtask.
Do not generate full retro sections here; full retros are handled by retro-plan. A short Mini-Retro is allowed for larger spec blocks, session endings, context compression risk, or skill handoff.