| name | doc-review-autoresolve |
| description | Automatically resolve review findings in specs/docs, including strict content-quality review, and rerun review until no autonomous inconsistencies remain. USE WHEN the user asks to review docs/specs and wants findings fixed immediately (e.g. "reviewe", "fix findings", "reviewe nochmals", "passe Findings an"). Prefer autonomous fix + re-review loops and only ask the user when a real decision or missing requirement blocks safe resolution. |
doc-review-autoresolve
Purpose
Resolve documentation/spec review findings with minimal back-and-forth:
- review,
- fix autonomous findings immediately,
- re-review in the same run,
- assign an implementation-readiness verdict after spec edits,
- repeat until only true decision blockers remain.
Shared Workflow Contract
Use /Users/dh/Documents/DanielsVault/_shared/shared-ai-docs/docs/doc-workflow.md as the canonical reference for:
- marker meaning ([MISSING ...], [DECISION ...], [REVIEW ...], [BLOCKED ...]),
- Session Briefing,
- Review Control Surface,
- Parallel Work Control Surface,
- Spec Goldstandard,
- Mini-Retro,
- DoR/DoD expectations,
- history/session requirements.
This skill must not introduce conflicting gate definitions.
Trigger Conditions
Use this skill when the user asks to:
- review specs/docs,
- apply review findings,
- resolve inconsistencies automatically,
- re-review after edits in the same thread.
Typical prompts:
- "reviewe die spec"
- "passe es an"
- "reviewe nochmals"
- "fix die findings"
- "mach das automatisch"
For larger, resumed, or context-compressed review work, begin by clarifying or inferring the Session Briefing: active mode/skill, source-of-truth files, review goal, non-goals, in-scope fixes, expected output, verification/review path, and open decisions. Ask only when a blocker cannot be inferred safely.
Default Mode (Auto-Resolve)
Default behavior is autonomous resolution:
- Do not stop after listing findings when a safe textual/spec correction is possible.
- Apply fixes directly.
- Re-run review immediately.
- Continue until no autonomous findings remain.
- Ensure the spec's Review Control Surface exists and matches the detailed sections on spec variant, Goldstandard status, goal, scope, cases, commands, open decisions, and readiness.
- When a spec proposes parallel work, ensure the Parallel Work Control Surface names the child/work block, lane mode (spec/doc hardening or implementation), owner/agent, allowed write-sets, shared/read-only files, dependencies, verification commands, integration owner, and merge/sync order (see the sketch after this list).
- Do not invent product behavior, data fields, acceptance criteria, or architecture decisions that are not already implied by the spec and its normative sources.
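A minimal sketch of what one lane entry on such a Parallel Work Control Surface could record, written as a Python dict purely for illustration; every key and value below is an assumption, not a prescribed schema or a real file.

```python
# Illustrative lane entry only; none of these names come from an actual spec.
lane = {
    "work_block": "S3-signed-bundle",
    "lane_mode": "implementation",              # or "spec/doc hardening"
    "owner": "agent-b",
    "allowed_write_set": ["specs/s3-bundle-spec.md", "src/bundle/"],
    "shared_read_only": ["docs/doc-workflow.md"],
    "depends_on": ["S1-harness", "S2-manifest"],
    "verification_commands": ["pytest tests/bundle"],
    "integration_owner": "agent-a",
    "merge_order": 2,
}
print(lane["allowed_write_set"])
```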
Review-Only Override
If the user explicitly asks to review first, to avoid edits, to make proposals only, or says "keine Edits" ("no edits"), run in review-only mode for that turn:
- Report prioritized findings and concrete proposed fixes.
- Do not patch files.
- Still assign a readiness verdict when enough information is available.
- Offer the smallest safe follow-up edit set, but wait for user approval before applying it.
Auto-resolve resumes only after the user asks to apply, fix, or continue autonomously.
Stop-and-Ask Boundary
Ask the user only if one of these applies:
- Multiple valid fixes with materially different product behavior.
- Security/legal/compliance/policy implications.
- Missing requirement or unresolved decision that cannot be inferred safely.
- Any marker that requires owner input to close ([MISSING ...], [DECISION ...], blocking [REVIEW ...]).
- A content or semantic problem has multiple plausible domain fixes and the spec does not already choose one.
If none apply, resolve findings without waiting for confirmation.
Review-to-Resolution Loop
1. Collect findings
- Review target docs/specs with file+line references.
- Prioritize by severity/risk.
- Include formal checklist findings, content-quality findings, and domain/semantic findings.
- Write findings for a reader who may only have checked goal, in-scope, out-of-scope, verification commands, and test cases.
- Treat a missing or stale Review Control Surface as a finding for specs.
2. Classify each finding
- autonomous: can be fixed safely now.
- needs-decision: blocked by a requirement or decision gap.
3. Patch autonomous findings
- Apply minimal, targeted edits.
- Preserve scope and existing accepted decisions.
4. Re-review immediately
- Re-run focused review on touched sections and cross-references.
- Catch regressions/secondary inconsistencies.
5. Assign readiness verdict
- After every spec edit or review loop, report IMPLEMENTATION READY, READY WITH NON-BLOCKING NOTES, or NOT IMPLEMENTATION READY.
- Do this automatically; do not wait for the user to ask "is it implementation-ready?"
- Base the verdict on the shared DoR, parent/child conformance, open blockers, content-quality findings, acceptance criteria, and verification commands.
6. Repeat until stable
- Stop when either:
- no findings remain, or
- only needs-decision findings remain.
7. Report clearly
- What was fixed.
- What remains and why user input is required.
- Final implementation-readiness verdict.
- Final state: clean vs pending decisions.
- Mini-Retro after larger review loops or before context/handoff loss: decisions, changes, open items, missing evidence, skill/workflow friction, and whether to continue in this session or start a new one.
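Read as control flow, the seven steps above amount to the following minimal Python sketch. It is illustrative only: the finding records, the `run_review`/`apply_fix` callables, and the verdict strings stand in for real review tooling and do not exist anywhere.

```python
def review_to_resolution(run_review, apply_fix):
    """Illustrative control flow for the loop above; not real tooling."""
    fixed = []
    while True:
        findings = run_review()                                           # step 1: collect findings
        autonomous = [f for f in findings if f["kind"] == "autonomous"]   # step 2: classify
        blocked = [f for f in findings if f["kind"] == "needs-decision"]
        if not autonomous:                                                # step 6: stable -> stop
            break
        for finding in autonomous:                                        # step 3: minimal, targeted edits
            apply_fix(finding)
            fixed.append(finding)
        # the next run_review() call plays the role of the focused re-review (step 4)
    verdict = "IMPLEMENTATION READY" if not blocked else "NOT IMPLEMENTATION READY"  # step 5
    return {"fixed": fixed, "open_decisions": blocked, "verdict": verdict}           # step 7


if __name__ == "__main__":
    reviews = [
        [{"id": "F1", "kind": "autonomous"}, {"id": "F2", "kind": "needs-decision"}],
        [{"id": "F2", "kind": "needs-decision"}],  # re-review: only the decision blocker remains
    ]
    result = review_to_resolution(run_review=lambda: reviews.pop(0), apply_fix=lambda f: None)
    print(result["verdict"])  # NOT IMPLEMENTATION READY, because F2 needs a user decision
```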
Reader-Calibrated Finding Format
Findings must be understandable without assuming the user has read the full middle of the spec. The user often reviews goal, in scope, out of scope, verification commands, and test cases first; write findings so they make sense from that reading path.
For every non-trivial finding, include:
- Short finding: one sentence in plain language.
- Why it matters: what could go wrong in implementation or review if this stays as-is.
- Where to check: point to the relevant goal/scope/verification/test-case section, not only the deep contract section.
- Concrete example: a small example of the missing/ambiguous artifact, command, status, case, or behavior.
- Needed action: what can be fixed autonomously, or what decision the user must make.
Avoid findings that only say things like "contract depth missing", "fixture strategy unclear", or "status provenance incomplete". Expand them:
Finding: The S3 bundle contract says a manifest must be signed, but the verification cases do not say what a valid or invalid signed manifest looks like.
Why it matters: An implementation could pass by checking only that a `signature` field exists, while never proving that wrong signatures block installation.
Where to check: S3 goal/scope says "signed Free-Entry bundle"; Verification Commands and Harness Cases should prove valid and invalid signature behavior.
Example: Add one small embedded manifest example or require fixtures such as `valid-free-entry-bundle.yaml` and `invalid-signature-bundle.yaml`.
Needed action: Autonomous if fixture names and fields follow existing S1/S2 harness conventions; ask the user only if the signing model itself is undecided.
If a finding is technical, translate the technical term once. For example, define "canonical serialization" as "the exact byte representation that is hashed or signed, so two systems calculate the same hash for the same logical data."
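To make that definition concrete, here is a minimal standard-library Python sketch of one common canonicalization choice (sorted keys, fixed separators, UTF-8 bytes); the payload field names are illustrative, and a real spec may mandate a different canonical form.

```python
import hashlib
import json

# Two logically identical payloads with different key order; field names are illustrative.
payload_a = {"bundle": "free-entry", "version": 2}
payload_b = {"version": 2, "bundle": "free-entry"}

def canonical_hash(payload):
    # Canonical serialization: sorted keys, fixed separators, UTF-8 bytes.
    canonical = json.dumps(payload, sort_keys=True, separators=(",", ":")).encode("utf-8")
    return hashlib.sha256(canonical).hexdigest()

# Without canonicalization the serialized bytes differ; with it, both hashes match.
assert json.dumps(payload_a) != json.dumps(payload_b)
assert canonical_hash(payload_a) == canonical_hash(payload_b)
print(canonical_hash(payload_a))
```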
Content Quality Review Gate
Do not limit reviews to "does the spec have the expected sections?" A spec can be formally complete and still wrong.
Always perform a strict content-level review of the whole intended change, using established requirements-quality criteria as a checklist. Data contracts are only one part of this review.
Review the spec for:
- Correctness and domain fit: The described behavior matches the stated user/problem context and does not solve the wrong problem.
- Necessity and scope discipline: Requirements are needed for the goal; extras, hidden scope creep, and accidental non-goals are visible.
- Completeness: Normal paths, edge cases, failure paths, dependencies, constraints, and open assumptions are covered or explicitly marked.
- Consistency: Requirements, examples, terminology, statuses, parent/child scope, acceptance criteria, and verification commands do not contradict each other.
- Unambiguity: A competent implementer would not have to choose between multiple materially different meanings.
- Feasibility and implementability: The requested behavior is technically plausible in the declared environment and does not depend on impossible timing, missing systems, or undefined capabilities.
- Verifiability/testability: Each important requirement can be proven by inspection, test, demonstration, analysis, or explicit acceptance evidence.
- Traceability: Important requirements connect to source intent, parent scope, acceptance criteria, verification commands, and downstream artifacts.
- Atomicity and abstraction level: Requirements are not bundled so broadly that partial implementation can masquerade as completion, and they do not over-prescribe implementation unless that is the actual constraint.
- Operational and lifecycle fit: Migration, fallback, observability, rollback, compatibility, ownership, and closeout evidence are present when the change needs them.
Then perform domain-specific semantic checks for the actual system flow:
- Identify the core user journey, system flow, or operational lifecycle in the spec.
- For every data contract, check whether produced data remains interpretable downstream.
- For every artifact, check whether it includes enough provenance, identity, version, status, and relationship fields to be useful later.
- For every state transition, check that the source event, allowed statuses, failure statuses, and blocking behavior are defined.
- For every security/privacy rule, check that leaks, unsafe defaults, and forbidden flows are testable.
- For every acceptance criterion, ask whether an implementation could pass the written test while still violating the intended behavior.
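As a tiny illustration of that last check, reusing the signed-manifest example from the finding format above: the weak check below passes even with a wrong signature, the stricter one does not. Both functions and all field values are hypothetical stand-ins; real signature verification is cryptographic, not string comparison.

```python
# Toy stand-ins only: a criterion that merely detects a field vs. one that enforces behavior.
def naive_check(manifest):
    return "signature" in manifest                              # passes for any value, even a wrong one

def strict_check(manifest, expected_signature):
    return manifest.get("signature") == expected_signature      # wrong signatures block acceptance

manifest = {"bundle": "free-entry", "signature": "WRONG"}

print(naive_check(manifest))                                    # True  -> weak criterion satisfied
print(strict_check(manifest, expected_signature="abc123"))      # False -> intended behavior enforced
```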
Treat these as high-priority findings when present:
- produced data that lacks the identity/context/provenance required by the spec's own downstream flow,
- hashes/signatures without canonical serialization rules,
- manifests without version/source/provenance/status fields,
- status fields without allowed values or failure states,
- "success" paths that can skip required runtime/harness assertions,
- local/online fallback paths that produce incompatible artifacts,
- requirements that are ambiguous, infeasible, untestable, internally inconsistent, or not traceable to the stated goal,
- missing failure/edge-case behavior that makes the intended feature impossible to implement safely,
- child spec scope that contradicts or silently drops parent-scope intent,
- parallel-lane plans that leave overlapping write-sets, shared-file ownership, contract-change sequencing, verification commands, or integration order ambiguous.
Use domain examples only as examples. For instance, in a survey contract, answers may need stable question identity if later steps must interpret answers by question. Do not turn that specific field into a universal requirement for unrelated specs.
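To keep that survey illustration concrete (and only as an illustration, per the caveat above), the sketch below contrasts answers carrying stable question identity with a bare positional list; all names are made up.

```python
# Purely illustrative payloads; the field names are hypothetical.
answers_positional = ["yes", "blue", "7"]              # meaning depends entirely on question order

answers_with_identity = [
    {"question_id": "q-consent", "value": "yes"},
    {"question_id": "q-color",   "value": "blue"},
    {"question_id": "q-rating",  "value": "7"},
]

# Only the second form lets a downstream step interpret answers by question.
rating = next(a["value"] for a in answers_with_identity if a["question_id"] == "q-rating")
print(rating)  # 7
```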
Relationship To Doc Co-Authoring
doc-coauthoring creates and expands the spec: requirements, test cases, verification commands, examples, and user-facing decisions.
doc-review-autoresolve is the cleanup and consistency loop after that authoring work:
- Reorder or clarify existing content.
- Align repeated terminology, statuses, acceptance criteria, and verification commands.
- Resolve contradictions where the correct fix is already implied by the spec or normative sources.
- Add missing cross-references, parent/child conformance rows, and readiness verdicts when they follow from existing content.
- Stop when a real product, scope, architecture, security, legal, or data-contract decision is required.
This skill may run directly after doc-coauthoring. Its goal is to make the authored spec coherent and implementation-ready where possible, not to hallucinate missing product decisions.
When invoked after child-spec-hardening, treat the hardened child spec, parent conformance table, normative contract, harness cases, verification commands, DoR/DoD, and closeout sync targets as the review surface. Fix autonomous drift and inconsistencies, but do not invent missing contract decisions or mark a child ready when hardening gates are absent.
Spec Hygiene Rules
When touching spec files:
- keep header contract intact,
- keep the Review Control Surface synchronized with detailed sections, spec variant, Goldstandard status, and final readiness verdict,
- keep any Parallel Work Control Surface synchronized with detailed dependencies, write-set sections, shared-file rules, verification commands, and integration order,
- preserve SessionId,
- append one concise history row for meaningful changes,
- do not silently weaken acceptance/verification gates,
- keep parent/child specs consistent where one references the other as normative source,
- after changes, add or report a readiness verdict so the user does not need a separate follow-up question.
Output Contract
After each run, report:
- Applied fixes (file + line references).
- Residual findings requiring user decision (if any).
- Content quality review result (no content blockers, or a list of content blockers including domain/data-contract blockers when present).
- Review Control Surface result (present and consistent, updated, or remaining blocker).
- Readiness verdict (IMPLEMENTATION READY, READY WITH NON-BLOCKING NOTES, or NOT IMPLEMENTATION READY).
- Re-review result (no findings, or a list of remaining blockers).
- Mini-Retro when the review loop was substantial, ended a session, or hands off to another skill.
- Smallest next step only when user input is needed.
Never claim "done" while unresolved blocking decision findings remain.