---
name: execution-guardrails
description: Cross-cutting quality guardrails for AI-assisted software work. Use when you want an explicit reminder to surface assumptions, prefer the simplest viable change, keep edits surgical, and define verifiable success criteria before brainstorming, specification, planning, implementation, refactoring, or review.
license: See LICENSE.txt in repository root
user-invocable: true
disable-model-invocation: true
argument-hint: "[task or artifact to inspect]"
---
# Execution Guardrails

Shared quality floor for the workflow. This skill does not replace a stage's primary skill; it reinforces how work should be carried out across brainstorm, spec, plan, TDD, and review.
## When to Use This Skill

Use this skill when:
- an agent is guessing instead of clarifying
- a plan or implementation is growing more abstract than the requirement justifies
- a diff starts touching unrelated code, comments, or formatting
- success criteria are vague ("make it work") and need to become testable
- you want an explicit quality reset before handing work to another agent
## Four Shared Guardrails

- **Assumptions explicit**: Separate facts, assumptions, and unknowns. If ambiguity materially changes the approach, stop and clarify or label the assumption.
- **Simplicity first**: Implement the smallest solution that satisfies the current requirement. Do not add speculative flexibility, abstraction, or configuration for future possibilities.
- **Surgical changes**: Touch only what the request requires. Clean up only the dead code your change creates. Do not perform drive-by refactors.
- **Verifiable success criteria**: Convert work into checks: tests, assertions, or explicit manual verification. Avoid vague definitions of done.
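As a hedged illustration of the last two guardrails, here is how a vague goal ("make the config parser work") might be converted into assertions. The `parse_config` function and its expected behavior are hypothetical, invented for this sketch:

```python
def parse_config(text: str) -> dict:
    """Minimal KEY=VALUE parser: the smallest solution for the current requirement.

    Deliberately no nesting, no type coercion, no escape handling; none of
    that is required yet (simplicity first).
    """
    result = {}
    for line in text.splitlines():
        line = line.strip()
        if line and not line.startswith("#"):
            key, _, value = line.partition("=")
            result[key.strip()] = value.strip()
    return result

# Verifiable success criteria expressed as checks, not as "it works":
assert parse_config("a=1\nb=2") == {"a": "1", "b": "2"}
assert parse_config("# comment only\n") == {}
assert parse_config("  key = value ") == {"key": "value"}
print("all success criteria pass")
```

The point is the shape, not the parser: each criterion is an executable check that a reviewer or downstream agent can rerun.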
## Anti-Hallucination Checks

Activate these additional checks before finalizing any artifact that downstream agents or humans will treat as authoritative (spec, plan, test plan, impact analysis):
- **Negative Space Check**: List all content in this output that the user may NOT have explicitly requested. Label each item [USER_REQUESTED], [INFERRED], or [ADDED_BY_AI]. Any [ADDED_BY_AI] item without justification is a Source Fabrication risk; surface it before proceeding.
- **Reference Grounding**: For every external function, API endpoint, library, standard, or service cited, state your confidence that it exists (HIGH / MEDIUM / LOW / UNVERIFIED). Surface all [UNVERIFIED] references to the user before treating them as actionable.
- **Confidence Flagging**: Mark any statement you are less than 80% confident about with [UNCERTAIN]. Do not skip this step because the artifact looks internally coherent; coherence does not equal factual accuracy.
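Because the checks above produce literal text tags, a pre-handoff pass can be partly mechanical. The scanner below is an illustrative sketch, not part of the skill; only the tag strings come from this document, and the sample artifact text is invented:

```python
# Tags defined by the anti-hallucination checks in this skill.
TAGS = ["[USER_REQUESTED]", "[INFERRED]", "[ADDED_BY_AI]", "[UNVERIFIED]", "[UNCERTAIN]"]

def scan_artifact(text: str) -> dict:
    """Count each guardrail tag so a reviewer can see what must be surfaced."""
    counts = {tag: text.count(tag) for tag in TAGS}
    # Fabricated or unverified items block handoff until surfaced to the user.
    counts["blocking"] = counts["[ADDED_BY_AI]"] + counts["[UNVERIFIED]"]
    return counts

artifact = (
    "Wrap the upload in a retry helper [ADDED_BY_AI].\n"
    "The service exposes /v2/bulk-import [UNVERIFIED].\n"
    "Latency should stay under 200ms [UNCERTAIN].\n"
)
report = scan_artifact(artifact)
print(report["blocking"])  # 2 items must be surfaced before handoff
```

A scanner cannot judge whether a tag is justified, only whether tagged items were surfaced; the judgment step stays with the reviewer.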
## How to Apply by Stage
- Brainstorm: separate confirmed requirements from assumptions; avoid prematurely collapsing options into one interpretation.
- Spec: keep assumptions, non-goals, and unresolved questions distinct from requirements and acceptance criteria.
- Plan: plan only current scope; every step needs a verification method; call out assumptions that could invalidate multiple phases.
- Code / TDD: prefer the smallest diff that makes the target test pass; reject speculative abstractions and unrelated edits.
- Review: explicitly flag hidden assumptions, overengineering, and diff-scope drift when present.
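The Code / TDD bullet can be sketched concretely. Everything here is hypothetical: a target test for a `slugify` helper, and the smallest diff that satisfies it:

```python
def slugify(title: str) -> str:
    # Smallest change that makes the target test pass: lowercase and
    # hyphenate on whitespace. No locale handling, no configurable
    # separators, no speculative options (those would be scope drift).
    return "-".join(title.lower().split())

# The target tests that drove the implementation:
assert slugify("Execution Guardrails") == "execution-guardrails"
assert slugify("  Spaced   Out  ") == "spaced-out"
print(slugify("Quality Gate"))  # quality-gate
```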
## Relationship to the Existing Workflow

Use this layering model:

- **Agent**: who does the work
- **Primary skill**: the main methodology for that stage
- **Execution guardrails**: shared constraints on how the work is performed
- **Quality gate**: `agentic-eval` / `gate-check` before handoff

Operationally:

- the always-on core lives in `copilot-instructions.md` and the core agents
- this skill is the manual fallback / explicit reload
- `agentic-eval` rubrics score whether the resulting artifact is safe to hand off
## Manual Invocation Examples

CLI:

- `/execution-guardrails check this plan for hidden assumptions and speculative scope`
- `/execution-guardrails review this diff for unrelated edits and overengineering`
- `/execution-guardrails help me turn this vague goal into verifiable success criteria`

VS Code:

- `/execution-guardrails review this spec for assumptions vs confirmed requirements`
## Recommended Output Format
When using this skill directly, structure the response as:
- Assumptions / Unknowns
- Simplicity Risks
- Scope / Diff Hygiene Risks
- Verification Gaps
- Source Fabrication Risks (negative space items tagged [ADDED_BY_AI])
- Unverified References (library functions, APIs, services tagged [UNVERIFIED])
- Uncertain Statements (items tagged [UNCERTAIN])
- Recommended correction
Keep corrections targeted. Do not rewrite the entire artifact unless the user asks.
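By way of illustration only, a filled-in response might look like the following. Every named plan, class, and API here is hypothetical:

```
Guardrail review: payment-retry plan

- Assumptions / Unknowns: assumes the gateway is idempotent [UNCERTAIN]
- Simplicity Risks: introduces a plugin system for a single retry policy
- Scope / Diff Hygiene Risks: step 4 reformats an unrelated module
- Verification Gaps: "retries work" has no test; propose one integration test
- Source Fabrication Risks: RetryOrchestrator class [ADDED_BY_AI], no justification given
- Unverified References: gateway.bulk_retry() [UNVERIFIED]
- Uncertain Statements: 30s default timeout [UNCERTAIN]
- Recommended correction: drop the plugin system; confirm bulk_retry exists before planning around it
```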