# science-pre-register
Formalize expectations before analysis to prevent post-hoc rationalization. Use after add-hypothesis or plan-pipeline and before running analysis — to state expectations or what would change the user's mind.
| field | value |
|---|---|
| name | science-pre-register |
| description | Formalize expectations before analysis to prevent post-hoc rationalization. Use after add-hypothesis or plan-pipeline and before running analysis — to state expectations or what would change the user's mind. |
Converted from Claude command /science:pre-register.
Before executing any research command:
Resolve project profile: Read science.yaml and identify the project's profile.
Use the canonical layout for that profile:
- research → doc/, specs/, tasks/, knowledge/, papers/, models/, data/, code/
- software → doc/, specs/, tasks/, knowledge/, plus native implementation roots such as src/ and tests/

Load role prompt: .ai/prompts/<role>.md if present, else references/role-prompts/<role>.md.
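As an illustrative sketch of the profile-to-layout mapping above (the dictionary and function names are hypothetical, not part of the Science tooling):

```python
# Hypothetical sketch: canonical directory roots per project profile,
# mirroring the layout described above.
CANONICAL_LAYOUT = {
    "research": ["doc/", "specs/", "tasks/", "knowledge/",
                 "papers/", "models/", "data/", "code/"],
    "software": ["doc/", "specs/", "tasks/", "knowledge/",
                 "src/", "tests/"],  # plus other native implementation roots
}

def layout_for(profile: str) -> list[str]:
    """Return the canonical roots for the profile declared in science.yaml."""
    return CANONICAL_LAYOUT[profile]
```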
Load the science-research-methodology and science-scientific-writing Codex skills. If native skill loading is unavailable, use codex-skills/INDEX.md to map canonical Science skill names to generated skill files and source paths.
Read specs/research-question.md for project context when it exists.
Load project aspects: Read aspects from science.yaml (default: empty list).
For each declared aspect, resolve the aspect file in this order:

1. aspects/<name>/<name>.md — canonical Science aspect
2. .ai/aspects/<name>.md — project-local aspect override or addition

If neither path exists (the project declares an aspect that isn't shipped with Science and has no project-local definition), do not block: log a single line like `aspect "<name>" declared in science.yaml but no definition found — proceeding without it` and continue. Suggest the user either (a) drop the aspect from science.yaml, (b) author it under .ai/aspects/<name>.md, or (c) align the name with one shipped under aspects/.
When executing command steps, incorporate the additional sections, guidance, and signal categories from loaded aspects. Aspect-contributed sections are whole sections inserted at the placement indicated in each aspect file.
Check for missing aspects: Scan for structural signals that suggest aspects the project could benefit from but hasn't declared:
| Signal | Suggests |
|---|---|
| Files in specs/hypotheses/ | hypothesis-testing |
| Files in models/ (.dot, .json DAG files) | causal-modeling |
| Workflow files, notebooks, or benchmark scripts in code/ | computational-analysis |
| Package manifests (pyproject.toml, package.json, Cargo.toml) at project root with project source code (not just tool dependencies) | software-development |
If a signal is detected and the corresponding aspect is not in the aspects list, briefly note it to the user before proceeding:

"This project has [signal] but the [aspect] aspect isn't enabled. This would add [brief description of what the aspect contributes]. Want me to add it to science.yaml?"

If the user agrees, add the aspect to science.yaml and load the aspect file before continuing. If they decline, proceed without it.
Only check once per command invocation — do not re-prompt for the same aspect if the user has previously declined it in this session.
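A minimal sketch of the signal scan above (the glob patterns and function names are assumptions for illustration; the real check also inspects file contents, e.g. whether a manifest describes project source rather than tool dependencies):

```python
from pathlib import Path

# Hypothetical structural signals, mirroring the table above.
SIGNALS = {
    "hypothesis-testing": lambda r: any((r / "specs" / "hypotheses").glob("*")),
    "causal-modeling": lambda r: any((r / "models").glob("*.dot"))
                                 or any((r / "models").glob("*.json")),
    "computational-analysis": lambda r: any((r / "code").glob("*.ipynb")),
    "software-development": lambda r: any(
        (r / m).is_file()
        for m in ("pyproject.toml", "package.json", "Cargo.toml")),
}

def suggest_aspects(root: Path, declared: set[str]) -> list[str]:
    """Aspects whose signal is present but that aren't declared yet."""
    return [a for a, hit in SIGNALS.items() if a not in declared and hit(root)]
```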
Resolve templates: When a command says "Read .ai/templates/<name>.md",
check the project's .ai/templates/ directory first. If not found, read from
templates/<name>.md. If neither exists, warn the
user and proceed without a template — the command's Writing section provides
sufficient structure.
Resolve science CLI invocation: When a command says to run science, prefer the project-local install path: `uv run science <command>`. This assumes the root pyproject.toml includes science as a dev dependency installed via `uv add --dev --editable "$SCIENCE_TOOL_PATH"` (the distribution is science; the entry point it installs is science). If that fails (no root pyproject.toml, or science not in dependencies), fall back to:

`uv run --with <science-plugin-root>/science science <command>`
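As a sketch, building the preferred invocation with the fallback (the function name is hypothetical, and the substring check on pyproject.toml is a stand-in for a real dependency lookup):

```python
from pathlib import Path

def science_cmd(args: list[str], root: Path, plugin_root: str) -> list[str]:
    """Build argv for the science CLI: project-local install if the root
    pyproject.toml declares it, else an ad-hoc `uv run --with` invocation."""
    pyproject = root / "pyproject.toml"
    if pyproject.is_file() and "science" in pyproject.read_text():
        return ["uv", "run", "science", *args]
    return ["uv", "run", "--with", f"{plugin_root}/science", "science", *args]
```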
Formalize the user's expectations, decision criteria, and null-result plans before analysis begins.
Follow the Science Codex Command Preamble before executing this skill. Use the research-assistant role prompt.
Additionally:
- Read .ai/templates/pre-registration.md first; if not found, read templates/pre-registration.md.
- Read the hypotheses under specs/hypotheses/.
- Read the science inquiry list (if available).
- Read the plans under doc/plans/ (if any).
- Read existing doc/meta/pre-registration-*.md to avoid duplication.
- Read doc/plans/*-analysis-plan.md when the user or context references analysis-plan:<slug>.

Have a natural conversation with the user to formalize their expectations. The questions below are guidelines — use your judgment about which are needed based on how much context the user has already provided.
(Relevant sources: hypotheses under specs/hypotheses/, plans under doc/plans/.)

If this is a data-analysis pre-registration and no linked analysis-plan:<slug>
exists, recommend science-plan-analysis when any of these are underspecified:
input QA, preprocessing/normalization checks, independent unit, estimand,
power/resolution limit, or sensitivity-arbitration rule. The recommendation is
advisory, not a hard dependency.
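The advisory check can be sketched as a simple completeness test (the field names below are hypothetical labels for the items listed above, not a schema the tooling defines):

```python
# Hypothetical field names mirroring the underspecification checklist above.
REQUIRED_PLAN_FIELDS = (
    "input_qa", "preprocessing_checks", "independent_unit",
    "estimand", "power_or_resolution_limit", "sensitivity_arbitration_rule",
)

def underspecified(plan: dict) -> list[str]:
    """Fields absent or empty; any hit -> recommend science-plan-analysis."""
    return [f for f in REQUIRED_PLAN_FIELDS if not plan.get(f)]
```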
For each hypothesis under test:
Pilot experiments: If this is a pilot (1-2 seeds, small N, exploratory scope), explicitly state what it CAN and CANNOT establish. A pilot can suggest directions and calibrate effect sizes but cannot confirm or refute a hypothesis. Frame decision criteria accordingly — a pilot's null result means "insufficient signal to justify scaling up", not "hypothesis is wrong."
Skip this if the analysis type doesn't have a meaningful "too good" threshold.
If the primary metric has changed from prior analyses, or if the metric choice is non-obvious:
If the experimental design involves non-obvious sampling decisions (stratified sampling, subsampling from a larger population, context selection), document the rationale and trade-offs:
Omit when sampling is straightforward (e.g., "use all available data").
After the conversation, write the pre-registration document using .ai/templates/pre-registration.md if present, falling back to templates/pre-registration.md.
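A minimal sketch of the frontmatter the written document carries, for a committed pre-registration (the helper name is hypothetical; the field set follows the template conventions):

```python
from datetime import date

def preregistration_frontmatter(slug: str, related: list[str],
                                spec: str = "") -> str:
    """YAML frontmatter for doc/meta/pre-registration-<slug>.md; `related`
    must be populated, since interpret-results searches on it."""
    rel = ", ".join(f'"{r}"' for r in related)
    return "\n".join([
        "---",
        f'id: "pre-registration:{slug}"',
        'type: "pre-registration"',
        'status: "committed"',
        f'committed: "{date.today().isoformat()}"',
        f'spec: "{spec}"',
        f"related: [{rel}]",
        "---",
    ])
```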
Use the hypothesis ID, inquiry slug, or task ID as the basis:
doc/meta/pre-registration-<slug>.md (default), or doc/pre-registrations/<slug>.md if the project has adopted that placement.

Frontmatter fields:

- id: "pre-registration:<slug>"
- type: "pre-registration"
- status: "committed" once the user has signed off on the criteria
- committed: "<YYYY-MM-DD>" — the date the criteria are locked
- spec: "<path-to-design-doc>" — optional; empty string if no paired design doc exists
- related: [...] — hypothesis IDs, inquiry slugs, and/or task IDs this pre-reg covers

The related field is what interpret-results searches on, so it must be populated. Write the document to doc/meta/pre-registration-<slug>.md (or doc/pre-registrations/<slug>.md if the project uses that placement). The frontmatter must declare type: "pre-registration" and id: "pre-registration:<slug>" per the template.

Next steps:

- science-plan-pipeline — if no pipeline plan exists yet
- science-bias-audit — to check for blind spots before running the analysis
- science-discuss — to stress-test the expectations themselves
- git add -A && git commit -m "doc: pre-register expectations for <slug>"

Reflect on the template and workflow used above.
If you have feedback (friction, gaps, suggestions, or things that worked well), report each item via:
```
science feedback add \
  --target "command:pre-register" \
  --category <friction|gap|guidance|suggestion|positive> \
  --summary "<one-line summary>" \
  --detail "<optional prose>"
```
Guidelines:

- For feedback about the template itself rather than the command, use --target "template:<name>" instead.