science-interpret-results
// Interpret analysis results and feed findings back into the research framework. Use when the user has pipeline output or findings to evaluate against propositions/hypotheses and update project priorities.
| name | science-interpret-results |
| description | Interpret analysis results and feed findings back into the research framework. Use when the user has pipeline output or findings to evaluate against propositions/hypotheses and update project priorities. |
Converted from Claude command /science:interpret-results.
Before executing any research command:
Resolve project profile: Read science.yaml and identify the project's profile.
Use the canonical layout for that profile:
- research → doc/, specs/, tasks/, knowledge/, papers/, models/, data/, code/
- software → doc/, specs/, tasks/, knowledge/, plus native implementation roots such as src/ and tests/

Load role prompt: .ai/prompts/<role>.md if present, else references/role-prompts/<role>.md.
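For example, a minimal way to check the declared profile, assuming science.yaml stores it under a top-level profile key and yq is available (both are assumptions, not part of the canonical layout):

# Read the declared profile from science.yaml (key name assumed).
profile=$(yq '.profile' science.yaml)
echo "project profile: ${profile}"   # expected: research or software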
Load the science-research-methodology and science-scientific-writing Codex skills. If native skill loading is unavailable, use codex-skills/INDEX.md to map canonical Science skill names to generated skill files and source paths.
Read specs/research-question.md for project context when it exists.
Load project aspects: Read aspects from science.yaml (default: empty list).
For each declared aspect, resolve the aspect file in this order:
1. aspects/<name>/<name>.md — canonical Science aspects
2. .ai/aspects/<name>.md — project-local aspect override or addition

If neither path exists (the project declares an aspect that isn't shipped with Science and has no project-local definition), do not block: log a single line like aspect "<name>" declared in science.yaml but no definition found — proceeding without it, and continue. Suggest the user either (a) drop the aspect from science.yaml, (b) author it under .ai/aspects/<name>.md, or (c) align the name with one shipped under aspects/.
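A small shell sketch of that resolution order, assuming the declared aspects sit under an aspects key in science.yaml and yq is available (both assumptions):

for name in $(yq '.aspects[]' science.yaml 2>/dev/null); do
  if [ -f "aspects/${name}/${name}.md" ]; then
    echo "aspect ${name}: aspects/${name}/${name}.md"      # canonical Science aspect
  elif [ -f ".ai/aspects/${name}.md" ]; then
    echo "aspect ${name}: .ai/aspects/${name}.md"          # project-local override or addition
  else
    echo "aspect \"${name}\" declared in science.yaml but no definition found — proceeding without it"
  fi
done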
When executing command steps, incorporate the additional sections, guidance, and signal categories from loaded aspects. Aspect-contributed sections are whole sections inserted at the placement indicated in each aspect file.
Check for missing aspects: Scan for structural signals that suggest aspects the project could benefit from but hasn't declared:
| Signal | Suggests |
|---|---|
| Files in specs/hypotheses/ | hypothesis-testing |
| Files in models/ (.dot, .json DAG files) | causal-modeling |
| Workflow files, notebooks, or benchmark scripts in code/ | computational-analysis |
| Package manifests (pyproject.toml, package.json, Cargo.toml) at project root with project source code (not just tool dependencies) | software-development |
If a signal is detected and the corresponding aspect is not in the aspects list,
briefly note it to the user before proceeding:
"This project has [signal] but the
[aspect]aspect isn't enabled. This would add [brief description of what the aspect contributes]. Want me to add it toscience.yaml?"
If the user agrees, add the aspect to science.yaml and load the aspect file
before continuing. If they decline, proceed without it.
Only check once per command invocation — do not re-prompt for the same aspect if the user has previously declined it in this session.
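An illustrative version of that signal scan; the file patterns are examples drawn from the table, not an exhaustive heuristic:

ls specs/hypotheses/ 2>/dev/null | grep -q . && echo "suggests: hypothesis-testing"
ls models/*.dot models/*.json >/dev/null 2>&1 && echo "suggests: causal-modeling"
find code -name '*.ipynb' -o -name 'Snakefile' 2>/dev/null | grep -q . && echo "suggests: computational-analysis"
ls pyproject.toml package.json Cargo.toml >/dev/null 2>&1 && echo "suggests: software-development (confirm it is project source, not just tool dependencies)"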
Resolve templates: When a command says "Read .ai/templates/<name>.md",
check the project's .ai/templates/ directory first. If not found, read from
templates/<name>.md. If neither exists, warn the
user and proceed without a template — the command's Writing section provides
sufficient structure.
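A sketch of that fallback, using the interpretation template as the example name:

name=interpretation                      # or interpretation-dev, depending on mode
if [ -f ".ai/templates/${name}.md" ]; then
  template=".ai/templates/${name}.md"    # project-local template wins
elif [ -f "templates/${name}.md" ]; then
  template="templates/${name}.md"        # shipped template
else
  echo "warning: no ${name} template found; using the Writing section structure" >&2
  template=""
fi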
Resolve science CLI invocation: When a command says to run science,
prefer the project-local install path: uv run science <command>.
This assumes the root pyproject.toml includes science as a dev
dependency installed via uv add --dev --editable "$SCIENCE_TOOL_PATH"
(the distribution is science; the entry point it installs is science).
If that fails (no root pyproject.toml or science not in dependencies),
fall back to:
uv run --with <science-plugin-root>/science science <command>
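For instance, the health check used later in this command could be invoked with that fallback chain (same <science-plugin-root> placeholder as above):

uv run science health --project-root . --format json \
  || uv run --with <science-plugin-root>/science science health --project-root . --format json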
Interpret the results specified by the user input and update the project in a proposition/evidence-centric way.
In this project, results do not automatically prove or refute a hypothesis. They shift support, dispute, and uncertainty for specific propositions.
If no argument is provided, ask the user to describe their findings or point to a results file.
Follow the Science Codex Command Preamble before executing this skill. Use the research-assistant role prompt.
Additionally:
- Read docs/proposition-and-evidence-model.md.
- Read .ai/templates/interpretation.md first; if not found, read templates/interpretation.md.
- Review existing hypotheses in specs/hypotheses/.
- Review open questions in doc/questions/.
- Review prior interpretations in doc/interpretations/.
- Use uv run science inquiry show "<slug>" --format json for inquiry context.
The user input may be:

- A results file, a result directory, or a datapackage.json in a result directory. If given a directory, scan for result files and summarize what is available. The manifest provides entity cross-references, config snapshot, and resource listing; load the manifest to identify which questions/hypotheses the run addresses, then interpret the results resources.
- For dev-mode (infrastructure) work, use templates/interpretation-dev.md (see Writing below) — the empirical-mode sections are dead weight for infrastructure work.
- For conceptual mode, a doc/discussions/*.md file.

Always note the mode at the top of the output when not in standard write mode.
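When the input is a datapackage.json manifest, a minimal sketch of inspecting it; this assumes a Frictionless-style resources array with path entries, which may differ from this project's actual manifest schema:

jq -r '.resources[].path' datapackage.json    # list the result resources the run produced
jq . datapackage.json | less                  # skim cross-references and the config snapshot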
When interpreting multiple tasks jointly or building on a prior interpretation,
list earlier interpretation documents in prior_interpretations as a narrative
breadcrumb. This field is not the machine-readable conclusion chain.
For needs-review resolution, use first-class graph relations:
- sci:amends when the new conclusion revises, narrows, qualifies, or extends an older conclusion without replacing it.
- sci:supersedes when the new conclusion replaces the older conclusion as the current canonical reading. In this case, also mark the old conclusion status: superseded.

Do not use sci:supersedesClaim for conclusion replacement. That predicate is reserved for falsification records.
Example frontmatter on the new interpretation:
relations:
  - predicate: "sci:amends"
    target: "interpretation:old"

or:

relations:
  - predicate: "sci:supersedes"
    target: "interpretation:old"
When a result is being interpreted because an epistemic entity was flagged
needs-review, keep the review timestamp separate from the conclusion change:
- Check the sci:triggeredBy upstream sources, and nearby prior conclusions.
- If the conclusion stands, run science entity review <target-ref> --note "Reviewed against <source>; no standing change."
- If the conclusion changes, author the new interpretation with sci:amends or sci:supersedes, and only then run science entity review <target-ref> --note "Reconsidered; see interpretation:<new>."
- <target-ref> is the flagged entity, not the newly authored conclusion.

Freshness remains a review prompt; it does not mutate standing.
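Putting the two paths together (all refs are placeholders from the commands above; the amends/supersedes relation is added in the interpretation's frontmatter):

# Conclusion stands: record the review only.
uv run science entity review <target-ref> --note "Reviewed against <source>; no standing change."
# Conclusion changes: author the new interpretation first, add sci:amends or sci:supersedes
# to its frontmatter, and only then record the review on the flagged entity.
uv run science interpretation create "<summary>" --input <data-package-ref> --related <finding-or-proposition-ref>
uv run science entity review <target-ref> --note "Reconsidered; see interpretation:<new>."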
Extract the main findings and classify each as:
- strong
- suggestive
- null
- ambiguous
- methodological
- descriptive — structural or qualitative findings from exploratory/visualization analyses where statistical testing is not applicable (e.g., UMAP cluster structure, k-mer landscape patterns). Distinct from suggestive: the finding is qualitative by nature, not merely weak.
- conceptual — (conceptual mode only) insights from discussion, synthesis, or reasoning that reframe understanding without new empirical evidence

Also identify the evidence type where possible:

- literature_evidence
- empirical_data_evidence
- simulation_evidence
- benchmark_evidence
- expert_judgment
- negative_result

Include effect sizes, uncertainty intervals, and sample counts where available.
Conceptual mode adaptation: Most findings will be expert_judgment or literature_evidence. Instead of effect sizes and sample counts, characterize each insight by:
For each relevant hypothesis or inquiry, ask:
When a result bundle mixes levels, split them explicitly:
Prefer outputs like:
Avoid outputs like:
For each relevant open question:
Conceptual mode: Skip the empirical quality checks below. Instead, assess:
Then proceed to Step 5.
Empirical modes (write/update/dev): Before updating beliefs, check:
- whether the finding is better classified as methodological

If the finding is fragile, say so explicitly.
Also ask:
- Should limitations be captured in a measurement_model rather than prose-only caveats?
- Does the evidence share an independence_group with prior evidence?
- Does the result bear on a rival_model_packet and its current_working_model?

Aggregator-circularity check. If "external validation" comes from a literature-aggregating resource (Open Targets, ChEMBL, DrugBank, PharmGKB, DisGeNET, OMIM, etc.), treat the agreement as partly circular: the resource's evidence pool may already include the project's own findings or the same upstream studies. Mitigation: record the agreement as redundant-with-prior rather than independent corroboration.

Suspiciously good results: When results substantially exceed pre-registered upper bounds (observed >> expected), do not accept them uncritically. Before proceeding, re-read the pre-registration (doc/meta/pre-registration-*.md) and compare observed vs. expected range explicitly.

When graph updates are warranted, frame them as proposition updates: attach cito:supports or cito:disputes to the affected proposition. Do not use hypothesis status changes as the primary output. Hypothesis-level summaries can be updated later as a secondary reflection of underlying proposition changes.
After drafting the interpretation, run:
science health --project-root . --format json
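If the report is long, pipe it through jq to skim it; the report schema is not specified here, so keep the filter generic until you see the actual fields:

uv run science health --project-root . --format json | jq . | less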
Call out any remaining gaps, such as a missing measurement_model.

After analyzing results, create structured entities in addition to the prose document:
For each concrete empirical fact:
science graph add observation "<description>" --data-source <data-package-ref> --metric <what> --value <value>
For each interpretive proposition:
science graph add proposition "<text>" --source <data-package-ref> --confidence <0-1>
For each observation that bears on a proposition:
science graph add evidence <observation-ref> <proposition-ref> --stance supports|disputes --strength strong|moderate|weak
Bundle into a finding:
science graph add finding "<summary>" --confidence moderate --proposition <ref> --observation <ref> --source <data-package-ref>
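A worked sequence chaining the four commands above (every <ref> and value is a placeholder; reuse the refs the tool prints at each step):

uv run science graph add observation "<description>" --data-source <data-package-ref> --metric <what> --value <value>
uv run science graph add proposition "<text>" --source <data-package-ref> --confidence 0.6
uv run science graph add evidence <observation-ref> <proposition-ref> --stance supports --strength moderate
uv run science graph add finding "<summary>" --confidence moderate --proposition <proposition-ref> --observation <observation-ref> --source <data-package-ref>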
Create the interpretation as a source-authored entity:
science interpretation create "<summary>" --input <data-package-ref> --related <finding-or-proposition-ref>
This places the file under doc/interpretations/<today>-<slug>.md with canonical frontmatter and runs prospective validation. Prefer this over the older science graph add interpretation, which still works but does not produce a durable source document.
Identify new questions raised by the results.
For each:
Propose changes to the task queue:
When knowledge/graph.trig exists, prefer using:
science graph project-summary --format json
science graph question-summary --format json # full by default; add --top to narrow
science graph inquiry-summary --format json
science graph dashboard-summary --format json
science graph neighborhood-summary --format json
to anchor the prioritization section, especially for:
For software projects, skip project-summary for now and start at question-summary / inquiry-summary.
Use them in this order:
1. project-summary to see the current research-level rollup, when the project is research
2. question-summary for the full question rollup, with --top as optional narrowing
3. inquiry-summary to find which threads deserve attention
4. dashboard-summary and neighborhood-summary to identify the exact propositions and clusters driving that priority

Pick the template that matches the mode:
- Dev mode: read .ai/templates/interpretation-dev.md first, then templates/interpretation-dev.md. Skip the empirical sections (Evidence Quality, Data Quality Checks, Proposition-Level Updates, Evidence vs. Open Questions) entirely — the dev template omits them on purpose.
- Other modes: read .ai/templates/interpretation.md first, then templates/interpretation.md.

If the project uses open questions rather than formal hypotheses, adapt section headers in the output document accordingly — e.g., "Question-Level Implications" instead of "Hypothesis-Level Implications". Evaluate against questions in doc/questions/ rather than hypothesis files in specs/hypotheses/.
Create the interpretation file with science interpretation create:
uv run science interpretation create "<short title>" \
--input <data-package-or-run-ref> \
--related <hypothesis:hNN-...|question:qNN-...>
The tool builds the canonical interpretation:<today>-<slug> ID, places the file under doc/interpretations/, writes canonical frontmatter (id, type, title, status, related, source_refs, created, updated), and runs prospective validation. --input maps to source_refs; --related is repeatable. After creation, open the file and fill the body using the template — preserve the frontmatter the tool produced. Add custom fields (e.g. input if the project schema requires it) by editing the frontmatter directly.
- To change a hypothesis status, run science hypothesis edit <ref> --status ...; for body changes, edit the file in place. Do not mechanically flip statuses to supported or rejected.
- To raise a new question, run science question create "<text>" [--related <ref>] [--source-ref <ref>]. To attach the new question to the interpretation, run science entity edit <interpretation-ref> --related <question-ref> (see the sketch after this list).
- To update the task queue, use science tasks.
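For example, registering a follow-up question raised by the results and attaching it to the new interpretation (all refs are placeholders):

uv run science question create "<text>" --related <ref> --source-ref <data-package-ref>
uv run science entity edit <interpretation-ref> --related <question-ref>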
Write durable result interpretations under doc/interpretations/, and when the findings change the project-level narrative or current state substantially, summarize that in doc/reports/ as well.

Related skills: science-compare-hypotheses, science-discuss, science-add-hypothesis, science-pre-register.

Reflect on the template and workflow used above.
If you have feedback (friction, gaps, suggestions, or things that worked well), report each item via:
science feedback add \
--target "command:interpret-results" \
--category <friction|gap|guidance|suggestion|positive> \
--summary "<one-line summary>" \
--detail "<optional prose>"
Guidelines:
--target "template:<name>" instead