science-health
| Field | Value |
|---|---|
| name | science-health |
| description | Run the science health check and triage findings interactively. Use when the user says "check project health", "find issues", "what's broken", or after running migrations. |
Converted from Claude command /science:health.
Before executing any research command:
Resolve project profile: Read science.yaml and identify the project's profile.
Use the canonical layout for that profile:
- research → doc/, specs/, tasks/, knowledge/, papers/, models/, data/, code/
- software → doc/, specs/, tasks/, knowledge/, plus native implementation roots such as src/ and tests/

Load role prompt: .ai/prompts/<role>.md if present, else references/role-prompts/<role>.md.
Load the science-research-methodology and science-scientific-writing Codex skills. If native skill loading is unavailable, use codex-skills/INDEX.md to map canonical Science skill names to generated skill files and source paths.
Read specs/research-question.md for project context when it exists.
Load project aspects: Read aspects from science.yaml (default: empty list).
For each declared aspect, resolve the aspect file in this order:
1. aspects/<name>/<name>.md — canonical Science aspect
2. .ai/aspects/<name>.md — project-local aspect override or addition

If neither path exists (the project declares an aspect that isn't shipped with Science and has no project-local definition), do not block: log a single line like `aspect "<name>" declared in science.yaml but no definition found — proceeding without it` and continue. Suggest the user either (a) drop the aspect from science.yaml, (b) author it under .ai/aspects/<name>.md, or (c) align the name with one shipped under aspects/.
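A minimal sketch of this resolution order, assuming plain file checks (the function name and return convention are illustrative, not the tool's API):

```python
from pathlib import Path

def resolve_aspect(name: str, root: Path) -> Path | None:
    """Resolve an aspect file: canonical Science aspect first, then project-local."""
    for candidate in (
        root / "aspects" / name / f"{name}.md",   # canonical Science aspect
        root / ".ai" / "aspects" / f"{name}.md",  # project-local override or addition
    ):
        if candidate.is_file():
            return candidate
    # Declared but undefined: log once and proceed without blocking.
    print(f'aspect "{name}" declared in science.yaml but no definition found — proceeding without it')
    return None
```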
When executing command steps, incorporate the additional sections, guidance, and signal categories from loaded aspects. Aspect-contributed sections are whole sections inserted at the placement indicated in each aspect file.
Check for missing aspects: Scan for structural signals that suggest aspects the project could benefit from but hasn't declared:
| Signal | Suggests |
|---|---|
| Files in specs/hypotheses/ | hypothesis-testing |
| Files in models/ (.dot, .json DAG files) | causal-modeling |
| Workflow files, notebooks, or benchmark scripts in code/ | computational-analysis |
| Package manifests (pyproject.toml, package.json, Cargo.toml) at project root with project source code (not just tool dependencies) | software-development |
If a signal is detected and the corresponding aspect is not in the aspects list,
briefly note it to the user before proceeding:
"This project has [signal] but the
[aspect]aspect isn't enabled. This would add [brief description of what the aspect contributes]. Want me to add it toscience.yaml?"
If the user agrees, add the aspect to science.yaml and load the aspect file
before continuing. If they decline, proceed without it.
Only check once per command invocation — do not re-prompt for the same aspect if the user has previously declined it in this session.
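A hedged sketch of the signal scan, where the glob patterns below stand in for the looser descriptions in the table (they are assumptions, not the tool's actual detection logic):

```python
from pathlib import Path

def detect_missing_aspects(root: Path, declared: set[str]) -> dict[str, str]:
    """Return suggested aspect -> detected signal, following the table above."""
    signals: dict[str, str] = {}
    if any((root / "specs" / "hypotheses").glob("*")):
        signals["hypothesis-testing"] = "files in specs/hypotheses/"
    # Simplified: any .dot or .json under models/ counts as a DAG candidate.
    if any((root / "models").glob("*.dot")) or any((root / "models").glob("*.json")):
        signals["causal-modeling"] = "DAG files in models/"
    # Simplified: notebooks stand in for "workflow files, notebooks, or benchmark scripts".
    if any((root / "code").rglob("*.ipynb")):
        signals["computational-analysis"] = "notebooks in code/"
    # A root manifest only counts if it carries project source, not just tool
    # dependencies; that judgment is left to the caller.
    if any((root / m).is_file() for m in ("pyproject.toml", "package.json", "Cargo.toml")):
        signals["software-development"] = "package manifest at project root"
    return {a: why for a, why in signals.items() if a not in declared}
```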
Resolve templates: When a command says "Read .ai/templates/<name>.md",
check the project's .ai/templates/ directory first. If not found, read from
templates/<name>.md. If neither exists, warn the
user and proceed without a template — the command's Writing section provides
sufficient structure.
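The same first-found-wins pattern, sketched for templates (illustrative only):

```python
from pathlib import Path

def resolve_template(name: str, root: Path) -> Path | None:
    """Project-local template first, then the shipped default, else warn and proceed."""
    for candidate in (root / ".ai" / "templates" / f"{name}.md",
                      root / "templates" / f"{name}.md"):
        if candidate.is_file():
            return candidate
    print(f"warning: template '{name}.md' not found; relying on the command's Writing section")
    return None
```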
Resolve science CLI invocation: When a command says to run science,
prefer the project-local install path: uv run science <command>.
This assumes the root pyproject.toml includes science as a dev
dependency installed via uv add --dev --editable "$SCIENCE_TOOL_PATH"
(the distribution is science; the entry point it installs is science).
If that fails (no root pyproject.toml or science not in dependencies),
fall back to:
uv run --with <science-plugin-root>/science science <command>
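A sketch of that fallback logic, assuming subprocess and a plugin_root argument standing in for <science-plugin-root>:

```python
import subprocess

def run_science(args: list[str], plugin_root: str) -> subprocess.CompletedProcess:
    """Prefer the project-local install; fall back to an ad-hoc --with invocation."""
    result = subprocess.run(["uv", "run", "science", *args],
                            capture_output=True, text=True)
    if result.returncode != 0:
        # Likely no root pyproject.toml, or science missing from the dev deps.
        # (A real implementation would distinguish "science not installed" from
        # a science command that ran and failed.)
        result = subprocess.run(
            ["uv", "run", "--with", f"{plugin_root}/science", "science", *args],
            capture_output=True, text=True,
        )
    return result
```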
Aggregate project health diagnostics and walk the user through cluster-level cleanup.
The user input optionally specifies the project root (default: current directory).
uv run science health --project-root <root> --format=json
Parse the JSON output. Fields:
- unresolved_refs: list of {target, mention_count, sources, looks_like}
- lingering_tags_lines: list of {file, values}
- layered_claims: object with:
  - proposition_claim_layer_coverage
  - causal_leaning_identification_coverage
  - rival_model_packets_missing_discriminating_predictions
- migration_issues

Group unresolved_refs by looks_like heuristic:
- topic:t143, topic:t146 — likely mis-prefixed task IDs
- topic:h01 — likely mis-prefixed hypothesis IDs
- topic:genomics, topic:phase3b — could be real topics or operational markers

For the topic cluster, sub-cluster by user judgment hint:
- date-stamped names (pivot-2026-03-18): likely operational markers
- domain terms (genomics, protein): likely real topics
- phase/status words (blocked, phase3b, cycle1): likely operational

For refs that look like legitimate new entities, read docs/process/entity-creation-cookbook.md
before proposing action. Apply its identity policy triage explicitly: check the
external-id requirement, decide whether the item belongs in a shared registry kind
or a project-local kind, and use the prose-only fallback when the mention should
remain prose rather than become a graph entity.
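A sketch of the parse-and-cluster step. The looks_like values ("topic", "unknown") and the sub-clustering regex are assumptions for illustration; the real hints come from the health JSON and the user:

```python
import json
import re
import subprocess

def cluster_unresolved_refs(project_root: str = ".") -> dict[str, list[dict]]:
    """Group unresolved refs from the health JSON by their looks_like hint."""
    out = subprocess.run(
        ["uv", "run", "science", "health", "--project-root", project_root, "--format=json"],
        capture_output=True, text=True,
    )
    report = json.loads(out.stdout)
    clusters: dict[str, list[dict]] = {}
    for ref in report.get("unresolved_refs", []):
        clusters.setdefault(ref.get("looks_like", "unknown"), []).append(ref)
    # Sub-cluster ambiguous topic refs: dates and phase/cycle/status words read
    # as operational markers; everything else is a candidate real topic.
    # Heuristic only; the user has the final say.
    operational = re.compile(r"\d{4}-\d{2}-\d{2}|phase|cycle|blocked|milestone")
    for ref in clusters.pop("topic", []):
        bucket = "operational-marker" if operational.search(ref["target"]) else "real-topic"
        clusters.setdefault(bucket, []).append(ref)
    return clusters
```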
Show a structured summary:
```
Health Report for <project>
================================

Unresolved References (N total):
- 5 look like task IDs (would be better as task: refs)
- 12 look like real topics (need entity stubs)
- 8 look like operational markers (consider meta: prefix)

Lingering tags: lines: M files

Total issues: X
```
Include the layered-claim section explicitly:
- claim_layer coverage across propositions
- identification_strength coverage across causal-leaning propositions
- measurement_model

If the project is using independence_group on only one visible support line for a high-impact proposition, mention that as a fragility note even if it is still being surfaced manually rather than by a dedicated metric.
For each cluster, propose ONE action covering the whole cluster, not per-ref decisions. Examples:
Task-id cluster:
"5 refs look like task IDs being mis-prefixed: topic:t143, topic:t146, topic:t147, topic:t149, topic:t150. Rewrite all as task: refs?"
Real topics cluster:
"12 refs look like domain topics: topic:genomics, topic:protein, topic:embeddings, ... Create stub topic entity files for these in doc/topics/?"
Operational markers cluster:
"8 refs look like operational markers (phase, cycle, milestone): topic:phase3b, topic:cycle1, ... Rewrite as meta: refs (preserved as metadata, excluded from KG)?"
Lingering tags cluster:
"M files still have
tags:lines (residual from old templates). Runscience graph migrate-tags --applyto clean them up?"
For each cluster the user approves, use the appropriate CLI to apply:
- Ref rewrites: edit the referring files directly (found via the sources field of each ref)
- science graph migrate-tags --apply (default meta:)
- science graph migrate-tags --apply --as-topic

Re-run science health after applying actions to confirm the issue counts dropped. Show the user the delta.
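A sketch of the verify-and-report step, assuming the issue total is the sum of unresolved refs and lingering-tags files as in the summary above:

```python
import json
import subprocess

def issue_count(project_root: str = ".") -> int:
    """Total issues: unresolved refs plus files with lingering tags: lines."""
    out = subprocess.run(
        ["uv", "run", "science", "health", "--project-root", project_root, "--format=json"],
        capture_output=True, text=True,
    )
    report = json.loads(out.stdout)
    return len(report.get("unresolved_refs", [])) + len(report.get("lingering_tags_lines", []))

before = issue_count()
subprocess.run(["uv", "run", "science", "graph", "migrate-tags", "--apply"])
after = issue_count()
print(f"Issues: {before} -> {after} ({before - after} resolved)")
```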
git add <changed files>
git commit -m "chore(health): triage <N> issues — <brief description per cluster>"
The looks_like heuristic is just a hint — let the user override it if they disagree.