# science-search-literature
Search scientific literature using OpenAlex and PubMed, rank results by project relevance, and produce a prioritized reading queue.
| Field | Value |
|---|---|
| name | science-search-literature |
| description | Search scientific literature using OpenAlex and PubMed, rank results by project relevance, and produce a prioritized reading queue. |
Converted from Claude command /science:search-literature.
Before executing any research command:
- Resolve project profile: read `science.yaml` and identify the project's profile.
- Use the canonical layout for that profile:
  - `research` → `doc/`, `specs/`, `tasks/`, `knowledge/`, `papers/`, `models/`, `data/`
  - `code`/`software` → `doc/`, `specs/`, `tasks/`, `knowledge/`, plus native implementation roots such as `src/` and `tests/`
- Load role prompt: `.ai/prompts/<role>.md` if present, else `references/role-prompts/<role>.md`.
- Load the science-research-methodology and science-scientific-writing Codex skills. If native skill loading is unavailable, use `codex-skills/INDEX.md` to map canonical Science skill names to generated skill files and source paths.
- Read `specs/research-question.md` for project context when it exists.
- Load project aspects: read aspects from `science.yaml` (default: empty list).
For each declared aspect, resolve the aspect file in this order:

1. `aspects/<name>/<name>.md` — canonical Science aspect
2. `.ai/aspects/<name>.md` — project-local aspect override or addition

If neither path exists (the project declares an aspect that isn't shipped with Science and has no project-local definition), do not block: log a single line like `aspect "<name>" declared in science.yaml but no definition found — proceeding without it` and continue. Suggest the user either (a) drop the aspect from `science.yaml`, (b) author it under `.ai/aspects/<name>.md`, or (c) align the name with one shipped under `aspects/`.
When executing command steps, incorporate the additional sections, guidance, and signal categories from loaded aspects. Aspect-contributed sections are whole sections inserted at the placement indicated in each aspect file.
Check for missing aspects: Scan for structural signals that suggest aspects the project could benefit from but hasn't declared:
| Signal | Suggests |
|---|---|
| Files in `specs/hypotheses/` | hypothesis-testing |
| Files in `models/` (`.dot`, `.json` DAG files) | causal-modeling |
| Workflow files, notebooks, or benchmark scripts in `code/` | computational-analysis |
| Package manifests (`pyproject.toml`, `package.json`, `Cargo.toml`) at the project root with project source code (not just tool dependencies) | software-development |
If a signal is detected and the corresponding aspect is not in the aspects list,
briefly note it to the user before proceeding:
"This project has [signal] but the
[aspect]aspect isn't enabled. This would add [brief description of what the aspect contributes]. Want me to add it toscience.yaml?"
If the user agrees, add the aspect to science.yaml and load the aspect file
before continuing. If they decline, proceed without it.
Only check once per command invocation — do not re-prompt for the same aspect if the user has previously declined it in this session.
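The structural-signal scan can be sketched as below, assuming a conventional project layout. The function shape and glob patterns are illustrative; only the signal-to-aspect mapping comes from the table above.

```python
from pathlib import Path

# Sketch of the missing-aspect scan; maps detected structural signals
# to the aspect each one suggests, per the table above.
def detect_aspect_signals(project_root: str = ".") -> set[str]:
    root = Path(project_root)
    suggested = set()
    if any((root / "specs" / "hypotheses").glob("*")):
        suggested.add("hypothesis-testing")
    if any((root / "models").glob("*.dot")) or any((root / "models").glob("*.json")):
        suggested.add("causal-modeling")
    if any((root / "code").rglob("*.ipynb")):
        suggested.add("computational-analysis")
    manifests = ("pyproject.toml", "package.json", "Cargo.toml")
    if any((root / m).is_file() for m in manifests):
        # Caveat from the table: only suggest this when the manifest covers
        # project source code, not just tool dependencies.
        suggested.add("software-development")
    return suggested
```

A real implementation would also subtract aspects already declared in `science.yaml` and those the user has declined this session.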
Resolve templates: When a command says "Read .ai/templates/<name>.md",
check the project's .ai/templates/ directory first. If not found, read from
templates/<name>.md. If neither exists, warn the
user and proceed without a template — the command's Writing section provides
sufficient structure.
Resolve science CLI invocation: When a command says to run `science`, prefer the project-local install path: `uv run science <command>`. This assumes the root `pyproject.toml` includes science as a dev dependency installed via `uv add --dev --editable "$SCIENCE_TOOL_PATH"` (the distribution is `science`; the entry point it installs is `science`). If that fails (no root `pyproject.toml`, or science not in dependencies), fall back to:

`uv run --with <science-plugin-root>/science science <command>`
Search literature for the user input.
If no argument is provided, derive candidate search foci from specs/research-question.md and doc/questions/, then ask the user to confirm the focus.
Follow the Science Codex Command Preamble before executing this skill. Use the research-assistant role prompt.
Additionally:
`<science-plugin-root>`:

- `skills/data/sources/openalex.md`
- `skills/data/sources/pubmed.md`
- `.ai/templates/paper.md` first; if not found, read `templates/paper.md`
- `specs/research-question.md`
- `specs/scope-boundaries.md`
- `doc/questions/`
- `specs/hypotheses/`
- `doc/papers/`
- `doc/topics/`, `doc/questions/`
- `doc/searches/` for recent related searches, and ask whether to refresh or create a new run

Create 3-5 query variants before running searches:
Default constraints unless user specifies otherwise:
Use this execution order:
- `fallback-web`

At least one query must hit OpenAlex with a broad conceptual framing and return ≥30 candidates before ranking. If every query is seed-, author-, or title-driven and returns <30 results, you are in verify-mode, not discover-mode — stop and reformulate at least one query as a broad conceptual search.
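A sketch of building a broad OpenAlex query and applying the ≥30-candidate gate described above. The `/works` endpoint with `search` and `per-page` parameters reflects the public OpenAlex API as commonly documented, but verify against the current API docs before relying on it.

```python
from urllib.parse import urlencode

OPENALEX_WORKS = "https://api.openalex.org/works"

def broad_query_url(query: str, per_page: int = 50) -> str:
    # Full-text relevance search over works; per-page caps results per request.
    return f"{OPENALEX_WORKS}?{urlencode({'search': query, 'per-page': per_page})}"

def in_discover_mode(candidates: list[dict]) -> bool:
    # The gate above: a broad query must yield at least 30 candidates
    # before ranking; fewer means you are verifying, not discovering.
    return len(candidates) >= 30
```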
For each candidate, capture identifiers where available:
Do not fabricate missing metadata. Mark unknown fields as [UNVERIFIED].
Deduplicate across sources by DOI first, then PMID, then normalized title.
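The dedupe precedence can be sketched as key extraction followed by first-wins filtering; field names such as `doi`, `pmid`, and `title` are assumptions about the candidate records.

```python
import re

def dedupe_key(record: dict) -> tuple[str, str]:
    # Prefer DOI, then PMID, then a normalized title, as described above.
    if record.get("doi"):
        return ("doi", record["doi"].lower().removeprefix("https://doi.org/"))
    if record.get("pmid"):
        return ("pmid", str(record["pmid"]))
    # Normalize title: lowercase, collapse punctuation and whitespace.
    title = re.sub(r"[^a-z0-9]+", " ", record.get("title", "").lower()).strip()
    return ("title", title)

def dedupe(candidates: list[dict]) -> list[dict]:
    seen, unique = set(), []
    for rec in candidates:
        key = dedupe_key(rec)
        if key not in seen:
            seen.add(key)
            unique.append(rec)
    return unique
```

First occurrence wins, so run dedupe after merging sources in your preferred provenance order.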
Rank with explicit rationale using:
Label each ranked item as one of:
- **Core now** (read immediately)
- **Relevant next** (read if time allows)
- **Peripheral monitor** (track but defer)

Before writing output, enumerate the project's declared scope and check which parts this search does not cover.
Sources of declared scope (read all that exist):
- `science.yaml` aspects
- `doc/topics/` (topic slugs and their subtopics)
- `doc/questions/` (open questions)
- `specs/hypotheses/` (active hypotheses)

For each declared item, mark whether the current search surfaced at least one ranked candidate that materially addresses it. If gaps exist, record each gap under `## Coverage Notes and Gaps` with a suggested follow-up query. Do not skip this step — a reading queue that silently omits declared scope is worse than one that flags the omission.
If doc/searches/ does not exist yet, create it first.
Create doc/searches/YYYY-MM-DD-<slug>.md with sections:
- `## Search Focus`
- `## Query Set`
- `## Sources and Run Metadata`
- `## Ranked Results`
- `## Priority Reading Queue`
- `## Coverage Notes and Gaps`
- `## Recommended Next Actions`

In `## Ranked Results`, include a table with columns:
Also write machine-readable output to `doc/searches/YYYY-MM-DD-<slug>.json`. Include the normalized candidate list, dedupe keys, source provenance, and rank/tier fields.
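A sketch of writing that companion file; beyond the four required contents named above, the exact field names (`slug`, `date`, `candidates`) are assumptions.

```python
import json
from datetime import date
from pathlib import Path

def write_search_json(slug: str, candidates: list[dict], out_dir: str = "doc/searches") -> Path:
    # Mirror the markdown report with a machine-readable run record.
    record = {
        "slug": slug,
        "date": date.today().isoformat(),
        # Each candidate is expected to carry its dedupe key, source
        # provenance, and rank/tier fields, per the requirement above.
        "candidates": candidates,
    }
    path = Path(out_dir) / f"{record['date']}-{slug}.json"
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(json.dumps(record, indent=2))
    return path
```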
- Queue `Core now` papers via `science tasks add`.
- Load `science-research-papers` (or create a task for later).
- For `Core now` items, create a stub-only note at `doc/papers/<citekey>.md` using `.ai/templates/paper.md` first, then `templates/paper.md`. The stub must contain:
  - `UNREAD` — populate after reading the paper.
  - Fields to populate via `science-research-papers` or during task execution.
  - `tags` for project-specific labels.
  - `ontology_terms` for normalized ontology CURIEs (for example MeSH, GO, Biolink terms).
  - `datasets` for relevant dataset accessions when identified.
- Update related notes (`doc/topics/`, `doc/questions/`) with new links and key takeaways.
- Add entries to `papers/references.bib`. If the file does not exist yet, create it with:
```bibtex
% references.bib — BibTeX database for this Science project
% Use keys in the format: FirstAuthorLastNameYear (e.g., Smith2024)
```
- Run `science-next-steps` focused on the searched scope.
- Commit: `git add -A && git commit -m "docs(papers): search literature <slug>"` (use `papers:` only if your project's commitlint config explicitly allows that type).

Reflect on the template and workflow used above.
If you have feedback (friction, gaps, suggestions, or things that worked well), report each item via:
```shell
science feedback add \
  --target "command:search-literature" \
  --category <friction|gap|guidance|suggestion|positive> \
  --summary "<one-line summary>" \
  --detail "<optional prose>"
```
Guidelines:
--target "template:<name>" instead