# query
// Answer questions from the wiki vault. Reads hot cache then index then pages, synthesizes with citations. Files good answers back.
| name | query |
| description | Answer questions from wiki vault. Reads strategically, synthesizes with citations, files answers back. |
| when_to_use | Use when the user asks a question that may be answered from the wiki vault — factual lookups, comparisons, synthesis. |
| allowed-tools | Agent Bash Read |
The wiki already has the synthesis work done. Read strategically, answer precisely, and file answers back so knowledge compounds.
Reads hot.md, index.md, individual pages via obsidian CLI. See CLI docs for syntax.
Three depths. Choose based on the complexity of the question.
| Mode | Trigger | Reads | Token cost | Best for |
|---|---|---|---|---|
| Quick | query quick: ... or simple factual Q | hot.md + index.md only | ~1,500 | "What is X?", date lookups, quick facts |
| Standard | default (no flag) | hot.md + index + 3-5 pages | ~3,000 | Most questions |
| Deep | query deep: ... or "thorough", "comprehensive" | Full wiki + optional web | ~8,000+ | "Compare A vs B across everything", synthesis, gap analysis |
Use when the answer is likely in the hot cache or index summary.
1. Read wiki/hot.md. If it answers the question, respond immediately.
2. Read wiki/index.md. Scan descriptions for the answer.
3. Do not open individual wiki pages and do not call obsidian backlinks in quick mode — backlink-aware ranking belongs to standard and deep modes; quick mode preserves a ~1.5K token budget.
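The two quick-mode reads can be sketched as a guard chain in plain shell, assuming the wiki/hot.md and wiki/index.md layout this skill describes. The fixture content and search term below are illustrative, not real vault data:

```shell
#!/bin/sh
# Quick-mode sketch: hot cache first, index second, never leaf pages.
cd "$(mktemp -d)"
mkdir -p wiki
printf '%s\n' '- Release date: 2024-06-01' > wiki/hot.md
printf '%s\n' '## Concepts' '- [[Hot Cache]]: small always-read file' > wiki/index.md

term="release date"
if grep -qi "$term" wiki/hot.md; then
  # Step 1: hot-cache hit — answer immediately.
  grep -i "$term" wiki/hot.md
else
  # Step 2: scan index descriptions only; leaf pages stay closed in quick mode.
  grep -i "$term" wiki/index.md
fi
```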
A four-step hybrid retrieval flow. Stop at the earliest step that answers the question.
1. Read wiki/hot.md first. It may already have the answer or directly relevant context. If it answers, stop.
2. Run obsidian search query=<term> (or grep over wiki/) to collect candidate clusters of leaves that share those tags. This is faster and narrower than reading index.md for most topical questions.
3. Read the cluster hub wiki/domains/<cluster-tag>/_index.md. Its related: list is the curated answer set; follow its wikilinks at depth-1 and pull backlinks (obsidian backlinks path=<leaf> format=json) on the leaves it links to so heavily cited canonical pages surface in the top-N. This path is preferred — the hub is human-curated and pre-ranked. Rank candidates by inbound citations (obsidian backlinks path=<leaf> format=json; the entries field count = inbound citations). A heavily cited atomic note must surface in the top-N even if its outbound related: is sparse.
4. Read wiki/index.md only when steps 1–3 fail (no hot-cache hit, no tag cluster, no hub). Scan section headers; identify candidate pages; rank by backlinks as in step 3.

After candidates are read:
5. Find the containing hub: run obsidian backlinks path=<leaf> format=json and read the entries whose frontmatter is type: domain. That file is the leaf's containing hub. Hub membership is forward-only (hubs link to leaves; leaves never declare membership), so backlinks of type: domain are the canonical leaf→hub traversal. Below the hub threshold no hub exists — that is expected, not a gap.
6. For a candidate of type: synthesis (a Research: [Topic] page produced by /autoresearch), always check for a trail: run obsidian backlinks path=<synthesis-path> format=json and filter the entries to those whose frontmatter is type: trail. The trail is the curated reading order for that research run, with one-line annotations explaining each step's argument role — much cheaper than reconstructing the path from related: traversal. See Trail Discovery below for the multi-trail rule.
7. Synthesize the answer with inline citations (Source: [[Page Name]]).
8. Offer to file the answer back: "Save this answer to wiki/questions/answer-name.md?"

Use for synthesis questions, comparisons, or "tell me everything about X."
1. Read wiki/hot.md and wiki/index.md.
2. Read every relevant domain hub. List wiki/domains/*/_index.md; read each hub whose tag intersects the question. Hubs are the cheapest path to a curated, pre-ranked answer set.
3. Identify all relevant leaves across concepts/, entities/, sources/, solutions/, comparisons/ — both the leaves linked from the hubs and any extra leaves the hubs missed (use obsidian search for completeness).
4. Pull backlinks for every candidate. Run obsidian backlinks path=<page> format=json on each candidate to surface canonical pages with high inbound but sparse outbound related:, and to find the type: domain hubs and type: trail reading orders that backlink each candidate. Read any hubs not already covered in step 2. For trails, follow the multi-trail rule in Trail Discovery below.
5. Gather pages via agent dispatch. When the candidate list has more than 5 pages, group them into logical clusters (by tag, hub, or topic area) and dispatch one agents/gather.md per cluster in parallel rather than reading all pages on the main thread.
   Before spawning agents, verify CWD:
   cd "${VAULT_ROOT}" && pwd # confirm vault root before gather fan-out
   Pass each gather agent:
   - FILE_LIST — vault-relative paths for that cluster
   - VAULT_ROOT — $VAULT_ROOT
   - CONTEXT — query deep cluster: <cluster-tag-or-description>
   - MAX_FILES — 20
   Wait for all gather agents to finish. Collect their structured summaries. Use these summaries as the page-content input for synthesis (step 7).
   When the candidate list has 5 or fewer pages, read them inline — no agent needed.
6. If wiki coverage is thin, offer to supplement with web search.
7. Synthesize a comprehensive answer with full citations.
8. Always file the result back as a wiki page. Deep answers are too valuable to lose.
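The backlink pull and the fan-out threshold above can be sketched in plain shell. The JSON payloads here are canned stand-ins for obsidian backlinks path=<page> format=json output, assumed to be an entries array of {"file": "..."} objects as described earlier; the page names are illustrative:

```shell
#!/bin/sh
# Deep-mode sketch: rank candidates by inbound citations, then decide fan-out.

inbound_count() {
  # Count entries in one backlinks-JSON payload read from stdin.
  grep -o '"file"' | wc -l | tr -d ' '
}

a_json='{"entries":[{"file":"x.md"}]}'
b_json='{"entries":[{"file":"x.md"},{"file":"y.md"},{"file":"z.md"}]}'

# Rank two candidate leaves by inbound citations, descending.
{
  printf '%s concepts/a.md\n' "$(printf '%s' "$a_json" | inbound_count)"
  printf '%s concepts/b.md\n' "$(printf '%s' "$b_json" | inbound_count)"
} | sort -rn | cut -d' ' -f2

# Fan-out decision: 5 or fewer candidates are read inline, more are clustered.
n=2
if [ "$n" -le 5 ]; then echo "read inline"; else echo "dispatch gather agents"; fi
```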
Read the minimum needed:
| Start with | Cost (approx) | When to stop |
|---|---|---|
| hot.md | ~500 tokens | If it has the answer |
| index.md | ~1000 tokens | If you can identify 3-5 relevant pages |
| 3-5 wiki pages | ~300 tokens each | Usually sufficient |
| 10+ wiki pages | expensive | Only for synthesis across the entire wiki |
If hot.md has the answer, respond without reading further.
For the full hot-cache protocol (when it is written, what it contains, and sub-agent rules), see ${CLAUDE_PLUGIN_ROOT}/_shared/hot-cache-protocol.md.
The master index (wiki/index.md) looks like:
## Domains
- [[Domain Name]]: description (N sources)
## Entities
- [[Entity Name]]: role (first: [[Source]])
## Concepts
- [[Concept Name]]: definition (status: developing)
## Sources
- [[Source Title]]: author, date, type
## Questions
- [[Question Title]]: answer summary
Scan the section headers first to determine which sections to read.
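Header scanning can be done without reading the section bodies at all. A minimal sketch, using a trimmed fixture in the index format shown above:

```shell
#!/bin/sh
# Sketch: list only the section headers of wiki/index.md to pick sections.
cd "$(mktemp -d)"
mkdir -p wiki
cat > wiki/index.md <<'EOF'
## Domains
- [[Knowledge Management]]: description (3 sources)
## Concepts
- [[Hot Cache]]: definition (status: developing)
EOF
grep '^## ' wiki/index.md
```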
Domain hubs live at wiki/domains/<slug>/_index.md. Each hub is the curated entry point for one cluster:
---
type: domain
title: "Knowledge Management"
owns_folder: false
subdomain_of: ""
page_count: 12
created: YYYY-MM-DD
updated: YYYY-MM-DD
tags: [domain, knowledge-management]
status: developing
confidence: EXTRACTED
evidence: []
related:
- "[[llm-wiki-pattern|LLM Wiki Pattern]]"
- "[[hot|Hot Cache]]"
- "[[compounding-knowledge|Compounding Knowledge]]"
---
# Knowledge Management
One-paragraph hub description.
## Core concepts
- [[llm-wiki-pattern|LLM Wiki Pattern]] — one-line description
- [[compounding-knowledge|Compounding Knowledge]] — one-line description
## Sources
- [[llm-wiki-karpathy-gist|Karpathy's LLM Wiki Gist]] — origin source
Reach a hub via step 3 of the standard flow (wiki/domains/<cluster-tag>/_index.md) or via the leaf→hub backlink traversal in step 5.
Per-folder <folder>/_index.md files are not used. Folders like concepts/, entities/, sources/, solutions/ are flat directories; cross-folder navigation goes through hubs.
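The curated related: answer set can be pulled out of a hub with plain sed. The hub file below is a trimmed fixture of the format above; the page names are illustrative:

```shell
#!/bin/sh
# Sketch: extract each related: wikilink target (the part before the | alias).
cd "$(mktemp -d)"
mkdir -p wiki/domains/km
cat > wiki/domains/km/_index.md <<'EOF'
---
type: domain
related:
  - "[[llm-wiki-pattern|LLM Wiki Pattern]]"
  - "[[hot|Hot Cache]]"
---
# Knowledge Management
EOF
sed -n 's/.*\[\[\([^|]*\)|.*/\1/p' wiki/domains/km/_index.md
```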
Trails are run-records emitted by /autoresearch. They live under wiki/trails/Trail: [Topic] (YYYY-MM-DD).md and answer "in what order, and why each next?" — complementary to hubs ("what notes are about X?"). When the question is about a topic that has already been through a research run, the trail is usually the cheapest route to the right reading order.
Discovery procedure (synthesis → trail):
For a page whose frontmatter is type: synthesis (e.g. Research: [Topic]), run:
obsidian backlinks path=<synthesis-path> format=json
1. backlinks format=json returns only {"file": "<path>"} entries (no frontmatter), so for each entry run obsidian properties path=<entry> to read its frontmatter and filter to those with type: trail. These are the trails covering that research run.
2. When several trails match, pick the one with the latest YYYY-MM-DD date suffix on the filename — the date suffix is canonical, not the research_run: field, because filename ordering is what the index/log entries point at. Read only that trail. After answering, append exactly:
   *N earlier trail(s) exist on this topic — say 'compare trails' to read all.*
   where N is the count of older trails. Skip the line entirely when only one trail exists.
3. compare trails follow-up. If the user replies with "compare trails" (or equivalent), read all trails for the topic, oldest first, and synthesize an evolution view — what changed between runs, what stayed, what dropped out.

Trail vs. hub — both are reachable via backlinks, but they answer different questions and the discovery filter (type: trail vs. type: domain) is the disambiguator. A page can have both kinds of backlinks; read the trail when the user is re-entering a research topic, the hub when the user is exploring a domain.
Fallback. If no trail exists for a synthesis page, fall back to the existing backlink/hub traversal in steps 5–6 of the standard flow. A missing trail is not a gap — /autoresearch now emits exactly one trail per run, so the only legitimate absence is a synthesis page produced before this feature shipped.
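The multi-trail rule reduces to a lexicographic sort, since ISO YYYY-MM-DD suffixes sort chronologically. A sketch with illustrative trail filenames:

```shell
#!/bin/sh
# Sketch: pick the canonical (latest-dated) trail; count the older ones.
trails='wiki/trails/Trail: X (2024-01-03).md
wiki/trails/Trail: X (2024-03-01).md
wiki/trails/Trail: X (2023-11-20).md'
latest=$(printf '%s\n' "$trails" | sort | tail -n 1)
older=$(( $(printf '%s\n' "$trails" | wc -l) - 1 ))
echo "read: $latest"
echo "$older earlier trail(s) exist"
```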
Good answers compound into the wiki. Don't let insights disappear into chat history.
When filing an answer:
---
type: question
title: "Short descriptive title"
question: "The exact query as asked."
answer_quality: solid
created: YYYY-MM-DD
updated: YYYY-MM-DD
tags: [question, <domain>]
related:
- "[[Page referenced in answer]]"
sources:
- "[[wiki/sources/relevant-source.md]]"
status: developing
---
Then write the answer as the page body. Include citations. Link every mentioned concept or entity.
After filing, add an entry to wiki/index.md under Questions and append to wiki/log.md.
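The post-filing bookkeeping is two appends. A sketch in which the title, summary text, and log-line format are illustrative:

```shell
#!/bin/sh
# Sketch: register a freshly filed answer in the index and the log.
cd "$(mktemp -d)"
mkdir -p wiki/questions
printf '%s\n' '## Questions' > wiki/index.md
: > wiki/log.md

title="How does the hot cache work"
printf -- '- [[%s]]: one-line answer summary\n' "$title" >> wiki/index.md
printf '%s query: filed %s\n' "$(date +%Y-%m-%d)" "$title" >> wiki/log.md
tail -n 1 wiki/index.md
```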
If the question cannot be answered from the wiki: