# query
| field | value |
|---|---|
| name | query |
| description | Use when answering any analytical question over nx knowledge collections. This is the canonical entry point; direct search/query calls for analytical questions are an anti-pattern. |
| effort | medium |
Tier-aware discipline, applied at session start and before every major step:

Read first:
- mcp__plugin_nx_nexus__nx_answer(...) for verb-shape questions; mcp__plugin_nx_nexus__search(...) for keyword lookup.
- mcp__plugin_nx_nexus__memory_search(query="<topic>", project="<repo>").
- mcp__plugin_nx_nexus__scratch(action="search", query="<topic>").
- mcp__plugin_nx_nexus__plan_search(query="<task>", limit=3).

Write after:
- mcp__plugin_nx_nexus__scratch(action="put", ..., tags="<topic>") for sibling agents downstream THIS session (T1, narrowest scope, cheapest write).
- mcp__plugin_nx_nexus__memory_put(...) for project-scoped decisions, future sessions same project (T2).
- mcp__plugin_nx_nexus__store_put(...) for permanent cross-project knowledge, future sessions everywhere (T3).
- mcp__plugin_nx_nexus__plan_save(...) for multi-agent pipeline outcomes (so future callers hit plan-match).

This skill wraps one MCP call: nx_answer. If you are asked an
analytical question ("how does…", "what tradeoffs…", "compare X and
Y…", "why was this designed…"), you MUST call nx_answer rather than
search or query directly.
```
mcp__plugin_nx_nexus__nx_answer(
    question="<full-sentence question>",
    scope="<optional corpus or subtree filter>",
)
```
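For instance (the question and scope values here are illustrative, not part of the skill):

```python
mcp__plugin_nx_nexus__nx_answer(
    question="How do the Arcaneum and Nexus RDR corpora differ in retrieval philosophy?",
    scope="rdr",  # illustrative subtree filter; omit to search all collections
)
```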
That's it. One tool call. nx_answer enforces plan-match-first:

- Plan match: plan_run executes the saved plan, with retrieval steps plus any contiguous operator run bundled into a single claude -p subprocess.
- Plan miss: it synthesizes a fresh plan via claude -p, then executes that.
- No agent spawns. No T1 scratch relay. All coordination is in-process.
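The control flow, as a sketch only (plan_match and plan_run are the step names used above; synthesize_plan and record_run are illustrative stand-ins for internals this skill does not expose):

```python
def nx_answer(question, scope=None):
    plan = plan_match(question, scope)      # plan-match-first: check the saved-plan library
    if plan is None:
        plan = synthesize_plan(question)    # miss: draft a plan via claude -p
    result = plan_run(plan)                 # retrieval + bundled operator chain, one subprocess
    record_run(plan, result)                # run recording, so future callers hit plan-match
    return result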
## Why not search?

search returns top-K chunks. Analytical questions need composition
— extract + rank + compare + summarize — built on top of retrieval.
nx_answer composes those steps via plans saved in the library;
direct search returns you raw chunks and forces you to do the
synthesis manually (and usually worse than a plan would).
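For contrast, the manual path looks roughly like this (queries are illustrative; the trailing comments are the work search leaves to you):

```python
# Direct search: you get raw chunks back...
chunks_a = mcp__plugin_nx_nexus__search(query="Arcaneum retrieval philosophy")
chunks_b = mcp__plugin_nx_nexus__search(query="Nexus RDR retrieval philosophy")
# ...and the composition is on you: extract claims from each chunk set,
# rank them, compare across corpora, summarize with attribution.
# nx_answer runs those same extract/rank/compare/summarize steps from a saved plan.
```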
Concrete measurement (cross-project compositional query, Arcaneum
vs Nexus RDR corpora, 6-step plan: search → search → extract → extract → compare → summarize):
| Path | Elapsed | Result |
|---|---|---|
| nx_answer (bundled operator chain) | 54s | Full philosophy-difference synthesis with corpus attribution |
| Direct search + manual synthesis | you do it | Chunks you compose yourself, slowly |
The bundled claude -p subprocess amortizes spawn cost across 4
operators — one subprocess does the work of four. That's the latency
win; the composition quality is the second win.
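To make the amortization concrete, with purely hypothetical timings:

```python
spawn = 5.0   # hypothetical claude -p spawn overhead, seconds (illustrative, not measured)
work = 10.0   # hypothetical per-operator work time, seconds (illustrative)
separate = 4 * (spawn + work)   # four operators, four subprocesses: 60.0s
bundled = spawn + 4 * work      # four operators, one subprocess:   45.0s
# Bundling saves 3 * spawn whatever the real numbers are; the work itself is unchanged.
```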
## When not to use nx_answer

- Plain keyword lookup: call search directly; no composition needed.
- Code symbol navigation: use the JetBrains tools (jet_brains_find_symbol, jet_brains_find_referencing_symbols).
- Known-key retrieval: store_get / store_get_many.

Everything else (research, review, analyze, debug, document, cross-project synthesis) routes through nx_answer, as in the routing sketch below.
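A routing sketch (the task attributes and predicate shapes are illustrative; the tool names are the ones above):

```python
def route(task):
    if task.is_keyword_lookup:         # plain lookup, no composition
        return search(task.query)
    if task.is_symbol_navigation:      # code navigation
        return jet_brains_find_symbol(task.symbol)
    if task.store_key is not None:     # known key, direct read
        return store_get(task.store_key)
    return nx_answer(question=task.question)   # everything else: the canonical entry point
```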
The five verb skills (/nx:research, /nx:review, /nx:analyze,
/nx:debug, /nx:document) each pin a dimensions={"verb": …}
filter so the plan matcher narrows to the right template family.
Pick the verb that matches the question shape; fall back to this
plain /nx:query skill when no verb cleanly fits.
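Presumably each verb skill's call differs only in the pinned filter; the dimensions parameter name comes from the text above, but the exact call shape is an assumption:

```python
mcp__plugin_nx_nexus__nx_answer(
    question="Why does the planner bundle contiguous operator runs?",
    dimensions={"verb": "analyze"},   # assumed plumbing: narrows plan matching to the analyze family
)
```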
## Anti-patterns

- Calling search for "how does X work" or "what are the tradeoffs in Y". That's an analytical question; use nx_answer.
- Calling plan_match directly. You lose run recording and the fallback-on-miss path; nx_answer is the MCP-level contract.