---
name: distilling-learning
description: >-
  Use when the user explicitly says "let's distill" / "wrap this up into a
  learning" / "extract what we learned from <source>". Source-agnostic: a
  thread + sessions, a completed plan, a transcript, or an ad-hoc reflection.
  Produces N tagged learnings (not 1) keyed by diataxis + confidence +
  four-facet tags so future agents can retrieve them via
  `anvil list learning --diataxis ... --tags ... --confidence ...`.
  Not for active research (stay in the thread); not for one-off thoughts
  (use anvil:capturing-inbox); not for summaries that aren't durable claims.
license: MIT
allowed-tools: ["Bash", "Read", "Edit"]
compatibility: Works with Claude Code 2.0+ and Codex 0.121+ via SKILL.md standard
---
Workflow for crystallizing thinking into retrievable knowledge artifacts. Distillation sits in the knowledge pipeline, parallel to the build pipeline. Threads are the workspace; learnings are the durable output.
The terminal contract is retrieval: every learning must be reachable later via `anvil list learning --diataxis X --tags domain/Y,activity/Z --confidence W`. If a draft can't be retrieved that way, the tags are wrong.
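Concretely, this is the kind of query a well-tagged learning must answer to; the diataxis and tag values here are hypothetical:

```shell
anvil list learning --diataxis explanation \
  --tags domain/postgres,activity/migration --confidence low
```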
## When this skill runs
- The user explicitly commits to distilling: "let's distill", "wrap this up into a learning", "extract what we learned from <source>".
- A source artifact exists and is named: a thread, a completed plan, a transcript, or a free reflection.
## When not to use
- The source isn't crystallized yet (still actively researching) → keep working in the thread.
- The output would just restate the source — distillation requires a durable claim or piece of know-how, not a summary.
- The user wants to capture a one-off thought without taxonomy commitment → `anvil:capturing-inbox`.
## Phase 1 — Identify the source
Confirm one of the following with the user, then read the relevant files:
| Source kind | What to read |
| --- | --- |
| Thread + sessions | The thread file plus every session linked via `related: [[thread.<id>]]` |
| Completed plan | The plan file plus build artifacts it produced |
| Transcript | The transcript file |
| Reflection | The conversation context only |
```shell
anvil show thread <id>                 # if thread source
anvil list session --tag thread/<id>   # if thread source, find linked sessions
```
## Phase 2 — Decide cardinality
N learnings per pass, not 1. Each learning crystallizes one claim or one piece of know-how. Two related claims become two learnings — never bundle.
Draft a list of candidate learnings. For each, pick:
- Title — the claim, phrased as a noun phrase ("FK locks block writes during backfill", not "how to handle locks").
- `diataxis` — `tutorial` | `how-to` | `reference` | `explanation`. Default `explanation` for claims; `how-to` for procedural know-how; `reference` for catalogued options; `tutorial` only for end-to-end teaching material.
- `confidence` — `low` | `medium` | `high`. Default `low`. Promote to `medium` only if backed by a primary source the user has read; `high` only if also independently verified (replicated, peer-reviewed, run in production).
Gate: present the draft list (titles + diataxis + confidence) to the user. User prunes/edits before any file is written.
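A draft list presented at the gate might look like this; every title and value below is hypothetical:

```
1. FK locks block writes during backfill       explanation   low
2. Batch backfills in fixed-size chunks        how-to        medium
3. Lock types taken by common DDL operations   reference     low
```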
## Phase 3 — Tag each learning
Tags are mechanism, not vocabulary. Use the four-facet system; the actual values come from `_meta/glossary.md`.
```shell
anvil tags list --source used --prefix domain/     # values currently in vault
anvil tags list --source used --prefix activity/   # values currently in vault
anvil tags list --source used --prefix pattern/    # values currently in vault
```
For each learning, propose tags drawn from existing values first. Only invent a new tag if no existing one fits — and only after the user approves it. New tags must:
- Be lowercase ASCII, hyphens only (no spaces, no underscores, no caps).
- Have shape `<facet>/<name>` where facet is one of `domain` | `activity` | `pattern`. (`status/` is forbidden — status is a frontmatter field.)
- Be introduced by passing `--allow-new-facet=<facet>` on the create call. (Glossary seeding via `anvil tags add <facet>/<name> --desc "..."` is a Phase 5 follow-up; it does not bypass the novelty gate on its own.)
- Never include a `status/*` tag.
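The shape rules above can be sanity-checked with plain `grep` before proposing tags to the user — a minimal sketch, not part of the anvil workflow, and the tag values are hypothetical:

```shell
# Regex mirrors the rules: lowercase ASCII, hyphens only, <facet>/<name> shape,
# facet restricted to domain | activity | pattern (so status/* is rejected).
valid='^(domain|activity|pattern)/[a-z0-9]+(-[a-z0-9]+)*$'

for tag in domain/postgres-migrations activity/schema-backfill \
           status/done pattern/Retry_Loop; do
  if printf '%s\n' "$tag" | grep -Eq "$valid"; then
    echo "ok      $tag"
  else
    echo "reject  $tag"
  fi
done
```

The first two tags pass; `status/done` fails on the forbidden facet and `pattern/Retry_Loop` fails on the caps and underscore.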
## Phase 4 — Create + populate each approved learning
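Each approved learning is created first with `anvil create`. A sketch, assuming an `anvil create learning` subcommand that accepts the same flag names used elsewhere in this skill; the title and tag values are hypothetical:

```shell
anvil create learning \
  --title "FK locks block writes during backfill" \
  --diataxis explanation \
  --confidence low \
  --tags domain/postgres,activity/migration
# add --allow-new-facet=<facet> only when a tag value is novel and user-approved
```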
After `anvil create` succeeds, direct-edit the file body. The body MUST contain three H2 sections in this exact order:
- `## TL;DR` — one paragraph: the claim, and why it matters.
- `## Evidence` — sources, session quotes, plan outcomes, transcript references that ground the claim. Use wikilinks: `[[session.<id>]]`, `[[plan.<id>]]`, etc.
- `## Caveats` — what would change `confidence`, what's still unknown, limits of applicability.
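A filled-in body sketch; every id, observation, and claim below is hypothetical:

```markdown
## TL;DR
Validating a foreign key during a large backfill blocks concurrent writes on
the referencing table; run the backfill in batches and validate afterwards.

## Evidence
- [[session.2024-06-12-backfill]]: write latency spiked during validation
- [[plan.users-fk-migration]]: the batched variant shipped without incident

## Caveats
Observed on one engine and one table size; `confidence` stays low until
reproduced elsewhere. Unknown whether deferred constraints behave the same.
```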
Progressive disclosure: future agents query `anvil list learning ...`, see frontmatter + (eventually) the TL;DR; only drill into Evidence + Caveats when needed.
Tags are seeded on `anvil create` above (passing `--tags` and, for novel values, `--allow-new-facet`). Don't hand-edit the `tags:` frontmatter block — that bypasses the novelty gate. Body edits only.
Backlink the source:

```shell
anvil link learning <id> <source-type> <source-id>
```
## Phase 5 — Glossary additions
If new tags were approved in phase 3, append them now (one per new tag):
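Using the syntax noted in Phase 3:

```shell
anvil tags add <facet>/<name> --desc "..."
```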