# memory-tree

> Restructure a flat MEMORY.md into a self-maintaining hierarchical memory tree. v2 adds automated reindexing, health monitoring, archive/purge lifecycle, and LLM reflection staging. Cuts boot token load by 70-95%.
| field | value |
|---|---|
| name | memory-tree |
| description | Restructure a flat MEMORY.md into a self-maintaining hierarchical memory tree. v2 adds automated reindexing, health monitoring, archive/purge lifecycle, and LLM reflection staging. Cuts boot token load by 70-95%. |
Restructure a flat MEMORY.md into a hierarchical memory/domains/ tree with auto-generated indexes. The agent reads a ~300-500 token root index on boot instead of the full file. Detailed content is searched on-demand via memory_search.
| Capability | v1 | v2 |
|---|---|---|
| Initial migration | ✅ Manual | ✅ Preserved |
| Reindexing | ❌ Manual | ✅ Auto every 6h (cron) |
| Memory health check | ❌ None | ✅ Status check every 6h |
| Stale entry cleanup | ❌ Never | ✅ Archive → 180d purge |
| general/ domain rot | ❌ Grows forever | ✅ 30-day max age enforced |
| LLM reflection staging | ❌ Never | ✅ Via _reflect_staging.json |
| Operational state | ❌ None | ✅ _meta.json + _stats.json |
```
memory/
├── _index.md              # Root summary (~300-500 tokens) — replaces MEMORY.md
├── _meta.json             # { last_reindex, domain_count, total_tokens, script_version }
├── _stats.json            # { domains: { <name>: { files, tokens } }, general_age_violations }
├── _reflect_staging.json  # (ephemeral) LLM reflection candidates — agent processes and deletes
├── _purge_log.json        # Audit log of purged files
├── domains/
│   ├── <domain>/
│   │   ├── _index.md           # Domain summary (auto-generated)
│   │   ├── topic-a.md          # One ## section from old MEMORY.md
│   │   └── topic-b__DELETE.md  # Archived — queued for 180d purge
│   └── <domain>/
│       └── ...
├── dates.md               # Cross-cutting dates (if present)
└── daily/                 # Existing daily logs moved here
    ├── YYYY-MM-DD.md
    └── ...
```
Read your MEMORY.md. Count ## sections and estimate total tokens (chars ÷ 4). If under ~1,000 tokens, stop — migration isn't worth it.
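The pre-check can be sketched in shell (`estimate_memory` is a hypothetical helper, not part of this skill; it uses the same chars ÷ 4 token estimate):

```shell
# Hypothetical helper: estimate MEMORY.md size before migrating.
# Token estimate = chars / 4, as used throughout this skill.
estimate_memory() {
  local file="${1:-MEMORY.md}"
  local chars sections tokens
  chars=$(wc -c < "$file")
  sections=$(grep -c '^## ' "$file")
  tokens=$((chars / 4))
  echo "$sections sections, ~$tokens tokens"
  if [ "$tokens" -lt 1000 ]; then
    echo "verdict: skip migration"
  else
    echo "verdict: migrate"
  fi
}
```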
```shell
cp MEMORY.md memory/MEMORY.md.bak
```
Never skip this. The backup is your rollback.
Read each ## Heading and assign it to a domain using these keyword rules:
| Domain | Keywords in heading |
|---|---|
| identity | identity, personal, family, personality, preferences, network, professional network |
| business | business, revenue, company, SEO, events, product, brand, pricing, clients |
| infrastructure | infrastructure, services, cron, voice, API, issues, quirks, config, tools |
| community | community, directory, meetup, group |
| agents | agent, task, sub-agent, monitoring |
| legal | legal, non-compete, contract, compliance |
| dates | dates, remember, anniversary, birthday → goes to memory/dates.md (not a domain) |
| general | anything that doesn't match above |
If a section clearly fits a subdomain (e.g., "getout.sg SEO"), nest it: domains/business/seo/getout-sg.md.
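The keyword rules can be sketched as a first-match case statement (`classify_heading` is a hypothetical helper, and the patterns abbreviate the table above to a few keywords per domain):

```shell
# Hypothetical helper: map a section heading to a domain using the keyword
# table above. First match wins; unmatched headings fall through to general.
classify_heading() {
  heading=$(printf '%s' "$1" | tr '[:upper:]' '[:lower:]')
  case "$heading" in
    *identity*|*personal*|*family*|*preferences*|*network*)  echo identity ;;
    *business*|*revenue*|*seo*|*pricing*|*clients*|*brand*)  echo business ;;
    *infrastructure*|*cron*|*api*|*config*|*tools*|*quirks*) echo infrastructure ;;
    *community*|*directory*|*meetup*|*group*)                echo community ;;
    *agent*|*task*|*sub-agent*|*monitoring*)                 echo agents ;;
    *legal*|*non-compete*|*contract*|*compliance*)           echo legal ;;
    *dates*|*anniversary*|*birthday*|*remember*)             echo dates ;;
    *)                                                       echo general ;;
  esac
}
```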
For each ## Section in MEMORY.md:

1. Create a file at memory/domains/<domain>/<kebab-case-heading>.md
2. Move the section's content (including any _Related: lines) into that file

Example: ## Key Identity Facts → memory/domains/identity/key-identity-facts.md
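The split step can be sketched with awk (`split_sections` is a hypothetical helper; it slugs each `## Heading` into a kebab-case filename and leaves domain assignment to you):

```shell
# Hypothetical helper: split each "## Heading" section of a flat memory file
# into its own <kebab-case-heading>.md under a destination directory.
split_sections() {
  src="$1"; dest="$2"
  mkdir -p "$dest"
  awk -v dest="$dest" '
    /^## / {
      slug = tolower(substr($0, 4))
      gsub(/[^a-z0-9]+/, "-", slug)
      gsub(/^-+|-+$/, "", slug)
      file = dest "/" slug ".md"
      print > file          # keep the heading in the new file
      next
    }
    file { print > file }   # body lines follow their heading
  ' "$src"
}
```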
Move all memory/YYYY-*.md files to memory/daily/:
```shell
mkdir -p memory/daily
mv memory/2026-*.md memory/daily/
```
Also move any progress-tracking files (e.g., *-progress*.md).
For each domain directory, create _index.md:
```markdown
# <Domain> Index
_Last indexed: <ISO timestamp>_
_Estimated tokens: ~<total>_

### <filename.md> (~<tokens> tokens)
<First meaningful line from the file, max 250 chars>

### <filename2.md> (~<tokens> tokens)
<First meaningful line>
```
Token estimate = file chars ÷ 4.
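Per-domain index generation boils down to a loop like the following sketch (`gen_domain_index` is a hypothetical name; the bonsai-reindex.sh script described later is the real implementation):

```shell
# Hypothetical sketch: regenerate _index.md for one domain directory.
gen_domain_index() {
  dir="$1"
  {
    echo "# $(basename "$dir") Index"
    echo "_Last indexed: $(date -u +%Y-%m-%dT%H:%M:%SZ)_"
    for f in "$dir"/*.md; do
      [ "$(basename "$f")" = "_index.md" ] && continue   # skip the index itself
      chars=$(wc -c < "$f")
      first=$(grep -m1 -v '^[[:space:]]*$' "$f" | cut -c1-250)
      echo ""
      echo "### $(basename "$f") (~$((chars / 4)) tokens)"
      echo "$first"
    done
  } > "$dir/_index.md"
}
```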
Create memory/_index.md:
```markdown
# Memory Index
_Last indexed: <ISO timestamp>_
_Estimated tokens: ~<grand total across all domains>_

## Domains

### <Domain> (~<tokens> tokens)
<One-line summary of domain's primary topic>
_Also: <other file names without .md>_

### Daily Logs (<N> files in memory/daily/)
Searchable via memory_search.
```
Overwrite MEMORY.md with:
```markdown
# Memory Index (see memory/domains/ for full content)
# Auto-generated — do not edit directly
# Original preserved at memory/MEMORY.md.bak
# Last regenerated: <ISO timestamp>

<contents of memory/_index.md>
```
This is what gets loaded on boot — ~300-500 tokens instead of the full file.
Verify the migration:

- memory/MEMORY.md.bak exists and matches the original size
- Every ## Section from the original exists as a file in memory/domains/
- Run memory_search for a known fact — it should find content in the new paths

Five shell scripts live in skills/bonsai-memory/scripts/. Install them once; run via cron or manually.
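The section-coverage check can be scripted (`check_sections` is a hypothetical helper; it matches headings to files by kebab-case prefix, so it is approximate and works best for simple headings):

```shell
# Hypothetical helper: report any "## Heading" from the backup that has no
# matching kebab-case file anywhere under the domains tree.
check_sections() {
  bak="$1"; domains="$2"
  grep '^## ' "$bak" | while read -r _ rest; do
    slug=$(printf '%s' "$rest" | tr 'A-Z ' 'a-z-')
    find "$domains" -name "${slug}*.md" | grep -q . || echo "MISSING: $rest"
  done
}
```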
Scans all domain files, regenerates _index.md for each domain, regenerates memory/_index.md, updates MEMORY.md, and writes _meta.json + _stats.json. No LLM required.
```shell
# Basic usage (uses ~/workspace by default)
bash skills/bonsai-memory/scripts/bonsai-reindex.sh

# Custom workspace
bash skills/bonsai-memory/scripts/bonsai-reindex.sh /path/to/workspace

# Via env var
BONSAI_WORKSPACE=/path/to/workspace bash skills/bonsai-memory/scripts/bonsai-reindex.sh
```
Output:
```
Reindexed 24 files across 5 domains (3840 tokens)
```
Writes:
- memory/_meta.json — { last_reindex, domain_count, total_tokens, script_version }
- memory/_stats.json — { domains: { <name>: { files, tokens } }, general_age_violations }
- memory/_index.md — root summary
- memory/domains/<domain>/_index.md — per-domain summaries
- MEMORY.md — updated with root index

Reads _meta.json and _stats.json, runs live checks on the filesystem, prints a clean status report. No LLM required.
```shell
bash skills/bonsai-memory/scripts/bonsai-status.sh
```
Output:
```
🌿 Bonsai Status
──────────────────────────────
Last reindex: 2h 15m ago (2026-03-15T06:00:00Z)
Total memory: 3840 tokens across 5 domains
general/ violations: 0 files older than 30d
Pending purge: none
──────────────────────────────
Status: HEALTHY
```
Exit codes:
- 0 = HEALTHY
- 1 = WARNING (overdue reindex, violations, pending purge, or staging file present)
- 2 = CRITICAL (5+ general violations)

Moves a memory file from one domain to another, updates the frontmatter domain: field, then triggers a reindex.
```shell
bash skills/bonsai-memory/scripts/bonsai-migrate.sh <SOURCE_FILE> <DEST_DOMAIN> [WORKSPACE]
```
Examples:
```shell
# Move a general/ file into business domain
bash skills/bonsai-memory/scripts/bonsai-migrate.sh \
  memory/domains/general/go-events-pricing.md business

# With explicit workspace
bash skills/bonsai-memory/scripts/bonsai-migrate.sh \
  memory/domains/general/some-topic.md infrastructure /path/to/workspace
```
What it does:
- Sets domain: <DEST_DOMAIN> in the file's frontmatter
- Moves the file to memory/domains/<DEST_DOMAIN>/
- Runs bonsai-reindex.sh automatically

Phase 1 (cheap, no LLM): Scans _stats.json and the filesystem for candidates that need agent review. Writes _reflect_staging.json as a sentinel file for an agent to pick up.
Candidates are flagged when:
- general/ files are older than 15 days (should be migrated to a proper domain)

```shell
bash skills/bonsai-memory/scripts/bonsai-reflect.sh
```
Output:
```
Reflection staged: 7 candidates written to _reflect_staging.json
Run your agent now to process the reflection staging file.
```
Agent instructions:
1. Read memory/_reflect_staging.json
2. For each candidate: decide keep / migrate / archive (prefix with __DELETE)
3. Run bonsai-reindex.sh after all decisions
4. Delete _reflect_staging.json when done
_reflect_staging.json format:
```json
{
  "triggered_at": "2026-03-15T08:00:00Z",
  "candidates": [
    {
      "file": "memory/domains/general/old-note.md",
      "domain": "general",
      "age_days": 22,
      "tokens": 120,
      "reason": "general/ file older than 15 days (22d)"
    }
  ]
}
```
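A consumer for this file can be sketched with python3 from shell (`list_candidates` is a hypothetical helper; it assumes python3 on PATH and the format shown above):

```shell
# Hypothetical helper: print one line per staged candidate so an agent (or a
# human) can triage them. Reads the _reflect_staging.json format shown above.
list_candidates() {
  python3 - "$1" <<'EOF'
import json, sys

with open(sys.argv[1]) as f:
    staging = json.load(f)

for c in staging["candidates"]:
    print(f'{c["file"]}  ({c["age_days"]}d, ~{c["tokens"]} tokens)  {c["reason"]}')
EOF
}
```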
Finds all files with __DELETE suffix in memory/domains/. Only deletes if older than 180 days. Logs purged files to _purge_log.json.
```shell
bash skills/bonsai-memory/scripts/bonsai-purge.sh
```
Output:
```
Skipping old-note__DELETE.md — 134 days remaining
Purged: ancient-note__DELETE.md (184d old)
Purge log updated: memory/_purge_log.json
Purged 1 files. 1 files still in grace period.
```
Archive workflow:
- Rename some-topic.md → some-topic__DELETE.md to archive it
- bonsai-purge.sh runs weekly, checks mtime — skips if <180 days old

Run these cron jobs to keep memory automatically maintained:
```shell
# Add to crontab (crontab -e):
0 */6 * * * bash ~/workspace/skills/bonsai-memory/scripts/bonsai-reindex.sh
30 */6 * * * bash ~/workspace/skills/bonsai-memory/scripts/bonsai-status.sh
0 2 * * 0 bash ~/workspace/skills/bonsai-memory/scripts/bonsai-purge.sh
```
Schedule breakdown:

- Every 6 hours (on the hour): bonsai-reindex.sh regenerates all indexes
- Every 6 hours (at :30, after the reindex): bonsai-status.sh checks memory health
- Weekly (Sunday 2:00): bonsai-purge.sh removes archived files past the 180-day grace period
Optional: Weekly reflection staging
```shell
# Run reflect every Sunday at 3am to queue candidates for agent review
0 3 * * 0 bash ~/workspace/skills/bonsai-memory/scripts/bonsai-reflect.sh
```
When bonsai-reflect.sh runs, it writes memory/_reflect_staging.json. On next boot (or when the agent checks status), it should notice this file and process it.
Agent steps when _reflect_staging.json exists:
1. Read memory/_reflect_staging.json — get the candidates list
2. For entries worth keeping, run bonsai-migrate.sh <file> <dest-domain> to move them to a better domain
3. To archive an entry, rename it with the __DELETE suffix (e.g., old-note.md → old-note__DELETE.md); it enters the 180-day grace period before bonsai-purge.sh removes it permanently
4. Run bonsai-reindex.sh after all decisions to regenerate indexes
5. Delete _reflect_staging.json when done

Decision heuristics:
- general/ files >15 days old: try to classify into a real domain; if not classifiable, archive

Notes:

- Back up: keep memory/MEMORY.md.bak before touching MEMORY.md.
- memory_search compatibility: OpenClaw's memory_search scans memory/*.md recursively. The tree structure is fully compatible — no config changes needed.
- Daily logs: move them to memory/daily/, don't restructure them.
- Never hard-delete: add the __DELETE suffix first and let bonsai-purge.sh handle final deletion after 180 days.

| Workspace size | Before (tokens) | After (tokens) | Reduction |
|---|---|---|---|
| Small (<1K) | ~500 | ~300 | ~40% |
| Medium (1-3K) | ~2,000 | ~350 | ~80% |
| Large (3-6K) | ~5,000 | ~400 | ~92% |
| Very large (6K+) | ~6,400 | ~385 | ~94% |
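For reference, the Reduction column is just 100 * (1 - after/before); a quick sketch:

```shell
# Sketch: compute the reduction percentage from before/after token counts.
reduction() {
  awk -v before="$1" -v after="$2" 'BEGIN { printf "%.0f%%\n", 100 * (1 - after / before) }'
}
reduction 6400 385   # prints 94%
```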