# maintain
// Inbox processing, calendar/task sync, archive sweep, and scheduled housekeeping for the TARS workspace
| name | maintain |
| description | Inbox processing, calendar/task sync, archive sweep, and scheduled housekeeping for the TARS workspace |
| user-invocable | true |
| triggers | ["run maintenance","housekeeping","process inbox","check inbox","sync","check for gaps","archive sweep","worktrees","list worktrees"] |
| help | {"purpose":"Workspace maintenance scoped to ingest-adjacent workflows: inbox processing, calendar/task sync,\narchive sweep, worktree hygiene, and pending workspace migrations. Hygiene checks (broken links,\norphans, schema violations, staleness, contradictions, framework state drift) moved to\n`/lint` in v3.1.","use_cases":["Process my inbox","Sync tasks and calendar","Check for gaps","Archive stale notes","Run maintenance","List worktrees","Merge / prune worktrees","Run pending migrations"],"scope":"maintenance,inbox,sync,archive,housekeeping,worktrees,migrations"} |
Modes: inbox (classify and route pending items), sync (drift detection between workspace and external systems), archive (staleness-based sweep with guardrails), worktrees (git worktree hygiene), migrations (run pending schema migrations). A "maintenance" trigger runs inbox + sync + archive in sequence.
When a resolve_capability call returns status: "unavailable", follow the degradation messaging convention in skills/core/SKILL.md section "Degradation messaging convention".
Hygiene checks (broken wikilinks, orphans, schema violations, staleness banners, contradictions, framework self-state drift) moved to /lint in v3.1. Reference-update mode was retired (v2.1 artifact).
- The tars-weekly-maintenance cron job invokes /maintain --weekly silently. Nothing auto-applies; everything flows into a review queue note for the next session.
- The SessionStart hook runs maintenance when the last_run timestamp indicates it is due.
- All workspace writes go through mcp__tars_vault__* tools. External integrations resolve via mcp__tars_vault__resolve_capability(capability=…); never hard-code provider names.
| Mode | Trigger | Purpose |
|---|---|---|
| Inbox | "process inbox" | Classify and route pending inbox items (multimodal) |
| Sync | "sync", "check for gaps" | Calendar gaps, task drift, memory freshness |
| Archive | "archive sweep" | Staleness archival with 90d backlink + active-task guardrails |
| Worktrees | "list worktrees", /maintain worktrees | Discover, merge, and prune git worktrees |
| Migrations | "run migrations", /maintain migrations | Apply pending schema/vault-structure migrations |
| Re-register | "re-register jobs", /maintain --re-register | Switch or renew scheduler registrations with mutual exclusion |
| Maintenance | "run maintenance", "housekeeping", cron | Archive + sync; inbox only if non-empty |
Automatic (SessionStart): if _system/housekeeping-state.yaml.last_run is not today, the hook runs archive --auto + sync --light silently (surface-only; never applies without review). Cron: Friday 17:00 local by default (_system/schedule.md, registered via CronCreate in /welcome).
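The SessionStart due-check reduces to a date comparison against the state file. A minimal sketch, assuming last_run is stored as an ISO YYYY-MM-DD string (field name mirrors housekeeping-state.yaml):

```python
from datetime import date

def maintenance_due(state: dict, today: date) -> bool:
    """True when the SessionStart hook should run archive --auto + sync --light."""
    last_run = state.get("last_run")  # ISO "YYYY-MM-DD", or missing on first run
    return last_run != today.isoformat()

# Example: state written by a previous run
state = {"last_run": "2026-04-29", "last_success": True}
maintenance_due(state, date(2026, 4, 30))  # due: last run was yesterday
```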
Triggered by: "process inbox", "check inbox".
Classify and process all pending inbox items. Supports text, transcripts, images, PDFs, decks, documents, spreadsheets, screenshots, exported notes, and mixed content when the active Claude environment can read them. If a binary file cannot be read directly, create a companion note and ask the user for extracted text or a converted copy rather than marking it processed.
mcp__tars_vault__search_by_tag(tag="tars/inbox", limit=50)
Fall back to directory listing of inbox/pending/ if the search returns empty.
For each file, read the first 50 lines (or full content for images) to determine content type.
| Content type | Detection signals | Processing route |
|---|---|---|
| Transcript/meeting notes | Speaker labels, timestamp patterns, "Meeting:", Otter/Fireflies/Teams/Zoom format | /meeting pipeline |
| Screenshot/image | .png, .jpg, .jpeg, .gif, .webp | Multimodal analysis + context inference |
| Article/link | URL, "http", article structure, byline | /learn wisdom mode |
| Document/deck/report | .pdf, .docx, .pptx, .xlsx, exported report files | Companion file + text extraction when readable; otherwise companion note + request extracted text |
| Task-like items | Checkbox patterns, "TODO", "Action item" | /tasks extract mode |
| Facts/memory items | Declarative statements, "Remember:", "Note:" | /learn memory mode |
| Claude-session capture | tars-source-type: claude-session frontmatter (dropped by PreCompact / SessionEnd hooks) | Review + route to /meeting, /learn, or /tasks |
| Mixed | Multiple types detected in one file | Split into components |
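The detection pass can be sketched as a signal-based classifier, with "mixed" when two or more text signals fire on the same file. This is an illustrative approximation; the regexes are assumptions standing in for the table's detection signals, not the canonical routing logic:

```python
import re

# Signal patterns loosely mirroring the detection table.
SIGNALS = {
    "transcript": re.compile(r"^Meeting:|\b\d{1,2}:\d{2}\b", re.M),
    "task": re.compile(r"- \[[ xX]\]|\bTODO\b|\bAction item\b"),
    "fact": re.compile(r"^(Remember|Note):", re.M),
    "article": re.compile(r"https?://"),
}
IMAGE_EXT = {".png", ".jpg", ".jpeg", ".gif", ".webp"}
DOC_EXT = {".pdf", ".docx", ".pptx", ".xlsx"}

def classify(filename: str, head: str) -> str:
    """Classify an inbox item from its filename and first ~50 lines."""
    ext = "." + filename.rsplit(".", 1)[-1].lower() if "." in filename else ""
    if ext in IMAGE_EXT:
        return "image"       # route to multimodal analysis
    if ext in DOC_EXT:
        return "document"    # route to companion-file handling
    hits = [name for name, pat in SIGNALS.items() if pat.search(head)]
    if len(hits) > 1:
        return "mixed"       # split into components
    return hits[0] if hits else "unclassified"
```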
N items in inbox:
1. ClientCo-sync-notes.txt → meeting transcript (Otter format)
2. IMG_2847.png → screenshot, Slack message from Sarah about API deadline
3. api-patterns.pdf → research paper on API design
4. quick-notes.md → mixed (2 tasks + 3 facts)
5. claude-session-2026-04-17.md → Claude session summary (pre-compact capture)
Process all? [all / pick specific / reclassify any]
Allow user to override classification, exclude items, or reorder priority.
Route each selected item to the appropriate skill. Between items, report progress.
Routing by content type:

- Transcripts/meeting notes → the /meeting pipeline with the file content (steps 2–14).
- Documents/decks → mcp__tars_vault__create_note(template="companion", path="contexts/YYYY-MM/…") with the §26.13 frontmatter contract; extracted tasks/facts routed through /tasks and /learn.
- Articles/links → WebFetch extraction, then /learn wisdom mode.
- Extracted document text lands in contexts/YYYY-MM/; if extraction is unavailable, keep the source in inbox and ask for a converted/exported text copy.
- Task-like items → /tasks extract mode (accountability test + numbered review).
- Facts/memory items → /learn memory mode (durability test + numbered review).

After routing, mark the item processed:

mcp__tars_vault__update_frontmatter(file="<inbox-item>", property="tars-inbox-processed", value="YYYY-MM-DD")
mcp__tars_vault__move_note(src="inbox/pending/<file>", dst="inbox/processed/<file>")
NEVER delete source files. The subsequent archive-sweep step (see the archive mode below) moves inbox/processed/ items older than 7 days to long-term archive.
Emit inbox_processed telemetry. Append an inbox-processing summary to the daily note via mcp__tars_vault__append_note.
Triggered by: "sync", "check for gaps".
cap = mcp__tars_vault__resolve_capability(capability="calendar")
events = cap.tools[list_events-style](start=today-7, end=today)
Cross-reference each meeting against journal entries:
mcp__tars_vault__search_by_tag(tag="tars/meeting", frontmatter={"tars-date": "<date>"}, limit=5)

Offer to route each gap to /meeting (if a transcript is available in inbox) or to create a placeholder journal entry.
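A sketch of the gap check, with a hypothetical find_meeting_note callback standing in for the search_by_tag call (matching on tars-date only, as the search does):

```python
def find_calendar_gaps(events: list[dict], find_meeting_note) -> list[dict]:
    """Return calendar events that have no matching journal entry.

    find_meeting_note(date_str) stands in for the vault search and
    returns a (possibly empty) list of matching note paths.
    """
    gaps = []
    for event in events:
        if not find_meeting_note(event["date"]):
            gaps.append(event)
    return gaps

# Illustrative data: one covered meeting, one gap
events = [
    {"title": "ClientCo sync", "date": "2026-04-28"},
    {"title": "1:1 Sarah", "date": "2026-04-29"},
]
journal = {"2026-04-28": ["journal/2026-04/clientco-sync.md"]}
gaps = find_calendar_gaps(events, lambda d: journal.get(d, []))
```

Matching on the date alone is crude (two meetings on one day would collide); the real cross-reference can also compare titles or participants.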
cap = mcp__tars_vault__resolve_capability(capability="tasks")
external_tasks = cap.tools[list-style](...)
vault_tasks = mcp__tars_vault__search_by_tag(tag="tars/task", frontmatter={"tars-status": "open"}, limit=500)
Compare and surface:
Never auto-resolve. Present each drift for user decision (/tasks manage runs the actual update).
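The drift comparison can be sketched as a set diff keyed on a shared task id (the id and status field names are assumptions about the normalized task shape):

```python
def task_drift(external: list[dict], vault: list[dict]) -> dict:
    """Surface drift between an external task system and vault task notes.

    Nothing is resolved here; drift is only reported for user decision.
    """
    ext = {t["id"]: t for t in external}
    vlt = {t["id"]: t for t in vault}
    return {
        "missing_in_vault": sorted(ext.keys() - vlt.keys()),
        "missing_externally": sorted(vlt.keys() - ext.keys()),
        "status_mismatch": sorted(
            i for i in ext.keys() & vlt.keys()
            if ext[i]["status"] != vlt[i]["status"]
        ),
    }
```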
Check people who appeared in recent meetings (last 14 days) but whose memory notes haven't been touched in 30+ days.
recent_journals = mcp__tars_vault__search_by_tag(
tag="tars/meeting",
frontmatter={"tars-date__gte": "<today-14>"},
limit=50
)
# Extract participants, check tars-modified on each memory/people/<name>.md
Surface for user decision: update profile with recent insights (routes to /learn) or accept staleness.
(Telemetry rollup moved to /lint per the v3.1 boundary; the rollup script scripts/telemetry-rollup.py is the single source of truth and is consumed by the /briefing weekly footer + /maintain --weekly. Source .jsonl retention is 90 days rolling: older files move to _system/telemetry/archive/YYYY-MM.jsonl.gz on the weekly maintenance run.)
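The rolling-retention sweep can be sketched as follows, assuming source files are named YYYY-MM-DD.jsonl (an assumption; the actual naming may differ):

```python
import gzip
import shutil
from datetime import date, timedelta
from pathlib import Path

def sweep_telemetry(telemetry_dir: Path, today: date, keep_days: int = 90) -> list[Path]:
    """Move .jsonl files older than the rolling window into
    archive/YYYY-MM.jsonl.gz (one gzip member appended per source file)."""
    cutoff = today - timedelta(days=keep_days)
    archive_dir = telemetry_dir / "archive"
    archive_dir.mkdir(exist_ok=True)
    moved = []
    for src in sorted(telemetry_dir.glob("*.jsonl")):
        file_date = date.fromisoformat(src.stem)  # assumes YYYY-MM-DD.jsonl names
        if file_date >= cutoff:
            continue  # still inside the 90-day window
        dst = archive_dir / f"{src.stem[:7]}.jsonl.gz"  # bucket by YYYY-MM
        with src.open("rb") as f, gzip.open(dst, "ab") as g:
            shutil.copyfileobj(f, g)
        src.unlink()
        moved.append(dst)
    return moved
```

Appending in "ab" mode produces a multi-member gzip stream, which gzip readers decompress as one concatenated file.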
Emit sync_completed with {calendar_gaps, task_drift, stale_profiles} counts. Append the sync summary to the daily note via mcp__tars_vault__append_note.
Triggered by: "archive sweep". Also runs automatically under maintenance (see --auto flag used by SessionStart).
Every archive candidate must pass BOTH guardrails:

- No inbound backlinks within the last 90 days.
- No open/active tasks referencing the note.
mcp__tars_vault__archive_note enforces these server-side: the tool returns {blocked: true, reason: …} if either guardrail trips.
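The guardrail logic the server enforces can be approximated client-side for illustration (the field names backlink_dates and active_task_refs are hypothetical, not the tool's actual schema):

```python
from datetime import date, timedelta

def guardrail_verdict(note: dict, today: date, window_days: int = 90) -> dict:
    """Mirror of the described server-side checks: block on any backlink
    inside the window, or any active task referencing the note."""
    cutoff = today - timedelta(days=window_days)
    if any(date.fromisoformat(d) >= cutoff for d in note["backlink_dates"]):
        return {"blocked": True, "reason": "backlink within 90d"}
    if note["active_task_refs"]:
        return {"blocked": True, "reason": "referenced by active task"}
    return {"blocked": False, "reason": None}
```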
Run scripts/archive.py --vault <TARS_VAULT_PATH> --dry-run --json to enumerate candidates past staleness thresholds (tiers defined in _system/schemas.yaml).

For each candidate, call mcp__tars_vault__archive_note(file=…, dry_run=true) and collect the guardrail verdicts.

Present the candidates:

N archive candidates:
1. memory/people/former-contractor.md → 180d stale, 0 recent backlinks
2. memory/decisions/2025-06-old-decision.md → 270d stale, 0 recent backlinks
Archive all / select specific / skip
On approval, call mcp__tars_vault__archive_note(file=…). The server applies the tars/archived tag, moves the note to archive/<entity-type>/YYYY-MM/, and logs.

inbox/processed/ sweep: items older than 7 days (by tars-inbox-processed date) move to archive/inbox/YYYY-MM/. Never deletes originals.

## Worktrees (/maintain worktrees)

Triggered by: "list worktrees", "worktree hygiene", "clean up worktrees", or explicitly /maintain worktrees.
When Claude Code runs sessions in isolated git worktrees, stale branches accumulate over time. This mode surfaces them and lets the user merge or prune each one.
git worktree list --porcelain
Parse the output into a list of {path, branch, HEAD} records. Skip the main worktree (first entry). For each non-main worktree:
- commits_ahead vs main: git rev-list --count main..<branch>
- last activity: git log -1 --format="%ci" <branch>
- workspace changes (modified .md files relative to main)

Present the findings:

Active worktrees (excluding main):
1. claude/feature-xyz (3 commits ahead, last activity: 2026-04-28)
   → 2 workspace files modified (may need merge review)
2. claude/old-task (0 commits ahead, last activity: 2026-03-10)
   → nothing ahead of main; safe to prune
Actions: [merge N] [prune N] [merge all safe] [prune all stale] [skip]
A worktree is safe to prune when it has 0 commits ahead of main and its last activity is older than 7 days.
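The safe-to-prune rule is a two-field predicate; a minimal sketch (the label strings are illustrative, not a defined API):

```python
from datetime import date, timedelta

def prune_safety(commits_ahead: int, last_activity: date, today: date) -> str:
    """Classify a non-main worktree per the rule above: safe to prune when
    nothing is ahead of main and the branch has been idle for over 7 days."""
    if commits_ahead == 0 and today - last_activity > timedelta(days=7):
        return "safe-to-prune"
    if commits_ahead > 0:
        return "needs-merge-review"
    return "recently-active"
```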
- Merge: run git merge <branch> from the main worktree checkout, then offer git worktree remove <path> --force on success.
- Prune: run git worktree remove <path> --force (safe-to-prune only; confirm on others).
- Finish with git worktree prune to clean up stale admin files.

Report counts: merged, pruned, skipped. Log to _system/changelog/YYYY-MM-DD.md if any action was taken.
## Migrations (/maintain migrations)

Triggered by: "run migrations", "run pending migrations", or as part of /maintain --realign (Phase 5).
Applies pending schema and workspace-structure migrations that ship with each plugin version. Migrations are idempotent Python scripts in scripts/migrations/.
python3 scripts/run-migrations.py --vault $TARS_VAULT_PATH --list
This compares _system/housekeeping-state.yaml.plugin_version against the available migration scripts and prints pending ones.
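The pending-list computation reduces to a semantic-version comparison, assuming migration ids are named <version>-<slug> as in the example listing that follows:

```python
def pending_migrations(workspace_version: str, available: list[str]) -> list[str]:
    """List migration ids whose version is newer than the workspace's
    recorded plugin_version, in ascending version order."""
    def key(v: str) -> tuple:
        return tuple(int(p) for p in v.lstrip("v").split("."))
    current = key(workspace_version)
    return [m for m in sorted(available, key=lambda m: key(m.split("-")[0]))
            if key(m.split("-")[0]) > current]
```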
Pending migrations (workspace is at v3.1.x; plugin is v3.3.0):
1. v3.2.0-add-tars-category → backfill tars-category on 228 task notes
2. v3.3.0-backfill-journal-aliases → add aliases to 13+ journal files with slug mismatch
Run all? [all / pick specific N,M / dry-run first / skip]
Always offer dry-run first. Never apply without user acknowledgement.
python3 scripts/run-migrations.py --vault $TARS_VAULT_PATH --apply [--migration v3.2.0-add-tars-category]
Each migration writes a results summary. On success, housekeeping-state.yaml.plugin_version advances to match the plugin.
On failure, the runner writes journal/YYYY-MM/migration-<id>-FAILED.md and does NOT advance the version.

## Re-register (/maintain --re-register)

Triggered by: "re-register jobs", "switch scheduler", /maintain --re-register, or when the user wants to explicitly change which scheduler owns their TARS jobs.
This is the ONLY supported path for switching a job from one scheduler to another. SessionStart and the weekly CronCreate renewal step never switch schedulers; they only maintain the current one.
Current scheduler registrations:
daily_briefing → mcp__scheduled-tasks (job: abc-123)
weekly_briefing → CronCreate (job: xyz-789, registered 2026-04-29; expires in 5d)
weekly_maintenance → CronCreate (job: xyz-790, registered 2026-04-29; expires in 5d)
lint → not_registered
Available schedulers: mcp__scheduled-tasks (connected), CronCreate
Detect available schedulers using mcp__tars_vault__resolve_capability(capability="scheduler").
Re-register options:
[1] Switch all CronCreate jobs to mcp__scheduled-tasks (recommended โ persistent)
[2] Switch all mcp__scheduled-tasks jobs to CronCreate
[3] Register unregistered jobs only
[4] Cancel all registrations (unregister everything)
[5] Cancel โ keep current setup
For each job being re-registered:
- mcp__scheduled-tasks: call the cancel/delete tool to remove the old job by stored ID.
- CronCreate: call CronDelete with the stored job ID.
- Create the replacement job with the same schedule value and the same confirm_before_run command form.
- Update housekeeping-state.yaml: id, scheduler_type, cron_create_registered_at, status.
- Update the _system/schedule.md table row.
- Update _system/install.yaml scheduler_type if all jobs now share a single scheduler.

CRITICAL: Always cancel the old job before creating the new one. Never leave both active simultaneously, even briefly, on a machine where both schedulers are accessible.
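The mutual-exclusion ordering can be illustrated with in-memory stand-ins (FakeScheduler is purely illustrative; the real calls are CronCreate/CronDelete and the mcp__scheduled-tasks cancel/create tools):

```python
class FakeScheduler:
    """In-memory stand-in for a scheduler capability, used only to
    illustrate the cancel-before-create ordering."""
    def __init__(self, name):
        self.name, self.jobs, self._n = name, {}, 0

    def create(self, schedule, command):
        self._n += 1
        job_id = f"{self.name}-{self._n}"
        self.jobs[job_id] = (schedule, command)
        return job_id

    def cancel(self, job_id):
        self.jobs.pop(job_id, None)

def switch_scheduler(job: dict, old, new) -> dict:
    # CRITICAL ordering: cancel the old registration first, then create
    # the new one, so both are never active simultaneously.
    old.cancel(job["id"])
    new_id = new.create(job["schedule"], job["command"])
    return {**job, "id": new_id, "scheduler_type": new.name}
```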
## Weekly maintenance (/maintain --weekly)

Triggered by: the tars-weekly-maintenance cron job (Sunday 18:00 by default; registered in /welcome Step 7) or by an explicit /maintain --weekly from the user.
Why this exists: Claude does not run in the background, so every periodic feature in TARS (telemetry rollup, backlog grouping, staleness/drift/curator proposals) needs a single trigger that opens a session and produces a persistent surface. The cron-fired session ends without a human present, so the only output is a numbered review file the user reads on their next session.
Pipeline:
Telemetry rollup snapshot. Run scripts/telemetry-rollup.py --vault $TARS_VAULT_PATH --days 7 --format json and capture the output. Save the rendered text version to _system/changelog/YYYY-MM-DD.md under a "Weekly telemetry rollup" heading so the changelog has a permanent record.
Backlog auto-grouping. Read every note under _system/backlog/issues/ (already created by self-evaluation in /core). Group by tars-issue-type and the originating skill. Compute a count per group and the most-recent occurrence. Items with tars-occurrence-count >= 3 over the last 14 days surface as "skill X is failing repeatedly" entries. Never auto-edits any skill; surfacing only.
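The grouping step can be sketched as follows (issue dicts with type, skill, and date fields are an assumed normalization of the backlog note frontmatter):

```python
from collections import defaultdict
from datetime import date, timedelta

def group_backlog(issues: list[dict], today: date) -> list[dict]:
    """Group backlog issues by (issue type, originating skill) and flag
    groups with >= 3 occurrences in the last 14 days as repeat failures."""
    window = today - timedelta(days=14)
    groups = defaultdict(list)
    for issue in issues:
        groups[(issue["type"], issue["skill"])].append(issue)
    out = []
    for (itype, skill), items in sorted(groups.items()):
        recent = [i for i in items if date.fromisoformat(i["date"]) >= window]
        out.append({
            "type": itype, "skill": skill, "count": len(items),
            "latest": max(i["date"] for i in items),
            "repeat_failure": len(recent) >= 3,  # "failing repeatedly" signal
        })
    return out
```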
Lint review queue. Invoke /lint --actions (Phase 5); see Step 6.5 of skills/lint/SKILL.md. Capture the materialized numbered queue.
User-model + workflow proposals (Phase 6). Invoke /learn --review-patterns (Mode C in skills/learn/SKILL.md) with the cron-fired surface flag so the call returns proposals as structured data instead of rendering inline. Append the structured proposals under a "User-model + workflow proposals" section in the review queue. Each row labels its kind (user-model field update or workflow proposal) plus the evidence count and 14-day window. Pinned fields (tars-pinned-fields in user-model, pinned: true in workflows) are skipped silently; surface a one-line "N pinned-field matches suppressed" notice if any were filtered.
Curator proposals (Phase 7). Run three checks; each appends numbered proposals to the weekly review file under "Curator proposals". All checks honor cooling-off windows tracked in _system/housekeeping-state.yaml:
a. Memory + workflow staleness. Bash scripts/archive.py --vault $TARS_VAULT_PATH --json --check all. From the JSON: memory.archivable rows (excluding protected) become memory:<file> proposals; workflows.candidates rows become workflow:<id> proposals. tars-pinned: true notes and pinned: true workflows are filtered by the script; surface a one-line "N pinned items skipped" notice from the pinned_skipped summary fields. Track last_run.archive_check in housekeeping-state; if last run was less than 7 days ago, skip this check (cooling-off).
b. Persona drift. Only run when (i) _system/install.yaml.persona is set, (ii) ≥30 days of telemetry exist, (iii) last_run.persona_drift_check is ≥14 days old or unset. Compute the user's 30-day skill-mix signature from scripts/telemetry-rollup.py --days 30 --format json (skills_loaded map). Compare against each persona template's tars-briefing-sections + implied skill mix:
- product-leader → briefing + think + learn weighted toward customer-signals/roadmap.
- sales-customer-facing → briefing + meeting + tasks weighted toward accounts/follow-ups.
- delivery-pm → briefing + tasks + initiative weighted toward blockers/RAID.
- data-science-lead → think (mode D) + learn weighted toward experiments/metrics.
- architect-staff-eng → think (mode A) + learn weighted toward ADRs/RFCs.
- support-ops-lead → briefing + tasks weighted toward incidents/SLAs.
- engineering-manager → meeting (1:1s) + briefing weighted toward team signals.
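One way to sketch the signature comparison and the 40% margin rule, interpreting the margin as an absolute difference in skill-share scores (an assumption; the skill spec does not pin down the arithmetic):

```python
def persona_drift(signature: dict[str, int], personas: dict[str, set[str]],
                  current: str, margin: float = 0.40):
    """Score each persona by the share of skill invocations falling in its
    expected mix; propose a switch only when the best alternative beats the
    current persona's score by at least `margin`."""
    total = sum(signature.values()) or 1
    scores = {
        name: sum(n for skill, n in signature.items() if skill in mix) / total
        for name, mix in personas.items()
    }
    best = max(scores, key=scores.get)
    if best != current and scores[best] - scores[current] >= margin:
        return {"from": current, "to": best, "scores": scores}
    return None  # no proposal; signature still fits the current persona
```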
If the observed signature matches a different persona by a ≥40% margin over the current persona, append a single persona:<current>→<proposed> proposal with the supporting evidence (top-3 most-invoked skills, top-3 recurring concerns from user-model). Update last_run.persona_drift_check to today regardless of whether a proposal was emitted.

Materialize the weekly review file. Write everything to inbox/pending/weekly-review-YYYY-MM-DD.md via mcp__tars_vault__write_note_from_content:
---
tags: [tars/inbox, tars/weekly-review]
tars-source: maintain-weekly
tars-created: YYYY-MM-DD
tars-status: pending
tars-window-start: <YYYY-MM-DD>
tars-window-end: <YYYY-MM-DD>
---
# Weekly review – YYYY-MM-DD
## Telemetry rollup (last 7 days)
<text from telemetry-rollup.py>
## Backlog signals
<grouped issues with counts>
## Lint actions
<numbered queue from /lint --actions>
## User-model + workflow proposals
<numbered list of proposals from /learn --review-patterns; each labeled
user-model:<field> or workflow:<id>. Empty section heading retained
when no proposals so the structure stays predictable for /lint.>
## Curator proposals
<numbered list of proposals from scripts/archive.py + persona-drift
check; each labeled memory:<file>, workflow:<id>, or
persona:<from>→<to>. Pinned-skipped count surfaces as a one-line
notice. Empty section heading retained when no proposals.>
## How to act
- Reply with `auto-fix all`, `auto-fix N,M`, `review each`, or `skip` for the lint queue.
- Approve or dismiss curator items individually by number.
- Approving a curator item triggers `mcp__tars_vault__archive_note` (memory),
a `_system/workflows.yaml` edit (workflow retirement), or
`update_frontmatter(file="install", updates={"persona": "<new>"})`
(persona switch). Every action logs to `_system/changelog/YYYY-MM-DD.md`
with reversibility notes.
CronCreate re-registration check (v3.3). Read _system/housekeeping-state.yaml cron_jobs block. For each job where scheduler_type == "CronCreate":
- Check cron_create_registered_at for staleness.
- Re-create the job with the same schedule and the same confirm_before_run command form.
- Update cron_create_registered_at to today's ISO-8601 date after successful re-registration.
- Update _system/schedule.md.
- Do not switch to mcp__scheduled-tasks here; use the same scheduler_type that was originally recorded.
- Jobs where scheduler_type == "mcp__scheduled-tasks" are persistent; skip this step for them.

Update housekeeping state. Set last_weekly_run: YYYY-MM-DD so the SessionStart hook can detect when it last ran. Persist per-check last-run timestamps for cooling-off windows: last_run.archive_check (7d), last_run.persona_drift_check (14d), and last_run.pattern_scan (any). All live under a new last_run block in _system/housekeeping-state.yaml; the SessionStart hook reads them when computing the cron-job notice.
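The renewal pass for CronCreate jobs amounts to a filtered re-stamp of the state block (a sketch; the actual CronDelete/CronCreate tool calls are elided):

```python
from datetime import date

def reregister_cron_jobs(cron_jobs: dict, today: date) -> dict:
    """Weekly renewal pass: every CronCreate job gets its registration
    refreshed and cron_create_registered_at stamped to today; persistent
    mcp__scheduled-tasks jobs are left untouched."""
    updated = {}
    for name, job in cron_jobs.items():
        if job["scheduler_type"] == "CronCreate":
            # (real flow: CronDelete old id, CronCreate with same schedule)
            job = {**job, "cron_create_registered_at": today.isoformat()}
        updated[name] = job
    return updated
```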
Telemetry. Emit maintain_weekly_run with {rollup_events, backlog_groups, lint_queue_size, curator_memory, curator_workflow, persona_drift_proposed, review_file_path, cron_reregistered}.
The cron-fired session ends here. The user reviews inbox/pending/weekly-review-YYYY-MM-DD.md on their next interactive session via the existing inbox-surfacing flow in /maintain inbox.
## Maintenance (combined run)

Triggered by: "run maintenance", "housekeeping", or cron.
Check _system/housekeeping-state.yaml.last_run. If today and not a manual invocation, ask "Maintenance already ran today. Force re-run?"

Run the wikilink healer in dry-run mode:

python3 scripts/heal-wikilinks.py --vault $TARS_VAULT_PATH --json --dry-run
If auto-fixable links are found, apply them silently:
python3 scripts/heal-wikilinks.py --vault $TARS_VAULT_PATH --apply
Surface the count in the maintenance summary (e.g. "Auto-healed 13 wikilinks"). Distance-2 suggestions are NOT auto-applied; include them in the maintenance summary as a numbered list for the user to review on their next session. Skip this step entirely if heal-wikilinks.py is not present (graceful degradation).

Record the run in _system/housekeeping-state.yaml:
mcp__tars_vault__update_frontmatter(file="housekeeping-state", property="last_run", value="YYYY-MM-DD")
mcp__tars_vault__update_frontmatter(file="housekeeping-state", property="last_success", value=true)
Emit maintenance_run telemetry. Append the maintenance summary to the daily note via mcp__tars_vault__append_note.

Guardrails (all modes):

- Never delete source files; processed inbox items move to inbox/processed/.
- Route content through the owning skills (/meeting, /learn, /tasks); inbox is classification, not ingestion.
- Archive only through mcp__tars_vault__archive_note.
- Never apply changes without review (--auto mode runs under SessionStart, which only surfaces proposals rather than applying them).
- Emit telemetry for every mode (inbox_processed, sync_completed, archive_swept, maintenance_run).
- Use mcp__tars_vault__* tools for workspace writes. Append daily-note and changelog entries explicitly.