ideasift
// IdeaSift — Survivor Idea Filtration Tool. Named AI personas adversarially filter ideas against a corpus. Multi-roster comparison for cross-panel validation.
| name | ideasift |
| description | IdeaSift — Survivor Idea Filtration Tool. Named AI personas adversarially filter ideas against a corpus. Multi-roster comparison for cross-panel validation. |
| triggers | ["ideasift","ideasift ideas","filter ideas","adversarial review","survivor document","review architecture","challenge ideas","sift this","gap analysis"] |
This skill calls an ideasift binary whose source ships in COR-CODE (source only, no prebuilt binary). Always run this check first — don't assume the binary exists, don't invoke it blind.
which ideasift
If it returns a path → tool is installed, proceed.
If it returns nothing → tool is NOT installed. Walk the user through install BEFORE attempting any filtration:
"ideasift isn't installed yet. The source ships in COR-CODE at
tools/ideasift/. Install steps:
1. cd <COR-CODE-root>/tools/ideasift
2. npm install (fetches deps incl. Anthropic SDK)
3. npm run build (compiles TypeScript → dist/)
4. npm link (symlinks ideasift to your global npm bin)
5. Verify: which ideasift && ideasift --version

Want me to run those steps for you, or would you rather run them yourself?"
Don't run the install silently — npm install fetches third-party packages and npm link modifies a global PATH-resolved binary. The user should know.
Pre-requisite — env: ANTHROPIC_API_KEY must be set (personas are Claude-driven). Check with [ -n "$ANTHROPIC_API_KEY" ] && echo "set" || echo "missing" before invoking. Warn the user if missing — do not pretend the run will work.
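Both preflight checks can be wrapped into small shell helpers (a minimal sketch using only standard POSIX commands; nothing here is specific to ideasift internals):

```shell
# Preflight helpers for a sift run: binary on PATH, API key in env.
# Each prints a status line and returns non-zero on failure, so they
# can gate a pipeline or just inform the user.
check_ideasift_binary() {
  if command -v ideasift >/dev/null 2>&1; then
    echo "binary: ok"
  else
    echo "binary: missing"
    return 1
  fi
}

check_api_key() {
  if [ -n "$ANTHROPIC_API_KEY" ]; then
    echo "api key: set"
  else
    echo "api key: missing"
    return 1
  fi
}
```

Run both before invoking ideasift; a `missing` line means stop and walk the user through setup first.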
Named AI personas generate ideas, score them asymmetrically, and adversarially challenge each other. Only survivors make the final document. Multi-roster comparison mode cross-references what survives across different team compositions.
bughunt finds bugs in code. ideasift finds weak ideas in architecture.
These principles (from autoresearch analysis) improve how IdeaSift is used:
When evaluating survivor ideas, apply this rule before accepting:
Ask for every survivor: "Is the improvement worth the complexity it adds?" Renzo's territory but ALL personas should weigh this.
When running IdeaSift iteratively (sift → implement survivors → sift again), the SCORING FUNCTION must not change between runs. The 8 stages and +2/-3 asymmetric scale are the immutable eval. If you change scoring between iterations, you can't compare results. Lock the eval, change the corpus.
IdeaSift output can be verbose. When the project Claude reads the survivor document:
Don't dump the entire survivor-document.json into context. Extract the signal.
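The report's exact JSON schema isn't documented in this skill. Assuming a top-level `survivors` array whose entries carry `title` and `score` fields (an assumption; verify against a real report before relying on it), a minimal extraction helper might look like:

```shell
# Pull just the signal out of survivor-document.json instead of dumping
# the whole file into context. ASSUMED schema: top-level "survivors"
# array with "title" and "score" per entry; check the real file first.
extract_survivors() {
  python3 - "$1" <<'PY'
import json, sys

doc = json.load(open(sys.argv[1]))
for s in doc.get("survivors", []):
    # One compact line per survivor: score, then title.
    print(f'{s.get("score", "?"):>3}  {s.get("title", "(untitled)")}')
PY
}
```

Usage: `extract_survivors ./sift-report/survivor-document.json` gives one line per carry-forward item, which is usually all the context the discussion needs.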
If IdeaSift returns 0 survivors, don't conclude "the corpus is perfect." Instead:
For iterative improvement of a document:
1. Sift the document → get survivors
2. Apply survivors to the document
3. Sift again with same eval (same scoring, same stages)
4. If new survivors emerge → repeat
5. If 0 survivors → document has converged
Each iteration should produce FEWER survivors than the last. If iteration 3 produces more than iteration 1, something is wrong — you're introducing complexity faster than you're resolving gaps.
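The loop above can be sketched in shell. The `--budget` and `-o` flags and the `survivor-document.json` filename come from this skill; the `survivors` array is an assumed schema (verify against a real report), and applying survivors between passes remains a manual step:

```shell
# Iterative sift loop: re-run with the SAME eval until zero survivors.
# ASSUMPTION: survivor-document.json has a top-level "survivors" array.
sift_until_converged() {
  doc="$1"
  max_iters="${2:-5}"
  for i in $(seq 1 "$max_iters"); do
    # Same eval every pass: same stages, same scoring, same budget.
    ideasift "$doc" --budget 5 -o "/tmp/sift-$i"
    n=$(python3 -c 'import json,sys; print(len(json.load(open(sys.argv[1])).get("survivors", [])))' \
        "/tmp/sift-$i/survivor-document.json")
    echo "iteration $i: $n survivors"
    if [ "$n" -eq 0 ]; then
      echo "converged"
      return 0
    fi
    # Manual step between passes: apply the survivors to the document.
  done
  echo "no convergence after $max_iters iterations"
}
```

If the survivor counts per iteration aren't monotonically decreasing, stop and investigate before continuing.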
Tool installed at: ~/.claude/tools/ideasift/
ideasift ./architecture.md
ideasift ./02-COMPLETED/ --budget 8
ideasift ./architecture.md --compare --budget 15
ideasift ./design.md --personas "Renzo,Renzo,Kael,Kael,Soren"
ideasift ./PHASES.md --quick --verbose
7 constructed identities with territory allocation (reduces correlated output):
| Name | Territory | Background |
|---|---|---|
| Mara | Ownership, SLA, governance | VP Ops Big Four → COO fintech |
| Kael | Security, legitimacy, provenance | Cloudflare security → UK financial services |
| Soren | State machines, replay, spec quality | Stripe payments infra → distributed systems |
| Lena | Trust, adoption, behavioural signals | Intercom product → B2B AI adoption |
| Renzo | Dead weight, overbuild, deletion | Two failed CTOs → independent advisor |
| Sage | Continuity, ambient awareness | Cognitive science → ambient computing research |
| Voss | Competitive attack, proof, speed | YC startup → platform engineering |
Duplicate personas compete within territory — Renzo,Renzo creates Renzo-1 and Renzo-2, each forced to find what the other missed.
| Preset | Composition | Bias |
|---|---|---|
| hardening | Mara, Kael, Kael, Soren, Renzo | Safety, rigour, subtraction |
| product | Lena, Lena, Sage, Voss, Renzo | Trust, adoption, competition |
| full | All 7 | Balanced coverage |
| Flag | Purpose | Default |
|---|---|---|
| --personas <names> | Comma-separated persona names (duplicates allowed) | all 7 |
| --compare | Run all 3 presets and cross-reference | off |
| --rosters <presets> | Specific presets for comparison | hardening,product,full |
| --max-ideas <n> | Max ideas per persona | 5 |
| --challengers <n> | Challengers per idea | 2 |
| --budget <usd> | Max API spend in USD | 5.00 |
| -t, --thesis <text> | Non-negotiable thesis — personas improve it, never replace it | none |
| -q, --quick | Skip adversarial challenge | off |
| -v, --verbose | Show persona reasoning | off |
| -o, --output <dir> | Report output directory | ./sift-report |
| -p, --provider <name> | anthropic or openai | anthropic |
| -m, --model <model> | Generator model | claude-sonnet-4-6 |
| -c, --challenger-model | Model for challenges | same as generator |
| -d, --description <text> | Review description for report | Architecture review |
Set ANTHROPIC_API_KEY or OPENAI_API_KEY in environment, or pass --api-key.
Two report files in the output directory:
- survivor-document.json — structured data (ideas, scores, challenges, stats)
- survivor-document.md — readable markdown with carry-forward, discussion, and discarded sections

Compare mode additionally produces:

- <preset>/ subdirectories with per-roster reports
- comparison.json — cross-roster agreement stats

| Score | Meaning |
|---|---|
| +2 | Genuinely new, high-leverage addition |
| +1 | Valid sharpening of a partially-present concept |
| 0 | Interesting but low-leverage |
| -1 | Vague philosophical restatement |
| -2 | Duplicate of existing corpus presented as new |
| -3 | Misleading or already-covered advice sold as a major gap |
Duplicate personas in same territory: repeating another's idea costs -2.
User: "sift this architecture"
Claude: ideasift ./02-COMPLETED/DD-AUTONOMOUS-ORCHESTRATOR-architecture.md --budget 5 -o /tmp/sift-report
Claude: Reads survivor document, presents carry-forward items, discusses with user
User: "run sift comparison on the phases"
Claude: ideasift ./PHASES.md --compare --budget 12 -d "COR Intelligence build plan review"
Claude: Reads comparison, highlights universal findings (survived all 3 rosters)
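Highlighting universal findings can be scripted. The schema of comparison.json is not documented in this skill, so the field names below (`rosters`, `ideas`, `survived_in`, `title`) are hypothetical; inspect the real file and adjust before use:

```shell
# Hypothetical sketch: list ideas that survived every roster.
# ASSUMED comparison.json schema: {"rosters": [...], "ideas":
#   [{"title": ..., "survived_in": [...]}]}; verify against the real file.
universal_findings() {
  python3 - "$1" <<'PY'
import json, sys

doc = json.load(open(sys.argv[1]))
rosters = set(doc.get("rosters", []))
for idea in doc.get("ideas", []):
    # Universal = survived in every roster that ran.
    if rosters and set(idea.get("survived_in", [])) >= rosters:
        print(idea.get("title", "(untitled)"))
PY
}
```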
User: "any gaps in this design?"
Claude: ideasift ./design-doc.md --quick --personas "Kael,Soren,Renzo" -o /tmp/sift-report
Claude: Reads survivors, flags genuine gaps, ignores discarded noise
| Configuration | Typical Cost | Duration |
|---|---|---|
| 3 personas, quick mode | $0.50-1.50 | 2-5 min |
| 7 personas, full challenge | $3-6 | 10-20 min |
| Compare mode (3 rosters) | $10-18 | 30-60 min |
| Custom 5 personas with duplicates | $2-4 | 5-15 min |
If the tool needs updating:
cd ~/.claude/tools/ideasift
# Make changes to src/
npm run build