| name | decision-frameworks |
| description | This skill should be used when the user asks to "make a decision", "evaluate options", "choose between alternatives", "assess trade-offs", "rank criteria", or needs structured decision-making frameworks using ClearThought's decision_framework operation for multi-criteria analysis. |
| version | 1.0.0 |
ClearThought Decision Frameworks
Purpose
Use ClearThought's structured decision-making tools to evaluate options systematically, weigh criteria objectively, and make well-reasoned decisions backed by analysis. This skill guides you through decision frameworks that reduce bias and improve decision quality.
When to Use This Skill
Use this skill when:
- Choosing between technology options (databases, frameworks, architectures)
- Making hiring or team composition decisions
- Evaluating vendor or tool selections
- Deciding between build vs. buy options
- Prioritizing features or projects
- Making architecture decisions with multiple trade-offs
- Selecting between competing approaches to solve a problem
- Needing to justify decisions to stakeholders
- Facing decisions with unclear criteria or weights
DO NOT use for:
- Quick binary yes/no decisions (use judgment)
- Time-critical decisions requiring immediate action (use intuition + quick validation)
- Decisions with incomplete information (gather more first, then use framework)
- Trivial low-impact choices (overthinking wastes time)
Core Decision-Making Workflow
1. Define Options (Exploration Phase)
Before evaluating, clearly identify all viable options:
- Brainstorm - Generate candidate options without judgment
- Eliminate obvious - Remove options clearly inferior on all fronts
- Diversify - Include at least one option from each category (current state, incremental improvement, transformative change)
- Validate - Confirm each option is technically feasible and resource-compatible
Tip: Use ClearThought's tree_of_thought operation first to explore the decision space and surface hidden options.
2. Establish Decision Criteria
Define what matters for this decision:
- Extract from context - What problem are you solving? What constraints exist?
- Stakeholder input - Whose needs matter? (users, team, business, operations)
- Category mapping - Organize criteria into buckets (cost, quality, risk, timeline, maintainability)
- Validate completeness - Does this set of criteria capture what truly matters?
Common criteria categories:
- Financial: Cost, ROI, budget impact
- Technical: Scalability, performance, maintainability, learning curve
- Organizational: Team expertise, support availability, community
- Risk: Implementation risk, vendor risk, technical debt
- Timeline: Speed to implement, time-to-value, ramp-up time
- Strategic: Alignment with vision, flexibility for future, competitive advantage
See scripts/criteria-weighting-worksheet.md for a detailed criteria-collection process.
3. Assign Weights to Criteria
Not all criteria matter equally. Determine relative importance:
- Understand constraints - Are some criteria hard requirements vs. nice-to-have?
- Gather stakeholder input - Different people may weight criteria differently
- Validate weights - Use pairwise comparison or constraint-based weighting
- Document rationale - Why does Criterion A matter 3x more than Criterion B?
Weight assignment methods:
- Direct assignment - "Cost is 40%, Performance is 35%, Risk is 25%"
- Pairwise comparison - Compare each criterion against others systematically
- Constraint method - Treat hard limits as pass/fail gates, then weight what remains (e.g., "cost must stay under $100k; among options that pass, cost is weighted 40%")
- Stakeholder voting - Have decision-makers distribute 100 points across criteria
See scripts/criteria-weighting-worksheet.md for a weighted scoring helper.
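As one way to apply the stakeholder-voting method above, the minimal sketch below averages each voter's distributed points and normalizes them into weights. The roles and point values are hypothetical placeholders.

```python
# Hypothetical stakeholder votes: each person distributes 100 points across criteria.
votes = {
    "eng_lead":     {"Cost": 30, "Performance": 45, "Risk": 25},
    "product_lead": {"Cost": 50, "Performance": 25, "Risk": 25},
    "ops_lead":     {"Cost": 40, "Performance": 30, "Risk": 30},
}

# Average each criterion's points across voters, then normalize so weights sum to ~1.0.
criteria = {c for person in votes.values() for c in person}
averages = {c: sum(person.get(c, 0) for person in votes.values()) / len(votes) for c in criteria}
total = sum(averages.values())
weights = {c: round(avg / total, 2) for c, avg in averages.items()}

print(weights)  # e.g. {'Cost': 0.4, 'Performance': 0.33, 'Risk': 0.27} (order may vary)
```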
4. Score Options Against Criteria
Evaluate each option on each criterion using consistent rubric:
- Use a 1-10 scale with clear definitions (1=poor, 10=excellent)
- Be consistent - Apply the same scoring standards across all options
- Document scoring rationale - Why does Option A score 8 on Performance?
- Highlight trade-offs - Where does each option excel? Where does it struggle?
Scoring rubric example (adapt to your criteria):
- 9-10: Exceeds requirements, significant advantage
- 7-8: Meets requirements well, clear advantage
- 5-6: Meets requirements adequately, slight advantage or neutral
- 3-4: Partially meets requirements, clear disadvantage
- 1-2: Fails requirements or major disadvantage
5. Calculate Weighted Scores
Combine scores with weights to get final rankings:
Weighted Score = Sum(Score on Criterion × Weight of Criterion)
Example:
Option A:
Cost (weight 0.40): score 8 → 3.2
Performance (weight 0.35): score 6 → 2.1
Risk (weight 0.25): score 9 → 2.25
Total: 7.55
Option B:
Cost (weight 0.40): score 6 → 2.4
Performance (weight 0.35): score 9 → 3.15
Risk (weight 0.25): score 6 → 1.5
Total: 7.05
See scripts/decision-matrix-template.md for the complete template.
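The same arithmetic can be sketched in a few lines of Python; the weights and scores below simply mirror the worked example above.

```python
# Criterion weights (should sum to 1.0) and per-option scores on the 1-10 scale.
weights = {"Cost": 0.40, "Performance": 0.35, "Risk": 0.25}
scores = {
    "Option A": {"Cost": 8, "Performance": 6, "Risk": 9},
    "Option B": {"Cost": 6, "Performance": 9, "Risk": 6},
}

# Weighted score = sum of (score x weight) over all criteria.
totals = {
    option: sum(option_scores[c] * w for c, w in weights.items())
    for option, option_scores in scores.items()
}

for option, total in sorted(totals.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{option}: {total:.2f}")   # Option A: 7.55, Option B: 7.05
```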
6. Analyze Trade-offs and Sensitivities
Numbers alone don't tell the whole story. Examine nuances:
- Trade-offs - What are we giving up by choosing the top option?
- Sensitivity analysis - If we weight criteria differently, does the ranking change?
- Risk assessment - What could go wrong with the top choice? How would we handle it?
- Reversibility - Can we change course later, or is this decision final?
- Irreversible factors - Which trade-offs would be hardest to undo if the decision turns out to be wrong?
Questions to ask:
- Would reasonable people weight criteria differently and get a different answer?
- What would need to change for the second-ranked option to be better?
- What's the cost of being wrong about the top option vs. second place?
- Can we implement the top choice incrementally to reduce lock-in risk?
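One lightweight way to run the sensitivity analysis described above is to shift weight between two criteria and watch for the point where the ranking flips. The sketch below reuses the example scores from step 5 and is only illustrative.

```python
# Example scores from the weighted-score calculation in step 5.
scores = {
    "Option A": {"Cost": 8, "Performance": 6, "Risk": 9},
    "Option B": {"Cost": 6, "Performance": 9, "Risk": 6},
}

def rank(weights):
    """Return the top-ranked option and all weighted totals for a given weighting."""
    totals = {o: sum(s[c] * w for c, w in weights.items()) for o, s in scores.items()}
    return max(totals, key=totals.get), totals

# Shift weight from Cost to Performance in 5-point steps and watch for a ranking flip.
for shift in range(0, 30, 5):
    w = {"Cost": 0.40 - shift / 100, "Performance": 0.35 + shift / 100, "Risk": 0.25}
    winner, totals = rank(w)
    gap = totals["Option A"] - totals["Option B"]
    print(f"Cost={w['Cost']:.2f} Performance={w['Performance']:.2f} -> winner: {winner} (A-B gap: {gap:+.2f})")
```

In this example the ranking flips once roughly 10-15 points of weight move from Cost to Performance, which tells you how robust the original choice is to disagreement about weights.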
7. Make and Document the Decision
Choose deliberately and create a record:
- State the decision clearly - "We choose Option A: PostgreSQL over MongoDB"
- Explain the reasoning - Point to the analysis: "Weighted score 7.55 vs 7.05, driven mainly by lower cost and lower operational risk at acceptable performance"
- Identify key assumptions - "Assumes team will build query optimization expertise over 6 months"
- Note who decided - Individual or group decision-maker
- Capture when - Decision date
- Include dissent - If stakeholders disagreed on weights/scores, document why
Template:
DECISION: [What we decided]
RANKING:
1. [Top option] - Score: X.XX (rationale)
2. [Second option] - Score: Y.YY (rationale)
3. [Third option] - Score: Z.ZZ (rationale)
WEIGHTS USED:
- Criterion A: X%
- Criterion B: Y%
- Criterion C: Z%
KEY TRADE-OFFS:
- Choosing [option] means we accept [trade-off A]
- We mitigate [risk B] by [strategy]
REVERSIBILITY:
- [Reversible or Irreversible]: [rationale]
DECISION-MAKER: [Person/Group]
DATE: [ISO date]
Using ClearThought's decision_framework Operation
Basic Usage
When you have options, criteria, and weights, invoke ClearThought:
mcp__clearthought__decision_framework(
options: ["PostgreSQL", "MongoDB", "DynamoDB"],
criteria: {
"Performance": 0.35,
"Cost": 0.40,
"Operational Risk": 0.25
}
)
ClearThought will:
- Validate your options and criteria are well-formed
- Guide you through scoring each option
- Calculate weighted scores automatically
- Surface trade-offs and sensitivity insights
- Generate a structured decision record
Advanced Usage
For complex decisions, combine with other ClearThought operations:
Explore first (tree_of_thought):
mcp__clearthought__tree_of_thought(
question: "What database technologies should we evaluate?"
)
Use the exploration to identify hidden options before running decision_framework.
Validate afterward (structured_reasoning):
mcp__clearthought__structured_reasoning(
problem: "Are we making the right database choice?"
)
Use structured reasoning to validate the decision quality.
Document reasoning (sequential_thinking):
mcp__clearthought__sequential_thinking(
question: "Why would reasonable people disagree on the database choice?"
)
Understand decision sensitivity to different perspectives.
Decision Patterns and Templates
Technology Selection Pattern
Common decision: "Which framework/language/database should we use?"
Typical criteria:
- Performance: Throughput, latency, scalability (weight: 20-35%)
- Developer experience: Learning curve, documentation, community (weight: 15-25%)
- Operational maturity: Monitoring, debugging, production support (weight: 15-25%)
- Cost: Licensing, infrastructure, team training (weight: 15-30%)
- Strategic fit: Alignment with team expertise, hiring availability (weight: 10-20%)
Scoring considerations:
- Score on current team knowledge (don't assume you'll hire experts)
- Consider not just initial implementation but 3-year maintenance cost
- Factor in opportunity cost of team context-switching
Architecture Decision Pattern
Common decision: "Should we use microservices, monolith, or hybrid?"
Typical criteria:
- Scalability: Independent scaling of components (weight: 15-25%)
- Team structure: Alignment with org structure (weight: 15-25%)
- Operational complexity: Deployment, monitoring, debugging (weight: 20-30%)
- Development velocity: Time to feature delivery (weight: 15-25%)
- Cost: Infrastructure, tooling, team size (weight: 15-25%)
Scoring considerations:
- A monolith scores high on velocity early, but that advantage erodes as team and codebase grow
- Microservices carry a high initial setup cost but scale better operationally over the long term
- Consider team structure: Conway's law strongly influences this decision
Build vs. Buy Pattern
Common decision: "Should we build this feature or buy a third-party solution?"
Typical criteria:
- Time to market: When do we need this? (weight: 20-30%)
- Cost: Build vs. buy total cost of ownership (weight: 25-35%)
- Customization needs: Can we adapt a third-party solution or need bespoke? (weight: 15-25%)
- Lock-in risk: Vendor risk vs. maintenance burden (weight: 10-20%)
- Strategic importance: Is this core to our differentiation? (weight: 10-20%)
Scoring considerations:
- Build costs include ongoing maintenance (often underestimated)
- Buy costs include integration, customization, and vendor risk
- Consider 3-5 year TCO, not just year-1 cost
- Factor in "not invented here" bias when scoring the build option
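To make the multi-year comparison concrete, a back-of-the-envelope sketch like the one below can help keep TCO honest. Every dollar figure and the 5-year horizon are hypothetical placeholders, not recommendations.

```python
# Hypothetical figures; replace with real estimates for your decision.
YEARS = 5

# Build: upfront engineering plus ongoing maintenance (often underestimated).
build_upfront = 250_000               # initial development
build_maintenance_per_year = 60_000   # bug fixes, upgrades, on-call

# Buy: integration work plus recurring licensing.
buy_integration = 40_000
buy_license_per_year = 75_000

build_tco = build_upfront + build_maintenance_per_year * YEARS
buy_tco = buy_integration + buy_license_per_year * YEARS

print(f"Build TCO over {YEARS} years: ${build_tco:,}")   # $550,000
print(f"Buy TCO over {YEARS} years:  ${buy_tco:,}")      # $415,000
```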
Hiring/Team Composition Pattern
Common decision: "Should we hire a specialist or generalist for this role?"
Typical criteria:
- Project needs: Does the project require deep expertise? (weight: 20-30%)
- Team dynamics: Can a specialist integrate into the team? (weight: 15-25%)
- Growth potential: Can this person develop into other areas? (weight: 15-25%)
- Cost: Specialist premium vs. generalist salary (weight: 15-25%)
- Scalability: Can this person manage/mentor others? (weight: 10-20%)
Scoring considerations:
- A specialist excels on the specific project but may struggle when the work shifts to other areas
- A generalist brings flexibility but may lack the depth the project demands
- Consider team composition over time, not just this hire
- Factor in mentoring capability for long-term team growth
Common Pitfalls and How to Avoid Them
Pitfall 1: Undefined Criteria
Problem: Scoring against fuzzy criteria leads to inconsistent decisions.
Prevention:
- Write explicit definitions for each criterion
- Use concrete examples for what "high" vs. "low" means
- Score options independently, then compare
Example (Bad): "Performance"
Example (Good): "Throughput under peak load: ability to handle 10,000 req/sec with p99 latency <100ms"
Pitfall 2: Hidden Weights
Problem: Claiming criteria are weighted equally (or as stated) while, in practice, letting a few dominate the outcome.
Prevention:
- Explicitly assign weights upfront
- Have decision-makers debate weights before scoring
- Do sensitivity analysis: "If we weight X at 50% instead of 30%, does ranking change?"
Pitfall 3: Confirmation Bias in Scoring
Problem: Scoring options to justify a predetermined preference.
Prevention:
- Score all options against all criteria (don't skip weak criteria)
- Use consistent rubrics for all options
- Have multiple people score independently, then discuss differences
- Document surprising scores (why did we score Option A low on Criterion B?)
Pitfall 4: Analysis Paralysis
Problem: Perfect decision analysis requires infinite information; waiting for perfection delays action.
Prevention:
- Use "good enough" criteria when perfect data unavailable
- Set a decision deadline upfront
- Accept that some decisions are reversible (decide fast, adjust later)
- Use sensitivity analysis to identify which unknowns matter most
Pitfall 5: Ignoring Implementation Difficulty
Problem: Scoring on merit alone ignores the effort and risk of actually executing the choice.
Prevention:
- Include "implementation risk" or "effort required" as explicit criteria
- Factor in team readiness and learning curve
- Consider dependencies on other projects or external factors
- Document assumptions about resource availability
Integration with Other Operations
Decision + Tree of Thought (for exploration)
Use when you're not sure what options exist:
- Run tree_of_thought to explore the decision space
- Identify viable options from the exploration
- Run decision_framework to evaluate the most promising options
Decision + Sequential Thinking (for nuance)
Use when the decision feels complicated or ambiguous:
- Run decision_framework to get the initial ranking
- Run sequential_thinking to reason through edge cases
- Revisit weights if sequential thinking reveals overlooked factors
Decision + Structured Reasoning (for validation)
Use when you want to stress-test the decision:
- Run decision_framework to reach a conclusion
- Run structured_reasoning on potential failure modes
- Update scoring if structured reasoning reveals risks
Visualization with Mindpilot
Once you've scored options, visualize the decision matrix using Mindpilot:
Create a heatmap-style diagram showing:
- Y-axis: Options (PostgreSQL, MongoDB, DynamoDB)
- X-axis: Criteria (Performance, Cost, Risk)
- Cell color/intensity: Score on that criterion
- Final column: Weighted total score
Create a trade-off diagram showing:
- X-axis: Performance score
- Y-axis: Cost score
- Bubble size: Operational risk
- Each bubble = one option
This visualization makes trade-offs obvious to stakeholders.
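If Mindpilot isn't available, a rough stand-in for the heatmap view can be produced with matplotlib. The scores below are hypothetical, and this is not the Mindpilot API, just an illustrative substitute.

```python
import matplotlib.pyplot as plt
import numpy as np

options = ["PostgreSQL", "MongoDB", "DynamoDB"]
criteria = ["Performance", "Cost", "Risk"]

# Hypothetical 1-10 scores: rows = options, columns = criteria.
scores = np.array([
    [7, 8, 9],   # PostgreSQL
    [9, 6, 6],   # MongoDB
    [8, 5, 7],   # DynamoDB
])

fig, ax = plt.subplots()
im = ax.imshow(scores, cmap="YlGn", vmin=1, vmax=10)  # darker cell = higher score
ax.set_xticks(range(len(criteria)))
ax.set_xticklabels(criteria)
ax.set_yticks(range(len(options)))
ax.set_yticklabels(options)
for i in range(len(options)):
    for j in range(len(criteria)):
        ax.text(j, i, scores[i, j], ha="center", va="center")
fig.colorbar(im, ax=ax, label="Score (1-10)")
ax.set_title("Decision matrix heatmap")
plt.show()
```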
Decision Quality Metrics
After the decision is made and implemented, assess decision quality:
- Did the chosen option deliver as expected? (Were scoring assumptions correct?)
- Would we make the same decision with current knowledge? (Did we miss important information?)
- What surprised us? (Which criteria or trade-offs mattered more than expected?)
- What would we change about the decision process? (Improve for next time)
Document lessons learned and use them to refine criteria and weights for future decisions.
Next Steps
- Explore options first: See references/framework-comparison.md to choose the right framework for your decision type
- Set up your matrix: Use scripts/decision-matrix-template.md to structure your specific decision
- Determine weights: Work through scripts/criteria-weighting-worksheet.md with stakeholders
- Score systematically: Follow the core workflow above
- Analyze deeply: Review the trade-offs and sensitivity insights
- Document for posterity: Create a decision record for future reference