| name | experiment-analyzer |
| description | Analyze completed growth experiment results, validate hypotheses, generate insights, and suggest follow-up experiments. Use when experiments are completed, when the user asks about results or learnings, or when discussing what to do next based on experiment outcomes. |
| allowed-tools | ["Read","Write","Grep","Glob"] |
Analyze completed growth experiments, extract insights, and drive continuous learning.
This skill should activate when:
- An experiment has been completed
- The user asks about results or learnings
- The user wants to decide what to do next based on experiment outcomes

Each completed experiment is classified into one of four outcomes:
- **Win** (positive change, statistically significant)
- **Loss** (negative change, statistically significant)
- **Inconclusive** (not statistically significant)
- **Neutral** (change too small to be meaningful)
The core analysis covers:
- **Hypothesis validation:** compare the original hypothesis to the results (hypothesis components, validation questions)
- **ICE score retrospective:** compare predicted vs actual scores (impact, confidence, and ease validation)
- **Key questions** and learnings
- **Secondary metrics** and side effects
Based on the outcome, suggest 2-3 follow-up experiments, with different strategies for wins, losses, and inconclusive results.
1. Read experiment JSON from completed/archived folder
2. Verify results data exists (see the sketch after this list):
- Primary metric
- Baseline value
- Result value
- Statistical significance
- Sample size
- Duration
3. Check if hypothesis is documented
4. Review ICE scores
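A minimal sketch of steps 1-2, assuming a `results` object with the field names below (these are assumptions; match them to your actual experiment JSON schema):

```python
import json
from pathlib import Path

# Assumed field names -- adjust to match your experiment JSON schema.
REQUIRED_RESULT_FIELDS = [
    "primary_metric", "baseline_value", "result_value",
    "statistical_significance", "sample_size", "duration_days",
]

def load_experiment(path: str) -> dict:
    """Read an archived experiment file and flag missing result data."""
    experiment = json.loads(Path(path).read_text())
    results = experiment.get("results", {})
    missing = [field for field in REQUIRED_RESULT_FIELDS if field not in results]
    if missing:
        raise ValueError(f"Cannot analyze {path}: missing result fields {missing}")
    if not experiment.get("hypothesis"):
        print("Warning: no documented hypothesis; validation will be limited.")
    return experiment
```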
Change Percentage = ((Result - Baseline) / Baseline) × 100

Result Classification:
- IF change% > 2% AND significance >= 95% → Win
- IF change% < -2% AND significance >= 95% → Loss
- IF significance < 95% → Inconclusive
- IF abs(change%) < 2% → Neutral
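A minimal sketch of this classification, assuming significance is expressed as a percentage and is checked before effect size when both conditions apply:

```python
def classify_result(baseline: float, result: float, significance: float) -> tuple[str, float]:
    """Apply the classification rules above; returns (outcome, change %)."""
    change_pct = (result - baseline) / baseline * 100

    if significance < 95:
        outcome = "Inconclusive"   # not enough evidence either way
    elif abs(change_pct) < 2:
        outcome = "Neutral"        # significant but too small to matter
    elif change_pct > 0:
        outcome = "Win"
    else:
        outcome = "Loss"
    return outcome, change_pct


# Example: baseline 4.0% signup rate, result 4.5%, 97% significance -> ("Win", 12.5)
print(classify_result(4.0, 4.5, 97.0))
```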
1. Classify result (Win/Loss/Inconclusive/Neutral)
2. Validate hypothesis against results
3. Review ICE score predictions
4. Extract key learnings
5. Identify surprising findings
6. Check secondary metrics
7. Look for patterns across related experiments
1. Based on result type, brainstorm 2-3 follow-ups
2. For each follow-up:
- Draft hypothesis
- Explain rationale (reference current learnings)
- Suggest category
- Provide preliminary ICE estimate
3. Prioritize follow-ups by potential impact
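A rough sketch of steps 2-3, assuming multiplicative ICE scoring on 1-10 scales (consistent with the >500 buckets used later in this document); the follow-up titles and scores are purely illustrative:

```python
def ice_total(impact: int, confidence: int, ease: int) -> int:
    """Preliminary ICE total: product of 1-10 component scores (max 1000)."""
    return impact * confidence * ease

# Hypothetical follow-ups with (impact, confidence, ease) estimates.
follow_ups = [
    {"title": "Iterate on the winning variant's copy", "ice": (6, 5, 8)},
    {"title": "Roll the winning variant out to all traffic", "ice": (8, 7, 9)},
    {"title": "Re-run with a larger sample", "ice": (5, 6, 6)},
]

# Prioritize by potential impact, breaking ties with the ICE total.
follow_ups.sort(key=lambda f: (f["ice"][0], ice_total(*f["ice"])), reverse=True)
for f in follow_ups:
    print(f"{ice_total(*f['ice']):>4}  {f['title']}")
```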
1. Create markdown analysis report
2. Include:
- Summary (result classification, key numbers)
- Hypothesis validation
- ICE score retrospective
- Key insights (bulleted list)
- Secondary metrics review
- Recommendations
- Follow-up experiment ideas
3. Save to experiments/archive/[id]_analysis.md
4. Update experiment JSON with learnings
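A minimal sketch of steps 3-4, assuming an experiments/archive/[id].json layout and a "learnings" key in the experiment record (both are assumptions; adjust to your schema):

```python
import json
from pathlib import Path

def save_analysis(experiment_path: str, report_md: str, learnings: list[str]) -> None:
    """Write the analysis report beside the archived experiment and record learnings."""
    exp_file = Path(experiment_path)
    experiment = json.loads(exp_file.read_text())

    # Save the report as experiments/archive/<id>_analysis.md
    report_path = exp_file.with_name(f"{experiment['id']}_analysis.md")
    report_path.write_text(report_md)

    # Append extracted learnings to the experiment record.
    experiment.setdefault("learnings", []).extend(learnings)
    exp_file.write_text(json.dumps(experiment, indent=2))
```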
# Experiment Analysis: [Title]
**Date:** [Analysis date]
**Experiment ID:** [id]
**Status:** [Win/Loss/Inconclusive/Neutral] ✅/❌/?/➖
## Summary
- **Primary Metric:** [metric name]
- **Baseline:** [baseline value]
- **Result:** [result value]
- **Change:** [+/-X%]
- **Statistical Significance:** [XX%]
- **Sample Size:** [count]
- **Duration:** [days]
## Hypothesis Validation
### Original Hypothesis
[Full hypothesis statement]
### Validation
- **Expected Outcome:** [what we expected]
- **Actual Outcome:** [what happened]
- **Hypothesis Validated:** [Yes/No/Partially]
**Analysis:**
[Explanation of whether and why hypothesis was validated]
## ICE Score Retrospective
| Component | Predicted | Actual/Assessment | Accuracy |
|-----------|-----------|------------------|----------|
| Impact | [score] | [calculate from results] | [good/overestimated/underestimated] |
| Confidence | [score] | [based on outcome] | [justified/overconfident/underconfident] |
| Ease | [score] | [based on actual effort] | [accurate/harder/easier] |
**Learnings for Future Scoring:**
- [What we learned about predicting impact]
- [What we learned about confidence]
- [What we learned about ease]
## Key Insights
1. **[Primary insight]** - [Explanation with data]
2. **[Secondary insight]** - [Explanation]
3. **[Surprising finding]** - [What we didn't expect]
## Secondary Metrics
| Metric | Change | Interpretation |
|--------|--------|----------------|
| [metric 1] | [+/-X%] | [Good/Bad/Neutral] |
| [metric 2] | [+/-X%] | [Good/Bad/Neutral] |
**Side Effects:**
- Positive: [Any unexpected positive impacts]
- Negative: [Any unexpected negative impacts]
## Recommendations
### Immediate Actions
- [ ] [Action item 1]
- [ ] [Action item 2]
### Strategic Implications
[Broader implications for product/growth strategy]
## Follow-up Experiment Ideas
### 1. [Experiment Title]
**Category:** [category]
**Hypothesis:**
[Full hypothesis following template]
**Rationale:**
[Why this follow-up based on current learnings]
**Preliminary ICE:**
- Impact: [score] - [reasoning]
- Confidence: [score] - [reasoning]
- Ease: [score] - [reasoning]
- **Total: [score]**
---
### 2. [Experiment Title]
[Repeat format]
---
### 3. [Experiment Title]
[Repeat format]
## Related Experiments
[List any related experiments and their outcomes for pattern recognition]
## Notes
[Any additional context, edge cases, or considerations]
When the user asks to analyze multiple experiments, produce a portfolio-level report (an aggregation sketch follows the template):
# Experiment Portfolio Analysis
## Overview
- Total Experiments: [count]
- Completed: [count]
- Win Rate: [X%]
- Average Change: [+X%]
## By Category
| Category | Experiments | Win Rate | Avg Impact |
|----------|-------------|----------|------------|
| Acquisition | [count] | [X%] | [+X%] |
| Activation | [count] | [X%] | [+X%] |
| Retention | [count] | [X%] | [+X%] |
| Revenue | [count] | [X%] | [+X%] |
| Referral | [count] | [X%] | [+X%] |
## ICE Score Performance
- Experiments with ICE > 500: [X% win rate]
- Experiments with ICE 300-500: [X% win rate]
- Experiments with ICE < 300: [X% win rate]
**Learning:** [Are high ICE scores actually better predictors?]
## Top Performers
1. [Experiment] - [+X%] change
2. [Experiment] - [+X%] change
3. [Experiment] - [+X%] change
## Key Patterns
- [Pattern 1 discovered across experiments]
- [Pattern 2]
- [Pattern 3]
## Recommendations
[Strategic recommendations based on portfolio analysis]
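A rough sketch of the per-category aggregation behind this report, assuming each experiment record carries "category", "outcome", and "change_pct" fields (these names are assumptions):

```python
from collections import defaultdict

def portfolio_summary(experiments: list[dict]) -> dict:
    """Aggregate count, win rate, and average change for each category."""
    by_category = defaultdict(list)
    for exp in experiments:
        by_category[exp["category"]].append(exp)

    summary = {}
    for category, exps in by_category.items():
        wins = sum(1 for e in exps if e["outcome"] == "Win")
        avg_change = sum(e["change_pct"] for e in exps) / len(exps)
        summary[category] = {
            "experiments": len(exps),
            "win_rate_pct": round(100 * wins / len(exps), 1),
            "avg_change_pct": round(avg_change, 1),
        }
    return summary
```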
/experiment-update sets status to "completed".

After each analysis: