| name | edpa-engine |
| user-invocable | false |
| description | Run EDPA evidence-driven calculation for an iteration. Gathers GitHub delivery evidence (commits, PRs, reviews, comments), computes CW from heuristics, calculates Score and DerivedHours, validates invariants. Use when closing an iteration, computing derived hours, or running "EDPA výpočet". Produces per-person allocation data for the reports skill.
| license | MIT |
| compatibility | GitHub CLI (gh), Python 3.10+, .edpa/config/people.yaml, .edpa/config/heuristics.yaml |
| allowed-tools | Read Bash(gh *) Bash(git *) Bash(python3 *) Grep |
| metadata | {"author":"Jaroslav Urbánek","version":"1.0.0","domain":"governance","phase":"engine","standard":"AgentSkills v1.0"} |
EDPA Engine — Evidence-Driven Calculation
What this does
Computes derived hours for all team members for a given iteration using the EDPA formula.
Arguments
$ARGUMENTS = iteration ID (e.g., "PI-2026-1.3") or "latest" for most recent closed iteration.
Argument resolution (when $ARGUMENTS is empty)
If $ARGUMENTS is empty, blank, or "help":
- Call the MCP tool edpa_iterations (or read .edpa/iterations/*.yaml directly). PI/iteration timeline data is reconstructed at runtime from those per-PI and per-iteration YAML files — edpa.yaml no longer carries pis[].
- Present available iterations with status and dates:
Available iterations:
PI-2026-1.1 [closed] 2026-04-06–2026-04-17
PI-2026-1.2 [closed] 2026-04-20–2026-05-01
PI-2026-1.3 [closed] 2026-05-04–2026-05-15
PI-2026-1.4 [active] 2026-05-18–2026-05-29 <-- suggested
PI-2026-1.5 [planned] 2026-06-01–2026-06-12 (IP)
- Default suggestion: the iteration with status: active. If none is active, suggest the latest closed.
- Ask user: "Which iteration to compute? [suggested-id]"
- If the user confirms or provides an ID, proceed. If .edpa/config/edpa.yaml does not exist, inform the user to run /edpa setup first.
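The resolution logic above can be sketched over iteration records already loaded from .edpa/iterations/*.yaml. The field names (id, status, end) are assumptions about that YAML layout, not a fixed schema:

```python
def resolve_iteration(arg, iterations):
    """Resolve which iteration to compute.

    `iterations` is a list of dicts loaded from .edpa/iterations/*.yaml.
    An explicit ID wins; "latest" means the most recent closed iteration;
    an empty argument suggests the active iteration, else the latest closed.
    """
    if arg and arg not in ("", "help", "latest"):
        return arg  # explicit iteration ID
    closed = [it for it in iterations if it.get("status") == "closed"]
    if arg == "latest":
        return max(closed, key=lambda it: it.get("end", ""))["id"] if closed else None
    active = [it for it in iterations if it.get("status") == "active"]
    if active:
        return active[0]["id"]
    return max(closed, key=lambda it: it.get("end", ""))["id"] if closed else None
```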
Prerequisites
- .edpa/config/people.yaml exists (run edpa-setup first)
- .edpa/config/heuristics.yaml exists
- GitHub issues have Job Size field populated
- Iteration has closed stories (status: Done)
Calculation steps
1. Load configuration
import yaml  # PyYAML

# Load the people roster (capacities) and the evidence-signal heuristics.
with open('.edpa/config/people.yaml') as f:
    config = yaml.safe_load(f)
with open('.edpa/config/heuristics.yaml') as f:
    heuristics = yaml.safe_load(f)
2. Gather delivery evidence
For each person P, identify relevant items:
- Stories: Done in iteration $ARGUMENTS, where P has evidence
- Features: Not Done and not Funnel in current PI, where P has evidence
- Epics: Not Done and not Funnel, where P has evidence
Evidence sources (hybrid — MCP preferred, gh CLI fallback):
gh issue list --label "iteration:$ARGUMENTS" --state closed --json number,title,assignees,labels,body
gh pr list --state merged --json number,title,author,reviews,commits,body
git log --since="iteration_start" --until="iteration_end" --format="%H %an %s"
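The hybrid fetch can be sketched as a thin wrapper that prefers gh and degrades to an empty result, letting the documented git log fallback take over. The helper name gh_json is hypothetical:

```python
import json
import subprocess

def gh_json(args):
    """Run a gh CLI command that emits --json output and parse it.

    Returns [] when gh is missing or the call fails, so the caller can
    fall back to git log evidence instead of aborting.
    """
    try:
        out = subprocess.run(
            ["gh", *args], capture_output=True, text=True, check=True
        ).stdout
        return json.loads(out)
    except (FileNotFoundError, subprocess.CalledProcessError, json.JSONDecodeError):
        return []
```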
3. Score evidence per person per item
For each (person, item) pair:
evidence_score = Σ signal_weights for detected signals
Apply threshold: evidence_score >= heuristics.evidence_threshold → relevant.
Derive CW from highest signal:
assignee → owner (1.0)
contribute_command or pr_author → key (0.6)
commit_author or pr_reviewer → reviewer (0.25)
issue_comment → consulted (0.15)
Allow manual override: check issue body for /contribute @person weight:X.
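The CW derivation and the override can be sketched as follows; the signal names mirror the list above, and the regex for the /contribute command is an assumption about its exact syntax:

```python
import re

# Signal → CW mapping from the list above; the strongest detected signal wins.
CW_BY_SIGNAL = {
    "assignee": 1.0,            # owner
    "contribute_command": 0.6,  # key contributor
    "pr_author": 0.6,
    "commit_author": 0.25,      # reviewer-level credit
    "pr_reviewer": 0.25,
    "issue_comment": 0.15,      # consulted
}

def parse_override(body, person):
    """Manual CW override: '/contribute @person weight:X' in the issue body."""
    m = re.search(rf"/contribute\s+@{re.escape(person)}\s+weight:([0-9.]+)", body or "")
    return float(m.group(1)) if m else None

def derive_cw(signals, override=None):
    """CW from the highest detected signal; a manual override takes precedence."""
    if override is not None:
        return override
    return max((CW_BY_SIGNAL[s] for s in signals if s in CW_BY_SIGNAL), default=0.0)
```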
4. Calculate EDPA scores (v1.14: single path)
The engine credits two kinds of work events:
Score[P, story_done] = JobSize[story] × CW[P, story]
Score[P, gate_event] = JobSize[parent] × gate_weight × CW[P, parent]
- Story Done credit for every Story at status: Done.
- Parent gate transitions for every Feature/Epic/Initiative status transition captured in git history (via sync pull --commit auto-commits within the iteration date window).
When git history records no transitions, only Story Done credit fires — the calculation degenerates gracefully to the pre-v1.14 "simple" behaviour without any mode selector.
Pre-v1.14 mode selector removed: v1.13 and earlier had --mode {simple, full, gates}. v1.14 dropped it: gates was a strict superset of the other modes, so the single-path engine is mathematically equivalent and operationally simpler.
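Per-person scoring over both event kinds can be sketched as below. The record field names (js, parent_js, gate_weight) are illustrative assumptions, not a fixed schema:

```python
def score_events(stories_done, gate_events, cw):
    """Score[P, item] over both credited event kinds (v1.14 single path).

    stories_done: [{"id", "js"}] — Stories at status: Done
    gate_events:  [{"parent_id", "parent_js", "gate_weight"}] — parent
                  status transitions recovered from git history
    cw:           {item_id: CW[P, item]} for this person

    With no gate events the result is exactly the Story Done sum,
    i.e. the pre-v1.14 "simple" behaviour.
    """
    scores = {}
    for s in stories_done:
        if s["id"] in cw:
            scores[s["id"]] = s["js"] * cw[s["id"]]
    for g in gate_events:
        if g["parent_id"] in cw:
            scores[g["parent_id"]] = scores.get(g["parent_id"], 0.0) + (
                g["parent_js"] * g["gate_weight"] * cw[g["parent_id"]]
            )
    return scores
```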
5. Derive hours
SumScores[P] = Σ Score[P, *]
DerivedHours[P, item] = (Score[P, item] / SumScores[P]) × Capacity[P]
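A minimal sketch of the derivation, handling the zero-score person as described under Error handling (derive 0h rather than divide by zero):

```python
def derive_hours(scores, capacity):
    """DerivedHours[P, item] = (Score[P, item] / SumScores[P]) × Capacity[P]."""
    total = sum(scores.values())
    if total == 0:
        # Person with no relevant items: process issue, not a math issue.
        return {item: 0.0 for item in scores}
    return {item: (s / total) * capacity for item, s in scores.items()}
```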
6. Validate invariants
Run ALL checks — halt on failure:
| Check | Formula | Action on fail |
|---|---|---|
| Σ = capacity | Σ DerivedHours[P, *] = Capacity[P] ± 0.01 | HALT, report |
| Σ ratio = 1 | Σ (Score/ΣScores) = 1.0 ± 0.001 | HALT, report |
| No negative | DerivedHours ≥ 0 for all | HALT, report |
| Eligibility | Non-Done items excluded from Story hours | WARN if included |
| JS exists | All items have Job Size > 0 | WARN, skip item |
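The HALT-level checks in the table can be sketched as follows; the WARN-level eligibility and Job Size checks are omitted here for brevity:

```python
def check_invariants(scores, hours, capacity):
    """Hard invariants: Σ hours = capacity ± 0.01, Σ ratios = 1 ± 0.001,
    no negative hours. Returns (ok, failures); any failure halts the run."""
    failures = []
    total_h = sum(hours.values())
    if hours and abs(total_h - capacity) > 0.01:
        failures.append(f"sum(hours)={total_h:.2f} != capacity={capacity}")
    total_s = sum(scores.values())
    if total_s > 0:
        ratio_sum = sum(s / total_s for s in scores.values())
        if abs(ratio_sum - 1.0) > 0.001:
            failures.append(f"ratio sum {ratio_sum:.4f} != 1.0")
    if any(h < -1e-9 for h in hours.values()):
        failures.append("negative derived hours")
    return not failures, failures
```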
7. Output
Write results to .edpa/reports/iteration-{ID}/edpa_results.json:
{
"iteration": "$ARGUMENTS",
"computed_at": "ISO-8601",
"people": [
{
"id": "person_id",
"capacity": 40,
"total_derived": 40.0,
"items": [
{"id": "S-200", "level": "Story", "js": 8, "cw": 1.0, "rs": 1.0, "score": 8.0, "ratio": 0.28, "hours": 11.2}
],
"invariant_ok": true
}
],
"team_total": 380.0,
"all_invariants_passed": true
}
Print summary table to stdout: person, capacity, derived total, items count, ok/fail.
Error handling
- No items in iteration → "No closed items found for {iteration}. Check iteration label."
- Missing Job Size → warn per item, exclude from calculation
- Person with 0 relevant items → warn, derive 0h (process issue, not math issue)
- GitHub API unavailable → use git log as fallback evidence source
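The git log fallback can be sketched as a small parser. The spec above shows a space-separated --format="%H %an %s"; this sketch assumes a tab separator (%x09) instead, since author names may contain spaces:

```python
def parse_git_log(output):
    """Parse tab-separated git log output into commit evidence records.

    Pairs with (separator is an assumption, see lead-in):
      git log --since=<start> --until=<end> --format="%H%x09%an%x09%s"
    """
    return [
        dict(zip(("sha", "author", "subject"), line.split("\t", 2)))
        for line in output.splitlines()
        if line.strip()
    ]
```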