scan-e2e-tests
// [Documentation] Use when scanning E2E test architecture, page objects, step definitions, configuration, and framework patterns.
| name | scan-e2e-tests |
| description | [Documentation] Use when scanning E2E test architecture, page objects, step definitions, configuration, and framework patterns. |
Codex compatibility note:
- Invoke repository skills with `$skill-name` in Codex; this mirrored copy rewrites legacy Claude `/skill-name` references.
- Prefer the `plan-hard` skill for planning guidance in this Codex mirror.
- Task tracker mandate: BEFORE executing any workflow or skill step, create/update task tracking for all steps and keep it synchronized as progress changes.
- User-question prompts mean to ask the user directly in Codex.
- Ignore Claude-specific mode-switch instructions when they appear.
- Strict execution contract: when a user explicitly invokes a skill, execute that skill protocol as written.
- Subagent authorization: when a skill is user-invoked or AI-detected and its protocol requires subagents, that skill activation authorizes use of the required `spawn_agent` subagent(s) for that task.
- Do not skip, reorder, or merge protocol steps unless the user explicitly approves the deviation first.
- For workflow skills, execute each listed child-skill step explicitly and report step-by-step evidence.
- If a required step/tool cannot run in this environment, stop and ask the user before adapting.
Codex does not receive Claude hook-based doc injection. When coding, planning, debugging, testing, or reviewing, open project docs explicitly using this routing.
Always read:
- docs/project-config.json (project-specific paths, commands, modules, and workflow/test settings)
- docs/project-reference/docs-index-reference.md (routes to the full docs/project-reference/* catalog)
- docs/project-reference/lessons.md (always-on guardrails and anti-patterns)

Situation-based docs:
- backend-patterns-reference.md, domain-entities-reference.md, project-structure-reference.md
- frontend-patterns-reference.md, scss-styling-guide.md, design-system/README.md
- feature-docs-reference.md
- integration-test-reference.md
- e2e-test-reference.md
- code-review-rules.md plus domain docs above based on changed files

Do not read all docs blindly. Start from docs-index-reference.md, then open only relevant files for the task.
Goal: Scan E2E test codebase → populate docs/project-reference/e2e-test-reference.md with architecture, base classes, page objects, step definitions, configuration, and best practices. Read referenced docs directly when relevant; do not rely on hook-injected conversation text.
Workflow:
Phase 0 detect framework → Phase 1 plan tasks → Phase 2 scan via sub-agents → verification rounds → update reference doc, with file:line evidence throughout.

Key Rules:
- Cite file:line for every claim and pattern example.

Phase 0: Before any other step, run in parallel:
- Read docs/project-reference/e2e-test-reference.md
- Detect E2E framework and artifact type:
| Signal | Framework | Artifact Type | Agent Routing |
|---|---|---|---|
| `*.feature` files + `[Binding]`/`[Given]` in C# | SpecFlow (BDD) | BDD + Page Objects | Run Agent 1+2+3 (BDD) |
| `playwright.config.*` | Playwright | Non-BDD | Run Agent 1+2 (skip Agent 3) |
| `cypress.config.*` | Cypress | Non-BDD | Run Agent 1+2 (skip Agent 3) |
| `*.feature` files + Python | Behave (BDD) | BDD | Run Agent 1+2+3 (BDD) |
| `*.feature` files + Java | Cucumber (BDD) | BDD | Run Agent 1+2+3 (BDD) |
| `wdio.conf.*` | WebdriverIO | Non-BDD | Run Agent 1+2 (skip Agent 3) |
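For illustration only: a minimal Python sketch of how these signals might be checked mechanically. The function name, the proxy checks (e.g., `*.csproj` standing in for the C# `[Binding]` signal), and the returned labels are assumptions, not part of the skill protocol.

```python
from pathlib import Path

def detect_e2e_framework(root: str = ".") -> dict:
    """Illustrative mapping of the detection table to an agent-routing decision."""
    rp = Path(root)

    def found(pattern: str) -> bool:
        return any(rp.rglob(pattern))

    if found("playwright.config.*"):
        return {"framework": "Playwright", "bdd": False, "agents": [1, 2]}
    if found("cypress.config.*"):
        return {"framework": "Cypress", "bdd": False, "agents": [1, 2]}
    if found("wdio.conf.*"):
        return {"framework": "WebdriverIO", "bdd": False, "agents": [1, 2]}
    if found("*.feature"):
        # Pair feature files with the implementation language to pick the BDD flavor.
        if found("*.csproj"):  # proxy for [Binding]/[Given] in C#
            return {"framework": "SpecFlow", "bdd": True, "agents": [1, 2, 3]}
        if found("*.py"):
            return {"framework": "Behave", "bdd": True, "agents": [1, 2, 3]}
        if found("pom.xml") or found("build.gradle*"):
            return {"framework": "Cucumber", "bdd": True, "agents": [1, 2, 3]}
    # No confident match (<60%): report uncertainty and ask the user (evidence gate below).
    return {"framework": "unknown", "bdd": None, "agents": []}
```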
| Mode | Condition | Action |
|---|---|---|
| Init | Target doc doesn't exist or is placeholder | Full scan, create all sections |
| Sync | Target doc exists with content | Diff scan — check for new frameworks, count changes |
Check the `e2eTesting` section of docs/project-config.json if it exists — use as hints for paths.

Evidence gate: confidence <60% on framework → report uncertainty, ask user before proceeding.
Phase 1: Create task tracking entries for each sub-agent. Do not start Phase 2 without tasks created.
Phase 2: Launch sub-agents matching the detected framework. Each MUST:
- Write findings incrementally to the report after each file (never batch at the end)
- Cite file:line for every pattern example

All findings → plans/reports/scan-e2e-tests-{YYMMDD}-{HHMM}-report.md
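A toy sketch of the incremental-write rule (never batch findings at the end). The report path follows the template above; the example finding values are hypothetical.

```python
from datetime import datetime
from pathlib import Path

def append_finding(report: Path, source: str, pattern: str, evidence: str) -> None:
    """Append one finding right after scanning a file, so a crash mid-run loses nothing."""
    with report.open("a", encoding="utf-8") as fh:
        fh.write(f"\n### {pattern}\n- Source: {source}\n- Evidence: {evidence}\n")

report = Path("plans/reports") / f"scan-e2e-tests-{datetime.now():%y%m%d-%H%M}-report.md"
report.parent.mkdir(parents=True, exist_ok=True)
append_finding(report, "Tests/Pages/LoginPage.cs:12", "Page object base class", "inherits BasePage")
```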
Agent 1 (test architecture and base classes). Think: What makes this test infrastructure reusable vs brittle? How is the test project structured? What base classes exist and what do they provide? What lifecycle hooks are available?
Security/performance flag: If test credentials are found hardcoded in source files, flag as CRITICAL security issue in report.
Agent 2 (page objects, waits, and assertions). Think: How do page objects encapsulate UI interaction? What patterns make them maintainable? What wait/retry strategies prevent flakiness?
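For context on what Agent 2 looks for, a minimal page object sketch using Playwright's Python API. The class, selectors, URL, and the `Dashboard` heading are hypothetical; the project's real stack is whatever Phase 0 detected. The point is encapsulated selectors plus outcome-based waits instead of sleeps.

```python
from playwright.sync_api import Page, expect

class LoginPage:
    """Hypothetical page object: selectors live here; tests call intent-level methods only."""

    def __init__(self, page: Page) -> None:
        self.page = page
        self.username = page.get_by_label("Username")  # assumed labels
        self.password = page.get_by_label("Password")
        self.submit = page.get_by_role("button", name="Sign in")

    def open(self) -> None:
        self.page.goto("/login")  # resolved against base_url from the Playwright config

    def login(self, user: str, secret: str) -> None:
        self.username.fill(user)
        self.password.fill(secret)
        self.submit.click()
        # Resilient wait: assert on an observable outcome rather than sleeping.
        expect(self.page.get_by_role("heading", name="Dashboard")).to_be_visible(timeout=10_000)
```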
Agent 3 (BDD only: feature files, step definitions, context sharing). Think: How do feature files, step definitions, and context sharing work together? What patterns enable reuse across scenarios? How is test state managed?
- Feature files (*.feature) — categorize by area

Round 1 (main agent): Build sections from report findings.
Round 2 (fresh sub-agent, zero memory):
- Does every documented example cite a real file:line? (Glob + Grep verify)

Round 3 only if Round 2 finds issues. Max 3 rounds → escalate to user if unresolved.
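Round 2's citation check can be partly mechanical. A rough sketch, assuming citations look like `path/to/File.cs:123`; the regex and the report path argument are placeholders.

```python
import re
from pathlib import Path

CITATION = re.compile(r"([\w./\\-]+\.\w+):(\d+)")  # e.g. Tests/Pages/LoginPage.cs:42

def verify_citations(report_path: str, repo_root: str = ".") -> list[str]:
    """Return a problem string for every file:line citation that cannot be verified."""
    problems: list[str] = []
    for path, line in CITATION.findall(Path(report_path).read_text(encoding="utf-8")):
        target = Path(repo_root) / path
        if not target.is_file():
            problems.append(f"{path}:{line} - file not found")
        elif int(line) > sum(1 for _ in target.open(encoding="utf-8", errors="ignore")):
            problems.append(f"{path}:{line} - line number past end of file")
    return problems
```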
| Section | Content |
|---|---|
| Architecture Overview | Layer diagram, project dependencies |
| Base Classes | Test/page object hierarchies with code examples |
| Page Object Pattern | How to create page objects, component wrappers |
| Wait & Assertion Patterns | Resilient waits, retry, assertion helpers |
| Configuration | Settings files, environment variants, CI setup |
| Running Tests | Commands for all, filtered, headed, CI modes |
| Best Practices | Project-specific conventions |
If docs/project-config.json exists, update/create the e2eTesting section:
{
"e2eTesting": {
"framework": "<detected>",
"language": "<detected>",
"bddFramework": "<detected or null>",
"guideDoc": "docs/project-reference/e2e-test-reference.md",
"runCommands": {},
"entryPoints": [],
"stats": {
"featureFilesGrepExpr": "<grep pattern>",
"stepDefinitionFilesGrepExpr": "<grep pattern>"
},
"dependencies": {},
"architecture": {}
}
}
Note: stats use grep expressions, NOT hardcoded counts.
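Why grep expressions: consumers recompute the number at read time, so it never goes stale. A small sketch, assuming the expressions are stored as glob patterns; the example values in the comments are hypothetical and the scan fills in the real ones.

```python
import json
from pathlib import Path

def live_stat(glob_expr: str, root: str = ".") -> int:
    """Count matching files now, instead of trusting a number written at scan time."""
    return sum(1 for _ in Path(root).glob(glob_expr))

stats = json.loads(Path("docs/project-config.json").read_text(encoding="utf-8"))["e2eTesting"]["stats"]
print("feature files:", live_stat(stats["featureFilesGrepExpr"]))                 # e.g. "**/*.feature"
print("step definition files:", live_stat(stats["stepDefinitionFilesGrepExpr"]))  # e.g. "**/Steps/*.cs"
```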
Add <!-- Last scanned: YYYY-MM-DD --> at the top of the reference doc. Populate dependencies from .csproj / package.json / requirements.txt.

[IMPORTANT] Use task tracking to break ALL work into small tasks BEFORE starting — including tasks per file read. Prevents context loss from long files. Simple tasks: ask user whether to skip.
Prerequisites: MUST ATTENTION READ before executing:
Critical Thinking Mindset — Every claim needs traced proof, confidence >80% to act. Anti-hallucination: Never present guess as fact — cite sources, admit uncertainty, self-check output, cross-reference independently. Certainty without evidence = root of all hallucination.
Scan & Update Reference Doc — Surgical updates only, NEVER full rewrite.
- Read existing doc first — understand structure and manual annotations
- Detect mode: Placeholder (headings only) → Init. Has content → Sync.
- Scan codebase (grep/glob) for current patterns
- Diff findings vs doc — identify stale sections only
- Update ONLY diverged sections. Preserve manual annotations.
- Update metadata (date, version) in frontmatter/header
- NEVER rewrite entire doc. NEVER remove sections without evidence obsolete.
Output Quality — Token efficiency without sacrificing quality.
- No inventories/counts — stale instantly
- No directory trees — use 1-line path conventions
- No TOCs — AI reads linearly
- One example per pattern — only if non-obvious
- Lead with answer, not reasoning
- Sacrifice grammar for concision in reports
- Unresolved questions at end
AI Mistake Prevention — Failure modes to avoid:
- Verify AI-generated content against actual code: AI hallucinates class names/signatures. Grep to confirm existence before documenting.
- Trace the full dependency chain after edits, always.
- Surface ambiguity before coding. NEVER pick silently.
- NEVER hardcode file counts in docs. Use grep-expression stats, not hardcoded numbers.
IMPORTANT MUST ATTENTION read existing doc first, scan codebase, diff, surgical update only. Never rewrite entire doc.
IMPORTANT MUST ATTENTION output quality: no counts/trees/TOCs, 1 example per pattern, lead with answer.
MUST ATTENTION apply critical thinking — every claim needs traced proof, confidence >80% to act. Anti-hallucination: never present guess as fact.
MUST ATTENTION apply AI mistake prevention — holistic-first debugging, fix at responsible layer, surface ambiguity before coding, re-read files after compaction.
IMPORTANT MUST ATTENTION break work into small task tracking tasks BEFORE starting
IMPORTANT MUST ATTENTION detect framework in Phase 0 — agent routing depends on BDD vs non-BDD
IMPORTANT MUST ATTENTION cite file:line for every code example — NEVER fabricate class names
IMPORTANT MUST ATTENTION sub-agents write findings incrementally after each file — NEVER batch at end
IMPORTANT MUST ATTENTION NEVER hardcode file counts — use grep expressions in project-config.json stats
IMPORTANT MUST ATTENTION if Round 1 finds issues, Round 2 fresh-eyes is non-negotiable after fixing. Clean Round 1 ENDS the scan.
Anti-Rationalization:
| Evasion | Rebuttal |
|---|---|
| "Framework obvious, skip Phase 0 detection" | Phase 0 is BLOCKING — BDD vs non-BDD detection determines which agents run |
| "BDD agent not needed (probably non-BDD)" | Confirm non-BDD from Phase 0 evidence before skipping Agent 3 |
| "Skip Round 2 even when Round 1 found issues" | Clean Round 1 ends the scan. When issues exist, fresh-eyes mandatory after fixing — main agent rationalizes fabricated examples. |
| "File counts in project-config.json are fine" | NEVER hardcode counts — use grep expressions to avoid instant staleness |
| "Conditional sections not needed" | Only add conditional sections if corresponding code evidence found in scan |
[TASK-PLANNING] Before acting, analyze task scope and break into small todo tasks and sub-tasks using task tracking.
Source: .claude/hooks/lib/prompt-injections.cjs + .claude/.ck.json
Use $workflow-start <workflowId> for standard workflows; sequence custom steps manually.

[CRITICAL] Hard-won project debugging/architecture rules. MUST ATTENTION apply BEFORE forming hypothesis or writing code.
Goal: Prevent recurrence of known failure patterns — debugging, architecture, naming, AI orchestration, environment.
Top Rules (apply always):
- `ExecuteInjectScopedAsync` for parallel async + repo/UoW — NEVER `ExecuteUowTask`
- NEVER assume `python`/`python3` resolves; verify the alias first (`where python` / `where py`)

Details:
- Parallel async with repositories/UoW: use `ExecuteInjectScopedAsync`, NEVER `ExecuteUowTask`. `ExecuteUowTask` creates a new UoW but reuses the outer DI scope (same DbContext) — parallel iterations sharing a non-thread-safe DbContext silently corrupt data. `ExecuteInjectScopedAsync` creates a new UoW + new DI scope (fresh repo per iteration).
- Bus message ownership: the message prefix names the owning service (`AccountUserEntityEventBusMessage` = Accounts owns). Core services (Accounts, Communication) are leaders. Feature services (Growth, Talents) sending to core MUST use `{CoreServiceName}...RequestBusMessage` — never define their own event for core to consume.
- Naming: a policy name like `HrManagerOrHrOrPayroll` (vs `HrOperations`) names set members, not what it guards. Add role → rename = broken abstraction. Rule: names express DOES/GUARDS, not CONTAINS. Test: adding/removing member forces rename? YES = content-driven = bad → rename to purpose (e.g., HrOperationsAccessPolicy). Nuance: "Or" fine in behavioral idioms (FirstOrDefault, SuccessOrThrow) — expresses HAPPENS, not membership.
- NEVER assume `python`/`python3` resolves — verify alias first. Python may not be in bash PATH under those names. Check: `where python` / `where py`. Prefer `py` (Windows Python Launcher) for one-liners, `node` if a JS alternative exists.

Test-specific lessons → docs/project-reference/integration-test-reference.md "Lessons Learned" section. Production-code anti-patterns → docs/project-reference/backend-patterns-reference.md "Anti-Patterns" section. Generic debugging/refactoring reminders → System Lessons in .claude/hooks/lib/prompt-injections.cjs.

Recap:
- `ExecuteInjectScopedAsync`, NEVER `ExecuteUowTask` (shared DbContext = silent data corruption)
- Feature → core messaging: `{CoreServiceName}...RequestBusMessage`
- Never assume `python`/`python3` resolves — run `where python`/`where py` first, use the `py` launcher or `node`

Break work into small tasks (task tracking) before starting. Add final task: "Analyze AI mistakes & lessons learned".
Extract lessons — ROOT CAUSE ONLY, not symptom fixes:
- Record qualifying root-cause lessons via $learn.
- Ask: "Would $code-review/$code-simplifier/$security/$lint catch this?" — Yes → improve that review skill instead.
- Otherwise → $learn.
[TASK-PLANNING] [MANDATORY] BEFORE executing any workflow or skill step, create/update task tracking for all planned steps, then keep it synchronized as each step starts/completes.