tech-stack-research
// [Architecture] Use when you need to research, analyze, and compare tech stack options as a solution architect.
| Field | Value |
|---|---|
| name | tech-stack-research |
| description | [Architecture] Use when you need to research, analyze, and compare tech stack options as a solution architect. |
Codex compatibility note:
- Invoke repository skills with `$skill-name` in Codex; this mirrored copy rewrites legacy Claude `/skill-name` references.
- Prefer the `plan-hard` skill for planning guidance in this Codex mirror.
- Task tracker mandate: BEFORE executing any workflow or skill step, create/update task tracking for all steps and keep it synchronized as progress changes.
- User-question prompts mean asking the user directly in Codex.
- Ignore Claude-specific mode-switch instructions when they appear.
- Strict execution contract: when a user explicitly invokes a skill, execute that skill protocol as written.
- Subagent authorization: when a skill is user-invoked or AI-detected and its protocol requires subagents, that skill activation authorizes use of the required `spawn_agent` subagent(s) for that task.
- Do not skip, reorder, or merge protocol steps unless the user explicitly approves the deviation first.
- For workflow skills, execute each listed child-skill step explicitly and report step-by-step evidence.
- If a required step/tool cannot run in this environment, stop and ask the user before adapting.
Codex does not receive Claude hook-based doc injection. When coding, planning, debugging, testing, or reviewing, open project docs explicitly using this routing.
Always read:
- docs/project-config.json (project-specific paths, commands, modules, and workflow/test settings)
- docs/project-reference/docs-index-reference.md (routes to the full docs/project-reference/* catalog)
- docs/project-reference/lessons.md (always-on guardrails and anti-patterns)

Situation-based docs:
- backend-patterns-reference.md, domain-entities-reference.md, project-structure-reference.md
- frontend-patterns-reference.md, scss-styling-guide.md, design-system/README.md
- feature-docs-reference.md
- integration-test-reference.md
- e2e-test-reference.md
- code-review-rules.md plus domain docs above based on changed files

Do not read all docs blindly. Start from docs-index-reference.md, then open only relevant files for the task.
[BLOCKING] Execute skill steps in declared order. NEVER skip, reorder, or merge steps without explicit user approval.
[BLOCKING] Before each step or sub-skill call, update task tracking: set `in_progress` when the step starts, set `completed` when the step ends.
[BLOCKING] Every completed/skipped step MUST include brief evidence or an explicit skip reason.
[BLOCKING] If Task tools are unavailable, create and maintain an equivalent step-by-step plan tracker with the same status transitions.
Goal: Research, analyze, and compare tech stack options for each layer of the system. Act as a solution architect — derive technical requirements from business analysis, research current market, produce detailed comparison report, and present options to user for decision.
Workflow:
Key Rules:
Be skeptical. Apply critical thinking and sequential thinking. Every claim needs traced proof and a confidence percentage (it should be above 80% to act).
Read artifacts from prior workflow steps (search in plans/ and team-artifacts/):
Extract and summarize:
| Signal | Value | Source |
|---|---|---|
| Expected users | ... | discovery interview |
| Domain complexity | Low/Med/High | domain model |
| Team skills | ... | discovery interview |
| Budget constraint | ... | business evaluation |
| Timeline | ... | business evaluation |
| Compliance needs | ... | business evaluation |
| Real-time needs | Yes/No | refined PBI |
| Integration complexity | Low/Med/High | domain model |
Map business signals to technical requirements:
| Business Signal | Technical Requirement | Priority |
|---|---|---|
| High user scale | Horizontal scaling, connection pooling | Must |
| Complex domain | Strong type system, ORM with migrations | Must |
| Real-time features | WebSocket/SSE support, event-driven arch | Must |
| Small team | Low learning curve, good DX, batteries-included | Should |
| Tight budget | Open-source, low hosting cost | Should |
| Compliance | Audit trail, encryption, auth framework | Must |
MANDATORY IMPORTANT MUST ATTENTION validate derived requirements with user via a direct user question before proceeding to research.
For EACH layer, research top 3 options using WebSearch (minimum 5 queries total):
| Layer | Example Options | Research Focus |
|---|---|---|
| Backend Framework | .NET, Node.js, Python, Go, Java | Performance, type safety, ecosystem |
| Frontend Framework | Angular, React, Vue, Svelte | DX, ecosystem, hiring, enterprise fit |
| Database | PostgreSQL, MongoDB, SQL Server | Scale, query complexity, cost |
| Messaging/Events | RabbitMQ, Kafka, Redis Streams | Throughput, reliability, complexity |
| Infrastructure | Docker+K8s, Serverless, PaaS | Cost, ops overhead, scaling |
| Auth | Keycloak, Auth0, custom | Cost, compliance, flexibility |
"{option_A} vs {option_B} {current_year} comparison"
"{option} enterprise production case studies"
"{option} community size github stars"
"{option} performance benchmarks {use_case}"
"{option} security track record vulnerabilities"
For EACH stack layer, produce a comparison table:
| Criteria | Option A | Option B | Option C | Weight |
|---|---|---|---|---|
| Team Fit | score + rationale | ... | ... | High |
| Scalability | score + rationale | ... | ... | High |
| Time-to-Market | score + rationale | ... | ... | High |
| Ecosystem/Libs | score + rationale | ... | ... | Medium |
| Hiring Market | score + rationale | ... | ... | Medium |
| Cost (hosting) | score + rationale | ... | ... | Medium |
| Learning Curve | score + rationale | ... | ... | Medium |
| Community Health | score + rationale | ... | ... | Low |
Scoring: 1-5 scale. Weight: High=3x, Medium=2x, Low=1x.
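For clarity, a minimal sketch of the weighted-total arithmetic implied above (1-5 scores, High=3x/Medium=2x/Low=1x weights, normalized to the /100 scale used in the ranking step); the criterion scores shown are made-up illustration values.

```python
# Weight multipliers from the scoring rules: High=3x, Medium=2x, Low=1x.
WEIGHTS = {"High": 3, "Medium": 2, "Low": 1}

# Criteria and their weights from the comparison table above.
criteria = {
    "Team Fit": "High", "Scalability": "High", "Time-to-Market": "High",
    "Ecosystem/Libs": "Medium", "Hiring Market": "Medium", "Cost (hosting)": "Medium",
    "Learning Curve": "Medium", "Community Health": "Low",
}
# Illustrative 1-5 scores per option, in the same criterion order.
scores = {"Option A": [5, 4, 4, 5, 3, 4, 4, 5], "Option B": [3, 5, 3, 4, 4, 3, 2, 4]}

max_total = sum(5 * WEIGHTS[w] for w in criteria.values())  # best possible weighted sum
for option, option_scores in scores.items():
    total = sum(s * WEIGHTS[w] for s, w in zip(option_scores, criteria.values()))
    print(f"{option}: {round(100 * total / max_total)}/100")  # normalized for the ranking
```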
For each option, document:
### {Layer}: {Option Name}
**Pros:**
- {Pro 1} — {evidence/source}
- {Pro 2} — {evidence/source}
- {Pro 3} — {evidence/source}
**Cons:**
- {Con 1} — {evidence/source}
- {Con 2} — {evidence/source}
**Best suited when:** {conditions}
**Not suitable when:** {conditions}
**Production examples:** {2-3 real companies using this}
Calculate weighted total per option per layer. Present ranking:
### {Layer} Ranking
1. **{Option A}** — Score: {X}/100 — Confidence: {Y}%
2. **{Option B}** — Score: {X}/100 — Confidence: {Y}%
3. **{Option C}** — Score: {X}/100 — Confidence: {Y}%
**Recommendation:** {Option A}
**Why:** {2-3 sentence rationale linking to team skills, scale, and constraints}
Write report to {plan-dir}/research/tech-stack-comparison.md with:
Report must be <=200 lines. Use tables over prose.
MANDATORY IMPORTANT MUST ATTENTION present findings and ask 5-8 questions via a direct user question:
After user confirms, update report with final decisions and mark as status: confirmed.
{plan-dir}/research/tech-stack-comparison.md # Full comparison report
{plan-dir}/phase-02-tech-stack.md # Final confirmed tech stack decisions
- MANDATORY IMPORTANT MUST ATTENTION break work into small todo tasks using task tracking BEFORE starting.
- MANDATORY IMPORTANT MUST ATTENTION validate EVERY recommendation with user via a direct user question — never auto-decide.
- MANDATORY IMPORTANT MUST ATTENTION include confidence % and evidence citations for all claims.
- MANDATORY IMPORTANT MUST ATTENTION add a final review todo task to verify work quality.
MANDATORY IMPORTANT MUST ATTENTION — NO EXCEPTIONS: after completing this skill, you MUST use a direct user question to present these options. Do NOT skip because the task seems "simple" or "obvious" — the user decides:
After the existing ## Next Steps prompt above resolves, present a second, independent direct user question:
`$why-review`, `$plan-validate`. Set `in_progress` before execution, set `completed` after execution.
- MANDATORY IMPORTANT MUST ATTENTION break work into small todo tasks using task tracking BEFORE starting.
- MANDATORY IMPORTANT MUST ATTENTION validate decisions with user via a direct user question — never auto-decide.
- MANDATORY IMPORTANT MUST ATTENTION add a final review todo task to verify work quality.
[TASK-PLANNING] Before acting, analyze task scope and systematically break it into small todo tasks and sub-tasks using task tracking.
[IMPORTANT] Analyze how big the task is and break it into many small todo tasks systematically before starting — this is very important.
Critical Thinking Mindset — Apply critical thinking and sequential thinking. Every claim needs traced proof and confidence >80% to act. Anti-hallucination: never present a guess as fact — cite sources for every claim, admit uncertainty freely, self-check output for errors, cross-reference independently, and stay skeptical of your own confidence — certainty without evidence is the root of all hallucination.
AI Mistake Prevention — Failure modes to avoid on every task:
- Check downstream references before deleting. Deleting components causes documentation and code staleness cascades. Map all referencing files before removal.
- Verify AI-generated content against actual code. AI hallucinates APIs, class names, and method signatures. Always grep to confirm existence before documenting or referencing.
- Trace full dependency chain after edits. Changing a definition misses downstream variables and consumers derived from it. Always trace the full chain.
- Trace ALL code paths when verifying correctness. Confirming code exists is not confirming it executes. Always trace early exits, error branches, and conditional skips — not just happy path.
- When debugging, ask "whose responsibility?" before fixing. Trace whether bug is in caller (wrong data) or callee (wrong handling). Fix at responsible layer — never patch symptom site.
- Assume existing values are intentional — ask WHY before changing. Before changing any constant, limit, flag, or pattern: read comments, check git blame, examine surrounding code.
- Verify ALL affected outputs, not just the first. Changes touching multiple stacks require verifying EVERY output. One green check is not all green checks.
- Holistic-first debugging — resist nearest-attention trap. When investigating any failure, list EVERY precondition first (config, env vars, DB names, endpoints, DI registrations, data preconditions), then verify each against evidence before forming any code-layer hypothesis.
- Surgical changes — apply the diff test. Bug fix: every changed line must trace directly to the bug. Don't restyle or improve adjacent code. Enhancement task: implement improvements AND announce them explicitly.
- Surface ambiguity before coding — don't pick silently. If request has multiple interpretations, present each with effort estimate and ask. Never assume all-records, file-based, or more complex path.
- MANDATORY IMPORTANT MUST ATTENTION use task tracking to break ALL work into small tasks BEFORE starting.
- MANDATORY IMPORTANT MUST ATTENTION use a direct user question at EVERY decision point — never assume user preferences.
- MANDATORY IMPORTANT MUST ATTENTION research top 3 options per stack layer, compare with evidence, present report with recommendation + confidence %.
External Memory: For complex or lengthy work (research, analysis, scan, review), write intermediate findings and final results to a report file in `plans/reports/` — prevents context loss and serves as a deliverable.
Evidence Gate: MANDATORY IMPORTANT MUST ATTENTION — every claim, finding, and recommendation requires `file:line` proof or traced evidence with a confidence percentage (>80% to act, <80% must verify first).
Source: .claude/hooks/lib/prompt-injections.cjs + .claude/.ck.json
`$workflow-start <workflowId>` for standard workflows; sequence custom steps manually.
[CRITICAL] Hard-won project debugging/architecture rules. MUST apply BEFORE forming a hypothesis or writing code.
Goal: Prevent recurrence of known failure patterns — debugging, architecture, naming, AI orchestration, environment.
Top Rules (apply always):
- `ExecuteInjectScopedAsync` for parallel async + repo/UoW — NEVER `ExecuteUowTask`.
- NEVER assume `python`/`python3` resolves (check `where python`/`where py`).

In detail:
- Parallel async + repo/UoW: use `ExecuteInjectScopedAsync`, NEVER `ExecuteUowTask`. `ExecuteUowTask` creates a new UoW but reuses the outer DI scope (same DbContext) — parallel iterations sharing a non-thread-safe DbContext silently corrupt data. `ExecuteInjectScopedAsync` creates a new UoW + new DI scope (fresh repo per iteration); see the sketch after this list.
- Event bus ownership: the message name declares the owning service (e.g., `AccountUserEntityEventBusMessage` = Accounts owns). Core services (Accounts, Communication) are leaders. Feature services (Growth, Talents) sending to core MUST use `{CoreServiceName}...RequestBusMessage` — never define their own event for core to consume.
- Naming: `HrManagerOrHrOrPayroll` vs `HrOperations` — a policy name that lists set members describes contents, not what it guards. Add a role → rename = broken abstraction. Rule: names express DOES/GUARDS, not CONTAINS. Test: does adding/removing a member force a rename? YES = content-driven = bad → rename to purpose (e.g., `HrOperationsAccessPolicy`). Nuance: "Or" is fine in behavioral idioms (FirstOrDefault, SuccessOrThrow) — it expresses what HAPPENS, not membership.
- Python alias: NEVER assume `python`/`python3` resolves — verify the alias first. Python may not be in bash PATH under those names. Check: `where python` / `where py`. Prefer `py` (Windows Python Launcher) for one-liners, `node` if a JS alternative exists.
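As an illustration only, a language-agnostic sketch of the underlying rule (give each parallel iteration its own non-thread-safe resource instead of sharing one); it does not reproduce the project's C# `ExecuteInjectScopedAsync`/`ExecuteUowTask` helpers, and the `FakeDbContext` stand-in is invented for this example.

```python
import asyncio

class FakeDbContext:
    """Stand-in for a non-thread-safe unit-of-work / DbContext-like resource."""
    def __init__(self) -> None:
        self.pending: list[str] = []

    async def save(self, item: str) -> None:
        # In a real DbContext this mutation is not safe under concurrent use.
        self.pending.append(item)
        await asyncio.sleep(0)

# Anti-pattern (analogous to ExecuteUowTask): all iterations share one context.
async def shared_scope(items: list[str]) -> None:
    ctx = FakeDbContext()
    await asyncio.gather(*(ctx.save(i) for i in items))

# Preferred pattern (analogous to ExecuteInjectScopedAsync): fresh context per iteration.
async def scope_per_iteration(items: list[str]) -> None:
    async def handle(item: str) -> None:
        ctx = FakeDbContext()  # new "scope" => its own context for this iteration
        await ctx.save(item)
    await asyncio.gather(*(handle(i) for i in items))

asyncio.run(scope_per_iteration(["a", "b", "c"]))
```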
docs/project-reference/integration-test-reference.mdLessons Learned section. Production-code anti-patterns →docs/project-reference/backend-patterns-reference.mdAnti-Patterns section. Generic debugging/refactoring reminders → System Lessons in.claude/hooks/lib/prompt-injections.cjs.
- `ExecuteInjectScopedAsync`, NEVER `ExecuteUowTask` (shared DbContext = silent data corruption)
- `{CoreServiceName}...RequestBusMessage`
- Never assume `python`/`python3` resolves — run `where python`/`where py` first, use the `py` launcher or `node`
- Break work into small tasks (task tracking) before starting. Add final task: "Analyze AI mistakes & lessons learned".
Extract lessons — ROOT CAUSE ONLY, not symptom fixes:
- Capture via `$learn`.
- Ask: "Would `$code-review`/`$code-simplifier`/`$security`/`$lint` catch this?" — Yes → improve the review skill instead.
- No → `$learn`.
[TASK-PLANNING] [MANDATORY] BEFORE executing any workflow or skill step, create/update task tracking for all planned steps, then keep it synchronized as each step starts/completes.