| name | frontend-debug |
| description | Autonomously debug frontend issues through empirical browser observation using Chrome DevTools MCP. This skill should be used when debugging UI/UX issues, visual bugs, interaction failures, console errors, or network problems in web applications. Provides systematic 6-phase debugging workflow with GitHub integration, automated verification, and knowledge retention. |
Enable autonomous frontend debugging through empirical browser observation and systematic investigation. This skill implements a structured 6-phase workflow that combines Chrome DevTools MCP for real-world browser testing with automated root cause analysis, fix implementation, and verification. No issue is considered resolved until empirically verified through direct browser interaction.
Core Principle: Evidence-based debugging requires direct browser observation. Never assume a fix works without browser verification.
Invoke this skill for:
- UI/UX issues and visual bugs
- Interaction failures (buttons, forms, navigation)
- Console errors and warnings
- Network problems (failed requests, timeouts)
Activation patterns:
# Direct invocation with issue description
@~/.claude/skills/frontend-debug/SKILL.md "Login button not responding"
# With flags for complex issues
@~/.claude/skills/frontend-debug/SKILL.md "Form validation broken" --ultrathink --loop
# From GitHub issue (automatically fetches details)
@~/.claude/skills/frontend-debug/SKILL.md --github-issue 123
@~/.claude/skills/frontend-debug/SKILL.md "#123" # Shorthand
Execute debugging through 6 systematic phases:
- Phase 0: Initialization & Context Gathering
- Phase 1: Browser Investigation & Observation
- Phase 2: Root Cause Analysis & Fix Strategy
- Phase 3: Implementation
- Phase 4: Empirical Verification (MANDATORY)
- Phase 5: Reporting & Cleanup
Loop Flag: If --loop is specified, automatically retry Phases 1-4 on verification failure (max 3 iterations).
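The retry semantics can be sketched in shell; `run_debug_cycle` is a hypothetical stand-in for one full pass through Phases 1-4, not part of the skill itself:

```shell
# Sketch of the --loop retry semantics: re-run the debug cycle until
# verification passes or 3 iterations are exhausted.
run_debug_cycle() {
  # Placeholder: a real cycle would reproduce, fix, and verify in the browser.
  false
}

MAX_ITERATIONS=3
i=1
verified=false
while [ "$i" -le "$MAX_ITERATIONS" ]; do
  if run_debug_cycle; then
    verified=true
    break
  fi
  echo "Iteration $i failed; retrying"
  i=$((i + 1))
done
$verified || echo "Verification failed after $MAX_ITERATIONS iterations"
```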
Objective: Establish debugging session and gather complete issue context.
If GitHub issue reference detected (e.g., --github-issue 123, "#123", GitHub URL):
- Load @contexts/github-integration.md for GitHub workflow guidance
- Fetch details: `gh issue view <number> --json title,body,labels`

If inline description provided:
If no description:
Check for special project configurations:
SoftTrak Detection:
- Trigger: `package.json` contains "softtrak" OR `.softtrak/` directory exists
- Load @contexts/softtrak.md for credentials and auto-correction details

Git Worktree Detection:
- Trigger: `git rev-parse --git-dir | grep "\.git/worktrees"`
- Load @contexts/worktree.md for multi-session management

Create debugging session with tracking:
# Generate session ID
SESSION_ID="{project-name}-{timestamp}" # or "{worktree-name}-{timestamp}"
# Create checkpoint file for crash recovery
touch .debug-session-${SESSION_ID}.json
# Initialize TodoWrite with 6-phase tracking
# - Phase 0: Initialization ✅
# - Phase 1: Investigation (pending)
# - Phase 2: Root Cause Analysis (pending)
# - Phase 3: Implementation (pending)
# - Phase 4: Verification (pending)
# - Phase 5: Reporting (pending)
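The initialization above can be sketched as runnable shell; the naming scheme is the skill's own, but the exact commands (and the Phase 0 checkpoint contents) are illustrative assumptions:

```shell
# Derive a session ID from the repo (or directory) name plus a timestamp.
PROJECT_NAME=$(basename "$(git rev-parse --show-toplevel 2>/dev/null || pwd)")
SESSION_ID="${PROJECT_NAME}-$(date +%Y%m%d-%H%M%S)"

# Create the checkpoint file for crash recovery, starting at Phase 0.
printf '{"phase": 0, "status": "initialized"}\n' > ".debug-session-${SESSION_ID}.json"
```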
Look for existing checkpoint files:
ls -la .debug-session-*.json
If found, offer to resume from last checkpoint with session state.
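A minimal resume check might look like the following; the filename pattern comes from Phase 0, while the prompt wording is illustrative:

```shell
# Pick the most recent checkpoint, if any, to offer a resume.
LATEST=$(ls -t .debug-session-*.json 2>/dev/null | head -n 1)
if [ -n "$LATEST" ]; then
  echo "Found checkpoint: $LATEST (offer to resume session state)"
else
  echo "No checkpoint found; starting a fresh session"
fi
```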
Checkpoint Criteria: Session initialized, issue understood, TodoWrite created, ready for browser investigation.
Objective: Systematically investigate issue through direct browser observation.
Primary Tool: Chrome DevTools MCP (MANDATORY - always use --isolated flag)
# CRITICAL: Always use --isolated flag to prevent browser conflicts
# Launch new isolated Chrome instance
# Navigate to application URL
# Handle authentication if required (use login helpers if available)
# Navigate to problematic page/feature
Execute all baseline captures:
required_captures:
- take_snapshot: Document current DOM/UI state
- take_screenshot: Visual baseline for comparison
- list_console_messages: Capture all console output (errors, warnings, info)
- list_network_requests: Capture all network activity (status codes, payloads)
Follow reproduction steps systematically:
- Use interaction tools: click, fill, hover, navigate

Execute comprehensive checks:
mandatory_checks:
console_analysis:
- Identify errors (stack traces, error messages)
- Note warnings related to issue
- Check for missing dependencies or failed imports
network_analysis:
- Identify failed requests (4xx, 5xx status codes)
- Inspect request/response payloads
- Check for timeout errors or slow requests
dom_analysis:
- Verify elements exist in DOM via snapshot
- Check element visibility and styling
- Validate event listeners are attached
state_analysis:
- Inspect localStorage, sessionStorage
- Check cookies and authentication state
- Validate application state in console
For complex issues, invoke additional analysis:
Checkpoint Criteria: Issue reproduced in browser, root cause hypothesis formed with confidence score.
Objective: Determine why issue occurs and formulate fix strategy.
Use SuperClaude troubleshoot command:
/sc:troubleshoot [detailed-issue-description] --validate --c7 --seq --persona-analyzer
This provides structured analysis using:
Structure analysis output:
root_cause_analysis:
primary_cause: "Why the issue occurs (e.g., event listener not attached)"
contributing_factors:
- "Factor 1 (e.g., async timing issue)"
- "Factor 2 (e.g., missing dependency)"
confidence_score: 0.75 # 0.0-1.0 scale
evidence:
- "Console error showing specific failure"
- "Network request returning 404"
- "DOM element missing expected class"
Develop 1-3 potential fixes ranked by confidence:
fix_candidates:
option_1:
approach: "Primary fix strategy"
confidence: 0.8
risk: "low"
rollback: "Easy - single file change"
option_2:
approach: "Alternative fix strategy"
confidence: 0.6
risk: "medium"
rollback: "Moderate - multiple files"
If confidence < 70%:
Checkpoint Criteria: Root cause identified with acceptable confidence, fix strategy approved.
After root cause analysis, evaluate debugging complexity for potential agent delegation:
Escalation Indicators (if ANY are true, consider agent delegation):
- Confidence score below 60%
- Issue spans multiple domains (network, state, UI, performance)
- Systemic problem affecting several features
Decision Logic:
If escalation indicators detected:
action: delegate_to_agent
reason: "Complex debugging requires specialized sub-agent coordination"
next_step: "Invoke Task tool with frontend-debug-agent"
confidence_requirement: "<60% OR multi_domain:true OR systemic:true"
Agent Invocation Pattern:
Tool: Task
Description: "Complex frontend debugging requiring specialized analysis: [issue-summary]"
Agent Type: root-cause-analyst
Context: {
browser_evidence: "[screenshots, console logs]",
reproduction_steps: "[steps from Phase 1]",
hypothesis: "[from Phase 2 analysis]",
confidence: "[confidence score]",
domains_affected: "[network|state|ui|performance]"
}
Agent will spawn specialized debugging sub-agents and coordinate investigation.
If escalation indicators NOT detected:
action: continue_with_skill
reason: "Confidence acceptable, single-domain issue, skill-based debugging appropriate"
next_step: "Proceed to Phase 3: Implementation"
Proceed with skill-based fix implementation:
Objective: Apply fix to codebase following best practices.
Execute fix using appropriate tools:
file_operations:
single_file: Use Edit tool
multiple_files: Use MultiEdit tool for atomic changes
code_standards:
- Follow existing code patterns and conventions
- Add inline comments explaining fix
- Consider edge cases and error handling
- Maintain consistent formatting
For issues involving backend changes:
# Test API endpoints with curl
curl -X POST http://localhost:3000/api/endpoint \
-H "Content-Type: application/json" \
-d '{"test": "data"}'
# Check database state if relevant
# Verify server logs for errors
Execute build process if needed:
# Rebuild application
npm run build # or appropriate build command
# Restart dev server
# Clear browser cache if needed
# Verify server is running
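Verifying the server is up can be done with a bounded poll so Phase 4 never starts against a dead server. This helper is a sketch: `wait_for_url` is hypothetical, and the URL and retry count would be project-specific:

```shell
# Poll a URL until it responds or retries are exhausted.
wait_for_url() {
  url="$1"
  tries="${2:-5}"
  i=1
  while [ "$i" -le "$tries" ]; do
    if curl -sf -o /dev/null "$url"; then
      return 0   # server responded
    fi
    sleep 1
    i=$((i + 1))
  done
  return 1       # still unreachable after all retries
}
```

Usage might look like `wait_for_url http://localhost:3000 10` before opening the browser.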
Checkpoint Criteria: Fix implemented in code, application ready for empirical testing.
Objective: Verify fix through direct browser observation.
CRITICAL: This phase is MANDATORY. Never skip browser verification. Use Chrome DevTools MCP exclusively.
Execute comprehensive verification:
checks:
- Visual appearance matches expected state
- Elements present and correctly positioned
- Styling applied correctly
- Responsive behavior works
- Screenshot comparison (before/after)
checks:
- No errors related to the fix
- No new warnings introduced
- Expected log messages present
- Clean console output
checks:
- All requests succeed (200/201/204 status)
- Payloads correct in structure and content
- Response times acceptable
- No timeout errors
checks:
- Complete user flow works end-to-end
- All clickable elements respond
- Forms submit successfully
- Navigation functions correctly
- State persists appropriately
checks:
- Related features still work
- No new issues introduced
- Core workflows unaffected
- Edge cases handled
Execute verification systematically:
step_1_prepare:
- Hard reload browser (Cmd+Shift+R / Ctrl+Shift+R)
- Clear cached state if needed
- Return to starting point of user flow
step_2_execute:
- Reproduce original issue scenario
- Observe UI state and behavior at each step
- Monitor console for errors
- Monitor network for failures
step_3_validate:
- Compare actual behavior against expected
- Take final screenshot for evidence
- Document verification results
- Note any unexpected observations
If ANY verification criterion fails:
With --loop flag (Enhanced with Agent Escalation):
Iteration 1 (Skill-based):
If Iteration 1 fails → Escalation Decision:
iteration_1_failed: true
action: "Evaluate complexity for agent delegation"
threshold: "If confidence <50% OR new domains discovered → delegate"
Iterations 2-3 (Agent-based if escalated):
Rationale: Keep simple retries in main context, escalate complex cases to agents.
Without --loop flag:
Checkpoint Criteria: All 5 verification criteria passed, fix confirmed working.
Objective: Document findings and close debugging session.
Create comprehensive report:
# Frontend Debug Report: {Issue Title}
## Summary
[One-line description of issue and fix]
## Root Cause
[Detailed explanation of why issue occurred]
## Fix Applied
[What was changed and why it resolves the issue]
## Verification Results
- ✅ UI State: [Details]
- ✅ Console: [Details]
- ✅ Network: [Details]
- ✅ Interactions: [Details]
- ✅ Regression: [Details]
## Evidence
- Before Screenshot: [path]
- After Screenshot: [path]
- Console Logs: [relevant logs]
- Network Activity: [relevant requests]
## Recommended Tests
[Suggested regression tests to prevent future issues]
## Session Details
- Session ID: {session-id}
- Duration: {elapsed-time}
- Iterations: {loop-count}
If debugging from GitHub issue:
# Post resolution comment with report
gh issue comment <number> --body "[report-content]"
# Add resolved label to indicate fix has been applied and verified
gh issue edit <number> --add-label "resolved-by-claude"
# Remove automated-debug label if present
gh issue edit <number> --remove-label "automated-debug"
# DO NOT automatically close the issue
# Let human developers review the fix and close manually
# This ensures proper validation and prevents premature closure
Clean up debugging artifacts:
# Mark all TodoWrite tasks as completed
# Delete checkpoint file: rm .debug-session-${SESSION_ID}.json
# Close browser session via Chrome DevTools MCP
# Save final report to appropriate location
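The teardown steps above can be sketched as follows; the checkpoint naming matches Phase 0, but the example SESSION_ID and the `reports/` location are illustrative assumptions:

```shell
# Teardown sketch: delete the checkpoint and save the final report.
SESSION_ID="demo-20240101-120000"
touch ".debug-session-${SESSION_ID}.json"    # stand-in for an existing checkpoint

rm -f ".debug-session-${SESSION_ID}.json"    # delete checkpoint file
mkdir -p reports
printf '# Frontend Debug Report\n' > "reports/${SESSION_ID}.md"  # save final report
```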
Store learning for future debugging:
{
"issue_pattern": "description of issue type",
"root_cause_category": "async timing | network | state management | etc",
"solution_pattern": "approach that resolved issue",
"detection_signals": ["symptom 1", "symptom 2"],
"fix_template": "reusable solution approach"
}
Checkpoint Criteria: Report generated, GitHub issue updated (if applicable), session cleaned up.
chrome_devtools_unavailable:
symptom: "MCP connection fails"
action: "Check Chrome DevTools MCP configuration in .mcp.json"
fallback: "Request user to verify MCP setup and Chrome installation"
browser_already_running:
symptom: "Port conflict or existing instance"
cause: "--isolated flag not used"
solution: "Always use --isolated=true for new debugging sessions"
navigation_timeout:
symptom: "Page load timeout"
action: "Increase timeout or check network connectivity"
fallback: "Request user to verify application is running and accessible"
authentication_required:
symptom: "Login page or auth wall"
action: "Use project-specific login helper if available"
fallback: "Request user credentials or authentication approach"
fix_does_not_work:
with_loop_flag:
action: "Automatically restart Phase 1 (max 3 iterations)"
logging: "Document why previous fix failed"
without_loop_flag:
action: "Report failure with detailed explanation"
next_step: "Request user guidance on alternative approaches"
new_issue_introduced:
action: "Rollback changes to previous working state"
logging: "Document regression details"
next_step: "Re-analyze with different fix strategy"
partial_success:
action: "Document which criteria passed vs failed"
decision: "Discuss with user whether partial fix is acceptable"
gh_cli_not_available:
symptom: "gh command not found"
action: "Warn user to install: brew install gh"
fallback: "Continue with manual issue description"
issue_not_found:
symptom: "Issue number doesn't exist"
action: "Verify issue number and repository"
fallback: "Request user to provide issue details manually"
authentication_failed:
symptom: "gh auth required"
action: "Guide user through: gh auth login"
fallback: "Continue without GitHub integration"
If encountering errors with Chrome DevTools MCP tools:
Load @references/mcp-tools.md for detailed tool documentation.

Design Principle: Load additional context only when detected or needed, to optimize token usage.
GitHub Integration → Load @contexts/github-integration.md
- Trigger: --github-issue flag OR GitHub issue reference detected (#123, URLs)

SoftTrak Project → Load @contexts/softtrak.md
- Trigger: `package.json` contains "softtrak" OR `.softtrak/` directory exists

Git Worktree → Load @contexts/worktree.md
- Trigger: `git rev-parse --git-dir | grep "\.git/worktrees"` succeeds

Tool Reference → Load @references/mcp-tools.md
Usage Examples โ Load @references/examples.md
This skill includes organized resources for effective debugging:
cleanup-sessions.sh - session cleanup script
- Run: `bash scripts/cleanup-sessions.sh`

mcp-tools.md - Detailed Chrome DevTools MCP tool reference
examples.md - Example debugging scenarios
github-integration.md - GitHub issue lifecycle management
- Loaded when --github-issue flag used or issue reference detected

softtrak.md - SoftTrak project configuration
worktree.md - Git worktree multi-session support
investigation-report-template.md - Report structure template
verification-checklist.md - Verification criteria checklist
session-state-schema.json - Session checkpoint schema
mcp-config-example.json - Chrome DevTools MCP configuration example
knowledge-base.json - Stored debugging patterns and solutions
framework-quirks.json - Framework-specific gotchas and patterns
project-contexts/softtrak.json - SoftTrak project data
- --loop flag for complex issues requiring iteration

Modular Loading Strategy: Conditional loading minimizes token usage.
Token Budget:
Total Range: 4,500 tokens (minimal) to 10,000 tokens (all modules loaded)
Optimization: Load only what's needed based on detection patterns, achieving 30-70% token savings vs loading everything upfront.