| name | chrome-devtools |
| description | Browser automation, debugging, and performance analysis using Puppeteer CLI scripts. Use for automating browsers, taking screenshots, analyzing performance, monitoring network traffic, web scraping, form automation, and JavaScript debugging. |
| license | Apache-2.0 |
Browser automation via executable Puppeteer scripts. All scripts output JSON for easy parsing.
CRITICAL: Always check pwd before running scripts.
On Linux/WSL, Chrome requires system libraries. Install them first:
pwd # Should show current working directory
cd .claude/skills/chrome-devtools/scripts
./install-deps.sh # Auto-detects OS and installs required libs
Supports: Ubuntu, Debian, Fedora, RHEL, CentOS, Arch, Manjaro
macOS/Windows: Skip this step (dependencies bundled with Chrome)
npm install # Installs puppeteer, debug, yargs
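After installing, a quick sanity check confirms that Node is on the PATH and the puppeteer package resolves from this directory (the `puppeteer_status` variable is just for illustration):

```shell
# Confirm node is available and puppeteer resolves from the current directory
command -v node >/dev/null && node --version
if node -e "require.resolve('puppeteer')" >/dev/null 2>&1; then
  puppeteer_status="installed"
else
  puppeteer_status="missing (run npm install)"
fi
echo "puppeteer: $puppeteer_status"
```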
ImageMagick enables automatic screenshot compression to keep files under 5MB:
macOS:
brew install imagemagick
Ubuntu/Debian/WSL:
sudo apt-get install imagemagick
Verify:
magick -version # or: convert -version
Without ImageMagick, screenshots >5MB will not be compressed (may fail to load in Gemini/Claude).
node navigate.js --url https://example.com
# Output: {"success": true, "url": "https://example.com", "title": "Example Domain"}
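Because every script prints a single JSON object, results can be captured and picked apart with jq. A minimal sketch, using a literal copy of the sample output above in place of a real run:

```shell
# Parse fields out of a script's JSON output (literal stands in for a real run)
result='{"success": true, "url": "https://example.com", "title": "Example Domain"}'
title=$(echo "$result" | jq -r '.title')
echo "$title"   # → Example Domain
```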
All scripts are in .claude/skills/chrome-devtools/scripts/
CRITICAL: Always check pwd before running scripts.
See ./scripts/README.md for full documentation.

- navigate.js - Navigate to URLs
- screenshot.js - Capture screenshots (full page or element)
- click.js - Click elements
- fill.js - Fill form fields
- evaluate.js - Execute JavaScript in page context
- snapshot.js - Extract interactive elements with metadata
- console.js - Monitor console messages/errors
- network.js - Track HTTP requests/responses
- performance.js - Measure Core Web Vitals + record traces

pwd # Should show current working directory
cd .claude/skills/chrome-devtools/scripts
node screenshot.js --url https://example.com --output ./docs/screenshots/page.png
Important: Always save screenshots to the ./docs/screenshots directory.
Screenshots are automatically compressed if they exceed 5MB to ensure compatibility with Gemini API and Claude Code (which have 5MB limits). This uses ImageMagick internally:
# Default: auto-compress if >5MB
node screenshot.js --url https://example.com --output page.png
# Custom size threshold (e.g., 3MB)
node screenshot.js --url https://example.com --output page.png --max-size 3
# Disable compression
node screenshot.js --url https://example.com --output page.png --no-compress
When compression runs, the output JSON includes the compression details:
{
"success": true,
"output": "/path/to/page.png",
"compressed": true,
"originalSize": 8388608,
"size": 3145728,
"compressionRatio": "62.50%",
"url": "https://example.com"
}
# Keep browser open with --close false
node navigate.js --url https://example.com/login --close false
node fill.js --selector "#email" --value "user@example.com" --close false
node fill.js --selector "#password" --value "secret" --close false
node click.js --selector "button[type=submit]"
# Extract specific fields with jq
node performance.js --url https://example.com | jq '.vitals.LCP'
# Save to file
node network.js --url https://example.com --output /tmp/requests.json
BEFORE executing any script:
1. Run pwd
2. Confirm you are in the .claude/skills/chrome-devtools/scripts/ directory
3. If not, cd to the correct location

Example:
pwd # Should show: .../chrome-devtools/scripts
# If wrong:
cd .claude/skills/chrome-devtools/scripts
AFTER screenshot/capture operations:
Run ls -lh <output-path> to confirm the file exists.

Example:
node screenshot.js --url https://example.com --output ./docs/screenshots/page.png
ls -lh ./docs/screenshots/page.png # Verify file exists
# Then use Read tool to visually inspect
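The `ls` check can be wrapped in a small helper that also enforces the 5MB cap. `check_screenshot` is a hypothetical convenience, demonstrated here on a temp file rather than a real capture:

```shell
# Fail fast if a capture is missing or over the 5 MB limit
check_screenshot() {
  local f="$1"
  [ -f "$f" ] || { echo "missing: $f"; return 1; }
  local bytes
  bytes=$(stat -c%s "$f" 2>/dev/null || stat -f%z "$f")
  if [ "$bytes" -gt $((5 * 1024 * 1024)) ]; then
    echo "too large: ${bytes} bytes"; return 1
  fi
  echo "ok: ${bytes} bytes"
}

# Demo on a 1 KB temp file standing in for a screenshot
tmp=$(mktemp)
head -c 1024 /dev/zero > "$tmp"
check_screenshot "$tmp"   # → ok: 1024 bytes
```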
If a script fails, discover a working selector and retry. Example:
# CSS selector fails
node click.js --url https://example.com --selector ".btn-submit"
# Error: waiting for selector ".btn-submit" failed
# Discover correct selector
node snapshot.js --url https://example.com | jq '.elements[] | select(.tagName=="BUTTON")'
# Try XPath
node click.js --url https://example.com --selector "//button[contains(text(),'Submit')]"
❌ Wrong working directory → output files go to wrong location
❌ Skipping output validation → silent failures
❌ Using complex CSS selectors without testing → selector errors
❌ Not checking element visibility → timeout errors
✅ Always verify pwd before running scripts
✅ Always validate output after screenshots
✅ Use snapshot.js to discover selectors
✅ Test selectors with simple commands first
node evaluate.js --url https://example.com --script "
Array.from(document.querySelectorAll('.item')).map(el => ({
title: el.querySelector('h2')?.textContent,
link: el.querySelector('a')?.href
}))
" | jq '.result'
PERF=$(node performance.js --url https://example.com)
LCP=$(echo "$PERF" | jq '.vitals.LCP')
if (( $(echo "$LCP < 2500" | bc -l) )); then
echo "✓ LCP passed: ${LCP}ms"
else
echo "✗ LCP failed: ${LCP}ms"
fi
node fill.js --url https://example.com --selector "#search" --value "query" --close false
node click.js --selector "button[type=submit]"
node console.js --url https://example.com --types error,warn --duration 5000 | jq '.messageCount'
All scripts support:
- --headless false - Show browser window
- --close false - Keep browser open for chaining
- --timeout 30000 - Set timeout (milliseconds)
- --wait-until networkidle2 - Wait strategy

See ./scripts/README.md for complete options.
All scripts output JSON to stdout:
{
"success": true,
"url": "https://example.com",
... // script-specific data
}
Errors go to stderr:
{
"success": false,
"error": "Error message"
}
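Scripted workflows can branch on the `success` field. A sketch, with a literal failing response standing in for a real captured run:

```shell
# Branch on success/error; the response literal simulates a failed run
response='{"success": false, "error": "Error message"}'
if [ "$(echo "$response" | jq -r '.success')" = "true" ]; then
  status="ok"
else
  status="failed: $(echo "$response" | jq -r '.error')"
fi
echo "$status"   # → failed: Error message
```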
Use snapshot.js to discover selectors:
node snapshot.js --url https://example.com | jq '.elements[] | {tagName, text, selector}'
"Cannot find package 'puppeteer'"
- Run npm install in the scripts directory

"error while loading shared libraries: libnss3.so" (Linux/WSL)
- Run ./install-deps.sh in the scripts directory
- Or install manually: sudo apt-get install -y libnss3 libnspr4 libasound2t64 libatk1.0-0 libatk-bridge2.0-0 libcups2 libdrm2 libxkbcommon0 libxcomposite1 libxdamage1 libxfixes3 libxrandr2 libgbm1

"Failed to launch the browser process"
- Check the bundled Chrome exists: ls ~/.cache/puppeteer
- Run npm rebuild then npm install

Chrome not found
- Run npm install
- Or install Chrome directly: npx puppeteer browsers install chrome

Element not found
- Discover selectors: node snapshot.js --url <url>

Script hangs
- Increase the timeout: --timeout 60000
- Use a lighter wait strategy: --wait-until load or --wait-until domcontentloaded

Blank screenshot
- Wait for network idle: --wait-until networkidle2
- Increase the timeout: --timeout 30000

Permission denied on scripts
- chmod +x *.sh

Screenshot too large (>5MB)
- Lower the compression threshold: --max-size 3
- Switch format: --format jpeg --quality 80
- Capture only part of the page: --selector .main-content

Compression not working
- Verify ImageMagick: magick -version or convert -version
- Check the output JSON for "compressed": true
- Use --selector to capture only the needed area

Detailed guides available in ./references/:
Create custom scripts using shared library:
import { getBrowser, getPage, closeBrowser, outputJSON } from './lib/browser.js';

// Your automation logic, e.g. CPU throttling via the Chrome DevTools Protocol
const page = await getPage();
const client = await page.createCDPSession();
await client.send('Emulation.setCPUThrottlingRate', { rate: 4 });
outputJSON({ success: true });
await closeBrowser();
See reference documentation for advanced patterns and complete API coverage.