| name | chrome-devtools |
| description | Browser automation via Puppeteer CLI scripts (JSON output). Capabilities: screenshots, PDF generation, web scraping, form automation, network monitoring, performance profiling, JavaScript debugging, headless browsing. Actions: screenshot, scrape, automate, test, profile, monitor, debug browser. Keywords: Puppeteer, headless Chrome, screenshot, PDF, web scraping, form fill, click, navigate, network traffic, performance audit, Lighthouse, console logs, DOM manipulation, element selector, wait, scroll, automation script. Use when: taking screenshots, generating PDFs from web, scraping websites, automating form submissions, monitoring network requests, profiling page performance, debugging JavaScript, testing web UIs. |
| license | Apache-2.0 |
Browser automation via executable Puppeteer scripts. All scripts output JSON for easy parsing.
On Linux/WSL, Chrome requires system libraries. Install them first:
cd .claude/skills/chrome-devtools/scripts
./install-deps.sh # Auto-detects OS and installs required libs
Supports: Ubuntu, Debian, Fedora, RHEL, CentOS, Arch, Manjaro
macOS/Windows: Skip this step (dependencies bundled with Chrome)
npm install # Installs puppeteer, debug, yargs
node navigate.js --url https://example.com
# Output: {"success": true, "url": "https://example.com", "title": "Example Domain"}
All scripts are in .claude/skills/chrome-devtools/scripts/
See ./scripts/README.md for complete documentation of each script.

- navigate.js - Navigate to URLs
- screenshot.js - Capture screenshots (full page or element)
- click.js - Click elements
- fill.js - Fill form fields
- evaluate.js - Execute JavaScript in page context
- snapshot.js - Extract interactive elements with metadata
- console.js - Monitor console messages/errors
- network.js - Track HTTP requests/responses
- performance.js - Measure Core Web Vitals + record traces

cd .claude/skills/chrome-devtools/scripts
node screenshot.js --url https://example.com --output ./docs/screenshots/page.png
Important: Always save screenshots to the ./docs/screenshots directory.
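If that directory does not exist yet, create it before the first capture (mkdir -p is plain POSIX, not part of the skill's scripts):
mkdir -p ./docs/screenshots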
# Keep browser open with --close false
node navigate.js --url https://example.com/login --close false
node fill.js --selector "#email" --value "user@example.com" --close false
node fill.js --selector "#password" --value "secret" --close false
node click.js --selector "button[type=submit]"
# Extract specific fields with jq
node performance.js --url https://example.com | jq '.vitals.LCP'
# Save to file
node network.js --url https://example.com --output /tmp/requests.json
node evaluate.js --url https://example.com --script "
Array.from(document.querySelectorAll('.item')).map(el => ({
title: el.querySelector('h2')?.textContent,
link: el.querySelector('a')?.href
}))
" | jq '.result'
PERF=$(node performance.js --url https://example.com)
LCP=$(echo "$PERF" | jq '.vitals.LCP')
if (( $(echo "$LCP < 2500" | bc -l) )); then
echo "✓ LCP passed: ${LCP}ms"
else
echo "✗ LCP failed: ${LCP}ms"
fi
node fill.js --url https://example.com --selector "#search" --value "query" --close false
node click.js --selector "button[type=submit]"
node console.js --url https://example.com --types error,warn --duration 5000 | jq '.messageCount'
All scripts support:
- --headless false - Show browser window
- --close false - Keep browser open for chaining
- --timeout 30000 - Set timeout (milliseconds)
- --wait-until networkidle2 - Wait strategy

See ./scripts/README.md for complete options.
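These flags can be combined on any script; for example, to watch a page load in a visible browser with a longer timeout (the values here are illustrative):
node screenshot.js --url https://example.com --output ./docs/screenshots/debug.png --headless false --timeout 60000 --wait-until domcontentloaded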
All scripts output JSON to stdout:
{
"success": true,
"url": "https://example.com",
... // script-specific data
}
Errors go to stderr:
{
"success": false,
"error": "Error message"
}
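In a shell wrapper you can branch on the success field; a minimal sketch, assuming only that errors land on stderr as shown above (the temp-file path is arbitrary, and no particular exit code is assumed):
RESULT=$(node navigate.js --url https://example.com 2>/tmp/nav-error.json)
if [ "$(echo "$RESULT" | jq -r '.success' 2>/dev/null)" = "true" ]; then
  echo "$RESULT" | jq -r '.title'
else
  jq -r '.error' /tmp/nav-error.json
fi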
Use snapshot.js to discover selectors:
node snapshot.js --url https://example.com | jq '.elements[] | {tagName, text, selector}'
"Cannot find package 'puppeteer'"
npm install in the scripts directory"error while loading shared libraries: libnss3.so" (Linux/WSL)
./install-deps.sh in scripts directorysudo apt-get install -y libnss3 libnspr4 libasound2t64 libatk1.0-0 libatk-bridge2.0-0 libcups2 libdrm2 libxkbcommon0 libxcomposite1 libxdamage1 libxfixes3 libxrandr2 libgbm1"Failed to launch the browser process"
ls ~/.cache/puppeteernpm rebuild then npm installChrome not found
npm installnpx puppeteer browsers install chromeElement not found
node snapshot.js --url <url>Script hangs
--timeout 60000--wait-until load or --wait-until domcontentloadedBlank screenshot
--wait-until networkidle2--timeout 30000Permission denied on scripts
chmod +x *.shDetailed guides available in ./references/:
Create custom scripts using the shared library:
import { getBrowser, getPage, closeBrowser, outputJSON } from './lib/browser.js';

const page = await getPage(); // getPage() is assumed to return the active Puppeteer page
// Your automation logic, e.g. throttle the CPU via the Chrome DevTools Protocol:
const client = await page.createCDPSession();
await client.send('Emulation.setCPUThrottlingRate', { rate: 4 });
await closeBrowser();
outputJSON({ success: true }); // assumed to print the result object as JSON
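A custom script saved in the scripts directory runs the same way as the bundled ones and, if it uses outputJSON, can be piped through jq (the filename here is hypothetical):
node my-cpu-throttle-test.js | jq '.success'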
See the reference documentation in ./references/ for advanced patterns and complete API coverage.