vercel-runtime-log-audit
// Audit Vercel production runtime logs across projects, with reliable patterns for finding 404 and 5xx issues and publishing operational summaries.
| name | vercel-runtime-log-audit |
| description | Audit Vercel production runtime logs across projects, with reliable patterns for finding 404 and 5xx issues and publishing operational summaries. |
| version | 1.0.0 |
| author | Hermes Agent |
| license | MIT |
| metadata | {"hermes":{"tags":["vercel","logs","runtime","production","incident","operations"]}} |
Use this when the user asks to inspect Vercel runtime logs for abnormal production behavior such as 404s, 500s, or recurring runtime exceptions.
Confirm these first:
command -v vercel
printf 'VERCEL_TOKEN=%s\n' "${VERCEL_TOKEN:+set}"
printf 'VERCEL_TEAM_ID=%s\n' "${VERCEL_TEAM_ID:+set}"
vercel whoami
If you need all accessible projects, prefer the API because vercel projects ls --json may not emit usable JSON in automation:
python3 - <<'PY'
import json, os, urllib.request

team = os.environ['VERCEL_TEAM_ID']
token = os.environ['VERCEL_TOKEN']
url = f'https://api.vercel.com/v9/projects?teamId={team}&limit=100'
req = urllib.request.Request(url, headers={'Authorization': f'Bearer {token}'})
with urllib.request.urlopen(req) as r:
    data = json.load(r)
for p in data.get('projects', []):
    print(p['name'])
PY
Important current-environment caveat:
- The flag-based historical form (vercel logs --project ... --since ... --until ...) may not be supported: vercel logs --help may only expose vercel logs url|deploymentId, i.e. live tailing for a ready deployment from now for a short period.
- Run vercel logs --help first and verify which command shape is actually available.
- If only url|deploymentId tailing is supported, switch early to the direct request-logs API, or document that exact historical recomputation is blocked from the current environment.

For a single-project audit, first confirm that recent production logs actually exist and see the rough status mix with a small bounded query:
vercel logs --project <project> --environment production --since 24h --json --no-branch --limit 50
This should usually finish in a few seconds. Use it to answer whether recent production logs exist at all and what the rough status mix looks like.
This is especially useful when a previous broad query returned zero results or timed out, because it distinguishes:
- no logs returned because the query/parse strategy was bad
- logs exist, but there really are no 404/500 entries in the window

Do not rely on --status-code 500 for 5xx discovery across projects. In practice, vercel logs --status-code 500 --json or --status-code 4xx/5xx can return no JSON results even when --level error clearly returns production 500 entries.
For 5xx auditing, use:
vercel logs --project <project> --environment production --since 24h --level error --json --no-branch --limit 1000
Then filter client-side to responseStatusCode starting with 5.
For 404 discovery, --search 'status:404' works better than --status-code 404. Use:
vercel logs --project <project> --environment production --since 24h --search 'status:404' --json --no-branch --limit 1000
But treat the result carefully:
- entries can repeat, so dedupe by id before counting

Note that vercel logs progress lines are not JSON. The CLI prints lines like:
Fetching project "..."
Fetching logs...
These may appear on stdout or stderr. Ignore non-JSON lines and parse only lines beginning with {.
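The ignore-non-JSON rule can be sketched as a small parser (a sketch only; field names follow the CLI's --json rows):

```python
import json

def parse_log_lines(raw_output: str):
    """Yield parsed JSON records, skipping CLI progress lines.

    The CLI may interleave non-JSON progress lines ("Fetching ...") on
    stdout or stderr, so keep only lines that start with '{'.
    """
    for line in raw_output.splitlines():
        line = line.strip()
        if not line.startswith("{"):
            continue
        try:
            yield json.loads(line)
        except json.JSONDecodeError:
            pass  # truncated or partial line: skip rather than crash

# Example with mixed CLI output:
mixed = 'Fetching logs...\n{"id": "a1", "responseStatusCode": 404}\nnot json\n'
records = list(parse_log_lines(mixed))
```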
When querying a bounded historical window such as:
vercel logs --project <project> --environment production \
--since '2026-04-25T00:00:00+09:00' \
--until '2026-04-26T00:00:00+09:00' \
--json --no-branch --limit 200
practical behavior may differ from the requested --limit:
- after deduping by id, you may end up with only about 50 unique rows even when --limit 200 or higher was requested
- this affects 404 / 307 queries alike

Interpretation rule:
- always dedupe by id before counting paths or statuses

This matters especially for wiki or incident summaries covering exact calendar days: the safest framing is sampled status mix and repeated-path patterns, not exact volume.
When you call the backend directly at:
https://vercel.com/api/logs/request-logs
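A hedged sketch of one bounded call to this endpoint (fetch_request_logs and iso_to_epoch_ms are illustrative helper names, not an official client; startDate/endDate are epoch milliseconds per the field notes in this section):

```python
import json
import urllib.parse
import urllib.request
from datetime import datetime, timezone

def iso_to_epoch_ms(iso: str) -> int:
    """startDate/endDate must be epoch milliseconds, not ISO timestamps."""
    return int(datetime.fromisoformat(iso).astimezone(timezone.utc).timestamp() * 1000)

def fetch_request_logs(owner_id, project_id, start_iso, end_iso, token,
                       environment="production"):
    """One bounded query against the request-logs backend (illustrative).

    Returns (rows, has_more): the response carries rows plus a top-level
    hasMoreRows flag rather than a pagination structure.
    """
    params = urllib.parse.urlencode({
        "ownerId": owner_id,
        "projectId": project_id,
        "startDate": iso_to_epoch_ms(start_iso),
        "endDate": iso_to_epoch_ms(end_iso),
        # Without this filter, preview traffic can dominate project-level rows.
        "environment": environment,
    })
    req = urllib.request.Request(
        f"https://vercel.com/api/logs/request-logs?{params}",
        headers={"Authorization": f"Bearer {token}"},
    )
    with urllib.request.urlopen(req) as resp:
        payload = json.load(resp)
    return payload.get("rows", []), payload.get("hasMoreRows", False)
```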
note these field-level realities:
- required parameters include ownerId, projectId, startDate, and endDate
- startDate / endDate must be epoch milliseconds, not ISO timestamps
- pass environment=production; without the environment filter, project-level results can be heavily polluted by preview traffic and become misleading for production audits
- the response is rows plus a top-level hasMoreRows flag, not a pagination.data structure
- rows carry requestId, not the CLI-style top-level id
- rows carry statusCode, not responseStatusCode

If required fields are missing or typed wrong, typical API errors include:
Validation error: Required at "ownerId"
Validation error: Expected number, received nan at "startDate"

Important environment-scope finding:
- on corp-web-japan, an unfiltered current-day query returned a mix dominated by preview rows, with only a small minority of production rows
- pass environment=production for production audits, or environment=preview when you actually want preview-only evidence
- do not assume target=production or target=preview filters the same way; verify with a small probe query and count the returned environment values

For historical day-window 404 / 307 audits, another practical failure mode can appear:
- hasMoreRows = true, yet the next page can still return the exact same first-page requestId set instead of advancing
- HTTP 400 {"name":"ExceedsBillingLimitError"} for very small historical windows (observed even for a 1-minute May 7 KST window on corp-web-japan)
- "all" queries and some direct status filters such as 307 can still plateau at the first 50 unique rows while hasMoreRows = true
- direct filters such as 404 or 500 may return a complete set with hasMoreRows = false, so treat them separately instead of assuming all statuses have the same measurement quality

Interpretation rule:
- check the environment mix on a small sample; if preview traffic is present in a production audit, re-run with environment=production
- if successive pages repeat the same requestId values, stop treating the API as paginating correctly
- on ExceedsBillingLimitError for small historical windows, treat exact historical recomputation as platform-blocked from the current environment and preserve/report only directly observed data
- still run the 500 / 502 / 503 / 504 status queries, because zero-row checks are useful and fast even when 404 / 307 pagination is broken
- if you run curl verification after collecting the clean sample, record a cutoff timestamp and distinguish the clean pre-verification counts from the later raw rows polluted by your own checks

vercel logs --level error --limit 1000 can hit the 1000-line cap quickly on noisy projects. If you get 1000 results, report it as:
>=1000 errors in the window

Recommended windows:
- 24h
- 30d

This distinguishes active incidents from isolated noise.
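Because the CLI caps results, a tiny helper (illustrative name report_count) keeps summaries honest about capped counts:

```python
def report_count(n: int, cap: int = 1000) -> str:
    """Format a log count, flagging results that likely hit the CLI cap.

    If `cap` or more rows came back, the true count is unknown and at
    least `cap`, so report '>=cap' instead of a misleading exact number.
    """
    return f">={cap}" if n >= cap else str(n)

# report_count(37)   -> "37"
# report_count(1000) -> ">=1000"
```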
Get projects through the API, then loop per project.
vercel logs --project <project> --environment production --since 24h --level error --json --no-branch --limit 1000
Parse JSON lines only, then keep records where:
- responseStatusCode is 500, 502, 503, 504, etc.

Default cross-project method:
vercel logs --project <project> --environment production --since 24h --search 'status:404' --json --no-branch --limit 1000
Dedupe by id before summarizing.
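The dedupe-then-summarize step can be sketched as follows (a sketch assuming the CLI row fields id, responseStatusCode, and requestPath):

```python
from collections import Counter

def summarize_404s(records):
    """Dedupe CLI rows by id, then count (status, path) pairs.

    Duplicate ids are common in repeated or overlapping queries, so
    counting before deduping inflates the numbers.
    """
    unique = {}
    for rec in records:
        rid = rec.get("id")
        if rid is not None and rid not in unique:
            unique[rid] = rec
    pairs = Counter(
        (r.get("responseStatusCode"), r.get("requestPath")) for r in unique.values()
    )
    return pairs.most_common()

rows = [
    {"id": "a", "responseStatusCode": 404, "requestPath": "/env.json"},
    {"id": "a", "responseStatusCode": 404, "requestPath": "/env.json"},  # duplicate id
    {"id": "b", "responseStatusCode": 404, "requestPath": "/env.json"},
]
top = summarize_404s(rows)
```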
For a single project where you need a fast direct answer, also try the exact status-code filter:
vercel logs --project <project> --environment production --since 24h --status-code 404 --json --no-branch --limit 1000
In practice, this can return quickly and cleanly for one-project verification even though broader cross-project audits are often more reliable with status:404 search.
When the user asks only about one project and wants a quick answer, prefer these bounded direct checks before attempting any expensive full export:
vercel logs --project <project> --environment production --since 24h --status-code 500 --json --no-branch --limit 1000
vercel logs --project <project> --environment production --since 24h --status-code 5xx --json --no-branch --limit 1000
vercel logs --project <project> --environment production --since 24h --level error --json --no-branch --limit 1000
Interpretation pattern:
- run the bounded --limit 50 existence check first, then the direct 404, 500, and 5xx checks
- for 500s: the general vercel logs --json stream is recency-ordered, so a noisy later 404 period can push earlier same-day 500 rows out of a bounded sample
- when the general stream and a direct --status-code 500 query disagree, trust the direct status-specific result for existence, then inspect the returned 500 rows directly

For each project report:
- top (status, requestPath) pairs

Typical scanner/bot noise examples:
- /wp-admin/... and /wp-login.php
- .php probes
- /runtime-config.js, /env.json, /config.json, /swagger.json, /openapi.json, /.well-known/jwks.json
- /api/health, /health, /api/account, /api/v1/config, /api/v2/settings
- /.env.local, /backend/.env, /api/.env, /admin/.env, /config.env
- /.git/config and /.ssh/id_rsa

Likely real issues often show repeated requests to paths that current pages or recent links still reference, rather than scanner-style probe paths.

Operational rule:
- classify scanner noise into buckets such as config-probe, api-probe, secret-probe, exploit-probe

Important practical finding:
- when a project routes unmatched requests through a runtime catch-all that emits console.log (such as src/app/[...missing]/page.tsx), unmatched requests are intentionally made runtime-visible, so scanner noise will legitimately appear in Runtime Logs

Recommended approach:
- group lower-risk probe paths such as /swagger.json or /api/health separately from exploit-style paths such as /.git/*, /.ssh/*, /wp-admin/*

Action guidance:
- for clearly malicious probe paths, add firewall rules set to deny
- for ambiguous paths, use deny or challenge, depending on your tolerance and plan features

Interpretation rule:
- treat scanner/bot 404 noise as an operational-hygiene signal, not as evidence of an application bug
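The bucket classification can be sketched with an illustrative pattern map (the patterns are examples drawn from the noise paths above, not an exhaustive or official list):

```python
import re

# Illustrative patterns only; extend from your own observed noise.
BUCKET_PATTERNS = [
    ("secret-probe", re.compile(r"\.env|/\.git/|/\.ssh/")),
    ("config-probe", re.compile(r"config|swagger|openapi|jwks|env\.json")),
    ("api-probe", re.compile(r"^/api/|/health$")),
    ("exploit-probe", re.compile(r"wp-admin|wp-login|\.php$")),
]

def classify_path(path: str) -> str:
    """Return the first matching bucket label for a requested path."""
    for bucket, pattern in BUCKET_PATTERNS:
        if pattern.search(path):
            return bucket
    return "unclassified"
```

Pattern order matters: secret-probe patterns are checked first so that paths like /api/.env are not swallowed by the broader api-probe rule.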
Important experiential finding: a user can genuinely hit a 404 on a Vercel-hosted site even when the project's production runtime-log queries show zero 404 entries.
This happens when the 404 is served by Vercel's edge/static layer rather than by application runtime execution.
What this means operationally:
- vercel logs runtime queries are not a complete source of truth for user-visible 404s
- 404 = 0 in runtime-log output does NOT prove users saw no 404 pages
- always distinguish runtime 404s from edge/static 404s

When the user says they personally saw a 404 but runtime logs show none, do a bounded synthetic test:
curl -I -sS 'https://<domain>/__hermes-vercel-log-test-404' | sed -n '1,20p'
Then immediately run a recent bounded log query for the project:
vercel logs --project <project> --environment production --since 5m --json --no-branch --limit 200
If the HTTP response is 404 but the request path does not appear in recent runtime logs, treat that as evidence that the 404 is outside runtime-log visibility.
These response headers are strong evidence that the 404 was handled by Vercel's edge/static layer rather than by app runtime:
- x-matched-path: /404
- x-vercel-cache: HIT or other cache-layer responses
- server: Vercel

Interpretation: when these headers accompany a 404 that never shows up in runtime logs, the request was answered before reaching app runtime, so runtime-log queries cannot see it.
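Given a dict of response headers from such a probe, the edge/static check can be sketched as a pure function (a heuristic only; header semantics as described above):

```python
def looks_like_edge_404(headers: dict) -> bool:
    """Heuristic: True when 404-response headers suggest the edge/static
    layer answered, i.e. the request likely never reached app runtime.

    server: Vercel alone is not evidence, since all responses carry it.
    """
    h = {k.lower(): v for k, v in headers.items()}
    if h.get("x-matched-path") == "/404":
        return True
    if h.get("x-vercel-cache", "").upper() == "HIT":
        return True
    return False
```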
During path-specific incident work, your own curl, browser, or synthetic checks can immediately appear in the same runtime-log window and distort a tiny sample.
Use this pattern:
- record a timestamp before running your own probes, then query with an --until '<timestamp>' bound just before your own checks.

This is especially important when only a few requests exist in the window.
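Separating the clean sample from self-inflicted rows can be sketched as follows (assumes an epoch-millisecond timestamp field on each row; adjust if your rows carry ISO strings):

```python
def split_at_cutoff(records, cutoff_ms: int):
    """Split rows into (clean, polluted) by a recorded cutoff timestamp.

    Rows at or after the cutoff may include your own curl/browser checks
    and should be reported separately from the pre-verification sample.
    """
    clean = [r for r in records if r.get("timestamp", 0) < cutoff_ms]
    polluted = [r for r in records if r.get("timestamp", 0) >= cutoff_ms]
    return clean, polluted
```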
For custom runtime handlers, the message field may embed structured payloads such as:
[runtime-missing-redirect] {...}
[runtime-404] {...}

Do not stop at top-level fields like requestPath and responseStatusCode.
Parse the trailing JSON object from message and extract fields such as:
- requestedPath
- redirectTarget
- host
- referer
- userAgent

This is often the only place where referrer and redirect-target evidence exists.
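Extracting that trailing payload can be sketched as follows (assumes the payload is a single JSON object starting at the first '{' in message):

```python
import json

def payload_from_message(message: str):
    """Parse the structured payload a custom handler appended to message.

    E.g. '[runtime-404] {"requestedPath": "/x", "referer": null}'.
    Returns None when no parseable JSON object is present.
    """
    start = message.find("{")
    if start == -1:
        return None
    try:
        return json.loads(message[start:])
    except json.JSONDecodeError:
        return None

info = payload_from_message('[runtime-404] {"requestedPath": "/old", "referer": null}')
```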
Practical implication:
- referer: null plus crawler user agents often means direct bot/unfurl fetches rather than normal in-site navigation
- redirectTarget values let you verify whether an allowlisted redirect is sending traffic to the intended upstream URL

python3 - <<'PY'
import subprocess, json

cmd = ['vercel','logs','--project','<project>','--environment','production',
       '--since','24h','--level','error','--json','--no-branch','--limit','20']
p = subprocess.run(cmd, capture_output=True, text=True)
for raw in ((p.stdout or '') + '\n' + (p.stderr or '')).splitlines():
    raw = raw.strip()
    if raw.startswith('{'):
        x = json.loads(raw)
        if str(x.get('responseStatusCode', '')).startswith('5'):
            print(json.dumps({k: x.get(k) for k in ['timestamp','responseStatusCode','requestPath','message','source','deploymentId']}, ensure_ascii=False))
PY
python3 - <<'PY'
import subprocess, json, collections

cmd = ['vercel','logs','--project','<project>','--environment','production',
       '--since','24h','--level','error','--json','--no-branch','--limit','1000']
p = subprocess.run(cmd, capture_output=True, text=True)
items = []
for raw in ((p.stdout or '') + '\n' + (p.stderr or '')).splitlines():
    raw = raw.strip()
    if raw.startswith('{'):
        try:
            x = json.loads(raw)
            if str(x.get('responseStatusCode', '')).startswith('5'):
                items.append(x)
        except json.JSONDecodeError:
            pass
print('count', len(items))
print('statuses', collections.Counter(x.get('responseStatusCode') for x in items))
PY
When the user wants nonexistent page URIs to appear in Vercel Runtime Logs, the practical fix for a Next.js App Router site is to force unmatched page paths through runtime.
If the app has no matching runtime page/route for an unknown path, Vercel can serve the 404 at the edge/static layer. In that case:
404Add a root catch-all page route:
src/app/[...missing]/page.tsx
Recommended implementation pattern:
import { headers } from "next/headers";
import { notFound } from "next/navigation";

export const dynamic = "force-dynamic";

export default async function MissingRoutePage({
  params,
}: {
  params: Promise<{ missing: string[] }>;
}) {
  const { missing } = await params;
  const requestHeaders = await headers();
  const requestedPath = `/${missing.join("/")}`;

  console.log(
    "[runtime-404]",
    JSON.stringify({
      requestedPath,
      host: requestHeaders.get("host"),
      referer: requestHeaders.get("referer"),
      userAgent: requestHeaders.get("user-agent"),
    }),
  );

  notFound();
}
Why this works:
- dynamic = "force-dynamic" ensures runtime execution rather than static optimization
- console.log creates a Runtime Log entry
- notFound() preserves the correct 404 response for the user

Verify locally:
npm test
npm run typecheck
npm run build
npm run start -- --port 3012
curl -I http://127.0.0.1:3012/__hermes-vercel-log-test-404
Then confirm the local server output includes a line like:
[runtime-404] {"requestedPath":"/__hermes-vercel-log-test-404", ...}
If the repo has git.deploymentEnabled: false, branch pushes may not create preview deployments automatically. In that case use:
vercel pull --yes --environment=preview
vercel build
vercel deploy --prebuilt --yes --no-wait
Wait for the preview deployment to become READY, then trigger a missing path on the preview URL.
vercel logs --project <project> --environment preview --since 10m --status-code 404 --json --no-branch --limit 50
Expected result:
- source is usually serverless
- message contains [runtime-404]

This solves visibility for unmatched page-like routes captured by the App Router catch-all route. It does NOT guarantee identical runtime-log visibility for every possible missing asset or other platform-level miss.
Be explicit about measurement quality:
- 5xx counts from --level error are the most reliable quick signal
- 404 counts from status:404 are best treated as sampled/distinct entries after dedupe unless you have confirmed the requests are runtime-visible
- report capped results as >=1000, not as exact counts
- distinguish runtime-visible 404s after the fix from prior edge/static 404s outside runtime visibility

A practical failure mode: the local environment can have VERCEL_TOKEN / VERCEL_TEAM_ID set but the token is expired or invalid, and newer CLI flows may fail with errors such as:
The specified token is not valid
403 on project listing even though repo docs contain the expected team/project identifiers

Fallback workflow for still-useful investigation:
- read repo operational docs such as ops/vercel-firewall/README.md for teamId, projectId, project name, and expected domains
- run curl -I against production and stage.

This fallback cannot prove the missing referer or exact runtime row contents, so report that limitation explicitly. But it is still often enough to answer basic questions about current domain behavior, such as whether a path still resolves or redirects.
If you see a path-specific 404 such as /section/.../download and current HTML no longer links there:
- if the old path was migrated (for example /download -> /pdf) and current HTML only emits the new path, classify the likely cause as stale external/bookmarked/cached traffic rather than a current in-site navigation bug, unless logs later prove otherwise

A good summary covers the status mix, top repeated paths, and explicit measurement caveats per project.
When publishing a wiki snapshot from runtime logs:
- cite the audited origin/main SHA as context, but do not pretend the findings came from code inspection alone
- do not claim --status-code 500 is sufficient in every context
- remember that Fetching ... lines are not JSON
- do not report 1000 as the exact count when the CLI likely capped the result
- do not run a --limit 1000 full-log export for a single project when a --limit 50 existence check plus direct 404/500/5xx queries would answer the question faster