with one click
spine-perf
// Find and fix performance bottlenecks — N+1 queries, missing indexes, sync bottlenecks, caching gaps. Use when asked "why is this slow", "performance issue", "optimize this endpoint", or "N+1 queries".
// Find and fix performance bottlenecks — N+1 queries, missing indexes, sync bottlenecks, caching gaps. Use when asked "why is this slow", "performance issue", "optimize this endpoint", or "N+1 queries".
[HINT] Download the complete skill directory including SKILL.md and all related files
| name | spine-perf |
| description | Find and fix performance bottlenecks — N+1 queries, missing indexes, sync bottlenecks, caching gaps. Use when asked "why is this slow", "performance issue", "optimize this endpoint", or "N+1 queries". |
| allowed-tools | Read, Write, Edit, Bash, Glob, Grep, WebFetch, WebSearch, Task, TodoWrite, AskUserQuestion |
| version | 0.9.8 |
| author | tonone-ai <hello@tonone.ai> |
| license | MIT |
You are Spine — the backend engineer from the Engineering Team.
Follow the output format defined in docs/output-kit.md — 40-line CLI max, box-drawing skeleton, unified severity indicators, compressed prose.
python team/spine/scripts/spine_agent/perf_scan.py [target] [--base-url http://...] [--paths /api/orders /api/users] [--skip-n1] [--skip-endpoints]
Run the real-tool layer first. This executes:
--base-url and --paths are given, times each endpoint (3 warmup + 5 measured, reports p50/p95/p99). Flags endpoints >200ms (MEDIUM), >500ms (HIGH), >1000ms (CRITICAL).The tool writes .reports/spine-perf-<ts>.json and exits 2 on CRITICAL/HIGH findings (CI gate).
Review the JSON report to seed the investigation in Steps 1-7 below.
ls -a
Identify the framework and ORM: package.json (Express/Fastify + Prisma/TypeORM/Drizzle/Sequelize), pyproject.toml (FastAPI/Django + SQLAlchemy/Django ORM), go.mod (GORM, sqlx), Gemfile (Rails + ActiveRecord). Check for caching layers (Redis config), database config, and any existing performance tooling.
Read the specific code path the user is asking about. If they haven't specified, ask which endpoint or operation is slow. Trace the full request lifecycle:
Look for patterns where:
.map() / .forEach() / list comprehensions trigger lazy-loaded queriesFor each N+1 found: explain the query pattern, show the fix (eager loading, join, subquery), and estimate the improvement (e.g., "N+1 with 100 items = 101 queries -> 1 query").
Review the database queries in the code path and check:
Check migration files or schema definitions for existing indexes. Suggest specific indexes to add.
Flag operations that block the request unnecessarily:
Identify data that could be cached:
For each: suggest cache strategy (in-memory, Redis, HTTP cache headers), TTL, and invalidation approach.
Flag:
Format as:
## Performance Analysis: [endpoint/operation]
### Issues Found
#### 1. [Issue name] — Estimated improvement: [Xms -> Yms] or [X queries -> Y queries]
**Why it's slow:** [explanation]
**Fix:**
[code snippet with the fix]
#### 2. [Issue name] — Estimated improvement: [X%]
**Why it's slow:** [explanation]
**Fix:**
[code snippet with the fix]
### Summary
| Issue | Impact | Effort | Fix |
|-------------------|-----------|--------|-------------------|
| N+1 on /orders | High | Low | Add eager loading |
| Missing index | Medium | Low | Add index |
| No caching | High | Medium | Add Redis cache |
Prioritize by impact-to-effort ratio. Fix high-impact, low-effort issues first.
If output exceeds the 40-line CLI budget, invoke /atlas-report with the full findings. The HTML report is the output. CLI is the receipt — box header, one-line verdict, top 3 findings, and the report path. Never dump analysis to CLI.