| Field | Value |
|---|---|
| name | performance-testing |
| description | Test application performance, scalability, and resilience. Use when planning load testing, stress testing, or optimizing system performance. |
| category | specialized-testing |
| priority | high |
| tokenEstimate | 1100 |
| agents | ["qe-performance-tester","qe-quality-analyzer","qe-production-intelligence"] |
| implementation_status | optimized |
| optimization_version | 1 |
| last_optimized | "2025-12-02T00:00:00.000Z" |
| dependencies | [] |
| quick_reference_card | true |
| tags | ["performance","load-testing","stress-testing","scalability","k6","bottlenecks"] |
<default_to_action> When testing performance or planning load tests:
Quick Test Type Selection:
| Type | Purpose | When |
|---|---|---|
| Load | Expected traffic | Every release |
| Stress | Beyond capacity | Quarterly |
| Spike | Sudden surge | Before events |
| Endurance | Memory leaks | After code changes |
| Scalability | Scaling validation | Infrastructure changes |

Critical Success Factors:
| Metric | Target | Why |
|---|---|---|
| p95 response | < 200ms | User experience |
| Throughput | 10k req/min | Capacity |
| Error rate | < 0.1% | Reliability |
| CPU | < 70% | Headroom |
| Memory | < 80% | Stability |
Agent roles:
- qe-performance-tester: Load test orchestration
- qe-quality-analyzer: Results analysis
- qe-production-intelligence: Production comparison

Define measurable targets.
Bad: "The system should be fast"
Good: "p95 response time < 200ms under 1,000 concurrent users"
```javascript
export const options = {
  thresholds: {
    http_req_duration: ['p(95)<200'], // 95% of requests < 200ms
    http_req_failed: ['rate<0.01'],   // < 1% failed requests
  },
};
```
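The throughput target from the table above can be gated the same way: k6 counter metrics accept a per-second rate threshold, and 10k req/min is roughly 167 req/s. A sketch extending the options, using the table's stricter 0.1% error budget:

```javascript
export const options = {
  thresholds: {
    http_req_duration: ['p(95)<200'], // p95 response < 200ms
    http_req_failed: ['rate<0.001'],  // error rate < 0.1% (table target)
    http_reqs: ['rate>166'],          // ~10k requests/min sustained
  },
};
```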
Bad: Every user hits the homepage repeatedly
Good: Model actual user behavior
```javascript
import { sleep } from 'k6';
// randomIntBetween comes from the official k6 utils jslib
import { randomIntBetween } from 'https://jslib.k6.io/k6-utils/1.2.0/index.js';

// Realistic distribution:
// 40% browse, 30% search, 20% details, 10% checkout
export default function () {
  const action = Math.random();
  if (action < 0.4) browse();
  else if (action < 0.7) search();
  else if (action < 0.9) viewProduct();
  else checkout();
  sleep(randomIntBetween(1, 5)); // Think time between actions
}
// browse(), search(), viewProduct(), checkout() are user-defined
// functions wrapping the relevant HTTP calls for each journey.
```
Database bottlenecks
Symptoms: Slow queries under load, connection pool exhaustion
Fixes: Add indexes, optimize N+1 queries, increase pool size, add read replicas
```javascript
// BAD: N+1 — 100 orders trigger 101 queries
const orders = await Order.findAll();
for (const order of orders) {
  const customer = await Customer.findById(order.customerId);
}

// GOOD: 1 query with an eager-loaded join
const orders = await Order.findAll({ include: [Customer] });
```
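For pool sizing, a Sequelize-style sketch; the numbers are illustrative assumptions to tune against measured load, not recommendations:

```javascript
import { Sequelize } from 'sequelize';

// Keep max below the database's connection limit summed across all app instances.
const sequelize = new Sequelize(process.env.DATABASE_URL, {
  pool: {
    max: 20,        // upper bound on concurrent connections
    min: 5,         // warm connections kept ready
    acquire: 30000, // ms to wait for a free connection before erroring
    idle: 10000,    // ms before an idle connection is released
  },
});
```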
Synchronous processing
Problem: Blocking operations in the request path (e.g., sending email during checkout)
Fix: Use message queues, process async, return immediately
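A minimal sketch of the queue pattern using BullMQ as one option; the queue name, payload, and saveOrder helper are illustrative assumptions:

```javascript
import { Queue } from 'bullmq';

// Producer side: enqueue the slow work, return to the user immediately.
const emailQueue = new Queue('order-emails', {
  connection: { host: 'localhost', port: 6379 }, // Redis backing store
});

async function checkout(order) {
  await saveOrder(order); // the only work that must block the request
  // A separate worker process picks this job up and sends the email.
  await emailQueue.add('confirmation', { orderId: order.id });
  return { status: 'ok' };
}
```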
Memory leaks
Detection: Endurance testing, memory profiling
Common causes: Event listeners never cleaned up, caches without eviction
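The unbounded-cache cause is easy to miss in review; a sketch of the leak and a simple FIFO-eviction fix (loadUser is an illustrative helper; a real LRU library is better in practice):

```javascript
// LEAK: a module-level cache that only ever grows — every unique key
// stays in memory for the life of the process.
const cache = new Map();
function getUser(id) {
  if (!cache.has(id)) cache.set(id, loadUser(id));
  return cache.get(id);
}

// FIX (sketch): bound the cache; Map preserves insertion order,
// so deleting the first key evicts the oldest entry (FIFO).
const MAX_ENTRIES = 10_000;
function getUserBounded(id) {
  if (!cache.has(id)) {
    if (cache.size >= MAX_ENTRIES) cache.delete(cache.keys().next().value);
    cache.set(id, loadUser(id));
  }
  return cache.get(id);
}
```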
Slow or failing dependencies
Solutions: Aggressive timeouts, circuit breakers, caching, graceful degradation
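A sketch pairing a timeout with a circuit breaker via the opossum package, as one option; callInventoryService and the fallback payload are illustrative assumptions:

```javascript
import CircuitBreaker from 'opossum';

const breaker = new CircuitBreaker(callInventoryService, {
  timeout: 2000,                // fail fast: abort calls slower than 2s
  errorThresholdPercentage: 50, // open the circuit at 50% failures
  resetTimeout: 10000,          // probe the dependency again after 10s
});

// Graceful degradation: serve a safe default while the circuit is open.
breaker.fallback(() => ({ inStock: null, degraded: true }));

const result = await breaker.fire(productId);
```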
```javascript
// performance-test.js
import http from 'k6/http';
import { check, sleep } from 'k6';

export const options = {
  stages: [
    { duration: '1m', target: 50 }, // Ramp up
    { duration: '3m', target: 50 }, // Steady
    { duration: '1m', target: 0 },  // Ramp down
  ],
  thresholds: {
    http_req_duration: ['p(95)<200'],
    http_req_failed: ['rate<0.01'],
  },
};

export default function () {
  const res = http.get('https://api.example.com/products');
  check(res, {
    'status is 200': (r) => r.status === 200,
    'response time < 200ms': (r) => r.timings.duration < 200,
  });
  sleep(1);
}
```
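Run it with `k6 run performance-test.js`. When any threshold fails, k6 exits non-zero, which is what lets CI treat the run as a quality gate.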
```yaml
# GitHub Actions
- name: Run k6 test
  uses: grafana/k6-action@v0.3.0
  with:
    filename: performance-test.js
```
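That step needs a surrounding workflow; a minimal sketch, with the trigger and job name as assumptions:

```yaml
# .github/workflows/performance.yml
name: performance
on: [pull_request]
jobs:
  k6:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run k6 test
        uses: grafana/k6-action@v0.3.0
        with:
          filename: performance-test.js
```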
Healthy run:
Load: 1,000 users | p95: 180ms | Throughput: 5,000 req/s
Error rate: 0.05% | CPU: 65% | Memory: 70%

Failing run:
Load: 1,000 users | p95: 3,500ms ❌ | Throughput: 500 req/s ❌
Error rate: 5% ❌ | CPU: 95% ❌ | Memory: 90% ❌
| ❌ Anti-Pattern | ✅ Better |
|---|---|
| Testing too late | Test early and often |
| Unrealistic scenarios | Model real user behavior |
| 0 to 1000 users instantly | Ramp up gradually |
| No monitoring during tests | Monitor everything |
| No baseline | Establish and track trends |
| One-time testing | Continuous performance testing |
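For the "No baseline" row, k6's handleSummary hook can persist each run's metrics for trend tracking; a minimal sketch (the file name is an assumption, and defining handleSummary replaces the default end-of-test report):

```javascript
// Add to performance-test.js: emit a machine-readable summary per run
// so CI can diff it against a stored baseline and flag regressions.
export function handleSummary(data) {
  return {
    'perf-summary.json': JSON.stringify(data, null, 2),
  };
}
```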
```javascript
// Comprehensive load test
await Task("Load Test", {
  target: 'https://api.example.com',
  scenarios: {
    checkout: { vus: 100, duration: '5m' },
    search: { vus: 200, duration: '5m' },
    browse: { vus: 500, duration: '5m' }
  },
  thresholds: {
    'http_req_duration': ['p(95)<200'],
    'http_req_failed': ['rate<0.01']
  }
}, "qe-performance-tester");

// Bottleneck analysis
await Task("Analyze Bottlenecks", {
  testResults: perfTest,
  metrics: ['cpu', 'memory', 'db_queries', 'network']
}, "qe-performance-tester");

// CI integration
await Task("CI Performance Gate", {
  mode: 'smoke',
  duration: '1m',
  vus: 10,
  failOn: { 'p95_response_time': 300, 'error_rate': 0.01 }
}, "qe-performance-tester");
```
```
aqe/performance/
├── results/*     - Test execution results
├── baselines/*   - Performance baselines
├── bottlenecks/* - Identified bottlenecks
└── trends/*      - Historical trends
```
```javascript
const perfFleet = await FleetManager.coordinate({
  strategy: 'performance-testing',
  agents: [
    'qe-performance-tester',
    'qe-quality-analyzer',
    'qe-production-intelligence',
    'qe-deployment-readiness'
  ],
  topology: 'sequential'
});
```
- Performance is a feature: Test it like functionality
- Test continuously: Not just before launch
- Monitor production: Synthetic + real user monitoring
- Fix what matters: Focus on user-impacting bottlenecks
- Trend over time: Catch degradation early
With Agents: Agents automate load testing, analyze bottlenecks, and compare with production. Use agents to maintain performance at scale.