
llm-benchmark

Retrieve and compare LLM performance metrics (latency, throughput, pricing, intelligence scores) using the Artificial Analysis API. Use this skill whenever the user wants to:

- Compare two or more LLMs on speed, cost, or quality metrics
- Find the fastest, cheapest, or smartest model for a given use case
- Get current benchmark scores (MMLU, GPQA, coding, math) for a model or set of models
- Analyze trade-offs between latency, throughput, and pricing across AI providers
- Answer questions like "which model has the lowest TTFT?", "what's the cheapest model above X intelligence score?", or "compare GPT-4o vs Claude vs Gemini on speed and cost"

Trigger any time the user mentions model comparison, LLM benchmarks, token speed, time to first token, or asks which model to pick. Requires a valid Artificial Analysis API key (set as the env var ARTIFICIAL_ANALYSIS_API_KEY or provided inline).
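To illustrate how such a skill might call the API, here is a minimal sketch in Python. The endpoint URL, the `x-api-key` header, and the response field names are assumptions inferred from the description above, not a verified API reference; the only detail confirmed by the skill itself is the ARTIFICIAL_ANALYSIS_API_KEY environment variable. Consult the Artificial Analysis API documentation for the actual schema.

```python
# Sketch of a skill helper that fetches model metrics from the Artificial
# Analysis API and answers "cheapest model above X intelligence score".
# Endpoint path, header name, and field names are assumptions, not verified.
import os
import requests

API_URL = "https://artificialanalysis.ai/api/v2/data/llms/models"  # assumed endpoint


def fetch_models(api_key: str | None = None) -> list[dict]:
    """Fetch model benchmark records (latency, throughput, pricing, scores)."""
    key = api_key or os.environ["ARTIFICIAL_ANALYSIS_API_KEY"]
    resp = requests.get(API_URL, headers={"x-api-key": key}, timeout=30)
    resp.raise_for_status()
    return resp.json().get("data", [])


def cheapest_above(models: list[dict], min_intelligence: float) -> dict | None:
    """Return the lowest-priced model whose intelligence score meets the floor.

    The 'evaluations' and 'pricing' field names are illustrative assumptions.
    """
    candidates = [
        m for m in models
        if (m.get("evaluations", {}).get("intelligence_index") or 0) >= min_intelligence
    ]
    return min(
        candidates,
        key=lambda m: m.get("pricing", {}).get("blended_price_per_1m_tokens") or float("inf"),
        default=None,
    )


if __name__ == "__main__":
    models = fetch_models()
    best = cheapest_above(models, min_intelligence=40)
    if best:
        print(best.get("name"), best.get("pricing"))
```

A real skill would add error handling for missing keys and rate limits, and would filter or sort on whichever metric the user asked about (TTFT, output tokens per second, blended price, or a benchmark score).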

stars: 0
forks: 0
updated: April 8, 2026, 12:29
File explorer: 3 files
SKILL.md (readonly)