
llm-benchmark

Retrieve and compare LLM performance metrics (latency, throughput, pricing, intelligence scores) using the Artificial Analysis API. Use this skill whenever the user wants to:

- Compare two or more LLMs on speed, cost, or quality metrics
- Find the fastest, cheapest, or smartest model for a given use case
- Get current benchmark scores (MMLU, GPQA, coding, math) for a model or set of models
- Analyze trade-offs between latency, throughput, and pricing across AI providers
- Answer questions like "which model has the lowest TTFT?", "what's the cheapest model above X intelligence score?", or "compare GPT-4o vs Claude vs Gemini on speed and cost"

Trigger any time the user mentions model comparison, LLM benchmarks, token speed, time to first token, or asks which model to pick. Requires a valid Artificial Analysis API key (set as the env var ARTIFICIAL_ANALYSIS_API_KEY or provided inline).
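The description above implies a two-step workflow: fetch model metrics from the Artificial Analysis API, then filter and rank them locally. A minimal Python sketch of that workflow follows; the endpoint URL, header name, and response field names (`intelligence_index`, `price_per_1m_tokens`) are assumptions for illustration, not confirmed by this listing:

```python
import json
import os
from urllib.request import Request, urlopen


def cheapest_above(models, min_intelligence):
    """Return the cheapest model meeting an intelligence threshold.

    `models` is a list of dicts with (assumed) keys:
    'name', 'intelligence_index', 'price_per_1m_tokens'.
    """
    candidates = [m for m in models if m["intelligence_index"] >= min_intelligence]
    if not candidates:
        return None
    return min(candidates, key=lambda m: m["price_per_1m_tokens"])


def fetch_models(api_key):
    """Fetch model metrics; URL and header name are assumptions."""
    req = Request(
        "https://artificialanalysis.ai/api/v2/data/llms/models",  # assumed endpoint
        headers={"x-api-key": api_key},  # assumed auth header
    )
    with urlopen(req) as resp:
        return json.load(resp)["data"]


if __name__ == "__main__":
    key = os.environ.get("ARTIFICIAL_ANALYSIS_API_KEY")
    # Stand-in data so the filtering logic runs without an API key.
    sample = [
        {"name": "model-a", "intelligence_index": 40, "price_per_1m_tokens": 0.50},
        {"name": "model-b", "intelligence_index": 55, "price_per_1m_tokens": 3.00},
        {"name": "model-c", "intelligence_index": 60, "price_per_1m_tokens": 1.20},
    ]
    models = fetch_models(key) if key else sample
    best = cheapest_above(models, 50)
    print(best["name"] if best else "no model meets the threshold")
```

The ranking logic is kept separate from the network call so the same helper can answer other comparison queries (fastest, lowest TTFT) by swapping the key function.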

stars: 0
forks: 0
updated: April 8, 2026, 12:29
File explorer
3 files
SKILL.md
readonly