$pwd: evaluating-llms

// Evaluate LLM systems using automated metrics, LLM-as-judge, and benchmarks. Use when testing prompt quality, validating RAG pipelines, measuring safety (hallucinations, bias), or comparing models for production deployment.
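The description mentions automated metrics as one evaluation path. As a minimal sketch of that idea (not this skill's actual code; the function name is an assumption), token-level F1 is a standard automated metric for QA-style LLM evaluation:

```python
from collections import Counter

def token_f1(prediction: str, reference: str) -> float:
    """Token-overlap F1: a standard automated metric for QA-style eval.

    Hypothetical helper for illustration; not part of the skill itself.
    """
    pred_tokens = prediction.lower().split()
    ref_tokens = reference.lower().split()
    # Edge case: treat two empty outputs as a match, one-sided empty as a miss.
    if not pred_tokens or not ref_tokens:
        return float(pred_tokens == ref_tokens)
    # Count tokens shared between prediction and reference (multiset overlap).
    overlap = sum((Counter(pred_tokens) & Counter(ref_tokens)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)
```

Automated metrics like this are cheap and deterministic, which is why they typically run before more expensive LLM-as-judge or benchmark passes.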

$ git log --oneline --stat
stars: 345
forks: 52
updated: December 9, 2025 at 21:02
Files: 18, including SKILL.md (read-only)