$ pwd

evaluating-llms

// Evaluate LLM systems using automated metrics, LLM-as-judge, and benchmarks. Use when testing prompt quality, validating RAG pipelines, measuring safety (hallucinations, bias), or comparing models for production deployment.
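The description mentions automated metrics among the skill's evaluation approaches. As a minimal illustrative sketch (not the skill's own implementation, which is not shown here), two common reference-based metrics for scoring LLM outputs are exact match and token-level F1:

```python
# Sketch of two common automated metrics for scoring an LLM output
# against a reference answer: exact match and token-level F1.
# Illustrative only; the skill's actual metrics are not shown on this page.
from collections import Counter


def normalize(text: str) -> list[str]:
    """Lowercase and split into whitespace tokens."""
    return text.lower().split()


def exact_match(prediction: str, reference: str) -> float:
    """1.0 if the normalized texts are identical, else 0.0."""
    return float(normalize(prediction) == normalize(reference))


def token_f1(prediction: str, reference: str) -> float:
    """Harmonic mean of token precision and recall."""
    pred, ref = normalize(prediction), normalize(reference)
    if not pred or not ref:
        return float(pred == ref)
    # Count overlapping tokens, respecting multiplicity.
    overlap = sum((Counter(pred) & Counter(ref)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred)
    recall = overlap / len(ref)
    return 2 * precision * recall / (precision + recall)
```

LLM-as-judge evaluation, also named in the description, trades these cheap deterministic checks for a grader model's rubric-based scoring, which handles open-ended outputs at the cost of judge bias and variance.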

$ git log --oneline --stat
stars:345
forks:52
updated: December 9, 2025 at 21:02
File explorer
18 files
SKILL.md
readonly