llm-evaluation

LLM evaluation and testing patterns including prompt testing, hallucination detection, benchmark creation, and quality metrics. Use when testing LLM applications, validating prompt quality, implementing systematic evaluation, or measuring LLM performance.
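
The description covers prompt testing and quality metrics at a high level; below is a minimal sketch of what a systematic prompt-evaluation harness could look like. It assumes the model under test is exposed as a plain `prompt -> response` callable; `EvalCase`, `run_eval`, and `fake_model` are illustrative names and are not part of this skill.

```python
# Minimal prompt-evaluation harness sketch. Assumes the model is a plain
# callable (prompt -> response string); all names here are illustrative.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class EvalCase:
    prompt: str                 # input sent to the model
    must_contain: List[str]     # substrings expected in a correct answer


def run_eval(model: Callable[[str], str], cases: List[EvalCase]) -> float:
    """Run each case through the model and return the fraction that pass."""
    passed = 0
    for case in cases:
        response = model(case.prompt)
        # A case passes only if every expected substring appears (case-insensitive).
        if all(token.lower() in response.lower() for token in case.must_contain):
            passed += 1
    return passed / len(cases)


if __name__ == "__main__":
    # Stub model for demonstration; swap in a real LLM client in practice.
    def fake_model(prompt: str) -> str:
        return "Paris is the capital of France."

    cases = [EvalCase(prompt="What is the capital of France?", must_contain=["Paris"])]
    print(f"pass rate: {run_eval(fake_model, cases):.0%}")
```

Substring checks are only one scoring strategy; the same harness shape extends to regex assertions, rubric-based LLM grading, or hallucination checks by swapping the pass condition inside `run_eval`.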

Stars: 60 · Forks: 15 · Updated: November 1, 2025 at 23:59