eval-guide

// Eval enablement accelerator — help customers think through "what does good look like" for their AI agent, then generate a structured eval plan and test cases they can use immediately. No built agent required — an idea or description is enough. Promotes eval-first development: write evals before building. Use when anyone mentions agent evaluation, eval planning, "what should we test", "how do we know if the agent is good", test case generation, or interpreting eval results.
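A structured eval plan usually pairs each test case with a success criterion that can be checked automatically. A minimal sketch of what such a test case and check could look like (the `EvalCase` fields and the `grade` helper are illustrative assumptions, not the skill's actual output format):

```python
from dataclasses import dataclass

# Illustrative eval test case; field names are assumptions,
# not the skill's actual output format.
@dataclass
class EvalCase:
    prompt: str               # input given to the agent
    expected_behavior: str    # what "good" looks like, in plain language
    must_include: list[str]   # substrings a passing response should contain

def grade(case: EvalCase, response: str) -> bool:
    """Pass if the response mentions every required element."""
    return all(s.lower() in response.lower() for s in case.must_include)

case = EvalCase(
    prompt="Summarize this refund policy for a customer.",
    expected_behavior="Accurate; mentions the 30-day window and receipt requirement.",
    must_include=["30 day", "receipt"],
)

response = "Refunds are available within a 30 day window if you have a receipt."
print(grade(case, response))  # → True
```

Writing even a rough set of cases like this before building the agent is the eval-first workflow the skill promotes: the cases define "good" up front, then the agent is built to pass them.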

Stars: 6 · Forks: 4 · Updated: May 6, 2026 at 14:45
Files: 20, including SKILL.md (read-only)