promptfoo-evaluation

Configures and runs LLM evaluations using the Promptfoo framework. Use when setting up prompt testing, creating evaluation configs (promptfooconfig.yaml), writing custom Python assertions, implementing llm-rubric for LLM-as-judge grading, or managing few-shot examples in prompts. Triggers on keywords like "promptfoo", "eval", "LLM evaluation", "prompt testing", or "model comparison".
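
To make the moving parts concrete, here is a minimal sketch of a promptfooconfig.yaml that exercises the assertion styles the description mentions. The provider names, prompt, and test values are illustrative assumptions, not taken from this skill's files.

```yaml
# promptfooconfig.yaml -- illustrative sketch; providers, prompt,
# and test values below are assumptions, not part of this skill.
description: "Compare two models on a one-sentence summarization task"

prompts:
  - "Summarize in one sentence: {{article}}"

providers:
  - openai:gpt-4o-mini                            # hypothetical choices
  - anthropic:messages:claude-3-5-sonnet-20241022

tests:
  - vars:
      article: >-
        Promptfoo runs every prompt against every provider and scores
        each output with the assertions listed below.
    assert:
      - type: contains            # simple string check
        value: "Promptfoo"
      - type: llm-rubric          # LLM-as-judge grading
        value: "Is a faithful one-sentence summary of the article"
      - type: python              # custom Python assertion (sketched below)
        value: file://assertions/length_check.py
```

For `type: python` assertions, Promptfoo loads the referenced file and calls its `get_assert(output, context)` function, which may return a bool, a float score, or a GradingResult-style dict. The file name and the 40-word threshold below are made up for the example.

```python
# assertions/length_check.py -- hypothetical example file.
# Promptfoo calls get_assert(output, context) for `type: python`
# assertions; returning a dict attaches a score and a reason.

def get_assert(output: str, context) -> dict:
    """Pass if the model's summary stays within an assumed 40-word budget."""
    word_count = len(output.split())
    passed = word_count <= 40  # arbitrary illustrative threshold
    return {
        "pass": passed,
        "score": 1.0 if passed else 0.0,
        "reason": f"Summary is {word_count} words (limit: 40)",
    }
```

With both files in place, `npx promptfoo@latest eval` runs the test matrix and `npx promptfoo@latest view` opens the results in a browser.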

Stars: 884 · Forks: 139 · Updated: March 2, 2026 at 12:01
Files (4): SKILL.md