| name | review-environments |
| description | Review verifiers environments for correctness, robustness, and ecosystem compatibility. Use when asked for environment code review, quality audit, migration validation, or release readiness checks for local environments or environments pulled from the Hub. |
Review Environments
Goal
Find correctness risks and regressions first, then assess maintainability and ecosystem compliance.
Review Input Modes
- Local environment module in ./environments/<env_name>.
- Pulled Hub environment via prime env pull owner/name.
- Installed package in the active workspace.
Review Workflow
- Identify the environment contract (a minimal sketch follows this list):
  - load_environment(...)
  - base class and rollout behavior (SingleTurnEnv, MultiTurnEnv, ToolEnv/MCPEnv/StatefulToolEnv, SandboxEnv/PythonEnv, V1 vf.Env with vf.Taskset/vf.Harness for framework programs, CliAgentEnv for sandboxed agents)
  - rubric and metrics
- Verify installability and the runtime entrypoint with the canonical eval path. Do not add --skip-upload unless the user explicitly requests that deviation; standard runs save automatically to the private Evaluations tab and prime eval tui:
  prime env install <env>
  prime eval run <env> -m openai/gpt-4.1-mini -n 5
- Trace reward pipeline and validate scoring semantics.
- Run targeted checks for tool/stateful behavior where applicable.
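As a reference point for the contract check, a minimal single-turn environment often reduces to the sketch below. This is a hedged illustration rather than a required shape: the in-memory dataset, its columns, and the exact reward logic are assumptions; only the load_environment entrypoint, the base class, the parser, and the rubric are the contract points named above.

```python
# Minimal sketch of the contract points above (entrypoint, base class, rubric).
# Dataset contents and the reward logic are illustrative assumptions.
import verifiers as vf
from datasets import Dataset


def load_environment(**kwargs):
    # Tiny in-memory dataset; real environments usually load from the Hub.
    dataset = Dataset.from_list([{"question": "What is 2 + 2?", "answer": "4"}])
    parser = vf.Parser()

    def exact_match(completion, answer, **_) -> float:
        # Deterministic check: compare the parsed final answer to the reference
        # instead of scoring keywords or style.
        parsed = parser.parse_answer(completion)
        return 1.0 if parsed is not None and parsed.strip() == str(answer).strip() else 0.0

    rubric = vf.Rubric(funcs=[exact_match], weights=[1.0], parser=parser)
    return vf.SingleTurnEnv(dataset=dataset, parser=parser, rubric=rubric, **kwargs)
```

Against this shape, the useful review questions are whether the entrypoint arguments match the README, whether rubric weights sum to the intended scale, and whether the reward is stable across reruns.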
Endpoint And Model Selection Nudge
- Encourage endpoint alias setup in configs/endpoints.toml for reproducible review runs (see the sketch after this list).
- Check api_client_type when reviewing non-default providers. openai_chat_completions is the default; openai_responses and anthropic_messages should be explicit in endpoint configs when those protocols are required.
- Ask whether review coverage should prioritize instruct or reasoning behavior.
  - Instruct go-tos: gpt-4.1 series, qwen3 instruct series.
  - Reasoning go-tos: gpt-5 series, qwen3 thinking series, glm series.
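When no alias file exists yet, a sketch like the one below can anchor that conversation. The TOML layout and the model/url/key field names are assumptions about configs/endpoints.toml rather than a confirmed schema; only api_client_type and its values come from the note above.

```toml
# Hypothetical configs/endpoints.toml layout; field names other than
# api_client_type are assumptions, not a confirmed schema.
[gpt-4.1-mini]
model = "openai/gpt-4.1-mini"
url = "https://api.openai.com/v1"
key = "OPENAI_API_KEY"
# api_client_type omitted: openai_chat_completions is the default.

[claude-sonnet]
model = "<anthropic-model-id>"
url = "https://api.anthropic.com"
key = "ANTHROPIC_API_KEY"
api_client_type = "anthropic_messages"  # explicit for a non-default protocol
```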
Critical Review Criteria
- Reward correctness (see the heuristic sketch after this list):
  - Prefer deterministic, explicit checks or LLM judges.
  - Flag best-effort keyword or style heuristics unless explicitly approved.
- Environment self-containment:
  - Flag any requirement for user-managed background services before load_environment().
  - Require environment-managed lifecycle for sandboxes/sessions.
- v1 taskset/harness contracts (see the endpoint-wiring sketch after this list):
  - Expect new taskset/harness environments to use the v1 vf.Env / vf.Taskset / vf.Harness format.
  - Verify Task data is serializable, state remains serializable at rollout boundaries, and model/client controls flow through runtime state rather than top-level dataset columns.
  - For V1 harness programs, verify framework clients consume state.get_endpoint_config(api="chat") rather than hardcoding an upstream LLM endpoint. For CliAgentEnv agents, verify sandboxed agent code consumes the injected interception endpoint; the proxy is what makes rollouts visible to the rubric.
- Migration fidelity:
  - For ports, verify one-to-one equivalence of prompts, tool traces, and scoring logic.
  - Flag any assumptions made without a user decision.
- Secrets handling (see the ensure_keys sketch after this list):
  - Ensure required keys are validated in load_environment() with vf.ensure_keys(...).
- Performance and scaling:
  - Identify obvious bottlenecks in dataset loading, rubric calls, or tool execution.
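To make the flag case under reward correctness concrete, the function below is the kind of best-effort heuristic to call out in a review. It is hypothetical; only the keyword-argument reward signature follows common verifiers usage, and the closing comment notes the usual fix direction.

```python
# Hypothetical reward to FLAG: keyword/style matching hands out credit for
# fluent but unverified completions.
def style_heuristic_reward(completion, answer, **kwargs) -> float:
    text = completion[-1]["content"] if isinstance(completion, list) else completion
    score = 0.0
    if str(answer).lower() in text.lower():
        score += 0.5  # substring hit, not an exact or parsed match
    if "step-by-step" in text.lower():
        score += 0.5  # style signal unrelated to correctness
    return score
# Fix direction: an exact/parsed comparison (as in the load_environment sketch
# above) or an LLM-judge rubric, with the choice recorded in the findings.
```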
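For the v1 harness criterion, the wiring to look for resembles the sketch below. The state.get_endpoint_config(api="chat") call is the contract named above; the returned field names (base_url, api_key, model), the program signature, and the task shape are all illustrative assumptions, not a confirmed interface.

```python
from openai import AsyncOpenAI


async def run_program(task, state):
    # Consume the runtime-provided endpoint instead of hardcoding an upstream
    # LLM URL, so every call flows through the proxy the rubric can see.
    endpoint = state.get_endpoint_config(api="chat")  # contract from the criterion above
    client = AsyncOpenAI(base_url=endpoint["base_url"], api_key=endpoint["api_key"])  # assumed fields
    response = await client.chat.completions.create(
        model=endpoint["model"],  # assumed field
        messages=[{"role": "user", "content": task["prompt"]}],  # assumed task shape
    )
    return response.choices[0].message.content
```

A hardcoded base_url here, or a CliAgentEnv agent that bypasses the injected interception endpoint, is a P0/P1 finding because the rubric loses visibility into the rollout.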
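The secrets criterion is a quick structural check: key validation should sit at the top of load_environment so a missing key fails fast instead of mid-rollout. The call shape of vf.ensure_keys below (key names passed as strings) is an assumption.

```python
import verifiers as vf


def load_environment(**kwargs):
    # Fail fast on missing secrets; assumed call shape for vf.ensure_keys.
    vf.ensure_keys("OPENAI_API_KEY")
    ...
```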
Findings Format
Return findings first, sorted by severity:
- P0/P1 bugs and behavioral mismatches.
- P2 quality risks and maintainability issues.
- Test gaps and missing eval coverage.
Include file paths, exact lines, impact, and concrete fix direction.
If No Findings
State explicitly that no defects were found, then list residual risk and untested areas.