second-opinions
Get validation from a different AI model before committing major changes — detects available LLM CLIs and routes to the best one.
| Field | Value |
|---|---|
| name | second-opinions |
| description | Get validation from a different AI model before committing major changes — detects available LLM CLIs and routes to the best one. |
| display_name | Second Opinions |
| brand_color | #4F46E5 |
| local_only | false |
| group | Dev Workflow |
| usage | /second-opinions:run |
| summary | Get a second opinion from a different AI on complex changes |
| default_prompt | Get a second opinion on this implementation or design decision and summarize the strongest agreement, disagreement, and actionable feedback. |
Get validation from a different AI before committing. Any single model — regardless of which one is running — has blind spots shaped by its training, context, and the conversation so far. A different architecture, temperature, or framing catches different things.
Skip for: trivial fixes, style questions, crystal-clear requirements. For everything else (major changes, pre-merge reviews, architecture decisions), treat this step as mandatory.
Use the bundled detection script, with an inline fallback if SKILL_DIR isn't set:

```shell
bash "${SKILL_DIR}/scripts/detect-llms.sh" --quiet 2>/dev/null || \
  for t in agent ask-gemini codex llm; do command -v "$t" >/dev/null 2>&1 && echo "$t"; done
```
Use the first one found. If none are available, tell the user and skip this step.
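The fallback loop above can be expanded into a small script that reports which tool was found; this is a sketch (the `found` variable and the echoed messages are illustrative, the tool names come from the list above):

```shell
# Probe PATH for known LLM CLIs, in preference order, and keep the first hit.
found=""
for t in agent ask-gemini codex llm; do
  if command -v "$t" >/dev/null 2>&1; then
    found="$t"
    break
  fi
done

if [ -n "$found" ]; then
  echo "Using $found for the second opinion"
else
  # No CLI available: tell the user and skip the second-opinion step.
  echo "No LLM CLI detected; skipping second opinion" >&2
fi
```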
Second opinions are about deep analysis, not speed. Use the smartest model available:
| Tool | For deep analysis | For quick checks |
|---|---|---|
| `agent` | `agent --frontier` (claude-opus-4-5) | `agent --smart` (gemini-2.5-pro) |
| `ask-gemini` | `ask-gemini --pro` (Gemini Pro) | `ask-gemini` (Gemini Flash) |
When this skill is invoked for pre-merge review, design validation, or architecture decisions, prefer `agent --frontier` (or `ask-gemini --pro` if `agent` is unavailable). For quick sanity checks or brainstorming, `agent --smart` is fine.
The prompt is the same regardless of agent — adapt the invocation to whatever's available. The `detect-llms.sh` script outputs `NAME|INVOKE_PATTERN|MODEL_FAMILY|NOTES`; use the `INVOKE_PATTERN` field, substituting `{prompt}` with your actual prompt.
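As a sketch, assuming detect-llms.sh emits one `NAME|INVOKE_PATTERN|MODEL_FAMILY|NOTES` line per tool (the example line below is hypothetical, not real script output), the placeholder can be filled in like this:

```shell
# Hypothetical detect-llms.sh output line; real output may differ.
line='agent|agent --frontier "{prompt}"|claude|frontier model'

# Split the pipe-delimited fields.
IFS='|' read -r name invoke family notes <<EOF
$line
EOF

prompt='Review my git changes for production readiness.'

# Substitute the {prompt} placeholder with the actual prompt text.
cmd=$(printf '%s' "$invoke" | sed "s/{prompt}/$prompt/")
echo "$cmd"
```

This yields `agent --frontier "Review my git changes for production readiness."`, ready to run. Note that `sed` substitution is only safe here because the prompt contains no `/`, `&`, or backslash characters.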
Show the diff and ask for a production-readiness check:

```
Review my git changes for production readiness.
Show the diff from main and check for:
- Correctness and edge cases
- Architecture and design
- Performance implications
- Security concerns
```
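Putting the pattern together, a minimal sketch, assuming the `agent` CLI reads the diff on stdin and takes the prompt as its final argument (neither is guaranteed; follow the INVOKE_PATTERN reported by detect-llms.sh):

```shell
# The review prompt from the checklist above.
prompt='Review my git changes for production readiness.
Show the diff from main and check for:
- Correctness and edge cases
- Architecture and design
- Performance implications
- Security concerns'

# Only invoke the tool if it is actually on PATH; otherwise skip this step.
if command -v agent >/dev/null 2>&1; then
  git diff main...HEAD | agent --frontier "$prompt"
fi
```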
Describe the options and constraints, then ask: What trade-offs am I not seeing?
Ask one specific question about the implementation — don't fish for general feedback.
The other agent is a collaborator, not an authority. Classify each piece of feedback:
| Category | Action |
|---|---|
| Must-fix | Bug, security issue, correctness problem → implement immediately |
| Should-fix | Genuine simplification, better error handling → implement if clean |
| Nice-to-have | Alternative approach, style preference → mention to user |
| Reject | Over-engineering, conflicts with project conventions → skip with reason |
If the other agent and your analysis disagree, explain the disagreement to the user and let them decide.
When in doubt, STOP and get a second opinion: five minutes of external validation prevents hours of debugging.