---
name: gemini-peer-review
description: "[CLAUDE CODE ONLY] Leverage Gemini CLI for AI peer review, second opinions on architecture and design decisions, cross-validation of implementations, security analysis, alternative approaches, and holistic codebase analysis. Requires terminal access to execute Gemini CLI commands. Use when making high-stakes decisions, reviewing complex architecture, analyzing large codebases (1M token context window), or when explicitly requested for a second AI perspective. Must be explicitly invoked using skill syntax."
---
🖥️ Claude Code Only - Requires terminal access to execute Gemini CLI commands.
Enable Claude Code to leverage Google's Gemini CLI for collaborative AI reasoning, peer review, and multi-perspective analysis of code architecture, design decisions, and implementations.
Two AI perspectives are better than one for high-stakes decisions.
This skill enables strategic collaboration between Claude Code (Anthropic) and Gemini (Google) for:
Not a replacement—a second opinion.
Gemini's massive 1M token context window allows it to process entire codebases without chunking, providing holistic analysis that complements Claude's detailed reasoning. Together, they offer comprehensive insights through different reasoning approaches.
DO use when:
DON'T use when:
Important: This skill requires explicit invocation. It is not automatically triggered by natural language.
To use this skill, Claude must explicitly invoke it using:
```
skill: "gemini-peer-review"
```
User phrases that indicate this skill would be valuable:
When these phrases appear, Claude should suggest using this skill and invoke it explicitly if appropriate.
Both the Codex and Gemini peer-review skills provide valuable second opinions, but they excel in different scenarios.
Use Gemini Peer Review when:
Use Codex Peer Review when:
For mid-range codebases (500-5k LOC):
For maximum value on high-stakes decisions: Use both skills sequentially and apply synthesis framework (see references/synthesis-framework.md).
Assess if peer review adds value:
Questions to consider:
If yes to 2+ questions: Proceed with peer review workflow
Extract and structure relevant information:
Load references/context-preparation.md for detailed guidance on:
Key preparation steps:
Context structure template:
```
[CONTEXT]
Project: [type, purpose, stack]
Current situation: [what exists]
Constraints: [technical, business, time]
Scale considerations: [users, data volume, performance requirements]

[CODE/ARCHITECTURE]
[relevant code or architecture description - can be extensive due to 1M context]

[MULTIMODAL ASSETS]
[if applicable: architecture diagrams, design mockups, technical specs]

[QUESTION]
[specific question or review request]

[EXPECTED OUTPUT]
[format: analysis, alternatives, recommendations, etc.]
```
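If the template above is assembled in a shell before calling `gemini -p`, a minimal sketch might look like the following. The `build_prompt` helper and its arguments are illustrative assumptions, not part of the Gemini CLI; only the section names come from the template.

```shell
#!/bin/sh
# Hypothetical helper: assemble a peer-review prompt from the template
# sections above. Everything except the section names is an assumption.
build_prompt() {
  # $1=context  $2=code/architecture  $3=question  $4=expected output
  cat <<EOF
[CONTEXT]
$1

[CODE/ARCHITECTURE]
$2

[QUESTION]
$3

[EXPECTED OUTPUT]
$4
EOF
}

# The resulting string is what would be passed to: gemini -p "$prompt"
prompt="$(build_prompt \
  "B2B SaaS backend, PostgreSQL, AWS" \
  "$(cat src/models.py 2>/dev/null || echo '[code here]')" \
  "Are the service boundaries appropriate?" \
  "analysis, trade-offs, recommendation")"
```

Building the prompt as a plain string first keeps the context reviewable before anything is sent to the CLI.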
Gemini-specific advantages:
Execute appropriate CLI command:
Load references/gemini-commands.md for complete reference.
Common patterns:
Non-interactive review (recommended):
```bash
gemini -p "$(cat <<'EOF'
[prepared context and question here]
EOF
)"
```
With model selection:
```bash
gemini --model gemini-2.5-pro -p "$(cat <<'EOF'
[context for complex reasoning]
EOF
)"
```
With multimodal (image/diagram):
```bash
gemini --image architecture.png -p "Analyze this architecture diagram: [question]"
```
Security-focused review:
```bash
gemini -p "$(cat <<'EOF'
Security review focus:
[context and code]
EOF
)"
```
Model selection guidelines:
Use gemini-2.5-pro for:
Use gemini-2.5-flash for:
Key flags:
- `-p` / `--prompt`: Run in headless mode (non-interactive)
- `--model` / `-m`: Select a specific model (pro vs. flash)
- `--output-format`: Control output format (text/json/stream-json)
- `--yolo` / `-y`: Auto-approve all actions
- `@file_path` or `@directory/`: Include file or directory context

Common patterns:
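As a sketch of how these flags compose, the snippet below builds a headless invocation as a string so it can be inspected before running. The flag names are the ones listed above; composing and printing the command first is just an inspection pattern, not a CLI requirement.

```shell
#!/bin/sh
# Compose a headless review invocation from the documented flags.
# Printing instead of executing lets you review the command first.
model="gemini-2.5-pro"
prompt="Review @src/auth/ for session-handling risks"

cmd="gemini --model $model --output-format json -p \"$prompt\""
echo "$cmd"   # run with: eval "$cmd"
```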
Architecture review:
```bash
gemini --model gemini-2.5-pro -p "$(cat <<'EOF'
Review this microservices architecture:

[Service definitions, API contracts, data flow]

Concerns: scalability, data consistency, deployment complexity
Question: Are the service boundaries appropriate? Any architectural risks?
EOF
)"
```
Security-focused review:
```bash
gemini -p "$(cat <<'EOF'
Security review of authentication system:

[Auth code, session management, token handling]

Threat model: [attack vectors]
Question: Identify vulnerabilities, attack vectors, and hardening opportunities.
EOF
)"
```
Design decision with alternatives:
```bash
gemini --model gemini-2.5-pro -p "$(cat <<'EOF'
Design decision: Event sourcing vs traditional CRUD

[Domain model, use cases, team context]

Alternatives:
A) Event sourcing with CQRS
B) Traditional CRUD with audit logs
C) Hybrid approach

Question: Analyze trade-offs for our context and recommend approach.
EOF
)"
```
Error handling:
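One defensive pattern is a small wrapper that degrades gracefully when the CLI is missing or returns an error. This is a sketch only; the `run_review` function and the `GEMINI_BIN` override are assumptions added so the wrapper can be exercised with a stub binary.

```shell
#!/bin/sh
# Hypothetical error-handling wrapper around the gemini CLI.
# GEMINI_BIN allows substituting a stub binary for testing.
run_review() {
  bin="${GEMINI_BIN:-gemini}"
  if ! command -v "$bin" >/dev/null 2>&1; then
    echo "gemini CLI not found; continuing without peer review" >&2
    return 127
  fi
  # On a non-zero exit (rate limit, auth failure, etc.), report and
  # fall back to single-model analysis rather than aborting.
  "$bin" -p "$1" || {
    echo "gemini call failed; falling back to Claude-only analysis" >&2
    return 1
  }
}
```

The key design choice is that a failed peer review never blocks the main workflow: Claude's own analysis still proceeds.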
Compare and integrate both AI perspectives:
Load references/synthesis-framework.md for detailed synthesis patterns.
Analysis framework:
Agreement Analysis
Disagreement Analysis
Complementary Insights
Trade-off Identification
Insight Extraction
Synthesis output structure:
```markdown
## Perspective Comparison

**Claude's Analysis:**
[key points from Claude's initial analysis]

**Gemini's Analysis:**
[key points from Gemini's review - note any insights from 1M context advantage]

**Points of Agreement:**
- [shared insights that increase confidence]

**Points of Divergence:**
- [different perspectives and why - may reveal important trade-offs]

**Complementary Insights:**
- [unique value from each perspective]
- [what Gemini saw with holistic view that Claude couldn't see incrementally]
- [what Claude's detailed reasoning revealed that Gemini's broader view missed]

## Synthesis & Recommendations

[integrated analysis incorporating both perspectives]

**Recommended Approach:**
[action plan based on both perspectives]

**Rationale:**
[why this approach balances both perspectives]

**Remaining Considerations:**
[open questions or concerns to address]
```
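If you want a fresh copy of this skeleton for each review, a trivial generator can emit it to a working file. Purely illustrative; the function name and output file are assumptions.

```shell
#!/bin/sh
# Print an empty synthesis skeleton matching the structure above,
# ready to be filled in with both models' findings.
synthesis_skeleton() {
  cat <<'EOF'
## Perspective Comparison
**Claude's Analysis:**
**Gemini's Analysis:**
**Points of Agreement:**
**Points of Divergence:**
**Complementary Insights:**

## Synthesis & Recommendations
**Recommended Approach:**
**Rationale:**
**Remaining Considerations:**
EOF
}

synthesis_skeleton > synthesis.md
```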
Leveraging Gemini's unique strengths in synthesis:
Deliver integrated insights to user:
Presentation principles:
When perspectives align: "Both Claude and Gemini agree that [approach] is preferable because [reasons]. This alignment increases confidence in the recommendation. Gemini's analysis of the entire codebase confirmed [specific insight]."
When perspectives diverge: "Claude favors [approach A] prioritizing [factors], while Gemini suggests [approach B] emphasizing [factors]. This divergence reveals an important trade-off: [explanation]. Gemini's holistic view of [system aspect] suggests [insight]. Consider [factors] to decide which approach better fits your context."
When one finds issues the other missed: "Gemini's analysis of the complete service architecture identified [concern] that wasn't apparent when examining components individually. This adds [insight] to our analysis..."
When Gemini's unique capabilities add value: "Gemini's processing of the architecture diagram alongside the code revealed [visual pattern] that maps to [code pattern]. This multimodal analysis suggests [recommendation]."
Load references/use-case-patterns.md for detailed examples of each scenario.
Scenario: Reviewing system design before major implementation
Process:
Example question: "Review this microservices architecture. Are there concerns with service boundaries, data consistency, or deployment complexity? I've included the service diagram and all API contracts."
Gemini advantage: Can process entire architecture in one context, seeing patterns across all services
Scenario: Choosing between multiple implementation approaches
Process:
Example question: "Should we use event sourcing or traditional CRUD for this domain? Consider complexity, auditability, team expertise, and long-term maintainability. Here's our current domain model and use cases."
Gemini advantage: Can analyze current codebase patterns to assess consistency with existing approaches
Scenario: Validating security-critical code before deployment
Process:
Example question: "Review this authentication implementation. Are there vulnerabilities in session management, token handling, or access control? Our threat model includes [specific threats]."
Gemini advantage: Can trace security boundaries across entire codebase to find indirect vulnerabilities
Scenario: Optimizing performance-critical code
Process:
Example question: "This query endpoint is slow under load. Identify bottlenecks in the database access pattern, caching strategy, and N+1 issues. Current response time: 2s, target: <100ms."
Gemini advantage: Can analyze database queries in context of entire data access layer for systemic issues
Scenario: Improving test coverage and quality
Process:
Example question: "Review our testing approach. Are there coverage gaps, missing edge cases, or better testing strategies for this complex state machine?"
Gemini advantage: Can analyze all test files alongside implementation to identify systematic gaps
Scenario: Understanding unfamiliar code or patterns
Process:
Example question: "Explain this recursive backtracking algorithm. What patterns are used, and are there clearer alternatives? I'm new to this domain."
Gemini advantage: Can search for similar patterns in public codebases (with Search grounding) for comparison
Scenario: Stuck on a problem or exploring better approaches
Process:
Example question: "We're stuck on real-time conflict resolution for collaborative editing. What alternative CRDT or operational transform approaches could work better? Current approach causes [specific issues]."
Gemini advantage: Can reference current research and best practices via Search grounding
Scenario: Understanding architecture of unfamiliar large codebase
Process:
Example question: "Analyze this 50k LOC monorepo. Map the module dependencies, identify the core abstractions, and explain the request lifecycle from API to database."
Gemini advantage: This is where Gemini truly excels; it can process the entire codebase in a single context
Scenario: Reviewing implementation against design specifications
Process:
Example question: "Here's our API design spec (PDF) and architecture diagram. Does the implementation match? Are there deviations that might cause issues?"
Gemini advantage: Unique capability - can process PDFs and images alongside code for true multimodal analysis
Load references/gemini-commands.md for complete command documentation.
Quick reference:
| Use Case | Command Pattern | Flags |
|---|---|---|
| Architecture review | `gemini --model gemini-2.5-pro -p "[context]"` | `--model` for complex reasoning |
| Review with diagram | `gemini --image diagram.png -p "[question]"` | `--image` for visual context |
| Security analysis | `gemini -p "Security: [code]"` | `-p` for prompt text |
| Fast code review | `gemini -p "[code review]"` | Default flash model |
| Large codebase analysis | `gemini --model gemini-2.5-pro -p "[full context]"` | Pro model for 1M token context |
| Quick validation | `gemini "[question]"` | Interactive mode |
With concept-forge skill:
- Use `@strategist` and `@builder` archetypes to prepare questions

With prose-polish skill:
With claimify skill:
Pre-implementation:
Post-implementation:
During implementation:
Action: Reformulate question with better context and specificity
DO:
DON'T:
Effective context:
Ineffective context:
Good questions:
Poor questions:
Gemini CLI must be installed to use this skill.
```bash
# Install via npm (recommended)
npm install -g @google/gemini-cli

# Verify installation
gemini --version
```
Requires Node.js 20+
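A quick way to check the Node.js requirement before installing; the `node_ok` helper is hypothetical, and only the 20+ threshold comes from the requirement above.

```shell
#!/bin/sh
# Check that a `node --version` string (e.g. "v20.11.1") satisfies
# the Node.js 20+ requirement of the Gemini CLI.
node_ok() {
  major="${1#v}"         # drop the leading "v"
  major="${major%%.*}"   # keep only the major version
  [ "$major" -ge 20 ] 2>/dev/null
}

# Usage: node_ok "$(node --version)" || echo "Node.js 20+ required" >&2
```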
```bash
# Option 1: OAuth login (recommended)
gemini login

# Option 2: API key
gemini config set apiKey YOUR_API_KEY
```
Get API Key:
- Set it with `gemini config set apiKey YOUR_KEY`

Free Tier:
```bash
# Set default model
gemini config set defaultModel gemini-2.5-flash

# For complex reasoning tasks
gemini config set defaultModel gemini-2.5-pro

# View current config
gemini config list
```
```bash
# Test CLI access
gemini "Hello, Gemini!"

# If successful, you'll see a response from Gemini
```
If Gemini CLI is not available:
- See the setup guide (references/setup-guide.md)

Optional configuration via CLI:
```bash
# Set default model
gemini config set defaultModel gemini-2.5-flash   # Faster
# or
gemini config set defaultModel gemini-2.5-pro     # Complex reasoning

# Set generation parameters (optional)
gemini config set temperature 0.3   # More focused (0.0-1.0)
gemini config set maxTokens 8192    # Response length limit

# View all settings
gemini config list

# Reset to defaults
gemini config reset
```
For peer review, recommended settings:
- `defaultModel`: `gemini-2.5-flash` for most cases, `gemini-2.5-pro` for complex analysis
- `temperature`: 0.3-0.5 (more focused, less creative)
- `maxTokens`: 8192 (allow detailed analysis)

Trust convergence:
Trust divergence:
Trust specialized knowledge:
Gemini's unique strengths:
Claude's unique strengths:
Load references/workflow-examples.md for complete scenarios.
User: "I'm designing a multi-tenant SaaS architecture. Should I use separate databases per tenant or a shared database with row-level security?"
Claude initial analysis: [Provides analysis of trade-offs]
Invoke peer review:
```bash
gemini --model gemini-2.5-pro -p "$(cat <<'EOF'
Review multi-tenant SaaS architecture decision:

CONTEXT:
- B2B SaaS with 100-500 tenants expected
- Varying data volumes per tenant (small to large)
- Strong data isolation requirements
- Team familiar with PostgreSQL
- Cloud deployment (AWS)
- Growth projection: 2x tenants annually

OPTIONS:
A) Separate database per tenant
   - Complete isolation
   - Independent scaling
   - Operational complexity
B) Shared database with row-level security (RLS)
   - Simpler operations
   - Shared resources
   - RLS overhead

CURRENT CODEBASE:
[Include relevant ORM models, database config, auth system]

QUESTION:
Analyze trade-offs for scalability, operational complexity, data isolation,
and cost. Which approach is recommended for this context?
Consider both current state and 3-year growth trajectory.

EXPECTED OUTPUT:
- Analysis of each approach
- Trade-off matrix
- Recommendation with rationale
- Migration path considerations
EOF
)"
```
Synthesis: Compare Claude's and Gemini's trade-off analysis, extract key insights, present balanced recommendation with rationale from both perspectives.
User: "Review authentication implementation for security issues"
Invoke peer review:
```bash
gemini -p "$(cat <<'EOF'
Security review of authentication system:

THREAT MODEL:
- Session hijacking
- Token replay attacks
- Credential stuffing
- CSRF attacks
- XSS-based token theft

IMPLEMENTATION:
[Include auth code from src/auth/session.py, tokens.py, middleware/auth.py]

SECURITY REQUIREMENTS:
- 99.9% prevention of unauthorized access
- Compliance: SOC2, HIPAA
- Session timeout: 30 min inactivity
- MFA support required

QUESTION:
Identify vulnerabilities, attack vectors, and hardening opportunities.
Prioritize findings by severity and likelihood.

EXPECTED OUTPUT:
- Vulnerability assessment (severity ratings)
- Attack vector analysis
- Specific remediation recommendations
- Best practice gaps
EOF
)"
```
Synthesis: Combine security findings from both AIs, create prioritized remediation list.
User: "Help me understand this unfamiliar 60k LOC codebase"
Invoke peer review (leveraging 1M context):
```bash
gemini --model gemini-2.5-pro -p "$(cat <<'EOF'
Analyze this complete backend codebase:

CODEBASE:
[Include entire codebase - Gemini's 1M token window can process 60k LOC!]

CONTEXT:
- E-commerce platform backend
- Microservices architecture
- New engineer onboarding perspective needed

QUESTIONS:
1. What are the major architectural patterns?
2. How does a typical request flow from API → Database?
3. What are the core abstractions/modules?
4. Where are the critical integration points?
5. What are potential scalability bottlenecks?
6. What technical debt is visible?

EXPECTED OUTPUT:
- High-level architecture summary
- Request lifecycle walkthrough
- Module dependency map
- Critical code paths
- Scalability considerations
- Onboarding guide structure
EOF
)"
```
Gemini advantage on display: This is where Gemini truly shines—processing entire codebases in one context to see patterns, dependencies, and architectural decisions that would be impossible to detect with chunked analysis.
User: "Does our implementation match the original architecture design?"
Invoke peer review with diagram:
```bash
gemini --image docs/architecture-v2.png -p "$(cat <<'EOF'
Compare architecture design vs. implementation:

DESIGN SPECIFICATION:
[See attached architecture diagram showing intended service structure]

IMPLEMENTATION:
[Include implementation code from src/services/*, src/api/*, infrastructure/*]

QUESTIONS:
1. Does implementation match intended architecture?
2. What deviations exist and why might they be problematic?
3. Are there missing components from the design?
4. Are there additional components not in the design?
5. Do the actual service boundaries align with designed boundaries?

EXPECTED OUTPUT:
- Match/deviation analysis
- Gap identification
- Risk assessment of deviations
- Recommendations for alignment
EOF
)"
```
Gemini advantage on display: Multimodal analysis—comparing visual architecture with actual code—is a unique Gemini capability that Claude cannot replicate alone.
Don't:
Do:
Peer review succeeds when:
Peer review fails when:
This skill improves through:
Feedback loop:
- references/context-preparation.md - Detailed context preparation guide
- references/gemini-commands.md - Complete API reference and examples
- references/synthesis-framework.md - Synthesis methodology
- references/use-case-patterns.md - Detailed scenario examples
- references/setup-guide.md - Installation and configuration
- references/workflow-examples.md - End-to-end example workflows

Large Codebase Analysis:
Multimodal Analysis:
Current Information:
Alternative Perspective:
Detailed Reasoning:
Native Integration:
Privacy:
Use both when:
Use Gemini specifically for:
Use Claude alone for:
End of Skill Guide
For detailed implementation examples, see references/ directory.
For setup assistance, see references/setup-guide.md.
For API reference, see references/gemini-commands.md.