# consult-zai
| Field | Value |
|---|---|
| name | consult-zai |
| description | Compare z.ai GLM 4.7 and code-searcher responses for comprehensive dual-AI code analysis. Use when you need multiple AI perspectives on code questions. |
You orchestrate a consultation between z.ai's GLM 4.7 model and Claude's code-searcher subagent, producing a comprehensive analysis that compares the two responses.
High-value queries:
Lower-value queries (a single AI may suffice):
When the user asks a code question:
Wrap the user's question with structured output requirements:
[USER_QUESTION]
=== Analysis Guidelines ===
**Structure your response with:**
1. **Summary:** 2-3 sentence overview
2. **Key Findings:** bullet points of discoveries
3. **Evidence:** file paths with line numbers (format: `file:line` or `file:start-end`)
4. **Confidence:** High/Medium/Low with reasoning
5. **Limitations:** what couldn't be determined
**Line Number Requirements:**
- ALWAYS include specific line numbers when referencing code
- Use format: `path/to/file.ext:42` or `path/to/file.ext:42-58`
- For multiple references: list each with its line number
- Include brief code snippets for key findings
**Examples of good citations:**
- "The authentication check at `src/auth/validate.ts:127-134`"
- "Configuration loaded from `config/settings.json:15`"
- "Error handling in `lib/errors.ts:45, 67-72, 98`"
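The wrapping step can be sketched as a small shell helper. This is an illustration only: `build_enhanced_prompt` is a hypothetical name, and the guideline text is abbreviated here.

```shell
# Hypothetical helper: wraps the user's question in the structured-output
# requirements described above (guideline text abbreviated).
build_enhanced_prompt() {
  cat <<EOF
$1

=== Analysis Guidelines ===
Structure your response with:
1. Summary: 2-3 sentence overview
2. Key Findings: bullet points of discoveries
3. Evidence: file paths with line numbers (format: file:line or file:start-end)
4. Confidence: High/Medium/Low with reasoning
5. Limitations: what could not be determined
Always cite specific line numbers, e.g. src/auth/validate.ts:127-134.
EOF
}
```

The user's question goes first so the model reads its task before the formatting rules.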
Launch both simultaneously in a single message with multiple tool calls:
For z.ai GLM 4.7: Use a temp file to avoid shell quoting issues:
Step 1: Write the enhanced prompt to a temp file using the Write tool:
Write to `$CLAUDE_PROJECT_DIR/tmp/zai-prompt.txt` with the ENHANCED_PROMPT content
Step 2: Execute z.ai with the temp file:
macOS:
zsh -i -c 'zai -p "$(cat "$CLAUDE_PROJECT_DIR/tmp/zai-prompt.txt")" --output-format json --append-system-prompt "You are the GLM 4.7 model, accessed via the z.ai API." 2>&1'
Linux:
bash -i -c 'zai -p "$(cat "$CLAUDE_PROJECT_DIR/tmp/zai-prompt.txt")" --output-format json --append-system-prompt "You are the GLM 4.7 model, accessed via the z.ai API." 2>&1'
This temp-file approach sidesteps shell quoting issues regardless of the prompt's content.
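Both steps (and the later cleanup) can be combined into one shell function. This is a sketch, not the skill's literal mechanism: `ZAI_BIN` is an assumption added so the binary can be stubbed out, while the `zai` flags are taken verbatim from the commands above.

```shell
# Sketch of the temp-file flow. ZAI_BIN is a hypothetical override
# (defaults to the real `zai`) so the call can be stubbed in tests.
consult_zai() {
  local prompt_file="${CLAUDE_PROJECT_DIR:?must be set}/tmp/zai-prompt.txt"
  mkdir -p "$(dirname "$prompt_file")"
  printf '%s\n' "$1" > "$prompt_file"          # step 1: write the prompt
  "${ZAI_BIN:-zai}" -p "$(cat "$prompt_file")" \
    --output-format json \
    --append-system-prompt "You are the GLM 4.7 model, accessed via the z.ai API." 2>&1
  local status=$?
  rm -f "$prompt_file"                         # clean up on success or failure
  return "$status"
}
```

Quoting every expansion (`"$prompt_file"`, `"$(cat …)"`) keeps the flow safe even when the project path or prompt contains spaces.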
For Code-Searcher: Use Task tool with subagent_type: "code-searcher" with the same enhanced prompt
This parallel execution significantly improves response time.
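For illustration, the same parallelism looks like this in plain shell. In Claude itself the two launches are separate tool calls in a single message, not a script; `run_zai` and `run_searcher` here are hypothetical stand-ins.

```shell
# Hypothetical stand-ins for the two consultations described above.
run_zai()      { sleep 1; echo "zai findings"; }
run_searcher() { sleep 1; echo "searcher findings"; }

zai_out=$(mktemp); searcher_out=$(mktemp)
run_zai      > "$zai_out" &       # launch both in the background...
run_searcher > "$searcher_out" &
wait                              # ...then block until both have finished
```

Because the two runs overlap, total wall-clock time is close to the slower of the two rather than their sum.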
After processing the z.ai response (success or failure), clean up the temp prompt file:
rm -f "$CLAUDE_PROJECT_DIR/tmp/zai-prompt.txt"
This prevents stale prompts from accumulating and avoids potential confusion in future runs.
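A shell `trap` is one way to guarantee the cleanup runs on success and failure alike (a sketch; the path matches the one used above):

```shell
# Register cleanup once; it fires whenever the shell exits,
# whether the zai call succeeded or failed.
PROMPT_FILE="${CLAUDE_PROJECT_DIR:-/tmp}/tmp/zai-prompt.txt"
trap 'rm -f "$PROMPT_FILE"' EXIT
```

The single quotes defer expansion of `$PROMPT_FILE` until the trap actually fires.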
Use this exact format:
[Raw output from zai-cli agent]
[Raw output from code-searcher agent]
| Aspect | z.ai (GLM 4.7) | Code-Searcher (Claude) |
|---|---|---|
| File paths | [Specific/Generic/None] | [Specific/Generic/None] |
| Line numbers | [Provided/Missing] | [Provided/Missing] |
| Code snippets | [Yes/No + details] | [Yes/No + details] |
| Unique findings | [List any] | [List any] |
| Accuracy | [Note discrepancies] | [Note discrepancies] |
| Strengths | [Summary] | [Summary] |
[State which level applies and explain]
[Combine the best insights from both sources into unified analysis. Prioritize findings that are:
[Which source was more helpful for this specific query and why. Consider: