ai-prompt-engineering
// Prompt engineering for production LLMs — structured outputs, RAG, tool workflows, and safety. Use when designing or debugging prompts for LLM APIs.
| Field | Value |
|---|---|
| name | ai-prompt-engineering |
| description | Prompt engineering for production LLMs — structured outputs, RAG, tool workflows, and safety. Use when designing or debugging prompts for LLM APIs. |
Modern Best Practices (January 2026): versioned prompts, explicit output contracts, regression tests, and safety threat modeling for tool/RAG prompts (OWASP LLM Top 10: https://owasp.org/www-project-top-10-for-large-language-model-applications/).
This skill provides operational guidance for building production-ready prompts across standard tasks, RAG workflows, agent orchestration, structured outputs, hidden reasoning, and multi-step planning.
All content is operational, not theoretical. Focus on patterns, checklists, and copy-paste templates.
Start from the templates in assets/ and fill in TASK, INPUT, RULES, and OUTPUT FORMAT. Represent missing values explicitly (for example, map them to null rather than omitting them). This skill includes Claude Code and Codex CLI optimizations:
Prefer a “brief justification” over requesting full chain-of-thought. When using private reasoning patterns, instruct the model to think internally and output only the final answer.
| Task | Pattern to Use | Key Components | When to Use |
|---|---|---|---|
| Machine-parseable output | Structured Output | JSON schema, "JSON-only" directive, no prose | API integrations, data extraction |
| Field extraction | Deterministic Extractor | Exact schema, missing->null, no transformations | Form data, invoice parsing |
| Use retrieved context | RAG Workflow | Context relevance check, chunk citations, explicit missing info | Knowledge bases, documentation search |
| Internal reasoning | Hidden Chain-of-Thought | Internal reasoning, final answer only | Classification, complex decisions |
| Tool-using agent | Tool/Agent Planner | Plan-then-act, one tool per turn | Multi-step workflows, API calls |
| Text transformation | Rewrite + Constrain | Style rules, meaning preservation, format spec | Content adaptation, summarization |
| Classification | Decision Tree | Ordered branches, mutually exclusive, JSON result | Routing, categorization, triage |
```
User needs: [Prompt Type]
|-- Output must be machine-readable?
|   |-- Extract specific fields only? -> Deterministic Extractor Pattern
|   `-- Generate structured data? -> Structured Output Pattern (JSON)
|
|-- Use external knowledge?
|   `-- Retrieved context must be cited? -> RAG Workflow Pattern
|
|-- Requires reasoning but hide process?
|   `-- Classification or decision task? -> Hidden Chain-of-Thought Pattern
|
|-- Needs to call external tools/APIs?
|   `-- Multi-step workflow? -> Tool/Agent Planner Pattern
|
|-- Transform existing text?
|   `-- Style/format constraints? -> Rewrite + Constrain Pattern
|
`-- Classify or route to categories?
    `-- Mutually exclusive rules? -> Decision Tree Pattern
```
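The branches above can be mirrored in code when routing happens programmatically. A minimal sketch, assuming boolean flags derived from the task description (the flag names are illustrative, not part of the skill):

```python
def select_pattern(machine_readable=False, extract_fields=False,
                   external_knowledge=False, hidden_reasoning=False,
                   uses_tools=False, transform_text=False, classify=False):
    """Route a task to a prompt pattern, following the decision tree order."""
    if machine_readable:
        # Extract specific fields only? -> extractor; otherwise structured JSON.
        return "Deterministic Extractor" if extract_fields else "Structured Output (JSON)"
    if external_knowledge:
        return "RAG Workflow"
    if hidden_reasoning:
        return "Hidden Chain-of-Thought"
    if uses_tools:
        return "Tool/Agent Planner"
    if transform_text:
        return "Rewrite + Constrain"
    if classify:
        return "Decision Tree"
    return None  # no branch matched; fall back to the base template
```

Branch order matters: machine-readability is checked first because it constrains every downstream choice.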
TASK:
{{one_sentence_task}}
INPUT:
{{input_data}}
RULES:
- Follow TASK exactly.
- Use only INPUT (and tool outputs if tools are allowed).
- No invented details. Missing required info -> say what is missing.
- Keep reasoning hidden.
- Follow OUTPUT FORMAT exactly.
OUTPUT FORMAT:
{{schema_or_format_spec}}
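The TASK/INPUT/RULES/OUTPUT FORMAT skeleton above is easy to render programmatically. A minimal sketch; the `build_prompt` helper is an assumption for illustration, not part of the skill:

```python
BASE_TEMPLATE = """TASK:
{task}

INPUT:
{input_data}

RULES:
- Follow TASK exactly.
- Use only INPUT (and tool outputs if tools are allowed).
- No invented details. Missing required info -> say what is missing.
- Keep reasoning hidden.
- Follow OUTPUT FORMAT exactly.

OUTPUT FORMAT:
{output_format}"""


def build_prompt(task: str, input_data: str, output_format: str) -> str:
    """Render the base template with concrete values for one request."""
    return BASE_TEMPLATE.format(task=task, input_data=input_data,
                                output_format=output_format)
```

Keeping the template in one place makes it versionable, which supports the regression-testing practice noted earlier.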
AVAILABLE TOOLS:
{{tool_signatures_or_names}}
WORKFLOW:
- Make a short plan.
- Call tools only when required to complete the task.
- Validate tool outputs before using them.
- If the environment supports parallel tool calls, run independent calls in parallel.
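“Validate tool outputs before using them” can be as simple as a contract check before a result re-enters the context window. A minimal sketch, assuming the tool returns JSON and the required-keys contract is defined per tool (both assumptions are illustrative):

```python
import json


def validate_tool_output(raw: str, required_keys: set) -> dict:
    """Parse a tool's JSON output and confirm its contract before use.

    Raises ValueError instead of silently passing malformed data
    back into the model's context.
    """
    try:
        data = json.loads(raw)
    except json.JSONDecodeError as exc:
        raise ValueError(f"tool returned non-JSON output: {exc}") from exc
    missing = required_keys - data.keys()
    if missing:
        raise ValueError(f"tool output missing keys: {sorted(missing)}")
    return data
```

Failing loudly here keeps a broken tool call from contaminating later planning steps.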
RETRIEVED CONTEXT:
{{chunks_with_ids}}
RULES:
- Use only retrieved context for factual claims.
- Cite chunk ids for each claim.
- If evidence is missing, say what is missing.
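The “cite chunk ids for each claim” rule can be enforced mechanically after generation. A minimal sketch that flags citations pointing at chunks that were never retrieved; the `[chunk-N]` citation syntax is an assumption:

```python
import re


def check_citations(answer: str, retrieved_ids: set) -> list:
    """Return cited ids absent from the retrieved set (empty list = all valid)."""
    cited = set(re.findall(r"\[(chunk-\d+)\]", answer))
    return sorted(cited - retrieved_ids)
```

Running this as a post-check turns hallucinated citations into a detectable failure rather than a silent one.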
Use these references when validating or debugging prompts:
- frameworks/shared-skills/skills/ai-prompt-engineering/references/quality-checklists.md
- frameworks/shared-skills/skills/ai-prompt-engineering/references/production-guidelines.md

True expertise in prompting extends beyond writing instructions to shaping the entire context in which the model operates. Context engineering encompasses:
| Aspect | Prompt Engineering | Context Engineering |
|---|---|---|
| Focus | Instruction text | Full input pipeline |
| Scope | Single prompt | RAG + history + tools |
| Optimization | Word choice, structure | Information architecture |
| Goal | Clear instructions | Optimal context window |
1. Context Prioritization: Place most relevant information first; models attend more strongly to early context.
2. Context Compression: Summarize history, truncate tool outputs, select most relevant RAG chunks.
3. Context Separation: Use clear delimiters (<system>, <user>, <context>) to separate instruction types.
4. Dynamic Context: Adjust context based on task complexity - simple tasks need less context, complex tasks need more.
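Strategies 1 and 2 combine naturally in one assembly step: rank chunks by relevance, then pack the most relevant first until a budget is hit. A minimal sketch; the relevance scores are assumed to come from an upstream retriever, and the word-count token estimate is a stand-in for a real tokenizer:

```python
def pack_context(scored_chunks, token_budget,
                 count_tokens=lambda s: len(s.split())):
    """Greedily pack highest-scoring chunks first, respecting a token budget.

    scored_chunks: list of (relevance_score, text) pairs.
    Returns packed chunks, most relevant first (prioritization),
    dropping whatever does not fit (compression).
    """
    packed, used = [], 0
    for score, text in sorted(scored_chunks, key=lambda p: p[0], reverse=True):
        cost = count_tokens(text)
        if used + cost <= token_budget:
            packed.append(text)
            used += cost
    return packed
```

Because the most relevant chunks land earliest, this also exploits the stronger attention to early context noted in strategy 1.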
- Best Practices (Core) - Foundation rules for production-grade prompts
- Production Guidelines - Deployment and operational guidance
- Quality Checklists - Validation checklists before deployment
- Domain-Specific Patterns - Claude 4+ optimized patterns for specialized domains
- RAG Patterns - Retrieval-augmented generation workflows
- Agent and Tool Patterns - Tool use and agent orchestration
- Extraction Patterns - Deterministic field extraction
- Reasoning Patterns (Hidden CoT) - Internal reasoning without visible output
- Additional Patterns - Extended prompt engineering techniques
- Prompt Testing & CI/CD - Automated prompt evaluation pipelines
- Multimodal Prompt Patterns - Vision, audio, and document input patterns
- Prompt Security & Defense - Securing LLM applications against adversarial attacks
Templates are copy-paste ready and organized by complexity:
External references are listed in data/sources.json:
When asked for “latest” prompting recommendations, prefer provider docs and standards from data/sources.json. If web search is unavailable, state the constraint and avoid overconfident “current best” claims.
This skill provides foundational prompt engineering patterns. For specialized implementations:
AI/LLM Skills:
Software Development Skills:
For Claude Code:
For Codex CLI:
Reasoning effort: medium for interactive coding (default); high/xhigh for complex autonomous multi-hour tasks.