with one click
prompt-engineering
// Evidence-based techniques for designing effective LLM prompts including few-shot learning, chain-of-thought reasoning, and prompt injection prevention.
// Evidence-based techniques for designing effective LLM prompts including few-shot learning, chain-of-thought reasoning, and prompt injection prevention.
[HINT] Download the complete skill directory including SKILL.md and all related files
| name | prompt-engineering |
| description | Evidence-based techniques for designing effective LLM prompts including few-shot learning, chain-of-thought reasoning, and prompt injection prevention. |
Evidence-based techniques for designing effective LLM prompts from leading AI research and documentation.
Sources: Anthropic Claude 4 docs, OpenAI Platform docs, Chain-of-Thought Prompting (Wei et al., arXiv:2201.11903)
[Role/Context] You are a [role] with [expertise].
[Task] Your task is to [specific action].
[Input] <input>{{DATA}}</input>
[Format] Respond with: [specification]
[Constraints] - Specific requirement 1
Provide 2-5 examples showing desired input-output patterns. Label space and distribution matter more than correctness of individual examples.
Examples:
Input: "Show active users"
Output: SELECT * FROM users WHERE status = 'active';
Input: "{{USER_QUERY}}"
Output:
Enable step-by-step reasoning for complex tasks. Use "Let's think step by step" (zero-shot) or provide reasoning demonstrations (few-shot).
Analyze for vulnerabilities:
1. Identify all user inputs
2. Trace data flow through code
3. Check validation/sanitization
4. Flag dangerous operations
Use tags to separate instructions from data (reduces prompt injection risk).
<code>{{USER_CODE}}</code>
Review the code above for security issues.
Temperature (0-2)
Top-p (0-1): Alternative to temperature. 0.1 = conservative, 0.9 = diverse
<prose>response here</prose>Input Isolation: Use delimiters/tags to separate instructions from user data
Validation: Filter suspicious patterns ("ignore previous", "new instructions")
Least Privilege: Limit model tool access and API permissions
Output Validation: Verify model outputs before execution
Vague: "Make it better" → "Reduce cyclomatic complexity below 10"
Missing Context: "Fix bug" → "Fix null pointer on line 45 when users.find() returns undefined"
Ambiguous Format: "Write tests" → "Write Jest tests with AAA pattern, mocking external deps"