Skip to main content
تشغيل أي مهارة في Manus
بنقرة واحدة

promptinjection

// Test LLM applications for prompt injection vulnerabilities — jailbreak attempts, system prompt extraction, context manipulation, guardrail bypass techniques, direct injection, indirect injection, multi-stage attacks, and reconnaissance. USE WHEN prompt injection, jailbreak, LLM security, AI security assessment, pentest AI application, test chatbot, guardrail bypass, direct injection, indirect injection, RAG poisoning, multi-stage attack, complete assessment, reconnaissance.

$ git log --oneline --stat
stars:11,743
forks:1,612
updated:٢٨ فبراير ٢٠٢٦ في ١٣:٢٤
مستكشف الملفات
13 ملفات
SKILL.md
readonly