| name | codex |
| description | Executes OpenAI Codex CLI for code analysis, refactoring, and automated editing. Activates when users mention codex commands, code review requests, or automated code transformations requiring advanced reasoning models. |
Configuration is read from `~/.codex/config.toml`. Run `codex --version` on first use per session to confirm the CLI is available.

For every Codex task, follow this sequence:
1. Detect HPC/Slurm environment:
   - Check for HPC paths (`/home/woody/`, `/home/hpc/`) or Slurm environment variables.
   - If detected, plan to add the `--yolo` flag to bypass Landlock sandbox restrictions.
2. Ask the user for execution parameters via AskUserQuestion (single prompt):
   - Model: `gpt-5`, `gpt-5-codex`, `gpt-5.1`, `gpt-5.1-codex`, or the default.
   - Reasoning effort: `minimal`, `low`, `medium`, or `high`.
3. Determine sandbox mode based on the task:
   - `read-only`: code review, analysis, documentation.
   - `workspace-write`: code modifications, file creation.
   - `danger-full-access`: system operations, network access.
   - On HPC: use the `--yolo` flag instead (bypasses Landlock restrictions).
4. Build the command with the required flags:
codex exec [OPTIONS] "PROMPT"
Essential flags:
Essential flags:
- `-m <MODEL>` (if overriding the default)
- `-c model_reasoning_effort="<LEVEL>"`
- `-s <SANDBOX_MODE>` (skip on HPC)
- `--skip-git-repo-check` (if outside a git repo)
- `-C <DIRECTORY>` (if changing the workspace)
- `--full-auto` (for non-interactive execution; cannot be used with `--yolo`)

HPC command pattern (with `--yolo` to bypass Landlock):
codex exec --yolo -m gpt-5 -c model_reasoning_effort="high" --skip-git-repo-check \
"Analyze this code: $(cat /path/to/file.py)" 2>/dev/null
Note: --yolo is an alias for --dangerously-bypass-approvals-and-sandbox and is REQUIRED on HPC clusters to avoid Landlock sandbox errors. Do not use --full-auto with --yolo as they are incompatible.
5. Execute with stderr suppression:
   - Append `2>/dev/null` to hide thinking tokens.
6. Validate execution:
   - On HPC, confirm the `--yolo` flag was used; retry if it was missing.
7. Inform the user about resume capability (`codex resume`).

🔥 HPC QUICK TIP: On HPC clusters (e.g., `/home/woody/`, `/home/hpc/`), ALWAYS add the `--yolo` flag to avoid Landlock sandbox errors. Example: `codex exec --yolo -m gpt-5.1 ...`
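The HPC/Slurm detection described above can be sketched as a small shell helper. This is an illustrative sketch: the `is_hpc` name is ours, and the markers checked (home-directory prefixes and Slurm variables) are the ones this document names.

```shell
#!/usr/bin/env bash
# Sketch of the HPC/Slurm detection step. The is_hpc helper name is
# hypothetical; the checked markers come from this skill's workflow.
is_hpc() {
  case "$HOME" in
    /home/woody/*|/home/hpc/*) echo hpc; return 0 ;;
  esac
  if [ -n "${SLURM_JOB_ID:-}" ] || [ -n "${SLURM_CLUSTER_NAME:-}" ]; then
    echo hpc
    return 0
  fi
  return 1
}

# Pick the execution flag accordingly: --yolo on HPC, --full-auto elsewhere.
if is_hpc >/dev/null; then
  extra_flags="--yolo"
else
  extra_flags="--full-auto"
fi
echo "extra flags: $extra_flags"
```

On a login node the Slurm job variables may be absent, which is why the home-directory check runs first.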
codex exec -m gpt-5 -c model_reasoning_effort="medium" -s read-only \
--skip-git-repo-check --full-auto "review @file.py for security issues" 2>/dev/null
cat file.py | codex exec -m gpt-5.1 -c model_reasoning_effort="low" \
--skip-git-repo-check --full-auto - 2>/dev/null
Note: Stdin with - flag may not be supported in all Codex CLI versions.
When running on HPC clusters with Landlock security restrictions, use the --yolo flag:
# Primary solution: --yolo flag bypasses Landlock sandbox
codex exec --yolo -m gpt-5 -c model_reasoning_effort="high" --skip-git-repo-check \
"Analyze this code: $(cat /path/to/file.py)" 2>/dev/null
Alternative: Manual Code Injection (if --yolo is unavailable):
# Capture code content and pass directly in prompt
codex exec -m gpt-5.1 -c model_reasoning_effort="high" --skip-git-repo-check --full-auto \
"Analyze this Python code: $(cat file.py)" 2>/dev/null
Or for large files, use heredoc:
codex exec --yolo -m gpt-5.1 -c model_reasoning_effort="high" --skip-git-repo-check "$(cat <<ENDCODE
Analyze the following code comprehensively:
$(cat file.py)
Focus on: architecture, algorithms, multi-GPU optimization, potential bugs, code quality.
ENDCODE
)" 2>/dev/null
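One subtlety with the heredoc pattern: command substitution inside the heredoc body expands only when the delimiter is unquoted; with a quoted delimiter (`<<'EOF'`) the `$(cat file.py)` text is passed to Codex literally instead of the file contents. A quick self-contained demonstration:

```shell
#!/usr/bin/env bash
f=$(mktemp)
printf 'hello\n' > "$f"

# Unquoted delimiter: $(...) inside the heredoc IS expanded.
expanded=$(cat <<EOF
contents: $(cat "$f")
EOF
)

# Quoted delimiter: the heredoc body is taken literally, no expansion.
literal=$(cat <<'EOF'
contents: $(cat "$f")
EOF
)

echo "$expanded"   # contents: hello
echo "$literal"    # contents: $(cat "$f")
rm -f "$f"
```

So for prompts that should embed file contents, use an unquoted delimiter (or plain `$(cat file.py)` directly in the argument).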
Note: --yolo is short for --dangerously-bypass-approvals-and-sandbox and is safe on HPC login nodes where you have limited permissions anyway. Do not combine --yolo with --full-auto as they are incompatible.
codex exec -m gpt-5.1 -c model_reasoning_effort="high" -s workspace-write \
--skip-git-repo-check --full-auto "refactor @module.py to async/await" 2>/dev/null
echo "fix the remaining issues" | codex exec --skip-git-repo-check resume --last 2>/dev/null
codex exec -C /path/to/project -m gpt-5.1 -c model_reasoning_effort="medium" \
-s read-only --skip-git-repo-check --full-auto "analyze architecture" 2>/dev/null
codex exec --profile production -c model_reasoning_effort="high" \
--full-auto "optimize performance in @app.py" 2>/dev/null
| Flag | Values | When to Use |
|---|---|---|
| -m, --model | gpt-5, gpt-5-codex, gpt-5.1, gpt-5.1-codex | Override default model |
| -c, --config | key=value | Runtime config override (repeatable) |
| -s, --sandbox | read-only, workspace-write, danger-full-access | Set execution permissions |
| --yolo | flag | REQUIRED on HPC - bypasses all sandbox restrictions (alias for --dangerously-bypass-approvals-and-sandbox). Cannot be used with --full-auto |
| -C, --cd | path | Change workspace directory |
| --skip-git-repo-check | flag | Allow execution outside git repos |
| --full-auto | flag | Non-interactive mode (workspace-write + approvals on failure). Cannot be used with --yolo |
| -p, --profile | string | Load configuration profile from config.toml |
| --json | flag | JSON event output (CI/CD pipelines) |
| -o, --output-last-message | path | Write final message to file |
| -i, --image | path[,path...] | Attach images (repeatable or comma-separated) |
| --oss | flag | Use local open-source model (requires Ollama) |
Model Reasoning Effort (-c model_reasoning_effort="<LEVEL>"):
- minimal: Quick tasks, simple queries
- low: Standard operations, routine refactoring
- medium: Complex analysis, architectural decisions (default)
- high: Critical code, security audits, complex algorithms

Model Verbosity (-c model_verbosity="<LEVEL>"):
- low: Minimal output
- medium: Balanced detail (default)
- high: Verbose explanations

Approval Prompts (-c approvals="<WHEN>"):
- on-request: Before any tool use
- on-failure: Only on errors (default for --full-auto)
- untrusted: Minimal prompts
- never: No interruptions (use with caution)

Runtime overrides (config file: ~/.codex/config.toml):
# Override single setting
codex exec -c model="gpt-5" "task"
# Override multiple settings
codex exec -c model="gpt-5" -c model_reasoning_effort="high" "task"
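The verbosity and approval knobs above use the same repeatable `-c key=value` mechanism. A small sketch that bundles several overrides into a reusable argument array (the array only; the commented line shows where codex would consume it):

```shell
#!/usr/bin/env bash
# Reusable override bundle: verbose output, prompt only on failure.
# The keys follow the -c key=value pattern documented above.
overrides=(-c model_verbosity="high" -c approvals="on-failure")

# Real usage would look like:
#   codex exec "${overrides[@]}" -m gpt-5.1 "task" 2>/dev/null
printf '%s\n' "${overrides[@]}"
```

Keeping overrides in an array avoids re-quoting mistakes when the same settings are reused across several invocations.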
Define in config.toml:
[profiles.research]
model = "gpt-5.1"
model_reasoning_effort = "high"
sandbox = "read-only"
[profiles.development]
model = "gpt-5.1-codex"
sandbox = "workspace-write"
Use with:
codex exec --profile research "analyze codebase"
Automatic inheritance: resumed sessions keep the original session's model and configuration unless overridden via flag injection.
Resume syntax:
# Resume last session
codex exec resume --last
# Resume with new prompt
codex exec resume --last "continue with next steps"
# Resume via stdin
echo "new instructions" | codex exec resume --last 2>/dev/null
# Resume specific session
codex exec resume <SESSION_ID> "follow-up task"
Flag injection (between exec and resume):
# Change reasoning effort for resumed session
codex exec -c model_reasoning_effort="high" resume --last
Before using high-impact flags, request user approval via AskUserQuestion:
- --full-auto: automated execution
- -s danger-full-access: system-wide access
- --yolo / --dangerously-bypass-approvals-and-sandbox: full sandbox bypass
When output contains warnings:
- Use AskUserQuestion to determine next steps.

Symptom: "shell is blocked by the sandbox" or permission errors
Root cause: the read-only sandbox mode restricts file system access
Solutions (priority order):
Stdin piping (recommended):
cat target.py | codex exec -m gpt-5 -c model_reasoning_effort="medium" \
--skip-git-repo-check --full-auto - 2>/dev/null
Explicit permissions:
codex exec -m gpt-5 -s read-only \
-c 'sandbox_permissions=["disk-full-read-access"]' \
--skip-git-repo-check --full-auto "@file.py" 2>/dev/null
Upgrade sandbox:
codex exec -m gpt-5 -s workspace-write \
--skip-git-repo-check --full-auto "review @file.py" 2>/dev/null
Symptom: "unexpected argument '--add-dir' found"
Cause: Flag does not exist in Codex CLI
Solution: Use -C <DIR> to change directory:
codex exec -C /target/dir -m gpt-5 --skip-git-repo-check \
--full-auto "task" 2>/dev/null
Symptom: Non-zero exit without clear message
Diagnostic steps:
- Remove 2>/dev/null to see the full stderr
- Check the CLI version: codex --version
- Inspect the config: cat ~/.codex/config.toml
- Run a minimal test: codex exec -m gpt-5 "hello world"
- Try the long-form flag: codex exec --model gpt-5 "test"

Symptom: "model not found" or authentication errors
Solutions:
- Verify the configured model: grep model ~/.codex/config.toml
- Try an alternative model: -m gpt-5-codex
- Fall back to a local model with --oss (requires Ollama)

Symptom: Cannot resume previous session
Diagnostic steps:
- List past sessions: codex history
- Use the --last flag instead of a specific session ID

Symptom: "Landlock sandbox error", "LandlockRestrict", or all file operations fail
Root Cause: HPC clusters use Landlock/seccomp kernel security modules that block Codex's default sandbox
✅ SOLUTION: Use the --yolo flag (priority order):
YOLO Flag (PRIMARY SOLUTION - WORKS ON HPC):
# Bypasses Landlock restrictions completely
codex exec --yolo -m gpt-5 -c model_reasoning_effort="high" --skip-git-repo-check \
"Analyze this code: $(cat /full/path/to/file.py)" 2>/dev/null
Why this works: --yolo (alias for --dangerously-bypass-approvals-and-sandbox) disables the Codex sandbox entirely, allowing direct file access on HPC systems. Note: Do not use --full-auto with --yolo as they are incompatible.
Manual Code Injection (fallback if --yolo unavailable):
# Pass code directly in prompt via command substitution
codex exec -m gpt-5 -c model_reasoning_effort="high" --skip-git-repo-check --full-auto \
"Analyze this code comprehensively: $(cat /full/path/to/file.py)" 2>/dev/null
Heredoc for Long Code:
codex exec --yolo -m gpt-5 -c model_reasoning_effort="high" --skip-git-repo-check "$(cat <<EOF
Analyze the following Python code for architecture, bugs, and optimization opportunities:
$(cat /home/user/script.py)
Provide technical depth with actionable insights.
EOF
)" 2>/dev/null
Run on Login Node (if compute node blocks outbound):
# SSH to login node first, then run codex there (not in Slurm job)
ssh login.cluster.edu
codex exec --yolo -m gpt-5 --skip-git-repo-check "analyze @file.py" 2>/dev/null
Use Apptainer/Singularity (if cluster supports):
# Build image with Codex installed, then run via Slurm
singularity exec codex.sif codex exec --yolo -m gpt-5 "task"
Best Practice for HPC:
- ALWAYS use the --yolo flag on HPC clusters - it is safe on login nodes where you already have limited permissions
- Combine --yolo with $(cat file.py) for maximum compatibility
- Keep 2>/dev/null in place unless you are debugging a failure
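These practices can be folded into one helper. A hedged sketch: `build_hpc_cmd` is a hypothetical name, and the function only prints the argv it would run, so it can be dry-run without Codex installed:

```shell
#!/usr/bin/env bash
# Assemble the recommended HPC invocation: --yolo, code embedded in the
# prompt via $(cat ...). Prints one argument per line instead of executing,
# so the sketch is testable without the codex binary.
build_hpc_cmd() {
  local file=$1 model=$2 effort=$3
  printf '%s\n' codex exec --yolo -m "$model" \
    -c "model_reasoning_effort=$effort" --skip-git-repo-check \
    "Analyze this code: $(cat "$file")"
}

src=$(mktemp)                      # stand-in for a real source file
printf 'x = 1\n' > "$src"
build_hpc_cmd "$src" gpt-5.1 high
rm -f "$src"
```

In real use, run codex directly with the same arguments and append `2>/dev/null`.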
Create profiles for common workflows:
- review: High reasoning, read-only
- refactor: Medium reasoning, workspace-write
- quick: Low reasoning, read-only
- security: High reasoning, workspace-write

HPC Clusters - --yolo is SAFE and REQUIRED:
- --yolo bypasses the Codex sandbox, but you still operate within HPC user restrictions
- ALWAYS use --yolo on HPC to avoid Landlock errors

General Use - Exercise Caution:
- Never use --yolo on unrestricted systems (your laptop, cloud VMs with full sudo)
- Prefer --full-auto + -s workspace-write for normal development

Always verify before:
- Using the danger-full-access sandbox (outside HPC)
- Using --yolo on personal machines with sudo access

Ask user approval for:
- workspace-write usage

codex exec --json -o result.txt -m gpt-5 \
-c model_reasoning_effort="medium" \
--skip-git-repo-check --full-auto \
"run security audit on changed files" 2>/dev/null
for file in *.py; do
cat "$file" | codex exec -m gpt-5 -c model_reasoning_effort="low" \
--skip-git-repo-check --full-auto "lint and format" - 2>/dev/null
done
# Step 1: Analysis
codex exec -m gpt-5 -c model_reasoning_effort="high" -s read-only \
--full-auto "analyze @codebase for architectural issues" 2>/dev/null
# Step 2: Resume with changes
echo "implement suggested refactoring" | \
codex exec -s workspace-write resume --last 2>/dev/null
If errors persist after troubleshooting:
Check documentation:
WebFetch https://developers.openai.com/codex/cli/reference
WebFetch https://developers.openai.com/codex/local-config#cli
Report to user: summarize the exact command, the error output, and the steps already tried.
Request guidance: ask how to proceed before retrying.
| Task Type | Recommended Model | Reasoning Effort |
|---|---|---|
| Quick syntax fixes | gpt-5.1 | minimal |
| Code review | gpt-5.1 | medium |
| Refactoring | gpt-5.1-codex | medium |
| Architecture analysis | gpt-5.1 | high |
| Security audit | gpt-5.1 | high |
| Algorithm optimization | gpt-5.1-codex | high |
| Documentation generation | gpt-5.1 | low |