| name | context-setter |
| description | Generate comprehensive context-setting markdown files for coding tasks by intelligently exploring the codebase and structuring work with ROLE, CONSTRAINTS, STYLE, file reading order, and structured tasks. Use proactively when the user is starting a complex dev task requiring multiple steps or file reads. |
Creates context-setting markdown files that transform vague user requests into comprehensive, structured prompts for coding tasks. Output files follow proven patterns: ROLE definition, CONSTRAINTS (locked decisions), STYLE guidelines, ordered file reading lists, and numbered tasks with clear deliverables.
Primary use: extract and sequence constraints from existing docs (PRDs, AGENTS.md, specs) into an executable task structure. Secondary use: when docs don't exist, explore the codebase and ask clarifying questions to build the structure from scratch.
Once the context file is generated, start a fresh Claude Code conversation and reference the file in your first message:
Read /path/to/context-file.md and execute the tasks
This gives Claude a clean context window with all necessary structure, constraints, and task sequencing upfront.
Clarify vague requests: if the user provides a rough or unclear prompt, ask clarifying questions to identify the scope before generating the file.
File Discovery Priority:
1) AGENTS.md / CLAUDE.md - project guidelines and conventions
2) README.md - project overview and setup
3) workings/prds/ or workings/*.md - product requirements documents
4) tests/ folder - patterns for verification and testing approach
5) workings/context-setting/ - previously generated context files
Search Strategy:
Use glob patterns to locate relevant files (e.g. **/*.md, **/tests/**, src/**/*.ts).
Extract from codebase: constraints, conventions, and testing patterns when the documents above are missing.
Document Structure Template:
# ROLE
[Single line: persona + responsibility for this specific task]
## MISSION
[2-4 sentence non-technical sketch of what you're building/fixing and why]
[Answer: What's the goal? What's already done? What needs doing?]
[Keep it lean - business/user value, not implementation details]
## CONSTRAINTS
[Numbered list of locked decisions, constraints, specs that must not change]
[Source these from AGENTS.md, PRDs, existing specs - never invent]
## STYLE
British English. Concise, shippable. When proposing edits: filename + bullets (+ line numbers if needed).
[Add task-specific style notes if relevant]
Never invent spec; if missing, mark "out of scope for v0.1".
## ORDER OF FILES (read in this order)
1) [absolute path to first file]
2) [absolute path to second file]
...
## TASK 1 – [Clear task name]
[Clear objective with specific deliverables]
[Sub-steps if complex, each actionable]
[Expected output format if relevant]
## TASK 2 – [Next task name]
[Continue sequential tasks...]
[Include testing/verification tasks if code changes involved]
## OUTPUT FORMAT
[Specify expected format: compact results, bullets, pass/fail tables, fixture contents, etc.]
[Set expectation for verbosity - "compact", "exact contents", "no extra prose"]
---
## P.S. Meta Notes
[2-4 sentences explaining key decisions made while structuring this context doc]
[Surface interesting tradeoffs, assumptions, or reasoning about task sequencing/scope]
[Examples: why certain files are prioritized, why tasks are ordered this way, what's intentionally deferred]
[Think: what would the user find valuable to know about how you approached this? Especially if they skim read the document.]
Section Guidelines:
ROLE: single line - persona plus responsibility for this specific task.
MISSION: 2-4 sentence non-technical sketch - the goal, what's already done, what needs doing.
CONSTRAINTS: numbered locked decisions sourced from AGENTS.md, PRDs, and existing specs - never invented.
STYLE: British English, concise and shippable; add task-specific style notes if relevant.
ORDER OF FILES: absolute paths, one per line, e.g. 1) /absolute/path/to/file.md
TASKS: sequential headings of the form "TASK N – [Clear descriptive name]", each with clear deliverables and actionable sub-steps.
OUTPUT FORMAT: expected result format and verbosity.
Filename Convention:
keyword-keyword-keyword.md
Filename creation: 3-5 descriptive keywords joined with hyphens, e.g. cloudflare-mailbox-mcp-handshake.md, pubstack-core-system-build.md, digit-export-implementation.md
Save Location:
/workspace/project/workings/context-setting/
After saving: point the user at the new file and suggest starting a fresh conversation that references it, as described above.
User prompt: "setup cloudflare worker with secure handshake from mcp server, add logging"
Generated context file:
# ROLE
You are the DevOps Engineer for Cloudflare Workers deployment and observability.
## MISSION
Build secure MCP server integration with Cloudflare Workers mailbox. MCP servers (running in Claude/ChatGPT) need to post events to our Worker using HMAC authentication. Worker already validates and writes to GitHub - we need the MCP side implemented with proper handshake and observability via Workers Logs.
## CONSTRAINTS
1) HMAC-SHA256 handshake for secure MCP → Worker communication
2) 300s replay window for request validation
3) 2 MB binary payload limit
4) console.log statements visible in Workers Logs
5) Environment vars via wrangler.toml or dashboard secrets
## STYLE
British English. Concise, shippable. When proposing edits: filename + bullets.
Never invent spec; if missing, mark "out of scope for v0.1".
## ORDER OF FILES (read in this order)
1) /workspace/project/AGENTS.md
2) /workspace/project/workings/mailbox-brief-v01.md
3) /workspace/project/README.md
4) /workspace/project/tests/
5) https://developers.cloudflare.com/workers/observability/logs/workers-logs/
## TASK 1 – Implement HMAC handshake validation
Add request signature verification:
- Extract signature from Authorization header
- Compute HMAC-SHA256 of request body with shared secret
- Validate timestamp within 300s window
- Return 401 if validation fails
- Return 200 with {status: "authorized"} if valid
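A minimal reference sketch of the verification flow (the X-Timestamp header, the hex signature encoding, and the MAILBOX_SECRET binding are assumptions; confirm against the mailbox brief before implementing):
```typescript
// Sketch only: HMAC-SHA256 handshake check in a Worker fetch handler.
// Assumed: hex signature in the Authorization header, Unix-seconds timestamp
// in an X-Timestamp header, shared secret bound as env.MAILBOX_SECRET.
export interface Env {
  MAILBOX_SECRET: string;
}

const REPLAY_WINDOW_SECONDS = 300;

async function hmacHex(secret: string, message: string): Promise<string> {
  const key = await crypto.subtle.importKey(
    "raw",
    new TextEncoder().encode(secret),
    { name: "HMAC", hash: "SHA-256" },
    false,
    ["sign"],
  );
  const sig = await crypto.subtle.sign("HMAC", key, new TextEncoder().encode(message));
  return [...new Uint8Array(sig)].map((b) => b.toString(16).padStart(2, "0")).join("");
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const signature = request.headers.get("Authorization") ?? "";
    const timestamp = Number(request.headers.get("X-Timestamp") ?? "0");
    const body = await request.text();

    // Reject anything outside the 300 s replay window.
    if (!timestamp || Math.abs(Date.now() / 1000 - timestamp) > REPLAY_WINDOW_SECONDS) {
      return new Response("stale or missing timestamp", { status: 401 });
    }

    // Sign timestamp + body so the timestamp itself is authenticated.
    // A production version should use a constant-time comparison here.
    const expected = await hmacHex(env.MAILBOX_SECRET, `${timestamp}.${body}`);
    if (signature !== expected) {
      return new Response("invalid signature", { status: 401 });
    }

    return Response.json({ status: "authorized" });
  },
};
```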
## TASK 2 – Set up Cloudflare Workers Logs
Configure logging per Cloudflare docs:
- Enable Workers Logs in dashboard
- Add strategic console.log statements for debugging
- Test that logs appear in real-time stream
- Document log access for team
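Objects passed to console.log are serialised into the log entry, so structured payloads are easier to filter than free-text strings (verify field behaviour against the Workers Logs docs linked above). Field names below are placeholders:
```typescript
// Illustrative only: one structured log line per handshake decision.
console.log({
  event: "handshake",
  outcome: "authorized",            // or "rejected"
  reason: null,                     // e.g. "stale_timestamp", "bad_signature"
  requestId: crypto.randomUUID(),   // correlate related log lines
});
```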
## TASK 3 – Test handshake locally
Create test script:
- Generate valid HMAC signatures
- Test replay window enforcement
- Verify 401 responses for invalid signatures
- Verify 200 responses for valid requests
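A possible shape for the test script, assuming the Worker from Task 1 runs locally via wrangler dev on port 8787 and the secret is exported as MAILBOX_SECRET (both assumptions):
```typescript
// Hypothetical local test script for the handshake (run with a recent Node/tsx).
import { createHmac } from "node:crypto";

const SECRET = process.env.MAILBOX_SECRET ?? "dev-secret";
const URL = "http://localhost:8787/";

async function send(timestamp: number, tamper = false): Promise<number> {
  const body = JSON.stringify({ event: "ping" });
  let signature = createHmac("sha256", SECRET).update(`${timestamp}.${body}`).digest("hex");
  if (tamper) {
    // Flip the last hex character to simulate a forged signature.
    signature = signature.slice(0, -1) + (signature.endsWith("0") ? "1" : "0");
  }
  const res = await fetch(URL, {
    method: "POST",
    headers: { Authorization: signature, "X-Timestamp": String(timestamp) },
    body,
  });
  return res.status;
}

const now = Math.floor(Date.now() / 1000);
console.log("valid request:      ", await send(now));        // expect 200
console.log("tampered signature: ", await send(now, true));  // expect 401
console.log("stale timestamp:    ", await send(now - 600));  // expect 401
```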
## OUTPUT FORMAT
Compact results. For each task, confirm completion with:
- Filename + line numbers for code changes
- Test results (PASS/FAIL)
- Log output samples
---
## P.S. Meta Notes
[2-4 sentences explaining key decisions made while structuring this context doc]
[Surface interesting tradeoffs, assumptions, or reasoning about task sequencing/scope]
[Examples: why certain files are prioritized, why tasks are ordered this way, what's intentionally deferred]
[Think: what would the user find valuable to know about how you approached this?]
User prompt: "build the digit export, N=50 latest public posts, SHA-256 etag"
Generated context file:
# ROLE
You are the Lead Engineer for the Symbient Digit export API.
## MISSION
Build a public JSON export of the latest 50 posts for consumption by external apps. Posts are stored as markdown files with front matter - we need to filter public-only, extract clean text (stripping code/quotes), and serve with proper ETag caching. This powers the "what's new" feed for subscribers.
## CONSTRAINTS
1) N=50 latest public-visibility posts only
2) ETag = SHA-256 hash of JSON response body
3) Monotonic `updated` timestamp field required
4) Self-authored permalink fallback if in_reply_to null
5) Text extraction drops fenced code blocks and `> @` quote lines
## STYLE
British English. Concise, shippable. When proposing edits: filename + bullets.
Never invent spec; if missing, mark "out of scope for v0.1".
## ORDER OF FILES (read in this order)
1) /workspace/project/AGENTS.md
2) /workspace/project/workings/digits-v0.1-prd.md
3) /workspace/project/src/
## TASK 1 – Query latest N=50 public posts
Implement query logic:
- Filter by front matter `visibility: public`
- Sort by filename timestamp DESC
- Limit to 50 results
- Return array of post objects
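A minimal sketch, assuming posts are markdown files whose filenames begin with a sortable timestamp and whose YAML front matter carries a visibility field (directory layout and field names should be confirmed against the PRD):
```typescript
// Sketch only: list, filter, and sort posts from a directory of markdown files.
import { readdir, readFile } from "node:fs/promises";
import { join } from "node:path";

interface RawPost {
  filename: string;
  frontMatter: Record<string, string>;
  body: string;
}

// Very small front matter parser: flat `key: value` pairs between --- fences.
function parseFrontMatter(source: string): { frontMatter: Record<string, string>; body: string } {
  const match = /^---\n([\s\S]*?)\n---\n?/.exec(source);
  if (!match) return { frontMatter: {}, body: source };
  const frontMatter: Record<string, string> = {};
  for (const line of match[1].split("\n")) {
    const idx = line.indexOf(":");
    if (idx > 0) frontMatter[line.slice(0, idx).trim()] = line.slice(idx + 1).trim();
  }
  return { frontMatter, body: source.slice(match[0].length) };
}

export async function latestPublicPosts(postsDir: string, limit = 50): Promise<RawPost[]> {
  const posts: RawPost[] = [];
  for (const filename of (await readdir(postsDir)).filter((f) => f.endsWith(".md"))) {
    const { frontMatter, body } = parseFrontMatter(await readFile(join(postsDir, filename), "utf8"));
    if (frontMatter.visibility === "public") posts.push({ filename, frontMatter, body });
  }
  // Timestamp-prefixed filenames sort lexicographically, so a string sort is enough.
  return posts.sort((a, b) => b.filename.localeCompare(a.filename)).slice(0, limit);
}
```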
## TASK 2 – Generate Digit JSON format
Transform each post to Digit schema:
- Extract title, body, permalink
- Strip fenced code (```...```) and quote lines (> @...)
- Add monotonic `updated` timestamp
- Use self-authored permalink if no in_reply_to
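A sketch of the text-extraction rules from CONSTRAINTS 5; the patterns are illustrative and ignore edge cases such as indented or tilde fences:
```typescript
// Drop fenced code blocks and "> @" quote lines; keep the rest of the body text.
export function extractText(markdown: string): string {
  const kept: string[] = [];
  let inFence = false;
  for (const line of markdown.split("\n")) {
    if (/^\s*`{3}/.test(line)) {       // fence marker: toggle state, drop the marker line
      inFence = !inFence;
      continue;
    }
    if (inFence) continue;             // drop lines inside fenced code blocks
    if (/^>\s*@/.test(line)) continue; // drop "> @" quote lines
    kept.push(line);
  }
  return kept.join("\n").replace(/\n{3,}/g, "\n\n").trim();
}
```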
## TASK 3 – Compute SHA-256 ETag
Hash the JSON response:
- Serialize JSON deterministically
- Compute SHA-256 hash
- Return in ETag header as `sha256:[hash]`
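A minimal sketch using Web Crypto (available in Workers and Node 18+); deterministic output depends on building the payload with a fixed key order upstream, and the response wiring shown in the comments is an assumption:
```typescript
// Serialise once, hash the exact bytes that will be sent, prefix per the spec.
export async function etagFor(payload: unknown): Promise<{ body: string; etag: string }> {
  const body = JSON.stringify(payload);
  const digest = await crypto.subtle.digest("SHA-256", new TextEncoder().encode(body));
  const hex = [...new Uint8Array(digest)].map((b) => b.toString(16).padStart(2, "0")).join("");
  return { body, etag: `sha256:${hex}` };
}

// Usage sketch: serve the body and ETag from the same serialisation.
// const { body, etag } = await etagFor(digits);
// return new Response(body, { headers: { "Content-Type": "application/json", ETag: etag } });
```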
## TASK 4 – Verifiable test
Create test:
- Generate 60 posts (30 public, 30 private)
- Verify export returns exactly 50 public posts
- Verify ETag matches SHA-256 of response
- Verify text extraction drops code/quotes correctly
## OUTPUT FORMAT
For each task:
- Filename + line numbers for implementation
- Test output showing PASS/FAIL
- Sample JSON response (first 2 Digits)
---
## P.S. Meta Notes
[2-4 sentences explaining key decisions made while structuring this context doc]
[Surface interesting tradeoffs, assumptions, or reasoning about task sequencing/scope]
[Examples: why certain files are prioritized, why tasks are ordered this way, what's intentionally deferred]
[Think: what would the user find valuable to know about how you approached this?]
Before saving the context file, verify:
- Filename follows the keyword-keyword-keyword.md pattern (3-5 descriptive keywords)
- File is saved under workings/context-setting/