---
name: research-paper-workflow
description: Standardized end-to-end workflow for academic paper production across Codex, Claude Code, and Gemini. Use when a user needs to choose a paper type (empirical, qualitative, systematic review, methods, theory), select a workflow stage, and produce consistent artifacts under RESEARCH/[topic]/ with explicit task IDs, quality gates, and submission-ready outputs.
---
# Research Paper Workflow
Run a model-agnostic paper workflow using shared Task IDs and artifact contracts.
This is a self-contained skill package. All assets needed for execution (workflows, skill specifications, output templates, standards, and agent roles) are bundled in subdirectories of this package. No external repo access is needed.
## Quick Start
- Ask for paper_type: empirical, qualitative, systematic-review, methods, or theory.
- Ask for task_id from the contract (for example, F3 or G1).
- Execute the task and write outputs to RESEARCH/[topic]/ using the exact file paths.
- Apply quality gates before submission tasks (H1, H2).
## Available Workflow Commands
These commands map to the same behavior across Codex, Claude Code, and Gemini:
/paper [topic] [venue] # Master router: choose paper type + task ID
/lit-review [topic] [year range] # Systematic literature review (PRISMA)
/paper-read [URL or DOI] # Deep paper analysis
/find-gap [research area] # Identify research gaps
/build-framework [theory/concept] # Build theoretical framework
/academic-write [section] [topic] # Academic writing assistance
/synthesize [topic] [outcome_id] # Evidence synthesis / meta-analysis
/paper-write [topic] [type] [venue] # Full manuscript drafting
/study-design [topic] # Empirical study design
/ethics-check [topic] # Ethics / IRB pack
/submission-prep [topic] [venue] # Submission package
/rebuttal [topic] # Rebuttal / response to reviewers
/code-build [method] --domain ... # Build academic research code
/proofread [topic] # AI de-trace / final proofreading
/academic-present [topic] # Academic presentation preparation
## Bundled Workflows
Full workflow definitions are included in the workflows/ subdirectory of this skill package. When a user invokes any command above (e.g. /paper, /lit-review), read the corresponding file from workflows/<command-name>.md for the complete execution instructions.
The workflows/paper.md file is the master router: it maps every Task ID (A1–K4) to the correct sub-workflow or skill card. Start there for any task-ID-based request.
## Skill Directory Structure
The skill system covers the full research lifecycle across 11 stages:
skills/
├── A_framing/ (question-refiner, contribution-crafter, hypothesis-generator, theory-mapper, gap-analyzer, venue-analyzer)
├── B_literature/ (academic-searcher, paper-screener, paper-extractor, citation-snowballer, fulltext-fetcher, citation-formatter, concept-extractor, literature-mapper, reference-manager-bridge)
├── C_design/ (study-designer, rival-hypothesis-designer, robustness-planner, dataset-finder, variable-constructor, data-dictionary-builder, data-management-plan, prereg-writer, variable-operationalizer)
├── D_ethics/ (ethics-irb-helper, statement-generator, deidentification-planner)
├── E_synthesis/ (effect-size-calculator, evidence-synthesizer, quality-assessor, publication-bias-checker, qualitative-coding)
├── F_writing/ (manuscript-architect, analysis-interpreter, effect-size-interpreter, table-generator, figure-specifier, meta-optimizer, discussion-writer)
├── G_compliance/ (prisma-checker, reporting-checker, tone-normalizer)
├── J_proofread/ (ai-fingerprint-scanner, human-voice-rewriter, similarity-checker, final-proofreader)
├── H_submission/ (submission-packager, rebuttal-assistant, peer-review-simulation, fatal-flaw-detector, reviewer-empathy-checker, credit-taxonomy-helper, limitation-auditor)
├── I_code/ (code-builder, data-cleaning-planner, data-merge-planner, code-specification, code-planning, code-execution, code-review, reproducibility-auditor, stats-engine)
├── K_presentation/ (presentation-planner, slide-architect, slidev-scholarly-builder, beamer-builder)
├── Z_cross_cutting/ (academic-context-maintainer, metadata-enricher, model-collaborator, self-critique)
└── domain-profiles/ (economics, finance, psychology, biomedical, education, cs-ai, ...)
## Output Structure
RESEARCH/[topic]/
├── context/             # Project-level research state + decision log
├── framing/             # A-stage outputs (RQ, hypothesis, contribution, venue)
├── protocol.md          # Research protocol
├── search_log.md        # Reproducible search records
├── screening/           # Screening logs + PRISMA flow
├── notes/               # Individual paper notes
├── extraction_table.md  # Data extraction table
├── synthesis.md         # Final synthesis report
├── manuscript/          # Outline, draft, claims map, figures plan
├── proofread/           # AI detection, humanization, similarity, final proofread
├── submission/          # Cover letter, checklist, statements
├── revision/            # Rebuttal + response materials
├── analysis/            # Code + data pipelines
├── presentation/        # Slide deck spec, slides.md / slides.tex
└── bibliography.bib     # BibTeX references
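The tree above can be scaffolded up front so every stage writes into a path that already exists. A minimal bash sketch, assuming a placeholder topic slug of example-topic (substitute your own):

```shell
# Scaffold the RESEARCH/[topic]/ output tree for a new project.
# "example-topic" is a placeholder slug, not part of the contract.
topic="example-topic"

# Stage directories (brace expansion requires bash, not POSIX sh).
mkdir -p "RESEARCH/$topic"/{context,framing,screening,notes,manuscript,proofread,submission,revision,analysis,presentation}

# Top-level artifact files, created empty for later tasks to fill.
touch "RESEARCH/$topic/protocol.md" \
      "RESEARCH/$topic/search_log.md" \
      "RESEARCH/$topic/extraction_table.md" \
      "RESEARCH/$topic/synthesis.md" \
      "RESEARCH/$topic/bibliography.bib"
```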
## Required Behavior
- Use the canonical task and output definitions in references/workflow-contract.md.
- Keep stage labels and task IDs unchanged across models.
- Do not infer stage order alphabetically when the contract exposes explicit ordering metadata.
- When self-critique is one of the required skills, preserve critique history across revision rounds and treat review/self_critique_log.md as the canonical issue register for the loop.
- If a requested output is missing prerequisites, create a gap note and ask whether to:
  - continue with placeholders, or
  - run the prerequisite task first.
- Keep claims, methods, and evidence aligned (run integrity checks for stage G).
- Apply references/academic-output-rubric.md whenever producing scholarly prose, synthesis, design, review, or submission artifacts.
- When a workflow references templates/<name>.md, load the template from the templates/ subdirectory of this package.
## Skill Loading Strategy
Three-tier loading for token efficiency. All paths are relative to this skill package directory:
- Quick lookup (~3KB): use skills-summary.md (skill names + one-line descriptions per stage). Use this to identify which skill to invoke.
- Default reference (~19KB): use skills-core.md (consolidated process descriptions, templates, and output formats). Use this when executing a skill.
- Full specification: load skills/[stage]/[skill-name].md (detailed edge cases, error recovery, quality bars, and verbose templates). Use this only when the core reference is insufficient.
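The three tiers above amount to a small path lookup. The helper below is an illustrative sketch only; resolve_skill_doc and its tier names are hypothetical, not part of the workflow contract, but the file paths mirror this package's layout:

```python
from pathlib import Path

def resolve_skill_doc(package_root, tier, stage=None, skill=None):
    """Map a loading tier to the file to read, relative to the package root.

    Hypothetical helper for the three-tier strategy; not part of the contract.
    """
    root = Path(package_root)
    if tier == "summary":   # ~3KB quick lookup: identify which skill to invoke
        return root / "skills-summary.md"
    if tier == "core":      # ~19KB default reference: execute a skill
        return root / "skills-core.md"
    if tier == "full":      # full spec: edge cases, error recovery, quality bars
        return root / "skills" / stage / f"{skill}.md"
    raise ValueError(f"unknown tier: {tier!r}")

# Escalate to the full spec only when the core reference is insufficient.
path = resolve_skill_doc(".", "full", stage="B_literature", skill="paper-screener")
# path is skills/B_literature/paper-screener.md
```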
## Bundled Assets
This package includes the following subdirectories:
| Directory | Contents |
|---|---|
| workflows/ | 16 workflow definitions (slash commands) |
| references/ | Stage playbooks + workflow contract |
| skills/ | 71 detailed skill spec files across 13 stage directories |
| skills-summary.md | Quick-reference skill index (~3KB) |
| skills-core.md | Consolidated skill reference (~19KB) |
| templates/ | 44 output templates for manuscripts, submissions, ethics, etc. |
| standards/ | Canonical contract YAML + capability map + agent profiles |
| roles/ | 10 agent role definitions for orchestrator execution |
## References
- Task model + outputs: references/workflow-contract.md
- Platform routing map: references/platform-routing.md
- Coverage matrix: references/coverage-matrix.md
- Academic output rubric: references/academic-output-rubric.md
- Stage playbooks:
  - references/stage-A-framing.md (tasks A1–A5)
  - references/stage-B-literature.md (tasks B1–B6)
  - references/stage-C-design.md (tasks C1–C5)
  - references/stage-D-ethics.md (tasks D1–D3)
  - references/stage-E-synthesis.md (tasks E1–E5)
  - references/stage-F-writing.md (tasks F1–F6)
  - references/stage-G-compliance.md (tasks G1–G4)
  - references/stage-J-proofread.md (tasks J1–J4)
  - references/stage-H-submission.md (tasks H1–H4)
  - references/stage-I-code.md (tasks I1–I9)
  - references/stage-K-presentation.md (tasks K1–K4)