rrwrite-draft-section
Drafts a specific manuscript section using repository data and citation indices. Enforces fact-checking via Python tools.
| Field | Value |
|---|---|
| name | rrwrite-draft-section |
| description | Drafts a specific manuscript section using repository data and citation indices. Enforces fact-checking via Python tools. |
| arguments | `[{"name":"target_dir","description":"Output directory for manuscript files (e.g., manuscript/repo_v1)","default":"manuscript"}]` |
| allowed-tools | null |
| context | fork |
Read `{target_dir}/outline.md` to understand section requirements and evidence files. Then get the word-count target for the section:

```bash
python scripts/rrwrite-config-manager.py --section {section_name}
```

This ensures the draft meets the target word count (±20% variance allowed).
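A minimal sketch of that check (the target value below is illustrative; the real one comes from `rrwrite-config-manager.py`, and the file path assumes a drafted Methods section):

```python
from pathlib import Path

# Illustrative check of the +/-20% rule; 'target' would come from
# rrwrite-config-manager.py rather than being hard-coded.
target = 800
text = Path("{target_dir}/methods.md").read_text()
words = len(text.split())
low, high = int(target * 0.8), int(target * 1.2)
print(f"{words} words (allowed range: {low}-{high})")
```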
Check `references.bib` or `{target_dir}/literature_citations.bib` to find relevant citation keys. Use LaTeX notation for inline math (e.g., $x^2$) and bracketed citation keys (e.g., [smith2020]).

CRITICAL: You must verify all numerical claims.
Every statistic must be traceable to a repository data file (e.g., *.csv or *.log). Verify each value before citing it:

```bash
python scripts/rrwrite-verify-stats.py --file <PATH> --col [NAME] --op [mean/max/min]
```
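For example, before stating a mean value from a hypothetical `results/metrics.csv`, the check might look like this (the file path and column name are illustrative, not part of the skill):

```python
import subprocess

# Illustrative invocation of the verifier; 'results/metrics.csv' and
# 'accuracy' are assumptions standing in for real repository data.
subprocess.run([
    "python", "scripts/rrwrite-verify-stats.py",
    "--file", "results/metrics.csv",
    "--col", "accuracy",
    "--op", "mean",
], check=True)
```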
Include figures when they are available and relevant to the section.

Priority System:
1. Repository figures (`figures/from_repo/`) - these are ACTUAL research outputs
2. Generated figures (`figures/generated/`) - these are supplementary visualizations

Before drafting, check for figures from the manifest:
```python
from pathlib import Path
import json
import sys

sys.path.append(str(Path.cwd() / "scripts"))
from rrwrite_figure_generator import FigureSelector

# Check for figure manifest (created by extraction stage)
manifest_path = Path("{target_dir}") / "figures/figure_manifest.json"

if manifest_path.exists():
    # Get figures recommended for this section (prioritizes repo figures)
    section_figures = FigureSelector.get_figures_from_manifest(
        section_name="{section_name}",
        manifest_path=manifest_path,
        prioritize_repo_figures=True,  # Priority 1 first
    )
    print(f"Available figures for {section_name}:")
    for fig in section_figures:
        priority_label = "REPO" if fig['priority'] == 1 else "GENERATED"
        print(f"  [{priority_label}] {fig['id']}: {fig['default_caption']}")
        print(f"    Path: {fig['path']}")
        if 'generating_script' in fig and fig['generating_script']:
            print(f"    Script: {fig['generating_script']}")
else:
    # Fallback: use old method (generated figures only)
    figures_dir = Path("{target_dir}") / "figures"
    available_figures = FigureSelector.get_figures_for_section(
        section_name="{section_name}",
        figures_dir=figures_dir,
    )
    print(f"Found {len(available_figures)} figures for {section_name}")
```
Markdown format for figures:
![Workflow diagram](../figures/from_repo/workflow_diagram.png)

**Figure 1**: Workflow diagram showing the complete analysis pipeline implemented in this repository. This figure illustrates the data flow from input processing through statistical analysis to final output generation.
Guidelines:
- Use relative paths from the `sections/` directory (e.g., `../figures/from_repo/`); see the sketch below.
- Use the caption format `**Figure N**: Description`.
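A minimal sketch of deriving such a relative path, assuming sections live in `{target_dir}/sections` (that layout and the figure filename are assumptions for illustration):

```python
import os
from pathlib import Path

# Assumed layout: sections in {target_dir}/sections, figures in
# {target_dir}/figures/from_repo. Adjust if the project differs.
section_dir = Path("{target_dir}") / "sections"
figure = Path("{target_dir}") / "figures" / "from_repo" / "workflow_diagram.png"
rel = os.path.relpath(figure, start=section_dir)
print(f"![Workflow diagram]({rel})")  # -> ../figures/from_repo/workflow_diagram.png
```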
Include tables when they summarize repository data relevant to the section. Before drafting, check for pre-generated TSV tables from repository analysis:
```python
from pathlib import Path
import sys

sys.path.append(str(Path.cwd() / "scripts"))
from rrwrite_table_generator import TableSelector

# Check for available tables
data_tables_dir = Path("{target_dir}") / "data_tables"

if data_tables_dir.exists():
    available_tables = TableSelector.get_tables_for_section(
        section_name="{section_name}",
        data_tables_dir=data_tables_dir,
    )
    print(f"Found {len(available_tables)} relevant data tables for {section_name}:")
    for table_info in available_tables:
        if table_info['exists']:
            print(f"  - {table_info['name']}")
```
To include a table in your section:
```python
import sys
from pathlib import Path

import pandas as pd

sys.path.append(str(Path.cwd() / "scripts"))
from rrwrite_table_generator import TableGenerator

# Load TSV table
df = pd.read_csv("data_tables/repository_statistics.tsv", sep='\t', comment='#')

# Optional: filter or transform data
df = df.head(10)  # Limit to top 10 rows

# Format as a markdown table
table_md = TableGenerator.format_markdown_table(
    df,
    caption="**Table 1: Repository composition by file type**",
    alignment={'file_count': 'right', 'total_size_mb': 'right'},
)

# Include in section text
section_text = f"""
The repository structure is summarized in Table 1, showing the distribution
of files across categories.

{table_md}

As shown in Table 1, the repository contains...
"""
```
Tables generated during repository analysis:

| File | Content | Best for sections |
|---|---|---|
| `file_inventory.tsv` | Complete file listing with metadata | Results (filtered) |
| `repository_statistics.tsv` | Summary metrics by category | Methods, Results |
| `size_distribution.tsv` | File size distribution quartiles | Results |
| `research_indicators.tsv` | Detected research topics | Introduction, Methods |
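To see which research indicators were detected before citing them in an Introduction or Methods draft, a quick inspection might look like this (the column layout of the TSV is not documented above, so the sketch prints the header rather than assuming specific columns):

```python
import pandas as pd

# Peek at detected research topics; column names are not specified in this
# document, so inspect the header before relying on particular fields.
df = pd.read_csv("data_tables/research_indicators.tsv", sep="\t", comment="#")
print(df.columns.tolist())
print(df.head())
```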
When drafting Methods sections, cite ONLY specific tools, datasets, and methodologies that were actually used:
✅ Appropriate citations: specific tools, datasets, and methodologies actually used in the work.
❌ Inappropriate citations: abstract concepts, general principles, or motivational references.
Rationale: Methods describes what YOU did, not general principles. Abstract concepts belong in Introduction (motivation) or Discussion (broader context).
Example (correct):
Schema validation was performed using LinkML specifications [LinkML2024].
Example (incorrect):
All data followed FAIR principles [Wilkinson2016].
When drafting the Availability (or "Data and Code Availability") section:
Should include: where the source code is hosted and under what license, installation requirements, documentation locations, and data deposition records (e.g., DOIs).
Should NOT include: conceptual or motivational citations (e.g., references to general principles).
Format: Concise, factual statements. 50-150 words typical.
Example (correct):

```markdown
# Data and Code Availability

Source code is available at https://github.com/user/project under the MIT license.
Installation requires Python 3.10+ and can be completed via `pip install project`.
Complete documentation is hosted at https://project.readthedocs.io.
All experimental data are deposited in Zenodo (DOI: 10.5281/zenodo.1234567).
```
Example (incorrect - has inappropriate citations):

```markdown
... complete documentation following FAIR principles [Wilkinson2016].
```
When drafting Results sections, cite ONLY to report what was observed or measured, not to explain concepts or provide justification:
✅ Appropriate citations: references that identify what was observed or measured in your work (e.g., the papers found by a literature search).
❌ Inappropriate citations: references that explain concepts, justify decisions, or provide background context.
Rationale: Results reports OBSERVATIONS and MEASUREMENTS from your work. Explanations, justifications, and contextual citations belong in Introduction (motivation/background) or Discussion (interpretation/implications).
Example (correct):
The literature search identified 29 papers spanning reproducible research [Wilkinson2016, Barker2022], computational notebooks [Pimentel2023], and AI-assisted writing [CHI2024, Ros2025].
(These are examples of papers found - actual results being reported)
Example (incorrect):
Literature evidence tracking established provenance chains between claims and sources [Himmelstein2019, CliVER2024].
(This explains what provenance chains are/do, not reporting a measurement)
Example (incorrect):
This evidence chain addresses concerns about hallucination in AI writing [CliVER2024].
(This justifies WHY we did something - belongs in Introduction or Discussion)
Write the section to `{target_dir}/SECTIONNAME.md`, where SECTIONNAME is one of the following (see the sketch after this list):

- `abstract.md` for Abstract
- `introduction.md` for Introduction
- `methods.md` for Methods
- `results.md` for Results
- `discussion.md` for Discussion
- `conclusion.md` for Conclusion
- `availability.md` for Data and Code Availability
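A small sketch of resolving the output path from the section name (the dictionary simply mirrors the list above; it is an illustrative helper, not part of the skill's scripts):

```python
from pathlib import Path

# Mirrors the section-to-filename list above.
SECTION_FILES = {
    "abstract": "abstract.md",
    "introduction": "introduction.md",
    "methods": "methods.md",
    "results": "results.md",
    "discussion": "discussion.md",
    "conclusion": "conclusion.md",
    "availability": "availability.md",
}

out_path = Path("{target_dir}") / SECTION_FILES["{section_name}"]
print(f"Writing section to {out_path}")
```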
After drafting, validate the section:

```bash
python scripts/rrwrite-validate-manuscript.py --file {target_dir}/SECTIONNAME.md --type section
```
After successful validation, update workflow state:
```python
import sys
from pathlib import Path

sys.path.insert(0, str(Path('scripts').resolve()))
from rrwrite_state_manager import StateManager

manager = StateManager(output_dir="{target_dir}")
manager.add_section_completed("SECTIONNAME")  # e.g., "methods", "results"
```
Display updated progress:
```bash
python scripts/rrwrite-status.py --output-dir {target_dir}
```
Report validation status and updated workflow progress. If validation fails, fix issues and re-validate.