table-results-review
// Review ML/AI result tables, LaTeX table files, captions, provenance, and paper table style. Use for benchmark, ablation, metric, model-spec, and compute tables.
| name | table-results-review |
| description | Review ML/AI result tables, LaTeX table files, captions, provenance, and paper table style. Use for benchmark, ablation, metric, model-spec, and compute tables. |
| allowed-tools | Read, Write, Edit, Bash, Glob, WebSearch, WebFetch |
Audit standalone paper tables before they become paper evidence, meeting material, or rebuttal material.
Use this skill when:
- the paper keeps standalone table sources such as `tables/results.tex` and inserts them into sections with `\input{tables/results}`

Do not use this skill for rendered figure assets or plot styling. Use figure-results-review for `figures/*.pdf`, `figures/*.png`, and `figures/*.tex` figure bundles. Use paper-evidence-board when the main task is linking many figures and tables to claims across the whole paper.
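The standalone-source pattern can be sketched as a two-file pair (the section path and label here are hypothetical):

```latex
% tables/results.tex holds the full table float (caption, label, tabular).
% sections/experiments.tex (hypothetical path) only carries the callout:
Table~\ref{tab:results} summarizes the comparison.
\input{tables/results}
```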
Pair this skill with:
- paper-result-asset-builder when a paper-facing table needs to be generated or regenerated from CSV result files before table review
- paper-evidence-gap-miner when the table review reveals a missing comparison, slice, variance, or baseline and existing CSVs may already contain it
- paper-evidence-board when tables must be linked to paper claims, sections, reviewer risks, and actions
- baseline-selection-audit when a comparison table may miss important baselines or use unfair settings
- result-diagnosis when table numbers are surprising, unstable, negative, or contradictory
- experiment-design-planner when a table exposes missing controls, seeds, metrics, or ablations
- experiment-report-writer when raw logs need a structured report before table review
- conference-writing-adapter when final table narrative or compactness must match a target venue
- research-project-memory when claim/evidence/provenance/risk/action/handoff updates should persist across sessions

Review each table as a bundle: the `.tex` source, table description, caption, main-text callout, and provenance. Hand the finished bundle to submit-paper. Do not put a `table` float inside `wraptable`; use an inline block with local caption/label handling, then tune wrap line count, width, font size, `\tabcolsep`, and small local `\vspace` by visual iteration.

Collect:
- standalone table sources such as `tables/results.tex`
- the paper callout: `\input{tables/results}` or equivalent
- any linked record IDs: CLM-###, EVD-###, TAB-###, RSK-###, or ACT-###

Rewrite the intended evidence relation:
This table supports [claim] by showing [comparison/ranking/trend/tradeoff] under [setup].
If that sentence cannot be written, route to paper-evidence-board before polishing the table.
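The inline wraptable pattern recommended earlier (no nested `table` float; caption, label, and spacing handled locally) could look like this sketch, assuming the `wrapfig` and `booktabs` packages; the line count, width, and lengths are placeholder values meant to be tuned visually:

```latex
% Assumes \usepackage{wrapfig} and \usepackage{booktabs} in the preamble.
\begin{wraptable}[11]{r}{0.42\textwidth}  % [11] = wrapped line count, tuned by eye
  \centering
  \small                        % local font size choice
  \setlength{\tabcolsep}{4pt}   % local column padding, documented in place
  \vspace{-0.5\baselineskip}    % small local skip fix; iterate visually
  \caption{Hypothetical ablation; bold would mark the best score.}
  \label{tab:wrap-ablation}
  \begin{tabular}{lc}
    \toprule
    Variant & Score $\uparrow$ \\
    \midrule
    Full model & -- \\
    No component X & -- \\
    \bottomrule
  \end{tabular}
\end{wraptable}
```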
For paper tables, identify the standalone source:
`tables/table_name.tex`
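A hypothetical skeleton of such a source, showing the wrapper, body, and annotation elements reviewed here; it assumes the `booktabs` and `graphicx` packages, and every name and value is a placeholder:

```latex
% tables/table_name.tex -- hypothetical standalone table source
\begin{table*}[t]                      % wrapper: table or table*
  \centering
  \caption{Setup, metrics with direction, and the bolding rule go here.}
  \label{tab:table-name}
  \resizebox{\textwidth}{!}{%          % resize only if the body overflows
  \begin{tabular}{llcc}                % body: tabular with booktabs rules
    \toprule
    Group & Method & Acc.\ $\uparrow$ & Loss $\downarrow$ \\
    \midrule
    Baselines & A & -- & -- \\         % -- marks a missing value
              & B & -- & -- \\
    \midrule
    Ours & C & \textbf{--} & \textbf{--} \\  % bold per the stated rule
    \bottomrule
  \end{tabular}}
\end{table*}
```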
Inspect:
- `table` or `table*`
- `tabular`, `tabularx`, `longtable`, `booktabs`, `resizebox`, `small`, or custom macros
- `\caption{}`, `\label{}`, footnotes, arrows, bold/underline, row groups, column groups, and missing values

Flag the bundle as incomplete if it lacks caption, label, callout, source provenance, or a clear bolding/rounding/missing-value rule.
Produce a table description before judging the caption.
The table description should state:
Do not put the full table description into the caption. Use it as the audit record that checks whether the caption and paper prose are faithful to the table.
For each table, answer:
Assign one status:
- supports-claim
- supports-narrower-claim
- ambiguous
- contradicts-claim
- diagnostic-only
- not-ready

Check:
- `[H]` may still leave vertical skips, so fix local whitespace in or around the table before changing global settings
- `wraptable` layouts, optional line count `[N]`, width, caption height, font size, and `\tabcolsep` are documented as local layout choices rather than unexplained magic constants

Flag any issue that could cause a reviewer to misread the result.
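A sketch of the local-whitespace fix, assuming the `float` package for `[H]`; the `\vspace` amounts are placeholders to adjust by eye before touching global float spacing:

```latex
% Assumes \usepackage{float} for the [H] specifier.
\vspace{-0.5\baselineskip}   % local fix for the skip [H] can leave above
\begin{table}[H]
  \centering
  \caption{Hypothetical table pinned in place.}
  \label{tab:pinned}
  \begin{tabular}{lc}
    \toprule
    Setting & Value \\
    \midrule
    -- & -- \\
    \bottomrule
  \end{tabular}
\end{table}
\vspace{-0.5\baselineskip}   % local fix below; prefer over global \textfloatsep
```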
Check:
If the table lacks necessary uncertainty or provenance, decide whether to rerun, add columns/footnotes, weaken the claim, or move the table to appendix/diagnostic status.
For each table, produce:
- the source `.tex`, source data/log/config/report, table-generation parameters, experiment parameters, and source certainty

Caption pattern:
[What the table reports.] We compare [methods] on [task/dataset] using [metrics; direction] under [key experiment parameters].
[Grouping or fairness detail.] [Takeaway tied to the claim]. Bold marks [bolding rule].
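For illustration, the pattern filled in with entirely hypothetical methods, task, and rules:

```latex
\caption{Test accuracy across backbones (hypothetical example). We compare
  Baseline-A, Baseline-B, and Ours on image classification using top-1
  accuracy (higher is better) under three seeds and a fixed training budget.
  Rows are grouped by backbone for a like-for-like comparison. Ours is the
  strongest method on every backbone. Bold marks the best result per column.}
```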
For model-spec, metric-definition, or method-comparison tables:
[What the table defines or compares.] Columns summarize [fields] used in [paper section or experiment].
[Interpretive note.] [Takeaway tied to the claim or reader task].
Do not put every hyperparameter in the caption. Include the parameters needed to interpret the claim. Put full provenance in the review report, appendix, artifact, or paper/.agent/ record.
For every issue, route to one or more actions:
- fix-table-wrapper: stale caption, label mismatch, unclear bolding rule, wrong resize, broken footnote, or row/column mismatch in `tables/*.tex`
- edit-table: grouping, decimals, bolding, footnotes, missing values, row/column order, or metric arrows
- rewrite-caption: setup, metric, takeaway, caveat, bolding rule, or claim alignment
- write-description: missing table description or missing provenance record
- rewrite-results-text: nearby paper prose overclaims or misses the takeaway
- build-result-asset: raw CSV evidence exists but the paper-facing table needs to be generated with documented aggregation, rounding, and provenance
- mine-existing-results: missing comparison, slice, variance, or baseline may already exist in CSVs or reports
- rerun: missing seeds, variance, baseline, metric, or protocol after existing results are checked
- diagnose-result: suspicious, negative, unstable, or contradictory numbers
- baseline-audit: missing or unfair baseline
- narrow-claim: evidence only supports a smaller statement
- move-to-appendix: useful but not central enough for main paper
- cut: table does not support a paper need

Name the next skill when appropriate.
If saving to a project and no path is given, use:
docs/results/table_results_review_YYYY-MM-DD_<short-name>.md
The report must include:
- the source `.tex`, input location, label, caption, and paper callout location

When memory exists, update the smallest useful set of entries:
- `memory/evidence-board.md`: table evidence status, source `.tex`, setup, table-generation parameters, experiment parameters, and linked claims
- `memory/claim-board.md`: claims supported, narrowed, contradicted, or not ready
- `memory/risk-board.md`: reviewer risks from table ambiguity, missing uncertainty, weak baselines, missing provenance, or overclaiming
- `memory/action-board.md`: table edits, reruns, caption fixes, result diagnosis, baseline audit, or claim revisions
- `paper/.agent/`: table map, source/input pairings, paper locations, table descriptions, caption state, provenance gaps, and stale table warnings
- `.agent/worktree-status.md`: result-generation or table-generation tasks and exit conditions

Use certainty labels:
- verified for values checked against raw data, logs, generated table, or paper text
- user-stated for user-supplied context
- inferred for reviewer-risk and narrative judgments
- unverified for numeric or statistical claims that could not be inspected

Before finalizing:
- `tables/*.tex`