# tools-file-ops
Base module providing reusable file operations patterns for workflow scripts.
| Field | Value |
|---|---|
| name | tools-file-ops |
| description | Base module providing reusable file operations patterns for workflow scripts |
| user-invocable | false |
Role: Shared Python module providing atomic file operations, metadata parsing, TOON output helpers, and base directory configuration for workflow scripts.
Execution mode: Library module; import functions as documented in usage examples.
Prohibited actions:
- Writing to `.plan/` files directly; use `base_path()` for path construction and `atomic_write_file()` for writes

Constraints:
- All paths resolve against the workflow base directory, obtained via `base_path()` or `get_base_dir()` (`.plan/` by default)

Import the `file_ops` module in Python scripts that write to `.plan/` directories:
Location: scripts/file_ops.py
Base Directory Functions

**1. `get_base_dir()`**
- Returns: `Path` - base directory (default: `.plan`)

**2. `set_base_dir(path)`**
- Parameters: `path` (str/Path) - new base directory; overrides the `.plan` default

**3. `base_path(*parts)`**
- Parameters: `*parts` - path components to join
- Returns: `Path` - full path including the workflow base directory
- Example: `base_path('plans', 'my-task', 'plan.md')` → `.plan/plans/my-task/plan.md`

File Operations

**4. `atomic_write_file(path, content)`**
- Parameters: `path` (str/Path), `content` (str)

**5. `ensure_directory(path)`**
- Parameters: `path` (str/Path) - file or directory path

TOON Output Helpers

**6. `output_success(operation, **kwargs)`**
- Parameters: `operation` (str), plus additional kwargs

**7. `output_error(operation, error)`**
- Parameters: `operation` (str), `error` (str)

Script Entry Point

**8. `safe_main(main_fn)`**
- Parameters: `main_fn` - the main function (must return int or None)
- Calls `sys.exit()` internally
- Usage: `safe_main(main)()` or as a `@safe_main` decorator

Metadata Functions

**9. `parse_markdown_metadata(content)`**
- Parameters: `content` (str) - full file content
- Returns: `dict` - metadata key-value pairs
- Supports `key=value` and `key.subkey=value` (dot notation)

**10. `generate_markdown_metadata(data)`**
- Parameters: `data` (dict) - metadata to serialize
- Returns: `str` - formatted metadata block

**11. `update_markdown_metadata(content, updates)`**
- Parameters: `content` (str), `updates` (dict)
- Returns: `str` - updated content

**12. `get_metadata_content_split(content)`**
- Parameters: `content` (str)
- Returns: `tuple[str, str]` - (metadata_block, body_content)
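As a concrete illustration of the flat `key=value` format these metadata helpers consume, here is a minimal standalone sketch. The block-delimiter handling (a blank line ending the metadata block) is an assumption for illustration; the real `parse_markdown_metadata` may differ.

```python
def parse_metadata_sketch(content: str) -> dict:
    """Hypothetical sketch: collect flat key=value pairs from the top of a
    markdown file. Dot notation (e.g. component.type) stays a flat key."""
    metadata = {}
    for line in content.splitlines():
        line = line.strip()
        if not line:
            break  # blank line ends the metadata block (assumption)
        if '=' in line:
            key, _, value = line.partition('=')
            metadata[key.strip()] = value.strip()
    return metadata

doc = "id=2025-11-28-001\ncomponent.type=command\n\n# Lesson Title\n"
print(parse_metadata_sketch(doc))
```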
```python
#!/usr/bin/env python3
import sys

from file_ops import (
    atomic_write_file,
    base_path,
    output_success,
    output_error,
    generate_markdown_metadata,
)


def main():
    try:
        # Construct path within the .plan directory
        filepath = base_path('lessons-learned', '2025-11-28-001.md')

        # Generate metadata
        metadata = generate_markdown_metadata({
            'id': '2025-11-28-001',
            'component.type': 'command',
            'applied': 'false',
        })

        # Write atomically (creates directories automatically)
        content = f"{metadata}\n# Lesson Title\n\nContent here..."
        atomic_write_file(filepath, content)
        output_success('write-lesson', file=str(filepath))
    except Exception as e:
        output_error('write-lesson', str(e))
        sys.exit(1)


if __name__ == '__main__':
    main()
```
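Alternatively, the boilerplate `try`/`except`/`sys.exit` above is what `safe_main` (function 8) wraps for you. A rough approximation of such a wrapper is sketched below; the error-reporting details are assumptions, not the module's actual behavior.

```python
import sys

def safe_main(main_fn):
    """Sketch of a safe_main-style wrapper: run main_fn and exit with its
    return code, treating None as 0 and exceptions as exit code 1."""
    def wrapper():
        try:
            result = main_fn()
        except Exception as e:
            print(f'error: {e}', file=sys.stderr)
            sys.exit(1)
        sys.exit(0 if result is None else result)
    return wrapper
```

Usage would then be `safe_main(main)()` or decorating `main` with `@safe_main`.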
| Script | Purpose |
|---|---|
| file_ops.py | Core file operations module (importable) |
| constants.py | Shared constants: status values, phase names, filenames, certainty values, directory names |
```python
import json

from file_ops import base_path, atomic_write_file, output_success, output_error
from constants import STATUS_SUCCESS, FILE_STATUS, PHASES, DIR_PLANS

# Resolve plan directory paths
plan_dir = base_path('plans', plan_id)  # Returns Path to .plan/plans/{plan_id}
artifacts = base_path('plans', plan_id, 'artifacts')

# Atomic file writes (temp file + rename for crash safety)
atomic_write_file(plan_dir / 'status.json', json.dumps(data, indent=2))

# Structured TOON output
output_success('create-plan', plan_id=plan_id)
output_error('file_not_found', f'Plan {plan_id} not found')
```
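The temp-file-plus-rename strategy noted above can be sketched as follows. This is an illustrative stand-in, not the module's actual `atomic_write_file` implementation; `os.replace` is the atomic step on both POSIX and Windows.

```python
import os
import tempfile
from pathlib import Path

def atomic_write_sketch(path, content: str) -> None:
    """Write content via a temp file in the target directory, then an
    atomic rename, so readers never observe a partially written file."""
    path = Path(path)
    path.parent.mkdir(parents=True, exist_ok=True)  # create directories automatically
    # Temp file in the same directory keeps the rename on one filesystem
    fd, tmp_name = tempfile.mkstemp(dir=path.parent, suffix='.tmp')
    try:
        with os.fdopen(fd, 'w') as f:
            f.write(content)
            f.flush()
            os.fsync(f.fileno())  # flush to disk before the rename
        os.replace(tmp_name, path)  # atomic on POSIX and Windows
    except BaseException:
        os.unlink(tmp_name)
        raise
```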
The executor manages PYTHONPATH automatically, so scripts can import file_ops directly:
```python
from file_ops import atomic_write_file, base_path, output_success, output_error
```
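When running a script outside the executor (e.g., ad-hoc testing), you can achieve the equivalent effect by prepending the scripts directory to `sys.path` yourself; the `scripts` location here is an assumption for illustration.

```python
import sys
from pathlib import Path

# Hypothetical scripts directory; normally the executor sets PYTHONPATH
scripts_dir = Path('scripts').resolve()
sys.path.insert(0, str(scripts_dir))
```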
Files are stored in .plan/ directory:
```
.plan/                        # Workflow artifacts
├── run-configuration.json    # Command execution tracking
├── lessons-learned/          # Knowledge capture
│   └── *.md
├── memory/                   # Session state
│   ├── context/*.json
│   └── handoffs/*.json
└── plans/                    # Task plans
    └── {task-name}/
        ├── plan.md
        └── references.json
```
When scripts in one domain (e.g., plan-marshall:plan-files) need to access resources in another domain (e.g., plan-marshall:manage-lessons), follow the ID-based access pattern.
Scripts take IDs, not paths, for cross-domain resources. The script resolves the ID to a path internally using base_path().
```python
# Script in the planning domain needs to access a lesson from the lessons-learned domain

# CORRECT: Accept an ID, resolve the path internally
def copy_lesson_to_plan(lesson_id: str, plan_dir: Path) -> dict:
    # Resolve ID to path internally
    lesson_file = base_path("lessons-learned", f"{lesson_id}.md")
    if not lesson_file.exists():
        return {"success": False, "error": f"Lesson not found: {lesson_id}"}
    # Proceed with copy...

# WRONG: Orchestrator constructs the path and passes it to the script
# In the orchestrator (phase-management SKILL.md):
#   python3 {script} --lesson-file {lesson.file}  # BAD: orchestrator builds the path
# In the script:
def copy_lesson_to_plan(lesson_file: Path, plan_dir: Path):  # BAD: accepts a path
    pass
```
| Scenario | Use ID-Based | Reason |
|---|---|---|
| Cross-domain resource access | Yes | Scripts own their domain's paths |
| Same-domain resource access | Optional | Same skill owns both paths |
| User-specified file | No | User explicitly provides path |
| Configuration files | No | Paths defined in config are explicit |
When creating scripts that access cross-domain resources:
- Accept an ID argument (e.g., `--lesson-id`), not a path
- Import `base_path` from `file_ops`
- Resolve the path internally: `base_path("domain-dir", f"{id}.md")`
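Putting this together, the argument parsing for an ID-based script might look like the following sketch; the flag names and example values are illustrative, not part of any existing script.

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    parser = argparse.ArgumentParser(description='Copy a lesson into a plan')
    # Accept IDs, not paths: the script resolves paths internally via base_path()
    parser.add_argument('--lesson-id', required=True)
    parser.add_argument('--plan-id', required=True)
    return parser

args = build_parser().parse_args(['--lesson-id', '2025-11-28-001', '--plan-id', 'my-task'])
print(args.lesson_id)
```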