| name | migrate |
| description | Universal migration from Obsidian, Notion, Logseq, markdown, CSV, JSON, Roam |
| triggers | ["migrate from","import from obsidian","import from notion"] |
| tools | ["put_page","search","add_link","add_tag","sync_brain"] |
| mutating | true |
# Migrate Skill
Universal migration from any wiki, note tool, or brain system into GBrain.
## Contract
- Source data is never modified or deleted; migration is additive only.
- Every migrated page is verified round-trip: written to gbrain, read back, spot-checked.
- Cross-references from the source system (wikilinks, block refs, tags) are converted to gbrain equivalents.
- Migration is tested on a sample (5-10 files) before bulk execution.
- Post-migration health check confirms page count, link integrity, and embedding coverage.
## Supported Sources

| Source | Format | Strategy |
|---|---|---|
| Obsidian | Markdown + [[wikilinks]] | Direct import, convert wikilinks to gbrain links |
| Notion | Exported markdown or CSV | Parse Notion's export structure |
| Logseq | Markdown with ((block refs)) | Convert block refs to page links |
| Plain markdown | Any .md directory | Import directory into gbrain directly |
| CSV | Tabular data | Map columns to frontmatter fields |
| JSON | Structured data | Map keys to page fields |
| Roam | JSON export | Convert block structure to pages |
## Phases
1. Assess the source. What format? How many files? What structure?
2. Plan the mapping. How do source fields map to gbrain fields (type, title, tags, compiled_truth, timeline)?
3. Test with a sample. Import 5-10 files, verify by reading them back from gbrain and exporting.
4. Bulk import. Import the full directory into gbrain.
5. Verify. Check gbrain health and statistics, spot-check pages.
6. Build links. Extract cross-references from content and create typed links in gbrain.
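The sample-then-bulk flow above can be sketched as follows. This is a minimal sketch, assuming per-file import plus round-trip checking is wrapped in a single callable (`import_one` here is hypothetical, not a gbrain API):

```python
import random

def phased_import(files, import_one, sample_size=5):
    """Sample-first import: verify a small batch before bulk execution.

    `import_one` is a hypothetical callable that imports a single file
    and returns True when its round-trip check passes.
    """
    sample = random.sample(files, min(sample_size, len(files)))
    failures = [f for f in sample if not import_one(f)]
    if failures:
        # Abort before the bulk phase: fix the mapping, then re-run.
        raise RuntimeError(f"sample verification failed for: {failures}")
    # Bulk phase: import everything not already covered by the sample.
    imported = sum(import_one(f) for f in files if f not in sample)
    return len(sample) + imported
```

Failing the sample aborts the run entirely, which enforces the "never bulk import before verifying" contract.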
## Obsidian Migration
- Import the vault directory into gbrain (Obsidian vaults are markdown directories)
- Wire the graph with native wikilink support (v0.12.1+):

  ```shell
  gbrain extract links --source db --dry-run | head -20
  gbrain extract links --source db
  ```
`extract links` natively parses `[[relative/path]]` and `[[relative/path|Display Text]]` alongside standard `[text](page.md)` markdown syntax. Ancestor-search resolution handles wiki KBs where authors omit one or more leading `../` prefixes. The `.md` suffix is inferred automatically for wikilinks.
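To illustrate the conversion that extract links performs natively, here is a standalone sketch (not gbrain's actual implementation) of a wikilink-to-markdown pass, including the `.md` suffix inference:

```python
import re

# Matches [[path]] and [[path|Display Text]] wikilinks.
WIKILINK = re.compile(r"\[\[([^\]|]+)(?:\|([^\]]+))?\]\]")

def wikilink_to_markdown(text):
    """Rewrite wikilinks as standard markdown links.

    Illustrative sketch only: gbrain parses these forms natively.
    """
    def repl(m):
        target, display = m.group(1).strip(), m.group(2)
        if not target.endswith(".md"):
            target += ".md"  # suffix inference, as described above
        return f"[{display or m.group(1).strip()}]({target})"
    return WIKILINK.sub(repl, text)
```

Standard `[text](page.md)` links pass through untouched, since the pattern only matches the double-bracket form.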
Obsidian-specific:
- Tags (`#tag`) become gbrain tags
- Frontmatter properties map to gbrain frontmatter
- Attachments (images, PDFs) are noted but handled separately via file storage
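A minimal sketch of the frontmatter split, assuming flat `key: value` pairs only; a real migration would use a YAML parser for nested properties:

```python
def split_frontmatter(text):
    """Split a markdown file into (frontmatter dict, body).

    Minimal sketch: handles only flat `key: value` pairs between
    `---` delimiters, the common case in Obsidian vaults.
    """
    if not text.startswith("---\n"):
        return {}, text
    header, _, body = text[4:].partition("\n---\n")
    fm = {}
    for line in header.splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            fm[key.strip()] = value.strip()
    return fm, body.lstrip("\n")
```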
## Notion Migration
- Export from Notion: Settings > Export > Markdown & CSV
- Notion exports nested directories with UUIDs in filenames
- Strip UUIDs from filenames for clean slugs
- Map Notion's database properties to frontmatter
- Import the cleaned directory into gbrain
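The UUID-stripping step can be sketched as below. The 32-character hex suffix is an assumption about Notion's current export format; verify the pattern against your own export before bulk use:

```python
import re

# Assumption: Notion export names end in a space plus a 32-char hex ID,
# e.g. "Meeting Notes 0a1b...f9.md" or a directory "Projects 0a1b...f9".
NOTION_ID = re.compile(r"\s+[0-9a-f]{32}(?=(\.[a-z]+)?$)")

def clean_slug(filename):
    """Strip the trailing Notion export ID, keeping any file extension."""
    return NOTION_ID.sub("", filename)
```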
## CSV Migration
For tabular data (e.g., CRM exports, contact lists):
- For each row in the CSV, create a page with column values as frontmatter
- Use a designated column as the slug (e.g., name)
- Use another column as compiled_truth (e.g., notes)
- Store each page in gbrain
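The row-to-page mapping above can be sketched as follows; the page-dict shape and slug normalization are illustrative assumptions, not gbrain's required schema:

```python
import csv
import io

def csv_to_pages(csv_text, slug_col, body_col):
    """Map each CSV row to a page dict (frontmatter + body).

    `slug_col` becomes the page slug, `body_col` becomes compiled_truth,
    and every remaining column lands in frontmatter.
    """
    pages = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        frontmatter = {k: v for k, v in row.items()
                       if k not in (slug_col, body_col)}
        pages.append({
            "slug": row[slug_col].strip().lower().replace(" ", "-"),
            "compiled_truth": row[body_col],
            "frontmatter": frontmatter,
        })
    return pages
```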
## Verification
After any migration:
- Check gbrain statistics to verify page count matches source
- Check gbrain health for orphans and missing embeddings
- Export pages from gbrain for round-trip verification
- Spot-check 5-10 pages by reading them from gbrain
- Test search: query gbrain for an entity you know is in the source data and confirm it is returned
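The spot-check step can be sketched as below, assuming in-memory stand-ins for gbrain's get_page tool and the pre-import content map; wiring these up to the real tools is deployment-specific:

```python
import hashlib
import random

def spot_check(slugs, get_page, expected, sample_size=5):
    """Round-trip spot-check: read back a sample of migrated pages and
    compare content hashes against what was written.

    `get_page` and `expected` are stand-ins (slug -> content) for the
    real page reader and the pre-import content map.
    """
    sample = random.sample(slugs, min(sample_size, len(slugs)))
    mismatches = []
    for slug in sample:
        got = hashlib.sha256(get_page(slug).encode()).hexdigest()
        want = hashlib.sha256(expected[slug].encode()).hexdigest()
        if got != want:
            mismatches.append(slug)
    return mismatches
```

An empty result means every sampled page survived the round trip byte-for-byte.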
## Anti-Patterns
- Bulk import without sample test. Never import the full dataset before verifying with 5-10 files. The cost of cleaning up hundreds of bad pages is enormous.
- Destroying source data. Migration is additive. Never modify, move, or delete the source files.
- Ignoring cross-references. Wikilinks, block refs, and tags from the source system must be converted to gbrain equivalents. Dropping them loses the knowledge graph.
- Skipping verification. A migration without post-import health check, page count comparison, and spot-check reads is incomplete.
## Output Format

```
MIGRATION REPORT -- [source] -> GBrain
=======================================
Source: [format] ([file count] files, [size])
Mapping: [field mapping summary]

Sample Test (N files):
- Imported: N/N
- Round-trip verified: N/N
- Cross-refs converted: N

Bulk Import:
- Total imported: N
- Skipped (duplicates/errors): N
- Links created: N
- Tags migrated: N

Verification:
- Page count match: [yes/no]
- Health check: [pass/fail]
- Search test: [query] -> [result count] hits
```
## Tools Used
- Store/update pages in gbrain (`put_page`)
- Read pages from gbrain (`get_page`)
- Link entities in gbrain (`add_link`)
- Tag pages in gbrain (`add_tag`)
- Get gbrain statistics (`get_stats`)
- Check gbrain health (`get_health`)
- Search gbrain (`query`)