| name | gtars |
| description | High-performance toolkit for genomic interval analysis in Rust with Python bindings. Use when working with genomic regions, BED files, coverage tracks, overlap detection, tokenization for ML models, or fragment analysis in computational genomics and machine learning applications. |
Gtars is a high-performance Rust toolkit for manipulating, analyzing, and processing genomic interval data. It provides specialized tools for overlap detection, coverage analysis, tokenization for machine learning, and reference sequence management.
Use this skill when working with:
- Genomic regions and BED files
- Coverage track generation (wiggle/BigWig)
- Overlap detection between interval sets
- Tokenization of regions for ML models
- Fragment analysis in single-cell and other sequencing workflows
Install gtars Python bindings:
uv pip install gtars
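To confirm the bindings installed correctly, a minimal smoke test (the `__version__` attribute is an assumption about the package metadata):

```python
# Smoke test: import the bindings and print the version.
# Note: __version__ is assumed here; adjust if the package differs.
import gtars

print(gtars.__version__)
```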
Install command-line tools (requires Rust/Cargo):
# Install with all features
cargo install gtars-cli --features "uniwig overlaprs igd bbcache scoring fragsplit"
# Or install specific features only
cargo install gtars-cli --features "uniwig overlaprs"
Add to Cargo.toml for Rust projects:
[dependencies]
gtars = { version = "0.1", features = ["tokenizers", "overlaprs"] }
Gtars is organized into specialized modules, each focused on specific genomic analysis tasks:
Efficiently detect overlaps between genomic intervals using the Integrated Genome Database (IGD) data structure.
When to use: checking two interval sets for overlaps, or querying many intervals against a large, fixed collection of regions.
Quick example:
import gtars
# Build IGD index and query overlaps
igd = gtars.igd.build_index("regions.bed")
overlaps = igd.query("chr1", 1000, 2000)
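The index is built once and then amortized across many lookups. A sketch reusing the calls above (the shape of the returned overlaps is an assumption):

```python
import gtars

# Build the IGD index once, then reuse it across many queries.
igd = gtars.igd.build_index("regions.bed")

queries = [("chr1", 1000, 2000), ("chr2", 5000, 6000)]
for chrom, start, end in queries:
    hits = igd.query(chrom, start, end)  # overlaps for this interval
    print(chrom, start, end, hits)
```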
See references/overlap.md for comprehensive overlap detection documentation.
Generate coverage tracks from sequencing data with the uniwig module.
When to use: turning fragments or aligned reads into coverage tracks (wiggle/BigWig) for genome-browser visualization.
Quick example:
# Generate BigWig coverage track
gtars uniwig generate --input fragments.bed --output coverage.bw --format bigwig
See references/coverage.md for detailed coverage analysis workflows.
Convert genomic regions into discrete tokens for machine learning applications, particularly for deep learning models on genomic data.
When to use: preparing region sets as discrete model inputs for training or inference.
Quick example:
from gtars.tokenizers import TreeTokenizer
tokenizer = TreeTokenizer.from_bed_file("training_regions.bed")
token = tokenizer.tokenize("chr1", 1000, 2000)
See references/tokenizers.md for tokenization documentation.
Handle reference genome sequences and compute digests following the GA4GH refget protocol.
When to use: extracting subsequences from a reference genome or computing GA4GH refget digests.
Quick example:
import gtars
# Load reference and extract sequences
store = gtars.RefgetStore.from_fasta("hg38.fa")
sequence = store.get_subsequence("chr1", 1000, 2000)
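The digests mentioned above follow the GA4GH sha512t24u scheme: SHA-512 over the uppercased sequence bytes, truncated to 24 bytes and base64url-encoded. A pure-Python illustration of the protocol's algorithm (not the gtars API):

```python
import base64
import hashlib

def sha512t24u(sequence: str) -> str:
    """GA4GH refget digest: truncated, base64url-encoded SHA-512."""
    raw = hashlib.sha512(sequence.upper().encode("ascii")).digest()
    return base64.urlsafe_b64encode(raw[:24]).decode("ascii")

print(sha512t24u("ACGT"))  # 32-character digest, no padding
```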
See references/refget.md for reference sequence operations.
Split and analyze fragment files, particularly useful for single-cell genomics data.
When to use: splitting single-cell fragment files by cluster assignment before per-cluster analysis.
Quick example:
# Split fragments by clusters
gtars fragsplit cluster-split --input fragments.tsv --clusters clusters.txt --output-dir ./by_cluster/
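Conceptually, cluster-split groups fragments by each barcode's cluster assignment. A minimal pure-Python sketch of that idea, assuming the common fragment layout (chrom, start, end, barcode, count) and a two-column barcode-to-cluster mapping; this is an illustration, not the gtars implementation:

```python
import csv
import os
from collections import defaultdict

# Barcode -> cluster mapping (assumed two-column TSV).
clusters = {}
with open("clusters.txt") as f:
    for barcode, cluster in csv.reader(f, delimiter="\t"):
        clusters[barcode] = cluster

# Group fragment records by their barcode's cluster.
grouped = defaultdict(list)
with open("fragments.tsv") as f:
    for row in csv.reader(f, delimiter="\t"):
        chrom, start, end, barcode = row[:4]
        if barcode in clusters:
            grouped[clusters[barcode]].append(row)

# Write one fragment file per cluster.
os.makedirs("./by_cluster", exist_ok=True)
for cluster, rows in grouped.items():
    with open(f"./by_cluster/{cluster}.tsv", "w", newline="") as f:
        csv.writer(f, delimiter="\t").writerows(rows)
```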
See references/cli.md for fragment processing commands.
Score fragment overlaps against reference datasets.
When to use: quantifying how strongly a set of fragments overlaps a reference region set.
Quick example:
# Score fragments against reference
gtars scoring score --fragments fragments.bed --reference reference.bed --output scores.txt
Identify overlapping genomic features:
import gtars
# Load two region sets
peaks = gtars.RegionSet.from_bed("chip_peaks.bed")
promoters = gtars.RegionSet.from_bed("promoters.bed")
# Find overlaps
overlapping_peaks = peaks.filter_overlapping(promoters)
# Export results
overlapping_peaks.to_bed("peaks_in_promoters.bed")
Generate coverage tracks for visualization:
# Step 1: Generate coverage
gtars uniwig generate --input atac_fragments.bed --output coverage.wig --resolution 10
# Step 2: Convert to BigWig for genome browsers
gtars uniwig generate --input atac_fragments.bed --output coverage.bw --format bigwig
Prepare genomic data for machine learning:
from gtars.tokenizers import TreeTokenizer
import gtars
# Step 1: Load training regions
regions = gtars.RegionSet.from_bed("training_peaks.bed")
# Step 2: Create tokenizer
tokenizer = TreeTokenizer.from_bed_file("training_peaks.bed")
# Step 3: Tokenize regions
tokens = [tokenizer.tokenize(r.chromosome, r.start, r.end) for r in regions]
# Step 4: Use tokens in ML pipeline
# (integrate with geniml or custom models)
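From here, tokens typically become integer ids that feed an embedding layer. A hedged sketch with PyTorch, assuming tokens can be mapped to integer ids (the ids and vocabulary size below are illustrative placeholders, not gtars output):

```python
import torch
import torch.nn as nn

# Illustrative integer ids for a batch of tokenized regions.
token_ids = torch.tensor([[1, 5, 9, 2]])

# Embedding table sized to the tokenizer's vocabulary (10_000 is a placeholder).
embedding = nn.Embedding(num_embeddings=10_000, embedding_dim=128)

vectors = embedding(token_ids)  # shape: (1, 4, 128)
print(vectors.shape)
```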
Use the Python API when you need programmatic access from notebooks, ML pipelines, or other Python code.
Use the CLI when batch-processing files in shell scripts or pipelines where a standalone binary is more convenient.
Comprehensive module documentation:
- references/python-api.md - Complete Python API reference with RegionSet operations, NumPy integration, and data export
- references/overlap.md - IGD indexing, overlap detection, and set operations
- references/coverage.md - Coverage track generation with uniwig
- references/tokenizers.md - Genomic tokenization for ML applications
- references/refget.md - Reference sequence management and digests
- references/cli.md - Command-line interface complete reference

Gtars serves as the foundation for the geniml Python package, providing core genomic interval operations for machine learning workflows. When working on geniml-related tasks, use gtars for data preprocessing and tokenization.
Gtars works with standard genomic formats:
- BED for regions and intervals
- Wiggle and BigWig for coverage tracks
- FASTA for reference sequences
- Tab-separated fragment files (e.g., fragments.tsv from single-cell assays)
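For orientation, BED coordinates are 0-based and half-open. A minimal sketch that writes a small BED file and loads it with the RegionSet call shown earlier:

```python
import gtars

# BED is tab-separated: chrom, start (0-based), end (exclusive).
with open("example.bed", "w") as f:
    f.write("chr1\t999\t2000\nchr2\t5000\t6000\n")

regions = gtars.RegionSet.from_bed("example.bed")
```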
Enable verbose logging for troubleshooting:
import gtars
# Enable debug logging
gtars.set_log_level("DEBUG")
# CLI verbose mode
gtars --verbose <command>