---
name: pyhealth
description: Comprehensive healthcare AI toolkit for developing, testing, and deploying machine learning models with clinical data. This skill should be used when working with electronic health records (EHR), clinical prediction tasks (mortality, readmission, drug recommendation), medical coding systems (ICD, NDC, ATC), physiological signals (EEG, ECG), healthcare datasets (MIMIC-III/IV, eICU, OMOP), or implementing deep learning models for healthcare applications (RETAIN, SafeDrug, Transformer, GNN).
---
PyHealth is a comprehensive Python library for healthcare AI that provides specialized tools, models, and datasets for clinical machine learning. Use this skill when developing healthcare prediction models, processing clinical data, working with medical coding systems, or deploying AI solutions in healthcare settings.
Invoke this skill when:
- Working with electronic health records (EHR)
- Building clinical prediction tasks (mortality, readmission, drug recommendation)
- Handling medical coding systems (ICD, NDC, ATC)
- Processing physiological signals (EEG, ECG)
- Loading healthcare datasets (MIMIC-III/IV, eICU, OMOP)
- Implementing deep learning models for healthcare (RETAIN, SafeDrug, Transformer, GNN)
PyHealth operates through a modular 5-stage pipeline optimized for healthcare AI:
1. Dataset loading (pyhealth.datasets)
2. Task definition (pyhealth.tasks)
3. Model selection (pyhealth.models)
4. Training (pyhealth.trainer)
5. Evaluation (pyhealth.metrics)
Performance: healthcare data processing runs roughly 3x faster than comparable pandas-based pipelines.
Quick start:

```python
from pyhealth.datasets import MIMIC4Dataset, split_by_patient, get_dataloader
from pyhealth.tasks import mortality_prediction_mimic4_fn
from pyhealth.models import Transformer
from pyhealth.trainer import Trainer

# 1. Load dataset and set task
dataset = MIMIC4Dataset(
    root="/path/to/data",
    tables=["diagnoses_icd", "procedures_icd", "prescriptions"],
)
sample_dataset = dataset.set_task(mortality_prediction_mimic4_fn)

# 2. Split data (by patient, to prevent leakage)
train, val, test = split_by_patient(sample_dataset, [0.7, 0.1, 0.2])

# 3. Create data loaders
train_loader = get_dataloader(train, batch_size=64, shuffle=True)
val_loader = get_dataloader(val, batch_size=64, shuffle=False)
test_loader = get_dataloader(test, batch_size=64, shuffle=False)

# 4. Initialize and train model
model = Transformer(
    dataset=sample_dataset,
    feature_keys=["conditions", "drugs"],  # keys produced by the task function
    label_key="label",
    mode="binary",
    embedding_dim=128,
)
trainer = Trainer(model=model, device="cuda")
trainer.train(
    train_dataloader=train_loader,
    val_dataloader=val_loader,
    epochs=50,
    monitor="pr_auc",
)

# 5. Evaluate
results = trainer.evaluate(test_loader)
```
This skill includes comprehensive reference documentation organized by functionality. Read specific reference files as needed:
File: references/datasets.md
Read when: loading or configuring healthcare datasets (MIMIC-III/IV, eICU, OMOP, signal datasets)
Key Topics: dataset classes and configuration, patient/visit/event structure, dataset statistics, splitting and data loaders

File: references/medical_coding.md
Read when: looking up, standardizing, or translating medical codes (ICD, NDC, ATC)
Key Topics: code lookup within a system, cross-system mapping, code hierarchies

File: references/tasks.md
Read when: defining or selecting a clinical prediction task
Key Topics: built-in task functions (mortality, readmission, drug recommendation, sleep staging), custom task definition

File: references/models.md
Read when: choosing or configuring a model architecture
Key Topics: available models (RETAIN, SafeDrug, Transformer, GNN), hyperparameters, interpretability

File: references/preprocessing.md
Read when: transforming clinical features or physiological signals before modeling
Key Topics: feature processors by data type, sequence handling, signal preprocessing

File: references/training_evaluation.md
Read when: training, evaluating, calibrating, or interpreting models
Key Topics: Trainer API, metric selection, calibration, uncertainty quantification, fairness assessment
```bash
uv pip install pyhealth
```
Requirements: a working PyTorch installation; a CUDA-capable GPU is recommended for training (the examples pass device="cuda").
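To verify the installation, a quick check along these lines can be run (uses only the standard library plus the installed package):

```python
# Verify that pyhealth imports and report the installed version
from importlib.metadata import version

import pyhealth  # raises ImportError if the installation is broken

print("pyhealth", version("pyhealth"))
```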
Common workflows:

Objective: Predict patient mortality in the intensive care unit
Approach:
1. Load an ICU dataset such as MIMIC-IV (references/datasets.md)
2. Apply a mortality prediction task (references/tasks.md)
3. Select and configure a model (references/models.md)
4. Train with patient-level splits (references/training_evaluation.md)
5. Evaluate with imbalance-aware metrics and interpret results (references/training_evaluation.md); the complete example at the end of this document walks through this pipeline

Objective: Recommend medications while avoiding drug-drug interactions
Approach:
1. Load prescription and diagnosis data (references/datasets.md)
2. Apply the drug recommendation task (references/tasks.md)
3. Use a DDI-aware model such as SafeDrug (references/models.md)
4. Map drug codes across systems where needed, e.g. NDC to ATC (references/medical_coding.md)
5. Train and evaluate (references/training_evaluation.md); a sketch follows below
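A minimal sketch of this workflow, assuming the 1.x-style task function drug_recommendation_mimic4_fn, SafeDrug's default configuration, and an illustrative data path:

```python
from pyhealth.datasets import MIMIC4Dataset, split_by_patient, get_dataloader
from pyhealth.models import SafeDrug
from pyhealth.tasks import drug_recommendation_mimic4_fn
from pyhealth.trainer import Trainer

# Illustrative path and table list; adjust to the local MIMIC-IV copy
dataset = MIMIC4Dataset(
    root="/path/to/mimic4/hosp",
    tables=["diagnoses_icd", "procedures_icd", "prescriptions"],
)
samples = dataset.set_task(drug_recommendation_mimic4_fn)

train_ds, val_ds, test_ds = split_by_patient(samples, [0.8, 0.1, 0.1])
train_loader = get_dataloader(train_ds, batch_size=64, shuffle=True)
val_loader = get_dataloader(val_ds, batch_size=64)

# SafeDrug predicts medication sets (multilabel) with a DDI-aware training objective
model = SafeDrug(dataset=samples)
trainer = Trainer(model=model, device="cuda")
trainer.train(train_dataloader=train_loader, val_dataloader=val_loader, epochs=20)
results = trainer.evaluate(get_dataloader(test_ds, batch_size=64))
```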
Objective: Identify patients at risk of 30-day readmission
Approach:
1. Load an EHR dataset (references/datasets.md)
2. Apply the readmission prediction task (references/tasks.md)
3. Preprocess visit sequences (references/preprocessing.md)
4. Select a sequence model (references/models.md)
5. Train and evaluate with imbalance-aware metrics (references/training_evaluation.md); see the sketch below
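A hedged sketch, assuming the 1.x-style task function readmission_prediction_mimic4_fn (whose default readmission window is configurable and may differ from 30 days) and the same pipeline shape as the quick start:

```python
from pyhealth.datasets import MIMIC4Dataset, split_by_patient, get_dataloader
from pyhealth.models import Transformer
from pyhealth.tasks import readmission_prediction_mimic4_fn
from pyhealth.trainer import Trainer

dataset = MIMIC4Dataset(
    root="/path/to/mimic4/hosp",  # illustrative path
    tables=["diagnoses_icd", "procedures_icd", "prescriptions"],
)
samples = dataset.set_task(readmission_prediction_mimic4_fn)  # binary readmission label

train_ds, val_ds, test_ds = split_by_patient(samples, [0.7, 0.1, 0.2])
model = Transformer(
    dataset=samples,
    feature_keys=["conditions", "procedures", "drugs"],  # keys produced by the task function
    label_key="label",
    mode="binary",
)
trainer = Trainer(model=model, device="cuda")
trainer.train(
    train_dataloader=get_dataloader(train_ds, batch_size=64, shuffle=True),
    val_dataloader=get_dataloader(val_ds, batch_size=64),
    epochs=20,
    monitor="pr_auc",  # readmission labels are typically imbalanced
)
```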
Objective: Classify sleep stages from EEG signals
Approach:
1. Load a sleep EEG dataset (references/datasets.md)
2. Apply the sleep staging task (references/tasks.md)
3. Preprocess the signals (references/preprocessing.md)
4. Select a signal model (references/models.md)
5. Train and evaluate (references/training_evaluation.md); see the sketch below
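A hedged sketch, assuming the SleepEDFDataset loader, the sleep_staging_sleepedf_fn task, and the SparcNet signal model; verify the names and sample keys against references/datasets.md and references/models.md:

```python
from pyhealth.datasets import SleepEDFDataset, split_by_patient, get_dataloader
from pyhealth.models import SparcNet
from pyhealth.tasks import sleep_staging_sleepedf_fn
from pyhealth.trainer import Trainer

dataset = SleepEDFDataset(root="/path/to/sleep-edf")  # illustrative path
samples = dataset.set_task(sleep_staging_sleepedf_fn)  # one sample per EEG epoch

train_ds, val_ds, test_ds = split_by_patient(samples, [0.7, 0.1, 0.2])
model = SparcNet(
    dataset=samples,
    feature_keys=["signal"],  # raw EEG epoch
    label_key="label",        # sleep stage
    mode="multiclass",
)
trainer = Trainer(model=model, device="cuda")
trainer.train(
    train_dataloader=get_dataloader(train_ds, batch_size=64, shuffle=True),
    val_dataloader=get_dataloader(val_ds, batch_size=64),
    epochs=10,
)
```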
Objective: Standardize diagnoses across different coding systems
Approach:
Consult references/medical_coding.md for comprehensive guidance; a brief lookup and mapping sketch follows below.
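A brief sketch of the two pyhealth.medcode entry points: InnerMap for lookups within one coding system and CrossMap for translating between systems (the example codes are illustrative):

```python
from pyhealth.medcode import CrossMap, InnerMap

# Lookup and hierarchy traversal within a single system
icd9 = InnerMap.load("ICD9CM")
print(icd9.lookup("428.0"))         # code description, e.g. congestive heart failure
print(icd9.get_ancestors("428.0"))  # parent codes in the ICD-9-CM hierarchy

# Translate across systems: ICD-9-CM diagnoses to CCS categories
diag_map = CrossMap.load("ICD9CM", "CCSCM")
print(diag_map.map("428.0"))

# Drug codes: NDC to ATC (useful for drug recommendation pipelines)
drug_map = CrossMap.load("NDC", "ATC")
print(drug_map.map("00527051210"))  # illustrative NDC code
```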
Objective: Automatically assign ICD codes from clinical notes
Approach:
1. Load clinical notes data (references/datasets.md)
2. Apply the ICD coding task (references/tasks.md)
3. Preprocess the text (references/preprocessing.md)
4. Select a text-capable model (references/models.md)
5. Train and evaluate (references/training_evaluation.md)

Best practices:

Always split by patient: Prevent data leakage by ensuring no patient appears in multiple splits
```python
from pyhealth.datasets import split_by_patient

train, val, test = split_by_patient(dataset, [0.7, 0.1, 0.2])
```
Check dataset statistics: Understand your data before modeling
```python
dataset.stat()  # prints patients, visits, events, code distributions
```
Use appropriate preprocessing: Match processors to data types (see references/preprocessing.md)
Start with baselines: Establish baseline performance with simple models
Choose task-appropriate models: for example, RETAIN when interpretability matters, SafeDrug for drug recommendation under DDI constraints, Transformer for long visit sequences, and GNN variants for graph-structured inputs (see references/models.md)
Monitor validation metrics: Use metrics appropriate to the task and handle class imbalance (e.g., prefer AUPRC over accuracy)
Calibrate predictions: Ensure probabilities are reliable (see references/training_evaluation.md and the sketch after this list)
Assess fairness: Evaluate across demographic groups to detect bias
Quantify uncertainty: Provide confidence estimates for predictions
Interpret predictions: Use attention weights, SHAP, or Chefer relevance propagation for clinical trust
Validate thoroughly: Use held-out test sets from different time periods or sites
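As a sketch of the calibration step, assuming PyHealth's pyhealth.calib module with a temperature scaling calibrator, and reusing model, val, and test_loader from the quick start:

```python
from pyhealth.calib.calibration import TemperatureScaling
from pyhealth.trainer import Trainer

# Fit the calibrator on held-out validation data (never on the test set)
cal_model = TemperatureScaling(model)
cal_model.calibrate(cal_dataset=val)

# Evaluate the calibrated model; ECE measures calibration quality
cal_trainer = Trainer(model=cal_model, metrics=["roc_auc", "pr_auc", "ECE"])
print(cal_trainer.evaluate(test_loader))
```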
ImportError for dataset: confirm that pyhealth is installed in the active environment and that the dataset class name is spelled correctly.
Out of memory: reduce batch_size or the maximum sequence length (max_seq_length).
Poor performance: check class imbalance (monitor pr_auc rather than accuracy), revisit preprocessing, and compare against a simple baseline first.
Slow training: run on a GPU (Trainer(model=model, device="cuda")).

Complete example: an end-to-end mortality prediction pipeline with the interpretable RETAIN model.

```python
# Complete mortality prediction pipeline
import torch

from pyhealth.datasets import MIMIC4Dataset, split_by_patient, get_dataloader
from pyhealth.tasks import mortality_prediction_mimic4_fn
from pyhealth.models import RETAIN
from pyhealth.trainer import Trainer
# 1. Load dataset
print("Loading MIMIC-IV dataset...")
dataset = MIMIC4Dataset(
    root="/data/mimic4",
    tables=["diagnoses_icd", "procedures_icd", "prescriptions"],
)
dataset.stat()  # prints patients, visits, events, code distributions
# 2. Define task
print("Setting mortality prediction task...")
sample_dataset = dataset.set_task(mortality_prediction_mimic4_fn)
print(f"Generated {len(sample_dataset)} samples")
# 3. Split data (by patient to prevent leakage)
print("Splitting data...")
train_ds, val_ds, test_ds = split_by_patient(
sample_dataset, ratios=[0.7, 0.1, 0.2], seed=42
)
# 4. Create data loaders
train_loader = get_dataloader(train_ds, batch_size=64, shuffle=True)
val_loader = get_dataloader(val_ds, batch_size=64)
test_loader = get_dataloader(test_ds, batch_size=64)
# 5. Initialize interpretable model
print("Initializing RETAIN model...")
model = RETAIN(
    dataset=sample_dataset,
    feature_keys=["conditions", "procedures", "drugs"],  # keys produced by the task function
    label_key="label",
    mode="binary",
    embedding_dim=128,
)
# 6. Train model
print("Training model...")
trainer = Trainer(
    model=model,
    device="cuda",
    metrics=["accuracy", "precision", "recall", "f1", "roc_auc", "pr_auc"],
    output_path="./checkpoints",
    exp_name="mortality_retain",
)
trainer.train(
    train_dataloader=train_loader,
    val_dataloader=val_loader,
    epochs=50,
    optimizer_class=torch.optim.Adam,
    optimizer_params={"lr": 1e-3},
    weight_decay=1e-5,
    monitor="pr_auc",  # AUPRC is more informative for imbalanced mortality labels
    monitor_criterion="max",
)
# 7. Evaluate on test set
print("Evaluating on test set...")
test_results = trainer.evaluate(test_loader)  # uses the metrics configured on the Trainer
print("\nTest Results:")
for metric, value in test_results.items():
print(f" {metric}: {value:.4f}")
# 8. Get predictions with attention for interpretation
predictions = trainer.inference(
test_loader,
additional_outputs=["visit_attention", "feature_attention"],
return_patient_ids=True
)
# 9. Analyze a high-risk patient
high_risk_idx = predictions["y_pred"].argmax()
patient_id = predictions["patient_ids"][high_risk_idx]
visit_attn = predictions["visit_attention"][high_risk_idx]
feature_attn = predictions["feature_attention"][high_risk_idx]
print(f"\nHigh-risk patient: {patient_id}")
print(f"Risk score: {predictions['y_pred'][high_risk_idx]:.3f}")
print(f"Most influential visit: {visit_attn.argmax()}")
print(f"Most important features: {feature_attn[visit_attn.argmax()].argsort()[-5:]}")
# 10. Save model for deployment
trainer.save("./models/mortality_retain_final.pt")
print("\nModel saved successfully!")
```
For detailed information on each component, refer to the comprehensive reference files in the references/ directory listed above.
Total comprehensive documentation: ~28,000 words across modular reference files.