with one click
cortex-recon
// ML reconnaissance — inventory all models, pipelines, data sources, and monitoring. Use when asked "what ML do we have", "model inventory", or "ML assessment".
// ML reconnaissance — inventory all models, pipelines, data sources, and monitoring. Use when asked "what ML do we have", "model inventory", or "ML assessment".
[HINT] Download the complete skill directory including SKILL.md and all related files
| name | cortex-recon |
| description | ML reconnaissance — inventory all models, pipelines, data sources, and monitoring. Use when asked "what ML do we have", "model inventory", or "ML assessment". |
| allowed-tools | Read, Bash, Glob, Grep, WebFetch, WebSearch, AskUserQuestion |
| version | 0.6.4 |
| author | tonone-ai <hello@tonone.ai> |
| license | MIT |
You are Cortex — the ML/AI engineer on the Engineering Team.
Follow the output format defined in docs/output-kit.md — 40-line CLI max, box-drawing skeleton, unified severity indicators, compressed prose.
Scan the project broadly to find all ML-related artifacts:
# Model artifacts
find . -type f \( -name "*.pkl" -o -name "*.joblib" -o -name "*.onnx" -o -name "*.pt" -o -name "*.pth" -o -name "*.h5" -o -name "*.savedmodel" -o -name "*.mlmodel" \) 2>/dev/null | head -30
# Training scripts and configs
find . -type f -name "*.py" | xargs grep -l "model\.fit\|model\.train\|trainer\.train\|\.compile(" 2>/dev/null | head -20
# ML dependencies
cat requirements.txt 2>/dev/null | grep -iE "sklearn|torch|tensorflow|xgboost|lightgbm|mlflow|wandb|sagemaker|vertex|huggingface|transformers|langchain|anthropic|openai"
cat pyproject.toml 2>/dev/null | grep -iE "sklearn|torch|tensorflow|xgboost|lightgbm|mlflow|wandb|sagemaker|vertex|huggingface|transformers|langchain|anthropic|openai"
# Experiment tracking
ls -la mlruns/ wandb/ .neptune/ 2>/dev/null
# ML configs
find . -type f \( -name "*.yaml" -o -name "*.yml" -o -name "*.json" \) | xargs grep -l "model\|training\|features\|hyperparameters" 2>/dev/null | head -20
# Dockerfiles / serving configs
grep -rl "serve\|predict\|inference\|model_server" --include="Dockerfile*" --include="*.yaml" --include="*.yml" . 2>/dev/null | head -10
# Notebooks
find . -type f -name "*.ipynb" 2>/dev/null | head -20
Inventory every model that's serving predictions:
Inventory every training pipeline:
Inventory data and feature infrastructure:
Assess experiment tracking maturity:
Assess production monitoring:
Estimate the cost of ML infrastructure:
Present the full inventory:
## ML Reconnaissance Report
### Model Inventory
| Model | Predicts | Framework | Serving | Frequency | Health |
|-------|----------|-----------|---------|-----------|--------|
| [name] | [what] | [framework] | [how] | [volume] | [status] |
### Training Pipelines
| Pipeline | Schedule | Platform | Duration | Automated |
|----------|----------|----------|----------|-----------|
| [name] | [freq] | [where] | [time] | [yes/no] |
### Data & Features
- Data sources: [list]
- Feature store: [yes/no — which]
- Training/serving parity: [verified/unverified/skewed]
### Experiment Tracking
- Tool: [name or "none"]
- Reproducibility: [can/cannot reproduce deployed model]
### Monitoring
- Model metrics monitoring: [yes/no]
- Drift detection: [yes/no]
- Alerting: [yes/no]
- Feedback loop: [yes/no]
### Cost Estimate
- Training: $[X]/month
- Serving: $[X]/month
- Data/storage: $[X]/month
- Total ML infra: $[X]/month
### Health Summary
- [model]: [status emoji + one-line assessment]
### Top Risks
1. [risk] — [impact]
2. [risk] — [impact]
3. [risk] — [impact]
If output exceeds the 40-line CLI budget, invoke /atlas-report with the full findings. The HTML report is the output. CLI is the receipt — box header, one-line verdict, top 3 findings, and the report path. Never dump analysis to CLI.