# jupyter-ml
Full CUDA ML stack + JupyterLab with real-time collaboration and CRDT MCP server on port 8888. Use when working with GPU-accelerated Jupyter notebooks, ML training with collaboration, or the jupyter-ml layer.
| Field | Value |
|---|---|
| name | jupyter-ml |
| description | Full CUDA ML stack + JupyterLab with real-time collaboration and CRDT MCP server on port 8888. Use when working with GPU-accelerated Jupyter notebooks, ML training with collaboration, or the jupyter-ml layer. |
| Property | Value |
|---|---|
| Dependencies | cuda, supervisord |
| Sub-layers | llama-cpp, unsloth, jupyter-mcp |
| Ports | 8888 |
| Service | jupyter-ml (supervisord) |
| Volume | workspace at ~/workspace |
| Install files | layer.yml, pixi.toml, tasks: |
This is a Tier 2 "environment owner" layer. It declares `layers: [llama-cpp, unsloth, jupyter-mcp]`; the jupyter-mcp sub-layer runs through that declaration, not directly in `tasks:`.

Build order: pixi environment → llama-cpp (binaries) → unsloth (vllm wheel + unsloth pip + patch) → jupyter-mcp (MCP extension)
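Under the declarations above, the layer.yml for this shape might be sketched as follows. This is illustrative only: the real schema is documented in /ov-build:layer, and every field here beyond `layers:` and `tasks:` is an assumption.

```yaml
# Hypothetical layer.yml sketch (not the actual file).
name: jupyter-ml
layers: [llama-cpp, unsloth, jupyter-mcp]  # sub-layers, built after the pixi env
tasks:
  - pixi install   # creates the environment from pixi.toml
  # llama-cpp, unsloth, and jupyter-mcp run through layers:, not tasks:
```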
- **conda-forge:** JupyterLab >= 4.4.0, jupyter-resource-usage, jupyterlab-git, jupyterlab-lsp, jupyterlab-spellchecker, tensorboard, wandb, matplotlib, seaborn, pandas, numpy, scikit-learn, scipy, polars, pyarrow, dask, duckdb, altair, papermill, marimo, mkdocs, black, pytest
- **PyPI (ML Core):** PyTorch >= 2.10.0 (CUDA 13.0), xformers, transformers >= 5.0.0rc1, accelerate, einops, kornia, spandrel, torchsde
- **PyPI (vLLM Runtime):** blake3, flashinfer-python, numba, ray, xgrammar, and 25+ more runtime deps
- **PyPI (Fine-tuning):** peft, trl, bitsandbytes, deepspeed, liger-kernel
- **PyPI (LangChain):** langchain, langchain-core, langchain-openai, langchain-community, langchain-classic, langchain-anthropic, langchain-huggingface, langchain-ollama, chromadb, faiss-cpu
- **PyPI (Evaluation):** evidently (with llm extras), evaluate, sacrebleu, rouge-score, nltk, bertviz
- **PyPI (APIs):** openai, anthropic, gradio, ollama (client)
- **PyPI (Collaboration):** jupyter-collaboration >= 4.1.0
- **RPM:** git, gcc, gcc-c++
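As a hedged illustration of how this split maps onto pixi, a pixi.toml along these lines would put the conda-forge packages under `[dependencies]` and the pip packages under `[pypi-dependencies]`. The fragment is abridged and hypothetical: the pins are copied from the lists above, but the layer's actual file may be organized differently.

```toml
# Hypothetical pixi.toml fragment (abridged, illustrative only).
[dependencies]              # resolved from conda-forge
jupyterlab = ">=4.4.0"
pandas = "*"
scikit-learn = "*"

[pypi-dependencies]         # installed from PyPI inside the env
torch = ">=2.10.0"          # CUDA 13.0 build
transformers = ">=5.0.0rc1"
jupyter-collaboration = ">=4.1.0"
```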
| Variable | Value | Purpose |
|---|---|---|
| NVIDIA_PYTHON_PROJECT | ~/.pixi | NVIDIA driver → pixi env mapping |
| LD_LIBRARY_PATH | /usr/lib64:$HOME/llama.cpp | CUDA libs + llama.cpp shared libs |
| LLAMA_CPP_PATH | ~/llama.cpp | (from llama-cpp sub-layer) |
| UNSLOTH_SKIP_LLAMA_CPP_INSTALL | 1 | (from unsloth sub-layer) |
| HF_HOME | ~/.cache/huggingface | (from unsloth sub-layer) |
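A short sketch of how notebook code might consume these variables, falling back to the values documented above when they are unset (the fallback logic is an assumption, not part of the layer):

```python
import os

# Resolve the layer's environment variables with the documented values as
# fallbacks, the way a notebook might locate llama.cpp and the HF cache.
home = os.path.expanduser("~")
llama_cpp_path = os.environ.get("LLAMA_CPP_PATH", os.path.join(home, "llama.cpp"))
hf_home = os.environ.get("HF_HOME", os.path.join(home, ".cache", "huggingface"))
skip_llama_build = os.environ.get("UNSLOTH_SKIP_LLAMA_CPP_INSTALL", "1") == "1"

print(llama_cpp_path)
print(hf_home)
print(skip_llama_build)
```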
Same CRDT MCP server as /ov-jupyter:jupyter — 11 tools for programmatic notebook access (notebook_list/create/get/watch/list_users, cell_get/update/insert/delete/execute, room_list). Clients no longer manage CRDT rooms — every notebook_/cell_ call auto-attaches. See /ov-jupyter:jupyter-mcp "Usage philosophy and caveats" for the design principles.
Endpoint: http://localhost:8888/mcp (Streamable HTTP, MCP spec 2025-11-25)
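As a minimal sketch of what a client request looks like: the tool name comes from the list above and the JSON-RPC envelope follows the MCP tools/call shape, but the exact argument schema this server accepts is an assumption.

```python
import json

MCP_URL = "http://localhost:8888/mcp"  # Streamable HTTP endpoint from above

# JSON-RPC 2.0 request invoking the notebook_list tool. MCP tool calls go
# through the generic tools/call method with a tool name and arguments.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {"name": "notebook_list", "arguments": {}},
}
payload = json.dumps(request)
print(payload)

# POST `payload` to MCP_URL with headers
#   Content-Type: application/json
#   Accept: application/json, text/event-stream
# using any HTTP client (curl, httpx, or an MCP SDK).
```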
| | jupyter | jupyter-ml |
|---|---|---|
| Base dep | supervisord | cuda, supervisord |
| GPU | No | CUDA 13.0 |
| Platforms | amd64 + arm64 | amd64 only |
| MCP | CRDT (11 tools) | CRDT (11 tools) |
| ML stack | No | Full (PyTorch, vLLM 0.19, unsloth) |
| Volume | workspace | workspace |
Related:

- /ov-jupyter:jupyter-ml
- /ov-jupyter:jupyter-ml-notebook
- /ov-jupyter:jupyter — Lightweight variant (no CUDA, multi-arch)
- /ov-jupyter:llama-cpp — Sub-layer: llama.cpp binaries
- /ov-jupyter:unsloth — Sub-layer: vLLM wheel + fine-tuning + vLLM patch
- /ov-jupyter:jupyter-mcp — Sub-layer: CRDT MCP extension
- /ov-jupyter:notebook-templates — Starter notebooks (data layer, used alongside this layer in images)
- /ov-hermes:hermes — MCP consumer (auto-discovers via OV_MCP_SERVERS; uses jupyter tools to read/edit/execute cells)
- /ov-openwebui:openwebui — MCP consumer (sets CODE_EXECUTION_ENGINE=jupyter when this server is discovered, routing Open WebUI code blocks to the Jupyter kernel)

Use when the user asks about the jupyter-ml layer.

See also:

- /ov-build:layer — layer authoring reference (layer.yml schema, task verbs, service declarations)
- /ov-build:eval — declarative testing (eval: block, ov eval image, ov eval live)