| name | openwebui |
| description | Open WebUI with auto-configured LLM providers, MCP servers, and Jupyter code execution. MUST be invoked before any work involving: the openwebui layer, Open WebUI configuration, LLM provider auto-detection, or MCP server discovery for Open WebUI. |
openwebui -- Open WebUI with auto-configuration
Layer Properties
| Property | Value |
|---|---|
| Dependencies | python, supervisord |
| Volumes | data -> /opt/data |
| Ports | 8080 |
| Aliases | open-webui -> open-webui |
| Services | openwebui (supervisord, autostart) |
| Install files | pixi.toml, tasks:, openwebui-entrypoint |
Environment Variables
| Variable | Value |
|---|---|
| DATA_DIR | /opt/data |
| PORT | 8080 |
| DOCKER | true |
| ENABLE_DIRECT_CONNECTIONS | true |
| ENABLE_CODE_EXECUTION | true |
| ENABLE_PERSISTENT_CONFIG | false |
Optional Environment Variables (env_accepts)
| Variable | Description |
|---|---|
| OPENROUTER_API_KEY | API key for OpenRouter (maps to OPENAI_API_KEY with OpenRouter base URL) |
| OLLAMA_API_KEY | API key for Ollama Cloud inference |
| OLLAMA_HOST | Local Ollama server URL (auto-injected by ollama layer env_provides) |
| OPENAI_API_KEY | Direct OpenAI API key |
| OPENAI_API_BASE_URL | OpenAI-compatible API base URL |
| WEBUI_AUTH | Enable authentication (default: true) |
| WEBUI_ADMIN_EMAIL | Admin account email for first-start setup |
| OV_MCP_SERVERS | JSON array of MCP servers (auto-injected by mcp_provides layers) |
MCP Accepts
| Name | Description |
|---|---|
| jupyter | JupyterLab CRDT MCP server for notebook manipulation |
| chrome-devtools | Chrome DevTools MCP server for browser automation |
Secrets (Two-Tier Architecture)
Tier 1: Infrastructure secrets (podman secrets via secrets: field)
Auto-generated, stored in credential store (keyring/kdbx/config-file fallback):
| Secret | Env Fallback | Purpose | Auto-gen path |
|---|---|---|---|
| webui-secret-key | WEBUI_SECRET_KEY | JWT + encryption key (CRITICAL: losing it breaks all sessions and OAuth tokens) | ov config time, ProvisionPodmanSecrets |
| admin-password | WEBUI_ADMIN_PASSWORD | Admin account password, declared as secret_requires: | ov deploy add time, ensureLayerSecret |
Provisioned as Secret=ov-openwebui-<name>,type=env,target=<ENV> in the quadlet. The entrypoint checks env vars first (from type=env injection), then file mounts at /run/secrets/ as fallback.
First-run admin login: since 2026-05-06, WEBUI_ADMIN_PASSWORD auto-generates as a 32-byte hex random value if not pre-set. To retrieve the auto-generated password and log in for the first time:
```shell
ov secrets get ov/secret WEBUI_ADMIN_PASSWORD
```
Override with a specific password before the first deploy:
```shell
ov secrets set ov/secret WEBUI_ADMIN_PASSWORD <password>
ov config openwebui
```
Tier 2: User API keys (GPG-encrypted .secrets file)
User-provided API keys stored via ov secrets gpg:
```shell
ov secrets gpg set OPENROUTER_API_KEY sk-or-xxx
ov config openwebui --env-file .secrets
```
Auto-Configuration Entrypoint
The openwebui-entrypoint runs on EVERY start (no sentinel — ENABLE_PERSISTENT_CONFIG=false means env vars always override database config):
- Reads secrets from env vars (podman type=env) or /run/secrets/ file mounts
- LLM provider detection:
  - OLLAMA_HOST -> OLLAMA_BASE_URL (Ollama server)
  - OPENROUTER_API_KEY -> OPENAI_API_KEY + OPENAI_API_BASE_URL=https://openrouter.ai/api/v1
  - OLLAMA_API_KEY -> OPENAI_API_KEY + OPENAI_API_BASE_URL=https://api.ollama.com/v1
- MCP server discovery from OV_MCP_SERVERS JSON -> builds TOOL_SERVER_CONNECTIONS JSON for Open WebUI
- Jupyter detection — if a jupyter MCP server URL is found, sets CODE_EXECUTION_ENGINE=jupyter
- Execs open-webui serve
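The provider-detection step can be sketched in Python. This is an illustrative mapping only; the precedence applied when several keys are set at once is an assumption, not documented entrypoint behavior:

```python
def detect_llm_providers(env: dict[str, str]) -> dict[str, str]:
    """Map ov-level keys onto the variables Open WebUI reads."""
    cfg: dict[str, str] = {}
    # A local Ollama server is independent of the OpenAI-compatible slot.
    if env.get("OLLAMA_HOST"):
        cfg["OLLAMA_BASE_URL"] = env["OLLAMA_HOST"]
    # Assumed precedence for the single OpenAI-compatible slot:
    # OpenRouter first, then Ollama Cloud.
    if env.get("OPENROUTER_API_KEY"):
        cfg["OPENAI_API_KEY"] = env["OPENROUTER_API_KEY"]
        cfg["OPENAI_API_BASE_URL"] = "https://openrouter.ai/api/v1"
    elif env.get("OLLAMA_API_KEY"):
        cfg["OPENAI_API_KEY"] = env["OLLAMA_API_KEY"]
        cfg["OPENAI_API_BASE_URL"] = "https://api.ollama.com/v1"
    return cfg
```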
TOOL_SERVER_CONNECTIONS Format
The entrypoint translates ov's OV_MCP_SERVERS format into Open WebUI's expected TOOL_SERVER_CONNECTIONS JSON:
```json
[{
  "type": "mcp",
  "url": "http://ov-jupyter:8888/mcp",
  "spec_type": "url", "spec": "", "path": "",
  "auth_type": "", "key": "",
  "config": {"enable": true},
  "info": {"id": "", "name": "jupyter", "description": "MCP: jupyter"}
}]
```
Key fields: type must be "mcp" (not "openapi"), config.enable must be true.
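A minimal Python sketch of the translation; the OV_MCP_SERVERS input shape `[{"name": ..., "url": ...}]` is an assumption about ov's format, not a documented contract:

```python
import json

def build_tool_server_connections(ov_mcp_servers: str) -> str:
    """Translate an OV_MCP_SERVERS JSON array into the
    TOOL_SERVER_CONNECTIONS JSON that Open WebUI expects."""
    connections = []
    for server in json.loads(ov_mcp_servers):
        connections.append({
            "type": "mcp",               # must be "mcp", not "openapi"
            "url": server["url"],
            "spec_type": "url", "spec": "", "path": "",
            "auth_type": "", "key": "",
            "config": {"enable": True},  # must be true or the server is ignored
            "info": {"id": "", "name": server["name"],
                     "description": f"MCP: {server['name']}"},
        })
    return json.dumps(connections)
```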
Key Design Choice: ENABLE_PERSISTENT_CONFIG=false
This is the critical architectural decision. Without it, Open WebUI's database config overrides environment variables after the first admin-panel save. With it disabled, env vars ALWAYS win — so the entrypoint's dynamic configuration (LLM providers, MCP servers, Jupyter) works reliably on every restart. No sentinel-guarded config patching needed (unlike hermes).
Important: No port_relay Needed
Open WebUI binds to 0.0.0.0:8080 by default. The port_relay field is only for services that bind to 127.0.0.1 (like Chrome DevTools on port 9222). Using port_relay with Open WebUI causes a socat/Open WebUI port conflict on eth0.
Build Notes
- pixi.toml uses [pypi-dependencies] (NOT [feature.default.pypi-dependencies] — the default feature is reserved in pixi)
- Open WebUI is installed via pip install open-webui in the pixi environment (~500MB+ of dependencies)
- First startup takes ~30 seconds (Alembic DB migrations, model loading). supervisord autorestart=true handles transient failures.
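A minimal pixi.toml fragment illustrating the [pypi-dependencies] constraint above; the channel, platform, and Python version shown here are assumptions, not the layer's actual manifest:

```toml
[project]
name = "openwebui"
channels = ["conda-forge"]
platforms = ["linux-64"]

[dependencies]
python = ">=3.11"

# Top-level table, NOT [feature.default.pypi-dependencies]:
# the default feature is reserved in pixi.
[pypi-dependencies]
open-webui = "*"
```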
Usage
```yaml
openwebui:
  base: fedora
  layers:
    - agent-forwarding
    - openwebui
    - dbus
    - ov
  ports:
    - "8080:8080"
```
Related Layers
/ov-foundation:python -- Python runtime dependency
/ov-foundation:supervisord -- process manager dependency
/ov-hermes:hermes -- alternative AI agent with similar MCP/LLM auto-config pattern
Related Commands
/ov-core:config -- ov config openwebui --update-all for service discovery
/ov-build:secrets -- ov secrets for kdbx/keyring credential management
/ov-core:service -- ov service status openwebui for runtime management
/ov-build:mcp -- probe the MCP servers openwebui consumes (auto-configured into TOOL_SERVER_CONNECTIONS from OV_MCP_SERVERS): ov eval mcp list-tools <provider-image> shows what tools openwebui will see, and ov eval mcp ping verifies liveness before debugging openwebui itself.
Related Images
/ov-openwebui:openwebui -- the deployed image
/ov-jupyter:jupyter -- deploy alongside for MCP notebook access and code execution
/ov-ollama:ollama -- deploy alongside for local LLM inference
/ov-hermes:hermes -- alternative AI frontend (CLI-based agent vs web UI)
When to Use This Skill
MUST be invoked when the task involves the openwebui layer, Open WebUI configuration, LLM provider auto-detection, MCP server discovery for Open WebUI, or the openwebui entrypoint. Invoke this skill BEFORE reading source code or launching Explore agents.
Related
/ov-build:layer — layer authoring reference (layer.yml schema, task verbs, service declarations)
/ov-build:eval — declarative testing (eval: block, ov eval image, ov eval live)