# update
Use when deploying updates to this machine. Pulls latest changes, syncs dependencies, verifies environment, and restarts the bridge service. Triggered by 'update', 'deploy', 'pull and restart', or after git pull.
Pull the latest changes from the remote repository, sync dependencies, and restart the bridge service.
PREREQUISITE: Must be on latest main branch before running.
```bash
cd ~/src/ai && git checkout main && git pull
```
If there are local changes, stash them first: git stash. The update orchestrator also handles this, but being on main is required.
Run the full update orchestrator and report the results:
```bash
cd ~/src/ai && .venv/bin/python scripts/update/run.py --full
```
The orchestrator will:
- Refresh `.claude` hardlinks and audit skill hooks
- Detect the machine via `scutil --get ComputerName`, match it against the `machine` field in `~/Desktop/Valor/projects.json`, and report which projects this machine handles
- Validate `projects.json` (green-light gate) — runs `bridge/config_validation.py::validate_projects_config` over the full config (Step 4.6). Enforces that every bridge-contact identifier (DM contact id, Telegram group, email contact, email domain wildcard) resolves to exactly one machine; a sketch of this rule follows the list. On failure: log the error, skip the service restart, leave the running bridge serving on the previously-validated config. See Single-Machine Ownership.

After running, report the result. If there are warnings or errors, list each one clearly.
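A minimal sketch of the ownership rule the gate enforces — the config path comes from this document, but the field names and function are illustrative, not the real `validate_projects_config` API:

```python
# Illustrative single-machine-ownership check: every contact identifier must
# resolve to exactly one machine. Field names ("contacts") are assumptions.
import json
from collections import defaultdict
from pathlib import Path

def check_single_machine_ownership(config_path: Path) -> list[str]:
    projects = json.loads(config_path.read_text()).get("projects", {})
    owners = defaultdict(set)  # bridge-contact identifier -> machines that claim it
    for proj in projects.values():
        for ident in proj.get("contacts", []):  # hypothetical field name
            owners[ident].add(proj.get("machine", ""))
    return [
        f"{ident!r} is claimed by more than one machine: {sorted(machines)}"
        for ident, machines in owners.items()
        if len(machines) > 1
    ]  # non-empty result -> gate fails, service restart is skipped

errors = check_single_machine_ownership(Path.home() / "Desktop/Valor/projects.json")
```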
**First-install backfill reminder (markitdown):** When the update run is the first to install the `[knowledge]` extra on this machine (detected by `scripts/update/deps.py`'s lockfile-diff check), the Telegram summary appends a one-line tip: run `valor-ingest --scan ~/work-vault/` to backfill existing binary files into sidecars. The reminder is gated by `~/.cache/valor/markitdown-backfill-reminded` and fires only once per machine. If the user asks why existing PDFs/docs in the vault are not yet indexed after update, point them at this command — the watcher only picks up files modified after it starts.
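The once-per-machine gating amounts to a sentinel-file check; a rough sketch, assuming the sentinel path from the text above and a hypothetical helper name:

```python
# Fire-once reminder gated by a sentinel file (illustrative, not the real code).
from pathlib import Path

SENTINEL = Path.home() / ".cache/valor/markitdown-backfill-reminded"

def maybe_append_backfill_tip(summary_lines: list[str], first_knowledge_install: bool) -> None:
    """Append the one-line backfill tip at most once per machine."""
    if first_knowledge_install and not SENTINEL.exists():
        summary_lines.append(
            "Tip: run 'valor-ingest --scan ~/work-vault/' to backfill existing binary files into sidecars."
        )
        SENTINEL.parent.mkdir(parents=True, exist_ok=True)
        SENTINEL.touch()  # marks this machine as already reminded
```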
**Log rotation:** The orchestrator installs the user-space log-rotate LaunchAgent on every `--full` run (`com.valor.log-rotate.plist` → `~/Library/LaunchAgents/`). No root/sudo needed; the LaunchAgent runs `scripts/log_rotate.py` every 30 minutes to rotate any `logs/*.log` file over 10 MB. The installer is content-idempotent — if the rendered plist matches the installed file, the bootout/bootstrap cycle is skipped entirely. If a stale `/etc/newsyslog.d/valor.conf` exists from prior releases, the orchestrator attempts `sudo -n rm` (non-interactive) to remove it; if sudo requires a password, the cleanup is skipped with a warning and retried next run.
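Content-idempotence here just means comparing the rendered plist against what is already installed before cycling launchd; a hedged sketch with an assumed helper name:

```python
# Content-idempotent LaunchAgent install: skip the bootout/bootstrap cycle
# when the rendered plist already matches the installed file.
import os
import subprocess
from pathlib import Path

def install_launchagent(rendered_plist: str, label: str = "com.valor.log-rotate") -> bool:
    """Install the plist only when its content differs from what is on disk."""
    target = Path.home() / "Library/LaunchAgents" / f"{label}.plist"
    if target.exists() and target.read_text() == rendered_plist:
        return False  # identical content: nothing to do
    target.parent.mkdir(parents=True, exist_ok=True)
    target.write_text(rendered_plist)
    domain = f"gui/{os.getuid()}"
    subprocess.run(["launchctl", "bootout", domain, str(target)], check=False)  # ok if not loaded
    subprocess.run(["launchctl", "bootstrap", domain, str(target)], check=True)
    return True
```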
The orchestrator automatically cleans up sessions as part of Step 5.5 (stale sessions are killed).

The update system automatically checks PyPI for newer versions of `anthropic` and `claude-agent-sdk` on every run. When a newer version is available, it:
- Bumps the version pin in `pyproject.toml`
- Runs `uv sync` to install the new version
- Runs the smoke test (`pytest tests/unit/test_docs_auditor_substrate.py -x -q`)
- On test failure, reverts `pyproject.toml` and re-syncs the old versions

This means SDK upgrades happen automatically and safely — no manual intervention needed unless a breaking change causes test failures.
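A hedged sketch of that bump / sync / test / revert loop — the `uv sync` command and pytest target come from the text above; the function and the `bump_pins` helper are assumptions:

```python
# Illustrative bump -> sync -> smoke-test -> revert-on-failure loop.
import subprocess
from pathlib import Path

def try_sdk_upgrade(project_dir: Path, bump_pins) -> bool:
    pyproject = project_dir / "pyproject.toml"
    original = pyproject.read_text()
    bump_pins(pyproject)  # hypothetical helper: rewrite pins for anthropic / claude-agent-sdk
    subprocess.run(["uv", "sync"], cwd=project_dir, check=True)
    test = subprocess.run(
        ["pytest", "tests/unit/test_docs_auditor_substrate.py", "-x", "-q"],
        cwd=project_dir,
    )
    if test.returncode != 0:
        pyproject.write_text(original)  # revert the pin bump
        subprocess.run(["uv", "sync"], cwd=project_dir, check=True)  # re-sync the old versions
        return False
    return True
```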
When `pyproject.toml` changes via `git pull` with critical dep version changes (`telethon`, `anthropic`, `claude-agent-sdk`):
- The automatic update run (`remote-update.sh`) detects the change and writes `data/upgrade-pending`
- Running `/update` manually will apply the upgrade with proper verification

If `data/upgrade-pending` exists:
```bash
# Check what's pending
cat ~/src/ai/data/upgrade-pending

# After /update applies the upgrade and verifies the bridge starts:
rm ~/src/ai/data/upgrade-pending
```
To check the environment without making changes:
```bash
cd ~/src/ai
.venv/bin/python scripts/update/run.py --verify
```
After update, reinstall launchd plists to pick up any template changes:
```bash
cd ~/src/ai
./scripts/install_reflections.sh
./scripts/install_worker.sh
```
The install script substitutes `__PROJECT_DIR__` and `__HOME_DIR__` placeholders with the current machine's paths. This ensures plists work on any machine without hardcoded usernames.
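The substitution itself is a simple string replacement over the plist template; a minimal sketch, assuming only that a template file exists (the helper name is illustrative):

```python
# Render a launchd plist template with machine-local paths.
# Placeholder names come from the text above; the function is an assumption.
from pathlib import Path

def render_plist(template_path: Path, project_dir: Path) -> str:
    """Fill the launchd template with this machine's paths."""
    text = template_path.read_text()
    return (
        text.replace("__PROJECT_DIR__", str(project_dir))
            .replace("__HOME_DIR__", str(Path.home()))
    )
```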

```bash
# Rebuild the virtualenv from scratch
cd ~/src/ai
rm -rf .venv
uv venv
uv sync --all-extras
```

```bash
# Reinstall all packages without wiping the virtualenv
cd ~/src/ai
uv sync --all-extras --reinstall
```
Calendar integration checks:

```bash
# Confirm the Google token exists
ls ~/Desktop/Valor/google_token.json

# Exercise the calendar integration
valor-calendar test

# Verify the OAuth library is installed
.venv/bin/python -c "import google_auth_oauthlib; print('OK')"
```

The bridge derives active projects from `scutil --get ComputerName` matched against the `machine` field in `~/Desktop/Valor/projects.json`. If the wrong projects are active:
- Check the machine name: `scutil --get ComputerName`
- List each project's machine assignment: `python -c "import json; [print(f'{k}: {v.get(\"machine\")}') for k,v in json.load(open('$HOME/Desktop/Valor/projects.json')).get('projects',{}).items()]"`
- Confirm the `machine` value in `projects.json` matches the ComputerName exactly (case-insensitive)

If the bridge does not come back up cleanly:

```bash
# Check logs
tail -50 ~/src/ai/logs/bridge.error.log

# Manual restart
~/src/ai/scripts/valor-service.sh restart

# Check status
~/src/ai/scripts/valor-service.sh status
```
If the worker service misbehaves:

```bash
# Check logs
tail -50 ~/src/ai/logs/worker_error.log

# Manual restart
~/src/ai/scripts/valor-service.sh worker-restart

# Check status
~/src/ai/scripts/valor-service.sh worker-status

# Reinstall plist
~/src/ai/scripts/install_worker.sh
```
**`git.py`**

```python
from scripts.update import git

# Pull with automatic stash handling
result = git.git_pull(project_dir)
# result.success, result.commit_count, result.commits

# Check pending upgrades
pending = git.check_upgrade_pending(project_dir)
# pending.pending, pending.timestamp, pending.reason
```
**`deps.py`**

```python
from scripts.update import deps

# Sync dependencies
result = deps.sync_dependencies(project_dir, reinstall=False)
# result.success, result.method ("uv" or "pip")

# Verify versions
versions = deps.verify_critical_versions(project_dir)
# [VersionInfo(package, version, expected, matches), ...]
```
**`verify.py`**

```python
from scripts.update import verify

result = verify.verify_environment(project_dir)
# result.system_tools, result.python_deps, result.dev_tools
# result.valor_tools, result.ollama, result.sdk_auth, result.mcp_servers
```
**`calendar.py`**

```python
from scripts.update import calendar

# Ensure global hook is configured
hook = calendar.ensure_global_hook(project_dir)
# hook.configured, hook.created, hook.error

# Generate calendar config
config = calendar.generate_calendar_config(project_dir)
# config.success, config.mappings, config.error
```
**`mcp_memory.py`, `mcp_byob.py`**

Both modules idempotently verify/repair their entry in `~/.claude.json` `mcpServers`, under `fcntl.flock(LOCK_EX | LOCK_NB)` on `~/.claude.json.lock` with the same 3-attempt backoff (50/200/800 ms). `run.py` calls both on every invocation so drift is healed automatically.

```python
from scripts.update import mcp_memory, mcp_byob

# Memory MCP -- python3 -m mcp_servers.memory_server
r1 = mcp_memory.verify_memory_mcp(write=True)
# r1.ok, r1.action ("ok"|"installed"|"repaired"|...)

# BYOB MCP -- tsx ~/.byob/packages/mcp-server/bin/byob-mcp.ts, BYOB_ALLOW_EVAL=1
r2 = mcp_byob.verify_byob_mcp(write=True)
# r2.ok, r2.action
```

`write=False` runs in verify-only mode (`LOCK_SH`, no rename) — used by `/update --verify`.
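The locking discipline above boils down to a non-blocking exclusive flock with a short backoff ladder; a sketch under those assumptions (the exact attempt/sleep pairing is an assumption):

```python
# 3-attempt LOCK_EX | LOCK_NB acquisition with 50/200/800 ms backoff.
import fcntl
import time
from contextlib import contextmanager
from pathlib import Path

LOCK_PATH = Path.home() / ".claude.json.lock"
BACKOFF_S = (0.05, 0.2, 0.8)  # 50/200/800 ms

@contextmanager
def claude_json_lock():
    fh = open(LOCK_PATH, "w")
    try:
        for attempt, delay in enumerate(BACKOFF_S, start=1):
            try:
                fcntl.flock(fh, fcntl.LOCK_EX | fcntl.LOCK_NB)
                break
            except BlockingIOError:
                if attempt == len(BACKOFF_S):
                    raise  # still held by another process after three attempts
                time.sleep(delay)
        yield fh
    finally:
        fcntl.flock(fh, fcntl.LOCK_UN)
        fh.close()
```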
`run.py` wires:

- `mcp_memory.verify_memory_mcp()` — runs every invocation.
- `mcp_byob.verify_byob_mcp()` — runs every invocation.

For BYOB binary updates (rebuild `~/.byob/` when the pinned commit changes in `config/byob_pin.json`) and bcu binary updates (re-download + SHA verify against `config/bcu_pin.json` when the opt-in sentinel `~/.config/valor/computer-use-enabled` is present), see the upcoming implementation in `scripts/update/run.py` and the post-install canary at `scripts/update/byob_canary.js`. Pins are bumped only via:

- `/update --bump-byob` — next BYOB upstream commit
- `/update --bump-bcu` — next bcu release tag

Rollback paths:

- BYOB: the existing `~/.byob/` tree is copied to `~/.byob.prev/` before `git pull && bun install && bun run setup`. BYOB v0.3+ is a workspace monorepo with build artifacts under `packages/*/output/` and `packages/*/dist/` — there is no single top-level `dist/` to copy. Restore by `rm -rf ~/.byob && mv ~/.byob.prev ~/.byob` on canary failure (defined as `cd ~/.byob && bun run doctor` reporting any red status, or the post-install end-to-end probe — once `byob_canary.js` is built — failing within 30s).
- bcu: a `~/.local/bin/background-computer-use.prev` symlink, restored on `/v1/list_apps` canary failure.
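A hedged sketch of the snapshot / rebuild / canary / rollback sequence described above — the commands and restore path come from this document, the structure and the exit-code check are assumptions (the real logic is slated for `scripts/update/run.py`):

```python
# Illustrative snapshot -> rebuild -> canary -> rollback flow for ~/.byob/.
import shutil
import subprocess
from pathlib import Path

BYOB = Path.home() / ".byob"
PREV = Path.home() / ".byob.prev"

def rebuild_byob_with_rollback() -> bool:
    if PREV.exists():
        shutil.rmtree(PREV)
    shutil.copytree(BYOB, PREV, symlinks=True)  # whole-tree snapshot; no single dist/ to copy
    try:
        for cmd in (["git", "pull"], ["bun", "install"], ["bun", "run", "setup"]):
            subprocess.run(cmd, cwd=BYOB, check=True)
        doctor = subprocess.run(["bun", "run", "doctor"], cwd=BYOB, timeout=30)
        if doctor.returncode != 0:  # stand-in for "any red status"
            raise RuntimeError("canary failed")
        return True
    except Exception:
        shutil.rmtree(BYOB, ignore_errors=True)  # rm -rf ~/.byob
        PREV.rename(BYOB)                        # mv ~/.byob.prev ~/.byob
        return False
```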
**`service.py`**

```python
from scripts.update import service

# Get bridge status
status = service.get_service_status(project_dir)
# status.running, status.pid, status.uptime, status.memory_mb

# Install/restart bridge
service.install_service(project_dir)  # Installs bridge + update cron
service.restart_service(project_dir)

# Get worker status
worker = service.get_worker_status(project_dir)
# worker.running, worker.pid, worker.uptime, worker.memory_mb

# Install/restart worker
service.install_worker(project_dir)  # Installs standalone worker service
service.restart_worker(project_dir)
```
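Taken together, a minimal manual pass over these modules might look like the following — an illustrative composition of the documented entry points, not how `run.py` actually sequences its steps:

```python
# Illustrative composition of the documented module entry points.
from pathlib import Path
from scripts.update import deps, git, service, verify

project_dir = Path.home() / "src/ai"

pulled = git.git_pull(project_dir)  # pull with automatic stash handling
if pulled.success:
    synced = deps.sync_dependencies(project_dir, reinstall=False)
    env = verify.verify_environment(project_dir)  # env.python_deps, env.mcp_servers, ... for reporting
    if synced.success:
        service.restart_service(project_dir)
        print(service.get_service_status(project_dir))
```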
Machines that run the do-design-system skill also need Node + npm (for `npx @google/design.md`). `remote-update.sh` runs `npm ci --only=prod` guarded by:
if [ -f "$PROJECT_DIR/package.json" ] && command -v npm >/dev/null 2>&1; then
( set +o pipefail; cd "$PROJECT_DIR" && npm ci --only=prod ) \
|| echo "[update] npm ci failed (non-fatal); continuing"
fi
The non-pipefail subshell + `|| echo` trailer guarantee that a missing npm or a transient install failure never aborts the parent update. Machines without Node simply skip the block silently; design-system tooling then falls back to Python-only emission (`--generate --no-node`) for `design-system.md` / `brand.css` / `source.css`. Lint and DTCG / Tailwind exports still require Node and are only produced on Node-equipped machines. See `docs/features/design-system-tooling.md` for the full fallback semantics.
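On the consuming side, the fallback choice reduces to a Node availability check; a sketch around the documented `--generate --no-node` flags — the CLI name here is a placeholder, not the real tool:

```python
# Pick full Node-backed emission when node/npm exist, otherwise Python-only.
import shutil
import subprocess

def emit_design_system(cli: str = "design-system-tool") -> None:  # hypothetical CLI name
    """Run full Node-backed emission when node/npm exist, else Python-only."""
    if shutil.which("node") and shutil.which("npm"):
        subprocess.run([cli, "--generate"], check=True)  # full pipeline (lint, DTCG / Tailwind exports)
    else:
        subprocess.run([cli, "--generate", "--no-node"], check=True)  # Python-only emission
```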