# setup
| name | setup |
| description | Use when configuring a new machine to run the Valor Telegram bridge. Installs all dependencies, authentication, and service startup. Triggered by 'setup', 'configure this machine', or 'new machine setup'. |
| disable-model-invocation | true |
Configure this machine to run the Valor Telegram bridge. You do everything except the interactive Telegram login step.
PREREQUISITE: Must be on latest main branch before running.
cd ~/src/ai && git checkout main && git pull
Before starting, confirm the user has:
- `ai` repo cloned at `~/src/ai` on the `main` branch with latest changes pulled
- `python` resolves to Python 3.12+

Claude Code hooks invoke bare `python` under `/bin/sh`, which does not honor zsh aliases. macOS does not ship a `python` binary by default -- only `python3`. Without this symlink, every hook that uses `python` silently fails with `command not found`, surfacing errors in the UI and disabling validators (no-raw-redis-delete, plan-section checks, SDLC reminders, etc.).
# Verify python3 is 3.12+
python3 --version
# Create the symlink in a user-writable PATH dir (no sudo)
ln -sf "$(command -v python3)" /opt/homebrew/bin/python
# Confirm /bin/sh resolves it
/bin/sh -c 'python --version' # expected: Python 3.12.x or newer
The update orchestrator (scripts/update/run.py) verifies this via check_python_alias() and fails loudly if missing.
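If you want to reproduce that gate by hand, here is a rough shell equivalent -- a sketch that approximates what `check_python_alias()` verifies, not the orchestrator's actual code:

```bash
# Approximate the orchestrator's python-alias gate from a plain shell
if ! /bin/sh -c 'command -v python' >/dev/null 2>&1; then
  echo "FATAL: 'python' is not resolvable under /bin/sh" >&2; exit 1
fi
case "$(/bin/sh -c 'python --version' 2>&1)" in
  "Python 3.1"[2-9]*) echo "python alias OK" ;;  # crude match for 3.12-3.19
  *) echo "FATAL: need Python 3.12+" >&2; exit 1 ;;
esac
```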
We use uv for fast, reliable Python package management (much faster than pip).
# Check if uv is already installed
if ! command -v uv &> /dev/null; then
echo "Installing uv package manager..."
curl -LsSf https://astral.sh/uv/install.sh | sh
export PATH="$HOME/.local/bin:$PATH"
fi
# Verify installation
uv --version
cd ~/src/ai
# Create the virtual environment with uv
uv venv
# Sync all dependencies including dev tools from pyproject.toml
uv sync --all-extras
# Install package in editable mode (registers CLI tools)
uv pip install -e .
This will:

- Create `.venv/` with Python 3.12 (or latest)
- Sync all dependencies from `pyproject.toml`
- Register the CLI tools (`valor-calendar`, `valor-telegram`)

Verify key imports work:
.venv/bin/python -c "import telethon; import httpx; import dotenv; import anthropic; import google_auth_oauthlib; print('Dependencies OK')"
If this fails, debug before continuing.
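Useful first checks if the import test fails (assumes nothing beyond the uv-managed venv created above):

```bash
# Confirm which interpreter the venv actually uses, then list the suspect packages
.venv/bin/python -c "import sys; print(sys.executable, sys.version)"
uv pip list | grep -Ei 'telethon|httpx|anthropic|google-auth'
```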
Check if .env exists. If not:
cp .env.example .env
Ask the user which project(s) this machine should monitor. The available projects are defined in ~/Desktop/Valor/projects.json -- check the full list there. Common options:
- `ACTIVE_PROJECTS=psyoptimal`
- `ACTIVE_PROJECTS=valor,popoto`
- `ACTIVE_PROJECTS=valor,django-project-template,popoto,psyoptimal,flutter-project-template,cuttlefish,yudame-research`

Edit `.env` and ensure these are set:
| Variable | Required | Notes |
|---|---|---|
| `ACTIVE_PROJECTS` | Yes | Comma-separated project keys |
| `ANTHROPIC_API_KEY` | Yes | Starts with `sk-ant-` |
| `TELEGRAM_API_ID` | Yes | Numeric, from my.telegram.org |
| `TELEGRAM_API_HASH` | Yes | Hex string, from my.telegram.org |
| `TELEGRAM_PHONE` | Yes | With country code, e.g. `+1234567890` |
| `TELEGRAM_PASSWORD` | If 2FA on | Telegram 2FA password |
| `TELEGRAM_SESSION_NAME` | No | Defaults to `valor_bridge` |
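For reference, a filled-in block might look like this (placeholder values only, not real credentials):

```bash
ACTIVE_PROJECTS=valor,popoto
ANTHROPIC_API_KEY=sk-ant-...
TELEGRAM_API_ID=1234567
TELEGRAM_API_HASH=0123456789abcdef0123456789abcdef
TELEGRAM_PHONE=+1234567890
TELEGRAM_SESSION_NAME=valor_bridge
```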
If any required values are placeholder/missing, ask the user to provide them. The shared API keys file at ~/src/.env may have ANTHROPIC_API_KEY and other keys -- check there first.
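A small convenience sketch for that lookup, assuming simple `KEY=value` lines in `~/src/.env`:

```bash
# Copy ANTHROPIC_API_KEY from the shared keys file if .env still lacks a real one.
# If a placeholder ANTHROPIC_API_KEY line remains in .env, delete it afterwards
# so only the real key is read.
if [ -f ~/src/.env ] && ! grep -q '^ANTHROPIC_API_KEY=sk-ant-' .env; then
  grep '^ANTHROPIC_API_KEY=' ~/src/.env >> .env
fi
```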
Set up Google Calendar integration for work time tracking.
Check if credentials exist:
ls ~/Desktop/Valor/google_credentials.json
If missing, ask the user to download their Google OAuth client credentials and place them at `~/Desktop/Valor/google_credentials.json`.

Check if a token already exists:
ls ~/Desktop/Valor/google_token.json 2>/dev/null
If no token exists, run the OAuth flow:
cd ~/src/ai
# This will open browser for Google OAuth consent
.venv/bin/valor-calendar --reauth
The user must complete the OAuth consent in their browser. After completion, verify the token is valid:
.venv/bin/valor-calendar --check
The calendar config is auto-generated by the /update command. For now, just ensure the required Google Calendars exist in Google Calendar with matching names.
After setup, run /update to auto-generate config/calendar_config.json.
The SDK uses Max subscription OAuth via the Claude Desktop app (no API credits needed).
# Check if Claude Desktop app is running (provides OAuth for CLI)
if pgrep -f "Claude.app" > /dev/null; then
echo "Claude Desktop is running (provides subscription auth)"
else
echo "Claude Desktop is not running"
echo "Start /Applications/Claude.app to enable subscription auth"
echo "Without it, the bridge will fall back to API key billing"
fi
# Verify API key exists as fallback
if grep -q 'ANTHROPIC_API_KEY=sk-ant-' .env 2>/dev/null; then
echo "API key configured (fallback if Desktop auth fails)"
else
echo "No API key fallback configured"
fi
How authentication works:

- Primary: Max subscription OAuth inherited from the running Claude Desktop app
- Fallback: the API key in `.env` (`ANTHROPIC_API_KEY`)
- Override: set `USE_API_BILLING=true` in `.env` to force API-key billing

The SDK spawns Claude Code CLI subprocesses that inherit authentication from the running Claude Desktop app. No separate login command is needed.
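A quick way to see which path applies on this machine -- a sketch that just follows the precedence above:

```bash
if grep -q '^USE_API_BILLING=true' .env 2>/dev/null; then
  echo "auth mode: API billing (forced via USE_API_BILLING)"
elif pgrep -f "Claude.app" >/dev/null; then
  echo "auth mode: Desktop subscription OAuth"
elif grep -q '^ANTHROPIC_API_KEY=sk-ant-' .env 2>/dev/null; then
  echo "auth mode: API key fallback"
else
  echo "auth mode: none configured"
fi
```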
sentry-cli is installed automatically by /update. After installation, authenticate:
# Login to Sentry (generates auth token)
sentry-cli login
# Or set token directly in ~/Desktop/Valor/.env
# SENTRY_PERSONAL_TOKEN=sntrys_...
# The SDK automatically injects this as SENTRY_AUTH_TOKEN for PM sessions
The token is stored in ~/Desktop/Valor/.env as SENTRY_PERSONAL_TOKEN and auto-injected into agent sessions by sdk_client.py.
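To confirm the credential end to end (the token-name check follows this doc's convention; `sentry-cli info` reports the CLI's own auth status):

```bash
# Token present in the vault env file?
grep -q '^SENTRY_PERSONAL_TOKEN=sntrys_' ~/Desktop/Valor/.env \
  && echo "Sentry token: present" || echo "Sentry token: missing"
# Does sentry-cli consider itself authenticated?
sentry-cli info
```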
Project configuration lives in ~/Desktop/Valor/projects.json (iCloud-synced, private). This directory is shared across machines via iCloud.
Check if ~/Desktop/Valor/projects.json exists. If not, create from the repo example:
mkdir -p ~/Desktop/Valor
cp config/projects.example.json ~/Desktop/Valor/projects.json
Edit ~/Desktop/Valor/projects.json for this machine's projects.
Critical rules when editing projects.json:
- `working_directory` -- absolute path to the repo on this machine
- `machine` -- the exact ComputerName of the single machine that owns it (`scutil --get ComputerName`). This is the source of truth for ownership; whitelists, groups, and email patterns all inherit from it. Two projects on different machines must never share a Telegram group, email contact, or DM whitelist contact id -- see Single-Machine Ownership.
- `defaults` section -- copy it from the example if missing
- Do not set `respond_to_all: false` -- the default is `true`, which is correct. Omit the field entirely from project-level telegram config.
- A minimal telegram config of `"groups": {"Dev: ProjectName": {"persona": "developer"}}` is sufficient
- Run `ls` on each `working_directory` to confirm it exists
- No per-contact ownership edits. When adding this machine, you do not edit `dms.whitelist`, individual `telegram.groups` entries, or `email.contacts`/`domains` to "exclude" other machines. Just set each project's `machine` field once. The validator (`bridge/config_validation.py`) and the update gate (`scripts/update/run.py` Step 4.6) will enforce that no contact is owned by two machines.
Example minimal project entry:
{
"projects": {
"myproject": {
"name": "My Project",
"working_directory": "~/src/myproject",
"telegram": {
"groups": {
"Dev: My Project": {"persona": "developer"}
}
},
"github": {
"org": "orgname",
"repo": "reponame"
},
"context": {
"tech_stack": ["Python"],
"description": "What the agent should focus on"
}
}
},
"defaults": {
"working_directory": "~/src/ai",
"telegram": {
"respond_to_all": true,
"respond_to_mentions": true,
"respond_to_dms": true,
"mention_triggers": ["@valor", "valor", "hey valor"]
},
"response": {
"typing_indicator": true,
"max_response_length": 4000,
"timeout_seconds": 300
}
}
}
Persona overlay files live in ~/Desktop/Valor/personas/. The loader (agent.sdk_client.load_persona_prompt) prefers the private overlay when present and falls back to the in-repo template (config/personas/<persona>.md) otherwise. Seeding the private overlays from the in-repo defaults at setup time gives the agent identical behavior on every fresh machine without waiting for iCloud propagation from another box.
The PM and developer personas have in-repo templates that are version-controlled and PR-reviewable:
- `config/personas/project-manager.md` -- PM pipeline gate rules (CRITIQUE mandatory, REVIEW mandatory, multi-issue fan-out)
- `config/personas/developer.md` -- Developer SDLC-owner playbook (Mode 3 parallel orchestrator, merge_authorized bypass)

Seed them into the vault if not already present (do NOT overwrite -- existing overlays may carry per-machine customizations):
mkdir -p ~/Desktop/Valor/personas
for persona in project-manager developer; do
src="config/personas/${persona}.md"
dst="$HOME/Desktop/Valor/personas/${persona}.md"
if [ ! -f "$dst" ]; then
cp "$src" "$dst"
echo "Seeded $dst from $src"
else
echo "$dst already exists — leaving in place (run \`diff\` to compare with $src)"
fi
done
The teammate.md overlay is still operator-customized and is not seeded by setup. If it is missing, the in-repo config/personas/teammate.md would be the fallback, but that file is intentionally gitignored, so a teammate-only machine must author its own overlay.
If the machine is already running and you want to inspect drift between the in-repo template and the private overlay:
diff config/personas/project-manager.md ~/Desktop/Valor/personas/project-manager.md
diff config/personas/developer.md ~/Desktop/Valor/personas/developer.md
The persona loader emits a WARNING log line if a known load-bearing substring is missing from the private overlay (e.g., Mode 3 for the developer overlay, CRITIQUE for the PM overlay). Watch logs/bridge.log after the first session for these warnings — they signal that the private overlay has rolled back and should be re-synced.
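A convenient grep for those warnings (the exact message format may differ by version, so treat this as a rough filter):

```bash
grep -i "warning" logs/bridge.log | grep -Ei "persona|overlay" | tail -5
```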
If the project is already defined on another machine's ~/Desktop/Valor/projects.json, copy its entry rather than writing from scratch (iCloud syncs this file across machines).
After editing, verify all working directories exist:
# For each project's working_directory, confirm it exists
ls ~/src/<project_dir>
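To check them all at once, a sketch that assumes the projects.json schema shown in the example above:

```bash
python3 - <<'PY'
import json, os
cfg = json.load(open(os.path.expanduser("~/Desktop/Valor/projects.json")))
for key, proj in cfg.get("projects", {}).items():
    wd = os.path.expanduser(proj.get("working_directory", ""))
    status = "OK" if os.path.isdir(wd) else "MISSING"
    print(f"{key}: {wd} [{status}]")
PY
```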
Check for an existing session:
ls data/*.session 2>/dev/null
If a session file exists: Skip to Step 8.
If no session file exists: The user must complete an interactive login. Tell them:
I've finished all the automated setup. One step requires your input -- the Telegram login sends a verification code to your phone.
Please run this in a terminal:
cd ~/src/ai && source .venv/bin/activate && python scripts/telegram_login.py

Let me know when you're done.
STOP HERE. Do not proceed until the user confirms the login is complete.
After they confirm, verify the session was created:
ls data/*.session
If no session file appeared, something went wrong. Ask the user what happened and help debug.
Install the reflections daily maintenance plist (runs at 6 AM Pacific):
cd ~/src/ai
./scripts/install_reflections.sh
Verify it loaded:
launchctl list | grep com.valor.reflections
If the output shows the com.valor.reflections label, the scheduler is installed. It will run scripts/reflections.py daily at 6 AM, performing log review, session analysis, LLM reflection, and memory consolidation.
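To trigger a run immediately instead of waiting for the 6 AM slot (assumes the plist was installed as a user LaunchAgent; label per the check above):

```bash
launchctl kickstart -k gui/$(id -u)/com.valor.reflections
```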
These two surfaces are operator-opt-in. Skip on non-macOS hosts.
BYOB lets the agent read and act on the user's already-logged-in Chrome via MCP tools (byob_navigate, byob_click, etc.) -- no state.json files in the repo, no per-session re-auth.
# 1. Install bun if not already present
command -v bun >/dev/null || curl -fsSL https://bun.sh/install | bash
# 2. Clone BYOB to ~/.byob/ and check out the pinned commit
PIN=$(python3 -c "import json; print(json.load(open('config/byob_pin.json'))['commit'])")
if [ ! -d ~/.byob ]; then
git clone https://github.com/wxtsky/byob ~/.byob
fi
git -C ~/.byob fetch
git -C ~/.byob checkout "$PIN"
# 3. Build + register the native messaging host
cd ~/.byob && bun install && bun run setup
cd ~/src/ai
# 4. Register the BYOB MCP server in ~/.claude.json (idempotent, self-healing)
python -c "from scripts.update import mcp_byob; r = mcp_byob.verify_byob_mcp(write=True); print(r.message)"
After install, the user must:
1. Load the unpacked extension: chrome://extensions → toggle Developer mode ON (top-right) → click Load unpacked (top-left) → select `~/.byob/packages/extension/output/chrome-mv3/` (the BYOB extension cannot be auto-installed; this is an operator click-through).
2. Quit Chrome completely (⌘Q on macOS -- closing windows is not enough). Reopen Chrome. Chrome only re-reads the Native Messaging config on full restart.

Verify with BYOB's own diagnostic -- this is authoritative across BYOB versions and tells you exactly what's wrong if anything's off:
cd ~/.byob && bun run doctor
Expected output: all green checkmarks, including a live bridge socket at `~/.byob/bridges/<deviceId>.sock`.

If any line is red, the message points at the exact fix. The most common case is "no live bridge -- extension never connected", which means the user hasn't loaded the extension yet, or loaded it into a different Chrome profile than the one being tested.
Note: the IPC socket path is per-device (UUID-keyed under ~/.byob/bridges/), not a fixed ~/.byob/run/byob.sock. The MCP server discovers the socket at startup; callers should never hardcode the path.
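To see what the MCP server will discover (path layout per the note above):

```bash
ls ~/.byob/bridges/*.sock 2>/dev/null || echo "no live bridge sockets yet"
```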
bcu drives Slack, Notes, Telegram Desktop, etc. via the macOS Accessibility API without moving the user's cursor. Prompt the user before installing:
Do you want to enable computer-use (lets the agent drive native macOS apps -- Slack, Notes, etc. -- without moving your cursor)?
On yes:
# Write the opt-in sentinel
mkdir -p ~/.config/valor && touch ~/.config/valor/computer-use-enabled
# Resolve the pinned bcu release
TAG=$(python3 -c "import json; print(json.load(open('config/bcu_pin.json'))['release_tag'])")
# Download + verify SHA + install -- /update handles this on every run too,
# so the SETUP-time fetch is just bootstrap. See scripts/update/run.py.
echo "bcu pinned tag: $TAG"
echo "Run: python scripts/update/run.py --full to fetch + install + permission-prompt."
After install, the user must grant two permissions to `BackgroundComputerUse.app` in System Settings → Privacy & Security; Accessibility is one of them (it is the API bcu drives). These permissions cannot be granted programmatically.
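You can at least deep-link the user to the right pane (macOS settings URL scheme; the toggle itself stays manual):

```bash
open "x-apple.systempreferences:com.apple.preference.security?Privacy_Accessibility"
```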
On no: skip everything. Don't write the sentinel; /update will leave bcu alone.
Ensure the logs directory exists, then start the bridge as a background process:
mkdir -p logs
Start the bridge using the service script:
./scripts/valor-service.sh start
Wait a few seconds, then verify it started:
sleep 4 && tail -20 logs/bridge.log 2>/dev/null
Check for these indicators in the logs:
- `Agent backend: Claude Agent SDK` -- correct backend
- `Active projects: [...]` -- the projects you configured
- `Monitored groups: [...]` -- the Telegram groups
- `Connected to Telegram` -- successful connection

Also verify the process is running:
pgrep -f telegram_bridge.py
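A one-liner that pulls the four indicators out of the log (strings as listed above; exact formatting may drift between versions):

```bash
grep -E "Agent backend|Active projects|Monitored groups|Connected to Telegram" logs/bridge.log | tail -8
```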
Run a comprehensive health check:
cd ~/src/ai
echo "=== System Tools ==="
claude --version
gh --version
git --version
uv --version
echo ""
echo "=== Python Environment ==="
.venv/bin/python --version
.venv/bin/python -c "import telethon; import anthropic; import google_auth_oauthlib; print('Dependencies OK')"
echo ""
echo "=== CLI Tools ==="
.venv/bin/valor-calendar --version 2>/dev/null || echo "valor-calendar: Not found (run 'uv pip install -e .' again)"
.venv/bin/python -m tools.sms_reader.cli recent --limit 1 | grep -q "rowid" && echo "SMS reader: OK" || echo "SMS reader: FAIL"
echo ""
echo "=== Bridge Status ==="
./scripts/valor-service.sh status
Report the final status to the user, including these reminders:

- Run /update to generate the calendar config
- Do not set `respond_to_all: false` in project configs
- Keep the `defaults` section in `projects.json`
- Confirm all `working_directory` paths exist on disk before starting

Troubleshooting: if `uv` or other user-local tools are not found, add `~/.local/bin` to PATH for the current shell and persist it in `~/.zshrc`:

export PATH="$HOME/.local/bin:$PATH"
echo 'export PATH="$HOME/.local/bin:$PATH"' >> ~/.zshrc
If the virtual environment is broken, rebuild it:

rm -rf .venv
uv venv
uv sync --all-extras
Other quick diagnostics and fixes:

ls ~/Desktop/Valor/google_credentials.json
.venv/bin/valor-calendar --reauth
tail -50 logs/bridge.log
ls data/*.session
.venv/bin/python -c "import telethon; print('OK')"