# testing-hriv
| Field | Value |
|---|---|
| name | testing-hriv |
| description | End-to-end testing guide for the HRIV app including local stack setup, seed data, auth, UI navigation, metadata operations, admin export/import, image upload, image replacement, and tile sidecar routing. |
End-to-end testing guide for the HRIV app: local stack bring-up, seed data, auth,
UI navigation, metadata operations, admin export/import, and image upload. For
domain-specific flows see the sibling skills testing-image-processing
(tile pipeline / pyvips) and testing-backup-service (disaster recovery).
Create `backend/.env` if it doesn't exist (docker-compose references it), then start the stack:

```sh
touch backend/.env
docker compose up -d --build
```
Services: frontend (Vite on :5173), backend (FastAPI on :8000), db (PostgreSQL), redis, worker (arq), seed. Wait ~10 s for the db to seed.

docker-compose.yml defines a worker service, so `docker compose up -d` starts it automatically. If you're on an older checkout without that service, start the worker manually, or image-processing jobs will be enqueued to Redis without ever being processed:

```sh
docker compose exec -d backend arq app.worker.WorkerSettings
```
If the frontend Docker build fails with `npm ci` errors about packages missing from the lock file, delete the stale frontend/package-lock.json (it is in .gitignore but may exist locally from a prior `npm install`) and rebuild:

```sh
rm -f frontend/package-lock.json
docker compose up -d --build frontend
```
Bind mounts give hot reload for most source edits. For Dockerfile, dependency, or nginx config changes, rebuild the specific service:

```sh
docker compose up -d --build frontend   # or backend, worker, etc.
```
No external credentials are needed for local testing: seed users are created automatically.
Backup-service S3/Azure testing needs credentials; see testing-backup-service.
All seed users share the password `password`.

| Email | Role | canEditContent | canManageUsers |
|---|---|---|---|
| admin@bcit.ca | admin | Yes | Yes |
| instructor@bcit.ca | instructor | Yes | No |
| student@bcit.ca | student | No | No |
| ID | Name | Category | Source |
|---|---|---|---|
| 1 | Duomo di Milano | Italian | OpenSeadragon examples |
| 2 | Duomo di Milano (Gothic Detail) | Gothic | OpenSeadragon examples |
| 3 | Highsmith Panorama | American | Library of Congress |
| 4 | Library of Congress | Panoramas | Library of Congress |
Log in as the seed admin and call the API with the bearer token:

```sh
TOKEN=$(curl -s -X POST http://localhost:8000/api/auth/login \
  -H 'Content-Type: application/json' \
  -d '{"email":"admin@bcit.ca","password":"password"}' \
  | python3 -c "import sys,json; print(json.load(sys.stdin)['access_token'])")
curl -H "Authorization: Bearer $TOKEN" http://localhost:8000/api/images/1
```
Category management notes:
- + icons on any row open a "New Category" dialog; the new category is auto-selected (and is reflected in the /api/categories/tree query).

The backend returns 409 Conflict when creating or renaming a category to a name that already exists among its siblings (same parent_id). The frontend dialogs (AddCategoryDialog, EditCategoryDialog) show an inline red Alert and keep the dialog open for retry.
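The sibling-scoped uniqueness rule can be sketched in a few lines. This is a minimal in-memory model for building test expectations, not the backend's actual code; the `categories` list and field names are illustrative:

```python
def is_sibling_duplicate(categories, parent_id, name):
    """True if a category named `name` already exists under `parent_id`.

    Mirrors the rule described above: names must be unique among siblings
    (same parent_id), but the same name may appear elsewhere in the tree.
    """
    return any(
        c["parent_id"] == parent_id and c["name"] == name
        for c in categories
    )

# Illustrative tree matching the testing flow below
categories = [
    {"id": 1, "parent_id": None, "name": "Architecture"},
    {"id": 2, "parent_id": None, "name": "Panoramas"},
    {"id": 3, "parent_id": 1, "name": "American"},
]

assert is_sibling_duplicate(categories, None, "Architecture")   # would 409
assert not is_sibling_duplicate(categories, 2, "American")      # allowed
```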
Key behaviors to verify:
- Uniqueness is scoped to siblings (same parent_id), not global.

Testing flow (Manage > Categories):
- + next to "Root level" → type an existing root name (e.g. "Architecture") → Create → expect error
- + next to a different parent (e.g. Panoramas) → type a name that exists elsewhere (e.g. "American") → Create → expect success

Viewer toolbar buttons sit at the bottom-left of the viewer, left to right:
| # | Icon | Function |
|---|---|---|
| 1 | + | Zoom in |
| 2 | – | Zoom out |
| 3 | House | Home (reset view) |
| 4 | Arrows | Fullscreen toggle |
| 5 | CCW arrow | Rotate left |
| 6 | CW arrow | Rotate right |
| 7 | Diagonal arrow | Selection tool (draw rectangles) |
| 8 | Padlock | Lock / unlock overlays |
| 9 | X | Clear overlays |
| 10 | Pencil | Canvas annotation edit |
Warning: Fullscreen (4) is adjacent to the selection tool (7) and easy to hit accidentally. Press Escape to exit fullscreen.
When testing viewer stability after metadata edits, watch the URL — zoom=, x=,
y= params should remain unchanged if the viewport was preserved.
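A script can assert viewport preservation by comparing those query params before and after an edit. A small sketch (the URL shape here is illustrative; check the actual viewer route):

```python
from urllib.parse import urlparse, parse_qs

def viewport_params(url: str) -> dict:
    """Pull the viewer's zoom/x/y query params out of a URL."""
    qs = parse_qs(urlparse(url).query)
    return {k: qs.get(k, [None])[0] for k in ("zoom", "x", "y")}

# In a real test, capture page.url before and after the metadata edit
before = viewport_params("http://localhost:5173/image/1?zoom=2.5&x=0.41&y=0.33")
after = viewport_params("http://localhost:5173/image/1?zoom=2.5&x=0.41&y=0.33")
assert before == after, "viewport changed after metadata edit"
```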
The viewer displays a real-time magnification badge (NX) in the bottom-left
corner of the navigator mini-map (the mini-map itself is in the bottom-right
of the viewer). The badge updates on every zoom animation frame.
| Condition | Display |
|---|---|
| No measurement settings on image | Raw image-zoom ratio (e.g. <1X, 1X, 4X) |
| Measurement scale + unit configured | Real-world magnification (e.g. 155X, 2117X) |
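The badge's display rule (integer labels, with sub-1X values shown as <1X rather than 0X) can be modeled as follows. This is an illustrative helper for computing expected labels in tests, not the frontend's actual code; the rounding to the nearest integer is an assumption based on the example values:

```python
def magnification_label(value: float) -> str:
    """Format a magnification ratio the way the badge displays it."""
    if value < 1:
        return "<1X"          # the badge never shows 0X
    return f"{round(value)}X"

assert magnification_label(0.37) == "<1X"
assert magnification_label(1.0) == "1X"
assert magnification_label(155.4) == "155X"
```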
Seed images have no measurement settings by default, so the badge shows <1X at home zoom. To test measurement-aware magnification, open Edit Details and set Scale = 8, Unit = um (8 pixels per micrometre); the badge then shows real-world magnification (e.g. 155X at home zoom).

Expected values:
- Without measurement settings: <1X (image is smaller than viewport)
- With measurement settings: 155X (depends on image dimensions and viewport)
- Sub-1X values display as <1X instead of 0X

Implementation notes:
- The badge has pointerEvents: none — it should never block clicks on the navigator
- It is appended to viewer.navigator.element, NOT added via viewer.addControl()
- It updates on animation and animation-finish events (matches the repositionLabels pattern)

Images use version-based optimistic concurrency; PATCH requires If-Match:
```sh
VERSION=$(curl -s -H "Authorization: Bearer $TOKEN" http://localhost:8000/api/images/1 \
  | python3 -c "import sys,json; print(json.load(sys.stdin)['version'])")
curl -X PATCH http://localhost:8000/api/images/1 \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -H "If-Match: $VERSION" \
  -d '{"name": "New Name"}'
```

Always re-fetch the version before each PATCH or you'll get 409 Conflict.
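The If-Match contract can be simulated offline when writing test assertions. A toy model of the server-side check (illustrative, not the backend's code):

```python
def patch_with_if_match(record: dict, if_match: int, changes: dict):
    """Apply a PATCH only when the client's If-Match matches the stored version."""
    if if_match != record["version"]:
        return 409, record                 # stale version: conflict, nothing applied
    record = {**record, **changes}
    record["version"] += 1                 # every successful PATCH bumps the version
    return 200, record

image = {"id": 1, "name": "Duomo di Milano", "version": 3}
status, image = patch_with_if_match(image, 3, {"name": "New Name"})
assert status == 200 and image["version"] == 4

# Re-using the old version now fails, which is why you re-fetch before each PATCH
status, _ = patch_with_if_match(image, 3, {"name": "Another Name"})
assert status == 409
```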
`metadata_extra_merge` patches individual keys in metadata_extra without overwriting the rest; this is how the frontend updates locked overlays and measurement settings independently:

```sh
# Add / update a key
curl -X PATCH http://localhost:8000/api/images/1 \
  -H "Authorization: Bearer $TOKEN" -H "Content-Type: application/json" \
  -H "If-Match: $VERSION" \
  -d '{"metadata_extra_merge": {"locked_overlays": [{"x":0.1,"y":0.2,"w":0.3,"h":0.4}]}}'

# Remove a key by setting it to null
curl -X PATCH http://localhost:8000/api/images/1 \
  -H "Authorization: Bearer $TOKEN" -H "Content-Type: application/json" \
  -H "If-Match: $VERSION" \
  -d '{"metadata_extra_merge": {"locked_overlays": null}}'
```
metadata_extra and metadata_extra_merge are mutually exclusive — sending both
in one request returns 422.
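The merge semantics follow the usual JSON-merge pattern (null deletes, anything else overwrites just that key) and can be reproduced locally for computing expected results. An illustrative reimplementation, not the backend's code:

```python
def metadata_extra_merge(existing, patch):
    """Merge `patch` into `existing`: null values delete keys, others overwrite."""
    merged = dict(existing or {})
    for key, value in patch.items():
        if value is None:
            merged.pop(key, None)    # null removes the key entirely
        else:
            merged[key] = value      # other values replace only that key
    return merged

extra = {"measurement": {"scale": 8, "unit": "um"}}
extra = metadata_extra_merge(
    extra, {"locked_overlays": [{"x": 0.1, "y": 0.2, "w": 0.3, "h": 0.4}]}
)
assert "measurement" in extra and "locked_overlays" in extra   # sibling untouched

extra = metadata_extra_merge(extra, {"locked_overlays": None})
assert "locked_overlays" not in extra
```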
locked_overlays entries are validated by OverlayRectSchema — each must have
numeric x, y, w, h. Malformed entries are silently filtered on both
backend and frontend.
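That filtering rule can be reproduced when building test fixtures. An illustrative sketch (the real validation lives in OverlayRectSchema; this only mirrors the "numeric x, y, w, h" requirement described above):

```python
def is_valid_overlay(entry) -> bool:
    """An overlay entry must be a dict with numeric x, y, w, h (bools excluded)."""
    if not isinstance(entry, dict):
        return False
    return all(
        isinstance(entry.get(k), (int, float)) and not isinstance(entry.get(k), bool)
        for k in ("x", "y", "w", "h")
    )

raw = [
    {"x": 0.1, "y": 0.2, "w": 0.3, "h": 0.4},   # valid
    {"garbage": True},                           # missing fields -> filtered
    {"x": "str", "y": 0, "w": 0, "h": 0},        # non-numeric x -> filtered
]
assert [e for e in raw if is_valid_overlay(e)] == [
    {"x": 0.1, "y": 0.2, "w": 0.3, "h": 0.4}
]
```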
To exercise frontend handling of malformed metadata, inject it directly:

```sh
docker exec hriv-db-1 psql -U hriv -d hriv -c \
  "UPDATE images SET metadata = jsonb_set(COALESCE(metadata,'{}'), '{locked_overlays}', \
  '[{\"x\":0.1,\"y\":0.2,\"w\":0.3,\"h\":0.4},{\"garbage\":true},{\"x\":\"str\",\"y\":0,\"w\":0,\"h\":0}]') \
  WHERE id=2"
```

Then open that image in the browser to verify graceful handling.
Default seed data is too small to exercise cancellation. Generate ~1 GB of incompressible data inside the backend container:

```sh
docker exec hriv-backend-1 python3 -c "
import os, random
for d in range(20):
    path = f'/data/tiles/large_test/dir_{d}'
    os.makedirs(path, exist_ok=True)
    for f in range(500):
        with open(f'{path}/file_{f}.bin', 'wb') as fh:
            fh.write(random.randbytes(102400))
"
```
Archives are stored at /data/admin_tasks/ inside the backend container:

```sh
docker exec hriv-backend-1 find /data/admin_tasks -name "*.tar.gz" -type f
docker exec hriv-backend-1 tar -tzf /data/admin_tasks/<filename>.tar.gz | head -20

# admin_tasks/ must be excluded from archives (no re-archiving of past exports):
docker exec hriv-backend-1 tar -tzf /data/admin_tasks/<filename>.tar.gz | grep admin_tasks
# (should return nothing)
```
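The exclusion check can also be scripted with Python's tarfile module. A standalone sketch that builds its own throwaway tree (paths and the helper are illustrative, not the backend's implementation):

```python
import os
import pathlib
import tarfile
import tempfile

def make_export(data_dir: str, out_path: str) -> None:
    """Archive data_dir, skipping any admin_tasks/ subtree (no re-archived exports)."""
    def skip_admin_tasks(tarinfo: tarfile.TarInfo):
        parts = pathlib.PurePosixPath(tarinfo.name).parts
        return None if "admin_tasks" in parts else tarinfo
    with tarfile.open(out_path, "w:gz") as tar:
        tar.add(data_dir, arcname=".", filter=skip_admin_tasks)

with tempfile.TemporaryDirectory() as tmp:
    os.makedirs(f"{tmp}/data/tiles/1", exist_ok=True)
    os.makedirs(f"{tmp}/data/admin_tasks", exist_ok=True)
    pathlib.Path(f"{tmp}/data/tiles/1/0_0.jpeg").write_bytes(b"tile")
    pathlib.Path(f"{tmp}/data/admin_tasks/old_export.tar.gz").write_bytes(b"old")

    make_export(f"{tmp}/data", f"{tmp}/export.tar.gz")
    with tarfile.open(f"{tmp}/export.tar.gz") as tar:
        names = tar.getnames()
    assert any("tiles" in n for n in names)
    assert not any("admin_tasks" in n for n in names)
```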
Implementation notes (useful when debugging cancellation):
- The archive job runs in asyncio.to_thread; a concurrent coroutine polls for cancellation every 2 s.
- Cancellation is signalled to the worker thread via a threading.Event.
- Progress updates are pushed through a queue.Queue and flushed every 2 s.

To drive the upload UI from Playwright over CDP:

```python
import asyncio
from playwright.async_api import async_playwright

async def upload_via_chooser():
    async with async_playwright() as p:
        browser = await p.chromium.connect_over_cdp("http://localhost:29229")
        page = [pg for ctx in browser.contexts for pg in ctx.pages
                if "localhost:5173" in pg.url][0]
        async with page.expect_file_chooser() as fc_info:
            await page.click('text=browse to upload')
        fc = await fc_info.value
        await fc.set_files('/path/to/image.jpg')

asyncio.run(upload_via_chooser())
```
The snackbar auto-dismisses after 6 s — use Playwright wait_for to catch the link
deterministically. For deeper image-processing tests (progress flush timing,
synthetic large images, pyvips eval signals) see testing-image-processing.
The Edit Details modal supports one-to-one image replacement with a two-step
confirmation flow. This replaces the image file, regenerates tiles and thumbnails,
and clears canvas metadata (locked_overlays, canvas_annotations).
Generate synthetic test images of varying sizes:

```sh
# Small JPEG for quick tests
python3 -c "import numpy as np; from PIL import Image; Image.fromarray(np.random.randint(0,255,(600,800,3),dtype=np.uint8)).save('/tmp/test_replacement.jpg', quality=85)"

# Large PNG for processing-time tests
python3 -c "import numpy as np; from PIL import Image; Image.fromarray(np.random.randint(0,255,(2000,2000,3),dtype=np.uint8)).save('/tmp/test_replacement_large.png')"
```
Alternatively, generate a test image directly in the browser console (avoids needing PIL/numpy and the native file picker):

```js
const canvas = document.createElement('canvas');
canvas.width = 4000; canvas.height = 3000;
const ctx = canvas.getContext('2d');
for (let y = 0; y < 3000; y += 10)
  for (let x = 0; x < 4000; x += 10) {
    ctx.fillStyle = `rgb(${(x*y)%256},${(x+y)%256},${(x^y)%256})`;
    ctx.fillRect(x, y, 10, 10);
  }
canvas.toBlob(blob => {
  const file = new File([blob], 'test_image.jpg', {type: 'image/jpeg'});
  const dt = new DataTransfer(); dt.items.add(file);
  const input = document.querySelector('input[type="file"]');
  input.files = dt.files;
  input.dispatchEvent(new Event('change', {bubbles: true}));
}, 'image/jpeg', 0.98);
```
"Replacing this image will delete the current image file, all tiles, and any canvas annotations and overlays. This cannot be undone."
Since the file input is hidden, use Playwright over CDP to set the file directly rather than interacting with the OS file picker:

```python
import asyncio
from playwright.async_api import async_playwright

async def inject_file():
    async with async_playwright() as p:
        browser = await p.chromium.connect_over_cdp("http://localhost:29229")
        context = browser.contexts[0]
        page = context.pages[0]
        file_input = page.locator('input[type="file"]')
        await file_input.set_input_files('/tmp/test_replacement.jpg')

asyncio.run(inject_file())
```

This sets the hidden `<input type="file">` without needing the native file chooser dialog. The modal must be open before running this.
After the processing snackbar disappears, verify via API:

```sh
curl -s -H "Authorization: Bearer $TOKEN" http://localhost:8000/api/images/1 | python3 -m json.tool
```
Key assertions:
- tile_sources changed from an external URL to /api/tiles/<id>/image.dzi
- thumb changed to /api/tiles/<id>/thumbnail.jpeg
- width and height match the replacement image dimensions
- file_size is populated
- metadata_extra is {} (canvas metadata cleared)
- name, category_id, copyright, note, program_ids are preserved
- version has incremented

The frontend performs two separate API calls for replacement:
1. A metadata PATCH (apiUpdateImage) that updates form fields
2. A file upload (apiReplaceImage) that uploads the new file

If the file upload fails after the metadata PATCH succeeds, metadata changes are committed but the file remains unchanged. This is a known trade-off; see issue #271 for discussion of potential atomic replacement approaches.
The in-modal upload progress bar cannot be visually observed on localhost. XHR
upload.onprogress tracks bytes written to the OS TCP send buffer, not bytes received
by the server. On loopback, the kernel's TCP buffers (128KB–4MB) absorb the entire
file instantly — progress jumps 0→100% before the 500ms React re-render tick fires.
Approaches that do NOT work on localhost:
- tc qdisc on port 8000 — wrong port (XHR tracks browser→Vite on 5173)
- tc qdisc on port 5173, IPv4 only — Chrome uses IPv6 ::1 and bypasses the filter
- tc qdisc on port 5173, IPv4+IPv6 — throttles ALL traffic, including the metadata PATCH
- Client-side throttling — send() completes instantly
- Network.emulateNetworkConditions — not available on a browser-level WebSocket

To test the progress bar, deploy to a real network environment where upload latency is non-trivial, or use a remote server accessible over a WAN link.
The Helm chart nginx config (charts/frontend/files/default.conf.template) has
client_max_body_size 0 (unlimited) for upload endpoints and 10MB for other
/api/ routes. The replace endpoint pattern images/\d+/replace must be in the
unlimited list, or large replacements will fail with 413. This doesn't affect
docker-compose testing (Vite dev proxy has no body limit).
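A sketch of the intended shape of that nginx config (assumed layout; the proxy_pass target and exact location syntax are illustrative, so check the actual template before relying on this):

```nginx
# Upload endpoints: unlimited body size
location ~ ^/api/images/\d+/replace$ {
    client_max_body_size 0;
    proxy_pass http://backend:8000;
}

# Other API routes: 10 MB cap (large replacements here would fail with 413)
location /api/ {
    client_max_body_size 10m;
    proxy_pass http://backend:8000;
}
```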
Chrome is reachable over CDP at http://localhost:29229. Use it for Playwright scripts (p.chromium.connect_over_cdp(...)) when native computer-use is awkward (file uploads, flaky timing, snackbars).

Launch Chrome from /opt/.devin/chrome/chrome/linux-*/chrome-linux64/chrome with --user-data-dir=/home/ubuntu/.browser_data_dir to keep profile state. The google-chrome wrapper requires the CDP proxy.

To maximize the browser window:

```sh
wmctrl -r :ACTIVE: -b add,maximized_vert,maximized_horz
```
Install wmctrl first if needed (sudo apt-get install -y wmctrl). Keyboard
shortcuts like Super+Up only tile to half-screen on some window managers.