# webxr-dev
// WebXR/VR development guide for Three.js on Meta Quest — camera rig, controllers, teleport, postprocessing limitations, and hard-won pitfalls from shipping VR
| name | webxr-dev |
|---|---|
| description | WebXR/VR development guide for Three.js on Meta Quest — camera rig, controllers, teleport, postprocessing limitations, and hard-won pitfalls from shipping VR |
A practical guide for building WebXR (VR/AR) experiences with Three.js, based on hard-won lessons from shipping VR on Meta Quest headsets.
Invoke with /webxr-dev when building or debugging WebXR (VR/AR) experiences with Three.js on Quest headsets.
## Session mode: `immersive-vr` vs `immersive-ar`

Choosing the session mode is the single most important decision in your WebXR app.
Background and clear-color behavior differs by mode:

- In `immersive-vr`, `scene.background` and `setClearColor()` work normally.
- In `immersive-ar`, Three.js skips `scene.background` rendering intentionally.
- `setClearColor(color, 1)` may be overridden by the Three.js XR path.
- Custom shaders that write `gl_FragColor = vec4(color, 1.0)` are fine, but `MeshBasicMaterial`/`MeshStandardMaterial` may not be.

Start with `immersive-vr`. Only switch to `immersive-ar` when you have:
1. A specific passthrough use case
2. The ability to test on-device
3. A plan for managing alpha in every material and shader
Never default to immersive-ar just because the device supports it. A Quest 3 supports both — always prefer immersive-vr unless passthrough is the primary experience.
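As a sketch of that app-level decision (the `pickSessionMode` helper and the `passthroughIsPrimary` flag are illustrative names, not part of any API; `XRLike` is a structural stand-in for the browser's `navigator.xr` so the sketch runs anywhere):

```typescript
// Structural stand-in for navigator.xr (the WebXR XRSystem).
type XRLike = { isSessionSupported(mode: string): Promise<boolean> };

// App-level mode choice: prefer immersive-vr unless passthrough is the
// primary experience AND immersive-ar is actually supported.
async function pickSessionMode(
  xr: XRLike,
  passthroughIsPrimary: boolean,
): Promise<"immersive-vr" | "immersive-ar" | null> {
  if (passthroughIsPrimary && (await xr.isSessionSupported("immersive-ar"))) {
    return "immersive-ar";
  }
  if (await xr.isSessionSupported("immersive-vr")) {
    return "immersive-vr";
  }
  return null; // no immersive session available
}
```

On a Quest 3, where both modes are supported, this returns `immersive-vr` unless the app explicitly opts into passthrough.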
```typescript
const xr = navigator.xr;
const supported = await xr.isSessionSupported("immersive-vr");
if (!supported) return;

const sessionInit: XRSessionInit = {
  optionalFeatures: [
    'local-floor',   // Floor-level reference space
    'bounded-floor', // Room-scale boundary
    'hand-tracking', // Quest hand tracking
    'layers',        // XRProjectionLayer + composition layers
    'hit-test',      // Ray-vs-real-world hit testing (AR)
  ],
};

const session = await xr.requestSession("immersive-vr", sessionInit);
renderer.xr.setReferenceSpaceType("local-floor");
renderer.xr.setSession(session);
```
Only these strings are recognized in optionalFeatures / requiredFeatures:
`anchors`, `bounded-floor`, `depth-sensing`, `dom-overlay`, `hand-tracking`, `hit-test`, `layers`, `light-estimation`, `local`, `local-floor`, `secondary-views`, `unbounded`, `viewer`.
Unrecognized strings are silently ignored — they won't cause an error but they won't do anything either.
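Because of that silent-ignore behavior, a small development-time guard can catch typos and non-existent strings like `'passthrough'` before they are quietly dropped. A minimal sketch (the `unknownFeatures` helper is an illustrative name, not a WebXR API):

```typescript
// Known WebXR feature descriptors, per the list above. Anything else is
// silently ignored by the browser, so flag it during development.
const KNOWN_FEATURES = new Set([
  "anchors", "bounded-floor", "depth-sensing", "dom-overlay", "hand-tracking",
  "hit-test", "layers", "light-estimation", "local", "local-floor",
  "secondary-views", "unbounded", "viewer",
]);

// Returns the subset of requested strings the spec does not recognize.
function unknownFeatures(requested: string[]): string[] {
  return requested.filter((f) => !KNOWN_FEATURES.has(f));
}

// 'passthrough' is NOT a real feature string; this catches that mistake early.
unknownFeatures(["local-floor", "passthrough", "hand-tracking"]); // → ["passthrough"]
```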
Critical: The default WebXR framebuffer is low-resolution. You MUST create an XRWebGLLayer with native scale factor to avoid pixelated rendering:
```typescript
const onSessionStarted = (session: XRSession) => {
  session.addEventListener("end", onSessionEnded);
  renderer.xr.setReferenceSpaceType("local-floor");

  // Set high-res framebuffer — without this, everything looks pixelated
  const gl = renderer.getContext();
  const glLayer = new XRWebGLLayer(session, gl, {
    framebufferScaleFactor: XRWebGLLayer.getNativeFramebufferScaleFactor(session),
    antialias: true,
  });
  session.updateRenderState({ baseLayer: glLayer });

  renderer.xr.setSession(session);
};
```
Without this, Quest renders at a fraction of native resolution. This single change eliminates most "it looks blurry/pixelated" complaints.
Passthrough requires immersive-ar, not immersive-vr. There is no 'passthrough' optional feature — that string is not part of the WebXR spec.
To enable passthrough:
```typescript
// Request an AR session — the device composites your render over the camera feed
const session = await xr.requestSession("immersive-ar", {
  optionalFeatures: ['local-floor', 'hand-tracking'],
});
// session.environmentBlendMode will be "alpha-blend" on Quest
```
In immersive-ar, framebuffer alpha controls camera blending: alpha=0 shows full passthrough, alpha=1 shows your render. You cannot switch between VR and AR mid-session — the mode is fixed at request time.
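A toy per-channel model of what `"alpha-blend"` compositing does (purely illustrative; the real compositor runs on the device, and `composite` is a made-up name):

```typescript
// out = render * alpha + cameraFeed * (1 - alpha), per color channel.
// alpha = 1: only your render. alpha = 0: pure passthrough camera.
function composite(renderChannel: number, cameraChannel: number, alpha: number): number {
  return renderChannel * alpha + cameraChannel * (1 - alpha);
}

composite(1.0, 0.5, 1.0); // 1.0: an opaque pixel fully hides the camera feed
composite(1.0, 0.5, 0.0); // 0.5: a fully transparent pixel shows only passthrough
```

This is why every material and shader needs a deliberate alpha in `immersive-ar`: any pixel you leave at low alpha becomes a window onto the room.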
For custom shader skyboxes (ray marching, procedural stars, etc.), use a standard sphere with DoubleSide:
```typescript
const sphereGeo = new THREE.IcosahedronGeometry(50, 5);
// Do NOT call sphereGeo.scale(-1, 1, 1) — see "Geometry Inversion Trap" below
const material = new THREE.ShaderMaterial({
  vertexShader, fragmentShader,
  side: THREE.DoubleSide, // Guarantees visibility from inside — no culling surprises
  depthWrite: false,
});
const vrSphere = new THREE.Mesh(sphereGeo, material);
vrSphere.frustumCulled = false;
```
Never combine `sphereGeo.scale(-1, 1, 1)` with `side: THREE.BackSide`:

- `scale(-1,1,1)` flips the winding order AND the normals (normals now point inward).
- `BackSide` renders the face opposite to the normal direction.
- The two effects double-negate: `BackSide` ends up rendering the outside — a camera inside the sphere sees nothing.

Safe options for camera-inside-sphere rendering:

- `DoubleSide` (recommended): renders both faces and eliminates all culling ambiguity. The shader determines what to draw via ray direction, so double-rendering has zero visual cost.
- `BackSide`: works in theory (outward normals, `BackSide` renders the inner faces), but winding order can be ambiguous across geometry types.
- `FrontSide`: also works (paired with inverted geometry) but is equally fragile.

Use `DoubleSide` and stop worrying about it.
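The interaction can be captured in a toy truth-table model (purely illustrative; `visibleFromInside` is not a Three.js API, just a way to sanity-check the logic):

```typescript
type Side = "FrontSide" | "BackSide" | "DoubleSide";

// Model: from inside a normal (non-inverted) sphere the GPU classifies the
// surface as a back face; scale(-1,1,1) flips the winding, making it a
// front face. Each side mode keeps only the matching classification.
function visibleFromInside(geometryInverted: boolean, side: Side): boolean {
  const faceSeen: "front" | "back" = geometryInverted ? "front" : "back";
  if (side === "DoubleSide") return true;
  return side === "BackSide" ? faceSeen === "back" : faceSeen === "front";
}

visibleFromInside(true, "BackSide");    // false: the trap described above
visibleFromInside(false, "DoubleSide"); // true: always safe
```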
In VR, the camera is at the user's physical head position (~1.6 m above the floor). This affects both scale perception and object placement.
Objects that look fine on desktop often feel enormous in VR because you have real spatial perception. Always reduce the effective size of objects for VR. Example: a black hole with mass=1.5 (rs=1.5m) works on desktop but in VR it's a room-sized sphere. Reduce to mass=0.2 for a manageable, dramatic effect. Test iteratively — there's no substitute for in-headset scale perception.
The VR vertex shader must pass per-eye camera position for correct stereo:
```glsl
varying vec3 vCameraPos;

void main() {
  vCameraPos = cameraPosition; // Three.js sets this per-eye in VR
  // ...
}
```
Do NOT hardcode camera position — each eye renders from a different offset.
## The `postprocessing` library does NOT support WebXR

The `postprocessing` npm package (EffectComposer, BloomEffect, etc.) has zero XR/stereo awareness. It cannot render to the XR framebuffer.
In the render loop, VR mode must bypass the composer:
```typescript
if (renderer.xr.isPresenting) {
  renderer.render(scene, camera); // Direct render — no composer
} else {
  composer.render(delta); // Desktop gets postprocessing
}
```
Consequence: Any screen-space effects (bloom, distortion, tone mapping, chromatic aberration) are completely invisible in VR.
Replace screen-space postprocessing with 3D scene objects:
| Desktop Effect | VR Replacement |
|---|---|
| Bloom | Emissive materials, MeshBasicMaterial with bright colors, additive blending |
| Screen distortion | Expanding shockwave ring meshes (RingGeometry + DoubleSide + fade) |
| Color grading | Adjust material colors directly |
| Glow | Larger, brighter MeshBasicMaterial spheres with transparency |
Example — merger collision effect:
```typescript
// Glow sphere: scale up + color shift at peak
this.glowMaterial.opacity = glowIntensity * 0.9;
this.mergerGlow.scale.setScalar(1 + glowIntensity * 5);

// Purple → white shift
this.glowMaterial.color.setRGB(
  0.39 + glowIntensity * 0.61,
  0.40 + glowIntensity * 0.60,
  0.95 + glowIntensity * 0.05
);

// Expanding shockwave ring
const shockProgress = Math.max(0, (playbackTime - mergerNorm) * 6);
if (shockProgress > 0 && shockProgress < 1) {
  shockwaveMaterial.opacity = (1 - shockProgress) * 0.7;
  shockwaveRing.scale.setScalar(1 + shockProgress * 15);
} else {
  shockwaveMaterial.opacity = 0;
}
```
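The shockwave timing above can be factored into a pure function for clarity (a sketch; `shockwaveState` is an illustrative name, and the constants 6, 0.7, and 15 mirror the snippet):

```typescript
// Maps normalized playback time to ring opacity and scale. Outside the
// (0, 1) progress window the ring is invisible at rest scale.
function shockwaveState(playbackTime: number, mergerNorm: number) {
  const progress = Math.max(0, (playbackTime - mergerNorm) * 6);
  if (progress > 0 && progress < 1) {
    return { opacity: (1 - progress) * 0.7, scale: 1 + progress * 15 };
  }
  return { opacity: 0, scale: 1 };
}
```

Keeping effect timing in pure functions like this makes it easy to unit-test outside the headset, which matters when every in-VR iteration is expensive.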
All movable objects (camera, controllers, teleport target) live under a single cameraRig group:
```
scene
└── cameraRig (THREE.Group)
    ├── camera (PerspectiveCamera)
    ├── controller1
    ├── controller2
    └── teleportTarget
```
Moving the rig moves everything together. The XR system updates the camera's local transform from headset tracking; the rig provides the world offset.
Any child of cameraRig receives positions in rig-local space. When raycasting against world-space objects (ground plane, scene objects), you MUST convert the hit point before assigning to a rig child:
```typescript
// WRONG — world hit assigned as local position → offset by rig transform
teleportTarget.position.copy(worldHitPoint);

// CORRECT — convert to rig-local space first
teleportTarget.position.copy(worldHitPoint);
cameraRig.worldToLocal(teleportTarget.position);
```
This applies to teleport markers, aim reticles, and any other rig child positioned from a world-space raycast.
Without worldToLocal(), the marker appears offset from the actual ray intersection — the offset grows as the rig moves further from origin.
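The failure mode is visible with plain vector math, no three.js required. For a translation-only rig, `worldToLocal` reduces to subtracting the rig position (a sketch; the positions are made-up numbers):

```typescript
type Vec3 = { x: number; y: number; z: number };

const add = (a: Vec3, b: Vec3): Vec3 => ({ x: a.x + b.x, y: a.y + b.y, z: a.z + b.z });
const sub = (a: Vec3, b: Vec3): Vec3 => ({ x: a.x - b.x, y: a.y - b.y, z: a.z - b.z });

const rigPosition: Vec3 = { x: 10, y: 0, z: 0 }; // rig moved here by teleporting
const worldHit: Vec3 = { x: 3, y: 0, z: -5 };    // raycast hit in world space

// WRONG: treating the world hit as a local position puts the child at
// rig + worldHit = (13, 0, -5) in world space, 10 m off target.
const wrongWorld = add(rigPosition, worldHit);

// CORRECT: convert to rig-local first (world minus rig position here),
// so the child lands back at the actual hit point (3, 0, -5).
const localHit = sub(worldHit, rigPosition);
const correctWorld = add(rigPosition, localHit);
```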
Left controller:

```
axes[2]    = strafe (left/right)
axes[3]    = forward/back
buttons[1] = grip (hold for fly mode)
buttons[3] = thumbstick press (menu toggle)
buttons[4] = X button
```

Right controller:

```
axes[2]    = horizontal turn (left/right)
axes[3]    = vertical fly (up/down)
buttons[1] = grip
buttons[3] = thumbstick press
```
```typescript
if (leftGrip) {
  // Fly mode: full 3D movement following head direction —
  // forward/right keep the Y component from the camera quaternion
} else {
  // Ground mode: project to the XZ plane
  forward.y = 0; forward.normalize();
  right.y = 0; right.normalize();
}

// Horizontal: rotate the rig (yaw)
if (Math.abs(rx) > DEAD_ZONE) {
  cameraRig.rotateY(-rx * TURN_SPEED * dt);
}

// Vertical: move up/down (NOT pitch rotation — see below)
if (Math.abs(ry) > DEAD_ZONE) {
  cameraRig.position.y += -ry * MOVE_SPEED * 0.5 * dt;
}
```
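The dead-zone check can be pulled into a reusable helper. This is a sketch: `applyDeadZone` and the 0.15 threshold are illustrative choices, and the remapping (one common approach, not the only one) keeps speed from jumping as the stick crosses the threshold:

```typescript
const DEAD_ZONE = 0.15;

// Returns 0 inside the dead zone, then ramps smoothly from 0 at the
// dead-zone edge up to ±1 at full stick deflection.
function applyDeadZone(axis: number, deadZone = DEAD_ZONE): number {
  if (Math.abs(axis) < deadZone) return 0;
  const sign = Math.sign(axis);
  return sign * (Math.abs(axis) - deadZone) / (1 - deadZone);
}

applyDeadZone(0.05); // 0: stick drift inside the dead zone is ignored
applyDeadZone(1.0);  // 1: full deflection still maps to full speed
```

Without the remap, a bare `Math.abs(axis) > DEAD_ZONE` gate makes movement lurch from zero to 15% speed the instant the stick leaves the dead zone.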
Do not use cameraRig.rotateX() for vertical look. The XR headset controls camera orientation via head tracking. Rotating the rig on X conflicts with the headset's own pitch — the rotation is applied but immediately composed with the headset pose, creating disorienting or invisible results.
Teleport checklist:

- Reset `cameraRig.position.y = 0` for ground-level teleport.
- Convert the hit point with `worldToLocal()` before assigning it.

Movement speed: the default 2 m/s feels sluggish in large scenes; 5 m/s is a good baseline for exploration. Vertical fly at half speed (2.5 m/s) feels natural.
Use a textured plane mesh that floats in world space. Render UI to a canvas, apply as texture.
```typescript
class VRPanel {
  mesh: THREE.Mesh; // Plane with CanvasTexture
  canvas: HTMLCanvasElement;
  ctx: CanvasRenderingContext2D;

  // Position in front of the camera, facing it
  positionInFront(camera: THREE.Camera, distance: number) {
    const forward = new THREE.Vector3(0, 0, -1).applyQuaternion(camera.quaternion);
    this.mesh.position.copy(camera.position).addScaledVector(forward, distance);
    this.mesh.lookAt(camera.position);
  }
}
```
- Set a high `renderOrder` to ensure the panel is always visible.

On-device testing is non-negotiable. Desktop browser testing tells you nothing about in-headset scale perception, stereo rendering, controller input, or real Quest GPU performance.
To debug on-device:

- Connect the Quest over USB (or wireless adb) and open `chrome://inspect/#devices` in desktop Chrome.
- For wireless debugging: `adb tcpip 5555`, then `adb connect <quest-ip>:5555`.

Note: the Developer menu location changes across Quest firmware versions. If no Developer option exists, you need to register at developer.meta.com first (free), enable developer mode via the phone app, and reboot the Quest.
Do not iterate blindly without on-device testing. Four rounds of blind fixes cost more than setting up remote debugging once.
See the Critical Rule above. This single mistake cascades into dozens of rendering issues.
In VR, the camera is at head height. Objects at origin are at your feet. Position objects at eye level for the best experience.
Three.js intentionally skips background rendering in AR sessions. You must fill the background yourself (opaque sky sphere, shader output with alpha=1).
scale(-1,1,1) + BackSide double-negates face culling. Use DoubleSide instead. See "Geometry Inversion Trap" above.
Materials with transparent: true render differently in AR vs VR. In VR, transparency is visual only. In AR, low alpha = passthrough camera.
Large skybox spheres may be culled by the VR camera's frustum. Always set frustumCulled = false on skybox meshes.
If you swap geometries (e.g., large sphere ↔ small sphere), always .dispose() the old one to prevent GPU memory leaks.
Without explicitly creating an XRWebGLLayer with getNativeFramebufferScaleFactor(), the Quest renders at a fraction of native resolution. Everything looks pixelated. Always set native scale factor.
The postprocessing library doesn't support WebXR. Bloom, distortion, and tone mapping are skipped when renderer.xr.isPresenting. Use 3D scene objects instead.
If the teleport target or reticle is a child of cameraRig, world-space hit points must be converted to local space via cameraRig.worldToLocal(). Otherwise the marker is offset by the rig's position.
cameraRig.rotateX() conflicts with headset tracking. Use vertical position movement (fly up/down) instead. Only yaw (rotateY) works reliably on the rig.
Don't let individual scenes choose session mode. The XR manager should request the session, and scenes should adapt to what they get.
```typescript
// Good: XR manager decides
const mode = "immersive-vr"; // App-level decision

// Bad: Scene tries to change session type
if (userWantsPassthrough) requestSession("immersive-ar"); // Can't switch mid-session
```

```typescript
interface Scene {
  supportsXR?: boolean;
  init(ctx: SceneContext): Promise<void>;
  update(dt: number, elapsed: number): void;
  dispose(): void;
}
```
Scenes should:

- Declare XR support via `supportsXR`.
- Check `renderer.xr.isPresenting` in the update loop.

If you need passthrough later, implement it as a separate `immersive-ar` entry point with a complete alpha management strategy tested on-device. There is no way to enable passthrough in an `immersive-vr` session. Never bolt passthrough onto a working VR app without dedicated on-device testing.