p5.tree is a render pipeline layer for p5.js v2 — pose and camera interpolation, coordinate-space conversions between WORLD / EYE / SCREEN / NDC, frustum visibility, HUD, multi-pass post-processing, picking, and declarative control panels. The demo below exercises all of it at once.
Under the hood it’s three independent packages: a renderer-agnostic numeric core (@nakednous/tree — math, spaces, keyframes, visibility), a lightweight DOM layer (@nakednous/ui — sliders, transport), and a p5.js v2 bridge that wires them to the canvas. The dependency direction is strict and one-way — the core knows nothing about p5 or the DOM — which is what lets the same keyframe interpolation that drives a camera path also animate any object, and lets the whole stack run headless, or in a future renderer, without touching the math.
Post-processing pipeline
The effects in the demo are authored as p5.strands callbacks on top of baseFilterShader() — pixelation and value noise follow the patterns covered in Luke Plowden’s p5.strands tutorial and Dave Pagurek’s writing shaders in JavaScript. The depth-of-field pass is where p5.tree comes in: it pulls layer.depth from the framebuffer and drives the focal plane from a live world-space position — closing the loop between the PoseTrack animating the sphere and the shader blurring the scene around it:
function dofCallback() {
  const depthTex = uniformTexture(() => layer.depth)
  const focus = uniformFloat(() => focusVal)        // updated each frame via mapLocation()
  const dofIntensity = uniformFloat('dofIntensity') // UI-owned — pushed by target panel
  const getBlurriness = (d) => abs(d - focus) * 40 * dofIntensity
  const maxBlurDistance = (b) => b * 0.01
  getColor((inputs, canvasContent) => {
    let colour = getTexture(canvasContent, inputs.texCoord)
    let samples = 1
    const centerDepth = getTexture(depthTex, inputs.texCoord).r
    const dofriness = getBlurriness(centerDepth)
    for (let i = 0; i < 20; i++) {
      const angle = float(i) * TWO_PI / 20
      const dofDistance = float(i) / 20 * maxBlurDistance(dofriness)
      const offset = [cos(angle), sin(angle)] * dofDistance
      const sampleDepth = getTexture(depthTex, inputs.texCoord + offset).r
      if (sampleDepth >= centerDepth ||
          maxBlurDistance(getBlurriness(sampleDepth)) >= dofDistance) {
        colour += getTexture(canvasContent, inputs.texCoord + offset)
        samples++
      }
    }
    colour /= float(samples)
    return [colour.rgb, 1]
  })
}
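For intuition, the sampling loop above walks a spiral of 20 offsets around the center pixel — evenly spaced angles, with the radius growing out to the maximum blur distance. A plain-JS sketch of those offsets outside the shader (function name hypothetical, not part of the library):

```javascript
// Generate the n spiral sample offsets the DOF loop visits:
// angle advances one full turn, radius grows linearly to maxDist.
function dofOffsets(n, maxDist) {
  const out = []
  for (let i = 0; i < n; i++) {
    const angle = i * 2 * Math.PI / n        // evenly spaced angles
    const dist = (i / n) * maxDist           // radius grows with i
    out.push([Math.cos(angle) * dist, Math.sin(angle) * dist])
  }
  return out
}
```

The first sample lands on the center pixel itself (distance 0), which is why `samples` starts at 1 in the shader.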
focusVal is the screen-space z of the sphere, recomputed every frame via mapLocation() (see Space transformations). The sphere itself moves along a PoseTrack path — so the focal plane shifts continuously, driven entirely by animation state, with no manual uniform updates in draw().
Ordering and toggling passes is handled by pipe(). The canonical draw structure is:
layer.begin()
// render 3D scene into `layer`
layer.end()
pipe(layer, fxList()) // fxList() filters and reorders enabled passes
pipe() reuses internal ping-pong buffers — no allocations per frame. When multiple independent pipelines are needed, a string cache key separates them; releasePipe() clears them.
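A plausible shape for fxList() — the demo's exact version isn't shown, so the toggle object and pass names here are assumptions, and pass *names* stand in for the shader objects for illustration:

```javascript
// Keep a fixed canonical order and return only the enabled passes,
// so pipe() composes them left to right.
const fx = { pixelate: false, noise: true, dof: true }  // checkbox state
const passOrder = ['pixelate', 'noise', 'dof']          // canonical order
function fxList() {
  return passOrder.filter(name => fx[name])
}
```

The real fxList() would map each enabled name to its shader before handing the array to pipe().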
Uniform UI
createPanel() manages shader uniforms and optional DOM controls through a single schema. Set target to a shader and the panel calls setUniform() every frame automatically — schema keys are matched to uniform names, nothing in draw():
uiNoise = createPanel({
  frequency: { min: 0, max: 10, value: 3, step: 0.1 },
  amplitude: { min: 0, max: 1, value: 0.3, step: 0.01 },
  speed: { min: 0, max: 1, value: 0.3, step: 0.01 }
}, { target: noiseFilter, x: rX, y: 10, width: 160,
    labels: true, title: 'Noise', color: 'white', offset: 0 })
Inside the strands callback, uniformFloat('frequency') declares the slot by string name — the UI matches it. Closures appear only for values that aren’t UI-driven. The DOF callback shows both in the same function: dofIntensity is UI-owned (string name, matched by target), while focusVal and layer.depth are computed each frame from space transforms and the framebuffer:
const depthTex = uniformTexture(() => layer.depth) // computed — framebuffer
const focus = uniformFloat(() => focusVal) // computed — from mapLocation()
const dofIntensity = uniformFloat('dofIntensity') // UI-owned — pushed by target
For boolean state like effect toggles, the schema accepts { value: true } — rendering a checkbox. The target option, x/y positioning, and title are all part of the same trailing options object, following the library-wide single-options-parameter convention.
PoseTrack
TRS keyframe interpolation for any object. createPoseTrack() with no argument returns a pose track that accepts { pos, rot, scl } keyframes; rot can be a raw quaternion, an axis-angle pair, or a look-direction object — the parser normalizes all forms.
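For intuition on what that normalization might look like, here is one plausible conversion of the axis-angle form into a raw quaternion [x, y, z, w] — the function name and component order are illustrative assumptions, not the library's API:

```javascript
// Convert an axis-angle rotation to a unit quaternion [x, y, z, w]:
// normalize the axis, scale by sin(angle/2), and set w = cos(angle/2).
function axisAngleToQuat([ax, ay, az], angle) {
  const len = Math.hypot(ax, ay, az) || 1  // guard a zero-length axis
  const s = Math.sin(angle / 2) / len
  return [ax * s, ay * s, az * s, Math.cos(angle / 2)]
}
```

A half-turn about +Z, for example, maps to [0, 0, 1, 0] up to floating-point error.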
The demo drives the magenta focus sphere along a looping path with a pose track, so the DOF focal plane shifts continuously without user input. The track is precomputed in setup():
sphereTrack = createPoseTrack()
const _Q = [0, 0, 0, 1]
const _S = [1, 1, 1]
const _kf0 = { pos: [ 0, 50, -180], rot: _Q, scl: _S }
sphereTrack.add(_kf0)
sphereTrack.add({ pos: [ 140, -30, -50], rot: _Q, scl: _S })
// ... additional waypoints
sphereTrack.add(_kf0) // closes path — last segment curves back to start
sphereTrack.play({ loop: true, duration: 90 })
In draw(), evaluation writes into a pre-allocated scratch object — zero heap allocation per frame:
// _sph = { pos: [0,0,0], rot: [0,0,0,1], scl: [1,1,1] } — allocated once
sphereTrack.eval(_sph)
models[0].position.set(_sph.pos[0], _sph.pos[1], _sph.pos[2])
The out-parameter contract — eval(scratch) fills a user-owned buffer — is the standard hot-path pattern throughout the library.
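The contract can be illustrated with a toy evaluator (not the library's internals): the caller owns the buffer, the evaluator only writes into it.

```javascript
// Fill a caller-owned buffer with a lerp between two positions —
// no allocation inside the hot path.
function lerpInto(out, a, b, t) {
  for (let i = 0; i < out.length; i++) out[i] = a[i] + (b[i] - a[i]) * t
  return out
}
const _pos = new Float32Array(3)  // allocated once, reused every frame
lerpInto(_pos, [0, 50, -180], [140, -30, -50], 0.5)  // _pos → 70, 10, -115
```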
Camera keyframes
CameraTrack is a specialization of the same track system for { eye, center, up, fov, halfHeight } lookat keyframes. Each field is interpolated independently using the same pure numeric core: eye and center follow centripetal Catmull-Rom paths, up is nlerped on the unit sphere, fov and halfHeight are lerped when both adjacent keyframes carry a non-null value for that field.
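The up-vector rule above — nlerp on the unit sphere — is just lerp-then-renormalize; a minimal sketch:

```javascript
// Normalized lerp between two unit vectors: componentwise lerp,
// then rescale the result back onto the unit sphere.
function nlerp(a, b, t) {
  const v = a.map((ai, i) => ai + (b[i] - ai) * t)
  const len = Math.hypot(...v) || 1
  return v.map(c => c / len)
}
```

Halfway between world-up and world-right this yields [√½, √½, 0] — cheaper than slerp and adequate for up vectors, which only need to stay unit-length and roughly continuous.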
createCameraTrack(cam) binds a camera and returns a track; createPanel() mounts the full transport panel — record, play, seek, reset, and props — wired to it:
camTrack = createCameraTrack(sceneCam)
uiCamTrack = createPanel(camTrack, {
  x: 10, y: camTrackY, width: 170,
  loop: false, rate: 1,
  seek: true, props: true, info: true,
  color: 'white'
})
Playback supports duration (frames per segment), signed rate (negative reverses), and looping. Interpolation modes are configurable per field:
camTrack.eyeInterp = 'hermite'   // smooth camera flight
camTrack.centerInterp = 'linear' // stable lookat target
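One way the playhead bookkeeping for signed rate and looping could work (a sketch under assumed names, not the library's code): advance a frame counter by the signed rate, then either clamp at the ends or wrap modulo the track length.

```javascript
// Advance a playhead t by a signed rate over a track of given length.
// Looping wraps in both directions; otherwise clamp to [0, length].
function advance(t, rate, length, loop) {
  t += rate                                          // negative rate reverses
  if (!loop) return Math.min(Math.max(t, 0), length) // clamp at the ends
  return ((t % length) + length) % length            // wrap, even when negative
}
```

The double-modulo keeps a reversed, looping playhead positive — plain `%` in JavaScript returns negative values for negative operands.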
fov (radians) animates perspective field of view; halfHeight (world units) animates the vertical extent of an ortho frustum — both are captured automatically by track.add(). The bridge reads the current projection at capture time and calls perspective() or ortho() at apply time, so projection type is never hardcoded into the keyframes.
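The per-field null guard described above — scalars interpolate only when both adjacent keyframes carry a value — can be sketched as (illustrative helper, not the library's API):

```javascript
// Lerp a scalar keyframe field, but hold whichever side has a value
// when the other neighbor's field is null/undefined.
function lerpField(a, b, t) {
  if (a == null || b == null) return a ?? b
  return a + (b - a) * t
}
```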
Space transformations
mapLocation() and mapDirection() convert points and vectors between any pair of coordinate spaces: WORLD, EYE, SCREEN, NDC, MODEL. In the demo, the primary use is computing the DOF focal depth — a world-space position projected to screen space each frame:
mapLocation(models[0].position, { out: _loc, from: p5.Tree.WORLD, to: p5.Tree.SCREEN })
focusVal = _loc[2] // screen-space z → DOF focus plane
_loc is a Float32Array(3) allocated once in setup; the function fills it in place. For lighting, mapDirection() converts the camera’s forward axis from eye space to world space so a directional light always faces the scene from the viewer’s perspective:
mapDirection(p5.Tree._k, { out: _dir, from: p5.Tree.EYE, to: p5.Tree.WORLD })
directionalLight(255, 255, 255, _dir[0], _dir[1], _dir[2])
When many conversions run per frame, the underlying matrix can be cached and passed explicitly via an ipvMatrix option — compute it once, pass it everywhere.
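The SCREEN space is essentially a viewport mapping of NDC. A hedged sketch of that final step — assuming a top-left origin and depth remapped to [0, 1], which may differ from the library's exact convention:

```javascript
// Map an NDC point ([-1,1]^3) to screen pixels for a w×h viewport,
// flipping y so the origin sits at the top-left.
function ndcToScreen([x, y, z], w, h) {
  return [
    (x * 0.5 + 0.5) * w,       // [-1,1] → [0,w]
    (1 - (y * 0.5 + 0.5)) * h, // y flipped: top-left origin
    z * 0.5 + 0.5              // depth → [0,1]
  ]
}
```

A point at the NDC origin lands at the viewport center with depth 0.5 — the kind of screen-space z that feeds focusVal above.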
HUD
beginHUD() / endHUD() switch the canvas to an orthographic screen-space mode — origin at the top-left, y increasing downward — independent of any active camera or 3D transforms. The demo uses this to composite the post-processed scene before rendering the UI panels on top, which are DOM elements positioned over the canvas. The pattern:
// after pipe() — scene is already composited
beginHUD()
// draw labels, overlays, insets at pixel coordinates
endHUD()
A related use is stamping a framebuffer inset as a picture-in-picture, common in visibility and culling demos: render an overview scene into a separate p5.Framebuffer, then image() it inside a beginHUD() / endHUD() block at a fixed corner position.
References
- p5.js v2 — Renderer3D · Framebuffer · Shader · Strands
- Three.js · Babylon.js
- github.com/VisualComputing/p5.tree
- 🎓 Meza M — Exploración de WebGL: Gráficos 3D en la Web (WebGL Exploration: 3D Graphics on the Web) · mauriciomeza.github.io/WebGL-Tests
- 🎓 Bulla D — Post-effects study · visualcomputing.github.io/posteffects
- 🎓 Chaparro S (2021) — Método de cinemática inversa en tiempo real basado en FABRIK (Real-time inverse kinematics method based on FABRIK) · MSc Thesis · Universidad Nacional de Colombia
- Charalambos JP (2025) — nub: A Rendering and Interaction Library for Visual Computing in Processing · Journal of Open Research Software · doi:10.5334/jors.477
- WebGL 2.0 Specification · Khronos Group
- WebGPU Specification · W3C Candidate Draft