A depth-of-field blur effect where the focal plane is driven by a live world-space position. The scene is rendered into a
p5.Framebuffer, then a p5.strands DOF pass is applied via pipe(). The magenta sphere is the focal target — its screen-space z is recomputed every frame with mapLocation() and fed directly into the shader, so the blur follows the sphere continuously. A first-person directional light tracks the viewer using mapDirection().
The shader as a strands callback
The DOF pass is authored as a p5.strands callback on baseFilterShader().modify(). There is no raw GLSL string — the algorithm is expressed in JavaScript using the strands DSL, which compiles it to WebGL2 under the hood.
```js
function dofCallback() {
  const depthTex = uniformTexture(() => layer.depth)
  const focus = uniformFloat(() => focusVal)          // closure — recomputed each frame
  const blurIntensity = uniformFloat('blurIntensity') // UI-owned via target panel
  const getBlurriness = (d) => abs(d - focus) * 40 * blurIntensity
  const maxBlurDist = (b) => b * 0.01
  getColor((inputs, canvasContent) => {
    // spiral sample — blur more the farther a fragment sits from the focal plane
    // ...
  })
}
dofFilter = baseFilterShader().modify(dofCallback)
```
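Stripped of the strands types, the blur-amount expressions are ordinary arithmetic. A plain-JavaScript transcription (my sketch for intuition only; the real expressions compile to GPU code, and strands' abs becomes Math.abs here):

```javascript
// Blur grows linearly with a fragment's distance from the focal
// plane, scaled by the UI-controlled intensity.
const getBlurriness = (d, focus, blurIntensity) =>
  Math.abs(d - focus) * 40 * blurIntensity

// The blurriness value caps the spiral-sample offset distance.
const maxBlurDist = (b) => b * 0.01

// A fragment exactly on the focal plane stays sharp:
console.log(getBlurriness(0.5, 0.5, 1.0))  // 0
// A fragment 0.25 depth units away blurs with radius 10:
console.log(getBlurriness(0.75, 0.5, 1.0)) // 10
```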
Two kinds of uniform binding appear in the same callback: closures (() => focusVal, () => layer.depth) for values computed per-frame on the CPU, and string names ('blurIntensity') for values owned by a createPanel target — the panel calls setUniform for those automatically.
Focal depth via mapLocation
mapLocation converts any point between coordinate spaces. Here the magenta sphere’s world-space position is projected to screen space, and its z component gives a depth value in the same range the depth buffer uses — so the shader can compare it directly with sampled depth texels.
```js
// zero-alloc: _loc is a Float32Array(3) allocated once in setup
mapLocation(models[0].position, { out: _loc, from: p5.Tree.WORLD, to: p5.Tree.SCREEN })
focusVal = _loc[2]
```
Because focusVal is a plain global, the strands closure () => focusVal picks up the updated value every frame without any manual uniform call in draw().
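That behavior is ordinary JavaScript closure semantics: uniformFloat stores the function, not a value, so each invocation reads whatever the variable holds at that moment. A standalone illustration:

```javascript
// The stored binding re-reads the global on every call, so assigning
// to focusVal in draw() is the only "upload" step needed.
let focusVal = 0.25
const binding = () => focusVal // what a closure-style uniform holds

console.log(binding()) // 0.25
focusVal = 0.62        // e.g. refreshed by mapLocation in draw()
console.log(binding()) // 0.62
```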
First-person directional light via mapDirection
First-person lighting aligns the light direction with the camera’s forward axis. mapDirection converts the camera’s −z axis (p5.Tree._k) from eye space to world space, which is what directionalLight expects.
```js
mapDirection(p5.Tree._k, { out: _dir, from: p5.Tree.EYE, to: p5.Tree.WORLD })
directionalLight(255, 255, 255, _dir[0], _dir[1], _dir[2])
```
_dir is a Float32Array(3) allocated once — zero heap allocation per frame. Each of the following calls is equivalent for the default case:
```js
mapDirection(p5.Tree._k, { from: p5.Tree.EYE, to: p5.Tree.WORLD })
mapDirection({ from: p5.Tree.EYE, to: p5.Tree.WORLD })
mapDirection() // defaults: _k, EYE → WORLD
```
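The out-parameter convention used above is a general zero-allocation pattern: results are written into a preallocated array instead of a freshly returned one. A minimal sketch of the pattern, using a hypothetical normalizeInto helper (not part of any library here):

```javascript
// Allocated once, e.g. in setup(); reused every frame.
const _dir = new Float32Array(3)

// Hypothetical helper: writes a normalized copy of v into out and
// returns out, so the caller allocates nothing per call.
function normalizeInto(v, out) {
  const len = Math.hypot(v[0], v[1], v[2])
  out[0] = v[0] / len
  out[1] = v[1] / len
  out[2] = v[2] / len
  return out
}

const result = normalizeInto([3, 0, 4], _dir)
console.log(result === _dir) // true: same array, no new allocation
// _dir now holds approximately [0.6, 0, 0.8] (float32 precision)
```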
Post-processing with pipe
pipe chains filter shaders over a source framebuffer, reusing internal ping/pong buffers — no allocation per frame. The simplest usage:
```js
layer.begin()
// render scene
layer.end()
pipe(layer, [dofFilter]) // display: true by default
```
For multiple independent pipelines, a string key option separates their buffer caches. releasePipe() frees them when no longer needed.
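The buffer reuse behind this is the standard ping/pong pattern: two buffers allocated once, with each pass reading one and writing the other before the roles swap. A minimal sketch of that pattern (my illustration, not pipe's actual internals):

```javascript
// Two buffers allocated up front; swap() flips which one is "read"
// and which is "write", so no allocation happens per frame.
function makePingPong(size) {
  const bufs = [new Float32Array(size), new Float32Array(size)]
  let readIdx = 0
  return {
    get read() { return bufs[readIdx] },
    get write() { return bufs[1 - readIdx] },
    swap() { readIdx = 1 - readIdx },
  }
}

const pp = makePingPong(4)
pp.write.fill(1) // pass 1 writes its output
pp.swap()        // that output becomes pass 2's input
console.log(pp.read[0])           // 1
console.log(pp.read !== pp.write) // true
```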
References
- Depth of field in p5.js — Dave Pagurek
- p5.strands tutorial — Luke Plowden
- p5.tree