Lookat-camera keyframe animation with createCameraTrack
A bound animCam plays a four-keyframe track; a separate viewCam orbits around the abstraction — eye polyline, gaze rays, per-keyframe mini-camera markers. A live frustum follows playback, and an FBO inset shows what animCam actually sees. Toggle TEX and that inset slides onto the live frustum's near plane — the frustum becomes a window. Two createPanel instances drive the whole thing: one binds checkboxes to UI state, one binds transport controls to the track.
Two cameras, one bound to the track
viewCam = createCamera() // observer — orbitControl drives this
animCam = createCamera() // animated — the track drives this
track = createCameraTrack(animCam)
createCameraTrack(cam) binds the track to a p5.Camera. A predraw hook calls cam.applyPose(track.eval(out)) every frame — no apply() in draw, no guard on track.playing. Crucially, the bound camera does not have to be the active one: the hook keeps writing into animCam regardless, and you choose when to render through it.
previewFbo.begin()
setCamera(animCam) // make the bound camera active for this pass
resetMatrix()
drawScene() // ← what animCam sees, into the FBO
previewFbo.end()
setCamera(viewCam) // back to the observer
orbitControl()
drawScene() // ← the abstraction view
Two cameras, two passes, one shared track.
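The binding itself can be pictured as a tiny predraw hook registry. The sketch below is illustrative plain JS, not the library's internals: `hooks`, `bindTrack`, `runFrame`, and the fake camera are all stand-in names, and `eval` here just lerps the eye along x.

```javascript
// Plain-JS sketch of the binding: a "predraw" hook list that writes the
// evaluated pose into the bound camera every frame, active or not.
const hooks = []

function bindTrack(cam) {
  const track = {
    cursor: 0.25,
    // stand-in for the real eval(): interpolate the eye along x
    eval(out = {}) { out.eye = [-320 + 640 * this.cursor, -100, 0]; return out },
  }
  hooks.push(() => cam.applyPose(track.eval({})))
  return track
}

function runFrame() { for (const h of hooks) h() } // the predraw phase

const animCam = { pose: null, applyPose(p) { this.pose = p } }
const track = bindTrack(animCam)
runFrame()
// animCam.pose now holds the cursor's interpolated pose, no apply() in draw
```

The point of the structure: the hook closes over the camera at bind time, so nothing in the per-frame loop decides which camera gets written.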
Four lookat keyframes
track.add({ eye:[-320, -100, 220], center:[0,0,0], fov: PI / 3, near: 42 })
track.add({ eye:[-100, -100, 260], center:[0,0,0], fov: PI / 4, near: 42 })
track.add({ eye:[ 100, -100, -240], center:[0,0,0], fov: PI / 3.5, near: 42 })
track.add({ eye:[ 320, -100, -220], center:[0,0,0], fov: PI / 3, near: 42 })
Each keyframe stores { eye, center, up?, fov?, halfHeight?, near?, far? }. The eye sweeps left to right; every keyframe targets the origin, so the four CENTER gaze rays converge on the same point — the amber dot at (0, 0, 0). fov (vertical, radians) and near size each keyframe’s marker; halfHeight is the orthographic alternative to fov. Defaults: center=[0,0,0], up=[0,1,0], near=0.1, far=1000. cam.capturePose() reads all of this back from a live camera, so track.add(animCam.capturePose()) records the current viewpoint as a new keyframe.
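The stated defaults amount to a small normalizer. This is a hypothetical helper written only to make the fallback values concrete; it is not a library function, and it passes caller-supplied fields (eye, fov or halfHeight) through untouched.

```javascript
// Hypothetical: fill a keyframe spec with the documented defaults.
function normalizeKeyframe(kf) {
  return {
    center: [0, 0, 0],
    up: [0, 1, 0],
    near: 0.1,
    far: 1000,
    ...kf, // caller-supplied fields (eye, fov or halfHeight, …) win
  }
}

const kf = normalizeKeyframe({ eye: [-320, -100, 220], fov: Math.PI / 3, near: 42 })
// kf.near is 42 (explicit), kf.far is 1000 (default), kf.up is [0, 1, 0]
```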
A live frustum from the track
viewFrustum accepts three forms in its camera parameter — p5.Camera, CameraTrack, or a plain pose spec:
viewFrustum({ camera: animCam }) // static
viewFrustum({ camera: track, bits: NEAR | APEX }) // animated, follows playback
viewFrustum({ camera: { eye:[100,0,0], fov: PI/3 } }) // one-off
Pass the track and viewFrustum reads track.eval() and track.mat4Eye() at the cursor. The frustum is the track’s interpolated state, drawn directly — no scratch buffers, no parallel lookAt reconstruction. Hit play and it slides between the keyframe markers; pause and it freezes wherever track.cursor is. Detection is duck-typed (.eval(out) + .mat4Eye(out) + a keyframes array), so any compatible object animates without further wiring.
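The duck-typing rule reads as a simple predicate. The name `isTrackLike` is illustrative; the real check lives inside viewFrustum, but the shape it tests for is the one stated above.

```javascript
// Sketch of the detection rule: anything with eval(out), mat4Eye(out)
// and a keyframes array is treated as an animatable track.
function isTrackLike(obj) {
  return !!obj
    && typeof obj.eval === 'function'
    && typeof obj.mat4Eye === 'function'
    && Array.isArray(obj.keyframes)
}

const fakeTrack = { eval: o => o, mat4Eye: o => o, keyframes: [] }
isTrackLike(fakeTrack)            // true: compatible object, animates
isTrackLike({ eye: [100, 0, 0] }) // false: plain pose spec, drawn static
```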
The frustum as a window
Toggle TEX and the same call paints previewFbo.color onto the live frustum’s near plane:
viewFrustum({
camera: track,
nearTexture: previewFbo.color,
bits: NEAR | APEX,
})
previewFbo is rendered earlier in the frame through animCam. Painting that attachment onto the near plane via nearTexture ties two views of the same pose together: the four edges and the rendered image both come from the track’s current state. The frustum becomes a literal window onto the playback view, and you can orbit around the back of it with viewCam.
One overlay per bit
The layering rule is one bit per trackPath call, with marker: null so the keyframe pyramids aren’t overdrawn five times:
const { PATH, CENTER, CONTROLS, TANGENTS_IN, TANGENTS_OUT } = p5.Tree
stroke('#e8ecf1') ; trackPath(track, { bits: PATH, marker: null })
stroke('#a0ff6e') ; trackPath(track, { bits: CENTER, marker: null })
stroke('#4a5566') ; trackPath(track, { bits: CONTROLS, marker: null })
stroke('#5cd0ff') ; trackPath(track, { bits: TANGENTS_IN, marker: null })
stroke('#ff6ec7') ; trackPath(track, { bits: TANGENTS_OUT, marker: null })
stroke('#ffd166') ; trackPath(track, { bits: 0 }) // default markers
bits: 0 keeps the per-keyframe mini-camera marker (sized by kf.fov and kf.near) and skips everything else. CONTROLS is the straight chord polygon linear interpolation would produce — toggle it on to see the cubic Hermite default curve away from it. CENTER draws gaze rays from each eye to its center, plus a point() at each center. To draw the center polyline (the path the lookat target traces when it moves) instead of the gaze rays, pass target: 'center' — collapsed to a point in this sketch since every center sits at the origin.
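The "curve away from the chord" behaviour falls straight out of the cubic Hermite basis. A minimal per-component evaluator (a standalone sketch of the math, not the library's sampler; Catmull–Rom supplies the tangents m0, m1):

```javascript
// Cubic Hermite between p0 and p1 with tangents m0, m1, t in [0, 1].
function hermite(p0, m0, p1, m1, t) {
  const t2 = t * t, t3 = t2 * t
  return (2 * t3 - 3 * t2 + 1) * p0   // h00: weight of the start point
       + (t3 - 2 * t2 + t) * m0       // h10: weight of the start tangent
       + (-2 * t3 + 3 * t2) * p1      // h01: weight of the end point
       + (t3 - t2) * m1               // h11: weight of the end tangent
}

hermite(0, 2, 1, 0, 0)   // → 0: hits the first keyframe exactly
hermite(0, 2, 1, 0, 1)   // → 1: hits the second keyframe exactly
hermite(0, 2, 1, 0, 0.5) // → 0.75: off the straight chord's midpoint, 0.5
```

With zero tangents the curve collapses onto the chord, which is exactly what the CONTROLS overlay draws for comparison.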
| Bit | Stroke | Role |
|---|---|---|
| PATH | #e8ecf1 | interpolated eye polyline |
| CENTER | #a0ff6e | gaze rays from each eye to its center |
| CONTROLS | #4a5566 | eye control polygon (straight chords) |
| TANGENTS_IN | #5cd0ff | incoming tangents at each keyframe |
| TANGENTS_OUT | #ff6ec7 | outgoing tangents at each keyframe |
| markers | #ffd166 | per-keyframe mini-camera frustums |
| live frustum | #f0d060 | viewFrustum({ camera: track }) cursor |
Two faces of createPanel
createPanel is a single factory that dispatches on its first argument. Pass a schema — a plain object of named control definitions — and you get a parameter panel. Pass anything with .play (a PoseTrack or CameraTrack) and you get a transport panel. Both appear in this sketch:
// schema → parameter panel — checkboxes drive the ui object
bitsPanel = createPanel({
path: { value: true, label: 'PATH' },
center: { value: true, label: 'CENTER' },
liveTex: { value: false, label: 'TEX' },
// ...
}, {
title: 'trackPath',
target: (name, value) => { ui[name] = value },
})
// track → transport panel — play/pause/seek/rate, drives the bound camera
transportPanel = createPanel(track, {
title: 'camera track',
camera: null, // suppress the + button
reset: false, // suppress the ↺ button
})
Schema mode infers control type from value shape: numbers become sliders, booleans checkboxes, CSS colour strings colour pickers, length-2/3/4 arrays vec sliders, an options array a dropdown. The target is a value sink — a (name, value) function (as here), an object with .set(name, value), or a p5 shader (which gets setUniform calls on every dirty tick — the pattern the post-effects pipeline leans on).
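The shape-to-control mapping reads as a simple dispatch. `inferControl` is a hypothetical helper that mirrors the rules above; the real factory presumably also validates the colour string rather than treating every string as a colour.

```javascript
// Hypothetical: infer a control type from a schema entry's value shape.
function inferControl(entry) {
  if (Array.isArray(entry.options)) return 'dropdown' // options array wins
  const v = entry.value
  if (typeof v === 'number') return 'slider'
  if (typeof v === 'boolean') return 'checkbox'
  if (typeof v === 'string') return 'color'           // CSS colour string
  if (Array.isArray(v) && v.length >= 2 && v.length <= 4) return 'vec'
  return 'unknown'
}

inferControl({ value: true, label: 'PATH' })      // → 'checkbox'
inferControl({ value: 0.5 })                      // → 'slider'
inferControl({ value: '#ffd166' })                // → 'color'
inferControl({ value: [0, 1, 0] })                // → 'vec'
inferControl({ value: 'a', options: ['a', 'b'] }) // → 'dropdown'
```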
Track mode wires standard transport controls — play/pause, seek, rate (signed, so negative reverses), loop, bounce — to the track’s playback state. camera: null suppresses the + button (which would otherwise capture the current pose and add a keyframe); reset: false suppresses the ↺ button. Same factory, same layout options, two completely different consumers — that’s what lets every sketch in this series share one UI vocabulary.
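The transport semantics (signed rate, loop vs bounce) can be sketched as a pure cursor-stepping function. This is illustrative, with a normalized cursor in [0, 1]; the real track's units and option names may differ.

```javascript
// Hypothetical: advance a normalized cursor by rate * dt, then wrap or reflect.
function stepCursor(cursor, rate, dt, { loop = false, bounce = false } = {}) {
  let c = cursor + rate * dt          // negative rate plays in reverse
  if (bounce) {
    // reflect off either end and flip the rate: 1.1 → 0.9, -0.1 → 0.1
    if (c > 1 || c < 0) return { cursor: c > 1 ? 2 - c : -c, rate: -rate }
  } else if (loop) {
    c = ((c % 1) + 1) % 1             // wrap into [0, 1)
  } else {
    c = Math.min(1, Math.max(0, c))   // clamp at the ends
  }
  return { cursor: c, rate }
}

stepCursor(0.9, 1, 0.2, { bounce: true })  // reflects: cursor ≈ 0.9, rate -1
stepCursor(0.9, 1, 0.2, { loop: true })    // wraps: cursor ≈ 0.1
stepCursor(0.1, -1, 0.2, {})               // reverse play clamps at 0
```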
HUD overlays
The lower-right preview inset is a beginHUD() / endHUD() block:
function drawPreviewInset() {
const fw = previewFbo.width, fh = previewFbo.height
const x = width - fw - 10, y = height - fh - 10
beginHUD()
image(previewFbo.color, x, y, fw, fh)
noFill()
stroke('#f0d060')
rect(x, y, fw, fh)
endHUD()
}
Between those two calls the canvas is in screen space: (x, y) ∈ [0, width] × [0, height], origin top-left, y increasing downward — the standard 2D convention. beginHUD clears the depth buffer and installs an orthographic camera sized to the canvas; endHUD restores the 3D camera and depth state. Anything drawn in between sits on top of the 3D scene regardless of where the geometry was, so it’s the natural home for FBO insets, frame counters, status badges, text labels pinned to projected positions via mapLocation(WORLD → SCREEN), and any other overlay that wants pixel coordinates rather than world units.
beginHUD()
text('FPS: ' + frameRate().toFixed(1), 10, 20)
endHUD()
Pair it with mapLocation (covered in post-effects pipeline and elsewhere in the series) to billboard labels next to 3D objects: project the world point to screen space, then draw the label in the HUD block at those pixel coordinates. The depth-clear means a label drawn this way always wins against the 3D scene — no z-fighting with the geometry it’s annotating.
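The WORLD → SCREEN step is an ordinary projective transform. A minimal version, a sketch of the math rather than mapLocation's actual signature, taking a column-major 4×4 view-projection matrix:

```javascript
// Project a world point to pixel coordinates: clip space → NDC → screen.
function worldToScreen(p, viewProj, width, height) {
  const [x, y, z] = p, m = viewProj // column-major, WebGL convention
  const cx = m[0]*x + m[4]*y + m[8]*z  + m[12]
  const cy = m[1]*x + m[5]*y + m[9]*z  + m[13]
  const cw = m[3]*x + m[7]*y + m[11]*z + m[15]
  const nx = cx / cw, ny = cy / cw         // perspective divide → NDC
  return [
    (nx * 0.5 + 0.5) * width,              // NDC x ∈ [-1, 1] → [0, width]
    (1 - (ny * 0.5 + 0.5)) * height,       // flip y: screen y grows downward
  ]
}

// With an identity view-projection, the world origin lands at screen centre.
const I = [1,0,0,0, 0,1,0,0, 0,0,1,0, 0,0,0,1]
worldToScreen([0, 0, 0], I, 800, 600) // → [400, 300]
```

Draw the label in the HUD block at the returned pixel coordinates and it tracks the 3D object it annotates.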
Versus PoseTrack
// PoseTrack — you apply the pose yourself
track.eval(out); push(); applyPose(out); box(60); pop()
// CameraTrack — the predraw hook applies the pose to the bound camera
const track = createCameraTrack(animCam)
PoseTrack returns a TRS and lets you compose it anywhere, which is why one track can drive any number of objects. CameraTrack writes directly into its bound camera, because driving a camera without overwriting its matrices isn’t really driving it. The two-camera pattern in this sketch is the design space that opens up because of that coupling.
References
- p5.tree README — CameraTrack, viewFrustum, trackPath, createPanel
- @nakednous/tree — pure numeric core: path sampling, eval, mat4Eye, mat4Persp
- @nakednous/ui — DOM panel layer that createPanel wraps; zero p5 dependency
- p5.Camera reference — camera(), perspective(), ortho(), setPosition(), lookAt()
- p5.Framebuffer reference — createFramebuffer, begin/end, the .color attachment
- Post-effects pipeline — the same createPanel factory wired to shader uniforms via target: shader
- LookAt matrix derivation — why eye/center/up is sufficient orientation data
- Centripetal Catmull–Rom spline — the tangent-free Hermite default
- View frustum — what the keyframe markers and live frustum are drawing
- glTF animation channels — the keyframe-and-interpolator model