Tree-like affine transformation hierarchies are at the core of many tasks in rendering, interaction, and computer vision—from view frustum & occlusion culling and collision detection to motion retargeting and post-WIMP interfaces. Our recent publication, nub: A Rendering and Interaction Library for Visual Computing in Processing, introduces a functional and declarative API, built around a dataflow-based architecture that integrates rendering and event-driven interaction through a simple yet powerful scene graph model.
Built on top of Processing’s 2D/3D environment, nub offers a lightweight and expressive foundation for education, research, and experimentation in visual computing. It supports hierarchical rendering, multi-view scenes, view-based interaction, and extensible workflows for interactive content. This post presents an overview of its architecture and capabilities, and outlines future work extending it to the web through p5.treegl, with research directions focused on scene graphs, picking, gesture-based control, post-effects, and AI-assisted visual computing.
nub Overview
nub models a scene as a hierarchy of Nodes, each with local affine transformations and optional visual hints. Rendering is handled by one or more Scenes, each coupling a PGraphics context with an eye node (the camera). This separation of content and view enables multi-view setups, offscreen rendering, and advanced effects like shadow mapping.
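As a minimal sketch of this model (class and method names follow nub's documented examples; exact signatures may vary across library versions), a two-node hierarchy can be declared and rendered as follows:

```java
// Processing sketch; assumes the nub library is installed.
import nub.core.*;
import nub.processing.*;

Scene scene;
Node torso, arm;

void setup() {
  size(800, 600, P3D);
  scene = new Scene(this);               // onscreen scene; its eye node is the camera
  torso = new Node();                    // attached to the world root
  torso.setShape(createShape(BOX, 50));
  arm = new Node(torso);                 // child node: inherits torso's transform
  arm.setShape(createShape(SPHERE, 20));
  arm.translate(60, 0, 0);               // local affine transformation, relative to torso
}

void draw() {
  background(0);
  scene.render();                        // traverses and draws the node hierarchy
}
```

Because transformations compose down the tree, moving torso carries arm along with it.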
Interaction is decoupled and event-driven: gestures (mouse, touch, etc.) are dispatched through the Scene, either by tag or by node reference, allowing selective manipulation of nodes or views. Built-in utilities support spatial queries and visibility tests, enabling interaction logic to scale across complex scenes.
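In practice, dispatch reduces to a few calls inside Processing's event handlers. The sketch below follows the pattern in nub's published examples; treat the gesture method names as assumptions if your library version differs:

```java
void mouseMoved() {
  scene.tag();                     // tag the node under the pointer, if any
}

void mouseDragged() {
  if (mouseButton == LEFT)
    scene.spin();                  // rotate the tagged node, or the eye if none
  else if (mouseButton == RIGHT)
    scene.shift();                 // translate the tagged node or the eye
  else
    scene.zoom(mouseX - pmouseX);  // zoom along the viewing direction
}
```

Because gestures target whatever node is currently tagged, the same handler code manipulates either scene content or the view itself.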
Figure 1 shows an example of a nub application: a node tree representing a human skeleton is rendered from three different viewpoints, each managed by an independent scene with its own camera and rendering context.

Figure 1: Node tree (left) and multi-view skeleton rendering (right).
Opportunities in Visual Computing with p5.treegl
(Call for MSc and PhD Candidates)
The next phase of this project brings nub’s scene-graph architecture and interaction model into the p5.treegl library to extend its utility for web-based visual computing. We envision two main research lines:
p5.treegl Development
- Scene-graph support: Introduce hierarchical node structures with affine transformations to enable rendering, composition, culling, and motion retargeting use cases.
- Tagging and picking: Add node labeling and efficient ray-based picking to support object selection and manipulation in interactive scenes.
- Gesture-based interaction: Adapt nub's gesture dispatch model to decouple user input from rendering, allowing per-node interaction routines and multi-view control (the multi-view pattern as it exists in nub today is sketched after this list).
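For reference, here is a hedged sketch of the multi-view pattern the port would mirror. The Scene(PGraphics) constructor and display(x, y) call are assumptions based on the paper's description of scenes coupling a PGraphics context with an eye node; the eventual p5.treegl API is still to be designed:

```java
// Two views of the same node hierarchy, each with its own eye and context.
Scene mainView, auxView;

void setup() {
  size(800, 400, P3D);
  mainView = new Scene(createGraphics(400, 400, P3D));  // assumption: offscreen scene ctor
  auxView = new Scene(createGraphics(400, 400, P3D));   // independent second eye
  Node box = new Node();                                // shared world content
  box.setShape(createShape(BOX, 50));
}

void draw() {
  mainView.display(0, 0);    // left viewport
  auxView.display(400, 0);   // right viewport; each eye moves independently
}
```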
Applied Research
- Post-effects by AI-assisted composition: Build a curated collection of post-processing shaders (e.g., blur, tone mapping, bloom), together with a system that parses natural-language descriptions into effect pipelines by matching and parameterizing shader uniforms. This involves constructing a structured shader database and training models to map textual prompts to GLSL-based compositions (a sketch of a single pipeline stage follows this list).
- Visual computing applications: Explore use cases enabled by the extended library, including advanced culling, collision detection, motion retargeting, and post-WIMP interaction techniques.
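To make the pipeline idea concrete: in Processing, a single post-effect stage amounts to running a fragment shader over the rendered frame, and composition means chaining such stages with parameterized uniforms. A minimal sketch, where the blur.glsl asset and its radius uniform are illustrative assumptions rather than files shipped with either library:

```java
PShader blur;

void setup() {
  size(800, 600, P3D);
  blur = loadShader("blur.glsl");  // hypothetical shader from the curated collection
}

void draw() {
  background(0);
  box(100);                        // placeholder scene content
  blur.set("radius", 4.0f);        // the kind of uniform an AI composer would set
  filter(blur);                    // apply the post-effect over the rendered frame
}
```

An AI-assisted composer would then map a prompt such as "soft glow" to an ordered list of such stages and their uniform values.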
These directions support core areas in rendering, interaction, and computer vision, and offer potential thesis topics at MSc and PhD levels. Students interested in graphics, AI for creative tools, or web-based rendering & HCI are encouraged to get involved.
Sources:
- Charalambos, J.P. (2025). nub: A Rendering and Interaction Library for Visual Computing in Processing. Journal of Open Research Software, 13(1):4.
- Charalambos, J.P. (2023). p5.treegl: Shader development and space transformations library for p5.js (v1+).
- p5.treegl blog posts: a collection of experimental features, design ideas, and early previews shaping future versions of the library.