Expert Reviews
Feedback from two expert consultations on the Data Forest project - one from a CU faculty member and one from an outside industry expert.
Review 1: Joel Swanson
CU Faculty - Artist & Design Professor
About the Reviewer
Joel Swanson is an artist and design professor at the University of Colorado Boulder with extensive experience in interactive installations. His expertise in aesthetics, interaction design, and physical exhibition spaces made him an ideal reviewer for the visual and experiential direction of Data Forest.
What I Presented
I walked Joel through the full project concept: a network of webcams collecting crowd data (people count, movement, sound levels) and feeding a parameterized forest simulation displayed on a screen. I showed him my AI-generated design inspiration, the CAD model of the Atlas building with identified high-traffic zones (main atrium, hallways near room 104, B1 area around 1B25, and the black box), and discussed the different approaches I was considering for how the forest visualization should respond to the data.
Feedback Received
Localized Growth Over Aggregate
I presented two options: placing trees at the exact positions where cameras detect people, or growing foliage indiscriminately across the whole model based on aggregate crowd data. Joel pushed me toward a middle ground - each camera should drive its own localized clump of trees independently, rather than all cameras feeding a single building-wide estimate. He felt this was more interesting as a data visualization because each camera's data would feel distinct and the viewer could see spatial differences across the building.
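As a concrete sketch of this per-camera mapping (the camera names and growth rate below are illustrative assumptions, not values from the project):

```python
# Hypothetical sketch: each camera drives its own clump of trees
# independently, instead of pooling all counts into one aggregate.
def grow_clumps(counts_by_camera, growth_rate=0.05):
    """Map each camera's person count to a growth increment for its own clump."""
    return {cam: count * growth_rate for cam, count in counts_by_camera.items()}

# A busy atrium grows only its own clump; quiet areas stay unchanged.
growth = grow_clumps({"atrium": 12, "hallway_104": 3, "b1_1b25": 0})
```

The aggregate alternative would sum all counts into one number before driving growth, erasing exactly the spatial differences Joel wanted visible.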
Intermittent Live Events
Joel's most impactful suggestion was to add intermittent live events to the simulation - unexpected moments that create variation in the experience. He proposed ideas like a flood where water rises and fish swim by, a rainbow appearing, birds showing up, or weather events. These would be triggered by specific conditions (e.g., too many people in one spot, a time interval) and would make the installation more memorable and give people a reason to return. He described them as "fun little Easter eggs" that keep viewers engaged.
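The trigger conditions Joel described could be checked each tick by a small rule function. This is only a sketch - the event names, thresholds, and interval are placeholder assumptions:

```python
# Hypothetical trigger rules for intermittent live events.
# crowd_threshold and flood_interval (seconds) are illustrative values.
def check_events(person_count, last_flood_ts, now,
                 crowd_threshold=15, flood_interval=3600):
    """Return the list of events that should fire on this simulation tick."""
    events = []
    if person_count >= crowd_threshold:
        events.append("birds")   # a crowd surge draws a flock in
    if now - last_flood_ts >= flood_interval:
        events.append("flood")   # periodic flood with fish swimming by
    return events
```

Keeping triggers in one place like this would make it easy to add Joel's other ideas (rainbow, weather) as new rules later.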
Message and Meaning
Joel challenged me to think about whether the installation carries a deeper message. He noted that trees naturally evoke environmental themes and asked whether I was trying to say something or if it was purely experiential. He described the concept as a "terrarium" - being deep in the basement of the expo and looking at a digital representation of the building's energy from the outside. He encouraged me to think about the high concept, even if the installation can work without an explicit message.
Display: TV Over Projection
When I asked about display options, Joel was clear: a TV is better than a projection for this project. The detail in the forest simulation benefits from the richer, brighter display of a screen. More importantly, in the crowded expo environment, people standing in front of a projection would cast shadows and interfere with the image. A TV avoids this entirely and allows people to get close to examine the detail.
Interactive Pose Detection
I shared my reach goal of having a camera in the installation room itself for pose detection - where viewers' poses trigger different ambient audio (bird chirping, cricket sounds, etc.) to make the experience more immersive. Joel responded positively and pointed me to the artist Rafael Lozano-Hemmer, specifically his piece "Zoom Pavilion," which uses tracking cameras in a room to process everyone through tracking software. He recommended Lozano-Hemmer's broader body of work as a reference for interactive tracking installations.
How This Feedback Shaped My Project
- Committed to localized per-camera tree growth rather than a single aggregate simulation
- Added intermittent live events to the project roadmap as a key feature for engagement
- Decided on a TV display for the final installation
- Began researching Rafael Lozano-Hemmer's work for interaction design inspiration
- Started thinking more deliberately about the "terrarium" framing and whether to embed a message
Full interview transcript: joel.txt
Review 2: Brad Gallagher
Outside Expert - Motion Capture, Creative Coding & Interactive Media
About the Reviewer
Brad Gallagher is an expert in motion capture, creative coding, Unity, and TouchDesigner with years of experience building interactive installations and real-time systems. He runs a motion capture lab and regularly works with OSC protocols, 3D environments, and sensor-driven installations. His deep technical background in both the creative and engineering sides of interactive media made him the ideal person to evaluate my technical architecture.
What I Presented
I showed Brad my iteration 1 presentation, the CAD model of the Atlas building, and my existing technical architecture: a Python/OpenCV multi-camera detection server using YOLOv8 that pushes metrics to a FastAPI server, which TouchDesigner scrapes for data. I explained my vision for a detailed 3D forest simulation and my concern that TouchDesigner was struggling with the 3D world-building aspect of the project.
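For reference, the metrics the detection server pushes might look like the record below. The field names are my assumptions for illustration, not the project's actual schema:

```python
import json
from dataclasses import dataclass, asdict

# Hedged sketch of a per-camera metrics payload the Python/OpenCV
# detection server could push upstream; field names are assumptions.
@dataclass
class CameraMetrics:
    camera_id: str
    person_count: int
    movement: float     # e.g. normalized motion magnitude, 0-1
    sound_level: float  # e.g. normalized RMS audio level, 0-1

def to_json(metrics: CameraMetrics) -> str:
    """Serialize one camera's metrics for the FastAPI endpoint."""
    return json.dumps(asdict(metrics))
```

This JSON-over-HTTP shape is what Brad's OSC recommendation below would replace on the transport side, while the detection pipeline producing the numbers stays the same.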
Feedback Received
TouchDesigner vs. Unity: Right Tool for the Job
Brad's central piece of feedback was that I was fighting against TouchDesigner's strengths. He acknowledged that TouchDesigner is excellent for abstract, real-time visuals, audio reactivity, and data routing - but building a detailed 3D simulated world with vegetation, terrain, and dynamic assets is not what it's designed for. He said he'd never seen a truly 3D, polished, video-game-quality environment built in TouchDesigner. He strongly recommended Unity instead, pointing out that it has built-in transforms (position, rotation, scale) on every object, a massive asset store with pre-built vegetation and L-systems, and is fundamentally designed for 3D world-building.
OSC: Secret Sauce
Brad introduced me to OSC (Open Sound Control), a protocol he described as "inter-application interactive glue." Despite its name, OSC is a general-purpose messaging protocol that uses UDP networking for fast, low-latency communication between applications. He explained that it uses an address structure similar to URLs - totally user-defined - where you attach numerical data and send it to a specific IP and port. It works on a single machine via localhost or across a network by changing the IP address. He uses it constantly in his lab to connect motion capture systems, projection software, DAWs, and Unity.
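OSC's wire format is simple enough to show directly: the user-defined address string, then a type-tag string, then big-endian payloads, with every string null-terminated and padded to a 4-byte boundary. In practice a library such as python-osc handles this; the minimal encoder below (the `/camera/1/count` address is a made-up example) just illustrates what goes into the UDP datagram:

```python
import struct

def _pad(b: bytes) -> bytes:
    """OSC strings are null-terminated, then padded to a 4-byte boundary."""
    return b + b"\x00" * (4 - len(b) % 4)

def osc_message(address: str, value: int) -> bytes:
    """Encode a single-int32 OSC message: address, type tags (",i"), payload."""
    return _pad(address.encode()) + _pad(b",i") + struct.pack(">i", value)

# 24 bytes: 16 (padded address) + 4 (padded ",i") + 4 (big-endian int32)
packet = osc_message("/camera/1/count", 3)
```

Sending it is one call: `sock.sendto(packet, ("127.0.0.1", 9000))` for localhost testing, or any other IP on the network - which is exactly the single-machine-to-multi-machine flexibility Brad described.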
Architectural Recommendation
Brad recommended a specific architecture change: keep my existing Python/OpenCV detection pipeline (which he praised, noting that my threading approach put me "ahead of the game"), but instead of formatting data as JSON for a web server, send it directly to Unity via OSC. On the Unity side, I'd have a listener script attached to scene objects that receives the OSC messages and modifies their transforms in response. He outlined a concrete first test: scale a bush asset in Unity based on the person count from my camera - one person increases scale by 10%, two people by 20%, etc. He estimated I could get this proof of concept working in a few hours.
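On the Python side, Brad's first test reduces to a one-line mapping; the base and step values follow his 10%-per-person example:

```python
def count_to_scale(person_count: int, base: float = 1.0, step: float = 0.1) -> float:
    """Brad's proof-of-concept mapping: each detected person adds 10% scale."""
    return base + step * person_count
```

The resulting value would be sent to Unity over OSC, where the listener script applies it to the bush object's transform (in Unity terms, its local scale).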
Unity's Asset Ecosystem
Brad emphasized that Unity's asset store would be critical for a project of this ambition. He pointed out that I wouldn't have time to model all the 3D objects myself, but that the asset store has free and low-cost pre-built assets - vegetation with growth animations, animals with AI behaviors, particle systems, and environmental effects. He suggested that assets with built-in behaviors (like animals that run around based on AI rules) could be affected by my crowd data - for example, animals speeding up when more people are detected. This would add depth without requiring me to build everything from scratch.
Networking and Deployment Considerations
Brad raised an important practical point: we'd need to verify that CU's network (CU Guest) allows UDP traffic for OSC, since some institutional networks block non-standard traffic. However, he noted that the entire system can be developed and tested on a single machine using localhost, and scaling to multiple machines is just a matter of changing IP addresses. He also mentioned that his lab has a high-performance PC that could potentially be used for the final expo deployment, which would help with my computing power constraints.
Why Unity Over Unreal
When discussing 3D engines, Brad specifically recommended Unity over Unreal Engine. He felt Unity is more approachable: its C# scripting lets you "just write code," in contrast to Unreal's Blueprint visual scripting system. He also noted that Unity has significantly more documentation and community resources, and that you can assemble 3D environments without writing any code at all - though the scripting is there when you need it, which I would for receiving OSC data and driving the simulation.
How This Feedback Shaped My Project
- Made the decision to pivot the visualization layer from TouchDesigner to Unity
- Adopted OSC as the communication protocol between the Python detection server and Unity
- Kept the existing Python/OpenCV/YOLOv8 detection pipeline intact - it was validated as a strong approach
- Built a proof-of-concept Unity scene with a camera-reactive tree within the same week
- Began exploring the Unity asset store for vegetation, terrain, and environmental assets
- Identified CU network UDP testing as a future task before expo deployment
Full interview transcript: gallagher.txt