As I mentioned in my previous post I’m working on using a highly stylized ripple to visualize audio in my piece. The goal is that the physical panels in my installation only move when the user makes sound, thus “transforming” sound into physical motion and projection. I think a ripple is the most natural representation of this and with a fair amount of tweaking I’ve been able to create a solution that captures the style that I’ve been sketching throughout the planning phase.
The ripple is based on the research that Neil Wallis has done on water simulation. Depending on the volume of audio, various properties of the ripples are modulated to feel natural and connected to user input. In a later iteration of this project I intend to use properties of the audio spectrum to further enhance this visual. The ripples radiate out from the location where the user is standing while the user continues to make noise. The moment the sound stops, the ripples freeze in that state and become a permanent part of the visual of the piece.
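The heart of that simulation is the classic two-buffer averaging trick that Wallis's water tutorial describes. Roughly sketched in plain Java (all names and constants here are illustrative, not my actual installation code; in the real sketch the splash strength is driven by microphone volume):

```java
// Minimal sketch of the two-buffer water ripple. Each frame, every cell
// becomes the average of its neighbours in the previous frame minus its
// own old value, then gets damped so waves fade out over time.
public class Ripple {
    final int w, h;
    int[] prev, curr;   // two height buffers, swapped each frame

    Ripple(int w, int h) {
        this.w = w; this.h = h;
        prev = new int[w * h];
        curr = new int[w * h];
    }

    // Disturb the surface where the user is standing; strength could be
    // scaled by the detected audio volume.
    void splash(int x, int y, int strength) {
        prev[y * w + x] = strength;
    }

    void step() {
        for (int y = 1; y < h - 1; y++) {
            for (int x = 1; x < w - 1; x++) {
                int i = y * w + x;
                int v = (prev[i - 1] + prev[i + 1]
                       + prev[i - w] + prev[i + w]) / 2 - curr[i];
                curr[i] = v - (v >> 5);   // damping
            }
        }
        int[] t = prev; prev = curr; curr = t;   // swap buffers
    }
}
```

Freezing the ripples when sound stops is then just a matter of no longer calling `step()` while continuing to draw the current buffer.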
I recorded a bit of my most recent testing session after I aligned the ripple locations on the left half of the installation piece:
After my last post I tested the shatter effect on the actual installation (previously I had only tested on my laptop), and I wasn’t happy with the results. When the lines are projected at full scale the effect feels much different than it does on a laptop screen. The shatter effect moved the lines further apart horizontally, but didn’t disrupt the projection in any other direction, which wasn’t my intention. So I returned to the ripple effect and tweaked the visuals for the slit-scan tracking. The images shown below are a trace of me moving through the testing area at varying speeds (which determine the fidelity of the visual), and moving left/right and forward/backward.
I’m happy with the final visual, but it’s much more dependent on lighting than the previous projection (the pink background). Because of this I will need to include a few studio lights in my final setup when I film the completed project. The lights will need to illuminate the user(s) in the piece without washing out the projection, which has proven challenging in my testing.
Speaking of challenging, I’ve been testing the piece in the break room at work. Here’s my testing setup surrounded by file cabinets and a fridge.
Also, because I’m now doing all of my work on the installation itself, reading and writing code has become more challenging (but far more beautiful than a two dimensional screen).
The visual for the piece has gone through several iterations at this point. As you saw from the previous post, I wanted a “trace” of the user’s visual on the screen. I progressed from the screen shots that I posted before to a working slit-scan-style trace:
The ripple effect shown was meant to appear as a reaction to the user’s audio input, because the goal of the piece is to react to audio and transform it into physical movement (on the projection screen) and projected visuals. The ripple was that visual. The problem with this revision (and there were several problems) is that the scan feels disconnected from the user’s movements: it isn’t directly in front of the user’s current position, and it isn’t reactive to speed and position in 3D space. My next revision took the same input and mapped the position in 3D in a way that shows a clear trace of the user’s movements when the participant moves slowly, and a fractured visual when there is fast movement. I think this visual turned out more true to my intentions for the piece. I am still debating the background color for an “untracked” area, but for now I am going with black because the tracked images show a lot of contrast:
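The slow-versus-fast behaviour boils down to mapping the tracked user's per-frame speed to how scattered the drawn slices are. A hedged sketch of that mapping (the thresholds and the class/method names are illustrative, not my actual code):

```java
// Map per-frame movement speed to a "fracture" amount: slow movement
// yields an orderly trace (zero jitter), fast movement yields widely
// scattered slices. SLOW/FAST thresholds here are assumed values.
public class TraceStyle {
    static final float SLOW = 5f, FAST = 60f;  // pixels per frame

    // dx, dy: the user's movement since the last frame.
    // Returns a horizontal jitter amplitude in pixels.
    static float jitter(float dx, float dy) {
        float speed = (float) Math.sqrt(dx * dx + dy * dy);
        float t = Math.max(0f, Math.min(1f, (speed - SLOW) / (FAST - SLOW)));
        return t * 40f;   // up to 40 px of scatter at full speed
    }
}
```

Each captured slice would then be drawn offset by a random value in `[-jitter, +jitter]`, so the fracturing emerges naturally from how fast the participant moves.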
This video also shows the ripple effect when audio is detected. After working with the piece for a week, I’m trying a different route for audio visualization. The ripple effect seemed a bit cliche, and doesn’t suit the linear fragmented blocks of the piece. So I’m working on a “shattering” visual that will split the visual at the location where the user is standing when they make sound. I have it functioning in isolation, but I’m still working on successfully integrating it into my piece:
I’ve made more progress on my “trace” visual. After the last post I was able to persist a sliver of an image to create the “slit scan” look, but that sliver was only captured from the center of the screen. Since I am tracking users via Kinect (and pulling the RGB image from the Kinect), the next step was to build the trace from the user’s current location:
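The core of this step can be sketched in plain Java, assuming the frame's pixels are a flat `int[]` array (in Processing this would be `PImage.pixels`); the class and field names are illustrative, not my actual sketch code:

```java
// Slit-scan accumulation: each frame, one column of pixels at the
// tracked user's x position is copied into a persistent buffer that is
// never cleared, so the columns build up into a trace over time.
public class SlitScan {
    final int w, h;
    final int[] trace;   // persistent buffer, drawn every frame
    int cursor = 0;      // next column to write in the trace

    SlitScan(int w, int h) {
        this.w = w; this.h = h;
        trace = new int[w * h];
    }

    // frame: the current RGB frame from the Kinect (w*h pixels)
    // userX: the tracked user's horizontal position in that frame
    void capture(int[] frame, int userX) {
        if (cursor >= w) return;   // buffer full; could also wrap around
        for (int y = 0; y < h; y++) {
            trace[y * w + cursor] = frame[y * w + userX];
        }
        cursor++;
    }
}
```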
After getting that to work, the next step was to attempt to isolate the user from the background. I considered a background replacement algorithm (I haven’t ruled it out yet), but since I roughly know the height of the user’s head I can work from that point down to show only the user. I wasn’t tracking height previously, so with a few changes I was able to get that information and roughly approximate the user’s height (I jumped a couple of times about 3/4 of the way through):
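The head-down masking amounts to a simple per-pixel test. A minimal sketch, assuming the skeleton tracker supplies the head's pixel row (`headY`) and the user's x position; the band width and names are illustrative:

```java
// Crude user isolation without true background subtraction: keep only
// pixels at or below the tracked head position, within a horizontal
// band around the user's x position. Everything else is blanked.
public class HeadMask {
    // Returns true if the pixel at (x, y) should be kept.
    // (y grows downward, as in typical image coordinates.)
    static boolean keep(int x, int y, int userX, int headY, int halfWidth) {
        return y >= headY && Math.abs(x - userX) <= halfWidth;
    }
}
```

This is deliberately rough (it keeps a rectangular region, not the user's silhouette), which is why a proper background replacement algorithm is still on the table.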
The next step was scaling the visual, because my projector is 1280px wide. Processing’s resize function for PImages gave strange results every time I used it, so I worked with the pixel data directly and “manually” stretched each sliver to the appropriate width. I still need to figure out how I’m going to handle height:
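The manual stretch is essentially a nearest-neighbour resample. A small sketch of the idea for one row of pixels (the real version walks `PImage.pixels` row by row; names here are illustrative):

```java
// Nearest-neighbour horizontal stretch: for each destination pixel,
// sample the source pixel at the proportional position. Avoids relying
// on PImage.resize() entirely.
public class Stretch {
    // src: one row of pixels, srcW wide; returns a row dstW wide.
    static int[] stretchRow(int[] src, int srcW, int dstW) {
        int[] dst = new int[dstW];
        for (int x = 0; x < dstW; x++) {
            dst[x] = src[x * srcW / dstW];   // integer math floors the index
        }
        return dst;
    }
}
```

Nearest-neighbour keeps the hard pixel edges, which suits the blocky look of the piece better than a smoothed resample would anyway.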
So my goal for the projected visual was always a “trace” of the tracked users interacting with the piece. I created a quick prototype in After Effects to show motion of a user through the space. I would want the “slices” of motion to be thinner, but I think the render conveys the concept:
What you see at the end of the video is the “ripples” caused by sound captured in the area. More on this later.
The next step in the process of creating the visuals was to program the motion trace. I always imagined it looking a bit like a slit scan camera photo, except the “source” of the scan would always be focused on the user as he/she moves through space. Also the background would be removed. Well, I got the tracking working, and I was able to draw a visual of what’s captured, but right now it’s not based on the tracked user (though the user is being tracked, as you can see by the dots). And no background removal, yet. But soon. Attached is a quick video recording from my phone (sped up so it’s not as boring):
And here’s a more visible screenshot of the test track output. The visual on the right is what you’d see stretched across the display: