WISIWYG

June 2019



I was given the opportunity to work on an installation for the Getty! Three other students and I used TouchDesigner, a program we had just learned at the beginning of the semester, to create generative art.

My job was to build the "playback system," i.e., the state machine. It linked all of the different art pieces into one cohesive experience. I also created the installation's idle state: an iterative art piece that gained a digital stroke with each completed interaction. Everyone who used the installation added to the piece, and over the course of the night it began to take shape.
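TouchDesigner is scriptable in Python, so a playback system like this can be expressed as a small state machine. Here is a minimal sketch of the idea in plain Python; the class, state names, and piece names are all hypothetical illustrations, not the actual installation code or real TouchDesigner API calls.

```python
class PlaybackStateMachine:
    """Cycles the installation between an idle state and a sequence of art pieces."""

    def __init__(self, pieces):
        self.pieces = pieces    # ordered list of art-piece names (hypothetical)
        self.state = "idle"     # the installation starts in the idle state
        self.current = -1       # index of the active piece, -1 while idle
        self.strokes = 0        # strokes the idle art piece has accumulated

    def visitor_detected(self):
        """Leave idle and start the first piece when a visitor steps up."""
        if self.state == "idle":
            self.current = 0
            self.state = self.pieces[self.current]

    def piece_finished(self):
        """Advance to the next piece, or return to idle after the last one."""
        if self.state == "idle":
            return
        self.current += 1
        if self.current < len(self.pieces):
            self.state = self.pieces[self.current]
        else:
            # A complete interaction finished: the idle piece gains a stroke.
            self.strokes += 1
            self.state = "idle"
            self.current = -1
```

In use, a full pass through the pieces returns the machine to idle and leaves one more stroke on the idle piece, which is how the artwork could accumulate over the course of the night.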


On April 29, 2019, the Getty Center celebrated "Color," an event focused on color in imagination, the science of color, art, and the future. The chair of our department, Ana Herruzo, gave us the opportunity to create an immersive, interactive installation merging artificial intelligence and visual arts. Named "WISIWYG: What I See Is What You Get," the installation produced live data and animations by analyzing users' facial expressions, using neural network models trained on the Getty Museum's art collection. It consisted of three video screens stacked vertically into one seamless display, a camera for facial recognition, and an Xbox Kinect for motion data. The experience was powered by TouchDesigner, a tool for creating real-time generative art.

Applied Computer Science students and professors at the Getty Center
Art generated by the installation's idle state