ML art installation based on latent space exploration, in collaboration with a plant

Skills Python, Machine Learning, TouchDesigner, Spout/UDP
Team Yvonne Fang, Eesha Shetty, Lia Coleman, Michelle Zhang
Duration 1 month


This is an interactive art project that explores co-creation between humans, plants, and AI, combining sensory input with StyleGAN image generation to study latent space interpolation. Sensors placed on a plant's leaves and stems let participants collaborate with a living organism: StyleGAN-generated visuals of leaves are produced in real time, influenced by both the plant's biological signals and the participant's touch. By giving the plant agency to navigate our fine-tuned StyleGAN's latent space, the project blurs the boundaries between human, plant, and machine in an interactive performance. It is inspired by Hubert Duprat, who collaborated with caddisfly larvae to build gold-leaf cocoons, and by Alexander Mordvintsev's Intelligence All the Way Down, which explores biological processes as a non-human form of intelligence.


  • StyleGAN3 was fine-tuned on a plant leaf dataset from Instagram (514 photos of leaves from the same fallen tree, @alongletter). The resulting checkpoints produce intriguing artifacts such as smoky and neon appearances; although imperfect, possibly due to data augmentation or mode collapse, the generated images are beautiful and unexpected.
  • Images are generated in real time with StyleGAN3 by manipulating latent codes through linear interpolation, producing interpolated images between two latent codes, each sampled randomly from a seed.
  • To extract signals from the plant, we use an Arduino implementation of Disney Research's Touché technique: a frequency sweep detects changes in capacitance when the plant is touched in different areas. The resulting line graph is analyzed to produce an alpha value fed into StyleGAN; mapping the curve's peak y value to the range 0 to 1 yielded better results than the other approaches we tested.
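The interpolation and peak-to-alpha steps above can be sketched as follows. This is an illustrative sketch, not the project's exact code: the function names and the sensor range values are assumptions, and the resulting latent code would be passed to a loaded StyleGAN3 generator.

```python
import numpy as np

Z_DIM = 512  # StyleGAN3 latent dimensionality

def latent_from_seed(seed: int) -> np.ndarray:
    """Sample a latent code z ~ N(0, I) deterministically from a seed."""
    return np.random.RandomState(seed).randn(Z_DIM)

def lerp(z0: np.ndarray, z1: np.ndarray, alpha: float) -> np.ndarray:
    """Linear interpolation between two latent codes, alpha in [0, 1]."""
    return (1.0 - alpha) * z0 + alpha * z1

def map_peak_to_alpha(peak_y: float, y_min: float, y_max: float) -> float:
    """Map the Touché sweep's peak y value into [0, 1]
    (the mapping that worked best in our tests)."""
    alpha = (peak_y - y_min) / (y_max - y_min)
    return float(np.clip(alpha, 0.0, 1.0))

# Two endpoints of the interpolation, each derived from a seed
z0, z1 = latent_from_seed(0), latent_from_seed(1)
# Sensor range values here are placeholders, not measured calibration
alpha = map_peak_to_alpha(peak_y=420.0, y_min=300.0, y_max=600.0)
z = lerp(z0, z1, alpha)  # feed z to the StyleGAN3 generator
```

Each touch therefore moves the output smoothly along the line between the two endpoint images rather than jumping to an unrelated sample.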


Initially, I experimented with Pix2Pix, following this tutorial, since its approach generates real-time visuals from a visual representation of a soundwave, which is similar to the plant signal data we would receive. I trained the Pix2Pix model on nature-landscape images paired with their associated soundscapes and got some interesting results (below). However, we did not proceed with Pix2Pix because we wanted to explore options that translate sound or music to visuals more directly. We eventually decided on StyleGAN3 because the interactive installation requires real-time inference.

Adapting the existing TouchDesigner setup to StyleGAN proved challenging because the pipeline needed to pass float data, which Spout couldn't handle. We attempted to feed UDP input from an ELEGOO UNO R3 into TouchDesigner, but ran into issues running Python scripts for live inference inside TouchDesigner. Eventually, we bypassed TouchDesigner and passed the UDP input directly to the Python script running the StyleGAN image interpolation.
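A minimal sketch of that bypass, receiving sensor packets over UDP in the Python script; the port number and the ASCII-float packet format are assumptions, not the installation's actual protocol:

```python
import socket

HOST, PORT = "127.0.0.1", 7000  # assumed address; the Arduino sends here

def parse_alpha(payload: bytes) -> float:
    """Decode an ASCII float from a UDP packet and clamp it to [0, 1]."""
    value = float(payload.decode("ascii").strip())
    return min(max(value, 0.0), 1.0)

def listen(handle_alpha):
    """Block on the UDP socket and call handle_alpha for each packet."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.bind((HOST, PORT))
        while True:
            payload, _addr = sock.recvfrom(1024)
            handle_alpha(parse_alpha(payload))  # e.g. drive the interpolation
```

Because UDP is connectionless, the script simply consumes whatever the most recent packet says, which suits a live installation where dropped packets are harmless.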

However, I did manage to send the generated images and animation loop to TouchDesigner via Spout by downgrading to Python 3.7.6, since Spout only supports Python up to 3.7. This opened up the possibility of further manipulating the generated output with other data, such as audio signals.

I also experimented with using ambient sound volume to drive the latent codes for more varied visual effects. I achieved this by analyzing the live audio stream within TouchDesigner and sending the data to the Python script over UDP.
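One way to fold such an audio level into the latent walk is to let volume modulate the step size through latent space; this is an illustrative sketch under that assumption, not the installation's exact mapping:

```python
import numpy as np

Z_DIM = 512  # StyleGAN3 latent dimensionality

def step_latent(z: np.ndarray, volume: float, rng: np.random.Generator,
                max_step: float = 0.2) -> np.ndarray:
    """Nudge the current latent code in a random unit direction,
    scaled by the incoming audio volume (expected in [0, 1])."""
    direction = rng.standard_normal(z.shape)
    direction /= np.linalg.norm(direction)  # unit direction in latent space
    return z + max_step * min(max(volume, 0.0), 1.0) * direction

rng = np.random.default_rng(0)
z = rng.standard_normal(Z_DIM)
z = step_latent(z, volume=0.5, rng=rng)  # louder audio -> bigger jump
```

Silence leaves the image still, while loud passages push the visuals through the latent space more quickly.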

Full Report & Links

Full Project Report Link