Dancers morph into birds and trees in a soaring journey across time and space.
In the video for HAAi’s ‘Baby, We’re Ascending’ – the title track from the DJ and producer’s debut album, made in collaboration with Jon Hopkins – sweeping melodies, soaring vocals and a hardcore-inspired rhythm are brought to life by a combination of real-life dance and AI technology. Choreographed by Akira Uchida and animated by Tom Furse, the visual is both a kaleidoscopic interpretation of the track and a stunning technical achievement in its own right, in which three dancers morph seamlessly with flowers, birds and trees.
“I’ve worked with Tom across my entire album, including the video for ‘Purple Jelly Disc’, the AI clouds on the digital album cover, my Mixmag cover and now for this,” says HAAi. “He’s also creating some bespoke visuals for my bigger shows this year which I’m really excited about.”
“I met Akira virtually as he had choreographed a dance piece to an older track of mine called ‘Feels’, which blew me away. His interpretation of my music and translating it into movement was really emotional to watch. It was a no-brainer for me to work with both Tom and Akira on the video.”
Furse is best known as a member of The Horrors and as a solo musician in his own right, but more recently he was inspired to try using AI to create visuals and animation. “I’d seen examples of the technology before with things like the famous ‘avocado chair’ but after hearing about VQGAN+CLIP on the Interdependence podcast I woke up a few mornings later and thought ‘OK, I’m going to give this a go today’. That morning cracked my entire creative practice wide open. I wasn’t just doing music anymore. It also changed the way my eyes saw the world.” When Furse first heard the track, he wanted to convey the feeling he says he gets with a lot of HAAi’s music, “a kind of rushing feeling, a sense of being propelled through the atmosphere at force.”
“So there’s already that feeling of flight, and married with ideas of ascension it seemed only natural to explore avian forms. Across my work so far there’s been a lot of botanical exploration so I also incorporated that into my prompts for the AI, knowing that I’d get some interesting results as it tried to figure out whether any part of a dancer was supposed to be a bird, or a flower, or something in between. But also, as Akira pointed out to me, ascension is also about change, and the life journey of a flower illustrates change very poetically.”
Uchida had a similar response to the track, which he wanted to communicate through the choreography. “The first thing that impacted me upon listening to the track was this feeling of it being heavenly and ethereal. The peak in the song gave me a very specific feeling of falling upwards into the sky and beyond (not to be confused with flying) which inspired some of the visuals at the end of the video. There is also an immensity in the sound which I felt was important to capture as well as a powerful feminine energy I wanted to channel in movement.”
The production of the video was a collaborative process, with Uchida shooting the dancers in front of a green screen in a New York studio, and Furse processing the footage with conventional means before running each scene through a machine learning synthesis process called Guided Diffusion.
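The broad idea behind guided diffusion can be illustrated with a toy sketch. The following Python is purely hypothetical, a numpy stand-in rather than Furse's actual pipeline or the real Guided Diffusion models: a noised "frame" is iteratively denoised while each step is nudged toward a prompt-derived target, which is roughly how prompt guidance steers the output while structure from the input footage persists.

```python
import numpy as np

# Hypothetical toy sketch of prompt-guided diffusion on one "frame".
# Not the production pipeline: real systems use trained denoising
# networks and a CLIP model to score images against text prompts.
rng = np.random.default_rng(0)

frame = rng.random((8, 8))       # stand-in for one green-screen video frame
target = np.full((8, 8), 0.5)    # stand-in for the prompt's "direction"

def guidance_gradient(x, target):
    """Stub for the CLIP score gradient: pull the image toward the prompt."""
    return target - x

steps = 50
x = frame + rng.normal(0.0, 1.0, frame.shape)   # noised starting point
for t in range(steps, 0, -1):
    noise_scale = t / steps
    # Toy "denoising" step: perturbation shrinks as t decreases.
    x = x - 0.1 * noise_scale * rng.normal(0.0, 1.0, x.shape)
    # Guided nudge toward the prompt target at every step.
    x = x + 0.2 * guidance_gradient(x, target)

# The output drifts toward the prompt while keeping traces of the
# input frame — the "something in between" effect described above.
print(round(float(np.abs(x - target).mean()), 3))
```

Running this per frame, at video resolution, with real models is what makes the process so slow: each frame is effectively a fresh optimisation, which is why longer-form clips like this one are rare.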
“This is still an emerging technique and I believe possibly the first time it’s been used at this scale,” Furse says. “I’ve seen 5-10 second clips before but it’s such a time-consuming process I’m not sure if anyone has really had the freedom to set the time aside to make anything longer form. Personally I can’t wait to see this process used in more conventional narrative storytelling. It has so many possibilities.”
“Collaborating with Tom was a really enriching experience,” Uchida says. “Though I had worked with green screen before, working with AI in this way opened up a whole new world of possibilities and challenges as well. As we were both undertaking a new process in our own ways, myself working with AI and Tom working with dance, a lot of our collaboration had to do with problem solving and coming up with creative solutions.”
Although Uchida and Furse were in close contact throughout the process and communicated on revisions before reaching the final edit, the unpredictable nature of the AI rendering presented some challenges in developing the choreography. “We did a few tests throughout the process, so I had a reference as to what might work better than other choices, but ultimately I was choreographing without having an exact idea of how this would turn out,” Uchida says. “I knew that we wouldn’t be able to see some of the more subtle details in their expression, so I focused on creating large powerful movements which convey a strong intensity, while focusing on the form, so the feeling would still translate and remain present regardless of the outcome.”
In their earliest iterations, machine learning models trained to generate images produced strange and variable output, but recent developments allowed Furse to create visuals with some degree of predictability. “The results can be unexpected, although not enough to be completely unrepeatable,” he says. “The details might be different each time but your general output will still be in the same world if you’ve crafted the prompts and the process well enough.
“I would love to train my own models but there is some pretty serious time and processing power needed to do so and gain impressive results, so I used various open-source models for this video. The openly available nature of this technology is an interesting component; it’s just waiting there for people to use. Who will emerge as the Bach of promptism and AI image synthesis? Someone’s bound to come along and really blow some minds, and I’m excited for them.”
Baby, We’re Ascending credits: