We Are The Robots: Is the future of music artificial?

Last year computer scientists unveiled the first song to be composed by artificial intelligence, the Beatles-esque ditty ‘Daddy’s Car’. But it’s not the first sign of AI infiltrating music-making – from self-generating soundtracks to unique albums created on demand, the robots are on the march. Jack Needham asks if we’re ready for the AI revolution to reach our ears.

When we think of the early relationship between humans, machines and music, we might think of Kraftwerk’s analogue pop or Delia Derbyshire’s Radiophonic soundscapes – yet our fascination with machine music goes back much further than that. Late last year, University of Canterbury professor Jack Copeland and composer Jason Long restored the earliest known recording of computer-generated music, made in 1951 by Alan Turing, the British mathematician and artificial intelligence pioneer. The single-sided 12” acetate disc captures three melodies played by a primitive computer that filled most of the ground floor of Turing’s laboratory.

Nearly 70 years later we find ourselves in the age of ‘Daddy’s Car’, the first song to be composed by artificial intelligence. Prompted to write a song in the style of the Beatles, an AI system based at Paris laboratory Flow Machines created the melody and harmony after analysing a database of over 13,000 tracks in different musical styles, from jazz and pop to Brazilian samba and Broadway musicals. The music that comes from the ‘FlowComposer’ is defined by the limitations set for it – a certain note, chord structure or specific artist to analyse, for example. The result is impressive, though some might say it sounds more like the Super Furry Animals or a polished 13th Floor Elevators than the Fab Four. But with an entire album planned for 2017, ‘Daddy’s Car’ is just one hint of FlowComposer’s capabilities.
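Flow Machines hasn’t published FlowComposer’s internals here, but the idea of generating music from a learned style under hard constraints can be sketched in miniature. The toy below samples chord progressions from a first-order Markov model (the transition table is invented for illustration, not learned from the 13,000-track database) and uses brute-force rejection to enforce one constraint of the kind the article describes – here, that the progression must resolve back to its opening chord:

```python
import random

# Toy first-order Markov model over chords. In a real style-imitation system
# these transitions would be learned from a corpus; the table below is
# invented purely for illustration and is not Flow Machines' model.
TRANSITIONS = {
    "C":  ["F", "G", "Am"],
    "F":  ["C", "G", "Dm"],
    "G":  ["C", "Am", "Em"],
    "Am": ["F", "Dm", "G"],
    "Dm": ["G", "Am"],
    "Em": ["Am", "F"],
}

def generate(start: str, length: int, must_end_on: str, seed: int = 0) -> list:
    """Sample chord sequences until one satisfies the end constraint --
    a brute-force stand-in for the constraint techniques used in research systems."""
    rng = random.Random(seed)
    while True:
        seq = [start]
        for _ in range(length - 1):
            seq.append(rng.choice(TRANSITIONS[seq[-1]]))
        if seq[-1] == must_end_on:
            return seq

print(generate("C", 8, "C"))
```

Rejection sampling is the crudest way to impose a constraint; published work on this problem folds the constraints directly into the sampling process, but the input-output behaviour is the same in spirit: a style model plus limitations set by a human.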

“We’re on the verge of a revolution in this field,” says Fiammetta Ghedini, a researcher at Flow Machines, who describes the team’s work as “a non-commercial project to cooperate and collaborate with humans to help create music.” As it stands, the FlowComposer is not independent just yet. “It’s easy to recreate kilometres of music that doesn’t have a beginning or an end,” says Ghedini. “But to create a song with a bridge, an intro, a riff and all these kind of structures, that’s difficult.” While the Flow Machine was able to create a melody and harmony for ‘Daddy’s Car’, the song still had to be structured, produced and sung by resident composer Benoît Carré.

But if an AI like FlowComposer could create music independently, would that make it a musician? “If you’re talking about AI, you have to ask: what does an AI have to do to be a musician? In essence, it can’t simply do what it’s being told to do,” says Geraint Wiggins, professor of computational creativity at Queen Mary, University of London. “If I put a score in front of a professional musician and tell them to play it, why are they being musically intelligent? Because they’re asking, how would I bring out the important bits? Or, how should I change the tempo? They’re understanding the music and interpreting it, not just playing back a sequence of notes that you tell them to. If a machine wants to produce music that sounds good to humans it has to have some notion of what humans feel. Right now computers can simulate our feelings, but there’s nothing artistic in that.”

Pop culture representations of AI, from the malicious HAL 9000 in 2001: A Space Odyssey to Terminator’s Skynet, suggest that we’re sceptical of AI and its potential. Our real-life encounters to date have hardly quelled our anxieties – remember the time Microsoft’s chatbot Tay was turned into a racist Twitter troll in a matter of hours? But that mindset could still change. Though musical uses for AI are in their infancy, AI is already affecting our daily lives. Smart home systems like the Amazon Echo are making everyday tasks like turning on a light switch easier than ever (apparently they have other uses, but most people seem more interested in making them talk to each other), while Tesla’s self-driving cars are predicting road traffic accidents before they’ve even happened. We’re accepting of AI when it makes our lives easier, but when it comes to creative pursuits we don’t feel the same way.

“People have a bias against all the creative things produced by a computer,” says Ghedini. The reaction to the Flow Machines project – one website called it “a dire warning for humanity” – shows us that artificially created music is much more disturbing to us than auto-correct or an Amazon device that shaves five minutes off our morning routine.

“You have to use the AI as a tool, not as a shortcut” – Bill Baird

So how soon will the machines take over? “We are really, really close to a revolution in AI and I think that it will take over a lot of human activities,” says Ghedini. “But what we want to do is augment creative possibilities without substituting them, because if we let that happen then it’s going to be a little bit scary. So for us the idea is to build a tool that allows you to have new ideas, not replace them.”

Musician Bill Baird agrees. “You have to use the AI as a tool, not as a shortcut,” says the experimental artist and former Sound Team member, who’s currently based at Mills College in Oakland, California. Baird’s latest LP, Summer Is Gone, uses a custom algorithm to create a unique album for each listener based on their exact location and the time of day. “It’s like a bunch of pins in a lock,” he explains. “It uses a mathematical term called ‘factorials’ – every time someone logs on it generates a randomised sequence which is then referred to a database and memorised.” With each playback, a new sequence is created and cross-referenced with the database to form a unique musical pattern. “I think technology is much more interesting when you create something that is just not possible using normal means,” he argues. “If you’re trying to recreate something you can do in real life then what’s the point?”
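Baird’s exact algorithm isn’t public, but his “pins in a lock” description maps neatly onto the factorial number system: with n segments of music there are n! possible orderings, and a number derived from the listener’s location and time can be decoded into exactly one of them. A minimal sketch of that idea, with invented segment names standing in for the real album material:

```python
import hashlib
import math

# Hypothetical track segments -- stand-ins for the real album's material.
SEGMENTS = ["intro", "tide", "noon", "dusk", "night", "reprise"]

def sequence_for_listener(latitude: float, longitude: float, hour: int) -> list:
    """Derive a repeatable ordering of the segments from the listener's
    location and time of day, via the factorial number system."""
    key = f"{latitude:.3f},{longitude:.3f},{hour}".encode()
    digest = int(hashlib.sha256(key).hexdigest(), 16)
    # There are len(SEGMENTS)! possible orderings; pick one deterministically.
    index = digest % math.factorial(len(SEGMENTS))
    remaining = list(SEGMENTS)
    ordering = []
    # Decode the index digit by digit (radices n, n-1, ..., 1) into a
    # unique permutation -- each index maps to exactly one ordering.
    for radix in range(len(SEGMENTS), 0, -1):
        index, pos = divmod(index, radix)
        ordering.append(remaining.pop(pos))
    return ordering

print(sequence_for_listener(37.80, -122.27, 14))
```

The same listener at the same place and hour always gets the same album back – which matches Baird’s point about the sequence being “referred to a database and memorised” – while any change to the inputs unlocks a different one of the 720 possible orderings.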

‘Daddy’s Car’ is a milestone for AI, but Flow Machines’ aim is not just to rehash the past artificially but to help humans create music for the future. AI could be just another technological advancement used to make music, the next step on from drum machines, synthesisers and DAWs in our bedrooms. What if you could hold a jam session at home with an artificially intelligent band? Or create an improvised live show that reacts in real time with a crowd? The potential for both amateur and professional musicians could be huge.

“I think the social impact will be that artificially intelligent composing companions will be stimulating us to try new things,” says Wiggins. In fact, this method of music production is already being used to astounding effect. Last year’s No Man’s Sky, the never-ending, always-evolving exploration game, came with an equally expansive soundtrack from Sheffield’s 65daysofstatic. Using a complex series of algorithms to score the 585 billion years’ worth of gameplay, the soundtrack not only regenerates in new and unheard forms as the game progresses, but also responds to shifts in gameplay, for instance where fear-inducing music is needed for a tense moment.
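65daysofstatic’s generative system for No Man’s Sky hasn’t been published, but the underlying idea of a score that “responds to shifts in gameplay” is often built from layered stems gated by a game-state value. A toy sketch, with invented layer names and thresholds that are not the band’s actual material:

```python
# Toy sketch of state-driven adaptive scoring: game events raise or lower an
# intensity value, and the score responds by enabling more layers.
# Layer names and thresholds are invented for illustration only.
LAYERS = [
    (0.0,  "ambient drone"),
    (0.3,  "pulse percussion"),
    (0.6,  "lead melody"),
    (0.85, "distorted climax"),
]

def active_layers(intensity: float) -> list:
    """Return every layer whose threshold the current intensity has reached."""
    return [name for threshold, name in LAYERS if intensity >= threshold]

print(active_layers(0.2))   # calm exploration: drone only
print(active_layers(0.9))   # tense moment: all four layers stacked
```

Because the layers are mixed rather than swapped, the music can swell into a “fear-inducing” state and back down again without a hard cut – one simple way a soundtrack can regenerate endlessly while still reacting to the player.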

But as much as AI could be used to help humans create more complex artforms, it could also reduce our workloads. As the success of ‘Daddy’s Car’ hints, there is an obvious potential for AI to be developed for ghostwriting, or even to create an autonomous virtual pop star. “Every generation has their equivalent,” says Wiggins of ghostwriting. “Popular music tends to be very heavily structured. If you look at trance music, it’s very pre-defined and tends to have exactly the same structure, so that’s very possible to replicate quite easily.”

But, he adds, what would be much more impressive is if AI came up with a whole new musical form. “It’s easy for programs to learn a chord sequence or to harmonise a melody, but getting one that says what the structure of a piece of music should be is really different. That’s arguably the difference between recreating popular music and creating artistic music.”

It’s true that for now, AI cannot replicate the politically charged lyricism of a Yasiin Bey verse, or the kaleidoscopic imagination of Björk – but mimicry may not be where AI’s musical potential lies. AI isn’t going to bring us the next Miles Davis or Philip Glass, but it could be used by a new generation of artists to realise ideas that were previously impossible. While AI is still in its infancy, it is already being used to push music into uncharted territory – and that’s nothing to fear. Just as virtual reality is not a replacement for our natural surroundings, but a way to augment them, artificial intelligence could be the next technological leap in our creativity.

