Iranian futurist Ash Koosha is pushing electronic music into virtual reality

The first time Ash Koosha put on a virtual reality headset, the light bulb didn’t go off so much as explode.

Few artists, even among those making music electronically, feel as attuned to technology as Ashkan Kooshanejad, an Iranian producer currently based in London. He describes himself as a futurist, using an old word that’s eternally concerned with the new. Futurist also describes his music, as evidenced by last year’s excellent debut album, GUUD. Released on Olde English Spelling Bee, it was a record equally inspired by classical and computer music, one that shifted between meticulous arrangements and utter chaos with dizzying glee.

It was all by design though, and some of the most thrilling moments in Koosha’s music come when he reveals the real sounds he’s working with, such as a looping car engine or shattering glass. These moments are a glimpse back to the reality — our reality — that he so easily whisks us away from.

With a new album, I AKA I, due on Ninja Tune on April 1, the same day he plays Rewire festival in the Netherlands, Koosha seems ready to take us even further down the rabbit hole – and that’s more true than ever now that he’s developing a visual accompaniment to the album designed for virtual reality headsets. The inspiration for the project came when he tried an Oculus Rift headset at an exhibition in London while he happened to be listening to music on headphones. That eureka moment led him to make an album that bridges the gap between a “headphones record” and a “headset record”.

I AKA I is an even headier record than its predecessor, and the chaotic thrill is stronger than ever, but it also feels like an album that could bring the producer to his widest audience yet. We discussed the musical potential of virtual reality, his early days of experimentation and the microscopic beauty of water drops.

Ash Koosha also made a very special FACT mix of 100% original unreleased productions. Stream and download his ‘COMA mix’ below.


When did you first become interested in melding technology with your music?

In general, I’m a futurist. I’m all about ideas for the future of humanity and its arts; I’m always looking into different subjects, into how we can improve our lives and find new mediums. It’s just my thing besides music and film. It’s the foundation of my thoughts.

You’ve already experimented with so many different forms, from studying in a classical conservatory to playing rock music. Did you find ways to explore these futurist interests through those avenues as well?

It’s funny, where I was growing up, music worked in a way where you’d get a lot of records from different genres without knowing the social background. You’d listen to a lot of stuff without knowing what was behind it, so it was all about the aesthetics, the musicality. I was just interested in sound and different forms of music. Maybe that’s why I tried to experience each and every one of them, from hard rock to hip-hop beats to electronic to classical; it was the wide range that interested me. So I come from playing live instruments in different genres to just working with the computer, and I think what I do now is a collage of everything.

You studied at the Tehran Conservatory Of Music. With that formal training as your foundation, have you seen your approach to composing evolve or change since delving more into electronic music?

Yes, my background is more improv. Before [music] school I was a guy who’d play three hours of non-stop bass guitar, doing solos and stuff, just trying to play solo jazz and weird things. But then when I went to school it structured my work and I started understanding what I was doing, not in a way where I’m limited by the structure, but where I know which alleys I’m taking and which ways I’m going in terms of improv. At the same time I was recording electronic music, but to me it was only soundscapes; I was putting samples together.

Then I started listening to classical music a lot and learning classical composition, like harmony, and learning more about counterpoint, and that’s when I could structure those unknown electronic sounds and recorded sounds that I was manipulating with the computer. Then I started actually making sense out of them so I could enjoy them, first of all. I could play it to someone and be like, “Look, this is a melody, but the instrument is something unusual, it’s something unknown, it’s not an instrument.” Like I recorded glass breaking, or something like that.

“There’s another realm of sound that we’re missing out on”

That’s one of the first things that excited me about GUUD. I felt this combination of something feeling very structured and composed, and then also there’s that chaos. The push and pull between those two things was thrilling. How did that album develop?

It’s interesting that you say that, because what I do, I believe, is half and half. It’s 50 per cent control and 50 per cent [letting] the computer decide. The method I have is I put units and processing on the sound, and after that I listen back to what the computer has decided as well. When I stretch a sound there are random elements that pop out of the waveform. So it’s a mix of me controlling all these chaotic things and also respecting the randomness of sound — I would say it’s electronic music that’s alive. It’s kind of speaking for itself, it’s trying to talk as well. So with GUUD I was experimenting with that. That was my laboratory for that kind of electronic music where you have these events — sonic events — that just move on their own, basically, and you put them in the right place in the room; it’s like you decorate them.

Was there something you put into the computer which was particularly surprising or inspiring when it was spat out on the other end?

Yeah – drops from the tap are some of the most interesting sounds, because when they leave the tap and drop onto the surface they’re exploding, and we don’t see that. If you watch them under a microscope they kind of splash into different drops that splash into different drops again, and that’s what the sound is made of. So when you stretch the sound there’s a lot of bubbly detail. When you zoom in it’s amazing. It’s one of the richest sounds.

Also, I remember once I was on the street recording car sounds, engine sounds, next to where cars stop at a red light. And then with one of the cars, I don’t know if the driver forgot to brake at the right time, but there was a sudden brake with that crazy screaming noise of the tires. I ran over so happily, and when I went back that was the best lead sound — some crazy screaming lead that I could get out of it. Not many sounds actually work in music, but if you isolate them in the right way they can be one of the richest synthesizers.

When did you start to work on I AKA I?

Shortly after GUUD, I was recording a lot of music. I had around 80 tracks that I finished a few months ago and I thought, this is too much for an album, but let’s try to make an album out of it. It’s not so different from GUUD, but I think in terms of how confident I am with putting the sound in tracks, it’s improved. It’s not as experimental, I’d say. Wherever a noise goes, I mean it this time.

I realized there’s a space in sound, there’s another realm of sound that we’re missing out on — inside the sound. When you zoom in, like Photoshopping a visual, you see how many harmonics you can create. People did it with their DJ decks, trying to pitch down the sound and sample it and collage it, but I think we can go beyond that. I use the form of a classical orchestra with sounds from that realm, from fractal sounds, from microsounds, from microtones. So instead of the violin with the same notation, these sounds come into play and make up that orchestra.

Ideally, one day if I had an orchestra that would run each and every part of that sound, that would be the future of classical electronic music, where you have a classical form that’s human, that we can relate to, but the sound is not necessarily a violin or a piano — it can be anything.


“I don’t want the experiment to be just for the lab or for a few people”

Following that idea of going “into the sound”, the work you’re creating with this album has this element of virtual reality. What was your first experience of putting on one of the headsets?

The first time I experienced it was in London in an exhibition. There was this application where you could fly over a landscape and see clouds. It was a simple thing, but I had headphones on. Both senses were occupied in that moment, my hearing and vision, and I thought this is what music should be — my music. It should be like you’re inside the music — if that makes sense? That was the snap, the click.

People describe a record as a “headphones record”, and the way you describe it, it’s almost like an evolution of that.

Exactly, it’s precisely that. It’s a “VR album”. Let’s say you have 30 minutes — not like 60 minutes, people are gonna vomit — let’s say you have a 30 minute album, and through the album there’s a narrative where objects are the sounds and sounds are the visual objects. There’s no division, there’s no distinction between the two. So you get this union of audio/visual, but there’s also a story, there are colors involved, there are details of sound that go around you, and it’s still music.

It’s funny, that’s a part that never even occurred to me. You mention you don’t want to make someone throw up — has that been a speed bump for you? Making something that the human body can’t keep up with?

I’m struggling with so many problems similar to this one. It’s true. If the album is a one-piece narrative it might be tricky, because at the moment the hardware isn’t at the point where you can sit there for 55 minutes. At some point you’re going to get headaches, and when you take off the headset you can’t walk — you’re disoriented. But I feel like with the application and with the coding it will be easier. Also, it doesn’t have to be a one-piece thing; you can select and go through tracks.

Have you thought about how you might approach performing this with an audience? Would you have a different angle?

I wouldn’t go as far as putting 200 headsets on people, that idea bugs me. If people were going to be in VR they’d stay at home. For now, I’m doing my shows the classical way, just twisting knobs, but it would be more fun if I interacted with my own sounds in a more immersive way. I think some people might wonder what this guy is doing on the stage with a headset on, but with visual mapping I think it could be interesting.

Would you ever see yourself returning to classical music?

To be honest, I wasn’t ever a classical musician. I studied classical music form and theory — I wasn’t ever anything, I didn’t know what I was doing. So at this point I don’t know what I’m going to do in the future. I’m just going to try and experiment with things and maybe people will listen to it and maybe that becomes something they would listen to more and more. I don’t want the experiment to be just for the lab or for a few people or a certain type of people. These are new sounds that we can use in popular music and classical music and whatever everyone is listening to, really.
