“It’s making all of this tech easier to use for experimental composers, who want to create something that explores how digital instruments and devices can go beyond what they were originally designed to do. By empowering composers to do that, it helps these instruments become much more integrated into contemporary music.”

Zubin Kanga

Zubin Kanga is a pianist, composer, and technologist. For over a decade, he has been at the forefront of curating and creating interdisciplinary musical programmes that seek to explore and redefine what it means to be a performer through interactions with new technologies.

In 2020, Kanga was awarded a UK Research and Innovation Future Leaders Fellowship to fund his latest multi-year project Cyborg Soloists based at Royal Holloway, University of London, where he is also Senior Lecturer in Musical Performance and Digital Arts. Cyborg Soloists is unlocking new possibilities in composition and performance through interactions with AI and machine learning, interactive visuals and VR, motion and biosensors, and new hybrid instruments. His work with Cyborg Soloists was recently featured in The New York Times, The Wire, Classical Music Magazine, and New Scientist.

Zubin has collaborated with many of the world’s leading composers and premiered more than 150 works. He has performed at many international festivals including the hcmf// (UK), Melbourne Festival (Australia), Paris Autumn Festival (France), Time of Music (Finland), Klang Festival (Denmark), Modulus Festival (Canada), and Gaudeamus Festival (Netherlands). Recent collaborations include major new works by Philip Venables, Nicole Lizée, Alex Paxton, Neil Luck, Nwando Ebizie, Larry Goves, Tansy Davies, Laura Bowler, and with Alexander Schubert on his internet-based WIKI-PIANO.NET (performed 32 times across 9 countries).  

Zubin recently premiered Laurence Osborn’s Schiller’s Piano (alongside Manchester Collective) and Alex Groves’ DANCE SUITE at the Southbank Centre. He will be performing at this year’s edition of hcmf// (Huddersfield Contemporary Music Festival) on 19th November 2024, performing Alex Ho’s Cyborg Etudes, Claudia Molitor’s In Den Träumen, and Alex Paxton’s Car-Pig, alongside the UK premiere of Steady State by Alexander Schubert on 18th November — the first multimedia piece to use decision-based brain sensors to control music and visuals. Ahead of these performances, Patrick Ellis sat down with Zubin at a café in Walthamstow, London, and discussed composer-performer collaborations, new technologies, brain scanners, building a shared knowledge, and more…

Alexander Schubert, ‘WIKI-PIANO.NET’ (2018), performed by Zubin Kanga.

Patrick/PRXLUDES: We’re speaking following your performances at the Southbank Centre this month. The pieces on that programme were created as part of a large-scale project you are currently leading named Cyborg Soloists, which has been going on for a few years…

Zubin Kanga: I’ve been running Cyborg Soloists since 2021 and we have plans to extend it further into the future. It’s funded by UK Research and Innovation and hosted at Royal Holloway, University of London. The project is focused on new interactions between musicians and new technologies, exploring combinations of musicians with AI, new audiovisual interactions from holograms to VR, new digital instruments, biosensors, and motion and gesture capture.

You’ve worked with many different composers – for example Alexander Schubert, Ben Nobuto, Laurence Osborn and Neil Luck. Could you talk about how you facilitated the tech for those collaborations?

There’s such a range of composers — there are some who are very tech-savvy. You come to them with a project, or even some new equipment or an experimental instrument, and they will jump at the opportunity and know exactly what they would like to do with it. Even if they need to learn new skills, they will know where to bring people in. Alexander Schubert — who I have worked with a couple of times — is extremely tech-savvy and knows when he needs help, but he’s so multiskilled in what he can do with visuals and sound and lighting. He just has a huge amount of virtuosity in working with all of these multimedia elements together. With this new piece that he has written for me using brain sensors, Steady State, we brought in an external expert, Serafeim Perdikis — who researches brain-computer interaction at the University of Essex — to do the serious programming of converting the sensor data into OSC and creating a system where I could control Max/MSP. Once we had that, Alex did the rest of the programming of the audio-visual interaction with the sensors (and a lot of the technical side of the sensors interacting with the visuals) himself, before bringing in a video artist and costume designer to expand it into his full vision of the work.
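The plumbing Zubin describes — sensor readings translated into OSC messages that Max/MSP can receive over the network — is simple at the byte level. As a rough illustration (not the code used in Steady State; the address `/brain/ssvep` is a made-up example), here is how a single-float OSC 1.0 message is packed: a null-padded address string, a `,f` type-tag string, then a big-endian 32-bit float.

```python
import struct

def osc_pad(s: bytes) -> bytes:
    """Null-terminate a string and pad it to a multiple of 4 bytes, per OSC 1.0."""
    s += b"\x00"
    while len(s) % 4:
        s += b"\x00"
    return s

def osc_message(address: str, value: float) -> bytes:
    """Pack an OSC message carrying one float: address, ',f' type tag, big-endian float32."""
    return osc_pad(address.encode()) + osc_pad(b",f") + struct.pack(">f", value)

# A hypothetical normalised sensor value, ready to send to Max/MSP over UDP
# (where a [udpreceive] object would decode it):
packet = osc_message("/brain/ssvep", 0.75)
```

In practice a library such as python-osc would handle this, but the point is that the “serious programming” layer is a translation job: continuous sensor data in, timestamped network messages out.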

There are a few composers like that based in the UK too. Someone like Ben Nobuto, who wrote a piece for me for my Nonclassical album Machine Dreams [Bad Infinity] — he worked with TouchKeys, integrating voice effects. Once he had worked out how the TouchKeys worked, he was able to work very quickly, with a very fluent technical skillset to draw upon. Other composers — say Laurence Osborn for example — don’t regularly work with new technologies. With the first piece that he wrote for me a few years ago, Absorber (before Cyborg Soloists), he had this idea of using a double keyboard technique, which I have actually used a lot now, where you put another keyboard on top of the piano, and it becomes a double-manual instrument. In that case, I become the technical facilitator; the composer brings an idea to me and I need to have the skills to realise that.

That also happened with my collaboration with Neil Luck. We used the MiMU Gloves — which are these gloves with gesture and motion sensors — for his piece Whatever Weighs You Down in collaboration with Chisato Minamimura, a deaf performance artist. That was an amazing collaboration; we had these workshops where we explored these gestural transformations of Neil’s piano writing, imitating what I was doing on the piano and then turning that into dance. Thinking about how gesture can communicate onstage, and then thinking about how these gloves work — finding gestures which have some kind of communicative connotation, or feel iconic, but also work with how the gloves function.

Would you say that each of your collaborations has resulted in works that were bespoke to you?

A lot of my favourite works to perform are definitely written for what I like to do on stage. How I like to play the piano, as well as my own aesthetic interests. But then there are a lot of works where the composer comes with a clear idea, and they really stick with that vision, even if I suggest it goes in a different direction.

When collaborating, it’s always this kind of negotiation of how much the composer will come to you and how much I need to go to them. Sometimes it’s a situation where there’s a lot of negotiation at the end of the composition process, [like] with Laurence the first time he wrote for me. With Absorber, we negotiated the difficulties of the piano/keyboard combined playing to some extent after he’d written it, but then I also had to really push myself to learn entirely new techniques to be able to play that piece. Now for the new concerto that he has composed for me, it’s been a lot more collaborative and communicative from the beginning and we’ve already been working with this piano+keyboard setup over several works. The collaborative work has been integrated into our whole artistic process.

Laurence Osborn, ‘Absorber’ (2019), performed by Zubin Kanga at Kings Place, London, UK.

With each of those composers that you’ve worked with a few times — when you have a greater understanding of each other, does it give you more room to take risks in the process?

Having that trusting relationship when you know someone well is really beneficial for collaboration. My PhD was on the composer-performer collaborative process and the different factors that can affect it: whether it is a longstanding creative relationship, or whether there is a power imbalance, such as when I’m working with a really famous senior composer, or with a student. I found that it doesn’t actually make much difference — it’s more about the attitude between the composer and the performer. If you are working with different forms of notation, such as graphic [notation], that can completely change the collaborative dynamic.

In all the most productive collaborations, they have been composers that I have really trusting working relationships with. It’s always difficult when you are working with someone new for the first time and trying to negotiate those boundaries, but often these really valuable artistic collaborations develop just within the creation of a single small work, if both artists are open to the process.

What is it like when you bring a third person into the process, such as a tech facilitator? An example I can think of would be a piece that you did with Joanna Ward in 2022 — FULL AND HOLLOW (bean piece 4) — with Sam Underwood…

That work had this amazing robotic piano mechanism which goes inside of the piano that Sam made. It’s made up of magnetic resonators that go on the bass strings, little hammers that are in the middle register, and then a couple of other little mechanisms that tap the frame. Joanna’s graphic score worked really well, in that it gave a lot of freedom for how to approach and work with that instrument.

There have been many other tech facilitators. There was Serafeim Perdikis for the brain sensors, Nick Moroz who provides help with Max/MSP programming, and many others. I did a piece by Robin Haigh [MORROW] as part of my Nonclassical album, where he wanted the vertical positions on the TouchKeys (which have surface sensors) to relate to the tempi of these repeated samples, as well as their dynamics. I could program a basic version myself, but found it hard to get it working smoothly and sounding natural without it glitching. So Nick did that finer programming work, in collaboration with Robin, me and Andrew McPherson (Imperial College), who designed the instruments. It was this multi-layered collaboration to achieve the sound that Robin had in his head.

And with Andrew we also had another great collaboration, around one of the first uses of the Keyscanner — an optical scanner he designed that’s used in Philip Venables’ Answer Machine Tape, 1987. It sits above the piano keyboard, tracking the keys and converting their movement into MIDI, which is used in this piece to allow me to type text on screen. I got one of the first ones that Andrew made. Because he was still developing it, I played a small part in shaping the device — feeding back things about the folding mechanism, or making requests like “how sensitive can we make it?” — and Andrew would work on it and come back with an updated version of the device. Sometimes these developers are big companies with limits to how much time they are going to give you, while with someone like Andrew — who’s a Professor at Imperial College, and runs the Augmented Instruments Lab — it’s part of his work and research to refine these designs to work for live performers.

Another example I can think of is Laurence Osborn’s friend Joss [Jocelyn Campbell]. What is it like being a part of a collaboration process that includes a technician who regularly works with that composer?

Joss worked closely with Laurence on the new concerto as well as his previous works, Absorber and Counterfeits (Siminica). He was mainly working on preparing the sounds for the sampler. Laurence worked closely with Joss to refine the sounds, and then it was straightforward for me to load them into keyboard samples; so his longstanding collaboration allows my work with Laurence to then focus on the implementation of the sounds in the work, as they’ve already been prepared so well. 

In the case of Laura Bowler, we worked with Sam Redway, who wrote some of the text for SHOW(ti)ME, gave feedback on some of the dramaturgy, and programmed the audio-visual sequence, combining pre-recorded video and live video on stage. For the live electronics, we worked with Matthew Fairclough, who did the programming for how the loop pedal would work, as well as some of the more complex functions of the MiMU gloves. All of these technical, dramatic and musical aspects had to be negotiated carefully between us; the tech ambitions balanced with the practicalities of how much I need to juggle onstage. Laura knows exactly where she needs this extra input and is a really good collaborator, inviting input from all of us while still keeping a clear vision.

Have there been moments in a premiere or an earlier performance when things have gone wrong and then you’ve got back to the composer telling them “this isn’t quite working”? What is it like to renegotiate with them? 

It is pretty rare. I think in those rare cases you go back and speak to the composer to ask for something that’s much more technically robust and reliable. In other cases, it works fine onstage, but it’s such a complicated setup, so it’s a question of whether there is a practical solution to achieve a similar result without the complexity and number of instruments, that makes it difficult to tour.

With Luke Nickel, there have been a few versions of his piece, hhiiddeenn vvoorrttiicceess. It has these amazing visuals using AI backgrounds combined with rollercoasters, and uses Soundbrenner haptic metronomes with independent tempo curves — so my hands are always moving at different tempi. I wear a ring that turns my hand into a kind of rollercoaster, moving it through space to control the strobe lights at certain points. For certain concerts — like when I went to Canada — I couldn’t carry these strobe lights and needed a shorter work to fit the programme, so he made a recital version without the lights. We also did an installation version, where people could put their hands on boxes and feel the different tempi themselves. So having these different versions is actually really useful for presenting the work in different contexts.

I also think it’s important (even without tech) to refine pieces after their premiere. There’s the case of Brahms and his collaboration on his Violin Concerto with the great violinist Joseph Joachim. After the premiere, Joachim and Brahms had a long exchange of letters in which they rewrote a lot of the violin part. The actual published version is very different from the original premiered version. That notion of all works being works-in-progress has been around for a long time. But the tech side adds a new element to that.

Philip Venables, ‘Answer Machine Tape, 1987’ (2022), performed by Zubin Kanga at hcmf//, Huddersfield, UK.

Speaking of concerti — you’ve recently performed Laurence Osborn’s concerto Schiller’s Piano at the Southbank Centre, with Manchester Collective

This new piece continues the setup Laurence has used in two previous works for me — using the piano with another keyboard to form a double-manual instrument — but in this case using the sounds of piano construction. Laurence went to the Southbank Centre piano workshop and recorded the sounds of pianos being repaired and put back together; each movement uses a different sound from the piano’s construction, whether it’s the hand drill, the knocking of the brass, or the end of a string being cut.

The concept of the piece came out of a visit Laurence made to the Buchenwald Memorial. In the Second World War, some prisoners there were forced to build a replica of the piano belonging to Friedrich Schiller, best known for writing the Ode to Joy — this was all part of Nazi propaganda, with only the shell of a piano built, and the real piano stored underground. The work is about fascism’s attempts to recreate the past.

What is it like juggling the tech and then performing with an ensemble at this larger scale? 

There were a few things to juggle, but the tech was relatively straightforward, and I’m used to this piano+keyboard setup, which meant it was actually not much more complicated than a conventional piano concerto. There are obviously new challenges when collaborating with a new ensemble, but we worked with Aaron Holloway-Nahum as conductor, and Manchester Collective were an excellent ensemble to collaborate with, bringing great energy, virtuosity and musical insight to the performances.

Have you done other works with piano / tech and a live ensemble, or is this the first time? 

I did a piece with Explore Ensemble earlier in the year — the curious codes of silence by Larry Goves, written for Aldeburgh Festival. It was for piano alongside a detuned keyboard acting as a kind of doppelgänger piano, and a Lumatone — an instrument with hexagonal keys that allows for more complex microtonal tunings — in this case allowing me to play chords within a 31-note scale.

I do enjoy working with other people as part of an ensemble. Working as a soloist is great — but you can do [all] sorts of other things when you work as part of an ensemble, that results in very different types of pieces being composed for the tech, and the opportunity to explore how these digital instruments interact with a range of traditional instruments.

You performed a solo set after the Manchester Collective gig — performing works by Tansy Davies, Alex Groves, and a piece by yourself. You have worked with Alex Groves before…

Alex Groves wrote a work for my Nonclassical album — Single Form (Swell) — that used LUMI keyboards to shape waves of sound. This new work — DANCE SUITE — is for the ROLI Seaboard, and uses nightclub sounds and samples, while playing with the expectations around the genre. 

Tansy Davies is another composer who doesn’t normally work with tech. She borrowed the Sequential Prophet Rev2 from me for a few months, and wrote Star-Way — which is a really beautiful piece that takes some of her orchestral and chamber sense of structure and sound and compresses it onto this instrument. It’s a really wonderful combination of instrument and composer.

And then my piece, Hypnagogia (after Bach) is a transformation of the final movement of Bach’s St Matthew Passion — playing different transformations of Bach using the piano, Korg Prologue Synthesizer and electronics, controlled by the MiMU gloves — which I use to manipulate the sounds and trigger synth samples. The ending is influenced by Wendy Carlos and her arrangements of Bach using synthesizers in the 1960s.

Zubin Kanga, ‘Hypnagogia (after Bach)’, performed by Zubin Kanga at Submerge Festival, Manchester, UK.

With your own compositions, do the ideas come when you are collaborating with other people? When you see composers take an idea in one direction, does it spark your mind to take a similar idea and implement it into your own work?

I spend a lot of time exploring how these new instruments and devices work, then often find that composers who I commission are missing something really exciting that the instrument can clearly do. So my approach to the instruments is usually completely different to what the composers I’ve commissioned have done with them. Originally, I was doing that with the piano, and I found that there were so many ways of doing extended techniques — whereas most composers are only playing with a narrow subset of effects — so I write the piece that I want to play for that instrument, rather than trying to get the composer to write that type of piece. A lot of the works for Cyborg Soloists have explored the full capabilities of instruments like the MiMU gloves, and that knowledge I developed in creating the works has then become valuable in discussing options for these instruments when a composer comes to work with them. 

I’ve noticed the works you’ve commissioned have tended to be thematic, exploring one parameter or element of the tech in focus — such as Robin Haigh’s swells in MORROW…

That work is all about controlling the speed of those repeating samples. I actually did a talk about it at the RMA [Royal Musical Association] conference recently, which was about each of these keyboard instruments that have been composed as part of Cyborg Soloists, and each of the composers just choosing one specific function. Like Benjamin Tassie and his work Earth of the Slumbering and Liquid Trees, which uses historical organ samples from around the UK and The Netherlands. He used the ROLI Seaboard and didn’t want to put a whole load of effects on the organ samples, because they have this amazing character; but he uses an LFO for parts of the piece to create these subtle rhythmic swells, which can move independently per finger. That’s a really interesting subtle effect using just one function of the instrument. 

And similarly with the TouchKeys: Laurence used the pitch bend function in Counterfeits (Siminica), and Oliver Leith used it as a microtonal keyboard — dividing the keys into two in order to allow it to play quarter tones. They are all exploring a very particular functionality of that instrument, when it can actually do many, many things at once. In these cases, the focus of the techniques makes for stronger works — they’re about a musical concept, rather than just being showcases for the instruments.

Because you have such a close relationship to both the instruments and the tech, whilst they are taking it from the angle of an idea. It’s a different process, a different mindset…

And maybe this is why composer-performers write these kinds of pieces. Historically, a lot of works by composer-performers end up being about the instruments. With Chopin, so much [is] about the piano and piano technique in a lot of his work, he is really interested in the actual physicality of the instrument. As a composer, my interest comes from exploring these instruments, and the fact that I have to learn how to use these devices very well and explore all their multiple functionalities to work with all of these different composers.

Benjamin Tassie, ‘Earth of the Slumbering and Liquid Trees’ (2023), performed by Zubin Kanga at the National Gallery, London, UK.

In terms of works where more theatrics and visuals are used, have you had times in the process where you have had more involvement? For example in Laura Bowler’s SHOW(ti)ME

A lot of the workshops consisted of figuring out what this text would be. How does it work with speaking and playing at the same time? Discussing what the story is about — my own piano and my practice, my relationship to the instrument, as well as elements Laura brought in from composing other autobiographical works. The piece came together very organically. We had a whole week where we developed a load of material together. The glittery motorcycle helmet came because she wanted me to wear some kind of headgear where I’m out of this “preparation for the performance” mode and into the “actual” performance — which in the piece only lasts for about thirty or forty seconds. It’s all about the whole build-up to it, and the performer’s contrasting public persona and private anxieties.

Obviously, it is very hard to do theatrical things if you are not a trained actor or involved in collaboratively generating them, and it needs the same degree of care and preparation as playing the instrument. In Alex Schubert’s piece, Steady State, which I am performing at hcmf//, there is no keyboard, just the brain sensors. For a lot of the piece it’s like an experiment unfolding onstage. I explain how it works, so that the audience understands how these brain sensors are controlling the sound and video, but it soon evolves into something much more strange and hallucinatory — a multimedia exploration about what it means for the brain to be a component in a system.

We developed the choreography with two young composers — Oscar Corpo and Ludmilla Mercier — who are performing with me at hcmf//. A lot of this needs to organically happen during a workshop, but that’s also the way people in the theatre world work – it needs time to develop and for all the performers to be able to learn and embody all the movement on stage.

How in-depth was Alexander’s original idea for the sensors at the start of the process?

We had already been working together for years. When I showed him the options for all of the tech for Cyborg Soloists, he told me that he wanted to use brain sensors, which was something that he hadn’t worked with before — and more importantly, this type of conscious live control of music and video hasn’t been done before. 

In order to understand what was possible with the sensors, we had to go to Serafeim Perdikis, a brain-computer interaction expert at the University of Essex. Serafeim introduced us to the steady state phenomenon — steady-state visually evoked potentials — which has been studied for decades, and is used to allow brain sensors to control speech software or prosthetics. How it works is: if you look at a flashing light on screen, the sensors will detect the same frequency at the back of the brain. If there are different flashing objects on the screen, it can tell which one you are looking at. In this work, this allows the brain to become a component in an audiovisual feedback loop, changing and being changed by what I’m seeing on screen.
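The mechanism Zubin describes is a frequency-tagging scheme: each on-screen object flickers at its own rate, and the system compares the brain signal’s energy at each candidate frequency to infer which object is being watched. A toy sketch of that idea, using a synthetic one-channel trace in place of real EEG data (the function names are my own, and real systems need filtering and much more robust statistics):

```python
import math

def band_power(signal, fs, freq):
    """Magnitude of a single-frequency (one-bin) DFT of `signal` sampled at `fs` Hz."""
    n = len(signal)
    re = sum(x * math.cos(2 * math.pi * freq * i / fs) for i, x in enumerate(signal))
    im = sum(x * math.sin(2 * math.pi * freq * i / fs) for i, x in enumerate(signal))
    return math.hypot(re, im) / n

def classify_gaze(signal, fs, flicker_freqs):
    """Return the flicker frequency with the strongest response in the signal."""
    return max(flicker_freqs, key=lambda f: band_power(signal, fs, f))

# Synthetic one-second "EEG" trace: looking at the 12 Hz target
# makes a 12 Hz oscillation dominate the visual cortex signal.
fs = 256
trace = [math.sin(2 * math.pi * 12 * i / fs) for i in range(fs)]
looked_at = classify_gaze(trace, fs, [8, 12, 15])  # → 12
```

The classified frequency then stands in for a decision — which is what lets a purely visual act (looking) drive sound and video in the feedback loop described above.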

And do you think that this work will be the catalyst to more pieces using brain sensors? 

The whole purpose of Cyborg Soloists is to make these types of technologies and tools more accessible. We are planning to package it up properly and release it open source, so that people can use the software tools with their own sensors.

Alex Paxton, ‘Car-Pig’ (2023), performed by Zubin Kanga at Rich Mix, London, UK.

As a performer, and being the driving force of Cyborg Soloists — you are not only premiering the works, but you are introducing the world to new technologies, or at least utilising them in a musical context? 

It’s making all of this tech easier to use for experimental composers, who want to create something that explores how digital instruments and devices can go beyond what they were originally designed to do. By empowering composers to do that, it helps these instruments become much more integrated into contemporary music. There’s a cost barrier with some of these devices, but there is an even bigger hurdle with expertise — the time it takes to develop knowledge of a new instrument to the point where it becomes a useful tool. So a lot of what we are doing is releasing these software tools and models we’ve made, which can then be used and adapted in other ways by composers, allowing the body of knowledge around these devices to grow.

Who are some of the composers you are going to work with for these large scale projects?

I will be working with Ensemble Offspring, collaborating with two Australian composers — Amanda Cole and Tristan Coelho — as well as the German composer Brigitta Muntendorf.

Robert Laidlow is writing a concerto I’ll perform with the BBC Philharmonic Orchestra. He’s doing amazing stuff with AI — he’s using the BBC Philharmonic’s orchestral archive, and training a neural network with that material. You get AI generated versions of the orchestra playing, and I will be controlling these AI sounds with a range of new instruments, including a new experimental device from the Intelligent Instruments Lab in Iceland.

I’ve been talking with Laura Bowler about a concerto, expanding on the ideas that we explored in SHOW(ti)ME. And there’s also a bunch of composers that I have wanted to work with but it hasn’t worked yet in terms of schedules, where we’re now discussing future works. And I’m also looking for new younger composers to work with, which is also an important part of my role as a performer and commissioner — going around to concerts, and looking at what emerging composers are doing and what is new. Spotting who will work well with this project.

Within the aim of Cyborg Soloists essentially…

Some composers only want to write chamber music for acoustic instruments and that’s great. But even if they don’t have the expertise, it’s whether they have the interest in doing something with new technologies. In many cases, I build up a shared knowledge of a particular use of technology with a composer that we want to explore further. I have seen a lot of these composers growing their approach to the instruments and the tech. They explore something in one piece, and then find something that they can take further in a future piece, develop[ing] their approach as you would if you were writing a series of piano pieces. 

It gives people an opportunity to try things that are not readily available. There aren’t many composition opportunities where you can get a ROLI keyboard. For some composers it’s their first chance at exploring this technology, which could then become an embedded part of their practice later down the line.

And for composers, if they are writing for one of these instruments, they are not going to buy it just to write for it once. So it’s important that I’m there helping build a repertoire for these instruments, that eventually will lead to some kind of practice around these devices.

It’s not so different to when Liszt and Chopin had relationships with piano makers in the 19th century, writing for these instruments to show off their unique capabilities. Like the Érards that Liszt played: these really robust instruments that were full of character and had these amazing, almost prepared-piano effects when you get to the extremes of the instrument, while the Pleyel pianos had this light touch and depth of colour, which really suited Chopin’s technique.

The piano is an instrument that has not really developed much in the last century, except for a few experiments, while electronic and even analogue instruments are changing all of the time: new devices are being made with new features, and there are keyboards that are digital/analogue hybrids. The technology has kept on changing and has been refined in response to what musicians want. That’s both an opportunity and a challenge for composers and performers working today — to be part of that process of defining what these new instruments and music technologies can be.

And you are someone who is helping push that into new realms, both for developers and composers.

In a lot of these cases, I am feeding back what these composers have done to the companies and researchers. Soundbrenner, for example, responded to the expanded capabilities that Luke Nickel requested: they opened up the app to new functions that have now become permanent features. Also with Andrew McPherson — the Keyscanner is going to be something that lots of people will want to get involved with in the future, converting the piano into a hybrid digital-acoustic instrument with endless possibilities. So it’s been great to be part of the process of this device’s development, providing feedback on early versions and seeing it refined in response.

It’s mentioned on the Cyborg Soloists website that there are eight specific workstreams…

I’ve since simplified this down to four core themes: the body as instrument, AI, new audiovisual interactions, and new digital instruments. There are different aspects to using the body as an instrument — using touch and motion sensors, or gesture capture using video, as well as brain sensors. There are many new digital instruments which can be pushed beyond their original design. Music and AI is a fast-changing field, and a lot of the earlier AI works we did with neural networks are now dated. Our projects with AI are a constant exploration of what is happening now with this technology, and what it means for musicians.

With audio-visual interactions, many of our planned projects involve collaborations with CoSTAR, a major AHRC research project based at Royal Holloway, in collaboration with a number of other universities and Pinewood Studios. This will be a hub for a lot of audio-visual research into new technologies for film, TV and theatre, as well as music. There are techniques like holographic projections that we’re exploring that have been used before in other fields, but there’s a huge range of new possibilities for composers to explore, interacting with musicians onstage using digital instruments and other devices to allow them to control and shape the visuals live.

Robin Haigh, ‘MORROW’ (2022), performed by Zubin Kanga at Rich Mix, London, UK.

Finally, I wanted to ask something on behalf of a lot of composers in general: what’s one piece of advice you would give when working with new instruments and technologies?

There is a difference between working with new tech, and working with new tech with performers. Even if a composer is unfamiliar with a device, they should understand how it fundamentally works: how it’s meant to behave, what the limitations might be in a performer’s hands — bringing a conceptual understanding of how it functions, similar to the level of understanding they’d bring to writing for traditional instruments.

The best pieces written for me have come when, even if the composer didn’t have the technical know-how, they arrived with a clear idea of how a functionality should work: how it will work onstage, how it will interact with the standard keyboard writing. That’s an important thing — having a very clear vision of what they want to do, and why they want to use that bit of technology for a live work on stage. What purpose does it serve? What’s the relationship with the audience? Does the audience understand what you are doing with that piece of tech — can they see it? Does it “read” in a theatrical sense?

It’s also about composers understanding the performer they are writing for. Some young composers are aiming to write the “ideal” piano or cello piece, for example — one that can work for any player — rather than writing for that particular pianist or that particular cellist. I also think some composers believe that if something is too specific to one performer, then other people won’t be able to play it. But I’ve found that if it’s really specific, it ends up being much more interesting — and ends up getting more performances. And the performer you are writing it for will want to play it more, as it has been written for them.

It is important to get to know your performers, learning about what they want to do and what fits in their programmes as well. You’re never writing in a vacuum; you are composing as part of a programme, collaborating with an artist with their own identity, abilities and artistic vision. Collaboration is the most important aspect of any musician’s artistic work, and it’s even more important when exploring new instruments and new technologies.

Learn more about Zubin Kanga’s upcoming performances at hcmf// on 18th and 19th November 2024, including the UK premiere of Alexander Schubert’s Steady State:

Learn more about Zubin and Cyborg Soloists at:

Header photo credit: Robin Clewley

About Author

Patrick Ellis (b. 1994, UK) is a composer, performer and curator based in London.

Since 2023, Patrick has been the creative director for PRXLUDES. His contributions have included over 30 interviews with emerging and esteemed artists, ensembles and organisations.
