One of three winners of National Sawdust’s inaugural Hildegard Competition for emerging women and non-binary composers, the Los Angeles-based composer X. Lee defines their voice through the integration of interaction, electronics, multimedia visuals, gesture and movement performance, spatial design, and the unification of acoustic and electronic synthesis. Their background as a DJ and producer in the techno subculture, set against extensive education and training in classical composition theory and practice, has given them a unique understanding of the relationship between music, technology, and the human experience. The fully immersive energy and large-scale audio-visual interactions of raves inspire them to carry the abstract concepts and aesthetics of techno into their personal electroacoustic works. They have participated in summer festivals such as the Conservatoire Américain de Fontainebleau – under the mentorship of François Paris, Martin Matalon, and Allain Gaussin – and Contemporary Music Creation + Critique at IRCAM, which ran alongside Manifeste. Their audio-visual electronic works have won calls for scores from Ensemble Mise-En (for Open Bushwick Studios) and Listening to Ladies (for Ensemble Ctrl-Z). Most recently, they were featured on an Eastman School of Music radio show spotlighting trans and gender non-binary contemporary composers. This fall, X. Lee will continue their studies in the Cursus program at the Institut de Recherche et Coordination Acoustique/Musique (IRCAM).
Invited to choose their own interviewer for this National Sawdust Log profile, X. Lee selected the groundbreaking German electronic musician Carsten Nicolai, who performs and records under the name Alva Noto. In a recent conversation, they discussed scoring for electronics alongside acoustic instruments and creating immersive experiences in tune with the times.
ALVA NOTO: When did you become interested in music?
X. LEE: I’ve been interested in music since I can remember. My uncle is a double bass player; he’s a professor at Stanford and San Jose State, and my aunt’s a really awesome harpist. So I was around music all the time. I started playing cello seriously when I was 12 or 13, and I started deejaying around then, too.
At 13? Interesting! Did you get access to clubs at 13 already?
No, not yet, not then. It was just kind of my own thing. My mom rented out rooms in our house to college students. There was a girl who was dating this guy who was a DJ, and he kept his stuff in her room. So when she was at school, I used to sneak in and start deejaying—and he caught me. So he taught me how to… back then it was mostly hip hop, scratching and turntablism and stuff like that, so I learned that. And then I got into EDM and it was really interesting.
Right. I was listening to your stuff, and I was wondering, are you performing together with the musicians? How do you start such a composition?
Usually what I do is, I meet up with a musician and I say, what are the boundaries we can push? What are the sounds we can make with this instrument? I explore that for a day or two with them and then I write my piece, and then we will put it into action for the performance or rehearsals.
And how do you write? Do you do recordings with them? What is the starting point?
Definitely meeting up with them and seeing what kinds of sounds the instrument can make. I’ll take some samples and recordings of it, and then expand on that. I also usually write a lot of Max patches for my live and electronic interactions, so I see what kinds of effects work really well with that instrument and do some samples of that. And then I take it home, I sit with it, and then I write maybe a fixed electronic part with it, and then I notate it out.
From my experience, it’s very difficult to make notation for an electronic part. How do you solve that?
For the electronics, usually there’s a fixed portion, and then there’s usually a live-process portion. For the fixed portion, I kind of write it out almost like an instrument of its own. I just write out the impacts, the sustains, the crescendos, the decrescendos—I treat it like just any other instrument. I write it out in a notated way; sometimes it’s graphical, sometimes it’s just actual notes.
They may be very beautiful, interesting scores; it would be interesting to see them. Working with classical musicians, notation is… if you look at a classical orchestra or ensemble, a classical setup with instruments – but that’s not what you’re doing, actually, right? Electronics are a really essential part for you.
So how do you notate the sonic quality you’re achieving? How can the score reflect the piece? Or do you leave things open to the players? What would happen if somebody would play the piece without you?
I try to keep my notations as precise as possible, because I think with electronics, especially when it’s fixed media, it gets really restrained in timing. That’s the biggest thing I’m trying to overcome. And hopefully in my time at IRCAM, I’m going to focus a lot on that, the study of how to get something more interactive, completely self-evolving electronics.
If I understand this right, Max/MSP – that’s how you’re building your own instruments.
So if you do not perform the piece with them, you would deliver the instrument as part of the score?
I send the Max patch, I usually have written-out performance notes for how to use it, and I try to keep it as simple as possible. You don’t want a super-complex patch where no one knows how to use it. I’d rather just have a click-and-go for the fixed part, and then I’ll have like a few sliders for each instrument for the processing, and then I notate out when you slide those in.
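The delivery scheme Lee describes (a click-and-go trigger for the fixed part, plus a few notated sliders per instrument for live processing) can be sketched outside Max. This is only an illustration in Python; the actual deliverable is a Max/MSP patch, and every name below is hypothetical.

```python
# Sketch of the performer-facing controls Lee describes: a "click-and-go"
# cue list for the fixed media, plus MIDI-style sliders scaling a live effect.
# All cue names and ranges are illustrative, not taken from an actual piece.

def slider_to_mix(slider_value, lo=0.0, hi=1.0):
    """Map a 0-127 slider value to a wet/dry mix in [lo, hi]."""
    clamped = max(0, min(127, slider_value))
    return lo + (hi - lo) * clamped / 127.0

class CueList:
    """Click-and-go playback: each press advances to the next fixed-media cue."""
    def __init__(self, cues):
        self.cues = list(cues)
        self.index = -1

    def go(self):
        # Clamp at the last cue so an extra press cannot run past the piece.
        self.index = min(self.index + 1, len(self.cues) - 1)
        return self.cues[self.index]

cues = CueList(["intro_drone.wav", "impact_1.wav", "sustain_bed.wav"])
```

The point of the design, as Lee says, is simplicity for the performer: one button for the fixed part, a handful of sliders whose moves are notated in the score.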
Okay, that’s interesting. In a way, you’re building instruments. That’s a very different way of classical composition.
Yeah. Personally, the issue I have with a lot of the electronics in classical music is that they have been stuck in one era for a long time, almost like the ’80s. It’s like, you have something happen, and some reverb, and then it kind of develops. I really want to move away from that, and progress it to a time where the electronics are their own individual component, and the instruments are interacting with the electronics. We live in a time where technology is so immersive and you have to be completely immersed, and I think that should be represented in the music, too.
So how do you see the computer? What is the computer for you?
I see the computer as a tool that creates limitless opportunities, like new possible sonic realms and new forms of integration. And as technology continues to progress, so does its influence on music and performance. For me, computers are a key component of my entire compositional process. My techniques are based on technological performative interaction and algorithmic or spectral-based tonality, which are techniques married to the introduction and development of computers and technology.
You involve cello and voice; from a classical music background, this is very understandable. But you mix it with your stuff, which I think is a major part, right? What is the sonic quality you want to achieve by mixing both worlds?
I really want to treat electronics like its own instrument, and I want to hear the acoustical instruments become a machine, and we blur the lines between what is machine and what is actually just acoustic instruments. That’s what I’m really interested in exploring.
In one of the pieces I was listening to, where there was voice, as well – is there any kind of narration in your composition? When you use voice and you’re choosing words, what is the concept behind it?
In an abstract way I enjoy story, but I don’t like super direct “here’s the story line,” because I think music is up for interpretation, always. When you have something more abstract, it can mean different things to different people. What’s important is the emotions that you try to engage the audience with, with what you have going on sonically and visually. In that piece specifically, the voice and cello piece, I have this dancer in the box in the visuals, so it’s almost this feeling of enclosure, and the tension I tried to create in the music was to make a claustrophobic feeling. So I focus on the emotions more than the actual story line, or a direct reason or political reason or whatever.
Yeah. When I compose, sometimes I have an image or some very short sequence out of a movie just to remind me of a feeling or something. What do you use to keep yourself focused on the feeling or atmosphere you want to achieve in a piece? Do you have something like that?
Yeah, I definitely have… not really storylines, but definitely more images and feelings. But my brain is kind of weird: it’s like glitchy images, you know? I usually have an idea of something I want to see visually – for example, that piece, or in my other pieces I have lights, or performance gestures. This piece that’s going to be performed at Hildegard, the last section is just going to be the performers and electronics – they put their instruments down and they grab their phones, and they’re just glitching with their phones, like their movements. I think the engagement of the audience is watching the ritual of performing, so that’s what I have in my mind: How is the audience going to see this, and what is going to stimulate them visually?
I visit Japan a lot, and I have a feeling that you have a strong relationship to the ’80s noise movement in Japan – Merzbow, for instance. Many of these people – it was basically pre-computer – they had these incredible setups with pedals and modular systems and being very noisy. I have the feeling that you’re very interested in this noise idea.
Totally, totally. Noise is super important. Even in my acoustical writing, I’m focusing on the timbres of noise. So when I write my acoustic parts, I actually sometimes take noise samples and I put it into… I don’t know if you know what OpenMusic is? I put it into something like AudioSculpt, I get the frequencies that I want, I put it into OpenMusic, and it shows me the harmonic changes or chordal changes within it through an algorithm. So that’s usually how I write my acoustic parts. And I’m like, Okay, this is the frequency I want to use to accentuate this noise, or create the same kind of noise, and again, that blurring of the machine and acoustic feel.
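Lee’s workflow runs through AudioSculpt and OpenMusic: analyze a noise sample, pull out its strongest frequencies, and derive pitch material from them. The same idea can be sketched with a plain FFT. The peak picking below is deliberately naive, and the synthetic test signal stands in for a real noise recording; nothing here reproduces Lee’s actual patches.

```python
import numpy as np

def spectral_peaks(signal, sr, n=4):
    """Return the n strongest local spectral peaks (in Hz) of a mono signal."""
    win = np.hanning(len(signal))
    mag = np.abs(np.fft.rfft(signal * win))
    freqs = np.fft.rfftfreq(len(signal), 1.0 / sr)
    # Local maxima: bins strictly greater than both neighbours.
    is_peak = (mag[1:-1] > mag[:-2]) & (mag[1:-1] > mag[2:])
    peak_idx = np.where(is_peak)[0] + 1
    strongest = peak_idx[np.argsort(mag[peak_idx])[::-1][:n]]
    return sorted(float(freqs[i]) for i in strongest)

def freq_to_midi(f):
    """Nearest equal-tempered MIDI note for a frequency in Hz (A4 = 440 = 69)."""
    return int(round(69 + 12 * np.log2(f / 440.0)))

# Stand-in "noise" sample: two embedded partials at 220 Hz and 660 Hz.
sr = 44100
t = np.arange(sr) / sr
sig = np.sin(2 * np.pi * 220 * t) + 0.5 * np.sin(2 * np.pi * 660 * t)
```

Feeding the extracted frequencies through `freq_to_midi` gives the kind of pitch material Lee describes writing into the acoustic parts.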
The pieces I heard all sound very alive. To what part do you allow improvisation?
Oh, actually, a lot. I have my guidelines and I have my piece, but I think it’s really important for a musician to have open space. When something’s too rigid, it sounds too rigid; you lose the life of the performance. But if you leave it too open-ended, musicians don’t know what to do, because they’re not trained to improvise like that. You have to give restrictions with your freedom. So I’m like: Okay, here’s a gesture that I kind of like; I write that out and I’m like, continue with sporadic gestures similar to this. Or, here’s the selected pitches; use these pitches and do your own thing. Here’s the rhythm, do whatever pitches you want, and then after 10 bars, start adding in this pitch within it.
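The restricted freedom Lee outlines (a fixed pitch pool, with a new pitch admitted after a set number of bars) can be sketched as a toy generator. Everything here, the pitch names included, is illustrative; in practice the performer improvises within these constraints rather than reading generated notes.

```python
import random

def constrained_line(allowed, bars, beats_per_bar=4, add_after=10,
                     added=None, seed=0):
    """Generate one pitch per beat from a restricted pool, admitting an extra
    pitch after a given bar: a sketch of 'restrictions with your freedom'."""
    rng = random.Random(seed)
    line = []
    pool = list(allowed)
    for bar in range(bars):
        if added is not None and bar == add_after and added not in pool:
            pool.append(added)
        for _ in range(beats_per_bar):
            line.append(rng.choice(pool))
    return line

notes = constrained_line(["C", "E", "G"], bars=12, add_after=10, added="F#")
```

Before bar 10 only the original pool can appear; from bar 10 onward the added pitch becomes available, mirroring the "after 10 bars, start adding in this pitch" instruction.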
How do you see the relationship between Eastern and Western music? You’re faced with very different scales, a very different way of composition. Is this something you have kind of floating in your bag?
That’s interesting… I’ve actually never thought about that. I’m not conscious of it, if it is there. Whatever culture you’re from, I think it’s almost just a natural instinct. I think I actually grabbed more from the French school of spectralism, because that’s what I studied in school. Messiaen, Boulez, stuff like that.
How do you feel about Stockhausen and the German school?
Awesome! [laughs] I love it. It’s good stuff.
I attended a concert of Stockhausen’s, when he was still alive, and he was very much into taping things…
Back when tape was cool! [laughs]
But it was almost like a competition: who has the most channels? Are multichannel systems something that interest you? Stockhausen worked with 128 channels in Osaka, for instance, but this was at a time when no computers existed, and I have no idea how he operated 128 channels at the same time. But this was a very important aspect; the French school had this idea, as well. Do you use Bluetooth speakers?
Oh, yeah, using SPAT, something like that?
And many different speakers, like the GRM. You’ve probably heard about those guys. How important is the P.A.?
It’s very important. I think spatialization creates the space. The problem I had with a lot of electronic and electroacoustic music, especially in the student setting, is a very 2-D feel: You have the acoustic playing, and then you have this speaker system that’s playing back some electronics, and there’s a detachment – it’s too obvious to me. In a dream world, I would have a completely immersive space, almost like a black-box space. I really want to lose the disconnect between virtual and physical. That would be ideal… but it’s really hard to accomplish on a student budget. [laughs]
Do you mic the instruments?
Definitely. Especially for the live processed…
In my experience, it was always very helpful, in order to give the acoustic instruments and electronics common ground, that they’re all going through the speaker system.
Yeah, that makes everything a smoother mix, I think. But when you spatialize, that’s exciting, because you don’t have to have any effects on an instrument. Let’s say it’s a flute, and they play a little gesture; you can spin it around, and that immediately puts it in a 3-D space for the listener.
Yeah. Interesting. So in a way, you’re not just doing classical composition; you’re much more interested in a kind of multi-sensorial environment.
Totally. And I think that comes from my DJ background of going to raves. There’s intense stimuli, constantly. You have lights, you have visuals, you have the energy of the crowd, you have the overbearingly loud music. It’s important to feel that immersed in your space, I think. And as we go forward into the future, I see people wanting that more and more. With the development of A.R. and V.R., we’re constantly wanting to be more immersed in whatever we’re experiencing.
It sounds very interesting. So when are you going to perform your piece?
I’m in New York right now because of rehearsals, and the performance is on June 12.
And you’re going to use a dancer, or lights?
I’m going to have some visual components. I’m working with a friend who’s a V.R. architect, so he’s creating a virtual space, a futuristic kind of city. And then we have a Kinect camera facing the conductor, so as she’s conducting, there’s going to be visual interaction with particles and movement. So it’s going to be an interactive V.R. virtual space.
Very cool. Good luck for your performance. I’ll listen more carefully to your stuff… and keep me updated.
X. Lee’s music will be featured in the Hildegard Competition Concert at National Sawdust on June 12 at 7pm; nationalsawdust.org