Dorsen: We were working in Paris for two weeks in a little black box. Coming here to the Muziekgebouw is super. It is beautiful, with the blond wood and great acoustics, and the technical crew is extremely good. It is delightful working here.
What is algorithmic theater?
I use the computer as a performer. Computers can talk, they can translate, and in a way they can improvise, so I want to research their theatrical potential. For this concert, the Beatles song 'Yesterday' and the well-known song 'Tomorrow' from the musical Annie are scrambled by specially designed software and then projected over the three singers' heads as a musical score. The software produces a different score each performance, yet it always ends with a clearly recognisable version of 'Tomorrow'.
How did you get the idea for this?
Usually it starts with a simple thought, mentioned casually in an off-hand remark. In 2010 I started working with computer programmers, writing software for live performance. Software can produce speech and control lighting in real time, yet it is a non-human performer. I first used this idea in an improvised performance with just two laptops having a conversation based on Chomsky and Foucault. The next piece went in a more theatrical direction: a live-generated adaptation of Hamlet. The language was scrambled into new directions, making new scenes and new poetry, with computer voices speaking 20 different parts and triggering stage effects.
How did you come to put people on the stage?
I became curious what machine-human collaboration would be like. In 'Yesterday Tomorrow', the music is generated by a computer, and the singers interpret it on the fly. The human team and I create a structure, and the computer fills in the details. The singers have to make quick decisions about how they treat the score. The material goes from human to machine, is rewritten by the machine, and then comes back to the human, who puts an extra layer on top of it.
What happens at the concert?
The algorithm slowly transforms 'Yesterday' into 'Tomorrow'. The songs seem to be scrambled into musical gibberish, but the score keeps changing, and gradually you start to hear melody, tonality and something familiar again. In the end the process makes sense, and it is very satisfying when you reach the conclusion. One of the questions is whether the computer programme has creativity of its own, for it is different every night.
For me, it was also about getting away from language, towards music as the primary mode of expression. I was thinking of something as human as singing, which to me still has this very romantic notion of self-expression and the extraordinary beauty that humans are capable of producing. How would these work together: ultimate human expressivity and a strict rule-based procedure?
Are you satisfied with the piece?
Yes, and each night I am very curious what will happen. I chose the songs because they are so familiar, banal even, and they have very iconic melodies: it is a pleasure to hear them being messed with, deconstructed, acquiring a different beauty. When you come towards 'Tomorrow' at the end, there is a diffidence, a décollage between the singers, and to me it is much more satisfying than the actual song.
I constructed it as a midlife piece. I am 41 and have lots of questions: how do you stay engaged, what have you done, and how do you come to understand what older people have been telling you, that there is no arrival, just constant change.