NTO talk

In September 2023, I had the honor of delivering a presentation to the music division of the Norwegian Association of Theatres and Orchestras (NTO). The central theme of my talk was the convergence of music and artificial intelligence (AI), with a particular emphasis on the untapped opportunities this burgeoning field offers to the worlds of art and music, as opposed to merely delineating its challenges.

I initiated the discussion by offering a rudimentary introduction to AI and machine learning. Subsequently, I focused on two specific methodologies, classification and generation, that I deemed most pertinent to this specialized audience. To elucidate the core principles underlying these areas, I used two straightforward applications that I had developed for demonstration purposes.

AI-generated creatures on stage. Illustration by DALL·E

I then proceeded to explore existing services and applications that hold particular relevance for choirs and orchestras. I posited that although many commercial AI solutions in the musical domain are still in their infancy and often yield less-than-satisfactory results—especially for those with a nuanced understanding of music—there exists a plethora of intriguing possibilities for professional music ensembles. These opportunities span various facets of musical creation, ranging from composition and performance to engaging audiences in novel ways.

In conclusion, I highlighted the potential of advanced language models like ChatGPT as valuable collaborators in the conceptualization and development of new artistic projects and ideas. By focusing on the yet-to-be-realized possibilities that AI presents, I aimed to provide a balanced perspective that encourages proactive engagement with these emerging technologies.

The slides from the talk (in Norwegian) can be downloaded here.

 

AudioMostly 2023

AudioMostly 2023 logo

During my research term at Infomus, Casa Paganini, University of Genoa, Italy, I started a collaboration with Sanket Sabharwal, analysing a set of motion capture recordings of two dancers that I had made at the ZHdK in Zürich in the fall of 2021. This collaboration resulted in a paper that Sabharwal presented at the AudioMostly'23 conference in Edinburgh, as I did not receive sufficient funding from my university to attend myself.

Abstract: In this paper, we will present a pilot study that explores the relationship between music and movement in dance phrases spontaneously choreographed to follow phrases of electroacoustic music. Motion capture recordings from the dance phrases were analyzed to get measurements of contraction-expansion and kinematic features, and the temporal location of the peaks of these measurements was subsequently compared with the peaks of a set of audio features analyzed from the musical phrases. The analyses suggest that the dancers variably accentuate their movements to the peaks or accents in the music. The paper discusses the findings in addition to possible improvements of the research design in further studies.
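The core of the analysis described in the abstract, locating peaks in a movement feature and measuring how close they fall to peaks in an audio feature, can be sketched in a few lines. This is a minimal illustration with hypothetical data, thresholds and frame duration, not the actual feature extraction used in the paper:

```python
def find_peaks(signal, threshold):
    """Indices of local maxima at or above a threshold (simple sketch)."""
    return [i for i in range(1, len(signal) - 1)
            if signal[i] > signal[i - 1]
            and signal[i] > signal[i + 1]
            and signal[i] >= threshold]

def peak_offsets(motion_peaks, audio_peaks, frame_dur):
    """For each motion peak, the time (s) to the nearest audio peak."""
    return [min(abs(m - a) for a in audio_peaks) * frame_dur
            for m in motion_peaks]

# Hypothetical per-frame features (e.g. contraction-expansion vs. loudness)
motion = [0.1, 0.2, 0.9, 0.3, 0.2, 0.8, 0.1]
audio  = [0.1, 0.8, 0.2, 0.1, 0.9, 0.2, 0.1]
m_peaks = find_peaks(motion, 0.5)   # [2, 5]
a_peaks = find_peaks(audio, 0.5)    # [1, 4]
offsets = peak_offsets(m_peaks, a_peaks, frame_dur=0.04)
```

Small offsets would then indicate movement accents aligned with musical accents, which is the kind of comparison the pilot study makes.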

The paper is published by the ACM Digital Library and can be read and downloaded here.

 

Particles environment in MotionComposer 3

Since 2011, I've had the privilege of collaborating with Robert Wechsler, the founder and inventor of the MotionComposer project. Together, we've been developing a musical environment that uses computer vision technology and audio software to convert movement into music. This environment is designed to be accessible to individuals with varying abilities. The onset of the COVID-19 pandemic inevitably led to some delays in our development timeline. Specifically, the "Particles" sound environment, initially developed for MotionComposer 2.0, was not incorporated into version 3.0 as planned. However, following two pivotal workshops (one in Genoa in the spring of 2022 and another in Trondheim in 2023), Particles has officially been reintroduced as one of the six key sound environments available. I am delighted to announce that this updated version of Particles is now featured on the official MotionComposer website, complete with an enthralling demonstration by a dancer that beautifully showcases the capabilities and creative possibilities of this interactive musical environment.

Particles demo from MotionComposer on Vimeo.

 

Project with Johanna Ciampa

Stills from performance with Johanna Ciampa

In the spring of 2023, I did a series of workshops with dancer and choreographer Johanna Ciampa from Montana (US), developing an interactive performance entitled Moving, Listening and Being: An Iterative Process. The performance featured 3D sonic landscapes with elements controlled by the dancer through body-worn sensors and an interactive digital instrument. The workshops were part of Ciampa's Master's project, which she submitted to the international Choreomundus program Dance Knowledge, Practice and Heritage in the spring/summer of 2023. The topic of the performance was how human activities pose threats to aquatic life, using the coral as the point of focus for the dramaturgy of the performance. The main ideas and concept in the project were Ciampa's; my role was to be a co-creator and off-stage co-performer with responsibility for the interactive audio technologies involved. In the performance, Ciampa embodied the character of an adult-stage coral in a fjord. She controlled an interactive instrument intended to portray an unhealthy coral gradually returning to good health. Accompanying the coral sounds were sounds of differently sized ships as well as aquatic life, the former thanks to a set of bioacoustic recordings from the Norwegian Polar Institute in Tromsø.

The performance took place in Music Technology's Portal (Fjordgt.1 campus) at NTNU in front of a small, invited audience. We used the 20.2 speaker setup I have put up there, with a Midas M32 mixer and a Mac Mini running Reaper with fourth-order ICST ambisonics plugins to spatialize the sounds. My interactive instruments were integrated into a VST plugin made with Csound and Cabbage. Three NGIMU sensors (x-io) were used to track Ciampa's movements.
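NGIMU sensors stream their data as OSC messages over UDP, so the receiving end of a setup like this starts with decoding the OSC wire format. As an illustration only, here is a minimal pure-Python parser for a flat OSC message with float arguments; in practice one would use an OSC library, and the /gyro address below is just an example, not the NGIMU's actual address space:

```python
import struct

def _read_padded_string(buf, i):
    """Read a null-terminated OSC string and skip its 4-byte padding."""
    end = buf.index(b"\x00", i)
    s = buf[i:end].decode("ascii")
    i = end + 1
    return s, (i + 3) // 4 * 4  # advance to the next 4-byte boundary

def parse_osc_message(packet):
    """Parse one OSC message containing only float32 arguments."""
    address, i = _read_padded_string(packet, 0)
    typetags, i = _read_padded_string(packet, i)
    assert typetags.startswith(",")
    args = []
    for tag in typetags[1:]:
        assert tag == "f", "sketch handles float arguments only"
        (value,) = struct.unpack(">f", packet[i:i + 4])  # big-endian float
        args.append(value)
        i += 4
    return address, args

# Example packet: hypothetical address "/gyro" with three float arguments
packet = (b"/gyro\x00\x00\x00" + b",fff\x00\x00\x00\x00"
          + struct.pack(">fff", 1.0, 2.0, 3.0))
addr, values = parse_osc_message(packet)  # ("/gyro", [1.0, 2.0, 3.0])
```

Once decoded, such values can be mapped to the parameters of an instrument like the Csound/Cabbage plugin mentioned above.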

 

Interactive Digital Art & Societal Health Conference

The Interactive Digital Art & Societal Health Conference is a new conference organized by Jin Hyun Kim and Marcello Lusanna at Humboldt University of Berlin, taking place December 2-3. The organizers state that their motivation is to promote the use of interactive digital art, on both a theoretical and a practical level, in healthcare and other daily life contexts where societal health plays a role. The conference gathered scholars from many disciplines, including music therapy, psychotherapy, musicology, music technology, HCI, philosophy, anthropology and somatic interactive design.


I was honored to be invited to give a talk at the conference, which took place at the Kunsthaus KuLe. The title of my talk was Potential for health and well-being effects in interactive sonification of movements.

Abstract: This paper argues that interactive digital art combining aspects from dance and music through sensor and sound generation technologies has the potential to promote the health and well-being of users of all abilities. Dance and music in a more traditional sense are known to have positive health effects by motivating movement, creative expression and social interaction. Moreover, they can reduce the risk of physical illness by improving aerobic capacity, balance, elasticity and coordination, as well as of mental disorders by reducing stress and inducing positive emotions. Technologies for interactive sonification of dance movement combine aspects of dance and music by translating movement sensor data into sound and music, and if designed carefully they can have considerable potential for many of the same health and well-being effects that music and dance have separately. For example, they often imply feedback loops where a movement first generates a sound, which in turn can motivate the user to move in response to the sound. Through the possibilities of generating and controlling musical parameters on higher levels, interactive sonification of dance movement can afford degrees of inclusion that surpass those of, e.g., traditional instruments, which often require years of training to master. This is also why interactive movement sonification can offer people with different kinds of disabilities a way of playing music through their movements, and thereby a way of generating positive experiences of basic psychological need satisfaction as well as other elements of well-being such as positive emotions, engagement, relationships, meaning, and accomplishment. The paper shows how two different interactive systems/devices using two sensing technologies, the MotionComposer and the VibraChair, both afford this kind of positive health and well-being potential; the author has been engaged in the development of both. Whereas the former is a well-established product in its third version, the latter is still in the exploratory stages of development. The paper discusses and compares different aspects of the systems/devices, and concludes with some general suggestions for design aiming for positive health and well-being effects.

 

Multichannel setup in the Portal

"The Portal" is a room at the Music Technology campus in Fjordgt.1, originally designed for the two-campus program "Music, Communication and Technology" that NTNU and UiO ran from 2018 to 2022. During the fall, I have been working on a new spatial audio setup for the room. The previous setup was a small ring of eight Genelec 8020D loudspeakers, about 2.5 meters in diameter, covering only a small part of the room. My aim was a setup with a much larger listening zone (and sweet spot), including a height dimension. With 16 Genelec 8030A speakers at my disposal, the solution I chose was to place the speakers close to the walls at two heights. Although the aim was a relatively symmetric setup, the layout of the room and the equipment in it forced a number of compromises. Still, when testing the setup with different sound material decoded in up to 5th order ambisonics, the result was quite good. The students taking the course 'Electronic Music' (MUST3054) at the NTNU Master's programme in Creative Music Technology were the first to compose for the setup, each of them making original compositions in ambisonics audio. We were also fortunate to have a version of Natasha Barrett's composition Hidden Values decoded especially for the setup, which was also actively used in the course.
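For readers unfamiliar with ambisonics, the principle behind encoding a mono source into the format that such a setup decodes can be shown at first order; fifth order, as used here, follows the same idea with more components. This is a textbook-style sketch using ambiX conventions (ACN channel ordering, SN3D normalization), not the ICST implementation:

```python
import math

def encode_first_order(sample, azimuth, elevation):
    """Encode a mono sample into first-order ambiX channels (W, Y, Z, X).

    azimuth: radians counter-clockwise from front; elevation: radians up.
    """
    w = sample                                            # omnidirectional
    y = sample * math.sin(azimuth) * math.cos(elevation)  # left-right
    z = sample * math.sin(elevation)                      # up-down
    x = sample * math.cos(azimuth) * math.cos(elevation)  # front-back
    return [w, y, z, x]

# A source straight ahead contributes only to the W and X channels:
print(encode_first_order(1.0, 0.0, 0.0))  # [1.0, 0.0, 0.0, 1.0]
```

A decoder then combines these channels with per-speaker gains derived from the loudspeaker coordinates, which is why an irregular layout like the Portal's requires some compromises.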

The coordinates for the loudspeaker setup can be downloaded here.

The Portal (NTNU). The image shows half of the room and eight of the loudspeakers in the NTNU Portal.

 

Constellation Stories at Springfield (MA)

On Saturday the 5th of November 2022, Constellation Stories: Stories of the Night Sky, a dance performance with original music composed by myself and Madeleine Shapiro and choreography by Merli V. Guerra, was performed by the Luminarium Dance Company at the Springfield Science Museum (MA). The composition was made through remote collaboration: Shapiro contributed virtuoso cello improvisation from New York City, while I contributed electronic processing, added other types of material, and did the overall mixing and production, working from Genova, Italy and Trondheim, Norway. A local newspaper, The Republican, covered the performance, and you can see the story below.

Republican story

The performance featured dancers Alexa Barreiros, Melenie Diarbekirian, Kate Harbison, Joscelyn E. Hunter, McKenzie Lani Jones, Jamie Peña, Aniky Salima, and Marissa Stellato. It was reported that several hundred visitors attended the event.

 

Pädiatrische praxis article

Pädiatrische praxis logo

In 2022, I was invited to write an article about MotionComposer for the German journal Pädiatrische praxis together with Robert Wechsler, Alicia Peñalba, and Stephan Geiger. This is my first publication in German, and also my first in a medical journal.

Abstract (translated to English): It has been shown that digital music technology in combination with sensors and carefully written software can bring movement, joy and social interaction to people with a wide range of abilities, including even children with severe physical or mental disabilities. This article describes the MotionComposer and some of its possible therapeutic applications in sociopaediatric centers.

The article can be downloaded here.

 

AudioMostly 2022 conference paper

AudioMostly 2022 logo

AudioMostly is, as the website proclaims, "an interdisciplinary conference on design and experience of interaction with sound that prides itself on embracing applied theory and reflective practice". In 2022 the conference was held at the St. Pölten University of Applied Sciences in Austria on September 6-9. My contribution, about my work at the ICST (ZHdK) in the fall of 2021, was submitted as a short paper, which allowed me to do a demo and poster presentation at the conference.

Abstract: The paper describes a work-in-progress exploring the expressive and creative potential of dance phrase onsets and endings in interactive dance, using an artistic research approach. It briefly delineates the context of the presented work, before describing the technical setup applied, both in terms of hardware and software. The main part of the paper is concerned with the specific mappings of three different sections in the performance that the project resulted in. Subsequently, the process and performance are evaluated, including both the dancer’s feedback and observations by the author. The points from the evaluation are then discussed with reference to relevant research literature. Findings include that the dancer experienced an increased awareness of beginnings and endings in different sections of the performance, and that postural adjustments were necessary to make the interaction more robust.

The paper is available for download here.

 

Appointed full professor

In June 2022 I was very proud to receive notification from the Faculty of Humanities at NTNU that I had been appointed full professor in the field of music technology within the research area of experience and expression in embodied music performance practices involving music technology. The appointment was made on the basis of academic qualifications. The expert committee, consisting of professors Thor Magnusson, Trond Lossius and Wayne Siegel, noted that some of my work "indicates additional qualifications of relevance and value within the cross-disciplinary environment at NTNU, operating between music and arts practices, music technology, audio engineering and musicology".

 

SMC 2022 Paper

SMC 2022 logo

The SMC-22 conference (Sound and Music Computing) is, as the organizers inform us, "a multifaceted event around acoustics, music, and audio technology". It will take place in Saint-Étienne (France) on June 4-12, but will also have a considerable online program. The theme of this year's conference is Music Technology and Design.

I am happy to be presenting a paper entitled Designing interactive sonifications for the exploration of dance phrase edges at this year's conference. The presentation format is new this year: a 5-minute presentation followed by a hybrid poster session/demo.

Abstract: The paper presents practice-led research where interactive sonifications of dance movements are created so as to have special emphasis on the movement edges of dance phrases. The paper starts out by discussing to what degree interactive sonification of movement and the artistic practice of interactive dance have areas of overlap, and whether their contexts and intentions may be combined to complement each other. A discussion of the concept of salience and how it relates to perceptual edges in general follows, and more specifically to onsets and endings both of movements and musical objects. In this discussion, the author also considers the role of accents as well as different degrees of abruptness of changes. The paper subsequently presents a set of interactive sonifications demonstrating different degrees of salience from very high to very low. Details related to the technical setup, analysis of movement data and the movement-sound mappings are presented, and the results are discussed.

If you are interested, check out the project page with accompanying material. The paper can be downloaded here.

 

Casa Paganini Workshop, May 2022

On May 27 and 28 I hosted a workshop at Casa Paganini, Genova, with dancer and choreographer Cora Gasparotti and musician Giangiacomo Gallo visiting. Both are aspiring artists from Rome, where Gasparotti recently graduated from the Accademia Nazionale di Danza with a Master's thesis on the use of interactive sonification in dance training. Gallo has a background in electroacoustic composition and sound engineering, and is part of DiacronieLab, a group of musicians in Rome working with music and technology.

At the workshop we explored different technologies such as inertial sensors, EMG armbands (Myo) and stereoscopic 3D video tracking (MotionComposer). The workshop combined demonstration, discussion and practical exploration and improvisation, involving topics such as:
  1. different sensors' strengths and weaknesses
  2. sensor communication
  3. sensor data treatment methods
  4. movement to sound mappings
  5. audience communication and involvement

Excerpts from the workshop at Casa Paganini, showing dancer/choreographer Cora Gasparotti

We started by looking at the Myo armbands, and how data treatment methods such as scaling, filtering, noise gating and transfer functions can be used to provide the best conditions for developing an interactive instrument. Together we tested some simple mappings using the data from three of the electrodes to control sine tones and band-pass filtered white noise, which Cora explored in a little improvisation. The first section of the video shows the result of the demonstration and discussion of points 1-4 of the list above.
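The data treatment steps mentioned here can be sketched as a simple processing chain. The functions and parameter values below are hypothetical illustrations of each step, not the actual processing used in the workshop:

```python
def scale(x, in_lo, in_hi, out_lo, out_hi):
    """Linearly map x from [in_lo, in_hi] to [out_lo, out_hi], clipped."""
    x = max(in_lo, min(in_hi, x))
    return out_lo + (x - in_lo) / (in_hi - in_lo) * (out_hi - out_lo)

def smooth(prev, x, coeff=0.9):
    """One-pole low-pass filter to reduce sensor jitter."""
    return coeff * prev + (1.0 - coeff) * x

def noise_gate(x, threshold=0.05):
    """Suppress values below a threshold so idle noise stays silent."""
    return x if x >= threshold else 0.0

def transfer(x, exponent=2.0):
    """Non-linear transfer function shaping the control response."""
    return x ** exponent

# Chain: raw sensor value -> normalized, smoothed, gated, shaped control
raw, prev = 612.0, 0.4           # hypothetical EMG reading and filter state
v = scale(raw, 0.0, 1023.0, 0.0, 1.0)
v = smooth(prev, v)
v = noise_gate(v)
amplitude = transfer(v)          # e.g. drive the level of a sine tone
```

In a mapping like the one described above, each electrode's value would pass through such a chain before controlling a sine tone or a filtered-noise voice.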

Next, we tested several of the interactive instruments that I developed during my stay at the ZHdK in the fall of 2021, applying Myo armbands and NGIMU sensors from x-io (left and right wrists plus torso). I explained how each of the instruments was related to the composition in which it appeared, Beginnings and Endings Study I, and then Cora (and sometimes Giangiacomo) would explore freely before making a short improv for the camera (see sections 2-5 of the video).

On the second day we turned our attention to MotionComposer version 3, where we tested some beta features that will be released later this summer. Although the lighting conditions were far from ideal, we got to test four of its five environments: Tonality, Drums, Fields and Particles (which I implemented on the device during the workshop with MotionComposer the preceding week). Cora made two videos playing the Tonality environment, one with a chromatic scale and the other with a pentatonic scale.

 

Music for Constellation Stories

In February 2022 I was invited by cello improvisor Madeleine Shapiro and choreographer Merli V. Guerra to collaborate on the music for an upcoming dance piece that Guerra was choreographing for her company Luminarium. During two intensive weeks in February I collaborated remotely -- COVID-style -- with the renowned New York-based cellist to make a 14-minute piece. Madeleine sent me sections of improvisations on the cello and crotales, which I would then electronically process and use as elements in the composition. Madeleine would respond to this and send me new improvisations. I also added sounds that I had recorded myself, for instance a carpet of lively conversations from a piazza in Genoa, Italy, and percussion sounds played by Carl Haakon Waadeland, a retired music technology professor, drummer and percussionist. The final mix kept a main cello voice in the centre to capture a live feeling.

The piece received its world premiere on the 4th of March 2022 at Rider University (New Jersey, US) with dancers Tanisha Anand, Lex Barreiros, Kaniah Browne, Jossie Hunter, Nevaeh Peaks, Jamie Pena and Vee Williams. The video shows the full piece from the premiere, with audio recorded on site.

Constellation Stories by Luminarium Dance. Choreography Merli Guerra, music by Madeleine Shapiro (cello) and Andreas Bergsland.

 

Infomus presentation January 2022

As an introduction to Infomus at the University of Genoa, I held a hybrid presentation at Casa Paganini for the research group at the research center, entitled Sonification of automated movement analysis focusing on onsets and endings in interactive dance. The presentation mainly dealt with the interactive instruments developed during my stay at the ZHdK, including the movement analysis techniques that they were based on.

Demonstration of rarity index with NGIMU sensor.

One of these techniques was a slightly modified implementation of the rarity index, a computational model of saliency developed by the Infomus team. The presentation also included demonstration videos of the interactive sonifications made at Infomus. These videos accompany two forthcoming research papers, and will also be presented on the VIBRA pages soon.
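The rarity index itself is defined in the Infomus team's publications; as a loose illustration of the general idea behind such saliency measures, rating a value as salient when it deviates from its recent history, one could sketch something like the following. This is my own simplified stand-in, not the published model:

```python
def rarity(history, current, eps=1e-9):
    """Deviation of the current value from its recent history, in
    standard deviations (a simplified novelty/saliency sketch)."""
    n = len(history)
    mean = sum(history) / n
    var = sum((v - mean) ** 2 for v in history) / n
    std = var ** 0.5
    return abs(current - mean) / (std + eps)

# A sudden jump after a steady stretch scores as far more salient
# than another value close to the recent range:
steady = [0.50, 0.52, 0.49, 0.51, 0.50]
print(rarity(steady, 0.51) < rarity(steady, 0.90))  # True
```

Applied frame by frame to a movement feature, such a measure can highlight the onsets and endings that the interactive instruments were designed around.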