Paper at the Sound and Music Computing Conference



Together with Sigurd Saue and Pekka Stokke I wrote a paper about VIBRA for the Sound and Music Computing Conference in Malaga.

Abstract:

The paper presents the interactive dance project VIBRA, based on two workshops taking place in 2018. The paper presents the technical solutions applied and discusses artistic and expressive experiences. Central to the discussion is how the technical equipment, implementation and mappings to different media have affected the expressive and experiential reactions of the dancers.

The paper can be found here: http://folk.ntnu.no/andbe/Content/VIBRA_paper.pdf



NRK P3 Interview

The NRK P3 show "Urørt" made a short feature about instrumental music in February 2019, and I was interviewed as a contributor. The feature with the interview is available here.






PhD supervision course

I attended a two-day course for PhD supervisors at NTNU in January 2019, led by Kari Smith, professor at the Department of Teacher Education at NTNU and leader of the National Research School for Teacher Education. The course qualified me as a "certified supervisor", which, under the new regulations for the PhD programme at NTNU, is required for at least one member of a PhD candidate's supervisor team (alternatively, a supervisor qualifies by having supervised before).




The course covered many issues relevant to PhD supervision, among them regulations and framework, building relations, gender issues, ethics, co-authorship, submission and what to do in the case of rejection. We also got to discuss some interesting cases, including the typical "burnout" that many experience in the last phase of their PhD period. The course also highlighted the cultural differences at NTNU, especially between hard and soft sciences, article-based and monograph theses, and small and big departments.



MotionComposer workshop at Dämeritzsee, Berlin

The creative part of the MotionComposer team gathered for four days in December 2018 at the Dämeritzsee Hotel - next to a beautiful lake - for a "hackathon"-style workshop, aiming to develop the 3.0 version further through long days of work.


The Dämeritzsee, just outside of Berlin


The new tracking software using stereoscopic camera input was now much more developed, and could deliver tracking data of quite good quality for many of the core parameters. For my part, it was the first time I could try the new version of Particles with the new tracking system, and we were happy to experience that the central features of the sound environment worked well. There were some issues with certain parameters being too "trigger-happy", so this is something to be further improved in the tracker.

As for the features of the Particles sound environment, we decided to add arm hits to the side, kicks and jumps. I briefly made some sketches with glissanding sound particles that had some interesting qualities, and these will be developed further for the final version, especially by giving them a sharper attack. Here are some hits followed by a few kicks:



Sketches for hits and kicks for Particles for MC3.0
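In rough outline, such a glissanding particle is simple to sketch in Csound. This is only a minimal illustration of the idea, not the actual Particles code - instrument name, parameter values and the plain sine source are mine; the sharper attack is simply a very short rising segment at the onset:

```csound
<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
sr = 48000
ksmps = 64
nchnls = 1
0dbfs = 1

instr GlissParticle
  ifrq1 = p4                               ; start frequency (Hz)
  ifrq2 = p4 * p5                          ; end frequency (p5 = gliss ratio)
  kfrq  expon ifrq1, p3, ifrq2             ; exponential glissando
  aatt  linseg 0, 0.002, 1, p3 - 0.002, 1  ; 2 ms attack -> sharper onset
  adec  expon 1, p3, 0.001                 ; fast decay, percussive character
  asig  oscili 0.4, kfrq
  out asig * aatt * adec
endin
</CsInstruments>
<CsScore>
i "GlissParticle" 0    0.25 880 0.5  ; a "hit": short downward gliss
i "GlissParticle" 0.4  0.15 150 0.4  ; a "kick": lower and faster
</CsScore>
</CsoundSynthesizer>
```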


I also spent a bit of time preparing a workflow for making rhythms for Particles, including building a sequencer interface in Max that writes a file that can be loaded directly into Csound. Here is a screenshot of the 8-track, 64-step sequencer:

8-track, 64-step sequencer for Particles, built in Max
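The Csound side of such a workflow can be very compact. A minimal sketch, assuming the Max sequencer writes the step values as plain numbers to a text file (the file name, step rate and the little test sound below are illustrative, not the actual Particles code) - GEN23 reads the numeric text file straight into a function table:

```csound
<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
sr = 48000
ksmps = 64
nchnls = 1
0dbfs = 1

; one track of 64 step values (0 = rest, 1 = hit), written by the sequencer
gipat ftgen 0, 0, 64, -23, "pattern.txt"

instr Sequencer
  kstep init 0
  ktrig metro 8                             ; 8 steps per second
  kval  table kstep, gipat                  ; value at the current step
  ; fire only on steps whose value is > 0
  schedkwhen ktrig * kval, 0, 0, "Hit", 0, 0.1
  kstep = kstep + ktrig                     ; advance on each trigger
  kstep = kstep % 64
endin

instr Hit
  aenv expon 1, p3, 0.001
  asig oscili 0.3 * aenv, 660
  out asig
endin
</CsInstruments>
<CsScore>
i "Sequencer" 0 8
</CsScore>
</CsoundSynthesizer>
```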

We concluded the workshop by setting milestones for the further development of Particles in the spring of 2019.


 

Stocos workshop and artist talk

As a part of the ARTEC seminar series, I had invited Stocos, a Spanish/Swiss artist trio creating interactive dance pieces, to do a two-day workshop and an artist talk with demonstration. The trio consists of Pablo Palacio, composer and interactive music instrument designer, Muriel Romero, choreographer and dancer, and Daniel Bisig, visual artist, sensor technician and programmer. The group combines research into embodied interaction, artificial intelligence, and simulation of swarm behavior with artistic exploration of music, visuals and dance.


Stocos (Pablo, front left; Daniel, front center; Muriel, far right) with workshop participants


The workshop took place at DansIT's (Dance in Trøndelag) studio at Svartlamon in Trondheim. It was a collaboration between ARTEC and DansIT, where the former covered the expenses related to travel and accommodation for the group, and the latter offered me their venue free of charge as a residency. A total of fourteen participants (including me as the organizer) took part in the workshop, even if some were not present at all times. The group consisted of people with widely varying backgrounds: composers, musicians, dancers, actors, media/video artists, stage designers, dramaturgs, philosophers and music technologists. Eight of the participants were either employees or students at NTNU, and another three had relatively recently completed their degrees (Master/PhD) at NTNU and were now working in the art sector in Trondheim. Since the number of participants was restricted, additional applicants were placed on a waiting list.

The first workshop day started with a presentation of the workshop leaders in Stocos, their artistic aims and statement, and then continued with an overview of their work during the last 10 years or so. A core concept of the group, which is also reflected in their name, is the use of different types of stochastic algorithms, both at signal level (in the form of noise) and in different ways of mapping movement parameters to visuals and sound. In addition, different ways of applying swarm properties, especially in mapping and visual projections, were central. This was applied for the first time in the piece with the same name as the group, where the swarms were first of all visible as minimalistic visual projections, mostly in black and white, showing either moving entities or trajectories of invisible moving entities - interacting to greater or lesser degrees with the dancers. Another work, Piano-Dancer, was an exploration of the same core principles of stochastics and swarms, but here a single dancer's movements and algorithms controlled a motorized piano. In the piece Neural Narratives, they created virtual extensions of the dancing body with computer visuals, with tree-like entities following the movements of the dancers while also having a semi-autonomous character. Lastly, they presented their most recent work, The Marriage of Heaven and Hell, based on the literary work of William Blake. In this work they continued their work with interactive control of voices, this time using phrases from Blake's poem. Particular to this work was the use of Laban-derived bodily expressive aspects, such as energy, fluidity, symmetry or impulsivity, translating them in real time into audible and visual digital abstractions. This piece also featured body-worn interactive lasers, where the movements of the dancers controlled whether they were active or not. The presentation spurred a lot of questions and discussion about everything from technicalities to aesthetic issues.

The second part of the first workshop day was more technical and focused on the different types of sensors the group used. They started by showing technology they have used in earlier years, namely the Kinect. Using the open source driver implemented in EyesWeb, the sensor gave them access to both the contour and the skeleton of the dancer. Even if the Kinect had relatively high latency and the skeleton tracking was quite buggy, they had still used it in recent years. More and more, however, they had moved to IMUs for their tracking. They used a Bosch/Adafruit BNO055 connected to an Arduino with a wi-fi shield, with everything put in a 3D-printed case and attached to the arm or the lower leg with an elastic band. This avoided several of the problems associated with video-based tracking (occlusion, IR light from other sources, etc.), even if there were specific challenges associated with the IMUs as well. The IMUs also offered absolute orientation data in 3D, making it possible to track the orientation and rotation of the extremities, not only their movements. Finally, the group showed us their most recent development, namely sensor shoes with seven pressure sensors in the sole.

Daniel Bisig explaining the construction of the sensor shoe.

Having so far only monitored the sensor values in visual graphs, the last part of the first workshop day focused on mapping those values to sound. Here, the participants got to try out the IMU bands and the sensor shoes with several different mappings - synthetic singing voices, sampled speaking voices, stochastically modulated noise, and several others.

The second day of the workshop started with a movement session led by Romero, focusing on exercises related to Laban's theories of the individual choreographic space around the dancer, and how it can be divided into planes cutting in all directions around a dancer. Many of the exercises directly articulated these planes.

After the movement session, the focus moved to the computer-generated visuals. Most of these were in some way or another derived from swarm simulations, and Daniel Bisig spent quite some time explaining how these were programmed and manipulated to create different visual expressions. Among other things, we got to see how using the individual agents' traces as visual elements, along with panning and zooming the space they inhabited, created a multitude of different patterns and effects. Some of the participants also got to try out an interactive algorithm that used the contour around the body to attract the swarm in such a way that it delineated the user's silhouette. We also got to see how the swarms could articulate letters using more or less the same technique, and how dampening the attraction forces made the contours around the letters less distinct.

Workshop participant Elena Perez trying out the interactive swarm visuals.

There was also time to try out the interactive lasers (with sound) from The Marriage of Heaven and Hell, which was, naturally enough, very popular. All the lights were turned off and a hazer was activated, so that the trajectories of the lasers were visible. The lasers all had lenses spreading the beam vertically by about 30%, so that the result was a line when projected onto the floor. We also got to see how moving the lasers in different ways generated different effects, like "scanning" and "strobe". We then did further tryouts in the darkened space - now with the sensors controlling stage lights (via MIDI/DMX) together with transformed vocal sounds. Even if the lights had significant latency, it was nice to see how they could be mapped from the IMU sensors.
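As an illustration of the principle, such a light mapping can be as simple as scaling a sensor value to a controller range. Here is a minimal Csound sketch - my own reconstruction, not Stocos' actual setup - assuming the orientation arrives as OSC and a MIDI-to-DMX converter listens on the MIDI output (enabled with -Q):

```csound
<CsoundSynthesizer>
<CsOptions>
-Q0 -odac
</CsOptions>
<CsInstruments>
sr = 48000
ksmps = 64
nchnls = 1
0dbfs = 1

giosc OSCinit 7000                     ; port number is an assumption

instr LightMap
  kroll init 0
  kk OSClisten giosc, "/imu/roll", "f", kroll  ; roll angle, -180..180 degrees
  kscaled = (kroll + 180) / 360 * 127          ; scale to MIDI CC range
  kscaled limit kscaled, 0, 127
  kval = int(kscaled)
  kchanged changed kval
  if kchanged == 1 then
    midiout 176, 1, 10, kval           ; CC 10 on channel 1 -> e.g. DMX dimmer
  endif
endin
</CsInstruments>
<CsScore>
i "LightMap" 0 3600
</CsScore>
</CsoundSynthesizer>
```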

At the end of the workshop, Stocos presented longer extracts from several of their pieces, to answer the question that many participants had posed: How do you put it all together? We then got a much better understanding of the artistic contexts and intentions in which the different technologies and mappings were used - and further how their artistic expressions could be transformed with different techniques.

All in all, the impression was that the participants were highly inspired by the workshop, and several of them talked about getting new artistic ideas. Stocos were also impressed with the engagement the participants showed. Several people also expressed a wish to have Stocos back in Trondheim to do a full performance.

Artist talk at Kunsthallen

On Monday the 26th of November, Stocos gave an artist talk at Kunsthallen as a part of the ARTEC seminar series. Around 15 people attended. Stocos presented their artistic aims and statement, and then continued with an overview of their work during the last 10 years or so. In between the discussion of the different pieces, Muriel Romero demonstrated the sensor instruments used in the different performances. The artist talk lasted 90 minutes.

Muriel Romero demonstrating the sensor shoes. Daniel Bisig is watching.




Student concert at Planetariet

As a part of the course Electroacoustic Music - Analysis and Composition that I teach at NTNU, five master's students in the music technology programme got to perform their newly composed pieces at the Planetarium of Vitensenteret (The Science Centre) in Trondheim on November 16 2018. The concert was the first milestone in a joint effort to transform the stunning 36.2-channel sound system, custom-made for planetarium shows, into a concert venue for multichannel/spatial music. NTNU Music Technology, with myself and PhD candidate Mathieu Lacroix, composer Frank Ekeberg, and pedagogue at Vitensenteret Lars Pedersen, have been researching a setup that can work for multichannel presentation of electroacoustic music. After quite a bit of experimentation during the fall of 2018, we ended up with a solution where we used Blue Ripple Sound's plug-ins to play music in 3rd order ambisonics (16 channels) through the Reaper DAW running on an external laptop connected to the IOsono system via Dante. At the concert, the students presented compositions they had made as a part of the course, adapted to the 36.2 setup at Planetariet. The concert was concluded with Ekeberg's piece Terra Incognita from 2001. The concert program can be read here. It is my intention to publish a blog post later in the winter with details about the setup, to help others arrange multichannel concerts at Planetariet.






Panel at RITMO Workshop

The newly started Centre for Interdisciplinary Studies in Rhythm, Time and Motion, RITMO, hosted a workshop to celebrate its opening on November 12th-14th 2018: the RITMO International Motion Capture Workshop.
I was invited by the chair of the event, Alexander Jensenius (UiO), to participate in a panel about interactive music systems together with John Sullivan (McGill University), Cagri Erdem (UiO) and Charles Martin (UiO). Each of us gave a short presentation of our work before we sat down for a panel discussion, also involving the audience. In my introduction, I presented the particular challenges of creating interactive music systems for people with different abilities in my work with MotionComposer.

The workshop featured several panels like this, with topics like Music Performance Analysis, Music-Dance Analysis, Music Perception Analysis and Different Types of Motion Sensing. One whole day was also dedicated to different tools (MoCap Toolbox, MIRToolbox, EyesWeb, and more). High points in the program were the three keynotes by Marc Leman (Ghent University), Marcelo Wanderley (McGill University) and Peter Vuust (Aarhus University). We also got a tour of the RITMO centre, showing off the motion capture lab, eye tracking lab and more.


Panel about motion capture tools at the RITMO International Motion Capture Workshop


Sound design workshop at Fusion Systems

On October 8-11 2018, the German company Fusion Systems, which works mainly with computer vision and sensor fusion for traffic and industry applications, hosted a workshop together with the Palindrome Dance Company. The workshop aimed at developing sound design for interactive music applications for people with different abilities. I was invited to the workshop together with Marcello Lusanna and Ives Schachtschabel to work on sound design and meaningful movement-sound mappings. During the workshop we got to try out a work-in-progress prototype of a stereoscopic motion tracking system.


Stereoscopic motion tracking prototype at the Fusion Systems workshop.




Panel at Knowing Music, Musical Knowing

Knowing Music, Musical Knowing was a conference and international music research school hosted by my university, NTNU, and the Grieg Research School, October 23rd-26th. On the website of the event, it was described in the following manner: "This event will explore the complex and multifarious connections between music and knowing. Bringing together key thinkers and practitioners in contemporary interdisciplinary music studies – including but not limited to dance studies, music therapy, ethnomusicology, music technology, historical musicology and music education – this four-day school will inspect both the unique forms of knowledge embodied through musical practice and the different ways humans create and impart knowledge about music."


Myself together with Elin Angelo, Jørgen Langdalen, Simon Gilbertson and Kari Holdhus.


For this event, I was asked to participate in a panel about the challenges and possibilities for musical knowledge transmission, learning and education between music disciplines. The panel consisted of professor Elin Angelo; Head of the Department of Music at NTNU, Jørgen Langdalen; associate professor of music therapy Simon Gilbertson; associate professor Kari Holdhus from Høgskolen Stord/Haugesund; and postdoc Nora Kulset.

In my five-minute introduction I briefly discussed how common denominators in the form of passion, listening and musicking bodies can allow us to build understanding, communicate and improve learning across disciplines. The whole introduction can be read here.



Music Tech Fest, Stockholm, September 2018

Music Tech Fest (MTF) is an annual event gathering artists, professionals, hackers and academics for a few days of work in an intense and stimulating environment. This time KTH in Stockholm had taken on the event, with the main event happening September 7-9 2018.

I was very happy to be invited to participate in the #MTFLabs, an extended part of the whole thing going on for a full week, Monday - Sunday. In the Labs, a total of 80 artists, hackers and academics were gathered with a mission of putting on a 45-minute show at the end of the week. Everything took place inside a shut-down nuclear reactor hall 25 meters below ground.

The reactor hall at KTH. Photo by Mark Stokes.

The whole thing was kicked off by presentations by Michela Magas, the founder of MTF, Danica Kragic, Airan Berg and pop singer Imogen Heap. Then, through a process of self-organization, we split into different groups and subgroups, according to what we wished to work with in terms of technology and artistic expression.

I soon found a good connection with Joseph Wilk (UK), a creative (live)coder and computer wiz, and Lilian Jap (SE), a media artist and dancer, and we decided to form a group under the theme of paralanguage. Later, the group was completed by Kirsi Mustalahti (FI), an actress and activist working with arts for disabled people through the Accessible Arts and Culture (ACCAC) organization.

Me with Lilian Jap, Joseph Wilk and Kirsi Mustalahti

Inspired by the post-apocalyptic feeling in the reactor and the theme of extra-terrestrial colonization that circulated in the whole of the #MTFLabs group, we decided to make a piece about having to re-invent language through paralanguage and body movements. Joseph discovered a really nice speech database of English accents, featuring almost 2700 sound files with a single statement spoken by speakers from all over the world. The database was a nice metaphoric expression of "humanity" that we felt was a good fit with our idea. Our initial plan was to use a WaveNet encoder - a machine learning technique working on the audio sample level - to train a model that we could control with Kirsi's and Lilian's movements. However, it turned out that this took a lot more time than we had at our disposal.

With the sample database downloaded, we at least had a rich source of sounds that also had a nice consistency. Joseph was then able to organize the material so he could play it using his live coding setup, and this worked so well that we kept it in the performance for the build-up in the final section.

Without a trained model and an interface to the parameters of the database, it was difficult for me to create an instrument that could play the whole database. I therefore chose to use only a subset of the samples (about 100 of them), which I stretched and layered in different ways using Csound. We put an NGIMU sensor and a Myo sensor armband on each of the dancers, mapped so that each dancer controlled her own individual set of voices. We then structured the piece in two sections: the first (discovery and first contact) dwelling on two single syllables, "plea" (from please) and "call", with each syllable played by one dancer; and the final part (beginning communication developing into chaos) building up textures and layers of an ever-increasing number of words and syllables played by the dancers and Joseph's live coding. The piece ended after about 4:30 with a sudden freeze of movements and silence, whereupon a comprehensible version of the speech material was presented to the audience: "Please call Stella. Ask her to bring these things with her from the store" - a highly mundane statement rounding off the somewhat existential thematics.
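To give an idea of the instrument design, here is a heavily simplified Csound sketch of the kind of mapping involved. The OSC address, scaling, durations and file name are illustrative stand-ins, not the actual performance patch: the movement intensity from a sensor controls how densely the stretched voice fragments are layered.

```csound
<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
sr = 48000
ksmps = 64
nchnls = 2
0dbfs = 1

giosc OSCinit 9000                     ; port number is an assumption

instr Control
  kmag init 0
  kk OSClisten giosc, "/ngimu/accelmag", "f", kmag  ; movement intensity, 0..1
  krate = 1 + kmag * 11                ; 1..12 voice onsets per second
  ktrig metro krate
  ; more intense movement -> denser, longer, overlapping voice layers
  schedkwhen ktrig, 0, 0, "Voice", 0, 2 + kmag * 4
endin

instr Voice
  ipch random 0.5, 1.0                 ; play slower than original = stretched
  ; "plea.wav" stands in for one of the ~100 syllable samples
  asig diskin2 "plea.wav", ipch, 0, 1
  aenv linseg 0, 0.5, 0.3, p3 - 1, 0.3, 0.5, 0
  outs asig * aenv, asig * aenv
endin
</CsInstruments>
<CsScore>
i "Control" 0 300
</CsScore>
</CsoundSynthesizer>
```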

Here is a video of the first section of one of our performances (we had three in total), shot by Mark Stokes:



After the performances we all agreed that our work together had a lot of potential, both in terms of artistic development and applications for people with different abilities, so we have submitted several applications for further work on the project, called Accents in Motion.




Running version of Six Fantasies Machine


The Six Fantasies Machine (SFM) instrument is a tool developed in Csound for producing a range of sounds similar to those found in Paul Lansky's classic computer music composition Six Fantasies on a Poem by Thomas Campion. I made it as a part of my PhD project, Experiencing Voices in Electroacoustic Music, which was finished in 2010. Due to changes in Csound, I couldn't run the software on recent versions for many years, which was a bit sad. However, I have now (October 2018) gone through the code again and got it up and running on Csound 6.12 on OSX.

For download, info and sound examples, check the SFM project page.




Startup of VIBRA - network for interactive dance


After having worked with interactive dance for some years, mostly outside of Norway, I thought it could be interesting to gather people in my local region to do some work here. I was lucky enough to get a grant from the Norwegian Culture Council, which made it possible to arrange a series of three workshops in Trondheim with dancers/choreographers, video artists and a music tech colleague during the winter/spring of 2018.

The network was named VIBRA, a Norwegian acronym which translates to Visuals - Interaction - Movement - Space - Audio. The project now has its own website at www.vibra.no, where we have put information about the project and a blog about technical as well as artistic issues.

VIBRA is a network of artists and technologists who explore the artistic possibilities related to video, interaction, movement, space and audio - in short form often referred to as interactive dance. By interactive dance, we mean a cross-artistic expression in which dancers, using sensor and computer technology, control one or more aspects of sound, music, video, light or other scenic instruments. This has hitherto been largely unexplored in Norway. Much of the non-classical dance presented on Norwegian stages today is accompanied by pre-recorded music and/or video. With such fixed accompaniment, the relationships that occur between motion, music and visual elements work essentially one way. By using new technology, we hope to introduce an interactivity that can provide a more dynamic expression, one that more thoroughly puts the cross-artistic relationships at stake. The start-up of the network involves a group of people from Trondheim/Trøndelag with relevant backgrounds, who constitute the project group.

Here is a short excerpt from the showing at the last workshop, using NGIMU and Myo sensors with four dancers and projecting sound over an 8-channel setup:


The video excerpt shows dancers Tone Pernille Østern and Elen Øyen.





Presentation at IRCAM Forum Workshops

The IRCAM Forum Workshop is an annual event in which IRCAM Forum members get to share and discuss ideas. As they write on their website, "The Ircam Forum workshops is a unique opportunity to meet the research and development teams as well as computer music designers and artists in residency at IRCAM. During these workshops, we will share news on technologies and expertise providing hands-on classes and demos on sound and music."

As an IRCAM Forum member I made a submission with Robert Wechsler entitled MotionComposer - A Therapy Device that Turns Movement into Music. Luckily it was accepted, and the presentation was held on the 8th of March 2018. It was a great experience to show our work at this renowned institution in front of a highly competent audience. And of course, we also greatly enjoyed seeing the presentations by the IRCAM teams as well as the other IRCAM Forum members.

Abstract
The MotionComposer is a therapy device that turns movement into music. The newest version uses passive stereo-vision motion tracking technology and offers a number of musical environments, each with a different mapping. (Previous versions used a hybrid CMOS/ToF technology). In serving persons of all abilities, we face the challenge to provide the kinesic and musical conditions that afford sonic embodiment, in other words, that give users the impression of hearing their movements and shapes. A successful therapeutic device must, a) have a low entry fee, offering an immediate and strong causal relationship, and b) offer an evocative dance/music experience, to assure motivation and interest over time. To satisfy both these priorities, the musical environment "Particles" uses a mapping in which small discrete movements trigger short, discrete sounds, and larger flowing movements make rich conglomerations of those same sounds, which are then further modified by the shape of the user's body.
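To make the mapping idea in the abstract more concrete, here is a schematic Csound sketch of the principle only - the OSC address, value ranges and the little noise-burst sound are my illustrations, not MotionComposer code. Low activity yields sparse, single short sounds; higher activity both raises the trigger rate and lengthens the same sounds into denser conglomerations:

```csound
<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
sr = 48000
ksmps = 64
nchnls = 2
0dbfs = 1

giosc OSCinit 8000                      ; port number is an assumption

instr Mapping
  kact init 0
  kk OSClisten giosc, "/mc/activity", "f", kact  ; 0 = still, 1 = flowing
  ktrig metro 1 + kact * 15             ; small movements: sparse triggers
  kdur  = 0.05 + kact * 0.6             ; large movements: longer, overlapping
  schedkwhen ktrig, 0, 0, "Grain", 0, kdur
endin

instr Grain
  icf    random 300, 3000               ; each grain gets its own colour
  anoise rand 0.2
  asig   reson anoise, icf, icf * 0.1, 1
  aenv   expon 1, p3, 0.01
  outs asig * aenv, asig * aenv
endin
</CsInstruments>
<CsScore>
i "Mapping" 0 600
</CsScore>
</CsoundSynthesizer>
```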




Invited lectures: Healing soundscape


Robert Wechsler and I were invited to give a lecture and performance at the Hochschule für Musik und Theater in Hamburg on the 16th of November 2017, as a part of the lecture series Healing Soundscape: Klang - Raum - Atmosphäre. The title of our lecture/performance was "MotionComposer - a movement-into-music device for persons of different abilities" - a demonstration/performance with audience participation.

The invitation actually resulted in a total of three lectures. The first was for master's students of multimedia composition and was the most technical, focusing on the tracking, the musical mappings and how they were implemented using Csound.

The second lecture was for a group of music therapy students and focused on the therapeutic applications of MotionComposer.

The third lecture was the public lecture which was a part of the lecture series. In it we argued that dance and music are closely related, and showed how that has been an impetus in the design and application of MotionComposer. It also gave us a chance to show artistic applications of the MotionComposer. A local dancer, Trinidad, improvised in the Particles environment together with Robert Wechsler. I had brought with me some sounds that we had never tried before - recordings of manipulations of a mixer with closed feedback and some effects - and they turned out to work nicely. After the performance, the audience also got to try out several environments of the MotionComposer.

Here is an excerpt from the performance where Trinidad has a solo. Unfortunately, it only has audio from the camera, resulting in quite poor quality:



Video from the lecture/performance "MotionComposer - a movement-into-music device for persons of different abilities" - a demonstration/performance with audience participation, featuring the dancer Trinidad.




Sound for photo exhibition: Ekko frå sjelas spegel

One of the images from the exhibition Ekko frå sjelas spegel with accompanying poem

In October 2017 I got the chance to work with Trondheim-based photographer Siri Nordberg. She specializes in portraits, and in her exhibition Ekko frå sjelas spegel (Echoes of the mirrors of the soul) she combined portraits with poems she had written (later to be published in a book). It was shown in a very special exhibition space: an arched cellar from the 19th century in the centre of Trondheim. Since important themes in many of the texts were related to the sea and coastal landscapes, Nordberg wanted me to make a soundscape composition that could fit. I chose a recording I had made at Kvarøya in northern Norway, which featured a beautiful crescendo and decrescendo of waves and a boat passing in the distance. I treated it lightly to strengthen the inner dramaturgy of the composition. Listen to an excerpt here:








Tower Block - Post Mortem article


A few years back I wrote an article together with NTNU colleague Barbro Rønning, reflecting on different aspects of The Tower Block - Post Mortem project. The article was originally intended to be a part of a book project on intermediality edited by Rosemary Klich, which unfortunately was not realized. A possibility then opened to publish the article in the International Journal of Aesthetics and Philosophy of Culture, in a special issue called Nordic Time Zones: Art and Performance in the Northern Landscape, with Knut Ove Arntzen as invited editor and a special introduction by Shannon Jackson.

Abstract: This article discusses aspects of intermedial experiencing of the site-specific intermedial performance The Tower Block – Post Mortem and the preceding process of generating material for the performance through gathering and sharing memories from The Tower Block. The authors describe the gathering of memory material on the project’s own blog, and how this process highlighted both properties of networked media and the phenomenology of remembrance. Subsequently, they discuss three different aspects of the performance, and how each of them relates to important issues in the discourse on intermediality in performance culture: 1) Intermedial elements in performance and their play with spatiality, temporality and their relationship to processes of remembrance; 2) Ritual and the transformative aesthetic of the performance and how it relates to the authenticity, and to interrelationships of the actual and the virtual; 3) Resonance and tension in trans-medial metaphors used in the performance.

The article can be downloaded here, and the whole issue at this link.




Crossadaptive seminar


The project Cross adaptive processing as musical intervention, led by colleague Øyvind Brandtsegg at NTNU, arranged a two-day seminar on November 2 and 3 2017 at the Department of Music, NTNU in Trondheim. The seminar featured a number of international participants in addition to colleagues from NTNU and the Norwegian Academy of Music (NMH): Miller Puckette, Simon Emmerson, Josh Reiss, Victor Lazzarini, Gary Bromham and Marije Baalman. The seminar featured academic presentations, panels and performances, and is well documented in video and slides at the project web site. My contribution was a presentation entitled Documentation in the Crossadaptive project, focusing on the use of video as a tool for documentation and reflection. The slides for the presentation can be downloaded here, and this link features a video of the talk.



Musikkteknologidagene (Music Technology Days)


Musikkteknologidagene (Music Technology Days) is the meeting point of the music technology field in Norway, and on October 9-11 2017 it was hosted by the Norwegian Academy of Music (NMH) in Oslo. The programme consisted of academic presentations during the day and concerts in the evenings. I had two contributions, one academic and one artistic.

The academic contribution, with NTNU colleague Sigurd Saue, was entitled Some Experiences from the project Sound - Space - Movement. The presentation discussed different aspects of the project, including motion tracking, body movement, movement-to-sound/spatialisation mapping, and flexible reverberation with streaming convolution. An abstract in Norwegian can be found here.
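The streaming-convolution work deserves its own post, but for readers unfamiliar with the principle, here is a minimal, generic partitioned-convolution reverb in Csound using ftconv. This is a plain illustration of convolution reverb, not the project's actual streaming implementation, and the file names are placeholders:

```csound
<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
sr = 48000
ksmps = 64
nchnls = 2
0dbfs = 1

; impulse response of a room, loaded into a table with GEN1
giir ftgen 0, 0, 131072, 1, "room_ir.wav", 0, 0, 1

instr ConvReverb
  adry diskin2 "source.wav", 1
  ; partition size 1024 samples: a trade-off between latency and CPU load
  awet ftconv adry, giir, 1024
  amix = adry * 0.7 + awet * 0.3
  outs amix, amix
endin
</CsInstruments>
<CsScore>
i "ConvReverb" 0 10
</CsScore>
</CsoundSynthesizer>
```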

The artistic presentation was an interactive dance performance featuring dancer and NTNU colleague Tone Pernille Østern taking place in Levinsalen at NMH with a 24-channel loudspeaker setup, combining a circle and an oval configuration. We were incredibly happy to be in the distinguished company of composers such as Åke Parmerud and Natasha Barrett, and the impro group Parallax, featuring spatialisation by Anders Tveit.



Movement and Computing Conference (MOCO)


The Fourth International Conference on Movement and Computing (MOCO 2017) was held at Goldsmiths, University of London, in June 2017. It featured highly regarded researchers from many different fields, ranging from computational neuroscience and affective computing to interactive dance. With less than 45% of submitted papers being accepted, MotionComposer colleague and leader Robert Wechsler and I were very happy to be in such distinguished company of artists and researchers. Our contribution was a short paper entitled Issues and Strategies of Rhythmicality in MotionComposer, and you can read it in the proceedings found here (choose Table of Contents and navigate to the title). Among other things, the use of artificial intelligence and machine learning was a recurrent topic, and Mark Coniglio spent a good portion of his keynote speech speculating on how this could affect the field in the near future.


MotionComposer workshop at Fraunhofer


In June 2017 the MotionComposer team gathered for their annual workshop in Jena, Germany, at the renowned research institution Fraunhofer and their division for optics and precision engineering. Together with fellow MC composers Ives Schachtschabel and Peter Breitenbach, I had the task of starting the development of sound environments for MC version 3.0. We had a startup meeting with representatives of Fraunhofer and the organization Grenzenlos, where we presented our visions and planned features for the different environments. After four days of intensive work, our final presentation, with performers from Grenzenlos, also included a test of the sensor planned for the MotionComposer 3.0, in development by Fusion Systems, where parameters of the 2.0 Particles environment were controlled by the sensor. All in all, the workshop gave a highly valuable impetus for developing the Particles environment further, including suggestions for new sound banks.



Radio interview about the Telharmonium



NRK (Norwegian Broadcasting) did a feature about one of the world's first electronic instruments, the Telharmonium, on April 11 on the P1 channel. I was happy to be asked to contribute, and did a short interview where I talked about the instrument and did a short demo using a rough additive synthesis simulation. You can listen to the feature (in Norwegian) with the interview here. Music technology student Joachim Ose was the journalist behind the feature.
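The simulation principle is simple enough to sketch in a few lines of Csound (illustrative values, not the exact patch I used in the demo): the Telharmonium built its timbres by mixing near-sinusoidal partials from electromechanical tonewheels, which can be imitated additively by summing a handful of harmonic sine oscillators with falling amplitudes:

```csound
<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
sr = 48000
ksmps = 64
nchnls = 1
0dbfs = 1

instr Telharmonium
  ifrq = p4
  ; a few harmonic partials with falling amplitudes, like stacked tonewheels
  a1 oscili 0.30, ifrq
  a2 oscili 0.20, ifrq * 2
  a3 oscili 0.12, ifrq * 3
  a4 oscili 0.08, ifrq * 4
  aenv linseg 0, 0.05, 1, p3 - 0.1, 1, 0.05, 0  ; soft on/off, organ-like
  out (a1 + a2 + a3 + a4) * aenv
endin
</CsInstruments>
<CsScore>
i "Telharmonium" 0 2 220
i "Telharmonium" 2 2 330
</CsScore>
</CsoundSynthesizer>
```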


Sound-Space-Movement. Research concert.


As a result of an initiative by postdoc Njål Ølnes, the Department of Music, NTNU, has started a series of what we call Research Concerts. The series combines elements from research presentations for a non-scientific audience with the concert form, and aims at bringing together researchers/performers from different branches of the department.

The team behind the research concert entitled Sound-Space-Movement (Lyd-rom-bevegelse) included Tone Pernille Østern, Sigurd Saue, Elen Øyen and myself, and the concert was presented at Dokkhuset on March 23. The presentation focused on how bodily interaction with MotionComposer can bring forth interesting movements and reflections from different perspectives. It featured an 8-channel version of the Particles environment of the MotionComposer, which I developed in collaboration with Sigurd Saue.

I started off the event with a slightly humorous interactive performance of Richard Strauss' Also Sprach Zarathustra, with Tone copying my movements in the background (see a short excerpt here). I continued by presenting MotionComposer (MC) before going into more detail about my work with the Particles environment of MC, including a demonstration. I especially focused on mapping, demonstrating several possible mappings with data from the MC. Elen followed with an improvised Particles solo, playing with sounds of the Chinese opera singer Feng Hau. After the performance, Elen gave a highly personal talk about her experiences of becoming a dance improviser using a wheelchair. After doing a short Particles improvisation with trombone sounds he himself had played, Sigurd took a more scientific approach for the remainder of his presentation. He gave a very brief introduction to room acoustics, but also talked about convolution-based reverberation and multi-channel spatialization. Here, he demonstrated an orientation sensor he had coupled to a microcontroller and a wireless transmitter. The sensor was used in conjunction with the MC system to control sound spatialization (I had set up vbap8 in Csound to do this), reverberation mix and distance. Tone rounded off the individual section with a highly expressive improvised solo using the Materials sound bank of Particles. This was followed by a performative talk where Tone combined verbal reflections about body, space and movement with her own movements in the room.
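For those curious about the spatialization part, the Csound side can be sketched like this - a minimal reconstruction where the speaker angles, OSC address and test sound are illustrative, not the exact concert setup. vbaplsinit defines the eight speaker directions, and the azimuth from the orientation sensor pans the sound around the ring:

```csound
<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
sr = 48000
ksmps = 64
nchnls = 8
0dbfs = 1

; eight speakers in a ring (2D), 45 degrees apart
vbaplsinit 2, 8, 0, 45, 90, 135, 180, 225, 270, 315

giosc OSCinit 7001                    ; port number is an assumption

instr Spat
  kazim init 0
  kk OSClisten giosc, "/sensor/azimuth", "f", kazim  ; azimuth in degrees
  asig diskin2 "source.wav", 1, 0, 1
  a1, a2, a3, a4, a5, a6, a7, a8 vbap8 asig, kazim, 0, 10  ; 10 = spread
  outo a1, a2, a3, a4, a5, a6, a7, a8
endin
</CsInstruments>
<CsScore>
i "Spat" 0 60
</CsScore>
</CsoundSynthesizer>
```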

Subsequently, the audience was invited to do their own improvisations with the MC system, and several audience members took the chance and tried it out. The whole concert was rounded off by a Q&A session with all four presenters.


Panel at DHN2017 Conference


The 2nd Conference of the association of Digital Humanities in the Nordic Countries (DHN) was held at the University of Gothenburg on March 14-16, 2017. Together with a group of researchers from several Nordic universities, I participated in a panel entitled "The Nordic Hub of DARIAH-EU: A DH Ecosystem of Cross-Disciplinary Approaches". The abstract can be read here. The final version of the panel paper was published in the CEUR Workshop Proceedings, and can be read here.


Open lecture at Rockheim


As a part of the collaboration between NTNU and Rockheim, the national museum of popular music, I held an open lecture on the 16th of February entitled "The Music Technology of Pop Music". The lecture was a part of the course Music Technology in Light of History, but was open to the general public. It gave an introduction to instruments, sound media and technological tools for musical expression in the period 1950-1975. The slides from the presentation are available here.


Presentation at NTNU Dariah-EU kickoff seminar

As a result of NTNU's membership in the DARIAH-EU Nordic Hub (see post below), ARTEC's Andrew Perkis and the DARIAH coordinator at NTNU, Thomas Sørlie Hansen, arranged an open kickoff seminar to which people interested in digital humanities at NTNU were invited. Researchers from many different fields were represented at the event. I held a short presentation focusing on my work with MotionComposer.