Sound and music computing

Research in sound and music computing

Hybrid Instruments

Hybrid Instruments – Grafting Acoustic Instruments and Signal Processing: Creative Control and Augmented Expressivity

This research is developing hybrid acoustic/electric musical instruments. As a first focus, the violin is enhanced with embedded processing that provides real-time simulation of acoustic body models, using DSP techniques that can transform one model into another – including extrapolations beyond realistic models, in order to explore interesting new timbres. Models can range from traditional violin bodies to guitars, sitars with their sympathetic strings, and even physically impossible acoustic bodies.
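
As a rough illustration of the model-morphing idea (not the actual embedded implementation), the sketch below interpolates between two acoustic bodies, each represented by a measured impulse response, and convolves the result with a dry bridge-pickup signal. The file names and the choice of simple linear interpolation are illustrative assumptions.

    import numpy as np
    from scipy.io import wavfile
    from scipy.signal import fftconvolve

    def morph_bodies(ir_a, ir_b, alpha):
        """Linearly interpolate two body impulse responses.

        alpha = 0 gives body A, alpha = 1 gives body B; values outside
        [0, 1] extrapolate beyond either real body.
        """
        n = max(len(ir_a), len(ir_b))
        ir_a = np.pad(ir_a, (0, n - len(ir_a)))
        ir_b = np.pad(ir_b, (0, n - len(ir_b)))
        return (1.0 - alpha) * ir_a + alpha * ir_b

    # Hypothetical file names: a dry string signal and two measured body IRs.
    fs, string_signal = wavfile.read("bridge_pickup.wav")
    _, violin_ir = wavfile.read("violin_body_ir.wav")
    _, guitar_ir = wavfile.read("guitar_body_ir.wav")

    ir = morph_bodies(violin_ir.astype(float), guitar_ir.astype(float), 0.5)
    out = fftconvolve(string_signal.astype(float), ir)[:len(string_signal)]

Setting alpha outside [0, 1] extrapolates beyond either real body, in the spirit of the physically impossible models mentioned above.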

The development also explores several advanced approaches to sensor augmentation and gestural playing techniques that can be applied to bowed-string and other acoustic instruments, in order to provide inherent creative control over the possibilities offered by DSP.

To date, this research has focused on augmenting the expressivity of the violin towards finding novel timbral possibilities, rather than attempting to reproduce existing acoustic violins with high fidelity. The ability to control a malleable virtual instrument body while playing, i.e., a model whose reverberant resonances change in response to player input, results in interesting and often musically inspiring audio effects.

Other common audio effects can also be employed and simultaneously controlled via the musician’s movements. For example, gestural movements of the instrument are tracked via embedded Inertial Measurement Units (IMUs), whose output can be mapped to parameters such as the wet/dry mix of a simple ‘octave doubler’ or of more advanced audio effects, further augmenting the player’s expressivity.
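
A minimal sketch of this kind of mapping, assuming an IMU that reports the instrument’s pitch angle in degrees; the angle range and the naive octave doubler are illustrative toys, not the project’s actual mapping or effect chain.

    import numpy as np

    def tilt_to_wet(pitch_deg, lo=-30.0, hi=30.0):
        """Map the instrument's pitch angle (degrees, from the IMU) to a
        wet/dry mix in [0, 1]; the angle range is an arbitrary choice."""
        return float(np.clip((pitch_deg - lo) / (hi - lo), 0.0, 1.0))

    def octave_doubler(block, wet):
        """Toy octave-up: drop every other sample (doubling the pitch),
        then repeat to restore the block length. A real implementation
        would use a proper pitch shifter such as a phase vocoder."""
        up = np.tile(block[::2], 2)[:len(block)]
        return (1.0 - wet) * block + wet * up

    # One processing block: the tilt read from the IMU sets the mix.
    block = np.random.randn(512)
    print(octave_doubler(block, tilt_to_wet(12.0)))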

More info about the project here.

DREAM

Digital Re-working/re-appropriation of ElectroAcoustic Music (EU project)

The aim of the DREAM project is to create a permanent installation at the Milan Museum of Musical Instruments, consisting of a software/hardware system that re-creates the electronic lutherie of the Studio di Fonologia Musicale (RAI, Milan, Italy): in particular, the production setup originally used to compose Pousseur’s composition Scambi. This work will allow musicians from all over Europe to experience the re-appropriation of both Pousseur’s composition and the vintage analog technology of the period through the virtual lutherie produced by the project. At the same time, the installation will give scholars the opportunity to reflect on open compositions, gaining understanding of this specific creative process through direct manipulation. Furthermore, they will learn about an important piece of the history of modern music: the birth of electronic music and its connection with the most important European radio institutions.

DREAM project description.

Rhythmic Walking Interaction with Ecological Auditory Feedback

Walking is an activity that plays an important part in our daily lives. In addition to being a natural means of transportation, walking is also characterized by the resulting sound, which can provide information about the surface, type of shoe, and movement speed as well as the person’s age, weight, and physical condition.

Here we present a video showing all the conditions tested in our recent experiment on rhythmic walking interaction with ecological auditory feedback. For more information, see the original article ‘The effects of ecological auditory feedback on rhythmic walking interaction’ by Maculewicz, Jylhä, Serafin, and Erkut.

Sound Texture Resynthesis with Sparse Decomposition and Noise Modelling

We introduce a framework that represents environmental texture sounds as a linear superposition of independent foreground and background layers that roughly correspond to entities in the physical production of the sound. Sound samples are decomposed into a sparse representation with the matching pursuit algorithm and a dictionary of Daubechies wavelet atoms. An agglomerative clustering procedure groups atoms into short transient molecules. A foreground layer is generated by sampling these sound molecules from a distribution, whose parameters are estimated from the input sample. The residual signal is modelled by an LPC-based source-filter model, synthesizing the background sound layer. The capability of the system is demonstrated with a set of fire sounds.
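
The sketch below illustrates the decomposition step under simplifying assumptions: a single-scale dictionary of time-shifted Daubechies atoms and a fixed atom budget, with the residual left for the LPC background model. The actual system uses atoms at multiple scales and clusters them into molecules.

    import numpy as np
    import pywt

    def daubechies_atom(name="db8", level=6):
        """A unit-norm atom sampled from the wavelet function."""
        _, psi, _ = pywt.Wavelet(name).wavefun(level=level)
        return psi / np.linalg.norm(psi)

    def matching_pursuit(signal, atom, n_atoms=100):
        """Greedy sparse decomposition over all time shifts of one atom:
        the picked atoms form the foreground layer; the residual is left
        for the LPC-based source-filter background model."""
        residual = signal.astype(float).copy()
        foreground = np.zeros_like(residual)
        for _ in range(n_atoms):
            corr = np.correlate(residual, atom, mode="valid")
            pos = int(np.argmax(np.abs(corr)))
            amp = corr[pos]  # projection onto the unit-norm atom
            residual[pos:pos + len(atom)] -= amp * atom
            foreground[pos:pos + len(atom)] += amp * atom
        return foreground, residual

    # Toy input: noise with a transient standing in for a fire crackle.
    x = np.random.randn(8192) * 0.05
    x[1000:1040] += 1.0
    fg, bg = matching_pursuit(x, daubechies_atom())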

This is a joint project with Stefan Kersten from the Music Technology Group, UPF, Barcelona.

Music Interface Technologies in Intercultural Contexts

MITIC is a project funded by FI’s International Programme that involves both practical and theoretical research on music interface technologies facilitating expressive and collaborative intercultural musical activities. Music Interface Technology is a relatively young field of research that merges artistic and cultural exploration with technology for creating interactive systems, allowing musicians to embrace innovative methods of expressive music-making, be it performing, composing, or improvising. All of the involved researchers – based at AAU, Copenhagen, and U.C. Berkeley, California – are established experts in the field of New Interfaces for Musical Expression (NIME). Project outcomes include innovative methods and theories for the Music Interface Technology research community, new musical instruments that can be used in culturally collaborative music practices, and joint scientific papers in the field.

The research examines in detail the influence of music technology on different musical styles, looking at the ways in which interactive systems can affect musical collaboration between cultures. The first scientific focus of the project is to improve existing methods and theories for creating music interfaces. The second is to explore their use in developing actual hybrid acoustic-electronic musical instruments that can encourage intercultural collaboration. The research enhances cross-cultural collaborative music making by employing computer technology within interactive performance interfaces and environments.

Audio-haptic Interaction

We are investigating the use of physically simulated audio-haptic feedback for different tasks.

For example, we are investigating whether providing auditory and haptic feedback aids the task of balancing on a balance board.

Decoding Auditory Attention in Polyphonic Music with EEG

Polyphonic music (music consisting of several instruments playing in parallel) is an intuitive way of embedding multiple information streams. The different instruments in a musical piece form concurrent information streams that seamlessly integrate into a coherent and hedonically appealing entity. We explore polyphonic music as a novel stimulation approach for use in a brain-computer interface.

In a multi-streamed oddball experiment, we had participants shift selective attention to one out of three different instruments in music audio clips. Each instrument formed an oddball stream with its own specific standard stimuli (a repetitive musical pattern) and oddballs (deviating musical pattern). Contrasting attended versus unattended instruments, ERP analysis shows subject- and instrument-specific responses including P300 and early auditory components. The attended instrument can be classified offline with a mean accuracy of 91% across 11 participants. This is a proof of concept that attention paid to a particular instrument in polyphonic music can be inferred from ongoing EEG, a finding that is potentially relevant for both brain-computer interface and music research.
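
As a hedged illustration of the offline decoding step, the sketch below classifies attended versus unattended epochs with shrinkage LDA on windowed ERP mean amplitudes; the random arrays stand in for real EEG epochs and labels, and the feature choice is an assumption rather than the study’s exact pipeline.

    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import cross_val_score

    # Placeholder data: (trials, channels, samples) epochs around oddball
    # onsets, and a label per trial (1 = instrument attended, 0 = not).
    rng = np.random.default_rng(0)
    epochs = rng.standard_normal((200, 64, 100))
    labels = rng.integers(0, 2, 200)

    # Simple ERP features: mean amplitude per channel in five time windows.
    windows = np.array_split(np.arange(epochs.shape[2]), 5)
    feats = np.hstack([epochs[:, :, w].mean(axis=2) for w in windows])

    # Shrinkage LDA is a common choice for ERP classification.
    clf = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")
    print(cross_val_score(clf, feats, labels, cv=5).mean())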

This is a joint project with Matthias Treder, Daniel Miklody, Irene Sturm, and Benjamin Blankertz from the Neurotechnology Group, Berlin Institute of Technology.

CultAR - Culturally Enhanced Augmented Realities

CultAR (Culturally Enhanced Augmented Realities) is an EU-funded Seventh Framework Programme project (ICT-2011.8.2: ICT for access to cultural resources) with partners Aalto-korkeakoulusäätiö, School of Science, Helsinki Institute for Information Technology HIIT (AALTO), Finland; University of Helsinki (UH), Finland; Graz University of Technology (TU Graz), Austria; Aalborg University (AAU), Denmark; University of Padova (UNIPD), Italy; Comune di Padova (CPD), Italy; and Ubiest S.p.a (UBI), Italy.

The project provides a mobile platform that 1) actively increases users’ awareness of their cultural surroundings with advanced, adaptable, and personalized interfaces, and 2) increases users’ social engagement with culture via a leap in social media technologies and contextual inference methods. To reach these goals, CultAR will advance the state of the art in mobile 3D, augmented reality, and tactile technologies, combining them into a completely new mobile experience interface. The CultAR platform achieves personalized and engaging digital cultural experiences through enhanced representation, hybrid space mediation, social engagement and awareness. Adaptability and context awareness will be enhanced through dynamic 3D models of urban environments, with the ability to control all aspects of the representation, including dynamic content, applying various emphasis methods that draw the user’s attention to potentially interesting cultural content. Cultural stakeholders will provide content and expert users for CultAR. Advances in technology are verified and analyzed not just by technical benchmarking, but also through specialist analysis and on-location field experiments in Padua, Italy, and Aalborg, Denmark. A methodology for measuring user experience is developed, with thorough monitoring using eye trackers, physiological sensors, and gesture tracking and recognition, in pursuit of inferring emotional states quantitatively.

Find more information about the project here or at the CultAR website.

Unsupervised Generation of Musical Sound Sequences from a Sound Example

We developed a system that learns the rhythmical structure of percussion sequences from an audio example in an unsupervised manner, providing a representation that can be used for the generation of stylistically similar and musically interesting variations. The procedure consists of segmentation and symbolization (feature extraction, clustering, sequence structure analysis, temporal alignment). At a low level, the percussion sequence is transcribed as a multi-level discretization. The regularity of the sequence, as a high-level information source, is examined on each clustering level, and the most regular level is used to estimate the inter-beat interval and metrical phase of the sequence. Then, variations on the original sequence are generated by recombining the audio material derived from the sample itself. A metrically reduced version of the original and the generated variations were played to two professional percussionists in an informal experiment, thereby providing a subjective evaluation of the rhythmical analysis system. The results reveal that the generated variations are interesting and maintain the style and the meter of the original sample. This indicates the benefit of using an unsupervised multi-level clustering procedure in conjunction with high-level structural constraints.
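
A rough sketch of the segmentation and symbolization front end, assuming librosa for onset detection and scikit-learn for clustering; the multi-level clustering, structure analysis, and temporal alignment of the actual system are omitted, and the input file name is hypothetical.

    import numpy as np
    import librosa
    from sklearn.cluster import KMeans

    # Hypothetical input: a short percussion loop.
    y, sr = librosa.load("percussion_loop.wav")
    onsets = librosa.onset.onset_detect(y=y, sr=sr, units="samples")

    # Segmentation: cut the signal at onsets; describe each segment by
    # its mean MFCC vector (feature extraction).
    segments = [s for s in np.split(y, onsets)[1:] if len(s) > 0]
    feats = np.array([librosa.feature.mfcc(y=s, sr=sr, n_mfcc=13).mean(axis=1)
                      for s in segments])

    # Symbolization: cluster labels stand for percussive sound classes;
    # this label sequence is what the structure analysis operates on.
    symbols = KMeans(n_clusters=4, n_init=10).fit_predict(feats)
    print(symbols)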

This work is a joint project with Marco Marchini (Music Technology Group, UPF, Barcelona) and Srikanth Cherla (City University London).

Papers

Automatic Phrase Continuation from Guitar and Bass Guitar Melodies (Computer Music Journal)
Unsupervised Generation of Percussion Sound Sequences from a Sound Example (Music Technology Group)
Beatboxing example

Sonic Interaction Design

Sonic interaction design is the study and exploitation of sound as one of the principal channels conveying information, meaning, and aesthetic/emotional qualities in interactive contexts.

Sonic interaction design lies at the intersection of interaction design and sound and music computing. If interaction design is about designing objects that people interact with, and such interactions are facilitated by computational means, then in sonic interaction design sound mediates the interaction, either as a display of processes or as an input medium.

Check out the new book on Sonic Interaction Design, recently published by MIT Press.

Creative Music Mixing Interface

This small project is a collaboration with The Rhythmic Music Conservatory, Copenhagen and The Royal Danish Academy of Music.

The project explores new approaches to creating a flexible and creative interface for mixing music. Based on investigations of the work processes of expert music producers, the project identifies key design factors crucial for the development and evaluation of such an interface. Interactive surfaces are tested in context, exploring different interaction technologies using a so-called stage-metaphor control structure. Interaction technologies include multi-touch, tangible objects, and smart tangibles. Additionally, different digital control structures for supporting creativity are explored. Find more information about the project here.
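
As an illustration of the stage metaphor, the sketch below maps the position of a tangible object on the mixing surface to pan and level; the coordinate ranges and the distance roll-off are illustrative assumptions, not the project’s actual control structure.

    def stage_to_mix(x, y, width=1.0, depth=1.0):
        """Map a tangible object's position on the surface to (pan, gain).

        x in [0, width]: 0 = hard left, width = hard right.
        y in [0, depth]: 0 = front of the stage (loud), depth = back (quiet).
        """
        pan = 2.0 * (x / width) - 1.0            # -1 .. +1
        gain = 1.0 / (1.0 + 3.0 * (y / depth))   # simple distance roll-off
        return pan, gain

    print(stage_to_mix(0.25, 0.1))  # a track placed front-left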

Tangible Mix Surface - Towards a more Flexible and Creative Music Mixing Interface.

Natural Interactive Walking (NIW)

The Natural Interactive Walking project (NIW) proceeds from the hypothesis that walking, by enabling rich interactions with floor surfaces, consistently conveys enactive information that manifests itself predominantly through haptic and auditory cues. Vision will be regarded as playing an integrative role linking locomotion to obstacle avoidance, navigation, balance, and the understanding of details occurring at ground level. The ecological information we obtain from interaction with ground surfaces allows us to navigate and orient during everyday tasks in unfamiliar environments, by means of the invariant ecological meaning that we have learned through prior experience with walking tasks.

Multimodal interaction to simulate natural interactive walking.

Quantitative Analysis of Performance Practice of Non-Western Music

Byzantine Chant performance practice is computationally compared to the Chrysanthine theory of the eight Byzantine Tones (octoechos). Intonation, steps, and prominence of scale degrees are quantified based on pitch class profiles. The novel procedure comprises the following analysis steps (steps 2 and 3 are sketched in code after the list):
(1) The pitch trajectory is extracted and post-processed with music-specific filters.
(2) Pitch class histograms are calculated by kernel smoothing.
(3) Histogram peaks are detected.
(4) Phrase-ending analysis aids the finding of the tonic to align pitch histograms.
(5) The theoretical scale degrees are mapped to the empirical ones.
(6) A schema of statistical tests detects significant deviations of theoretical scale tuning and steps from those estimated in performance practice.
(7) The ranked histogram peak amplitudes are compared to the theoretical prominence of particular scale degrees.
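
A condensed sketch of steps (2) and (3), assuming a pitch trajectory in cents has already been extracted (step 1); the kernel bandwidth, peak threshold, and toy data are illustrative only.

    import numpy as np
    from scipy.signal import find_peaks
    from scipy.stats import gaussian_kde

    def pitch_class_histogram(pitch_cents, resolution=1200):
        """Fold a pitch trajectory into one octave and smooth it with a
        Gaussian kernel, returning the histogram on a 1-cent grid."""
        folded = np.mod(pitch_cents, 1200)
        grid = np.linspace(0, 1200, resolution, endpoint=False)
        return grid, gaussian_kde(folded, bw_method=0.02)(grid)

    # Toy trajectory clustered around three scale degrees (in cents).
    rng = np.random.default_rng(1)
    traj = np.concatenate([rng.normal(m, 15, 500) for m in (100, 300, 600)])
    grid, hist = pitch_class_histogram(traj)
    peaks, _ = find_peaks(hist, height=hist.max() * 0.1)
    print(grid[peaks])  # estimated scale-degree positions in cents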

The analysis of 94 Byzantine Chants performed by four singers shows a tendency of the singers to level out theoretical particularities of individual echoi that stand out from the general norm of the octoechos: theoretically extremely large steps are diminished in performance. The empirical intonation of the IV. scale degree, as the frame of the first tetrachord, is more consistent with theory than that of the VI. and VII. scale degrees. In practice, smaller scale steps (67–133 cents) appear to be increased and the largest scale step of 333 cents appears to be decreased, compared to theory. Also in practice, the first four scale degrees, in decreasing order of prominence I, III, II, IV, are more prominent than the V., VI., and VII.

This is a joint project with Maria Panteli, University of Amsterdam.

A Quantitative Comparison of Chrysanthine Theory and Performance Practice of Scale Tuning, Steps, and Prominence of the Octoechos in Byzantine Chant.