Con Espressione

Role on this Project: Researcher

Project Website: http://www.cp.jku.at/research/projects/ConEspressione/

Official Description

What makes music so important, what can make a performance so special and stirring? It is the things the music expresses, the emotions it induces, the associations it evokes, the drama and characters it portrays. The sources of this expressivity are manifold: the music itself, its structure, orchestration, personal associations, social settings, but also – and very importantly – the act of performance, the interpretation and expressive intentions made explicit by the musicians through nuances in timing, dynamics etc.

Thanks to research in fields like Music Information Research (MIR), computers can do many useful things with music, from beat and rhythm detection to song identification and tracking. However, they are still far from grasping the essence of music: they cannot tell whether a performance expresses playfulness or ennui, solemnity or gaiety, determination or uncertainty; they cannot produce music with a desired expressive quality; they cannot interact with human musicians in a truly musical way, recognising and responding to the expressive intentions implied in their playing.

The project is about developing machines that are aware of certain dimensions of expressivity, specifically in the domain of (classical) music, where expressivity is essential and – at least as far as it relates to the act of performance – can be traced back to well-defined and measurable parametric dimensions (such as timing, dynamics, articulation). We will develop systems that can recognise and characterise expressive qualities in music, search music by expressive aspects, and generate, modify, and react to expression in performance. To do so, we will (1) bring together the fields of AI, Machine Learning, Music Information Retrieval (MIR), and Music Performance Research; (2) integrate theories from Musicology to build more well-founded models of music understanding; (3) support model learning and validation with massive musical corpora of a size and quality unprecedented in computational music research.

In terms of computational methodologies, we will rely on, and improve, methods from Artificial Intelligence – particularly probabilistic models (for information fusion, tracking, reasoning and prediction); machine learning – particularly deep learning techniques (for learning musical features, abstractions, and representations from musical corpora, and for inducing mappings for expression recognition and prediction); audio signal processing and pattern recognition (for extracting musical parameters and patterns relevant to expressivity); and information theory (for modelling musical expectation, surprise, uncertainty, etc.). This will be combined with high-level concepts and models of structure perception from fields like systematic and cognitive musicology, in order to create systems that have a somewhat deeper ‘understanding’ of music, musical structure, music performance, and musical listening, and the interplay of these factors in making music the expressive and rewarding art that it is. (A more detailed discussion of how we believe all these things relate to each other can be found in the “Con Espressione Manifesto”.)
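To give a flavour of the information-theoretic angle mentioned above – modelling musical expectation and surprise – here is a minimal, illustrative sketch (not the project's actual models): a smoothed bigram model over symbolic pitch sequences, where the "surprise" of a note is its surprisal, -log2 P(note | previous note). All names and the toy corpus are hypothetical.

```python
import math
from collections import defaultdict

def train_bigram(sequences):
    """Count bigram transitions over symbolic events (e.g. MIDI pitch numbers)."""
    counts = defaultdict(lambda: defaultdict(int))
    for seq in sequences:
        for prev, curr in zip(seq, seq[1:]):
            counts[prev][curr] += 1
    return counts

def surprisal(counts, prev, curr, alpha=1.0, vocab_size=128):
    """Surprise of event `curr` given `prev`: -log2 P(curr | prev),
    with add-alpha smoothing over a fixed vocabulary (128 MIDI pitches)."""
    total = sum(counts[prev].values()) + alpha * vocab_size
    p = (counts[prev][curr] + alpha) / total
    return -math.log2(p)

# Toy corpus: a repeated C-major scale fragment (MIDI pitches).
corpus = [[60, 62, 64, 65, 67] * 10]
model = train_bigram(corpus)

familiar = surprisal(model, 60, 62)    # transition seen many times
unfamiliar = surprisal(model, 60, 61)  # transition never seen
print(familiar < unfamiliar)  # → True: the familiar step is less surprising
```

Real expectation models in this line of research (e.g. n-gram or neural sequence models over richer musical representations) work on the same principle: low-probability continuations carry high surprisal, which correlates with listeners' sense of tension and unexpectedness.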

With this research, we hope to contribute to a new generation of MIR systems that can support musical services and interactions at a new level of quality, and to inspire expressivity-centered research in other domains of the arts and human-computer interaction (HCI).

My Responsibilities

TODO

