Artificial Intelligence and Music

Role on this Project: Researcher

Project Website:

Official Description

FWF Project Z159 is the result of a Wittgenstein Prize awarded to Gerhard Widmer in 2009. The project was funded with the generous sum of EUR 1.4 million, and its purpose was to substantially advance our research at the intersection of computer science, Artificial Intelligence (AI), and music.

The general goal of our research is to develop computer systems that can ‘listen’ to music, develop a basic ‘understanding’ of the contents and meaning of musical signals, and learn to recognise, classify, synchronise, and manipulate music in ‘intelligent’ ways, thereby supporting many important practical applications in the digital music world.

This particular research project focused on two grand goals. The first was to teach computers to recognise musically relevant patterns and structure in music recordings, much as human listeners do, at many different levels – e.g., to recognise event onsets, identify beat, rhythm, tempo, metrical structure, harmonies, instruments, and voices – and to recognise musical pieces and track them in real time (i.e., follow along in the sheet music, as musicians do). The outcome of this work is a set of new computer listening algorithms that are among the best (or are the best) in the world for these tasks, as demonstrated by first prizes in many international scientific competitions (see our ‘Hall of Fame’). Some of these algorithms have also found their way into real commercial applications in the digital media world (e.g., automatic media monitoring, radio broadcast analysis, and music search and recommendation services).
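To give a flavour of the lowest-level listening task mentioned above, here is a minimal, illustrative sketch (my own simplification, not the project’s actual code) of onset detection via spectral flux: frames whose magnitude spectrum suddenly gains energy are flagged as note onsets.

```python
# Illustrative sketch of spectral-flux onset detection (assumed simplification,
# not the project's algorithm). All names and thresholds here are hypothetical.
import numpy as np

def spectral_flux_onsets(signal, frame_size=1024, hop=512, threshold=1.5):
    """Return frame indices where the positive spectral change peaks and
    exceeds a simple global threshold (mean flux times `threshold`)."""
    window = np.hanning(frame_size)
    n_frames = 1 + (len(signal) - frame_size) // hop
    # Magnitude spectrum of each windowed frame.
    mags = np.array([
        np.abs(np.fft.rfft(signal[i * hop:i * hop + frame_size] * window))
        for i in range(n_frames)
    ])
    # Spectral flux: per-frame sum of positive bin-wise magnitude increases.
    flux = np.maximum(np.diff(mags, axis=0), 0).sum(axis=1)
    # Pick local maxima of the flux curve above the threshold.
    return [
        i + 1 for i in range(1, len(flux) - 1)
        if flux[i] > flux[i - 1] and flux[i] >= flux[i + 1]
        and flux[i] > threshold * flux.mean()
    ]
```

Real systems refine this basic idea considerably (adaptive thresholds, log-magnitude spectra, learned onset functions), but the core signal – a jump in spectral energy – is the same.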

The second goal was to go a level ‘deeper’, developing computer methods that can help us gain a deeper understanding of the ‘meaning’ of music, its expressive aspects, and how music becomes ‘human music’ through the artistic act of interpretation and performance. Here, we greatly advanced previous work on computer systems that investigate the art of expressive music performance, analysing performances by great human musicians and learning to describe and predict how music needs to be played (e.g., in terms of timing, dynamics, articulation) so as to sound ‘musical’ and ‘natural’ to us. The results of this strand of research are computer programs that helped discover and describe interesting details about the art of great pianists (findings that were also published in the international musicological literature), and programs that learn to play music themselves in musically meaningful and ‘expressive’ ways, winning, among other things, an international Computer Piano Performance Contest (RENCON 2011). The most recent result in this respect (Spring 2017) is that, in a blind listening test, our computer’s performance of a piano piece was judged by human listeners as more ‘human’ than that of an actual concert pianist …

My Responsibilities

In this project, my main responsibility was research on robust and flexible live music tracking. The outcome was a highly robust algorithm that can follow piano music of any complexity.
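The idea behind live music tracking (score following) can be sketched as an online alignment problem: as each live audio feature frame arrives, a dynamic-programming column over the score features is updated and the best-matching score position is reported. The toy implementation below is my own illustration under strong simplifications (full-column updates, Euclidean frame distance), not the project’s actual algorithm, which must handle real audio, tempo changes, and errors far more carefully.

```python
# Hedged illustration of online score following via incremental DTW-style
# alignment. Function names and feature representation are hypothetical.
import numpy as np

def follow(score_feats, live_frames):
    """Yield the estimated score index after each incoming live frame.

    score_feats: (n, d) array of per-event score features.
    live_frames: iterable of d-dimensional live feature vectors.
    """
    n = len(score_feats)
    cost = None
    for frame in live_frames:
        # Distance from the live frame to every score event.
        local = np.linalg.norm(score_feats - frame, axis=1)
        if cost is None:
            cost = local.copy()          # first frame initialises the column
        else:
            prev = cost
            cost = np.empty(n)
            cost[0] = prev[0] + local[0]
            for j in range(1, n):
                # Standard DTW step: stay on event j, advance from j-1,
                # or skip ahead within the current column.
                cost[j] = local[j] + min(prev[j], prev[j - 1], cost[j - 1])
        # Report the cheapest alignment endpoint as the current position.
        yield int(np.argmin(cost))
```

A production tracker would restrict the update to a band around the current position (for constant-time updates) and smooth the reported position, but the recurrence is the heart of the method.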
