BEGIN:VCALENDAR
VERSION:2.0
PRODID:ILLC Website
X-WR-TIMEZONE:Europe/Amsterdam
BEGIN:VTIMEZONE
TZID:Europe/Amsterdam
X-LIC-LOCATION:Europe/Amsterdam
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:19700329T020000
RRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=-1SU
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:19701025T030000
RRULE:FREQ=YEARLY;BYMONTH=10;BYDAY=-1SU
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
UID:/NewsandEvents/Archives/2002/newsitem/333/18-D
 ecember-2002-Music-AI-Colloquium-Taylan-Cemgil
DTSTAMP:20021128T000000Z
SUMMARY:Music & AI Colloquium\, Taylan Cemgil
ATTENDEE;ROLE=Speaker:Taylan Cemgil (Nijmegen)
DTSTART;TZID=Europe/Amsterdam:20021218T150000
DTEND;TZID=Europe/Amsterdam:20021219T000000
LOCATION:Nieuwe Achtergracht 166\, room B235\, Amst
 erdam
DESCRIPTION:Automatic music transcription refers t
 o extraction of a human readable and interpretable
  description from a recording of a musical perform
 ance. Traditional music notation is such a descrip
 tion that lists the pitch levels (notes) and corre
 sponding timestamps. Such a representation would b
 e useful in several applications such as interacti
 ve music performance, information retrieval (Music
 -IR) and content description of musical material i
 n large music databases. In this talk, I will focu
 s on a subproblem in Music-IR, where I assume that
  exact timing information of notes is available, f
 or example as a stream of MIDI events from a digit
 al keyboard. I will present a probabilistic genera
 tive model for timing deviations in expressive mus
 ic performance. The structure of the proposed mode
 l will turn out to be a switching state space mode
 l (switching Kalman filter). The switch variables 
 correspond to discrete note locations as in a musi
 cal score. The continuous hidden variables denote 
 the tempo.\n\nGiven the model, we can formulate two
  well-known music recognition problems, namely tem
 po tracking and automatic transcription (rhythm qu
 antization) as filtering and maximum a posteriori 
 (MAP) state estimation tasks. Unfortunately, exact
  computation of posterior features such as the MAP
  state is intractable in this model class, so we r
 esort to Monte Carlo methods for integration and o
 ptimization. I have compared Markov Chain Monte Ca
 rlo (MCMC) methods (such as Gibbs sampling, simula
 ted annealing and iterative improvement) and seque
 ntial Monte Carlo methods (particle filters). Simul
 ation results suggest that the sequential methods p
 erform better. The methods can be applied in both on
 line and batch scenarios (such as tempo tracking a
 nd transcription) and are thus potentially useful 
 in a number of music applications such as adaptive
  automatic accompaniment, score typesetting and mu
 sic information retrieval.
X-ALT-DESC;FMTTYPE=text/html:\n      <p>\n        
 Automatic music transcription refers to extraction
  of a human\n        readable and interpretable de
 scription from a recording of a\n        musical p
 erformance. Traditional music notation is such a\n
         description that lists the pitch levels (n
 otes) and\n        corresponding timestamps. Such 
 a representation would be\n        useful in sever
 al applications such as interactive music\n       
  performance, information retrieval (Music-IR) and
  content\n        description of musical material 
 in large music databases. In\n        this talk, I
  will focus on a subproblem in Music-IR, where I\n
         assume that exact timing information of no
 tes is available,\n        for example as a stream
  of MIDI events from a digital\n        keyboard. 
 I will present a probabilistic generative model fo
 r\n        timing deviations in expressive music p
 erformance. The\n        structure of the proposed
  model will turn out to be a\n        switching st
 ate space model (switching Kalman filter). The\n  
       switch variables correspond to discrete note
  locations as in a\n        musical score. The con
 tinuous hidden variables denote the\n        tempo
 .\n      </p>\n      <p>Given the model, we can fo
 rmulate two well-known music\n      recognition pr
 oblems, namely tempo tracking and automatic\n     
  transcription (rhythm quantization) as filtering 
 and maximum a\n      posteriori (MAP) state estima
 tion tasks. Unfortunately, exact\n      computatio
 n of posterior features such as the MAP state is\n
       intractable in this model class, so we resor
 t to Monte Carlo\n      methods for integration an
 d optimization. I have compared Markov\n      Chai
 n Monte Carlo (MCMC) methods (such as Gibbs sampli
 ng,\n      simulated annealing and iterative impro
 vement) and sequential\n      Monte Carlo methods 
 (particle filters). Simulation results\n      sugg
 est that the sequential methods perform better. The m
 ethods\n      can be applied in both online and bat
 ch scenarios (such as tempo\n      tracking and tr
 anscription) and are thus potentially useful in a\
 n      number of music applications such as adapti
 ve automatic\n      accompaniment, score typesetti
 ng and music information\n      retrieval.\n      
 </p>\n    
URL:https://www.illc.uva.nl/NewsandEvents/Archives/2002/newsitem/333/18-D
 ecember-2002-Music-AI-Colloquium-Taylan-Cemgil
END:VEVENT
END:VCALENDAR
