Please note that this news item has been archived and may contain outdated information or links.
25 October 2023, Encoding Polyphony – Symposium in honor of Simha Arom
This Symposium is part of a four-day event with the following components:
October 24, 16:00, UT, 301. Film screening of SIMHA (Jérôme Blumberg, 2015), registration required via s.muziekwetenschap at uva.nl
October 25, 14:00, UT, 101A. Encoding Polyphony – A Symposium in honor of Simha Arom (see below)
October 26, 16:30, UT, Theatre Hall. African Polyrhythm in the Work of György Ligeti. Lecture demonstration with Simha Arom and a percussion ensemble led by Julien André; online registration required via this link.
October 27 (participation by invitation only). Workshop with invited guests of the NWO-OC project Unraveling our Capacity for Music
Program of the Symposium on October 25
14:00 doors open
14:30 Julia Kursell: Welcome and Introduction
15:15 Christoph Finkensiep: Beyond Voices: a New Take on Polyphonic Structure in Western Tonal Music
16:15 Coffee break
16:30 Frank Scherbaum: Searching for Structural Patterns in Traditional Georgian Vocal Music: From Visual Analysis to Generative Grammars
18:00 End of the Symposium
14:30 Julia Kursell: Welcome and Introduction
Decoding Polyrhythm and Polyphony – Simha Arom's Work on Notating, Visualizing and Understanding Complex Polyphony
Simha Arom's work bridges the fields of musicology we pursue at the University of Amsterdam: cultural musicology, cognitive and computational musicology, and the study of the Western tradition of composed music up to the present day. His work shows how many pressing questions of today's research can be tackled. Can the ethnographer interact with those musicking without imposing a hierarchy between the researcher and the musicking subject? Is musical literacy a specificity of Western music, and how does musical knowledge relate to this assumption? How can the study of music contribute to other fields, be it linguistics, anthropology, or computational modelling, among others? The introduction contextualizes Arom's work within some strands of the history of the humanities and sciences. It will pay special attention to his work on and with musical notation.
15:15 Christoph Finkensiep:
Beyond Voices: a New Take on Polyphonic Structure in Western Tonal Music
A rich and complex polyphonic structure is often taken to be one of the foundations of the Western tonal tradition. On the usual understanding, polyphony is based on a set of concurrent voices, each of which has its own identity and interacts with the others. This notion is challenged by a number of phenomena in free polyphony (i.e. music without explicit voices), but also in monophony and explicit polyphony. This talk puts forward the argument that polyphony is better understood as a network of functional relations. A formal model of this network and a corresponding notation are presented that can express phenomena such as implicit polyphony, the realization of latent structures (e.g., harmonies and schemata), and the origin of harmonic syntax in voice leading.
16:30 Frank Scherbaum:
Searching for Structural Patterns in Traditional Georgian Vocal Music: From Visual Analysis to Generative Grammars
This presentation will report on an ongoing collaboration with Simha Arom to study the 'grammar' of Georgian traditional vocal music. The aim of this work is to investigate what can be learned from the analysis of musical scores in this context. To this end, two complementary conceptual strategies were used, one graphical and the other algorithmic. The graphical approaches are very efficient for the visual analysis of individual songs; for the analysis of song collections, however, algorithmic approaches are essential. Therefore, numerous natural language processing (NLP) tools such as n-grams, Kohonen's self-expanding grammar, and Generative Pretrained Transformer (GPT) networks have been investigated for corpus analysis. Training a generative language model on our whole corpus shows that, despite the small size of the corpus, the structure of the resulting embedding space of the GPT network already shows signs of semantic structures (learned from syntax).
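As a purely illustrative sketch of the kind of n-gram corpus analysis mentioned in the abstract (not the project's actual pipeline or data), the following Python snippet counts recurring note bigrams in a toy symbolic sequence; the voice and pitch names are invented for the example.

```python
from collections import Counter

def ngram_counts(sequence, n):
    """Count all contiguous n-grams in a symbolic sequence (e.g. pitch names)."""
    return Counter(tuple(sequence[i:i + n]) for i in range(len(sequence) - n + 1))

# Toy voice rendered as pitch names (illustrative data only).
voice = ["G", "A", "B", "A", "G", "A", "B", "A", "G"]
print(ngram_counts(voice, 2).most_common(3))
# [(('G', 'A'), 2), (('A', 'B'), 2), (('B', 'A'), 2)]
# Recurring bigrams like these are one simple cue to structural patterns in a corpus.
```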