The Language and Music Cognition unit uses computational models and artificial intelligence to study questions of semantics and meaning, both linguistic and musical, and tests the behavioural implications of these models for speakers, signers, musicians, readers, and listeners.
One important line of research focusses on our capacity for music (i.e., musicality) and its underlying biology, where musicality is defined as a natural, spontaneously developing set of traits that are based on and constrained by our cognitive abilities. The group also explores the learnability and evolution of language, in particular how the uniquely human propensity to use complex expressions to convey complex meanings came about. Complexity itself is a priority for the unit as a means of understanding core cognitive abilities such as language learning, comprehension, and reasoning. Other work explores the cognitive boundaries between language and music, for example, delineating the conditions under which the speech-to-song illusion can occur.
Machine learning and representations are key to several unit members’ methodologies, for example, measuring the musical characteristics of ‘catchy’ music, modelling visually grounded language use, or explaining linguistic universals. The group works with diverse and multimodal data: symbolic and subsymbolic, correlational and experimental, audio/video and text.
- John Ashley Burgoyne
- Katrin Schulz (deputy)
ILLC members connected to the unit by additional affiliation
- Raquel Fernández (NLP&DH)
- Karolina Krzyzanowska (EPS)
- Sandro Pezzelle (NLP&DH)
- Robert van Rooij (FSPL)
- Jelle Zuidema (NLP&DH)