Please note that this news item has been archived and may contain outdated information or links.

3 December 2015, Computational Linguistics Seminar, Grzegorz Chrupała

Speaker: Grzegorz Chrupała (Tilburg)
Title: Learning visually grounded linguistic representations
Date: Thursday 3 December 2015
Time: 16:00
Location: ILLC Common Room, Science Park 107, Amsterdam

Abstract:

Most research into learning linguistic representations focuses on the distributional hypothesis and exploits linguistic context to embed words in a semantic vector space. In this talk I address two important but often neglected aspects of language learning: compositionality and grounding. Words are important building blocks of language, but what makes language unique is putting them together: how can we build meaning representations of phrases and whole sentences out of representations of words? And how can we make sure that these representations connect to the extralinguistic world that we perceive and interact with? I will present a multi-task gated recurrent neural network model which sequentially processes the words in a sentence and builds a representation of its meaning while making concurrent predictions about (a) which words are to follow and (b) the features of the corresponding visual scene. Learning is driven by feedback on this multi-task objective. I evaluate the induced representations on tasks such as image search, paraphrasing, and textual inference, and present quantitative and qualitative analyses of how they encode certain aspects of language structure.
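To make the architecture described above concrete, here is a minimal sketch in PyTorch of a multi-task gated recurrent network: a shared GRU encoder reads a sentence word by word, a language-modelling head predicts the next word at each step, and a visual head predicts the feature vector of the paired image from the final hidden state. This is an illustration of the general idea, not the speaker's actual implementation; all dimensions, the equal loss weighting, and the use of MSE for the visual target are assumptions.

```python
import torch
import torch.nn as nn

class MultiTaskGRU(nn.Module):
    """Sketch of a GRU encoder with two task-specific heads (dimensions assumed)."""

    def __init__(self, vocab_size=10000, embed_dim=256, hidden_dim=512, visual_dim=4096):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.gru = nn.GRU(embed_dim, hidden_dim, batch_first=True)  # shared sentence encoder
        self.lm_head = nn.Linear(hidden_dim, vocab_size)            # (a) next-word prediction
        self.visual_head = nn.Linear(hidden_dim, visual_dim)        # (b) scene-feature prediction

    def forward(self, tokens):
        states, last = self.gru(self.embed(tokens))      # states: (B, T, H); last: (1, B, H)
        word_logits = self.lm_head(states)               # next-word logits at every time step
        visual_pred = self.visual_head(last.squeeze(0))  # image features from final hidden state
        return word_logits, visual_pred

# One training step: feedback comes from the joint multi-task objective.
model = MultiTaskGRU()
tokens = torch.randint(0, 10000, (8, 12))        # batch of token-id sentences (dummy data)
next_tokens = torch.randint(0, 10000, (8, 12))   # gold next words (input shifted by one)
img_feats = torch.randn(8, 4096)                 # e.g. CNN features of the paired images

word_logits, visual_pred = model(tokens)
loss = (nn.functional.cross_entropy(word_logits.reshape(-1, 10000), next_tokens.reshape(-1))
        + nn.functional.mse_loss(visual_pred, img_feats))
loss.backward()
```

Because the two heads share the recurrent encoder, the hidden state must encode both what word comes next and what the sentence depicts, which is what grounds the induced representations in the visual world.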
