News and Events: Upcoming Events

Please note that this newsitem has been archived, and may contain outdated information or links.

22 November 2016, Logic Tea, Dieuwke Hupkes & Sara Veldhoen

Speaker: Dieuwke Hupkes & Sara Veldhoen
Title: Diagnostic classifiers: revealing how neural networks process hierarchical structure
Date: Tuesday 22 November 2016
Time: 17:00-18:00
Location: Room 1.15, Science Park 107, Amsterdam
A key property of human language is its hierarchical compositional semantics: the meaning of a larger chunk depends both on the meaning of the words (or phrases) it is composed of and on the way they are put together. This compositionality allows us to understand and produce sentences we have never heard before. Why exactly language is organised like this, and how such complex operations are implemented in our brains, remains one of the great open questions in cognitive science.

Nowadays, the most successful computational models of natural language semantics are (deep) artificial neural networks. Such networks are very powerful, but they are often described as 'black boxes', since it is usually unclear what exactly they are computing. We present a study of how (and whether) recursive and (gated) recurrent neural networks can compute the meaning of sentences from a toy language with an unambiguous (but deep) hierarchical syntax and semantics.

We find that recursive neural networks - which have a hybrid architecture shaped by a given syntactic analysis of the input sentence - can compute the meaning of sentences in a principled way that can be visually analysed and understood. Recurrent neural networks - which process their input incrementally - can also solve this task, albeit with much weaker generalisation. We present a method, based on symbolically defined hypotheses, to analyse the internal dynamics of the latter. Our findings tell us something about how neural networks may process languages with a hierarchical compositional semantics. Perhaps more importantly, our approach also shows how symbolic reasoning can be used to 'open the black box' of the many successful deep learning models in natural language processing (and other domains) when visualisation alone is not sufficient.
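The core idea of a diagnostic classifier - a simple classifier trained on a network's hidden states to test a symbolically defined hypothesis about what they encode - can be illustrated with a small sketch. The toy task, the random (untrained) recurrent network, and the hypothesis below (that hidden states encode the sign of the running sum of the inputs) are illustrative assumptions, not the speakers' actual setup:

```python
# Sketch of a diagnostic classifier: a linear probe trained on the hidden
# states of a recurrent network to test a symbolic hypothesis about what
# those states encode. The RNN here is untrained with random weights; the
# task and hypothesis are purely illustrative.
import numpy as np

rng = np.random.default_rng(0)
H, T, N = 20, 10, 500  # hidden size, sequence length, number of sequences

# A toy Elman-style RNN with fixed random weights; input is a scalar token.
W_in = rng.normal(scale=0.5, size=(H,))
W_h = rng.normal(scale=0.3, size=(H, H))

def run_rnn(tokens):
    """Return the hidden state after each timestep."""
    h = np.zeros(H)
    states = []
    for x in tokens:
        h = np.tanh(W_in * x + W_h @ h)
        states.append(h.copy())
    return states

# Collect (hidden state, symbolic label) pairs. The hypothesis we probe:
# the hidden state encodes the sign of the running sum of the inputs.
X, y = [], []
for _ in range(N):
    tokens = rng.choice([-1.0, 1.0], size=T)
    running = np.cumsum(tokens)
    for h, s in zip(run_rnn(tokens), running):
        X.append(h)
        y.append(1 if s > 0 else 0)
X, y = np.array(X), np.array(y)

# Train a logistic-regression probe by plain gradient descent.
w, b = np.zeros(H), 0.0
for _ in range(300):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    grad = p - y
    w -= 0.1 * X.T @ grad / len(y)
    b -= 0.1 * grad.mean()

acc = (((X @ w + b) > 0).astype(int) == y).mean()
print(f"probe accuracy: {acc:.2f}")
```

If the probe's accuracy is well above chance, the hypothesised symbolic quantity is linearly decodable from the hidden states; comparing such accuracies across competing hypotheses is what lets one reason symbolically about the network's internal dynamics.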

For more information, please visit the website or contact Sirin Botan, Bonan Zhao, or Julian Schloder.