1st Semester 2017/18: Learning Compositionality in Neural Networks

Instructor(s): Elia Bruni, Giovanni Cinà and Dieuwke Hupkes
If you are interested in this project, please contact the instructor(s) by email.

Introduction In the past few years, machine learning has made enormous strides: deep neural networks are currently the most successful models for a variety of tasks, such as machine translation, language modelling, natural language understanding and image recognition [1]. Whether neural networks can also learn formal reasoning has, by contrast, received relatively little attention. The area at the intersection of logic and machine learning is now attracting more and more researchers; most of the work so far has focused on enhancing automated theorem provers and proof assistants with machine-learning techniques, but progress has been slowed by the lack of appropriate datasets on which to train networks.

What will you do? In this (research) project, we will focus on the question: Can neural networks learn formal reasoning? The first step in this project will be to construct a dataset of proofs in classical propositional logic. Creating such a dataset requires domain knowledge, especially since the choices made in its design determine to a large extent what kinds of questions can be answered with it. You will have to think carefully about the contents of the dataset and write parts of the code to generate and analyse the data, tapping into available resources such as repositories of formulas used by SAT solvers and existing theorem provers. As of today no such dataset is publicly available, so this may very well be a valuable contribution to the community working on these issues, possibly resulting in a publication (see e.g. [2]).
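To make the dataset-generation stage concrete, here is a minimal sketch in Python (the language the project will use) of how one might generate random propositional formulas and label them by validity via truth tables. The grammar, variable set, and function names are illustrative assumptions, not the project's actual design:

```python
import itertools
import random

# Illustrative grammar: three variables and the usual connectives
# (a real dataset would need a richer, carefully designed grammar).
VARIABLES = ["p", "q", "r"]
BINARY_OPS = ["and", "or", "->"]

def random_formula(depth):
    """Build a random formula as a nested tuple, e.g. ('->', 'p', ('or', 'q', 'r'))."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(VARIABLES)
    op = random.choice(BINARY_OPS + ["not"])
    if op == "not":
        return ("not", random_formula(depth - 1))
    return (op, random_formula(depth - 1), random_formula(depth - 1))

def evaluate(formula, valuation):
    """Evaluate a formula under a truth-value assignment for the variables."""
    if isinstance(formula, str):
        return valuation[formula]
    op = formula[0]
    if op == "not":
        return not evaluate(formula[1], valuation)
    left, right = evaluate(formula[1], valuation), evaluate(formula[2], valuation)
    if op == "and":
        return left and right
    if op == "or":
        return left or right
    return (not left) or right  # material implication '->'

def is_tautology(formula):
    """Check validity by brute-force enumeration of all valuations."""
    return all(
        evaluate(formula, dict(zip(VARIABLES, values)))
        for values in itertools.product([True, False], repeat=len(VARIABLES))
    )
```

Truth-table enumeration only scales to a handful of variables; for larger formulas one would delegate the labelling to an off-the-shelf SAT solver or theorem prover, as the paragraph above suggests.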

In the second stage of the project, you will create baseline (neural) models to enable further studies of the dataset. Depending on how quickly these first two stages are completed, more advanced modelling is also a possibility, taking the first steps towards building and testing a neural network that learns logical reasoning.
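As an illustration of the kind of preprocessing any neural baseline would need, the sketch below maps tokenised formulas to fixed-length integer sequences ready to be fed into an embedding layer. The vocabulary, padding token, and sequence length are assumptions chosen for the example:

```python
# Toy vocabulary: special tokens, connectives, and the variables used above.
# A real project would build this from the generated dataset.
VOCAB = {tok: i for i, tok in enumerate(
    ["<pad>", "(", ")", "and", "or", "not", "->", "p", "q", "r"]
)}

def encode(tokens, max_len=16):
    """Map a tokenised formula to a fixed-length list of integer ids,
    right-padded with the <pad> token."""
    ids = [VOCAB[t] for t in tokens]
    if len(ids) > max_len:
        raise ValueError("formula longer than max_len")
    return ids + [VOCAB["<pad>"]] * (max_len - len(ids))
```

For example, `encode("( p or ( not p ) )".split())` yields the ids for the eight tokens followed by eight padding ids. Such sequences can then be batched and passed to a recurrent or convolutional baseline in PyTorch or TensorFlow.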

Requirements: This project has a research character, and you will work closely in a small group, together with your supervisors. We will work with Python. To successfully complete the project, the students in the team should (collectively) have the following skills:

  • Strong background in propositional logic;
  • Some knowledge of Python and relevant libraries (e.g. NumPy, possibly PyTorch/TensorFlow or related), or willingness to learn a bit before the project starts;
  • Familiarity with neural networks.
Not everyone is required to have all the skills above, as long as someone else in the group does. If you are excited about the theoretical foundations of neural networks and their link with logic, and you think you have something to contribute: ask us!

References

[1] LeCun, Yann, Yoshua Bengio, and Geoffrey Hinton. Deep learning. Nature 521 (2015): 436-444.
[2] Kaliszyk, Cezary, Francois Chollet, and Christian Szegedy. HolStep: A Machine Learning Dataset for Higher-order Logic Theorem Proving. ICLR 2017.