Hierarchy and interpretability in neural models of language processing

Dieuwke Hupkes

Abstract: Artificial neural networks have become remarkably successful on many natural language processing tasks. In this dissertation, I explore whether these successes make them useful as explanatory models of human language processing, focusing in particular on hierarchical compositionality and recurrent neural networks (RNNs). I consider two questions:

- Are RNNs in fact capable of processing hierarchical compositional structures?
- How can we obtain insight into how they do so?

This dissertation is divided into three parts. In part one, I consider artificial languages, which provide a clean setup in which the processing of structure can be studied in isolation. In this part, I also introduce diagnostic classification -- an interpretability technique that plays an important role in this dissertation -- and reflect upon what it means for a model to be able to process hierarchical compositionality. In part two, I consider language models trained on naturalistic data. Such models have been shown to capture syntax-sensitive long-distance subject-verb relationships; I investigate how they do so, presenting detailed analyses of their inner dynamics using diagnostic classification, neuron ablation and generalised contextual decomposition. Lastly, in part three, I consider whether a model's solution can be changed through an adapted learning signal.

In summary, in this dissertation I present many different analyses concerning the abilities of RNNs to process hierarchical structure, as well as several techniques to understand these black-box models. The results sketch a positive picture of the usefulness of such models as explanatory models of the processing of languages with hierarchical compositional semantics.
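Because diagnostic classification recurs throughout the dissertation, a minimal sketch may help make the idea concrete: a simple classifier (a probe) is trained on a network's hidden states to predict a hypothesised intermediate variable; if the probe generalises to held-out states, that information is plausibly encoded in the representations. The snippet below is an illustrative sketch only, using scikit-learn's LogisticRegression as the probe and placeholder data in place of real hidden states and labels.

```python
# Minimal sketch of diagnostic classification (assumptions: RNN hidden states
# have already been collected; a scikit-learn LogisticRegression serves as the
# diagnostic classifier; the data and labels below are placeholders).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# hidden_states: one vector per time step, shape (n_timesteps, hidden_size)
# hypothesis:    value of a hypothesised intermediate variable at each step
hidden_states = np.random.randn(1000, 50)          # placeholder hidden states
hypothesis = np.random.randint(0, 2, size=1000)    # placeholder labels

X_train, X_test, y_train, y_test = train_test_split(
    hidden_states, hypothesis, test_size=0.2, random_state=0
)

# Train a simple (here: linear) diagnostic classifier on the hidden states.
probe = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# High accuracy on held-out states suggests that the hypothesised information
# is (linearly) decodable from, and thus represented in, the hidden states.
print("diagnostic accuracy:", probe.score(X_test, y_test))
```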