Modelling in the Language Sciences

Bart de Boer, Willem Zuidema

Abstract: Computers can be used for many different purposes in linguistic research. They can be used for data storage and search. They can be used as devices for speech analysis or synthesis. They can be used to present linguistic stimuli to subjects and record their responses. In all these applications, computers are used as sophisticated tools, and they are programmed according to purely practical criteria: as long as they get the job done, the researchers who use the applications do not care about the internal workings of the software. However, computing can also become the focus of linguistic research. Computers can be used to operationalize linguistic theories by implementing them as computer programs. This is done because linguistic theories may be so complex that their predictions can no longer be derived using verbal reasoning or pen-and-paper analysis. Moreover, turning a linguistic theory into a computer program forces the researcher to make her assumptions explicit. By running the program, and studying its behavior under a variety of circumstances, the researcher can test the theory against empirical findings and often discover unexpected consequences.

In this chapter, we discuss the use of computational models in the language sciences. Although formalization has had a central place since the 1950s in syntax and phonetics in particular, the last two decades have seen an explosion of interest in mathematical and computational models in all linguistic subfields: from typology to language acquisition, from discourse to phonology, linguists are increasingly viewing formal modelling as an approach that ensures the internal consistency of theories. However, although many proponents of modelling believe it makes their field more scientific and objective, it seems fair to say that the introduction of formal models has so far not led to a broad consensus among language researchers.
On the contrary, models have often been at the heart of longstanding controversies (e.g., those about formalism vs. functionalism, nativism vs. empiricism, single- vs. dual-mechanism). One reason, we believe, that modelling has played more of a divisive than a unifying role is that there has been little attention to questions of modelling methodology: What kind of lessons can we expect to learn from a model? What makes a good or a bad model? How do different models of the same linguistic phenomenon relate to each other? How could models of different phenomena fit together? Thinking about such questions leads one to systematically consider the role of specific models in a given subfield: Are they consistent with and complementary to each other? Are the assumptions that go into a particular model, if not (yet) supported by empirical findings, made plausible by results from other models? The situation is not uniform across all linguistic subfields, of course, but we observe that in fields where one or two of these questions have received a lot of attention, the others tend to be ignored all the more. For instance, in syntactic theory there has been an enormous amount of work (of impressive mathematical sophistication) on comparing different syntactic frameworks and their ability to model native speakers' intuitions about the grammaticality of carefully selected (but often highly contrived) sentences. However, in our view, this field has paid far too little attention to whether that is really the most important criterion for evaluating models of language, and to the relations between such frameworks and cognitive and neural models. As we will emphasize in this chapter, the ability to reproduce a selected set of empirical phenomena is certainly not the only criterion for a good model.
Because it is impossible to cover all linguistic subfields, we will make our general points about methodology concrete using examples from two particular domains: the evolution of speech and the learnability of syntax. In both fields computational modelling has played an important role, but in both we also believe progress has been hampered by a lack of attention to modelling methodology, and to the questions about the relations between existing models that immediately arise when one takes the view on modelling that we develop in this chapter. For the success of modelling approaches in linguistic research to be sustained, it is crucial that models start living up to their promise: modellers must make explicit how their models fit in with other modelling and empirical work, and how their modelling results affect judgments of the plausibility of existing hypotheses in the field to which they wish to contribute. Moreover, they must do so based on careful consideration of other work, without overstating their results or misusing the prestige that comes with mathematical and computational approaches.

In section 2 we will start with some considerations about the methodology of modelling in linguistics, and introduce the concepts of model sequencing and model parallelization. In sections 3 and 4 we will illustrate these concepts with two case studies, on modelling in the evolution of speech and the learnability of syntax respectively. In section 5 we will then draw some general lessons from these case studies, and sketch an agenda for future research in the computational modelling of language.