Generalization in Artificial Language Learning: Modelling the Propensity to Generalize

Raquel G. Alhama, Willem Zuidema

Abstract: Experiments in Artificial Language Learning have revealed much about the ability of human adults to generalize to novel grammatical instances (i.e., instances consistent with a familiarization pattern). Notably, generalization appears to be negatively correlated with the amount of exposure to the artificial language, a fact that has been claimed to run contrary to the predictions of a statistical mechanism (Peña, Bonatti, Nespor, & Mehler, 2002; Endress & Bonatti, 2007). In this paper, we propose to model generalization as a three-step process involving: (i) memorization of segments of the input, (ii) computation of the total probability assigned to unseen sequences, and (iii) distribution of this probability among particular unseen sequences. With two probabilistic models covering steps (i) and (ii), we can already explain relevant aspects of the experimental results. We also demonstrate that the claim about statistical mechanisms does not hold when generalization is framed as this three-step process; concretely, a statistical model of step (ii) can explain the decrease of generalization with exposure time.
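
To make step (ii) concrete, the sketch below (an illustration, not the authors' actual model) shows one way a purely statistical estimate of the probability mass reserved for unseen sequences can decrease with exposure: a Good-Turing-style estimate assigns mass N1/N to novel types, where N1 is the number of segment types seen exactly once and N is the total number of observations. As familiarization lengthens, previously seen segments recur, N1/N shrinks, and so does the predicted propensity to generalize. The syllable strings and function name are hypothetical, not the experimental stimuli.

```python
from collections import Counter

def unseen_probability_mass(observed_segments):
    """Good-Turing estimate of the total probability assigned to
    never-seen sequences: N1 / N, where N1 is the number of segment
    types observed exactly once and N is the total observation count."""
    counts = Counter(observed_segments)
    n1 = sum(1 for c in counts.values() if c == 1)
    n = sum(counts.values())
    return n1 / n if n else 1.0

# Toy familiarization streams (made-up syllable strings, not Peña et
# al.'s stimuli). With short exposure every segment is a hapax, so the
# mass reserved for novel sequences is maximal; with longer exposure
# the same segments recur and that mass shrinks, mirroring the reported
# decrease of generalization with exposure time.
short_exposure = ["puliki", "beraga", "talidu"]
long_exposure = short_exposure * 10 + ["nofemu"]

print(unseen_probability_mass(short_exposure))  # 1.0   (all hapaxes)
print(unseen_probability_mass(long_exposure))   # ~0.03 (one hapax in 31)
```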