Logical Models for Bounded Reasoners

Anthia Solaki

Abstract: This dissertation aims at the logical modelling of aspects of human reasoning, informed by empirical facts about the bounds of human cognition. We break this challenge down into three parts: Part I explains why the design of such logical systems is a worthwhile project; Parts II and III provide logical frameworks to this end, concerning single-agent and multi-agent reasoning, respectively.

In Part I, we discuss the place of logical systems for knowledge and belief in the Rationality Debate, i.e. the debate on whether humans are rational. We argue for the need to revise the standard epistemic and doxastic logics (S5 and KD45, respectively) in order to provide formal counterparts of an alternative picture of rationality, one in which empirical facts play a key role (Chapter 2).

In Part II, we design resource-sensitive logical models that explicitly encode the deductive reasoning of a bounded agent and the variety of processes underlying it. This is achieved through the introduction of a dynamic, impossible-worlds semantics with quantitative components capturing the agent's cognitive capacity and the cognitive costs of deductive inference rules with respect to certain resources, such as memory and time (Chapter 3). We then show that this type of semantics can be combined with plausibility models, which allow for (i) the study of more nuanced notions of knowledge and belief from the resource-sensitive perspective, and (ii) the study of the interplay between inference and interaction (Chapter 4). We proceed by demonstrating another contribution of this type of semantics: it can be instrumental in modelling the logical aspects of System 1 ("fast") and System 2 ("slow") cognitive processes, as per dual-process theories of reasoning (Chapter 5).

In Part III, we move from single- to multi-agent frameworks. This unfolds in three directions: (a) the formation of beliefs about others (e.g. due to observation, memory, and communication), (b) the manipulation of beliefs (e.g. via acts of reasoning about oneself and others), and (c) the effect of the above on group reasoning. Point (a) is addressed through the design of temporal models that keep track of agents' visibility and communication; the framework is applied to the formalization of paradigmatic tasks testing people's Theory of Mind, the so-called False Belief Tasks (Chapter 6). Point (b) is addressed through the design of special action models, compatible with our resource-sensitive semantics, that represent actions of deduction, introspection, and attribution which, when cognitively affordable, can refine agents' zero- and higher-order beliefs (Chapter 7). Point (c) is addressed by first examining idealizations of group epistemic notions, with an emphasis on distributed knowledge. Inspired by experiments on group reasoning, we then identify two dimensions of actualizing distributed knowledge under bounded resources, namely communication and inference. Using the toolbox introduced earlier, we build a dynamic framework with effortful actions accounting for both (Chapter 8).

We finally discuss directions for future work, touching upon the study of probabilistic reasoning and social networks, and reflect on the contribution of the thesis as a whole (Chapter 9).