Funded PhD in Argumentation and Explainable AI

Funded 3-year PhD in computer science - artificial intelligence - explainable reasoning
Keywords: Argumentative AI, Explainable Reasoning, Human-Centered AI, Argument Influence
Starting date: October 2025
Location: Centre de Recherche en Informatique de Lens (CRIL UMR 8188 - CNRS & Université d'Artois), France
Supervisors: Dr Srdjan Vesic (CNRS, CRIL, Université d'Artois) and Dr Mathieu Hainselin (CRP-CPO, Université de Picardie Jules Verne)
Description:
Computational argumentation theory provides essential tools for analyzing structured debates, with applications in AI-assisted decision-making systems, online discussion platforms, and human-AI interaction. In this context, explainability is critical: systems must not only determine which arguments are accepted under abstract semantics, but also make this reasoning transparent and cognitively accessible to human users. Yet existing semantics, typically grounded in logic-based frameworks and Dung's abstract argumentation, may fail to align with human intuitions, limiting both their usability and trustworthiness in practice.

This fully funded PhD thesis will focus on improving the alignment between formal acceptability semantics and human reasoning. Research objectives include:
• Evaluating whether current principled constraints are perceived as intuitive by users
• Assessing their explanatory power, particularly in helping users grasp why certain arguments are accepted or rejected
• Formalizing new principles or designing alternative semantics to better capture observed reasoning patterns
• Investigating quantitative impact measures, which capture how much individual arguments influence the acceptability of others, and evaluating how such influence is perceived and interpreted by users (a toy illustration follows this list)
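To make the last objective concrete, here is a minimal, purely illustrative Python sketch: it computes the grounded extension of a Dung-style argumentation framework and a naive removal-based influence measure. The function names and the measure itself are assumptions for illustration, not the project's actual formalism.

# Illustrative sketch only: Dung-style grounded semantics plus a naive,
# removal-based influence measure (an assumption, not the project's definition).

def defended(attacks, s, a):
    # Set `s` defends argument `a` if every attacker of `a`
    # is itself attacked by some member of `s`.
    return all(any((d, b) in attacks for d in s)
               for (b, target) in attacks if target == a)

def grounded_extension(args, attacks):
    # Least fixed point of the characteristic function,
    # starting from the empty set.
    s = set()
    while True:
        nxt = {a for a in args if defended(attacks, s, a)}
        if nxt == s:
            return s
        s = nxt

def removal_influence(args, attacks, a):
    # Toy impact measure: compare the accepted arguments
    # before and after removing argument `a`.
    rest = args - {a}
    sub = {(x, y) for (x, y) in attacks if a not in (x, y)}
    return grounded_extension(args, attacks), grounded_extension(rest, sub)

# Example: a attacks b, b attacks c.
A = {"a", "b", "c"}
R = {("a", "b"), ("b", "c")}
print(removal_influence(A, R, "a"))  # ({'a', 'c'}, {'b'})

In this example, removing argument a flips b from rejected to accepted: exactly the kind of influence effect that such measures quantify, and whose perception by human users the project proposes to study.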
The project is highly interdisciplinary, involving close collaboration with psychologists and cognitive scientists and combining formal modeling, empirical user studies, and, potentially, software prototyping. The overarching goal is to contribute to the development of more explainable, intuitive, and responsible AI systems, grounded in both logical foundations and empirical validation.
A good level of English is required. Applicants should have a strong background in logic, AI, computer science, or a related field. An interest in cognitive science is welcome.
The PhD includes full funding, collaboration opportunities, and publication support.
For more details about the thesis and to apply, send an email to vesic at cril.fr