Postdoctoral Researcher in Safe AI (AI4KIDS Project) at University of Luxembourg [LU]
AI4KIDS addresses the critical need for child-centric safe AI by developing a norm-first Belief–Desire–Intention (BDI) architecture in which generative models (LLMs) are constrained by machine-readable child-protection policies, ensuring purposeful, legally compliant, explainable, and auditable AI behaviour.
The role focuses on designing safe, norm-constrained AI architectures for child-centric applications, combining multi-agent systems, LLMs, and social robotics within an international team. The project also includes industrial validation on social robotics platforms (e.g., QTrobot) for deployment in educational and special-needs contexts, bringing together computational law, symbolic AI, and large-scale evaluation into a blueprint for safe child-facing AI.