Learning models defining recursive computations, like automata and formal grammars, is at the core of the field called Grammatical Inference (GI). The expressive power of these models and the complexity of the associated computational problems are major research topics within mathematical logic and computer science. Historically, there has been little interaction between the GI and ICALP communities, though recently some important results have started to bridge the gap between both worlds, including applications of learning to formal verification and model checking, and (co-)algebraic formulations of automata and grammar learning algorithms.

The goal of this workshop is to bring together experts in logic who could benefit from grammatical inference tools, and researchers in grammatical inference who could find in logic and verification new fruitful applications for their methods. The LearnAut workshop will consist of 3 invited talks and 14 contributed talks from researchers whose submitted works were selected after a double-blind peer-review phase. A significant amount of time will be kept for interactions between participants.

URL: https://learnaut22.github.io
We invite submissions of recent work, including preliminary research, related to the theme of the workshop. The Program Committee will select a subset of the abstracts for oral presentation. At least one author of each accepted abstract is expected to present it at the workshop (in person or virtually). Note that accepted papers will be made available on the workshop website but will not be part of formal proceedings (i.e., LearnAut is a non-archival workshop). Submissions, in the form of extended abstracts, must be at most 8 single-column pages long (plus at most four pages for bibliography and possible appendixes) and must be submitted in the JMLR/PMLR format. We do accept submissions of work recently published or currently under review.