BEGIN:VCALENDAR
VERSION:2.0
PRODID:ILLC Website
X-WR-TIMEZONE:Europe/Amsterdam
BEGIN:VTIMEZONE
TZID:Europe/Amsterdam
X-LIC-LOCATION:Europe/Amsterdam
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:19700329T020000
RRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=-1SU
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:19701025T030000
RRULE:FREQ=YEARLY;BYMONTH=10;BYDAY=-1SU
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
UID:/NewsandEvents/Archives/2025/newsitem/15353/14
 -January-2025-Computational-Linguistics-Seminar-An
 a-Lucic
DTSTAMP:20250113T134737Z
SUMMARY:Computational Linguistics Seminar\, Ana Lucic
ATTENDEE;ROLE=X-SPEAKER:Ana Lucic (ILLC\, University of Amsterdam)
DTSTART;TZID=Europe/Amsterdam:20250114T160000
LOCATION:Room L3.36 at LAB42\, Amsterdam Science Park.
DESCRIPTION:Model explainability has become an important problem in
  artificial intelligence (AI) due to the increased effect that
  algorithmic predictions have on humans. Explanations can help users
  understand not only why AI models make certain predictions\, but
  also how these predictions can be changed via counterfactual
  explanations. Given a data point and a trained model\, we want to
  find the minimal perturbation to the input such that the prediction
  changes. We frame the problem of finding counterfactual explanations
  as a gradient-based optimization task and first focus on tree
  ensembles. We then extend our method to accommodate graph neural
  networks (GNNs)\, given the increasing promise of GNNs in real-world
  applications such as fake news detection and molecular simulation.
X-ALT-DESC;FMTTYPE=text/html:<p>Model explainability has become an
  important problem in artificial intelligence (AI) due to the
  increased effect that algorithmic predictions have on humans.
  Explanations can help users understand not only why AI models make
  certain predictions\, but also how these predictions can be changed
  via counterfactual explanations. Given a data point and a trained
  model\, we want to find the minimal perturbation to the input such
  that the prediction changes. We frame the problem of finding
  counterfactual explanations as a gradient-based optimization task
  and first focus on tree ensembles. We then extend our method to
  accommodate graph neural networks (GNNs)\, given the increasing
  promise of GNNs in real-world applications such as fake news
  detection and molecular simulation.</p>
URL:https://projects.illc.uva.nl/LaCo/CLS/
END:VEVENT
END:VCALENDAR
