BEGIN:VCALENDAR
VERSION:2.0
PRODID:ILLC Website
X-WR-TIMEZONE:Europe/Amsterdam
BEGIN:VTIMEZONE
TZID:Europe/Amsterdam
X-LIC-LOCATION:Europe/Amsterdam
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:19700329T020000
RRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=-1SU
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:19701025T030000
RRULE:FREQ=YEARLY;BYMONTH=10;BYDAY=-1SU
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
UID:/NewsandEvents/Archives/2019/newsitem/11124/10
 -September-2019-Computational-Linguistics-Seminar-
 Zeynep-Akata
DTSTAMP:20190909T143153Z
SUMMARY:Computational Linguistics Seminar\, Zeynep Akata
X-SPEAKER:Zeynep Akata (University of Amsterdam)
DTSTART;TZID=Europe/Amsterdam:20190910T160000
LOCATION:ILLC Seminar Room F1.15\, Science Park 107\, Amsterdam
DESCRIPTION:Clearly explaining a rationale for a classification 
 decision to an end-user can be as important as the decision itself. 
 Existing approaches for deep visual recognition are generally opaque 
 and do not output any justification text\; contemporary 
 vision-language models can describe image content but fail to take 
 into account class-discriminative image properties which justify 
 visual predictions. In this talk\, I will present my past and 
 current work on Zero-Shot Learning\, Vision and Language for 
 Generative Modeling and Explainable Artificial Intelligence where 
 we show (1) how to generalize image classification models to cases 
 when no visual training data is available\, (2) how to generate 
 images and image features using detailed visual descriptions\, and 
 (3) how our models focus on discriminating properties of the 
 visible object\, jointly predict a class label\, and explain 
 why/why not the predicted label is chosen for the image.
X-ALT-DESC;FMTTYPE=text/html:<p>Clearly explaining a rationale for 
 a classification decision to an end-user can be as important as the 
 decision itself. Existing approaches for deep visual recognition are 
 generally opaque and do not output any justification text\; 
 contemporary vision-language models can describe image content but 
 fail to take into account class-discriminative image properties 
 which justify visual predictions. In this talk\, I will present my 
 past and current work on Zero-Shot Learning\, Vision and Language 
 for Generative Modeling and Explainable Artificial Intelligence 
 where we show (1) how to generalize image classification models to 
 cases when no visual training data is available\, (2) how to 
 generate images and image features using detailed visual 
 descriptions\, and (3) how our models focus on discriminating 
 properties of the visible object\, jointly predict a class label\, 
 and explain why/why not the predicted label is chosen for the 
 image.</p>
URL:http://projects.illc.uva.nl/LaCo/CLS/
END:VEVENT
END:VCALENDAR
