BEGIN:VCALENDAR
VERSION:2.0
PRODID:ILLC Website
X-WR-TIMEZONE:Europe/Amsterdam
BEGIN:VTIMEZONE
TZID:Europe/Amsterdam
X-LIC-LOCATION:Europe/Amsterdam
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:19700329T020000
RRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=-1SU
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:19701025T030000
RRULE:FREQ=YEARLY;BYMONTH=10;BYDAY=-1SU
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
UID:/NewsandEvents/Archives/2025/newsitem/15354/11
 -February-2025-Computational-Linguistics-Seminar-M
 artha-Lewis
DTSTAMP:20250206T164745Z
SUMMARY:Computational Linguistics Seminar\, Martha Lewis
ATTENDEE;ROLE=Speaker:Martha Lewis (ILLC, Universi
 ty of Amsterdam)
DTSTART;TZID=Europe/Amsterdam:20250211T160000
LOCATION:Room L3.33\, ILLC Lab42\, Science Park 900\, Amsterdam
DESCRIPTION:Recent neural approaches to modelling language and concepts
  have proven quite effective\, with a proliferation of large models
  trained on correspondingly massive datasets. However\, these models
  still fail on some tasks that humans\, and symbolic approaches\, can
  easily solve. Large neural models are also\, to a certain extent\,
  black boxes - particularly those that are proprietary. There is
  therefore a need to integrate compositional and neural approaches\,
  firstly to potentially improve the performance of large neural
  models\, and secondly to analyze and explain the representations
  that these systems are using. In this talk I will present results
  showing that large neural models can fail at tasks that humans are
  able to do\, and discuss alternative\, theory-based approaches that
  have the potential to perform more strongly. I will give
  applications in language\, reasoning\, and vision. Finally\, I will
  present some future directions in understanding the types of
  reasoning or symbol manipulation that large neural models may be
  performing.
X-ALT-DESC;FMTTYPE=text/html:\n  <p>Recent neural approaches to
  modelling language and concepts have proven quite effective\, with
  a proliferation of large models trained on correspondingly massive
  datasets. However\, these models still fail on some tasks that
  humans\, and symbolic approaches\, can easily solve. Large neural
  models are also\, to a certain extent\, black boxes - particularly
  those that are proprietary. There is therefore a need to integrate
  compositional and neural approaches\, firstly to potentially
  improve the performance of large neural models\, and secondly to
  analyze and explain the representations that these systems are
  using. In this talk I will present results showing that large
  neural models can fail at tasks that humans are able to do\, and
  discuss alternative\, theory-based approaches that have the
  potential to perform more strongly. I will give applications in
  language\, reasoning\, and vision. Finally\, I will present some
  future directions in understanding the types of reasoning or
  symbol manipulation that large neural models may be
  performing.</p>\n
URL:https://projects.illc.uva.nl/LaCo/CLS/
END:VEVENT
END:VCALENDAR
