BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//ILLC//Website//EN
X-WR-TIMEZONE:Europe/Amsterdam
BEGIN:VTIMEZONE
TZID:Europe/Amsterdam
X-LIC-LOCATION:Europe/Amsterdam
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:19700329T020000
RRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=-1SU
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:19701025T030000
RRULE:FREQ=YEARLY;BYMONTH=10;BYDAY=-1SU
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
UID:/NewsandEvents/Archives/2024/newsitem/15011/4-
 June-2024-Computational-Linguistics-Seminar-Tanise
 -Ceron
DTSTAMP:20240531T001327Z
SUMMARY:Computational Linguistics Seminar\, Tanise Ceron
X-SPEAKER:Tanise Ceron (University of Stuttgart)
DTSTART;TZID=Europe/Amsterdam:20240604T160000
LOCATION:Room L3.36\, ILLC Lab42\, Science Park 900\, Amsterdam / Online
DESCRIPTION:Due to the widespread use of large language models (LLMs)
  in ubiquitous systems\, we need to understand whether they embed a
  specific worldview and what these views reflect. Recent studies
  report that\, prompted with political questionnaires\, LLMs show
  left-liberal leanings. However\, it is as yet unclear whether these
  leanings are reliable (robust to prompt variations) and whether the
  leaning is consistent across policy domains. In this talk\, I will
  present the results of our study\, in which we propose a series of
  tests that assess the reliability and consistency of LLMs' stances
  on political statements\, based on a dataset of voting-advice
  questionnaires collected from seven EU countries and annotated for
  policy domains. We then evaluate LLMs ranging in size from 7B to 70B
  parameters and observe to what extent they are consistent in terms
  of political worldview and political orientation. Finally\, I’ll
  discuss the importance of taking these biases into account\, and how
  they raise relevant design questions in downstream applications.
X-ALT-DESC;FMTTYPE=text/html:\n  <p>Due to the widespread use of
  large language models (LLMs) in ubiquitous systems\, we need to
  understand whether they embed a specific worldview and what these
  views reflect. Recent studies report that\, prompted with political
  questionnaires\, LLMs show left-liberal leanings. However\, it is as
  yet unclear whether these leanings are reliable (robust to prompt
  variations) and whether the leaning is consistent across policy
  domains. In this talk\, I will present the results of our study\, in
  which we propose a series of tests that assess the reliability and
  consistency of LLMs' stances on political statements\, based on a
  dataset of voting-advice questionnaires collected from seven EU
  countries and annotated for policy domains. We then evaluate LLMs
  ranging in size from 7B to 70B parameters and observe to what extent
  they are consistent in terms of political worldview and political
  orientation. Finally\, I’ll discuss the importance of taking these
  biases into account\, and how they raise relevant design questions
  in downstream applications.</p>\n
URL:https://projects.illc.uva.nl/LaCo/CLS/
END:VEVENT
END:VCALENDAR
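
A minimal sketch of how a consumer could read this feed, assuming only
the Python standard library (3.9+ for zoneinfo); the file name
"event.ics" and the choice of printed properties are illustrative, not
part of the feed itself:

import re
from datetime import datetime
from zoneinfo import ZoneInfo

def unfold(ics_text):
    r"""Undo RFC 5545 line folding: a physical line beginning with a
    space or tab continues the previous line, minus that one
    whitespace character."""
    lines = []
    for raw in ics_text.splitlines():
        if raw[:1] in (" ", "\t") and lines:
            lines[-1] += raw[1:]
        else:
            lines.append(raw)
    return lines

def unescape(value):
    r"""Undo TEXT escaping: \n -> newline; \, \; \\ -> literal char."""
    return re.sub(r"\\([\\;,nN])",
                  lambda m: "\n" if m.group(1) in "nN" else m.group(1),
                  value)

with open("event.ics", encoding="utf-8") as f:
    for line in unfold(f.read()):
        # Split "NAME;PARAM=...:VALUE" on the first colon; this ignores
        # the rare case of a colon inside a quoted parameter value.
        name_params, _, value = line.partition(":")
        name = name_params.split(";")[0]
        if name in ("SUMMARY", "LOCATION", "DESCRIPTION"):
            print(f"{name}: {unescape(value)}")
        elif name == "DTSTART" and "TZID=Europe/Amsterdam" in name_params:
            local = datetime.strptime(value, "%Y%m%dT%H%M%S")
            print("DTSTART:", local.replace(tzinfo=ZoneInfo("Europe/Amsterdam")))

Attaching the TZID parameter to DTSTART, rather than treating it as a
bare local time, is what the VTIMEZONE block above supports: the event
keeps meaning 16:00 Amsterdam time for clients in any zone.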
