Universiteit van Amsterdam


Institute for Logic, Language and Computation

Please note that this newsitem has been archived, and may contain outdated information or links.

1 - 2 August 2019, 1st ACL Workshop on Gender Bias for Natural Language Processing, Florence, Italy

Date: 1 - 2 August 2019
Location: Florence, Italy
Deadline: Friday 26 April 2019

Gender and other demographic biases in machine-learned models are of increasing interest to the scientific community and industry. Models of natural language are highly affected by such biases and, since they are present in widely used products, can lead to poor user experiences. This workshop will be the first dedicated to the issue of gender bias in NLP techniques, and it includes a shared task on coreference resolution. To make progress as a field, the workshop will focus in particular on discussing and proposing standard tasks that quantify bias.

Keynote Speaker: Pascale Fung, Hong Kong University of Science and Technology

We invite submissions of technical work exploring the detection, measurement, and mitigation of gender bias in NLP models and applications. Other important topics include the creation of datasets exploring demographics, metrics to identify and assess relevant biases, and fairness in NLP systems. Finally, the workshop also welcomes non-technical work offering sociological perspectives.

We also invite work on gender-fair modeling via our shared task: coreference resolution on GAP (Webster et al. 2018). GAP is a coreference dataset designed to highlight current challenges for the resolution of ambiguous pronouns in context. Participation will be via Kaggle, with submissions open over a three-month period in the lead-up to the workshop.
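To make the task concrete, here is a minimal sketch of what a GAP-style instance looks like: a passage containing an ambiguous pronoun, two candidate names with character offsets, and a binary coreference label per candidate. The sentence and labels below are invented for illustration; the field names follow the public GAP TSV release.

```python
# Illustrative GAP-style example (invented sentence; field names follow
# the public GAP TSV release: Pronoun, A, B with character offsets and
# binary coreference labels for each candidate).
example = {
    "Text": "After Alice met Mary at the station, she drove home.",
    "Pronoun": "she",
    "Pronoun-offset": 37,
    "A": "Alice",
    "A-offset": 6,
    "A-coref": True,   # invented gold label: the pronoun refers to Alice
    "B": "Mary",
    "B-offset": 16,
    "B-coref": False,
}

# A system must decide, for each candidate, whether the pronoun corefers
# with it. A trivial (hypothetical) baseline always picks candidate A;
# gender-biased systems tend to do better on some pronouns than others,
# which is what the shared task measures.
def baseline_predict(ex):
    return {"A-coref": True, "B-coref": False}

prediction = baseline_predict(example)
```

The offsets index into `Text`, so `example["Text"][37:40]` recovers the pronoun `"she"`; evaluation compares the predicted labels against the gold `A-coref`/`B-coref` fields, typically split by pronoun gender.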

For more information, see http://genderbiasnlp.talp.cat or contact Marta R. Costa-jussà or Kellie Webster.
