Please note that this newsitem has been archived, and may contain outdated information or links.
5 February 2020, LUNCH Seminar, Arianna Betti
How can we ensure trust in machines? In particular, how can computational text analysis, an important sector of AI, ensure trust in its algorithms? The sector is booming, and its real-life applications are ubiquitous. But how comfortable are you with having an AI assess whether your mum's calls to 112 are really urgent? Having your brother defended by a legal AI? Having software decide whether you'll get the next grant? I bet your answers range from 'not very much' to 'not at all': what do you think should happen to remedy this situation? Is this something that we, the ILLC community, can substantially contribute to? If so, how, ideally?