BEGIN:VCALENDAR
VERSION:2.0
PRODID:ILLC Website
X-WR-TIMEZONE:Europe/Amsterdam
BEGIN:VTIMEZONE
TZID:Europe/Amsterdam
X-LIC-LOCATION:Europe/Amsterdam
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:19700329T020000
RRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=-1SU
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:19701025T030000
RRULE:FREQ=YEARLY;BYMONTH=10;BYDAY=-1SU
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
UID:/NewsandEvents/Archives/2023/newsitem/14428/6-
 --9-November-2023-2nd-international-workshop-on-th
 e-emerging-ethical-aspects-of-AI-with-a-focus-on-B
 ias-Risk-Explainability-and-the-role-of-Logic-and-
 Computational-Logic-BEWARE-23-Rome-Italy
DTSTAMP:20230805T134952Z
SUMMARY:2nd international workshop on the emerging
  ethical aspects of AI, with a focus on Bias, Risk
 , Explainability and the role of Logic and Computa
 tional Logic (BEWARE-23), Rome, Italy
DTSTART;VALUE=DATE:20231106
DTEND;VALUE=DATE:20231110
LOCATION:Rome, Italy
DESCRIPTION:Current AI applications do not guarant
 ee objectivity and are riddled with biases and leg
 al difficulties. AI systems need to perform safely
 , but problems of opacity, bias and risk are press
 ing. Definitional and foundational issues about wh
 at kinds of bias and risks are involved in opaque 
 AI technologies are still very much open. Moreover
 , AI is challenging ethics and raises the need to 
 rethink its foundations. In this context, it i
 s natural to look for theories, tools and technolo
 gies to address the problem of automatically detec
 ting biases and implementing ethical decision-maki
 ng.  This workshop addresses issues of a logical, et
 hical and epistemological nature in AI through the
  use of interdisciplinary approaches. We aim to br
 ing together researchers in AI, philosophy, ethics
 , epistemology, social science, etc., to promote c
 ollaborations and enhance discussions towards the 
 development of trustworthy AI methods and solution
 s that users and stakeholders consider technologic
 ally reliable and socially acceptable.  BEWARE-23 i
 s co-located with the AIxIA 2023 conference.  The 
 workshop invites submissions from computer scienti
 sts, philosophers, economists and sociologists wan
 ting to discuss contributions ranging from the for
 mulation of epistemic and normative principles for
  AI, through their conceptual representation in formal mod
 els, to their development in formal design procedu
 res and translation into computational implementat
 ions.  The workshop invites (possibly non-original
 ) submissions of FULL PAPERS (up to 15 pages) and 
 SHORT PAPERS (up to 5 pages). Short papers are par
 ticularly suited to presenting work in progress, ex
 tended abstracts, doctoral theses, or general over
 views of research projects. Note that all papers w
 ill undergo a careful peer-review process and, i
 f accepted, camera-ready versions of the papers wi
 ll be published in the AIxIA subseries of CEUR pro
 ceedings (Scopus indexed).
X-ALT-DESC;FMTTYPE=text/html:<div>\n  <p>Current A
 I applications do not guarantee objectivity and ar
 e riddled with biases and legal difficulties. AI s
 ystems need to perform safely, but problems of opa
 city, bias and risk are pressing. Definitional and
  foundational issues about what kinds of bias and 
 risks are involved in opaque AI technologies are s
 till very much open. Moreover, AI is challenging e
 thics and raises the need to rethink its foundatio
 ns. In this context, it is natural to look for
  theories, tools and technologies to address the p
 roblem of automatically detecting biases and imple
 menting ethical decision-making.</p>\n  <p>This wo
 rkshop addresses issues of a logical, ethical and ep
 istemological nature in AI through the use of inte
 rdisciplinary approaches. We aim to bring together
  researchers in AI, philosophy, ethics, epistemolo
 gy, social science, etc., to promote collaboration
 s and enhance discussions towards the development 
 of trustworthy AI methods and solutions that users
  and stakeholders consider technologically reliabl
 e and socially acceptable.</p>\n  <p>BEWARE-23 is c
 o-located with the AIxIA 2023 conference.</p>\n</d
 iv><div>\n  <p>The workshop invites submissions fr
 om computer scientists, philosophers, economists a
 nd sociologists wanting to discuss contributions r
 anging from the formulation of epistemic and norma
 tive principles for AI, through their conceptual represent
 ation in formal models, to their development in fo
 rmal design procedures and translation into comput
 ational implementations.</p>\n  <p>The workshop in
 vites (possibly non-original) submissions of FULL 
 PAPERS (up to 15 pages) and SHORT PAPERS (up to 5 
 pages). Short papers are particularly suited to pr
 esenting work in progress, extended abstracts, doct
 oral theses, or general overviews of research proj
 ects. Note that all papers will undergo a careful 
 peer-review process and, if accepted, camera-rea
 dy versions of the papers will be published in the
  AIxIA subseries of CEUR proceedings (Scopus index
 ed).</p>\n</div>
URL:https://sites.google.com/view/beware2023
END:VEVENT
END:VCALENDAR
