Trustworthy AI Lab

The Trustworthy AI Lab at the CIRSFID, Alma Mater Research Center for Human-Centered Artificial Intelligence of the University of Bologna, is an interdisciplinary group of researchers interested in the philosophical, ethical, juridical, social and technological aspects of AI, including trustworthiness and the question of what makes uses of AI ethical, just, lawful and trustworthy.
The Lab aims to encourage debate and reflection on the responsible use of Artificial Intelligence.
One of the main goals of the Lab is to perform Ethical Assessments for Trustworthy AI of AI-based research activities conducted in many Departments of the University of Bologna, using Z-Inspection®, a process to assess Trustworthy AI based on the Ethics Guidelines for Trustworthy AI defined by the European Commission's High-Level Expert Group on Artificial Intelligence.
The Lab is affiliated with the Z-Inspection® initiative (http://z-inspection.org).


Lab members (with links to their profiles and publications):


Research activities and publications in the areas of:

  • Trustworthy AI
  • AI, Ethics and Human Rights
  • AI, Governance and Policy Making
  • Bioethics, Biobanking and Big Data Research in Health Care
  • Big Data Analytics
  • AI Knowledge, Reasoning, Decision-Making and Adjudication
  • Legal and Social Issues of AI
  • Privacy and Data Protection
  • AI Models against Cybercrime
  • Technical Robustness and Safety
  • IT Security and Reliability


Resources

Z-Inspection® is listed in the new OECD Catalogue of AI Tools & Metrics: https://oecd.ai/en/catalogue/tools/z-inspection

................................................................................................................................

How to Assess Trustworthy AI in Practice.

Roberto V. Zicari, Julia Amann, Frédérick Bruneault, Megan Coffee, Boris Düdder, Eleanore Hickman, Alessio Gallucci, Thomas Krendl Gilbert, Thilo Hagendorff, Irmhild van Halem, Elisabeth Hildt, Georgios Kararigas, Pedro Kringen, Vince I. Madai, Emilie Wiinblad Mathez, Jesmin Jahan Tithi, Dennis Vetter, Magnus Westerlund, Renee Wurth
On behalf of the Z-Inspection® initiative (2022)

Abstract
This report is a methodological reflection on Z-Inspection®. Z-Inspection® is a holistic process used to evaluate the trustworthiness of AI-based technologies at different stages of the AI lifecycle. It focuses, in particular, on the identification and discussion of ethical issues and tensions through the elaboration of socio-technical scenarios. It uses the general guidelines for trustworthy AI of the European Union's High-Level Expert Group (EU HLEG).

This report illustrates for both AI researchers and AI practitioners how the EU HLEG guidelines for trustworthy AI can be applied in practice. We share the lessons learned from conducting a series of independent assessments to evaluate the trustworthiness of AI systems in healthcare. We also share key recommendations and practical suggestions on how to ensure a rigorous trustworthy AI assessment throughout the life-cycle of an AI system.


The full report is available on arXiv.

Link: https://arxiv.org/abs/2206.09887

Cite as: arXiv:2206.09887 [cs.CY], v2, 28 June 2022.

................................................................................................................................

Artificial Intelligence and Law

A recording of a talk given by Dr. Roberto V. Zicari for Dr. Seongwook Heo's course at the Seoul National University Law School on March 30, 2022. The English section of the talk begins at minute 27.

This talk introduces the EU Framework for Trustworthy AI, the Z-Inspection® process to assess trustworthy AI, and the draft EU AI Act.

YouTube: https://www.youtube.com/watch?v=Zt4FfzuXMKc&feature=emb_imp_woyt

....................................................................................................................................

Z-Inspection® is a registered trademark.

This work is distributed under the terms and conditions of the Creative Commons Attribution-NonCommercial-ShareAlike (CC BY-NC-SA) license.