Trustworthy AI Lab

The Trustworthy AI Lab at CIRSFID, Alma Mater Research Center for Human-Centered Artificial Intelligence of the University of Bologna, is an interdisciplinary group of researchers interested in the philosophical, ethical, juridical, social and technological aspects of AI, including trustworthiness and the question of what makes uses of AI ethical, just, lawful and trustworthy.
The Lab aims to encourage debate and reflection on the responsible use of Artificial Intelligence.
One of the main goals of the Lab is to perform Ethical Assessments for Trustworthy AI of AI-based research activities conducted in many departments of the University of Bologna, using Z-Inspection®, a process to assess Trustworthy AI based on the Ethics Guidelines for Trustworthy AI defined by the European Commission's High-Level Expert Group on Artificial Intelligence.
The Lab is affiliated with the Z-Inspection® initiative (http://z-inspection.org).

 

Lab members (with links to their profiles and publications):

 

Research activities and publications in the areas of:

  • Trustworthy AI
  • AI, Ethics and Human rights
  • AI, Governance and Policy making
  • Bioethics, biobanking and big data research in health care
  • Big Data Analytics
  • AI Knowledge, Reasoning, Decision-making and Adjudication
  • Legal and Social issues of AI
  • Privacy and Data Protection
  • AI models against Cybercrime
  • Technical robustness and Safety
  • IT Security and Reliability

 

Resources

First World Z-Inspection® Conference
Ateneo Veneto, March 10-11, 2023, Venice, Italy.

The interdisciplinary meeting welcomed over 60 international scientists and experts from AI, ethics, and human rights, as well as from domains such as healthcare, ecology, business, and law.

At the conference, the practical use of the Z-Inspection® process to assess real use cases for trustworthy AI was presented. Among them:

– The Pilot Project “Assessment for Responsible Artificial Intelligence”, together with Rijks ICT Gilde – part of the Ministry of the Interior and Kingdom Relations (BZK) – and the province of Fryslân (The Netherlands);

– The assessment of the use of AI in times of COVID-19 at the Brescia Public Hospital (“ASST Spedali Civili di Brescia“).

Two panel discussions on “Human Rights and Trustworthy AI” and “How do we trust AI?” provided an interdisciplinary view on the relevance of data and AI ethics in the human rights and business contexts.

The main message of the conference was the need for a Mindful Use of AI (#MUAI).

This premiere World Z-Inspection® Conference was held in cooperation with the Global Campus of Human Rights and the Venice Urban Lab, and was supported by Arcada University of Applied Sciences, Merck, Roche, and Zurich Insurance Company.

DOWNLOAD CONFERENCE READER

Link to the Video: Conference Impressions

Posted on LinkedIn:
https://www.linkedin.com/posts/roberto-v-zicari-087863_muai-ethics-artificialintelligence-activity-7063583730170290177-DFT2?utm_source=share&utm_medium=member_desktop

................................................................................................................................

Z-Inspection® is listed in the new OECD Catalogue of AI Tools & Metrics: https://oecd.ai/en/catalogue/tools/z-inspection

................................................................................................................................

How to Assess Trustworthy AI in Practice.

Roberto V. Zicari, Julia Amann, Frédérick Bruneault, Megan Coffee, Boris Düdder, Eleanore Hickman, Alessio Gallucci, Thomas Krendl Gilbert, Thilo Hagendorff, Irmhild van Halem, Elisabeth Hildt, Georgios Kararigas, Pedro Kringen, Vince I. Madai, Emilie Wiinblad Mathez, Jesmin Jahan Tithi, Dennis Vetter, Magnus Westerlund, Renee Wurth
On behalf of the Z-Inspection® initiative (2022)

Abstract
This report is a methodological reflection on Z-Inspection®. Z-Inspection® is a holistic process used to evaluate the trustworthiness of AI-based technologies at different stages of the AI lifecycle. It focuses, in particular, on the identification and discussion of ethical issues and tensions through the elaboration of socio-technical scenarios. It uses the general European Union’s High-Level Expert Group’s (EU HLEG) guidelines for trustworthy AI.

This report illustrates for both AI researchers and AI practitioners how the EU HLEG guidelines for trustworthy AI can be applied in practice. We share the lessons learned from conducting a series of independent assessments to evaluate the trustworthiness of AI systems in healthcare. We also share key recommendations and practical suggestions on how to ensure a rigorous trustworthy AI assessment throughout the life-cycle of an AI system.


The full report is available on arXiv.

Link: https://arxiv.org/abs/2206.09887

Cite as: arXiv:2206.09887 [cs.CY]. [v2] Tue, 28 Jun 2022 14:23:47 UTC (465 KB)

................................................................................................................................

Artificial Intelligence and Law

A recording of a talk given by Dr. Roberto V. Zicari for the course of Dr. Seongwook Heo at the Seoul National University Law School on March 30, 2022. The English section of the talk begins at minute 27.

This talk introduces the EU Framework for Trustworthy AI, the Z-Inspection® process to assess trustworthy AI, and the draft EU AI Act.

YouTube: https://www.youtube.com/watch?v=Zt4FfzuXMKc&feature=emb_imp_woyt

....................................................................................................................................

Z-Inspection® is a registered trademark.

This work is distributed under the terms and conditions of the Creative Commons Attribution-NonCommercial-ShareAlike (CC BY-NC-SA) license.