A manifesto on explainability for artificial intelligence in medicine.

Carlo Combi; Beatrice Amico; Andreas Holzinger; John H. Holmes
2022-01-01

Abstract

The rapid increase of interest in, and use of, artificial intelligence (AI) in computer applications has raised a parallel concern about its ability (or lack thereof) to provide understandable, or explainable, output to users. This concern is especially legitimate in biomedical contexts, where patient safety is of paramount importance. This position paper brings together seven researchers working in the field with different roles and perspectives, to explore in depth the concept of explainable AI, or XAI, offering a functional definition and conceptual framework or model that can be used when considering XAI. This is followed by a series of desiderata for attaining explainability in AI, each of which touches upon a key domain in biomedicine.
Keywords: Artificial intelligence, Explainability, Explainable artificial intelligence, Interpretability, Interpretable artificial intelligence
File: aiim2022.pdf (open access; license: Creative Commons; 705.32 kB; Adobe PDF)

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11562/1091286
Citations
  • PMC: 14
  • Scopus: 65
  • Web of Science (ISI): 27