Stable Normative Explanations
Governatori, Guido; Olivieri, Francesco; Cristani, Matteo (members of the Collaboration Group)
2022-01-01
Abstract
Modelling the concept of explanation is a central issue in AI systems, as it provides methods for developing eXplainable AI (XAI). When explanation applies to normative reasoning, XAI aims to promote normative trust in the decisions of AI systems: such trust depends on understanding whether a system's predictions correspond to legally compliant scenarios. This paper extends to normative reasoning the work by Governatori et al. (2022) on the notion of stable explanations in a non-monotonic setting: when an explanation is stable, it can be used to infer the same normative conclusion independently of any further facts discovered afterwards.
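The stability notion summarised above can be illustrated informally: an explanation (a set of facts) is stable for a conclusion when that conclusion still follows no matter which further facts are later added. The sketch below is a minimal toy illustration, not the authors' formalism: the rule set, priorities, and fact universe are all hypothetical, and conflicts are resolved by a simple highest-priority-wins policy.

```python
from itertools import combinations

# Toy defeasible theory (hypothetical example, not the paper's logic):
# each rule is (premises, conclusion, priority); a literal holds when
# some applicable rule for it outranks every applicable rule for its
# negation (written with a leading "-").
RULES = [
    (frozenset({"contract_signed"}), "obligation_to_pay", 1),
    (frozenset({"contract_signed", "contract_void"}), "-obligation_to_pay", 2),
]

# All facts that might still be discovered (hypothetical).
UNIVERSE = {"contract_signed", "contract_void"}

def neg(lit):
    """Negation of a literal: 'p' <-> '-p'."""
    return lit[1:] if lit.startswith("-") else "-" + lit

def concludes(facts, literal):
    """True iff some rule for `literal` fires on `facts` and outranks
    every applicable rule for its negation."""
    pro = [p for prem, c, p in RULES if c == literal and prem <= facts]
    con = [p for prem, c, p in RULES if c == neg(literal) and prem <= facts]
    return bool(pro) and all(max(pro) > p for p in con)

def stable(facts, literal):
    """A set of facts is a stable explanation for `literal` when the
    conclusion persists under every extension with further facts."""
    extras = UNIVERSE - set(facts)
    return all(
        concludes(set(facts) | set(subset), literal)
        for r in range(len(extras) + 1)
        for subset in combinations(extras, r)
    )
```

Here `{"contract_signed"}` explains `obligation_to_pay` but is not stable, since later discovering `contract_void` reverses the conclusion, whereas `{"contract_signed", "contract_void"}` is a stable explanation for the opposite conclusion.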
File | Description | License | Size | Format
---|---|---|---|---
FAIA-362-FAIA220447.pdf (authorized users only) | Paper | Publisher copyright | 214.62 kB | Adobe PDF
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.