EXplainable Artificial Intelligence: enabling AI in neurosciences and beyond
Federica Cruciani
2023-01-01
Abstract
The adoption of AI models in medicine and neurosciences has the potential to play a significant role not only in bringing scientific advancements but also in clinical decision-making. However, concerns are mounting over the potential biases AI could carry, which could have far-reaching consequences, particularly in a critical field like biomedicine. Achieving usable intelligence is challenging because it is fundamental not only to learn from prior data, extract knowledge, and guarantee generalization capabilities, but also to disentangle the underlying explanatory factors in order to deeply understand the variables leading to the final decisions. There has hence been a call for approaches that open the AI 'black box' to increase trust in, and the reliability of, the decision-making capabilities of AI algorithms. Such approaches, commonly referred to as eXplainable AI (XAI), are starting to be applied in medical fields, even if not yet fully exploited. With this thesis we aim to contribute to enabling the use of AI in medicine and neurosciences by taking two fundamental steps: (i) practically pervading AI models with XAI and (ii) rigorously validating XAI methods. The first step was achieved, on the one hand, by focusing on the XAI taxonomy and proposing guidelines specific to AI and XAI applications in the neuroscience domain. On the other hand, we tackled concrete problems by proposing XAI solutions to decode brain modulations in neurodegeneration, relying on the morphological, microstructural, and functional changes occurring at different disease stages, as well as their connections with the genotype substrate. The second step was achieved by first defining four attributes related to XAI validation, namely stability, consistency, understandability, and plausibility. Each attribute refers to a different aspect of XAI, ranging from assessing the stability of explanations across different XAI methods, or across highly collinear inputs, to the alignment of the obtained explanations with the state-of-the-art literature. We then proposed different validation techniques aimed at practically fulfilling these requirements. With this thesis, we contributed to advancing research into XAI, aiming to increase awareness and critical use of AI methods, opening the way to real-life applications and enabling the development of personalized medicine and treatment through a data-driven and objective approach to healthcare.
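To make the stability attribute more concrete, the sketch below (not part of the thesis text) illustrates one plausible way to quantify agreement between attribution maps produced by two different XAI methods for the same model and input. The attribution arrays are placeholders and the Spearman-correlation criterion is an illustrative assumption, not the specific validation technique defined in the thesis.

```python
# Illustrative sketch: "stability" of explanations measured as the rank
# agreement between attribution maps from two XAI methods.
# The attribution maps are random placeholders; in practice they would
# come from explanation methods applied to the same trained model.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Hypothetical voxel-wise attribution maps for the same subject,
# produced by two different explanation methods.
attr_method_a = rng.normal(size=1000)
attr_method_b = attr_method_a + rng.normal(scale=0.5, size=1000)

# Spearman rank correlation: values close to 1 mean the two methods rank
# input features (e.g., brain regions) similarly, i.e. the explanation is
# stable across methods.
rho, _ = spearmanr(attr_method_a, attr_method_b)
print(f"Explanation stability (Spearman rho): {rho:.2f}")
```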
File | Size | Format
---|---|---
EXplainable_Artificial_Intelligence_enabling_AI_in_neurosciences_and_beyond_Cruciani_PhDThesis.pdf (doctoral thesis, open access, Creative Commons license) | 34.23 MB | Adobe PDF
Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.