A Perspective on Explainable Artificial Intelligence Methods: SHAP and LIME
Salih, Ahmed M.; Galazzo, Ilaria Boscolo; Menegaz, Gloria
2024-01-01
Abstract
eXplainable artificial intelligence (XAI) methods have emerged to convert the black box of machine learning (ML) models into a more digestible form. These methods help to communicate how the model works, with the aim of making ML models more transparent and increasing end-users' trust in their output. SHapley Additive exPlanations (SHAP) and Local Interpretable Model-agnostic Explanations (LIME) are two widely used XAI methods, particularly with tabular data. In this perspective piece, the way the explainability metrics of these two methods are generated is discussed, and a framework for the interpretation of their outputs is proposed, highlighting their weaknesses and strengths. Specifically, their outcomes are discussed in terms of model-dependency and of the presence of collinearity among the features, relying on a case study from the biomedical domain (classification of individuals with or without myocardial infarction). The results indicate that SHAP and LIME are highly affected by the adopted ML model and by feature collinearity, raising a note of caution on their usage and interpretation.

SHapley Additive exPlanations and Local Interpretable Model-agnostic Explanations are two widely used eXplainable artificial intelligence methods. However, they have limitations related to model-dependency and to the presence of collinearity among the features, which can result in unrealistic explanations. This perspective discusses these two issues through two case studies and provides possible solutions to overcome and eliminate their impact.

(c) 2024 WILEY-VCH GmbH
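To make the setting concrete, below is a minimal sketch of how SHAP and LIME explanations are typically generated for a tabular binary classifier, and of how re-running the same pipeline with two different models exposes the model-dependency discussed in the abstract. It assumes the Python scikit-learn, shap, and lime packages; the synthetic dataset (built with deliberately redundant, i.e. collinear, features) and the generic feature names are illustrative placeholders, not the myocardial infarction data used in the paper.

```python
# Minimal sketch: SHAP and LIME explanations for a tabular binary classifier.
# Assumptions: scikit-learn, shap, and lime are installed; the dataset and
# feature names below are synthetic placeholders, not the study's data.
import numpy as np
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

# Synthetic tabular data with redundant (collinear) features.
X, y = make_classification(n_samples=500, n_features=6, n_informative=3,
                           n_redundant=2, random_state=0)
feature_names = [f"feature_{i}" for i in range(X.shape[1])]

# Two different models fitted on the same data, to probe model-dependency.
models = {
    "random_forest": RandomForestClassifier(random_state=0).fit(X, y),
    "logistic_regression": LogisticRegression(max_iter=1000).fit(X, y),
}

for name, model in models.items():
    # SHAP (model-agnostic): global importance as the mean absolute
    # Shapley value per feature for the positive class.
    explainer = shap.Explainer(model.predict_proba, X)
    shap_values = explainer(X[:100])
    mean_abs = np.abs(shap_values.values[..., 1]).mean(axis=0)
    ranking = [feature_names[i] for i in np.argsort(mean_abs)[::-1]]
    print(name, "SHAP ranking:", ranking)

    # LIME: local explanation (feature weights) for one individual.
    lime_explainer = LimeTabularExplainer(X, feature_names=feature_names,
                                          class_names=["class 0", "class 1"],
                                          mode="classification")
    lime_exp = lime_explainer.explain_instance(X[0], model.predict_proba,
                                               num_features=6)
    print(name, "LIME weights:", lime_exp.as_list())
```

Comparing the printed SHAP rankings and LIME weights across the two models gives a quick, informal check of how strongly the resulting explanations depend on the chosen ML model, which is the behaviour the perspective examines in detail.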