
Explainable Artificial Intelligence models and methods in Finance and Healthcare

Fabio Aurelio D'Asaro;
2022-01-01

Abstract

In recent years, rapidly growing data availability has made highly accurate predictions possible across advanced research fields, driving the development of sophisticated machine learning models and techniques alongside artificial intelligence-based systems. Artificial Intelligence (AI) attracts increasing attention, especially because its algorithms yield robust predictions. Nevertheless, AI systems have a black-box nature that results in automated decision-making: they can classify a user into a category and predict individual behavior without specifying the underlying rationale. Concerns therefore arise about the adequacy of AI models and methods in regulatory scenarios, primarily because of the biases that machine learning algorithms can introduce. This leads organizations to demand high credibility and interpretability in order to maintain effective operational control. The lack of transparency and explainability is thus a critical point for policymakers and regulators who aim to avoid wrong actions with adverse consequences for society.

This issue is most evident in the financial and banking sectors, where AI use cases extend to risk management, predictive analytics, and fraud detection, and in the healthcare field, where the focus is on both the funding management process of healthcare services and the improvement of diagnostic precision. AI-based systems can be used to predict financial, default, funding-loss, and diagnosis-related risks. However, the main criteria supporting their predictions must be known, so that the severity of those risks can be assessed and appropriate mitigation measures taken in the event of shocks to the financial system, changes in market conditions, or revisions to healthcare policies. For the purpose of explaining and interpreting machine learning models, eXplainable Artificial Intelligence (XAI) is a fundamental field for understanding the steps and methods driving the decision process.

In line with the transparency requirements set by policy, this Research Topic aims to include original papers proposing innovative XAI methodologies for global or local explanations in:
• the financial and banking sectors, mainly focused on credit scoring (which involves lending algorithms), price discovery (the basis of financial robo-advisory algorithms), and cyber risk management (especially critical given ever-increasing online connectivity);
• the healthcare field, mainly focused on the evaluation of funding and management policies.
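To make the local/global distinction above concrete, the following is a minimal sketch, not part of the editorial itself, of one common XAI workflow on a hypothetical credit-scoring task. It assumes the SHAP library as the explanation method and uses synthetic data; the feature names and model choice are purely illustrative.

```python
# Minimal sketch: local and global SHAP explanations for a
# hypothetical credit-scoring model (all names are illustrative).
import numpy as np
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

# Synthetic stand-in for credit-scoring data.
X, y = make_classification(n_samples=500, n_features=4, random_state=0)
feature_names = ["income", "debt_ratio", "credit_history_len", "num_defaults"]

model = GradientBoostingClassifier(random_state=0).fit(X, y)

# TreeExplainer computes per-prediction Shapley values for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)  # shape: (n_samples, n_features)

# Local explanation: each feature's contribution to one applicant's score.
applicant = 0
for name, contrib in zip(feature_names, shap_values[applicant]):
    print(f"{name}: {contrib:+.3f}")

# Global explanation: mean absolute contribution across all applicants.
global_importance = np.abs(shap_values).mean(axis=0)
print(dict(zip(feature_names, np.round(global_importance, 3))))
```

The per-applicant contributions address the "underlying rationale" problem for individual decisions, while the aggregated importances give regulators a model-wide view; SHAP is only one of the candidate techniques the Research Topic solicits.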
Keywords: Artificial Intelligence Systems, Machine Learning Models, Credit Risk, Cybersecurity, Blockchain, Financial Data Science, Healthcare Policies, Healthcare Funding Process
Files in this product:
There are no files associated with this product.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11562/1088626