
Space debris streak classification: a transparent deep learning approach to reduce false positive detections

Luigi Palladino
2023-01-01

Abstract

Space debris is observed with ground-based and space-based telescopes. The small size of the objects and the relative observation geometry typically produce faint streak-like features in the resulting images, and the detection of low-SNR streaks poses a major challenge to state-of-the-art tools. In the case of space-based telescopes, onboard pre-processing can greatly reduce the data volume, but it requires efficient algorithms that allow real-time selection of relevant images. Depending on the configured sensitivity, processing tools can generate a large number of false positive detections: false streaks caused by artifacts arising from factors such as oversaturation around bright stars, bad pixels, or cosmic rays. In this paper, we propose a method to greatly reduce the number of false positives by exploiting data collected in ESA's STREET database with modern data-driven techniques. We propose a lightweight Deep Neural Network (DNN) approach, namely a classical Multi-Layer Perceptron (MLP) applied to categorical features extracted from the STREET database. The DNN uses manually obtained ground truth from the STREET database to learn to classify detections as real streaks or false positives (artifacts). The aim of this work is to integrate the false-positive rejection feature into the existing streak detection pipeline while maintaining the same level of transparency as the non-data-driven legacy system, despite the inclusion of a black-box element; this is made possible by employing explainability techniques. The model achieves an accuracy of 92% in false positive detection, and the trustworthiness of these results is verified both with classical cross-validation techniques and with modern criteria based on AI-explainability models such as the Kernel Explainer from the SHAP library.
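A lightweight MLP on categorical detection features, validated with classical cross-validation as described above, can be sketched as follows. The feature names and the synthetic data are illustrative assumptions, not the actual STREET schema:

```python
# Sketch: MLP classifier separating real streaks (0) from artifacts (1)
# using one-hot-encoded categorical features. The three features below
# (detector quadrant, saturation flag, cosmic-ray flag) are hypothetical
# stand-ins for the real STREET features.
import numpy as np
from sklearn.preprocessing import OneHotEncoder
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import Pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 600
X = np.column_stack([
    rng.integers(0, 4, n),   # detector quadrant (categorical)
    rng.integers(0, 2, n),   # near-saturated-star flag
    rng.integers(0, 2, n),   # cosmic-ray-like flag
])
# Synthetic label: artifacts correlate with the two flags.
y = ((X[:, 1] + X[:, 2] + rng.random(n) * 0.8) > 1.2).astype(int)

model = Pipeline([
    ("onehot", OneHotEncoder(handle_unknown="ignore")),
    ("mlp", MLPClassifier(hidden_layer_sizes=(16, 8),
                          max_iter=500, random_state=0)),
])
scores = cross_val_score(model, X, y, cv=5)  # classical k-fold CV
model.fit(X, y)
print(f"mean CV accuracy: {scores.mean():.2f}")
```

Keeping the network small (two narrow hidden layers on tabular features) is what makes such a model cheap enough to sit inside an onboard or near-real-time pipeline.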
Through interpretation of the Shapley values obtained in simulations, we quantify the magnitude of each feature's contribution to the model. The Shapley values are interpreted both qualitatively and quantitatively, with comparison to both the manual procedure and the statistical relevance of every feature used in the model. The results are very promising, and this solution could serve as an example of how to increase trust in machine learning models in safety-critical scenarios such as space safety.
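The per-feature attribution idea can be illustrated with an exact Shapley computation over a small feature set, replacing "absent" features with a background value — the quantity that SHAP's KernelExplainer estimates by sampling. The linear toy model below is purely an assumption for illustration, chosen because its exact attributions are known in closed form:

```python
# Exact Shapley values for a small tabular model: each feature's value
# is its weighted average marginal contribution over all subsets of the
# other features, with absent features replaced by a background value.
from itertools import combinations
from math import factorial

def shapley_values(predict, x, background):
    n = len(x)
    phi = [0.0] * n

    def value(subset):
        # Evaluate the model with features outside `subset` replaced
        # by their background values.
        z = [x[i] if i in subset else background[i] for i in range(n)]
        return predict(z)

    for i in range(n):
        others = [j for j in range(n) if j != i]
        for r in range(n):
            for S in combinations(others, r):
                w = factorial(r) * factorial(n - r - 1) / factorial(n)
                phi[i] += w * (value(set(S) | {i}) - value(set(S)))
    return phi

# Toy linear "classifier score" with a dominant first feature.
def predict(z):
    return 3.0 * z[0] + 1.0 * z[1] + 0.0 * z[2]

phi = shapley_values(predict, x=[1.0, 1.0, 1.0],
                     background=[0.0, 0.0, 0.0])
# For a linear model, phi_i = w_i * (x_i - background_i), and the
# attributions sum to f(x) - f(background) (the efficiency property).
print(phi)
```

Ranking features by the magnitude of their Shapley values is what allows the attributions to be checked against the manual classification procedure and against each feature's statistical relevance, as done in the paper.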
2023
Machine Learning
Space Debris
Explainable-AI
Streak Detection


Use this identifier to cite or link to this document: https://hdl.handle.net/11562/1120887