Learning with privileged information via adversarial discriminative modality distillation

V. Murino
2020-01-01

Abstract

Heterogeneous data modalities can provide complementary cues for several tasks, usually leading to more robust algorithms and better performance. However, while training data can be accurately collected to include a variety of sensory modalities, it is often the case that not all of them are available in real life (testing) scenarios, where a model has to be deployed. This raises the challenge of how to extract information from multimodal data in the training stage, in a form that can be exploited at test time, considering limitations such as noisy or missing modalities. This paper presents a new approach in this direction for RGB-D vision tasks, developed within the adversarial learning and privileged information frameworks. We consider the practical case of learning representations from depth and RGB videos, while relying only on RGB data at test time. We propose a new approach to train a hallucination network that learns to distill depth information via adversarial learning, resulting in a clean approach without several losses to balance or hyperparameters. We report state-of-the-art results for object classification on the NYUD dataset, and video action recognition on the largest multimodal dataset available for this task, the NTU RGB+D, as well as on the Northwestern-UCLA dataset.
Multimodal deep learning, adversarial learning, privileged information, network distillation, modality hallucination
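To make the approach described in the abstract more concrete, the sketch below illustrates one plausible adversarial distillation step in PyTorch: a frozen depth encoder acts as the teacher, and a hallucination network fed with RGB frames is trained so that a discriminator cannot distinguish its features from real depth features. This is a minimal sketch under stated assumptions, not the authors' actual code; the module and function names (Discriminator, hallucination_net, depth_net, distillation_step) are hypothetical placeholders.

```python
# Hedged sketch of adversarial discriminative modality distillation.
# Assumes PyTorch; networks and optimizers are hypothetical placeholders.
import torch
import torch.nn as nn


class Discriminator(nn.Module):
    """Predicts whether a feature vector comes from the real depth encoder
    or from the RGB-driven hallucination network."""

    def __init__(self, feat_dim=512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim, 256),
            nn.ReLU(),
            nn.Linear(256, 1),
        )

    def forward(self, features):
        return self.net(features)


def distillation_step(rgb, depth, hallucination_net, depth_net,
                      discriminator, opt_hall, opt_disc):
    """One adversarial training step: the hallucination network learns to
    produce depth-like features from RGB input alone."""
    bce = nn.BCEWithLogitsLoss()

    with torch.no_grad():                 # depth encoder is frozen (teacher)
        f_depth = depth_net(depth)
    f_hall = hallucination_net(rgb)       # hallucinated "depth" features from RGB

    # 1) Update the discriminator: real = depth features, fake = hallucinated.
    d_real = discriminator(f_depth)
    d_fake = discriminator(f_hall.detach())
    loss_disc = bce(d_real, torch.ones_like(d_real)) + \
                bce(d_fake, torch.zeros_like(d_fake))
    opt_disc.zero_grad()
    loss_disc.backward()
    opt_disc.step()

    # 2) Update the hallucination network: fool the discriminator so that
    #    RGB-derived features become indistinguishable from depth features.
    d_fake = discriminator(f_hall)
    loss_hall = bce(d_fake, torch.ones_like(d_fake))
    opt_hall.zero_grad()
    loss_hall.backward()
    opt_hall.step()

    return loss_disc.item(), loss_hall.item()
```

At test time only the RGB stream and the hallucination network would be used, which matches the privileged-information setting described in the abstract (depth available during training only).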
Files in this record:

1810.08437v1.pdf (open access)
Type: Post-print document
Licence: Creative Commons
Size: 4.02 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11562/1033280
Citations
  • PMC: n/a
  • Scopus: 50
  • Web of Science: 42