
Audio-Visual Model Distillation Using Acoustic Images

Murino, Vittorio
2020-01-01

Abstract

In this paper, we investigate how to learn rich and robust feature representations for audio classification from visual data and acoustic images, a novel audio data modality. Previous models learn audio representations from raw signals or spectral data acquired by a single microphone, with remarkable results in classification and retrieval. However, such representations are not robust to variable environmental sound conditions. We tackle this drawback by exploiting a new multimodal labeled action recognition dataset acquired by a hybrid audio-visual sensor that provides RGB video, raw audio signals, and spatialized acoustic data, also known as acoustic images, where the visual and acoustic images are aligned in space and synchronized in time. Using this richer information, we train audio deep learning models in a teacher-student fashion. In particular, we distill knowledge into audio networks from both visual and acoustic image teachers. Our experiments suggest that the learned representations are more powerful and generalize better than features learned from models trained on single-microphone audio data alone.
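Although this record carries only the abstract, the teacher-student training it describes follows the general knowledge-distillation recipe. The sketch below illustrates one plausible form of the objective, assuming PyTorch and a softened-logit KL term averaged over the two teachers (the visual and acoustic-image networks); the function name, temperature T, and weight alpha are illustrative assumptions, not values from the paper.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits_list, labels, T=2.0, alpha=0.5):
    """Hypothetical multi-teacher distillation loss (not the paper's exact objective).

    Combines cross-entropy on the ground-truth labels with a KL term that
    pushes the student's softened predictions toward those of each teacher.
    T (temperature) and alpha (loss balance) are illustrative defaults.
    """
    # Supervised term on the hard labels.
    ce = F.cross_entropy(student_logits, labels)

    # Soft-target term, averaged over all teachers.
    kd = 0.0
    for teacher_logits in teacher_logits_list:
        kd += F.kl_div(
            F.log_softmax(student_logits / T, dim=1),
            F.softmax(teacher_logits / T, dim=1),
            reduction="batchmean",
        ) * (T * T)  # standard temperature scaling of the soft-target gradient
    kd /= len(teacher_logits_list)

    return alpha * ce + (1.0 - alpha) * kd
```

In the setting the abstract describes, teacher_logits_list would hold the outputs of the frozen video and acoustic-image networks for the same clip, while the student consumes only the single-microphone audio signal.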
Keywords: Multimodal learning; Knowledge distillation; Acoustic images
Files in this record:
There are no files associated with this record.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11562/1122772