Unleashing the Transferability Power of Unsupervised Pre-Training for Emotion Recognition in Masked and Unmasked Facial Images

Beyan, Cigdem
2023-01-01

Abstract

Facial expressions are an essential part of nonverbal communication and major indicators of human emotions. Effective automatic Facial Emotion Recognition (FER) systems can facilitate comprehension of an individual's intentions and prospective behaviors in Human-Computer and Human-Robot Interaction. However, FER faces an enduring challenge, commonly encountered in real life: partial occlusions caused by objects such as sunglasses and hands. With the onset of the COVID-19 pandemic, facial masks became a major obstruction for FER systems. The use of facial masks exacerbates the occlusion issue since they cover a significant portion of a person's face, including the highly informative mouth area from which positive and negative emotions can be differentiated. Moreover, the efficacy of FER is largely contingent on the supervised learning paradigm, which necessitates costly and laborious data annotation. Our study centers on utilizing the reconstruction capability of a Convolutional Residual Autoencoder to differentiate between positive and negative emotions. The proposed approach employs unsupervised feature learning and takes as input facial images of individuals with and without masks. Our study places particular emphasis on the transferability of the proposed approach to different domains, in comparison with current state-of-the-art fully supervised methods. The comprehensive experimental evaluation demonstrates the superior transferability of the proposed approach, highlighting the effectiveness of the unsupervised feature learning pipeline. Notably, the proposed approach outperforms more complex methods in some scenarios while having relatively low computational cost. The source code of the proposed approach, along with the facial images created for this study, is accessible at HERE.
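The record itself contains no code; the sketch below is a minimal, hypothetical illustration of the kind of pipeline the abstract describes: a convolutional autoencoder with residual blocks, pre-trained purely on image reconstruction (no emotion labels), whose bottleneck features can later feed a lightweight positive/negative classifier. The layer widths, the 64x64 input resolution, and all hyperparameters are assumptions for illustration, not the authors' published configuration.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Two 3x3 convolutions with a skip connection (spatial size preserved)."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
        )
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(x + self.body(x))

class ConvResidualAutoencoder(nn.Module):
    """Encoder downsamples the face image; decoder reconstructs it.
    The bottleneck features serve as the unsupervised representation."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),   # 64x64 -> 32x32 (assumed input size)
            nn.ReLU(inplace=True),
            ResidualBlock(32),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),  # 32x32 -> 16x16
            nn.ReLU(inplace=True),
            ResidualBlock(64),
        )
        self.decoder = nn.Sequential(
            ResidualBlock(64),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),  # 16x16 -> 32x32
            nn.ReLU(inplace=True),
            ResidualBlock(32),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),   # 32x32 -> 64x64
            nn.Sigmoid(),  # pixel values in [0, 1]
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

# Unsupervised pre-training step: minimize pixel reconstruction error.
# No emotion labels are needed at this stage.
model = ConvResidualAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

batch = torch.rand(8, 3, 64, 64)  # stand-in for masked/unmasked face crops
opt.zero_grad()
recon, features = model(batch)
loss = loss_fn(recon, batch)
loss.backward()
opt.step()
```

After pre-training, the decoder can be discarded and the encoder's bottleneck features fed to a small downstream classifier (e.g., a linear layer or an SVM) for the positive/negative decision; the label efficiency of such a pipeline comes from the fact that only this final classifier requires annotated data.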
Keywords: Facial emotion recognition; facial mask; partial occlusions; affective computing; unsupervised pre-training; human-robot interaction
Files in this item:

File: IJ21_Unleashing_the_Transferability_Power_of_Unsupervised.pdf
Access: open access
Type: Publisher's version
License: Public domain
Size: 26.15 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11562/1121828
Citations
  • PMC: N/A
  • Scopus: 1
  • Web of Science: 0