
Disentangled Latent Spaces Facilitate Data-Driven Auxiliary Learning

Skenderi, Geri; Capogrosso, Luigi; Toaiari, Andrea; Denitto, Matteo; Fummi, Franco; Melzi, Simone
2025-01-01

Abstract

Learning to solve auxiliary tasks concurrently with a principal task of interest can improve performance when data is scarce or the principal task is complex. This idea is inspired by the improved generalization induced by solving multiple tasks simultaneously, which leads to a robust shared representation. However, selecting optimal auxiliary tasks typically requires manual design or costly meta-learning approaches. We propose Detaux, a framework that discovers an unrelated auxiliary classification task via weakly supervised disentanglement at the representation level. Isolating the variations relevant to the principal task in one subspace while generating orthogonal subspaces with high separability allows us to discover auxiliary labels by clustering in these subspaces, enabling a transition from Single-Task Learning (STL) to Multi-Task Learning (MTL). In particular, the original labels associated with the principal task and the newly discovered ones can be fed into any MTL framework. Experiments and ablation studies highlight the effectiveness of Detaux and reveal an unexplored link between disentangled representations and MTL. The source code is available at https://github.com/intelligolabs/Detaux.
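The label-discovery step described in the abstract — keep the principal-task variation in one subspace, cluster an orthogonal subspace to obtain auxiliary labels, then pass both label sets to an MTL framework — can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the subspace split, the toy data, and the choice of k-means with four auxiliary classes are all assumptions made here for demonstration.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Toy "disentangled" embedding: we assume the first half of each vector is the
# subspace aligned with the principal task and the second half an orthogonal
# subspace capturing unrelated factors of variation (hypothetical split).
n_samples, dim = 200, 16
z = rng.normal(size=(n_samples, dim))
z_principal = z[:, : dim // 2]
z_orthogonal = z[:, dim // 2 :]

# Discover auxiliary labels by clustering the orthogonal subspace, mirroring
# the clustering-based label-discovery step described in the abstract.
k = 4  # number of auxiliary classes; a free hyperparameter in this sketch
aux_labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(z_orthogonal)

# The original principal-task labels and the newly discovered auxiliary labels
# could then be fed jointly into any multi-task learning framework.
principal_labels = rng.integers(0, 3, size=n_samples)  # placeholder labels
```

In a real pipeline the embedding `z` would come from the weakly supervised disentanglement model rather than random noise, but the clustering step that converts an orthogonal subspace into discrete auxiliary labels has this shape.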
Year: 2025
ISBN: 9783032101846
Keywords: Representation Learning, Auxiliary Learning, Multi-Task Learning
Files in this record:
No files are associated with this record.

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11562/1178907