Information theoretical kernels for generative embeddings based on Hidden Markov Models

Bicego, Manuele; Murino, Vittorio
2010-01-01

Abstract

Many approaches to learning classifiers for structured objects (e.g., shapes) use generative models in a Bayesian framework. However, state-of-the-art classifiers for vectorial data (e.g., support vector machines) are learned discriminatively. A generative embedding is a mapping from the object space into a fixed-dimensional feature space, induced by a generative model which is usually learned from data. The fixed dimensionality of these feature spaces permits the use of state-of-the-art discriminative machines based on vectorial representations, thus bringing together the best of the discriminative and generative paradigms. Using a generative embedding involves two steps: (i) defining and learning the generative model used to build the embedding; (ii) discriminatively learning a (possibly kernel-based) classifier on the adopted feature space. The literature on generative embeddings is essentially focused on step (i), usually adopting some standard off-the-shelf tool (e.g., an SVM with a linear or RBF kernel) for step (ii). In this paper, we follow a different route, by combining several Hidden Markov Model-based generative embeddings (including the classical Fisher score) with the recently proposed non-extensive information-theoretic kernels. We test this methodology on a 2D shape recognition task, showing that the proposed method is competitive with the state of the art.
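To make the two-step pipeline described in the abstract concrete, the Python sketch below illustrates one possible instantiation under stated assumptions: step (i) fits one HMM per class and maps each sequence to its vector of per-class log-likelihoods (the simplest of the HMM-based generative embeddings; the paper also uses others, including the Fisher score); step (ii) trains an SVM with a precomputed information-theoretic kernel on those vectors. The Jensen-Shannon kernel used here is a simplified stand-in for the paper's non-extensive (Jensen-Tsallis q-difference) kernels, and the hmmlearn and scikit-learn dependencies are assumptions, not part of the paper.

import numpy as np
from hmmlearn import hmm          # assumed dependency for step (i)
from sklearn.svm import SVC       # assumed dependency for step (ii)

def fit_class_hmms(sequences, labels, n_states=3, seed=0):
    """Step (i): learn one Gaussian HMM per class."""
    models = {}
    for c in np.unique(labels):
        seqs = [s for s, y in zip(sequences, labels) if y == c]
        X = np.concatenate(seqs)              # hmmlearn expects stacked frames
        lengths = [len(s) for s in seqs]      # plus per-sequence lengths
        m = hmm.GaussianHMM(n_components=n_states,
                            covariance_type="diag", random_state=seed)
        m.fit(X, lengths)
        models[c] = m
    return models

def embed(sequences, models):
    """Generative embedding: per-class HMM log-likelihood vector."""
    return np.array([[m.score(s) for m in models.values()]
                     for s in sequences])

def js_kernel(A, B):
    """Jensen-Shannon kernel k(x,y) = ln 2 - JSD(p_x, p_y) on
    softmax-normalized embeddings (a simplification of the paper's
    Jensen-Tsallis q-difference kernels)."""
    def softmax(Z):
        E = np.exp(Z - Z.max(axis=1, keepdims=True))
        return E / E.sum(axis=1, keepdims=True)
    def H(P):                                  # Shannon entropy, row-wise
        return -(P * np.log(P + 1e-12)).sum(axis=1)
    P, Q = softmax(A), softmax(B)
    K = np.empty((len(P), len(Q)))
    for i, p in enumerate(P):
        M = 0.5 * (p + Q)                      # mixtures of row i with each row of Q
        jsd = H(M) - 0.5 * (H(p[None, :]) + H(Q))
        K[i] = np.log(2.0) - jsd               # JSD <= ln 2, so K >= 0
    return K

# Step (ii): discriminative learning with the precomputed kernel.
# models = fit_class_hmms(train_seqs, train_labels)
# Ftr, Fte = embed(train_seqs, models), embed(test_seqs, models)
# clf = SVC(kernel="precomputed").fit(js_kernel(Ftr, Ftr), train_labels)
# preds = clf.predict(js_kernel(Fte, Ftr))

The design point this sketch mirrors is the division of labor in the paper: the HMMs only define the embedding, while all discriminative power comes from the kernel machine trained on the embedded vectors.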
Keywords: pattern recognition; machine learning; image processing
Files in this record:
No files are associated with this record.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11562/349259
Citations
  • PMC: ND
  • Scopus: 4
  • ISI Web of Science: 4