Bayes and maximum likelihood for L^1-Wasserstein deconvolution of Laplace mixtures

Scricciolo, Catia
2018-01-01

Abstract

We consider the problem of recovering a distribution function on the real line from observations additively contaminated with errors following the standard Laplace distribution. Assuming that the latent distribution is completely unknown leads to a nonparametric deconvolution problem. We begin by studying the rates of convergence, relative to the $L^2$-norm and the Hellinger metric, for the direct problem of estimating the sampling density, which is a mixture of Laplace densities with a possibly unbounded set of locations: the rate of convergence for the Bayes density estimator corresponding to a Dirichlet process prior over the space of all mixing distributions on the real line matches, up to a logarithmic factor, the $n^{-3/8}\log^{1/8}n$ rate for the maximum likelihood estimator. We then appeal to an inversion inequality that translates the $L^2$-norm and the Hellinger distance between general kernel mixtures, for a kernel density with polynomially decaying Fourier transform, into any $L^p$-Wasserstein distance, $p\geq1$, between the corresponding mixing distributions, provided their Laplace transforms are finite in some neighborhood of zero; from this we derive the rates of convergence in the $L^1$-Wasserstein metric for the Bayes and maximum likelihood estimators of the mixing distribution. Merging of the Bayes and maximum likelihood estimators in the $L^1$-Wasserstein distance follows as a by-product, along with an assessment of the stochastic order of the discrepancy between the two estimation procedures.
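A minimal sketch of the deconvolution setup described above, not taken from the paper: latent draws from an unknown mixing distribution are observed after additive standard Laplace contamination, and the $L^1$-Wasserstein distance quantifies how far the observed sample sits from the latent one. The two-point mixing distribution and the sample size below are hypothetical choices for illustration only.

```python
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)
n = 5000

# Hypothetical latent mixing distribution: equal-weight point masses at -1 and 2.
x = rng.choice([-1.0, 2.0], size=n)

# Observed sample: latent draws contaminated with standard Laplace errors,
# i.e. Y_i = X_i + eps_i with eps_i ~ Laplace(0, 1).
y = x + rng.laplace(loc=0.0, scale=1.0, size=n)

# Empirical L^1-Wasserstein distance between the latent and observed samples,
# measuring how much the Laplace noise smears the mixing distribution.
d = wasserstein_distance(x, y)
print(f"empirical L1-Wasserstein distance: {d:.3f}")
```

Since $W_1(X, X+\varepsilon) \leq \mathbb{E}|\varepsilon| = 1$ for standard Laplace noise, the printed distance stays below 1 up to sampling error; the deconvolution problem studied in the paper is the inverse task of estimating the distribution of $X$ from the sample of $Y$ alone.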
Deconvolution
Dirichlet process
Entropy
Hellinger distance
Laplace mixture
Maximum likelihood
Posterior distribution
Rate of convergence
Sieve
Wasserstein distance
Files in this product:
There are no files associated with this product.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11562/967994
Citations
  • PMC: not available
  • Scopus: 3
  • Web of Science (ISI): 2