Asymptotic cumulative risk and Bayes risk under entropy loss, with applications
Clarke, Bertrand Salem
Permalink
https://hdl.handle.net/2142/19361
Description
Title
Asymptotic cumulative risk and Bayes risk under entropy loss, with applications
Author(s)
Clarke, Bertrand Salem
Issue Date
1989
Doctoral Committee Chair(s)
Barron, Andrew
Department of Study
Statistics
Discipline
Statistics
Degree Granting Institution
University of Illinois at Urbana-Champaign
Degree Name
Ph.D.
Degree Level
Dissertation
Keyword(s)
Statistics
Language
eng
Abstract
In many areas of application of statistics one has a relevant parametric family of densities and wishes to estimate the density from a random sample. In such cases one can use the family to generate an estimator. We fix a prior and consider the properties of the predictive density as an estimator of the density.
We examine the cumulative risk of the estimator and its cumulative Bayes risk under Kullback-Leibler loss. These two quantities appear in other contexts with different interpretations. Aside from density estimation, the first arises in source coding and hypothesis testing, and the second in source coding, channel coding, and the asymptotic normality of the posterior distribution. In the first chapter we state our two main results, give some examples, and discuss the applications of our results to those areas.
Our two key results amount to two senses in which the Kullback-Leibler distance between the n-fold product of a distribution in a parametric family and a mixture of such distributions over the parametric family increases as the logarithm of the sample size, provided that the mixing distribution assigns some mass near the true distribution. The first result examines the Kullback-Leibler distance directly; the second examines it after it has again been averaged with respect to the prior. We prove that each is of the form one half the dimension of the parameter times the logarithm of the sample size, plus a constant that is identified in both cases.
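As an illustrative sketch (not quoted from the dissertation), expansions of this type are commonly written in the following form, where d denotes the parameter dimension, w the prior density, I(θ) the Fisher information, P_θ^n the n-fold product, and M_n the prior mixture:
\[
D\bigl(P_\theta^{\,n}\,\big\|\,M_n\bigr)
  \;=\; \frac{d}{2}\log\frac{n}{2\pi e}
  \;+\; \frac{1}{2}\log\det I(\theta)
  \;+\; \log\frac{1}{w(\theta)}
  \;+\; o(1),
\]
and, after a further average with respect to the prior,
\[
\int w(\theta)\,D\bigl(P_\theta^{\,n}\,\big\|\,M_n\bigr)\,d\theta
  \;=\; \frac{d}{2}\log\frac{n}{2\pi e}
  \;+\; \int w(\theta)\,\log\frac{\sqrt{\det I(\theta)}}{w(\theta)}\,d\theta
  \;+\; o(1).
\]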
The key technique for the first result is Laplace integration, which gives upper and lower bounds that are asymptotically tight and can be made uniformly good over compact sets in the parameter space. When the parameter space is not compact, it becomes advantageous to use different techniques for the upper and lower bounds, and the convergence then holds in an average sense rather than uniformly in the parameter. For the upper bound we use an inequality due to Barron (1988) to set up an application of the dominated convergence theorem, and for the lower bound we use the fact that the normal distribution has maximal entropy among distributions with a given variance.
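For instance, the maximum-entropy fact invoked for the lower bound is the standard one, stated here in the univariate case for illustration: among all densities f with variance at most σ², the differential entropy satisfies
\[
h(f) \;=\; -\int f(x)\,\log f(x)\,dx \;\le\; \tfrac{1}{2}\,\log\bigl(2\pi e\,\sigma^{2}\bigr),
\]
with equality exactly when f is the normal density with variance σ².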