
Asymptotic cumulative risk and Bayes risk under entropy loss, with applications


Files in this item: 9010836.pdf (PDF, 4 MB; restricted to U of Illinois)
Title: Asymptotic cumulative risk and Bayes risk under entropy loss, with applications
Author(s): Clarke, Bertrand Salem
Doctoral Committee Chair(s): Barron, A.
Department / Program: Statistics
Discipline: Statistics
Degree Granting Institution: University of Illinois at Urbana-Champaign
Degree: Ph.D.
Genre: Dissertation
Subject(s): Statistics
Abstract: In many areas of application of statistics one has a relevant parametric family of densities and wishes to estimate the density from a random sample. In such cases one can use the family to generate an estimator. We fix a prior and consider the properties of the predictive density as an estimator of the density. We examine the cumulative risk of the estimator and its cumulative Bayes risk under Kullback-Leibler loss. These two quantities appear in other contexts with different interpretations: aside from density estimation, the first occurs in source coding and hypothesis testing, and the second occurs in source coding, channel coding, and the asymptotic convergence of the posterior to a normal. In the first chapter we state our two main results, give some examples, and discuss the applications of our results to those areas.

Our two key results amount to two senses in which the Kullback-Leibler distance between the n-fold product of a distribution in a parametric family and a mixture of such distributions over the parametric family increases as the logarithm of the sample size, provided that the mixing assigns some mass near the true distribution. The first result examines the Kullback-Leibler distance directly; the second examines it after it has again been averaged with respect to the prior. We prove that each is of the form one half the dimension of the parameter times the logarithm of the sample size, plus a constant, and in both cases the constant is identified.

The key technique for the first result is Laplace integration, which gives upper and lower bounds that are asymptotically tight and can be made uniformly good over compact sets in the parameter space. When the parameter space is not compact it becomes advantageous to use different techniques for the upper and lower bounds, and the convergence holds in an average sense rather than pointwise uniformly. For the upper bound we use an inequality due to Barron (1988) to set up an application of the dominated convergence theorem, and for the lower bound we use the fact that the normal distribution has maximal entropy subject to a variance constraint.
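The asymptotic form described in the abstract can be written compactly. The display below is a sketch with assumed notation rather than a quotation from the thesis: P_theta^n denotes the n-fold product of the sampling distribution, M_n its mixture under a prior with density w, d the parameter dimension, and I(theta) the Fisher information; the constant terms are stated in the form in which this expansion is usually quoted in the literature.

% Sketch of the expansion described in the abstract (assumed notation; not a
% quotation from the thesis). Regularity conditions and prior mass near the
% true parameter are required.
\[
  D\!\left(P_{\theta}^{\,n} \,\middle\|\, M_n\right)
  \;=\; \frac{d}{2}\,\log\frac{n}{2\pi e}
  \;+\; \frac{1}{2}\,\log\det I(\theta)
  \;+\; \log\frac{1}{w(\theta)}
  \;+\; o(1).
\]

The first main result concerns this quantity directly; the second averages it with respect to the prior w, yielding the cumulative Bayes risk, which has the same (d/2) log n leading term with its own identified constant.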
Issue Date: 1989
Type: Text
Language: English
URI: http://hdl.handle.net/2142/19361
Rights Information: Copyright 1989 Clarke, Bertrand Salem
Date Available in IDEALS: 2011-05-07
Identifier in Online Catalog: AAI9010836
OCLC Identifier: (UMI)AAI9010836
