Information-theoretic limitations of distributed information processing
Xu, Aolin
Description
- Title
- Information-theoretic limitations of distributed information processing
- Author(s)
- Xu, Aolin
- Issue Date
- 2016-11-30
- Director of Research (if dissertation) or Advisor (if thesis)
- Raginsky, Maxim
- Doctoral Committee Chair(s)
- Raginsky, Maxim
- Committee Member(s)
- Hajek, Bruce
- Milenkovic, Olgica
- Srikant, Rayadurgam
- Wu, Yihong
- Department of Study
- Electrical & Computer Eng
- Discipline
- Electrical & Computer Engr
- Degree Granting Institution
- University of Illinois at Urbana-Champaign
- Degree Name
- Ph.D.
- Degree Level
- Dissertation
- Keyword(s)
- distributed information processing
- fundamental limits
- information theory
- decentralized estimation
- Bayes risk
- small ball probability
- strong data processing inequality
- distributed function computation
- computation time
- network reduction
- diameter of network
- statistical learning
- adaptive composition
- stability of learning algorithms
- generalization error
- mutual information
- Gibbs algorithm
- adaptive data analytics
- Abstract
- In a generic distributed information processing system, a number of agents connected by communication channels aim to accomplish a task collectively through local communications. The fundamental limits of distributed information processing problems depend not only on the intrinsic difficulty of the task, but also on the communication constraints imposed by the system's distributed nature. In this thesis, we quantify these dependencies within information-theoretic frameworks. We consider three representative distributed information processing problems: decentralized parameter estimation, distributed function computation, and statistical learning under adaptive composition. For the first two problems, we derive converse results on the Bayes risk and on the computation time, respectively. For the last problem, we first study the relationship between the generalization capability of a learning algorithm and its stability, measured by the mutual information between the algorithm's input and output, and then derive achievability results on the generalization error of adaptively composed learning algorithms. In all cases, we obtain general results on the fundamental limits with respect to a general model of the problem, so that the results can be applied to a variety of specific scenarios. Our information-theoretic analyses also provide general approaches to inferring global properties of a distributed information processing system from local properties of its components.
- Graduation Semester
- 2016-12
- Type of Resource
- text
- Permalink
- http://hdl.handle.net/2142/95358
- Copyright and License Information
- Copyright 2016 Aolin Xu
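The abstract's relationship between generalization and input–output mutual information can be illustrated by one representative bound of this type from the related literature (stated here as a sketch, under the assumption that the loss ℓ(w, Z) is σ-sub-Gaussian under the data distribution μ for every hypothesis w):

```latex
% S = (Z_1, \dots, Z_n): training sample, Z_i i.i.d. from \mu
% W: output of the (possibly randomized) learning algorithm P_{W|S}
% L_\mu(w) = \mathbb{E}_{Z \sim \mu}[\ell(w, Z)]: population risk
% L_S(w) = \tfrac{1}{n}\sum_{i=1}^n \ell(w, Z_i): empirical risk
% If \ell(w, Z) is \sigma-sub-Gaussian under \mu for every w, then
\left| \mathbb{E}\!\left[ L_\mu(W) - L_S(W) \right] \right|
  \le \sqrt{\frac{2\sigma^2}{n}\, I(S; W)}
```

The bound captures the stability interpretation in the abstract: an algorithm whose output W retains little information about its input S (small I(S; W)) cannot overfit by much in expectation.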
Owning Collections
Dissertations and Theses - Electrical and Computer Engineering
Graduate Dissertations and Theses at Illinois (PRIMARY)