The interplay between information and control theory within interactive decision-making problems
Gorantla, Siva Kumar
Description
- Title
- The interplay between information and control theory within interactive decision-making problems
- Author(s)
- Gorantla, Siva Kumar
- Issue Date
- 2012-05-22
- Director of Research (if dissertation) or Advisor (if thesis)
- Coleman, Todd P.
- Doctoral Committee Chair(s)
- Coleman, Todd P.
- Committee Member(s)
- Meyn, Sean P.
- Jones, Douglas L.
- Kiyavash, Negar
- Basar, Tamer
- Department of Study
- Electrical & Computer Engineering
- Discipline
- Electrical & Computer Engineering
- Degree Granting Institution
- University of Illinois at Urbana-Champaign
- Degree Name
- Ph.D.
- Degree Level
- Dissertation
- Keyword(s)
- information theory
- stochastic control
- team decision theory
- sequential information gain
- sequential decision making
- inverse optimal control
- reliable communication
- nonlinear filter stability
- message point communication schemes
- Abstract
- The context for this work is two-agent team decision systems. An agent is an intelligent entity that can measure some aspect of its environment, process information, and possibly influence the environment through its actions. In a collaborative two-agent team decision system, the agents can be coupled by noisy or noiseless interactions and cooperate to solve problems that are beyond the individual capabilities or knowledge of either agent. This thesis focuses on using stochastic control and information-theoretic tools hand in hand to solve and analyze an interactive two-agent sequential decision-making problem. Stochastic control techniques help identify optimal strategies for sequential decision making based on observations. Information-theoretic tools address the fundamental limits of performance achievable between two agents with a noisy interaction, in the context of communication and rate-distortion.
The motivation for this work comes from the quest to use stochastic control tools to identify optimal policies for a two-agent team decision system with the objective of maximizing the information rate. The resulting policies, if they exist, will involve decision making at each step based on observations, in contrast to existing communication schemes that decide what to transmit over a long time horizon at the start of communication. However, many questions have to be addressed: How should we formulate a stochastic-control problem that captures information gains? Supposing we can formulate such a control problem, can we solve for explicit, non-random, optimal strategies that operate on sufficient statistics (thus yielding a simple structure for optimal policies)? Further, do these control-theory-based policies assure reliability of communication in an information-theoretic sense? Consider also a different problem in which a third party knows the optimal policies of the two interacting agents but is unaware of the cost function they are collaboratively optimizing. Can this third party deduce what the two agents are trying to achieve from their policies alone? In this thesis, we address these questions using perspectives from both information and control theory.
We consider an interacting two-agent decision-making problem consisting of a Markov source process, a causal encoder with feedback, and a causal decoder. We augment the standard formulation by considering general alphabets and a non-trivial cost function operating on current and previous symbols; this enables us to introduce the ‘sequential information gain cost’ function, which captures the information gains accumulated at each time step. We emphasize how this problem formulation leads to a different style of coding scheme with a control-theoretic flavor, and we derive structural results on the optimal policies using dynamic programming principles. We then demonstrate another interplay between information theory and control theory, at the level of reliability of message-point communication schemes, by establishing a relationship between reliability in feedback communication and the stability of the posterior belief’s nonlinear filter. We also consider the two-agent inverse optimal control (IOC) problem, where a fixed policy satisfying certain statistical conditions is shown to be optimal for some cost function via probabilistic matching.
We provide examples of the applicability of this framework to communication with feedback, hidden Markov models and the nonlinear filter, decentralized control, brain-machine interfaces, and queuing theory. (Illustrative sketches of a per-step information gain and of a posterior-belief filter recursion are given below, after the record metadata.)
- Graduation Semester
- 2012-05
- Permalink
- http://hdl.handle.net/2142/30956
- Copyright and License Information
- Copyright 2012 Siva Kumar Gorantla
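Sketch: per-step information gain. The abstract's ‘sequential information gain cost’ accumulates information at each decision step. As a hedged illustration (one standard decomposition consistent with that description, not necessarily the thesis's exact definition), the directed information from the encoder's causal inputs X^n to the decoder's observations Y^n over a channel used with feedback splits into per-step terms, where X^t denotes (X_1, ..., X_t) and Y^{t-1} denotes (Y_1, ..., Y_{t-1}):
\[
I(X^n \to Y^n) \;=\; \sum_{t=1}^{n} I(X^t; Y_t \mid Y^{t-1}),
\qquad
g_t \;\triangleq\; I(X^t; Y_t \mid Y^{t-1}) \;=\; H(Y_t \mid Y^{t-1}) - H(Y_t \mid X^t, Y^{t-1}).
\]
Each term g_t measures the additional information the decoder's time-t observation provides about the encoder's causal input sequence; quantities of this per-step form are the kind of cost a stochastic-control formulation can act on at every time step.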
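Sketch: posterior-belief (nonlinear filter) recursion. The thesis links the reliability of message-point feedback schemes to the stability of the posterior belief's nonlinear filter. As a purely illustrative sketch (standard hidden-Markov-model filtering, not the thesis's specific construction), the Python snippet below implements one step of the posterior-belief recursion for a finite-state Markov source observed through a memoryless noisy channel; the transition matrix P, likelihood matrix L, and the two-state example are hypothetical placeholders.

import numpy as np

def belief_update(belief, y, P, L):
    # One step of the nonlinear filter (posterior belief) recursion:
    #   belief[s] = P(S_{t-1} = s | y_1, ..., y_{t-1})
    #   P[i, j]   = P(S_t = j | S_{t-1} = i)   (Markov source)
    #   L[s, y]   = P(Y_t = y | S_t = s)       (memoryless channel)
    predicted = belief @ P              # predict: prior over S_t
    unnormalized = predicted * L[:, y]  # correct: Bayes update with output y_t
    return unnormalized / unnormalized.sum()

# Hypothetical two-state example: a sticky Markov source observed through a
# binary symmetric channel with crossover probability 0.1.
P = np.array([[0.9, 0.1], [0.1, 0.9]])
L = np.array([[0.9, 0.1], [0.1, 0.9]])
belief = np.array([0.5, 0.5])
for y in [0, 0, 1]:
    belief = belief_update(belief, y, P, L)

Filter stability in this setting means that two such recursions, started from different initial beliefs but driven by the same channel outputs, merge over time; the abstract relates this kind of merging to reliability in feedback communication.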
Owning Collections
Graduate Dissertations and Theses at Illinois (primary)