Rationally inattentive decision-making: Bayesian decision-making with information choice
Shafieepoorfard, Ehsan
Permalink
https://hdl.handle.net/2142/105132
Description
- Title
- Rationally inattentive decision-making: Bayesian decision-making with information choice
- Author(s)
- Shafieepoorfard, Ehsan
- Issue Date
- 2019-02-25
- Director of Research (if dissertation) or Advisor (if thesis)
- Raginsky, Maxim
- Doctoral Committee Chair(s)
- Raginsky, Maxim
- Committee Member(s)
- Başar, Tamer
- Meyn, Sean
- Srikant, Rayadurgam
- Liberzon, Daniel
- Department of Study
- Electrical & Computer Eng
- Discipline
- Electrical & Computer Engr
- Degree Granting Institution
- University of Illinois at Urbana-Champaign
- Degree Name
- Ph.D.
- Degree Level
- Dissertation
- Keyword(s)
- Stochastic control, Information Theory, Bayesian Decision Making, Markov Decision Process, Network Control, Behavioral Economics
- Abstract
Rationally inattentive decision-making (RIDM) extends the general problem of Bayesian decision-making under uncertainty to the case in which the decision-maker (DM) has several options for obtaining extra information about the environment before making the decision. The environment is described quantitatively by variables referred to as the state. The decision is made to minimize a cost function that depends on the unknown state and the final action taken by the DM. The crucial assumption is that the DM must first rationally select which information to process, among the options available, and which information to put aside. In other words, the DM has to rationally decide what information to pay attention to and what information to be inattentive to, and then takes the optimal action based on all the information in hand. The term RIDM was coined by the economist Christopher Sims to explain sluggish macroeconomic adjustments. This notion of decision-making under uncertainty includes the sequential setup and, in its broad sense, relates several classic concepts such as decoding, estimation, optimization and control.
In general, sequential Bayesian decision-making under uncertainty is a framework widely used to address numerous real-world situations. It comprises a decision-maker, or controller, who successively interacts with the environment, or plant. Such an interaction is typically characterized by three basic elements: first, how the decision-maker perceives the state, which is not necessarily assumed to be perfectly known; second, how the decision-maker acts based on this perception; and third, how the state of the system changes in response to the action over the successive stages of interaction. The last element is a characteristic of the system and is always given. Work on optimization, dynamic programming and estimation typically focuses on how to take action and treats the perception element as given, while information theory is concerned with how to transmit information most efficiently so that the decision-maker (or decoder) can take the optimal decision.
The general framework we provide here addresses situations in which perception and action interact reciprocally. In such situations, how to perceive the state is itself treated as a decision variable: the perception mechanism must be chosen from a set of available mechanisms and then employed toward minimizing the cost. Such situations are encountered in networked control systems, artificially intelligent and automated systems, the brain and cognitive sciences, and behavioral economics. Their common feature is the presence of some constraint that prevents perfect perception of the state and leaves a handful of competing ways of collecting imperfect information about it. To model and tackle this generalized sequential Bayesian decision-making under uncertainty, we consider the joint probability measure of all the system variables over the sequence of stages. All the basic elements can then be modeled as conditional probability kernels over these variables, including the perception mechanism, which is modeled as a probability kernel known as the observation kernel. For the constraint that prevents perfect perception, we assume bounded Shannon mutual information between the observation and the state, which yields a convex set of observation kernels transferring at most the prescribed amount of information (a minimal formulation of this single-stage constraint is sketched after the record details below).
After introducing and explaining the basic framework, we formulate and tackle three fundamental problems: the rationally inattentive Markov decision process over a finite horizon, rationally inattentive ergodic control of a Markov chain over an infinite horizon, and sequential empirical coordination with a bundle of random processes over a finite horizon. We model each system by considering the joint probability measure of all of its variables. We then provide a systematic way of reducing the space of policies to appropriate sets over which our problems take the convenient form of convex optimizations. By defining an appropriate distortion measure, these convex optimization problems can be linked to the distortion-rate problem in information theory (a small numerical sketch of this link appears after the record details below). The results should be of interest to both the economics and engineering communities.
- Graduation Semester
- 2019-05
- Type of Resource
- text
- Permalink
- http://hdl.handle.net/2142/105132
- Copyright and License Information
- Copyright 2019 Ehsan Shafieepoorfard
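As a reading aid for the abstract above, here is a minimal sketch of the single-stage version of the information-constrained problem it describes; the notation (state X with prior \mu, action U, cost c, rate budget R) is ours and need not match the dissertation's. The DM optimizes over observation/action kernels W(du|x) subject to the mutual-information constraint:
\[
  D(R) \;=\; \inf_{W(\mathrm{d}u\,|\,x)\,:\; I(X;U)\le R}\; \mathbb{E}\bigl[c(X,U)\bigr]
  \;=\; \inf_{W\,:\; I(X;U)\le R}\; \int_{\mathcal{X}} \mu(\mathrm{d}x) \int_{\mathcal{U}} W(\mathrm{d}u\,|\,x)\, c(x,u).
\]
For a fixed prior \mu the objective is linear in W and I(X;U) is convex in W, so the feasible set and the problem are convex; the optimal value coincides with the distortion-rate function of the source \mu under distortion measure c, which is the link to rate-distortion theory invoked in the abstract.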
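For discrete state and action spaces, the Lagrangian form of the same problem can be solved numerically with a Blahut-Arimoto-style alternating minimization, as in standard rate-distortion computations. The sketch below is illustrative only and uses our own names (blahut_arimoto, mu, c, beta) and a toy binary example; it is not code from the dissertation.

import numpy as np

def blahut_arimoto(mu, c, beta, n_iters=500, tol=1e-10):
    """Lagrangian form of the single-stage rationally inattentive problem:
    minimize over kernels W(u|x) the quantity E[c(X,U)] + (1/beta) * I(X;U),
    which traces out the distortion-rate curve D(R) as beta is swept.
    mu : prior over states, shape (nx,)
    c  : cost matrix c[x, u], shape (nx, nu)
    """
    nx, nu = c.shape
    q = np.full(nu, 1.0 / nu)              # action marginal, initialized uniform
    for _ in range(n_iters):
        # optimal kernel for the current marginal: W(u|x) proportional to q(u) exp(-beta c(x,u))
        W = q[None, :] * np.exp(-beta * c)
        W /= W.sum(axis=1, keepdims=True)
        q_new = mu @ W                      # updated marginal q(u) = sum_x mu(x) W(u|x)
        if np.max(np.abs(q_new - q)) < tol:
            q = q_new
            break
        q = q_new
    W = q[None, :] * np.exp(-beta * c)
    W /= W.sum(axis=1, keepdims=True)
    joint = mu[:, None] * W
    cost = float(np.sum(joint * c))                         # E[c(X,U)]
    rate = float(np.sum(joint * np.log(W / q[None, :])))    # I(X;U) in nats
    return W, cost, rate

# Toy example: binary state, binary action, 0-1 cost (guess the state).
mu = np.array([0.5, 0.5])
c = np.array([[0.0, 1.0],
              [1.0, 0.0]])
W, cost, rate = blahut_arimoto(mu, c, beta=3.0)
print(cost, rate)   # expected cost and information usage at this beta

Sweeping beta traces the distortion-rate curve: a large beta spends more information (higher I(X;U)) to achieve lower expected cost, while a small beta corresponds to a tighter attention budget.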
Owning Collections
Graduate Dissertations and Theses at Illinois (PRIMARY)
Dissertations and Theses - Electrical and Computer Engineering