Toward efficient multi-agent deep reinforcement learning
Liu, Iou-Jen
Permalink
https://hdl.handle.net/2142/116168
Description
- Title
- Toward efficient multi-agent deep reinforcement learning
- Author(s)
- Liu, Iou-Jen
- Issue Date
- 2022-06-29
- Director of Research (if dissertation) or Advisor (if thesis)
- Schwing, Alexander G.
- Doctoral Committee Chair(s)
- Schwing, Alexander G.
- Committee Member(s)
- Chen, Deming
- Jiang, Nan
- Srikant, Rayadurgam
- Department of Study
- Electrical & Computer Engineering
- Discipline
- Electrical & Computer Engineering
- Degree Granting Institution
- University of Illinois at Urbana-Champaign
- Degree Name
- Ph.D.
- Degree Level
- Dissertation
- Keyword(s)
- Machine Learning
- Deep Reinforcement Learning
- Multi-Agent Learning
- Embodied AI
- Abstract
- Deep reinforcement learning (RL) has achieved remarkable success in various domains, including its use for games such as Go and chess. Recently, deep multi-agent RL (MARL) has drawn much attention, because a plethora of real-world problems can naturally be formulated in a MARL setting. For instance, coordination of autonomous vehicles and unmanned aerial vehicles, or robot fleet control, requires multiple agents to take actions based on local observations and to coordinate their behaviors. However, both single-agent and multi-agent deep RL face a common challenge: low data efficiency and long training times. In this thesis, we take a step toward addressing the problem: how can (multi-agent) deep reinforcement learning be made more efficient, i.e., how can it use less data and require less training time? We address the long training times and low data efficiency of deep RL in five thrusts: (1) parallel high-throughput training; (2) better representation learning; (3) transfer learning; (4) efficient exploration; and (5) training agents to leverage external knowledge. For (1), to achieve higher throughput for RL training, we propose a fast RL training framework which collects data in parallel without sacrificing the data efficiency of RL algorithms. For (2), we investigate the use of graph convolutional networks to capture the permutation-invariant nature of the centralized critic commonly used in MARL, and we find that this leads to more efficient learning (an illustrative sketch of a permutation-invariant critic follows this record's metadata). In addition, we study an object-centric representation that scales a multi-agent RL algorithm to complex visual environments. For (3), to allow RL agents to leverage 'knowledge' from trained agents, we propose a transfer learning framework which permits a student model to leverage the 'knowledge' of multiple teacher models; we find this transfer to result in faster learning. For (4), we study coordinated multi-agent exploration, which permits agents to coordinate their exploration efforts and learn faster. Lastly, for (5), we propose 'Asking for Knowledge' (AFK), an agent which learns to generate language commands to query for meaningful knowledge that helps solve given tasks more efficiently. In summary, this dissertation studies approaches that improve the data efficiency and training time of deep reinforcement learning. We believe that, with shorter training times and better data efficiency, (multi-agent) deep reinforcement learning can be applied to a wide variety of real-world problems, and that the approaches presented in this dissertation bring us closer to this goal.
- Graduation Semester
- 2022-08
- Type of Resource
- Thesis
- Copyright and License Information
- Copyright 2022 Iou-Jen Liu
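The permutation-invariant centralized critic mentioned in the abstract can be illustrated with a minimal sketch. The code below is an assumption-laden illustration, not the dissertation's implementation: it assumes PyTorch, a fully connected agent graph, and mean aggregation as the graph-convolution step; the class name `PermutationInvariantCritic` and all dimensions are hypothetical.

```python
# Minimal sketch (assumptions: PyTorch, fully connected agent graph,
# mean aggregation as the graph-convolution step). Illustrative only;
# not the dissertation's exact architecture.
import torch
import torch.nn as nn


class PermutationInvariantCritic(nn.Module):
    """Centralized critic whose value estimate is invariant to the
    ordering of the agents' (observation, action) inputs."""

    def __init__(self, obs_dim: int, act_dim: int, hidden_dim: int = 128):
        super().__init__()
        # Shared per-agent encoder: the same weights are applied to every
        # agent, which is what makes permutation invariance possible.
        self.encoder = nn.Sequential(
            nn.Linear(obs_dim + act_dim, hidden_dim), nn.ReLU(),
        )
        # One graph-convolution-style mixing step over a fully connected
        # agent graph: combine each agent's embedding with the mean of all
        # embeddings (a permutation-equivariant operation).
        self.self_weight = nn.Linear(hidden_dim, hidden_dim)
        self.neighbor_weight = nn.Linear(hidden_dim, hidden_dim)
        self.value_head = nn.Sequential(nn.ReLU(), nn.Linear(hidden_dim, 1))

    def forward(self, obs: torch.Tensor, act: torch.Tensor) -> torch.Tensor:
        # obs: (batch, n_agents, obs_dim), act: (batch, n_agents, act_dim)
        h = self.encoder(torch.cat([obs, act], dim=-1))   # (B, N, H)
        pooled = h.mean(dim=1, keepdim=True)              # (B, 1, H)
        h = self.self_weight(h) + self.neighbor_weight(pooled)
        # Mean-pool over agents so the value does not depend on agent
        # ordering, then map to a scalar value per batch element.
        return self.value_head(h.mean(dim=1))             # (B, 1)


if __name__ == "__main__":
    critic = PermutationInvariantCritic(obs_dim=8, act_dim=2)
    obs, act = torch.randn(4, 3, 8), torch.randn(4, 3, 2)  # 4 samples, 3 agents
    q = critic(obs, act)
    # Permuting the agent dimension leaves the value estimate unchanged.
    perm = torch.randperm(3)
    assert torch.allclose(q, critic(obs[:, perm], act[:, perm]), atol=1e-5)
    print(q.shape)  # torch.Size([4, 1])
```

Because the per-agent encoder weights are shared and the aggregation is a mean over agents, permuting the agents' order leaves the value estimate unchanged, which is the property the abstract attributes to the graph-convolutional centralized critic.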
Owning Collections
Graduate Dissertations and Theses at Illinois (Primary)