Path planning and control of flying robots with account of human’s safety perception
Yoon, Hyung Jin
Permalink
https://hdl.handle.net/2142/104824
Description
- Title
- Path planning and control of flying robots with account of human’s safety perception
- Author(s)
- Yoon, Hyung Jin
- Issue Date
- 2019-04-16
- Director of Research (if dissertation) or Advisor (if thesis)
- Hovakimyan, Naira
- Doctoral Committee Chair(s)
- Hovakimyan, Naira
- Committee Member(s)
- Wang, Ranxiao F.
- Stipanović, Dušan M.
- Schwing, Alexander
- Department of Study
- Mechanical Sci & Engineering
- Discipline
- Mechanical Engineering
- Degree Granting Institution
- University of Illinois at Urbana-Champaign
- Degree Name
- Ph.D.
- Degree Level
- Dissertation
- Keyword(s)
- Human-robot interaction
- Optimal trajectory generation
- Model predictive control
- Reinforcement learning
- Partially observable Markov decision process
- Abstract
- In this dissertation, a framework for planning and control of flying robots that accounts for humans' safety perception is presented. The framework enables a flying robot to consider the human's perceived safety in path planning. First, a data-driven model of the human's safety perception is estimated from human test data collected in a virtual reality environment. A hidden Markov model (HMM) is used to estimate latent variables such as the user's attention, intention, and emotional state. An optimal motion planner then generates a trajectory, parameterized in Bernstein polynomials, that minimizes a cost related to the mission objectives while satisfying constraints on the predicted human safety perception. Using the Model Predictive Path Integral (MPPI) framework, the algorithm can execute in real time, responding to measurements of the human's spatial position and changes in the environment. An HMM-based Q-learning algorithm is considered for computing the optimal policy online. HMM-based Q-learning estimates the hidden state of the human during interactions with the robot: its state estimator infers the hidden states from past observations and actions. The convergence of HMM-based Q-learning for a partially observable Markov decision process (POMDP) with finite state space is proved using a stochastic approximation technique. A future research direction is to use recurrent neural networks to estimate the hidden state in continuous state spaces. The convergence analysis of the HMM-based Q-learning algorithm suggests that training of such a recurrent neural network needs to consider both state estimation accuracy and the optimality principle.
- Graduation Semester
- 2019-05
- Type of Resource
- text
- Copyright and License Information
- Copyright 2019 Hyung Jin Yoon
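The abstract mentions trajectories parameterized in Bernstein polynomials. As a minimal sketch of that idea (not the dissertation's implementation), the code below evaluates a planar Bézier curve, i.e. a weighted sum of control points with Bernstein basis weights; the function name `bernstein_trajectory` and the sample control points are hypothetical, chosen only for illustration:

```python
from math import comb

def bernstein_trajectory(control_points, t):
    """Evaluate a 2-D Bezier curve (Bernstein polynomial basis) at t in [0, 1].

    control_points: list of (x, y) tuples defining the curve.
    Returns the (x, y) point on the trajectory at parameter t.
    """
    n = len(control_points) - 1  # polynomial degree
    # Bernstein basis: b_i(t) = C(n, i) * t^i * (1 - t)^(n - i)
    basis = [comb(n, i) * t**i * (1 - t)**(n - i) for i in range(n + 1)]
    x = sum(b * px for b, (px, _) in zip(basis, control_points))
    y = sum(b * py for b, (_, py) in zip(basis, control_points))
    return x, y

# Hypothetical control points; a Bezier curve passes through its endpoints,
# so the trajectory starts at (0, 0) and ends at (4, 0).
pts = [(0.0, 0.0), (1.0, 2.0), (3.0, 3.0), (4.0, 0.0)]
start = bernstein_trajectory(pts, 0.0)   # (0.0, 0.0)
end = bernstein_trajectory(pts, 1.0)     # (4.0, 0.0)
mid = bernstein_trajectory(pts, 0.5)
```

Because the Bernstein basis functions are nonnegative and sum to one, the curve stays inside the convex hull of the control points, which is one reason this parameterization is convenient for enforcing spatial constraints in trajectory optimization.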
Owning Collections
Graduate Dissertations and Theses at Illinois PRIMARY