Lifted Inference for Relational Hybrid Models
Choi, Jaesik
Permalink
https://hdl.handle.net/2142/32004
Description
- Title
- Lifted Inference for Relational Hybrid Models
- Author(s)
- Choi, Jaesik
- Issue Date
- 2012-06-27
- Director of Research (if dissertation) or Advisor (if thesis)
- Amir, Eyal
- Doctoral Committee Chair(s)
- Amir, Eyal
- Committee Member(s)
- Roth, Dan
- LaValle, Steven M.
- Poole, David
- Department of Study
- Computer Science
- Discipline
- Computer Science
- Degree Granting Institution
- University of Illinois at Urbana-Champaign
- Degree Name
- Ph.D.
- Degree Level
- Dissertation
- Keyword(s)
- Probabilistic Graphical Models
- Relational Hybrid Models
- Lifted Inference
- First-Order Probabilistic Models
- Probabilistic Logic
- Kalman filter
- Relational Kalman filter
- Variational Learning
- Markov Logic Networks
- Abstract
- Probabilistic Graphical Models (PGMs) promise to play a prominent role in many complex real-world systems. Probabilistic Relational Graphical Models (PRGMs) scale up the representation and learning of PGMs. Answering questions with PRGMs enables many current and future applications, such as medical informatics, environmental engineering, financial forecasting, and robot localization. Scaling inference algorithms to large models is a key challenge for scaling up current applications and enabling future ones. This thesis presents new insights into large-scale probabilistic graphical models. It provides fresh ideas for maintaining a compact structure when answering queries about large, continuous models. These insights lead to a key contribution, the Lifted Relational Kalman filter (LRKF), an efficient estimation algorithm for large-scale linear dynamic systems. The new relational Kalman filter scales exact Kalman filtering from 1,000 to 1,000,000,000 variables. Another key contribution of this thesis is a proof that commonly used probabilistic first-order languages, including Markov Logic Networks (MLNs) and First-Order Probabilistic Models (FOPMs), can be reduced to compact probabilistic graphical representations under reasonable conditions. Specifically, this thesis shows that aggregate operators and existential quantification in these languages are accurately approximated by linear constraints in the Gaussian distribution. In general, probabilistic first-order languages are transformed into nonparametric variational models in which lifted inference algorithms can efficiently solve inference problems.
- Graduation Semester
- 2012-05
- Permalink
- http://hdl.handle.net/2142/32004
- Copyright and License Information
- Copyright 2012 Jaesik Choi
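For context on the abstract's central contribution: the vanilla Kalman filter it refers to performs a matrix predict/update step whose cost grows cubically in the number of state variables, which is what the thesis's Lifted Relational Kalman filter avoids by exploiting relational structure. A minimal sketch of one vanilla predict/update step is below; the function name `kalman_step` and the NumPy-based formulation are illustrative, not taken from the thesis.

```python
import numpy as np

def kalman_step(mu, Sigma, A, Q, H, R, z):
    """One predict/update step of the vanilla (exact) Kalman filter.

    mu, Sigma : prior state mean and covariance
    A, Q      : linear transition model and its noise covariance
    H, R      : linear observation model and its noise covariance
    z         : observation vector
    """
    # Predict: push the belief through the linear dynamics.
    mu_pred = A @ mu
    Sigma_pred = A @ Sigma @ A.T + Q

    # Update: condition on the observation z.
    S = H @ Sigma_pred @ H.T + R             # innovation covariance
    K = Sigma_pred @ H.T @ np.linalg.inv(S)  # Kalman gain
    mu_new = mu_pred + K @ (z - H @ mu_pred)
    Sigma_new = (np.eye(len(mu)) - K @ H) @ Sigma_pred
    return mu_new, Sigma_new
```

The matrix inverse and products make each step O(n^3) in the number of variables n; the LRKF described in the abstract gains its reported scaling (1,000 to 1,000,000,000 variables) by updating groups of exchangeable relational variables together instead of individually.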
Owning Collections
Dissertations and Theses - Computer Science
Dissertations and Theses from the Dept. of Computer Science
Graduate Dissertations and Theses at Illinois