Distribution-free uncertainty quantification for deep learning
Lin, Zhen
Permalink
https://hdl.handle.net/2142/124231
Description
- Title
- Distribution-free uncertainty quantification for deep learning
- Author(s)
- Lin, Zhen
- Issue Date
- 2024-04-08
- Director of Research (if dissertation) or Advisor (if thesis)
- Sun, Jimeng
- Doctoral Committee Chair(s)
- Sun, Jimeng
- Committee Member(s)
- Rehg, James M
- Tong, Hanghang
- Romano, Yaniv
- Department of Study
- Computer Science
- Discipline
- Computer Science
- Degree Granting Institution
- University of Illinois at Urbana-Champaign
- Degree Name
- Ph.D.
- Degree Level
- Dissertation
- Keyword(s)
- Uncertainty Quantification, Deep Learning, Machine Learning, Conformal Prediction, Prediction Set, Neural Networks, Cross-Sectional Time-Series, Healthcare Applications
- Abstract
- The integration of sophisticated deep learning models into critical domains such as healthcare, autonomous vehicles, and the legal system is accelerating. These models offer significant potential for improving outcomes, but their adoption raises crucial challenges related to model confidence, uncertainty communication, and risk management. Uncertainty Quantification (UQ) addresses these issues by providing a systematic way to assess and act on the reliability of model predictions. This thesis focuses on distribution-free UQ methods, which require minimal assumptions about the data distribution and model specifics, making them broadly applicable across deep learning applications. We first investigate full calibration for deep learning classifiers, aiming to correct the over-confidence or under-confidence typically observed in such classifiers. We then discuss methods to convert a model's output into actionable uncertainty information, expressed as prediction intervals and prediction sets. In particular, we focus on constructing prediction intervals and sets with rigorous coverage or risk-control guarantees, typically provided by conformal prediction tools. Finally, we explore UQ for natural language generation (NLG), applying a graph-based approach built on the similarity graph of multiple sampled responses. By exploring model calibration, risk-controlling prediction sets, conformal prediction intervals, and UQ for NLG, this thesis aims to advance the understanding and implementation of UQ in high-stakes decision-making environments.
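To illustrate the distribution-free coverage guarantee the abstract refers to, here is a minimal split conformal prediction sketch. It is not taken from the thesis: the synthetic data, the least-squares point predictor, and the miscoverage level alpha = 0.1 are all illustrative assumptions. The guarantee itself (marginal coverage at least 1 - alpha under exchangeability) is standard conformal prediction.

```python
import numpy as np

# Illustrative sketch: split conformal prediction for regression.
rng = np.random.default_rng(0)

# Synthetic data: y = x + standard normal noise (an assumption for the demo)
x = rng.uniform(0, 10, size=2000)
y = x + rng.normal(0, 1, size=2000)

# Split into a proper training set and a calibration set
x_tr, y_tr = x[:1000], y[:1000]
x_cal, y_cal = x[1000:], y[1000:]

# Any point predictor works; here a simple least-squares line
a, b = np.polyfit(x_tr, y_tr, deg=1)
predict = lambda t: a * t + b

# Conformity scores: absolute residuals on the held-out calibration set
scores = np.abs(y_cal - predict(x_cal))

# Calibration quantile with the finite-sample correction (n + 1 in the numerator)
alpha = 0.1
n = len(scores)
q = np.quantile(scores, np.ceil((n + 1) * (1 - alpha)) / n, method="higher")

# Prediction interval for a new point: [f(x_new) - q, f(x_new) + q]
x_new = 5.0
interval = (predict(x_new) - q, predict(x_new) + q)
```

Under exchangeability of calibration and test points, the interval covers the true response with probability at least 1 - alpha, regardless of the data distribution or the predictor used.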
- Graduation Semester
- 2024-05
- Type of Resource
- Thesis
- Copyright and License Information
- Copyright 2024 Zhen Lin
Owning Collections
Graduate Dissertations and Theses at Illinois PRIMARY