UNCERTAINTY QUANTIFICATION IN NEURAL NETWORKS FOR REGRESSION
Wang, Lilian
Permalink
https://hdl.handle.net/2142/124799
Description
Title
UNCERTAINTY QUANTIFICATION IN NEURAL NETWORKS FOR REGRESSION
Author(s)
Wang, Lilian
Issue Date
2023-05-01
Keyword(s)
uncertainty quantification; neural network
Abstract
Machine learning models provide a prediction for any given input, but they return an output regardless of how certain the model is in that prediction. It is important to know when a model is merely guessing, so that other methods, such as a human expert, can be used to confirm or modify its output. Uncertainty quantification introduces uncertainty into a machine learning model in order to estimate how uncertain each of its predictions is. If an input data point comes from a distribution that is very different from the training data, the model should report high uncertainty. In this thesis, we introduce uncertainty into the model using Monte Carlo dropout and quantify uncertainty as the variance of the predictions for each data point. We find that as the distribution of the test data points increasingly deviates from that of the training data, the variance of the output also increases.
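The following is a minimal sketch of the idea described in the abstract: keep dropout active at prediction time, run several stochastic forward passes, and take the per-point variance of the outputs as the uncertainty estimate. It assumes PyTorch; the network architecture, dropout rate, number of forward passes, and the function and class names are illustrative choices, not the thesis's actual implementation.

```python
# Monte Carlo dropout for regression: sketch under the assumptions stated above.
import torch
import torch.nn as nn


class DropoutRegressor(nn.Module):
    """Small fully connected regressor with dropout between hidden layers."""

    def __init__(self, in_dim: int, hidden: int = 64, p: float = 0.1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden),
            nn.ReLU(),
            nn.Dropout(p),  # dropout is kept active at prediction time (see below)
            nn.Linear(hidden, hidden),
            nn.ReLU(),
            nn.Dropout(p),
            nn.Linear(hidden, 1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


@torch.no_grad()
def mc_dropout_predict(model: nn.Module, x: torch.Tensor, n_samples: int = 100):
    """Run repeated stochastic forward passes; return per-point mean and variance."""
    model.train()  # train mode keeps dropout sampling masks at inference time
    preds = torch.stack([model(x) for _ in range(n_samples)], dim=0)  # (T, N, 1)
    return preds.mean(dim=0), preds.var(dim=0)  # predictive mean, uncertainty (variance)


if __name__ == "__main__":
    # Hypothetical usage: inputs far from the training range should show larger variance.
    model = DropoutRegressor(in_dim=1)
    x_in = torch.linspace(0.0, 1.0, 50).unsqueeze(1)    # in-distribution-like inputs
    x_out = torch.linspace(5.0, 6.0, 50).unsqueeze(1)   # shifted, out-of-distribution inputs
    _, var_in = mc_dropout_predict(model, x_in)
    _, var_out = mc_dropout_predict(model, x_out)
    print(var_in.mean().item(), var_out.mean().item())
```

The key design choice, consistent with the abstract, is that the dropout layers are not disabled at test time: each forward pass samples a different dropout mask, so the spread of the resulting predictions serves as the uncertainty measure for each data point.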