Modelling functions from sample data with classification applications
Saarinen, Sirpa Helena
Permalink
https://hdl.handle.net/2142/19668
Description
Title
Modelling functions from sample data with classification applications
Author(s)
Saarinen, Sirpa Helena
Issue Date
1994
Department of Study
Computer Science
Discipline
Computer Science
Degree Granting Institution
University of Illinois at Urbana-Champaign
Degree Name
Ph.D.
Degree Level
Dissertation
Keyword(s)
Artificial Intelligence
Computer Science
Language
eng
Abstract
In this thesis we investigate various aspects of the pattern recognition problem solving process. Pattern recognition can be viewed as a decision-making process in which the underlying density functions or discriminant functions of the application must be estimated, often in a high-dimensional space. We consider two main types of estimators: the feed-forward neural network and the nearest neighbor method.
In the first part of the thesis we investigate the optimization problem that is solved when feed-forward neural networks are used for function approximation. We find that the feed-forward neural network optimization problem is highly ill-conditioned and that this ill-conditioning can severely affect the solution process. We also show how the feed-forward neural network function and its gradient can be implemented using automatic differentiation techniques.
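As a rough illustration of the gradient-by-automatic-differentiation idea mentioned above (not the author's implementation, which predates modern autodiff libraries), the sketch below uses JAX to evaluate a one-hidden-layer feed-forward network and obtain the gradient of a squared-error loss with respect to all of its weights. The architecture, tanh activation, layer sizes, and loss are illustrative assumptions.

# Minimal sketch (not the thesis code): reverse-mode automatic differentiation
# of a one-hidden-layer feed-forward network using JAX. Architecture,
# activation, and squared-error loss are illustrative assumptions.
import jax
import jax.numpy as jnp

def forward(params, x):
    """Feed-forward pass: tanh hidden layer, linear output layer."""
    h = jnp.tanh(params["W1"] @ x + params["b1"])
    return params["W2"] @ h + params["b2"]

def loss(params, x, y):
    """Squared-error loss for a single training pair."""
    return jnp.sum((forward(params, x) - y) ** 2)

key = jax.random.PRNGKey(0)
params = {
    "W1": jax.random.normal(key, (5, 3)),
    "b1": jnp.zeros(5),
    "W2": jax.random.normal(key, (2, 5)),
    "b2": jnp.zeros(2),
}
x, y = jnp.ones(3), jnp.zeros(2)

# jax.grad applies automatic differentiation to the whole computation,
# returning the gradient of the loss with respect to every parameter array.
grads = jax.grad(loss)(params, x, y)
print({k: v.shape for k, v in grads.items()})

The same gradient could of course be derived and coded by hand; the point of automatic differentiation is that the derivative code is generated mechanically from the function definition itself.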
The second part of the thesis is concerned with the nearest neighbor method. We present two new continuous, supervised learning methods: a novel memory-based learning technique and an approximate nearest neighbor method, both of which approximate the convergence properties of the nearest neighbor method. These methods can be used in continuous learning domains such as speech, handwriting, and financial applications. We also present a fast approximate search method for high-dimensional spaces that is based on the k-d tree. A lower bound on the performance of the method is derived, and results are shown for a uniform distribution. An application of this method to a speech data set shows very promising results.
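To illustrate the kind of k-d-tree-based approximate nearest-neighbor search the abstract refers to (this is a generic stand-in, not the thesis's own algorithm or bound), the sketch below uses SciPy's cKDTree, whose eps parameter allows the returned neighbor to be within a factor (1 + eps) of the true nearest distance. The data set, dimensionality, and eps value are made up for illustration.

# Minimal sketch (not the thesis method): approximate nearest-neighbor
# queries on a k-d tree via SciPy's cKDTree. With eps > 0 the returned
# neighbor is guaranteed to be no farther than (1 + eps) times the true
# nearest distance, trading a bounded loss of accuracy for faster search.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
train = rng.random((10_000, 20))          # 10k points in a 20-dimensional space
labels = rng.integers(0, 3, size=10_000)  # three illustrative class labels
tree = cKDTree(train)

query = rng.random((5, 20))
# k=1 exact search vs. approximate search with slack eps = 0.5
_, exact_idx = tree.query(query, k=1)
_, approx_idx = tree.query(query, k=1, eps=0.5)

print("exact-NN labels :", labels[exact_idx])
print("approx-NN labels:", labels[approx_idx])

In a nearest-neighbor classifier the predicted class is simply the label of the returned neighbor, so the approximation affects accuracy only when the approximate neighbor carries a different label than the exact one.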