GS*. An Adaptive Bias Framework for Classification Algorithms
Uhrik, Carl Thomas
Permalink
https://hdl.handle.net/2142/72097
Description
Title
GS*. An Adaptive Bias Framework for Classification Algorithms
Author(s)
Uhrik, Carl Thomas
Issue Date
1993
Doctoral Committee Chair(s)
Baskin, A.
Department of Study
Computer Science
Discipline
Computer Science
Degree Granting Institution
University of Illinois at Urbana-Champaign
Degree Name
Ph.D.
Degree Level
Dissertation
Keyword(s)
Computer Science
Abstract
This thesis addresses dynamically adaptive bias in an algorithm for deriving classification rules from examples. Whereas prior studies examined either the early setting of "global" biases for a specific problem taken as a whole (which learning method/algorithm is most appropriate to finding a "cover" for a particular training set) or the setting of localized parameters as an algorithm proceeds (e.g., adjusting weights on rules), this work takes a different approach. First, a generalized framework for SBL classification algorithms is proposed. This allows the existing biases of several algorithms to be unified and consolidated under one roof, with the original algorithms corresponding to specific settings of "bias switches". Thus the meta-algorithm spans existing biases, but still allows a user to assert specific preferences. Second, heuristics are added to the framework to adjust the biases according to their progress in solving the learning problem at hand. Third, problems are broken into subproblems in which the prevailing biases are allowed to differ. This permits a higher degree of structure in a solution than was previously possible, as well as promising more efficiency on problems that can be viewed as a composition of subproblems. Yet it is more than a matter of pasting together previous learning algorithms. In order to identify that structure, care must be taken to isolate the learning subproblems--for example, to ensure that the quasi-optimal quantization of numerical values for one subproblem does not obscure the pattern of values present in another subproblem. This particular difficulty is handled through a flexible value aggregation scheme that is an integral part of the framework mentioned above.
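To make the "bias switch" idea concrete, the following is a minimal, hypothetical sketch (not the dissertation's actual code) of a covering-style rule learner whose search is governed by explicit bias settings, with a simple heuristic that revises those settings when progress stalls. All names, parameters, and thresholds here are illustrative assumptions, not details taken from GS*.

from dataclasses import dataclass

@dataclass
class BiasSwitches:
    beam_width: int = 5          # breadth of the specialization search
    min_rule_coverage: int = 3   # smallest acceptable rule (noise tolerance)
    numeric_bins: int = 4        # granularity of numeric value aggregation

def adjust_biases(biases: BiasSwitches, uncovered: int, last_uncovered: int) -> BiasSwitches:
    """Illustrative heuristic bias adaptation: if the last rule made little
    progress, widen the search and relax the coverage requirement."""
    if last_uncovered - uncovered < biases.min_rule_coverage:
        biases.beam_width += 2
        biases.min_rule_coverage = max(1, biases.min_rule_coverage - 1)
    return biases

def learn_rules(examples, find_best_rule):
    """Covering loop: repeatedly induce a rule under the current biases,
    remove the examples it covers, and let the heuristic revise the biases.
    find_best_rule is a caller-supplied, bias-dependent search procedure."""
    biases = BiasSwitches()
    rules, remaining = [], list(examples)
    while remaining:
        before = len(remaining)
        rule, covered = find_best_rule(remaining, biases)
        if not covered:
            break
        rules.append(rule)
        remaining = [e for e in remaining if e not in covered]
        biases = adjust_biases(biases, len(remaining), before)
    return rules

In this sketch, fixing the BiasSwitches values and disabling adjust_biases would correspond to running a single conventional algorithm; allowing different settings on different subproblems corresponds to the per-subproblem bias described above.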
The experimental agenda includes three sets of studies: two that are more artificial and controlled, and a third that is more realistic. In the first two sets, problems with known structure are synthesized by a problem generator to demonstrate the utility of adapting bias to subproblems. In the real-world set (Sparks, Engine Design, Annealing), there is known to be considerable noise, and dealing with numerical values is a strong consideration. The GS* results for these problems are compared against two standard algorithms (CN2 and NEWID).