Inductive classifier learning from data: An extended Bayesian belief function approach
Ma, Yong
Permalink
https://hdl.handle.net/2142/23508
Description
Title
Inductive classifier learning from data: An extended Bayesian belief function approach
Author(s)
Ma, Yong
Issue Date
1995
Doctoral Committee Chair(s)
Wilkins, David C.
Department of Study
Artificial Intelligence
Computer Science
Discipline
Artificial Intelligence
Computer Science
Degree Granting Institution
University of Illinois at Urbana-Champaign
Degree Name
Ph.D.
Degree Level
Dissertation
Keyword(s)
Artificial Intelligence
Computer Science
Language
eng
Abstract
A central problem in artificial intelligence is reasoning under uncertainty. This thesis views inductive learning as reasoning under uncertainty and develops an Extended Bayesian Belief Function approach that provides a two-layer representation of probabilistic rules: basic probabilistic beliefs and their confidences, which are independent of each other and capture different semantics of the rules. The confidence measure attached to probabilistic rules makes it possible to handle many difficult problems in inductive learning, including noise, missing values, small samples, inter-attribute dependency, and irrelevant or partially relevant attributes, all of which are characteristic of real-world induction tasks.
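As an illustrative sketch only (the class name, fields, and values below are hypothetical and not taken from the dissertation), such a two-layer rule could pair a point probability estimate with an independent confidence in that estimate, in Python:

    from dataclasses import dataclass

    @dataclass
    class TwoLayerRule:
        # A rule carries two independent layers: a basic probabilistic
        # belief (a point estimate of P(consequent | antecedent)) and a
        # confidence in that estimate, driven by factors such as sample
        # size, noise, and attribute relevance.
        antecedent: str    # e.g., "color = red"
        consequent: str    # e.g., "class = positive"
        belief: float      # estimated conditional probability
        confidence: float  # reliability of the estimate, in [0, 1]

    # Two rules with identical beliefs can differ sharply in confidence:
    # one estimated from 5 examples, the other from 500.
    weak = TwoLayerRule("color = red", "class = positive", 0.8, 0.3)
    strong = TwoLayerRule("texture = smooth", "class = positive", 0.8, 0.95)

Keeping the two layers independent is what lets a learner distinguish "probably positive, based on ample evidence" from "probably positive, based on almost none."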
The theoretical framework is based on an uncertainty calculus, Dempster-Shafer theory, which allows an explicit representation of complete or partial lack of knowledge. This explicit representation is used to quantify and discount the effects of unreliable probability estimates caused by noise and small samples, and to account for inter-attribute dependency and for irrelevant or partially relevant attributes. Based on this methodology, a learning system called IUR (Induction of Uncertain Rules), which uses only first-order correlation information, is developed and experimentally shown to outperform the major existing induction systems on many standard test sets.
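The discounting operation the abstract alludes to is a standard part of Dempster-Shafer theory: a discount factor rescales the mass of every focal element and transfers the remainder to the whole frame, i.e., to explicit ignorance. A minimal sketch follows, assuming the confidence measure above plays the role of the discount factor; the frame and numbers are illustrative, not from the thesis:

    def discount(m, alpha, frame):
        # Classical Dempster-Shafer discounting: scale each focal
        # element's mass by alpha and move the remaining (1 - alpha)
        # onto the full frame, representing total ignorance.
        out = {focal: alpha * mass for focal, mass in m.items()}
        out[frame] = out.get(frame, 0.0) + (1.0 - alpha)
        return out

    frame = frozenset({"pos", "neg"})
    m = {frozenset({"pos"}): 0.8, frozenset({"neg"}): 0.2}

    # A low-confidence estimate (e.g., from a small sample) is heavily
    # discounted, so most mass ends up on the frame itself.
    print(discount(m, alpha=0.3, frame=frame))

An estimate discounted this way still commits to its original proportions but no longer claims more certainty than the data supports.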
Future research includes extending IUR to use higher-order correlation information and integrating the Extended Bayesian Belief Function approach with other learning paradigms such as decision trees and neural networks.