Generalizing the Structure of Explanations in Explanation-Based Learning
Shavlik, Jude William
Permalink: https://hdl.handle.net/2142/69590
Description
Title: Generalizing the Structure of Explanations in Explanation-Based Learning
Author(s): Shavlik, Jude William
Issue Date: 1988
Doctoral Committee Chair(s): DeJong, Gerald F.
Department of Study: Computer Science
Discipline: Computer Science
Degree Granting Institution: University of Illinois at Urbana-Champaign
Degree Name: Ph.D.
Degree Level: Dissertation
Keyword(s): Education, Technology of; Artificial Intelligence; Computer Science
Abstract
Explanation-based learning is a recently developed approach to concept acquisition by computer. In this type of machine learning, a specific problem's solution is generalized into a form that can later be used to solve conceptually similar problems. A number of explanation-based generalization algorithms have been developed. Most do not alter the structure of the explanation of the specific problem: no additional objects or inference rules are incorporated. Instead, these algorithms generalize by converting constants in the observed example into variables with constraints. However, to be properly learned, many important concepts require that the structure of explanations be generalized. This can involve generalizing such things as the number of entities involved in a concept or the number of times some action is performed. For example, concepts such as momentum and energy conservation apply to arbitrary numbers of physical objects, clearing the top of a desk can require an arbitrary number of object relocations, and setting a table can involve an arbitrary number of guests.
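The contrast between the two kinds of generalization can be made concrete with a small sketch. The Python below is an assumed illustration, not drawn from the dissertation; the predicate names and the generalize_constants helper are hypothetical. It shows standard constant-to-variable generalization, which keeps the explanation's structure fixed and therefore cannot extend a two-step desk-clearing explanation to an arbitrary number of objects.

    # A minimal sketch (assumed, not from the dissertation) of standard
    # explanation-based generalization: replace each distinct constant in a
    # ground explanation with a fresh variable, leaving the structure alone.
    from itertools import count

    def generalize_constants(explanation):
        """Turn constants into variables; the number of steps stays fixed."""
        fresh = (f"?x{i}" for i in count(1))
        mapping = {}
        generalized = []
        for predicate, *args in explanation:
            new_args = []
            for a in args:
                if a not in mapping:
                    mapping[a] = next(fresh)
                new_args.append(mapping[a])
            generalized.append((predicate, *new_args))
        return generalized

    # Ground explanation of one observed example: two specific blocks cleared.
    observed = [
        ("on", "block1", "desk"),
        ("moved", "block1", "floor"),
        ("on", "block2", "desk"),
        ("moved", "block2", "floor"),
    ]

    # The result still contains exactly two "moved" steps, so it cannot cover
    # a desk holding three or more objects; that requires generalizing the
    # structure of the explanation itself.
    print(generalize_constants(observed))

A structure-generalizing method would instead learn a rule in which the number of move steps is itself a parameter, roughly "for every object on the desk, move it to the floor."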
Two theories of extending explanations during the generalization process have been developed, and computer implementations have been built to test these approaches. The Physics 101 system uses characteristics of mathematically based problem solving to extend mathematical calculations in a psychologically plausible way, while the BAGGER system implements a domain-independent approach to generalizing explanation structures. Both systems are described, and the details of their algorithms are presented. Several examples of learning in each system are discussed. An approach to the operationality/generality trade-off and an empirical analysis of explanation-based learning are also presented. The computer experiments demonstrate the value of generalizing explanation structures in particular, and of explanation-based learning in general. These experiments also demonstrate the advantages of learning by observing the intelligent behavior of external agents. Several open research issues in generalizing the structure of explanations, and related approaches to this problem, are discussed. This research brings explanation-based learning closer to its goal of acquiring the full concept inherent in the solution to a specific problem.
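As a further hedged illustration of why structural generalization matters (again assumed for exposition, not Physics 101's actual representation), conservation of momentum holds for any number of objects, so a rule learned from a two-body collision must be extended beyond its original two terms. The function name and the numeric values below are hypothetical.

    # Total 1-D momentum of an arbitrary number of objects; a rule learned
    # from a two-body example would hard-code exactly two m*v terms.
    def total_momentum(masses, velocities):
        return sum(m * v for m, v in zip(masses, velocities))

    # Hypothetical three-body interaction: momentum is conserved even though
    # the observed training example involved only two bodies.
    before = total_momentum([2.0, 1.0, 3.0], [1.5, -0.5, 0.0])  # = 2.5
    after = total_momentum([2.0, 1.0, 3.0], [0.5, 0.0, 0.5])    # = 2.5
    assert abs(before - after) < 1e-9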