An explanation-based learning approach to incremental planning
Chien, Steve Ankuo
Permalink
https://hdl.handle.net/2142/19510
Description
Title
An explanation-based learning approach to incremental planning
Author(s)
Chien, Steve Ankuo
Issue Date
1991
Doctoral Committee Chair(s)
DeJong, Gerald F.
Department of Study
Computer Science
Discipline
Computer Science
Degree Granting Institution
University of Illinois at Urbana-Champaign
Degree Name
Ph.D.
Degree Level
Dissertation
Keyword(s)
Artificial Intelligence
Computer Science
Language
eng
Abstract
Planning is the task of finding a set of operators whose execution transforms the current world state into a world state that satisfies some goal criterion. Because many tasks involve focused change of a world state, planning techniques are relevant to a wide variety of important AI tasks such as automatic programming, process design and control, and manufacturing engineering.
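As a rough illustration only (not drawn from the thesis), this definition of planning can be sketched as a search over operator sequences. The Operator representation and breadth_first_plan function below are hypothetical names for a minimal Python rendering of that definition.

from collections import deque
from dataclasses import dataclass
from typing import Callable, FrozenSet, List, Optional

@dataclass(frozen=True)
class Operator:
    """A STRIPS-style operator: applicable when its preconditions hold in the state."""
    name: str
    preconditions: FrozenSet[str]   # facts that must hold before execution
    add_effects: FrozenSet[str]     # facts made true by execution
    del_effects: FrozenSet[str]     # facts made false by execution

def breadth_first_plan(initial: FrozenSet[str],
                       goal_test: Callable[[FrozenSet[str]], bool],
                       operators: List[Operator]) -> Optional[List[Operator]]:
    """Exhaustive forward search for a sequence of operators whose execution
    transforms the initial state into a state satisfying the goal criterion."""
    frontier = deque([(initial, [])])
    seen = {initial}
    while frontier:
        state, plan = frontier.popleft()
        if goal_test(state):
            return plan
        for op in operators:
            if op.preconditions <= state:
                successor = (state - op.del_effects) | op.add_effects
                if successor not in seen:
                    seen.add(successor)
                    frontier.append((successor, plan + [op]))
    return None   # no operator sequence reaches the goal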
However, when planning in complex, real-world domains, large amounts of knowledge are needed to adequately describe world behavior. With a large domain theory, complete reasoning can become computationally intractable. Consequently, even if a system has a complete and correct domain theory, the computational demands of exhaustive reasoning may prevent successful planning.
This thesis describes incremental reasoning and learning techniques to reduce the cost of planning in computationally intractable domains. In this approach plans are initially constructed using inference limiting simplifications. Because limiting inference implies not exhaustively checking all possible inferences, resulting plans may make incorrect predictions. In order to deal with this difficulty the system uses these incorrect goal predictions to direct a refinement process which expands the limited inference of the initial plan, thus preventing recurrence of the incorrect goal prediction. By using executive feedback to focus attention upon parts of the plan requiring further inference, the system avoids the computationally intractable blind search of potentially relevant inferences required by exhaustive reasoning. The class of limited inference simplifications and refinement techniques described in this thesis have been shown to have the properties of convergence upon soundness (i.e. a plan will eventually be refined to make the same goal predictions as a plan developed using exhaustive reasoning) and completeness (i.e. the simplifications will not cause the planner to overlook any potential solutions considered by an exhaustive planner).
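To make the refinement cycle concrete, the following sketch (again hypothetical, reusing Operator and breadth_first_plan from the sketch above) substitutes one simple inference-limiting simplification, ignoring operators' delete effects while planning, and uses an incorrect goal prediction observed after execution to decide which operators need their full effects restored before replanning. The thesis's actual simplifications and refinement techniques are more general than this stand-in.

def simulate(initial, plan, operators_by_name):
    """Execute the plan under the full, unsimplified world model."""
    state = initial
    for step in plan:
        op = operators_by_name[step.name]
        state = (state - op.del_effects) | op.add_effects
    return state

def relax(op, fully_modeled):
    """Drop delete effects unless this operator has been flagged for full inference."""
    if op.name in fully_modeled:
        return op
    return Operator(op.name, op.preconditions, op.add_effects, frozenset())

def incremental_plan(initial, goal_test, operators):
    operators_by_name = {op.name: op for op in operators}   # names assumed unique
    fully_modeled = set()      # operators whose complete effects are reasoned about
    while True:
        relaxed = [relax(op, fully_modeled) for op in operators]
        plan = breadth_first_plan(initial, goal_test, relaxed)
        if plan is None:
            return None        # no plan exists even under the simplification
        observed = simulate(initial, plan, operators_by_name)
        if goal_test(observed):
            return plan        # the goal prediction was confirmed by execution
        # Incorrect goal prediction: refine by restoring the ignored inferences
        # (delete effects) of the operators involved, then replan.
        fully_modeled.update(step.name for step in plan)

In this toy rendering, each refinement strictly enlarges the set of fully modeled operators, so the loop eventually makes the same goal predictions an exhaustive planner would, loosely mirroring the convergence-upon-soundness and completeness properties claimed for the thesis's techniques.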
This incremental reasoning approach has been validated in two ways. First, a complexity analysis of the computational savings of the incremental reasoning approach has been performed. Second, the approach has been fully implemented, and the implementation has been used to empirically compare the costs of the incremental and exhaustive reasoning approaches in two domains.