On the Learnability of Disjunctive Normal Form Formulas and Decision Trees
Aizenstein, Howard Jay
Permalink
https://hdl.handle.net/2142/72083
Description
Title
On the Learnability of Disjunctive Normal Form Formulas and Decision Trees
Author(s)
Aizenstein, Howard Jay
Issue Date
1993
Doctoral Committee Chair(s)
Pitt, L.
Department of Study
Computer Science
Discipline
Computer Science
Degree Granting Institution
University of Illinois at Urbana-Champaign
Degree Name
Ph.D.
Degree Level
Dissertation
Keyword(s)
Artificial Intelligence
Computer Science
Abstract
The learnability of disjunctive normal form formulas and decision trees is investigated. Polynomial time algorithms are given, and nonlearnability results are obtained, for restricted versions of these general learning problems.
Polynomial time algorithms are presented for exactly learning (with membership and equivalence queries) read-twice DNF and read-k disjoint DNF. A read-twice DNF formula is a boolean formula in disjunctive normal form where each variable appears at most twice. A read-k disjoint DNF formula f is a DNF formula where each variable appears at most k times (for an arbitrary positive integer k) and every assignment to the variables satisfies at most one term of f. The read-k disjoint DNF result also applies to a generalization of this class, which we call read-k sat-j DNF.
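To make the two structural conditions above concrete, here is a minimal sketch, not taken from the thesis, that assumes a DNF term is represented as a Python dict mapping a variable name to its required truth value and a formula as a list of such terms. It checks the read-k condition and the disjointness (sat-1) condition; for disjointness a pairwise check suffices, since two terms can be satisfied simultaneously exactly when they do not force opposite values on a shared variable.

```python
def is_read_k(dnf, k):
    """True if every variable occurs in at most k terms of the formula."""
    counts = {}
    for term in dnf:
        for var in term:
            counts[var] = counts.get(var, 0) + 1
    return all(c <= k for c in counts.values())

def is_disjoint(dnf):
    """True if no assignment satisfies two distinct terms (the sat-1 case)."""
    for i, s in enumerate(dnf):
        for t in dnf[i + 1:]:
            # The two terms are simultaneously satisfiable iff they agree
            # on every shared variable.
            if all(s[v] == t[v] for v in s.keys() & t.keys()):
                return False
    return True

# Example: f = (x1 and not x2) or (x2 and x3) is read-twice and disjoint.
f = [{"x1": True, "x2": False}, {"x2": True, "x3": True}]
assert is_read_k(f, 2) and is_disjoint(f)
```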
For a similar learning protocol, it is shown that, assuming NP $\neq$ co-NP, there does not exist a polynomial time algorithm for learning read-thrice DNF formulas, that is, boolean formulas in disjunctive normal form where each variable appears at most three times. This result contrasts with our polynomial time algorithm for learning read-twice DNF, and adds evidence to the conjecture that DNF is hard to learn in the membership and equivalence query model. Nonlearnability results are also obtained for the class of read-k decision trees. It is shown that this class is hard to learn in the membership and equivalence query model, provided that the equivalence queries are also required to be read-k decision trees. It is also shown that read-k decision trees are hard to learn in the PAC model (without membership queries).
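The membership and equivalence query protocol referred to above can be pictured with the following toy sketch. The Oracle class, the learn routine, and the brute-force equivalence test are illustrative assumptions for small numbers of variables, not algorithms from the thesis; the learner asks membership queries ("what is f(x)?") and equivalence queries ("is my hypothesis correct, and if not, give a counterexample?") until its hypothesis matches the hidden function.

```python
from itertools import product

class Oracle:
    """Wraps a hidden boolean function f over n variables (hypothetical helper)."""
    def __init__(self, f, n):
        self.f, self.n = f, n

    def membership(self, x):
        return self.f(x)

    def equivalence(self, h):
        # Brute-force comparison; only sensible for illustration on small n.
        for x in product([False, True], repeat=self.n):
            if h(x) != self.f(x):
                return x          # counterexample
        return None               # hypothesis is exactly correct

def learn(oracle):
    """Toy learner that memorizes positive counterexamples; it stands in for
    the structured query-driven algorithms described in the abstract."""
    positives = set()
    while True:
        h = lambda x, P=frozenset(positives): x in P
        cex = oracle.equivalence(h)
        if cex is None:
            return h
        # Every counterexample must be a positive point of f, since h labels
        # only stored positives True; confirm it with a membership query.
        assert oracle.membership(cex)
        positives.add(cex)

# Example: exactly learn a hidden read-twice DNF over three variables.
hidden = lambda x: (x[0] and not x[1]) or (x[1] and x[2])
h = learn(Oracle(hidden, 3))
assert all(h(x) == hidden(x) for x in product([False, True], repeat=3))
```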
A different type of nonlearnability result is obtained for the class of arbitrary DNF formulas. A natural approach for learning DNF formulas (suggested by Valiant in a seminal paper of learning theory) is to greedily collect the prime implicants of the hidden function. We show that no algorithm using such an approach can learn DNF in polynomial time. Results suggesting that DNF formulas are hard to learn rely on the construction of rare, hard-to-learn formulas. This raises the question of whether most DNF formulas are learnable. For certain natural definitions of "most DNF formulas," this question is answered affirmatively.
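For readers unfamiliar with the terminology, the following sketch illustrates the implicant and prime-implicant notions behind the greedy approach mentioned above. It assumes the hidden function is available as a full Python predicate (the thesis instead works with queries and examples) and is not a reconstruction of any algorithm from the thesis: a term is an implicant if every assignment consistent with it satisfies the function, and it is prime if no literal can be dropped without losing that property.

```python
from itertools import product

def is_implicant(term, f, n):
    """term: dict variable index -> required truth value.  True if every
    assignment consistent with the term satisfies f."""
    free = [i for i in range(n) if i not in term]
    for bits in product([False, True], repeat=len(free)):
        x = [None] * n
        for i, b in term.items():
            x[i] = b
        for i, b in zip(free, bits):
            x[i] = b
        if not f(tuple(x)):
            return False
    return True

def is_prime_implicant(term, f, n):
    """An implicant is prime if dropping any literal breaks the implication."""
    if not is_implicant(term, f, n):
        return False
    return all(
        not is_implicant({i: b for i, b in term.items() if i != j}, f, n)
        for j in term
    )

# Example: for f = (x0 and x1) or x2, the term {0: True, 1: True} is a prime
# implicant, while {0: True, 1: True, 2: False} is an implicant but not prime.
f = lambda x: (x[0] and x[1]) or x[2]
assert is_prime_implicant({0: True, 1: True}, f, 3)
assert is_implicant({0: True, 1: True, 2: False}, f, 3)
assert not is_prime_implicant({0: True, 1: True, 2: False}, f, 3)
```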