Supporting word learning with language-internal distributional statistics: A place for the recurrent neural network language model?
Huebner, Philip A.
Permalink
https://hdl.handle.net/2142/115549
Description
Issue Date
2022-04-17
Director of Research (if dissertation) or Advisor (if thesis)
Willits, Jon A
Doctoral Committee Chair(s)
Willits, Jon A
Committee Member(s)
Hummel, John
Dell, Gary
Fisher, Cynthia
Benjamin, Aaron
Department of Study
Psychology
Discipline
Psychology
Degree Granting Institution
University of Illinois at Urbana-Champaign
Degree Name
Ph.D.
Degree Level
Dissertation
Keyword(s)
recurrent neural network
language acquisition
distributional semantics
semantic category
word learning
learning dynamics
incremental learning
child-directed language
CHILDES
psychology
psycholinguistics
Abstract
Prior work has demonstrated that statistical dependencies between words in language input can be used to construct word clusters broadly conforming to lexical classes in adult language (Cartwright & Brent, 1997; Elman, 1990; Mintz, 2003; Redington et al., 1998), and that children use this information to guide inferences during word learning in the absence of perceptual information (Lany & Gómez, 2008; Lany & Saffran, 2011; Wojcik & Saffran, 2015). Building on these insights, this thesis examines whether the simple Recurrent Neural Network (simple RNN) could be used to model children’s acquisition of form-based lexical semantic category knowledge and whether this knowledge could be used to help children infer category-associated features of novel words. In order to determine the feasibility of the RNN as a cognitive model of this procedure, I discuss several desiderata concerning how corpus-derived distributional semantic statistics should be encoded and accessed in the network, and undertake comprehensive simulations that address basic questions concerning the mechanism by which the RNN acquires lexical semantic category knowledge, and how learned representations are influenced by the statistical properties of the input. In particular, I show that the construction of form-based lexical semantic representations by the simple RNN is extremely vulnerable to a particular kind of redundancy, which occurs when an item in the left context can be reliably used to predict an item in the right context of a target word. This co-occurrence pattern in the data allows the RNN to ‘ignore’ the intervening target word, yielding semantically impoverished representations that are less useful for guiding children’s inferences during word learning. In order to better understand and overcome this limitation, I developed a theory that formalizes how the training data, learning dynamics, and training strategy conspire to shape lexical semantic representations in the RNN.
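The simple RNN referred to above is the Elman-style network, which updates a recurrent hidden state one word at a time and predicts a distribution over the next word. A minimal sketch of that recurrence is given below; the toy vocabulary, layer sizes, and random initialization are illustrative assumptions, not the thesis's actual training setup.

```python
import numpy as np

# Minimal sketch of an Elman-style simple RNN language model.
# Vocabulary, hidden size H, and weights are hypothetical.
rng = np.random.default_rng(0)

vocab = ["the", "cat", "chased", "dog", "."]
V, H = len(vocab), 8  # vocabulary size and hidden size (assumed)

W_xh = rng.normal(0, 0.1, (V, H))  # input-to-hidden weights
W_hh = rng.normal(0, 0.1, (H, H))  # recurrent hidden-to-hidden weights
W_hy = rng.normal(0, 0.1, (H, V))  # hidden-to-output weights

def step(h_prev, token_id):
    """One recurrent step: update the hidden state and predict the next word."""
    x = np.zeros(V)
    x[token_id] = 1.0                            # one-hot input word
    h = np.tanh(x @ W_xh + h_prev @ W_hh)        # Elman recurrence
    logits = h @ W_hy
    p = np.exp(logits - logits.max())
    p /= p.sum()                                 # softmax over the next word
    return h, p

# Feed a short left context through the network, one word at a time.
h = np.zeros(H)
for tok in ["the", "cat", "chased"]:
    h, p = step(h, vocab.index(tok))

# p is now a probability distribution over which word comes next.
```

The redundancy discussed above arises in exactly this setting: if the left-context words already predict the right-context word well, gradient descent can drive the prediction error down without the hidden state encoding much about the intervening target word.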
Semantic Property Inheritance (SPIN) theory makes recommendations for how to choose training data that maximizes the acquisition of statically accessible lexical semantic category knowledge. In particular, the theory is concerned with atomicity, which requires that the (distributional) semantic properties of a target word be encoded in the representation of the target word itself, as opposed to the representations of other words that occur in the same sentence. Further, SPIN theory predicts that training the RNN on child-directed transcribed speech ordered by the age of the target child yields more atomic lexical semantic representations for nouns than training in reverse order. I test and confirm this prediction, and discuss the implications of this finding for child language acquisition, for the importance of studying model learning dynamics and model-data interactions, and for the gradual refinement of learned representations over the course of training on non-stationary data.
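The age-ordered training regimen tested above can be sketched as a simple data-preparation step: sort transcripts by the target child's age before presenting them to the model, with the reversed order as the comparison condition. The transcript records below are hypothetical stand-ins for CHILDES data, not the thesis's actual corpus or preprocessing code.

```python
# Hypothetical sketch of the age-ordered vs. reverse-ordered training
# conditions. Each record pairs a target-child age with a transcript.
transcripts = [
    {"age_months": 30, "text": "where did the ball go"},
    {"age_months": 12, "text": "look at the ball"},
    {"age_months": 24, "text": "the ball rolled away"},
]

# Age-ordered condition: youngest target child first.
age_ordered = sorted(transcripts, key=lambda t: t["age_months"])

# Reverse-ordered comparison condition.
reverse_ordered = age_ordered[::-1]

print([t["age_months"] for t in age_ordered])      # → [12, 24, 30]
print([t["age_months"] for t in reverse_ordered])  # → [30, 24, 12]
```

Because the corpus is non-stationary (child-directed speech changes in character as the child ages), the order of presentation changes what the network learns early and how those early representations are refined later in training.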