Language acquisition and object recognition with Bert
Lin, Yuguang
Permalink
https://hdl.handle.net/2142/100022
Description
Title
Language acquisition and object recognition with Bert
Author(s)
Lin, Yuguang
Contributor(s)
Levinson, Stephen
Issue Date
2018-05
Keyword(s)
Artificial Intelligence
Machine Learning
Gaussian Mixture Model
K-means
Hidden Markov Model
Abstract
Recent advances in the broad field of artificial intelligence (AI) have brought much excitement and many expectations. However, there is a strong need to understand intelligence itself; through that understanding we can work toward true machine intelligence, one that is not only able to complete certain difficult tasks but also to reason about the world. To study intelligence, we look at ourselves, and especially at infants. At a very young age, we can consciously and easily perform tasks that involve understanding of both language and vision, two of the channels through which we acquire most of our information about the external world.
How do we do so? How do children learn their first language?
In our lab, we believe intelligence and learning should be interactive: we learn from interaction with the real world through our five senses. We also believe that children do not have access to a massive number of well-defined labels. To study this idea of learning from unlabeled data through interaction, for this thesis we implemented a system that enabled a humanoid robot to associate visual information with speech information and to learn to describe a new object using vocabulary acquired during training. Several machine learning models were implemented on the iCub humanoid platform. Specifically, Gaussian Mixture Models and K-means were implemented for the vision part of the experiment, and Hidden Markov Models were used for the speech.
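The abstract names the models but not their mechanics. As a rough illustration only (not the thesis code, whose features, vocabularies, and training data are not specified here), the following NumPy sketch shows the two core computations such a pipeline rests on: K-means clustering, which could quantize visual feature vectors into prototypes, and the scaled forward algorithm, which scores an observation sequence under a discrete Hidden Markov Model.

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Minimal Lloyd's K-means: alternate nearest-center assignment
    and center re-estimation. X is an (n, d) array of feature vectors."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)].astype(float)
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        # assign each point to its nearest center (squared Euclidean distance)
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        # move each center to the mean of its assigned points
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return centers, labels

def hmm_forward(pi, A, B, obs):
    """Log-likelihood of a discrete observation sequence under an HMM,
    using the scaled forward recursion. pi: initial state probabilities,
    A[i, j]: transition i -> j, B[i, o]: emission of symbol o in state i."""
    alpha = pi * B[:, obs[0]]          # unnormalized forward vector at t = 0
    log_likelihood = 0.0
    for o in obs[1:]:
        c = alpha.sum()                # scaling constant; log P accumulates here
        log_likelihood += np.log(c)
        alpha = ((alpha / c) @ A) * B[:, o]
    return log_likelihood + np.log(alpha.sum())
```

A Gaussian Mixture Model refines K-means by fitting soft, covariance-weighted clusters via expectation-maximization rather than hard nearest-center assignments; for speech, the HMM recursion above is what lets the recognizer compare word models by likelihood.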