Integration of real-time speech recognition and action in humanoid robots
Yun, Katherine
Permalink
https://hdl.handle.net/2142/104052
Description
Title
Integration of real-time speech recognition and action in humanoid robots
Author(s)
Yun, Katherine
Contributor(s)
Levinson, Stephen E.
Issue Date
2019-05
Keyword(s)
humanoid robot
speech recognition
iCub motor control
face recognition
Abstract
Human speech and visual data are two crucial channels of communication that help people interact with their surrounding environment. Thus, both speech and visual inputs are essential and should contribute to a robot's actions if the robot is to serve as a cognitive tool. Speech recognition and face recognition are two demanding areas of research: they represent two means by which intelligent behaviors can be expressed. In this thesis, we investigate whether a robot can integrate visual and speech information to make decisions and act accordingly. The iCub robot listens to real-time human speech and points its finger at a person's face in an image as dictated by the user. In the following sections, we discuss our methods, experimental results, and future work.
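The pipeline the abstract describes (real-time speech recognition, face recognition on an image, then a pointing action) can be sketched roughly as follows. This is a minimal illustrative sketch, not the thesis's actual implementation: every class and function name here is a hypothetical stand-in (the real iCub is typically driven through middleware such as YARP, which is omitted here).

```python
# Hypothetical sketch of the speech -> vision -> action pipeline.
# All names below are illustrative assumptions, not the thesis's API.

from dataclasses import dataclass
from typing import Optional, List


@dataclass
class Face:
    """A recognized face: the person's name and its pixel location."""
    name: str
    x: int
    y: int


def recognize_speech(audio_command: str) -> str:
    """Stand-in for a real-time speech recognizer: extract the requested name.

    e.g. "Point at Alice" -> "alice"
    """
    return audio_command.lower().replace("point at", "").strip()


def find_face(name: str, faces: List[Face]) -> Optional[Face]:
    """Stand-in for face recognition: look up the named person in the image."""
    for face in faces:
        if face.name == name:
            return face
    return None


def point_at(face: Face) -> str:
    """Stand-in for iCub motor control: command the arm toward (x, y)."""
    return f"pointing at ({face.x}, {face.y})"


def handle_command(audio_command: str, faces: List[Face]) -> str:
    """Integrate speech and vision: hear a name, find the face, point at it."""
    name = recognize_speech(audio_command)
    face = find_face(name, faces)
    return point_at(face) if face else "person not found"
```

For example, given two recognized faces, `handle_command("Point at Alice", faces)` would resolve the spoken name against the face list and issue a pointing command toward that face's image coordinates.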