Information Fusion for Robust Audio-Visual Speech Recognition
Zhang, You
Permalink
https://hdl.handle.net/2142/81367
Description
Title
Information Fusion for Robust Audio-Visual Speech Recognition
Author(s)
Zhang, You
Issue Date
2000
Doctoral Committee Chair(s)
Huang, Thomas S.
Department of Study
Electrical Engineering
Discipline
Electrical Engineering
Degree Granting Institution
University of Illinois at Urbana-Champaign
Degree Name
Ph.D.
Degree Level
Dissertation
Keyword(s)
Engineering, Electronics and Electrical
Language
eng
Abstract
Computer technologies have improved significantly in both capacity and speed. The human-computer interface still lags behind, in that human-computer interaction lacks the naturalness essential for efficient communication. To make human-computer interaction more natural, novel sensory modalities have been used. Speech, gestures, and emotional states can be detected and understood to some extent. However, integrating these information sources to achieve performance superior to that of any single modality alone remains a challenging problem. Humans naturally and effortlessly perform sensory information fusion. The most important human-to-human communication tool is speech. Automatic speech recognition by machines has achieved very high recognition accuracy on large-vocabulary, speaker-independent tasks. However, in some environments, unexpected sources of noise degrade system performance. We propose a novel integration technique that efficiently incorporates both visual and acoustic speech signals to achieve better speech recognition accuracy than either modality alone. The proposed fusion schemes have been tested in different situations, and the experimental results show consistently improved performance with multimodal fusion.
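The abstract does not specify the fusion scheme, but one common family of approaches it alludes to is decision-level (late) fusion, where separate audio and visual recognizers each score the candidate classes and their scores are combined with a reliability weight. The sketch below is a hypothetical illustration of that general idea, not the dissertation's actual method; the function name, the weight `lam`, and the example scores are all invented for demonstration.

```python
import numpy as np

def late_fusion(audio_loglik, visual_loglik, lam=0.7):
    """Combine per-class log-likelihoods from two recognizers.

    lam weights the audio stream: lam=1.0 trusts audio only,
    lam=0.0 trusts the visual stream only. In noisy acoustic
    conditions one would shrink lam toward the visual side.
    """
    a = np.asarray(audio_loglik, dtype=float)
    v = np.asarray(visual_loglik, dtype=float)
    fused = lam * a + (1.0 - lam) * v  # weighted log-likelihood sum
    return int(np.argmax(fused))       # index of the winning class
```

With an equal weight (`lam=0.5`) a strong visual score can override a weak audio score, whereas a high audio weight (`lam=0.9`) lets the acoustic recognizer dominate, which is the basic trade-off such a weight controls.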