An Integrated Framework Enhanced With Appearance Model for Facial Motion Modeling, Analysis and Synthesis
Wen, Zhen
Permalink
https://hdl.handle.net/2142/81652
Description
Title
An Integrated Framework Enhanced With Appearance Model for Facial Motion Modeling, Analysis and Synthesis
Author(s)
Wen, Zhen
Issue Date
2004
Doctoral Committee Chair(s)
Huang, Thomas S.
Department of Study
Computer Science
Discipline
Computer Science
Degree Granting Institution
University of Illinois at Urbana-Champaign
Degree Name
Ph.D.
Degree Level
Dissertation
Keyword(s)
Computer Science
Language
eng
Abstract
Human faces provide important cues about human activity, making them useful for human-human communication, human-computer interaction (HCI), and intelligent video surveillance. Computational models for face analysis and synthesis therefore serve both basic research and practical applications. In this dissertation, we present a unified framework for 3D facial motion modeling, analysis, and synthesis. We first derive a compact geometric facial motion model from motion capture data, which is then used for robust 3D non-rigid face tracking and face animation. One limitation of the geometric model is that it cannot capture motion details, which are important for both human perception and computer analysis. We therefore enhance the framework with appearance models. To adapt the appearance model to different illumination conditions and different people, we propose the following methods: (1) modeling illumination effects from a single face image; (2) reducing person dependency using a ratio-image technique; and (3) transforming the appearance model online during tracking. We demonstrate the efficacy of the framework with experimental results on face recognition, expression recognition, and face synthesis under varying conditions. We also show how the framework can be used in applications such as computer-aided learning and very low bit-rate face video coding.
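As a rough illustration of the first step described in the abstract (deriving a compact geometric facial motion model from motion capture data), the sketch below assumes the model is obtained by principal component analysis of stacked marker displacements. The array shapes, function names, and the PCA formulation are illustrative assumptions for this sketch, not the dissertation's actual implementation.

```python
# Illustrative sketch (not the dissertation's code): deriving a compact
# geometric facial motion model from motion capture data via PCA.
# Assumption: markers are already aligned (rigid head motion removed),
# and the shapes/names below are hypothetical.
import numpy as np

def learn_motion_model(marker_frames, num_components=7):
    """marker_frames: (num_frames, num_markers, 3) array of 3D marker positions."""
    num_frames, num_markers, _ = marker_frames.shape
    X = marker_frames.reshape(num_frames, num_markers * 3)

    # Deviations from the mean (neutral) face shape.
    mean_shape = X.mean(axis=0)
    D = X - mean_shape

    # PCA via SVD: rows of Vt are orthonormal motion basis vectors;
    # keeping the top k gives a compact, low-dimensional motion model.
    _, _, Vt = np.linalg.svd(D, full_matrices=False)
    basis = Vt[:num_components]          # (k, num_markers * 3)
    return mean_shape, basis

def synthesize(mean_shape, basis, coeffs):
    """Reconstruct a deformed marker configuration from k motion coefficients."""
    flat = mean_shape + coeffs @ basis
    return flat.reshape(-1, 3)

# Usage with random stand-in data (real input would be captured marker tracks):
frames = np.random.rand(500, 60, 3)
mean_shape, basis = learn_motion_model(frames, num_components=7)
deformed = synthesize(mean_shape, basis, np.zeros(7))
```

In such a setup, tracking estimates the small coefficient vector per frame rather than every marker position, which is what makes the geometric model compact; the dissertation's appearance models then account for the motion details this low-dimensional representation cannot capture.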