Learning from videos with deep convolutional LSTM networks
Courtney, Logan
Permalink
https://hdl.handle.net/2142/109369
Description
- Title
- Learning from videos with deep convolutional LSTM networks
- Author(s)
- Courtney, Logan
- Issue Date
- 2020-11-23
- Director of Research (if dissertation) or Advisor (if thesis)
- Sreenivas, Ramavarapu
- Doctoral Committee Chair(s)
- Sreenivas, Ramavarapu
- Committee Member(s)
- Sirignano, Justin
- Hasegawa-Johnson, Mark
- Beck, Carolyn
- Department of Study
- Industrial & Enterprise Systems Engineering
- Discipline
- Systems & Entrepreneurial Engineering
- Degree Granting Institution
- University of Illinois at Urbana-Champaign
- Degree Name
- Ph.D.
- Degree Level
- Dissertation
- Keyword(s)
- Convolutional LSTM
- Convolutional neural network
- Recurrent neural network
- Deep learning
- Computer vision
- Receptive field
- Artificial Intelligence
- Machine Learning
- Abstract
- Many methods for learning from video sequences involve temporally processing 2D CNN features from the individual frames or directly utilizing 3D convolutions within high-performing 2D CNN architectures. The focus typically remains on how to incorporate temporal processing within an already stable spatial architecture. This research explores the use of convolutional LSTMs to simultaneously learn spatial and temporal information in videos. A deep network of convolutional LSTMs allows the model to access the entire range of temporal information at all spatial scales of the data. This work first constructs an MNIST-based video dataset with parameters controlling relevant facets of common video-related tasks: classification, ordering, and speed estimation. Models trained on this dataset are shown to differ in key ways depending on the task and on their use of 2D convolutions, 3D convolutions, or convolutional LSTMs. An empirical analysis indicates a complex, interdependent relationship between the spatial and temporal dimensions, with design choices having a large impact on a network's ability to learn the appropriate spatiotemporal features. In addition, experiments involving convolutional LSTMs for action recognition and lipreading demonstrate that the model is capable of selectively choosing which spatiotemporal scales are most relevant for a particular dataset. The proposed deep architecture also holds promise in other applications where spatiotemporal features play a vital role, without the network design having to be tailored to the particular spatiotemporal features present in the problem. Our model's performance is comparable with the current state of the art, achieving 83.4% on the Lip Reading in the Wild (LRW) dataset. Additional experiments indicate convolutional LSTMs may be particularly data hungry, given the large performance increases obtained by fine-tuning on LRW after pretraining on larger datasets such as LRS2 (85.2%) and LRS3-TED (87.1%).
However, a sensitivity analysis providing insight into the relevant spatiotemporal features allows certain convolutional LSTM layers to be replaced with 2D convolutions, decreasing computational cost without degrading performance and indicating the analysis's usefulness in accelerating the architecture design process when approaching new problems.
- Graduation Semester
- 2020-12
- Type of Resource
- Thesis
- Copyright and License Information
- Copyright 2020 Logan Courtney
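The convolutional LSTM cell central to the abstract (gates computed by convolutions over the concatenated input and hidden state, so recurrence preserves spatial structure) can be sketched as below. This is a generic NumPy illustration under assumed names (`ConvLSTMCell`, `conv2d_same`), not the dissertation's actual implementation:

```python
import numpy as np

def conv2d_same(x, k):
    """Naive 'same'-padded 2D convolution.
    x: (C_in, H, W); k: (C_out, C_in, kH, kW) -> (C_out, H, W)."""
    C_out, C_in, kH, kW = k.shape
    ph, pw = kH // 2, kW // 2
    xp = np.pad(x, ((0, 0), (ph, ph), (pw, pw)))
    H, W = x.shape[1], x.shape[2]
    out = np.zeros((C_out, H, W))
    for o in range(C_out):
        for i in range(H):
            for j in range(W):
                out[o, i, j] = np.sum(xp[:, i:i + kH, j:j + kW] * k[o])
    return out

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class ConvLSTMCell:
    """One convolutional LSTM cell: the input, forget, output, and candidate
    gates come from a single convolution over [x; h], so the recurrent state
    keeps its spatial layout instead of being flattened as in a plain LSTM."""
    def __init__(self, in_ch, hid_ch, ksize=3, seed=0):
        rng = np.random.default_rng(seed)
        # One kernel stack emits all four gates (i, f, o, g) at once.
        self.W = rng.normal(0.0, 0.1, (4 * hid_ch, in_ch + hid_ch, ksize, ksize))
        self.hid_ch = hid_ch

    def step(self, x, h, c):
        """One time step. x: (in_ch, H, W); h, c: (hid_ch, H, W)."""
        z = conv2d_same(np.concatenate([x, h], axis=0), self.W)
        i, f, o, g = np.split(z, 4, axis=0)
        c_new = sigmoid(f) * c + sigmoid(i) * np.tanh(g)  # gated cell update
        h_new = sigmoid(o) * np.tanh(c_new)               # spatial hidden map
        return h_new, c_new
```

Stacking several such cells, each consuming the hidden maps of the cell below at every frame, yields the kind of deep convolutional LSTM the abstract describes, with temporal recurrence available at every spatial scale.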
Owning Collections
Graduate Dissertations and Theses at Illinois (PRIMARY)
Graduate Theses and Dissertations at Illinois