Resource and data optimization for hardware implementation of deep neural networks targeting FPGA-based edge devices
Liu, Xinheng
Permalink
https://hdl.handle.net/2142/101228
Description
Title
Resource and data optimization for hardware implementation of deep neural networks targeting FPGA-based edge devices
Author(s)
Liu, Xinheng
Issue Date
2018-04-25
Director of Research (if dissertation) or Advisor (if thesis)
Chen, Deming
Department of Study
Electrical & Computer Engineering
Discipline
Electrical & Computer Engineering
Degree Granting Institution
University of Illinois at Urbana-Champaign
Degree Name
M.S.
Degree Level
Thesis
Keyword(s)
FPGA
Convolutional Neural Network
Optimization
Acceleration
High-Level Synthesis
Abstract
Targeting convolutional neural networks (CNNs), we adopt the high-level synthesis (HLS) design methodology and explore various optimization and synthesis techniques to optimize designs on an FPGA. Our motivation is to target embedded devices that operate as edge devices. Recently, as machine learning algorithms have become more practical, there has been much effort to implement them on devices used in daily life. However, unlike servers, edge devices are relatively small and therefore offer far more limited resources and performance. Controlling resource usage and optimizing the design thus play an important role when implementing machine learning algorithms on an edge device. The key idea explored in this thesis is backward pipeline scheduling, which optimizes the pipeline between CNN layers. This technique is especially useful for making the most of the limited on-chip memory when classifying an image on an edge device. We achieve a latency of 175.7 μs for classifying one MNIST image using LeNet and 653.5 μs for classifying one Cifar-10 image using CifarNet, while maintaining high accuracy of 97.6% on the MNIST data set and 83.4% on the Cifar-10 data set. Compared with the NVIDIA Jetson TX1, we achieve the best single-image latency, 5.2x faster for LeNet and 1.95x faster for CifarNet.
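
For illustration only, below is a minimal HLS-style C++ sketch of the general idea of pipelining between CNN layers, in which adjacent layers exchange data through small on-chip FIFO streams instead of full feature-map buffers. It is not the backward pipeline scheduling implementation from the thesis; the layer contents, data type, sizes, and function names (conv_layer, pool_layer, cnn_top) are placeholder assumptions.

// Hypothetical sketch, not the thesis code: two toy "layers" connected by an
// on-chip FIFO inside an HLS DATAFLOW region, so the second layer starts
// consuming data as soon as the first layer produces it.
#include <hls_stream.h>
#include <ap_fixed.h>

typedef ap_fixed<16, 6> pix_t;   // assumed fixed-point pixel type
const int N = 32;                // assumed (toy) line length

// Toy "convolution" layer: 3-tap running sum over a streamed line.
void conv_layer(hls::stream<pix_t> &in, hls::stream<pix_t> &out) {
    pix_t win[3] = {0, 0, 0};
    for (int i = 0; i < N; i++) {
#pragma HLS PIPELINE II=1
        win[2] = win[1];
        win[1] = win[0];
        win[0] = in.read();
        if (i >= 2)
            out.write(win[0] + win[1] + win[2]);
    }
}

// Toy pooling layer: 2:1 max-downsampling of the streamed result.
void pool_layer(hls::stream<pix_t> &in, hls::stream<pix_t> &out) {
    for (int i = 0; i < (N - 2) / 2; i++) {
#pragma HLS PIPELINE II=1
        pix_t a = in.read();
        pix_t b = in.read();
        out.write(a > b ? a : b);
    }
}

// Top level: DATAFLOW lets the two layers execute concurrently as a pipeline,
// with only a small FIFO (not a full feature map) buffered between them.
void cnn_top(hls::stream<pix_t> &img_in, hls::stream<pix_t> &result_out) {
#pragma HLS DATAFLOW
    hls::stream<pix_t> conv_to_pool("conv_to_pool");
#pragma HLS STREAM variable=conv_to_pool depth=16
    conv_layer(img_in, conv_to_pool);
    pool_layer(conv_to_pool, result_out);
}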