Resource-efficient FPGA acceleration for machine learning applications through HLS
Liu, Xinheng
This item is only available for download by members of the University of Illinois community.
Permalink
https://hdl.handle.net/2142/115509
Description
Title
Resource-efficient FPGA acceleration for machine learning applications through HLS
Author(s)
Liu, Xinheng
Issue Date
2022-02-25
Director of Research (if dissertation) or Advisor (if thesis)
Chen, Deming
Doctoral Committee Chair(s)
Chen, Deming
Committee Member(s)
Huang, Jian
Lumetta, Steven
Cheng, Zuofu
Department of Study
Electrical & Computer Eng
Discipline
Electrical & Computer Engr
Degree Granting Institution
University of Illinois at Urbana-Champaign
Degree Name
Ph.D.
Degree Level
Dissertation
Keyword(s)
FPGA
machine learning
HLS
Abstract
The rapid growth of machine learning has demonstrated its capability and effectiveness in handling complicated real-world problems such as computer vision and natural language processing. However, conventional CPU-based implementations cannot deliver sufficient performance for the deep neural networks (DNNs) used in many machine learning applications, due to their intensive computation and memory-bandwidth requirements. As a result, application developers seek other hardware platforms to boost the performance of deep learning workloads. Field-programmable gate arrays (FPGAs), known for their ability to maximize parallelism, their flexibility in exploring different hardware architectures, and their high energy efficiency, have been widely employed to accelerate DNN applications. Meanwhile, the higher productivity and better design-space-exploration features of high-level synthesis (HLS) have won this design methodology wider acceptance for hardware design. In recent years, HLS techniques and design flows have advanced significantly, and many new FPGA designs are developed with the HLS design flow. In this dissertation, we present several novel design methodologies for high-performance and resource-efficient DNN accelerator designs and implementations on FPGAs, leveraging commercial HLS design flows. Summarizing the design methodologies explored in these works, we conclude that designing high-performance, resource-efficient FPGA-based DNN accelerators requires both novel architectural design that honors resource and bandwidth constraints and algorithmic optimization of the DNN computation.
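To give a flavor of the HLS design style the abstract refers to, the following is a minimal, hypothetical sketch (not taken from the dissertation) of a multiply-accumulate kernel written in HLS-style C++. The `#pragma HLS` directives are the mechanism by which commercial HLS tools such as Vitis HLS expose architectural choices (loop pipelining, array partitioning into parallel registers) that trade FPGA resources for throughput; a standard C++ compiler simply ignores them, so the same code remains testable in software.

```cpp
#include <array>

// Hypothetical HLS-style kernel (illustrative only): an N-element
// dot product. The pragmas below are no-ops for a plain C++ compiler
// but direct an HLS tool to:
//   - partition the input arrays into registers for parallel access,
//   - pipeline the loop to start one multiply-accumulate per cycle.
constexpr int N = 8;

int dot_product(const std::array<int, N>& a, const std::array<int, N>& b) {
#pragma HLS ARRAY_PARTITION variable=a complete
#pragma HLS ARRAY_PARTITION variable=b complete
    int acc = 0;
MAC_LOOP:
    for (int i = 0; i < N; ++i) {
#pragma HLS PIPELINE II=1
        acc += a[i] * b[i];  // one MAC per loop iteration
    }
    return acc;
}
```

In an actual accelerator, the same trade-off logic applies at a much larger scale: unroll factors and partition schemes are chosen so that the generated parallel hardware fits within the device's DSP, BRAM, and bandwidth budgets, which is exactly the resource-constrained architectural design the dissertation addresses.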