Resource-efficient FPGA acceleration for machine learning applications through HLS
Liu, Xinheng
Permalink
https://hdl.handle.net/2142/115509
Description
- Title
- Resource-efficient FPGA acceleration for machine learning applications through HLS
- Author(s)
- Liu, Xinheng
- Issue Date
- 2022-02-25
- Director of Research (if dissertation) or Advisor (if thesis)
- Chen, Deming
- Doctoral Committee Chair(s)
- Chen, Deming
- Committee Member(s)
- Huang, Jian
- Lumetta, Steven
- Cheng, Zuofu
- Department of Study
- Electrical & Computer Eng
- Discipline
- Electrical & Computer Engr
- Degree Granting Institution
- University of Illinois at Urbana-Champaign
- Degree Name
- Ph.D.
- Degree Level
- Dissertation
- Keyword(s)
- FPGA
- machine learning
- HLS
- Abstract
- The rapid development of machine learning has demonstrated its great capability and effectiveness in handling complicated real-world problems such as computer vision and natural language processing. However, conventional CPU-based implementations cannot deliver sufficient performance for the deep neural networks (DNNs) used in many machine learning applications due to their intensive computation and memory bandwidth requirements. As a result, application developers seek other hardware platforms to boost the performance of deep learning workloads. Field-programmable gate arrays (FPGAs), known for their ability to maximize parallelism, their flexibility in exploring different hardware architectures, and their high energy efficiency, have been widely employed to accelerate DNN applications. Meanwhile, the higher productivity and better design space exploration offered by high-level synthesis (HLS) have earned this design methodology wider acceptance for hardware design. In recent years, HLS techniques and design flows have advanced significantly, and many new FPGA designs are developed with the HLS design flow. In this dissertation, we present several novel design methodologies for high-performance and resource-efficient DNN accelerator designs and implementations on FPGAs leveraging commercial HLS design flows. Summarizing the design methodologies explored in these works, we conclude that designing high-performance and resource-efficient FPGA-based DNN accelerators requires both novel architectural design honoring resource and bandwidth constraints and algorithmic optimization of the DNN computation.
- Graduation Semester
- 2022-05
- Type of Resource
- Thesis
- Copyright and License Information
- Copyright 2022 Xinheng Liu
Owning Collections
Graduate Dissertations and Theses at Illinois (primary)