Acceleration of deep learning applications using Intel distribution of OpenVINO toolkit
Li, Haoxiang
Permalink
https://hdl.handle.net/2142/115631
Description
- Title
- Acceleration of deep learning applications using Intel distribution of OpenVINO toolkit
- Author(s)
- Li, Haoxiang
- Issue Date
- 2022-04-29
- Director of Research (if dissertation) or Advisor (if thesis)
- Kindratenko, Volodymyr
- Department of Study
- Electrical & Computer Eng
- Discipline
- Electrical & Computer Engr
- Degree Granting Institution
- University of Illinois at Urbana-Champaign
- Degree Name
- M.S.
- Degree Level
- Thesis
- Keyword(s)
- OpenVINO
- Machine Learning
- Convolutional Neural Network
- Power-efficiency
- Abstract
- Machine learning has been a popular domain of research for the past decade. The emergence of deep neural networks (DNN) brings new solutions to complex problems, including image classification, object detection, and natural language processing (NLP). The use of convolution and deep architectures allows information to be extracted and learned effectively from large-scale datasets and has led to significant technological breakthroughs in many traditional scientific fields. In particular, astrophysicists have proposed deep learning solutions for tasks related to gravitational waves, such as detecting and characterizing such events. To satisfy the increasing demand from researchers, many open-source frameworks, such as TensorFlow and PyTorch, have been developed for ease of use. With the assistance of large-scale distributed GPU systems, researchers are able to develop, train, and test domain-specific deep learning applications efficiently. However, many such applications require deployment on edge devices that collect and process data in real time. In such cases, GPUs may not provide a portable solution for DNN inference because they are expensive and power-inefficient. In contrast, other hardware architectures such as CPUs, VPUs, and FPGAs can be more accessible to a wider range of customers and more applicable to many power-restricted scenarios. In this thesis, we are interested in accelerating DNN inference workloads using the Intel Distribution of OpenVINO toolkit on various Intel hardware. We explore the process of model conversion, workload deployment in Intel DevCloud, and performance benchmarks for several popular networks. In addition, we evaluate the inference performance of a specific deep learning algorithm developed for multi-messenger astrophysics to characterize complex gravitational waves caused by the merging of binary black holes. We make a comparative analysis in terms of inference throughput and power consumption on PyTorch with Nvidia GPUs and OpenVINO with Intel CPUs, GPUs, and VPUs.
- Graduation Semester
- 2022-05
- Type of Resource
- Thesis
- Copyright and License Information
- Copyright 2022 Haoxiang Li
Owning Collections
Graduate Dissertations and Theses at Illinois (Primary)