An optimizing compiler for ONNX models on heterogeneous systems
Shi, Yuanjing
Permalink
https://hdl.handle.net/2142/108171
Description
Title
An optimizing compiler for ONNX models on heterogeneous systems
Author(s)
Shi, Yuanjing
Issue Date
2020-05-11
Director of Research (if dissertation) or Advisor (if thesis)
Adve, Vikram S.
Department of Study
Computer Science
Discipline
Computer Science
Degree Granting Institution
University of Illinois at Urbana-Champaign
Degree Name
M.S.
Degree Level
Thesis
Keyword(s)
Compiler
Abstract
To build, train, and deploy deep learning models for modern data-driven applications, programs must run on specialized heterogeneous systems for better performance. However, programming such systems remains a fundamental challenge: high-level deep learning frameworks suffer from interoperability issues, and low-level heterogeneous systems suffer from programmability issues.
In this work, we propose a portable, highly optimizing compiler for neural network models, based on ONNX, an open format for deep learning models, targeting heterogeneous systems. The compiler consists of a front-end and a back-end that address the issues above, mapping high-level neural network models down to low-level executable programs. We evaluate this work on several deep neural network models; our compiler outperforms ONNX Runtime by up to 3.15x and Keras by up to 4.37x on certain workloads.
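The abstract's front-end/back-end split can be illustrated with a minimal sketch: a front-end walks a high-level operator graph (standing in for an ONNX graph), and a back-end lowers each operator to an executable kernel. All names here (`Node`, `Graph`, `compile_graph`, `KERNELS`) are hypothetical illustrations, not the thesis's actual implementation, which ingests real ONNX protobuf models and emits target-specific code.

```python
# Hypothetical sketch of a front-end/back-end compiler pipeline.
# A real ONNX compiler parses protobuf graphs and generates code for
# heterogeneous targets; this toy version interprets scalar ops.
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Node:
    op: str            # operator name, e.g. "Add" or "Mul"
    inputs: List[str]  # names of input values
    output: str        # name of the produced value

@dataclass
class Graph:
    nodes: List[Node]  # assumed to be in topological order
    inputs: List[str]
    output: str

# Back-end "lowering" table: map each high-level op to a kernel.
KERNELS: Dict[str, Callable[[float, float], float]] = {
    "Add": lambda a, b: a + b,
    "Mul": lambda a, b: a * b,
}

def compile_graph(g: Graph) -> Callable[..., float]:
    """Front-end walks the graph; back-end binds each op to a kernel."""
    def run(**feeds: float) -> float:
        env = dict(feeds)
        for node in g.nodes:
            kernel = KERNELS[node.op]
            env[node.output] = kernel(*(env[i] for i in node.inputs))
        return env[g.output]
    return run

# Example graph computing f(x, y) = (x + y) * y
g = Graph(
    nodes=[Node("Add", ["x", "y"], "t"), Node("Mul", ["t", "y"], "z")],
    inputs=["x", "y"],
    output="z",
)
f = compile_graph(g)
print(f(x=2.0, y=3.0))  # (2 + 3) * 3 = 15.0
```

A production compiler would replace the kernel table with target-specific code generation (CPU, GPU, accelerators), which is where the performance gains over ONNX Runtime and Keras come from.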