Permalink
https://hdl.handle.net/2142/100029
Description
Title
Graph sparsification on deep neural network
Author(s)
Qiu, Yijia
Contributor(s)
Chen, Deming
Issue Date
2018-05
Keyword(s)
graph sparsification
prune neural network
control accuracy loss
accelerate IC
Abstract
Pruning deep neural networks can reduce computation cost and memory use. Graph sparsification is a method that treats the neural network as a graph, where neurons are vertices and the connections between neurons are edges. It generates an ultra-sparse subgraph that well preserves the structure of the original graph. The reduced network keeps 30 times fewer connections than before, or fewer still, and accuracy remains almost unchanged after re-training. Such an optimization leads to a highly efficient implementation of the reduced deep neural network on hardware accelerators such as FPGAs.
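The pruning idea in the abstract (neurons as vertices, weights as edges, keep only an ultra-sparse subset of edges) can be sketched as follows. This is a simplified magnitude-based stand-in, not the thesis's actual graph-sparsification algorithm; the function name, the 1/30 keep ratio, and the use of weight magnitude as the edge score are all illustrative assumptions.

```python
import numpy as np

def prune_to_sparsity(weights, keep_ratio=1 / 30):
    """Keep only the largest-magnitude fraction of connections (edges).

    Illustrative stand-in for graph sparsification: the network is a
    graph whose edge weights are the entries of `weights`; we keep an
    ultra-sparse subgraph of the top-|w| edges (assumed edge score).
    """
    flat = np.abs(weights).ravel()
    k = max(1, int(flat.size * keep_ratio))
    # Threshold = magnitude of the k-th largest edge weight
    threshold = np.partition(flat, -k)[-k]
    mask = np.abs(weights) >= threshold
    # Zero out all pruned edges; `mask` marks the surviving subgraph
    return weights * mask, mask

rng = np.random.default_rng(0)
W = rng.standard_normal((100, 100))      # toy 100x100 weight matrix
W_pruned, mask = prune_to_sparsity(W)
print(mask.sum())                        # roughly 10000 / 30 edges survive
```

In practice, as the abstract notes, the sparsified network is then re-trained so the surviving connections compensate for the removed ones, and the resulting sparse structure maps efficiently onto hardware accelerators such as FPGAs.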