A hardware acceleration technique for gradient descent and conjugate gradient
Kesler, David R.
Permalink
https://hdl.handle.net/2142/24241
Description
Title
A hardware acceleration technique for gradient descent and conjugate gradient
Author(s)
Kesler, David R.
Issue Date
2011-05-25
Director of Research (if dissertation) or Advisor (if thesis)
Kumar, Rakesh
Department of Study
Electrical & Computer Engineering
Discipline
Electrical & Computer Engineering
Degree Granting Institution
University of Illinois at Urbana-Champaign
Degree Name
M.S.
Degree Level
Thesis
Keyword(s)
Gradient Descent
Conjugate Gradient
Hardware Acceleration
Matrix Multiplication
Abstract
Gradient descent, conjugate gradient, and other iterative algorithms are a powerful class of algorithms; however, they can take a long time to converge. Baseline accelerator designs feature insufficient coverage of operations and do not work well on the problems we target. In this thesis we present a novel hardware architecture for accelerating gradient descent and other similar algorithms. To support this architecture, we also present a sparse matrix-vector storage format, and software support for utilizing the format, so that it can be efficiently mapped onto hardware that is also well suited for dense operations. We show that the accelerator design outperforms similar designs that target only the most dominant operation of a given algorithm, providing substantial energy and performance benefits. We further show that the accelerator can be reasonably implemented on a general-purpose CPU with small area overhead.
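For readers unfamiliar with the algorithms named in the abstract, the sketch below shows the conjugate gradient iteration whose dominant cost is the sparse matrix-vector multiply (SpMV) that such accelerators target. This is a minimal illustration only: the thesis's own storage format is not reproduced here, so standard CSR (compressed sparse row) stands in as a common baseline, and the function names are hypothetical. Steepest (gradient) descent follows the same loop structure, simply reusing the residual as the search direction each iteration.

```python
import numpy as np

def csr_matvec(data, indices, indptr, x):
    """y = A @ x for a matrix A stored in CSR form (data, indices, indptr)."""
    n = len(indptr) - 1
    y = np.zeros(n)
    for row in range(n):
        start, end = indptr[row], indptr[row + 1]
        # Dot product of the row's nonzeros with the gathered entries of x.
        y[row] = np.dot(data[start:end], x[indices[start:end]])
    return y

def conjugate_gradient(data, indices, indptr, b, tol=1e-8, max_iter=1000):
    """Solve A x = b for a symmetric positive-definite A in CSR form."""
    x = np.zeros_like(b)
    r = b - csr_matvec(data, indices, indptr, x)  # initial residual
    p = r.copy()                                  # initial search direction
    rs_old = r @ r
    for _ in range(max_iter):
        Ap = csr_matvec(data, indices, indptr, p)  # dominant cost: SpMV
        alpha = rs_old / (p @ Ap)                  # step length along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:                  # converged
            break
        p = r + (rs_new / rs_old) * p              # conjugate direction update
        rs_old = rs_new
    return x

# Usage: 2x2 SPD system A = [[4, 1], [1, 3]], b = [1, 2].
data = np.array([4.0, 1.0, 1.0, 3.0])
indices = np.array([0, 1, 0, 1])
indptr = np.array([0, 2, 4])
b = np.array([1.0, 2.0])
x = conjugate_gradient(data, indices, indptr, b)  # approx [0.0909, 0.6364]
```

Because nearly all of the work per iteration is inside `csr_matvec`, an accelerator that covers only SpMV leaves the vector updates on the host; the abstract's claim is that covering the remaining operations as well yields further energy and performance gains.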