Universal approximation of input-output maps and dynamical systems by neural network architectures
Hanson, Joshua McKinley
Permalink
https://hdl.handle.net/2142/108486
Issue Date
2020-07-15
Director of Research (if dissertation) or Advisor (if thesis)
Raginsky, Maxim
Department of Study
Electrical & Computer Eng
Discipline
Electrical & Computer Engr
Degree Granting Institution
University of Illinois at Urbana-Champaign
Degree Name
M.S.
Degree Level
Thesis
Keyword(s)
Input-output maps
convolutional neural nets
dynamical systems
recurrent neural nets
deep neural networks
continuous time
discrete time
universal approximation
simulation
feedback
stability
fading memory
approximately finite memory
Abstract
It is well known that feedforward neural networks can approximate any continuous function supported on a finite-dimensional compact set to arbitrary accuracy. However, many engineering applications require modeling infinite-dimensional functions, such as sequence-to-sequence transformations or input-output characteristics of systems of differential equations. For discrete-time input-output maps having limited long-term memory, we prove universal approximation guarantees, valid on an infinite-time horizon, for temporal convolutional nets constructed using only a finite number of computation units. We also provide quantitative estimates for the width and depth of the network sufficient to achieve any fixed error tolerance. Furthermore, we show that discrete-time input-output maps given by state-space realizations satisfying certain stability criteria admit such convolutional net approximations which are accurate on an infinite-time scale. For continuous-time input-output maps induced by dynamical systems that are stable in a similar sense, we prove that continuous-time recurrent neural nets are capable of reproducing the original trajectories to within arbitrarily small error tolerance over an infinite-time horizon. For a subset of these stable systems, we provide quantitative estimates on the number of neurons sufficient to guarantee the desired error bound.
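The finite-memory idea underlying the convolutional-net results can be illustrated with a minimal sketch: a causal 1-D convolution whose output at time t depends only on the last m inputs, so the same finite set of weights applies uniformly over an arbitrarily long (in principle infinite) horizon. This toy example is not the thesis's construction; the kernel, memory length, and function names here are illustrative assumptions.

```python
import numpy as np

def causal_conv(u, kernel):
    """Causal 1-D convolution: y[t] = sum_i kernel[i] * u[t - i].

    The output at time t depends only on the m most recent inputs
    (m = len(kernel)), i.e. the map has finite memory m. Inputs before
    t = 0 are taken to be zero (zero-padding on the left).
    """
    m = len(kernel)
    u_padded = np.concatenate([np.zeros(m - 1), np.asarray(u, dtype=float)])
    # Window [u[t-m+1], ..., u[t]] dotted with the reversed kernel.
    return np.array([u_padded[t:t + m] @ kernel[::-1] for t in range(len(u))])

# Illustrative two-tap moving-average kernel (an assumption, not from the thesis).
kernel = np.array([0.5, 0.5])
y = causal_conv([1.0, 1.0, 1.0, 1.0], kernel)  # → [0.5, 1.0, 1.0, 1.0]
```

Stacking such causal convolutions with pointwise nonlinearities yields a temporal convolutional net; because each layer only looks a bounded distance into the past, the network's total receptive field is finite, which is what makes approximation guarantees that hold uniformly in time plausible for maps with fading or approximately finite memory.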