This item is only available for download by members of the University of Illinois community. Students, faculty, and staff at the U of I may log in with their NetID and password to view the item. If you are trying to access an Illinois-restricted dissertation or thesis, you can request a copy through your library's Inter-Library Loan office or purchase a copy directly from ProQuest.
Permalink
https://hdl.handle.net/2142/19077
Description
Title
Orthogonalization techniques for adaptive filters
Author(s)
Hull, Andrew William
Issue Date
1994
Doctoral Committee Chair(s)
Jenkins, W. Kenneth
Department of Study
Electrical and Computer Engineering
Discipline
Electrical Engineering
Degree Granting Institution
University of Illinois at Urbana-Champaign
Degree Name
Ph.D.
Degree Level
Dissertation
Keyword(s)
Engineering, Electronics and Electrical
Engineering, System Science
Language
eng
Abstract
The rate of convergence and the computational complexity of an adaptive algorithm are two essential criteria by which the performance of an adaptive filter is measured. These objectives conflict with one another; each property is successfully achieved at the expense of the other. The principal means of achieving rapid convergence is to decouple and normalize the eigenvalues governing the solution evolution. Given a suitable structure, it is possible to derive an orthogonalizing algorithm with O(N) computations. However, such algorithms currently suffer from numerical instability or require computationally expensive operations, such as square root and division.
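As an illustration of the convergence issue described above (not taken from the dissertation), the short Python sketch below estimates the autocorrelation matrix of a correlated AR(1) input and reports its eigenvalue spread, the quantity that governs how quickly gradient-based adaptation such as LMS converges; whitening the input collapses that spread to one. The AR pole, filter order, and signal length are assumed values for the demonstration.

```python
import numpy as np
from scipy.linalg import toeplitz

# Sketch (not from the dissertation): the eigenvalue spread of the input
# autocorrelation matrix R governs how fast gradient-descent adaptation
# (e.g., LMS) converges; decoupling and normalizing those eigenvalues,
# i.e., whitening the input, makes all modes converge at the same rate.
rng = np.random.default_rng(0)

# Colored input: first-order AR process x[n] = a*x[n-1] + w[n]  (assumed model)
a, N, L = 0.9, 20000, 8          # AR pole, signal length, filter order
w = rng.standard_normal(N)
x = np.zeros(N)
for n in range(1, N):
    x[n] = a * x[n - 1] + w[n]

# Biased sample autocorrelation of the filter's input vector (Toeplitz)
r = np.array([np.dot(x[:N - k], x[k:]) / N for k in range(L)])
R = toeplitz(r)

lam = np.linalg.eigvalsh(R)
print("eigenvalue spread (condition number):", lam[-1] / lam[0])

# After ideal whitening, R becomes a multiple of the identity and the
# spread collapses to 1, so every adaptive mode evolves at the same rate.
R_white = np.eye(L) * r[0]
lam_w = np.linalg.eigvalsh(R_white)
print("spread after whitening:", lam_w[-1] / lam_w[0])
```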
Two different alternatives are presented in this work, each satisfying the contradictory adaptive filtering criteria. The first employs a novel nonlinear operation to whiten the input spectrum and increase the rate of convergence of the simple LMS algorithm. Not only does the richer input spectrum facilitate rapid convergence, but the now uncorrelated input signal reduces the effects of round-off error. This technique may also be applied to the O(N) fast least squares algorithms. The rate of convergence is unaffected, but the sensitivity to fixed-point implementation is reduced.
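The sketch below does not reproduce the dissertation's nonlinear whitening operation; it uses a generic DCT-domain LMS with per-coefficient power normalization purely to illustrate the stated effect of a decorrelated input on LMS convergence. The unknown system, the AR(1) input model, the filter length, and the step size are assumptions made for the demonstration.

```python
import numpy as np
from scipy.fft import dct, idct

# Sketch only: a standard DCT-domain LMS with per-bin power normalization,
# used to illustrate that a whitened (decorrelated) input speeds up LMS
# convergence.  All parameter values below are assumed for the demo.
rng = np.random.default_rng(1)
L, N, mu = 16, 4000, 0.02
h_true = rng.standard_normal(L)                     # unknown FIR system

x = np.zeros(N)
w = rng.standard_normal(N)
for n in range(1, N):                               # colored AR(1) input
    x[n] = 0.95 * x[n - 1] + w[n]
d = np.convolve(x, h_true)[:N] + 1e-3 * rng.standard_normal(N)

def run_lms(use_dct):
    g = np.zeros(L)                                 # adaptive weights
    p = np.full(L, np.var(x))                       # per-coefficient power estimate
    for n in range(L, N):
        u = x[n - L + 1:n + 1][::-1]                # newest sample first
        z = dct(u, norm='ortho') if use_dct else u  # orthonormal DCT decorrelates u
        e = d[n] - g @ z
        p = 0.99 * p + 0.01 * z * z                 # track power of each component
        g = g + mu * e * z / p                      # power-normalized LMS update
    h_eff = idct(g, norm='ortho') if use_dct else g # equivalent time-domain filter
    return np.linalg.norm(h_eff - h_true)

print("weight-error norm, time-domain LMS:", run_lms(False))
print("weight-error norm, DCT-domain LMS :", run_lms(True))
```

With these assumed settings, the time-domain update is expected to converge slowly along the weak eigen-directions of the colored input, while the decorrelated DCT-domain update should reach a much smaller weight-error norm over the same number of samples.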
The other approach shows the method of Preconditioned Conjugate Gradients (PCG) to be a useful tool in adaptive filtering. An O(log(2N)) block algorithm incorporating the PCG method to compute the Kalman gain is derived and its performance is evaluated. This algorithm exploits the Toeplitz nature of the autocorrelation matrix and is free from fixed-point instability. The manipulation of the Kalman gain is modified to solve the IIR adaptive filtering problem. Block IIR adaptive filtering is also introduced, and a fast algorithm is derived which also exploits the PCG method to manipulate an approximate orthogonalizing updating scheme.
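The following sketch is not the O(log(2N)) block algorithm derived in the dissertation; it only shows, under assumed values, how the Preconditioned Conjugate Gradient method can exploit Toeplitz structure when solving normal equations of the form R g = p, the kind of system behind the Kalman gain in fast least-squares adaptation. Matrix-vector products with R are applied through FFT-based circulant embedding, and Strang's circulant approximation of R serves as the preconditioner.

```python
import numpy as np
from scipy.linalg import solve_toeplitz

# Sketch only: PCG solve of a symmetric Toeplitz system R g = p, exploiting
# the Toeplitz structure twice: (1) R @ v is applied in O(L log L) via a 2L
# circulant embedding and the FFT, and (2) Strang's circulant approximation
# of R is used as the preconditioner.  The AR(1) autocorrelation model and
# all sizes below are assumptions for the demo.
L = 64
a = 0.9
r = a ** np.arange(L)                        # first column of symmetric Toeplitz R
rng = np.random.default_rng(2)
p = rng.standard_normal(L)                   # right-hand side (e.g., cross-correlation)

# Embed R in a 2L circulant so that R @ v costs O(L log L) via the FFT
c = np.concatenate([r, [0.0], r[-1:0:-1]])
C = np.fft.fft(c)
def R_matvec(v):
    return np.real(np.fft.ifft(C * np.fft.fft(v, 2 * L)))[:L]

# Strang's circulant preconditioner: keep the central diagonals of R, wrap the rest
s = r.copy()
s[L // 2 + 1:] = r[L - np.arange(L // 2 + 1, L)]
S = np.fft.fft(s)                            # its eigenvalues (assumed positive here)
def M_solve(v):
    return np.real(np.fft.ifft(np.fft.fft(v) / S))

# Standard preconditioned conjugate-gradient iteration
g = np.zeros(L)
res = p - R_matvec(g)
z = M_solve(res)
d = z.copy()
rz = res @ z
for it in range(200):
    Rd = R_matvec(d)
    alpha = rz / (d @ Rd)
    g += alpha * d
    res -= alpha * Rd
    if np.linalg.norm(res) < 1e-10:
        break
    z = M_solve(res)
    rz_new = res @ z
    d = z + (rz_new / rz) * d
    rz = rz_new

print("PCG iterations:", it + 1)
print("max error vs. direct Toeplitz solve:",
      np.max(np.abs(g - solve_toeplitz(r, p))))
```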