COMPUTATIONAL COMPLEXITY ANALYSES OF ADAPTIVE EQUALIZATION ALGORITHMS IN LINEARLY DISPERSIVE CHANNEL SYSTEMS
Keywords:
Equalization, Least Mean Squares, Normalized Least Mean Squares, Recursive Least Squares.
Abstract
This paper presents a framework for assessing the complexity of adaptive equalization
algorithms in a linearly dispersive channel that produces unknown distortion. Three
algorithms are investigated, namely the Least Mean Squares (LMS), Recursive Least Squares (RLS), and Recursive Least Squares Lattice (RLSL) algorithms, with respect to the mean square error (MSE) and the sample convergence speed. The simulation results reveal several insights. In terms of channel dispersion, the MSE performance of the LMS deteriorates by up to 40% when the channel eigenvalue spread doubles. In addition, the convergence speed of the LMS decreases by up to 50% for the same increase in channel spread and is invariant to the filter order; the same observation applies to the RLS and RLSL algorithms. The RLS algorithm gives the best MSE performance over practical signal-to-noise ratio (SNR) ranges, outperforming the LMS scheme by up to 50% and the RLSL by up to 20%. In terms of convergence speed, the RLS algorithm converges fastest, at around 100 data samples; the RLSL algorithm requires up to 200 samples; and the LMS is the slowest, requiring 800 samples. Observation of other metrics, including the last tap-weight coefficient of the LMS/RLS and the steady-state regression coefficient of the RLSL, reveals symmetry and asymmetry in their statistics, respectively. The choice of equalization algorithm depends on a number of design tradeoffs, including the propagation environment, the SNR sensitivity, and the available computational power.
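As an illustration of the adaptive equalization setup considered here, the following Python sketch trains an LMS equalizer over a simulated linearly dispersive channel and records the MSE learning curve. This is a minimal sketch, not the simulation code behind the reported results: the three-tap raised-cosine channel model (with parameter W standing in for the eigenvalue spread control), the 11-tap filter order, the step size mu, the training delay, and the SNR value are all illustrative assumptions.

    # Minimal LMS adaptive equalizer sketch over an assumed linearly dispersive channel.
    # All parameter values are illustrative, not taken from the paper.
    import numpy as np

    rng = np.random.default_rng(0)

    num_samples = 1000          # training symbols
    num_taps = 11               # equalizer filter order (assumed)
    mu = 0.075                  # LMS step size (assumed)
    snr_db = 30                 # channel SNR in dB (assumed)

    # Assumed dispersive channel: 3-tap raised-cosine-type impulse response;
    # the parameter W controls how spread out (dispersive) the channel is.
    W = 2.9
    h = np.array([0.5 * (1 + np.cos(2 * np.pi / W * (k - 2))) for k in (1, 2, 3)])

    # Random bipolar (+/-1) training sequence passed through the channel plus AWGN.
    s = rng.choice([-1.0, 1.0], size=num_samples)
    x = np.convolve(s, h)[:num_samples]
    noise_var = np.mean(h**2) / (10 ** (snr_db / 10))   # rough per-tap noise scaling
    x += rng.normal(scale=np.sqrt(noise_var), size=num_samples)

    # LMS adaptation: w(n+1) = w(n) + mu * e(n) * u(n), where u(n) is the
    # equalizer input vector and e(n) the error against a delayed reference symbol.
    delay = (num_taps + len(h)) // 2
    w = np.zeros(num_taps)
    mse = np.empty(num_samples)
    for n in range(num_samples):
        u = np.array([x[n - k] if n - k >= 0 else 0.0 for k in range(num_taps)])
        d = s[n - delay] if n >= delay else 0.0
        e = d - w @ u
        w += mu * e * u
        mse[n] = e**2

    print("final tap weights:", np.round(w, 3))
    print("steady-state MSE estimate:", mse[-200:].mean())

Replacing the LMS update inside the loop with an RLS or RLSL recursion, under the same channel and noise settings, is the kind of substitution that would let the convergence-speed and MSE comparisons summarized above be reproduced in outline.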