A SIMULATION STUDY TO DETERMINE THE BEST ESTIMATORS FOR SOLVING PROBLEMS OF AUTOCORRELATION IN LINEAR REGRESSION MODEL

Authors

  • A.H. BELLO, Department of Statistics, School of Physical Sciences, Federal University of Technology, Akure

Keywords:

Multicollinearity, Autocorrelation, Estimator, Regressors, TSP (Time Series Processor)

Abstract

Violation of the assumptions of independent explanatory variables and of independent error terms in the linear regression model leads to
the problems of multicollinearity and autocorrelation, respectively. Different estimators that can handle these problems
separately have been developed. In practice, however, the two problems often co-exist, yet estimators that handle them
jointly are rare. Consequently, this research proposed and validated two estimators, the Feasible Ordinary Ridge Estimator
(FORE) and the Feasible Generalized Ridge Estimator (FGRE), to handle the two problems jointly.
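The abstract does not spell out how FORE and FGRE are constructed. The following is a minimal Python sketch, assuming a feasible ridge estimator of this kind first estimates the AR(1) parameter from OLS residuals, quasi-differences the data (a Cochrane-Orcutt-type transformation), and then applies ridge regression to the transformed data; the function name, the Hoerl-Kennard-Baldwin-type choice of the ridge parameter k, and the use of numpy are illustrative assumptions rather than the authors' TSP implementation.

import numpy as np

def feasible_ordinary_ridge(X, y):
    # Illustrative FORE-like estimator: Cochrane-Orcutt transformation followed by
    # ordinary ridge regression. A sketch under the stated assumptions, not the authors' code.
    n, p = X.shape
    # Step 1: OLS fit to obtain residuals.
    b_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
    e = y - X @ b_ols
    # Step 2: estimate the AR(1) coefficient rho from lagged residuals.
    rho = (e[1:] @ e[:-1]) / (e[:-1] @ e[:-1])
    # Step 3: quasi-difference the data to remove the autocorrelation.
    X_t = X[1:] - rho * X[:-1]
    y_t = y[1:] - rho * y[:-1]
    # Step 4: ridge regression on the transformed data to handle collinearity.
    b_tmp, *_ = np.linalg.lstsq(X_t, y_t, rcond=None)
    sigma2 = np.sum((y_t - X_t @ b_tmp) ** 2) / (n - 1 - p)
    k = p * sigma2 / (b_tmp @ b_tmp)  # Hoerl-Kennard-Baldwin-type ridge parameter (assumed choice)
    b_ridge = np.linalg.solve(X_t.T @ X_t + k * np.eye(p), X_t.T @ y_t)
    return b_ridge, rho, k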
The existing and proposed estimators were categorized into five (5) groups, namely: One-Stage Estimators (OSE),
Two-Stage Estimators (TSE), Feasible Generalized Least Squares Estimators (FGLSE), Two-Process Estimators
(TPE) and Modified Ridge Estimators (MRE). Monte Carlo experiments were conducted one thousand (1000) times
on a linear regression model exhibiting different degrees of multicollinearity (λ = 0.4, 0.6, 0.8, 0.95 and 0.99) and
autocorrelation (ρ). This was examined for both normally and uniformly distributed regressors at sample sizes
n = 10, 20, 30, 50, 100 and 250.
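The abstract does not give the data-generating scheme in detail. The Python sketch below assumes a common design: two regressors sharing a prescribed correlation λ, built from normal or uniform base variates, and errors following a stationary AR(1) process with parameter ρ; the three-coefficient model and all names are illustrative assumptions rather than the study's TSP program.

import numpy as np

def simulate_dataset(n, lam, rho, dist="normal", beta=(1.0, 1.0, 1.0), seed=None):
    # Illustrative Monte Carlo draw: collinear regressors (correlation lam) and
    # AR(1) errors (autocorrelation rho). An assumed design, not the study's TSP code.
    rng = np.random.default_rng(seed)
    if dist == "normal":
        z1, z2 = rng.standard_normal(n), rng.standard_normal(n)
    else:  # uniform base draws for the regressors
        z1, z2 = rng.uniform(-1.0, 1.0, n), rng.uniform(-1.0, 1.0, n)
    # Mixing construction: corr(x1, x2) equals lam because z1 and z2 share the same variance.
    x1 = z1
    x2 = lam * z1 + np.sqrt(1.0 - lam ** 2) * z2
    # Stationary AR(1) errors: u[t] = rho * u[t-1] + eps[t].
    eps = rng.standard_normal(n)
    u = np.empty(n)
    u[0] = eps[0] / np.sqrt(1.0 - rho ** 2)
    for t in range(1, n):
        u[t] = rho * u[t - 1] + eps[t]
    y = beta[0] + beta[1] * x1 + beta[2] * x2 + u
    X = np.column_stack([np.ones(n), x1, x2])
    return X, y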
Finite sampling properties of the estimators, namely Bias (BAS), Mean Absolute Error (MAE), Variance (VAR) and,
most importantly, Mean Square Error (MSE), were evaluated, examined and compared at each specified level of
multicollinearity, autocorrelation and sample size using computer programs written in the Time Series Processor
(TSP 5.0) statistical software. This was done by ranking the estimators on the basis of their performance under these
criteria so as to determine the best estimator. With normally distributed regressors, the best estimator is
N-AUTOCOFGLSE-ML except at n = 10, where N-AUTOCOFGRE-ML is best; at n = 20, either (N-1)-AUTOCOFGLSE-CORC
or OREKBAY is best. With uniformly distributed regressors, the best estimator is N-AUTOCOFGLSE-ML/ML except at
n = 50, where (N-1)-AUTOCOFGLSE-CORC/CORC is best. Moreover, GRE and N-AUTOCOFOREKBAY compete at the
small sample sizes n = 10 and n = 20, respectively. Generally, it can be observed from the results that the best
estimator is either N-AUTOCOFGLSE-ML/ML or (N-1)-AUTOCOFGLSE-CORC/CORC.
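As an illustration of the evaluation step, the Python sketch below computes Bias, Mean Absolute Error, Variance and Mean Square Error of an estimator's coefficients over the Monte Carlo replications and ranks competing estimators by total MSE; the data layout and function names are assumptions made for illustration, not the study's TSP 5.0 programs.

import numpy as np

def finite_sample_criteria(estimates, beta_true):
    # estimates: array of shape (replications, p) with one estimator's coefficient
    # estimates across Monte Carlo runs; beta_true: array of shape (p,).
    dev = estimates - beta_true
    bias = dev.mean(axis=0)              # BAS
    mae = np.abs(dev).mean(axis=0)       # MAE
    var = estimates.var(axis=0, ddof=1)  # VAR
    mse = (dev ** 2).mean(axis=0)        # MSE
    # Summarise each criterion over the coefficients by a single total.
    return {"BAS": np.abs(bias).sum(), "MAE": mae.sum(), "VAR": var.sum(), "MSE": mse.sum()}

def rank_estimators(results_by_name):
    # results_by_name: {estimator_name: criteria dict}; smallest total MSE ranks best.
    return sorted(results_by_name, key=lambda name: results_by_name[name]["MSE"])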

Published

2022-06-15

Issue

Section

Articles