# Heteroskedasticity


In statistics, a sequence or a vector of random variables is heteroskedastic, or heteroscedastic, if the random variables have different variances. The complementary concept is called homoskedasticity. The term means "differing variance" and comes from the Greek "hetero" ('different') and "skedasis" ('dispersion').

When using some statistical techniques, such as ordinary least squares (OLS), a number of assumptions are typically made. One of these is that the error term has a constant variance. This will be true if the observations of the error term are assumed to be drawn from identical distributions. Heteroskedasticity is a violation of this assumption.
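A small simulation can make the violated assumption concrete. The sketch below (plain Python; all parameters are made up for illustration) draws one error series with constant standard deviation and one whose standard deviation grows with the regressor, then compares sample variances at the two ends of the sample:

```python
import random

random.seed(0)

n = 200
x = [i / n for i in range(1, n + 1)]

# Homoskedastic errors: constant standard deviation.
homo = [random.gauss(0, 1.0) for _ in x]

# Heteroskedastic errors: the standard deviation grows with x,
# violating the constant-variance assumption behind classical OLS inference.
hetero = [random.gauss(0, 1.0 + 4.0 * xi) for xi in x]

def sample_var(e):
    m = sum(e) / len(e)
    return sum((v - m) ** 2 for v in e) / (len(e) - 1)

# Compare the error variance in the first and last quarter of the sample.
q = n // 4
print("homoskedastic:  ", sample_var(homo[:q]), sample_var(homo[-q:]))
print("heteroskedastic:", sample_var(hetero[:q]), sample_var(hetero[-q:]))
```

For the homoskedastic series the two quarter-sample variances differ only by sampling noise; for the heteroskedastic series the late-sample variance is many times the early-sample variance.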

For example, the error term could vary or increase with each observation, something that is often the case with cross-sectional or time series measurements. Heteroskedasticity is often studied as part of econometrics, which frequently deals with data exhibiting it.

With the advent of robust standard errors, which allow inference without specifying the conditional second moment of the error term, testing for conditional homoskedasticity is less important than it once was.

The econometrician Robert Engle won the 2003 Nobel Memorial Prize in Economic Sciences for his studies of regression analysis in the presence of heteroskedasticity, which led to his formulation of the ARCH (AutoRegressive Conditional Heteroscedasticity) modeling technique.
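The core idea of ARCH can be sketched in a few lines. In an ARCH(1) process, the conditional variance of each period's shock depends on the previous squared shock, producing the volatility clustering seen in financial returns. The parameters `a0` and `a1` below are illustrative, not estimated:

```python
import random

random.seed(4)

# ARCH(1): conditional variance today depends on yesterday's squared shock.
# a0, a1 are illustrative values (a1 < 1 keeps the process stationary).
a0, a1 = 0.2, 0.5
n = 5000

eps = [0.0]
for _ in range(n):
    sigma2 = a0 + a1 * eps[-1] ** 2          # conditional variance
    eps.append(random.gauss(0, 1) * sigma2 ** 0.5)
eps = eps[1:]

def corr(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    ca = [v - ma for v in a]
    cb = [v - mb for v in b]
    num = sum(p * q for p, q in zip(ca, cb))
    den = (sum(p * p for p in ca) * sum(q * q for q in cb)) ** 0.5
    return num / den

# Volatility clustering: the squared shocks are positively autocorrelated
# even though the shocks themselves are conditionally mean zero.
e2 = [e * e for e in eps]
print(corr(e2[:-1], e2[1:]))  # positive: variance clusters over time
```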

## Consequences

Heteroskedasticity does not cause OLS coefficient estimates to be biased. However, the usual formulas for the variance (and, thus, the standard errors) of the coefficients are no longer valid; in the typical case the standard errors are underestimated, inflating t-scores and sometimes making insignificant variables appear statistically significant.
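The first claim, that the coefficient estimates remain unbiased, can be checked with a short Monte Carlo sketch (plain Python; the slope, error structure, and replication count are made up for illustration). Each replication draws heteroskedastic errors and refits OLS; the average slope stays at the true value:

```python
import random

random.seed(3)

def ols_slope(x, y):
    # Simple-regression OLS slope in deviation form.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    return sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sxx

true_b = 2.0
x = [i / 100 for i in range(1, 101)]

slopes = []
for _ in range(2000):
    # Heteroskedastic errors: standard deviation grows with x.
    y = [1 + true_b * xi + random.gauss(0, 0.1 + 2 * xi) for xi in x]
    slopes.append(ols_slope(x, y))

mean_slope = sum(slopes) / len(slopes)
print(mean_slope)  # close to the true slope of 2: no bias, despite heteroskedasticity
```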

Heteroskedasticity is also a major practical issue encountered in ANOVA problems.[1] The F test can still be used in some circumstances.[2]

## Detection

There are several methods to test for the presence of heteroskedasticity, among them:

• the Park test
• the Glejser test
• the Breusch–Pagan test
• the White test
• the Goldfeld–Quandt test
• Levene's test
• the Brown–Forsythe test

The last two tests also work for non-normally distributed data sets.
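As an illustration of how such a test works, here is a minimal pure-Python sketch of the Koenker (studentized) variant of the Breusch–Pagan statistic for a single regressor: fit OLS, regress the squared residuals on the regressor, and form LM = n·R², which is asymptotically chi-squared with one degree of freedom under homoskedasticity. The data-generating numbers are made up for illustration:

```python
import random

def ols_fit(x, y):
    # Simple-regression OLS intercept and slope.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sxx
    return my - b * mx, b

def r_squared(x, y):
    a, b = ols_fit(x, y)
    my = sum(y) / len(y)
    ss_res = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return 1 - ss_res / ss_tot

def breusch_pagan_lm(x, y):
    # 1. Fit OLS and take residuals.
    a, b = ols_fit(x, y)
    resid = [yi - (a + b * xi) for xi, yi in zip(x, y)]
    # 2. Regress squared residuals on x; LM = n * R^2 is asymptotically
    #    chi-squared with 1 df under the null of homoskedasticity.
    e2 = [e * e for e in resid]
    return len(x) * r_squared(x, e2)

random.seed(1)
n = 500
x = [random.uniform(0, 1) for _ in range(n)]
# Error standard deviation grows with x, so the null should be rejected.
y = [2 + 3 * xi + random.gauss(0, 0.2 + 2 * xi) for xi in x]

lm = breusch_pagan_lm(x, y)
print(lm)  # large values reject homoskedasticity (chi2(1) 5% critical value is about 3.84)
```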

## Fixes

There are three common corrections for heteroskedasticity:

• Use a different specification for the model (different X variables, or perhaps non-linear transformations of the X variables).
• Apply a weighted least squares estimation method, in which OLS is applied to transformed or weighted values of X and Y. The weights vary over observations, depending on the changing error variances.
• Heteroscedasticity-consistent standard errors (HCSE), while still biased in finite samples, improve upon the usual OLS standard errors (White 1980). HCSEs are generally larger than their OLS counterparts, resulting in lower t-scores and a reduced probability of finding coefficients statistically significant. The White method corrects the standard errors for heteroskedasticity without altering the coefficient estimates; it may be preferable to regular OLS inference because it corrects for heteroskedasticity when it is present, while costing little when it is absent.
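For the simple-regression case, the third fix can be sketched in a few lines of plain Python. The sketch computes the OLS slope together with both the classical standard error (which assumes one common error variance) and White's HC0 heteroskedasticity-consistent standard error (which weights each squared residual by its regressor's squared deviation); the simulated data are made up for illustration:

```python
import math
import random

def ols_with_se(x, y):
    # Simple-regression OLS slope with classical and White (HC0) standard errors.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sxx
    a = my - b * mx
    resid = [yi - (a + b * xi) for xi, yi in zip(x, y)]
    # Classical SE: assumes a single error variance sigma^2 for all observations.
    s2 = sum(e * e for e in resid) / (n - 2)
    se_classical = math.sqrt(s2 / sxx)
    # HC0: each squared residual enters with its own squared x-deviation,
    # so observations with large errors at extreme x get more weight.
    se_hc0 = math.sqrt(sum(((xi - mx) ** 2) * e * e
                           for xi, e in zip(x, resid)) / sxx ** 2)
    return b, se_classical, se_hc0

random.seed(2)
n = 300
x = [random.uniform(0, 2) for _ in range(n)]
# Error standard deviation grows with x^2: strong heteroskedasticity.
y = [1 + 2 * xi + random.gauss(0, 2 * xi ** 2) for xi in x]

b, se_c, se_w = ols_with_se(x, y)
print(b, se_c, se_w)  # here the HC0 standard error exceeds the classical one
```

With variance increasing in x, the classical formula understates the slope's sampling variability, and the HC0 estimate comes out larger, as the text above describes.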

## Examples

Heteroskedasticity often occurs when there is a large difference among the sizes of the observations.

• A classic example of heteroskedasticity is that of income versus expenditure on meals. As one's income increases, the variability of food consumption will increase. A poorer person will spend a rather constant amount by always eating fast food; a wealthier person may occasionally buy fast food and other times eat an expensive meal. Those with higher incomes display a greater variability of food consumption.
• Imagine you are watching a rocket take off nearby and measuring the distance it has traveled once each second. In the first couple of seconds your measurements may be accurate to the nearest centimeter, say. However, 5 minutes later as the rocket recedes into space, the accuracy of your measurements may only be good to 100 m, because of the increased distance, atmospheric distortion and a variety of other factors. The data you collect would exhibit heteroskedasticity.

## References

1. Gamage, Jinadasa (1998). Size performance of some tests in one-way anova. Communications in Statistics - Simulation and Computation 27: 625.
2. Bathke, A (2004). The ANOVA F test can still be used in some balanced designs with unequal variances and nonnormal data. Journal of Statistical Planning and Inference 126: 413.

Most statistics textbooks will include at least some material on heteroskedasticity. Some examples are:

1. Studenmund, A.H. Using Econometrics 2nd Ed. ISBN 0-673-52125-7. (devotes a chapter to heteroskedasticity).
2. Verbeek, Marno (2004): A Guide to Modern Econometrics, 2nd ed., Chichester: John Wiley & Sons.
3. Greene, W.H. (1993), Econometric Analysis, Prentice-Hall, ISBN 0-13-013297-7, an introductory but thorough general text, considered the standard for a pre-doctorate university Econometrics course;
4. Hamilton, J.D. (1994), Time Series Analysis, Princeton University Press ISBN 0-691-04289-6, the text of reference for historical series analysis; it contains an introduction to ARCH models.
5. Vinod, H.D. (2008): Hands On Intermediate Econometrics Using R: Templates for Extending Dozens of Practical Examples. World Scientific Publishers: Hackensack, NJ. ISBN 10-981-281-885-5. (Section 2.8 provides R snippets.)
6. White, Halbert (1980): A Heteroskedasticity-Consistent Covariance Matrix Estimator and a Direct Test for Heteroskedasticity, in: Econometrica, Vol. 48, pp. 817–838.

Special subjects:

• Glejser test: Furno, Marilena (Universita di Cassino, Italy, 2005): The Glejser Test and the Median Regression, in: Sankhya - The Indian Journal of Statistics, Special Issue on Quantile Regression and Related Methods, Volume 67, Part 2, pp. 335–358: http://sankhya.isical.ac.in/search/67_2/2005015.pdf