Psychology Wiki

Statistical regression

Revision as of 20:43, January 21, 2008 by Dr Joe Kiff



In statistics, regression analysis is a technique which examines the relation of a dependent variable (response variable) to specified independent variables (explanatory variables). Regression analysis can be used as a descriptive method of data analysis (such as curve fitting) without relying on any assumptions about underlying processes generating the data.[1]

When paired with assumptions in the form of a statistical model, regression can be used for prediction (including forecasting of time-series data), inference, hypothesis testing, and modeling of causal relationships. These uses of regression rely heavily on the model assumptions being satisfied. Regression analysis has been criticized as being misused for these purposes in many cases where the appropriate assumptions cannot be verified to hold.[1][2] One factor contributing to the misuse of regression is that it can take considerably more skill to critique a model than to fit a model.[3]

The key relationship in a regression is the regression equation. A regression equation contains regression parameters whose values are estimated using data. The estimated parameters measure the relationship between the dependent variable and each of the independent variables. When a regression model is used, the dependent variable is modeled as a random variable because of either uncertainty as to its value or inherent variability. The data are assumed to be a sample from a probability distribution, which is usually assumed to be a normal distribution.

History of regression

The term "regression" was used in the nineteenth century to describe a biological phenomenon, namely that the progeny of exceptional individuals tend on average to be less exceptional than their parents and more like their more distant ancestors. Francis Galton, a cousin of Charles Darwin, studied this phenomenon and applied the slightly misleading term "regression towards mediocrity" to it. For Galton, regression had only this biological meaning, but his work[4] was later extended by Udny Yule and Karl Pearson to a more general statistical context.[5]

Simple linear regression

[Figure: simple linear regression line fitted to data]

The general form of a simple linear regression is

y_i=\alpha+\beta x_i +\varepsilon_i

where \alpha is the intercept, \beta is the slope, and \varepsilon is the error term, which picks up the unpredictable part of the response variable y_i. The error term is usually posited to be normally distributed. The x's and y's are the data quantities from the sample or population in question, and \alpha and \beta are the unknown parameters ("constants") to be estimated from the data. Estimates for the values of \alpha and \beta can be derived by the method of ordinary least squares. The method is called "least squares" because estimates of \alpha and \beta minimize the sum of squared residuals for the given data set. The estimates of \alpha and \beta are often denoted by \widehat{\alpha} and \widehat{\beta} or by the corresponding Roman letters a and b. It can be shown (see Draper and Smith, 1998 for details) that the least squares estimates are given by

\hat{\beta}=\frac{\sum(x_i-\bar{x})(y_i-\bar{y})}{\sum(x_i-\bar{x})^2}

and

\hat{\alpha}=\bar{y}-\hat{\beta}\bar{x}

where \bar{x} is the mean (average) of the x values and \bar{y} is the mean of the y values.
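As a concrete sketch, the two closed-form estimates above translate directly into a few lines of Python (an illustrative helper; the function name is our own):

```python
def simple_ols(xs, ys):
    """Return (alpha_hat, beta_hat) for the model y = alpha + beta*x + error,
    using the ordinary least-squares formulas above."""
    n = len(xs)
    x_bar = sum(xs) / n
    y_bar = sum(ys) / n
    # beta_hat = sum((x - x_bar)(y - y_bar)) / sum((x - x_bar)^2)
    beta_hat = (sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys))
                / sum((x - x_bar) ** 2 for x in xs))
    # alpha_hat = y_bar - beta_hat * x_bar
    alpha_hat = y_bar - beta_hat * x_bar
    return alpha_hat, beta_hat

# On exactly linear data y = 1 + 2x the estimates recover the true parameters.
alpha_hat, beta_hat = simple_ols([1, 2, 3, 4], [3, 5, 7, 9])
```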

Generalizing simple linear regression

The simple model above can be generalized in different ways.

  • The number of predictors can be increased from one to several. See the main article: linear regression.
  • The relationship between the knowns (the xs and ys) and the unknown parameters (\alpha and the \betas) can be nonlinear. See the main article: non-linear regression.
  • The response variable may be non-continuous. For binary (zero or one) variables, there are the probit and logit models. The multivariate probit model makes it possible to estimate jointly the relationship between several binary dependent variables and some independent variables. For categorical variables with more than two values there is the multinomial logit; for ordinal variables with more than two values, there are the ordered logit and ordered probit models. An alternative to such procedures is linear regression based on polychoric or polyserial correlations between the categorical variables; these procedures differ in the assumptions made about the distribution of the variables in the population. If the variable is a non-negative count of occurrences of an event, count models such as Poisson regression or the negative binomial model may be used.
  • The form of the right hand side can be determined from the data. See Nonparametric regression. These approaches require a large number of observations, as the data are used to build the model structure as well as estimate the model parameters. They are usually computationally intensive.
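To make the binary-response case above concrete, here is a hedged sketch of a logit model fitted by gradient ascent on the log-likelihood (a toy illustration with made-up data, not a library routine; real analyses would use an established package):

```python
import math

def fit_logit(xs, ys, lr=0.05, iters=5000):
    """Fit P(y=1 | x) = 1/(1 + exp(-(a + b*x))) by gradient ascent
    on the log-likelihood of a logit (logistic regression) model."""
    a, b = 0.0, 0.0
    for _ in range(iters):
        ps = [1.0 / (1.0 + math.exp(-(a + b * x))) for x in xs]
        # Gradient of the log-likelihood: sum of (y - p) and (y - p)*x.
        a += lr * sum(y - p for y, p in zip(ys, ps))
        b += lr * sum((y - p) * x for x, y, p in zip(xs, ys, ps))
    return a, b

# Toy binary data: the response switches from 0 to 1 as x grows.
xs = [1, 2, 3, 4, 5, 6]
ys = [0, 0, 0, 1, 1, 1]
a, b = fit_logit(xs, ys)
prob = lambda x: 1.0 / (1.0 + math.exp(-(a + b * x)))
```

The fitted slope is positive and the predicted probability of y = 1 rises with x, crossing 0.5 near the middle of the data.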

Regression diagnostics

Once a regression model has been constructed, it is important to confirm the goodness of fit of the model and the statistical significance of the estimated parameters. Commonly used checks of goodness of fit include R-squared, analyses of the pattern of residuals and construction of an ANOVA table. Statistical significance is checked by an F-test of the overall fit, followed by t-tests of individual parameters. Interpretations of these diagnostics rest heavily on the model assumptions. Although examination of the residuals can be used to invalidate a model, the results of a t-test or F-test are meaningless unless the modeling assumptions are satisfied.
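The R-squared check mentioned above can be sketched in a few lines of Python (an illustrative helper; the function name is our own):

```python
def r_squared(ys, fitted):
    """Coefficient of determination: R^2 = 1 - SS_res / SS_tot."""
    y_bar = sum(ys) / len(ys)
    ss_tot = sum((y - y_bar) ** 2 for y in ys)          # total sum of squares
    ss_res = sum((y - f) ** 2 for y, f in zip(ys, fitted))  # residual sum of squares
    return 1.0 - ss_res / ss_tot

ys = [3.0, 5.0, 7.0, 9.0]
perfect = r_squared(ys, ys)           # a perfect fit gives R^2 = 1
mean_only = r_squared(ys, [6.0] * 4)  # fitting only the mean gives R^2 = 0
```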

Estimation of model parameters

The parameters of a regression model can be estimated in many ways. The following list orders some common methods roughly on the basis of how widely used they are in practice:

  • least squares (ordinary, weighted, or generalized)
  • maximum likelihood
  • Bayesian methods
  • least absolute deviations and other robust methods

For a model with normally distributed errors the method of least squares and the method of maximum likelihood coincide; the Gauss-Markov theorem further shows that the least-squares estimator has minimum variance among linear unbiased estimators.

Interpolation and extrapolation

Regression models predict a value of the y variable given known values of the x variables. If the prediction is to be done within the range of values of the x variables used to construct the model this is known as interpolation. Prediction outside the range of the data used to construct the model is known as extrapolation and it is more risky.
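The distinction can be sketched with a simple fit on made-up data (the numbers here are illustrative assumptions, not from the article's data set):

```python
# Toy data roughly following y = 2x, observed only for x in [1, 4].
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 7.8]

# Least-squares slope and intercept, as in the formulas earlier in the article.
x_bar, y_bar = sum(xs) / 4, sum(ys) / 4
b = (sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys))
     / sum((x - x_bar) ** 2 for x in xs))
a = y_bar - b * x_bar
predict = lambda x: a + b * x

inside = predict(2.5)    # interpolation: 2.5 lies within the observed range [1, 4]
outside = predict(10.0)  # extrapolation: 10 is far outside the data, so this
                         # prediction rests entirely on the assumed model form
```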

Assumptions underpinning regression

Regression analysis depends on certain assumptions:

  1. The predictors must be linearly independent, i.e. it must not be possible to express any predictor as a linear combination of the others (see multicollinearity).
  2. The error terms must be independent and normally distributed.
  3. The error terms must have constant variance across observations, the assumption of homoscedasticity.
  4. The sample must be representative of the population if the model is to be used for inference or prediction.

Examples

To illustrate the various goals of regression, we give an example.

Prediction of future observations

The following data set gives the average heights and weights for American women aged 30-39 (source: The World Almanac and Book of Facts, 1975).

Height (in) 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72
Weight (lb) 115 117 120 123 126 129 132 135 139 142 146 150 154 159 164

We would like to see how the weight of these women depends on their height. We are therefore looking for a function \eta such that Y=\eta(X)+\varepsilon, where Y is the weight of the women and X their height. Intuitively, we can guess that if the women's proportions and density are constant, then their weight must depend on the cube of their height.

[Figure: plot of the women's weight against height]

\vec{X} will denote the vector containing all the measured heights (\vec{X}=(58,59,60,\dots)) and \vec{Y}=(115,117,120,\dots) is the vector containing all measured weights. We can suppose the errors are independent of each other and have constant variance, so that the Gauss-Markov assumptions hold. We can therefore use the least-squares estimator, i.e. we are looking for coefficients \beta_0, \beta_1 and \beta_2 satisfying as well as possible (in the sense of the least-squares estimator) the equation:

\vec{Y}=\beta_0 + \beta_1 \vec{X} + \beta_2 \vec{X}^3+\vec{\varepsilon}

Geometrically, what we will be doing is an orthogonal projection of Y on the subspace generated by the variables 1, X and X^3. The matrix X is constructed simply by putting a first column of 1's (the constant term in the model), a column with the original values (the X in the model) and a third column with these values cubed (X^3). The realization of this matrix (i.e. for the data at hand) can be written:

1 x x^3
1 58 195112
1 59 205379
1 60 216000
1 61 226981
1 62 238328
1 63 250047
1 64 262144
1 65 274625
1 66 287496
1 67 300763
1 68 314432
1 69 328509
1 70 343000
1 71 357911
1 72 373248

The matrix (\mathbf{X}^t \mathbf{X})^{-1} (sometimes called the "dispersion matrix"; scaled by \sigma^2 it gives the covariance matrix of the estimator) is:


\left[\begin{matrix}
1.9\cdot10^3&-45&3.5\cdot 10^{-3}\\
-45&1.0&-8.1\cdot 10^{-5}\\
3.5\cdot 10^{-3}&-8.1\cdot 10^{-5}&6.4\cdot 10^{-9}
\end{matrix}\right]

Vector \widehat{\beta}_{LS} is therefore:

\widehat{\beta}_{LS}=(X^tX)^{-1}X^{t}y= (147, -2.0, 4.3\cdot 10^{-4})

hence \eta(X) = 147 - 2.0 X + 4.3\cdot 10^{-4} X^3

[Figure: the data with the fitted cubic regression curve]

The confidence intervals are computed using:

[\widehat{\beta_j}-\widehat{\sigma}\sqrt{s_j}t_{n-p;1-\frac{\alpha}{2}};\widehat{\beta_j}+\widehat{\sigma}\sqrt{s_j}t_{n-p;1-\frac{\alpha}{2}}]

with \widehat{\sigma}=\sqrt{\frac{\sum_i\widehat{\varepsilon}_i^2}{n-p}} the residual standard error and s_j the j-th diagonal element of (\mathbf{X}^t \mathbf{X})^{-1}:

\widehat{\sigma}=0.36
s_1=1.9\cdot 10^3, s_2=1.0, s_3=6.4\cdot 10^{-9}
\alpha=5\%
t_{n-p;1-\frac{\alpha}{2}}=2.2

Therefore, we can say that the 95% confidence intervals are:

\beta_0\in[112 , 181]
\beta_1\in[-2.8 , -1.2]
\beta_2\in[3.6\cdot 10^{-4} , 4.9\cdot 10^{-4}]
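The whole worked example above (the cubic fit, the residual standard error, and the confidence intervals) can be reproduced in pure Python. This is a sketch using Gauss-Jordan elimination for the small 3x3 system; the t-quantile 2.2 is the value t_{12; 0.975} quoted in the text rather than one computed here:

```python
heights = [58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72]
weights = [115, 117, 120, 123, 126, 129, 132, 135, 139, 142, 146, 150, 154, 159, 164]

# Design matrix with columns 1, x, x^3, as in the article.
X = [[1.0, float(x), float(x) ** 3] for x in heights]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(col) for col in zip(*A)]

def inverse(A):
    """Gauss-Jordan inversion with partial pivoting (fine for a 3x3 matrix)."""
    n = len(A)
    M = [row[:] + [1.0 if i == j else 0.0 for j in range(n)]
         for i, row in enumerate(A)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        p = M[col][col]
        M[col] = [v / p for v in M[col]]
        for r in range(n):
            if r != col:
                f = M[r][col]
                M[r] = [a - f * b for a, b in zip(M[r], M[col])]
    return [row[n:] for row in M]

Xt = transpose(X)
XtX_inv = inverse(matmul(Xt, X))                      # the dispersion matrix
Xty = [sum(Xt[i][k] * weights[k] for k in range(len(weights))) for i in range(3)]
beta = [sum(XtX_inv[i][j] * Xty[j] for j in range(3)) for i in range(3)]

# Residual standard error: sigma_hat = sqrt(SSE / (n - p)).
fitted = [sum(b * v for b, v in zip(beta, row)) for row in X]
residuals = [y - f for y, f in zip(weights, fitted)]
n, p = len(weights), 3
sigma_hat = (sum(e * e for e in residuals) / (n - p)) ** 0.5

# 95% confidence intervals: beta_j +/- sigma_hat * sqrt(s_j) * t_{n-p; 0.975}.
t = 2.2  # quoted in the text for n - p = 12 degrees of freedom
cis = [(beta[j] - sigma_hat * (XtX_inv[j][j] ** 0.5) * t,
        beta[j] + sigma_hat * (XtX_inv[j][j] ** 0.5) * t) for j in range(3)]
```

The computed beta lands close to the article's (147, -2.0, 4.3e-4), and the diagonal of XtX_inv matches the s_j values quoted above.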

Notes

  1. Richard A. Berk, Regression Analysis: A Constructive Critique, Sage Publications (2004).
  2. David A. Freedman, Statistical Models: Theory and Practice, Cambridge University Press (2005).
  3. R. Dennis Cook; Sanford Weisberg, "Criticism and Influence Analysis in Regression", Sociological Methodology, Vol. 13 (1982), pp. 313-361.
  4. Francis Galton. "Typical laws of heredity", Nature 15 (1877), 492-495, 512-514, 532-533. (Galton uses the term "reversion" in this paper, which discusses the size of peas.); Francis Galton. Presidential address, Section H, Anthropology. (1885) (Galton uses the term "regression" in this paper, which discusses the height of humans.)
  5. G. Udny Yule. "On the Theory of Correlation", J. Royal Statist. Soc., 1897, p. 812-54. Karl Pearson, G. U. Yule, Norman Blanchard, and Alice Lee. "The Law of Ancestral Heredity", Biometrika (1903). In the work of Yule and Pearson, the joint distribution of the response and explanatory variables is assumed to be Gaussian. This assumption was weakened by R.A. Fisher in his works of 1922 and 1925 (R.A. Fisher, "The goodness of fit of regression formulae, and the distribution of regression coefficients", J. Royal Statist. Soc., 85, 597-612 from 1922 and Statistical Methods for Research Workers from 1925). Fisher assumed that the conditional distribution of the response variable is Gaussian, but the joint distribution need not be. In this respect, Fisher's assumption is closer to Gauss's formulation of 1821.

References

  • Audi, R., Ed. (1996). "curve fitting problem," The Cambridge Dictionary of Philosophy. Cambridge, Cambridge University Press. pp.172-173.
  • William H. Kruskal and Judith M. Tanur, ed. (1978), "Linear Hypotheses," International Encyclopedia of Statistics. Free Press, v. 1,
Evan J. Williams, "I. Regression," pp. 523-41.
Julian C. Stanley, "II. Analysis of Variance," pp. 541-554.
  • Lindley, D.V. (1987). "Regression and correlation analysis," New Palgrave: A Dictionary of Economics, v. 4, pp. 120-23.
  • Birkes, David and Yadolah Dodge, Alternative Methods of Regression. ISBN 0-471-56881-3
  • Chatfield, C. (1993) "Calculating Interval Forecasts," Journal of Business and Economic Statistics, 11. pp. 121-135.
  • Draper, N.R. and Smith, H. (1998). Applied Regression Analysis, Wiley Series in Probability and Statistics.
  • Fox, J. (1997). Applied Regression Analysis, Linear Models and Related Methods. Sage
  • Hardle, W., Applied Nonparametric Regression (1990), ISBN 0-521-42950-1
  • Meade, N. and T. Islam (1995) "Prediction Intervals for Growth Curve Forecasts," Journal of Forecasting, 14, pp. 413-430.
  • Munro, Barbara Hazard (2005) "Statistical Methods for Health Care Research" Lippincott Williams & Wilkins, 5th ed.
  • Gujarati, Basic Econometrics, 4th edition
  • Sykes, A.O. "An Introduction to Regression Analysis" (Inaugural Coase Lecture)
  • S. Kotsiantis, D. Kanellopoulos, P. Pintelas, Local Additive Regression of Decision Stumps, Lecture Notes in Artificial Intelligence, Springer-Verlag, Vol. 3955, SETN 2006, pp. 148 – 157, 2006
  • S. Kotsiantis, P. Pintelas, Selective Averaging of Regression Models, Annals of Mathematics, Computing & TeleInformatics, Vol 1, No 3, 2005, pp. 66-75
