
The analysis of covariance (ANCOVA) is a general linear model with one continuous explanatory variable and one or more factors. ANCOVA merges ANOVA and regression for continuous variables: it tests whether certain factors have an effect after removing the variance for which quantitative predictors (covariates) account. Including a covariate can increase statistical power because the covariate accounts for some of the otherwise unexplained variability.


Assumptions

As with any statistical procedure, ANCOVA makes certain assumptions about the data entered into the model. Only if these assumptions are met, at least approximately, will ANCOVA yield valid results. Specifically, ANCOVA, just like ANOVA, assumes that the dependent variable is normally distributed and that the independent variables are orthogonal. In addition, the covariate must be normally distributed, measured with sufficient reliability, and related to the dependent variable by the same regression slope in every group (homogeneity of regression slopes).
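As a quick, informal check of the normality assumptions, one can run a Shapiro–Wilk test on both the dependent variable and the covariate. A minimal sketch using simulated data (the sample size and distribution parameters are illustrative, not prescriptive):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
y = rng.normal(loc=5.0, scale=1.2, size=40)  # simulated dependent variable
x = rng.normal(loc=3.0, scale=0.8, size=40)  # simulated covariate

# Shapiro-Wilk test: a small p-value casts doubt on normality
for name, sample in (("y", y), ("x", x)):
    stat, p = stats.shapiro(sample)
    print(f"{name}: W = {stat:.3f}, p = {p:.3f}")
```

Such tests are only a screening device: with small samples they have little power to detect non-normality, and with very large samples they flag trivial departures.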

Power Considerations

While the inclusion of a covariate in an ANOVA generally increases statistical power by accounting for some of the variance in the dependent variable, and thus increasing the proportion of variance explained by the independent variables, adding a covariate also reduces the error degrees of freedom (see below). Accordingly, adding a covariate which accounts for very little variance in the dependent variable may actually reduce power.

Equations

One-factor ANCOVA analysis

One-factor analysis is appropriate when comparing three or more populations, that is, k populations. The single factor has k levels, one for each population, and n samples are drawn at random from each population.

Calculating the sum of squared deviates for the independent variable X and the dependent variable Y

The sums of squared deviates (SS) SST_y, SSTr_y, and SSE_y must be calculated for the dependent variable Y using the following equations. The SS values for the covariate X must also be calculated; the two necessary values are SST_x and SSE_x.

The total sum of squares determines the variability of all the samples, where n_T represents the total number of samples:

$$SST_y = \sum_{i=1}^{n_T} \left( y_i - \bar{y}_T \right)^2$$

The sum of squares for treatments determines the variability between populations, where n_k represents the number of factor levels (populations):

$$SSTr_y = \sum_{k=1}^{n_k} n_n \left( \bar{y}_k - \bar{y}_T \right)^2$$

The sum of squares for error determines the variability within each population, where n_n represents the number of samples within a given population:

$$SSE_y = \sum_{k=1}^{n_k} \sum_{n=1}^{n_n} \left( y_{kn} - \bar{y}_k \right)^2$$

The total sum of squares is equal to the sum of squares for treatments plus the sum of squares for error:

$$SST_y = SSTr_y + SSE_y$$

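The partition SST_y = SSTr_y + SSE_y can be verified numerically. A small sketch with made-up data for n_k = 3 populations and n_n = 4 samples each (all values are illustrative):

```python
import numpy as np

# Rows are populations (factor levels), columns are samples
y = np.array([[2.1, 3.9, 6.2, 8.1],
              [3.0, 5.1, 7.0, 8.8],
              [5.2, 7.1, 8.9, 11.0]])

grand_mean = y.mean()                        # mean over all n_T samples
group_means = y.mean(axis=1, keepdims=True)  # one mean per population

SST_y = ((y - grand_mean) ** 2).sum()                          # total
SSTr_y = (y.shape[1] * (group_means - grand_mean) ** 2).sum()  # between groups
SSE_y = ((y - group_means) ** 2).sum()                         # within groups

# The three sums of squared deviates satisfy the partition identity
assert abs(SST_y - (SSTr_y + SSE_y)) < 1e-9
```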
Calculating the covariance of X and Y

The total sum of products of deviates determines the covariance of X and Y across all the data samples; the corresponding error term, SSE_xy, measures it within each population:

$$SST_{xy} = \sum_{i=1}^{n_T} \left( x_i - \bar{x}_T \right)\left( y_i - \bar{y}_T \right)$$

$$SSE_{xy} = \sum_{k=1}^{n_k} \sum_{n=1}^{n_n} \left( x_{kn} - \bar{x}_k \right)\left( y_{kn} - \bar{y}_k \right)$$


Adjusting SST_y

The squared correlation between X and Y is r_T^2:

$$r_T^2 = \frac{\left( SST_{xy} \right)^2}{SST_x \, SST_y}$$

The proportion of covariance is subtracted from the dependent-variable SS values, using the analogous within-group correlation r_E^2 = (SSE_{xy})^2 / (SSE_x SSE_y) for the error term:

$$SST_y' = SST_y \left( 1 - r_T^2 \right)$$

$$SSE_y' = SSE_y \left( 1 - r_E^2 \right)$$

$$SSTr_y' = SST_y' - SSE_y'$$

Adjusting the means of each population k

The mean of each population is adjusted using the pooled within-group regression slope b_E = SSE_{xy} / SSE_x, which shifts every group mean to the grand mean of the covariate:

$$\bar{y}_k' = \bar{y}_k - b_E \left( \bar{x}_k - \bar{x}_T \right)$$

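As an illustration, the adjusted means can be computed directly from the pooled within-group slope. The data below are made up (3 populations, 4 samples each); b_E is the slope defined above:

```python
import numpy as np

# Hypothetical data: rows are populations, columns are samples
x = np.array([[1.0, 2.0, 3.0, 4.0],   # covariate
              [2.0, 3.0, 4.0, 5.0],
              [3.0, 4.0, 5.0, 6.0]])
y = np.array([[2.1, 3.9, 6.2, 8.1],   # dependent variable
              [3.0, 5.1, 7.0, 8.8],
              [5.2, 7.1, 8.9, 11.0]])

# Pooled within-group slope b_E = SSE_xy / SSE_x
xd = x - x.mean(axis=1, keepdims=True)  # deviations from group means
yd = y - y.mean(axis=1, keepdims=True)
b_E = (xd * yd).sum() / (xd ** 2).sum()

# Shift each group mean to the grand mean of the covariate
adjusted_means = y.mean(axis=1) - b_E * (x.mean(axis=1) - x.mean())
print(adjusted_means)
```

Note how groups whose covariate mean sits below the grand mean are adjusted upward, and vice versa.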
Analysis using adjusted sum of squares values

The mean square for treatments uses df_Tr = n_k - 1, and the mean square for error uses df_E = n_T - n_k - 1; the error degrees of freedom are one less than in ANOVA, to account for the covariate:

$$MSTr = \frac{SSTr_y'}{n_k - 1} \qquad MSE = \frac{SSE_y'}{n_T - n_k - 1}$$

The F statistic is

$$F = \frac{MSTr}{MSE}$$

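Putting the steps together, the adjusted F statistic can be computed end to end. The sketch below follows the equations in this section on made-up data (3 populations, 4 samples each); none of the numbers come from a real study:

```python
import numpy as np

x = np.array([[1.0, 2.0, 3.0, 4.0],   # covariate
              [2.0, 3.0, 4.0, 5.0],
              [3.0, 4.0, 5.0, 6.0]])
y = np.array([[2.1, 3.9, 6.2, 8.1],   # dependent variable
              [3.0, 5.1, 7.0, 8.8],
              [5.2, 7.1, 8.9, 11.0]])
n_k, n_n = y.shape          # number of populations, samples per population
n_T = n_k * n_n             # total number of samples

def sums_of_squares(a):
    """Return (total, within-group) sums of squared deviates."""
    total = ((a - a.mean()) ** 2).sum()
    error = ((a - a.mean(axis=1, keepdims=True)) ** 2).sum()
    return total, error

SST_y, SSE_y = sums_of_squares(y)
SST_x, SSE_x = sums_of_squares(x)

# Sums of products of deviates (covariance terms)
SST_xy = ((x - x.mean()) * (y - y.mean())).sum()
SSE_xy = ((x - x.mean(axis=1, keepdims=True)) *
          (y - y.mean(axis=1, keepdims=True))).sum()

# Squared correlations and adjusted sums of squares
r_T2 = SST_xy ** 2 / (SST_x * SST_y)
r_E2 = SSE_xy ** 2 / (SSE_x * SSE_y)
SST_y_adj = SST_y * (1 - r_T2)
SSE_y_adj = SSE_y * (1 - r_E2)
SSTr_y_adj = SST_y_adj - SSE_y_adj

# Degrees of freedom: the error term loses one df for the covariate
df_Tr = n_k - 1
df_E = n_T - n_k - 1
F = (SSTr_y_adj / df_Tr) / (SSE_y_adj / df_E)
print(f"F({df_Tr}, {df_E}) = {F:.2f}")
```

The resulting F is compared against the F distribution with (n_k - 1, n_T - n_k - 1) degrees of freedom; for example, scipy.stats.f.sf(F, df_Tr, df_E) would give the p-value.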

This page uses Creative Commons Licensed content from Wikipedia.
