# Analysis of covariance

The analysis of covariance (ANCOVA) is a general linear model with one continuous explanatory variable and one or more factors. ANCOVA is a merger of ANOVA and regression for continuous variables. ANCOVA tests whether certain factors have an effect after removing the variance for which quantitative predictors (covariates) account. The inclusion of covariates can increase statistical power because a covariate accounts for some of the variability that would otherwise be left unexplained.

## Assumptions

Like any statistical procedure, ANCOVA makes certain assumptions about the data entered into the model. Only if these assumptions are met, at least approximately, will ANCOVA yield valid results. Specifically, ANCOVA, just like ANOVA, assumes that the dependent variable is normally distributed and that the independent variable(s) are orthogonal. In addition, the covariate must be normally distributed and measured with sufficient reliability.

## Power Considerations

While the inclusion of a covariate in an ANOVA generally increases statistical power, by accounting for some of the variance in the dependent variable and thus increasing the proportion of variance explained by the independent variables, adding a covariate also reduces the degrees of freedom (see below). Accordingly, adding a covariate that accounts for very little variance in the dependent variable may actually reduce power.

## Equations

#### One-factor ANCOVA analysis

One-factor analysis is appropriate when comparing several populations, say k of them. The single factor has k levels, one for each of the k populations, and n samples are drawn at random from each population.

#### Calculating the sum of squared deviates for the independent variable X and the dependent variable Y

The sums of squared deviates (SS) $SST_y$, $SSTr_y$, and $SSE_y$ must be calculated for the dependent variable, Y, using the following equations. The SS for the covariate must also be calculated; the two necessary values are $SST_x$ and $SSE_x$.

The total sum of squares determines the variability of all the samples. $n_T$ represents the total number of samples:

$SST_y=\sum_{i=1}^n\sum_{j=1}^kY_{ij}^2-\frac{\left(\sum_{i=1}^n\sum_{j=1}^kY_{ij}\right)^2}{n_T}$

The sum of squares for treatments measures the variability between populations (the levels of the factor). $n_n$ represents the number of samples within each population:

$SSTr_y=\sum_{j=1}^k\frac{\left(\sum_{i=1}^nY_{ij}\right)^2}{n_n}-\frac{\left(\sum_{i=1}^n\sum_{j=1}^kY_{ij}\right)^2}{n_T}$

The sum of squares for error measures the variability within each population:

$SSE_y=\sum_{j=1}^k\left(\sum_{i=1}^nY_{ij}^2-\frac{\left(\sum_{i=1}^nY_{ij}\right)^2}{n_n}\right).$

The total sum of squares is equal to the sum of the sum of squares for treatments and the sum of squares for error:

$SST_y=SSTr_y+SSE_y.\,$
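
The decomposition above can be checked numerically. A minimal sketch in plain Python, using invented numbers (k = 3 populations, n = 4 samples drawn from each):

```python
# Numeric check of SST_y = SSTr_y + SSE_y on hypothetical data.
Y = [
    [12.0, 14.0, 11.0, 13.0],  # population 1
    [15.0, 17.0, 16.0, 14.0],  # population 2
    [10.0, 9.0, 11.0, 12.0],   # population 3
]
k = len(Y)        # number of populations (factor levels)
n_n = len(Y[0])   # samples within each population
n_T = k * n_n     # total number of samples

grand_sum = sum(sum(group) for group in Y)
sum_sq = sum(y * y for group in Y for y in group)

# Total variability of all samples
SST = sum_sq - grand_sum ** 2 / n_T
# Variability between populations
SSTr = sum(sum(group) ** 2 / n_n for group in Y) - grand_sum ** 2 / n_T
# Variability within each population
SSE = sum(sum(y * y for y in group) - sum(group) ** 2 / n_n for group in Y)

assert abs(SST - (SSTr + SSE)) < 1e-9
```

With balanced groups, the identity holds exactly up to floating-point error.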

#### Calculating the covariance of X and Y

The total sum of cross products measures the covariance of X and Y across all the data samples:

$SCT=\sum_{i=1}^n\sum_{j=1}^kX_{ij}Y_{ij}-\frac{\left(\sum_{i=1}^n\sum_{j=1}^kX_{ij}\right)\left(\sum_{i=1}^n\sum_{j=1}^kY_{ij}\right)}{n_T}$

The sum of cross products for error measures the covariance of X and Y within each population:

$SCE=\sum_{j=1}^k\left(\sum_{i=1}^nX_{ij}Y_{ij}-\frac{\left(\sum_{i=1}^nX_{ij}\right)\left(\sum_{i=1}^nY_{ij}\right)}{n_n}\right)$

The squared correlation between X and Y is $r_T^2$ across all samples and $r_n^2$ within populations:

$r_T^2=\frac{SCT^2}{SST_xSST_y}$
$r_n^2=\frac{SCE^2}{SSE_xSSE_y}$
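
A small worked sketch of the cross-product sums and squared correlations in plain Python, on invented data (k = 2 populations, n = 3 samples each; X is the covariate). Within each group Y is an exact linear function of X, so the within-group squared correlation comes out as 1:

```python
# Hypothetical data: Y = 2X in group 1, Y = 2X + 1 in group 2.
X = [[1.0, 2.0, 3.0], [2.0, 3.0, 4.0]]
Y = [[2.0, 4.0, 6.0], [5.0, 7.0, 9.0]]
k, n_n = len(X), len(X[0])
n_T = k * n_n

def flat(M):
    return [v for row in M for v in row]

sx, sy = sum(flat(X)), sum(flat(Y))

# SCT: total cross product of X and Y over all samples
SCT = sum(x * y for x, y in zip(flat(X), flat(Y))) - sx * sy / n_T
# SCE: cross product of X and Y within each population
SCE = sum(
    sum(x * y for x, y in zip(gx, gy)) - sum(gx) * sum(gy) / n_n
    for gx, gy in zip(X, Y)
)

def ss(M):
    """Total and within-group sums of squares for one variable."""
    tot = sum(v * v for v in flat(M)) - sum(flat(M)) ** 2 / n_T
    err = sum(sum(v * v for v in row) - sum(row) ** 2 / n_n for row in M)
    return tot, err

SST_x, SSE_x = ss(X)
SST_y, SSE_y = ss(Y)
r_T2 = SCT ** 2 / (SST_x * SST_y)  # squared correlation over all samples
r_n2 = SCE ** 2 / (SSE_x * SSE_y)  # squared correlation within populations
```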

The proportion of variance shared with the covariate is removed from the dependent-variable SS values:

$SST_{yadj}=SST_y(1-r_T^2)\,$
$SSE_{yadj}=SSE_y(1-r_n^2)$
$SSTr_{yadj}=SST_{yadj}-SSE_{yadj}$

#### Adjusting the means of each population k

The mean of each population is adjusted along the pooled within-group regression slope $SCE/SSE_x$, in the following manner:

$M_{y_i\,adj}=M_{y_i}-\frac{SCE}{SSE_x}(M_{x_i}-M_{x_T})$
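
A sketch of this adjustment in plain Python, with invented summary values (the pooled within-group slope is taken as $SCE/SSE_x$):

```python
# Hypothetical summary statistics for two balanced groups.
M_y = [10.0, 14.0]           # unadjusted group means of Y
M_x = [3.0, 5.0]             # group means of the covariate X
M_xT = sum(M_x) / len(M_x)   # grand mean of X (balanced groups assumed)
SCE, SSE_x = 6.0, 3.0        # within-group cross product and covariate SS
b_w = SCE / SSE_x            # pooled within-group regression slope

M_y_adj = [m_y - b_w * (m_x - M_xT) for m_y, m_x in zip(M_y, M_x)]
# Group 1 sits below the covariate grand mean, so its mean is shifted up;
# group 2 sits above it, so its mean is shifted down.
```

With these numbers the two adjusted means coincide: the entire observed mean difference is attributable to the covariate.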

#### Analysis using adjusted sum of squares values

The mean squares are calculated from the adjusted sums of squares. The treatment degrees of freedom are $df_{Tr}=k-1$, and the error degrees of freedom are $df_E=n_T-k-1$, one less than in ANOVA to account for the covariate:

$MSTr=\frac{SSTr_{yadj}}{df_{Tr}}$
$MSE=\frac{SSE_{yadj}}{df_E}$

The F statistic is

$F_{df_{Tr},df_E}=\frac{\mathrm{MSTr}}{\mathrm{MSE}}.$
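
Putting the pieces together, a minimal end-to-end sketch of the one-factor ANCOVA F statistic in plain Python, on invented data (k = 3 populations, n = 4 samples each; X is the covariate, Y the outcome):

```python
# Hypothetical data: each row is one population.
X = [[1, 2, 3, 4], [2, 3, 4, 5], [1, 3, 5, 7]]
Y = [[3, 5, 6, 9], [6, 8, 9, 11], [4, 7, 11, 14]]
k, n_n = len(X), len(X[0])
n_T = k * n_n

def flat(M):
    return [float(v) for row in M for v in row]

def total_ss(M):
    v = flat(M)
    return sum(x * x for x in v) - sum(v) ** 2 / n_T

def error_ss(M):
    return sum(
        sum(float(x) ** 2 for x in row) - sum(row) ** 2 / n_n for row in M
    )

def cross(A, B, within):
    if within:  # SCE: cross product within each population
        return sum(
            sum(a * b for a, b in zip(ra, rb)) - sum(ra) * sum(rb) / n_n
            for ra, rb in zip(A, B)
        )
    # SCT: cross product over all samples
    fa, fb = flat(A), flat(B)
    return sum(a * b for a, b in zip(fa, fb)) - sum(fa) * sum(fb) / n_T

SST_y, SSE_y = total_ss(Y), error_ss(Y)
SST_x, SSE_x = total_ss(X), error_ss(X)
SCT, SCE = cross(X, Y, within=False), cross(X, Y, within=True)

r_T2 = SCT ** 2 / (SST_x * SST_y)
r_n2 = SCE ** 2 / (SSE_x * SSE_y)

# Adjusted sums of squares and degrees of freedom
SST_adj = SST_y * (1 - r_T2)
SSE_adj = SSE_y * (1 - r_n2)
SSTr_adj = SST_adj - SSE_adj
df_Tr, df_E = k - 1, n_T - k - 1  # one error df lost to the covariate

F = (SSTr_adj / df_Tr) / (SSE_adj / df_E)
```

The resulting F would be compared against the F distribution with $(df_{Tr}, df_E)$ degrees of freedom.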