An F-test is any statistical test in which the test statistic has an F-distribution if the null hypothesis is true. The name was coined by George W. Snedecor in honour of Sir Ronald A. Fisher, who initially developed the statistic as the variance ratio in the 1920s^{[1]}. Examples include:
 The hypothesis that the means of multiple normally distributed populations, all having the same standard deviation, are equal. This is perhaps the best-known of the hypotheses tested by means of an F-test, and the simplest problem in the analysis of variance (ANOVA).
 The hypothesis that a proposed regression model fits the data well. See Lack-of-fit sum of squares.
 The hypothesis that the standard deviations of two normally distributed populations are equal.
Note that if it is equality of variances (or standard deviations) that is being tested, the F-test is extremely non-robust to non-normality. That is, even if the data display only modest departures from the normal distribution, the test is unreliable and should not be used.
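The two-sample variance-ratio test can be sketched directly: divide the larger sample variance by the smaller and refer the ratio to an F-distribution. The data below are illustrative, not from the article, and the normality caveat above applies.

```python
import numpy as np
from scipy import stats

# Two hypothetical samples, assumed drawn from normal populations.
x = np.array([21.0, 23.5, 19.8, 22.1, 24.3, 20.7, 22.9])
y = np.array([20.2, 21.1, 20.8, 21.5, 20.4, 21.9, 20.9])

# Variance-ratio statistic (sample variances, ddof=1).
f = np.var(x, ddof=1) / np.var(y, ddof=1)
dfx, dfy = len(x) - 1, len(y) - 1

# Two-sided p-value from the F-distribution; valid only under normality.
p = 2 * min(stats.f.cdf(f, dfx, dfy), stats.f.sf(f, dfx, dfy))
print(f, p)
```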
Formula and calculation
The formula for an F-test in multiple-comparison ANOVA problems is:
F = (between-group variability) / (within-group variability)
(Note: when there are only two groups for the F-test, F = t^2, where t is the Student's t statistic.)
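The F = t^2 relationship can be checked numerically: a one-way ANOVA on two groups and a pooled-variance t-test on the same two groups give the same inference. The data here are illustrative.

```python
import numpy as np
from scipy import stats

# Two hypothetical groups.
g1 = np.array([6.0, 8.0, 4.0, 5.0, 3.0, 4.0])
g2 = np.array([8.0, 12.0, 9.0, 11.0, 6.0, 8.0])

f_stat, f_p = stats.f_oneway(g1, g2)      # one-way ANOVA with two groups
t_stat, t_p = stats.ttest_ind(g1, g2)     # pooled-variance (equal-variance) t-test

print(f_stat, t_stat ** 2)  # the two agree: F = t^2
print(f_p, t_p)             # and the p-values coincide
```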
In many cases, the Ftest statistic can be calculated through a straightforward process. In the case of regression: consider two models, 1 and 2, where model 1 is nested within model 2. That is, model 1 has p_{1} parameters, and model 2 has p_{2} parameters, where p_{2} > p_{1}. (Any constant parameter in the model is included when counting the parameters. For instance, the simple linear model y = mx + b has p = 2 under this convention.) If there are n data points to estimate parameters of both models from, then calculate F as
 F = [(RSS_1 − RSS_2) / (p_2 − p_1)] / [RSS_2 / (n − p_2)]^{[2]}
where RSS_i is the residual sum of squares of model i. If the regression model has been calculated with weights, then replace RSS_i with χ^2, the weighted sum of squared residuals. F here follows an F-distribution with (p_2 − p_1, n − p_2) degrees of freedom; the probability that the decrease in χ^2 associated with the addition of p_2 − p_1 parameters is solely due to chance is given by the probability associated with the F-distribution at that point. The null hypothesis, that none of the additional p_2 − p_1 parameters differs from zero, is rejected if the calculated F is greater than the critical value of F for some desired rejection probability (e.g. 0.05).
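The nested-model comparison can be sketched with ordinary least squares: fit an intercept-only model (p_1 = 1) and a straight line (p_2 = 2) to the same data, then form F from the residual sums of squares. The simulated data below are illustrative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 30)
y = 2.0 + 1.5 * x + rng.normal(scale=1.0, size=x.size)

# Model 1: intercept only (p1 = 1). Model 2: y = b + m*x (p2 = 2); model 1 is nested in model 2.
X1 = np.ones((x.size, 1))
X2 = np.column_stack([np.ones_like(x), x])

def rss(X, y):
    """Residual sum of squares of the least-squares fit of y on X."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return float(np.sum((y - X @ beta) ** 2))

rss1, rss2 = rss(X1, y), rss(X2, y)
p1, p2, n = 1, 2, x.size

# F = [(RSS1 - RSS2)/(p2 - p1)] / [RSS2/(n - p2)]
F = ((rss1 - rss2) / (p2 - p1)) / (rss2 / (n - p2))
p_value = stats.f.sf(F, p2 - p1, n - p2)
print(F, p_value)
```

With a true slope of 1.5 and unit noise, the extra slope parameter clearly earns its keep, so F is large and the p-value is far below 0.05.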
Table of F-test critical values
A table of F-test critical values is included in most statistical texts.
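Instead of a printed table, the critical value can be computed as a quantile of the F-distribution; for example, the value used in the worked example below, F_crit(2, 15) at α = .05, is the 95th percentile with 2 and 15 degrees of freedom.

```python
from scipy import stats

# 95th percentile of F with 2 numerator and 15 denominator degrees of freedom.
f_crit = stats.f.ppf(0.95, dfn=2, dfd=15)
print(round(f_crit, 2))  # 3.68
```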
One-way ANOVA example
a_1   a_2   a_3
 6     8    13
 8    12     9
 4     9    11
 5    11     8
 3     6     7
 4     8    12
a_1, a_2, and a_3 are the three levels of the factor that you are studying. To calculate the F-ratio:
Step 1: calculate the A_i values, where A_i is the sum of the scores in condition a_i. So:
 A_1 = 6 + 8 + 4 + 5 + 3 + 4 = 30
 A_2 = 8 + 12 + 9 + 11 + 6 + 8 = 54
 A_3 = 13 + 9 + 11 + 8 + 7 + 12 = 60
Step 2: calculate Ȳ_Ai, being the average of the values of condition a_i:
 Ȳ_A1 = 30/6 = 5   Ȳ_A2 = 54/6 = 9   Ȳ_A3 = 60/6 = 10
Step 3: calculate these values:
 Total: T = A_1 + A_2 + A_3 = 30 + 54 + 60 = 144
 Average overall score: Ȳ_T = T/(a × n) = 144/18 = 8
 where a = 3 is the number of conditions and n = 6 is the number of participants in each condition.
 ΣY² = 6² + 8² + 4² + … + 12² = 1304
 This is every score in every condition squared and then summed.
Step 4: calculate the sums of squares:
 SS_A = n × Σ(Ȳ_Ai − Ȳ_T)² = 6 × [(5 − 8)² + (9 − 8)² + (10 − 8)²] = 6 × 14 = 84
 SS_S/A = ΣY² − Σ(A_i²)/n = 1304 − (30² + 54² + 60²)/6 = 1304 − 1236 = 68
Step 5: the degrees of freedom are now calculated:
 df_A = a − 1 = 3 − 1 = 2
 df_S/A = a(n − 1) = 3 × 5 = 15
Step 6: the mean squares are calculated:
 MS_A = SS_A/df_A = 84/2 = 42
 MS_S/A = SS_S/A / df_S/A = 68/15 ≈ 4.53
Step 7: finally, the F-ratio is now ready:
 F = MS_A/MS_S/A = 42/4.53 ≈ 9.27
Step 8: look up the F_crit value for the problem:
F_crit(2, 15) = 3.68 at α = .05, so since our F value 9.27 ≥ 3.68, the result is significant and one can reject the null hypothesis.
Note: the F(x, y) notation means that there are x degrees of freedom in the numerator and y degrees of freedom in the denominator.
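The worked example above can be reproduced in a few lines and cross-checked against SciPy's one-way ANOVA routine:

```python
import numpy as np
from scipy import stats

# The three conditions from the table above, n = 6 scores each.
a1 = [6, 8, 4, 5, 3, 4]
a2 = [8, 12, 9, 11, 6, 8]
a3 = [13, 9, 11, 8, 7, 12]
groups = [np.array(g, dtype=float) for g in (a1, a2, a3)]
n = 6                                   # participants per condition
a = 3                                   # number of conditions
grand_mean = np.mean(np.concatenate(groups))

ss_between = n * sum((g.mean() - grand_mean) ** 2 for g in groups)  # SS_A = 84
ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)        # SS_S/A = 68

ms_between = ss_between / (a - 1)       # MS_A = 42
ms_within = ss_within / (a * (n - 1))   # MS_S/A = 68/15
F = ms_between / ms_within

# Cross-check against scipy's one-way ANOVA.
f_check, p = stats.f_oneway(*groups)
print(F, f_check, p)
```

The exact ratio is 630/68 ≈ 9.265, which matches the 9.27 quoted above once the mean square is rounded to 4.53, and the p-value falls below .05, agreeing with the table lookup.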
Footnotes
 ↑ Lomax, Richard G. (2007). Statistical Concepts: A Second Course, p. 10. ISBN 0-8058-5850-4
 ↑ GraphPad Software Inc. "How the F test works to compare models".
This page uses Creative Commons Licensed content from Wikipedia.