
In statistics, a t-statistic is, broadly speaking, a statistic whose sampling distribution is a Student's t-distribution. It is a parametric statistic, most frequently used in statistical hypothesis testing in Student's t-tests, but it can be defined and used independently of hypothesis testing.

## Definition

Broadly speaking, a t-statistic is any statistic whose sampling distribution is a Student's t-distribution. More narrowly, these are often defined by taking a statistic k whose sampling distribution is a normal distribution, then subtracting the expected value of the statistic (the mean $\mu_k$ of its sampling distribution), and dividing by an estimate of its standard error (an estimate of the standard deviation of the sampling distribution):[1]

$t_k := \frac{k - \mu_k}{\widehat{SE}_k}$

In the case of a single-sample t-statistic, the statistic is the sample mean $\bar{x}$ of n draws from a normal distribution, the expected value is the population mean $\mu$, and the estimate of the standard error is the sample standard deviation s divided by $\sqrt{n}$, which yields:

$t := \frac{\bar{x} - \mu}{s/\sqrt{n}}$

This is the quantity most commonly referred to as the t-statistic.
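As a quick illustration, the single-sample t-statistic can be computed directly from the definition; this is a minimal sketch using only the Python standard library, with made-up data values:

```python
import math
import statistics

def one_sample_t(sample, mu0):
    """One-sample t-statistic: t = (xbar - mu0) / (s / sqrt(n))."""
    n = len(sample)
    xbar = statistics.mean(sample)
    s = statistics.stdev(sample)  # sample standard deviation (n - 1 denominator)
    return (xbar - mu0) / (s / math.sqrt(n))

# Hypothetical measurements; test against a hypothesized mean of 5.0
data = [5.1, 4.9, 5.3, 5.0, 4.8, 5.2]
t = one_sample_t(data, 5.0)
```

Here the sample mean is 5.05 and the sample standard deviation about 0.187, so t is roughly 0.65, i.e. the sample mean sits well within the range expected under the hypothesized mean.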

## Use

Most frequently, t-statistics are used in Student's t-tests, a form of statistical hypothesis testing.

The key property of the t-statistic is that it is a pivotal quantity: although it is defined in terms of the sample mean, its sampling distribution does not depend on the unknown population parameters, and thus it can be used regardless of what these may be.

One can also divide a residual by the sample standard deviation:

$g(x,X) = \frac{x - \overline{X}}{s}$

to compute an estimate of the number of standard deviations a given sample lies from the mean. This is a sample analogue of the z-score, which requires the population parameters to be known.
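This sample analogue of the z-score is straightforward to compute; a minimal sketch with illustrative data:

```python
import statistics

def sample_z(x, sample):
    """Estimated number of standard deviations x lies from the sample mean."""
    return (x - statistics.mean(sample)) / statistics.stdev(sample)

# Illustrative data: mean is 80, sample standard deviation about 7.9
scores = [70, 75, 80, 85, 90]
z = sample_z(90, scores)
```

For this data the value 90 lies about 1.26 estimated standard deviations above the sample mean; a true z-score would instead divide by the population standard deviation, which is typically unknown.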

### Prediction

For more details on this topic, see Prediction interval#Normal distribution.

Given a normal distribution $N(\mu,\sigma^2)$ with unknown mean and variance, the t-statistic of a future observation $X_{n+1},$ after one has made n observations, is an ancillary statistic – a pivotal quantity (does not depend on the values of μ and σ²) that is a statistic (computed from observations). This allows one to compute a frequentist prediction interval (a predictive confidence interval), via the following t-distribution:

$\frac{X_{n+1}-\overline{X}_n}{S_n\sqrt{1+1/n}} \sim T^{n-1}$

where $T^{n-1}$ denotes Student's t-distribution with $n-1$ degrees of freedom, and $\overline{X}_n$ and $S_n$ are the sample mean and sample standard deviation of the first n observations.

Solving for $X_{n+1}$ yields the prediction distribution

$\overline{X}_n + S_n\sqrt{1+1/n} \cdot T^{n-1}$

from which one may compute predictive confidence intervals – given a probability p, one may compute intervals such that 100p% of the time, the next observation $X_{n+1}$ will fall in that interval.
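The interval above is simple to compute in practice. A minimal sketch using only the Python standard library; the data are made up, and the critical value 2.262 is the standard two-sided 95% value for 9 degrees of freedom taken from a t-table (a library such as SciPy could compute it instead):

```python
import math
import statistics

def prediction_interval(sample, t_crit):
    """Prediction interval xbar +/- t_crit * s * sqrt(1 + 1/n) for the next draw."""
    n = len(sample)
    xbar = statistics.mean(sample)
    s = statistics.stdev(sample)
    half = t_crit * s * math.sqrt(1 + 1 / n)
    return (xbar - half, xbar + half)

# t_{0.975, 9} ~ 2.262 from a t-table (n = 10 observations, 9 degrees of freedom)
data = [9.8, 10.1, 10.0, 9.9, 10.2, 10.0, 9.7, 10.3, 10.1, 9.9]
lo, hi = prediction_interval(data, 2.262)
```

For this data (mean 10.0, s ≈ 0.183) the 95% prediction interval is roughly (9.57, 10.43): an interval expected to contain the next observation 95% of the time.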

## History

For more details on this topic, see Student's t-test.

The term "t-statistic" is abbreviated from "test statistic", while "Student" was the pen name of William Sealy Gosset, who introduced the t-statistic and t-test in 1908, while working for the Guinness brewery in Dublin, Ireland.

## Related concepts

- **z-score**: If the population parameters are known, then rather than computing the t-statistic, one can compute the z-score; analogously, rather than using a t-test, one uses a z-test. This is rare outside of standardized testing.
- **Studentized residual**: In regression analysis, the standard errors of the estimators at different data points vary (compare the middle versus endpoints of a simple linear regression), and thus one must divide the different residuals by different estimates for the error, yielding what are called studentized residuals.
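The leverage-dependent scaling of studentized residuals can be made concrete for simple linear regression, where the leverage of point i is $h_i = 1/n + (x_i - \bar{x})^2 / \sum_j (x_j - \bar{x})^2$. A minimal sketch (internally studentized residuals, illustrative data):

```python
import math
import statistics

def studentized_residuals(xs, ys):
    """Internally studentized residuals for simple linear regression y = a + b*x."""
    n = len(xs)
    xbar = statistics.mean(xs)
    ybar = statistics.mean(ys)
    sxx = sum((x - xbar) ** 2 for x in xs)
    b = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / sxx
    a = ybar - b * xbar
    residuals = [y - (a + b * x) for x, y in zip(xs, ys)]
    s2 = sum(e ** 2 for e in residuals) / (n - 2)  # residual variance estimate
    out = []
    for x, e in zip(xs, residuals):
        h = 1 / n + (x - xbar) ** 2 / sxx  # leverage: largest at the endpoints
        out.append(e / math.sqrt(s2 * (1 - h)))
    return out

xs = [1, 2, 3, 4, 5]
ys = [2.1, 3.9, 6.2, 7.8, 10.1]
r = studentized_residuals(xs, ys)
```

Because the leverage h varies with the distance of x from the mean, residuals of equal raw size receive different scalings at the endpoints than in the middle, which is exactly the point made above.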