In probability theory and statistics, partial correlation measures the degree of association between two random variables, with the effect of a set of controlling random variables removed.
Formal definition
Formally, the partial correlation between X and Y given a set of n controlling variables Z = {Z_{1}, Z_{2}, …, Z_{n}}, written ρ_{XY·Z}, is the correlation between the residuals R_{X} and R_{Y} resulting from the linear regression of X on Z and of Y on Z, respectively. In fact, the first-order partial correlation (i.e., when n = 1) is simply the difference between a correlation and the product of the removable correlations, divided by the product of the coefficients of alienation of the removable correlations. The coefficient of alienation, and its relation to joint variance through correlation, are discussed in Guilford (1973, pp. 344–345).^{[1]}
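In symbols, for a single controlling variable Z (so n = 1), this verbal description corresponds to the standard first-order formula, where $\sqrt{1-\rho^2}$ is the coefficient of alienation of a correlation $\rho$:

$$\rho_{XY\cdot Z} = \frac{\rho_{XY} - \rho_{XZ}\,\rho_{YZ}}{\sqrt{1-\rho_{XZ}^{2}}\,\sqrt{1-\rho_{YZ}^{2}}}$$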
Computation
Using linear regression
A simple way to compute the sample partial correlation for some data is to solve the two associated linear regression problems, get the residuals, and calculate the correlation between the residuals. If we write x_{i}, y_{i} and z_{i} to denote i.i.d. samples of some joint probability distribution over X, Y and Z, solving the linear regression problem amounts to finding n-dimensional coefficient vectors $\mathbf{w}_X^{*}$ and $\mathbf{w}_Y^{*}$ such that

$$\mathbf{w}_X^{*} = \arg\min_{\mathbf{w}} \sum_{i=1}^{N} \left(x_i - \langle \mathbf{w}, \mathbf{z}_i \rangle\right)^2, \qquad \mathbf{w}_Y^{*} = \arg\min_{\mathbf{w}} \sum_{i=1}^{N} \left(y_i - \langle \mathbf{w}, \mathbf{z}_i \rangle\right)^2,$$

with N being the number of samples and $\langle \mathbf{v}, \mathbf{w} \rangle$ the scalar product between the vectors v and w. Note that in some implementations the regression includes a constant term, so the matrix of regressors would have an additional column of ones.

The residuals are then

$$r_{X,i} = x_i - \langle \mathbf{w}_X^{*}, \mathbf{z}_i \rangle, \qquad r_{Y,i} = y_i - \langle \mathbf{w}_Y^{*}, \mathbf{z}_i \rangle,$$

and the sample partial correlation is

$$\hat{\rho}_{XY\cdot Z} = \frac{N \sum_{i} r_{X,i} r_{Y,i} - \sum_{i} r_{X,i} \sum_{i} r_{Y,i}}{\sqrt{N \sum_{i} r_{X,i}^2 - \left(\sum_{i} r_{X,i}\right)^2}\,\sqrt{N \sum_{i} r_{Y,i}^2 - \left(\sum_{i} r_{Y,i}\right)^2}}.$$
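As a minimal sketch of this procedure (assuming NumPy; the function name and test data are illustrative, not part of any standard library):

```python
# x and y are 1-D sample arrays of length N; Z is an (N, n) matrix of controls.
import numpy as np

def sample_partial_corr(x, y, Z):
    """Sample partial correlation: correlate the residuals of x ~ Z and y ~ Z."""
    Z1 = np.column_stack([Z, np.ones(len(x))])    # constant term: column of ones
    w_x, *_ = np.linalg.lstsq(Z1, x, rcond=None)  # least-squares coefficients
    w_y, *_ = np.linalg.lstsq(Z1, y, rcond=None)
    r_x = x - Z1 @ w_x                            # residuals
    r_y = y - Z1 @ w_y
    return np.corrcoef(r_x, r_y)[0, 1]

# Example: x and y are associated only through Z, so the partial
# correlation given Z should be close to zero.
rng = np.random.default_rng(0)
Z = rng.normal(size=(1000, 2))
x = Z @ np.array([1.0, -1.0]) + rng.normal(size=1000)
y = Z @ np.array([0.5, 2.0]) + rng.normal(size=1000)
print(sample_partial_corr(x, y, Z))
```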
Using the recursive formula
It can be computationally expensive to solve the linear regression problems. In fact, the nth-order partial correlation (i.e., with |Z| = n) can be easily computed from three (n − 1)th-order partial correlations. The zeroth-order partial correlation ρ_{XY·Ø} is defined to be the regular correlation coefficient ρ_{XY}.
It holds, for any Z_{0} ∈ Z, that

$$\rho_{XY\cdot \mathbf{Z}} = \frac{\rho_{XY\cdot \mathbf{Z}\setminus\{Z_0\}} - \rho_{XZ_0\cdot \mathbf{Z}\setminus\{Z_0\}}\,\rho_{YZ_0\cdot \mathbf{Z}\setminus\{Z_0\}}}{\sqrt{1 - \rho_{XZ_0\cdot \mathbf{Z}\setminus\{Z_0\}}^{2}}\,\sqrt{1 - \rho_{YZ_0\cdot \mathbf{Z}\setminus\{Z_0\}}^{2}}}.$$
Naïvely implementing this computation as a recursive algorithm yields an exponential time complexity. However, this computation has the overlapping-subproblems property, so using dynamic programming, or simply caching the results of the recursive calls, yields a complexity of $\mathcal{O}(n^3)$.
Note that in the case where Z is a single variable, this reduces to

$$\rho_{XY\cdot Z} = \frac{\rho_{XY} - \rho_{XZ}\,\rho_{YZ}}{\sqrt{1 - \rho_{XZ}^{2}}\,\sqrt{1 - \rho_{YZ}^{2}}}.$$
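A minimal sketch of the recursive formula with memoization (assuming NumPy; `corr` is a hypothetical precomputed zeroth-order correlation matrix of all variables, and the function name is illustrative):

```python
from functools import lru_cache
import numpy as np

def partial_corr_recursive(corr, i, j, controls):
    """Partial correlation of variables i and j given the tuple `controls`."""
    @lru_cache(maxsize=None)          # caching gives the O(n^3) complexity
    def rho(a, b, z):
        if not z:                     # zeroth order: the ordinary correlation
            return corr[a, b]
        z0, rest = z[0], z[1:]        # remove one controlling variable
        r_ab, r_az, r_bz = rho(a, b, rest), rho(a, z0, rest), rho(b, z0, rest)
        return (r_ab - r_az * r_bz) / np.sqrt((1 - r_az**2) * (1 - r_bz**2))
    return rho(i, j, tuple(controls))
```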
Using matrix inversion
In $\mathcal{O}(n^3)$ time, another approach allows all partial correlations to be computed between any two variables X_{i} and X_{j} of a set V of cardinality n, given all others, i.e., $\rho_{X_i X_j \cdot V \setminus \{X_i, X_j\}}$, provided the correlation matrix (or alternatively the covariance matrix) Ω = (ω_{ij}), where ω_{ij} = ρ_{XiXj}, is invertible. If we define P = Ω^{−1}, we have

$$\rho_{X_i X_j \cdot V \setminus \{X_i, X_j\}} = -\frac{p_{ij}}{\sqrt{p_{ii}\,p_{jj}}}.$$
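A minimal sketch of the inversion approach (assuming NumPy; the function name is illustrative):

```python
import numpy as np

def all_partial_corrs(Omega):
    """Partial correlation of each pair (i, j) given all remaining variables,
    read off the precision matrix P = Omega^{-1}."""
    P = np.linalg.inv(Omega)                 # Omega must be invertible
    D = np.sqrt(np.outer(np.diag(P), np.diag(P)))
    R = -P / D                    # rho_{X_i X_j . rest} = -p_ij / sqrt(p_ii p_jj)
    np.fill_diagonal(R, 1.0)      # the diagonal is conventionally set to 1
    return R
```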
Interpretation
Geometrical
Let three variables X, Y, Z (where X is the independent variable, Y the dependent variable, and Z the "control" or "extra" variable) be chosen from a joint probability distribution over n variables V. Further, let v_{i}, 1 ≤ i ≤ N, be N n-dimensional i.i.d. samples taken from the joint probability distribution over V. We then consider the N-dimensional vectors x (formed by the successive values of X over the samples), y (formed by the values of Y) and z (formed by the values of Z).
It can be shown that the residuals R_{X} coming from the linear regression of X using Z, if also considered as an N-dimensional vector r_{X}, have a zero scalar product with the vector z generated by Z. This means that the residuals vector lives on a hyperplane S_{z} that is perpendicular to z.
The same also applies to the residuals R_{Y} generating a vector r_{Y}. The desired partial correlation is then the cosine of the angle φ between the projections r_{X} and r_{Y} of x and y, respectively, onto the hyperplane perpendicular to z.^{[2]}
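A quick numeric illustration of this geometric picture (a sketch assuming NumPy; the data and the `residual` helper are invented for the example): the cosine of the angle between the residual vectors matches the sample partial correlation.

```python
import numpy as np

rng = np.random.default_rng(1)
z = rng.normal(size=500)
x = 2.0 * z + rng.normal(size=500)
y = -1.0 * z + rng.normal(size=500)

def residual(v, z):
    """Residual of regressing v on z (with a constant term): the component
    of v lying in the hyperplane orthogonal to z."""
    Z1 = np.column_stack([z, np.ones_like(z)])
    w, *_ = np.linalg.lstsq(Z1, v, rcond=None)
    return v - Z1 @ w

r_x, r_y = residual(x, z), residual(y, z)
cos_phi = (r_x @ r_y) / (np.linalg.norm(r_x) * np.linalg.norm(r_y))
print(cos_phi)   # equals the sample partial correlation of x and y given z
```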
As a conditional independence test
- See also: Fisher transformation
With the assumption that all involved variables are multivariate Gaussian, the partial correlation ρ_{XY·Z} is zero if and only if X is conditionally independent of Y given Z.^{[3]} This property does not hold in the general case.
To test whether a sample partial correlation $\hat{\rho}_{XY\cdot\mathbf{Z}}$ vanishes, Fisher's z-transform of the partial correlation can be used:

$$z(\hat{\rho}_{XY\cdot\mathbf{Z}}) = \frac{1}{2} \ln\left(\frac{1 + \hat{\rho}_{XY\cdot\mathbf{Z}}}{1 - \hat{\rho}_{XY\cdot\mathbf{Z}}}\right)$$
The null hypothesis is $H_0: \rho_{XY\cdot\mathbf{Z}} = 0$, to be tested against the two-tailed alternative $H_A: \rho_{XY\cdot\mathbf{Z}} \neq 0$. We reject H_{0} with significance level α if

$$\sqrt{N - |\mathbf{Z}| - 3}\;\left|z(\hat{\rho}_{XY\cdot\mathbf{Z}})\right| > \Phi^{-1}\!\left(1 - \frac{\alpha}{2}\right),$$
where Φ(·) is the cumulative distribution function of a Gaussian distribution with zero mean and unit standard deviation, and N is the sample size. Note that this z-transform is approximate and that the actual distribution of the sample (partial) correlation coefficient is not straightforward. However, an exact t-test based on a combination of the partial regression coefficient, the partial correlation coefficient and the partial variances is available.^{[4]}
The distribution of the sample partial correlation was described by Fisher.^{[5]}
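A hedged sketch of this test (assuming SciPy for the Gaussian quantile; the function name is illustrative, and k stands for |Z|, the number of controlling variables):

```python
import numpy as np
from scipy.stats import norm

def fisher_z_test(rho_hat, N, k, alpha=0.05):
    """True if H0: rho_{XY.Z} = 0 is rejected at level alpha (two-tailed)."""
    z = 0.5 * np.log((1.0 + rho_hat) / (1.0 - rho_hat))  # Fisher z-transform
    statistic = np.sqrt(N - k - 3) * abs(z)
    return statistic > norm.ppf(1.0 - alpha / 2.0)       # Phi^{-1}(1 - alpha/2)
```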
Semipartial correlation (part correlation)
The semipartial (or part) correlation statistic is similar to the partial correlation statistic. Both measure variance after certain factors are controlled for, but to calculate the semipartial correlation one holds the third variable constant for either X or Y, whereas for the partial correlation one holds the third variable constant for both. The semipartial correlation measures unique and joint variance, while the partial correlation measures unique variance. The semipartial (or part) correlation can be viewed as more practically relevant "because it is scaled to (i.e., relative to) the total variability in the dependent (response) variable."^{[6]} Conversely, it is less theoretically useful because it is less precise about the unique contribution of the independent variable. Although it may seem paradoxical, the semipartial correlation of X with Y is always less than or equal to the partial correlation of X with Y in absolute value.
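As a concrete point of comparison (a standard form, stated here for a single control variable Z): the semipartial correlation of Y with X, removing Z from X only, can be written

$$\rho_{Y(X\cdot Z)} = \frac{\rho_{XY} - \rho_{YZ}\,\rho_{XZ}}{\sqrt{1 - \rho_{XZ}^{2}}},$$

which differs from the first-order partial correlation only in lacking the factor $\sqrt{1 - \rho_{YZ}^{2}}$ in the denominator; since that factor is at most 1, the semipartial correlation can never exceed the partial correlation in absolute value.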
Use in time series analysis
In time series analysis, the partial autocorrelation function (sometimes "partial correlation function") of a time series is defined, for lag h, as

$$\varphi(h) = \rho_{X_t X_{t+h} \cdot \{X_{t+1},\,\dots,\,X_{t+h-1}\}}.$$
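A minimal sketch of estimating this function (assuming the statsmodels package is installed; the simulated AR(1) series is invented for the example):

```python
import numpy as np
from statsmodels.tsa.stattools import pacf

rng = np.random.default_rng(2)
x = np.zeros(2000)
for t in range(1, 2000):              # AR(1): x_t = 0.7 x_{t-1} + noise
    x[t] = 0.7 * x[t - 1] + rng.normal()
print(pacf(x, nlags=3))               # lag 1 near 0.7, higher lags near 0
```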
External links
- Partial correlation, Encyclopedia of Mathematics (Springer)
- What is a partial correlation?
- Mathematical formulae in the "Description" section of the IMSL Numerical Library PCORR routine
- A three-variable example
References
- ↑ Guilford, J. P., & Fruchter, B. (1973). Fundamental statistics in psychology and education. Tokyo: McGraw-Hill Kogakusha, Ltd.
- ↑ Rummel, R. J. Understanding Correlation.
- ↑ Baba, Kunihiro, Ritei Shibata & Masaaki Sibuya (2004). Partial correlation and conditional correlation as measures of conditional independence. Australian and New Zealand Journal of Statistics 46 (4): 657–664.
- ↑ Kendall MG, Stuart A. (1973) The Advanced Theory of Statistics, Volume 2 (3rd Edition), ISBN 0-85264-215-6, Section 27.22
- ↑ Fisher, R.A. (1924). The distribution of the partial correlation coefficient. Metron 3 (3–4): 329–332.
- ↑ StatSoft, Inc. (2010). "Semi-Partial (or Part) Correlation", Electronic Statistics Textbook. Tulsa, OK: StatSoft, accessed January 15, 2011.