Fleiss' kappa

Fleiss' kappa is a generalisation of Scott's pi statistic[1], a statistical measure of inter-rater reliability. It is also related to Cohen's kappa statistic. Whereas Scott's pi and Cohen's kappa work for only two raters, Fleiss' kappa works for any number of raters giving categorical ratings (see nominal data) to a fixed number of items. It can be interpreted as expressing the extent to which the observed amount of agreement among raters exceeds what would be expected if all raters made their ratings completely randomly. Agreement can be thought of as follows: if a fixed number of people assign categorical ratings to a number of items, then the kappa gives a measure of how consistent the ratings are. The kappa, \kappa\,, can be defined as

\kappa = \frac{\bar{P} - \bar{P_e}}{1 - \bar{P_e}}

The factor 1 - \bar{P_e} gives the degree of agreement that is attainable above chance, and \bar{P} - \bar{P_e} gives the degree of agreement actually achieved above chance. If the raters are in complete agreement then \kappa = 1\,; if there is no agreement among the raters other than what would be expected by chance, then \kappa \le 0\,. An example of the use of Fleiss' kappa is the following: fourteen psychiatrists are asked to look at ten patients, and each psychiatrist gives one of possibly five diagnoses to each patient. Fleiss' kappa can be computed from the resulting matrix of ratings (see the worked example below) to show the degree of agreement between the psychiatrists above the level of agreement expected by chance.

Equations

Let N be the total number of subjects, let n be the number of ratings per subject, and let k be the number of categories into which assignments are made. The subjects are indexed by i = 1, ..., N and the categories are indexed by j = 1, ..., k. Let n_{i j} represent the number of raters who assigned the i-th subject to the j-th category.

First, calculate p_{j}\,, the proportion of all assignments which were made to the j-th category:

p_{j} = \frac{1}{N n} \sum_{i=1}^N n_{i j}

Now calculate P_{i}\,, the extent to which raters agree for the i-th subject (i.e., the number of rater-rater pairs in agreement, relative to the number of all possible rater-rater pairs):

P_{i} = \frac{1}{n(n - 1)} \sum_{j=1}^k n_{i j} (n_{i j} - 1)
      = \frac{1}{n(n - 1)} \left(\sum_{j=1}^k (n_{i j}^2 - n_{i j})\right)
      = \frac{1}{n(n - 1)} \left(\sum_{j=1}^k n_{i j}^2 - n\right)

Now compute \bar{P}, the mean of the P_i\,'s, and \bar{P_e}, which go into the formula for \kappa\,:

\bar{P} = \frac{1}{N} \sum_{i=1}^N P_{i}
       = \frac{1}{N n (n - 1)} \left(\sum_{i=1}^N \sum_{j=1}^k n_{i j}^2 - N n\right)
\bar{P_e} = \sum_{j=1}^k p_{j} ^2
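
These equations translate directly into code. The sketch below is a minimal, illustrative Python implementation (the function name fleiss_kappa and its interface are assumptions, not part of the article); it takes the ratings as an N x k matrix of counts in which entry [i][j] is n_{i j} and every row sums to n:

def fleiss_kappa(counts):
    """Fleiss' kappa for an N x k matrix of counts, where counts[i][j]
    is the number of raters who assigned subject i to category j and
    every row sums to n, the fixed number of raters per subject."""
    N = len(counts)     # number of subjects
    k = len(counts[0])  # number of categories
    n = sum(counts[0])  # number of ratings per subject

    # p_j: proportion of all assignments made to category j
    p = [sum(row[j] for row in counts) / (N * n) for j in range(k)]

    # P_i: extent to which raters agree on subject i
    P = [(sum(c * c for c in row) - n) / (n * (n - 1)) for row in counts]

    P_bar = sum(P) / N                  # mean of the P_i
    P_e_bar = sum(pj * pj for pj in p)  # agreement expected by chance

    return (P_bar - P_e_bar) / (1 - P_e_bar)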

Worked example

1 2 3 4 5 P_i\,
1 0 0 0 0 14 1.000
2 0 2 6 4 2 0.253
3 0 0 3 5 6 0.308
4 0 3 9 2 0 0.440
5 2 2 8 1 1 0.330
6 7 7 0 0 0 0.462
7 3 2 6 3 0 0.242
8 2 5 3 2 2 0.176
9 6 5 2 1 0 0.286
10 0 2 2 3 7 0.286
Total 20 28 39 21 32
p_j\, 0.143 0.200 0.279 0.150 0.229
Table of values for computing the worked example

In the following example, fourteen raters (n) assign ten "subjects" (N) to a total of five categories (k). The categories are presented in the columns, while the subjects are presented in the rows.

Data

See the table above.

N = 10, n = 14, k = 5

Sum of all cells = 140
Sum of P_{i}\, = 3.780
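
As a quick check, the table can be entered in code and the two totals above recomputed. This is an illustrative sketch; the variable names are not from the article:

# Ratings matrix from the table: rows are subjects, columns are categories.
ratings = [
    [0, 0, 0, 0, 14],
    [0, 2, 6, 4, 2],
    [0, 0, 3, 5, 6],
    [0, 3, 9, 2, 0],
    [2, 2, 8, 1, 1],
    [7, 7, 0, 0, 0],
    [3, 2, 6, 3, 0],
    [2, 5, 3, 2, 2],
    [6, 5, 2, 1, 0],
    [0, 2, 2, 3, 7],
]
n = 14  # raters per subject

print(sum(sum(row) for row in ratings))  # 140, the sum of all cells
P = [(sum(c * c for c in row) - n) / (n * (n - 1)) for row in ratings]
print(f"{sum(P):.3f}")                   # 3.780, the sum of the P_i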

Equations

For example, taking the first column,

p_1 = \frac{(0+0+0+0+2+7+3+2+6+0)}{140} = 0.143


And taking the second row,

P_2 = \frac{1}{14(14 - 1)} \left(0^2 + 2^2 + 6^2 + 4^2 + 2^2 - 14\right) = 0.253

In order to calculate \bar{P}, we need to know the sum of the P_i\,,

\sum_{i=1}^N P_{i} = 1.000 + 0.253 + \cdots + 0.286 + 0.286 = 3.780

Over the whole sheet,

\bar{P} = \frac{1}{10} \times 3.780 = 0.378

\bar{P_e} = 0.143^2 + 0.200^2 + 0.279^2 + 0.150^2 + 0.229^2 = 0.213

\kappa = \frac{0.378 - 0.213}{1 - 0.213} = 0.210
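
Feeding the ratings matrix from the Data sketch into the fleiss_kappa function sketched earlier reproduces this value directly (it avoids the intermediate rounding of the hand calculation, but agrees to three decimal places):

print(f"{fleiss_kappa(ratings):.3f}")  # 0.210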

Significance

Landis and Koch[2] gave the following table for interpreting \kappa\, values. This table is, however, by no means universally accepted; Landis and Koch supplied no evidence to support it, basing it instead on personal opinion. It has been noted that these guidelines may be more harmful than helpful[3], as the number of categories and subjects will affect the magnitude of the value; the kappa will be higher when there are fewer categories.[4]

\kappa         Interpretation
< 0            Poor agreement
0.00 – 0.20    Slight agreement
0.21 – 0.40    Fair agreement
0.41 – 0.60    Moderate agreement
0.61 – 0.80    Substantial agreement
0.81 – 1.00    Almost perfect agreement
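
Where such verbal labels are wanted in code, the table can be expressed as a small lookup function. This is only an illustrative sketch of the Landis and Koch categories, subject to the caveats above:

def landis_koch_label(kappa):
    """Verbal label for a kappa value, following Landis and Koch (1977)."""
    if kappa < 0:
        return "Poor agreement"
    if kappa <= 0.20:
        return "Slight agreement"
    if kappa <= 0.40:
        return "Fair agreement"
    if kappa <= 0.60:
        return "Moderate agreement"
    if kappa <= 0.80:
        return "Substantial agreement"
    return "Almost perfect agreement"

print(landis_koch_label(0.210))  # Fair agreement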

See also

The Wikibooks book Algorithm Implementation may have more about this subject.


Notes

  1. Scott, W. (1955), pp. 321–325.
  2. Landis, J. R. and Koch, G. G. (1977), pp. 159–174.
  3. Gwet, K. (2001).
  4. Sim, J. and Wright, C. C. (2005), pp. 257–268.

References

  • Fleiss, J. L. (1971). "Measuring nominal scale agreement among many raters." Psychological Bulletin, Vol. 76, No. 5, pp. 378–382.
  • Gwet, K. (2001). Statistical Tables for Inter-Rater Agreement. Gaithersburg: StatAxis Publishing.
  • Landis, J. R. and Koch, G. G. (1977). "The measurement of observer agreement for categorical data." Biometrics, Vol. 33, pp. 159–174.
  • Scott, W. (1955). "Reliability of content analysis: The case of nominal scale coding." Public Opinion Quarterly, Vol. 19, pp. 321–325.
  • Sim, J. and Wright, C. C. (2005). "The kappa statistic in reliability studies: Use, interpretation, and sample size requirements." Physical Therapy, Vol. 85, pp. 257–268.

Further reading

  • Fleiss, J. L. and Cohen, J. (1973). "The equivalence of weighted kappa and the intraclass correlation coefficient as measures of reliability." Educational and Psychological Measurement, Vol. 33, pp. 613–619.
  • Fleiss, J. L. (1981). Statistical Methods for Rates and Proportions. 2nd ed. New York: John Wiley, pp. 38–46.

