The sequential probability ratio test (SPRT) is a specific sequential hypothesis test, developed by Abraham Wald. Inspired by Neyman and Pearson's 1933 result, Wald reformulated hypothesis testing as a sequential analysis problem. The Neyman–Pearson lemma, by contrast, offers a decision rule for the case when all the data have already been collected (and the likelihood ratio is known).
While originally developed for use in quality control studies in the realm of manufacturing, SPRT has been formulated for use in the computerized testing of human examinees as a termination criterion.
Given two simple hypotheses $H_0$ and $H_1$ with densities $f_0$ and $f_1$, the log-likelihood ratio for the $i$-th observation is $\log \Lambda_i = \log \frac{f_1(x_i)}{f_0(x_i)}$. The next step is to calculate the cumulative sum of the log-likelihood ratio, $S_i$, as new data arrive:

$$S_i = S_{i-1} + \log \Lambda_i, \qquad S_0 = 0.$$
The stopping rule is a simple thresholding scheme:

- $a < S_i < b$: continue monitoring (critical inequality)
- $S_i \geq b$: Accept $H_1$
- $S_i \leq a$: Accept $H_0$

where $a$ and $b$ ($a < 0 < b$) depend on the desired type I and type II errors, $\alpha$ and $\beta$. They may be chosen as follows:

$$a \approx \log \frac{\beta}{1-\alpha}, \qquad b \approx \log \frac{1-\beta}{\alpha}.$$
In other words, $\alpha$ and $\beta$ must be decided beforehand in order to set the thresholds appropriately; their numerical values will depend on the application. The reason for using approximation signs is that, in the discrete case, the signal may cross a threshold between samples. Thus, depending on the penalty for making an error and the sampling frequency, one might set the thresholds more aggressively. Of course, the exact bounds may be used in the continuous case.
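The update and stopping rules above can be sketched in Python. This is a minimal illustration, not a library API: `sprt` and `sprt_thresholds` are hypothetical names, and the per-sample log-likelihood ratios are assumed to be supplied by the caller.

```python
import math

def sprt_thresholds(alpha, beta):
    """Wald's approximate thresholds: a ~ log(beta/(1-alpha)), b ~ log((1-beta)/alpha)."""
    return math.log(beta / (1 - alpha)), math.log((1 - beta) / alpha)

def sprt(log_lr_stream, alpha=0.05, beta=0.05):
    """Run the SPRT on an iterable of per-sample log-likelihood ratios.

    Returns ('H0' or 'H1', number of samples used), or ('undecided', n)
    if the stream is exhausted while still inside the continuation region.
    """
    a, b = sprt_thresholds(alpha, beta)
    s = 0.0
    n = 0
    for llr in log_lr_stream:
        n += 1
        s += llr            # cumulative log-likelihood ratio S_n
        if s >= b:          # S_n >= b: accept H1
            return "H1", n
        if s <= a:          # S_n <= a: accept H0
            return "H0", n
    return "undecided", n   # a < S_n < b throughout: keep sampling
```

For example, with $\alpha = \beta = 0.05$ the upper threshold is $\log 19 \approx 2.94$, so a stream contributing $0.5$ per sample accepts $H_1$ after six samples.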
As an example, consider testing the mean of a normal distribution with known variance $\sigma^2$. The hypotheses are simply $H_0: \mu = \mu_0$ and $H_1: \mu = \mu_1$, with $\mu_1 > \mu_0$. Then the log-likelihood function (LLF) for one sample is

$$\log \Lambda_i = \log \frac{\exp\!\left(-(x_i - \mu_1)^2 / 2\sigma^2\right)}{\exp\!\left(-(x_i - \mu_0)^2 / 2\sigma^2\right)} = \frac{\mu_1 - \mu_0}{\sigma^2}\left(x_i - \frac{\mu_0 + \mu_1}{2}\right).$$

The cumulative sum of the LLFs for all $x_i$ is

$$S_n = \sum_{i=1}^{n} \log \Lambda_i = \frac{\mu_1 - \mu_0}{\sigma^2}\left(\sum_{i=1}^{n} x_i - \frac{n(\mu_0 + \mu_1)}{2}\right).$$

Accordingly, the stopping rule is

$$a < \frac{\mu_1 - \mu_0}{\sigma^2}\left(\sum_{i=1}^{n} x_i - \frac{n(\mu_0 + \mu_1)}{2}\right) < b.$$

After re-arranging we finally find

$$a\,\frac{\sigma^2}{\mu_1 - \mu_0} + \frac{n(\mu_0 + \mu_1)}{2} \;<\; \sum_{i=1}^{n} x_i \;<\; b\,\frac{\sigma^2}{\mu_1 - \mu_0} + \frac{n(\mu_0 + \mu_1)}{2},$$

i.e. the test continues as long as the running sum of the observations stays between two parallel lines with common slope $(\mu_0 + \mu_1)/2$.
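The re-arranged rule above can be sketched directly: compare the running sum of observations against the two parallel lines. The function name `normal_sprt` is illustrative, assuming the normal model with known variance from this example.

```python
import math

def normal_sprt(xs, mu0, mu1, sigma, alpha=0.05, beta=0.05):
    """SPRT for H0: mu = mu0 vs H1: mu = mu1 (mu1 > mu0), known sigma.

    Uses the re-arranged rule: the running sum of x_i is compared with
    two parallel lines of slope (mu0 + mu1)/2.
    """
    a = math.log(beta / (1 - alpha))
    b = math.log((1 - beta) / alpha)
    scale = sigma**2 / (mu1 - mu0)
    total = 0.0
    n = 0
    for n, x in enumerate(xs, start=1):
        total += x
        lower = a * scale + n * (mu0 + mu1) / 2   # accept-H0 boundary
        upper = b * scale + n * (mu0 + mu1) / 2   # accept-H1 boundary
        if total >= upper:
            return "H1", n
        if total <= lower:
            return "H0", n
    return "undecided", n
```

With $\mu_0 = 0$, $\mu_1 = 1$, $\sigma = 1$, observations far from the indifference midpoint $0.5$ terminate the test after only a few samples.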
Testing of human examinees
The SPRT is currently the predominant method of classifying examinees in a variable-length computerized classification test (CCT). The two parameters p1 and p2 are specified by determining a cutscore (threshold) for examinees on the proportion-correct metric, and selecting a point above and below that cutscore. For instance, suppose the cutscore is set at 70% for a test. We could select p1 = 0.65 and p2 = 0.75. The test then evaluates the likelihood that an examinee's true score on that metric is equal to one of those two points. If the examinee is determined to be at 75%, they pass, and they fail if they are determined to be at 65%.
These points are not specified completely arbitrarily. A cutscore should always be set with a legally defensible method, such as a modified Angoff procedure. The region between p1 and p2 is the indifference region: the range of scores that the test designer is content to let go either way (pass or fail). The upper parameter p2 is conceptually the lowest level that the test designer is willing to accept for a Pass (because everyone at or above it has a good chance of passing), and the lower parameter p1 is the highest level that the test designer is willing to accept for a Fail (because everyone at or below it has a good chance of failing). While this definition may seem to be a relatively small burden, consider the high-stakes case of a licensing test for medical doctors: at just what point should we consider somebody to be at one of these two levels?
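Under this classical model, each scored item is a Bernoulli trial, so every correct or incorrect response contributes a fixed log-likelihood-ratio increment determined by p1 and p2. A minimal sketch (the function name `cct_sprt` is hypothetical):

```python
import math

def cct_sprt(responses, p1=0.65, p2=0.75, alpha=0.05, beta=0.05):
    """Pass/fail SPRT for a computerized classification test.

    responses: iterable of 0/1 item scores.  H0: true proportion correct = p1
    (fail), H1: true proportion correct = p2 (pass); p1 and p2 bound the
    indifference region around the cutscore.
    """
    a = math.log(beta / (1 - alpha))   # lower threshold: accept H0 -> fail
    b = math.log((1 - beta) / alpha)   # upper threshold: accept H1 -> pass
    llr_correct = math.log(p2 / p1)              # increment for a correct item
    llr_wrong = math.log((1 - p2) / (1 - p1))    # increment for an incorrect item
    s = 0.0
    n = 0
    for n, x in enumerate(responses, start=1):
        s += llr_correct if x else llr_wrong
        if s >= b:
            return "pass", n
        if s <= a:
            return "fail", n
    return "continue testing", n
```

Note the asymmetry: with p1 = 0.65 and p2 = 0.75, an incorrect response moves the sum farther (about 0.34) than a correct one does (about 0.14), so a string of failures terminates the test sooner than a string of successes.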
While the SPRT was first applied to testing in the days of classical test theory, as applied in the previous paragraph, Reckase (1983) suggested that item response theory be used to determine the p1 and p2 parameters. The cutscore and indifference region are defined on the latent ability (theta) metric, and translated onto the proportion metric for computation. Research on CCT since then has applied this methodology for several reasons:
- Large item banks tend to be calibrated with IRT
- This allows more accurate specification of the parameters
- By using the item response function for each item, the parameters are easily allowed to vary between items.
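The last point can be sketched as follows: each item's own response function, evaluated at two theta points bracketing the cutscore, supplies item-specific values of p1 and p2. This sketch assumes the standard three-parameter logistic (3PL) model with the conventional 1.7 scaling constant; the function names are illustrative.

```python
import math

def irf_3pl(theta, a, b, c):
    """Three-parameter logistic item response function:
    discrimination a, difficulty b, pseudo-guessing c."""
    return c + (1 - c) / (1 + math.exp(-1.7 * a * (theta - b)))

def item_llr(x, theta0, theta1, a, b, c):
    """Per-item log-likelihood ratio for H0: theta = theta0 vs H1: theta = theta1.

    The item's own response function supplies item-specific p1 and p2,
    so the increment varies from item to item.
    """
    p_fail = irf_3pl(theta0, a, b, c)   # P(correct) under H0
    p_pass = irf_3pl(theta1, a, b, c)   # P(correct) under H1
    if x:
        return math.log(p_pass / p_fail)
    return math.log((1 - p_pass) / (1 - p_fail))
```

These per-item increments would then be accumulated and thresholded exactly as in the classical SPRT; highly discriminating items near the cutscore contribute the largest increments, which is one reason the IRT formulation is more efficient.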
- Wald, A. (1945). Sequential tests of statistical hypotheses. Annals of Mathematical Statistics, 16(2), 117–186.
- Ferguson, R. L. (1969). The development, implementation, and evaluation of a computer-assisted branched test for a program of individually prescribed instruction. Unpublished doctoral dissertation, University of Pittsburgh.
- Reckase, M. D. (1983). A procedure for decision making using tailored testing. In D. J. Weiss (Ed.), New horizons in testing: Latent trait theory and computerized adaptive testing (pp. 237–254). New York: Academic Press.
- Eggen, T. J. H. M. (1999). Item selection in adaptive testing with the sequential probability ratio test. Applied Psychological Measurement, 23(3), 249–261.
This page uses Creative Commons licensed content from Wikipedia.