In acknowledgement of the methodological weaknesses, poor prognostic power, symptomatic variability and general weaknesses inherent in the diagnostic validity of the term 'schizophrenia', the psychological literature has increasingly tended to focus on specific or discrete symptoms or aspects associated with it (Bentall, 1990). The problems may be categorized as follows:
Loss of differential meaning
By the 1970s the term schizophrenia had lost much of its differential meaning and had become employed as a "catch all" for insanity (Cooper et al, 1972). For example, in the US, Kuriansky et al (1974) reported that in some hospitals 80% of the patients carried the diagnosis.
Inconsistent application in practice
Cooper et al (1972) identified substantial disagreement between psychiatrists in their use of the diagnosis, and in particular drew attention to the very different diagnostic practices in Europe compared with the US.
When they recorded video-taped patient interviews and asked psychiatrists from the other side of the Atlantic to rate them, they found the British practitioners were using a more restricted definition than their American colleagues.
In response to this finding, researchers developed new standardised rating scales to define clear criteria for applying the diagnosis.
The most successful of these is Wing's Present State Examination. This facilitated a higher level of agreement between raters (Wing et al, 1974), which largely solved the problem of unreliability for researchers, if not in actual practice for workaday clinicians. There has also been a tightening of US diagnostic practice, which is now more similar to European guidelines.
Problems with reliability
Related to this, a number of studies examined the inter-rater reliability of the diagnosis.
Spitzer and Fleiss (1974) evaluated the reliability of diagnosis using data from a number of studies and found that the coefficient of agreement between assessors ('kappa') averaged only 0.6. This confirmed Beck et al.'s (1962) earlier finding that 32% of the disagreement arose from poor and inconsistent measurement of symptomatology and 63% from unclear and differing criteria.
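The kappa statistic cited above (Cohen's kappa) measures agreement between two raters after correcting for the agreement expected by chance alone. A minimal sketch of the computation follows; the patient ratings here are invented purely for illustration and do not come from any of the studies discussed.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: inter-rater agreement corrected for chance."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: proportion of cases where both raters give the same label.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: derived from each rater's marginal label frequencies.
    freq_a = Counter(rater_a)
    freq_b = Counter(rater_b)
    p_e = sum(freq_a[label] * freq_b[label] for label in freq_a) / n**2
    return (p_o - p_e) / (1 - p_e)

# Hypothetical diagnoses by two psychiatrists for ten patients
# ('S' = schizophrenia, 'O' = other diagnosis) -- invented for illustration.
a = ['S', 'S', 'O', 'S', 'O', 'S', 'S', 'O', 'O', 'S']
b = ['S', 'O', 'O', 'S', 'O', 'S', 'S', 'S', 'O', 'S']
print(round(cohens_kappa(a, b), 2))  # prints 0.58
```

A kappa of 1.0 indicates perfect agreement and 0 indicates agreement no better than chance, so an average of 0.6 across studies reflects substantial residual disagreement between assessors.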
References & Bibliography
Beck, A.T., Ward, C.H., Mendelson, M., Mock, J.E. and Erbaugh, J.K. (1962). Reliability of psychiatric diagnosis: II. A study of clinical judgements and ratings. American Journal of Psychiatry, 119: 351-7.
Cooper, J.E., Kendell, R.E. and Gurland, B.J. (1972). Psychiatric Diagnosis in New York and London. New York: Oxford University Press.
Kuriansky, J.B., Deming, W.F. and Gurland, B.J. (1974). On trends in the diagnosis of schizophrenia. American Journal of Psychiatry, 131: 402-8.