Psychology Wiki

Routine health outcomes measurement



Evidence-based practice describes a healthcare system in which evidence from published studies, often mediated by systematic reviews or processed into medical guidelines, is incorporated into clinical practice. The flow of information is one way: from research to practice. However, many interventions by health systems, and treatments by their staff, have never been, or cannot easily be, subjected to research study; of the rest, much rests on research graded as low quality.[1] All health staff intervene with their patients on the basis both of information from research evidence and of their own experience. The latter is personal, subjective and strongly influenced by stark instances which may not be representative.[2] However, when information on these interventions and their outcomes is collected systematically it becomes "practice-based evidence"[3] and can complement evidence from academic research. To date, such initiatives have been largely confined to primary care[4] and rheumatology.[5]

An example of practice-based evidence is found in the evaluation of a simple intervention such as a medication. Efficacy is the degree to which it improves patients in randomised controlled trials, the epitome of evidence-based practice. Effectiveness is the degree to which the same drug improves patients in the uncontrolled hurly-burly of everyday practice; such data are much more difficult to come by. Routine health outcomes measurement has the potential to provide this evidence.

The information required for practice-based evidence is of three sorts: context (e.g. case mix), intervention (treatment) and outcomes (change).[6] Some mental health services are developing a practice-based evidence culture with the routine measurement of clinical outcomes[7][8] and the creation of behavioral health outcomes management programs.

Definition of health outcomes

There are many similar, overlapping definitions of health outcomes. They all involve change in health status; some stipulate that the population or group must be defined (different outcomes are expected for different people and conditions), while others also specify that health outcomes are the result of interventions, or their lack, rather than simply change over time. A strong example is that of Australia's New South Wales Health Department: a health outcome is

"change in the health of an individual, group of people or population which is attributable to an intervention or series of interventions"[9]
In its purest form, measurement of health outcomes implies identifying the context (diagnosis, demographics etc.), measuring health status before an intervention is carried out, measuring the intervention, measuring health status again and then plausibly relating the change to the intervention.
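This purest-form sequence can be sketched as a minimal data record pairing context, intervention and before/after health status. All field names and values below are illustrative, not taken from any standard instrument:

```python
from dataclasses import dataclass

@dataclass
class OutcomeRecord:
    """One episode of care: context, intervention, and paired status scores."""
    diagnosis: str       # context, e.g. a diagnostic code
    age: int             # context (demographics)
    intervention: str    # what was done
    score_before: float  # health status measured before the intervention
    score_after: float   # health status measured again afterwards

    def change(self) -> float:
        """Change in health status; on a symptom scale, a fall is improvement."""
        return self.score_after - self.score_before

episode = OutcomeRecord("F32.1", 42, "CBT, 12 sessions", 24.0, 11.0)
print(episode.change())  # -13.0: a fall in symptom score
```

The point of the structure is that the change score is uninterpretable without the context and intervention fields alongside it.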

History of routine health outcomes measurement

Florence Nightingale

An early example of a routine clinical outcomes system was set up by Florence Nightingale in the Crimean War. The outcome under study was death; the context was the season and the cause of death (wounds, infection or any other cause); the interventions were nursing and administrative. She arrived at the barracks in Scutari just before they received the first soldiers wounded at the battle of Inkerman in November 1854, and mortality was already high. Appalled at the disorganisation and standards of hygiene, she set about cleaning and reorganisation. Mortality nevertheless continued to rise, and fell only after the sewers were cleared and ventilation improved in March 1855.

On return to the UK she reflected on these data and produced new kinds of chart (she had trained in mathematics rather than "worsted work and practising quadrilles") to show that the excess deaths were most likely caused by living conditions rather than, as she had initially believed, poor nutrition. She also showed that soldiers in peacetime had an excess mortality over other young men, presumably from the same causes. Her reputation was damaged, however, when she and the statistician William Farr of the General Register Office collaborated in producing a table which appeared to show a mortality in London hospitals of over 90%, compared with less than 13% in Margate. They had made an elementary error in the denominator: the true rate for London hospitals was actually about 9% of admitted patients.[10] She was in any case never keen on hospital mortality figures as outcome measures:

"If the function of a hospital were to kill the sick, statistical comparisons of this nature would be admissible. As, however, its proper function is to restore the sick to health as speedily as possible, the elements which really give information as to whether this is done or not, are those which show the proportion of sick restored to health, and the average time which has been required for this object…"[11]
Here she presaged the next key figure in the development of routine outcomes measurement.
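The denominator error described above is worth making concrete. The figures below are illustrative round numbers, not the historical ones: dividing the year's deaths by the average number of occupied beds, instead of by the number of patients admitted over the year, inflates the apparent mortality rate roughly tenfold.

```python
# Illustrative round numbers (not the historical figures).
deaths_per_year = 90
admissions_per_year = 1000     # patients treated over the whole year
average_occupied_beds = 100    # patients present on any one day

rate_per_admission = deaths_per_year / admissions_per_year        # 0.09
rate_per_occupied_bed = deaths_per_year / average_occupied_beds   # 0.90

print(f"{rate_per_admission:.0%} vs {rate_per_occupied_bed:.0%}")  # 9% vs 90%
```

The same death count yields either figure; only the choice of denominator differs, which is why the 90%-versus-9% discrepancy in the Nightingale–Farr table went unnoticed at first.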

Ernest Amory Codman

Codman was a Boston orthopaedic surgeon who developed the "end result idea". At its core was

"The common sense notion that every hospital should follow every patient it treats, long enough to determine whether or not the treatment has been successful, and then to inquire 'if not, why not?' with a view of preventing similar failures in the future."[12]
He is said to have first articulated this idea in the summer of 1910, on a hansom cab journey from Frimley Park, Surrey, UK, to his gynaecologist colleague Franklin H. Martin of Chicago, who later founded the American College of Surgeons. He put the idea into practice at Massachusetts General Hospital.
"Each patient who entered the operating room was provided with a 5-inch by 8-inch card on which the operating surgeon filled out the details of the case before and after surgery. This card was brought up 1 year later, the patient was examined, and the previous years' treatment was then evaluated based on the patient's condition. This system enabled the hospital and the public to evaluate the results of treatments and to provide comparisons among individual surgeons and different hospitals"[13]
He was able to demonstrate the outcomes of his own patients and of some colleagues, but the system was not embraced more widely. Frustrated by this resistance, he provoked an uproar at a public meeting and thus fell dramatically from favour at the hospital and at Harvard, where he held a teaching post; he was able to realise the idea fully only in his own small, struggling private hospital,[14] although some colleagues continued with it at the larger hospitals. He died in 1940 disappointed that his dream of publicly available outcomes data was not even on the horizon, but hoping that posterity would vindicate him.

Avedis Donabedian

In a classic 1966 paper, Avedis Donabedian, the renowned public health pioneer, described three distinct aspects of quality in health care: outcome, process and structure (in that order in the original paper).[15] He had misgivings about solely using outcomes as a measure of quality, but concluded that

"Outcomes, by and large, remain the ultimate validation of the effectiveness and quality of medical care."[15]
He may have muddied the waters a little by discussing patient satisfaction with treatment (usually regarded as a measure of process) as an outcome. More importantly, his three-aspect model has since been subverted into what is called the "structure-process-outcomes" model, a directional, putatively causal chain that he never originally described. This subversion has been the justification for repeated attempts to improve process, and thus outcomes, by reorganising the structure of health care, wittily described by Oxman et al.[16] Donabedian himself cautioned that outcomes measurement cannot distinguish efficacy from effectiveness (outcomes may be poor because the right treatment is badly applied or because the wrong treatment is carried out well); that outcomes measurement must always take context into account (factors other than the intervention may be very important in determining outcomes); and that the most important outcomes may be the least easy to measure, so that easily measured but irrelevant outcomes are chosen instead (e.g. mortality rather than disability).

Mortality as an outcome measure

Perhaps because of instances of scandalously poor care (for example at the Bristol Royal Infirmary, 1984-1995[17]), mortality data have become more and more openly available as a proxy for other health outcomes in hospitals,[18] and even for individual surgeons.[19] However, Florence Nightingale's astringent judgement and Donabedian's reservations retain their full force for most health services, where routine measurement of non-mortal health outcomes remains the most appropriate method.

Principles of routine health outcomes measurement

  1. All three dimensions (context and intervention as well as outcomes) must be measured. It is not possible to understand outcomes data without all three.
  2. Different perspectives on outcomes need to be acknowledged. For instance, patients, carers and clinical staff may have different views of which outcomes are important, how to measure them, and even which are desirable.[20]
  3. Prospective and repeated measurement of health status is superior to retrospective measurement of change, such as Clinical Global Impressions.[21] The latter relies on memory and may not be possible if the rater changes.
  4. The reliability (statistics) and validity (statistics) of any measure of health status must be known, so that their impact on the assessment of health outcomes can be taken into account. In mental health services these values may be quite low, especially when ratings are carried out routinely by staff rather than by trained researchers, and when using short measures that are feasible in everyday practice.
  5. Data collected must be fed back to those who collect them, to maximise data quality, reliability and validity.[22] Feedback should cover both content (e.g. the relationship of outcomes to context and interventions) and process (the data quality of all three dimensions).
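Principle 4 can be made concrete with the Jacobson-Truax reliable change index, one common way of allowing for a measure's imperfect reliability when judging whether an individual patient's score change is real rather than measurement noise. A minimal sketch with illustrative numbers:

```python
import math

def reliable_change_index(pre: float, post: float,
                          sd: float, reliability: float) -> float:
    """Jacobson-Truax reliable change index: the observed change scaled by
    the standard error of the difference implied by the measure's
    test-retest reliability. |RCI| > 1.96 suggests reliable change."""
    se_measurement = sd * math.sqrt(1 - reliability)   # standard error of measurement
    se_difference = math.sqrt(2) * se_measurement      # SE of a difference score
    return (post - pre) / se_difference

# A 6-point fall on a scale with SD 10 and test-retest reliability 0.80:
rci = reliable_change_index(pre=30, post=24, sd=10, reliability=0.80)
print(round(rci, 2), abs(rci) > 1.96)  # -0.95 False
```

With this modest reliability, a 6-point fall does not exceed the conventional 1.96 threshold, illustrating why low reliability in routinely collected measures limits what can be concluded about individual patients.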

Current status of routine health outcomes measurement

Reports of routine health outcomes measurement can be found in many medical specialties and in many countries. However, the vast majority are by or about enthusiasts who have set up essentially local systems, with little connection to similar systems elsewhere, even down the street. Realising the full benefits of an outcomes measurement system requires large-scale implementation using standardised methods, with data captured from a high proportion of eligible healthcare episodes. To analyse change in health status (health outcomes) we also need data on context, as recommended by Donabedian[15] and others, and on the interventions used, all collected in a standardised manner. Such large-scale systems are at present evident only in the field of mental health services, and well developed in only two locations, Ohio[8] and Australia,[7] and even in these, data on context and interventions are much less prominent than data on outcomes. The major challenge for health outcomes measurement is now the development of usable and discriminating categories of interventions and treatments, especially in the field of mental health.

Benefits of routine health outcomes measurement

Aspirations include the following benefits:

  • Aggregated data
    • Can form the basis of effectiveness data that complement efficacy data. This could show the actual benefits in everyday clinical practice of interventions previously tested in randomised controlled trials, or the benefits of interventions that have not been, or cannot be, tested in randomised controlled trials and systematic reviews
    • Can identify hazardous interventions that are only apparent in large datasets
    • Can be used to show differences between clinical services with similar case mix and thus stimulate search for testable hypotheses that might explain these differences and lead to improvements in treatment or management
    • Can be used to compare the outcomes of treatment and care from different perspectives, e.g. clinical staff and patient
  • Data about individual patients
    • Can be used to track changes during treatment over periods of time too long to be amenable to memory by an individual patient or clinician, and especially when more than one clinician or team is involved
    • Can, especially when different perspectives are available, be used in discussions between patients, clinicians and carers about progress[23]
    • Can be used to make clinical meetings quicker and more focused[24]

Risks of routine health outcomes measurement

  1. If attempts are made to purchase or commission health services on the basis of outcomes data, bias may be introduced that negates the benefits.
  2. Inadequate attention may be paid to the analysis of context data, such as case mix, leading to dubious conclusions.
  3. If data are not fed back to the participating clinicians, data quality (and quantity) will fall below the thresholds necessary for reasonable interpretation.
  4. If only a small proportion of episodes of health care have complete outcomes data, those data may not be representative of all episodes, although the threshold for this effect will vary from service to service and from measure to measure.
  5. Some widely foretold risks of bias[25] are proving to be insubstantial, but still need guarding against.

Practical issues in routine health outcomes measurement

Experience suggests that the following factors are necessary for routine health outcomes measurement:

  1. an electronic patient record system with easy extraction to a data warehouse. Entry of outcomes data can then become part of the everyday entry of clinical data; without this, aggregate data analysis and feedback are very difficult indeed
  2. resources and staff time set aside for training and for receiving feedback
  3. resources and personnel to extract, analyse and proactively present outcomes, case mix and, where available, intervention data to clinical teams
  4. regular reports on data quality, as part of performance management by senior managers, which can supplement, but not replace, feedback

Conclusion

Routine health outcomes measurement is in its infancy, but technological advances in clinical information systems mean that it is now feasible. It is still possible that it will be implemented badly (e.g. without feedback) or for the wrong purposes (e.g. purchasing or commissioning services) or both, but as long as these are avoided it promises to be of great benefit to individual patients, clinicians and, ultimately, health care overall. It is strongly supported by patients.[26]

References

  1. Guyatt GH, Oxman AD, Kunz R, Vist GE, Falck-Ytter Y, Schünemann HJ (May 2008). What is "quality of evidence" and why is it important to clinicians?. BMJ 336 (7651): 995–8.
  2. Malterud K (August 2001). The art and science of clinical knowledge: evidence beyond measures and numbers. Lancet 358 (9279): 397–400.
  3. Horn SD, Gassaway J (October 2007). Practice-based evidence study design for comparative effectiveness research. Medical Care 45 (10 Suppl 2): S50-7.
  4. Ryan JG (1 March 2004). Practice-Based Research Networking for Growing the Evidence to Substantiate Primary Care Medicine. Annals of Family Medicine 2 (2): 180–1.
  5. Pincus T, Sokka T (March 2006). Evidence-based practice and practice-based evidence. Nature Clinical Practice. Rheumatology 2 (3): 114–5.
  6. Pawson R, Tilley N. Realistic Evaluation. London: Sage Publications Ltd; 1997
  7. Callaly T, Hallebone EL (2001). Introducing the routine use of outcomes measurement to mental health services. Australian Health Review 24 (1): 43–50.
  8. Ohio Mental Health Datamart
  9. Frommer, Michael; Rubin, George; Lyle, David (1992). The NSW Health Outcomes program. New South Wales Public Health Bulletin 3 (12): 135.
  10. Iezzoni LI (15 June 1996). 100 apples divided by 15 red herrings: a cautionary tale from the mid-19th century on comparing hospital mortality rates. Annals of Internal Medicine 124 (12): 1079–85.
  11. Nightingale F. Notes on Hospitals. 3rd ed. London: Longman, Green, Longman, Roberts, and Green; 1863
  12. Codman EA. The Shoulder. Rupture of the supraspinatus tendon and other lesions in or about the subacromial bursa. Privately published, 1934. Reprinted 1965, Malabar, Florida: Krieger.
  13. Kaska SC, Weinstein JN (March 1998). Historical perspective. Ernest Amory Codman, 1869-1940. A pioneer of evidence-based medicine: the end result idea. Spine 23 (5): 629–33.
  14. Codman EA. A study in hospital efficiency. As demonstrated by the case report of the first five years of a private hospital. Published privately, 1917. Reprinted 1996, Oakbrook Terrace, IL: Joint Commission on Accreditation of Healthcare Organizations.
  15. Donabedian A. Evaluating the quality of medical care. Milbank Memorial Fund Quarterly 1966;44:166-206
  16. Oxman AD, Sackett DL, Chalmers I, Prescott TE (December 2005). A surrealistic mega-analysis of redisorganization theories. Journal of the Royal Society of Medicine 98 (12): 563–8.
  17. http://www.bristol-inquiry.org.uk/
  18. http://www.stgeorges.nhs.uk/mortalityintro.asp
  19. Bridgewater B (March 2005). Mortality data in adult cardiac surgery for named surgeons: retrospective examination of prospectively collected data on coronary artery surgery and aortic valve replacement. BMJ 330 (7490): 506–10.
  20. Long A, Jefferson J. The significance of outcomes within European health sector reforms: towards the development of an outcomes culture. International Journal of Public Administration, 1999;22(3):385-424
  21. NIMH Early Clinical Drug Evaluation PRB. Clinical global impressions. In: Guy W, editor. ECDEU Assessment manual for psychopharmacology, revised. US Department of Health and Human Services Public Health Service, Alcohol Drug Abuse and Mental Health Administration, NIMH Psychopharmacology Research Branch; 1976. p. 217-22
  22. De Lusignan S, Stephens PN, Adal N, Majeed A (2002). Does Feedback Improve the Quality of Computerized Medical Records in Primary Care?. Journal of the American Medical Informatics Association 9 (4): 395–401.
  23. Stewart M (April 2009). Service user and significant other versions of the Health of the Nation Outcome Scales. Australasian Psychiatry 17 (2): 156–63.
  24. Stewart M. Making the HoNOS(CA) clinically useful: A strategy for making the HoNOS, HoNOSCA, and HoNOS65+ useful to the clinical team. 2nd Australasian Mental Health Outcomes Conference; 2008
  25. Bilsker D, Goldner EM (November 2002). Routine outcome measurement by mental health-care providers: is it worth doing?. Lancet 360 (9346): 1689–90.
  26. Black J (February 2009). It's not that bad: the views of consumers and carers about routine outcome measurement in mental health. Australian Health Review 33 (1): 93–9.

This page uses Creative Commons Licensed content from Wikipedia (view authors).
