Psychology Wiki

Criteria for the assessment of clinical studies

Revision as of 01:00, October 30, 2007 by AWeidman



A number of criteria have been proposed to aid in the evaluation of clinical studies. These criteria are used to designate interventions as "evidence-based practice" (EBP) or as belonging to a lesser category.

The criteria for "evidence-based practice" originally suggested by Chambless and Hollon on behalf of a task force of the American Psychological Association were very stringent. For a treatment to qualify as evidence-based, two studies using randomized designs had to be reported in a peer-reviewed journal, one of them a replication by an independent researcher.[1]

Since that time, the definition of the term "evidence-based" has expanded as more studies and articles have appeared in professional journals. Less rigid and stringent criteria have been proposed as means of assessing the evidentiary foundation of mental health interventions, and some authors have downplayed the role of rigorous research in the evaluation of practice, stating that the EBP category must include the use of practice wisdom and family values.[2] As other authors have noted, the present definition of "evidence-based practice" is a matter of considerable disagreement among social services practitioners.[3]


Chambless and Hollon

Chambless and Hollon took as their starting point the Division 12 (Clinical Psychology) Task Force on Promotion and Dissemination of Psychological Procedures (1995; Chambless et al., 1996) and the American Psychological Association Task Force on Psychological Intervention Guidelines (1995), and made a number of changes. The starting point was that treatment efficacy must be demonstrated in controlled research in which it is reasonable to conclude that the benefits observed are due to the effects of the treatment and not to chance or to confounding factors such as the passage of time, the effects of psychological assessment, or the presence of different types of clients in the various treatment conditions.[1] They concluded that efficacy was best demonstrated in randomized controlled trials (RCTs): group designs in which patients are randomly assigned either to the treatment of interest or to one or more comparison conditions, or carefully controlled single-case experiments and their group analogues (Chambless & Hollon, 1998, p. 7).

Further, in their view replication was critical to prevent erroneous conclusions based on one aberrant finding. Such replication must be by an independent team of researchers, to protect against investigator bias or reliance on findings unique to a particular setting or group of therapists. Efficacy research must be conducted with methods sound enough to justify reasonable confidence in the data. All this is required before a treatment can be described as efficacious under their system (Chambless & Hollon, 1998, p. 8).

Throughout their report and recommendations it is assumed that treatments wishing to be established as 'efficacious' or 'possibly efficacious' will be independently evaluated.

Research design: Comparisons with no treatment are acceptable, but comparisons with a placebo and/or other treatments are preferred; treatments that outperform other treatments may be described as efficacious and specific in their mechanism of action. Such findings are more 'highly prized' because, it is stated, they demonstrate that efficacy goes beyond the effect of receiving any treatment and, in the case of 'rival' treatments, provide information that contributes to both theory and practice. Combination treatments are acceptable for showing efficacy but must be controlled for their specific components in order to be considered efficacious and specific. There must be a clearly designated section of the population or a specific problem. They cite the use of DSM as a diagnostic system but note that it is not the only reliable method of clearly defining or diagnosing a specific section of the population.

Outcome assessment: Outcome assessment tools should have demonstrated reliability and validity in previous research. The reliability of interview-related measures should be tested, and interviewers should be 'blind'. Multiple assessment measures are desirable; researchers should not rely solely on self-report. Follow-up studies are considered desirable, but it is recognised that they are extremely hard to conduct and to control (p. 10).

Clinical significance: Effects need to be clinically meaningful in addition to being statistically significant, and evaluators are urged to consider this aspect. A number of ways of assessing this are set out at p. 11.
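As a concrete illustration (not part of Chambless and Hollon's report), two widely used ways of quantifying clinical as opposed to purely statistical significance are the standardized effect size (Cohen's d) and the Jacobson-Truax reliable change index. The function names and all the numbers below are hypothetical:

```python
import math

def cohens_d(mean_tx, mean_ctrl, sd_pooled):
    """Standardized mean difference between treatment and control groups."""
    return (mean_tx - mean_ctrl) / sd_pooled

def reliable_change_index(pre, post, sd_pre, reliability):
    """Jacobson-Truax RCI: individual change relative to measurement error.
    |RCI| > 1.96 suggests change beyond what measurement error alone explains."""
    se_measurement = sd_pre * math.sqrt(1 - reliability)
    se_diff = math.sqrt(2) * se_measurement
    return (post - pre) / se_diff

# Hypothetical symptom-scale scores (lower = better), for illustration only
d = cohens_d(12.0, 18.0, 8.0)                     # -0.75: a moderate-to-large effect
rci = reliable_change_index(30, 18, 8.0, 0.85)    # about -2.74: reliable improvement
```

A treatment can produce a statistically significant group difference while most individual patients show RCI values inside the measurement-error band, which is exactly the gap these indices are meant to expose.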

Treatment implementation: Treatment manuals are essential unless a procedure is so simple that it can be adequately outlined in its journal article. The requirement for a manual is that it sets out "a cogent and extensive description of the treatment approach therapists are to follow" and a "clear and explicit description of the kind of techniques and strategies that constitute the intervention". This may take the form of session-by-session outlines of interventions, or of broad principles and phases of treatment with examples of interventions. Without this, the treatment cannot be replicated for research. Usually the therapists will have to be trained to deliver the intervention and monitored to ensure that the treatments being tested are delivered adequately; given the relative newness of this latter requirement, its absence should be commented on by evaluators but not considered fatal. Investigator allegiance (the fact that therapists experienced in and wedded to a particular therapy do better than others merely trained to undertake it) is recognised.

Data analysis: Evaluators are asked to make their own analysis of outcome data, because author errors are common: favouring the test that produces the most favourable result, using uncontrolled pre-test/post-test comparisons as a baseline, ignoring different rates of refusal or drop-out, failing to conduct 'intention-to-treat' analyses, and failing to test for therapist or site effects.
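The difference an intention-to-treat analysis makes can be sketched with a toy example (the data and function below are hypothetical, not drawn from any study cited here). Counting everyone as randomized, with dropouts treated as non-responders, gives a more conservative response rate than a per-protocol analysis that silently discards dropouts:

```python
# Each record: (assigned_group, completed_treatment, improved)
patients = [
    ("treatment", True, True), ("treatment", True, True),
    ("treatment", False, False), ("treatment", False, False),
    ("control",   True, False), ("control",   True, True),
    ("control",   True, False), ("control",   False, False),
]

def response_rate(records, group, itt=True):
    """ITT: every randomized patient counts (here dropouts count as
    non-responders). Per-protocol (itt=False): only completers are analyzed."""
    cohort = [r for r in records if r[0] == group and (itt or r[1])]
    return sum(r[2] for r in cohort) / len(cohort)

print(response_rate(patients, "treatment", itt=True))   # 0.5  (2 of 4 randomized)
print(response_rate(patients, "treatment", itt=False))  # 1.0  (2 of 2 completers)
```

The per-protocol figure looks twice as good only because the two patients who abandoned treatment vanish from the denominator, which is the kind of author error the evaluators are asked to check for.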

Single-case experiments

Conflicting results/meta-analyses

Effectiveness (as opposed to just efficacy)

Practical issues of application

Summaries of proposed evaluative methods

It is not realistic to expect that all clinical studies will be able to comply with the demanding criteria established by Chambless and Hollon. The criterion of randomization may be very difficult to meet. However, the goal of randomization is to control confounding variables, and other methods may achieve a measure of success in this. For this reason, the contribution of nonrandomized designs to outcome research has been recognized. A number of further sets of criteria have been proposed. In an effort to consider different levels of acceptability for research evidence, some authors have proposed taxonomies in which categories of research design are equated with the evidentiary power of the studies. Other writers have simply considered whether a research report did or did not meet standards.

Some of these (e.g., Saunders et al.) have received severe criticism.[4][5] None of these methods appears to be perfect for the evaluation of mental health treatments, yet all have something to offer.

The Kaufman Best Practices Project

The Kaufman Best Practices Project can be summarized as follows: at least one randomized design is required for a treatment to be eligible for consideration, and nonrandomized designs are not considered acceptable.[6] If multiple outcome studies have been undertaken, the overall weight of evidence must indicate efficacy. The nature of comparison groups is not specified, nor does this approach discuss confounding variables or the reliability and validity of outcome measures. Intervention fidelity (generally, the existence of a treatment manual) is a factor. The approach does not consider statistical methods, the question of missing data or attrition, or intention-to-treat analysis, nor does it examine blinding issues. The Kaufman approach does examine the theoretical background of an intervention, consider evidence of harm, and examine the appropriateness of a treatment for the setting and practitioners for which it is proposed.

"TO BE CONSIDERED A CANDIDATE FOR BEST PRACTICE, A TREATMENT PROTOCOL HAD TO MEET THE FOLLOWING CRITERIA CONCERNING ITS CLINICAL UTILITY:

  • 1. The treatment has a sound theoretical basis in generally accepted psychological principles indicating that it would be effective in treating at least some problems known to be outcomes of child abuse.
  • 2. The treatment is generally accepted in clinical practice as appropriate for use with abused children, their parents, and/or their families. Candidate practices were sought that met several minimum criteria.
  • 3. A substantial clinical-anecdotal literature exists indicating the treatment’s value with abused children, their parents, and/or their families from a variety of cultural and ethnic backgrounds.
  • 4. There is no clinical or empirical evidence, or theoretical basis, indicating that the treatment constitutes a substantial risk of harm to those receiving it, compared to its likely benefits.

  • 5. The treatment has at least one randomized, controlled treatment outcome study indicating its efficacy with abused children and/or their families.
  • 6. If multiple treatment outcome studies have been conducted, the overall weight of evidence supports the efficacy of the treatment.

IN ADDITION TO THESE CRITERIA SUPPORTING THE EFFICACY OF THE TREATMENT, THE FOLLOWING CRITERIA ABOUT ITS TRANSPORTABILITY TO COMMON CLINICAL SETTINGS HAD TO BE MET:

  • 7. The treatment has a book, manual, or other writings available to clinical professionals that specifies the components of the treatment protocol and describes how to conduct it.

  • 8. The treatment can be delivered in common service delivery settings serving abused children and their families with a reasonable degree of treatment fidelity.
  • 9. The treatment can be delivered by typical mental health professionals who have received a reasonable level of training and supervision in its use. Once these candidates were identified, the project focused on reviewing the support for each one, understanding their weaknesses in clinical or empirical support, and building consensus among the participants."

Oxford Centre for Evidence-based Medicine

Oxford Centre for Evidence-based Medicine uses these "grades of recommendations" according to the study designs and critical appraisal of prevention, diagnosis, prognosis, therapy, and harm studies:

  • Level A: consistent Randomised Controlled Clinical Trial, Cohort Study, All or None, Clinical Decision Rule validated in different populations.
  • Level B: consistent Retrospective Cohort, Exploratory Cohort, Ecological Study, Outcomes Research, Case-Control Study; or extrapolations from level A studies.
  • Level C: Case-series Study or extrapolations from level B studies
  • Level D: Expert opinion without explicit critical appraisal, or based on physiology, bench research or first principles.

Levels of evidence from the Centre for Evidence-Based Medicine, Oxford. For the most up-to-date levels of evidence, see [4].

Therapy/Prevention/Etiology/Harm:

  • 1a: Systematic reviews (with homogeneity) of randomized controlled trials
  • 1a-: Systematic review of randomized trials displaying worrisome heterogeneity
  • 1b: Individual randomized controlled trials (with narrow confidence interval)
  • 1b-: Individual randomized controlled trials (with a wide confidence interval)
  • 1c: All or none randomized controlled trials
  • 2a: Systematic reviews (with homogeneity) of cohort studies
  • 2a-: Systematic reviews of cohort studies displaying worrisome heterogeneity
  • 2b: Individual cohort study or low quality randomized controlled trials (<80% follow-up)
  • 2b-: Individual cohort study or low quality randomized controlled trials (<80% follow-up / wide confidence interval)
  • 2c: 'Outcomes' Research; ecological studies
  • 3a: Systematic review (with homogeneity) of case-control studies
  • 3a-: Systematic review of case-control studies with worrisome heterogeneity
  • 3b: Individual case-control study
  • 4: Case-series (and poor quality cohort and case-control studies)
  • 5: Expert opinion without explicit critical appraisal, or based on physiology, bench research or 'first principles'

Diagnosis:

  • 1a: Systematic review (with homogeneity) of Level 1 diagnostic studies; or a clinical rule validated on a test set.
  • 1a-: Systematic review of Level 1 diagnostic studies displaying worrisome heterogeneity
  • 1b: Independent blind comparison of an appropriate spectrum of consecutive patients, all of whom have undergone both the diagnostic test and the reference standard; or a clinical decision rule not validated on a second set of patients
  • 1c: Absolute SpPins And SnNouts (An Absolute SpPin is a diagnostic finding whose Specificity is so high that a Positive result rules-in the diagnosis. An Absolute SnNout is a diagnostic finding whose Sensitivity is so high that a Negative result rules-out the diagnosis).
  • 2a: Systematic review (with homogeneity) of Level >2 diagnostic studies
  • 2a-: Systematic review of Level >2 diagnostic studies displaying worrisome heterogeneity
  • 2b: Any of: 1) independent blind or objective comparison; 2) study performed in a set of non-consecutive patients, or confined to a narrow spectrum of study individuals (or both), all of whom have undergone both the diagnostic test and the reference standard; 3) a diagnostic clinical rule not validated in a test set.
  • 3a: Systematic review (with homogeneity) of case-control studies
  • 3a-: Systematic review of case-control studies displaying worrisome heterogeneity
  • 4: Any of: 1) reference standard was unobjective, unblinded or not independent; 2) positive and negative tests were verified using separate reference standards; 3) study was performed in an inappropriate spectrum of patients.
  • 5: Expert opinion without explicit critical appraisal, or based on physiology, bench research or 'first principles'
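The SpPin/SnNout shorthand used in level 1c can be made concrete with a small sketch (the confusion-matrix counts below are hypothetical, not taken from the Centre's materials):

```python
def sensitivity(tp, fn):
    """Proportion of people WITH the condition whom the test detects."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """Proportion of people WITHOUT the condition whom the test clears."""
    return tn / (tn + fp)

# Hypothetical 2x2 results for a diagnostic test vs. a reference standard
tp, fn = 98, 2    # diseased patients: test positive / test negative
tn, fp = 70, 30   # healthy patients: test negative / test positive

sn = sensitivity(tp, fn)   # 0.98: very high sensitivity
sp = specificity(tn, fp)   # 0.70: modest specificity

# SnNout: because sensitivity is near 1, false negatives are rare, so a
# NEGATIVE result effectively rules OUT the diagnosis.
# SpPin is the mirror case: a specificity near 1 means false positives are
# rare, so a POSITIVE result rules the diagnosis IN.
```

This hypothetical test would be a useful SnNout but not an SpPin: a negative result is trustworthy, while 30 of the 100 healthy patients still test positive.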

Prognosis:

  • 1a: Systematic review (with homogeneity) of inception cohort studies; or a clinical rule validated on a test set.
  • 1a-: Systematic review of inception cohort studies displaying worrisome heterogeneity
  • 1b: Individual inception cohort study with > 80% follow-up; or a clinical rule not validated on a second set of patients
  • 1c: All or none case-series
  • 2a: Systematic review (with homogeneity) of either retrospective cohort studies or untreated control groups in RCTs.
  • 2a-: Systematic review of either retrospective cohort studies or untreated control groups in RCTs displaying worrisome heterogeneity
  • 2b: Retrospective cohort study or follow-up of untreated control patients in an RCT; or clinical rule not validated in a test set.
  • 2c: 'Outcomes' research
  • 4: Case-series (and poor quality prognostic cohort studies)
  • 5: Expert opinion without explicit critical appraisal, or based on physiology, bench research or 'first principles'

Key to interpretation of practice guidelines

  • Agency for Healthcare Research and Quality:
  • A: There is good research-based evidence to support the recommendation.
  • B: There is fair research-based evidence to support the recommendation.
  • C: The recommendation is based on expert opinion and panel consensus.
  • X: There is evidence of harm from this intervention.

USPSTF Guide to Clinical Preventive Services:

  • A: There is good evidence to support the recommendation that the condition be specifically considered in a periodic health examination.
  • B: There is fair evidence to support the recommendation that the condition be specifically considered in a periodic health examination.
  • C: There is insufficient evidence to recommend for or against the inclusion of the condition in a periodic health examination, but recommendations may be made on other grounds.
  • D: There is fair evidence to support the recommendation that the condition be excluded from consideration in a periodic health examination.
  • E: There is good evidence to support the recommendation that the condition be excluded from consideration in a periodic health examination.

University of Michigan Practice Guideline:

  • A: Randomized controlled trials.
  • B: Controlled trials, no randomization.
  • C: Observational trials.
  • D: Opinion of the expert panel.

Other guidelines:

  • A: There is good research-based evidence to support the recommendation.
  • B: There is fair research-based evidence to support the recommendation.
  • C: The recommendation is based on expert opinion and panel consensus.
  • X: There is evidence that the intervention is harmful.

"Extrapolations" are where data are used in a situation which has potentially clinically important differences from the original study situation. Other explanations are described elsewhere in the Centre's pages [5].

Saunders, Berliner and Hanson

The approach suggested by Saunders, Berliner, and Hanson is taxonomic.[7] The only category that requires randomized clinical trials is Category 1. To be considered for assessment under this method, "Treatments for which manuals, books, or other writings describing their components and application were readily available were given preference" (p. 18).

This system uses the following categories:

Category 1: Well-supported, efficacious treatment

  • 1. The treatment has a sound theoretical basis in generally accepted psychological principles.
  • 2. A substantial clinical, anecdotal literature exists indicating the treatment’s efficacy with at-risk children and foster children.
  • 3. The treatment is generally accepted in clinical practice for at-risk children and foster children.
  • 4. There is no clinical or empirical evidence or theoretical basis indicating that the treatment constitutes a substantial risk of harm to those receiving it, compared to its likely benefits.
  • 5. The treatment has a manual that clearly specifies the components and administration characteristics of the treatment that allows for replication.
  • 6. At least two randomized, controlled outcome studies have demonstrated the treatment’s efficacy with at-risk children and foster children. This means the treatment was demonstrated to be better than placebo or no different or better than an already established treatment.
  • 7. If multiple outcome studies have been conducted, the large majority of outcome studies support the efficacy of the treatment.

Category 2: Supported and probably efficacious

  • 1. The treatment has a sound theoretical basis in generally accepted psychological principles.
  • 2. A substantial clinical, anecdotal literature exists indicating the treatment’s efficacy with at-risk children and foster children.
  • 3. The treatment is generally accepted in clinical practice for at risk children and foster children.
  • 4. There is no clinical or empirical evidence or theoretical basis indicating that the treatment constitutes a substantial risk of harm to those receiving it, compared to its likely benefits.
  • 5. The treatment has a manual that clearly specifies the components and administration characteristics of the treatment that allows for implementation.
  • 6. At least two studies utilizing some form of control without randomization (e.g., wait list, untreated group, placebo group) have established the treatment’s efficacy over the passage of time, efficacy over placebo, or found it to be comparable to or better than an already established treatment.
  • 7. If multiple treatment outcome studies have been conducted, the overall weight of evidence supported the efficacy of the treatment.

Category 3: Supported and acceptable treatment

  • 1. The treatment has a sound theoretical basis in generally accepted psychological principles.
  • 2. A substantial clinical, anecdotal literature exists indicating the treatment’s efficacy with at-risk children and foster children.
  • 3. The treatment is generally accepted in clinical practice for at-risk children and foster children.
  • 4. There is no clinical or empirical evidence or theoretical basis indicating that the treatment constitutes a substantial risk of harm to those receiving it, compared to its likely benefits.
  • 5. The treatment has a manual that clearly specifies the components and administration characteristics of the treatment that allows for replication.
  • 6a. At least one group study (controlled or uncontrolled), or a series of single-subject studies, has demonstrated the efficacy of the treatment with at-risk children and foster children; or
  • 6b. A treatment that has demonstrated efficacy with other populations has a sound theoretical basis for use with at-risk children and foster children, but has not been tested or used extensively with these populations.
  • 7. If multiple treatment outcome studies have been conducted, the overall weight of evidence supported the efficacy of the treatment.

Category 4: Promising and acceptable treatments

  • 1. The treatment has a sound theoretical basis in generally accepted psychological principles.
  • 2. A substantial clinical-anecdotal literature exists indicating the treatment's value with abused children, their parents, and/or their families.
  • 3. The treatment is generally accepted in clinical practice as appropriate for use with abused children, their parents, and/or their families.
  • 4. There is no clinical or empirical evidence or theoretical basis indicating that the treatment constitutes a substantial risk of harm to those receiving it, compared to its likely benefits.
  • 5. The treatment has a book, manual, or other available writings that specifies the components of the treatment protocol and describes how to administer it.

Category 5: Novel and experimental treatments

  • 1. The treatment may have a theoretical basis that is an innovative or novel, but reasonable, application of generally accepted psychological principles.
  • 2. A relatively small clinical literature exists to suggest the value of the treatment.
  • 3. The treatment is not widely used or generally accepted by practitioners working with abused children.
  • 4. There is no clinical or empirical evidence or theoretical basis suggesting that the treatment constitutes a substantial risk of harm to those receiving it, compared to its likely benefits.
  • 5. The treatment has a book, manual, or other available writings that specifies the components of the treatment protocol and describes how to administer it.

Category 6: Concerning treatment

  • 1. The theoretical basis for the treatment is unknown, a misapplication of psychological principles, or a novel, unique, and concerning application of psychological principles.
  • 2. Only a small and limited clinical literature exists suggesting the value of the treatment.
  • 3. There is a reasonable theoretical, clinical, or empirical basis suggesting that compared to its likely benefits, the treatment constitutes a risk of harm to those receiving it.
  • 4. The treatment has a manual or other writings that specifies the components and administration characteristics of the treatment that allows for implementation.

Gray's Hierarchy

1. Intervention programs that have been critically tested and found to help clients.

2. Intervention programs that have not been critically tested and are not in a good experimental trial.

3. Intervention programs that have been critically tested and shown to harm clients.

4. Intervention programs of unknown effectiveness that are in a rigorous experimental trial. [8]

Khan et al

The standards set by Khan et al. differed in their requirements from the previous approaches. To summarize, this group preferred randomized designs but accepted nonrandomized approaches as well.[9] They required that comparison groups be specified and that confounding variables be taken into consideration. Blinding measures were emphasized. Missing data and attrition were to be discussed, and an intention-to-treat analysis was to be done. However, no evidence about outcome measures was included, no manual was required, and there was no consideration of theoretical background, evidence of harm, or the appropriateness of the treatment for the setting and practitioners.

U.S. National Registry of Evidence-Based Practices and Programs criteria

This approach is used by the U.S. National Registry of Evidence-Based Practices and Programs (NREPP). This evaluative method prefers randomized designs but accepts nonrandomized studies. The criteria include the specification of comparison groups, the consideration of confounding variables, evidence for the validity and reliability of outcome measures, the appropriateness of statistical measures, consideration of missing data and attrition, and the existence of a treatment manual. However, the NREPP standards do not require an intention-to-treat analysis, blinding designs, an examination of theoretical background, or consideration of evidence of harm or of the appropriateness of a treatment for a setting or for practitioners.

U.S. Preventive Services Task Force

Systems to stratify evidence by quality have been developed, such as this one by the U.S. Preventive Services Task Force:

  • Level I: Evidence obtained from at least one properly designed randomized controlled trial.
  • Level II-1: Evidence obtained from well-designed controlled trials without randomization.
  • Level II-2: Evidence obtained from well-designed cohort or case-control analytic studies, preferably from more than one center or research group.
  • Level II-3: Evidence obtained from multiple time series with or without the intervention. Dramatic results in uncontrolled trials might also be regarded as this type of evidence.
  • Level III: Opinions of respected authorities, based on clinical experience, descriptive studies, or reports of expert committees.

The UK National Health Service uses a similar system with categories labelled A, B, C, and D.

Categories of recommendations

In guidelines and other publications, recommendations are classified according to the level of evidence on which they are based. The U.S. Preventive Service Task Force uses:

  • Level A: Recommendations are based on good and consistent scientific evidence.
  • Level B: Recommendations are based on limited or inconsistent scientific evidence.
  • Level C: Recommendations are based primarily on consensus and expert opinion.

This is a distinct improvement on older styles of recommendation, in which it was less clear which parts of a guideline were most firmly established.

Reporting issues

Evaluation of research on the basis of a published report can only be done if sufficient information is included in the report. Guidelines for the reporting of randomized studies have been suggested by Moher,[10] and Des Jarlais and colleagues have outlined criteria for reports of nonrandomized studies.[11]

Criticism of evidence-based approaches

Critics of evidence-based approaches maintain that good evidence is often deficient in many areas, that lack of evidence and lack of benefit are not the same, and that evidence-based medicine applies to populations, not necessarily to individuals. In The limits of evidence-based medicine, Tonelli argues that "the knowledge gained from clinical research does not directly answer the primary clinical question of what is best for the patient at hand." Tonelli suggests that proponents of evidence-based medicine discount the value of clinical experience.

Although evidence-based practice is quickly becoming the "gold standard" for clinical practice and treatment guidelines, there are a number of reasons why most current medical, psychological, and surgical practices do not have a strong literature base supporting them. First, in some cases conducting randomized controlled trials would be unethical (as in open-heart surgery), although observational studies are designed to address these problems to some degree. Second, certain groups have been historically under-researched (women, racial minorities, people with many co-morbid diseases), and thus the literature is very sparse in areas that do not allow for generalizability. Third, the types of trials considered the 'gold standard' (i.e., randomized double-blind placebo-controlled trials) are very expensive, and thus funding sources play a role in what gets investigated. For example, the government funds a large number of preventive medicine studies that endeavor to improve public health as a whole, while pharmaceutical companies fund studies intended to demonstrate the efficacy and safety of particular drugs. Fourth, the studies published in medical journals may not be representative of all the studies completed on a given topic (published and unpublished) or may be misleading due to conflicts of interest (i.e., publication bias).[6] Thus the array of evidence available on particular therapies may not be well represented in the literature. Fifth, there is an enormous range in the quality of studies performed, making it difficult to generalize about the results.

Large randomized controlled trials are extraordinarily useful for examining discrete interventions for carefully defined conditions. The more complex the patient population, the conditions and diagnoses, and the intervention, the more difficult it is to separate the treatment effect from random variation. Because of this, a number of studies obtain nonsignificant results, either because there is insufficient power to show a difference or because the groups are not well enough 'controlled'.
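The link between effect size and required sample size can be made concrete with the standard normal-approximation formula for a two-arm comparison of means; the sketch below is illustrative and not drawn from any study discussed here. Smaller (or noisier) effects demand dramatically larger trials:

```python
import math
from statistics import NormalDist

def n_per_arm(effect_size_d, alpha=0.05, power=0.80):
    """Approximate per-arm sample size for a two-sample comparison of means
    (normal approximation): n = 2 * ((z_{1-alpha/2} + z_{power}) / d)^2."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided critical value
    z_b = NormalDist().inv_cdf(power)          # power quantile
    return math.ceil(2 * ((z_a + z_b) / effect_size_d) ** 2)

print(n_per_arm(0.8))  # large effect (d = 0.8): 25 per arm
print(n_per_arm(0.2))  # small effect (d = 0.2): 393 per arm
```

A complex, heterogeneous population effectively shrinks the standardized effect (same raw difference, larger variance), so a trial powered for d = 0.8 can be an order of magnitude too small once the realistic d is closer to 0.2.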

Evidence-based medicine has been most practised when the intervention tested is a drug. Applying the methods to other forms of treatment may be harder, particularly those requiring the active participation of the patient, because blinding is more difficult.

In managed healthcare systems, evidence-based guidelines have been used as a basis for denying insurance coverage for some treatments, some of which are held by the physicians involved to be effective but for which randomized controlled trials have not yet been published.

See also

References

  1. 1.0 1.1 Chambless, D., & Hollon, S. (1998). "Defining empirically supportable therapies". Journal of Consulting and Clinical Psychology, 66, 7-18
  2. Buysse, V., & Wesley, P.W.(2006). "Evidence-based practice: How did it emerge and what does it really mean for the early childhood field?" Zero to Three, 27(2), 50-55.
  3. Glasby, J., Walshe, K., & Harvey, G. (2007). What counts as 'evidence' in 'evidence-based practice'? Evidence & Policy, 3(3), 325-327; Sempik, J., Becker. S., & Bryman, A. (2007). The quality of research evidence in social policy: Consensus and dissension among researchers. Evidence & Policy, 3(3), 407-423.
  4. Gambrill, E.(2006). "Evidence-based practice and policy: Choices ahead". Research on Social Work Practice, 16(3), 338-357
  5. Pignotti, M., & Mercer, J.(2007). "Holding Therapy and Dyadic Developmental Psychotherapy are not supported and acceptable social work interventions: A Systematic Research Synthesis revisited". Research on Social Work Practice, 17(4), 513-519
  6. Kaufman Best Practices Project Final Report: Closing the Quality Chasm in Child Abuse Treatment; Identifying and Disseminating Best Practices. [1]
  7. Saunders, B., Berliner, L., & Hanson, R. (2004). "Child physical and sexual abuse: Guidelines for treatment". [2]
  8. Gray, J. A. M. (2001a). Evidence-based health care: How to make health policy and management decisions (2nd ed.). New York: Churchill Livingstone.
  9. Khan, K.S., ter Riet, G., Popay, J., Nixon, J., & Kleijnen, J. (2001). CRD Report 4, Stage II: Conducting the review. Phase 5: Study quality assessment. York, UK: Centre for Reviews and Dissemination, University of York. [www.york.ac.uk/inst/crd/pdf/crd_4ph5.pdf] [3]
  10. Moher, D., Jones, A., & Lepage, L., for the CONSORT Group (2001). Use of the CONSORT statement and quality of reports of randomized trials: A comparative before-and-after evaluation. Journal of the American Medical Association, 285, 1992-1995
  11. Des Jarlais, D.C., Lyles, C., & Crepaz, N., for the TREND Group (2004). Improving the reporting quality of nonrandomized evaluations of behavioral and public health interventions. American Journal of Public Health, 94, 361-366
