Psychology Wiki

Effect size (statistical)

Revision as of 23:39, September 23, 2012 by CPwikiCHATlogger (Talk | contribs)


In statistics, effect size is a measure of the strength of the relationship between two variables. In scientific experiments, it is often useful to know not only whether an experiment has a statistically significant effect, but also the size of any observed effects. In practical situations, effect sizes are helpful for making decisions. Effect size measures are the common currency of meta-analysis studies that summarize the findings from a specific area of research.

Summary

The concept of effect size appears in everyday language. For example, a weight loss program may boast that it leads to an average weight loss of 30 pounds. In this case, 30 pounds is an indicator of the claimed effect size. Another example is that a tutoring program may claim that it raises school performance by one letter grade. This grade increase is the claimed effect size of the program.

An effect size is best explained through an example: if you had no previous contact with humans, and one day visited England, how many people would you need to see before you realize that, on average, men are taller than women there? The answer relates to the effect size of the difference in average height between men and women. The larger the effect size, the easier it is to see that men are taller. If the height difference were small, then it would require knowing the heights of many men and women to notice that (on average) men are taller than women. This example is demonstrated further below.

In inferential statistics, an effect size helps to determine whether a statistically significant difference is a difference of practical concern. Given a sufficiently large sample size, it is always possible to show a difference between two means at some decimal place; the effect size tells us whether the observed difference is one that matters. Effect size, sample size, critical significance level (\alpha), and power in statistical hypothesis testing are related: any one of these values can be determined given the others. In meta-analysis, effect sizes serve as a common measure that can be calculated for different studies and then combined into an overall analysis.

The term effect size is most commonly used to describe standardized measures of effect (e.g., r, Cohen's d, odds ratio). However, unstandardized measures (e.g., the raw difference between group means, unstandardized regression coefficients) can equally serve as effect size measures. Standardized effect size measures are typically used when the metrics of the variables being studied have no intrinsic meaning to the reader (e.g., a score on a personality test on an arbitrary scale), or when results from multiple studies that use different scales are being combined. The recommendation of Wilkinson and the APA Task Force on Statistical Inference (1999, p. 599) to "always present effect sizes for primary outcomes" is sometimes misread as requiring standardized measures such as Cohen's d by default. In fact, immediately following that sentence the authors add: "If the units of measurement are meaningful on a practical level (e.g., number of cigarettes smoked per day), then we usually prefer an unstandardized measure (regression coefficient or mean difference) to a standardized measure (r or d)."

Types

Pearson r correlation

Pearson's r correlation, introduced by Karl Pearson, is one of the most widely used effect sizes. It can be used when the data are continuous or binary; thus the Pearson r is arguably the most versatile effect size. This was the first important effect size to be developed in statistics. Pearson's r can vary in magnitude from -1 to 1, with -1 indicating a perfect negative relationship, 1 indicating a perfect positive relationship, and 0 indicating no relationship between two variables. Cohen (1988, 1992) gives the following guidelines for the social sciences: small effect size, r = 0.1; medium, r = 0.3; large, r = 0.5. (Note that correlation coefficients for the physical sciences are typically of a different order of magnitude.)

Another often-used measure of the strength of the relationship between two variables is the coefficient of determination (the square of r, referred to as "r-squared"). This is a measure of the proportion of variance shared by the two variables, and varies from 0 to 1. An r-squared of 0.21 means that 21% of the total variance is shared by the two variables.
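As a minimal sketch, Pearson's r and r-squared can be computed directly from their definitions. The data below are illustrative values, not from this article:

```python
import math

# Illustrative paired observations of two continuous variables.
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.1, 2.9, 3.4, 4.8, 5.1]

n = len(x)
mean_x = sum(x) / n
mean_y = sum(y) / n

# Pearson's r: the sum of cross-products of deviations, divided by
# the square root of the product of the sums of squared deviations.
sxy = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
sxx = sum((xi - mean_x) ** 2 for xi in x)
syy = sum((yi - mean_y) ** 2 for yi in y)
r = sxy / math.sqrt(sxx * syy)

# Coefficient of determination: proportion of shared variance.
r_squared = r ** 2
```

For these values r comes out near 0.98, so r_squared is near 0.97: about 97% of the variance is shared by the two variables.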

Cohen's d

Cohen's d is the appropriate effect size measure to use in the context of a t-test on means. d is defined as the difference between two means divided by the pooled standard deviation for those means. Thus, in the case where both samples are the same size,

d = {\mathrm{mean}_1 - \mathrm{mean}_2 \over \sqrt{(\mathrm{SD}_1^2 + \mathrm{SD}_2^2) /2 \ }}
where mean_i and SD_i are the mean and standard deviation for group i, for i = 1, 2.

Different authors offer different advice on interpreting the resulting effect size, but the most widely accepted guidelines are Cohen's (1992): 0.2 indicates a small effect, 0.5 a medium effect, and 0.8 a large effect.

So, in the example above of visiting England and observing men's and women's heights, the data (Aaron, Kromrey, & Ferron, 1998; from a 2004 UK representative sample of 2436 men and 3311 women) are:

  • Men: Mean Height = 1750 mm; Standard Deviation = 89.93 mm
  • Women: Mean Height = 1612 mm; Standard Deviation = 69.05 mm

The effect size (using Cohen's d) equals 1.72 (95% confidence interval: 1.66 to 1.78). This is very large, and you should have no problem detecting the consistent average height difference between men and women.
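This calculation can be reproduced in a few lines, using the equal-weight pooled-SD formula given above with the height data:

```python
import math

# Height data from the example above (in mm).
mean_men, sd_men = 1750.0, 89.93
mean_women, sd_women = 1612.0, 69.05

# Pooled SD: root mean square of the two standard deviations
# (the equal-sample-size form of the formula above).
pooled_sd = math.sqrt((sd_men ** 2 + sd_women ** 2) / 2)

# Cohen's d: difference in means over the pooled SD.
d = (mean_men - mean_women) / pooled_sd  # comes out at about 1.72
```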

One point worth noting, though, is that in some cases it may be wise to use just one of the standard deviations (e.g., the pre-treatment standard deviation in a therapeutic trial). Either way, note that sample size plays no part in the calculation, a point noted by Hedges.

Hedges' ĝ

Hedges and Olkin (1985) noted that effect size estimates can be adjusted to take sample size into account. A limitation of Cohen's d is that the outcome is heavily influenced by the denominator: if one standard deviation is larger than the other, the denominator is weighted in that direction and the effect size is more conservative, regardless of how many observations each group contributes. Surely it makes more sense to put more stock in the group with the larger sample? Hedges' ĝ incorporates sample size in two ways: the pooled standard deviation weights each group's variance by its sample size, and a correction factor adjusts the overall estimate for small samples. The formula for Hedges' ĝ (as used by software such as the Effect Size Generator) is:

\hat{g} = \frac{\bar{x}_1 - \bar{x}_2}{\sqrt{\frac{(n_1 - 1) SD_1^2 + (n_2 - 1) SD_2^2}{(N_\mathrm{total} - 2)}}} \times \bigg(1-\frac{3}{4(n_1+n_2)-9}\bigg).

In the 'height' example above, Hedges' ĝ equals 1.76 (95% confidence interval: 1.70 to 1.82). Notice how the large sample size has increased the estimate relative to Cohen's d. If, instead, the available data were from only 90 men and 80 women, Hedges' ĝ would provide a more conservative estimate of effect size: 1.70 (with a wider 95% confidence interval: 1.35 to 2.05).
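The ĝ formula above translates directly into code; this sketch uses the height data with the stated sample sizes (2436 men, 3311 women):

```python
import math

# Height data and sample sizes from the example above.
mean1, sd1, n1 = 1750.0, 89.93, 2436   # men
mean2, sd2, n2 = 1612.0, 69.05, 3311   # women

# Pooled SD weighted by each group's degrees of freedom (n - 1),
# matching the denominator of the formula above.
pooled_sd = math.sqrt(((n1 - 1) * sd1 ** 2 + (n2 - 1) * sd2 ** 2)
                      / (n1 + n2 - 2))

# Small-sample correction factor from the formula above; it is
# nearly 1 here because the samples are large.
correction = 1 - 3 / (4 * (n1 + n2) - 9)

g = (mean1 - mean2) / pooled_sd * correction  # about 1.76
```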

Cohen's f^{2}

Cohen's f^{2} is the appropriate effect size measure to use in the context of an F-test for multiple correlation or multiple regression. The f^{2} effect size measure for multiple regression is defined as:

f^{2} = {R^{2} \over 1 - R^{2}}
where R^{2} is the squared multiple correlation.

The f^{2} effect size measure for hierarchical multiple regression is defined as:

f^{2} = {(R^{2}_{AB} - R^{2}_A) \over 1 - R^{2}_{AB}}
where R^{2}_A is the variance accounted for by a set of one or more independent variables A, and R^{2}_{AB} is the combined variance accounted for by A and another set of one or more independent variables B.

By convention, f^{2} effect sizes of 0.02, 0.15, and 0.35 are considered small, medium, and large, respectively (Cohen, 1988).
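Both definitions are easy to compute once the R² values are in hand; the R² values below are illustrative, not from the article:

```python
# Cohen's f-squared from the squared multiple correlation R^2.
def f_squared(r2):
    """f^2 = R^2 / (1 - R^2) for multiple regression."""
    return r2 / (1 - r2)

# f-squared for hierarchical regression: the increment of predictor
# set B over set A, relative to the variance unexplained by both.
def f_squared_hierarchical(r2_ab, r2_a):
    return (r2_ab - r2_a) / (1 - r2_ab)

# Illustrative values: R^2 = 0.26 lands just above Cohen's
# "large" threshold of 0.35.
large_example = f_squared(0.26)                         # ~0.351
increment_example = f_squared_hierarchical(0.30, 0.20)  # ~0.143
```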

φ, Cramer's φ, or Cramer's V

Phi (φ):

 \phi = \sqrt{ \frac{\chi^2}{N}} 

Cramer's phi (φc):

 \phi_c = \sqrt{ \frac{\chi^2}{N(k - 1)}} 

The best measure of association for the chi-square test is phi (or Cramer's phi, also called Cramer's V). Phi is related to the point-biserial correlation coefficient and Cohen's d, and estimates the extent of the relationship between two binary variables (a 2 x 2 table).[1] Cramer's phi may be used with variables having more than two levels.

Phi can be computed by finding the square root of the chi-square statistic divided by the sample size.

Similarly, Cramer's phi is found through a slightly more complex formula that takes the size of the table into account: k is the smaller of the number of rows and the number of columns.
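A sketch of both calculations for a 2 x 2 contingency table, with illustrative counts (not from the article); the chi-square statistic is computed from observed and expected cell frequencies:

```python
import math

# Illustrative 2 x 2 contingency table of observed counts.
table = [[30, 10],   # e.g. group A: pass / fail
         [20, 40]]   # e.g. group B: pass / fail

row_totals = [sum(row) for row in table]
col_totals = [sum(col) for col in zip(*table)]
n = sum(row_totals)

# Pearson chi-square: sum over cells of (observed - expected)^2 / expected,
# where expected = row total * column total / grand total.
chi2 = 0.0
for i, row in enumerate(table):
    for j, observed in enumerate(row):
        expected = row_totals[i] * col_totals[j] / n
        chi2 += (observed - expected) ** 2 / expected

# Phi: square root of chi-square divided by the sample size.
phi = math.sqrt(chi2 / n)

# Cramer's phi divides by k - 1, where k is the smaller of the
# number of rows and columns; for a 2 x 2 table it equals phi.
k = min(len(table), len(table[0]))
cramers_phi = math.sqrt(chi2 / (n * (k - 1)))
```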

Odds ratio

The odds ratio is another useful effect size, appropriate when both variables are binary. For example, consider a study on spelling. In the control group, two students pass the class for every one who fails, so the odds of passing are two to one (or 2/1 = 2). In the treatment group, six students pass for every one who fails, so the odds of passing are six to one (or 6/1 = 6). The effect size can be computed by noting that the odds of passing in the treatment group are three times higher than in the control group (because 6 divided by 2 is 3); therefore, the odds ratio is 3. However, odds ratios are on a different scale from Cohen's d, so this '3' is not comparable to a Cohen's d of 3.
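The spelling example translates directly into code; the counts below are stated per one failing student, matching the odds in the text:

```python
# Pass/fail counts per one failure, from the spelling example.
treatment_pass, treatment_fail = 6, 1
control_pass, control_fail = 2, 1

# Odds of passing within each group.
odds_treatment = treatment_pass / treatment_fail  # 6.0
odds_control = control_pass / control_fail        # 2.0

# Odds ratio: the treatment group's odds relative to the control's.
odds_ratio = odds_treatment / odds_control        # 3.0
```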

References

  1. Aaron, B., Kromrey, J. D., & Ferron, J. M. (1998, November). Equating r-based and d-based effect-size indices: Problems with a commonly recommended formula. Paper presented at the annual meeting of the Florida Educational Research Association, Orlando, FL. (ERIC Document Reproduction Service No. ED433353)
  • Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Hillsdale, NJ: Erlbaum.
  • Cohen, J. (1992). A power primer. Psychological Bulletin, 112, 155-159.
  • Hedges, L. V., & Olkin, I. (1985). Statistical methods for meta-analysis. San Diego, CA: Academic Press.
  • Lipsey, M.W., & Wilson, D.B. (2001). Practical meta-analysis. Sage: Thousand Oaks, CA.
  • Wilkinson, L., & APA Task Force on Statistical Inference. (1999). Statistical methods in psychology journals: Guidelines and explanations. American Psychologist, 54, 594-604.
  • Adair, J. G., Sharpe, D., & Huynh, C.-l. (1989). Hawthorne control procedures in educational experiments: A reconsideration of their use and effectiveness: Review of Educational Research Vol 59(2) Sum 1989, 215-228.
  • Adams, G., & Carnine, D. (2003). Direct instruction. New York, NY: Guilford Press.
  • Aegisdottir, S., Spengler, P. M., & White, M. J. (2006). Should I Pack My Umbrella?: Clinical Versus Statistical Prediction of Mental Health Decisions: Counseling Psychologist Vol 34(3) May 2006, 410-419.
  • Aegisdottir, S., White, M. J., Spengler, P. M., Maugherman, A. S., Anderson, L. A., Cook, R. S., et al. (2006). The Meta-Analysis of Clinical Judgment Project: Fifty-Six Years of Accumulated Research on Clinical Versus Statistical Prediction Stefania AEgisdottir: Counseling Psychologist Vol 34(3) May 2006, 341-382.
  • Agnoli, F., & Tressoldi, P. (2005). The presentation of results of an experimental research: How to integrate information provided by statistical tests: Eta Evolutiva No 81(1) 2005, 83-94.
  • Aguinis, H., Beaty, J. C., Boik, R. J., & Pierce, C. A. (2005). Effect Size and Power in Assessing Moderating Effects of Categorical Variables Using Multiple Regression: A 30-Year Review: Journal of Applied Psychology Vol 90(1) Jan 2005, 94-107.
  • Aguinis, H., & Pierce, C. A. (2006). Computation of Effect Size for Moderating Effects of Categorical Variables in Multiple Regression: Applied Psychological Measurement Vol 30(5) Sep 2006, 440-442.
  • Alexander, R. A., Scozzaro, M. J., & Borodkin, L. J. (1989). Statistical and empirical examination of the chi-square test for homogeneity of correlations in meta-analysis: Psychological Bulletin Vol 106(2) Sep 1989, 329-331.
  • Algina, J., & Keselman, H. J. (2003). Approximate confidence intervals for effect sizes: Educational and Psychological Measurement Vol 63(4) Aug 2003, 537-553.
  • Algina, J., Keselman, H. J., & Penfield, R. D. (2005). An Alternative to Cohen's Standardized Mean Difference Effect Size: A Robust Parameter and Confidence Interval in the Two Independent Groups Case: Psychological Methods Vol 10(3) Sep 2005, 317-328.
  • Algina, J., Keselman, H. J., & Penfield, R. D. (2005). Effect Sizes and their Intervals: The Two-Level Repeated Measures Case: Educational and Psychological Measurement Vol 65(2) Apr 2005, 241-258.
  • Algina, J., Keselman, H. J., & Penfield, R. D. (2006). Confidence Interval Coverage for Cohen's Effect Size Statistic: Educational and Psychological Measurement Vol 66(6) Dec 2006, 945-960.
  • Alliger, G. M. (1995). The small sample performance of four tests of the difference between pairs of meta-analytically derived effect sizes: Journal of Management Vol 21(4) 1995, 789-799.
  • Allison, D. B., Silverstein, J. M., & Gorman, B. S. (1996). Power, sample size estimation, and early stopping rules. Hillsdale, NJ, England: Lawrence Erlbaum Associates, Inc.
  • Andersen, M. B., McCullagh, P., & Wilson, G. J. (2007). But what do the numbers really tell us? Arbitrary metrics and effect size reporting in sport psychology research: Journal of Sport & Exercise Psychology Vol 29(5) Oct 2007, 664-672.
  • Anderson, B. S., Butts, C., & Carley, K. (1999). The interaction of size and density with graph-level indices: Social Networks Vol 21(3) Jul 1999, 239-267.
  • Anderson, R. B., Doherty, M. E., Berg, N. D., & Friedrich, J. C. (2005). Postscript: Psychological Review Vol 112(1) Jan 2005, 279.
  • Archer, J., Graham-Kevan, N., & Davies, M. (2005). Testosterone and aggression: A reanalysis of Book, Starzyk, and Quinsey's (2001) study: Aggression and Violent Behavior Vol 10(2) Jan-Feb 2005, 241-261.
  • Arent, S. M., Landers, D. M., & Etnier, J. L. (2000). The effects of exercise on mood in older adults: A meta-analytic review: Journal of Aging and Physical Activity Vol 8(4) Oct 2000, 407-430.
  • Armstrong, S. A., & Henson, R. K. (2004). Statistical and Practical Significance in the IJPT: A Research Review from 1993-2003: International Journal of Play Therapy Vol 13(2) 2004, 9-30.
  • Austin, E. (2003). Review of Contrasts and effect sizes in behavioral research. A correlational approach: British Journal of Mathematical and Statistical Psychology Vol 56(2) Nov 2003, 379-380.
  • Austin, J. T., Boyle, K. A., & Lualhati, J. C. (1998). Statistical conclusion validity for organizational science researchers: A review: Organizational Research Methods Vol 1(2) Apr 1998, 164-208.
  • Avtgis, T. A. (1998). Locus of control and persuasion, social influence, and conformity: A meta-analytic review: Psychological Reports Vol 83(3, Pt 1) Dec 1998, 899-903.
  • Bakeman, R. (2005). Infancy Asks That Authors Report and Discuss Effect Sizes: Infancy Vol 7(1) 2005, 5-6.
  • Bakeman, R. (2005). Recommended effect size statistics for repeated measures designs: Behavior Research Methods Vol 37(3) Aug 2005, 379-384.
  • Bakeman, R. (2006). VII. The practical importance of findings: Monographs of the Society for Research in Child Development Vol 71(3) Dec 2006, 127-145.
  • Baker, S. B., & Daniels, T. G. (1989). Integrating research on the microcounseling program: A meta-analysis: Journal of Counseling Psychology Vol 36(2) Apr 1989, 213-222.
  • Baker, S. B., & Taylor, J. G. (1998). Effects of career education interventions: A meta-analysis: The Career Development Quarterly Vol 46(4) Jun 1998, 376-385.
  • Barber, J. P., & Milrod, B. (2004). Pitfalls of Meta-Analyses: American Journal of Psychiatry Vol 161(6) Jun 2004, 1131.
  • Baugh, F. (2002). Correcting effect sizes for score reliability: A reminder that measurement and substantive issues are linked inextricably: Educational and Psychological Measurement Vol 62(2) Apr 2002, 254-263.
  • Bech, P. (2001). Meta-analysis of placebo-controlled trials with mirtazapine using the core items of the Hamilton Depression Scale as evidence of a pure antidepressive effect in the short-term treatment of major depression: International Journal of Neuropsychopharmacology Vol 4(4) Dec 2001, 337-345.
  • Becker, B. J. (1987). Applying tests of combined significance in meta-analysis: Psychological Bulletin Vol 102(1) Jul 1987, 164-171.
  • Bem, D. J., Palmer, J., & Broughton, R. S. (2001). Updating the ganzfeld database: A victim of its own success? : Journal of Parapsychology Vol 65(3) Sep 2001, 207-218.
  • Bennett, M. (2002). Editorial: Reporting of effect sizes: Infant and Child Development Vol 11(3) Sep 2002, 211-212.
  • Beretvas, S. N. (2005). Methodological Challenges Encountered in Summarizing Evidence-Based Practice: School Psychology Quarterly Vol 20(4) Win 2005, 498-503.
  • Berger, R. E., & Persinger, M. A. (1991). Geophysical variables and behavior: LXVII. Quieter annual geomagnetic activity and larger effect size for experimental psi (ESP) studies over six decades: Perceptual and Motor Skills Vol 73(3, Pt 2), Spec Issue Dec 1991, 1219-1223.
  • Berry, K. J., Johnston, J. E., & Mielke, P. W., Jr. (2007). An alternative measure of effect size for Cochran's Q test for related proportions: Perceptual and Motor Skills Vol 104(3, Pt2) Jun 2007, 1236-1242.
  • Berry, K. J., & Mielke, P. W., Jr. (2002). Least sum of Euclidean regression residuals: Estimation of effect size: Psychological Reports Vol 91(3,Pt1) Dec 2002, 955-962.
  • Bezeau, S., & Graves, R. (2001). Statistical power and effect sizes of clinical neuropsychology research: Journal of Clinical and Experimental Neuropsychology Vol 23(3) Jun 2001, 399-406.
  • Bieliauskas, L. A., Fastenau, P. S., Lacy, M. A., & Roper, B. L. (1997). Use of the odds ratio to translate neuropsychological test scores into real-world outcomes: From statistical significance to clinical significance: Journal of Clinical and Experimental Neuropsychology Vol 19(6) Dec 1997, 889-896.
  • Bird, K. D. (2002). Confidence intervals for effect sizes in analysis of variance: Educational and Psychological Measurement Vol 62(2) Apr 2002, 197-226.
  • Bjorgvinsson, T., & Kerr, P. (1995). Use of a common language effect size statistic: American Journal of Psychiatry Vol 152(1) Jan 1995, 151.
  • Blok, H., Oostdam, R., Otter, M. E., & Overmaat, M. (2002). Computer-assisted instruction in support of beginning reading instruction: A review: Review of Educational Research Vol 72(1) Spr 2002, 101-130.
  • Bly, P. R. (2001). Understanding the effectiveness of ProMES: An analysis of indicators and contingencies. Dissertation Abstracts International: Section B: The Sciences and Engineering.
  • Bobko, P., Roth, P. L., & Bobko, C. (2001). Correcting the effect size of d for range restriction and unreliability: Organizational Research Methods Vol 4(1) Jan 2001, 46-61.
  • Bollen, K. A. (1990). Overall fit in covariance structure models: Two types of sample size effects: Psychological Bulletin Vol 107(2) Mar 1990, 256-259.
  • Bond, C. F., Jr., Wiitala, W. L., & Richard, F. D. (2003). Meta-Analysis of Raw Mean Differences: Psychological Methods Vol 8(4) Dec 2003, 406-418.
  • Book, A. S., & Quinsey, V. L. (2005). Re-examining the issues: A response to Archer et al.: Aggression and Violent Behavior Vol 10(6) Sep-Oct 2005, 637-646.
  • Book, A. S., Starzyk, K. B., & Quinsey, V. L. (2001). The relationship between testosterone and aggression: A meta-analysis: Aggression and Violent Behavior Vol 6(6) Nov-Dec 2001, 579-599.
  • Borges, A., Sanchez, A., & Canadas, I. (1996). The significance of the differences in means in small groups, with ordinal scales and in the absence of normal distribution: Psicologica Vol 17(3) 1996, 455-465.
  • Bosch, H., Steinkamp, F., & Boller, E. (2006). Examining Psychokinesis: The Interaction of Human Intention With Random Number Generators- A Meta-Analysis: Psychological Bulletin Vol 132(4) Jul 2006, 497-523.
  • Bosch, H., Steinkamp, F., & Boller, E. (2006). In the Eye of the Beholder: Reply to Wilson and Shadish (2006) and Radin, Nelson, Dobyns, and Houtkooper (2006): Psychological Bulletin Vol 132(4) Jul 2006, 533-537.
  • Boyle, M. H., & Pickles, A. R. (1998). Strategies to manipulate reliability: Impact on statistical associations: Journal of the American Academy of Child & Adolescent Psychiatry Vol 37(10) Oct 1998, 1077-1084.
  • Bradley, J. V. (1984). Antinonrobustness: A case study in the sociology of science: Bulletin of the Psychonomic Society Vol 22(5) Sep 1984, 463-466.
  • Bradley, M. T., & Gupta, R. D. (1997). Estimating the effect of the file drawer problem in meta-analysis: Perceptual and Motor Skills Vol 85(2) Oct 1997, 719-722.
  • Brady, F. (2004). Contextual interference: A meta-analytic study: Perceptual and Motor Skills Vol 99(1) Aug 2004, 116-126.
  • Brame, R. (2000). Investigating treatment effects in a domestic violence experiment with partially missing outcome data: Journal of Quantitative Criminology Vol 16(3) Sep 2000, 283-314.
  • Brewin, C. R. (2005). Risk Factor Effect Sizes in PTSD: What This Means for Intervention. New York, NY: Haworth Press.
  • Brokaw, D. K. (2002). Are SSRIs and TCAs equally effective for the treatment of panic disorder? : The Journal of Family Practice Vol 51(3) Mar 2002, 279.
  • Brossart, D. F., Parker, R. I., Olson, E. A., & Mahadevan, L. (2006). The Relationship Between Visual Analysis and Five Statistical Analyses in a Simple AB Single-Case Research Design: Behavior Modification Vol 30(5) Sep 2006, 531-563.
  • Burke, B. L., Arkowitz, H., & Menchola, M. (2003). The efficacy of motivational interviewing: A meta-analysis of controlled clinical trials: Journal of Consulting and Clinical Psychology Vol 71(5) Oct 2003, 843-861.
  • Burns, M. K. (2003). Reexamining Data from the National Reading Panel's Meta-Analysis: Implications for School Psychology: Psychology in the Schools Vol 40(6) Nov 2003, 605-612.
  • Bushman, B. J., & Wang, M. C. (1996). A procedure for combining sample standardized mean differences and vote counts to estimate the population standardized mean difference in fixed event models: Psychological Methods Vol 1(1) Mar 1996, 66-80.
  • Bushway, S., Brame, R., & Paternoster, R. (1999). Assessing stability and change in criminal offending: A comparison of random effects, semiparametric, and fixed effects modeling strategies: Journal of Quantitative Criminology Vol 15(1) Mar 1999, 23-61.
  • Busk, P. L., & Serlin, R. C. (1992). Meta-analysis for single-case research. Hillsdale, NJ, England: Lawrence Erlbaum Associates, Inc.
  • Cai, X. (2005). Stability of externalizing problem behaviors with onset in early childhood: A meta-analytic review. Dissertation Abstracts International Section A: Humanities and Social Sciences.
  • Campbell, J. I. D., & Thompson, V. A. (2002). More power to you: Simple power calculations for treatment effects with one degree of freedom: Behavior Research Methods, Instruments & Computers Vol 34(3) Aug 2002, 332-337.
  • Campbell, J. M. (2004). Statistical Comparison of Four Effect Sizes for Single-Subject Designs: Behavior Modification Vol 28(2) Mar 2004, 234-246.
  • Cankar, G., & Bajec, B. (2003). Effect size as a supplement to statistical significance testing: Psiholoska Obzorja/Horizons of Psychology Vol 12(2) 2003, 97-112.
  • Capraro, M. M., & Capraro, R. M. (2003). Exploring the APA Fifth Edition Publication Manual's impact on the analytic preferences of journal editorial board members: Educational and Psychological Measurement Vol 63(4) Aug 2003, 554-565.
  • Capraro, R. M. (2004). Statistical Significance, Effect Size Reporting, and Confidence Intervals: Best Reporting Strategies: Journal for Research in Mathematics Education Vol 35(1) Jan 2004, 57-62.
  • Capraro, R. M., & Capraro, M. M. (2002). Treatments of effect sizes and statistical significance tests in textbooks: Educational and Psychological Measurement Vol 62(5) Oct 2002, 771-782.
  • Carlson, K. D., & Schmidt, F. L. (1999). Impact of experimental design on effect size: Findings from the research literature on training: Journal of Applied Psychology Vol 84(6) Dec 1999, 851-862.
  • Carlton, P. L., & Strawderman, W. E. (1996). Evaluating cumulated research I: The inadequacy of traditional methods: Biological Psychiatry Vol 39(1) Jan 1996, 65-72.
  • Caspi, O. (2004). How good are we? A Meta-Analytic study of effect sizes in medicine. Dissertation Abstracts International: Section B: The Sciences and Engineering.
  • Center, B. A., Skiba, R. J., & Casey, A. (1985). A methodology for the quantitative synthesis of intra-subject design research: The Journal of Special Education Vol 19(4) Win 1985-1986, 387-400.
  • Champoux, J. E., & Peters, W. S. (1987). Form, effect size and power in moderated regression analysis: Journal of Occupational Psychology Vol 60(3) 1987, 243-255.
  • Chang, L. (1993). A power analysis of the test of homogeneity in effect-size meta-analysis: Dissertation Abstracts International.
  • Chen, M. J., & Fan, X. (1998). The relationship between variance components and mean difference effect size: Current Psychology: Developmental, Learning, Personality, Social Vol 17(4) Win 1998-1999, 301-312.
  • Cheung, S. F., & Chan, D. K. S. (2004). Dependent Effect Sizes in Meta-Analysis: Incorporating the Degree of Interdependence: Journal of Applied Psychology Vol 89(5) Oct 2004, 780-791.
  • Chow, S. L. (1988). Significance test or effect size? : Psychological Bulletin Vol 103(1) Jan 1988, 105-110.
  • Chwalisz, K. (2006). Statistical Versus Clinical Prediction:: From Assessment to Psychotherapy Process and Outcome: Counseling Psychologist Vol 34(3) May 2006, 391-399.
  • Clark-Carter, D. (2003). Effect size: The missing piece in the jigsaw: The Psychologist Vol 16(12) Dec 2003, 636-638.
  • Clark-Carter, D. (2005). The importance of considering effect size and statistical power in research. New York, NY: Oxford University Press.
  • Coe, R., & Soto, C. M. (2003). Effect Size: A guide for researchers and users: Revista de Psicologia Vol 21(1) 2003, 145-177.
  • Cogger, K. O. (2007). Rating rater improvement: A method for estimating increased effect size and reduction of clinical trial costs: Journal of Clinical Psychopharmacology Vol 27(4) Aug 2007, 418-420.
  • Cohen, J. (1992). A power primer: Psychological Bulletin Vol 112(1) Jul 1992, 155-159.
  • Cohen, J. (2003). A power primer. Washington, DC: American Psychological Association.
  • Cohn, L. D., & Becker, B. J. (2003). How Meta-Analysis Increases Statistical Power: Psychological Methods Vol 8(3) Sep 2003, 243-253.
  • Cole, J. A., & Burkhart, B. R. (1987). When a placebo is not a placebo: The value of effect size measures in assessing the validity of deception used in the balanced placebo design: British Journal of Addiction Vol 82(6) Jun 1987, 649-652.
  • Colliver, J. A. (2007). Effect-size measures and research in developmental and behavioral pediatrics: Journal of Developmental & Behavioral Pediatrics Vol 28(2) Apr 2007, 145-150.
  • Colom, R., Juan-Espinosa, M., & Garcia, L. F. (2001). The secular increase in test scores is a "Jensen effect." Personality and Individual Differences Vol 30(4) Mar 2001, 553-559.
  • Conboy, J. E. (2003). Some typical univariate measures of the magnitude of effect: Analise Psicologica Vol 21(2) Apr-Jun 2003, 145-158.
  • Conn, V. S., & Rantz, M. J. (2003). Research methods: Managing primary study quality in meta-analyses: Research in Nursing & Health Vol 26(4) Aug 2003, 322-333.
  • Connor, D. F., Boone, R. T., Steingard, R. J., Lopez, I. D., & Melloni, R. H., Jr. (2003). Psychopharmacology and Aggression: II. A meta-analysis of nonstimulant medication effects on overt aggression-related behaviors in youth with SED: Journal of Emotional and Behavioral Disorders Vol 11(3) Fal 2003, 157-168.
  • Connor, D. F., Glatt, S. J., Lopez, I. D., Jackson, D., & Melloni, R. H., Jr. (2002). Psychopharmacology and aggression. I: A meta-analysis of stimulant effects on overt/covert aggression-related behaviors in ADHD: Journal of the American Academy of Child & Adolescent Psychiatry Vol 41(3) Mar 2002, 253-261.
  • Cook, S. R. (2004). A Note on Testing for Homogeneity Among Effect Sizes Sharing a Common Control Group: Psychological Methods Vol 9(4) Dec 2004, 446-452.
  • Cooper, H. M. (1990). Moving beyond meta-analysis. New York, NY: Russell Sage Foundation.
  • Cormier, P. (1993). Commentary on the colloquium "Alternatives to classical statistical procedures." Canadian Psychology/Psychologie Canadienne Vol 34(4) Oct 1993, 446-460.
  • Corroyer, D., & Rouanet, H. (1994). On the importance of effect size and indicators of effect size in the statistical analysis of data: L'annee Psychologique Vol 94(4) Dec 1994, 607-623.
  • Cottrill, S. D. (1996). The psychological modulation of the immune system: A meta-analysis. Dissertation Abstracts International: Section B: The Sciences and Engineering.
  • Crits-Christoph, P. (1997). Limitations of the dodo bird verdict and the role of clinical trials in psychotherapy research: Comment on Wampold et al. (1997): Psychological Bulletin Vol 122(3) Nov 1997, 216-220.
  • McGraw, K. O. (1991). Problems with the BESD: A comment on Rosenthal's "How are we doing in soft psychology?" American Psychologist Vol 46(10) Oct 1991, 1084-1086.
  • McGraw, K. O., & Wong, S. P. (1992). A common language effect size statistic: Psychological Bulletin Vol 111(2) Mar 1992, 361-365.
  • McLeod, B. D., & Weisz, J. R. (2004). Using Dissertations to Examine Potential Bias in Child and Adolescent Clinical Trials: Journal of Consulting and Clinical Psychology Vol 72(2) Apr 2004, 235-251.
  • Meline, T., & Wang, B. (2004). Effect-size reporting practices in AJSLP and other ASHA journals, 1999-2003: American Journal of Speech-Language Pathology Vol 13(3) Aug 2004, 202-207.
  • Mellott, D. S. (2003). Measuring implicit attitudes and stereotypes: Increasing internal consistency reveals the convergent validity of iat and priming measures. Dissertation Abstracts International: Section B: The Sciences and Engineering.
  • Meyer, G. E. (1995). Power & Effect: A statistical utility for Macintosh and Windows systems: Behavior Research Methods, Instruments & Computers Vol 27(2) May 1995, 134-138.
  • Michalak, J., Kosfelder, J., Meyer, F., & Schulte, D. (2003). Measuring therapy outcome—pre-post effect sizes and retrospective measurement: Zeitschrift fur Klinische Psychologie und Psychotherapie: Forschung und Praxis Vol 32(2) 2003, 94-103.
  • Miller, N., Lee, J.-y., & Carlson, M. (1991). The validity of inferential judgments when used in theory-testing meta-analysis: Personality and Social Psychology Bulletin Vol 17(3) Jun 1991, 335-343.
  • Miller, W. R. (2005). Editorial: Motivational interviewing and the incredible shrinking treatment effect: Addiction Vol 100(4) Apr 2005, 421.
  • Moller, A. P., Thornhill, R., & Gangestad, S. W. (2005). Direct and indirect tests for publication bias: Asymmetry and sexual selection: Animal Behaviour Vol 70(3) Sep 2005, 497-506.
  • Moore, T. M., Scarpa, A., & Raine, A. (2002). A meta-analysis of serotonin metabolite 5-HIAA and antisocial behavior: Aggressive Behavior Vol 28(4) 2002, 299-316.
  • Morris, S. B. (2000). Distribution of the standardized mean change effect size for meta-analysis on repeated measures: British Journal of Mathematical and Statistical Psychology Vol 53(1) May 2000, 17-29.
  • Morris, S. B., & DeShon, R. P. (2002). Combining effect size estimates in meta-analysis with repeated measures and independent-groups designs: Psychological Methods Vol 7(1) Mar 2002, 105-125.
  • Morse, D. T. (1998). MINSIZE: A computer program for obtaining minimum sample size as an indicator of effect size: Educational and Psychological Measurement Vol 58(1) Feb 1998, 142-153.
  • Morse, D. T. (1999). MINSIZE2: A computer program for determining effect size and minimum sample size for statistical significance for univariate, multivariate, and nonparametric tests: Educational and Psychological Measurement Vol 59(3) Jun 1999, 518-531.
  • Moyer, A., & Finney, J. W. (2002). Outcomes for untreated individuals involved in randomized trials of alcohol treatment: Journal of Substance Abuse Treatment Vol 23(3) Oct 2002, 247-252.
  • Moyer, A., Finney, J. W., Swearingen, C. E., & Vergun, P. (2002). Brief interventions for alcohol problems: A meta-analytic review of controlled investigations in treatment-seeking and non-treatment-seeking populations: Addiction Vol 97(3) Mar 2002, 279-292.
  • Murray, L. W., & Dosser, D. A. (1987). How significant is a significant difference? Problems with the measurement of magnitude of effect: Journal of Counseling Psychology Vol 34(1) Jan 1987, 68-72.
  • Natesan, P., & Thompson, B. (2007). Extending Improvement-Over-Chance I-Index Effect Size Simulation Studies to Cover Some Small-Sample Cases: Educational and Psychological Measurement Vol 67(1) Feb 2007, 59-72.
  • Nordhus, I. H., & Pallesen, S. (2003). Psychological treatment of late-life anxiety: An empirical review: Journal of Consulting and Clinical Psychology Vol 71(4) Aug 2003, 643-651.
  • Norman, G. (2003). The Effectiveness and the Effects of Effect Sizes: Advances in Health Sciences Education Vol 8(3) 2003, 183-187.
  • Nouri, H., & Greenberg, R. H. (1995). Meta-analytic procedures for estimation of effect sizes in experiments using complex analysis of variance: Journal of Management Vol 21(4) 1995, 801-812.
  • Nugent, W. R. (2006). The Comparability of the Standardized Mean Difference Effect Size Across Different Measures of the Same Construct: Measurement Considerations: Educational and Psychological Measurement Vol 66(4) Aug 2006, 612-623.
  • Nummenmaa, L. (2005). Effect size in psychological science: Psykologia Vol 40(5-6) 2005, 559-567.
  • Nummenmaa, L., & Niemi, P. (2004). Inducing Affective States With Success-Failure Manipulations: A Meta-Analysis: Emotion Vol 4(2) Jun 2004, 207-214.
  • Oh, I.-S. (2007). In Search of Ideal Methods of Research Synthesis Over 30 Years (1977-2006): Comparison of Hunter-Schmidt Meta-Analysis Methods With Other Methods and Recent Improvements: International Journal of Testing Vol 7(1) 2007, 89-93.
  • Okun, M. A., & Lockwood, C. M. (2003). Does level of assessment moderate the relation between social support and social negativity? A meta-analysis: Basic and Applied Social Psychology Vol 25(1) Mar 2003, 15-35.
  • Olejnik, S., & Algina, J. (2000). Measures of effect size for comparative studies: Applications, interpretations, and limitations: Contemporary Educational Psychology Vol 25(3) Jul 2000, 241-286.
  • Olejnik, S., & Algina, J. (2003). Generalized Eta and Omega Squared Statistics: Measures of Effect Size for Some Common Research Designs: Psychological Methods Vol 8(4) Dec 2003, 434-447.
  • Olive, M. L., & Smith, B. W. (2005). Effect Size Calculations and Single Subject Designs: Educational Psychology Vol 25(2-3) Apr-Jun 2005, 313-324.
  • Onwuegbuzie, A. J. (2003). Effect Sizes in Qualitative Research: A Prolegomenon: Quality & Quantity: International Journal of Methodology Vol 37(4) Nov 2003, 393-409.
  • Onwuegbuzie, A. J., & Leech, N. L. (2004). Post Hoc Power: A Concept Whose Time Has Come: Understanding Statistics Vol 3(4) 2004, 201-230.
  • Osborne, J. W. (2008). Sweating the small stuff in educational psychology: How effect size and power reporting failed to change from 1969 to 1999, and what that means for the future of changing practices: Educational Psychology Vol 28(2) Apr 2008, 1-10.
  • Ottenbacher, K. J. (1984). Measures of relationship strength in occupational therapy research: Occupational Therapy Journal of Research Vol 4(4) Oct 1984, 271-285.
  • Otto, M. W., Tuby, K. S., Gould, R. A., McLean, R. Y. S., & Pollack, M. H. (2001). An effect-size analysis of the relative efficacy and tolerability of serotonin selective reuptake inhibitors for panic disorder: American Journal of Psychiatry Vol 158(12) Dec 2001, 1989-1992.
  • Overton, R. C. (1998). A comparison of fixed-effects and mixed (random-effects) models for meta-analysis tests of moderator variable effects: Psychological Methods Vol 3(3) Sep 1998, 354-379.
  • Ozer, D. J. (1985). Correlation and the coefficient of determination: Psychological Bulletin Vol 97(2) Mar 1985, 307-315.
  • Ozer, D. J. (2007). Evaluating effect size in personality research. New York, NY: Guilford Press.
  • Parker, R. I., Brossart, D. F., Vannest, K. J., Long, J. R., De-Alba, R. G., Baugh, F. G., et al. (2005). Effect Sizes in Single Case Research: How Large is Large? : School Psychology Review Vol 34(1) 2005, 116-132.
  • Parker, R. I., & Hagan-Burke, S. (2007). Median-based overlap analysis for single case data: A second study: Behavior Modification Vol 31(6) Nov 2007, 919-936.
  • Parker, R. I., & Hagan-Burke, S. (2007). Useful effect size interpretations for single case research: Behavior Therapy Vol 38(1) Mar 2007, 95-105.
  • Parker, R. I., Hagan-Burke, S., & Vannest, K. (2007). Percentage of All Non-Overlapping Data (PAND): An Alternative to PND: The Journal of Special Education Vol 40(4) Win 2007, 194-204.
  • Parker, S. (1995). The "difference of means" may not be the "effect size": American Psychologist Vol 50(12) Dec 1995, 1101-1102.
  • Parsons, T. D., & Nelson, N. W. (2004). Paradigm Shift in Social Science Research: A Significance Testing and Effect Size Estimation Rapprochement? : PsycCRITIQUES Vol 49 (Suppl 3), 2004.
  • Pato, M. T., & Gluhoski, V. (1992). Serotonin and effect sizes of antiobsessive agents: American Journal of Psychiatry Vol 149(3) Mar 1992, 420-421.
  • Paul, K. M., & Plucker, J. A. (2004). Two steps forward, one step back: Effect size reporting in gifted education research from 1995-2000: Roeper Review Vol 26(2) Win 2004, 68-72.
  • Peach, R. K. (2003). From the editor: American Journal of Speech-Language Pathology Vol 12(3) Aug 2003, 258.
  • Pedersen, S. (2003). Effect sizes and "what if" analyses as supplements to statistical significance tests: Journal of Early Intervention Vol 25(4) Sum 2003, 310-319.
  • Penny, J. A. (2003). Exploring differential item functioning in a 360-degree assessment: Rater source and method of delivery: Organizational Research Methods Vol 6(1) Jan 2003, 61-79.
  • Peterson, R. A., Albaum, G., & Beltramini, R. F. (1985). A meta-analysis of effect sizes in consumer behavior experiments: Journal of Consumer Research Vol 12(1) Jun 1985, 97-103.
  • Peterson, R. A., & Brown, S. P. (2005). On the Use of Beta Coefficients in Meta-Analysis: Journal of Applied Psychology Vol 90(1) Jan 2005, 175-181.
  • Phillips, G. A. (2004). The aggregation of interaction effect sizes from primary psychotherapy outcome studies: A meta-analysis. Dissertation Abstracts International: Section B: The Sciences and Engineering.
  • Pierce, C. A., Block, R. A., & Aguinis, H. (2004). Cautionary Note on Reporting Eta-Squared Values From Multifactor ANOVA Designs: Educational and Psychological Measurement Vol 64(6) Dec 2004, 916-924.
  • Pigott, T. D. (2001). Missing predictors in models of effect size: Evaluation & the Health Professions Vol 24(3) Sep 2001, 277-307.
  • Plucker, J. A. (1997). Debunking the myth of the "highly significant" result: Effect sizes in gifted education research: Roeper Review Vol 20(2) Dec 1997, 122-126.
  • Posavac, E. J., & Miller, T. Q. (1990). Some problems caused by not having a conceptual foundation for health research: An illustration from studies of the psychological effects of abortion: Psychology & Health Vol 5(1) 1990, 13-23.
  • Prendergast, M. L., Podus, D., Chang, E., & Urada, D. (2002). The effectiveness of drug abuse treatment: A meta-analysis of comparison group studies: Drug and Alcohol Dependence Vol 67(1) Jun 2002, 53-72.
  • Prendergast, M. L., Podus, D., Chang, E., & Urada, D. (2006). Erratum to "The effectiveness of drug abuse treatment: A meta-analysis of comparison group studies": Drug and Alcohol Dependence Vol 84(1) Sep 2006, 133.
  • Prentice, D. A., & Miller, D. T. (1992). When small effects are impressive: Psychological Bulletin Vol 112(1) Jul 1992, 160-164.
  • Prentice, D. A., & Miller, D. T. (2003). When small effects are impressive. Washington, DC: American Psychological Association.
  • Radin, D., Nelson, R., Dobyns, Y., & Houtkooper, J. (2006). Reexamining Psychokinesis: Comment on Bosch, Steinkamp, and Boller (2006): Psychological Bulletin Vol 132(4) Jul 2006, 529-532.
  • Ramirez, M. T. G., & Botella, J. (2006). Comparison among effect-size indices for dichotomized outcomes in meta-analysis: Psicologica Vol 27(2) 2006, 269-293.
  • Raudenbush, S. W. (1994). Random effects models. New York, NY: Russell Sage Foundation.
  • Raudenbush, S. W., Becker, B. J., & Kalaian, H. (1988). Modeling multivariate effect sizes: Psychological Bulletin Vol 103(1) Jan 1988, 111-120.
  • Raudenbush, S. W., & Bryk, A. S. (1985). Empirical Bayes meta-analysis: Journal of Educational Statistics Vol 10(2) Sum 1985, 75-98.
  • Raudenbush, S. W., & Liu, X. (2000). Statistical power and optimal design for multisite randomized trials: Psychological Methods Vol 5(2) Jun 2000, 199-213.
  • Ray, J. W. (1996). An evaluation of the agreement between exact and inexact effect sizes in meta-analysis. Dissertation Abstracts International: Section B: The Sciences and Engineering.
  • Ray, J. W., & Shadish, W. R. (1996). How interchangeable are different estimators of effect size? : Journal of Consulting and Clinical Psychology Vol 64(6) Dec 1996, 1316-1325.
  • Ray, J. W., & Shadish, W. R. (1998). "How interchangeable are different estimators of effect size?": Correction to Ray and Shadish (1996): Journal of Consulting and Clinical Psychology Vol 66(3) Jun 1998, 532.
  • Ree, M. J. (2003). Review of Score reliability: Contemporary thinking on reliability issues: Personnel Psychology Vol 56(4) Win 2003, 1090-1092.
  • Reichardt, C. S. (2006). The principle of parallelism in the design of studies to estimate treatment effects: Psychological Methods Vol 11(1) Mar 2006, 1-18.
  • Rice, M. E., & Harris, G. T. (2005). Comparing effect sizes in follow-up studies: ROC Area, Cohen's d, and r: Law and Human Behavior Vol 29(5) Oct 2005, 615-620.
  • Richardson, J. T. E. (1996). Measures of effect size: Behavior Research Methods, Instruments & Computers Vol 28(1) Feb 1996, 12-22.
  • Riopelle, A. J. (2000). Are effect sizes and confidence levels problems for or solutions to the null hypothesis test? : Journal of General Psychology Vol 127(2) Apr 2000, 198-216.
  • Rispens, J., Aleman, A., & Goudena, P. P. (1997). Prevention of child sexual abuse victimization: A meta-analysis of school programs: Child Abuse & Neglect Vol 21(10) Oct 1997, 975-987.
  • Roberts, J. K., & Henson, R. K. (2002). Correction for bias in estimating effect sizes: Educational and Psychological Measurement Vol 62(2) Apr 2002, 241-253.
  • Robey, R. R. (2004). Reporting point and interval estimates of effect-size for planned contrasts: Fixed within effect analyses of variance: Journal of Fluency Disorders Vol 29(4) Win 2004, 307-341.
  • Robinson, D. H., Fouladi, R. T., Williams, N. J., & Bera, S. J. (2002). Some effects of including effect size and "What If" in formation: Journal of Experimental Education Vol 70(4) Sum 2002, 365-382.
  • Robinson, D. H., Whittaker, T. A., Williams, N. J., & Beretvas, S. N. (2003). It's Not Effect Sizes So Much as Comments About Their Magnitude That Mislead Readers: Journal of Experimental Education Vol 72(1) Fal 2003, 51-64.
  • Rochelle, K. S. H., & Talcott, J. B. (2006). Impaired balance in developmental dyslexia? A meta-analysis of the contending evidence: Journal of Child Psychology and Psychiatry Vol 47(11) Nov 2006, 1159-1166.
  • Romanoski, J., & Douglas, G. (2002). Rasch-transformed raw scores and two-way ANOVA: A simulation analysis: Journal of Applied Measurement Vol 3(4) 2002, 421-430.
  • Rosendahl, E. (2007). Effect size underestimates the effects of interventions among older people with severe physical or cognitive impairments? : Journal of the American Geriatrics Society Vol 55(8) Aug 2007, 1315-1316.
  • Rosenthal, J. A. (1996). Qualitative descriptors of strength of association and effect size: Journal of Social Service Research Vol 21(4) 1996, 37-59.
  • Rosenthal, R. (1990). How are we doing in soft psychology? : American Psychologist Vol 45(6) Jun 1990, 775-777.
  • Rosenthal, R. (1991). Effect sizes: Pearson's correlation, its display via the BESD, and alternative indices: American Psychologist Vol 46(10) Oct 1991, 1086-1087.
  • Rosenthal, R. (1994). Parametric measures of effect size. New York, NY: Russell Sage Foundation.
  • Rosenthal, R. (2006). Praising Pearson Properly: Correlations, Contrasts, and Construct Validity. Mahwah, NJ: Lawrence Erlbaum Associates Publishers.
  • Rosenthal, R., Rosnow, R. L., & Rubin, D. B. (2000). Contrasts and effect sizes in behavioral research: A correlational approach. New York, NY: Cambridge University Press.
  • Rosenthal, R., & Rubin, D. B. (1986). Meta-analytic procedures for combining studies with multiple effect sizes: Psychological Bulletin Vol 99(3) May 1986, 400-406.
  • Rosenthal, R., & Rubin, D. B. (1989). Effect size estimation for one-sample multiple-choice-type data: Design, analysis, and meta-analysis: Psychological Bulletin Vol 106(2) Sep 1989, 332-337.
  • Rosenthal, R., & Rubin, D. B. (1991). Further issues in effect size estimation for one-sample multiple-choice-type data: Psychological Bulletin Vol 109(2) Mar 1991, 351-352.
  • Rosenthal, R., & Rubin, D. B. (1992). An effect size estimator for parapsychological research: European Journal of Parapsychology Vol 9 1992-1993, 1-11.
  • Rosenthal, R., & Rubin, D. B. (1994). The counternull value of an effect size: A new statistic: Psychological Science Vol 5(6) Nov 1994, 329-334.
  • Rosenthal, R., & Rubin, D. B. (2003). r-sub(equivalent): A Simple Effect Size Indicator: Psychological Methods Vol 8(4) Dec 2003, 492-496.
  • Rosnow, R. L., & Rosenthal, R. (1988). Focused tests of significance and effect size estimation in counseling psychology: Journal of Counseling Psychology Vol 35(2) Apr 1988, 203-208.
  • Rosnow, R. L., & Rosenthal, R. (1992). Focused tests of significance and effect size estimation in counseling psychology. Washington, DC: American Psychological Association.
  • Rosnow, R. L., & Rosenthal, R. (1996). Computing contrasts, effect sizes, and counternulls on other people's published data: General procedures for research consumers: Psychological Methods Vol 1(4) Dec 1996, 331-340.
  • Rosnow, R. L., & Rosenthal, R. (2002). Contrasts and correlations in theory assessment: Journal of Pediatric Psychology Vol 27(1) Jan 2002, 59-66.
  • Rosnow, R. L., & Rosenthal, R. (2003). Effect sizes for experimenting psychologists: Canadian Journal of Experimental Psychology/Revue canadienne de psychologie experimentale Vol 57(3) Sep 2003, 221-237.
  • Rosnow, R. L., & Rosenthal, R. (2008). Assessing the effect size of outcome research. New York, NY: Oxford University Press.
  • Rosnow, R. L., Rosenthal, R., & Rubin, D. B. (2000). Contrasts and correlations in effect-size estimation: Psychological Science Vol 11(6) Nov 2000, 446-453.
  • Rossi, J. S. (1985). Statistical power of psychological research: The artifactual basis of controversial results: Dissertation Abstracts International.
  • Rossi, J. S. (1985). Tables of effect size for z score tests of differences between proportions and between correlation coefficients: Educational and Psychological Measurement Vol 45(4) Win 1985, 737-743.
  • Rothstein, H. R., McDaniel, M. A., & Borenstein, M. (2002). Meta-analysis: A review of quantitative cumulation methods. San Francisco, CA: Jossey-Bass.
  • Rouder, J. N., & Morey, R. D. (2005). Relational and Arelational Confidence Intervals: A Comment on Fidler, Thomason, Cumming, Finch, and Leeman (2004): Psychological Science Vol 16(1) Jan 2005, 77-79.
  • Rubin, D. B. (1992). Meta-analysis: Literature synthesis or effect-size surface estimation? : Journal of Educational Statistics Vol 17(4) Win 1992, 363-374.
  • Rubin, D. B. (1993). Statistical tools for meta-analysis: From straightforward to esoteric. New York, NY ; Paris, France: Cambridge University Press; Editions de la Maison des Sciences de l'Homme.
  • Rudas, T., & Zwick, R. (1997). Estimating the importance of differential item functioning: Journal of Educational and Behavioral Statistics Vol 22(1) Spr 1997, 31-45.
  • Russell, C. J., Pinto, J. K., & Bobko, P. (1991). Appropriate moderated regression and inappropriate research strategy: A demonstration of information loss due to scale coarseness: Applied Psychological Measurement Vol 15(3) Sep 1991, 257-266.
  • Rutledge, T., & Loh, C. (2004). Effect sizes and statistical testing in the determination of clinical significance in behavioral medicine research: Annals of Behavioral Medicine Vol 27(2) Spr 2004, 138-145.
  • Sack, M., Lempa, W., & Lamprecht, F. (2001). Study quality and effect-sizes: A meta-analysis of EMDR-treatment for posttraumatic stress disorder: Psychotherapie Psychosomatik Medizinische Psychologie Vol 51(9-10) Sep-Oct 2001, 350-355.
  • Sailor, K. M., & Antoine, M. (2005). Is memory for stimulus magnitude Bayesian? : Memory & Cognition Vol 33(5) Jul 2005, 840-851.
  • Sanchez-Meca, J., & Marin-Martinez, F. (1998). Weighting by inverse variance or by sample size in meta-analysis: A simulation study: Educational and Psychological Measurement Vol 58(2) Apr 1998, 211-220.
  • Sanchez-Meca, J., Marin-Martinez, F., & Chacon-Moscoso, S. (2003). Effect-Size Indices for Dichotomized Outcomes in Meta-Analysis: Psychological Methods Vol 8(4) Dec 2003, 448-467.
  • Sanderson, K., Andrews, G., Corry, J., & Lapsley, H. (2004). Using the effect size to model change in preference values from descriptive health status: Quality of Life Research: An International Journal of Quality of Life Aspects of Treatment, Care & Rehabilitation Vol 13(7) Sep 2004, 1255-1264.
  • Sapp, M. (2004). Confidence Intervals Within Hypnosis Research: Sleep and Hypnosis Vol 6(4) 2004, 169-176.
  • Schalm, R. L., & Kelloway, E. K. (2001). The relationship between response rate and effect size in occupational health psychology research: Journal of Occupational Health Psychology Vol 6(2) Apr 2001, 160-163.
  • Schatz, P., Jay, K. A., McComb, J., & McLaughlin, J. R. (2005). Misuse of statistical tests in Archives of Clinical Neuropsychology publications: Archives of Clinical Neuropsychology Vol 20(8) Dec 2005, 1053-1059.
  • Schmidt, F. L., & Hunter, J. E. (2003). Meta-analysis. Hoboken, NJ: John Wiley & Sons Inc.
  • Schnurr, P. P. (1989). Measuring amount of symptom change in the diagnosis of premenstrual syndrome: Psychological Assessment: A Journal of Consulting and Clinical Psychology Vol 1(4) Dec 1989, 277-283.
  • Schooler, N. R. (2001). The statistical comparison of clinical trials: Journal of Clinical Psychiatry Vol 62(Suppl9) 2001, 35-39.
  • Schulze, R. (2004). Meta-analysis: A comparison of approaches. Ashland, OH: Hogrefe & Huber Publishers.
  • Schulze, R. (2007). Current methods for meta-analysis: Approaches, issues, and developments: Zeitschrift fur Psychologie/Journal of Psychology Vol 215(2) 2007, 90-103.
  • Schuster, C. (2004). How measurement error in dichotomous predictors affects the analysis of continuous criteria: Psychology Science Vol 46(1) 2004, 128-136.
  • Sedlmeier, P., & Gigerenzer, G. (1989). Do studies of statistical power have an effect on the power of studies? : Psychological Bulletin Vol 105(2) Mar 1989, 309-316.
  • Sedlmeier, P., & Gigerenzer, G. (1992). Do studies of statistical power have an effect on the power of studies? Washington, DC: American Psychological Association.
  • Sedlmeier, P., & Kilinc, B. (2002). Beyond Uncritical Significance Testing: Contrasts and Effect Sizes: PsycCRITIQUES Vol 47 (4), Aug, 2002.
  • Seignourel, P., & Albarracin, D. (2002). Calculating Effect Sizes for Designs with Between-Subjects and Within-Subjects Factors: Methods for Partially Reported Statistics in Meta-analysis: Metodologia de las Ciencias del Comportamiento Vol 4(2) 2002, 273-289.
  • Seltzer, M. H. (1993). Sensitivity analysis for fixed effects in the hierarchical model: A Gibbs sampling approach: Journal of Educational Statistics Vol 18(3) Fal 1993, 207-235.
  • Serlin, R. C., Wampold, B. E., & Levin, J. R. (2003). Should Providers of Treatment Be Regarded as a Random Factor? If It Ain't Broke, Don't "Fix" It: A Comment on Siemer and Joormann (2003): Psychological Methods Vol 8(4) Dec 2003, 524-534.
  • Shadish, W. R., & Haddock, C. K. (1994). Combining estimates of effect size. New York, NY: Russell Sage Foundation.
  • Shadish, W. R., & Ragsdale, K. (1996). Random versus nonrandom assignment in controlled experiments: Do you get the same answer? : Journal of Consulting and Clinical Psychology Vol 64(6) Dec 1996, 1290-1305.
  • Shaffer, J. P. (1991). Comment on "Effect size estimation for one-sample multiple-choice-type data: Design, analysis, and meta-analysis" by Rosenthal and Rubin (1989): Psychological Bulletin Vol 109(2) Mar 1991, 348-350.
  • Sheridan, S. M., Eagle, J. W., Cowan, R. J., & Mickelson, W. (2001). The effects of conjoint behavioral consultation results of a 4-year investigation: Journal of School Psychology Vol 39(5) Sep-Oct 2001, 361-385.
  • Sherwood, S. J., & Roe, C. A. (2003). A review of dream ESP studies conducted since the Maimonides dream ESP programme. Charlottesville, VA: Imprint Academic.
  • Shrum, L. J. (2007). The Implications of Survey Method for Measuring Cultivation Effects: Human Communication Research Vol 33(1) Jan 2007, 64-80.
  • Siemer, M., & Joormann, J. (2003). Assumptions and Consequences of Treating Providers in Therapy Studies as Fixed Versus Random Effects: Reply to Crits-Christoph, Tu, and Gallop (2003) and Serlin, Wampold, and Levin (2003): Psychological Methods Vol 8(4) Dec 2003, 535-544.
  • Siemer, M., & Joormann, J. (2003). Power and Measures of Effect Size in Analysis of Variance With Fixed Versus Random Nested Factors: Psychological Methods Vol 8(4) Dec 2003, 497-517.
  • Sink, C. A., & Stroh, H. R. (2006). Practical Significance: The Use of Effect Sizes in School Counseling Research: Professional School Counseling Vol 9(5,SpecIssue) Jun 2006, 401-411.
  • Small, G. W., Schneider, L. S., Hamilton, S. H., & Bystritsky, A. (1996). Site variability in a multisite geriatric depression trial: International Journal of Geriatric Psychiatry Vol 11(12) Dec 1996, 1089-1095.
  • Smith, C. J. (1996). Pluralistic ignorance: An integration of perceptions of difference. Dissertation Abstracts International: Section B: The Sciences and Engineering.
  • Smith, M., Wells, J., & Borrie, M. (2006). Treatment Effect Size of Memantine Therapy in Alzheimer Disease and Vascular Dementia: Alzheimer Disease & Associated Disorders Vol 20(3) Jul-Sep 2006, 133-137.
  • Smithson, M. (2001). Correct confidence intervals for various regression effect sizes and parameters: The importance of noncentral distributions in computing intervals: Educational and Psychological Measurement Vol 61(4) Aug 2001, 605-632.
  • Snyder, P., & Lawson, S. (1993). Evaluating results using corrected and uncorrected effect size estimates: Journal of Experimental Education Vol 61(4) Sum 1993, 334-349.
  • Sobel, M. E. (1990). Effect analysis and causation in linear structural equation models: Psychometrika Vol 55(3) Sep 1990, 495-515.
  • Sohn, D. (1982). Sex differences in achievement self-attributions: An effect-size analysis: Sex Roles Vol 8(4) Apr 1982, 345-357.
  • Sporer, S. L., & Schwandt, B. (2006). Paraverbal Indicators of Deception: A Meta-analytic Synthesis: Applied Cognitive Psychology Vol 20(4) May 2006, 421-446.
  • Stajkovic, A. D. (1999). Fitting Parametric Fixed Effect Categorical Models to Effect Sizes: A Neglected Meta-Analytic Approach in Organizational Studies: Organizational Research Methods Vol 2(1) Jan 1999, 88-102.
  • Stam, R., van Laar, T.-J., Akkermans, L. M. A., & Wiegant, V. M. (2002). Variability factors in the expression of stress-induced behavioural sensitisation: Behavioural Brain Research Vol 132(1) Apr 2002, 69-76.
  • Standley, J. M. (1996). A brief introduction to meta-analysis: Journal of Research in Music Education Vol 44(2) Sum 1996, 101-104.
  • Stark, S., Chernyshenko, O. S., & Drasgow, F. (2004). Examining the Effects of Differential Item (Functioning and Differential) Test Functioning on Selection Decisions: When Are Statistically Significant Effects Practically Important? : Journal of Applied Psychology Vol 89(3) Jun 2004, 497-508.
  • Steiger, J. H. (2004). Beyond the F Test: Effect Size Confidence Intervals and Tests of Close Fit in the Analysis of Variance and Contrast Analysis: Psychological Methods Vol 9(2) Jun 2004, 164-182.
  • Steinberg, L., & Thissen, D. (2006). Using Effect Sizes for Research Reporting: Examples Using Item Response Theory to Analyze Differential Item Functioning: Psychological Methods Vol 11(4) Dec 2006, 402-415.
  • Stigleitner, I. (1966). The influence of irrelevant, concomittant attributes on the exponent of a psychophysical function. (Experiments of weight-size-effects): Zeitschrift fur Experimentelle und Angewandte Psychologie 13(2) 1966, 334-344.
  • Strahan, R. F. (1991). Remarks on the binomial effect size display: American Psychologist Vol 46(10) Oct 1991, 1083-1084.
  • Strohmer, D. C., & Arm, J. R. (2006). The More Things Change, the More They Stay the Same: Reaction to AEgisdottir et al.: Counseling Psychologist Vol 34(3) May 2006, 383-390.
  • Strube, M. J. (1988). Some comments on the use of magnitude-of-effect estimates: Journal of Counseling Psychology Vol 35(3) Jul 1988, 342-345.
  • Strube, M. J., & Goldstein, M. D. (1995). A computer program that demonstrates the difference between main effects and interactions: Teaching of Psychology Vol 22(3) Oct 1995, 207-208.
  • Sun, S. (2007). The role of nonnormality and model specification in testing latent variable interactions: A Monte Carlo study. Dissertation Abstracts International Section A: Humanities and Social Sciences.
  • Susman, E. B. (1998). Cooperative learning: A review of factors that increase the effectiveness of cooperative computer-based instruction: Journal of Educational Computing Research Vol 18(4) 1998, 303-322.
  • Swanson, H. L. (1999). Instructional components that predict treatment outcomes for students with learning disabilities: Support for a combined strategy and direct instruction model: Learning Disabilities Research & Practice Vol 14(3) Sum 1999, 129-140.
  • Swanson, H. L. (2000). What instruction works for students with learning disabilities? Summarizing the results from a meta-analysis of intervention studies. Mahwah, NJ: Lawrence Erlbaum Associates Publishers.
  • Swanson, H. L., & Sachse-Lee, C. (2000). A meta-analysis of single-subject-design intervention research for students with LD: Journal of Learning Disabilities Vol 33(2) Mar-Apr 2000, 114-136.
  • Swim, J. K. (1994). Perceived versus meta-analytic effect sizes: An assessment of the accuracy of gender stereotypes: Journal of Personality and Social Psychology Vol 66(1) Jan 1994, 21-36.
  • Tachibana, T. (1984). A critical view of the utility of positive controls in a test battery: Neurobehavioral Toxicology & Teratology Vol 6(2) Mar 1984, 155-159.
  • Tachibana, T., Terada, Y., Fukunishi, K., & Tanimura, T. (1996). Estimated magnitude of behavioural effects of phenytoin in rats and its reproducibility: A collaborative behavioral teratology study in Japan: Physiology & Behavior Vol 60(3) Sep 1996, 941-952.
  • Tatsuoka, M. (1993). Effect size. Hillsdale, NJ, England: Lawrence Erlbaum Associates, Inc.
  • Taylor, M. J., & White, K. R. (1992). An evaluation of alternative methods for computing standardized mean difference effect size: Journal of Experimental Education Vol 61(1) Fal 1992, 63-72.
  • Thomas, H. (1986). Effect size standard errors for the non-normal non-identically distributed case: Journal of Educational Statistics Vol 11(4) Win 1986, 293-303.
  • Thompson, B. (1986). ANOVA versus regression analysis of ATI designs: An empirical investigation: Educational and Psychological Measurement Vol 46(4) Win 1986, 917-928.
  • Thompson, B. (1988). CANPOW: A program that estimates effect or sample sizes required for canonical correlation analysis: Educational and Psychological Measurement Vol 48(3) Fal 1988, 693-696.
  • Thompson, B. (1989). Statistical significance, result importance, and result generalizability: Three noteworthy but somewhat different issues: Measurement and Evaluation in Counseling and Development Vol 22(1) Apr 1989, 2-6.
  • Thompson, B. (1999). Improving research clarity and usefulness with effect size indices as supplements to statistical significance tests: Exceptional Children Vol 65(3) Spr 1999, 329-337.
  • Thompson, B. (1999). Statistical significance tests, effect size reporting and the vain pursuit of pseudo-objectivity: Theory & Psychology Vol 9(2) Apr 1999, 191-196.
  • Thompson, B. (2002). "Statistical," "practical," and "clinical": How many kinds of significance do counselors need to consider? : Journal of Counseling & Development Vol 80(1) Win 2002, 64-71.
  • Thompson, B. (2006). Research Synthesis: Effect Sizes. Mahwah, NJ: Lawrence Erlbaum Associates Publishers.
  • Thompson, B. (2006). Review of Effect Sizes for Research: A Broad Practical Approach: Applied Psychological Measurement Vol 30(1) Jan 2006, 75-77.
  • Thompson, B. (2006). Role of effect sizes in contemporary research in counseling: Counseling and Values Vol 50(3) Apr 2006, 176-186.
  • Thompson, B. (2007). Effect sizes, confidence intervals, and confidence intervals for effect sizes: Psychology in the Schools Vol 44(5) May 2007, 423-432.
  • Thompson, B., & Noferi, G. (2003). Statistical, practical, clinical: How many types of significance should be considered in counseling research? : Bollettino di Psicologia Applicata No 240 May-Aug 2003, 3-13.
  • Thompson, K. N., & Schumacker, R. E. (1997). An evaluation of Rosenthal and Rubin's binomial effect size display: Journal of Educational and Behavioral Statistics Vol 22(1) Spr 1997, 109-117.
  • Thum, Y. M. (2003). Measuring Progress Toward a Goal: Estimating Teacher Productivity Using a Multivariate Multilevel Model for Value-Added Analysis: Sociological Methods & Research Vol 32(2) Nov 2003, 153-207.
  • Thurber, S. (1985). Effect size estimates in chemical aversion treatments of alcoholism: Journal of Clinical Psychology Vol 41(2) Mar 1985, 285-287.
  • Tillitski, C. J. (1990). A meta-analysis of estimated effect sizes for group versus individual versus control treatments: International Journal of Group Psychotherapy Vol 40(2) Apr 1990, 215-224.
  • Timm, N. H. (1999). A note on testing for multivariate effect sizes: Journal of Educational and Behavioral Statistics Vol 24(2) Sum 1999, 132-145.
  • Timm, N. H. (1999). Testing multivariate effect sizes in multiple-endpoint studies: Multivariate Behavioral Research Vol 34(4) 1999, 457-465.
  • Timm, N. H. (2002). Testing non-nested multivariate effect size models in meta-analysis: Journal of Educational and Behavioral Statistics Vol 27(4) Win 2002, 321-333.
  • Trusty, J., Thompson, B., & Petrocelli, J. V. (2004). Practical Guide for Reporting Effect Size in Quantitative Research in the Journal of Counseling & Development: Journal of Counseling & Development Vol 82(1) Win 2004, 107-110.
  • Vacha-Haase, T. (2001). Statistical significance should not be considered one of life's guarantees: Effect sizes are needed: Educational and Psychological Measurement Vol 61(2) Apr 2001, 219-224.
  • Vacha-Haase, T., Kogan, L. R., & Thompson, B. (2000). Sample compositions and variabilities in published studies versus those in test manuals: Validity of score reliability inductions: Educational and Psychological Measurement Vol 60(4) Aug 2000, 509-522.
  • Vacha-Haase, T., Nilsson, J. E., Reetz, D. R., Lance, T. S., & Thompson, B. (2000). Reporting practices and APA editorial policies regarding statistical significance and effect size: Theory & Psychology Vol 10(3) Jun 2000, 413-425.
  • Vacha-Haase, T., & Thompson, B. (2004). How to Estimate and Interpret Various Effect Sizes: Journal of Counseling Psychology Vol 51(4) Oct 2004, 473-481.
  • Van Den Noortgate, W., & Onghena, P. (2003). Estimating the mean effect size in meta-analysis: Bias, precision, and mean squared error of different weighting methods: Behavior Research Methods, Instruments & Computers Vol 35(4) Nov 2003, 504-511.
  • Van den Noortgate, W., & Onghena, P. (2003). Hierarchical linear models for the quantitative integration of effect sizes in single-case research: Behavior Research Methods, Instruments & Computers Vol 35(1) Apr 2003, 1-10.
  • Vargha, A., & Delaney, H. D. (2000). A critique and improvement of the CL common language effect size statistics of McGraw and Wong: Journal of Educational and Behavioral Statistics Vol 25(2) Sum 2000, 101-132.
  • Verguts, T., Fias, W., & Stevens, M. (2005). A model of exact small-number representation: Psychonomic Bulletin & Review Vol 12(1) Feb 2005, 66-80.
  • Vevea, J. L., & Hedges, L. V. (1995). A general linear model for estimating effect size in the presence of publication bias: Psychometrika Vol 60(3) Sep 1995, 419-435.
  • Vevea, J. L., & Woods, C. M. (2005). Publication Bias in Research Synthesis: Sensitivity Analysis Using A Priori Weight Functions: Psychological Methods Vol 10(4) Dec 2005, 428-443.
  • Viechtbauer, W. (2005). Bias and Efficiency of Meta-Analytic Variance Estimators in the Random-Effects Model: Journal of Educational and Behavioral Statistics Vol 30(3) Fal 2005, 261-293.
  • Voelkle, M. C., Ackerman, P. L., & Wittman, W. W. (2007). Effect Sizes and F Ratios < 1.0: Sense or Nonsense? : Methodology: European Journal of Research Methods for the Behavioral and Social Sciences Vol 3(1) 2007, 35-46.
  • Volker, M. A. (2006). Reporting Effect Size Estimates in School Psychology Research: Psychology in the Schools Vol 43(6) Jul 2006, 653-672.
  • Voyer, D., Voyer, S., & Bryden, M. P. (1995). Magnitude of sex differences in spatial abilities: A meta-analysis and consideration of critical variables: Psychological Bulletin Vol 117(2) Mar 1995, 250-270.
  • Wahlsten, D. (2007). Sample size requirements for experiments on laboratory animals. Boca Raton, FL: CRC Press.
  • Wainer, H. (2000). Rescuing computerized testing by breaking Zipf's law: Journal of Educational and Behavioral Statistics Vol 25(2) Sum 2000, 203-224.
  • Wampold, B. E., Imel, Z. E., & Minami, T. (2007). The placebo effect: "Relatively large" and "robust" enough to survive another assault: Journal of Clinical Psychology Vol 63(4) Apr 2007, 401-403.
  • Wampold, B. E., Mondin, G. W., Moody, M., & Ahn, H.-n. (1997). The flat earth as a metaphor for the evidence for uniform efficacy of bona fide psychotherapies: Reply to Crits-Christoph (1997) and Howard et al. (1997): Psychological Bulletin Vol 122(3) Nov 1997, 226-230.
  • Wampold, B. E., Mondin, G. W., Moody, M., Stich, F., Benson, K., & Ahn, H.-n. (1997). A meta-analysis of outcome studies comparing bona fide psychotherapies: Empirically, "all must have prizes." Psychological Bulletin Vol 122(3) Nov 1997, 203-215.
  • Wampold, B. E., & Serlin, R. C. (2000). The consequence of ignoring a nested factor on measures of effect size in analysis of variance: Psychological Methods Vol 5(4) Dec 2000, 425-433.
  • Wang, Q., & Li, J. (1998). A research on the computer simulation of effect comparison of the test validities of discrimination distribution and difficulty distribution: Psychological Science (China) Vol 21(2) 1998, 111-114.
  • Wang, S., Jiao, H., Young, M. J., Brooks, T., & Olson, J. (2007). A meta-analysis of testing mode effects in grade K-12 mathematics tests: Educational and Psychological Measurement Vol 67(2) Apr 2007, 219-238.
  • Wang, W.-C. (2004). Direct Estimation of Correlation as a Measure of Association Strength Using Multidimensional Item Response Models: Educational and Psychological Measurement Vol 64(6) Dec 2004, 937-955.
  • Wang, W.-C., & Chen, H.-C. (2004). The Standardized Mean Difference Within the Framework of Item Response Theory: Educational and Psychological Measurement Vol 64(2) Apr 2004, 201-223.
  • Wang, W.-C., & Chyi-In, W. (2004). Gain Score in Item Response Theory as an Effect Size Measure: Educational and Psychological Measurement Vol 64(5) Oct 2004, 758-780.
  • Wang, Z., & Thompson, B. (2007). Is the Pearson r² Biased, and if So, What Is the Best Correction Formula? : Journal of Experimental Education Vol 75(2) Win 2007, 109-125.
  • Ward, R. M. (2002). Highly significant findings in psychology: A power and effect size survey. Dissertation Abstracts International: Section B: The Sciences and Engineering.
  • Watanabe, H. (1993). A Bayesian meta-analysis for reviewing effect sizes: Japanese Psychological Research Vol 35(3) 1993, 153-156.
  • Wendorf, C. A. (2004). Primer on Multiple Regression Coding: Common Forms and the Additional Case of Repeated Contrasts: Understanding Statistics Vol 3(1) 2004, 47-57.
  • Westen, D., & Rosenthal, R. (2003). Quantifying construct validity: Two simple measures: Journal of Personality and Social Psychology Vol 84(3) Mar 2003, 608-618.
  • Wilcox, R. R. (1995). ANOVA: A paradigm for low power and misleading measures of effect size? : Review of Educational Research Vol 65(1) Spr 1995, 51-77.
  • Wilcox, R. R. (2003). Power: Basics, practical problems, and possible solutions. Hoboken, NJ: John Wiley & Sons Inc.
  • Wilcox, R. R. (2004). Kernel Density Estimators: An Approach to Understanding How Groups Differ: Understanding Statistics Vol 3(4) 2004, 333-348.
  • Wilcox, R. R. (2004). A multivariate projection-type analogue of the Wilcoxon-Mann-Whitney test: British Journal of Mathematical and Statistical Psychology Vol 57(2) Nov 2004, 205-213.
  • Wilcox, R. R. (2006). Graphical Methods for Assessing Effect Size: Some Alternatives to Cohen's d: Journal of Experimental Education Vol 74(4) Sum 2006, 353-367.
  • Wilcox, R. R., & Muska, J. (1999). Measuring effect size: A non-parametric analogue of omega²: British Journal of Mathematical and Statistical Psychology Vol 52(1) May 1999, 93-110.
  • Wilkerson, M., & Olson, M. R. (1997). Misconceptions about sample size, statistical significance, and treatment effect: Journal of Psychology: Interdisciplinary and Applied Vol 131(6) Nov 1997, 627-631.
  • Wilson, D. B., & Lipsey, M. W. (2001). The role of method in treatment effectiveness research: Evidence from meta-analysis: Psychological Methods Vol 6(4) Dec 2001, 413-429.
  • Wilson, D. B., & Lipsey, M. W. (2003). The role of method in treatment effectiveness research: Evidence from meta-analysis. Washington, DC: American Psychological Association.
  • Wilson, D. B., & Shadish, W. R. (2006). On Blowing Trumpets to the Tulips: To Prove or Not to Prove the Null Hypothesis: Comment on Bosch, Steinkamp, and Boller (2006): Psychological Bulletin Vol 132(4) Jul 2006, 524-528.
  • Wood, G. R. (1989). A simulation of conjoint measurement and functional measurement procedures under varying conditions of measurement error, effect size pattern, scale translation, and replications: Dissertation Abstracts International.
  • Wu, Y.-w. B. (1984). The effects of heterogeneous regression slopes on the robustness of two test statistics in the analysis of covariance: Educational and Psychological Measurement Vol 44(3) Fal 1984, 647-663.
  • Wu, Y.-W. B., & Wooldridge, P. J. (2005). The Impact of Centering First-Level Predictors on Individual and Contextual Effects in Multilevel Data Analysis: Nursing Research Vol 54(3) May-Jun 2005, 212-216.
  • Wyma, R. J. (1990). Involving children as active agents of their own treatment: A meta-analysis of self-management training: Dissertation Abstracts International.
  • Yin, R. K., Schmidt, R. J., & Besag, F. (2006). Aggregating Student Achievement Trends Across States With Different Tests: Using Standardized Slopes as Effect Sizes: Peabody Journal of Education Vol 81(2) 2006, 47-61.
  • Yoder, P. J., & Feurer, I. D. (2000). Quantifying the magnitude of sequential association between events or behaviors. Baltimore, MD: Paul H Brookes Publishing.
  • Yong, W., Mouchao, M., Li, L., & Xiaqi, D. (2003). Memory effect of banner web-ad: Acta Psychologica Sinica Vol 35(6) 2003, 830-834.
  • Yuan, K.-H., & Maxwell, S. (2005). On the Post Hoc Power in Testing Mean Differences: Journal of Educational and Behavioral Statistics Vol 30(2) Sum 2005, 141-167.
  • Zakzanis, K. K. (2001). Statistics to tell the truth, the whole truth, and nothing but the truth: Formulae, illustrative numerical examples, and heuristic interpretation of effect size analyses for neuropsychological researchers: Archives of Clinical Neuropsychology Vol 16(7) Oct 2001, 653-667.
  • Zelazo, P. D., & Shultz, T. R. (1989). Concepts of potency and resistance in causal prediction: Child Development Vol 60(6) Dec 1989, 1307-1315.
  • Zhang, Z., & Schoeps, N. (1997). On robust estimation of effect size under semiparametric models: Psychometrika Vol 62(2) Jun 1997, 201-214.
  • Zimmerman, D. W. (2002). A warning about statistical significance tests performed on large samples of nonindependent observations: Perceptual and Motor Skills Vol 94(1) Feb 2002, 259-263.
  • Zou, G. Y. (2005). Quantifying responsiveness of quality of life measures without an external criterion: Quality of Life Research: An International Journal of Quality of Life Aspects of Treatment, Care & Rehabilitation Vol 14(6) Aug 2005, 1545-1552.
  • Zumbo, B. D., & Jennings, M. J. (2002). The robustness of validity and efficiency of the related samples t-test in the presence of outliers: Psicologica Vol 23(2) 2002, 415-450.
  • Zwick, R., & Thayer, D. T. (1996). Evaluating the magnitude of differential item functioning in polytomous items: Journal of Educational and Behavioral Statistics Vol 21(3) Fal 1996, 187-201.

This page uses Creative Commons Licensed content from Wikipedia (view authors).
