{{AssessPsy}}

A '''norm-referenced measure''' is a type of [[Test (student assessment)|test]], [[assessment]], or [[evaluation]] in which the tested individual is compared to a [[Sample (statistics)|sample]] of his or her peers (referred to as a "normative sample").<ref name="auburn">[https://fp.auburn.edu/rse/trans_media/08_Publications/06_Transition_in%20_Action/chap8.htm Assessment Guided Practices]</ref> The term "normative assessment" refers to the process of comparing one test-taker to his or her peers.<ref name="auburn"/>
==Other types==
As an alternative to normative testing, tests can be [[Ipsative assessment|ipsative]]; that is, the individual's performance is compared with his or her own previous performance over time.<ref name="teach">[http://www.dmu.ac.uk/~jamesa/teaching/assessment.htm Assessment]</ref><ref name="role">[http://www.psychology.nottingham.ac.uk/staff/nfr/rolefunction.pdf PDF presentation]</ref>
   
By contrast, a test is [[criterion-referenced test|criterion-referenced]] when provision is made for translating the test score into a statement about the behavior to be expected of a person with that score. The same test can be used in both ways.<ref name="Cronbach">Cronbach, L. J. (1970). ''Essentials of psychological testing'' (3rd ed.). New York: Harper & Row.</ref> Robert Glaser originally coined the terms "norm-referenced test" and "criterion-referenced test".<ref name="Glaser">Glaser, R. (1963). Instructional technology and the measurement of learning outcomes. ''American Psychologist, 18'', 510-522.</ref>
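
To make the distinction concrete, the following minimal sketch (in Python; the scores and the cutoff are invented purely for illustration) reads a single raw score both ways: norm-referenced, as a standing within a normative sample, and criterion-referenced, as a pass or fail against a fixed standard.

<syntaxhighlight lang="python">
# Illustration only: the normative sample and the cutoff below are hypothetical.
normative_sample = [52, 58, 61, 63, 66, 68, 70, 71, 74, 78, 81, 85, 90]
raw_score = 74

# Norm-referenced reading: where does this score stand among peers?
below = sum(1 for s in normative_sample if s < raw_score)
percentile_rank = 100 * below / len(normative_sample)
print(f"Norm-referenced: above {percentile_rank:.0f}% of the normative sample")

# Criterion-referenced reading: does the score meet a fixed standard?
cutoff = 80  # a predetermined passing level, set without reference to peers
print(f"Criterion-referenced: {'pass' if raw_score >= cutoff else 'fail'} (cutoff = {cutoff})")
</syntaxhighlight>
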
Standards-based education reform is based on the belief that public education should establish what every student should know and be able to do.<ref>[http://www.isbe.state.il.us/ils/] Illinois Learning Standards</ref> Students should be tested against a fixed yardstick, rather than against each other or sorted into a mathematical [[bell curve]].<ref>[http://www.fairtest.org/nattest/times stories 5-01.html] Fairtest.org: Times on Testing "criterion referenced" tests measure students against a fixed yardstick, not against each other.</ref> By requiring that every student pass these new, higher standards, education officials believe that all students will achieve a diploma that prepares them for success in the 21st century.<ref>[http://www.newhorizons.org/spneeds/improvement/bergeson.htm] By the Numbers: Rising Student Achievement in Washington State by Terry Bergeson: "She continues her pledge ... to ensure all students achieve a diploma that prepares them for success in the 21st century."</ref>
 
   
==Common use==
{{unreferenced|date=October 2006}}
Most state achievement tests are criterion-referenced. In other words, a predetermined level of acceptable performance is set in advance, and each student passes or fails according to whether he or she reaches that level. Tests that set goals for students based on the average student's performance are norm-referenced tests; tests that set goals based on a fixed standard (e.g., 80 words spelled correctly) are criterion-referenced tests.

Many college entrance exams and nationally used school tests are norm-referenced. The [[SAT]], [[Graduate Record Examination]] (GRE), and [[Wechsler Intelligence Scale for Children]] (WISC) compare individual student performance to the performance of a normative sample. Test-takers cannot "fail" a norm-referenced test; each test-taker receives a score, usually expressed as a percentile, that compares the individual to others who have taken the test. This is useful when there is a wide range of acceptable scores that differs from college to college. For example, one estimate of the average SAT score for [[Harvard University]] is 2200 out of a possible 2400, while the average for [[Indiana University]] is 1650.<ref>[http://collegeapps.about.com/od/sat/f/goodsatscore.htm] About.com "What is a Good SAT Score?" From Jay Brody Aug 2006</ref>

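Instruments such as the WISC report deviation scores: the raw score is re-expressed relative to the normative sample's mean and standard deviation (the WISC scale has a mean of 100 and a standard deviation of 15) and can then be read as a percentile. The sketch below uses made-up norm figures and assumes roughly normal-shaped norms; it is illustrative, not the scoring procedure of any actual instrument.

<syntaxhighlight lang="python">
from statistics import NormalDist

# Hypothetical norm figures for one age band (invented for illustration).
norm_mean, norm_sd = 31.0, 6.0   # raw-score mean and standard deviation in the normative sample
raw_score = 37.0

z = (raw_score - norm_mean) / norm_sd      # standing in standard-deviation units
scaled_score = 100 + 15 * z                # WISC-style scale: mean 100, SD 15
percentile = 100 * NormalDist().cdf(z)     # percentile, assuming normally distributed norms

print(f"z = {z:.2f}, scaled score = {scaled_score:.0f}, percentile = {percentile:.0f}")
</syntaxhighlight>
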
By contrast, nearly two-thirds of US high school students will be required to pass a criterion-referenced [[high school graduation examination]]. One fixed passing score is set at a level adequate for university admission, whether or not the graduate is college bound. Each state gives its own test and sets its own passing level; states like Massachusetts show very high pass rates, while in Washington State even average students fail, as do 80 percent of students in some minority groups. Many in the education community, such as [[Alfie Kohn]], oppose this practice as unfair to groups and individuals who do not score as high as others.

==Advantages and limitations==
An obvious disadvantage of norm-referenced tests is that they cannot measure the progress of a population as a whole, only where individuals fall within it. Thus, only measurement against a fixed goal can gauge the success of an educational reform program that seeks to raise the achievement of all students against new standards, which aim to assess skills beyond choosing among multiple-choice answers. However, while this is attractive in theory, in practice the bar has often been moved in the face of excessive failure rates, and improvement sometimes occurs simply because of familiarity with, and teaching to, the same test.

With a norm-referenced test, grade level was traditionally defined by the performance of the middle 50 percent of scores.<ref>[http://www.nctm.org/news/assessment/2004_04nb.htm] NCTM: News & Media: Assessment Issues (Newsbulletin April 2004) "by definition, half of the nation's students are below grade level at any particular moment"</ref> By contrast, the [[National Children's Reading Foundation]] believes that it is essential to assure that virtually all children read at or above grade level by third grade, a goal that cannot be achieved with a norm-referenced definition of grade level.<ref>[http://www.readingfoundation.org/about/about_us.asp] National Children's Reading Foundation website</ref>

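Under that convention, "grade level" is simply the band covered by the middle half of the norm sample's scores, so a fixed share of students necessarily falls below it regardless of how the group performs overall. A short sketch, using invented scores, shows how such a band would be derived.

<syntaxhighlight lang="python">
from statistics import quantiles

# Invented reading scores for a hypothetical norm sample of third-graders.
scores = [12, 15, 17, 18, 20, 21, 22, 23, 25, 27, 28, 30, 33, 35, 40, 44]

q1, median, q3 = quantiles(scores, n=4)  # quartiles of the normative sample
print(f"'Grade level' band (middle 50% of the norm sample): {q1:.1f} to {q3:.1f}")

below_band = sum(1 for s in scores if s < q1)
print(f"{100 * below_band / len(scores):.0f}% of the norm sample falls below the band by construction")
</syntaxhighlight>
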
Critics of criterion-referenced tests point out that judges set bookmarks around items of varying difficulty without considering whether the items actually comply with grade-level content standards or are developmentally appropriate.<ref>[http://www.leg.wa.gov/pub/billinfo/2001-02/house/2075-2099/2087_hbr.pdf] HOUSE BILL REPORT HB 2087 "A number of critics ... continue to assert that the mathematics WASL is not developmentally appropriate for fourth grade students."</ref> Thus, the original 1997 sample problems published for the [[WASL]] 4th grade mathematics test contained items that were difficult for college-educated adults, or easily solved with 10th grade methods such as similar triangles.<ref>Prof. Don Orlich, Washington State University</ref>

The difficulty level of the items themselves, like the cut-scores that determine passing levels, is also changed from year to year.<ref>[http://archives.seattletimes.nwsource.com/cgi-bin/texis.cgi/web/vortex/display?slug=wasl11m&date=20040511] Panel lowers bar for passing parts of WASL By Linda Shaw, Seattle Times May 11, 2004 "A blue-ribbon panel voted unanimously yesterday to lower the passing bar in reading and math for the fourth- and seventh-grade exam, and in reading on the 10th-grade test"</ref> Pass rates also vary greatly from the 4th grade to the 7th and 10th grade tests in some states.<ref>[http://archives.seattletimes.nwsource.com/cgi-bin/texis.cgi/web/vortex/display?slug=mathtest06m&date=20021206&query=WASL+7th+grade] Seattle Times December 06, 2002 ''Study: Math in 7th-grade WASL is hard'' By Linda Shaw "Those of you who failed the math section ... last spring had a harder test than your counterparts in the fourth or 10th grades."</ref>

One of the faults of [[No Child Left Behind]] is that each state can choose or construct its own test, which cannot be compared to any other state's.<ref>[http://www.state.nj.us/njded/njpep/assessment/naep/index.html] New Jersey Department of Education: "But we already have tests in New Jersey, why have another test? Our statewide test is an assessment that only New Jersey students take. No comparisons should be made to other states, or to the nation as a whole."</ref> A Rand study of Kentucky results found indications of artificial inflation of pass rates that were not reflected in increased scores on other tests, such as the NAEP or SAT, given to the same student populations over the same period.<ref>[http://www.rand.org/pubs/research_briefs/RB8017/index1.html] Test-Based Accountability Systems (Rand) "NAEP data are particularly important ... Taken together, these trends suggest appreciable inflation of gains on KIRIS."</ref>

Graduation test standards are typically set at a level appropriate for native-born applicants to four-year universities. An unusual side effect is that while colleges often admit immigrants with very strong math skills who may be deficient in English, there is no such leeway in high school graduation tests, which usually require passing all sections, including language. Thus, it is not unusual for institutions like the [[University of Washington]] to admit strong Asian American or Latino students who did not pass the writing portion of the state WASL test, yet such students would not even receive a diploma once the testing requirement is in place.

Although tests such as the WASL are intended as a minimal bar for high school, 27 percent of 10th graders applying for [[Running Start]] in Washington State failed the math portion of the WASL. These students had applied to take college-level courses in high school and achieve at a much higher level than average students. The same study concluded that the level of difficulty was comparable to, or greater than, that of tests intended to place students already admitted to college.<ref>[http://www.transitionmathproject.org/assetts/docs/highlights/wasl_report.doc] Relationship of the Washington Assessment of Student Learning (WASL) and Placement Tests Used at Community and Technical Colleges By: Dave Pavelchek, Paul Stern and Dennis Olson, Social & Economic Sciences Research Center, Puget Sound Office, WSU "The average difficulty ratings for WASL test questions fall in the middle of the range of difficulty ratings for the college placement tests."</ref>

A norm-referenced test has none of these problems because it does not seek to enforce any expectation of what all students should know or be able to do beyond what actual students demonstrate. Present levels of performance and inequity are taken as fact, not as defects to be removed by a redesigned system. Goals of student performance are not raised every year until all are proficient, and scores are not required to show continuous improvement through Total Quality Management systems.

A rank-based system only produces data that tell which students perform at an average level, which do better, and which do worse. This contradicts the fundamental belief, whether optimistic or simply unfounded, that all students will perform at one uniformly high level in a standards-based system if enough incentives and punishments are put into place. This difference in belief underlies the most significant differences between a traditional and a standards-based education system.
 
==See also==
* [[Assessment]]
* [[Criterion-referenced test]]
* [[Evaluation]]
* [[Psychometrics]]
* [[Standardized test]]
   
==References==
<references />

==External links==
* [http://www.citrus.kcusd.com/instruction.htm A webpage] about instruction that discusses assessment

{{edu-stub}}
   
[[Category:Educational assessment]]
[[Category:Psychometrics]]
[[Category:Standardized tests]]
[[Category:Educational psychology]]

{{enWP|Norm-referenced test}}
