{{EdPsy}}
{{about|the college admissions test in the United States|the exams in England colloquially known as SATs|National Curriculum assessment|other uses|SAT (disambiguation)}}
[[File:SAT logo.gif|right|thumb|SAT Reasoning Test|alt=SAT Reasoning Test]]
The '''College Entrance Examination Board Scholastic Aptitude Test''' ('''SAT''') is a [[standardized testing|standardized test]] for most [[College admissions in the United States|college admission]]s in the [[Education in the United States|United States]]. The SAT is owned, published, and developed by the [[College Board]], a [[nonprofit organization]] in the United States. It was formerly developed, published, and scored by the [[Educational Testing Service]]<ref>{{cite web |url=http://about.collegeboard.org/ |title=About the College Board |accessdate=May 29, 2007 |publisher=[[College Board]]}}</ref> which still administers the exam. The test is intended to assess a student's readiness for college. It was first introduced in 1926, and its name and scoring have changed several times. It was first called the '''Scholastic Aptitude Test''', then the '''Scholastic Assessment Test'''.
The current SAT Reasoning Test, introduced in 2005, takes three hours and forty-five minutes to finish, and costs $50 ($81 International), excluding late fees.<ref name=costs>{{cite web |url=http://sat.collegeboard.com/register/sat-fees |title=SAT Fees: 2010–11 Fees |accessdate=September 5, 2010 |publisher=[[College Board]]}}</ref> Possible scores range from 600 to 2400, combining test results from three 800-point sections (Mathematics, Critical Reading, and Writing). Taking the SAT or its competitor, the [[ACT (test)|ACT]], is required for freshman entry to many, but not all, universities in the United States.<ref>{{cite news|url=http://www.nytimes.com/2009/07/26/education/edlife/26guidance-t.html|title=The Other Side of 'Test Optional'|last=O'Shaughnessy|first=Lynn|date=26 July 2009|work=The New York Times|page=6|accessdate=22 June 2011}}</ref>
==Function==
The College Board states that the SAT measures literacy and writing skills that are needed for academic success in [[college]], and that it assesses how well test takers analyze and solve problems—skills they learned in school that they will need in college. The SAT is typically taken by [[high school]] [[Tenth grade|sophomores]], [[Eleventh grade|juniors]] and [[Twelfth grade|seniors]].<ref>{{cite web |url=http://www.collegeboard.com/student/testing/sat/about/SATI.html |title=Official SAT Reasoning Test page |accessdate=June 2007 |publisher=[[College Board]]}}</ref>

Specifically, the College Board states that use of the SAT in combination with high school grade point average (GPA) provides a better indicator of success in college than high school grades alone, as measured by college freshman [[Grade (education)|GPA]]. Various studies conducted over the lifetime of the SAT show a statistically significant increase in [[correlation]] between high school grades and freshman grades when the SAT is factored in.<ref>[http://web.archive.org/web/20050225085121/http://www.collegeboard.com/research/pdf/rn10_10755.pdf 01-249.RD.ResNoteRN-10 rv.1]</ref>

There are substantial differences in funding, curricula, grading, and difficulty among U.S. secondary schools due to American [[federalism]], local control, and the prevalence of private, distance, and [[Homeschooling|home schooled]] students. SAT (and [[ACT (test)|ACT]]) scores are intended to supplement the secondary school record and help admission officers put local data—such as course work, grades, and class rank—in a national perspective.<ref>Korbin, L. (2006). SAT Program Handbook. A Comprehensive Guide to the SAT Program for School Counselors and Admissions Officers, 1, 33+. Retrieved January 24, 2006, from College Board Preparation Database.</ref>

Historically, the SAT has been more popular among colleges on the coasts and the ACT more popular in the Midwest and South. Some colleges require the ACT for course placement, and a few formerly did not accept the SAT at all, but nearly all colleges now accept it.<ref>{{cite web |url=http://www.collegeboard.com/student/testing/sat/about.html |title=College Admissions – SAT & SAT Subject Tests |accessdate=November 2009 |publisher=[[College Board]]}}</ref>

Certain high [[Intelligence quotient|IQ]] societies, like [[Mensa International|Mensa]], the [[Prometheus Society]] and the [[Triple Nine Society]], use scores from certain years as one of their admission tests. For instance, the Triple Nine Society accepts scores of at least 1450 on tests taken before April 1995, and scores of at least 1520 on tests taken between April 1995 and February 2005.<ref>http://www.triplenine.org/main/admission.asp</ref>

The SAT is sometimes given to students younger than 13 by organizations such as the [[Study of Mathematically Precocious Youth]], which use the results to select, study, and mentor students of exceptional ability.

While the exact manner in which SAT scores will help to determine admission of a student at American institutions of higher learning is generally a matter decided by the individual institution, some foreign countries have made SAT (and ACT) scores a legal criterion in deciding whether holders of American high school diplomas will be admitted at their public universities.
   
 
==Structure==
SAT consists of three major sections: Critical [[Reading (process)|Reading]], [[Mathematics]], and [[Writing]]. Each section receives a score on the scale of 200–800. All scores are multiples of 10. Total scores are calculated by adding up the scores of the three sections. Each major section is divided into three parts. There are 10 sub-sections, including an additional 25-minute experimental or "equating" section that may be in any of the three major sections. The experimental section is used to [[Norm-referenced test|normalize]] questions for future administrations of the SAT and does not count toward the final score. The test contains 3 hours and 45 minutes of actual timed sections;<ref name=FAQ>{{cite web |url=http://www.collegeboard.com/student/testing/sat/about/sat/FAQ.html |title=SAT FAQ: Frequently Asked Questions |accessdate=May 29, 2007 |publisher=[[College Board]]}}</ref> most administrations (after including orientation, distribution of materials, completion of biographical sections, and eleven minutes of timed breaks) run for about four and a half hours. Questions range from easy to hard, with difficulty established from performance on the experimental sections. Easier questions typically appear closer to the beginning of a section and harder questions toward the end, although this is not true of every section: Critical Reading passage questions follow the order of the passage, and the difficulty ordering applies mainly to the math sections and the 19 sentence completions on the test.
[[Image:No test material on this page.svg|200px|thumb|[[Intentionally blank page]] in the style used in the SAT.]]
 
   
 
===Critical Reading===
The Critical Reading (formerly Verbal) section of the SAT is made up of three scored sections: two 25-minute sections and one 20-minute section, with varying types of questions, including sentence completions and questions about short and long reading passages. Critical Reading sections normally begin with 5 to 8 sentence completion questions; the remainder of the questions are focused on the reading passages. Sentence completions generally test the student's [[vocabulary]] and understanding of sentence structure and organization by requiring the student to select one or two words that best complete a given sentence. The bulk of the Critical Reading section is made up of questions regarding reading passages, in which students read short excerpts on social sciences, humanities, physical sciences, or personal narratives and answer questions based on the passage. Certain sections contain passages asking the student to compare two related passages; generally, these consist of shorter reading passages. The number of questions about each passage is proportional to the length of the passage. Unlike in the Mathematics section, where questions go in the order of difficulty, questions in the Critical Reading section go in the order of the passage. Overall, question sets near the beginning of the section are easier, and question sets near the end of the section are harder.
   
 
===Mathematics===
[[File:SAT Grid-in mathematics question.png|thumb|right|An example of a "grid in" mathematics question in which the answer should be written into the box below the question.]]
The [[Mathematics]] section of the SAT, widely known as the quantitative section, consists of three scored sections: two 25-minute sections and one 20-minute section, as follows:
* One of the 25-minute sections is entirely [[multiple choice]], with 20 questions.
* The other 25-minute section contains 8 multiple choice questions and 10 grid-in questions. For grid-in questions, test-takers write the answer inside a grid on the answer sheet. Unlike multiple choice questions, there is no penalty for incorrect answers on grid-in questions because the test-taker is not limited to a few possible choices.
* The 20-minute section is all multiple choice, with 16 questions.
The SAT has done away with [[quantitative comparison]] questions on the math section (questions that asked the test taker to decide which of two quantities was greater rather than to compute an answer), leaving only questions with [[symbol]]ic or [[Number|numerical]] answers. New topics include Algebra II and [[scatter plot]]s, changes that have resulted in a shorter, more quantitative exam requiring higher-level mathematics courses relative to the previous exam.
====Calculator use====
Four-function, scientific, and graphing calculators are permitted on the SAT math section; calculators are not permitted on either of the other sections. Calculators with [[QWERTY]] keyboards, cell phone calculators, portable computers, and personal organizers are not permitted.<ref>http://sat.collegeboard.org/register/calculator-policy</ref>

With the recent changes to the content of the SAT math section, the need to save time while maintaining accuracy of calculations has led some to use [[SAT calculator program|calculator programs]] during the test. These programs allow students to complete problems faster than would normally be possible when making calculations manually.

The use of a [[graphing calculator]] is sometimes preferred, especially for [[geometry]] problems and [[exercise (mathematics)|exercise]]s involving multiple calculations. According to research conducted by the College Board, performance on the math section of the exam is associated with the extent of calculator use: those using calculators on about a third to a half of the items averaged higher scores than those using calculators less frequently.<ref>[http://research.collegeboard.org/publications/content/2012/05/calculator-use-and-sat-i-math Calculator Use and the SAT]</ref> The use of a graphing calculator in mathematics courses, and familiarity with the calculator outside the classroom, is known to have a positive effect on students' performance during the exam.
===Writing===
{{multiple image
| direction = vertical
| image1 = EssayImageAction.png
| alt1 = Page 1 of an SAT essay
| image2 = EssayImageAction2.png
| alt2 = Page 2 of an SAT essay
| footer = SAT essay. This student received a 10/12 from two judges, each giving 5/6
}}
The writing portion of the SAT, based on but not directly comparable to the old SAT II subject test in writing (which in turn was developed from the old TSWE), includes multiple choice questions and a brief essay. The essay subscore contributes about 28% of the total writing score, with the multiple choice questions contributing about 70%. This section was implemented in March 2005 following complaints from colleges about the lack of uniform examples of a student's writing ability and critical thinking.

The multiple choice questions include error identification questions, sentence improvement questions, and paragraph improvement questions. Error identification and sentence improvement questions test the student's knowledge of grammar, presenting an awkward or grammatically incorrect sentence; in the error identification section, the student must locate the word producing the error or indicate that the sentence has no error, while the sentence improvement section requires the student to select an acceptable fix to the awkward sentence. The paragraph improvement questions test the student's understanding of logical organization of ideas, presenting a poorly written student essay and asking a series of questions as to what changes might be made to best improve it.

The essay section, which is always administered as the first section of the test, is 25 minutes long. All essays must be in response to a given prompt. The prompts are broad and often philosophical and are designed to be accessible to students regardless of their educational and social backgrounds. For instance, test takers may be asked to expand on such ideas as their opinion on the value of work in human life or whether technological change also carries negative consequences to those who benefit from it. No particular essay structure is required, and the College Board accepts examples "taken from [the student's] reading, studies, experience, or observations." Two trained readers assign each essay a score between 1 and 6, where a score of 0 is reserved for essays that are blank, off-topic, non-English, not written with a Number 2 pencil, or considered illegible after several attempts at reading. The scores are summed to produce a final score from 2 to 12 (or 0). If the two readers' scores differ by more than one point, then a senior third reader decides. The average time each reader spends on each essay is less than 3 minutes.<ref name=nytimes>{{cite news |url=http://www.nytimes.com/2005/05/04/education/04education.html |title=SAT Essay Test Rewards Length and Ignores Errors |accessdate=2008-03-06 |first=Michael |last=Winerip |date=May 5, 2005 |publisher=New York Times}}</ref>

In March 2004, Les Perelman analyzed 15 scored sample essays contained in the College Board's ''ScoreWrite'' book along with 30 other training samples and found that in over 90% of cases, the essay's score could be predicted from simply counting the number of words in the essay.<ref name="nytimes"/> Two years later, Perelman trained high school seniors to write essays that made little sense but contained infrequently used words such as "plethora" and "myriad." All of the students received scores of "10" or better, which placed the essays in the 92nd percentile or higher.<ref name="Fooling the College Board">{{cite news |title=Fooling the College Board |url=http://www.insidehighered.com/news/2007/03/26/writing |accessdate=2010-07-17 |first=Scott |last=Jaschik |date=March 26, 2007 |publisher=Inside Higher Education}}</ref>
   
===Style of questions===
Most of the questions on the SAT, except for the essay and the grid-in math responses, are [[multiple choice]]; all multiple-choice questions have five answer choices, one of which is correct. The questions of each section of the same type are generally ordered by difficulty. However, an important exception exists: questions that follow the long and short reading passages are organized chronologically, rather than by difficulty. Ten of the questions in one of the math sub-sections are not multiple choice; they instead require the test taker to bubble in a number in a four-column grid.

The questions are weighted equally. For each correct answer, one raw point is added; for each incorrect answer, one-fourth of a point is deducted.<ref>{{cite web |url=http://www.collegeboard.com/student/testing/sat/prep_one/test_tips.html |title=Collegeboard Test Tips |publisher=Collegeboard |accessdate=September 9, 2008}}</ref> No points are deducted for incorrect math grid-in questions. This ensures that a student's [[Expected value|mathematically expected gain]] from guessing is zero. The final score is derived from the raw score; the precise conversion chart varies between test administrations.
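The raw-score arithmetic above (one point per correct answer, minus a quarter point per wrong multiple-choice answer, with wrong grid-ins costing nothing) can be illustrated with a short sketch; the function and parameter names are ours, not the College Board's.

```python
from fractions import Fraction

def raw_score(correct, wrong_multiple_choice, wrong_grid_ins=0):
    """One raw point per correct answer, minus 1/4 point per wrong
    multiple-choice answer; wrong grid-ins incur no deduction, so the
    wrong_grid_ins count is deliberately ignored."""
    return Fraction(correct) - Fraction(1, 4) * wrong_multiple_choice

# 40 correct, 12 wrong multiple-choice, 2 wrong grid-ins:
print(raw_score(40, 12, 2))  # 37
```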
 
   
The SAT therefore recommends only making educated guesses, that is, guessing when the test taker can eliminate at least one answer he or she thinks is wrong. Without eliminating any answers, one's probability of answering correctly is 20%. Eliminating one wrong answer increases this probability to 25% (and the expected gain to 1/16 of a point); two, a 33.3% probability (1/6 of a point); and three, a 50% probability (3/8 of a point).
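The expected gains quoted above follow directly from the scoring rule (+1 for a correct guess, −1/4 for an incorrect one). A quick check, for illustration only:

```python
from fractions import Fraction

def expected_gain(eliminated, choices=5):
    """Expected raw-score gain from guessing among the remaining
    answer choices: +1 with probability 1/remaining, -1/4 otherwise."""
    remaining = choices - eliminated
    p_correct = Fraction(1, remaining)
    return p_correct * 1 + (1 - p_correct) * Fraction(-1, 4)

for k in range(4):
    print(k, expected_gain(k))
# 0 eliminated -> 0; 1 -> 1/16; 2 -> 1/6; 3 -> 3/8
```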
 
   
{| class="wikitable" border="1"
! Section !! Average Score !! Time (Minutes) !! Content
|-
| Writing ||493||60||[[Grammar]], [[usage]], and [[diction]]
|-
| Mathematics ||515||70||[[Number]] and [[Operation (mathematics)|operations]]; [[algebra]] and [[Function (mathematics)|functions]]; [[geometry]]; [[statistics]], [[probability]], and [[data analysis]]
|-
| Critical Reading ||501||70||[[Vocabulary]], [[Critique|Critical]] [[Reading (process)|reading]], and [[Sentence (linguistics)|sentence]]-level reading
|}
   
==Taking the test==
The SAT is offered seven times a year in the United States: in October, November, December, January, March (or April, alternating), May, and June. The test is typically offered on the first Saturday of the month for the November, December, May, and June administrations. In other countries, the SAT is offered on the same dates as in the United States except for the first spring test date (i.e., March or April), which is not offered. In 2006, the test was taken 1,465,744 times.<ref name="SATpercentiles">The scoring categories are the following: Reading, Math, Writing, and Essay.</ref>

Candidates may take either the SAT Reasoning Test or up to three [[SAT Subject Tests]] on any given test date, except the first spring test date, when only the SAT Reasoning Test is offered. Candidates wishing to take the test may register online at the College Board's website, by mail, or by telephone, at least three weeks before the test date.

The SAT Subject Tests are all given in one large book on test day. Therefore, it is immaterial which tests, and how many, the student signs up for; with the possible exception of the language tests with listening, the student may change his or her mind and take ''any'' tests, regardless of his or her initial sign-ups. Students who take more subject tests than they signed up for will later be billed by the College Board for the additional tests, and their scores will be withheld until the bill is paid. Students who take fewer subject tests than they signed up for are not eligible for a refund.
 
   
The SAT Reasoning Test costs $49 ($78 International; $99 for India and Pakistan, where an older testing system is in place). For the Subject Tests, students pay a $22 ($49 International, $73 for India and Pakistan) Basic Registration Fee and $11 per test (except for language tests with listening, which cost $21 each).<ref name="costs"/> The College Board makes fee waivers available for low income students. Additional fees apply for late registration, standby testing, registration changes, scores by telephone, and extra score reports (beyond the four provided for free).
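The Subject Test bill above is the basic registration fee plus a per-test charge. A minimal sketch of the domestic (U.S.) fee arithmetic, using the figures quoted; the function itself is our illustration:

```python
def subject_test_fee(listening_tests, other_tests):
    """U.S. Subject Test bill: $22 basic registration, plus $21 per
    language-with-listening test and $11 per other subject test."""
    return 22 + 21 * listening_tests + 11 * other_tests

# One language test with listening plus two other subject tests:
print(subject_test_fee(1, 2))  # 65
```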
 
Candidates whose religious beliefs prevent them from taking the test on a Saturday may request to take the test on the following Sunday, except for the October test date, for which the Sunday test date is eight days after the main test offering. Such requests must be made at the time of registration and are subject to denial.
Students with verifiable disabilities, including physical and learning disabilities, are eligible to take the SAT with accommodations. The standard time increase for students requiring additional time due to learning disabilities is time + 50%; time + 100% is also offered.
==Raw scores, scaled scores, and percentiles==

Students receive their online score reports approximately three weeks after test administration (six weeks for mailed, paper scores), with each section graded on a scale of 200–800 and two subscores for the writing section: the essay score and the multiple-choice subscore. In addition to their scores, students receive their [[percentile]] (the percentage of other test takers with lower scores). The raw score, that is, the number of points gained from correct answers and lost from incorrect answers, is also included; it ranges from just under 50 to just under 60, depending upon the test.<ref>[http://www.collegeboard.com/sat/help/global/viewscores/ My SAT: Help]</ref> For an additional fee, students may also receive the Question and Answer Service, which provides the student's answer, the correct answer to each question, and online resources explaining each question.

SAT [[test preparation|preparation]] is a highly lucrative field,<ref>[http://www.researchandmarkets.com/reports/682204/2009_worldwide_exam_preparation_and_tutoring 2009 Worldwide Exam Preparation & Tutoring Industry Report]</ref> and many companies and organizations offer test preparation in the form of books, classes, online courses, and tutoring. Although the [[College Board]] maintains that the SAT is essentially uncoachable, some research suggests that tutoring courses result in an average increase of about 20 points on the math section and 10 points on the verbal section.<ref>[http://collegeapps.about.com/od/sat/f/SAT-test-prep.htm SAT Prep - Are SAT Prep Courses Worth the Cost?]</ref>

The corresponding percentile of each scaled score varies from test to test; for example, in 2003, a scaled score of 800 in both sections of the SAT Reasoning Test corresponded to a percentile of 99.9, while a scaled score of 800 in the SAT Physics Test corresponded to the 94th percentile. The differences are due to the content of each exam and the caliber of students choosing to take it. Subject Tests are the object of intensive study (often in the form of an [[Advanced Placement Program|AP]] course, which is relatively more difficult), and only those who expect to perform well tend to take these tests, creating a skewed distribution of scores.

The percentiles that various SAT scores for college-bound seniors correspond to are summarized in the following chart:<ref name=autogenerated1>{{cite web |url=http://www.collegeboard.com/prod_downloads/highered/ra/sat/SATPercentileRanksCompositeCR_M.pdf |title=SAT Percentile Ranks for Males, Females, and Total Group: 2006 College-Bound Seniors—Critical Reading + Mathematics |accessdate=May 29, 2007 |format=PDF |publisher=[[College Board]]}}</ref><ref>{{cite web |url=http://www.collegeboard.com/prod_downloads/highered/ra/sat/SATPercentileRanksCompositeCR_M_W.pdf |title=SAT Percentile Ranks for Males, Females, and Total Group: 2006 College-Bound Seniors—Critical Reading + Mathematics + Writing |accessdate=May 29, 2007 |format=PDF |publisher=[[College Board]]}}</ref>
{| class="wikitable"
! Percentile !! Score, 1600 Scale<br/>(official, 2006) !! Score, 2400 Scale<br/>(official, 2006)
|-
| 99.93/99.98* || 1600 || 2400
|-
| 99+** || ≥1540 || ≥2280
|-
| 99 || ≥1480 || ≥2200
|-
| 98 || ≥1450 || ≥2140
|-
| 97 || ≥1420 || ≥2100
|-
| 93 || ≥1340 || ≥1990
|-
| 88 || ≥1280 || ≥1900
|-
| 81 || ≥1220 || ≥1800
|-
| 72 || ≥1150 || ≥1700
|-
| 61 || ≥1090 || ≥1600
|-
| 48 || ≥1010 || ≥1500
|-
| 36 || ≥950 || ≥1400
|-
| 24 || ≥870 || ≥1300
|-
| 15 || ≥810 || ≥1200
|-
| 8 || ≥730 || ≥1090
|-
| 4 || ≥650 || ≥990
|-
| 2 || ≥590 || ≥890
|-
|colspan="3"|* The [[percentile]] of the perfect score was 99.98 on the 2400 scale and 99.93 on the 1600 scale.
|-
|colspan="3"|** 99+ means better than 99.5 percent of test takers.
|}

The older SAT (before 1995) had a very high ceiling. In any given year, only seven of the million test-takers scored above 1580. A score above 1580 was equivalent to the 99.9995 percentile.<ref name="psychom">{{cite journal |author=Membership Committee |year=1999 |url=http://prometheussociety.org/cms/docs/memcom-report?view=document&id=55 |title=1998/99 Membership Committee Report |publisher=[[Prometheus Society]] |accessdate=2013-06-19}}</ref>
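Since a percentile here is simply the percentage of other test takers with lower scores, the ceiling figure above can be sanity-checked in a few lines. This is an illustration only: the cohort below is invented, and official percentiles come from the College Board's own reference-group tables.

```python
def percentile(score, all_scores):
    """Percentage of test takers who scored strictly lower."""
    below = sum(1 for s in all_scores if s < score)
    return 100.0 * below / len(all_scores)

# Hypothetical cohort of one million test takers, seven of whom
# reached the very top of the pre-1995 scale (as described above).
cohort = [1400] * 999_993 + [1590] * 7
print(round(percentile(1590, cohort), 4))
```

This yields roughly 99.999, in line with the 99.9995 percentile quoted above; the exact figure depends on how ties and the cutoff are counted.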
==SAT-ACT score comparisons==
[[Image:SAT-ACT Preference Map.svg|thumb|Map of states according to high school graduates' (2006) preference of exam. States in orange had more students taking the SAT than the [[ACT (test)|ACT]].]]
Although there is no official conversion chart between the SAT and its biggest rival, the [[ACT (test)|ACT]], the College Board released an unofficial chart based on results from 103,525 test takers who took both tests between October 1994 and December 1996;<ref>[http://web.archive.org/web/20061106113221/http://www.collegeboard.com/prod_downloads/highered/ra/sat/satACT_concordance.pdf SAT-ACT concordance (archived)]</ref> however, both tests have changed since then. Several colleges have also issued their own charts. The following is based on the University of California's conversion chart.<ref>[http://www.universityofcalifornia.edu/admissions/undergrad_adm/paths_to_adm/freshman/scholarship_reqs.html University of California Scholarship Requirement]. Retrieved June 26, 2006.</ref>
{| class="wikitable" style="text-align:center"
! SAT (prior to Writing Test addition) !! SAT (with Writing Test addition) !! ACT Composite score
|-
| 1600 || 2400 || 36
|-
| 1560–1590 || 2340–2390 || 35
|-
| 1520–1550 || 2280–2330 || 34
|-
| 1480–1510 || 2220–2270 || 33
|-
| 1440–1470 || 2160–2210 || 32
|-
| 1400–1430 || 2100–2150 || 31
|-
| 1360–1390 || 2040–2090 || 30
|-
| 1320–1350 || 1980–2030 || 29
|-
| 1280–1310 || 1920–1970 || 28
|-
| 1240–1270 || 1860–1910 || 27
|-
| 1200–1230 || 1800–1850 || 26
|-
| 1160–1190 || 1740–1790 || 25
|-
| 1120–1150 || 1680–1730 || 24
|-
| 1080–1110 || 1620–1670 || 23
|-
| 1040–1070 || 1560–1610 || 22
|-
| 1000–1030 || 1500–1550 || 21
|-
| 960–990 || 1440–1490 || 20
|-
| 920–950 || 1380–1430 || 19
|-
| 880–910 || 1320–1370 || 18
|-
| 840–870 || 1260–1310 || 17
|-
| 800–830 || 1200–1250 || 16
|-
| 760–790 || 1140–1190 || 15
|-
| 720–750 || 1080–1130 || 14
|-
| 680–710 || 1020–1070 || 13
|-
| 640–670 || 960–1010 || 12
|-
| 600–630 || 900–950 || 11
|}
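Mechanically, the chart above is just a range lookup. A minimal sketch follows, with the band lower bounds hard-coded from the University of California chart for the pre-writing 1600-point scale; the table name and function name are ours, not part of any official tool.

```python
# Lower bound of each pre-writing SAT band from the chart above,
# paired with the corresponding ACT composite score.
SAT_TO_ACT = [
    (1600, 36), (1560, 35), (1520, 34), (1480, 33), (1440, 32),
    (1400, 31), (1360, 30), (1320, 29), (1280, 28), (1240, 27),
    (1200, 26), (1160, 25), (1120, 24), (1080, 23), (1040, 22),
    (1000, 21), (960, 20), (920, 19), (880, 18), (840, 17),
    (800, 16), (760, 15), (720, 14), (680, 13), (640, 12), (600, 11),
]

def sat_to_act(sat_score):
    """Return the ACT composite for the first band whose lower bound
    the given pre-writing SAT score meets, or None if the score
    falls below the chart."""
    for lower_bound, act in SAT_TO_ACT:
        if sat_score >= lower_bound:
            return act
    return None

print(sat_to_act(1340))  # falls in the 1320-1350 band -> 29
```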

==History==
{| class="wikitable" border="1" style="text-align:center; margin-left:1em; margin-right:0; float:right;"
|+Mean SAT Scores by year<ref name=06Report>{{cite web |url=http://professionals.collegeboard.com/data-reports-research/sat/cb-seniors-2010/tables |title=2010 SAT Trends |year=2010 |publisher=The College Board}}</ref>
|-
!Year of<br />exam !! Reading<br />/Verbal<br />Score !! Math<br />Score
|-
|'''1972''' || 530 || 509
|-
|'''1973''' || 523 || 506
|-
|'''1974''' || 521 || 505
|-
|'''1975''' || 512 || 498
|-
|'''1976''' || 509 || 497
|-
|'''1977''' || 507 || 496
|-
|'''1978''' || 507 || 494
|-
|'''1979''' || 505 || 493
|-
|'''1980''' || 502 || 492
|-
|'''1981''' || 502 || 492
|-
|'''1982''' || 504 || 493
|-
|'''1983''' || 503 || 494
|-
|'''1984''' || 504 || 497
|-
|'''1985''' || 509 || 500
|-
|'''1986''' || 509 || 500
|-
|'''1987''' || 507 || 501
|-
|'''1988''' || 505 || 501
|-
|'''1989''' || 504 || 502
|-
|'''1990''' || 500 || 501
|-
|'''1991''' || 499 || 500
|-
|'''1992''' || 500 || 501
|-
|'''1993''' || 500 || 503
|-
|'''1994''' || 499 || 504
|-
|'''1995''' || 504 || 506
|-
|'''1996''' || 505 || 508
|-
|'''1997''' || 505 || 511
|-
|'''1998''' || 505 || 512
|-
|'''1999''' || 505 || 511
|-
|'''2000''' || 505 || 514
|-
|'''2001''' || 506 || 514
|-
|'''2002''' || 504 || 516
|-
|'''2003''' || 507 || 519
|-
|'''2004''' || 508 || 518
|-
|'''2005''' || 508 || 520
|-
|'''2006''' || 503 || 518
|-
|'''2007''' || 502 || 515
|-
|'''2008''' || 502 || 515
|-
|'''2009''' || 501 || 515
|-
|'''2010''' || 501 || 516
|-
|'''2011''' || 497 || 514
|}
   
[[Image:Mean SAT Score by year.png|thumb|left|Mean SAT Reading and Math test scores over time.]]

==Historical development==

Originally used mainly by colleges and universities in the northeastern United States, the SAT was developed by [[Carl Brigham]], one of the psychologists who worked on the Army Alpha and Beta tests<!-- Reference, Kline, K. and Davidshofer, C 2005 -->, as a way to eliminate test bias between people from different socio-economic backgrounds.
 
===1901 test===
The College Board began on June 17, 1901, when 973 students took its first test across 67 locations in the United States and two in Europe. Although those taking the test came from a variety of backgrounds, approximately one third were from [[New York]], [[New Jersey]], or [[Pennsylvania]]. The majority of those taking the test were from private schools, academies, or endowed schools. About 60% of those taking the test applied to [[Columbia University]]. The test contained sections on English, [[French (language)|French]], [[German (language)|German]], [[Latin]], [[Greek (language)|Greek]], history, mathematics, [[chemistry]], and [[physics]]. The test was not multiple choice, but instead was evaluated based on essay responses graded as "excellent", "good", "doubtful", "poor" or "very poor".<ref name=Frontline1>{{cite web|url=http://www.pbs.org/wgbh/pages/frontline/shows/sats/where/1901.html |title=frontline: secrets of the sat: where did the test come from?: the 1901 college board |accessdate=2007-10-20 |work=Secrets of the SAT |publisher=[[Frontline (US TV series)|Frontline]]}}</ref>
 
===1926 test===
The first administration of the SAT occurred on June 23, 1926, when it was known as the Scholastic Aptitude Test.<ref name=CBHistorical /><ref name=Frontline2 /> This test, prepared by a committee headed by Princeton psychologist [[Carl Brigham|Carl Campbell Brigham]], had sections of [[definition]]s, [[arithmetic]], classification, artificial language, [[antonym]]s, number series, [[analogies]], [[logical inference]], and paragraph reading. It was administered to over 8,000 students at over 300 test centers. Men composed 60% of the test-takers. Slightly over a quarter of males and females applied to [[Yale University]] and [[Smith College]] respectively.<ref name=Frontline2>{{cite web|url=http://www.pbs.org/wgbh/pages/frontline/shows/sats/where/1926.html |title=frontline: secrets of the sat: where did the test come from?: the 1926 sat |accessdate=2007-10-20 |work=Secrets of the SAT |publisher=[[Frontline (US TV series)|Frontline]]}}</ref> The test was paced rather quickly: test-takers were given only a little over 90 minutes to answer 315 questions.<ref name=CBHistorical>{{cite web |url=http://www.collegeboard.com/research/pdf/rr20027_11439.pdf |format=PDF |title=Research Report No. 2002-7: A Historical Perspective on the SAT: 1926–2001 |year=2002 |accessdate=2007-10-20 |last=Lawrence |first=Ida |coauthors=Rigol, Gretchen W.; Van Essen, Thomas; Jackson, Carol A. |publisher=College Entrance Examination Board |archiveurl=http://web.archive.org/web/20041103132451/http://www.collegeboard.com/research/pdf/rr20027_11439.pdf |archivedate=2004-11-03}}</ref>
 
===1928 and 1929 tests===
 
===1930 test and 1936 changes===
In 1930 the SAT was first split into the verbal and math sections, a structure that would continue through 2004. The verbal section of the 1930 test covered a narrower range of content than its predecessors, examining only antonyms, double definitions (somewhat similar to sentence completions), and paragraph reading. In 1936, analogies were re-added. Between 1936 and 1946, students had between 80 and 115 minutes to answer 250 verbal questions (over a third of which were on antonyms). The mathematics test introduced in 1930 contained 100 free response questions to be answered in 80 minutes, and focused primarily on speed. From 1936 to 1941, like the 1928 and 1929 tests, the mathematics section was eliminated entirely. When the mathematics portion of the test was re-added in 1942, it consisted of multiple choice questions.<ref name=CBHistorical />
 
===1946 test and associated changes===
Paragraph reading was eliminated from the verbal portion of the SAT in 1946 and replaced with reading comprehension, and "double definition" questions were replaced with sentence completions. Between 1946 and 1957 students were given 90 to 100 minutes to complete 107 to 170 verbal questions. Starting in 1958 time limits became more stable, and for 17 years, until 1975, students had 75 minutes to answer 90 questions. In 1959 questions on data sufficiency were introduced to the mathematics section, and then replaced with quantitative comparisons in 1974. In 1974 both verbal and math sections were reduced from 75 minutes to 60 minutes each, with changes in test composition compensating for the decreased time.<ref name=CBHistorical />

===1980 test and associated changes===
The "Strivers" score study was implemented. The study was introduced by the Educational Testing Service, which administers the SAT and has been researching how to reduce the barriers faced by minorities and individuals from disadvantaged social and economic backgrounds.

The original "Strivers" project, which was in the research phase from 1980 to 1994, awarded special "Striver" status to test-takers who scored 200 points higher than expected for their race, gender, and income level. The belief was that this would give minorities a better chance at being accepted into a college of higher standard, e.g. an Ivy League school. In 1992 the Strivers Project was leaked to the public, and as a result it was terminated in 1993. After federal courts heard arguments from the ACLU, NAACP, and the Educational Testing Service, the courts ordered the study to alter its data collection process, stating that only the age, race, and zip code could be used to determine a test-taker's eligibility for "Strivers" points. These changes were introduced to the SAT effective in 1994.
 
===1994 changes===
In 1994 the verbal section received a dramatic change in focus. Among these changes were the removal of [[antonym]] questions and an increased focus on passage reading. The mathematics section also saw a dramatic change in 1994, thanks in part to pressure from the [[National Council of Teachers of Mathematics]]. For the first time since 1935, the SAT asked some non-multiple-choice questions, instead requiring students to supply the answers. 1994 also saw the introduction of calculators into the mathematics section for the first time in the test's history. The mathematics section introduced concepts of probability, slope, elementary statistics, counting problems, median, and mode.<ref name=CBHistorical />

The average score on the 1994 modification of the SAT I was usually around 1000 (500 on the verbal, 500 on the math). The most selective schools in the United States (for example, those in the [[Ivy League]]) typically had SAT averages exceeding 1400 on the old test.{{Citation needed|date=June 2010}}

===1995 re-centering (raising mean score back to 500)===
The test scoring was initially scaled to make 500 the mean score on each section, with a [[standard deviation]] of 100.<ref name="Intelligence">{{cite encyclopedia|url=http://encarta.msn.com/encyclopedia_761570026_3/intelligence.html |title=Intelligence |publisher=[[MSN Encarta]] |accessdate=2008-03-02}}</ref> As the test grew more popular and more students from less rigorous schools began taking it, the average dropped to about 428 Verbal and 478 Math. The SAT was "recentered" in 1995, and the average "new" score again became close to 500. Scores awarded after 1994 and before October 2001 are officially reported with an "R" (e.g. 1260R) to reflect this change. Old scores may be recentered for comparison with post-1995 scores by using official College Board tables,<ref name=autogenerated2>[http://research.collegeboard.org/programs/sat/data/equivalence/sat-individual SAT I Individual Score Equivalents]</ref> which in the middle ranges add about 70 points to Verbal and 20 or 30 points to Math. In other words, current students have a roughly 100-point (70 plus 30) advantage over their parents.
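The official equivalence tables convert point-by-point, but the rough mid-range effect described above can be sketched as follows. This uses only the approximate +70 Verbal / +30 Math offsets from the text, not the official tables, and the function name is ours.

```python
def recenter_midrange(old_verbal, old_math):
    """Approximate a pre-1995 section pair on the recentered scale
    using the rough mid-range offsets quoted above (+70 Verbal,
    +30 Math), capped at the 800 section maximum."""
    return min(old_verbal + 70, 800), min(old_math + 30, 800)

# A pre-1995 pair near the old averages (about 428 V / 478 M)
# lands near the recentered mean of 500 per section.
print(recenter_midrange(430, 480))  # (500, 510)
```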
===1995 re-centering controversy===
Certain educational organizations viewed the SAT re-centering initiative as an attempt to stave off international embarrassment over continuously declining test scores, even among top students. As evidence, it was noted that the number of pupils who scored above 600 on the verbal portion of the test had fallen from a peak of 112,530 in 1972 to 73,080 in 1993, a decline of about a third, even though the total number of test-takers had risen by over 500,000.<ref>{{cite news|last=The Center for Education Reform|title=SAT Increase--The Real Story, Part II|url=http://www.edreform.com/Press_Box/Press_Releases/?SAT_Increase_The_Real_Story_Part_II&year=1996|date=1996-08-22}}</ref>
===2002 changes – Score Choice===
In October 2002, the College Board dropped the Score Choice option for SAT-II exams. Under this option, scores were not released to colleges until the student saw and approved of the score.<ref>Schoenfeld, Jane. College board drops 'score choice' for SAT-II exams. St. Louis Business Journal, May 24, 2002.</ref> The College Board has since decided to re-implement Score Choice in the spring of 2009. It is described as optional, and it is not clear whether the reports sent will indicate whether the student opted in. A number of highly selective colleges and universities, including [[Yale]], the [[University of Pennsylvania]], and [[Stanford]], have announced they will require applicants to submit all scores; Stanford, however, only prohibits Score Choice for the traditional SAT.<ref name="Freshman Requirements & Process: Testing">{{cite web|url=http://admission.stanford.edu/application/freshman/testing|work=stanford.edu|title=Freshman Requirements & Process: Testing|publisher=Stanford University Office of Undergraduate Admissions|accessdate=13 August 2011}}</ref> Others, such as [[MIT]] and [[Harvard University|Harvard]], have fully embraced Score Choice.
 
===2005 changes===
In 2005, the test was changed again, largely in response to criticism by the [[University of California|University of California system]].<ref>[http://www.dailynexus.com/article.php?a=3385 College Board To Alter SAT I for 2005–06 – Daily Nexus]</ref> Because of issues concerning ambiguous questions, certain types of questions were eliminated: the analogies from the verbal section and the quantitative comparisons from the math section. The test was made marginally harder, as a corrective to the rising number of perfect scores. A new writing section, with an essay, based on the former SAT II Writing Subject Test, was added,<ref>{{Cite book |title=The Official SAT Study Guide |publisher=The College Board |year=2009 |edition=Second |chapter=Chapter 12: Improving Paragraphs |page=169 |isbn=978-0-87447-852-5}}</ref> in part to increase the chances of closing the gap between the highest and midrange scores; another factor was the desire to test the writing ability of each student directly. The New SAT (known as the SAT Reasoning Test) was first offered on March 12, 2005, after the last administration of the "old" SAT in January 2005. The mathematics section was expanded to cover three years of high school mathematics, and the verbal section was renamed the Critical Reading section.
===2008 changes===
  +
In late 2008, a new variable came into play. Previously, applicants to most colleges were required to submit all scores, with some colleges that embraced Score Choice retaining the option of allowing their applicants not to have to submit all scores. However, in 2008, an initiative to make Score Choice universal had begun, with some opposition from colleges desiring to maintain score report practices. While students theoretically now have the choice to submit their best score (in theory one could send any score one wishes to send) to the college of their choice, some popular colleges and universities, such as [[Cornell]], ask that students send all test scores.<ref name="Cornell Rejects SAT Score Choice Option">{{cite web|url=http://cornellsun.com/section/news/content/2009/01/20/cornell-rejects-sat-score-choice-option|title=Cornell Rejects SAT Score Choice Option |publisher=[[The Cornell Daily Sun]] |accessdate=2008-02-13}}</ref> This had led the College Board to display on their web site which colleges agree with or dislike Score Choice, with continued claims that students will still never have scores submitted against their will.<ref>{{cite web|format=PDF|url=http://professionals.collegeboard.com/profdownload/sat-score-use-practices-list.pdf|title=Universities Requesting All Scores|accessdate=2009-06-22}}</ref> Regardless of whether a given college permits applicants to exercise Score Choice options, most colleges do not penalize students who report poor scores along with high ones; many universities, such as Columbia{{Citation needed|date=March 2011}} and Cornell,{{Citation needed|date=March 2011}} expressly promise to overlook those scores that may be undesirable to the student and/or to focus more on those scores that are most representative of the student's achievement and academic potential. 
College Board maintains a list of colleges and their respective score choice policies that is recent as of November 2011.<ref>http://professionals.collegeboard.com/profdownload/sat-score-use-practices-list.pdf</ref>
===2012 changes===
  +
Beginning in 2012, test takers were required to upload a digital [[portrait]] for enhanced [[Proof of identity|identification]] purposes. Critics raised concerns of [[racial discrimination|racial]] and other [[discrimination]], because the photos are obligatorily submitted alongside test scores in the [[university and college admission|admissions process]].<ref name='Caldwell 2012-03-27'>{{cite news | first = Tanya | last = Caldwell | title = SAT Reforms May Have Negative Impact on Students, Counselors Say | date = 2012-03-27 | url = http://thechoice.blogs.nytimes.com/2012/03/27/sat-reforms-may-have-negative-impact-on-students-counselors-say/ | work = The New York Times | accessdate = 2012-10-31}}</ref>
   
 
==Name changes and recentered scores==
The name originally stood for "Scholastic Aptitude Test".<ref name="SAT FAQ">{{cite web|url=http://www.collegeboard.com/student/testing/sat/about/sat/FAQ.html#quest14|title=SAT FAQ |publisher=[[The College Board]] |accessdate=2008-09-13}}</ref> In 1990, because of uncertainty about the SAT's ability to function as an [[Intelligence quotient|intelligence test]], the name was changed to Scholastic Assessment Test. In 1993, the name was changed to SAT I: Reasoning Test (with the [[Pseudo-acronym|letters not standing for anything]]) to distinguish it from the [[SAT II: Subject Tests]].<ref name="SAT FAQ" /> In 2004, the Roman numerals were dropped from both tests, and the SAT I was renamed the SAT Reasoning Test.<ref name="SAT FAQ" /> The scoring categories are now Critical Reading (comparable to parts of the Verbal portion of the old SAT I), Mathematics, and Writing. The Writing section includes an essay, whose score contributes to the overall Writing score, as well as grammar questions (also comparable to parts of the Verbal portion of the previous SAT).
   
The test scoring was initially scaled to make 500 the mean score on each section, with a [[standard deviation]] of 100.<ref name="Intelligence"/> As the test grew more popular and students from less rigorous schools began taking it, the average dropped to about 428 on Verbal and 478 on Math. The SAT was "recentered" in 1995, bringing the average "new" score again close to 500. Scores awarded after 1994 and before October 2001 are officially reported with an "R" (e.g., 1260R) to reflect this change. Old scores may be converted to the recentered scale using official College Board tables,<ref name=autogenerated2 /> which in the middle ranges add about 70 points to Verbal and 20 to 30 points to Math.
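The scale arithmetic above can be sketched directly: with scores standardized to a mean of 500 and a standard deviation of 100, a score maps to a population percentile, and the recentering can be approximated as a flat mid-range offset. This is a minimal illustration using the rough figures quoted above (+70 Verbal, +25 Math); it is not the official College Board conversion table.

```python
# Illustration of the original 500-mean, 100-SD scale and the 1995
# recentering. The flat offsets below are the article's rough mid-range
# figures, NOT the official College Board conversion tables.
from statistics import NormalDist

def percentile_on_original_scale(score, mean=500.0, sd=100.0):
    """Fraction of the reference population at or below `score`."""
    return NormalDist(mu=mean, sigma=sd).cdf(score)

def recenter_midrange(score, section):
    """Approximate recentered score for a mid-range pre-1995 score."""
    bump = {"verbal": 70, "math": 25}   # rough mid-range offsets
    return min(800, score + bump[section])  # sections are capped at 800

print(round(percentile_on_original_scale(600), 2))  # one SD above the mean
print(recenter_midrange(430, "verbal"))
```

On this scale a 600 sits one standard deviation above the mean, i.e. roughly the 84th percentile of the original reference group.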
   
 
==Scoring problems of October 2005 tests==
In March 2006, it was announced that a small percentage of the SATs taken in October 2005 had been scored incorrectly because the test papers were moist and did not scan properly, and that some students had received substantially erroneous scores. The College Board announced it would correct the scores of students who had been given a lower score than they earned, but by that point many of those students had already applied to colleges using their original scores. The College Board decided not to change the scores of students who had been given a higher score than they earned. A lawsuit was filed in 2005 by about 4,400 students who received an incorrectly low SAT score. The class-action suit was settled in August 2007, when the College Board and another company that administers the test announced they would pay $2.85 million to over 4,000 students. Under the agreement, each student could either elect to receive $275 or submit a claim for more money if he or she felt the damage was greater.<ref>{{cite web|url=http://chronicle.com/news/index.php?id=2911 |title=$2.85-Million Settlement Proposed in Lawsuit Over SAT-Scoring Errors |accessdate=2007-08-27 |date= 2007-08-24 |last=Hoover |first=Eric |publisher=The Chronicle of Higher Education |archiveurl = http://web.archive.org/web/20070930180901/http://chronicle.com/news/index.php?id=2911 |archivedate = 2007-09-30}}</ref> A similar scoring error occurred on a secondary school admission test in 2010–2011, when the ERB ([[Educational Records Bureau]]) announced, after the admission process was over, that an error had been made in scoring the tests of 2,010 (17%) of the students who had taken the [[Independent School Entrance Examination]] for admission to private secondary schools for 2011. Commenting in the ''[[New York Times]]'' on the effect of the error on students' school applications, ERB president David Clune stated, "It is a lesson we all learn at some point — that life isn’t fair."<ref>{{cite news |url=http://www.nytimes.com/2011/04/09/nyregion/09tests.html |title=7,000 Private School Applicants Got Incorrect Scores, Company Says |newspaper=New York Times |date=April 8, 2011 |last=Maslin Nir |first=Sarah }}</ref>
   
==The math-verbal achievement gap==
{{main|Math-verbal achievement gap}}

In 2002, the education scholar and columnist Richard Rothstein wrote in ''The New York Times'' that U.S. math averages on the SAT and ACT had continued their decade-long rise over national verbal averages on the tests.<ref>{{cite news |last=Rothstein |first=Richard |date=August 28, 2002 |title=Better sums than at summerizing; The SAT gap |newspaper=[[The New York Times]] |url=http://www.nytimes.com/2002/08/28/nyregion/lessons-sums-vs-summarizing-sat-s-math-verbal-gap.html }}</ref>
   
==Perception==
===Correlations with IQ===
[[Image:1995-SAT-Education2.png|right|thumb|200px|]]
 
Frey and Detterman (2004) analyzed the correlation of SAT scores with intelligence test scores.<ref>{{cite journal |last=Frey |first=M. C. |last2=Detterman |first2=D. K. |year=2004 |title=Scholastic Assessment or ''g''? The Relationship Between the Scholastic Assessment Test and General Cognitive Ability |journal=Psychological Science |volume=15 |issue=6 |pages=373–378 |url=http://www.psychologicalscience.org/pdf/ps/frey.pdf |doi=10.1111/j.0956-7976.2004.00687.x |pmid=15147489}}</ref> They found SAT scores to be highly correlated with ''[[General intelligence factor|general mental ability]]'', or ''g'' (''r'' = .82 in their sample, .86 when corrected for non-linearity). The correlation between SAT scores and scores on the [[Raven's Advanced Progressive Matrices]] was .483 (.72 when corrected for restriction of range). They concluded that the SAT is primarily a test of ''g''. Beaujean and colleagues (2006) reached similar conclusions.<ref>{{cite journal |last=Beaujean |first=A. A. |last2=Firmin |first2=M. W. |last3=Knoop |first3=A. J. |last4=Michonski |first4=J. D. |last5=Berry |first5=T. B. |last6=Lowrie |first6=R. E. |year=2006 |title=Validation of the Frey and Detterman (2004) IQ prediction equations using the Reynolds Intellectual Assessment Scales |journal=Personality and Individual Differences |volume=41 |issue= |pages=353–357 |url=http://www.iapsych.com/articles/beaujean2006.pdf |archiveurl=http://web.archive.org/web/20110713001038/http://www.iapsych.com/articles/beaujean2006.pdf|archivedate=2011-07-13}}</ref>
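The "corrected" figures reported in such studies come from standard psychometric adjustments. Below is a sketch of the usual correction for restriction of range (Thorndike's Case 2); the standard-deviation ratio used is an assumed value for illustration, not a figure taken from the paper.

```python
# Thorndike Case 2 correction for restriction of range: estimates what a
# correlation observed in a range-restricted sample (e.g. college-bound
# test takers) would be in the unrestricted population.
import math

def correct_for_range_restriction(r, sd_ratio):
    """r: observed correlation; sd_ratio: unrestricted SD / restricted SD."""
    return r * sd_ratio / math.sqrt(1 - r**2 + (r * sd_ratio) ** 2)

# With the observed r = .483 and an assumed SD ratio of 1.9 (illustrative),
# the corrected correlation comes out near the .72 reported above.
print(round(correct_for_range_restriction(0.483, 1.9), 2))
```

With no range restriction (an SD ratio of 1) the formula returns the observed correlation unchanged, which is a quick sanity check on the expression.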
[[Image:1995-SAT-Income2.png|right|thumb|200px|SAT scores vary according to race, income, and parental educational background.]]
 
   
===Cultural bias===
 
For decades, many critics have accused the designers of the verbal SAT of cultural bias toward the white and wealthy. A famous example, long since removed, was the [[oarsman]]&ndash;[[regatta]] analogy question.<ref>''Don't Believe the Hype'', Chideya, 1995; ''[http://epaa.asu.edu/epaa/v3n2.html The Bell Curve]'', Herrnstein and Murray, 1994</ref> The object of the question was to find the pair of terms whose relationship was most similar to the relationship between "runner" and "marathon". The correct answer was "oarsman" and "regatta". Choosing the correct answer presupposed familiarity with [[rowing (sport)|rowing]], a sport popular with the wealthy, and with its structure and terminology. Fifty-three percent of white students answered the question correctly, while only 22% of black students did.<ref>[http://www.zmag.org/racewatch/znet_race_instructional5.htm Culture And Racism]{{dead link|date=May 2011}}</ref> However, according to Herrnstein and Murray, the black-white gap is smaller on culture-loaded questions like this one than on questions that appear culturally neutral.<ref>{{cite book |last=Herrnstein |first=Richard J. |first2=Charles |last2=Murray |year=1994 |title=The Bell Curve: Intelligence and Class Structure in American Life |location=New York |publisher=Free Press |isbn=0-02-914673-9 |pages=281–282 }}</ref> Analogy questions have since been replaced by short reading passages.
   
 
===Dropping SAT===
A growing number of [[colleges]] have responded to these criticisms by joining the [[Liberal arts colleges in the United States#SAT optional movement|SAT optional movement]]; these colleges do not require the SAT for admission.
   
In a 2001 speech to the [[American Council on Education]], [[Richard C. Atkinson]], then president of the [[University of California]], urged dropping the SAT Reasoning Test as a college admissions requirement:
   
:"Anyone involved in education should be concerned about how overemphasis on the SAT is distorting educational priorities and practices, how the test is perceived by many as unfair, and how it can have a devastating impact on the self-esteem and aspirations of young students. There is widespread agreement that overemphasis on the SAT harms American education."<ref>[http://www.ucop.edu/pres/speeches/achieve.htm Achievement Versus Aptitude Tests in College Admissions]</ref>
   
 
In response to threats by the University of California to drop the SAT as an admission requirement, the College Entrance Examination Board announced the restructuring of the SAT, to take effect in March 2005, as detailed above.
   
In the 1960s and 1970s there was a movement to drop achievement tests. Over time, the countries, states, and provinces that had dropped them found that academic standards had fallen and that students had studied less and taken their studying less seriously; they reintroduced the tests after studies and research concluded that the high-stakes tests produced benefits that outweighed their costs.<ref>{{cite book|last=Phelps|first=Richard|title=Kill the Messenger|year=2003|publisher=Transaction Publishers|location=New Brunswick, New Jersey|isbn=0-7658-0178-7|pages=220}}</ref>
===MIT study===
In 2005, [[MIT]] Writing Director Les Perelman plotted essay length against essay score on the new SAT using released essays and found a high correlation between them. After studying over 50 graded essays, he found that longer essays consistently received higher scores; he argued that simply by gauging an essay's length, without reading it, its score could be predicted correctly over 90% of the time. He also discovered that several of these essays were full of factual errors; the College Board does not claim to grade for factual accuracy.

Perelman, along with the National Council of Teachers of English, also criticized the 25-minute writing section of the test for damaging standards of writing teaching in the classroom. They say that writing teachers training their students for the SAT will not focus on revision, depth, or accuracy, but will instead produce long, formulaic, and wordy pieces.<ref>{{Cite news |url=http://www.nytimes.com/2005/05/04/education/04education.html |newspaper=The New York Times |title=SAT Essay Test Rewards Length and Ignores Errors |first=Michael |last=Winerip |date=May 4, 2005 }}</ref> "You're getting teachers to train students to be bad writers", concluded Perelman.<ref>{{cite news |url=http://dir.salon.com/story/mwt/feature/2005/05/17/sat/index.html |title=Testing, testing |last=Harris |first=Lynn |date=May 17, 2005 |work=[[Salon.com]] }}</ref>
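The length-score relationship Perelman described is essentially a one-variable regression. The sketch below fits score on word count by ordinary least squares; the data points are invented for demonstration and are not his graded essays.

```python
# Hypothetical illustration of the length-score relationship: fit essay
# score on word count alone by ordinary least squares. Data are invented.
from statistics import mean

word_counts = [120, 180, 250, 320, 400, 460]
scores      = [2, 3, 3, 4, 5, 6]   # each SAT reader scored essays 1-6

mx, my = mean(word_counts), mean(scores)
slope = (sum((x - mx) * (y - my) for x, y in zip(word_counts, scores))
         / sum((x - mx) ** 2 for x in word_counts))
intercept = my - slope * mx

def predicted_score(words):
    """Predict a 1-6 score from length alone, clamped to the scale."""
    return max(1, min(6, round(intercept + slope * words)))

print(predicted_score(150), predicted_score(450))
```

On this made-up sample the fitted slope is positive, so length alone separates low-scoring from high-scoring essays, which is the pattern Perelman reported.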
===Test score disparity by income===
Recent research has linked higher family income to higher mean scores. Test score data from California show that test-takers with family incomes under $20,000 a year had a mean score of 1310, while test-takers with family incomes over $200,000 had a mean score of 1715, a difference of 405 points. Estimates of the correlation between SAT scores and household income range from 0.23 to 0.4, explaining about 5&ndash;16% of the variation.<ref>http://hypertextbook.com/eworld/sat.shtml</ref> One calculation has shown a 40-point average score increase for every additional $20,000 in income.<ref>[http://www.domesatreview.com/content/sat-test-demographics-income-and-ethnicity SAT Test Demographics by Income and Ethnicity]</ref> There are conflicting opinions on the source of this correlation. Some think it is evidence of the superior education and tutoring accessible to more affluent adolescents. Others propose that it reflects wealthier families' exposure to a broader range of cultural ideas and experiences, through travel and other means, and that such "cultural literacy" can enhance measured aptitude.<ref>Hirsch, E. D. ''The Schools We Need: And Why We Don't Have Them'', Doubleday, 1996</ref>
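The "variation explained" figures above are simply the squared correlation, and the observed income gap can be sanity-checked against the 40-points-per-$20,000 rule of thumb. A minimal sketch using only the numbers quoted in this section:

```python
# Sanity checks on the figures quoted above: variance explained is the
# squared correlation, and the rule of thumb is ~40 points per $20,000.

def variance_explained(r):
    """Share of score variance associated with income (r squared)."""
    return r * r

def implied_gap(income_low, income_high, points_per_20k=40):
    """Mean-score gap implied by the linear rule of thumb."""
    return (income_high - income_low) / 20_000 * points_per_20k

# r in [0.23, 0.4] explains roughly 5-16% of the variation:
print(round(variance_explained(0.23), 3), round(variance_explained(0.4), 2))
# Rule-of-thumb gap between <$20,000 and >$200,000 families:
print(implied_gap(20_000, 200_000))
```

The rule of thumb implies a 360-point gap across that income span, in the same ballpark as the 405-point difference observed in the California data.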
==History of test development==
*[[Carl Brigham]]
*[[College Board]]
   
 
==See also==
{{wikibooks|SAT Study Guide}}
*[[ACT (test)]], a college entrance exam, competitor to the SAT
*[[College admissions in the United States]]
*[[List of admissions tests]]
*[[PSAT/NMSQT]]
*[[SAT calculator program]]
*[[SAT Essay Prompts]]
*[[SAT Subject Tests]]
*[[Swedish Scholastic Aptitude Test]]
 
==References==
{{Reflist|30em}}
 
 
*Achter, J. A., Lubinski, D., & Benbow, C. P. (1996). Multipotentiality among the intellectually gifted: "It was never there and already it's vanishing." Journal of Counseling Psychology Vol 43(1) Jan 1996, 65-76.
 
*Afshari, M. R. (1980). A comparison of the predictive validities of the Scholastic Aptitude Test and a Piagetian Test of Formal Operations: Dissertation Abstracts International.
 
*Albright, R. L. (1979). An analysis of the effectiveness of the Scholastic Aptitude Test battery and high school rank as predictors of the long-term academic success and differential academic achievement outcomes for Black students attending two predominantly Black colleges: Dissertation Abstracts International.
 
*Alderman, D. L. (1981). Student self-selection and test repetition: Educational and Psychological Measurement Vol 41(4) Win 1981, 1073-1081.
 
*Alderman, D. L. (1982). Language proficiency as a moderator variable in testing academic aptitude: Journal of Educational Psychology Vol 74(4) Aug 1982, 580-587.
 
*Alderman, D. L., & Powers, D. E. (1980). The effects of special preparation on SAT-Verbal scores: American Educational Research Journal Vol 17(2) Sum 1980, 239-251.
 
*Aleamoni, L. M., & Eitelbach, S. B. (1976). Comparison of six examinations given in Rhetoric 101, at the University of Illinois, Fall 1965: Research in Higher Education Vol 4(4) 1976, 347-354.
 
*Aleamoni, L. M., & Oboler, L. (1978). ACT versus SAT in predicting first semester GPA: Educational and Psychological Measurement Vol 38(2) Sum 1978, 393-399.
 
*Alington, D. E., & Leaf, R. C. (1991). Elimination of SAT-Verbal sex differences was due to policy-guided changes in item content: Psychological Reports Vol 68(2) Apr 1991, 541-542.
 
*Allen, W. B. (1988). Rhodes handicapping, or slowing the pace of integration: Journal of Vocational Behavior Vol 33(3) Dec 1988, 365-378.
 
*Altenhof, J. C. (1985). Influence of item characteristics on male and female performance on SAT-Math: Dissertation Abstracts International.
 
*Angoff, W. H. (1986). Some contributions of the College Board SAT to psychometric theory and practice: Educational Measurement: Issues and Practice Vol 5(3) Fal 1986, 7-11.
 
*Angoff, W. H. (1996). Equating. Lanham, MD, England: University Press of America.
 
*Angoff, W. H., Pomplun, M., McHale, F., & Morgan, R. (1990). Comparative study of factors related to the predictive validities of 1974-75 and 1984-85 forms of the SAT. Princeton, NJ: Educational Testing Service.
 
*Angoff, W. H., & Schrader, W. B. (1984). A study of hypotheses basic to the use of rights and formula scores: Journal of Educational Measurement Vol 21(1) Spr 1984, 1-17.
 
*Ashmore, R., & Cork, K. (1985). Estimating reading grade level equivalents using the American College Testing Program: Journal of College Student Personnel Vol 26(6) Nov 1985, 547-548.
 
*Astin, A. W., & Henson, J. W. (1977). New measures of college selectivity: Research in Higher Education Vol 6(1) Mar 1977, 1-9.
 
*Baird, L. L. (1984). Relationships between ability, college attendance, and family income: Research in Higher Education Vol 21(4) 1984, 373-395.
 
*Baker, F. B., & Al-Karni, A. (1991). A comparison of two procedures for computing IRT equating coefficients: Journal of Educational Measurement Vol 28(2) Sum 1991, 147-162.
 
*Baker, J. S. (2006). Effect of extended time testing accommodations on grade point averages of college students with learning disabilities. Dissertation Abstracts International: Section B: The Sciences and Engineering.
 
*Baker-Sennett, J. G. (1991). Components of efficient problem-solving: A perspective on creative discovery: Dissertation Abstracts International.
 
*Barner, R. R., & Bruno, J. E. (1991). Mathematics attainment at inner-city schools: Establishing the need for systematic formative evaluation practices: Urban Review Vol 23(4) Dec 1991, 251-270.
 
*Barnes, T. R. (1978). Dumber by the dozen or by the decade? : Psychological Reports Vol 42(3, Pt 1) Jun 1978, 970.
 
*Barnes, V., Potter, E. H., & Fiedler, F. E. (1983). Effect of interpersonal stress on the prediction of academic performance: Journal of Applied Psychology Vol 68(4) Nov 1983, 686-697.
 
*Barney, J. A., Fredericks, J., Fredericks, M., & Robinson, P. (1987). Relationship between social class, ACT scores, SAT scores and academic achievement of business students: A comparison: College Student Journal Vol 21(4) Win 1987, 395-401.
 
*Baron, J., & Norman, M. F. (1992). SATs, achievement tests, and high-school class rank as predictors of college performance: Educational and Psychological Measurement Vol 52(4) Win 1992, 1047-1055.
 
*Baydar, N. (1990). Effects of coaching on the validity of the SAT: Results of a simulation study. Princeton, NJ: Educational Testing Service.
 
*Beaujean, A. A., Firmin, M. W., Knoop, A. J., Michonski, J. D., Berry, T. P., & Lowrie, R. E. (2006). Validation of the Frey and Detterman (2004) IQ prediction equations using the Reynolds Intellectual Assessment Scales: Personality and Individual Differences Vol 41(2) Jul 2006, 353-357.
 
*Becker, B. J. (1990). Item characteristics and gender differences on the SAT-M for mathematically able youths: American Educational Research Journal Vol 27(1) Spr 1990, 65-87.
 
*Bejar, I. I., & Blew, E. O. (1981). Grade inflation and the validity of the Scholastic Aptitude Test: American Educational Research Journal Vol 18(2) Sum 1981, 143-156.
 
*Belz, H. F., & Geary, D. C. (1984). Father's occupation and social background: Relation to SAT scores: American Educational Research Journal Vol 21(2) Sum 1984, 473-478.
 
*Benbow, C. P. (1992). Academic achievement in mathematics and science of students between ages 13 and 23: Are there differences among students in the top one percent of mathematical ability? : Journal of Educational Psychology Vol 84(1) Mar 1992, 51-61.
 
*Benbow, C. P., & Stanley, J. (1987). Yes: Sex differences in mathematical reasoning ability: More facts. New Haven, CT: Yale University Press.
 
*Benbow, C. P., & Stanley, J. C. (1983). Differential course-taking hypothesis revisited: American Educational Research Journal Vol 20(4) Win 1983, 469-473.
 
*Benbow, C. P., & Wolins, L. (1996). The utility of out-of-level testing for gifted seventh and eighth graders using the SAT-M: An examination of item bias. Baltimore, MD: Johns Hopkins University Press.
 
*Bennett, R. E., Rock, D. A., & Chan, K. L. (1987). SAT Verbal-Math discrepancies: Accurate indicators of college learning disability? : Journal of Learning Disabilities Vol 20(3) Mar 1987, 189-192.
 
*Bennett, R. E., Rock, D. A., & Kaplan, B. A. (1987). SAT differential item performance for nine handicapped groups: Journal of Educational Measurement Vol 24(1) Spr 1987, 41-55.
 
*Bennett, R. E., Rock, D. A., & Kaplan, B. A. (1988). Level, reliability, and speededness of SAT scores for nine handicapped groups: Special Services in the Schools Vol 4(3-4) 1988, 37-54.
 
*Bennett, R. E., Rock, D. A., & Novatkoski, I. (1989). Differential item functioning on the SAT-M braille edition: Journal of Educational Measurement Vol 26(1) Spr 1989, 67-79.
 
*Bickel, R., & Chang, M. (1985). Public schools, private schools, and high school achievement: A review and response with College Board Achievement Test data: The High School Journal Vol 69(2) Dec-Jan 1985-1986, 91-106.
 
*Boldt, K. E. (2000). Predicting academic performance of high-risk college students using Scholastic Aptitude Test scores and noncognitive variables. Dissertation Abstracts International Section A: Humanities and Social Sciences.
 
*Bolinger, R. W. (1992). The effect of socioeconomic levels and similar instruction on Scholastic Aptitude Test scores of Asian, Black, Hispanic, and White students: Dissertation Abstracts International.
 
*Bond, L. (1989). The effects of special preparation on measures of scholastic ability. New York, NY, England: Macmillan Publishing Co, Inc; American Council on Education.
 
*Bond, L. (1990). Analyzing the Predictive Value of the SAT: PsycCRITIQUES Vol 35 (4), Apr, 1990.
 
*Bond, L. (1992). Clarification of sources of views on SAT: PsycCRITIQUES Vol 37 (6), Jun, 1992.
 
*Branberg, K., Henriksson, W., Nyquist, H., & Wedman, I. (1990). The influence of sex, education and age on test scores on the Swedish Scholastic Aptitude Test: Scandinavian Journal of Educational Research Vol 34(3) 1990, 189-203.
 
*Brazziel, W. F. (1988). Improving SAT scores: Pros, cons, methods: Journal of Negro Education Vol 57(1) Win 1988, 81-93.
 
*Breland, H. M., & Duran, R. P. (1985). Assessing English composition skills in Spanish-speaking populations: Educational and Psychological Measurement Vol 45(2) Sum 1985, 309-318.
 
*Bridgeman, B. (1982). Comparative validity of the College Board Scholastic Aptitude Test--Mathematics and the Descriptive Tests of Mathematics Skills for predicting performance in college mathematics courses: Educational and Psychological Measurement Vol 42(1) Spr 1982, 361-366.
 
*Bridgeman, B., Harvey, A., & Braswell, J. (1995). Effects of calculator use on scores on a test of mathematical reasoning: Journal of Educational Measurement Vol 32(4) Win 1995, 323-340.
 
*Bridgeman, B., Trapani, C., & Curley, E. (2004). Impact of Fewer Questions per Section on SAT I Scores: Journal of Educational Measurement Vol 41(4) Win 2004, 291-310.
 
*Bridgeman, B., & Wendler, C. (1991). Gender differences in predictors of college mathematics performance and in college mathematics course grades: Journal of Educational Psychology Vol 83(2) Jun 1991, 275-284.
 
*Briggs, D. C. (2005). Meta-Analysis: A Case Study: Evaluation Review Vol 29(2) Apr 2005, 87-127.
 
*Britton, B. K., & Tesser, A. (1991). Effects of time-management practices on college grades: Journal of Educational Psychology Vol 83(3) Sep 1991, 405-410.
 
*Brody, L. E. (1985). The effects of an intensive summer program on the SAT scores of gifted seventh graders: Dissertation Abstracts International.
 
*Brody, L. E., & Benbow, C. P. (1990). Effects of high school coursework and time on SAT scores: Journal of Educational Psychology Vol 82(4) Dec 1990, 866-875.
 
*Brounstein, P. J., & Holahan, W. (1987). Patterns of change in Scholastic Aptitude Test performance among academically talented adolescents: Roeper Review Vol 10(2) Dec 1987, 110-116.
 
*Bullock-McNeill, J. (1993). Variables related to grade performance of African American freshmen at a predominately White university: Dissertation Abstracts International.
 
*Burke, K. B. (1987). A model reading course and its effects on the verbal scores of eleventh and twelfth grade students on the Nelson Denny Test, the Preliminary Scholastic Aptitude Test, and the Scholastic Aptitude Test: Dissertation Abstracts International.
 
*Burns, R. W. (1976). Minorities, instructional objectives and the SAT: Educational Technology Vol 16(6) Jun 1976, 39-41.
 
*Burton, E., & Burton, N. W. (1993). The effect of item screening on test scores and test characteristics. Hillsdale, NJ, England: Lawrence Erlbaum Associates, Inc.
 
*Burton, L. A., Henninger, D., & Hafetz, J. (2005). Gender Differences in Relations of Mental Rotation, Verbal Fluency, and SAT Scores to Finger Length Ratios as Hormonal Indexes: Developmental Neuropsychology Vol 28(1) 2005, 493-505.
 
*Burton, N. (1996). Have changes in the SAT affected women's mathematics performance? : Educational Measurement: Issues and Practice Vol 15(4) Win 1996, 5-9.
 
*Butler, R. P., & McCauley, C. (1987). Extraordinary stability and ordinary predictability of academic success at the United States Military Academy: Journal of Educational Psychology Vol 79(1) Mar 1987, 83-86.
 
*Byrnes, J. P., & Takahira, S. (1993). Explaining gender differences on SAT-math items: Developmental Psychology Vol 29(5) Sep 1993, 805-810.
 
*Byrnes, J. P., & Takahira, S. (1994). Why some students perform well and others perform poorly on SAT math items: Contemporary Educational Psychology Vol 19(1) Jan 1994, 63-78.
 
*Cablas, A. (1991). The Scholastic Aptitude Test and ethnic minorities: A predictive and differential validation study at the University of Hawai'i at Manoa: Dissertation Abstracts International.
 
*Caldwell, E. (1973). Analysis of an innovation (CLEP): Proceedings of the Annual Convention of the American Psychological Association 1973, 7-8.
 
*Cantwell, Z. M. (1966). Relationships between scores on the standard Progressive Matrices (1938) and on the D.48 Test of Non-Verbal Intelligence and 3 measures of academic achievement: Journal of Experimental Education 34(4) 1966, 28-31.
 
*Carey, T. J. (1982). The relationship of verbal development of parents and their offspring, and the effects of home and classroom instruction on the SAT-Verbal: Dissertation Abstracts International.
 
*Carter, T. M. (1934). The effect of college fraternities on scholarship: Journal of Applied Psychology Vol 18(3) Jun 1934, 393-400.
 
*Casey, M. B., Nuttall, R., Pezaris, E., & Benbow, C. P. (1995). The influence of spatial ability on gender differences in mathematics college entrance test scores across diverse samples: Developmental Psychology Vol 31(4) Jul 1995, 697-705.
 
*Casey, M. B., Nuttall, R. L., & Pezaris, E. (1997). Mediators of gender differences in mathematics college entrance test scores: A comparison of spatial skills with internalized beliefs and anxieties: Developmental Psychology Vol 33(4) Jul 1997, 669-680.
 
*Casillas Scott, C. (1976). Longer-term predictive validity of college admission tests for Anglo, Black, and Mexican American students: Dissertation Abstracts International.
 
*Centra, J. A. (1986). Handicapped student performance on the Scholastic Aptitude Test: Journal of Learning Disabilities Vol 19(6) Jun-Jul 1986, 324-327.
 
*Chenoweth, G. D. (1987). The Preliminary Scholastic Aptitude Test: An investigation of verbal performance among students attending a large metropolitan high school: Dissertation Abstracts International.
 
*Chiachiere, F. J. (1994). The relationship of attitude to foreign language study, gender, grades and length of foreign language study to scores on the verbal scholastic aptitude test. Dissertation Abstracts International Section A: Humanities and Social Sciences.
 
*Chissom, B. S., & Lanier, D. (1975). Prediction of first quarter freshman GPA using SAT scores and high school grades: Educational and Psychological Measurement Vol 35(2) Sum 1975, 461-463.
 
*Christopher, C. C. (1990). Predictors of success on the Scholastic Aptitude Test: Dissertation Abstracts International.
 
*Clawar, H. J. (1968). A short highly-efficient prediction of College Entrance Examination Board Scholastic Aptitude Test performance: Educational Records Bulletin No 94 1968, 42-44.
 
*Claytor, M. M. (1991). The development, implementation, and initial evaluation of the South Carolina SAT-Mathematics Improvement Project: Dissertation Abstracts International.
 
*Cliffordson, C. (2004). Effects of Practice and Intellectual Growth on Performance on the Swedish Scholastic Aptitude Test (SweSAT): European Journal of Psychological Assessment Vol 20(3) 2004, 192-204.
 
*Cohen, G. L., & Sherman, D. K. (2005). Stereotype Threat and the Social and Scientific Contexts of the Race Achievement Gap: American Psychologist Vol 60(3) Apr 2005, 270-271.
 
*Collins, W. (1982). Some correlates of achievement among students in a supplemental instruction program: Journal of Learning Skills Vol 2(1) Fal 1982, 19-28.
 
*Cook, L. L., Dorans, N. J., & Eignor, D. R. (1988). An assessment of the dimensionality of three SAT-Verbal test editions: Journal of Educational Statistics Vol 13(1) Spr 1988, 19-43.
 
*Cooper, T. C. (1987). Foreign language study and SAT-verbal scores: Modern Language Journal Vol 71(4) Win 1987, 381-387.
 
*Corley, E. R., Goodjoin, R., & York, S. (1991). Differences in grades and SAT scores among minority college students from urban and rural environments: The High School Journal Vol 74(3) Feb-Mar 1991, 173-177.
 
*Crouse, J., & Trusheim, D. (1988). The case against the SAT. Chicago, IL: University of Chicago Press.
 
*Crouse, J., & Trusheim, D. (1991). Bond, ETS, and the College Board are Wrong: PsycCRITIQUES Vol 36 (1), Jan, 1991.
 
*Crouse, J., & Trusheim, D. (1991). How colleges can correctly determine selection benefits from the SAT: Harvard Educational Review Vol 61(2) May 1991, 125-147.
 
*Cullen, M. J., Hardison, C. M., & Sackett, P. R. (2004). Using SAT-Grade and Ability-Job Performance Relationships to Test Predictions Derived From Stereotype Threat Theory: Journal of Applied Psychology Vol 89(2) Apr 2004, 220-230.
 
*Curran, R. G. (1988). The effectiveness of computerized coaching for the Preliminary Scholastic Aptitude Test (PSAT/NMSQT) and the Scholastic Aptitude Test (SAT): Dissertation Abstracts International.
 
*Cuthbert, L. C. (1981). An investigation of the relationships of certain student and curriculum variables in a selected comprehensive high school to scholastic aptitude test scores: Dissertation Abstracts International.
 
*Cyr, J. (2007). Emotional intelligence as a predictor of performance in college courses. Dissertation Abstracts International: Section B: The Sciences and Engineering.
 
*Dalton, S. (1976). A decline in the predictive validity of the SAT and high school achievement: Educational and Psychological Measurement Vol 36(2) Sum 1976, 445-448.
 
*Daneman, M., & Hannon, B. (2001). Using working memory theory to investigate the construct validity of multiple-choice reading comprehension tests such as the SAT: Journal of Experimental Psychology: General Vol 130(2) Jun 2001, 208-223.
 
*de Beer, M., & Visser, D. (1998). Comparability of the paper-and-pencil and computerised adaptive versions of the General Scholastic Aptitude Test (GSAT) Senior: South African Journal of Psychology Vol 28(1) Mar 1998, 21-27.
 
*Decker, W. D. (1977). An investigation of the procedures used to assign students to remedial oral communication instruction: Dissertation Abstracts International.
 
*DerSimonian, R., & Laird, N. M. (1983). Evaluating the effect of coaching on SAT scores: A meta-analysis: Harvard Educational Review Vol 53(1) Feb 1983, 1-15.
 
*Dinnan, J. A., Ulmer, R. C., & Figa, L. (1976). Variations between sender and receiver code processing in testing: College Student Journal Vol 10(4) Win 1976, 324-328.
 
*Diones, R. E. (1995). An integration of cognitive theory and psychometrics: Analogical reasoning. Dissertation Abstracts International Section A: Humanities and Social Sciences.
 
*Doby, W. C. (1975). The effects of selected school characteristics on the validity of high school grades, Scholastic Aptitude and Achievement Test scores as predictors of academic success at UCLA: Dissertation Abstracts International.
 
*Doctor, T. (2005). Does video-based and live attribution training improve college freshman performance on academic-based tasks? Dissertation Abstracts International Section A: Humanities and Social Sciences.
 
*Doebler, L. K., & Foreman, S. T. (1979). National Educational Developmental Tests as a predictor of College Entrance Examination Board Scholastic Aptitude Test scores: Educational and Psychological Measurement Vol 39(4) Win 1979, 909-911.
 
*Donlon, T. F., & Breland, N. (1983). The old days test: Scholastic Aptitude Test items from the 1920s revisited: Measurement & Evaluation in Guidance Vol 15(4) Jan 1983, 274-282.
 
*Dorans, N. J. (2002). Recentering and realigning the SAT score distributions: How and why: Journal of Educational Measurement Vol 39(1) Spr 2002, 59-84.
 
*Dorans, N. J. (2004). Freedle's Table 2: Fact or Fiction? : Harvard Educational Review Vol 74(1) Spr 2004, 62-72.
 
*Dorans, N. J., & Kulick, E. (1986). Demonstrating the utility of the standardization approach to assessing unexpected differential item performance on the Scholastic Aptitude Test: Journal of Educational Measurement Vol 23(4) Win 1986, 355-368.
 
*Dorans, N. J., & Livingston, S. A. (1987). Male-female differences in SAT-Verbal ability among students of high SAT-Mathematical ability: Journal of Educational Measurement Vol 24(1) Spr 1987, 65-71.
 
*Dorans, N. J., Schmitt, A. P., & Bleistein, C. A. (1992). The standardization approach to assessing comprehensive differential item functioning: Journal of Educational Measurement Vol 29(4) Win 1992, 309-319.
 
*Dorman, N. H. (1990). The effects of a problem-solving course on secondary school students' analytical skills, reasoning ability and scholastic aptitude: Dissertation Abstracts International.
 
*Douglas, B. E. (1987). An analysis of the academic composites of the Armed Services Vocational Aptitude Battery (ASVAB) and the math and verbal sections of the Preliminary Scholastic Aptitude Test (PSAT), the Scholastic Aptitude Test (SAT), and the American College Test (ACT): A correlation study: Dissertation Abstracts International.
 
*Douglass, P. B. (1991). A study of factors associated with attrition of black students at a historically black four-year college: 1985-1989: Dissertation Abstracts International.
 
*Drakulich, E. M. (1993). An analysis of the involvement of ten high schools in Scholastic Aptitude Testing student preparation: Dissertation Abstracts International.
 
*Dreher, M. J., & Singer, H. (1985). Predicting college success: Learning from text, background knowledge, attitude toward school, and the SAT as predictors: National Reading Conference Yearbook No 34 1985, 362-368.
 
*Dreyden, J. I., & Gallagher, S. A. (1989). The effects of time and direction changes on the SAT performance of academically talented adolescents: Journal for the Education of the Gifted Vol 12(3) Spr 1989, 187-204.
 
*Duren, T. E. (2001). Examining the gap: The relationship between levels of acculturation and Scholastic Achievement Test (SAT) scores among African American college students. Dissertation Abstracts International Section A: Humanities and Social Sciences.
 
*Ebmeier, H., & Schmulbach, S. (1989). An examination of the selection practices used in the Talent Search Program: Gifted Child Quarterly Vol 33(4) Fal 1989, 134-141.
 
*Edmonds, R. R., Krumboltz, J. D., & Mehrens, W. A. (1982). Comments on "Should we relabel the SAT... or replace it?" New Directions for Testing & Measurement No 13 Mar 1982, 51-57.
 
*Elliott, R., & Strenta, A. C. (1988). Effects of improving the reliability of the GPA on prediction generally and on comparative predictions for gender and race particularly: Journal of Educational Measurement Vol 25(4) Win 1988, 333-347.
 
*Ervin, L., Hogrebe, M. C., Dwinell, P. L., & Newman, I. (1984). Comparison of the prediction of academic performance for college developmental students and regularly admitted students: Psychological Reports Vol 54(1) Feb 1984, 319-327.
 
*Escoe, B. D. (1985). The relationship between Scholastic Aptitude Test and a composite of student and curriculum variables in a school district: Dissertation Abstracts International.
 
*Espenshade, T. J., Hale, L. E., & Chung, C. Y. (2005). The frog pond revisited: High school academic context, class rank, and elite college admission: Sociology of Education Vol 78(4) Oct 2005, 269-293.
 
*Everson, H. T., & Millsap, R. E. (2004). Beyond individual differences: Exploring school effects on SAT scores: Educational Psychologist Vol 39(3) Sum 2004, 157-172.
 
*Everson, H. T., & Millsap, R. E. (2004). Erratum: Educational Psychologist Vol 39(4) Fal 2004, 261.
 
*Ewing, M. (2007). Using multilevel modeling to predict performance on advanced placement examinations. Dissertation Abstracts International: Section B: The Sciences and Engineering.
 
*Faigel, H. C. (1991). The effect of beta blockade on stress-induced cognitive dysfunction in adolescents: Clinical Pediatrics Vol 30(7) Jul 1991, 441-445.
 
*Famili, L. (1986). Prediction of the academic performance of students enrolled in a general studies program: Dissertation Abstracts International.
 
*Fanelli, R. M. (1991). Cognitive assessment of high school student's test anxiety regarding SAT performance: Dissertation Abstracts International.
 
*Fellbaum, C. (1987). A preliminary analysis of cognitive-linguistic aspects of sentence completion tasks. Westport, CT: Ablex Publishing.
 
*Flake, W. L., & Goldman, B. A. (1991). Comparison of grade point averages and SAT scores between reporting and nonreporting men and women and freshmen and sophomores: Perceptual and Motor Skills Vol 72(1) Feb 1991, 177-178.
 
*Fleming, J. (2000). Affirmative action and standardized test scores: Journal of Negro Education Vol 69(1-2) Win-Spr 2000, 27-37.
 
*Fleming, J. (2002). Who will succeed in college? When the SAT predicts Black students' performance: Review of Higher Education: Journal of the Association for the Study of Higher Education Vol 25(3) Spr 2002, 281-296.
 
*Flynn, J. R. (1988). The decline and rise of scholastic aptitude scores: American Psychologist Vol 43(6) Jun 1988, 479-480.
 
*Foster, T. R. (1998). A comparative study of the study skills, self-concept, academic achievement and adjustment to college of freshmen intercollegiate athletes and non-athletes. Dissertation Abstracts International Section A: Humanities and Social Sciences.
 
*Franco, J. N. (1983). Aptitude Tests: Will the Debate Ever End? : American Psychologist Vol 38(3) Mar 1983, 356.
 
*Freedle, R., & Kostin, I. (1994). Can multiple-choice reading tests be construct-valid? A reply to Katz, Lautenschlager, Blackburn, and Harris: Psychological Science Vol 5(2) Mar 1994, 107-110.
 
*Freedle, R., & Kostin, I. (1997). Predicting Black and White differential item functioning in verbal analogy performance: Intelligence Vol 24(2) Mar-Apr 1997, 417-444.
 
*Freedle, R. O. (2003). Correcting the SAT's ethnic and social-class bias: A method for reestimating SAT scores: Harvard Educational Review Vol 73(1) Spr 2003, 1-43.
 
*Freedle, R. O. (2004). The Truth and the Truthful Sages That Spin It: A Review of Dorans: Harvard Educational Review Vol 74(1) Spr 2004, 73-79.
 
*French, J. W. (1958). Validation of new item types against four-year academic criteria: Journal of Educational Psychology Vol 49(2) Apr 1958, 67-76.
 
*Frucot, V. G., & Cook, G. L. (1994). Further research on the accuracy of students' self-reported grade point averages, SAT scores, and course grades: Perceptual and Motor Skills Vol 79(2) Oct 1994, 743-746.
 
*Fuertes, J. N., & Sedlacek, W. E. (1994). Predicting the academic success of Hispanic college students using SAT scores: College Student Journal Vol 28(3) Sep 1994, 350-352.
 
*Fuertes, J. N., Sedlacek, W. E., & Liu, W. M. (1994). Using the SAT and noncognitive variables to predict the grades and retention of Asian American university students: Measurement and Evaluation in Counseling and Development Vol 27(2) Jul 1994, 74-84.
 
*Fuller, W. E., & Wehman, P. (2003). College entrance exams for students with disabilities: Accommodations and testing guidelines: Journal of Vocational Rehabilitation Vol 18(3) 2003, 191-197.
 
*Furr, L. A. (1998). Fathers' characteristics and their children's scores on college entrance exams: A comparison of intact and divorced families: Adolescence Vol 33(131) Fal 1998, 533-542.
 
*Gaire, L. (1991). The development of TestSkills: A test familiarization kit on the PSAT/NMSQT for Hispanic students. Albany, NY: State University of New York Press.
 
*Gallagher, A. M. (1991). Sex differences in problem-solving strategies used by high scoring examinees on the SAT-M: Dissertation Abstracts International.
 
*Gallagher, A. M., & De Lisi, R. (1994). Gender differences in Scholastic Aptitude Test: Mathematics problem solving among high-ability students: Journal of Educational Psychology Vol 86(2) Jun 1994, 204-211.
 
*Gallagher, S. A. (1989). Predictors of SAT Mathematics scores of gifted male and gifted female adolescents: Psychology of Women Quarterly Vol 13(2) Jun 1989, 191-203.
 
*Gandara, P., & Lopez, E. (1998). Latino students and college entrance exams: How much do they really matter? : Hispanic Journal of Behavioral Sciences Vol 20(1) Feb 1998, 17-38.
 
*Gayles, J. (2006). Free Throws, All Stars and the SAT: An Analogical Exercise: Radical Pedagogy Vol 8(1) Spr 2006, No Pagination Specified.
 
*Geiser, S., & Studley, R. (2002). UC and the SAT: Predictive Validity and Differential Impact of the SAT I and SAT II at the University of California: Educational Assessment Vol 8(1) Sep 2002, 1-26.
 
*Gibbins, N., & Bickel, R. (1991). Comparing public and private high schools using three SAT data sets: Urban Review Vol 23(2) Jun 1991, 101-115.
 
*Gierl, M. J., & ElAtia, S. (2007). Review of Adapting Educational and Psychological Tests for Cross-Cultural Assessment: Applied Psychological Measurement Vol 31(1) Jan 2007, 74-78.
 
*Gillespie, M. A. (2006). Critical thinking about values: The effects of an instructional program, reasons for attending college, and general life goals on the application of critical thinking to values expressed in an essay prompt. Dissertation Abstracts International Section A: Humanities and Social Sciences.
 
*Goldman, B. A., Flake, W. L., & Matheson, M. B. (1990). Accuracy of college students' perceptions of their SAT scores, high school and college grade point averages relative to their ability: Perceptual and Motor Skills Vol 70(2) Apr 1990, 514.
 
*Goldman, R. D., & Hewitt, B. N. (1975). An investigation of test bias for Mexican-American college students: Journal of Educational Measurement Vol 12(3) Fal 1975, 187-196.
 
*Goldman, R. D., & Hewitt, B. N. (1976). Predicting the success of Black, Chicano, Oriental and White college students: Journal of Educational Measurement Vol 13(2) Sum 1976, 107-117.
 
*Goldman, R. D., & Widawski, M. H. (1976). An analysis of types of errors in the selection of minority college students: Journal of Educational Measurement Vol 13(3) Fal 1976, 185-200.
 
*Goldstein, D., & Stocking, V. B. (1994). TIP studies of gender differences in talented adolescents. Ashland, OH: Hogrefe & Huber Publishers.
 
*Golu, M. (1969). The diagnostic and prognostic value of the university entrance examination: Revista de Pedagogie Vol 18(11) Nov 1969, 85-88.
 
*Gottfredson, L. S., & Crouse, J. (1986). Validity versus utility of mental tests: Example of the SAT: Journal of Vocational Behavior Vol 29(3) Dec 1986, 363-378.
 
*Gougeon, D. (1984). CEEB SAT mathematics scores and their correlation with college performance in math: Educational Research Quarterly Vol 9(2) 1984-1985, 8-11.
 
*Gramzow, R. H., & Willard, G. (2006). Exaggerating Current and Past Performance: Motivated Self-Enhancement Versus Reconstructive Memory: Personality and Social Psychology Bulletin Vol 32(8) Aug 2006, 1114-1125.
 
*Grant-Henry, S. B. (1990). A study of college and university students in reaction time, movement time and intelligence: Dissertation Abstracts International.
 
*Green, B. F., Crone, C. R., & Folk, V. G. (1989). A method for studying differential distractor functioning: Journal of Educational Measurement Vol 26(2) Sum 1989, 147-160.
 
*Grissmer, D. W. (2000). The continuing use and misuse of SAT scores: Psychology, Public Policy, and Law Vol 6(1) Mar 2000, 223-232.
 
*Gussett, J. C. (1974). College Entrance Examination Board Scholastic Aptitude Test scores as a predictor for college freshman mathematics grades: Educational and Psychological Measurement Vol 34(4) Win 1974, 953-955.
 
*Gussett, J. C. (1980). Achievement Test scores and Scholastic Aptitude Test scores as predictors of College Level Examination Program scores: Educational and Psychological Measurement Vol 40(1) Spr 1980, 213-218.
 
*Gustafsson, J.-E., Wedman, I., & Westerlund, A. (1992). The dimensionality of the Swedish Scholastic Aptitude Test: Scandinavian Journal of Educational Research Vol 36(1) 1992, 21-39.
 
*Guthrie, L. S. (1999). African American college students: The relationship between living arrangements and student success and satisfaction. (campus involvement, persistence). Dissertation Abstracts International Section A: Humanities and Social Sciences.
 
*Haier, R. J., & Benbow, C. P. (1995). Sex differences and lateralization in temporal lobe glucose metabolism during mathematical reasoning: Developmental Neuropsychology Vol 11(4) 1995, 405-414.
 
*Hall, C. W. (2001). A measure of executive processing skills in college students: College Student Journal Vol 35(3) Sep 2001, 442-450.
 
*Hall, C. W., Bolen, L. M., & Gupton, R. H. (1995). Predictive validity of the Study Process Questionnaire for undergraduate students: College Student Journal Vol 29(2) Jun 1995, 234-239.
 
*Halpern, D. F. (1989). The disappearance of cognitive gender differences: What you see depends on where you look: American Psychologist Vol 44(8) Aug 1989, 1156-1158.
 
*Halpin, G., Halpin, G., & Schaer, B. B. (1981). Relative effectiveness of the California Achievement Tests in comparison with the ACT Assessment, College Board Scholastic Aptitude Test, and high school grade point average in predicting college grade point average: Educational and Psychological Measurement Vol 41(3) Fal 1981, 821-827.
 
*Harris, A. M., & Carlton, S. T. (1993). Patterns of gender differences on mathematics items on the Scholastic Aptitude Test: Applied Measurement in Education Vol 6(2) 1993, 137-151.
 
*Harris, W. U. (1976). The SAT score decline: Facts, figures and emotions: Educational Technology Vol 16(6) Jun 1976, 15-20.
 
*Hashway, R. M., Clark, J., Roberts, G. H., & Schnuth, M. L. (1990). An econometric model of the Scholastic Aptitude Test performance of state educational systems: Educational Research Quarterly Vol 14(4) 1990, 27-31.
 
*Hayes, D. P., Wolfer, L. T., & Wolfe, M. F. (1996). Schoolbook simplification and its relation to the decline in SAT-verbal scores: American Educational Research Journal Vol 33(2) Sum 1996, 489-508.
 
*Helms, J. E. (2005). Stereotype Threat Might Explain the Black-White Test-Score Difference: American Psychologist Vol 60(3) Apr 2005, 269-270.
 
*Henriksson, W. (1994). Meta-analysis as a method for integrating results of studies about effects of practice and coaching on test scores: British Journal of Educational Psychology Vol 64(2) Jun 1994, 319-329.
 
*Henriksson, W., & Wolming, S. (1998). Academic performance in four study programmes: A comparison of students admitted on the basis of GPA and SweSAT scores, with and without credits for work experience: Scandinavian Journal of Educational Research Vol 42(2) Jun 1998, 135-150.
 
*Hewer, V. H. (1965). Are tests fair to college students from homes with low socio-economic status? : Personnel & Guidance Journal 43(8) 1965, 764-769.
 
*Higham, P. A. (2007). No Special K! A Signal Detection Framework for the Strategic Regulation of Memory Accuracy: Journal of Experimental Psychology: General Vol 136(1) Feb 2007, 1-22.
 
*Higham, S. L. (1985). Cognitive processes in verbally and mathematically talented adolescents: Sex differences, cerebral asymmetry, and the SAT: Dissertation Abstracts International.
 
*Hogrebe, M. C., Ervin, L., Dwinell, P. L., & Newman, I. (1983). The moderating effects of gender and race in predicting the academic performance of college developmental students: Educational and Psychological Measurement Vol 43(2) Sum 1983, 523-530.
 
*Holland, J. L. (1959). The prediction of college grades from the California Psychological Inventory and the Scholastic Aptitude Test: Journal of Educational Psychology Vol 50(4) Aug 1959, 135-142.
 
*Holmes, C. T., & Keffer, R. L. (1995). A computerized method to teach Latin and Greek root words: Effect on verbal SAT scores: Journal of Educational Research Vol 89(1) Sep-Oct 1995, 47-50.
 
*Hoover, H. D. (2003). Some Common Misconceptions about Tests and Testing: Educational Measurement: Issues and Practice Vol 22(1) Spr 2003, 5-14.
 
*Hopmeier, G. H. (1985). The effectiveness of computerized coaching for Scholastic Aptitude Test in individual and group modes: Dissertation Abstracts International.
 
*Hosseini, A. A. (1978). Preliminary report on the validity of the Scholastic Aptitude Test in Iran: Psychological Reports Vol 43(1) Aug 1978, 99-102.
 
*Houston, L. N. (1980). Predicting academic achievement among specially admitted Black female college students: Educational and Psychological Measurement Vol 40(4) Win 1980, 1189-1195.
 
*Houston, L. N. (1983). The comparative predictive validities of high school rank, the Ammons Quick Test, and two scholastic aptitude test measures for a sample of Black female college students: Educational and Psychological Measurement Vol 43(4) Win 1983, 1123-1126.
 
*Hughes, G. B. (1993). Effects of using total score versus subscore criteria on differential item functioning: An investigation of the SAT-verbal subscores: Dissertation Abstracts International.
 
*Humphries, J. T. (1991). Predictors of College-level Academic Skills Test scores in Florida state universities: Dissertation Abstracts International.
 
*Ibn Bari, M. (1995). Gifted students: Achievement and need for structure. A study of the Texas gifted student program. Dissertation Abstracts International Section A: Humanities and Social Sciences.
 
*Isaacs, T. (2001). Entry to University in the United States: The role of SATs and Advanced Placement in a competitive sector: Assessment in Education: Principles, Policy & Practice Vol 8(3) Nov 2001, 391-406.
 
*Jackson, D. N., & Rushton, J. P. (2006). Males have greater g: Sex differences in general mental ability from 100,000 17- to 18-year-olds on the Scholastic Assessment Test: Intelligence Vol 34(5) Sep-Oct 2006, 479-486.
 
*Jarosewich, T., & Stocking, V. B. (2003). Talent Search: Student and Parent Perceptions of Out-of-Level Testing: Journal of Secondary Gifted Education Vol 14(3) Spr 2003, 137-150.
 
*Jencks, C., & Crouse, J. (1982). Should we relabel the SAT... or replace it? : New Directions for Testing & Measurement No 13 Mar 1982, 33-49.
 
*Jensen, V. K. (1991). "The effect of beta blockade on stress-induced cognitive dysfunction in adolescents": Editorial: Clinical Pediatrics Vol 30(7) Jul 1991, 446-448.
 
*Jing, G., & Yao-xian, G. (2005). Scholastic aptitude test for pupils in grades 4-6 II: Validity: Chinese Journal of Clinical Psychology Vol 13(3) Aug 2005, 271-273.
 
*Johnson, S. T., & Wallace, M. B. (1989). Characteristics of SAT quantitative items showing improvement after coaching among Black students from low-income families: An exploratory study: Journal of Educational Measurement Vol 26(2) Sum 1989, 133-145.
 
*Jones, L. V. (1984). White-Black achievement differences: The narrowing gap: American Psychologist Vol 39(11) Nov 1984, 1207-1213.
 
*Jones, P. L. (1997). The effects of academic team participation on SAT scores and self-esteem of high school students. Dissertation Abstracts International Section A: Humanities and Social Sciences.
 
*Joseph, M. W. (2004). A detailed and comprehensive operationalization of SAT coaching and an analysis of coaching efficacy. Dissertation Abstracts International Section A: Humanities and Social Sciences.
 
*Kanarick, R. (1992). High school characteristics related to performance on the Scholastic Aptitude Test, high school proficiency test and other educational outcomes: Dissertation Abstracts International.
 
*Kaplan, R. M. (1982). Nader's raid on the testing industry: Is it in the best interest of the consumer? : American Psychologist Vol 37(1) Jan 1982, 15-23.
 
*Katz, I. R., Bennett, R. E., & Berger, A. E. (2000). Effects of response format on difficulty of SAT-mathematics items: It's not the strategy: Journal of Educational Measurement Vol 37(1) Spr 2000, 39-57.
 
*Katz, S. (1991). Whose Review--Lloyd Bond's or ETS/College Board's? : PsycCRITIQUES Vol 36 (1), Jan, 1991.
 
*Katz, S., Blackburn, A. B., & Lautenschlager, G. J. (1991). Answering reading comprehension items without passages on the SAT when items are quasi-randomized: Educational and Psychological Measurement Vol 51(3) Fal 1991, 747-754.
 
*Katz, S., Brown, J. M., Smith, F. G., & Greene, H. (1998). Using the computer to examine behavior on the SAT reading comprehension task: Psychology: A Journal of Human Behavior Vol 35(2) 1998, 45-55.
 
*Katz, S., Johnson, C., & Pohl, E. (1999). Answering reading comprehension items without the passages on the SAT--I: Psychological Reports Vol 85(3, Pt 2 [Spec Issue]) Dec 1999, 1157-1163.
 
*Katz, S., & Lautenschlager, G. J. (1994). Answering reading comprehension items without passages on the SAT--I, the ACT, and the GRE: Educational Assessment Vol 2(4) Fal 1994, 295-308.
 
*Katz, S., & Lautenschlager, G. J. (1995). The SAT reading task in question: Reply to Freedle and Kostin: Psychological Science Vol 6(2) Mar 1995, 126-127.
 
*Katz, S., & Lautenschlager, G. J. (2001). The contribution of passage and no-passage factors to item performance on the SAT reading task: Educational Assessment Vol 7(2) May 2001, 165-176.
 
*Katz, S., Lautenschlager, G. J., Blackburn, A. B., & Harris, F. H. (1990). Answering reading comprehension items without passages on the SAT: Psychological Science Vol 1(2) Mar 1990, 122-127.
 
*Katz, S., Marsh, R. L., Johnson, C., & Pohl, E. (2001). Answering quasi-randomized reading items without the passages on the SAT--I: Journal of Educational Psychology Vol 93(4) Dec 2001, 772-775.
 
*Kaufmann, J. D., & Dempster, D. (1964). Prediction of SAT scores: Personnel & Guidance Journal 42(10) 1964, 1026-1027.
 
*Kean, D. K., & Glynn, S. M. (1987). Writing persuasive documents: Audience considerations: Journal of Instructional Psychology Vol 14(1) Mar 1987, 36-40.
 
*Keating, D. P., & Stanley, J. C. (1972). Extreme measures for the exceptionally gifted in mathematics and science: Educational Research Vol 1(9) Sep 1972, 3-7.
 
*Keffer, R. L. (1992). The impact of a computerized method of teaching Latin and Greek root words on the verbal Scholastic Aptitude Test score: Dissertation Abstracts International.
 
*Kelemen, W. L., Winningham, R. G., & Weaver, C. A., III. (2007). Repeated testing sessions and scholastic aptitude in college students' metacognitive accuracy: European Journal of Cognitive Psychology Vol 19(4-5) Jul 2007, 689-717.
 
*Kelly, F. S. (1993). A comparison of two distinctive preparations for quantitative items in the Scholastic Aptitude Test: Dissertation Abstracts International.
 
*Kelly, T. F. (1978). Differential verbal quantitative achievement and self-attribution in college bound high school students: Dissertation Abstracts International.
 
*Knopp, S. L. (1983). Sex-role self-concept and cognitive functioning: The relationships among androgyny, attribution, and math and verbal aptitudes: Dissertation Abstracts International.
 
*Koivula, N., Hassmen, P., & Hunt, D. P. (2001). Performance on the Swedish Scholastic Aptitude Test: Effects of self-assessment and gender: Sex Roles Vol 44(11-12) Jun 2001, 629-645.
 
*Kubey, R. W. (1979). Radiation and decline of scholastic aptitude scores: Psychological Reports Vol 45(3) Dec 1979, 862.
 
*Lang, E. L., & Reifman, A. (1988). Reestimating Zajonc's confluence effects on the SAT: American Psychologist Vol 43(6) Jun 1988, 477-478.
 
*Larson, J. R., & Scontrino, M. P. (1976). The consistency of high school grade point average and of the verbal and mathematical portions of the Scholastic Aptitude Test of the College Entrance Examination Board, as predictors of college performance: An eight year study: Educational and Psychological Measurement Vol 36(2) Sum 1976, 439-443.
 
*Laschewer, A. D. (1986). The effect of computer assisted instruction as a coaching technique for the Scholastic Aptitude Test preparation of high school juniors: Dissertation Abstracts International.
 
*Lavender, J. S. (2006). The relationship between locus of control orientation and academic success in college. Dissertation Abstracts International Section A: Humanities and Social Sciences.
 
*Lawlor, S., Richman, S., & Richman, C. L. (1997). The validity of using the SAT as a criterion for black and white students' admission to college: College Student Journal Vol 31(4) Dec 1997, 507-515.
 
*Lee, M., & Chung, H. (1997). Development of Adolescent University Entrance Examination Stress Scale: Korean Journal of Developmental Psychology Vol 10(1) 1997, 144-154.
 
*Lenning, O. T. (1975). Predictive validity of the ACT tests at selective colleges: ACT Research Reports No 69 Aug 1975, 14.
 
*Lester, D. (1994). Scholastic aptitude and rates of personal violence in the USA: Perceptual and Motor Skills Vol 79(2) Oct 1994, 738.
 
*Leung, B. P. (1991). Teaching self-monitoring to Chinese-American and Anglo-American adolescents: Dissertation Abstracts International.
 
*Levine, M. V. (1982). Fundamental measurement of the difficulty of test items: Journal of Mathematical Psychology Vol 25(3) Jun 1982, 243-268.
 
*Levine, M. V., & Drasgow, F. (1983). The relation between incorrect option choice and estimated ability: Educational and Psychological Measurement Vol 43(3) Fal 1983, 675-685.
 
*Lewis, S. M. (2004). Using Precision Teaching to Prepare Students with Learning Differences for the SAT: Journal of Precision Teaching & Celeration Vol 20(1) Spr 2004, 28-29.
 
*Lighthouse, A. G. (2006). The relationship between SAT scores and grade point averages among post-secondary students with disabilities. Dissertation Abstracts International Section A: Humanities and Social Sciences.
 
*Lin, Y., & McKeachie, W. (1973). Student characteristics related to achievement in introductory psychology courses: British Journal of Educational Psychology Vol 43(1) Feb 1973, 70-76.
 
*Lindstrom, J. H., & Gregg, N. (2007). The role of extended time on the SAT® for students with learning disabilities and/or attention-deficit/hyperactivity disorder: Learning Disabilities Research & Practice Vol 22(2) May 2007, 85-95.
 
*Linn, R. L. (1986). Comments on the g factor in employment testing: Journal of Vocational Behavior Vol 29(3) Dec 1986, 438-444.
 
*Liu, J., Cahn, M. F., & Dorans, N. J. (2006). An Application of Score Equity Assessment: Invariance of Linkage of New SAT to Old SAT Across Gender Groups: Journal of Educational Measurement Vol 43(2) Sum 2006, 113-129.
 
*Liu, J., & Walker, M. E. (2007). Score linking issues related to test content changes. New York, NY: Springer Science + Business Media.
 
*Lollis, T. J., Einstein, G. O., & Brewer, C. L. (1987). Using undergraduate factors to predict psychology Graduate Record Examination scores: Teaching of Psychology Vol 14(4) Dec 1987, 202-206.
 
*Longstreth, L. E., Walsh, D. A., Alcorn, M. B., Szeszulski, P. A., & et al. (1986). Backward masking, IQ, SAT and reaction time: Interrelationships and theory: Personality and Individual Differences Vol 7(5) 1986, 643-651.
 
*Lubinski, D., & Benbow, C. P. (1995). "An Opportunity for Empiricism" and "An Opportunity for 'Accuracy'": Erratum: PsycCRITIQUES Vol 40 (12), Dec, 1995.
 
*Lubinski, D., Webb, R. M., Morelock, M. J., & Benbow, C. P. (2001). Top 1 in 10,000: A 10-year follow-up of the profoundly gifted: Journal of Applied Psychology Vol 86(4) Aug 2001, 718-729.
 
*Makitalo, A. (1996). Gender differences in performance on the DTM subtest in the Swedish Scholastic Aptitude Test as a function of item position and cognitive demands: Scandinavian Journal of Educational Research Vol 40(3) Sep 1996, 189-201.
 
*Malloch, D. C. (1981). Predicting student grade point average at a community college from scholastic aptitude tests and from measures representing three constructs in Vroom's expectancy theory model of motivation: Dissertation Abstracts International.
 
*Malloch, D. C., & Michael, W. B. (1981). Predicting student grade point average at a community college from Scholastic Aptitude Tests and from measures representing three constructs in Vroom's expectancy theory model of motivation: Educational and Psychological Measurement Vol 41(4) Win 1981, 1127-1135.
 
*Marco, G. L., & Crone, C. R. (1990). The relationship of trends in SAT content and statistical characteristics to SAT predictive validity. Princeton, NJ: Educational Testing Service.
 
*Markel, G., Bizer, L., & Wilhelm, R. M. (1985). The LD adolescent and the SAT: Academic Therapy Vol 20(4) Mar 1985, 397-409.
 
*Matyas, M. L. (1986). Prediction of attrition among male and female college biology majors using specific attitudinal, socio-cultural, and traditional predictive variables: Dissertation Abstracts International.
 
*Mau, W.-C., & Lynn, R. (2001). Gender differences on the Scholastic Aptitude Test, the American College Test and college grades: Educational Psychology Vol 21(2) Jun 2001, 133-136.
 
*Mauger, P. A., & Kolmodin, C. A. (1975). Long-term predictive validity of the Scholastic Aptitude Test: Journal of Educational Psychology Vol 67(6) Dec 1975, 847-851.
 
*Mayer, R. E., Stull, A. T., Campbell, J., Almeroth, K., Bimber, B., Chun, D., et al. (2007). Overestimation bias in self-reported SAT scores: Educational Psychology Review Vol 19(4) Dec 2007, 443-454.
 
*McCarthy, S. V. (1979). College women with differential linguistic-quantitative ability patterns: Performance on two visual serial-search tasks: Perceptual and Motor Skills Vol 49(3) Dec 1979, 791-794.
 
*McCornack, R. L. (1983). Bias in the validity of predicted college grades in four ethnic minority groups: Educational and Psychological Measurement Vol 43(2) Sum 1983, 517-522.
 
*McCornack, R. L., & McLeod, M. M. (1988). Gender bias in the prediction of college course performance: Journal of Educational Measurement Vol 25(4) Win 1988, 321-331.
 
*McDonald, R. T., & Gawkoski, R. S. (1979). Predictive value of SAT scores and high school achievement for success in a college honors program: Educational and Psychological Measurement Vol 39(2) Sum 1979, 411-414.
 
*McTeer, D. E., Jr. (2004). An evaluation of the South Carolina laptop program to improve SAT scores. Dissertation Abstracts International Section A: Humanities and Social Sciences.
 
*Messick, S., & Jungeblut, A. (1981). Time and method in coaching for the SAT: Psychological Bulletin Vol 89(2) Mar 1981, 191-216.
 
*Michael, N. (1987). Item bias in the verbal segment of the Scholastic Aptitude Test (SAT) for high school students in the United States Virgin Islands: Dissertation Abstracts International.
 
*Millsap, R. E. (1995). Measurement invariance, predictive invariance, and the duality paradox: Multivariate Behavioral Research Vol 30(4) 1995, 577-605.
 
*Minor, L. L., & Benbow, C. P. (1996). Construct validity of the SAT-M: A comparative study of high school students and gifted seventh graders. Baltimore, MD: Johns Hopkins University Press.
 
*Mollette, M. J. (2004). Longitudinal study of gender differences and trends in preliminary SAT and SAT performance: Effects of sample restriction and omission rate. Dissertation Abstracts International Section A: Humanities and Social Sciences.
 
*Morgan, R. (1990). Analyses of the predictive validity of the SAT and high school grades from 1976 to 1985. Princeton, NJ: Educational Testing Service.
 
*Mouw, J. T., & Khanna, R. K. (1993). Prediction of academic success: A review of the literature and some recommendations: College Student Journal Vol 27(3) Sep 1993, 328-336.
 
*Nankervis, B. (2007). Predicting sex differences in performance on the SAT-I quantitative section: How content and stereotype threat affect achievement. Dissertation Abstracts International Section A: Humanities and Social Sciences.
 
*Neal, K. S., Schaer, B. B., Ley, T. C., & Wright, J. P. (1990). Predicting achievement in a teacher preparatory course of reading methods from the ACT and Teale-Lewis reading attitude scores: Reading Psychology Vol 11(2) 1990, 131-139.
 
*Nevo, B., & Oren, C. (1986). Concurrent validity of the American Scholastic Aptitude Test (SAT) and the Israeli Inter-University Psychometric Entrance Test (IUPET): Educational and Psychological Measurement Vol 46(3) Fal 1986, 723-725.
 
*Nisbet, J., Ruble, V. E., & Schurr, K. T. (1982). Predictors of academic success with high risk college students: Journal of College Student Personnel Vol 23(3) May 1982, 227-235.
 
*No authorship indicated. (2002). Notice: Journal of Emotional and Behavioral Disorders Vol 10(4) Win 2002, 222.
 
*Noftle, E. E., & Robins, R. W. (2007). Personality predictors of academic outcomes: Big five correlates of GPA and SAT scores: Journal of Personality and Social Psychology Vol 93(1) Jul 2007, 116-130.
 
*Olszewski-Kubilius, P. M., Kulieke, M. J., Willis, G. B., & Krasney, N. S. (1989). An analysis of the validity of SAT entrance scores for accelerated classes: Journal for the Education of the Gifted Vol 13(1) Fal 1989, 37-54.
 
*Oshman, H. P. (1975). Some effects of father-absence upon the psychosocial development of male and female late adolescents: Theoretical and empirical considerations: Dissertation Abstracts International.
 
*Owen, D. (1986). The S.A.T. and social stratification: Journal of Education, Boston Vol 168(1) 1986, 81-92.
 
*Packer, J. (1989). How much extra time do visually impaired people need to take examinations: The case of the SAT: Journal of Visual Impairment & Blindness Vol 83(7) Sep 1989, 358-360.
 
*Paden, R. A. (1993). The relationship between teachers' predictions of students' scores on standardized tests and teacher-made tests: Dissertation Abstracts International.
 
*Padilla, A. M. (1993). The Scholastic Aptitude Test Debate: A New Chapter: PsycCRITIQUES Vol 38 (4), Apr, 1993.
 
*Page, E. B., & Feifs, H. (1985). SAT scores and American states: Seeking for useful meaning: Journal of Educational Measurement Vol 22(4) Win 1985, 305-312.
 
*Pallas, A. M., & Alexander, K. L. (1983). Sex differences in quantitative SAT performance: New evidence on the differential coursework hypothesis: American Educational Research Journal Vol 20(2) Sum 1983, 165-182.
 
*Park, G., Lubinski, D., & Benbow, C. P. (2007). Contrasting intellectual patterns predict creativity in the arts and sciences: Tracking intellectually precocious youth over 25 years: Psychological Science Vol 18(11) Nov 2007, 948-952.
 
*Paulhus, D., & Shaffer, D. R. (1981). Sex differences in the impact of number of older and number of younger siblings on scholastic aptitude: Social Psychology Quarterly Vol 44(4) Dec 1981, 363-368.
 
*Payne, D. A., & Evans, K. A. (1985). The relationship of laterality to academic aptitude: Educational and Psychological Measurement Vol 45(4) Win 1985, 971-976.
 
*Payne, D. A., Goolsby, C. E., Evans, K. A., & Barton, R. M. (1990). Multivariate analyses of cognitive and cognitive style variables based on hemisphere specialization theory predictive of success in a college developmental studies program: Perceptual and Motor Skills Vol 71(2) Oct 1990, 545-546.
 
*Payne, D. E. (1996). Effects of differential instructional sets of SAT performance. Dissertation Abstracts International Section A: Humanities and Social Sciences.
 
*Payne, O. L. (1991). An examination of factors influencing the verbal and mathematics SAT scores among Black secondary students: Dissertation Abstracts International.
 
*Pearson, B. Z. (1993). Predictive validity of the Scholastic Aptitude Test (SAT) for Hispanic bilingual students: Hispanic Journal of Behavioral Sciences Vol 15(3) Aug 1993, 342-356.
 
*Pedersen, L. G. (1975). The correlation of partial and total scores of the Scholastic Aptitude Test of the College Entrance Examination Board with grades in freshman chemistry: Educational and Psychological Measurement Vol 35(2) Sum 1975, 509-511.
 
*Pennock-Roman, M., Powers, D. E., & Perez, M. E. (1991). A preliminary evaluation of TestSkills: A kit to prepare Hispanic students for the PSAT/NMSQT. Albany, NY: State University of New York Press.
 
*Pentecoste, J. C., & Lowe, W. F. (1977). The Quick Test as a predictive instrument for college success: Psychological Reports Vol 41(3, Pt 1) Dec 1977, 759-762.
 
*Petersen, N. S., Cook, L. L., & Stocking, M. L. (1983). IRT versus conventional equating methods: A comparative study of scale stability: Journal of Educational Statistics Vol 8(2) Sum 1983, 137-156.
 
*Petrie, T. A., & Stoever, S. (1997). Academic and nonacademic predictors of female student-athletes' academic performances: Journal of College Student Development Vol 38(6) Nov-Dec 1997, 599-608.
 
*Phelps, R. P. (2003). Kill the messenger: The war on standardized testing. New Brunswick, NJ: Transaction Publishers.
 
*Platt, L. S., Turocy, P. S., & McGlumphy, B. E. (2001). Preadmission criteria as predictors of academic success in entry-level athletic training and other allied health educational programs: Journal of Athletic Training Vol 36(2) Apr-Jun 2001, 141-144.
 
*Pollins, L. D. (1985). The construct validity of the Scholastic Aptitude Test for young gifted students: Dissertation Abstracts International.
 
*Pommerich, M. (2007). Concordance: The good, the bad, and the ugly. New York, NY: Springer Science + Business Media.
 
*Pomplun, M., Burton, N., & Lewis, C. (1990). A preliminary evaluation of the stability of freshman GPA, 1978-1985. Princeton, NJ: Educational Testing Service.
 
*Powell, B., & Steelman, L. C. (1996). Bewitched, bothered and bewildering: The use and misuse of state SAT and ACT scores: Harvard Educational Review Vol 66(1) Spr 1996, 27-59.
 
*Powers, D. E. (1993). Coaching for the SAT: A summary of the summaries and an update: Educational Measurement: Issues and Practice Vol 12(2) Sum 1993, 24-30, 39.
 
*Powers, D. E., & Alderman, D. L. (1983). Effects of test familiarization on SAT performance: Journal of Educational Measurement Vol 20(1) Spr 1983, 71-79.
 
*Powers, D. E., Alderman, D. L., & Noeth, R. J. (1983). Helping students prepare for the SAT: Alternative strategies for counselors: School Counselor Vol 30(5) May 1983, 350-357.
 
*Powers, D. E., & Leung, S. W. (1995). Answering the new SAT reading comprehension questions without the passages: Journal of Educational Measurement Vol 32(2) Sum 1995, 105-129.
 
*Powers, D. E., & Rock, D. A. (1999). Effects of coaching on SAT I: Reasoning Test scores: Journal of Educational Measurement Vol 36(2) Sum 1999, 93-118.
 
*Price, F. W., & Suk Hi, K. (1976). The association of college performance with high school grades and college entrance test scores: Educational and Psychological Measurement Vol 36(4) Win 1976, 965-970.
 
*Pugh, R. C., & Sassenrath, J. M. (1968). Comparable scores for the CEEB Scholastic Aptitude Test and the American College Test Program: Measurement & Evaluation in Guidance 1(2) 1968, 103-109.
 
*Ramist, L., Lewis, C., & McCamley, L. (1990). Implications of using freshman GPA as the criterion for the predictive validity of the SAT. Princeton, NJ: Educational Testing Service.
 
*Ramist, L., & Weiss, G. (1990). The predictive validity of the SAT, 1964 to 1988. Princeton, NJ: Educational Testing Service.
 
*Ramos, I. (1992). Gender differences in mathematics: The relationship of risk-taking and estimation ability on the PSAT: Dissertation Abstracts International.
 
*Reglin, G. L., & Adams, D. R. (1990). Why Asian-American high school students have higher grade point averages and SAT scores than other high school students: The High School Journal Vol 73(3) Feb-Mar 1990, 143-149.
 
*Richek, H. G., & Brown, O. H. (1989). Personality and aptitude correlates of parental separation in "healthy" college freshman: Indian Journal of Psychometry & Education Vol 20(1) Jan 1989, 1-10.
 
*Rippey, R. M. (1976). The test score decline: If you don't know where you're going, how do you expect to get there? : Educational Technology Vol 16(6) Jun 1976, 30-38.
 
*Rives, J. A. (1995). Proposition 48 and graduation of student athletes. Dissertation Abstracts International Section A: Humanities and Social Sciences.
 
*Robinson, S. E. (1983). Nader versus ETS: Who should we believe? : Personnel & Guidance Journal Vol 61(5) Jan 1983, 260-262.
 
*Robinson, S. P. (1997). An assessment of the appropriateness of traditional admissions criteria for admission and eligibility of student-athletes. Dissertation Abstracts International Section A: Humanities and Social Sciences.
 
*Rock, D. A., Bennett, R. E., & Kaplan, B. A. (1987). Internal construct validity of a college admissions test across handicapped and nonhandicapped groups: Educational and Psychological Measurement Vol 47(1) Spr 1987, 193-205.
 
*Rodgers, J. L. (1988). Birth order, SAT, and confluence: Spurious correlations and no causality: American Psychologist Vol 43(6) Jun 1988, 476-477.
 
*Rogers, G. K. (1991). A score performance comparison between mathematics subtests of college entrance examinations and a state-mandated basic skills test: Dissertation Abstracts International.
 
*Rose, R. J., Hall, C. W., Bolen, L. M., & Webster, R. E. (1996). Locus of control and college students' approaches to learning: Psychological Reports Vol 79(1) Aug 1996, 163-171.
 
*Rothschild, L. H. (1987). Scholastic Aptitude Test preparation for the adolescent dyslexic: Annals of Dyslexia Vol 37 1987, 212-227.
 
*Royer, J. M., Abranovic, W. A., & Sinatra, G. M. (1987). Using entering reading comprehension performance as a predictor of performance in college classes: Journal of Educational Psychology Vol 79(1) Mar 1987, 19-26.
 
*Roznowski, M. (1988). A comment on Zajonc: American Psychologist Vol 43(6) Jun 1988, 478-479.
 
*Sackett, P. R., Hardison, C. M., & Cullen, M. J. (2004). On Interpreting Stereotype Threat as Accounting for African American-White Differences on Cognitive Tests: American Psychologist Vol 59(1) Jan 2004, 7-13.
 
*Sackett, P. R., Hardison, C. M., & Cullen, M. J. (2004). "On interpreting stereotype threat as accounting for African American-White differences on cognitive tests": Clarification: American Psychologist Vol 59(3) Apr 2004, 189.
 
*Sackett, P. R., Hardison, C. M., & Cullen, M. J. (2005). On Interpreting Research on Stereotype Threat and Test Performance: American Psychologist Vol 60(3) Apr 2005, 271-272.
 
*Saka, T. T. (1992). Differential item functioning among mainland US and Hawaii examinees on the verbal subtest of the Scholastic Aptitude Test: Dissertation Abstracts International.
 
*Sawyer, R. (2007). Some further thoughts on concordance. New York, NY: Springer Science + Business Media.
 
*Schaffner, P. E. (1985). Competitive admission practices when the SAT is optional: Journal of Higher Education Vol 56(1) Jan-Feb 1985, 55-72.
 
*Scheuneman, J. D., Camara, W. J., Cascallar, A. S., Wendler, C., & Lawrence, I. (2002). Calculator access, use, and type in relation to performance on the SAT I: Reasoning in mathematics: Applied Measurement in Education Vol 15(1) Jan 2002, 95-112.
 
*Scheuneman, J. D., & Gerritz, K. (1990). Using differential item functioning procedures to explore sources of item difficulty and group performance characteristics: Journal of Educational Measurement Vol 27(2) Sum 1990, 109-131.
 
*Schmitt, A. P. (1988). Language and cultural characteristics that explain differential item functioning for Hispanic examinees on the Scholastic Aptitude Test: Journal of Educational Measurement Vol 25(1) Spr 1988, 1-13.
 
*Schmitt, A. P., & Dorans, N. J. (1990). Differential item functioning for minority examinees on the SAT: Journal of Educational Measurement Vol 27(1) Spr 1990, 67-81.
 
*Schmitt, A. P., & Dorans, N. J. (1991). Factors related to differential item functioning for Hispanic examinees on the Scholastic Aptitude Test. Albany, NY: State University of New York Press.
 
*Schroeder, B. (1992). Problem-solving strategies and the mathematics SAT: A study of enhanced performance: Dissertation Abstracts International.
 
*Schuchman, M. C. (1977). A comparison of three techniques for reducing Scholastic Aptitude Test anxiety: Dissertation Abstracts International.
 
*Schurr, K. T., Henriksen, L. W., Alcorn, B. K., & Dillard, N. (1992). Tests and psychological types for nurses and teachers: Classroom achievement and standardized test scores measuring specific training objectives and general ability: Journal of Psychological Type Vol 23 1992, 38-44.
 
*Schurr, K. T., Ruble, V. E., & Henriksen, L. W. (1988). Relationships of Myers-Briggs Type Indicator personality characteristics and self-reported academic problems and skill ratings with Scholastic Aptitude Test scores: Educational and Psychological Measurement Vol 48(1) Spr 1988, 187-196.
 
*Sedlacek, W. E., & Adams-Gaston, J. (1992). Predicting the academic success of student-athletes using SAT and noncognitive variables: Journal of Counseling & Development Vol 70(6) Jul-Aug 1992, 724-727.
 
*Sesnowitz, M., Bernhardt, K. L., & Knain, D. M. (1982). An analysis of the impact of commercial test preparation courses on SAT scores: American Educational Research Journal Vol 19(3) Fal 1982, 429-441.
 
*Sgan, M. R. (1964). An alternative approach to scholastic aptitude tests as predictors of graduation rank at selective colleges: Educational and Psychological Measurement 24(2) 1964, 347-352.
 
*Shahani, C. (1989). The incremental contribution of the selection interview in college admissions: Dissertation Abstracts International.
 
*Shaughnessy, M. F., Spray, K., Moore, J., & Siegel, C. (1995). Prediction of success in college calculus: Personality, Scholastic Aptitude Test, and screening scores: Psychological Reports Vol 77(3, Pt 2) Dec 1995, 1360-1362.
 
*Shaw, E. (1993). The effects of short-term coaching on the Scholastic Aptitude Test: Dissertation Abstracts International.
 
*Sheehan, K. R. (1990). The relationship of gender bias and standardized tests to the mathematics competency of university men and women: Dissertation Abstracts International.
 
*Sheehan, K. R., & Gray, M. W. (1992). Sex bias in the SAT and the DTMS: Journal of General Psychology Vol 119(1) Jan 1992, 5-14.
 
*Shepperd, J. A. (1993). Student derogation of the Scholastic Aptitude Test: Biases in perceptions and presentations of college board scores: Basic and Applied Social Psychology Vol 14(4) Dec 1993, 455-473.
 
*Shepperd, J. A. (1996). Student derogation of the Scholastic Aptitude Test: Biases in perceptions and presentations of College Board scores. Boston, MA: Houghton, Mifflin and Company.
 
*Simmons, S. D. (1978). A study of selected characteristics of minority and majority students attending predominantly White and predominantly Black universities: Dissertation Abstracts International.
 
*Simmons, T. W. (1977). The effectiveness of aptitude test scores and high school rank for predicting academic success of Black college freshmen enrolled in an innovative program during the 1973-74 school year: Dissertation Abstracts International.
 
*Sinha, D. K. (1986). Relationships between Scholastic Aptitude Test performance and curricula of high school seniors in Georgia: Dissertation Abstracts International.
 
*Sinha, D. K. (1986). Relationships of graduation requirements and course offerings to Scholastic Aptitude Test performance of seniors in high school: Journal of Educational Research Vol 80(1) Sep-Oct 1986, 5-9.
 
*Slack, W. V., & Porter, D. (1980). The Scholastic Aptitude Test: A critical appraisal: Harvard Educational Review Vol 50(2) May 1980, 154-175.
 
*Slinde, J. A., & Linn, R. L. (1978). An exploration of the adequacy of the Rasch model for the problem of vertical equating: Journal of Educational Measurement Vol 15(1) Spr 1978, 23-35.
 
*Smith, G. M., & Fogg, C. P. (1979). Predicting college performance from a bivariate grid: Analysis and discussion of the grid's practical utility, accuracy, and multivariate logic: Educational and Psychological Measurement Vol 39(4) Win 1979, 843-857.
 
*Smith, J. K. (1982). Converging on correct answers: A peculiarity of multiple choice items: Journal of Educational Measurement Vol 19(3) Fal 1982, 211-220.
 
*Smith, L. F., & Smith, J. K. (2004). The Influence of Test Consequence on National Examinations: North American Journal of Psychology Vol 6(1) 2004, 13-26.
 
*Smith, R. M., & Schumacher, P. (2006). Academic Attributes of College Freshmen That Lead to Success in Actuarial Studies in a Business College: Journal of Education for Business Vol 81(5) May-Jun 2006, 256-260.
 
*Smith, R. M., & Schumacher, P. A. (2005). Predicting Success for Actuarial Students in Undergraduate Mathematics Courses: College Student Journal Vol 39(1) Mar 2005, 165-177.
 
*Sowles, G. S. (1991). The relationship between background variables and the academic performance of college freshmen: Dissertation Abstracts International.
 
*Spivey, R. D. (2007). Strategic reading instruction in the upper elementary grades: Leading and supporting a community of learners: A mixed methods study. Dissertation Abstracts International Section A: Humanities and Social Sciences.
 
*Sprinthall, R. C. (1983). Thanks for the Verities: American Psychologist Vol 38(3) Mar 1983, 355.
 
*Stage, C. (1988). Gender differences in test results: Scandinavian Journal of Educational Research Vol 32(3) Sep 1988, 101-111.
 
*Stanley, J. C. (1988). Some characteristics of SMPY's "700-800 on SAT-M Before Age 13 Group": Youths who reason extremely well mathematically: Gifted Child Quarterly Vol 32(1) Win 1988, 205-209.
 
*Stanley, J. C., & Brody, L. E. (1989). Comment about Ebmeier and Schmulbach's "An examination of the selection practices used in the Talent Search Program": Gifted Child Quarterly Vol 33(4) Fal 1989, 142-143.
 
*Stanley, J. C., Fox, L. H., & Keating, D. P. (1972). Annual report to the Spencer Foundation on its five-year grant to the Johns Hopkins University covering the first year of the grant, 1 September 1971 through 31 August 1972, "Study of mathematically and scientifically precocious youth" (SMSPY). Oxford, England: Johns Hopkins U.
 
*Steelman, L. C., Powell, B., & Carini, R. M. (2000). Do teacher unions hinder educational performance? Lessons learned from state SAT and ACT scores: Harvard Educational Review Vol 70(4) Win 2000, 437-466.
 
*Sternberg, R. J. (2005). Augmenting the SAT Through Assessments of Analytical, Practical, and Creative Skills. Mahwah, NJ: Lawrence Erlbaum Associates Publishers.
 
*Sternberg, R. J. (2006). The Rainbow Project: Enhancing the SAT through assessments of analytical, practical, and creative skills: Intelligence Vol 34(4) Jul-Aug 2006, 321-350.
 
*Stricker, L. J. (1984). Test disclosure and retest performance on the SAT: Applied Psychological Measurement Vol 8(1) Win 1984, 81-87.
 
*Stricker, L. J. (1991). Current validity of 1975 and 1985 SATs: Implications for validity trends since the mid-1970s: Journal of Educational Measurement Vol 28(2) Sum 1991, 93-98.
 
*Stricker, L. J., Rock, D. A., & Burton, N. W. (1996). Using the SAT and high school record in academic guidance: Educational and Psychological Measurement Vol 56(4) Aug 1996, 626-641.
 
*Stumpf, H., & Stanley, J. C. (1998). Stability and change in gender-related differences on the College Board Advanced Placement and Achievement Tests: Current Directions in Psychological Science Vol 7(6) Dec 1998, 192-196.
 
*Sue, S., & Abe, J. (1995). Predictors of academic achievement among Asian American and White students. Florence, KY: Taylor & Frances/Routledge.
 
*Sysler, J. (2000). Cognitive ability and achievement measures as predictors of SAT performance: A longitudinal study. Dissertation Abstracts International Section A: Humanities and Social Sciences.
 
*Tan, X. (2007). Evaluating detect indices and item classification using simulated and real data that display both simple and complex structure. Dissertation Abstracts International Section A: Humanities and Social Sciences.
 
*Taylor, J. R. (1994). Equality, school finance, and educational performance in America: Theory and evidence. Dissertation Abstracts International Section A: Humanities and Social Sciences.
 
*Thomas, C. R. (1986). Field independence and technology students' achievement: Perceptual and Motor Skills Vol 62(3) Jun 1986, 859-862.
 
*Thomas, E., & Thomas, P. (1965). Validation of the 1962 Navy College Aptitude Test: USN Personnel Research Activity Technical Bulletin No 66-6 1965, 30.
 
*Thomas, J. C. (2007). My SAT is down and my ACT is up: Making sense of test scores: PsycCRITIQUES Vol 52 (52), 2007.
 
*Thomas, M. K. (2004). The SAT II: Minority/Majority Test-Score Gaps and What They Could Mean for College Admissions: Social Science Quarterly Vol 85(5) Dec 2004, 1318-1334.
 
*Thomas, M. K. (2004). Where College-Bound Students Send Their SAT Scores: Does Race Matter? : Social Science Quarterly Vol 85(5) Dec 2004, 1373-1389.
 
*Thompson, T. P. (1984). A research note on deviant behavior and declining scholastic aptitude test scores: Deviant Behavior Vol 5(1-4) 1984, 305-312.
 
*Thorndike, R. L. (1947). The prediction of intelligence at college entrance from earlier test: Journal of Educational Psychology Vol 38(3) Mar 1947, 129-148.
 
*Thurmond, V. B., & Lewis, L. (1986). Correlations between SAT scores and MCAT scores of Black students in a summer program: Journal of Medical Education Vol 61(8) Aug 1986, 640-643.
 
*Tiedeman, D. V. (1972). Righting the Balance: PsycCRITIQUES Vol 17 (4), Apr, 1972.
 
*Ting, S.-M. R. (2000). Predicting Asian Americans' academic performance in the first year of college: An approach combining SAT scores and noncognitive variables: Journal of College Student Development Vol 41(4) Jul-Aug 2000, 442-449.
 
*Trice, A. D. (1990). Reliability of students' self-reports of scholastic aptitude scores: Data from juniors and seniors: Perceptual and Motor Skills Vol 71(1) Aug 1990, 290.
 
*Troutman, J. G. (1978). Cognitive predictors of final grades in finite mathematics: Educational and Psychological Measurement Vol 38(2) Sum 1978, 401-404.
 
*Tyler, L. E. (1978). Who's to Blame? : PsycCRITIQUES Vol 23 (8), Aug, 1978.
 
*Ungerleider, D., & Maslow, P. (2001). Association of Educational Therapists: Position paper on the SAT: Journal of Learning Disabilities Vol 34(4) Jul-Aug 2001, 311-314.
 
*VanTassel-Baska, J., & Willis, G. (1987). A three year study of the effects of low income on SAT scores among the academically able: Gifted Child Quarterly Vol 31(4) Fal 1987, 169-173.
 
*VanTassell-Baska, J. (1986). The use of aptitude tests for identifying the gifted: The talent search concept: Roeper Review Vol 8(3) Feb 1986, 185-189.
 
*Vars, F. E., & Bowen, W. G. (1998). Scholastic Aptitude Test scores, race, and academic performance in selective colleges and universities. Washington, DC: Brookings Institution.
 
*Vogel, R. E., & Halinski, R. S. (1977). Success on the CLEP General Examinations as a function of ACT performance: Measurement & Evaluation in Guidance Vol 10(1) Apr 1977, 44-47.
 
*von Davier, A. A., & Liu, M. (2008). Population invariance of test equating and linking: Theory extension and applications across exams: Applied Psychological Measurement Vol 32(1) Jan 2008, 9-10.
 
*Wainer, H. (1984). An exploratory analysis of performance on the SAT: Journal of Educational Measurement Vol 21(2) Sum 1984, 81-91.
 
*Wainer, H. (1986). Five pitfalls encountered while trying to compare states on their SAT scores: Journal of Educational Measurement Vol 23(1) Spr 1986, 69-81.
 
*Wainer, H. (1986). Minority contributions to the SAT score turnaround: An example of Simpson's Paradox: Journal of Educational Statistics Vol 11(4) Win 1986, 239-244.
 
*Wainer, H. (1988). How accurately can we assess changes in minority performance on the SAT? : American Psychologist Vol 43(10) Oct 1988, 774-778.
 
*Wainer, H., Saka, T., & Donoghue, J. R. (1993). The validity of the SAT at the University of Hawaii: A riddle wrapped in an enigma: Educational Evaluation and Policy Analysis Vol 15(1) Spr 1993, 91-98.
 
*Wainer, H., & Steinberg, L. S. (1992). Sex differences in performance on the mathematics section of the Scholastic Aptitude Test: A bidirectional validity study: Harvard Educational Review Vol 62(3) Fal 1992, 323-336.
 
*Wainer, H., & Thissen, D. (1996). How is reliability related to the quality of test scores? What is the effect of local dependence on reliability? : Educational Measurement: Issues and Practice Vol 15(1) Spr 1996, 22-29.
 
*Wallace, P. E. (1980). The efficacy of tenth grade DAT scores in predicting scores on the SAT: Dissertation Abstracts International.
 
*Walpole, M., McDonough, P. M., Bauer, C. J., Gibson, C., Kanyi, K., & Toliver, R. (2005). This Test is Unfair: Urban African American and Latino High School Students' Perceptions of Standardized College Admission Tests: Urban Education Vol 40(3) May 2005, 321-349.
 
*Warden, M. R. (1991). Withdrawal of academic effort: Implications for self-handicapping and self-worth theories: Dissertation Abstracts International.
 
*Waters, B. K., Eitelberg, M. J., & Laurence, J. H. (1982). Military and civilian test score trends (1950-1980): HumRRO Professional Paper PP 1-82 Mar 1982, 11-19.
 
*Wedman, I., & Stage, C. (1983). The significance of contents for sex differences in test results: Scandinavian Journal of Educational Research Vol 27(1) 1983, 49-71.
 
*Weiss, J. (1987). The Golden Rule bias reduction principle: A practical reform: Educational Measurement: Issues and Practice Vol 6(2) Sum 1987, 23-25.
 
*Weitzman, R. A. (1982). The prediction of college achievement by the Scholastic Aptitude Test and the high school record: Journal of Educational Measurement Vol 19(3) Fal 1982, 179-191.
 
*West, P. V. (1933). Review of A Study of Error: Psychological Bulletin Vol 30(7) Jul 1933, 532-533.
 
*White, W. F., Nylin, W. C., & Esser, P. R. (1985). Academic course grades as better predictors of graduation from a commuter-type college than SAT scores: Psychological Reports Vol 56(2) Apr 1985, 375-378.
 
*Wicherts, J. M. (2005). Stereotype Threat Research and the Assumptions Underlying Analysis of Covariance: American Psychologist Vol 60(3) Apr 2005, 267-269.
 
*Widerstrom, A. H., Jengeleski, J. L., & Chansky, N. M. (1979). Predicting freshman GPA of law/justice students: Educational and Psychological Measurement Vol 39(2) Sum 1979, 439-443.
 
*Wikoff, R. L., & Kafka, G. F. (1978). Interrelationships between the choice of college major, the ACT and the Sixteen Personality Factor Questionnaire: Journal of Educational Research Vol 71(6) Jul-Aug 1978, 320-324.
 
*Williams, V. S. (1998). The effects of metaphoric. Dissertation Abstracts International Section A: Humanities and Social Sciences.
 
*Willingham, W. W. (1990). Conclusions and implications. Princeton, NJ: Educational Testing Service.
 
*Willingham, W. W., & Lewis, C. (1990). Institutional differences in prediction trends. Princeton, NJ: Educational Testing Service.
 
*Willingham, W. W., Lewis, C., Morgan, R., & Ramist, L. (1990). Predicting college grades: An analysis of institutional trends over two decades. Princeton, NJ: Educational Testing Service.
 
*Winokur, H. (1984). The effects of special preparation for the verbal section of the SAT: Dissertation Abstracts International.
 
*Wollam, J. (1986). The relationship of measures of self-actualization to gifted students' academic achievement: Dissertation Abstracts International.
 
*Worsham, A. M. (1983). The effects of Think--a language arts thinking skills program--on Scholastic Aptitude Test (SAT) verbal scores at a Baltimore City senior high school: Dissertation Abstracts International.
 
*Wright, R. E., Palmer, J. C., & Miller, J. C. (1996). An examination of gender-based variations in the predictive ability of the SAT: College Student Journal Vol 30(1) Mar 1996, 81-84.
 
*Yang, C.-y. (1978). The predictive validity of the Scholastic Aptitude Test for Chinese college students: Dissertation Abstracts International.
 
*Zajonc, R. B. (1986). The decline and rise of scholastic aptitude scores: A prediction derived from the confluence model: American Psychologist Vol 41(8) Aug 1986, 862-867.
 
*Zajonc, R. B., & Bargh, J. A. (1980). Birth order, family size, and decline of SAT scores: American Psychologist Vol 35(7) Jul 1980, 662-668.
 
*Zeidner, M. (1987). Age bias in the predictive validity of Scholastic Aptitude Tests: Some Israeli data: Educational and Psychological Measurement Vol 47(4) Win 1987, 1037-1047.
 
*Zeidner, M. (1987). A cross-cultural test of sex bias in the predictive validity of scholastic aptitude examinations: Some Israeli findings: Evaluation and Program Planning Vol 10(3) 1987, 289-295.
 
*Zeidner, M. (1987). Gender and culture interaction effects on scholastic aptitude test performance: Some Israeli findings: International Journal of Psychology Vol 22(1) Feb 1987, 111-119.
 
*Zeidner, M. (1987). Test of the cultural bias hypothesis: Some Israeli findings: Journal of Applied Psychology Vol 72(1) Feb 1987, 38-48.
 
*Zeidner, M. (1988). Cultural fairness in aptitude testing revisited: A cross-cultural parallel: Professional Psychology: Research and Practice Vol 19(3) Jun 1988, 257-262.
 
*Zeidner, M. (1990). Does test anxiety bias scholastic aptitude test performance by gender and sociocultural group? : Journal of Personality Assessment Vol 55(1-2) Fal 1990, 145-160.
 
*Zeidner, M. (1991). Test anxiety and aptitude test performance in an actual college admissions testing situation: Temporal considerations: Personality and Individual Differences Vol 12(2) 1991, 101-109.
 
*Zeidner, M., & Nevo, B. (1992). Test anxiety in examinees in a college admission testing situation: Incidence, dimensionality, and cognitive correlates. Lisse, Netherlands: Swets & Zeitlinger Publishers.
 
*Zeleznik, C., Hojat, M., & Veloski, J. (1983). Long-range predictive and differential validities of the Scholastic Aptitude Test in medical school: Educational and Psychological Measurement Vol 43(1) Spr 1983, 223-232.
 
*Zwick, R., & Green, J. G. (2007). New perspectives on the correlation of SAT scores, high school grades, and socioeconomic factors: Journal of Educational Measurement Vol 44(1) Mar 2007, 23-45.
 
*Zwick, R., & Schlemer, L. (2004). SAT Validity for Linguistic Minorities at the University of California, Santa Barbara: Educational Measurement: Issues and Practice Vol 23(1) Spr 2004, 6-16.
 
*Zwick, R., & Sklar, J. C. (2005). Predicting College Grades and Degree Completion Using High School Grades and SAT Scores: The Role of Student Ethnicity and First Language: American Educational Research Journal Vol 42(3) Fal 2005, 439-464.
 
*Zyphur, M. J., Islam, G., & Landis, R. S. (2007). Testing 1, 2, 3, ...4? The personality of repeat SAT test takers and their testing outcomes: Journal of Research in Personality Vol 41(3) Jun 2007, 715-722.
 
 
   
 
==Further reading==
 
*{{cite journal |last=Coyle |first=T. R. |lastauthoramp=yes |last2=Pillow |first2=D. R. |year=2008 |title=SAT and ACT predict college GPA after removing ''g'' |journal=Intelligence |volume=36 |issue=6 |pages=719–729 |doi=10.1016/j.intell.2008.05.001 }}
*{{cite journal |last=Coyle |first=T. |last2=Snyder |first2=A. |last3=Pillow |first3=D. |last4=Kochunov |first4=P. |year=2011 |title=SAT predicts GPA better for high ability subjects: Implications for Spearman's Law of Diminishing Returns |journal=Personality and Individual Differences |volume=50 |issue=4 |pages=470–474 |doi=10.1016/j.paid.2010.11.009 }}
*{{cite journal |last=Frey |first=M. C. |last2=Detterman |first2=D. K. |year=2003 |title=Scholastic Assessment or ''g''? The Relationship Between the Scholastic Assessment Test and General Cognitive Ability |journal=Psychological Science |volume=15 |issue=6 |pages=373–378 |doi= 10.1111/j.0956-7976.2004.00687.x|url=http://www.missouri.edu/~aab2b3/Detterman.g.Psychological.Science.pdf |pmid=15147489|archiveurl=http://web.archive.org/web/20041101213954/http://www.missouri.edu/~aab2b3/Detterman.g.Psychological.Science.pdf|archivedate=2004-11-01}}
*{{cite book |last=Gould |first=Stephen Jay |title=The Mismeasure of Man |publisher=W. W. Norton & Company |edition=Rev/Expd |year=1996 |isbn=0-393-31425-1 }}
*{{cite book |last=Hoffman |first=Banesh |title=The Tyranny of Testing |publisher=Orig. pub. Collier |year=1962 |isbn=0-486-43091-X }} (and others)
*{{cite book |last=Hubin |first=David R. |title=The Scholastic Aptitude Test: Its Development and Introduction, 1900–1948 |publisher=Ph.D. dissertation in American History at the University of Oregon |year=1988 |url=http://www.uoregon.edu/~hubin/ }}
*{{cite book |last=Owen |first=David |title=None of the Above: The Truth Behind the SATs |edition=Revised |publisher=Rowman & Littlefield |year=1999 |isbn=0-8476-9507-7 }}
*{{cite book |last=Sacks |first=Peter |title=Standardized Minds: The High Price of America's Testing Culture and What We Can Do to Change It |location= |publisher=Perseus |year=2001 |isbn=0-7382-0433-1 }}
*{{cite book |last=Zwick |first=Rebecca |title=Fair Game? The Use of Standardized Admissions Tests in Higher Education |location= |publisher=Falmer |year=2002 |isbn=0-415-92560-6 }}
*{{cite news |date=December 17, 2001 |last=Gladwell |first=Malcolm |title=Examined Life: What Stanley H. Kaplan taught us about the S.A.T. |url=http://www.newyorker.com/archive/2001/12/17/011217crat_atlarge |work=[[The New Yorker]] }}
   
 
==External links==
 
{{wikibooks|SAT Study Guide}}
 
* [http://sat.collegeboard.org/register/ Official SAT Test website]
[[Category:Aptitude measures]]
 
[[Category:Entrance examinations]]
 
{{Admission tests}}
[[Category:Psychometrics]]
 
{{DEFAULTSORT:Sat}}
[[Category:Standardized tests]]
[[Category:1901 introductions]]
{{EnWP|SAT}}

Revision as of 13:02, 30 August 2014





The College Entrance Examination Board Scholastic Aptitude Test (SAT) is a standardized test used in most college admissions in the United States. The SAT is owned, published, and developed by the College Board, a nonprofit organization in the United States. It was formerly developed, published, and scored by the Educational Testing Service,[1] which still administers the exam. The test is intended to assess a student's readiness for college. It was first introduced in 1926, and its name and scoring have changed several times: it was first called the Scholastic Aptitude Test, then the Scholastic Assessment Test.

The current SAT Reasoning Test, introduced in 2005, takes three hours and forty-five minutes to finish, and costs $50 ($81 International), excluding late fees.[2] Possible scores range from 600 to 2400, combining test results from three 800-point sections (Mathematics, Critical Reading, and Writing). Taking the SAT or its competitor, the ACT, is required for freshman entry to many, but not all, universities in the United States.[3]

Function

The College Board states that the SAT measures the literacy and writing skills that are needed for academic success in college, and that it assesses how well test takers analyze and solve problems, skills they learned in school that they will need in college. The SAT is typically taken by high school sophomores, juniors, and seniors.[4] Specifically, the College Board states that use of the SAT in combination with high school grade point average (GPA) provides a better indicator of success in college than high school grades alone, as measured by college freshman GPA. Various studies conducted over the lifetime of the SAT show a statistically significant increase in the correlation with freshman grades when the SAT is added to high school grades.[5]

There are substantial differences in funding, curricula, grading, and difficulty among U.S. secondary schools due to American federalism, local control, and the prevalence of private, distance, and home schooled students. SAT (and ACT) scores are intended to supplement the secondary school record and help admission officers put local data—such as course work, grades, and class rank—in a national perspective.[6]

Historically, the SAT has been more popular among colleges on the coasts and the ACT more popular in the Midwest and South. Some colleges require the ACT for course placement, and a few formerly did not accept the SAT at all, but nearly all colleges now accept the test.[7]

Certain high IQ societies, like Mensa, the Prometheus Society and the Triple Nine Society, use scores from certain years as one of their admission tests. For instance, the Triple Nine Society accepts scores of at least 1450 on tests taken before April 1995, and of at least 1520 on tests taken between April 1995 and February 2005.[8]

The SAT is sometimes given to students younger than 13 by organizations such as the Study of Mathematically Precocious Youth, who use the results to select, study and mentor students of exceptional ability.

While the exact manner in which SAT scores will help to determine admission of a student at American institutions of higher learning is generally a matter decided by the individual institution, some foreign countries have made SAT (and ACT) scores a legal criterion in deciding whether holders of American high school diplomas will be admitted at their public universities.

Structure

The SAT consists of three major sections: Critical Reading, Mathematics, and Writing. Each section receives a score on the scale of 200–800, in multiples of 10, and the total score is the sum of the three section scores. Each major section is divided into three parts. There are 10 sub-sections, including an additional 25-minute experimental or "equating" section that may be in any of the three major sections. The experimental section is used to normalize questions for future administrations of the SAT and does not count toward the final score. The test contains 3 hours and 45 minutes of actual timed sections;[9] most administrations (after including orientation, distribution of materials, completion of biographical sections, and eleven minutes of timed breaks) run for about four and a half hours. Questions are classified as easy, medium, or hard based on data gathered from the experimental sections. In most sections, easier questions appear near the beginning and harder ones toward the end; this holds mainly for the math sections and the 19 sentence completions, while reading-passage questions instead follow the order of the passage.

Critical Reading

The Critical Reading (formerly Verbal) section of the SAT is made up of three scored sections: two 25-minute sections and one 20-minute section, with varying types of questions, including sentence completions and questions about short and long reading passages. Critical Reading sections normally begin with 5 to 8 sentence completion questions; the remainder of the questions are focused on the reading passages. Sentence completions generally test the student's vocabulary and understanding of sentence structure and organization by requiring the student to select one or two words that best complete a given sentence. The bulk of the Critical Reading section is made up of questions regarding reading passages, in which students read short excerpts on social sciences, humanities, physical sciences, or personal narratives and answer questions based on the passage. Certain sections contain passages asking the student to compare two related passages; generally, these consist of shorter reading passages. The number of questions about each passage is proportional to the length of the passage. Unlike in the Mathematics section, where questions go in the order of difficulty, questions in the Critical Reading section go in the order of the passage. Overall, question sets near the beginning of the section are easier, and question sets near the end of the section are harder.

Mathematics

[Image: An example of a "grid in" mathematics question in which the answer should be written into the box below the question.]

The Mathematics section of the SAT, also known as the Quantitative or Calculation section, consists of three scored sections: two 25-minute sections and one 20-minute section, as follows:

  • One of the 25-minute sections is entirely multiple choice, with 20 questions.
  • The other 25-minute section contains 8 multiple choice questions and 10 grid-in questions. For grid-in questions, test-takers write the answer inside a grid on the answer sheet. Unlike multiple choice questions, there is no penalty for incorrect answers on grid-in questions because the test-taker is not limited to a few possible choices.
  • The 20-minute section is all multiple choice, with 16 questions.

The SAT has done away with quantitative comparison questions on the math section, leaving only questions with symbolic or numerical answers.

  • New topics include Algebra II and scatter plots. These recent changes have resulted in a shorter, more quantitative exam requiring higher level mathematics courses relative to the previous exam.

Calculator use

Four-function, scientific, and graphing calculators are permitted on the SAT math section; however, calculators are not permitted on either of the other sections. Calculators with QWERTY keyboards, cell phone calculators, portable computers, and personal organizers are not permitted.[10]

With the recent changes to the content of the SAT math section, the need to save time while maintaining accuracy of calculations has led some to use calculator programs during the test. These programs allow students to complete problems faster than would normally be possible when making calculations manually.

The use of a graphing calculator is sometimes preferred, especially for geometry problems and exercises involving multiple calculations. According to research conducted by the College Board, performance on the math sections of the exam is associated with the extent of calculator use: those using calculators on about a third to a half of the items averaged higher scores than those using calculators less frequently.[11] Using a graphing calculator in mathematics courses, and becoming familiar with it outside the classroom, is known to have a positive effect on performance during the exam.

Writing

[Image: SAT essay. This student received a 10/12 from two judges, each giving 5/6.]

The writing portion of the SAT, based on but not directly comparable to the old SAT II subject test in writing (which in turn was developed from the old Test of Standard Written English, or TSWE), includes multiple choice questions and a brief essay. The essay subscore contributes about 28% to the total writing score, with the multiple choice questions contributing 70%. This section was implemented in March 2005 following complaints from colleges about the lack of uniform examples of a student's writing ability and critical thinking.

The multiple choice questions include error identification questions, sentence improvement questions, and paragraph improvement questions. Error identification and sentence improvement questions test the student's knowledge of grammar, presenting an awkward or grammatically incorrect sentence; in the error identification section, the student must locate the word producing the source of the error or indicate that the sentence has no error, while the sentence improvement section requires the student to select an acceptable fix to the awkward sentence. The paragraph improvement questions test the student's understanding of logical organization of ideas, presenting a poorly written student essay and asking a series of questions as to what changes might be made to best improve it.

The essay section, which is always administered as the first section of the test, is 25 minutes long. All essays must be in response to a given prompt. The prompts are broad and often philosophical and are designed to be accessible to students regardless of their educational and social backgrounds. For instance, test takers may be asked to expand on such ideas as their opinion on the value of work in human life or whether technological change also carries negative consequences to those who benefit from it. No particular essay structure is required, and the College Board accepts examples "taken from [the student's] reading, studies, experience, or observations." Two trained readers assign each essay a score between 1 and 6, where a score of 0 is reserved for essays that are blank, off-topic, non-English, not written with a Number 2 pencil, or considered illegible after several attempts at reading. The scores are summed to produce a final score from 2 to 12 (or 0). If the two readers' scores differ by more than one point, then a senior third reader decides. The average time each reader/grader spends on each essay is less than 3 minutes.[12]
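The two-reader scoring rule described above can be sketched in a few lines of Python. This is an illustrative sketch, not the College Board's actual procedure; in particular, how the senior third reader's score enters the 2–12 total when the first two readers disagree is an assumption here (the senior score is simply doubled).

```python
def essay_score(reader1, reader2, senior_reader=None):
    """Combine two readers' 1-6 holistic scores into a 2-12 essay subscore.

    A 0 from either reader marks an unscorable essay (blank, off-topic,
    illegible, etc.). If the two scores differ by more than one point,
    a senior third reader decides; doubling that score is an assumption.
    """
    if reader1 == 0 or reader2 == 0:
        return 0
    if abs(reader1 - reader2) > 1:
        if senior_reader is None:
            raise ValueError("a senior reader must resolve the discrepancy")
        return 2 * senior_reader
    return reader1 + reader2

print(essay_score(5, 5))                   # the sample essay above: 10
print(essay_score(2, 5, senior_reader=4))  # senior reader decides: 8
```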

In March 2004, Les Perelman analyzed 15 scored sample essays contained in the College Board's ScoreWrite book along with 30 other training samples and found that in over 90% of cases, the essay's score could be predicted from simply counting the number of words in the essay.[12] Two years later, Perelman trained high school seniors to write essays that made little sense but contained infrequently used words such as "plethora" and "myriad." All of the students received scores of "10" or better, which placed the essays in the 92nd percentile or higher.[13]

Style of questions

Most of the questions on the SAT, except for the essay and the grid-in math responses, are multiple choice; all multiple-choice questions have five answer choices, one of which is correct. Within each section, questions of the same type are generally ordered by difficulty. An important exception exists, however: questions that follow the long and short reading passages are ordered according to the passage rather than by difficulty. Ten of the questions in one of the math sub-sections are not multiple choice; they instead require the test taker to bubble in a number in a four-column grid.

The questions are weighted equally. For each correct answer, one raw point is added; for each incorrect answer, one-fourth of a point is deducted.[14] No points are deducted for incorrect math grid-in answers. This scheme makes a student's mathematically expected gain from blind guessing zero. The final score is derived from the raw score; the precise conversion chart varies between test administrations.

The College Board therefore recommends only making educated guesses, that is, guessing only when the test taker can eliminate at least one answer he or she thinks is wrong. Without eliminating any answers, the probability of answering correctly is 20%. Eliminating one wrong answer increases this probability to 25% (and the expected gain to 1/16 of a point); eliminating two gives a 33.3% probability (1/6 of a point); and three, a 50% probability (3/8 of a point).
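The guessing arithmetic above can be checked with a short Python sketch (illustrative only):

```python
from fractions import Fraction

def expected_guess_gain(choices_remaining, penalty=Fraction(1, 4)):
    """Expected raw-score change from guessing among the remaining choices.

    A correct answer earns 1 raw point; a wrong multiple-choice answer
    costs the 1/4-point penalty.
    """
    p_correct = Fraction(1, choices_remaining)
    return p_correct * 1 - (1 - p_correct) * penalty

print(expected_guess_gain(5))  # 0    -> blind guessing is expected-neutral
print(expected_guess_gain(4))  # 1/16 -> one wrong answer eliminated
print(expected_guess_gain(3))  # 1/6
print(expected_guess_gain(2))  # 3/8
```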

Section | Average Score | Time (Minutes) | Content
Writing | 493 | 60 | Grammar, usage, and diction
Mathematics | 515 | 70 | Number and operations; algebra and functions; geometry; statistics, probability, and data analysis
Critical Reading | 501 | 70 | Vocabulary, critical reading, and sentence-level reading

Preparations

The SAT is offered seven times a year in the United States: in October, November, December, January, March (or April, alternating), May, and June. The test is typically offered on the first Saturday of the month for the November, December, May, and June administrations. In other countries, the SAT is offered on the same dates as in the United States except for the first spring test date (i.e., March or April), which is not offered. In 2006, the test was taken 1,465,744 times.[15]

Candidates may take either the SAT Reasoning Test or up to three SAT Subject Tests on any given test date, except the first spring test date, when only the SAT Reasoning Test is offered. Candidates wishing to take the test may register online at the College Board's website, by mail, or by telephone, at least three weeks before the test date.

The SAT Subject Tests are all given in one large book on test day. Therefore, it is actually immaterial which tests, and how many, the student signs up for; with the possible exception of the language tests with listening, the student may change his or her mind and take any tests, regardless of his or her initial sign-ups. Students who choose to take more subject tests than they signed up for will later be billed by College Board for the additional tests and their scores will be withheld until the bill is paid. Students who choose to take fewer subject tests than they signed up for are not eligible for a refund.

The SAT Reasoning Test costs $49 ($78 International, $99 for India and Pakistan, since the older testing system is in place). For the Subject tests, students pay a $22 ($49 International, $73 for India and Pakistan) Basic Registration Fee and $11 per test (except for language tests with listening, which cost $21 each).[2] The College Board makes fee waivers available for low income students. Additional fees apply for late registration, standby testing, registration changes, scores by telephone, and extra score reports (beyond the four provided for free).

Candidates whose religious beliefs prevent them from taking the test on a Saturday may request to take the test on the following day, except for the October test date, for which the Sunday test date is eight days after the main test offering. Such requests must be made at the time of registration and are subject to denial.

Students with verifiable disabilities, including physical and learning disabilities, are eligible to take the SAT with accommodations. The standard time increase for students requiring additional time due to learning disabilities is time + 50%; time + 100% is also offered.

SAT preparation is a highly lucrative field,[16] and many companies and organizations offer test preparation in the form of books, classes, online courses, and tutoring. Although the College Board maintains that the SAT is essentially uncoachable, some research suggests that tutoring courses result in an average increase of about 20 points on the math section and 10 points on the verbal section.[17]

Raw scores, scaled scores, and percentiles

Students receive their online score reports approximately three weeks after test administration (six weeks for mailed, paper scores), with each section graded on a scale of 200–800 and two subscores for the writing section: the essay score and the multiple-choice subscore. In addition to their score, students receive their percentile (the percentage of other test takers with lower scores). The raw score, the number of points gained from correct answers minus those lost from incorrect answers, is also included; it ranges from just under 50 to just under 60, depending upon the test.[18] Students may also receive, for an additional fee, the Question and Answer Service, which provides the student's answer, the correct answer to each question, and online resources explaining each question.

The corresponding percentile of each scaled score varies from test to test: for example, in 2003, a scaled score of 800 in both sections of the SAT Reasoning Test corresponded to a percentile of 99.9, while a scaled score of 800 on the SAT Physics Test corresponded to the 94th percentile. The differences in what scores mean with regard to percentiles are due to the content of the exam and the caliber of students choosing to take each exam. Subject Test material is usually studied intensively (often in an AP course, which is relatively more difficult), and only those who expect to perform well tend to take these tests, creating a skewed distribution of scores.

The percentiles that various SAT scores for college-bound seniors correspond to are summarized in the following chart:[19][20]

Percentile | Score, 1600 Scale (official, 2006) | Score, 2400 Scale (official, 2006)
99.93/99.98* | 1600 | 2400
99+ ** | ≥1540 | ≥2280
99 | ≥1480 | ≥2200
98 | ≥1450 | ≥2140
97 | ≥1420 | ≥2100
93 | ≥1340 | ≥1990
88 | ≥1280 | ≥1900
81 | ≥1220 | ≥1800
72 | ≥1150 | ≥1700
61 | ≥1090 | ≥1600
48 | ≥1010 | ≥1500
36 | ≥950 | ≥1400
24 | ≥870 | ≥1300
15 | ≥810 | ≥1200
8 | ≥730 | ≥1090
4 | ≥650 | ≥990
2 | ≥590 | ≥890
* The percentile of the perfect score was 99.98 on the 2400 scale and 99.93 on the 1600 scale.
** 99+ means better than 99.5 percent of test takers.
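For illustration, the 2400-scale column of the chart can be turned into a simple Python lookup. This is a sketch based only on the rows above: scores between listed thresholds take the percentile of the nearest lower threshold, and 99.5 stands in for the chart's "99+".

```python
import bisect

# (minimum 2400-scale score, percentile) pairs from the 2006 chart above,
# in ascending score order; 99.5 is a stand-in for the "99+" row.
_PERCENTILES = [
    (890, 2), (990, 4), (1090, 8), (1200, 15), (1300, 24), (1400, 36),
    (1500, 48), (1600, 61), (1700, 72), (1800, 81), (1900, 88), (1990, 93),
    (2100, 97), (2140, 98), (2200, 99), (2280, 99.5), (2400, 99.98),
]

def percentile_2400(score):
    """Percentile for a 2400-scale composite, per the 2006 chart above.

    Returns None for scores below the chart's lowest threshold.
    """
    thresholds = [s for s, _ in _PERCENTILES]
    idx = bisect.bisect_right(thresholds, score) - 1
    return None if idx < 0 else _PERCENTILES[idx][1]

print(percentile_2400(2400))  # 99.98
print(percentile_2400(1500))  # 48
```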

The older SAT (before 1995) had a very high ceiling. In any given year, only seven of the million test-takers scored above 1580. A score above 1580 was equivalent to the 99.9995 percentile.[21]

SAT-ACT score comparisons

[Image: Map of states according to high school graduates' (2006) preference of exam. States in orange had more students taking the SAT than the ACT.]

Although there is no official conversion chart between the SAT and its biggest rival, the ACT, the College Board released an unofficial chart based on results from 103,525 test takers who took both tests between October 1994 and December 1996;[22] however, both tests have changed since then. Several colleges have also issued their own charts. The following is based on the University of California's conversion chart.[23]

SAT (prior to Writing Test addition) | SAT (with Writing Test addition) | ACT Composite score
1600 | 2400 | 36
1560–1590 | 2340–2390 | 35
1520–1550 | 2280–2330 | 34
1480–1510 | 2220–2270 | 33
1440–1470 | 2160–2210 | 32
1400–1430 | 2100–2150 | 31
1360–1390 | 2040–2090 | 30
1320–1350 | 1980–2030 | 29
1280–1310 | 1920–1970 | 28
1240–1270 | 1860–1910 | 27
1200–1230 | 1800–1850 | 26
1160–1190 | 1740–1790 | 25
1120–1150 | 1680–1730 | 24
1080–1110 | 1620–1670 | 23
1040–1070 | 1560–1610 | 22
1000–1030 | 1500–1550 | 21
960–990 | 1440–1490 | 20
920–950 | 1380–1430 | 19
880–910 | 1320–1370 | 18
840–870 | 1260–1310 | 17
800–830 | 1200–1250 | 16
760–790 | 1140–1190 | 15
720–750 | 1080–1130 | 14
680–710 | 1020–1070 | 13
640–670 | 960–1010 | 12
600–630 | 900–950 | 11
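The University of California chart above is a uniform staircase on the 2400 scale (steps of 60 points), so it can be encoded compactly in Python. This is an illustrative sketch of that chart, not an official conversion.

```python
import bisect

# Lower bounds of the 2400-scale SAT ranges in the UC chart above:
# 900 -> ACT 11, then +60 per ACT point up to 2340 -> 35, and 2400 -> 36.
_SAT_LOWER_BOUNDS = list(range(900, 2341, 60)) + [2400]

def sat_to_act(sat_2400):
    """Approximate ACT composite for a 2400-scale SAT score, per the
    University of California chart reproduced above.

    Returns None for scores below the chart's range.
    """
    if sat_2400 < 900:
        return None
    idx = bisect.bisect_right(_SAT_LOWER_BOUNDS, sat_2400) - 1
    return 11 + idx

print(sat_to_act(2400))  # 36
print(sat_to_act(1500))  # 21
```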

History

Mean SAT Scores by year[24]
Year  Reading/Verbal  Math
1972 530 509
1973 523 506
1974 521 505
1975 512 498
1976 509 497
1977 507 496
1978 507 494
1979 505 493
1980 502 492
1981 502 492
1982 504 493
1983 503 494
1984 504 497
1985 509 500
1986 509 500
1987 507 501
1988 505 501
1989 504 502
1990 500 501
1991 499 500
1992 500 501
1993 500 503
1994 499 504
1995 504 506
1996 505 508
1997 505 511
1998 505 512
1999 505 511
2000 505 514
2001 506 514
2002 504 516
2003 507 519
2004 508 518
2005 508 520
2006 503 518
2007 502 515
2008 502 515
2009 501 515
2010 501 516
2011 497 514
[Image: Mean SAT Reading and Math test scores over time.]

Developed by Carl Brigham, one of the psychologists who worked on the Army Alpha and Beta tests, the SAT was originally used mainly by colleges and universities in the northeastern United States. It was intended to eliminate test bias between people from different socio-economic backgrounds.

1901 test

The College Board began on June 17, 1901, when 973 students took its first test at 67 locations in the United States and two in Europe. Although those taking the test came from a variety of backgrounds, approximately one third were from New York, New Jersey, or Pennsylvania. The majority came from private schools, academies, or endowed schools. About 60% of those taking the test applied to Columbia University. The test contained sections on English, French, German, Latin, Greek, history, mathematics, chemistry, and physics. The test was not multiple choice; instead, essay responses were evaluated as "excellent", "good", "doubtful", "poor" or "very poor".[25]

1926 test

The first administration of the SAT occurred on June 23, 1926, when it was known as the Scholastic Aptitude Test.[26][27] This test, prepared by a committee headed by Princeton psychologist Carl Campbell Brigham, had sections on definitions, arithmetic, classification, artificial language, antonyms, number series, analogies, logical inference, and paragraph reading. It was administered to over 8,000 students at over 300 test centers; 60% of the test-takers were men. Slightly over a quarter of the men applied to Yale University, and a similar share of the women to Smith College.[27] The test was paced quickly: test-takers were given only a little over 90 minutes to answer 315 questions.[26]

1928 and 1929 tests

In 1928 the number of verbal sections was reduced to seven, and the time limit was increased to slightly under two hours. In 1929 the number of sections was again reduced, this time to six. These changes in part loosened the time constraints on test-takers. The mathematics section was eliminated entirely from these tests, which focused only on verbal ability.[26]

1930 test and 1936 changes

In 1930 the SAT was first split into verbal and math sections, a structure that would continue through 2004. The verbal section of the 1930 test covered a narrower range of content than its predecessors, examining only antonyms, double definitions (somewhat similar to sentence completions), and paragraph reading. In 1936, analogies were re-added. Between 1936 and 1946, students had between 80 and 115 minutes to answer 250 verbal questions (over a third of which were on antonyms). The mathematics test introduced in 1930 contained 100 free-response questions to be answered in 80 minutes, and focused primarily on speed. As with the 1928 and 1929 tests, the mathematics section was eliminated entirely from 1936 to 1941. When it was re-added in 1942, it consisted of multiple-choice questions.[26]

1946 test and associated changes

Paragraph reading was eliminated from the verbal portion of the SAT in 1946, and replaced with reading comprehension, and "double definition" questions were replaced with sentence completions. Between 1946 and 1957 students were given 90 to 100 minutes to complete 107 to 170 verbal questions. Starting in 1958 time limits became more stable, and for 17 years, until 1975, students had 75 minutes to answer 90 questions. In 1959 questions on data sufficiency were introduced to the mathematics section, and then replaced with quantitative comparisons in 1974. In 1974 both verbal and math sections were reduced from 75 minutes to 60 minutes each, with changes in test composition compensating for the decreased time.[26]

1980 test and associated changes

The "Strivers" score study was introduced by the Educational Testing Service, which administers the SAT and had been researching ways to make the test fairer to minorities and to individuals facing social and economic barriers. The original "Strivers" project, in its research phase from 1980 to 1994, awarded special "Striver" status to test-takers who scored 200 points higher than expected for their race, gender, and income level. The belief was that this would give minorities a better chance of being accepted into a more selective college, e.g. an Ivy League school. In 1992 the Strivers project was leaked to the public, and as a result it was terminated in 1993. After federal courts heard arguments from the ACLU, the NAACP, and the Educational Testing Service, the courts ordered the study to alter its data-collection process, ruling that only age, race, and zip code could be used to determine a test-taker's eligibility for "Strivers" points. These changes took effect on the SAT in 1994.

1994 changes

In 1994 the verbal section received a dramatic change in focus. Among these changes were the removal of antonym questions, and an increased focus on passage reading. The mathematics section also saw a dramatic change in 1994, thanks in part to pressure from the National Council of Teachers of Mathematics. For the first time since 1935, the SAT asked some non-multiple choice questions, instead requiring students to supply the answers. 1994 also saw the introduction of calculators into the mathematics section for the first time in the test's history. The mathematics section introduced concepts of probability, slope, elementary statistics, counting problems, median and mode.[26]

The average score on the 1994 modification of the SAT I was usually around 1000 (500 on the verbal, 500 on the math). The most selective schools in the United States (for example, those in the Ivy League) typically had SAT averages exceeding 1400 on the old test[citation needed].

1995 re-centering (raising mean score back to 500)

The test scoring was initially scaled to make 500 the mean score on each section, with a standard deviation of 100.[28] As the test grew more popular and more students from less rigorous schools began taking it, the average dropped to about 428 Verbal and 478 Math. The SAT was "recentered" in 1995, and the average "new" score again became close to 500. Scores awarded after 1994 and before October 2001 are officially reported with an "R" (e.g. 1260R) to reflect this change. Old scores may be recentered for comparison with post-1995 scores by using official College Board tables,[29] which in the middle ranges add about 70 points to Verbal and 20 or 30 points to Math. In other words, current students have roughly a 90- to 100-point advantage over their parents.
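The recentering amounts to a per-section score adjustment. The sketch below uses flat mid-range offsets (+70 Verbal, +25 Math) assumed from the figures above; the official College Board conversion tables vary point by point across the scale, so this is illustrative only.

```python
# Illustrative sketch of the 1995 recentering. The flat offsets below
# (+70 Verbal, +25 Math) are assumptions taken from the mid-range figures
# in the text, not the official College Board conversion tables.
VERBAL_OFFSET = 70
MATH_OFFSET = 25
SCALE_MIN, SCALE_MAX = 200, 800

def clamp(score):
    """Keep a section score on the 200-800 scale."""
    return max(SCALE_MIN, min(SCALE_MAX, score))

def recenter(old_verbal, old_math):
    """Convert a pre-1995 (verbal, math) pair to approximate recentered scores."""
    return clamp(old_verbal + VERBAL_OFFSET), clamp(old_math + MATH_OFFSET)

# The pre-recentering averages of about 428 Verbal and 478 Math map back
# near the intended mean of 500 per section.
print(recenter(428, 478))  # (498, 503)
```

Note that the clamp matters at the top of the scale: a pre-1995 pair already near 800 cannot gain the full offset, which is one reason the real tables are nonlinear.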

1995 re-centering controversy

Certain educational organizations viewed the SAT re-centering initiative as an attempt to stave off international embarrassment over continuously declining test scores, even among top students. As evidence, they noted that the number of pupils scoring above 600 on the verbal portion of the test had fallen from a peak of 112,530 in 1972 to 73,080 in 1993, a decline of about 35%, even though the total number of test-takers had risen by over 500,000.[30]

2002 changes – Score Choice

In October 2002, the College Board dropped the Score Choice option for SAT-II exams. Under this option, scores were not released to colleges until the student saw and approved of the score.[31] The College Board has since decided to re-implement Score Choice in the spring of 2009. It is described as optional, and it is not clear whether score reports will indicate whether a student has opted in. A number of highly selective colleges and universities, including Yale, the University of Pennsylvania, and Stanford, have announced they will require applicants to submit all scores; Stanford, however, prohibits Score Choice only for the traditional SAT.[32] Others, such as MIT and Harvard, have fully embraced Score Choice.

2005 changes

In 2005, the test was changed again, largely in response to criticism by the University of California system.[33] Because of issues concerning ambiguous questions, especially analogies, certain types of questions were eliminated: the analogies from the verbal section and the quantitative comparisons from the math section. The test was made marginally harder as a corrective to the rising number of perfect scores. A new writing section with an essay, based on the former SAT II Writing Subject Test, was added,[34] in part to help close the gap between the highest and midrange scores, and in part out of a desire to test each student's writing ability directly. The new SAT (known as the SAT Reasoning Test) was first offered on March 12, 2005, after the last administration of the "old" SAT in January 2005. The mathematics section was expanded to cover three years of high school mathematics, and the verbal section was renamed the Critical Reading section.

2008 changes

In late 2008, a new variable came into play. Previously, applicants to most colleges were required to submit all scores, with colleges that embraced Score Choice retaining the option of allowing applicants not to submit all scores. In 2008, however, an initiative began to make Score Choice universal, with some opposition from colleges wishing to maintain their score-reporting practices. While students now theoretically have the choice to submit only their best scores, some popular colleges and universities, such as Cornell, ask that students send all test scores.[35] This has led the College Board to display on its website which colleges accept or object to Score Choice, while continuing to claim that students will never have scores submitted against their will.[36] Regardless of whether a given college permits applicants to exercise Score Choice, most colleges do not penalize students who report poor scores along with high ones; many universities, such as Columbia[citation needed] and Cornell,[citation needed] expressly promise to overlook undesirable scores and to focus on those most representative of the student's achievement and academic potential. The College Board maintains a list of colleges and their respective score choice policies, current as of November 2011.[37]

2012 changes

Beginning in 2012, test takers were required to upload a digital photo of themselves for enhanced identification purposes. Critics raised concerns about racial and other discrimination, because the photos are necessarily submitted alongside test scores in the admissions process.[38]

Name changes and recentered scores

The name originally stood for "Scholastic Aptitude Test".[39] But in 1990, because of uncertainty about the SAT's ability to function as an intelligence test, the name was changed to Scholastic Assessment Test. In 1993 the name was changed to SAT I: Reasoning Test (with the letters not standing for anything) to distinguish it from the SAT II: Subject Tests.[39] In 2004, the roman numerals on both tests were dropped, and the SAT I was renamed the SAT Reasoning Test.[39] The scoring categories are now the following: Critical Reading (comparable to some of the Verbal portions of the old SAT I), Mathematics, and Writing. The writing section now includes an essay, whose score is involved in computing the overall score for the Writing section, as well as grammar sections (also comparable to some Verbal portions of the previous SAT).

As noted above, the scoring was initially scaled to a mean of 500 per section with a standard deviation of 100,[28] and the 1995 recentering restored the average score to about 500.

Scoring problems of October 2005 tests

In March 2006, it was announced that a small percentage of the SATs taken in October 2005 had been scored incorrectly, due to the test papers being moist and not scanning properly, and that some students had received erroneous scores. The College Board announced it would raise the scores of students who had received a lower score than they earned, but by that point many of those students had already applied to colleges using their original scores. The College Board decided not to lower the scores of students who had been given a higher score than they earned. A lawsuit was filed in 2005 by about 4,400 students who received an incorrectly low score on the SAT. The class-action suit was settled in August 2007, when the College Board and another company that administers the college-admissions test announced they would pay $2.85 million to over 4,000 students; under the agreement, each student could elect to receive $275 or submit a claim for more money if he or she felt the damage was greater.[40]

A similar scoring error occurred on a secondary school admission test in 2010–2011, when the Educational Records Bureau (ERB) announced, after the admission process was over, that an error had been made in scoring the 2010 tests of 17% of the students who had taken the Independent School Entrance Examination for admission to private secondary schools for 2011. Commenting in The New York Times on the effect of the error on students' school applications, ERB president David Clune stated, "It is a lesson we all learn at some point — that life isn’t fair."[41]

The math-verbal achievement gap

Main article: Math-verbal achievement gap

In 2002, Richard Rothstein (education scholar and columnist) wrote in The New York Times that U.S. math averages on the SAT and ACT continued their decade-long rise over national verbal averages on the tests.[42]

Perception

Correlations with IQ

Frey and Detterman (2003) analyzed the correlation of SAT scores with intelligence test scores.[43] They found SAT scores to be highly correlated with general mental ability, or g (r=.82 in their sample, .86 when corrected for non-linearity). The correlation between SAT scores and scores on the Raven's Advanced Progressive Matrices was .483 (.72 corrected for restricted range). They concluded that the SAT is primarily a test of g. Beaujean and colleagues (2006) have reached similar conclusions.[44]
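The "corrected for restricted range" figure reflects a standard psychometric adjustment: college-bound SAT takers are a range-restricted sample, which deflates observed correlations. A minimal sketch of one common correction (Thorndike's Case II formula; an assumption here, since the paper's exact method is not reproduced in this article) is:

```python
import math

def correct_for_range_restriction(r, sd_ratio):
    """Thorndike Case II correction: adjust an observed correlation r for
    range restriction, where sd_ratio = (unrestricted SD) / (restricted SD)
    of the selection variable."""
    k = sd_ratio
    return (r * k) / math.sqrt(1 - r ** 2 + (r ** 2) * (k ** 2))

# With no restriction (sd_ratio = 1) the correlation is unchanged; the more
# restricted the sample, the larger the upward correction.
print(round(correct_for_range_restriction(0.483, 1.0), 3))  # 0.483
print(round(correct_for_range_restriction(0.483, 2.0), 3))
```

With an observed r of .483, a sufficiently large SD ratio pushes the corrected value toward the .72 Frey and Detterman report, which illustrates why the observed and corrected figures can differ so much.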

Cultural bias

For decades, many critics have accused the designers of the verbal SAT of cultural bias toward the white and wealthy. A famous (and long past) example of this bias in the SAT I was the oarsman–regatta analogy question.[45] The object of the question was to find the pair of terms whose relationship was most similar to the relationship between "runner" and "marathon". The correct answer was "oarsman" and "regatta". Choosing the correct answer presupposed familiarity with rowing, a sport popular with the wealthy, and thus with its structure and terminology. Fifty-three percent of white students answered the question correctly, while only 22% of black students did.[46] However, according to Herrnstein and Murray, the black-white gap is smaller on culture-loaded questions like this one than on questions that appear culturally neutral.[47] Analogy questions have since been replaced by short reading passages.

Dropping SAT

A growing number of colleges have responded to this criticism by joining the SAT optional movement. These colleges do not require the SAT for admission.

In a 2001 speech to the American Council on Education, Richard C. Atkinson, the president of the University of California, urged dropping the SAT Reasoning Test as a college admissions requirement:

"Anyone involved in education should be concerned about how overemphasis on the SAT is distorting educational priorities and practices, how the test is perceived by many as unfair, and how it can have a devastating impact on the self-esteem and aspirations of young students. There is widespread agreement that overemphasis on the SAT harms American education."[48]

In response to threats by the University of California to drop the SAT as an admission requirement, the College Entrance Examination Board announced the restructuring of the SAT, to take effect in March 2005, as detailed above.

In the 1960s and 1970s there was a movement to drop achievement tests. After a period of time, the countries, states, and provinces that had dropped them agreed that academic standards had fallen, and that students had studied less and taken their studying less seriously. They reintroduced the tests after studies and research concluded that high-stakes tests produced benefits that outweighed their costs.[49]

MIT study

In 2005, MIT Writing Director Les Perelman plotted essay length versus essay score on the new SAT from released essays and found a high correlation between them. After studying over 50 graded essays, he found that longer essays consistently produced higher scores. In fact, he argues that by simply gauging the length of an essay without reading it, the given score of an essay could likely be determined correctly over 90% of the time. He also discovered that several of these essays were full of factual errors; the College Board does not claim to grade for factual accuracy.
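Perelman's analysis amounts to correlating word count with assigned score across a set of graded essays. A toy illustration of that check, using made-up (word count, score) pairs rather than his actual sample:

```python
import math
import statistics

def pearson_r(xs, ys):
    """Plain Pearson correlation coefficient between two equal-length samples."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical (essay word count, score on the 1-6 scale) pairs; if graders
# reward length, the correlation comes out strongly positive.
sample = [(150, 2), (220, 3), (300, 3), (380, 4), (450, 5), (520, 6)]
lengths, scores = zip(*sample)
print(round(pearson_r(lengths, scores), 2))
```

A correlation near 1 on such data is what would let a reader predict an essay's score from its length alone most of the time, as Perelman claimed.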

Perelman, along with the National Council of Teachers of English, also criticized the 25-minute writing section of the test for damaging standards of writing instruction in the classroom. They say that writing teachers training their students for the SAT will not focus on revision, depth, or accuracy, but will instead produce long, formulaic, and wordy pieces.[50] "You're getting teachers to train students to be bad writers", concluded Perelman.[51]

Test score disparity by income

Research has linked higher family incomes to higher mean scores. Test score data from California show that test-takers with family incomes of less than $20,000 a year had a mean score of 1310, while test-takers with family incomes over $200,000 had a mean score of 1715, a difference of 405 points. Estimates of the correlation between SAT scores and household income range from 0.23 to 0.4 (explaining about 5–16% of the variation).[52] One calculation has shown a 40-point average score increase for every additional $20,000 in income.[53] There are conflicting opinions on the source of this correlation. Some think it is evidence of the superior education and tutoring accessible to more affluent adolescents. Others propose that it reflects wealthier families exposing their children to a broader range of cultural ideas and experiences, through travel and other means, and that such "cultural literacy" can enhance aptitude.[54]
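The figures in this paragraph are easy to sanity-check: a correlation of 0.23–0.4 explains r² ≈ 5–16% of score variance, and a 40-point gain per $20,000 of income predicts most, though not all, of the reported 405-point gap. A minimal sketch (the flat 40-point slope is the article's round figure, not a fitted model):

```python
def variance_explained(r):
    """Share of score variance accounted for by income, given correlation r."""
    return r ** 2

def predicted_gap(income_delta, points_per_20k=40):
    """Score gap implied by a flat 40-points-per-$20,000 trend."""
    return points_per_20k * income_delta / 20_000

# 0.23**2 = 0.053 and 0.40**2 = 0.16, the 5-16% range cited above.
print(round(variance_explained(0.23), 3), round(variance_explained(0.40), 2))

# A $180,000 income difference (under $20k vs. over $200k) predicts a
# 360-point gap, close to the reported 405-point difference (1715 - 1310).
print(predicted_gap(180_000))
```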

History of test development

See also

  • ACT (test), a college entrance exam, competitor to the SAT
  • College admissions in the United States
  • List of admissions tests
  • PSAT/NMSQT
  • SAT calculator program
  • SAT Subject Tests

References

  1. About the College Board. College Board. URL accessed on May 29, 2007.
  2. 2.0 2.1 SAT Fees: 2010–11 Fees. College Board. URL accessed on September 5, 2010.
  3. O'Shaughnessy, Lynn. "The Other Side of 'Test Optional'", The New York Times, 26 July 2009, p. 6. Retrieved on 22 June 2011.
  4. Official SAT Reasoning Test page. College Board. URL accessed on June 2007.
  5. 01-249.RD.ResNoteRN-10 rv.1
  6. Korbin, L. (2006). SAT Program Handbook. A Comprehensive Guide to the SAT Program for School Counselors and Admissions Officers, 1, 33+. Retrieved January 24, 2006, from College Board Preparation Database.
  7. College Admissions – SAT & SAT Subject Tests. College Board. URL accessed on November 2009.
  8. http://www.triplenine.org/main/admission.asp
  9. SAT FAQ: Frequently Asked Questions. College Board. URL accessed on May 29, 2007.
  10. http://sat.collegeboard.org/register/calculator-policy
  11. Calculator Use and the SAT
  12. 12.0 12.1 Winerip, Michael. "SAT Essay Test Rewards Length and Ignores Errors", New York Times, May 5, 2005. Retrieved on 2008-03-06.
  13. Jaschik, Scott. "Fooling the College Board", Inside Higher Education, March 26, 2007. Retrieved on 2010-07-17.
  14. Collegeboard Test Tips. Collegeboard. URL accessed on September 9, 2008.
  15. The scoring categories are the following, Reading, Math, Writing, and Essay.
  16. 2009 Worldwide Exam Preparation & Tutoring Industry Report – Market Research Reports – Research and Markets
  17. SAT Prep - Are SAT Prep Courses Worth the Cost?
  18. My SAT: Help
  19. SAT Percentile Ranks for Males, Females, and Total Group:2006 College-Bound Seniors—Critical Reading + Mathematics. (PDF) College Board. URL accessed on May 29, 2007.
  20. SAT Percentile Ranks for Males, Females, and Total Group:2006 College-Bound Seniors—Critical Reading + Mathematics + Writing. (PDF) College Board. URL accessed on May 29, 2007.
  21. Membership Committee (1999). 1998/99 Membership Committee Report.
  22. 404 Error page
  23. University of California Scholarship Requirement. Retrieved June 26, 2006.
  24. (2010). 2010 SAT Trends. The College Board.
  25. frontline: secrets of the sat: where did the test come from?: the 1901 college board. Secrets of the SAT. Frontline. URL accessed on 2007-10-20.
  26. 26.0 26.1 26.2 26.3 26.4 26.5 Lawrence, Ida, Rigol, Gretchen W.; Van Essen, Thomas; Jackson, Carol A. (2002). Research Report No. 2002-7: A Historical Perspective on the SAT: 1926–2001. (PDF) College Entrance Examination Board. URL accessed on 2007-10-20.
  27. 27.0 27.1 frontline: secrets of the sat: where did the test come from?: the 1926 sat. Secrets of the SAT. Frontline. URL accessed on 2007-10-20.
  28. 28.0 28.1 "Intelligence". MSN Encarta. Retrieved on 2008-03-02.
  29. 29.0 29.1 SAT I Individual Score Equivalents
  30. The Center for Education Reform. "SAT Increase--The Real Story, Part II", 1996-08-22.
  31. Schoenfeld, Jane. College board drops 'score choice' for SAT-II exams. St. Louis Business Journal, May 24, 2002.
  32. Freshman Requirements & Process: Testing. stanford.edu. Stanford University Office of Undergraduate Admissions. URL accessed on 13 August 2011.
  33. College Board To Alter SAT I for 2005–06 – Daily Nexus
  34. (2009) "Chapter 12: Improving Paragraphs" The Official SAT Study Guide, Second, The College Board.
  35. Cornell Rejects SAT Score Choice Option. The Cornell Daily Sun. URL accessed on 2008-02-13.
  36. Universities Requesting All Scores. (PDF) URL accessed on 2009-06-22.
  37. http://professionals.collegeboard.com/profdownload/sat-score-use-practices-list.pdf
  38. Caldwell, Tanya. "SAT Reforms May Have Negative Impact on Students, Counselors Say", The New York Times, 2012-03-27. Retrieved on 2012-10-31.
  39. 39.0 39.1 39.2 SAT FAQ. The College Board. URL accessed on 2008-09-13.
  40. Hoover, Eric $2.85-Million Settlement Proposed in Lawsuit Over SAT-Scoring Errors. The Chronicle of Higher Education. URL accessed on 2007-08-27.
  41. Maslin Nir, Sarah. "7,000 Private School Applicants Got Incorrect Scores, Company Says", April 8, 2011.
  42. Rothstein, Richard. "Better sums than at summerizing; The SAT gap", August 28, 2002.
  43. Frey, M. C. (2003). Scholastic Assessment or g? The Relationship Between the Scholastic Assessment Test and General Cognitive Ability. Psychological Science 15 (6): 373–378.
  44. Beaujean, A. A. (2006). Validation of the Frey and Detterman (2004) IQ prediction equations using the Reynolds Intellectual Assessment Scales. Personality and Individual Differences 41: 353–357.
  45. Don't Believe the Hype, Chideya, 1995; The Bell Curve, Hernstein and Murray, 1994
  46. Culture And Racism[dead link]
  47. Herrnstein, Richard J. (1994). The Bell Curve: Intelligence and Class Structure in American Life, 281–282, New York: Free Press.
  48. Achievement Versus Aptitude Tests in College Admissions
  49. Phelps, Richard (2003). Kill the Messenger, 220, New Brunswick, New Jersey: Transaction Publishers.
  50. Winerip, Michael. "SAT Essay Test Rewards Length and Ignores Errors", May 4, 2005.
  51. Harris, Lynn. "Testing, testing", Salon.com, May 17, 2005.
  52. http://hypertextbook.com/eworld/sat.shtml
  53. SAT Test Demographics by Income and Ethnicity
  54. Hirsh, E.D. "The Schools We Need: And Why We Don't Have Them", Doubleday, 1996

Further reading

External links




This page uses Creative Commons Licensed content from Wikipedia (view authors).