Test Prep Resource Guide: The Secret to Higher Achievement

At Aino Education, we believe that real learning and full immersion are the secret to higher achievement, but don’t just take our word for it. Plenty of others make the case for an educationally sound and sustained program like Aino’s, including news organizations, university professors, independent researchers, college advisement organizations, and the makers of the SAT themselves:

Paying for Test Prep Doesn’t Yield Big Returns. Students and families may not be getting as much help as they think from commercial admission test preparation...average gains as a result of commercial test preparation are in the neighborhood of 30 points on the SAT and less than one point on the ACT, substantially lower than gains marketed by test preparation companies. Read more ››

NACAC Discussion Paper: Preparation for College Admission Exams. Contrary to the claims made by many test preparation providers of large increases of 100 points or more on the SAT, research suggests that average gains are more in the neighborhood of 30 points. Read more ››

College Board Research Notes. The College Board (1999) has acknowledged that long-term, intensive preparation that resembles genuine education may improve reasoning skills that are both measured by the SAT I and required for success in college. Read more ››

Curbing Boasts About Test Prep. Critics have said that they believe test-prep companies' initial tests yield low results, encouraging people to sign up for courses and to credit the companies for large gains later. Read more ››

Princeton Review To Stop Claiming 255-Point Boost In Test Scores. Most kids who take the SAT twice simply do not see large improvements in their scores. Read more ››

Most Test Prep Score Increases Are Based on Self-Selection and Small Samples. A critique by Derek C. Briggs of the University of Colorado at Boulder. Read more ››

Report Highlights Test Prep Paradox—Paying for Test Prep Doesn’t Yield Big Returns, But Returns May Still Matter in Light of Admission Practice

May 20, 2009 – (Arlington, VA) – Students and families may not be getting as much help as they think from commercial admission test preparation, according to a report commissioned by the National Association for College Admission Counseling (NACAC). Existing academic research suggests average gains as a result of commercial test preparation are in the neighborhood of 30 points on the SAT and less than one point on the ACT, substantially lower than gains marketed by test preparation companies. However, the research report also indicates that some colleges and universities may make inappropriate distinctions among applications based on small differences in admission test scores, making even minimal test score gains potentially important in those decisions. The report suggests more comprehensive research is needed to further understand the impact of specific types of test preparation, as distinct from other factors that may improve test scores.

Read the original article ››

2009 NACAC Discussion Paper: Preparation for College Admission Exams

Summary of Test Preparation Research
The existing academic research base indicates that, on average, test preparation efforts yield a positive but small effect on standardized admission test scores. Contrary to the claims made by many test preparation providers of large increases of 100 points or more on the SAT, research suggests that average gains are more in the neighborhood of 30 points. Although extensive, the academic research base does have limitations. Most notably, few published studies have been conducted on students taking admission tests since 2000. Only two studies have been published on the effects for ACT scores, and no studies have been published since the 2005 change to the SAT, which added the Writing section among other changes. In addition, many previous studies were conducted on small samples or had other methodological flaws. Additional large-scale studies of test preparation—including both the ACT and SAT and examining a variety of test preparation methods—will be important to understanding more about the relative value of different types of test preparation. However, even with these caveats in mind, students and families would be wise to consider whether the cost of a given test preparation option is worth what is likely to be a small gain in test scores.

Download the full article in PDF format ››

College Board Research Notes

Coaching and the SAT® I
In addition to test familiarization activities, some test takers undertake much more extensive preparation. For instance, Schwartz (1999) described an intensive yearlong tutoring program that “isn’t so much teaching the test as simply teaching—in this case, intensive tutorials in math or reading” (p. 56). Reportedly, this program has resulted in large score gains on the SAT I. The assertions are based largely on anecdotal reports instead of formal research. Such programs, which extend well over a year in duration, differ from more popular test preparation programs, as they focus much more heavily on content that is relevant not only to the test, but also to education and learning. Such instruction may come at a very high price (fees of $415 per session were reported by Schwartz).

The College Board (1999) has acknowledged that such long-term, intensive preparation that resembles genuine education may improve reasoning skills that are both measured by the SAT I and required for success in college. While no controlled research study has yet examined the effects of such intensive tutoring programs, the College Board has maintained that students’ math and verbal reasoning skills (and their SAT I scores) can improve as a result of rigorous academic study and other efforts (both in school and out of school). In evaluating the effects of coaching programs on test scores, the duration of the program and the focus of the program should be considered.

Download the full article in PDF format ››

Curbing Boasts About Test Prep

May 13, 2010
The Princeton Review, a leading test-prep company, has agreed to stop using claims about average score gains in its marketing materials. While company officials say they believe the claims were accurate, and that they were preparing to move away from such claims without outside prodding, the decision came after an investigation by the National Advertising Division of the Council of Better Business Bureaus, which found the discontinuation of such claims to "be necessary and appropriate." (The organization acts as an arbitrator among companies that agree to have complaints probed.)

The inquiry was based on a complaint from Kaplan Inc., a major competitor in the test-prep industry. Kaplan asserted that Princeton Review had no basis to talk about score gains because the start point for measuring gains was generally determined by diagnostic tests, while the end point was a live test. Critics have said that they believe test-prep companies' initial tests yield low results, encouraging people to sign up for courses and to credit the companies for large gains later.

Whatever the accuracy of the claims, they have been quite visible. Examples cited in the investigation of the Princeton Review include: "In fact, our students improve their GMAT scores by an average of 90 points" or "Our students improve their GRE scores an average of 206 points" or "Our SAT Ultimate Classroom students average a score improvement of 255 points."

Read the complete article ››

Princeton Review To Stop Claiming 255-Point Boost In Test Scores

NEW YORK — Why don't most students' SAT scores dramatically improve the more times they take the test?

A. They don't study hard enough.

B. Their parents don't enroll them in fancy test-prep classes.

C. Most kids who take the SAT twice simply do not see large improvements in their scores.

The correct answer is C, according to the College Board, the nonprofit organization that administers the SATs. And here's the latest development in the debate over whether kids can dramatically improve their scores: The Princeton Review company no longer claims that its "Ultimate Classroom" SAT test-preparation course can boost SAT scores by 255 points.

The National Advertising Division of the Council of Better Business Bureaus, which examines accuracy in advertising, announced May 12 that The Princeton Review would "voluntarily discontinue certain advertising claims ... following a challenge by Kaplan, Inc., a competing test-preparation service."

Read the complete article ››

Most Test Prep Score Increases Are Based on Self-Selection and Small Samples

Comment on Jack Kaplan's "A New Study of SAT Coaching"
By Derek C. Briggs

Kaplan calculates average score gains on the SAT-M of 60 and 87 points, respectively, for two different cohorts of nine students (excluding three students in the second cohort who had been previously coached). How much of this gain can be attributed to Kaplan's coaching? To estimate the coaching effect, Kaplan needs a comparable control group of high school students who take the test in the spring of their junior year and then again in the fall of their senior year. Not having such a group in his sample, he looks elsewhere: first at national score gains calculated from a 1997-98 College Board study, then at score gains calculated for 50 students from a local public high school in 2000. In each case Kaplan finds the average score gain on the SAT-M to be 13 points. If 13 points is used as a control baseline, the estimated effect of Kaplan's coaching would be 47 points for the first cohort and 74 points for the second.
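
The baseline subtraction behind those figures is simple arithmetic; the following minimal Python sketch retraces it (the function name is ours for illustration, and the numbers are the ones quoted above):

    # Coaching effect = observed score gain minus the gain that comparable
    # uncoached retakers would have made anyway (the control baseline).
    def coaching_effect(observed_gain, baseline_gain):
        return observed_gain - baseline_gain

    BASELINE = 13  # average uncoached SAT-M gain in Kaplan's comparison groups
    for gain in (60, 87):
        print(coaching_effect(gain, BASELINE))  # prints 47, then 74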

Because of the self-selected nature of Kaplan's students, the suitability of this baseline is highly questionable. We have little reason to expect Kaplan's cohorts to be comparable to the 1997-98 national population. To begin with, the average 13-point gain calculated by the College Board comes from all students taking the test twice, including those with prior SAT-M scores as low as 280. In Kaplan's cohorts the prior SAT-M scores are 460 and 480, respectively. Beyond this, students in the national population are likely to vary substantially along any number of variables correlated with SAT-M performance. The 2000 high school sample Kaplan uses is more comparable with respect to prior SAT-M scores and socioeconomic background, but other potentially confounding variables—for which we have no information—include race/ethnicity, academic achievement, personal motivation, and other test preparation activities.

How much of an impact might self-selection bias have on estimated coaching effects? Any answer is pure speculation, but in the analysis I did using data from NELS:88, controlling just for demographic and academic achievement variables reduced the coaching effect for SAT-M by 26 percent. Extrapolating this reduction to Kaplan's study would reduce his estimated coaching effects to 35 and 55 points for the two cohorts.
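
Read literally, that extrapolation is just a proportional discount; here is a minimal sketch under that assumption (the 26 percent figure is Briggs's NELS:88 estimate quoted above):

    # Discount the raw coaching effects by the 26% reduction Briggs observed
    # after controlling for demographic and academic achievement variables.
    REDUCTION = 0.26
    for raw_effect in (47, 74):
        adjusted = raw_effect * (1 - REDUCTION)
        print(round(adjusted))  # prints 35, then 55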

It is difficult to attach much confidence to the precision of Kaplan's estimated coaching effects because they are based on extremely small samples of students. This makes it more likely that the effect is either over- or underestimated due to chance variability. For example, if the underlying population standard deviation for Kaplan's coached students were about 60 points, then a single standard error around his coaching effect estimate would be 20 points (60 divided by the square root of the nine students in a cohort). Again, assuming for the moment that self-selection bias reduces Kaplan's effect estimates by about 26 percent, the 95% confidence interval for his coaching effect would be between –5 and 75 points for the first cohort, and between 15 and 95 points for the second.
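
For readers who want to retrace the interval arithmetic, here is a minimal sketch. It assumes nine students per cohort (consistent with the cohort sizes reported above) and a 95% interval of plus or minus two standard errors, which is what the quoted endpoints imply:

    import math

    # Standard error of a mean gain: population SD over the square root of n.
    def standard_error(sigma, n):
        return sigma / math.sqrt(n)

    se = standard_error(60, 9)  # 60 / 3 = 20 points, matching the text
    for adjusted_effect in (35, 55):
        low, high = adjusted_effect - 2 * se, adjusted_effect + 2 * se
        print(low, high)  # prints -5.0 75.0, then 15.0 95.0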

Read the complete article ››