Major Changes in SAT Announced by College Board (nytimes.com)
113 points by daegloe on March 5, 2014 | 147 comments



This sounds nice, but I'm not holding my breath hoping that the "new" SAT will be any better.

I took both the "old" SAT (the one that they discontinued around 2004) and the "new" SAT (the one that they're now discontinuing). I actually thought that the structure of the old one was better in many ways - for example, the analogies were often terribly written, but the idea of testing analogies as a reasoning tool is very powerful, and much more so than just doing passage after passage of reading comprehension.

Furthermore, this is the exact same language that they used to justify the decision to change the SAT 10 years ago. Coleman isn't saying anything new when he's criticizing the SAT today; he's just recycling the same PR language that they used a decade ago.

Of course, perhaps they really are genuine. I'd love to be pleasantly surprised. But reading this gives me total deja vu from the news stories I remember reading in 2002.


Analogies are legitimate tests of reasoning ability. As I recall, however, the analogies of the "old" old test often relied on obscure words. This turned them into de facto vocabulary tests, defeating their purpose as pure assessments of reasoning. At the very least, vocabulary became a confounding variable in assessing the specific aptitude that analogies were meant to assess.

One could make a strong argument for plain-English analogies, absent the $10 words. If every test taker understood most of the words, then we could test for reasoning ability on a more normalized basis.

That's not to say testing vocabulary is totally invalid. Opinions vary. But if we assume vocabulary is worth testing, then we should test it independently.


I always assumed analogies were explicitly designed as a mechanism for testing vocabulary, not reasoning. I don't understand how analogies would have any value in testing reasoning skills. I can't imagine an analogy question that isn't either obvious or subjective.


The whole point of analogy problems is to test reasoning skills. Specifically, logical skills, inferences, categorization, and so forth. The vocabulary test nested within the analogy test is incidental. It creates a bivariate challenge (vocabulary + categorization), which is not necessarily an invalid test. It's just not a pure test of analogies, and it's also duplicative of the vocabulary portion of the test.

In re the value of analogies, there are some computer scientists and philosophers who believe analogy-drawing is the irreducible core of higher cognition. I'm not sure I'd go that far, but Douglas Hofstadter has come close:

http://www.amazon.com/Surfaces-Essences-Analogy-Fuel-Thinkin...


why not both?


When you test for both, it's tougher to get a good measure of either.


I did decently the last time I took that analogies test. For me, it was all about analyzing the sub-word component meanings (i.e., each syllable and the relations between syllables), not just analyzing the vocabulary at the word level. Maybe I'm just strange.


At Khan Academy, we're really excited to work with the College Board to provide awesome, free test prep to everyone. We're putting huge emphasis on really learning the material instead of practicing test-taking skills that won't be useful afterwards.

Here's a little more about the partnership if you're interested:

http://techcrunch.com/2014/03/05/khan-academy-gets-major-par...


"Instead of teaching to the test we're going to put huge emphasis on really learning the material instead of practicing test-taking skills that won't be useful afterwards."

Now how do we get the schools to actually do that?


Changing the incentives would be a start. If you give schools very specific metrics, and then judge them on how well they optimize those metrics for the least money, it's no surprise that they are going to... try to find the most direct way of optimizing those metrics. In fact, the ones who don't do that will be penalized!


"Beware your metrics, for they will improve."


Ideally you test better, such that teaching to the test is identical to teaching well. That's the theory behind the Oxbridge tutorial system. Unfortunately good tests and scalable tests seem to be mutually incompatible, and non-scalable tests incommensurable.


People in discussions like this often assume that it's easy to make test scores go up without teaching the material.

Here are specific tests from some large states (the first I found via google):

http://www.nysedregents.org/Grade8/Mathematics/20100505book1...

http://www.nysedregents.org/Grade8/EnglishLanguageArts/04261...

http://www.cde.ca.gov/ta/tg/sr/documents/cstrtqmath7.pdf

How do you make scores on these specific tests go up to any significant degree without making students learn the material? Please explain the techniques in detail.


Well, all of the tests are multiple choice. Switching away from that format would certainly encourage a better understanding of the material. The general format of the 4 choices is:

1 correct answer, 1 or 2 close-but-incorrect answers, and 1 or 2 obviously wrong answers.

That format makes it significantly easier to get the correct answer than simply presenting the problem and asking the student to solve it. I'd imagine some of you are thinking "Well hell, they would still need to know the formula," but they also give you formula sheets with the tests. Since they give you a list of possible answers and a list of applicable formulas, you can pretty quickly plug in some numbers and determine the correct answer. Does it still require some base understanding? Yes. But I can tell you with absolute certainty that my grades would not have been as high had I not known how to game the testing system in this manner.
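To make that concrete, here's a toy illustration of the plug-and-check approach (the question and the numbers are hypothetical, not from any actual test):

    # Hypothetical 4-choice question: "If 3x + 7 = 25, what is x?"
    # (A) 4  (B) 5  (C) 6  (D) 9
    # Instead of solving for x, plug each choice into the left-hand
    # side and keep the one that matches the right-hand side.
    choices = {"A": 4, "B": 5, "C": 6, "D": 9}
    for label, x in choices.items():
        if 3 * x + 7 == 25:
            print(label)  # prints "C" -- no algebra needed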

I'm only a few years removed from high school, so I can't imagine things have changed too much, especially for the better. For reference, I am from South Carolina, though I went to one of the higher-performing, wealthier public school systems in the state.


Teaching students how to get into the ballpark of the right answer and reject obviously wrong ones increases their understanding. It's an important skill and there are adults out there who embarrass themselves by failing to do it (for example http://www.buzzfeed.com/charlesclymer/5-things-more-likely-t... ).

Same thing for taking a reference work (e.g. a formula sheet, a wikipedia page) and figuring out which piece of information you need and applying it.

Remember, I'm not asking if the tests could be better. I'm asking what you do to make test scores go up without making understanding go up.

I'd be at least moderately convinced if you could even show me two teaching techniques, A and B, where A is superior for test scores but significantly inferior for understanding.


I used to play rock-paper-scissors with my child. Could win every time. You could see on his face his simple logic.

You can learn to take tests - I did that for the GRE, got a 99% despite never having studied for 2 of the area tests I took. There's a logic to test questions; you can learn it.

There was no understanding involved, I can assure you. Just a recognition of what is considered a 'fair' question, which lets you eliminate answers that are too convoluted.


"I did it, it's totally possible, I'm not going to support this claim at all but I assure you it's true."

It would be far more convincing if you actually skimmed the NY Regents test and presented a teaching strategy which you believe would raise scores but not increase understanding.

Note that if such a strategy does not exist for the NY test but does exist for every other test, then we have a solution to all the complaints about testing - use the NY test for everyone.


You provide them with a premade test-prep curriculum that is interwoven with genuine educational videos, perhaps in a subtle way, such as providing coarse links to review material in response to incorrect answers. Then you hope that educators will either see the value in your cross-links or won't have the time to scrape your question bank and eliminate the cross-references ("condensing" the review material).

Disclaimer: I have no actual idea if this is what they're doing, I'm just saying that it could be done this way.


As a former SAT prep teacher, I'm very excited to learn about Khan Academy's partnership with the College Board. Previously, success on the SAT seemed to depend solely on how much money was spent on SAT prep. Glad to hear that the College Board is interested in leveling the playing field.


SAT prep has little effect on SAT scores. From research on this topic [1]:

"Does test preparation help improve student performance on the SAT and ACT? For students that have taken the test before and would like to boost their scores, coaching seems to help, but by a rather small amount. After controlling for group differences, the average coaching boost on the math section of the SAT is 14 to 15 points. The boost is smaller on the verbal section of the test, just 6 to 8 points. The combined effect of coaching on the SAT for the NELS sample is about 20 points. The effect of coaching is similar on comparable sections of the ACT. The average score increase on the ACT math section probably lies within the range of 0 to .4 points, while the coaching effect on the English section is about .3 to .6 points"

[1] http://nepc.colorado.edu/files/Briggs_Theeffectofadmissionst...


There is a lot of bad test prep out there, especially any test prep taught by high school teachers. I have done test prep privately for six years, and I can guarantee you coaching can have substantial effects. These studies group all test prep under a giant umbrella, but take elite one-on-one test prep, at the premium price that parents actually pay, and you'll see substantial gains. I have spent $0.00 on advertising in the last six years and I am routinely overbooked because of word of mouth. There are just so many awful teachers and courses out there.


Hm, it's very possible that SAT prep companies game the system by making their diagnostic tests quite difficult.


> Previously, success on the SAT seemed to depend solely on how much money was spent on SAT prep.

Most likely, a correlation-is-not-causation thing -- people for whom success on the SAT is most important are likely to spend more money on test prep, but also to be best prepared generally. As ueqirat points out, the research that has been done on SAT test prep utility hasn't actually borne out a real effect.


I know it's just one datapoint, and SAT II, not SAT, but I took the SAT II writing test around 2000 and got a 600. I then took a test prep course over a few weeks, took the test again, and got a 690, which is a pretty big difference I'd say. I can tell you the test prep course certainly didn't improve my writing at all...


Eh, maybe I'm just an outlier but I got a 1370 in 1997 without really trying, let alone actually spending money to prepare. I'm also a 100% product of public schools.


Not sure what you are saying. The claim is that you could have done better had you really tried and taking courses is one way of doing that.


Maybe I misinterpreted "success on the SAT" and "solely". In context, I took it to mean that people who had test prep did well, in the absolute sense, and those who did not have test prep did not do well. I was providing myself as an example of someone who did well in the absolute sense with no preparation. I was not saying I could not have done better with preparation.


Some feel if they succeed without trying, then this success is more meaningful than those whose foundation for success was built on hard work.


Some also feel their time is better spent on learning and smart work and not dry, repetitive practice of test-taking strategies.


I did literally no prep whatsoever (just walked into the test location the day I was told to), and did very well. I'm sure test prep would have helped my writing score (my lowest one) by teaching me how to write a god-awful essay that contained the pieces to get you extra points, but my verbal and math scores were about as high as they were going to get.

The secret to standardized testing is fast reading comprehension. I've always been a fast reader (without sacrificing comprehension). It frees up time to go back and really spend time on the tougher questions that just isn't available if you have to spend a long time just reading the question (it's more than just in the reading comprehension section, since you have to actually read every question to get through the test).


Yeah, I meant more that it is very easy to go from the 400-500s range to the 600-700s range with SAT prep. Especially on the writing section -- they only test about 20 grammar rules, many of which are subtle and wouldn't necessarily be noticed without prep.


I scored very well on the SATs back in 2001 or so (better than 700 on each section) with no prep. I don't think anything on there was beyond the reach of a normal HS student. I went to public HS and took no advanced math or English classes prior to the test.


I can see that. I remember quite a bit of the test being formulaic, and preparation for what the formulas are would certainly be worth a few hundred points. I can't speak for the writing section, because it didn't exist in 1997.


Where are you getting this information? I've taught hundreds of kids SAT prep and I can tell you that it is definitely not "very easy" to go from 400-500s to 600-700s.


The writing section is the only one I think test prep would have helped with. Especially with essay structuring. I just wrote a response to a prompt the way I've always written.


For the writing section, you have to follow an exact structure and use the right amount of words. You can also make stuff up to support your arguments.

It's basically "Write a generic 5 paragraph essay using memorized argument flow."


Yea, with no test prep, I just wrote an essay as I always had. It was my weakest section.


I didn't spend a dime on SAT prep and I did reasonably well (good enough to get into Caltech).


Perhaps you didn't give money to Princeton Review or Kaplan, but I'm guessing you didn't walk home barefoot to abusive parents in Louisiana either.


Good thing there were no questions about extrapolating from a sample size of 1.


Since the parent said "solely" all it takes is a single counterexample :-)


and that's why you are posting on HN, with other members of above-average baseline intelligence.


This is going to be huge for you guys, congratulations and I hope it all works out!


I do not envy the College Board. We are in a social and political environment where many issues that bear directly on the test are things you can't discuss in polite company (e.g. whether IQ is a real and relevant phenomenon, the 0.8 correlation between IQ tests and the SAT, the correlation between IQ and socioeconomic status). This makes it extremely difficult to both optimize for their goals, and to communicate that to the involved parties. Because of this it's hard to even know what their true goals are and whether they are actually going to achieve them.


I think these changes are long overdue. To me, the CR portion of the SAT seemed incredibly biased against people from other cultures. Not only was there a language barrier, but also there was the problem of allusions. Many of the passages contained references to things that you wouldn't expect a typical immigrant to know (things like Greek mythology, the indulgences of the wealthy, cultural icons, etc).

I think the waivers for low-income students are also pretty great. When I was in high school, taking all of the standardized exams cost me hundreds of dollars. I went to school in a place with a lot of gentrification, so a lot of my classmates could barely afford to get their high school transcript (let alone pay to take tests). There were many efforts to get the school system to subsidize standardized exams, but they were all unsuccessful.


These tests are for admission to an American college / university. Basic knowledge of Western culture is immensely useful if you're going to be in such an environment, both for studies and for social purposes.


Yes, basic knowledge is useful even if you aren't going to college. But the problem is that the tests reference arcane parts of western culture that an immigrant wouldn't be reasonably expected to know. My parents are both immigrants and I was born in the US (I have lived here my entire life), and I still had a hard time understanding some of the allusions in the CR portion. They aren't references to popular culture; they are references to an insular culture.


The only question I recall from my SAT required knowledge of the recipes for various alcoholic drinks. I thought that was a bit unfair as I was well below drinking age, so why would I know that stuff? I'm sure I got that one wrong.


Not just the exams -- the practice exams, the prep courses, the books, the tutors.

And not just the money -- the time investment in preparing and taking the SAT/ACT is massive. High schoolers in low-income areas or ones forced into situations where they need to support their family with home care/employment are at a huge disadvantage (though the same can be said about other college admission aspects as well.)


Am I the only one here who's bothered by a further narrowing of the expected knowledge of students? A good engineer or programmer isn't a human calculator who can bang out hundreds of calculations per minute or recite petty facts in a very specific area. Good programmers and engineers draw on their wide breadth of knowledge, and the deep understanding it generates, to solve complex and abstract problems.

Anecdotally, I'm seeing a substantial number, if not the majority, of CS students who can't code worth a damn, but got into the department solely because they did well in a small number of unrelated subjects.

While I'm not terribly concerned with the changes in the SAT, I am concerned about it adding more momentum to this trend.


No. I was also depressed by the approach of 'we're excluding arcane words!' I do really well on those tests - not because I have encountered every word that might come up, but because I do have a big enough vocabulary to draw analogies and guess at the meaning of arcane words by looking at their internal structure, e.g. 'membranous' seems to consist of membrane + -ous, so I guess it's an adjective describing something with a membrane-like quality.

I developed a love of etymology from spending many long hours poring over dictionaries as a child, including the bits about word roots and so on. When I see that things like this are being dropped from the SAT, it's like being told that that knowledge and the effort to acquire it lacks value.


TepidSolarSoul, your account seems to be banned so I can't reply to your comment directly.

I certainly agree that 'maudlin' is harder to figure out etymologically, but on the other hand I don't think it's all that unusual either; one would certainly have come across it if one had read any amount of literature from the late 19th/early 20th century, whether that was something by Dickens or a bunch of Sherlock Holmes stories.

I personally think the fact that everyday vocabularies are shrinking is a terribly bad thing, and a trend we should be resisting rather than accommodating.

I didn't grow up with a silver spoon in my mouth or as the offspring of professional educators, incidentally - indeed, my parents went through a phase of trying to discourage me from reading material that was too advanced or adult-themed. They read to me when I was small, my mother enjoyed crosswords, but the main thing they did for me was tell me to go look up a word in the dictionary rather than supply a definition whenever it was asked for. I enjoyed reading enough to find the dictionary an interesting thing to browse in its own right.

"Better keep opening Old Nassau's gates to the upper echelons of society for anybody who can ace this sort of trivia."

I'm not sure what you're trying to say here, but I'm not holding my breath for an invitation from Princeton.


I agree. I would actually prefer if the test subjects were considered new to all students, and they were expected to become experts in those subjects before taking the test. What you need to score for is the ability to learn and apply the knowledge that you already have to new subjects, not to repeat the basics. You're ready for college when you're able to learn new things and apply them quickly.

It's a good thing IMO that people have to learn 'SAT words'; it selects for the ones who are able to put in the effort to expand their vocabulary, and to retain that new knowledge.


"I'm seeing a substantial number, if not the majority, of CS students who can't code worth a damn, but got into the department solely because they did well in a small number of unrelated subjects."

Isn't that an argument for more relevant testing? If the current selection picked up people with poor aptitude for programming or whatever, then it makes sense to change it.

Engineers need the ability to think hard and solve complex problems. So let's measure those instead of how many arcane words they memorized, how good they are at writing self-promoting essays, how many irrelevant facts they memorized, or how fast they are at solving simple, memorizable math problems.

There is nothing wrong with adjusting expectations to the current world.


> For many students, Mr. Coleman said, the tests are mysterious and “filled with unproductive anxiety.” Nor, he acknowledged, do they inspire much respect from classroom teachers: only 20 percent, he said, see the college-admissions tests as a fair measure of the work their students have done in school.

What is really interesting to me is that while the rhetoric against the SAT (and similar tests like the MCAT and LSAT) has increased over the last few decades, in practice the tests are more important than ever. In the last 30-40 years, median scores at the top 10-15 schools are up about 100 points, adjusted for the 1995 recentering. In practice, SAT scores are almost entirely determinative for college admissions. High schools pump out so many 4.0+ students that 100-200 point differences on the SAT dominate differences in admissions outcomes.

And apparently, even companies are asking people for their SAT scores these days: http://www.businessinsider.com/goldman-sachs-bain-mckinsey-j.... Without taking a position on either side, I have to wonder why the rhetoric and the practice are so out of step on this issue.


Needless to say, all speculation:

It's pretty hard to be dumb as a post and score 1500+ (excluding the short-lived essay). It may not be the most predictive factor for college success overall, but as the top schools have become increasingly competitive, they can afford to use higher and higher scores as a baseline. If they miss some diamonds in the rough who test poorly, they still won't have any trouble filling a class. Test-taking wizards with little else to recommend them will get filtered out through other methods.

On the other hand, if you're a mid-tier school trying to choose between a pair of otherwise similar applicants with scores of 1000 and 1200, it might matter to you a great deal how predictive the SAT is vs. the ACT or some other metric. Meanwhile, no matter how confident you are in its predictive power, at least it's an objective metric. One that, for better or worse, also has a marketing impact, since average incoming class scores show up near the top of college survey data.

I think most of the controversy about the SAT and other standardized tests centers around the predictive power of results near the median. Especially since re-centering, the test doesn't try hard to distinguish the top 1% from the top .01%, but it's very important that it correctly distinguish the 60th percentile from the 40th.


Higher population numbers would account for both effects, no?


To a degree. In 1976, someone at both medians at Yale would have a 1410 on the 1996 scale.[1] In 2013, that was a 1510. I can't find percentile charts from back then, but I believe the distribution has stayed roughly the same, which means that Yale has gone from a median score in the top 4% to one in the top 1%.[2] In 1976, about a million people took the SAT, so a top 4% score was a pool of about 40,000 potential students. In 2013, about 1.7 million people took the SAT, so a top 1% score was a pool of about 17,000 potential students.

So population growth explains part of it, and much of the rest can probably be explained by improved financial aid causing applications at Yale to go up by a higher percentage than the overall increase in number of test-takers.

What's really telling is that, despite all the criticism of the SAT, Yale responded to the increasing number of applicants by using the SAT more aggressively to filter applicants. In other words, when faced with a larger applicant pool, Yale decided to reject students with SAT performance that it had previously deemed adequate, because it couldn't think of any better criteria to use to distinguish the additional applicants.

[1] Yale's data: http://oir.yale.edu/sites/default/files/W032_Fresh_SATs_5.pd.... Tables for calculating recentered scores pre and post-1995: http://research.collegeboard.org/programs/sat/data/equivalen....

[2] http://www.collegeboard.com/prod_downloads/highered/ra/sat/S....


Yeah, to expand on what I meant,

Population growth -> Naturally leads to more people willing to criticize the SAT because percentages. Especially if there remains a similar or greater level of exposure to it, which there has.

Population growth -> Larger applicant pool, and assuming Yale or whoever is unwilling to drop the SAT as a filtering mechanism, naturally means they'll raise their various bars, including the SAT score cutoff.


Obviously you would expect Yale to raise SAT medians in addition to other criteria as the number of applicants goes up, in order to maintain the class size (which it has done for the relevant time). What's interesting is how much the bars are raised relative to each other. Remember, the criteria aren't perfectly correlated. If you go from top 4% SAT to top 2%, and top 10% GPA to top 5%, you should expect the number of people passing the filter to shrink by much more than 50%. It could be as much as a 75% reduction if SAT and GPA aren't correlated at all.
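A quick back-of-the-envelope check of that worst case, assuming the two filters are completely independent:

    # If SAT and GPA were uncorrelated, the pass rates would multiply.
    before = 0.04 * 0.10   # top 4% SAT, top 10% GPA -> 0.4% pass both
    after  = 0.02 * 0.05   # top 2% SAT, top  5% GPA -> 0.1% pass both
    print(1 - after / before)  # 0.75, i.e. a 75% reduction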

Apparently, the number of applications at Yale has gone up by 2.6x since 1976, and the admissions rate has gone down by 3.5x. So Yale needs to raise its various bars to let through 1/3 or 1/4 as much of the applicant pool, in total. The fact that it has raised its SAT bar from top 4% to top 1% suggests that all the increased stringency in the admission standard comes from the SAT criterion. Indeed, to achieve that increase in SAT median with only a 2.6x increase in apps, Yale probably had to make some criteria less stringent.

That's the crazy part. Not that SAT went up, but that it seems to have been the primary tool to filter the additional applicants, to the exclusion of other criteria.


Hrm. Couple things that come to mind, if you want to keep poking:

* how did the distribution of SAT scores among applicants change during that period?

* how did tuitions change over that period?

* what's the GPA distribution versus the SAT distribution? At some point we went from a 0-4 GPA scale to a 0-5 GPA scale, and IIRC, the SAT's numbers have moved around as well. Can the preference for modifying the SAT bar be related to finer granularity being available? You sort of allude to this in your original comment. GPA may not have gone up simply because it's a worse metric than even the SAT.

* did the rise coincide with all standardized tests, or just the SAT I? SAT II? AP exams? IB?

* was the applicant pool affected by accepting transfer students or affirmative action type of deals?

* what does "top X%" actually mean in this case? Is it a stated admissions policy? Or is it it just that students below a certain SAT score virtually never qualify on some other criteria?

Sorry if some of these would be answered by your links. I'm less interested in finding the answer and more in just giving you a sounding board.


Removing the penalty for wrong answers seems to be an interesting choice to me. I thought a strong part of the test was the emphasis on carefully considering the questions so that the answers given were deliberate.

Without a penalty, guessing is just a chance at free points. Not really a measure of any skill aside from test-taking (of which the SAT already has enough). But I guess if all of your tests at a university are similar to the SAT, it might be a good measure of your potential success.


Penalties for wrong answers are the ultimate test of test-taking abilities (or rather, to an extent, of the ability to apply probabilities). Your strategy changes based on how many answers you can eliminate, rather than on whether you know the correct answer.


Being able to eliminate wrong answers is pretty much the same skill as being able to notice that you made a mistake when you look over your work. It's valuable for more than test taking.


I disagree. Checking your work and realizing you forgot a semicolon or made an addition mistake is a very different skill from being able to eliminate an answer you know is incorrect. Further, it's an even more distinct skill to know when you have eliminated enough answers that the expected value of guessing outweighs the expected penalty for a wrong answer.

That's not saying that they are not valuable skills on their own, but at age 16, they are far more associated with test prep than what the SAT purports to measure.


No, if you're actually checking your work properly, you're going in with the assumption that there are mistakes to be found, and you're looking for reasons why your answer might be wrong - the same as when you're trying to eliminate a candidate answer in a multiple choice test (especially if "none of the above" is one of the choices). The gaming the odds aspect of the SAT only comes into play when you've run out of time or run out of knowledge to use to support or contradict possible answers.


Checking my work with the assumption that I made mistakes is distinct from knowing that 2 of the 5 answers are wrong (and therefore my expected value from a guess is positive) versus when I cannot eliminate any. Saying that checking your work and knowing some answers are wrong, without knowing which one is right, are the same thing is simply not true.


Way to ignore the last sentence of my comment. I consider the guess to be separate from the process of elimination that precedes it and sometimes obviates it.


If you know all the answers though, you won't skip any or get any wrong. I didn't get a perfect score by guessing on the hard questions.


So no penalty for wrong answers increases the chances of an undeserved perfect score but decreases the test-taking skills advantage in understanding when to guess and when not to guess. The latter is much more common than the former.

EDIT: Said the penalty, meant no penalty


This is the best news about college admissions that I've heard in a very long time. As someone who was fortunate enough not to be disadvantaged by the clear socioeconomic preferences of the SAT (and I'm guessing the ACT as well, though I had much less exposure to it), this is absolutely a step in the right direction.


What are these "clear socioeconomic preferences" of the SAT?



That's hardly specific to the SAT, that's true in basically every single thing measured. Including health.

Lower income people do worse. Or perhaps people who are "worse" have lower income.


Yes. And these are problems that need addressing. Especially when the people who are doing worse are not even the ones earning the low income, just those with the bad fortune of having low income parents.


It's not the College Board's problem to solve. Their goal with the SAT is to offer colleges a predictor of college success. Why should they try to incorporate a form of affirmative action into their test? Why not leave that up to the admissions offices that will have a more complete picture of the applicant's background?


Making test prep materials more accessible doesn't really sound like affirmative action to me. It's not an attempt to counterbalance this particular socioeconomic bias, but rather to remove it.


The socioeconomic correlation isn't a bias - it reflects the reality that a poor childhood leaves you less prepared to do well in college. Erasing that effect without fully erasing the causes of the effect is a cover-up that defeats the purpose of the SAT.

Making the test harder to hack by acquiring shallow knowledge (like memorizing "SAT words" without actually being well-read enough to have seen or used them before) is a good thing, because it will make the test a more accurate predictor of the applicant's general level of knowledge. But simply making those test-prep tools more affordable and easier to find will only have the effect of compressing the distribution of scores, making the SAT a less discriminating test in the technical sense and thus a less useful measurement.


"But simply making those test-prep tools more affordable and easier to find will only have the effect of compressing the distribution of scores, making the SAT a less discriminating test in the technical sense and thus a less useful measurement."

What you are advocating amounts to an intentional handicap for those who find test-prep tools expensive. Yes, you get a wider distribution of scores. If you singled out brown-eyed people and denied them access to those tools while giving them to everybody else, you would get a wide distribution of scores too.

Both tests would be just about equally useful. If childhood leaves some less prepared to do well in college, then you do not need to add another handicap to them; they are more likely to fail even with access to those tools. And if access makes them perform better, then your "it is fair because they are not prepared" goes out the window.

Note: I'm not saying they should not sell test prep. It is a business, and it is an ok business to try to earn money from. Just that the more expensive the materials are, the less you select on actual aptitude.


The fact that there's a natural correlation doesn't mean the SAT's correlation is identical to the natural one. And if it's worse, then they should certainly try to change that.


Of course - I never said otherwise. But the changes they're making don't seem to be simply about making the test more accessible and less elitist; they seem to invite ceiling effects, and I don't see how that helps anyone.

Rather than making test prep widely available, the College Board should strive to design a test for which the only effective prep is getting a solid well-rounded education, and they should pay no attention to the complaints that some of the questions are too hard for almost anyone to answer.


I agree that, if the SAT were ideal, there'd be no way to prepare for it in particular.

However, a nationally standardized test must have consistent scores to be useful, and currently the best way to do that is to keep the general format and content consistent. That means that a student can discover and prepare for the test's general format and content in advance, which is definitely advantageous.

A truly ideal SAT would have a different format and test wildly different skills every time it's issued in order to defeat the idea of test-specific preparation, but still produce meaningful scores. I'm skeptical that such a test can produce sufficiently consistent scores, though I'm very willing to someday be proven wrong by some clever test developer. That'd be pretty neat.

So, until we create an SAT for which test-specific preparation is meaningless, it's important that test prep resources are distributed evenly so that students with good educations won't have artificially lower scores due to unfamiliarity with the SAT's standard format and content. It's an unfortunate but necessary compromise :/


This image tells me that wealthy kids are better educated. Why is that striking?


It isn't striking, but it does help perpetuate the cycle of the rich being rich and the poor being poor. This is one of the reasons why we have affirmative action, and why it continues to be necessary, otherwise we essentially create a walled garden for those who were lucky enough to be born to affluent families.


What's the solution from SAT's perspective? Normalize scores with each student's family income? Give easier questions to poorer students?


I don't think the College Board wants to normalize for background differences; that would be lazy and hugely treacherous to do. Instead, they are tackling the background differences head-on by doling out free test prep. I believe education as a whole is heading toward widespread accessibility of top-quality education for everyone. But for now, Khan Academy is taking a small step to democratize test prep, and it should be enough to make a noticeable difference.


This is a horrible graphic. What are the numbers shown? A mean, a median? What's the time frame? And if it's a single number, why is it a bar? I'd like to see a box plot.

Not to mention all the junk in the background. What do the dotted lines even mean? Tufte would have a heart attack.


Poor people perform poorly.


As someone who was perhaps the poorest member of my graduating class at an elite, private college, I can say with some degree of certainty that the SAT was an important, if not the most important, factor in helping me gain access to opportunities that no one from my rural community had ever had before. So in that sense, the SAT can be a tremendous equalizer, even if it is often the opposite.


I agree. My family income was in one of the bottom three ranges, and I had a score much higher than what is displayed for the top range. Looking back, it was a huge factor in where I've gone from there.


My immediate reaction: my score is now forever unrelatable to all except those who took the exam between 2005 and 2014. Not that anyone has asked my SAT score since I got into undergrad, but now a sizable group of people who did substantially worse than me will seem as though they did extremely well (a 1550, for example, means something very different depending on which exam you took, while my score of above 1600 will immediately remind people I took the 2400 exam).

It doesn't affect me at all, and I'm not complaining, but that's what ran through my head.


This is something that's been going on for a long time, something I watch as one who took the exam in 1979. Since then it's been successively watered down to the point it's much less useful. E.g. MIT now accepts the ACT ... and finds it's a better predictor than the SAT (!!!).


If you chop off the essay score, you'll still be related to everybody ever (with respect to test revisions).


Yes, I know. But if I got a 1500 on the new test, you could always just say you got a 1500. There's going to be some number of people who forget about the change and when it happened (people do it now; I doubt they'll forget less often going forward). However, say a score above 1600 and everyone immediately remembers the change.

Again, I don't care about how this affects me, it's just the most interesting and relevant part to me.


Unlike alma maters or perhaps Xoogler status, I don't think SAT scores ever come up casually in conversation, and where they do, I doubt someone scoring 1500/2400 would be mixed up with someone scoring 1500/1600.


I'm not really sure why SAT scores matter to anyone other than kids trying to get into their university of choice, other than some sort of bragging rights. I had some of the highest standardized test scores in the history of my high school, but I'm certainly not the most accomplished.


I haven't thought about mine since applying to college until this story came out. Just thought what ran through my head was interesting, not that it was a real problem for me.


"Math questions will focus on three areas: linear equations; complex equations or functions; and ratios, percentages and proportional reasoning. Calculators will be permitted on only part of the math section."

As if math requirements for American students aren't low enough already.


Voluntary extra tests exist for people who need to prove more than competence in the fundamentals. I'm sure there are many people who wish their field were better represented in the test. But you have 3 hours to get a read on quite a spread of topics; it necessarily must be rather hit-and-run, with the deep dives saved for later.


Having taken several such tests, I can't help observing that they're also administered by the College Board and cost money to sit - something on the order of $75 each.


What do you propose to add to this list?


Basic algebra and geometry: Order of operations; simplifying expressions; solving quadratic equations by factoring; finding area and perimeter of shapes; the classic "garden problem" (maximize area of a rectangle given fixed sum of 3 sides); naming the Platonic solids; adding, subtracting, multiplying, and dividing complex numbers; logarithms; solving exponential equations.

I think probability, in particular Bayes' Theorem, and basic number theory should be covered in a high school curriculum for the college-bound as well. Including converting numbers between base 2, base 10, and base 16; solving linear congruences with small moduli by trial and error; finding gcd and lcm; prime factorization; determining whether a pair of numbers is relatively prime. Teaching the extended Euclidean algorithm in high school would be on my "nice-to-have" wishlist, but there's only so much classroom time available.
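For the curious, the extended Euclidean algorithm mentioned above is only a handful of lines; a minimal sketch:

    # Returns (g, x, y) such that a*x + b*y = g = gcd(a, b).
    # Useful for solving linear congruences a*x = c (mod m).
    def extended_gcd(a, b):
        if b == 0:
            return a, 1, 0
        g, x, y = extended_gcd(b, a % b)
        return g, y, x - (a // b) * y

    print(extended_gcd(240, 46))  # (2, -9, 47): 240*(-9) + 46*47 = 2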


Part of the reason this stuff isn't taught as much in high school (and I say this with some hesitance, as an engineering student who has had a strong math education, imo) is that I feel most people would probably not benefit much from having that additional content added to the high school course. The other, bigger issue I see is that universities seem to want students to take most of their higher math - anything above basic algebra, it sometimes seems - at the university (just from personal experience and from hearing friends' stories too). While you could argue that this is because they are greedy and want more money from forcing students to take more classes, I'd like to think the reason is how fragmented American high school math courses can be.

Personally, as an engineering student, I'd rather take most of my math classes at my university, because they tend to be tied together well (both within the math department and with the engineering department courses), and I know that where I leave off in Ordinary Diff. Eq. 1, I will pick right back up in Ordinary Diff. Eq. 2. Coming from having taken calculus 1 and 2 in high school and entering straight into calculus 3 at university, I didn't have this comfort, and in fact ended up missing some content between the classes. There is also the issue of choice, and the limits of it within a high school curriculum.

Just my 2 cents against teaching additional stuff in high school (or probably addressing the wrong point within this comment). Also, I'm obviously not sure what it's like across the nation, but at my high school at least, these topics were all covered and taught to (nearly) all students:

> Basic algebra and geometry: Order of operations; simplifying expressions; solving quadratic equations by factoring; finding area and perimeter of shapes; the classic "garden problem" (maximize area of a rectangle given fixed sum of 3 sides); naming the Platonic solids; adding, subtracting, multiplying, and dividing complex numbers; logarithms; solving exponential equations. [...] solving linear congruences with small moduli by trial and error; finding gcd and lcm; prime factorization;


> most people would probably not benefit much from having that additional content added into the high school course

It would be great if we could switch from content to teaching actual mathematical reasoning. You can read a famous essay called "Lockhart's Lament" for more about this subject. But that's not something that's happening right now either in classrooms or on the standardized tests.

My point is that you have to teach something in high school, and it doesn't feel like the new SAT tests anywhere near four years' worth of content.

> linear equations; complex equations or functions; and ratios, percentages and proportional reasoning

"Linear equations" are a topic that can easily be taught in a month or less. "Complex equations or functions" is unclear, but I assume this means quadratic equations or maybe basic trigonometry -- probably a semester's worth or less. "Ratios, percentages and proportional reasoning" is really middle school level math -- or even elementary school level. It doesn't belong in a high school curriculum, except as review or remedial material.

So all of this content consumes less than a year of high school. I agree that four years of intensive math coursework may be the wrong bar to set for the SAT, but is it setting the bar too high to expect college-bound seniors to know more than a single year's worth of math content?

For that matter, only the best students should be expected to get a high score on the SAT; otherwise, the SAT score becomes meaningless. So I should re-phrase the question:

Is it setting the bar too high to expect the best math students among college-bound seniors to know more than a single year's worth of math content?


The suggested prep for the Math 2 subject test: More than three years of college-preparatory mathematics, including two years of algebra, one year of geometry, and elementary functions (precalculus) or trigonometry or both. I think the "put it on the subject test" is a powerful argument.


Those are what the Subject Tests and (for calculus) AP Tests are for.


In my experience, you should know most of those skills before high school anyway.


David Coleman, president of the College Board, criticized his own test, the SAT, and its main rival, the ACT, saying that both “have become disconnected from the work of our high schools.”

Understatement of the year.


When I took the ACT, two of the sections were Science Reasoning and Reading Comprehension (or something similar). The thing is, they required the exact same set of skills: the ability to read a passage and answer questions based upon what you read. The only real difference between them was that the Science Reasoning portion dealt with science related topics. I scored 36 in both sections the first and only time I took the test.


Sounds like they're dumbing down the SAT. Those soon-to-be-cut "obscure" vocabulary words are critical for mastering English.


First of all, I don't think that's true. And secondly, even if it is, mastering English isn't exactly a top-priority skill for young people succeeding in today's world, especially not up there with math and science (which are ridiculously easy on these standardized tests and are weighted less - actually, does the SAT even have a science portion???). The SAT does not test general knowledge either. (Yes, I know of the SAT II, but those tests aren't taken by most people and aren't required by colleges.)


No, they're not. Learning them outside of a useful or pleasurable context is almost a complete waste of time, akin to collecting porcelain figurines that you dislike and have no place to store.

If you run into a word you don't know, look it up.


They're being replaced by other words, ones which actually seem useful, if the two examples they give there are representative.


Besides it's fun to tell people you have logolepsy, but the seizures are controllable.


Really? When studying for the SAT I saw words that I have never seen again.


To be honest, I'm somewhat sad that Khan Academy is jumping on board with this and planning to offer SAT prep videos. I don't think it's appropriate to mix such meaningless skills as test-taking with the true skills Khan Academy helps teach. I believe it could distract students from the important material by letting them care more about what will improve that one particular test score.


The official SAT problems are well-written and so we're happy to teach students how to do them.

We're not going to teach test-taking.


But by specifically focusing on SAT questions, aren't you focusing not on the fundamental skills, but rather on the specifics of how they show up on the SAT?


This is great. I think the changes to the essay portion are excellent.

As a native English speaker, my instinct is to emphasize the essay part - but I realize that's idealistic. Still, I think essays and writing are critical for showing one's ability to synthesize (there's that word) and organize one's ideas or those of others. I think writing should be emphasized and multiple native languages supported... but that's work for the next revision. This one is certainly welcome.


Sucks really badly for dyslexics though.


Agreed... a test that aims to be as broadly applicable as possible should accommodate as many variations and common problems as possible. It may be the difference between 20 million people taking it and 20.3 million, but for that .3 million it's incredibly important.


Yes, it probably needs the option of an oral viva - funnily enough, one of my uncles did that at Oxford back in the '50s for his MA, as his writing was so bad :-)


Why is "ending the longstanding penalty for guessing wrong" a good thing?


When there is a penalty for wrong guesses, students who have a pretty good idea that they know the answer, but aren't certain, must waste time determining whether it is in their favor to answer the question.

Evaluating whether it is worth it to take a guess is a test-taking skill, and the SAT is trying to shift away from test-taking skills.


What's hard about it? If you can eliminate even one answer it's in your favor to guess.


Really you can just always guess. When you can't eliminate any answers, guessing has the same expected value as leaving the question blank. If you can eliminate one or more answers, the expected value is higher for guessing than for leaving the question blank. So in no situations does guessing decrease your expected value.
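The arithmetic, assuming the old format's 5 choices and 1/4-point deduction per wrong answer:

    # Expected value of guessing after eliminating k of 5 choices,
    # with +1 for a right answer and -1/4 for a wrong one.
    for k in range(5):
        n = 5 - k  # choices left
        ev = (1 / n) * 1 + ((n - 1) / n) * (-0.25)
        print(k, round(ev, 4))
    # k=0 -> 0.0 (same as a blank); k>=1 -> strictly positive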


Questions on tests like the SAT are designed to have wrong answers that look right at first glance, so people who think they have a better-than-average chance by guessing often don't, because they get baited by one of the wrong answers without thinking it all the way through.


Reducing student anxiety, probably.


It's worth noting the similarities to the Common Core. David Coleman was one of the main authors of the Common Core and is now president of the College Board.

This article: "Sometimes, students will be asked not just to select the right answer, but to justify it by choosing the quote from a text that provides the best supporting evidence for their answer." "Going forward, though, students will get a source document and be asked to analyze it for its use of evidence, reasoning and persuasive or stylistic technique."

CCSS CCSS.ELA-Literacy.CCRA.R.1 Read closely to determine what the text says explicitly and to make logical inferences from it; cite specific textual evidence when writing or speaking to support conclusions drawn from the text. CCSS.ELA-Literacy.CCRA.R.8 Delineate and evaluate the argument and specific claims in a text, including the validity of the reasoning as well as the relevance and sufficiency of the evidence.


As a member of the first year of students who were required to do the writing section, I'm glad it's gone. What an absolute crock that was, and it probably screwed up the other sections as well. When you make an already ponderous test over an hour longer, it becomes less about aptitude and more about endurance.


I always found the super-strong correlation between essay length and score interesting, as well as the fact that scorers are supposed not to care about factual accuracy at all. I also found that almost every essay question had a "right"-sounding and a "wrong"-sounding answer, in that answering one way would always make you sound smarter to the reader. I guess if you know these rules, you can do pretty well by just following the formula.


As if real life never presented such challenges.


Real life has never sprung a random topic on me at 7:30 AM and required me to have a handwritten essay finished 25 minutes later. This essay, in a wildly naive attempt to limit the subjectivity of judging writing, was graded largely on "coherency".

We had computers back in 2005, and almost all students typed their school essays out, but the College Board hadn't got with it. Since it took me so long to handwrite things, I didn't have a moment to think about the topic. Instead, my strategy was to prepare an extremely formulaic essay and then furiously write as fast as I could. Hey, it worked, except I was already stressed and tired by the end of the essay with over 3 hours of test to go.

To backtrack a bit, I've always assumed the SAT to be about aptitude for reasoning, not about throwing "real life" challenges at you.


"Real life has never sprung a random topic on me at 7:30 AM and required me to have a handwritten essay written 25 minutes later."

But you knew in advance that you were going to have to write an essay when you took the SAT, no? And while it may not have happened to you since, I can think of numerous examples where people have to do exactly that - lawyers, managers, doctors...

I'm sorry you find it hard to write things by hand, but that's something most people learn to deal with in junior school. I was a very slow writer myself, and I overcame it with...practice. Indeed, when under time pressure to write something I often find it easier to write by hand because I don't have editing facilities available beyond basic crossing-out.


He did know in advance, and thus memorized a formulaic essay. Yes, you can also get faster at writing through practice.

The problem is that speed of handwriting is irrelevant in almost all contexts, and training it is a ridiculous waste of time - unless you need it for the SAT. Which is a reasonable complaint about the SAT. If they measured how fast you run, you could prepare by running a lot. That would not make it the best possible test.

Managers and doctors do not write essays on random topics. They write texts about their work, on topics they supposedly know well.


"She still embodies all the awful stereotypes she did before!" "But she's got a new hat!"


I wish I had had the chance to take this version; I could have gotten a perfect score. Taking the SAT as an ESL student, those obscure words just kill you. There is a set of words that appear nowhere in modern English other than on an SAT test.


If I was on a college admissions committee, I'd take into account ESL status when comparing verbal scores from one applicant to the next. It only makes sense.



I'll start respecting the SAT just as soon as the results can be accepted as a measure of general intelligence. It shouldn't be that difficult to simultaneously test for both content mastery and aptitude.


It used to be a fairly good measure of 'g'; I note that the tables I found on the net equating scores on the 1979 exam I took put me very close to my results on formal IQ tests, like within 2-3 points.

Reversing the watering down of the exam ... that's pretty hard to imagine for the foreseeable future.


It's great when a standardized test keeps on changing. It kind of negates the point. Fundamentally the SAT (or any general test) is an arbitrary measure of how much you study. Just leave it be, I say.


One nice thing about the change is that students won't be penalized for wrong answers.


R.I.P. SAT Vocab Flashcards [& every App dedicated to them]


I take the SAT this Saturday. To be blunt: they owe everyone a fucking apology over this.


The education system is designed to operate backwards on purpose. Think about it.


Throw independent thought out the window please.



