
Problem is most states' assessments can't measure growth, and their "growth measures" are numerically nonsense.

Growth measures do exist, but they require adaptive tests.




Surely growth measures require repeated testing (so one can see the difference in a student's performance at Time A vs. Time B), not necessarily adaptive tests? Maybe I'm missing something; could you help me understand?


In the abstract, you're correct: a typical setup for an educational experiment is a calibrated pre/post test to see what changed over the course of interacting with an activity. Both tests are effectively identical and measure the same thing.

However, state tests are annual, so you don't want to measure the same thing year over year. And if you don't measure the same thing, you can't compare.

There are, technically, non-adaptive test designs which might work, but that's not what we're using, and they're a lot more complex than adaptive ones.
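
Roughly why adaptive designs help, as a bare-bones sketch (a toy Rasch/1PL item-selection loop I made up for illustration, not any state's or vendor's actual algorithm): the test keeps re-estimating the student's ability and serves the next item near that estimate, so precision follows the student instead of sitting at a fixed difficulty.

    # Toy sketch of an adaptive test under a Rasch (1PL) model -- illustrative only.
    import math
    import random

    def p_correct(theta, b):
        """Chance a student of ability theta answers an item of difficulty b correctly."""
        return 1.0 / (1.0 + math.exp(-(theta - b)))

    def estimate_theta(responses):
        """Crude grid-search maximum-likelihood ability estimate (fine for a sketch)."""
        def loglik(theta):
            return sum(math.log(p_correct(theta, b)) if right else math.log(1.0 - p_correct(theta, b))
                       for b, right in responses)
        return max((g / 10.0 for g in range(-40, 41)), key=loglik)

    def adaptive_test(true_theta, item_bank, n_items=20):
        """Re-estimate ability after every item and pick the next item near that estimate."""
        available = sorted(item_bank)
        responses, theta_hat = [], 0.0          # start at the population average
        for _ in range(n_items):
            # A Rasch item is most informative when its difficulty matches the current estimate.
            b = min(available, key=lambda d: abs(d - theta_hat))
            available.remove(b)
            responses.append((b, random.random() < p_correct(true_theta, b)))
            theta_hat = estimate_theta(responses)
        return theta_hat

    random.seed(0)
    bank = [d / 4.0 for d in range(-16, 17)]    # item difficulties spread from -4 to +4
    for ability in (-3.0, 0.0, 3.0):
        print(f"true ability {ability:+.1f} -> adaptive estimate {adaptive_test(ability, bank):+.2f}")

The point is just that item selection chases the student, so a kid well above or below standard still gets a reasonably precise score, which is what you need before subtracting two years' scores means anything.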


Strong claim! Can you elaborate?

Also, the SBAC test administered in California, Washington, and many other states is now adaptive, but schools are still measured based on point-in-time aggregates.


I can elaborate a little, but not a lot, since a full answer would be essay-length. I'll simplify and explain roughly.

The short story is that most states' assessments are designed almost as binary measures, to see if students are above or below a cutoff threshold set by the Common Core State Standards. They're designed to measure schools, and in particular, to flag failing schools where kids aren't meeting standards.

They're very good at that, actually.

However, that's almost meaningless as a measure for kids well above or below standards, or unaligned to standards.

"Growth" is almost meaningless here. If I know one number is less than three and another less than four, I can't subtract them.

The real math is fancier, but that's the gist. It gets even worse because the measured constructs are highly multidimensional and different dimensions are measured each year. It's like subtracting apples from oranges.

It also encourages the wrong behavior. For kids who are behind, I'll get the best "growth" by focusing on grade-level material and leaving gaps where kids failed to learn things before. I'll also do well to ignore my students who are ahead. Indeed, students who did well last year will inevitably hurt my "growth."

As a footnote, I would not call this a strong claim. Talk to a psychometrician and you'll see it's common knowledge.

If adaptive tests move beyond those few states, the problem goes away.



