'A Test You Need to Fail': A Teacher's Open Letter to Her 8th Grade Students (commondreams.org)
464 points by saulrh on March 24, 2012 | 175 comments



Standardized tests are optimized for grading.

They are designed to be evaluated quickly and objectively. Scantrons can be graded mechanically, and the essays described in the article can be graded with no thought and a minimum of judgment. These goals, efficiency and objectivity, impose constraints on how you can test competence, and as far as I can tell those constraints are simply insurmountable.

I have taken exactly one category of standardized test that I respected, and that was the AP tests. The essay questions for AP Lit were graded by having three English teachers read the essay and evaluate whether your answer indicated you understood the meaning of the passage and how it related to the work as a whole. In other words, it was done the hard way. And it was better than any other standardized test I've ever taken at correlating performance with understanding.

The problem is scale. Scaling is the only advantage of the current form of tests, and that's enough. Any replacement is going to have to address scaling.

(My hare-brained solution: grading essays is what teachers do over summer vacation. It's enough excess labor to remove that design constraint.)


Rather than worry about scaling, why not get rid of the essay sections completely? I have an issue with standardized tests basically setting public school curriculum, but most of the essays I remember didn't even do that (and based on the OP's piece, these essays aren't testing writing ability either.)

I recall the (Texas) essay prompts used to be incredibly vague and had nothing to do with what we were learning in class at the time. Prompts like "Write about a time you overcame a challenge" come to mind. Back then, I assumed they were just testing our ability to communicate ideas to an audience, but really all they wanted was the default 5-paragraph essay. It was little more than fill-in-the-blanks to this outline, rather than actual writing.

What made the AP Literature and Rhetoric tests so good was that everything we learned in class was applicable to them. Obviously, my teachers took time to explicitly practice the AP essays, but every other activity also helped feed my knowledge of the subject that I could use to answer the essays on the AP test. Other standardized test essays just seem to be antithetical to any possible purpose.


>Grading essays is what teachers do over summer vacation

One of the few perks of being a teacher is the extra vacation time compared to the rest of us. Cutting into that, with the typically already-poor wages, doesn't seem like a good plan.


Teacher wages are not that low. I do some math here; read the whole thread (I made an error in my initial post).

http://news.ycombinator.com/item?id=1673075

Teachers also have many perks - summer vacation, minimal accountability, defined-benefit pensions, etc. In fact, a rather significant chunk of teachers' comp is perks, not pay, far more so than for most people in the private sector.


And in-district status for their children, even if they live out of district. For a parent of 3 children living in EPA or San Jose or something, working as a teacher in Palo Alto saves basically $20k x 3 post-tax every year, vs. going to a comparable private school.

(which is why I think "people sneaking into schools with fake residence addresses" should be punished; if they're trying to send their children to another district, it should be by making a sacrifice and working in the district, vs. indirectly taxing everyone who lives in that district)


> minimal accountability

Name any job other than "Congressman" in which a single nude photo of you will put your name in the tabloids and end your career.


That's an extreme case. The problem is that after a certain point in a teacher's career, it will take nothing short of that nude photo to get them fired, no matter how ineffective or incompetent they are.


Teachers still have a ton of accountability. They aren't necessarily accountable to be good teachers, but there's a lot of stuff they are accountable to do. There's usually a lot of paperwork and all kinds of requirements.


Agreed: my wife is a teacher, and I'm not sure I'd describe it as extra vacation rather than compensation for the unpaid overtime during the school year, particularly since teachers have a very rigid schedule.


> The problem is scale.

And not only for grading, but for teaching as a whole. There is just no smart solution for educating children at a larger scale - we have to do "brute force". The fewer children per teacher, the better.

(Different story for adults, who can teach themselves, I guess, but even at universities the student/prof ratio is important.)


I'm not sure that's the case.

One example of a teacher scaling is Khan Academy. I think that something like Khan Academy could have been done even before the internet, even with VCRs. And automated practice (in math) using computers was available when I was a kid; we used it through grade 6, but in grade 7, when we moved to another school, we stopped using it.

I think the problem is more due to the fact that the teaching profession is conservative (mostly as a way to maintain power, which is not unique to teaching by any means).


"Standardized tests are optimized for grading."

And there's the money quote.


The author mentions Noam Chomsky, and I think he cuts to the point of the educational system: (http://www.youtube.com/watch?v=Xq6lFOhLJ0c)

Points out that one goal is obedience; and even "stupidity" in the system is useful, in that if you're willing enough to go along with obviously stupid orders, you'll pass through to the next level. (In other words, it filters for obedience.)

Obviously, an educational system reflects the distribution of power in a society. This is particularly obvious when we observe an official enemy nation; we have no trouble seeing how they try indoctrinating students along the interests of those with power. Unfortunately, we're taught to have blindspots when it comes to our own societies.

There are of course better educational systems. Unlike the model of dumping knowledge into your empty head, they focus on encouraging the growth of your natural capacities and internal forces. I suspect that many "autodidacts" are just people who want to escape the problems of dominant education, with whatever resources they have.


This is pretty typical: slamming the standardised tests without actually giving a better alternative. Maybe instead of complaining about this, these teachers should get together and figure out how to keep so many kids from failing out of the educational system.

Good, bad, or indifferent, this was the plan of the "No Child Left Behind" initiative (which had very strong bipartisan support, mind you), which is now heavily under fire for other reasons.

It's too bad most of the local governments believe throwing money at the situation is the solution. In the Minneapolis school districts, they're spending close to $13K per student. By comparison, a suburban school district spends just over $9K per student.

The Minneapolis graduation rate? 49%. The suburban school? 98%.


An alternative to what? Standardized testing? How about, non-standard testing? Do we have such a dearth of qualified teachers that we think we need to strip the people in charge of the classroom of any discretion? What sort of cognitive dissonance does it take to say, "Gee, this person can teach our kids the material, but they probably have no ability to evaluate whether the kids learned it".


What makes you think that your child's teacher is actually qualified to teach the material that he or she is teaching? Teaching degrees in the United States are a joke. They consistently attract the dregs of the college applicant pool.

The fact that standardized tests measure students is a secondary benefit. The entire point of No Child Left Behind was to identify the worst teachers and either 1) get them to improve or 2) get them to leave. I wholly agree with this goal. Unlike most of the people making educational policy, I'm still young enough to remember my high school experience. I remember math teachers that didn't know how to solve the problems they assigned. I remember civics teachers who knew less about the American Revolution than I did. And I didn't grow up in the inner city. I grew up in a fairly prosperous suburb. My school was regarded as being in the top 10% of schools in the state. I shudder to think how bad the teachers are in high poverty schools.

To get at your point more directly, I don't think there's any cognitive dissonance at all. I think the basic premise is being questioned: "Is this person qualified to teach our kids the material?" Do I agree with the way we're measuring how the material is being learned? No. Our current standardized tests are like yardsticks, rather than calipers. But even a yardstick is better than guesswork.


> Teaching degrees in the United States ... consistently attract the dregs of the college applicant pool.

I have to take issue with this statement because it unfairly paints an entire discipline with a broad brush. First of all, I don't think it's true that the teaching programs are particularly scraping the barrel (I'd pin that on some other programs, like business and criminal justice), and if it's truly "consistent" I'd love to see a cite. My suspicion is that the average intellectual quality of ed majors is about in line with the average student in general.

Second, even if that subgroup average were lower than the middle of the general curve, your statement implicates all teachers as bad, and comments like this contribute to undermining the classroom authority of all of them. I have no problem with calling out incompetence where it may be found, and there most certainly are individual incompetent teachers out there---we've all had a few. But if we paint all teachers as the "dregs" of academia, we make the job harder for the many competent and the several outstanding teachers that are also out there.


> I'd love to see a cite.

The common citation is this one: http://www.ncsu.edu/chass/philo/GRE%20Scores%20by%20Intended...


Ok, now we're talking about grad school, rather than college in general. This is an entirely different thing than what you seemed to be talking about (and what I responded to). As a result, we're also talking more about aspiring administrators rather than people who mostly just want to educate---even outside the "Education - Administration" specialty, a prime reason to get an MEd or an EdD is to jump tracks over into an administrative post of some sort.

Even at that, let's be a little careful how much we generalise and where we sling mud. In the first column in that table, "Computer and Info Sciences" ranks well below one subcategory of "Education" and only two spots above another; in the third column, it's below five categories of Education and tied to four more.


> "Gee, this person can teach our kids the material, but they probably have no ability to evaluate whether the kids learned it".

How do we know this person actually can teach our kids the material?

Asking teachers to write their own tests is like asking military contractors to audit themselves and determine whether or not they charged the government the right amount.



Thanks for finding those. I looked through the 2010 test and the essay questions specifically say "Use details from the passage to support your answer." Details, plural, meaning two or more. I'm not seeing essay grading instructions, but I'm leaning towards thinking this is a bit overblown.


Or, in the alternative, learn the rubric, take the silly test according to that rubric (it looks to me as if any student who is reasonably bright and really well taught could rapidly learn to produce answers that fit that rubric), and then write a really well crafted proposal, based on research, for how to make the test better. An eighth grade class that produced a lot of students capable of doing that would be very impressive and would get a lot of attention.


It seems in America we solve our problems in one of two ways:

A. Spending more money indiscriminately. Money always makes everything better. Just look at health care!

B: Getting the federal government to take over and centralize everything.

Creative ideas are almost always ignored unless they are a means of executing strategy A or B.


I think it is simpler than this: we take an action that we can both take and demonstrate having taken, whether or not it solves the problem.

If you are a politician and your constituents are telling you to 'fix education', you can't create a 'Khan Academy'. What you can do is add requirements that teachers 'do better', and since better is subjective, you create an artificial measure of 'better.' Then you report back that you helped to 'fix education' when you not only didn't fix it, you didn't even move the ball down the field.

However, this mythical representative didn't have to take the hard road of pissing off some entrenched interests in education in order to change it. Not only would that cut into next year's re-election budget, there would be nothing concrete to show for it, and there would be a lot of soft-money ads in your district saying how you were bad for education.


I worked for a publisher of standardized tests for a number of years, though this company did not develop the assessments used by the State of New York. What this teacher doesn't seem to appreciate is the process behind how these tests are written. They are, seemingly by the nature of this society, produced in a manner which can't capture distinctive teaching styles like this woman has.

The process of making a test can start with whatever a state legislature has mandated will be assessed. Mind you, before that, there is all the negotiations and politicking that takes place. Surprising to most, this involves state education leaders, business people, religious folks, politicians, parents, etc. It's a kitchen sink of divergent interests with everyone claiming to have the best interests of the children at the fore.

Once the legislation is in place and a contractor has been secured to aid with development, there are loads more meetings and committees deciding what is appropriate assessment within each subject (math, reading, writing, etc.). Again, there are loads of different people with loads of different interests, all of whom believe they are thinking foremost about the children.

It's also in this stage that whatever research or trends in assessment styles will be considered (though some takes place earlier, too). There's usually a mentality of "You go first" to new assessment techniques. States are more willing to try something if another state has already done something similar and there is publicly available data to support the perceived efficacy.

At the next stage is actual development of the assessment materials. We would split this up, part of it being done in-house and a bunch contracted to teachers around the state. Yes, we tried to get teachers from every district. For the contracted work, this would mean paying teachers to write a number of questions for a specific test (say, fifth-grade math). These people got paid for each question they wrote regardless of the quality or usability of what they submitted.

The worst material to write was probably math simply due to the dry nature of the subject and the fact that creative approaches to math are usually verboten in education here. Reading tests were often the most difficult to develop. The work on the tests wasn't bad, but securing copyright permissions and, often, permission to edit was brutal. If there was a magazine piece, the complications were often much worse because usage rights might have to be secured from multiple parties (publisher, author and photographers).

Mind you, this was also in the late '90s, so the Internet wasn't as useful a tool for tracking down rights holders or potential materials, and email was still a secondary means of communication, definitely behind the phone and often behind the fax, too. Securing rights for all the materials we wanted to use took months just because of how hard it was to find people and communicate with them. And states didn't have much of any budget to pay, so securing rights at minimal cost was a big hurdle. Often, the best pieces were never used due to how much a rights holder wanted.

So questions would come in from all over the state, and then we would clean them up. That was multi-layered work. It might mean simple grammar and punctuation fixes, but it also meant conforming to the format mandated by the state education departments. For example, when I was doing this work, states would not allow us to put a negative in the question. And there were loads of these types of rules, like making certain there was parallel structure among answer choices, not having any choices significantly shorter or longer, etc.

Once we had done an initial tightening of the new bank of material, all the teachers we'd contracted and state administrators were brought in for a week of refinement and further development of materials. These were simultaneously productive and political sessions. A lot of work would be done, but there was also a lot of on-site jockeying. Teachers would say things like, "This is a great story, but my kids won't be able to relate to it." State administrators would hear this a few times about a piece and then pull the material from any further consideration, not even pilot testing. Quality was often a secondary consideration to how teachers felt their students would do, and it was sometimes tertiary to other teacher goals (what they believed was important, their personal agendas, etc.).

Those last few steps would then repeat themselves. We would tighten up the work that had been developed, the graphics department would develop accompanying graphics where needed and handle page layout, proofreading was a persistent process, and then we would bring the teachers back in for another review of the nearly final materials.

Then we would do another round of tightening up the material: proofing, requesting minuscule tweaks from the graphics department, getting state approval for any substantive change (no matter how minor), etc. That was when we could begin building an actual test using these new materials and existing questions from previous tests. We'd also begin to develop the accompanying manuals, which instructed the schools how to handle the materials and the teachers how to administer the tests. As you can imagine, these had to be perfect. When you have a 100-page document with loads of instructions around specific details, errors are not permissible.

(I haven't done this work in 12 years, but to this day my eyes proofread nearly everything that comes before them. I can be at a simple restaurant, and the menu will list "pan fried chicken." I instinctively note the missing hyphen.)

Of course, all that development work only went to pilot materials. I don't remember exactly, but a student might take a test that was about 70% questions that counted and the rest were new questions being evaluated. Once the tests came back, data analysis was run on everything, enabling us to see what worked and what didn't. Sometimes a question was too hard or too easy, sometimes one group of people simply had issues with a question. My memory is hazy, but I want to say that about one-third of the questions that were piloted became usable. Maybe 10-20% of them got re-piloted because the data showed a way we could possibly fix the question (e.g., one of the answer choices was too attractive, so a re-write of that might be enough of a fix to make the question worth trying again).
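For the curious, the kind of pilot-data analysis described above can be sketched roughly like this. The thresholds and the "too attractive distractor" rule here are illustrative assumptions on my part, not the actual criteria any publisher used:

```python
# Rough sketch of classical item analysis on piloted questions.
# Difficulty is the proportion of students answering correctly;
# distractor shares show whether a wrong choice draws too many students.

def item_difficulty(responses, key):
    """Proportion of students answering correctly (classical p-value)."""
    return sum(1 for r in responses if r == key) / len(responses)

def distractor_shares(responses, choices):
    """Fraction of students picking each answer choice."""
    n = len(responses)
    return {c: sum(1 for r in responses if r == c) / n for c in choices}

def evaluate_item(responses, key, choices):
    p = item_difficulty(responses, key)
    shares = distractor_shares(responses, choices)
    if p < 0.2:
        verdict = "too hard"
    elif p > 0.9:
        verdict = "too easy"
    # A wrong choice drawing more students than the key suggests a
    # "too attractive" distractor worth rewriting and re-piloting.
    elif any(c != key and s > shares[key] for c, s in shares.items()):
        verdict = "re-pilot: attractive distractor"
    else:
        verdict = "usable"
    return p, shares, verdict

# Example: 10 pilot responses to a 4-choice question keyed "B".
answers = ["B", "C", "B", "C", "C", "B", "C", "A", "C", "D"]
p, shares, verdict = evaluate_item(answers, "B", "ABCD")
# Here half the students chose distractor "C", so the item gets re-piloted.
```

Real programs use larger samples and discrimination statistics as well, but the flagging logic is the same shape.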

On the other side was the scoring for written questions. We had the state-issued rubrics, and those were our guiding force. I (and others) would train the part-time people we hired to do this scoring. The company I worked for hired these people largely off of a standard bank of psychological assessments. The company owner felt these gave all the information we needed to evaluate these potential employees.

Easily, the biggest challenge was getting scorers to accept the rubrics. A student might write a quality piece about something, but it might have been well off topic or not sufficiently on topic based on what the state wanted to assess. During training sessions, I spent a lot of my time defusing anger from these people and getting them to focus on the rubrics. Gentle humor was key in that regard, and I don't recall anyone proving to be a long-term problem in terms of accepting the rubrics.

The other big challenge to this work was the repetition. Reading the answers to the same questions over and over was mentally challenging for people. I don't blame them. Most kids of a specific age aren't too creative when fed a question for a state test. For example, ask them who is a public figure they admire and why, and you're likely to get the bulk of the answers focusing on just a few people (athletes, popular music stars, etc.). For the written assessments, 10% of the student materials were scored twice (by separate people) to ensure accuracy of grades and as a way to identify issues with potential scorers.
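The double-scoring check amounts to computing agreement rates between the two readers. A minimal sketch, with illustrative numbers rather than any real scoring data:

```python
# Sketch of the double-scoring check: a second reader scores ~10% of
# papers, and exact / adjacent (within one point) agreement rates can
# flag a scorer who has drifted from the rubric.

def agreement_rates(scores_a, scores_b):
    """Exact and within-one-point agreement between two scorers."""
    pairs = list(zip(scores_a, scores_b))
    exact = sum(1 for a, b in pairs if a == b) / len(pairs)
    adjacent = sum(1 for a, b in pairs if abs(a - b) <= 1) / len(pairs)
    return exact, adjacent

# Ten essays on a 1-4 rubric, scored by two readers.
first  = [3, 2, 4, 1, 3, 2, 4, 3, 2, 1]
second = [3, 2, 3, 1, 4, 2, 4, 3, 1, 1]
exact, adjacent = agreement_rates(first, second)
# exact = 0.7, adjacent = 1.0: the readers never differ by more than a point.
```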

I've tried to refrain from too much commentary, but there is no doubt that the materials developed for tests are beaten down throughout the process by bureaucracy and various interests. It's much like the corporate world when the firm has way too many meetings in the course of developing something and there is a leadership vacuum. Oh, sure, there is a person or two who is technically leading things and may have veto power, but there are far too many diverse interests for anything of distinct quality to emerge.

With one of the states for which my employer did work, a woman like the teacher in the link would be invited to participate in the following year's development. The lead state administrator always referred to this as "getting that person's buy-in." And truthfully, it seemed to work because the teachers brought in for this reason felt like they had a voice in the process. None that I saw seemed to appreciate the depth of the whole process, so they all seemed to think they had made a difference in the development of the tests.

More specific to the author of the link, she seems like she's probably a good teacher, better than most. The education system, especially when it comes to statewide assessments, isn't prepared to appreciably handle outliers like her. She probably knows that. She probably knows the real battle to change these kinds of tests is not one she's prepared to tackle. I don't blame her. Nor do I blame her for making a public critique like she did.


“I told you, didn’t I, about hearing Noam Chomsky speak recently? When the great man was asked about the chaos in public education, he responded quickly, decisively, and to the point: “Public education in this country is under attack.” The words, though chilling, comforted me in a weird way. I’d been feeling, the past few years of my 30-plus-year tenure in public education, that there was something or somebody out there, a power of a sort, that doesn’t really want you kids to be educated. I felt a force that wants you ignorant and pliable, and that needs you able to fill in the boxes and follow instructions. Now I’m sure.”

This speaks to a rather odd state of mind on the part of the teacher. “The great man”?

Believing that the state of education and testing is the result of some powerful villain behind the curtain requires more credulity than simply attributing it to the massive complexity and inertia you describe.


> This speaks to a rather odd state of mind on the part of the teacher. "The great man"?

> Believing that the state of education and testing is the result of some powerful villain behind the curtain requires more credulity than simply attributing it to the massive complexity and inertia you describe.

Certainly Chomsky is a great man. He's the founder of modern linguistics. He wrote the paper that demolished behavioral psychology, paving the way for cognitive psychology. He invented the hierarchy of formal languages that is still taught in every Computer Science program in the world. He is the most cited academic in the world. He is the most influential thinker in two completely unrelated fields: linguistics and political activism. He's an "Institute Professor" at MIT, which is the highest honor that MIT bestows. (I think that there are fewer of them at MIT than there are Nobel Laureates.)

Just what is there not to consider great?

As to "some powerful villain", where did either Chomsky or the OP ever assert that the "villain" isn't a system? Perhaps a system composed largely of complexity and inertia?

I know for a fact that Chomsky believes something along these lines. He has stated many times that the harmful influences he describes are not the result of a cabal, as is often misattributed to him, but that he is and always has been speaking about the effects of a complex system, whose intent is not the intent of any individual part of the system.


You make the case much more convincingly than the OP. The passage I quoted came across to me as uncritical fawning.


As opposed to your uncritical lack of appreciation?


Not really sure what you mean, but don't want to fight about it.


Questioning whether Chomsky was worth the teacher's praise demonstrated that you held the uncritical viewpoint, not her.


It was never about Chomsky or whether he's "worth the teacher's praise".

I don't care how great somebody is. It still comes off badly to refer to them as "the great man" and appeal to their authority if you want to persuade your reader of something.


I disagree. Replace Chomsky with Lincoln or Jefferson or Newton or Einstein. Would you object then? "Great" doesn't imply correctness. It implies only profound respect for one's achievements.

Furthermore, the OP did not appeal to authority. She cited a claim from a reputable source and asserted that this claim seemed to be evident in her own experience. She never asserted that the claim was true merely because Chomsky said so. Your criticism is no more valid than it would be for the case where she might document evidence from her personal life for one of Newton's theories.


"Never ascribe to malice that which is adequately explained by incompetence."


I read that quote too often. Greed exists and often masquerades as incompetence. Don't be so adamant about it.


I found it jarring, too.

Fun link: a fake interview with Chomsky: http://proteinwisdom.com/?p=16470


Warning, link is to a right wing propaganda website.


Do you give equal time fair warnings regarding left-wing propaganda websites? Hmmm?


Perhaps guelo would like something less likely to offend his/her sensibilities. Here you go: http://norvig.com/chomsky.html


That's a very different sort of read than your first link. Your first link was basically snark. I laughed a few times, although I didn't agree with where the author was coming from. I didn't come away thinking the author had very much to say.

Go to the root domain of that site, and you find what I think most would agree is right-wing spin on current events.

Your second one is to Peter Norvig, a super-smart person at the top of his field, writing inside his area of expertise. He isn't snarking, or being disrespectful, but he's definitely giving substantive answers to legitimate questions. Norvig clearly has a lot of interesting things to say.

Visit the root domain and you find a bunch of other interesting stuff by Norvig, most of it technical. No axes to grind jump out at me.

More of the second, please :)


Let me first agree with you that the first link was not very HNish.

But I don't think it was mere snark. It was mockery, of quite a high order. In case it's not obvious, most of the words quoted are actually Chomsky's, from http://www.chomsky.info/interviews/200307--.htm

But let me draw some parallels.

PW paraphrases Chomsky as saying "Language has nothing much to do with language." Norvig paraphrases Chomsky as saying "Observed use of language ... may provide evidence ... but surely cannot constitute the subject-matter of linguistics." Not so different.

Lastly, you could say Chomsky might even take more offense at Norvig's calling him a 'Mystic' and likening him to Bill O'Reilly!

Anyway, I don't take PW so seriously, but he is quite entertaining.


fair enough :)


>Public education in this country is under attack

I think it would be far more accurate to say that our entire culture is under attack, from multiple vectors, to reduce the currently upwardly ambitious mass populace to neo-feudalism.

Keeping the peasants stupid is just one tactic in a complex strategy.


what is odd about admiring noam chomsky?


Nothing. I admire him. But I wouldn't cite him in such a hero-worshiping manner. Doing so would detract from my argument and undermine my credibility.


Why does a person have to admire?

(FYI writing the first letters of a person's first and last name in lower-case is not very polite)


And what kind of middle school manages to get Noam Chomsky invited for a talk? Is that something he does normally?


Doesn't sound like he did. Sounds like the teacher heard him speak, and then told students about it.


Thanks for your fascinating comment.

Unfortunately, all the excuses in the world don't change the fact that these tests hurt education and hurt children. The production of these things is a shameful act.

I thought the tests we had in the UK were bad, and they are, but the US obviously has a far greater problem. I hope this changes, because you are hurting your children and damaging your future.


>Unfortunately, all the excuses in the world don't change the fact that these tests hurt education and hurt children.

The rub is that: so do terrible teachers.

It's inevitable in a world where bureaucrats take a top down approach to test taking and measurement that you end up with people who "teach the test". It sucks.

But, when left to their own devices, some teachers don't teach at all. Teachers follow the same distribution of personalities as any other occupation on this planet; some are good, some are bad, most are average. These standardized tests were an attempt at saying, "You MUST teach at least this well", with "well" being defined by factoids. So, on one hand you have a free-for-all, where great teachers influence and inspire, while poor teachers ruin lives. The other side is a middle ground where what is to be taught is quantified, and everyone gets a mediocre (at best) education.

What we need is a hybrid system that allows good teachers to do what they want, while bad teachers are held accountable and forced to, at the least, teach the test.


> What we need a hybrid system that allows good teachers to do what they want, while bad teachers are held accountable and forced to, at the least, teach the test.

Bad teachers being held accountable is a good idea. It is such a good idea that bad teachers should lose their jobs. In some cases they should even lose their certification.


> But, when left to their own devices, some teachers don't teach at all. Teachers follow the same distribution of personalities as any other occupation on this planet; some are good, some are bad, most are average.

In my experience there are precious few teachers who don't teach at all. In fact, I never had a single one that bad. At worst, they hand you the text book and make you read it over the course of the year and give you some tests on it.

The best teachers don't teach much really. Rather they inspire. The worst teachers may teach a lot, but they make you despise the material.

It seems to me that having teachers be forced to teach to standardized tests will kill off the first type of teacher, while having little or no effect on the second kind of teacher.

The worst of all worlds.


> The best teachers don't teach much really. Rather they inspire. The worst teachers may teach a lot, but they make you despise the material.

Going to my 11th grade Biology class every day, I used to joke with a couple friends: "Do you think we'll be learning today?" "Ha ha, probably not." The class was fun, not very serious, with the bulk of the period spent listening to our teacher tell stories. All that, and the homework was always easy.

But at the end of the year, while studying for the final, we realized that we had learned a huge amount of biology. Our teacher had spent the classes telling us stories that related to the material, and the homework was easy because she inspired us to be interested in the questions being asked. I didn't remember toiling over the Krebs[1] and Calvin[2] cycles, but I certainly knew them. More importantly though, I found them interesting.

[1]http://en.wikipedia.org/wiki/Citric_acid_cycle

[2]http://en.wikipedia.org/wiki/Calvin_cycle


Bad teachers teaching bad tests is worse than bad teachers left to their own devices.

"Factoids" is a laughable metric for learning. (Especially when you're looking for factoids in a long-form response. If you want factoids, you need to ask for factoids. This test asked for complex thought, the grading should require complex thought.)


>"Factoids" is a laughable metric for learning.

Not if learning those factoids allows one to move up the chain, from what is probably a terrible teacher to one that is, hopefully, better. Or, those factoids might get you a job where one can begin to learn outside of a scholastic setting, which isn't ideal for every personality. I realize it's not ideal. That's why it's such a difficult problem.

I come from a country (and province) with an incredibly powerful teachers' union. So I've seen what happens when teachers are not held accountable for anything.

>"This test asked for complex thought, the grading should require complex thought"

Do you trust the average teacher to be a good judge of "complex thought"?


I trust the average teacher more than I trust the average school administrator. I don't want to give the average school administrator tools that would allow them to remove above average teachers.

Most of these blunt instruments will catch just as many good teachers as bad ones.


Bad teachers teaching bad tests is worse than bad teachers left to their own devices.

How is it possible that a student is worse off having the knowledge necessary to pass a test than to have no knowledge?

This test asked for complex thought, the grading should require complex thought

Kids are dumb. Their "complex thought" is really quite simple. Further, the iPhone required a stunning amount of complex thought to create, yet I can assess its functionality with only simple observation. It doesn't necessarily require complex thought to assess the product of complex thought, particularly when that's complex thought at grade level X.


>How is it possible that a student is worse off having the knowledge necessary to pass a test than to have no knowledge?

That's a totally false dichotomy. My point is that the test rewards students for simplistic thinking. It rewards the children who are not learning how to learn, but learning how to memorize.

>Kids are dumb. Their "complex thought" is really quite simple.

That's a terrible attitude to design a curriculum around. Curricula need to be designed around the concept that every child has an immense capacity for learning and complex thought. Curricula should be designed to reward complex thought. Rewarding rote memorization is not a bad thing, but it is at best a supplement to critical thinking.


That's a totally false dichotomy.

Not really, since you replied to:

Bad teachers teaching bad tests is worse than bad teachers left to their own devices.

Bad teachers don't teach. They don't convey knowledge.

That's a terrible attitude

No it isn't, it's reality. Kids are dumb. That's the point of growing up: you become inherently more intelligent and schooling gives you progressive knowledge which is (hopefully) just beyond your full grasp at any given time.

every child has an immense capacity for learning and complex thought

Do you actually know any children? They only have an immense capacity for complex thought if your expectations are low enough.

By the way, I love kids and enjoy interacting with them. But I don't for a second think they're precious, brilliant little snowflakes. They're still dumb. They may be smart among their peers, but they're still dumb on an absolute scale.


> Bad teachers don't teach. They don't convey knowledge

Teaching is not a simple matter of shoveling some quantity of "knowledge" into children. Education requires purpose, focus, and motivation. Good teachers inspire children to learn on their own. Bad teachers actively harm students' achievement.

As for my comment on tests, tests that encourage students to write terrible essays loaded with factoids are bad because they encourage students to write terrible essays. This has nothing to do with the teacher - except that if used as a metric for evaluating teachers, it will cause us to devalue teachers who teach students to write better essays.

And I stand by my assertion that no one who calls children dumb should be involved in their education. Motivation is a primary piece of education, and calling students dumb is profoundly demotivating.


Teaching is not ... Education requires ...

I don't know why you're telling me the obvious. Nor do I see what it has to do with the distinction between good and bad teachers. You're making a lot of proclamations without any supporting evidence.

if used as a metric for evaluating teachers, it will cause us to devalue teachers who teach students to write better essays.

Why? What is the basis for that claim? Teaching a testing strategy for a unified test has very little to do with week-to-week teaching and testing. That's sort of the point, it's a standardized test meant to establish baseline comprehension. Why does an essay that hits at least a couple of key points somehow qualify as 'bad'?

And I stand by my assertion that no one who calls children dumb should be involved in their education.

What are you talking about? Who was talking about people involved in education calling children dumb? Who was even talking about calling children dumb? I describe children as dumb, because that's the reality, I don't call them dumb.


Unfortunately, all the excuses in the world don't change the fact that these tests hurt education and hurt children.

I agree, but my goal was more to share what I've seen of the process than to judge it. I think folks here are percipient enough to make those judgments independently. On top of that, I've long been out of the field and am far too ignorant to suggest alternatives, and that's what we seem to need more than critiques.


  I've long been out of the field
Speaking as someone that came out of high school a couple years ago and watched No Child Left Behind start taking effect, I've seen two different styles of test: one from the mid-90's, when it sounds like you were involved, and one from after No Child Left Behind. The NCLB tests today are utterly unlike the tests that your methods produced. Basically, instead of trying to rank students against each other, they began ranking every student against the same fixed hurdles. On top of that, they started testing once every year or two rather than once every four or five years, and question quality dropped precipitously as they just ran out of material. The changes were, to me, obvious, and the effects were disastrous. Null-value questions ("What shape is the end of a pencil? Cone, square, pyramid.", on a 7th-grade test), bad and wrong essays (as described in the OP), science questions that asked for rote memorization rather than actual understanding and thinking, and so on.

These observations, though amateur and unscientific, suggest a few solutions. Basically, go back to relative rankings, do a lot less testing, and start rewarding uniqueness. Teachers need to have time to actually teach beyond hitting every checkbox on a list for the end-of-the-year test. Teachers need to be able to teach to their own students, rather than to the lowest-common-denominator child in rural Alabama. Teachers need to be able to encourage and guide every student individually, developing artists, scientists, thinkers, and doers, rather than having to force every single child into the same standardized mold.

</rant>


Right, I was in the field from something like 1995-2001, so I was before the No Child Left Behind style of testing. And I've been nowhere near the field in my work since.

If I were to guess, the types of questions you received were different, but the general process of how the tests were made remained the same. And that's the bureaucratic, lowest-common denominator mentality that pervades broadly given tests. And I agree with you that "[t]eachers need to be able to teach to their own students." I think that was a big part of the OP's point since that's something she does.

It's all part of the conundrum: Everyone wants good teachers and accountability in education, but there doesn't appear to be a high-level way of making such assessments that can pass the necessary political muster.


(I haven't done this work in 12 years, but to this day my eyes proofread nearly everything that comes before them. I can be at a simple restaurant, and the menu will list "pan fried chicken." I instinctively note the missing hyphen.)

...lowest-common denominator mentality...

After reading your initial comment, I couldn't help noticing that you might have meant "lowest-common-denominator mentality." Consider this pedantism a subtle way of saying, "I read your comment completely; thanks for posting it."


NaOH is applying one of the standard forms of hyphenation: many styles dislike joining phrases used as attributives with many hyphens.


If we are being pedantic: it should be "greatest common denominator".


I meant it when I thanked you for the comment. It was a fascinating insight into how these tests actually come about. Thanks for writing it!


Unfortunately, all the excuses in the world don't change the fact that these tests hurt education and hurt children.

They also help, by identifying bad teachers and bad schools.

Do you have any data suggesting they hurt more than they help?


In the UK, we have spent ages and a lot of effort in testing, league tables, improvement plans, improvement targets for schools.

There is a slowly growing realisation that the result may not be good for the children.

See the Wolf report

https://www.education.gov.uk/publications/standard/publicati...

and also a short OFSTED report into Maths teaching, see the newspaper summary below. The full report may be of interest to you as it explains the pedagogy of mathematics well.

http://www.telegraph.co.uk/education/2982483/Ofsted-testing-...

Teaching is a highly dimensional task. Assuming you could devise a metric space adequate to the task, my teaching at any point would be represented by a set of coordinates. The norm of the coordinates of my 'point' in the space might be higher or lower than the norm of another teacher. How do you decide which one of us is less 'bad'?


Your first link, near as I can tell, doesn't even attempt to address the question of whether standardized tests are beneficial or harmful. It seems to be about the merits of vocational vs higher education.

Your second link provides no data on whether testing is good for children. It merely shows that actual teaching methods do not conform to what the authors believe are the best teaching methods. No data is provided on student outcomes.

Teaching is a highly dimensional task...How do you decide which one of us is less 'bad'?

Any goal-oriented system is designed to maximize some arbitrary objective function. With standardized tests, you are forced to write down your objective function and admit it's an arbitrary choice.

What benefit do we receive from having an unspecified objective function and no uniform method of measurement?


Link 1: Appendix VIII talks about maths assessment, but I accept that this reference may be too UK specific to be useful anywhere else. It fits into a discourse about syllabus content &c

Link 2: Outcomes are improving year on year, but OFSTED found that understanding of Maths is decreasing. Implication is that outcome measures are not appropriate

"What benefit do we receive from having an unspecified objective function and no uniform method of measurement?"

Children who can think for themselves.

I think we may have an example of paradigm incommensurability here. You are seeing some kind of Goals -> Measuring Instrument -> Optimisation system, I'm seeing a political system with a lot of stakeholders and children that need to learn. As a practitioner, I find it hard to abstract from the daily process of meeting the needs of a very diverse student group. We may be talking past each other.


I read Appendix VIII, I really can't see where it shows math assessments have harmed outcomes. Nor can I figure out where your second link shows math understanding is decreasing. Could you maybe quote the text you feel shows this?

I must be missing something here.

Children who can think for themselves.

How do you know you get this without uniform tests? And for that matter, what does it even mean?

As for me, I'm well aware of political realities. I just don't see how avoiding careful measurement, clearly stated objectives and transparency helps a political system give better results.


The second link summarises an OFSTED report into the teaching of mathematics. The report suggests that teaching to the test is a problem in some UK schools. The report says that teaching to the test damages understanding in maths. I accept that the short abstract I linked to may not provide evidence of those things.

"How do you know you get this without uniform tests?"

The health or otherwise of our small companies, and the creativeness of artists, musicians, mathematicians, and the vigorousness of our politics.

"And for that matter, what does it even mean?"

It means what it has always meant! It means young adults who can take responsibility for themselves and others and who can act in society. Seriously, there is a level where one has to simply point rather than define (Wittgensteinish argument).

"I just don't see how avoiding careful measurement, clearly stated objectives and transparency helps a political system give better results."

Because there are different stakeholders, and each part of your process will be challenged, and interpreted differently by some, and others will 'collapse' the wider concept of education down to a narrow focus on measurable outcomes.

Not sure if I'm making sense here because I'm in a different place I suspect.


My two favorite metrics for evaluating the effectiveness of teaching are whether the students can teach other students, and whether they can create something new to share.

They are very simple metrics which completely disregard test scores, a metric I reject.

Gatto has some good things to say about this. So does Neil Postman, in his book "Technopoly."


> They also help, by identifying bad teachers and bad schools.

Aside from the arguments posted in sibling comments about what's being measured, you also have a problem of unaligned incentives. In particular, your claim only has a chance of being true if the students are actually interested in scoring as high as they can on the exam. Even at the AP level (I teach at university level and interact with secondary teachers that teach smart, high-level students taking CS), there are students who have decided that they don't care about the subject, or maybe even like the subject but have no (perceived) benefit from a high score due to their chosen college not giving credit in that subject or whatever. Such students may leave their answer book blank, or doodle in it, or maybe just blast through for the easy points and finish early and not worry about thinking about it.

This isn't even necessarily a particularly irrational choice on their part!

But it's a strong argument why the exams shouldn't be used to evaluate the teacher or the school. In a lot of places the students aren't permitted to opt out of the exam, even if they don't care about it, but there's no penalty to the student for taking a dive on it (and any penalty you could try to assess would have false positives and false negatives and still not motivate many of the students with differently-aligned incentives).

All of these problems are going to be a million times worse on a general-education primary- or secondary-level assessment than they are on AP exams.


The data has proved to be virtually useless in assessing teachers. See http://garyrubinstein.teachforus.org/2012/02/26/analyzing-re...

They're useful for showing students are underperforming. But you can't say anything about teacher quality. In fact I suggest if you took the teachers from the "worst schools" and put them in the "best schools" and vice-versa the results the following year wouldn't look noticeably different.

Does that mean that teacher have no impact at all. I don't believe that. But I do think the large gaps in achievement are systemic problems more than the problem of teacher quality at certain schools.


The fact that some teacher can't see the correlations using an inappropriate plotting technique (he uses a scatterplot, he should use hexbin or other density plot) doesn't mean they aren't there. And the numbers he gathers and dismisses show the correlation actually does exist.

A simple numerical example, where the correlation is guaranteed (i.e., I took y=x+noise):

http://i.imgur.com/rrmUI.png https://gist.github.com/2183927
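The linked gist's exact parameters aren't reproduced here, but the experiment is trivial to re-run. Below is a minimal sketch (the sample size, seed, and unit-variance noise are my assumptions): generate y = x + noise and compute the Pearson correlation, which lands near 1/sqrt(2) even though a scatterplot of the raw points looks messy.

```python
import random

# Assumed parameters: 1000 points, standard-normal x and noise.
random.seed(0)
n = 1000
xs = [random.gauss(0, 1) for _ in range(n)]
ys = [x + random.gauss(0, 1) for x in xs]  # y = x + noise

# Pearson correlation, computed by hand to stay dependency-free.
mx, my = sum(xs) / n, sum(ys) / n
cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
sx = (sum((x - mx) ** 2 for x in xs) / n) ** 0.5
sy = (sum((y - my) ** 2 for y in ys) / n) ** 0.5
r = cov / (sx * sy)
print(r)  # theoretical value is 1/sqrt(2) ~= 0.71: noisy, but unmistakably present
```

The same lesson applies to the teacher-score scatterplots: a low-looking cloud of points and a solidly nonzero correlation coefficient are entirely compatible.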

Most of the correlations he expects to find are present. They are noisy, but present.

The fact that in one case, reality is "contrary to what every teacher in the world knows" just suggests maybe teachers don't have a great grip on reality.

...if you took the teachers from the "worst schools" and put them in the "best schools" and vice-versa the results the following year wouldn't look noticeably different...Does that mean that teacher have no impact at all?

Not quite, but almost. It means that variation among existing teachers has less impact than the measurement error, and the current crop of teachers are basically interchangeable cogs.


> Not quite, but almost. It means that variation among existing teachers has less impact than the measurement error, and the current crop of teachers are basically interchangeable cogs.

Or that the students - and their parents - are a very important factor with significant variation and classes a relatively small sample size. Ask any teacher and they can tell stories about how different classes can be - and how much a school having a weak discipline policy will ensure that many students will have poorer classroom experiences because of one or two difficult students.

It's not like we don't know how to do scientific measurements of complex systems but it's expensive and slow at a time when the political requirement is fast and cheap. It'd be awesome if school districts actually hired people with backgrounds doing serious statistical analysis or large-scale studies with human subjects but nobody is jumping to fund that.


Annual tests end up directly costing students a month of education time; I think you need to prove benefit and not the other way around. As to their value, colleges still trust SATs and GPA more than any of the state tests, in large part because the state tests are uniformly terrible. If you really want to test teachers, then randomly assign 1-2 tests to each student; it's just as statistically valid and takes ~1/3 the time.


Well, ultimately the point I'm making is that we can't prove anything about anything without standardized tests. Until we have a measurement procedure, all we are doing is groping around in the dark without any way of knowing if we are helping or harming things.

It's similarly difficult to prove that clinical trials are beneficial in medicine. You might try to compare medicine developed with clinical trials to medicine developed without it, but how would you actually make such a comparison? Not with a clinical trial, obviously...


There are plenty of useful pieces of information that have nothing to do with standardized tests: attendance, graduation rates, teen pregnancy, college admittance, GPA, etc. It's true that you could gain useful information from great standardized tests administered well. Unfortunately, we don't actually have any that are worth a damn. Each state has its own tests, which are developed alongside the curriculum. So, is Virginia doing better than Maryland? Sorry, the tests can't actually tell you that. Are graduates in 2010 better prepared than ones from 2000? Sorry, they can't tell you that.

The idea of tests sounds good, but the implementation is so bad they are effectively worthless. Again, you can argue that great tests could mean something but we don't have them we just have crap. So, if you want to defend tests you need to defend that crap because that's the reality.


Attendance, graduation rates and teen pregnancy are meaningless. They are measures of hours of butt-seat contact, low graduation standards and horniness, respectively.

GPA is closer to a useful metric. GPA and college admittance are basically the same thing as standardized tests. Except that unlike standardized tests, you can't compare any set of grades to any other set of grades. How does that help?

Incidentally, do you realize your biggest criticism of standardized tests is that they are not standardized enough? I.e., there is too much geographic and temporal variation in them?


How exactly is teen pregnancy related?


Consider this: when it comes to breaking the cycle of poverty, a teen is better off being one grade level behind in reading and avoiding pregnancy than reading at grade level and becoming pregnant. Sex education is something that most schools teach, and they have various levels of success.

You can also track where school systems switch to abstinence-only education and the rate increases. Now granted, it's not supposed to be the most useful thing teenagers get from public schools, but it is a very important part of their job, and we have fairly good data on it as well.


But that's the thing, they don't identify bad teachers. And they do penalize good teachers. Being taught to pass these tests is not the same as being educated. All the tests do is identify teachers who don't put the majority of their teaching effort into getting children to pass the tests, to the detriment of everything that matters.


Being taught to pass these tests is not the same as being educated.

How do they differ?

And once you have a definition of "being educated", why not simply measure this? Once you change the tests to a measurement of "being educated", "passing the test" and "being educated" would be identical.


At the extreme, I can be taught to identify metadata in the test and use that to answer the question. E.g., if the question begins with "Why ..." the answer is 95% (c). If the question is an inequality with an irrational number than answer is almost always (b).

Princeton Review was particularly good at teaching metadata identification in helping to "guess" when you didn't know the answer.


E.g., if the question begins with "Why ..." the answer is 95% (c)

This sounds like a job for random.shuffle.
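To make that remark concrete, here's a sketch of what "a job for random.shuffle" would look like: shuffle each question's choices independently, so that answer position carries no metadata. The question format (a dict with prompt/choices/answer) is a made-up illustration, not any real test bank's schema.

```python
import random

def shuffle_choices(question, rng=random):
    """Return a copy of `question` with its answer options shuffled.

    `question` is assumed to look like
    {"prompt": "...", "choices": [...], "answer": <correct choice>}.
    The correct answer is tracked by value, not by position, so after
    the shuffle its letter is uniformly random and carries no signal.
    """
    choices = question["choices"][:]
    rng.shuffle(choices)
    return {
        "prompt": question["prompt"],
        "choices": choices,
        "answer_index": choices.index(question["answer"]),
    }
```

Any per-question shuffle along these lines destroys the "Why ... is 95% (c)" pattern without changing what the test measures.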

Incidentally, independent (i.e., not funded by Kaplan) investigations of test prep centers show minimal effect on performance.

http://online.wsj.com/article/SB124278685697537839.html


Princeton Review teaches students to use logic and some knowledge to exclude wrong answers, to help the top 10ish percent raise their scores. Students who are capable of using these techniques are much more intelligent/educated than students at risk for failing state tests.


Here are two tests: one expects you to quickly provide low-precision values for common trig calculations, which can be easily passed by using rote memory to learn sin/cos/tan for common values. The other is a free-response question requiring you to explain what sin/cos/tan actually mean and label a triangle accordingly.

Both of those are standardized tests. The latter actually tells us whether the student understood the material but the former is much cheaper to grade and report on.


Do they identify bad schools? Or do they identify schools with children that are harder to teach?

This is a genuine question.

I'm also a bit disappointed about the lack of a rigorous evidence base in education. (Well, in everything, really.) In the UK the Department for Families and Schools has a lot of power over education. In theory that should help with an evidence base, because they should be flowing good research down through to schools. But there's such a political atmosphere about teaching that interference from government is often seen as unwelcome. And, sometimes, that attitude is correct because government is suggesting something that's stupid.

There's also a problem that bad teachers are difficult to remove from teaching.


Do they identify bad schools? Or do they identify schools with children that are harder to teach?

Test results are a measurement of how well educated the student is at a point in time. That's all.

You can measure the effect of a school by studying value added - how well a student performs after education compared to how well they performed before. Or similarly, how well a student performs compared to statistically similar students.

You can determine that some subsets of students are harder to teach by correlating observable characters (parental income, race, gender, etc) with outcomes.
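For what it's worth, the value-added idea above reduces to a toy calculation: regress post-test scores on pre-test scores across all students, then score each teacher by the mean residual of her students. Real value-added models are considerably more elaborate (demographic controls, shrinkage); this sketch, with its assumed (teacher, pre, post) record format, only illustrates the shape of the computation.

```python
def value_added(records):
    """Toy value-added scores from (teacher, pre_score, post_score) records.

    Fits post = intercept + slope * pre by ordinary least squares over
    all students, then averages each teacher's residuals: a positive
    score means her students beat the prediction, on average.
    """
    pres = [pre for _, pre, _ in records]
    posts = [post for _, _, post in records]
    n = len(records)
    mp, mq = sum(pres) / n, sum(posts) / n
    slope = (sum((p - mp) * (q - mq) for p, q in zip(pres, posts))
             / sum((p - mp) ** 2 for p in pres))
    intercept = mq - slope * mp

    residuals = {}
    for teacher, pre, post in records:
        residuals.setdefault(teacher, []).append(post - (intercept + slope * pre))
    return {t: sum(rs) / len(rs) for t, rs in residuals.items()}
```

Even in this toy form, the later objections in the thread apply: with a class-sized sample, one or two unusual students can move a teacher's mean residual substantially.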


Everything you mention is technically possible but it's unclear that they are actually well used, with considerable evidence (see NYC's recent data release) that the numbers are used by people who are completely unprepared to work with complex data.

Even the value add comparisons are hard to draw correct conclusions from without more context: a student shows no improvement in math. Did they have a bad teacher, an attendance problem or was it something like being an ESL student who has never received the help needed to understand their classes? That's not the kind of analysis which happens when the tests are effectively a single number which determines careers.


So, carefully used, with a statistician doing analysis, they're useful.

How often are they carefully used, with an honest statistician doing the analysis, and without politicians / reporters / etc mangling the numbers?


Tell you what - lets eliminate all government accountability, since we can't avoid politicians/reporters/etc mangling things. End the FOIA, CBO, GAO, etc? Shut down thomas.loc.gov, and stop reporting the federal budget?

Or maybe we can accept that nothing is perfect, and stop letting perfection be the enemy of the good.


There's a difference between something being "not perfect but useful" and something being "not perfect and harmful".

Since there's very little research on the benefits or disadvantages of constant testing it's surprising that people are happy to spend so much time and money on it.


> They also help, by identifying bad teachers and bad schools.

Not really. Here in Chicago, the Noble charter high schools will often teach directly to the ACT. This results in some dramatic boosting of ACT scores, with some schools taking kids from the dysfunctional CPS System and getting a school average of 23. Unfortunately, once they reach college, they see little academic success despite being straight-A students with 25+ ACT scores.

This has a detrimental impact on the curriculum:

1) The English program is structured heavily around basic reading comprehension, with little to no emphasis on writing composition. A student's understanding of essay composition is roughly: "Organize the things you want to talk about into paragraphs... then write a conclusion." However, to their credit, they're really good at reading test questions.

2) Math is focused on teaching Pre-algebra fundamentals and then layering on test-specific Algebra, Geometry (with that goofy proof system), and basic Trig. It's a sad, narrow sample of our already sad & narrow HS math curriculum. It covers few "advanced" Algebra and Trig subjects. This means anyone who has to take "college math" will need courses in Trig and Precalc in college, with the possibility of an Algebra refresher course before proceeding.

3) Social Studies and Science? All rote-to-test. Students are drilled on step-by-step procedures on how to interpret graphs that'll score correctly on tests... without giving them actual knowledge on how to critically think about information - be it historical or scientific.

NCLB schools do the same song-and-dance, except with a much less rigorous test. If you've actually seen the questions on most NCLB tests, you'd be disappointed. Unfortunately, the composition of such tests is so political and messy it's impossible to provide any measure of quality.

You cannot assume testing will provide you accurate information or a better outcome for students. If you're going to implement a testing regimen, you need to be very mindful of the observer effect: you can very easily change the outcome by measuring it. This is not an easy problem to solve.


Your issue is with the objectives of the school system, not testing. Tests measure whether a school meets its objectives; they don't define the objectives.

Similarly, if your manager writes a bunch of nonsensical unit tests, it's ridiculous to blame unit testing if you wind up building the wrong product.


Tests are the only thing defining the objectives. They may not reflect the intended goals, but passing the test is the only objective, and that is why standardized tests can undermine education.


Tests are what hold people accountable for meeting the objectives, they don't define them.

Why do you feel that not holding teachers accountable for meeting the objectives of their school system will improve education?

I agree that some schools may have bad goals, but why do you believe teachers/admins have better ones?


No - tests define the objectives - they serve as an operational definition. The rest is just description and explanation.

The second two sentences have nothing to do with anything I said.


The other big challenge to this work was the repetition. Reading the answers to the same questions over and over was mentally challenging for people. I don't blame them. Most kids of a specific age aren't too creative when fed a question for a state test. For example, ask them who is a public figure they admire and why, and you're likely to get the bulk of the answers focusing on just a few people (athletes, popular music stars, etc.). For the written assessments, 10% of the student materials were scored twice (by separate people) to ensure accuracy of grades and as a way to identify issues with potential scorers.

I'm curious - do you happen to know whether the essay portions ever offered any actual statistical utility above and beyond the multiple choice ones? As in, do they actually measure anything that the multiple choice questions can't?

I spent several years teaching SAT and GRE classes, I always noticed that I could predict people's essay scores pretty accurately based on their multiple choice scores. Percentile ranks always tended to be very close between the sections, which always made me wonder whether there was any point in having the essay at all. I always suspected that since the overall level of competence was so low on the SAT (sadly, the GRE was not all that much better...), anyone that was writing at the level where actual quality of writing matters was already scoring top marks on the essay, so they were essentially unmeasured by the scale.

I realize that when it comes to setting up these tests, "include an essay" is likely a political mandate, because people for some reason think that essay sections are "more fair" to "bad testers" or "fluid thinkers" or something like that (I'm pretty sure ETS was more or less forced to include an essay for this reason), so it probably wasn't even an option not to include them. But I'd be curious to know whether anyone ever looked into whether they actually told you anything you didn't already know. ETS does not provide this data or analysis, otherwise I'd check it myself as it relates to the SAT.


I'm curious - do you happen to know whether the essay portions ever offered any actual statistical utility above and beyond the multiple choice ones? As in, do they actually measure anything that the multiple choice questions can't?

I believe written assessments were relatively new to standardized tests when I was in the field. My understanding was that the purpose was to assess writing, just like we do with, say, math or reading comprehension. Multiple-choice questions can't evaluate a student's ability to respond to an open-ended question, convey ideas, stay on topic, demonstrate grammar and punctuation, etc.

But I didn't get the impression that adding writing components to the test added anything of practical value. Sure, it's one more score-based judgment that can be assigned to each student and be used to assess schools and districts, but as far as I could tell it wasn't benefitting student education. And it certainly came with a high price tag. But I was definitely not in classrooms, so that's a key caveat to anything I share.

Does writing seem any better? Well, these written assessments have been widespread for about 20 years or so. I don't get the sense from what I see that people nowadays are generally good writers. But my exposure is certainly limited, even if others I speak to seem to agree, and I'm not certain enough time has elapsed for us to broadly see the effects of such a change in educational assessment.


I'm curious - do you happen to know whether the essay portions ever offered any actual statistical utility above and beyond the multiple choice ones?

I don't think statistical utility was the goal - the goal might have just been to reduce math from 50% of the test to 33% of the test.

The (unproven, but widely repeated) story is that after Prop 209, too many of the wrong type of people were getting into UC due in part to high math scores [1]. California proposed dropping the SAT requirement as a result, and the College Board came up with a way to reduce its demographic impact (and keep CA students paying them).

[1] http://professionals.collegeboard.com/data-reports-research/...


That the process is onerous does not make it correct.

That a good teacher is an exceptional case that we simply cannot handle with our bureaucracy is a thought that should terrify you to your core.

We get the quality we demand. Stop apologizing for the inadequate status quo.


I liked the middle paragraph in your comment. The rest was pointless garbage. The GP gave a very thorough and enlightening description of the test development process. There's no sense in attacking the messenger: he wasn't positioning himself as an apologist; he was merely informing us what goes into these tests.


Fair enough. I didn't think of my post as an attack, but if it read as one, I should consider softening my language more.

But from my perspective, the subtext of the post was "you don't know how hard this is, you should just be happy about what we do have". I think we need to demand more, and those of us who can contribute to educational reform, contribute more.


I think a problem is that humane teachers are made to be "outliers". She correctly perceives these standardized tests as an attack against students and decent teachers, from bureaucracies like the ones you describe. If these bureaucracies were more efficient, I don't think that would improve the situation; it would probably even strengthen their attacks.


"worst material to write was probably math simply due to the dry nature of the subject"

Math is one of the most fascinating, creative, and empowering pursuits of our species.


Good teaching consists of intelligently applying basic principles to particular situations, which are themselves understood by subjective criteria. It isn't hard to differentiate a good teacher from a bad one, once you take time to look, though it does sometimes take some imagination and observation.

The tests are efforts to identify good teaching on a general and objective basis. That's probably doomed to fail from the start, and what theoretical potential is available is quickly crushed by the political components of the process.

The real question is why we need general and objective measures in the first place. The answers have nothing to do with education, and everything to do with the governance of educational institutions. That governance is the real problem, and nothing's going to matter until it's addressed.


"Creative approaches to maths are usually verboten in education"

I will quote this.


Our standardized tests lead classes to a sort of "malicious compliance" http://en.wikipedia.org/wiki/Malicious_compliance - the tests are So Important to the schools that everything becomes a cram session, in lieu of actual teaching and exploring ideas.

It doesn't have to be that way. There are counterexamples: http://www.theatlantic.com/national/archive/2011/12/what-ame...

[Edit: (Technically it's just Goodhart's Law when there's no malicious intent. Hard to tell the difference sometimes, though. http://lesswrong.com/lw/1ws/the_importance_of_goodharts_law/ ) ]


You see a lot of complaints about standardized tests, but the simple fact of the matter is that standardized tests are not going away. People want a way to evaluate student performance in a way that works across many schools, districts, and states. By definition, that's going to be via standardized testing. Complaining about them will not help.

Make better tests. Teach better around the tests. Those options are fine. But implying that the very idea of standardized testing is constricting is a waste of time. They aren't going anywhere.


>People want a way to evaluate student performance in a way that works across many schools, districts, and states.

Except what they are evaluating holds no meaning. Standardized tests are nonsense. Tests in general are. What you're measuring isn't knowledge, or understanding; what you're measuring is compliance with artificial and highly contrived environments. Which is a pretty useless skill, especially when compared to a deep understanding of the subject matter.

>Teach better around the tests.

By the FSM, NO. Teach around understanding, around critical thinking, logic and reason. But not around tests. This is exactly the kind of teaching that destroyed any interest I had in the subjects that were taught back in school. There's nothing more frustrating than learning things not because they might be interesting, or because a deep understanding of the subject might be useful, but because they are on the next test.

Granted, there are subjects where you can't teach "understanding" per se. History is such a subject. It's based on a lot of numbers, names and places, most of which are simple facts (WWII happened from 1939-1945 etc). However, I would argue that this means it's completely useless to do tests on history. What are you testing? Essentially memorization. Which is a useful skill, but not one on which a lot of your grades should depend. Especially because a lot of students spend hours memorizing facts which they will have forgotten a week after the test, and that time could have been better spent on sensible things.

>But implying that the very idea of standardized testing is constricting is a waste of time. They aren't going anywhere.

You're essentially saying that because we can't do much about them makes them any less useless and constricting. Arguing that the problem can't be solved doesn't make the problem go away.


I am stunned that you think history is just a list of dates. It's an understanding of motives and events. WWII isn't just "from here to here"; it's why each of the Axis powers did what they did, why the Allied powers did what they did. It does not matter that Pearl Harbor happened on Dec 7 1941; what matters is why the Japanese attacked, how it came about. Understanding both the US and Japanese points of view around that event. Understanding the different arguments for and against "I have in my hand a piece of paper", of Lebensraum, of making the trains run on time (or at least saying you did)...

An understanding of history is essential to understand both why and how politics works in the modern day. To toss it all aside as 'dates, names, and places' is just... uneducated. From the rest of your comment, it sounds like you're a hard science person doing the usual thoughtless dismissal of the soft sciences.


I think you're just confirming his position. He says he learnt because the tests required it. That information is what tests would require - dry facts, not understanding. And yeah - that would be uneducated. Isn't that why the tests exist - so that we aren't?


>Isn't that why the tests exists - so that we aren't?

False. Tests don't serve education, they hinder it. That's my entire point.


Or perhaps simply presenting the view of history that was acquired from school. Demonstrating the limited viewpoint that standardized testing promotes.


My point is that it doesn't matter if standardized testing has "meaning" or not, people still want it. They will continue to want it. People want to compare student and school and district performance, and standardized testing is how they're going to measure that.

It doesn't really matter if it "works" or not, the decision making behind it is not logical, and it never will be.

I'm not saying to center the teaching on the test, but to teach around them. The tests are here to stay. You have to do well on them. This is not something there is a choice about. But that doesn't mean you can't do well on the test and at the same time teach around it. Not for it, around it.

Also, that's a strange idea that history is about rote memorization and not about understanding. Maybe I got lucky with my history teachers, but for me it was all about understanding how it all fit together, not raw facts.


>My point is that it doesn't matter if standardized testing has "meaning" or not, people still want it. They will continue to want it. People want to compare student and school and district performance, and standardized testing is how they're going to measure that.

That isn't an argument. That's a status quo fallacy. It doesn't matter if people want it or not; it's nonsense and should be abolished. Also, I highly doubt that people actually want this. The ones who want this broken system are businesses and corporations, because it saves them time and effort, and society as a whole has to shoulder the cost for their laziness. The education system must not bow to the demands of businesses and corporations; it has to be the other way round.

>The tests are here to stay. You have to do well on them. This is not something there is a choice about.

Sorry, if everyone thought and acted like this, we'd never have any progress. This way of thinking is exactly the reason why this broken system is still around. Stop telling people that you can't change it and that they should shut up and accept it. That's cancerous and harmful. Speak out. Voice your discontent about this nonsense. Tell other people to do the same. People who just accept anything are everything that's wrong with the world, and the ones who propagate this attitude are even worse.


In what way do businesses or corporations care about standardized testing?

Sure, the businesses that directly provide the testing services, I guess. Beyond that, which businesses?

This is not some sin of the corporate world here. This is an artifact of the educational system. Of administrators and politicians and parents. It's a way to look like they're making some sort of progress. Businesses couldn't care less.


Please don't be so defeatist. The US education system is horribly broken. Maybe the chances of it ever being fixed are low but I hope people continue to try.


It doesn't have to be a black and white question - I think there would be benefits to having fewer standardized tests, and to de-emphasize them when making policy (especially funding decisions) in favor of other metrics.


What kind of other metrics?

Keeping in mind that if it's not standardized, the numbers are statistical garbage...


Is your compensation determined by a standardized test? Why not? How can the world possibly work if we don't use standardized metrics for everything?!


My compensation is not determined by a hard metric because my employer has no way of measuring my direct contribution to the company's profits. If they did, you can be sure they'd pay me accordingly; this is why people in sales are often paid on commission, and why people in "billable hour" fields are scolded or fired if they don't hit their billing targets. In performance reviews, or (in my current line of work) consulting, it's not uncommon at all to bring up any and all evidence that you can think of to support a claim that you have (or will) contributed positively to the company's bottom line.

Granted, we lack such a clear measure of success in education, and current standardized tests are a ham-fisted approach to creating one. Don't get me wrong - most of the standardized tests we use today follow the exact same approach to testing as they did 50 years ago, and whatever it is that we would like to measure when we say "educational progress", they no more accurately capture it now than they did then.

But I'm not convinced that the general idea of measuring education through testing is wrong. I think what most anti-testing people balk at is the particular (and extremely limited) set of knowledge that most standardized tests measure - they tend to be more okay with the idea of measuring mastery of a particular subject (either through in-class tests, or even standardized subject tests, which draw far less criticism than the general tests). I'd be all for trying to figure out a way to measure general achievement in terms of subject-specific but still standardized tests (which not every student takes every one of), but there are a lot of difficulties there, too.


I would personally want to see it work in industry first. Once someone comes up with a way to directly tie programmers' pay/promotions to code metrics in a fully automated, standardized way, and shows that it works, then I'll believe there's a possible way to make that approach work for education, too.

(If anything, the code-metrics problem should be easier, because you get a large sample of data over an extended period of time that represents their actual work output, not an artificially staged test that takes a few hours.)


The problem is that programming is a far more heterogeneous task than teaching, and the goals are often not known a priori. How do you compare code output while writing an MRI reconstruction algorithm to code output while writing HFT software?

In contrast, I taught calculus many times. The goals and methods were exactly the same each time (ignoring small differences between the Rutgers and NYU syllabus): students should know how to differentiate and integrate, understand linear approximation, etc.

Measuring performance in reality is usually not that hard. It's just a few special professions (typically creative ones) where you run into difficulties.


> The problem is that programming is a far more heterogeneous task than teaching.

I read this part of your comment, and immediately assumed that you were not a teacher. Then you said you were, and furthermore that your "methods were exactly the same", which surprised me for two reasons. First of all, the task itself is extremely heterogeneous within-task even if you re-teach the same subject several times; second, no two classes are really the same, and although there are classes I've taught seven or eight times now, the particular dynamic of the particular classroom required adaptation, using different examples, different activities, and in some cases different evaluations. Not to mention all the different levels and subjects of classes that I've needed to teach.

At least, that's what good teachers do. (I don't mean that as an attack on the parent poster's teaching---I find it at least possible that they did all these things but didn't realise how much variability there was in the task.) There are, of course, mediocre teachers, who get through their task by learning a pattern and doing it the same way each time. But that kind of gets back to the OP: it may be that the middle is easy to measure, but the high end, not so much.


How is the task heterogeneous? Yeah, class dynamics differ a little bit depending on who the students are, and occasionally you get a few extra geniuses or dunces. But the goal of calc 1 is always to get students to integrate, differentiate, and do linear approximations.

In contrast, my current programming job was to build a visual search engine, my last was to trade stocks. Before that it was research in MRI, before that quantum mechanics simulations. Who knows what my next will be?

It makes sense to compare # of students who can integrate in 2012 to # of students who can integrate in 2011. It's a noisy measurement, but it works. How do you compare search engines to quantum mechanics simulations?
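To put a rough number on "noisy": with class-sized samples, a simple two-proportion z-test shows how weak a year-over-year signal really is. The pass counts below are invented for illustration:

```python
# Rough sketch of how noisy the "# of students who can integrate" signal is:
# a two-proportion z-test on made-up pass counts for 2011 vs 2012.
import math

def two_proportion_z(pass1, n1, pass2, n2):
    p1, p2 = pass1 / n1, pass2 / n2
    p = (pass1 + pass2) / (n1 + n2)          # pooled pass rate
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    return (p2 - p1) / se

# A 10-point swing in pass rate between two 30-student classes:
z = two_proportion_z(18, 30, 21, 30)  # 60% pass rate vs 70%
print(f"z = {z:.2f}")  # well under the ~1.96 threshold for significance
```

So one teacher's single-year comparison is mostly noise; the measurement only starts to work once you aggregate across many sections or years.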

As for my teaching, the data I have suggests I was average. But there was not enough data to get a good picture - I only taught a couple of classes with standardized finals.


> How is the task heterogeneous?

Within a single course, the heterogeneity comes from how to actually do things, how to understand what you did, how to understand what other people are doing when they do the same things, how to think about changes to those patterns when the task changes a little, how to write about what you did, how to talk about what you did, how to read/listen to other people talking about what they did, and many other aspects of understanding the material. None of these are actually the same thing; the unifying thread in a given content area is you might have to memorise the same jargon and diagram conventions for each of those different tasks.

A given course may also be somewhat heterogeneous in its content; for instance, a typical AI course might include algorithms and strategies based on discrete probability, others based on highly-architected structural representations of knowledge, and still others based on self-rewriting code. At a lower level (high school), you might consider a biology teacher, who gets to cover cellular biology, taxonomy and cladistics, genetics, anatomy and physiology, and possibly some other things I'm forgetting right now.

Between courses, teaching is heterogeneous because you're teaching a lot of different things. Even if it's all "math", for instance, there's a fair amount of variation between algebra, geometry, logarithms and function analysis, differential calculus, vector calculus, trigonometry, and probability and statistics, and that's just among the courses often taught at the high school level.

And all of that is only talking about the things taught, which speaks directly to the question of evaluation. But on the subject of heterogeneity, classroom dynamics can vary significantly just between semesters at the same school, not to mention between different schools, and within a single classroom you have students with assorted learning disabilities (documented or not) and simply different learning styles—some are more verbal, some need to see it done, some need to do it on their own first, some benefit more from working together, some really need to try it first and crash and then hear the way they were supposed to do it and try again. It's a lot more than just "a few extra geniuses or dunces".

Teaching is pretty damn heterogeneous. The best teachers I know are, and have to be, among the most mentally-agile people I know.


You are completely missing the point, so let me repeat:

"It makes sense to compare # of students who can integrate in 2012 to # of students who can integrate in 2011. It's a noisy measurement, but it works. How do you compare search engines to quantum mechanics simulations?"


It's on point because "number of students who can integrate in 2012 vs 2011" is rather more like "percent of regression tests passed in 2012 projects vs 2011 projects" than it is like "quality of (?) search engine code vs quantum simulation code", or better, "number of successfully completed customer tasks using search engine product vs quantum simulation product". That is it measures something that is not irrelevant, but awfully specific, and possibly less indicative of the larger whole than you might think. Because teaching is pretty heterogeneous. And the measurement is extremely indirect, i.e. how good another person is at something after interacting with the programmer/teacher's product.


I love standardized tests, they are easy. Really easy. Anyone who wants to ace them, can and will, many without studying. Are they helpful? Who knows, most likely not at all.


> Anyone who wants to ace them, can and will

This is an amazingly provincial comment and it does not reflect well upon you.

Rather than assuming malice in your comment, however, I would gently suggest that it would be rather difficult for someone passed from grade to grade without basic functional literacy to ace such a test. Or, for a sneakier, harder to quantify example, there are "standardized" tests in circulation (and I was exposed to some of them in middle and high school) where certain cultural information was implicitly demanded. Probably not a problem for middle-class suburban kids, but unlikely to be appropriate for underprivileged very-rural kids (of which the testing area had many).

A narrow worldview isn't something to be ashamed of, but it is something to recognize and compensate for when making statements like yours.


This kind of stuff is the backbone of the public education system. John Taylor Gatto outlines very well the six lessons every student is taught. I think this fits well with lesson 5:

In lesson five I teach that your self-respect should depend on an observer's measure of your worth. My kids are constantly evaluated and judged. A monthly report, impressive in its precision, is sent into students' homes to spread approval or to mark exactly -- down to a single percentage point -- how dissatisfied with their children parents should be. Although some people might be surprised how little time or reflection goes into making up these records, the cumulative weight of the objective-seeming documents establishes a profile of defect which compels a child to arrive at certain decisions about himself and his future based on the casual judgment of strangers.

Self-evaluation -- the staple of every major philosophical system that ever appeared on the planet -- is never a factor in these things. The lesson of report cards, grades, and tests is that children should not trust themselves or their parents, but must rely on the evaluation of certified officials. People need to be told what they are worth.

http://www.cantrip.org/gatto.html


And keep pushing. Keep writing these letters. Keep not accepting things "because that's just the way it is". Keep getting the news out. Even if it takes 10 or 30 years.

It's worth it to provide real educations for the current youth of society. It means a future worth living in.


The author claims that these stupid and clumsy tests are of "a force that wants you ignorant and pliable, and that needs you able to fill in the boxes and follow instructions".

If we're going to believe in some malign conspiracy hindering the public schools, wouldn't we look for it first in those institutions determining what does and doesn't happen in those schools? If we compare those institutions' stated purposes with their interests and actual behaviors, would we find them self-consistent? admirable? What would happen to their influence if that analysis were performed more deeply and frequently?

And if we imagined the tools of such a force, what would they be? Control over the language of debate, insistence on particular assumptions, a particular orthodoxy of procedure and calculation, prohibition of certain questions as unnecessary or beside the point?

Would such a force be honest about its aims and methods? Or would it seek to obscure them, and claim some different, more popular aims?

We might begin looking for this malign 'fungus' by noting these tests were instituted to establish some accountability. Why exactly was that? And why were these obviously lousy tools chosen -- what alternatives were discussed, and why were they rejected?

Yes, I can very well imagine a force, answering that description, wishing to limit the critical faculties produced by our public schools. I can imagine it very well indeed.


Well, don't link to or quote from the material under discussion, or gives examples of any questions or the like - that might leave your readers informed rather than merely exercised.


... you mean the intended audience that has already taken the test? Surely they're already aware of not just example questions, but the specific ones asked?


Putting aside for the moment that favorite punching bag - standardized tests - I question the rather overwrought tone of the piece, beginning with the title: 'A Test You Need to Fail'. This doesn't seem particularly good advice for students, nor does applauding the kinds of test answers she cites. Anyone should know that a response like "I don't think it applies to either one" with no supporting argument to exhibit the slightest knowledge of the subject would, even must, receive zero credit, with a "SAY WHY!!" scribbled in red in the margin. Students are very good at holding facile opinions out of ignorance, and should not be praised for it.

Yes, a good teacher can spin a response like that to gold in the classroom, by eliciting the threads of actual knowledge upon which the opinion hangs, but a test can hardly do so. I can understand a teacher lamenting that she didn't know and pass on to her students that test graders would be looking for facts and not opinions, but it's not the test's fault she didn't. It is hardly "criminal" that standardized tests are designed to be objectively gradable. If the questions are poorly designed - which other commenters seem to have assumed, even though no evidence for it is presented here - isn't that more likely to be the result of mediocrity than of malevolence?

I was likewise unconvinced by her citation of Noam Chomsky's remark. However much or little one considers Chomsky a "great man", one thing he is not is an expert on elementary/secondary education. He has opinions, like the rest of us. Why not cite the opinions of Jonas Salk or Stephen Sondheim? Chomsky can, however, be counted upon to state his opinions in stark, emotionally charged language, and a bit of this polemical propensity seems to have rubbed off on the (ex-)teacher.


I hate to sound McCarthyan, but if a teacher recommends her students to fail tests while also name-dropping Chomsky and referring to him as "the great man" in the same text, her ulterior goal can be hardly anything other than subversion of capitalistic system. Sometimes things are more black-and-white than we are willing to believe.


Could a voucher system really be worse than this? It would put all the control back in the hands of the parents. Some would misuse it; we know this. But we have the Internet now. Surely a Yelp-equivalent for voucher schools would quickly identify the good and bad schools. Competition would ensue.

Would it be perfect? No. Some parents would prefer religious indoctrination to actual education. Others simply wouldn't care. But perfection is not the correct standard. What we have now is dismal and getting worse.


How exactly do vouchers solve the problem of poorly written standardized tests?


By junking the entire system we have now.

The poorly written standardized tests are just a symptom of a system in which many people, with various interests, have a say in what and how children are to be educated. The result is education by committee -- exactly what the OP is complaining about.

A voucher system would render the educational bureaucracy superfluous. Parents who thought their kids were being poorly served could move them to a different school, or start their own school if nothing else were available.

This isn't as far-fetched as it sounds. Some parents home-school now. There already exists a small industry providing them with educational materials.

I know, this is a radical idea. It hasn't caught on in the 20 or so years since it was first seriously floated -- why would it now? But meanwhile the quality of our kids' education continues to spiral downward. I think the only possible solutions will be radical ones.


"There's a reason education sucks, and it's the same reason that it will never, ever, ever be fixed. ... Because the owners of this country don't want that. ... They don't want a population of citizens capable of critical thinking. ... They want obedient workers, people who are just smart enough to run their machines and do the paperwork, and just dumb enough to passively accept ..." -- George Carlin

http://www.youtube.com/watch?v=GseyaEibb_4


I'm definitely not for standardized testing, and I think grade school is more about subsidized daycare than effective learning. But the letter is a little bit extreme when suggesting failure of the test. Any sufficiently intelligent child will be able to express their individuality and creativity outside of that particular testing environment. While writing the test, you can choose to recognize what it is, and supply the formulaic answers expected to do well. Then forget about it and continue being creative and excellent in your other endeavours.

There are many scenarios in life where you are expected to follow a procedure. The procedure may not be ideal, and it may even be completely counterproductive. But you jump through the hoops, and apply yourself to the expected outcome of the procedure if you want to succeed.

If I were advising my child I would tell them it's nonsense, the adults messed it up, but try to do well anyway given the rules and expected answers. Playing along can be an important skill. To be used judiciously.


"ou can compose a “Gettysburg Address” for the 21st century on the apportioned lines in your test booklet, but if you’ve provided only one fact from the text you read in preparation, then you will earn only half credit. In your constructed response—no matter how well written, correct, intelligent, noble, beautiful, and meaningful it is—if you’ve not collected any specific facts from the provided readings (even if you happen to know more information about the chosen topic than the readings provide), then you will get a zero."

I of course wrote Gettysburg addresses routinely in 8th grade English. No doubt the reporters at the Washington Post did, too, which is why I've sometimes had to read a story two or three times to find out who did what to whom. Creativity, ain't it great?

Having said that, I think that the mania for measurement has little to do with actual instruction.


I have no way to judge this teacher's letter. It would be really helpful if the tests were published online; then I could evaluate whether this teacher's waxing poetic has merit. Does anyone have a link to sample questions or a past test?

Everyone likes to praise or bitch about the testing, but has anyone actually seen the test?


Elsewhere in this thread, user wtn has posted a link to past years' exams.


Merely complaining about the standardized tests does not help unless students and teachers actually protest against them, e.g. all students deciding to score zero on a national test. But that is very unlikely to happen; some people are very attached to the test system because they know its tricks and can do well on it. A better way of changing the education system is to start a new private school with a better approach and get good results, in the sense that a high percentage of its students become top leaders in their fields, to show the education ministry what a better system looks like. (The ministry is surely aware of the complaints from the public; the only reason it hasn't changed the system yet is that it really doesn't know what kind of system would be better.)


> I will also give you the best advice I can, ... “When they give you lined paper, write the other way.”

What the hell does that even mean? If that is your best advice, you are probably full of bad advice. I cannot comprehend the level of confusion required to think that quote is clever.


I interpreted it as "write on the paper at a 90 degree angle to the lines" (i.e. not the way you should be doing it). This seems to be a standard rehashing of "going against the grain": advice to be different, to not follow expectations. That's not what I would necessarily call bad advice.


The Esteemed Great Leader pg might say, or rather he did say in http://paulgraham.com/hs.html which I'll just quote:

"Rebellion is almost as stupid as obedience. In either case you let yourself be defined by what they tell you to do. The best plan, I think, is to step onto an orthogonal vector. Don't just do what they tell you, and don't just refuse to."


The quote is a metaphor. It is clearly not about going the other way on lined paper, but rather the basic principle that one should not blindly follow instructions because someone else says they are right. If what you are told to do is wrong, don't do it.


Actually it's even stronger than that. There was no instruction to follow the lines; it's just an assumption, something that comes naturally, which is why it is such a pernicious thing. The quote is a warning against having your thoughts and actions guided in a predetermined direction without even being aware of it.


Bring a metaphor to a room full of over-literal logic-minded engineers and watch it get kicked to death I guess. :-(


>What the hell does that even mean? If that is your best advice, you are probably full of bad advice. I cannot comprehend the level of confusion required to think that quote is clever.

It means go against the grain? I can't comprehend the level of ignorance that makes you incapable of understanding one of the classic dystopian novels.


I agree. The school-issued iPads will auto-rotate the virtual paper making this advice pointless.


There is more being said with those words than just what's written.

"le contexte est plus fort que le concept" ("context is stronger than concept")


or, curiously juxtaposed, "to read between the lines"...


Any student who wrote in a direction perpendicular to the lines of the paper would likely be sent in for well-deserved psychological counseling.


It means "don't submit to authority".


If I could, I would upvote this to oblivion.

The current system stomps out creativity and meaningful questioning.

And the best part is, it doesn't stop after high school. College is just more BS and more tests, followed by the promise of mind-numbing jobs and a weekend existence.

This is me, a student who has had enough of the crap.

http://12most.com/2012/03/21/reasons-for-not-going-back-to-c...


Hmm, that post does not reflect my college experience at all. For one, none of my tests are about memorization or rote learning--all of them are either open book or allow you a page or two of notes, and all require critical thinking and a thorough understanding of the material. In fact, a bunch of my tests actually introduced new material in the questions--the first time I learned about tries was on a midterm! The question explained what they are and how they work; I just had to implement them.

My classes also encourage creative thinking. Most of the projects are open-ended to some degree, and some are almost entirely free. The lectures (at least for the more advanced courses) tend to be more interactive and involve thinking as much as listening to the professor.

The people I've met are also different--nobody cares too much about grades and everybody has side projects and interests outside of classes. Also, quite a lot of them are absurdly smart and exactly the sort of people I would like to work with in the future.

And the courses I've taken do provide value: I have a much better breadth in various subjects than I would had I covered the same material on my own. Also, I have done a bunch of cool stuff I would not have done on my own, like designing a simple processor. I would never have thought to do it on my own, but it has really helped me to understand how computers work.

Also, there are resources (like graduate courses) for getting more depth in topics I'm interested in. I think I would learn more from talking about papers and developments with a group (basically how some graduate courses work) than I would just by reading the same papers on my own. It also provides more structure and organization, so I don't have to curate what to learn about myself.

So really, I think you may have had an unfortunate experience with college, but your criticisms are not entirely universal. College may not be for everybody, but I think it's the right choice for plenty of people.


Yeah, my experience sucked at this school (school-wise; the social experience is awesome haha), but I've been at two other universities before this, and while my experience there was better, something was missing.

I had side projects too. I think the side projects should be curriculum.

College may not be for everybody, but I think it is the right choice for some people.

But I found out that outside of school there are even smarter people than me as well, which is awesome. I won't bind myself to four walls and a roof away from home for an education.


Education is one of the things that democratic states should not get involved in. There is a troubling feedback loop when a state whose claim to legitimacy is the votes of its citizens gets involved in activities which determine what those votes will be, and education is just such an activity. For the same reason that there shouldn't be government newspapers or television stations, there shouldn't be government educators.


It's truly disgusting what has become of education in the United States. I know many teachers, and all of them are outstanding people whom I trust fully. Yet the system seems determined to hinder them at every turn.


She's 100% right. Standardized testing is a disease on education in this country.

The grade school system in this country is nothing but a babysitting program for people who aren't old enough to vote.


As I recall, the written portion of the AP science exams was like this. No prose necessary: you could just write a bullet list of the required facts and get full credit.


I stopped reading when she mentioned "the great man" Chomsky.


Also see a study done by an MIT professor on SAT essay scores and essay length.

http://www.nytimes.com/2005/05/04/education/04education.html...

http://www.npr.org/templates/story/story.php?storyId=4634566

Money quote: "It appeared to me that regardless of what a student wrote, the longer the essay, the higher the score."


What better way to qualify people for academia?



