
> While I can understand the desire to avoid misallocation, this model assumes the funder has a better idea of what needs resources than the funded.

Reminds me of a (probably the) reason we're doing standardized testing in schools. A decent teacher is able to teach and test kids much better than standardized tests can, but society needs consistency more than quality, and we don't trust that every teacher will try to be good at their job. That (arguably justified) lack of trust leads us as a society to choose a worse but more consistent and people-independent process.

I'm starting to see this trend everywhere, and I'm not sure I like it. Consistency is an important thing (it lets us abstract things away more easily, helping turn systems into black boxes which can be composed better), but we're losing a lot of efficiency that comes from just trusting the other guy.

I'd argue we don't have many PARCs and MITs around anymore because the R&D process matured. The funding structures are now established, and they prefer consistency (and safety) over effectiveness. But while throwing money at random people will indeed lead to lots of waste, I'd argue that in some areas we need to take that risk and start throwing extra money at some smart people, while also isolating them from the demands of policy and markets. It's probably easier for companies to do that (especially before processes mature), because they're smaller - that's why we had PARC, Skunk Works, experimental projects at Google, and the occasional billionaire trying to save the world.

TL;DR: we're being slowed down by processes, exchanging peak efficiency for consistency of output.




> A decent teacher is able to teach and test kids much better than standardized tests can, but society needs consistency more than quality, and we don't trust that every teacher will try to be good at their job.

It's more about it being way too hard to fire bad teachers; standardized tests, in theory at least, are an attempt at providing objective proof of bad teaching that won't be subject to union pushback and/or lawsuits.

I think a lot of the desire for standardized testing would dissolve if principals could just make personnel decisions like normal managers. And, of course, it's up to their superiors to hold them accountable for being good at that job.

> ...we're losing a lot of efficiency that comes from just trusting the other guy.

I think people have been trusting public schools a lot. That generally worked out great for people in affluent suburban districts. And it worked out horribly for people in poor, remote, and urban districts.


Well, that's my point expressed in different examples. Standardized testing makes the system easier to manage from the top, and makes the result consistent. Firing "bad teachers" is a complex problem - some teachers can get fired because they suck at educating, but others just because the principal doesn't like them, etc. With standardized rules and procedures, you try to sidestep the whole issue, at the cost of the rules becoming what you optimize for, instead of actual education (which is hard to measure).

> I think a lot of the desire for standardized testing would dissolve if principals could just make personnel decisions like normal managers.

That could maybe affect the desire coming from the bottom, but not the one from the top - up there, the school's output is an input to further processes. Funding decisions are made easier thanks to standardized tests. University recruitment is made easier thanks to standardized tests. Etc.

> I think people have been trusting public schools a lot. That generally worked out great for people in affluent suburban districts. And it worked out horribly for people in poor, remote, and urban districts.

That's the thing I'm talking about. I believe it's like this (with Q standing for Quality):

  Q(affluent_free) + Q(poor_free) > Q(affluent_standardized) + Q(poor_standardized)
  Q(affluent_standardized) < Q(affluent_free)
  Q(poor_standardized) > Q(poor_free)
I.e., education without standardized tests may be better for society in total in terms of quality, but then it totally sucks to be poor. Standardized education gives mediocre results for everyone.
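To make it concrete, here's a toy set of numbers satisfying all three inequalities (the values are completely made up, just to illustrate the shape of the trade-off):

  # Hypothetical quality scores (arbitrary units), invented only to
  # satisfy the three inequalities above -- not real data.
  q_affluent_free = 9          # free teachers, affluent district: great outcomes
  q_poor_free = 3              # free teachers, poor district: quality collapses
  q_affluent_standardized = 6  # standardized, affluent district: pulled toward the middle
  q_poor_standardized = 5      # standardized, poor district: lifted by a guaranteed floor

  assert q_affluent_free + q_poor_free > q_affluent_standardized + q_poor_standardized
  assert q_affluent_standardized < q_affluent_free
  assert q_poor_standardized > q_poor_free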


The usual "business school" approach to preventing metrics from causing unintended effects is to assemble a balanced scorecard of a dozen or metrics. If you do it right the negative factors of each cancel each other out. So for example to evaluate public school teachers instead of just looking at student improvements in standardized test scores relative to their peers you could also factor in subjective (but quantified) ratings from principals / students / peers, number of continuing education credit hours, student participation levels in extracurricular programs, counts of "suggestion box" ideas, etc.

http://www.balancedscorecard.org/Resources/About-the-Balance...
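For a concrete flavor, here's a minimal sketch of such a composite score; the metric names and weights are invented for illustration and aren't taken from the Balanced Scorecard materials:

  # Toy composite score from several normalized metrics (each 0..1).
  # Metric names and weights are hypothetical.
  WEIGHTS = {
      "test_score_gain_vs_peers": 0.30,   # the standardized-test component
      "principal_rating": 0.15,           # subjective but quantified
      "student_rating": 0.15,
      "peer_rating": 0.10,
      "continuing_ed_hours": 0.10,
      "extracurricular_participation": 0.10,
      "suggestion_box_ideas": 0.10,
  }

  def composite_score(metrics):
      # Weighted sum; gaming any single metric is diluted by the others.
      return sum(w * metrics.get(name, 0.0) for name, w in WEIGHTS.items())

  example = {
      "test_score_gain_vs_peers": 0.55,
      "principal_rating": 0.80,
      "student_rating": 0.70,
      "peer_rating": 0.75,
      "continuing_ed_hours": 0.60,
      "extracurricular_participation": 0.40,
      "suggestion_box_ideas": 0.30,
  }
  print(composite_score(example))  # ~0.595 on this made-up data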


Public education is a bad comparison because it drags in a lot of contentious political debates about how we deal with poverty, racism, difficult home environments, taxation, etc. Based on a fair amount of reading on the subject I think the primary thing driving the standardized testing push is the dream that with enough data we'll find some miracle teaching technique which will avoid the need to tackle the actual hard problems.


Agreed. Optimization for process over outcomes seems to be a recurring problem in large organizations. Unfortunately, it's not hard to understand why decision makers prefer this approach, given the pressures from shareholders/voters. No one gets fired for following the process.

A similar issue was just discussed yesterday in a thread on Jeff Bezos' letter to shareholders:

"A common example is process as proxy. Good process serves you so you can serve customers. But if you’re not watchful, the process can become the thing. This can happen very easily in large organizations. The process becomes the proxy for the result you want. You stop looking at outcomes and just make sure you’re doing the process right. Gulp."

https://news.ycombinator.com/item?id=14107766


Standardized testing has also restricted teachers to standardized teaching. They must complete X number of units from the binder every Y number of days. This leaves little time for the teacher to come up with their own lesson plans. I think there needs to be a balance between a teacher's autonomy in the classroom and standardization. While I was in school, I heard many teachers complain that they felt they didn't have time to focus on the things they wanted to do in class. In my elementary school, the class would go to a different teacher for either STEM or liberal arts. I think this is a good start: because teachers can focus solely on related material, they can tie in lesson plans and make sure the standardized subjects are taught efficiently, leaving time for the other topics they want to focus on.

I remember in middle school, I had a couple of teachers who would say "ok, this is the standardized stuff I have to say" and then go into a very boring lecture on what we had just been learning, except the way we first learned it was interesting and engaging. They were simply dotting their i's to make sure they had done their job in conveying the material the state wanted us to know.


But is it really a trade-off, for every teacher or scientist?

Sure, some teachers could use the freedom to teach better (and create new teaching methods), but statistically, won't most teachers perform better when driven by above-average methods?

And as for research: sure, we need those big breakthroughs, but we also need many people to work on the many small evolutionary steps of each technology. Surely we cannot skip that?


The issue here is, IMO, that the trustless processes deliver consistency and reliability at such a huge cost to quality of outcome that the outcome ends up below average. In other words, an average teacher could do a better job left free than when directed by standardized testing, but the results would be so varied in quality and direction[0] as to be useless within the current social framework. And it's not that society couldn't handle it; it's that when building systems, we want to make things more consistent so they can be more easily built upon.

Basically, think of why you build layers of abstraction in code, even though it almost always costs performance.
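A contrived sketch of that trade-off, with invented names (nothing here is from the thread or any real codebase):

  # Same computation, direct vs. behind a small layer of abstraction.
  # The abstract version is easier to extend and compose, but pays
  # function-call and indirection overhead on every item.

  def total_direct(prices):
      s = 0.0
      for p in prices:       # one tight loop, tax rate hard-coded inline
          s += p * 1.08
      return s

  class FlatTax:
      # The abstraction: any pricing policy exposing .apply() would do.
      def __init__(self, rate):
          self.rate = rate
      def apply(self, price):
          return price * (1 + self.rate)

  def total_abstract(prices, policy):
      # Extra calls per item, but the policy is now swappable.
      return sum(policy.apply(p) for p in prices)

  prices = [10.0, 20.0, 5.0]
  assert abs(total_direct(prices) - total_abstract(prices, FlatTax(0.08))) < 1e-9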

--

[0] - e.g. one teacher focusing on quality math skills, another on quality outdoors skills, etc.


Layers of abstraction in technology don't always cost performance, when you look historically and system-wide. For example, the fact that digital technologies were modularized (physics + lithography / transistors / gates / ... / CPUs / software developed without tight coupling or optimization) allowed faster improvements across the whole stack, according to some researchers, who also think that, in general, modular technologies advance faster. And modularity always requires some standardization.

And as for education, I don't think standardization always leads to performance loss. For example, the area of reading has seen a lot of research, and one of the results is "direct instruction", one of the best methods for teaching reading, and a highly standardized one.

But maybe what you're saying is true for standardized tests in their current implementation.


You always trade something off when you're abstracting. Some things become easier, but at the expense of other things becoming more difficult.

To use your example - more modularized technologies are more flexible and can be developed faster, but they no longer use resources efficiently. A modular CPU design makes the design process easier, but a particular modular CPU will not outperform a hypothetical "mudball" CPU designed so that each transistor has close-to-optimal utilization (with "optimal" defined by requirements).

Or compare standard coding practices vs. the code demoscene people write for constrained machines, in which a single variable can have 20 meanings, each code line does 10 things in parallel, and sometimes the compiled code serves as its own data source.
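A contrived toy in that spirit (real demoscene work is hand-tuned assembly, so treat this purely as flavor):

  # Standard practice: separate, single-purpose variables.
  def stats_readable(xs):
      total = 0
      count = 0
      for x in xs:
          total += x
          count += 1
      return total, count

  # "Packed" version: one variable doing double duty (running sum in the
  # high bits, element count in the low bits), one line doing two things.
  # Only valid for small non-negative ints -- the point is the obfuscation.
  def stats_packed(xs):
      s = 0
      for x in xs:
          s += (x << 16) + 1
      return s >> 16, s & 0xFFFF

  assert stats_readable([3, 5, 7]) == stats_packed([3, 5, 7])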

--

The way I see it, building abstractions on top of something is shifting around difficulty distributions in the space of things you could do with that thing.


Sure, in the small - a few developers working on something - fully optimizing stuff usually works better.

But in bigger design spaces, when you let hundreds of thousands of people collaborate, create bigger markets faster, and grab more revenue, you enable a much more detailed exploration of the design space.

And often, you discover hidden gold. But also - if you discover you've made a terrible mistake in your abstractions - you can often fix that, maybe in the next generation.



