
> Would you rather maintain a couple hundred lines of straightforward code or thousands of lines of class hierarchies with delegates and whatnot?

I feel this is a gross misrepresentation of the problem.

With higher-level frameworks, you can add complex features with one-liners, which by their very nature (generic, extendable, and general purpose) are bloated and underperform compared with the code you could roll yourself. However, you need to write far more than a one-liner to reimplement those features, not to mention the time it would take your team to test, validate, and maintain it.
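To make the tradeoff concrete, here's a minimal Python sketch using the stdlib's `lru_cache` as a stand-in for a "framework" feature: one decorator line versus a hand-rolled cache that you now have to test and maintain yourself.

```python
from functools import lru_cache

# Reusing someone else's battle-hardened code: one decorator line gives
# memoisation, bounded eviction, thread safety, and cache statistics.
@lru_cache(maxsize=128)
def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

# Rolling your own: leaner in principle, but unbounded, untested,
# and now your team owns every bug in it.
_cache = {}

def fib_manual(n):
    if n in _cache:
        return _cache[n]
    result = n if n < 2 else fib_manual(n - 1) + fib_manual(n - 2)
    _cache[n] = result
    return result

assert fib(30) == fib_manual(30) == 832040
```

The hand-rolled version already needs more code for less functionality, and that gap only widens for genuinely complex features.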

Therefore, contrary to your initial assumption, there is indeed a tradeoff between reusing someone else's general-purpose but battle-hardened code and writing your own specialized, lean, but untested code, and the cost of rolling your own implementation hardly justifies the potential performance gains.




The truth is that the framework is built on another framework, which is built on another, which is built on another, until we get down to individual transistors.

The highest-level framework isn't always the one best suited to the problem you have. Maybe it's a one-liner, but due to the overhead you have to run a giant distributed system instead of a single machine with shared memory. Then the overhead of orchestrating all those machines might be more than writing 10 lines in a lower-level framework.


I also feel like your post is another gross misrepresentation of the problem.

The issue with slow software is rarely the framework itself. Even frameworks like Rails and Django are more than fast enough for most things. If you need absurd performance (for, I don't know, HFT?), then there are other frameworks in other languages. There is no need for bespoke code! Fast frameworks do exist. And when frameworks are slow in some parts, someone can just go there and optimise them for everyone!

However, the issues we normally encounter regarding speed are often caused by convoluted bespoke architectures.

It's always because database access has to go through ten, twenty classes, and not only is it slow, it's also hard to maintain, as you've lost control over what the SQL looks like. It's always because serialisation requires some crazy reflection that is several orders of magnitude slower and more complex than a simple "to json" call. It's always because the hot loops of your sorting algorithms have to go through some unnecessary, only-used-once abstraction that makes the whole hot loop slow.
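A toy Python sketch of the kind of indirection described above (all class names hypothetical): both paths produce the same JSON, but one is a single readable call and the other routes every field through a serializer class, even though only one format is ever produced.

```python
import json

# Direct approach: one call, trivial to read, profile, and maintain.
def to_json_direct(user):
    return json.dumps({"id": user["id"], "name": user["name"]})

# Layered approach (hypothetical names): each field passes through a
# Field object and a Serializer class before reaching the same json.dumps.
class Field:
    def __init__(self, name):
        self.name = name

    def extract(self, obj):
        return obj[self.name]

class UserSerializer:
    fields = [Field("id"), Field("name")]

    def serialize(self, obj):
        return json.dumps({f.name: f.extract(obj) for f in self.fields})

user = {"id": 1, "name": "Ada"}
assert to_json_direct(user) == UserSerializer().serialize(user)
```

Two extra classes here is tame; the architectures being complained about multiply this pattern across every layer of the application.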

Java is fast as heck but got a reputation for being slow among users. Enterprise Java projects likewise had a reputation for being difficult to navigate and therefore more expensive. The issue wasn't Java: it was the convoluted bespoke architectures that plague it.

It is widely acknowledged, even by their proponents, that those difficult architectures take more time to build. However, there is zero evidence that such things help with maintainability. In fact I'd argue that those arcane architectures make it worse for the common scenarios: bug fixing (because more classes mean more bugs and more places for bugs to hide), optimisation (because measurement is harder in complex programs, and optimising often requires dismantling and rebuilding things), adding features (because if it was hard to build the first features, it's gonna be hard for future brand-new features too), and even refactoring (if the problem is the complex architecture itself, refactoring it in parts will lead to a messier program).

So there you go: waste of man-hours and of processors.

So no, the parent poster's complaint has nothing to do with the reuse of frameworks or libraries.



