Scaling is hard. True.

But the question is: are you making your life miserable by trying to scale before exhausting the other options?

Most applications can be optimised by orders of magnitude, and that is far easier than trying to scale them by orders of magnitude. Almost any problem is easier to solve, and faster to deliver, when it fits on a single server.




Some people just don’t know how many users can be served from one server.

Usually it is simple thinking that goes wrong: the system is slow, so add more hardware. Then it turns out the developers did a bad job and the whole thing could still run on a single small server - someone would just have to write code with an understanding of big O notation.

The main point of big O notation is that some things, implemented incorrectly, will be slow regardless of how much hardware you throw at them.
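
As a concrete illustration (a minimal sketch in Python; the function names and data are invented, not from any real codebase), here is the same duplicate check written both ways:

    def has_duplicates_quadratic(items):
        # Nested loops: roughly n^2/2 comparisons. At n = 10 million that
        # is ~5 * 10^13 comparisons - no realistic hardware makes it fast.
        for i in range(len(items)):
            for j in range(i + 1, len(items)):
                if items[i] == items[j]:
                    return True
        return False

    def has_duplicates_linear(items):
        # One pass with a hash set: O(n) time, O(n) extra memory.
        seen = set()
        for item in items:
            if item in seen:
                return True
            seen.add(item)
        return False

The quadratic version cannot be rescued by a bigger box; the linear version is fast on a laptop.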


I don't know if knowledge of big O notation is that big a deal - many of the issues I've seen in recent years have come from O(n) code where the value of n in production was several orders of magnitude higher than anyone had bothered to test. Classic example: all our test files were ~50 KB, but recent customers expected to be able to work with files over 1 GB, which of course took 20,000x longer to process. And the difference in user perception between something taking 100 ms and something taking well over 30 minutes is rather a lot. In that particular case there was realistically no way we could process a 1 GB file in the sort of time it's reasonable to have a user sit there and wait, so it required a rethink of the whole UX.

In other cases it turned out some basic DB lookup consolidation was sufficient, even if it did require writing new DB queries and accepting significantly higher memory usage (previously the data was read and discarded per item). The occasional bit of O(n^2) code I have found that didn't need to be O(n^2) was usually just a simple mistake.
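
For what it's worth, the "DB lookup consolidation" pattern is roughly the shape below - a hedged sketch against an in-memory SQLite table with an invented schema, not the actual queries involved:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE prices (item_id INTEGER PRIMARY KEY, price REAL)")
    conn.executemany("INSERT INTO prices VALUES (?, ?)",
                     [(i, i * 0.5) for i in range(500)])
    item_ids = list(range(500))

    def price_per_item(ids):
        # Before: one query per item - n round trips, each paying latency.
        out = {}
        for item_id in ids:
            row = conn.execute(
                "SELECT price FROM prices WHERE item_id = ?", (item_id,)
            ).fetchone()
            out[item_id] = row[0]
        return out

    def price_batched(ids):
        # After: one batched query - a single round trip, but the whole
        # result set sits in memory at once (the trade-off mentioned above).
        placeholders = ",".join("?" * len(ids))
        rows = conn.execute(
            f"SELECT item_id, price FROM prices WHERE item_id IN ({placeholders})",
            ids,
        ).fetchall()
        return dict(rows)

    assert price_per_item(item_ids) == price_batched(item_ids)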


Knowledge of the notation alone, maybe not, but O(n) cases like the one you describe still have to be addressed. Users and stakeholders expect "all data" to load with one click and to always be instant. As n grows, just as you say, the UX or workflow often has to change to partition the data even when the work is only O(n) - adding pagination, or moving statistics crunching to OLAP. It gets worse quickly once database joins are involved: you can end up with O(n^2) queries, so even though DB engines are insanely fast and well optimised on their own, you have to understand what the database actually does - a query that falls back to full table scans can also kill performance.
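
To make the pagination point concrete, here is a sketch (Python + SQLite, invented table and column names) of keyset pagination, which keeps each page at O(page size) by seeking through an index instead of letting a large OFFSET degrade into a scan:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, payload TEXT)")
    conn.executemany("INSERT INTO events VALUES (?, ?)",
                     [(i, f"event-{i}") for i in range(10_000)])
    PAGE_SIZE = 100

    def fetch_page(after_id=0):
        # Keyset ("seek") pagination: the primary-key index jumps straight
        # to the start row, so each page costs O(PAGE_SIZE) regardless of
        # how deep into the data the user has scrolled.
        return conn.execute(
            "SELECT id, payload FROM events WHERE id > ? ORDER BY id LIMIT ?",
            (after_id, PAGE_SIZE),
        ).fetchall()

    page = fetch_page()
    while page:
        # ... render this page to the user ...
        page = fetch_page(after_id=page[-1][0])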


Until it was bought by AOL, ICQ presence scaled up to the largest Digital 64-bit Unix box they sold (messages themselves were peer to peer). It worked remarkably well - a solid hardware platform, so not HA, but pretty available nonetheless. Network communication was over UDP, which was quite a lot cheaper at the time.



