
Philosophically: I've seen some "bespoke" systems like this live long enough for a nice off-the-shelf solution to come around that solves the problem much more elegantly and efficiently than the "bespoke" one does. That seems like a normal and, dare I say, organic path for these kinds of systems to take.

I don't even mind senior devs building things like this as a cornerstone of the company, provided that at any given time at least two people know how it works and can work on it, and that sufficient time was spent evaluating existing solutions before making that call. It should be built with the full expectation that the first paragraph is inevitable.

Specifically, in this case: without any actual data (number of reads, number of writes, size of writes, size of data, read patterns, consistency requirements) it is not possible to judge whether going custom on such a system was merited. I would find it VERY difficult to conclude that this use case couldn't be solved with very common tooling such as Spark and/or NATS Streaming. "Provided the entire dataset fits in memory" is a very large design liberty when designing such a solution, and it doesn't scream "scalability" or n+1-node write consistency to me. I say this acknowledging full well that etcd is an unbelievably well written piece of software, with durability and speed beyond its years.
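To make that concrete: the "fits in memory" assumption is exactly the kind of thing you can sanity-check with a back-of-envelope calculation before committing to a design. A minimal sketch, with every figure invented purely for illustration (none of these numbers come from the system being discussed):

```python
# Hypothetical workload figures -- placeholders, not real measurements.
writes_per_sec = 5_000       # assumed sustained write rate
avg_write_bytes = 1_024      # assumed average record size
retention_days = 30          # assumed retention window

daily_bytes = writes_per_sec * avg_write_bytes * 86_400
total_bytes = daily_bytes * retention_days
total_gib = total_bytes / 2**30

# Whether this figure fits comfortably in a node's RAM is what
# makes or breaks an "entire dataset in memory" design.
print(f"~{total_gib:,.0f} GiB retained over {retention_days} days")
```

With these made-up inputs the answer lands in the tens of terabytes, well past single-node RAM, which is precisely why the real read/write/size numbers matter before judging the custom-vs-off-the-shelf call.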

Keeping my eyes open for that post-Series-A post-mortem post.



