
Hehe "shenanigans" ;-)

The truth is, most things can be made to work - though there are various trade-offs ("shenanigans") involved.

e.g.: In my current project, there's a bit that reads large-ish files, parses/converts the data, and loads the result (on the order of tens of thousands of rows) into SQL tables.

For reasons that escape me, instead of doing a bulk SQL update then and there (after parsing, the entire dataset is _literally_ in memory), thousands of messages are enqueued to be consumed by a practically unlimited number of Lambda functions, which then... bomb out as the SQL database hits its connection limit and keels over.
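Since the parsed dataset is already sitting in memory, the bulk load could be a single batched insert over one connection. A minimal sketch using Python's stdlib sqlite3 (the table and column names are made up for illustration):

```python
import sqlite3

# Hypothetical parsed dataset, already fully in memory.
rows = [(i, f"name-{i}") for i in range(50_000)]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, name TEXT)")

# One bulk insert over one connection, instead of enqueueing
# one message per row for a fleet of Lambda functions.
with conn:
    conn.executemany("INSERT INTO items (id, name) VALUES (?, ?)", rows)

count = conn.execute("SELECT COUNT(*) FROM items").fetchone()[0]
print(count)
```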

I guess those shenanigans have better buzzword compliance!




Sounds like one of my favorite shenanigans: the stakeholders tell you it needs to handle billions, so you engineer it that way. Reality: it needs to handle thousands. Sounds like you got the reverse shenanigans.


Except that there's only one database, so it wouldn't even handle billions. In fact, I suspect it will handle significantly less load than a batched model would. I've been able to get 100x speedups by batching SQL queries.
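A 100x win is plausible because each unbatched statement pays a full round trip to the database; batching amortizes that cost across many rows. A minimal sketch with Python's stdlib sqlite3 (the table name and the 1,000-row chunk size are arbitrary choices):

```python
import sqlite3

def chunked(seq, size):
    """Yield fixed-size slices of seq."""
    for i in range(0, len(seq), size):
        yield seq[i:i + size]

rows = [(i,) for i in range(10_000)]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (v INTEGER)")

# 10 batched statements instead of 10,000 single-row round trips.
with conn:
    for batch in chunked(rows, 1_000):
        conn.executemany("INSERT INTO t (v) VALUES (?)", batch)

total = conn.execute("SELECT COUNT(*) FROM t").fetchone()[0]
print(total)
```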


I miss the days of just one database sometimes…

Batching is the way to go, 100%. Remove the network as much as you possibly can.


You should really put a connection manager in between your lambda functions and database, like HAProxy or AWS RDS Proxy.
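The core of what such a proxy buys you is a cap on concurrent connections: callers share a fixed set instead of each opening their own. A toy sketch of that idea (not how HAProxy or RDS Proxy are implemented, just the pooling behavior they provide):

```python
import queue
import sqlite3

class ConnectionPool:
    """Minimal pool that caps concurrent database connections.

    A rough sketch of what a proxy layer (HAProxy, AWS RDS Proxy)
    provides: callers share a fixed set of connections instead of
    each opening their own and exhausting the database's limit."""

    def __init__(self, factory, size):
        self._idle = queue.Queue()
        for _ in range(size):
            self._idle.put(factory())

    def acquire(self, timeout=0.1):
        # Blocks (up to timeout) when every connection is checked out,
        # raising queue.Empty instead of overloading the database.
        return self._idle.get(timeout=timeout)

    def release(self, conn):
        self._idle.put(conn)

pool = ConnectionPool(lambda: sqlite3.connect(":memory:"), size=2)
a = pool.acquire()
b = pool.acquire()

exhausted = False
try:
    pool.acquire()    # a third caller has to wait its turn...
except queue.Empty:
    exhausted = True  # ...rather than piling onto the database

pool.release(a)
c = pool.acquire()    # succeeds: the released connection is reused
```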

(still, that doesn't change the fact that the code is badly designed)



