
>Find solid dependencies, don't use too many of them, make sure you update them, use static analysis tools and dependency scanners, etc. Same drill as for anything.

That's hilarious, simply because most heavy Docker users I've interacted with basically use it as a way to rapidly build houses of cards, glue-sticking piles and piles of dependencies together.

This is largely because application complexity in some domains and expectations of development pace have ballooned to insanity, but there are plenty of heavy Docker users who do this even when those sorts of pressures don't exist.

I think as software progresses and more development techniques become easier to leverage and automate (correctly and incorrectly), we're getting to the point advanced statistical software packages reached quite some time ago: no matter what data you throw at the software and what analysis you choose, the software has so much experience baked into it that it assumes what you're doing is valid and hands you results that look reasonable even when they aren't. Much of the preprocessing is automated too, so it's incredibly easy to do something very wrong without realizing it if you don't know what you're doing. Automation has made it much easier to fail late instead of fail early.
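
A minimal sketch of what that looks like in practice (a made-up example, not from the thread): a straight-line fit in numpy will happily hand back plausible-looking numbers for data that plainly isn't linear, and nothing along the way fails early.

    import numpy as np

    rng = np.random.default_rng(0)

    # Made-up data with an obviously quadratic relationship.
    x = np.linspace(0, 10, 50)
    y = x ** 2 + rng.normal(0, 5, size=x.size)

    # A straight-line fit returns coefficients regardless of whether a
    # linear model is appropriate; there is no step that fails early.
    slope, intercept = np.polyfit(x, y, 1)
    r = np.corrcoef(x, y)[0, 1]
    print(f"slope={slope:.1f}  intercept={intercept:.1f}  r={r:.2f}")
    # r comes out high, so the output *looks* reasonable even though the
    # linearity assumption is simply wrong.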




This was all achingly true well before Docker, though. (That's why Docker got popular, after all!)

Docker isn't the disease; its use in such a manner is just a symptom.


The trouble is finding the balance between a simple DIY implementation and cobbling together a house of cards.

"Allow CRUD of names" ok, you'll have it next sprint. "Allow upload of their resumé docs" no problem, two more weeks. "Parse the docx and PDF files users upload and extract the contents and fill our db schema with relevant data". Well shit. The specs to understand either of those formats are gigantic and now I'm forced to cobble together something using more dependencies if you want it done in less than two year's time.


Is it really that troublesome to strike a balance between implementing something yourself and using a library when the difference is two weeks versus two years?

If, for example, a file format is really complex, it most likely has one implementation that almost everyone uses. If there is a problem with it, you will hear about it. Use that.

On the other hand, if a file format is really simple or you only need a simple subset of it, rolling your own could avoid you getting 0-dayed when the most popular implementation has a vulnerability.
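
As a sketch of the roll-your-own end of that trade-off (a made-up "key = value" config format, purely illustrative), a subset parser can be a handful of stdlib-only lines:

    # Hypothetical example: if all you ever need is "key = value" lines,
    # a few lines of stdlib code cover it without a parser dependency.
    def parse_simple_config(text: str) -> dict[str, str]:
        result = {}
        for line in text.splitlines():
            line = line.strip()
            if not line or line.startswith("#"):
                continue                      # skip blanks and comments
            key, sep, value = line.partition("=")
            if sep:
                result[key.strip()] = value.strip()
        return result

    print(parse_simple_config("# demo\nname = resume-service\nport = 8080"))
    # {'name': 'resume-service', 'port': '8080'}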


>you getting 0-dayed when the most popular implementation has a vulnerability

vs getting hit when your vulnerable implementation becomes popular. Not an obvious choice indeed.


> vs getting hit when your vulnerable implementation becomes popular.

Which is not going to happen, because there's no point in publishing something that is tailored to your specific purpose.


This reminds me of one Finnish student publishing some sort of OS kernel in 1991 or so which supported only his PC-compatible machine...


Of course, who could rule out that a crappy little domain-specific parser turns into a trillion dollar industry? One should let that remote possibility guide the decision making at all times.


Can confirm: my whole home server / coworker collaborative setup is run from a Docker house of cards...



