We provide a download link on the left which will download the file with all the dependencies shrink-wrapped, so that you can start hacking on it locally too. This is a great start and a lot easier than manually trying to npm install the right combination of sub-dependencies, etc. If you have any other ideas we're happy to implement!
The point is that complexity is most effectively managed by discrete (modular) consideration, not by another layer of abstraction, which tends to hide it. The build process is a discrete development problem space with its own set of established solutions. It should not be conflated with bug reporting.
"All problems in computer science can be solved by another level of indirection, except of course for the problem of too many indirections." - David Wheeler
If you are dealing with the type of bug RunKit targets, getting the exact same env right will take you days. With RunKit you do have a little setup to do, but those days can be used to debug instead.
This is a great concept: remove the work of reproducing an environment, and use the freed resources to understand said env.
Major advantages are:
- the bug reporter may not know his/her case is special and won't provide you with the information you need to see it. Providing a running container with the bug simply obviates that.
- the running container can come with a Dockerfile, which gives you everything: libs, versions, settings. It's basically a summary of the stuff you need for it to go wrong (see the sketch after this list).
- you can try the stuff locally, even if your setup is completely different, without messing with your setup. Because complex bugs are rarely reproducible with a bunch of pip install, apt-get and other yum incantations.
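To make the Dockerfile point concrete, here is a minimal sketch of what such a summary could look like for a Python lib bug report. Everything in it is a placeholder: the python:3.6 base image, the locale/timezone values, requirements.txt and reproduce_bug.py are hypothetical names, not taken from any real report.

    # Pin the exact interpreter the reporter actually uses (placeholder version)
    FROM python:3.6
    # Pin the locale / timezone / env vars that matter for the bug (example values)
    ENV LANG=C.UTF-8 TZ=Europe/Paris
    WORKDIR /app
    # requirements.txt pins the exact dependency versions
    COPY requirements.txt .
    RUN pip install -r requirements.txt
    # reproduce_bug.py would be a minimal script that triggers the failure
    COPY reproduce_bug.py .
    CMD ["python", "reproduce_bug.py"]

With something like that attached to the report, a plain docker build and docker run gives the maintainer the libs, versions and settings in one shot, instead of reconstructing them from prose.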
It's not good for all bugs. I would say it's actually bad for most bugs. But for the bugs that need it, it seems fantastic.
> getting the exact same env right will take you days
If your codebase has significant undocumented environmental dependencies and these are hard to script in place, there's something far more fundamentally wrong with your development process than bug reporting.
Lol, you're reacting as if most of us had our software deployed in controlled environments.
I write Python libs that are used in so many different configurations by so many people I don't know. They come back and say "hey, I got this stack trace" (in the best case).
Now I have to play the guessing game. Is it me, is it you? Wrong path? Permission problem? Bad conf? Is the network having trouble? Is the server running this particular Linux version, and is there something important about the SELinux setup? Oh, but upstart and systemd don't behave the same way. Look, the output is redirected here. The stuff is not in the expected encoding. No, it was not a bug, your file is corrupted. What the heck is this data format?
Etc, etc.
You think you thought about every single thing? Your error handling is perfect for all IO? You deal with every encoding and all user inputs perfectly? You know all the little OS peculiarities that will make your subprocess run in the exact way you think?
Of course you don't, nobody is perfect, we don't have infinite resources. But there are infinite ways to fail.
And for many things, you can figure it out with just your code base and the error, because the cause is simple. But from time to time along comes this terrible bug that is a mix of a strange LOCALE, that particular version of the VM you use, but only in one time zone, with this one env variable set. And for that, yes, a good container with a reproducible bug in the proper env is an interesting idea.
Apparently you don't ship software that is widely used enough, or you would not be that arrogant.
Deploying on your own 100 servers is hard. Try seeing your code deployed on 1000 servers that you neither own nor configure.
I'm trying to help point you in the right direction based on my own experience, not be arrogant. It sounds like you would get a lot out of spending some time working on CI/CD concepts.
In short, throwing your hands up in frustration at the complexity of software is not a solution and gets you nowhere. The way we deal with "infinite ways to fail" is to control the environment. These days, quality projects are expected to version control their environments and conduct test deployments within a representative set of environments using a representative set of configurations.
Docker provides an easy way to do this ("always deploy on <distro>-<os>-<version>"), but it's only one approach. Another free and relatively straightforward place to start getting up to speed would be automating build and test processes with Travis CI for an open source project.
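For instance, a minimal .travis.yml for a Python lib like the ones discussed above could look something like the sketch below. The interpreter version list, requirements.txt and pytest as the test runner are assumptions for illustration, not taken from any particular project.

    # Hypothetical Travis CI config: run the test suite against several interpreters
    language: python
    python:
      - "2.7"
      - "3.4"
      - "3.5"
      - "3.6"
    install:
      - pip install -r requirements.txt
      - pip install pytest
    script:
      - pytest

Every commit then gets built and tested against each entry in that matrix, which is the same idea as the multi-environment setup mentioned further down.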
Deploying to any number of tested environments is trivial.
If you create a Python lib, 10,000 people will pip install it. You have no control over the env.
If you create a deb package, it will show up in PPAs and be installed on many different envs. You have no control over the env.
Again, you definitely have no experience in shipping software outside of your bubble.
A lib is not "a web project". Neither is a command line tool. You still need to debug them. People will run them on Windows, Linux, Mac, BSD, and who knows where else. And they will come for you.
Now you can choose to simplify the problem and only support a limited number of envs. But I guess I'm quite happy the guys who created Apache and ffmpeg didn't force me to only use them on Linux with a LOCALE set to accept only ASCII and CEST.
No. As I said, in these cases you use a CI tool to test on a broad range of environments.
For example, here is a library I maintain with 12,000+ installs per month that is tested against 7 different environments on every commit using Travis: https://github.com/globalcitizen/php-iban