I used LXC in the past, but the current Docker still puzzles me a bit.
To save resources (CPU, memory, etc.) and stay lightweight, a Docker image I pull would need to reuse whatever is already on my host Linux; otherwise it has to ship its own dependencies, which nearly doubles the storage. Pulling someone else's image normally means it carries different libraries etc. from my host, so I end up running duplicated libraries/dependencies on the same machine. How does that save anything? Why not just use KVM?
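Here is roughly what I mean, a sketch assuming a glibc-based host and the stock debian:bookworm image:

```
# Host's C library version
ldd --version | head -n1
# The pulled image carries its own, fully separate copy of the same library
docker run --rm debian:bookworm ldd --version | head -n1
# And the layers holding those duplicates take real disk space
docker system df
```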
For small applications that use the same libraries/dependencies as the host, I can run far more containers than KVM guests, since containers really are resource-efficient when they can share with the host OS. But again, that only happens when the host has essentially the same software installed for the containers to reuse; and secondly, Docker itself should be lightweight (otherwise, why not use KVM and avoid all the Docker/container complexity)?
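Granted, one thing containers always share is the host kernel, which is presumably where the density advantage over KVM comes from; a quick check, assuming the stock alpine image:

```
uname -r                          # kernel version on the host
docker run --rm alpine uname -r   # same version: the container boots no kernel of its own
```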
So, is it true that Docker is _only_ good for lightweight applications that happen to run the same libraries/dependencies as the host OS, so that resources are actually saved? At least that's how I used LXC in the past.
Or is Docker just for ease of deployment, where most of the time it does not save any resources but actually increases usage (since most of the time the host OS will not have the same shared dependencies installed)?
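For instance, the typical image bundles its own runtime regardless of what the host already has; a sketch, with app.py and the myapp tag being hypothetical placeholders:

```
# Dockerfile for a hypothetical Python app
FROM python:3.12-slim         # ships its own Python and libc, ignoring the host's
COPY app.py /app/app.py
CMD ["python", "/app/app.py"]
```

That duplication costs disk and memory, but deployment becomes a one-liner: `docker build -t myapp . && docker run --rm myapp`.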
Also, I cannot understand how Linux Docker saves any resources on a Windows host, other than being easier to deploy. Then again, the old *.exe installers worked fine as well, so why Docker?
It seems Docker is being "abused" as a way to build one-off playrooms for spare-time tinkering.
And frankly I can't help wondering if this is because more and more languages are sprouting their own package managers, while at the same time developers are getting ever more lax about dependency hygiene.