what's the point?
it looks more like "deploy ELK in our cloud infrastructure" than "deploy ELK with Docker".
Plus, there are images already built on Docker Hub, and the added value of the post should be explaining the docker-compose file.
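For reference, a minimal sketch of what such a compose file might look like - versions, ports, and settings here are illustrative, not taken from the post:

```yaml
version: "3"
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.17.0
    environment:
      - discovery.type=single-node   # dev-only: no clustering
      - ES_JAVA_OPTS=-Xms512m -Xmx512m
    ports:
      - "9200:9200"
  logstash:
    image: docker.elastic.co/logstash/logstash:7.17.0
    volumes:
      - ./pipeline:/usr/share/logstash/pipeline  # pipeline config lives on the host
    depends_on:
      - elasticsearch
  kibana:
    image: docker.elastic.co/kibana/kibana:7.17.0
    ports:
      - "5601:5601"
    depends_on:
      - elasticsearch
```

Walking through why each service is wired up that way is exactly the kind of content the post could add over the stock images.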
Yeah, deploying an ELK stack in Docker for anything other than development is probably not a great idea anyway.
Specifically the Elasticsearch piece. Elasticsearch uses a lot of memory for the JVM heap and also off-heap, so it needs a lot of resources. When it OOMs, you want to be able to easily check out what's happened and recover the instance, on the same instance, with ephemeral instance storage. You don't want to operate ES in a scheduled environment, unless you have scheduling rules that pin it to a single instance.
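If you do run it in Docker anyway, the memory settings are the part to get right, because the heap is only half the story. A hedged sketch of the relevant bits of a compose service definition - the sizes are made up, the point is the headroom between -Xmx and the container limit:

```yaml
version: "2"
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.17.0
    environment:
      - discovery.type=single-node
      # Heap capped well below the container limit: ES also uses
      # significant off-heap memory (Lucene mmaps, network buffers),
      # and a limit equal to -Xmx is a recipe for OOM kills.
      - ES_JAVA_OPTS=-Xms2g -Xmx2g
      - bootstrap.memory_lock=true   # keep the heap from being swapped
    ulimits:
      memlock:
        soft: -1
        hard: -1
    mem_limit: 4g   # compose v2 file format; roughly 2x heap as a rule of thumb
```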
Elasticsearch aside, Kibana is fine for a Docker container: it's stateless and needs very little config. Logstash config, however, is more involved and not trivial. I still use CM for this, but would like to deploy it in containers in future, using something like Habitat perhaps...
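To illustrate the gap: even a hypothetical "minimal" Logstash pipeline already has three stanzas and a pattern language, where Kibana needs basically one URL:

```
# beats in, parse Apache-style lines, Elasticsearch out
input {
  beats {
    port => 5044
  }
}
filter {
  grok {
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
}
output {
  elasticsearch {
    hosts => ["elasticsearch:9200"]
  }
}
```

Real pipelines grow many filters and conditionals on top of this, which is why templating it through CM (or baking it into a package) still beats hand-maintaining files in a container image.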
I'm curious - can you please share a bit of detail on how Docker complicates debugging and recovery of an OOM-killed process (be it ES or anything else - personally, I have Redis in mind)?
I've assumed it's almost no different from the non-isolated case. If the container dies, it just goes to the "stopped" state and can be restarted - exactly the same as if the process had died and I ran it again, except the process is wrapped. And if all the non-ephemeral data the process writes goes to host-mounted directories ("volumes"), then a container can be safely discarded, as it's nothing but read-only layers. I guess I just don't see something because I haven't yet been bitten.
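E.g., what I have in mind is something like this (the host path is just an example):

```yaml
services:
  redis:
    image: redis:7
    command: ["redis-server", "--appendonly", "yes"]  # persist via AOF
    volumes:
      # All durable state lands on the host; the container itself
      # is disposable and can be removed and recreated freely.
      - /srv/redis-data:/data
```

So `docker rm` the container, start a fresh one against the same host directory, and the new process picks the AOF back up - which is why I assumed recovery is no harder than outside Docker.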
(I've never used Docker for anything besides toy/tiny stuff, but I consider its images a sort of quick-and-dirty, good-enough packaging system - just because proper .debs would take much more time and effort to produce and maintain. So I'm really curious to hear about the possible drawbacks I don't know about yet.)