Interesting, though I don't see how it is better than a plain docker image over kubernetes? That's not much of a hassle these days either. And how is it different from what DL4J is already doing with Zeppelin, supporting Keras, with TF, MXNet, and PyTorch on the way?
> ...how [is it] better than a plain docker image over kubernetes?
Scalability for people with existing on-premise (or cloud-based) Kubernetes workflows, especially when it comes to training or heavy number crunching.
That's not to say that Docker Machine/Swarm/Compose couldn't handle the same thing, but it's an extra step for Kubernetes users and pushes people onto a slightly different toolchain than minikube -> K8s.
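For the Kubernetes-native path, here's a rough sketch of what submitting a training container as a Job can look like with the official Python client. The image name, namespace, and resource limits are made-up placeholders, not anything from a specific stack:

```python
# Minimal sketch: submit a training container as a Kubernetes Job
# using the official Python client. Image, namespace, and resource
# numbers below are hypothetical placeholders.
from kubernetes import client, config

def submit_training_job():
    config.load_kube_config()  # reuses the same kubeconfig as kubectl/minikube

    container = client.V1Container(
        name="trainer",
        image="registry.example.com/ml/trainer:latest",  # hypothetical image
        command=["python", "train.py"],
        resources=client.V1ResourceRequirements(
            limits={"cpu": "4", "memory": "16Gi"},  # example "heavy crunching" request
        ),
    )
    template = client.V1PodTemplateSpec(
        metadata=client.V1ObjectMeta(labels={"app": "trainer"}),
        spec=client.V1PodSpec(containers=[container], restart_policy="Never"),
    )
    job = client.V1Job(
        api_version="batch/v1",
        kind="Job",
        metadata=client.V1ObjectMeta(name="train-model"),
        spec=client.V1JobSpec(template=template, backoff_limit=2),
    )
    client.BatchV1Api().create_namespaced_job(namespace="default", body=job)

if __name__ == "__main__":
    submit_training_job()
```

The point being: if your cluster is already Kubernetes, training jobs slot into the tooling you already have, rather than detouring through a separate Compose/Swarm workflow.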
Correct! Many folks have more complicated deployments in the cloud, and we're trying to align (as closely as humanly possible) your on-prem stack with your cloud stack, to minimize the pain of migration.
If you have a single container and a simple pipeline, this may be more than you need. We've just found that people normally wire together five or more services/systems to create an ML stack, and that's what we're trying to solve for and simplify.