Great to hear this is not tied to TensorFlow! How would one use a different DL platform, say PyTorch or DyNet?



The steps would basically be:

- Containerize the DL platform

- Create a k8s manifest (similar to our CRD if necessary)

- Create a service endpoint

- Integrate all of that into the JupyterHub deployment

This is easier than it sounds, but we'd love help! We only started with TF because that's what we know.
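
As a rough sketch (not part of Kubeflow itself), step 1 for PyTorch might just be a small training entrypoint baked into a container image; the script, paths, and environment variables below are made up for illustration:

    # train.py - hypothetical PyTorch entrypoint to containerize (step 1)
    import os

    import torch
    import torch.nn as nn
    import torch.optim as optim

    # In a pod this would typically point at a mounted volume or object store.
    MODEL_DIR = os.environ.get("MODEL_DIR", "/tmp/model")

    def main():
        # Toy model and synthetic data; a real job would load its own dataset.
        model = nn.Linear(10, 1)
        optimizer = optim.SGD(model.parameters(), lr=0.01)
        loss_fn = nn.MSELoss()

        x = torch.randn(256, 10)
        y = torch.randn(256, 1)

        for epoch in range(10):
            optimizer.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()
            optimizer.step()

        # Persist the trained weights so a serving endpoint (step 3) can pick them up.
        os.makedirs(MODEL_DIR, exist_ok=True)
        torch.save(model.state_dict(), os.path.join(MODEL_DIR, "model.pt"))

    if __name__ == "__main__":
        main()

The resulting image would then be referenced from a plain Kubernetes Job manifest, or from a PyTorch-specific custom resource if someone wants to build one along the lines of our CRD (step 2), with a service endpoint in front of the saved model for serving (step 3).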

Disclosure: I work at Google on Kubeflow


Interesting, though I don't see how it is better than a plain Docker image on Kubernetes? That isn't much of a hassle these days either. And how is it different from what DL4J is already doing with Zeppelin, with support for Keras, TF, MXNet, and PyTorch on the way?


> ...how is it better than a plain Docker image on Kubernetes?

Scalability for people with existing on-premise (or cloud-based) Kubernetes workflows, especially when it comes to training or other heavy crunching.

That's not to say that Docker Machine/Swarm/Compose couldn't handle the same workloads, but it's an extra step for Kubernetes users and pushes people onto a slightly different toolchain than minikube -> K8s.


Correct! Many folks have more complicated deployments in the cloud, and we're trying to align your on-prem stack with your cloud stack (as closely as humanly possible) to minimize the pain of migration.

If you have a single container and a simple pipeline, this may be a bit more than you need. We've just found that there are normally 5 or more services/systems that people wire together to create an ML stack, and that's what we're trying to solve for and simplify.

Disclosure: I work at Google on Kubeflow



