
How does this compare to KubeFlow?



We have a lot of respect for the work that the KubeFlow team is doing. Their focus seems to be on helping you deploy a wide variety of open source ML tooling to Kubernetes. We use a narrower stack and focus more on automating common workflows.

For example, we take a fully declarative approach: the “cortex deploy” command is a request to “make it so”, rather than “run this training job”. Cortex determines at runtime exactly what pipeline needs to be created to achieve the desired state, caching as aggressively as it can. For example, if a hyperparameter of one model changes, only that model is re-trained and re-deployed; if a transformer is updated, all transformed_columns which use that transformer are regenerated, all models which use those columns are re-trained, and so on. We view it as an always-on ML application, rather than a one-off ML workload.
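In case it helps to picture the caching behavior, here is a rough Python sketch (illustrative only, not Cortex's actual code; the resource names and config strings are made up) of how a declarative deploy can diff a dependency graph against a cache and rebuild only what changed:

    # A minimal sketch of dependency-driven caching: each resource hashes its own
    # config plus the hashes of its dependencies, and only resources whose
    # combined hash changed since the last deploy are rebuilt.

    import hashlib
    from dataclasses import dataclass, field

    @dataclass
    class Resource:
        name: str     # e.g. a transformer, a transformed column, or a model
        config: str   # stand-in for the resource's declarative spec
        deps: list = field(default_factory=list)  # upstream Resource objects

    def combined_hash(resource: Resource) -> str:
        """Hash the resource's config together with its dependencies' hashes,
        so any upstream change propagates downstream."""
        h = hashlib.sha256(resource.config.encode())
        for dep in resource.deps:
            h.update(combined_hash(dep).encode())
        return h.hexdigest()

    def plan(resources, previous_hashes):
        """Return the resources that must be (re)built to reach the desired state."""
        return [r.name for r in resources
                if previous_hashes.get(r.name) != combined_hash(r)]

    # Desired state: transformer -> transformed column -> model
    normalize = Resource("normalize", "transformer v1")
    price_col = Resource("price_normalized", "column spec", deps=[normalize])
    model = Resource("dnn", "hidden_units=[64, 32]", deps=[price_col])
    resources = [normalize, price_col, model]

    # First deploy: nothing is cached, so everything is built.
    cache = {}
    print(plan(resources, cache))  # ['normalize', 'price_normalized', 'dnn']
    cache = {r.name: combined_hash(r) for r in resources}

    # Change only a hyperparameter: just that model is rebuilt.
    model.config = "hidden_units=[128, 64]"
    print(plan(resources, cache))  # ['dnn']
    cache = {r.name: combined_hash(r) for r in resources}

    # Change the transformer: the column and the model are rebuilt too.
    normalize.config = "transformer v2"
    print(plan(resources, cache))  # ['normalize', 'price_normalized', 'dnn']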



