
Nope, none of it is JVM stuff. It's pretty standard stuff if you want to ship an API into a production environment and expect other developers/services to interact with your model. How do you know your model is failing or slow to serve requests? You need monitoring/logging. How do you add security? I'm talking API security, like JWT tokens with scopes and claims.
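To make the scopes-and-claims point concrete, here is a minimal sketch of the idea using only the stdlib. A real deployment would use a proper JWT library (e.g. PyJWT) rather than this hand-rolled token format, and the `SECRET` key and `"scopes"` claim name are illustrative choices, not anything the comment above prescribes:

```python
# Sketch: an HMAC-signed token carrying claims, checked for a scope before
# a model endpoint runs. Illustrative only -- real JWTs have a standard
# header/payload/signature format handled by a library like PyJWT.
import base64, hashlib, hmac, json

SECRET = b"change-me"  # hypothetical key; load from a secrets manager in production

def sign(claims: dict) -> str:
    """Serialize claims and append an HMAC signature."""
    body = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode()
    sig = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    return body + "." + sig

def verify(token: str, required_scope: str) -> dict:
    """Check the signature, then check the token carries the required scope."""
    body, sig = token.rsplit(".", 1)
    expected = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        raise PermissionError("bad signature")
    claims = json.loads(base64.urlsafe_b64decode(body))
    if required_scope not in claims.get("scopes", []):
        raise PermissionError("missing scope")
    return claims

token = sign({"sub": "svc-frontend", "scopes": ["model:predict"]})
print(verify(token, "model:predict")["sub"])  # -> svc-frontend
```

The point is that the scope check lives in the serving layer, entirely separate from whatever language the model itself is written in.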

Maybe we mean different things by "productionising ML applications" but building a docker container with an R runtime and the correct package versions is not all, or even half, of what's required for production.




Why would you have any of this tightly coupled to your model?

Set up a separate API gateway, which covers all your points (REST endpoints, monitoring, security) - there are plenty of off-the-shelf options. Route authenticated requests to the backend that runs your model.
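The division of labour being described can be sketched in a few lines. Everything here is a toy stand-in: the route table, the `check_token` helper, and the metrics dict are made up for illustration, and in practice this layer would be an off-the-shelf gateway (Kong, nginx, a cloud API gateway), not hand-rolled code:

```python
# Toy gateway: it owns auth, routing, and request counting; the model
# backend behind it only predicts. All names here are illustrative.
ROUTES = {"/v1/predict": lambda payload: {"score": 0.87}}  # stand-in model backend
metrics = {"requests": 0, "unauthorized": 0}

def check_token(token: str) -> bool:
    """Placeholder for real JWT validation at the gateway."""
    return token == "valid-token"

def gateway(path: str, token: str, payload: dict):
    """Authenticate, count, and route a request to a backend."""
    metrics["requests"] += 1
    if not check_token(token):
        metrics["unauthorized"] += 1
        return 401, {"error": "unauthorized"}
    backend = ROUTES.get(path)
    if backend is None:
        return 404, {"error": "no such route"}
    return 200, backend(payload)

status, body = gateway("/v1/predict", "valid-token", {"user_id": 42})
print(status, body)  # -> 200 {'score': 0.87}
```

Because the model backend only ever sees already-authenticated traffic, it can be written in R, Python, or anything else without affecting the security story.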


Depends on your model. Mine scores users daily, so I don't need to worry about building an API.
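A batch job like that reduces to roughly the following shape. The `score` function, the in-memory "tables", and the field names are all placeholders for whatever model and database the real job uses:

```python
# Sketch of a daily batch-scoring job (the no-API path): pull users,
# score them, write results where downstream services can read them.
import datetime

users = [{"id": 1, "spend": 120.0}, {"id": 2, "spend": 35.0}]  # stand-in source table
scores_table = []  # stand-in results table

def score(user: dict) -> float:
    """Toy model: spend as a fraction of 100, capped at 1.0."""
    return min(user["spend"] / 100.0, 1.0)

def run_daily_job(today: datetime.date) -> int:
    """Score every user and append a dated row per user."""
    for user in users:
        scores_table.append(
            {"user_id": user["id"], "score": score(user), "as_of": today.isoformat()}
        )
    return len(scores_table)

print(run_daily_job(datetime.date(2024, 1, 1)))  # -> 2
```

The scheduler (cron, Airflow, whatever) replaces the API layer entirely in this setup.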

Logging is readily available in both (though better in Python, to be fair).
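On the Python side that means the stdlib `logging` module; a minimal sketch of logging latency around a predict call, with `predict` as a stand-in model, looks like this (the R ecosystem has analogous packages, e.g. futile.logger):

```python
# Minimal request logging around a predict call using stdlib logging.
import logging
import time

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(levelname)s %(message)s")
log = logging.getLogger("model")

def predict(features: dict) -> float:
    """Stand-in model."""
    return 0.5

def timed_predict(features: dict) -> float:
    """Serve a prediction and log how long it took."""
    start = time.perf_counter()
    result = predict(features)
    log.info("prediction served in %.1f ms", (time.perf_counter() - start) * 1000)
    return result

print(timed_predict({"x": 1}))
```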

I don't really see how building my model in Python would make it easier to add this API functionality either, so it's a bit irrelevant. My docker container (which seems almost essential in Python but merely nice-to-have in R) can call predict in any language and then pass the result through to the API using the tools noted above.
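The "call predict in any language" step can be as simple as a subprocess exchanging JSON over stdin/stdout. In the sketch below, `python -c` stands in for the model process so the example is self-contained; for the R case the command would be something like `["Rscript", "predict.R"]` (the helper and command names here are illustrative, not any particular tool's API):

```python
# Language-agnostic predict call: the serving layer shells out to
# whatever runtime the model lives in, passing JSON in and out.
import json
import subprocess
import sys

def call_model(cmd: list[str], payload: dict) -> dict:
    """Send payload as JSON on stdin, parse the model's JSON from stdout."""
    proc = subprocess.run(
        cmd, input=json.dumps(payload), capture_output=True, text=True, check=True
    )
    return json.loads(proc.stdout)

# Stand-in "model process": doubles the input feature.
model_cmd = [
    sys.executable, "-c",
    "import sys, json; d = json.load(sys.stdin); print(json.dumps({'score': d['x'] * 2}))",
]

print(call_model(model_cmd, {"x": 21}))  # -> {'score': 42}
```

The serving layer never needs to know what language produced the score, which is the point being made above.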



