Hacker News

tfjs is dead, looking at the commit history. The standard now is to convert PyTorch to ONNX, then use onnxruntime (https://github.com/microsoft/onnxruntime/tree/main/js/web) to run the model in the browser using WebAssembly/WebGL (and Node.js if you wanted to, but why?).



> nodejs if you wanted to, but why?

Node.js is a better backend than something like Flask.


Performance is a lot worse on Node.js with a WebAssembly/WebGL backend versus Flask with a PyTorch/CUDA backend.


If you're using Node, you can write whatever you want in C++ and then add a binding to call it from within your Node app. You don't need WebGL.


But then you're having to write it in C++, versus just using Flask/PyTorch.


A lot of what you need is already written, you just need to find the right libraries and write the bindings. From my encounters with Python ML it seems like "just use PyTorch" is a bit like "simply walk into Mordor".


Yeah, but why...?



