
FWIW you can use Roboflow models on-device as well. detect.roboflow.com is just a hosted version of our inference server; if you run the Docker container somewhere, you can swap that URL out for localhost or wherever your self-hosted instance is running. Behind the scenes it's an HTTP interface to our inference[1] Python package, which you can also run natively if your app is in Python.
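
For example, here's a rough sketch of swapping the hosted URL for a local one (the port, Docker image name, model ID, and exact request format here are assumptions; check the inference README for the current API):

    # Sketch: point the same HTTP request at a self-hosted inference
    # server instead of detect.roboflow.com. Assumes the server was
    # started locally with something like:
    #   docker run -p 9001:9001 roboflow/roboflow-inference-server-cpu
    import base64
    import requests

    BASE_URL = "http://localhost:9001"  # was "https://detect.roboflow.com"
    MODEL_ID = "your-project/1"         # placeholder model ID
    API_KEY = "YOUR_API_KEY"            # placeholder key

    with open("image.jpg", "rb") as f:
        img_b64 = base64.b64encode(f.read()).decode("utf-8")

    resp = requests.post(
        f"{BASE_URL}/{MODEL_ID}",
        params={"api_key": API_KEY},
        data=img_b64,
        headers={"Content-Type": "application/x-www-form-urlencoded"},
    )
    result = resp.json()  # predictions: boxes, classes, confidences
    print(result)

If your app is already in Python, the native path is roughly "from inference import get_model", then calling infer("image.jpg") on the returned model (again an assumption from the package docs; see [1] for the current entry points).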

Pi inference is pretty slow (probably ~1 fps without an accelerator). Folks usually use CUDA acceleration on a Jetson for these kinds of projects when they want to run faster locally.

Some benefits: there are over 100k pre-trained models others have already published to Roboflow Universe[2] that you can start from; it supports many of the latest SOTA models (with an extensive library[3] of custom training notebooks); it integrates tightly with the dataset/annotation tools at the core of Roboflow for creating custom models; and it has good support for common downstream tasks via supervision[4] (sketch below).
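
As a hedged sketch of the supervision piece, continuing from the "result" dict in the earlier snippet (the annotator class names are assumed from recent supervision versions; see [4] for the current API):

    import cv2
    import supervision as sv

    image = cv2.imread("image.jpg")

    # Convert the inference server's JSON response into a Detections object
    detections = sv.Detections.from_inference(result)

    # Draw bounding boxes and class labels onto a copy of the frame
    annotated = sv.BoxAnnotator().annotate(scene=image.copy(), detections=detections)
    annotated = sv.LabelAnnotator().annotate(scene=annotated, detections=detections)
    cv2.imwrite("annotated.jpg", annotated)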

[1] https://github.com/roboflow/inference

[2] https://universe.roboflow.com

[3] https://github.com/roboflow/notebooks

[4] https://github.com/roboflow/supervision



