Hacker News

No issue, I'm simply unfamiliar with Python machine learning APIs.

I managed to run inference locally by installing the requirements and running app.py from the demo: https://huggingface.co/spaces/replit/replit-code-v1-3b-demo/...
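If you only need the model itself rather than the demo app, it can also be loaded directly with the Hugging Face `transformers` library. A minimal sketch, assuming the model card's `trust_remote_code` loading path; the helper name `generate_completion` and the sampling parameters are illustrative, not taken from the demo:

```python
# Hypothetical sketch: load replit/replit-code-v1-3b directly with transformers
# instead of running the Space's app.py. Requires `transformers`, `torch`,
# a CUDA GPU, and a large one-time weight download.
MODEL_ID = "replit/replit-code-v1-3b"

def generate_completion(prompt: str, max_new_tokens: int = 48) -> str:
    # Imported lazily so the sketch parses without the heavy dependencies.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID, trust_remote_code=True)
    # fp16 keeps VRAM use in the range a consumer GPU can handle.
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, trust_remote_code=True, torch_dtype=torch.float16
    ).to("cuda")
    inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
    output = model.generate(
        **inputs,
        max_new_tokens=max_new_tokens,
        do_sample=True,
        top_p=0.95,
        temperature=0.2,
        eos_token_id=tokenizer.eos_token_id,
    )
    return tokenizer.decode(output[0], skip_special_tokens=True)

# Example call (downloads the weights and needs a GPU, so not run here):
#   print(generate_completion("def fibonacci(n):"))
```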

It is very fast on my RTX 3070; VRAM usage peaks at ~6.3 GB during inference.
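That figure is consistent with half-precision weights. A back-of-the-envelope check, assuming the ~2.7B parameter count from the model card and fp16 (2 bytes per weight):

```python
# Rough VRAM estimate for replit-code-v1-3b in fp16.
# 2.7e9 parameters is an assumption taken from the model card.
PARAMS = 2.7e9
BYTES_PER_PARAM = 2  # fp16/bf16 stores each weight in 2 bytes

weights_gib = PARAMS * BYTES_PER_PARAM / 1024**3
print(f"weights alone: {weights_gib:.1f} GiB")  # ~5.0 GiB
```

The remaining ~1.3 GB of the observed usage would be the KV cache, activations, and CUDA overhead.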



