
Usually the inference time is small compared with the download time, so even if this were technically feasible you wouldn’t save much time.

For reference, I have a 31 MB vision transformer that I run in my browser. Building the inputs, running inference, and parsing the response takes less than half a second.
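The comment doesn't say which runtime is used, but a minimal sketch of browser-side inference, assuming an ONNX export and onnxruntime-web (the model file name and tensor names here are made up for illustration), looks roughly like this:

```typescript
import * as ort from 'onnxruntime-web';

async function classify(pixelData: Float32Array): Promise<void> {
  // Load the ~31 MB ONNX model once; later calls reuse the session.
  const session = await ort.InferenceSession.create('vit-classifier.onnx');

  // ViT-style models commonly take a [1, 3, 224, 224] float32 tensor (NCHW).
  const input = new ort.Tensor('float32', pixelData, [1, 3, 224, 224]);

  // Input/output names ('pixel_values', 'logits') depend on how the model was exported.
  const outputs = await session.run({ pixel_values: input });
  console.log(outputs['logits'].data);
}
```

Once the weights are cached by the browser, the per-request cost is only the inference itself, which matches the sub-half-second figure above.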

> Usually the inference time is small compared with the download time, so even if this were technically feasible you wouldn’t save much time.

I can understand that, but in a case where time is not a factor and it's solely a question of data, can a model be streamed?


LLMs like ChatGPT generate only one token at a time. To generate more, you run inference repeatedly until you reach a stop token or some other predetermined limit.
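A rough sketch of that loop, not any particular library's API (the `model.forward` interface, token ids, and defaults are hypothetical):

```typescript
// Autoregressive decoding: each new token requires a full forward pass
// over the sequence generated so far.
async function generate(
  model: { forward: (tokens: number[]) => Promise<number[]> }, // returns logits over the vocab
  promptTokens: number[],
  eosToken: number,
  maxNewTokens = 256,
): Promise<number[]> {
  const tokens = [...promptTokens];
  for (let i = 0; i < maxNewTokens; i++) {
    const logits = await model.forward(tokens); // one full pass through the network
    let next = 0;
    for (let t = 1; t < logits.length; t++) {
      if (logits[t] > logits[next]) next = t;   // greedy decoding: take the argmax
    }
    tokens.push(next);
    if (next === eosToken) break;               // stop token ends generation early
  }
  return tokens;
}
```

Even the very first iteration of this loop runs the whole network end to end, which is why having only part of the weights downloaded doesn't get you any output.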

I don't see streaming helping with anything besides maybe time-to-first-inference, but regardless, you're still not getting any output until all of the weights are downloaded.
