Hacker News

Alternatively, are there ways to train on consumer graphics cards, similar to SETI@home or Folding@home? I would personally be happy to donate GPU time, as I imagine many others would as well.



There absolutely are! Check out hivemind (https://github.com/learning-at-home/hivemind), a general library for deep learning over the Internet, or Petals (https://petals.ml/), a system that leverages hivemind and allows you to run BLOOM-176B (or other large language models) distributed across many volunteer PCs. You can join it and host some layers of the model by running literally one command on a Linux machine with Docker and a recent enough GPU.

Disclaimer: I work on these projects; both are based on our research over the past three years.


The cost of moving data from one GPU to the next will destroy performance.
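To make the concern concrete, here is a rough back-of-envelope sketch of how long a single inter-device activation transfer might take over different links. All figures are illustrative assumptions (a BLOOM-176B-like hidden size of 14336, a 2048-token batch in fp16, and nominal peak bandwidths), not measurements:

```python
# Back-of-envelope: time to move one layer's activations between devices.
# Assumed shape: 2048 tokens x 14336 hidden dims, fp16 (2 bytes/value).
tokens = 2048
hidden = 14336
bytes_per_value = 2
activation_bytes = tokens * hidden * bytes_per_value  # ~56 MiB per hop

# Illustrative link bandwidths in bytes/second (nominal peaks, not measured).
links = {
    "NVLink (~300 GB/s)": 300e9,
    "PCIe 4.0 x16 (~32 GB/s)": 32e9,
    "Home broadband upload (~12.5 MB/s)": 12.5e6,
}

for name, bandwidth in links.items():
    seconds = activation_bytes / bandwidth
    print(f"{name}: {seconds * 1e3:,.2f} ms per hop")
```

Under these assumptions a hop that costs a fraction of a millisecond over NVLink takes several seconds over a typical home uplink, which is why volunteer systems like Petals pipeline work and tolerate high latency rather than synchronizing every layer.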

The systems are moving in the opposite direction (look at Tesla's Dojo architecture or Tenstorrent).

The silver lining is that the cost of training will fall substantially with those architectures that are not based on reusing GPUs.



