And that's often because the network hardware understands MPI semantics and can move data between nodes at far lower latency than TCP allows.
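For a sense of scale, here's a minimal MPI ping-pong micro-benchmark in C (a generic sketch, not tied to any particular vendor); on an InfiniBand fabric the MPI library typically drives the NIC via RDMA rather than the kernel TCP stack, which is where the latency win comes from:

    /* Minimal MPI ping-pong latency sketch.
     * Compile: mpicc pingpong.c -o pingpong
     * Run:     mpirun -np 2 ./pingpong */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);

        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        const int iters = 10000;
        char buf[8] = {0};   /* tiny message, so we measure latency, not bandwidth */

        MPI_Barrier(MPI_COMM_WORLD);
        double t0 = MPI_Wtime();

        for (int i = 0; i < iters; i++) {
            if (rank == 0) {
                MPI_Send(buf, sizeof buf, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
                MPI_Recv(buf, sizeof buf, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
            } else if (rank == 1) {
                MPI_Recv(buf, sizeof buf, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
                MPI_Send(buf, sizeof buf, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
            }
        }

        double t1 = MPI_Wtime();
        if (rank == 0)
            printf("one-way latency: %.2f us\n",
                   (t1 - t0) / iters / 2 * 1e6);   /* half the round-trip time */

        MPI_Finalize();
        return 0;
    }

Run across two InfiniBand-connected nodes this commonly reports one-way latencies on the order of a microsecond or two, versus tens of microseconds for TCP over commodity Ethernet.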



That's really cool. Source?


I used to work in HPC. The Mellanox gear, specifically InfiniBand, is very good.

Fun fact: if you're working at a Saudi Arabian HPC center, say KAUST, your interconnects are purely Ethernet. Mellanox is (partially?) an Israeli company, and that makes procurement politically uncomfortable.