Hacker News

100 cores? I forgot how to count that low.

The workflows I deal with generally involve moving hundreds of terabytes from storage into memory, processing them, and writing them back out. Single machines (even beefy ones) tend to hit their limits (network bandwidth, maximum RAM, cache size, TLB reach, etc.).

Maybe there's another tool better than Spark, I don't know; the important thing is that Spark is the most ubiquitous.
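For what it's worth, the pattern described above (read partitions, process each one independently, combine the results, write out) is essentially what Spark schedules across a cluster. A toy single-process sketch, using only the standard library and made-up field names, just to illustrate the shape of it:

```python
# Toy stand-in for the read-process-write pattern: Spark would run
# process_partition() on many workers in parallel; here we just loop.
from functools import reduce

def process_partition(rows):
    # Per-partition "map" step: e.g. total up one field.
    return sum(r["bytes"] for r in rows)

def run(partitions):
    # "Reduce" step: combine the per-partition partial results.
    partials = [process_partition(p) for p in partitions]
    return reduce(lambda a, b: a + b, partials, 0)

# Three partitions that would each live on a different worker at scale.
data = [
    [{"bytes": 10}, {"bytes": 20}],
    [{"bytes": 5}],
    [{"bytes": 65}],
]
print(run(data))  # total bytes across all partitions -> 100
```

At hundreds of terabytes the hard part isn't this logic, it's the shuffle, spill, and network layer underneath it, which is exactly what Spark provides.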



