
I think even 10 years ago that argument was less often true than people might have thought. A RAID of a few rotational disks behind multiple controllers could easily deliver a few hundred MB/s, so it was easy to get CPU-bottlenecked back then too.
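
For a sense of scale, a back-of-envelope sketch in Python; the disk count, per-disk throughput and per-core scan rate are illustrative assumptions, not measurements:

    # Assumed numbers: sequential throughput of a small RAID of rotational
    # disks vs. the rate at which one core can process that data in a
    # scan-heavy query.
    disks = 6
    seq_mb_per_disk = 120                    # ~100-150 MB/s was typical for 7200 rpm drives
    raid_mb_per_s = disks * seq_mb_per_disk  # striped read bandwidth, ignoring overhead

    core_scan_mb_per_s = 300                 # assumed per-core rate for tuple-at-a-time execution

    print(f"RAID delivers     ~{raid_mb_per_s} MB/s")
    print(f"one core consumes ~{core_scan_mb_per_s} MB/s")
    if raid_mb_per_s > core_scan_mb_per_s:
        print("=> the scan is CPU-bound, not I/O-bound")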

The pg community often said so as well, particularly because it was not commonly used in a major analytical capacity...

Also, DRAM hasn't gotten that much cheaper in 10 years. In 2008 you could already get reasonably priced commodity x86 servers with 128 GB of RAM, so for most DB applications you could keep the whole working set cached and only worry about writes on your storage layer (rough numbers in the sketch after the links below).

https://blogs-images.forbes.com/jimhandy/files/2011/12/DRAM-...

http://thememoryguy.com/wp-content/uploads/2016/05/2016-05-0...

https://en.wikipedia.org/wiki/List_of_Dell_PowerEdge_Servers...
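
A minimal sketch of that reasoning, with made-up sizes (the working-set size and write share are assumptions; only the 128 GB figure comes from the comment above):

    # Illustrative only: if the hot data fits in RAM, reads are served from
    # the cache and the storage layer mostly has to keep up with writes.
    ram_gb = 128            # commodity x86 server, circa 2008
    working_set_gb = 80     # assumed hot portion of the database
    write_fraction = 0.10   # assumed share of operations that modify data

    if working_set_gb <= ram_gb:
        print(f"Working set fits in RAM; only ~{write_fraction:.0%} of operations "
              "(the writes) have to touch storage.")
    else:
        print(f"{working_set_gb - ram_gb} GB of hot data spills to disk; "
              "reads become I/O-bound again.")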

I don't recall the exact discussion from P2D2, but AFAIK the reasoning was more along the lines of "There are other bottlenecks that we need to address first." That is, either issues that would limit the JIT gains, or issues with a better cost/benefit ratio (measured in developer time).