Twirrim: ARM has typically been more efficient, for both heat and power consumption, while losing out on computational power.
i_have_to_speak: Current ARM server chips are much worse than Intel chips on the performance/watt metric.
It's hard to square these two statements. It might depend on how you define "efficiency". If efficiency is the amount of energy to perform a calculation, I don't think ARM is more efficient. This paper from a couple years ago concludes that ISA is no longer a defining factor: http://www.embedded.com/design/connectivity/4436593/Analysis...
Separately, this is an excellent article comparing recent generations of Intel against each other, showing that although power use has been going up, "instructions per cycle" has been going up even faster, resulting in a net improvement in energy efficiency: http://kentcz.com/downloads/P149-ISCA14-Preprint.pdf
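To make that IPC-versus-power point concrete, here's a rough back-of-the-envelope sketch in Python. The figures are made up for illustration (they are not taken from the linked paper): energy per instruction is power divided by instruction throughput, so if IPC rises faster than power draw, energy per instruction still falls.

    # Rough sketch with made-up numbers (not from the linked paper):
    # energy per instruction = power / (IPC * frequency).

    def energy_per_instruction(power_w, ipc, freq_hz):
        """Joules spent per retired instruction."""
        return power_w / (ipc * freq_hz)

    old_gen = energy_per_instruction(power_w=65.0, ipc=1.5, freq_hz=3.0e9)  # hypothetical older core
    new_gen = energy_per_instruction(power_w=85.0, ipc=2.5, freq_hz=3.2e9)  # hypothetical newer core

    print(f"old: {old_gen:.2e} J/instr, new: {new_gen:.2e} J/instr")
    # Despite drawing ~30% more power, the newer core spends roughly 25% less
    # energy per instruction because throughput grew faster than power.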
I wonder if the question we ought to be asking is "How idle is your server farm?" ARM still uses significantly less power than Intel chips do when idle. If your fleet is working hard all the time, going with Intel is a no-brainer. But what if your fleet spends 50% of its time not working? 70%? Presumably there's a tipping point somewhere in between.
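To put a number on that tipping point, here's a toy model in Python. Every figure in it is an assumption picked for illustration, not a measurement: a hypothetical x86 box with better perf/watt under load versus a hypothetical ARM box that is slower but idles much lower, both required to finish the same amount of work each day.

    # Toy model of the "tipping point"; all figures are made-up assumptions.
    HOURS_PER_DAY = 24

    def daily_energy_wh(busy_hours, active_w, idle_w):
        return busy_hours * active_w + (HOURS_PER_DAY - busy_hours) * idle_w

    X86_ACTIVE_W, X86_IDLE_W = 120.0, 30.0   # hypothetical x86 server
    ARM_ACTIVE_W, ARM_IDLE_W = 90.0, 8.0     # hypothetical ARM server
    ARM_SLOWDOWN = 1.6                       # ARM needs 1.6x the time for the same work

    for x86_busy in range(1, HOURS_PER_DAY + 1):
        arm_busy = x86_busy * ARM_SLOWDOWN
        if arm_busy > HOURS_PER_DAY:
            break
        e_x86 = daily_energy_wh(x86_busy, X86_ACTIVE_W, X86_IDLE_W)
        e_arm = daily_energy_wh(arm_busy, ARM_ACTIVE_W, ARM_IDLE_W)
        winner = "ARM" if e_arm < e_x86 else "x86"
        print(f"x86 utilization {x86_busy / HOURS_PER_DAY:4.0%}: "
              f"x86 {e_x86:6.0f} Wh, ARM {e_arm:6.0f} Wh -> {winner}")
    # With these made-up numbers the crossover lands a bit above 50% utilization;
    # change the inputs and the tipping point moves accordingly.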
If my server farm is idle any significant amount of the time, I'm going to take those servers offline. If I need capacity, I'll spin up some cloud instances in the short term and bring my own servers back online until I'm at an appropriate idle/busy ratio.
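For what it's worth, that policy is easy to sketch as a small control loop; the thresholds and action names below are hypothetical, just to make the idle/busy logic concrete.

    # Hypothetical thresholds; the 70% target is an assumption, not a recommendation.
    TARGET_UTILIZATION = 0.70
    HEADROOM = 0.10

    def rebalance(utilization, own_online, own_total):
        """One step of the capacity policy described above."""
        if utilization < TARGET_UTILIZATION - HEADROOM and own_online > 1:
            return ["power off one owned server"]
        if utilization > TARGET_UTILIZATION + HEADROOM:
            actions = ["spin up a cloud instance for the short term"]
            if own_online < own_total:
                actions.append("start bringing an owned server online")
            return actions
        return ["no change"]

    print(rebalance(0.45, own_online=8, own_total=10))  # -> ['power off one owned server']
    print(rebalance(0.85, own_online=8, own_total=10))  # -> cloud now, owned server next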
I suspect the days of 99%-idle servers are long gone, and that utilization these days is probably above 70%.
I suspect there is a tipping point, but that it's probably much lower than you suggest: maybe 10-25% utilization of a single core, i.e. a single-digit percentage of the entire processor.
Modern Intel processors can shut down unneeded cores almost completely, and frequency scaling gives you another range of efficient power reduction. Only below that range do you start losing significant power at 'idle'.
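As a rough illustration of why frequency scaling covers so much of the range efficiently: dynamic power goes roughly as C * V^2 * f, and lower clocks usually permit lower voltages, so power falls faster than performance. The operating points in the sketch below are invented, not taken from any datasheet.

    # Dynamic power ~ C * V^2 * f; operating points are invented numbers.

    def dynamic_power_w(c_eff_farads, voltage_v, freq_hz):
        return c_eff_farads * voltage_v ** 2 * freq_hz

    C_EFF = 1.0e-9  # hypothetical effective switched capacitance per core

    operating_points = [
        ("full speed", 1.20, 3.5e9),
        ("mid DVFS",   1.00, 2.4e9),
        ("low DVFS",   0.85, 1.2e9),
    ]

    base_v, base_f = operating_points[0][1:]
    base_power = dynamic_power_w(C_EFF, base_v, base_f)
    for name, v, f in operating_points:
        p = dynamic_power_w(C_EFF, v, f)
        print(f"{name:10s}: {f / base_f:4.0%} of the clock "
              f"at {p / base_power:4.0%} of the dynamic power")
    # Power drops faster than clock speed, and fully idle cores can be
    # power-gated on top of that.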
Unlike in a small battery-powered device, I'd guess the difference in CPU idle power is never going to be the deciding factor here, as keeping the non-CPU rest of the machine running will dwarf it. What workload would you envision as having the greatest advantage? Maybe if you were running a single instance per dedicated core?
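A quick sanity check on that rest-of-the-machine point, with every figure below being an assumption rather than a measurement: once you add the power the rest of the box burns at idle, a sizable CPU idle-power delta becomes a modest fraction at the wall.

    # All figures are assumptions, not measurements.
    NON_CPU_IDLE_W = 55.0   # PSU losses, fans, DRAM, disks, NIC, BMC, ...
    X86_CPU_IDLE_W = 15.0   # hypothetical x86 package at idle
    ARM_CPU_IDLE_W = 5.0    # hypothetical ARM SoC at idle

    x86_total = NON_CPU_IDLE_W + X86_CPU_IDLE_W
    arm_total = NON_CPU_IDLE_W + ARM_CPU_IDLE_W
    saving = 1 - arm_total / x86_total
    print(f"whole-box idle: x86 {x86_total:.0f} W vs ARM {arm_total:.0f} W "
          f"({saving:.0%} saving)")
    # A 10 W CPU delta shrinks to roughly a 14% saving at the wall once
    # the rest of the machine is counted.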