Hacker News

Cooling is a pretty big factor in the energy usage of datacenters, and there's a lot of room for creative use of technology there - or simply geographic advantages.



It's a lot smaller than you think these days with all the effort that has gone into it.

Google reports an average PUE[1] of 1.11 over the past year, 1.09 over the past quarter, and the latest/best data centers are at 1.06 [2]. In simple terms this means that the power overhead for cooling and everything else is just 6% on top of the power delivered to the machines themselves.
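To make the arithmetic concrete, here's a small sketch of how PUE translates to overhead (the 10 MW IT load is an assumed number for illustration, not a Google figure):

```python
def overhead_fraction(pue: float) -> float:
    # PUE = total facility power / IT equipment power,
    # so everything beyond the IT load is (PUE - 1) of IT power.
    return pue - 1.0

it_power_mw = 10.0   # assumed IT load, for illustration only
pue = 1.06           # best-reported figure cited above

total_mw = it_power_mw * pue
print(f"total facility power: {total_mw:.2f} MW")       # 10.60 MW
print(f"overhead share of IT power: {overhead_fraction(pue):.0%}")  # 6%
```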

Google (and others) have gone to great lengths to get there, using AI and radical new approaches to cooling.

Disclaimer: I work at Google, but not in Datacenters. All info is publicly available.

[1] Definition of PUE: https://en.wikipedia.org/wiki/Power_usage_effectiveness

[2] Source: https://www.google.com/about/datacenters/efficiency/


You can use steam from cooling to produce electricity just like nuclear plants do, right? :)


No, because computer chips don't operate at 100°C (let alone 300°C like a nuclear reactor). Low-temperature heat flows contain little usable work. Typically the best option is to dump the spare heat into the nearest body of water, or into the air.
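The thermodynamic limit here is the Carnot efficiency, 1 − T_cold/T_hot (absolute temperatures). A quick sketch with assumed temperatures (coolant at ~45 °C, ambient at 20 °C, nuclear steam at ~300 °C; these are illustrative, not measured values) shows why low-grade datacenter heat is barely worth extracting work from:

```python
def carnot_efficiency(t_hot_c: float, t_cold_c: float) -> float:
    """Maximum fraction of heat convertible to work between two reservoirs."""
    t_hot = t_hot_c + 273.15   # convert Celsius to kelvin
    t_cold = t_cold_c + 273.15
    return 1.0 - t_cold / t_hot

print(f"datacenter coolant (45 C): {carnot_efficiency(45, 20):.1%}")   # ~7.9%
print(f"nuclear steam (300 C):     {carnot_efficiency(300, 20):.1%}")  # ~48.9%
```

Real heat engines achieve only a fraction of the Carnot limit, so the practical yield from ~45 °C heat is close to nothing.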



