Space would be a terrible environment for building a datacenter. The main goals of a datacenter are to make computation as cheap, as fast and as reliable as possible. Having the datacenter orbit the Earth would not help us accomplish any of these goals.
First off, building a datacenter in space would not be cheap. It costs around $25,000 to send a kilogram of equipment into a geostationary orbit. [1] So let's assume we were to use the Dell PowerEdge C1100. Each server costs $14,000 and weighs 18 kg. [2] Launching one server would therefore cost about 18 kg × $25,000/kg = $450,000, which means that for each server sent into orbit, you could buy 32 extra ones on Earth.
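(If you want to check that arithmetic, here is a minimal Python sketch using only the figures cited above; nothing beyond the quoted launch cost, price and mass goes into it.)

    # Launch-cost arithmetic for one PowerEdge C1100, using the
    # figures from [1] (launch cost) and [2] (price and mass).
    launch_cost_per_kg = 25_000      # USD per kg to geostationary orbit [1]
    server_price = 14_000            # USD per server [2]
    server_mass_kg = 18              # kg per server [2]

    launch_cost = launch_cost_per_kg * server_mass_kg      # $450,000
    extra_servers_on_earth = launch_cost // server_price   # 32
    print(f"Launch cost per server: ${launch_cost:,}")
    print(f"Ground servers you could buy instead: {extra_servers_on_earth}")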
Then there is the issue of cooling. Although outer space is really cold, its vacuum prevents the heat generated by the machines from being dissipated quickly. Controlling the temperature of such a datacenter would be a very interesting engineering challenge.
And then how would you power this datacenter? Converting the excess heat back into electricity could be an interesting option. But most likely, it would need a lot of solar panels. This would make the datacenter cheap to run once built, but the upfront costs would be enormous.
And we haven't talked about speed and reliability yet. Since the signal would need to travel about 35,000 km from geostationary orbit to reach us, communications between Earth and the datacenter would have significant delays. Even at the speed of light, the minimum round trip time would be about 250 milliseconds if we ignore all other possible sources of delay.
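As a rough check, here is that estimate in Python, using the standard ~35,786 km altitude of geostationary orbit and ignoring every other source of delay:

    # Idealised round-trip time to a geostationary datacenter:
    # straight up and back down at the speed of light, nothing else.
    GEO_ALTITUDE_KM = 35_786
    SPEED_OF_LIGHT_KM_S = 299_792

    round_trip_ms = 2 * GEO_ALTITUDE_KM / SPEED_OF_LIGHT_KM_S * 1000
    print(f"Minimum round trip time: ~{round_trip_ms:.0f} ms")   # ~240 ms

That's roughly a quarter of a second of dead time before the datacenter has even started working on a request.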
The hostile space weather would also make it pretty hard to run servers reliably. Radiation would degrade the electronics, cause bits to flip randomly and do all kinds of fun stuff to the equipment.
But... anyhow! Let's assume anyway that by some magical work of science and Google engineering, we figure out how to manufacture a datacenter directly in space for almost nothing by mining the Moon, discover some amazing thermoelectric generators with near 100% efficiency, and build space shields that block almost all radiation.
So back to our previous example: a high performance PowerEdge gives us up to about 300 GFLOPS of computing power, 192 GB of RAM and 12 TB of storage.
Now if we were to convert the total mass of the Moon (7.34767309 × 10²² kg) into one monstrous datacenter, this would give us about 4.0 × 10²¹ servers. That works out to a whopping 1.2 billion yottaFLOPS (or put differently, 1.2 × 10³³ FLOPS) of compute madness, 0.8 billion yottabytes of RAM and 49 billion yottabytes of storage. This monster would consume the equivalent of about 1% of the Sun's total power output.
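Here is the same back-of-the-envelope math in Python. The per-server specs are the PowerEdge figures above; the ~900 W full-load draw per server is my own assumption (not a quoted spec), and the Sun's total output is roughly 3.8 × 10²⁶ W.

    # The Moon-as-datacenter numbers, spelled out. Per-server specs are
    # the PowerEdge figures quoted above; the ~900 W draw per server is
    # an assumption, and the Sun's output is ~3.8e26 W.
    MOON_MASS_KG = 7.34767309e22
    SERVER_MASS_KG = 18
    SERVER_FLOPS = 300e9            # ~300 GFLOPS
    SERVER_RAM_BYTES = 192e9        # 192 GB
    SERVER_STORAGE_BYTES = 12e12    # 12 TB
    SERVER_POWER_W = 900            # assumption, not a quoted spec
    SUN_OUTPUT_W = 3.8e26
    YOTTA = 1e24

    servers = MOON_MASS_KG / SERVER_MASS_KG   # ~4.1e21
    print(f"Servers: {servers:.1e}")
    print(f"Compute: {servers * SERVER_FLOPS / YOTTA / 1e9:.1f} billion yottaFLOPS")
    print(f"RAM:     {servers * SERVER_RAM_BYTES / YOTTA / 1e9:.1f} billion yottabytes")
    print(f"Storage: {servers * SERVER_STORAGE_BYTES / YOTTA / 1e9:.0f} billion yottabytes")
    print(f"Power:   {servers * SERVER_POWER_W / SUN_OUTPUT_W:.1%} of the Sun's output")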
Thanks for playing along! But realize that one of the great reasons to put a datacenter in space is physical security. Another reason would be unparalleled data connectivity to the entire planet. But yes, it is a very harsh environment, and waste heat is difficult to dissipate. And of course launch costs are very high. The real reason I asked is because I think it's bloody good fun (and figured the Google folks would get a kick out of it).
Some follow-up questions: let's assume that we need to move 10^10 yottabytes from the MoonPC to the Earth. How do we do it? What's the fastest we could do it without transferring so much heat that it melts either end of the connection?
A heatsink works because there is some sort of medium that absorbs heat from the sink and carries it away. On Earth, we use air for this, sometimes with the help of a fan.
If you stick a heatsink on equipment in space, there's no air that can move the heat away, since space is mostly empty. You'll bleed off some through infrared radiation, but that's not going to be enough.
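To put a rough number on the radiation route: the Stefan-Boltzmann law gives how much heat a radiator panel can dump into vacuum. A minimal sketch, with the emissivity and panel temperature picked as illustrative assumptions:

    # Stefan-Boltzmann estimate of radiative cooling in vacuum.
    # Emissivity and panel temperature are illustrative assumptions.
    SIGMA = 5.670e-8       # Stefan-Boltzmann constant, W / (m^2 * K^4)
    emissivity = 0.9       # assumed: a good radiator coating
    panel_temp_k = 350     # assumed: ~77 degC radiator surface

    watts_per_m2 = emissivity * SIGMA * panel_temp_k ** 4
    print(f"Radiated power: ~{watts_per_m2:.0f} W per square metre")  # ~770 W/m^2

So a server dissipating a kilowatt or so would need more than a square metre of dedicated radiator surface, and that area scales up fast for a whole datacenter.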
[1] http://www.futron.com/upload/wysiwyg/Resources/Whitepapers/S...
[2] http://www.dell.com/us/enterprise/p/poweredge-c1100/pd#TechS...