These Internet-scale datacenters have really taken off in recent years. Last month the Open Compute community held its second Open Compute Summit, and part of that effort was the establishment of a foundation to guide the work as it moves forward; read more about that effort here. I haven't seen much technical information flowing from the Open Compute Summit, although James Hamilton of Amazon posted his slides online here.
Meanwhile (was this part of the summit, or independent?), the team at AnandTech has done some testing of the Open Compute server components; in their conclusion, they commend the Open Compute work as showing tremendous potential:
The Facebook Open Compute servers have made quite an impression on us. Remember, this is Facebook's first attempt to build a cloud server! This server uses very little power when running at low load (see our idle numbers) and offers slightly better performance while consuming less energy than one of the best general purpose servers on the market. The power supply power factor is also top notch, resulting in even more savings (e.g. power factor correction) in the data center.
While it's possible to look at the Open Compute servers as a "Cloud only" solution, we imagine anyone with quite a few load-balanced web servers will be interested in the hardware. So far only Cloud / hyperscale data center oriented players like Rackspace have picked up the Open Compute idea, but a lot of other people could benefit from buying this kind of "keep it simple" server in smaller quantities.
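The power factor point is worth unpacking: power factor is the ratio of real power (watts) to apparent power (volt-amps), and the closer it is to 1, the less current the facility's electrical infrastructure must carry for the same useful work. A minimal sketch of that arithmetic, using assumed illustrative numbers rather than measured figures from the AnandTech review:

```python
# Illustrative sketch of why power factor matters at datacenter scale.
# All figures below are assumptions for illustration, not measured values.

def apparent_power_va(real_power_w: float, power_factor: float) -> float:
    """Apparent power (VA) the facility must provision for a given real load."""
    return real_power_w / power_factor

server_load_w = 250.0    # assumed real power draw of one server
commodity_pf = 0.90      # assumed power factor of a typical commodity PSU
high_quality_pf = 0.99   # assumed near-unity power factor of a better PSU

va_commodity = apparent_power_va(server_load_w, commodity_pf)
va_high = apparent_power_va(server_load_w, high_quality_pf)

print(f"Apparent power at PF {commodity_pf}: {va_commodity:.1f} VA")
print(f"Apparent power at PF {high_quality_pf}: {va_high:.1f} VA")
print(f"Provisioning headroom freed per server: {va_commodity - va_high:.1f} VA")
```

Multiplied across thousands of load-balanced web servers, that per-server difference in apparent power translates into smaller (or fewer) transformers, circuits, and UPS units for the same compute.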
Lastly, since much of the activity in this area of computing has to do with power efficiency, let me draw your attention to this interesting work on power management in Android.
Cheaper, faster, and more power-efficient: the future of computing beckons!