Over the last few months I have been thinking about data centers: how we power and cool them, and what we put in them. I’ve been looking at what the very largest consumers of IT infrastructure do with their data centers compared to industry norms. The first takeaway is that almost none of them conform to those norms. As an example, almost none of them use refrigeration, AC power, or conventional enterprise systems.
Try to find a SAN, blade servers, or 10G Ethernet in a Google data center – Google deliberately use commodity parts. You won’t find conventional chillers or CRAC units in the most recent Yahoo sites. Microsoft is building its newest data centers from containers made by Verari Systems.
Google have tried to coin a new phrase to describe what they do – the Warehouse-Scale Computer. It makes sense as a description for Google, because their approach is to ruthlessly take out costs, both capital and operational: they design data centers from the ground up, leaving out what they don’t need, from the mechanical and electrical systems right through to their computing, network and storage platforms. It is, however, not such a good generic name for normal enterprises that want to follow the same path.
I think we can categorize the phases that the most advanced enterprises have gone through as:
- Server Scale Computing – each computer is self-contained, with its own power supply and its own cooling, enclosed in its own cabinet.
- Blade Scale Computing – computers are delivered in small batches with some shared components such as cooling and power modules in a shared cabinet.
- Data Center Scale Computing – computers are delivered at extreme scale with components shared throughout: DC power and shared cooling, typically in a large specialist cabinet or a container.
My sense is that the largest scale consumers of IT infrastructure are breaking away from the pack: abandoning pizza-box servers and blade servers, moving towards clustered storage and commodity components, and turning away from hierarchical network designs – where most traffic traverses the core network at ultra-high speeds, with ultra-high costs to match – towards leveraging software as the cornerstone of reliability and scale.
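To make that last point concrete, here is a minimal sketch of the pattern, in Python. Everything in it is hypothetical – the replica addresses, the endpoint path and the fetch function are mine, not anything from Google or Yahoo – but it shows the shape of the idea: instead of buying reliability in the form of redundant hardware and an expensive core network, the software expects individual commodity nodes to fail and routes around them.

```python
import urllib.request

# Hypothetical replica endpoints. In a real deployment these would come
# from a cluster membership service rather than being hard-coded.
REPLICAS = [
    "http://storage-node-1:8080",
    "http://storage-node-2:8080",
    "http://storage-node-3:8080",
]

def fetch(key: str) -> bytes:
    """Try each replica in turn. Any single cheap node may be down,
    but the request succeeds as long as one replica answers."""
    last_error = None
    for node in REPLICAS:
        try:
            with urllib.request.urlopen(f"{node}/get/{key}", timeout=2) as resp:
                return resp.read()
        except OSError as err:  # connection refused, timeout, DNS failure...
            last_error = err    # note the failure and try the next node
    raise RuntimeError(f"all replicas failed for key {key!r}") from last_error
```

The specifics don’t matter; the point is where the failure handling lives. Once it sits in the application rather than the hardware, the servers, storage and network underneath it can all be commodity parts.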